TerrSet Tutorial
1987-2015
J. Ronald Eastman
Production: Clark University
www.clarklabs.org [email protected]
INTRODUCTION
The exercises of the Tutorial are arranged in a manner that provides a structured approach to the understanding of the fundamentals of
spatial analysis and modeling that the TerrSet system provides. The TerrSet Geospatial Monitoring and Modeling System comprises a
constellation of eight interdependent and integrated toolsets. These exercises explore all eight toolsets, which cover a wide range of topics in the
areas of GIS analysis, image processing, spatial modeling and earth science. The exercises are organized as follows:
GeOSIRIS
This exercise explores TerrSet's GeOSIRIS modeler, an integrated vertical application for national-level REDD planning that estimates the
impacts of alternative policies for REDD+ projects on deforestation, emission reductions, and revenue generation.
We recommend you complete the exercises in the order in which they are presented within each section, though this is not strictly necessary.
Knowledge of concepts presented in earlier exercises, however, is assumed in subsequent exercises. All users who are not familiar with the
TerrSet system should complete the first set of exercises entitled Using the TerrSet System. After this, a user new to GIS and Image Processing
might wish to complete the Introductory GIS and Image Processing exercise sections, then come back to the Advanced exercises at a later
time. Users familiar with the system should be able to proceed directly to the particular exercises of interest. In only a few cases are results
from one exercise used in a later exercise.
As you are working on these exercises, you will want to access the Program Modules section in the on-line Help System any time you
encounter a new module. When action is required at the computer, the section in the exercise is designated by a letter. Throughout most
exercises, numbered questions will also appear. These questions provide an opportunity for reflection and self-assessment on the concepts just
presented or operations just performed.
When working through an exercise, examine every result (even intermediate ones) by displaying it. If the result is not as expected, stop and
rethink what you have done. Geographical analysis can be likened to a cascade of operations, each one depending upon the previous one. As a
result, there are endless blind alleys, much like in an adventure game. In addition, errors accumulate rapidly. Your best insurance against this
is to think carefully about the result you expect and examine every product to see if it matches expectations.
The TerrSet Tutorial data can be downloaded from the Clark Labs website: www.clarklabs.org. Once downloaded and unzipped, data for the
Tutorial are in a set of folders, one for each Tutorial section as outlined above.
TUTORIAL 1
USING THE TERRSET SYSTEM
USING THE TERRSET SYSTEM EXERCISES
The TerrSet System Environment
Display: Layers and Group Files
Display: Layer Interaction Effects
Display: Surfaces -- Fly Through and Illumination
Display: Navigating Map Query
Map Composition
Palettes, Symbols, and Creating Text Layers
Data Structures and Scaling
Database Workshop: Working with Vector Layers
Database Workshop: Analysis and SQL
Database Workshop: Creating Text Layers / Layer Visibility
Data for the exercises in this section are in the \TerrSet Tutorial\Using TerrSet folder. The TerrSet Tutorial data can be downloaded from the
Clark Labs website: www.clarklabs.org.
EXERCISE 1-1
THE TERRSET SYSTEM ENVIRONMENT
Getting Started
To start the TerrSet system, double-click on the TerrSet application icon in the TerrSet Program Folder. This will load the TerrSet
system.
Once the system has loaded, notice that the screen has four distinct components. At the top, we have the main menu. Underneath we find the
tool bar of icons that can be used to control the display and access commonly used facilities. Below this is the main workspace, followed by
the status bar.
Depending upon your Windows setup, you may also have a Windows task bar at the very bottom of the screen. If the screen resolution of
your computer is somewhat low (e.g., 1024 x 768), you may wish to change your task bar settings to autohide.1 This will give you extra space
for display, always an essential commodity with a GIS.
Now move your mouse over the tool bar icons. Notice that a short text label pops up below each icon to tell you its function. This
is called a hint. Several other features of the TerrSet interface also incorporate hints.
TerrSet Explorer
Click on the TerrSet Explorer icon, the left-most tool bar icon. This option will launch the TerrSet Explorer utility.
TerrSet Explorer is a general purpose utility to manage and explore TerrSet files and projects. Use TerrSet Explorer to set your project
environment, manage your group files, review metadata, display files, and simply organize your data with such tools as copy, delete, rename,
and move commands. You can use TerrSet Explorer to view the structure of TerrSet file formats and to drag and drop files into TerrSet dialog
boxes. TerrSet Explorer is permanently docked to the left edge of the TerrSet desktop. It cannot be moved but it can be minimized and
horizontally resized whenever more workspace is required. We will explore the various uses of TerrSet Explorer in the exercises that follow.
1 This can be done from the START menu of Windows. Choose START, then SETTINGS, then Task bar. Click "always on top" off and "autohide" on. When you do
this, you simply need to move your cursor to the bottom of the screen in order to make the task bar visible.
Projects
With TerrSet Explorer open, select the Projects tab at the top of TerrSet Explorer. This option allows you to set the project
environment of your file folders. Make sure that the Editor pane is open at the bottom of the Projects tab. If you right-click
anywhere in the Projects form you will have the option to show the Editor. The Editor pane will show the working and resource
folders for each project. During the installation a Default project is created. Make sure that you have selected this project by
clicking on it. The result will have the radio button highlighted for that project.
A project is an organization of data files, both the input files you will use and the output files you will create. The most fundamental element
is the Working Folder. The Working Folder is the location where you will typically find most of your input data and will write most of the
results of your analyses.2 The first time TerrSet is launched, the Working Folder by default is named:
\\TerrSet Tutorial Data\Using TerrSet
If it is not set this way already, change the Working Folder to be the Using TerrSet folder.3 To change the Working Folder, click in the
Working Folder input box and either type in the location or select the browse button to the right to locate the Using TerrSet folder.
In addition to the Working Folder, you can also have any number of Resource Folders. A Resource Folder is any folder from which you can
read data, but to which you typically will not write data.
For this exercise, define one Resource Folder:
\\TerrSet Tutorial\Introductory GIS
If this is not correctly set, use the New Folder icon at the bottom of the Editor pane to specify the correct Resource Folder. Note that to
remove folders, you must highlight them in the list first and then click the Remove Folder icon at the bottom of Editor.
The project should now show \\TerrSet Tutorial\Using TerrSet as the Working Folder and \\TerrSet Tutorial\Introductory GIS as the
Resource Folder. Your settings are automatically saved in a file named DEFAULT.ENV (the .env extension stands for Project Environment
File). As new projects are created, you can always use Projects in TerrSet Explorer to re-load these settings.
2 You can always specify a different input or output path by typing that full path in the filename box directly or by using the Browse button and selecting another
folder.
3 During installation, the default location will be to the Public folder designated by Windows. This will usually be in a shared documents folder in Users or Documents
and Settings. Adjust these instructions accordingly.
The TerrSet system maintains your Project settings from one project to the next. Thus they will change only if they are
intentionally altered. As a consequence, there is no need to explicitly save your Project settings unless you expect to use several
projects and wish to have a quick way of alternating between them.
Now click the Files tab in TerrSet Explorer. You are now ready to start exploring the TerrSet system. We will discuss TerrSet
Explorer more in depth later, but from the Files tab you will see a list of all files in your working and resource folders.
The data for the exercises are installed in several folders. The introduction to each section of the Tutorial indicates which particular folder you
will need to access. Whenever you begin a new Tutorial section, change your project accordingly.
There are three ways to launch TerrSet module dialog boxes. The most commonly used modules have toolbar icons. Click the
Display icon to launch the DISPLAY Launcher dialog. Close the dialog by clicking the X in the upper right corner of the dialog
window. Now go to the Display menu and click on the DISPLAY Launcher menu entry. Close the dialog again. Finally, you can
access an alphabetical list of all the TerrSet modules with the Shortcut utility, located at the top of the TerrSet window. Shortcut
will stay open until you choose the Turn Shortcut Off command under the File Menu. Click the dropdown list arrow on Shortcut
and scroll down until you find DISPLAY Launcher, then click on it and click the Open Dialog button (green arrow to the right of
Shortcuts), or simply hit Enter. Note that you may also type the module name directly into the Shortcut box. In the Tutorial
Exercises, you will typically be instructed to find module names in their menu location to reinforce your knowledge of the way in
which a module is being used. The dialog box will be the same, however, no matter how it has been opened.
Notice first the three buttons at the bottom of the DISPLAY Launcher dialog. The OK button is used after all options have been
set and you are ready to have the module do its work. By default TerrSet dialogs are persistent -- i.e., the dialog does not disappear
when you click OK. It does the work, but stays on the screen with all of its settings in case you want to do a similar analysis. If you
would prefer that dialogs immediately close after clicking OK, you can go to the User Preferences option under the File menu and
disable persistent dialogs. (Note: having said this, DISPLAY Launcher is never persistent.)
If persistent dialogs are enabled, the button to the right of the OK button will be labeled as Close. Clicking on this both closes the
dialog and cancels any parameters you may have set. If persistent forms are disabled, this button will be labeled Cancel. However,
the action is the same -- Cancel always aborts the operation and closes the dialog.
The Help button can be used to access the context-sensitive Help System. You probably noticed that the main menu also has a
Help button. This can be used to access the TerrSet Help System at its most general level. However, accessing the Help button on
a dialog will bring you immediately to the specific Help section for that module. Try it now. Then close the Help window by
clicking the X button in its upper-right corner.
The Help System does not duplicate information in the manuals. Rather, it is a supplement, acting as the primary technical reference for
specific program modules. In addition to providing directions for the operation of a module and explaining its options, the Help System also
provides many helpful tips and notes on the implementation of these procedures in the TerrSet system.
Dialogs are primarily made up of standard Windows elements such as input boxes (the white boxes) in which text can be entered, radio
buttons (such as the file type radio button group), check boxes (such as those to indicate whether or not the map layer should be displayed
with a legend), buttons, and so on. However, TerrSet has incorporated some special dialog elements to facilitate your use of the system.
In DISPLAY Launcher, make sure the File Type indicates that you wish to display a raster layer. Then click the small button with
the ellipsis, just to the right of the left input box. This will launch the pick list. TerrSet uses this specially-designed selection tool
throughout the system.
The pick list displays the names of map layers and other data elements, organized by folders. Notice that it lists your Working Folder first,
followed by each Resource Folder. The pick list always opens with the Working Folder expanded and the Resource Folders collapsed. To
expand a collapsed folder, click on the plus sign next to the folder name. To collapse a folder, click on the minus sign next to the folder name.
A listed folder without a plus/minus symbol is an indication that the folder contains no files of the type required for that particular input box.
Note that you can also access other folders using the Browse button.
Collapse and expand the two folders. Since the pick list was invoked from an input box requiring the name of a raster layer, the
files listed are all the raster layers in each folder. Now expand the Working Folder. Find the raster layer named SIERRADEM and
click on it. Then click on the OK button of the pick list. Notice how its name is now entered into the input box on DISPLAY
Launcher and the pick list disappears.4
Note that double-clicking on a layer in the pick list will achieve the same result as above. Also note that double-clicking on an input box is an
alternate way of launching the pick list.
Now that we have selected the layer to be displayed, we need to choose an appropriate palette (a sequence of colors used in
rendering the raster image). In most cases, you will use one of the standard palettes represented by radio buttons. However, you
will learn later that it is possible to create a virtually infinite number of palettes. In this instance, the TerrSet Default Quantitative
palette is selected by default and is the palette we wish to use.
Notice that the autoscale option has been automatically set to Equal Intervals by the display system. This will be explained in
greater detail in a later exercise. However, for now it is sufficient to know that autoscaling is a procedure by which the system
determines the correspondence between numeric values in your image (SIERRADEM) and the color symbols in your palette.
The legend and title check boxes are self-explanatory. For this illustration, be sure that these check boxes are also selected and
then click OK. The image will then appear on the screen. This image is a Digital Elevation Model (DEM) of an area in Spain.
Move the mouse over the map window you just launched. Notice how the status bar continuously updates the column and row
position as well as the X and Y coordinate position of the mouse. Also notice what happens when the mouse is moved off of the
map window.
All map layers will display the X and Y positions of the mouse, coordinates representing the ground position in a specific geographic
reference system (such as the Universal Transverse Mercator system in this case). However, only raster layers indicate a column and row
reference (as will be discussed further below).
Also note the Representative Fraction (RF) on the left of the status bar. The RF expresses the current map scale (as seen on the screen) as a
fraction reduction of the true earth. For example, an RF = 1/5000 indicates that the map display shows the earth 5000 times smaller than it
actually is.
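To make the RF concrete, here is a small sketch in ordinary Python (not a TerrSet facility); the ground and screen widths below are hypothetical numbers chosen for illustration:

```python
def representative_fraction(ground_width_m: float, display_width_m: float) -> float:
    """RF = map distance / ground distance, with both widths in the same units."""
    return display_width_m / ground_width_m

# Hypothetical numbers: a 30 km wide study area drawn 0.3 m wide on screen
rf = representative_fraction(30_000, 0.3)
print(f"RF = 1/{round(1 / rf):,}")  # RF = 1/100,000
```

Zooming in makes the display cover less ground at the same screen size, so the denominator shrinks and the RF grows toward 1.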
Like the position fields, the RF field is updated continuously. To get a sense of this, click the icon marked Full Extent Maximized
(pause the cursor over the icons to see their names). Notice how the RF changes. Then click the Full Extent Normal icon. These
functions are also activated by the End and Home keys. Press the End key and then the Home key.
4 Note that when input filenames are chosen from the Pick List or typed without a full path, TerrSet first looks for the file in the Working Folder, then in each Resource
Folder until the file is found. Thus, if files with the same name exist in both the Working and Resource Folders, the file in the Working Folder will be selected.
You can set a specific RF by right-clicking in the image. Select Set Specific RF from the menu. A dialog will allow you to set a specific
RF. Clicking OK will display the image at this specified scale.
As indicated earlier, many of the tool bar icons launch module dialogs, just like the menu system. However, some of them are specifically
designed to access interactive features of the display system, such as the two you just explored. Two other interactive icons are the Measure
tools, both length and zone.
Click on the Measure Length icon located near the center of the top icons and represented by a ruler. Then, move the cursor into
the SIERRADEM image and left-click to begin measuring a length. As you move the cursor in any direction, an accompanying
dialog will record the length and azimuth along the length of the line. If you continue to left-click, you can add additional
segments that will add length to the original segment. A right-click of the mouse will end measuring.
Click on the Measure Zone icon located to the right of the Measure Length icon. Then click anywhere in the image and move the
mouse. As you drag the mouse, a circle will be drawn with a dialog showing the radius and area of the circle. A right-click will end
this process.
Menu Organization
As distributed, the main menu has nine sections: File, IDRISI GIS Analysis, IDRISI Image Processing, Land Change Modeler, Habitat and
Biodiversity Modeler, GeOSIRIS, Ecosystem Services Modeler, Earth Trends Modeler, and Climate Change Adaptation Modeler. Collectively,
they provide access to over 300 analytical modules, as well as a host of specialized utilities and vertical applications. The File, IDRISI GIS
Analysis, IDRISI Image Processing items will open to your typical pull-down menu to expose more items. The remaining menu items will
launch vertical applications that are theme oriented. Each is explored later. For now we will explore the first three menu items that contain
the majority of the analytical functionality in TerrSet.
As the name suggests, the File menu contains a series of utilities for the import, export and organization of data files. However, as is
traditional with Windows software, the File menu is also where you set user preferences.
Open the User Preferences dialog from the File menu. We will discuss many of these options later. For now, click on the Display
Settings tab and then the Revert to Defaults button to ensure that your settings are set properly for this exercise. Click OK.
The Reformat submenu under the File menu contains a series of modules for the purpose of converting data from one format to another. It is
here, for example, that one finds routines for converting between raster and vector formats, changing the projection and grid reference system
of map layers, generalizing spatial data and extracting subsets.
The IDRISI GIS Analysis and the IDRISI Image Processing menus contain the majority of modules. The GIS Analysis menu is two to four
levels deep, with its primary organization at level two. The first four menu entries at this second level represent the core of GIS analysis:
Database Query, Mathematical Operators, Distance Operators and Context Operators. The others represent major analytical areas: Statistics,
Decision Support, Change and Time Series Analysis, and Surface Analysis. The IDRISI Image Processing menu includes ten submenus.
The Model Deployment Tools menu includes tools and facilities for constructing models as well as information for calling TerrSet capabilities
from user-written programs.
Go to the Surface Analysis submenu under the IDRISI GIS Analysis main menu and explore the four submenus there. Note that
most of the menu entries that open module dialog boxes (i.e., the end members of the menu trees) are indicated with capital
letters but some are not. Those designated with capital letters can be used as procedures with the IDRISI Macro Language (IML).
Now click on the CONTOUR menu entry in the Feature Extraction submenu to launch the CONTOUR module.
From the CONTOUR dialog, specify SIERRADEM as the input raster image. (Recall that the pick list may be launched with the
Pick List button, or by double-clicking on the input box.)
Enter the name CONTOURS as the output vector file. For output files, you cannot invoke the pick list to choose the filename
because we are creating a new file. (For output filename boxes, the pick list button allows you to direct the output to a folder other
than the Working Folder. You also can see a list of filenames already present in the Working Folder.)
Change the input boxes to specify a minimum contour value of 400 and a maximum of 2000, with a contour interval of 100. You
can leave the default values for the other two options. Enter a descriptive title to be recorded in the documentation of the output
file. In this case, the title "100 m Contours from SIERRADEM" would be appropriate. Click OK. Note that the status bar shows the
progress of this module as it creates the contours in two passes: an initial pass to create the basic contours and a second pass to
generalize them. When the CONTOUR module has finished, TerrSet will automatically display the result.
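As a quick sanity check on these settings, the contour levels the module should generate can be enumerated; this is just arithmetic on the dialog values, not TerrSet code:

```python
# Contour levels implied by the dialog settings above
minimum, maximum, interval = 400, 2000, 100  # all values in meters
levels = list(range(minimum, maximum + 1, interval))

print(len(levels))            # 17 contour levels
print(levels[0], levels[-1])  # 400 2000
```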
The automatic display of analytical results is an optional feature of the System Settings of the User Preferences dialog (under the File menu).
The procedures for changing the Display Settings will be covered in the next exercise.
Move your cursor over the CONTOURS map window. Note that it does not display a column and row value in the status bar.
This is because CONTOURS is a vector layer.
To better appreciate the difference between raster and vector layers, close the CONTOURS map window by clicking on the X
button on its upper-right corner. Then, with the SIERRADEM display active, click the Add Layer option of the Composer dialog
and specify CONTOURS as the vector layer and Outline Black as the symbol file. Click OK to add this layer to your composition.
Composer is one of the most important tools you will use in the construction of map compositions. It allows you to add and
remove layers, change their hierarchical position and symbolization, and ultimately save and print map compositions. Composer
will be explored in far greater depth in the next exercise. By default, Composer will always be displayed on the right-side of the
desktop when any map window is open.
Along with Composer, the navigation tools on the tool bar (which are also available on the keyboard and mouse) are essential for
manipulating the map window. The tool bar has several icons for navigating around a map layer. There are icons for panning,
zooming and changing the size or extent of the map window. These functions are duplicated by keyboard and mouse operations.
The zoom in and zoom out icons not only zoom, but also center the image depending on where you place your cursor. The PgUp
and PgDn keys on the keyboard are similar but without the recentering. The Full Extent Normal and Full Extent Maximized icons
are duplicated by the Home and End keys. With the keyboard you can also pan using the arrow keys and with a properly
supported mouse, you can zoom in and out using the mouse wheel.
Now pan to an area of interest and zoom in until the cell structure of the raster image (SIERRADEM) becomes evident. As you
can see, the raster image is made up of a fine cellular structure of data elements (that only become evident under considerable
magnification). These cells are often referred to as pixels. Note, however, that at the same scale at which the raster structure
becomes evident, the vector contours still appear as thin lines.
In this instance, it would seem that the vector layer has a higher resolution, but looks can be deceiving. After all, the vector layer was derived
from the raster layer. In part, the continuity of the connected points that make up the vector lines gives this impression of higher resolution.
The generalization stage also served to add many additional interpolated points to produce the smooth appearance of the contours. The
chapter Introduction to GIS in the TerrSet Manual discusses raster and vector GIS data structures.
Click on the DISPLAY Launcher icon and specify the raster layer named SIERRA234. Note that the palette options are disabled in
this instance because the image represents a 24-bit full color image5 (in this case, a satellite image created from bands 2, 3 and 4 of
a Landsat scene). Click OK.
Now choose the ORTHO option from the DISPLAY submenu under the File menu. Specify SIERRADEM as the surface image
and SIERRA234 as the drape image. Since this is a 24-bit image, you will not need to specify a palette. Keep the default settings for
all other parameters except for the output resolution. Choose one level below your display system's resolution.6 For example, if
your system displays images at 1024 x 768, choose 800 x 600. Then click OK. When the map window appears, press the End key to
maximize the display.
5 A 24-bit image is a special form of raster image that contains the data for three independent color channels which are assigned to the red, green and blue primaries of
the display system. Each of these three channels is represented by 256 levels, leading to over 16 million displayable colors. However, the ability of your system to
resolve this image will depend upon your graphics system. This can easily be determined by minimizing TerrSet and clicking the right mouse button on the Windows
desktop. Then choose the Settings tab of the Display Properties dialog. If your system is set for 24-bit true color, you are seeing this image at its fullest color
resolution. However, it is as likely as not that you are seeing this image at some lower resolution. High color settings (15 or 16 bit) look almost indistinguishable from
24-bit displays, but use far less memory (thus typically allowing a higher spatial resolution). However, 256 color settings provide quite poor approximations.
Depending upon your system, you will probably have a choice of settings in which you trade off color resolution for spatial resolution. Ideally, you should choose a
24-bit true color or 16-bit high color option and the largest spatial resolution available. A minimum of 800 x 600 spatial resolution is recommended, but 1024 x 768
or better is more desirable.
6 If you find that the resulting display has gaps that you find undesirable, choose a lower resolution. In most instances, you will want to choose the highest resolution
that produces a continuous display. The size of the images used with ORTHO (number of columns and rows) influences the result, so in one case, the best result may
be obtained with one resolution, while with another dataset, a different resolution is required.
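The color counts mentioned in the footnote on 24-bit images follow directly from the bits per channel:

```python
# Each of the red, green and blue channels has 256 levels (8 bits),
# so a 24-bit display distinguishes 256**3 colors
colors_24bit = 256 ** 3
print(f"{colors_24bit:,}")  # 16,777,216 -- "over 16 million"

# A 15-bit high color setting keeps only 5 bits (32 levels) per channel
colors_15bit = (2 ** 5) ** 3
print(f"{colors_15bit:,}")  # 32,768
```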
The three-dimensional (i.e., orthographic) perspective offered through ORTHO can produce extremely dramatic displays and is a powerful
tool for visual analysis. Later we will explore another module that not only produces three dimensional displays, but also allows you to fly
through the model!
The rest of the exercises in this section of the Tutorial focus primarily on the elements of the Display System.
Housekeeping
As you are probably now beginning to appreciate, it takes little time before your workspace is filled with many windows. Go to the Window
List menu. Here you will find a list of all open dialogs and map windows. Clicking on any of these will cause that window to come to the top.
In addition, note that you can close groups of open windows from this menu. Choose Close All Windows to clean off the screen for the next
exercise.
EXERCISE 1-2
DISPLAY: LAYERS AND GROUP FILES
The digital representation of spatial data requires a series of constituent elements, the most important of which is the map layer. A layer is a
basic geographic theme, consisting of a set of similar features. Examples of layers include a roads layer, a rivers layer, a land use layer, a census
tract layer, and so on. Features are the constituents of map layers, and are the most fundamental geographic entities: the equivalent of
molecules, which are in turn compounds of more basic atomic features such as nodes, vertices and lines.
At a higher level, layers can be understood to be the basic building blocks of maps. Thus a map might be composed of a state boundaries layer,
a forest lands layer, a streams layer, a contours layer and a roads layer, along with a variety of ancillary map components such as legends,
titles, a scale bar, north arrow, and the like.
With traditional geographic representations, the map is the only entity that we can interact with. However, in GIS, any of these levels are
available to us. We can focus the display on specific features, isolated layers, or we can view any of a series of multi-layer custom-designed
maps. It is the layer, however, that is unquestionably the most important of these. Layers are not only the basic building blocks of maps, but
they are also the basic elements of geographic analysis. They are the variables of geographic models. Thus our exploration of GIS logically
starts with map layers, and the display system that allows us to explore them with the most important analytical tool at our disposal: the
visual system.
Make sure your main Working Folder is set to Using TerrSet. Then click on the DISPLAY Launcher icon on the tool bar. Note
that separate options are included for raster and vector layers, as well as a map composition option (which we will explore in a
later exercise). Despite the fact that their representational structures are very different, your means of displaying and interacting
with them is identical.
Display the vector layer named SIERRAFOREST. Select the user-defined symbol option, invoke the pick list for the symbol files
and choose the symbol file Forest. Turn the title and legend options off. Click OK.
This is a vector layer of forest stands for the Sierra de Gredos area of Spain. We examined a DEM and color composite image of
this area in the previous exercise. Vector layers are composed of points, which are linked to form lines and areal boundaries of
polygons.1 Use the zoom (PgUp and PgDn) and pan keys (the arrows) to focus in on some of these forest polygons. If you zoom in
far enough, the vector structure should become quickly apparent.
Press the Home key to restore the original display and then the End key to maximize the display of the layer. Click on a forest
polygon. The polygon becomes highlighted and its ID is shown near the cursor. Click on several other forest polygons. Also click
on some of the white areas between these polygons. Now select the Identify icon on the toolbar.
Continue to click on polygons. Note the information presented in the Identify box to the right of the map.
What should be evident here is that vector representations are feature-oriented: they describe features, entities with distinct boundaries,
and there is nothing between these features (the void!). Contrast this with raster layers.
Click on the Add Layer button on Composer. This dialog is a modified version of DISPLAY Launcher with options to add either
an additional raster or vector layer to the current composition. Any number of layers can be added in this way. In this instance,
select the raster layer option and choose SIERRANDVI from the pick list options. Then choose the NDVI palette and click OK.
This is a vegetation biomass image, created from satellite imagery using a simple mathematical model.2 With this palette, greener
areas have greater biomass. Areas with progressively less biomass range from yellow to brown to red. This is primarily a sparse
dry forest area.
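The simple mathematical model behind this layer is the Normalized Difference Vegetation Index (NDVI), covered in detail in the Vegetation Indices chapter. As an illustrative sketch only (not TerrSet's implementation; the sample reflectance values are invented):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense green vegetation; values near zero or
    below indicate bare soil, rock, or open water."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)  # guard empty cells
    return out

# Hypothetical 2x2 red and near-infrared reflectance grids
red = np.array([[30.0, 60.0], [10.0, 50.0]])
nir = np.array([[90.0, 60.0], [70.0, 50.0]])
print(ndvi(red, nir))  # e.g. (90 - 30) / (90 + 30) = 0.5 in the top-left cell
```

In the NDVI palette, cells with higher index values would be drawn in progressively greener colors.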
Notice how this raster layer has completely covered over the vector layer. This is because it is on top and it contains no empty
space. To confirm that both layers are actually there, click on the check mark beside the SIERRANDVI layer in the Composer
dialog. This will temporarily turn its visibility off, allowing you to see the layer below it.
Make the raster layer visible again by clicking to the left of the filename. Raster layers are composed of a very fine matrix of cells
commonly called pixels,3 stored as a matrix of numeric values, but represented as a dense grid of varying colored rectangles.4
Zoom in with the PgDn key until this raster structure becomes apparent.
Raster layers do not describe features in space, but rather the fabric of space itself. Each cell describes the condition or character of
space at that location, and every cell is described. Since the Identify tool is still on, first click on the SIERRANDVI filename on
Composer (to select it for inquiry) then click onto a variety of cells with the cursor. Notice how each and every cell contains a
value. Consequently, when a raster layer is in a composition, we generally cannot see through to any layers below it. Conversely,
this is generally not the case with vector. However, the next exercise will explore ways in which we can blend the information in
layers and make background areas transparent.
1. Areal features, such as provinces, are commonly called polygons because the points which define their boundaries are always joined by straight lines, thus producing
a multi-sided figure. If the points are close enough, a linear or polygonal feature will appear to have a smooth boundary. However, this is only a visual appearance.
2. NDVI and many other vegetation indices are discussed in detail in the chapter Vegetation Indices in the TerrSet Manual as well as Tutorial Exercise 5-7.
3. The word pixel is a contraction of the words picture and element. Technically a pixel is a graphic element, while the data value which underlies it is a grid cell value.
However, in common parlance, it is not unusual to use the word pixel to refer to both.
4. Unlike most raster systems, TerrSet does not assume that all pixels are square. By comparing the number of columns and rows against the coordinate range in X and
Y respectively, it determines their shape automatically, and will display them either as squares or rectangles accordingly.
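The automatic shape check described in the last footnote is simple arithmetic. A sketch of the comparison (the function name is invented for illustration, not a TerrSet API):

```python
def cell_shape(min_x, max_x, min_y, max_y, columns, rows):
    """Derive cell dimensions from the coordinate range and grid size --
    the comparison TerrSet is described as making automatically."""
    cell_w = (max_x - min_x) / columns
    cell_h = (max_y - min_y) / rows
    shape = "square" if abs(cell_w - cell_h) < 1e-9 else "rectangular"
    return cell_w, cell_h, shape

# 100 columns spanning 1000 m and 50 rows spanning 500 m: 10 m square cells
print(cell_shape(0, 1000, 0, 500, 100, 50))  # (10.0, 10.0, 'square')
```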
Change the position of the layers so that the vector layer is on top. To do this, click the name of the vector layer
(SIERRAFOREST) in Composer so that it becomes highlighted. Then press and hold the left mouse button down over the
highlighted bar and drag it until the pointer is over the SIERRANDVI filename and it becomes highlighted, then release the
mouse button. This will change its position.
With the vector layer on top, notice how you can see through to the layer below it wherever there is empty space. However, the
polygons themselves obscure everything behind them. This can be alleviated by using a different form of symbolization.
Select the SIERRAFOREST layer in Composer. Then click on the Layer Properties button. Layer Properties, as the name suggests,
displays some important details about the selected (highlighted) layer, including the palette or symbol file in use.
You have two options to change the symbol file used to display the SIERRAFOREST layer. One would be to click on the pick list
button and select a symbol file, as we did the first time. However, in this case, we are going to use the Advanced Palette/Symbol
Selection tool. Click that particular button--it is just below the symbol file input box.
The Advanced Palette/Symbol Selection tool provides quick access to over 1300 palette and symbol files. The first decision you
need to make is whether the data express quantitative variations (such as with the NDVI data), qualitative differences (such as
land cover categories that differ in kind rather than quantity) or simple set membership depicted with a uniform symbolization.
In our case, the latter applies, so click on the None (uniform) option. Then select the cross-stripe symbol type (x stripe)
and a blue color logic. Notice that there are four blue color options. Any of these four can be selected by clicking on the button
that illustrates the color sequence. Try clicking on these buttons and note what happens in the input box -- the symbol filename
changes! Thus all you are doing with this interface is selecting symbol files that you could also choose from a pick list. Ultimately,
click on the darkest blue option (the first button on the right) and then click on OK. This returns you to Layer Properties. You can
also click OK here.
Unlike the solid polygon fill of the Forest symbol file, the new symbol file you selected uses a cross-hatch pattern with a clear
background. As a result, we can now see the full layer below. In the next exercise you will learn about other ways of blending or
making layers transparent.
From the steps above, we can clearly see that vector and raster layers are different. However, their true relative strengths are not yet apparent.
Over the course of many more exercises, we will learn that raster layers provide the necessary ingredients to a large number of analytical
operations: the ability to describe continuous data (such as the continuously varying biomass levels in the SIERRANDVI image), a simple
and predictable structure and surface topology that allows us to model movements across space, and an architecture that is inherently
compatible with that of computer memory. For vector layers, the real strength of their structure lies in the ability to store and manipulate data
for collections of layers that apply to the features described.
Group Files
In this section, we will begin an exploration of group files. In TerrSet, a group file is a collection of files that are specifically associated with
each other. Group files are associated with raster layers and with signature files. A group file, depending on the type, will have a specific
extension, but it is always a text file that lists the files in the group. There are two types of raster group files: raster and time series files
with .rgf and .ts extensions respectively. Signature group files are of two types, multispectral signature and hyperspectral signature group files
with .sgf and .hgf extensions respectively. All group files are created using TerrSet Explorer.
Open TerrSet Explorer from the File menu. By default TerrSet Explorer opens to the
Files tab displaying all the filtered files in the Working and Resource folders. Like the
pick list, you can display files in any of the folders by scrolling and clicking the
appropriate folder name. Make sure you are in the Using TerrSet folder. To create a
raster group file we will select the necessary files and then right-click to create this file.
Select each of the following files in turn. You may multi-select files by holding down the
shift key to select several files listed together or by holding down the control key to
select several files individually.
SIERRA1
SIERRA2
SIERRA3
SIERRA4
SIERRA5
SIERRA7
SIERRA234
SIERRA345
SIERRADEM
SIERRANDVI
If you make any mistakes, simply click the file to highlight or remove the highlight. If it
is highlighted, it is selected. Then, right-click in the Files pane and choose the
Create\Raster group option from the menu. By default the name given to this new
group file is RASTER GROUP.RGF. The files contained in the raster group will also be
displayed in TerrSet Explorer. Change the name of the raster group file to SIERRA by
right-clicking on the RASTER GROUP.RGF filename and select Rename.
By default, the Metadata pane should be visible in TerrSet Explorer. If it is not, right
click in the Files pane and select Metadata. Then when you select the SIERRA group file, Metadata will show the files contained in
this group and their order. In most cases order is not important, but if it is as in the case of Time Series analysis, you can always
change the order in Metadata.
Raster group files provide a range of powerful capabilities, including the ability to provide tabular summaries about the characteristics of any
location.
Bring up DISPLAY Launcher and select the raster layer option. Then click on the pick list button. Notice that your SIERRA group
appears with a plus sign, as well as the individual layers from which it was formed. Click on the plus sign to list the members of
the group and then select the SIERRA345 image. You should now see the text "sierra.sierra345" in the input box. Since this is a 24-bit composite, you can now click OK without specifying a palette (this will be explained further in a later exercise). This is a color
composite of Landsat bands 3, 4 and 5 of the Sierra de Gredos area. Leave it on the screen for the next section.
With raster groups, the individual layers exist independent of the group. Thus, to display any one of these layers we can specify it either with
its direct name (e.g., SIERRA345) or with its group name attached (e.g., SIERRA.SIERRA345). What is the benefit, then, of using a group?
We will need to work through several exercises to fully answer this question. However, to get a sense, invoke the Identify tool
from the toolbar. Then move the mouse and use the left mouse button to click on various pixels around the image and look at the
Identify grid box to the right of the map.
Identify Mode allows you to inspect the value of any specific pixel for any map layer or across map layers. See the section on Display:
Navigating Map Query.
To display SIERRADEM from TerrSet Explorer, double-click on the filename. The map layer will appear on the TerrSet Desktop.
You can also display a member of a group file by double-clicking on the raster group file to expose the grouped files, then again
double-click on the file to display. The resulting file will be displayed with the dot logic in the filename, for example,
SIERRA.SIERRADEM.
When displaying layers from TerrSet Explorer you will have no control over their initial display characteristics, unlike DISPLAY Launcher.
However, once a layer is displayed you can alter its display from Layer Properties in Composer. As we will see in the next section, TerrSet
Explorer can also be used to Add Layers to map compositions, just as in Composer.
EXERCISE 1-3
DISPLAY: LAYER INTERACTION EFFECTS
As we have seen, map compositions are formed from stacking a series of layers in the same map window using Composer. By default, the
backgrounds of vector layers are transparent while those of raster layers are opaque. Thus, adding a raster layer to the top of a composition
will, by default, obscure the layers below. However, TerrSet provides a number of multi-layer interaction effects which can modify this action
to create some exciting display possibilities.
Blends
If your workspace contains any existing windows, clean it off by using the Close All Windows option from the Window List
menu. Then use DISPLAY Launcher to view the image named SIERRADEM using the Default Quantitative palette. The colors
in this image are directly related to the height of the land. However, it does not convey well the nature of the relief. Therefore we
will blend in some hillshading to give a sense of the topography.
First, go to the Surface Analysis section of the GIS Analysis menu and then the Topographic Variables submenu to select
HILLSHADE. This option accesses the SURFACE module to create hillshading from your digital elevation model. Specify
SIERRADEM as the elevation model and SIERRAHS as the output. Leave the sun azimuth and elevation values at their default
values and simply click OK.
The effect here is clearly dramatic! To create this by hand would have taken the skills of a talented topographer and many weeks
of painstaking artistic rendition using a tool such as an air brush. However, through illumination modeling in GIS, it takes only
moments to create this dramatic rendition.
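The SURFACE module's exact algorithm is internal to TerrSet, but the underlying idea is standard Lambertian shading: each cell's brightness is the cosine of the angle between the sun vector and the local surface normal. A rough NumPy sketch of that idea (an illustration, not TerrSet's code; the 315/45 degree defaults are the common convention):

```python
import numpy as np

def hillshade(dem, cell_size=30.0, azimuth_deg=315.0, elevation_deg=45.0):
    """Lambertian hillshading: brightness is the cosine of the angle
    between the sun vector and each cell's surface normal (0..1)."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass to math angle
    alt = np.radians(elevation_deg)
    dz_dy, dz_dx = np.gradient(np.asarray(dem, dtype=float), cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# A flat surface is uniformly lit at sin(sun elevation): sin(45 deg) = 0.707
print(hillshade(np.zeros((4, 4)))[0, 0])
```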
Our next step is to blend this with our digital elevation model. Remove the hillshaded image from the screen by clicking the X in
its upper-right corner. Then click onto the banner of the map window containing SIERRADEM and click Add Layer in
Composer. When the Add Layer dialog appears, click on Raster as the layer type and indicate SIERRAHS as the image to be
displayed. For the palette, select Greyscale.
Notice how the hillshaded image obscures the layer below it. We will move the SIERRADEM layer so it is on top of the
hillshading layer by dragging it1 with the mouse so it is at the bottom position in Composer's layer list. At this point, the DEM
should be obscuring the hillshading.
Now be sure SIERRADEM is highlighted in Composer (click on its name if it isn't) and then click the Blend button on Composer.
The Blend button blends the color information of the selected layer 50/50 with that of the assemblage of visible elements below it
in the map composition. The Layer Properties button contains a visibility dialog that allows other proportions to be used (such as
60/40, for example). However, a 50% blend is typically just right. Note that the blend can be removed by clicking the Blend button
a second time while that layer is highlighted in Composer. This application is probably the most common use of blend -- to
include topographic hillshading. However, any raster layer can be blended.2
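Numerically, a 50/50 blend is just a per-channel average of colors. A sketch of the idea (an invented function for illustration, not Composer's internals):

```python
import numpy as np

def blend(top_rgb, bottom_rgb, proportion=0.5):
    """Mix the top layer's color with what is visible below it.
    proportion=0.5 matches Composer's default 50/50 blend; other values
    (e.g. 0.6) correspond to the Layer Properties visibility options."""
    top = np.asarray(top_rgb, dtype=float)
    bottom = np.asarray(bottom_rgb, dtype=float)
    return (proportion * top + (1.0 - proportion) * bottom).astype(np.uint8)

# A green elevation tint blended over a mid-gray hillshade pixel
print(blend([0, 160, 0], [128, 128, 128]))  # [ 64 144  64]
```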
Vector layers cannot be blended directly. However, they can be affected by blends in raster layers visually above them in the
composition. To appreciate this, click the Add Layer button on Composer and specify the vector layer named CONTOURS that
you created in the first exercise. Then click on the Advanced Palette/Symbol Selection tab. Set the Data Relationship to None
(uniform), the Symbol Type to Solid, and the Color Logic to Blue. Then click on the last choice to select LineSldUniformBlue4 and
click OK. As you can see, the contours somewhat dominate the display. Therefore drag the CONTOURS layer to the position
between SIERRAHS and SIERRADEM. Notice how the contours appear in a much more subtle color that varies between
contours. The reason for this is that the color from SIERRADEM has now blended with that of the contours as well.
Before we go on to consider transparency, let's make the color of SIERRADEM coordinate with the contours. First be sure that
the SIERRADEM layer is highlighted in Composer by clicking onto its name. Then click the Layer Properties button. In the
Display Min/Max Contrast Settings input boxes type 400 for the Display Min and 2000 for the Display Max. Then change the
Number of Classes to 16 and click the Apply button, followed by OK. Note the change in the legend as well as the relationship
between the color classes and the contours. Keep this composition on the screen for use in the next section.
Transparency
Let's now define the lakes and reservoirs. Although we don't have direct data for this, we do have the near-infrared band from a
Landsat image of the region. Near-infrared wavelengths are absorbed very heavily by water. Thus open water bodies tend to be
quite distinctive on near-infrared images. Click onto the DISPLAY Launcher icon and display the layer named SIERRA4. This is
the Landsat Band 4 image. Use Identify to examine pixel values in the lakes. Note that they appear to have reflectance values less
than 30. Therefore it would appear that we can use this threshold to define open water areas.
Click on the RECLASS icon on the toolbar. Set the type of file to be reclassified to Image and the classification type to User-Defined. Set the input file to SIERRA4 and the output file to LAKES. Set the reclass parameters to assign:
a 1 to values from 0 to just less than 30, and
a 0 to all remaining values (30 and higher). Then click OK.
1. To drag it, place the mouse over the layer name and press and hold the left mouse button down while you move the mouse to the new position where you want the
layer to be. Then release the left mouse button and the move will be implemented.
2. Vector layers cannot be the agents of a blend. However, they can be affected by blends in raster layers above them, as will be demonstrated in this exercise.
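The RECLASS rule above is a simple threshold. As an illustrative sketch of the same logic (the threshold of 30 comes from the pixel inspection above):

```python
import numpy as np

def reclass_water(band, threshold=30):
    """Assign 1 where near-infrared reflectance is below the threshold
    (likely open water, since water absorbs near-infrared strongly) and
    0 everywhere else -- the same rule entered in RECLASS."""
    return (np.asarray(band) < threshold).astype(np.uint8)

# Hypothetical near-infrared values: the low cells flag as water
nir = np.array([[12, 45], [29, 200]])
print(reclass_water(nir))  # cells 12 and 29 become 1, the rest 0
```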
Now use the Add Layer button on Composer to add the raster layer LAKES. Again use the Advanced Palette/Symbol Selection tab
and set the Data Relationship to None (uniform), the Color Logic to Blue and then click the third choice to select UniformBlue3.
Clearly there is a problem here -- the LAKES layer obscures everything below it. However, this is easily remedied. Be sure the
LAKES layer is highlighted in Composer and then click the right-most of the small buttons above Add Layer.
This is the Transparency button. It makes all pixels assigned to color 0 in the prevailing palette transparent (regardless of what
that color is). Note that a layer can be made both transparent and blended -- try it! As with the blend effect, clicking the
Transparent button a second time while a transparent layer is highlighted will cause the transparency effect to be removed.
Composites
In the first exercise, you examined a 24-bit color composite layer, SIERRA234. Layers such as this are created with a special module named
COMPOSITE. However, COMPOSITE images can also be created on the fly through the use of Composer. We will explore both options here.
First remove any existing images or dialogs. Then use DISPLAY Launcher to display SIERRA4 using the Greyscale palette. Then
press the r key on the keyboard. This is a shortcut that launches the Add Layer dialog from Composer, set to add a raster layer
(note that you can also use the shortcut v to bring up Add Layer set to add a vector layer). Specify SIERRA5 as the layer and
again use the Greyscale palette. Then use the r shortcut again to add SIERRA7 using the Greyscale palette. At this point, you
should have a map composition containing three images, each obscuring the other.
Notice that the small buttons above Add Layer in Composer include three with red, green and blue colors. Be sure that SIERRA7
is highlighted in Composer and then click on the Red button. Then highlight SIERRA5 in Composer (i.e., click onto its name) and
then click the Green button. Finally, highlight SIERRA4 in Composer and click the Blue button.
Any set of three adjacent layers can be formed into a color composite in this way. Note that it was not important that they had a
Greyscale palette to start with; any initial palette is fine. In addition, the layers assigned to red, green and blue can be in any
order. Finally, note that as with all of the other buttons in this row on Composer, clicking it a second time while that layer is
highlighted causes the effect to be removed.
Creating composites on the fly is very convenient, but not necessarily very efficient. If you are going to be working with a particular
composite often, it is much easier to merge the three layers into a single 24-bit color composite layer. 24-bit composite layers have a special
data type, known as RGB24 in TerrSet. These are TerrSet's equivalent of the same kind of color composite found in BMP, TIFF and JPG files.
Open the COMPOSITE module, either from the Display menu or from its toolbar icon. Here we can create 24-bit composite images. Specify
SIERRA4, SIERRA5 and SIERRA7 as the blue, green and red bands, respectively. Call the output SIERRA457. We will use the default settings
to create a 24-bit composite with original values and stretched saturation points with a default saturation of 1%. Click OK.
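The merge COMPOSITE performs amounts to stacking three bands into one three-channel array. TerrSet's RGB24 files have their own on-disk format, so this sketch only illustrates the stacking itself:

```python
import numpy as np

def composite_rgb(blue, green, red):
    """Stack three single-band images into one 24-bit RGB array.
    Argument order mirrors the COMPOSITE dialog: blue, green, red."""
    return np.dstack([np.asarray(red, dtype=np.uint8),
                      np.asarray(green, dtype=np.uint8),
                      np.asarray(blue, dtype=np.uint8)])

# Hypothetical 2x2 bands standing in for SIERRA4, SIERRA5 and SIERRA7
rgb24 = composite_rgb(np.full((2, 2), 10), np.full((2, 2), 20), np.full((2, 2), 30))
print(rgb24.shape)  # (2, 2, 3): one red, green and blue value per cell
```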
The issue of scaling and saturation will be covered in more detail in a later exercise. However, to get a quick sense of it, create another
composite but use a temporary name for the output and use the simple linear option. To create a temporary output name, simply double-click
in the output name box. This will automatically generate a name beginning with the prefix tmp such as TMP000.
Notice how the result is much darker. This is caused by the presence of very bright, isolated features. Most of the image is occupied by
features that are not quite so bright. Using simple linear scaling, the available range of display brightnesses on each band is linearly applied to
cover the entire range, including the very bright areas. Since these isolated bright areas are typically very small (in many cases, they can't even
be seen), we are sacrificing contrast in the main brightness region in order to display them. A common form of contrast enhancement, then,
is to set the highest display brightness to a lower scene brightness. This has the effect of saturating the brightest areas (i.e., assigning a range of
scene brightnesses to the same display brightness), with the positive impact that the available display brightnesses are now much more
advantageously spread over the main group of scene brightnesses. Note, however, that the data values are not altered by this procedure (since
you used the second option for the output type to create the 24 bit composite using original values with stretched saturation points). This
procedure only affects the visual display. The nature of this will be explained further in a later exercise. In addition, note that when you used
the interactive on the fly composite procedure, it automatically calculated the 1% saturation points and stored these as the Display Min and
Max values for each layer3.
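The difference between the two scaling options can be sketched numerically. The function below is an illustration of linear scaling with saturation, not COMPOSITE's actual implementation: it clips a small percentage at each end of the histogram before scaling to 0-255, and with saturation=0 it reduces to the simple linear option.

```python
import numpy as np

def stretch(band, saturation=1.0):
    """Linear stretch to 0-255 with saturation: clip the lowest and
    highest `saturation` percent of values before scaling, so isolated
    extremes no longer compress the main brightness range."""
    band = np.asarray(band, dtype=float)
    lo, hi = np.percentile(band, [saturation, 100.0 - saturation])
    scaled = (np.clip(band, lo, hi) - lo) / (hi - lo) * 255.0
    return scaled.astype(np.uint8)

# Ordinary values 0-100 plus one isolated very bright cell
band = np.concatenate([np.linspace(0, 100, 99), [10000.0]])
plain = stretch(band, saturation=0.0)      # outlier squeezes most cells near 0
saturated = stretch(band, saturation=1.0)  # main range spreads much wider
```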
Anaglyphs
Anaglyphs are three-dimensional representations derived from superimposing a pair of separate views of the same scene in different colors,
such as the complementary colors, red and cyan. When viewed with 3-D glasses consisting of a red lens for one eye and a cyan lens for the
other, a three-dimensional view can be seen. To work properly, the two views (known as stereo images) must possess a left/right orientation,
with an alignment parallel to the eye4.
Use the Close All Windows option of the Window List menu to clear the screen. Then use DISPLAY Launcher to view the file
named IKONOS1 using the Greyscale palette. Then use Add Layer in Composer (or press the r key) to add the image named
IKONOS2, again with the Greyscale palette.
Click the checkmark next to the IKONOS2 image in Composer on and off repeatedly. These two images are portions of two
IKONOS satellite images (www.spaceimaging.com) of the same area (San Diego, United States, Balboa Park area), but they are
taken from different positions -- hence the differences evident as you compare the two images.
More specifically, they are taken at two positions along the satellite track from north to south (approximately) of the IKONOS
satellite system. Thus the tops of these images face west. They are also epipolar. Epipolar images are exactly aligned with the path
of viewing. When they are viewed such that the left eye only sees the left image (along track) and the right eye only sees the right
image, a three-dimensional view is perceived.
3. More specifically, the on the fly compositing feature in Composer looks to see whether the Display Min and Max are equal to the actual Min and Max. If they are, it
then calculates the 1% saturation points and alters the Display Min and Max values. However, if they are different, it assumes that you have already made decisions
about scaling and therefore uses the stored values directly.
4. If they have not already been prepared to have this orientation, it is necessary to use either the TRANSPOSE or RESAMPLE modules to reorient the images.
Many different techniques have been devised to present each eye with the appropriate image of a stereo pair. One of the simplest is
the anaglyph. With this technique each image is portrayed in a special color. Using special eyeglasses with filters of the same color
logic on each eye, a three-dimensional image can be perceived.
The TerrSet system can accommodate all anaglyphic color schemes using the layer interaction effects provided by Composer.
However, the red/cyan scheme typically provides the highest contrast. First be sure that the IKONOS2 image is highlighted in
Composer and is checked to be visible. If it is not, click on its name in Composer. Then click on the Cyan button of the group
above Add Layer (Cyan is the light blue color, also known as aquamarine). Then highlight the IKONOS1 image and click on the
Red button above Add Layer. Then view the result with the 3-D glasses, such that the red lens is over the left eye and the cyan lens
is over the right eye. You should now see a three-dimensional image. Then try reversing the eye glasses so that the red lens is over
the right eye. Notice how the three dimensional image becomes inverted. In general, if you get the color sequence reversed, you
should always be able to figure this out by looking at what happens with familiar objects.
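What the Red and Cyan buttons do can be mimicked directly: the left image drives the red channel and the right image the green and blue (cyan) channels. A sketch for illustration (not Composer's internals):

```python
import numpy as np

def anaglyph(left_gray, right_gray):
    """Red/cyan anaglyph from a stereo pair of 0-255 grayscale images:
    left image -> red channel, right image -> green and blue channels."""
    left = np.asarray(left_gray, dtype=np.uint8)
    right = np.asarray(right_gray, dtype=np.uint8)
    return np.dstack([left, right, right])  # (rows, cols, 3) RGB

# Tiny hypothetical stereo pair
left = np.full((2, 2), 200, dtype=np.uint8)
right = np.full((2, 2), 50, dtype=np.uint8)
print(anaglyph(left, right)[0, 0])  # red=200, green=blue=50
```

Viewed through the glasses, the red lens passes only the left image and the cyan lens only the right, which is what produces the depth effect.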
This is only a small portion of an IKONOS stereo scene. Zoom in and roam around the image. The resolution is 1 meter -- truly
extraordinary! Note that other sensor systems are also capable of producing stereo images, including SPOT, QUICKBIRD and
ASTER. However, you may need to reorient the images to make them viewable as an anaglyph, either using TRANSPOSE or
RESAMPLE. TRANSPOSE is the simplest, allowing you to quickly rotate each image by 90 degrees. This will typically make them
usable as an anaglyphic pair. However, it does not guarantee that they will be truly epipolar. With truly epipolar images, a single
straight line joins the two image centers and the position of the other image's center within each image. In many cases, this can only be
achieved with RESAMPLE.
EXERCISE 1-4
DISPLAY: SURFACES - FLY THROUGH AND
ILLUMINATION
In the first exercise, we had a brief look at the use of ORTHO to produce a three-dimensional display, and in the previous exercise, we saw
how blends can be used to create dramatic maps of topography by combining hillshading with hypsometric tints. In this exercise, we will
explore the ability to interactively fly through a three-dimensional model. In addition, we will look at the use of the ILLUMINATE module
for preparing drape images for fly through.
Fly Through
If your workspace contains any existing windows, clean it off by using the Close All Windows option from the Window List
menu. Then click on the 3DFly Through icon on the toolbar (the one that looks like an airplane with a head-on view).
Alternatively, you can select Fly Through from the DISPLAY menu.
Look very carefully at the graphics on the Fly Through dialog. A fly through is created by specifying a digital elevation model
(DEM) and (typically) an image to drape upon it. Then you control your flight with a few simple controls.
Movement is controlled with the arrow keys. You will want to control these with one hand. Since you will less often be moving
backwards, try using your index and two middle fingers to control the forward and left and right keys. Note that you can press
more than one key simultaneously. Thus pressing the left and forward keys together will cause you to move in a left curve, while
holding these two keys and increasing your altitude will cause you to rise in a spiral.
You can control your altitude using the shift and control keys. Typically you will want to use your other hand for this on the
opposite side of the keyboard. Thus using your left and right hands together, you have complete flight control. Again, remember
that you can use these keys simultaneously! Also note that you are always flying horizontally, so that if you remove your fingers
from the altitude controls, you will be flying level with the ground.
Finally you can move your view up and down with the Page Up and Page Down keys. Initially your view will be slightly down
from level. Using these keys, you can move between the extremes of level and straight down.
Specify SIERRADEM as the surface image and SIERRA345 as the drape image. Then use the default system resource use (medium) and set
the initial velocity to slow (this is important, because this is not a big image). You can leave the other settings at their default values. Then
click OK, but read the following before flying!
Here is a strategy for your first flight. You may wish to maximize the Fly Through display window, but note that it will take a few
moments. Start by moving forward only. Then try using the left and right arrows in combination with the forward arrow. When
you get close to the model, try using the altitude keys in combination with the horizontal movement keys. Then experiment ...
you'll get the hang of it very soon.
Here are some other points about Fly Through that you should note:
A right mouse click in the display area will provide several additional display options including the ability to change the
background color and view of the sky.
Fly Through occurs in a separate window from TerrSet. If you click on the main TerrSet window, the Fly Through display
might slip behind TerrSet. However, you can always click on its icon in the Windows taskbar to bring it back to the front.
Fly Through requires very substantial computing resources. It is constructed using OpenGL -- a special applications
programming interface designed for constructing interactive 3-D applications. Many newer graphics cards have special
settings for optimizing the performance of OpenGL. However, experiment with care and pay special attention to limitations
regarding display resolution. In general, the key to working with large images (with or without special support for OpenGL)
is having adequate RAM -- 256 megabytes should generally be regarded as a minimum. 512 megabytes to 1 gigabyte are really
required for smooth movement around very large images. Experiment using the three options for resource use (see the next
bullet) and varying image sizes. Also note that you should close all unnecessary applications and map windows to maximize
the amount of RAM available. See the Fly Through Help for more suggestions if problems occur.
Fly Through actually constructs a triangulated irregular network (TIN) for the interactive display -- i.e., the surface is
constructed from a series of connected triangular facets. Changing the resolution option affects both the resolution of the
drape image and the underlying TIN. However, in general, a smaller image with higher resource use will lead to the best
display. Again, experiment. If the triangular facets become disturbingly obvious, either move to a smaller image size or zoom
out to a higher altitude. Note that poor resolution may lead to some unusual interactions between the three-dimensional
model and the draped image (such as streams flowing uphill).
If surfaces were displayed true to scale in their vertical axis, they would typically appear to have very little relief. As a result,
the system automatically estimates a default exaggeration. In general this will work well. However, specific locations may
need adjustment. To do this, close any open Fly Through window and redisplay after adjusting the exaggeration factor. A
value of 50% will yield half the exaggeration while 200% will double it. 0% will clearly lead to a flat surface.
You can also record flights and play them back, or save them as .avi files. Right-click on the 3-D display window to bring up the recording
options.
Using Fly Through, use the images SFDEM and SF234 to open the 3-D display window. Maximize the display window. When the
3-D display window appears, right-click and select the Load option and load the file SF.CSV. Right-click again and select Play
(F9). This will replay a recorded flight path we developed. You can use the speed keys F10 and F9 to pause and play the loaded
path. You can also create your own and save it to an AVI file to be played back in Media Viewer or embed it into a PowerPoint
presentation!
Illuminate
The most dramatic Fly Through scenes are those that contain illumination effects. The shading associated with sunlight shining on a surface
is an important input to three-dimensional vision. Satellite imagery naturally contains illumination shading. However, this is not the case
with other layers. Fortunately, the ILLUMINATE module can be used to add illumination effects to any raster layer.
To appreciate the scope of the issue, first close all windows (including Fly Through) and then use Fly Through to explore the
DEM named SIERRADEM without a drape image. Use all of the default settings. Although this image does not contain any
illumination effects, it does present a reasonable impression of topography because the hypsometric tints (elevation-based colors)
are directly related to the topography.
Close the Fly Through display window and then use Fly Through to explore SIERRADEM using SIERRAFIRERISK as the drape
image. Use the user-defined palette named Sierrafirerisk and the defaults for all other settings. As you will note, the sense of
topographic relief exists but is not great. The problem here is that the colors have no necessary relationship to the terrain and
there is no shading related to illumination. This is where ILLUMINATE can help.
Go to the DISPLAY menu and launch ILLUMINATE. Use the default option to illuminate an image by creating hillshading for a
DEM. Specify SIERRAFIRERISK as the 256 color image to be illuminated1 and specify Sierrafirerisk as the palette to be used.
Then specify SIERRADEM as the digital elevation model and SIERRAILLUMINATED as the name of the output image. The
blend and sun orientation parameters can be left as they are.2 You will note that the result is the same as you might have produced
using the Blend option of Composer. The difference, however, is that you have created a single image that can be draped onto a
DEM either with Fly Through or with ORTHO.
Finally, run Fly Through using SIERRADEM and SIERRAILLUMINATED. As you can see, the result is clearly superior!
The implication is that any image that is not in byte binary format will need to be converted to that form through the use of modules such as STRETCH (for
quantitative data), RECLASS (for qualitative data), or CONVERT (for integer images that have data values between 0-255 and thus simply need to be converted to
byte format).
ILLUMINATE performs an automatic contrast stretch that will negate much of the impact of varying the sun elevation angle. However, the sun azimuth will be very
noticeable. If you wish to have more control over the hillshading component, create it separately using the HILLSHADE module and then use the second option of
ILLUMINATE.
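To make the hillshading idea concrete, the sketch below implements the standard Horn-style hillshade calculation and a simple blend with a byte image. This is a plausible model of what HILLSHADE and ILLUMINATE compute, not TerrSet's actual code; the azimuth/altitude defaults, the `blend` weighting, and the function names are all assumptions for illustration.

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth=315.0, altitude=45.0):
    """Horn-style hillshading sketch: brightness of a sunlit surface.
    azimuth is degrees clockwise from north; altitude is sun elevation."""
    az = np.radians(360.0 - azimuth + 90.0)   # compass azimuth -> math angle
    alt = np.radians(altitude)
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)          # 0 = full shadow, 1 = fully lit

def illuminate(image_0_255, dem, blend=0.5):
    """Blend hillshading into a 0-255 image, akin to ILLUMINATE's default."""
    hs = hillshade(dem)
    out = blend * (hs * 255.0) + (1.0 - blend) * image_0_255
    return out.astype(np.uint8)
```

With `blend=0.5`, flat terrain under a 45-degree sun lightens or darkens the image only moderately, while steep slopes facing away from the sun darken strongly, which is the shading cue the exercise exploits.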
EXERCISE 1-5
DISPLAY: NAVIGATING MAP QUERY
As should now be evident, one of the remarkable features of a GIS is that maps can be actively queried. They are not simply static
representations of singular themes, but collections of data that can be viewed in myriad ways. In this exercise, we will consolidate and extend
some of the interactive map query techniques already discussed.
First, close any open map windows. Then use Display Launcher and display the raster layer SIERRA234. Since this is a 24-bit
image, no palette is needed.
A 24-bit image is so named because it defines all possible colors (within reason) by means of the mixture of red, green and blue (RGB)
additive primaries needed to create any color. Each of these primaries is encoded using 8 bits of computer memory (thus 24 bits over all three
primaries) meaning that it encodes up to 256 levels from dark to bright for each primary.1 This yields a total of 16,777,216 combinations of
color, a range typically called true color.2 24-bit images specify exactly how each pixel should be displayed, and are commonly used in
Remote Sensing applications. However, most GIS applications use "single band" images (i.e., raster images that only contain a single type of
information), thus requiring a palette to specify how the grid values should be interpreted as colors.
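The 8-bits-per-primary arithmetic can be illustrated with a small sketch. The packed bit layout shown here is the common RGB convention, used only to demonstrate the counting, and is not a claim about TerrSet's internal storage format.

```python
def pack_rgb(r, g, b):
    """Pack three 0-255 primaries into one 24-bit value (8 bits each)."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Recover the red, green and blue primaries from a 24-bit value."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

# 256 levels per primary over three primaries:
total_colors = 256 ** 3   # 16,777,216 combinations, i.e. "true color"
```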
Keeping SIERRA234 on the screen, use DISPLAY Launcher to display SIERRA4 and SIERRANDVI. Use the Grey Scale palette for
the first of these and the NDVI palette for the second.
Each of these two images is a "single band" image, thus requiring the specification of a palette. Each palette contains up to 256
consecutive colors. Click on the Identify tool from the tool bar and then click various pixels in all three images.
In the binary number system, 00000000 (8 bits) equals 0 in the decimal system, while 11111111 (8 bits) equals 255 in the decimal system (a total of 256 values).
The degree to which this image will show its true colors will also depend upon your graphics system and its setting. You may wish to review your settings by looking
at your display system properties (accessible through Control Panel). With the system set to 256 colors, the rendition may seem somewhat poor. Obviously, setting
the system to 24-bit (true color) will give the best performance. Many systems offer 16-bit color as well, which is almost indistinguishable from 24-bit.
Without clicking on the Identify tool, clicking in any image will show the pixel value at the cursor location. Clicking on the Identify tool from
the icon bar will open the Identify box to the right of the image, showing the pixel value and the x and y location. As you click in the SIERRA234
image, notice that the 24-bit image actually stores three numeric values for each pixel: the levels of the red, green and blue primaries (each
on a scale from 0-255), as they should be mixed to produce the color viewed.
The SIERRA4 image is a Landsat satellite Band 4 image and shows the degree to which the landscape has reflected near-infrared wavelength
energy from the sun. It is identical in concept to a black and white photograph, even though it was taken with a scanner system rather than a
camera. This single band image is also quantized to 256 levels, ranging from 0 (depicted as black with the Grey Scale palette) to 255 (shown as
white with the Grey Scale palette). Note that this band is also one of the three components of SIERRA234. In SIERRA234, the Band 4
component is associated with the red primary.3
In the SIERRA4 image, there is a direct correspondence between pixel values and colors. For example, in the Grey Scale palette, middle grey
occupies the 128th position (half way between black at 0 and white at 255), and will be assigned to any pixels that have a value of 128.
However, notice that the SIERRANDVI image does not have this correspondence. Here the values range from -0.30 to 0.72. In cases such as
this, TerrSet uses a system of autoscaling to assign cell values to palette colors. We will explore the issue of autoscaling more thoroughly in
Exercise 1-8. For now, simply recognize that, by default, the system evenly divides the actual number range (-0.30 to 0.72) into 256 classes and
assigns each a color from the palette. For example, all cells with values between -0.300 and -0.296 are assigned color 0, those between -0.296
and -0.292 are assigned color 1, and so on.
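The default autoscaling described above can be sketched as a simple binning computation. This is a simplified model for illustration; TerrSet's exact boundary handling may differ, and the class boundaries quoted in the text (-0.296, -0.292, ...) are rounded, since the true class width is (0.72 - (-0.30)) / 256, roughly 0.004.

```python
def autoscale_index(value, vmin=-0.30, vmax=0.72, n=256):
    """Map a cell value to one of n palette indices by dividing the
    data range evenly - the default autoscaling described in the text."""
    if value >= vmax:
        return n - 1          # the maximum falls in the last class
    return int((value - vmin) * n / (vmax - vmin))
```

For example, a cell value of -0.297 falls in class 0 and -0.294 in class 1, matching the ranges quoted in the text.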
Now let's see how we can view all the pixel values across many images at any location. To do this, again we will use the Identify tool.
From TerrSet Explorer, highlight the six SIERRA raw bands, SIERRA1, SIERRA2, SIERRA3, SIERRA4, SIERRA5, and SIERRA7.
Once they are highlighted, right-click in TerrSet Explorer and select Add Layer. This will display all 6 bands into one map
window.
Now, with the Identify tool selected, click anywhere in the map composition to view the pixel value across all the bands.
The graphing option of the Identify tool also works by autoscaling. Click on the View as Graph checkbox at the bottom of the
Identify box to change the display to graph mode and then click around in the map composition containing the 6 images. By
default, the bars for each image are scaled in length between the minimum and maximum for that image. Thus a half-length bar
would signify that the selected pixel has a value half-way between the minimum and maximum for that image. This is called
independent scaling. However, notice that there is also a button to toggle this to relative scaling. In this case, all the bars are scaled
to a uniform minimum and maximum for the entire group. Try this. You will be required to specify the minimum and maximum
to be used. You can accept the default offered.
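Independent versus relative scaling amounts to a choice of which minimum and maximum normalize each bar. A minimal sketch of that distinction (the function name and argument layout are illustrative, not TerrSet's API):

```python
def bar_lengths(values, ranges, relative=None):
    """Bar lengths (0-1) for the Identify tool's graph mode.
    Independent scaling uses each band's own (lo, hi) range;
    relative scaling applies one (lo, hi) pair to the whole group."""
    bars = []
    for v, (lo, hi) in zip(values, ranges):
        if relative is not None:
            lo, hi = relative
        bars.append((v - lo) / (hi - lo))
    return bars
```

With independent scaling, a pixel value of 50 in a 0-100 band and 100 in a 0-200 band both draw half-length bars; with relative scaling over (0, 200), the first bar shrinks to a quarter length.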
It has become common to specify the primaries from long to short wavelength (RGB) while satellite image bands are commonly specified from short to long
wavelengths (e.g., SIERRA234 which is composed from the green, red and near-infrared wavelengths, and assigned the blue, green and red primaries respectively).
Close all map windows. Then use DISPLAY Launcher and select from the SIERRA group the raster layer SIERRA234. It is
important here that this be selected from the group (i.e., that its name is specified as SIERRA.SIERRA234 in the input box).
Again, since this is a 24-bit image, no palette is needed. Alternatively, you can display this file from TerrSet Explorer by selecting
the file from within the SIERRA group file. To verify that it is displayed correctly with the dot-logic, the banner of the map
window should read SIERRA.SIERRA234.
Now display SIERRA2, SIERRA3 and SIERRA4 using the dot-logic, that is, display the images by selecting them from within the SIERRA group.
Close the Identify box. Notice that this does not turn off the simpler Identify Mode. Now move the three images on your screen so
that you can see as much as possible of all three. Then click on the SIERRA.SIERRA234 layer to give it focus. Using the pan and
zoom keys, move around this image.
Normally, pan and zoom operations only affect the map window that has focus. However, since each of these map windows
belongs to a common group, their pan and zoom operations can also be linked.
Select the Group Link icon on the tool bar. Now pan and zoom around any of the images and watch the effect! We can also see
this with the Zoom Window operation. Zoom Window is a procedure whereby you can delineate a specific region you wish to
zoom into. To explore this, click on the Zoom Window icon and then move the mouse over one of your images. Notice the shape
of the cursor. We will zoom into an area that just encloses the large lake to the north. Move the mouse to the upper-left corner of
the rectangular area you will zoom into. Then hold down the left mouse button and keep it down while you drag the rectangle
until it encloses the lake region. When you let go of the mouse, this region will be zoomed into. Notice the effect on the other
group members! Finally, click on the Full Extent Normal icon on the tool bar (or press the Home key). Note that this linked zoom
feature can be turned off at any time by simply clicking onto the Group Link icon again.
Placemarks
As you zoom into various parts of a map, you may wish to save a particular view in order to return to it at a later time. This can be achieved
through the use of placemarks. A placemark is the spatial equivalent of a bookmark.
Use DISPLAY Launcher to bring up any layer you wish. Then use the zoom and pan keys to zoom into a specific view. Save that
view by clicking on the Placemarks icon (next to the Group Link icon).
The Placemarks tab of the Map Properties dialog is displayed. We will explore this dialog in much greater depth in the next
exercise. For now, click on the Add Current View as a New Placemark button to save your view. Then type in any name you wish
into the input box that opens on the right, and click the Enter and OK buttons.
Now zoom to another view, add it as a second placemark, and then exit from the Placemarks dialog. Press the Home key to
restore the original map window. At this point, your view corresponds with neither placemark. To return to one of your
placemarks, click the placemark icon and then select the name of the desired placemark from the placemarks window. Then click
the Go to Selected Placemark button.
TerrSet allows you to maintain up to 10 placemarks per map composition, where a composition consists of a single map window with one or
more layers. In the next exercise, we will explore map compositions in depth. However, for now it is simply necessary to recognize that
placemarks will be lost if a map window is removed from the screen without saving the composition, and that placemarks apply to the
composition and not to the individual map layer per se.
EXERCISE 1-6
MAP COMPOSITION
By now you have gained some familiarity with Composer, the utility that is present whenever a map window is on the screen. However, as
you will see in this illustration, it is but one piece of a very powerful system for map composition.
Map Components
A map composition consists of one or more map layers along with any number of ancillary map components, such as titles, a scale bar and so
on. Here we review each of these constituent elements.
Map Window
The map window is the window within which all map components are contained. A new map window is created each time you use DISPLAY
Launcher. The map window can be thought of as the piece of paper upon which you create your composition. Although DISPLAY Launcher
sets the size of the map window automatically, you can change its size either by pressing the End or Home keys. You can also move the mouse
over one of its borders, hold the left mouse button down, and then drag the border in or out.
Layer Frame
The layer frame is a rectangular region in which map layers are displayed. When you use DISPLAY Launcher, and choose not to display a title
or legend, the layer frame and the map window are exactly the same size. When you also choose to display a legend, however, the map
window is opened up to accommodate the legend to the right of the layer frame. In this case the map window is larger than the layer frame.
This is not merely a semantic distinction. As you will see in the practical sequence below, there is truly a layer frame object that contains the
map layers and that can be resized and moved. Each map composition contains one layer frame.
Legends
Legends can be constructed for raster layers and point, line and polygon vector layers. Like all map components, they are sizable and
positionable. The system allows you to display legends for up to five layers simultaneously. The text content of legends is derived either from
the legend information carried in the documentation file of the layer involved, or is constructed automatically by the system.
Scale Bar
The system allows a scale bar to be displayed; you can control its length, text, number of divisions and color.
North Arrow
The standard north arrow supplied allows not only text and color changes, but can also be varied in its declination (its angle from grid north).
Declination angles are always specified as azimuths (as an angle from 0-360, clockwise from north).
Titles
In addition to text layers (which annotate layer features), you also have the ability to add up to three free-floating titles. These are referred to
as the title, sub-title and caption. However, they are all map objects of identical character and can thus be used for any purpose whatsoever.
Text Frame
In addition to titles, you can also incorporate a text frame. A text frame is a sizable and placeable rectangular box that contains text. It is
commonly used for blocks of descriptive text or credits. There is no limit on the amount of text, although it is rare that more than a paragraph
or two would be used (for reasons related to map composition space).
Graphic Insets
TerrSet also allows you to incorporate up to two graphic insets into your map. A graphic inset can be either a Windows Metafile (.wmf), an
Enhanced Windows Metafile (.emf) or a Windows Bitmap (.bmp) file. It is both sizable and placeable. Note that the Windows Metafile (.wmf)
format has now been superseded by the Enhanced Windows Metafile (.emf), which is preferred.
Map Grid
A map grid can also be incorporated into your composition quite easily. Parameters include the position of the origin and the increment (i.e.,
interval) in X and Y and the ability to display grids or tics. The grid is automatically labeled and can be varied in its position and color and
text font.
Backgrounds
All map components have backgrounds. By default, all are white. However, each can be varied individually or as a group. The layer frame and
map window backgrounds deserve special mention.
When one or more raster layers is present in the composition, the background of the layer frame will never be visible. However, when only
vector layers are involved, the layer frame background will be evident wherever no feature is present. For example, if you were creating a map
of an island with vector layers, you might wish to color the layer frame background blue to convey the sense of its surrounding ocean.
Changing the map window background is like changing the color of paper upon which you draw the map. However, when you do this, you
may wish to force all other map components to have the same color of background. As you will see below, there is a simple way to force all
map components to adopt the color of the map window background.
Use DISPLAY Launcher to launch a map window with the raster layer named WESTLUSE. Choose the user-defined palette
WESTLUSE. Also, be sure the legend and title options are both checked. Then click OK.
DISPLAY Launcher provides a quick composition facility for a single layer, with automatic placement of both the title and the legend (if
chosen). To add further layers or map components, however, we will need to use other tools. Let's first add some further layers to the
composition. All additional layers are added with Composer.
Click on the Add Layer button of Composer.1 Then add the vector layer named WESTROAD using the symbol file also named
WESTROAD. Then click on Add Layer again and add the vector text layer named WESTBOROTXT. It also has a special symbol
file, named WESTBOROTXT.
The text here is probably very hard to read. Therefore, press the End key (or click the Full Extent Maximized button on the tool
bar) to enlarge your composition. Depending upon your display resolution, this may or may not have helped much. However, this
is a limitation of your display system only. When it is printed, the text will have significantly better quality (because printers
characteristically have higher display resolutions than monitors).
An additional feature of text layers is that they maintain their relative size. Use the PgUp and PgDn keys to zoom into the map.
Notice how the text gets physically bigger, but retains its relative size. As you will see later, there is a way in which you can
specifically set the relationship between map scale and text size.
Press the Home key and then the End key to return to the previous state of the composition. Then click the Map Properties
button on Composer. This tabbed page dialog contains the means of controlling all non-layer components of the composition.
By default, the Map Properties dialog opens to the Legends tab. In this case, we need to add a legend for the roads layer. Notice
how the first legend object is set to the WESTLUSE layer. This was set when you chose to display a legend when first launching
the layer. We therefore will need to use one of the other legend objects. Click the down arrow of the layer name input box for
There are two short-cut keys for Add Layer, r for raster and v for vector. With a map layer in focus, hit the r or v key to bring up the Add Layer dialog box.
Legend 2 for a list of all the layers in the composition. Select the WESTROAD layer. Notice how the visible property is
automatically selected. Now click the Select Font button and set the text to be 8-point, the font to be Arial, the style to be regular,
and the color to be black. Then click the Select Font button for the WESTLUSE legend and make sure it has the same settings.
Then click OK.
When DISPLAY Launcher initiates the display, it is in complete control of where all elements belong. However, after it is
displayed, we can alter the location and the size of any component. Move the mouse over the roads legend and double-click onto
it. This will produce a set of sizing/move bars along the edge of the component. Once they appear, the component can be either
resized and/or moved. In this case, we simply want to move it. Place the mouse over the legend and hold the left button down to
drag to a new location. Then to fix it in place (and thereby stop the drag/size operation), click on any other map component (or
the banner of the map window). Do this now. You will know you have been successful if the sizing bars disappear.
Note that in Composer, Auto-Arrange is on by default whereby map elements such as titles, legends, scale bar, insets, etc. are
automatically arranged. When the Home and End keys are pressed, the map composition will return to its default display state.
Turning off the Auto-Arrange option allows the manual positioning of map elements.
Now move the mouse over the title and click the right mouse button. Right clicking over any map composition element will
launch the Map Properties dialog with the appropriate tab for the map component involved.2 Notice how the Title component has
been set to visible. Again, this was set when the land use layer was launched. When the title option was selected in DISPLAY
Launcher, it adopted the text of the title entry in the documentation file for that layer.3 However, we are going to change this.
Change the title to read "Westborough, Massachusetts." Then click on the Select Font button and change the font to Times New
Roman, bold italic style, maroon color and 22-point size.
Next, click into the Caption Text input box and type "Landuse / Landcover." Set the font to be bold 8-point Arial, in maroon.
Then click OK.
Turn off Auto-Arrange in Composer. Now bring up Map Properties again and select the Graphic Insets tab. Use the Browse
button to find the WESTBORO.BMP bitmap. Select this file and then set the Stretchable property on and the Show Border option
off. Then click OK. You will immediately note that you will need to both position and size this component. Double-click onto the
inset and move it so that its bottom-right corner is in the bottom-right corner of the map window, allowing a small margin equal
to that between the layer frame and the map window. Then grab the upper-left grab bar and drag it diagonally up and to the left
so that the inset occupies the full width of the legend area (again leaving a small margin equal to that placed on the right side).
Also be sure that the shape is roughly square. Then click any other component (or the map window banner) to set the inset in
place.
Now bring up Map Properties again and select the Scale Bar tab. Set the Units text to Meters, the number of divisions to 4 and the
length to 2000. Also click the Select Font button and set it to 8-point regular black Arial and click OK. Then double-click onto the
scale bar and move it to a position between the inset and the roads legend. Click onto the map window banner to set it in place.
In the case of right clicking over the layer frame, the default legend tab is activated.
If the title entry in the documentation file is blank, no title will appear even though space has been left for it.
Now select the Background tab from Map Properties. Click into the Map Window Background Color box to bring up the color
selection dialog. Select the upper-left-most color box and click the Define Custom Colors button. This will yield an expanded
dialog in which colors can be set graphically, or by means of their RGB or HLS color specifications. Set the Red, Green and Blue
coordinates to 255, 221 and 157 respectively. Then click on the Add to Custom Colors button, followed by the OK button. Now
that you are back at the Background tab, check the box labeled Assign Map Window Background Color to All Map Components.
Then click OK.
This time select the Map Grid tab from Map Properties. Set the origin X and Y coordinates to 0 and the increment in both X and
Y to be 200. Click the Current View option under the Map Grid Bounds. Choose the text option to Number inside. Set the color
(by clicking onto its box) to be the bright cyan (aquamarine) color in column 5, row 2 of the color selection options. Set the
Decimal Places to 0 and the Grid Line Width to 1. Then set the font to regular 8-point Arial with an Aqua color (to match the
grid). Then click OK to see the result.
Finally, bring up Map Properties and go to the GeoReferencing tab. We will not change anything here, but simply examine its
contents. This tab is used to set specific boundaries to the composition and the current view. Note that the units are specified in
the actual map reference units, which may represent a multiple of ground units. In this case, each map reference unit represents
20 meters. Note also the entries to change the relationship of Reference System coordinates to Text Points. At the moment, it has
been set to 1. This means that each text point is the equivalent of 1 map unit, which in turn represents 20 meters. Thus, for
example, a text label of 8-points would span an equivalent of 160 meters on the ground. Changing this value to 2 would mean that
8-point text would then span 320 meters. Try this if you would like. You will need to click on OK to have the change take place.
However, be sure to change it back to 1 before finishing.
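The arithmetic relating text points to ground distance can be checked with a one-line sketch. The 20 meters per reference unit comes from this exercise; the function name is illustrative only.

```python
def text_ground_span(point_size, units_per_point=1.0, meters_per_unit=20.0):
    """Ground distance spanned by a text label, per the GeoReferencing tab:
    each text point covers `units_per_point` map reference units, and each
    reference unit here represents 20 meters."""
    return point_size * units_per_point * meters_per_unit
```

So an 8-point label spans 160 meters at the default setting of 1, and 320 meters when the setting is changed to 2, as the text describes.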
Next, let's go to the North Arrow tab. Select one of the north arrows with your cursor. This will automatically select the visible
option. Besides the default north arrows, you have the option of creating your own and importing these as a BMP or EMF. You
have additional options for setting background color and declination. Like all other components, the North Arrow is also
placeable and sizable. Place it below the legends.
Click the Save button on Composer. Note the variety of options you have. However, only the first truly saves your composition in
a form that will allow you to recreate and further edit or extend your map composition. Click it now, and save it to a Map
Composition named WESTBORO. This will create a map composition file named "WESTBORO.MAP" in your Working Folder.
However, note that it only contains the instructions on how to create the map and not the actual data layers. It assumes that when
it recreates the map, it will be able to find the layers you reference in either the Working Folder or one of the Resource Folders of
the current Project Environment. Thus if you wish to copy the composition to another location, you should remember to copy
both the ".map" file and all layer, palette and symbol files required. (The TerrSet Explorer may be used to copy files.)
Once you have saved your composition, remove it from the screen. Then call DISPLAY Launcher and select the Map
Composition File option and search for your composition named WESTBORO. Then simply click OK to view the result. Once
your composition has finished displaying, you are exactly where you left off.
Now select the Print button from Composer. Select your printer and review its properties. If the Properties box for your printer
has a Graphics tab, select it and look at the settings. Be sure it has been set to the finest graphics option available. Also, if you have
the choice of rasterizing all graphic objects (as opposed to using vector graphics directly), do so. This is important since printers
that have this option typically do not have enough memory to draw complex map objects in vector directly. With this choice, the
rasterization will happen in the computer and not the printer (a better solution).
After you have reviewed the graphics options, set the paper orientation to landscape and then print your map.
You should always work with True Type fonts if you intend to print your map. Non-True Type fonts cannot be rotated properly (or
at all) by Windows (even on screen). In addition, some printers will substitute different fonts for non-True Type fonts without
asking for your permission. True Type fonts are always specially marked by Windows in the font selection dialog.
Some printers provide options to render True Type fonts as graphics or to download them as "soft fonts." Experiment with both
options, but most printers with this option require the "soft fonts" option in order to print text backgrounds correctly.
Probably the best value for money in printers for GIS and Image Processing lies with color ink jet printers. However, the quality of
paper makes a huge difference. Photo quality papers will yield stunning results, while draft quality papers may be blurred with poor
color fidelity.
EXERCISE 1-7
PALETTES, SYMBOLS AND CREATING
TEXT LAYERS
Throughout the preceding exercises, we have been using palettes and symbol files to graphically render map layers. Through the Advanced
Palette/Symbol Selection options of DISPLAY Launcher and Layer Properties in Composer, TerrSet provides over 1300 pre-defined palettes
and symbol files. However, there are times when you will need to make a special palette for a specific map layer. In this exercise, we explore
how to create these files. In addition, we explore the creation of text layers (a major form of annotation) through digitizing.
Find the icon for Symbol Workshop and click it. We will create a new palette to render topographic surfaces. Notice the large
matrix of boxes on the right. These represent the 256 colors that are possible in a color palette.1 Currently, they are all set to the
same color. We will change this in a moment. Now move your mouse over these boxes and notice that as the mouse is over each
box, a hint is displayed indicating which of the 256 palette entries that box represents.
From the File menu, select New. Specify a palette as the type of symbol and the name ETDEM as the filename and click OK.
Click into the box for palette entry 0. You will now be presented with the standard Windows color dialog. The color we want for
this entry is black, which is the sample color in the lower-left corner of the basic colors section of the dialog box. Select it and then
click OK.
Now click into the box for palette entry number 17. Define a custom color by setting the values for Red, Green and Blue (RGB) to
136 222 64 and click OK. Then set the From blend option to 0 and the To blend option to 17 and click the Blend button.
Now locate palette entry 51 and set its RGB values to 255 232 123. Set the blend limits from 17 to 51 and click the Blend button.
Set palette entry 119 to an RGB of 255 188 76. Then blend from 51 to 119.
Set palette entry 238 to an RGB of 180 255 255. Then blend from 119 to 238.
Finally, set palette entry 255 to white. This is the sample color in the lower right corner of the basic colors section (or you can set
it with an RGB of 255 255 255). Then blend from 238 to 255. This completes the palette. We can save it now by selecting the Save
option from Symbol Workshop's File menu. Exit from Symbol Workshop.
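The Blend button presumably performs a linear interpolation of the RGB values between the two anchor entries. The sketch below rebuilds the whole 256-entry palette from the anchors used above, under that assumption (the rounding behavior is a guess, not Symbol Workshop's documented algorithm):

```python
def blend(palette, i0, i1):
    """Fill palette entries between two anchors by linear RGB interpolation."""
    c0, c1 = palette[i0], palette[i1]
    for i in range(i0 + 1, i1):
        t = (i - i0) / (i1 - i0)
        palette[i] = tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))

# the anchor entries from the exercise
pal = {0: (0, 0, 0), 17: (136, 222, 64), 51: (255, 232, 123),
       119: (255, 188, 76), 238: (180, 255, 255), 255: (255, 255, 255)}
for a, b in [(0, 17), (17, 51), (51, 119), (119, 238), (238, 255)]:
    blend(pal, a, b)
```

After the five blends, all 256 entries are filled, running smoothly from black through greens, tans and pale blues up to white, which is what you should see in Symbol Workshop's matrix of boxes.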
Now use DISPLAY Launcher to view the image named ETDEM. You will notice that DISPLAY Launcher automatically detects
that a palette exists with the same name as the image to be displayed, and therefore assumes that you want to use it. However, if
you had used a different name, you would simply need to select the Other/User-defined option and choose the palette you just
created from the pick list.2
Use the Add Layer button on Composer and add the file named ETPROV with the Outline Black symbol file. As you can see,
these lines (thin solid black) are somewhat too dark for the delicate palette we've created. Therefore, let's create a new symbol file
using grey lines.
Open Symbol Workshop either from the Display menu or by clicking on its icon. Under Symbol Workshop's File menu, select
New. When the New Symbol File dialog appears, click on Line and specify the name Grey.
Now select line symbol 0 and set its width to 1 and its style to solid. Then click on the color box to access the Windows color
dialog to set its color to RGB 128 128 128. Click OK to exit the color selection dialog and again to exit the line symbol dialog.
Now click on the Copy button. By default, this function is set to copy the symbol characteristics from symbol 0 to all other
symbols. Therefore, all 256 symbols should now appear the same as symbol 0. Choose Save from the Symbol Workshop File menu
and close Symbol Workshop.
Note that user-created palettes are always stored in the active Working Folder. However, you can save them elsewhere using the Save As option. If you create a
symbol file that you plan to use for multiple projects, save it to the Symbols folder under the main TerrSet program folder.
We will now apply the symbol file we just created to the province boundaries vector layer in the map display. Click on the entry
for ETPROV in the Composer list (to select it), then click the Layer Properties button. Change the symbol file to Grey. Then click
on the OK button of the Layer Properties dialog. The more subtle medium grey province boundaries go well with the colors of the
elevation palette.
Open Symbol Workshop and from the File menu, select New. In the New Symbol File dialog, specify Text and input the name
PROVTEXT. Select text symbol 0 and set its characteristics to 12 point bold italic Times New Roman in maroon. Click OK to
return to the main Symbol Workshop dialog, and use the Copy button to copy this symbol to all other categories. Then Save the
file (from the File menu) and exit Symbol Workshop.
We now have a symbol file to use in labeling the provinces. To create the text layer with the province names, we will use the TerrSet on-screen
digitizing utility. Before beginning, however, examine the provinces as delineated in your composition. Notice that if you start at the
northernmost province and move clockwise around the boundary, you can count 11 provinces, with two additional provinces in the middle:
a northern one and a southern one. This is the order we will digitize in: number 1 for the northernmost province, number 2 for that which
borders it in the clockwise direction, and so on, finishing with number 13 as the more southerly of the two inner provinces.
First, press the End key to make your composition as large as possible. Then click the digitize icon on the tool bar (the one with
the cross in a circle). If the highlighted layer in Composer is the ETPROV layer, you will then be asked if you wish to add features
to this existing layer, or create a new layer. Indicate that you wish to create a new layer. If, on the other hand, the highlighted layer
in Composer was the ETDEM layer, it would automatically assume that you wished to create a new layer since ETDEM is raster,
and the on-screen digitizing feature always creates vector layers.
Specify PROVTEXT as the name of the layer to be created and click on Text as the layer type. For the symbol file, specify the
PROVTEXT symbol file you just created. Specify 1 as the index of the first feature, make sure the Automatic Index feature is
selected, and click OK. Now move to the middle of the northernmost province and click the left mouse button. Enter TIGRAY as
the text for the label. Most other elements can be left at their default values. However, select the Specify Rotation Angle option,
and leave it at its default value of 90. Also, the relative caption position should be set to Center. Then click OK.
Repeat this action for each of the remaining provinces. Their names and their feature ID's (the symbol type will remain at 1 for all
cases) are listed below. Remember to digitize them in clockwise order. For the two center provinces, digitize the northern one
first.
Text rotation angles are specified as azimuths (i.e., clockwise from north). Thus, 90 yields standard horizontal text while 270 produces text that is upside-down.
2 Welo
3 Harerge
4 Bale
5 Sidamo
6 Gamo Gofa
7 Kefa
8 Ilubabor
9 Welega
10 Gojam
11 Gonder
12 Shewa
13 Arsi
Don't worry if you make any mistakes, since they can be corrected at a later time. When you have finished, click the right mouse
button to signal that you have finished digitizing. Then click the Save Digitized Data icon on the tool bar (a red arrow pointing
downward, 2 icons to the right of the Digitize icon) to save your text layer.
When we initially created this text layer, we made all text labels horizontal. Let's delete the label for Shewa and put it on an angle
with the same orientation as the province. Make sure the text layer, PROVTEXT is highlighted in Composer. Click on the Delete
Feature icon on the tool bar (a red X to the right of the Digitize icon). Then move the mouse over the Shewa label and click the left
mouse button to select it. Press the Delete key on the keyboard. TerrSet will prompt you with a message to confirm that you do
wish to delete the feature. Click Yes. Click on the Delete Feature icon again to release this mode. Now click the Digitize icon and
indicate that you wish to add a feature to the existing layer. Specify that the index of the first feature to be added should be 12.
Then move the cursor to the center of the Shewa province and click the left mouse button. As before, type in the name Shewa, but
this time, indicate that you wish to use Interactive Rotation Specification Mode. Then click OK and move the cursor to the right.
Notice the rotation angle line. This is used simply to facilitate specification of the rotation angle. The length of the line has no
significance; only the angle is meaningful. Now rotate the line to the northeast to an angle that is similar to the angle of the
province itself. Finally click the left mouse button to place the text.
If you made any mistakes in constructing the text layer, you can correct them in the same manner. Otherwise, click the right
mouse button to finish digitizing and then save your revised layer by clicking on the Save Digitized Data icon on the tool bar.
To complete your composition, place the legend for the elevation layer in the upper-left corner of the layer frame. Since the
background color is black, you will want to use Map Properties to change the text color of the legend to be white and its
background to be black.
Add any other map components you wish and then save the composition under the name ETHIOPIA.
If you forget to save your digitizing, TerrSet will ask if you wish to save your data when you exit.
Photo Layers
A photo layer is a special example of a text layer. It was developed specifically for use with ground truthing. This final section of the exercise
will demonstrate using Photo Layers as part of a ground truth exercise in Venezuela. Photo Layers are created as text layers during the on-screen digitizing process, either through digitizing a new text layer or when laying down waypoints during GPS interaction. In both cases,
entering the correct syntax for the text caption will create a Photo Layer.
Using DISPLAY Launcher, display the layer LANDSAT345_JUNE2001. Then, use Add Layer on Composer to add the vector text
layer CORRIDOR.
Four text labels will appear corresponding to ground truth locations. The ground truth exercise was undertaken with the goal of creating a
land use map from the Landsat imagery shown in the raster layer. During the exercise a GPS was connected to a laptop. As waypoints were
recorded, photos also were taken of the land cover which could be used to facilitate the classification process.
When the text layer is displayed, text labels associated with photos will be underlined. In our case, the text labels show the times of day at
which the waypoints were recorded, though on different days.
Using Identify, click on the text location. The photos associated only with that label will be displayed. Only one photo layer label
can be displayed at a time. Click on the other text labels and the previous photos will be removed as other layers are displayed.
Each photo shown corresponds exactly to the view azimuth at the location where the photo was taken. When you move the mouse
over the banner of a photo, its title will be displayed. In our case, each photo has a title corresponding to the name of the photo,
and also its azimuth. Arrows will correspond to the azimuth.
You will want to review the Help on Photo Layers for complete detail on creating these text layers. Once created, you can use them to recall
your ground truth experience.
EXERCISE 1-8
DATA STRUCTURES AND SCALING
Use DISPLAY Launcher to view both the WESTBORO and ETHIOPIA map compositions
created in Exercises 1-6 and 1-7. Notice the difference between the legends for the WESTLUSE
layer of the WESTBORO composition and the ETDEM legend of the ETHIOPIA composition.
To appreciate the reasons for this difference, choose TerrSet Explorer from the File menu or
click on its icon (the first icon).
TerrSet Explorer is a general purpose utility to manage and explore TerrSet files and projects. You can use
TerrSet Explorer to set your project environment, manage your group files, review metadata, display files,
and simply organize your data with such tools as copy, delete, rename, and move commands. You can use
TerrSet Explorer to view the structure of TerrSet file formats and to drag and drop files into TerrSet dialog
boxes. TerrSet Explorer is permanently docked to the left of the TerrSet desktop. It cannot be moved but it
can be minimized and horizontally resized.
With the Files tab selected in TerrSet Explorer, notice that there is a Filters tab where you can select
which file types will be shown in the Files list; only files matching the filter are displayed.
Alternatively, you can alter the filter at the bottom of the Files
pane. When you first open TerrSet Explorer, it automatically lists the files in your Working Folder.
However, like the pick-list, you can select to show files in any of your Resource Folders as well.
From the Files tab, select the folder that contains the WESTLUSE and ETDEM raster images.
Find the file WESTLUSE and right-click on its filename.
By right-clicking on any file or files you will be presented with a host of utilities including copying, deleting
and renaming files, along with a second set of utilities for showing its structure and/or for viewing file
contents of a binary file. We will use these latter operations in this exercise.
If you did not complete the earlier exercises, display the raster image WESTLUSE with the palette WESTLUSE and a legend. Also display ETDEM with the ETDEM
palette (or the Default Quantitative palette) and a legend. Then continue with this exercise.
Right-click again in the Files pane and make sure that the Metadata option is selected and showing on the bottom half of the Files
tab. Now, notice as you select any file, the metadata for that file is shown. Again, highlight the WESTLUSE layer. Notice that the
name is listed as "WESTLUSE.RST." This is the actual data file for this raster image, which has an ".rst" file extension.
Now change the filter to show all files. Go to the input box below the Files pane and select the pull-down menu. Select the All Files
(*.*) option. Now locate again WESTLUSE.RST. Notice, however, that also shown is a second file with an ".rdc" extension. The
".rdc" file is its accompanying metadata file. The term metadata means "data about data," i.e., documentation (which explains the
"rdc" extensionit stands for "raster documentation"). The data shown in the Metadata pane come from the .rdc files. Vector
files also have a documentation file, .vdc.
Change the filter back again to the default listing. You can do this from the pull-down menu.
Now with WESTLUSE highlighted, right-click and choose the Show Structure option. This shows the actual data values behind
the upper left-most portion (8 columns and 16 rows) of the raster image. Each of these numbers represents a land use type, and is
symbolized by the corresponding palette entry. For example, cells with a number 3 indicate forested land and are symbolized with
the third color in the WESTLUSE palette. Use the arrow keys to move around the image. Then close the Show Structure dialog.
Make sure that the WESTLUSE raster layer is still highlighted in TerrSet Explorer, and view its metadata which will show us the
contents of the "WESTLUSE.RDC" file. This file contains the fundamental information that allows the file to be displayed as a
raster image and to be registered with other map data.
The file type is specified as binary, meaning that numeric values are stored in standard IEEE base 2 format. The Show Structure utility in
TerrSet Explorer allows us to view these values in the familiar base 10 numeric system. However, they are not directly accessible through
other means such as a word processor. TerrSet also provides the ability to convert raster images to an ASCII format, although this format is
only used to facilitate import and export.
The data type is byte. This is a special sub-type of integer. Integer numbers have no fractional parts, increasing only by whole number steps.
The byte data type includes only the positive integers between 0 and 255. In contrast, files designated as having an integer data type can
contain any whole numbers from -32768 to +32767. The reason that they both exist is that byte files only require one byte per cell whereas
integer files require 2. Thus, if only a limited integer range is required (as in this case), use of the byte data type can halve the amount of
computer storage space required. Raster files can also be stored as real numbers, as will be discussed below.
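The storage savings described above are easy to quantify. The sketch below assumes the per-cell sizes given in the text (byte = 1, integer = 2, real = 4 bytes); the grid dimensions are a made-up example:

```python
# Bytes per cell for the TerrSet raster data types discussed in the text
BYTES_PER_CELL = {"byte": 1, "integer": 2, "real": 4}

def storage_bytes(columns, rows, data_type):
    """Uncompressed size, in bytes, of a raster's data (.rst) portion."""
    return columns * rows * BYTES_PER_CELL[data_type]

# A hypothetical 512 x 512 grid: the byte type needs half the space of integer
print(storage_bytes(512, 512, "byte"))     # 262144
print(storage_bytes(512, 512, "integer"))  # 524288
print(storage_bytes(512, 512, "real"))     # 1048576
```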
The columns and rows indicate the basic raster structure. Note that you cannot change this structure by simply changing these values. Entries
in a documentation file simply describe what exists. Changing the structure of a file requires the use of special procedures (which are
extensively provided within TerrSet). For example, to change the data type of a file from byte to integer, you would use the module
CONVERT.
There are seven fields related to the reference system to indicate where the image exists in space. The Georeferencing chapter in the TerrSet
Manual gives extensive details on these entries. However, for now, simply recognize that the reference system is typically the name of a
ASCII is the American Standard Code for Information Interchange. It was one of the earliest coding standards for the digital representation of alphabetic characters,
numerals and symbols. Each ASCII character takes one byte (8 bits) of memory. Recently, a new system has been introduced to cope with non-US alphabet systems
such as Greek, Chinese and Arabic. This is called UNICODE and requires 2 bytes per character. TerrSet accepts UNICODE for its text layers since the software is
used worldwide. However, the ASCII format is still very much in use as a means of storing single byte codes (such as Roman numerals), and is a subset of UNICODE.
special reference system parameter file (called a REF file in TerrSet) that is stored in the GEOREF sub-folder of the TerrSet program
directory. Reference units can be meters, feet, kilometers, miles, degrees or radians (abbreviated m, ft, km, mi, deg, rad). The unit distance
multiplier is used to accommodate units of other types (e.g., minutes). Thus, if the units are one of the six recognized unit types, the unit
distance will always be 1.0. With other types, the value will be other than 1. For example, units can be expressed in yards if one sets the units
to feet and the unit distance to 3.
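The unit distance multiplier works as a simple scale factor on coordinate differences. A minimal sketch of the yards example above (the function name is illustrative, not a TerrSet call):

```python
def ground_distance(coord_diff, unit_distance):
    """A coordinate difference times the unit distance multiplier gives
    the distance in the declared reference units."""
    return coord_diff * unit_distance

# Yards expressed as units = ft with unit distance = 3, as in the text:
# a difference of 100 coordinate units corresponds to 300 feet.
print(ground_distance(100, 3.0))  # 300.0
```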
The positional error indicates how close the actual location of a feature is to its mapped position. This is often unknown and may be left blank
or may read unknown. The resolution field indicates the size of each pixel (in X) in reference units. It may also be left blank or may read
unknown. Both the positional error and resolution fields are informational only (i.e., are not used analytically).
The minimum and maximum value fields express the lowest and highest values that occur in any cell, while the display minimum and display
maximum express the limits that are used for scaling (see below). Commonly, the display minimum and display maximum values are the
same as the minimum and maximum values.
The value units field indicates the unit of measure used for the attributes, while the value error field indicates either an RMS value for
quantitative data or a proportional error value for qualitative data. The value error field can also contain the name of an error map. Both fields
may be left blank or read unknown. They are used analytically by only a few modules.
A data flag is any special value. Some TerrSet modules recognize the data flags background or missing data as indicating non-data.
Using WESTLUSE we see there are 13 legend categories. Either double-click in the Categories input box or select the ellipsis button to
the right of the Categories input box to show the legend categories. This Categories dialog box contains interpretations for each of
the land use categories. Clearly it was this information that was used to construct the legend for this layer. You can now close the
Categories dialog.
Now highlight the ETDEM raster layer in File Tab of TerrSet Explorer and right-click to Show Structure. What you will initially
see are the zeros which represent the background area. However, you may use the arrow keys to move farther to the right and
down until you reach cells within Ethiopia. Notice how some of the cells contain fractional parts. Then exit from Show Structure
and view this file's Metadata.
Notice that the data type of this image is real. Real numbers are numbers that may contain fractional parts. In TerrSet, raster
images with real numbers are stored as single precision floating point numbers in standard IEEE format, requiring 4 bytes of
storage for each number. They can contain cells with data values from -1 x 10^37 to +1 x 10^37 with up to 7 significant figures. In
computer systems, such numbers may be expressed in general format (such as you saw in the Show Structure display) or in
scientific format. In the latter case, for example, the number 1624000 would be expressed as 1.624e+006 (i.e., 1.624 x 10^6).
Notice also that the minimum and maximum values range from 0 to 4267.
Now notice the number of legend categories. There is no legend stored for this image. This is logical. In these metadata files,
legend entries are simply keys to the interpretation of specific data values, and typically only apply to qualitative data. In this case,
any value represents an elevation.
Remove everything from the screen except your ETHIOPIA composition. Then use DISPLAY Launcher to display ETDEM, and
for variety, use the TerrSet Default Quantitative palette and select 16 as the number of classes. Be sure that the legend option is
selected and then click OK. Also, for variety, click the Transparency button on Composer (the one on the far right in Composer).
Notice that this is yet another form of legend.
What should be evident from this is that the manner in which TerrSet renders cell values as well as the nature of the legend depends on a
combination of the data type and the number of classes.
When the data type is either byte or integer, and the layer contains only positive values from 0-255 (the range of permissible
values for symbol codes), TerrSet will automatically interpret cell values as symbol codes. Thus, a cell value of 3 will be interpreted
as palette color 3. In addition, if the metadata contains legend captions, it will display those captions.
If the data type is integer and contains values less than 0 or greater than 255, or if the data type is real, TerrSet will automatically
assign cells to symbols using a feature known as autoscaling and it will automatically construct a legend.
Autoscaling divides the data range into as many categories as are included in the Autoscale Min to Autoscale Max range specified
in the palette (commonly 0-255, yielding 256 categories). It then assigns cell values to palette colors using this relationship. Thus,
for example, an image with values from 1000 to 3000 would assign the value 2000 to palette entry 128.
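The autoscaling relationship is a straight linear mapping from the data range onto the palette's autoscale range. A minimal sketch, reproducing the 1000-3000 example from the text (the `autoscale` function is illustrative; TerrSet's internal rounding may differ slightly):

```python
def autoscale(value, data_min, data_max, scale_min=0, scale_max=255):
    """Linearly map a cell value onto the palette's autoscale range."""
    t = (value - data_min) / (data_max - data_min)
    return scale_min + round(t * (scale_max - scale_min))

# The example from the text: an image with values 1000-3000
print(autoscale(2000, 1000, 3000))  # 128
print(autoscale(1000, 1000, 3000))  # 0
print(autoscale(3000, 1000, 3000))  # 255
```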
The nature of the scaling and the legend created under autoscaling depends upon the number of classes chosen. In the User
Preferences dialog under the File menu, there is an entry for the maximum number of displayable legend categories. By default, it
is set at 16. Thus when the number of classes is 16 or less, TerrSet will display them as separate classes and construct a legend
showing the range of values assigned to each class.
When there are more than 16 classes, the result depends on the data type. When the data contain real numbers or integers with
values less than 0 or greater than 255, it will create a continuous legend with pointers to representative values (such as you see in
the ETHIOPIA composition). For cases of positive integer values less than 256, it will use a third form of legend. To appreciate
this, use DISPLAY Launcher to examine the SIERRA4 layer using the Greyscale palette. Be sure the legend option is on but that
the autoscaling option is set to Off (Direct).
In this case, the image is not autoscaled (cell values all fall within a 0-255 range). However, the range of values for which legend
captions are required exceeds the maximum set in User Preferences, so TerrSet provides a scrollable legend. To understand this
effect further, click on the Layer Properties button in Composer. Then, alternately set the autoscaling option to Equal Intervals
and None (Direct). Notice how the legend changes.
You will also notice that when the autoscaling is set to Equal Intervals, the contrast of the image is improved. The Display Min
and Display Max sliders also become active when autoscaling is active. Set the autoscaling to Equal Intervals and then try sliding
these with the mouse. They can also be moved with the keyboard arrow keys (hold down the shift key with the arrows for smaller
increments).
Slide the Display Min slider to the far left. Then press the right arrow twice to move the Display Min to 26 (or close to it). Then
move the Display Max slider to the far right, followed by three clicks of the left arrow to move the Display Max to 137. Notice the
start and end legend categories on the display.
When the Display Min is increased from the actual minimum, all cell values lower than the Display Min are assigned the lowest
palette entry (black in this case). Similarly, all cell values higher than the Display Max are assigned the highest palette entry (white
in this case). This is a phenomenon called saturation. This can be very effective in improving the visual appearance of autoscaled
images, particularly those with very skewed distributions.
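Saturation simply clamps values outside the Display Min/Max range to the end palette entries before the linear scaling is applied. A sketch under the settings used above (26 and 137); the function is illustrative, not a TerrSet call:

```python
def scale_with_saturation(value, display_min, display_max,
                          scale_min=0, scale_max=255):
    """Values at or below Display Min take the lowest palette entry,
    values at or above Display Max take the highest (saturation);
    everything in between scales linearly."""
    if value <= display_min:
        return scale_min
    if value >= display_max:
        return scale_max
    t = (value - display_min) / (display_max - display_min)
    return scale_min + round(t * (scale_max - scale_min))

# With Display Min = 26 and Display Max = 137, as set above:
print(scale_with_saturation(10, 26, 137))   # 0   (saturated to black)
print(scale_with_saturation(200, 26, 137))  # 255 (saturated to white)
```

Because the slider narrows the range that maps onto the full palette, the remaining values spread over more colors, which is why contrast improves.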
Use DISPLAY Launcher to display SIERRA2 with the Greyscale palette and without autoscaling. Clearly this image has very poor
contrast. Create a histogram display of this image using HISTO from the Display menu (or its toolbar icon). Specify SIERRA2 as
the image name and click OK, accepting all defaults.
Notice that the distribution is very skewed (the maximum extends to 96 despite the fact that very few pixels have values greater
than 60). Given that the palette ranges from 0-255, the dark appearance of the image is not surprising. Virtually all values are less
than 60 and are therefore displayed with the darkest quarter of palette colors.
If the Layer Properties dialog is not visible, be sure that SIERRA2 has focus and click Layer Properties again. Now set autoscaling
to use Equal Intervals and click Apply. This provides a big improvement in contrast since the majority of cell values now cover
half the color range (which is spread between the minimum of 23 and the maximum of 96). Now slide the Display Max slider to a
value around 60. Notice the dramatic improvement! Click the Save button. This saves the new Display Min and Display Max
values to the metadata file for that layer. Now whenever you display this image with equal intervals autoscaling, these enhanced
settings will be used.
You will have noticed that there are two other options for autoscaling -- Quantiles and Standard Scores. Use DISPLAY Launcher
to display SIERRA2 using the Greyscale palette and no autoscaling (i.e., Direct). Notice again how little contrast there is. Now go
to Layer Properties and select the Quantiles option. Notice how the contrast sliders are now greyed out. Despite this, choose 16
classes and click Apply. As you can see, the Quantiles scheme does not need any contrast enhancement! It is designed to create the
maximum degree of contrast possible by rank ordering pixel values and assigning equal numbers to each class.
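The rank-ordering idea behind Quantiles autoscaling can be sketched as follows; the toy pixel values and the `quantile_classes` helper are illustrative, and TerrSet's handling of ties may differ:

```python
def quantile_classes(values, n_classes):
    """Rank-order the values and assign (nearly) equal counts to each
    class -- the principle behind the Quantiles autoscaling option."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    classes = [0] * len(values)
    for rank, i in enumerate(order):
        classes[i] = min(rank * n_classes // len(values), n_classes - 1)
    return classes

# Eight toy pixel values split into 4 classes of 2 pixels each:
vals = [5, 1, 9, 3, 7, 2, 8, 4]
print(quantile_classes(vals, 4))  # [2, 0, 3, 1, 2, 0, 3, 1]
```

Since every class receives the same number of pixels regardless of how the values are distributed, no further contrast stretching is needed.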
Now use Layer Properties to select the Standard Scores autoscaling option using 6 classes. Click Apply. This scheme creates class
boundaries based on standard scores. The first class includes all pixels more than 2 standard deviations below the mean. The next
shows all cases between 1 and 2 standard deviations below the mean. The next shows cases from 1 standard deviation below the
mean to the mean itself. Similarly, the next class shows cases from the mean to one standard deviation above the mean, and so on.
As with the other end, the last class shows all cases of 2 or more standard deviations. For an appropriate palette, go to the
Advanced Palette / Symbol Selection dialog. Choose a Quantitative data relationship and a Bipolar (Low-High-Low) color logic.
Select the third scheme from the top of the four offered, and then set the inflection point to be 37.12 (the mean). Then click on
OK. Bipolar palettes seem to be composed of two different color groups -- in this case, the green and orange group, signifying
values below and above the mean.
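The six-class Standard Scores scheme described above amounts to bucketing each pixel by its z-score. A minimal sketch; the SIERRA2 mean (37.12) comes from the text, while the standard deviation of 15 is an assumed value for illustration:

```python
def standard_score_class(value, mu, sigma):
    """Six classes with boundaries at mu-2s, mu-s, mu, mu+s and mu+2s,
    as in the Standard Scores scheme. Class 0 holds pixels more than
    2 standard deviations below the mean; class 5, more than 2 above."""
    z = (value - mu) / sigma
    if z < -2:
        return 0
    if z < -1:
        return 1
    if z < 0:
        return 2
    if z < 1:
        return 3
    if z < 2:
        return 4
    return 5

# Using the mean reported for SIERRA2 (37.12) and an assumed sd of 15:
print(standard_score_class(37.12, 37.12, 15.0))  # 3 (the mean falls in
                                                 # the first class above it)
```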
Remove all images and dialogs from the screen and then display the color composite named SIERRA345. Then click on Layer
Properties on Composer. Notice that three sets of sliders are provided, one for each primary color. Also notice that the Display
Min and Max values for each are set to values other than the actual minimum and maximum for each band. This was caused by
the saturation option in COMPOSITE. They have each been moved in so that 1% of the data values is saturated at each end of the
scale for each primary.
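COMPOSITE's 1% saturation amounts to choosing Display Min/Max at the 1st and 99th percentiles of each band. A sketch of that idea; the `saturation_limits` helper and the toy band are illustrative, and COMPOSITE's exact percentile method may differ:

```python
def saturation_limits(values, percent=1.0):
    """Pick Display Min/Max so that roughly `percent` of the values
    saturate at each end of the scale, per band."""
    v = sorted(values)
    k = int(len(v) * percent / 100)
    return v[k], v[-1 - k]

band = list(range(100))  # a toy band with values 0..99
print(saturation_limits(band, 1.0))  # (1, 98)
```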
Experiment with moving the sliders. You probably won't be able to improve on what COMPOSITE calculated. Note also that you
can revert to the original image characteristics by clicking either the Revert or Close buttons.
Scaling is a powerful visual tool. In this exercise, we have explored it only in the context of raster layers and palettes. However, the same logic
applies to vector layers. Note that when we use the interactive scaling tools, we do not alter the actual data values of the layers. Only their
appearance when displayed is changed. When we use these layers analytically the original values will be used (which is what we want).
We have reviewed the important display techniques in TerrSet. With Composer and DISPLAY LAUNCHER you have limitless possibilities
for visualizing your data. Note, however, that you can also use TerrSet Explorer to quickly display raster and vector layers. Unlike with
DISPLAY Launcher, you will not have control over the initial display, though you can always use Composer to alter its display characteristics.
Displaying files with TerrSet Explorer is meant as a quick look. Also, you can specify some initial parameters for the TerrSet Explorer display
in User Preferences under the File menu.
To finish this exercise, we will use TerrSet Explorer a bit further to examine the structure of vector layers.
Open TerrSet Explorer and make sure the filter used is displaying vector files (.vct). Then choose the WESTROAD layer and
right-click on Show Structure. As you can see, the output from this module is quite different for vector layers. Indeed, it will even
differ between vector layer types.
The WESTROAD file contains a vector line layer. However, what you see here is not the actual way it is stored. Like all TerrSet
data files, the true manner of storage is binary. To get a sense of this, close the Show Structure dialog and then right-click on
WESTROAD to Show Binary. Clearly this is unintelligible. The Show Structure procedure for vector layers provides an
interpreted format known as "Vector Export Format". That said, the logical correspondence between what is seen in Show
Structure and what is contained in the binary file is very close. The binary version does not contain the interpretation strings on
the left, and it encodes numbers in a standard IEEE binary format.
Remove any displays related to Show Structure or Show Binary. Then view the Metadata for WESTROAD. As you can
see, there is a great deal of similarity between the metadata file structures for raster and vector. The primary difference is related to
the data type field, which in this case reads ID type. Vector files always store coordinates as double precision real numbers.
However, the ID field can be either integer or real. When it contains a real number, it is assumed that it is a free-standing vector
layer, not associated with a database. However, when it is an integer, the value may represent an ID that is linked to a data table,
or it may be a free-standing layer. In the first case, the vector feature IDs would match a link field in a database that contains
attributes relating to the features. In the second case, the vector feature IDs would be embedded integer attributes such as
elevations or land use codes.
A vector export format file has a ".vxp" extension and is editable. The CONVERT module can import and export these files. In addition, the content of Show
Structure can be saved as a VXP file (simply click on the Save to File button). Furthermore, you can edit within the Show Structure dialog. If you edit a VXP file, be
sure to re-import it under a new name using CONVERT. This way your original file will be left intact. The Help System has more details on this process.
The integer type is not further broken down into a byte subtype as it is with raster. In fact, the integer format used for vector files is technically a long integer, with a
range of roughly +/- 2.1 billion.
You may wish to explore some other vector files with the Show Structure option to see the differences in their structure. All are
self-evident in their organization, with the exception of polygon files. To appreciate this, find the AWRAJAS2 vector layer in the
Files list. Then right-click on Show Structure. The item that may be difficult to interpret is the Number of Parts. Most polygons
will have only one part (the polygon itself). However, polygons that contain holes will have more than one part. For example, a
polygon with two holes will list three parts: the main polygon, followed by the two holes.
EXERCISE 1-9
DATABASE WORKSHOP: WORKING WITH
VECTOR LAYERS
A spatial frame is simply a layer that describes only the geographic character of features and not their attributes. In both raster and vector, the
spatial frame is bounded by the minimum and maximum X and Y coordinates, but in raster the attributes are tied to the actual pixel values.
As we saw in earlier exercises, a raster group file is essentially a simple collection of raster layers. Vector layers are very
different in concept. A single vector layer acts as a spatial frame, but its attributes can be associated with a data table of statistics for the
features depicted. By associating a data table with attribute data for each feature, a layer can be formed from each such data field. Although
simple vector layers can exist, the power of associating unique vector features with a collection of attributes in a table is the hallmark of vector
GIS.
In TerrSet we accomplish this association between a vector spatial frame and a collection of attributes with our Database Workshop facility.
Our native database format for storing attribute data is in Microsoft Access (.accdb) format. In these remaining exercises we will explore the
use of vector collections and Database Workshop.
Remove all map windows from the screen by choosing Close All Windows from the Window List menu.
Bring up DISPLAY Launcher and choose to display a vector layer. Then click on the Pick List button and find the entries named
MASSTOWNS. The first one listed is a spatial frame, while the second (with the + sign beside it) is a layer collection based on that
spatial frame. Select the layer named MASSTOWNS (the one without the plus sign). Click the legend option off and then go to the
Advanced Palette/Symbol Selection tab. A spatial frame defines features but does not carry any attribute (thematic) data. Instead,
each feature is identified by an ID number. Since these numbers do not have a quantitative relationship, click on the data
relationship button for Qualitative. Then select the Variety (black outline) color logic option and click OK. The state of
Massachusetts in the USA is divided into 351 towns. If you click on the polygons with Identify, you will be able to see the ID
numbers.
Now run DISPLAY Launcher again to display a vector layer and locate the vector collection named MASSTOWNS (look for the
+ sign beside it). Click on either the + sign or the MASSTOWNS filename, and notice that a whole set of layer names are then
listed below it. Select the layer named POP2000. Ensure that the title and legend are checked on. Then select the Advanced
Palette/Symbol Selection tab. (There is no requirement that the spatial frame and the collection based upon it have the same name;
however, using the same name is often helpful in visually associating the two.) The data in this layer express the population in the
year 2000. Since these data clearly represent quantitative variations, select the Data Relationship to be Quantitative. Then set the
color logic to Unipolar (ramp from white). We will use the default symbol file named PolyUnipolarWred. Then click OK.
Unipolar color schemes are those that appear to progress in a single sequence from less to more. You can easily see this in the
legend, but the map looks terrible! The problem here is that the population of Boston is so high compared to that of all other towns in the
state that the other towns must appear at the other end of the color scale in order to preserve the linear scaling between the colors and
the data values. To remedy this, click on Layer Properties and change the autoscaling option to Quantiles. Then click OK.
As you can see, the quantiles autoscaling option is ideally suited to the display of highly skewed distributions such as this. It does
this by rank-ordering the towns according to their data values and then assigning them in equal groups to each of a set of classes.
Notice how it automatically decided on 16 classes. However, you are not restricted to this. You can choose any number of classes
up to 16.
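The logic of quantile classification can be sketched in a few lines of Python. This is a simplified illustration of the general technique (rank the values, then split the ranking into equal-sized groups), not TerrSet's exact autoscaling code:

```python
def quantile_classes(values, k):
    """Assign each value to one of k classes (0..k-1) so that the classes
    contain, as nearly as possible, equal numbers of features."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    classes = [0] * len(values)
    for rank, i in enumerate(order):
        classes[i] = rank * k // len(values)
    return classes

# Eight towns with one extreme value (a "Boston"): with 4 quantile classes,
# each class still receives two towns despite the skewed distribution.
populations = [5000, 589141, 7000, 12000, 30000, 45000, 9000, 150000]
assigned = quantile_classes(populations, 4)
```

Equal-interval scaling would put the extreme value alone in the top class and nearly everything else in the bottom one; the quantile approach spreads the features evenly across the legend.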
MASSTOWNS is a vector layer collection. In reality, the data file with the + sign beside it is a vector link file (also called a VLX file since it has
a .vlx extension, or simply a link file). A vector link file establishes a relationship between a vector spatial frame and a database table that
contains the information for a set (collection) of attributes associated with the features in the spatial frame.
To get an understanding of this, we will open TerrSet's relational database manager, Database Workshop. Make sure the
population for 2000 (MASSTOWNS.POP2000) map window has focus (click on its banner if you are unsure; it will be
highlighted when it has focus). Then click on the Database Workshop icon on the toolbar (an icon with a grid-like pattern on the
right side of the toolbar). Ordinarily, Database Workshop will ask for the name of the database and table to display. However,
since the map window with focus is already associated with a database, it displays that one automatically. Click on the map
window to give it focus and then press the Home key to restore it to its original size. Then resize and move Database Workshop so
that it fits below the map window and shows all columns (it will only show a few rows).
Notice also the relationship between the title in your map window and the content of Database Workshop. The first part specifies
the database that it is associated with (MASSACHUSETTS.ACCDB); the second part indicates the table (Census 2000) and the
third part specifies the column. The column names of the table match the layer names included in the MASSTOWNS layer
collection in the Pick List in DISPLAY Launcher (including POP2000). In database terminology, each column is known as a field.
The rows are known as records, each of which represents a different feature (in this case, different towns in the state). Activate
Identify mode and click on several of the polygons in the map. Notice how the active record in Database Workshop (as indicated
by position of the highlighted cell) is immediately changed to that of the polygon clicked. Likewise, click on any record in the
database and its corresponding polygon will be highlighted in the map.
When a spatial frame is linked to a data table, each field becomes a different layer. Notice that Database Workshop has an icon
that is identical to that used for DISPLAY Launcher. If you hover over the icon with your mouse, the hint text will read Display
Current Field as Map Layer. As the icon would suggest, this can be used as a shortcut to display any of the numeric data fields. To
use it, we need to choose the layer to display by simply clicking the mouse into any cell within the column of the field of interest.
In this case, move over to the POPCH80_90 (population change from 1980 to 1990) field and click into any cell in that column.
Then click the Display Current Field as Map Layer icon on Database Workshop. As this is meant as a quick display utility, the
layer is displayed with default settings. However, they can easily be changed using the Layer Properties dialog.
There are four ways in which you can specify a vector layer for display that is part of a collection. The first is to select it from the Pick List as
we did to start. The second is to display it from Database Workshop. Thirdly, we can use DISPLAY Launcher and simply type in the name
using dot logic. Finally, we can display a part of a collection from TerrSet Explorer, very much the same way as we did in DISPLAY Launcher,
by opening up the .vlx file and displaying one of the numeric fields. Notice the names of the two layers currently displayed from the
MASSTOWNS collection (as visible both in Composer and on the Map Window banners). Each starts with a prefix equal to the collection
name, followed by a dot ("."), followed by the name of the data field from which it is derived. This same naming convention can be used to
specify any layer that belongs to a collection. You may now close the map windows and Database Workshop.
How is a vector layer collection established? It is done with the Establish Display Link facility in Database Workshop.
This can be launched from an icon in Database Workshop or from the Query menu. If it is not already open, open the
MASSTOWNS.VLX database file in Database Workshop. Make sure that the Census 2000 table is selected, then open the
Establish Display Link dialog.
Notice that a vector link file contains three components: the name of the vector spatial frame, the database file, and the link field.
The spatial frame is any vector file which defines a set of features using a set of unique integer identifiers. In this case, the spatial
definition of the towns in the state of Massachusetts, MASSTOWNS.
The database file can be any Microsoft Access format file. In cases where a dBASE (.dbf) file is available, Database Workshop can
be used to convert it to Access format. This vector collection uses a database file called MASSACHUSETTS.ACCDB.
The link field is the field within the database table that contains the identifiers that link (i.e., match) with the identifiers used for
features in the spatial frame. This is the most important element of the vector link file, since it serves to establish the link between
records in the database and features in the vector frame file. The Town_ID field is the link field for this vector collection. It
contains the identifiers that match the feature identifiers of the polygon features of MASSTOWNS.
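The role of the link field can be sketched in Python: records are matched to vector features purely by identifier, much like a dictionary lookup keyed on Town_ID. This is a generic illustration of the linking idea, not TerrSet's implementation, and the miniature IDs and attribute values below are made up for the example:

```python
# Hypothetical miniature versions of the spatial frame and the data table.
# Each vector feature carries only an integer ID; all attributes live in
# the table, and the link field (Town_ID) ties the two together.
features = [1, 2, 3]  # feature IDs in the spatial frame

table = [
    {"Town_ID": 1, "Town": "Boston",    "POP2000": 589141},
    {"Town_ID": 2, "Town": "Worcester", "POP2000": 172648},
    {"Town_ID": 3, "Town": "Salem",     "POP2000": 40407},
]

# Index the table on the link field, then look up attributes per feature.
# Each field of the table effectively becomes a displayable layer.
by_id = {rec["Town_ID"]: rec for rec in table}
pop2000_layer = {fid: by_id[fid]["POP2000"] for fid in features}
```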
Note that database files can contain multiple tables and can be relational. The VLX file also stores the table from which it was
created. In this case, the CENSUS2000 table is used.
Our intention here is simply to examine the structure of an existing VLX file.
Once a link has been established, we can do more than simply display fields in a database. We can also export fields and directly create standalone vector or raster layers.
Similar to displaying any field, place the cursor in the field you wish to export (in this case choose the POPCH90_00), then select
from the File/Export/Field/to Vector File menu option. The Export Vector File dialog allows you to specify a filename for the new
vector file and the field to export. Notice also that it creates a suggested name for the output file by concatenating the table name
with the field name. If the field name is correct, click OK to create the new vector file. Otherwise, choose the correct field and click
OK. The new vector file reference parameters will be taken from the vector file listed in the link file.
Notice that the toolbar in Database Workshop has an icon for rapid selection of the option to create a vector file (fifth from the
left). Similarly, there is one for exporting a raster layer (fourth from the left). Again, place the cursor in the field you want to
export (in this case, choose the AREA field) and then click the Create IDRISI Raster Image icon. You will be asked to specify a
name for the new layer. You can choose the suggested name and click OK. A dialog will then appear for specifying the reference
parameters of the output raster.
Recall that a vector link file defines the relationship between a vector file, acting as the spatial frame, and a database containing
the collection of attribute data. Because we are exporting to a raster image, we will need to define the output parameters for a different type of
spatial layer frame. After defining the output filename, you will then be prompted for the output reference parameters. By default,
the coordinate reference system and bounding rectangle will be taken from the linked vector file. What we need to define is the
number of columns and rows the image will span. In addition, we may need to adjust the bounding rectangle to match the
resolution of the cells. However, as it turns out, we already have a raster image with the exact parameters
we need, called TOWNS. Therefore, click on the Copy from Existing File option and specify TOWNS. You will then notice that it
specifies 2971 columns and 1829 rows (i.e., 100 meter resolution). Now click OK and the image will be autodisplayed.
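The arithmetic relating the bounding rectangle, the cell resolution, and the number of columns and rows can be sketched as follows. The bounding coordinates below are hypothetical, chosen only so that they are consistent with the 2971 x 1829 grid at 100 meter resolution mentioned above:

```python
def grid_shape(min_x, max_x, min_y, max_y, cell_size):
    """Columns and rows needed to span a bounding rectangle at a given
    cell resolution (reference-system units per cell)."""
    cols = round((max_x - min_x) / cell_size)
    rows = round((max_y - min_y) / cell_size)
    return cols, rows

# Hypothetical bounds: 297,100 m wide by 182,900 m tall at 100 m cells.
cols, rows = grid_shape(0, 297_100, 0, 182_900, 100)
```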
Finally, with the link established, we can also import data to existing databases. From DISPLAY Launcher, display the raster
image STATEENVREGIONS, using the palette of the same name.
The state has been divided into ten state-of-the-environment regions. This designation is used primarily for state buildout
monitoring and analysis. We will now create a new field in the CENSUS2000 table and update that table to reflect each town's
environmental region code. Using the raster image, we will import these data to a new field.
With the CENSUS2000 table selected, choose Add Field from the Edit menu. Call the field ENVREGION with a data type of
integer. You will notice that it adds this new field to the far right of the table. Then from the File menu in Database Workshop, go
to Import/Field/from Raster Image. From the Import Raster dialog, enter TOWNS as the feature definition image and
STATEENVREGIONS as the image to be processed. Select Max as the Summary type and Update existing field for Output. For
the link field name, enter TOWN_ID and for the update field name, enter ENVREGION. Finally, click OK to have the data be
imported.
The result is added to the new field in the database. The new values contain the state environmental regions. Thus, each town in
the table now has its region value assigned in this new field.
We have just learned that collections of vector layers can be created by linking a vector spatial frame to a data table of attributes. In the next
exercise we will explore how this can facilitate certain types of analysis.
Note: All of the pixels within each region have the same value, so it would seem that most of the summary options would yield the same result. However, to hedge against the
possibility that some pixels near the edge may partially intersect the background and be assumed to have a value of 0, the choice of Max is safest.
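The Max summary used by this import can be sketched as a simple zonal statistic. The version below works on flat lists of pixel values and is only a conceptual illustration of the operation, not the module's actual implementation:

```python
def zonal_max(zone_pixels, value_pixels, background=0):
    """Maximum of value_pixels within each zone of zone_pixels.
    Pixels whose zone ID equals `background` are ignored."""
    result = {}
    for zone, value in zip(zone_pixels, value_pixels):
        if zone == background:
            continue
        result[zone] = max(value, result.get(zone, value))
    return result

# Two towns (IDs 1 and 2) plus background. The edge pixel of town 1 falls
# on the background of the region image and reads 0, but Max ignores it,
# which is exactly why Max is the safest summary choice here.
towns   = [1, 1, 1, 2, 2, 0]
regions = [4, 4, 0, 7, 7, 0]
region_per_town = zonal_max(towns, regions)
```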
EXERCISE 1-10
DATABASE WORKSHOP: ANALYSIS AND SQL
As we saw in the previous exercise, a vector collection is created through an association of a database of attributes and a vector spatial frame.
As a consequence, standard database management procedures can be used to query and manipulate the database, thereby offering
counterparts to the database query and mathematical operators of raster GIS.
One of the most common means of accessing database tables is through a special language known as Structured Query Language (SQL).
TerrSet facilitates your use of SQL through two primary facilities: Filter and Calculate.
Filter
Make sure your main Working Folder is set to Using TerrSet. Then clear your screen and use DISPLAY Launcher to display the
POPCH90_00 vector layer from the MASSTOWNS vector collection. Use the Default Quantitative palette. Then open Database
Workshop, either from the GIS Analysis/Database Query menu or from its icon. Move the table to the bottom right of the screen
so that both the table and the map are in view, but with as little overlap as possible.
Now click the Filter Table icon (the one that looks like a pair of dark sunglasses) on the Database Workshop toolbar. This is the
SQL Filter dialog. The left side contains the beginnings of an SQL Select statement while the right side contains a utility to
facilitate constructing an SQL expression.
Although you can directly type an SQL expression into the boxes on the left, we recommend that you use the utility on the right
since it ensures that your syntax will be correct.
We will filter this data table to find all towns that had a negative population change from two consecutive censuses, 1980 to 1990
and from 1990 to 2000.
The asterisk after the Select keyword indicates that the output table should contain all fields. You will commonly leave this as is.
However, if you wanted the result to contain only a subset of fields, they could be listed here, separated by commas.
Note: SQL is somewhat particular about spacing; a single space must exist between all expression components. In addition, field names that contain spaces or unusual
characters must be enclosed in square brackets. Use of the SQL expression utility on the right will place the spaces correctly and will enclose all field names in
brackets.
Either type directly, or use the SQL expression tabs to create the following expression in the WHERE clause box: [popch80_90] <
0 and [popch90_00] < 0
Then click OK.
When the expression completes successfully, all features which meet the condition are shown in the Map Window in red, while those that do
not are shown in dark blue. Note also that the table only contains those records that meet the condition (i.e., the red colored polygons). As a
result, if you click on a dark blue polygon using Identify mode, the record will not be found.
Finally, to remove the filter, click the Remove Filter icon (the light glasses).
Calculate
Leaving the database on the screen, remove all maps derived from this collection. We need to add a new data field for the next
operation, and this can only be done if Database Workshop has exclusive access to the table (this is a standard security requirement
with databases). Since each map derived from a collection is actively attached to its database, these maps need to be closed in order
to modify the structure of the table.
Go to the Database Workshop Edit menu and choose the Add Field option. Call the new field POPCH80_00 and set its data type
to Real. Click OK and then scroll to the right of the database to verify that the field was created.

Now click on the Calculate Field Values icon (+=) in the Database Workshop toolbar. In the Set clause input box, select
POPCH80_00 from the dropdown list of database fields. Then enter the following expression into the Equals clause (use the SQL
expression tabs or type directly):

(([pop2000] - [pop1980]) / [pop2000]) * 100
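As a quick check of what the expression computes, note that it expresses the 1980-2000 difference relative to the 2000 population, not the 1980 one. A minimal Python sketch of the same arithmetic:

```python
def popch(pop1980, pop2000):
    """Percentage population change as computed by the Calculate
    expression above: the difference divided by the 2000 population."""
    return (pop2000 - pop1980) / pop2000 * 100.0

# A town that grew from 80 to 100 shows a change of 20 percent by this
# measure (it would be 25 percent if measured relative to 1980).
change = popch(80, 100)
```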
Note: If the data table is actively linked to one or more maps and you only use a subset of output fields, one of them should be the ID field. Otherwise, an error
will be reported.
Then click OK and indicate, when asked, that you do wish to modify the database. Scroll to the POPCH80_00 field to see the
result.
Save the database and then make sure that the table cursor (i.e., the selected cell) is in any cell within the POPCH80_00 field. Then
click the Display icon on the Database Workshop toolbar to view a map of the result. Note the interesting spatial distribution.
Advanced SQL
The Advanced SQL menu item under the Query menu in Database Workshop can be used to query across relational databases. We will use
the database MASSACHUSETTS that has three tables: town census data for the year 2000, town hospitals, and town schools. Each table has
an associated vector file. The tables HOSPITALS and SCHOOLS have vector files of the same names. The table CENSUS2000 uses a vector
file named MASSTOWNS.
Clear your screen and open a new database, MASSACHUSETTS. When it is open, notice the tabs at the bottom of the dialog. You
can view the tables, CENSUS2000, HOSPITALS, and SCHOOLS, by selecting their tabs.

With the CENSUS2000 table in view, select the Establish Display Link icon from the Database Workshop toolbar. Select the
vector link file MASSTOWNS, the vector file MASSTOWNS, and the link field name TOWN_ID. Click OK. Once the display link
has been established, place the cursor in the POPCH90_00 field, then select the Display Current Field as Map Layer icon to
display the POPCH90_00 field as a vector layer. Examine the display to visualize those towns that have either significant increase
or decrease in population from the 1990 to the 2000 census.
We will now create a new table using information contained in two tables in the database to show only those towns that have hospitals.
From the Query menu, select Advanced SQL. Type in the following expression and click OK.
Select * into [townhosp] from [census2000] , [hospitals] where [census2000].[town] = [hospitals].[town]
When this expression is run, you will notice a new table has been created in your database named TOWNHOSP. It contains the same
information found in the table CENSUS2000, but only for those towns that have hospitals.
Challenge
Create a Boolean map of those towns in Massachusetts where there has been positive population growth.
The database query operations we performed in this exercise were carried out using the attributes in a database. This was possible because we
were working with a single geography, the towns of Massachusetts, for which we had multiple attributes. We displayed the results of the
database operations by linking the database to a vector file of the towns' IDs. As we move on to Part 2 of the Tutorial, we will learn to use the
raster GIS tools provided by TerrSet to perform database query and other analyses on layers that describe different geographies.
EXERCISE 1-11
DATABASE WORKSHOP: CREATING TEXT
LAYERS / LAYER VISIBILITY
In an earlier exercise, we saw how we can create a new text layer by direct digitizing. In this exercise, we will explore how to create text layers
from the information in database files. In addition, we'll look at how we can affect the visibility of map layers according to the map scale.
Make sure your main Working Folder is set to Using TerrSet. Then clear your screen and use DISPLAY Launcher to display the
TOWN_ID field from the MASSTOWNS vector collection. Use the Advanced Palette/Symbol Selection tab to set the data
relationship to None (Uniform). Then select the lightest yellow color (the fourth one) from the Color Logic options.
Next, click on the Database Workshop icon to open the database associated with this collection. What we want to do is create a
vector layer from the TOWNS field. This is very easy! Click into the TOWNS column to select that field. Then click on the Create
IDRISI Vector File icon on the Database Workshop toolbar. All settings should be correct to immediately export the layer -- a single
symbol code of 1 will be assigned to each label. Click OK.
Notice that it not only created the layer but also added it to your composition. Also notice that it doesn't look that great -- it's a
little congested! However, there is another issue to be resolved. Zoom in on the map. Notice how the features get bigger but the
text stays the same size. We have not seen this before. In previous exercises, the text layers automatically adjusted to scale
changes.
Both problems are related to a metadata setting. Click on the TerrSet Explorer icon on the main TerrSet toolbar. Select to view vector
files and the Metadata pane, then click on the text layer you created (CENSUS2000_TOWN). In the Metadata pane, notice the
metadata item titled Units per Point. Only text layers have this property. It specifies the relationship between the ground units
of the reference system and the measurement unit for text -- points (there are 72 points in an inch, or about 28.35 points per centimeter).
Currently it reads unknown because the export procedure from Database Workshop did not know how it should be displayed.
Change the unknown to be 100. This implies that one text point equals 100 meters, given the reference system in use by this layer.
Then save the modified metadata file.
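The effect of this setting can be sketched numerically. With Units per Point set to 100 (one point = 100 ground meters), the ground extent a label covers is fixed, so its on-screen size scales with the map. This is just the arithmetic implied by the metadata, not TerrSet's rendering code:

```python
def label_ground_height(point_size, units_per_point):
    """Ground extent (in reference-system units) spanned by a text label,
    as implied by the Units per Point metadata setting."""
    return point_size * units_per_point

# With Units per Point = 100 m, a 12-point town label always spans
# 1200 m on the ground, however far you zoom in or out.
ground_m = label_ground_height(12, 100)
```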
Now go to Composer and remove CENSUS2000_TOWN. Then use Add Layer to add it again using the Default Quantitative
symbol file. At the layer level, this is equivalent to rebooting the operating system! Changes to any georeferencing parameter
(which are generally very rare) require this kind of reloading.
Initially, the text may seem to be very small. However, zoom into the map. Notice how the size of the text increases in direct
proportion to the change in scale.
Layer Visibility
Press the Home key to return the display to the original window size. Although we have adjusted the relationship of text size to
scale, it is clear that at the default map window size it is too small to properly be read. This can be controlled by setting the layer
visibility parameters.
Make sure that CENSUS2000_TOWN is highlighted in Composer and then open Layer Properties. Click on the Visibility tab. The
Visibility tab can be used as an alternative to set the various layer interaction effects previously explored. There are also other
options. One is the order in which TerrSet draws vector features. This can be particularly important with point and line layers to
establish which symbols lie on top of others when they overlap. However, our concern is with the Scale/Visibility options.
The Scale/Visibility options control whether a layer is visible or not. By default, layers are always visible. More specifically, they
are visible at scales from 1:0 (infinitely large) to 1:10,000,000,000 (very, very small). Change the "to" scale denominator from
10,000,000,000 to 500,000 (entered without the commas). Then click OK.
Press the Home key to be sure that youre viewing the map at its base resolution. Depending upon the resolution of your screen,
the text should now not be visible. If it is, zoom out until it is invisible and look at the RF indicator in the lower-left of TerrSet.
Then zoom in. As you cross the 500,000 scale denominator threshold, you should see it become visible.
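The visibility test applied at each redraw amounts to a simple range check on the scale denominator. A sketch of the logic (the parameter names here are illustrative, not TerrSet's):

```python
def layer_visible(scale_denominator, from_denom=0, to_denom=10_000_000_000):
    """A layer is drawn only while the current map scale denominator
    falls inside the layer's from/to visibility range."""
    return from_denom <= scale_denominator <= to_denom

# With the "to" denominator lowered to 500,000, the text layer is drawn
# at 1:400,000 (zoomed in) but not at 1:600,000 (zoomed out).
zoomed_in  = layer_visible(400_000, to_denom=500_000)
zoomed_out = layer_visible(600_000, to_denom=500_000)
```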
The layer visibility option allows for enormous flexibility in the development of compositions for map exploration. You can easily set different
layers to become visible or invisible as you zoom in or out to varying levels of detail.
Note: The use of the Quantitative symbol file may seem illogical here. However, since the layer was originally displayed with this symbol file, and since all text labels share the
same ID (1), it makes sense to do this.
TUTORIAL 2
IDRISI GIS ANALYSIS
INTRODUCTORY GIS EXERCISES
Cartographic Modeling
Database Query
Distance and Context Operators
Exploring the Power of Macro Modeler
Cost Distances and Least Cost Pathways
Map Algebra
Multi-Criteria Evaluation: Criteria Development and the Boolean Approach
Multi-Criteria Evaluation: Non-Boolean Standardization and Weighted Linear Combination
Multi-Criteria Evaluation: Ordered Weighted Averaging
Multi-Criteria Evaluation: Site Selection Using Boolean and Continuous Results
Multi-Criteria Evaluation: Multiple Objectives
Multi-Criteria Evaluation: Conflict Resolution of Competing Objectives
Data for the first six exercises in this section are in the \TerrSet Tutorial\Introductory GIS folder. Data for the six Multi-Criteria Evaluation
exercises may be found in the folder \TerrSet Tutorial\MCE. The TerrSet Tutorial data can be downloaded from the Clark Labs website:
www.clarklabs.org.
Data for the exercises in this section are in the \TerrSet Tutorial\Advanced GIS folder. TerrSet Tutorial data can be downloaded from the
Clark Labs website: www.clarklabs.org.
EXERCISE 2-1
CARTOGRAPHIC MODELING
A cartographic model is a graphic representation of the data and analytical procedures used in a study. Its purpose is to help the analyst
organize and structure the necessary procedures as well as identify all the data needed for the study. It also serves as a source of
documentation and reference for the analysis.
We will be using cartographic models extensively in the Introductory GIS portion of the Tutorial. Some models will be provided for you, and
others you will construct on your own. We encourage you to develop a habit of using cartographic models in your own work.
In developing a cartographic model, we find it most useful to begin with the final product and proceed backwards in a step by step manner
toward the existing data. This process guards against the tendency to let the available data shape the final product. The procedure begins with
the definition of the final product. What values will the product have? What will those values represent? We then ask what data are necessary
to produce the final product, and we then define each of these data inputs and how they might be obtained or derived. The following example
illustrates the process:
Suppose we wish to produce a final product that shows those areas with slopes greater than 20 degrees. What data are
necessary to produce such an image? To produce an image of slopes greater than 20 degrees, we will first need an image of
all slopes. Is an image of all slopes present in our database? If not, we take one step further back and ask more questions:
What data are necessary to produce a map of all slopes? An elevation image can be used to create a slope map. Does an
elevation image exist in our database? If not, what data are necessary to derive it? The process continues until we arrive at
existing data.
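The final step of this example, isolating slopes greater than 20 degrees, is a simple Boolean reclassification. A minimal sketch of the operation on a flat list of slope values (a conceptual illustration, not the RECLASS module itself):

```python
def reclass_greater_than(values, threshold):
    """Boolean reclass: 1 where value > threshold, 0 elsewhere."""
    return [1 if v > threshold else 0 for v in values]

# Four pixels of a slope image, in degrees:
slopes = [5.0, 25.0, 20.0, 32.5]
high_slopes = reclass_greater_than(slopes, 20)
```

Note that a pixel at exactly 20 degrees is excluded, since the condition is strictly "greater than."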
The existing data may already be in digital form, or may be in the form of paper maps or tables that will need to be digitized. If the necessary
data are not available, you may need to develop a way to use other data layers or combinations of data layers as substitutes.
Once you have the cartographic model worked out, you may then proceed to run the modules and develop the output data layers. The Macro
Modeler may be used to construct and run models. However, when you construct a model in the Macro Modeler, you must know which
modules you will use to produce output data layers. In effect, it requires that you build the model from the existing data to the final product.
Hence, in these exercises, we will be constructing conceptual cartographic models as diagrams. Then we will be building models in the Macro
Modeler once we know the sequence of steps we must follow. Building the models in the Macro Modeler is worthwhile because it allows you
to correct mistakes or change parameters and then re-run the entire model without running each individual module again by hand.
The cartographic model diagrams in the Tutorial will adhere, to the extent possible, to the conventions of the Macro Modeler in terms of
symbology. We will construct the cartographic models with the final output on the right side of the model, and the data and command
elements will be shown in similar colors to those of the Macro Modeler. However, to facilitate the use of the Tutorial exercises when printed
from a black and white printer, each different data file type will be represented by a different shape in the Tutorial. (The Macro Modeler uses
rectangles for all input data and differentiates file types on the basis of color.) Data files in the Tutorial are represented as shown in Figure 1.
Image files are represented by rectangles, vector files by triangles, values files by ovals, and tabular data by a page with the corner turned
down. Filenames are written inside the symbol.
[Figure 1: data file symbols -- raster image files, vector files, and tabular data, with the filename written inside each symbol]
Modules are shown as parallelograms, with module names in bold letters, as in the Macro Modeler. Modules link input and output data files
with arrows. When an operation requires the input of two files, the arrows from those two files are joined, with a single arrow pointing to the
module symbol (Figure 3).
Figure 2 shows the cartographic model constructed to execute the example described above. Starting with a raster elevation model called
ELEVATION, the module SLOPE is used to produce the raster output image called SLOPES. This image of all slope values is then used with the
module RECLASS to create the final image, HIGH SLOPES, showing those areas with slope values greater than 20 degrees.
[Figure 2: ELEVATION -> SLOPE -> SLOPES -> RECLASS -> HIGH SLOPES]
Figure 3 shows a model in which two raster images, area and population, are used with the module OVERLAY (the division option) to
produce a raster image of population density.
[Figure 3: POPULATION and AREA -> OVERLAY (division) -> POP_DENSITY]
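The division option of OVERLAY is a pixel-by-pixel operation. Sketched on flat lists of pixel values (a conceptual illustration; writing 0 where the divisor is 0 is one common convention and an assumption here, not necessarily the module's behavior):

```python
def overlay_divide(first, second):
    """Pixel-by-pixel division of two co-registered raster layers,
    writing 0.0 where the divisor is 0 (an assumed convention)."""
    return [a / b if b != 0 else 0.0 for a, b in zip(first, second)]

# Population and area (hectares) per pixel -> density in persons/hectare.
population = [1000.0, 250.0, 0.0]
area       = [100.0, 50.0, 0.0]
density = overlay_divide(population, area)
```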
For more information on the Macro Modeler, see the chapter TerrSet Modeling Tools in the TerrSet Manual. You will become quite
familiar with cartographic models and using the Macro Modeler to construct and run your models as you work through the Introductory GIS
Tutorial exercises.
EXERCISE 2-2
DATABASE QUERY
In this exercise, we will explore the most fundamental operation in GIS, database query. With database query, we are asking one of two
possible questions. The first is a query by location, "What is at this location?" The second is a query by attribute, "Where are all locations that
have this attribute?" As we move the cursor across an image, its column and row position as well as its X and Y coordinates are displayed in
the status bar at the bottom of the screen. When we click on the Identify icon and then on different locations in the image, the value of the
cell, known as the z value, is displayed next to the cursor and in the Identify box to the right of the map window. As we do this, we are
querying by location. In later exercises, we will look at more elaborate means of undertaking query by location (using the modules EXTRACT
and CROSSTAB), as well as the ability to interactively query a group of images at the same time. In this exercise, we will primarily perform
database query by attribute.
To query by attribute, we specify a condition and then ask the GIS to delineate all regions that meet that condition. If the condition involves
only a single attribute, we can use the modules RECLASS or ASSIGN to complete the query. If we have a condition that involves multiple
attributes, we must use OVERLAY. The following exercise will illustrate these procedures. If you have not already done so, read the section on
Database Query in the chapter Introduction to GIS in the TerrSet Manual prior to beginning the exercise.
First, we will set up the Working Folder that will be used in this exercise. Select TerrSet Explorer from the File menu. From the
Projects tab, set the Working Folder to the Introductory GIS folder and save the project to the default project environment.1
Use DISPLAY Launcher to display a raster layer named DRELIEF. Use the Default Quantitative palette and choose to display
both a title and legend. Autoscaling, equal intervals with 256 classes, will automatically be invoked, since DRELIEF has a real data
type. Click OK. Use Identify mode to examine the values at several locations.
This is a relief or topographic image, sometimes called a digital elevation model, for an area in Mauritania along the Senegal River. The area to
the south of the river (inside the horseshoe-shaped bend) is in Senegal and has not been digitized. As a result it has been given an arbitrary
height of ten meters. Our analysis will focus on the Mauritanian side of the river.
This area is subject to flooding each year during the rainy season. Since the area is normally very dry, local farmers practice what is known as
"recessional agriculture" by planting in the flooded areas after the waters recede. The main crop that is grown in this fashion is the cereal crop
sorghum.
1 If you are in a laboratory situation, you may wish to create a new folder for your own work and choose it as your Working Folder. Select the folder containing the
data as a Resource Folder. This will facilitate writing your results to your own folder, while still accessing the original data from the Resource Folder.
A project has been proposed to place a dam along the north bank at the northernmost part of this bend in the river. The intention is to let the
flood waters enter this area as usual, but then raise a dam to hold the waters in place for a longer period of time. This would allow more water
to soak into the soil, increasing sorghum yields. According to river gauge records, the normal flood stage for this area is nine meters.
In addition to water availability, soil type is an important consideration in recessional sorghum agriculture because some soils retain moisture
better than others and some are more fertile than others. In this area, only the clay soils are highly suitable for this type of agriculture.
Display a raster layer named DSOILS. Note that the Default Qualitative palette is automatically selected as the default for this
image. TerrSet uses a set of decision rules to guess if an image is qualitative or quantitative and sets the default palette accordingly.
In this case it has chosen well. Check that both the Title and Legend options are selected and click OK. This is the soils map for
the study area.
In determining whether to proceed with the dam project, the decision makers need to know what the likely impact of the project will be. They
want to know how many hectares of land are suitable for recessional agriculture. If most of the flooded regions turn out to be on unsuitable
soil types, then the increase in sorghum yield will be minimal, and perhaps another location should be identified. However, if much of the
flooded region contains clay soils, the project could have a major impact on sorghum production.
Our task, a rather simple one, is to provide this information. We will map out and determine the area (in hectares) of all regions that are
suitable for recessional sorghum agriculture. This is a classic database query involving a compound condition. We need to find all areas that
are:
located in the normal flood zone AND on clay soils.
To construct a cartographic model for this problem, we will begin by specifying the desired final result we want at the right side of the model.
Ultimately, we want a single number representing the area, in hectares, that is suitable for recessional sorghum agriculture. In order to get
that number, however, we must first generate an image that differentiates the suitable locations from all others, then calculate the area that is
considered suitable. We will call this image BESTSORG.
Following the conventions described in the previous exercise, our cartographic model at this point looks like Figure 1. We don't yet
know which module we will use to do the area calculation, so for now, we will leave the module symbol blank.
[Figure 1: bestsorg -> (module to be determined) -> ha suitable]
The problem description states that there are two conditions that make an area suitable for recessional sorghum agriculture: that the area
be flooded, and that it be on clay soils. Each of these conditions must be represented by an image. We'll call these images FLOOD and
BESTSOIL. BESTSORG, then, is the result of combining these two images with some operation that retains only those areas that meet
both conditions. If we add these elements to the cartographic model, we get Figure 2.
[Figure 2: flood and bestsoil -> (module) -> bestsorg -> (module) -> ha suitable]
Because BESTSORG is the result of a multiple attribute query, it defines those locations that meet more than one condition. FLOOD and
BESTSOIL are the results of single attribute queries because they define those locations that meet only one condition. The most common way
to approach such problems is to produce Boolean2 images in the single attribute queries. The multiple attribute query can then be
accomplished using Boolean Algebra.
Boolean images (also known as binary or logical images) contain only values of 0 or 1. In a Boolean image, a value of 0 indicates a pixel that
does not meet the desired condition while a value of 1 indicates a pixel that does. By using the values 0 and 1, logical operations may be
performed between multiple images quite easily. For example, in this exercise we will perform a logical AND operation such that the image
BESTSORG will contain the value 1 only for those pixels that meet both the flood AND soil type conditions specified. The image FLOOD
must contain pixels with the value 1 only in those locations that will be flooded and the value 0 everywhere else. The image BESTSOIL must
contain pixels with the value 1 only for those areas that are on clay soils and the value 0 everywhere else. Given these two images, the logical
AND condition may be calculated with a simple multiplication of the two images. When two images are used as variables in a multiplication
operation, a pixel in the first image (e.g., FLOOD) is multiplied by the pixel in the same location in the second image (e.g., BESTSOIL). The
product of this operation (e.g., BESTSORG) has pixels with the value 1 only in the locations that have 1's in both the input images, as shown
in Figure 3 below.
[Figure 3: pixel values of FLOOD multiplied by pixel values of BESTSOIL yield BESTSORG; the output is 1 only where both inputs are 1]
This logic could clearly be extended to any number of conditions, provided each condition is represented by a Boolean image.
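The multiplication rule can be sketched outside TerrSet as well. The following is a minimal NumPy illustration with invented 3x3 rasters standing in for FLOOD and BESTSOIL; it is not TerrSet's actual OVERLAY implementation:

```python
import numpy as np

# Hypothetical Boolean rasters standing in for FLOOD and BESTSOIL.
flood    = np.array([[1, 1, 0],
                     [1, 0, 0],
                     [0, 1, 1]])
bestsoil = np.array([[1, 0, 0],
                     [1, 1, 0],
                     [0, 0, 1]])

# Pixel-by-pixel multiplication implements the logical AND:
# the product is 1 only where both inputs are 1.
bestsorg = flood * bestsoil
```

Because each extra Boolean condition is just another factor in the product, chaining further multiplications extends the AND to any number of criteria.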
2 Although the word binary is commonly used to describe an image of this nature (only 1's and 0's), we will use the term Boolean to avoid confusion with the use of the
term binary to refer to a form of data file storage. The name Boolean is derived from the name of George Boole (1815-1864), who was one of the founding fathers of
mathematical logic. In addition, the name is appropriate because the operations we will perform on these images are known as Boolean Algebra.
The Boolean image FLOOD will show areas that would be inundated by a normal 9 meter flood event (i.e., those areas with elevations less
than 9 meters). Therefore, to produce FLOOD, we will need the elevation model DRELIEF that we displayed earlier. To create FLOOD from
DRELIEF, we will change all elevations less than 9 meters to the value 1, and all elevations equal to or greater than 9 meters to the value 0.
Similarly, to create the Boolean image BESTSOIL, we will start with an image of all soil types (DSOILS) and then we will isolate only the clay
soils. To do this, we will change the values of the image DSOILS such that only the clay soils have the value 1 and everything else has the value
0. Adding these steps to the cartographic model produces Figure 4.
flood
drelief
bestsorg
ha
suitable
bestsoil
dsoils
Figure 4
We have now arrived at a place in the cartographic model where we have all the data required. The remaining task is to determine exactly
which TerrSet modules should be used to perform the desired operations (currently indicated with blank module symbols in Figure 4). We
will add the module names as we work through the problem with TerrSet. When we have completed the entire exercise, we will then explore
how Macro Modeler and Image Calculator might be used to do pieces of the same analysis.
First we will create the image FLOOD by isolating all areas in the image DRELIEF with elevations less than 9 meters. To do this we will use
the RECLASS module.
Now let's examine the characteristics of the file DRELIEF. (You may need to move the DSOILS display to the side to make
DRELIEF visible.) Click on the DRELIEF display to give it focus. Once the DRELIEF window has focus, click on the Layer
Properties button on Composer. Select the Properties tab.
What are the minimum and maximum elevation values in the image?
Before we perform any analysis, let's review the settings in User Preferences. Open User Preferences under the File menu. On the
System Settings tab, enable the option to automatically display the output of analytical modules if it is not already enabled. Click
on the Display Settings tab and choose the QUAL palette for qualitative display and the QUANT palette for quantitative display.
Also select the automatically show title and automatically show legend options. Click OK to save these settings.
Choose RECLASS from the IDRISI GIS Analysis/Database Query menu. We will reclassify an image file with the user-defined
reclass option. Specify DRELIEF as the input file and enter FLOOD as the output file. Then enter the following values in the first
row of the reclassification parameters area of the dialog box:
Assign a new value of: 1
To values from: 0
To just less than: 9
Continue by clicking into the second row of the reclass parameters table and enter the following:
Assign a new value of: 0
To values from: 9
To just less than: >
Click on the Save as .rcl file button and give the name FLOOD. An .rcl file is a simple ASCII file that lists the reclassification limits
and new values. We don't need the file right now, but we will use it with the Macro Modeler at the end of the exercise. Press OK
and choose to create an integer output.
Note that we entered ">" as the highest value to be assigned the new value 0. The > symbol refers to the largest possible value in
the image. Likewise, the < can be used to refer to the smallest actual value in an image.
When RECLASS has finished, look at the new image named FLOOD (which will automatically display if you followed the
instructions above). This is a Boolean image, as previously described, where the value 1 represents areas meeting the specified
condition and the value 0 represents areas that do not meet the condition.
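The thresholding that RECLASS performs here amounts to a simple comparison against the 9-meter flood stage. A sketch with an invented elevation grid standing in for DRELIEF:

```python
import numpy as np

# Toy elevation grid in meters; the values are invented for illustration.
drelief = np.array([[ 4.2,  8.9, 12.0],
                    [ 7.5,  9.0, 15.3],
                    [10.0,  3.1,  8.0]])

# RECLASS rule: 1 for elevations below 9 m, 0 for 9 m and above.
flood = np.where(drelief < 9, 1, 0)
```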
Now let's create a Boolean image (BESTSOIL) of all areas with clay soils. The image file DSOILS is the soils map for this region. If
you have closed the DSOILS display, redisplay it.
What is the numeric value of the clay soil class? (Use the Identify tool from the tool bar.)
We could use RECLASS here to isolate this class into a Boolean image. If we did (although we won't), our sequence in specifying the
reclassification would be as follows:
Assign a new value of: 0
To values from: 1
To just less than: 2
Assign a new value of: 1
To values from: 2
To just less than: 2
Assign a new value of: 0
To values from: 3
To just less than: >
Notice how the range of values that are not of interest to us have to be explicitly set to 0 while the range of interest (soil type 2) is set to 1. In
RECLASS, any values that are not covered by a specified range will retain their original values in the output image.3 Notice also that when a
single value rather than a range is being reclassified, the original value may be entered twice, as both the "from" and "to" values.
RECLASS is the most general means of reclassifying or assigning new values to the data values in an image. In some cases, RECLASS is rather
cumbersome and we can use a much faster procedure, called ASSIGN, to accomplish the same result. ASSIGN assigns new values to a set of
integer data values. With ASSIGN, we can choose to assign a new value to each original value or we may choose to assign only two values, 0
and 1, to form a Boolean image.
Unlike RECLASS, the input image for ASSIGN must be either integer or byte; it will not accept original values that are real. Also unlike
RECLASS, ASSIGN automatically assigns a value of zero to all data values not specifically mentioned in the reassignment. This can be
particularly useful when we wish to create a Boolean image. Finally, ASSIGN differs from RECLASS in that only individual integer values may
be specified, not ranges of values.
To work with ASSIGN, we first need to create an attribute values file that lists the new assignments for the existing data values. The simplest
form of an attribute values file in TerrSet is an ASCII text file with two columns of data (separated by one or more spaces).4 The left column
lists existing image "features" (using feature identifier numbers in integer format). The right column lists the values to be assigned to those
features.
In our case, the features are the soil types to which we will assign new values. We will assign the new value 1 to the original value 2 (clay soils)
and will assign the new value 0 to all other original values. To create the values file for use with ASSIGN we use a module named Edit.
Use Edit from the IDRISI GIS Analysis/Database Query menu to create a values file named CLAYSOIL. (Edit also has its own
icon) We want all areas in the image DSOILS with the value 2 to be assigned the new value 1 and all other areas to be assigned a 0.
Our values file might look like this:
1 0
2 1
3 0
4 0
5 0
As previously mentioned, however, any feature that is not mentioned in the values file is automatically assigned a new value of
zero. Thus our values file only really needs to have a single line as follows:
2 1
Type this into the Edit screen, with a single space between the two numbers. From the File menu on the Edit dialog box (not the
main menu) choose Save As and save the file as an attribute values file with the name CLAYSOIL. (When you choose attribute
values file from the list of file types, the proper filename extension, .avl, is automatically added to the filename you specify.) Click
Save and when prompted, choose integer as the data type.
3 The output of RECLASS is always integer, however, so real values will be rounded to the nearest integer in the output image. This does not affect our analysis here since
we are reclassifying to the integer values 0 and 1 anyway.
4 More complex, multi-field attribute values files are accessible through Database Workshop.
We have now defined the value assignments to be made. The next step is to assign these to the raster image.
Open the module ASSIGN from the GIS Analysis/Database Query menu. Since the soils map defines the features to which we will
assign new values, enter DSOILS as the feature definition image. Enter CLAYSOIL as the attribute values file. Then for the output
image file, specify BESTSOIL. Finally, enter a title for the output image and press OK.
When ASSIGN has finished, BESTSOIL will automatically display. The data values now represent clay soils with the value 1 and
all other areas with the value 0.
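The behavior of ASSIGN, looking up a new value for each feature and defaulting every unlisted feature to zero, can be sketched as follows. The soil-class values are invented stand-ins for DSOILS:

```python
import numpy as np

# Toy soil map standing in for DSOILS (classes 1-5).
dsoils = np.array([[1, 2, 2],
                   [3, 2, 4],
                   [5, 1, 2]])

# The CLAYSOIL values file maps feature 2 -> 1; every feature not
# listed is implicitly assigned 0, as ASSIGN does.
values_file = {2: 1}
bestsoil = np.vectorize(lambda v: values_file.get(v, 0))(dsoils)
```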
We now have Boolean images representing the two criteria for our suitability analysis, one created with RECLASS and the other with
ASSIGN.
While ASSIGN and RECLASS may often be used for the same purposes, they are not exactly equivalent, and usually one will require fewer
steps than the other for a particular procedure. As you become familiar with the operation of each, the choice between the two modules in
each particular situation will become more obvious.
At this point we have performed single attribute queries to produce two Boolean images (FLOOD and BESTSOIL) that meet the individual
conditions we specified. Now we need to perform a multiple attribute query to find the locations that fulfill both conditions and are therefore
suitable for recessional sorghum agriculture.
As described earlier in this exercise, a multiplication operation between two Boolean images may be used to produce the logical AND result.
In TerrSet, this is accomplished with the module OVERLAY. OVERLAY produces new images as a result of some mathematical operation
between two existing images. Most of these are simple arithmetic operations. For example, we can use OVERLAY to subtract one image from
another to examine their difference.
As illustrated above in Figure 3, if we use OVERLAY to multiply FLOOD and BESTSOIL, the only case where we will get the value 1 in the
output image BESTSORG is when the corresponding pixels in both input maps contain the value 1.
OVERLAY can be used to perform a variety of Boolean operations. For example, the cover option in OVERLAY produces a logical OR result.
The output image from a cover operation has the value 1 where either or both of the input images have the value 1.
Construct a table similar to that shown in Figure 3 to illustrate the OR operation and then suggest an OVERLAY operation other
than cover that could be used to produce the same result.
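For comparison after working the question out by hand, the cover (logical OR) behavior can be sketched the same way as the multiply example; a and b below are hypothetical Boolean rasters:

```python
import numpy as np

# Hypothetical Boolean input images.
a = np.array([[1, 0], [0, 1]])
b = np.array([[1, 1], [0, 0]])

# Cover behaves like a logical OR on Boolean images:
# the output is 1 where either (or both) inputs are 1.
cover = np.where((a == 1) | (b == 1), 1, 0)
```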
Run OVERLAY from the GIS Analysis/Database Query menu to multiply FLOOD and BESTSOIL to create a new image named
BESTSORG. Click Output Documentation to give the image a new title, and specify "Boolean" for the value units. Examine the
result. (Change the palette to QUAL if it is difficult to see.) BESTSORG shows all locations that are within the normal flood zone
AND have clay soils.
Our next step is to calculate the area, in hectares, of these suitable regions in BESTSORG. This can be accomplished with the
module AREA. Run AREA from the GIS Analysis/Database Query menu, enter BESTSORG as the input image, select the tabular
output format, and calculate the area in hectares.
How many hectares within the flood zone are on clay soils? What is the meaning of the other reported area figure?
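The arithmetic behind AREA's hectare figures is a pixel count per category multiplied by the cell area. A sketch, assuming a hypothetical 30 m cell size (the actual resolution of DRELIEF may differ):

```python
import numpy as np

# Toy Boolean suitability image; assume a hypothetical 30 m cell size.
bestsorg = np.array([[1, 1, 0],
                     [0, 1, 0],
                     [0, 0, 1]])
cell_size_m = 30.0

# AREA-style tabulation: pixel count per category times cell area,
# converted to hectares (1 ha = 10,000 m^2).
cell_area_ha = cell_size_m * cell_size_m / 10_000.0
values, counts = np.unique(bestsorg, return_counts=True)
area_ha = {int(v): c * cell_area_ha for v, c in zip(values, counts)}
```

Note that the table reports an area for every value in the image, which is why the background (value 0) also appears in AREA's output.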
Adding the module names to the cartographic model of Figure 4 produces the completed cartographic model for the above analysis, shown in
Figure 5.
The result we produced involved performing single attribute queries for each of the conditions specified in the suitability definition. We then
used the products of those single attribute queries to perform a multiple attribute query that identified all the locations that met both
conditions. While quite simple analytically, this type of analysis is one of the most commonly performed with GIS. The ability of GIS to
perform database query based not only on attributes but also on the location of those attributes distinguishes it from all other types of
database management software.
[Figure 5: drelief -> RECLASS -> flood; claysoil (from EDIT) and dsoils -> ASSIGN -> bestsoil; flood and bestsoil -> OVERLAY -> bestsorg -> AREA -> ha suitable]
The area figure we just calculated is the total number of hectares for all regions that meet our conditions. However, there are several distinct
regions that are physically separate from each other. What if we wanted to calculate the number of hectares of each of these potential sorghum
plots separately?
When you look at a raster image display, you are able to interpret contiguous pixels having the same identifier as a single larger feature, such
as a soil polygon. For example, in the image BESTSORG, you can distinguish three separate suitable plots. However, in raster systems such as
TerrSet, the only defined "feature" is the individual pixel. Therefore since each separate region in BESTSORG has the same attribute (1),
TerrSet interprets them to be the same feature. This makes it impossible to calculate a separate area figure for each plot. The only way to
calculate the areas of these spatially distinct regions is to first assign each region a unique identifier. This can be achieved with the GROUP
module.
GROUP is designed to find and label spatially contiguous groups of like-value pixels. It assigns new values to groups of contiguous pixels
beginning in the upper-left corner of the image and proceeding left to right, top to bottom, with the first group being assigned value zero. The
value of a pixel is compared to that of its contiguous neighbors. If it has the same value, it is assigned the same group identifier. If it has a
different value, it is assigned a new group identifier. Because it uses information about neighboring pixels in determining the new value for a
pixel, GROUP is classified as a Context Operator. More context operators will be introduced in later exercises in this group.
Spatial contiguity may be defined in two ways. In the first case, pixels are considered part of a group if they join along one or more pixel edge
(left, right, top or bottom). In the second case, pixels are considered part of a group if they join along edges or at corners. The latter case is
indicated in TerrSet as including diagonals. The option you use depends upon your application.
The figure below illustrates the result of running GROUP on a simple Boolean image. Note the difference caused by including diagonals. The
example without diagonal links produces eight new groups (identifiers 0-7), while the same original image with diagonal links produces only
three distinct groups.
[Figure 6: a simple Boolean image after GROUP, shown with no diagonals (eight groups, identifiers 0-7) and including diagonals (three groups)]
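Connected-component labeling of this kind is available in general-purpose libraries as well. A sketch using SciPy's ndimage.label, which, unlike GROUP, labels only the nonzero pixels and starts its identifiers at 1; the toy image is invented:

```python
import numpy as np
from scipy import ndimage

# Toy Boolean image; the two patches of 1s touch only at a corner.
img = np.array([[1, 0, 0],
                [0, 1, 1],
                [0, 0, 0]])

# Default structure is 4-connectivity (shared edges only);
# a full 3x3 structure adds the diagonals (8-connectivity).
labels4, n4 = ndimage.label(img)                             # no diagonals
labels8, n8 = ndimage.label(img, structure=np.ones((3, 3)))  # diagonals
```

With edges only, the corner-touching patches remain separate groups; including diagonals merges them into one, mirroring the difference shown in the figure.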
Run GROUP from the IDRISI GIS Analysis/Context Operators menu on BESTSORG to produce an output image called PLOTS.
Include diagonals and uncheck the Ignore background option. Click OK. When GROUP has finished, examine PLOTS. Use
Identify mode to examine the data values for the individual regions. Notice how each contiguous group of like-value pixels now
has a unique identifier. (Some of the groups in this image are small. It may be helpful to use the category "flash" feature to see
these. To do so, place the cursor on the legend color box of the category of interest. Press and hold down the left mouse button.
The display will change to show the selected category in red and everything else in black. Release the mouse button to return the
display to its normal state.)
Three of these groups are our potential sorghum plots, but the others are groups of background pixels. Before we calculate the number of
hectares in each suitable plot, we must determine which group identifiers represent the suitable sorghum plots so we can find the correct
identifiers and area figures in the area table. Alternatively, we can mask out the background groups by assigning them all the same identifier
of 0, and leaving just the groups of interest with their unique non-zero identifiers. The area table will then be much easier to read. We will
follow the latter method.
In this case, we want to create an image in which the suitable sorghum plots retain their unique group identifiers and all the background
groups have the value 0. There are several ways to achieve this. We could use Edit and ASSIGN or we could use RECLASS. The easiest
method is to use an OVERLAY operation.
Which OVERLAY option can you use to yield the desired image? Using which images?
Perform the above operation to produce the image PLOTS2 and examine the result. Change the palette to QUAL. As in PLOTS,
the suitable plots are distinguished from the background, each with its own identifier.
Now we are ready to run AREA (found in the GIS Analysis/Database Query menu). Use PLOTS2 as the input image and ask for
tabular output in hectares.
The figure below shows the additional step we added to our original cartographic model. Note that the image file BESTSORG was used with
GROUP to create the output image PLOTS, then these two images were used in an OVERLAY operation to mask out those groups that were
unsuitable. The model could also be drawn with duplicate graphics for the BESTSORG image.
[Figure: bestsorg -> GROUP -> plots; bestsorg and plots -> OVERLAY -> plots2 -> AREA -> ha suitable per plot]
Finally, we may wish to know more about the individual plots than just their areas. We know all of these areas are on clay soils and have
elevations lower than 9 meters, but we may be interested in knowing the minimum, maximum or average elevation of each plot. The lower
the elevation, the longer the area should be inundated. This type of question is one of database query by location. In contrast with the
pixel-by-pixel query performed at the beginning of this exercise, the locations here are defined as areas, the three suitable plots.
The module EXTRACT is used to extract summary statistics for image features (as identified by the values in the feature definition image).
Choose EXTRACT from the IDRISI GIS Analysis/Database Query menu. Enter PLOTS2 as the feature definition image and
DRELIEF as the image to be processed. Choose to calculate all listed summary types. The results will automatically be written to a
tabular output.
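EXTRACT's per-feature summaries amount to zonal statistics: for each feature identifier, collect the co-located values from the image being processed and summarize them. A sketch with invented plot identifiers and elevations:

```python
import numpy as np

# Toy feature-definition image (plot ids; 0 is background) and
# a matching toy elevation image; both are invented.
plots2  = np.array([[1, 1, 0],
                    [2, 2, 0],
                    [0, 3, 3]])
drelief = np.array([[4.0, 6.0, 9.5],
                    [3.0, 5.0, 9.9],
                    [9.2, 7.0, 8.0]])

# EXTRACT-style summaries: min, max and mean elevation per plot id.
stats = {}
for plot_id in np.unique(plots2[plots2 > 0]):
    z = drielf if False else drelief[plots2 == plot_id]
    stats[int(plot_id)] = (z.min(), z.max(), z.mean())
```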
In this exercise, we have looked at the most basic of GIS operations, database query. We have learned that we can query the database in two
ways, query by location and query by attribute. We performed query by location with the Identify mode in the display at the beginning of the
exercise and by using EXTRACT at the end of the exercise. In the rest of this exercise we have concentrated on query by attribute. The tools
we used for this were RECLASS, ASSIGN and OVERLAY. RECLASS and ASSIGN are similar and can be used to isolate categories of interest
located on any one map. OVERLAY allows us to combine queries from pairs of images and thereby produce compound queries.
One particularly important concept we learned in this process was the expression of simple queries as Boolean images (images containing
only ones and zeros). Expressing the results of single attribute queries as Boolean images allowed us to use Boolean or logical operations with
the arithmetic operations of OVERLAY to perform multiple attribute queries. For example, we learned that the OVERLAY multiply
operation produces a logical AND when Boolean images are used, while the OVERLAY cover operation produces a logical OR.
We also saw how a Boolean image may be used in an OVERLAY operation to retain certain values and mask out the remaining values by
assigning them the value zero. In such cases, the Boolean image may be referred to as a Boolean mask or simply as a mask image.
Choose Macro Modeler either from the Modeling menu or from its toolbar icon (third from the right) to open the modeling environment.
We will proceed to build the model working from left to right from Figure 5 above. Begin by clicking on the Raster Layer icon
(seventh from the left) in the Macro Modeler toolbar and choosing the file DRELIEF. Before getting too far, go to the File menu
on the Macro Modeler and choose Save As. Give the model the name Exer2-2.
Note that for some modules, there are differences between the way the main dialog works and the way the module works in the Macro Modeler. RECLASS is such a module. On the
main dialog, you entered the reclassification sequence of values to be used. In the Macro Modeler, these values must be entered in
the form of a RECLASS (.rcl) file.
In the module parameters dialog boxes, the label for each parameter is shown in the left column and the choice for that parameter
is shown in the right column. When more than one choice is available for a parameter, you can see the list of choices by clicking
on the right column, as shown in Figure 8. Click on the file type with the left mouse button to see a list of possible choices for this
parameter. Choose Raster Layer. Click on Classification Type and choose File Mode. Then click on .rcl filename and choose
FLOOD (we saved this earlier from the RECLASS dialog box; these .rcl files may also be created with Edit or by clicking the New
button on the .rcl file Pick List). Finally, choose Byte/Integer as the output data type and click OK. Essentially, we have filled out
all the information needed in the RECLASS dialog box and have stored it in the model. Now connect the input file, DRELIEF, to
RECLASS by clicking the connect icon on the toolbar. This turns the cursor into a pointing finger. Click DRELIEF and hold down
the left mouse button while dragging the cursor to the RECLASS symbol. When you release the button, you will see the link
formed and hear a snapping sound (if your computer has sound capabilities).
This is the first step of the model. We can run it to check the output. Save the model by choosing Save from the Macro Modeler
File menu or by clicking the Save icon (third from the left). Then run the model by choosing Run from the menu bar or with the
Run icon (fourth from the right). You will be prompted with a message that the output layer, FLOOD2, will be overwritten if it
exists. Click Yes to continue. The image FLOOD2, which should be identical to the image FLOOD created earlier, will
automatically display.
Continue building the model until it looks like that in Figure 9. Save and run the model after adding each step to check your
intermediate results. Each time you place a module, right-click on it and fill out the parameters exactly as you did when working
with the main dialogs. Note that the module Edit cannot be used in the Macro Modeler, but you have already created the values
file CLAYSOIL and may use it with ASSIGN. Also note that the AREA module does not provide tabular output in the Macro
Modeler. Stop with the production of BESTSORG and run AREA from its main dialog rather than from the Macro Modeler.
One of the most useful aspects of the Macro Modeler is that once a model is saved, it can be altered and run instantly. It also keeps an exact
record of the operations used and is therefore very helpful in discovering mistakes in an analysis. We will continue to use the Macro Modeler
as we explore the core set of GIS modules in this section of the Tutorial. For more information on the Macro Modeler see the chapter TerrSet
Modeling Tools in the TerrSet Manual, as well as the on-line Help System entry for Macro Modeler.
To see how the creation of BESTSORG in this exercise could be done with Image Calculator, open it from the IDRISI GIS
Analysis/Database Query menu or choose its icon. Choose the Logical Expression operation type since we are finding the logical
AND of two criteria. Type in the output image name BESTCALC. (We will give our result here a different name so that we can
compare it to BESTSORG.) Now enter the expression by clicking on the components such that the expression is exactly as shown
below. Note that you may type in filenames or press the insert image button to choose a filename from the Pick List. If you do the
latter, brackets will automatically enclose the filenames.
BESTCALC = ([DRELIEF]<=9)AND([DSOILS]=2)
Press Process Expression and when the calculation is finished, compare the result to that obtained earlier, which we called
BESTSORG.
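The single-expression approach of Image Calculator corresponds to combining both comparisons in one step. A NumPy sketch of the same logic, with invented stand-ins for DRELIEF and DSOILS:

```python
import numpy as np

# Toy stand-ins for the elevation and soil images.
drelief = np.array([[4.0, 12.0], [8.5, 7.0]])
dsoils  = np.array([[2, 2], [1, 2]])

# The Image Calculator expression ([DRELIEF]<=9)AND([DSOILS]=2),
# written as a single array expression.
bestcalc = ((drelief <= 9) & (dsoils == 2)).astype(int)
```

As with Image Calculator itself, this one-liner produces no intermediate images, which is convenient but makes mistakes harder to trace.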
Note that we could not finish our analysis solely with Image Calculator because it does not include the GROUP, AREA or EXTRACT
functions. Also note that in developing our model, it is much easier to identify errors in the process if we perform each individual step with
the relevant module and examine each result. While Image Calculator may save time, it does not supply us with the intermediate images to
check our logical progress along the way. Because of this, we will often choose to use individual modules or the Macro Modeler rather than
Image Calculator in the remainder of the Tutorial.
At this point you may delete all of the files you created in this exercise. The Delete utility is found in the TerrSet Explorer under the File
menu. Do not delete the original data files DSOILS and DRELIEF.
EXERCISE 2-3
DISTANCE AND CONTEXT OPERATORS
In this exercise,1 we will introduce two other groups of analytical operations, distance and context operators. Distance operators calculate
distances from some feature or set of features. In a raster environment, they produce a resultant image where every pixel is assigned a value
representing its distance from the nearest feature. There are many different concepts of distance that may be modeled. Euclidean, or straight-line, distance is what we are most familiar with, and it is the type of distance analysis we will use in this exercise. In TerrSet, Euclidean
distances are calculated with the module DISTANCE. A related module, BUFFER, creates buffer zones around features using the Euclidean
distance concept. In Exercise 2-5 another type of distance, known as cost distance, will be explored.
Context operators determine the new value of a pixel based on the values of the surrounding pixels. The GROUP module, which was used in
Exercise 2-2 to identify contiguous groups of pixels, is a context operator since the group identifier assigned to any pixel depends upon the
values of the surrounding pixels. In this exercise, we will become familiar with another context operator, SURFACE, which may be used to
calculate slopes from an elevation image. The slope value assigned to each pixel depends upon the elevation of that pixel and its four nearest
neighbors.
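As a sketch of how a context operator of this kind works (a common central-difference approach using the four nearest neighbors, not necessarily SURFACE's exact algorithm), slope can be estimated from a cell's neighbors in a digital elevation model:

```python
import math

# Tiny hypothetical elevation grid (meters); the cell size is also in
# meters, so no conversion factor is needed (value units match
# reference units, as with RELIEF).
dem = [[100, 102, 104],
       [101, 103, 105],
       [102, 104, 106]]
cell_size = 10.0

def slope_degrees(dem, row, col, cell_size):
    """Estimate slope at an interior cell from its four nearest
    neighbors using central differences."""
    dz_dx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cell_size)
    dz_dy = (dem[row + 1][col] - dem[row - 1][col]) / (2 * cell_size)
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))

print(round(slope_degrees(dem, 1, 1, cell_size), 1))  # 12.6
```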
We will use these distance and context operators and the tools we explored in earlier exercises to undertake one of the most common of GIS
analysis tasks, suitability mapping, a type of multi-criteria evaluation. A suitability map shows the degree of suitability for a particular purpose
at any location. It is most often produced from multiple images, since most suitability problems incorporate multiple criteria. In this exercise,
Boolean images will be combined using the OVERLAY module to yield a final map that shows the sites that meet all the specified criteria.
This type of Boolean multi-criteria evaluation is often referred to as constraint mapping, since each criterion is defined by a Boolean image
indicating areas that are either suitable for use (value 1) or constrained from use (value 0). The map made in Exercise 2-2 of sites suitable for
sorghum agriculture is a simple example of constraint mapping. In later exercises, we will explore tools for non-Boolean approaches to multi-criteria suitability analysis.
Our problem in this exercise is to find all areas suitable for the location of a light manufacturing plant in a small region in central
Massachusetts near Clark University. The manufacturing company is primarily concerned that the site be on fairly level ground (with slopes
less than 2.5 degrees) with at least 10 hectares in area. The local town officials are concerned that the town's reservoirs be protected and have
thus specified that no facility can be within 250 meters of any reservoir. Additionally, we need to consider that not all land is available for
development. In fact, in this area, only forested land is available. To summarize, sites suitable for development must be:
i) on fairly level ground, with slopes less than 2.5 degrees;
ii) outside a 250-meter buffer around the reservoirs;
iii) on currently forested land; and
iv) at least 10 hectares in area.
At this point in the exercises, you should be able to display images and operate modules such as RECLASS and OVERLAY without step by step instructions. If you
are unsure of how to fill in a dialog box, use the defaults. It is always a good idea to enter descriptive titles for output files.
To become familiar with the study area, run ORTHO from the Display menu with RELIEF as the surface image and LANDUSE as
the drape image. Accept the default output filename ORTHOTMP and all the view defaults. Indicate that you wish to use a user-defined palette called LANDUSE and a legend, and choose the output resolution that is one step smaller than your Windows
display (e.g., if you are displaying at 1024 x 768, choose the 800 x 600 output).
As you can see, the study area is dominated by deciduous forest, and is characterized by rather hilly topography.
We will go about solving the suitability problem in four steps, one for each suitability criterion.
Before reading ahead, fill in the cartographic model below to depict the steps described above.
[Cartographic model to complete: relief -> (module?) -> slopes -> (module?) -> slopebool]
Display RELIEF with the TerrSet Default Quantitative palette.2 Explore the values with the Identify tool.
For this exercise, make sure that your Display Preferences (under User Preferences in the File menu) are set to the default values by pressing the Revert to Defaults
button.
On a topographic map, the more contour lines you cross in a given distance (i.e., the more closely spaced they are), the steeper the slope.
Similarly, with a raster display of a continuous digital elevation model, the more palette colors you encounter across a given distance, the
more rapidly the elevation is changing, and therefore the higher the slope gradient.
Creating a slope map by hand is very tedious. Essentially, it requires that the spacing of contours be evaluated over the whole map. As is often
the case, tasks that are tedious for humans to do are simple for computers (the opposite also tends to be true: tasks that seem intuitive and
simple to us are usually difficult for computers). In the case of raster digital elevation models (such as the RELIEF image), the slope at any cell
may be determined by comparing its height to that of each of its neighbors. In TerrSet, this is done with the module SURFACE. Similarly,
SURFACE may be used to determine the direction that a slope is facing (known as the aspect) and the manner in which sunlight would
illuminate the surface at that point given a particular sun position (known as analytical hillshading).
Launch Macro Modeler from its toolbar icon or from the Modeling menu. Place the raster file RELIEF and the module
SURFACE. Link RELIEF to SURFACE. Right-click on the output image and give it the filename SLOPES. Then right-click on the
SURFACE module symbol to access the module parameters. The dialog shows RELIEF as the input file and SLOPES as the output
file. The default surface operation, slope, is correct, but we need to change the slope measurement to be degrees. The conversion
factor is necessary when the reference units and value units are not the same. In the case of RELIEF, both are in meters, so the
conversion factor may be left blank. Choose Save As from the Macro Modeler File menu and give the new model the name Exer2-3. Run the model (click yes to all when prompted about overwriting files) and examine the resulting image.
The image named SLOPES can now be reclassified to produce a Boolean image that meets our first criterion: areas with slopes less than 2.5
degrees.
Add the module RECLASS to the model. Connect SLOPES to it, then right-click on the output image and change the image name
to be SLOPEBOOL. Right-click on the RECLASS module symbol to set the module parameters. All the default settings are correct
in this case, but as we saw in the last exercise, when run from the Macro Modeler, RECLASS requires a text file (.rcl) to specify the
reclassification values. In the previous exercise, we saved the .rcl file after filling out the main RECLASS dialog. You may create
.rcl files like this if you prefer. However, you may find it quicker to create the file using a facility in Macro Modeler.
Right-click on the input box for .rcl file on the RECLASS module parameters dialog. This brings up a list of all the .rcl files that are
in the project. At the bottom of the Pick List window are two buttons, New and Edit. Click New.
This opens an editing window into which you can type the .rcl file. Information about the format of the file is given at the top of
the dialog. We want to assign the new value 1 to slopes from 0 to just less than 2.5 degrees and the value 0 to all those greater than
or equal to 2.5 degrees. In the syntax of the .rcl file (which matches the order and wording of the main RECLASS dialog), enter the
following values with a space between each:
1 0 2.5
0 2.5 999
Note that the last value given could be any value greater than the maximum slope value in the image. Click Save As and give the
filename SLOPEBOOL. Click OK and notice that the file you just created is now listed as the .rcl file to use in the RECLASS
module parameters dialog. Close the module parameters dialog.
Save the model then run it (click yes to all when prompted about overwriting files) and examine the result.
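The way RECLASS applies those .rcl triplets (new value, lower bound, upper bound, with the range open at the top) can be sketched as:

```python
# The .rcl triplets used above: assign 1 to [0, 2.5) and 0 to [2.5, 999).
rcl_rules = [(1, 0.0, 2.5),
             (0, 2.5, 999.0)]

def reclass(value, rules):
    """Return the new value for the first rule whose half-open
    range [low, high) contains the input value."""
    for new_value, low, high in rules:
        if low <= value < high:
            return new_value
    return value  # values outside all ranges pass through unchanged

# A few hypothetical slope values in degrees.
slopes = [0.0, 1.8, 2.5, 7.3]
slopebool = [reclass(s, rcl_rules) for s in slopes]
print(slopebool)  # [1, 1, 0, 0]
```

Note how 2.5 itself falls in the second range, matching the "0 to just less than 2.5" wording above.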
How could you create a Boolean image of reservoirs? From which image would you derive this? (There are two different modules
you could use.)
Display the image named LANDUSE using the user-defined palette LANDUSE. Determine the integer land-use code for
reservoirs.
Either RECLASS or Edit/ASSIGN could be used to create a Boolean image of reservoirs. Both require a text file be created outside the Macro
Modeler. We will use Edit/ASSIGN to create the Boolean image called RESERVOIRS.
Open Edit from the Data Entry menu or from its toolbar icon. Type in: 2 1
(the value of reservoirs in LANDUSE, a space and a 1). Choose Save As from Edit's File menu, choose the Attribute Values File
file type, give the name RESERVOIRS, and save as an Integer data type. Close Edit.
In the Macro Modeler, place the attribute values file RESERVOIRS and move it to the left side of the model, under the slope
criteria branch of the model. Place the raster image LANDUSE under the attribute values file and the module ASSIGN to the right
of the two data files. Right click on the output file of ASSIGN and change the name to be RESERVOIRS. Before linking the input
files, right click on the ASSIGN module symbol. As we saw in the previous exercise, ASSIGN uses two input files, a raster feature
definition image and an attribute values file. The input files must be linked to the module in the order they are listed in the
module parameters dialog. Close the module parameters dialog and link the input raster feature definition image LANDUSE to
ASSIGN then link the attribute values file RESERVOIRS to ASSIGN. This portion of the model should appear similar to the
cartographic model below (although the values file symbol in the Macro Modeler is rectangular rather than oval). Note that the
placement of the raster and values file symbols for the ASSIGN operation could be reversed; it is the order in which the links are
made and not the positions of the input files that determine which file is used as which input. Save and run the model. Note that
the slope branch of the model runs again as well and both terminal layers are displayed.
The image RESERVOIRS defines the features from which distances should be measured in creating the buffer zone. This image will be the
input file for whichever distance operation we use.
[Cartographic model: landuse and reservoirs (values file) -> assign -> reservoirs]
The output images from DISTANCE and BUFFER are quite different. DISTANCE calculates a new image in which each cell value is the
shortest distance from that cell to the nearest feature. The result is a distance surface (a spatially continuous representation of distance).
BUFFER, on the other hand, produces a categorical, rather than continuous, image. The user sets the values to be assigned to three output
classes: target features, areas within the buffer zone and areas outside the buffer zone.
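The contrast between the two outputs can be illustrated on a toy one-row image (brute-force distances; DISTANCE's internal algorithm is more efficient):

```python
# One-row toy image: 1 marks a target feature (e.g., a reservoir cell).
features = [1, 0, 0, 0, 0, 0]
cell_size = 100.0  # hypothetical resolution in meters

targets = [i for i, v in enumerate(features) if v == 1]

# DISTANCE-style output: a continuous surface of distances to the
# nearest feature.
distance = [min(abs(i - t) for t in targets) * cell_size
            for i in range(len(features))]

# BUFFER-style output: categorical; here 0 for the target and the
# buffer zone (closer than 250 m), 1 for areas outside the buffer.
buffer_img = [0 if d < 250 else 1 for d in distance]

print(distance)    # [0.0, 100.0, 200.0, 300.0, 400.0, 500.0]
print(buffer_img)  # [0, 0, 0, 1, 1, 1]
```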
We would normally use BUFFER, since our desired output is categorical and this approach requires fewer steps. However, to become more
familiar with distance operators, we will take the time to complete this step using both approaches. First we will run DISTANCE and
RECLASS from their main dialogs, then we will add the BUFFER step to our model in Macro Modeler.
Run DISTANCE from the IDRISI GIS Analysis/Distance Operators menu. Give RESERVOIRS as the feature image and
RESDISTANCE as the output filename. Examine this image. Note that it is a smooth and continuous surface in which each pixel
has the value of its distance to the nearest reservoir.
Now use RECLASS to create a Boolean buffer image in which pixels with distances less than 250 meters from reservoirs have the
value 0 and pixels with distances greater than or equal to 250 meters have the value 1. Call the resulting image DISTANCEBOOL.
What values did you enter into the RECLASS dialog box to accomplish this?
Examine the result to confirm that it meets your expectations. It may be useful to display the LANDUSE image as well. Does
DISTANCEBOOL represent (with 1's) those areas outside a 250m buffer zone around reservoirs?
The image DISTANCEBOOL satisfies the buffer zone criterion for our suitability model. Before continuing on to the next criterion, we will
see how the module BUFFER can also be used to create such an image.
In Macro Modeler, add the module BUFFER to the right of the image RESERVOIRS and connect the image and module. Right
click to set the module parameters for BUFFER. Assign the value 0 for the target area, 0 for the buffer zone and 1 for the areas
outside the buffer zone. Enter 250 as the buffer width. Right click on the output image and change the image name to be
BUFFERBOOL. The second branch of your model will now be similar to the cartographic model shown below.
DISTANCEBOOL and BUFFERBOOL should be identical, and either approach could be used to complete this exercise. BUFFER is preferred
over DISTANCE when a categorical buffer zone image is the desired result. However, in other cases, a continuous distance image is required.
The MCE exercises later on in the Tutorial make extensive use of distance surface images.
[Figure 3: landuse and reservoirs (values file) -> assign -> reservoirs -> buffer -> bufferbool]
Describe the contents of the final image for this criterion. You are already familiar with two methods for producing such an image.
Draw the cartographic model showing the steps and call the final image FORESTBOOL.
You first must determine the numeric codes for the two forest categories (do not consider orchards or forested wetlands) in the
LANDUSE image. This can be done in a variety of ways. One easy method is to click on the LANDUSE file symbol in the model,
then click on the Describe icon (first icon on right) on the Macro Modeler toolbar. This opens the documentation file for the
highlighted layer. Scroll down to see the legend categories and descriptions. Then follow the cartographic model you drew above
to add the required steps to the model to create a Boolean map of forest lands (FORESTBOOL). Save and run the model. Note
that you may use the LANDUSE layer that is already placed in the model to link into this forest branch of the model. However, if
you wish you may alternatively add another LANDUSE raster layer symbol for this branch. (If you become stuck, the last page of
this exercise shows the full model.)
You should now have three Boolean images, one for each of the first three conditions. Before we can begin to address the area criterion, we must combine these three Boolean images into one final Boolean image that
shows those areas where all three conditions are met.
In this case, we want to model the Boolean AND condition. Only those areas that meet all three criteria are considered suitable. As we learned
in Exercise 2-2, Boolean algebra is accomplished with OVERLAY.
Add the OVERLAY operations necessary to create this composite Boolean image showing areas that meet all three conditions. To
do this, you will need to combine two images to create a temporary image, and then combine the third with that temporary image
to produce the final result.3 Call this final result COMBINED. Save and run the model.
Which operation in OVERLAY did you use to produce COMBINED? Draw the cartographic model that illustrates the steps taken
to produce COMBINED from the three Boolean criteria images.
Examine COMBINED. There are several contiguous areas in the image that are potential sites for our purposes. The last step is to
determine which of these meet the ten hectare minimum area condition.
Add the module GROUP to the model. Link COMBINED as the input file and change the output file to be GROUPS. Choose to
include diagonals in the GROUP module parameters dialog. Save and run the model.
Look at the GROUPS image. How can you differentiate between groups that had the value 1 in COMBINED (and are therefore
suitable) and groups that had the value 0 in COMBINED (and are therefore unsuitable)?
We will account for unsuitable groups in a moment. First add the AREA module to the model. Link GROUPS as the input file and
change the output image to be GROUPAREA. In the AREA module parameters dialog, choose to calculate area in hectares and
produce a raster image output.
3 … with Image Calculator. The logical expression to use would be: …
In the output image, the pixels of each group are assigned the area of that entire group. Use the Identify tool to confirm this. It
may be helpful to display the GROUPS image beside the GROUPAREA image. Since the largest group has a value much larger
than those of the other groups and autoscaling is in effect, the GROUPAREA display may appear to show fewer groups than
expected. The Identify tool will reveal that each group was assigned its unique area. To remedy the display, make sure
GROUPAREA has focus (selected), then choose Layer Properties on Composer. Set the Display Max to 17. To do so, either drag
the slider to the left or type 17 into the Display Maximum input box and press Apply. The value 17 was chosen because it is just
greater than the area of the largest suitable group.
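The GROUP-then-AREA sequence can be sketched as follows (a simplified 8-connected flood fill on a toy image; the per-cell area of 1 hectare is a made-up value):

```python
# Toy Boolean image; GROUP finds contiguous patches of equal-valued
# cells (including diagonals here) and AREA then assigns each pixel
# the area of its entire patch.
img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
cell_hectares = 1.0  # hypothetical area of one cell

def group(img):
    """Label contiguous groups of equal-valued cells (8-connected)."""
    rows, cols = len(img), len(img[0])
    labels = [[None] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is None:
                stack = [(r, c)]
                labels[r][c] = next_id
                while stack:
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and labels[ny][nx] is None
                                    and img[ny][nx] == img[r][c]):
                                labels[ny][nx] = next_id
                                stack.append((ny, nx))
                next_id += 1
    return labels

groups = group(img)

# AREA-style step: count cells per group, then write each group's
# area back onto every cell of that group.
counts = {}
for row in groups:
    for g in row:
        counts[g] = counts.get(g, 0) + 1
group_area = [[counts[g] * cell_hectares for g in row] for row in groups]
print(group_area)  # [[4.0, 4.0, 5.0], [5.0, 4.0, 5.0], [5.0, 5.0, 4.0]]
```

Note that the four 1-cells form one diagonal-connected group of area 4, while the five 0-cells form a group of area 5, exactly the behavior the step above asks you to confirm with the Identify tool.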
Altering the maximum display value does not change the values of the pixels. It merely tells the display system to saturate, or set the autoscale
endpoint, at a value that is different from the actual data endpoints. This allows more palette colors to be distributed among the other values
in the image, thus making visual interpretation easier.
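The saturation idea can be sketched as a clamp applied before palette scaling (an illustration only, not TerrSet's display code; the value 170 below is a hypothetical data maximum):

```python
def autoscale(value, display_min, display_max, n_colors=256):
    """Map a data value to a palette index, saturating values
    beyond the display endpoints (the data themselves are unchanged)."""
    v = max(display_min, min(value, display_max))
    frac = (v - display_min) / (display_max - display_min)
    return round(frac * (n_colors - 1))

# With the true endpoints (0 to a hypothetical maximum of 170),
# groups of 1-17 ha crowd the bottom of the palette; with Display
# Max set to 17 they spread across the full palette.
print(autoscale(8.5, 0, 170))  # 13  -> hard to distinguish
print(autoscale(8.5, 0, 17))   # 128 -> mid-palette
print(autoscale(20, 0, 17))    # 255 -> saturated at the endpoint
```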
We now want to isolate those groups that are greater than 10 hectares (whether suitable or not).
What module is required to do this? Why isn't Edit/ASSIGN an option in this case?
Add a RECLASS step to the model. Use either the Macro Modeler facility or the main RECLASS dialog box to create the .rcl file
needed by the RECLASS module parameters dialog box. Link RECLASS with GROUPAREA to create an output image called
BIGAREAS.
Finally, to produce a final image, we will need to mask out the unsuitable groups that are larger than 10 hectares from the
BIGAREAS image. To do so, add an OVERLAY command to the model to multiply BIGAREAS and COMBINED. Call the final
output image SUITABLE. Again, you may wish to link the COMBINED layer that is already in the model, or you may place
another COMBINED symbol in the model.
This exercise explored two important classes of GIS functions, distance operators and context operators. In particular, we saw how the
modules BUFFER and DISTANCE (combined with RECLASS) can be used to create buffer zones around a set of features. We also saw that
DISTANCE creates continuous distance surfaces. We used the context operators SURFACE to calculate slopes and GROUP to identify
contiguous areas.
We saw as well how Boolean algebra performed with the OVERLAY module may be extended to three (or more) images through the use of
intermediary images.
Do not delete your Exer2-3 model, nor the original images LANDUSE and RELIEF. You will need all of these for the next exercise where we
will explore further the utility of the Macro Modeler.
EXERCISE 2-4
EXPLORING THE POWER OF MACRO MODELER
Up to this point, we have used cartographic modeling mostly as an organizational tool. However, the Macro Modeler is more than a layout
tool for analytical sequences, as we will explore in this exercise.
If you have closed it, open Macro Modeler, then open the model file Exer2-3.¹ Run the macro as it is to produce the image
SUITABLE.
First we will see how the results change when we relax the slope criterion such that slopes less than 4 degrees are considered
suitable. Under the Macro Modeler File menu, choose Save As and give the model the new name Exer2-4a. Examine the model
and locate the step in which the slope threshold is specified. It is the RECLASS module operation that links SLOPES and
SLOPEBOOL. Right-click on the RECLASS command symbol.
If you don't have the Macro Modeler file from the previous exercise, it is installed in the Introductory GIS data directory in a zip file called Exer2-3.zip. Use your
Windows Explorer tools to unzip the file and extract the contents to the same directory.
How is the slope threshold specified in the RECLASS parameters dialog box? Review the previous exercise if you are uncertain.
Then change the slope threshold from 2.5 to 4.
Now change the name of the final output to SUITABLE2-4a (remember that this can be done using a right-click on the output
layer symbol). Then save your model and run it.
SubModels
One of the most powerful features of Macro Modeler is the ability to save models as submodels. A submodel is a model that is encapsulated
such that it acts like a new analytical module.
To save your suitability mapping procedure as a submodel, select the Save Model as a SubModel option from the File menu of Macro
Modeler. You will then be presented with a SubModel Properties form. This allows you to enter captions for your submodel parameters. In
this case, the submodel parameters will be the input and output files necessary to run the model. You should use titles that are descriptive of
the nature of the inputs required since the model will now become a generic modeling function. Here are some suggestions. Alter the captions
as you wish and then click OK to save the submodel.2
Layer or File      Caption
Relief             Relief Image
Land use           Landuse Image
Reservoirs         Reservoir Classes
Forestbool         Forest Classes
Suitable
Note that when the SubModel Parameters form comes up, the layers may not be in the order you wish. To set a specific order, cancel out of the SubModel Parameters
dialog and then click on each of the inputs, and then the outputs, in the order in which you would like them to appear. Then go back to the Save Model as a
SubModel option in the File menu.
To use your submodel, you will need to add an additional Resource Folder to your project (but leave the Working Folder set as it
is). Using TerrSet Explorer, add \TerrSet Tutorial\Advanced GIS as a Resource Folder, as it contains some layers that we will
need. Then in Macro Modeler click the New icon (the farthest left on the Macro Modeler toolbar) to start a new workspace. Add
the following two layers from the Advanced GIS folder to your workspace:
DEM
LANDUSE91
And add the following two attribute values files from your Introductory GIS folder:
WESTRES
WESTFOR
Click on the LANDUSE91 symbol to select it and click the Display icon on the Macro Modeler toolbar. This is a map of land use
and land cover for the town of Westborough (also spelled Westboro), Massachusetts, 1991. You may also view the DEM layer in a
similar fashion. This is a digital elevation model for the same area. The WESTRES values file simply contains a single line of data
specifying that class 5 (lakes) will be assigned the value 1 to indicate that they are the reservoirs (almost all of the lakes here are in
fact reservoirs). The WESTFOR values file also contains a single line specifying that class 7 will be assigned 1 to indicate forest.
Now click on the SubModel icon (eighth from the right). You will notice your submodel listed in the Working Folder. Select it
and place it in your workspace. Then do a right-click onto it. Do you notice your captions? Now use the Connect tool to connect
each of your input files to your submodel and give any name you wish to your output file. Then run the model.
Submodels are very powerful because they allow you to extend the analytical capabilities of your system. Once encapsulated in this manner,
they become generic tools that you can use in many other contexts. They also allow you to encapsulate processes that should be run
independently from other elements in your model.
To introduce DynaLinks, click on the Open model icon (second from the left) and select the model named RESIDENTIAL
GROWTH. As the name suggests, this model predicts areas of growth in residential land within existing forest land. The study
area is again Westborough, Massachusetts. First, run the model. The image that is displayed at the end shows the original areas of
residential as class 2 and new areas of growth as class 1. The logic by which it works is as follows (click on each layer mentioned to
select it and use the Display tool to view it as you go along):
- the image named RESIDENTIAL91 shows the original areas of residential land in 1991.
- the image named LDRESSUIT maps the inherent suitability of land for residential uses. It is based on factors such as proximity to roads, slope and so on.
- a filtering process is used to downweight the suitability of land for residential as one moves away from existing areas of residential. The procedure uses a filter that is applied to the Boolean image of existing residential areas. The filter yields a result (PROXIMITY) that has zeros in areas well away from existing residential areas and ones well within existing residential areas. However, in the vicinity of the edge of existing residential areas, the filter causes a gradual transition from one to zero. This result is used as a multiplier to progressively downweight the suitability of areas as one moves away from existing residential areas (DOWNWEIGHT).
- the RANDOM module is used to introduce a slight possibility of forest land converting to residential in any area (RANDOM SEED).
- after suitabilities are downweighted, combined and constrained (FINAL SUITABILITY), cells are rank ordered in terms of their suitability (RANKED SUITABILITY), while excluding consideration of areas of existing residential land. This rank ordered image is then reclassified to extract the best 500 cells. These become the new areas of growth (BEST AREAS). These are combined with existing areas of residential land to determine the new state of residential land (NEW RESID). The final layer then illustrates both new and original areas (GROWTH).
We will now introduce a DynaLink to make this a dynamic process. Click on the DynaLink icon (the one that looks like a
lightning bolt). It works just like the Connect tool. Move it over the image named NEW RESID, hold the left button down and
drag the end of the DynaLink to the RESIDENTIAL91 image at the beginning of the model. Then release the mouse button. Now
run the model. The system will ask how many iterations you wish; indicate 7 and check the option to display intermediate
images.
Now run the process again but do not display intermediates. Note in particular how the names for RESIDENTIAL91, NEW
RESID and GROWTH change. On the first iteration, RESIDENTIAL91 is one of the input maps, and is used towards the
production of NEW RESID_1 and GROWTH_1. Then before the second iteration starts, NEW RESID_1 is substituted for
RESIDENTIAL91 and becomes an input towards the creation of NEW RESID_2 and GROWTH_2. This production of multiple
outputs for NEW RESID occurs because it is the origin of a DynaLink, while the production of multiple outputs for GROWTH
occurs because it is a terminal layer (i.e., the last layer in the model sequence). If a model contains more than one terminal layer,
then each will yield multiple outputs. Finally, notice at the end that the original names reappear. For terminal layers, there is an
additional implication: a copy of the final output (GROWTH_7, in this case) is then made using the original name specified
(GROWTH).
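The naming behavior described above can be sketched as a simple loop (filenames only; the actual raster processing is omitted):

```python
# Sketch of the DynaLink substitution logic: the output of each
# iteration becomes the input to the next, outputs receive numeric
# suffixes, and the terminal layer finally gets a copy under its
# original name.
def run_dynalink(initial, iterations):
    produced = []
    current = initial                  # RESIDENTIAL91 on iteration 1
    for i in range(1, iterations + 1):
        new_resid = f"NEW RESID_{i}"
        growth = f"GROWTH_{i}"         # terminal layer output
        produced.extend([new_resid, growth])
        current = new_resid            # substituted before the next pass
    produced.append("GROWTH")          # copy of the last GROWTH_i
    return produced

outputs = run_dynalink("RESIDENTIAL91", 7)
print(outputs[:2], outputs[-2:])
# ['NEW RESID_1', 'GROWTH_1'] ['GROWTH_7', 'GROWTH']
```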
As you can see, DynaLinks are very powerful. By allowing the substitution of outputs to become new inputs, dynamic models can readily be
created, thereby greatly extending the potential of GIS for environmental modeling.
Use DISPLAY Launcher to examine the image named MAD82JAN. This is an image of Normalized Difference Vegetation Index
(NDVI) data for January 1982 produced from the AVHRR system aboard one of the NOAA series weather satellites. The original
image was global in extent (with 8 km resolution). Here we see only the island of Madagascar. NDVI is calculated from the
reflectance of solar energy in the red and infrared wavelength regions according to the simple formula:
NDVI = (IR - R) / (IR + R)
This index has values that can range from -1 to +1. However, real numbers (i.e., those with fractional parts) require more
memory in digital systems than integer values. Thus it is common to rescale the index into a byte (0-255) range. With NDVI,
vegetation areas typically have values that range from 0.1 to 0.6 depending upon the amount of biomass present. In this case, the
data have been scaled such that 0 represents an NDVI of -0.05 and 255 represents an NDVI of 0.67.
You will note that MAD82JAN is only one of 18 images in your folder, showing January NDVI for all years from 1981 through
1999 (18 years). In this section we will use Macro Modeler to convert this whole set of images back to their original (unscaled)
NDVI values. The backwards conversion is as follows:
NDVI = (Dn * 0.0028) - 0.05
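The conversion can be checked with a couple of lines (note that DN 255 recovers about 0.664, i.e., the documented 0.67 once the rounding of the scaling constants is allowed for):

```python
def dn_to_ndvi(dn):
    """Recover NDVI from the stored byte value (0-255)."""
    return dn * 0.0028 - 0.05

# The byte endpoints recover the documented NDVI range.
print(dn_to_ndvi(0))              # -0.05
print(round(dn_to_ndvi(255), 3))  # 0.664
```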
First, let's create the model using MAD82JAN. Open Macro Modeler to start a new workspace (or click the New icon). Then click
on the Raster Layer icon and select MAD82JAN. Then click on the Module icon and select SCALAR. Right click on SCALAR and
change the operation to multiply and the value to 0.0028. Then connect MAD82JAN to SCALAR. Click on the Module icon and
select SCALAR again. Change the operation for this second SCALAR to subtract and enter a value of 0.05. Then connect the
output from the first SCALAR operation to the second SCALAR. Now test the module by clicking on the Run icon. If it worked,
you should have values in the output that range from -0.05 to 0.56.
To perform this same operation on all files, we now need to create a raster group file. Open TerrSet Explorer then scroll the File
pane in the Introductory GIS folder until you see MAD82JAN and click it to highlight it. Now hold down the shift key and press
the down arrow until the whole group (MAD82JAN through MAD99JAN) has been highlighted. Then right-click and select
Create / Raster Group from the menu. A new raster group file is created called RASTER GROUP.RGF. Click on this file and either
right-click or hit F2 to rename this group file to MADNDVI.
In Macro Modeler, click onto the MAD82JAN symbol to highlight it, and click the Delete icon (sixth icon from left) to remove it.
Next, locate the DynaGroup icon (it's the one with a group file symbol along with a lightning bolt). Click it and select Raster
Group File. Then select your MADNDVI group file and connect it as the input to the first SCALAR operation (use the standard
Connect tool for this). Finally, go to the final output file of your model (it will have some form of temporary name at the moment)
and change it to have the following name:
NEW+<madndvi>
This is a special naming convention. The specification of <madndvi> in the name indicates that Macro Modeler should form the
output names from the names in the input DynaGroup named MADNDVI. In this case, we are saying that we want the new
names to be the same as the old names, but with a prefix of NEW added on.
Now run the model to see how it works. You will be informed that there will be 18 iterations (you cannot change this; it is
determined by the number of members in the group file). While it runs, note how the output filenames are formed. At the end, it
will also produce a raster group file using the prefix specified (NEW). Remember to save your model before you move on.
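The output-name formation for NEW+<madndvi> can be sketched as:

```python
# Members of the raster group file MADNDVI (MAD82JAN..MAD99JAN).
group = [f"MAD{year:02d}JAN" for year in range(82, 100)]

# NEW+<madndvi> forms each output name by prefixing NEW to the
# corresponding input name in the DynaGroup.
outputs = ["NEW" + name for name in group]

print(len(outputs), outputs[0], outputs[-1])
# 18 NEWMAD82JAN NEWMAD99JAN
```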
Open the model named MEAN. This has been specifically developed to work with your data. Note that it incorporates both a
DynaGroup and a DynaLink. To calculate the average (mean) of your images of Madagascar, we need to sum up the values in the
18 NDVI images and then divide by 18. The combined DynaGroup and DynaLink accomplishes the summation, while the last
SCALAR operation does the division.
Display the image named BLANK_NDVI. As you can see, it only contains zeros. In the first iteration, the model takes the first
image in the DynaGroup (MAD82JAN) and adds it to BLANK_NDVI (using OVERLAY) and places the result in SUM_1. The
DynaLink then substitutes SUM_1 for BLANK_NDVI and thus adds the second image in the group (MAD83JAN) to SUM_1. At
the end of the sequence, a final image named SUM is created, containing the sum of the 18 images. This is then divided by 18 by
the SCALAR operation to get the final result. Run the model and watch it work. Note also that the output of the DynaGroup did
not use any of the special naming conventions. In such cases the system reverts to its normal naming convention for
multiple outputs (numeric suffixes).
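The logic of the MEAN model can be sketched in Python, with numpy arrays standing in for the NDVI images (hypothetical data, not the Madagascar files):

```python
import numpy as np

# Eighteen stand-in "images"; in the model these are the DynaGroup members.
images = [np.full((2, 2), float(i)) for i in range(18)]

running_sum = np.zeros((2, 2))       # plays the role of BLANK_NDVI
for img in images:                   # one iteration per group member
    running_sum = running_sum + img  # the OVERLAY addition; the DynaLink
                                     # feeds each result into the next pass
mean_ndvi = running_sum / 18.0       # the final SCALAR division
```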
As a final example, open the model named STANDEV. This calculates the standard deviation of images (across the series). It has
already been structured for the specific files in this exercise. Run it and see if you can figure out how it works; the basic principles
are the same as for the previous example.
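The per-pixel standard deviation across a series follows the same pattern; a sketch (again with hypothetical stand-in arrays) might look like:

```python
import numpy as np

images = [np.full((2, 2), float(i)) for i in range(18)]  # stand-in series
stack = np.stack(images)
n = stack.shape[0]

mean = stack.sum(axis=0) / n
# population standard deviation: square root of the mean squared deviation
stdev = np.sqrt(((stack - mean) ** 2).sum(axis=0) / n)
```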
Let's first look at the steps needed to create the image SLOPEBOOL from the elevation model RELIEF as shown in Exercise 2-3.
The first module we used was SURFACE. Go to the TerrSet on-line Help System, click on the Index tab, and search for
SURFACE. Display the topic, then choose the Macro Command item. The information is shown as follows:
SURFACE Macro Command
The next module we used was RECLASS to create a Boolean image of slopes less than 2.5 degrees from our slope image. Again,
access the on-line Help System for the Macro Command format for RECLASS. Given our variables, the next line in the macro file
should be:
reclass x i*slopes*slopebl*2*1*0*2.5*0*2.5*999*-9999
Run Edit from the Data Entry submenu under the File menu. From the File menu in Edit, choose to open a file. Select the macro
file type and open the file EXERCISE2-3. The macro has already been created for you. Each line executes a TerrSet module
using the same parameters we used in Exercise 2-3. As the last line indicates, the final image will be called SUITABLE2, rather
than SUITABLE (created in Exercise 2-3), so we may compare the two. Note that the lines beginning with the letters REM are
considered by the program to be remarks and will not be executed. These remarks are used to document the macro.
Take some time to compare the Macro Command format information in the on-line Help System with some of the lines in the
macro. Note that you may size the on-line Help window smaller so you may have both it and the Edit window visible at the same
time. You may also choose to have the Help System stay on top from the Help System Options menu.
When you are finished examining the macro, choose to Exit. Do not save any changes you may have inadvertently made.
Choose Run Macro from the Model Deployment Tools submenu under the IDRISI GIS Analysis menu, then enter EXERCISE2-3
as the macro filename, and leave the macro parameters input box blank. As the macro is running, you will see a report on the
screen showing which step is currently being processed. When the macro has finished, SUITABLE2 will automatically display
with the Qual palette. Then display SUITABLE (created in Exercise 2-3) in a separate window with the same palette and position
it to see both images simultaneously.
Open the macro file with the TerrSet Edit module again and change it so that slopes less than 4 degrees are considered suitable.
The altered command line should be:
reclass x i*slopes*slopebl*2*1*0*4*0*4*999*-9999
You should also change the remark above the RECLASS command to indicate that you are creating a Boolean image of slopes less
than 4 degrees. In Edit, choose to Save then Exit. Run the macro and compare the results (SUITABLE2) to SUITABLE.
Y
Use the Edit module to open and change the macro once again so that it does not use diagonals in the GROUP process. Also
change the final image name to be SUITABLE3. (Retain the 4 degree slope threshold.) Save the macro and run it.
4. Describe the differences between SUITABLE, SUITABLE2 and SUITABLE3 and explain what caused these differences.
Command macros may also be written with variable placeholders as command line parameters. For example, suppose we wish to run
several iterations of the macro, each with a different slope threshold. We can set up the macro with a variable placeholder for the threshold
parameter. The desired threshold can then be entered into the Run Macro dialog in the Macro Parameters input box. This will be easier than
editing and saving a new macro file for each iteration. We will change the macro to take both the slope threshold and the output filename
from the Macro Parameters input box.
Open the macro EXERCISE2-3 in Edit. In the reclassification step for the slope criterion, replace the slope threshold value 4 with
the variable placeholder %1 in both places in which it occurs. The new command line should appear as follows:
reclass x i*slopes*slopebl*2*1*0*%1*0*%1*999*-9999
Also replace the output filename, SUITABLE3, with a second variable placeholder, %2, both in the last reclassification step and in
the last display step as follows:
reclass x i*sitearea*%2*2*0*0*10*1*10*99999*-9999
display x n*%2*qual256*y
Save the macro file and Exit the editor.
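The substitution Run Macro performs can be sketched as simple text replacement (a guess at the mechanism, for illustration only):

```python
# Replace %1, %2, ... with the space-separated Macro Parameters.
def substitute(line, params):
    for i, value in enumerate(params, start=1):
        line = line.replace("%" + str(i), value)
    return line

line = "reclass x i*slopes*slopebl*2*1*0*%1*0*%1*999*-9999"
print(substitute(line, ["5", "suitable5"]))
# reclass x i*slopes*slopebl*2*1*0*5*0*5*999*-9999
```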
AA
Choose Run Macro. Enter the macro filename and in the Macro Parameters input box, enter the slope threshold of interest and
the desired output filename, separated by a space. For example, if you wished to evaluate a threshold of 5 degrees and call the
output SUITABLE5, you would enter the following in the Macro Parameters box:
5 suitable5
Press Run Macro.
BB
You may now quickly evaluate the results of using several different slope thresholds. Each time you run the macro, enter a slope
threshold value and an output filename.
Command macros are clearly a very powerful tool in GIS analysis. Once they are created, they allow for the very rapid evaluation of
variations on the same analysis. In addition, exactly the same analysis may be quickly performed for another study area simply by changing
the original input filenames. As an added advantage, macro files may be saved or printed along with the corresponding cartographic model.
This would provide a detailed record for checking possible sources of error in an analysis or for replicating the study.
Note:
TerrSet records all the commands you execute in a text file located in the Working Folder. This file is called a LOG file. The commands are
recorded in a similar format to the macro command format that we used in this exercise. All the error messages that are generated are also
recorded. Each time you open TerrSet, a new LOG file is created and the previous files are renamed. The log files of your five most recent
TerrSet projects are saved under the filenames IDRISI32.LOG, IDRISI32.LO2, ... IDRISI32.LO5 with IDRISI32.LOG being the most recent.
The LOG file may be edited and then saved as a macro file using Edit. Open the LOG file in Edit, edit the file to have a macro file format,
then choose the Save As option. Choose Macro file (*.iml) as the file type and enter a filename.
Note also that the command line used to generate each output image, whether interactively or by a macro, is recorded in that image's Lineage
field in its documentation file. This may be viewed with the Metadata utility in TerrSet Explorer and may be copied and pasted into a macro
file using the CTRL+C keyboard sequence to copy highlighted text in Metadata and the CTRL+V keyboard sequence to paste it into the
macro file in Edit.
In addition to macros, Image Calculator offers some degree of automation. While the macro file offers more flexibility, any analysis that is
limited to the modules OVERLAY, SCALAR, TRANSFORM and RECLASS may be carried out with Image Calculator. Expressions in Image
Calculator may be saved to a file and called up again for later use.
EXERCISE 2-5
COST DISTANCE AND LEAST-COST
PATHWAYS
In the previous exercise, we introduced one of the TerrSet distance operators called DISTANCE. DISTANCE produces a continuous surface
of Euclidean distance values from a set of features. In this exercise, we will use a variant on the DISTANCE module called COST. While
DISTANCE produces values measured in units such as meters or kilometers, COST calculates distance in terms of some measure of cost, and
the resulting values are known as cost distances. Similar to DISTANCE, COST requires a feature image as the input from which cost distances
are calculated. However, unlike DISTANCE, COST also requires a friction surface that indicates the relative cost of moving through each cell.
The resulting continuous image is known as a cost distance surface.
The values of the friction surface are expressed in terms of the particular measure of cost being calculated. These values often have an actual
monetary meaning equal to the cost of movement across the landscape. However, friction values may also be expressed in other terms. They
may be expressed as travel time, where they represent the time it would take to cross areas with certain attributes. They might also represent
energy equivalents, where they would be proportional to total fuel or calories expended while traveling from a pixel to the nearest feature.
These friction values are always calculated relative to some fixed base amount which is given a value of 1. For example, if our only friction was
snow depth, we could assign areas with no snow a value of 1 (i.e., the base cost) and areas with snow cover values greater than 1. If we know
that it costs twice as much to traverse areas with snow six to ten inches deep as it does to cross bare ground, we would assign cells with
snow depths in that range a friction value of 2. Frictions are specified as real numbers to allow fractional values, and they can have values
between 0 and 1.0 x 10^37. Frictions are rarely specified with values less than 1 (the base cost) because a friction value less than 1 actually
represents an acceleration or force that acts to aid movement.
No matter what scheme is used to represent frictions, the resulting cost distance image will incorporate both the actual distance traveled and
the frictional effects encountered along the way. In addition, because friction values will always be used to calculate cost distances, cost
distance will always be relative to the base friction value or cost. For example, if a cell is determined to have a cost distance of 5.25, this
indicates that it costs five and a quarter times as much as the base cost to get to this cell from the nearest feature from which cost was
calculated. Or, phrased differently, it costs the same amount to get to that cell as it would to cross five and a quarter cells having the base
friction. The module SCALAR may be used to transform relative cost distance values into actual monetary, time, or other units.
The discussion above focuses on isotropic frictions, one of two basic types of frictional effects. Isotropic frictions are independent of the
direction of movement through them. For example, a road surface will have a particular friction no matter which direction travel occurs. The
road surface has characteristics (paved, muddy, etc.) that make movement easier (low friction value) or more difficult (high friction value).
We will work with this type of friction surface in this exercise. The TerrSet module COST accounts for isotropic frictional effects.
Those frictions that vary in strength depending on the direction of movement are known as anisotropic frictions. An example is a prevailing
wind where movement directly into the wind would cause the cost of movement to be great, while traveling in the same direction as the wind
would aid movement, perhaps even causing an acceleration. In order to effectively model such anisotropic frictional effects, a dual friction
surface is required: one image containing information on the magnitude of the friction, and another containing information on the direction
of frictional effect. The module VARCOST is used to model this type of cost surface. For more information, see the chapter on Anisotropic
Cost Analysis in the TerrSet Manual.
In this exercise, we will be working only with isotropic frictions, and therefore will be using the COST module. COST offers two separate
algorithms for the calculation of cost surfaces. The first, COSTPUSH, is faster and works very well when friction surfaces are not complex or
network-like. The second, COSTGROW, can work with very complex friction surfaces, including absolute barriers to movement.1
An interesting and useful companion to the cost modules is PATHWAY. Once a cost surface has been created using any of the cost modules,
PATHWAY can be used to determine the least-cost route between any designated cell or group of cells and the nearest feature from which
cost distances were calculated.
We will use both the COST and PATHWAY modules in this exercise.
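To make the idea of a cost surface concrete, here is a minimal cost-distance sketch in Python. It is not TerrSet's COSTPUSH or COSTGROW algorithm; it simply runs Dijkstra's shortest path over the grid, charging each move the mean friction of the two cells involved, scaled by sqrt(2) for diagonal moves:

```python
import heapq
import numpy as np

# A minimal cost-distance sketch (illustrative only, not TerrSet's algorithm).
# features: nonzero cells mark the source feature(s); friction: cost per cell.
def cost_distance(features, friction):
    rows, cols = friction.shape
    dist = np.full((rows, cols), np.inf)
    heap = []
    for r, c in zip(*np.nonzero(features)):
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, int(r), int(c)))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    # mean friction of the two cells, times the step length
                    step = ((dr * dr + dc * dc) ** 0.5
                            * (friction[r, c] + friction[nr, nc]) / 2.0)
                    if d + step < dist[nr, nc]:
                        dist[nr, nc] = d + step
                        heapq.heappush(heap, (d + step, nr, nc))
    return dist
```

With a uniform friction of 1 (the base cost), the result reduces to ordinary distance in cell units, which is why cost distances are always relative to the base friction.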
Our problem concerns a new manufacturing plant. This plant requires a considerable amount of electrical energy and needs a transformer
substation and a feeder line to the nearest high voltage power line. Naturally, plant executives want the construction of this line to be as
inexpensive as possible. Our problem is to determine the least-cost route for building the new feeder line from the new plant to the existing
power line.
Display the image named WORCWEST with the user-defined palette WORCWEST.2 (Note that DISPLAY Launcher
automatically looks for a palette or symbol file with the same name as the selected layer. If found, this is entered as the default.)
This is a land use map for the western suburbs of Worcester, Massachusetts, USA, that was created through an unsupervised
classification of Landsat TM satellite imagery.3 Use Composer to add the vector layer NEWPLANT, with the user-defined symbol
file NEWPLANT. The location of the new manufacturing plant will be shown with a large white circle just to the northwest of the
center of the image. Then add the vector file POWERLINE to the composition, using the user-defined symbol file POWERLINE.
The existing power line is located in the lower left portion of the image and is represented with a red line. These are the two
features we want to connect with the least-cost pathway.
Open Macro Modeler. We will construct a model for this exercise as we proceed.4
Cost distance analysis requires two layers of information, a layer containing the features from which to calculate cost distances and a friction
surface. Both must be in raster format.
1. For further information on these algorithms, see: Eastman, J.R., 1989. Pushbroom Algorithms for Calculating Distances in Raster Grids. Proceedings, AUTOCARTO 9, 288-297.
2. For this exercise, make sure that your display settings (under File/User Preferences) are set to the default values by pressing the Revert to Defaults button.
3. This is an image processing technique explored in the Introductory Image Processing Exercises section of the TerrSet Tutorial.
4. The module RASTERVECTOR combines six previously released raster/vector conversion modules: POINTRAS, LINERAS, POLYRAS, POINTVEC, LINEVEC, and POLYVEC. This exercise continues to use the command lines for these previous modules in Macro Modeler.
First we will create the friction surface that defines the costs associated with moving through different land cover types in this area. For the
purposes of this exercise, we will assume that it costs some base amount to build the feeder line through open land such as agricultural fields.
Given this base cost, Table 1 shows the relative costs of having the feeder line constructed through each of the land uses in the suburbs of
Worcester.
Land Use            Friction    Explanation
Agriculture
Deciduous Forest
Coniferous Forest
Urban               1000
Pavement
Suburban            1000
Water               1000
Barren/Gravel

Table 1
You will notice that some of these frictions are very high. They act essentially as barriers. However, we do not wish to totally prohibit paths
that cross these land uses, only to avoid them at high cost. Therefore, we will simply set the frictions at values that are extremely high.
Place the raster layer WORCWEST on the Macro Modeler. Save the model as Exer2-5.
Access the documentation file for WORCWEST by clicking first on the WORCWEST image symbol to highlight it, then clicking
on the Describe icon on the Macro Modeler toolbar. (You can also access similar information from the Metadata utility in TerrSet
Explorer). Determine the identifiers for each of the land use categories in WORCWEST. Match these to the land use categories
given in Table 1, then use Edit to create a values file called FRICTION. This values file will be used to assign the friction values to
the land use categories of WORCWEST. The first column of the values file should contain the original land use categories while
the second column contains the corresponding friction values. Save the values file and specify real as the data type (because COST
requires the input friction image to have real data type).
Place the values file you just created, FRICTION, into the model then place the module ASSIGN. Right-click on ASSIGN to see
the required order of the input files: the feature definition image should be linked first, then the attribute values file. Close
module properties and link WORCWEST and then FRICTION to ASSIGN. Right-click on the output image and rename it
FRICTION. Save and run the model.
This completes the creation of our friction surface. The other required input to COST is the feature from which cost distances should be
calculated. COST requires this feature to be in the form of an image, not a vector file. Therefore, we need to create a raster version of the
vector file NEWPLANT.
When creating a raster version of a vector layer in TerrSet, it is first necessary to create a blank raster image that has the desired
spatial characteristics such as min/max X and Y values and numbers of rows and columns. This blank image is then updated
with the information of the vector file. The module INITIAL is used to create the blank raster image. Add the module INITIAL to
the model and right-click on it. Note that there are two options for how the parameters of the output image will be defined. The
default, copy from an existing file, requires we link an input raster image that already has the desired spatial characteristics of the
file we wish to create (the attribute values stored in the image are ignored). We wish to create an image that matches the
characteristics of WORCWEST. Also note that INITIAL requires an initial value and data type. Leave the default initial value of 0
and change the data type to byte. Close Module Parameters and link the raster layer WORCWEST, which is already in the model,
to INITIAL. You may wish to re-arrange some model elements at this point to make the model more readable. You can also add a
second copy of WORCWEST to your model rather than linking the existing one if you prefer to do so. Right-click on the output
image of INITIAL and rename the file BLANK. Save and run the model. We have created the blank raster image now, but must
still update it with the vector information.
Add the vector file NEWPLANT to the model, then add the module POINTRAS to the model and right-click on it. POINTRAS
requires two inputs: first the vector point file, then the raster image to be updated. The default operation type, to record the ID of
the point, is correct. Close module parameters. Link the vector layer NEWPLANT then the raster layer BLANK to the POINTRAS
module. Right-click on the output image of POINTRAS and rename this to be NEWPLANT. (Recall that vector and raster files
have different filename extensions, so this will not overwrite the existing vector file.) Save and run the model.
NEWPLANT will then automatically display. If you have difficulty seeing the single pixel that represents the plant location, you
may wish to use the interactive Zoom Window tool to enlarge the portion of the image that contains the plant. You may also add
the vector layer NEWPLANT, window into that location, then make the vector layer invisible by clicking in its check box in
Composer. You should see a single raster pixel with the value one representing the new manufacturing plant.
The operation you have just completed is known as vector-to-raster conversion, or rasterization. We now have both of the images needed to
run the COST module, a friction surface (FRICTION) and a feature definition image (NEWPLANT). Your model should be similar to the
figure below, though the arrangement of elements may be different.
Figure 1: worcwest + friction (values file) → ASSIGN → friction; worcwest → INITIAL → blank; newplant (vector) + blank → POINTRAS → newplant
Add the COST module to the model and right-click on it. Note that the feature image should be linked first, then the friction
image. Choose the COSTGROW algorithm (because our friction surface is rather complex). The default values for the last two
parameters are correct. Link the input files to COST, then right-click on the output file and rename it COSTDISTANCE. The
calculation of the cost distance surface may take a while if your computer does not have a very fast CPU. Therefore you may wish
to take a break here and let the model run.
When the model has finished running, use the Identify tool to investigate some of the data values in COSTDISTANCE. Verify
that the lowest values in the image occur near the plant location and that values accumulate with distance from the plant. Note
that crossing only a few pixels with very high frictions, such as the water bodies, quickly leads to extremely high cost distance
values.
In order to calculate the least cost pathway from the manufacturing plant to the existing power line, we will need to supply the module
PATHWAY with the cost distance surface just created and a raster representation of the existing power line.
Place the module LINERAS in the model and right-click on it. Like POINTRAS, it requires the vector file and an input raster
image to be updated. Close module parameters then place the vector file POWERLINE and link it to LINERAS. Rather than run
INITIAL again, we can simply link the output of the existing INITIAL process, the image BLANK, into LINERAS as well. Right-click on the output image and rename it POWERLINE. Save the model, but don't run it yet. Since the COST calculation takes
some time, we will build the remainder of the model, then run it again.
We are now ready to calculate the least-cost pathway linking the existing power line and the new plant. The module PATHWAY works by
choosing the least-cost alternative each time it moves from one pixel to the next. Since the cost surface was calculated using the
manufacturing plant as the feature image, the lower costs occur nearer the plant. PATHWAY, therefore, will begin with cells along the power
line (POWERLINE) and then continue choosing the least cost alternative until it connects with the lowest point on the cost distance surface,
the manufacturing plant. (This is analogous to water running down a slope, always flowing into the next cell with the lowest elevation.)
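The "water running downhill" behavior can be sketched as a steepest-descent walk over the cost surface (an illustrative simplification of PATHWAY, not its actual implementation):

```python
import numpy as np

# Steepest descent on a cost surface: starting from a target cell,
# repeatedly move to the lowest-valued neighbor; the walk stops at the
# surface minimum (the source feature), like water always flowing into
# the lowest adjacent cell.
def least_cost_path(cost, start):
    path = [start]
    r, c = start
    while True:
        best = (r, c)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                    if cost[nr, nc] < cost[best]:
                        best = (nr, nc)
        if best == (r, c):   # no lower neighbor: we are at the minimum
            return path
        r, c = best
        path.append(best)

cost = np.array([[0.0, 1.0, 2.0],
                 [1.0, 2.0, 3.0],
                 [2.0, 3.0, 4.0]])
print(least_cost_path(cost, (2, 2)))
# [(2, 2), (1, 1), (0, 0)]
```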
Add the module PATHWAY to the model and right click on it. Make sure that multiple pathways is not selected. Note that it
requires the cost surface image be linked first, then the target image. Link COSTDISTANCE, then POWERLINE to PATHWAY.
Right-click on the output image and rename it NEWLINE. Save and run the model.
NEWLINE is the path that the new feeder power line should follow in order to incur the least cost, according to the friction values given. A
full cartographic model is shown in the figure below.
Figure 2: worcwest + friction (values file) → ASSIGN → friction; worcwest → INITIAL → blank; newplant (vector) + blank → POINTRAS → newplant; powerline (vector) + blank → LINERAS → powerline; newplant + friction → COST → costdistance; costdistance + powerline → PATHWAY → newline
For a final display, it would be nice to be able to display NEWPLANT, POWERLINE and NEWLINE all as vector layers on top of
WORCWEST. However, the output of PATHWAY is a raster image. We will convert the raster NEWLINE into a vector layer using the
module LINEVEC. To save time in creating the final product, we will do this outside the Modeler.
Select RASTERVECTOR from the Reformat menu. Select Raster to vector then Raster to line. The input image is NEWLINE and the output vector line file is NEWLINE.
The place where the new feeder line meets the existing power line is clearly the position for the new transformer substation. How
do you think PATHWAY determined that the feeder line should join here rather than somewhere else along the power line? (Read
carefully the module description for PATHWAY in the on-line Help System.)
What would be the result if PATHWAY were used on a Euclidean distance surface created using the module DISTANCE, with the
feature image NEWPLANT, and POWERLINE as the target feature?
In this exercise, we were introduced to cost distances as a way of modeling movement through space where various frictional elements act to
make movement more or less difficult. This is useful in modeling such variables as travel times and monetary costs of movement. We also saw
how the module PATHWAY may be used with a cost distance surface to find the least-cost path connecting the features from which cost
distances were calculated to other target features.
In addition, we learned how to convert vector data to raster for use with the analytical modules of TerrSet. Normally we would use the
module RASTERVECTOR, but in Macro Modeler this was accomplished with POINTRAS for point vector data and LINERAS for line vector
data. A third module, POLYRAS, is used for rasterization of vector polygon data. These modules require an existing image that will then be
updated with the vector information. INITIAL may be used to create a blank image to update. We also converted the raster output image of
the new power line to vector format for display purposes using the module LINEVEC. The modules POINTVEC and POLYVEC perform the
same raster to vector transformation for point and polygon vector files.
It is not necessary to save any of the images created in this exercise.
EXERCISE 2-6
MAP ALGEBRA
In Exercises 2-2 and 2-4, we used the OVERLAY module to perform Boolean (or logical) operations. However, this module can also be used
as a general arithmetic operator between images. This then leads to another important set of operations in GIS called Map Algebra.
Map Algebra refers to the use of images as variables in normal arithmetic operations. With a GIS, we can undertake full algebraic operations
on sets of images. In the case of TerrSet, mathematical operations are available through three modules: OVERLAY, TRANSFORM, and
SCALAR (and by extension through the Image Calculator, which includes the functionality of these three modules). While OVERLAY
performs mathematical operations between two images, SCALAR and TRANSFORM both act on a single image. SCALAR is used to
mathematically change every pixel in an image by a constant. For example, with SCALAR we can change a relief map from meters to feet by
multiplying every pixel in the image by 3.28084. TRANSFORM is used to apply a uniform mathematical transformation to every pixel in an
image. For example, TRANSFORM may be used to calculate the reciprocal (one divided by the pixel value) of an image, or to apply
logarithmic or trigonometric transformations.
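With numpy arrays standing in for images, the three kinds of operation look like this (the values are invented for illustration):

```python
import numpy as np

# numpy arrays stand in for images; the values are made up.
relief_m = np.array([[1500.0, 1820.0],
                     [2100.0, 1740.0]])

relief_ft = relief_m * 3.28084   # SCALAR: image combined with a constant
reciprocal = 1.0 / relief_m      # TRANSFORM: a function applied per pixel

rain = np.array([[900.0, 1100.0],
                 [1300.0, 1000.0]])
evapo = np.array([[1800.0, 1600.0],
                  [1400.0, 1500.0]])
moist_avail = rain / evapo       # OVERLAY: image combined with an image
```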
These three modules give us mathematical modeling capability. In this exercise, we will work primarily with SCALAR, OVERLAY, and Image
Calculator. We will also use a module called REGRESS, which evaluates relationships between images or tabular data to produce regression
equations. The mathematical operators will then be used to evaluate the derived equations. Those who are unfamiliar with regression
modeling are encouraged to further investigate this important tool by consulting a statistics text. We will also use the CROSSTAB module,
which produces a new image based on all the unique combinations of values from two images.
In this exercise, we will create an agro-climatic zone map for the Nakuru District in Kenya. The Nakuru District lies in the Great Rift Valley of
East Africa and contains several lakes that are home to immense flocks of pink flamingos.
Display the image NRELIEF. This is a digital elevation model for the area. The Rift Valley appears in the dark black and blue colors, and is flanked by higher elevations
shown in shades of green.
An agro-climatic zone map is a basic means of assessing the climatic suitability of geographical areas for various agricultural alternatives. Our
final image will be one in which every pixel is assigned to its proper agro-climatic zone according to the stated criteria.
1. For this exercise, make sure your User Preferences are set to the default values by opening File/User Preferences and pressing the Revert to Defaults button. Click OK to save the settings.
The approach illustrated here is a very simple one adapted from the 1:1,000,000 Agro-Climatic Zone Map of Kenya (1980, Kenya Soil Survey,
Ministry of Agriculture). It recognizes that the major aspects of climate that affect plant growth are moisture availability and temperature.
Moisture availability is an index of the balance between precipitation and evaporation, and is calculated using the following equation:
moisture availability = mean annual rainfall / potential evaporation
While important agricultural factors such as length and intensity of the rainy and dry seasons and annual variation are not accounted for in
this model, this simpler approach does provide a basic tool for national planning purposes.
The agro-climatic zones are defined as specific combinations of moisture availability zones and temperature zones. The value ranges for these
zones are shown in the table below.
Moisture Availability Zone      Temperature Zone (°C)
<0.15                           <10
0.15 - 0.25                     10 - 12
0.25 - 0.40                     12 - 14
0.40 - 0.50                     14 - 16
0.50 - 0.65                     16 - 18
0.65 - 0.80                     18 - 20
>0.80                           20 - 22
                                22 - 24
                                24 - 30

Table 1
2. The term potential evaporation indicates the amount of evaporation that would occur if moisture were unlimited. Actual evaporation may be less than this, since there may be dry periods in which there is simply no moisture available to evaporate.
For Nakuru District, the area shown in the image NRELIEF, three data sets are available to help us produce the agro-climatic zone map:
i) an elevation model of the district (NRELIEF);
ii) a mean annual rainfall image (NRAIN);
iii) elevation and temperature records for nine weather stations in the area.
In addition to these data, we have a published equation relating potential evaporation to elevation in Kenya.
Let's see how these pieces fit into a conceptual cartographic model illustrating how we will produce the agro-climatic zones map. We know
the final product we want is a map of agro-climatic zones for this district, and we know that these zones are based on the temperature and
moisture availability zones defined in Table 1. We will therefore need to have images representing the temperature zones (which we'll call
TEMPERZONES) and moisture availability zones (MOISTZONES). Then we will need to combine them such that each unique combination
of TEMPERZONES and MOISTZONES has a unique value in the result, AGROZONES. The module CROSSTAB is used to produce an
output image in which each unique combination of input values has a unique output value.
To produce the temperature and moisture availability zone images, we will need to have continuous images of temperature and moisture
availability. We will call these TEMPERATURE and MOISTAVAIL. These images will be reclassified according to the ranges given in Table 1
to produce the zone images. The beginning of the cartographic model is constructed in the figure below.
Figure 1: moistavail → RECLASS → moistzones; temperature → RECLASS → temperzones; moistzones + temperzones → CROSSTAB → agrozones
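The unique-combination step that CROSSTAB performs can be sketched with numpy (the zone values here are invented for illustration):

```python
import numpy as np

# Sketch of CROSSTAB's image output: give every unique combination of
# values from the two input zone images its own new integer ID.
def crosstab_image(a, b):
    pairs = np.stack([a.ravel(), b.ravel()], axis=1)
    _, inverse = np.unique(pairs, axis=0, return_inverse=True)
    return (np.asarray(inverse) + 1).reshape(a.shape)  # IDs start at 1

moistzones = np.array([[1, 1],
                       [2, 3]])
temperzones = np.array([[4, 5],
                        [4, 4]])
print(crosstab_image(moistzones, temperzones))
# [[1 2]
#  [3 4]]
```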
Unfortunately, neither the temperature image nor the moisture availability image is in the list of available data; we will need to derive them
from other data.
The only temperature information we have for this area is from the nine weather stations. We also have information about the elevation of
each weather station. In much of East Africa, including Kenya, temperature and elevation are closely correlated. We can evaluate the
relationship between these two variables for our nine data points, and if it is strong, we can then use that relationship to derive the
temperature image (TEMPERATURE) from the available elevation image.3
The elements needed to produce TEMPERATURE have been added to this portion of the cartographic model in the figure below. Since we do
not yet know the exact nature of the relationship that will be derived between elevation and temperature, we cannot fill in the steps for that
portion of the model. For now, we will indicate that there may be more than one step involved by leaving the module as unknown (????).
A later tutorial exercise on Geostatistics presents another method for developing a full raster surface from point data.
Figure 2: NRELIEF is transformed into TEMPERATURE by one or more as-yet-unknown steps (????), using a relationship derived from the weather station elevation and temperature data.
Now let's think about the moisture availability side of the problem. In the introduction to the problem, moisture availability was defined as
the ratio of rainfall and potential evaporation. We will need an image of each of these, then, to produce MOISTAVAIL. As stated at the
beginning of this exercise, OVERLAY may be used to perform mathematical operations, such as the ratio needed in this instance, between
two images.
We already have a rainfall image (NRAIN) in the available data set, but we don't have an image of potential evaporation (EVAPO). We do
have, however, a published relationship between elevation and potential evaporation. Since we already have the elevation model, NRELIEF,
we can derive a potential evaporation image using the published relationship. As before, we won't know the exact steps required to produce
EVAPO until we examine the equation. For now, we will indicate that there may be more than one operation required by showing an
unknown module symbol in that portion of the cartographic model in the figure below.
Figure 3: NRELIEF is transformed into EVAPO by one or more as-yet-unknown steps (????) using the published relationship; NRAIN and EVAPO are then combined with OVERLAY to produce MOISTAVAIL.
Now that we have our analysis organized in a conceptual cartographic model, we are ready to begin performing the operations with the GIS.
Our first step will be to derive the relationship between elevation and temperature using the weather station data, which are presented in the
table below.
Station Number    Elevation (ft)    Mean Annual Temperature (°C)
1                 7086.00           15.70
2                 7342.00           14.90
3                 8202.00           13.70
4                 9199.00           12.40
5                 6024.00           18.20
6                 6001.00           16.80
7                 6352.00           16.30
8                 7001.00           16.30
9                 6168.00           17.20
We can see the nature of the relationship from an initial look at the numbers: the higher the elevation of the station, the lower the mean
annual temperature. However, we need an equation that describes this relationship more precisely. A statistical procedure called regression
analysis will provide this. In TerrSet, regression analysis is performed by the module REGRESS.
REGRESS analyzes the relationship either between two images or two attribute values files. In our case, we have tabular data and from it we
can create two attribute values files using Edit. The first values file will list the stations and their elevations, while the second will list the
stations and their mean annual temperatures.
Use Edit from the Data Entry menu, first to create the values file ELEVATION, then again to create the values file
TEMPERATURE. Remember that each file must have two columns separated by one or more spaces. The left column must
contain the station numbers (1-9) while the right column contains the attribute data. When you save each values file, choose Real
as the Data Type.
When you have finished creating the values files, run REGRESS from the GIS Analysis/Statistics menu. (Because the output of
REGRESS is an equation and statistics rather than a data layer, it cannot be implemented in the Macro Modeler.) Indicate that it is
a regression between values files. You must specify the names of the files containing the independent and dependent variables.
The independent variable will be plotted on the X axis and the dependent variable on the Y axis. The linear equation derived from
the regression will give us Y as a function of X. In other words, for any known value of X, the equation can be used to calculate a
value for Y. We later want to use this equation to develop a full image of temperature values from our elevation image. Therefore
we want to give ELEVATION as the independent variable and TEMPERATURE as the dependent variable. Press OK.
REGRESS will plot a graph of the relationship and its equation. The graph provides us with a variety of information. First, it shows the sample
data as a set of point symbols. By reading the X and Y values for each point, we can see the combination of elevation and temperature at each
station. The regression trend line shows the "best fit" of a linear relationship to the data at these sample locations. The closer the points are to
the trend line, the stronger the relationship. The correlation coefficient ("r") next to the equation tells us the same numerically. If the line is
sloping downwards from left to right, "r" will have a negative value indicating a "negative" or "inverse" relationship. This is the case with our
data since as elevation increases, temperature decreases. The correlation coefficient can vary from -1.0 (strong negative relationship) to 0 (no
relationship) to +1.0 (strong positive relationship). In this case, the correlation coefficient is -0.9652, indicating a very strong inverse
relationship between elevation and temperature for these nine locations.
The equation itself is a mathematical expression of the line. In this example, you should have arrived (with rounding) at the following
equation:
Y = 26.985 - 0.0016 X
The equation is that of a line, Y = a + bX, where a is the Y axis intercept and b is the slope. X is the independent variable and Y is the
dependent variable.
In effect, this equation is saying that you can predict the temperature at any location within this region if you take the elevation in feet,
multiply it by -0.0016, and add 26.985 to the result. This then is our "model":
TEMPERATURE = 26.985-0.0016 * [NRELIEF]
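As a check on this step, the regression can be reproduced outside TerrSet. The sketch below, assuming NumPy is available, fits the same least-squares line to the nine station records listed above (stations are assumed to be numbered 1-9 in the order given):

```python
import numpy as np

# Station data from the table above: elevation in feet and mean annual
# temperature, stations 1-9 in the order listed.
elevation = np.array([7086, 7342, 8202, 9199, 6024,
                      6001, 6352, 7001, 6168], dtype=float)
temperature = np.array([15.7, 14.9, 13.7, 12.4, 18.2,
                        16.8, 16.3, 16.3, 17.2])

# Least-squares fit of temperature (Y) on elevation (X): Y = a + bX
b, a = np.polyfit(elevation, temperature, 1)

# Correlation coefficient (r)
r = np.corrcoef(elevation, temperature)[0, 1]

print(f"Y = {a:.3f} {b:+.4f} X")  # Y = 26.985 -0.0016 X
print(f"r = {r:.4f}")             # r = -0.9652
```

Rounded to the precision shown, this reproduces both the equation and the correlation coefficient reported by REGRESS.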
You may now close the REGRESS display. This model can be evaluated with either SCALAR (in or outside the Macro Modeler) or
Image Calculator. In this case, we will use Image Calculator to create TEMPERATURE.4 Open Image Calculator from the IDRISI
GIS Analysis/Mathematical Operators menu. We will create a Mathematical Expression. Type in TEMPERATURE as the output
image name. Tab or click into the Expression to process input box and type in the equation as shown above. When you are ready
to enter the filename NRELIEF, you can click the Insert Image button and choose the file from the Pick List. Entering filenames in
this manner ensures that square brackets are placed around the filename. When the entire equation has been entered, press Save
Expression and give the name TEMPER. (We are saving the expression in case we need to return to this step. If we do, we can
simply click Open Expression and run the equation without having to enter it again.) Then click Process Expression.
The resulting image should look very similar to the relief map, except that the values are reversed: high temperatures are found in the Rift Valley, while low temperatures are found in the higher elevations. To verify this, drag the TEMPERATURE window such that you can see both it and NRELIEF.

4 If you were evaluating this portion of the model in Macro Modeler, you would need to use SCALAR twice: first to multiply NRELIEF by -0.0016 to produce an output file, then again with that result to add 26.985.
Now that we have a temperature map, we need to create the second map required for agro-climatic zoning: a moisture availability map. As
stated above, moisture availability can be approximated by dividing the average annual rainfall by the average annual potential evaporation.
We have the rainfall image NRAIN already, but we need to create the evaporation image. The relationship between elevation and potential
evaporation has been derived and published by Woodhead (1968, Studies of Potential Evaporation in Kenya, EAAFRO, Nairobi) as follows:
Eo(mm) = 2422 - 0.109 * elevation(feet)
We can therefore use the relief image to derive the average annual potential evaporation (Eo).
As with the earlier equation, we could evaluate this equation using SCALAR or Image Calculator. Again use Image Calculator to
create a mathematical expression. Enter EVAPO as the output filename, then enter the following as the expression to process.
(Remember that you can press the Insert Image button to bring up a Pick List of files rather than typing in the filename directly.)
2422 - (0.109*[NRELIEF])
Press Save Expression and give the filename MOIST. Then press Process Expression.
We now have both of the pieces required to produce a moisture availability map. We will build a model in the Macro Modeler for
the rest of the exercise. Open Macro Modeler and place the images NRAIN and EVAPO and the module OVERLAY. Connect the
two images to the module, connecting NRAIN first. Right-click on the OVERLAY module and select the Ratio (zero option)
operation. Close module parameters then right-click on the output image and call it MOISTAVAIL. Save the model as Exer2-6
then run it.
The resulting image has values that are unitless, since we divided rainfall in mm by potential evaporation which is also in mm.
When the result is displayed, examine some of the values using the Identify tool. The values in MOISTAVAIL indicate the balance
between rainfall and evaporation. For example, if a cell has a value of 1.0 in the result, this would indicate that there is an exact
balance between rainfall and evaporation.
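The Ratio (zero option) used by OVERLAY can be sketched with a small array example (hypothetical values, assuming NumPy): cells where the denominator is zero receive 0 rather than causing a division error.

```python
import numpy as np

# Toy 2x3 grids standing in for NRAIN (mm) and EVAPO (mm); the values
# are illustrative, not from the tutorial data set.
nrain = np.array([[900.0, 1200.0, 600.0],
                  [1500.0, 800.0, 0.0]])
evapo = np.array([[1200.0, 1200.0, 1500.0],
                  [1000.0, 0.0, 1100.0]])

# OVERLAY's Ratio (zero option): where the denominator is 0, the
# output cell is assigned 0 instead of failing.
moistavail = np.divide(nrain, evapo,
                       out=np.zeros_like(nrain), where=evapo != 0)

print(moistavail.tolist())  # [[0.75, 1.0, 0.4], [1.5, 0.0, 0.0]]
```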
What would a value greater than 1 indicate? What would a value less than 1 indicate? (A value greater than 1 indicates that NRAIN is higher than EVAPO: a positive moisture balance.)
At this point, we have all the information we need to create our agro-climatic zone (ACZONES) map. The government of Kenya uses the
specific classes of temperature and moisture availability that were listed in Table 1 to form zones of varying agricultural suitability. Our next
step is therefore to divide our temperature and moisture availability surfaces into these specific classes. We will then find the various
combinations that exist for Nakuru District.
Place the RECLASS module in the model and connect the input image MOISTAVAIL. Right click on the output image and
rename it MOISTZONES. Right click the RECLASS symbol. As we saw in earlier exercises, RECLASS requires a text .rcl file that
defines the reclassification thresholds. The easiest way to construct this file is to use the main RECLASS dialog. Close Module
Parameters.
Open RECLASS from IDRISI GIS Analysis/Database Query. There is no need to enter filenames. Just enter the values as shown
for moisture zones in Table 1, then press Save as .RCL file. Give the filename MOISTZONES. Then right click to open the
RECLASS module parameters in the model and enter MOISTZONES as the .rcl file. Save and run the model.
Change the MOISTZONES display to use the Default Quantitative palette and equal interval autoscaling.
How many moisture availability zones are in the image? Why is this different from the number of zones given in the table? (If you
are having trouble answering this, you may wish to examine the documentation file of MOISTAVAIL.)
The information we have concerning these zones is published for use in all regions of Kenya. However, our study area is only a small part of
Kenya. It is therefore not surprising that some of the zones are not represented in our result.
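Conceptually, RECLASS maps ranges of continuous values to integer zone identifiers. The sketch below, assuming NumPy and purely hypothetical class boundaries (the actual Table 1 ranges are not reproduced in this excerpt), shows the same operation with np.digitize:

```python
import numpy as np

# Hypothetical moisture availability boundaries standing in for Table 1:
# Zone 1: < 0.25, Zone 2: 0.25 to < 0.50, Zone 3: 0.50 to < 1.0,
# Zone 4: >= 1.0
bounds = [0.25, 0.50, 1.0]

moistavail = np.array([[0.10, 0.40, 0.75],
                       [1.20, 0.50, 0.99]])

# np.digitize returns, for each cell, the index of the range it falls
# in; adding 1 gives zone identifiers starting at 1, as RECLASS would.
moistzones = np.digitize(moistavail, bounds) + 1
print(moistzones.tolist())  # [[1, 2, 3], [4, 3, 3]]
```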
Next we will follow a similar procedure to create the temperature zone map. Before doing so, however, first check the minimum
and maximum values in TEMPERATURE to avoid any wasted reclassification steps. Highlight the TEMPERATURE raster layer
in the model, then click the Describe icon on the Macro Modeler toolbar. Use Check for the minimum and maximum data values
in TEMPERATURE. Then use the main RECLASS dialog again to create an .rcl file called TEMPERZONES with the ranges given
in Table 1.
Place another RECLASS model element and rename the output file TEMPERZONES. Link TEMPERATURE as the input file and
right-click to open the module parameters. Enter the .rcl file, TEMPERZONES, that you just created.
Now that we have images of temperature zones and moisture availability zones, we can combine these to create agro-climatic zones. Each
resulting agro-climatic zone should be the result of a unique combination of temperature zone and moisture zone.
Previously we used OVERLAY to combine two images. Given the criteria for the final image, why can't we use OVERLAY for this
final step?
The operation that assigns a new identifier to every distinct combination of input classes is known as cross-classification. In
TerrSet, this is provided with the module CROSSTAB. Place the module CROSSTAB into the model. Link TEMPERZONES first
and MOISTZONES second. Right-click on the output image and rename it AGROZONES. Then right-click on CROSSTAB to
open the module parameters. (Note that the CROSSTAB module when run from the main dialog offers several additional output
options that are not available when used in the Macro Modeler.)
The cross-classification image shows all of the combinations of moisture availability and temperature zones in the study area. Notice that the
legend for AGROZONES explicitly shows these combinations in the same order as the input image names appear in the title.
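The cross-classification itself can be sketched outside TerrSet. Assuming NumPy and toy zone grids (illustrative values, not the tutorial data), each distinct pair of input classes is renumbered 1..n, which is what CROSSTAB does:

```python
import numpy as np

# Toy zone grids; values are illustrative.
temperzones = np.array([[1, 1, 2],
                        [2, 3, 3]])
moistzones = np.array([[1, 2, 2],
                       [1, 1, 2]])

# Encode each (temperature, moisture) pair as a single number, then
# renumber the distinct pairs 1..n, one new class per unique combination.
pairs = temperzones * (moistzones.max() + 1) + moistzones
_, inverse = np.unique(pairs.ravel(), return_inverse=True)
agrozones = inverse.reshape(pairs.shape) + 1

print(agrozones.tolist())  # [[1, 2, 4], [3, 5, 6]]
```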
Figure 4 shows one way the model could be constructed in Macro Modeler. (Your model should have the same data and command elements, but may be arranged differently.)
In this exercise, we used Image Calculator and OVERLAY to perform a variety of basic mathematical operations. We used images as variables
in evaluating equations, thereby deriving new images. This sort of mathematical modeling (also termed map algebra), in conjunction with
database query, form the heart of GIS. We were also introduced to the module CROSSTAB, which creates a new image based on the
combination of classes in two input images.
Optional Problem
The agro-climatic zones we have just delineated have been studied by geographers to determine the optimal agricultural activity for each
combination. For example, it has been determined that areas suitable for the growing of pyrethrum, a plant cultivated for use in insect
repellents, are those defined by combinations of temperature zones 6-8 and moisture availability zones 1-3.
Create a map showing the regions suitable for the growth of pyrethrum.
There are several ways to create a map of areas suitable for pyrethrum. Describe how you made your map.
We will not use any of the images created in this exercise for later exercises, so you may delete them all if you like, except for the original data
files NRAIN and NRELIEF.
This completes the GIS tools exercises of the Introductory GIS section of the Tutorial. Database query, distance operators, context operators,
and the mathematical operators of map algebra provide the tools you will use again and again in your analyses.
We have made heavy use of the Macro Modeler in these exercises. However, you may find as you are learning the system that the organization
of the main menu will help you understand the relationships and common uses for the modules that are listed alphabetically in the Modeler.
Therefore, we encourage you to explore the module groupings in the menu as well. In addition, some modules cannot be used in the Modeler
(e.g., REGRESS) and others (e.g., CROSSTAB) have additional capabilities when run from the menu.
The remaining exercises in this section concentrate on the role of GIS in decision support, particularly regarding suitability mapping.
EXERCISE 2-7
MCE: CRITERIA DEVELOPMENT AND THE
BOOLEAN APPROACH
The next five exercises will explore the use of GIS as a decision support system. Although techniques will be discussed that can enhance many
types of decision making processes, the emphasis will be placed on the use of GIS for suitability mapping and resource allocation decisions.
These decisions are greatly assisted by GIS tools because they often involve a variety of criteria that can be represented as layers of geographic
data. Multi-criteria evaluation (MCE) is a common method for assessing and aggregating many criteria. However, its full potential is only
recently being realized.
An important first step in understanding MCE is to develop a common language in which to present and approach such methods. If the
reader has not already done so, the chapter on Decision Support: Decision Strategy Analysis in the TerrSet Manual should be reviewed.
The language presented there will be used in these exercises.
In this next set of exercises we will explore a variety of MCE techniques. In this exercise, criteria will be developed and standardized, and a
simple Boolean aggregation method will be used to arrive at a solution. The following two exercises explore more flexible and sophisticated
aggregation methods. Exercise 2-8 illustrates the use of the Weighted Linear Combination (WLC), while Exercise 2-9 introduces the Ordered
Weighted Averaging (OWA) aggregation technique. Exercise 2-10 addresses issues of site selection, particularly regarding spatial contiguity
and minimum site area requirements. The final exercise in this set, Exercise 2-11, expands the problem to include more than one objective
and uses multi-objective allocation procedures to produce a final solution.
Often, some of the data layers developed in an exercise will be used in subsequent exercises. At the end of each exercise, you will be told which
layers must be kept. However, if possible, you may wish to keep all the data layers you develop for this set of exercises to facilitate further
independent exploration of the techniques presented.
To demonstrate the different ways criteria can be developed, as well as the variety of MCE procedures available, the first four exercises of this
series will concentrate on a single hypothetical suitability problem. The objective is to find the most suitable areas for residential development
in the town of Westborough, Massachusetts, USA. The town is located very near two large metropolitan areas and is a prime location for
residential (semi-rural) development.
Change your Working Folder to the TerrSet Tutorial\MCE folder using TerrSet Explorer.
Display the image MCELANDUSE using the user-defined MCELANDUSE palette. Choose to display the legend and title map
components. Use Add Layer to add the streams layer MCESTREAMS with the user-defined BLUE symbol file and the roads layer
MCEROADS with the default Outline Black symbol file.
As you can see, the town of Westborough and its immediate vicinity are quite diverse. The use of GIS will make the identification of suitable
lands more manageable.
Because of the prime location, developers have been heavily courting the town administrators in an effort to have areas that best suit their
needs allocated for residential development. However, environmental groups also have some influence on where new development will or will
not occur. The environmentally-mixed landscape of Westborough includes many areas that should be preserved as open space and for
wildlife. Finally, the town of Westborough has some specific regulations already in place that will limit land for development. All these
considerations must be incorporated into the decision making process.
This problem fits well into an MCE scenario. The goal is to explore potential suitable areas for residential development for the town of
Westborough: areas that best meet the needs of all groups involved. The town administrators are collaborating with both developers and
environmentalists and together they have identified several criteria that will assist in the decision making process. This is the first step in the
MCE process, identifying and developing criteria.
Constraints
The town's building regulations are constraints that limit the areas available for development. Let's assume new development cannot occur
within 50 meters of open water bodies, streams, and wetlands.
To create this image, information about open water bodies, streams, and wetlands was brought into the database. The open water data was
extracted from the land use map, MCELANDUSE. The streams data came from a USGS DLG file that was imported then rasterized. The
wetlands data used here were developed from classification of a SPOT satellite image. These three layers were combined to produce the
resultant map of all water bodies, MCEWATER.1
This is a Boolean image of the 50 m buffer zone of protected areas around the features in MCEWATER. Areas that should not be considered
are given the value 0 while those that should be considered are given the value 1. When the constraints are multiplied with the suitability map,
areas that are constrained are masked out (i.e., set to 0), while those that are not constrained retain their suitability scores.
In addition to the legal constraint developed above, new residential development will be constrained by current land use; new development
cannot occur on already developed land.
Look at MCELANDUSE again. (You can quickly bring any image to focus by choosing it from the Window List menu.) Clearly
some of these categories will be unavailable for residential development. Areas that are already developed, water bodies, and large
transportation corridors cannot be considered suitable to any degree.
Display LANDCON, a Boolean image produced from MCELANDUSE such that areas that are suitable have a value of 1 and areas
that are unsuitable for residential development have a value of 0.2
Now we will turn our attention to the continuous factor maps. Of the following six factors, the first four are relevant to building costs while
the latter two concern wildlife habitat preservation.
1 The wetlands data is in the image MCEWETLAND in MCESUPPLEMENTAL.ZIP. The streams data is the vector file MCESTREAMS we used earlier in this exercise.
2 MCELANDUSE categories 1-4 are considered suitable and categories 5-13 are constrained.
Factors
Having determined the constraining criteria, the more challenging process for the administrators was to identify the criteria that would
determine the relative suitability of the remaining areas. These criteria do not absolutely constrain development, but are factors that enhance
or detract from the relative suitability of an area for residential development.
For developers, these criteria are factors that determine the cost of building new houses and the attractiveness of those houses to purchasers.
The feasibility of new residential development is determined by factors such as current land use type, distance from roads, slopes, and
distance from the town center. The cost of new development will be lowest on land that is inexpensive to clear for housing, near to roads, and
on low slopes. In addition, building costs might be offset by higher house values closer to the town center, an area attractive to new home
buyers.
The first factor, that relating the current land use to the cost of clearing land, is essentially already developed in the MCELANDUSE image.
All that remains is to transform the land use category values into suitability scores. This will be addressed in the next section.
The second factor, distance from roads, is represented with the image ROADDIST. This is an image of simple linear distance from all roads in
the study area. This image was derived by rasterizing and using the module DISTANCE with the vector file of roads for Westborough.
The image TOWNDIST, the third factor, is a cost distance surface that can be used to calculate travel time from the town center. It was
derived from two vector files, the roads vector file and a vector file outlining the town center.
The final factor related to developers' financial concerns is slope. The image SLOPES was derived from an elevation model of Westborough.3
Examine the images ROADDIST, TOWNDIST, and SLOPES using the Default Quantitative palette. Display MCELANDUSE with the Default Qualitative palette.
What are the units of the values in each of these continuous factors? Are they comparable?
Can categorical data (such as land use) be thought of in terms of continuous suitability? How?
While the factors above are important to developers, there are other factors to be considered, namely those important to environmentalists.
Environmentalists are concerned about groundwater contamination from septic systems and other residential non-point source pollution.
Although we do not have data for groundwater, we can use open water, wetlands, and streams as surrogates (i.e., the image MCEWATER).
Distance from these features has been calculated and can be found in the image WATERDIST. Note that a buffer zone of 50 meters around
the same features was considered an absolute constraint above. This does not preclude also using distance from these features as a factor in an
attempt by environmentalists to locate new development even further from such sensitive areas (i.e., development MUST be at least 50 meters
from water, but the further the better).
3 MCEROAD, a vector file of roads; MCECENTER, a vector file showing the town center; and MCEELEV, an image of elevation, can all be found in the compressed file MCESUPPLEMENTAL. The cost-distance calculation used the cost grow option and a friction surface where roads had a value of 1 and off-road areas had a value of 3.
The last factor to be considered is distance from already-developed areas. Environmentalists would like to see new residential development
near currently-developed land. This would maximize open land in the town and retain areas that are good for wildlife distant from any
development. Distance from developed areas, DEVELOPDIST, was created from the original land use image.
Examine the images WATERDIST and DEVELOPDIST using the Default Quantitative palette.
What are the units of the values in each of these continuous factors? Are they comparable with each other?
We now have the eight images that represent criteria to be standardized and aggregated using a variety of MCE approaches. The Boolean
approach is presented in this exercise while the following two exercises address other approaches. Regardless of the approach used, the
objective is to create a final image of suitability for residential development.
Display a Boolean image called LANDBOOL. It was created from the land use map MCELANDUSE using the RECLASS module.
In the LANDBOOL image, suitable areas have a value of 1 and unsuitable areas have a value of 0.
Display a Boolean image called ROADBOOL. It was created using RECLASS with the continuous distance image, ROADDIST. In
this image, areas within 400 meters of a road have a value of 1 and those beyond 400 meters have a value of 0.
Display a Boolean image called TOWNBOOL. It was created from the cost distance image TOWNDIST. In the new image, a value
of 1 is given to areas within 10 minutes of the town center.
Slope Factor
Because relatively low slopes make housing and road construction less expensive, we reclassified our slope image so that those areas with a
slope less than 15% are considered suitable and those equal to or greater than 15% are considered unsuitable.
Display a Boolean image called SLOPEBOOL. It was created from the slope image SLOPES.
Display a Boolean image called WATERBOOL. It was created from the distance image called WATERDIST. In the Boolean
image, suitable areas have a value of 1.
Display a Boolean image called DEVELOPBOOL. It was created from DEVELOPDIST by assigning a value of 1 to areas less than
300 meters from developed land.
Open the DISPLAY Launcher dialog box and invoke the Pick List. You should see the filename MCEBOOLGROUP in the list
with a small plus sign next to it. This is an image group file that has already been created using TerrSet Explorer. Click on the plus
sign to see a list of the files that are in the group. Choose MCEBOOL and press OK. Note that the filename shown in the
DISPLAY Launcher input box is MCEBOOLGROUP.MCEBOOL. Choose the Default Qualitative palette and click OK to display
MCEBOOL.4
Add all the images in the same group file to the same map window. Then use the Identify tool from its toolbar and explore the values across the MCEBOOLGROUP at any pixel location.
What must be true of all criterion images for MCEBOOL to have a value of 1? Is there any indication in MCEBOOL of how many criteria were met at any particular location?
For those areas with the value 1, is there any indication which were better than others in terms of distance from roads, etc.? If more suitable land has been identified than is required, how would one now choose between the alternatives of suitable areas for development?
4 The interactive tools for group files (Group Link and the Identify tool) are only available when the image(s) have been displayed as members of the group, with the full dot logic name. If you display MCEBOOL without its group reference, it will not be recognized as a group member.
Display the BOOLOR image using the Default Qualitative palette. It was created using the logical OR operation in Image
Calculator. You can see that almost the entire image is mapped as suitable when the Boolean OR aggregation is used.
Describe BOOLOR. Can you think of a way to use the Boolean factors to create a suitability image that lies somewhere between
the extremes of AND and OR in terms of risk?
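The two aggregation extremes can be sketched with toy Boolean layers (hypothetical values, assuming NumPy). AND takes the minimum across the criteria, OR takes the maximum, and a simple count of criteria met is one example of a result that lies between the extremes:

```python
import numpy as np

# Three toy Boolean criterion layers (1 = suitable); values are
# illustrative, not the Westborough data.
factors = np.array([
    [[1, 1, 0], [1, 0, 1]],
    [[1, 0, 0], [1, 1, 1]],
    [[1, 1, 0], [0, 1, 1]],
])

# Boolean AND: every criterion must be met (the risk-averse extreme).
mce_and = factors.min(axis=0)
# Boolean OR: any one criterion suffices (the risk-taking extreme).
mce_or = factors.max(axis=0)
# One intermediate option: count how many criteria each cell meets.
count = factors.sum(axis=0)

print(mce_and.tolist())  # [[1, 0, 0], [0, 0, 1]]
print(mce_or.tolist())   # [[1, 1, 0], [1, 1, 1]]
print(count.tolist())    # [[3, 2, 0], [2, 2, 3]]
```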
The exercises that follow will use other standardization and aggregation procedures that will allow us to alter the level of both tradeoff and
risk. The results will be images of continuous suitability rather than strict Boolean images of absolute suitability or non-suitability.
Criterion Importance
Another limitation of the simple Boolean approach we used here is that all factors have equal importance in the final suitability map. This is
not likely to be the case. Some criteria may be very important to determining the overall suitability for an area while others may be of only
marginal importance. This limitation can be overcome by weighting the factors and aggregating them with a weighted linear average or WLC.
The weights assigned govern the degree to which a factor can compensate for another factor. While this could be done with the Boolean
images we produced, we will leave the exploration of the WLC method for the next exercise.
selection, suitable but small sites are not appropriate. This problem of contiguity can be addressed by adding a post-aggregation constraint such as "suitable areas must also be at least 20 hectares in size." This constraint would be applied after all suitable locations (of any size) are found.
Do not delete any images used or created in this exercise. They will be used in the following exercises.
EXERCISE 2-8
MCE: NON-BOOLEAN STANDARDIZATION
AND WEIGHTED LINEAR COMBINATION
The following exercise builds upon concepts discussed in the Decision Support: Decision Strategy Analysis chapter of the TerrSet Manual
as well as the previous exercise of the Tutorial. This exercise introduces a second method of standardization in which factors are not reduced
to simple Boolean constraints. Instead, they are standardized to a continuous scale of suitability from 0 (the least suitable) to 255 (the most
suitable). Rescaling our factors to a standard continuous scale allows us to compare and combine them, as in the Boolean case. However, we
will avoid the hard Boolean decision of defining any particular location as absolutely suitable or not for a given criterion. In this exercise we will use a soft or fuzzy concept to give all locations a value representing their degree of suitability. Our constraints, however, will retain their
hard Boolean character.
We will also use a different aggregation method, the Weighted Linear Combination (WLC). This aggregation procedure not only allows us to
retain the variability from our continuous factors, it also gives us the ability to have our factors trade off with each other. A low suitability
score in one factor for any given location can be compensated for by a high suitability score in another factor. How factors tradeoff with each
other will be determined by a set of Factor Weights that indicate the relative importance of each factor. In addition, this aggregation
procedure moves the analysis well away from the extreme risk aversion of the Boolean AND operation. As we will see, WLC is an averaging
technique that places our analysis exactly halfway between the AND (minimum) and OR (maximum) operations, i.e., neither extreme risk
aversion nor extreme risk taking.
In this and subsequent exercises we will use the decision support modules FUZZY, MCE, WEIGHT and MOLA. We encourage you to
become familiar with the main interfaces of these modules. Each is used to build a full decision support model. In this exercise we will build a
model to identify sites for residential development.
The first step in developing a decision support model is to identify and develop the criteria, both constraints and factors. We have already
developed the two constraint maps for you, water and land use.
The water constraint map excludes areas within 50 meters of water sources and the land use constraint will limit residential development to
appropriate land use.
The next step is to create the six factor maps that will decide areas of suitability for residential development. The six images used in the
previous exercise to create the Boolean factor images will be standardized to continuous factors using the module FUZZY.
Creating a quantitative factor from a qualitative input image can be done using Edit/ASSIGN or RECLASS in order to give each
land use category a suitability value. Display the image called LANDFUZZ. It is a standardized factor map derived from the image
MCELANDUSE. On the continuous (i.e., fuzzy) 0.0-1.0 scale we gave a suitability rating of 1.0 to Forested Land, 0.75 to Open
Undeveloped land, 0.50 to areas under Pasture, 0.30 to Cropland, and 0.0 to all other categories.
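Conceptually, the ASSIGN-style reclassification behind LANDFUZZ is just a table lookup. The Python sketch below uses hypothetical category codes (the actual codes are in the MCELANDUSE legend) and is an illustration, not TerrSet's code:

```python
import numpy as np

# Hypothetical category codes; the real codes are in the MCELANDUSE legend.
# The suitability values are the ones used to create LANDFUZZ.
suitability = {0: 0.0,   # all other categories
               1: 1.0,   # Forested Land
               2: 0.75,  # Open Undeveloped land
               3: 0.50,  # Pasture
               4: 0.30}  # Cropland

# A tiny stand-in for the MCELANDUSE raster.
mcelanduse = np.array([[1, 2, 4],
                       [3, 0, 1]])

# Build a lookup table indexed by category code, then remap every cell.
lut = np.zeros(max(suitability) + 1)
for code, value in suitability.items():
    lut[code] = value
landfuzz = lut[mcelanduse]
```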
We now need to create the remaining standardized factor maps using the FUZZY module. Standardization is necessary to transform the
disparate measurement units of the factor images into comparable suitability values. The selection of parameters for this standardization
relies upon the user's knowledge of how suitability changes for each factor. When using FUZZY, it is important to know the minimum and
maximum data values for the input image. A fuzzy membership function shape and type must be specified and control point values are
entered based on the input image minimum and maximum data values. Below is a description of the standardization criteria used with each
factor image.
See the Decision Support chapter in the TerrSet Manual for a detailed discussion of fuzzy set membership functions.
The TOWNDIST image was created using the COST module. Using the roads as the source features and a friction image based on road types,
this module derived a cost distance surface representing relative travel time to the town center. The assumption is that areas that are more
accessible to the amenities of the town center will be more suitable for residential development.
Open the module FUZZY. Set the membership function type to linear. Enter TOWNDIST as the input file and enter
TOWNFUZZ as the output name. Set the output format to real and choose the monotonically decreasing linear function. Enter
0 for control point c and 582 for control point d. These values are the minimum (0) and maximum (582) distance values
found in our cost distance image. Click OK to run.
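The linear decreasing function used here is simple enough to sketch outside TerrSet. The Python fragment below (an illustration of FUZZY's behavior, not its code) rescales distances using control points c and d exactly as described:

```python
import numpy as np

def fuzzy_linear_decreasing(x, c, d):
    """Suitability 1.0 at or below control point c, falling linearly
    to 0.0 at control point d, and 0.0 beyond it."""
    return np.clip((d - x) / (d - c), 0.0, 1.0)

# Cost distances of 0 (town center) and 582 (farthest cell) map to
# the extremes of the 0.0-1.0 suitability scale.
towndist = np.array([0.0, 291.0, 582.0])
townfuzz = fuzzy_linear_decreasing(towndist, c=0.0, d=582.0)
```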
Environmentalists prefer to see residential development even further from these water bodies. However, a distance of 800 meters might be just
as good as a distance of 1000 meters. Suitability may not linearly increase with distance.
In our case study, suitability is very low within 100 meters of water. Beyond 100 meters, all parties agree that suitability increases with
distance. However, environmentalists point out that the benefits of distance level off to maximum suitability at approximately 800 meters.
Beyond 800 meters, suitability is again equal. This function cannot be described by the simple linear function used in the preceding factor. It
is best described by an increasing sigmoidal curve. We will use a monotonically increasing sigmoidal function to rescale the values in the
distance-from-water image WATERDIST.
Use FUZZY again. Select sigmoidal as the membership function type. Enter WATERDIST as the input file and enter
WATERFUZZ as the output file name. Select real as the output data format and monotonically increasing as the membership
function shape. To accommodate the two thresholds of 100 and 800 meters in our function, the control points are no longer the
minimum and maximum of our input values. Rather, they are equivalent to the points of inflection on the Sigmoidal curve. In the
case of an increasing function, the first control point (a) is the value at which suitability begins to rise sharply above zero and the
second control point (b) is the value at which suitability begins to level off and approaches a maximum of 1.0. Therefore, for this
factor, input a value of 100 for control point a and a value of 800 for control point b. See the Help for FUZZY for a complete
description of the fuzzy curves and control points. Click OK to run.
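For readers curious about the shape of the curve, a common cosine-based form of the monotonically increasing sigmoidal function can be sketched as follows. This is an approximation of FUZZY's behavior for illustration only; see the FUZZY Help for the exact formula:

```python
import numpy as np

def fuzzy_sigmoidal_increasing(x, a, b):
    """Cosine-based s-curve: 0.0 at or below control point a, rising
    steeply through the midpoint, leveling off to 1.0 at control
    point b, and staying at 1.0 beyond it."""
    t = np.clip((x - a) / (b - a), 0.0, 1.0)
    return np.sin(t * np.pi / 2.0) ** 2

# Distances below 100 m score 0.0; suitability levels off at 800 m.
waterdist = np.array([50.0, 100.0, 450.0, 800.0, 1200.0])
waterfuzz = fuzzy_sigmoidal_increasing(waterdist, a=100.0, b=800.0)
```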
Use FUZZY again. Select J-shaped as the membership function type. Enter ROADDIST as the input file and enter ROADFUZZ as
the output file name. Select real as the output data format. To rescale our distance from roads factor to this J-shaped curve, we
chose a monotonically decreasing function. As with the other functions, the first control point is the value at which the suitability
begins to decline from maximum suitability. However, because the J-shaped function never reaches 0, the second control point is
set at the value at which suitability is halfway between not suitable and perfectly suitable. Specify 50 for the value of the first
control point c and 400 for the value of the second control point d. Click OK to run.
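The description above (suitability 1.0 up to the first control point, 0.5 at the second, approaching but never reaching 0) is consistent with a J-shaped curve of the following form. This Python sketch is illustrative only; consult the FUZZY Help for the exact equation TerrSet uses:

```python
import numpy as np

def fuzzy_jshaped_decreasing(x, c, d):
    """Suitability 1.0 at or below c; beyond c it falls along a
    J-shaped curve that passes through 0.5 at d and approaches
    (but never reaches) 0.0."""
    x = np.asarray(x, dtype=float)
    mu = 1.0 / (1.0 + ((x - c) / (d - c)) ** 2)
    return np.where(x <= c, 1.0, mu)

# Suitability begins to decline 50 m from roads and is 0.5 at 400 m.
roaddist = np.array([0.0, 50.0, 400.0, 1100.0])
roadfuzz = fuzzy_jshaped_decreasing(roaddist, c=50.0, d=400.0)
```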
Slopes Factor
We know from our discussion in the previous exercise that slopes below 15% are the most cost effective for development. However, the lowest
slopes are the best and any slope above 15% is equally unsuitable. We again use a monotonically decreasing sigmoidal function to rescale our
data to the 0.0-1.0 range.
Using FUZZY, select sigmoidal as the membership function type. Enter SLOPES as the input file and enter SLOPEFUZZ as the
output file name. Select real as the output data format. To rescale our slopes factor to this sigmoidal curve, we chose a
monotonically decreasing function. As with the other functions, the first control point is the value at which the suitability begins
to decline from maximum suitability. The second control point is the value at which suitability begins to level off and approaches
0.0. Specify 0.0 for the value of the first control point c and 15 for the value of the second control point d. Click OK to run.
Open the module FUZZY. Set the membership function type to linear. Enter DEVELOPDIST as the input file and enter
DEVELOPFUZZ as the output name. Set the output format to real and choose the monotonically decreasing linear function. Enter
0 for control point c and 1325 for control point d. These values are the minimum and maximum distance values found
in our distance image. Click OK to run.
All factors have now been standardized to the same continuous scale of suitability (0.0-1.0). Standardization makes factors
representing different criteria, measured in different ways, comparable. This will also allow us to combine or aggregate all the factor images.
Open the module WEIGHT. Choose to use a previous pairwise comparison file (.pcf) and select the file RESIDENTIAL. Also
specify that you wish to produce an output decision support file and type in the same name, RESIDENTIAL. Then press the Next
button.
The second WEIGHT dialog box displays a pairwise comparison matrix that contains the information stored in the .pcf file
RESIDENTIAL that was created for you. This matrix indicates the relative importance of any one factor relative to all others. It is
the hypothetical result of lengthy discussions amongst town planners and their constituents. To interpret the matrix, ask the
question, Relative to the column factor, how important is the row factor? Answers are located on the 9-point scale shown at the
top of the WEIGHT dialog. For example, relative to being near the town (TOWNFUZZ), being near to roads (ROADFUZZ) is
very strongly more important (a matrix value of 7) and compared to being on low slopes (SLOPEFUZZ), being near developed
areas (DEVELOPFUZZ) is strongly less important. Take a few moments to assess the relative importance assigned to each factor.2
Press the OK button and choose to overwrite the file if prompted.
The weights derived from the pairwise comparison matrix are displayed in the Module Results box. These weights are also written to the
decision support file RESIDENTIAL. The higher the weight, the more important the factor in determining suitability for the objective.
What are the weights for each factor? Do these weights favor the concerns of developers or environmentalists?
We will choose to use the pairwise comparison matrix as it was developed. (You can return to WEIGHT later to explore the effect of altering
any of the pairwise comparisons.)
The WEIGHT module is designed to simplify the development of weights by allowing the decision makers to concentrate on the relative
importance of only two factors at a time. This focuses discussion and provides an organizational framework for working through the complex
relationships of multiple criteria. The weights derived through the module WEIGHT will always sum to 1. It is also possible to develop
weights using any method and use these with MCE-WLC, so long as they sum to 1.
It is with much difficulty that factors relevant to environmentalists have been measured against factors relevant to developers' costs. For example, how can an
environmental concern for open space be compared to, and eventually trade off with, costs of development due to slope? We will address this issue directly in the next
exercise.
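WEIGHT's best-fit weights are based on the principal eigenvector of the pairwise comparison matrix, following Saaty's Analytical Hierarchy Process. A minimal Python sketch with a hypothetical 3-factor matrix (the real RESIDENTIAL file compares six factors, so the values below are illustrative only):

```python
import numpy as np

# Hypothetical 3-factor pairwise comparison matrix on Saaty's 9-point
# scale. Entry [i, j] answers: relative to column factor j, how
# important is row factor i? Reciprocals fill the upper/lower halves.
pcm = np.array([[1.0,   7.0, 3.0],
                [1/7.0, 1.0, 1/3.0],
                [1/3.0, 3.0, 1.0]])

# The principal eigenvector, normalized to sum to 1, gives the
# best-fit set of factor weights.
eigvals, eigvecs = np.linalg.eig(pcm)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
```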
Give an example from everyday life when you consciously weighted several criteria to come to a decision (e.g., selecting a
particular item in a market, choosing which route to take to a destination). Was it difficult to consider all the criteria at once?
Open the module MCE. Choose weighted linear combination as the MCE procedure. Enter the two constraints in the Constraints
grid, LANDCON and WATERCON. Next, click the retrieve parameters button and select RESIDENTIAL.DSF. This is the
decision support file created from running the module WEIGHT and contains the names of the factors and their weights. Call the
output image MCEWLC and click OK. The module MCE will run and the final aggregated suitability image is automatically
displayed.
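The WLC aggregation itself is a weighted average of the standardized factors, masked by the product of the Boolean constraints. A toy Python sketch (illustrative data, not the tutorial images):

```python
import numpy as np

# Each row of `factors` is one standardized factor image, flattened
# to three example pixels; the weights sum to 1, as WEIGHT requires.
factors = np.array([[0.8, 0.2, 1.0],    # e.g. SLOPEFUZZ
                    [0.4, 0.6, 1.0],    # e.g. ROADFUZZ
                    [1.0, 0.0, 1.0]])   # e.g. TOWNFUZZ
weights = np.array([0.5, 0.3, 0.2])
constraints = np.array([[1, 1, 0],      # e.g. LANDCON
                        [1, 0, 1]])     # e.g. WATERCON

# WLC: weighted average of the factors, then masked by multiplying
# through the Boolean constraints.
mcewlc = (weights[:, None] * factors).sum(axis=0) * constraints.prod(axis=0)
```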
We will explore the resulting aggregate suitability image with the Identify tool to better understand the origin of the final values.
In TerrSet Explorer, select the following seven files: MCEWLC, LANDFUZZ, TOWNFUZZ, ROADFUZZ, SLOPEFUZZ,
WATERFUZZ, and DEVELOPFUZZ. Once all seven are selected, right-click in TerrSet Explorer and select Add Layer. All seven
images should now be displayed in the one map window.
Click on the Identify tool from the icon menu. Then use the Identify tool to explore the values across all the images. The values
are more quickly interpreted if you choose the View as Graph option on the Identify box and select Relative Scaling.3
It should be clear from your exploration that areas of similar suitability do not necessarily have the same combination of suitability scores for
each factor. Factors tradeoff with each other throughout the image.
Which two factors most determine the character of the resulting suitability map? Why?
The MCEWLC result is a continuous image that contains a wealth of information concerning overall suitability for every location. However,
using this result for site selection is not always obvious. We will continue to explore site selection methods relevant to continuous suitability
images in the next exercise.
See the on-line Help System for the Identify tool for more information about these options.
The weighted linear combination aggregation method offers much more flexibility than the Boolean approaches of the previous exercise. It
allows for criteria to be standardized in a continuous fashion, retaining important information about degrees of suitability. It also allows the
criteria to be differentially weighted and to trade off with each other. In the next exercise, we will explore another aggregation technique,
Ordered Weighted Averaging (OWA), which will allow us to control the amount of risk and tradeoff we wish to include in the result.
EXERCISE 2-9
MCE: ORDERED WEIGHTED AVERAGING
In this exercise, we will explore Ordered Weighted Averaging (OWA) as a method for MCE. This technique, like WLC, is best used with
factors that have been standardized to a continuous scale of suitability and weighted according to their relative importance. Constraints will
remain as Boolean masks. Therefore, this exercise will simply use the constraints, standardized continuous factors, and weights developed in
the previous exercises. However, in the case of OWA, a second set of weights, Order Weights, will also be applied to the factors. This will
allow us to control the overall level of tradeoff between factors, as well as the level of risk in our suitability determination.
Our first method of aggregation, Boolean, demanded that we reduce our factors to simple constraints that represent "hard" decisions about
suitability. The final map of suitability for residential development was the product of the logical AND (minimum) operation, i.e., it was a
risk-averse solution that left no possibility for criteria to trade off with each other. If a location was unsuitable on even one criterion, then it
could not be suitable on the final map. (We also explored the Boolean OR (maximum) operation, which was too risk-taking to be of much use.)
WLC, however, allowed us to use the full potential of our factors as continuous surfaces of suitability. Recall that after identifying the factors,
they were standardized using fuzzy functions, and then weighted and combined using an averaging technique. The factor weights used
expressed the relative importance of each criterion for the overall objective, and they determined how factors were able to trade off with each
other. The final map of continuous suitability for residential development (MCEWLC) was the result of an operation that can be said to be
exactly halfway between the AND and OR operations. It was neither extremely risk-averse nor extremely risk-taking. In addition, all factors
were allowed to fully tradeoff. Any factor could compensate for any other according to its factor weight.
Thus the MCE procedures we used in the previous two exercises lie along a continuum from AND to OR. The Boolean method gives us
access to the extremes while the WLC places the operation exactly in the middle. At both extremes of the continuum, tradeoff is not possible,
but in the middle there is the potential for full tradeoff. The aggregation method we will use in this exercise, OWA, will give us control over
the position of the MCE along both the risk and tradeoff axes (refer to the figure in the previous exercise). That is, it will let us control the
level of risk we wish to assume in our MCE, and the degree to which factor weights (tradeoff weights) will influence the final suitability map.
OWA offers a wealth of possible solutions for our residential development problem.
Control over risk and tradeoff is made possible through a set of order weights for the different rank-order positions of factors at every
location (pixel). The order weights will first modify the degree to which factor weights will have influence in the aggregation procedure, thus
they will govern the overall level of tradeoff. After factor weights are applied to the original factors (to some degree dependent upon the
overall level of tradeoff used), the results are ranked from low to high suitability for each location. The factor with the lowest suitability score
is then given the first Order Weight, the factor with the next lowest suitability score is given the second Order Weight, and so on. This has the
effect of weighting factors based on their rank from minimum to maximum value for each location. The relative skew toward either
minimum or maximum of the order weights controls the level of risk in the evaluation. Additionally, the degree to which the order weights
are evenly distributed across all positions controls the level of overall tradeoff, i.e., the degree to which factor weights have influence.
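One way to reproduce the behavior described above outside TerrSet: at each pixel, rank the factor values from minimum to maximum, scale each factor weight by the order weight of its rank, renormalize the effective weights to sum to 1, and take the weighted sum. The sketch below is an interpretation of this description, not TerrSet's code; it shows how the order weights sweep the result from the minimum, through WLC, to the maximum:

```python
import numpy as np

def owa(values, factor_weights, order_weights):
    """One pixel's OWA score (a sketch of the documented behavior).
    Rank the factor values from minimum to maximum, scale each
    factor weight by the order weight of its rank, renormalize so
    the effective weights sum to 1, then take the weighted sum."""
    ranks = np.argsort(values)                 # index of minimum first
    eff = np.empty_like(factor_weights)
    eff[ranks] = factor_weights[ranks] * order_weights
    eff = eff / eff.sum()
    return float((eff * values).sum())

# Three standardized factor values at one hypothetical pixel, with
# factor weights that sum to 1.
vals = np.array([0.2, 0.9, 0.6])
fw = np.array([0.5, 0.3, 0.2])

min_score = owa(vals, fw, np.array([1.0, 0.0, 0.0]))    # AND: the minimum
wlc_score = owa(vals, fw, np.array([1/3, 1/3, 1/3]))    # full tradeoff: WLC
max_score = owa(vals, fw, np.array([0.0, 0.0, 1.0]))    # OR: the maximum
```

With equal order weights the factor weights are fully employed and the score equals the weighted linear combination; skewing all order weight to the first or last rank returns the minimum or maximum factor value, with no tradeoff.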
The user should review the section on Decision Support in the TerrSet Manual for more information on OWA.
Order Weights:  0.16  0.16  0.16  0.16  0.16  0.16
Rank:           1st   2nd   3rd   4th   5th   6th
In the above example, weight is distributed or dispersed evenly among all factors regardless of their rank-order position from minimum to
maximum for any given location. They are skewed toward neither the minimum (AND operation) nor the maximum (OR operation). As in
the WLC procedure, our result will be exactly in the middle in terms of risk. In addition, because all rank order positions are given the same
weight, no rank-order position will have a greater influence over another in the final result.1 There will be full tradeoff between factors,
allowing the factor weights to be fully employed. To see the result of such a weighting scheme, and to explore a range of other possible
solutions for our residential development problem, we will again use the module MCE.
Open the module MCE. Choose Ordered Weighted Average as the MCE procedure. Enter the two constraints in the Constraints
grid, LANDCON and WATERCON. Next, click the retrieve parameters button and select RESIDENTIAL.DSF. This is the
decision support file created from running the module WEIGHT and contains the names of the factors and their weights. Call the
output image MCEAVG.
Next, specify the order weights. Specify .16 for each order weight by entering this into the order weight grid. These order weights
will produce a solution with full tradeoff and average risk. Click OK to run.
When MCE has finished processing, the resulting image, MCEAVG, will be displayed. Also display the WLC result, MCEWLC,
and arrange the images such that both are visible. These images are identical. As previously discussed, the WLC technique is
simply a subset of the OWA technique. Do not close MCE.
It is important to remember that the rank-order for a set of factors for a given location may not be the same for another location. Order weights are not applied to an
entire factor image, but on a pixel by pixel basis according to the pixel values' rank orders.
The results from any OWA operation will be a continuous image of overall suitability, although each may use different levels of tradeoff and
risk. These results, like that from WLC, present a problem for site selection as in our example. Where are the best sites for residential
development? Will these sites be large enough for a housing development? The next exercise will address site selection methods. In the
remainder of this exercise, we will explore the result of altering the order weights in the MCE-OWA procedure.
Order Weights:  1     0     0     0     0     0
Rank:           1st   2nd   3rd   4th   5th   6th
In this AND operation example, all weight is given to the first ranked position, the factor with the minimum suitability score for a given
location. Clearly this set of order weights is skewed toward AND; the factor with the minimum value gets full weighting. In addition, because
no rank-order position other than the minimum is given any weight, there can be no tradeoff between factors. The minimum factor alone
determines the final outcome.
In MCE, change the output image name to MCEMIN. Then specify a new set of order weights in the order weights grid: enter 1 for Weight 1 and 0 for all the remaining order weights. Click OK to run.
Use the Identify tool to explore the values in the output image. You can display all the factors in the same map window and
explore further the values across all the factors for any minimum risk score.
What factor appears to have most determined the final result for each location in MCEMIN? What influence did factor weights
have on the outcome?
For comparison, display your Boolean result, MCEBOOL (with the qualitative palette), alongside MCEMIN. Clearly these images
have areas in common. Why are there areas of suitability that do not correspond to the Boolean result?
An important difference between the OWA minimum result and the earlier Boolean result is evident in areas that are highly suitable in both
images. Unlike the Boolean result, the MCEMIN result retains information about varying degrees of suitability in the areas chosen as suitable.
Now, let's create an image called MCEMAX that represents the maximum operation using the same set of factors and constraints.
In MCE, rename the output image to MCEMAX. Then specify a new set of order weights for the maximum operation. For the
first order weight, specify 0. Specify 0 for all the other weights except for Weight 6. For this one, specify 1. Click OK to run. When
it is done running, use the Identify tool to explore the values in the image.
What order weights yield the maximum operation? What level of tradeoff is there in your maximum operation? What level of risk?
Why do the non-constrained areas in MCEMAX have such high suitability scores?
The minimum and maximum results are located at the extreme ends of our risk continuum while they share the same position in terms of
tradeoff (none). This is illustrated in the figure above.
Order Weights:  0.5   0.3   0.125  0.05  0.025  0.0
Rank:           1st   2nd   3rd    4th   5th    6th
Notice that these order weights specify an operation midway between the extreme of AND and the average risk position of WLC. In addition,
these order weights set the level of tradeoff to be midway between the no tradeoff situation of the AND operation and the full tradeoff
situation of WLC.
Display the image MCEMIDAND from the group file called MCEOWA. (The remaining MCE output images have already been
created for you.)
Display the image MCEMIDOR from the same group file. The following set of order weights was used to create
MCEMIDOR.
Order Weights:  0.0   0.025  0.05  0.125  0.3   0.5
Rank:           1st   2nd    3rd   4th    5th   6th
How do the results from MCEMIDOR differ from MCEMIDAND in terms of tradeoff and risk?
In a graph similar to the risk-tradeoff graph above, indicate the rough location for both MCEMIDAND and MCEMIDOR.
Close all open display windows and display all five results from the OWA procedure in order from AND to OR (i.e., MCEMIN,
MCEMIDAND, MCEAVG, MCEMIDOR, MCEMAX), into the same map window. Then use the Identify tool to explore the
values in these images. It may be easier to use the graphic display in the Identify box. To do so, click on the View as Graph button
at the bottom of the box.
While it is clear that suitability generally increases from AND to OR for any given location, the character of the increase between any two
operations is different for each location. The extremes of AND and OR are clearly dictated by the minimum and maximum factor values,
however, the results from the middle three tradeoff operations are determined by an averaging of factors that depends upon the combination
of factor values, factor weights, and order weights. In general, in locations where the heavily weighted factors (slopes and roads) have similar
suitability scores, the three results with tradeoff will be strikingly similar. In locations where these factors do not have similar suitability
scores, the three results with tradeoff will be more influenced by the difference in suitability (toward the minimum, the average, or the
maximum).
In the OWA examples explored so far, we have varied our level of risk and tradeoff together. That is, as we moved along the continuum from
AND to OR, tradeoff increased from no tradeoff to full tradeoff at WLC and then decreased to no tradeoff again at OR. Our analysis, graphed
in terms of tradeoff and risk, moved along the outside edges of a triangle, as shown in the figure below.
However, had we chosen to vary risk independent of tradeoff we could have positioned our analysis anywhere within the triangle, the
Decision Strategy Space.
Suppose that the no tradeoff position is desirable, but the no tradeoff positions we have seen, the AND (minimum) and OR (maximum), are
not appropriate in terms of risk. A solution with average risk and no tradeoff would have the following order weights.
Order Weights:  0.0   0.0   0.5   0.5   0.0   0.0
Rank:           1st   2nd   3rd   4th   5th   6th
(Note that with an even number of factors, setting order weights to absolutely no tradeoff is impossible at the average risk position.)
Display the image called MCEARNT (for average risk, no tradeoff). Compare MCEARNT with MCEAVG. (If desired, you can
add MCEARNT to the MCEOWA group file by opening the group file in TerrSet Explorer, adding MCEARNT, then saving the
file.)
MCEAVG and MCEARNT are clearly quite different from each other even though they have identical levels of risk. With no tradeoff, the
average risk solution, MCEARNT, is near the median value instead of the weighted average as in MCEAVG (and MCEWLC). As you can see,
MCEARNT breaks significantly from the smooth trend from AND to OR that we explored earlier. Clearly, varying tradeoff independently
from risk increases the number of possible outcomes as well as the potential to modify analyses to fit individual situations.
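The median-like behavior of the average-risk, no-tradeoff weights is easy to verify. When factor weights are ignored (as no-tradeoff order weights effectively suppress them), placing all order weight on the two middle rank positions returns the median of the six factor values. A small Python check with hypothetical pixel values:

```python
import numpy as np

# Six standardized factor values at one hypothetical pixel.
vals = np.array([0.15, 0.90, 0.40, 0.70, 0.55, 0.30])

# Average-risk, no-tradeoff order weights: all weight on the two
# middle rank positions.
order_weights = np.array([0.0, 0.0, 0.5, 0.5, 0.0, 0.0])

# Rank the values from minimum to maximum and apply the order
# weights: the result is the median of the six values.
score = float((np.sort(vals) * order_weights).sum())
```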
Factor      Original Weight   Rescaled Weight
LANDFUZZ    0.0620            0.0791
TOWNFUZZ    0.0869            0.1108
ROADFUZZ    0.3182            0.4057
SLOPEFUZZ   0.3171            0.4044
For the second set of factors, those relevant to environmental concerns, we will use an OWA procedure that will yield a low-risk result with no
tradeoff (i.e., the order weights will be 1 for the 1st rank and 0 for the 2nd). There are two factors to consider: distance from water bodies and
wetlands, and distance from already developed areas. Again, we will rescale the original factor weights such that they sum to 1 and apply the
original constraints.
Factor       Original Weight   Rescaled Weight
WATERFUZZ    0.1073            0.4972
DEVELOPFUZZ  0.1085            0.5028
Clearly these images are very different from each other. However, note how similar COSTFACTORS is to MCEWLC.
What does the similarity of MCEWLC and COSTFACTORS tell us about our previous average risk analysis? Which factors most
influence the results in COSTFACTORS and ENVFACTORS?
The final step in this procedure is to combine our two intermediate results using a third MCE operation. In this aggregation, COSTFACTORS
and ENVFACTORS are treated as factors in a separate aggregation procedure. There is no clear rule as to how to combine these two results.
We will assume that our town planners are unwilling to give more weight to either the developers' or the environmentalists' factors; the factor
weights will be equal. In addition, they will not allow the two new consolidated factors to trade off with each other, nor do they want anything
but the lowest level of risk when combining the two intermediate results.
Open the module MCE once more and choose Ordered Weighted Average. Using COSTFACTORS and ENVFACTORS as the
factors, along with the original constraints LANDCON and WATERCON, create a result that meets the town planners'
specifications described above. Call the output image MCEFINAL and click OK to run.
What set of factor and order weights will give us this result?
How does MCEFINAL differ from previous results? How did the grouping of factors in this case affect outcomes?
Save the image MCEFINAL for use in the following exercise. OWA offers an extraordinarily flexible tool for MCE. Like traditional WLC
techniques, it allows us to combine factors with variable factor weights. However, it also allows control over the degree of tradeoff between
factors as well as the level of risk one wants to assume. Finally, in cases where sets of factors clearly do not have the same level of tradeoff,
OWA allows us to temporarily treat them as separate suitability analyses, and then to recombine them. OWA, as a GIS technique for non-Boolean suitability analysis and decision making, is potentially revolutionary.
EXERCISE 2-10
MCE: SITE SELECTION USING BOOLEAN AND
CONTINUOUS RESULTS
This exercise uses the results from the previous three exercises to address the problem of site selection. While a variety of standardization and
aggregation techniques are important to explore for any multi-criteria problem, they result in images that show the suitability of locations in
the entire study area. However, multi-criteria problems, as in the previous exercises, often concern eventual site selection for some
development, land allocation, or land use change. There are many techniques for site selection using images of suitability. This exercise
explores some of those techniques in the context of finding the most suitable sites for residential development.
Display MCEBOOL, the result from the previous exercise. Using a combination of the modules GROUP, AREA, RECLASS, and
OVERLAY in a sequence identical to that used in the latter part of the exercise on distance and context operators, contiguous
areas greater than or equal to 20 hectares were found. The result is the image BOOLSIZE20. Display this image.
This approach results in several potential sites from which to choose. However, due to their Boolean nature, their relative suitability cannot be
judged and it would be difficult to make a final choice between one or another site. A non-Boolean approach will give us more information to
compare potential sites with each other.
Suitability Thresholds
A threshold of suitability for the final site selection may be arbitrary or it may be grounded in the suitability scores determined for each of the
factors. For example, during the standardization of factors, a score of .75 or above might have been thought to be, on average, acceptable
while below .75 was questionable in terms of suitability. If this was the logic used in standardization, then it should also be applicable to the
final suitability map. Let's assume this was the case and use a score of .75 as our suitability threshold for site selection. This is a post-aggregation constraint. We will use the result from the previous exercise, MCEWLC, but you could follow these procedures using any of the
continuous suitability results from the previous exercises.
Run RECLASS from the IDRISI GIS Analysis/Database Query menu and specify MCEWLC as the input image and SUIT75 as the
output image. Then enter the following values into the reclassification parameters area of the dialog box. Save the image in integer
format.
Assign a new value of   To all values from   To just less than
0                       0                    .75
1                       .75                  >  (a value exceeding the maximum)
The result is a Boolean image of all possible sites for residential development. However, it is a highly fragmented image with just a few
contiguous areas that are substantial. Let's assume that another post-aggregation constraint must be applied here as well, that a suitable site be
20 hectares or greater.
Use GROUP (with diagonals) and AREA to determine if there are any areas 20 hectares or larger in size. (Remember to remove
the unsuitable groups.) Call the resulting image SUIT75SIZE20 (for suitability threshold .75, site size 20 hectares).
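The GROUP (with diagonals), AREA, and RECLASS sequence is, in essence, 8-connected component labeling followed by an area filter. A sketch using SciPy on toy data; the cell size and the 3-cell minimum are stand-ins (at an assumed 30 m resolution, one cell is 0.09 ha, so 20 hectares would be roughly 223 cells):

```python
import numpy as np
from scipy import ndimage

# Toy Boolean image of cells meeting the suitability threshold.
suit75 = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 1],
                   [0, 0, 0, 1],
                   [1, 0, 0, 1]])

# GROUP with diagonals = connected-component labeling with an
# 8-connected (3x3) structuring element.
labels, n = ndimage.label(suit75, structure=np.ones((3, 3)))

# AREA + RECLASS: keep only contiguous groups of suitable cells
# whose size meets the minimum (3 cells for this toy example).
min_cells = 3
sizes = ndimage.sum(suit75, labels, index=np.arange(1, n + 1))
keep = np.flatnonzero(sizes >= min_cells) + 1
sites = np.isin(labels, keep).astype(int)
```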
What is the size of the largest potential site for residential development?
Clearly, given the post-aggregation constraints of both a suitability threshold of .75 and a site size of 20 hectares or greater, there
are no suitable sites for residential development. Assuming town planners want to continue with site selection, there are a number
of ways to change the WLC result. Town planners might use different factors or combinations of factors, they might alter the
original methods/functions used for standardization of factors, they might weight factors differently, or they might simply relax
either or both of the post-aggregation constraints (the suitability threshold or the minimum area for an acceptable site).
In general, non-Boolean MCE is an iterative process and you are encouraged to explore all of the options listed above to change the WLC
result.
Use Edit, from the Data Entry menu, to examine a macro file (.iml) named SITESELECT. Don't make any changes to the file yet.
The macro scripting language uses a particular syntax for each module to specify the parameters for that module. For more information on
these types of macros, see the section on TerrSet Modeling Tools in the TerrSet Manual. The particular command line syntax for each
module is specified in each module description in the on-line Help System.
The macro uses a variety of TerrSet modules to produce two maps of suitable sites.1 One map shows each site with a unique identifier and the
other shows sites using the original continuous suitability scores. The former is automatically named SITEID by the macro. It is used as the
feature definition file to extract statistics for the sites. The other map is named by the user each time the macro is run (see below). The macro
also reports statistics about each site selected. These include the average suitability score, range of scores, standard deviation of scores, and
area in hectares for each site.
Note that some of the command lines contain symbols such as %1. These are placeholders for user-defined inputs to the macro. The user
types the proper values for these into the macro parameters input box on the Run Macro dialog box. The first parameter entered is
substituted into the macro wherever the symbol %1 is placed, the second is substituted for the %2 symbol, and so on. Using a macro in this
way allows you to easily and quickly change certain parameters without editing and resaving the macro file. The SITESELECT macro has four
placeholders, %1 through %4. These represent the following parameters:
%1 the name of the continuous suitability image to be analyzed (e.g., MCEWLC)
%2 the suitability threshold below which locations are excluded
%3 the minimum site size, in hectares
%4 the name of the output image with the suitable sites masked and each site containing its continuous values from the original suitability map
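The %1...%4 substitution behaves like positional parameters. A minimal illustration, using a hypothetical remark line rather than the actual contents of SITESELECT:

```python
# Simplified stand-in for one line of an .iml macro; the real SITESELECT
# file contains many module command lines.
template = "rem %1 is thresholded at %2, sites >= %3 ha, output %4"

def expand(template, params):
    """Substitute %1, %2, ... with positional parameters, as Run Macro does."""
    # Replace higher numbers first so %10 would not be clobbered by %1.
    for i in range(len(params), 0, -1):
        template = template.replace(f"%{i}", params[i - 1])
    return template

line = expand(template, ["MCEWLC", ".75", "2", "SUIT75SIZE2"])
```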
Now that we understand the macro, we will use it to iteratively find a solution to our site selection problem. (Note that in Macro Modeler, you
would change these parameters by linking different input files, renaming output files and editing the .rcl files used by RECLASS.)
Earlier, a suitability level of .75 and a site size of 20 hectares resulted in no selected sites from MCEWLC. Therefore, we will reduce the site
size threshold to 2 hectares to see if any sites result.
Choose the Run Macro command from the IDRISI GIS Analysis\Model Deployment Tools menu. Enter SITESELECT as the
macro file to run. In the Macro Parameters input box, type in the following four macro parameters as shown, with a space
between each:
MCEWLC .75 2 SUIT75SIZE2
These parameters ask the macro to analyze the image MCEWLC, isolate all locations with a suitability score of .75 or greater, from those
locations find all contiguous areas that are at least 2 hectares in size, and output an image called SUIT75SIZE2 (for suitability of .75 or
greater and sites of 2 hectares or greater). Click Run Macro and wait while the macro runs several TerrSet modules to yield the result.
The macro will output two images and two tables.
It will first display the sites selected using unique identifiers (the image will be called SITEID).
1. Any lines that begin with "rem" are remarks. These are for documentation purposes and are ignored by the macro processor.
It will then display a table that results from running EXTRACT using the image SITEID as the feature definition image and the original
suitability image, MCEWLC, as the image to be processed. Information about each site, important to choosing amongst them, is displayed in
tabular format.
The macro will then display a second table listing the identifier of each site along with its area in hectares.
Finally, it will display the sites selected using the original suitability scores. This final image will be called SUIT75SIZE2.
The images output from the SITESELECT macro show all locations that are suitable using the post-aggregation constraints of a particular
suitability threshold and minimum site size. The macro can be run repeatedly with different thresholds.
3. How many sites are selected now that the minimum area constraint has been lowered to 2 hectares? How might you select one site over another?

H. Visually compare SUIT75SIZE2 to the final result from the Boolean analysis (BOOLSIZE20).

What might account for the sites selected in the WLC approach that were not selected in the Boolean approach?
Rather than reducing the minimum area for site selection, planners might choose to change the suitability threshold level. They might lower it
in search of the most suitable 2 hectare sites.
Run the SITESELECT macro a second time using the following parameters that lower the suitability threshold to .65:
MCEWLC .65 2 SUIT65SIZE2
These parameters ask the macro to again analyze the image MCEWLC, isolate all locations with a suitability score of .65 or
greater, find sites of 2 hectares or greater, and output the image SUIT65SIZE2 (i.e., suitability .65 and hectares 2).
How many sites are selected? How would you explain the differences between SUIT75SIZE2 and SUIT65SIZE2?
Finally, lower the suitability threshold to .5, retain the 2 hectare site size, and run the macro again. Call the resulting image
SUIT5SIZE2.
The difference in the size and quantity of sites selected from a suitability level of .65 to .5 is striking. In the case where the threshold is set at .5,
the number of sites may be too great to reasonably select amongst them. Also, note that as the size of sites grow, appreciable differences
within those large sites in terms of suitability can be seen. (This can be verified by checking the standard deviations of the sites.)
K. To help explain why there is such a change in the number and size of sites, run HISTO from the IDRISI GIS Analysis/Statistics menu with MCEWLC as the input image and a display minimum of .0001.

5. What helps to explain the increase in the number and size of selected sites between suitability levels .65 and .5?
Selecting a variety of suitability thresholds, different minimum site sizes, and exploring the results is relatively easy with the SITESELECT
macro. However, justifying the choices of threshold and site size is dependent solely on the human element of GIS. It can only be done by
participants in the decision making process. There is no automated way to decide the level of suitability nor the minimum site size needed to
select final sites.
Open the module TOPRANK. Enter the input image name MCEWLC. Choose the number of cells option and enter 25000; 25000 cells is equivalent to 1000 ha. Enter an output name, BEST1000, and click OK to run. Run TOPRANK again, but change the number of cells to 50000, which will identify the best 2000 hectares, and change the output name accordingly.
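TOPRANK's selection of the k highest-scoring cells can be sketched as follows (the array and k are illustrative; the exercise's 25000 cells correspond to 1000 ha at 400 m² per cell):

```python
import numpy as np

# Illustrative suitability surface; TOPRANK flags the k highest-scoring cells.
suit = np.array([[0.2, 0.9, 0.4],
                 [0.8, 0.1, 0.7],
                 [0.6, 0.3, 0.5]])

k = 4  # the exercise uses 25000, then 50000
flat = suit.ravel()
# Indices of the k largest values, found without a full sort.
top = np.argpartition(flat, -k)[-k:]
best = np.zeros(flat.shape, dtype=np.uint8)
best[top] = 1
best = best.reshape(suit.shape)
```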
What problem might be associated with selecting sites for residential development from the most suitable 2000 hectares in
MCEWLC?
The results of this total area threshold approach can be used to allocate specific amounts of land for some new development. However, it
cannot guarantee the contiguity of the locations specified since the selection is on a pixel by pixel basis from the entire study area. These
Boolean results must be submitted to the same grouping and reclassification steps described in the previous section to address issues of
contiguity and site size.
Using a total area threshold works well for selecting the best locations for phenomena that can be distributed throughout the study area or for
datasets that result in high levels of autocorrelation (i.e., suitability scores tend to be similar for neighboring pixels). A much easier way of
identifying more contiguous areas is to use the Spatial Decision Modeler or the MOLA module, which has an automated contiguity threshold
feature built in. We will explore these techniques in the next exercises.
Our exploration of MCE techniques has thus far concentrated on a single objective. The next exercise introduces tools that may be used when
multiple objectives must be accommodated.
EXERCISE 2-11
MCE: MULTIPLE OBJECTIVES
In the previous four exercises, we have explored multi-criteria evaluation in terms of a single objective: suitability for residential
development. However, it is often the case that we need to make site selection or land allocation decisions that satisfy multiple objectives, each
expressed in its own suitability map. These objectives may be complementary in terms of land use (e.g., open space preservation and market
farming) or they may be conflicting (e.g., open space preservation and retail space development).
Complementary objective problems are easily addressed with MCE analyses. We simply treat each objective's suitability map as a factor in an
additional MCE aggregation step. The case of conflicting or competing objectives, however, requires some mechanism for choosing between
objectives when a location is found highly suitable for more than one. The Multi-Objective Land Allocation (MOLA) module in TerrSet
employs a decision heuristic for this purpose. It is designed to allocate locations based upon total area thresholds as in the last part of the
previous exercise. However, the module simultaneously resolves areas where multiple objectives conflict. It does so in a way to provide a best
overall solution for all objectives. For details about the operation of MOLA, review the section on Decision Support found in the TerrSet
Manual.
To illustrate the multi-objective problem, we will use MOLA to allocate land (up to specified area thresholds) for two competing objectives,
residential development and industrial development in Westborough. As noted above, total area thresholding can be thought of as a post-aggregation constraint. In this example, there is one constraint for each objective. Town planners want to identify the best 1600 hectares for
residential development as well as the best 600 hectares for industrial expansion. We will use the final suitability map, MCEFINAL, from the
previous exercise, for the residential development suitability map. For the industrial objective we have already created an industrial suitability
map for you called INDUSTRIAL.
We will begin by creating maps for each objective.
Open the module MOLA. Select the single objective allocation procedure. For the suitability image enter MCEFINAL. Select to
force contiguous allocations, set the number of clusters at 1. Select to force compact allocations with the minimum span of
allocations set to 3. Select to use areal requirements and enter 40000. This is in cell units and is equivalent to 1600 ha. Give an
output image name as BEST1600RESID. Hit OK to run.
We will again run MOLA with the single objective allocation procedure. For the suitability image enter INDUSTRIAL. Select to
force contiguous allocations, set the number of clusters at 1. Select to force compact allocations with the minimum span of
allocations set to 3. Select to use areal requirements and enter 15000. This is in cell units and is equivalent to 600 ha. Give an
output image name as BEST600INDUST. Hit OK to run.
Before we continue with the MOLA process, we will first determine where conflicts in allocation would occur if we treated each of these
objectives separately.
Open the module CROSSTAB. Enter BEST1600RESID as the first image, BEST600INDUST as the second image, and choose to
create a crossclassification image called CONFLICT.
The categories of CONFLICT include areas allocated to neither objective (1), areas allocated to the residential objective but not the industrial
objective (2), and areas allocated to both the residential and industrial objectives (3). It is this latter class that is in conflict. (There are no areas
that were selected among the best 600 hectares for industrial development that were not also selected among the best 1600 hectares for
residential development.)
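Cross-classification assigns a unique identifier to every combination of categories in the two input maps. A sketch with toy Boolean allocations standing in for BEST1600RESID and BEST600INDUST:

```python
import numpy as np

# Toy Boolean allocations (1 = allocated), standing in for the two maps.
resid  = np.array([[0, 1, 1],
                   [0, 1, 0]])
indust = np.array([[0, 0, 1],
                   [0, 1, 0]])

# Encode each (resid, indust) pair as a single number, then renumber the
# distinct combinations from 1, as a crossclassification image does.
pairs = resid * 2 + indust            # 0 = neither, 2 = resid only, 3 = both
_, inv = np.unique(pairs.ravel(), return_inverse=True)
conflict = inv.reshape(resid.shape) + 1
```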
The image CONFLICT illustrates the nature of the multi-objective problem with conflicting and competing objectives. Since treating each
objective separately produces conflicts, neither objective has been allocated its full target area. We could prioritize one solution over the other.
For example, we could use the BEST1600RESID image as a constraint in choosing areas for industry. In doing so, we would assign all the
areas of conflict to residential development, then choose more (and less suitable) areas for industry to make up the difference. Such a solution
is often not desirable. A compromise solution that is best for the overall situation and does not grossly favor any objective may be more appropriate.
The MOLA procedure is designed to resolve such allocation conflicts in a way that provides a compromise solution: a best overall solution
for all objectives.
Open the module MOLA. Select multi-objective as the allocation type and select use area requirements for the allocation. To the
right of the grid, increase the number of objectives to 2. Enter the two suitability maps, MCEFINAL and INDUSTRIAL. Enter the
allocation captions of Residential and Industrial for the correct suitability map. Enter .5 as the objective weight for both. Enter an
areal requirement of 40000 for residential and 15000 for industrial. These are equivalent to 1600 ha and 600 ha respectively. Select
to force contiguous allocation and compactness. Leave the minimum span of 3. Enter an output name as MOLAFINAL. Click OK.
The MOLA procedure will run iteratively and when finished will display a log of its iterations and the final image.
The MOLA log indicates the number of cells assigned to each objective. Since our areal targets were defined in hectares, we will verify
the result by running the module AREA. Choose AREA from the IDRISI GIS Analysis / Database Query
menu. Give MOLAFINAL as the input image, choose tabular output, and units in hectares.
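The cell-to-hectare arithmetic behind this check is simple: the areal requirements imply a cell area of 400 m² (40000 cells = 1600 ha), i.e., 20 m cells:

```python
# Cell-count to hectare conversion used to verify the MOLA areal targets.
cell_size_m = 20.0               # implied by 40000 cells = 1600 ha
cell_area_m2 = cell_size_m ** 2  # 400 m^2 per cell
M2_PER_HA = 10_000.0

def cells_to_ha(n_cells):
    return n_cells * cell_area_m2 / M2_PER_HA

resid_ha = cells_to_ha(40000)    # residential target
indust_ha = cells_to_ha(15000)   # industrial target
```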
The solution presented in MOLAFINAL is only one of any number of possible solutions for this allocation problem. You may wish to repeat
the process using other suitability maps created earlier for residential development or new industrial suitability maps you create yourself
using your own factors, weights, and aggregation processes. You may also wish to identify other objectives and develop suitability maps for
these.
EXERCISE 2-12
MCE: CONFLICT RESOLUTION OF COMPETING OBJECTIVES
The Kathmandu Valley Case Study
In the previous exercises on decision support, we explored the tools available in TerrSet for land suitability mapping and land allocation. This
exercise will further explore these concepts using a new case study and dataset. This exercise assumes the user is familiar with the concepts
and language introduced in the Decision Support chapter of the TerrSet Manual.
In this exercise, we will consider the case of the expansion of the carpet industry in Nepal and its urbanizing effects on areas traditionally
devoted to valley agriculture. After the flight of the Tibetans into Nepal in 1959, efforts were undertaken, largely by the Swiss, to promote
traditional carpet-producing technologies as a means of generating local income and export revenues. Today the industry employs over
300,000 workers in approximately 5000 registered factories. Most of these are sited within the Kathmandu Valley. The carpets produced are
sold locally as well as in bulk to European suppliers.
In recent years, considerable concern has been expressed about the expansion of the carpet industry. While it is recognized that the
production of carpets represents a major economic resource, the Kathmandu Valley is an area that has traditionally been of major importance
as an agricultural region. The Kathmandu Valley is a major rice growing region during the monsoon months, with significant winter crops of
wheat and mustard (for the production of cooking oil). The region also provides a significant amount of the vegetables for the Kathmandu
urban area. In addition, there is concern that urbanization will force the loss of a very traditional lifestyle in the cultural heritage of Nepal.
In an attempt to limit the degree of urban expansion within the Kathmandu area, the Planning Commission of Nepal has stopped granting
permission for the development of new carpet factories within the Kathmandu ring road, promoting instead the area outside the Kathmandu
Valley for such developments. However, there still remains significant growth within the valley.
Display the image KVLANDU with the KVLANDU palette. The Kathmandu urban area is clearly evident in this image as the
large purplish area to the west. The smaller urban region of Bhaktapur can be seen to the east. Agricultural areas show up either as
light green (fallow or recently planted) or greenish (young crops). The deep green areas are forested.
The focus of this exercise is the development of a planning map for the Kathmandu Valley, setting aside 1500 hectares outside the
Kathmandu ring road in which further development by the carpet industry will be permitted and 6000 hectares in which agriculture will be
specially protected. The land set aside for specific protection of agriculture needs to be the best land for cultivation within the valley, while
those zoned for further development of the carpet industry should be well-suited for that activity. Remaining areas, after the land is set aside,
will be allowed to develop in whatever manner arises.
The development of a planning zone map is a multi-objective/multi-criteria decision problem. In this case, we have two objectives: the need
to protect land that is best for agriculture and the need to find other land that is best suited for the carpet industry. Since land can only be
allocated to one of these uses at any one time, the objectives are viewed as conflicting -- i.e., they may potentially compete for the same land.
Furthermore, each of these objectives require a number of criteria. For example, suitability for agriculture can be seen to relate to such factors
as soil quality, slope, distance to water, and so on. In this exercise, a solution to the multi-objective/multi-criteria problem is presented as it
was developed with a group of Nepalese government officials as part of an advanced seminar in GIS.1 While the scenario was developed
purely for the purpose of demonstrating decision support techniques and the result does not represent an actual policy decision, it is one that
incorporates substantial field work and well-established perspectives.
Each of the two objectives is dealt with as a separate multi-criteria evaluation problem and two separate suitability maps are created. They are
then compared to arrive at a single solution that balances the needs of the two competing objectives.
The data available for the development of this solution are as follows:
i. Land use map derived from Landsat imagery named KVLANDU
ii. Digital elevation model (DEM) named KVDEM
iii. 50 meter contour vector file named DEMCONTOURS
1. The seminar was hosted by UNITAR at the International Center for Integrated Mountain Development (ICIMOD) in Nepal, September 28-October 2, 1992.
The Landsat TM imagery dates from October 12, 1988. The DEM is derived from the USGS Seamless Data Distribution System at
http://seamless.usgs.gov/. All other maps were digitized by the United Nations Environment Program Global Resources Information
Database (UNEP/GRID). The roads data are quite generalized and were digitized from a 1:125,000 scale map. The river data are somewhat
less generalized and also derived from a 1:125,000 map. The land capability map KVLANDC was digitized from a 1:50,000 scale map with the
following legend categories:
IIBh2st: Class II soils (slopes 1-5 degrees / deep and well drained). Warm temperate (B = 15-20 degrees) humid (h) climate. Moderately suitable for irrigation (2).

IIIBh: Class III soils (slopes 5-30 degrees / 50-100 cm deep and well drained). Warm temperate and humid climate.

IIICp: Class III soils and cool temperate (C = 10-15 degrees) perhumid climate.

IVBh: Class IV soils (slope >30 degrees and thus too steep to be cultivated) and a warm temperate humid climate.

IBh1: Class I soils (slopes <1 degree and deep) / warm temperate humid climate / suitable for irrigation for diversified crops.

IBh1R: Class I soils / warm temperate humid climate / suitable for irrigation for wetland rice.
Proximity to Water
Substantial amounts of water are used in the carpet washing process. In addition, water is also needed in the dyeing of wool. As a result, close
proximity to water is often an important consideration.
Proximity to Roads
The wool used in Nepalese carpets is largely imported from Tibet and New Zealand (see figure below). Access to transportation is thus an
important consideration. In addition, the end product is large and heavy, and is often shipped in large lots (see figure above).
Proximity to Power
Electricity is needed for general lighting and for powering the dyeing equipment. Although not as critical an element as water, proximity to
power is a consideration in the siting of a carpet factory.
Proximity to Market
Kathmandu plays an important role in the commercial sale of carpets. With Nepal's growing tourist trade, a sizable market exists within the
city itself. Perhaps more importantly, however, commercial transactions often take place within the city and most exports are shipped from
the Kathmandu airport.
Slope Gradient
Slope gradient is a relatively minor factor. However, as with most industries, lands of shallow gradient are preferred since they are cheaper to
build on and permit larger floor areas. In addition, shallow gradients are less susceptible to soil loss during construction.
In addition to these factors, the decision group also identified several significant constraints to be considered in the zoning of lands for the
carpet industry:
Slope Constraint
The group thought that any lands with slope gradients in excess of 100% (45 degrees) should be excluded from consideration.
Display KVDEM with the default Quantitative palette. To get a better perspective of the relief, use "Add Layer" from Composer to
display the vector file DEMCONTOURS. Choose the White Outline symbol file. These are 50 meter contours created from the
DEM.
Next run SURFACE on the elevation model KVDEM to create a slope map named KVSLOPES. Specify to calculate the output in slope gradients as percents. Display the result with the Quantitative palette.

Now create the slope constraint map by running RECLASS on the image KVSLOPES to create a new image named SLOPECON. Use the default user-defined classification option to assign a new value of 1 to all values ranging from 0 to just less than 100 and 0 to those from 100 to 999. Then examine SLOPECON with the Qualitative palette. Notice that very few areas exceed the threshold of 100% gradient.
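The slope calculation can be sketched with finite differences on a toy DEM (SURFACE uses its own neighborhood algorithm; this only illustrates the idea, and the values here are invented):

```python
import numpy as np

# Tiny stand-in DEM (elevations in meters) on a 10 m grid; KVDEM is far larger.
dem = np.array([[100., 100., 100.],
                [110., 110., 110.],
                [120., 120., 120.]])
cell = 10.0  # illustrative cell size in meters

# Finite-difference gradients (rise per meter), then slope as a percentage.
dz_dy, dz_dx = np.gradient(dem, cell)
slope_pct = 100.0 * np.sqrt(dz_dx ** 2 + dz_dy ** 2)

# The slope constraint: 1 below 100% gradient (45 degrees), 0 at or above.
slopecon = (slope_pct < 100.0).astype(np.uint8)
```

On this toy surface every cell rises exactly 10 m per 10 m, i.e., a uniform 100% gradient, so every cell is excluded by the constraint.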
Now that the slope constraint map has been created, we need to create the ring road constraint map. We will use the vector ring
road area data for this.
After displaying the vector file KVRING, run the module RASTERVECTOR and select to rasterize a vector polygon file. Select
KVRING as the input file and give it the output name, TMPCON, for the raster file to create. When you hit OK, the module
INITIAL will be called because the corresponding raster file does not yet exist. Using INITIAL, specify the image to copy
parameters from as KVDEM and the output data type as byte. Then hit OK.
What we need is the inverse of this map so as to exclude the area inside the ring road. As in the step above, run RECLASS on
TMPCON to assign a new value of 1 to all values ranging from 0 to 1 and 0 to those from 1 to 2. Name the output RINGCON.
The final constraint map is one related to land use. Only agriculture is open for consideration in the allocation of lands for either
objective. Display KVLANDU with the legend and the KVLANDU user-defined palette. Of the twelve land use categories, the
Katus, Forest/Shadow, Chilaune and Salla/Bamboo categories are all forest types; two categories are urban and the remaining six
categories are agricultural types.
Perhaps the easiest way to create the constraint map here is to use the combination of Edit and ASSIGN to assign new values to
the land use categories. Use Edit to create an integer attribute values file named TMPLAND listing each of the six agricultural category numbers in the left column and a new value of 1 in the right column.
Then run ASSIGN and use KVLANDU as the feature definition image, TMPLAND as the attribute values file, and LANDCON as
the output file. Note that ASSIGN will assign a zero to any category not mentioned in the values file. Thus the forest and urban
categories will receive a zero by default. When ASSIGN has finished, display LANDCON with the Qualitative palette.
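The Edit/ASSIGN combination is a table lookup: listed categories receive their assigned value and every unlisted category defaults to 0. A sketch with illustrative category numbers (not the actual KVLANDU codes):

```python
import numpy as np

# Feature-definition image: a land use category per cell (toy stand-in
# for KVLANDU).
landuse = np.array([[1, 4, 7],
                    [2, 9, 3]])

# Attribute values file logic: listed categories get 1 (here, hypothetical
# agricultural classes 1-3); ASSIGN gives every unlisted category 0.
values = {1: 1, 2: 1, 3: 1}
landcon = np.vectorize(lambda c: values.get(c, 0))(landuse)
```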
This completes our development of the constraint maps for the carpet suitability mapping project.
The first factor is that of proximity to water. As we did earlier with the roads, we will first need to create a raster version of the
river data. First display the vector file named KVRIVERS. Notice how this is quite a large file covering the entire Bagmati Zone
(one of the main provinces of Nepal). The roads data also cover this region. As we did before, we will run RASTERVECTOR, but
this time we will rasterize a line file. Input KVRIVERS as the vector line file and enter KVRIVERS as the image file to be updated.
When you hit OK, the module INITIAL will be called since the raster file named in the output does not yet exist. Specify the
image to copy parameters from as KVDEM then hit OK. Display the result and note that only the portion of the vector file
matching the extent of the initial file was rasterized.
Now run DISTANCE to calculate the distance of every cell from the nearest river. Specify KVRIVERS as the input feature image
and TMPDIST as the output image. View the result.
What are the minimum and maximum distances of data cells to the nearest river? How did you determine this?
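Conceptually, DISTANCE gives every cell its Euclidean distance to the nearest feature cell. A brute-force sketch on a toy raster (DISTANCE itself uses a far more efficient algorithm):

```python
import numpy as np

# Rasterized feature image: 1 = river cell (toy stand-in for KVRIVERS).
rivers = np.array([[1, 0, 0],
                   [0, 0, 0],
                   [0, 0, 1]], dtype=np.uint8)
cell = 1.0  # ground distance per cell, kept at 1 for simplicity

# Brute force: for each cell, take the minimum Euclidean distance to any
# feature cell.
feat = np.argwhere(rivers == 1)
rows, cols = rivers.shape
dist = np.zeros((rows, cols))
for r in range(rows):
    for c in range(cols):
        d2 = (feat[:, 0] - r) ** 2 + (feat[:, 1] - c) ** 2
        dist[r, c] = cell * np.sqrt(d2.min())
```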
Now run the module FUZZY to standardize the distance values. Use TMPDIST as the input image and WATERFAC as the
output. Specify linear as the membership function type, the output data format as real, and the membership function shape as
monotonically decreasing (we want to give more importance to being near a water source than far from one). Specify the control points
for c and d as 0 and 2250, respectively. Hit OK and display the result.
This is the final factor map. Display it with the Quantitative palette and confirm that the higher values are those nearest the rivers
(you can use "Add Layer" to overlay the vector rivers to check). The distance image has thus been converted to a standard range of
values (to be known as criterion scores) based on the minimum and maximum values in the image. Values are thus standardized
to a range determined by the extreme values that exist within the study area. Most of the factors will be standardized in this
fashion.
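The linear monotonically decreasing membership function used here is easy to state directly: 1 at or below control point c, 0 at or beyond control point d, and linear in between. A sketch using the exercise's control points on a few illustrative distances:

```python
import numpy as np

def fuzzy_linear_decreasing(x, c, d):
    """Linear monotonically decreasing membership: 1 at or below control
    point c, 0 at or beyond control point d, linear in between."""
    return np.clip((d - x) / (d - c), 0.0, 1.0)

# Distances in meters, as in TMPDIST; control points from the exercise.
dist = np.array([0.0, 1125.0, 2250.0, 3000.0])
waterfac = fuzzy_linear_decreasing(dist, c=0.0, d=2250.0)
```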
Now create the proximity to roads factor map. Since the raster version of the roads data has already been created, the procedure
will be quick. Run DISTANCE on KVROADS to create a distance image named TMPDIST (yes, this is the same name we used in
the previous step -- since the distance image was only a temporary image in the process of creating the proximity image, it may be
overwritten). Then run FUZZY on TMPDIST to create ROADFAC. Specify linear as the membership function type, the output
data format as real, and the membership function shape as monotonically decreasing. Specify the control points for c and d as 0
and 2660, respectively. Hit OK and display the result. Confirm that it has criterion scores that are high (e.g., 1) indicating high
suitability near the roads and low (e.g., 0) indicating low suitability at the most distant extremes.
Now create the proximity to power factor map. We do not have any data on electrical power. However, it is reasonable to assume
that power lines tend to be associated with paved (Bitumen) roads. Thus use RECLASS on KVROADS to create TMPPOWER.
With the user-defined classification option, assign a value of 0 to all values ranging from 2 to 999. Then display the image to
confirm that you have a Boolean map that includes only the class 1 (Bitumen) roads from KVROADS. Use the same procedures as
in the above two steps to create a scaled proximity factor map based on TMPPOWER. Call the result POWERFAC.
To create the proximity to market map, we will first need to specify the location of the market. There are several possible
candidates: the center of Kathmandu, the airport, the center of Patan, etc. For purposes of illustration, the junction of the roads at
column 163 and row 201 will be used. First use INITIAL to create a byte binary image with an initial value of 0 based on the
spatial parameters of KVLANDU. Call this new image KVMARK. Then use UPDATE to change the cell at row 201 / column 163
to have a value of 1. Indicate 201 for the first and last row and 163 for the first and last column. Display this image with the
Qualitative palette to confirm that this was successfully done.
In this case, we will use the concept of cost distance in determining the distance to market. Cost distance is similar in concept to
normal Euclidean distance except that we incorporate the concept of friction to movement. For instance, the paved roads are
easiest to travel along, while areas off roads are the most difficult to traverse. We thus need to create a friction map that indicates
these impediments to travel. To do so, first create an attribute values file that indicates the frictions associated with each of the
surface types we can travel along (based on the road categorizations in KVROADS). Use Edit to create this real number attribute
values file named FRICTION with the following values:
0
10.0
1.0
1.5
6.0
8.0
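Cost distance accumulates friction along the cheapest path from the source. A minimal grid sketch using Dijkstra's algorithm with 4-connected moves and invented friction values (TerrSet's COST module also handles diagonal moves; the step-cost rule here, the mean of the two cells' frictions, is one common convention, not necessarily the module's exact rule):

```python
import heapq
import numpy as np

# Toy friction surface: cost to cross each cell (1.0 = easy road travel,
# 10.0 = off-road). Values and layout are illustrative only.
friction = np.array([[1.0, 1.0, 10.0],
                     [10.0, 1.0, 10.0],
                     [10.0, 1.0, 1.0]])
source = (0, 0)  # stand-in for the market cell

# Dijkstra over the grid: moving between neighbors costs the mean of the
# two cells' frictions times the (unit) cell distance.
rows, cols = friction.shape
cost = np.full((rows, cols), np.inf)
cost[source] = 0.0
heap = [(0.0, source)]
while heap:
    d, (r, c) = heapq.heappop(heap)
    if d > cost[r, c]:
        continue  # stale entry
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            step = 0.5 * (friction[r, c] + friction[nr, nc])
            if d + step < cost[nr, nc]:
                cost[nr, nc] = d + step
                heapq.heappush(heap, (d + step, (nr, nc)))
```

On this surface the cheapest route to the far corner follows the low-friction "road" cells down the middle column rather than cutting across off-road cells.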
The final factor map needed in this stage is the slope factor map. The slope gradients have already been calculated (KVSLOPES).
However, our procedure for developing the standardized criterion scores will be slightly different. Instead of using the minimum
and maximum values as the control points, use FUZZY with the linear option and base it on values of 0 and 100 (the minimum
and a logically determined maximum slope) for control points c and d respectively. Call the output factor map SLOPEFAC. Use
DISPLAY Launcher with the Quantitative palette to examine the result and confirm that the high factor scores occur on the low
slopes (which should dominate the map).
1/9 extremely less important
1/7 very strongly less important
1/5 strongly less important
1/3 moderately less important
1   equally important
3   moderately more important
5   strongly more important
7   very strongly more important
9   extremely more important
The scale is continuous, and thus allows ratings of 2.4, 5.43 and so on. In addition, in comparing rows to columns in the matrix below, if a
particular factor is seen to be less important rather than more important than the other, the inverse of the rating is used. Thus, for example, if
a factor is seen to be strongly less important than the other, it would be given a rating of 1/5. Fractional ratings are permitted with reciprocal
ratings as well. For example, it is permissible to have ratings of 1/2.7 or 1/7.1 and so on.
To provide a systematic procedure for comparison, a pairwise comparison matrix is created by setting out one row and one column for each
factor in the problem. The group involved in the decision then provides a rating for each of the cells in this matrix. Since the matrix is
symmetrical, however, ratings can be provided for one half of the matrix and then inferred for the other half. For example, in the case of the
carpet industry problem being considered here, the following ratings were provided.
           waterfac   powerfac   roadfac   markfac   slopefac
waterfac   1
powerfac   1/5        1
roadfac    1/3        7          1
markfac    1/5        5          1/5       1
slopefac   1/8        1/3        1/7       1/7       1
The diagonal of the matrix is automatically filled with ones. Ratings are then provided for all cells in the lower triangular half of the matrix. In
this case, where a group was involved, the GIS analyst solicited a rating for each cell from a different person. After providing an initial rating,
the individual was asked to explain why he/she rated it that way. The rating and its rationale were then discussed by the group at large, in
some cases leading to suggestions for modified ratings. The final rating was then chosen either by consensus or compromise.
To illustrate this process, consider the first few ratings. The first ratings solicited were those involved with the first column. An individual was
selected by the analyst and asked the question, "Relative to proximity to water, how would you rate the importance of being near power?" The
person responded that proximity to power was strongly less important than proximity to water, and it thus received a rating of 1/5. Relative to
being near water, other individuals rated the relative importance of being near roads, near the market and on shallow slopes as moderately
less important (1/3), strongly less important (1/5) and very strongly less important (1/8) respectively. The next ratings were then based on the
second column. In this case, relative to being near to power, proximity to roads was rated as being very strongly more important (7),
proximity to market was seen as strongly more important (5), and slope was seen as being moderately less important (1/3). This procedure
then continued until all of the cells in the lower triangular half of the matrix were filled.
This pairwise rating procedure has several advantages. First, the ratings are independent of any specific measurement scale. Second, the
procedure, by its very nature, encourages discussion, leading to a consensus on the weightings to be used. In addition, criteria that were
omitted from initial deliberations are quickly uncovered through the discussions that accompany this procedure. Experience has shown,
however, that while it is not difficult to come up with a set of ratings by this means, individuals or groups are not always consistent in their
ratings. Thus the technique of developing weights from these ratings also needs to be sensitive to these problems of inconsistency and error.
To develop a set of weights from these ratings, we will use the WEIGHT module in TerrSet. The WEIGHT module has been specially
developed to take a set of pairwise comparisons such as those above, and determine a best fit set of weights that sum to 1.0. The basis for
determining the weights is through the technique developed by Saaty (1980), as discussed further in the Help for the module.
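The mathematics behind this step can be sketched in a few lines. The code below uses Saaty's principal-eigenvector method (power iteration) with the standard random-index table; treat it as an illustration of the technique, since the WEIGHT module's exact implementation may differ in detail. The matrix entered is the final (twice-revised) carpet matrix developed later in this exercise.

```python
def ahp_weights(matrix):
    """Best-fit weights for a reciprocal pairwise comparison matrix via
    power iteration (Saaty's principal eigenvector), plus the Consistency Ratio."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(200):                      # power iteration
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]            # normalize so weights sum to 1
    av = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(av[i] / w[i] for i in range(n)) / n      # lambda_max
    ci = (lam - n) / (n - 1)                           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # random index
    return w, ci / ri                                  # weights, Consistency Ratio

# Final carpet matrix (rows/columns: water, power, road, market, slope);
# the upper triangle holds the reciprocals of the ratings given below the diagonal:
m = [[1,    8,    3,    5,    8],
     [1/8,  1,    1/7,  1/5,  3],
     [1/3,  7,    1,    2,    7],
     [1/5,  5,    1/2,  1,    7],
     [1/8,  1/3,  1/7,  1/7,  1]]
weights, cr = ahp_weights(m)   # proximity to water dominates; CR below 0.10
```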
Run the module WEIGHT and specify to create a new pairwise comparison file. Name the output CARPET and indicate the
number of factors to be 5. Then insert the names of the factors, in this order: WATERFAC, POWERFAC, ROADFAC, MARKFAC,
SLOPEFAC. Hit next and you will be presented with an input matrix similar to the one above, with no ratings. Referring to the
matrix above, fill out the appropriate ratings and call the output file CARPET. Hit OK.
You will then be presented with the best fit weights and an indication of the consistency of the judgments. The Consistency Ratio
measures the likelihood that the pairwise ratings were developed at random. If the Consistency Ratio is less than 0.10, then the
ratings have acceptable consistency and the weights are directly usable. However, if the Consistency Ratio exceeds 0.10, significant
consistency problems potentially exist (see Saaty, 1980). This is the case with the ratings we entered.
What are the weights associated with the factors on this run? What was the Consistency Ratio?
Since the Consistency Ratio exceeds 0.10, we should consider revising our ratings. A second display will be presented in which
inconsistencies can be identified. This next display shows the lower triangular half of the pairwise comparison matrix along with a
consistency index for each. The consistency index measures the discrepancy between the pairwise rating given and the rating that
would be required to be perfectly consistent with the best fit set of weights.
The procedure for resolving inconsistencies is quite simple. First, find the consistency index with the largest absolute value
(without regard for whether it is negative or positive). In this case, the value of -3.39 associated with the rating of the proximity to
power factor (POWERFAC) relative to the proximity to water factor (WATERFAC) is the largest. The value -3.39 indicates that to
be perfectly consistent with the best fit weights, this rating would need to be changed by 3.39 positions to the left on the rating
scale (the negative sign indicates that it should be moved to the left -- i.e., a lower rating).
At this point, the individual or group that provided the original ratings should reconsider this problematic rating. One solution
would be to change the rating in the manner indicated by the consistency index. In this case, it would suggest that the rating
should be changed from 1/5 (the original rating) to 1/8.39. However, this solution should be used with care.
In this particular situation, the Nepalese group debated this new possibility and felt that the 1/8.39 was indeed a better rating.
(This was the first rating that the group had estimated and in the process of developing the weights, their understanding of the
problem evolved as did their perception of the relationships between the factors.) However, they were uncomfortable with the
provision of fractional ratings -- they did not think they could identify relative weights with any greater precision than that
offered by whole number steps. As a result, they gave a new rating of 1/8 for this comparison.
Return to the WEIGHT matrix and modify the pairwise rating such that the first column, second row of the lower triangular half
of the pairwise comparison matrix reads 1/8 instead of 1/5. Then run WEIGHT again.
What are the weights associated with the factors in this second run? What is the Consistency Ratio?
Clearly, we still haven't achieved an acceptable level of consistency. What comparison has the greatest inconsistency with the best fit set of weights?
Again, the Nepalese group who worked with these data preferred to work with whole numbers. As a result, after reconsideration of
the relative weight of the market factor to the road factor, they decided on a new weight of 1/2. What would have been their rating
if they had used exactly the change that the consistency index indicated?
Again edit the pairwise matrix to change the value in column 3 and row 4 of the CARPET pairwise comparison file from 1/5 to
1/2. Then run WEIGHT again. This time an acceptable consistency is reached.
What are the final weights associated with the factors? Notice how they sum to 1.0. What were the two most important factors in
the siting of carpet industry facilities in the judgment of these Nepalese officials?
Now that we have a set of weights to apply to the factors, we can undertake the final multi-criteria evaluation of the variables
considered important in siting carpet industry facilities. To do this, run the module MCE. The MCE module will ask for the
number of constraints and factors to be used in the model. Indicate 3 constraints and enter the following names:
SLOPECON
RINGCON
LANDCON
For the names of the factors and their weights, either enter the name of the pairwise comparison file saved from running
WEIGHT, i.e., CARPET, or enter the following:
WATERFAC
0.5077
POWERFAC
0.0518
ROADFAC
0.2468
MARKFAC
0.1618
SLOPEFAC
0.0318
Name the output CARPSUIT and run MCE. The MCE module will then complete the weighted linear combination. Display the
result. This map shows suitability for the carpet industry. Use "Add Layer" to overlay KVRIVERS and KVROADS. Note the
importance of these factors in determining suitability.
MCE uses a procedure that multiplies each of the factors by its associated weight, adds the results, and then multiplies this sum by
each of the constraints in turn. The procedure has been optimized for speed. However, it would also have been possible to
undertake this procedure using standard mathematical operators found in any GIS. Describe the TerrSet modules that could have
been used to undertake this same procedure in a step-by-step process.
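The arithmetic MCE performs can be sketched directly. The factor values, names, and cell counts below are made up for illustration; the sketch shows only the weighted-linear-combination logic, not the module's implementation.

```python
def mce_wlc(factors, weights, constraints):
    """Weighted linear combination: multiply each factor by its weight,
    sum the results, then multiply the sum by each Boolean constraint in turn."""
    suit = [sum(w * f[i] for f, w in zip(factors, weights))
            for i in range(len(factors[0]))]
    for con in constraints:
        suit = [s * c for s, c in zip(suit, con)]
    return suit

# Two illustrative 4-cell factor maps (0-1 scale) and one Boolean constraint:
waterfac = [0.9, 0.4, 0.7, 0.2]
roadfac  = [0.5, 0.8, 0.1, 0.6]
landcon  = [1, 1, 0, 1]          # 0 masks a cell out entirely
s = mce_wlc([waterfac, roadfac], [0.7, 0.3], [landcon])
# the masked cell scores 0.0 regardless of its factor values
```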
Display the map KVLANDC with the Qualitative palette and a legend. This land capability map combines information about
soils, temperature, moisture, and irrigation potential. Based on the information in the legend (see the beginning of this exercise
for detailed descriptions of the categories), the group of Nepalese officials who worked with these data felt that the most capable
soil was IBh1R, followed in sequence by IBh1, IIBh2st, IIIBh, IVCp, and IVBh.
To reclassify the land capability map into an ordinal map of physical suitability for agriculture, use Edit to create an integer
attribute values file named TMPVAL. Enter values that reassign each class in the land capability map to its ordinal rank of
capability, beginning with 1 for the least capable class and following the capability sequence given above.
Next, run ASSIGN and use KVLANDC as the feature definition image to create the output image TMPSOIL using TMPVAL as
the attribute values file of reassignments.
Then run STRETCH with a simple linear stretch. Specify TMPSOIL as the input image and SOILFAC as the output image name.
For the output image parameters, select real as the output data type and specify the minimum and maximum values as 0.0 and 1.0,
respectively. Display the result with the Qualitative palette.
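The linear stretch itself is simple arithmetic: values are rescaled so that the observed minimum and maximum map to the new minimum and maximum. A minimal sketch (illustrative, not the STRETCH module), using a hypothetical set of six ordinal ranks:

```python
def linear_stretch(values, new_min=0.0, new_max=1.0):
    """Linearly rescale values so the observed min/max map to new_min/new_max."""
    lo, hi = min(values), max(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo) for v in values]

ordinal = [1, 2, 3, 4, 5, 6]        # ordinal capability ranks
soilfac = linear_stretch(ordinal)   # evenly spaced scores from 0.0 to 1.0
```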
This now gives us the following constraints and factors to be brought together in the multi-criteria evaluation of land suitability
for agriculture:
Constraints
SLOPECON
RINGCON
LANDCON
Factors
WATERFAC
SLOPEFAC
SOILFAC
MARKFAC
There is some question about the advisability of using ordinal data sets in the development of factor maps. Factor maps are assumed to contain interval or ratio data.
The standardization procedure ensures that the end points of the new map have the same meaning as for any other factor -- they indicate areas that have the
minimum and maximum values within the study area on the variable in question. However, there is no guarantee that the intermediate values will be correctly
positioned on this scale. Although in this particular case it was felt that classes represented fairly even changes in land capability, input data of less than interval
scaling should, in general, be avoided.
Here is the lower triangular half of the pairwise comparison matrix for the factors as judged by the Nepalese decision team:
           waterfac   slopefac   soilfac   markfac
waterfac   1
slopefac   1/7        1
soilfac                          1
markfac    1/6        1/3        1/6       1
Now use the WEIGHT and MCE procedures as outlined in the carpet facilities suitability section to create an agricultural
suitability map. Call the pairwise comparison file AGRI and the final agricultural suitability map AGSUIT.
What were the final weights you determined for the factors in this agricultural suitability map? What was the Consistency Ratio?
How many iterations were required to achieve a solution?
In the case here, we wish to isolate the best 1500 hectares of land. In this data set, each cell is 30 meters by 30 meters. This amounts to 900
square meters, or 0.09 hectares per cell (since a hectare contains 10,000 square meters). As a result, 1500 hectares is the equivalent of 16,666
cells.
Run the module TOPRANK and specify the input file CARPSUIT. Specify 16666 as the number of cells. Call the output image to
be produced CARPRANK. Click OK to run.
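This cell-count arithmetic generalizes to a small helper (illustrative, not a TerrSet function):

```python
def hectares_to_cells(hectares, cell_size_m=30):
    """Convert an areal target to a cell count for a square-cell raster."""
    cell_area_ha = (cell_size_m ** 2) / 10_000   # 900 m^2 = 0.09 ha per cell
    return int(hectares / cell_area_ha)

cells = hectares_to_cells(1500)   # 16666 cells for 1500 ha at 30 m resolution
```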
Note particularly, however, that this process of looking at the problem from a single-objective perspective is not normally undertaken in the solution of multi-objective problems. It is only presented here because it is easier to understand the multi-objective procedure once we have examined the problem from a single-objective perspective.
Display the result. You may wish to use "Add Layer" and the advanced palette selection to overlay the KVRIVERS file with a
BLUE symbol file and the KVROADS file with a GREEN symbol file.
Now use the same procedure as that just described to create AGRANK from AGSUIT that isolates the best 66,666 cells (which is
the equivalent of 6000 hectares).
Use the module CROSSTAB to produce a cross-classification image of CARPRANK against AGRANK. Call this cross-
classification image CONFLICT. Then display the result, with a legend, to examine the CONFLICT image.
Which class shows areas that are best suited for the carpet industry and not for agriculture? Which class shows areas that are best
suited for agriculture and not for the carpet industry? Which class shows areas of conflict (i.e., were selected as best for both
agriculture and the carpet industry)?
The conflict image thus illustrates the nature of the multi-objective problem with competing objectives. The ultimate solution still
needs to meet the area targets set (1500 hectares of land for the carpet industry and 6000 hectares of land for agriculture).
However, since land can only be allocated to one or the other use, conflicts will need to be resolved.
The MOLA procedure resolves these conflicts through an iterative process of:
a.
ranking the suitability maps using the module TOPRANK to identify the best areas according to the areal requirement;
b.
resolving conflicts using a minimum distance to ideal point rule based on weighted objectives;
c.
checking how far short of the area targets each objective is, and then
d.
repeating the process until the area targets of each objective are met.
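The conflict-resolution rule in step b can be sketched for a single cell. This is a deliberately simplified illustration of a minimum-distance-to-ideal-point rule on 0-1 suitabilities; MOLA's actual implementation operates on ranked suitability maps and differs in detail. The cell values below are made up.

```python
def resolve_conflict(suits, weights):
    """For one conflicted cell, pick the objective whose weighted distance
    to its ideal point (1 on its own axis, 0 on the others) is smallest.
    Suitabilities are assumed rescaled to a 0-1 range."""
    n = len(suits)
    best, best_d = None, float("inf")
    for k in range(n):
        ideal = [1.0 if j == k else 0.0 for j in range(n)]
        d = sum((weights[j] * (suits[j] - ideal[j])) ** 2
                for j in range(n)) ** 0.5
        if d < best_d:
            best, best_d = k, d
    return best

# A cell rated 0.9 for agriculture and 0.7 for the carpet industry,
# with equal objective weights, goes to agriculture (objective 0):
choice = resolve_conflict([0.9, 0.7], [0.5, 0.5])
```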
Now to complete the multi-objective decision process, run the module named MOLA. Select the multi-objective allocation type.
Specify to use area requirement and use the spin buttons to set 2 as the number of objectives. Then, enter the two suitability maps,
AGSUIT and CARPSUIT. Specify an allocation caption for each, Agriculture and Carpet Industry, respectively. Enter 0.5 as the
objective weight for each. Next, specify the areal requirements of 66666 for agriculture and 16666 for the carpet industry. Specify
the output name as FINAL. Click OK to run.
Display FINAL and use "Add Layer" to overlay the roads (KVROADS) with the GREEN user-defined symbol file and the rivers
(KVRIVERS) with the WHITE user-defined symbol file. What evidence can you cite for the procedure appearing to work?
Feel free to experiment with the many options such as contiguity and compactness. Compare the results of each.
Conclusions
The procedure illustrated in this exercise provides both immediate intuitive appeal and a strong mathematical basis. Moreover, this choice
heuristic highlights the participatory methodology employed throughout this workbook. The logic is easily understood, and the
procedure offers an excellent vehicle for discussion of the identified criteria and objectives and their relative strengths and weaknesses.
It isolates the decisions between competing objectives to those cases where the effects of an incorrect decision would be least damaging:
areas that are highly suitable for all objectives.
EXERCISE 2-13
SPATIAL DECISION MODELER (SDM)
In this exercise we will explore the Spatial Decision Modeler (SDM) modeling environment. Spatial Decision Modeler is a graphical decision
support tool that provides a graphical interface for developing decision models that can resolve complex resource allocation decisions. For
this exercise we will develop a planning map for the metro west area of Massachusetts with the goal of allocating 3600 ha for additional
protection and 3600 ha for residential development. Fundamentally, the development of a planning map is a multi-objective/multi-criteria
decision problem. In this case, we would like to allocate land for two objectives. Each of these objectives requires a number of criteria. For
example, to calculate the suitability for the protected area objective may require such factors as proximity to primary roads, proximity to
urban areas, proximity to residential areas, etc.
The Spatial Decision Modeler uses the language and the logic developed around the TerrSet decision support tools, including the
development of factors and constraints with tools such as FUZZY and RECLASS, the combination of factors to produce suitability maps with
the MCE tool (multi-criterion evaluation), and the combination of multiple objectives with the MOLA tool (multi-objective land allocation).
The SDM graphical interface is modeled after Macro Modeler. It will be useful to review the help and tutorial on Macro Modeler and decision
support before modeling with SDM.
We have identified many data layers to address the competing objectives of protection and residential development.
The variables which influence the suitability for protected areas include:
1) distance from primary roads (DIST_PRIMARY)
2) distance from secondary roads (DIST_SECONDARY)
3) distance from tertiary roads (DIST_TERTIARY)
4) distance from urban areas (DIST_URBAN)
5) distance from residential areas (DIST_RESID)
6) proximity to existing protected areas (PROX_PROTECTED)
The variables which influence the suitability for residential areas include:
1) cost distance from urban areas (COSTDIST_URBAN)
2) distance from primary roads (DIST_PRIMARY)
3) distance from secondary roads (DIST_SECONDARY)
4) open water views (OPEN_WATER_VIEW)
5) slopes (SLOPE)
Notice that some variables can apply to both objectives. There is as well a Boolean constraint map, CONSTRAINT, which will exclude
existing urban areas, residential areas and water bodies from the result.
From the SDM menu or the toolbar, click Decision Variables and then Add variable. From the pick list, add the first variable,
distance from primary roads, DIST_PRIMARY. Do this for the remaining 5 variables, DIST_SECONDARY, DIST_TERTIARY,
DIST_URBAN, DIST_RESID, and PROX_PROTECTED.
Next, add the constraint from the Decision Variable menu by selecting Add constraint. Add the file CONSTRAINT.
Now that we have all the protected area input variables on the workspace, our next step is to convert these variables to factor maps using the
FUZZY decision operator. The use of the FUZZY operator not only converts each variable to be on the same scale, but also allows the user to
define what is suitable for a given variable. For example, we have a distance from primary roads variable. Should 10 km from a road be given
the same preference during the aggregation process as 500 meters from the same road? The FUZZY operator allows us to define these variable
preferences, or in the language of the FUZZY operator, its membership function. We will use the FUZZY module to convert the value in each
variable map to a specific range with a specific membership so that they can be combined to create a suitability map using the MCE
procedure.
Insert a FUZZY operation into the modeling area, either from the Decision Operations menu or its associated icon on the toolbar.
Since each variable has to be converted through a fuzzy operation, the number of FUZZY operators inserted has to equal the
number of variables. Not including the constraint variable, insert six FUZZY operators onto the workspace and place each next to
a variable. Then link each variable with a FUZZY operator using the Connect link icon on the toolbar. The output of FUZZY
operators will be factors, shown with default output filenames in the blue rectangle.
Finally, for each FUZZY output, change the output filename. Right-click on each output filename and replace the initial
characters with the characters fprot, denoting the fuzzy result for protected land variables. The new names should be:
FPROT_PRIMARY, FPROT_SECONDARY, FPROT_TERTIARY, FPROT_URBAN, FPROT_RESID, and
FPROT_PROTECTED.
Next we need to enter the fuzzy parameters for each variable in order to transform them into factors. Right-click on each FUZZY
operator and set each according to the table below.
Variable name      Function shape            Function type   Control points
FPROT_PRIMARY      Monotonically Increasing  Sigmoidal       a: 500; b: 5000
FPROT_SECONDARY    Monotonically Increasing  Sigmoidal       a: 100; b: 2000
FPROT_TERTIARY     Monotonically Increasing  Sigmoidal       a: 0; b: 1000
FPROT_PROTECTED    Monotonically Decreasing  Sigmoidal       c: 0; d: 1000
FPROT_URBAN        Monotonically Increasing  Sigmoidal       a: 500; b: 5000
FPROT_RESID        Monotonically Increasing  Sigmoidal       a: 0; b: 1000
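The sigmoidal membership function can be sketched in a few lines. The sine/cosine-squared form below is the standard shape for this family of curves; treat it as an illustration rather than FUZZY's exact implementation.

```python
import math

def fuzzy_sigmoidal(x, a=None, b=None, c=None, d=None):
    """Sigmoidal fuzzy membership on a 0-1 scale.
    a-b defines a monotonically increasing limb (0 at a, 1 at b);
    c-d defines a monotonically decreasing limb (1 at c, 0 at d)."""
    if a is not None:                                   # increasing
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return math.sin((x - a) / (b - a) * math.pi / 2) ** 2
    if x <= c:                                          # decreasing
        return 1.0
    if x >= d:
        return 0.0
    return math.cos((x - c) / (d - c) * math.pi / 2) ** 2

# An FPROT_PRIMARY-style curve: unsuitable up to 500 m,
# fully suitable past 5000 m, and 0.5 at the midpoint of the limb:
m = fuzzy_sigmoidal(2750.0, a=500, b=5000)
```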
After defining the fuzzy parameters for each protected area variable, the next step is the MCE aggregation that will combine all the factors to
create a protected area suitability map, our first objective. We will link each factor to one MCE operator to accomplish this task.
Add an MCE operation from the Decision Operations menu. Then, using the Connect option, link each factor to the MCE
aggregation operation. Also link the constraint file, CONSTRAINT, to the MCE operation.
Right-click on the MCE output filename and rename the output to OBJ_PROT.
Since MCE is a weighted linear combination, we next need to set the weights that will be applied to each factor during the MCE
operation. Right-click the MCE operator and set the aggregation operation as medium decision risk / no tradeoff.
Then, for each factor set the weight listed below.
Factor name        Weights
DIST_PRIMARY       0.4085
DIST_SECONDARY     0.1158
DIST_TERTIARY      0.0610
PROX_PROTECTED     0.0243
DIST_URBAN         0.2550
DIST_RESID         0.1355
Save the model using the name TUTOR_SDM. If you want, you can run the model at this point to check that all the parameters are set
correctly. Click the Run menu item or the Run icon on the toolbar.
Residential Objective
We have now completed the first half of the analysis, deriving the protected area objective. Since our problem is a multi-objective problem
with competing objectives, the next phase is to add the residential land allocation portion of the model to our SDM workspace.
From the SDM menu or the toolbar, click Decision Variables and then Add variable. From the pick list, add the first variable, cost
distance from urban areas, COSTDIST_URBAN. Add the remaining 4 variables: DIST_PRIMARY, DIST_SECONDARY,
OPEN_WATER_VIEW, and SLOPE.
Next, add the constraint from the Decision Variable menu by selecting Add constraint. Add the file CONSTRAINT. This step is
optional; you could use the existing constraint file already on the workspace.
As we did previously, we need to develop factor maps (suitability maps) based on each variable using the FUZZY module.
Insert a FUZZY operation next to each of the five residential variables. Then link each variable with a FUZZY operator using the
Connect link icon on the toolbar. The output of FUZZY operators will be factors, shown as a blue rectangle.
Next, change the output name for each fuzzy output. Replace the initial characters and precede each with fres, denoting fuzzy
factors for the residential land evaluation. The new names should be: FRES_PRIMARY, FRES_SECONDARY, FRES_URBAN,
FRES_OPEN_WATER, and FRES_SLOPE.
Then, enter the fuzzy parameters for each variable in order to transform them into factors. Right-click each FUZZY operator and
set each according to the table below.
Variable name      Function shape            Function type   Control points
DIST_PRIMARY       Monotonically Increasing  Sigmoidal       a: 0; b: 1000
DIST_SECONDARY     Monotonically Increasing  Sigmoidal       a: 0; b: 500
COSTDIST_URBAN     Symmetric                 Sigmoidal
OPEN_WATER_VIEW    Monotonically Increasing  Linear          a: 0; b: 0.08
SLOPE              Monotonically Decreasing  Sigmoidal       c: 0; d: 25
We can now link all the outputs from FUZZY to a new MCE operation that will calculate the residential objective.
Add an MCE operation into the workspace. Link each of the five residential factors to this new MCE operation, and also link the
constraint file, CONSTRAINT, to it.
Rename the MCE operator output file for residential land allocation to be OBJ_RES.
Next, set the weights for each factor to be applied during the MCE operation. Right-click the MCE operator for residential land
allocation and set the aggregation operation as medium decision risk / no tradeoff. Then, for each factor set the weight as listed
below.
Factor name        Weights
FRES_URBAN         0.0811
FRES_PRIMARY       0.2900
FRES_SECONDARY     0.1628
FRES_SLOPE         0.4340
FRES_OPEN_WATER    0.0321
Add the MOLA operation into the SDM workspace from the Decision Operations menu. Then link the two result images from
the MCE operations to the MOLA operator. Also, link the constraint file, CONSTRAINT, to the MOLA operator. Although the
constraint map was taken into account during the MCE operations, it can be included in the MOLA step to speed up the
allocation calculation.
Right-click the MOLA operator to set its parameters. Select to use area requirement. The grid should show two records, our two
objectives. Leave the objective weight for each at 1 (equal weight). Then set the area requirement for both objectives at 40000.
(40,000 is the number of cells which, given the 30 meter resolution of the data, is equivalent to 3600 ha.) Deselect both of the
force options for contiguity and compactness for now.
Right-click on the output filename for MOLA, enter MOLA as the new filename.
When the model finishes, the MOLA image will be autodisplayed. Also display the two MCE output images, OBJ_PROT and
OBJ_RES.
Viewing the two MCE results, which factors seem to dominate in determining each of the suitability maps?
Viewing the MOLA output, notice how the allocation of pixels is scattered throughout the study area. What do you suppose
accounts for the residential allocation to be less contiguous than the protected area allocation?
Suppose one would like to use the residential allocation to start a housing construction project. A final allocation that has one or
a few contiguous areas would be preferable. We will now take contiguity into account.
Right-click on the MOLA operator and select the force contiguous allocation option. Close the parameter dialog by clicking OK,
then rename the MOLA output to MOLA_CONTIG. Run the model again.
This time the results are two contiguous regions, one for residential and one for protected areas.
Now suppose we want to find the best three parcels for residential development. We can do this as a single objective problem by running
MOLA with just one objective.
X
Disconnect the link between "OBJ_PROT" and the MOLA operator. You can do this by selecting the link. Once it is highlighted you
can select delete. This leaves only the residential objective.
Y
Now right-click the MOLA operator to set parameters. Notice how the parameters dialog is very different now that we have only
one objective. Select to force contiguous allocations and set the number of clusters to 3. Next, select areal requirement and enter a
value of 40000. Then close the parameters dialog.
AA
Using the same model as in the previous example, we will delete everything related to the residential land allocation, including all the
FUZZY, MCE and MOLA operators and their inputs and outputs. What remains are the FUZZY operators for creating factors
related to protected land allocation and the MCE operator for creating the corresponding suitability map.
BB
Add another variable, WATER_YIELD, and connect this new variable to a new FUZZY operator. Change the output name to
FWATER_YIELD. Right-click the FUZZY operator and set the water yield parameters below:
Variable name      Function shape            Function type   Control points
WATER_YIELD        Monotonically Increasing  Sigmoidal       a: 0; b: 1000
CC
Add an MCE operator and link both FWATER_YIELD and OBJ_PROT to it. Change the output name to COMB_SUIT.
DD
Right-click on the new MCE operation and set the aggregation option to medium decision risk / full tradeoff. Give OBJ_PROT a
higher weight (0.7) than FWATER_YIELD (0.3). The MCE operator will create a composite suitability image for both objectives.
EE
Add a MOLA operation into the workspace and connect COMB_SUIT to it. Right-click the MOLA operator and select to force
contiguous allocations and set the number of clusters to 4. Set the areal requirement to 40000.
FF
Link the constraint file to the MOLA operator. Right-click the MOLA output filename and rename it to MOLA_COMP.
EXERCISE 2-14
WEIGHT-OF-EVIDENCE MODELING
WITH BELIEF
This exercise will expand upon the series of MCE/MOLA Decision Support exercises of the previous section by examining another method
for the aggregation of data known as Dempster-Shafer Weight-of-Evidence modeling. The Belief module, used in this exercise, has a wide
variety of applications, as it can aggregate many different sources of information to predict the probability that any phenomenon might occur.
Because the tool provides the user with a method for reviewing the relative strength of the information gathered to establish belief values, it is
useful for applying anecdotal information to an analysis since one can acknowledge ignorance in the final outcome produced. With this
flexibility, it becomes possible to establish and evaluate the relative risk of decisions made based on the total information that is available. The
user should review the Dempster-Shafer section of the Decision Support: Decision Strategy Analysis chapter in the TerrSet Manual for
more background information.
As an introduction to the module, this exercise will demonstrate how to evaluate sample evidence for which the application of expert
knowledge is important, and then derive probability surfaces in order to demonstrate that knowledge. This exercise also will demonstrate how
to combine evidence to predict the belief in a phenomenon occurring across an entire raster surface.
The user will evaluate existing evidence using expert knowledge to transform the evidence into probabilities to support certain hypotheses
which, represented as probability surfaces, are then aggregated in the Belief module. The objective is to evaluate the probability that an
archaeological site may be found in each pixel location in a surface representing the Piñon Canyon in the American Southwest.1 Given
knowledge about existing archaeological sites and given expert knowledge about the culture, each line of evidence is transformed into a layer
representing the likelihood that a site exists. The aggregated evidence produces results that are used to predict the presence of archaeological
sites, evaluate the impact of each line of evidence to the total body of knowledge, and identify areas for further research.
The research question guides us to define the frame of discernment, which includes two basic elements: [site] and [nonsite]. The hierarchical
combination of all possible hypotheses therefore includes [site], [nonsite], and [site, nonsite]. We are most interested in the results produced
for the hypothesis [site]. The existing evidence we use, however, may support any of the possible hypotheses. The final results produced for
the hypothesis [site] are dependent on how all evidence is related together in the process of aggregation. Even though the evidence may
support other hypotheses, it indirectly affects the total belief in [site].
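To make the aggregation concrete, here is a minimal sketch of Dempster's rule of combination for the two-element frame used in this exercise. The evidence masses below are illustrative numbers, not values from the Piñon Canyon data, and the sketch is not the Belief module's implementation.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the frame
    {site, nonsite} using Dempster's rule. Keys: 'site', 'nonsite',
    and 'both' (the ignorance mass assigned to [site, nonsite])."""
    def meet(a, b):
        if a == 'both':
            return b
        if b == 'both':
            return a
        return a if a == b else None     # None = empty intersection (conflict)

    combined = {'site': 0.0, 'nonsite': 0.0, 'both': 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = meet(a, b)
            if inter is None:
                conflict += pa * pb      # mass assigned to contradictory sets
            else:
                combined[inter] += pa * pb
    # renormalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Shard counts weakly support [site]; steep slopes support [nonsite]:
shards = {'site': 0.4, 'nonsite': 0.0, 'both': 0.6}
slope  = {'site': 0.0, 'nonsite': 0.3, 'both': 0.7}
m = dempster_combine(shards, slope)
# belief in [site] is m['site']; plausibility is m['site'] + m['both']
```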
Kenneth Kvamme of the Department of Anthropology, University of Arkansas, Fayetteville, Arkansas, USA, donated the sample data. We have formed a hypothetical
example from his data set.
We have gathered indirect evidence that is related to the likelihood that an archaeological site exists. They are: known sites, frequency of
artifacts (shards counted), permanent water, and slopes. The evidence is derived from different sources independent of each other. Each line
of evidence is associated with the hypotheses only indirectly, therefore ignorance is an important factor to acknowledge in the analysis. We
must be explicit about what we know and what we do not know.
The data files for this exercise consist of:
SITES: vector file containing known archaeological sites
WATER: image file of permanent waters
SHARD_SITE: probability image in support of the hypothesis [site], derived from frequency of shard counts
SLOPE_NONSITE: probability image in support of the hypothesis [nonsite], derived from slopes.
First we need to derive, for each line of evidence, probability images for the hypotheses that the evidence supports. Deciding which hypothesis
to support, given the evidence, is not always very clear. Often the distinction between which hypothesis the evidence supports is very subtle.
For each line of evidence we develop, we must decide where our knowledge lies about the relationship between the evidence and the
hypotheses. This in part determines which hypothesis the evidence supports, as well as how we develop probability values for each hypothesis
supported. For example, in the case of slopes, we are less certain about which slopes attract settlement than about which slopes are unlivable.
Gentle slopes may seem to support the hypothesis that there will be a site. However, since gentle slopes are only a necessary but not a
sufficient condition for a site to exist, they only constitute a plausibility instead of a belief for [site]. Therefore, they support the hypothesis
[site, nonsite]. Steep slopes, on the other hand, indicate a high likelihood that a location is NOT a site, thus, such slopes support the
hypothesis [nonsite].
In many cases, the evidence we have only supports the plausibility or negation of the primary hypothesis of concern. This means that
evidence supports the hypotheses [site, nonsite] or [nonsite] rather than [site]. Our ignorance about a hypothesis is greatest when the
support for the hypothesis by the evidence is indistinguishable from the support for other hypotheses. Likewise, if the evidence only supports
the complement of the hypothesis, the contrary is true. This is because it is often the case that the clearest and strongest evidence we have only
supports the negation of the hypothesis of concern. Even with our total body of knowledge in hand, we may only produce evidence images
that state an overall lack of evidence to support the hypothesis of concern. This does not, however, mean that the information is not useful.
Indeed, it means just the opposite. By producing these evidence images, we seek to refine the hypothesis of where a spatial phenomenon is
likely to occur by applying evidence that reduces the likelihood the phenomenon will NOT exist. By aggregating different sources of
probability information, we can narrow down the range of probabilities for the hypothesis of concern, thus making it possible either to make
a prediction for the hypothesis or to narrow the number of selected locations for further information gathering.
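The aggregation described above follows Dempster's rule of combination, which is what the Belief module applies internally. As a rough conceptual illustration only (a Python sketch, not TerrSet's implementation), the example below combines two hypothetical lines of evidence over the frame {site, nonsite}; the numeric supports are invented for the example.

```python
# Sketch of Dempster's rule of combination over the two-element frame
# {site, nonsite}. BPAs (basic probability assignments) are dicts keyed by
# frozenset hypotheses. Illustrative only; the Belief module does this work.
from itertools import product

FRAME = frozenset({"site", "nonsite"})

def combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    k = 1.0 - conflict           # normalize by the non-conflicting mass
    return {h: m / k for h, m in combined.items()}

# Hypothetical evidence 1: distance from water supports [nonsite] with 0.6;
# the remainder is ignorance, i.e. support for [site, nonsite].
m_water = {frozenset({"nonsite"}): 0.6, FRAME: 0.4}
# Hypothetical evidence 2: proximity to known sites supports [site] with 0.5.
m_sites = {frozenset({"site"}): 0.5, FRAME: 0.5}

m = combine(m_water, m_sites)
belief_site = m.get(frozenset({"site"}), 0.0)
plausibility_site = sum(v for h, v in m.items() if "site" in h)
```

The gap between `belief_site` and `plausibility_site` is the remaining ignorance about [site]; adding independent lines of evidence narrows this interval, which is the sense in which aggregation "narrows down the range of probabilities."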
First we will set up the Working Folder for this exercise. Using TerrSet Explorer, choose the folder \TerrSet Tutorial\Advanced
GIS as your Working Folder.
Then display a raster image called WATER using the Default Qualitative palette. This represents the permanent water bodies in
the area. On Composer, choose to add a vector layer called SITES using the outline white symbol file. These are the existing
known archaeological sites in the area.
The association of most sites with permanent water suggests that these rivers are a determining factor for the presence of the sites. We can see
that most (if not all) of the sites are close to permanent water. Our knowledge about this culture indicates that water is a necessary condition
of living, but it is not sufficient by itself, since other factors such as slopes also affect settlement. Therefore, closeness to water indicates the
plausibility for a site. Locations farther away from water, on the other hand, clearly support the hypothesis [nonsite], for without water or the
means to access it, people cannot survive. Distance to permanent water is important in understanding the relationship of this evidence to our
hypothesis of concern, [site]. To look at the relationship between distance to water and known site locations, we must perform the following
steps.
Run the module DISTANCE from the GIS Analysis menu on the image WATER and call the result WATERDIST. Inspect the result.
Run the module RASTERVECTOR from the Reformat menu with the point to raster option. Enter SITES as the input vector file
and SITES as the image to be updated. For the operation type, choose to change cells to record the identifiers of each point. Press
OK. Since the image file SITES does not exist, you will next be asked to create the image with INITIAL. Choose yes. Enter
WATER as the image from which to copy the parameters, select byte as the output data type and set the initial value to 0. When
the resulting file autodisplays, open Layer Properties and change the palette to be Qual.
Now we have a raster image SITES that contains known sites and a raster image WATERDIST that contains distance from water values.
To initially develop the relationship between the sites and their distance from water, the module HISTO will be used. Since it is only the pixels
that contain sites that we want to analyze, we will use a mask image with HISTO.
Run the module HISTO and indicate that an image file will be used. The input file is WATERDIST. Click on the checkbox to use
a mask and enter SITES as the mask filename. Choose graphic output and enter the value 100 for the class width.
The graphic histogram shows the frequency of different distance values among the existing archaeological sites. Such a sample describes a
relationship between the distance values and the likelihood that a site may occur. Notice that when distance is greater than 800 m, there are
relatively few known sites. We can use this information to derive probabilities for the hypothesis [nonsite].
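To make the masked-histogram step concrete, here is a minimal NumPy sketch of the same operation: tallying distance values only at pixels flagged as sites. The arrays are random stand-ins for the WATERDIST and SITES images; in TerrSet, HISTO with a mask does this for you.

```python
# NumPy sketch of HISTO with a mask: count distance-to-water values only at
# pixels where the site mask is set. Arrays are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
waterdist = rng.uniform(0, 2000, size=(100, 100))  # stand-in distance image
sites = rng.random((100, 100)) < 0.01              # stand-in site mask

site_distances = waterdist[sites]                  # pixels under the mask only
counts, edges = np.histogram(site_distances,
                             bins=np.arange(0, 2100, 100))  # class width 100 m
```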
Run the module FUZZY. Use WATERDIST as the input file, then choose the Sigmoidal function with a monotonically increasing
curve. Enter 800 and 2000 as control points a and b, respectively. Choose real data output and call the result WATERTMP. Use
the Identify tool to explore the range of values in your result.
WATERTMP contains the probabilities for the hypothesis [nonsite]. The image shows that when distance to permanent water is 800 m, the
probability for [nonsite] starts to rise following a sigmoidal shaped curve, until at 2000 m when the probability reaches 1. However, there is a
problem with this probability assessment. When probability reaches 1 for the hypothesis [nonsite], it does not leave any room for ignorance
for other types of water bodies (such as ground water and non-permanent water). To incorporate this uncertainty, we will scale down the
probability.
Run the module SCALAR, and multiply WATERTMP by 0.8 to produce a result called WATER_NONSITE. (Note that you could
also use Image Calculator for this.)
In the image WATER_NONSITE, the probability range is between 0 and 0.8 in support of the hypothesis [nonsite]. It is still a sigmoidal
function, but the maximum fuzzy membership is reduced from 1 to 0.8. The remaining evidence (1-WATER_NONSITE) produces the
probabilities that support the hypothesis [site, nonsite]. This is known as ignorance, and it is calculated automatically by the Belief module.
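The FUZZY and SCALAR steps amount to a simple per-pixel transformation of the distance values. The sketch below assumes the monotonically increasing sigmoidal function is the sin-squared ramp between the control points, as the module help describes; treat the exact formula as an assumption to verify against the FUZZY documentation.

```python
# Sketch of FUZZY (sigmoidal, monotonically increasing) followed by SCALAR.
# Assumption: TerrSet's sigmoidal curve is sin^2 of a linear ramp between
# control points a and b. Verify against the FUZZY module help.
import numpy as np

def sigmoidal_increasing(x, a, b):
    """Membership 0 at or below a, 1 at or above b, sin^2 ramp in between."""
    x = np.asarray(x, dtype=float)
    alpha = np.clip((x - a) / (b - a), 0.0, 1.0) * (np.pi / 2.0)
    return np.sin(alpha) ** 2

waterdist = np.array([0.0, 800.0, 1400.0, 2000.0, 3000.0])  # sample distances
watertmp = sigmoidal_increasing(waterdist, a=800, b=2000)   # FUZZY
water_nonsite = 0.8 * watertmp                              # SCALAR: x 0.8
ignorance = 1.0 - water_nonsite  # support for [site, nonsite]; the Belief
                                 # module computes this internally
```

Capping the membership at 0.8 reserves a minimum of 0.2 as ignorance everywhere, which is exactly the room left for unobserved water sources.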
Run DISTANCE on the image SITES and call the result SITEDIST. Then run FUZZY on SITEDIST with the J-shaped function.
Choose a monotonically decreasing curve, and use 0 and 350 (meters) as control points c and d, respectively. Call the result
SITE_SITE. These are the supporting probabilities for the hypothesis [site] given the evidence "known sites."
For locations that are far from the known sites, we do not have information to support the hypothesis [site], yet this could simply reflect that
research has not extended into those areas. Therefore, it does not support the hypothesis [nonsite]. It indicates ignorance (probability for the
hypothesis [site, nonsite]), which is calculated internally by the module Belief.
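For comparison with the sigmoidal case, the J-shaped, monotonically decreasing curve can be sketched the same way. The formula below (full membership at control point c, membership 0.5 at d, approaching 0 asymptotically beyond) is an assumption based on how TerrSet describes the J-shaped function; verify it against the FUZZY module help.

```python
# Sketch of FUZZY (J-shaped, monotonically decreasing) for SITEDIST.
# Assumption: membership is 1 at c, 0.5 at d, and decays as 1/(1 + r^2).
import numpy as np

def j_shaped_decreasing(x, c, d):
    x = np.asarray(x, dtype=float)
    mu = 1.0 / (1.0 + ((x - c) / (d - c)) ** 2)
    return np.where(x <= c, 1.0, mu)  # full membership up to c

sitedist = np.array([0.0, 350.0, 700.0])           # sample distances (m)
site_site = j_shaped_decreasing(sitedist, c=0, d=350)
```

Because the J-shaped curve never reaches 0, every location retains some small support for [site], consistent with the idea that unexplored areas are a matter of ignorance rather than disbelief.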
For the evidence representing shard counts, we use a similar reasoning as that for the evidence of known sites, and we derive the image
SHARD_SITE in support of the hypothesis [site]. The image represents the likelihood a site will occur at each location given the frequency of
discovered shards. Likewise for the slope evidence, we derive a probability image SLOPE_NONSITE which supports the hypothesis [nonsite].
This line of evidence represents the likelihood that a site will not occur given the steepness of slopes. We have already created these two
images for you.
Run the Belief module. Replace the Knowledge base title with: "Archaeological Sites." In the class list, we want to enter the basic
elements in the frame of discernment: site, and nonsite. In the class list box, enter the word SITE and then press the Add button.
Next enter the word NONSITE and press the Add button again. As soon as you enter both elements, a list of hierarchical
hypotheses will be created automatically in the hypotheses list. In this example, we have three hypotheses: [site], [nonsite], and [site, nonsite].