OTB Software Guide
http://www.orfeo-toolbox.org
e-mail: [email protected]
The ORFEO Toolbox is not a black box.
Ch.D.
FOREWORD
Besides the Pleiades (PHR) and Cosmo-Skymed (CSK) systems developments forming ORFEO, the dual and bilateral (France - Italy) Earth Observation system, the ORFEO Accompaniment Program was set up to prepare, accompany and promote the use and exploitation of the images derived from these sensors.
The creation of a preparatory program1 is needed because of:
• the new capabilities and performances of the ORFEO systems (optical and radar high resolu-
tion, access capability, data quality, possibility to acquire simultaneously in optic and radar),
• the implied need for new methodological developments: new processing methods, or adaptation of existing methods,
• the need to realise those new developments in very close cooperation with the final users for
better integration of new products in their systems.
This program was initiated by CNES mid-2003 and will last until mid-2013. It consists of two parts, between which it is necessary to keep a strong interaction:
• A Thematic part,
• A Methodological part.
The Thematic part covers a large range of applications (civil and defence), and aims at specifying and validating value-added products and services required by end users. This part includes considerations about product integration in operational systems or processing chains. It also includes careful thought about intermediary structures to be developed to help non-autonomous users. Lastly, this part aims at raising future users' awareness through practical demonstrations and validations.
1 http://smsc.cnes.fr/PLEIADES/A prog accomp.htm
The objective of the Methodological part is the definition and development of tools for the operational exploitation of submetric optic and radar images (tridimensional aspects, change detection, texture analysis, pattern matching, optic-radar complementarities). It is mainly based on R&D studies and doctorate and post-doctorate research.
In this context, CNES2 decided to develop the ORFEO ToolBox (OTB), a set of algorithms encapsulated in a software library. The goal of OTB is to capitalise on a methodological savoir-faire in order to adopt an incremental development approach, aiming to efficiently exploit the results obtained in the frame of methodological R&D studies.
All the developments are based on FLOSS (Free/Libre Open Source Software) or existing CNES
developments. OTB is distributed under the permissive open source license Apache v2.0 - aka
Apache Software License (ASL) v2.0:
http://www.apache.org/licenses/LICENSE-2.0
OTB is implemented in C++ and is mainly based on ITK3 (Insight Toolkit).
2 http://www.cnes.fr
3 http://www.itk.org
CONTENTS
I Introduction 1
1 Welcome 3
1.1 Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 How to Learn OTB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Software Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Obtaining the Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Directory Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.3 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.4 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 The OTB Community and Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.1 Join the Mailing List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.2 Community . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 A Brief History of OTB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5.1 ITK’s history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Known issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3 System Overview 21
3.1 System Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Essential System Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2.1 Generic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2.2 Include Files and Class Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2.3 Object Factories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.4 Smart Pointers and Memory Management . . . . . . . . . . . . . . . . . . . . . . . 24
3.2.5 Error Handling and Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.6 Event Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.7 Multi-Threading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Numerics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.4 Data Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.5 Data Processing Pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.6 Spatial Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
II Tutorials 33
5 Data Representation 61
5.1 Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.1.1 Creating an Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.1.2 Reading an Image from a File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.1.3 Accessing Pixel Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.1.4 Defining Origin and Spacing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.1.5 Accessing Image Metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.1.6 RGB Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.1.7 Vector Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.1.8 Importing Image Data from a Buffer . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.1.9 Image Lists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.2 PointSet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2.1 Creating a PointSet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2.2 Getting Access to Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2.3 Getting Access to Data in Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.2.4 Vectors as Pixel Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.3 Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.3.1 Creating a Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.3.2 Inserting Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.3.3 Managing Data in Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.4 Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.4.1 Creating a PolyLineParametricPath . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
12 Radiometry 301
12.1 Radiometric Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
12.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
12.1.2 NDVI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
12.1.3 ARVI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
12.1.4 AVI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
12.2 Atmospheric Corrections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
19 Classification 455
19.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
22 Hyperspectral 563
22.1 Unmixing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
25 Iterators 593
25.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
25.2 Programming Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
25.2.1 Creating Iterators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
25.2.2 Moving Iterators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
25.2.3 Accessing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
25.2.4 Iteration Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
25.3 Image Iterators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
V Appendix 687
34 Contributors 691
Index 705
PART I: INTRODUCTION

CHAPTER 1: WELCOME
1.1 Organization
This software guide is divided into several parts, each of which is further divided into several chap-
ters. Part I is a general introduction to OTB, with—in the next chapter—a description of how to
install the ORFEO Toolbox on your computer. Part I also introduces basic system concepts such
as an overview of the system architecture, and how to build applications in the C++ programming
language. Part II is a short guide of gradually increasing difficulty to get you started programming with OTB. Part III describes the system from the user's point of view. Dozens of examples are used to illustrate
important system features. Part IV is for the OTB developer. It explains how to create your own
classes and extend the system.
There are two broad categories of users of OTB. First are class developers, those who create classes
in C++. The second, users, employ existing C++ classes to build applications. Class developers
must be proficient in C++, and if they are extending or modifying OTB, they must also be familiar
with OTB’s internal structures and design (material covered in Part IV).
The key to learning how to use OTB is to become familiar with its palette of objects and the ways
of combining them. We recommend that you learn the system by studying the examples and then,
if you are a class developer, study the source code. Start with the first few tutorials in Part II to become familiar with the build process and the general program organization, followed by reading Chapter 3, which provides an overview of some of the key concepts in the system, and then review the examples in Part III. You may also wish to compile and run the dozens of examples distributed with the
source code found in the directory OTB/Examples. (Please see the file OTB/Examples/README.txt
for a description of the examples contained in the various subdirectories.) There are also several hundred tests found in the source distribution in OTB/Testing/Code, most of which are minimally documented testing code. However, they may be useful to see how classes are used together
in OTB, especially since they are designed to exercise as much of the functionality of each class as
possible.
1.3 Software Organization

The following sections describe the directory contents, summarize the software functionality in each directory, and locate the documentation and data.
1.3.1 Obtaining the Software

Periodic releases of the software are available on the OTB Website. These official releases are available a few times a year and announced on the ORFEO Web pages and mailing lists.
This software guide assumes that you are working with the latest official OTB release (available on
the OTB Web site).
OTB can be downloaded without cost from the following web site:
http://www.orfeo-toolbox.org/
In order to track the kind of applications for which OTB is being used, you will be asked to complete
a form prior to downloading the software. The information you provide in this form will help
developers to get a better idea of the interests and skills of the toolkit users.
Once you fill out this form you will have access to the download page. This page can be bookmarked to facilitate subsequent visits to the download site without having to complete the form again.
Then choose the tarball that best suits your system. The options are .zip and .tgz files. The first type is better suited for MS-Windows while the second one is the preferred format for UNIX systems. Once you unzip or untar the file, a directory called OTB will be created on your disk and you will be ready to start the configuration process described in Section 2.
There are two other ways of getting the OTB source code:
• Clone the current release with Git from the OTB git server (master branch),
• Clone the latest revision with Git from the OTB git server (develop branch).
These last two options need a proper Git installation. To get source code from Git, do:
Using Git, you can easily navigate through the different versions. The master branch contains the
latest stable version:
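As a sketch (using the repository URL given later in this chapter), cloning the repository and switching between the stable and development versions looks like:

```shell
# Clone the OTB repository (requires network access)
git clone https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb.git
cd otb
# The master branch holds the latest stable release:
git checkout master
# The develop branch holds the work in progress towards the next release:
git checkout develop
```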
There is also a mirror of OTB official repository on GitHub. You can find more information on the
OTB git workflow in the wiki.
1.3.2 Directory Structure

To begin your OTB odyssey, you will first need to know something about OTB's software organization and directory structure. It is helpful to know enough to navigate through the code base to find examples, code, and documentation.
The OTB contains the following subdirectories:
• OTB/Modules—the heart of the software; the location of the majority of the source code.
• OTB/CMake—internal files used during the configuration process.
• OTB/Copyright—the copyright information of OTB and all the dependencies included in the
OTB source tree.
• OTB/Examples—a suite of simple, well-documented examples used by this guide and to il-
lustrate important OTB concepts.
• OTB/Superbuild—CMake scripts to automatically download, patch, build and install impor-
tant dependencies of OTB (ITK, OSSIM, GDAL to name a few).
• OTB/Utilities—small programs used for the maintenance of OTB.
OTB is organized into different modules, each one covering a different part of image processing. It is therefore important to understand the source code directory structure, found in OTB/Modules. See also chapter 31 for more information about how to write modules.
1.3.3 Documentation
Besides this text, there are other documentation resources that you should be aware of.
1.3.4 Data
The OTB Toolkit was designed to support the ORFEO Accompaniment Program and its associated data. This data is available at http://smsc.cnes.fr/PLEIADES/index.htm.
1.4 The OTB Community and Support

1.4.1 Join the Mailing List

It is strongly recommended that you join the users mailing list. This is one of the primary resources for guidance and help regarding the use of the toolkit. You can subscribe to the users list online at
http://groups.google.com/group/otb-users
The otb-users mailing list is also the best mechanism for expressing your opinions about the toolbox
and to let developers know about features that you find useful, desirable or even unnecessary. OTB
developers are committed to creating a self-sustaining open-source OTB community. Feedback from
users is fundamental to achieving this goal.
1.4.2 Community
OTB was created from its inception as a collaborative, community effort. Research, teaching, and
commercial uses of the toolkit are expected. If you would like to participate in the community, there
are a number of possibilities.
• Users may actively report bugs, defects in the system API, and/or submit feature requests.
Currently the best way to do this is through the OTB users mailing list.
• Developers may contribute classes or improve existing classes. If you are a developer, you
may request permission to join the OTB developers mailing list. Please do so by sending
email to otb “at” cnes.fr. To become a developer you need to demonstrate both a level of
competence as well as trustworthiness. You may wish to begin by submitting fixes to the OTB
users mailing list.
• Research partnerships with members of the ORFEO Accompaniment Program are encouraged.
CNES will encourage the use of OTB in proposed work and research projects.
• Educators may wish to use OTB in courses. Materials are being developed for this purpose,
e.g., a one-day, conference course and semester-long graduate courses. Watch the OTB web
pages or check in the OTB-Documents/CourseWare directory for more information.
Orfeo ToolBox is currently in the incubation stage of becoming part of the OSGeo1 foundation. Within the ORFEO ToolBox community we act respectfully toward others, in line with the OSGeo Code of Conduct2.
1.5 A Brief History of OTB

Besides the Pleiades (PHR) and Cosmo-Skymed (CSK) systems developments forming ORFEO, the dual and bilateral (France - Italy) Earth Observation system, the ORFEO Accompaniment Program was set up to prepare, accompany and promote the use and exploitation of the images derived from these sensors.
The creation of a preparatory program3 is needed because of:
• the new capabilities and performances of the ORFEO systems (optical and radar high resolu-
tion, access capability, data quality, possibility to acquire simultaneously in optic and radar),
• the implied need for new methodological developments: new processing methods, or adaptation of existing methods,
• the need to realize those new developments in very close cooperation with the final users for
better integration of new products in their systems.
This program was initiated by CNES mid-2003 and will last until 2010 at least. It consists of two parts, between which it is necessary to keep a strong interaction:
1 http://www.osgeo.org/
2 http://www.osgeo.org/code of conduct
3 http://smsc.cnes.fr/PLEIADES/A prog accomp.htm
• A Thematic part
• A Methodological part.
The Thematic part covers a large range of applications (civil and defence ones), and aims at specifying and validating value-added products and services required by end users. This part includes considerations about product integration in operational systems or processing lines. It also includes careful thought about intermediary structures to be developed to help non-autonomous users. Lastly, this part aims at raising future users' awareness through practical demonstrations and validations.
The objective of the Methodological part is the definition and development of tools for the operational exploitation of the future submetric optic and radar images (tridimensional aspects, change detection, texture analysis, pattern matching, optic-radar complementarities). It is mainly based on R&D studies and doctorate and post-doctorate research.
In this context, CNES4 decided to develop the ORFEO ToolBox (OTB), a set of algorithms encapsulated in a software library. The goal of OTB is to capitalize on a methodological savoir-faire in order to adopt an incremental development approach, aiming to efficiently exploit the results obtained in the frame of methodological R&D studies.
All the developments are based on FLOSS (Free/Libre Open Source Software) or existing CNES
developments.
OTB is implemented in C++ and is mainly based on ITK5 (Insight Toolkit):
1.5.1 ITK's history

In 1999 the US National Library of Medicine of the National Institutes of Health awarded six three-year contracts to develop an open-source registration and segmentation toolkit that eventually came to be known as the Insight Toolkit (ITK) and formed the basis of the Insight Software
4 http://www.cnes.fr
5 http://www.itk.org
Consortium. ITK’s NIH/NLM Project Manager was Dr. Terry Yoo, who coordinated the six prime
contractors composing the Insight consortium. These consortium members included three com-
mercial partners—GE Corporate R&D, Kitware, Inc., and MathSoft (the company name is now
Insightful)—and three academic partners—the University of North Carolina (UNC), the University of Tennessee (UT) (Ross Whitaker subsequently moved to the University of Utah), and the University of Pennsylvania (UPenn). The Principal Investigators for these partners were, respectively, Bill Lorensen
at GE CRD, Will Schroeder at Kitware, Vikram Chalana at Insightful, Stephen Aylward with Luis
Ibáñez at UNC (Luis is now at Kitware), Ross Whitaker with Josh Cates at UT (both now at Utah),
and Dimitri Metaxas at UPenn (now at Rutgers). In addition, several subcontractors rounded out the
consortium, including Peter Ratiu at Brigham & Women's Hospital, Celina Imielinska and Pat Molholt at Columbia University, Jim Gee at UPenn's GRASP Lab, and George Stetten at the University of
Pittsburgh.
In 2002 the first official public release of ITK was made available.
CHAPTER 2: COMPILING OTB FROM SOURCE
There are two ways to install OTB on your system: installing from a binary distribution or compiling
from sources. You can find information about the installation of binary packages for OTB and
Monteverdi in the OTB-Cookbook.
This chapter covers the compilation of OTB from source. Note that it also includes the compilation
of Monteverdi which is integrated as an OTB module since version 5.8.
OTB has been developed and tested across different combinations of operating systems, compilers,
and hardware platforms including Windows, GNU/Linux and macOS. It is known to work with the
following compilers in 32/64 bit:
Since release version 6.2.0, OTB is compiled using the C++14 standard by default.
The challenge of supporting OTB across platforms has been solved through the use of CMake,
a cross-platform, open-source build system. CMake is used to control the software compilation
process using simple platform and compiler independent configuration files. CMake generates native
makefiles and workspaces that can be used in the compiler environment of your choice. CMake
is quite sophisticated: it supports complex environments requiring system configuration, compiler
feature testing, and code generation.
CMake supports several generators to produce the compilation scripts, depending on the platform and compiler. It can use:
The information used by CMake is provided by CMakeLists.txt files that are present in every
directory of the OTB source tree. These files contain information that the user provides to CMake at
configuration time. Typical information includes paths to utilities in the system and the selection of
software options specified by the user.
There are (at least) two ways to use CMake:
• Using the command ccmake (on Unix) or cmake-gui (on Windows): it provides an interactive
mode in which you iteratively select options and configure according to these options. The
iteration proceeds until no more options remain to be selected. At this point, a generation step
produces the appropriate build files for your configuration. This is the easiest way to start.
• Using the command cmake: a non-interactive, versatile tool designed for scripting. It can run both the configure and generate steps.
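As a sketch of the two workflows (assuming a build directory next to a source checkout in ../otb):

```shell
# Interactive: iteratively set options, then configure ('c') and generate ('g')
ccmake ../otb            # or cmake-gui ../otb on Windows
# Non-interactive: pass options with -D; configure and generate in one call
cmake -D CMAKE_BUILD_TYPE=Release ../otb
```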
As shown in figure 2.1, CMake has different interfaces depending on your system. Refer to section 2.1 for GNU/Linux and macOS build instructions and to section 2.2 for Windows.
For more information on CMake, check:
http://www.cmake.org
OTB depends on a number of external libraries. Some are mandatory, meaning that OTB cannot be
compiled without them, while others (the majority) are optional and can be activated or not during
the build process. See table 2.1 for the full list of dependencies.
2.1 GNU/Linux and macOS

The first thing to do is to create a directory for working with OTB. This guide will use ~/OTB but you are free to choose something else. In this directory, there will be three locations:
• ∼/OTB/otb for the source file obtained from the git repository
• ∼/OTB/build for the intermediate build objects, CMake specific files, libraries and binaries.
Figure 2.1: CMake interface. Top) ccmake, the UNIX version based on curses. Bottom) CMakeSetup, the
MS-Windows version based on MFC.
• ∼/OTB/install, the installation directory for OTB once it is built. A system location
(/usr/local for example) can also be used, but installing locally is more flexible and does
not require root access.
$ mkdir ~/OTB
$ cd ~/OTB
$ git clone https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb.git
$ mkdir build
$ mkdir install
The OTB project uses a git branching model where develop is the current development
version. It contains the latest patches and represents the work in progress towards the
next release. For more information regarding the use of Git in the project please have a look at: http://wiki.orfeo-toolbox.org/index.php/Git. See the CONTRIBUTING.md file
(https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb/blob/develop/CONTRIBUTING.md)
for more information on how to contribute to OTB.
Checkout the relevant branch now:
$ cd ~/OTB/otb
$ git checkout develop
Now you must decide which build method you will use. There are two ways of compiling OTB from
sources, depending on how you want to manage dependencies. Both methods rely on CMake.
• SuperBuild (go to section 2.1.2). All OTB dependencies are automatically downloaded and
compiled. This method is the easiest to use and provides a complete OTB with minimal effort.
• Normal build (go to section 2.1.3). OTB dependencies must already be compiled and available
on your system. This method requires more work but provides more flexibility.
If you do not know which method to use and just want to compile OTB with all its modules, use
SuperBuild.
If you want to use a standalone binary package, a lot of dependencies are already supplied in it. In this case, it is advised to use all of the dependencies from that package. Mixing system libraries with libraries from the OTB package may not be safe. When you call the otbenv script in the package, it will add an environment variable CMAKE_PREFIX_PATH, pointing to the root of the OTB package. This variable is used by CMake as a hint to detect the dependencies location.
2.1.2 SuperBuild

The SuperBuild is a way of compiling the dependencies of a project just before you build the project. Thanks to CMake and its ExternalProject module, it is possible to download a source archive, then configure and compile it when building the main project. This feature has been used in other CMake-based
projects (ITK, Slicer, ParaView, ...). In OTB, the SuperBuild is implemented with no impact on the library sources: the sources for SuperBuild are located in the 'OTB/SuperBuild' subdirectory. It is made of CMake scripts and source patches that allow compiling all the dependencies necessary for OTB. Once all the dependencies are compiled and installed, the OTB library is built using those
dependencies.
OTB’s compilation is customized by specifying configuration variables. The most important config-
uration variables are shown in table 2.2. The simplest way to provide configuration variables is via
the command line -D option:
$ cd ~/OTB/build
$ cmake -D CMAKE_INSTALL_PREFIX=~/OTB/install ../otb/SuperBuild
By default, SuperBuild builds its own version of each dependency, but individual dependencies can be taken from the system instead: you can use a system library if you want. You must however be very much aware of the dependencies of the libraries you use from the system. For example, if libjpeg is not used from SuperBuild then you should not use zlib from SuperBuild either, because zlib is a dependency of libjpeg. In this case SuperBuild will NOT set USE_SYSTEM_ZLIB=FALSE; you must re-run cmake with -DUSE_SYSTEM_ZLIB=FALSE. This libjpeg-zlib dependency is a simple example. Imagine the case of GDAL, which depends on zlib, libjpeg, libtiff (with BigTIFF support), geotiff, sqlite, curl, geos, libkml and openjpeg. This is one of the reasons we recommend using SuperBuild exclusively.
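Re-running the configuration with the corresponding flag is a one-liner (a sketch, assuming the build layout used above):

```shell
cd ~/OTB/build
# Build zlib within SuperBuild instead of taking it from the system:
cmake -DUSE_SYSTEM_ZLIB=FALSE ../otb/SuperBuild
```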
All dependencies are configured and built in a way that helps us get an efficient build of OTB. For example, we enable geotiff (with proj4 support), openjpeg and geos in the GDAL build (see table 2.2).
SuperBuild downloads dependencies into the DOWNLOAD_LOCATION directory, which will be ~/OTB/build/Downloads in our example. Dependencies can be downloaded manually into this directory before the compilation step. This can be useful if you wish to bypass a proxy, intend to compile OTB without an internet connection, or face other network constraints. You can find an archive with the sources of all our dependencies on the Orfeo ToolBox website (pick the 'SuperBuild-archives' corresponding to the OTB version you want to build):
https://www.orfeo-toolbox.org/packages
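As a sketch, pre-filling the download directory and pointing SuperBuild at it could look like:

```shell
# Create the download cache and copy pre-fetched dependency archives into it
mkdir -p ~/OTB/build/Downloads
# Then configure SuperBuild to use that directory:
cd ~/OTB/build
cmake -D CMAKE_INSTALL_PREFIX=~/OTB/install \
      -D DOWNLOAD_LOCATION=~/OTB/build/Downloads \
      ../otb/SuperBuild
```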
Qt library: unlike the other dependencies, building Qt5 on all platforms is not a trivial task, but the OTB SuperBuild does its best to facilitate this for the user. There is still some additional package installation to do as a prerequisite for SuperBuild. On GNU/Linux you must have the Qt X11 dependencies installed. See the Qt 5 documentation for the list of packages that need to be installed before starting the SuperBuild: https://doc.qt.io/qt-5/linux-requirements.html. For a Debian 8.1 system, all Qt5 dependencies can be installed with the following command:
apt-get install libx11-dev libxext-dev libxt-dev libxi-dev libxrandr-dev libgl-dev libglu-dev libxinerama-dev libxcursor-dev
You can also deactivate Qt5 and skip this step by passing -DOTB_USE_QT=OFF to cmake, but this will install OTB without Monteverdi, Mapla and the GUI application launchers.
For macOS you need to install Xcode, and Windows 7, 8.1 and 10 require MSVC 2015 or higher.
You are now ready to compile OTB! Simply use the make command (other build tools can be
selected with CMake's -G generator option):
$ cd ~/OTB/build
$ make
Applications will be located in the bin/ directory of the CMAKE_INSTALL_PREFIX directory, which
in our case is ~/OTB/install/bin/. For example:
~/OTB/install/bin/otbcli_ExtractROI
will launch the command line version of the ExtractROI application, while:
~/OTB/install/bin/otbgui_ExtractROI
will launch the graphical user interface version. You may also want to make the applications and
libraries available from your environment:
export PATH=$PATH:~/OTB/install/bin
export LD_LIBRARY_PATH=~/OTB/install/lib:$LD_LIBRARY_PATH
Monteverdi is integrated as an OTB module since release 5.8 and it is compiled by the SuperBuild
(provided that GLEW, GLUT, OPENGL, Qt and QWT modules are activated).
To use OTB applications from within Monteverdi you will need to define the
OTB_APPLICATION_PATH environment variable:
export OTB_APPLICATION_PATH=~/OTB/install/lib/otb/applications
monteverdi
A wiki page detailing the status of SuperBuild on various platforms is also available here:
http://wiki.orfeo-toolbox.org/index.php/SuperBuild.
Once all OTB dependencies are available on your system, use CMake to generate a Makefile:
$ cd ~/OTB/build
$ cmake -C configuration.cmake ../otb
The script configuration.cmake needs to contain the dependencies' locations if CMake cannot find
them automatically. This can be done with the XXX_DIR variables containing the directories which
contain the FindXXX.cmake scripts, or with the XXX_INCLUDEDIR and XXX_LIBRARY variables.
Additionally, decide which modules you wish to enable, together with tests and examples. Refer to
table 2.2 for the list of CMake variables.
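A minimal configuration.cmake might look like the following sketch (the paths and variable values here are illustrative assumptions, not required settings):

```cmake
# Hypothetical example: point CMake at a GDAL installed in a custom prefix
set(GDAL_INCLUDE_DIR "/opt/gdal/include" CACHE PATH "GDAL headers")
set(GDAL_LIBRARY "/opt/gdal/lib/libgdal.so" CACHE FILEPATH "GDAL library")
# Build options
set(BUILD_TESTING OFF CACHE BOOL "Do not build the tests")
set(BUILD_EXAMPLES ON CACHE BOOL "Build the examples")
```

The file is then passed to CMake with the -C option as shown above.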
Since OTB is modularized, it is possible to build only some modules instead of the whole set. To
deactivate a module (and the ones that depend on it), switch off the CMake variable
OTB_BUILD_DEFAULT_MODULES, configure, and then switch off each Module_module_name variable.
Under the hood, the COMPONENTS option of the CMake command find_package is used to load
only the requested modules. This module-specific list prevents CMake from performing a blind
search; it is also a convenient way of monitoring the dependencies of each module.
Some of the OTB capabilities are considered optional, and you can deactivate the related modules
thanks to a set of CMake variables starting with OTB_USE_XXX. Table 2.3 shows which modules are
associated with these variables. It is very important to note that these variables override the variable
OTB_BUILD_DEFAULT_MODULES.
You are now ready to compile OTB! Simply use the make command (other build tools can be
selected with CMake's -G generator option):
$ make
The installation target will copy the binaries and libraries to the installation location:
$ make install
2.2 Windows
Everything that is needed for OTB development on Windows, including compiling from source, is
covered in detail on the OTB wiki at:
http://wiki.orfeo-toolbox.org/index.php/OTB_development_on_Windows
THREE
SYSTEM OVERVIEW
The purpose of this chapter is to provide you with an overview of the ORFEO Toolbox system. We
recommend that you read this chapter to gain an appreciation for the breadth and area of application
of OTB. In this chapter, we will make reference either to OTB features or ITK features without
distinction. Bear in mind that OTB uses ITK as its core element, so all the fundamental elements
of OTB come from ITK. OTB extends the functionalities of ITK for the remote sensing image
processing community. We benefit from the Open Source development approach chosen for ITK,
which allows us to provide an impressive set of functionalities with much less effort than would
have been the case in a closed source universe!
Essential System Concepts. Like any software system, OTB is built around some core design con-
cepts. OTB uses those of ITK. Some of the more important concepts include generic program-
ming, smart pointers for memory management, object factories for adaptable object instanti-
ation, event management using the command/observer design paradigm, and multithreading
support.
Numerics. OTB, like ITK, uses VXL's VNL numerics libraries. These are easy-to-use C++ wrappers
around the Netlib Fortran numerical analysis routines (http://www.netlib.org).
Data Representation and Access. Two principal classes are used to represent data: the
otb::Image and itk::Mesh classes. In addition, various types of iterators and containers
are used in ITK to hold and traverse the data. Other important but less popular classes are also
used to represent data such as histograms.
ITK’s Data Processing Pipeline. The data representation classes (known as data objects) are op-
erated on by filters that in turn may be organized into data flow pipelines. These pipelines
maintain state and therefore execute only when necessary. They also support multi-threading,
and are streaming capable (i.e., can operate on pieces of data to minimize the memory foot-
print).
IO Framework. Associated with the data processing pipeline are sources, filters that initiate the
pipeline, and mappers, filters that terminate the pipeline. The standard examples of sources
and mappers are readers and writers respectively. Readers input data (typically from a file),
and writers output data from the pipeline. Viewers are another example of mappers.
Spatial Objects. Geometric shapes are represented in OTB using the ITK spatial object hierarchy.
These classes are intended to support modeling of anatomical structures in ITK. OTB uses
them in order to model cartographic elements. Using a common basic interface, the spatial
objects are capable of representing regions of space in a variety of different ways. For ex-
ample: mesh structures, image masks, and implicit equations may be used as the underlying
representation scheme. Spatial objects are a natural data structure for communicating the re-
sults of segmentation methods and for introducing geometrical priors in both segmentation
and registration methods.
ITK’s Registration Framework. A flexible framework for registration supports four different
types of registration: image registration, multiresolution registration, PDE-based registration,
and FEM (finite element method) registration.
FEM Framework. ITK includes a subsystem for solving general FEM problems, in particular non-
rigid registration. The FEM package includes mesh definition (nodes and elements), loads,
and boundary conditions.
Level Set Framework. The level set framework is a set of classes for creating filters to solve partial
differential equations on images using an iterative, finite difference update scheme. The level
set framework consists of finite difference solvers including a sparse level set solver, a generic
level set segmentation filter, and several specific subclasses including threshold, Canny, and
Laplacian based methods.
Wrapping. ITK uses a unique, powerful system for producing interfaces (i.e., “wrappers”) to inter-
preted languages such as Tcl and Python. The GCC-XML tool is used to produce an XML
description of arbitrarily complex C++ code; CSWIG is then used to transform the XML
description into wrappers using the SWIG package. OTB does not use this system at present.
This section describes some of the core concepts and implementation features found in ITK and
therefore also in OTB.
In ITK and OTB classes are defined by a maximum of two files: a header .h file and an implemen-
tation file—.cxx if a non-templated class, and a .txx if a templated class. The header files contain
class declarations and formatted comments that are used by the Doxygen documentation system to
automatically produce HTML manual pages.
In addition to class headers, there are a few other important ITK header files.
itkMacro.h defines standard system-wide macros (such as Set/Get, constants, and other pa-
rameters).
itkNumericTraits.h defines numeric characteristics for native types, such as their maximum
and minimum possible values.
Most classes in OTB are instantiated through an object factory mechanism. That is, rather than using
the standard C++ class constructor and destructor, instances of an OTB class are created with the
static class New() method. In fact, the constructor and destructor are protected: so it is generally
not possible to construct an OTB instance on the stack. (Note: this behavior pertains to classes
that are derived from itk::LightObject . In some cases the need for speed or reduced memory
footprint dictates that a class not be derived from LightObject and in this case instances may be
created on the stack. An example of such a class is itk::EventObject .)
The object factory enables users to control run-time instantiation of classes by registering one or
more factories with itk::ObjectFactoryBase . These registered factories support the method
CreateInstance(classname) which takes as input the name of a class to create. The factory can
choose to create the class based on a number of factors including the computer system configura-
tion and environment variables. For example, in a particular application an OTB user may wish to
deploy their own class implemented using specialized image processing hardware (i.e., to realize a
performance gain). By using the object factory mechanism, it is possible at run-time to replace the
creation of a particular OTB filter with such a custom class. (Of course, the class must provide the
exact same API as the one it is replacing.) To do this, the user compiles their class (using the same
compiler, build options, etc.) and inserts the object code into a shared library or DLL. The library is
then placed in a directory referred to by the OTB_AUTOLOAD_PATH environment variable. On
instantiation, the object factory will locate the library, determine that it can create a class of a particular
name with the factory, and use the factory to create the instance. (Note: if the CreateInstance()
method cannot find a factory that can create the named class, then the instantiation of the class falls
back to the usual constructor.)
In practice object factories are used mainly (and generally transparently) by the OTB input/output
(IO) classes. For most users the greatest impact is on the use of the New() method to create a class.
Generally the New() method is declared and implemented via the macro itkNewMacro() found in
Modules/Core/Common/include/itkMacro.h.
By their nature object-oriented systems represent and operate on data through a variety of object
types, or classes. When a particular class is instantiated to produce an instance of that class, mem-
ory allocation occurs so that the instance can store data attribute values and method pointers (i.e.,
the vtable). This object may then be referenced by other classes or data structures during normal
operation of the program. Typically during program execution all references to the instance may
disappear at which point the instance must be deleted to recover memory resources. Knowing when
to delete an instance, however, is difficult. Deleting the instance too soon results in program crashes;
deleting it too late results in memory leaks (or excessive memory consumption). This process
of allocating and releasing memory is known as memory management.
In ITK, memory management is implemented through reference counting. This compares to another
popular approach, garbage collection, used by many systems including Java. In reference counting,
a count of the number of references to each instance is kept; when the count drops to zero, the
instance destroys itself. A typical use of a SmartPointer within a function scope:
MyRegistrationFunction()
{ // <----- Start of scope
  // an interpolator is created and assigned to the "interp" SmartPointer
  InterpolatorType::Pointer interp = InterpolatorType::New();
} // <------ End of scope
In this example, reference counted objects are created (with the New() method) with a reference
count of one. Assignment to the SmartPointer interp does not change the reference count. At the
end of scope, interp is destroyed, the reference count of the actual interpolator object (referred to
by interp) is decremented, and if it reaches zero, then the interpolator is also destroyed.
Note that in ITK SmartPointers are always used to refer to instances of classes derived from
itk::LightObject . Method invocations and function calls often return “real” pointers to in-
stances, but they are immediately assigned to a SmartPointer. Raw pointers are used for non-
LightObject classes when the need for speed and/or memory demands a smaller, faster class.
In general, OTB uses exception handling to manage errors during program execution. Exception
handling is a standard part of the C++ language and generally takes the form as illustrated below:
26 Chapter 3. System Overview
try
{
//...try executing some code here...
}
catch ( itk::ExceptionObject & exp )
{
//...if an exception is thrown catch it here
}
where a particular class may throw an exception as demonstrated below (this code snippet is taken
from itk::ByteSwapper ):
switch ( sizeof(T) )
{
//non-error cases go here followed by error case
default:
ByteSwapperError e(__FILE__, __LINE__);
e.SetLocation("SwapBE");
e.SetDescription("Cannot swap number of bytes requested");
throw e;
}
Event handling in OTB is implemented using the Subject/Observer design pattern (sometimes re-
ferred to as the Command/Observer design pattern). In this approach, objects indicate that they are
watching for a particular event—invoked by a particular instance—by registering with the instance
that they are watching. For example, filters in OTB periodically invoke the itk::ProgressEvent
. Objects that have registered their interest in this event are notified when the event occurs. The noti-
fication occurs via an invocation of a command (i.e., function callback, method invocation, etc.) that
is specified during the registration process. (Note that events in OTB are subclasses of EventObject;
look in itkEventObject.h to determine which events are available.)
To recap via example: various objects in OTB will invoke specific events as they execute (from
ProcessObject):
this->InvokeEvent( ProgressEvent() );
To watch for such an event, registration is required: a command (e.g., a callback function) is
associated with the event using the Object::AddObserver() method.
When the event occurs, all registered observers are notified via invocation of the associated
Command::Execute() method. Note that several subclasses of Command are available supporting
const and non-const member functions as well as C-style functions. (Look in Common/Command.h
to find pre-defined subclasses of Command. If nothing suitable is found, derivation is another pos-
sibility.)
3.2.7 Multi-Threading
Multithreading is handled in OTB through ITK’s high-level design abstraction. This approach pro-
vides portable multithreading and hides the complexity of differing thread implementations on the
many systems supported by OTB. For example, the class itk::MultiThreader provides sup-
port for multithreaded execution using sproc() on an SGI, or pthread_create() on any platform
supporting POSIX threads.
Multithreading is typically employed by an algorithm during its execution phase. MultiThreader
can be used to execute a single method on multiple threads, or to specify a method per thread. For
example, in the class itk::ImageSource (a superclass for most image processing filters) the
GenerateData() method uses the following methods:
multiThreader->SetNumberOfThreads(int);
multiThreader->SetSingleMethod(ThreadFunctionType, void* data);
multiThreader->SingleMethodExecute();
In this example each thread invokes the same method. The multithreaded filter takes care to divide
the image into different regions that do not overlap for write operations.
The general philosophy in ITK regarding thread safety is that accessing different instances of a class
(and its methods) is a thread-safe operation. Invoking methods on the same instance in different
threads is to be avoided.
3.3 Numerics
OTB, like ITK, uses the VNL numerics library to provide resources for numerical programming
combining the ease of use of packages like Mathematica and Matlab with the speed of C and the
elegance of C++. It provides a C++ interface to the high-quality Fortran routines made available
in the public domain by numerical analysis researchers. ITK extends the functionality of VNL by
including interface classes between VNL and ITK proper.
The VNL numerics library includes classes for
Matrices and vectors. Standard matrix and vector support and operations on these types.
Specialized matrix and vector classes. Several special matrix and vector classes with particular
numerical properties are available. Class vnl_diagonal_matrix provides a fast and convenient
diagonal matrix, while fixed size matrices and vectors allow "fast-as-C" computations (see
vnl_matrix_fixed<T,n,m> and example subclasses vnl_double_3x3 and vnl_double_3).
Matrix decompositions. Classes vnl_svd<T>, vnl_symmetric_eigensystem<T>, and
vnl_generalized_eigensystem.
Real polynomials. Class vnl_real_polynomial stores the coefficients of a real polynomial, and
provides methods for evaluating the polynomial at any x, while class vnl_rpoly_roots
provides a root finder.
Optimization. Classes vnl_levenberg_marquardt, vnl_amoeba, vnl_conjugate_gradient, and
vnl_lbfgs allow optimization of user-supplied functions either with or without user-supplied
derivatives.
Standardized functions and constants. Class vnl_math defines constants (pi, e, eps...) and
simple functions (sqr, abs, rnd...). Class numeric_limits is from the ISO standard
document, and provides a way to access basic limits of a type. For example
numeric_limits<short>::max() returns the maximum value of a short.
Most VNL routines are implemented as wrappers around the high-quality Fortran routines
that have been developed by the numerical analysis community over the last forty years and
placed in the public domain. The central repository for these programs is the ”netlib” server
http://www.netlib.org/. The National Institute of Standards and Technology (NIST) provides
an excellent search interface to this repository in its Guide to Available Mathematical Software
(GAMS) at http://gams.nist.gov, both as a decision tree and a text search.
ITK also provides additional numerics functionality. A suite of optimizers that use VNL under
the hood and integrate with the registration framework is available. A large collection of statistics
functions—not available from VNL—is also provided in the Insight/Numerics/Statistics
directory. In addition, a complete finite element (FEM) package is available, primarily to support
the deformable registration in ITK.
There are two principal types of data represented in OTB: images and meshes. This functionality is
implemented in the classes Image and Mesh, both of which are subclasses of itk::DataObject .
In OTB, data objects are classes that are meant to be passed around the system and may participate
in data flow pipelines (see Section 3.5 on page 30 for more information).
The otb::Image class extends the functionalities of the itk::Image in order to take into account
particular remote sensing features as geographical projections, etc.
The Mesh class represents an n-dimensional, unstructured grid. The topology of the mesh is rep-
resented by a set of cells defined by a type and connectivity list; the connectivity list in turn refers
to points. The geometry of the mesh is defined by the n-dimensional points in combination with
associated cell interpolation functions. Mesh is designed as an adaptive representational structure
that changes depending on the operations performed on it. At a minimum, points and cells are re-
quired in order to represent a mesh; but it is possible to add additional topological information. For
example, links from the points to the cells that use each point can be added; this provides implicit
neighborhood information assuming the implied topology is the desired one. It is also possible to
specify boundary cells explicitly, to indicate different connectivity from the implied neighborhood
relationships, or to store information on the boundaries of cells.
The mesh is defined in terms of three template parameters: 1) a pixel type associated with the
points, cells, and cell boundaries; 2) the dimension of the points (which in turn limits the maximum
dimension of the cells); and 3) a “mesh traits” template parameter that specifies the types of the
containers and identifiers used to access the points, cells, and/or boundaries. By using the mesh
traits carefully, it is possible to create meshes better suited for editing, or those better suited for
“read-only” operations, allowing a trade-off between representation flexibility, memory, and speed.
Mesh is a subclass of itk::PointSet . The PointSet class can be used to represent point clouds
or randomly distributed landmarks, etc. The PointSet class has no associated topology.
While data objects (e.g., images and meshes) are used to represent data, process objects are classes
that operate on data objects and may produce new data objects. Process objects are classed as
sources, filter objects, or mappers. Sources (such as readers) produce data, filter objects take in data
and process it to produce new data, and mappers accept data for output either to a file or some other
system. Sometimes the term filter is used broadly to refer to all three types.
The data processing pipeline ties together data objects (e.g., images and meshes) and process objects.
The pipeline supports an automatic updating mechanism that causes a filter to execute if and only
if its input or its internal state changes. Further, the data pipeline supports streaming, the ability
to automatically break data into smaller pieces, process the pieces one by one, and reassemble the
processed data into a final result.
Typically data objects and process objects are connected together using the SetInput() and
GetOutput() methods as follows:
itk::RandomImageSource<FloatImage2DType>::Pointer random;
random = itk::RandomImageSource<FloatImage2DType>::New();
random->SetMin(0.0);
random->SetMax(1.0);
itk::ShrinkImageFilter<FloatImage2DType,FloatImage2DType>::Pointer shrink;
shrink = itk::ShrinkImageFilter<FloatImage2DType,FloatImage2DType>::New();
shrink->SetInput(random->GetOutput());
shrink->SetShrinkFactors(2);
otb::ImageFileWriter<FloatImage2DType>::Pointer writer;
writer = otb::ImageFileWriter<FloatImage2DType>::New();
writer->SetInput (shrink->GetOutput());
writer->SetFileName( "test.raw" );
writer->Update();
The ITK spatial object framework supports the philosophy that the task of image segmentation and
registration is actually the task of object processing. The image is but one medium for representing
objects of interest, and much processing and data analysis can and should occur at the object level
and not based on the medium used to represent the object.
ITK spatial objects provide a common interface for accessing the physical location and geometric
properties of and the relationship between objects in a scene that is independent of the form used
to represent those objects. That is, the internal representation maintained by a spatial object may
be a list of points internal to an object, the surface mesh of the object, a continuous or parametric
representation of the object’s internal points or surfaces, and so forth.
The capabilities provided by the spatial objects framework support their use in object segmentation,
registration, surface/volume rendering, and other display and analysis functions. The spatial object
framework extends the concept of a “scene graph” that is common to computer rendering packages
so as to support these new functions. With the spatial objects framework you can:
1. Specify a spatial object’s parent and children objects. In this way, a city may contain roads
and those roads can be organized in a tree structure.
2. Query if a physical point is inside an object or (optionally) any of its children.
3. Request the value and derivatives, at a physical point, of an associated intensity function, as
specified by an object or (optionally) its children.
4. Specify the coordinate transformation that maps a parent object’s coordinate system into a
child object’s coordinate system.
5. Compute the bounding box of a spatial object and (optionally) its children.
6. Query the resolution at which the object was originally computed. For example, you can
query the resolution (i.e., pixel spacing) of the image used to generate a particular instance of
a itk::LineSpatialObject .
Currently implemented types of spatial objects include: Blob, Ellipse, Group, Image, Line, Surface,
and Tube. The itk::Scene object is used to hold a list of spatial objects that may in turn have
children. Each spatial object can be assigned a color property. Each spatial object type has its own
capabilities. For example, itk::TubeSpatialObject s indicate to what point on their parent tube
they connect.
There are a limited number of spatial objects and their methods in ITK, but their number is growing
and their potential is huge. Using the nominal spatial object capabilities, methods such as mutual
information registration can be applied to objects regardless of their internal representation. By
having a common API, the same method can be used to register a parametric representation of a
building with an image or to register two different segmentations of a particular object in object-
based change detection.
Part II
Tutorials
CHAPTER
FOUR
BUILDING SIMPLE APPLICATIONS WITH OTB
Well, that’s it, you’ve just downloaded and installed OTB, lured by the promise that you will be able
to do everything with it. That’s true, you will be able to do everything but - there is always a but -
some effort is required.
OTB relies heavily on the power of generic programming: many classes are already available and
powerful tools are provided to help you with recurrent tasks, but it is not an easy world to enter.
These tutorials are designed to help you enter this world and grasp the logic behind OTB. Each
of these tutorials should not take more than 10 minutes (typing included) and each is designed to
highlight a specific point. You may not be concerned by the later tutorials, but it is strongly advised
to go through the first few, which cover the basics you'll use almost everywhere.
Let’s start by the typical Hello world program. We are going to compile this C++ program linking
to your new OTB.
First, create a new folder to put your new programs in (all the examples from this tutorial), and go
into this folder.
Since all programs using OTB are handled using the CMake system, we need to create a
CMakeLists.txt that will be used by CMake to compile our program. An example of this file
can be found in the OTB/Examples/Tutorials directory. The CMakeLists.txt will be very simi-
lar between your projects.
Open the CMakeLists.txt file and write the following lines in it:
PROJECT(Tutorials)
cmake_minimum_required(VERSION 2.6)
FIND_PACKAGE(OTB)
IF(OTB_FOUND)
INCLUDE(${OTB_USE_FILE})
ELSE(OTB_FOUND)
MESSAGE(FATAL_ERROR
"Cannot build OTB project without OTB. Please set OTB_DIR.")
ENDIF(OTB_FOUND)
ADD_EXECUTABLE(HelloWorldOTB HelloWorldOTB.cxx )
TARGET_LINK_LIBRARIES(HelloWorldOTB ${OTB_LIBRARIES})
The first line defines the name of your project as it appears in Visual Studio (it will have no effect
under UNIX or Linux). The second line loads a CMake file with a predefined strategy for finding
OTB 1 . If the strategy for finding OTB fails, CMake will prompt you for the directory where OTB
is installed in your system. In that case you will write this information in the OTB DIR variable.
The line INCLUDE(${OTB_USE_FILE}) loads the UseOTB.cmake file which sets all the configuration
information from OTB.
The line ADD_EXECUTABLE defines as its first argument the name of the executable that will be
produced as result of this project. The remaining arguments of ADD_EXECUTABLE are the names
of the source files to be compiled and linked. Finally, the TARGET_LINK_LIBRARIES line specifies
which OTB libraries will be linked against this project.
The source code for this example can be found in the file
Examples/Tutorials/HelloWorldOTB.cxx.
The following code is an implementation of a small OTB program. It tests including header files
and linking with OTB libraries.
#include "otbImage.h"
#include <iostream>

int main()
{
  typedef otb::Image<unsigned short, 2> ImageType;
  ImageType::Pointer image = ImageType::New();
  std::cout << "OTB Hello World !" << std::endl;
  return EXIT_SUCCESS;
}
1 Similar files are provided in CMake for other commonly used libraries, all of them named Find*.cmake
This code instantiates an image whose pixels are represented with type unsigned short. The
image is then created and assigned to a itk::SmartPointer . Later in the text we will discuss
SmartPointers in detail, for now think of it as a handle on an instance of an object (see section
3.2.4 for more information).
Once the file is written, run ccmake on the current directory (that is, ccmake ./ under Linux/Unix).
If OTB is in a non-standard place, you will have to tell CMake where it is. Once you're done with
CMake (you shouldn't have to do it again) run make.
You finally have your program. When you run it, it will print OTB Hello World !.
Ok, well done! You’ve just compiled and executed your first OTB program. Actually, using OTB
for that is not very useful, and we doubt that you downloaded OTB only to do that. It’s time to move
on to a more advanced level.
4.1.2 Windows
Create a directory (with write access) where to store your work (for example at
C:\path\to\MyFirstCode). Organize your directory as follows:
• MyFirstCode\src
• MyFirstCode\build
1. Create a CMakeLists.txt file in the src directory with the following lines:
project(MyFirstProcessing)
cmake_minimum_required(VERSION 2.8)
find_package(OTB REQUIRED)
include(${OTB_USE_FILE})
add_executable(MyFirstProcessing MyFirstProcessing.cxx )
target_link_libraries(MyFirstProcessing ${OTB_LIBRARIES} )
2. Create a MyFirstProcessing.cxx file in the src directory with the following lines:
#include "otbImage.h"
#include "otbVectorImage.h"
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "otbMultiToMonoChannelExtractROI.h"

int main(int argc, char* argv[])
{
  typedef otb::ImageFileReader<otb::VectorImage<unsigned short> > ReaderType;
  typedef otb::MultiToMonoChannelExtractROI<unsigned short, unsigned short> FilterType;
  typedef otb::ImageFileWriter<otb::Image<unsigned short> > WriterType;
  ReaderType::Pointer reader = ReaderType::New();
  FilterType::Pointer filter = FilterType::New();
  WriterType::Pointer writer = WriterType::New();
  reader->SetFileName(argv[1]);
  filter->SetInput(reader->GetOutput());
  writer->SetFileName(argv[2]);
  writer->SetInput(filter->GetOutput());
  writer->Update();
  return EXIT_SUCCESS;
}
3. Create a file named BuildMyFirstProcessing.bat in the MyFirstCode directory with the
following lines:
@echo off
set /A ARGS_COUNT=0
for %%A in (%*) do set /A ARGS_COUNT+=1
if %ARGS_COUNT% NEQ 3 (goto :Usage)
if NOT DEFINED OSGEO4W_ROOT (goto :NoOSGEO4W)
set src_dir=%1
set build_dir=%2
set otb_install_dir=%3
set current_dir=%CD%
cd %build_dir%
cmake %src_dir% ^
-DCMAKE_INCLUDE_PATH:PATH="%OSGEO4W_ROOT%\include" ^
-DCMAKE_LIBRARY_PATH:PATH="%OSGEO4W_ROOT%\lib" ^
-DOTB_DIR:PATH=%otb_install_dir% ^
-DCMAKE_CONFIGURATION_TYPES:STRING=Release
cd %current_dir%
goto :END
:Usage
echo You need to provide 3 arguments to the script:
echo 1. path to the source directory
echo 2. path to the build directory
echo 3. path to the installation directory
GOTO :END
:NoOSGEO4W
echo You need to run this script from an OSGeo4W shell
GOTO :END
:END
4. In an OSGeo4W shell, run BuildMyFirstProcessing.bat with the right arguments: the full path
to your src directory, the full path to your build directory, and the full path to the place where
the OTBConfig.cmake file is found (should be C:\path\to\MyOTBDir\install\lib\otb).
5. In the OSGeo4W shell, open MyFirstProcessing.sln.
6. Build the solution.
7. In the OSGeo4W shell, go to the bin\Release directory and run MyFirstProcessing.exe. You
can try it for example with the otb_logo.tif file which can be found in the OTB sources.
4.2 Pipeline basics: read and write

OTB is designed to read images, process them and write them to disk or view the result. In this
tutorial, we are going to see how to read and write images and the basics of the pipeline system.
First, let’s add the following lines at the end of the CMakeLists.txt file:
ADD_EXECUTABLE(Pipeline Pipeline.cxx )
TARGET_LINK_LIBRARIES(Pipeline ${OTB_LIBRARIES})
We declare the image as an otb::Image : the pixel type is declared as unsigned char (one byte)
and the image is specified as having two dimensions.
typedef otb::Image<unsigned char, 2> ImageType;
To read the image, we need an otb::ImageFileReader which is templated with the image type.
typedef otb::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
The filenames are passed as arguments to the program. We keep it simple for now and we don’t
check their validity.
reader->SetFileName(argv[1]);
writer->SetFileName(argv[2]);
Now that we have all the elements, we connect the pipeline, plugging the output of the reader into
the input of the writer.
writer->SetInput(reader->GetOutput());
And finally, we trigger the pipeline execution by calling the Update() method on the last element
of the pipeline. The last element will make sure to update all previous elements in the pipeline.
writer->Update();
return EXIT_SUCCESS;
}
Once this file is written, you just have to run make. The ccmake call is not required anymore.
Get one image from the OTB-Data/Examples directory of the OTB-Data repository.
You can get it by cloning the OTB data repository (git clone
https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb-data.git), but that might be
quite long as this also gets the data needed to run the tests. Alternatively, you can get it from
http://www.orfeo-toolbox.org/packages/OTB-Data-Examples.tgz. Take for example
QB_Suburb.png.
Now, run your new program as Pipeline QB_Suburb.png output.png. You obtain the file
output.png, which is the same image as QB_Suburb.png. When you triggered the Update()
method, OTB opened the original image and wrote it back under another name.
Well... that's nice, but a bit complicated for a copy program!
Wait a minute! We didn't specify the file format anywhere! Let's try Pipeline QB_Suburb.png
output.jpg. And voila! The output image is a JPEG file.
That's starting to be a bit more interesting: this is not just a program to copy image files, but also
one to convert between image formats.
You have just experienced two OTB features: the pipeline structure, which executes the filters only
when needed, and automatic image format detection.
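The demand-driven mechanism you just observed can be sketched outside OTB. The classes below (Stage, Reader, Writer) are illustrative toys, not OTB's API; they only mimic the way Update() on the last element pulls execution through upstream elements, and only when something is out of date.

```cpp
#include <cassert>
#include <string>

// Minimal demand-driven pipeline sketch (illustrative, not OTB's API).
// Each stage re-executes on update() only if it has never run or was
// explicitly marked as modified.
struct Stage {
    Stage* input = nullptr;   // upstream stage, if any
    bool upToDate = false;    // has this stage produced valid output?
    int executions = 0;       // how many times execute() actually ran
    std::string data;         // the "image" flowing through the pipeline

    virtual ~Stage() = default;
    virtual void execute() = 0;

    void update() {
        if (input) input->update();   // pull upstream first
        if (!upToDate) { execute(); ++executions; upToDate = true; }
    }
    void modified() { upToDate = false; }  // force re-execution
};

struct Reader : Stage {
    std::string fileName;
    void execute() override { data = "pixels(" + fileName + ")"; }
};

struct Writer : Stage {
    void execute() override { data = input->data; }  // pass-through "write"
};
```

A real pipeline also propagates modification times downstream, so that modifying the reader would force the writer to re-execute as well; this sketch only shows the pull-on-demand side.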
Now it's time to do some processing in between.

4.3 Filtering pipeline

We are now going to insert a simple filter to do some processing between the reader and the writer.
Let's first add the two following lines to the CMakeLists.txt file:
ADD_EXECUTABLE(FilteringPipeline FilteringPipeline.cxx )
TARGET_LINK_LIBRARIES(FilteringPipeline ${OTB_LIBRARIES})
The source code for this example can be found in the file
Examples/Tutorials/FilteringPipeline.cxx.
We declare the image type, the reader and the writer as before:
typedef otb::Image<unsigned char, 2> ImageType;
reader->SetFileName(argv[1]);
writer->SetFileName(argv[2]);
Now we have to declare the filter. It is templated with the input image type and the output image
type like many filters in OTB. Here we are using the same type for the input and the output images:
typedef itk::GradientMagnitudeImageFilter
<ImageType, ImageType> FilterType;
FilterType::Pointer filter = FilterType::New();
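What the gradient magnitude filter computes at each pixel can be sketched with central differences. This is a simplified, illustrative version; the ITK filter also handles image boundaries and pixel spacing.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Gradient magnitude with central differences, for an interior pixel.
// Simplified sketch of what itk::GradientMagnitudeImageFilter computes.
double gradientMagnitude(const std::vector<std::vector<double>>& img,
                         int x, int y) {
    const double gx = (img[y][x + 1] - img[y][x - 1]) / 2.0;  // d/dx
    const double gy = (img[y + 1][x] - img[y - 1][x]) / 2.0;  // d/dy
    return std::sqrt(gx * gx + gy * gy);
}
```

On a flat area both derivatives vanish, while a step edge yields a large response, which is why this filter highlights contours.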
And finally, we trigger the pipeline execution by calling the Update() method on the writer:
writer->Update();
return EXIT_SUCCESS;
}
4.4 Handling types: scaling output

If you tried some other filter in the previous example, you may have noticed that in some cases,
it does not make sense to save the output directly as an integer. This is the case for the
itk::CannyEdgeDetectionImageFilter : if you used it directly in the previous example,
you got warnings about converting from double to unsigned char.
The output of the Canny edge detection is a floating point number. A simple solution would be to
use double as the pixel type. Unfortunately, most image formats use integer types, and you should
convert the result to an integer image if you still want to visualize your images with your usual
viewer (we will see later in a tutorial how you can avoid that using the built-in viewer).
To realize this conversion, we will use the itk::RescaleIntensityImageFilter .
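What the rescaler does per pixel is a plain linear mapping from the input range to the requested output range. A minimal sketch follows; the function name is ours, and the real filter additionally guards against a constant input image (where inMax == inMin).

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Linear intensity rescaling: map [inMin, inMax] onto [outMin, outMax].
// Sketch of itk::RescaleIntensityImageFilter's per-pixel mapping.
std::vector<double> rescaleIntensity(const std::vector<double>& in,
                                     double outMin, double outMax) {
    const double inMin = *std::min_element(in.begin(), in.end());
    const double inMax = *std::max_element(in.begin(), in.end());
    const double scale = (outMax - outMin) / (inMax - inMin);
    std::vector<double> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = outMin + (in[i] - inMin) * scale;
    return out;
}
```

Setting the output range to [0, 255] is exactly what makes a floating-point Canny result storable in an 8-bit image.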
Add these two lines to the CMakeLists.txt file:
ADD_EXECUTABLE(ScalingPipeline ScalingPipeline.cxx )
TARGET_LINK_LIBRARIES(ScalingPipeline ${OTB_LIBRARIES})
The source code for this example can be found in the file
Examples/Tutorials/ScalingPipeline.cxx.
This example illustrates the use of the itk::RescaleIntensityImageFilter to convert the
result for proper display.
We include the required headers, including those for the
itk::CannyEdgeDetectionImageFilter and the itk::RescaleIntensityImageFilter .
#include "otbImage.h"
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "itkUnaryFunctorImageFilter.h"
#include "itkCannyEdgeDetectionImageFilter.h"
#include "itkRescaleIntensityImageFilter.h"
We need to declare two different image types, one for the internal processing and one to output the
results:
typedef double PixelType;
typedef otb::Image<PixelType, 2> ImageType;
typedef unsigned char OutputPixelType;
typedef otb::Image<OutputPixelType, 2> OutputImageType;
We declare the reader with the image template using the pixel type double. It is worth noting that
this instantiation does not imply anything about the type of the input image. The original image can
be anything; the reader will just convert the result to double.
The writer is templated with the unsigned char image to be able to save the result on one byte images
(like png for example).
typedef otb::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName(argv[1]);
writer->SetFileName(argv[2]);
Now we are declaring the edge detection filter which is going to work with double input and output.
typedef itk::CannyEdgeDetectionImageFilter
<ImageType, ImageType> FilterType;
FilterType::Pointer filter = FilterType::New();
typedef itk::RescaleIntensityImageFilter
<ImageType, OutputImageType> RescalerType;
RescalerType::Pointer rescaler = RescalerType::New();
rescaler->SetOutputMinimum(0);
rescaler->SetOutputMaximum(255);
filter->SetInput(reader->GetOutput());
rescaler->SetInput(filter->GetOutput());
writer->SetInput(rescaler->GetOutput());
And finally, we trigger the pipeline execution by calling the Update() method on the writer:
writer->Update();
return EXIT_SUCCESS;
}
As you should be getting used to it by now, compile with make and execute as ScalingPipeline
QB_Suburb.png output.png.
You have the filtered version of your image in the output.png file.
4.5 Working with multispectral or color images

So far, as you may have noticed, we have been working with grey level images, i.e. with only one
spectral band. If you tried to process a color image with some of the previous examples, you have
probably obtained a disappointing grey result.
Often, satellite images combine several spectral bands to help the identification of materials: this is
called multispectral imagery. In this tutorial, we are going to explore some of the mechanisms used
by OTB to process multispectral images.
Add the following lines in the CMakeLists.txt file:
ADD_EXECUTABLE(Multispectral Multispectral.cxx )
TARGET_LINK_LIBRARIES(Multispectral ${OTB_LIBRARIES})
The source code for this example can be found in the file
Examples/Tutorials/Multispectral.cxx.
First, we are going to use otb::VectorImage instead of the now traditional otb::Image , so
we include the required header:
#include "otbVectorImage.h"
We also include some other headers that will be useful later. Note that we are still using the
otb::Image in this example for some of the outputs.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "otbMultiToMonoChannelExtractROI.h"
#include "itkShiftScaleImageFilter.h"
#include "otbPerBandVectorImageFilter.h"
We want to read a multispectral image so we declare the image type and the reader. As we have
done in the previous example we get the filename from the command line.
reader->SetFileName(argv[1]);
Sometimes, you need to process only one spectral band of the image. To get only one of the spectral
bands, we use the otb::MultiToMonoChannelExtractROI . The declaration is as usual:
typedef otb::MultiToMonoChannelExtractROI<PixelType, PixelType>
ExtractChannelType;
ExtractChannelType::Pointer extractChannel = ExtractChannelType::New();
We need to pass the parameters to the filter for the extraction. This filter also allows extracting only
a spatial subset of the image; however, we will extract the whole channel in this case.
To do that, we need to pass the desired region using the SetExtractionRegion() method
(methods such as SetStartX() and SetSizeX() are also available). We get the region from the reader
with the GetLargestPossibleRegion() method. Before doing that, we need to read the metadata
from the file: this is done by calling UpdateOutputInformation() on the reader's output. The
difference with Update() is that the pixel array is not allocated (yet!), which reduces the memory
usage.
reader->UpdateOutputInformation();
extractChannel->SetExtractionRegion(
reader->GetOutput()->GetLargestPossibleRegion());
We choose the channel number to extract (starting from 1) and we plug the pipeline.
extractChannel->SetChannel(3);
extractChannel->SetInput(reader->GetOutput());
writer->SetFileName(argv[2]);
writer->SetInput(extractChannel->GetOutput());
writer->Update();
After this, we have a one band image that we can process with most OTB filters.
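The extraction itself boils down to picking one component per pixel. For a pixel-interleaved buffer, the usual in-memory layout, it can be sketched as follows (the function and the layout are illustrative; channel numbering starts at 1, as in the OTB filter):

```cpp
#include <cassert>
#include <vector>

// Extract one channel from a pixel-interleaved multispectral buffer.
// Channel numbering starts at 1, as in otb::MultiToMonoChannelExtractROI.
std::vector<int> extractChannel(const std::vector<int>& interleaved,
                                unsigned int nbBands, unsigned int channel) {
    std::vector<int> mono;
    // One component per pixel: start at the requested band, hop by nbBands.
    for (std::size_t i = channel - 1; i < interleaved.size(); i += nbBands)
        mono.push_back(interleaved[i]);
    return mono;
}
```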
In some situations, you may want to apply the same processing to all bands of the image. You don't
have to extract each band and process it separately. There are several situations:
• if the filter (or the combination of filters) you want to use performs operations that are well
defined for itk::VariableLengthVector (which is the pixel type), then you don't have to
do anything special;
• if this is not working, you can look for an equivalent filter specially designed for vector
images;
• if the filter you need applies operations that are undefined for
itk::VariableLengthVector , then you can use the otb::PerBandVectorImageFilter ,
specially designed for this purpose.
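The mechanics of that last bullet can be sketched as applying one scalar operation band by band over the interleaved buffer. The shift-scale functor below is a stand-in for the wrapped filter; names and layout are ours, not OTB's.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Apply a scalar per-pixel operation to every band of a pixel-interleaved
// image, mimicking what otb::PerBandVectorImageFilter does with a whole
// scalar filter per band.
std::vector<double> perBand(const std::vector<double>& interleaved,
                            unsigned int nbBands,
                            const std::function<double(double)>& op) {
    std::vector<double> out(interleaved.size());
    for (unsigned int b = 0; b < nbBands; ++b)       // process band by band
        for (std::size_t i = b; i < interleaved.size(); i += nbBands)
            out[i] = op(interleaved[i]);
    return out;
}
```

Following ITK's convention for itk::ShiftScaleImageFilter, the functor below shifts first and scales second.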
Let's see how this filter works. We choose to apply the itk::ShiftScaleImageFilter to
each of the spectral bands. We start by declaring the filter on a normal otb::Image . Note that we
don't need to specify any input for this filter.
typedef itk::ShiftScaleImageFilter<ImageType, ImageType> ShiftScaleType;
ShiftScaleType::Pointer shiftScale = ShiftScaleType::New();
shiftScale->SetScale(0.5);
shiftScale->SetShift(10);
We declare the otb::PerBandVectorImageFilter which has three template parameters: the
input image type, the output image type and the type of the filter to apply to each band.
The filter is selected using the SetFilter() method and the input by the usual SetInput() method.
typedef otb::PerBandVectorImageFilter
<VectorImageType, VectorImageType, ShiftScaleType> VectorFilterType;
VectorFilterType::Pointer vectorFilter = VectorFilterType::New();
vectorFilter->SetFilter(shiftScale);
vectorFilter->SetInput(reader->GetOutput());
Now, we just have to save the image using a writer templated over an otb::VectorImage :
typedef otb::ImageFileWriter<VectorImageType> VectorWriterType;
VectorWriterType::Pointer writerVector = VectorWriterType::New();
writerVector->SetFileName(argv[3]);
writerVector->SetInput(vectorFilter->GetOutput());
writerVector->Update();
return EXIT_SUCCESS;
}
4.6 Parsing command line arguments

Well, if you played with some other filters in the previous example, you probably noticed that in
many cases, you need to set some parameters of the filters. Ideally, you want to set some of these
parameters from the command line.
In OTB, there is a mechanism to help you parse the command line parameters. Let's try it!
Add the following lines in the CMakeLists.txt file:
ADD_EXECUTABLE(SmarterFilteringPipeline SmarterFilteringPipeline.cxx )
TARGET_LINK_LIBRARIES(SmarterFilteringPipeline ${OTB_LIBRARIES})
The source code for this example can be found in the file
Examples/Tutorials/SmarterFilteringPipeline.cxx.
We are going to use the otb::HarrisImageFilter to find the points of interest in one image.
The derivative computation is performed by a convolution with the derivative of a Gaussian kernel
of variance σD (derivation scale) and the smoothing of the image is performed by convolving with
a Gaussian kernel of variance σI (integration scale). This allows the computation of the following
matrix:

μ(x, σI, σD) = σD² g(σI) ⋆ [ Lx²(x, σD)    LxLy(x, σD)
                             LxLy(x, σD)   Ly²(x, σD) ]

where Lx and Ly denote the image derivatives along x and y, and g(σI) is the Gaussian smoothing
kernel. The interest measure is then Det(μ) - α Trace²(μ), where α is the Alpha parameter used
below.
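The construction behind the Harris matrix can be sketched in plain C++: accumulate the products of image derivatives over a window (a 3x3 box below, standing in as an intentional simplification for the Gaussian integration of scale σI) and form the response det(M) - α trace(M)². This is an illustrative sketch, not otb::HarrisImageFilter's implementation.

```cpp
#include <cassert>
#include <vector>

typedef std::vector<std::vector<double>> Grid;

// Harris corner response at (x, y): sum derivative products over a 3x3 box
// window (toy replacement for the Gaussian of scale sigmaI) and return
// det(M) - alpha * trace(M)^2.
double harrisResponse(const Grid& img, int x, int y, double alpha) {
    double sxx = 0.0, syy = 0.0, sxy = 0.0;
    for (int j = y - 1; j <= y + 1; ++j)
        for (int i = x - 1; i <= x + 1; ++i) {
            const double ix = (img[j][i + 1] - img[j][i - 1]) / 2.0;
            const double iy = (img[j + 1][i] - img[j - 1][i]) / 2.0;
            sxx += ix * ix; syy += iy * iy; sxy += ix * iy;
        }
    const double det = sxx * syy - sxy * sxy;
    const double trace = sxx + syy;
    return det - alpha * trace * trace;
}
```

Corners, where gradients of both orientations coexist in the window, give a positive response; straight edges give a negative one, which is what makes the measure selective.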
#include "itkMacro.h"
#include "otbCommandLineArgumentParser.h"
The first one is to handle the exceptions, the second one to help us parse the command line.
We include the other required headers, without forgetting to add the header for the
otb::HarrisImageFilter . Then we start the usual main function.
#include "otbImage.h"
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "itkUnaryFunctorImageFilter.h"
#include "itkRescaleIntensityImageFilter.h"
#include "otbHarrisImageFilter.h"
To handle the exceptions properly, we need to put all the instructions inside a try block.
try
{
Now, we can declare the otb::CommandLineArgumentParser which is going to parse the com-
mand line, select the proper variables, handle the missing compulsory arguments and print an error
message if necessary.
Let's declare the parser. It's now time to tell the parser which options we want. Special options are
available for input and output images with the AddInputImage() and AddOutputImage() methods.
For the other options, we need to use the AddOption() method. This method allows us to specify
the name of the option, a description, a shortcut, the number of expected parameters and whether
the option is compulsory:
parser->SetProgramDescription(
"This program applies a Harris detector on the input image");
parser->AddInputImage();
parser->AddOutputImage();
parser->AddOption("--SigmaD",
"Set the sigmaD parameter. Default is 1.0.",
"-d",
1,
false);
parser->AddOption("--SigmaI",
"Set the sigmaI parameter. Default is 1.0.",
"-i",
1,
false);
parser->AddOption("--Alpha",
"Set the alpha parameter. Default is 1.0.",
"-a",
1,
false);
Now that the parser has all this information, it can actually look at the command line to parse it. We
have to do this within a try/catch block to handle exceptions nicely.
try
{
parser->ParseCommandLine(argc, argv, parseResult);
}
Now, we can declare the image type, the reader and the writer as before:
We are getting the filenames for the input and the output images directly from the parser:
reader->SetFileName(parseResult->GetInputImage().c_str());
writer->SetFileName(parseResult->GetOutputImage().c_str());
Now we have to declare the filter. It is templated with the input image type and the output image
type like many filters in OTB. Here we are using the same type for the input and the output images:
typedef otb::HarrisImageFilter
<ImageType, ImageType> FilterType;
FilterType::Pointer filter = FilterType::New();
We set the filter parameters from the parser. The IsOptionPresent() method lets us know whether
an optional argument was provided on the command line.
if (parseResult->IsOptionPresent("--SigmaD"))
filter->SetSigmaD(
parseResult->GetParameterDouble("--SigmaD"));
if (parseResult->IsOptionPresent("--SigmaI"))
filter->SetSigmaI(
parseResult->GetParameterDouble("--SigmaI"));
if (parseResult->IsOptionPresent("--Alpha"))
filter->SetAlpha(
parseResult->GetParameterDouble("--Alpha"));
rescaler->SetOutputMinimum(0);
rescaler->SetOutputMaximum(255);
filter->SetInput(reader->GetOutput());
rescaler->SetInput(filter->GetOutput());
writer->SetInput(rescaler->GetOutput());
We trigger the pipeline execution by calling the Update() method on the writer:
writer->Update();
Compile with make as usual. The execution is a bit different now, as the command line is parsed
automatically. First, try to execute SmarterFilteringPipeline without any argument.
The usage message (automatically generated) appears:
Usage : ./SmarterFilteringPipeline
[--help|-h] : Help
[--version|-v] : Version
--InputImage|-in : input image file name (1 parameter)
--OutputImage|-out : output image file name (1 parameter)
[--SigmaD|-d] : Set the sigmaD parameter of the Harris points of
interest algorithm. Default is 1.0. (1 parameter)
[--SigmaI|-i] : Set the SigmaI parameter of the Harris points of
interest algorithm. Default is 1.0. (1 parameter)
[--Alpha|-a] : Set the alpha parameter of the Harris points of
interest algorithm. Default is 1.0. (1 parameter)
That looks a bit more professional: another user should be able to play with your program. As this
is automatic, it's a good way to make sure you never forget to document your programs.
So now you have a better idea of the command line options that are available. Try
SmarterFilteringPipeline -in QB_Suburb.png -out output.png for a basic version with
the default values.
If you want a result that looks a bit better, you have to adjust the parameters,
with SmarterFilteringPipeline -in QB_Suburb.png -out output.png -d 1.5 -i 2 -a
0.1 for example.
4.7 Going from raw satellite images to useful products

Quite often, when you buy satellite images, you end up with several images. In the case of opti-
cal satellites, you often have a panchromatic spectral band with the highest spatial resolution and a
multispectral product of the same area with a lower resolution. The resolution ratio is likely to be
around 4.
To get the best out of the image processing algorithms, you want to combine these data to produce a
new image with the highest spatial resolution and several spectral bands. This step is called fusion
and you can find more details about it in chapter 13. However, fusion supposes that your two images
represent exactly the same area. There are different solutions to process your data to reach this
situation. Here we are going to use the metadata available with the images to produce an
orthorectification, as detailed in chapter 11.
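The fusion we will use below (simple RCS, ratio component substitution) has a very compact per-pixel form: each XS value is scaled by the ratio between the PAN value and a low-pass filtered PAN value. A sketch, taking the smoothed PAN as a precomputed input for brevity (the real otb::SimpleRcsPanSharpeningFusionImageFilter computes the smoothing itself):

```cpp
#include <cassert>
#include <vector>

// Ratio component substitution fusion sketch: each multispectral value is
// scaled by pan / smoothedPan, so radiometry follows the XS image while
// high-frequency detail comes from the PAN image. Toy 1-D version.
std::vector<double> rcsFuse(const std::vector<double>& xs,
                            const std::vector<double>& pan,
                            const std::vector<double>& smoothedPan) {
    std::vector<double> fused(xs.size());
    for (std::size_t i = 0; i < xs.size(); ++i)
        fused[i] = xs[i] * pan[i] / smoothedPan[i];
    return fused;
}
```

Where the PAN image is locally brighter than its smoothed version, the fused pixel is raised, and conversely, which is how PAN detail is injected into the XS bands.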
First you need to add the following lines in the CMakeLists.txt file:
ADD_EXECUTABLE(OrthoFusion OrthoFusion.cxx)
TARGET_LINK_LIBRARIES(OrthoFusion ${OTB_LIBRARIES})
The source code for this example can be found in the file
Examples/Tutorials/OrthoFusion.cxx.
Start by including some necessary headers, along with the usual main declaration. Apart from the
classical headers related to image input and output, we need the headers related to the fusion and the
orthorectification. One header is also required to be able to process vector images (the XS one) with
the orthorectification.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "otbOrthoRectificationFilter.h"
#include "otbGenericMapProjection.h"
#include "otbSimpleRcsPanSharpeningFusionImageFilter.h"
#include "otbStandardFilterWatcher.h"
We initialize ossim, which is required for the orthorectification, and we check that all parameters are
provided. Basically, we need:
• as the coordinates are given in UTM, we need the UTM zone number;
• of course, we need the UTM coordinates of the final image;
• the size in pixels of the final image;
• and the sampling of the final image.
return EXIT_FAILURE;
}
readerPAN->SetFileName(argv[1]);
readerXS->SetFileName(argv[2]);
writer->SetFileName(argv[3]);
We declare the projection (here we chose the UTM projection; other choices are possible) and re-
trieve the parameters from the command line.
We will need to pass several parameters to the orthorectification concerning the desired output re-
gion:
ImageType::IndexType start;
start[0] = 0;
start[1] = 0;
ImageType::SizeType size;
size[0] = atoi(argv[8]);
size[1] = atoi(argv[9]);
ImageType::SpacingType spacing;
spacing[0] = atof(argv[10]);
spacing[1] = atof(argv[11]);
ImageType::PointType origin;
origin[0] = strtod(argv[6], ITK_NULLPTR);
origin[1] = strtod(argv[7], ITK_NULLPTR);
OrthoRectifFilterType::Pointer orthoRectifPAN =
OrthoRectifFilterType::New();
orthoRectifPAN->SetMapProjection(utmMapProjection);
orthoRectifPAN->SetInput(readerPAN->GetOutput());
orthoRectifPAN->SetOutputStartIndex(start);
orthoRectifPAN->SetOutputSize(size);
orthoRectifPAN->SetOutputSpacing(spacing);
orthoRectifPAN->SetOutputOrigin(origin);
Now we are able to have the orthorectified area from the PAN image. We just have to follow a
similar process for the XS image.
typedef otb::OrthoRectificationFilter<VectorImageType,
DoubleVectorImageType, InverseProjectionType>
VectorOrthoRectifFilterType;
VectorOrthoRectifFilterType::Pointer orthoRectifXS =
VectorOrthoRectifFilterType::New();
orthoRectifXS->SetMapProjection(utmMapProjection);
orthoRectifXS->SetInput(readerXS->GetOutput());
orthoRectifXS->SetOutputStartIndex(start);
orthoRectifXS->SetOutputSize(size);
orthoRectifXS->SetOutputSpacing(spacing);
orthoRectifXS->SetOutputOrigin(origin);
It’s time to declare the fusion filter and set its inputs:
typedef otb::SimpleRcsPanSharpeningFusionImageFilter
<DoubleImageType, DoubleVectorImageType, VectorImageType> FusionFilterType;
FusionFilterType::Pointer fusion = FusionFilterType::New();
fusion->SetPanInput(orthoRectifPAN->GetOutput());
fusion->SetXsInput(orthoRectifXS->GetOutput());
And we can plug it to the writer. To be able to process the images by tiles, we use the
SetAutomaticTiledStreaming() method of the writer. We trigger the pipeline execution with
the Update() method.
writer->SetInput(fusion->GetOutput());
writer->SetAutomaticTiledStreaming();
writer->Update();
return EXIT_SUCCESS;
}
Part III
User’s guide
CHAPTER
FIVE
DATA REPRESENTATION
This chapter introduces the basic classes responsible for representing data in OTB. The most com-
mon classes are the otb::Image , the itk::Mesh and the itk::PointSet .
5.1 Image
The otb::Image class follows the spirit of Generic Programming, where types are separated from
the algorithmic behavior of the class. OTB supports images with any pixel type and any spatial
dimension.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/Image1.cxx.
This example illustrates how to manually construct an otb::Image class. The following is the
minimal code needed to instantiate, declare and create the image class.
First, the header file of the Image class must be included.
#include "otbImage.h"
Then we must decide with what type to represent the pixels and what the dimension of the image
will be. With these two parameters we can instantiate the image class. Here we create a 2D image,
which is what we most often use in remote sensing applications, with unsigned short pixel
data.
typedef otb::Image<unsigned short, 2> ImageType;
The image can then be created by invoking the New() operator from the corresponding image type
and assigning the result to an itk::SmartPointer .
ImageType::Pointer image = ImageType::New();
In OTB, images exist in combination with one or more regions. A region is a subset of the image
and indicates a portion of the image that may be processed by other classes in the system. One of the
most common regions is the LargestPossibleRegion, which defines the image in its entirety. Other
important regions found in OTB are the BufferedRegion, which is the portion of the image actually
maintained in memory, and the RequestedRegion, which is the region requested by a filter or other
class when operating on the image.
In OTB, manually creating an image requires that the image is instantiated as previously shown, and
that regions describing the image are then associated with it.
A region is defined by two classes: the itk::Index and itk::Size classes. The origin of the
region within the image with which it is associated is defined by Index. The extent, or size, of the
region is defined by Size. Index is represented by a n-dimensional array where each component is
an integer indicating—in topological image coordinates—the initial pixel of the image. When an
image is created manually, the user is responsible for defining the image size and the index at which
the image grid starts. These two parameters make it possible to process selected regions.
The starting point of the image is defined by an Index class that is an n-dimensional array where
each component is an integer indicating the grid coordinates of the initial pixel of the image.
ImageType::IndexType start;
start[0] = 0; // first index on X
start[1] = 0; // first index on Y
The region size is represented by an array of the same dimension of the image (using the Size class).
The components of the array are unsigned integers indicating the extent in pixels of the image along
every dimension.
ImageType::SizeType size;
size[0] = 200; // size along X
size[1] = 200; // size along Y
Having defined the starting index and the image size, these two parameters are used to create an
ImageRegion object which basically encapsulates both concepts. The region is initialized with the
starting index and size of the image.
ImageType::RegionType region;
region.SetSize(size);
region.SetIndex(start);
Finally, the region is passed to the Image object in order to define its extent and origin. The
SetRegions method sets the LargestPossibleRegion, BufferedRegion, and RequestedRegion simul-
taneously. Note that none of the operations performed to this point have allocated memory for the
image pixel data. It is necessary to invoke the Allocate() method to do this. Allocate does not
require any arguments since all the information needed for memory allocation has already been
provided by the region.
image->SetRegions(region);
image->Allocate();
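The index/size/region bookkeeping can be sketched with a flat buffer: the region's starting index shifts each coordinate, and rows are laid out one after the other. This is an illustrative model, not otb::Image's internals.

```cpp
#include <cassert>
#include <vector>

// A 2-D region: starting index plus size, backed by a flat row-major buffer.
// Sketch of the Index/Size/Region trio; otb::Image stores more metadata.
struct Region2D {
    int startX, startY;
    unsigned int sizeX, sizeY;
    std::vector<unsigned short> buffer;

    // Counterpart of Allocate(): the region alone determines the buffer size.
    void allocate() { buffer.assign(sizeX * sizeY, 0); }

    // Translate a 2-D index into an offset in the flat buffer.
    std::size_t offset(int x, int y) const {
        return static_cast<std::size_t>(y - startY) * sizeX + (x - startX);
    }
    void setPixel(int x, int y, unsigned short v) { buffer[offset(x, y)] = v; }
    unsigned short getPixel(int x, int y) const { return buffer[offset(x, y)]; }
};
```

This also shows why Allocate() needs no arguments: the region already carries all the information required to size the pixel buffer.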
In practice it is rare to allocate and initialize an image directly. Images are typically read from a
source, such as a file or data acquisition hardware. The following example illustrates how an image
can be read from a file.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/Image2.cxx.
The first thing required to read an image from a file is to include the header file of the
otb::ImageFileReader class.
#include "otbImageFileReader.h"
Then, the image type should be defined by specifying the type used to represent pixels and the
dimensions of the image.
typedef unsigned char PixelType;
const unsigned int Dimension = 2;
typedef otb::Image<PixelType, Dimension> ImageType;
Using the image type, it is now possible to instantiate the image reader class. The image type is used
as a template parameter to define how the data will be represented once it is loaded into memory.
This type does not have to correspond exactly to the type stored in the file. However, a conversion
based on C-style type casting is used, so the type chosen to represent the data on disk must be
sufficient to characterize it accurately. Readers do not apply any transformation to the pixel data
other than casting from the pixel type of the file to the pixel type of the ImageFileReader. The
following illustrates a typical instantiation of the ImageFileReader type.
typedef otb::ImageFileReader<ImageType> ReaderType;
The reader type can now be used to create one reader object. An itk::SmartPointer (defined
by the ::Pointer notation) is used to receive the reference to the newly created reader. The New()
method is invoked to create an instance of the image reader.
ReaderType::Pointer reader = ReaderType::New();
The minimum information required by the reader is the filename of the image to be loaded in mem-
ory. This is provided through the SetFileName() method. The file format is inferred from
the filename extension. The user may also explicitly specify the data format using the
itk::ImageIO class (see Section 6.1 for more information).
Reader objects are referred to as pipeline source objects; they respond to pipeline update requests
and initiate the data flow in the pipeline. The pipeline update mechanism ensures that the reader
only executes when a data request is made to the reader and the reader has not read any data. In the
current example we explicitly invoke the Update() method because the output of the reader is not
connected to other filters. In normal applications the reader's output is connected to the input of an
image filter and the update invocation on the filter triggers an update of the reader. The following
line illustrates how an explicit update is invoked on the reader.
reader->Update();
Access to the newly read image can be gained by calling the GetOutput() method on the reader.
This method can also be called before the update request is sent to the reader. The reference to the
image will be valid even though the image will be empty until the reader actually executes.
Any attempt to access image data before the reader executes will yield an image with no pixel data.
It is likely that a program crash will result since the image will not have been properly initialized.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/Image3.cxx.
This example illustrates the use of the SetPixel() and GetPixel() methods. These two methods
provide direct access to the pixel data contained in the image. Note that these two methods are
relatively slow and should not be used in situations where high-performance access is required.
Image iterators are the appropriate mechanism to efficiently access image pixel data.
The individual position of a pixel inside the image is identified by a unique index. An index is an
array of integers that defines the position of the pixel along each coordinate dimension of the image.
The IndexType is automatically defined by the image and can be accessed using the scope operator,
as in ImageType::IndexType; it is an itk::Index . The length of the array will match the
dimensions of the associated image.
The following code illustrates the declaration of an index variable and the assignment of values to
each of its components. Please note that Index does not use SmartPointers to access it. This is
because Index is a light-weight object that is not intended to be shared between objects. It is more
efficient to produce multiple copies of these small objects than to share them using the SmartPointer
mechanism.
The following lines declare an instance of the index type and initialize its content in order to associate
it with a pixel position in the image.
ImageType::IndexType pixelIndex;
Having defined a pixel position with an index, it is then possible to access the content of the pixel in
the image. The GetPixel() method allows us to get the value of the pixels.
Please note that GetPixel() returns the pixel value using copy and not reference semantics. Hence,
the method cannot be used to modify image data values.
Remember that both SetPixel() and GetPixel() are inefficient and should only be used for de-
bugging or for supporting interactions like querying pixel values by clicking with the mouse.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/Image4.cxx.
Even though OTB can be used to perform general image processing tasks, the primary purpose of
the toolkit is the processing of remote sensing image data. In that respect, additional information
about the images is considered mandatory. In particular the information associated with the physical
spacing between pixels and the position of the image in space with respect to some world coordinate
system are extremely important.
Image origin and spacing are fundamental to many applications. Registration, for example, is per-
formed in physical coordinates. Improperly defined spacing and origins will result in inconsistent
results in such processes. Remote sensing images with no spatial information should not be used for
image analysis, feature extraction, GIS input, etc. In other words, remote sensing images lacking
spatial information are not only useless but also hazardous.
[Figure 5.1: geometrical concepts of otb::Image — an image of size 7x6 with spacing (20.0, 30.0), physical extent (140.0, 180.0) and origin (60.0, 70.0); the coverage of each pixel is the Voronoi region around its center.]
Figure 5.1 illustrates the main geometrical concepts associated with the otb::Image . In this figure,
circles are used to represent the center of pixels. The value of the pixel is assumed to exist as a Dirac
Delta Function located at the pixel center. Pixel spacing is measured between the pixel centers
and can be different along each dimension. The image origin is associated with the coordinates
of the first pixel in the image. A pixel is considered to be the rectangular region surrounding the
pixel center holding the data value. This can be viewed as the Voronoi region of the image grid, as
illustrated in the right side of the figure. Linear interpolation of image values is performed inside
the Delaunay region whose corners are pixel centers.
Image spacing is represented in a FixedArray whose size matches the dimension of the image. In
order to manually set the spacing of the image, an array of the corresponding type must be created.
The elements of the array should then be initialized with the spacing between the centers of adjacent
pixels. The following code illustrates the methods available in the Image class for dealing with
spacing and origin.
ImageType::SpacingType spacing;
// Note: measurement units (e.g., meters, feet, etc.) are defined by the application.
spacing[0] = 0.70; // spacing along X
spacing[1] = 0.70; // spacing along Y
The array can be assigned to the image using the SetSignedSpacing() method.
image->SetSignedSpacing(spacing);
The spacing information can be retrieved from an image by using the GetSignedSpacing()
method. This method returns a reference to a FixedArray. The returned object can then be used to
read the contents of the array. Note the use of the const keyword to indicate that the array will not
be modified.
const ImageType::SpacingType& sp = image->GetSignedSpacing();
The image origin is managed in a similar way to the spacing. A Point of the appropriate dimension
must first be allocated. The coordinates of the origin can then be assigned to every component. These
coordinates correspond to the position of the first pixel of the image with respect to an arbitrary
reference system in physical space. It is the user’s responsibility to make sure that multiple images
used in the same application are using a consistent reference system. This is extremely important in
image registration applications.
The following code illustrates the creation and assignment of a variable suitable for initializing the
image origin.
ImageType::PointType origin;
origin[0] = 0.0; // coordinates of the
origin[1] = 0.0; // first pixel in 2-D
image->SetOrigin(origin);
The origin can also be retrieved from an image by using the GetOrigin() method. This will return
a reference to a Point. The reference can be used to read the contents of the array. Note again the
use of the const keyword to indicate that the array contents will not be modified.
Once the spacing and origin of the image have been initialized, the image will correctly map pixel
indices to and from physical space coordinates. The following code illustrates how a point in phys-
ical space can be mapped into an image index for the purpose of reading the content of the closest
pixel.
First, a itk::Point type must be declared. The point type is templated over the type used to
represent coordinates and over the dimension of the space. In this particular case, the dimension of
the point must match the dimension of the image.
The Point class, like an itk::Index, is a relatively small and simple object. For this reason, it is not reference-counted like the large data objects in OTB. Consequently, it is also not manipulated with itk::SmartPointers. Point objects are simply declared as instances of any other C++ class. Once the point is declared, its components can be accessed using traditional array notation.
In particular, the [] operator is available. For efficiency reasons, no bounds checking is performed
on the index used to access a particular point component. It is the user’s responsibility to make sure
that the index is in the range {0, Dimension − 1}.
PointType point;
The image will map the point to an index using the values of the current spacing and origin. An index
object must be provided to receive the results of the mapping. The index object can be instantiated
by using the IndexType defined in the Image type.
ImageType::IndexType pixelIndex;
The TransformPhysicalPointToIndex() method of the image class will compute the pixel index
closest to the point provided. The method checks for this index to be contained inside the current
buffered pixel data. The method returns a boolean indicating whether the resulting index falls inside
the buffered region or not. The output index should not be used when the returned value of the
method is false.
The following lines illustrate the point to index mapping and the subsequent use of the pixel index
for accessing pixel data from the image.
bool isInside = image->TransformPhysicalPointToIndex(point, pixelIndex);
if (isInside)
{
ImageType::PixelType pixelValue = image->GetPixel(pixelIndex);
pixelValue += 5;
image->SetPixel(pixelIndex, pixelValue);
}
Remember that GetPixel() and SetPixel() are very inefficient methods for accessing pixel data.
Image iterators should be used when massive access to pixel data is required.
The source code for this example can be found in the file
Examples/IO/MetadataExample.cxx.
This example illustrates the access to metadata image information with OTB. By metadata, we mean
data which is typically stored with remote sensing images, like geographical coordinates of pixels,
pixel spacing or resolution, etc. Of course, the availability of these data depends on the image format used and on whether the image producer filled the available metadata fields. The image formats which typically support metadata are, for example, CEOS and GeoTIFF.
The metadata support is embedded in OTB's I/O functionalities and is accessible through the otb::Image and otb::VectorImage classes. You should avoid using the itk::Image class if you want to have metadata support.
This simple example will consist of reading an image from a file and writing the metadata to an output ASCII file. As usual, we start by defining the types needed for the image to be read.
typedef unsigned char InputPixelType;
const unsigned int Dimension = 2;
typedef otb::Image<InputPixelType, Dimension> InputImageType;
typedef otb::ImageFileReader<InputImageType> ReaderType;
We can now instantiate the reader and get a pointer to the input image.
ReaderType::Pointer reader = ReaderType::New();
InputImageType::Pointer image = InputImageType::New();
reader->SetFileName(inputFilename);
reader->Update();
image = reader->GetOutput();
Once the image has been read, we can access the metadata information. We will copy this informa-
tion to an ASCII file, so we create an output file stream for this purpose.
std::ofstream file;
file.open(outputAsciiFilename);
We can now call the different methods available for accessing the metadata.
One can also get the GCPs by number, as well as their coordinates in image and geographical space.
tab = image->GetUpperLeftCorner();
file << "Corners " << std::endl;
for (unsigned int i = 0; i < tab.size(); ++i)
{
file << " UL[" << i << "] -> " << tab[i] << std::endl;
}
tab.clear();
tab = image->GetUpperRightCorner();
for (unsigned int i = 0; i < tab.size(); ++i)
{
file << " UR[" << i << "] -> " << tab[i] << std::endl;
}
tab.clear();
tab = image->GetLowerLeftCorner();
for (unsigned int i = 0; i < tab.size(); ++i)
{
file << " LL[" << i << "] -> " << tab[i] << std::endl;
}
tab.clear();
tab = image->GetLowerRightCorner();
for (unsigned int i = 0; i < tab.size(); ++i)
{
file << " LR[" << i << "] -> " << tab[i] << std::endl;
}
tab.clear();
file.close();
The term RGB (Red, Green, Blue) stands for a color representation commonly used in digital imaging. RGB is a representation of the human physiological capability to analyze visual light using three spectral-selective sensors [92, 145]. The human retina possesses different types of light-sensitive cells. Three of them, known as cones, are sensitive to color [51], and their regions of sensitivity
loosely match regions of the spectrum that will be perceived as red, green and blue respectively. The rods, on the other hand, provide no color discrimination and favor high resolution and high sensitivity1. A fifth type of receptors, the ganglion cells, also known as circadian2 receptors, is sensitive to the lighting conditions that differentiate day from night. These receptors evolved as a mechanism for synchronizing the physiology with the time of day. Cellular controls for circadian rhythms are present in every cell of an organism and are known to be exquisitely precise [89].
The RGB space has been constructed as a representation of a physiological response to light by the
three types of cones in the human eye. RGB is not a Vector space. For example, negative numbers
are not appropriate in a color space because they will be the equivalent of “negative stimulation” on
the human eye. In the context of colorimetry, negative color values are used as an artificial construct for color comparison in the sense that

ColorA = ColorB − ColorC (5.1)

just as a way of saying that we can produce ColorB by combining ColorA and ColorC. However, we must be aware that (at least in emitted light) it is not possible to subtract light. So when we mention Equation 5.1 we actually mean

ColorB = ColorA + ColorC (5.2)
On the other hand, when dealing with printed color and with paint, as opposed to emitted light like
in computer screens, the physical behavior of color allows for subtraction. This is because strictly
speaking the objects that we see as red are those that absorb all light frequencies except those in the
red section of the spectrum [145].
The concept of addition and subtraction of colors has to be carefully interpreted. In fact, RGB has a different definition depending on whether we are talking about the channels associated with the three color sensors of the human eye, the three phosphors found in most computer monitors, or the color inks used for printing reproduction. Color spaces are usually nonlinear and do not even form a group. For example, not all visible colors can be represented in RGB space [145].
ITK introduces the itk::RGBPixel type as a support for representing the values of an RGB color
space. As such, the RGBPixel class embodies a different concept from that of an itk::Vector in space. For this reason, the RGBPixel class lacks many of the operators that might naively be expected from it. In particular, there are no defined operations for subtraction or addition.
When you expect to compute the “mean” of two RGB values, you are assuming that a color in the middle of two colors can be found through a linear operation on their numerical representations. This is unfortunately not the case in color spaces, because they are based on a human physiological response [92].
If you decide to interpret RGB images as simply three independent channels, then you should rather use the itk::Vector type as the pixel type. In this way, you will have access to the set of operations that are defined in vector spaces. The current implementation of the RGBPixel in ITK presumes that RGB color images are intended to be used in applications where a formal interpretation of color is desired; therefore, only the operations that are valid in a color space are available in the RGBPixel class.

1 The human eye is capable of perceiving a single isolated photon.
2 The term circadian refers to the cycle of day and night, that is, events that are repeated at 24-hour intervals.
The following example illustrates how RGB images can be represented in OTB.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/RGBImage.cxx.
Thanks to the flexibility offered by the Generic Programming style on which OTB is based, it is
possible to instantiate images of arbitrary pixel type. The following example illustrates how a color
image with RGB pixels can be defined.
A class intended to support the RGB pixel type is available in ITK. You could also define your own
pixel class and use it to instantiate a custom image type. In order to use the itk::RGBPixel class,
it is necessary to include its header file.
#include "itkRGBPixel.h"
The RGB pixel class is templated over the type used to represent each of the red, green and blue components. A typical instantiation of the templated class is as follows.
typedef itk::RGBPixel<unsigned char> PixelType;
The type is then used as the pixel template parameter of the image.
typedef otb::Image<PixelType, 2> ImageType;
The image type can be used to instantiate other filters, for example, an otb::ImageFileReader object that will read the image from a file.
Access to the color components of the pixels can now be performed using the methods provided by
the RGBPixel class.
PixelType onePixel = image->GetPixel(pixelIndex);
The subindex notation can also be used since the itk::RGBPixel inherits the [] operator from
the itk::FixedArray class.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/VectorImage.cxx.
Many image processing tasks require images of non-scalar pixel type. A typical example is a multi-
spectral image. The following code illustrates how to instantiate and use an image whose pixels are
of vector type.
We could use the itk::Vector class to define the pixel type. The Vector class is intended to
represent a geometrical vector in space. It is not intended to be used as an array container like the
std::vector in STL. If you are interested in containers, the itk::VectorContainer class may
provide the functionality you want.
However, the itk::Vector is a fixed size array and it assumes that the number of channels of
the image is known at compile time. Therefore, we prefer to use the otb::VectorImage class
which allows choosing the number of channels of the image at runtime. The pixels will be of type
itk::VariableLengthVector .
The first step is to include the header file of the VectorImage class.
#include "otbVectorImage.h"
The VectorImage class is templated over the type used to represent each pixel component and over the dimension of the space. In this example, we want to represent Pléiades images, which have 4 bands.
typedef unsigned char PixelType;
typedef otb::VectorImage<PixelType, 2> ImageType;
Since the pixel dimensionality is chosen at runtime, one has to pass this parameter to the image
before memory allocation.
image->SetNumberOfComponentsPerPixel(4);
image->Allocate();
The VariableLengthVector class overloads the operator []. This makes it possible to access the
Vector’s components using index notation. The user must not forget to allocate the memory for each
individual pixel by using the Reserve method.
ImageType::PixelType pixelValue;
pixelValue.Reserve(4);
We can now store this vector in one of the image pixels by defining an index and invoking the
SetPixel() method.
image->SetPixel(pixelIndex, pixelValue);
The GetPixel() method can also be used to read vector pixels from the image:
ImageType::PixelType value = image->GetPixel(pixelIndex);
Let us repeat that both SetPixel() and GetPixel() are inefficient and should only be used for debugging purposes or for implementing interactions with a graphical user interface, such as querying a pixel value by clicking with the mouse.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/Image5.cxx.
This example illustrates how to import data into the otb::Image class. This is particularly useful
for interfacing with other software systems. Many systems use a contiguous block of memory as a
buffer for image pixel data. The current example assumes this is the case and feeds the buffer into
an otb::ImportImageFilter , thereby producing an Image as output.
For fun we create a synthetic image with a centered sphere in a locally allocated buffer and pass this
block of memory to the ImportImageFilter. This example is set up so that on execution, the user
must provide the name of an output file as a command-line argument.
First, the header file of the ImportImageFilter class must be included.
#include "otbImage.h"
#include "otbImportImageFilter.h"
Next, we select the data type used to represent the image pixels. We assume that the external block of memory uses the same data type to represent the pixels.
typedef unsigned char PixelType;
const unsigned int Dimension = 2;
typedef otb::Image<PixelType, Dimension> ImageType;
A filter object created using the New() method is then assigned to a SmartPointer.
ImportFilterType::Pointer importFilter = ImportFilterType::New();
This filter requires the user to specify the size of the image to be produced as output. The
SetRegion() method is used to this end. The image size should exactly match the number of
pixels available in the locally allocated buffer.
ImportFilterType::SizeType size;
size[0] = 200; // size along X (illustrative value)
size[1] = 200; // size along Y (illustrative value)
ImportFilterType::IndexType start;
start.Fill(0);
ImportFilterType::RegionType region;
region.SetIndex(start);
region.SetSize(size);
importFilter->SetRegion(region);
The origin of the output image is specified with the SetOrigin() method.
double origin[Dimension];
origin[0] = 0.0; // X coordinate
origin[1] = 0.0; // Y coordinate
importFilter->SetOrigin(origin);
double spacing[Dimension];
spacing[0] = 1.0; // along X direction
spacing[1] = 1.0; // along Y direction
importFilter->SetSpacing(spacing);
Next we allocate the memory block containing the pixel data to be passed to the ImportImageFilter.
Note that we use exactly the same size that was specified with the SetRegion() method. In a
practical application, you may get this buffer from some other library using a different data structure
to represent the images.
const unsigned int numberOfPixels = size[0] * size[1];
PixelType * localBuffer = new PixelType[numberOfPixels];
Here we fill the buffer with a binary sphere. We use simple for() loops, similar to those found in the C or FORTRAN programming languages. Note that OTB does not use for() loops in its internal code to access pixels. All pixel access tasks are instead performed using otb::ImageIterators that support the management of n-dimensional images.
The buffer is passed to the ImportImageFilter with the SetImportPointer() method. Note that the last argument of this method specifies who will be responsible for deleting the memory block once it is no longer in use. A false value indicates that the ImportImageFilter will not try to delete the buffer when its destructor is called. A true value, on the other hand, will allow the filter to delete the memory block upon destruction of the import filter.
const bool importImageFilterWillOwnTheBuffer = true;
importFilter->SetImportPointer(localBuffer, numberOfPixels,
                               importImageFilterWillOwnTheBuffer);
For the ImportImageFilter to appropriately delete the memory block, the memory must be allocated with the C++ new operator. Memory allocated with other mechanisms, such as C malloc or calloc, will not be deleted properly by the ImportImageFilter. In other words, it is the application programmer's responsibility to ensure that the ImportImageFilter is only given permission to delete memory that was allocated with new.
Finally, we can connect the output of this filter to a pipeline. For simplicity we just use a writer here,
but it could be any other filter.
writer->SetInput(dynamic_cast<ImageType*>(importFilter->GetOutput()));
Note that we do not call delete on the buffer since we pass true as the last argument of
SetImportPointer(). Now the buffer is owned by the ImportImageFilter.
The source code for this example can be found in the file
Examples/DataRepresentation/Image/ImageListExample.cxx.
This example illustrates the use of the otb::ImageList class. This class provides the functionalities needed in order to integrate image lists as data objects into the OTB pipeline. Indeed, if a std::list< ImageType > were used, the update operations on the pipeline might not have the desired effects.
In this example, we will only present the basic operations which can be applied on an
otb::ImageList object.
The first thing required in order to use the otb::ImageList class is to include its header file.
#include "otbImageList.h"
As usual, we start by defining the types for the pixel and image types, as well as those for the readers
and writers.
const unsigned int Dimension = 2;
typedef unsigned char InputPixelType;
typedef otb::Image<InputPixelType, Dimension> InputImageType;
typedef otb::ImageFileReader<InputImageType> ReaderType;
typedef otb::ImageFileWriter<InputImageType> WriterType;
We can now define the type for the image list. The otb::ImageList class is templated over the type of image contained in it, which means that all the images in a list must have the same type.
typedef otb::ImageList<InputImageType> ImageListType;
ImageListType::Pointer imageList = ImageListType::New();
Let us assume now that we want to read an image from a file and store it in a list. The first thing to do is to instantiate the reader and set the image file name. The image is effectively read by calling Update().
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName(inputFilename);
reader->Update();
In order to store the image in the list, the PushBack() method is used.
imageList->PushBack(reader->GetOutput());
We could repeat this operation for other readers or the outputs of filters. We will now write an image
of the list to a file. We therefore instantiate a writer, set the image file name and set the input image
for it. This is done by calling the Back() method of the list, which allows us to get the last element.
// Getting the image from the list and writing it to file
WriterType::Pointer writer = WriterType::New();
writer->SetFileName(outputFilename);
writer->SetInput(imageList->Back());
writer->Update();
Other useful methods are:
• SetNthElement() and GetNthElement(), which allow random access to any element of the list.
• Front(), to access the first element of the list.
• Erase(), to remove an element.
Also, iterator classes are defined in order to have an efficient means of moving through the list. Finally, the otb::ImageListToImageListFilter is provided in order to implement filters which operate on image lists and produce image lists.
5.2 PointSet
The source code for this example can be found in the file
Examples/DataRepresentation/Mesh/PointSet1.cxx.
The itk::PointSet is a basic class intended to represent geometry in the form of a set of points
in n-dimensional space. It is the base class for the itk::Mesh, providing the methods necessary to manipulate sets of points. Points can have values associated with them. The type of such values is defined by a template parameter of the itk::PointSet class (i.e., TPixelType). Two basic interaction styles of PointSets are available in ITK. These styles are referred to as static and dynamic.
The first style is used when the number of points in the set is known in advance and is not expected
to change as a consequence of the manipulations performed on the set. The dynamic style, on the
other hand, is intended to support insertion and removal of points in an efficient manner. Distin-
guishing between the two styles is meant to facilitate the fine tuning of a PointSet’s behavior while
optimizing performance and memory management.
In order to use the PointSet class, its header file should be included.
#include "itkPointSet.h"
Then we must decide what type of value to associate with the points. This is generally called the PixelType in order to make the terminology consistent with the itk::Image. The PointSet is also templated over the dimension of the space in which the points are represented. The following declaration illustrates a typical instantiation of the PointSet class.
typedef itk::PointSet<unsigned short, 2> PointSetType;
A PointSet object is created by invoking the New() method on its type. The resulting object must be assigned to a SmartPointer. The PointSet is then reference-counted and can be shared by multiple objects. The memory allocated for the PointSet will be released when the number of references to the object is reduced to zero. This simply means that the user does not need to be concerned with invoking the Delete() method on this class. In fact, the Delete() method should never be called directly within any of the reference-counted ITK classes.
PointSetType::Pointer pointsSet = PointSetType::New();
Following the principles of Generic Programming, the PointSet class has a set of associated defined types to ensure that interacting objects can be declared with compatible types. This set of type definitions is commonly known as a set of traits. Among them we can find the PointType type, for example. This is the type used by the point set to represent points in space. The following declaration takes the point type as defined in the PointSet traits and renames it to be conveniently used in the global namespace.
typedef PointSetType::PointType PointType;
The PointType can now be used to declare point objects to be inserted in the PointSet. Points are
fairly small objects, so it is inconvenient to manage them with reference counting and smart pointers.
They are simply instantiated as typical C++ classes. The Point class inherits the [] operator from
the itk::Array class. This makes it possible to access its components using index notation. For
efficiency’s sake no bounds checking is performed during index access. It is the user’s responsibility
to ensure that the index used is in the range {0, Dimension − 1}. Each of the components in the point
is associated with space coordinates. The following code illustrates how to instantiate a point and
initialize its components.
PointType p0;
p0[0] = -1.0; // x coordinate
p0[1] = -1.0; // y coordinate
Points are inserted in the PointSet by using the SetPoint() method. This method requires the user
to provide a unique identifier for the point. The identifier is typically an unsigned integer that will
enumerate the points as they are being inserted. The following code shows how three points are
inserted into the PointSet.
pointsSet->SetPoint(0, p0);
pointsSet->SetPoint(1, p1);
pointsSet->SetPoint(2, p2);
It is possible to query the PointSet in order to determine how many points have been inserted into it.
This is done with the GetNumberOfPoints() method as illustrated below.
const unsigned int numberOfPoints = pointsSet->GetNumberOfPoints();
std::cout << numberOfPoints << std::endl;
Points can be read from the PointSet by using the GetPoint() method and the integer identifier. The point is copied into the variable whose address is provided by the user. If the identifier does not match an existing point, the method will return false and the contents of the point will be invalid. The following code illustrates point access using defensive programming.
PointType pp;
bool pointExists = pointsSet->GetPoint(1, &pp);
if (pointExists)
{
std::cout << "Point is = " << pp << std::endl;
}
GetPoint() and SetPoint() are not the most efficient methods to access points in the PointSet. It
is preferable to get direct access to the internal point container defined by the traits and use iterators
to walk sequentially over the list of points (as shown in the following example).
The source code for this example can be found in the file
Examples/DataRepresentation/Mesh/PointSet2.cxx.
The itk::PointSet class uses an internal container to manage the storage of itk::Point objects. It is more efficient, in general, to manage points by using the access methods provided directly on the points container. The following example illustrates how to interact with the point container and how to use point iterators.
The points container type is defined by the traits of the PointSet class. The following line conveniently takes the PointsContainer type from the PointSet traits and declares it in the global namespace.
typedef PointSetType::PointsContainer PointsContainer;
The actual type of the PointsContainer depends on what style of PointSet is being used. The dynamic PointSet uses the itk::MapContainer while the static PointSet uses the itk::VectorContainer. The vector and map containers are basically ITK wrappers around the STL classes std::map and std::vector. By default, the PointSet uses a static style, hence the default type of point container is a VectorContainer. Both the map and vector containers are templated over the type of the elements they contain, in this case PointType. Containers are reference-counted objects. They are therefore created with the New() method and assigned to an itk::SmartPointer after creation. The following line creates a point container compatible with the type of the PointSet from which the trait has been taken.
PointsContainer::Pointer points = PointsContainer::New();
Points can now be defined using the PointType trait from the PointSet.
typedef PointSetType::PointType PointType;
PointType p0;
PointType p1;
p0[0] = -1.0;
p0[1] = 0.0; // Point 0 = {-1, 0 }
p1[0] = 1.0;
p1[1] = 0.0; // Point 1 = { 1, 0 }
The created points can be inserted in the PointsContainer using the generic method
InsertElement() which requires an identifier to be provided for each point.
unsigned int pointId = 0;
points->InsertElement(pointId++, p0);
points->InsertElement(pointId++, p1);
Finally the PointsContainer can be assigned to the PointSet. This will substitute any previously
existing PointsContainer on the PointSet. The assignment is done using the SetPoints() method.
pointSet->SetPoints(points);
The PointsContainer object can be obtained from the PointSet using the GetPoints() method. This
method returns a pointer to the actual container owned by the PointSet which is then assigned to a
SmartPointer.
The most efficient way to sequentially visit the points is to use the iterators provided by PointsCon-
tainer. The Iterator type belongs to the traits of the PointsContainer classes. It behaves pretty
much like the STL iterators.3 The Points iterator is not a reference counted class, so it is created
directly from the traits without using SmartPointers.
typedef PointsContainer::Iterator PointsIterator;
The subsequent use of the iterator follows what you would expect from an STL iterator. The iterator pointing to the first point is obtained from the container with the Begin() method and assigned to another iterator.
PointsIterator pointIterator = points->Begin();
The ++ operator on the iterator can be used to advance from one point to the next. The actual value
of the Point to which the iterator is pointing can be obtained with the Value() method. The loop for
walking through all the points can be controlled by comparing the current iterator with the iterator
returned by the End() method of the PointsContainer. The following lines illustrate the typical loop
for walking through the points.
PointsIterator end = points->End();
while (pointIterator != end)
{
PointType p = pointIterator.Value(); // access the point
std::cout << p << std::endl; // print the point
++pointIterator; // advance to next point
}
Note that, as in STL, the iterator returned by the End() method is not a valid iterator. This is called a past-the-end iterator; it is the value resulting from advancing one step past the last element in the container.
The number of elements stored in a container can be queried with the Size() method. In the case
of the PointSet, the following two lines of code are equivalent, both of them returning the number
of points in the PointSet.
std::cout << pointSet->GetNumberOfPoints() << std::endl;
std::cout << pointSet->GetPoints()->Size() << std::endl;
The source code for this example can be found in the file
Examples/DataRepresentation/Mesh/PointSet3.cxx.
3 If you dig deep enough into the code, you will discover that these iterators are actually ITK wrappers around STL
iterators.
84 Chapter 5. Data Representation
The itk::PointSet class was designed to interact with the Image class. For this reason it was
found convenient to allow the points in the set to hold values that could be computed from images.
The value associated with the point is referred to as PixelType in order to make it consistent with
image terminology. Users can define the type as they please thanks to the flexibility offered by the
Generic Programming approach used in the toolkit. The PixelType is the first template parameter
of the PointSet.
The following code defines a particular type for a pixel type and instantiates a PointSet class with it.
Data can be inserted into the PointSet using the SetPointData() method. This method requires the
user to provide an identifier. The data in question will be associated with the point holding the same
identifier. It is the user’s responsibility to verify the appropriate matching between inserted data and
inserted points. The following line illustrates the use of the SetPointData() method.
Data associated with points can be read from the PointSet using the GetPointData() method. This
method requires the user to provide the identifier to the point and a valid pointer to a location where
the pixel data can be safely written. If the identifier does not match any existing identifier in
the PointSet, the method will return false and the pixel value returned will be invalid. It is the user’s
responsibility to check the returned boolean value before attempting to use the data.
The SetPointData() and GetPointData() methods are not the most efficient way to get access
to point data. It is far more efficient to use the Iterators provided by the PointDataContainer.
Data associated with points is internally stored in PointDataContainers. In the same way as
with points, the actual container type used depends on whether the style of the PointSet is static or
dynamic. Static point sets will use an itk::VectorContainer while dynamic point sets will
use an itk::MapContainer . The type of the data container is defined as one of the traits in the
PointSet. The following declaration illustrates how the type can be taken from the traits and used to
conveniently declare a similar type on the global namespace.
Using this type it is now possible to create an instance of the data container. This is a standard
reference counted object; it therefore uses the New() method for creation and assigns the newly
created object to a SmartPointer.
PointDataContainer::Pointer pointData = PointDataContainer::New();
Pixel data can be inserted in the container with the method InsertElement(). This method requires
an identifier to be provided for each point data item.
unsigned int pointId = 0;
pointData->InsertElement(pointId++, value0);
pointData->InsertElement(pointId++, value1);
Finally the PointDataContainer can be assigned to the PointSet. This will substitute any previously
existing PointDataContainer on the PointSet. The assignment is done using the SetPointData()
method.
pointSet->SetPointData(pointData);
The PointDataContainer can be obtained from the PointSet using the GetPointData() method.
This method returns a pointer (assigned to a SmartPointer) to the actual container owned by the
PointSet.
PointDataContainer::Pointer pointData2 = pointSet->GetPointData();
The most efficient way to sequentially visit the data associated with points is to use the iterators
provided by the PointDataContainer. The Iterator type belongs to the traits of the PointDataContainer
class. The iterator is not a reference counted class, so it is created directly from the traits
without using SmartPointers.
typedef PointDataContainer::Iterator PointDataIterator;
The subsequent use of the iterator follows what you may expect from an STL iterator. The iterator
to the first point is obtained from the container with the Begin() method and assigned to another
iterator.
PointDataIterator pointDataIterator = pointData2->Begin();
The ++ operator on the iterator can be used to advance from one data point to the next. The actual
value of the PixelType to which the iterator is pointing can be obtained with the Value() method.
The loop for walking through all the point data can be controlled by comparing the current iterator
with the iterator returned by the End() method of the PointDataContainer. The following lines illustrate
the typical loop for walking through the point data.
Note that as in STL, the iterator returned by the End() method is not a valid iterator. This is called
a past-end iterator in order to indicate that it is the value resulting from advancing one step after
visiting the last element in the container.
The source code for this example can be found in the file
Examples/DataRepresentation/Mesh/PointSetWithVectors.cxx.
This example illustrates how a point set can be parameterized to manage a particular pixel type. It
is quite common to associate vector values with points for producing geometric representations or
storing multi-band information. The following code shows how vector values can be used as pixel
type on the PointSet class. The itk::Vector class is used here as the pixel type. This class
is appropriate for representing the relative position between two points. It could then be used to
manage displacements in disparity map estimations, for example.
In order to use the vector class it is necessary to include its header file along with the header of the
point set.
#include "itkVector.h"
#include "itkPointSet.h"
Then we use the PixelType (which is actually a Vector) to instantiate the PointSet type and
subsequently create a PointSet object.
5.2. PointSet 87
The following code generates a circle and assigns vector values to the points. The components
of the vectors in this example are computed to represent the tangents to the circle, as shown in
Figure 5.2.
PointSetType::PixelType tangent;
PointSetType::PointType point;
We can now visit all the points and use the vector pixel values to apply a displacement to the
points. This is in the spirit of what a deformable model could do at each of its iterations.
Note that the ConstIterator was used here instead of the normal Iterator since the pixel values
are only intended to be read and not modified. ITK supports const-correctness at the API level.
The itk::Vector class has overloaded the + operator with the itk::Point . In other words,
vectors can be added to points in order to produce new points. This property is exploited in the
center of the loop in order to update the point positions with a single statement.
We can finally visit all the points and print out the new values.
pointIterator = pointSet->GetPoints()->Begin();
pointEnd = pointSet->GetPoints()->End();
while (pointIterator != pointEnd)
{
std::cout << pointIterator.Value() << std::endl;
++pointIterator;
}
Note that itk::Vector is not the appropriate class for representing normals to surfaces and gradi-
ents of functions. This is due to the way in which vectors behave under affine transforms. ITK has a
specific class for representing normals and function gradients. This is the itk::CovariantVector
class.
5.3 Mesh
The source code for this example can be found in the file
Examples/DataRepresentation/Mesh/Mesh1.cxx.
The itk::Mesh class is intended to represent shapes in space. It derives from the itk::PointSet
class and hence inherits all the functionality related to points and access to the pixel-data associated
with the points. The mesh class is also n-dimensional, which allows great flexibility in its use.
In practice a Mesh class can be seen as a PointSet to which cells (also known as elements) of many
different dimensions and shapes have been added. Cells in the mesh are defined in terms of the
existing points using their point-identifiers.
In the same way as for the PointSet, two basic styles of Meshes are available in ITK. They are
referred to as static and dynamic. The first one is used when the number of points in the set can be
known in advance and it is not expected to change as a consequence of the manipulations performed
on the set. The dynamic style, on the other hand, is intended to support insertion and removal of
points in an efficient manner. The reason for making the distinction between the two styles is to
facilitate fine tuning its behavior with the aim of optimizing performance and memory management.
In the case of the Mesh, the dynamic/static aspect is extended to the management of cells.
In order to use the Mesh class, its header file should be included.
#include "itkMesh.h"
5.3. Mesh 89
Then, the type associated with the points must be selected and used for instantiating the Mesh type.
typedef float PixelType;
The Mesh type extensively uses the capabilities provided by Generic Programming. In particular
the Mesh class is parameterized over the PixelType and the dimension of the space. PixelType is the
type of the value associated with every point just as is done with the PointSet. The following line
illustrates a typical instantiation of the Mesh.
Meshes are expected to take large amounts of memory. For this reason they are reference counted
objects and are managed using SmartPointers. The following line illustrates how a mesh is
created by invoking the New() method of the MeshType, with the resulting object assigned to an
itk::SmartPointer .
MeshType::Pointer mesh = MeshType::New();
The management of points in the Mesh is exactly the same as in the PointSet. The point type
associated with the mesh can be obtained through the PointType trait. The following code shows
the creation of points compatible with the mesh type defined above and the assignment of values to
its coordinates.
MeshType::PointType p0;
MeshType::PointType p1;
MeshType::PointType p2;
MeshType::PointType p3;
p0[0] = -1.0;
p0[1] = -1.0; // first point ( -1, -1 )
p1[0] = 1.0;
p1[1] = -1.0; // second point ( 1, -1 )
p2[0] = 1.0;
p2[1] = 1.0; // third point ( 1, 1 )
p3[0] = -1.0;
p3[1] = 1.0; // fourth point ( -1, 1 )
The points can now be inserted in the Mesh using the SetPoint() method. Note that points are
copied into the mesh structure. This means that the local instances of the points can now be modified
without affecting the Mesh content.
mesh->SetPoint(0, p0);
mesh->SetPoint(1, p1);
mesh->SetPoint(2, p2);
mesh->SetPoint(3, p3);
The current number of points in the Mesh can be queried with the GetNumberOfPoints() method.
std::cout << "Points = " << mesh->GetNumberOfPoints() << std::endl;
The points can now be efficiently accessed using the Iterator to the PointsContainer as it was done
in the previous section for the PointSet. First, the point iterator type is extracted through the mesh
traits.
typedef MeshType::PointsContainer::Iterator PointsIterator;
A point iterator is initialized to the first point with the Begin() method of the PointsContainer.
PointsIterator pointIterator = mesh->GetPoints()->Begin();
The ++ operator on the iterator is now used to advance from one point to the next. The actual value
of the Point to which the iterator is pointing can be obtained with the Value() method. The loop
for walking through all the points is controlled by comparing the current iterator with the iterator
returned by the End() method of the PointsContainer. The following lines illustrate the typical loop
for walking through the points.
PointsIterator end = mesh->GetPoints()->End();
while (pointIterator != end)
{
MeshType::PointType p = pointIterator.Value(); // access the point
std::cout << p << std::endl; // print the point
++pointIterator; // advance to next point
}
The source code for this example can be found in the file
Examples/DataRepresentation/Mesh/Mesh2.cxx.
A itk::Mesh can contain a variety of cell types. Typical cells are the itk::LineCell ,
itk::TriangleCell , itk::QuadrilateralCell and itk::TetrahedronCell . The latter
will not be used very often in the remote sensing context. Additional flexibility is provided for
managing cells, at the price of a bit more complexity than in the case of point management.
The following code creates a polygonal line in order to illustrate the simplest case of cell manage-
ment in a Mesh. The only cell type used here is the LineCell. The header file of this class has to be
included.
#include "itkLineCell.h"
In order to be consistent with the Mesh, cell types have to be configured with a number of custom
types taken from the mesh traits. The set of traits relevant to cells is packaged by the Mesh class
into the CellType trait. This trait needs to be passed to the actual cell types at the moment of their
instantiation. The following line shows how to extract the Cell traits from the Mesh type.
The LineCell type can now be instantiated using the traits taken from the Mesh.
The main difference in the way cells and points are managed by the Mesh is that points are stored by
copy on the PointsContainer while cells are stored in the CellsContainer using pointers. The reason
for using pointers is that cells rely on C++ polymorphism. This means that the mesh is only
aware of having pointers to a generic cell which is the base class of all the specific cell types. This
architecture makes it possible to combine different cell types in the same mesh. Points, on the other
hand, are of a single type and have a small memory footprint, which makes it efficient to copy them
directly into the container.
Managing cells by pointers adds another level of complexity to the Mesh, since it is now necessary to
establish a protocol to make clear who is responsible for allocating and releasing the cells’ memory.
This protocol is implemented in the form of a specific type of pointer called the CellAutoPointer.
This pointer, based on the itk::AutoPointer , differs in many respects from the SmartPointer.
The CellAutoPointer has an internal pointer to the actual object and a boolean flag that indicates
if the CellAutoPointer is responsible for releasing the cell memory whenever the time comes for
its own destruction. It is said that a CellAutoPointer owns the cell when it is responsible for
its destruction. Many CellAutoPointers can point to the same cell but, at any given time, only one
CellAutoPointer can own the cell.
The CellAutoPointer trait is defined in the MeshType and can be extracted as illustrated in the
following line.
Note that the CellAutoPointer is pointing to a generic cell type. It is not aware of the actual type
of the cell, which can be for example LineCell, TriangleCell or TetrahedronCell. This fact will
influence the way in which we access cells later on.
At this point we can actually create a mesh and insert some points on it.
MeshType::PointType p0;
MeshType::PointType p1;
MeshType::PointType p2;
p0[0] = -1.0;
p0[1] = 0.0;
p1[0] = 1.0;
p1[1] = 0.0;
p2[0] = 1.0;
p2[1] = 1.0;
mesh->SetPoint(0, p0);
mesh->SetPoint(1, p1);
mesh->SetPoint(2, p2);
The following code creates two CellAutoPointers and initializes them with newly created cell ob-
jects. The actual cell type created in this case is LineCell. Note that cells are created with the normal
new C++ operator. The CellAutoPointer takes ownership of the received pointer by using the method
TakeOwnership(). Even though this may seem verbose, it is necessary in order to make it explicit
from the code that the responsibility of memory release is assumed by the AutoPointer.
CellAutoPointer line0;
CellAutoPointer line1;
line0.TakeOwnership(new LineType);
line1.TakeOwnership(new LineType);
The LineCells should now be associated with points in the mesh. This is done using the identifiers
assigned to points when they were inserted in the mesh. Every cell type has a specific number of
points that must be associated with it.4 For example a LineCell requires two points, a TriangleCell
requires three and a TetrahedronCell requires four. Cells use an internal numbering system for
points. It is simply an index in the range {0, . . . , NumberOfPoints − 1}. The association of points and
cells is done by the SetPointId() method which requires the user to provide the internal index of
the point in the cell and the corresponding PointIdentifier in the Mesh. The internal cell index is the
first parameter of SetPointId() while the mesh point-identifier is the second.
4 Some cell types like polygons have a variable number of points associated with them.
Cells are inserted in the mesh using the SetCell() method. It requires an identifier and the Auto-
Pointer to the cell. The Mesh will take ownership of the cell to which the AutoPointer is pointing.
This is done internally by the SetCell() method. In this way, the destruction of the CellAutoPointer
will not induce the destruction of the associated cell.
mesh->SetCell(0, line0);
mesh->SetCell(1, line1);
After serving as an argument of the SetCell() method, a CellAutoPointer no longer holds owner-
ship of the cell. It is important not to use this same CellAutoPointer again as argument to SetCell()
without first securing ownership of another cell.
The number of Cells currently inserted in the mesh can be queried with the GetNumberOfCells()
method.
std::cout << "Cells = " << mesh->GetNumberOfCells() << std::endl;
In a way analogous to points, cells can be accessed using Iterators to the CellsContainer in the mesh.
The trait for the cell iterator can be extracted from the mesh and used to define a local type.
typedef MeshType::CellsContainer::Iterator CellIterator;
Then the iterators to the first and past-end cell in the mesh can be obtained respectively with the
Begin() and End() methods of the CellsContainer. The CellsContainer of the mesh is returned by
the GetCells() method.
CellIterator cellIterator = mesh->GetCells()->Begin();
CellIterator end = mesh->GetCells()->End();
Finally, a standard loop is used to iterate over all the cells. Note the use of the Value() method
to get the actual pointer to the cell from the CellIterator. Note also that the values returned are
pointers to the generic CellType. These pointers have to be down-casted in order to be used as actual
LineCell types. Safe down-casting is performed with the dynamic_cast operator, which will throw
an exception if the conversion cannot be safely performed.
while (cellIterator != end)
{
MeshType::CellType * cellptr = cellIterator.Value();
LineType * line = dynamic_cast<LineType *>(cellptr);
std::cout << line->GetNumberOfPoints() << std::endl;
++cellIterator;
}
The source code for this example can be found in the file
Examples/DataRepresentation/Mesh/Mesh3.cxx.
In the same way that custom data can be associated with points in the mesh, it is also possible to
associate custom data with cells. The type of the data associated with the cells can be different
from the data type associated with points. By default, however, these two types are the same. The
following example illustrates how to access data associated with cells. The approach is analogous
to the one used to access point data.
Consider the example of a mesh containing lines on which values are associated with each line. The
mesh and cell header files should be included first.
#include "itkMesh.h"
#include "itkLineCell.h"
Then the PixelType is defined and the mesh type is instantiated with it.
The itk::LineCell type can now be instantiated using the traits taken from the Mesh.
Let’s now create a Mesh and insert some points into it. Note that the dimension of the points matches
the dimension of the Mesh. Here we insert a sequence of points that look like a plot of the log()
function.
MeshType::Pointer mesh = MeshType::New();
A set of line cells is created and associated with the existing points by using point identifiers. In this
simple case, the point identifiers can be deduced from cell identifiers since the line cells are ordered
in the same way.
CellType::CellAutoPointer line;
const unsigned int numberOfCells = numberOfPoints - 1;
for (unsigned int cellId = 0; cellId < numberOfCells; cellId++)
{
line.TakeOwnership(new LineType);
line->SetPointId(0, cellId); // first point
line->SetPointId(1, cellId + 1); // second point
mesh->SetCell(cellId, line); // insert the cell
}
Data associated with cells is inserted in the itk::Mesh by using the SetCellData() method. It
requires the user to provide an identifier and the value to be inserted. The identifier should match
one of the inserted cells. In this simple example, the square of the cell identifier is used as cell data.
Note the use of static_cast to PixelType in the assignment.
Cell data can be read from the Mesh with the GetCellData() method. It requires the user to provide
the identifier of the cell for which the data is to be retrieved. The user should also provide a valid
pointer to a location where the data can be copied.
Neither SetCellData() nor GetCellData() is an efficient way to access cell data. More efficient
access to cell data can be achieved by using the Iterators built into the CellDataContainer.
Note that the ConstIterator is used here because the data is only going to be read. This approach
is exactly the same already illustrated for getting access to point data. The iterator to the first cell
data item can be obtained with the Begin() method of the CellDataContainer. The past-end iterator
is returned by the End() method. The cell data container itself can be obtained from the mesh with
the method GetCellData().
Finally, a standard loop is used to iterate over all the cell data entries. Note the use of the Value()
method to get the actual value of the data entry. PixelType elements are copied into the local
variable cellValue.
while (cellDataIterator != end)
{
PixelType cellValue = cellDataIterator.Value();
std::cout << cellValue << std::endl;
++cellDataIterator;
}
More details about the use of itk::Mesh can be found in the ITK Software Guide.
5.4 Path
The source code for this example can be found in the file
Examples/DataRepresentation/Path/PolyLineParametricPath1.cxx.
This example illustrates how to use the itk::PolyLineParametricPath . This class will typically
be used for representing in a concise way the output of an image segmentation algorithm in 2D. See
section 14.3 for an example in the context of alignment detection. The PolyLineParametricPath,
however, could also be used for representing any open or closed curve in N dimensions as a
piecewise linear approximation.
First, the header file of the PolyLineParametricPath class must be included.
#include "itkPolyLineParametricPath.h"
path->Initialize();
ContinuousIndexType cindex;
ImagePointType point;
// Physical points are converted to continuous indices before being added
// as vertices of the path.
image->TransformPhysicalPointToContinuousIndex(origin, cindex);
path->AddVertex(cindex);
image->TransformPhysicalPointToContinuousIndex(point, cindex);
path->AddVertex(cindex);
CHAPTER
SIX
Reading and Writing Images
This chapter describes the toolkit architecture supporting reading and writing of images to files.
OTB does not enforce any particular file format; instead, it provides a structure inherited from ITK,
supporting a variety of formats that can be easily extended by the user as new formats become
available.
We begin the chapter with some simple examples of file I/O.
The source code for this example can be found in the file
Examples/IO/ImageReadWrite.cxx.
The classes responsible for reading and writing images are located at the beginning and end of the
data processing pipeline. These classes are known as data sources (readers) and data sinks (writers).
Generally speaking they are referred to as filters, although readers have no pipeline input and writers
have no pipeline output.
The reading of images is managed by the class otb::ImageFileReader while writing is per-
formed by the class otb::ImageFileWriter . These two classes are independent of any particular
file format. The actual low level task of reading and writing specific file formats is done behind the
scenes by a family of classes of type itk::ImageIO . Actually, the OTB image readers and
writers are very similar to those of ITK, but provide new functionality which is specific to remote
sensing images.
The first step for performing reading and writing is to include the following headers.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
Then, as usual, a decision must be made about the type of pixel used to represent the image processed
by the pipeline. Note that when reading and writing images, the pixel type of the image is not
100 Chapter 6. Reading and Writing Images
necessarily the same as the pixel type stored in the file. Your choice of the pixel type (and hence
template parameter) should be driven mainly by two considerations:
• It should be possible to cast the pixel type stored in the file to the pixel type you select. This
casting will be performed using the standard C-language rules, so you will have to make sure
that the conversion does not result in information being lost.
• The pixel type in memory should be appropriate to the type of processing you intend to
apply to the images.
A typical selection for remote sensing images is illustrated in the following lines.
typedef unsigned short PixelType;
const unsigned int Dimension = 2;
typedef otb::Image<PixelType, Dimension> ImageType;
Note that the dimension of the image in memory should match that of the image in the file. There
are a couple of special cases in which this condition may be relaxed, but in general it is better to
ensure that both dimensions match. This is not a real issue in remote sensing, unless you want to
consider multi-band images as volumes (3D) of data.
We can now instantiate the types of the reader and writer. These two classes are parameterized over
the image type.
typedef otb::ImageFileReader<ImageType> ReaderType;
typedef otb::ImageFileWriter<ImageType> WriterType;
Then, we create one object of each type using the New() method and assign the result to an
itk::SmartPointer .
ReaderType::Pointer reader = ReaderType::New();
WriterType::Pointer writer = WriterType::New();
The name of the file to be read or written is passed with the SetFileName() method.
reader->SetFileName(inputFilename);
writer->SetFileName(outputFilename);
We can now connect these readers and writers to filters to create a pipeline. For example, we can
create a short pipeline by passing the output of the reader directly to the input of the writer.
writer->SetInput(reader->GetOutput());
At first glance this may seem like a rather useless program, but it actually implements a powerful file
format conversion tool! The execution of the pipeline is triggered by the invocation of the Update()
methods in one of the final objects. In this case, the final data pipeline object is the writer. It is a
wise practice of defensive programming to insert any Update() call inside a try/catch block in
case exceptions are thrown during the execution of the pipeline.
try
{
writer->Update();
}
catch (itk::ExceptionObject& err)
{
std::cerr << "ExceptionObject caught !" << std::endl;
std::cerr << err << std::endl;
return EXIT_FAILURE;
}
Note that exceptions should only be caught by pieces of code that know what to do with them. In
a typical application this catch block should probably reside on the GUI code. The action on the
catch block could inform the user about the failure of the IO operation.
The IO architecture of the toolkit makes it possible to avoid explicit specification of the file format
used to read or write images.1 The object factory mechanism enables the ImageFileReader and
ImageFileWriter to determine (at run-time) which file format they are working with. Typically, file
formats are chosen based on the filename extension, but the architecture supports arbitrarily complex
processes to determine whether a file can be read or written. Alternatively, the user can specify the
data file format by explicitly instantiating and assigning the appropriate itk::ImageIO subclass.
To better understand the IO architecture, please refer to Figures 6.1, 6.2, and 6.3.
The following section describes the internals of the IO architecture provided in the toolbox.

6.2 Pluggable Factories
The principle behind the input/output mechanism used in ITK and therefore OTB is known
as pluggable-factories [48]. This concept is illustrated in the UML diagram in Figure 6.1.
1 In this example no file format is specified; this program can be used as a general file conversion utility.
[Figure 6.1: UML overview of the pluggable-factory mechanism. Figure 6.2: use cases — factories such as PNGImageIOFactory and MetaImageIOFactory register with the central ImageIOFactory, which the ImageFileReader and ImageFileWriter query by passing the filename (CreateImageIO for reading or writing; each factory answers CanRead? / CanWrite?). Figure 6.3: class diagram — ObjectFactoryBase, with RegisterFactory(factory:ObjectFactoryBase), is the base of ImageIOFactory, which provides RegisterBuiltInFactories() and CreateImageIO(string); registered factories include GDALImageIOFactory, RawImageIOFactory, ONERAImageIOFactory, JPEGImageIOFactory, BMPImageIOFactory, MetaImageIOFactory, and TIFFImageIOFactory.]
From the user’s point of view the objects responsible for reading and writing files are the
otb::ImageFileReader and otb::ImageFileWriter classes. These two classes, however,
are not aware of the details involved in reading or writing particular file formats like PNG or
GeoTIFF. They dispatch the user’s requests to a set of specific classes that are aware of the
details of image file formats. These classes are the itk::ImageIO classes. The ITK delegation
mechanism enables users to extend the number of supported file formats by just adding new classes
to the ImageIO hierarchy.
Each instance of ImageFileReader and ImageFileWriter has a pointer to an ImageIO object. If this
pointer is empty, it will be impossible to read or write an image and the image file reader/writer
must determine which ImageIO class to use to perform IO operations. This is done basically by
passing the filename to a centralized class, the itk::ImageIOFactory , and asking it to identify
any subclass of ImageIO capable of reading or writing the user-specified file. This is illustrated by
the use cases on the right side of Figure 6.2. The ImageIOFactory acts here as a dispatcher that helps
to locate the actual IO factory classes corresponding to each file format.
Each class derived from ImageIO must provide an associated factory class capable of producing an
instance of the ImageIO class. For example, for PNG files, there is an itk::PNGImageIO object
that knows how to read these image files and an itk::PNGImageIOFactory class capable
of constructing a PNGImageIO object and returning a pointer to it. Each time a new file format is
added (i.e., a new ImageIO subclass is created), a factory must be implemented as a derived class of
the ObjectFactoryBase class as illustrated in Figure 6.3.
For example, in order to read PNG files, a PNGImageIOFactory is created and registered with the
central ImageIOFactory singleton2 class as illustrated in the left side of Figure 6.2. When the Im-
ageFileReader asks the ImageIOFactory for an ImageIO capable of reading the file identified with
filename, the ImageIOFactory will iterate over the list of registered factories and will ask each one of
them if they know how to read the file. The factory that responds affirmatively will be used to create
the specific ImageIO instance that will be returned to the ImageFileReader and used to perform the
read operations.
With respect to the ITK formats, OTB adds most of the remote sensing image formats. In order to
do so, the Geospatial Data Abstraction Library, GDAL http://www.gdal.org/, is encapsulated
in an ImageIO factory. GDAL is a translator library for raster geospatial data formats that is released
under an X/MIT style Open Source license. As a library, it presents a single abstract data model to
the calling application for all supported formats, which include CEOS, GeoTIFF, ENVI, and many
more. See http://www.gdal.org/formats_list.html for the full format list.
Since GDAL is itself a multi-format library, the GDAL IO factory is able to choose the appropriate
resource for reading and writing images.
In most cases the mechanism is transparent to the user who only interacts with the ImageFileReader
and ImageFileWriter. It is possible, however, to explicitly select the type of ImageIO object to use.
Please see the ITK Software Guide for more details about this.
2 Singleton means that there is only one instance of this class in a particular application.
6.3 IO Streaming
The source code for this example can be found in the file
Examples/IO/StreamingImageReadWrite.cxx.
As we have seen, the reading of images is managed by the class otb::ImageFileReader while
writing is performed by the class otb::ImageFileWriter . ITK’s pipeline implements streaming.
This means that a filter for which the ThreadedGenerateData method is implemented will only
produce the data for the region requested by the following filter in the pipeline. Therefore, in order
to use the streaming functionality one needs to use, at the end of the pipeline, a filter which requests
adjacent regions of the image to be processed. In ITK, the itk::StreamingImageFilter class
is used for this purpose. However, ITK does not implement streaming from/to files. This means that
even if the pipeline has a small memory footprint, the images have to be stored in memory at least
after the read operation and before the write operation.
OTB implements read/write streaming. For the image file reading, this is transparent for the pro-
grammer, and if a streaming loop is used at the end of the pipeline, the read operation will be
streamed. For the file writing, the otb::ImageFileWriter has to be used.
The first step for performing streamed reading and writing is to include the following headers.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
Then, as usual, a decision must be made about the type of pixel used to represent the image processed
by the pipeline.
We can now instantiate the types of the reader and writer. These two classes are parameterized over
the image type. We will rescale the intensities of the image as an example of an intermediate
processing step.
Then, we create one object of each type using the New() method and assigning the result to a
itk::SmartPointer .
ReaderType::Pointer reader = ReaderType::New();
RescalerType::Pointer rescaler = RescalerType::New();
WriterType::Pointer writer = WriterType::New();
The name of the file to be read or written is passed with the SetFileName() method. We also choose
the range of intensities for the rescaler.
reader->SetFileName(inputFilename);
rescaler->SetOutputMinimum(0);
rescaler->SetOutputMaximum(255);
writer->SetFileName(outputFilename);
We can now connect these readers and writers to filters to create a pipeline.
rescaler->SetInput(reader->GetOutput());
writer->SetInput(rescaler->GetOutput());
We can now trigger the pipeline execution by calling the Update method on the writer.
writer->Update();
The writer will ask its preceding filter to provide different portions of the image. Each filter in the
pipeline will do the same until the request arrives to the reader. In this way, the pipeline will be
executed for each requested region and the whole input image will be read, processed and written
without being fully loaded in memory.
The source code for this example can be found in the file
Examples/IO/ExplicitStreamingExample.cxx.
Usually, the streaming process is hidden within the pipeline. This relieves the user of the tedious
task of splitting the images into tiles, and so on. However, for some kinds of processing, we do not
really need a pipeline: no writer is needed, only read access to pixel values. In these cases, one has
to explicitly set up the streaming procedure. Fortunately, OTB offers a high level of abstraction for
this task. We will need to include the following header file:
#include "otbRAMDrivenAdaptativeStreamingManager.h"
Once a region of the image is available, we will use classical region iterators to get the pixels.
We instantiate the image file reader but, in order to avoid reading the whole image, we
call the GenerateOutputInformation() method instead of the Update() one.
GenerateOutputInformation() makes available the information about sizes, bands, resolutions,
etc. After that, we can access the largest possible region of the input image.
ImageReaderType::Pointer reader = ImageReaderType::New();
reader->SetFileName(infname);
reader->GenerateOutputInformation();
We now set up the local streaming capabilities by asking the streaming traits to compute the number
of regions to split the image into, given the splitter, the user-defined number of lines, and the input
image information.
SplitterType::Pointer splitter = SplitterType::New();
unsigned int numberOfStreamDivisions =
StreamingTraitsType::CalculateNumberOfStreamDivisions(
reader->GetOutput(),
largestRegion,
splitter,
otb::SET_BUFFER_NUMBER_OF_LINES,
0, 0, nbLinesForStreaming);
We can now get the split regions and iterate through them.
unsigned int piece = 0;
RegionType streamingRegion;
for (piece = 0;
piece < numberOfStreamDivisions;
piece++)
{
// retrieve the current split region and make it the requested region
streamingRegion =
splitter->GetSplit(piece, numberOfStreamDivisions, largestRegion);
reader->GetOutput()->SetRequestedRegion(streamingRegion);
reader->GetOutput()->PropagateRequestedRegion();
reader->GetOutput()->UpdateOutputData();
// a classical region iterator (e.g. itk::ImageRegionConstIterator) over the strip
IteratorType it(reader->GetOutput(), streamingRegion);
it.GoToBegin();
while (!it.IsAtEnd())
{
std::cout << it.Get() << std::endl;
++it;
}
}
The source code for this example can be found in the file
Examples/IO/RGBImageReadWrite.cxx.
RGB images are commonly used for representing data acquired from multispectral sensors. This
example illustrates how to read and write RGB color images to and from a file. This requires the
following headers as shown.
#include "itkRGBPixel.h"
#include "otbImage.h"
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
The itk::RGBPixel class is templated over the type used to represent each one of the red, green
and blue components. A typical instantiation of the RGB image class might be as follows.
typedef itk::RGBPixel<unsigned char> PixelType;
typedef otb::Image<PixelType, 2> ImageType;
The image type is used as a template parameter to instantiate the reader and writer.
typedef otb::ImageFileReader<ImageType> ReaderType;
typedef otb::ImageFileWriter<ImageType> WriterType;
The filenames of the input and output files must be provided to the reader and writer respectively.
reader->SetFileName(inputFilename);
writer->SetFileName(outputFilename);
Finally, execution of the pipeline can be triggered by invoking the Update() method in the writer.
writer->Update();
You may have noticed that, apart from the declaration of the PixelType, there is nothing in this code
that is specific to RGB images. All the actions required to support color images are implemented
internally in the itk::ImageIO objects.
6.5 Reading, Casting and Writing Images
The source code for this example can be found in the file
Examples/IO/ImageReadCastWrite.cxx.
Given that ITK and OTB are based on the Generic Programming paradigm, most of the types are
defined at compilation time. It is sometimes important to anticipate conversion between different
types of images. The following example illustrates the common case of reading an image of one
pixel type and writing it on a different pixel type. This process not only involves casting but also
rescaling the image intensity since the dynamic range of the input and output pixel types can be
quite different. The itk::RescaleIntensityImageFilter is used here to linearly rescale the
image values.
The first step in this example is to include the appropriate headers.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "itkUnaryFunctorImageFilter.h"
#include "itkRescaleIntensityImageFilter.h"
Then, as usual, a decision should be made about the pixel type that should be used to represent the
images. Note that when reading an image, this pixel type is not necessarily the pixel type of the
image stored in the file. Instead, it is the type that will be used to store the image as soon as it is read
into memory.
typedef float InputPixelType;
typedef unsigned char OutputPixelType;
const unsigned int Dimension = 2;
We can now instantiate the types of the reader and writer. These two classes are parameterized over
the image type.
Below we instantiate the RescaleIntensityImageFilter class that will linearly scale the image inten-
sities.
typedef itk::RescaleIntensityImageFilter<
InputImageType,
OutputImageType> FilterType;
A filter object is constructed and the minimum and maximum values of the output are selected using
the SetOutputMinimum() and SetOutputMaximum() methods.
Then, we create the reader and writer and connect the pipeline.
filter->SetInput(reader->GetOutput());
writer->SetInput(filter->GetOutput());
The names of the files to be read and written are passed with the SetFileName() method.
reader->SetFileName(inputFilename);
writer->SetFileName(outputFilename);
Finally we trigger the execution of the pipeline with the Update() method on the writer. The output
image will then be the scaled and cast version of the input image.
try
{
writer->Update();
}
catch (itk::ExceptionObject& err)
{
std::cerr << "ExceptionObject caught !" << std::endl;
std::cerr << err << std::endl;
return EXIT_FAILURE;
}
The source code for this example can be found in the file
Examples/IO/ImageReadRegionOfInterestWrite.cxx.
This example should arguably be placed in the filtering chapter. However, its usefulness for typical
IO operations makes it interesting to mention here. The purpose of this example is to read an
image, extract a subregion and write this subregion to a file. This is a common task when we want
to apply a computationally intensive method to the region of interest of an image.
As usual with OTB IO, we begin by including the appropriate header files.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
The otb::ExtractROI is the filter used to extract a region from an image. Its header is included
below.
#include "otbExtractROI.h"
The types for the otb::ImageFileReader and otb::ImageFileWriter are instantiated using
the image types.
typedef otb::ImageFileReader<InputImageType> ReaderType;
typedef otb::ImageFileWriter<OutputImageType> WriterType;
The ExtractROI type is instantiated using the input and output pixel types. Using the pixel
types as template parameters instead of the image types restricts the use of this class to
otb::Image s which are used with scalar pixel types. See section 6.8.1 for the extraction of ROIs
on otb::VectorImage s. A filter object is created with the New() method and assigned to an
itk::SmartPointer .
typedef otb::ExtractROI<InputImageType::PixelType,
OutputImageType::PixelType> FilterType;
The ExtractROI requires a region to be defined by the user. This is done by defining a rectangle
with the following methods (the filter assumes that a 2D image is being processed, for N-D region
extraction, you can use the itk::RegionOfInterestImageFilter class).
filter->SetStartX(atoi(argv[3]));
filter->SetStartY(atoi(argv[4]));
filter->SetSizeX(atoi(argv[5]));
filter->SetSizeY(atoi(argv[6]));
Below, we create the reader and writer using the New() method and assigning the result to a Smart-
Pointer.
ReaderType::Pointer reader = ReaderType::New();
WriterType::Pointer writer = WriterType::New();
The name of the file to be read or written is passed with the SetFileName() method.
reader->SetFileName(inputFilename);
writer->SetFileName(outputFilename);
Below we connect the reader, filter and writer to form the data processing pipeline.
filter->SetInput(reader->GetOutput());
writer->SetInput(filter->GetOutput());
Finally we execute the pipeline by invoking Update() on the writer. The call is placed in a try/catch
block in case exceptions are thrown.
try
{
writer->Update();
}
catch (itk::ExceptionObject& err)
{
std::cerr << "ExceptionObject caught !" << std::endl;
std::cerr << err << std::endl;
return EXIT_FAILURE;
}
6.7 Reading and Writing Vector Images
Images whose pixel type is a Vector, a CovariantVector, an Array, or a Complex are quite common
in image processing. One of the uses of these types of images is the processing of SLC SAR images,
which are complex.
The source code for this example can be found in the file
Examples/IO/ComplexImageReadWrite.cxx.
This example illustrates how to read and write an image of pixel type std::complex. The complex
type is defined as an integral part of the C++ language.
We start by including the headers of the complex class, the image, and the reader and writer classes.
#include <complex>
#include "otbImage.h"
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
The image dimension and pixel type must be declared. In this case we use std::complex<> as
the pixel type. Using the dimension and pixel type, we proceed to instantiate the image type.
The image file reader and writer types are instantiated using the image type. We can then create
objects for both of them.
Filenames should be provided for both the reader and the writer. In this particular example we take
those filenames from the command line arguments.
reader->SetFileName(argv[1]);
writer->SetFileName(argv[2]);
Here we simply connect the output of the reader as input to the writer. This simple program could
be used for converting complex images from one file format to another.
writer->SetInput(reader->GetOutput());
The execution of this short pipeline is triggered by invoking the Update() method of the writer. This
invocation must be placed inside a try/catch block since its execution may result in exceptions being
thrown.
try
{
writer->Update();
}
catch (itk::ExceptionObject& err)
{
std::cerr << "ExceptionObject caught !" << std::endl;
std::cerr << err << std::endl;
return EXIT_FAILURE;
}
For a more interesting use of this code, you may want to add a filter in between the reader and the
writer and perform any complex image to complex image operation.
6.8 Reading and Writing Multiband Images
The source code for this example can be found in the file
Examples/IO/MultibandImageReadWrite.cxx.
The otb::Image class with a vector pixel type could be used for representing multispectral images,
with one band per vector component. However, this is not practical, since the dimensionality
of the vector must be known at compile time. OTB offers the otb::VectorImage , where the
dimensionality of the vector stored for each pixel can be chosen at runtime. This is needed for the
image file readers in order to dynamically set the number of bands of an image read from a file.
The OTB Readers and Writers are able to deal with otb::VectorImage s transparently for the
user.
The first step for performing reading and writing is to include the following headers.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
Then, as usual, a decision must be made about the type of pixel used to represent the image processed
by the pipeline. The pixel type corresponds to the scalar type stored in the vector components.
Therefore, for a multiband Pléiades image we will do:
We can now instantiate the types of the reader and writer. These two classes are parameterized over
the image type.
Then, we create one object of each type using the New() method and assigning the result to a
itk::SmartPointer .
ReaderType::Pointer reader = ReaderType::New();
WriterType::Pointer writer = WriterType::New();
The name of the file to be read or written is passed with the SetFileName() method.
reader->SetFileName(inputFilename);
writer->SetFileName(outputFilename);
We can now connect these readers and writers to filters to create a pipeline. The only thing to take
care of is choosing, when executing the program, an output image file format which supports
multiband images.
writer->SetInput(reader->GetOutput());
try
{
writer->Update();
}
catch (itk::ExceptionObject& err)
{
std::cerr << "ExceptionObject caught !" << std::endl;
std::cerr << err << std::endl;
return EXIT_FAILURE;
}
The source code for this example can be found in the file
Examples/IO/ExtractROI.cxx.
This example shows the use of the otb::MultiChannelExtractROI and
otb::MultiToMonoChannelExtractROI classes, which allow the extraction of ROIs from multi-
band images stored in otb::VectorImage s. The first one provides a VectorImage as output,
while the second one provides a classical otb::Image with a scalar pixel type. The present
example shows how to extract a ROI from a 4-band SPOT 5 image, producing a 3-channel
multiband image and a mono-channel image for the SWIR band.
We start by including the needed header files.
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "otbMultiChannelExtractROI.h"
#include "otbMultiToMonoChannelExtractROI.h"
The program arguments define the image file names as well as the rectangular area to be extracted.
First of all, we extract the multiband part by using the otb::MultiChannelExtractROI class,
which is templated over the input and output pixel types. This class is not templated over the image
types in order to force these images to be of otb::VectorImage type.
typedef otb::MultiChannelExtractROI<InputPixelType,
OutputPixelType> ExtractROIFilterType;
We create the extractor filter by using the New method of the class and we set its parameters.
extractROIFilter->SetStartX(startX);
extractROIFilter->SetStartY(startY);
extractROIFilter->SetSizeX(sizeX);
extractROIFilter->SetSizeY(sizeY);
We must tell the filter which channels are to be used. When selecting contiguous bands, we can
use the SetFirstChannel() and SetLastChannel() methods. Otherwise, we select individual channels
by using the SetChannel() method.
extractROIFilter->SetFirstChannel(1);
extractROIFilter->SetLastChannel(3);
We will use the OTB readers and writers for file access.
Since the number of bands of the input image is dynamically set at runtime, the
UpdateOutputInformation method of the reader must be called before using the extractor filter.
reader->SetFileName(inputFilename);
reader->UpdateOutputInformation();
writer->SetFileName(outputFilenameRGB);
extractROIFilter->SetInput(reader->GetOutput());
writer->SetInput(extractROIFilter->GetOutput());
And execute the pipeline by calling the Update method of the writer.
writer->Update();
typedef otb::MultiToMonoChannelExtractROI<InputPixelType,
OutputPixelType>
ExtractROIMonoFilterType;
extractROIMonoFilter->SetChannel(4);
Figure 6.5 illustrates the result of the application of both extraction filters on the image presented in
figure 6.4.
Figure 6.5: Result of the extraction. Left: 3-channel image. Right: mono-band image.
6.9 Reading Image Series
The source code for this example can be found in the file
Examples/IO/ImageSeriesIOExample.cxx.
This example shows how to read a list of images and concatenate them into a vector image. We
will write a program which is able to perform this operation taking advantage of the streaming
functionalities of the processing pipeline. We will assume that all the input images have the same
size and a single band.
The following header files will be needed:
#include "otbImage.h"
#include "otbVectorImage.h"
#include "otbImageFileReader.h"
#include "otbImageList.h"
#include "otbImageListToVectorImageFilter.h"
#include "otbImageFileWriter.h"
We will start by defining the types for the input images and the associated readers.
We will use a list of image file readers in order to open all the input images at once. For this, we use
the otb::ObjectList object and we template it over the type of the readers.
We will also build a list of input images in order to store the smart pointers obtained at the output of
each reader. This allows us to build a pipeline without actually reading the images, which would
require a lot of RAM. The otb::ImageList object will be used.
We can now loop over the input image list in order to populate the reader list and the input image
list.
imageReader->SetFileName(argv[i + 2]);
imageReader->UpdateOutputInformation();
imageList->PushBack(imageReader->GetOutput());
readerList->PushBack(imageReader);
All the input images will be concatenated into a single output vector image. For this matter, we
will use the otb::ImageListToVectorImageFilter which is templated over the input image
list type and the output vector image type.
ImageListToVectorImageFilterType::Pointer iL2VI =
ImageListToVectorImageFilterType::New();
We plug the image list as input of the filter and use a otb::ImageFileWriter to write the result
image to a file, so that the streaming capabilities of all the readers and the filter are used.
iL2VI->SetInput(imageList);
imageWriter->SetFileName(argv[1]);
We can tune the size of the image tiles, so that the total memory footprint of the pipeline is constant
for any execution of the program.
imageWriter->SetAutomaticTiledStreaming(memoryConsumptionInMB);
imageWriter->SetInput(iL2VI->GetOutput());
imageWriter->Update();
CHAPTER
SEVEN
Reading and Writing Auxiliary Data
As we have seen in the previous chapter, OTB has extensive capabilities to read and process images.
However, images are not the only type of data we will need to manipulate. Images are characterized
by a regular sampling grid. For some data, such as Digital Elevation Models (DEM) or Lidar, this is
too restrictive and we need other representations.
Vector data are also used to represent cartographic objects, segmentation results, etc.: basically,
everything which can be seen as points, lines or polygons. OTB provides functionalities for
accessing this kind of data.
The source code for this example can be found in the file
Examples/IO/DEMToImageGenerator.cxx.
The following example illustrates the use of the otb::DEMToImageGenerator class. The aim
of this class is to generate an image from SRTM data, given the latitude and longitude of the
extraction starting point. Each pixel is a geographic point and its intensity is the altitude of the
point. If the SRTM data contain no altitude information for a point, the altitude value is set to
-32768 (the no-data value of the SRTM standard).
Let’s look at the minimal code required to use this algorithm. First, the following header defining
the otb::DEMToImageGenerator class must be included.
#include "otbDEMToImageGenerator.h"
The image type is now defined using pixel type and dimension. The output image is defined as an
otb::Image .
The DEMToImageGenerator is defined using the image pixel type as a template parameter. After
that, the object can be instantiated.
typedef otb::DEMToImageGenerator<ImageType> DEMToImageGeneratorType;
Input parameter types are defined to set the value in the otb::DEMToImageGenerator .
typedef DEMToImageGeneratorType::SizeType SizeType;
typedef DEMToImageGeneratorType::SpacingType SpacingType;
typedef DEMToImageGeneratorType::PointType PointType;
The origin (Longitude/Latitude) of the output image in the DEM is given to the filter.
PointType origin;
origin[0] = ::atof(argv[3]);
origin[1] = ::atof(argv[4]);
object->SetOutputOrigin(origin);
The size (in pixels) of the output image is given to the filter.
SizeType size;
size[0] = ::atoi(argv[5]);
size[1] = ::atoi(argv[6]);
object->SetOutputSize(size);
The spacing (step between two consecutive pixels) is given to the filter. By default, this spacing is
set to 0.001.
SpacingType spacing;
spacing[0] = ::atof(argv[7]);
spacing[1] = ::atof(argv[8]);
object->SetOutputSpacing(spacing);
The output image name is given to the writer and the filter output is linked to the writer input.
writer->SetFileName(outputName);
writer->SetInput(object->GetOutput());
The invocation of the Update() method on the writer triggers the execution of the pipeline. It is
recommended to place update calls in a try/catch block in case errors occur and exceptions are
thrown.
try
{
writer->Update();
}
Let’s now run this example using as input the SRTM data contained in the DEM srtm folder. Figure
7.1 shows the obtained DEM. Invalid data values – hidden areas due to SAR shadowing – are set to
zero.
7.2 Elevation management with OTB
The source code for this example can be found in the file
Examples/IO/DEMHandlerExample.cxx.
OTB relies on OSSIM for elevation handling. Since release 3.16, there is a single configuration
class, otb::DEMHandler , to manage elevation (in image projections or localization functions, for
example). This configuration is managed by a proper instantiation and parameter setting of
this class. These instantiations must be done before any call to geometric filters or functionalities.
OSSIM internal accesses to elevation are also configured by this class, which ensures consistency
throughout the library.
This class is a singleton; the New() method is deprecated and will be removed in a future release.
We need to use the Instance() method instead.
otb::DEMHandler::Pointer demHandler = otb::DEMHandler::Instance();
It allows configuring a directory containing DEM tiles (DTED or SRTM supported) us-
ing the OpenDEMDirectory() method. The OpenGeoidFile() method allows inputting
a geoid file as well. Lastly, a default height above ellipsoid can be set using the
SetDefaultHeightAboveEllipsoid() method.
demHandler->SetDefaultHeightAboveEllipsoid(defaultHeight);
if(!demHandler->IsValidDEMDirectory(demdir.c_str()))
{
std::cerr<<"IsValidDEMDirectory("<<demdir<<") = false"<<std::endl;
fail = true;
}
demHandler->OpenDEMDirectory(demdir);
demHandler->OpenGeoidFile(geoid);
We can now retrieve height above ellipsoid or height above Mean Sea Level (MSL) using the meth-
ods GetHeightAboveEllipsoid() and GetHeightAboveMSL(). Outputs of these methods depend
on the configuration of the class otb::DEMHandler and the different cases are:
For GetHeightAboveEllipsoid():
For GetHeightAboveMSL():
otb::DEMHandler::PointType point;
point[0] = longitude;
point[1] = latitude;
height = demHandler->GetHeightAboveMSL(point);
std::cout<<"height above MSL ("<<longitude<<","
<<latitude<<") = "<<height<<" meters"<<std::endl;
height = demHandler->GetHeightAboveEllipsoid(point);
std::cout<<"height above ellipsoid ("<<longitude
<<", "<<latitude<<") = "<<height<<" meters"<<std::endl;
Note that OSSIM internal calls for sensor modelling use the height above ellipsoid, and follow the
same logic as the GetHeightAboveEllipsoid() method.
More examples about representing DEM are presented in section 23.1.4.
The source code for this example can be found in the file
Examples/IO/VectorDataIOExample.cxx.
Unfortunately, many vector data formats do not share the models for the data they represent. How-
ever, in some cases, when simple data are stored, they can be decomposed into simple objects, as
for instance polylines, polygons and points. This is the case for the Shapefile and the KML
(Keyhole Markup Language) formats, for instance.
Even though specific readers/writers for Shapefile and Google KML are available in OTB, we
designed a generic approach for the IO of this kind of data.
The reader/writer for VectorData in OTB is able to access a variety of vector file formats (all OGR
supported formats).
In section 11.4, you will find more information on how projections work for the vector data and how
you can export the results obtained with OTB to the real world.
This example illustrates the use of OTB’s vector data IO framework.
We will start by including the header files for the classes describing the vector data and the corre-
sponding reader and writer.
#include "otbVectorData.h"
#include "otbVectorDataFileReader.h"
#include "otbVectorDataFileWriter.h"
We will also need to include the header files for the classes which model the individual objects that
we get from the vector data structure.
#include "itkPreOrderTreeIterator.h"
#include "otbObjectList.h"
#include "otbPolygon.h"
We define the types for the vector data structure and the co