Well, that’s it, you’ve just downloaded and installed OTB, lured by the promise that you will be able to do everything with it. That’s true, you will be able to do everything but - there is always a but - some effort is required.
OTB uses the very powerful system of generic programming: many classes are already available and some powerful tools are defined to help you with recurrent tasks, but it is not an easy world to enter.
These tutorials are designed to help you enter this world and grasp the logic behind OTB. Each of these tutorials should not take more than 10 minutes (typing included) and each is designed to highlight a specific point. You may not need the later tutorials, but it is strongly advised to go through the first few, which cover the basics you’ll use almost everywhere.
Let’s start with the typical Hello World program. We are going to compile this C++ program and link it against your new OTB.
First, create a new folder to hold your new programs (all the examples from this tutorial) and go into this folder.
Since all programs using OTB are built with the CMake system, we need to create a CMakeLists.txt file that will be used by CMake to compile our program. An example of this file can be found in the OTB/Examples/Tutorials directory. The CMakeLists.txt will be very similar from one project to another.
Open the CMakeLists.txt file and write in the following few lines:
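A minimal sketch of such a file is given below. The general structure matches the description that follows, but the exact variable and library names (USE_OTB_FILE, OTBCommon, OTBIO) are assumptions that may differ with your OTB version, so check the CMakeLists.txt shipped in OTB/Examples/Tutorials.

    PROJECT(Tutorials)

    FIND_PACKAGE(OTB)
    IF(OTB_FOUND)
      # Load the OTB configuration (include paths, libraries, ...).
      INCLUDE(${USE_OTB_FILE})
    ELSE(OTB_FOUND)
      MESSAGE(FATAL_ERROR "OTB not found. Please set OTB_DIR.")
    ENDIF(OTB_FOUND)

    # The executable to build and the source files it is made of.
    ADD_EXECUTABLE(HelloWorldOTB HelloWorldOTB.cxx)

    # The OTB libraries to link against (names may vary with the version).
    TARGET_LINK_LIBRARIES(HelloWorldOTB OTBCommon OTBIO)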
The first line defines the name of your project as it appears in Visual Studio (it will have no effect under UNIX or Linux). The second line loads a CMake file with a predefined strategy for finding OTB. If this strategy fails, CMake will prompt you for the directory where OTB is installed on your system; in that case, you will set this information in the OTB_DIR variable. The line INCLUDE(${USE_OTB_FILE}) loads the UseOTB.cmake file, which sets all the configuration information from OTB.
The line ADD_EXECUTABLE defines as its first argument the name of the executable that will be produced as a result of this project. The remaining arguments of ADD_EXECUTABLE are the names of the source files to be compiled and linked. Finally, the TARGET_LINK_LIBRARIES line specifies which OTB libraries the project will be linked against.
The source code for this example can be found in the file
Examples/Tutorials/HelloWorldOTB.cxx.
The following code is an implementation of a small OTB program. It tests including header files and linking with OTB libraries.
This code instantiates an image whose pixels are represented with type unsigned short. The image is then created and assigned to an itk::SmartPointer. Later in the text we will discuss SmartPointers in detail; for now, think of it as a handle on an instance of an object (see section 3.2.4 for more information).
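The listing below is a minimal sketch of what HelloWorldOTB.cxx amounts to; the file shipped in Examples/Tutorials is the reference version.

    // Minimal OTB program: it only checks that the OTB headers are found
    // and that the program links correctly against the OTB libraries.
    #include "otbImage.h"

    #include <cstdlib>
    #include <iostream>

    int main()
    {
      // A 2D image whose pixels are stored as unsigned short.
      typedef otb::Image<unsigned short, 2> ImageType;

      // Create the image and keep it in a smart pointer.
      ImageType::Pointer image = ImageType::New();

      std::cout << "OTB Hello World !" << std::endl;

      return EXIT_SUCCESS;
    }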
Once the file is written, run ccmake on the current directory (that is, ccmake ./ under Linux/Unix). If OTB is installed in a non-standard place, you will have to tell CMake where it is. Once you’re done with CMake (you shouldn’t have to do it again), run make.
You finally have your program. When you run it, OTB Hello World ! will be printed.
Ok, well done! You’ve just compiled and executed your first OTB program. Actually, using OTB for that is not very useful, and we doubt that you downloaded OTB only to do that. It’s time to move on to a more advanced level.
Create a directory (with write access) where you will store your work (for example C:\path\to\MyFirstCode). Organize your repository as follows:
Then follow these steps:
OTB is designed to read images, process them and write them to disk or view the result. In this tutorial, we are going to see how to read and write images and the basics of the pipeline system.
First, let’s add the following lines at the end of the CMakeLists.txt file:
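The lines to add follow the same pattern as in the Hello World example: declare the new executable and link it against OTB (the library names below are the same assumption as before and may need adjusting to your OTB version).

    ADD_EXECUTABLE(Pipeline Pipeline.cxx)
    TARGET_LINK_LIBRARIES(Pipeline OTBCommon OTBIO)

The same two-line pattern, with the corresponding executable name, is what the later tutorials (FilteringPipeline, ScalingPipeline, Multispectral, SmarterFilteringPipeline, OrthoFusion) will ask you to add as well.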
Now, create a Pipeline.cxx file.
The source code for this example can be found in the file
Examples/Tutorials/Pipeline.cxx.
Start by including some necessary headers and with the usual main declaration:
We declare the image as an otb::Image: the pixel type is declared as unsigned char (one byte) and the image is specified as having two dimensions.
To read the image, we need an otb::ImageFileReader which is templated with the image type.
Then, we need an otb::ImageFileWriter also templated with the image type.
The filenames are passed as arguments to the program. We keep it simple for now and we don’t check their validity.
Now that we have all the elements, we connect the pipeline, plugging the output of the reader into the input of the writer.
And finally, we trigger the pipeline execution by calling the Update() method on the last element of the pipeline. The last element will make sure that all previous elements of the pipeline are updated.
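Put together, a sketch of Pipeline.cxx looks like this (the file in Examples/Tutorials is the reference version):

    #include <cstdlib>

    #include "otbImage.h"
    #include "otbImageFileReader.h"
    #include "otbImageFileWriter.h"

    int main(int argc, char* argv[])
    {
      // 2D image with one byte per pixel.
      typedef otb::Image<unsigned char, 2>    ImageType;
      typedef otb::ImageFileReader<ImageType> ReaderType;
      typedef otb::ImageFileWriter<ImageType> WriterType;

      ReaderType::Pointer reader = ReaderType::New();
      WriterType::Pointer writer = WriterType::New();

      // Filenames come straight from the command line, without any check.
      reader->SetFileName(argv[1]);
      writer->SetFileName(argv[2]);

      // Plug the output of the reader into the input of the writer.
      writer->SetInput(reader->GetOutput());

      // Trigger the execution of the whole pipeline.
      writer->Update();

      return EXIT_SUCCESS;
    }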
Once this file is written you just have to run make. The ccmake call is not required anymore.
Get one image from the OTB-Data/Examples directory of the OTB-Data repository. You can get it by cloning the OTB data repository (git clone https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb-data.git), but that might take a while as it also fetches the data needed to run the tests. Alternatively, you can get it from http://www.orfeo-toolbox.org/packages/OTB-Data-Examples.tgz. Take for example QB_Suburb.png.
Now, run your new program as Pipeline QB_Suburb.png output.png. You obtain the file output.png which is the same image as QB_Suburb.png. When you triggered the Update() method, OTB opened the original image and wrote it back under another name.
Well…that’s nice but a bit complicated for a copy program!
Wait a minute! We didn’t specify the file format anywhere! Let’s try Pipeline QB_Suburb.png output.jpg. And voila! The output image is a jpeg file.
That’s starting to be a bit more interesting: this is not just a program to copy image files, but also to convert between image formats.
You have just experienced the pipeline structure which executes the filters only when needed and the automatic image format detection.
Now it’s time to do some processing in between.
We are now going to insert a simple filter to do some processing between the reader and the writer.
Let’s first add the following two lines to the CMakeLists.txt file:
The source code for this example can be found in the file
Examples/Tutorials/FilteringPipeline.cxx.
We are going to use the itk::GradientMagnitudeImageFilter to compute the gradient of the image. The beginning of the file is similar to Pipeline.cxx.
We include the required headers, without forgetting to add the header for the itk::GradientMagnitudeImageFilter .
We declare the image type, the reader and the writer as before:
Now we have to declare the filter. It is templated with the input image type and the output image type like many filters in OTB. Here we are using the same type for the input and the output images:
Let’s plug the pipeline:
And finally, we trigger the pipeline execution by calling the Update() method on the writer.
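A condensed sketch of what FilteringPipeline.cxx amounts to (the shipped example is the reference):

    #include <cstdlib>

    #include "otbImage.h"
    #include "otbImageFileReader.h"
    #include "otbImageFileWriter.h"
    #include "itkGradientMagnitudeImageFilter.h"

    int main(int argc, char* argv[])
    {
      typedef otb::Image<unsigned char, 2>    ImageType;
      typedef otb::ImageFileReader<ImageType> ReaderType;
      typedef otb::ImageFileWriter<ImageType> WriterType;

      // The filter is templated over the input and output image types;
      // here both are the same.
      typedef itk::GradientMagnitudeImageFilter<ImageType, ImageType> FilterType;

      ReaderType::Pointer reader = ReaderType::New();
      FilterType::Pointer filter = FilterType::New();
      WriterType::Pointer writer = WriterType::New();

      reader->SetFileName(argv[1]);
      writer->SetFileName(argv[2]);

      // Reader -> filter -> writer.
      filter->SetInput(reader->GetOutput());
      writer->SetInput(filter->GetOutput());

      // Trigger the pipeline execution from its last element.
      writer->Update();

      return EXIT_SUCCESS;
    }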
Compile with make and execute as FilteringPipeline QB_Suburb.png output.png.
You have the filtered version of your image in the output.png file.
Now, you can practice a bit and try to replace the filter with one of the 150+ filters which inherit from the itk::ImageToImageFilter class. You will definitely find some useful filters here!
If you tried some other filter in the previous example, you may have noticed that in some cases it does not make sense to save the output directly as an integer. This is the case for the itk::CannyEdgeDetectionImageFilter: if you tried to use it directly in the previous example, you got a warning about converting to unsigned char from double.
The output of the Canny edge detection is a floating point number. A simple solution would be to use double as the pixel type. Unfortunately, most image formats use integer types, so you should convert the result to an integer image if you still want to visualize your images with your usual viewer (we will see in a later tutorial how you can avoid that using the built-in viewer).
To perform this conversion, we will use the itk::RescaleIntensityImageFilter.
Add the following two lines to the CMakeLists.txt file:
The source code for this example can be found in the file
Examples/Tutorials/ScalingPipeline.cxx.
This example illustrates the use of the itk::RescaleIntensityImageFilter to convert the result for proper display.
We include the required headers, without forgetting the ones for the itk::CannyEdgeDetectionImageFilter and the itk::RescaleIntensityImageFilter.
We need to declare two different image types, one for the internal processing and one to output the results:
We declare the reader with the image templated over the pixel type double. It is worth noting that this instantiation does not imply anything about the type of the input image: the original image can be anything, the reader will just convert the result to double.
The writer is templated with the unsigned char image to be able to save the result on one byte images (like png for example).
Now we are declaring the edge detection filter which is going to work with double input and output.
Here comes the interesting part: we declare the itk::RescaleIntensityImageFilter . The input image type is the output type of the edge detection filter. The output type is the same as the input type of the writer.
Desired minimum and maximum values for the output are specified by the methods SetOutputMinimum() and SetOutputMaximum().
This filter will actually rescale all the pixels of the image but also cast the type of these pixels.
Let’s plug the pipeline:
And finally, we trigger the pipeline execution by calling the Update() method on the writer.
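For reference, a sketch of the whole ScalingPipeline.cxx (the shipped example is the reference version):

    #include <cstdlib>

    #include "otbImage.h"
    #include "otbImageFileReader.h"
    #include "otbImageFileWriter.h"
    #include "itkCannyEdgeDetectionImageFilter.h"
    #include "itkRescaleIntensityImageFilter.h"

    int main(int argc, char* argv[])
    {
      // Internal processing is done in double; the output is written
      // with one byte per pixel.
      typedef otb::Image<double, 2>        InternalImageType;
      typedef otb::Image<unsigned char, 2> OutputImageType;

      typedef otb::ImageFileReader<InternalImageType> ReaderType;
      typedef otb::ImageFileWriter<OutputImageType>   WriterType;

      typedef itk::CannyEdgeDetectionImageFilter<InternalImageType, InternalImageType> FilterType;
      typedef itk::RescaleIntensityImageFilter<InternalImageType, OutputImageType>     RescalerType;

      ReaderType::Pointer   reader   = ReaderType::New();
      FilterType::Pointer   filter   = FilterType::New();
      RescalerType::Pointer rescaler = RescalerType::New();
      WriterType::Pointer   writer   = WriterType::New();

      reader->SetFileName(argv[1]);
      writer->SetFileName(argv[2]);

      // Rescale the Canny output to the full [0, 255] range and cast it
      // to unsigned char at the same time.
      rescaler->SetOutputMinimum(0);
      rescaler->SetOutputMaximum(255);

      filter->SetInput(reader->GetOutput());
      rescaler->SetInput(filter->GetOutput());
      writer->SetInput(rescaler->GetOutput());

      writer->Update();

      return EXIT_SUCCESS;
    }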
As you should be getting used to it by now, compile with make and execute as ScalingPipeline QB_Suburb.png output.png.
You have the filtered version of your image in the output.png file.
So far, as you may have noticed, we have been working with grey level images, i.e. with only one spectral band. If you tried to process a color image with some of the previous examples, you probably obtained a disappointing grey result.
Often, satellite images combine several spectral bands to help the identification of materials: this is called multispectral imagery. In this tutorial, we are going to explore some of the mechanisms used by OTB to process multispectral images.
Add the following lines in the CMakeLists.txt file:
The source code for this example can be found in the file
Examples/Tutorials/Multispectral.cxx.
First, we are going to use otb::VectorImage instead of the now traditional otb::Image . So we include the required header:
We also include some other headers which will be useful later. Note that we are still using the otb::Image in this example for some of the output.
We want to read a multispectral image, so we declare the image type and the reader. As we did in the previous example, we get the filename from the command line.
Sometimes, you need to process only one spectral band of the image. To get a single spectral band, we use the otb::MultiToMonoChannelExtractROI. The declaration is as usual:
We need to pass the parameters to the filter for the extraction. This filter also allows extracting only a spatial subset of the image; however, we will extract the whole channel in this case.
To do that, we need to pass the desired region using the SetExtractionRegion() method (methods such as SetStartX() and SetSizeX() are also available). We get the region from the reader with the GetLargestPossibleRegion() method. Before doing that, we need to read the metadata from the file: this is done by calling UpdateOutputInformation() on the reader’s output. The difference with Update() is that the pixel data is not allocated (yet!), which reduces memory usage.
We choose the channel number to extract (starting from 1) and we plug the pipeline.
To output this image, we need a writer. As the output of the otb::MultiToMonoChannelExtractROI is an otb::Image, we need to template the writer with this type.
After this, we have a one-band image that we can process with most OTB filters.
In some situations, you may want to apply the same processing to all bands of the image. You don’t have to extract each band and process them separately. There are several possibilities:
Let’s see how this filter works. We choose to apply the itk::ShiftScaleImageFilter to each of the spectral bands. We start by declaring the filter on a normal otb::Image. Note that we don’t need to specify any input for this filter.
We declare the otb::PerBandVectorImageFilter, which has three template arguments: the input image type, the output image type and the filter type to apply to each band.
The filter is selected using the SetFilter() method and the input by the usual SetInput() method.
Now, we just have to save the image using a writer templated over an otb::VectorImage :
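Below is a condensed sketch of the whole program (the shipped Multispectral.cxx is the reference). The channel number (3), the shift and scale values and the unsigned short pixel type are illustrative choices, not values prescribed by the guide.

    #include <cstdlib>

    #include "otbImage.h"
    #include "otbVectorImage.h"
    #include "otbImageFileReader.h"
    #include "otbImageFileWriter.h"
    #include "otbMultiToMonoChannelExtractROI.h"
    #include "otbPerBandVectorImageFilter.h"
    #include "itkShiftScaleImageFilter.h"

    int main(int argc, char* argv[])
    {
      typedef unsigned short                 PixelType;
      typedef otb::VectorImage<PixelType, 2> VectorImageType;
      typedef otb::Image<PixelType, 2>       ImageType;

      typedef otb::ImageFileReader<VectorImageType> ReaderType;
      ReaderType::Pointer reader = ReaderType::New();
      reader->SetFileName(argv[1]);

      // Read only the metadata: the pixel buffer is not allocated yet.
      reader->UpdateOutputInformation();

      // Extract one channel (numbered from 1) over the whole image extent.
      typedef otb::MultiToMonoChannelExtractROI<PixelType, PixelType> ExtractChannelType;
      ExtractChannelType::Pointer extractChannel = ExtractChannelType::New();
      extractChannel->SetExtractionRegion(reader->GetOutput()->GetLargestPossibleRegion());
      extractChannel->SetChannel(3);
      extractChannel->SetInput(reader->GetOutput());

      // Write the extracted band as a classical otb::Image.
      typedef otb::ImageFileWriter<ImageType> WriterType;
      WriterType::Pointer writer = WriterType::New();
      writer->SetFileName(argv[2]);
      writer->SetInput(extractChannel->GetOutput());
      writer->Update();

      // Apply the same mono-band filter to every band of the image.
      typedef itk::ShiftScaleImageFilter<ImageType, ImageType> ShiftScaleType;
      ShiftScaleType::Pointer shiftScale = ShiftScaleType::New();
      shiftScale->SetShift(10);
      shiftScale->SetScale(0.5);

      typedef otb::PerBandVectorImageFilter<VectorImageType, VectorImageType, ShiftScaleType> PerBandFilterType;
      PerBandFilterType::Pointer perBand = PerBandFilterType::New();
      perBand->SetFilter(shiftScale);
      perBand->SetInput(reader->GetOutput());

      // Save the multiband result with a writer templated over otb::VectorImage.
      typedef otb::ImageFileWriter<VectorImageType> VectorWriterType;
      VectorWriterType::Pointer vectorWriter = VectorWriterType::New();
      vectorWriter->SetFileName(argv[3]);
      vectorWriter->SetInput(perBand->GetOutput());
      vectorWriter->Update();

      return EXIT_SUCCESS;
    }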
Compile with make and execute as ./Multispectral qb_RoadExtract.tif qb_blue.tif qb_shiftscale.tif.
Well, if you played with some other filters in the previous examples, you probably noticed that in many cases you need to set some parameters on the filters. Ideally, you would want to set some of these parameters from the command line.
OTB provides a mechanism to help you parse the command line parameters. Let’s try it!
Add the following lines in the CMakeLists.txt file:
The source code for this example can be found in the file
Examples/Tutorials/SmarterFilteringPipeline.cxx.
We are going to use the otb::HarrisImageFilter to find the points of interest in one image.
The derivative computation is performed by a convolution with the derivative of a Gaussian kernel of variance σD (derivation scale) and the smoothing of the image is performed by convolving with a Gaussian kernel of variance σI (integration scale). This allows the computation of the following matrix:
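The matrix in question, in the commonly used scale-adapted Harris formulation (recalled here for convenience; the notation in the class documentation may differ slightly), is:

    \mu(\mathbf{x}, \sigma_I, \sigma_D) = \sigma_D^2 \, g(\sigma_I) \star
    \begin{bmatrix}
      L_x^2(\mathbf{x}, \sigma_D)    & L_x L_y(\mathbf{x}, \sigma_D) \\
      L_x L_y(\mathbf{x}, \sigma_D)  & L_y^2(\mathbf{x}, \sigma_D)
    \end{bmatrix}

where g(σI) is the Gaussian smoothing kernel, ⋆ denotes convolution and Lx, Ly are the image derivatives computed at scale σD.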
The output of the detector is det(μ) − α trace²(μ).
We want to set 3 parameters of this filter through the command line: σD (SigmaD), σI (SigmaI) and α (Alpha).
We are also going to do the things properly and catch the exceptions.
Let’s first add the two following headers:
The first one is to handle the exceptions, the second one to help us parse the command line.
We include the other required headers, without forgetting to add the header for the otb::HarrisImageFilter . Then we start the usual main function.
To handle the exceptions properly, we need to put all the instructions inside a try block.
Now, we can declare the otb::CommandLineArgumentParser which is going to parse the command line, select the proper variables, handle the missing compulsory arguments and print an error message if necessary.
Let’s declare the parser:
It’s now time to tell the parser which options we want. Special options are available for input and output images with the AddInputImage() and AddOutputImage() methods.
For the other options, we need to use the AddOption() method. This method allows us to specify:
Now that the parser has all this information, it can actually look at the command line and parse it. We have to do this within a try-catch block to handle exceptions nicely.
Now, we can declare the image type, the reader and the writer as before:
We are getting the filenames for the input and the output images directly from the parser:
Now we have to declare the filter. It is templated with the input image type and the output image type like many filters in OTB. Here we are using the same type for the input and the output images:
We set the filter parameters from the parser. The IsOptionPresent() method lets us know whether an optional argument was provided on the command line.
We add the rescaler filter as before
Let’s plug the pipeline:
We trigger the pipeline execution by calling the Update() method on the writer.
Finally, we have to handle any exceptions we may have raised before.
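For reference, here is a reduced sketch of the processing part of this example. To keep it short and self-contained, it replaces otb::CommandLineArgumentParser with plain positional arguments (so its command line syntax is not the one of the real SmarterFilteringPipeline.cxx); the Harris setter names follow the parameter names given above.

    #include <cstdlib>
    #include <iostream>

    #include "otbImage.h"
    #include "otbImageFileReader.h"
    #include "otbImageFileWriter.h"
    #include "otbHarrisImageFilter.h"
    #include "itkRescaleIntensityImageFilter.h"

    int main(int argc, char* argv[])
    {
      if (argc < 3)
      {
        std::cerr << "Usage: " << argv[0]
                  << " <input> <output> [sigmaD] [sigmaI] [alpha]" << std::endl;
        return EXIT_FAILURE;
      }

      typedef otb::Image<double, 2>        InternalImageType;
      typedef otb::Image<unsigned char, 2> OutputImageType;

      typedef otb::ImageFileReader<InternalImageType>                       ReaderType;
      typedef otb::ImageFileWriter<OutputImageType>                         WriterType;
      typedef otb::HarrisImageFilter<InternalImageType, InternalImageType>  HarrisType;
      typedef itk::RescaleIntensityImageFilter<InternalImageType, OutputImageType> RescalerType;

      ReaderType::Pointer   reader   = ReaderType::New();
      HarrisType::Pointer   harris   = HarrisType::New();
      RescalerType::Pointer rescaler = RescalerType::New();
      WriterType::Pointer   writer   = WriterType::New();

      reader->SetFileName(argv[1]);
      writer->SetFileName(argv[2]);

      // Optional parameters: keep the filter defaults when they are not given.
      if (argc > 3) harris->SetSigmaD(std::atof(argv[3]));
      if (argc > 4) harris->SetSigmaI(std::atof(argv[4]));
      if (argc > 5) harris->SetAlpha(std::atof(argv[5]));

      // Rescale the floating point Harris response to [0, 255] for display.
      rescaler->SetOutputMinimum(0);
      rescaler->SetOutputMaximum(255);

      harris->SetInput(reader->GetOutput());
      rescaler->SetInput(harris->GetOutput());
      writer->SetInput(rescaler->GetOutput());

      try
      {
        writer->Update();
      }
      catch (itk::ExceptionObject& err)
      {
        std::cerr << "Exception caught: " << err << std::endl;
        return EXIT_FAILURE;
      }

      return EXIT_SUCCESS;
    }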
Compile with make as usual. The execution is a bit different now as we have an automatic parsing of the command line. First, try to execute as SmarterFilteringPipeline without any argument.
The usage message (automatically generated) appears:
That looks a bit more professional: another user should be able to play with your program. As this is automatic, that’s a good way not to forget to document your programs.
So now you have a better idea of the command line options that are possible. Try SmarterFilteringPipeline -in QB_Suburb.png -out output.png for a basic version with the default values.
If you want a result that looks a bit better, you have to adjust the parameter with SmarterFilteringPipeline -in QB_Suburb.png -out output.png -d 1.5 -i 2 -a 0.1 for example.
Quite often, when you buy satellite images, you end up with several images. In the case of optical satellites, you often have a panchromatic spectral band at the highest spatial resolution and a multispectral product of the same area at a lower resolution. The resolution ratio is likely to be around 4.
To get the best out of the image processing algorithms, you want to combine these data to produce a new image with the highest spatial resolution and several spectral bands. This step is called fusion and you can find more details about it in 13. However, fusion supposes that your two images represent exactly the same area. There are different solutions to process your data to reach this situation. Here we are going to use the metadata available with the images to produce an orthorectification, as detailed in 11.
First you need to add the following lines in the CMakeLists.txt file:
The source code for this example can be found in the file
Examples/Tutorials/OrthoFusion.cxx.
Start by including some necessary headers and with the usual main declaration. Apart from the classical headers related to image input and output, we need the headers related to the fusion and the orthorectification. One more header is required to be able to process vector images (the XS one) with the orthorectification.
We initialize ossim, which is required for the orthorectification, and we check that all parameters are provided. Basically, we need:
We check that all those parameters are provided.
We declare the different images, readers and writer:
We declare the projection (here we chose the UTM projection, other choices are possible) and retrieve the parameters from the command line:
We will need to pass several parameters to the orthorectification concerning the desired output region:
We declare the orthorectification filter and provide its different parameters:
Now we are able to have the orthorectified area from the PAN image. We just have to follow a similar process for the XS image.
It’s time to declare the fusion filter and set its inputs:
And we can plug it to the writer. To be able to process the images by tiles, we use the SetAutomaticTiledStreaming() method of the writer. We trigger the pipeline execution with the Update() method.
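To illustrate just these last two steps, here is a reduced sketch that only performs the fusion and the tiled writing. It leaves out the two orthorectification branches and assumes the PAN and XS inputs are already superimposed on the same grid; the availability of SetAutomaticTiledStreaming() depends on your OTB version, so refer to the shipped OrthoFusion.cxx for the complete program.

    #include <cstdlib>

    #include "otbImage.h"
    #include "otbVectorImage.h"
    #include "otbImageFileReader.h"
    #include "otbImageFileWriter.h"
    #include "otbSimpleRcsPanSharpeningFusionImageFilter.h"

    int main(int argc, char* argv[])
    {
      // The two inputs are assumed to be already superimposed (same grid),
      // for instance the outputs of the two orthorectification branches.
      typedef otb::Image<double, 2>       PanImageType;
      typedef otb::VectorImage<double, 2> XsImageType;
      typedef otb::VectorImage<double, 2> OutputImageType;

      typedef otb::ImageFileReader<PanImageType>    PanReaderType;
      typedef otb::ImageFileReader<XsImageType>     XsReaderType;
      typedef otb::ImageFileWriter<OutputImageType> WriterType;

      typedef otb::SimpleRcsPanSharpeningFusionImageFilter<PanImageType, XsImageType, OutputImageType> FusionType;

      PanReaderType::Pointer panReader = PanReaderType::New();
      XsReaderType::Pointer  xsReader  = XsReaderType::New();
      FusionType::Pointer    fusion    = FusionType::New();
      WriterType::Pointer    writer    = WriterType::New();

      panReader->SetFileName(argv[1]);
      xsReader->SetFileName(argv[2]);
      writer->SetFileName(argv[3]);

      // Plug the two inputs into the fusion filter, then the fusion into the writer.
      fusion->SetPanInput(panReader->GetOutput());
      fusion->SetXsInput(xsReader->GetOutput());
      writer->SetInput(fusion->GetOutput());

      // Let the writer choose a tiling scheme so that large images are
      // processed piece by piece instead of being loaded entirely in memory.
      writer->SetAutomaticTiledStreaming();

      writer->Update();

      return EXIT_SUCCESS;
    }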