Object-Based Image Analysis (OBIA) focuses on analyzing images at the object level rather than at the pixel level. This approach is particularly well suited to high resolution images and leads to more robust and less noisy results.
OBIA is implemented in OTB through ITK's Label Object framework
(http://www.insight-journal.org/browse/publication/176), which represents a
segmented image as a set of regions rather than a set of pixels. Besides the compact description it
provides, the main advantage of this approach is the possibility to operate at the segment (or object)
level.
A classical OBIA pipeline uses the following steps (a minimal sketch tying them together is given after the list):
- Image segmentation (of the whole image or only parts of it);
- Image to LabelObjectMap (a kind of std::map<LabelObject>) transformation;
- Optional relabeling;
- Attribute computation for the regions, using the image before segmentation:
  - Shape attributes;
  - Statistics attributes;
  - Attributes for radiometry, textures, etc.
- Object filtering:
  - Remove/select objects satisfying a condition (area less than X, NDVI higher than X, etc.);
  - Keep N objects;
  - etc.
- LabelObjectMap to image transformation.
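The following minimal sketch ties these steps together, reusing only classes that appear in the examples of this chapter; the file names "labels.tif" and "out.tif" are placeholders and the object filtering step is omitted for brevity.

// Minimal OBIA pipeline sketch built from classes used later in this chapter.
#include "otbImage.h"
#include "otbImageFileReader.h"
#include "otbImageFileWriter.h"
#include "itkShapeLabelObject.h"
#include "itkLabelMap.h"
#include "itkLabelImageToLabelMapFilter.h"
#include "itkShapeLabelMapFilter.h"
#include "itkLabelMapToLabelImageFilter.h"

int main()
{
  const unsigned int dim = 2;
  typedef unsigned long                         PixelType;
  typedef otb::Image<PixelType, dim>            ImageType;
  typedef itk::ShapeLabelObject<PixelType, dim> LabelObjectType;
  typedef itk::LabelMap<LabelObjectType>        LabelMapType;

  // Step 1: read an already segmented (labeled) image.
  typedef otb::ImageFileReader<ImageType> ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName("labels.tif");

  // Step 2: image to LabelObjectMap transformation.
  typedef itk::LabelImageToLabelMapFilter<ImageType, LabelMapType> ConverterType;
  ConverterType::Pointer converter = ConverterType::New();
  converter->SetInput(reader->GetOutput());

  // Step 3: attribute computation (shape attributes here).
  typedef itk::ShapeLabelMapFilter<LabelMapType> ShapeFilterType;
  ShapeFilterType::Pointer shape = ShapeFilterType::New();
  shape->SetInput(converter->GetOutput());

  // Step 4: object filtering would take place here.

  // Step 5: LabelObjectMap back to a label image, then write it.
  typedef itk::LabelMapToLabelImageFilter<LabelMapType, ImageType> L2IType;
  L2IType::Pointer l2i = L2IType::New();
  l2i->SetInput(shape->GetOutput());

  typedef otb::ImageFileWriter<ImageType> WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(l2i->GetOutput());
  writer->SetFileName("out.tif");
  writer->Update();

  return 0;
}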
The source code for this example can be found in the file
Examples/OBIA/ImageToLabelToImage.cxx.
This example shows the basic approach for transforming a segmented (labeled) image into a
LabelObjectMap and then back to an image. To do so, we need the following header files, which
contain the basic classes.
#include "itkBinaryImageToLabelMapFilter.h" #include "itkLabelMapToLabelImageFilter.h"
The image types are defined using pixel types and dimension. The input image is defined as an
otb::Image .
const int dim = 2;
typedef unsigned short PixelType;

typedef otb::Image<PixelType, dim> ImageType;
typedef itk::LabelObject<PixelType, dim> LabelObjectType;
typedef itk::LabelMap<LabelObjectType> LabelMapType;
As usual, the reader is instantiated and the input image is set.
typedef otb::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName(argv[1]);
Then the binary image is transformed to a collection of label objects. Arguments are:
- FullyConnected: Set whether the connected components are defined strictly by face
connectivity or by face+edge+vertex connectivity. Default is FullyConnectedOff.
- InputForegroundValue/OutputBackgroundValue: Specify the pixel value of the foreground in the input
image and the value used for the background in the output label map.
typedef itk::BinaryImageToLabelMapFilter<ImageType, LabelMapType> I2LType;
I2LType::Pointer i2l = I2LType::New();
i2l->SetInput(reader->GetOutput());
i2l->SetFullyConnected(atoi(argv[5]));
i2l->SetInputForegroundValue(atoi(argv[6]));
i2l->SetOutputBackgroundValue(atoi(argv[7]));
Then the inverse process is used to recreate an image of labels. The itk::LabelMapToLabelImageFilter
converts a LabelMap into a labeled image.
typedef itk::LabelMapToLabelImageFilter<LabelMapType, ImageType> L2IType;
L2IType::Pointer l2i = L2IType::New();
l2i->SetInput(i2l->GetOutput());
The output can be passed to a writer. The invocation of the Update() method on the writer triggers the
execution of the pipeline.
typedef otb::ImageFileWriter<ImageType> WriterType;
WriterType::Pointer writer = WriterType::New();
writer->SetInput(l2i->GetOutput());
writer->SetFileName(argv[2]);
writer->Update();
Figure 20.1 shows the effect of transforming an image into a label object map and back to an
image.
The source code for this example can be found in the file
Examples/OBIA/ShapeAttributeComputation.cxx.
This basic example shows how to compute shape attributes at the object level. The input image is first
converted into a set of regions ( itk::ShapeLabelObject ), then some attribute values of each object are
computed and saved to an ASCII file.
#include "itkShapeLabelObject.h" #include "itkLabelImageToLabelMapFilter.h"
#include "itkShapeLabelMapFilter.h"
The image types are defined using pixel types and dimensions. The input image is defined as an
otb::Image .
const int dim = 2;
typedef unsigned long PixelType;
typedef otb::Image<PixelType, dim> ImageType;

typedef unsigned long LabelType;
typedef itk::ShapeLabelObject<LabelType, dim> LabelObjectType;
typedef itk::LabelMap<LabelObjectType> LabelMapType;
typedef itk::LabelImageToLabelMapFilter
  <ImageType, LabelMapType> ConverterType;
Firstly, the image reader is instantiated.
typedef otb::ImageFileReader<ImageType> ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName(argv[1]);
Here the itk::ShapeLabelObject type is chosen in order to read attributes related to the shape of
the objects, as opposed to their content, which would be accessed through the
itk::StatisticsLabelObject .
typedef itk::ShapeLabelMapFilter<LabelMapType> ShapeFilterType;
The input image is converted into a collection of objects.
ConverterType::Pointer converter = ConverterType::New();
converter->SetInput(reader->GetOutput());
converter->SetBackgroundValue(itk::NumericTraits<LabelType>::min());

ShapeFilterType::Pointer shape = ShapeFilterType::New();
shape->SetInput(converter->GetOutput());
Then, update the shape filter so that its output is up to date.
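In code, this simply amounts to calling the filter's Update() method:

// Trigger the computation of the shape attributes of each label object.
shape->Update();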
Then, we can read the attribute values we are interested in. The itk::BinaryImageToShapeLabelMapFilter
produces consecutive labels, so we can use a for loop together with the GetLabelObject() method to retrieve the
label objects. If the labels are not consecutive, the GetNthLabelObject() method must be used
instead of GetLabelObject(), or an iterator on the label object container of the label map can be used
(a sketch of the index-based variant is given after the code below). In
this example, we write two shape attributes of each object (the size and the centroid
coordinates) to a text file.
std::ofstream outfile(argv[2]);
LabelMapType::Pointer labelMap = shape->GetOutput();

for (unsigned long label = 1;
     label <= labelMap->GetNumberOfLabelObjects();
     label++)
  {
  // We don't need a SmartPointer of the label object here,
  // because the reference is kept in the label map.
  const LabelObjectType * labelObject = labelMap->GetLabelObject(label);
  outfile << label << "\t" << labelObject->GetPhysicalSize() << "\t"
          << labelObject->GetCentroid() << std::endl;
  }

outfile.close();
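For the case of non-consecutive labels mentioned above, a possible index-based variant of the loop (it would replace the for loop above, before the file is closed) could look as follows; this is only a sketch reusing the labelMap and outfile objects already declared.

// Index-based variant: valid even when labels are not consecutive,
// since objects are addressed by position rather than by label value.
for (unsigned int i = 0; i < labelMap->GetNumberOfLabelObjects(); ++i)
  {
  const LabelObjectType * labelObject = labelMap->GetNthLabelObject(i);
  outfile << labelObject->GetLabel() << "\t"
          << labelObject->GetPhysicalSize() << "\t"
          << labelObject->GetCentroid() << std::endl;
  }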
The source code for this example can be found in the file
Examples/OBIA/RadiometricAttributesLabelMapFilterExample.cxx.
This example shows the basic approach to performing object based analysis on an image. The input image is first
segmented using the otb::MeanShiftSegmentationFilter . Then each segmented region is converted into
a map of labeled objects. Afterwards the otb::MultiChannelRAndNIRIndexImageFilter
computes radiometric attributes for each object; in this example the NDVI is computed. The computed
feature is passed to the otb::BandsStatisticsAttributesLabelMapFilter , which computes statistics
over the resulting band. The statistics of a region over each band can then be accessed by concatenating STATS,
the band number and the statistical attribute, separated by colons. In this example the mean
of the first band (which contains the NDVI) is accessed over all the regions with the attribute:
'STATS::Band1::Mean'.
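As an illustration of this naming convention, once the whole pipeline below has been updated, the mean NDVI of a single region could be read back along the following lines. This is only a sketch: labelMap and label are assumed to be an up-to-date output label map and a valid label value, and the label objects are assumed to be of the otb::AttributesMapLabelObject type, which provides the GetAttribute() method.

// Sketch: read the per-region mean of the first feature band (the NDVI)
// through the attributes map of the label object. "labelMap", "label" and
// "LabelObjectType" are assumed to be available in the surrounding code.
LabelObjectType* labelObject = labelMap->GetLabelObject(label);
double meanNDVI = labelObject->GetAttribute("STATS::Band1::Mean");
std::cout << "Mean NDVI of region " << label << ": " << meanNDVI << std::endl;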
First, segment the input image using the Mean Shift algorithm (see 8.7.2.3 for a more detailed
explanation).
typedef otb::MeanShiftSegmentationFilter
  <VectorImageType, LabeledImageType, VectorImageType> FilterType;
FilterType::Pointer filter = FilterType::New();
filter->SetSpatialBandwidth(spatialRadius);
filter->SetRangeBandwidth(rangeRadius);
filter->SetMinRegionSize(minRegionSize);
filter->SetThreshold(0.1);
filter->SetMaxIterationNumber(100);
The otb::MeanShiftSegmentationFilter having been instantiated, the input image is plugged into it.
filter->SetInput(vreader->GetOutput());
The itk::LabelImageToLabelMapFilter is then instantiated and fed with the label output of the
otb::MeanShiftSegmentationFilter , which is a labeled image where each segmented
region has a unique label.
LabelMapFilterType::Pointer labelMapFilter = LabelMapFilterType::New();
labelMapFilter->SetInput(filter->GetLabelOutput());
labelMapFilter->SetBackgroundValue(itk::NumericTraits<LabelType>::min());
ShapeLabelMapFilterType::Pointer shapeLabelMapFilter = ShapeLabelMapFilterType::New();
shapeLabelMapFilter->SetInput(labelMapFilter->GetOutput());
Instantiate the radiometric label map filter (RadiometricLabelMapFilterType) to compute statistics of the feature image over
each label object.
RadiometricLabelMapFilterType::Pointer radiometricLabelMapFilter = RadiometricLabelMapFilterType::New();
The feature image can be one of the following:
- GEMI
- NDVI
- IR
- IC
- IB
- NDWI2
- Intensity
The input image must first be converted into the desired radiometric index. In our case, statistics are computed on the NDVI
coefficient for each label object.
NDVIImageFilterType::Pointer ndviImageFilter = NDVIImageFilterType::New();
ndviImageFilter->SetRedIndex(3);
ndviImageFilter->SetNIRIndex(4);
ndviImageFilter->SetInput(vreader->GetOutput());

ImageToVectorImageCastFilterType::Pointer ndviVectorImageFilter =
  ImageToVectorImageCastFilterType::New();
ndviVectorImageFilter->SetInput(ndviImageFilter->GetOutput());
radiometricLabelMapFilter->SetInput(shapeLabelMapFilter->GetOutput());
radiometricLabelMapFilter->SetFeatureImage(ndviVectorImageFilter->GetOutput());
The otb::AttributesMapOpeningLabelMapFilter will perform the selection. There are three
parameters: AttributeName specifies the radiometric attribute, Lambda controls the thresholding of the
input, and ReverseOrdering makes the filter remove objects with an attribute value greater than
Lambda instead of lower.
OpeningLabelMapFilterType::Pointer opening = OpeningLabelMapFilterType::New();
opening->SetInput(radiometricLabelMapFilter->GetOutput());
opening->SetAttributeName(attr);
opening->SetLambda(thresh);
opening->SetReverseOrdering(lowerThan);
opening->Update();
Then, the selected label objects are transformed into a label image using the
itk::LabelMapToLabelImageFilter .
LabelMapToBinaryImageFilterType::Pointer labelMap2LabeledImage =
  LabelMapToBinaryImageFilterType::New();
labelMap2LabeledImage->SetInput(opening->GetOutput());
And finally, we declare the writer and call its Update() method to trigger the full pipeline
execution.
WriterType::Pointer writer = WriterType::New();
writer->SetFileName(outfname);
writer->SetInput(labelMap2LabeledImage->GetOutput());
writer->Update();
Figure 20.2 shows the result of applying the object selection based on radiometric attributes.
The source code for this example can be found in the file
Examples/OBIA/HooverMetricsEstimation.cxx.
The following example shows how to compare two segmentations using the Hoover metrics. For instance, they
can be used to compare a segmentation produced by your algorithm against a partial ground truth
segmentation. In this example, the ground truth segmentation will be referred to as GT whereas the
machine segmentation will be referred to as MS.
The estimation of Hoover metrics is done with two filters: otb::HooverMatrixFilter and
otb::HooverInstanceFilter . The first one produces a matrix containing the number of overlapping
pixels between MS regions and GT regions. The second one classifies each region among four types (called
Hoover instances):
- Correct detection: a region is matched with another one in the opposite segmentation, because
they cover nearly the same area.
- Over-segmentation: a GT region is matched with a group of MS regions because they cover
nearly the same area.
- Under-segmentation: an MS region is matched with a group of GT regions because they cover
nearly the same area.
- Missed detection (for GT regions) or Noise (for MS regions): unmatched regions.
Note that a region can be tagged with two types. Once the Hoover instances have been found,
the instance filter computes overall scores for each category: these are the Hoover metrics.
#include "otbHooverMatrixFilter.h" #include "otbHooverInstanceFilter.h"
#include "otbLabelMapToAttributeImageFilter.h"
The filters otb::HooverMatrixFilter and otb::HooverInstanceFilter are designed to handle
itk::LabelMap images, made with otb::AttributesMapLabelObject . This type of label object
allows storing generic attributes. Each region can store a set of attributes: in this case, Hoover instances and
metrics will be stored.
typedef otb::AttributesMapLabelObject<unsigned int, 2, float> LabelObjectType;
typedef itk::LabelMap<LabelObjectType> LabelMapType;
typedef otb::HooverMatrixFilter<LabelMapType> HooverMatrixFilterType;
typedef otb::HooverInstanceFilter<LabelMapType> InstanceFilterType;
typedef otb::Image<unsigned int, 2> ImageType;
typedef itk::LabelImageToLabelMapFilter
  <ImageType, LabelMapType> ImageToLabelMapFilterType;
typedef otb::VectorImage<float, 2> VectorImageType;
typedef otb::LabelMapToAttributeImageFilter
  <LabelMapType, VectorImageType> AttributeImageFilterType;
The first step is to convert the images to label maps: we use itk::LabelImageToLabelMapFilter . The
background value sets the label value of regions considered as background: there is no label object for the
background region.
ImageToLabelMapFilterType::Pointer gt_filter = ImageToLabelMapFilterType::New();
gt_filter->SetInput(gt_reader->GetOutput());
gt_filter->SetBackgroundValue(0);

ImageToLabelMapFilterType::Pointer ms_filter = ImageToLabelMapFilterType::New();
ms_filter->SetInput(ms_reader->GetOutput());
ms_filter->SetBackgroundValue(0);
The Hoover matrix filter has to be updated here. This matrix must be computed before being given to the
instance filter.
HooverMatrixFilterType::Pointer hooverFilter = HooverMatrixFilterType::New();
hooverFilter->SetGroundTruthLabelMap(gt_filter->GetOutput());
hooverFilter->SetMachineSegmentationLabelMap(ms_filter->GetOutput());
hooverFilter->Update();
The instance filter computes the Hoover metrics for each region. These metrics are stored as attributes in
each label object. The threshold parameter corresponds to the overlapping ratio above which two regions
can be matched. The extended attributes can be used if the user wants to keep track of the associations
between MS and GT regions: for instance, if a GT region has been matched as a correct detection, it will carry an
attribute containing the label value of the associated MS region (the same principle applies to the other types of
instance).
InstanceFilterType::Pointer instances = InstanceFilterType::New();
instances->SetGroundTruthLabelMap(gt_filter->GetOutput());
instances->SetMachineSegmentationLabelMap(ms_filter->GetOutput());
instances->SetThreshold(0.75);
instances->SetHooverMatrix(hooverFilter->GetHooverConfusionMatrix());
instances->SetUseExtendedAttributes(false);
The otb::LabelMapToAttributeImageFilter is designed to extract attribute values from a label map
and write them into the channels of a vector image. We set the attribute to plot in each channel.
AttributeImageFilterType::Pointer attributeImageGT = AttributeImageFilterType::New();
attributeImageGT->SetInput(instances->GetOutputGroundTruthLabelMap());
attributeImageGT->SetAttributeForNthChannel(0, InstanceFilterType::GetNameFromAttribute(InstanceFilterType::ATTRIBUTE_RC));
attributeImageGT->SetAttributeForNthChannel(1, InstanceFilterType::GetNameFromAttribute(InstanceFilterType::ATTRIBUTE_RF));
attributeImageGT->SetAttributeForNthChannel(2, InstanceFilterType::GetNameFromAttribute(InstanceFilterType::ATTRIBUTE_RA));
attributeImageGT->SetAttributeForNthChannel(3, InstanceFilterType::GetNameFromAttribute(InstanceFilterType::ATTRIBUTE_RM));
WriterType::Pointer writer = WriterType::New();
writer->SetInput(attributeImageGT->GetOutput());
writer->SetFileName(argv[3]);
writer->Update();
The output image contains for each GT region its correct detection score ("RC", band 1), its
over-segmentation score ("RF", band 2), its under-segmentation score ("RA", band 3) and its missed
detection score ("RM", band 4).
std::cout << "Mean RC ="<< instances->GetMeanRC() << std::endl;
std::cout << "Mean RF ="<< instances->GetMeanRF() << std::endl;
std::cout << "Mean RA ="<< instances->GetMeanRA() << std::endl;
std::cout << "Mean RM ="<< instances->GetMeanRM() << std::endl;
std::cout << "Mean RN ="<< instances->GetMeanRN() << std::endl;
The Hoover scores are also computed for the whole segmentations. The letters in the score names stand
for: C = correct, F = fragmentation, A = aggregation, M = missed, N = noise.