Dimension reduction is a statistical process which concentrates the information contained in multivariate data into a smaller number of variables (or dimensions). An interesting review of the domain is given by Fodor [47].
Though there are plenty of non-linear methods in the literature, OTB currently provides only linear dimension reduction techniques applied to images.
Usually, linear dimension-reduction algorithms try to find a set of linear combinations of the input image bands that maximise a given criterion, often chosen so that the image information concentrates in the first components. Algorithms differ in the criterion they optimise and in the way they handle the signal or image noise.
In remote-sensing image processing, dimension reduction algorithms are of great interest for denoising, or as a preliminary step for the classification of feature images or the unmixing of hyperspectral images. In addition to the denoising effect, the advantage of dimension reduction in the latter two cases is that it lowers the size of the data to be analysed and thus speeds up processing without too much loss of accuracy.
The source code for this example can be found in the file
This example illustrates the use of the
The first step required to use this filter is to include its header file.
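For reference, the includes for such an example could look as follows; the header names are assumed from the usual OTB naming conventions and should be checked against the OTB version in use.

  // Header names assumed from the usual OTB naming conventions.
  #include "otbPCAImageFilter.h"
  #include "otbVectorImage.h"
  #include "otbImageFileReader.h"
  #include "otbImageFileWriter.h"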
We start by defining the types for the images, the reader, and the writer. We choose to work with a
We now instantiate the image reader and set the image file name.
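A minimal sketch of these two steps is given below; the double pixel type, the variable names and the input file name are illustrative placeholders, not taken from the original example.

  typedef double                          PixelType;
  typedef otb::VectorImage<PixelType, 2>  ImageType;
  typedef otb::ImageFileReader<ImageType> ReaderType;
  typedef otb::ImageFileWriter<ImageType> WriterType;

  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName(inputFileName);   // e.g. a file name taken from argv[1]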
We define the type for the filter. It is templated over the input and output image types and the transformation direction. Internally, this filter is organised as a filter-to-filter (composite) structure. We can now instantiate the filter.
The only parameter needed for the PCA is the number of principal components required as output. Principal components are linear combinations of the input components (here the input image bands), selected from the eigenvectors of a Singular Value Decomposition sorted by decreasing eigenvalue. We can choose to keep fewer principal components than the number of input bands.
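A sketch of the two previous steps follows; the otb::Transform::FORWARD template argument and the SetNumberOfPrincipalComponentsRequired() call follow the conventions of this filter family and should be treated as assumptions to verify.

  // Forward PCA filter, templated over the image types and the direction
  // of the transformation (otb::Transform::FORWARD here).
  typedef otb::PCAImageFilter<ImageType, ImageType,
                              otb::Transform::FORWARD> PCAFilterType;

  PCAFilterType::Pointer pcaFilter = PCAFilterType::New();

  // Keep only the first principal components (assumed setter name).
  pcaFilter->SetNumberOfPrincipalComponentsRequired(3);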
We now instantiate the writer and set the file name for the output image.
We finally plug the pipeline and trigger the PCA computation with the method
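The end of the pipeline might then read as follows; the SetInput()/Update() calls are standard OTB/ITK pipeline usage, and the output file name is a placeholder.

  WriterType::Pointer writer = WriterType::New();
  writer->SetFileName(outputFileName);   // placeholder output file name

  // Plug the pipeline together; Update() triggers the PCA computation.
  pcaFilter->SetInput(reader->GetOutput());
  writer->SetInput(pcaFilter->GetOutput());
  writer->Update();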
Figure 18.1 shows the result of applying the forward and reverse PCA transformations to an 8-band WorldView-2 image.
The source code for this example can be found in the file
This example illustrates the use of the
The Noise-Adjusted Principal Component Analysis transform is a sequence of two Principal Component Analysis transforms. The first transform is based on an estimated covariance matrix of the noise, and intends to whiten the input image (noise with unit variance and no correlation between bands).
The second Principal Component Analysis is then applied to the noise-whitened image, giving the Maximum Noise Fraction transform. Applying PCA to the noise-whitened image amounts to ranking the Principal Components according to their signal-to-noise ratio.
NA-PCA is thus essentially a reformulation of the Maximum Noise Fraction algorithm.
The first step required to use this filter is to include its header file.
We also need to include the header of the noise filter.
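The corresponding includes could look like this; the header names, in particular the choice of a local-activity filter as the noise estimator, are assumptions based on common OTB usage.

  #include "otbNAPCAImageFilter.h"
  // Assumed choice of noise estimator: a local-activity filter working on
  // a sliding window.
  #include "otbLocalActivityVectorImageFilter.h"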
We start by defining the types for the images, the reader and the writer. We choose to work with a
We now instantiate the image reader and set the image file name.
In contrast with standard Principal Component Analysis, NA-PCA needs an estimation of the noise correlation matrix in the dataset prior to transformation.
A classical approach is to use spatial gradient images and infer the noise correlation matrix from them. The
method of noise estimation can be customized by templating the
In this implementation, noise is estimated from a local window. We define the type of the noise filter.
We define the type for the filter. It is templated over the input and output image types, the noise estimation filter type, and the transformation direction. Internally, this filter is organised as a filter-to-filter (composite) structure. We can now instantiate the filter.
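A sketch of these two type definitions, reusing the image type defined earlier and assuming the noise is estimated with otb::LocalActivityVectorImageFilter:

  // Noise estimation over a local sliding window (assumed filter choice).
  typedef otb::LocalActivityVectorImageFilter<ImageType, ImageType> NoiseFilterType;

  // Forward NA-PCA filter, templated over the image types, the noise
  // estimation filter type and the transformation direction.
  typedef otb::NAPCAImageFilter<ImageType, ImageType, NoiseFilterType,
                                otb::Transform::FORWARD> NAPCAFilterType;

  NAPCAFilterType::Pointer napcaFilter = NAPCAFilterType::New();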
We then set the number of principal components required as output. We can choose to keep fewer PCs than the number of input bands.
We set the radius of the sliding window for noise estimation.
Last, we can activate normalisation.
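The three parameter-setting steps above could look as follows; GetNoiseImageFilter(), SetRadius() and SetUseNormalization() are assumed accessor names to check against the filter's API.

  // Number of principal components kept in the output (assumed setter name).
  napcaFilter->SetNumberOfPrincipalComponentsRequired(3);

  // Radius of the sliding window used for noise estimation
  // (GetNoiseImageFilter()/SetRadius() are assumed accessor names).
  NoiseFilterType::RadiusType radius;
  radius.Fill(1);
  napcaFilter->GetNoiseImageFilter()->SetRadius(radius);

  // Activate normalisation (assumed setter name).
  napcaFilter->SetUseNormalization(true);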
We now instantiate the writer and set the file name for the output image.
We finally plug the pipeline and trigger the NA-PCA computation with the method
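The end of the pipeline is analogous to the PCA example; file names are placeholders.

  WriterType::Pointer writer = WriterType::New();
  writer->SetFileName(outputFileName);   // placeholder output file name

  napcaFilter->SetInput(reader->GetOutput());
  writer->SetInput(napcaFilter->GetOutput());
  writer->Update();                      // triggers the NA-PCA computation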
Figure 18.2 shows the result of applying the forward and reverse NA-PCA transformations to an 8-band WorldView-2 image.
The source code for this example can be found in the file
This example illustrates the use of the
The Maximum Noise Fraction transform is a sequence of two Principal Component Analysis transforms. The first transform is based on an estimated covariance matrix of the noise, and intends to whiten the input image (noise with unit variance and no correlation between bands).
The second Principal Component Analysis is then applied to the noise-whitened image, giving the Maximum Noise Fraction transform.
In this implementation, noise is estimated from a local window.
The first step required to use this filter is to include its header file.
We also need to include the header of the noise filter.
We start by defining the types for the images, the reader, and the writer. We choose to work with a
We now instantiate the image reader and set the image file name.
In contrast with standard Principal Component Analysis, MNF needs an estimation of the noise correlation matrix in the dataset prior to transformation.
A classical approach is to use spatial gradient images and infer the noise correlation matrix from them. The
method of noise estimation can be customized by templating the
In this implementation, noise is estimated from a local window. We define the type of the noise filter.
We define the type for the filter. It is templated over the input and output image types and the transformation direction. Internally, this filter is organised as a filter-to-filter (composite) structure. We can now instantiate the filter.
We then set the number of principal components required as output. We can choose to keep fewer PCs than the number of input bands.
We set the radius of the sliding window for noise estimation.
Last, we can activate normalisation.
We now instantiate the writer and set the file name for the output image.
We finally plug the pipeline and trigger the MNF computation with the method
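Since the MNF pipeline mirrors the NA-PCA one, a condensed sketch of the whole example is given below; the otb::MNFImageFilter template parameters and the setter names are assumptions patterned on the NA-PCA case, and the reader and writer are set up as in the previous examples.

  // Assumed class and setter names, patterned on the NA-PCA example.
  typedef otb::LocalActivityVectorImageFilter<ImageType, ImageType> NoiseFilterType;
  typedef otb::MNFImageFilter<ImageType, ImageType, NoiseFilterType,
                              otb::Transform::FORWARD> MNFFilterType;

  MNFFilterType::Pointer mnfFilter = MNFFilterType::New();
  mnfFilter->SetNumberOfPrincipalComponentsRequired(3);   // number of PCs kept
  NoiseFilterType::RadiusType radius;
  radius.Fill(1);
  mnfFilter->GetNoiseImageFilter()->SetRadius(radius);    // noise-estimation window
  mnfFilter->SetUseNormalization(true);                   // activate normalisation

  mnfFilter->SetInput(reader->GetOutput());
  writer->SetInput(mnfFilter->GetOutput());
  writer->Update();                                       // triggers the MNF computation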
Figure 18.3 shows the result of applying the forward and reverse MNF transformations to an 8-band WorldView-2 image.
The source code for this example can be found in the file
This example illustrates the use of the
Like Principal Component Analysis, Independent Component Analysis [77] computes a set of orthogonal linear combinations, but the criterion used by FastICA is different: instead of maximizing variance, it tries to maximize the statistical independence between components.
In the FastICA algorithm [66], statistical independence is measured by evaluating the non-Gaussianity of the components, and the maximization is carried out iteratively.
The first step required to use this filter is to include its header file.
We start by defining the types for the images, the reader, and the writer. We choose to work with a
We now instantiate the image reader and set the image file name.
We define the type for the filter. It is templated over the input and output image types and the transformation direction. Internally, this filter is organised as a filter-to-filter (composite) structure. We can now instantiate the filter.
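A sketch of this step, assuming the filter is otb::FastICAImageFilter and reusing the image types defined earlier:

  // Requires #include "otbFastICAImageFilter.h" (assumed header name).
  typedef otb::FastICAImageFilter<ImageType, ImageType,
                                  otb::Transform::FORWARD> ICAFilterType;

  ICAFilterType::Pointer icaFilter = ICAFilterType::New();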
We then set the number of independent components required as output. We can choose to keep fewer ICs than the number of input bands.
We set the number of iterations of the ICA algorithm.
We also set the
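The parameter-setting steps could look as follows; SetNumberOfPrincipalComponentsRequired() and SetNumberOfIterations() are assumed setter names, and the additional parameter mentioned above is left out of this sketch.

  // Number of independent components kept in the output (assumed setter name).
  icaFilter->SetNumberOfPrincipalComponentsRequired(3);

  // Number of iterations of the FastICA optimisation (assumed setter name).
  icaFilter->SetNumberOfIterations(20);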
We now instantiate the writer and set the file name for the output image.
We finally plug the pipeline and trigger the ICA computation with the method
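Finally, as in the previous examples:

  icaFilter->SetInput(reader->GetOutput());
  writer->SetInput(icaFilter->GetOutput());
  writer->Update();   // triggers the FastICA computation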
Figure 18.4 shows the result of applying the forward and reverse FastICA transformations to an 8-band WorldView-2 image.
The source code for this example can be found in the file
This example illustrates the class
Auto-correlation is the correlation between a component and a version of that same component shifted by one unit (one pixel).
Please note that the inverse transform is not implemented yet.
We start by including the corresponding header file.
We then define the types for the input image and the output image.
We can now declare the types for the reader. Since the images can be very large, we will force the pipeline
to use streaming. For this purpose, the file writer will be streamed. This is achieved by using the
The
The different elements of the pipeline can now be instantiated.
We set the parameters of the different elements of the pipeline.
We build the pipeline by plugging all the elements together.
And then we can trigger the pipeline update, as usual.
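A condensed, hedged sketch of such a streamed MAF pipeline is given below; the class name otb::MaximumAutocorrelationFactorImageFilter and the use of otb::ImageFileWriter as the streaming-capable writer are assumptions to check against the OTB version in use, and the pixel types are illustrative choices.

  #include "otbMaximumAutocorrelationFactorImageFilter.h"  // assumed header name
  #include "otbVectorImage.h"
  #include "otbImageFileReader.h"
  #include "otbImageFileWriter.h"

  int main(int argc, char* argv[])
  {
    typedef otb::VectorImage<unsigned short, 2> InputImageType;
    typedef otb::VectorImage<double, 2>         OutputImageType;

    typedef otb::ImageFileReader<InputImageType>  ReaderType;
    typedef otb::ImageFileWriter<OutputImageType> WriterType;
    typedef otb::MaximumAutocorrelationFactorImageFilter<InputImageType,
                                                         OutputImageType> MAFFilterType;

    ReaderType::Pointer    reader = ReaderType::New();
    MAFFilterType::Pointer maf    = MAFFilterType::New();
    WriterType::Pointer    writer = WriterType::New();  // writes the output by streamed blocks

    reader->SetFileName(argv[1]);
    writer->SetFileName(argv[2]);

    // Plug the pipeline together and trigger its streamed execution.
    maf->SetInput(reader->GetOutput());
    writer->SetInput(maf->GetOutput());
    writer->Update();

    return 0;
  }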
Figure 18.5 shows the result of the Maximum Autocorrelation Factor transform applied to an 8-band WorldView-2 image.