Train Dimensionality Reduction
Brief Description
Train a dimensionality reduction model
Tags
Learning
Long Description
Trainer for dimensionality reduction algorithms (autoencoders, PCA, SOM). As with the other machine learning trainers, all input samples are used to compute the model.
The model can be used in the ImageDimensionalityReduction and VectorDimensionalityReduction applications.
Parameters
Input and output data (io): This group of parameters allows setting input and output data.
Input Vector Data (io.vd): Input geometries used for training (note: all geometries from the layer will be used)
Output model (io.out): Output file containing the estimated model (.txt format).
Input XML image statistics file (io.stats): XML file containing mean and variance of each feature.
Field names to be used for training (feat): List of field names in the input vector data used as features for training.
Algorithm to use for the training (algorithm): Choice of the dimensionality reduction algorithm used for training.
OTB SOM (som): This group of parameters allows setting SOM parameters.
Map size (algorithm.som.s): Sizes of the SOM map (one per dimension). For instance, [12;15] means a 2D map of size 12x15. Supports 2D to 5D maps.
Neighborhood sizes (algorithm.som.n): Sizes of the initial neighborhood in the SOM map (one per dimension). The number of neighborhood sizes must match the number of map sizes.
NumberIteration (algorithm.som.ni): Number of iterations for SOM learning
BetaInit (algorithm.som.bi): Initial learning coefficient
BetaFinal (algorithm.som.bf): Final learning coefficient
InitialValue (algorithm.som.iv): Maximum initial neuron weight
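To give these parameters some intuition, here is a minimal self-organizing map training loop in NumPy. This is an illustration of the general SOM algorithm, not the application's actual implementation; the data, the hyper-parameter values, and the linear decay schedules are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training samples: 100 vectors with 3 features
data = rng.random((100, 3))

# Hyper-parameters loosely mirroring the application's keys (illustrative values)
map_size = (12, 15)              # algorithm.som.s: a 2D map of size 12x15
n_iter = 1000                    # algorithm.som.ni
beta_init, beta_final = 1.0, 0.1 # algorithm.som.bi / algorithm.som.bf
init_value = 1.0                 # algorithm.som.iv: max initial neuron weight
sigma_init = 5.0                 # initial neighborhood radius (algorithm.som.n)

# One weight vector per map cell, initialized randomly in [0, init_value]
weights = rng.random((*map_size, data.shape[1])) * init_value

# Grid coordinates of every cell, used for neighborhood distances on the map
grid = np.stack(np.meshgrid(np.arange(map_size[0]),
                            np.arange(map_size[1]), indexing="ij"), axis=-1)

for t in range(n_iter):
    x = data[rng.integers(len(data))]
    # Best-matching unit: the cell whose weight vector is closest to the sample
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), map_size)
    # Learning rate and neighborhood radius both shrink over the iterations
    frac = t / n_iter
    beta = beta_init + (beta_final - beta_init) * frac
    sigma = max(sigma_init * (1 - frac), 0.5)
    # Gaussian neighborhood around the BMU pulls nearby cells toward the sample
    grid_d2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))
    weights += beta * h[..., None] * (x - weights)

# "Reducing" a sample means replacing it by its BMU coordinates on the map
sample_2d = np.unravel_index(
    np.argmin(np.linalg.norm(weights - data[0], axis=-1)), map_size)
print(sample_2d)  # a (row, col) pair inside the 12x15 map
```

The map dimensionality (here 2D) is what determines the dimension of the reduced output, which is why the application supports 2D to 5D maps.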
Shark Autoencoder (autoencoder): This group of parameters allows setting Shark autoencoder parameters.
Maximum number of iterations during training (algorithm.autoencoder.nbiter): The maximum number of iterations used during training.
Maximum number of iterations during fine tuning (algorithm.autoencoder.nbiterfinetuning): The maximum number of iterations used during fine tuning of the whole network.
Epsilon (algorithm.autoencoder.epsilon): Convergence threshold for the training optimizer.
Weight initialization factor (algorithm.autoencoder.initfactor): Parameter that controls the weight initialization of the autoencoder
Size (algorithm.autoencoder.nbneuron): The number of neurons in each hidden layer.
Strength of the regularization (algorithm.autoencoder.regularization): Strength of the L2 regularization used during training
Strength of the noise (algorithm.autoencoder.noise): Strength of the noise applied to the input (used for denoising autoencoders)
Sparsity parameter (algorithm.autoencoder.rho): Target average activation of the hidden neurons (sparsity constraint)
Sparsity regularization strength (algorithm.autoencoder.beta): Weight of the sparsity penalty in the training objective
Learning curve (algorithm.autoencoder.learningcurve): Learning error values
Shark PCA (pca): This group of parameters allows setting Shark PCA parameters.
Dimension of the output of the PCA transformation (algorithm.pca.dim): Number of components kept, i.e. the dimension of the reduced feature space.
Available RAM (MB) (ram): Available memory for processing (in MB)
Load otb application from xml file (inxml): Load the application parameters from an XML file
Save otb application to xml file (outxml): Save the application parameters to an XML file
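To make the role of algorithm.pca.dim concrete, here is a minimal NumPy sketch of what a PCA reduction does. This is an illustration of the technique, not the Shark implementation the application actually uses; the toy data and the 10-feature layout (mirroring the value_0 ... value_9 fields in the example below) are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the training samples: 200 vectors with 10 features,
# deliberately correlated so that PCA has redundancy to remove
latent = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 10))
samples = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

dim = 8  # algorithm.pca.dim: size of the reduced feature space

# Center the data, then keep the top `dim` principal directions
mean = samples.mean(axis=0)
centered = samples - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:dim]            # shape (dim, n_features)

# "Applying the model" projects each 10-feature vector onto 8 values
reduced = centered @ components.T
print(reduced.shape)  # (200, 8)
```

Training stores the mean and the components (the model); applying the model to new vectors is just the centering and projection steps, which is what VectorDimensionalityReduction and ImageDimensionalityReduction then do per sample or per pixel.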
Limitations
None
Authors
OTB-Team
See also
ImageDimensionalityReduction, VectorDimensionalityReduction
Example of use
io.vd: cuprite_samples.sqlite
io.out: mode.ae
algorithm: pca
algorithm.pca.dim: 8
feat: value_0 value_1 value_2 value_3 value_4 value_5 value_6 value_7 value_8 value_9
otbcli_TrainDimensionalityReduction -io.vd cuprite_samples.sqlite -io.out mode.ae -algorithm pca -algorithm.pca.dim 8 -feat value_0 value_1 value_2 value_3 value_4 value_5 value_6 value_7 value_8 value_9