#include <otbRandomForestsMachineLearningModel.h>
template<class TInputValue, class TTargetValue>
class otb::RandomForestsMachineLearningModel< TInputValue, TTargetValue >
Definition at line 36 of file otbRandomForestsMachineLearningModel.h.
◆ ConfidenceValueType
template<class TInputValue , class TTargetValue >
◆ ConstPointer
template<class TInputValue , class TTargetValue >
◆ InputListSampleType
template<class TInputValue , class TTargetValue >
◆ InputSampleType
template<class TInputValue , class TTargetValue >
◆ InputValueType
template<class TInputValue , class TTargetValue >
◆ Pointer
template<class TInputValue , class TTargetValue >
◆ ProbaSampleType
template<class TInputValue , class TTargetValue >
◆ RFType
template<class TInputValue , class TTargetValue >
◆ Self
template<class TInputValue , class TTargetValue >
◆ Superclass
template<class TInputValue , class TTargetValue >
◆ TargetListSampleType
template<class TInputValue , class TTargetValue >
◆ TargetSampleType
template<class TInputValue , class TTargetValue >
◆ TargetValueType
template<class TInputValue , class TTargetValue >
◆ VariableImportanceMatrixType
template<class TInputValue , class TTargetValue >
◆ RandomForestsMachineLearningModel() [1/2]
template<class TInputValue , class TOutputValue >
◆ ~RandomForestsMachineLearningModel()
template<class TInputValue , class TTargetValue >
◆ RandomForestsMachineLearningModel() [2/2]
template<class TInputValue , class TTargetValue >
◆ CanReadFile()
template<class TInputValue , class TOutputValue >
◆ CanWriteFile()
template<class TInputValue , class TOutputValue >
◆ CreateAnother()
template<class TInputValue , class TTargetValue >
◆ DoPredict()
template<class TInputValue , class TOutputValue >
◆ GetCalculateVariableImportance()
template<class TInputValue , class TTargetValue >
If true, variable importance will be calculated during training.
◆ GetComputeMargin()
template<class TInputValue , class TTargetValue >
Whether to compute the margin (difference in probability between the two most voted classes) instead of the confidence (probability of the most voted class) in prediction.
◆ GetComputeSurrogateSplit()
template<class TInputValue , class TTargetValue >
If true, surrogate splits are computed (useful for handling missing data).
◆ GetForestAccuracy()
template<class TInputValue , class TTargetValue >
Sufficient accuracy (out-of-bag error) used as a termination criterion for training.
◆ GetMaxDepth()
template<class TInputValue , class TTargetValue >
The depth of the tree. A low value will likely underfit and conversely a high value will likely overfit. The optimal value can be obtained using cross validation or other suitable methods.
◆ GetMaxNumberOfCategories()
template<class TInputValue , class TTargetValue >
Cluster possible values of a categorical variable into K <= max_categories clusters to find a suboptimal split.
◆ GetMaxNumberOfTrees()
template<class TInputValue , class TTargetValue >
The maximum number of trees in the forest.
◆ GetMaxNumberOfVariables()
template<class TInputValue , class TTargetValue >
The size of the randomly selected subset of features at each tree node, used to find the best split(s).
◆ GetMinSampleCount()
template<class TInputValue , class TTargetValue >
If the number of samples in a node is less than this parameter, the node will not be split.
◆ GetNameOfClass()
template<class TInputValue , class TTargetValue >
Run-time type information (and related methods).
◆ GetPriors()
template<class TInputValue , class TTargetValue >
The array of a priori class probabilities, sorted by the class label value.
Definition at line 100 of file otbRandomForestsMachineLearningModel.h.
◆ GetRegressionAccuracy()
template<class TInputValue , class TTargetValue >
Termination criteria for regression trees: if all absolute differences between an estimated value in a node and the values of the train samples in this node are less than this parameter, the node will not be split.
◆ GetTerminationCriteria()
template<class TInputValue , class TTargetValue >
Termination criteria for the training algorithm.
◆ GetTrainError()
template<class TInputValue , class TOutputValue >
◆ GetVariableImportance()
template<class TInputValue , class TOutputValue >
◆ Load()
template<class TInputValue , class TOutputValue >
◆ New()
template<class TInputValue , class TTargetValue >
Method for creation through the object factory.
◆ operator=()
template<class TInputValue , class TTargetValue >
◆ PrintSelf()
template<class TInputValue , class TOutputValue >
◆ Save()
template<class TInputValue , class TOutputValue >
◆ SetCalculateVariableImportance()
template<class TInputValue , class TTargetValue >
If true, variable importance will be calculated during training.
◆ SetComputeMargin()
template<class TInputValue , class TTargetValue >
Whether to compute the margin (difference in probability between the two most voted classes) instead of the confidence (probability of the most voted class) in prediction.
◆ SetComputeSurrogateSplit()
template<class TInputValue , class TTargetValue >
If true, surrogate splits are computed (useful for handling missing data).
◆ SetForestAccuracy()
template<class TInputValue , class TTargetValue >
Sufficient accuracy (out-of-bag error) used as a termination criterion for training.
◆ SetMaxDepth()
template<class TInputValue , class TTargetValue >
The depth of the tree. A low value will likely underfit and conversely a high value will likely overfit. The optimal value can be obtained using cross validation or other suitable methods.
◆ SetMaxNumberOfCategories()
template<class TInputValue , class TTargetValue >
Cluster possible values of a categorical variable into K <= max_categories clusters to find a suboptimal split.
◆ SetMaxNumberOfTrees()
template<class TInputValue , class TTargetValue >
The maximum number of trees in the forest.
◆ SetMaxNumberOfVariables()
template<class TInputValue , class TTargetValue >
The size of the randomly selected subset of features at each tree node, used to find the best split(s).
◆ SetMinSampleCount()
template<class TInputValue , class TTargetValue >
If the number of samples in a node is less than this parameter, the node will not be split.
◆ SetPriors()
template<class TInputValue , class TTargetValue >
The array of a priori class probabilities, sorted by the class label value.
Definition at line 105 of file otbRandomForestsMachineLearningModel.h.
◆ SetRegressionAccuracy()
template<class TInputValue , class TTargetValue >
Termination criteria for regression trees: if all absolute differences between an estimated value in a node and the values of the train samples in this node are less than this parameter, the node will not be split.
◆ SetTerminationCriteria()
template<class TInputValue , class TTargetValue >
Termination criteria for the training algorithm.
◆ Train()
template<class TInputValue , class TOutputValue >
◆ m_CalculateVariableImportance
template<class TInputValue , class TTargetValue >
◆ m_ComputeMargin
template<class TInputValue , class TTargetValue >
Whether to compute the margin (difference in probability between the two most voted classes) instead of the confidence (probability of the most voted class) in prediction.
Definition at line 230 of file otbRandomForestsMachineLearningModel.h.
◆ m_ComputeSurrogateSplit
template<class TInputValue , class TTargetValue >
If true, surrogate splits are computed (useful for handling missing data).
Definition at line 171 of file otbRandomForestsMachineLearningModel.h.
◆ m_ForestAccuracy
template<class TInputValue , class TTargetValue >
◆ m_MaxDepth
template<class TInputValue , class TTargetValue >
The depth of the tree. A low value will likely underfit and conversely a high value will likely overfit. The optimal value can be obtained using cross validation or other suitable methods.
Definition at line 161 of file otbRandomForestsMachineLearningModel.h.
◆ m_MaxNumberOfCategories
template<class TInputValue , class TTargetValue >
Cluster possible values of a categorical variable into K <= max_categories clusters to find a suboptimal split. If a discrete variable on which the training procedure tries to make a split takes more than max_categories values, finding the precise best subset may take a very long time because the exact algorithm is exponential. Instead, many decision-tree engines (including ML) try to find a sub-optimal split in this case by clustering all the samples into max_categories groups, that is, by merging some categories together. The clustering is applied only in n > 2-class classification problems for categorical variables with N > max_categories possible values. In case of regression and 2-class classification, the optimal split can be found efficiently without employing clustering, so the parameter is not used in these cases.
Definition at line 187 of file otbRandomForestsMachineLearningModel.h.
◆ m_MaxNumberOfTrees
template<class TInputValue , class TTargetValue >
The maximum number of trees in the forest. Typically, the more trees you have, the better the accuracy. However, the improvement in accuracy generally diminishes and asymptotes past a certain number of trees. Keep in mind also that the number of trees increases the prediction time linearly.
Definition at line 219 of file otbRandomForestsMachineLearningModel.h.
◆ m_MaxNumberOfVariables
template<class TInputValue , class TTargetValue >
The size of the randomly selected subset of features at each tree node, used to find the best split(s). If you set it to 0, the size will be set to the square root of the total number of features.
Definition at line 212 of file otbRandomForestsMachineLearningModel.h.
◆ m_MinSampleCount
template<class TInputValue , class TTargetValue >
◆ m_Priors
template<class TInputValue , class TTargetValue >
The array of a priori class probabilities, sorted by the class label value. The parameter can be used to tune the decision tree preferences toward a certain class. For example, if you want to detect some rare anomaly occurrence, the training base will likely contain many more normal cases than anomalies, so a very good classification performance will be achieved just by considering every case as normal. To avoid this, the priors can be specified, where the anomaly probability is artificially increased (up to 0.5 or even greater), so the weight of the misclassified anomalies becomes much bigger, and the tree is adjusted properly. You can also think of this parameter as per-class misclassification weights: if the weight of the first category is 1 and the weight of the second category is 10, then each mistake in predicting the second category is equivalent to making 10 mistakes in predicting the first category.
Definition at line 203 of file otbRandomForestsMachineLearningModel.h.
◆ m_RegressionAccuracy
template<class TInputValue , class TTargetValue >
Termination criteria for regression trees. If all absolute differences between an estimated value in a node and the values of the train samples in this node are less than this parameter, the node will not be split.
Definition at line 170 of file otbRandomForestsMachineLearningModel.h.
◆ m_RFModel
template<class TInputValue , class TTargetValue >
The underlying OpenCV random forests model.
Definition at line 156 of file otbRandomForestsMachineLearningModel.h.
◆ m_TerminationCriteria
template<class TInputValue , class TTargetValue >
The documentation for this class was generated from the following file:
otbRandomForestsMachineLearningModel.h