Next: Application of the Cascades
Up: Detecting Objects in 3D
Previous: Gentle Ada Boost for
The performance of a single classifier is not sufficient for object
classification: it achieves a high hit rate, e.g., 0.999, but also a
high error rate, e.g., 0.5. Nevertheless, the hit rate is
significantly higher than the error rate. To construct an overall
good classifier, several classifiers are arranged in a cascade, i.e.,
a degenerate decision tree. At every stage of the cascade, a decision
is made whether the image contains the object or not; this reduces
both rates. Since the hit rate of each stage is close to one, the
product of the stage hit rates is also close to one, while the
product of the much smaller error rates approaches zero. The cascade
also speeds up the whole classification process, since large parts of
the image contain no relevant data and can be discarded quickly in
the first stages.
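The effect of chaining stages can be illustrated numerically. The sketch below uses the per-stage rates from the example above (hit rate 0.999, error rate 0.5) and an illustrative stage count of 20; these numbers are assumptions for demonstration, not results from the text:

```python
# Sketch: how per-stage rates combine in a cascade.
# A window is accepted only if every stage accepts it, so the
# overall rates are products of the per-stage rates.

def cascade_rates(stage_hit_rate, stage_error_rate, n_stages):
    """Overall hit and false-positive rates of an n-stage cascade."""
    return stage_hit_rate ** n_stages, stage_error_rate ** n_stages

hit, fp = cascade_rates(0.999, 0.5, 20)
# hit stays close to one, while fp approaches zero
```

With 20 stages the overall hit rate is about 0.98, while the overall error rate drops below one in a million, which is the trade-off the cascade construction exploits.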
Figure:
The first three stages of a cascade of classifiers to detect the
object volksbot. Every stage contains several simple classifier
trees that use Haar-like features with a threshold thr. The
returned value is determined by the path through the trees.
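A single tree node of the kind shown in the figure can be sketched as a threshold test on a Haar-like feature, i.e., the difference of two rectangle sums over the image. The helper names, rectangle layout, and return values below are illustrative assumptions, not the exact implementation behind the figure:

```python
# Sketch of one node of a simple classifier tree: a Haar-like
# feature (difference of two rectangle sums) compared against a
# threshold thr. Names and values are illustrative.

def rect_sum(img, x, y, w, h):
    # Naive rectangle sum; an integral image would make this O(1).
    return sum(img[r][c] for r in range(y, y + h)
                         for c in range(x, x + w))

def haar_feature(img, white, black):
    # Basic Haar-like feature: bright region minus dark region.
    return rect_sum(img, *white) - rect_sum(img, *black)

def tree_node(feature_value, thr, val_left, val_right):
    # The returned value depends on which side of the threshold
    # the feature falls, i.e., on the path taken through the tree.
    return val_left if feature_value < thr else val_right

# Usage on a tiny 2x4 "image": left half bright, right half dark.
img = [[1, 1, 0, 0],
       [1, 1, 0, 0]]
f = haar_feature(img, white=(0, 0, 2, 2), black=(2, 0, 2, 2))
result = tree_node(f, thr=0, val_left=-1, val_right=+1)
```

In a full CART several such nodes are chained, and the values returned by all trees of a stage are summed and thresholded to produce the stage decision.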
An overall effective cascade is learned by a simple iterative
method. For every stage, the classification function is learned
until the required hit rate is reached. The process then continues
with the next stage, using the correctly classified positive
examples together with the currently misclassified negative
examples. These negative examples are random image parts, generated
from the given negative images, that pass all previous stages and
are thus misclassified. This bootstrapping process is the most
time-consuming part of the training phase. The number of CARTs used
in each stage classifier may increase with additional stages.
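The iterative training procedure above can be sketched as follows. The helpers `train_stage` (boosted CART training for one stage), `stage.predict`, and `sample_windows` (random image parts cut from the negative images) are hypothetical placeholders, so this is a schematic of the bootstrapping loop, not the actual training code:

```python
# Sketch of the cascade training loop with bootstrapped negatives.
# train_stage, predict, and sample_windows are assumed helpers.

def train_cascade(positives, negative_images, n_stages,
                  min_hit_rate, train_stage, sample_windows):
    cascade = []
    negatives = sample_windows(negative_images)
    for _ in range(n_stages):
        # Learn the stage until it reaches the required hit rate.
        stage = train_stage(positives, negatives, min_hit_rate)
        cascade.append(stage)
        # Keep only the positives the new stage still accepts.
        positives = [p for p in positives if stage.predict(p)]
        # Bootstrapping: the next stage trains on random windows
        # that pass all stages so far, i.e., current false positives.
        negatives = [n for n in sample_windows(negative_images)
                     if all(s.predict(n) for s in cascade)]
        if not negatives:
            break  # no misclassified negatives left to train on
    return cascade
```

Collecting the bootstrapped negatives requires scanning the negative images through the whole partial cascade, which is why this step dominates the training time.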
Fig. shows an example cascade of classifiers for
detecting a volksbot in 2D depth images, whose results are given
in Table .