
The Cascade of Classifiers

The performance of a single classifier alone is not sufficient for object classification, since it achieves a high hit rate, e.g., 0.999, but also a high false positive rate, e.g., 0.5. Nevertheless, the hit rate is much higher than the false positive rate. To construct an overall good classifier, several classifiers are arranged in a cascade, i.e., a degenerate decision tree. In every stage of the cascade, a decision is made whether the image contains the object or not, and only accepted images are passed on to the next stage. This reduces both rates multiplicatively: since the per-stage hit rates are close to one, their product remains close to one, while the product of the smaller false positive rates approaches zero. Furthermore, the cascade speeds up the whole classification process, because most negative examples are already rejected by the early stages. Figure 6 shows an example cascade of classifiers for detecting balls in 2D images, whose results are given in Table I.
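To make the multiplication argument concrete (the symbols $ d_i$ and $ f_i$ and the numbers below are illustrative and not taken from the experiments): if stage $ i$ has hit rate $ d_i$ and false positive rate $ f_i$, a cascade of $ n$ stages has overall rates

$ D = \prod_{i=1}^{n} d_i, \qquad F = \prod_{i=1}^{n} f_i.$

For example, $ n = 10$ stages with $ d_i = 0.999$ and $ f_i = 0.5$ give $ D \approx 0.99$ but $ F \approx 0.001$.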

Figure: The first three stages of a cascade of classifiers for detecting a ball. Every stage contains several simple classifier trees that use Haar-like features with a threshold; the stage returns the sum of their values, $ \sum h(x)$.
\includegraphics[width=181mm]{ball_cascade}

An overall effective cascade is learned by a simple iterative method. For every stage, the classification function $ h(x)$ is learned until the required hit rate is reached. The process then continues with the next stage, trained on the correctly classified positive examples and on the negative examples that are currently misclassified, i.e., the false positives of the previous stages. The number of CARTs used in each stage classifier may increase with additional stages, since later stages face harder classification problems.
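The iterative procedure can be summarized in a short sketch. The following Python snippet is only an illustration of the training loop, not the authors' implementation: the boosted CARTs on Haar-like features are replaced by a trivial threshold on the mean patch intensity, and the function names (train_stage, train_cascade, classify) and the hit-rate parameter are hypothetical.

import numpy as np

def train_stage(pos, neg, min_hit_rate=0.999):
    # Hypothetical stand-in for one boosted stage: a real stage would boost
    # CARTs on Haar-like features and also use `neg` to meet a false-alarm
    # target; here a single threshold on the mean patch intensity is chosen
    # so that at least `min_hit_rate` of the positives are accepted.
    scores_pos = pos.mean(axis=1)
    thr = np.quantile(scores_pos, 1.0 - min_hit_rate)
    def stage(x):
        return x.mean(axis=1) >= thr
    return stage

def train_cascade(pos, neg, num_stages=10):
    # Train stages iteratively: every new stage sees only the positives
    # accepted so far and the negatives that are still misclassified.
    stages = []
    for _ in range(num_stages):
        if len(pos) == 0 or len(neg) == 0:
            break                      # nothing left to separate
        stage = train_stage(pos, neg)
        stages.append(stage)
        pos = pos[stage(pos)]          # correctly classified positives
        neg = neg[stage(neg)]          # remaining false positives
    return stages

def classify(stages, x):
    # An example is accepted only if every stage of the cascade accepts it.
    return all(bool(s(x[None, :])[0]) for s in stages)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.normal(1.0, 1.0, size=(1000, 64))   # synthetic positive patches
    neg = rng.normal(0.0, 1.0, size=(1000, 64))   # synthetic background patches
    cascade = train_cascade(pos, neg)
    print(classify(cascade, pos[0]), classify(cascade, neg[0]))

The sketch reflects the bootstrapping idea of the text: each stage only has to reject the false positives that survived the previous stages, which is why later stages may need more CARTs.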

