
Feature Detection using Integral Images

There are many motivations for using features rather than pixels directly. For mobile robots, a critical motivation is that feature-based systems operate much faster than pixel-based systems [17]. The features used here have the same structure as the Haar basis functions, i.e. the step functions introduced by Alfred Haar to define wavelets [6]. They are also used in [8,9,10,17]. Fig. 4 (left) shows the eleven basis features, i.e. edge, line, diagonal and center-surround features. The base resolution of the object detector is $ 30 \times 30$ pixels; thus the set of possible features in this area is very large (642592 features, see [9] for calculation details). A single feature is computed efficiently on input images using integral images [17], also known as summed area tables [8,9]. An integral image $ I$ is an intermediate representation of the image that contains, at position $ (x,y)$, the sum of the grayscale pixel values of image $ N$ in the rectangle from the origin to $ (x,y)$, i.e.,

$\displaystyle I(x,y) = \sum_{x'=0}^{x}\sum_{y'=0}^{y} N(x',y').$      

The integral image can be computed recursively via $ I(x,y) = I(x,y-1) + I(x-1,y) + N(x,y) - I(x-1,y-1)$ with $ I(-1,y) = I(x,-1) = I(-1,-1) = 0$, and therefore requires only one scan over the input data. This intermediate representation $ I(x,y)$ allows the computation of a rectangle feature value at $ (x,y)$ with height and width $ (h,w)$ using four references (see Figure 4 (top right)):
$\displaystyle F(x,y,h,w) = I(x,y) + I(x+w,y+h) - I(x,y+h) - I(x+w,y).$
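As a concrete illustration, the definition and the four-reference lookup can be sketched as follows (Python/NumPy; the function names are mine, not from the text):

```python
import numpy as np

def integral_image(img):
    # I(x, y) = sum of img over all pixels with x' <= x and y' <= y
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(I, x, y, w, h):
    """Pixel sum of the w x h rectangle just below and right of (x, y),
    using the four references from the text:
    F(x,y,h,w) = I(x,y) + I(x+w,y+h) - I(x,y+h) - I(x+w,y)."""
    def at(xx, yy):  # references outside the top/left border count as 0
        return 0 if xx < 0 or yy < 0 else int(I[yy, xx])
    return at(x, y) + at(x + w, y + h) - at(x, y + h) - at(x + w, y)
```

For example, `rect_sum(I, -1, -1, W, H)` returns the sum of the entire $ W \times H$ image, since three of the four references then fall on the zero border.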

For the computation of the rotated features, Lienhart et al. introduced rotated summed area tables in 2002 that contain the sum of the pixels of the rectangle rotated by 45$ ^\circ$ with the bottom-most corner at $ (x,y)$ and extending to the boundaries of the image (see Figure 4 (bottom right)) [9]:
$\displaystyle I_r(x,y) = \sum_{x'}\sum_{y'=0}^{y - \vert x-x'\vert} N(x',y').$      

The rotated integral image $ I_r$ is computed recursively, i.e., $ I_r(x,y) = I_r(x-1, y-1) + I_r(x+1, y-1) - I_r(x, y-2) + N(x,y) + N(x, y-1)$ using the start values $ I_r(-1,y) = I_r(x, -1) = I_r(x,-2) = I_r(-1, -1) = I_r(-1, -2) = 0$. Four table lookups are required to compute the pixel sum of any rotated rectangle with the formula:
$\displaystyle F_r(x,y,h,w) = I_r(x+w-h, y+w+h-1) + I_r(x, y-1) - I_r(x-h,y+h-1) - I_r(x+w,y+w-1).$
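The rotated lookup can be sketched as follows (Python/NumPy; names are mine). For clarity, $ I_r$ is evaluated here brute-force from its definition rather than with the one-pass recursion, which produces the same values:

```python
import numpy as np

def rotated_triangle_sum(img, x, y):
    """I_r(x, y) evaluated directly from the definition: the sum over all
    image pixels inside the 45-degree triangle with bottom-most corner at
    (x, y), i.e. pixels satisfying y' + |x - x'| <= y."""
    H, W = img.shape
    return sum(int(img[yp, xp])
               for yp in range(H) for xp in range(W)
               if yp + abs(x - xp) <= y)

def rotated_rect_sum(img, x, y, w, h):
    """Four-lookup pixel sum of the 45-degree rotated rectangle, per the text:
    F_r = I_r(x+w-h, y+w+h-1) + I_r(x, y-1)
        - I_r(x-h, y+h-1)     - I_r(x+w, y+w-1)."""
    Ir = lambda a, b: rotated_triangle_sum(img, a, b)
    return (Ir(x + w - h, y + w + h - 1) + Ir(x, y - 1)
            - Ir(x - h, y + h - 1) - Ir(x + w, y + w - 1))
```

For $ w = h = 1$ the rotated rectangle degenerates to the two pixels $ (x,y)$ and $ (x,y+1)$, which is a quick sanity check for the lookup indices.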

Since the features are compositions of rectangles, they are computed with several lookups and subtractions, with the rectangle sums weighted by the areas of the black and white rectangles.
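A two-rectangle edge feature, for instance, can be evaluated as below (a sketch; the white-above-black layout and the function names are mine, not from the text):

```python
import numpy as np

def edge_feature(img, x, y, w, h):
    """Two-rectangle edge feature: a white w x h rectangle on top of a
    black w x h rectangle; value = white sum - black sum. Both sums come
    from one plain integral image."""
    I = img.cumsum(axis=0).cumsum(axis=1)
    def at(xx, yy):  # integral image with zero outside the top/left border
        return 0 if xx < 0 or yy < 0 else int(I[yy, xx])
    def rect(x0, y0, rw, rh):  # pixel sum of rows y0..y0+rh-1, cols x0..x0+rw-1
        return (at(x0 + rw - 1, y0 + rh - 1) - at(x0 - 1, y0 + rh - 1)
                - at(x0 + rw - 1, y0 - 1) + at(x0 - 1, y0 - 1))
    return rect(x, y, w, h) - rect(x, y + h, w, h)
```

On an image whose upper half is bright and lower half dark, a feature straddling the boundary responds with the full area of the white rectangle.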
Figure: Left: Edge, line, diagonal and center surround features are used for classification. Right: Computation of feature values in the shaded region is based on the four upper rectangles.
\includegraphics[width=75mm]{features}

To detect a feature, a threshold is required. This threshold is determined automatically during a fitting process, such that the number of misclassified examples is minimized. Furthermore, the return values $ (\alpha, \beta)$ of the feature are determined such that the error on the examples is minimized. The examples are given as a set of images labeled as positive or negative samples. This set is also used in the learning phase, which is briefly described next.
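The fitting step can be sketched as a decision-stump search over candidate thresholds (a simplification under my own assumptions: 0/1 error, labels in $ \{-1,+1\}$, and $ (\alpha,\beta)$ restricted to $ \pm 1$; the names are mine):

```python
def fit_stump(values, labels):
    """values: feature responses on the training examples; labels: +1 / -1.
    Returns (theta, alpha, beta): examples with value < theta get alpha,
    the rest get beta, chosen to minimize the number of misclassified
    examples."""
    xs = sorted(set(values))
    # candidate thresholds: below all values, between neighbors, above all
    cands = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    best = None
    for theta in cands:
        lo = [l for v, l in zip(values, labels) if v < theta]
        hi = [l for v, l in zip(values, labels) if v >= theta]
        for alpha in (-1, 1):
            for beta in (-1, 1):
                err = sum(l != alpha for l in lo) + sum(l != beta for l in hi)
                if best is None or err < best[0]:
                    best = (err, theta, alpha, beta)
    return best[1], best[2], best[3]
```

With well-separated responses the search recovers a zero-error threshold; on overlapping responses it returns the split with the fewest misclassifications, mirroring the fitting criterion described above.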


root 2004-08-31