
Introduction


A fundamental problem in the design of autonomous mobile cognitive systems is the perception of the environment. A basic part of this perception is to learn, detect, and recognize objects, which has to be done with the limited resources of a mobile robot. The performance of a mobile robot depends crucially on the accuracy, speed, and reliability of its perception and the interpretation process involved. This paper proposes a new method for the learning, fast detection, and classification of instances of 3D object classes. The approach uses 3D laser range and reflectance data acquired on an autonomous mobile robot to perceive 3D objects. The range and reflectance data are transformed into images by off-screen rendering. A cascade of classifiers, i.e., a linear decision tree, is used to detect the objects. Following the ideas of Viola and Jones, each classifier is composed of several simple classifiers, each of which contains an edge, line, or center-surround feature [Viola_2001]. They and others have presented and implemented a method for the efficient computation of these features using an intermediate representation, the integral image [Lienhart_2003_1, Lienhart_2002, Viola_2001]. The object classes are learned with a boosting technique, namely AdaBoost [Freund_1996]. The resulting approach for object classification is reliable and real-time capable, and it combines recent results in computer vision with the emerging technology of 3D laser scanners.
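To illustrate why the integral image allows the simple features to be evaluated quickly, the following minimal Python sketch computes an integral image and a two-rectangle edge feature from it. It is an illustration of the general technique, not the authors' implementation; all function and variable names are ours.

import numpy as np

def integral_image(img):
    # ii[y, x] holds the sum of all pixels above and to the left of (y, x), inclusive.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    # Sum of pixel values inside a rectangle from at most four integral-image
    # lookups, i.e., constant time regardless of the rectangle size.
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def edge_feature(ii, top, left, height, width):
    # Two-rectangle (edge) feature: difference between the sums of the
    # left and right halves of the window.
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

Line and center-surround features follow the same pattern with three rectangles, which is what makes exhaustive evaluation over many window positions feasible in real time.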

Other approaches use information from CCD cameras that provide a view of the robot's environment. However, cameras are difficult to use in natural environments with changing lighting conditions. Robot control architectures that include robot vision rely mainly on tracking features, e.g., invariant features [Se_2001], light sources [Launay_2001], or ceilings [Dellaert_1999]. Other camera-based approaches to robot vision, e.g., stereo cameras and structure from motion, have difficulty providing navigation information for a mobile robot in real time. Hence, many current successful robots are equipped with distance sensors, mainly 2D laser range finders [Thrun_2000]. 2D scanners, however, have difficulty detecting 3D obstacles with protruding edges. There is currently a general trend toward using 3D laser range finders and building 3D maps [Allen_2001, ICAR2003, Sequeira_1999, ISR2001].

Some groups have attempted to build 3D volumetric representations of environments with 2D laser range finders. Thrun et al. [Thrun_2000], for example, use two 2D laser range finders for acquiring 3D data. One laser scanner is mounted horizontally, the other vertically. The latter captures vertical scan lines that are transformed into 3D points using the current robot pose, as sketched below. A few other groups use 3D laser scanners [Allen_2001, Sequeira_1999]. A 3D laser scanner generates consistent 3D data points within a single 3D scan. The RESOLV project aimed at modeling interiors for virtual reality and telepresence [Sequeira_1999]; it used a RIEGL laser range finder on two mobile robots, called EST and AEST (Autonomous Environmental Sensor for Telepresence). The AVENUE project develops a robot for modeling urban environments with a CYRAX laser scanner [Allen_2001].
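The transformation of a vertical scan line into 3D world points can be sketched as follows. This is a simplified illustration assuming a planar robot pose (x, y, heading) and an idealized vertically mounted scanner; the sensor mounting height and all names are ours, not taken from the cited systems.

import numpy as np

def scan_line_to_3d(ranges, angles, robot_pose, sensor_height=0.3):
    # ranges, angles: polar measurements in the scanner's vertical plane.
    # robot_pose: (x, y, theta) of the robot in the world frame.
    # sensor_height: assumed mounting height of the scanner above the ground.
    x, y, theta = robot_pose
    # Point coordinates in the scanner's vertical plane: forward distance and height.
    forward = ranges * np.cos(angles)
    up = ranges * np.sin(angles) + sensor_height
    # Rotate the forward direction by the robot's heading and translate by its position.
    px = x + forward * np.cos(theta)
    py = y + forward * np.sin(theta)
    pz = up
    return np.column_stack((px, py, pz))

Because every scan line uses the robot pose at the moment of acquisition, pose errors accumulate directly into the 3D map, which is one motivation for scanners that acquire a full 3D scan from a single pose.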

In the area of object recognition and classification in 3D range data, Johnson and Hebert use the well-known ICP algorithm [Besl_1992] to register 3D shapes in a common coordinate system [Johnson_1999]. The initial guess required by the ICP algorithm is obtained by localizing the object with spin images [Johnson_1999]. This approach was extended by Shapiro et al. [Correa_2003]. In contrast to our proposed method, both approaches use local, memory-consuming surface signatures based on previously created mesh representations of the objects.
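For readers unfamiliar with ICP, the following is a minimal point-to-point sketch of one registration step, assuming SciPy's cKDTree for nearest-neighbor matching. It is a generic textbook formulation, not the implementation used in the cited work.

import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    # Match each source point to its nearest target point, then compute the
    # rigid transform (R, t) that best aligns the matched pairs (least squares).
    tree = cKDTree(target)
    _, idx = tree.query(source)
    matched = target[idx]
    src_mean, tgt_mean = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_mean).T @ (matched - tgt_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_mean - R @ src_mean
    return R, t

def icp(source, target, iterations=20):
    # Iterate matching and alignment; a good starting guess (e.g., from
    # spin-image localization) keeps the iteration out of local minima.
    src = source.copy()
    for _ in range(iterations):
        R, t = icp_step(src, target)
        src = src @ R.T + t
    return src

The dependence on a good starting guess is exactly why spin images are used for the initial object localization in the cited approach.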

The paper is organized as follows: The next section describes the autonomous mobile robot, which is equipped with the AIS 3D laser range finder. Then we present the object learning and detection algorithm. Section 4 presents the results and Section 5 concludes.


