Other approaches use information from CCD cameras that provide a view of the robot's environment. However, cameras are difficult to use in natural environments with changing light conditions. Robot control architectures that include robot vision rely mainly on tracking features, e.g., invariant features [#!Se_2001!#], light sources [#!Launay_2001!#] or ceiling structures [#!Dellaert_1999!#]. Other camera-based approaches to robot vision, e.g., stereo cameras and structure from motion, have difficulty providing navigation information for a mobile robot in real time. Consequently, many current successful robots are equipped with distance sensors, mainly 2D laser range finders [#!Thrun_2000!#]. However, 2D scanners have difficulty detecting 3D obstacles with protruding edges. There is currently a general trend toward using 3D laser range finders and building 3D maps [#!Allen_2001!#,#!ICAR2003!#,#!Sequeira_1999!#,#!ISR2001!#].
Some groups have attempted to build 3D volumetric representations of environments with 2D laser range finders. For example, Thrun et al. [#!Thrun_2000!#] use two 2D laser range finders to acquire 3D data: one laser scanner is mounted horizontally, the other vertically. The latter grabs a vertical scan line that is transformed into 3D points using the current robot pose. A few other groups use 3D laser scanners [#!Allen_2001!#,#!Sequeira_1999!#], which generate consistent 3D data points within a single 3D scan. The RESOLV project aimed at modeling interiors for virtual reality and telepresence [#!Sequeira_1999!#]; it used a RIEGL laser range finder on two mobile robots called EST and AEST (Autonomous Environmental Sensor for Telepresence). The AVENUE project is developing a robot for modeling urban environments using a CYRAX laser scanner [#!Allen_2001!#].
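The projection of a vertical scan line into world-frame 3D points can be sketched as follows. This is a minimal illustration, not the implementation of Thrun et al.; the function name, the planar pose convention (x, y, heading), and the mounting height are assumptions made for the example.

```python
import math

def vertical_scan_to_3d(ranges, angles, robot_pose, sensor_height=0.4):
    """Project one vertical 2D scan line into world-frame 3D points.

    ranges/angles: polar readings of the vertically mounted scanner
    (angle measured upward from the horizontal).  robot_pose: assumed
    planar pose (x, y, heading) of the robot at the moment the scan
    line was grabbed.  sensor_height: assumed mounting height.
    """
    x_r, y_r, heading = robot_pose
    points = []
    for r, a in zip(ranges, angles):
        # In the sensor frame the scan line lies in a vertical plane:
        forward = r * math.cos(a)   # distance along the robot's heading
        up = r * math.sin(a)        # height above the sensor
        # Rotate into the world frame by the heading, then translate
        # by the robot's position.
        x = x_r + forward * math.cos(heading)
        y = y_r + forward * math.sin(heading)
        z = sensor_height + up
        points.append((x, y, z))
    return points
```

As the robot moves, successive scan lines are projected with their respective poses, so the accumulated points form a 3D sweep of the environment; the accuracy of the map is therefore bounded by the accuracy of the pose estimates.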
In the area of object recognition and classification in 3D range data, Johnson and Hebert use the well-known ICP algorithm [#!Besl_1992!#] to register 3D shapes in a common coordinate system [#!Johnson_1999!#]. The initial pose estimate required by the ICP algorithm is obtained by localizing the object with spin images [#!Johnson_1999!#]. This approach was extended by Shapiro et al. [#!Correa_2003!#]. In contrast to our proposed method, both approaches use local, memory-consuming surface signatures based on previously created mesh representations of the objects.
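The core ICP loop of Besl and McKay alternates between pairing each source point with its nearest target point and applying the closed-form least-squares rigid transform for those pairs. The sketch below shows this for the 2D case with brute-force nearest neighbours; real implementations work in 3D with k-d trees and an outlier rejection step, and the function name and iteration count here are illustrative choices.

```python
import math

def icp_2d(source, target, iterations=20):
    """Minimal point-to-point ICP in the plane.

    Repeats: (1) nearest-neighbour correspondence, (2) closed-form
    least-squares rotation/translation aligning the paired sets.
    Returns the source points after alignment.
    """
    src = [tuple(p) for p in source]
    for _ in range(iterations):
        # 1. Pair each source point with its closest target point.
        pairs = [(p, min(target,
                         key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in src]
        n = len(pairs)
        # 2. Centroids of both paired sets.
        cx = sum(p[0] for p, _ in pairs) / n
        cy = sum(p[1] for p, _ in pairs) / n
        tx = sum(q[0] for _, q in pairs) / n
        ty = sum(q[1] for _, q in pairs) / n
        # 3. Optimal rotation angle from the cross-covariance terms.
        sxx = sum((p[0]-cx) * (q[0]-tx) for p, q in pairs)
        syy = sum((p[1]-cy) * (q[1]-ty) for p, q in pairs)
        sxy = sum((p[0]-cx) * (q[1]-ty) for p, q in pairs)
        syx = sum((p[1]-cy) * (q[0]-tx) for p, q in pairs)
        theta = math.atan2(sxy - syx, sxx + syy)
        c, s = math.cos(theta), math.sin(theta)
        # 4. Rotate about the source centroid, translate onto the
        #    target centroid.
        src = [(tx + c*(p[0]-cx) - s*(p[1]-cy),
                ty + s*(p[0]-cx) + c*(p[1]-cy)) for p in src]
    return src
```

Because the nearest-neighbour pairing is only correct when the clouds start close together, ICP converges to a local minimum; this is why a starting guess, such as the spin-image localization above, is needed.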
The paper is organized as follows: The next section describes the autonomous mobile robot that is equipped with the AIS 3D laser range finder. Then we present the object learning and detection algorithm. Section 4 presents the results and Section 5 concludes.