
Introduction

Digital 3D models are in demand in rescue and inspection robotics, facility management and architecture. In particular, mobile systems with 3D laser scanners that automatically perform multiple steps such as scanning, gaging and autonomous driving have the potential to greatly advance the field of environmental sensing. Furthermore, 3D information available in real time enables autonomous robots to navigate in unknown environments.

This paper presents a planning module for an automatic system for the gaging and digitalization of 3D indoor environments. The complete system consists of an autonomous mobile robot, a reliable 3D laser range finder and several software modules. The first software module is a scan matching algorithm based on the iterative closest point (ICP) algorithm, which registers the 3D scans in a common coordinate system and relocalizes the robot. The second component, the next best view planner, computes the next nominal pose based on the 3D data acquired so far and calculates a suitable trajectory to it while avoiding 3D obstacles.
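To make the registration step concrete, the following is a minimal sketch of one point-to-point ICP iteration: brute-force nearest-neighbour correspondences followed by an SVD-based estimate of the rigid transform. It assumes the Eigen library and illustrative names; it is a sketch of the general technique, not the implementation used in our system.

// Minimal point-to-point ICP iteration (illustrative sketch, not the system's code).
// Assumes the Eigen library; correspondences are found by brute-force search.
#include <Eigen/Dense>
#include <cstddef>
#include <limits>
#include <vector>

using Points = std::vector<Eigen::Vector3d>;

// One ICP iteration: pair each model point with its closest scene point,
// then compute the rigid transform (R, t) minimizing the squared error
// via the SVD of the cross-covariance matrix, and apply it to the model.
void icpStep(const Points& scene, Points& model,
             Eigen::Matrix3d& R, Eigen::Vector3d& t)
{
    // 1. Brute-force nearest-neighbour correspondences.
    Points matched(model.size());
    for (std::size_t i = 0; i < model.size(); ++i) {
        double best = std::numeric_limits<double>::max();
        for (const auto& q : scene) {
            double d = (model[i] - q).squaredNorm();
            if (d < best) { best = d; matched[i] = q; }
        }
    }

    // 2. Centroids of both point sets.
    Eigen::Vector3d cm = Eigen::Vector3d::Zero(), cs = Eigen::Vector3d::Zero();
    for (std::size_t i = 0; i < model.size(); ++i) { cm += model[i]; cs += matched[i]; }
    cm /= static_cast<double>(model.size());
    cs /= static_cast<double>(model.size());

    // 3. Cross-covariance matrix and its SVD give the rotation, then the translation.
    Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
    for (std::size_t i = 0; i < model.size(); ++i)
        H += (model[i] - cm) * (matched[i] - cs).transpose();
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {                      // guard against reflections
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }
    t = cs - R * cm;

    // 4. Apply the estimated transform to the model scan.
    for (auto& p : model) p = R * p + t;
}

Iterating this step until the error change falls below a threshold registers the new 3D scan against the scans acquired so far.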

Motion planning has to be extended beyond basic path planning in the presence of obstacles. Visibility requirements must also be satisfied, and the question we are interested in is: How can a robot keep the number of sensing operations to a minimum while still allowing the acquired data to be registered and merged into a single representation? This question combines motion planning with sensing and model construction.
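The trade-off can be illustrated by a greedy selection among candidate poses, scored by the unseen volume a pose is expected to reveal against the cost of driving there, and constrained to retain enough overlap with the known model for registration. The names and parameters below (expectedGain, lambda, minOverlap) are illustrative assumptions, not the planner presented later in the paper.

// Greedy next-best-view selection sketch (illustrative only).
#include <cstddef>
#include <vector>

struct Candidate {
    double expectedGain;   // estimated newly visible volume (e.g. unseen voxels)
    double travelCost;     // length of a collision-free path to the pose
    double overlap;        // overlap with known surfaces, needed for registration
};

// Returns the index of the best candidate, or -1 if no pose allows registration.
// 'minOverlap' and the cost weight 'lambda' are assumed tuning parameters.
int selectNextBestView(const std::vector<Candidate>& candidates,
                       double minOverlap, double lambda)
{
    int best = -1;
    double bestScore = 0.0;
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        const Candidate& c = candidates[i];
        if (c.overlap < minOverlap) continue;            // scan could not be merged
        double score = c.expectedGain - lambda * c.travelCost;
        if (best < 0 || score > bestScore) { best = static_cast<int>(i); bestScore = score; }
    }
    return best;
}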

The paper is organized as follows. After discussing the state of the art in the remainder of this section, Section 2 describes the autonomous mobile robot and the AIS 3D laser range finder. The next two sections present the next best view planner and give a brief description of the motor controller. Section 5 concludes the paper.

Some groups have attempted to build 3D volumetric representations of environments with 2D laser range finders. Thrun et al. [1], Früh et al. [2] and Zhao et al. [3] use two 2D laser range finders to acquire 3D data. One laser scanner is mounted horizontally and one is mounted vertically. The latter grabs a vertical scan line which is transformed into 3D points using the current robot pose. Since the vertical scanner cannot scan the sides of objects, Zhao et al. use two additional vertically mounted 2D scanners shifted by $45^\circ$ to reduce occlusion [3]. The horizontal scanner is used to compute the robot pose; the precision of the 3D data points depends on that pose and on the precision of the scanner. All of these approaches have difficulties navigating around 3D obstacles with protruding edges, since such obstacles are only detected while the robot passes them. Exploration schemes for environment digitalization are missing.
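For illustration, the sketch below shows how such a vertical scan line can be lifted to 3D world points using a planar robot pose estimated from the horizontal scanner. The structure and names are assumptions, not the code of the cited systems.

// Lifting a vertically mounted 2D scanner's readings to 3D world points
// using the current planar robot pose (illustrative sketch).
#include <cmath>
#include <cstddef>
#include <vector>

struct Pose2D  { double x, y, theta; };   // planar robot pose in the world frame
struct Point3D { double x, y, z; };

// Convert one vertical scan line (ranges r_i at elevation angles phi_i)
// into world-frame 3D points.
std::vector<Point3D> liftVerticalScan(const std::vector<double>& ranges,
                                      const std::vector<double>& angles,
                                      const Pose2D& pose)
{
    std::vector<Point3D> world;
    world.reserve(ranges.size());
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        // Point in the scanner's vertical plane: forward and upward components.
        double forward = ranges[i] * std::cos(angles[i]);
        double up      = ranges[i] * std::sin(angles[i]);
        // Rotate the forward component by the robot heading and translate.
        world.push_back({ pose.x + forward * std::cos(pose.theta),
                          pose.y + forward * std::sin(pose.theta),
                          up });
    }
    return world;
}

Any error in the estimated pose therefore propagates directly into the 3D points, which is why the precision of these maps is bounded by the quality of the 2D localization.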

A few other groups use 3D laser scanners [4,5]. A 3D laser scanner generates consistent 3D data points within a single 3D scan. The RESOLV project aimed to model interiors for virtual reality and telepresence [4]; it uses ICP for scan matching and a perception planning module for minimizing occlusions. The AVENUE project develops a robot for modeling urban environments [5]. Its planning module [6] computes set intersections of volumes to determine occluded volumes and to create a solid model.
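As a generic illustration of this idea, the sketch below intersects per-scan occlusion masks over a common voxel grid to obtain the volume that no scan has resolved yet. It only names the general technique and is not the implementation of the cited planning module.

// Occluded-volume computation as a set intersection over a voxel grid
// (generic illustrative sketch).
#include <cstddef>
#include <vector>

using VoxelMask = std::vector<bool>;   // true = voxel occluded in that scan

// A voxel remains unresolved only if it was occluded in every scan so far.
VoxelMask occludedVolume(const std::vector<VoxelMask>& perScanOcclusion)
{
    if (perScanOcclusion.empty()) return {};
    VoxelMask result = perScanOcclusion.front();
    for (std::size_t s = 1; s < perScanOcclusion.size(); ++s)
        for (std::size_t v = 0; v < result.size(); ++v)
            result[v] = result[v] && perScanOcclusion[s][v];
    return result;   // voxels no scan has resolved yet
}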

