
Introduction

Automatic and precise reconstruction of indoor environments is an important task in architecture and robotics. Autonomous mobile robots equipped with 3D laser range finders are well suited for gauging the 3D data. Due to odometry errors, the robot's self-localization is imprecise and can therefore only serve as a starting point for registering the 3D scans in a common coordinate system. Furthermore, the merging of the views, as well as the scanning process itself, is noisy, so small errors may occur. We overcome these problems by extending the reconstruction process with a new knowledge-based approach for automatic model refinement.

Since the architectural shapes of environments follow standard conventions arising from tradition or utility [6], we exploit this knowledge for the reconstruction of indoor environments. The knowledge used describes general attributes of the domain, i.e., architectural features such as planar walls, ceilings and floors. Different domains require different knowledge, e.g., for reverse engineering of CAD parts [17]. We show that applying general knowledge to recover specific knowledge improves reverse engineering.

This paper presents algorithms for building compact and precise 3D models and extends our work in [12]. The proposed algorithm consists of three steps: First, we extract features, i.e., planes, from the registered, unmeshed range data. The planes are found by an algorithm that combines the RANSAC (Random Sample Consensus) algorithm with the ICP (Iterative Closest Point) algorithm [1,3]. Second, the computed planes are labeled based on their relative orientation. A predefined semantic net implementing general knowledge about indoor environments is employed to define these orientations. The semantic net is externalized as a set of Horn clauses, and a 3D analysis of the previously found planes compiles additional clauses. Prolog's unification and backtracking algorithms are used to derive a scene-specific interpretation from the general knowledge. Finally, architectural constraints such as parallelism and orthogonality are enforced with respect to the gauged 3D data by numerical methods to refine the 3D model. To this end, two minimization algorithms are compared: Powell's method and the downhill simplex method.
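To make the feature-extraction step concrete, the following is a minimal sketch of a RANSAC-style plane search over a registered point cloud. The function name, parameters, and thresholds are illustrative assumptions; the algorithm used in this paper additionally interleaves ICP-style refinement and is not reproduced here.

import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.02, rng=None):
    """Illustrative RANSAC plane search (not the paper's exact algorithm).

    points : (N, 3) array of registered 3D scan points.
    Returns (normal, d, inliers) for the plane n.x + d = 0 with the
    largest consensus set, or None if every sample was degenerate.
    """
    rng = rng or np.random.default_rng()
    best, best_count = None, 0
    for _ in range(n_iter):
        # Draw three distinct points and compute the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # collinear sample, try again
            continue
        normal /= norm
        d = -normal @ p0
        # Consensus set: points within dist_thresh of the candidate plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        count = int(inliers.sum())
        if count > best_count:
            best, best_count = (normal, d, inliers), count
    return best

In a full pipeline, the points belonging to the winning plane would be removed and the search repeated until no sufficiently large consensus set remains, yielding one candidate plane per wall, floor, or ceiling segment.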
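The final refinement step can likewise be sketched as an unconstrained minimization solved with the downhill simplex (Nelder-Mead) method, here via scipy.optimize. The cost function, the spherical-angle parameterization of the plane normals, and the penalty weight are assumptions made for illustration, not the paper's exact formulation.

import numpy as np
from scipy.optimize import minimize

def refinement_cost(angles, plane_points, constraints, weight=10.0):
    """Data term (point-to-plane distances) plus penalty terms that push
    labeled plane pairs toward exact parallelism or orthogonality.

    angles       : flat array with two spherical angles (theta, phi)
                   per plane, parameterizing each unit normal.
    plane_points : list of (N_i, 3) arrays, the inliers of each plane.
    constraints  : list of (i, j, "parallel" | "orthogonal") tuples
                   produced by the semantic labeling.
    """
    cost, normals = 0.0, []
    for i, pts in enumerate(plane_points):
        theta, phi = angles[2 * i], angles[2 * i + 1]
        n = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        normals.append(n)
        # Squared distances to the plane through the centroid with normal n.
        cost += np.sum(((pts - pts.mean(axis=0)) @ n) ** 2)
    for i, j, relation in constraints:
        dot = normals[i] @ normals[j]
        if relation == "parallel":   # |n_i . n_j| should equal 1
            cost += weight * (1.0 - abs(dot)) ** 2
        else:                        # orthogonal: n_i . n_j should equal 0
            cost += weight * dot ** 2
    return cost

# Downhill simplex over all plane parameters at once:
# result = minimize(refinement_cost, initial_angles,
#                   args=(plane_points, constraints), method="Nelder-Mead")

Powell's method is available through the same scipy interface (method="Powell"), so comparing the two minimizers amounts to a one-line change in such a setup.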

The paper is organized as follows. After discussing the state of the art in the following part, we present the 3D laser range finder that is mounted on an autonomous mobile robot. We then describe the algorithms for 3D model-based analysis and scene refinement, which run after data acquisition: Section 2 presents the feature extraction algorithm, the algorithms for the semantic interpretation of the data are given in Section 3, and Section 4 describes the model refinement. Section 5 concludes the paper.


