Automatic and precise reconstruction of indoor environments is an
important task in robotics and architecture. Autonomous mobile
robots equipped with a 3D laser range finder are well suited for
acquiring the 3D data. Due to odometry errors, the self-localization
of the robot is imprecise and can therefore only be used as a
starting point for registering the 3D scans in a common coordinate
system. Furthermore, the merging of the views as well as the
scanning process itself is noisy, and small errors may occur. We
overcome these problems by extending the reconstruction process
with a new knowledge-based approach for automatic model refinement.
Since architectural shapes of environments follow standard
conventions arising from tradition or utility (9), we can exploit
this knowledge for the reconstruction of indoor environments. The
knowledge used describes general attributes of the domain, i.e.,
architectural features such as planar walls, ceilings, and floors.
Different domains require different knowledge, e.g., the reverse
engineering of CAD parts (20). We show that applying general
knowledge to recover specific knowledge improves reverse
engineering.
In mobile robotics, one important task is to learn the environment
in order to fulfill specific jobs. 3D maps are needed for plan
execution and obstacle avoidance (23). Volumetric maps, i.e., 3D
point clouds, are often large and difficult to use directly in
control tasks. Therefore, some groups have attempted to generate
compact flat 3D models (12,15) or compact bounding-box models (24).
This paper presents algorithms for building compact and precise
3D models and for generating a coarse semantic interpretation,
thus creating coarse semantic maps. The proposed algorithm
consists of three steps: First, we extract features, i.e., planes,
from registered, unmeshed range data. The planes are found by an
algorithm that combines the RANSAC (Random Sample Consensus)
algorithm with the ICP (Iterative Closest Point) algorithm (5,1).
Second, the computed planes are labeled based on their relative
orientation. A predefined semantic net implementing general
knowledge about indoor environments is employed to define these
orientations. Finally, architectural constraints such as
parallelism and orthogonality are enforced with respect to the
measured 3D data by numerical methods. A sketch of the
plane-extraction step is given below.
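To make the first step concrete, the following Python sketch shows a minimal greedy RANSAC plane extraction from a registered point cloud. It is an illustration only: the iteration budget, inlier distance, and minimum inlier count are assumed values that depend on sensor noise and scene scale, and the paper's actual method additionally interleaves ICP-style refinement of each plane hypothesis, which is omitted here.

import numpy as np

def fit_plane(p1, p2, p3):
    # Plane through three points, returned as (unit normal n, offset d) with n.x + d = 0.
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None                      # degenerate (collinear) sample
    n = n / norm
    return n, -np.dot(n, p1)

def ransac_planes(points, max_planes=10, iterations=500,
                  inlier_dist=0.05, min_inliers=500):
    # Greedily extract dominant planes from an Nx3 point cloud.
    # All thresholds are hypothetical example values.
    remaining = points.copy()
    planes = []
    rng = np.random.default_rng(0)
    while len(planes) < max_planes and len(remaining) >= min_inliers:
        best_inliers, best_plane = None, None
        for _ in range(iterations):
            # 1. Sample three points and hypothesize a plane.
            idx = rng.choice(len(remaining), 3, replace=False)
            hyp = fit_plane(*remaining[idx])
            if hyp is None:
                continue
            n, d = hyp
            # 2. Score the hypothesis by counting points close to the plane.
            dist = np.abs(remaining @ n + d)
            inliers = dist < inlier_dist
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_plane = inliers, (n, d)
        # 3. Accept the best plane only if it explains enough points.
        if best_inliers is None or best_inliers.sum() < min_inliers:
            break
        planes.append((best_plane, remaining[best_inliers]))
        remaining = remaining[~best_inliers]
    return planes

In the full pipeline, each accepted plane would be refined against its inlier set before its points are removed, and the resulting set of planes is then passed to the semantic labeling and constraint-enforcement steps described above.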
The paper is organized as follows. After discussing the state of
the art, we present the 3D laser range finder and the autonomous
mobile robot. The second section presents the range image
registration, followed by a description of the feature extraction
algorithm. The algorithm for semantic interpretation of the data
is given in Section 4. Section 5 describes the model refinement.
Section 6 summarizes the results and concludes the paper.