
Planning the Next Best View

Exploration starts with a blank map, i.e., the environment is unknown. The robot's first action is to acquire a 3D scan. Based on this first scan, an initial ground plane is generated. The Hough transformation is used to detect horizontal lines in the ground plane, including lines of length 0 (fig. 3 (left)). The detected lines are transformed into polygons, where the edges are labeled either as seen or as unseen (fig. 3). The seen edges are the detected lines, which are connected by unseen edges (third picture). To connect the seen lines, they have to be sorted. The sorting criterion is the smallest angle between the end points of a line and the scanner position. Fig. 3 (second picture) shows how two lines are connected: $\alpha_1 < \alpha_2$ and no further $\alpha$ exists between them.
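The following Python sketch illustrates this sorting and labeling step. It is not the authors' implementation; the segment representation, edge labels, and function names are assumptions chosen for illustration.

\begin{verbatim}
import math

# Hypothetical sketch: sort the detected line segments by the viewing angle of
# their endpoints from the scanner position and connect consecutive segments
# with "unseen" edges to form the polygon.

def angle_from_scanner(point, scanner):
    """Angle of a point as seen from the scanner position."""
    return math.atan2(point[1] - scanner[1], point[0] - scanner[0])

def build_polygon(segments, scanner):
    """segments: list of ((x1, y1), (x2, y2)); returns labeled polygon edges."""
    oriented = []
    for p, q in segments:
        # Orient each segment so that its first endpoint has the smaller angle.
        if angle_from_scanner(p, scanner) > angle_from_scanner(q, scanner):
            p, q = q, p
        oriented.append((p, q))
    # Sorting criterion: smallest angle between the end points and the scanner.
    oriented.sort(key=lambda seg: angle_from_scanner(seg[0], scanner))
    edges = []
    for i, (p, q) in enumerate(oriented):
        edges.append((p, q, "seen"))                     # detected line
        nxt = oriented[(i + 1) % len(oriented)][0]
        edges.append((q, nxt, "unseen"))                 # connecting gap
    return edges
\end{verbatim}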

Figure 3: A ground plane of the scene inside castle Birlinghoven (fig. 2), taken at a height of 50cm $\pm $ 5cm. Left: The detected lines. Second: Lines are sorted by using the smallest angle between the lines and the scanner position. Third: The polygon. Right: Two polygons with edge labels prior to merging. The resulting map is given in fig. 5 (top row, rightmost illustration).

The second step of the next best view algorithm randomly generates candidate positions $(x,y)$ in the interior of the polygon. Every position has the same probability (fig. 4 (left)).
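One way to draw such uniformly distributed candidates is rejection sampling over the polygon's bounding box. The sketch below assumes the polygon is given as a list of vertices; it is not taken from the original system.

\begin{verbatim}
import random

def point_in_polygon(p, vertices):
    """Ray-casting test: is p = (x, y) inside the polygon?"""
    x, y = p
    inside = False
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def sample_candidates(vertices, n_candidates=200):
    """Draw candidate positions uniformly from the interior of the polygon."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    candidates = []
    while len(candidates) < n_candidates:
        p = (random.uniform(min(xs), max(xs)),
             random.uniform(min(ys), max(ys)))
        if point_in_polygon(p, vertices):    # keep only interior points
            candidates.append(p)
    return candidates
\end{verbatim}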

The candidate with the highest information gain is selected based on its position relative to the unseen lines, i.e., the direction of the next measurement has to point towards the unseen lines, since unexplored area lies behind these lines. To estimate the information gain, a virtual laser scanner with an apex angle of $180^\circ$ and a distance range, e.g., [0.5m, 15m], depending on the slice height, is simulated. The number of intersections with the polygon determines the information gain $V(\vec p)$ and the direction $\theta_p$ of the candidate location $\vec p = (x,y)^T$:

\begin{eqnarray*}
V(\vec p) & = & \max_{\varphi} \Big\{ \sum_{i=\varphi}^{\varphi+\pi} f(\vec p, i) \Big\} \\[0.8ex]
\theta_p & = & \mathop{\rm argmax}_{\varphi} \Big\{ \sum_{i=\varphi}^{\varphi+\pi} f(\vec p, i) \Big\} + \frac{\pi}{2} \\[0.8ex]
& = & \varphi_{max} + \frac{\pi}{2},
\end{eqnarray*}



with $f(\vec p, i) = 1$ if the simulated laser beam in direction $i$ scans an unseen edge, and $0$ otherwise. Fig. 4 shows 200 candidate positions inside a polygon (left) and the result of the evaluation (right).
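This evaluation can be sketched as follows, assuming a $1^\circ$ beam discretization and the labeled edge format from the sketch above; the ray-intersection routine and all names are illustrative, not the authors' code.

\begin{verbatim}
import math

def ray_edge_distance(p, direction, edge):
    """Distance from p along a beam with angle `direction` to the polygon
    edge ((x1, y1), (x2, y2), label), or None if the beam misses it."""
    (x1, y1), (x2, y2) = edge[0], edge[1]
    dx, dy = math.cos(direction), math.sin(direction)
    ex, ey = x2 - x1, y2 - y1
    det = ex * dy - ey * dx
    if abs(det) < 1e-12:                                 # beam parallel to edge
        return None
    t = (ex * (y1 - p[1]) - ey * (x1 - p[0])) / det      # distance along beam
    u = (dx * (y1 - p[1]) - dy * (x1 - p[0])) / det      # position on edge
    return t if t >= 0.0 and 0.0 <= u <= 1.0 else None

def information_gain(p, edges, r_min=0.5, r_max=15.0):
    """Return (V(p), theta_p) for a candidate position p and labeled edges."""
    n = 360                                              # 1 degree beams
    f = [0] * n
    for i in range(n):
        nearest = None
        for edge in edges:
            d = ray_edge_distance(p, math.radians(i), edge)
            if d is not None and r_min <= d <= r_max:
                if nearest is None or d < nearest[0]:
                    nearest = (d, edge[2])
        if nearest is not None and nearest[1] == "unseen":
            f[i] = 1                                     # beam hits unseen edge
    # Slide a 180 degree window over the beams: the best window gives V(p),
    # its center direction gives theta_p = phi_max + pi/2.
    best_val, best_phi = -1, 0
    for phi in range(n):
        val = sum(f[(phi + k) % n] for k in range(n // 2 + 1))
        if val > best_val:
            best_val, best_phi = val, phi
    return best_val, math.radians(best_phi) + math.pi / 2
\end{verbatim}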

The next best view pose $(x,y,\theta)$ depends on three properties: 1. the evaluated information gain value $V(\vec p)$ of the candidate position $\vec p$, 2. the distance from the current position, and 3. the angle to the current position. The next best view pose is the maximum of
$\hat V(\vec p) = V(\vec p) \exp(-c_1 \vert\vert\vec r - \vec p\vert\vert)
\exp(-c_2\vert\vert\theta_r - \theta \vert\vert),
$
where $\vec r = (x_r,y_r)^T$ is the current robot position and $\theta_r$ its orientation. The second factor prevents the robot from oscillating between distant areas of high information gain. The third factor penalizes rotation and also prevents oscillation. The constants $c_1$ and $c_2$ determine the optimum, e.g., $c_1 =
0.05$cm$^{-1}$, $c_2 = 0.01$. To plan the next best view in 3D, several positions in different slices are computed and compared. In our experiments, we used one to five slices, and it turned out that one slice is sufficient in the majority of cases.
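A minimal sketch of this final selection step, assuming distances are given in cm (to match $c_1 = 0.05$cm$^{-1}$) and that every candidate carries its evaluated gain $V(\vec p)$ and direction $\theta_p$:

\begin{verbatim}
import math

def next_best_view(candidates, robot_pos, robot_theta, c1=0.05, c2=0.01):
    """candidates: list of ((x, y), V, theta_p); positions in cm.
    Returns the pose (x, y, theta) that maximizes V_hat."""
    def v_hat(p, v, theta_p):
        dist = math.hypot(p[0] - robot_pos[0], p[1] - robot_pos[1])
        # Smallest absolute rotation between current heading and theta_p.
        dtheta = abs((theta_p - robot_theta + math.pi) % (2 * math.pi) - math.pi)
        # The distance and rotation penalties prevent oscillation between
        # distant areas of high information gain.
        return v * math.exp(-c1 * dist) * math.exp(-c2 * dtheta)

    (x, y), _, theta = max(candidates, key=lambda c: v_hat(*c))
    return x, y, theta
\end{verbatim}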



