To construct a complete 3D model efficiently, the autonomous robot has to plan and drive to multiple scanning positions. The next best view is the pose that yields a high information gain, is accessible to the robot, and keeps the overall robot path short. Traditional next best view algorithms assume that the sensor head can move freely around the object of interest [10]. In mobile robotics the sensor head has fewer degrees of freedom, in our case even a fixed height. Furthermore, the sensor is located inside the scene and the accuracy of the pose estimate is limited. Banos et al. therefore concluded that traditional next best view algorithms are not suitable [11].
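One way to make this selection criterion concrete is to score each reachable candidate pose p by a utility that trades information gain against travel cost; the linear combination below is an illustrative assumption on our part, not a formula taken from [10] or [11]:

    g(p) = I(p) - \lambda \, d(p)

where I(p) denotes the expected information gain at p, d(p) the length of the robot path from the current pose to p, and the weight \lambda balances the two terms; the next best view is then the accessible pose maximizing g(p).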
The calculation of viewpoints, i.e., where to place the 3D sensor in order to scan the whole building without occlusions, resembles the art gallery problem: given the map of a building, determine where to place watchmen so that the entire building is observed [12]. Map building with a mobile robot, however, requires a competitive online strategy, since the locations from which the whole environment is visible have to be found while the map is still incomplete. The following describes an approximation of the art gallery problem and derives an online greedy version based on the algorithm of Banos et al. [11]. The art gallery is modeled by a horizontal plane (2D map) through the 3D scene; the approach is then extended to several horizontal planes at different heights in order to cover the whole 3D environment.
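To illustrate the greedy online step on a single horizontal plane, the following sketch selects the next scan pose on a discretised 2D map. It is a minimal approximation under our own assumptions, not the implementation of [11]: the function names, the angular ray-casting resolution, the weight lam, and the dictionary path_cost (assumed to be produced by a standard grid planner such as Dijkstra, which also encodes reachability) are all hypothetical.

import math
from typing import Dict, List, Set, Tuple

Cell = Tuple[int, int]

def visible_cells(grid: List[List[int]], pose: Cell, max_range: int) -> Set[Cell]:
    """Cells seen from `pose` by casting rays over 360 degrees (0 = free, 1 = occupied).

    A coarse ray-casting stand-in for the exact visibility region of the
    art gallery formulation; angular step and range are simplifying assumptions.
    """
    rows, cols = len(grid), len(grid[0])
    seen: Set[Cell] = set()
    for deg in range(0, 360, 2):                      # angular resolution: assumption
        dx, dy = math.cos(math.radians(deg)), math.sin(math.radians(deg))
        for r in range(1, max_range + 1):
            x, y = int(pose[0] + r * dx), int(pose[1] + r * dy)
            if not (0 <= x < rows and 0 <= y < cols):
                break
            seen.add((x, y))
            if grid[x][y] == 1:                       # ray blocked by an obstacle
                break
    return seen

def next_best_view(grid: List[List[int]], current: Cell, unseen: Set[Cell],
                   candidates: List[Cell], max_range: int,
                   path_cost: Dict[Cell, float], lam: float = 0.1) -> Cell:
    """Greedy step: among reachable candidates, pick the pose maximizing
    (number of newly seen cells) - lam * (path length from the current pose)."""
    best, best_score = current, float("-inf")         # fall back to current pose
    for c in candidates:
        if c not in path_cost:                        # unreachable candidate
            continue
        gain = len(visible_cells(grid, c, max_range) & unseen)
        score = gain - lam * path_cost[c]
        if score > best_score:
            best, best_score = c, score
    return best

In the online loop the robot would drive to the returned pose, take a scan, remove the newly seen cells from unseen, and repeat until no candidate offers sufficient gain; the extension to several horizontal planes repeats the visibility computation on each 2D slice.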