The basic idea of labelling 3D points with semantic information is to use the gradient between neighbouring points to distinguish between three categories, i.e., floor-, object- and ceiling-points. A 3D point cloud that is scanned in a yawing scan configuration can be described as a set of points $p_{i,j} = (\phi_i, r_{i,j}, z_{i,j})^T$ given in a cylindrical coordinate system, with $i$ the index of a vertical raw scan and $j$ the point index within one vertical raw scan, counting bottom up. The gradient $\alpha_{i,j}$ is calculated by the following equation:
$$\tan \alpha_{i,j} = \frac{z_{i,j} - z_{i,j-1}}{r_{i,j} - r_{i,j-1}}, \qquad \text{with } -\tfrac{1}{2}\pi \le \alpha_{i,j} < \tfrac{3}{2}\pi.$$
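The extended interval $[-\frac{1}{2}\pi, \frac{3}{2}\pi)$ matters: on the floor, $r$ grows with $j$ and $\alpha_{i,j} \approx 0$; on a wall, $\alpha_{i,j} \approx \frac{1}{2}\pi$; on the ceiling, $r$ shrinks with $j$, so the gradient lands near $\pi$ rather than near $0$. A minimal sketch of this computation, assuming one vertical raw scan is given as NumPy arrays `r` and `z` ordered bottom up (function and variable names are ours, not from the paper):

```python
import numpy as np

def gradients(r, z):
    """Gradient between each point and its predecessor in one
    vertical raw scan, given cylindrical coordinates r and z."""
    dz = np.diff(z)
    dr = np.diff(r)
    alpha = np.arctan2(dz, dr)                  # quadrant-aware, in (-pi, pi]
    alpha[alpha < -0.5 * np.pi] += 2.0 * np.pi  # shift into [-pi/2, 3*pi/2)
    return alpha                                # alpha[j-1] belongs to point j
```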
The classification of point $p_{i,j}$ is directly derived from the gradient $\alpha_{i,j}$:

1. floor-points: $\alpha_{i,j} < \tau$
2. object-points: $\tau \le \alpha_{i,j} \le \pi - \tau$
3. ceiling-points: $\pi - \tau < \alpha_{i,j}$

with a constant $\tau$.
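These thresholds translate directly into a per-point label. The sketch below continues the one above; the label constants and the parameter `tau` (the paper's $\tau$, whose value is not stated here) are illustrative choices of ours:

```python
import numpy as np

FLOOR, OBJECT, CEILING = 0, 1, 2

def classify(alpha, tau):
    """Label gradients: floor below tau, ceiling above pi - tau,
    object for everything in between."""
    labels = np.full(alpha.shape, OBJECT)
    labels[alpha < tau] = FLOOR
    labels[alpha > np.pi - tau] = CEILING
    return labels
```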
Applied to real data, this simple definition causes two problems. As can be seen in Fig. (a), noisy range data can lead to wrong classifications of floor- and ceiling-points. Changing the differential quotient as follows solves this problem:

$$\tan \alpha_{i,j} = \frac{z_{i,j} - z_{i,k}}{r_{i,j} - r_{i,k}},$$

with $k < j$, i.e., the gradient is computed over a larger baseline than a single scan step, so that individual noisy range readings no longer dominate the slope.
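One way to realise this is to walk back from point $j$ until the baseline exceeds a minimal distance. This is our sketch of the idea; the walk-back rule and the constant `d_min` are assumptions, as the exact choice of $k$ is not reproduced here:

```python
import numpy as np

def robust_gradient(r, z, j, d_min=0.05):
    """Gradient at point j, taken against the nearest predecessor k
    whose distance to point j exceeds d_min (assumed, in metres)."""
    k = j - 1
    while k > 0 and np.hypot(r[j] - r[k], z[j] - z[k]) < d_min:
        k -= 1
    alpha = np.arctan2(z[j] - z[k], r[j] - r[k])
    if alpha < -0.5 * np.pi:
        alpha += 2.0 * np.pi  # keep alpha in [-pi/2, 3*pi/2)
    return alpha
```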
The second difficulty is the correct computation of the gradient across jumping edges (see Fig. (b)). This problem is solved with a prior segmentation [16], as the gradient $\alpha_{i,j}$ is only calculated correctly if both points $p_{i,j}$ and $p_{i,k}$ belong to the same segment.
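The segment constraint amounts to a guard around the gradient computation. In the sketch below, `seg` is an assumed array of per-point segment labels produced by the prior segmentation of [16], and `UNKNOWN` is our placeholder for points whose gradient cannot be evaluated:

```python
import numpy as np

FLOOR, OBJECT, CEILING, UNKNOWN = 0, 1, 2, -1

def label_point(r, z, seg, j, tau, d_min=0.05):
    """Classify point j only if it shares a segment with the
    predecessor k used for the gradient."""
    k = j - 1
    while k > 0 and np.hypot(r[j] - r[k], z[j] - z[k]) < d_min:
        k -= 1
    if seg[j] != seg[k]:
        return UNKNOWN  # jumping edge: points lie on different segments
    alpha = np.arctan2(z[j] - z[k], r[j] - r[k])
    if alpha < -0.5 * np.pi:
        alpha += 2.0 * np.pi
    if alpha < tau:
        return FLOOR
    if alpha > np.pi - tau:
        return CEILING
    return OBJECT
```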
The correct classification result can be seen in Fig. (c). Fig. shows a 3D scan with the semantic labels.
[Figure: (a) wrong classification caused by noisy range data; (b) gradient computation across a jumping edge; (c) correct classification.]