
Results and Conclusion

The proposed methods have been tested on a data set acquired at RTS, Hannover. Fig. [*] shows a single 3D scan with semantic labeling. Fig. [*] presents the final map, consisting of five 3D scans, each containing 43440 points. Table [*] shows the computing time for matching two 3D scans. Using semantically labeled points yields a speedup of up to 30% with no loss of map quality.

Figure: The final 3D map of an office corridor / laboratory environment. The map consists of 5 3D scans and contains 217200 3D points. Left: Front view. Right: Top view.
\includegraphics[width=60mm]{full_map} \includegraphics[width=60mm]{full_map_top_view}

Table: Computing time and number of ICP iterations to align all 32 3D scans (Pentium-IV-3200). In addition, the computing time for scan matching using reduced points is given. Point reduction follows the algorithm given in [10].
used points    | search method      | computing time | iterations
---------------|--------------------|----------------|-----------
all points     | kd-trees           | 17151.00 ms    | 49
reduced points | kd-trees           |  2811.21 ms    | 38
all points     | forest of kd-trees | 12151.50 ms    | 39
reduced points | forest of kd-trees |  2417.19 ms    | 35
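The gain from semantic labels can be illustrated with a small sketch (hypothetical code, not the authors' implementation): points are grouped by semantic label, and the nearest-neighbor search for each scene point is restricted to model points carrying the same label. The paper builds one kd-tree per label (the forest of kd-trees); the sketch below uses brute-force search for brevity.

```python
# Hypothetical sketch of label-restricted correspondence search.
# The actual system builds one kd-tree per semantic label; here a
# brute-force nearest-neighbour search stands in for the kd-trees.
from collections import defaultdict
import math

def group_by_label(points):
    """Group ((x, y, z), label) pairs into {label: [points]}."""
    groups = defaultdict(list)
    for p, label in points:
        groups[label].append(p)
    return groups

def nearest(q, candidates):
    """Return the candidate point closest to query point q."""
    return min(candidates, key=lambda p: math.dist(p, q))

def correspondences(model, scene):
    """Pair each scene point with the nearest model point of the
    same semantic label, skipping labels absent from the model."""
    groups = group_by_label(model)
    pairs = []
    for q, label in scene:
        if label in groups:  # search only among same-label points
            pairs.append((q, nearest(q, groups[label])))
    return pairs
```

Restricting the search to one label's subset is what shrinks the effective search space and produces the speedup reported in the table.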

Fig. [*] shows a detailed view of the ceiling. 3D points belonging to the lamp on the ceiling are also colored yellow, i.e., labeled as ceiling points. This does not affect the matching: since the lamp points carry the same semantic label in every scan, they are still matched with their correct correspondents.

Figure: Left: scanned 3D points of the ceiling, including a lamp. Some 3D points of the lamp are marked yellow. The fingerprint-like structure is a result of the scanning process: on planar surfaces the laser beam describes a circle. Right: Photo of the scene.
Image ceiling Image bild7

In contrast to previous work, every single 3D scan is a full 360$ ^\circ$ scan. The scans are acquired in a stop-scan-go fashion to ensure consistency within each single 3D scan. In RoboCup Rescue the operator drives the robot and acquires 3D scans. In the 2004 competition we found that the overlap between two consecutive scans was sometimes too small, so that the operator had to intervene in the matching process. The new scanner reduces this problem, since it provides backward vision. Fig. [*] shows the map corresponding to Fig. [*] computed without backward vision, i.e., the algorithm uses only points that lie in front of the robot. The 3D scans can no longer be matched precisely; the map shows inaccuracies, for example at the lab door. In fact, doors and passages are a general problem for mapping algorithms, due to the small overlap between scans. Fig. [*] shows the final map of an additional experiment with 9 3D scans and 434400 data points.
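The 180$ ^\circ$ restriction used for this comparison amounts to a simple filter. The sketch below is hypothetical code and assumes the robot looks along the positive x-axis of the scanner frame:

```python
# Hypothetical sketch: restrict a 360-degree scan to the half space
# in front of the robot, which is assumed to look along +x.
def front_half(points):
    """Keep only the 3D points (x, y, z) in front of the robot."""
    return [(x, y, z) for (x, y, z) in points if x > 0.0]
```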

This paper presented a robotic 3D mapping system consisting of a robot platform and a 3D laser scanner. The laser scanner provides 360$ ^\circ$ vision; the scan matching software, based on the well-known ICP algorithm, uses semantic labels to establish point correspondences. Both approaches improve on previous work, e.g., [9,10], in terms of computational speed and stability.
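For reference, one iteration of the underlying ICP scheme can be sketched in 2D. This is hypothetical illustration code, not the paper's implementation: the paper works in 3D with kd-tree search, whereas the sketch uses brute-force correspondences and the closed-form 2D least-squares rotation to stay short.

```python
# Hypothetical 2D sketch of one ICP iteration: find correspondences,
# then compute the least-squares rigid transform and apply it.
import math

def icp_step(scene, model):
    # 1. Correspondences: nearest model point for each scene point.
    pairs = [(p, min(model, key=lambda m: math.dist(p, m))) for p in scene]
    n = len(pairs)
    # 2. Centroids of the paired scene and model points.
    cs = [sum(p[i] for p, _ in pairs) / n for i in (0, 1)]
    cm = [sum(m[i] for _, m in pairs) / n for i in (0, 1)]
    # 3. Closed-form least-squares rotation angle (2D Horn method).
    sxx = sum((p[0]-cs[0])*(m[0]-cm[0]) + (p[1]-cs[1])*(m[1]-cm[1])
              for p, m in pairs)
    sxy = sum((p[0]-cs[0])*(m[1]-cm[1]) - (p[1]-cs[1])*(m[0]-cm[0])
              for p, m in pairs)
    theta = math.atan2(sxy, sxx)
    # 4. Translation that maps the rotated scene centroid onto the
    #    model centroid; apply the full transform to the scene.
    c, s = math.cos(theta), math.sin(theta)
    tx = cm[0] - (c*cs[0] - s*cs[1])
    ty = cm[1] - (s*cs[0] + c*cs[1])
    return [(c*x - s*y + tx, s*x + c*y + ty) for x, y in scene]
```

Iterating this step until the error no longer decreases is the standard ICP loop; the semantic labels of the paper only change how step 1 selects candidate correspondences.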

The aim of future work is to combine the mapping algorithms with mechatronic robotic systems, i.e., to build a robot that can actually go into the third dimension and cope with the red arena in RoboCup Rescue. Furthermore, we plan to include semi-autonomous planning tools for the acquisition of 3D scans in this year's software.

Figure: Resulting 3D map (top view) when the scan matching algorithm uses only 3D points in front of the robot, i.e., each 3D scan is restricted to 180$ ^\circ$.
Image halb_scan

Figure: Results of a second experiment. Left: 3D point cloud (top view). Middle: A view of the point cloud. Right: A floor plan extracted from the 3D model, i.e., only points with a height of 125 cm $ \pm$ 15 cm are drawn.
Image s1 Image s2 Image s
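The floor-plan extraction mentioned in the caption can be sketched as a height filter. This is a hypothetical sketch assuming each point is an (x, y, z) tuple with z as the height in cm:

```python
# Hypothetical sketch: extract a floor-plan slice by keeping only
# points at height 125 cm +/- 15 cm, as in the figure caption, and
# projecting them onto the (x, y) ground plane.
def floor_plan_slice(points, height=125.0, tolerance=15.0):
    return [(x, y) for (x, y, z) in points
            if abs(z - height) <= tolerance]
```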
