To assign texture to the previously generated triangle mesh, the algorithm projects the vertices of each triangle into the images using the extrinsic, intrinsic, and distortion parameters computed during camera calibration (cf. section 4). Because of the limited lens quality of an ordinary USB webcam, modeling the lens distortion is essential.
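The projection step can be sketched as follows: transform each vertex into the camera frame with the extrinsics, apply the lens distortion model in normalized coordinates, then map to pixels with the intrinsics. This is a minimal illustration of the standard pinhole-plus-distortion model; the function name and the assumption of four distortion coefficients (k1, k2, p1, p2) are ours, not taken from the paper.

```python
import numpy as np

def project_points(pts_3d, R, t, K, dist):
    """Project Nx3 world points into an image using the pinhole model
    with radial/tangential lens distortion, as estimated during camera
    calibration. Illustrative sketch; parameter layout is an assumption."""
    k1, k2, p1, p2 = dist
    # Transform into the camera frame (extrinsic parameters R, t).
    pc = (R @ np.asarray(pts_3d, dtype=float).T + t.reshape(3, 1)).T
    # Perspective division to normalized image coordinates.
    x = pc[:, 0] / pc[:, 2]
    y = pc[:, 1] / pc[:, 2]
    r2 = x**2 + y**2
    # Radial and tangential distortion of the normalized coordinates.
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    # Apply intrinsics (focal lengths and principal point) to get pixels.
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return np.column_stack([u, v])
```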
To determine which of the 12 images holds the correct texture information, the vertices are projected into all images. If an image does not cover the corresponding part of the 3D scene, the computed vertex coordinates fall outside the image range. If the algorithm yields valid coordinates for more than one image (e.g., for triangles in overlapping areas, compare Fig. 2), it uses the image in which the projected vertex coordinates are closest to the image center.
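The selection rule described above might be implemented along these lines: discard images in which any of the triangle's projected vertices falls outside the image bounds, then among the remaining candidates pick the one whose projections lie closest to the image center. Function and variable names are illustrative assumptions; averaging the three vertex distances is one plausible reading of "closest to the image center".

```python
import numpy as np

def select_texture_image(uv_per_image, width, height):
    """Given the projected coordinates of one triangle's vertices in each
    candidate image (n_images x 3 x 2), return the index of the image whose
    valid projections are closest to the image center, or None if no image
    covers the triangle. Sketch of the selection rule in the text."""
    center = np.array([width / 2.0, height / 2.0])
    best, best_dist = None, np.inf
    for i, uv in enumerate(uv_per_image):
        uv = np.asarray(uv, dtype=float)
        # Reject images where the triangle projects outside the image range.
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < width) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < height))
        if not inside.all():
            continue
        # Mean distance of the projected vertices to the image center.
        d = np.linalg.norm(uv - center, axis=1).mean()
        if d < best_dist:
            best, best_dist = i, d
    return best
```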
Based on these projected coordinates, an OPENGL-based viewer application cuts the texture out of the images and "glues" it onto the 3D triangle mesh. Because the 3D data and the texture information are linked in this way, the scene can be rendered from different perspectives (Fig. 4, middle and right) as a textured 3D scene.2
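To "glue" the texture onto the mesh, a viewer typically converts the projected pixel coordinates into normalized texture coordinates in [0, 1] and assigns them per vertex. A minimal sketch of that conversion is below; the helper name is our own, and the vertical flip reflects the common convention that OpenGL's t axis points up while image rows grow downward.

```python
def pixel_to_texcoord(u, v, width, height):
    """Convert projected pixel coordinates (u, v) to normalized OpenGL
    texture coordinates (s, t) in [0, 1]. Image rows grow downward,
    OpenGL's t axis points up, hence the vertical flip. Illustrative."""
    return u / width, 1.0 - v / height
```

Each mesh vertex then carries both its 3D position and its (s, t) coordinate into the selected image's texture, which is what lets the viewer re-render the scene from arbitrary perspectives.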