
Global Color Correction


Figure 5: Left: magnification of the textured 3D scene in Fig. 4, right. Right: uncorrected and corrected photo No. B3. The correction is based on the pixels shared with photo No. A3. The photo numbers are defined in Fig. 2.

Different illumination conditions and the camera technology prevent color continuity at the borders of the individual images, leading to visible discontinuities in color and brightness. Following the ideas of Agathos and Fisher [Agathos and Fisher, 2003], we apply a global correction to blend the textures of any two different views and to reduce these visible discontinuities. Their approach assumes that there exists a linear transformation matrix $\M T_{j \to k}$ that corrects the $j$th view to the $k$th view, i.e.,

$\M T_{j \to k} \left( N_i^{(j)}(\lambda) \right) = N_i^{(k)}(\lambda),$

for $i \in \{\mathrm{R,G,B}\}$. Two vectors of pixels $\V V_k$ and $\V V_j$ are formed from the views $k$ and $j$, respectively; they contain the R, G, B values of the pixels in the overlap of the two views. The global correction matrix $\M T_{j \to k}$ is estimated as follows [Agathos and Fisher, 2003]:

$\V V_k = \M T_{j \to k} \V V_j \quad \Leftrightarrow \quad \M T_{j \to k} = (\V V_k \V V_j^T) (\V V_j \V V_j^T)^{-1}$
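For illustration, the following is a minimal sketch of this estimation step, assuming the shared pixels of both views are already available as N x 3 arrays of R, G, B values; the NumPy usage, function names, and array shapes are our own assumptions for the example and are not taken from the original method description.

import numpy as np

def estimate_correction(pixels_j: np.ndarray, pixels_k: np.ndarray) -> np.ndarray:
    """Estimate the 3x3 matrix T_{j->k} mapping colors of view j to view k.

    pixels_j, pixels_k: arrays of shape (N, 3) holding the R, G, B values of the
    N pixels shared by both views (view j and view k, respectively).
    """
    V_j = pixels_j.T.astype(np.float64)   # 3 x N
    V_k = pixels_k.T.astype(np.float64)   # 3 x N
    # Least-squares solution of V_k = T V_j:
    #   T = (V_k V_j^T) (V_j V_j^T)^{-1}
    return (V_k @ V_j.T) @ np.linalg.inv(V_j @ V_j.T)

def correct_image(image_j: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply T_{j->k} to every pixel of view j (image of shape H x W x 3)."""
    h, w, _ = image_j.shape
    corrected = image_j.reshape(-1, 3).astype(np.float64) @ T.T
    return np.clip(corrected, 0, 255).reshape(h, w, 3).astype(np.uint8)

The estimate is reliable only if the overlap contains enough pixels with varied colors, so that the 3x3 matrix $\V V_j \V V_j^T$ is well conditioned.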

Fig. 5 shows a part of the scene of Fig. 4 together with an uncorrected and a corrected image. The overall result, presented in Fig. 4, still shows some color discontinuities that cannot be resolved by this correction. The method requires sufficient image overlap, precise 3D-to-2D calibration, and adequate input image quality.


