Colorization usually requires users to segment the image into regions. However, the segmentation is often tedious and unreliable. Anat Levin et al. present a much simpler process in their 2004 SIGGRAPH paper "Colorization Using Optimization", which only requires a few simple scribbles from the user.
The algorithm is based on a very simple premise: neighboring pixels with similar intensity values should have similar colors. This premise leads to a quadratic cost function, and the user's scribbles serve as linear constraints for the optimization.
The algorithm is described in YUV space, where Y represents the intensity and U, V represent the colors. The input to the algorithm is all the Y values (the original black-and-white image) plus full YUV values for a few pixels (the user's scribbles). Let U(r) be the U value of pixel r. The cost J(U) we want to minimize is the squared difference between U(r) and the weighted sum of its neighbors' U values, summed over all pixels (and likewise for V):

$$J(U) = \sum_r \Big( U(r) - \sum_{s \in N(r)} w_{rs}\, U(s) \Big)^2 \qquad \text{(eq. 1)}$$
where we use a Gaussian falloff on the intensity difference to decide the weights (and divide by their sum so the weights sum to one):

$$w_{rs} \propto \exp\!\Big( -\frac{(Y(r) - Y(s))^2}{2\sigma_r^2} \Big), \qquad \sum_{s \in N(r)} w_{rs} = 1 \qquad \text{(eq. 2)}$$

Here $\sigma_r$ is the variance of the intensities in a small window around pixel r.
It's easy to see that neighbors with similar intensities get higher weights. As for the neighborhood size, we can use either four or eight neighbors; I've tried both and the results don't differ much.
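Here is a minimal sketch of eq. 2 for a single pixel, assuming Y is the intensity image scaled to [0, 1] and sigma is a fixed falloff parameter; the function name and arguments are illustrative, not taken from the paper's code, and it uses the 4-connected neighborhood.

```matlab
% Sketch of eq. 2: Gaussian weights of a pixel's 4-connected neighbors,
% normalized to sum to one. Y is the intensity image, (i,j) the pixel,
% sigma the falloff parameter (illustrative names, not the paper's code).
function [w, ni, nj] = neighbor_weights(Y, i, j, sigma)
    [h, wd] = size(Y);
    ni = [i-1, i+1, i, i];  nj = [j, j, j-1, j+1];    % 4-connected neighbors
    ok = ni >= 1 & ni <= h & nj >= 1 & nj <= wd;      % drop out-of-image neighbors
    ni = ni(ok);  nj = nj(ok);
    w = exp(-(Y(i,j) - Y(sub2ind([h, wd], ni, nj))).^2 / (2*sigma^2));
    w = w / sum(w);                                   % normalize so weights sum to one
end
```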
With eq. 1 and eq. 2, we can force the cost at each pixel to be zero and write the whole system in matrix form. It then becomes a least-squares problem similar to the one we did in Project 2, which we can solve by constructing a sparse matrix and using MATLAB's "\" operator.
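The following is a rough sketch of that step for the U channel, reusing the `neighbor_weights` helper above. It assumes `Y` is the grayscale image, `hasScribble` is a logical mask of scribbled pixels, and `scribbleU` holds their U values; these variable names and the sigma value 0.05 are assumptions for illustration, not the paper's code.

```matlab
% Assemble the sparse system: for scribbled pixels U(r) = scribble value,
% for all others U(r) - sum_s w_rs U(s) = 0, then solve with "\".
[h, wd] = size(Y);
n = h * wd;
rows = []; cols = []; vals = [];                 % triplets, grown naively for clarity
b = zeros(n, 1);
for j = 1:wd
    for i = 1:h
        r = sub2ind([h, wd], i, j);
        rows(end+1) = r; cols(end+1) = r; vals(end+1) = 1;   % diagonal entry for U(r)
        if hasScribble(i, j)
            b(r) = scribbleU(i, j);              % constrained pixel: keep scribbled value
        else
            [w, ni, nj] = neighbor_weights(Y, i, j, 0.05);
            for k = 1:numel(w)
                s = sub2ind([h, wd], ni(k), nj(k));
                rows(end+1) = r; cols(end+1) = s; vals(end+1) = -w(k);
            end
        end
    end
end
A = sparse(rows, cols, vals, n, n);
U = reshape(A \ b, h, wd);                       % solve and reshape back to an image
```

The same system, with a different right-hand side, gives the V channel.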
From left to right: the black-and-white image, the scribble image, the algorithm's output, and the original color image (or a professionally colored image).
The first example is from the paper.