Your Eyes Do Layered Image Processing
Computer users familiar with Photoshop and other image-processing programs know that an image can be divided into “layers” for making color corrections, evening out contrast, and enhancing details. Your eyes do that, too, says Alan Gilchrist in Current Biology.1 He shows a stunning optical illusion to make the point: identical transparent chess pieces appear darker against one cloudy background than against another (see image and source paper by Anderson and Winawer in Nature.2). Clearly, the brain is interpreting the pieces in relation to their context.
The rods and cones in your eye are not just light collectors that pass signals directly to the brain. Image processing is done before the brain gets the data (see 05/22/2003 entry). Scientists are narrowing down theories for how this works. Apparently, the rods and cones are not just sharing data with neighboring receptors, nor are they arranged into frameworks like states on a map. The leading theory is that the visual system treats the image as a stack of layers, using complex mathematical algorithms to “decompose” the combined image into its parts, including contrast, brightness, hue, illumination and saturation:
For example, a red book on the dashboard of your car casts a red reflection in the windshield. Through the reflection you perceive distant objects, including green grass, in their normal colors. Light from the green grass and the red reflection physically mix to produce yellow. The yellow is observed when seen through a small hole punched in a piece of cardboard held up so it blocks out the surrounding context. Without the cardboard, however, no yellow is seen, only the red and green layers. The brain is thought to split the yellow light into the red and green layers using rules that invert the usual rules of color mixing. This is called scission.
Or consider the image of a white house reflected in the shiny surface of a black car. Neither the house nor the car appears gray where their images overlap. Rather the light at that location is perceptually split into a white and a black layer.
Strictly speaking, the illumination that falls on surfaces is not a separate layer. But the same scission algorithms that work for transparent layers can be effectively applied to the illumination. Mathematically a shadow and a sunglass lens have the same effect on the image.
When the processes of image formation are inverted in this way, surface reflectance is not merely computed, it is recovered.
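The scission idea described above can be illustrated with a toy numerical model: assume (purely for illustration) that transparent layers mix additively and that illumination scales reflectance multiplicatively; once one layer is estimated from context, either rule can be run backwards. This is a sketch of the concept, not the visual system's actual algorithm, and all the numbers are invented:

```python
# Toy model of "scission": invert the rules of image formation.
# All values below are illustrative assumptions, not measured data.

# Case 1: transparency -- lights ADD. A red windshield reflection over
# green grass physically mixes toward yellow; subtracting the estimated
# red layer recovers the green layer.
reflection = (0.6, 0.0, 0.0)   # red reflection layer (R, G, B)
grass = (0.0, 0.5, 0.0)        # green background layer
mixed = tuple(r + g for r, g in zip(reflection, grass))
recovered_grass = tuple(m - r for m, r in zip(mixed, reflection))
print(mixed)            # (0.6, 0.5, 0.0) -- a yellowish mixture
print(recovered_grass)  # (0.0, 0.5, 0.0) -- green again

# Case 2: illumination -- light MULTIPLIES reflectance. A shadow and a
# gray sunglass lens with the same transmittance alter the image
# identically, so the same inverse (division by the estimated
# illumination) recovers surface reflectance in both cases.
white_paper = 0.9              # high surface reflectance
shadow, lens = 0.25, 0.25      # both transmit 25% of the light
assert white_paper * shadow == white_paper * lens   # identical retinal input
print(white_paper * shadow / 0.25)  # 0.9 -- reflectance recovered
```

The point of the second case is Gilchrist's: mathematically a shadow and a sunglass lens have the same effect on the image, so one inverse procedure serves both.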
Does the fact that optical illusions can fool us (and fool everybody, systematically) mean that the “visual software employed by the brain” has bugs? Not necessarily. “In principle, the errors could be accounted for by partial failures in the scission process,” Gilchrist says, “but such efforts to model the errors have not proven very effective.” Instead, the brain may combine the layer algorithm with a framework algorithm that is even more complicated. Proponents of both theories are still trying to figure all this out. “Both sides are open to an integration of the two approaches,” he says. “Stay tuned.”
1Alan L. Gilchrist, “Lightness Perception: Seeing One Color through Another,” Current Biology, Vol 15, R330-R332, 10 May 2005.
2Anderson and Winawer, “Image segmentation and lightness perception,” Nature 434, 79-83 (3 March 2005) | doi: 10.1038/nature03271.
Stay tuned: that implies we have limited ability to fathom such design. Speaking of staying tuned, our ears do a similar kind of processing. Students of advanced mathematics know that through Fourier analysis and other techniques, one can separate out the individual contributors to a complex waveform. For instance, your ear receives a single, hugely complex waveform when listening to a symphony orchestra, yet you are able to discern the individual sounds of the oboe, violin, trumpet, horn, timpani and all the rest.
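The separation described above can be mimicked with a discrete Fourier transform. Here is a minimal pure-Python sketch; the two tone frequencies and the sample rate are arbitrary stand-ins for instrument pitches, and a naive O(N²) DFT is used only because it keeps the demo self-contained:

```python
import cmath
import math

# Two pure tones sampled together: the ear receives one combined
# waveform, yet the components can be pulled apart mathematically.
RATE = 400                 # samples per second (illustrative)
N = 400                    # one second of signal
f_low, f_high = 60, 150    # hypothetical tone frequencies in Hz

signal = [math.sin(2 * math.pi * f_low * n / RATE)
          + 0.5 * math.sin(2 * math.pi * f_high * n / RATE)
          for n in range(N)]

def dft_magnitudes(x):
    """Naive discrete Fourier transform magnitudes (fine for a demo)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

mags = dft_magnitudes(signal)
# The two largest spectral peaks fall at the original tone frequencies
# (bin k corresponds to k Hz here, since the signal lasts one second).
peaks = sorted(range(len(mags)), key=lambda k: mags[k], reverse=True)[:2]
print(sorted(peaks))   # [60, 150] -- the two tones, recovered from the mix
```

A real ear does something far richer, of course; the cochlea performs its frequency separation mechanically and continuously, not in one-second batches.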
Our confusion over optical illusions should not lead us to infer errors in the code. For one thing, our eyes were designed to operate in our natural habitat, not in the pages of books of optical illusions. Second, the ability of humans to trick the software with illusions shows creative ability: we can probe a designed system and understand how it was fooled. The algorithms of our image-processing organs work for the environment for which they were designed. They pull together the best responses to a vast array of possible inputs. This is constrained optimization: the art of achieving the ideal compromise between competing inputs and priorities.
Gilchrist credits Johannes Kepler, one of the champion creation scientists in our online book, with discovering “that an image of whatever we look at is projected onto the rear inner surface of the eye,” just like in a camera obscura. Ever since then, he says, “it has been natural to assume that the rods and cones function much as modern day photocells, reporting the point-by-point intensity of light in the image.” Now we are realizing that the truth is far more amazing. If Kepler’s discovery was marvelous to him, how much more should these recent discoveries make us stand in awe of the supreme optician of the universe? (See also 05/09/2002 entry.)