Daniel Kersten (1987)
Abstract
One aspect of human image understanding is the ability to estimate missing parts of a natural image. This ability depends on the redundancy of the representation used to describe the class of images. In 1951, Shannon [Bell Syst. Tech. J. 30, 50 (1951)] showed how to estimate bounds on the entropy and redundancy of an information source from predictability data. The entropy, in turn, gives a measure of the limits to error-free information compaction. An experiment was devised in which human observers interactively restored missing gray levels from 128 × 128 pixel pictures with 16 gray levels. For eight images, the redundancy ranged from 46%, for a complicated picture of foliage, to 74%, for a picture of a face. For almost-complete pictures, but not for noisy pictures, this performance can be matched by a nearest-neighbor predictor.
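To make the quoted numbers concrete: with 16 gray levels the maximum entropy is log2(16) = 4 bits per pixel, and redundancy is one minus the estimated entropy divided by that maximum. The sketch below is only an illustration of that arithmetic plus a toy neighbor-averaging fill-in rule; the function names, the particular entropy values fed in, and the averaging rule are assumptions for illustration, not code or details from Kersten's paper.

```python
import numpy as np

# Illustrative: redundancy R = 1 - H / H_max for 16 gray levels (H_max = 4 bits/pixel).
def redundancy(entropy_bits_per_pixel, levels=16):
    return 1.0 - entropy_bits_per_pixel / np.log2(levels)

# Entropy estimates of ~1.04 and ~2.16 bits/pixel would correspond to the
# reported 74% (face) and 46% (foliage) redundancies.
print(redundancy(1.04))  # 0.74
print(redundancy(2.16))  # ~0.46

# Toy stand-in for a nearest-neighbor predictor: fill each missing pixel with the
# rounded mean of its available 4-connected neighbors (an assumption; the paper's
# predictor may differ in detail).
def fill_missing(img, missing, levels=16):
    """img: 2-D int array of gray levels; missing: boolean mask of gaps."""
    out = img.copy()
    rows, cols = img.shape
    for r, c in zip(*np.nonzero(missing)):
        vals = [img[r + dr, c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols
                and not missing[r + dr, c + dc]]
        if vals:
            out[r, c] = min(levels - 1, max(0, int(round(np.mean(vals)))))
    return out
```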