Human Face Perception in Degraded Images

Sanjiv Bhatia, Vasudevan Lakshminarayanan, Ashok Samal, Grant V. Welland

Research output: Contribution to journal › Article › peer-review

Abstract

This paper reports human performance data from a series of psychophysical experiments investigating the limits of stimulus parameters relevant to distinguishing a human face in a mug shot. In these experiments, we use a two-alternative forced-choice paradigm for response elicitation. We develop a benchmark that can be used to determine the performance of a machine vision system for human face detection at different levels of image degradation. The benchmark is developed in terms of the number of pixel blocks and the number of gray levels used in the images. The paper presents a model of representation that can be useful for recognition of faces in a database, and may be used to define the minimum image quality required for retrieval of facial records at different confidence levels. Our results show that low-frequency information in face images is useful since it is most resilient to degradation of image quality. The model is particularly relevant to the retrieval of facial images in large image databases.
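The two degradation parameters named in the abstract (number of pixel blocks and number of gray levels) can be illustrated with a short sketch. The snippet below is an illustrative approximation, not the paper's exact stimulus-generation procedure: it assumes block averaging (mosaicking) for the spatial degradation and uniform quantization for the gray-level reduction, and the function and parameter names (degrade_face_image, blocks_per_side, gray_levels) are hypothetical.

import numpy as np

def degrade_face_image(image, blocks_per_side, gray_levels):
    """Degrade a grayscale face image by block averaging and gray-level quantization.

    image: 2-D array with intensities in [0, 255].
    blocks_per_side: number of pixel blocks along each image dimension (assumed parameterization).
    gray_levels: number of gray levels retained after quantization.
    """
    h, w = image.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side

    # Spatial degradation: replace each block by its mean intensity (mosaic effect).
    cropped = image[:bh * blocks_per_side, :bw * blocks_per_side].astype(float)
    block_means = cropped.reshape(blocks_per_side, bh, blocks_per_side, bw).mean(axis=(1, 3))
    degraded = np.kron(block_means, np.ones((bh, bw)))  # expand blocks back to pixel resolution

    # Gray-scale degradation: map intensities onto the reduced set of gray levels.
    step = 256.0 / gray_levels
    degraded = np.floor(degraded / step) * step + step / 2.0
    return degraded.astype(np.uint8)

# Example: a synthetic 128x128 image reduced to a 16x16 block mosaic with 4 gray levels.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
    low_quality = degrade_face_image(face, blocks_per_side=16, gray_levels=4)
    print(low_quality.shape, np.unique(low_quality).size)

Varying blocks_per_side and gray_levels over a grid of values is one way to produce the kind of graded image-quality conditions against which human (and machine) face-detection performance could be benchmarked.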
Original language: American English
Journal: Journal of Visual Communication and Image Representation
Volume: 6
State: Published - Sep 1995

Disciplines

  • Physical Sciences and Mathematics