Abstract: Recent image forensics research has produced a number of tampering detection techniques that exploit cues drawn from an understanding of natural image characteristics and from modeling of tampering artifacts. Fusing multiple cues holds promise for improving detection robustness, but it has never been systematically studied before. With fusion, the tampering detection process does not rely entirely on a single detector and hence can remain robust in the face of missing or unreliable detectors. In this paper, we propose a statistical fusion framework based on Discriminative Random Fields (DRF) to integrate multiple cues suitable for forgery detection, such as double quantization artifacts and camera response function inconsistency. The detection results from individual cues are used as observations, from which the DRF model parameters and the most likely node labels are inferred, indicating whether a local block belongs to the tampered foreground or the authentic background. These inference results also localize the suspect spliced regions. The proposed framework is effective and general: it outperforms individual detectors in systematic evaluation and is easily extensible to other detectors using different cues.

  author       = {Yu-Feng Hsu and Shih-Fu Chang},
  url          = {http://www.ee.columbia.edu/ln/dvmm/publicationPage//Publi//yfhsu08asilomar.html},
  title        = {Statistical fusion of multiple cues for image tampering detection},
  year         = {2008},
  keywords     = {Image Forensics; Tamper Detection},
  booktitle    = {Asilomar Conference on Signals, Systems, and Computers},