Using False Colors to Protect Visual Privacy of Sensitive Content


Ciftci S., Korshunov P., Akyüz A. O., Ebrahimi T.

Conference on Human Vision and Electronic Imaging XX, San Francisco, United States, 9 - 12 February 2015, vol. 9394

  • Publication Type: Conference Paper / Full Text Conference Paper
  • Volume: 9394
  • DOI: 10.1117/12.2083189
  • Publication City: San Francisco
  • Publication Country: United States
  • Keywords: Visual privacy protection, false color visualization, objective evaluation, subjective assessment, recognition
  • Affiliated with Middle East Technical University: Yes

Abstract

Many tools have been proposed for preserving visual privacy, but the tools available today lack some or all of the important properties expected from them. In this paper, we therefore propose a simple yet effective privacy protection method based on false color visualization, which maps the color palette of an image to a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. The method does not require any prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as the table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online study) assessments using faces from the public FERET dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while the compression functions add little to the strength of protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
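The abstract describes the protection pipeline at a high level: a compressive point transform, a color look-up table, and an adjustable interpolation with the original image. The Python sketch below illustrates that pipeline under stated assumptions only; the logarithmic transform and the hand-made look-up table are hypothetical stand-ins, not the DEF/RBS color scales or the four compression functions evaluated in the paper, and the names false_color_protect, ramp, and gray_face are illustrative.

```python
import numpy as np


def false_color_protect(gray, lut, compress=True, alpha=1.0):
    """Apply a false-color privacy transform to a grayscale image.

    gray     -- 2-D uint8 array with values in [0, 255].
    lut      -- 256 x 3 uint8 array: the color look-up table (it can be
                encrypted and inverted independently of this function).
    compress -- if True, apply a compressive point transform first.
    alpha    -- degree of protection: 0.0 keeps the original image,
                1.0 gives the fully false-colored image.
    """
    x = gray.astype(np.float64) / 255.0
    if compress:
        # Hypothetical compressive point transform (logarithmic); the paper
        # evaluates four such functions, which are not reproduced here.
        x = np.log1p(x) / np.log(2.0)
    idx = np.clip(np.round(x * 255.0), 0, 255).astype(np.uint8)
    false = lut[idx].astype(np.float64)            # table look-up -> RGB image
    orig = np.stack([gray] * 3, axis=-1).astype(np.float64)
    # Adjustable protection: interpolate between original and false color.
    out = (1.0 - alpha) * orig + alpha * false
    return np.clip(out, 0, 255).astype(np.uint8)


# Usage with a hand-made look-up table (hypothetical, not one of the color
# scales evaluated in the paper) and a random stand-in image.
ramp = np.arange(256, dtype=np.float64)
lut = np.stack([ramp, 255.0 - np.abs(ramp - 128.0) * 2.0, 255.0 - ramp], axis=-1)
lut = np.clip(lut, 0, 255).astype(np.uint8)
gray_face = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
protected = false_color_protect(gray_face, lut, compress=True, alpha=1.0)
```

Because the mapping is a per-pixel table look-up, whoever holds the (possibly encrypted) table can invert it to recover the original intensities, which is what makes the scheme reversible; lowering alpha trades protection strength for intelligibility.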