Youtube: https://www.youtube.com/watch?v=9JeOHyQew6M
Martha Larson, Zhuoran Liu, Simon Brugman and Zhengyu Zhao, Pixel Privacy: Increasing Image Appeal while Blocking Automatic Inference of Sensitive Scene Information. Proc. of MediaEval 2018, 29-31 October 2018, Sophia Antipolis, France.
Abstract: We introduce a new privacy task focused on images that users share online. The task benchmarks image transformation algorithms that are capable of blocking the ability of automatic classifiers to infer sensitive information in images. At the same time, the image transformations should maintain the original value of the image to the user who is sharing it, either by leaving it not obviously changed, or by enhancing it to increase its visual appeal. This year, the focus is on a set of 60 scene categories, selected from the Places365-Standard data set, that can be considered privacy sensitive.
Presented by Martha Larson
3. Geo-location Estimation: Placing Task
• Given a multimedia item and its associated metadata, predict a geo-location.
• Data are images and videos drawn from Flickr.
4. Placing Task: Year-to-year insights
• 2010: (5k/5k)* move from images to videos, language modeling works well,
uploader ID a key indicator.
• 2011: (10k/5k) first use of audio, granularity of regions is critical.
• 2012: (15k/5k) user models, motion features, two-stage approaches.
• 2013: (9M/262k) eliminated uploader ID, confidence prediction, retrieval
approaches.
• 2014: (5M/500k) YFCC100M data set, graph approaches; external knowledge
resources again hurt performance.
• 2015: (5M/1M) 8% correct visual, 27% correct multimodal.
*(training data size/test data size)
5. M. Larson, P. Kelm, A. Rae, C. Hauff, B. Thomee, M. Trevisiol, J. Choi, O. van Laere, S. Schockaert, G. J. F.
Jones, P. Serdyukov, V. Murdock, and G. Friedland. The benchmark as a research catalyst: Charting the
progress of geo-prediction for social multimedia. In J. Choi and G. Friedland, editors, Multimodal Location
Estimation of Videos and Images. Springer, pp. 5-40. 2015.
9. Gerald Friedland, Symeon Papadopoulos, Julia Bernd, and Yiannis Kompatsiaris. 2016. Multimedia Privacy. In Proceedings of the 2016
ACM on Multimedia Conference (MM '16). ACM, New York, NY, USA, 1479-1480.
Cybercasing: Beware of large-scale automatic image mining
12. Example images whose locations are protected
by an Instagram filter
Jaeyoung Choi, Martha Larson, Xinchao Li, Kevin Li, Gerald Friedland, and Alan Hanjalic.
2017. The Geo-Privacy Bonus of Popular Photo Enhancements. ACM ICMR 2017.
13. Pixel Privacy at MediaEval 2018
Task Goal:
• Increasing image appeal,
• while blocking automatic inference of sensitive scene information.
Evaluation Criteria:
• Protection: % of images whose scene categories can no longer be inferred by
the “attack algorithm”. (The attack algorithm is a ResNet50 classifier trained
on the training set of the Places365-Standard dataset.)
• Appeal: Degree to which the images are enhanced from the point of view of
users.
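The Protection criterion above can be sketched as a simple score: of the images the attack classifier labeled correctly before transformation, what fraction does it get wrong afterwards? This is an illustrative sketch with made-up labels, not the official evaluation code.

```python
# Hypothetical sketch of the Protection metric: the fraction of images that
# the attack classifier recognized correctly before the transformation but
# can no longer recognize afterwards. All names and data are illustrative.

def protection_rate(true_labels, preds_before, preds_after):
    """Fraction of originally-recognized images that become protected."""
    recognized = [i for i, (t, p) in enumerate(zip(true_labels, preds_before))
                  if t == p]
    if not recognized:
        return 0.0
    protected = sum(1 for i in recognized if preds_after[i] != true_labels[i])
    return protected / len(recognized)

# Toy example: 4 images, 3 recognized before; 2 of those fooled afterwards.
labels = ["bar", "beach", "sauna", "chapel"]
before = ["bar", "beach", "sauna", "forest"]    # classifier wrong on "chapel"
after  = ["bar", "forest", "street", "forest"]  # transform fools 2 of the 3
print(protection_rate(labels, before, after))   # -> 0.666...
```

Note that images the classifier already misclassified do not count toward the score, which matches the criterion's phrasing "can no longer be inferred".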
Novelty:
We combine work on adversarial examples and image enhancement, which have
been previously studied separately.
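As background on the adversarial-examples side of this combination, the classic idea (fast gradient sign method, FGSM) perturbs an input slightly in the direction that increases the classifier's loss. A minimal sketch on a toy linear classifier follows; the identity weight matrix, epsilon value, and class labels are all illustrative, and the task's actual attack model is a ResNet50, not this toy.

```python
# Minimal FGSM-style adversarial perturbation on a toy linear classifier.
# Illustrative only: all values are made up for this sketch.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, W, true_class, eps):
    """One fast-gradient-sign step: nudge x to increase cross-entropy loss."""
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[true_class] = 1.0
    grad_x = W.T @ (p - onehot)       # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # small, structured perturbation

W = np.eye(2)                         # identity "classifier": score_i = x_i
x = np.array([1.0, 0.0])              # clean input, predicted class 0
x_adv = fgsm(x, W, true_class=0, eps=0.6)
print(int(np.argmax(W @ x)), int(np.argmax(W @ x_adv)))  # prediction flips: 0 1
```

The enhancement side of the combination replaces such imperceptible noise with visible but appealing changes (filters, style transfer) that have the same classifier-fooling effect.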
B. Zhou, A. Lapedriza, A. Khosla, A. Oliva and A. Torralba, "Places: A 10 Million Image Database for Scene Recognition," in IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 6, pp. 1452-1464, 1 June 2018.
14. Pixel Privacy at MediaEval 2018
Task Data: Places365-Standard dataset
Sensitive scene information: Taken to be the class labels of the
Places365-Standard classes associated with privacy criteria that we defined:
• Places in the home,
• Places far away from the home (typical vacation places),
• Places typical for children,
• Places related to religion,
• Places related to people's health,
• Places related to alcohol consumption,
• Places in which people do not typically wear street clothes,
• Places related to people's living conditions/income,
• Places related to security,
• Places related to military.
18. Example images whose scene categories are protected by style transfer
Yang Chen, Yu-Kun Lai, and Yong-Jin Liu. CartoonGAN: Generative Adversarial Networks for Photo
Cartoonization. CVPR 2018.
19. Example images whose scene categories are protected by style transfer
20. Baseline: Protection with CartoonGAN
Protection: About 60% of the images that could originally be recognized are protected.
Appeal: CartoonGAN is designed for appeal (appeal was not specifically tested).
21. Outlook
• What is appeal?
• Protecting other types of sensitive information?
• Protecting under less-constrained conditions:
- Protecting against other attacks?
- Protecting in the presence of negative (non-sensitive) images?
• Can we keep up in the “arms race” against classifiers trained on protected
images?
• Detection vs. protection: Can the task help restore the balance of research
effort invested in these two directions?