- This Election Year, Look for Content Credentials: Media organizations combat deepfakes and disinformation with digital manifests, Eliza Strickland, IEEE Spectrum, 2024
  - explains what C2PA is and how its adoption may work
- Understanding the Impact of AI-Generated Deepfakes on Public Opinion, Political Discourse, and Personal Security in Social Media, Prakash L. Kharvi, IEEE SP, 2024
  - argues why policymakers & companies should adopt C2PA
- Defining best practices for opting out of ML training, Paul Keller, Zuzanna Warso, Open Future policy brief, 2023
  - C2PA has an entry for opting out of ML training
- AI-Generated Images as an Emergent Record Format, Jessica Bushey, IEEE BigData, 2023
  - literature review by computational archival science (CAS) people, ITrustAI
  - GenAI literature in medicine, law enforcement & journalism
  - copyright attribution; public trust in democracy
- Confidence-Building Measures for Artificial Intelligence: Workshop Proceedings, Sarah Shoker, Andrew Reddie, et al., arXiv, 2023
  - foundation models can be used for insecure things
  - watermarking has tampering and adoption problems
- ⭐ Solutions to Deepfakes: Can Camera Hardware, Cryptography, and Deep Learning Verify Real Images?, Alexander Vilesov, Yuan Tian, Nader Sehatbakhsh, Achuta Kadambi, arXiv, 2024
  - surveys ways to verify images, including C2PA
  - needs every camera to support C2PA
  - can be spoofed by taking a photo of a photo
  - spoofing can be prevented by various ways of distinguishing 2D from 3D: camera structure from motion, hand jitter, stereo-based depth triangulation, lidar sensing, polarization-based sensing, time-of-flight sensing, spatially altered Bayer pattern, sparse polarization-sensitive pixel sampling (a stereo-depth sketch follows this list)
  - these defenses are useless if the object is already flat
  - proposes a new file format for "real" images
- Global content revocation on the internet: a case study in technology ecosystem transformation, Narek Galstyan, James McCauley, Hany Farid, Sylvia Ratnasamy, Scott Shenker, HotNets, 2022
  - wild new idea to rethink content revocation
- Deepfake Fraud Detection: Safeguarding Trust in Generative AI, Felipe Romero-Moreno, SSRN preprint, 2024
  - a legal scholar's perspective on C2PA and other tools to combat deepfakes
- Ensuring privacy in provenance information for images, Nikolaos Fotos, Jaime Delgado, IEEE DSP, 2023; Towards Privacy-Enhancing Provenance Annotations for Images, Nikolaos Fotos, Jaime Delgado, IEEE ICIP, 2024
  - add privacy-preserving features to image provenance
- A Blockchain based Framework for Content Provenance and Authenticity, Emil Bureacă, Iulian Aciobăniței, IEEE ECAI, 2024
  - blockchain on top of C2PA
- Integrating Content Authenticity with DASH Video Streaming, Stefano Petrangeli, Haoliang Wang, Maurice Fisher, Dave Kozma, Massy Mahamli, Pia Blumenthal, Andy Parsons, ACM MMSys, 2024
- Trust Nobody: Privacy-Preserving Proofs for Edited Photos with Your Laptop, Pierpaolo Della Monica, Ivan Visconti, Andrea Vitaletti, Marco Zecchini, IEEE SP, 2025; also Cryptology ePrint Archive, 2024
  - privacy-preserving proofs that an image is a manipulation of another
- VerITAS: Verifying Image Transformations at Scale, Trisha Datta, Binyi Chen, Dan Boneh, Cryptology ePrint Archive, 2024; VIMz: Verifiable Image Manipulation using Folding-based zkSNARKs, Stefan Dziembowski, Shahriar Ebrahimi, Parisa Hassanizadeh, Cryptology ePrint Archive, 2024
  - zero-knowledge proof that an image is a transformation of another (a sketch of the proven statement follows this list)
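These zero-knowledge systems (Trust Nobody, VerITAS, VIMz) differ in proof machinery, but the statement being proven has a common shape: the prover knows an original image matching a signed commitment, and the published image is a permitted edit of it. A minimal Python sketch of that relation, checked in the clear; bare SHA-256 stands in for the commitment scheme, there is no actual zkSNARK, and all names here are illustrative:

```python
import hashlib
import numpy as np

def commit(image: np.ndarray) -> str:
    """Commitment to the original image. Real systems evaluate a hiding,
    SNARK-friendly commitment (e.g. a Poseidon hash) inside the circuit,
    not bare SHA-256 over raw pixels."""
    return hashlib.sha256(image.tobytes()).hexdigest()

def crop(image: np.ndarray, top: int, left: int, h: int, w: int) -> np.ndarray:
    """The permitted edit in this toy example: an axis-aligned crop."""
    return image[top:top + h, left:left + w]

def check_statement(original, signed_commitment, published, params) -> bool:
    """The relation a zkSNARK would prove WITHOUT revealing `original`:
    'I know an original matching the signed commitment, and the published
    image equals crop(original, params).' Here it is checked in the clear."""
    return (commit(original) == signed_commitment
            and np.array_equal(crop(original, *params), published))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
    signed_commitment = commit(original)  # e.g. signed by a C2PA camera
    params = (100, 200, 128, 128)
    published = crop(original, *params)
    print(check_statement(original, signed_commitment, published, params))  # True
```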
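As for the stereo-based depth triangulation defense surveyed by Vilesov et al.: under rectified stereo, any planar scene (such as a photographed photo) yields a disparity map that is an affine function of pixel coordinates, because 1/z is linear in (x, y) for a 3D plane and disparity d = f·b/z. A hypothetical flatness test built on that fact; the function names and threshold are my assumptions, not the paper's method:

```python
import numpy as np

def looks_flat(disparity: np.ndarray, rel_tol: float = 0.01) -> bool:
    """Flatness test on a rectified-stereo disparity map. Fit
    d ~ a*x + b*y + c and inspect the residual: a photo-of-a-photo fits
    almost perfectly, a real 3D scene does not. The threshold is
    illustrative only."""
    h, w = disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, disparity.ravel(), rcond=None)
    residual = disparity.ravel() - A @ coeffs
    return np.std(residual) < rel_tol * (np.std(disparity) + 1e-9)

if __name__ == "__main__":
    h, w = 120, 160
    ys, xs = np.mgrid[0:h, 0:w]
    flat = 5.0 + 0.01 * xs + 0.02 * ys                    # photographed photo
    scene = flat + 2.0 * np.sin(xs / 9) * np.cos(ys / 7)  # real 3D relief
    print(looks_flat(flat), looks_flat(scene))            # True False
```

Note how this matches the caveat above: if the photographed object really is flat, the test has nothing to distinguish.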
- To Authenticity, and Beyond! Building Safe and Fair Generative AI Upon the Three Pillars of Provenance, John Collomosse, Andy Parsons, IEEE CGA, 2024
  - three pillars: metadata, fingerprint, watermark (a fingerprint sketch follows this list)
- EKILA: Synthetic Media Provenance and Attribution for Generative Art, Kar Balan, Shruti Agarwal, Simon Jenni, Andy Parsons, Andrew Gilbert, John Collomosse, IEEE CVPR, 2023; DECORAIT - DECentralized Opt-in/out Registry for AI Training, Kar Balan, Andrew Gilbert, Alexander Black, Simon Jenni, Andy Parsons, John Collomosse, ACM CVMP, 2023
  - blockchain to index & attribute GenAI contributions by fingerprinting
  - C2PA to express consent & payment preferences
- Interoperable Provenance Authentication of Broadcast Media using Open Standards-based Metadata, Watermarking and Cryptography, John C. Simmons, Joseph M. Winograd, IBC, 2024
  - C2PA & Advanced Television Systems Committee (ATSC) for broadcast provenance
  - ATSC: transfers metadata over broadcast
- Towards Trustworthy Digital Media In The AIGC Era: An Introduction To The Upcoming ISO JPEG Trust Standard, Jiayun Mo, Xin Kang, Ziyuan Hu, Haibo Zhou, Tieyan Li, Xiaojun Gu, IEEE COMSTD, 2023; An International Standard For Assessing Trustworthiness In Media, Deepayan Bhowmik, Sabrina Caldwell, Jaime Delgado, Touradj Ebrahimi, Nikolaos Fotos, Xiaojun Gu, IEEE ICIP, 2024
  - ISO JPEG Trust: "trust profile" vs "trust credential"
  - question-answer validation
  - emphasis on trustworthiness
- TRAIT: A Trusted Media Distribution Framework, James Rainey, Mohamed Elawady, Charith Abhayaratne, Deepayan Bhowmik, IEEE DSP, 2023
  - blockchain to detect media manipulation
- Can people identify original and manipulated photos of real-world scenes?, Sophie J. Nightingale, Kimberley A. Wade, Derrick G. Watson, Springer Cognitive Research, 2017
  - people can hardly tell whether or how an image was manipulated
- Explaining Why Fake Photos are Fake: Does It Work?, Margie Ruffin, Gang Wang, Kirill Levchenko, ACM HCI, 2022
  - people cannot tell whether an image was manipulated
  - explaining the manipulation does not always help
- AI-synthesized faces are indistinguishable from real faces and more trustworthy, Sophie J. Nightingale, Hany Farid, PNAS, 2022
  - people cannot tell whether a face is AI-generated
- Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images, Zeyu Lu, Di Huang, Lei Bai, Jingjing Qu, Chengyue Wu, Xihui Liu, Wanli Ouyang, NeurIPS, 2023
  - accuracy at detecting fake photorealistic images: humans 60%; best open model 87% but with a high false-positive rate; best overall accuracy 83%
  - 2 million fake images in the model benchmark
- Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?, Anna Yoo Jeong Ha, Josephine Passananti, Ronik Bhaskar, Shawn Shan, Reid Southen, Haitao Zheng, Ben Y. Zhao, ACM CCS, 2024
  - neither ML detectors nor humans can reliably tell whether an image is AI-generated
  - ways to trick ML detectors
  - an ML detector trained on a specific generator does poorly on new generators
  - human experts do better than ordinary humans
- Fake It Until You Break It: On the Adversarial Robustness of AI-generated Image Detectors, Sina Mavali, Jonas Ricker, David Pape, Yash Sharma, Asja Fischer, Lea Schönherr, arXiv, 2024
  - adversarial perturbations can make forensic classifiers useless (an FGSM-style sketch follows this list)
  - best model DRCT-CLIP has 88% accuracy
- Is The Web HTTP/2 Yet?, Matteo Varvello, Kyle Schomp, David Naylor, Jeremy Blackburn, Alessandro Finamore, Konstantina Papagiannaki, PAM, 2016
  - crawls & tracks HTTP/2 usage
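On the fingerprint pillar (Collomosse & Parsons) and the fingerprint-based attribution in EKILA/DECORAIT: a fingerprint is derived from pixel content, so it survives re-encoding and metadata stripping, unlike a cryptographic hash of the file bytes. A toy difference-hash sketch to show the idea; real attribution systems use learned visual embeddings, and everything below is illustrative:

```python
import numpy as np
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Difference hash: a 64-bit perceptual fingerprint that survives
    re-encoding, metadata stripping, and mild resizing. Compares each
    pixel to its right neighbor on a downscaled grayscale image."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = np.asarray(img, dtype=np.int16)
    bits = (px[:, 1:] > px[:, :-1]).ravel()  # gradient sign between columns
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Distance between fingerprints; small means 'same content'."""
    return bin(a ^ b).count("1")

# Usage: the same picture saved as PNG and as a recompressed JPEG gets a
# dhash distance near 0, while its SHA-256 digests share nothing.
```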
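And on the adversarial perturbations from Mavali et al.: the generic attack class against any differentiable detector is gradient-based evasion, e.g. the fast gradient sign method (Goodfellow et al., 2015). A hypothetical PyTorch sketch; `detector` and its label convention are my assumptions, not the paper's setup:

```python
import torch

def fgsm_evade(detector: torch.nn.Module, image: torch.Tensor,
               eps: float = 2 / 255) -> torch.Tensor:
    """One FGSM step: nudge every pixel by +/-eps against the gradient of
    the detector's 'AI-generated' logit, so a fake scores as real.
    `detector` is a hypothetical model mapping a (1, C, H, W) batch to a
    single logit (positive = 'AI-generated'); `image` is (C, H, W) in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    logit = detector(image.unsqueeze(0)).squeeze()
    logit.backward()                       # d(logit) / d(pixels)
    adv = image - eps * image.grad.sign()  # step toward the 'real' side
    return adv.clamp(0, 1).detach()
```

A perturbation of 2/255 per pixel is imperceptible to humans, which is exactly why robustness, not clean accuracy, is the number that matters for forensic classifiers.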