
Detecting deepfakes and generative AI: Report on standards for AI watermarking and multimedia authenticity workshop



1 Introduction


Experts predict that 90 per cent of online content will be generated by AI by 2025, raising the question of how to identify whether content was human-created, AI-created, deepfaked, or some combination of these. Synthetic media refers to media generated or manipulated using AI; deepfakes are a type of synthetic media that can be disseminated with malicious intent. In most cases, synthetic media is produced for gaming and to improve services and quality of life, but the growth of synthetic media and the use of generative AI technology have given rise to new possibilities for disinformation and misinformation.

Governments and international organizations are already working towards setting policies, regulations, and codes of conduct to enhance the security of and trust in AI systems. The rise of generative AI technology and deepfakes calls for a focus on international standards to support the assessment of multimedia authenticity, the use of watermarking technology, enhanced security protocols, and extensive cybersecurity awareness.

ITU organized a workshop on "Detecting deepfakes and generative AI: Standards for AI watermarking and multimedia authenticity" on Friday 31 May 2024 during the AI for Good Global Summit. The workshop brought together technology and media companies, artists, international organizations, standards bodies, and academia to discuss the security risks and challenges of deepfakes and generative AI, technological innovations, and areas where standards are needed.

This report outlines key points of discussion at the workshop and the workshop's outcomes, including the recommendation to initiate a multistakeholder standards collaboration on AI watermarking, multimedia authenticity, and deepfake detection.