AI Deepfake Harm Report: Weaponized Profit, Sexual Exploitation, and Disinformation Threats Escalate
The AIID report reveals escalating real-world harms from AI deepfakes: weaponized scams, non-consensual sexual content, mass disinformation, and "official misuse" of AI-generated content by institutions.
A research report jointly published by multiple global institutions warns that AI-driven deepfake technology has moved from the laboratory to systematic weaponized applications, causing increasingly severe social harm in three areas: profit-driven fraud, sexual exploitation, and political disinformation. The report, titled "2026 Global Deepfake Threat Assessment," was co-authored by the AI Incident Database, Europol, and several universities.
The Manila Times was the first to provide in-depth coverage of the report. It noted that global economic losses caused by deepfake technology in 2025 were estimated at $38 billion, a 156% increase from the previous year. The largest single category of losses was business fraud—criminals using AI-generated voice and video to impersonate corporate executives, obtaining fraudulent fund transfers or sensitive information. The report documented 17 deepfake business fraud cases with losses exceeding $100 million each.
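A quick sanity check on the report's growth figure: $38 billion in 2025, up 156% year over year, implies a 2024 baseline of roughly $14.8 billion. The calculation:

```python
# Back-of-envelope check on the report's figures: $38B in 2025 losses,
# up 156% year over year, implies the 2024 baseline computed below.
losses_2025 = 38e9   # reported 2025 losses in dollars
growth = 1.56        # 156% year-over-year increase

baseline_2024 = losses_2025 / (1 + growth)
print(f"Implied 2024 losses: ${baseline_2024 / 1e9:.1f}B")  # → $14.8B
```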
In the area of sexual exploitation, the situation is even more dire. Tracking data from DeepTrace Labs (now renamed Sensity AI) shows that deepfake pornographic content on the internet grew by 420% in 2025, with 96% of victims being women. More concerning is the dramatic increase in the accessibility of generation tools—an investigative report by NBC News revealed that multiple Telegram groups and dark web forums offer "one-click face-swap" services, where users only need to upload a single photo of a target person, and the AI can generate realistic pornographic videos within minutes. Victims include ordinary citizens, teachers, students, and even minors.
Europol analyzed the use of deepfakes in political disinformation in a dedicated chapter of the report. Between 2025 and early 2026, elections or referendums in at least 23 countries were disrupted by deepfake content. The most severe case occurred during a presidential election in a Southeast Asian country, where a fabricated video of a candidate went viral on social media 72 hours before voting, garnering over 100 million views. Although the video was ultimately confirmed to be fake, it had already materially affected the election outcome.
On the technical defense front, the report showcases both progress and challenges. Currently, the most advanced deepfake detection systems (such as the latest versions of Microsoft Video Authenticator and Intel FakeCatcher) achieve detection accuracy of up to 97% in laboratory environments, but in real social media dissemination environments—after compression, screen capture, re-encoding, and other operations—accuracy drops to approximately 72%. More problematic is that the latest generation of deepfake generation models can conduct adversarial training against known detection algorithms, creating an "arms race" between detection and fabrication.
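The gap between laboratory and in-the-wild accuracy reflects how re-encoding erodes the signal a detector relies on. A minimal sketch of that evaluation setup, using an entirely invented toy detector and synthetic scores (nothing here reflects the actual Microsoft or Intel systems):

```python
import random

random.seed(0)

def detect(score, threshold=0.5):
    """Toy detector decision: flag media as fake when the score clears the threshold."""
    return score >= threshold

def recompress(score, noise=0.25):
    """Simulate compression, screen capture, and re-encoding blurring the evidence."""
    return score + random.uniform(-noise, noise)

# Synthetic evaluation set: label 1 = deepfake, 0 = authentic, each paired
# with an invented detector confidence score.
samples = [(1, 0.9), (1, 0.8), (1, 0.6), (0, 0.1), (0, 0.2), (0, 0.4)]

clean_acc = sum(detect(s) == bool(y) for y, s in samples) / len(samples)
noisy_acc = sum(detect(recompress(s)) == bool(y) for y, s in samples) / len(samples)

print(f"clean accuracy: {clean_acc:.0%}, after re-encoding: {noisy_acc:.0%}")
```

Borderline scores (those near the threshold) are the ones that flip after re-encoding, which is why field accuracy degrades even when laboratory accuracy is high.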
Professor Renée DiResta of Stanford University's Internet Observatory noted in the report's commentary: "We are losing an asymmetric war. Generating a deepfake takes only a few minutes and a few dollars of computing cost, but verifying its authenticity requires specialized equipment, professional personnel, and considerable time. The speed of dissemination on social platforms far exceeds the speed of fact-checking." She called for the establishment of a global Content Provenance verification standard, embedding digital watermarks and signatures into all AI-generated media.
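The provenance scheme DiResta describes can be illustrated with a toy signing flow: the generator signs media bytes at creation time, and any later edit invalidates the signature. This sketch uses a shared-secret HMAC purely for illustration; real provenance standards such as C2PA use public-key certificates, and every name below is hypothetical:

```python
import hashlib
import hmac

# Placeholder signing key for illustration only; a real system would use
# per-device public-key certificates, not a shared secret.
SECRET = b"generator-signing-key"

def sign_media(media: bytes) -> str:
    """Produce a provenance signature over the raw media bytes."""
    return hmac.new(SECRET, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, signature: str) -> bool:
    """Check that the media has not been altered since signing."""
    return hmac.compare_digest(sign_media(media), signature)

clip = b"\x00\x01synthetic-video-bytes"
tag = sign_media(clip)
assert verify_media(clip, tag)              # untouched media verifies
assert not verify_media(clip + b"x", tag)   # any edit breaks the signature
```

The asymmetry DiResta notes still applies: signing is cheap at creation time, but the scheme only helps if platforms check signatures at distribution time.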
On the legislative front, multiple countries are accelerating the development of relevant regulations. The EU AI Act explicitly classifies deepfakes as a "high-risk AI application," requiring all AI-generated audio and video content to be labeled with its origin. The U.S. Congress currently has three deepfake-related bills under review, among which the DEEPFAKES Accountability Act proposes fines of up to $150,000 for generating deepfake content using someone's likeness without consent. In 2025, South Korea enacted what the report calls the world's strictest deepfake pornography regulations, with penalties of up to seven years in prison.
The report also calls on the international community to establish a multilateral governance framework for AI deepfakes, similar to the Geneva Conventions. UNESCO has indicated it will convene a dedicated international conference on the issue in the second half of 2026.
Enforcement mechanisms are tightening as well. In the United States, the DEEPFAKES Accountability Act under congressional review would require all AI-generated videos and images to embed irremovable watermarks, with proposed penalties of up to 10 years' imprisonment and $5 million in fines. The EU has incorporated deepfake provisions into the updated Digital Services Act (DSA), requiring social media platforms to take down reported deepfake content within 24 hours or face fines of up to 6% of global revenue. China released a revised version of its "Interim Measures for the Administration of Generative AI Services" in 2025, adding registration and traceability requirements for AI face-swapping services.
Progress in technical defense is also noteworthy. The C2PA (Coalition for Content Provenance and Authenticity) standard, jointly promoted by Intel, Google, and Adobe, achieved a major breakthrough in 2025—major camera manufacturers (Canon, Sony, Nikon) and smartphone makers (Apple, Samsung) have committed to adding C2PA content authentication capabilities to all new devices by the end of 2026. MIT Media Lab's deepfake detection model "ARIA-5" achieved a 96.3% identification accuracy rate in the latest benchmark tests, but the researchers admitted: "The pace of advancement in generation technology still outpaces detection technology. This is an asymmetric arms race."
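The C2PA commitment described above binds creation metadata to the asset at capture time. A minimal sketch of the idea, with illustrative field names rather than the actual C2PA manifest schema:

```python
import hashlib

# Toy C2PA-style provenance manifest: a hash of the asset plus creation
# metadata, so tampering with the asset is detectable. The field names
# here are illustrative, not the real C2PA schema.
def make_manifest(asset: bytes, tool: str) -> dict:
    return {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "claim_generator": tool,
    }

def check_manifest(asset: bytes, manifest: dict) -> bool:
    """Verify the asset still matches the hash recorded at capture time."""
    return hashlib.sha256(asset).hexdigest() == manifest["asset_sha256"]

photo = b"raw-sensor-bytes"
manifest = make_manifest(photo, "CameraFirmware/1.0")
assert check_manifest(photo, manifest)          # original capture verifies
assert not check_manifest(photo + b"!", manifest)  # edited bytes do not
```

In the full standard the manifest itself is signed with the device maker's certificate, which is what the camera and smartphone commitments cited above would put into consumer hands.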
On the economic front, beyond direct fraud losses, deepfake technology also inflicts enormous indirect damage on brand and corporate reputation. The insurance marketplace Lloyd's of London estimates that global brand value losses due to AI misinformation (including but not limited to deepfakes) reached $78 billion in 2025. Several large enterprises have begun purchasing specialized "AI reputation insurance" products. EY's latest survey shows that 73% of Fortune Global 500 CEOs listed AI deepfakes as "the greatest non-traditional security threat for 2026."