EU Moves to Ban Deepfake Porn Without Consent in AI Law Overhaul
On May 7, 2026, the EU agreed to ban AI services that create deepfake porn of real people without consent, tightening the AI Act and imposing new compliance obligations on providers.
The European Union has reached a provisional agreement to prohibit AI services that generate deepfake porn of real people without their consent, moving to amend the bloc’s landmark AI Act. Lawmakers said the change responds to a surge in harms—particularly against women and children—where intimate images and videos are synthesized and distributed without permission. The agreement, struck by the European Parliament and the Council, aims to close a gap in regulation that advocates and victims have long warned about.
Provisional Agreement by EU Institutions
On May 7, 2026, the European Parliament and the Council of the EU secured a provisional deal to insert into the AI Act a specific ban on services that create sexual content of real people without consent. The amendment singles out AI-generated imagery, video and audio intended to sexualize or depict someone in intimate situations when that individual has not agreed. Lawmakers framed the move as a targeted response to an emerging type of abuse made trivial to produce and share by generative AI tools.
EU officials described the accord as provisional pending final technical drafting and adoption, but they signalled a swift timetable to convert the political deal into binding law. The change was presented as part of a broader effort to ensure the AI Act addresses not only high-risk systems but also specific harms that have grown rapidly in digital ecosystems. Observers noted the political consensus reflected mounting public pressure and high-profile cases that highlighted the damage of non-consensual deepfakes.
Scope of the Ban and Notable Exceptions
The amendment prohibits AI services that generate sexual images, video or voice of an identifiable real person without that person’s consent, according to the agreement text released by negotiators. The prohibition covers services that synthesize a person’s face or likeness into sexual content even when the underlying footage or images belong to someone else. In addition, the text imposes an explicit, non-negotiable ban on any AI generation of sexual content involving children or material that would constitute child sexual abuse.
Exceptions were crafted narrowly for specific professional uses, with EU sources indicating that medical, forensic or law-enforcement purposes could be excluded from the ban when tightly controlled. These carve-outs are intended to preserve legitimate investigative or therapeutic uses while preventing routine or commercial deployment of the same technologies for exploitative ends. Negotiators stressed that any permitted uses should be subject to strict oversight, transparency and purpose limitation.
Obligations Imposed on AI Service Providers
Under the revision, providers of generative AI services will be required to implement technical and organisational measures to prevent the creation and spread of non-consensual sexual content. Proposed obligations include content filters, identity-verification safeguards for consent claims, takedown procedures and cooperation mechanisms with law-enforcement and victim-support organisations. Regulators expect firms to adopt proactive detection tools and to maintain records that can support investigations into violations.
The provisional text sets concrete compliance milestones for affected companies, reflecting a push by EU institutions to avoid indefinite grace periods. Industry representatives warned that rapid compliance could be operationally challenging and costly, especially for smaller developers, but also acknowledged the need for clear rules to restore public trust. The final law will determine the enforcement powers and penalties for non-compliance, which negotiators said will be calibrated to deter systemic breaches.
Protecting Victims and Minors
The amendment places a strong emphasis on protecting victims of image-based sexual abuse, with lawmakers highlighting the disproportionate impact on women and minors. The ban on generating sexual content involving children is absolute and does not hinge on consent, reflecting long-standing criminal standards for child sexual exploitation. Member states will be urged to strengthen victim assistance, public awareness campaigns and cross-border cooperation to handle evidence and prosecutions.
Victim advocates welcomed the move but cautioned that legal bans must be matched by practical support, including streamlined removal processes and affordable legal remedies. Digital platforms will face pressure to speed up takedowns and to work with civil-society organisations that assist survivors. Experts also noted the technical difficulty of fully erasing content once it spreads, underscoring the importance of prevention and early detection.
Implementation Timeline and Next Steps
Negotiators expect the provisional agreement to be followed by final technical drafting and formal approval before the AI Act amendment becomes law; member states and companies will then move into implementation and enforcement. The provisional text references specific compliance deadlines for providers, signalling a phased schedule to bring existing services into alignment with the new rules. EU regulatory bodies will develop guidance and monitoring frameworks to support consistent application across member states.
Legal scholars say the EU measure could influence policy debates outside Europe and push global platforms to adopt similar safeguards to avoid reputational and regulatory risk. Technology firms that operate across jurisdictions will need to reconcile differing national rules while preparing for potential extra-territorial effects if major platforms apply uniform standards. The coming months are likely to see discussions between regulators, industry and civil society over the technical feasibility, costs and practicalities of enforcing the ban.
The decision marks a significant step in the EU’s effort to curb harms from generative AI by targeting a clearly defined and socially harmful application, and it sets expectations for how legislators will balance innovation with individual rights going forward.