Despite widespread concern about deepfakes, no jurisdiction has enacted a blanket prohibition on creating or sharing AI-generated content depicting real people. According to Traverse Legal, "there is no single federal law that bans all deepfakes across all use cases." The EU AI Act focuses on transparency and labeling requirements, not outright bans. The legal landscape is better described as a patchwork of narrow, context-specific rules.
The clearest prohibitions target nonconsensual intimate imagery. In the United States, the TAKE IT DOWN Act makes the knowing publication of nonconsensual intimate imagery of adults, including AI-generated depictions, a federal crime, and nearly every state has introduced or passed similar legislation. In the UK, the Technology Secretary announced in January 2026 that creating nonconsensual intimate deepfake images is now a specific criminal offence under the Data (Use and Access) Act. AI-generated child sexual abuse material (CSAM) is illegal under existing law in virtually every jurisdiction; in the United States, existing federal statutes cover it regardless of how the imagery was produced.
Large categories of deepfake content remain entirely legal in most jurisdictions — including satire, political commentary, entertainment, parody, and consensual uses. Treating targeted bans on intimate imagery as evidence of a general prohibition is a composition fallacy: the existence of specific rules does not mean a sweeping, categorical ban exists. Anyone assessing the legality of a particular deepfake must consider the content type, jurisdiction, and context rather than assuming a universal prohibition.