Meta has refused to sign the EU's AI code of practice, openly rejecting the European Union's newly released guidelines for general-purpose AI models. Although the code of practice is voluntary, Meta's top policy executive, Joel Kaplan, criticized it publicly, calling it "over-reach" and filled with "legal uncertainties."
The European Commission introduced the AI code of practice on July 10, offering companies a voluntary roadmap to comply with the more comprehensive AI Act. The AI Act itself is enforceable by law and carries steep penalties. However, the code gives businesses early guidance and the option of legal safe harbor — a benefit Meta just declined.
Kaplan, Meta’s Chief Global Affairs Officer, issued a blunt statement. “Europe is heading down the wrong path on AI,” he said. “We have carefully reviewed the Code of Practice and Meta won’t be signing it. This Code introduces legal uncertainties and exceeds the AI Act’s intended scope.”
Meta's refusal is not a quiet one; the company has taken a clear public stance. This isn't its first clash with European lawmakers over AI. Meta has previously called the AI Act "unpredictable" and accused it of "hampering innovation." Back in February, its public policy team warned the rules would delay product launches and deprive EU citizens of new technologies.
Despite being optional, the code of practice offers distinct advantages. Signing it helps demonstrate compliance with the AI Act, potentially reducing regulatory exposure. EU spokesperson Thomas Regnier explained that those who refuse the code will face greater scrutiny and must prove compliance in other ways. Companies found non-compliant could face fines of up to 7% of annual global turnover.
The EU’s code includes specific demands. It bans AI training on pirated content and requires developers to honor opt-outs from artists and writers. It also asks companies to provide regular documentation describing how their models work and are updated. These requirements have raised concerns among tech firms, particularly those building powerful foundation models.
Some of Meta’s resistance appears rooted in U.S. politics. The company has found a regulatory ally in the White House. In April, President Trump urged the EU to scrap the AI Act, dismissing it as “a form of taxation.” His administration’s opposition aligns with Meta’s ongoing efforts to water down or delay AI legislation worldwide.
Kaplan’s framing positions Meta as a defender of innovation under siege. Yet critics argue the company is dodging accountability and using public opposition to shape the global narrative. They warn that by refusing voluntary commitments now, Meta could face tougher legal consequences later.
Meanwhile, other AI developers must decide how to proceed. Signing the code of practice could offer strategic benefits, including smoother compliance and regulatory goodwill. Refusing it, as Meta has done, may invite more risk, even if the refusal is intended as strategic posturing.
As the AI Act moves toward full enforcement, companies like Meta must navigate a rising tide of global AI regulation. Whether public defiance pays off remains to be seen.