The European Commission on Friday released guidelines to help companies comply with the EU’s artificial intelligence law, whose obligations for general-purpose AI models, including those posing systemic risks, take effect on August 2.
The guidance is aimed at firms building powerful AI systems that could impact public health, safety, fundamental rights, or society. These include models made by companies such as Google, OpenAI, Meta, Anthropic, and Mistral.
Under the AI Act, firms must evaluate their models, test for potential threats, report serious incidents, and ensure cybersecurity protections.
General-purpose or foundation models must also meet transparency rules, including drawing up technical documentation, adopting copyright policies, and publishing summaries of the data used for training.
The AI Act became law last year; its requirements for general-purpose models apply from August 2, 2025.
Violations can bring fines of up to €35 million or 7% of global turnover, depending on the type of breach.
In a statement, EU tech chief Henna Virkkunen said the guidelines would support the smooth application of the rules.