EU AI Act
Prepare HR and Legal teams to meet AI governance compliance requirements.
Adopted in 2024, the European Union Artificial Intelligence Act is the world’s first comprehensive AI regulation, establishing a harmonized legal framework across all EU member states. The law introduces a risk-based approach to governing artificial intelligence, with heightened obligations for high-risk systems that materially affect individuals’ rights and opportunities. For HR and Legal leaders, the Act makes AI governance an operational compliance mandate, requiring structured oversight, transparency, and workforce training.
EU AI Act FAQs
When was the EU AI Act passed and when does it take effect?
- The EU Parliament approved the final text on 13 March 2024, and the EU Council gave its approval on 21 May 2024.
- It was published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024.
- Most provisions will become applicable from 2 August 2026, though implementation timelines vary by risk category.
What does the EU AI Act regulate?
The Act applies a risk-based approach, classifying AI systems into different levels based on potential harm:
- Unacceptable risk: Certain applications (e.g., social scoring) are banned outright.
- High risk: Subject to strict requirements before systems can be placed on the market (e.g., data quality, safety, human oversight).
- Limited risk: Subject to transparency obligations (users must know they’re interacting with AI).
- Minimal risk: No strict regulatory duties, though voluntary standards are encouraged.
It also includes rules for general-purpose AI, such as transparency and training data summaries for large models.
Where can I read the EU AI Act text or official overview?
The full legal text is available via the EU Official Journal (Regulation (EU) 2024/1689). You can also find up-to-date summaries and provisions on the EU’s digital strategy AI Act page.
What are the key updates or news on the EU AI Act today?
Recent developments include:
- The European Parliament banned the use of AI features on official work devices over cybersecurity and data protection concerns.
- Discussions are underway about delaying the high-risk AI rules until 2027 as part of broader regulatory reform.
- The European Commission published a voluntary code of practice to help businesses comply with AI Act transparency and safety requirements.
- Experts are highlighting that the Act makes AI literacy training a legal obligation for organizations that provide or deploy AI systems.
How does the EU AI Act affect training requirements?
While the Act doesn’t prescribe a single specific course that everyone must take, it does elevate AI literacy to a legal requirement: under Article 4, providers and deployers of AI systems must ensure a sufficient level of AI literacy among staff who operate or use those systems, taking into account their roles and the context of use. Authorities and industry bodies are developing guidance and programs to help organizations meet this obligation and support compliance training.
What are the purposes of the EU AI Act?
The main aims are to:
- Protect fundamental rights of individuals in the EU.
- Foster trustworthy, human-centric and ethical AI.
- Encourage innovation and competitiveness by harmonizing rules across the EU internal market.
- Promote transparency, accountability, and safety in AI deployment.
Does the EU AI Act apply to companies outside the EU?
Yes. Because it regulates access to the EU internal market, it applies extraterritorially to non-EU providers and deployers whose AI systems are placed on the market, put into service, or whose outputs are used within the EU.