
EU AI Act

Prepare HR and Legal teams to meet AI governance compliance requirements.

    Adopted in 2024, the European Union Artificial Intelligence Act is the world’s first comprehensive AI regulation, establishing a harmonized legal framework across all EU member states. The law introduces a risk-based approach to governing artificial intelligence, with heightened obligations for high-risk systems that materially affect individuals’ rights and opportunities. For HR and Legal leaders, the Act makes AI governance an operational compliance mandate, requiring structured oversight, transparency, and workforce training.

    EU AI Act FAQs

    When was the EU AI Act adopted, and when does it take effect?

    • The European Parliament approved the final text on 13 March 2024, and the Council of the EU gave its approval on 21 May 2024.
    • The Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024.
    • Most provisions become applicable from 2 August 2026, though implementation timelines vary by risk category.

    How does the EU AI Act classify AI systems?

    The Act takes a risk-based approach, classifying AI systems into four levels based on their potential for harm:

    • Unacceptable risk: Certain applications (e.g., social scoring) are banned outright. 
    • High-risk systems: Face strict requirements before being placed on the market (e.g., quality management, safety, human oversight). AI used in recruitment and employment decisions falls into this category.
    • Limited risk: Subject to transparency obligations (users must know they’re interacting with AI). 
    • Minimal risk: No strict regulatory duties, though voluntary codes of conduct are encouraged.

    The Act also includes rules for general-purpose AI models, such as transparency obligations and summaries of training data for large models.

    Where can I read the full text of the Act?

    The full legal text is available via the Official Journal of the EU as Regulation (EU) 2024/1689. Up-to-date summaries and provisions are also available on the European Commission’s digital strategy AI Act page.

    What are the latest developments?

    Recent developments include:

    • The European Parliament banned the use of AI features on official work devices over cybersecurity and data protection concerns.  
    • Discussions are underway about delaying the high-risk AI rules until 2027 as part of a broader regulatory reform.
    • The European Commission published a voluntary code of practice to help businesses comply with AI Act transparency and safety requirements.  
    • Experts are highlighting that AI literacy training is becoming a legal requirement under the AI Act.

    Does the EU AI Act require AI training?

    The Act doesn’t mandate a single specific course, but Article 4 elevates AI literacy to a legal obligation: organizations that develop or use AI systems must take measures to ensure their staff have a sufficient understanding of AI, its risks, and their governance obligations under the EU framework. Authorities and industry bodies are developing guidelines and programs to help organizations meet this obligation and support compliance training.

    What are the main aims of the EU AI Act?

    The main aims are to:

    • Protect fundamental rights of individuals in the EU. 
    • Foster trustworthy, human-centric and ethical AI. 
    • Encourage innovation and competitiveness by harmonizing rules across the EU internal market. 
    • Promote transparency, accountability, and safety in AI deployment.  

    Does the EU AI Act apply to organizations outside the EU?

    Yes. Because the Act regulates access to the EU internal market, it applies extraterritorially: non-EU providers are covered if their AI systems are sold, deployed, or used within the EU.