What should directors know about the EU AI Act?
What is the EU AI Act?
The European AI Act is a pioneering regulation designed to manage the risks associated with artificial intelligence (AI) within the EU. It entered into force on August 1, 2024, and aims to balance AI innovation with protections for health, safety, and fundamental rights. Key compliance dates include:
- February 2, 2025: Prohibitions on AI practices deemed to pose unacceptable risk take effect.
- August 2, 2025: Governance rules and obligations for General Purpose AI (GPAI) take effect.
- August 2, 2026: Obligations for high-risk AI systems take effect, affecting sectors such as health and life insurance.
- August 2, 2027: Obligations extend to additional high-risk AI applications in critical areas.
The EU AI Act sets out a legal framework that is responsive to new developments, quick to adapt, and subject to frequent evaluation. The legislation itself can be amended by delegated and implementing acts, for example to review the list of high-risk use cases. Certain parts of the EU AI Act, and eventually the entire regulation, will be evaluated regularly, ensuring that any need for revision or amendment is identified.
This article was written by the GUBERNA Sounding Board Committee for Cybersecurity.
Applicability and Roles under the EU AI Act
The AI Act applies to any organisation developing, deploying, or using AI systems in the EU, with distinct responsibilities for:
- AI System Providers: Those who create and market AI solutions (e.g. a developer of a CV-screening tool).
- AI System Importers/Distributors/Deployers: Those who import, distribute, implement or use AI internally (e.g. a bank buying this screening tool).
To define its compliance obligations, an organisation must know which role it plays with regard to AI systems:
- Is your organisation an AI system provider (developing an AI system and placing it on the market), or
- is your organisation an AI system deployer (using an AI system)?
Risk Categories in the EU AI Act
The EU AI Act introduces a risk-based approach with requirements tailored to the level of risk posed by the AI system:
- Unacceptable Risk: AI practices prohibited on the basis of a closed list:
  - using subliminal techniques or purposefully manipulative or deceptive techniques to materially distort behaviour, leading to significant harm;
  - exploiting vulnerabilities of a person or group due to specific characteristics, leading to significant harm;
  - biometric categorisation systems that individually categorise a person based on sensitive information, except for labelling or filtering lawfully acquired biometric datasets in the area of law enforcement;
  - social scoring systems;
  - real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes;
  - predictive policing based solely on profiling or personality traits, except when supporting human assessments based on objective, verifiable facts linked to criminality;
  - facial recognition databases based on untargeted scraping; and
  - inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
- High Risk: Subject to extensive safeguards, including data management, human oversight, and cybersecurity. The AI Act distinguishes two types of high-risk AI systems:
  - AI intended to be used as a product (or as the safety component of a product) covered by specific EU legislation, such as civil aviation, vehicle security, marine equipment, toys, lifts, pressure equipment and personal protective equipment. Examples include AI systems operating robots, drones, or medical devices.
  - AI systems used in specific sensitive areas, such as remote biometric identification systems, AI used as a safety component in critical infrastructure, and AI used in education, employment, credit scoring, law enforcement, migration and the democratic process. Examples include AI systems that assess whether somebody can receive a certain medical treatment, get a certain job, or obtain a loan to buy an apartment, as well as systems used by the police to profile people or assess their risk of committing a crime (unless prohibited).
- Limited Risk: Requires transparency, especially where AI influences user behaviour: for example, where there is a clear risk of manipulation (e.g. via chatbots) or of deep fakes. Users should be aware that they are interacting with a machine.
- Minimal Risk: No stringent requirements; covers applications like spam filters. Providers of such systems may voluntarily choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.
Special provisions apply to General Purpose AI systems (including large generative AI models) used for tasks like image and speech recognition, audio/video generation, pattern detection, question answering, etc.
Governance and Accountability under the EU AI Act
Directors are expected to oversee AI governance structures that emphasize ethical AI, risk management, and compliance with EU values. Their responsibilities include:
- Cultivating a culture of ethical AI use.
- Ensuring strong AI governance frameworks.
- Overseeing transparent engagement with stakeholders, including regulators and the public.
Any natural or legal person in the EU can lodge a formal complaint with a market surveillance authority about non-compliance with the AI Act.
Directors should note that non-compliance can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher. Directors may also face personal liability if their oversight of AI is found lacking.
Strategic and Risk Considerations in the EU AI Act
As AI becomes increasingly central to business operations, directors should focus on:
- Developing an AI Strategy: Aligning with transparency, fairness, and ethical standards, with vendor selection and process impact in mind.
- Harmonization Across Jurisdictions: Ensuring alignment with global regulatory standards.
AI-Related Risks
Assessing AI-related risks requires analysing the process, data and technology that the AI system will be involved in. The AI Act requires proactive management of AI risks beyond typical IT risks. Directors should be aware of:
- Bias and hallucination in AI outputs.
- Intellectual property concerns and data leaks.
- Risks of cyberattacks, advanced fraud, and environmental impact.
Conclusion on the EU AI Act
The EU AI Act creates harmonised rules, following a risk-based approach, for placing AI systems on the EU market, applicable to EU and third-country providers and deployers alike. It prohibits the use of certain AI systems and sets specific requirements for high-risk systems.
Board directors play a critical role in AI compliance under the EU AI Act. By embedding AI governance into the corporate structure and staying ahead of regulatory requirements, organisations can not only avoid penalties but also gain a competitive edge in an evolving AI landscape.
Remark: Board directors must be aware that many other AI laws and regulations exist around the world (https://iapp.org/media/pdf/resource_center/global_ai_law_policy_tracker.pdf); this article covers only the EU AI Act. The EU is involved in bilateral and multilateral forums to promote trustworthy, human-centric and ethical AI.