From risk to resilience with cybersecurity under the AI Act
This article was written by Peter Van Dyck (Partner), Sarah De Wulf (Senior Associate) and Sofia Devroe (Junior Associate), A&O Shearman
Cybersecurity under the AI Act: what do you need to know?
On 1 August 2024, the Artificial Intelligence Act (AI Act) officially entered into force, becoming the world’s first comprehensive legal framework for regulating artificial intelligence. Its primary objective is to ensure that AI systems are trustworthy, safe and aligned with fundamental rights and values. To achieve this, the AI Act introduces specific obligations for high-risk AI systems, including the requirement to maintain an appropriate level of cybersecurity.
When do the AI Act’s cybersecurity requirements apply and what do they impose?
The AI Act’s explicit cybersecurity obligations apply only to providers of high-risk AI systems who place those systems on the market or put them into service in the EU, or whose AI systems produce output that is used within the EU. High-risk AI systems are those that present significant risks to health, safety or fundamental rights in certain sectors or use cases. Under the AI Act, such systems must be resilient against malicious attacks that could compromise their intended use, outputs or performance.
The AI Act requires high-risk AI systems to be designed and developed to ensure a high level of accuracy, robustness and cybersecurity. These systems are expected to maintain consistent performance in these areas throughout their entire lifecycle.
This means that high-risk AI systems must be resilient to errors, faults or inconsistencies – whether arising internally or from the external environment in which they operate. Additionally, they must be safeguarded against exploitation by unauthorised third parties seeking to manipulate or compromise the AI system.
The cybersecurity measures implemented must be proportionate to the specific risks and operational context. Such measures include:
technical redundancy solutions, such as backup systems or fail-safe mechanisms (illustrated in the sketch after this list); and
preventive and responsive measures to detect, respond to, resolve and control cyberattacks.
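To make the redundancy point concrete, the following is a minimal, illustrative sketch (in Python) of a fail-safe inference wrapper. All names here (failsafe_predict, SAFE_DEFAULT, the confidence check) are hypothetical and do not come from the AI Act or any particular standard; they simply show the pattern of falling back to a redundant backup model and then to a safe default.

```python
# Illustrative sketch only: a fail-safe wrapper around model inference.
# All names here are hypothetical; the AI Act does not prescribe any
# particular implementation of redundancy or fail-safe behaviour.
from typing import Any, Callable, Dict

# A conservative output used when no model can be trusted, e.g. deferring
# the decision to a human operator.
SAFE_DEFAULT: Dict[str, Any] = {"decision": "refer_to_human", "confidence": 0.0}

def failsafe_predict(
    primary: Callable[[Any], Dict[str, Any]],
    backup: Callable[[Any], Dict[str, Any]],
    features: Any,
) -> Dict[str, Any]:
    """Try the primary model, then a redundant backup, then a safe default."""
    for model in (primary, backup):
        try:
            result = model(features)
        except Exception:
            continue  # model failed outright: try the next layer of redundancy
        # Basic sanity check before trusting the output.
        if 0.0 <= result.get("confidence", -1.0) <= 1.0:
            return result
    return SAFE_DEFAULT  # no model produced a trustworthy answer
```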
The AI Act specifically highlights protection against the following types of attacks:
data poisoning: manipulating the training data to corrupt the system’s learning process (a simple screening mitigation is sketched after this list);
model poisoning: tampering with pre-trained components used during training; and
model evasion: altering input data to deceive the AI system into producing unintended outcomes.
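By way of illustration only, one common technical mitigation against data poisoning is screening training data for anomalous records before training. The sketch below (Python, using NumPy) applies a naive z-score filter; the threshold and the synthetic data are hypothetical, and real-world defences are considerably more sophisticated.

```python
# Illustrative sketch only: a naive z-score screen for anomalous training
# records, one of many possible mitigations against data poisoning.
# The threshold and synthetic data below are hypothetical.
import numpy as np

def screen_training_data(X: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Drop rows whose features lie far outside the bulk of the data."""
    mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-9                   # avoid division by zero
    z_scores = np.abs((X - mean) / std)
    keep = (z_scores < z_threshold).all(axis=1)  # keep rows with no extreme feature
    return X[keep]

# Example: 1,000 clean samples plus a handful of extreme "poisoned" rows.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(1000, 5))
poisoned = rng.normal(50.0, 1.0, size=(5, 5))
filtered = screen_training_data(np.vstack([clean, poisoned]))
print(filtered.shape)  # the extreme rows are screened out before training
```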
Furthermore, connected devices that incorporate AI models and fall within the scope of the Cyber Resilience Act (CRA) are presumed to comply with the AI Act’s cybersecurity requirements – provided that they meet the CRA’s security-by-design obligations.
How is cybersecurity risk assessed for high-risk AI systems?
Before a high-risk AI system can be placed on the market or put into service, the AI Act requires providers to conduct a cybersecurity risk assessment. Its findings must be documented in the system’s technical documentation.
This assessment must go beyond identifying risks to health, safety and fundamental rights – it must also address cybersecurity threats. Providers must evaluate the AI system’s exposure to malicious attacks and document the measures taken to ensure its resilience, as well as how such measures reduce the identified risks.
Importantly, the risk assessment is not a one-off exercise. It must be updated regularly throughout the AI system’s lifecycle to reflect evolving threats and system changes. The technical documentation should be readily available to competent authorities upon request.
Providers of high-risk AI systems are also responsible for ensuring that their quality control and assurance processes generate and maintain this documentation appropriately. These records should be prepared with the understanding that they may be subject to scrutiny in future enforcement actions.
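While the AI Act does not prescribe any particular format, keeping risk-assessment findings in a structured, machine-readable form can make it easier to update them over the system’s lifecycle and to produce them on request. The sketch below is a purely hypothetical example of such a record in Python; the field names are invented for illustration and are not taken from the AI Act or its technical documentation annex.

```python
# Illustrative sketch only: a structured record for one cybersecurity risk
# identified in a high-risk AI system. All field names are hypothetical;
# the AI Act prescribes the content of documentation, not its format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessmentEntry:
    risk_id: str
    threat: str                      # e.g. "data poisoning", "model evasion"
    affected_component: str
    likelihood: str                  # e.g. "low" / "medium" / "high"
    impact: str
    mitigations: list[str] = field(default_factory=list)
    residual_risk: str = "to be assessed"
    last_reviewed: date = field(default_factory=date.today)  # regular updates

entry = RiskAssessmentEntry(
    risk_id="RISK-001",
    threat="data poisoning",
    affected_component="training pipeline",
    likelihood="medium",
    impact="high",
    mitigations=["input screening", "dataset provenance checks"],
    residual_risk="low",
)
```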
Which cybersecurity requirements apply to other AI systems?
Although the AI Act imposes explicit cybersecurity obligations only on providers of high-risk AI systems, this does not mean that other AI systems operating in the EU are exempt from cybersecurity considerations. On the contrary, any AI system that processes personal data, interacts with users, or influences physical or virtual environments is potentially vulnerable to cyber threats and should therefore be designed and developed with security and applicable cybersecurity legislation in mind.
Cybersecurity is not merely a compliance issue; it is also essential for maintaining trust, protecting reputation and ensuring competitiveness. Cyberattacks on AI systems can have serious consequences, including compromising the confidentiality, integrity or availability of data, causing harm or damage to users or third parties, undermining the system’s performance or reliability, and violating fundamental rights or ethical principles.
Moreover, such attacks can erode the trust and confidence of users, customers and stakeholders – ultimately damaging the provider’s reputation and market position.
What is at stake for you?
Non-compliance with the AI Act’s cybersecurity rules for high-risk systems can lead to fines of up to €15 million or 3% of global annual turnover, whichever is higher.
In addition, as outlined in our previous blog post, the management bodies of entities classified as “important” or “essential” under the NIS2 Directive – or even their individual members – can be held personally liable for cybersecurity mismanagement within their organisation.
Under the Belgian Code of Companies and Associations, any person with actual management authority may also be held liable for errors committed in the performance of their duties. This applies if a normally prudent and diligent manager, placed in the same circumstances, could reasonably have acted differently. Depending on the type of entity, liability caps range from €125,000 to €12 million.
In the context of AI system cybersecurity, such errors could include failing to appoint a qualified Information Security Officer or neglecting compliance with cybersecurity obligations under the AI Act or the CRA.
Cybersecurity governance and best practices for providers of AI systems: what can you do?
Regardless of their risk classification, providers of AI systems should adopt a risk-based, security-by-design and security-by-default approach. This means integrating cybersecurity measures from the earliest stages of design and ensuring that default settings offer the highest level of protection.
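As an illustration of security by default, the configuration sketch below (Python) picks the most protective option for every default setting, so that weakening protection requires an explicit, reviewable choice. The field names are invented for this example and do not come from the AI Act or any specific product.

```python
# Illustrative sketch only: "secure by default" settings for a hypothetical
# AI service. Every default picks the most protective option, so weakening
# protection requires an explicit, reviewable decision by the deployer.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIServiceConfig:
    require_authentication: bool = True
    encrypt_data_at_rest: bool = True
    encrypt_data_in_transit: bool = True
    log_inference_requests: bool = True         # supports incident detection and audits
    allow_unsigned_model_updates: bool = False  # only signed, vetted model artefacts
    max_requests_per_minute: int = 60           # conservative rate limit

config = AIServiceConfig()  # out of the box, all protections are on
```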
Key best practices include:
conducting regular risk assessments to identify and mitigate vulnerabilities;
implementing appropriate technical and organisational measures tailored to the system’s context and threat landscape;
following recognised standards and frameworks to ensure robust cybersecurity; and
complying with applicable legislation, such as the General Data Protection Regulation (GDPR), the CRA, the AI Act and the Digital Operational Resilience Act (DORA).
Cybersecurity is not static, and neither is the development of AI systems. Therefore, the cybersecurity management of such systems requires continuous monitoring, updating and improvement in response to evolving threats and technological developments.
To support this, management must stay informed about the latest cyber-related legislation and participate in regular awareness training. While not every member of management needs to be an AI expert, it is essential that they understand the strategic impact of the AI systems their organisation provides. This knowledge enables informed decision-making and alignment with business objectives.
To ensure accountability and transparency, AI governance and cybersecurity discussions should be clearly documented in meeting minutes.
Additionally, management should:
hire qualified professionals in legal, IT and cybersecurity roles;
ensure that internal policies and IT systems are up to date;
train staff to detect and report incidents promptly; and
stay alert to changes in regulatory requirements.
Ultimately, a truly effective cybersecurity strategy for AI systems requires a holistic approach – one that integrates legal, technical, organisational and human factors to ensure resilience across the entire ecosystem.