Artificial Intelligence and the fight against cybercrime
Cybersecurity is becoming an ever more critical concern for organisations worldwide, as cyber-attacks continue to rise in frequency and sophistication. Our research partner, A&O Shearman, has prepared an overview of the key challenges and considerations, including the role of Artificial Intelligence as a defence tool, the risks it introduces, and the legal frameworks that organisations must navigate.
The ever-increasing importance of cybersecurity
Cyber-attacks keep rising. The World Economic Forum reports that, in the past two years, the average number of attacks grew by more than half [1]. Cybersecurity organisations estimate that cybercrime may cost the global economy USD 10.5 trillion in 2025 alone [2]. At the same time, the world faces a shortage of around 4 million cybersecurity professionals [3].
These are not just abstract numbers; they directly affect companies. Businesses risk financial loss, operational disruption and lasting reputational damage, and their board members may face personal exposure if they are not prepared.
For leadership teams, cybersecurity has become a core governance and risk issue. Cybersecurity must be a board-level priority with clear accountability, adequate resources and effective prevention and remediation measures in place.
The role of Artificial Intelligence
Artificial Intelligence (AI) is reshaping the cyber landscape and is today being used as an offensive cybersecurity threat. On the one hand, cybercriminals use AI to scale their social engineering and fraud activities. On the other hand, implementing AI within companies introduces new vulnerabilities and attack opportunities for threat actors.
But AI may also be deployed as a cyber-defence tool. It can quickly analyse vast amounts of data, detect trends, reduce detection and response times and help identify novel attack patterns. However, it comes with risks and vulnerabilities that must be addressed through proper controls, oversight and expertise to avoid legal and technical complications.
Risks inherent to the use of Artificial Intelligence
1. AI may facilitate and support cybersecurity resilience, but it also introduces new risks that executives must manage. These risks include:
Personal data leaks and misuse: AI tools used to monitor or protect IT systems may process personal data. The use of AI as a defence tool must therefore be accompanied by proper governance and access control rights.
IT system integrity: implementing AI as a defence tool may introduce new IT risks and vulnerabilities. Proper technical due diligence should be performed and, where vulnerabilities are identified, appropriate remediation measures implemented.
Increased complexity: deploying AI for cybersecurity requires professionals who also understand AI risks. Tailored, regular and up-to-date training should be provided to staff using AI as a cyber-defence tool.
2. Leadership should treat these risks as strategic: assess use cases, set clear guardrails and ensure accountability for secure and legally compliant AI deployment. Companies should ensure that their use of AI as a cyber-defence tool complies with the relevant EU cybersecurity framework.
Legal cybersecurity framework
In the European Union, several regulatory instruments impose cybersecurity obligations that often also apply to the use of AI as a cybersecurity tool. These include:
The Artificial Intelligence Act (AI Act) (for which we refer to our previous blogpost): when using an AI system as a cyber-defence tool, it is essential to correctly classify it within the AI Act’s risk-based framework. Should it be classified as high-risk, it will be necessary to verify that it was trained on proper and adequate data, that it is continuously updated to address new risks and that it complies with the AI Act’s various security-related obligations (accuracy, robustness, cybersecurity, etc.).
The Network and Information Systems Directive (NIS 2) (also explored in a prior blogpost): entities in certain sectors and above a certain size threshold must comply with NIS 2 cybersecurity rules. With regard to AI tools used for cyber defence, this implies assessing cybersecurity across the tool’s supply chain as well as the adequacy of the tool in relation to the identified cyber risks. When a vulnerability is identified in the AI tool, it must be notified to the Belgian Centre for Cybersecurity.
The Regulation on Digital Operational Resilience for the financial sector (DORA): DORA is the equivalent of NIS 2 for financial entities and their specificities. Financial entities deploying AI cyber-defence systems must verify that these are adequate in relation to the identified cyber risks. Where the AI system is provided by a third party, due diligence on that party must be undertaken and certain contractual arrangements put in place.
The General Data Protection Regulation (GDPR): AI used as a cybersecurity tool will likely process personal data, so you will have to comply with the GDPR’s general personal data processing principles and obligations, such as transparency and lawfulness of processing and the implementation of appropriate safeguards, as well as with the more specific provisions on automated decision-making.
Management’s role
AI-related cyber risks require a holistic approach that combines strategy, governance, people, processes and technology. Management should ensure that the company has clear practices that fit its risk profile and regulatory obligations. In particular, it should undertake the following efforts:
Take a cautious approach to adopting new AI tools. Assess risks before deployment. Ensure that adequate due diligence is performed on external suppliers to check their capacity to provide a secure tool that complies with your legal obligations and to quickly detect and remediate vulnerabilities and external threats.
Implement appropriate technical and organisational measures. Tailor controls to your use of AI as a cyber-defence tool. Ensure robust access management, data governance, monitoring and change control.
Invest in cybersecurity. Allocate sufficient budget for qualified legal, IT and cybersecurity personnel and adequate IT resources to assess the risks of AI cyber-defence tools. Build internal capability in AI risk and secure development practices.
Foster a culture of cyber-awareness. Train employees on secure use of your AI systems and data. Reinforce accountability across all levels of the organisation.
Stay up to date and comply with the evolving cybersecurity and AI regulatory landscape, recognised standards and best-practice frameworks. Align with them proactively and make sure regular training and updates are provided within your organisation.
Plan for incidents. Maintain incident planning, response and recovery strategies. Include scenarios specific to the risks associated with your cyber-defensive AI tools and prepare back-up defence strategies should they be compromised.
The Authors
-
Peter Van Dyck
A&O Shearman
-
Marie Barani
A&O Shearman
-
Igor Staelens
A&O Shearman