What is AI?

Media attention to artificial intelligence has blurred the general understanding of this important technology. Such partial or limited understanding narrows the focus to only part of the phenomenon, and with it a substantial share of the opportunities the new technology brings; emerging risks may be overlooked too. A wider approach is necessary to integrate data and algorithms into corporate practice. This means that people matter as much as data and the technology that processes it: people carry out the work, handle AI-powered insights, and bear ultimate responsibility for the outcomes.

For clarity and consistency, we rely on the OECD definition (1):

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

 

This definition emphasises the high degree of autonomy that modern AI systems already possess. It also highlights the dependence of such systems on (a) the quality of input data and (b) the quality of algorithms. In addition, the overall quality of AI-generated insights depends on how the system is further trained.

 

Why should boards act?

 

The first reason

AI enables a strategic shift in what has long been considered the key bottleneck of corporate governance (2): the asymmetry of information between boards and management. In the traditional interaction, management is responsible for sharing information, and board members must ensure that the information is sufficient to set the future direction and keep risks under control.

With AI-powered assistance, boards can access a wider pool of information. AI also offers tools to process and analyse data far more deeply, enabling sharper insights and the detection of tacit or unusual trends. The information asymmetry that has long hindered board-management interaction can therefore be handled in a radically different way: board members can request and analyse more information in advance, while management can better anticipate what information will be requested. Better-prepared directors, working from more deeply analysed board packs, can raise sharper questions and run more focused board meetings. In this way, the recurrent call for boards to dedicate more time to strategy and less to operational issues finally receives a solid supporting framework.

A recent study by INSEAD and Wharton compared the performance of AI-powered and human boards (3). Six human boards, composed of members of INSEAD's Advanced Board Programme, received the same materials as a board of AI agents, where each agent was configured with memory systems, individual viewpoints, and the ability to adjust its goals over the course of the meeting. The working processes of the human and AI boards were compared, with consistent results: the AI board clearly outperformed the human boards across the eight governance criteria used in the study, such as decision quality and implementability, reliance on facts, preparation for the meeting, depth of insight, process discipline, and inclusiveness of discussion.

Does that mean that human perspectives lose their relevance to business reality? Not at all: the authors of the experiment emphasised the importance of human perception in interpreting facts, giving them meaning, and using context for better judgement. They also observed that AI handled the unknown well: where humans tend to look for similarities with the past and lock themselves into the known, AI remains more open to alternative scenarios.

The experiment offers some insight into the popular question of the future of boards in the AI era. On the one hand, the findings point to well-known flaws of current board practice: lack of preparation, reliance on gut feeling instead of knowledge of the facts, and ineffective board dynamics in which power games take precedence and inclusion remains fragmented or merely declarative. On the other hand, the study illustrates the enormous potential of the fact-based and less biased decision-making that AI assistance can provide. There is a clear need to bring the two systems together, and we are sure that AI will change the way directors work, pushing them towards deeper preparation, a greater appetite for facts and analysis, and more effective board dynamics.

The key takeaway? Human creativity is enormous. The boardroom deserves to be a new frontier for engaging and fully exploiting the human talent for openness and resilience.

 

The second reason

There is another important aspect to consider: the accessibility of the new technology opens the way for its wide and rapid adoption. Boards cannot wait and see how things develop. Simply looking around, we see AI already penetrating diverse fields and reshaping them: students summarise papers, programmers write code, business developers identify and approach prospective customers. All of this is now done differently with AI, meaning the "before-AI" approach will no longer work.

Accessibility changes how technology is used: lower costs boost demand among existing customers, new use cases emerge, and new categories of users join. The result is technological transformation.

This phenomenon is often referred to as the Jevons paradox (4). Examples include the transition to fuel-efficient cars, which radically increased overall fuel consumption, and the shift from postal correspondence to e-mail, which drastically intensified communication.
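The mechanism behind the paradox can be sketched with a toy calculation. All numbers below are illustrative assumptions, not figures from the cited study: when an efficiency gain cuts the effective cost per use and demand is sufficiently elastic, total consumption rises rather than falls.

```python
# Toy illustration of the Jevons paradox (hypothetical numbers).
# An efficiency gain halves the cost per unit of "useful work".
# With price-elastic demand (elasticity > 1), usage grows so much
# that total resource consumption rises instead of falling.

def total_consumption(base_usage, cost_per_use, elasticity):
    """Total resource consumed under a constant-elasticity demand curve."""
    usage = base_usage * (cost_per_use ** -elasticity)  # cheaper -> more usage
    return usage * cost_per_use                          # total resource spent

before = total_consumption(base_usage=100, cost_per_use=1.0, elasticity=1.5)
after = total_consumption(base_usage=100, cost_per_use=0.5, elasticity=1.5)

print(before)  # 100.0
print(after)   # ~141.4: usage nearly triples, so total consumption rises ~41%
```

With an elasticity below 1 the same calculation would show total consumption falling, which is why the paradox appears only where demand responds strongly to cost, as the article argues is the case for AI.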

The current pace of AI adoption implies a highly probable radical shift in the categories of users and use cases for AI across all spheres. Boards need to prepare their companies for such a shift. How might these changes affect the existing strategy, and whether, to what extent, and how should it be changed? How are the emerging risks being addressed, and how should they be?

Given the scope and speed of AI's penetration into the various fields of human activity, preparedness should already be at its highest. Boards should seek ways to proactively shape technology applications to serve the company's strategic goals, and should ensure safeguards are in place to protect the company's core against the new risks that accompany the mass adoption of AI.

To sum up, boards have the most important role in the corporate world, and they need the most powerful tools at their disposal to use the potential at hand to the full – this is the key reason why boards should proactively address the expansion of AI.

 

Where should boards act? 

It might sound obvious, but it should not be taken for granted: boards' attention to AI should address a number of action fields, each of them important:

  • Level of business environment

  • Level of organisation

  • Level of board functioning

  • Level of individual directors

The strategic outlook on the wider business environment was well addressed in the first GUBERNA guide on AI for directors (5): as a powerful technology, AI is capable of rapidly disrupting existing business practices. Boards therefore have to be alert to emerging opportunities, and understand and proactively scan for emerging trends to keep their companies prepared. Studies (6) reveal a lack of strategic coherence and unclear value dynamics, calling for a clear view of how AI fits into the overall strategy or transformation agenda.

At the same time, as AI becomes inherently present in the daily digital environment, it influences the way people work within an organisation. Even when companies take no specific steps to integrate AI into their business processes, AI still sneaks into the organisation through shadow use (7), i.e. uncontrolled applications for diverse purposes. Shadow use of AI severely hampers an organisation's ability to detect, manage and mitigate risks, and increases its vulnerability in a highly competitive environment.

A 2025 Melbourne Business School study (8) surveyed more than 30,000 employees in 47 countries across the global economic regions. The findings illustrate the power of AI: more than half of respondents reported gains in efficiency, information access, innovation and work quality. At the same time, almost half admitted to having uploaded sensitive company or customer information into public generative AI tools. Moreover, most employees had avoided revealing their use of AI (61%), or had presented AI-generated content as their own (55%).

To address shadow use of AI effectively, organisations need a clear vision of why and for what purposes they are ready to use AI, how far they want to go, and how often to run experiments. An effective feedback loop is crucial for tracing the effects of AI-powered processes, solutions and systems.

Paradoxically, shadow use also illustrates the untamed potential of artificial intelligence, and the call for innovation that comes from inside companies. Employees often pay for AI assistance themselves (9). The pressure for efficiency pushes employees to find ways to perform: the push comes from the top, but often without the empowerment to accompany it. In a recent poll (10), 83% of IT and business professionals reported AI use within their organisations, yet only 28% of those organisations had a comprehensive AI policy in place.

In fact, the term "shadow" reflects a lack of clarity about which tools and approaches are legitimate for meeting ever-rising expectations. It also highlights the contrast between the initiative staff are taking and the lack of initiative from management when it comes to using data and algorithms in everyday working processes. This dilemma raises an important question for boards: where does responsibility start – with taking the initiative, or with closing one's eyes to the rising tide?

Similar logic applies to the way boards organise their own work, and to the way individual directors prepare for board meetings and execute their role: boards need to define their appetite – how far they are ready to go, and for what reasons – and then agree on a shared approach and the processes to follow.

AI can empower various processes within an organisation, from assisting individuals to redesigning workflows and boosting cross-functional processes. Defining the desired appetite for this endeavour should be one of the board's key priorities. In doing so, boards should consistently explore the innovative potential of the new technology and incorporate it into a wider transformative strategy. Boards have to decide where to engage, and what and how often to oversee.


When should boards act?

The importance of clarity about the need for, and place of, AI in a company, as set out in the section above, implies that boards must ensure this clarity first and act afterwards. Clarity is crucial.

A recent Google analysis of nearly 5,000 technology professionals illustrates the magnifying effect of AI systems on organisational performance: strengths build up, but so do weaknesses. Simply adding an AI system to a team that already struggles with unclear expectations and processes will only amplify the struggle (11).

It is thus critical not to rush into action for its own sake, without understanding the reasons for it. It is equally decisive to ensure a shared understanding of how the change will affect existing structures and processes. Otherwise, AI, like a mirror, will faithfully reflect the poor preparation.

Last but not least, it is important to bring the whole team on board with AI policies that are shared and integrated throughout the company. AI changes the very concept of responsibility in the workplace, as each employee holds individual responsibility for the correct and transparent use of AI-powered tools. To avoid one bad apple spoiling the barrel, every team member has to be upskilled to the necessary level, understand the expectations, know the limits, and adhere voluntarily to the agreed practices and tools.

Do you know the current state of AI use within your company?

 

How should boards act?

While ensuring clarity, it is important to agree not only on what has to be ensured but also on how to ensure it. Compliance adds further nuance to this task. If treated as an internally driven need, it requires both action and evidence. In that case, the link between strategic focus and precise monitoring frameworks becomes crucial because of the character of most AI-related requirements: they are principle-based (e.g., fairness, robustness, accountability, transparency) and also context-sensitive. The same requirement can therefore imply different controls, depending on the use case, risk profile, or stakeholders involved. Some useful tips on how to bring regulation down to earth and ensure meaningful compliance are shared in the AI Governance Newsletter series (12).

At every step of the AI transformation, it is important to remember that algorithms differ from humans, which necessitates a review of current working processes to give data-driven assistants an effective place. They cannot replace humans, but they can empower human creativity and reinforce the human ability to generate new insights, to interpret differently, and to rely on data-driven preparatory work. Existing workflows, processes and tasks therefore have to be reviewed to identify a place for AI, and to define new roles for human supervision and for the human use of AI-generated materials.

How should a board start its AI journey? These fundamental questions set up the starting framework:

  • Is there a clear vision and holistic strategy for the organisation?

  • Does it reflect the key aspects of the company’s business?

  • Do we have a clear understanding of 

    • the existing processes, their enablers and limitations?

    • where AI could be immediately injected?

    • what processes or functions need additional efforts to prepare for AI boost?

Given the diversity of use cases and levels of advancement in AI penetration, the "how" question has to be adjusted to the board's specific needs, including how advanced it already is in its use of AI. GUBERNA will continue publishing specific tips and advice on dealing with AI at all levels of the board agenda – keep following!

 

When it comes to AI, boards face rapid change that brings radically new opportunities. Directors need to be at the leading edge of knowledge to see these opportunities, and to mobilise their expertise and experience to seize them. As everything comes at a cost, AI risk awareness and mitigation should also enter boards' radars and will require action in turn. Top skills, the best expertise, and true openness are crucial to ensuring that this technological shift boosts the company's sustainable growth.

Sources

[1] Explanatory Memorandum on the Updated OECD Definition of an AI System. OECD Artificial Intelligence Papers, No. 8, March 2024, p. 4.
[2] See, for example: Elbadry, A., Gounopoulos, D. and Skinner, F. (2015). Governance Quality and Information Asymmetry. Financial Markets, Institutions & Instruments, 24: 127-157. https://doi.org/10.1111/fmii.12026
[3] Yakubovich, V., Shekshnia, S., Yashneva, E. and Sullivan, K. Can AI Boards Outperform Human Ones? Harvard Business Review, November 5, 2025.
[4] York, R., & McGee, J. A. (2016). Understanding the Jevons paradox. Environmental Sociology, 2(1), 77–87. https://doi.org/10.1080/23251042.2015.1106060
[5] AI for directors: a concise practical guide. GUBERNA, 2024, pp. 8-11.
[6] See, for example: The state of AI in 2025: Agents, innovation, and transformation. McKinsey Survey, November 2025. A wider OECD framework study, The Adoption of Artificial Intelligence in Firms (OECD), shares similar conclusions.
[7] What is shadow AI? IBM.
[8] Trust, attitudes and use of artificial intelligence: A global study 2025. Melbourne Business School, 2025.
[9] The Hidden AI Workforce: 29% of Employees Pay for Their Own AI Tools While Bosses Provide No Training. November 2025.
[10] ISACA 2025 AI Pulse Poll. ISACA, 2025.
[11] 2025 DORA Report: State of AI-Assisted Software Development. Google Cloud Blog, 2025.
[12] AI Governance Library Newsletter #10: Requirements, Controls, and Everything Else They Forgot. September 2025.