In this article, you will read the most important insights and discussion points from GUBERNA's National Member Forum held on June 6, 2023. This year's Forum critically examined the hype surrounding Artificial Intelligence (AI) while highlighting strategic opportunities created by AI and its potential impact on the roles of board members. Based on feedback from our attending members, the event was highly appreciated. The introduction, keynote speeches, and panel discussion were intellectually stimulating and provided plenty of food for thought.

 

Is Artificial Intelligence going to eat the world?

In her introductory speech, GUBERNA's CEO, Sandra Gobert, dispelled some myths surrounding AI while outlining how it will affect the roles of board members. In 2011, Marc Andreessen of the VC firm Andreessen Horowitz stated that software was “eating the world,” suggesting that software services were increasingly replacing jobs and products that formerly relied on capital equipment or human labor. This prediction proved accurate in sectors such as entertainment, travel, telecommunications, finance, marketing, advertising, and human resources. Similarly, it is now predicted that AI will replace many jobs currently performed by humans. Board members must remain vigilant about the opportunities and risks AI may bring to their industry while maintaining a critical perspective on the current hype cycle.

To demonstrate the current hype, Sandra Gobert highlighted how Geoffrey Hinton, one of the pioneers of artificial neural networks, stated in 2016 that neural networks would replace radiologists “within five to ten years”. However, Gobert presented figures showing an increased demand for radiologists and even a global shortage of radiologists since Hinton's prediction. Hinton's mistake was extrapolating AI's ability to automate certain subtasks of a job to the job as a whole. In fact, industry statistics show that highly automated countries such as Germany and South Korea have some of the lowest unemployment rates globally, suggesting that the relationship between automation and job losses is far from clear.

Relive the event through the aftermovie


AI will complement, not substitute for, human board members

While maintaining a healthy skepticism about possible hype, board members should be aware of the opportunities this technology offers and the risks it might pose. Two fundamental questions that board members should consistently ask pertain to sustainable value creation and control. Firstly, board members need to understand how this technology creates value for their company, and they need to be aware of, measure, and integrate the environmental and societal effects caused by AI systems. Secondly, board members must have a view on how to control the risks associated with using this technology. They need to understand the causes and magnitude of these risks, the market segments or user groups where these risks appear, and how to anticipate and manage them. These questions about value and control arise in each of the three core roles performed by board members: strategy formulation, risk monitoring, and leadership.

How AI affects the strategy role

Board members will need to challenge executives on the various ways in which AI affects the organization's strategy. AI can impact the competitive position, help achieve the mission and strategy, or disrupt the competitive landscape. The introduction of AI in the workplace needs to be done carefully to gain acceptance from the staff. It can affect the customer interface by creating value for certain customers or causing harm to others. AI has the potential to improve operational models by increasing productivity and cost efficiencies, but this must align with the overarching strategy.

How AI affects the audit role

These strategic opportunities necessitate an expanded audit role for the board with respect to AI. The adoption of an AI ethics code can ensure proper development, deployment, and use of AI. Auditing the algorithms that are used becomes crucial. If AI is used for automated financial controls and reports, understanding the inner workings of these algorithms becomes essential before decisions are based on them. Ownership of the data used by these algorithms must be clarified. Finally, AI introduces new cybersecurity risks, as malicious actors can exploit it to disrupt operations. However, AI can also be used to manage and mitigate these risks, making it a double-edged sword.

How AI affects the leadership role

Lastly, AI will affect various aspects of the board's leadership role. Selecting AI technology is similar to hiring executives to whom decision-making authority is delegated. Due diligence is necessary when “hiring” AI, just as it is when hiring executives. If decision-making powers are delegated to AI, safeguards and tripwires need to be established to prevent overstepping of mandates. Additionally, AI will require coaching, similar to how executives undergo lifelong learning trajectories. Continuous improvement will be sought through feedback loops between AI and the individuals involved.

In conclusion, Sandra Gobert emphasized the need for the board to adopt a structured approach to scan for strategic opportunities, market trends, and competitive threats. Additionally, the board should establish guardrails to detect irresponsible deployment of AI.


Keynotes on AI ethics and the competitive landscape

Our first keynote speaker, Professor Rob Heyman (from the research institute imec-SMIT-VUB), has decades of experience in applied and industry-driven research projects. In his speech, he focused on how to translate ethical principles into actionable corporate guidelines for executing strategy and monitoring potential harm. Heyman cautioned against relying solely on high-level AI principles and advocated “legal design thinking”, which develops practical guidelines that bridge the gap between abstract ethical principles and technological and business practices. He also highlighted the unexpected and creative ways in which people use technology, emphasizing that there is no one-size-fits-all approach to ethical AI use. AI ethics must simultaneously consider its impact on customer value propositions, financial design, and the design of internal business processes. A feedback loop should ensure continuous learning and growth across these three dimensions.


Our second keynote speaker, Cristina Caffarra, is a highly respected antitrust expert from Keystone Europe. In her speech, she raised concerns about market dominance by a few software giants. Given that the largest technology companies currently control the foundational models of AI, board members need to be vigilant against premature competitive closure and customer lock-in effects within the industry. Caffarra criticized Silicon Valley's focus on hypothetical existential risks of AI in the distant future, highlighting instead the need for a critical examination of the present harms caused by an AI landscape dominated by a select few firms. This includes scrutinizing data collection and processing practices, the exploitation of creative works, the disregard for the human labor involved in building and refining these models, and the amplification of biases through the uncritical use of certain datasets.

Panel debate

Following the keynotes, we hosted a lively panel debate on key considerations for board members regarding AI. Joining keynote speakers Rob Heyman and Cristina Caffarra were Florence Bosco from BVI.eu, David Dab from Microsoft BeLux, and Katrin Geyskens from Capricorn Partners.

 

“High-quality and comprehensive datasets will become a goldmine for companies that can harness them.”

 

Florence Bosco emphasized the importance of AI in drug development and diagnostics. She highlighted that AI enables better-informed decisions, significantly impacting the average 12-year development time and low success rate of new drugs.


While she didn't foresee significant risks from bad actors using AI for malicious purposes, she highlighted potential problems in the doctor-patient relationship when AI is introduced. Skewed data used to identify target populations for medicines can have detrimental effects on patient health. This serves as a lesson for other sectors, emphasizing the need for organizations to avoid developing services based on biased data that could cause unintended harm. For that reason, boards must increasingly monitor the auditing of datasets before and after system deployment.

David Dab proposed a cooperative approach between humans and machines, suggesting that humans will remain “in the loop” for critical decisions even with AI assistance. AI will create a range of different job opportunities rather than replacing humans entirely. Continuous learning is crucial to prevent leaving people behind. While the exact impact on future jobs remains to be seen, even high-level expertise roles like medical doctors are expected to integrate AI into their work routines. Doctors who ignore AI will be surpassed by those who embrace it.

 

“We are fighting the digital giants with plastic spoons.”

 

The discussion returned to competition issues, with Cristina Caffarra expressing concerns about the power imbalance caused by the dominance of large market players with the vast resources required to invest in AI hardware and software. She stressed that more than soft law and self-regulation will be needed to govern this domain effectively. Rob Heyman reminded the audience that even ChatGPT was developed with the assistance of numerous workers from low-wage countries, highlighting the ethical considerations surrounding AI development and implementation. Such ethical deliberations will continue to shape board decisions in the foreseeable future.


Katrin Geyskens provided a positive message, stating that there are still market opportunities for European initiatives amid the dominance of Silicon Valley giants. The quality and quantity of available data are crucial for future success in any market niche. There is also a growing need for greater AI literacy, not only among boards but across all levels of social and economic significance. AI should not remain a black box; its processes must be transparent and rational.

Panel moderator Olivier Braet (GUBERNA) concluded the afternoon by summarizing the evolving role of board members across successive generations of AI. As autonomous intelligence becomes prevalent, directors will increasingly rely on strategic decisions based on predefined instructions and algorithms. The certification of these systems through new rules, regulations, and standards will gain importance. Additionally, there will be a symbiotic interaction between people and machines, each coaching and fine-tuning the other.

As with previous technological innovations, there will be a search for a new balance between humans and machines. When cars appeared in cities in the early 20th century, pedestrians' freedom of movement was restricted by zebra crossings and traffic lights, and they were no longer allowed to cross streets wherever they pleased. Later on, speed limits were introduced for cars. Achieving a similar equilibrium between machines and humans will be the main challenge in the age of autonomous AI.
