Businesses urged to act now to ensure compliance with EU AI Act

The time to act is now! This is the clear message from DNV Digital Assurance Director, Frank Børre Pedersen. While the EU AI Act is only expected to be enacted at the end of 2023 and come into full force two years after that, organizations should already be planning for its consequences.

Towards trustworthy industrial AI systems

In response to mounting technical and regulatory challenges related to AI, DNV recently released a Recommended Practice (RP) on the Assurance of AI-enabled systems. In this interview, Dr Pedersen provides insight into the changing AI landscape and how DNV’s recommended practice can help you build trustworthy and compliant AI.

Contact us:

Access the Recommended Practice (RP) on AI and other related RPs and services

The rapidly evolving AI landscape

Frank Børre Pedersen has a PhD in Physics and over 25 years’ experience in the oil and gas industry. His research career started at a time when what we now call AI was more often referred to as ‘data-driven approaches’, ‘expert systems’, ‘interpolation’, or ‘regression’. The terminology may have changed, but the core concept remains the same: using data to build models which in turn create new data points that support decision-making. Two things that have changed dramatically, however, are data availability and computing power. We live in a time when sensors collect data at an unparalleled scale. Add to that exponentially growing data-processing capabilities, and you have the requisites for algorithms with billions of variables – the hallmark of sophisticated AI.

Advanced AI solutions, such as machine learning and generative AI, offer many advantages: they are extremely efficient in carrying out tedious tasks; they can perform work in environments that are unsafe for humans; and they continuously improve, so that even an initially poor algorithm can eventually become efficient if provided with sufficient data. This makes them highly valuable – and increasingly indispensable – to sectors ranging from finance and healthcare to transport and energy production. At the same time, like any powerful technology, they can be used irresponsibly, unethically, or with malicious intent. Furthermore, machine learning is inherently opaque and unpredictable. Unlike an ordinary computer algorithm, which is essentially a set of instructions on how to carry out a procedure, machine-learning software is a set of instructions on how to learn and alter itself to better reflect and interpret the data it receives, a process which is not fully explainable even to the software developers.

The need for regulation – EU leads the way

When such AI algorithms are integrated into systems and acquire the agency to trigger and influence real-world events, you end up with a powerful system whose behaviour is somewhat unpredictable and thus requires governance to ensure that it serves its intended purpose in a safe and controlled way. It is to protect humans and society from such new and hard-to-predict risks that the EU has set out to enact the world’s first law that specifically targets AI-enabled systems – the EU AI Act. Dr Pedersen emphasizes that the intention of the Act is not to reduce the use of AI; in fact, the ambition is to create a level playing field by taking regulatory uncertainty out of the equation, and thereby remove barriers to safe and responsible AI innovation and deployment.

Given the broad and constantly changing range of AI solutions and applications, the law defines AI equally broadly: essentially any data-driven system that is deployed in the EU, irrespective of where it is developed and where it sources its data, will fall under its purview. With such a sweeping definition, the EU has adopted a risk-based approach to regulating AI, categorizing it into low-risk, high-risk, and unacceptable AI. Unacceptable AI comprises solutions that violate EU values or fundamental human rights – for example through discrimination, social scoring of citizens, or manipulation of children. Such applications are completely banned. Low-risk AI includes applications such as chatbots, in which case it is sufficient to inform users that they are interacting with a chatbot rather than a human. The thorny segment is the one in between: high-risk AI that is not banned outright. Businesses whose AI falls into this category must demonstrate compliance with a set of requirements through a so-called conformity case.

Bridging the gap: DNV’s Recommended Practice 

Complying, however, is easier said than done. The EU AI Act can be hard to understand, even for experts. Building a specific conformity case may be even harder since the law is generic and contains many implicit and explicit requirements that are technically difficult to meet.
In response to these technical and regulatory challenges, DNV has released a Recommended Practice (RP) on the Assurance of AI-enabled systems. By bridging the gap between the generically written law and system-specific conformity cases, this RP provides affected stakeholders with a practical interpretation of the EU AI Act that allows them to identify the applicable requirements and collect evidence substantiating their conformity claims.

While the RP builds upon DNV’s nearly 160 years of third-party verification in high-risk sectors, the nature of AI-enabled systems has necessitated a completely new approach to assurance. Whereas conventional mechanical or electrical systems degrade over years, AI-enabled systems change within milliseconds. Consequently, a rubber stamp, which normally has a five-year validity, could be invalidated with each collected data point. This requires an even more rigorous assurance methodology as well as a thorough understanding of the intricate interplay between system and AI, and how either of them could fail.

Trustworthy AI beyond compliance

Dr Pedersen goes on to clarify that the RP covers a broader scope than just the EU AI Act. It not only addresses compliance, but also ensures that the AI and the systems it is embedded in perform as intended. Moreover, the RP is not a stand-alone recipe for compliance but fits into a greater whole. A high-quality AI system requires high-quality building blocks – that is, data, sensors, algorithms, and digital twins. Therefore, DNV offers an updated suite of RPs covering each of these digital building blocks. The RP on the Assurance of AI-enabled systems is the latest addition, tying all of the other pieces together from an AI perspective.

Now is the time to act

When asked to give potentially affected parties one piece of advice, Dr Pedersen strongly encourages them to immediately read up on the EU AI Act, seek advice, and assess whether they will be affected. On a final note, he underscores that, in addition to being a necessary ticket to trade, early preparation for compliance will likely give businesses a competitive edge. DNV remains committed to helping clients build trust in their assets as they become increasingly reliant on digital and data-driven technologies.


Creating a secure and trustworthy digital world

Organizations that struggle to demonstrate the trustworthiness of AI to their stakeholders can close the trust gap with DNV’s new services and set of recommended practices for the safe application of industrial AI and other digital solutions.


DNV's director of AI research on the EU AI Act

VIDEO: Get an understanding of the fundamentals of AI, what the EU AI Act will cover, and what companies can do to prepare.


The EU AI Act and your company

The use of artificial intelligence (AI) in the European Union will be regulated by the EU AI Act, the world’s first comprehensive AI law. With a broad definition of AI, many businesses will be affected and should start preparing for compliance.


Artificial intelligence

Building trust and compliance into AI-enabled systems