
The new EU Regulation for AI is here, will you be ready for an AI audit—part one

Nayeem Syed

26 Apr 2021

Image Credit: REUTERS/Kevin Coombs


The EU has shown again that it is a leader in generating laws, but we will need to wait to see whether this effort helps the Union market generate more leading and trusted AI technologies.  It does, however, show once more that regulating frontier technologies is not mission impossible, merely mission difficult.

On 21 April 2021, the European Commission released a proposal for a regulatory framework to help mitigate the risks of certain high-risk forms of artificial intelligence (AI) throughout the whole AI system lifecycle. It had previously suggested a sectoral focus but has decided the framework should not be so limited. It is therefore potentially as expansive and impactful as the General Data Protection Regulation (GDPR) was for data processing in the Union. It also allows us to see how the rest of the world may learn from, and broadly follow, the EU in regulating AI.

The Commission has sought a balanced approach: AI placed on the Union market that is likely to pose high risks to fundamental rights (enshrined in the EU Charter of Fundamental Rights) must be safe and respect existing laws, but innovation must not be disproportionately hindered. Certain specific uses are prohibited entirely.  For high-risk AI systems (listed in Annex III, and including, for example, systems used to evaluate the creditworthiness of persons), the Regulation will establish a European Artificial Intelligence Board supported by an expert group; impose extensive conformity assessment, registration and compliance obligations on providers (and users); and add post-market surveillance requirements, all backed up with hefty fines (up to six percent of total worldwide annual turnover). The initial list of higher-risk use cases will likely be added to by the Commission.  For AI systems other than high-risk systems, only limited transparency obligations are imposed, and the Regulation will encourage and facilitate the drawing up of voluntary codes of conduct by providers or groups representing them.
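
To make the tiering concrete, here is a minimal triage sketch in Python. The category names and example entries are placeholders paraphrasing the proposal's structure, not the legal text, and any real classification decision would need legal review.

```python
# Illustrative sketch only: the entries below paraphrase the proposal's
# structure and are NOT the Regulation's actual legal text.
PROHIBITED_PRACTICES = {
    "social_scoring_by_public_authorities",
    "subliminal_manipulation_causing_harm",
}

# A stand-in for the Annex III list of high-risk use cases.
HIGH_RISK_USE_CASES = {
    "creditworthiness_evaluation",
    "recruitment_screening",
    "biometric_identification",
}

def triage(use_case: str) -> str:
    """Return the obligations tier a use case would fall into."""
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited: may not be placed on the Union market"
    if use_case in HIGH_RISK_USE_CASES:
        return "high-risk: conformity assessment, registration, post-market surveillance"
    return "other: limited transparency obligations, voluntary codes of conduct"

print(triage("creditworthiness_evaluation"))
# high-risk: conformity assessment, registration, post-market surveillance
```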

The Commission says it proposes a future-proof definition of AI, but it is always tough to avoid being outpaced by innovation. It also seeks to encourage innovation by suggesting that Member States establish regulatory sandboxing schemes.

This new framework will be subject to change before it is finalised, but ultimately it will require providers to determine whether their systems are high-risk and to complete pre-launch compliance assessments. Regulated firms will be well used to this, but now all providers will need appropriate governance and control frameworks to manage algorithmic risks effectively. Costs of compliance have been analysed as part of the impact assessment phase, but they will likely run higher unless firms' existing infrastructure can be leveraged efficiently and standardised, adoptable processes are established.  This resourcing point also applies to Member States, who will need to implement and operate the new requirements.

Providers will need a risk management system and a quality management system, and must notify any serious incidents within 15 days. AI systems that continue to learn after deployment may need ongoing conformity assessments.
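
As a minimal sketch of what operationalising that reporting window might look like, assuming calendar days counted from the provider becoming aware of the incident (the final text should be checked for the precise trigger and day-counting rules):

```python
from datetime import date, timedelta

# Serious-incident reporting window per the proposal; assumed to be
# calendar days from awareness for the purposes of this sketch.
NOTIFICATION_WINDOW_DAYS = 15

def notification_deadline(aware_on: date) -> date:
    """Latest date by which a serious incident must be notified."""
    return aware_on + timedelta(days=NOTIFICATION_WINDOW_DAYS)

def is_overdue(aware_on: date, today: date) -> bool:
    """True if the notification window has already closed."""
    return today > notification_deadline(aware_on)

# Example: a provider became aware of an incident on 1 June 2021.
print(notification_deadline(date(2021, 6, 1)))            # 2021-06-16
print(is_overdue(date(2021, 6, 1), date(2021, 6, 20)))    # True
```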

What is clear is that the entire AI value chain will need to focus on algorithmic risks and comply with both soft requirements and hard rules, so that each firm can demonstrate it manages those risks well and remains compliant with its obligations under the Regulation and existing analogue laws.

Member States are required to handle enforcement, so achieving consistency will be tricky, of course, as we have seen with the enforcement of GDPR.

While EU policymakers have taken the lead, the US is likely not far behind. In March 2021, five US federal financial regulatory agencies jointly launched a consultation with financial institutions on AI governance, risk management, and controls, to determine whether clarifications would help their compliance with applicable laws and regulations, including those related to consumer protection.

If your work involves supporting AI, you should look to follow the debate in the coming weeks and months.

It is clear that legal, technology and compliance professionals will need to help advise on AI deployment. Next, we will discuss how lawyers and technologists can work together to help their provider and operator organisations develop appropriate internal governance frameworks, so they can comply with their legislative, regulatory and contractual obligations and be ready for an AI audit or compliance questionnaire.

AI in the enterprise

Algorithms may be as simple as rules-based automation, but they are increasingly sophisticated, using learning techniques to support human decision-making processes. The Regulation is in part a reaction to growing general concern that such techniques may reinforce biases, and to questions of accountability for decisions or outputs that may determine social, economic or ethical outcomes. As Geoffrey Horrell, Global Head of Innovation Labs at London Stock Exchange Group, explains, “Data Scientists have always applied tests to measure statistical bias in their models, extending this rigorous approach to broader definitions of fairness and explainability is a logical step as adoption of AI grows, and the industry matures”.
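
As an illustration of the kind of statistical test Horrell describes, here is a minimal fairness check using the selection-rate (demographic parity) ratio. The metric choice, the sample decisions and the 0.8 review threshold are all illustrative assumptions, not anything the Regulation prescribes.

```python
# Minimal fairness-test sketch: demographic parity (selection-rate) ratio.
# Data, groups and the 0.8 threshold are illustrative assumptions only.

def selection_rate(decisions: list[int]) -> float:
    """Share of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def parity_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1]

# Hypothetical loan-approval decisions (1 = approved) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")                         # 0.40
print("flag for review" if ratio < 0.8 else "within threshold")
```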

Organisations that use these techniques, ostensibly for good reasons such as speeding up decisions, will likely need to demonstrate to regulators or clients that the use of such techniques is appropriately governed and carried out with suitable oversight. Ultimately, executives and boards are accountable for ensuring this is done.

Over these two parts, we will consider some of the questions that organisations should be asking in order to be better equipped to meet the risks being identified through the use of such techniques.  The second part will cover the work needed to identify mitigations or solutions that could be proposed to treat those risks appropriately.

Discovery

When designing any framework, it is critical that there is upfront investment in compiling and maintaining an accurate view of the use cases across the enterprise, including those being contemplated by various groups, whether in small labs or within larger but less technical sales and product development groups.

It is important to identify where the various applicable algorithms are being used.  This will require an investigation with the various teams that manage the different corporate and product systems, and it may take some time to do properly: a simple survey is likely to draw only superficial responses.

It is essential to try to understand the precise purpose for which the algorithms are being used. It is also vital to understand the different types of data used as inputs. Each may have its own governance implications and be treated differently in different jurisdictions: employee data, financial data, product data, and so on.

It is important to identify the various interested stakeholders and their respective concerns, including whether they have any enforceable rights or legitimate expectations. For example, customers may seek certain information and assurance before purchasing your product. Employees will be concerned about how their personal information is used, and that use must comply with codes of conduct and employment law. Specific regulators may have broad rights to require that the AI is explained to them, and to ask whether relevant end-users have sufficient ability to seek details and question results.

All of this will allow an assessment of the potential risks arising from each usage, data type and stakeholder; these may be reputational, operational or financial.  As with any complex or new risk assessment, a high degree of cross-functional evaluation and collaboration is required.
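
One lightweight way to capture this discovery output is a structured register entry per use case. The schema below is an illustrative assumption, not a prescribed format; fields would be adapted to each organisation.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """One row in an illustrative enterprise AI use-case register."""
    name: str
    owner_team: str
    purpose: str                    # the precise purpose the algorithm serves
    data_inputs: list = field(default_factory=list)      # e.g. employee, financial, product data
    jurisdictions: list = field(default_factory=list)
    stakeholders: list = field(default_factory=list)     # customers, employees, regulators
    risk_categories: list = field(default_factory=list)  # reputational, operational, financial

# Hypothetical entry for a lending use case.
record = AIUseCaseRecord(
    name="credit-scoring-model",
    owner_team="Consumer Lending",
    purpose="evaluate creditworthiness of applicants",
    data_inputs=["financial data", "employment data"],
    jurisdictions=["EU", "UK"],
    stakeholders=["customers", "prudential regulator"],
    risk_categories=["reputational", "financial"],
)
print(record.purpose)
```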

Cross-functional teams therefore need to review which governance mechanisms employed today relate to the new AI usage, and in particular whether existing data and security assurance or attestation teams and processes can be leveraged.

They should then look to estimate their current maturity level in governing the use of algorithms, for example by rating themselves on a scale of none, ad hoc, simple, repeatable or mature, as sketched below.
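
Encoding that scale directly makes self-ratings comparable across teams; a minimal sketch, with hypothetical team names and an assumed minimum bar:

```python
from enum import IntEnum

class GovernanceMaturity(IntEnum):
    """The self-rating scale suggested above, ordered so levels compare."""
    NONE = 0
    AD_HOC = 1
    SIMPLE = 2
    REPEATABLE = 3
    MATURE = 4

# Hypothetical self-ratings per team.
ratings = {
    "lending": GovernanceMaturity.REPEATABLE,
    "hr-screening": GovernanceMaturity.AD_HOC,
}

# Assumed minimum bar for this sketch; each organisation sets its own.
minimum = GovernanceMaturity.SIMPLE
gaps = [team for team, level in ratings.items() if level < minimum]
print(gaps)  # ['hr-screening']
```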

Change scenarios

Now that we have a validated view of our governance maturity level, we can turn to what we would change to meet a minimum requirement. Initially, we would typically form an opinion on the tactical response needed to meet short-term deadlines.

For example, with customer assurance, it is vital to meet inbound customer requests around products with responses that inspire confidence.

There must also be early engagement and consistent alignment with the relevant legal and compliance teams to help navigate the various regulatory dimensions across numerous jurisdictions. Those teams can also help prepare for any legal or regulatory inquiry, whether employment, privacy or product/domain-specific; each will have a different focus and a different authority to demand compliance.

One key to effectiveness is working with business leads to turn risk into a value generator for the business, by using algorithms innovatively and showing that we use them thoughtfully. Being able to demonstrate how we are seeking to achieve explainability and transparency will help the investment case and the marketing of the underlying product.

Enterprise Ready

As we develop scale in this area, we would look to develop our longer-term, permanent view of how to operationalise our governance at an enterprise level: in short, defining how we embed a comprehensive, strategic and leading response.

The aim would be to establish a suitable tiered structure that operates a proven oversight model.

A typical model might have the overarching policies set and managed by an advisory panel that includes senior representatives from essential functions.  They may also interact with external organisations, including industry forums.

The panel would draw on those connections to work towards an appropriate standard. They need to look to define a proper governance methodology.

Conclusion

As Vivienne Artz, Chief Privacy Officer at London Stock Exchange Group, explains, “The EU AI Rules are the first of their kind, focussing on how AI is used rather than on AI itself.  Following a risk-based approach means that higher risk AI comes with stricter rules.  Will this approach support or discourage innovation?  Time will tell….”.

As ever, regulations result in additional costs and slow down businesses. However, compliance with these rules may help these frontier technologies win the confidence of hesitant clients and ultimately help the sales and distribution effort. For example, providers of high-risk AI systems that comply with the Regulation can display a ‘CE’ mark on their packaging or accompanying documentation, which will help assure potential clients and allow the systems to move freely within the Union. After all, would you let your children be driven by a driverless taxi if you didn’t know it was such a heavily regulated industry?

Different sectors will likely react differently. In the main, most market participants will likely accept that requiring higher-risk use cases to complete a rigorous pre-deployment assessment process is necessary to ensure they comply with all their wider obligations and are appropriately risk-managed. Suggesting voluntary codes for other AI is pragmatic, but definitions will be critical.  Will use cases at the margin be conceived, developed and marketed differently to avoid the Regulation?

There will therefore be much debate around the scope and approach to the categorisation. However, firms will likely need internal specialist teams to support compliance and assistance from external advisors to help with the risks and processes. There are a large number of critical questions that will be debated in the coming weeks and months.

In this part, we have discussed how to approach creating a suitable governance process where the new draft Regulation has accelerated the need. In our next part, we will describe some practical steps to demonstrate appropriate oversight and be in a good position for an AI audit.

Co-authored by: Nayeem Syed, Senior Director & Malcolm Melville, CISO Advisory; London Stock Exchange Group
