AI regulation tracker: UK and EU take divergent approaches to AI regulation

May 17, 2023

Artificial intelligence (“AI”), once limited to the pages of science fiction novels, is now viewed as a key strategic priority for both the UK and EU.

The UK, in particular, plays a prominent role at the cutting edge of the technology, ranking third worldwide for private investment into AI companies in 2020 (1), behind the United States and China. Proposals for AI regulation in the UK and EU are at different stages and look set to take divergent forms: while the UK is proposing a light-touch regulatory approach to foster innovation, the EU has focused on establishing clear rules in the hope of attracting increased investment through regulatory clarity.

As companies increasingly integrate AI into their products, services, processes, and decision-making, they will need to do so in ways that comply with the varying regulatory approaches in the UK and EU.

As is the case with most new technologies, the establishment of regulatory and compliance frameworks has lagged behind AI’s rise. This is set to change as the UK government further clarifies its approach to sectoral regulation of AI, and the EU enters the final stages of negotiating its ambitious AI Act. Additional EU legislation concerning AI liability is also in the pipeline.

Our AI Regulation Tracker will keep you updated on legislation that, if passed, would directly impact businesses’ development or deployment of AI solutions in the UK and EU. (2)

BCLP actively tracks proposed and enacted AI legislation to help our clients stay informed in this rapidly changing regulatory landscape. In addition to the UK and EU, we are actively tracking proposed and enacted AI legislation across the United States. Explore our interactive map of AI legislation across the US.

On 21 April 2021, the European Commission introduced its proposal for a regulation laying down harmonised rules on AI throughout the European Union (the “AI Act”). Its status as a regulation means that, once finalised and in force, the AI Act will apply directly in each of the 27 EU member state countries.

The AI Act will apply to:

  • providers placing on the market or putting into service AI systems in the EU, irrespective of where those providers are established;
  • users of AI systems located in the EU; and
  • providers and users of AI systems that are located in a third country, where the output of the system is used in the EU.

The AI Act will therefore have a broad extra-territorial reach and will need to be considered by providers and users of AI systems globally.
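
To make the territorial scope concrete, here is a minimal first-pass screening sketch. It is illustrative only: the class and function names are our own invention, and the three booleans simply mirror the three limbs of the draft text quoted above.

```python
from dataclasses import dataclass

@dataclass
class AISystemContext:
    """Hypothetical deployment facts, mirroring the three limbs of the draft text."""
    provider_places_on_eu_market: bool  # provider places the system on the EU market or puts it into service
    user_located_in_eu: bool            # a user of the system is located in the EU
    output_used_in_eu: bool             # provider/user in a third country, but the system's output is used in the EU

def ai_act_may_apply(ctx: AISystemContext) -> bool:
    # Any one limb is enough to bring a system within scope.
    return (
        ctx.provider_places_on_eu_market
        or ctx.user_located_in_eu
        or ctx.output_used_in_eu
    )

# Example: a US provider whose system's output is consumed in the EU.
print(ai_act_may_apply(AISystemContext(False, False, True)))  # True
```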

A “risk based” approach to regulating AI

The AI Act takes a “risk based” approach, classifying AI systems into four tiers:

  • prohibited;
  • high-risk;
  • limited risk;
  • all other systems (in effect, minimal risk).

The rules applicable to an AI system will therefore depend on the tier into which it falls.
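
The tier structure lends itself to a simple classification model. The sketch below is illustrative only: the enum and the example mappings are ours, drawn from the examples discussed under the tiers that follow.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four draft risk tiers."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal risk"  # "all other systems"

# Illustrative examples, based on the draft text discussed below.
EXAMPLES = {
    "government social scoring system": RiskTier.PROHIBITED,
    "CV-screening tool used in recruitment": RiskTier.HIGH_RISK,
    "customer-service chatbot": RiskTier.LIMITED_RISK,
    "AI-enabled spam filter": RiskTier.MINIMAL_RISK,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```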

1. Prohibited (including AI systems using techniques to manipulate individuals and cause harm):

The European Commission describes these types of AI systems as posing “a clear threat to the safety, livelihoods and rights of people”. As originally drafted, Article 5 imposed bans on systems that use subliminal techniques to materially distort users’ behaviour in harmful ways; exploit the vulnerabilities of specific groups; allow for “social scoring” by governments; or use biometric identification in public spaces for law enforcement purposes (subject to narrowly defined exceptions).

A later draft of the AI Act, released by joint committees of the European Parliament (“EP”) on 9 May 2023, proposes some significant changes, illustrating that it is still too early to assess the impact of the finalised AI Act with certainty. For instance, to fall within the prohibited tier, the EP proposes that an AI system must have the objective or the effect of “materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision they would not have taken otherwise in a manner that causes or is likely to cause that person, another person or group of persons significant harm”.

The EP’s proposals also ban the use of “real time” biometric identification systems in public places, meaning these systems could only be used by law enforcement after an incident (“post” identification), and only subject to prior judicial authorisation and in connection with serious crimes.

The scope of “prohibited” systems is further expanded by the EP’s proposals to include those used for predictive policing; the scraping of facial images to broaden facial recognition databases; and inferring emotions in the areas of law enforcement, border management, the workplace and education institutions.

2. High risk (i.e. systems subject to additional safeguards, including human oversight):

These systems include those used for:

  • biometric identification and classification of natural persons;
  • management or operation of critical infrastructure;
  • education and vocational training;
  • employment and worker management; and
  • access to and enjoyment of essential public and private services (including creditworthiness and credit scoring).

These systems will be subject to strict additional obligations, such as the need to undergo a “conformity assessment”, prior registration, adequate risk management and mitigation systems, and appropriate human oversight. Requirements differ depending on whether the high-risk AI is embedded as part of a wider system (e.g. in a medical device) or free-standing.

In its 9 May draft, the European Parliament has proposed limiting the classification of “high-risk” systems to those posing a “significant risk” to the health, safety or fundamental rights of persons (different rules apply for safety components).

3. Limited risk:

Certain AI systems that are not “high risk” but interact with humans and/or could manipulate human behaviour, such as chatbots or emotion recognition systems, will be subject to specific transparency obligations; for example, users must be made aware that they are interacting with a machine.

4. All other AI systems (minimal risk):

These are systems posing a minimal risk to users’ rights or safety, such as AI enabled spam filters. No specific requirements are proposed here in the original text.

5. Foundation models:

The EP’s 9 May draft introduces additional obligations in respect of “foundation models”. A foundation model is defined as “an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks”. These proposals are in response to the popularity of foundation models such as GPT, on which ChatGPT is based.

What are the possible fines?

Fines are proposed on a sliding scale, with the most serious contraventions (including breaches of obligations applicable to prohibited and high-risk systems) carrying fines of up to:

  • EUR 30 million; or
  • 6% of total worldwide annual turnover for the preceding financial year (whichever is higher).

In its 9 May proposal, the EP proposed increasing fines to the higher of EUR 40 million or 7% of total worldwide annual turnover.
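
The arithmetic of the cap is straightforward, as the short sketch below illustrates. The figures come from the drafts; the function itself is our own illustration.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float, ep_draft: bool = False) -> float:
    """Ceiling for the most serious contraventions: the higher of a fixed sum
    or a percentage of total worldwide annual turnover (preceding financial year)."""
    fixed, pct = (40_000_000, 0.07) if ep_draft else (30_000_000, 0.06)
    return max(fixed, pct * worldwide_annual_turnover_eur)

# For a company with EUR 1bn turnover, the percentage limb exceeds the fixed sum:
print(max_fine_eur(1_000_000_000))                 # 60000000.0 (6% > EUR 30m)
print(max_fine_eur(1_000_000_000, ep_draft=True))  # 70000000.0 (7% > EUR 40m)
```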

When will the AI Act come into effect?

The Council of the EU published its “general approach” in December 2022, which contained numerous changes to the text of the draft legislation. The EP adopted its negotiating position on 14 June 2023, meaning the “trilogue” stage can now commence. During the trilogue, the Council and the EP negotiate to reach agreement on the final text.

Once the EU’s ordinary legislative procedure is complete (expected to take months), the AI Act is set to apply following a 24 month transition period. As a result, the AI Act appears unlikely to apply until 2025, at the earliest.

Initially proposed by the European Commission on 28 September 2022, the draft AI liability directive (the “AI Liability Directive”) aims to modernise the EU liability framework by introducing rules specific to damage caused by AI systems. The AI Liability Directive is complementary to the draft AI Act discussed above. The EU product liability regime is also being updated in parallel.

Presumption of causality

The AI Liability Directive lays down a rebuttable presumption of causality, establishing a causal link between non-compliance with a duty of care under Union or national law (i.e. the fault) and either (a) the output produced by the AI system, or (b) the failure of the AI system to produce an output, that gave rise to the relevant damage. The presumption would only apply where the following conditions are satisfied:

  • the claimant demonstrates the defendant’s non-compliance with an EU or national law duty of care intended to protect against the damage that occurred (which might include failure to comply with a provision of the AI Act);
  • it is reasonably likely that, based on the circumstances of the case, the defendant’s negligent conduct influenced the output produced by the AI system or the AI system’s inability to produce an output that gave rise to the relevant damage; and
  • the claimant demonstrates that the output produced by the AI system, or the AI system’s inability to produce an output, gave rise to the damage.

The presumption will also depend on whether the AI system is “high-risk” or not. Different rules will apply in either case. A defendant will be able to rebut the presumption of causality, for instance by proving that their fault could not have been responsible for the relevant damage.
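
Because the three conditions are cumulative, the presumption can be thought of as a simple conjunction, as in the illustrative sketch below. The names are ours, and the real analysis also turns on the high-risk distinction and on rebuttal.

```python
def presumption_of_causality_applies(
    fault_shown: bool,                  # non-compliance with a relevant duty of care is demonstrated
    influence_reasonably_likely: bool,  # the fault reasonably likely influenced the output (or its absence)
    damage_linked_to_output: bool,      # the output (or its absence) gave rise to the damage
) -> bool:
    # All three limbs must be satisfied; the defendant may still rebut the presumption.
    return fault_shown and influence_reasonably_likely and damage_linked_to_output

print(presumption_of_causality_applies(True, True, False))  # False: every limb is required
```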

Disclosure of evidence

The AI Liability Directive would also provide EU member states’ national courts with the power to order disclosure of evidence concerning high-risk AI systems under certain circumstances.

When will the AI Liability Directive come into effect?

It is difficult to predict the timing at present; once the Directive is in force, the draft text allows EU member states a further two years to transpose the final requirements into national law.

The UK Government’s Department for Science, Innovation and Technology (“DSIT”) is consulting on its white paper published on 29 March 2023 (the “AI White Paper”) until 21 June 2023. The proposals represent a light-touch approach consistent with the UK’s National AI Strategy published in September 2021. Legislation is not currently being proposed, in stark contrast with the approach being taken in the EU.

The AI White Paper proposes a flexible definition of AI systems, based on their inherent adaptability and autonomy. It also proposes a principles-based framework for existing regulators to interpret and apply to AI within their remits. Regulators would also be expected to issue guidance on how the principles interact with existing legislation, in order to support compliance in their sectors. This reflects the UK Government’s view that AI is a general purpose technology that will cut across numerous regulatory remits, making cooperation between regulators fundamental.

The five cross-sectoral principles to be applied by regulators are:

  1. safety, security and robustness;
  2. appropriate transparency and explainability;
  3. fairness;
  4. accountability and governance; and
  5. contestability and redress.

Initially, the principles will not be placed on a statutory footing. Legislation may be passed in the future to impose a duty on regulators to have due regard to the principles, if it is found that they are not being applied appropriately. AI assurance techniques and technical standards are also intended to play a major role.

The AI White Paper does not seek to allocate liability for harm caused by AI. This issue will be left to existing legal frameworks and monitored further; however, future legislative intervention is not ruled out.

(1) https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version

(2) We have focused on regulation that is specific to AI. In the UK and EU, the framework established in 2018 by the General Data Protection Regulation (the GDPR) regulates the use of personal data (including biometric data) for profiling and automated decision-making purposes. We note that AI and automation systems are increasingly integrated, however, not all automated decision-making systems involve AI (or personal data). The UK data protection framework is currently undergoing reform; some of the proposals are intended to facilitate use of personal data in connection with AI systems.
