7 November 2024

The EU AI Act: quick guide

On 1 August 2024, the EU AI Act came into effect across all 27 EU member states. With most of the act not becoming applicable until 24 months after it entered into force (and a few key provisions kicking in earlier or later), businesses need to understand how to remain compliant and what actions they can take today to avoid tripping up and facing the hefty fines the act promises.

Navigating this challenge begins with companies having a good understanding of the act's scope, its risk-based approach, and the obligations and best practices that follow from it. The act aims to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. Simple, right? Well, not quite.

The new rules and their risk-based approach mean that AI systems will have obligations depending on the level of risk they pose. Even low-risk AI systems will need to be assessed to understand their impact on users. The highest level of risk is unacceptable risk: systems considered a threat to people, which are banned outright – the first legislation of its kind. This is followed by high risk, such as AI used to support and manage critical infrastructure, and finally general-purpose AI, which is used widely across industry but will still be scrutinised.

Unacceptable risk: These systems are considered a threat to the safety, livelihoods, or rights of people and include, but aren't limited to:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

Some exemptions exist for law enforcement. Real-time remote biometric identification will be allowed in a limited number of serious cases, whilst 'post' remote biometric identification – where identification occurs after a significant delay – will be allowed for prosecuting serious crimes, and only after court approval.

High risk: These systems are divided into two categories. The first is systems used in products falling under the EU's product safety legislation, such as toys, cars, and lifts; the second is AI systems falling into specific areas that need to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management, and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum, and border control management
  • Assistance in legal interpretation and application of the law

Other systems not classified as high risk, including general-purpose and generative AI, will have to comply with transparency requirements and EU copyright law, such as:

  • Disclosing that content was generated by AI
  • Designing a model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

So what does this mean?

If you're a company that operates within the EU, serves customers within the EU, or produces AI systems that are used in the EU, then this applies to you. What steps can you take to prepare?

The first step would be to conduct an AI inventory: catalogue the AI systems you have in use or in development and determine their classification, so that you understand your compliance obligations and can identify systems that need to be modified or sunset to meet the standards. The EU has released an EU AI Act Compliance Checker to aid with this process.
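
To make that concrete, here is a minimal sketch of what an inventory entry might capture. The field names, risk tiers, and the example system below are illustrative assumptions, not terminology or requirements taken from the act itself.

    # Minimal sketch of an AI inventory record and classification helper.
    # Field names and risk tiers are illustrative assumptions, not official
    # terminology from the EU AI Act.
    from dataclasses import dataclass, field

    RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

    @dataclass
    class AISystemRecord:
        name: str                      # internal name of the system
        purpose: str                   # what the system is used for
        owner: str                     # accountable team or individual
        deployed_in_eu: bool           # in scope if used in or serving the EU
        risk_tier: str = "minimal"     # one of RISK_TIERS, set after assessment
        actions: list = field(default_factory=list)  # e.g. "modify", "sunset"

        def __post_init__(self):
            if self.risk_tier not in RISK_TIERS:
                raise ValueError(f"Unknown risk tier: {self.risk_tier}")

    # Example entry for a hypothetical CV-screening tool (employment-related
    # systems fall under the act's high-risk category).
    inventory = [
        AISystemRecord(
            name="cv-screening-v2",
            purpose="Rank job applicants for interview",
            owner="HR Analytics",
            deployed_in_eu=True,
            risk_tier="high",
            actions=["register in EU database", "schedule FRIA and DPIA"],
        ),
    ]

    for record in inventory:
        print(record.name, "->", record.risk_tier, record.actions)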

A crucial element of any AI system in general, but particularly of meeting the compliance requirements, is high-quality data that is representative and free from bias. Depending on how your system is categorised, it will undergo regular risk assessments to protect the rights of citizens, which knocks on to other important elements such as data governance.
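
As a rough illustration of what a basic representativeness check might look like – the group labels, reference shares, and the 20% tolerance below are assumptions made for the sketch, not thresholds from the act – you could compare the make-up of your training data against a reference population:

    # Rough sketch of a data representativeness check. Group labels, reference
    # shares and the 20% tolerance are illustrative assumptions only.
    from collections import Counter

    training_groups = ["A"] * 700 + ["B"] * 250 + ["C"] * 50   # toy training data
    reference_share = {"A": 0.60, "B": 0.30, "C": 0.10}        # assumed population

    counts = Counter(training_groups)
    total = sum(counts.values())

    for group, expected in reference_share.items():
        observed = counts[group] / total
        flag = "UNDER-REPRESENTED" if observed < 0.8 * expected else "ok"
        print(f"{group}: observed {observed:.2%} vs expected {expected:.2%} -> {flag}")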

Following the AI inventory, an AI impact assessment should be carried out, and it needs to include both a Data Protection Impact Assessment (DPIA) and a Fundamental Rights Impact Assessment (FRIA).

DPIA: At a glance, a DPIA is a process to help you identify and minimise the data protection risks of a project. It must be done for processing that is likely to result in a high risk to individuals, and it is good practice for any other major project that requires the processing of personal data. The DPIA must:

  • Describe the nature, scope, context and purpose of the processing
  • Assess necessity, proportionality and compliance measures
  • Identify and assess risks to individuals
  • Identify any additional measures to mitigate those risks.

FRIA: A FRIA is an examination of the potential risks an AI system may pose to the fundamental rights of individuals – including, but not limited to, privacy, non-discrimination, freedom of expression, and safety – and it applies to high-risk AI systems as categorised by the EU AI Act. The intention of the process is to assess the system's design, development, deployment, and intended use so that mitigation strategies can be developed. The benefits of conducting a FRIA are that you can demonstrate responsible AI practices, mitigate risks, avoid fines, and improve the transparency and explainability of your system.

Why is this legislation important?

There are historical lessons to be learnt from using AI systems in high-risk scenarios, where people have been burnt – usually by inherent system bias, which is still a huge problem today. Let's take the example of COMPAS.

COMPAS – short for Correctional Offender Management Profiling for Alternative Sanctions – is a case management and decision support tool that was used by U.S. courts to assess the likelihood of a defendant recommitting an offence. One problem with using this type of system in a court setting is that the way its outputs are interpreted and acted on is often shaped by cognitive bias; another is that software such as COMPAS relies on 'trade secrets', meaning that in this context it cannot be examined by the public or the defendant, which may violate due process.

The largest and most critical flaw, however, is how human bias gets reflected in AI systems. Whilst many frameworks exist to reduce the bias within these systems, bias still creeps its way in, surfacing in ways we can't predict or eliminate, because we cannot reliably audit ourselves for our own biases. Pair that with a black-box AI that can't be audited and removing bias becomes even harder.

What did this mean for COMPAS? Well, the result was not good. An investigation found that Black individuals were almost twice as likely as white individuals to be labelled as higher risk yet not actually re-offend.
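
The disparity behind a finding like that is essentially a gap in false positive rates – the share of people labelled higher risk among those who did not go on to re-offend – broken down by group. A minimal sketch of that calculation, using made-up numbers rather than data from the investigation, might look like this:

    # Minimal sketch of the false-positive-rate comparison behind findings like
    # the COMPAS investigation. The numbers are made up for illustration only.
    def false_positive_rate(predictions, outcomes):
        """Share of people labelled high risk among those who did NOT re-offend."""
        non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
        if not non_reoffenders:
            return 0.0
        return sum(non_reoffenders) / len(non_reoffenders)

    # predictions: 1 = labelled higher risk, outcomes: 1 = actually re-offended
    group_a = {"predictions": [1, 1, 0, 1, 0, 0, 1, 0], "outcomes": [1, 0, 0, 0, 0, 0, 1, 0]}
    group_b = {"predictions": [1, 0, 0, 0, 0, 1, 0, 0], "outcomes": [1, 0, 0, 0, 0, 0, 0, 0]}

    for name, g in (("group A", group_a), ("group B", group_b)):
        fpr = false_positive_rate(g["predictions"], g["outcomes"])
        print(f"{name}: false positive rate = {fpr:.0%}")

Here group A ends up with roughly double group B's false positive rate, even though both groups contain the same number of people – the kind of gap the investigation surfaced.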

You might think that you could remove these characteristics when evaluating, such as removing gender or race – or use another AI to audit the output. The problem is that in both cases bias still arises: instead of using race, the system may lean on a correlated proxy such as neighbourhood. This is why governing the safe use and distribution of data and AI systems is so important in the modern age, as they become increasingly powerful. For example, I'm sure that some time in the near future you'll say to yourself, 'huh, I haven't seen many AI-generated photos for a long time' – and there is a good reason for this if you think about it.
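
To show how a proxy can reintroduce bias even when the protected attribute is removed, here is a small sketch on entirely synthetic data – the groups, neighbourhoods, and scoring rule are all invented for illustration:

    # Sketch of how a proxy feature can reintroduce bias even when the
    # protected attribute is removed. All data is synthetic and illustrative.
    import random

    random.seed(0)

    # Synthetic population: group membership is never shown to the "model",
    # but neighbourhood is strongly correlated with it.
    people = []
    for _ in range(1000):
        group = random.choice(["A", "B"])
        if group == "A":
            neighbourhood = random.choices(["north", "south"], weights=[0.8, 0.2])[0]
        else:
            neighbourhood = random.choices(["north", "south"], weights=[0.2, 0.8])[0]
        people.append({"group": group, "neighbourhood": neighbourhood})

    # A "model" that only looks at neighbourhood (group removed entirely).
    def risk_score(person):
        return 1 if person["neighbourhood"] == "south" else 0

    for group in ("A", "B"):
        members = [p for p in people if p["group"] == group]
        rate = sum(risk_score(p) for p in members) / len(members)
        print(f"group {group}: share labelled high risk = {rate:.0%}")
    # Group B is labelled high risk far more often, despite the model never
    # seeing group membership directly.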

You may also have heard about the Harvard students using Meta glasses for facial recognition and social engineering. These students, to be clear, did not use the glasses for evil; they set out to demonstrate how AI might be used by bad actors in the future. In their demonstration, the students can be seen walking around the Cambridge, Massachusetts campus and subway stations, approaching people they had never met and asking them questions – such as "Are you Betsy?" – based on details they otherwise would have had no way of obtaining. By livestreaming to Instagram and having a program monitor the stream, they used facial recognition software to scrape data from the reverse image search engine PimEyes. Once they had a match, they could access personal information about the person and strike up a conversation using these personal details.

The worrying piece about this, however, is that you could do much the same thing with just a phone.

The point here is that legislation against these types of systems is important: without it, totally automated systems could infringe on our rights at scale, be used to manipulate people, or produce false and biased output resulting in unfair outcomes for different demographics – among many other harms.

Fines

Non-compliance can result in the following fines:

  • The maximum penalty for non-compliance with the EU AI Act's rules on prohibited uses of AI is up to EUR 35m or 7% of worldwide annual turnover – whichever is higher
  • Penalties for breach of certain other provisions are subject to a maximum fine of EUR 15m or 3% of worldwide annual turnover – whichever is higher
  • Penalties for supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities are up to EUR 7.5m or 1% of worldwide annual turnover – whichever is higher
  • For SMEs and start-ups, the fines for all of the above are subject to the same maximum percentages or amounts, but whichever is lower
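
As a quick worked illustration of how the "whichever is higher" (and, for SMEs, "whichever is lower") logic plays out – the turnover figure below is made up, and the caps used are the headline figures for prohibited-use breaches quoted above – consider:

    # Worked sketch of the "whichever is higher / lower" fine logic.
    # The turnover figure is hypothetical; the caps are the headline numbers
    # for prohibited-use breaches (EUR 35m or 7% of worldwide turnover).
    FIXED_CAP_EUR = 35_000_000
    TURNOVER_PCT = 0.07

    def max_fine(worldwide_turnover_eur, is_sme=False):
        percentage_based = TURNOVER_PCT * worldwide_turnover_eur
        if is_sme:
            return min(FIXED_CAP_EUR, percentage_based)   # SMEs: whichever is lower
        return max(FIXED_CAP_EUR, percentage_based)       # otherwise: whichever is higher

    turnover = 1_000_000_000  # hypothetical EUR 1bn worldwide annual turnover
    print(f"Large enterprise: up to EUR {max_fine(turnover):,.0f}")              # 70,000,000
    print(f"SME with same turnover: up to EUR {max_fine(turnover, True):,.0f}")  # 35,000,000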

To find out more, speak to our data specialists to see how we can help with your data needs.