Every business (that I know of) has at least thought about AI and the threats and opportunities it might bring. It is a technology that will likely redefine our age: what the steam engine did for physical labour, AI is poised to do for cognitive work. So we need to strap ourselves in and embrace the change (or be left behind).
As we prepare for this seismic shift, regulators are scrambling to keep up, and one of the first movers in this space is the European Union with its Artificial Intelligence Act (EU AI Act). The aim of the AI Act (and no doubt of AI regulation generally) is to make sure AI systems are safe, transparent, and respect people’s rights.
This blog is here to explain what the EU AI Act is, how it might affect your business, and what you can do to prepare for it.
1. What is the EU AI Act?
The EU AI Act, effective from 1 August 2024, is the EU’s attempt to regulate artificial intelligence and protect society (us!) from potential harm. It uses a risk-based approach, sorting AI systems into four levels: minimal, low, high, and unacceptable risk. The goal is to ensure AI systems are trustworthy and don’t harm health, safety, or fundamental rights.
Risk levels explained: The Act sorts AI systems by their risk to people and society.
- Minimal or no-risk systems, like spam filters, are not regulated but providers are encouraged to follow codes of conduct.
- Low-risk systems, such as chatbots, have some transparency requirements.
- High-risk systems are those that pose a high risk to fundamental rights, e.g. those used in healthcare or law enforcement. They face strict checks and must meet specific standards.
- Unacceptable risk systems, which could harm fundamental rights, are banned entirely.
2. Let’s Take a Deeper Dive into the Regs…
Low-Risk Systems: Under Art. 50, these are systems that: (i) interact with humans; (ii) generate synthetic audio, image, video or text content; (iii) are used to detect emotions or to determine association with (social) categories based on biometric data (i.e. emotion recognition systems or biometric categorisation systems); (iv) generate or manipulate content (i.e. deepfakes); or (v) generate or manipulate text published to inform the public on matters of public interest.
High-Risk AI Systems: These are systems that pose a high risk to fundamental rights and so are subject to thorough checks and ongoing monitoring. They must be transparent, have human oversight, and be accurate. High-risk systems include: (i) those used as safety components of products (e.g. medical devices or cars); (ii) products already covered by EU product safety legislation (e.g. toys, lifts, machinery etc.); and (iii) those that materially influence the outcome of decision-making where there is a significant risk of harm (e.g. to health, safety, education and other fundamental rights).
Unacceptable Risk (Banned Practices): The Act bans AI systems that pose an unacceptable risk, like those manipulating behaviour or used for social scoring. This means any AI that could exploit vulnerabilities or manipulate people in harmful ways is not allowed. For instance, AI systems that use subliminal techniques to influence decisions, or those that rank individuals based on social behaviour, are prohibited.
General Purpose AI (GPAI): The Act also covers versatile AI models that can be put to many different uses. These models must be transparent, and those trained using very large amounts of computing power face extra rules because they are presumed to pose systemic risks. This means if your AI model fits these criteria, you need to ensure it complies with the transparency requirements and possibly more.
3. So How Might This Affect Your Business?
As a tech or AI company founder, it’s important to know how the EU AI Act could impact you:
- Who It Affects: The Act applies to any AI system used in the EU, even if the company is outside the EU. This means, wherever you are based, your business could be affected if your AI is used in Europe. Whether you are developing, importing, or using AI systems, you need to be aware of these rules. For instance, a US-based company selling AI software to European clients must comply with the Act.
- Compliance Needs: If your AI is high-risk, you must meet the Act’s requirements, like doing checks, keeping records, and having human oversight. This involves conducting conformity assessments and ensuring your AI system is safe and reliable. For example, a company developing AI for autonomous vehicles must ensure the system is thoroughly tested and monitored for safety.
- Transparency and Accountability: For low-risk systems the Act still stresses the need for clarity and accountability. You must explain how your AI works and ensure it can be checked for compliance. This means providing detailed documentation and being open about your AI’s functions. For instance, if your AI system makes decisions, you should be able to explain the decision-making process to users and regulators.
- Penalties: Not following the EU AI Act can lead to big fines: up to €35 million or 7% of your total worldwide annual turnover, whichever is higher. Although fear of enforcement isn’t necessarily the best reason to try to comply, it’s an important reminder that the regulations have teeth for those playing fast and loose with other people’s data, rights and freedoms.
4. Preparing Your Business for the EU AI Act
To handle the EU AI Act, consider these steps:
- Assess Risks: Review your use of AI to understand whether it falls within the scope of the Act. This will help you know what rules you need to follow. Where necessary, conduct a thorough risk assessment to understand where your AI systems stand. For example, evaluate whether your AI system could potentially harm users or infringe on their rights.
- Set Up Compliance: For high-risk AI, make sure you have the right processes for checks, records, and oversight. Implement systems to monitor and document your AI’s performance and compliance. For example, you’ll need to establish a compliance process to regularly review and update your AI systems according to the regulations.
- Stay Updated: Keep informed about changes related to the EU AI Act and other new rules that might come into effect. This helps you adapt and plan ahead. Regularly review updates from the EU and adjust your strategies accordingly. For example, subscribe to newsletters or join industry groups that provide insights into regulatory changes.
- Consult Experts: If in doubt, talk to an expert in AI and tech law. They can offer insights and help you navigate the rules. Engaging with professionals can provide clarity and guidance on complex legal requirements.
- Promote Transparency: Encourage a culture of transparency and accountability in your company. A bit like with the GDPR, compliance with the AI Act will be a cross-functional endeavour. Teams from engineering, product, legal, compliance, sales and others will need to understand the requirements and how your products comply. Being compliant and showing your compliance will build trust with customers, investors and other key stakeholders. Make sure your team understands the importance of clear communication and ethical AI practices and how this will underpin your company’s growth.
Key Dates To Remember
The AI Act applies in a staggered way – we’ve summarised the key dates to be aware of below.
- 1 August 2024: The EU AI Act officially came into force.
- 2 February 2025: Specific provisions, including the prohibitions on banned AI practices, start to apply.
- 2 August 2025: Rules concerning general-purpose AI start to apply.
- 2 August 2026: Most of the remaining provisions, including the bulk of the compliance requirements, apply.
Conclusion
The EU AI Act is the first big move by a regulator in the AI space. For founders, understanding and following the changing legal landscape is key to your business’s success. Getting up to speed early can be a defensible moat for your business as opposed to a threat to fear.
As always, changing legislation should be more than a hoop jumping exercise – it can also be an opportunity to enhance your company’s reputation as a responsible and forward-thinking player in the industry. Embracing the EU AI Act’s principles will likely lead to better products, increased customer trust, and better traction with investors. As the AI landscape continues to evolve, there is opportunity there for those who are proactive and well informed – which might just be the key to the success of your AI business.
The preceding information does not constitute legal advice and should not be relied upon for making business or legal decisions.
Author: Saman Harris
Head of Commercial & Fractional GC.
Saman is a senior commercial tech GC with expertise in supporting tech companies and fintechs as they safely navigate complex legal and commercial issues during periods of rapid international expansion. Saman brings strong experience in regulatory, commercial, product, employment, finance and corporate work. He has spent his career enabling high-growth tech businesses to scale whilst managing their commercial and legal risks. He is currently Fractional General Counsel to four of Seven Legal’s clients.
Seven Legal provides stage-specific legal advice for fast-growth technology companies. Built on experience advising hundreds of founding teams in the UK, US and India on funding, scaling and exiting high-growth ventures, our expert tech lawyers will be a growth enabler for your business.