AI Regulations by the EU: Purpose-driven legislation that will impact the AI technology landscape

Sat Mar 16 2024

Prevention is better than cure. Many people believe this to be true, and this time around a whole continent says it believes the same. The European Union came out with a well-detailed Artificial Intelligence regulation bill, and the European Parliament and Council reached a provisional agreement on the text of the AI Act, meaning they agreed on the core principles and content of the law. On 13 March 2024, the European Parliament approved the bill, clearing the way for it to become enforceable legislation. We may see the act enter into force and be enforced in member states within the next six months to two years.

This is a landmark in regulating AI. With this bill, the nations of the EU become the first in the world with a comprehensive framework for AI regulation. Is this a big deal? Absolutely! Remember the days when iPhones came with a Lightning port? The major reason behind the inclusion of a USB-C port on iPhones was Europe's regulatory requirement to standardise charging ports on smartphones.

It’s too costly to build something specifically for Europe, and even more costly to avoid Europe as a market. Why? Because Europe is an economic superpower. The European Union’s GDP is estimated at around $18.35 trillion in 2023, representing roughly one-sixth of the global economy. Usually, when Europe adopts a regulatory framework, the rest of the world follows or creates similar regulations adapted to regional requirements. So it’s safe to assume that these regulations will have global implications.

What is the AI Bill? Explained in Detail

So what is this AI bill? In short, the EU Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. Also, the EU wants every AI system accessible in EU territory to be overseen by people, rather than by automation, to prevent harmful outcomes.

The bill is a 349-page document. To summarise it, let’s discuss four core aspects of this regulation.

  1. Risk-based approach:

     AI systems are categorized based on the level of risk they pose (minimal, high, unacceptable, and specific transparency risk). Higher-risk systems, such as facial recognition in public spaces, face stricter requirements compared to low-risk ones like spam filters. An illustrative sketch of this tiering follows below.

  2. Transparency and explainability:

     Users should understand how AI systems make decisions, particularly in high-risk situations. Developers are obligated to provide information about the data used, training methods, and potential biases involved.

  3. Fundamental rights and human values:

     AI systems must respect fundamental rights like non-discrimination, privacy, and human dignity. For example, using AI for automated hiring decisions with inherent biases is prohibited.

  4. Human oversight and accountability:

     Humans, not algorithms, should ultimately be responsible for AI systems. Developers and users need to implement appropriate safeguards and monitoring mechanisms to prevent harm.

These core concepts aim to establish a human-centric and trustworthy framework for AI development and deployment in the EU. They prioritize safety, transparency, and ethical considerations while fostering responsible innovation in this rapidly evolving field.
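To make the tiering concrete, here is a minimal Python sketch of how a provider might model the four risk categories. The tier names follow the Act’s categories; the example systems and the one-line obligation summaries are illustrative assumptions for this sketch, not quotations from the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    # Tier names follow the Act's categories; everything else here is illustrative.
    UNACCEPTABLE = "unacceptable"                     # prohibited outright
    HIGH = "high"                                     # strict obligations before deployment
    SPECIFIC_TRANSPARENCY = "specific transparency"   # disclosure duties
    MINIMAL = "minimal"                               # largely unregulated

# Illustrative mapping of example systems to tiers (assumptions for this sketch).
EXAMPLE_SYSTEMS = {
    "facial recognition in public spaces": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.SPECIFIC_TRANSPARENCY,
    "email spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Very rough, assumed summary of what each tier implies for a provider."""
    return {
        RiskTier.UNACCEPTABLE: "banned from the EU market",
        RiskTier.HIGH: "conformity assessment, documentation, human oversight",
        RiskTier.SPECIFIC_TRANSPARENCY: "must disclose that users are interacting with AI",
        RiskTier.MINIMAL: "no specific obligations under the Act",
    }[tier]

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} risk -> {obligations(tier)}")
```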

Fines & Penalties as per the EU Bill


The fines for not following the regulations are more than adequate to deter violations of these norms. According to some experts, the proposed fines are really hefty.

The EU AI Act outlines a tiered system of fines for violating its provisions, aiming to ensure compliance and promote responsible development of AI systems. Here’s a breakdown of severity tiers and maximum fines, with a small worked example of the “whichever is higher” rule after it:

Unacceptable risk AI: Up to €35 million or 7% of global annual turnover, whichever is higher. These are systems presenting significant risks to fundamental rights, safety, or health.

High-risk AI: Up to €15 million or 3% of global annual turnover, whichever is higher. These include systems potentially impacting fundamental rights or having significant impacts on individuals or the environment.

Other non-compliance: Up to €7.5 million or 1.5% of global annual turnover, whichever is higher. This covers general violations of the Act’s transparency, explainability, and risk assessment requirements.

National enforcement: Individual EU member states will have the authority to impose and collect fines, potentially leading to variations in practice.
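To illustrate the “whichever is higher” rule, here is a minimal Python sketch that computes the applicable ceiling for each tier. The caps and percentages are those listed above; the €2 billion turnover figure is a made-up assumption for illustration only.

```python
def max_fine(turnover_eur: float, cap_eur: float, pct_of_turnover: float) -> float:
    """Return the maximum possible fine: the fixed cap or the percentage of
    global annual turnover, whichever is higher."""
    return max(cap_eur, pct_of_turnover * turnover_eur)

# Fine ceilings as listed above: (fixed cap in euros, share of global annual turnover).
TIERS = {
    "unacceptable-risk AI": (35_000_000, 0.07),
    "high-risk AI": (15_000_000, 0.03),
    "other non-compliance": (7_500_000, 0.015),
}

# Hypothetical company with a global annual turnover of 2 billion euros.
turnover = 2_000_000_000

for tier, (cap, pct) in TIERS.items():
    # e.g. unacceptable-risk: 7% of 2 bn = 140 m, which exceeds the 35 m cap.
    print(f"{tier}: up to EUR {max_fine(turnover, cap, pct):,.0f}")
```

For a company of that size, the turnover percentage exceeds the fixed cap in every tier, which is the point of the dual ceiling: large providers cannot treat the fixed amounts as their upper bound.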

What’s on the road ahead

To conclude, the European Union has come up with landmark legislation in the space of AI. Of course, legal intervention in the R&D and application of AI was expected, because every technology is eventually standardised and regulated once the hype settles. Europe taking the lead is seen as a positive move in the eyes of technology analysts, given the stricter norms and penalties.

The AI Act is set to be fully adopted in April 2024, with different parts becoming applicable at varying times; for example, bans on AI systems posing unacceptable risks apply six months after entry into force.
