
Learn about the relevant aspects of the European Union's Artificial Intelligence Act

Artificial intelligence (AI) is no longer a futuristic fantasy; it has become a tangible reality and an essential part of our daily lives. From virtual assistants to advanced medical systems, AI has permeated almost every aspect of society. With the rapid advancement of this technology, however, legal and ethical concerns have arisen.


The European Parliament, in its role as guardian of the rights and safety of its citizens, has advanced the "Artificial Intelligence Act" (hereinafter the AI Act), initially proposed by the European Commission in April 2021. Parliament adopted its amendments in June 2023, and the text is now being negotiated with the member states and the European Commission.


The AI Act sets a historic precedent in the regulation of artificial intelligence. Below is an overview of this regulation and what it means for the future of AI in Europe and, potentially, the world.

1. What is the AI Act?

The AI Act is the world's first comprehensive attempt to regulate artificial intelligence. It aims to regulate the use of AI in the European Union, ensuring its application is safe, fair, and ecologically sustainable.


The proposal aims to classify AI systems according to their level of risk and to establish requirements for their development and use. Areas of focus include stronger rules on data quality, transparency, human oversight, and accountability, addressing ethical issues and implementation challenges in sectors such as health, education, finance, and energy.

In June 2023, changes to the draft act were agreed, including a ban on the use of AI for biometric surveillance and an obligation for generative AI systems, such as ChatGPT, to disclose that their content was generated by AI.


The centerpiece of the act is a classification system that determines the level of risk an AI technology could pose to a person's health, safety, or fundamental rights.


2. Risk-Based Categorization

As mentioned, a distinctive feature of this regulation is that it does not treat all AI systems the same way. Instead, it classifies systems according to the risk they pose to users:

  1. Unacceptable Risk: This includes systems that are considered a direct threat to people, such as toys that could induce dangerous behavior in children or real-time facial recognition systems. These systems will be banned.

  2. High Risk: This category encompasses AI systems that can affect safety or fundamental rights. It includes AI used in products covered by EU product-safety legislation, such as toys or medical devices, as well as systems in specific areas that must be registered in an EU database.

  3. Generative AI: Tools like ChatGPT, which generate content, must comply with specific transparency requirements and avoid generating illegal content.

  4. Limited Risk: These systems, like those that produce deepfakes, must comply with minimal transparency requirements.

3. What does this mean for AI developers and the general public?

For innovators and developers, this regulation provides a clear framework for how to design and implement their systems. In turn, the general public will benefit from interacting with safer, more transparent, and more reliable AI systems. However, European companies have raised concerns about the legislation's potential impact on Europe's competitiveness and technological sovereignty.


The act proposes significant penalties for non-compliance: companies could face fines of up to €30 million or 6% of their global annual turnover, whichever is higher.

4. Towards the Future

With the adoption of its negotiating position on the AI Act on June 14, 2023, the European Parliament established a clear stance on how AI should be regulated. Final negotiations with the EU member states will now begin, and the Act is expected to become law before the end of the year.

Conclusion

The introduction of the AI Act by the European Parliament is a bold and necessary step in the digital age. This legislation not only seeks to protect citizens from potential harm but also to clearly guide innovators, ensuring that Europe continues to be a leader in the responsible adoption of advanced technologies.


This article is for informational purposes only and should not be considered legal advice. It is always advisable to consult a lawyer or expert in AI regulations for an accurate and case-specific interpretation. If you have any questions or need support, please contact us at info@lmzabogados.com.


 


Legal disclaimer

The content of this blog is provided for informational and educational purposes only and should not be considered legal advice. Regulations in Ecuador are subject to changes and updates that may affect the applicability and accuracy of the content published here. We do not guarantee that the information presented is accurate, complete or current at the time of reading. Therefore, past postings should not be construed as necessarily reflecting current regulations. We strongly recommend that you consult with our qualified attorneys for specific and personalized advice.
