AI Act – a conscious choice

What is AI and why is it important?

AI is the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity [1]. It enables systems to perceive their environment, solve problems and act to achieve a specific goal. These systems gather data through attached sensors, process that data and respond. They are capable of adapting their behaviour by learning from the results of their actions and of working with a degree of autonomy.

Whilst some AI technologies have been around for more than 50 years, advances in computing power, the availability of enormous amounts of data and new algorithms have led to major AI breakthroughs in recent years.

Artificial intelligence is already part of our everyday lives and is digitally transforming our society; it has therefore become an EU priority.

What is the AI Act?

To prepare Europe for an advanced digital age, the AI Act provides a comprehensive legal framework on AI that addresses the risks of AI within Europe and sets the tone for upcoming AI regulation worldwide. The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI [2].

The AI Act aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensuring that AI in Europe respects set values and rules, and harnesses the potential of AI for industrial use.”

European Parliament News

Why is it needed?

The Act is needed to make sure that AI systems used in the EU are safe, transparent, traceable, unbiased, trustworthy and environmentally friendly. AI systems should be overseen by people rather than by automation alone, which also helps foster inclusiveness.

The AI Act is part of a wider set of measures to foster the development of trustworthy AI in Europe, which also includes the Innovation Package and the Coordinated Plan on AI [3]. Together, these measures safeguard the health, safety and fundamental rights of people and provide legal certainty to businesses across Europe. Overall, these initiatives strengthen the EU's AI talent pool through education, training, skilling and reskilling activities.

Whilst existing legislation provides some protection, it is insufficient to address the specific challenges that AI systems bring. The proposed rules will therefore:

  • address risks created by AI applications/services;
  • prohibit AI practices that pose unacceptable risks;
  • determine a list of high-risk applications and set clear requirements for them;
  • define specific obligations for deployers and providers of high-risk AI applications;
  • require a conformity assessment before a given AI system is put into service or placed on the market;
  • overall, establish a governance structure (a brief illustrative sketch follows this list).
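
The sketch below is a purely illustrative, non-authoritative way of modelling this risk-based structure in Python; the tier names, the obligation strings and the required_steps helper are hypothetical simplifications of the points listed above, not terms defined by the Regulation.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Hypothetical simplification of the AI Act's risk-based approach."""
    UNACCEPTABLE = auto()  # prohibited AI practices
    HIGH = auto()          # applications on the high-risk list
    OTHER = auto()         # everything else

def required_steps(tier: RiskTier) -> list[str]:
    """Illustrative obligations attached to each risk tier."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited: must not be placed on the EU market"]
    if tier is RiskTier.HIGH:
        return [
            "meet the clear requirements set for high-risk applications",
            "provider and deployer obligations apply",
            "conformity assessment before placing on the market or putting into service",
        ]
    return ["no additional obligations beyond existing legislation"]

if __name__ == "__main__":
    for tier in RiskTier:
        print(tier.name, "->", required_steps(tier))
```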
To whom does the AI Act apply?

This legal framework is applicable to both public and private actors inside and outside the EU as long as the AI system affects people located in the EU [4].

The Act concerns both the providers (developers) of high-risk AI systems and the deployers of such systems. Importers of AI systems must also ensure that the foreign provider has carried out the appropriate conformity assessment procedure, and that the system bears the European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.

In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models. Providers of free and open-source models are exempted from most of these obligations; however, the exemption does not cover obligations for providers of general-purpose AI models with systemic risks.

Research, development and prototyping activities that take place before an AI system is released on the market are exempt, and the Regulation furthermore does not apply to AI systems used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities [4].

What happens if you don’t comply?

AI systems that do not respect the requirements of the Regulation will attract penalties, including administrative fines; Member States must lay down such penalties for infringements and communicate them to the Commission [4].

The Regulation sets out the following thresholds, which need to be taken into account (a short illustrative calculation follows the list):

  • Up to €35m or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data;
  • Up to €15m or 3% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation, including infringement of the rules on general-purpose AI models;
  • Up to €7.5m or 1.5% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
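
As a rough, purely illustrative calculation: the caps and percentages below are taken from the list above, the €2 billion turnover is a made-up example, and the sketch assumes the "whichever is higher" rule stated for the first tier applies to each tier.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Upper bound of an administrative fine: the higher of the fixed cap
    and the given percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover.
turnover = 2_000_000_000

tiers = {
    "prohibited practices / data requirements": (35_000_000, 0.07),
    "other requirements or obligations": (15_000_000, 0.03),
    "incorrect or misleading information": (7_500_000, 0.015),
}

for name, (cap, pct) in tiers.items():
    print(f"{name}: up to EUR {max_fine(cap, pct, turnover):,.0f}")
```

For this hypothetical company, the turnover-based percentage exceeds the fixed cap in every tier, so the percentage sets the upper bound of the fine.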

To conclude, for the ICT industry the AI Act aims to foster innovation while ensuring that AI technologies are developed and deployed responsibly, ethically and in the best interests of society. It provides assurance to users and guidance to stakeholders, contributing overall to the sustainable growth and adoption of AI technologies.

Further information

[1] https://www.europarl.europa.eu/topics/en/article/20200827STO85804/what-is-artificial-intelligence-and-how-is-it-used

[2] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[3] https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383

[4] https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683