Ten answers about the European artificial intelligence law

Artificial intelligence (AI) systems and programs are capable of performing tasks typical of human intelligence, such as reasoning, learning (machine learning), perceiving, understanding natural language and solving problems. AI is already present in all areas of our lives, from everyday shopping or movie-streaming applications to the development of new pharmaceutical formulas or the organization of production processes. It makes it possible to automate tasks, make decisions, improve efficiency and provide solutions in areas as diverse as medicine, industry, robotics and financial services. The European AI law is beginning to apply gradually, with the aim of guaranteeing that the technology develops according to ethical and legal criteria. Here are 10 answers to the questions raised by an initiative that is the first of its kind in the world:

Why does Europe regulate it?

Artificial intelligence provides social benefits, promotes economic growth and improves innovation and competitiveness. Most applications pose little or no risk. But others can create situations that run counter to rights and freedoms, such as the use of artificial intelligence to generate non-consensual pornographic images, the use of biometric data to categorize people by features of their appearance, or its application in hiring, education, healthcare or predictive policing.

What are the risk categories?

Minimum risk: Most systems fall into this category. For these applications, the provider may voluntarily adopt ethical requirements and adhere to codes of conduct. The law pays special attention to general-purpose AI models trained with a cumulative compute of more than 10²⁵ floating-point operations (FLOPs), a measure of the total computation used in training that the Commission takes as the threshold for possible systemic risks. The EU considers that GPT-4, from OpenAI, and Gemini, from Google DeepMind, could be at this threshold, which can be revised through a delegated act. (A rough estimate of what that training-compute threshold means in practice is sketched after the list of categories below.)

High risk: These are models that can potentially affect people’s safety or their rights. The list is subject to ongoing review, but the law already identifies areas of application in this category, such as critical communication and supply infrastructure, education, personnel management and access to essential services.

Unacceptable risk: Systems in this category are prohibited because they violate fundamental rights. The list includes systems for social classification or scoring, those that exploit people’s vulnerabilities, and those used to infer race, opinion, belief, sexual orientation or emotional reaction. Exceptions are provided for police use in the prosecution of 16 specific crimes related to missing persons, kidnapping, trafficking and sexual exploitation, the prevention of threats to life or safety, and the response to a current or foreseeable terrorist attack. In urgent cases, exceptional use may be authorized, but if authorization is denied, all associated data and information must be deleted. In non-urgent circumstances, use must be preceded by an assessment of its implications for fundamental rights and must be notified to the relevant market surveillance authority and the data protection authority.

Specific transparency risk: This refers to the risk of manipulation posed by fabricated content that appears real (deepfakes) or by conversational applications. The law requires it to be made unequivocally clear that the user is viewing artificial content or interacting with a machine.

Systemic risk: The law takes into account that the widespread use of large-capacity systems can cause massive or far-reaching damage, as in the case of cyberattacks or the spread of financial disinformation or bias.
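To give a sense of scale for the 10²⁵ FLOPs threshold mentioned under minimum risk, the following is a minimal back-of-the-envelope sketch in Python; all hardware figures are hypothetical illustrations rather than data about any real training run:

```python
# Rough estimate of cumulative training compute versus the 10^25 FLOPs threshold.
# The threshold value comes from the law; the hardware figures below are hypothetical.

THRESHOLD_FLOPS = 1e25  # training-compute threshold presumed to carry systemic risk

def training_compute(num_gpus: int, flops_per_gpu: float, utilization: float, days: float) -> float:
    """Total floating-point operations: GPUs x sustained throughput x seconds of training."""
    seconds = days * 24 * 3600
    return num_gpus * flops_per_gpu * utilization * seconds

# Hypothetical run: 10,000 accelerators at 300 TFLOPS peak, 40% utilization, 90 days.
total = training_compute(num_gpus=10_000, flops_per_gpu=3e14, utilization=0.4, days=90)

print(f"Estimated training compute: {total:.2e} FLOPs")
print("Above the systemic-risk threshold" if total > THRESHOLD_FLOPS else "Below the systemic-risk threshold")
```

In this made-up scenario the run lands just below 10²⁵ FLOPs; scaling up the cluster size or training time would push it over the line.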

Who must comply with the law?

All agents, both public and private, who use artificial intelligence systems within the EU must comply with the law, whether they are European or not. It affects providers of the programs, those who deploy them and those who purchase them. Everyone must ensure that their system is secure and complies with the law. High-risk systems, both before and after being placed on the market or put into service, must undergo a conformity assessment to ensure data quality, traceability, transparency, human oversight, accuracy, cybersecurity and robustness. This assessment must be repeated if the system or its purpose is substantially modified. High-risk AI systems used by authorities or entities acting on their behalf must also be registered in a public EU database, unless such systems are used for law enforcement and migration purposes. Providers of models with systemic risks (training compute of more than 10²⁵ FLOPs) are obliged to assess and mitigate those risks, report serious incidents, carry out advanced testing and evaluation, ensure cybersecurity and provide information on the energy consumption of their models.

What should a conformity assessment include?

The processes involved, the period and frequency of use, the categories of individuals and groups affected, the specific risks, the human oversight measures and the action plan in case the risks materialize.

How does a supplier know the effects of their product?

Large corporations already have their own systems in place to adapt to the law. For smaller entities and those that use open-source systems, the law creates regulatory sandboxes: controlled testing environments in real-world conditions where innovative technologies can be trialled for six months, extendable by a further six. They may be subject to inspections.

Who is exempt?

Providers of free and open-source models are exempt from the commercialization obligations, but not from the obligation to avoid risks. Nor does the law affect research, development and prototyping activities, or developments intended for defense or national security uses. General-purpose AI systems will still have to meet transparency requirements, such as drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the content used to train the system.

Who monitors compliance?

A European Artificial Intelligence Office, a scientific advisory panel and national surveillance authorities are established to monitor systems and authorize applications. AI agencies and offices must have access to the information necessary to fulfill their obligations.

When will the AI Law be fully applicable?

Following its adoption, the AI Law enters into force 20 days after its publication and becomes fully applicable within 24 months, in stages. Within the first six months, Member States must phase out banned systems. Within a year, the governance obligations for general-purpose AI will apply. Within two years, all high-risk systems must be in compliance.

What are the penalties for violations?

Where artificial intelligence systems that do not meet the requirements of the Regulation are marketed or used, Member States must establish effective, proportionate and dissuasive sanctions for the infringements and notify them to the Commission. Depending on the infringement, fines range from up to €35 million or 7% of worldwide annual turnover in the previous financial year, to up to €15 million or 3% of turnover, and up to €7.5 million or 1.5% of turnover. In each infringement category, the ceiling is the lower of the two amounts for SMEs and the higher for other companies.
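As a purely illustrative sketch of how these ceilings combine, the snippet below applies the tiered amounts quoted above to hypothetical companies; the tier names and turnover figures are assumptions for illustration, not the Regulation's own wording:

```python
# Illustrative ceiling of the fine per infringement tier, using the amounts quoted above.
# Tier labels are illustrative names; each entry is (fixed cap in euros, share of turnover).
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # up to €35M or 7% of turnover
    "other_obligations": (15_000_000, 0.03),      # up to €15M or 3% of turnover
    "incorrect_information": (7_500_000, 0.015),  # up to €7.5M or 1.5% of turnover
}

def fine_ceiling(tier: str, annual_turnover: float, is_sme: bool) -> float:
    """SMEs face the lower of the two amounts; other companies face the higher."""
    fixed_cap, pct = TIERS[tier]
    turnover_cap = annual_turnover * pct
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Hypothetical companies: a large firm with €2 billion turnover and an SME with €10 million.
print(f"Large firm, top tier: up to €{fine_ceiling('prohibited_practices', 2e9, False):,.0f}")
print(f"SME, lowest tier: up to €{fine_ceiling('incorrect_information', 1e7, True):,.0f}")
```

Under these assumed figures, the large firm's ceiling is set by the 7% turnover rule (€140 million), while the SME's is set by the lower turnover-based amount (€150,000).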

What can the victim of an infringement do?

The AI Law provides for the right to lodge a complaint with a national authority and makes it easier for individuals to claim compensation for damage caused by high-risk artificial intelligence systems.
