Summary of the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
EUR-Lex: 52021PC0206 (europa.eu)
EUROPEAN COMMISSION
Brussels, 21.4.2021
In April 2021, the European Commission presented a legislative proposal for a coordinated EU approach to the human and ethical aspects of AI.
The European Union has set itself the goal of becoming a global leader in the development of secure, trustworthy and ethical artificial intelligence. The European Parliament has expressly called for ethical principles to be safeguarded.
Harmful AI practices and high-risk AI systems
Harmful AI practices that violate the values of the Union are prohibited.
Specific restrictions and security measures are proposed for certain applications of remote biometric identification systems in the field of law enforcement.
High-risk AI systems are systems that pose significant risks to the health and safety or fundamental rights of individuals.
High-risk AI systems must meet the horizontal requirements for trustworthy AI and undergo conformity assessment procedures before they can be placed on the market in the Union. Providers and users of high-risk AI systems should be subject to predictable, proportionate and clear obligations. High-risk AI systems require high data quality, documentation and traceability, transparency, human oversight, accuracy and robustness.
Only minimal transparency obligations are proposed for the use of chatbots or "deepfakes".
Monitoring arrangements
The European Commission should monitor the impact of the proposed rules and establish a Union-wide public database in which high-risk AI applications are registered, enabling the oversight and monitoring of high-risk AI systems.
AI providers should be obliged to submit meaningful information about their systems and the conformity assessment carried out when registering in this database and to inform the competent national authorities as soon as they become aware of serious incidents or malfunctions.
The national authorities are obliged to investigate serious incidents or malfunctions, collect all necessary information and forward it to the Commission.
Scope of the Regulation
This Regulation shall cover providers placing AI systems on the market or putting them into service in the EU, users of AI systems located in the EU, and providers and users of AI systems established or located in a non-EU country where the output produced by the system is used in the EU.
This Regulation shall not apply to AI systems developed or used exclusively for military purposes, nor to public authorities of third countries or international organizations where those authorities or organizations use AI systems in the framework of international agreements for police and judicial cooperation with the Union or with one or more Member States.
Bans on AI systems
It is prohibited to place on the market, put into service or use AI systems that deploy subliminal techniques in order to materially distort a person's behavior. It is likewise prohibited to place on the market, put into service or use AI systems that exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability. Also prohibited is the use of AI systems by or on behalf of public authorities to assess or classify the trustworthiness of natural persons based on their social behavior or known or predicted personal characteristics (social scoring).
The use of "real-time" remote biometric identification systems in publicly accessible spaces for law enforcement purposes is generally prohibited. There are, however, exceptions: the targeted search for specific potential victims of crime, including missing children; the prevention of a specific threat to the life or physical safety of persons or of a terrorist attack; and the detection, localization, identification or prosecution of a perpetrator or suspect of a criminal offense punishable by a custodial sentence with a maximum term of at least three years. Each individual use of a "real-time" remote biometric identification system in publicly accessible spaces for law enforcement purposes requires prior authorization by a judicial authority or an independent administrative authority.
Defining high-risk AI systems
High-risk AI systems include AI systems intended to be used for the "real-time" and "post" remote biometric identification of natural persons, as safety components in the management and operation of road traffic, and in the supply of water, gas, heating and electricity.
Also considered high-risk are AI systems intended to be used to decide on the access of natural persons to educational and vocational training institutions, to assess students in such institutions, or to assess participants in tests commonly required for admission to such institutions.
AI systems are further considered high-risk if they are intended to be used for the recruitment or selection of natural persons, for evaluating candidates in interviews or tests, for decisions on promotion or termination of employment, or for monitoring and evaluating the performance and behavior of individuals at work.
AI systems are likewise considered high-risk if they are intended to be used by public authorities to assess the eligibility of natural persons for public assistance benefits and services, to evaluate the creditworthiness of natural persons, or to dispatch or prioritize the deployment of emergency services, including firefighting and emergency medical services.
AI systems are also considered to be high-risk AI systems when they are intended to be used by law enforcement authorities: for individual risk assessment of natural persons; as a lie detector or to determine the emotional state of a natural person; to detect deep fakes; to assess the reliability of evidence; to predict the occurrence of a crime based on the profile, personality traits and characteristics, or past criminal behavior of natural persons; for profiling and criminal analysis of natural persons.
High-risk AI systems also include AI systems that are intended to be used by the competent authorities for the following purposes: as a lie detector or to determine the emotional state of a natural person; to assess a risk, including a security risk, an irregular immigration risk or a health risk, posed by a natural person seeking to enter or having entered the territory of a Member State; to verify the authenticity of travel documents and proof of identity of natural persons; to examine applications for asylum, visas and residence permits, as well as related complaints.
In addition, high-risk AI systems are those whose intended purpose is to assist judicial authorities in investigating and interpreting facts and legal provisions and in applying the law to specific situations.
Requirements for high-risk AI systems
Risk management system
For high-risk AI systems, a risk management system shall be established, implemented, documented and maintained.
The risk management system is a continuous, iterative process running throughout the entire life cycle of an AI system and requires regular, systematic updating. It comprises identifying and analyzing the known and foreseeable risks associated with each high-risk AI system, estimating and evaluating those risks, evaluating other risks that may arise on the basis of data gathered from post-market monitoring, and adopting suitable risk management measures. These measures must reflect the generally acknowledged state of the art, and the overall residual risk of the high-risk AI system must be judged acceptable and communicated to users.
Due consideration must be given to the technical knowledge, experience and level of training of the users and the environment in which the system is to be used.
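To make the iterative process more concrete, the following Python sketch shows how a provider might maintain a simple risk register; the classes, scoring scheme and acceptability threshold are illustrative assumptions, not prescribed by the proposal.

```python
from dataclasses import dataclass

# Schematic risk register for the iterative risk management loop described
# above; the scoring scheme and threshold are illustrative assumptions only.
@dataclass
class Risk:
    description: str
    severity: int      # 1 (negligible) .. 5 (critical), after mitigation
    likelihood: int    # 1 (rare) .. 5 (frequent), after mitigation
    mitigation: str

    @property
    def residual_score(self) -> int:
        return self.severity * self.likelihood

ACCEPTABLE_RESIDUAL = 6  # assumed threshold for "acceptable" residual risk

def review_cycle(register: list) -> list:
    """One regular, systematic update: return risks still needing measures."""
    return [r for r in register if r.residual_score > ACCEPTABLE_RESIDUAL]

register = [
    Risk("Misidentification of a person", 4, 1, "Human verification of matches"),
    Risk("Model drift over time", 3, 3, "Scheduled re-validation"),
]
for risk in review_cycle(register):
    print("Further measures required:", risk.description)  # flags the drift risk
```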
Test methods
High-risk AI systems must be tested in order to identify the most appropriate risk management measures. Testing procedures must be suitable for the intended purpose of the system and must not go beyond what is necessary to achieve that purpose.
In any case, testing shall be carried out before the system is placed on the market or put into service, against predefined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. Particular consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.
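As a rough illustration of testing against predefined metrics and probabilistic thresholds, the sketch below checks measured performance against assumed thresholds; the metric names and values are invented for the example.

```python
# Pre-market test against predefined metrics and probabilistic thresholds.
# The metric names and threshold values below are assumptions for illustration.
PREDEFINED_THRESHOLDS = {
    "accuracy": 0.95,             # minimum acceptable overall accuracy
    "false_positive_rate": 0.01,  # maximum tolerated false positive rate
}

def passes_pre_market_tests(measured: dict) -> bool:
    """Return True only if every predefined threshold is satisfied."""
    return (measured["accuracy"] >= PREDEFINED_THRESHOLDS["accuracy"]
            and measured["false_positive_rate"]
                <= PREDEFINED_THRESHOLDS["false_positive_rate"])

# Example run with measured results from a validation data set:
print(passes_pre_market_tests({"accuracy": 0.97, "false_positive_rate": 0.004}))  # True
```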
Technical documentation
The technical documentation of a high-risk AI system shall be drawn up before the system is placed on the market or put into service and shall be kept up to date. Where the system is also covered by other Union harmonization legislation, a single set of technical documentation covering all applicable requirements shall be drawn up.
It shall contain at least a general description of the AI system, a detailed description of the components of the AI system and its development, detailed information on the monitoring, operation and control of the AI system and a copy of the EU declaration of conformity.
Logging
During the operation of high-risk AI systems, processes and events must be recorded automatically. These logging capabilities must conform to recognized standards and ensure a level of traceability appropriate to the system's entire life cycle. For remote biometric identification systems, the identity of the natural persons involved in verifying the results must also be recorded.
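What such automatic, traceable event logging might look like in practice is sketched below; the record fields and identifiers are illustrative assumptions, not prescribed by the Regulation.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Structured audit log for a high-risk AI system; field names are
# illustrative assumptions, not prescribed by the Regulation.
logging.basicConfig(filename="hr_ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_event(event_type: str, system_id: str, details: dict,
              reviewer_id: Optional[str] = None) -> None:
    """Automatically record one process or event as a traceable log entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ordering/traceability
        "system_id": system_id,      # which registered system produced the event
        "event": event_type,         # e.g. "inference", "input_received"
        "details": details,          # input reference, output, score, ...
        "reviewer_id": reviewer_id,  # natural person who verified the result, if any
    }
    logging.info(json.dumps(record))

# Example: an operator verifies a biometric match produced by the system.
log_event("result_verified", "eu-db-000123",
          {"input_ref": "frame-0001", "match_score": 0.91},
          reviewer_id="officer-42")
```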
Transparency
The operation of high-risk AI systems must be sufficiently transparent. High-risk AI systems must be accompanied by instructions for use that provide concise, complete, correct and clear information in a form that is relevant, accessible and comprehensible to users.
Human oversight
In order to prevent or minimize risks to health, safety or fundamental rights, high-risk AI systems must be designed so that they can be effectively overseen by natural persons throughout their period of use.
The person entrusted with human oversight must be able to fully understand the capabilities and limitations of the high-risk AI system and to duly monitor its operation. This person must be able to correctly interpret the output of the high-risk AI system, to intervene in its operation, and to interrupt the system by means of a "stop button" or a similar procedure.
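A minimal sketch of what such an interrupt mechanism could look like in software follows; the class and method names are hypothetical, and a real system would honor the interrupt at a much finer granularity.

```python
import threading

# Hypothetical "stop button" wrapper: a human supervisor can interrupt the
# system at any time between processing steps. Names are illustrative only.
class SupervisedAISystem:
    def __init__(self, model):
        self._model = model
        self._stop = threading.Event()  # set from the supervisor's interface

    def stop(self) -> None:
        """The 'stop button': request immediate interruption of operation."""
        self._stop.set()

    def process(self, inputs):
        results = []
        for item in inputs:
            if self._stop.is_set():  # honor the interrupt before each step
                raise RuntimeError("Operation interrupted by human supervisor")
            results.append(self._model(item))
        return results

# Example with a trivial stand-in model:
system = SupervisedAISystem(model=lambda x: x * 2)
print(system.process([1, 2, 3]))  # [2, 4, 6]; system.stop() would halt it
```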
Robustness
High-risk AI systems must be robust to errors, failures, or irregularities that may occur, particularly due to their interaction with humans or other systems, and resistant to attempts by unauthorized third parties to alter their use or performance by exploiting system vulnerabilities.
Providers
Providers must take reasonable precautions to protect the fundamental rights and freedoms of natural persons, including technical limitations on further use and state-of-the-art security and privacy measures such as pseudonymization or encryption.
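As an illustration of one such measure, the sketch below pseudonymizes direct identifiers with a keyed hash (HMAC-SHA256); the key handling and function names are assumptions, and a real deployment would need a full data-protection design.

```python
import hashlib
import hmac
import os

# Pseudonymization via keyed hashing: the raw identifier never appears in
# downstream data, and only the holder of the secret key can link pseudonyms
# across data sets. A sketch, not a complete data-protection solution.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. a name) with a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("Jane Doe"))  # same input -> same pseudonym; not reversible
```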
Providers of high-risk AI systems are required to ensure that a quality management system is in place and to draw up the technical documentation of the high-risk AI system. They must keep the logs automatically generated by their high-risk AI systems and ensure that the system undergoes the relevant conformity assessment procedure before being placed on the market or put into service. Providers are also obliged to comply with the registration requirements, to affix the CE marking to their high-risk AI systems, and to demonstrate the system's conformity upon request of a competent national authority.
Importers
Before placing a high-risk AI system on the market, importers must ensure that the provider of the AI system has carried out the appropriate conformity assessment procedure and has drawn up the technical documentation. In addition, importers must ensure that the system bears the required conformity marking and is accompanied by the required documentation and instructions for use.
The name, registered trade name or registered trade mark and contact address of the importer must be indicated on the high-risk AI system itself or, if this is not possible, on the packaging or accompanying documentation.
Distributors
Before placing a high-risk AI system on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instructions for use, and that the provider or, where applicable, the importer of the system has fulfilled the obligations laid down in this Regulation.
Transparency obligations for certain AI systems
Natural persons must be informed that they are interacting with an AI system, unless this is obvious from the circumstances. They must likewise be informed when they are exposed to an emotion recognition system or a biometric categorization system.
Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ("deepfake") must disclose that the content has been artificially generated or manipulated.
EU database
An EU database containing information on registered high-risk AI systems will be established and maintained by the European Commission in cooperation with the Member States. The data stored in the EU database will be available to the public.
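The shape of one public registration record might look roughly like the following sketch; the field selection loosely follows the information the summary says providers must submit and is not an official schema.

```python
import json
from dataclasses import asdict, dataclass, field

# Hypothetical shape of a public registration record in the EU database;
# fields loosely follow the summary above, not an official schema.
@dataclass
class HighRiskAIRegistration:
    provider_name: str
    system_trade_name: str
    intended_purpose: str
    conformity_assessment: str                          # procedure and outcome
    member_states: list = field(default_factory=list)  # where made available
    status: str = "on the market"                       # e.g. "withdrawn"

entry = HighRiskAIRegistration(
    provider_name="Example Provider GmbH",
    system_trade_name="ExampleVision",
    intended_purpose="Management and operation of road traffic",
    conformity_assessment="Internal control, completed 2021-04-21",
    member_states=["DE", "FR"],
)
print(json.dumps(asdict(entry), indent=2))  # records are publicly accessible
```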
Market surveillance
Providers must establish and document a post-market monitoring system that is proportionate to the nature of the AI technology and the risks of the high-risk AI system. The post-market monitoring system shall be based on a post-market monitoring plan, which shall be part of the technical documentation.
A national supervisory authority is responsible for market surveillance. Providers of high-risk AI systems must notify the market surveillance authorities of the Member States of any serious incident or malfunction of those systems that constitutes a breach of obligations under Union law intended to protect fundamental rights. The market surveillance authorities shall inform the national public authorities or bodies that supervise compliance with those obligations. The national supervisory authorities shall report regularly to the Commission on the results of their market surveillance activities.
Penalties
Member States shall lay down rules on penalties and shall take all measures necessary to ensure that they are applied correctly and effectively. They shall take particular account of the interests of small and start-up enterprises and their economic survival. The penalties provided for shall be effective, proportionate and dissuasive.
Member States shall notify those provisions and measures, and any subsequent amendment affecting them, to the Commission without delay.
Fines
Violations of the prohibitions on certain AI practices and non-compliance of an AI system with the data and data governance requirements are subject to administrative fines of up to EUR 30 million or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Non-compliance of an AI system with any other requirement or obligation under this Regulation is subject to fines of up to EUR 20 million or, in the case of a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request is subject to fines of up to EUR 10 million or, in the case of a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
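All three ceilings follow the same "higher of a fixed amount or a turnover share" pattern, which the short sketch below works through; the tier labels are informal shorthand for the three cases above.

```python
# Fine ceilings described above: for an undertaking, the applicable maximum is
# the higher of a fixed amount and a share of total worldwide annual turnover.
# Tier labels are informal shorthand for the three cases in the proposal.
FINE_TIERS = {
    "prohibited_practices_or_data_requirements": (30_000_000, 0.06),
    "other_requirements_or_obligations": (20_000_000, 0.04),
    "incorrect_or_misleading_information": (10_000_000, 0.02),
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# An undertaking with EUR 1 billion turnover that violates a prohibition:
# 6 % of 1,000,000,000 = EUR 60 million, which exceeds the EUR 30 million floor.
print(max_fine("prohibited_practices_or_data_requirements", 1_000_000_000))
```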
For the German translation of this article, see: Zusammenfassung des Vorschlags für ein KI-Gesetz