In 2019, companies in Germany already generated sales of almost 60 billion euros with artificial intelligence (AI) products or services. Globally, revenue from business applications of AI is expected to increase more than sixfold between 2020 and 2025.
Lawyers, police officers, immigration officials, and law enforcement agencies are already using AI to do work for them, collecting personal data without the knowledge of the individuals concerned. Can this be reconciled with democratic rights?
More and more people are asking whether an AI can simply disregard democratic laws, or who should be held responsible if a person feels their personal rights have been violated by an AI application. We need clear, comprehensive and detailed laws, and severe penalties if they are ignored.
A parliamentary inquiry
Regulierung von künstlicher Intelligenz in Deutschland (bundestag.de)
In January 2023, some parliamentarians submitted a question to the German Bundestag’s scientific service. They wanted to know if there were any regulations in Germany with specific rules for the use of artificial intelligence, or if there were any plans to adopt such regulations.
The scientific service replied that there are no specific laws or regulations in Germany, i.e. none tailored specifically to AI systems. Instead, the use of AI-based technologies and information systems is governed by general regulations that contain no explicit AI-related requirements. Nor is Germany currently planning any legislative procedures to enact laws or regulations specifically tailored to AI systems.
Europe needs to take action
The European Union now has a duty to put in place an effective legal framework to ensure that AI systems placed on the market and used in the Union are safe and respect the existing fundamental rights and values of the Union. The European Union aims to be a world leader in the development of safe, trustworthy and ethical artificial intelligence.
In this context, the proposal is intended to be a core element of the EU strategy for the Digital Single Market. It will ensure the smooth functioning of the Single Market through the adoption of harmonized rules. These should apply in particular to the development, placing on the market and use of products and services that use AI technologies or autonomous AI systems. In addition, the European Parliament explicitly calls for the protection of ethical principles.
European Commission legislative proposal
In April 2021, the European Commission presented a legislative proposal for a coordinated European approach to the human and ethical aspects of AI. The aim of the proposal is to define common requirements for the design and development of certain AI systems that must be met before these systems can be placed on the market. These requirements are to be further specified by harmonized technical standards. The proposal also addresses the situation after AI systems have been placed on the market by providing for a coordinated approach to ex-post control. It is proposed that all Member States agree on a forward-looking definition of AI.
Following a risk-based approach, the European Commission’s proposal divides AI systems into three classes: unacceptable risk, high risk and low risk.
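The three-tier classification described above can be sketched as a simple data model. This is a purely illustrative sketch of the article's summary, not part of the proposal itself; the class, example mappings, and obligation summaries are assumptions chosen for illustration.

```python
from enum import Enum

class RiskClass(Enum):
    """Risk tiers of the Commission's proposal, as summarized in the article."""
    UNACCEPTABLE = "unacceptable"  # prohibited AI practices
    HIGH = "high"                  # conformity assessment before market entry
    LOW = "low"                    # minimal transparency obligations

# Hypothetical example systems mapped to tiers, purely for illustration.
EXAMPLES = {
    "social scoring by authorities": RiskClass.UNACCEPTABLE,
    "chatbot": RiskClass.LOW,
    "spam filter": RiskClass.LOW,
}

def obligations(risk: RiskClass) -> str:
    """Return a one-line summary of the obligations attached to a tier."""
    return {
        RiskClass.UNACCEPTABLE: "placing on the market is banned",
        RiskClass.HIGH: "conformity assessment before market entry",
        RiskClass.LOW: "minimal transparency requirements",
    }[risk]

print(obligations(EXAMPLES["chatbot"]))  # minimal transparency requirements
```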
Harmful AI practices banned
AI systems that subliminally influence people in ways that can cause physical or psychological harm to themselves or others are prohibited. In addition, AI technologies must not exploit the weakness or vulnerability of a particular group of people due to their age or physical or mental disability. Authorities must also not use AI technologies to assess or classify the trustworthiness of natural persons based on their social behavior or known or predicted personal characteristics or personality traits.
The use of real-time remote biometric identification systems in public places for law enforcement purposes is also prohibited. However, exceptions apply when AI systems are used to search for specific potential crime victims or missing children, or to avert a threat to natural persons or a terrorist attack. Law enforcement authorities may also use AI technology to identify and prosecute a perpetrator or suspect of a crime punishable by a maximum term of imprisonment of at least three years.
Prior authorization is required for each individual use of a real-time remote biometric identification system in publicly accessible locations for law enforcement purposes.
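The ban-with-exceptions logic described in the two paragraphs above can be condensed into a small decision sketch. The purpose strings and function name are this article's summary rendered as illustrative code, not legal text or an official categorization.

```python
# Exceptions to the ban on real-time remote biometric identification in
# public places, as summarized in the article (illustrative labels only).
ALLOWED_PURPOSES = {
    "search for specific potential crime victims",
    "search for missing children",
    "avert threat to natural persons or terrorist attack",
    "identify suspect of crime punishable by at least three years",
}

def realtime_biometric_id_permitted(purpose: str, prior_authorization: bool) -> bool:
    """Use is banned unless the purpose falls under a listed exception
    AND the individual use has prior authorization."""
    return purpose in ALLOWED_PURPOSES and prior_authorization

print(realtime_biometric_id_permitted("search for missing children", True))  # True
print(realtime_biometric_id_permitted("general surveillance", True))         # False
```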
High-risk AI systems
High-risk AI systems are those that pose significant risks to the health, safety or fundamental rights of individuals. Such AI systems should comply with certain requirements and undergo conformity assessment procedures before being placed on the market in the Union. Providers and users of such systems should be subject to predictable, proportionate and clear obligations.
High-risk AI systems are subject to stringent requirements for data quality, documentation and traceability, transparency, human oversight, accuracy and robustness. For these systems, a risk management system must be established, applied, documented and maintained. It should be understood as a continuous process throughout the life cycle of an AI system and be tested and updated regularly.
Technical documentation must be prepared before a high-risk AI system is placed on the market or put into operation. It must include a general description of the system, a detailed description of the components of the AI system and its development process, and detailed information on the monitoring, operation and control of the AI system.
Low-risk AI systems
For some AI systems, notably the use of chatbots or deepfakes, AI-powered video games or spam filters, only minimal transparency requirements are proposed. The vast majority of AI systems fall into this category. The draft regulation does not intervene here, as these AI systems pose little or no risk to the rights or security of citizens.
Compliance monitoring
EU Member States will be responsible for ensuring that their national supervisory authorities implement the proposed rules. These authorities will be in charge of market surveillance and reporting to the European Commission on a regular basis. The market surveillance authorities should have unrestricted access to all data, application programming interfaces (APIs) or other technical means and tools.
A European Committee on Artificial Intelligence will be established at Union level to develop a mechanism for cooperation. It will also support innovation, in particular through regulatory sandboxes (real-world AI laboratories) and support for small and medium-sized enterprises (SMEs) and start-ups.
Penalties
The Member States will have to lay down rules on penalties. The legislative proposal, however, already provides for substantial fines: up to 30 million euros for engaging in prohibited AI practices or failing to meet the specified requirements for AI systems; up to 20 million euros for non-compliance with other requirements or obligations; and up to 10 million euros for providing false, incomplete or misleading information to the competent national authorities upon request.
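The fine tiers named above can be captured in a minimal lookup table. This is an illustrative sketch of the amounts the article cites; note that the proposal also caps fines as a percentage of worldwide annual turnover, which is omitted here for simplicity, and the category names are assumptions.

```python
# Maximum fines (in euros) per violation category, as cited in the article.
FINE_CAPS_EUR = {
    "prohibited_practice": 30_000_000,    # banned AI practices / core requirements
    "other_obligation": 20_000_000,       # other requirements or obligations
    "misleading_information": 10_000_000, # false, incomplete or misleading info
}

def max_fine(violation: str) -> int:
    """Look up the maximum fine (in euros) for a violation category."""
    return FINE_CAPS_EUR[violation]

print(max_fine("prohibited_practice"))  # 30000000
```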
Discussions on the final form of the law are now underway with the EU member states in the Council of the European Union; it is expected to be in place by 2024.
To the German translation of this article: EU-Kommission: So will Brüssel KI bändigen