In June, the European Parliament voted almost unanimously in favor of the proposed AI law, with some amendments. In particular, Parliament wants to ensure that AI systems used in the EU are safe, transparent, accountable, non-discriminatory and environmentally friendly.
Last year, OpenAI surprised us by releasing its AI-powered ChatGPT program to the world, and we were hesitant to use it. But this year, we have already grown used to asking our computer questions or giving it instructions and receiving a detailed response. And while we are still debating the reliability of artificial intelligence (AI) answers, AI technology has already brought new tools to the market. There is now not just one vendor of AI-powered tools, but many.
Reliability & Security
At the beginning of 2023, an improved model (GPT-4) replaced the original GPT-3.5 version behind ChatGPT. This was followed by new voice and image features for the application, as well as options to create your own customized version of ChatGPT without having to program it yourself. But questions about the reliability and security of AI technology remained.
In Germany, there are laws on data protection (the General Data Protection Regulation, GDPR), the Civil Code (BGB), the Unfair Competition Act (UWG), the Copyright Act (UrhG), the General Equal Treatment Act (AGG) and the Works Constitution Act (BetrVG), but no regulations specifically tailored to the use of artificial intelligence. Legislation of the European Union is intended to close this gap: a technology with such far-reaching effects, the reasoning goes, must be regulated at Union level in order to create a uniform and clear legal situation.
In 2021, the European Commission took action and drafted a Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence and amending certain EU acts. But where do we go from here?
The EU’s executive and legislative branches
One of the tasks of the European Commission, the EU's "executive", is to develop new laws and programs that are in the general interest of the EU. Before making a proposal, the Commission seeks the views of national parliaments and governments, interest groups, experts and the public by inviting everyone to comment online.
The Commission's proposals are carefully examined by the European Parliament and the Council of the European Union, the EU's two legislative bodies. These two institutions have the final say on all EU legislation. They can amend the proposals or reject them outright. If the Parliament and the Council cannot agree on a legislative proposal, no new law is adopted.
EU procedure
When the European Commission submits a legislative proposal, the European Parliament examines it in a first reading. It can adopt the proposal as it is or amend it.
The Council of the European Union may decide at first reading to accept Parliament's position, thereby adopting the legislative act, or it may amend Parliament's position and refer the proposal back to Parliament for a second reading. However, the vast majority of proposals are adopted at this first stage. This was also the case for the proposal on the use of AI.
The European Parliament almost unanimously approves the proposal
In June, the European Parliament voted almost unanimously to adopt the proposal with some amendments. In particular, Parliament wants to ensure that AI systems used in the EU are secure, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by humans rather than left to automation alone, in order to prevent harmful outcomes. In this area, the EU should take the lead. Parliament also wants to establish a uniform, technology-neutral definition of AI that can be applied to the AI systems of the future.
The members decided on exemptions for research activities and AI components made available under open source licenses. In addition, AI systems developed solely for the purposes of scientific research and development would be excluded from the scope of the regulation.
Furthermore, the European Parliament has expanded the classification of high-risk areas to include harm to human health, safety, fundamental rights or the environment. AI systems to influence voters in political campaigns and recommendation systems on social media platforms have also been added to the list of high-risk areas.
EU adopts AI law
In early December 2023, the European Commission, the Council of the European Union, and the European Parliament agreed on an EU regulation for artificial intelligence systems. After long and often heated discussions, the rules for the use of AI technologies have now been finalized.
Adopting the risk-based approach of the proposal
Dangerous practices are prohibited, such as using AI for the indiscriminate harvesting of facial images from the web, for example to build databases for real-time remote biometric identification systems. In addition, AI technologies must not be used to exploit human vulnerabilities or weaknesses, or to manipulate free will. This so-called cognitive behavioral manipulation targets individuals or particularly vulnerable groups; an example is voice-controlled toys that encourage dangerous behavior in children.
AI-driven emotion recognition in the workplace and educational institutions is prohibited, as is social scoring to assess or classify the trustworthiness of natural persons (social phenomena or characteristics of persons are scored and then evaluated using algorithms to classify those persons, predict their behavior, or describe their market value).
Some exceptions may be allowed for the prosecution of serious crimes, e.g. systems for retrospective remote biometric identification, where identification takes place with a significant delay and only with judicial authorization.
Risky applications
Particularly stringent rules were issued for "high-impact", widely deployed AI models with systemic risk. Providers of such systems will be required to conduct adversarial testing, report serious incidents to the Commission, and report on their energy efficiency. The Commission will compile a list of the systems affected.
In addition, risky AI systems may only be placed on the EU market if they meet mandatory requirements. This applies, for example, to AI-powered assistants and search tools such as ChatGPT, Bard or Bing. Risky AI systems must be accompanied by documentation and some form of user guidance.
High-risk AI systems
AI systems that pose a high risk to the health and safety or fundamental rights of natural persons are referred to as 'high-risk AI systems'. All high-risk AI systems will be assessed before they are placed on the market and throughout their life cycle. As is already familiar from medical devices, a CE mark will indicate conformity with the law.
Generative AI
Generative AI systems such as ChatGPT should meet transparency requirements and disclose that content was generated by AI. This will also help distinguish deepfakes from real images. They should include safeguards to prevent the creation of illegal content, and detailed summaries of the copyrighted data used to train them should be made publicly available.
GPT, Gemini, LaMDA or LLaMA
So-called foundation models, like the GPT models behind ChatGPT, Gemini and LaMDA from Google, or LLaMA from Meta, which are trained on large datasets and can be adapted to a wide variety of tasks, need to be clearly regulated. Independent experts should review potential risks to health, safety, fundamental rights, the environment, democracy and the rule of law. In addition, the operators of large foundation models must publish a summary of the training data used, with an exception for trade secrets.
Limited risk
Limited-risk AI systems should meet minimum transparency requirements to enable users to make informed decisions. Users should be informed when interacting with AI. This also applies to AI systems that generate or manipulate image, audio or video content (e.g. deepfakes).
Supporting small and medium-sized enterprises
The AI Act aims to make it easier for small and medium-sized enterprises to develop artificial intelligence solutions. National authorities are therefore to set up so-called regulatory sandboxes: test environments in which innovative AI can be trained and tested before it is launched on the market.
Setting up AI risk management
The adopted text emphasizes that the European Commission, together with Member States, should establish and manage a public EU database containing information on high-risk AI systems. The data in the EU database should be publicly available.
In setting up a system for monitoring AI, it is crucial to determine who is responsible for assessing the risks. To ensure that the assessment is as objective as possible, the team should be interdisciplinary. The proposal foresees the establishment of a "European Artificial Intelligence Office" as an independent body of the Union. It is proposed to be located in Brussels.
This agreement affects us directly and will shape our daily lives and the way we interact with artificial intelligence. Why is that? Because EU member states must implement the laws passed by the EU institutions. EU law is therefore binding on all of us.
German translation of this article: Erstes KI-Gesetz: Endlich Klarheit