New computer application: AI celebrates a triumph

12 Jan. 2024

Artificial intelligence, or AI for short, has become very popular in recent years. There is debate about the potential of AI, but also about the loss of jobs, the disruption of entire industries and the threat to the security of personal data.

AI is nothing new

Artificial intelligence is not new; it has been around for many years. Just think back to the first chess computers, which could initially be beaten by humans but then got better and better. Early on, people expected to discover algorithms with human-level intelligence, but it turned out that solving each individual task was far more complex than hoped.

Then came the idea of using artificial intelligence in learning programs, e.g. for language learning. The promising concept of "deep learning" emerged: instead of manually programming a new algorithm for each problem, deep learning builds architectures that can turn into a variety of algorithms depending on the data fed to them. Deep learning excels at recognizing patterns, which is why it has achieved excellent results in recognizing objects in images, machine translation, and speech recognition.
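To make the contrast with hand-written rules tangible, here is a minimal, self-contained sketch (an illustration added here, not taken from the article, using NumPy): the same tiny network architecture is never given a rule for the XOR pattern; it simply learns the pattern from the example data it is fed, and different data would shape it into different behaviour.

```python
# Illustrative sketch only: a tiny neural network learns the XOR pattern
# from data instead of being programmed with an explicit rule.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR pattern, a classic case no single linear rule captures.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 8 units; weights start random and are shaped by the data.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error through both layers
    grad_out = (out - y) * out * (1 - out)
    grad_h = grad_out @ W2.T * h * (1 - h)

    # Gradient-descent update
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]] after training
```

The same architecture, fed different examples, would learn a different mapping; that is the shift the article describes from hand-coded algorithms to data-driven ones.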

Concepts from science and technology

In late 2015, a group of scientists and technology enthusiasts decided to create a non-profit organization to develop safe and useful artificial intelligence for the benefit of humanity – OpenAI.

They wanted to create powerful but safe AI systems and to eliminate any foreseeable risks before those systems were deployed. But despite extensive research and testing in the lab, real-world experience remains a critical component of developing and releasing ever safer AI systems.

Early calls for legislation

Legislation is now needed to establish clear rules and strict safety checks for AI systems. In 2017, the European Council called for a high level of data protection, digital rights and ethical standards to be maintained. In 2019, the Council emphasized that Europe needs a coordinated plan for "Made in Europe" artificial intelligence to ensure full respect for the rights of European citizens and to determine which AI applications should be classified as high-risk.

In February 2020, the European Commission presented its White Paper on Artificial Intelligence – A European approach to excellence and trust (COM(2020) 65 final). It called for a European approach to excellence and trust, the creation of a legal framework for trustworthy AI, and legislative measures to enable a well-functioning single market for artificial intelligence systems. The proposal was to be based on EU values and fundamental rights and to ensure that users have confidence in AI-based solutions, while at the same time providing incentives for companies to develop them.

Response from the European Parliament

As early as October 2020, the European Parliament responded with a series of resolutions on AI, including on ethics (European Parliament resolution of October 20, 2020 with recommendations to the Commission on the framework for the ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL)), civil liability (European Parliament resolution of October 20, 2020 with recommendations to the Commission on a civil liability regime for the use of artificial intelligence, 2020/2014(INL)) and copyright (European Parliament resolution of October 20, 2020 on intellectual property rights for the development of artificial intelligence technologies, 2020/2015(INL)).

This was followed in 2021 by further resolutions on AI in criminal law (European Parliament – Draft report on artificial intelligence in criminal law and its use by police and judicial authorities in criminal matters, 2020/2016(INI)) and in education, culture and the audiovisual sector (European Parliament – Draft report on artificial intelligence in education, culture and the audiovisual sector, 2020/2017(INI)). In parallel, the Commission adopted the Digital Education Action Plan 2021-2027: "Resetting education for the digital age", which provides for the development of ethical guidelines for the use of AI and data in education (Communication from the Commission, COM(2020) 624 final).

Germany also takes action

On February 2, 2021, the German Federal Office for Information Security (BSI) published the first concrete catalogue of criteria for trustworthy and secure artificial intelligence, the AI Cloud Service Compliance Criteria Catalogue (AIC4). The criteria can be used in a variety of ways: as a basis for audits they create transparency for users of AI services, and they provide a solid foundation for regulating the lifecycle of an AI technology, for AI quality assurance in the development process, and for more granular control over that technology.

Take the plunge?

Just over a year ago, on November 30, 2022, a non-profit research organization made an artificial intelligence called ChatGPT freely available to all Internet users. The developers of this AI wanted to learn more about the strengths and weaknesses of their product.

It was a bold move to make a program as comprehensive as ChatGPT freely available to everyone. As recently as June 2020, when the research company released its first commercial product, the OpenAI API (an application programming interface), OpenAI preferred to retain control over the model’s applications. The technology could only be used after registering on a proprietary platform, which allowed the company to deny access to the program at any time if malicious applications were detected.
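To make that access model concrete, here is a minimal sketch of what calling such a hosted API typically looks like today, using the openai Python package; the placeholder key, the model name and the prompt are illustrative assumptions rather than details from the article, and every request only works with a key issued after registering on the platform.

```python
# Illustrative sketch only: model name, prompt and key are placeholders.
# The point is that each request is tied to a personal API key issued on the
# provider's platform, which is what lets the provider revoke access.
from openai import OpenAI

client = OpenAI(api_key="sk-...key-obtained-after-registration...")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model; availability depends on the account
    messages=[
        {"role": "user", "content": "Draft a short, friendly meeting reminder."}
    ],
)

print(response.choices[0].message.content)
```

Because the key, not the program itself, is what grants access, the provider can disable an individual key without touching anyone else's use of the service.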

Unstoppable success

The world seemed to have been waiting for this program. Nine months after the release of the first commercial product, more than 300 applications were using GPT-3, and a community of tens of thousands of developers and users had grown worldwide on the platform. They used ChatGPT to write letters and emails, used it for research, and enjoyed its sometimes entertaining answers. And the program kept learning and getting better.

Companies go big

It didn’t take long for reports to come in that AI had found its place in business. Many companies are using AI to make their day-to-day operations more efficient, using ChatGPT for employee and company organization, and even letting the new technology take over parts of the selection process when hiring new employees. On the other hand, many job seekers are relying on AI to help them write a convincing application.

To sum up: AI comes with tremendous potential, but it also brings new risks and challenges that need to be managed effectively.

German translation of this article: Neue Computer-Anwendung: KI feiert Siegeszug
