Artificial intelligence – a legal entity?

8 March 2024

Anyone who wants to talk or write about artificial intelligence (AI) faces a daunting task: defining what it is and what it is not. The vagueness of the term has become so absurd that AI is used for everything from a robot that assembles pizzas to science fiction fantasies about superintelligent AI overlords threatening the world.

The definition of AI in the European Union’s White Paper on Artificial Intelligence is: “AI is a set of technologies that combine data, algorithms and computing power.” As the members of the Dutch Alliance for Artificial Intelligence (ALLAI) point out, this “definition applies to every piece of software ever written, not just AI.”

So, what is artificial intelligence?

Artificial intelligence is first and foremost an algorithm, a computer program developed by humans. It interprets collected data and draws conclusions from the knowledge gained. Depending on predefined parameters, the program decides on the best action(s) to take. AI systems can also analyze how the environment has been affected by previous actions and thus learn to adapt their behavior.
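This loop (interpret data, act according to predefined parameters, adapt from feedback) can be made concrete with a minimal, purely illustrative sketch. Every name and number in it is hypothetical and stands for no real system:

```python
# A minimal, illustrative sketch of the loop described above: interpret data,
# choose an action based on predefined parameters, observe the effect on the
# environment, and adapt. All names and numbers here are hypothetical.
import random

class SimpleAgent:
    def __init__(self, actions):
        self.actions = actions
        # "Predefined parameters": one adjustable preference score per action.
        self.scores = {a: 0.0 for a in actions}

    def decide(self):
        # "Decides on the best action": pick the highest-scoring action,
        # with occasional exploration so learning does not stall.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=self.scores.get)

    def learn(self, action, reward):
        # "Analyzes how the environment was affected": nudge the score of
        # the chosen action toward the observed reward.
        self.scores[action] += 0.1 * (reward - self.scores[action])

agent = SimpleAgent(["left", "right"])
for step in range(100):
    action = agent.decide()
    reward = 1.0 if action == "right" else 0.0  # a stand-in environment
    agent.learn(action, reward)

print(agent.scores)  # "right" ends up preferred: learned, but not autonomous
```

The sketch deliberately mirrors the paragraph above: the "predefined parameters" are the scores, and "learning" is nothing more than a human-written update rule.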

AI is therefore a human construct and is not autonomous, even if it is capable of learning.

Legal entity for AI

The idea of giving artificial intelligence its own legal personality would require radical changes to our legal systems in order to accommodate ideas that seem far-fetched. For example, there is a recurring debate about whether an AI can hold intellectual property rights over something it invents.

From what has been written on the subject, it seems more accurate to say that some scientists have used an AI system to invent something. Viewed that way, the idea of the AI itself owning intellectual property seems rather bizarre.

Taking this issue further, we can ask who might benefit from an AI (developed by a private company) owning the copyright. If an AI had its own legal personality, companies could shift responsibility onto the system itself, rather than being liable for the actions of the systems they develop.

In patent law, there is currently an international consensus that an AI cannot be named as the inventor of a patent. In copyright law, lawyers agree that an artificial intelligence cannot hold copyright in a work.

A robot with citizenship

In 2017, the government of Saudi Arabia granted citizenship to a robot named Sophia. The decision may have been a mere public relations stunt, but many people have argued that granting legal rights to robots undermines human rights.

Saudi Arabia is not alone: in Japan, Tokyo's Shibuya ward took a similar step, granting residency to Shibuya Mirai, a chatbot in a messaging app.

Are robots electronic persons or not?

The European Union has flirted with the idea, namely with the European Parliament's proposal to introduce a special legal status for robots as "electronic persons". Parliament asked the European Commission to study the possibility of applying the concept of electronic personhood to "cases where robots make intelligent autonomous decisions".

An open letter signed by more than 150 European AI experts strongly opposed the proposal, citing in particular an overestimation of the actual capabilities of AI and concerns about liability issues.

The European Commission did not adopt the European Parliament's proposal in its draft strategy for dealing with artificial intelligence, thus rejecting the idea of electronic personhood.

AI has "narrow intelligence"

One problem is that the term "artificial intelligence" tends to overstate the technology's capabilities. If, instead of calling the DABUS system (which has been named as the inventor in patent applications) artificial intelligence, we describe it as a computer program that uses advanced statistical methods, granting it a patent suddenly seems much less plausible.

Another problem is that a robot like Sophia, which looks somewhat like a human and can answer questions instantly, can make people believe that robots with human-level intelligence are not far off.

In reality, what today's AI systems have is what is known as "narrow intelligence": a system can perform well on a very narrowly defined task, but its performance usually collapses at the slightest change to that task.
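As a concrete illustration, here is a hypothetical sketch using scikit-learn (the dataset and model are my choices, not anything from the original): a model trained on upright digits scores well on exactly that narrow task, but collapses when the same digits are merely rotated.

```python
# A minimal sketch of "narrow intelligence": a model that performs well on
# the exact task it was trained on, but collapses under a slight change.
# Assumes scikit-learn is installed; dataset and model are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# On the narrowly defined task (upright digits) the model does well ...
print("upright digits:", model.score(X_test, y_test))

# ... but a slight change in the task (the same digits rotated 90 degrees)
# makes its performance collapse, because nothing general was learned.
rotated = np.rot90(X_test.reshape(-1, 8, 8), axes=(1, 2)).reshape(-1, 64)
print("rotated digits:", model.score(rotated, y_test))
```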

To achieve something like artificial general intelligence, where a machine could intelligently solve a whole range of complex tasks, more data and more computing power alone will not be enough; it is a problem for which no one has a roadmap.

Can AI be "creatively independent"?

When systems use complex algorithms such as neural networks that learn from large data sets, it is often difficult even for their programmers to understand the exact mechanisms by which they arrive at their results. Their operation is therefore somewhat opaque, and they often produce results that surprise the very people who programmed them (see the sketch at the end of this section).

In some cases, these surprising results are due to serious errors that could have had life-threatening consequences. In other cases, they have led to exciting and novel results that a human would probably not have thought of.

In the debate about AI personhood, this opacity is reinterpreted as a kind of "creative independence": the argument is that the AI's decision-making processes are independent of its creators, and that its creators should therefore not be responsible for them.
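To make this opacity concrete, here is a minimal, hypothetical sketch (library and sizes are my choices): even for a task whose rule a human can state in one line, the trained network's only "explanation" of its behavior is a pile of weight matrices.

```python
# A minimal, hypothetical sketch of opacity: the task below (XOR) has a
# one-line human rule, yet the trained network's only "explanation" of its
# behavior is raw weight matrices. Library and sizes are illustrative.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: output 1 iff exactly one input is 1

net = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                    random_state=1, max_iter=10000)
net.fit(X, y)
print(net.predict(X))  # usually [0 1 1 0]; another seed may fail to converge

# The learned "mechanism" is only this: numbers, not rules.
for i, weights in enumerate(net.coefs_):
    print(f"layer {i} weights:\n{weights}")
# Scale this up to millions of weights trained on large data sets, and it
# becomes clear why even the programmers cannot trace how a result arises.
```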

Many ethical questions

The interaction between humans and machines quickly raises questions such as: Will we be increasingly controlled, or even dominated, by machines? In what respects are machines better than we are? Will machines soon be able to replace humans? Many of these are ethical questions, which is why AI ethics is an important perspective on artificial intelligence. Like all technologies, AI systems present risks, opportunities, and challenges.

Ultimately, ethical guidelines, however well-intentioned, will not be enough. We need governments and international institutions to draw red lines to stop certain applications of artificial intelligence, from biometric surveillance to predictive policing, in their tracks, and to put in place appropriate safeguards and accountability mechanisms for others.

Is it possible to control AI?

There are two common myths about whether artificial intelligence can be controlled:

First, AI is simply too complex to control and regulate.

Second, any control and regulation of AI would stifle innovation.

As the hype around artificial intelligence grows, so does the debate about how to regulate it. On the one hand, there is the question of whether we can regulate a technology that is developing so rapidly and whose workings are supposedly so opaque that we barely understand them. On the other hand, there is the question of whether we should regulate artificial intelligence if any regulation risks stifling innovation or depriving the country that regulates it of its competitive advantage. Both claims rest on a number of misconceptions.

Politicians have no idea about AI

A common argument against regulating artificial intelligence is that politicians have too little experience with complex AI technologies to regulate them. This argument usually leads to the conclusion that clueless governments would only make things worse if they tried to regulate artificial intelligence, and that they should therefore leave things to the companies that have the expertise in this area.

Some even go so far as to argue that human regulation of artificial intelligence is impossible, and that AI must ultimately regulate itself because it is free of human shortcomings.

AI can still be regulated

As innovative and novel as today’s artificial intelligence technology may be, government control of new technologies is nothing new. Throughout history, governments have regulated new technologies, and the results have often been successful. Examples include the regulation of the automobile, the railways, the telegraph, and the telephone.

Artificial intelligence, like any technology, is a tool that people use. The impact of AI systems on society depends largely on who uses them, for what purpose, and for whom. And all of this can be regulated.

AI regulation will stifle innovation – really?

We keep hearing that we should not regulate AI. It is argued that regulation would stifle innovation, and that companies should be free to develop new technologies.

In the US, the White House published guidelines for AI policy in 2019 that argue strongly against over-regulation.

Laws may be beneficial

Looking across industries, the idea that regulation completely stifles innovation has not been borne out, and there is no reason why it should be any different for artificial intelligence. In a variety of industries, regulation has been implemented successfully without stifling all innovation.

After all, artificial intelligence is just a computer program, a technology invented by humans. Therefore, humans need to regulate how this technology is used.

Source: "Myth: AI has agency" (AI Myths)

For the German translation of this article, see: Künstliche Intelligenz – eine juristische Persönlichkeit?
