Almost without realizing it, Artificial Intelligence (AI) has entered our lives at a dizzying pace. This has its advantages, but there are also points we should reflect upon. Do machines have too much power?
For the last few years, artificial intelligence has experienced exponential growth. Although we still do not know exactly how the human brain works, we are increasingly capable of creating systems that resemble it.
However, this technological development has set alarm bells ringing for experts in Artificial Intelligence Law, who consider it imperative for the European Union to draw up a detailed ethical code regulating liability for the damages caused by these systems.
But experts are not the only ones making their voices heard. Within the European Parliament itself, representatives such as Adrián Vázquez Lázara, member of the Renew Europe Group and chair of the Parliament's Committee on Legal Affairs, pointed out in his speech at the VI Registration Congress that regulation is needed for cases such as autonomous vehicles: what decision must a self-driving car make if it encounters a child running after a ball? Unable to stop in time, the driving system would have to decide whether it is more ethical to hold its course, at the risk of running the child over, or to swerve suddenly and hit an elderly person walking along the pavement.
If we reflect on this scenario described by the MEP, we might think it is science fiction, as if it were a Hollywood movie, but this approach is not new at all.
In 1985, Judith Jarvis Thomson examined in The Yale Law Journal (the publication of Yale Law School) some of the questions experts had posed during the twentieth century. This renowned philosopher laid out the complexity that finding solutions to such human dilemmas would entail for Artificial Intelligence.
Many experts agree that the Ethical Code of Artificial Intelligence should be developed by Brussels through a Regulation, as was the case with the Data Protection legislation, the best-implemented legislation worldwide.
Imagine a dilemma similar to the one described above, but this time a human is driving a vehicle whose brakes have failed, and the road forks in two: on one side there are five people, and on the other there is only one person. Is it legitimate to kill one person to save five?
Now for the second part of the dilemma: a young man goes into a clinic for a routine checkup; in that clinic there are five patients waiting for organ transplants. To survive, two of them need a lung, two need a kidney, and the fifth needs a heart. The curious thing is that the young man who came in for the checkup has the same blood type as all of them, so he would be a perfect donor. Let us repeat the question: is it legitimate to kill one person to save five?
In these kinds of dilemmas, a human would have to solve the equation by following ethical criteria. In the first case, surely almost everyone would agree to run over one person to save five; there are many human factors that justify that action. However, it is difficult for a machine to calibrate such decisions, which is why it is so important to provide Artificial Intelligence with an ethical code.
Out of that concern, the European Parliament drew up a report on robotics in 2017 that included a code of ethics, and in December 2018 the first draft of the Ethics Guidelines for Trustworthy AI was published. Fifty-two experts analyzed and explored every corner of the problem, focusing on human beings and always approaching it from the perspective of defending fundamental rights.
These guidelines are moral standards addressed to humans, the creators of the technology.
Out of these concerns, the Commission published a legislative proposal that is currently being debated in the European Parliament.
This proposal aims to classify AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk.
High-risk systems must meet a series of significant obligations before being placed on the market, including risk-assessment mechanisms, activity logging that ensures the traceability of results, and oversight measures carried out by specialized professionals.
It is also vitally important for technology creators to become increasingly versed in ethics, able to discern the implications and risks of the systems they create.