Machines that Fire or Promote Employees

AI is increasingly integrated into society's day-to-day life, to the point of requiring specific regulation before it is too late.

David Yatskiv

16/11/2021

artificial_intelligence

When reading the title of this article, one might recall sci-fi movies like I, Robot, Blade Runner or Back to the Future. But, surprise! I'm afraid this article is not about a science fiction movie, but rather about the real world we live in in 2021.

The exponential development of Artificial Intelligence and its application across different domains has reached the world of labour relations, to the point of entailing a major shift in how we conceive of those relations today. In the face of this impending economic challenge, the European Union is compelled to revise its labour regulations to address the impact of Artificial Intelligence in working environments. The urgency of regulating this technology stems from the deployment of many Artificial Intelligence systems that decide on matters such as hiring, firing and promoting workers.

The European Union is compelled to revise its labour regulations to address the impact of Artificial Intelligence in working environments.

On 21 April 2021, the European Commission proposed a Regulation comprising the first legal framework on Artificial Intelligence, thereby giving continuity to the White Paper (White Papers are documents containing the European Union's proposals for future action, in this case in the Artificial Intelligence domain) published a year earlier. The proposed Regulation is currently under discussion in the European Parliament and the Council of the European Union.

Among the novelties in this draft, the classification of certain algorithmic systems as high-risk stands out. In particular, the draft Regulation refers to systems that companies could use to make decisions on promotions, to evaluate employee performance and behaviour, and to monitor workers' emotional state in the workplace.

These major technological developments may pose a serious risk for employees, in certain situations exposing their private lives and infringing their fundamental rights. Implementing emotion recognition and biometric categorisation systems may, in some cases, end up invading people's privacy and subsequently affecting their psychological wellbeing. Therefore, at first glance, this technology may clash with current regulations on personal data protection and fundamental rights.

Accordingly, the new Regulation provides for the lawful use of these systems, ensuring that employers who deploy such technology remain subject in all cases to special transparency obligations, so that employees are aware of which of these systems are being used.


In addition, the European legislator has indicated in the draft Regulation that companies must carry out a "conformity assessment", through which the risks that the use of this type of technology may entail will be evaluated, with particular attention to its effect on the fundamental rights of the people affected.

Currently, the use of Artificial Intelligence in the labour environment is one of the main concerns for those who defend workers' rights.

Indeed, one of the key goals of the European Union's strategic framework on health and safety at work for 2021-2027 consists precisely in anticipating and managing the change brought by digital transformation, and specifically the use of Artificial Intelligence in the working environment. The European legislator does not intend to draw a line between right and wrong uses of Artificial Intelligence in the labour domain; rather, its approach is risk-based, so it is companies that must assess their own actions in accordance with the principles of prevention, proactive responsibility and due diligence.

Currently, the use of Artificial Intelligence in the labour environment is one of the main concerns for defenders of workers' rights: in particular, how this technology may be used in hiring and in the employer's monitoring of employee behaviour. Furthermore, the misuse of these Artificial Intelligence systems may lead to violations of labour rights, harm employees' mental health and even increase the risk of anxiety and stress.

The use of this Artificial Intelligence, and the amount of personal data it can collect, will require more sophisticated security systems, since cyberthreats will grow, and a security breach at a company employing this type of Artificial Intelligence could compromise the personal data of its employees, even to the point of threatening their fundamental rights.

There is no doubt that this new technological paradigm in working environments will spark debate in the coming years, given that this technology, depending on how it is used, may exponentially improve the efficiency with which a company employs its human resources, but could also be detrimental to workers' rights.

The pressing need to regulate this technology stems from the many Artificial Intelligence systems already in place that make decisions on hiring, firing or even promoting employees.

To give an example: in the course of an emotion analysis, the employee who is the subject of the study might be in a delicate emotional situation for personal reasons (an argument with a relative, an illness, the death of someone close, etc.). In that case, the data obtained will not serve the purpose pursued, since the Artificial Intelligence will conclude, for instance, that the employee is unhappy in their job when the problem is in fact entirely different and purely personal. Mistakes of this kind could harm employees, preventing them from being promoted or even causing them to be fired, since the Artificial Intelligence would infer that the employee is dissatisfied with their job and that their performance will therefore be poor.

We might think this obstacle is easy to overcome: the employee need only disclose their personal situation beforehand so that the company refrains from analysing them for a while. But that would directly, and unlawfully, violate their privacy.

All things considered, we may conclude that whenever an employee's emotional state and biometric parameters are to be analysed, the use of this kind of technology should be voluntary for the employee and never imposed by the employer.