Algorithms are present in our daily lives and in our digital devices. They help us discover new music based on our tastes or recommend films based on what we’ve already seen. But algorithms have also reached Human Resources and are playing an important role in personnel selection while opening new debates: are they really efficient and ethical?

 

What will I read about in this article?

  • Origin and operation of algorithms
  • Algorithms are part of our lives
  • Human Resources departments and the use of algorithms
  • Algorithm biases and prejudices

 

H2: Where do algorithms in human resources come from and how do they work?

 

Interest in algorithms has been growing steadily in recent years. Their use has spread to social networks, music platforms, advertising companies and human resources departments, among many other areas. However, although we might think that algorithms are an eminently modern invention and the preserve of computer scientists, the truth is that they are neither.

The earliest known record of the explanation and use of algorithms comes from clay tablets dating back to 2500 BC, on which the Sumerians set out the steps for dividing a harvest among several people. Euclid, in his Elements, written in the 3rd century BC, presented an algorithm for calculating the greatest common divisor. And it was not until the 19th century that Ada Lovelace (1815-1852) created the first algorithm intended to be processed by a machine.

Nowadays, an algorithm is understood as a programmed procedure that transforms input data into results that allow us to make decisions. More specifically, it is defined as “an ordered and finite set of operations that allows us to find the solution to a problem”.

 

“An algorithm is a programmed procedure that transforms input data into results that enable us to make decisions.”

 

An algorithm therefore involves three elements: the input, the set of data the algorithm needs to process; the process, the sequence of steps applied to reach the solution; and the output, the result produced.
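To make these three elements concrete, here is Euclid's greatest common divisor algorithm (mentioned above) written as a short Python function; this is a minimal sketch in which the arguments are the input, the loop is the process, and the return value is the output.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: the input is two non-negative integers."""
        while b != 0:          # process: repeat until the remainder is zero
            a, b = b, a % b    # replace (a, b) with (b, a mod b)
        return a               # output: the greatest common divisor

    print(gcd(48, 18))  # -> 6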

 

H2: Algorithms are present on our phones and in our daily lives

 

A good modern example of the importance of algorithms is Spotify. Spotify is not the music platform it has become simply because it has access to much of the music produced, but because its algorithms, which examine our historical listening behaviour (input data), offer personalised predictions (output) in the form of songs that we will “like”. Spotify’s algorithms have the function of offering the user songs that the algorithm “knows” will resonate with us.

It’s interesting to think that no one has actually made this selection for us, and no one has listened to the personalised playlist before; instead, a complex algorithm provides each end user with a personalised prediction that only they will hear. The ultimate goal of the algorithm is to deliver the right song to the right person.
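Spotify’s real recommendation system is far more complex and not public, but the underlying idea, turning a listening history into a personalised prediction, can be sketched with a simple similarity measure. The taste profile and song names below are invented for illustration.

    import math

    def cosine(u, v):
        """Cosine similarity between two equal-length numeric vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Input: a hypothetical taste profile built from listening history,
    # plus feature vectors for candidate songs (all numbers invented).
    user_profile = [0.9, 0.1, 0.4]              # e.g. rock, jazz, electronic
    catalogue = {
        "song_a": [0.8, 0.2, 0.3],
        "song_b": [0.1, 0.9, 0.2],
    }

    # Process: score every candidate; output: the best-matching song.
    best = max(catalogue, key=lambda s: cosine(user_profile, catalogue[s]))
    print(best)  # -> song_a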

The same is true of the algorithms programmed by many social networks to deliver personalised ads to each user: the right ad for the right person.

Algorithms have also arrived in force in the workplace; in fact, they are already transforming the labour market and the corporate world. Algorithms are used to distribute tasks, routes and schedules based on the location and availability of employees. A clear example is the distribution of motorbikes in motosharing services. The same type of algorithm governs the new way in which lift repair companies or roadside assistance services operate.
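A toy version of such a task-distribution algorithm, assuming the only criteria are location and availability, might look like the sketch below; the employee names and coordinates are hypothetical.

    def assign(job_location, employees):
        """Pick the nearest available employee for a job.

        employees: dict mapping name -> (x, y, available).
        """
        def distance(pos):
            return ((pos[0] - job_location[0]) ** 2 +
                    (pos[1] - job_location[1]) ** 2) ** 0.5

        free = {n: (x, y) for n, (x, y, available) in employees.items() if available}
        return min(free, key=lambda n: distance(free[n]), default=None)

    staff = {"ana": (0, 0, True), "ben": (5, 5, True), "eva": (1, 1, False)}
    print(assign((1, 2), staff))  # -> ana (nearest of the available staff)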

 

 

H2: How does HR use algorithms to select staff?

 

In human resources, algorithms have expanded into recruitment, evaluation, promotion, and the verification of goal achievement, and it is good to have objective help when making these decisions.

However, experts warn that while algorithms can offer us new objectivity, they can also bring with them new risks and ethical issues that need to be considered. First, in many countries, new laws have already been passed to regulate the use of algorithms that affect working conditions.

In Spain, for example, a law has recently been passed stating that works councils must be informed when algorithms affect working conditions. In other words, the company must disclose whether the algorithms it uses to distribute tasks or manage time are also used to measure its workers’ productivity. The first consideration is therefore ethical: employees should know whether there are artificial intelligence systems in their organisation that affect their working conditions.

Secondly, experts tell us that algorithms can be biased, and it’s essential not to have blind faith in them. The algorithm will rarely fail in mechanically converting input data into output, but it can still fail if its design does not prevent the output from being discriminatory.

 

“Employees should be aware of whether there are artificial intelligence systems in their organisation that affect their working conditions.”

 

For example, in e-recruitment, algorithms semantically interpret the content of recommendation letters and CVs. At the same time, these systems learn from recruiters’ preferences and can make predictions that offer the recruiter candidates similar to those previously hired, generating a possible bias in favour of one part of the population.
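A deliberately crude sketch can show how this happens: if a screening model scores candidates by similarity to past hires, and the hiring history is homogeneous, the model will reproduce that homogeneity. All the feature vectors below are invented; real systems extract far richer signals from CVs.

    # Hypothetical feature vectors extracted from CVs (values in [0, 1]).
    past_hires = [[1.0, 0.2], [0.9, 0.1], [1.0, 0.3]]   # a homogeneous history

    def similarity(u, v):
        """1 minus the mean absolute difference between two vectors."""
        return 1 - sum(abs(a - b) for a, b in zip(u, v)) / len(u)

    def score(candidate):
        """Score a candidate by average similarity to previous hires."""
        return sum(similarity(candidate, h) for h in past_hires) / len(past_hires)

    # A candidate who resembles past hires outscores a different profile,
    # regardless of actual qualification:
    print(round(score([0.95, 0.2]), 2))  # -> 0.94 (similar to past hires)
    print(round(score([0.20, 0.9]), 2))  # -> 0.27 (different profile)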

 

A second, more specific example, provided by Muhammad Ali and colleagues at Northeastern University (2019), concerns sourcing algorithms, those used to reach a potential audience. These are key to the recruitment process, even if they are not usually understood as part of the process itself. Sourcing algorithms work in the same way as personalised advertisements, except that what is delivered to the potential audience is a job offer.

 

 

H2: Algorithms also have biases and prejudices

 

To understand the possible biases of algorithms in hiring, Ali and his team conducted an interesting experiment. They paid Facebook $8,500 in advertising to show different job offers to users. Although the job offers were different (a position in the timber industry, a cashier position in a supermarket chain, and a position with a taxi company), the target audience they specified as advertisers was identical in all three cases.

However, even though the specified target audience was the same, the analysis of the experiment revealed that the timber industry job reached an audience that was mostly white (72%) and male (90%), that the supermarket cashier ad reached a mostly female audience (85%), and that the taxi driver ad reached an audience that was 75% African American.

The same happened in a second experiment offering houses and flats for rent and sale, some luxury and some affordable. Again, although the ads were bought for a similar target audience for both types of housing, the luxury ads reached mostly white people, and the affordable ones reached mostly African Americans.
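The kind of audit behind these findings is conceptually simple: compare the demographic composition of the audience each ad actually reached. The impression counts below are illustrative and are not the paper’s data.

    from collections import Counter

    def delivery_shares(impressions):
        """impressions: list of demographic labels, one per ad impression."""
        counts = Counter(impressions)
        total = sum(counts.values())
        return {group: count / total for group, count in counts.items()}

    # Illustrative impression logs for two ads bought with identical targeting:
    ad_timber = ["male"] * 90 + ["female"] * 10
    ad_cashier = ["male"] * 15 + ["female"] * 85

    print(delivery_shares(ad_timber))   # {'male': 0.9, 'female': 0.1}
    print(delivery_shares(ad_cashier))  # {'male': 0.15, 'female': 0.85}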

Once again, therefore, experts encourage organisations to review the biases that algorithms themselves can generate. It is advisable to follow some steps:

 

  • Do not evaluate or make decisions blindly on the basis of algorithmic results; instead, use them as necessary data that complement the information generated by the people involved in the process.

  • Monitor the entire pipeline. In hiring, the sourcing algorithm is often not considered part of the process, but as Ali and colleagues show, this first step can already discriminate in favour of one population over another. It is therefore desirable to examine every algorithm involved in a decision-making process.
  • Conduct interventions and experiments to understand first-hand the possible biases of the algorithms in the organisation, e.g. comparing joint decision processes (human judgement plus algorithm output) with unilateral ones (the best outcome provided by the algorithm alone); a minimal sketch of such a check follows this list.
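As a minimal sketch of the intervention suggested in the last point, one conventional check is to compare selection rates across groups for each decision process; the four-fifths rule used in employment testing is one common threshold. The group labels and outcomes below are hypothetical.

    def selection_rates(decisions):
        """decisions: list of (group, selected) pairs -> rate per group."""
        totals, hits = {}, {}
        for group, selected in decisions:
            totals[group] = totals.get(group, 0) + 1
            hits[group] = hits.get(group, 0) + (1 if selected else 0)
        return {g: hits[g] / totals[g] for g in totals}

    def impact_ratio(rates):
        """Lowest selection rate divided by highest; < 0.8 flags possible bias."""
        return min(rates.values()) / max(rates.values())

    # Outcomes of one decision process (e.g. algorithm-only shortlisting):
    algo_only = [("group_a", True)] * 8 + [("group_a", False)] * 2 \
              + [("group_b", True)] * 4 + [("group_b", False)] * 6
    rates = selection_rates(algo_only)
    print(rates)                # {'group_a': 0.8, 'group_b': 0.4}
    print(impact_ratio(rates))  # 0.5 -> below the 0.8 threshold

Running the same check on the joint (human plus algorithm) process and comparing the two ratios gives a first, rough indication of which process treats the groups more evenly.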

References:

Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). Discrimination through optimization: How Facebook’s ad delivery can lead to biased outcomes. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-30.