Algorithms, Austrian job seekers and Automation Bias

During the last ten weeks I worked for Tech4Germany, Germany’s technology task force under the Chief of Staff and Head of the Chancellery, Prof. Dr. Helge Braun. We are a startup within the German government that offers tech talents the opportunity to innovate within the public sector. Now 30 people strong, we want to prove that digital transformation is possible in the public sector, using tech and design. With no background in the public sector, it was a great adventure for me to learn more about Germany’s public services and how to innovate within this environment.

In my personal opinion, technology and digitization have a great potential to benefit the public good. In my project we developed a user-centered and service-oriented concept for digital points of contact between citizens and the Federal Ministry. By using representative surveys, interviews and user tests, we set an example for the inclusion of citizens in digital projects at the federal level.

Unfortunately, modern technologies such as artificial intelligence (AI) or automated decision-making software also have the potential to cause harm. Because of their enormous impact on citizens’ lives, public administrations carry a special responsibility when applying and using these technologies. If the Netflix algorithm has flaws, the personal consequences are limited: I watch series that don’t fit my profile, or I have to scout for interesting movies myself. When it comes to public services, the consequences can be serious, e.g. when citizens depend on benefits.

In Austria we can witness a bad example right now. Its national employment agency is currently rolling out an algorithm that assigns each job seeker a score reflecting their (computed) chances on the labor market. Job seekers are then placed into one of three groups (A, B and C), and the group determines the training, benefits and support they receive.

After parts of the algorithm’s documentation were made public, experts began criticizing the employment agency for discrimination. As the published model shows, being female carries a negative weight, as does being disabled. The head of the agency defended his plans by stating that the agency’s case workers can overrule the algorithm’s categorization.
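To make the criticism concrete, here is a minimal sketch of how such a score-and-categorize model works. All weights, thresholds and feature names below are invented for illustration; they are not the published AMS coefficients.

```python
# Hypothetical sketch of a score-based profiling model like the one
# described above. Weights and thresholds are made up for illustration.

def employability_score(features, weights, bias=0.5):
    """Weighted sum of job-seeker attributes, clamped to [0, 1]."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return max(0.0, min(1.0, score))

def categorize(score, high=0.66, low=0.25):
    """Map a score to group A (high), B (medium) or C (low chances)."""
    if score >= high:
        return "A"
    if score >= low:
        return "B"
    return "C"

# Illustrative weights only. Note the negative weights attached to
# attributes such as gender or disability -- this is exactly what
# the discrimination criticism is about.
WEIGHTS = {
    "female": -0.14,
    "disabled": -0.10,
    "over_50": -0.12,
    "recent_employment": 0.25,
}

job_seeker = {"female": 1, "disabled": 0, "over_50": 0, "recent_employment": 1}
print(categorize(employability_score(job_seeker, WEIGHTS)))  # → B
```

Even this toy version shows the problem: a protected attribute with a negative weight directly pushes a person toward a group with fewer resources, regardless of whether a case worker could, in principle, intervene.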

Studies of similar systems in other countries cast serious doubt on this argument. In 2014, Poland introduced a comparable data-collection and profiling system to categorize unemployed citizens.

The Polish Panoptykon Foundation released a report in 2015 criticizing the system’s negative impact on citizens’ fundamental rights. Their research shows that even though case workers had the option to change a profile, i.e. to overrule the algorithm’s categorization, this happened in only 0.58% of all cases. In other words, human case workers relied on the machine’s score almost all of the time.

This form of over-reliance on automation is known as automation bias in the human factors literature. By putting too much trust in an automated system, humans reduce their own information seeking and processing. According to the literature, there are, generally speaking, three main factors behind automation bias (Parasuraman & Manzey, 2010).

First, humans prefer simple heuristics over complex decision making. In other words, when we have a choice, we pick the path with the least cognitive effort. Relying on the algorithm’s categorization takes less effort than overruling it, which requires justification.

Second, automated decision aids are seen as powerful. We perceive automated systems as superior in their analytical capabilities and tend to overestimate their performance. The Panoptykon Foundation’s research contains an interesting quote from an employment agency manager:

“A client counselor can change a profile, but as a bureaucrat he must… I am not afraid of making decisions. Some workers are so that they are not afraid of making decisions. Others are afraid. If they weren’t, they would probably be all like Kulczyk [the name of the former richest Pole]. However, they are bureaucrats; they prefer somebody else to make this decision for them”

– Quote by an employment agency manager (Niklas et al. 2015, p.28)

Third, social loafing occurs not only in all-human teams but also when humans and machines work together. The software is likely perceived as a team member, and the case workers’ felt responsibility decreases accordingly.

This means that giving human case workers the option to overrule the algorithm is not enough: without further training or instruction, they very likely will not use it, as was the case for their Polish colleagues. In my opinion, the Austrian government should put more effort into answering socio-technical questions and addressing phenomena such as automation bias before releasing its algorithm, which, on top of everything, is still very likely illegal. 🙂

Technology is not a silver bullet for improving public services. It comes with risks and opportunities that need to be addressed by policymakers and discussed with the wider public. Where do we, as citizens, want automation? What about our data privacy? Will I still be able to talk to human civil servants in the future? Governments have a great responsibility and therefore can’t afford to get digitization and automation wrong. To sum up, I want to recommend Meg Leta Jones’ article on the ironies of automation law: tying policy knots with fair automation practices principles.

Niklas, J., Sztandar-Sztanderska, K., & Szymielewicz, K. (2015). Profiling the unemployed in Poland: Social and political implications of algorithmic decision making. Warsaw: Fundacja Panoptykon.

Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410.

Jones, M. L. (2015). The ironies of automation law: Tying policy knots with fair automation practices principles. Vanderbilt Journal of Entertainment & Technology Law, 18, 77.