Rahman (2021). The Invisible Cage: Workers’ Reactivity to Opaque Algorithmic Evaluations

Author:

Hatim Rahman – Northwestern University, Kellogg School of Management

Interviewers:
Deepika Chhillar – University of Illinois Urbana-Champaign, Gies College of Business
Manav Raj – New York University, Leonard N. Stern School of Business

Article link: https://journals.sagepub.com/doi/10.1177/00018392211010118


1. Congratulations, first of all, on a brilliant article addressing growing concerns about algorithmic opacity. The article mentions that the data collection process took more than three years. Could you talk about how the project developed over that time span? Were you always interested in looking at the role of algorithmic transparency and its implications, or was it something that developed organically as you spent more time in the setting?

Rahman: To answer the question directly, I was not initially interested in algorithmic transparency. My doctoral training and this project, however, were heavily influenced by ethnographic data collection techniques and principles. One of the core principles I learned was to immerse yourself in a setting as deeply and for as long as possible (eventually we all have to move on, for various reasons!). This guideline served me well, because when I first started collecting data, the platform was using a relatively simple, transparent algorithm to rate workers. During this time, I was focused on immersing myself in the platform to understand how it worked and to build relationships with workers. About a year into this initial data collection period, the platform started transitioning to an opaque algorithm, which, as the paper delves into, was a huge deal. Had I not been immersed in the platform before that, I can’t imagine how I would have collected the data for the paper.

2. The very nature of many AI algorithms is to operate with limited human supervision, which may result in more opacity regarding what’s going on “under the hood.” How would you suggest that companies that plan to use these kinds of algorithms balance the potential costs to workers (who are subject to these opaque evaluations) against the potential benefits that come from using them?

Rahman: This question is at the center of much academic, legislative, and corporate debate in the US and around the world, which I think reflects how difficult the answer is to pin down. We do know thus far that companies really struggle to strike this balance on their own (e.g., Facebook, Google, Uber). So I think a mix of legislative, consumer, and industry-level initiatives is needed. We are starting to see the European Union take the lead with the GDPR and its recent AI guidelines. I think these initiatives will be important to track, especially the extent to which other countries adopt similar approaches and how companies respond.

3. Your paper largely focuses on how gig workers reacted to the uncertainty imposed by the opaque evaluation algorithm. Do you have a sense of whether or how the clients responded to this change? Did the opacity of the evaluation algorithm cause them concern in any way? This may sound like a million-dollar question, but in your view, is there a way to still deploy these algorithms while providing the necessary transparency?

Rahman: Clients were largely unaware of the changes. They mainly saw a new rating that they could use to find workers. While it may initially seem surprising that clients did not notice such a big change, it is not so surprising when you think about how platforms are set up. For instance, most people who use Amazon, Google, and even gig platforms for food delivery are completely unaware of the changes the platforms make to their algorithms. They just use the platform to fulfill their need for a good or service. But workers on these platforms are hyperaware of the changes the platform makes. Recently, for example, some people on social media posted that they were shocked to discover that only a small percentage of the tip they gave through the platform went to the workers delivering the food. Workers have been aware of this dynamic for a while, but they are not allowed to ask for tips or compensation off-platform.

Yes, I think there is a better way to balance transparency and opacity. For the context I studied, one idea is that workers could be shown what factors influenced their score after they complete a certain number of projects. This approach prevents workers from gaming the algorithm after every project but still gives them an opportunity to learn how they could improve.

4. One area where AI has been widely applied is hiring, through automated resume screening. What do you think are some lessons from your findings that can be applied to the employment market?

Rahman: One of the goals of advancing the invisible cage metaphor was to show how the dynamics I observed are, unfortunately, widespread. The hiring context exemplifies the invisible cage metaphor. A lot of researchers, for example, have shown that automated AI hiring systems exacerbate biases and discrimination, especially against women and underrepresented minorities. What’s worse is that candidates rarely get feedback about why a system made a decision in a certain way and are unaware of how these opaque systems operate. In fact, a PhD student in our department, Dawei Wang, has shown that mere differences in background lighting can influence how an AI system scores a candidate. So for organizations, my research shows that having a “human in the loop” is essential when using these systems, because these systems can overlook qualified candidates for reasons that have nothing to do with their skills. For candidates, just as I found in my setting, they are unfortunately at a disadvantage because they are often not presented with any alternatives.

5. Since many students read the ASQ blog, would you like to share any advice on academic work in this domain? What would be your advice to a first-year PhD student who is starting out in the “AI/ML area,” or a budding scholar interested in this field of research? (You can advise on methods, domain knowledge, or just about anything you found useful in your own research.)

Rahman: One great piece of advice I received in grad school was that, as a researcher, your project will be judged and categorized by its theory, methods, and phenomenon. Ideally, a project excels at all three, but the professor who told me this said a project needs to excel in two of these three areas to be viable. Applying this advice to qualitative studies of AI/ML, the phenomenon is obviously very interesting, but just studying this topic is likely not enough. A lot of thought needs to go into what type of methods and data you will be able to use to study the phenomenon. Getting access to “thick” data over time and/or comparative data, I think, improves the chances that when you inductively analyze the data you will be able to make a rich theoretical contribution. I will also acknowledge that I feel a lot of luck goes into the process, and there are definitely ups and downs, so do not be too hard on yourself if things do not seem to work out right away.


Interviewer Bios:

Deepika Chhillar is a doctoral candidate in the Organizational Behavior and Strategy area at the Gies College of Business at the University of Illinois Urbana-Champaign. She received her Master’s in Economics and a BE in Computer Science from BITS Pilani. Prior to joining the Ph.D. program, Deepika worked in various roles at Credit Suisse and Schlumberger. Deepika’s research interests include understanding and enhancing the cultural life of organizations, technological discontinuities, and the governing role of institutions. Her work employs computational methods, including natural language processing and machine learning.

Manav Raj is a Ph.D. candidate in the Management & Organizations department at the Leonard N. Stern School of Business at New York University. His research studies how firms and markets respond to innovation around them, with a specific focus on digital innovations and platforms. Prior to his doctoral studies, Manav graduated from Dartmouth College with a major in Economics and a minor in Public Policy and worked as a consultant with Cornerstone Research in Boston. This fall, he will join the faculty of the Management Department at the Wharton School of the University of Pennsylvania.
