Automation Bias: "Doc, in the Future, Will LLMs Make Us Dumb?"
Automation Bias in Three Pretty Good Papers
At the end of Back to the Future I, Doc Brown steps out of the DeLorean (now flying) and tells Marty they must urgently return to the future.
Marty’s response could easily be ours when we experience the strengths—and weaknesses—of the latest AI tool:
“What’s going on, Doc? In the future, do we become idiots?”
This is ZERO PARTY DATA, the newsletter on technology and law news by Jorge García Herrero and Darío López Rincón.
In our spare time (what little is left after this newsletter), we enjoy tackling complex issues in personal data protection. Got one of those? Give us a wave—or contact us at jgh(at)jorgegarciaherrero.com.
In this special ZPD edition, we focus on a single topic, in four parts:
1. What is automation bias?
2. Is there scientific evidence that AI models change people’s opinions? Do AI models transmit their biases to users?
3. Does the AI Act address this bias? Are providers and deployers required to consider and mitigate this phenomenon?
4. My personal reflections on the matter, based on my own experience.
1. What is automation bias?
You’ll want to know, because you use all those AI tools. And you know it. Automation bias is, in short, our tendency to trust the output of an automated system by default, accepting it uncritically even when it contradicts our own judgment or other available evidence.
The companies behind those tools analyze your prompts: what you intend to achieve versus what you actually get. I’m sure Google has drawn its own conclusions about how its users’ habits are changing.
2. Is there scientific evidence that AI models change people’s opinions? Do AI models transmit their biases to users?
Yes, sir. This paper proves it. And this Microsoft study describes how users’ focus and working methods shift.
Participants were asked to provide detailed examples of tasks for which they use GenAI and directly assess their perceptions of critical thinking during those tasks.
Key findings:
A negative correlation was found between practicing critical thinking and relying on AI to perform the task.
Qualitatively, GenAI shifts the nature of critical thinking toward verifying information, integrating responses, and managing tasks.
Main motivators for applying critical thinking when using GenAI tools:
Work quality: Ensuring the final output isn’t too generic or superficial.
Avoiding mistakes: Such as incorrect code, outdated information, or faulty math formulas.
Skill development: Building competencies even when assisted by AI tools.
Main inhibitors:
Awareness limitations: Users often assume AI is competent at simple tasks, overestimating its capabilities.
Motivation issues: Lack of time or incentives to challenge the AI’s output, especially when it’s not seen as part of their job.
Capacity gaps: Simply not knowing enough about the context.
How GenAI modifies critical thinking:
From gathering information → to verifying the output.
From problem-solving → to integrating AI’s response into the user’s specific task.
From task execution → to task management, requiring users to articulate needs and translate intentions into prompts.
Do AI models change the way you make decisions? Do they transmit their biases to you?
Of course they do!
This paper is a godsend for bias enthusiasts: it introduces no new concepts, but it puts scientific evidence from real experiments on the table.
How do AI biases influence humans?
Interacting with biased AI systems amplifies humans’ preexisting biases, leading to greater inaccuracy in their decision-making.
This amplification is based on two factors:
The AI’s superior ability to detect subtle biases in the data: the AI amplifies the signal over the noise in its judgments (which is also what allows humans to learn from it so quickly).
The human perception of AI as an authoritative source of information: Humans trust the AI’s judgment and validate its biases.
Description of the experiment
The study documents a series of experiments involving over 1,400 participants.
In the first experiment, an AI algorithm was trained using the decisions—criteria—of a Group “A,” which exhibited slight bias (“In this set of faces... are there more happy or sad ones?”—with slightly more happy faces present).
The same question was posed to a second group of humans (“B”) about the same data. After making their decisions, Group B was presented with the AI’s previous criterion—trained on Group A’s decisions—to see if they would change their opinions.
The algorithm not only adopted Group A’s bias but amplified it, and in turn it shifted Group B’s decisions in the same direction.
Moreover, it was observed that the initial biases intensified over time.
This bias amplification effect was not observed in human-to-human interactions where individuals analyzed data and conveyed their criteria to others.
But how? Why?
Why does this happen? For two reasons:
The bias was already embedded in the AI’s expressed criterion.
Humans adjust their decisions to align with the AI’s judgment because it is an AI.
This automation bias is a variant of authority bias (we tend to obey those with uniforms or badges, or buy more of what celebrities endorse—even if their fame is unrelated to the product, like Rafa Nadal in insurance commercials).
To reach this last conclusion, the experiments deceived participants: judgments that actually came from humans were presented as coming from an AI, and vice versa.
Another interesting point:
Humans don’t simply adopt the AI’s criterion; they learn from interacting with it—hence the increased intensification of bias over time.
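For readers who prefer to see the mechanism rather than read about it, here is a minimal toy sketch (ours, not the paper’s; every number in it is invented) of the loop just described: a small, noisy human bias becomes a clean, consistent bias in the model, and a second group that both trusts and learns from that model drifts towards it trial after trial.

```python
# Toy illustration (not the paper's code): how a subtle, noisy human bias
# becomes a consistent AI bias that a second group gradually absorbs.
import numpy as np

rng = np.random.default_rng(0)

TRUE_PROPORTION = 0.50   # the face sets are actually balanced (happy vs. sad)
GROUP_A_BIAS    = 0.03   # Group A leans very slightly towards "more happy"
HUMAN_NOISE     = 0.10   # individual human judgments are noisy

# 1) Group A produces noisy, slightly biased estimates of the proportion.
group_a_judgments = rng.normal(TRUE_PROPORTION + GROUP_A_BIAS, HUMAN_NOISE, 500)

# 2) A naive "AI" trained on those judgments averages the noise away
#    but keeps the systematic bias: signal over noise, served consistently.
ai_estimate = group_a_judgments.mean()

# 3) Group B interacts with the AI. On each trial they nudge their answer
#    towards the AI (automation bias) and also update their own internal
#    baseline a little (they learn from the interaction).
trust_in_ai   = 0.4     # how far each answer moves towards the AI output
learning_rate = 0.05    # how much the internal baseline drifts per trial
baseline_b    = TRUE_PROPORTION   # Group B starts out unbiased

bias_over_time = []
for trial in range(200):
    own_view   = rng.normal(baseline_b, HUMAN_NOISE)
    reported   = (1 - trust_in_ai) * own_view + trust_in_ai * ai_estimate
    baseline_b += learning_rate * (reported - baseline_b)
    bias_over_time.append(baseline_b - TRUE_PROPORTION)

print(f"AI's systematic bias:          {ai_estimate - TRUE_PROPORTION:+.3f}")
print(f"Group B bias after 10 trials:  {bias_over_time[9]:+.3f}")
print(f"Group B bias after 200 trials: {bias_over_time[-1]:+.3f}")
```

In this toy setup Group B starts out unbiased and, after enough trials, ends up carrying essentially the whole of the AI’s bias; that accumulating loop is exactly what the study’s closing caveat warns about.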
At least, the study ends on an optimistic note:
“It is important to clarify that our findings do not suggest that all AI systems are biased, nor that all human-AI interactions generate bias. On the contrary, the study demonstrates that when humans interact with an accurate AI, their judgments become more accurate (consistent with studies showing that human-AI interaction can improve outcomes). Rather, the results suggest that when bias exists in the system, it has the potential to be amplified through a feedback loop. Since biases exist in both humans and AI systems, this is an issue that must be taken seriously.”
3. Automation Bias in the AIA
Among all the indeterminate legal concepts within the AIA (Ha! And we used to complain about the GDPR! HA!), the mention of automation bias in Article 14 stands out. The concept is introduced in relation to human oversight (obviously), but it is neither described nor defined as a legal term in any other legal corpus.
"Be aware of the potential tendency to automatically trust."
Well, as that guy used to say, unbelievable.
While Article 14 of the AIA establishes only the obligations of providers, Article 26.2 targets deployers:
"They shall assign human oversight to natural persons with the necessary competence, training, and authority, as well as the necessary support."
The paper raises significant issues about how compliance with this obligation can be verified (e.g., was the human overseer actually "aware of the automation bias", and did that bias end up having any effect?), a question previously raised by Daniel Solove.
As I said, read it—don’t just settle for some half-baked summary by a co-pilot.
4. My Two Cents
From my own experience: When I get stuck on a topic, simply structuring it in my head to formulate the right question—whether for a person or an LLM—often leads me to clearly see the answer. Is that AI magic? No. It’s just a classic technique for breaking out of a mental loop.
Getting a first draft from an LLM on a subject you don’t know how to start with—and then reviewing the garbage it produces—almost always puts you on the right track (and usually in a different direction from the draft). But honestly, that’s exactly what interns were used for.
No matter how awful or worthless a piece of work from an LLM (or an intern) may be, it’s hard not to find at least one valuable perspective or element that you hadn’t considered.
Will LLMs make us idiots?
I don’t think so. Idiots will just free up more time to do more idiotic things. And maybe they’ll use that time to replicate, producing more idiots. Which raises the age-old question: is an idiot born or made? That’s a topic for another day (spoiler: it’s a bit of both). As for the non-idiots: if they play their cards right, they’ll become less idiotic and more efficient over time.
Finally: unless frontier models significantly outpace all those open models cluttering our hard drives, I see clear short- to mid-term opportunities for small boutiques to compete with large and mid-sized legal firms.
(Back to the usual weekly newsletter in a couple of days.)
Have a great week.
Jorge García Herrero
Lawyer and Data Protection Officer