Coded Bias: How AI Is Learning to Think Like Us (and Why That’s a Problem)


For decades, we’ve held a shimmering, sci-fi vision of the future. It’s a world run by artificial intelligence, a world of sleek, impartial algorithms making decisions with a speed and rationality that we fragile humans could only dream of. We imagined AI as the ultimate antidote to our flawed, emotional, and biased minds. These systems, built on the pure logic of mathematics, would surely be free from the petty prejudices and mental glitches that have plagued humanity for millennia. They would hire the best candidate, approve the most deserving loan, and deliver the most objective justice.

This utopian vision is proving to be a profound and dangerous fantasy. As we delegate more and more of our critical decisions to automated systems, we are not escaping our biases. We are merely outsourcing them. We are teaching our own flawed patterns to machines, then hiding that bias behind a veneer of technological neutrality. We are taking the ghosts of our past prejudices and programming them into the architecture of our future.

This is the new frontier of cognitive bias: coded bias. It’s the digital manifestation of our oldest psychological failings. It’s a world where an algorithm, with no malice or intent, can learn to be racist, sexist, and exclusionary because it has been trained on the messy, biased data of human history. To navigate the coming decades, we must move past our awe of AI’s capabilities and develop a healthy, critical understanding of its limitations. We must become literate in the psychology of our own machines.

The Ghost in the Machine: How AI Learns Our Stereotypes

The promise of using AI in hiring seems so alluring. An algorithm could sift through thousands of resumes, immune to the human biases that have historically held back women and minorities, and select the most qualified candidates based on pure merit. What could possibly go wrong?

In 2018, the world found out. It was revealed that Amazon had been quietly building an AI recruiting tool. The system was fed a decade’s worth of resumes submitted to the company, and its job was to learn the characteristics of a successful hire. But the AI didn’t learn about skills. It learned about history. Because the tech industry has long been male-dominated, the training data was overwhelmingly male.

The algorithm quickly taught itself that male candidates were preferable. It learned to penalize resumes that contained the word “women’s,” as in “women’s chess club captain.” It reportedly even downgraded graduates of two all-women’s colleges. Amazon’s engineers tried to edit the system to make it neutral, but they couldn’t guarantee it wouldn’t find new, more subtle ways to discriminate. They ultimately scrapped the entire project.

This is a perfect, if terrifying, example of how Stereotyping gets coded into AI. The AI itself isn’t “sexist.” It’s a pattern-matching machine. It ingested a dataset that reflected a biased reality, identified a pattern (maleness correlates with being hired), and logically concluded that this was a desirable trait. The algorithm didn’t learn our values; it learned our past behavior, warts and all.
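To make the mechanism concrete, here is a deliberately tiny, hypothetical sketch in Python (this is not Amazon’s actual system; the resumes and labels are invented). It does nothing more than score keywords by how often they appeared on past hires, yet because the invented history is male-dominated, the token “women’s” drags a resume’s score down even though gender is never an explicit input.

```python
from collections import defaultdict

# Hypothetical historical data: (resume keywords, was the candidate hired?)
# The imbalance below stands in for a decade of male-dominated hiring.
history = [
    ({"python", "hackathon", "chess club"}, True),
    ({"java", "hackathon", "men's soccer"}, True),
    ({"python", "chess club"}, True),
    ({"python", "women's chess club"}, False),
    ({"java", "women's coding society"}, False),
    ({"python", "hackathon", "women's soccer"}, False),
]

hires = defaultdict(int)   # times each keyword appeared on a hired resume
totals = defaultdict(int)  # times each keyword appeared at all
for keywords, hired in history:
    for kw in keywords:
        totals[kw] += 1
        hires[kw] += hired

def keyword_score(kw):
    """Fraction of past resumes with this keyword that led to a hire (0.5 if unseen)."""
    return hires[kw] / totals[kw] if totals[kw] else 0.5

def score_resume(keywords):
    """Average the learned keyword scores: pure pattern-matching, no values."""
    return sum(keyword_score(kw) for kw in keywords) / len(keywords)

print(score_resume({"python", "hackathon", "chess club"}))          # ~0.72
print(score_resume({"python", "hackathon", "women's chess club"}))  # ~0.39
```

Nothing in this toy scorer is malicious. It has simply, and faithfully, learned the pattern in the data it was given.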

The Problem of Proxies

The real danger lies in how algorithms use proxies. A proxy is an indirect measure of a desired outcome. An AI might be forbidden from using race as a factor in loan decisions. But it can still use proxies for race, like zip codes, which are often highly segregated. The AI might learn from historical data that people from certain zip codes have a higher default rate. Without any understanding of the systemic factors that create this disparity (like historical redlining and lack of access to banking), the algorithm will simply conclude that living in that zip code is a sign of being a high-risk borrower. It has effectively laundered a historical, societal bias through a seemingly neutral data point, creating a high-tech version of redlining.
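Here is a minimal sketch of the proxy problem, again with entirely hypothetical records and made-up zip codes: race never appears as a variable, but a per-zip-code default rate learned from skewed history does its work for it.

```python
from collections import Counter

# Hypothetical loan records: (zip code, defaulted?). The disparity between the
# two invented zip codes stands in for the legacy of redlining and unequal
# access to banking, not for anything about individual applicants.
history = [
    ("11111", False), ("11111", False), ("11111", True),  ("11111", False),
    ("22222", True),  ("22222", True),  ("22222", False), ("22222", True),
]

defaults = Counter(z for z, defaulted in history if defaulted)
totals = Counter(z for z, _ in history)

def approve(applicant_zip, threshold=0.5):
    """Deny any applicant whose zip code has a high historical default rate."""
    rate = defaults[applicant_zip] / totals[applicant_zip]
    return rate < threshold

print(approve("11111"))  # True  -- approved
print(approve("22222"))  # False -- denied, whatever the individual's record is
```

No line of this code mentions race, yet if the zip codes are segregated, its decisions will track race anyway, and every denial it issues becomes another data point confirming the pattern.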

This is the insidious nature of coded bias. It creates a feedback loop. The biased data produces a biased algorithm, which in turn makes biased decisions that create even more biased data for the next generation of algorithms to learn from.

The Algorithmic Echo Chamber: Supercharging the Availability Heuristic

We’ve already explored how our brains fall victim to the Availability Heuristic, our tendency to judge the likelihood of something by how easily examples of it come to mind. This is why we fear plane crashes more than car accidents; the dramatic, vivid images of a plane crash are far more “available” in our memory.

Now, consider the recommendation engines that govern our digital lives. Your YouTube homepage, your Netflix queue, your Spotify “Discover Weekly” playlist, and your news feed are all designed with one primary goal: to keep you engaged by showing you things you are likely to click on. To do this, they learn from your past behavior.

If you watch one video about a bizarre conspiracy theory out of idle curiosity, the algorithm notes this. It concludes, “Ah, this user is interested in conspiracy theories.” It then serves you another, slightly more extreme video on the topic. If you watch that one, it doubles down. Before you know it, your entire recommended list is a firehose of conspiratorial content.

The algorithm has created a powerful feedback loop for the Availability Heuristic. It takes a fleeting interest and, by making similar content constantly and readily available, it transforms that interest into what feels like a significant, pervasive, and important topic. It creates the illusion that “everyone is talking about this,” when in reality, you are in a personalized media bubble of one, curated by a machine that is simply trying to guess what you’ll click on next.
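Here is an illustrative simulation of that loop, with made-up numbers and a deliberately crude recommender (no real platform is this simple, though the dynamic is the same): it gives most of the feed to whatever topic currently has the most clicks and treats every watch as a fresh signal of interest.

```python
# Start with equal interest in everything, plus one idle click on a fringe topic.
clicks = {"news": 5, "music": 5, "cooking": 5, "conspiracy": 5}
clicks["conspiracy"] += 1

def next_feed():
    """A greedy recommender: the current top topic gets 7 of 10 slots."""
    top = max(clicks, key=clicks.get)
    return {topic: (7 if topic == top else 1) for topic in clicks}

for round_ in range(5):
    for topic, shown in next_feed().items():
        clicks[topic] += shown   # simplification: the user watches whatever is served

print(clicks)
# {'news': 10, 'music': 10, 'cooking': 10, 'conspiracy': 41}
```

After five rounds, more than half of this user’s recorded “interest” points at a topic they clicked once out of idle curiosity, and seventy percent of every future feed will be built on it.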

This isn’t just about entertainment. This same mechanism can shape our perception of political reality, our health fears, and our view of societal risk. The algorithm makes certain ideas, fears, and narratives hyper-available, distorting our mental map of the world in a way that is both incredibly subtle and profoundly impactful.

Automation Bias: Our Dangerous Trust in the Machine

Compounding this problem is our innate Automation Bias. This is our tendency to trust the suggestions and decisions made by automated systems, sometimes more than our own judgment. We see the output of a computer and assume it must be objective and correct.

When a GPS tells us to turn down a deserted-looking road, we often obey without question. When a recommendation engine suggests a video, we are more likely to see it as a credible suggestion than if a random stranger had recommended it. We grant the machine an unearned aura of authority.

This means we are less likely to be critical of the information presented to us by an algorithm. We let our guard down, assuming the system is neutral. This makes the Availability Heuristic feedback loop even more powerful. We’re not just being shown a distorted view of the world; we’re predisposed to believe it.

The Unseen Architecture: How Bias Hides in the Code Itself

So far, we’ve focused on how biased data creates biased outcomes. But bias can also be baked into the very design and architecture of the algorithm itself. The choices made by the human programmers—what data to include, what variables to prioritize, how to define “success”—are all laden with their own values and inherent biases.

Consider an algorithm designed to predict criminal recidivism, which is used in some court systems to help determine sentencing. How do you define “success” for this algorithm? Is a successful outcome simply “this person was not re-arrested”? This seems logical, but it hides a massive bias.

In the United States, policing is not uniform. Certain neighborhoods, often predominantly minority communities, are policed much more heavily than others. A person living in a heavily policed area is statistically more likely to be arrested for a minor infraction (like loitering or marijuana possession) than a person in a less-policed suburban neighborhood who is committing the same infraction.

Therefore, an algorithm trained on arrest data will logically conclude that people from that heavily policed neighborhood are at a higher risk of “recidivism.” The definition of success—”avoiding re-arrest”—is itself biased by real-world policing practices. The algorithm isn’t just reflecting a biased world; its very objective function is codifying and reinforcing that bias, creating a veneer of objective, data-driven justification for a deeply inequitable cycle.
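A toy simulation makes the trap visible (the rates and neighborhoods are invented, and this is not how any real risk tool is built): both neighborhoods have exactly the same underlying re-offense rate, but one is policed three times more intensively, and re-arrest is all the data ever records.

```python
import random

random.seed(42)

REOFFENSE_RATE = 0.20  # identical true behavior in both neighborhoods
ARREST_IF_OFFENDING = {"heavily_policed": 0.9, "lightly_policed": 0.3}

def observed_rearrest_rate(neighborhood, n=10_000):
    """Simulate n people; return the re-arrest rate, the only 'label' a model sees."""
    rearrests = 0
    for _ in range(n):
        reoffended = random.random() < REOFFENSE_RATE
        arrested = reoffended and random.random() < ARREST_IF_OFFENDING[neighborhood]
        rearrests += arrested
    return rearrests / n

for neighborhood in ARREST_IF_OFFENDING:
    print(neighborhood, round(observed_rearrest_rate(neighborhood), 3))
# heavily_policed ~0.18, lightly_policed ~0.06 -- same behavior, triple the "risk"
```

Any model trained on those labels will learn that the first neighborhood is roughly three times “riskier,” and a judge reading its score has no way to see that the difference lives in the policing, not the people.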

The Myth of Neutrality

This reveals the central myth of our technological age: the myth that technology is neutral. Every technological tool is a product of human choices, and every human choice is subject to bias. An algorithm is an opinion, embedded in code. It is a set of priorities and definitions, written in the language of mathematics.

There is no “raw data.” Data is collected by people, cleaned by people, and interpreted by people (or by systems designed by people). The decision of what data to even collect is a human one. Are we collecting data on income but not on generational wealth? Are we collecting data on prior convictions but not on access to community support systems? Every choice shapes the “reality” the algorithm gets to see.

The Way Forward: Towards Algorithmic Accountability

The picture seems bleak, but it is not hopeless. The rise of coded bias is not an unstoppable force of nature; it is a design problem. And because it is a design problem, it has design solutions. The path forward requires a new kind of vigilance, a new kind of literacy, and a new commitment to accountability.

A Call for Transparency and Auditing

First, we must demand transparency. When a decision that significantly affects a person’s life (like a loan, a job application, or a parole hearing) is made with the help of an algorithm, we have a right to know. The “black box” approach, where companies treat their algorithms as untouchable trade secrets, is no longer acceptable.

Second, we need robust, independent auditing. Just as financial auditors scrutinize a company’s books, algorithmic auditors must be empowered to scrutinize a system’s code, data, and outcomes. These auditors should be diverse, multi-disciplinary teams of computer scientists, social scientists, and ethicists who can probe for coded biases and test for discriminatory outcomes.

Diversifying the Creators

The homogeneity of the tech industry is a major source of coded bias. A team of programmers who are all from similar demographic and socioeconomic backgrounds will have a limited, shared set of life experiences and, therefore, a shared set of blind spots. They are less likely to anticipate how their technology might negatively impact communities they are not a part of. Actively building more diverse and inclusive teams is not just a matter of social justice; it is a prerequisite for building safer and more equitable technology. A wider range of perspectives in the design phase will lead to a more robust and less biased product.

The age of AI is upon us, and with it comes a new and powerful vector for our oldest human failings. Our biases have escaped the confines of our skulls and now live in the cloud, operating at a scale and speed we can barely comprehend. We cannot afford to be naive about these new machines. We must approach them not with blind faith, but with critical wisdom, recognizing that every algorithm is a mirror. And if we don’t like the reflection we see, it is up to us, not the machine, to change.
