Welcome to the Infodemic
Have you ever felt like you’re living in a completely different reality from someone you know? You’re looking at the same event, the same headline, but you’re seeing two wildly different stories. What shapes that reality? And in a world of endless information, how do we know what’s real anymore?
Today, we’re wading into some murky waters. We’re going to talk about the internet.
Ah, the internet. Humanity’s greatest achievement and its most chaotic, unmoderated garage sale, all at once. It’s a place where you can explore the collections of the British Library, learn quantum physics from a Nobel laureate, and watch a video of a cat playing a tiny piano, all in the span of about ten minutes. It’s a miracle.
But it’s also a place that perfectly illustrates the old saying, often attributed to Winston Churchill, that ‘a lie can get halfway around the world before the truth has a chance to put its boots on.’ And in a fitting twist of irony for an episode about misinformation, Churchill almost certainly never said that, which is exactly the right way to start our discussion.
I’m Danny, and this is a new episode from English Plus Podcast.
Let me paint you a picture. It’s 2017. Hurricane Harvey is devastating Texas. The news is filled with harrowing images of floods and heroic rescues. Suddenly, a new image goes viral, shared hundreds of thousands of times on Twitter and Facebook. It’s a photograph of a massive shark, its fin slicing through the murky brown water, swimming down a flooded Houston highway. The caption is simple: “Sharks are swimming on the freeway in Houston.”
People are horrified. They’re amazed. They share it with friends and family. “Can you believe this?!” It becomes a symbol of the storm’s apocalyptic scale. The only problem? It’s completely, utterly fake. It wasn’t even a new fake. It was a photoshop job that had been circulating online for years, attached to whatever new flood was in the news.
Now, on the grand scale of things, a fake shark picture is pretty harmless. It’s a bit of silly, jaw-dropping clickbait. But it’s a perfect symptom of a much larger, much more dangerous phenomenon. We’re living in an age not just of information, but of information overload. We’re drowning in it. And in this flood of content, separating fact from fiction, the real sharks from the photoshopped ones, has become one of the most critical skills of the 21st century.
This isn’t just an information age; it’s an infodemic. An epidemic of bad information that spreads just as fast as any virus, infecting our minds, our conversations, and our societies.
So, for the next 40 minutes or so, we’re going to build a survival guide. This is your personal toolkit for navigating the modern information age without losing your mind. First, we’ll get our definitions straight—we need to know what we’re up against. Then, we’re going to play armchair psychologist and look at why our own brains are so easily tricked by this stuff. It’s not because we’re dumb; it’s because we’re human. After that, we’ll investigate the digital gasoline that fuels this fire—the algorithms and echo chambers that have turned the internet into the world’s most efficient rumor mill. And finally, the part you’ve been waiting for: we will lay out simple, actionable strategies that anyone can use to build a stronger immune system against the infodemic.
So grab your headphones, open your mind, and let’s figure out how to navigate this wild, wonderful, and utterly weird digital world together.
Know Your Enemy – Defining the Terms
Alright, before we can fight a monster, we have to know its name. In the world of bad information, not all beasts are created equal. You’ve probably heard the terms ‘misinformation’ and ‘disinformation’ thrown around, sometimes interchangeably. But the difference between them is crucial—it’s the difference between an accident and an attack. And there’s a third, sneakier category we need to talk about, too. So let’s break them down.
First up, the most common and least sinister of the bunch: misinformation.
Misinformation is, simply, false information. The key ingredient here, the thing that separates it from its more evil twin, is intent. Or rather, the lack of it. A person sharing misinformation doesn’t know it’s wrong. They’re not trying to deceive you; they’re often trying to be helpful or share something they found interesting.
Think of your great-aunt Carol. She sees a post on Facebook that says you can fully charge your phone by putting it in the microwave for 30 seconds. She thinks, “Wow, what a great life hack! My nephew is always complaining about his battery.” So she shares it on your wall. She’s not a monster trying to destroy your iPhone; she’s just your lovely, technologically challenged Aunt Carol who genuinely believes she’s helping. She is spreading misinformation. It’s an honest mistake. It’s you misremembering a movie quote, or telling a friend a restaurant is open on Mondays when it’s actually closed. The information is wrong, but the heart is, usually, in the right place. The harm is accidental.
But then we get to the dark side. We get to disinformation.
Disinformation is also false information, but here, the intent is everything. Disinformation is a weapon. It is a lie crafted and spread with the specific purpose to deceive, to manipulate, to make a profit, to damage a reputation, or to sow chaos. This isn’t an accident; it’s arson.
The person creating or knowingly sharing disinformation is the puppet master. They know the information is false, and they’re using it as a tool. This is the stuff of state-sponsored propaganda campaigns designed to destabilize other countries. It’s the fake “miracle cure” website trying to sell you useless sugar pills for a deadly disease. It’s the political operative who invents a scandalous rumor about an opponent right before an election.
If misinformation is your Aunt Carol accidentally telling you to microwave your phone, disinformation is the prankster who created that meme in the first place, knowing full well it would lead to a bunch of broken phones, just so they could have a laugh at other people’s expense. One is an error, the other is an attack. One is a mistake, the other is malicious. The falsehood is the same, but the intent changes everything. It’s the difference between accidentally bumping into someone and punching them in the face.
Now for the third, and perhaps the most complex, category: malinformation.
This one is tricky because the information itself is often true. It’s genuine. The problem isn’t the content; it’s the context and the intent. Malinformation is the weaponization of truth. It’s real information shared with the intent to cause harm to a person, an organization, or a country.
The most common example of this is doxxing. That’s when someone takes a person’s private, personal information—like their home address, their phone number, their family members’ names—and posts it publicly online, often to encourage others to harass them. The address is real. The phone number is real. The information is technically correct. But it has been weaponized. It’s been taken from a private context and blasted into a public one to inflict harm.
Another example is taking a quote from someone completely out of context. Someone might have made a sarcastic joke ten years ago in an email to a friend. If that email is leaked and the joke is presented as a serious, current belief, that is malinformation. The person really did write those words—that part is true. But the framing is designed to mislead and cause damage. It’s taking a piece of truth and twisting it into a weapon.
So, let’s recap.
Misinformation: False, but the intent isn’t malicious. An honest mistake.
Disinformation: False, and the intent is absolutely malicious. A deliberate lie.
Malinformation: True, but used with malicious intent. A weaponized truth.
Why do these distinctions matter? Because knowing what you’re looking at helps you understand the motive behind it. Is this something that needs a gentle correction, or is it an attack you need to defend yourself against? Is this a simple misunderstanding, or are you the target of a deliberate campaign? Naming the enemy is the first step toward defeating it. And the biggest enemy we face isn’t always the information itself, but the quirks of our own psychology.
The Brain on Bad Information – Why We’re Vulnerable
So, we have these three horsemen of the infodemic: misinformation, disinformation, and malinformation. But why are they so effective? Why do we, as supposedly rational, intelligent beings, fall for this stuff over and over again? It’s tempting to think, “Oh, it’s those people who fall for fake news. The uneducated ones. The gullible ones. Not me. I’m too smart for that.”
Well, I have some bad news for all of us. Susceptibility to bad information has very little to do with intelligence and a whole lot to do with how the human brain is wired. Our brains are incredible machines, but they’re not perfect. To navigate a complex world without our heads exploding, our brains have developed mental shortcuts, or what psychologists call cognitive biases.
Think of them like the default settings on your phone. They’re there to save energy and help you make quick decisions. Most of the time, they’re incredibly useful. But sometimes, those default settings can get us into a lot of trouble, especially online.
Let’s talk about the big one, the undisputed heavyweight champion of cognitive biases: Confirmation Bias.
Confirmation bias is the tendency for our brains to actively seek out, interpret, and remember information that confirms what we already believe. At the same time, we tend to ignore, dismiss, or forget information that challenges our existing beliefs. Our brains don’t like being wrong. Being wrong feels bad; it creates something called cognitive dissonance, a kind of mental static. Being right, on the other hand, feels great. It gives us a little hit of dopamine. So, our brains go looking for that good feeling.
Imagine you strongly believe that electric cars are the only way to save the planet. You go online. You see two headlines. One says, “New Study Shows Electric Cars Slash Urban Pollution by 70%.” The other says, “The Hidden Environmental Cost of Mining for Electric Car Batteries.”
Which one are you going to click?
If you’re like most people, you’re clicking that first one. You’ll read it, nod along, and maybe even share it with the caption, “See! I told you so!” You might glance at the second headline and think, “Pfft, that’s probably funded by the oil industry,” and you’ll scroll right past it without a second thought.
You didn’t consciously decide to ignore evidence. Your brain did it for you, automatically, like an overzealous bouncer at a nightclub, only letting in the information it already recognizes and likes. This isn’t a moral failing; it’s a feature of our cognition. We build a worldview, and then we spend our lives looking for evidence to furnish it and defend it.
But it gets even weirder. What happens when you’re cornered? What happens when someone presents you with undeniable facts that directly contradict one of your deeply held beliefs? You might think that a rational person would say, “Oh, wow. I hadn’t considered that. Thank you for this new information, I will now update my belief.”
Yeah… that’s not usually what happens. Instead, we often experience something called the Backfire Effect: confronted with contradictory evidence, people sometimes reject it outright and end up even more committed to their original, incorrect belief.
It’s completely counterintuitive. It’s like telling someone their house is on fire, and they react by locking the doors and adding more kindling to the fireplace.
You’ve probably seen this play out during a family dinner. You get into a debate with an uncle who believes something you know to be factually wrong. You pull out your phone. You show him articles, studies, expert testimony. You lay out a perfect, iron-clad, logical case. And what happens? He doesn’t just disagree; he gets more convinced he’s right. He doubles down. He accuses the experts of being biased. He dismisses your sources. Your attempt to put out the fire with the water of facts has somehow acted like gasoline.
That’s the backfire effect. For beliefs that are tied to our identity—our political views, our ethical values, our sense of who we are—a factual challenge can feel like a personal attack. And when we feel attacked, we raise our defenses. We don’t just protect our belief; we reinforce it.
So we have our own brains working against us with biases like these. Now, let’s pour the jet fuel of modern technology onto this psychological kindling.
Our modern digital world, especially social media, is practically designed to exploit these biases. Think about your social media feed. Who, or what, decides what you see? It’s not a human editor thinking about what is true or important for you to know. It’s an algorithm. And that algorithm has one primary goal: to keep you on the platform for as long as possible. To maximize your engagement.
And how does it do that? It shows you things you’re likely to react to. Things that make you angry, happy, shocked, or validated. It learns what you like, what you click on, and what you believe, and then it feeds you more of the same. It’s a confirmation bias machine. It sees you clicking on articles about the benefits of electric cars, and it says, “Ah! This person loves pro-EV content. Let’s show them a hundred more articles just like it, and hide all that nasty stuff about battery mining.”
This creates what we call Echo Chambers or Filter Bubbles. We end up in these personalized digital realities where we are surrounded by voices that agree with us and information that confirms what we already think. It feels comfortable. It feels validating. It feels like everyone thinks the way we do. But it’s an illusion. We’re in a hall of mirrors, and we’ve mistaken it for the entire world.
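If you like to think in code, here is a deliberately tiny, hypothetical sketch of that idea. It is not any real platform’s algorithm; it just shows how a feed that ranks posts purely by your past clicks will, by construction, keep serving you more of what you already agree with.

```python
# A toy, purely illustrative feed ranker (not any real platform's code).
# Real recommender systems are far more complex, but the incentive is the same:
# rank whatever this particular user is most likely to engage with.
from collections import Counter

def rank_feed(candidate_posts, engagement_history):
    """Put the topics this user already clicks on at the top of the feed."""
    clicks_per_topic = Counter(engagement_history)   # e.g. {"pro_ev": 14, "battery_mining": 1}
    def predicted_engagement(post):
        return clicks_per_topic[post["topic"]]       # more past clicks, higher rank
    return sorted(candidate_posts, key=predicted_engagement, reverse=True)

posts = [
    {"title": "The Hidden Environmental Cost of Mining for Electric Car Batteries", "topic": "battery_mining"},
    {"title": "New Study Shows Electric Cars Slash Urban Pollution by 70%", "topic": "pro_ev"},
]
history = ["pro_ev"] * 14 + ["battery_mining"]

for post in rank_feed(posts, history):
    print(post["title"])
# The pro-EV story rises to the top, and the other side barely surfaces at all:
# an echo chamber, built one click at a time.
```

Again, it’s a caricature, but the feedback loop it captures (click, rank, click again) is exactly why the filter bubble feels so natural from the inside.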
And into this perfectly primed environment, you introduce the agents of chaos: bots and troll farms. Bots are automated software programs designed to post and share content at an incredible rate. A single person can command an army of thousands of bots to make a fringe idea look like a mainstream movement, or to amplify a piece of disinformation until it trends. Troll farms are the human equivalent: organized groups of people, often paid by a government or a political group, whose entire job is to sit in an office and spread divisive and false content online all day.
So you have a human brain that’s biologically primed to accept information it already agrees with. You have algorithms that are commercially designed to feed us more of what we already agree with. And you have malicious actors strategically working to flood that system with lies that cater to our biases.
It’s a perfect storm. It’s no wonder we’re all struggling to stay afloat. But there is hope. You can learn to be a better sailor.
The Survival Guide – Your Digital Literacy Toolkit
Alright, we’ve diagnosed the illness and explored the underlying conditions. We know what bad information is, and we know why our brains and our technology are so susceptible to it. Now for the most important part: the cure. Or, if not a cure, then a powerful vaccine and a set of survival skills.
This isn’t about becoming a super-genius fact-checker who spends eight hours a day debunking memes. This is about developing a few simple, powerful habits that can transform you from a passive consumer of information into an active, critical thinker. The best framework I’ve found for this comes from the digital literacy expert Mike Caulfield, and it’s called the SIFT method.
SIFT is an acronym for Stop, Investigate the source, Find better coverage, and Trace claims to the original context. It’s easy to remember and incredibly effective. Let’s break it down, step by step.
The first step, and honestly the most important one, is S – Stop.
Just… stop. For one second. Before you click share, before you fire off an angry comment, before you let that headline ruin your day, just pause. Take a deep breath.
The entire digital economy is built on instant, emotional reaction. Disinformation thrives on it. It’s designed to bypass your rational brain and hit you right in the gut. It wants to make you angry, scared, or smug. Strong emotions are the enemy of critical thinking. When your heart rate is up, your critical faculties are down.
So when you see a post that makes your blood boil or a headline that seems too good, too perfect, or too outrageous to be true, recognize that feeling. That jolt of emotion is a red flag. It’s a warning sign from your brain saying, “Hey, something that wants to manipulate me is happening here.”
Stopping interrupts that cycle. It creates a crucial gap between the emotional impulse and the action of sharing. In that gap, you can ask a simple question: “Why am I feeling this way? Does this source want me to be angry? Does this post want me to feel validated?” Pausing is your superpower. It’s the single biggest thing you can do to avoid becoming an unwitting link in the chain of misinformation.
Okay, so you’ve stopped. You’ve taken a breath. What’s next?
I – Investigate the source.
This seems obvious, but it’s amazing how often we skip it. Before you even read the article, ask yourself: “Who is telling me this?” Look at the name of the website. Do you recognize it? Is it a well-known news organization with a reputation to uphold, like the BBC, Reuters, or the Associated Press? Or is it something like “The American Patriot Herald” or “True Health Wisdom Today dot-info”?
If you don’t recognize the source, your investigation has just begun. Don’t just take its word for it. Open a new tab. This is key. Do not stay on the page. Go to Google or another search engine and type in the name of the website. Look at what other, independent sources say about it. Wikipedia can be surprisingly useful for this; its entries on publications often describe their political leaning and factual accuracy.
Look for an “About Us” page on the site itself. Is it transparent? Does it list its editors, its funding, its mission? Or is it vague and full of buzzwords? A credible news source will have a clear, public corrections policy. They admit when they get things wrong. A disinformation site will never admit fault. Pay attention to the URL itself. Is it trying to look like a real news site? A common trick is a “.co” or “.net” that looks like a “.com” you trust—like “abcnews.com.co”. It’s a digital fake mustache.
Investigating the source is like checking who cooked your food before you eat it. Is it a trained chef in a clean kitchen, or some random guy in an alley? You’d want to know, right? Same goes for your information.
Now for the third step, which is where you really start to level up your skills.
F – Find better coverage.
This is also known as lateral reading. Most of us were taught to read “vertically.” That is, you land on a webpage, and you analyze it deeply. You read the text, check for a date, look for links, and try to decide if it’s trustworthy all by itself. That’s a slow and often ineffective way to operate online.
Lateral reading is different. Instead of going deep, you go wide. The moment you land on an unfamiliar site that is making a big claim, you open other tabs. Your goal isn’t to analyze the original article; your goal is to see what the rest of the web is saying about that claim.
Let’s say you see a headline on “FutureScienceNews.com” that says, “Scientists Discover Broccoli Cures Baldness.” Wow! Exciting news for many. But before you run to the grocery store, read laterally. Open a new tab and search for “broccoli baldness cure.”
Now, look at the results. Is the story being reported by major, trusted news outlets and scientific journals? Or is the only other coverage on blogs called “MiracleVeggieCures” and “HairGrowthSecrets”? If the New York Times, The Guardian, and the journal Nature aren’t talking about a major scientific breakthrough, it probably didn’t happen.
Lateral reading is the fastest, most effective way to get your bearings on a topic. You’re not trying to become an expert on the source; you’re just trying to see if there’s a consensus among experts and journalists. It’s like getting a second, third, and fourth opinion. In just a few minutes of searching across different sites, you can usually determine if a story is legitimate, a fringe theory, or total nonsense.
Finally, we have the fourth step. You’ve stopped, you’ve investigated the source, you’ve looked for better coverage. Now it’s time to get to the root of the matter.
T – Trace claims, quotes, and media to the original context.
Information rarely just appears out of thin air. It usually comes from somewhere. A quote comes from an interview. A statistic comes from a study. A photo comes from a specific time and place. A huge amount of mis- and disinformation works by stripping that original context away. Your job is to put it back.
If you see a shocking quote from a politician, don’t just trust the meme that’s sharing it. Find the original video or transcript of the speech or interview. Was the quote said exactly as written? Was it sarcasm? Was the sentence before it, “Now, my opponent would have you believe that…”? Context is everything.
If you see a powerful photograph, do a reverse image search. This sounds complicated, but it’s incredibly easy. On most browsers, you can just right-click on an image and select “Search Image with Google” or a similar option. Or you can use sites like TinEye. This will show you where else that image has appeared online, and when.
Remember our shark on the highway? A quick reverse image search would have shown you that the photo was years old and had nothing to do with Hurricane Harvey. This technique is invaluable for debunking photos of supposed war crimes, protests, or natural disasters that are actually from completely different events, sometimes decades old.
The same goes for data and statistics. An article might claim, “A new study shows that people who drink coffee every day are 50% more likely to be geniuses.” Don’t just accept the headline. Find the study. Who funded it? Was it the International Association of Coffee Growers? How many people were in the study? Was it 10,000 people, or was it eight guys in a coffee shop? Does the study’s actual conclusion match the breathless headline? Often, you’ll find the reality is far more nuanced.
So there it is: SIFT. Stop. Investigate. Find. Trace. It’s not a magic bullet, but it’s a powerful shield. It’s a mental habit that, with a little practice, becomes second nature.
Becoming a Better Digital Citizen
So, where does that leave us? We’ve journeyed through the murky depths of the infodemic. We’ve defined our enemies: the accidental misinformation, the deliberate disinformation, and the weaponized malinformation. We’ve looked in the mirror and seen how our own cognitive biases—especially our love for confirming our own beliefs—make us vulnerable. We’ve examined the technological landscape of algorithms and echo chambers that have turned this vulnerability into a full-blown crisis.
But we didn’t stop there. We’ve equipped ourselves with a toolkit, the SIFT method, a simple framework to help us navigate this chaotic world: Stop before you share. Investigate the source. Find better coverage through lateral reading. And trace the information back to its original home.
It can feel overwhelming. It can feel like the tide of garbage information is too high to ever turn back. And it can be tempting to just throw up your hands and disengage entirely, to retreat into a state of cynicism where you trust nothing and no one.
But that’s not the goal. The goal isn’t to become a cynic. The goal is to become a discerning digital citizen. The goal is to trade blind trust for earned trust. It’s to replace passive scrolling with active curiosity. This isn’t about believing nothing; it’s about learning how to believe the right things for the right reasons.
Learning these skills is like learning to drive. At first, it feels like there are a million things to pay attention to—the pedals, the steering wheel, the mirrors, the other cars. It’s clumsy and it takes conscious effort. But with practice, it becomes automatic. It becomes instinct. You learn to spot the red flags of a bad source just like you learn to spot a driver who’s about to cut you off.
And you don’t have to master all of it overnight. Start small. For the next week, I challenge you to practice just one step: Stop. The next time you see a headline that makes your heart pound, that fills you with righteous indignation or smug satisfaction, just pause. That’s it. You don’t have to do a full investigation. Just stop, take a breath, and recognize the emotion. That one small act is a revolution.
Every time you pause before you share, every time you take 30 seconds to check a source, you are doing more than just protecting yourself. You are improving the information ecosystem for everyone. You are throwing a small bucket of water on the digital wildfire. And if enough of us do that, we can begin to put out the flames.
We built this digital world. It is flawed, it is chaotic, but it is also full of wonder and connection and knowledge. We have a responsibility to ourselves, and to each other, to make it better. And that starts with being a little more critical, a little more curious, and a lot more thoughtful about the information we choose to consume and to share.
That’s all the time we have for today. Thank you for joining me on this journey through the infodemic. Stay curious, stay critical, and I’ll talk to you next week.