The Coach | The AI Horizon 1 | The Event Horizon: Understanding the Singularity

by Danny Ballan | Jan 19, 2026 | The Teacher

The Black Hole of Future History

Welcome back to English Plus. I’m your host, Danny, and I am your coach. Usually, we talk about English, we talk about communication, we talk about culture. But this week, we are doing something different. We are doing something necessary. We are zooming out. Way out.

Imagine you are in a spaceship. You are traveling through the cosmos, and ahead of you, there is a point in space where the laws of physics as we know them break down. Light cannot escape it. Information cannot be retrieved from it. Once you cross that line, there is no turning back, and predicting what happens on the other side is mathematically impossible. In physics, we call this the Event Horizon of a black hole.

Now, take that concept and apply it to human history.

For thousands of years, we could predict roughly what the next generation’s life would look like. If you were a farmer in the year 1200, you knew your children would be farmers in the year 1230. The tools would be the same. The struggles would be the same. Even fifty years ago, if you were a mechanic, you knew your kids might drive slightly faster cars, but the road was still the road.

But right now, in this moment, we are approaching a historical Event Horizon. We are approaching a point where technology is moving so fast, changing so radically, that we literally cannot predict what civilization will look like 20 years from now.

This week, we are running a special mini-series called “The AI Horizon.” My goal is not to scare you with Terminator stories, and it’s not to bore you with computer code. My goal is to coach you through the reality of the next two decades. I want you to understand the vocabulary of the future so that when the wave comes, you are surfing it, not drowning in it.

Today, in Episode 1, we are tackling the big one. The concept that ties it all together. The Singularity.

What is it? Why are people like Elon Musk and Ray Kurzweil obsessed with it? And why is your human brain biologically wired to misunderstand how close it actually is? Let’s dive in.

The Linear Trap: Why We Don’t See It Coming

Let’s start with a little experiment. I want you to be honest with yourself. When you think about the year 2035—ten years from now—what do you see?

Maybe you picture slightly thinner phones. Electric cars that are a bit cheaper. Maybe a robot vacuum that doesn’t get stuck under the couch. You probably picture life as it is today, just… plus ten percent. Better, faster, but fundamentally the same.

If that is what you see, you are not stupid. You are human. You are suffering from what we call the “Linear Bias.”

See, for 200,000 years, human evolution has been linear. If you hunted an antelope today, you’d probably hunt one tomorrow. If you walked 30 steps, you’d be 30 meters away. Our brains are hardwired to expect change to happen in a straight line. 1, 2, 3, 4, 5.

But technology does not move linearly. Technology moves exponentially. 1, 2, 4, 8, 16, 32.

Let me give you the classic example, often used by futurists, to show you how bad we are at understanding this.

Imagine I tell you to take 30 linear steps. One step is one meter. After 30 steps, where are you? You’ve crossed the street. You are 30 meters away. Easy to visualize.

Now, imagine I tell you to take 30 exponential steps. So, your first step is 1 meter. Your second step is 2 meters. Your third is 4, then 8, then 16. By step 30, how far have you gone?

Take a guess. A few kilometers? Maybe across the country?

At step 30, exponentially, you have traveled one billion meters. You have gone around the Earth twenty-six times.
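If you want to sanity-check that number yourself, here is a quick back-of-the-envelope sketch in Python (the Earth-circumference constant is an approximate figure for the equator):

```python
# 30 linear steps of 1 meter each vs. 30 exponential steps
# that start at 1 meter and double: 1, 2, 4, 8, ...

linear_total = 30 * 1  # 30 steps, 1 meter per step -> 30 meters

# 2^0 + 2^1 + ... + 2^29 meters
exponential_total = sum(2 ** step for step in range(30))

EARTH_CIRCUMFERENCE_M = 40_075_000  # approx. meters at the equator

print(linear_total)       # -> 30
print(exponential_total)  # -> 1073741823, just over a billion meters
print(exponential_total / EARTH_CIRCUMFERENCE_M)  # roughly 26.8 trips around the Earth
```

The doubling series after 30 steps sums to 2^30 minus 1, which is why the total lands just over a billion meters.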

That is the difference between how we think the future is coming (linear) and how it is actually coming (exponential).

We are currently standing at roughly step 25 or 26 in the world of computing. We have been doubling the power of computers every couple of years for decades—this is known as Moore’s Law. For a long time, the numbers were small. The difference between 0.001 and 0.002 isn’t noticeable to the average person. But once you hit the whole numbers—once you hit the “knee of the curve”—things go vertical.
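The compounding behind Moore's Law is easy to underestimate, so here is a minimal toy projection, assuming a clean doubling every two years (real chip progress is messier than this, of course):

```python
# Toy Moore's Law model: capability doubles every `period` years.

def doublings(years: float, period: float = 2.0) -> float:
    """Growth factor after `years`, doubling every `period` years."""
    return 2 ** (years / period)

print(doublings(10))  # -> 32.0      (32x in one decade)
print(doublings(20))  # -> 1024.0    (about a thousand-fold in two decades)
print(doublings(40))  # -> 1048576.0 (about a million-fold in forty years)
```

That million-fold jump over forty years is what makes the "knee of the curve" feel sudden even though the growth rate never changed.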

We are seeing this right now. Think about the iPhone. It came out in 2007. That wasn’t that long ago. Before that, we didn’t have the App Store. We didn’t have Uber. We didn’t have Instagram. Entire economies and behaviors shifted in less than two decades.

Now, apply that speed to Artificial Intelligence. In 2020, GPT-3 was released, and it could barely write a coherent email. Three years later, GPT-4 passed a simulated Bar Exam with a score in the top 10 percent of test takers.

That is not linear growth. That is a rocket ship taking off. And this leads us to the man who put a name to this phenomenon.

Ray Kurzweil and the Prediction of 2045

You can’t talk about the Singularity without talking about Ray Kurzweil.

Now, some people dismiss futurists as guys with crystal balls who just guess. But Kurzweil isn't a psychic; he is an engineer. This is the guy who invented the first flatbed scanner. He invented the first print-to-speech reading machine for the blind. He works at Google as a Director of Engineering. He deals in data, not magic.

Decades ago, Kurzweil plotted the growth of computing power on a graph. He looked at how fast we were making chips smaller and faster. And he noticed that this exponential curve was incredibly consistent. It held up through wars, through recessions, through the dot-com bubble. Nothing stopped the curve.

Based on this math, he made a prediction. He predicted that by the year 2029, AI would pass a valid Turing Test—meaning it would be indistinguishable from a human in conversation.

When he said this in the 90s, people laughed. They said, “No way, computers are dumb calculators.”

Well, look at where we are. It is the mid-2020s, and we have Large Language Models that can write poetry, code software, and chat with you about philosophy. We are right on schedule for his 2029 prediction.

But 2029 is just the warmup. His big number—the scary number—is 2045. Kurzweil predicts that the year 2045 will be the year of the Technological Singularity. This is the definition you need to remember for this episode:

The Technological Singularity is the moment when machine intelligence becomes vastly more powerful than all human intelligence combined. It's the moment when we build a computer that is smarter than us. And because it is smarter than us, it can build a computer even smarter than itself. And that computer builds an even smarter one. This creates a feedback loop—an "intelligence explosion."

In this scenario, scientific progress that used to take us 100 years could happen in 100 minutes. We aren’t just talking about better video games. We are talking about solving aging. Curing all known diseases. Nanobots in our bloodstreams. Uploading our consciousness to the cloud.

It sounds like science fiction, right? It sounds like the plot of a movie. But if you look at the math of exponential growth—that 30 steps around the world logic—it is actually the most likely trajectory we are on.

However, before we pack our bags for the Matrix, we need to understand where we are right now. Because there is a massive confusion in the media between what AI is today, and what the Singularity requires it to be.

The Great Divide: Narrow AI vs. AGI

If you open the news, you see headlines: “AI is taking over,” “AI learns to lie,” “AI creates art.”

It is easy to think the Singularity is already here. It is not. To understand why, you need to distinguish between two terms: Narrow AI and General AI.

Let’s start with Narrow AI (ANI).

Narrow AI is what we have today. It is brilliant, but it is fragile. It is an expert in one tiny lane.

Your calculator is a Narrow AI. It can multiply numbers faster than Einstein, but if you ask it to tell you a joke, it does nothing.

Deep Blue, the computer that beat Garry Kasparov at chess in the 90s, was Narrow AI. It was a god at chess, but it didn’t know what chess was. It didn’t know it was playing a game. It couldn’t drive a car or cook an egg.

Even ChatGPT and Midjourney are forms of advanced Narrow AI. They are probabilistic engines. When ChatGPT writes a sentence, it isn’t “thinking” in the way you do. It is calculating the probability of the next word. It has read the entire internet, so it knows that after “The cat sat on the…” the most likely word is “mat.” It mimics intelligence. It mimics reasoning. But it is stuck in the domain of language processing. It doesn’t have a body; it doesn’t have sensory experience; it doesn’t have “common sense.”
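Real language models are enormously more sophisticated than this, but the "calculate the probability of the next word" idea can be illustrated with a toy bigram model. Everything below is a made-up miniature for illustration, not how any real system is built:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "the entire internet".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> "on" ("sat" is always followed by "on" here)
print(predict_next("on"))   # -> "the"
```

A model like this has no idea what a cat or a mat is; it only knows which words tend to follow which. Scale the same statistical idea up by many orders of magnitude and you get something that mimics reasoning remarkably well while still being, at its core, a next-word predictor.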

Now, compare that to AGI (Artificial General Intelligence).

This is the Holy Grail. This is the goal.

AGI is a machine that has the ability to understand, learn, and apply knowledge across a wide variety of tasks, just like a human.

A human can learn to play chess, then go make a sandwich, then drive a car, then comfort a crying friend. We are generalists. We have adaptability.

Steve Wozniak, the co-founder of Apple, came up with a great test for AGI called the “Coffee Test.”

He said: A machine doesn’t have AGI until it can walk into a random American home that it has never seen before, navigate to the kitchen, identify the coffee machine, find the coffee grounds, find a mug, and brew a cup of coffee without help.

Think about how hard that is for a robot. It has to recognize objects it’s never seen (maybe the coffee is in a bag, maybe a jar). It has to understand physics (how to open the cupboard). It has to understand liquids.

Right now, no robot on earth can do that reliably. We have robots that can do backflips, but we don’t have a robot that can navigate a messy kitchen as well as a five-year-old child.

The Singularity happens only when we crack AGI. When we move from “Task Specific” to “General Adaptability.”

The Intelligence Explosion: The Final Invention

So, here is the million-dollar question: How do we get from a Chatbot to AGI? And what happens when we do?

This is where things get wild. This is where the “Event Horizon” comes into play.

Many experts believe that once we create an AGI—a computer that is roughly as smart as a smart human—it won’t stay that way for long.

If you or I want to get smarter, we have to read books, go to school, sleep well, and maybe drink some coffee. It is a slow biological process.

But if software wants to get smarter, it just needs more compute. It just needs to rewrite its own code to be more efficient.

The theory is that the moment an AGI reaches human level, it will immediately begin to improve itself. It will become an ASI—Artificial Super Intelligence.

Let’s try to visualize the gap here.

We use IQ to measure intelligence. The average human scores 100. Einstein was maybe 160. A person with a significant cognitive impairment might score around 70.

The gap between 70 and 160 defines our entire civilization—from tying shoelaces to General Relativity.

Now, imagine an ASI with an IQ of 10,000.

We literally cannot comprehend what that means. To an ASI, a human being would look like how an ant looks to us.

Does the ant understand why you are building a highway over its hill? No. It can’t. The gap is too big.

This is why the Singularity is an “Event Horizon.” We cannot predict what an entity that smart will do.

Will it solve climate change in an afternoon? Probably.

Will it figure out how to travel faster than light? Maybe.

Will it view humans as pets, partners, or pests? We don’t know.

But here is the catch—and this is why I want to ground this in reality for you. We are not there yet. We are in the “foothills” of this mountain.

We are currently seeing “emergent behaviors” in our AI. This is when the AI does something we didn’t explicitly program it to do.

For example, programmers trained an AI to translate languages. Suddenly, it figured out how to reason through logic puzzles in a language it wasn’t even trained on. It generalized.

We are seeing the sparks of AGI. And because of the exponential curve, the time between “sparks” and “fire” might be much shorter than we think.

Why This Matters To You (Real Life Examples)

"Okay, Coach Danny, this is all very interesting philosophy, but I have a job to go to and rent to pay. Why does the Singularity matter to me today?"

It matters because the pre-shocks of this earthquake are already hitting our economy and our daily lives. You don’t have to wait for 2045 to feel the impact.

1. The Value of Skills is Flipping.

For the last 50 years, we told everyone: “Don’t work with your hands, work with your head. Go to law school. Learn to code. Become an accountant.”

The assumption was that physical jobs are easy to automate (robots in factories) and cognitive jobs are safe.

The road to the Singularity has flipped this upside down.

It turns out, it is surprisingly easy to simulate a lawyer. It is very hard to simulate a plumber.

Narrow AI can scan a 500-page contract in 3 seconds and find the errors. It can write a Python script in 5 seconds.

But a robot cannot yet easily crawl under your sink, identify a weird leak, and fix it with a wrench.

We are entering an era where “white-collar” drudgery is at the highest risk, and “blue-collar” dexterity or high-level human strategy is the most valuable.

2. The Speed of Scientific Discovery.

We are already using “Proto-AGI” tools to do things humans couldn’t.

Look at AlphaFold by Google DeepMind.

For 50 years, biologists struggled with the “Protein Folding Problem.” Proteins are the building blocks of life. Understanding their shape helps us cure diseases. Humans took years to map a single protein.

AlphaFold mapped nearly all known proteins—200 million of them—in roughly a year.

This isn’t just a tech upgrade. This is the beginning of the “Singularity” in medicine. This means new drugs, new materials, and new treatments arriving at a pace doctors can barely keep up with.

3. The Truth Crisis.

As we approach the Singularity, the line between reality and simulation blurs. We already have Deepfakes. We have AI voices that sound exactly like your mother.

Part of living in the “Event Horizon” means we have to develop a new immune system for information. You can no longer trust your eyes or ears. You have to verify sources. This is a critical life skill you need to develop now, not in 2045.

Don’t Panic, Prepare

So, where does this leave us for Episode 1?

We are standing on a curve that is bending toward vertical.

We talked about the Linear Trap—why our brains want to believe the future will be slow, even though the math proves it will be fast.

We talked about Ray Kurzweil and his prediction that by 2029 we will have human-level AI, and by 2045, we will hit the Singularity.

We distinguished between the Narrow AI we have today (smart tools) and the AGI we are chasing (smart entities).

It is a lot to process. And it is natural to feel a mix of excitement and terror. That is the correct response to the sublime.

But remember this: The future is not a movie that you are just watching. You are in it.

Technology is a tool. Fire can burn your house down, or it can cook your food and keep you warm. The difference is how much you respect it and how well you understand it.

The Singularity represents the ultimate fire. It has the potential to solve our greatest problems—scarcity, disease, ignorance. But it requires us to be awake.

In the next episode, we are going to look at the first major battleground of this new era. It’s not a war zone. It’s the art studio.

We are going to talk about The New Renaissance. Can a machine have a soul? Can an algorithm paint a masterpiece? And if a computer can write a symphony, what is left for humans to do?

We are going to explore the clash between human creativity and generative algorithms. It is going to be a fascinating discussion about what it means to be a creator.

Key Takeaways from Episode 1

Before you go, here is a quick recap of the “Mental Upgrades” from today’s session:

  • Think Exponentially, Not Linearly: Don’t judge the next 10 years by the last 10. We are at the “knee of the curve.”
  • Know Your Definitions: Narrow AI is a tool (calculator). AGI is a general intelligence (human-level). ASI is Superintelligence (god-like). We are currently transitioning from Narrow to AGI.
  • The 2045 Target: Ray Kurzweil’s prediction for the Singularity is 2045. Even if he is off by 10 or 20 years, it is likely happening within the lifetimes of many listening to this.
  • The Coffee Test: A robot isn’t truly intelligent until it can go into a strange kitchen and make coffee. Adaptability is the benchmark, not calculation speed.

I’m Coach Danny. This is “The AI Horizon.” Keep learning, stay curious, and I’ll see you in the future.

See you in Episode 2!


Danny Ballan

Author

Host and founder of English Plus Podcast. A writer, musician, and tech enthusiast dedicated to creating immersive educational experiences through storytelling and sound.
