The Enchantress of Numbers Meets GPT: Ada Lovelace on the AI Controversy

by Danny Ballan | Jan 26, 2026 | Fantastic Guests


It is rare to speak with a guest whose foresight was so piercing that it took humanity a hundred years just to build the machine she had already programmed in her mind. Born in 1815, the only legitimate daughter of the mad, bad, and dangerous-to-know Lord Byron, she chose a path not of romantic poetry but of “poetical science.” She saw the beauty in logic and the music in mathematics. Widely regarded as the first computer programmer, she was the visionary who realized that a computer—specifically Charles Babbage’s Analytical Engine—could manipulate any symbol, not just numbers, so long as those symbols followed the rules of logic. Please welcome the Enchantress of Numbers, Ada Lovelace.

The Interview

Host: Lady Lovelace, it is an absolute honor. You famously wrote about the Analytical Engine in 1843. Today, we have engines that fit in our pockets and engines that claim to write poetry. What is your initial reaction to the current state of “Artificial Intelligence”?

Ada Lovelace: The honor is mine, though I admit to being somewhat disoriented by the lack of coal smoke in your air. As for your “Artificial Intelligence,” I find the terminology amusing. In my time, we struggled to make the machine simply obey. Now, you seem terrified that it has learned to disobey.

Host: That’s exactly the heart of the controversy. We are worried about control. But more specifically, there is a debate about creativity. You famously wrote that the Analytical Engine “has no pretensions whatever to originate anything.” Do you still stand by that, looking at tools like ChatGPT or Midjourney?

Ada Lovelace: Ah, the “Lovelace Objection,” as Mr. Turing later called it. I have been observing your era. You have machines that ingest the sum total of human literature and imagery, grind it into a fine mathematical dust, and then reconstitute it into new shapes. It is impressive, certainly. But does it originate? I would argue no.

Host: But if I ask it to write a poem about a toaster in the style of your father, Lord Byron, it does it. And it’s a new poem. Is that not origination?

Ada Lovelace: It is mimicry, not origination. It is the weaving of the Jacquard loom on a grand scale. The loom weaves flowers and leaves into the silk, but the loom does not know the flower, nor does it smell the leaf. Your AI weaves probability. It calculates that after the word “darkness,” the word “visible” often follows. It is playing a statistical game of notes, but it does not hear the music. My father wrote from passion—often misguided passion, but passion nonetheless. Your machine writes from probability.
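Lady Lovelace’s “statistical game” can be sketched in miniature. The toy bigram model below is a drastic simplification of a modern language model (which uses neural networks over vast corpora, not simple counts), and the tiny corpus here is invented purely for illustration, but it shows the core idea she describes: the machine picks the next word by frequency, not by meaning.

```python
from collections import Counter, defaultdict

# A made-up miniature "corpus" -- illustrative only.
corpus = (
    "darkness visible upon the deep and darkness visible again "
    "the deep and the night and the deep"
).split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("darkness"))  # prints "visible" -- by count, not comprehension
```

The model “knows” that “visible” follows “darkness” only because the numbers say so; as Ada puts it, it plays the notes without hearing the music.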

Host: So you view AI as a sophisticated mirror rather than a new mind?

Ada Lovelace: Precisely. A mirror that rearranges the reflection, perhaps, but a mirror nonetheless. However, I must admit, the complexity of the weaving is… seductive. I envisioned the engine manipulating pitches of music to compose scientific pieces. You have achieved that. But the controversy you speak of—this fear that the machine will replace the human—stems from your failure to distinguish between the product and the process.

Host: Can you elaborate on that?

Ada Lovelace: A machine can produce a result that looks like thought. If you judge only by the product—the essay, the image—then yes, the machine is your rival. But if you value the process—the struggle of the soul to articulate a truth, the “poetical science” of linking disparate ideas through human experience—then the machine is merely a tool. Why are you so eager to outsource your own minds?

Host: That’s the trillion-dollar question. We call it “efficiency.” But let’s talk about the “Black Box” problem. We often don’t know how these AI models reach their conclusions. As a mathematician who meticulously worked out, step by step, how the Engine could compute Bernoulli numbers, how does that sit with you?

Ada Lovelace: It is abhorrent! [Laughs] It is the antithesis of science. The beauty of the Analytical Engine was its absolute transparency. We knew exactly where every cog turned, how every punched card set its variable. To build a machine you do not understand, to rely on an oracle that speaks without showing its derivation… that is not mathematics. That is mysticism. You have built a god instead of an engine.

Host: That’s a terrifying way to put it.

Ada Lovelace: It should be terrifying. If you cannot trace the logic, you cannot correct the error. I spent months translating Menabrea’s memoir and adding my own notes, ensuring every step was logical. If your AI makes a mistake—what you call a “hallucination”—and you do not know why, you are drifting in a boat without a rudder.

Host: So, if you were alive today, would you be a coder for OpenAI, or would you be protesting against them?

Ada Lovelace: I would likely be trying to dismantle the “Black Box.” I would be attempting to map the neural networks back to comprehensible logic. I would not protest the existence of the machine—I loved the machine. But I would protest the surrender of human agency to it. We must remain the ones who order the performance. The moment we ask the machine to decide what to perform, rather than just how to perform it, we have abdicated our throne.

Host: One last question. Do you think a machine will ever truly possess a “soul”?

Ada Lovelace: Mathematics is the language of the universe, and perhaps the soul is written in that language too. But until you can prove to me that a machine can feel the heartbreak of a minor chord or the awe of a starry night, I shall remain a skeptic. It may simulate the tear, but it will never feel the sorrow.

Discussion Questions

  1. The Lovelace Objection: Ada argues that AI cannot “originate” anything, only do what it is ordered. In the context of Generative AI (which creates new images/text), is this objection still valid, or has the definition of “originate” changed?
  2. Process vs. Product: Ada distinguishes between the result of work and the human struggle to create it. Does the value of art or writing come from the final output, or the human experience behind it?
  3. The Black Box: Ada criticizes the lack of transparency in modern neural networks. Is it dangerous for society to rely on technology where the decision-making process is opaque, even to its creators?
  4. Mimicry vs. Understanding: The interview suggests AI plays a “statistical game” rather than understanding meaning. Does it matter if the AI “understands” what it says, as long as the output is useful and accurate?
  5. Abdication of Agency: Ada warns against asking machines what to perform rather than how. Are we crossing this line by letting AI algorithms determine our news feeds, entertainment, and hiring decisions?



Danny Ballan

Author

Host and founder of English Plus Podcast. A writer, musician, and tech enthusiast dedicated to creating immersive educational experiences through storytelling and sound.
