The New Monster

Written By: Ariyana Puri
Graphic By: Peyton Hays

“Learn from me, if not by my precepts, at least by my example.”
-- Mary Shelley, Frankenstein (1818)

When I flipped through the pages of Mary Shelley’s Frankenstein, I treated the novel like an ancient relic of irrational fear. I viewed it as a gothic overreaction to intellectual progress and remember thinking, thank God we haven’t gotten to that point.

The stitched body, crafted in the middle of a lightning storm from pieces stolen from graveyards, felt clichéd and exaggerated. “It was a dark and stormy night.” Been there, done that. Whatever Shelley was afraid of seemed safely contained in the era of Romanticism, when people were frightened of science only because they did not understand it.

That reading misses the point of the story entirely.

Frankenstein is not an anti-science story. Instead, it is a critique of the belief that innovation is inherently good simply because it is possible. Shelley was writing at a moment when the Enlightenment’s newfound faith in reason and progress collided with Romanticism and its insistence that knowledge without responsibility can cause great harm.

The point of Shelley’s novel is not whether Victor Frankenstein can create life; it is whether he should. More importantly, the novel asks whether Victor understands the consequences that follow his creation.

His sin is not his ambition but his refusal: not a refusal to follow the rules of experimentation, but a refusal to assume moral responsibility for the “success” of his experiment.

The moment his experiment succeeds, Victor recoils. He does not stop to consider when the creature became “alive,” nor does he care for, guide, or even morally regard it. Instead, he abandons it. The creature’s monstrosity was not inherent but conditioned through neglect, isolation, and rejection. The creature begins well-intentioned, curious, articulate, and compassionate, and becomes less so only after learning its place in the world.

This is the Romantic warning often overlooked by modern readers. Shelley suggests that science is not self-justifying, and that progress without morals leads not to enlightenment but to alienation.

I’ve found myself asking similar questions today about my good friend, ChatGPT. Is AI sentient? Conscious? Alive?

These questions are strangely comforting to many, because they delay responsibility. Essentially, we are telling ourselves that if AI is not yet “alive,” then it is merely a tool: something we can roll out, profit from, and then discard with no real consequences.

Victor Frankenstein never gives his creation a name, because to name something is to enter into a relationship with it. The omission of a name is a denial of accountability. We do the same thing when we call AI “just a tool” while giving it the information and power to make widespread, life-changing decisions.

Frankenstein teaches us that sentience is not the relevant threshold. Impact is.

AI systems are already able to fluently speak multiple languages, simulate empathy, form and shape opinions, influence legal and professional outcomes, and mediate human relationships. While AI may not be able to feel, it is able to act, and those actions carry weight.

Victor Frankenstein’s creature did not begin with full moral agency; it acquired that agency gradually through its interactions with humans. Shelley understood that harm does not require malice; it requires only neglect. Artificial intelligence develops in a similar manner, not through intention but through accumulation. Each user interaction trains, shapes, and recalibrates the system’s next response. The more people speak to AI, the more authority it gains in shaping future outcomes.

Power emerges not because anyone wills it, but because no one claims responsibility for it. No single developer says, “Let’s create a system that permanently alters how millions of people communicate with and view the world around them.” Instead, companies build it, institutions adopt it, users rely on it, and regulators hesitate to pull it back. Everyone benefits, and no one claims full responsibility.

As a result, AI gains structural power, setting defaults, normalizing responses, and shaping expectations. What begins as assistance becomes authority when accountability and ownership are diffused.

Romanticism arose in opposition to the idea that reason could function as the sole guide for human progress. Shelley did not reject science entirely but demanded humility in a world where humans chase innovation without considering the costs.

This fear is hardly theoretical today.

Each generation of AI becomes more autonomous, integrated, and consequential. Asking whether AI is conscious is beside the point; the real question is, are we willing to claim responsibility for the systems that increasingly shape human lives?

The true monster in Mary Shelley’s Frankenstein is not the creature. It is the belief that creation exonerates the creator once the work is done. Shelley was not warning us against curiosity, but abandonment.

When I first read Frankenstein, I saw the horror in trying to play God. Now, I see the horror in creating something powerful and refusing to stay with it once the morals get muddied. AI does not need to be alive to matter—it only needs to be consequential.
