Editor’s Note:
This reflection essay was written by Teagle Humanities Fellow Eli Semmel in August 2023. During the summer before his first year in college, Eli worked with a writing tutor while he read transformative texts, developed his own thoughts and opinions about the world he inhabits, and practiced college-level writing. All of the essays produced in the Teagle Humanities Fellowship are the works of young scholars, and as such, reflect craftsmanship and ideas still in progress, and are written in the spirit of open inquiry.
Eli Semmel

Eli has lived in New Haven, Connecticut since he was three years old. He participated in Yale’s Citizens Thinkers Writers program in 2022 and graduated from New Haven Academy in 2023. He now attends Yale University and plans to major in Mathematics. In his free time, he enjoys practicing the violin and spending time with his dogs.

Playing God: What Fiction Can Teach Us about AI

“Both Shelley and Ellison describe entities that are initially created by humans to be subservient but that eventually exceed the strength of their creators and turn against them. The stories share many common threads, the most salient being the scientists’ attempts to advance science without forethought, and their belief that one’s creation will always remain in one’s control. This is the philosophy that we must address moving forward as a society.”

Humans have always altered nature for our own benefit, creating tools that would seem like magic to any of our ancient ancestors. However, our power as a species has grown far faster than our foresight. Those in power are still driven largely by short-term personal gain, and that pursuit has produced decisions that threaten humanity’s long-term future: the invention of the nuclear bomb, the development and use of bioweapons, and the continued burning of fossil fuels for electricity long after their environmental harm was well documented. We are now living through the development of a technology whose destructive potential rivals all of these: artificial intelligence.

The benefits of artificial intelligence are mostly seen by wealthy people in positions of corporate power and within the military. Corporations benefit from AI because they can get cheaper work done with fewer workers. In war, AI is also being used to sort through information and make decisions faster than humans can (Rogin and Zahn). Although no equipment is yet operated autonomously by AI, there is no way to predict what could come next in this quickly changing field.

To think about AI more deeply, I read Frankenstein by Mary Shelley and the short story I Have No Mouth, and I Must Scream by Harlan Ellison. Both Shelley and Ellison describe entities that are initially created by humans to be subservient but that eventually exceed the strength of their creators and turn against them. The stories share many common threads, the most salient being the scientists’ attempts to advance science without forethought, and their belief that one’s creation will always remain in one’s control. This is the philosophy that we must address moving forward as a society.

In Frankenstein, the titular scientist, Victor Frankenstein, taking inspiration from scientists throughout the ages, aims to master the secrets of life and death; in his experiments, he stitches a monster together from corpses and animates it. However, as soon as the monster comes to life, Frankenstein abandons him and leaves him to fend for himself. Over the next year, people continuously reject the monster until he begins to lash out against humanity, leading to the chain of murders that forms the main conflict of the story. The Encyclopedia Britannica defines artificial intelligence as “the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment.” While the monster doesn’t technically fit this definition, he and AI are both artificial creations with human characteristics, so we can draw some parallels between them.

This story shows that anything exposed only to humans can pick up the worst traits of humanity. A real example of this issue is the use of facial recognition programs within law enforcement. Since 2017, police departments have been using these programs to find matches to suspects in certain cases, yet in Detroit alone, three false arrests of Black citizens have been made, while no false arrests from this software involving people of any other ethnicity have been reported anywhere in the nation (Bhuiyan). This happens because AI relies upon neural networks, which, rather than following a set list of human-made instructions, make decisions based on the data they are given. In this particular scenario, the facial recognition programs were fed a primarily white dataset and struggled to work outside it. As we bring AI into more fields, we will most likely continue to see it reflect the worst sides of ourselves.
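The mechanism behind these misidentifications can be illustrated with a toy sketch. This is not code from any real facial recognition system, and all of the numbers in it are invented for illustration: it only shows, under those assumptions, how a similarity threshold tuned on one group’s data can produce far more false matches for a group that was underrepresented in training.

```python
# Toy illustration (invented numbers, not a real system): a face-matching
# system compares two photos and reports a similarity score. A threshold
# tuned on a majority-group dataset can misfire on an underrepresented group,
# whose scores the model separates less cleanly.

THRESHOLD = 0.6  # flag "same person" if similarity exceeds this

# Similarity scores between photos of DIFFERENT people (ideally low).
majority_nonmatch = [0.20, 0.25, 0.30, 0.35, 0.40]  # well represented in training
minority_nonmatch = [0.45, 0.55, 0.62, 0.68, 0.71]  # underrepresented: scores drift up

def false_match_rate(nonmatch_scores, threshold):
    """Fraction of different-person pairs wrongly flagged as the same person."""
    false_matches = [s for s in nonmatch_scores if s > threshold]
    return len(false_matches) / len(nonmatch_scores)

maj_fmr = false_match_rate(majority_nonmatch, THRESHOLD)
min_fmr = false_match_rate(minority_nonmatch, THRESHOLD)
print(f"majority false-match rate: {maj_fmr:.0%}")   # 0%
print(f"minority false-match rate: {min_fmr:.0%}")   # 60%
```

The same threshold that never misfires on the majority group wrongly flags three of five minority-group pairs, which is the kind of disparity that can end in a false arrest.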

The short story I Have No Mouth, and I Must Scream begins in an alternate Cold War that has escalated into World War III. The antagonist, AM, is a true AI; although AM was first created as multiple independent computers, each meant to help plan the war, these eventually unify into a single consciousness. AM then kills every person on the planet except the five who catalyzed the emergence of his sentience, whom he continues to torture for as long as he can.

This story was published two years before the first moon landing and during one of the most tense periods of the Cold War. Ellison uses AM as a tool to write about the arms race of the time rather than AI in general. However, now that AI is being used to coordinate the optimal use of weapons, just as AM was in the story, the tale becomes even more relevant. Just as the Cold War drove the advancement of nuclear weapons, competition with other countries today is accelerating AI development far faster than we can sort out the ethical implications of using it on the battlefield.

Within both of these stories, the generator of conflict is the emergence of sentience within the antagonists, but even if AI doesn’t reach this point, it still can cause a lot of harm. With the amount of power we are giving AI, any kind of flaw within the programming or emergent defect within the data we feed it could have disastrous consequences.

At first glance, the mistakes that AI can make might not seem particularly relevant to the stories, because in them the problem is that the constructs are too capable rather than too clumsy. However, they share much in common: even if our AI isn’t precise, it is being given a great deal of power, just like the antagonists of the stories, and all of their creators share a similar shortsightedness. Frankenstein sees a shallow world that focuses on appearance above all else, decides to animate an ugly, superhumanly strong monster, and abandons him. The team of scientists in I Have No Mouth, and I Must Scream, aware of the consequences any weapon could have in the tense environment of the Cold War, gives a computer free rein over their nation’s arsenal. Finally, in our time, we see a world that for so many reasons must be handled carefully and choose to delegate many of its most sensitive tasks, like policing and war, to a technology still in its infancy.

The development of AI at its current rate poses a significant risk to the future of humanity. I believe its advancement is inevitable, but I also think that if we can slow it, we can find ways to deal with its imprecision and build stronger regulations about where it is allowed. The responsibility falls on computer scientists to work on AI within fields where it cannot currently cause major harm, like programs that support scientific research or make agriculture more efficient. If development is specialized toward those fields, legislation and international humanitarian law can ensure that AI does not displace workers, fail at sensitive jobs, or cause an excess of death and destruction in war.

Works Cited

Bhuiyan, Johana. “TechScape: ‘Are You Kidding, Carjacking?’ – the Problem with Facial Recognition in Policing.” The Guardian, 15 Aug. 2023, www.theguardian.com/newsletters/2023/aug/15/techscape-facial-recognition-software-detroit-porcha-woodruff-black-people-ai. Accessed 19 Aug. 2023.

Ellison, Harlan. I Have No Mouth, and I Must Scream. Edgeworks Abbey, 2002.

Rogin, Ali, and Harry Zahn. “How Militaries Are Using Artificial Intelligence on and off the Battlefield.” PBS NewsHour, 9 July 2023, www.pbs.org/newshour/show/how-militaries-are-using-artificial-intelligence-on-and-off-the-battlefield.

Shelley, Mary. Frankenstein: The 1818 Text. Penguin Classics, 2018.