If you spend too much time on the Internet like I do, you may have seen this uncanny monstrosity cross your timeline last week.
This is a trailer for The Lord of the Rings as “directed” “by” “Wes Anderson,” complete with a “Minas Tirith” set straight out of Grand Budapest Hotel and “Bill Murray” in the role of Gandalf. None of this exists, of course. In case it isn’t immediately obvious from the gross plastic quality of the entire enterprise: the trailer was frankensteined together with A.I. It was created (in the loosest sense of that word) by a Los Angeles-based startup called Curious Refuge. The company pitches A.I. as the “future of filmmaking” and, in a yet-to-be-released tutorial that will almost certainly languish behind a steep paywall, purports to teach “how to use AI to assist in the entire filmmaking process” from scriptwriting to editing to voiceover.
Given the film industry’s predilection for formulaic writing, computer-generated landscapes, and the ghoulish practice of technologically resurrecting simulacra of dead actors, A.I. isn’t so much a revolution as the next logical step toward banishing human labor from Hollywood. The choice to give LOTR the uncanny-valley treatment is especially ironic, however, because if there’s one thing J.R.R. Tolkien would have despised with every fiber of his being, it is A.I.
I’m not going to pretend to be an expert in the large language models (LLMs) behind A.I. text generators like ChatGPT, or the related diffusion models behind image generators like Stable Diffusion. If you can get past some of his politics, the computer scientist Jon Stokes provides a very solid overview; Adam Conover has a good profanity-laced explainer video too. As I understand it, these models are trained on huge datasets of symbols—words in a sentence, pixels in an image, frames in a video—and learn the probability that any given symbol will occur in proximity to any other given symbol in a particular kind of sentence, image, or video. When a user feeds a prompt into one of these generators, it uses that probabilistic modeling to generate a text, image, or video that “looks like” other texts, images, and videos with the relevant clustered attributes.
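To make that idea concrete, here is a deliberately tiny sketch in Python of a word-level “next-symbol predictor,” sometimes called a bigram model. The toy corpus, function names, and parameters here are my own inventions for illustration; real LLMs use neural networks trained on billions of documents, but the core move, predicting the next symbol from the symbols that came before, is the same.

```python
# A toy "language model": count how often each word follows another
# in a tiny corpus, then generate text by sampling from those counts.
# (A crude illustration only; nothing like a production LLM.)
import random
from collections import Counter, defaultdict

corpus = (
    "the road goes ever on and on . "
    "the road goes down to the sea . "
    "all roads lead down to the sea ."
).split()

# For each word, count the words observed to follow it:
# an empirical stand-in for P(next symbol | current symbol).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start="the", length=12):
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # dead end: this word never preceded anything
        words, counts = zip(*candidates.items())
        # Sample in proportion to observed frequency: probable, not true.
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the road goes down to the sea . the road goes ever on"
```

Run it a few times and you’ll get fluent-looking recombinations of the training text: strings that resemble their sources without anyone having meant anything by them. Keep that in mind for what follows.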
All of this is a simplification, of course; if I’ve wildly misrepresented the process, feel free to tell me off in the comments. For now, I want to draw attention to two features of LLMs. First, they are what computational linguist Emily Bender, digital ethicist Timnit Gebru, and their coauthors of a seminal 2021 paper call stochastic parrots. LLMs don’t actually understand language, because they lack the ability to make meaning based on context and intentionality. “Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind,” they write. “It can’t have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that” (p. 616). This poses a couple of problems. For one, LLM-based A.I. has no way of determining whether a statement is true or false, only whether it’s probable. Example: I recently asked on social media for article recommendations on Tolkien and the Death of the Author. One commenter fed my question into ChatGPT, which duly generated plausible-sounding article titles by real-life Tolkien scholars. Trouble was, those articles did not exist, and they certainly hadn’t been written by my colleagues. This raises some serious questions about the spread of disinformation, especially if we fold in the threat of A.I. “deepfake” audiovisual manipulation. Likewise, if there are systematic biases or hate speech in an LLM’s training data, you might end up with an A.I. probabilistically screeching racial slurs at you. If I wanted that, I could just log on to Elon Musk’s Twitter, thanks very much.
All of this would be impossible, though, if LLMs weren’t feeding off vast quantities of pre-existing creative productions which real humans have made. Nobody, however, is paying those real humans for appropriating their creative productions in this way. “Generative” A.I. isn’t creating new artworks. It’s creating a probabilistic facsimile of existing artworks, made by actual artists, all while A.I.’s boosters pass those facsimiles off as original work and seek to profit from them. It is this aspect of A.I. which J.R.R. Tolkien would have hated most.
Tolkien has a deserved reputation as a bit of a Luddite. Like his Hobbits, he didn’t much care for “machines more complicated than a forge-bellows, a water-mill, or a hand-loom” (LOTR Prologue, p. 1). It’s not hard to understand why, quite apart from his well-documented love of green and growing things. Between witnessing the factories of Birmingham swallow up his beloved West Midlands countryside, losing all but one of his close friends to the mechanized horror of the Great War, and looking on as the next Great War raised the specter of global atomic annihilation, he had ample grounds for distrusting blithe narratives of technological progress. This distrust is woven through all of his fiction, but it would be a mistake to characterize it as mere technophobia. It is, rather, part and parcel of Tolkien’s deepest beliefs about what it means to be created in the image and likeness of God.
In his famous 1951 letter to potential publisher Milton Waldman, Tolkien describes his legendarium in terms of three interlocking themes: the Fall, Mortality, and the Machine. The Machine in Tolkien’s usage is “all use of external plans or devices (apparatus) instead of development of the inherent inner powers or talents – or even the use of these talents with the corrupted motive of dominating: bulldozing the real world, or coercing other wills” (Letters #131, p. 146). This last point is crucial and requires some unpacking. In his essay On Fairy-Stories, Tolkien asserts that humans are sub-creators. Unlike God (as understood in orthodox Western Christianity), we do not create ex nihilo or “out of nothing.” Rather, human creativity takes the given elements of creation as it exists—the primary world—and rearranges them to create new things. That might sound a bit like how LLMs respond to prompts. The difference, for Tolkien, is precisely what LLMs lack: intention, the desire to communicate, and the spark of the New, that which has never been seen before and never would have been if somebody hadn’t decided to make it and share it with the world.
Tolkien identifies this sub-creative spark with the Imago Dei, the image and likeness of God (Genesis 1:26-27) that subsists in every human being. That is to say, we are created in the image of a Creator. Thus for Tolkien, humanity’s sub-creative power, while derivative of and subordinate to God’s, is “ultimately of the same stuff” (OFS p. 47). Far from being a probabilistic recombination of symbols, human creativity is a kind of participation in the Divine life, a co-creative improvisation upon the Music of Creation: “We make still by the law in which we’re made” (OFS p. 56). There is an incredible power in that: our creativity reaches into the depths of our beings and touches the Secret Fire at the heart of All That Is. But that calls forth a tremendous responsibility not to use our creative powers for evil: domination, coercion, bulldozing the world.
Tolkien believed that we are constantly tempted to usurp the role of God and impose our will upon the world, whether the world likes it or not. This temptation “will lead to the desire for Power, for making the will more quickly effective – and so to the Machine (or Magic)” (Letters #131, p. 145). Time and again this temptation leads to downfall in Middle-earth: Melkor and his solipsistic music, the Elven-Smiths of Eregion and their Rings of Power, Fëanor and his Silmarils, Númenor and its imperial arrogance, Saruman and his mind of metal and wheels, Sauron and his One Ring. Creativity, which was and is in its origins good, is twisted into the power by which we subjugate others and wreck our common home.
So it’s worth asking: what power does A.I. purport to grant its wielder?
Some of A.I.’s boosters would have you believe that it permits “uncreative” types to give visible shape to their dreams, finally closing the gap between themselves and the artistic elite. This defense is framed almost as if A.I. were fighting ableism: I lack an ability that others have, in this case artistic ability, which is conceptualized as an inherent talent rather than a cultivated skill; A.I., then, is a technology of accessibility, putting me on an equal footing with all those privileged creatives. This is, in a word, gross. Quite apart from the obscene appropriation of the discourses of disability and accessibility to justify rendering uncompensated human labor into homogenized content slurry, it misses the fact that artistic ability is at least as much the result of practice as of inborn genius. It also elides the point that the people who stand to benefit most from A.I. “content” generation are not artists, actual or would-be, but corporate entities for whom the benefit lies mostly in being able to bypass paying for creative labor.
Why would a company want to use A.I. exclusively for the production of, say, a film? To reduce costs. It’s cheaper to feed prompts into an A.I. than it is to hire people to write, direct, film, edit, score, and provide voiceover for a project. A.I. filmmaking promises would-be filmmakers the same benefit Bloomsbury reaped from using an A.I.-generated image on the cover of fantasy author Sarah J. Maas’s recent book House of Earth and Blood: not having to pay a human artist. Surely Bloomsbury could have contracted with one for the cover of a bestselling author’s latest work? This is a huge part of what’s at stake in the ongoing Writers Guild of America (WGA) strike. As screenwriter Blake Masters (not that one) explains in a recent interview, a battle over residuals for streaming media quickly took on an existential cast when the Alliance of Motion Picture and Television Producers balked at WGA demands that A.I. not be used to replace actual writers. The point, as it usually is with these things, is to bolster profits at the expense of the real human beings whose (uncompensated) work provides the basis for A.I. training datasets.
In Christian theology, soteriology refers to doctrines of salvation: what we need to be saved from, what will save us, and what we need to be saved for. Technosoteriology, then, is the belief that technology will save us: save us, in the final analysis, from human finitude and the vulnerability our finitude entails. Far from being a labor-saving technology, A.I. reduces a human act of communication to an act of algorithmic exploitation and capitalist consumption. Instead of making art and sharing it with others as an expression of our Imago Dei humanity, our deep longing for relationship with others, corporate paymasters and their tech-drunk acolytes pump out content slop for consumers to lap up in the ever-shrinking hope that something might press the Happy Button in our serotonin-deprived brains. Human connection is replaced by Pavlovian stimulus-response; art is reduced to a nonfungible token. But the real nonfungible thing, the thing that cannot be exchanged or consumed but only encountered and perhaps even loved, is the human. And if you follow Tolkien, the human is a window, however dirty at times, into the Divine.
Tolkien fandom gets a lot of mileage out of debating whether J.R.R.T. would have approved of this or that change in adaptations of his work. I’m not convinced that such debates are productive, if only because none of us should presume to speak for an author who has been quite literally dead for fifty years now. That being said, there is one thing I am 100% positive Tolkien would’ve hated, and it is A.I. Lord of the Rings. And you know what? I do too.
…
WORKS CITED
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021): 610-623.
Conover, Adam. “A.I. is B.S.” YouTube video. March 31, 2023. URL.
Menaker, Will, host. “What’s at Stake in the WGA Writers’ Strike.” Chapo Trap House (podcast). May 12, 2023. URL.
Stokes, Jon. “ChatGPT Explained: A Normie’s Guide to How It Works.” jonstokes.com (blog). March 1, 2023. URL.
Tolkien, J.R.R. On Fairy-Stories. Edited by Verlyn Flieger & Douglas A. Anderson. London: HarperCollins, 2014.
—. The Letters of J.R.R. Tolkien. Edited by Humphrey Carpenter & Christopher Tolkien. Boston: Houghton Mifflin, 1981.
—. The Lord of the Rings. Boston: Houghton Mifflin Company, 1994.