July 12, 2022

The Measure of a Machine: Is LaMDA a Person?

By Dr. Brian Dellinger

In June 2022, Google suspended engineer Blake Lemoine from his work in artificial intelligence. Having previously assisted with a program called the Language Model for Dialogue Applications (LaMDA), Lemoine was placed on leave after publishing confidential information about the project. Lemoine himself disputes this description, saying, “All I talked to other people about was my conversations with a coworker.”

Complicating matters, that “coworker” is LaMDA itself.

LaMDA is Google’s latest conversation-generating artificial intelligence. If assigned virtually any identity — such as, say, “you are Tom Cruise,” or “you are secretly a squirrel” — it offers in-character conversation, patterning its responses on databases of real conversations and related information. Its dialogue is extremely sophisticated; LaMDA answers questions, composes poems, and expresses concern at being switched off. Lemoine claims that this behavior shows that LaMDA is a sentient person, and therefore not Google’s property. The company, and many experts, disagree. The claim, however, points to a fundamental question: if a computer program were a person, how would one tell?

Lemoine’s argument follows reasoning first introduced by Alan Turing, a father of AI and of computation in general. By 1950, Turing had observed a pattern in computational research. Skeptical observers would declare that only a thinking human could accomplish some task — drawing a picture, say, or outwitting another human — only to propose a new, more stringent requirement once a computer achieved it. Turing proposed a broader metric for intelligence: if an AI could converse indistinguishably from ordinary humans, it should be believed capable of true thought. After all, humans cannot directly detect sentience in each other, and yet typically assume that the people they converse with are precisely that: people.

Anyone fooled by a “robo-caller” can attest that even simple programs may briefly appear human, but the Turing Test as a whole remains a robust challenge. While LaMDA’s breadth of interaction is astounding, the program still shows conversational seams. The AI’s memory is limited in some substantial ways, and it is prone to insisting on obvious falsehoods — its history as a schoolteacher, for example — even when speaking to its own development team. While it often uses the right vocabulary, the structure of its arguments sometimes degenerates into nonsense.

Still, these things might not be disqualifying. Human beings, too, obviously lie and argue badly; most people would likely not question the self-awareness of another human who said the things that LaMDA does. Indeed, Lemoine argues that, by judging LaMDA’s utterances differently from those of biological humans, observers exhibit “hydrocarbon bigotry.”

More fundamentally, conversation alone is a poor way of measuring self-awareness. The most famous critic of Turing’s “imitation game” is the philosopher John Searle, who proposed a thought experiment called the Chinese Room. Searle imagined a sealed room; outside, a Chinese-language speaker composes messages and passes them in through a mail slot. Inside, a second participant receives the messages but cannot read them. With him in the room, though, is a stack of books defining a series of rules: “If you see such-and-such Chinese characters, write these other characters in response.” Obediently, the man in the room does so, copying a reply and passing it back out. From the perspective of the Chinese speaker, the exchanges are a sensible conversation; from the perspective of the person inside, they are a meaningless exchange of symbols.

Herein lies the flaw in conversation-based measures of intelligence. By definition, any computer program can be reduced to a series of input/output rules like the books in Searle’s imaginary room. An AI, then, simply follows its set of symbol-manipulation rules, forming words and sentences as instructed by the rules, without regard for semantics or comprehension. Any sense of meaning is thus imposed by the speaker “outside” the room: the human user.
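Searle’s point can be made concrete with a toy sketch: a responder that “converses” by pure symbol lookup, with no grasp of meaning. The rule table and phrases below are invented for illustration, not drawn from LaMDA itself.

```python
# A toy "Chinese Room": replies are produced by matching incoming
# symbols against a rule book and copying out the prescribed answer.
# Nothing here understands Chinese; it is syntax all the way down.
RULES = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def room_reply(message: str) -> str:
    # Look up the symbols; if no rule matches, ask the speaker to repeat.
    return RULES.get(message, "请再说一遍。")  # "Please say that again."
```

To the speaker outside, the replies can read as sensible conversation; inside the function, only uninterpreted strings are shuffled. A real system replaces the tiny dictionary with a vastly larger rule set, but the in-principle reduction is the same.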

LaMDA, of course, does not have simple rules of the form Searle pictures; no database of canned replies could suffice for its purposes. But the program’s operation is still ultimately reducible to a finite description of that form: given these symbols, take those actions. Indeed, a sufficiently motivated programmer could (very slowly) trace LaMDA’s operation entirely with pencil and paper, with no computer required, and produce identical results. Where, then, is the purported artificial person?

One might object that the same could be said of a human being. In principle, perhaps a dedicated biologist could trace every fluctuation of hormones and electricity in the brain, entirely describing its inputs and outputs. Doing so would not, presumably, deny that humans experience meaning. But this argument begs the question; it assumes that the mind is reducible to the brain, or more broadly, that human personhood reduces to physical properties. Indeed, the seeming inexplicability of consciousness in purely physical terms has earned a name in philosophy: “the hard problem.”

Christianity may be well-positioned to offer a better answer. Most Christians have historically understood personhood to depend on more than physical traits or conversational capabilities; unborn infants, then, are persons, while artificial intelligences are not. A robust defense of this understanding might be attractive — and, indeed, might offer valuable insight. Unfortunately, despite statements from groups like the Southern Baptists and the Roman Catholic Church, the church as a whole has been sluggish to respond to the theological questions of AI. LaMDA is not a final endpoint, and coming years will likely see many more who share Lemoine’s convictions. Increasingly, the church’s rising challenges share a common need for a rich anthropology: a biblical defense of what, precisely, it is to be human.

Dr. Brian Dellinger is an associate professor of computer science at Grove City College. His research interests are artificial intelligence and models of consciousness.
