The Irrational Exuberance around A.I.

The irrational exuberance accompanying stories about generative A.I. in legacy media is sometimes the point. The objective is to raise hackles or heartbeats to generate engagement. Generative A.I.'s forebears, simpler algorithmic machine-learning systems, have been helping do this for years across the social media universe.

It's important to caution against imbuing a generative A.I. tool with too much meaning. Humans' natural tendency toward anthropomorphism is generally healthy; it's what makes empathy possible. Still, it can lead us to superimpose qualities that aren't there. We do it with animals, both natural and not (I used to talk to a stuffed Daiken sleuth dog I'd gotten as a prize for a 3rd-grade read-a-thon. Its name was "Sluethy"). Sherry Turkle's "Alone Together" is a fascinating study of this phenomenon, built on subjects interviewed and observed as their interactions with robots and software turn uncanny. When a robot or program displays even the slightest bit of conduct that appears human, those subjects speak of it as if it were alive and aware and "cared." Some reporters in recent months may be doing the same thing, or playing on that tendency in others, to juice interest in a story. For instance, some headlines have referred to A.I. as "creepy" and "sinister," suggesting that the technology is intrusive and potentially dangerous.


The danger of projecting traits onto A.I. that it does not possess is the danger of abdicating responsibility for what A.I. produces. One recent news story involved an attorney who used ChatGPT for legal research and cited, in support of his brief, case law the bot returned. The problem was that the bot "hallucinated" the cases and their facts. At Texas A&M, a professor threatened to fail the students in his class for cheating because, when he copied and pasted their essays into ChatGPT and asked whether generative A.I. had produced them, it responded in the affirmative. The A.I. was wrong (the industry likes to say "hallucinate," a word that itself imbues the software with human qualities in a way a word like "error" does not).


Machine advancement has surpassed both imagination and magic, leading some to claim that the current technological shift is unprecedented: not just one technological revolution among many, but a primary social revolution we are experiencing.


Twelve years ago, writing about the plethora of literature praising and criticizing the magnitude of this transformation, Adam Gopnik broke down people's responses to these changes into three categories: the Never-Betters, the Better-Nevers, and the Ever-Wasers.


The Never-Betters envision a utopian future where information is free, the news is democratized, and love reigns. The Better-Nevers, meanwhile, believe that the world was better before these changes, and that books and magazines provide private spaces for deep thinking that the internet lacks. The Ever-Wasers, for their part, argue that similar shifts have occurred throughout history, provoking the same divided reactions each time; the churn itself is a characteristic of modern life. Gopnik was writing about public culture's responses to the digital revolution itself, but the same camps can pitch the same tents around A.I. For now, though, the Ever-Wasers are whisperers in the wind. The current dueling narratives are primarily between boon and doom.


Ultimately, the debate surrounding A.I.'s potential impact on society mirrors past concerns about other technological advancements. And both camps neglect the possibility that they are overestimating what A.I. is currently capable of doing.


In his essay "Yeats," T.S. Eliot says of the poet: "Something is coming through, and in beginning to speak as a particular man, he is beginning to speak for man." The machines, as impressive as they are and are becoming, cannot render expression that synthesizes a self and the world that self finds itself in, because there is as yet no "self." LLMs are still, for now, really, really, really great calculators that use words instead of numbers for their input and output. The model can only take a bunch of epistemically objective input and rearrange it into epistemically objective output; it cannot take epistemically objective input and render it into epistemically subjective output. It performs the tasks it performs with miraculous alacrity. But when we start believing machines are capable of miracles, that's when we start believing they are gods.
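To make the "calculator with words" image concrete, here is a minimal sketch: a toy bigram counter in Python, nothing resembling a production model, with a tiny corpus and names invented purely for illustration. The shape of the pipeline is the point: words are mapped to numbers, the computation in the middle is arithmetic over those numbers, and the result is mapped back into words.

```python
from collections import Counter, defaultdict

# Invented toy "corpus" -- purely illustrative.
corpus = "the cat sat on the mat and the dog slept on the mat".split()

# Step 1: words become numbers (a crude stand-in for tokenization).
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
id_to_word = {i: w for w, i in vocab.items()}
ids = [vocab[w] for w in corpus]

# Step 2: the "thinking" is arithmetic -- here, counting which token ID
# follows which. A real LLM swaps this counting for a neural network
# with billions of learned parameters, but it is still arithmetic.
successors = defaultdict(Counter)
for prev, nxt in zip(ids, ids[1:]):
    successors[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the corpus's most frequent successor of `word`."""
    best_id, _ = successors[vocab[word]].most_common(1)[0]
    return id_to_word[best_id]

# Step 3: numbers become words again.
print(predict_next("on"))  # -> "the" ("on" is always followed by "the" here)
```

An actual LLM replaces the counting with a vastly more powerful learned function, but nothing in that pipeline stops being arithmetic; nowhere does a "self" enter.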
