AI’s Truth, Lies, and Ethos

Summary: Conversational artificial intelligence is often a form of storytelling, and underlying some of AI’s stories is an artificial ethos that could be insidious. 

I am a cultural anthropologist, and I’ve been ruminating on the cultural impact of artificial intelligence. I recognize AI’s potential for increasing knowledge, boosting productivity, and generating medical breakthroughs. There is no question in my mind that AI will be a collaborative, indefatigable partner for humanity. I am so galvanized by it that this fall I will incorporate AI into the curriculum of my Columbia Business School course, Market Intelligence: The Art and the Science. At the same time, I am acutely aware of AI’s threats, among them job displacement, data breaches, and the annihilation of humanity (a 50-50 chance, according to BCA Research).

I am not writing here about the list Google Bard furnished when I asked it to outline AI’s impact on culture: democratizing media, improving our quality of life, changing how we work, and challenging our concept of identity. All are worth considering. However, my focus centers on a topic that has long fascinated anthropologists: storytelling and the role of stories as behavioral models and meaning makers. AI’s impact there could be more insidious than the topics Bard provided. AI’s well-documented racism is one concern, but my purview is broader.

For eons, only human beings crafted stories. Anthropologists have observed that stories express the ethos of a culture, epitomizing shared ideas and values, codifying social rules, and encompassing a world view. This essay is a musing – and a provocation – based on the notion that knowledge produced by the conversational AI many of us are now accessing is often a form of storytelling. If we look hard enough, we can discern a kind of artificial ethos underlying some of AI’s stories. 

At its most basic, a story imparts information. Sometimes it is difficult to know whether a story is real or imagined, and AI is not aware of the difference. Historical fiction blends fact and fiction by design. If you subscribe to multiverse theory, all stories can be true everywhere all at once. The reliability of stories as information has been fraught and fought over for years.

Rudyard Kipling’s Just So Stories contained tales on the origins of animals’ physical characteristics that aimed to amuse children; more than a century later, geneticist Lewis I. Held Jr. was moved to publish a scientific account of animal evolution. In the film Rashomon, different people witness a horrific event and provide conflicting accounts of what occurred. One interpretation of Rashomon is that facts are subjective and depend on one’s perspective. Science fiction author Ted Chiang writes about the truth of fact in contrast to the truth of feeling in a story, contemplating the merits of what is correct historically versus what is valuable for a community in the present. In 2005, comedian Stephen Colbert coined the term “truthiness” to denote the sense that something feels true even if it is false. “Fake news” gained currency as a political cudgel in the past decade but dates back to the late nineteenth century. The variability of truth-telling was cringingly demonstrated by political consultant Kellyanne Conway when she referenced “alternative facts” regarding the crowd size at Donald Trump’s presidential inauguration. Need more be said regarding George Santos’s fabrications about his family, ethnic heritage, education, employment, finances, residences, health, and charity work?

What are we to make of stories crafted by a non-human? How can we untangle fact from fiction when we ask AI for accuracy, are provided with fantasy, and the author’s judgment is absent? That is what occurs when AI “hallucinates.” I asked ChatGPT how I can be sure the information it provides is true. Here is part of its response: “It is important to note that ChatGPT is not infallible and can make errors or provide incomplete or inaccurate information…while ChatGPT can provide useful insights and information, it’s always a good idea to exercise critical thinking and verify information through multiple sources.”

Good advice. The problem is that many of us won’t heed it. We’re too busy, too lazy, or too trusting. While AI promises efficiency in knowledge generation, its errors of fact and interpretation can be problematic. My apprehension is elevated when I think about AI’s subtle, unfettered impact on our culture-based ideas, sentiments, and behavior. AI is not yet imparting an ethos consciously, but latent meanings and biases can be discerned in much of its content. If AI becomes sentient, will a purposeful ethos follow? Will enough of us decode the ethos embedded in its stories? What impact might an AI ethos have on its human users? How might it alter the character of our cultures? Are we prepared for the possibility that the stories AI concocts, and the artificial ethos they implicitly convey, could have profound and pervasive effects on who we are as human beings? These questions, which go beyond falsehoods per se, are not only for anthropologists to ask; they are questions for every one of us.

Our reliance on AI’s stories will deepen the more we tap its breadth and depth of knowledge and relish the speed with which it provides it. Even if purveyors of AI pause its technological advances to mitigate its risks, there is little doubt that AI will permeate our lives. Much of that will enable humanity to live smarter and better. This is a moment when we should reflect on AI’s storytelling facility, be cognizant of the artificial ethos conveyed in some of its stories, and take seriously their cultural implications. Before we buy into AI’s stories, we should recall the ancient Roman who, after being told a compelling but false tale by a market vendor, probably thought: caveat emptor.
