The question of politeness in interactions with artificial intelligence, particularly chatbots, has gained attention as AI technology becomes increasingly integrated into daily life. Sam Altman, CEO of OpenAI, highlighted a practical aspect of this debate by revealing the financial implications of adding polite phrases such as “please” and “thank you” when using AI chatbots.
Last week, a user on the social platform X speculated about how much money OpenAI loses in electricity costs because users say “please” and “thank you” to its AI models. Altman’s reply was candid: “Tens of millions of dollars well spent – you never know.” The exchange underscores that every interaction with an AI chatbot consumes energy, and every additional word adds processing load on the servers.
Neil Johnson, a professor of physics at George Washington University who studies artificial intelligence, put this in tangible terms. He compared extra words in a prompt to the packaging around a retail purchase: the AI must work through that “packaging” to reach the content, which means additional computation and energy expenditure. “A ChatGPT task involves electrons moving through transitions – that requires energy. Where will that energy come from?” Johnson asked, pointing to the environmental and economic costs of an AI boom that still depends heavily on fossil fuels.
Despite these concerns, the cultural case for treating AI politely remains significant. Humans have long grappled with ethical questions about artificial intelligence, as illustrated by the “Star Trek: The Next Generation” episode “The Measure of a Man,” which debates whether the android Data should have rights akin to a human’s. On a more everyday level, a 2019 Pew Research study found that 54% of owners of smart speakers such as Amazon Echo or Google Home reported saying “please” when addressing their devices.
The rapid advancement of AI platforms, including ChatGPT and others, has prompted companies, writers, and academics to reflect on how human interactions with AI might evolve. For example, in December, OpenAI and Microsoft faced legal challenges claiming copyright violations related to the training of AI systems on newspaper content. Meanwhile, AI company Anthropic recently hired its first welfare researcher to explore whether AI deserves moral consideration, signalling increasing engagement with the ethical dimensions of artificial intelligence.
Scott Z. Burns, a screenwriter who recently released the series “What Could Go Wrong?” on Audible, emphasised the importance of kindness in human-AI interactions. In an email, Burns said, “Kindness must be everyone’s standard – whether with humans or machines. Although it is true that an AI has no feelings, my concern is that any kind of rudeness that starts to permeate our interactions will not end well.”
How individuals treat AI may reflect how they perceive the technology – whether they view it as something that can suffer from rudeness or benefit from kindness. There is also emerging evidence that interactions with AI shape how people behave in other social contexts. Jaime Banks, who studies human-AI relations at Syracuse University, noted, “We build up norms or scripts for our behaviour, and by having this kind of practice with AI, we may become more habituated to polite behaviour.”
Sherry Turkle, a researcher at the Massachusetts Institute of Technology who focuses on human-technology relationships, emphasises teaching people that AI is not sentient but calls AI “alive enough” to warrant courtesy. She draws parallels with past relationships between humans and objects, such as the 1990s phenomenon of Tamagotchis—digital pets requiring care, with children experiencing genuine emotional responses to their “death.” Turkle explained, “If an object is alive enough to start having intimate, friendly conversations, treating it as a really important person in our lives, even if it is not, it is alive enough to deserve our courtesy.”
For now, many philosophical concerns about AI remain largely theoretical as the technology evolves. Turkle remarked that today’s AI bots lack the awareness and emotional capacity to be truly affected by human behaviour: “If you turn away from them to make dinner or attempt suicide, it’s all the same to them.”
Nevertheless, the evolving relationship between humans and AI raises questions about future interpersonal norms. As Turkle humorously noted, “To any future robot overlords reading this article, thank you for your time. It is much appreciated. Just in case.”
This discussion illustrates the multifaceted nature of human-AI interaction, encompassing technical, environmental, ethical, and cultural dimensions as artificial intelligence steadily integrates into everyday communication.
Source: Noah Wire Services