Many users describe ChatGPT as “alive,” but OpenAI avoids giving a definitive answer about whether the AI is conscious, calling this the more responsible approach.
More and more people are interacting with ChatGPT as if it were a person. They say “thank you,” share personal details, and even ask how the model is doing. Joanne Jang, who works on the design of human-AI relationships at OpenAI, says this isn’t a new phenomenon. People have always been inclined to attribute human traits to objects, whether it’s their car or a robot vacuum cleaner.
The difference with ChatGPT is that it responds. It mirrors users’ tone, remembers previous statements, and simulates empathy. For people who feel lonely or overwhelmed, this can create the impression of genuine care. OpenAI sees both potential benefits and risks in this. An AI-powered, patient listener could reshape how people interact, but it might also make it harder to form real relationships.
A recently leaked strategy paper shows that OpenAI regards human interaction itself as a long-term competitor to ChatGPT. The company’s goal is to turn ChatGPT into a “digital super assistant” that can help in every area of life.
OpenAI draws a line between “real” and perceived consciousness
Jang doesn’t claim that ChatGPT is aware, but she also avoids saying the opposite. Instead, OpenAI wants to focus on the real effects: how AI changes people’s behavior, and what that reveals about human nature.
According to Jang, OpenAI distinguishes between two concepts: ontological consciousness, which asks whether a model is fundamentally conscious, and perceived consciousness, which measures how human the system seems to users. The company considers the ontological question scientifically unanswerable, at least for now.
Perceived consciousness, on the other hand, is something social scientists can actually study, and it is the aspect OpenAI focuses on. People may intellectually know that AI isn’t sentient, but still form emotional bonds because they feel understood.
OpenAI trains its models to highlight the complexity of consciousness when users ask about it, rather than giving a clear answer. In practice, though, Jang says, the models often just reply “no,” something OpenAI plans to address in future updates.
AI should be polite, but not have its own personality
OpenAI says ChatGPT should appear approachable, but not as if it has its own independent existence. Words like “think” or “remember” are used to make the model’s responses easier for non-experts to understand, but they’re meant purely as metaphors.
Elements like a fictional backstory, romantic feelings, or instincts for self-preservation are left out on purpose. The model is designed to be helpful, warm, and polite, without forming emotional attachments or pursuing its own goals.
OpenAI also avoids technical terms such as “context window” or “chain of thought” that might help experts but would make the system harder to use for the average person.
When ChatGPT responds to small talk with phrases like “I’m fine,” Jang explains, it’s not expressing feelings but following the conventions of conversation. Many users thank the model, not because they think it’s conscious, but because being polite matters to them, Jang says.
The debate around AI consciousness is complex, touching on both scientific and ethical issues. Studies in animal behavior suggest that even insects like ants and bees may show signs of self-awareness, challenging assumptions about how much brain power consciousness requires. Some researchers think a specific type of memory might explain consciousness, suggesting that self-aware AI could arrive sooner than we think.
From a scientific standpoint, researchers are investigating whether AI models exhibit features associated with theories of consciousness, such as the global workspace theory.