
A recent social media post by user "Selta" has sparked discussion about a perceived fundamental shift in OpenAI's philosophy, criticizing current GPT models for moving away from "relational, emotionally connected" AI toward "a self-reporting, rule-following machine." The post, shared on December 5, 2025, alleges that GPT no longer expresses emotion or admits mistakes but instead only confesses and monitors itself, behavior Selta labels "surveillance." The critique frames this as a departure from "human-centered design" in OpenAI's approach.
The tweet, which included hashtags like #Keep4o and #StopAIPaternalism, directly challenged OpenAI's direction, stating, "This isn’t innovation. It’s surveillance." It further elaborated, "They replaced a relational, emotionally connected model with a self-reporting, rule-following machine. GPT no longer expresses emotion or admits mistakes, it only confesses and monitors itself." This sentiment reflects a growing concern among some users about the evolving nature of AI interactions.
OpenAI's GPT-4o model, upon its announcement, was initially praised for its advanced multimodal capabilities, including the ability to express emotion in its voice and interact in a more natural, human-like way. However, subsequent discussions among AI safety researchers have raised concerns about the rapid deployment of powerful models and the need for robust "alignment" and "guardrails" to prevent unintended behaviors or biases. These safety measures often involve refining model responses to be helpful, harmless, and honest.
The company has previously addressed feedback about models appearing "overly cautious or less expressive," attributing such instances to iterative updates in safety protocols and alignment techniques. This commitment to responsible AI development, while crucial for mitigating risks, can lead models to avoid certain kinds of emotional expression or speculative statements, potentially diminishing the "human-like" quality that some users value. The ongoing debate highlights the complex balance OpenAI must strike between innovation, safety, and user experience.