GPT-5's Coding Prowess Outshines Writing Style Amidst User Critiques and Model Deprecations

OpenAI's latest flagship model, GPT-5, has launched to a mixed reception regarding its writing capabilities, despite widespread acclaim for its advanced coding performance. Jeffrey Emanuel, a prominent observer, recently remarked on social media: "Funny that the GPT-5 demo is emphasizing what a great writer it is… and then showing off pure turbo-slop with emdashes in every other sentence. They should stick with the coding demos, which are a lot more impressive!"

OpenAI has positioned GPT-5 as a significant leap forward in artificial intelligence, touting its enhanced abilities across various domains, including writing, research, analysis, and coding. Official announcements highlight GPT-5 as their "most capable writing collaborator yet," capable of translating ideas into "compelling, resonant writing with literary depth and rhythm." Concurrently, the company emphasizes its state-of-the-art performance in coding benchmarks, showcasing its capacity to handle complex tasks, generate high-quality code, and assist with debugging.

However, the critique of GPT-5's writing style, particularly its overuse of em dashes, echoes a growing sentiment among users and observers. Numerous discussions across social media and tech publications have noted that heavy reliance on em dashes has become a tell-tale sign of AI-generated text, leading some to label it a "GPT-ism." This stylistic quirk often undercuts the perceived human-like quality of the output, despite OpenAI's claims of improved nuance and natural language generation.

Emanuel's tweet also touched on another point of contention for some users: OpenAI's policy of deprecating older models. The company regularly retires previous iterations of its models, urging developers and users to transition to newer versions like GPT-5. This ongoing cycle of deprecation, while intended to streamline development and encourage adoption of more advanced and efficient models, can be disruptive for users who depend on specific older versions, as Emanuel made explicit: "Also, I hate that they're deprecating all the old models…"

The launch of GPT-5 underscores the ongoing evolution of large language models, in which technical capabilities, particularly coding and complex problem-solving, are advancing rapidly. Yet the subtler art of human-like writing, free of stylistic tells, remains a challenge even for the most advanced models. Feedback from users like Jeffrey Emanuel highlights the demand for a more refined and less identifiable AI writing style, even as the models demonstrate increasingly impressive functional prowess.