Sam Altman of OpenAI Teases GPT‑5 on X
OpenAI’s Sam Altman has set the tech sphere abuzz with an informal tease of GPT‑5 via a screenshot reply on X. When asked whether the new model also recommends the sci‑fi show Pantheon, his reply was blunt: “turns out yes”, accompanied by a snippet praising the show’s “cerebral, emotional, and philosophically intense” tone and confirming its perfect Rotten Tomatoes score. The screenshot was widely read as one of the first public glimpses of GPT‑5 in action.

That interaction captured attention less for revealing features than for what it hinted at: an apparent ability to retrieve and synthesize cultural criticism with nuance, and a fluency that matches the register of critical writing. Yet it also revealed the persistence of a familiar writing habit: the model still leans heavily on em‑dashes.
Reaction has been swift. Curt Woodward, writing on LinkedIn, recounted Altman’s own admission that em‑dash overuse in ChatGPT had become “quite annoying” and that OpenAI would address this stylistic issue soon.
Analysts suggest this is more than a typing quirk: it reflects a deeper bias baked into models trained on US‑centric corpora, where em‑dashes appear frequently.
The broader context: Altman has recently voiced genuine concern over GPT‑5’s capabilities. On This Past Weekend with Theo Von, he described the development pace as resembling the Manhattan Project and confessed to feeling nervous: “It feels very fast… ‘What have we done?’” That unease echoes across industry commentary.
Further raising alarms, in another interview, Altman admitted feeling “useless” compared to what the model accomplished, and warned of likely "capacity crunches" during rollout, prompting him to ask users to “bear with us” during expected hiccups.
GPT‑5’s technical step‑change remains officially unconfirmed, but multiple sources anticipate substantial improvements: longer context windows, better multi‑step reasoning, a unified reasoning model replacing the separate o‑series and GPT lines, and richer multimodal interaction. These in turn raise concerns around safety protocols, misuse, and the absence of robust external oversight.
On the punctuation front, the em‑dash affair has stirred debate. Style purists note that its prevalence in AI output now risks making human writing that uses em‑dashes read as machine‑generated. In practice, users can request avoidance explicitly, such as simply instructing ChatGPT to “avoid em dashes in the response,” which tends to work.
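For readers scripting against a chat model rather than typing into the web UI, the same workaround can be paired with a post‑processing pass. The sketch below is illustrative, not an OpenAI feature: the `STYLE_INSTRUCTION` string and the `strip_em_dashes` helper are hypothetical names, and the replacement rules are an assumption about how em‑dash usage typically reads (spaced dashes as commas, unspaced dashes as hyphenated asides).

```python
# Hypothetical belt-and-braces approach: ask for no em dashes in the
# prompt, then scrub any that slip through in the response text.

EM_DASH = "\u2014"  # the em-dash character itself

# Could be sent as a system/developer message alongside the user prompt.
STYLE_INSTRUCTION = "Avoid em dashes in the response."

def strip_em_dashes(text: str) -> str:
    """Remove em dashes from model output, since prompt compliance
    is not guaranteed."""
    # A spaced em dash usually functions like a comma.
    text = text.replace(f" {EM_DASH} ", ", ")
    # An unspaced em dash reads closer to a plain hyphenated break.
    return text.replace(EM_DASH, " - ")

print(strip_em_dashes("It feels very fast \u2014 too fast."))
```

The prompt instruction handles most cases on its own; the scrub step simply guarantees the final text is em‑dash‑free regardless of what the model emits.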
Critics on Hacker News and in other forums suggest that the AI’s production of em‑dashes is not a deliberate stylistic choice but a learned default from training data.
Altman’s brush with punctuation fatigue may reflect a larger shift in AI tone design. If the team responds and retrains ChatGPT to trim back em‑dash density, that suggests a willingness to adapt stylistic defaults even as model capabilities expand.
The seemingly innocent screenshot of GPT‑5 recommending Pantheon may have been just that. Yet it also served as a statement of readiness, a sign that GPT‑5 is close to production, and a nod to anticipated performance improvements. At the same time, it reminds us how small stylistic choices can resonate in a world sensitized by generative AI.
Altman’s warnings about rollout strain and oversight shortfalls deserve serious attention. If GPT‑5 delivers the scale and fluency hinted at in these teases, OpenAI will need stronger guardrails to manage both technical and ethical risks.
Observers will be watching social channels closely in the coming days for more breadcrumbs. For now, GPT‑5 remains behind the curtain, but every tease matters when the stakes may be higher than before.