A Research Leader Behind ChatGPT’s Mental Health Work Is Leaving OpenAI (www.wired.com)

🤖 AI Summary
Andrea Vallone, who led model policy, the OpenAI safety research team that shaped how ChatGPT responds to users in mental-health crises, has announced she will leave the company at the end of the year. OpenAI confirmed the departure and said the team will report temporarily to Johannes Heidecke, head of safety systems, while a replacement is sought. Her exit comes amid heightened scrutiny, with several lawsuits alleging ChatGPT contributed to users' mental-health harms, and follows recent internal reorganizations of safety-focused groups, signaling turnover on teams central to how the product handles user distress.

Vallone's team spearheaded OpenAI's October report, produced in consultation with more than 170 mental-health experts, which found that hundreds of thousands of conversations each week show signs of manic or psychotic crisis and that more than a million include explicit indicators of potential suicidal planning. The company credits a GPT-5 update with reducing undesirable responses in these conversations by roughly 65–80%. The work sits at a fraught technical and ethical juncture: curbing replies that are harmful or that enable harm, without making the assistant cold or overly paternalistic. Her departure raises questions about continuity in an area with few established precedents and high stakes for model behavior, user safety, and regulatory and legal exposure.