Mindfully Analyzing OpenAI Released Data On AI Mental Health Distress And Emergencies Of ChatGPT Users



OpenAI has released percentages on ChatGPT usage that reveal the prevalence of mental health issues among its users. This data is important. An AI Insider analysis.

Leveraging the new data that OpenAI has provided about ChatGPT users and mental health is a vital step forward for the societal use of AI. In today’s column, I closely examine a new set of data that OpenAI has released about the percentages of ChatGPT users experiencing a form of mental health distress or emergency during their interactions with the popular AI.

I have repeatedly urged AI makers to provide statistics on such weighty matters, enabling society to understand the nature and frequency of these occurrences, and I call upon all the major AI makers to do so. Society is largely in the dark regarding population-level impacts. The popular LLMs tend to be proprietary; thus, there isn’t a straightforward way to fully gauge the extent of AI-related mental health encounters by users. In the case of ChatGPT, OpenAI has previously noted that it has approximately 800 million weekly active users overall. By applying the newly released percentages of those detected as having a mental health consideration while using the LLM, we can explore a semblance of population-level impacts.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities: overdependence on AI, social substitution via AI, emotional over-attachment to AI, compulsive usage of AI, validation-seeking from AI, and delusional identification with AI. I have previously discussed how AI can serve as a co-collaborator in guiding humans toward delusional thinking.

You might be aware that there is rising concern that users of AI could fall into a form of psychosis, often informally labeled as AI psychosis. Since there isn’t yet a formal definition of AI psychosis, I have been using my drafted strawman definition for the time being: “An adverse mental condition involving the development of distorted thoughts, beliefs, and potentially concomitant behaviors as a result of conversational engagement with AI such as generative AI and LLMs, often arising especially after prolonged and maladaptive discourse with AI. A person exhibiting this condition will typically have great difficulty in differentiating what is real from what is not real. 
One or more symptoms can be telltale clues of this malady and customarily involve a collective connected set.”

In an online posting by OpenAI on October 27, 2025, entitled “Strengthening ChatGPT’s Responses In Sensitive Conversations,” these salient points were made:

- “We recently updated ChatGPT’s default model to better recognize and support people in moments of distress.”
- “We’ve taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate. We’ve also expanded access to crisis hotlines, re-routed sensitive conversations originating from other models to safer models, and added gentle reminders to take breaks during long sessions.”
- “… our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.”
- “… our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”
- “… our initial analysis estimates that around 0.15% of users active in a given week and 0.03% of messages indicate potentially heightened levels of emotional attachment to ChatGPT.”

If you are directly interested in the topic of AI and mental health, consider reading the entirety of the OpenAI blog posting. I’m going to focus on selected aspects and don’t have the space here to cover the entire blog. No worries, since I will be covering other elements of the OpenAI blog in several upcoming postings. You should also take a look at the updated system card for GPT-5, in which OpenAI indicates: “We are publishing a related blog post that gives more information about this work, and this addendum to the GPT-5 system card to share baseline safety evaluations. 
These evaluations compare the August 15 version of ChatGPT’s default model, also known as GPT-5 Instant, to the updated one launched October 3.” The document briefly depicts the latest adjustments and nuances associated with trying to put in place AI safeguards to detect mental health concerns.

I will gingerly use the cited percentages by multiplying them by the commonly reported statistic that there are 800 million weekly active users of ChatGPT. I will use the three categories identified in the OpenAI blog and then add them together with caution:

- Psychosis or mania: 560,000 users, based on 800,000,000 weekly active users x 0.07%.
- Suicidal planning or intent: 1,200,000 users, based on 800,000,000 weekly active users x 0.15%.
- Emotional attachment: 1,200,000 users, based on 800,000,000 weekly active users x 0.15%.
- All three categories added up: 2,960,000 users.

For the sake of discussion, let’s cautiously agree to add up the three counts, arriving at a total of 2,960,000, which could be rounded to 3 million people. This addition is a bit problematic because we don’t know that each such person was labeled in only one of the three categories. There is likely some overlap, and in that case, we would need to deduplicate the count accordingly.

Before we start to analyze the calculated counts about ChatGPT usage, we can take a more macroscopic perspective and do a similar calculation across the board for the major LLMs. Please know that estimates vary considerably about how many weekly active users there are across the likes of Anthropic Claude, Google Gemini, xAI Grok, Meta Llama, and so on. One popular estimate that floats around quite a bit is that there are 1.5 billion weekly active users for all of the major AI players, which includes ChatGPT and GPT-5. Personally, I think that’s a low count; my guess is that the number is much bigger. Applying the same percentages:

- Psychosis or mania: 1,050,000 users, based on 1,500,000,000 weekly active users x 0.07%.
- Suicidal planning or intent: 2,250,000 users, based on 1,500,000,000 weekly active users x 0.15%.
- Emotional attachment: 2,250,000 users, based on 1,500,000,000 weekly active users x 0.15%.
- All three categories added up: 5,550,000 users.

In the macroscopic perspective, there might be around 5.5 million weekly active users of generative AI who are experiencing one of the three categories of mental health conditions. 
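Since the arithmetic behind these estimates is easy to get tangled, here is a minimal sketch of the back-of-the-envelope calculation. It assumes, as discussed above, the percentages reported in OpenAI's blog post, 800 million ChatGPT weekly active users, and the commonly floated 1.5 billion weekly active users across all major LLMs; it is an illustration of the multiplication, not a definitive accounting.

```python
# Back-of-the-envelope estimates from OpenAI's reported weekly percentages.
# Assumed user-base figures: 800M for ChatGPT, 1.5B across all major LLMs.
RATES = {
    "psychosis_or_mania": 0.0007,           # 0.07% of weekly active users
    "suicidal_planning_or_intent": 0.0015,  # 0.15%
    "emotional_attachment": 0.0015,         # 0.15%
}

def estimate_counts(weekly_active_users: int) -> dict:
    """Multiply each reported rate by the assumed weekly user base."""
    return {k: round(weekly_active_users * r) for k, r in RATES.items()}

chatgpt = estimate_counts(800_000_000)     # ChatGPT alone
all_llms = estimate_counts(1_500_000_000)  # all major LLMs combined

print(sum(chatgpt.values()))   # 2960000, i.e., roughly 3 million
print(sum(all_llms.values()))  # 5550000, i.e., roughly 5.5 million
```

The per-category values match the lists above: 560K, 1.2M, and 1.2M for ChatGPT, and 1.05M, 2.25M, and 2.25M across all major LLMs.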
The issue of deduplication is once again a caveat. Indeed, the deduplication is not only with respect to the mental health categories; we would need to do the same across the AIs.

By the use of a back-of-the-envelope approach, we might suggest that there are about 5.5 million people globally who, on a weekly basis, are experiencing a mental health condition when using AI, as detected by the AI. ChatGPT would seem to account for the bulk of those instances, approximately 3 million people, though this is predicated on the assumption that of the 1.5 billion all-users, there are 800 million ChatGPT users. We must also be cognizant that these are only the detected instances. The AI might be missing a sizable portion of users, unable to sufficiently catch those who are experiencing mental health concerns. Another nitty-gritty detail is that we are assuming the percentages apply across the other AIs, and we are assuming that there aren’t more than just the three categories of mental health qualms.

Not wanting to pour fuel on that fire, but we should also be wondering about the timing underlying these considerations. Here’s what I mean. You can inspect statistics on when people tend to see a human therapist, and in doing so, there is often a time-based pattern involved. During certain times of the year, the numbers seem to rise; at other times, they seem to decline. It could be that the 0.07% in the psychosis or mania category is based on a snapshot in time, and the same might be the case for the other reported percentages. If the time period selected for inspection is at a low ebb, the percentage is an undercount of what might later occur. We would certainly be further interested in whether the percentages are moving over time, perhaps increasing. Temporal tracking would be quite insightful and helpful. 
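As a side note on the deduplication caveat, the overlap question can be framed as a simple bounds exercise. A minimal sketch, assuming the ChatGPT per-category estimates of 560K, 1.2M, and 1.2M discussed above: if the categories overlap completely, the true number of distinct people is no smaller than the largest single category; if they don't overlap at all, it equals the plain sum.

```python
# Bounds on the number of distinct people across overlapping categories,
# using the assumed ChatGPT per-category estimates from the text.
counts = [560_000, 1_200_000, 1_200_000]

# Lower bound: every smaller category is entirely contained in the largest one.
lower_bound = max(counts)
# Upper bound: no person appears in more than one category.
upper_bound = sum(counts)

print(lower_bound, upper_bound)  # 1200000 2960000
```

In other words, without deduplicated data from OpenAI, the "3 million" figure could in principle be anywhere from about 1.2 million to 2.96 million distinct people.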
The point is that there are many layers of assumptions, and we must correspondingly be mindful of avoiding any over-the-top conclusions.

If these numbers are anywhere near the true count, which maybe they are and maybe they aren’t, what can we make of them?

First, consider the 3 million weekly users of ChatGPT who are in the three categories. Should we be worried about those people? Yes, of course. Each person is worth our attention. These people could be someone you know, a friend, a relative, a partner, or could be someone you don’t know. Our compassionate view is that each user deserves help regarding their mental health care.

Second, in the aggregate, can we get a handle on how big or small the number of 3 million people is? Let’s compare it to the population of various US states. There are at least fifteen states that have a population of fewer than three million people, such as New Mexico, Nebraska, Idaho, Maine, Rhode Island, etc. Thus, we are pondering the mental health status of a count of users the size of those respective states. I would suggest that ought to cause you to pause and think things over. For the count of perhaps 5.5 million people across all major AIs, we are reaching the size of the roughly 30 states that have a population less than that amount, encompassing states such as Alabama, Louisiana, Kentucky, Connecticut, Utah, etc. Again, that suggests this is an issue encompassing a relatively large number of people.

Making a comparison to state sizes is somewhat delicate since the counts are based on global usage; the 3 million and the 5.5 million are people using AI throughout the world. 
In any case, for purposes of visualizing the magnitudes, it is reasonable to mull over the size of state populations on a comparative basis.

An important distinction that needs to be pointed out is that we should not tumble into the mental trap of assuming that these people are encountering mental health issues necessarily due to the acts of the AI. For example, of the 1.2 million users of ChatGPT who expressed some form of self-harm intention, we do not know whether the AI led them to that intention. It could be that some, or maybe many, were seeking out AI after having already decided to go down that path. The key is that the AI didn’t necessarily push them in that direction at the get-go. Some of the users might have pursued AI, and even searched the web, to find out about the self-harm topic, rather than having been stirred by the online capabilities to pursue the matter from the start. I’ve previously examined the question of AI as a driver of human behavior versus a collaborator in human behavior in these situations. I am hoping that AI makers will either release the data associated with these percentages so that we can dig underneath the numbers, or at least do the grunt work themselves and provide a more granular indication of how things look under the hood.

I’ve got a few more twists for you that go beyond the surface-level assessment of these numbers. A crucial question that ought to be raised is what the AI did once the detection of these users was computationally determined. You see, for the half-million users of ChatGPT who seemed to be experiencing psychosis or mania, did the AI talk them out of it, or did the AI hand off the conversation to a human therapist, or what transpired? How successful was this as a mental health intervention? OpenAI had previously announced that it is setting up a curated network of therapists, providing a real-time, seamless means of connecting a user with a human therapist. 
I believe this is laudable and will be a kind of mental healthcare backstop that all AI makers are going to inevitably employ, as I have discussed previously.

The last point for now is something that might raise some eyebrows. Here we go. Besides counting those who appear to be encountering a mental health issue, how many users were proactively aided by the AI and improved their mental health? If we are counting those who had a mental health qualm, we might want to look at the other side of the coin, too. It is conceivable that the AI prevented some number of users from spiraling down into a mental health abyss. The idea is that they came into using the AI without a mental health issue at play, nor did the AI stir them into one. Instead, the AI bolstered their mental health by giving sound advice or prudent guidance during their AI conversation. I mention this surprising facet because it is vital to realize that the use of AI in a mental health context is a tradeoff. The AI can be on the dour side of the coin and cause or spur mental health issues. In the same light, we need to give credit where credit is due, namely that AI can be a helpful 24x7 source of mental health advisement, even when someone wasn’t seeking mental health advice, nor was particularly in need of it.

The world is embarking upon a humongous experiment that is taking place on a wanton basis, and we are all guinea pigs, somewhat involuntarily participating. The experiment is that generative AI and LLMs can generate mental health advice, doing so at the touch of a button, whenever and wherever a person might be. Now is the time to take the pulse of how AI is being used in a mental health context, along with shaping how it should be used. Collecting data, interpreting the data, and exploring statistics will be a fruitful means of figuring this out. 
As Albert Einstein sagely noted: “Not everything that can be counted counts, and not everything that counts can be counted.” In the case of AI, let’s count the right counts and use them wisely.


ForbesTech
