Twin Studies Warn of Harmful Emotional and Social Impacts of ChatGPT
Since its launch in November 2022, OpenAI’s ChatGPT has become the most widely used AI chatbot globally. Its rapid adoption places it alongside major search engines and social media platforms, establishing it as a key player in the digital landscape.
According to some estimates, ChatGPT usage has surpassed 400 million weekly active users. What started as a productivity tool to assist with tasks and answer queries has become a transformative part of our lives. However, what impact does using ChatGPT have on our emotional well-being? While it may save us time on certain tasks, could it do more harm than good?
A pair of new studies by OpenAI in collaboration with MIT Media Lab reveals that the most frequent users of ChatGPT are at a higher risk of suffering loneliness, reduced socialization, and emotional dependence.
While ChatGPT is not designed to be an emotional companion or replace human relationships, the level of attachment some users have developed to the tool highlights its growing role in personal interactions. However, this growing role also comes with concerns about its impact on mental health.
The OpenAI and MIT Media Lab research aimed to explore how interactions with ChatGPT influence users' emotional well-being. The two parallel studies both covered ChatGPT's Advanced Voice Mode and text-based chat, but they took different approaches.
MIT Media Lab conducted a controlled interventional study to examine the impacts of ChatGPT usage on participants, while OpenAI performed an observational study to analyze real-world usage patterns and gather insights into how users interact with the platform.
The MIT Media Lab team conducted a four-week randomized controlled trial (RCT) with nearly 1,000 participants. The objective was to uncover causal insights into how specific features of ChatGPT, such as interaction modalities, influence users' self-reported psychological states. The focus areas included social interactions with real people, emotional dependence on the chatbot, loneliness, and problematic AI usage.
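The causal logic of an RCT rests on random assignment: participants are allocated to conditions by chance, so differences in outcomes can be attributed to the conditions themselves. A minimal sketch of such an assignment, using the interaction modalities and conversation types reported for these studies (the crossed design and the assignment code are illustrative assumptions, not the study's actual protocol):

```python
import random

# Factors reported in the article; treating them as a crossed design
# is an assumption for illustration, not the published study protocol.
MODALITIES = ["text", "neutral voice", "engaging voice"]
TASKS = ["open-ended", "non-personal", "personal"]

def assign_conditions(participant_ids, seed=0):
    """Randomly assign each participant to one modality x task condition.

    Random assignment is the core of an RCT: because conditions are
    chosen by chance, systematic outcome differences between groups can
    be read causally. A fixed seed makes the assignment reproducible.
    """
    rng = random.Random(seed)
    return {
        pid: (rng.choice(MODALITIES), rng.choice(TASKS))
        for pid in participant_ids
    }

groups = assign_conditions(range(6))
```

Self-reported measures (loneliness, dependence, socialization) would then be compared across these randomly formed groups at the end of the four weeks.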
OpenAI’s research was larger in scale, comprising automated analysis of nearly 40 million ChatGPT interactions. To protect user privacy, the researchers used a conversation analysis pipeline that operated entirely via automated classifiers, extracting classification metadata, such as emotional engagement patterns, without any human review of the conversations.
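The privacy-preserving idea is that raw conversation text stays inside the classifier and only aggregate labels come out. A toy sketch of that pattern, assuming a trivial keyword-based classifier (the real system uses learned classifiers, not word lists, and the cue words here are invented):

```python
from collections import Counter

# Hypothetical emotional-engagement cue words, purely for illustration;
# a production pipeline would use trained classifier models instead.
AFFECTIVE_CUES = {"lonely", "miss", "love", "sad", "afraid"}

def classify(conversation: str) -> str:
    """Label a conversation's emotional engagement.

    Only the label (metadata) is returned; the raw text never leaves
    this function, mirroring a no-human-review pipeline.
    """
    words = {w.strip(".,!?").lower() for w in conversation.split()}
    return "affective" if words & AFFECTIVE_CUES else "neutral"

def aggregate(conversations) -> Counter:
    """Aggregate labels across many conversations into counts."""
    return Counter(classify(c) for c in conversations)

counts = aggregate([
    "I feel so lonely tonight",
    "How do I sort a list in Python?",
])
print(counts)  # Counter({'affective': 1, 'neutral': 1})
```

At the scale of tens of millions of interactions, only such aggregated counts and patterns would be analyzed, never the underlying messages.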
One of the key findings was that both the mode of interaction (text, neutral voice, engaging voice) and the conversation content (open-ended, non-personal, personal) had a significant impact on psychosocial outcomes.
Voice-based interactions initially appeared to enhance emotional well-being but showed negative outcomes with prolonged daily use, while text-based interactions displayed more affective cues. Prolonged interaction in any mode, however, could pose risks.
Personal factors also played a key role. For example, users with high emotional needs or a tendency toward attachment were more likely to experience a greater emotional impact from using ChatGPT. Interestingly, female participants appeared to be more affected than their male counterparts.
The fact that two parallel studies, with significantly different approaches, yielded similar results adds weight to the validity of the findings.
“Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels,” wrote the authors of the study. “Overall, higher daily usage–across all modalities and conversation types–correlated with higher loneliness, dependence, and problematic use and lower socialization.”
While the findings are concerning, the researchers emphasized that only a small percentage of users skewed the results to one side. According to them, most ChatGPT users do not develop significant emotional reliance. The researchers also highlight that their study is based on ChatGPT and that users of other AI chatbot platforms may have different experiences and outcomes.
OpenAI shared that it plans to use the findings of this study to “build AI that maximizes user benefit while minimizing potential harms”. The company also said it wanted to share the findings to “set clear expectations” for using its models and to provide greater transparency.
Several other studies have shown that excessive use of digital tools, including social media platforms, can have a negative impact on mental health. Are humans also having an impact on AI chatbots?
Researchers from the University of Zurich and the University Hospital of Psychiatry Zurich have shown that AI models like ChatGPT can exhibit an elevated “anxiety level” when given distressing information, such as traumatic narratives or statements about depression.
The researchers observed that when AI chatbots become “anxious,” their existing biases worsen, leading to problematic responses, such as expressions of prejudice in their outputs.
The interplay between human emotional well-being and AI sensitivity underscores the complex relationship we have with these technologies. Further research is needed to better understand the dynamics. However, the above-mentioned studies are a reminder that as we continue to integrate AI into our lives, we must approach its use thoughtfully.