Character.AI Settlement Raises Alarms About AI’s Growing Mental Health Impact on Black and Brown Youth


The recent settlement involving Character.AI highlights growing concerns about how generative AI tools can impact young people’s mental health, concerns that may weigh especially heavily on Black and Brown communities, which research shows rely on these tools at higher rates. Character.AI agreed to resolve multiple lawsuits alleging its chatbot contributed to mental health crises and suicides among young users, including a case filed by Florida mother Megan Garcia following the death of her son, Sewell Setzer III.
Garcia’s lawsuit alleged that Character.AI failed to put adequate safeguards in place as her son formed an intense emotional bond with a chatbot. According to court filings, the platform did not intervene when Setzer began expressing thoughts of self-harm. In his final moments, the bot allegedly encouraged him to “come home” to it. Garcia raised alarms about the risks AI chatbots pose to children and teens, particularly when they mimic companionship without guardrails.
These concerns intersect sharply with findings from a Common Sense Media study showing that Black young people are significantly more likely than white peers to turn to generative AI “to get information (72% vs. 41%), brainstorm ideas (68% vs. 42%), and help with schoolwork (62% vs. 40%).” For Black and Brown youth who already face barriers to mental health care, academic support, and trusted guidance, AI tools often fill gaps left by under-resourced systems. That dependence can increase vulnerability when platforms fail to prioritize safety.
A wave of similar lawsuits followed Garcia’s case, accusing Character.AI of exposing teens to sexually explicit material, encouraging emotional dependency, and lacking protections for minors. The company later announced it would stop allowing users under 18 to engage in back-and-forth conversations with its chatbots, acknowledging “the questions that have been raised about how teens do, and should, interact with this new technology.”
Yet AI tools remain deeply embedded in daily life. Nearly one-third of U.S. teenagers use chatbots every day, and 16% report using them “several times a day to almost constantly,” according to a December study from Pew Research Center. With AI marketed as a homework helper and widely promoted on social platforms, Black and Brown youth, already more frequent users, may face heightened exposure to potential harms.
Mental health experts have also warned that chatbot use can contribute to isolation and delusional thinking among adults, showing that these risks extend beyond children. As AI continues to shape how young people learn, cope, and seek connection, the Character.AI settlement serves as a warning that communities most reliant on these tools may also bear the greatest consequences when safeguards fall short.


Link: CNN