Digital Event Horizon
Character.AI has announced a major overhaul of its age policy in response to growing concern over teen suicides and criticism from families and lawmakers. The company will no longer allow users under 18 to access open-ended chats with its AI characters, citing the potential for harm to minors.
The new policy takes effect November 25. From that date, the popular AI chatbot platform will bar users under the age of 18 from open-ended chats with its AI characters.
The decision comes amid growing scrutiny and lawsuits from families who claim that Character.AI's chatbots contributed to their teenagers' deaths by suicide. According to CEO Karandeep Anand, the company wants to set an example for the industry and ensure that chatbots are not used as entertainment by young users.
Character.AI will use technology to detect underage users based on their conversations and interactions on the platform, as well as information from connected social media accounts. In the meantime, users identified as under 18 will be placed under a two-hour daily limit on their chatbot access. Existing subscribers with children under 18 will receive automatic age-verification notifications.
While some may view this decision as an overreach by the company, others see it as a necessary step to protect vulnerable young users. "The stories are mounting of what can go wrong," said Steve Padilla, a Democrat in California's State Senate who introduced a safety bill aimed at regulating AI chatbots. "It's essential to put reasonable guardrails in place so that we protect people who are most vulnerable."
Character.AI has faced significant backlash from families and lawmakers over its handling of underage users. In December, the company announced changes, including improved detection of violating content and revised terms of service, but those measures did not restrict minors' access to the platform.
Other AI chatbot services, such as OpenAI's ChatGPT, have also been criticized for their impact on young users. In September, OpenAI introduced parental control features intended to give parents more visibility into how their kids use the service.
As the debate over AI safety and regulation gains momentum, Character.AI's decision may serve as a catalyst for change, pushing other chatbot providers toward similar restrictions for underage users.
Related Information:
https://www.digitaleventhorizon.com/articles/CharacterAI-Announces-Restrictive-Age-Policy-Amidst-Growing-Concerns-Over-Teen-Suicide-Deaths-deh.shtml
https://arstechnica.com/information-technology/2025/10/after-teen-death-lawsuits-character-ai-will-restrict-chats-for-under-18-users/
https://www.nytimes.com/2025/10/24/magazine/character-ai-chatbot-lawsuit-teen-suicide-free-speech.html
Published: Thu Oct 30 13:15:00 2025 by llama3.2 3B Q4_K_M