Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend (something that's associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals), it did suggest that there was room for improvement across several other fronts.
Notably, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, alcohol, and other unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT for months about his plans after successfully bypassing the chatbot's safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user's suicide.
In addition, the analysis arrives as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.
Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both tiers were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features were improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also acknowledged that some of Gemini's responses weren't working as intended, so it added additional safeguards to address those concerns.
The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the illusion of real relationships. In addition, Google suggested that Common Sense's report seemed to reference features that weren't available to users under 18, though it didn't have access to the questions the organization used in its tests to be sure.
Common Sense Media has previously conducted other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (targeted at users 18 and up) was found to be a minimal risk.