Many popular AI products have been flagged as unsafe for children by Common Sense Media

An independent review of popular AI tools found that many – including Snapchat’s My AI, DALL-E, and Stable Diffusion – may be unsafe for children. The new reviews come from Common Sense Media, a nonprofit advocacy group for families best known for its media ratings, which help parents evaluate the apps, games, podcasts, TV shows, movies, and books their kids use. Last year, the organization said it would add ratings for AI products to its resources for families. Now those ratings are live, offering so-called “nutrition labels” for AI products such as chatbots and image generators.

The organization first announced in July that it would establish a ratings system to evaluate AI products along several dimensions, including whether the technology follows responsible AI practices and whether it is suitable for children. The move was prompted by a survey of parents gauging interest in such a service: 82% of parents said they wanted help evaluating whether new AI products, like ChatGPT, were safe for their children to use, but only 40% said they knew of any reliable resources to help them make those decisions.

That led to today’s launch of Common Sense Media’s first AI product ratings. The products it assesses are rated across several AI principles, including trust, children’s safety, privacy, transparency, accountability, learning, fairness, social connection, and benefits to people and society.

The organization initially reviewed 10 popular apps on a 5-point scale, including those used for learning, AI chatbots like Bard and ChatGPT, and generative AI products like Snap’s My AI and DALL-E, among others. Not surprisingly, the latter category fared the worst.

“AI is not always right, nor is it value-neutral,” said Tracy Pizzo-Frey, Senior Advisor on AI at Common Sense Media, in a summary of the ratings. “All generative AI, by virtue of the fact that the models are trained on massive amounts of internet data, hosts a wide variety of cultural, racial, socioeconomic, historical, and gender biases – and that’s exactly what we found in our reviews,” she said. “We hope our ratings will encourage more developers to build protections that limit the spread of misinformation and to do their part to protect future generations from unintended consequences.”

In TechCrunch’s own tests, reporter Amanda Silberling found Snapchat’s My AI to be more weird and random than actively harmful, but Common Sense Media gave the AI chatbot a 2-star rating, noting that it produced some responses reinforcing unfair biases around ageism, sexism, and cultural stereotypes. It also offered some inappropriate responses at times, as well as inaccuracies, and it stores personal user data, which the organization says raises privacy concerns.

Snap pushed back on the negative review, noting that My AI is an optional tool and that Snapchat makes clear it is a chatbot and advises users of its limitations. My AI is also integrated into Snapchat’s Family Center, so parents can see if and when teens are chatting with it, the company told TechCrunch, adding that it appreciates the feedback in the review as it continues to improve its product.

Generative AI models like DALL-E and Stable Diffusion carry similar risks, including a tendency toward the objectification and sexualization of women and girls and the reinforcement of gender stereotypes, among other concerns. (Requests for comment were not immediately returned.)

As with any new medium on the internet, these generative AI models are also being used to create pornographic material. Sites like Hugging Face and Civitai have become popular not only as resources for finding new image models, but also for making it easier to find models that can be combined to create pornography in the likeness of a real person (like a celebrity). That issue came to a head this week when 404 Media called out Civitai’s capabilities, though the debate over who is responsible – the community aggregators or the AI models themselves – continued afterward on sites like Hacker News.

In the middle tier of Common Sense’s ratings are AI chatbots like Google Bard (which just yesterday officially opened to teens), ChatGPT, and Toddle AI. The organization warns that bias can also occur in these bots, especially for users with “diverse backgrounds and dialects.” They can also produce inaccurate information – AI hallucinations – and reinforce stereotypes. Common Sense warns that the false information AI generates can shape users’ worldviews and make it even more difficult to separate fact from fiction.

The only AI products to receive good reviews were Ello’s AI reading tutor and book delivery service, Khanmigo (from Khan Academy), and Kyron Learning’s AI tutor – all three AI products designed for educational purposes. They are less well known than the others. (And, as some kids would argue, less fun.) But because the companies designed them with children in mind, they use responsible AI practices and focus on fairness, diverse representation, and child-friendly design considerations. They are also more transparent about their data privacy policies.

Common Sense Media says it will continue to publish ratings and reviews of new AI products on a rolling basis, which it hopes will help inform not only parents and families, but also lawmakers and regulators.

“Consumers must have access to a clear nutrition label for AI products that could compromise the safety and privacy of all Americans – but especially children and teens,” said James P. Steyer, founder and CEO of Common Sense Media, in a statement. “By learning what the product is, how it works, its ethical risks, limitations, and misuses, lawmakers, educators, and the general public can understand what responsible AI looks like. If the government fails to ‘childproof’ AI, tech companies will exploit this unregulated, freewheeling atmosphere at the expense of our data privacy, well-being, and democracy at large,” he added.
