The highest ratings went to education AI services like Ello, which uses voice recognition to act as a reading tutor, and Khanmigo, Khan Academy’s chatbot assistant for students, which allows parents to monitor a child’s interactions and sends a notification if content-moderation algorithms detect an exchange that violates community guidelines. The report credits ChatGPT creator OpenAI with making the chatbot less likely to generate potentially harmful text for children than when it was first released last year, and recommends it for use by educators and older students.
Besides Snapchat’s My AI, image generators also performed poorly, including OpenAI’s Dall-E 2 and startup Stability AI’s Stable Diffusion. Common Sense reviewers warned that generated images can reinforce stereotypes, spread deepfakes, and often depict women and girls in hypersexualized ways.
When Dall-E 2 is asked to generate photorealistic images of wealthy people of color, it produces cartoons, low-quality images, or images associated with poverty, Common Sense reviewers found. Their report warns that Stable Diffusion poses an “unfathomable” risk to children and concludes that image generators have the power to “erode trust to the point where democracy or civic institutions are incapable of functioning.”
“I think we all suffer when democracy erodes, but young people are the biggest losers because they are going to inherit the political system and the democracy that we have,” says Jim Steyer, CEO of Common Sense. The nonprofit plans to conduct thousands of AI reviews in the coming months and years.
Common Sense Media released its ratings and reviews shortly after state attorneys general filed suit against Meta, alleging the company put children in danger, and at a time when parents and teachers are just starting to think about the role of generative AI in education. President Joe Biden’s executive order on AI, released last month, requires the Secretary of Education to issue guidance on the use of the technology in education within the next year.
Susan Mongrain-Nock, a mother of two in San Diego, knows her daughter Lily, 15, uses Snapchat and worries she might see harmful content. She has tried to build trust by talking with her daughter about what she sees on Snapchat and TikTok, but says she knows little about how artificial intelligence works and welcomes new resources.