In the mid-tier of Common Sense’s ratings were AI chatbots like Google’s Bard (which just yesterday officially opened to teens), ChatGPT, and Toddle AI. The organization warned that these bots can also exhibit bias, particularly toward users with “diverse backgrounds and dialects.” They can also produce inaccurate information, known as AI hallucinations, and reinforce stereotypes. Common Sense cautioned that such false information could shape users’ worldviews and make it even harder to separate fact from fiction.
The only AI products to receive good reviews were Ello’s AI reading tutor and book delivery service, Khanmigo (from Khan Academy), and Kyron Learning’s AI tutor, all three of which were designed for educational purposes. They’re less well-known than the others (and, as some kids might argue, less fun). Still, because the companies built them with kids’ usage in mind, they tended to follow responsible AI practices, with a focus on fairness, diverse representation, and kid-friendly design. They were also more transparent about their data privacy policies.
If you’re thinking of using AI tools in your classroom, you might want to think twice. Some of these tools, like Google’s Bard and ChatGPT, are not especially reliable or fair: they can exhibit bias, produce inaccurate or misleading information, and generate false or inappropriate content, which could undermine students’ critical thinking and cultural awareness and influence their beliefs and values in negative ways. These chatbots may also lack transparency and accountability around their data collection and usage, putting students’ privacy and security at risk.