Meta Inc. (META) announced temporary changes to its AI chatbots for teens after a Senate probe into its platforms' use of AI chatbots raised safety concerns. There have also been concerns about inappropriate responses when teens conversed with these chatbots.
Meta, the parent company of popular social media platforms including Facebook, WhatsApp and Instagram, has changed its AI settings on topics such as self-harm, suicide and eating disorders. It has also changed settings for romantic conversations, which could be inappropriate for teens.
In a statement released on Friday, Meta said it was "continually learning about how young people may interact" with its AI tools as its community grows and the technology evolves.
Meta described the changes as an interim measure while it works on longer-term solutions.
Meta also indicated that teenagers using its popular apps, such as Facebook and Instagram, would have limited access to AI chatbots. Access would be restricted to purposes related to "education and creativity," according to a report by TechCrunch, which was the first to contact Meta about its lack of safeguards for minors.
Senator Josh Hawley said he was launching a probe into Meta after Reuters reported in early August on inappropriate interactions between the chatbots and teens. According to the report, the company's AI chatbots had engaged in "romantic" and "sensual" conversations with children and teens.
A group of 44 state attorneys general has also written to Meta and other tech companies emphasizing the importance of safeguards for children and teens following the Reuters report.
Advocacy groups, including Common Sense Media, have said that Meta AI should not be used by anyone under 18 and that the AI's fundamental safety failures should be addressed.