Anthropic now lets kids use its AI tech — within limits


AI startup Anthropic is changing its policies to allow minors to use its generative AI systems — at least under certain circumstances.

Announced in a post on the company's official blog on Friday, Anthropic will allow teens and preteens to use third-party apps (but not necessarily its own apps) as long as the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they're leveraging.

In a support article, Anthropic lists several safeguards for developers building AI-powered apps for minors, such as age verification systems, content moderation and filtering, and educational resources on "safe and responsible" AI use for minors. The company also says it may make available "technical measures" to tailor AI product experiences to minors, such as a "child safety system prompt" that developers targeting minors would need to implement.

Devs using Anthropic's AI models must also comply with "applicable" child safety and data privacy regulations such as the Children's Online Privacy Protection Act (COPPA), a US federal law that protects the online privacy of children under the age of 13. Anthropic says it plans to "periodically" audit apps for compliance, suspend or terminate the accounts of repeat violators of its compliance requirements, and mandate that developers "clearly state" on publicly exposed sites or documentation that they're in compliance.

“There are some use cases where AI tools can offer significant benefits to young users, such as test preparation or tutoring support,” Anthropic writes in the post. “With this in mind, our updated policy allows organizations to include our API in their products for minors.”

Anthropic's policy change comes as children and teens are increasingly turning to generative AI tools for help not only with schoolwork but also with personal problems, and as rival generative AI vendors, including Google and OpenAI, explore more use cases aimed at kids. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on child-friendly AI guidelines. And Google made its chatbot Bard, since renamed Gemini, available in English for teens in select regions.

According to a survey by the Center for Democracy and Technology, 29% of children used generative AI like OpenAI's ChatGPT to deal with anxiety or mental health issues, 22% for problems with friends and 16% for family conflicts.

Last summer, schools and colleges rushed to ban generative AI apps, especially ChatGPT, over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not everyone is convinced of generative AI's potential for good: a survey by the UK Safer Internet Centre found that more than half of children (53%) reported having seen people their age use generative AI in a negative way, such as creating believable false information or images used to harass someone (including obscene deepfakes).

Calls for guidelines on children's use of generative AI are growing.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) late last year urged governments to regulate the use of generative AI in education, including imposing age limits for users and putting guardrails in place for data protection and user privacy. "Generative AI can be a tremendous opportunity for human development, but it can also lead to harm and prejudice," UNESCO Director-General Audrey Azoulay said in a press release. "It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments."

