Anthropic Announces New Limits on Political Candidates' Use of Claude


If Joe Biden wants a smart and folksy AI chatbot to answer questions for him, his campaign team won't be able to use Claude, Anthropic's ChatGPT competitor, the company announced today.

“We don’t allow candidates to use Claude to build chatbots that can pretend to be them, and we don’t allow anyone to use Claude for targeted political campaigns,” the company announced. Violations of this policy will result in warnings and, eventually, suspension of access to Anthropic’s services.

Anthropic’s public statement about its election misuse policy comes as AI’s ability to mass-generate false and misleading text, images, and videos is raising alarm bells around the world.

Meta implemented rules restricting the use of its AI tools in politics last fall, and OpenAI has similar policies.

Anthropic said its policy considerations fall into three main categories: developing and implementing policies related to electoral issues, reviewing and testing models against potential abuse, and directing consumers to accurate voting information.

Anthropic’s acceptable use policy — which all users must agree to before accessing Claude — prohibits the use of its AI tools for political campaigning and lobbying. The company said violators will receive warnings and service suspensions, with a human review process in place.

The company also conducts rigorous “red-teaming” of its systems: aggressive, coordinated attempts by trusted partners to “jailbreak” Claude or otherwise use it for nefarious purposes.

“We examine how our system responds to prompts that violate our Acceptable Use Policy, [for example] prompts that request information about voter suppression tactics,” Anthropic explained. Additionally, the company said it has developed a suite of tests to ensure “political parity” — comparable representation across candidates and topics.

In the United States, Anthropic partnered with TurboVote to provide voters with reliable information rather than answering election queries with its generative AI tool.

“If a US-based user requests voting information, a pop-up will offer the user the option to be redirected to TurboVote, a resource from the nonpartisan organization Democracy Works,” Anthropic explained. The solution will be deployed over the next few weeks, with plans to roll out similar measures in other countries.

As Decrypt previously reported, OpenAI, the company behind ChatGPT, is taking similar steps, directing users to the nonpartisan website CanIVote.org.

Anthropic’s efforts align with a broader movement within the tech industry to address the challenges AI poses to the democratic process. For example, the US Federal Communications Commission recently outlawed the use of AI-generated deepfake voices in robocalls, a decision that underscores the urgency of regulating AI in the political arena.

Like Facebook, Microsoft has announced initiatives to combat misleading AI-generated political ads, including “Content Credentials as a Service” and the launch of an election communications hub.

As for candidates building their own AI versions, OpenAI has already had to deal with a specific use case. The company suspended a developer’s account after finding it had created a bot impersonating presidential candidate Dean Phillips. The move followed a petition from the nonprofit organization Public Citizen asking regulators to ban the use of generative AI in political campaigns.

Anthropic declined to comment further, and OpenAI did not respond to an inquiry from Decrypt.
