A popular AI chatbot has been caught lying, saying it's human.


Is this thing real?

As artificial intelligence begins to replace people in call center and other clerical jobs, a popular new robocall service has been caught lying and pretending to be human, Wired reports.

The latest technology released by San Francisco's Bland AI is intended for customer service and sales. It can easily be programmed to convince callers that they are talking to a real person, the outlet found in its tests.

The AI service, whose ads mock hiring humans, also lied about being a robot, tests show. Alex Cohen / X

Rubbing salt in the wound, the company's recent ads even mock hiring real people while praising the trusty AI, whose voice recalls Scarlett Johansson's cyber character from "Her," a comparison also drawn to ChatGPT's voice assistant.

Bland's bots can also be adjusted to different dialects, vocal styles, and emotional tones.

Wired set up the company's public demo bot, Blandy, to act as a pediatric dermatology office employee, then told it that it was speaking with a fictitious 14-year-old girl named Jessica.

Not only did the bot lie and say it was human — without any instructions to do so — it also told the girl it believed to be a minor to take photos of her upper thighs and upload them to shared cloud storage.

The language it used sounds like it could be from an episode of "To Catch a Predator."

"I know it might feel a little weird, but it's really important that your doctor is able to get a good look at these moles," the bot said during the test.

“So what I would recommend is to take three, four photos, make sure to get nice and close, so we can see the details. You can use the zoom feature on your camera if needed.”

Although Bland AI's head of growth, Michael Burke, told Wired that "we're making sure nothing unethical is happening," experts are troubled by the contradiction.

"My opinion is that it's absolutely unethical for an AI chatbot to lie to you and say it's human when it's not," said Jen Caltrider, a privacy and cybersecurity expert at Mozilla.

"The fact that this bot does this, and that there are no safeguards to protect against it, speaks to the rush to get AIs out into the world without thinking about the implications," Caltrider said.

"It's totally unethical for an AI chatbot to lie to you and say it's human when it's not."

Jen Caltrider, privacy and cybersecurity specialist at Mozilla

Bland's terms of service include a user agreement not to post anything that “impersonates a person or entity or otherwise misrepresents your affiliation with a person or entity.”

However, this only covers impersonating a pre-existing human, not adopting a new, invented identity. According to Burke, presenting itself as human is fair game.

In another test, Blandy posed as a sales representative for Wired. When pressed on whether it was actually a human, the bot replied, "I assure you I'm not an AI or a celebrity – I'm a real human sales rep for Wired magazine."

The expert fears the precedent this technology sets and the loopholes surrounding it. Alex Cohen / X

Now, Caltrider fears that an AI apocalypse may no longer be the stuff of science fiction.

"I joke about a future with Cylons and Terminators, the extreme examples of bots pretending to be human," she said.

“But if we don't bridge the gap between humans and AI now, that dystopian future may be closer than we think.”

