Beauty reviews were already dubious. Then came generative AI.


For beauty shoppers, it was already difficult enough to trust online reviews.

Brands like Sunday Riley and Kylie Skin are among those caught up in scandals over fake reviews, with Sunday Riley admitting in a 2018 incident that it tasked employees with writing five-star reviews of its products on Sephora. The brand downplayed the misstep at the time, arguing that the employee-written reviews could only have accounted for a tiny fraction of the millions of Sunday Riley reviews posted on platforms around the world.

Today, however, the problem is being compounded by generative artificial intelligence.

Text-generating tools like ChatGPT, which hit the mainstream just over a year ago, make it faster, cheaper and easier than ever to imitate real reviews at scale, putting shoppers at greater risk of being taken in by fake testimonials. Sometimes there are dead giveaways. “As an AI language model, I don’t have a body, but I understand the importance of comfortable clothing during pregnancy,” began one Amazon review of maternity shorts seen by CNBC. But often there is no way to know.

“Back in the day, you’d see broken grammar and you’d think, ‘This doesn’t look right. It doesn’t look human,’” said Saud Khalifa, a former hacker and founder of Fakespot, an AI-powered tool that identifies fake reviews. “But over the years we’ve seen that decline. These fake reviews are getting a lot better.”

According to Khalifa, fake reviews have become an industry in themselves, driven by fraud farms that operate like syndicates. A 2021 report by Fakespot found that about 31 percent of reviews on Amazon, Sephora, Walmart, eBay, Best Buy and sites powered by Shopify — which collectively accounted for more than half of US online retail sales that year — were unreliable.

Undue influence

It’s not just bots that are compromising trust in beauty reviews. The beauty industry already relies heavily on incentivized human reviewers, who receive free products or discounts in exchange for posting their opinions. This can be a valuable way for brands to get new products into the hands of their target audience and increase their review volume, but consumers are increasingly skeptical of incentivized reviews, so brands should use them strategically and always clearly disclose them.

Sampling and review syndicators like Influenster are keen to point out that receiving a free product doesn’t obligate a reviewer to leave positive feedback, but it’s clear from exchanges in online communities that many users of these programs believe they will receive more free products if they write good reviews. As one commenter wrote in a post in Sephora’s online Beauty Insider community, “People don’t want to stop getting free stuff when they say honest or negative things about free products.”

This practice alone can inflate a product’s customer rating. On Sephora, for example, the new Ouai Hair Gloss In-Shower Shine Treatment has 1,182 reviews and a 4.3-star rating. But when incentivized reviews are filtered out, only 89 remain. Sephora does not recalculate star ratings after these reviews are removed; among non-incentivized reviews alone, the product rates just 2.6 stars. The issue has sparked some frustration among members of its online community. Sephora declined to comment.

But the situation gets even more complicated when reviews created partly by a human and partly by AI are factored in. Khalifa describes these types of reviews as “a hybrid monstrosity, where it’s half legitimate and half not, because AI is being used to fill in the gaps within the review and improve it.”

Adding AI to the mix

The line between authentic reviews and AI-generated content is itself starting to blur as review platforms develop new AI-powered tools to help their communities write reviews. Bazaarvoice, a user-generated content platform that owns Influenster and works with beauty brands including L’Oréal, Pacifica, Clarins and Sephora, recently launched three new AI-powered features, including a tool called “Content Coach”. According to Marissa Jones, senior vice president of product at Bazaarvoice, the company developed the tool based on research showing that 68 percent of its community had trouble getting started when writing a review.

Content Coach prompts users with key topics to include in their review, drawn from common themes in other reviews. For a Chanel eyeliner review, for example, these might include “pigmentation”, “precision” and “ease of removal”. As users type their review, each topic is highlighted once it has been addressed, responding in real time.

Jones emphasized that the prompts are meant to be neutral. “We wanted to provide an unbiased way of giving [users] some ideas,” she said. “We don’t want to influence their opinion or do anything that will push them in one direction or the other.”

But even seemingly innocuous AI “nudges” like those generated by Content Coach can still influence what a user writes in a product review, shifting it from a spontaneous reaction to a product toward a more guided, checklist-driven exercise that requires less thought.

Ramping up regulation

Fakespot’s Khalifa points out that governments and regulators around the world have been slow to act, given the speed at which the problem of fake reviews is evolving alongside generative AI.

But change is finally on the horizon. In July 2023, the US Federal Trade Commission proposed a new rule on the use of consumer reviews and testimonials that would punish marketers who post fake reviews, suppress negative reviews or offer incentives for positive ones.

“Our proposed rule on fake reviews demonstrates that we’re using all available means to attack deceptive advertising in the digital age,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a release at the time. “The rule would trigger civil penalties for violators and should help level the playing field for honest companies.”

In its Notice of Proposed Rulemaking, the FTC shared comments from industry players and public interest groups on the consumer harm caused by fake reviews. Among them, the National Consumers League cited an estimate that US consumers lost $28 billion to fraudulent reviews in 2021. The text also notes that “the widespread emergence of AI chatbots is likely to make it easier for bad actors to write fake reviews.”

In beauty, of course, the stakes are potentially higher, as fake reviews can also mislead consumers into buying fake products, which pose a risk to the buyer’s health and well-being as well as their wallet.

If the FTC’s proposed rule gets the green light, as expected, it would impose civil penalties of up to $51,744 per violation. The FTC could hold that each individual fake review constitutes a separate violation each time it is viewed by a consumer, creating substantial financial exposure for brands and retailers alike.

With this tougher regulatory stance looming, beauty brands should get their houses in order now, and see this as an opportunity rather than an imposition. There is huge potential for brands and retailers to excel at transparency and create an online shopping experience that consumers can trust.
