In Japan, customer harassment, or kasu-hara, has increasingly become a problem, along with bullying and sexual harassment in the workplace.
According to a 2024 survey by UA Zensen, Japan's largest union, of nearly 30,000 workers in service and other sectors, 46.8 percent said they had experienced anger or intimidation from customers in the past two years.
The incidents included verbal abuse, repeated complaints, threats and unreasonable demands for apologies.
SoftBank is working with the University of Tokyo on an AI filter that can identify the voices of angry customers, and soften them to a less aggressive tone.
SoftBank unveiled the technology on April 15.
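SoftBank has not disclosed how the filter works internally, but its description suggests a two-stage pipeline: first detect likely anger in the caller's voice, then transform the audio toward a calmer delivery. The Python sketch below is purely illustrative, assuming the librosa audio library; the loudness-based anger heuristic, the threshold, the pitch-and-tempo "softening", and the file names are all hypothetical stand-ins, not SoftBank's method.

```python
# Conceptual sketch of a detect-then-soften voice filter.
# NOT SoftBank's implementation: the anger heuristic, threshold,
# and softening transform are illustrative assumptions only.
import librosa
import soundfile as sf

def sounds_angry(y, sr, rms_threshold=0.08):
    """Crude stand-in for an anger classifier: loud speech ~ angry.
    A real system would use a trained speech-emotion model."""
    rms = librosa.feature.rms(y=y).mean()  # average loudness
    return rms > rms_threshold             # threshold is an arbitrary guess

def soften(y, sr):
    """Rough proxy for a 'less aggressive tone': lower the pitch
    slightly and slow the pace."""
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=-2)  # ~2 semitones down
    return librosa.effects.time_stretch(y, rate=0.95)      # 5% slower

if __name__ == "__main__":
    y, sr = librosa.load("caller.wav", sr=None)  # hypothetical input file
    if sounds_angry(y, sr):
        sf.write("caller_softened.wav", soften(y, sr), sr)
```

In a production system the detection stage would run continuously on live call audio and the transformation would need to work in real time, which simple pitch shifting like this does not attempt.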
In a product demonstration video, a male customer's angry voice is softened into a calmer tone that a Japanese news anchor described as sounding like “an anime dubbing artist”.
It is hoped that the technology will reduce the toll on the mental health of customer service staff and help keep them in their jobs.
In Japan, serving superiors and customers at work is traditionally seen as a virtue.
However, the situation has gradually improved in recent years.
In 2022, Japan's Ministry of Health, Labour and Welfare published guidelines urging companies to deal with customer harassment.
Some service providers, such as ANA Holdings, the parent of All Nippon Airways, and West Japan Railway, had already unveiled policies on customer harassment.
West Japan Railway has told staff that they can stop selling products or providing services to customers who verbally or physically abuse them.
Lawyers may also be brought in to help employees take legal action against abusive customers.
SoftBank will likely start using its AI filter in 2025.
The technology has gained massive support online.
“It's really nice to have technology like this. However, people should learn to control their anger when talking to customer service staff,” one person said on YouTube.
“It would also be cool if the AI changed the staff's voices to sound like a scary gangster,” joked another.
A third person said the filter was unnecessary: “The AI should just hang up when it recognizes an angry voice.”