Our governments should not use the public as guinea pigs for AI.


Last month, New York City came under scrutiny after its AI-powered chatbot was shown to be giving out false information, encouraging small business owners to break the law and violate protections for workers and tenants. When asked about the chatbot's shortcomings, which were first reported by the investigative outlet The Markup, New York City Mayor Eric Adams responded that “[a]nytime you use technology, you need to put it in a real environment to iron out the kinks.”

Weeks later, the chatbot is still up and running, dishing out bad advice, and any “kinks” are being ironed out at the expense of real people.

While Adams' “move fast and break things” philosophy may still hold sway among Silicon Valley entrepreneurs, it is a terrible guide for the public sector, where governments are responsible for the consequences of these disruptions. The New York City chatbot episode is a perfect example of how early adoption of new technology, and AI in particular, can create costs for governments and the public that far outweigh the benefits.

Created by Microsoft and released in October as part of New York City's Artificial Intelligence Action Plan (billed as the first of its kind for a major U.S. city), the chatbot is hosted on the Department of Small Business Services' website. Its goal is to give business owners “access to reliable information” from official city sources to help them “start, run and grow businesses.” It seems innocent enough. And what business owner wouldn't be enticed by the promise of a quick, straightforward answer instead of the tedious, all-too-familiar click-through to find the right FAQ, form or phone number?

Had it been implemented well, the chatbot could have boosted the city's efforts to streamline and improve public services. Instead, it has created a host of potential problems for the city government and harmed its residents.

For example, according to the Markup investigation, the chatbot falsely stated that employers could take workers' tips. On paper, New York City has some of the strongest labor protections in the United States. But enforcing those laws is difficult, and all the more so when a government-sanctioned chatbot is feeding business owners the wrong information about them. And because wage theft enforcement is complaint-based, initiated by workers, such misinformation can deter workers from filing complaints in the first place. If workers suspect their rights are being violated because their tips are being withheld, employers can counter their claims with the backing of an AI chatbot that carries the authority and legitimacy of the City of New York.

Protecting workers' rights is already difficult, and technological systems can make it even harder. Research from Data & Society has shown how automated systems can scale unpredictability in work through scheduling software, and how tip theft can be automated on platforms like Amazon Flex and Instacart. In fact, Amazon was fined $61.7 million by the Federal Trade Commission for withholding tips from Amazon Flex drivers. Existing laws, like tip protection legislation and fair scheduling laws, can hold employers accountable regardless of the tools they use, but labor protections are only as good as their enforcement.

A recent report by Data & Society and Cornell University examined a New York City law that requires employers to notify job applicants when they use automated employment decision tools in hiring or promotion. The researchers found that compliance with the law appears to be surprisingly low and that its usefulness to job seekers is limited.

By providing false information, cities can also create legal problems for themselves and for businesses. In a recent case, Air Canada lost a small claims court case brought by a passenger who said the airline's AI chatbot had misled them about its bereavement policy. If the entity in question were a government rather than a company, it could be liable for providing false information, and workers could in turn sue an employer who acted on that false information and broke the law.

The public should have opportunities to provide input on technologies introduced into public administration, since people interface with these agencies and may be adversely affected by the AI systems they deploy. Ultimately, it's an issue of trust: if people can't trust their democratically elected governments to know their rights, and these technocratic intermediaries are representatives of those governments, then they're unlikely to trust those same institutions to protect those rights.

As governments adopt more technology at a quickening pace, it is important that any new tools be thoroughly vetted and tested before they are released to the world. AI has the potential to dramatically improve many government processes and help cities deliver better services. But if technologies are designed poorly, without attention to how they are integrated into society, they can shift power relations and change how people relate to their governments. In this case, the more likely result is a further erosion of trust in public institutions, and a weakening of the very laws and regulations the city is responsible for clarifying and enforcing.

Aiha Nguyen is the Program Director of the Labor Futures Program at Data & Society, which seeks to better understand disruptions in the labor force resulting from data-driven technological advances, and to create new frames for understanding and addressing these disruptions through evidence-based research and collaboration.

Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

