To achieve fair hiring in the AI era, regulate AI vendors, not just employers


The lesson of New York City’s requirement for bias audits of employment algorithms is that policymakers should regulate AI vendors.

A landmark New York City law, known as Local Law 144, requires employers to conduct an annual bias audit of the automated employment tools they use. I testified in favor of an earlier version that placed the burden on the vendor of the employment tool. As I saw it, public disclosure of any disparate impact of an automated employment tool would be valuable information for employers who were potential customers of the tool.

However, the law was changed to place the burden on the employer, not the vendor. I’m not sure why; perhaps it was not clear that the city had jurisdiction over the vendors, some of whom operate in several states. I don’t know.

In any event, placing the burden solely on employers was a mistake. The law took effect six months ago, and a study by the public interest group Data & Society shows that while many companies are contracting for bias audits, they largely do not make them available to the public.

Apparently, the disclosure requirement applies only if the tool is actually used to make employment decisions. Those familiar with the adverse action requirements for employment decisions under the Fair Credit Reporting Act will recognize the problem: only the employer can tell whether the decision-making tool was actually used to make the decision.

Employers in New York City are routinely making internal determinations that they are exempt from the audit and disclosure requirements, then waiting for regulators or affected applicants or employees to bring an enforcement case. New York City enforces the law on a complaint basis and has yet to bring a case. As a result, there is a significant chance that an algorithmic bias law presented as a model for other jurisdictions will remain on the books but be ignored.

There are a number of different ways to proceed, some of which Jacob Metcalf, author of the Data & Society study mentioned above, described in an op-ed for The Hill in December. But two things jump out at me. The first is to place the burden on the vendor to assess and publicly disclose the disparate impact of its employment tools. The second is to do so on a national basis, so that vendors cannot evade local laws by refusing to sell there, and to remove any question of legal authority to regulate interstate commerce.

The main benefit of requiring vendor audits is that they put information in the hands of employers, who can then choose the employment tool that best matches their hiring needs, taking into account the legal risk of violating employment discrimination laws.

If a tool that recommends 10 white candidates for hire for every hundred who apply recommends only two Black candidates for every hundred who apply, that is useful information for a hiring firm. If another tool recommends eight Black candidates for every hundred who apply, employers can see that the firm would be in a better position to comply with the Equal Employment Opportunity Commission’s 80 percent guideline by using the second tool.
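To make that arithmetic concrete, here is a minimal sketch in Python of the EEOC’s 80 percent (four-fifths) comparison, using the hypothetical selection numbers from the example above. The function name and the tool figures are illustrative assumptions, not the legal test itself.

```python
# Minimal sketch of the EEOC "four-fifths" (80 percent) guideline check.
# Tool names and selection counts are hypothetical, taken from the example
# in the text; this illustrates the arithmetic, not the full legal analysis.

def impact_ratio(selected_a: int, applied_a: int,
                 selected_b: int, applied_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selected_a / applied_a
    rate_b = selected_b / applied_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Tool 1: 10 white and 2 Black candidates recommended per 100 applicants.
tool_1 = impact_ratio(10, 100, 2, 100)   # 0.02 / 0.10 = 0.2

# Tool 2: 10 white and 8 Black candidates recommended per 100 applicants.
tool_2 = impact_ratio(10, 100, 8, 100)   # 0.08 / 0.10 = 0.8

for name, ratio in (("tool 1", tool_1), ("tool 2", tool_2)):
    status = "meets" if ratio >= 0.8 else "falls below"
    print(f"{name}: impact ratio {ratio:.2f} -> {status} the 80% guideline")
```

Run as written, the sketch shows tool 1 at a 0.2 impact ratio, well under the four-fifths threshold, while tool 2 sits at exactly 0.8.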

The requirement to audit and disclose bias is not, and should not be, a standard for unlawful disparate impact. That is the province of underlying discrimination law. If a hiring firm thinks it has a good business reason to use a tool that selects only two out of every hundred Black applicants, it is free to use that tool. But at least the firm knows what legal risk it is running when it does so.

A vendor disclosure requirement puts market pressure on vendors to develop employment tools that avoid disparate impact, rather than burdening each employer to audit and disclose its use of automated employment tools. Employers will understandably seek every legal avenue to avoid such embarrassment. It is more effective to use market incentives to push vendors to develop fair employment tools.

Congress can take a lesson from this example for AI law and regulation. Agencies with current responsibility for ensuring that AI users comply with the law, including the EEOC for employment and the financial regulators for credit scores, have limited authority to impose disclosure or vetting requirements on AI vendors.


Last year, Alex Engler, my former colleague at Brookings who is now in the White House, urged Congress to pass a comprehensive bill to upgrade existing agencies’ powers to deal with AI issues in their jurisdictions. One element of such a bill would be giving these agencies the authority to impose audit and disclosure rules on AI vendors.

As it looks for opportunities to regulate the fast-moving field of artificial intelligence, Congress should pursue some low-hanging fruit and pass legislation to mandate bias audits and disclosures for AI vendors.

Mark MacCarthy is the author of “Regulating Digital Industries” (Brookings, 2023), an adjunct professor in the Communication, Culture and Technology Program at Georgetown University, a non-resident senior fellow at the Institute for Technology Law and Policy at Georgetown Law, and a non-resident senior fellow at the Brookings Institution.

Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

