New White House Executive Order on Artificial Intelligence: A Step Forward in AI Governance?


The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By James L. Vaughn
April 1, 2024

Concern over the risks posed by artificial intelligence (AI) has prompted governments and organizations around the world to develop a plethora of ethical principles and frameworks to govern AI. Initiatives range from the Future of Life Institute's Asilomar AI Principles to international legislation such as the European Parliament's Artificial Intelligence Act. So far, most countries, organizations and corporations with interests in AI technology have adopted some form of governance framework. In the United States, White House Executive Order (EO) 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023), is one of the government's latest initiatives on AI governance. At 35 pages, EO 14110 is more comprehensive than previous US legislation and executive orders:

  • Advancing American AI Act, PL 117-263 (2022)
  • AI in Government Act of 2020, PL 116-260
  • EO 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 3, 2020)
  • EO 13859, Maintaining American Leadership in Artificial Intelligence (February 11, 2019)

EO 14110 seeks to strike a balance between encouraging AI innovation and regulating its use. It directs federal agencies to assume new roles and responsibilities and to take specific actions, such as designating a Chief AI Officer, publicly releasing compliance plans and AI use cases, practicing AI risk management and establishing thresholds for human review. It invokes the Defense Production Act to impose certain requirements on industry. Technical concepts introduced by the Order, such as generative AI, synthetic content and watermarking, are notable, indicating that parties with expertise in AI had a hand in its drafting, a point observed by researchers at Stanford University's Institute for Human-Centered Artificial Intelligence (HAI).

While the EO's coverage and accountability features are noteworthy, it also reveals the inherent pitfalls of federal policymaking that will challenge AI governance. It pursues an ambitious top-down, cross-government approach that draws on various stakeholders and policy subsystems. It contains vague references to AI that will be problematic for administration and compliance. And, like all executive orders, the EO can only be implemented "in accordance with applicable law and subject to the availability of appropriations."

As the new order is being hashed out by federal agencies, commercial AI development is moving at an astonishing rate. In the past year alone, powerful new open-source multimodal products have been poised to enter the generative AI marketplace. Synthetic data generation capabilities will allow AI models to train themselves on data that is not subject to legal protections. Agent-based models are making autonomous decisions in a variety of work settings. Arguing that they are better positioned to govern AI, companies have launched self-governance initiatives such as risk-based licensing and forums like the AI Safety Alliance. EO 14110 already fails to account for some of these developments.

Critics of AI governance initiatives often raise the question: Are our traditional systems of policymaking capable of governing AI, or will the technology advance at such an explosive rate as to render governance moot? Luke Munn of the University of Queensland lamented the "uselessness" of AI ethics, arguing that such principles are difficult or impossible to enforce. Writing for George Mason University's Mercatus Center, Adam Thierer, Andrea Castillo O'Sullivan and Raymond Russell argued for "permissionless innovation," noting that "(u)nless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later." In an analysis for the Brookings Institution, Blair Levin and Tom Wheeler suggest that agencies' discretion to regulate AI could be weakened under the US Supreme Court's "major questions doctrine," much as environmental regulation was curtailed in West Virginia v. EPA (2022).

EO 14110 is a commendable effort in AI governance that goes far beyond past norms and frameworks. However, it is only one more step in a cumbersome policymaking process. The neural, self-learning nature of AI, its growth rate, its associated market forces and its importance to national security will quickly overwhelm current policymaking efforts. While federal agencies will dutifully comply with the order's mandates, the overall societal impact is likely to be minimal. Until the harmful manifestations of AI (threats to personal safety, crime, privacy and civil rights violations, etc.) tip the balance of American political agendas, expanded efforts to broadly regulate AI will yield only marginal benefits. In the interim, a more constructive approach would rely on permissionless innovation, the gradual development of law and jurisprudence and decentralized enforcement within specialized domains of "narrow" AI applications. As Luke Munn notes, this will involve a lot of granular grunt work. However, students and practitioners of public administration should find this burgeoning new field of work both challenging and exciting.


The author: James L. Vaughn is a researcher in public management. He is a former Presidential Management Intern, served in the federal civil service and has held positions as a corporate ethics officer with UK-based Lucas Industries and as an analyst at the MITRE Corporation. He holds a PhD in Public Policy and Administration from Virginia Tech.
