With the rapid proliferation of AI systems, public policymakers and industry leaders are calling for clearer guidance on how to regulate the technology. A majority of American IEEE members say the current regulatory approach to managing artificial intelligence (AI) systems is inadequate. They also say that prioritizing AI governance should be a matter of public policy, on par with issues such as health care, education, immigration, and the environment. Those are among the results of a survey conducted by IEEE for the IEEE-USA AI Policy Committee.
We serve as chairs of the AI Policy Committee and know that IEEE members are an invaluable resource for informed insights into the technology. To guide our public policy advocacy work in Washington, DC, and to better understand opinions about the governance of AI systems in the US, IEEE surveyed a random sampling of 9,000 active IEEE-USA members, plus 888 active members working on AI and neural networks.
The survey intentionally did not define the term AI. Instead, it asked respondents to use their own interpretation of the technology when answering. The results showed that there is no clear consensus on a definition of AI, even among IEEE members. There is significant variation in how members think about AI systems, and this lack of convergence has implications for public policy.
Overall, members were asked for their views on how to govern the use of algorithms in decision-making, on data privacy, and on whether the US government should increase its AI workforce and expertise.
The State of AI Governance
For years, IEEE-USA has been advocating for strong governance of AI's impact on society. It is apparent that US public policymakers struggle with regulating the data that drives AI systems. Existing federal laws protect certain types of health and financial data, but Congress has yet to pass legislation establishing national data privacy standards, despite numerous attempts to do so. Data protections for Americans are piecemeal, and compliance with the complex patchwork of federal and state data privacy laws can be costly for industry.
A number of US policymakers have argued that AI governance cannot occur without a national data privacy law providing standards and technical safeguards for data collection and use, particularly for commercially available information. Such data is a key resource for third-party large language models, which use it to train AI tools and generate content. As the US government has recognized, the marketplace for commercially available information allows any buyer to obtain data about individuals and groups, including details otherwise protected by law. The issue raises significant privacy and civil liberties concerns.
Regulating data privacy, it turns out, is an area where IEEE members have strong and clear consensus.
Survey Findings
The majority of respondents, nearly 70 percent, said the current regulatory approach is inadequate. Individual responses tell us more. To provide context, we have divided the findings into four areas of discussion: governance of AI as public policy; risk and liability; trusted organizations; and comparative perspectives.
Governance of AI as Public Policy
Although views differ on aspects of AI governance, what stands out is the consensus on regulating AI in specific cases. More than 93 percent of respondents support protecting individual data privacy, and they support regulation to combat AI-generated disinformation.
About 84 percent support requiring risk assessments for medium- and high-risk AI products. Eighty percent called for placing transparency or explainability requirements on AI systems, and 78 percent called for restrictions on autonomous weapon systems. More than 72 percent of members support policies that limit or govern the use of facial recognition in certain contexts, and nearly 68 percent support policies that regulate the use of algorithms in decision-making.
There was strong consensus among respondents on prioritizing AI governance as a matter of public policy. Two-thirds said the technology should be given at least the same priority as other areas within the government's purview, such as health care, education, immigration, and the environment.
Eighty percent support the development and use of AI, and more than 85 percent say it needs to be managed carefully, but respondents disagree about how such management should be done and by whom. While just over half of respondents said the government should regulate AI, this data point should be considered alongside the clear majority support for government regulation in specific areas or use-case scenarios.
Only a small percentage of non-AI-focused computer scientists and software engineers thought that private companies should self-regulate AI with minimal government oversight. By comparison, about half of AI professionals prefer government oversight.
More than three-quarters of IEEE members support the idea that governing bodies of all kinds should do more to manage the impact of AI.
Risk and Liability
Several survey questions asked about perceptions of AI risk. About 83 percent of members said the public is insufficiently informed about AI. More than half agree that AI's benefits outweigh its risks.
In terms of responsibility and liability for AI systems, slightly more than half said that developers should bear primary responsibility for ensuring that systems are safe and effective. About a third said the government should take responsibility.
Trusted Organizations
Respondents ranked academic institutions, nonprofits, and small and midsize technology companies as the entities most trusted to responsibly design, develop, and deploy AI. The three least trusted entities are large technology companies, international organizations, and governments.
The institutions most trusted to responsibly manage or govern AI are academic institutions and independent third-party organizations. The least trusted are large technology companies and international organizations.
Comparative Perspectives
Members showed a strong preference for regulating AI to reduce social and ethical risks, with 80 percent of non-AI science and engineering professionals and 72 percent of AI workers supporting this view.
About 30 percent of professionals working in AI say regulation can stifle innovation, compared with about 19 percent of their non-AI counterparts. A majority in both groups agree that it is more important to regulate AI now than to wait, with 70 percent of non-AI professionals and 62 percent of AI workers supporting immediate regulation.
A significant majority of respondents recognized the social and ethical risks of AI, emphasizing the need for responsible innovation. More than half of AI professionals favor non-binding regulatory tools such as standards. About half of non-AI professionals favor specific government regulations.
A Mixed Governance Approach
The survey shows that a majority of US-based IEEE members support the development of AI and strongly advocate for its careful management. The results will guide IEEE-USA in working with Congress and the White House.
Respondents recognized AI's benefits but expressed concerns about its societal impacts, such as inequality and misinformation. Trust in the institutions responsible for creating and managing AI varies widely; academic institutions are considered the most trustworthy.
A significant minority opposes government involvement, preferring nonregulatory guidelines and standards, but those numbers should not be viewed in isolation. Although attitudes toward government regulation in the abstract are mixed, there is strong consensus for prompt regulation in specific areas such as data privacy, the use of algorithms in decision-making, facial recognition, and autonomous weapons systems.
Overall, a mixed governance approach that draws on laws, regulations, and technical and industry standards is preferred.