What U.S. IEEE Members Think About Regulating AI
With the rapid proliferation of AI systems, public policymakers and industry leaders are calling for clearer guidance on governing the technology. A majority of U.S. IEEE members say that the current regulatory approach to managing artificial intelligence (AI) systems is inadequate. They also say that AI governance should be a public policy priority on par with issues such as health care, education, immigration, and the environment. That's according to the results of a survey conducted by IEEE for the IEEE-USA AI Policy Committee.
We serve as chairs of the AI Policy Committee and know that IEEE's members are an invaluable resource for informed insight into the technology. To guide our public policy advocacy work in Washington, D.C., and to better understand opinions about the governance of AI systems in the United States, IEEE surveyed a random sample of 9,000 active IEEE-USA members, plus 888 active members working on AI and neural networks.
The survey intentionally did not define the term AI. Instead, it asked respondents to apply their own interpretation of the technology when answering. The results showed that, even among IEEE's membership, there is no clear consensus on a definition of AI. Members vary significantly in how they think of AI systems, and this lack of convergence has public policy repercussions.
Overall, members were asked how to govern the use of algorithms in consequential decision-making, how to handle data privacy, and whether the U.S. government should increase its workforce capacity and expertise in AI.
The state of AI governance
For years, IEEE-USA has advocated for strong governance to control AI's impact on society. U.S. policymakers clearly struggle to regulate the data that drives AI systems. Existing federal laws protect certain types of health and financial data, but Congress has yet to pass legislation implementing a national data privacy standard, despite numerous attempts to do so. Data protections for Americans are piecemeal, and compliance with the complex web of federal and state data privacy laws can be costly for industry.
Numerous U.S. policymakers have argued that AI governance cannot happen without a national data privacy law that provides standards and technical guardrails around data collection and use, particularly in the commercially available information market. That data is a critical resource for third-party large language models, which use it to train AI tools and generate content. As the U.S. government has acknowledged, the commercially available information market lets any buyer obtain troves of data about individuals and groups, including details otherwise protected under the law. The issue raises significant privacy and civil liberties concerns.
Regulating data privacy, it turns out, is an area where IEEE members have strong and clear consensus views.
Survey takeaways
About 70 percent of respondents said the current regulatory approach is inadequate. Individual responses tell us more. To provide context, we have broken the results into four areas of discussion: governance of AI-related public policies; risk and responsibility; trust; and comparative perspectives.
Governance of AI as public policy
Although opinions diverge on aspects of AI governance, what stands out is the consensus around regulating AI in specific cases. More than 93 percent of respondents support protecting individual data privacy and favor regulation to address AI-generated misinformation.
About 84 percent support requiring risk assessments for medium- and high-risk AI products. Eighty percent back transparency or explainability requirements for AI systems, and 78 percent call for restrictions on autonomous weapon systems. More than 72 percent of members support policies that restrict or govern the use of facial recognition in certain contexts, and nearly 68 percent support policies regulating the use of algorithms in consequential decisions.
There was strong agreement among respondents on prioritizing AI governance as a matter of public policy. Two-thirds said the technology should be given at least the same priority as other areas within the government's purview, such as health care, education, immigration, and the environment.
Eighty percent support the development and use of AI, and more than 85 percent say it needs to be carefully managed, but respondents disagree about how, and by whom, such management should be undertaken. Although only a little more than half said the government should regulate AI, that figure should be weighed against the majority's clear support for government regulation in specific areas and use cases.
Only a very small percentage of non-AI-focused computer scientists and software engineers think private companies should self-regulate AI with minimal government oversight. In contrast, almost half of AI professionals prefer government monitoring.
More than three-quarters of IEEE members support the idea that governing bodies of all types should be doing more to govern AI's impacts.
Risk and responsibility
A number of survey questions asked about perceptions of AI risk. Nearly 83 percent of members said the public is inadequately informed about AI. Over half agreed that AI's benefits outweigh its risks.
In terms of responsibility and liability for AI systems, a little more than half said the developers should bear the primary responsibility for ensuring that the systems are safe and effective. About a third said the government should bear the responsibility.
Trusted organizations
Respondents ranked academic institutions, nonprofits, and small and midsize technology companies as the entities most trusted to responsibly design, develop, and deploy AI. The three least trusted are large technology companies, international organizations, and governments.
The entities most trusted to manage or govern AI responsibly are academic institutions and independent third-party institutions. The least trusted are large technology companies and international organizations.
Comparative perspectives
Members demonstrated a strong preference for regulating AI to mitigate social and ethical risks, with 80 percent of non-AI science and engineering professionals and 72 percent of AI workers supporting the view.
Almost 30 percent of professionals working in AI say regulation might stifle innovation, compared with about 19 percent of their non-AI counterparts. Majorities across all groups agree that it is crucial to start regulating AI now rather than waiting, with 70 percent of non-AI professionals and 62 percent of AI workers supporting immediate regulation.
A significant majority of respondents acknowledged AI's social and ethical risks, emphasizing the need for responsible innovation. Over half of AI professionals lean toward nonbinding regulatory tools such as standards, while about half of non-AI professionals favor specific government rules.
A mixed governance approach
The survey shows that a majority of U.S.-based IEEE members support AI development and strongly advocate for its careful management. The results will guide IEEE-USA's work with Congress and the White House.
Respondents acknowledged the benefits of AI but expressed concerns about its societal impacts, such as inequality and misinformation. Trust in the entities responsible for AI's creation and management varies greatly, with academic institutions considered the most trustworthy.
A notable minority opposes government involvement, preferring nonregulatory guidelines and standards, but those numbers should not be viewed in isolation. Although attitudes toward government regulation are mixed in the abstract, there is overwhelming consensus for prompt regulation in specific scenarios such as data privacy, the use of algorithms in consequential decision-making, facial recognition, and autonomous weapons systems.
Overall, there is a preference for a mixed governance approach, using laws, regulations, and technical and industry standards.