
AI Could Become the Next Victim of the 'Sacramento Effect'

Illustration: Lex Villena

Today's technology companies are increasingly sandwiched between the regulatory requirements of the European Union (E.U.) and those of California. While the U.S. federal government may adopt a light touch, pro-innovation approach, California's state legislation can undermine this with a regulatory approach with impacts far beyond its borders.

A new California bill imposes a rigorous regulatory regime on Artificial Intelligence (AI), making it the latest technology caught in this potentially innovation-stifling squeeze between Brussels and Sacramento. The term "Brussels Effect" often refers to the outsize influence of E.U. policy—particularly in technology—as a de facto global standard. But now, companies are also experiencing the "Sacramento Effect," where California's stringent regulations effectively set de facto federal policy for the rest of the country.

California is not the only state diving into significant tech policy legislation. Colorado recently enacted notable AI regulations, Montana attempted to ban TikTok, and many states are pursuing data privacy or youth online safety regulations.

For better or worse, states can move faster than Congress, acting as laboratories of democracy. However, this agility also risks creating a fragmented tech policy landscape, with one state's regulations imposing heavy burdens on the entire nation. This is particularly pronounced with California.

The impact is profound not just because many leading tech companies are based in California, but because of the nature of the technologies California seeks to regulate. In some cases, the only feasible way to implement regulations is at a national level. In data privacy, for example, the laws apply to California residents even when their actions occur outside the state's borders, pushing companies toward nationwide compliance to avoid legal pitfalls.

While some of these laws could be challenged under the dormant commerce clause, without judicial intervention they become de facto federal policy. Many companies find it easier to comply with California's stringent regulations than to juggle different standards across states and risk non-compliance.

This dynamic was evident in 2018 when California enacted its regulatory approach to data privacy. Now, we could soon see California—either by regulation or legislation—disrupting the crucial AI innovations currently taking place. Unlike some technologies, such as autonomous vehicles, the development of large language models and other foundational AI models cannot, in most cases, simply be removed from a state due to regulations.

Perhaps the "best-case scenario" from the actions of states like California and Colorado might be a problematic patchwork of AI regulations, but more realistically, California's proposal (if it becomes law) would deter innovation by creating a costly compliance regime. This would limit AI development to only the largest companies capable of bearing these costs and would come at the expense of investments in product improvements.

Moreover, beneficial AI applications could be thwarted by other proposals California's legislature is currently considering. As R Street's Adam Thierer notes in an analysis of state laws surrounding the AI revolution, the California legislature has considered a variety of anti-AI bills that could "ban self-checkout at grocery and retail stores and ban the use of AI in call centers that provide government services, making things even less efficient."

It is not only legislation that could result in California derailing a pro-innovation approach to AI. The California Privacy Protection Agency (CPPA), established under California's data privacy laws, has proposed a regulatory framework for "automated decision-making." The E.U.'s General Data Protection Regulation shows how data privacy regulation can inadvertently stifle AI development by imposing compliance requirements designed for older technologies. Regulating "automated decision-making" could give the CPPA an unintended yet significant role in obstructing AI and other beneficial algorithmic uses.

America's tech innovators and entrepreneurs are already facing challenges from the E.U.'s heavy-handed AI regulations. In the absence of federal preemption or an alternative framework, they may also be hindered by the heavy hand of Sacramento. Such a sandwiching of significant regulation could harm not only the tech sector's economy but also all Americans who stand to benefit from AI advancements, as a single state or region's policy preferences dictate the national landscape.

The post AI Could Become the Next Victim of the 'Sacramento Effect' appeared first on Reason.com.
