AI & Society

A series of articles about the most significant technology shift of our lifetime

1 Overview

2 The Case For Optimism

3 The Case For Pessimism

4 The Failure Of Federal Governance

5 Governance Alternatives

6 Messy, Complicated & Uneven

There will not be any federal regulation of AI in the US in the foreseeable future. No policy guidance, no agencies guarding privacy, no support for displaced workers, no accountability for errors or abuse.

That’s not a partisan statement, and it’s not a political argument. It’s a statement of fact.

At one time the United States regulated industries to ensure public safety, economic stability, and fair competition. In the 21st century, political polarization has created a state of legislative paralysis. With a couple of limited, specific exceptions, there have been no meaningful changes to federal business regulation in the US in the last twenty-five years. Existing agencies and regulations continued to do their work after paralysis set in – until this year, when many of them began to be dismantled.

But calm down. That doesn’t mean we’re doomed. Well, we might be doomed, but not for that reason.

We have begun a complex, decentralized, and often volatile “Great American AI Experiment.” A profound shift is occurring in American governance, with a fragmented and politically contentious patchwork of state-level laws emerging to fill the federal void. This “technology federalism” creates legal uncertainty and a balkanized market – but it’s not quite the same as no governance in the US at all.

In addition to state-level action, the giant tech companies will take steps to provide their own technocratic guidance, with limited public participation. Ethical concerns and the public interest may be sidelined.

AI development in the US will also be affected by the more proactive governance models put in place by the European Union and China.

Let’s start by focusing on the importance of governance, then look at the failure of governance at the federal level in the US.

Governance is critical for AI development

The future of AI is not a technological inevitability. It will be shaped by the human choices we make about policy, ethics, and governance.

Analysts and industry insiders almost universally agree that governance is essential to realize the benefits of AI and avoid catastrophic risks. “Governance” is the overall term to describe a safety net of policies and regulations, institutions, and public-private cooperation – the framework to guide society safely into the age of AI.

We’re not going to have any of that at the federal level.

Every discussion of AI includes governance as an open plea or an implied premise. I’ll mention a few specific examples but assume these are just the tip of a very, very large iceberg.

OpenAI CEO Sam Altman has been one of the most vocal advocates for AI regulation, even testifying before Congress to urge government action.

Google/Alphabet CEO Sundar Pichai argues that regulation is needed to protect public safety and privacy. “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used.”

Bill Gates believes that relying on corporate self-regulation is insufficient and that governments must establish clear guidelines, especially for high-risk applications like those in healthcare and finance.

Mustafa Suleyman, co-founder of DeepMind and CEO of Inflection AI, says in his book The Coming Wave that AI requires a coordinated global governance framework, including new international treaties, modernized regulations, and public-private cooperation.

Stuart Russell, a leading AI researcher and UC Berkeley professor, focuses on the “AI control problem” in his book Human Compatible. The governance he proposes is not just about government regulation, but about a global, coordinated effort among researchers and policymakers.

In his book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark (a physicist and president of the Future of Life Institute) explores scenarios ranging from an AI utopia to a dystopian benevolent-dictator AI, and stresses that the outcome depends on whether we can successfully implement AI governance.

And on it goes. Without governance there is no accountability for the risks of hallucination, jailbreaking, bias, privacy infringement, or potentially dangerous superintelligent AI.

Dare to dream. Imagine federal AI regulations that address ethical risks like bias, propaganda, and misinformation.

Imagine a world where US policymakers focus on investing in initiatives to help displaced workers in the labor market and ensure that the benefits of AI are distributed more equitably.

We don’t live in that world.

US regulatory gridlock

For much of its history, the U.S. government actively shaped the economy to serve the public interest.

The Progressive Era (Late 19th – Early 20th Century): In response to the unchecked power of industrial monopolies, Congress passed landmark legislation like the Sherman Antitrust Act of 1890. The goal was to break up corporate giants that stifled competition and exploited consumers, ensuring a level playing field for innovation and enterprise.

The New Deal (1930s): The Roosevelt administration responded to the Depression with a wave of regulations creating agencies like the Securities and Exchange Commission (SEC) to police the stock market and the Federal Deposit Insurance Corporation (FDIC) to protect bank deposits.

The Social Regulation Wave (1970s): A growing awareness of public health and environmental dangers led to the creation of the Environmental Protection Agency (EPA) and the Occupational Safety and Health Administration (OSHA). This era established the government’s role in protecting citizens from non-economic harms, such as pollution and unsafe workplaces.

The underlying premise was that effective regulation could coexist with, and even foster, innovation and profitability by creating a predictable and safe market.

Paralysis (2000 forward): In the last twenty-five years, the political will to collaborate has dissolved. There have been only five meaningful new federal acts in this century: Sarbanes-Oxley improved corporate financial reporting in 2002; Dodd-Frank ushered in new bank regulations in 2010 and established the Consumer Financial Protection Bureau; President Obama’s Affordable Care Act expanded health insurance coverage in 2010; and President Obama (2009) and President Biden (2021) each pushed infrastructure bills through.

That’s it. That’s the sum total of federal action to regulate business and help US citizens in 25 years.

The paralyzed government’s most glaring failures show up in its inability to keep pace with technology. Five examples:

Artificial intelligence: Misinformation and deepfakes corrode our trust in democratic institutions; the uninhibited collection of personal data threatens our privacy; we face all the other risks we’ve discussed – and nothing. Not only have there been no new laws or regulations, there is disagreement about whether any action is appropriate at all.

Privacy: The U.S. is one of the only major economic powers without a comprehensive national data privacy law, leaving citizens’ personal information largely unprotected.

Cybersecurity: Congress has failed to pass comprehensive legislation mandating baseline security standards for critical infrastructure like the power grid, water systems, and financial networks.

Antitrust: Big tech companies have become modern monopolies that do not fit century-old antitrust definitions. Nonstop legal challenges and lack of agreement over the scope and specifics of new regulations have prevented any progress.

Social networks: There are no guidelines for content moderation to regulate misinformation, propaganda, and deepfakes.

The failure of federal governance extends to other areas – scientific and medical research, public health, gun control, environmental protection, the climate crisis, Social Security, taxation, immigration, voting and election law, and additional work to maintain infrastructure and improve the healthcare system.

For the foreseeable future, Democrats and Republicans hold diametrically opposed views on how to regulate technology. Meanwhile, the technologies that fuel polarization run unchecked, meaning the lack of regulation is both a symptom and a cause of the social division and legislative paralysis.

A quick note about military AI

I don’t want to drive too far down the road to despair, but I’m compelled to mention the terrifying danger of using AI in the military without a legal framework.

Many of the targets in the initial wave of Israeli attacks on Gaza were generated by Habsora, an AI system that produces targets “at a rate that far exceeds what was previously possible, … essentially facilitating a ‘mass assassination factory.’” The war between Ukraine and Russia has been a laboratory for AI-powered drones, a technology that is transforming modern warfare.

Existing international law is insufficient to assign liability when an AI system makes a mistake that causes civilian casualties or human rights violations. Should the software developer, the state, or the military commander be held responsible?

And the US military wants to use AI, you bet it does. In July, NextGov reported that “the Defense Department’s latest budget request seeks billions of dollars for AI and autonomous systems for everything from autonomous ‘wingman’ fighter drones to AI research and development, robotics development and other emerging technologies.”

The Pentagon specifically intends to use Elon Musk’s AI Grok, which is being trained to be partisan and biased.

The idea of unregulated partisan military AIs is so hideously awful that there’s nothing more to be said. Don’t dwell on it. That way lies madness.

In the next article we’ll talk about some of the governance alternatives – how the states are filling the federal vacuum, voluntary action by the tech companies, and the governance models in the EU and China.
