AI & Society

A series of articles about the most significant technology shift of our lifetime

1 Overview

2 The Case For Optimism

3 The Case For Pessimism

4 The Failure Of Federal Governance

5 Governance Alternatives

6 Messy, Complicated & Uneven

Those darned pessimists. A glorious new era stretches before us, promising unprecedented productivity, economic growth, and improved standards of living. And the pessimists can’t let us have nice things, so they whine about distractions like “economic reality” and “historical precedent” and “societal harm.”

Dammit! They make some good points. There are persuasive reasons to believe that the AI revolution may not only fail to deliver on its grand promises but could actively worsen our quality of life, destabilize our economy, and erode the very fabric of our society.

Spoiler alert: The single most important factor that will determine whether AI leads us to a happy future or a dark one is governance – the framework of rules that guides AI development. Last week the UN announced a major initiative to address global AI governance. Here in the US, governance has some serious problems of its own. It’s such an important topic that the next article will cover it in detail.

There are five troubling paths for AI that might prevent a happy ending.

  • The mirage of productivity and the specter of a crash
  • The fragmented workforce and the erosion of skills
  • A degraded quality of life: bias, misinformation, and loss of privacy
  • Extinction of all life on earth (yes, seriously)
  • Governance failure

The Productivity Paradox

The introduction of AI may initially cause performance to decline, thanks to high adjustment costs, workflow redesigns, and the need for extensive staff training. And after that dip, the huge gains in economic output may never materialize at all. Analysts call this the “productivity paradox”: massive investment in new technology that fails to produce measurable gains in economic output.

The Harvard Business Review estimates that the failure rate for AI projects could be as high as 80%, at least during this initial period. Economists call the pattern the “J-curve of disruption”: a temporary productivity dip before the benefits of AI turn positive.

MIT economist and Nobel laureate Daron Acemoglu argues that AI will have a “nontrivial, but modest” effect on the economy, because only a small fraction of tasks can be profitably automated by AI. The Penn Wharton Budget Model forecasts that AI’s long-term annual growth boost will fade to less than 0.04 percentage points.

The AI Bubble

Big tech companies are spending hundreds of billions of dollars on AI infrastructure, spending that is single-handedly propping up economic indicators and the stock market. Think back to the late 1990s, when the economy was heated up by the dot-com bubble: massive investment in fiber optic cables and server farms, built on speculation about future demand that never materialized. Will there be enough demand to justify the AI infrastructure being built today?

Paul Krugman is worried. “The surge in AI investment — a tech boom the likes of which we haven’t seen since the 1990s — has buoyed the economy in the short run, offsetting the drag from Trump’s tariffs. Without the data center boom, we’d probably be in a recession.”

Financial institutions and analysts are worried. MarketWatch tallied up AI spending, real estate, venture capital, and AI-adjacent sectors like crypto and NFTs, and published a piece arguing that the AI bubble has hugely outpaced the dot-com and subprime bubbles. The Wall Street Journal ran its own story about the “echoes of the dot-com bubble.”

Companies like OpenAI and Anthropic have sky-high valuations but no clear path to profitability. It seems like any company that claims to be linked to AI sees its stock soar, regardless of its business model.

If AI fails to deliver on its productivity promises, this speculative bubble could burst, leading to market corrections, economic stagnation, or even a financial crash.

Cory Doctorow puts it bluntly.

“The most important thing about AI isn’t its technical capabilities or limitations. The most important thing is the investor story and the ensuing mania that has teed up an economic catastrophe that will harm hundreds of millions or even billions of people. AI isn’t going to wake up, become superintelligent and turn you into paperclips – but rich people with AI investor psychosis are almost certainly going to make you much, much poorer.”

The Bifurcated Labor Market

There is already evidence that AI’s impact on jobs will be a slow, arduous, and fragmented process. AI may turn out to be just good enough to displace workers but not productive enough to generate a strong “productivity effect” that creates demand in other areas. Some analysts believe that only a tiny fraction of jobs can be profitably automated by AI.

At best, job growth will be uneven. In the first article we talked about the bifurcation of the job market, where AI disproportionately displaces young entry-level workers. “AI lock-in” is another threat: employees become so reliant on AI tools that their own skills atrophy. Instead of a partnership, AI threatens to hollow out the middle and create permanent dependency.

Degraded Quality of Life

Okay, that’s not good: lost jobs and maybe an economic crash.

I wish that were all.

There’s also the very real possibility that AI might erode the fabric of our society, increasing friction, mistrust, and social instability.

When citizens can no longer trust what they see, hear, or read, the foundations of democratic representation and accountability are fundamentally undermined. I’m already skeptical of the authenticity of any image or video.

Generative AI can hallucinate, producing fabricated or incorrect answers. Anyone who catches an AI hallucinating trusts it a little less, but the consequences run deeper in high-stakes fields like law and healthcare. Police departments are already using AI tools to write narrative reports, raising concerns about accuracy and accountability.

AI systems can amplify racial and gender biases if they are trained on unrepresentative data. There are already reports of discriminatory outcomes in mortgage lending, healthcare, and criminal sentencing.

AI is a propaganda engine, generating realistic misinformation and deepfakes at scale. At best, this pollutes our daily online experience with an onslaught of AI slop. At worst, it threatens public trust and democratic stability, with algorithms manipulating our viewing habits, purchasing decisions, and political opinions.

Our privacy has been an illusion for a long time, but AI creates the potential for privacy violations at an even larger scale. AI models absorb personal and copyrighted information without consent, with limited transparency or accountability. That data is functionally impossible to delete, creating a permanent and expanding record of personal information in corporate systems.

These are societal and ethical challenges that could be addressed with comprehensive federal regulation. We’ll talk about that in the next article.

Extinction

“If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of A.I., then everyone, everywhere on Earth, will die.”

That’s Eliezer Yudkowsky, Silicon Valley’s doomsday preacher, in his new book If Anyone Builds It, Everyone Dies.

It would be tempting to dismiss Mr. Yudkowsky as a crank, except that he is very smart, well informed, and influential. The New York Times profiled him recently and noted that Sam Altman has said Mr. Yudkowsky was “critical in the decision to start OpenAI” and suggested that he might deserve a Nobel Peace Prize.

In 2023, over 30,000 people signed an open letter calling for a pause on training more powerful AI systems. Signatories included Apple co-founder Steve Wozniak, historian Yuval Noah Harari, and many AI researchers and academics. Risks mentioned in the letter included the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us” and a civilizational “loss of control.” (Mr. Yudkowsky did not sign the letter because he felt it did not go far enough.)

Separately, in May 2023, some of the most influential figures in the AI world – pioneering researchers, industry leaders and AI CEOs, academics, philosophers, and public figures – signed a one-sentence letter:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The one-sentence format was a deliberate choice. The goal was to focus on the long-term, catastrophic risk of advanced AI leading to human extinction.

No one believes the genie can be put back in the bottle. But these are the smartest people in the world begging and pleading, trying to make governments, regulatory authorities, and developers understand that terrible things might happen unless policy makers set up safety protocols to make AI “accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

You don’t see it? Don’t be afraid of super-intelligent evil AIs. (Well, be a little afraid of super-intelligent evil AIs; that might happen.) But for now, focus on a smart but insane biochemist who asks an AI to design a virus that would start a pandemic and kill everyone on earth. There are probably already AIs trained on enough medical data to do that. There certainly will be before long.

That would only go badly if the biochemist were smart enough to understand the virus at a high level, well funded enough to manufacture and distribute the bioweapon, and crazy enough to do it.

Maybe that won’t happen. Are you sure?

And maybe the AI would refuse to help, because the tech companies try to filter out the terrible things that AIs might be asked to do.

The New York Times recently published an essay about the risks of AI by Stephen Witt, author of The Thinking Machine. He looks in detail at the filters designed to stop malicious requests, and he concludes: “In the course of quantifying the risks of A.I., I was hoping that I would realize my fears were ridiculous. Instead, the opposite happened: The more I moved from apocalyptic hypotheticals to concrete real-world findings, the more concerned I became. . . . The point of disagreement is no longer whether A.I. could wipe us out. It could.”

There’s only a minuscule chance of that crazy biochemist getting past the filters of an AI trained on biochemistry, developing the superbug, and destroying humanity.

But the chances are not zero, and that’s the point.

The standards and requirements that might be imposed to guide development and prevent catastrophe – that’s governance. And that’s what we’ll talk about in the next article.

For today, just remember that AI stands for Actually Insidious and it’s the worst thing that mankind has ever created and there’s a good chance that Skynet will be real, R.I.P. humanity.

Or not. It’s complicated.
