This is a story about lawyers and moronicans, laziness and greed, and why blame for bad AI results should frequently rest with people, not technology.

You should be skeptical about reports of flaws and problems with AI – but also skeptical about the results you get from your own use of AI, and skeptical about the people and companies promoting AI so relentlessly.

AI is not just one thing. I’m going to give you a simple framework of four distinct areas that are being transformed by AI. One, and only one, of them is the source of all the anecdotes that are filling our newsfeeds.

Predictions  AI is really good at sifting through mountains of data and making predictions. Scientists, researchers, and engineers are using it for groundbreaking research in medicine, climate and weather, astronomy and astrophysics, and genetics. Celebrate their work and thank the technology gods. It will improve our lives.

Automation  AI can be used for automation. It will become part of the controls for physical systems – robotics, assembly lines, and cybersecurity, for example. There will be a human cost in lost jobs as people are replaced by machines, but don’t blame AI. Technology has been disrupting jobs for centuries. Ask the scribes who were put out of work by the printing press.

Perception & understanding  AI can use human-like senses like sight and hearing to help us understand the world around us. It will become part of self-driving cars, security cameras, and facial recognition systems, and it will enable speech recognition and translation. There’s lots of potential for each of those systems to be abused by bad actors and poor choices, but that’s not AI’s fault. Using AI to enhance them is a boon, not the problem.

I want you to remember those three uses of AI! Every time you see something critical of AI, your mind should insert a footnote that says: “Sure, AI screwed up (fill in the blank), but that’s just what it does with words; it’s not the science-y stuff.”

Because it’s the next one that’s getting all the attention.

Content generation  AI can generate words, images, video, and programming code by learning from existing data. It summarizes anything and everything; it writes articles, composes music, creates images and videos, drafts emails, generates marketing copy, drafts legal briefs, and writes programs.

This is what you’re doing with AI. It’s the customer-facing side of ChatGPT and Google Gemini and Microsoft Copilot. It’s what employees use to write memos and meeting summaries and business plans.

And it’s the one that’s screwing up the internet and being misused and getting all the headlines. There are a lot of stories about hallucinations and errors in work done by AI.

These are early days and a lot of work is being done to improve AI technology. There’s nothing new about living through clumsy tech while its problems are worked out. Things change.

But you’re right to be skeptical of what you get from AI today. In this area, it is not ready to replace humans. It serves up results that are frequently good enough, but seldom excellent – and sometimes filled with falsehoods. And it is being promoted by huge companies and tech billionaires who are increasingly predatory and unscrupulous.

That’s the lens that you should be using when you read stories about AI failures. We are constantly encouraged to lean on AI to save time and money. It can do that with careful supervision.

But AI is not generating content that can replace humans. It is not yet operating at a comparable level.

Some of the AI mistakes in the news are forgivable, the result of people overestimating what AI can do.

A lot of AI news is about bad people using AI to mislead, and lazy people who don’t care whether their AI output is accurate or not.

Blame the people, not the technology.

There are two examples in the news – lawyers and Trumpist moronicans.

Lawyers  You’ve probably seen stories about lawyers submitting AI-generated briefs with botched citations. There are more of them than you thought. One lawyer is compiling a database of legal briefs that have been caught with fake AI citations – 149 cases identified as of June 8, 2025, more than 20 in the last month alone. On June 9, England’s High Court warned lawyers that they could face criminal prosecution if they present AI-generated false material.

Let’s be generous. Writing briefs takes hours and days of hard work. It is drastically easier to cut and paste AI responses from ChatGPT. To the lawyer, it feels just like cutting and pasting from old briefs.

But it’s not the same. The old briefs at least referred to real court cases. The output from ChatGPT looks good, but no human eyes have reviewed it.

So there are two possible reasons that lawyers are being caught using AI-generated fake cases.

One is that they’re too lazy to check their work. You can sweeten that up with talk of stress and time pressure but the effect is the same. It’s up to them (it’s up to you) to check AI output for accuracy.

The other reason that lawyers are submitting briefs filled with AI hallucinations is that they’re too cheap to pay for a Thomson Reuters subscription. Thomson Reuters and the other companies making law tech – which is expensive! – are incorporating AI tools whose results are grounded in real cases.

The MAHA health report  On May 22, the MAHA Commission released its highly publicized report on childhood diseases, a 78-page screed citing more than 500 studies and other sources. The press release called it a “landmark report” that will be used to create new US health policy based on its “gold-standard scientific research.”

The report is a “case study in generative AI red flags.” It was assembled by unserious people using AI tools to save time on a product they care nothing about. It contains bogus citations, titles of papers that don’t exist, and distorted summaries of articles that do exist but do not stand for what is claimed in the report. Much of the report bears the hallmarks of AI-generated prose, and the Washington Post reported that some citations specifically included “oaicite” in the URLs, a distinctive marker of ChatGPT output.

When asked about the report, White House Press Secretary Karoline Leavitt attributed the errors to “formatting issues.”

These are not AI errors. This is the work of evil people with an agenda who cannot even be troubled to make their work product look plausible at a surface level. It matches their attitude toward science and medicine in general, which is to just make things up. They don’t care if AI does the hallucinating; that saves them the trouble of doing it themselves.

Blame the people, not the technology.

And when you see criticism of AI for its inherent limitations, lay some of the blame at the feet of predatory tech companies and tech billionaires driven by avarice and a desire to dominate the emerging markets. Their relentless promotion stretches beyond fair and ethical bounds: they overstate AI’s current capabilities, downplay its known weaknesses, and frame every minor breakthrough as a leap toward artificial general intelligence, misleading the public and investors alike. That aggressive marketing isn’t merely optimistic; it’s a calculated strategy to generate hype, attract massive investment, and secure market share, even if it means users encounter frustratingly imperfect systems or, worse, face real-world harm from inadequately tested deployments.

Skepticism is called for. Brian Chen wrote an article for the New York Times recently about errors in his search results with Google AI Search Mode. He urged everyone to use AI with caution. He’s right! AI is brilliant and helpful, but it’s a tool, not a replacement for doing your own work.
