You're Media Illiterate, And It's Hurting The Codebase


Software leadership keeps falling hook, line, and sinker for media hype around AI. Will basic media literacy help?


When Anthropic unveiled Claude Mythos earlier this year, the articles covering it carried a gravitas usually reserved for military weaponry.

Wired said the model was being heralded as a “hacker’s superweapon,” and The Washington Post said it was sparking a “global scramble” among governments to lock down their digital infrastructure. The Times said that new AI releases “are beginning to function less like product launches and more like weapons tests, and most nations want to understand how the technologies work and what protections are needed.”

Anthropic has fed those narratives with its own cloak-and-dagger hype, dubbing “Project Glasswing” a fundamental leap forward in AI capabilities.

As the “responsible” AI provider, it claims that Mythos is too dangerous for general release. Without a real model to examine or test, where does that leave the general public? Are we to take these media reports and product announcements at face value?

Media Literacy

Before I was a software engineer, I worked as a national security reporter under the Trump and Biden administrations.

I’m not going to pretend to fully understand the deeply complex work that goes into building and training the world’s best AI models. But I do understand the media fairly well. Most leaders in the software industry don’t, yet they mistake their technical expertise for a substitute for media literacy. This has led many to vastly undervalue the importance of hiring and retaining top talent, in the mistaken belief that AI will soon replace us all, effectively kneecapping their companies’ ability to build and scale reliable and secure products.

We may not fully understand every product release coming from AI companies right away. That’s okay. But software engineers, and particularly those in management, owe it to themselves and their colleagues to view AI announcements with a healthy amount of skepticism. We need to understand, fundamentally, that someone is always trying to sell you something.

On the coverage side, the fact of the matter is that most reporters are also encouraged to overhype their own stories. During my time in Washington, it was transparent that reporters with large Twitter followings often got them by clipping the most outrageous parts of otherwise boring committee hearings, not through thoughtful and deliberate policy coverage. Those reporters went on to receive promotions, TV spots, and other coveted accolades.

I’m not anti-media. I went into journalism with a genuine interest in improving the world’s understanding of current events, and I believe the vast majority of reporters enter the field out of the same commitment to public service. Sadly, the structure of the media landscape does not reward this type of work.

The truth is, you cannot accurately report on something like Mythos through hearsay and word of mouth. You need access to the underlying model to test it, and to speak with companies that have used it. Experience actually working in cyber or as a software engineer would also help. We don’t have that, so instead, we’re left with Anthropic’s marketing strategy.

Someone’s Always Trying to Sell You Something

Anthropic and other AI companies have a vested interest in sensationalizing their own products, but their “safety first” reputation has caused many to look past this. They’re warning us about Mythos, you see, because they’re so concerned about the fate of humanity, not because they stand to make billions off an upcoming IPO. Anthropic CEO Dario Amodei is notorious for exaggerating claims around AI.

“I think we will be there in three to six months, where AI is writing 90% of the code,” he said at a Council on Foreign Relations event in March of 2025, over a year ago. “And then, in 12 months, we may be in a world where AI is writing essentially all of the code.”

Anyone who works in this profession knew that was bullshit at the time. In January of 2026, less than a year later, he made essentially the same prediction, but with a new timeline: “We might be 6 to 12 months away from when the model is doing most, maybe all, of what SWEs do end to end.”

Tech CEOs overhyping their own products isn’t a new phenomenon. And yet, for some reason, software leaders tend to accept media reports on these predictions as facts in themselves, rather than as vapor designed to boost the company’s reputation.

The reason many of these bad-faith arguments are hard to dispute is that they’re predictive. You can’t prove that he’s wrong about the future; it’s only obvious in hindsight. There’s also a kernel of truth in what Amodei is saying: in a well-structured codebase, with proper guardrails and good patterns, AI can genuinely take over a large part of the hands-on-keyboard work. I’ve adopted AI heavily into my own workflows, and it does save time when knocking out syntax for a solution.

But whether deliberately or not, Amodei and others miscommunicate what actually makes writing software hard. The act of physically writing out code is not the bottleneck when building software. If it were, fresh college grads would be hyping their WPM and Vim adoption as massive productivity boosts too.

What actually makes building software difficult is the time spent figuring out what to build, iterating on the design, fixing bugs, refactoring, redesigning, speaking and iterating with customers, and so forth. But that won’t play on Twitter, hence: “We’re 6 months away from Armageddon.”

Asking the Right Questions

People who work in software are often good at solving very hard problems. That competence creates a blind spot.

Someone who spent a decade working at Google as a distributed systems engineer has no more media training than the average American and is likely just as ill-equipped to understand how narratives around AI are constructed, incentivized, and amplified. Reporters rely heavily on access journalism, executive framing, and selectively curated demos because they often lack the technical grounding needed to independently evaluate companies’ claims. Software engineers don’t intuitively understand this.

In my opinion, computer science programs should probably include at least some basic instruction in how incentives shape media coverage, how access journalism works, how headlines differ from underlying reporting, and how to distinguish marketing claims from independently verifiable evidence. Education around how conspiracism works and how it’s exploded in the United States would probably be useful too, and not just for software graduates. Right now, the computer science major is purely technical, focused on operating systems, networking, and the like.

But managers also need to do better. At the very least, they need to ask themselves: Who is making this announcement, and do they have a financial or public-image interest in overhyping the news? Is this a primary source, or is this secondhand from someone who doesn’t have access to the underlying technology? Were these results replicated outside of a demo environment? Do engineers actually using these tools in production have anything to say?

To pretend that technical expertise can substitute for basic media literacy is arrogant and fundamentally disrespects the hard work of engineers who know that the AI apocalypse is not six months away, no matter what Amodei says next week.