Hello Reader,
This has been a packed week with an ongoing project, and I didn't manage to write as much as I wanted to. Things should ease by next week, so stay tuned.
Today, I want to talk about why I think we are going to need far more data centres than ever before. But I won't jump straight in. Instead, I'll start with the state of cybersecurity and how AI is already reshaping work.
The uproar over Claude Mythos
As part of my work for GovWare, I've had the privilege of speaking with cybersecurity experts over the years. One assertion that I'm hearing more often is how we can only fight AI with AI. The reasoning is straightforward: humans can't respond fast enough.
This was confirmed this week with the release of Claude Mythos Preview. In a surprise twist, Anthropic, citing the exceptional ability of its latest AI model to discover and exploit high-severity software vulnerabilities, made it available only to a select group of tech firms and organisations.
So what is the big deal with Mythos? The model is massive, with reports estimating it at around 10 trillion parameters, placing it significantly ahead of Claude Opus 4.6 at up to 2 trillion. And it is far too good at finding and exploiting software vulnerabilities.
According to the Red Team at Anthropic, Mythos was set loose on real open-source projects. The model read the source code, formed hypotheses about where vulnerabilities might lurk, ran the software, and used debuggers to confirm its theories. Once verified, it went on to develop working exploits.
"Non-experts can also leverage Mythos Preview to find and exploit sophisticated vulnerabilities. Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit."
The list of discoveries is sobering. A 27-year-old bug in OpenBSD in a specialised part of the networking stack (TCP SACK). A 17-year-old remote code execution flaw in the FreeBSD NFS server that allows unauthenticated root access using a 20-gadget ROP chain split across six sequential packets. And many more.
I get goosebumps reading it.
- OpenBSD, first released in 1996, is widely considered one of the most security-focused operating systems, a reputation earned through its security-first design philosophy and strong networking heritage from day one.
- And a Return-Oriented Programming (ROP) chain is an exploitation technique that reuses small chunks of existing code in memory, called gadgets. A 20-gadget chain strings together 20 such sequences, and splitting it across six sequential packets means the attack is delivered in separate pieces over the network to evade defences. In short, it is a non-trivial, sophisticated exploit.
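To make the idea concrete, here is a harmless sketch of what the *shape* of such a payload looks like. Everything in it is invented for illustration (the addresses are made up, and nothing here exploits anything); it only shows that a ROP chain is essentially a list of gadget addresses packed into bytes, which an attacker can then slice across several packets.

```python
import struct

# Hypothetical gadget addresses, purely for illustration.
# A real chain would point at short instruction sequences ending in `ret`.
gadgets = [0x080484B0 + i * 0x10 for i in range(20)]  # a 20-gadget chain

# The payload is just those addresses laid out back-to-back where the saved
# return address would sit; the CPU "returns" through each gadget in turn.
payload = b"".join(struct.pack("<I", g) for g in gadgets)  # 20 * 4 = 80 bytes

# Splitting the payload across six sequential packets, as described above.
chunk = len(payload) // 6 + 1
packets = [payload[i:i + chunk] for i in range(0, len(payload), chunk)]
print(f"{len(gadgets)} gadgets, {len(payload)} bytes, {len(packets)} packets")
```

The point of the split is that no single packet contains the full payload, so signature-based defences inspecting packets individually have less to match on.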
It is not just about finding vulnerabilities. Sophisticated software exploits that traditionally would have taken weeks or months to develop can now be created the very same day with Mythos.
Of course, we need to be clear-eyed that Anthropic does have vested interests in telling the story a certain way. But the point remains: AI capabilities are outpacing the ability of humans to react. The logical next step is to dramatically ramp up the use of AI by cyber defenders. There is simply no other option.
From hospitals to comms teams
But cybersecurity is just one front. At Huawei Singapore TechWeek on Friday, I was surprised by where hospitals are actually using AI today. It's largely in operations rather than clinical care, covering areas such as documentation, workflow optimisation, and scheduling. But it's there, and it will eventually make its way to the bedside.
As a slight aside, the session I found most fascinating was by Prof Gao Yujia of NUHS. His team has developed practical HoloLens applications for use in surgery. Prof Gao also demonstrated how the headset's built-in infrared camera, paired with a GPU-powered convolutional neural network running elsewhere on the network, can identify veins beneath the skin in real time.
The final story I'll share is that of a corporate communications professional who leaned hard into AI after being challenged by her boss. According to Melanie Pasch, her team of two now operates like a team of 10. How? Claude Code.
What did she use it for? I subscribed to the Free Press just to find out. The first tool she created was an automated intelligence briefing system that scans the news each morning across specific topics, ranks each story by urgency, suggests strategic plays, and delivers it to her before she starts work.
Melanie is also working on tools to track media coverage against benchmarks, monitor her company's rank in AI-generated search results, and manage approval workflows while flagging bottlenecks. This from someone who has "never written a line of code in my life."
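The core of a briefing tool like the one described is surprisingly small. Below is a minimal sketch of the ranking step, with hypothetical topics, keywords, and headlines of my own invention (the actual tool's sources and scoring are not public): filter headlines to watched topics, score each by urgency keywords, and sort the result.

```python
# Hypothetical urgency weights and watched topics -- illustrative only.
URGENCY_KEYWORDS = {"breach": 3, "lawsuit": 2, "acquisition": 2, "launch": 1}
WATCHED_TOPICS = {"security", "ai"}

# Stand-in for headlines fetched from news feeds each morning.
headlines = [
    {"title": "Rival discloses breach affecting customers", "topic": "security"},
    {"title": "Startup launch targets our core market", "topic": "ai"},
    {"title": "Quarterly earnings preview", "topic": "finance"},
]

def urgency(title: str) -> int:
    """Sum the weights of every urgency keyword found in the title."""
    lowered = title.lower()
    return sum(w for kw, w in URGENCY_KEYWORDS.items() if kw in lowered)

# Keep only watched topics, most urgent stories first.
briefing = sorted(
    (h for h in headlines if h["topic"] in WATCHED_TOPICS),
    key=lambda h: urgency(h["title"]),
    reverse=True,
)
for story in briefing:
    print(f"[{urgency(story['title'])}] {story['title']}")
```

A production version would pull real feeds and hand ranking to an AI model rather than a keyword table, but the pipeline shape, fetch, filter, rank, deliver, is the same.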
The demand is only starting
So where does all of this lead? First, Mythos clearly demonstrates that frontier AI models continue to get substantially better. There is no sign of capabilities tapering off, which means the next model currently being trained will be even larger and demand more GPUs.
From cybersecurity to healthcare to everyday work, AI use is increasing dramatically across the board. And as more non-technical users like Melanie discover how AI has collapsed traditional barriers, expect usage to accelerate as they build AI-powered tools of their own.
What is more striking is that while over a billion people use standalone AI platforms every month, just 3 to 5% pay. I'd argue that paying users use substantially more AI than those on free tiers. Imagine what happens when that percentage inches upwards.
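The back-of-envelope numbers make the point, using only the figures from the text (roughly a billion monthly users, 3 to 5% paying); the higher rates are hypothetical:

```python
# Rough arithmetic on paid conversion of standalone AI platforms.
monthly_users = 1_000_000_000  # "over a billion", per the text

for pay_rate in (0.03, 0.05, 0.08):  # 8% is a hypothetical upside case
    paying = int(monthly_users * pay_rate)
    print(f"{pay_rate:.0%} conversion -> {paying:,} paying users")
```

Each percentage point of conversion is another ten million heavy users, and the compute to serve them.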
As recently as last year, I often expressed scepticism that the surge in data centre construction was sustainable. But looking at the evidence of more capable models and surging AI use, I've changed my mind. The question is no longer whether we need more data centres. It's where they'll all go.
As usual, you can reply to this email to reach me.
Regards,
Paul Mah