AI transitions will take years. Here's why [#86]


FEBRUARY 22, 2026

Tech Stories

In this issue #86

The decade-long making of Johor's data centre boom


Claude Code lets man break into thousands of robot vacuums


Adani eyes 5GW of data centres


and more...

Hello Reader,

I took a few days off from writing this week for Chinese New Year, partly because I figured most of us would be either taking time off or busy calling on family and friends.

It was somewhat disconcerting to see the traffic thin out so visibly in Singapore, even on Thursday and Friday, which weren't public holidays. I expect things to get back to normal by tomorrow, though. I'm raring to write again.

Today, I'll talk about how I've decided to treat AI-generated content, why the "AI transition" could take a decade or more, and my renewed focus on covering the digital infrastructure surge and science behind AI.

How I'm treating AI-generated content

I'm not sure if you've noticed, but a growing amount of content online is now significantly longer than it used to be. Take this pitch for a new marketing-centric service offering, which rambles on for over 2,600 words. As you might guess, it scores as fully AI-generated with a "High" level of confidence on Pangram.

In fact, it was so lengthy and repetitive that even I gave up midway through. And I'm the strange chap who would rather read for the entirety of a 10-hour flight than watch movies. Instead, I got ChatGPT to generate a 250-word summary, which I skimmed in less than a minute.

Why is it so drawn out? ChatGPT suggests it's a positioning play: length signals authority and depth. It helps, too, that LinkedIn seems to reward long form, especially pieces running over a couple of thousand words. Both readers and the algorithm view it as a thesis, even if the former never finish reading it and the latter don't actually understand it.

I blame AI for making it trivial to churn out such meandering content, which by its very nature shifts the cognitive load onto readers.

And that is one more reason I've decided to stop engaging with AI-generated content on LinkedIn. I won't go as far as blocking, because truth be told, it's simply too prevalent. But I'll be muting such posts or unfollowing the most egregious offenders to spare myself the very real frustration I experience reading them.

Expect years of transitions

In "We're all thinking about AI wrong" last month, I wrote about how AI continues to befuddle us with its jagged brilliance, where it can be phenomenal at some things yet absolutely dismal at others. This can happen even within a single field. It can craft great headlines or passable press releases, for instance, but struggle to put together a good opinion piece.

Today, I'd suggest that as AI capabilities continue to evolve, they will throw an increasing number of industries into disarray. My current hypothesis is that we can expect a protracted period of transitions, even if AI stops getting better today. Spoiler: it's still getting better.

Let me highlight a couple of examples within cybersecurity.

This week, a man tinkering with Claude Code to hack his own DJI Romo robot vacuum (he wanted to control it with his PS5 controller) stumbled upon a serious security flaw. How bad? The bug gave him control over thousands of other people's robot vacuums using nothing more than his own login credentials.

Think about it. If even a commercial product from a well-known manufacturer could contain such security oversights, what of vibe-coded apps created by non-technical users with little regard for best practices? That's a perennial concern of mine - the average code out there is already insecure, and it's being used to train AI systems. Are we heading towards a vibe-coded security apocalypse?
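To be clear, the exact details of the DJI flaw aren't spelled out here, but "control over thousands of other devices with only your own login" is the textbook symptom of broken object-level authorization, precisely the kind of check vibe-coded apps tend to skip. Here's a minimal, hypothetical sketch of that pattern; the endpoint names and data model are invented for illustration and are not the actual DJI or Claude Code interfaces.

```python
# Hypothetical sketch of broken object-level authorization (not the real DJI API).
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy data: which user owns which device.
DEVICE_OWNERS = {"vac-001": "alice", "vac-002": "bob"}


def current_user() -> str:
    # Stand-in for real authentication; assume the header is verified upstream.
    return request.headers.get("X-User", "")


@app.post("/v1/devices/<device_id>/command")  # VULNERABLE version
def send_command_vulnerable(device_id: str):
    # The caller is logged in, but nobody checks whether the device is theirs.
    # Any authenticated user can steer any device_id they can guess or enumerate.
    return jsonify({"device": device_id, "command": request.json.get("command")})


@app.post("/v2/devices/<device_id>/command")  # FIXED version
def send_command_fixed(device_id: str):
    # Object-level authorization: reject commands for devices the caller doesn't own.
    if DEVICE_OWNERS.get(device_id) != current_user():
        abort(403)
    return jsonify({"device": device_id, "command": request.json.get("command")})
```

The fix is boring: check ownership of every object a request touches, not just whether the caller is logged in. That boring check is exactly what gets lost when nobody technical reviews the generated code.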

Stop all AI use in code, then? It isn't that simple. Jagged edges, remember? Just yesterday, Anthropic unveiled Claude Code Security, a new capability built into Claude Code for the web that scans codebases for vulnerabilities and suggests targeted patches.

Anthropic said its internal team has already found over 500 vulnerabilities in production open-source codebases using its latest AI model, Opus 4.6. These are bugs in mature, well-established products that had gone undetected for decades, overlooked despite multiple rounds of expert review.

Of course, cyber stocks promptly tumbled after the announcement. If there's one certainty, it's that we are still far, far away from understanding what AI will ultimately mean for cybersecurity. And if that's true of cybersecurity, it's true of just about every other industry AI touches.

Of data centres, the energy transition, and AI

When I first set up Clearly Tech, I envisioned it as a place for deeper content that's too long or dense for my LinkedIn posts. But that's led to some confusion: what exactly should readers expect from it?

This week, things finally clicked in my mind.

I have loads of notes, recordings, and photos going back years that I've never been able to fit into LinkedIn posts or short blogs. There's a lot more I can do to plumb the depths of the race to build the foundations of AI both in Asia and around the world.

So here's my plan: Clearly Tech will track the infrastructure, energy transition, and research powering AI. That means more data centres, everything related to sustainability, and the top developments and debates in the space. And because I can't stand regurgitated content, expect my personal insights, behind-the-scenes takes, and the practical realities on the ground.

The audience I'll be writing for? Data centre insiders and the broader ecosystem around them, investors, and those who want to know how everything fits together, minus the hype and hyperbole.

If you haven't signed up yet, you can do so here. For those of you who are already supporting me on Substack - a big thank you. You know who you are.

Would love to hear your thoughts - just hit reply to this email.

Regards,
Paul Mah

Content collaboration
Love the way I write? I'm opening up for more content collaborations. If you are looking for event coverage, brand storytelling, or a way to share your unique perspectives with a broader audience, fill in the form here and I'll get in touch.

Tech Stories on WhatsApp

Join 1,350 others on my WhatsApp Channel. Nobody sees your number.

Link

Snapshots

Spotlight

The decade-long making of Johor's data centre boom

Old brochures from 2015 show the groundwork was laid long before the hyperscalers arrived.

Unfiltered

A man used Claude Code to hack thousands of robot vacuums by accident

A Claude Code side project accidentally broke into thousands of DJI devices. The implications go further.

Recent News

Adani wants 5GW of AI data centres in India

The conglomerate is betting big on AI data centres, renewable energy, and an end-to-end ecosystem.

Get this for yourself

Did a friend forward this digest to you? Subscribe to receive your own copy every week.

Join my Substack

Do a Content Partnership

7 Temasek Boulevard, #12-07, Suntec Tower One, Singapore, 038987
Unsubscribe · Preferences