Hello Reader,
This week, I talk more about Singapore's DC-CFA2, and how a wave of pervasive AI use will erode the ground beneath even those at the frontier of AI.
Foolish or prescient?
Monday was the deadline for the DC-CFA2, which raised the bar on sustainable data centres with an extraordinary requirement for at least 50% renewables. It would be foolhardy to think that aspiring operators would have settled for meeting this bare minimum.
Moreover, the renewables component is but one facet. Submissions will also be evaluated on their economic benefits, how they strengthen Singapore as a data centre hub, the implementation of cutting-edge technologies, and how they spur the adoption of innovations within the sector.
The park sits on Jurong Island - I've marked the exact location in an earlier post. There's also another plot just across the road, boarded up, apparently not part of the DC-CFA2.
When Singapore announced its "Data Centre Park" in 2011, interest was muted and the actual launch was quietly delayed at least twice. Why would operators want to build right next to their competitors within the same industry park? That was the prevailing thinking. Also, have you thought about the risks of building so close together? Pfft.
Today, data centre campuses are a dime a dozen, with adjacent buildings normalised. Why? Because that's the only way to meet the surge in demand for digital infrastructure. When the alternative is being unable to meet that demand, people suddenly stop caring about the old arguments.
Two readings are possible. Is Singapore foolish in putting pressure on its data centre industry at a time when everyone else is throwing open the doors, or prescient in its dogged focus on sustainability?
Moats made of sand
The social contract that held the internet together is collapsing, warns Raffi Krikorian. Raffi is the chief technology officer at Mozilla, and he wrote a guest essay in the New York Times in which he cautioned that "it's the end of the internet as we know it," referencing Claude Mythos.
Yes, I know I talked about Mythos just last week. But it's worth noting how this extremely smart, technically brilliant, MIT-trained technologist thinks about it. And it's bad. His argument: security expertise used to be scarce, which made vulnerabilities hard to find.
Mythos is so good that it can find obscure flaws and craft sophisticated attacks to exploit them. The problem? Most of the internet relies on software built by very small teams of volunteers in their spare time. How will they fix these issues quickly enough, particularly since they don't have access to Mythos themselves?
Even the "normal" frontier AI models are powerful enough that a small but growing group of people are going from 2x to 10x simply by leveraging them in new ways. And yet, even the capabilities gained by these pioneers are built on shifting sands that could erode under their feet.
Some weeks ago, I wrote about an app I built to engage smarter on social media. The app relied on a set of paid APIs from a provider. Earlier this week, I had a wild thought: why not replicate what the API does myself? Twelve hours later, I had most of the features I needed working. Within 24 hours, I decided to rip out all the code referencing the old provider. By the third day, I had more features than the previous provider ever offered.
Today's competitive edge is tomorrow's commodity. The moat you build with AI can be replicated by someone else with AI. So where does that leave any of us?
As usual, you can reply to this email to reach me.
Regards,
Paul Mah