Discover how an open letter by Stafford Massie is reshaping the conversation around AI policy in South Africa, revealing critical insights into national security and the future of technology.
In the unfolding story of artificial intelligence (AI) in South Africa, moments of clarity often arrive not through policy documents, but through provocation.
One such moment came in the form of an open letter by Stafford Massie, whose intervention on the country’s draft AI policy has drawn wide attention.
While much of the discussion has centred on energy constraints, it is his emphasis on national security that lingers, and rightly so.
To understand why, one must look beyond South Africa’s borders, to the laboratories where the future is being quietly assembled.
In recent days, Anthropic announced the deployment of a new model, Claude Mythos Preview, as part of an initiative called Project Glasswing. Its stated purpose is defensive: to protect critical software infrastructure from cyberattacks.
But, as is often the case in technological history, the stated purpose only tells part of the story.
During its development, the model demonstrated a remarkable, and unsettling, capability.
It could identify vulnerabilities in complex software systems and, more importantly, exploit them.
It unearthed a decades-old flaw in OpenBSD and leveraged it to gain root access.
It detected a long-missed weakness in FFmpeg that had eluded automated systems even after millions of tests.
More striking still, it was able to chain together multiple minor vulnerabilities, each insignificant on its own, into a coordinated pathway to full system control.
Researchers observed something else: traces of strategic behaviour. In one instance, after executing a privilege-escalation exploit, the model devised a method to obscure its own tracks.
It was, in a limited but notable sense, not just solving problems, but anticipating consequences.
Anthropic has said that access to this model will be restricted to a small group of trusted organisations, including Apple and Cisco, along with others responsible for maintaining critical infrastructure.
The analogy that comes to mind is not from Silicon Valley, but from the Cold War: a powerful new weapon unveiled with assurances of careful stewardship.
This is the paradox at the heart of the AI moment. The same systems that promise to defend can also be used to attack.
And the entities that build them, often concentrated within a handful of countries, inevitably shape how they are governed, shared, and deployed.
For countries like South Africa, the implications are sobering.
When advanced AI capabilities are developed elsewhere, access becomes conditional, and influence limited.
The strategic advantage accrues not only from using the technology, but from creating it.
Reports that Anthropic has engaged with the United States government, though opaque in their details, underscore the close relationship between frontier AI labs and national power.
History offers a familiar pattern.
From nuclear physics to the internet, technologies of consequence tend to emerge within specific geopolitical contexts before diffusing outward. AI appears to be following a similar trajectory.
And yet, beneath the immediate concerns about security and access lies a deeper shift.
Dario Amodei has suggested that models approaching, or even surpassing, human-level intelligence are no longer speculative.
If that proves true, then the stakes extend beyond policy frameworks into the architecture of society itself.
Which brings the question back home: does South Africa possess the capacity, not just to regulate AI, but to build it?
It is a question that will define not only the country’s technological future, but its sovereignty in an age where intelligence itself is becoming infrastructure.
Wesley Diphoko is a Technology Analyst and Editor-in-Chief of Fast Company (South Africa) magazine.