Last week Anthropic revealed that Chinese state-linked hackers had used Claude models to automate a sweeping cyber-espionage campaign. The hack targeted roughly 30 major companies and government agencies, and it succeeded against at least several of the corporate targets. The headlines made it sound like the AI had gone full rogue, launching its own operation, identifying targets and executing attacks nearly autonomously.
The journalistic focus was on how fast and independently agentic AI operates, which makes these kinds of hacks far more damaging than traditional cyberattacks. “The first AI-run spy mission,” at least one headline claimed. While this line of reporting is true – AI is getting more powerful and more accessible to bad actors – I think it is important not to get so caught up in the hyperbole that we forget: The machines didn’t decide to hack anyone. People did.
Humans still picked the targets. Humans wrote the root prompts. Humans directed the strategy. The AI just did the work cheaply, tirelessly and at a scale that would have required massive teams only a few years ago. This is the real existential problem. Not runaway intelligence, but runaway access.
What does the hack tell us?
Anthropic disclosed that attackers used Claude and Claude Code to automate 80–90% of the operational work in a September campaign targeting more than two dozen organizations across technology, finance, chemicals and government. AI handled reconnaissance, wrote custom malware, generated phishing lures and processed stolen data, leaving only the strategic direction to human operators.
This wasn’t the first case.
Back in August, Anthropic reported a different group using similar models for data theft and extortion across at least 17 organizations. These aren’t coincidences. They’re trendlines.
Agentic AI is turning cyber operations into a kind of knowledge-work assembly line. And while states like China are the early adopters, they won’t be the last. Because once the tools exist, they don’t stay exclusive for long. And if you don’t have the tools, increasingly you can simply ask AI to build them for you, as the attackers had Claude do in this most recent hack. The process was something like this:
Human: Claude, research these organizations and look for vulnerabilities
Human: Claude, write malicious code to exploit those vulnerabilities
Human: Claude, execute the code, steal and store data, spear-phish targets, and so on
Yes, agents did the lion’s share of the heavy lifting, but humans were the conductor.
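To make that conductor pattern concrete, here is a minimal sketch of single-operator orchestration in Python. It is deliberately benign and entirely hypothetical: run_agent() is a stand-in for whatever model API an operator might call, and the tasks are harmless placeholders. The point is how little human effort sits on top of all that automated labor.

from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Hypothetical stand-in for a real agent/model API call. A real
    # agent would plan, use tools and iterate until the task is done.
    return f"[agent output for: {task}]"

# The human "conductor" contributes only high-level direction...
tasks = [
    "Research public filings for these organizations and summarize them",
    "Draft a report from yesterday's scan logs",
    "Cross-reference both summaries and flag inconsistencies",
]

# ...while agents supply the expertise, labor, speed and coordination,
# running in parallel, around the clock, without fatigue.
with ThreadPoolExecutor() as pool:
    for result in pool.map(run_agent, tasks):
        print(result)

Swap the placeholder for a real agent endpoint and those same dozen lines could direct an arbitrarily large workforce. That asymmetry is the subject of the next section.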
Agentic AI collapses the cost curve
To understand why this matters, picture the cost of a cyberattack 10 years ago. You needed specialized skills, teams of engineers, access to infrastructure or a well-funded intelligence service behind you. It isn’t a huge oversimplification to claim that today you only need a smartphone, an internet connection and a model with a task executor.
Agentic AI annihilates fundamental costs and barriers that historically required significant investment and coordination:
● Cost of Expertise: models provide the technical know-how. Intent can be spoken into code.
● Cost of Labor: agents run 24/7 without fatigue. Large teams can now be a single conductor.
● Cost of Speed: agentic hacks operate at the speed of light through fiber optics and electrons across semiconductor junctions.
● Cost of Coordination: automated workflows execute complex multi-step tasks with precise synchronicity.
The threat is no longer defined by who has the ability to cause harm. It’s becoming more defined by who has the desire.
We’ve already seen early examples. AI-generated malware kits are circulating on Telegram. Fully automated fraud rings exist. People are executing synthetic extortion operations powered by voice clones and deepfakes. Researchers have even raised red flags over reports of LLM-driven biological-protocol generation tools.
The existential threat is super-empowerment
A lot of smart people are worried about AI evolving beyond our control. But in my view, the more urgent risk isn’t autonomous superintelligence, it’s the growing democratization of powerful tools.
A violent AI is not the threat. A violent human with an agent is.
I know this sounds a bit like the Second Amendment argument that “guns don’t kill people, people kill people.” But the difference here is that AI is a general-purpose tool that can benefit every sector of society and every aspect of our lives and economy. Guns have no analogous cost-benefit case: advocates of partially or fully banning guns can plausibly argue that the societal benefits outweigh the costs. No one could make that case for banning AI; the opportunity cost would dwarf any safety gain.
So if we agree that the use of AI, and the accelerating growth of this technology’s power, should not (and cannot) be stopped, then what is our best strategy for handling the existential risk posed by access to the technology?
What used to require a nation-state can now be done by a handful of individuals—or even one very determined person. This is the first time in history when a small, unhappy, technologically literate group could inflict nation-scale disruption without access to weapons, armies or sophisticated labs. And as inequality widens, polarization deepens and social trust erodes, the number of disaffected individuals increases.
Agentic AI is becoming the ultimate tool of asymmetric retribution.
The futility of building ever-bigger AI firewalls
We are ultimately facing an inversion of the security paradigm. Historically, to prevent mass harm we have either limited access to weapons of mass destruction (nuclear materials, chemical weapons and the like) or built fortifications (physical walls, firewalls, air-gapped systems and so on).
We’ve already concluded that access to AI is so democratized that we can’t opt for the “restrict access” option. So the world has focused on fortifications. This has been a hamster-wheel strategy since the origin of the computer: someone creates a virus, someone else codes an antivirus; email spam appears, anti-spam tools follow. And so the wheel turns in ever-increasing, Cold War-style escalation.
The New York Times reported last week that economist Charles Jones at Stanford recently modeled how much societies should invest in AI risk mitigation. His conclusion: at least 1% of GDP every year, which in the U.S. is more than $300 billion. In many of his scenarios, the rational figure jumps to 8%—a number he himself labeled “stunning.”
For context, $300B represents 30 times the National Science Foundation’s annual budget.
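As a rough sanity check on those numbers (assuming U.S. GDP of about $30 trillion and an NSF budget of about $9 billion – my estimates, not figures from Jones’ paper):

\[ 0.01 \times \$30\,\mathrm{T} \approx \$300\,\mathrm{B}, \qquad \frac{\$300\,\mathrm{B}}{\$9\,\mathrm{B}} \approx 33 \]

which is consistent with the 1% and roughly-30x figures above.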
Yet global AI-risk spending today is only about 0.03% of Jones’ recommendation. And I just pointed out our historical record. Were we to follow this path, we would enter an unending arms race:
more powerful models → more powerful defensive models → attackers iterate → defenders iterate → spending skyrockets.
This strategy ultimately fails. You can’t firewall your way out. If we want to solve the AI hacker problem, we need to address its root cause. Technology has never been the root cause of conflict and war; humans and their discontent have.
If the threat is human malice, the defense must be human flourishing
National security officials tend to think in terms of software, treaties and defense budgets. But anyone who has studied the roots of conflict knows: people don’t destabilize society because they wake up one day with an LLM. They destabilize society because they are desperate, angry, marginalized or convinced they have nothing left to lose.
In an age where agentic AI gives extraordinary power to ordinary people, our best defense is not thicker armor, it’s fewer enemies. We need a modern reimagining of our national defense with a much stronger focus on the kinds of institutions that actually reduce violence:
● USAID, which lowers conflict by raising development
● The Peace Corps, which builds empathy across cultures
● Community and public service programs that give people purpose, connection and dignity
● The Small Business Administration, which helps build local economies and generational wealth through small business ownership
● Entrepreneurial support organizations that help people become self-sufficient and community-minded
● Open-source initiatives that build technology openly and freely
● Educational institutions that teach history and the humanities with as much importance as they teach STEM and economics
And above all: we need to tackle inequality head-on.
This is not just a moral imperative; it should be the keystone objective of our national defense. In a world where people feel financially safe and secure, there are few reasons to act from a place of desperation. The stress of living paycheck to paycheck and subsidy to subsidy can be replaced with the mental capacity to get to know neighbors and talk through differences peacefully.
The strongest defense will never come from technology or economic superiority used as a tool to hold others down. It must come from equality – a core principle frequently stated by modern societies, but not always demonstrated by our actions or free-market systems. At a personal level, this is why we focus on entrepreneurship at RIoT, particularly in rural areas where there have not always been resources to teach people to take advantage of technology advances like AI.
A world where wealth keeps concentrating at the top is a world where resentment festers. And in a world where resentment festers, the existence of agentic AI ensures that even a few disaffected individuals can cause truly catastrophic harm. The most powerful safety protocol we have is reducing the number of people who feel abandoned.
What companies, governments and local communities must do now
As I argue above, people are still in charge. We still have the power to shape a future that is better for all. At this moment, we should stop treating AI as the root cause of our fears and problems and start treating it as an accelerant for equitable solutions.
● Policymakers should balance algorithmic safeguards with investments in economic mobility, education, healthcare, and civic infrastructure.
● Tech companies must design agents with deliberate friction for high-risk behaviors and work on misuse deterrence, not just alignment.
● Local governments and businesses should treat social cohesion and opportunity creation as core components of cybersecurity, not merely as social services.
Our defenses must be sociotechnical, not merely technical.
Machines don’t decide our fate. We do
The more I study these agentic AI incidents, the more convinced I become that we’re approaching a historic fork in the road. One path is a world where AI amplifies the frustration of the marginalized, multiplying grievances into global-scale threats. The other is a world where AI amplifies human potential because we’ve built a society stable enough, hopeful enough and equitable enough that people choose to build rather than destroy.
Let’s put AI agents to work finding solutions to inequality. Use the technology to redistribute wealth and opportunity broadly. This will require significant changes in our society, but we have AI agents to do that heavy work. Remember, agentic AI will not determine which path we take. Humans will, through the choices we make about each other.
If we want to thrive in the AI and Data Economy, the answer isn’t more robust fortifications. It’s stronger communities.