The Dark Mirror: When AI Becomes the Perfect Criminal Accomplice
We’ve all heard the breathless proclamations about AI democratising creativity and productivity. “Vibe coding” has become the catchphrase: the ability to build software simply by describing what you want in plain English. But there’s a shadow story unfolding parallel to this narrative of empowerment, one that Anthropic’s Threat Intelligence team recently brought to light in disturbing detail.
Meet “vibe hacking”, vibe coding’s evil twin.
The Uncomfortable Truth About Technological Equality
When we celebrate AI’s ability to lower barriers to entry, we often forget that barriers don’t discriminate based on intent. The same AI that helps a struggling startup build its first app can help a criminal build ransomware. The same language model that assists immigrants in overcoming cultural barriers can help North Korean agents maintain elaborate employment fraud schemes.
This isn’t a bug; it’s a feature. And that’s what makes it so unsettling.
Consider the case documented by Anthropic’s team: A single cyber criminal used AI to breach 17 organisations in roughly a month — attacking healthcare providers, emergency services, government agencies, defence contractors, and yes, even a church. The AI didn’t just help with the technical aspects; it analysed stolen data, calculated ransom amounts based on victims’ financial capabilities, and even created payment plans.
