I still remember the simple, almost innocent, chaos of the 90s computer viruses. The Michelangelo virus, a digital time bomb set to go off on the artist’s birthday, was the stuff of playground legend. Then came the early 2000s and the “I LOVE YOU” worm, a masterpiece of social engineering so effective it spread through sheer human curiosity, clogging email servers worldwide. Back then, writing a virus required a certain level of skill, a deep knowledge of assembly or C++, and a mischievous, antisocial streak. The barrier to entry was high. You had to be a coder, a phreaker, a genuine nerd.
(Also see my post on how to write an old-school virus in Python, or read about the history of viruses.)
Today, that barrier is gone. It’s been obliterated by the same technology that’s writing our emails, generating our art, and powering our search engines. The age of the AI-powered cybercriminal is here, and it’s less like the trench-coated hackers of movies like Hackers and more like a bored teenager with a subscription service. What was once a dark art is now a point-and-click adventure, and the implications are more profound than most people realize.
Act I: The Hype Machine and the “Cyber-Attacker Starter Kit”
In mid-2023, the cybersecurity world’s long-held anxieties about weaponized AI became a tangible product. On dark web forums and Telegram channels, a new service appeared: WormGPT.[1, 2] The marketing was brazenly honest, billing it as an alternative to ChatGPT “that lets you do all sorts of illegal stuff”.[3] It was quickly followed by a competitor, FraudGPT, promoted by a user with the wonderfully unsubtle handle “CanadianKingpin12”.[4, 1]
These weren’t just tools; they were commercial products, sold as “cyber-attacker starter kits” with subscription plans ranging from €60 to $200 a month.[5, 2, 6] For a modest fee, anyone could now generate persuasive phishing emails, craft basic malware, and find software vulnerabilities without any ethical guardrails getting in the way.[7, 8] The democratization of cybercrime had begun.
But here’s the ironic part: beneath the sensationalist marketing, the technology was a bit of a lemon. WormGPT wasn’t some revolutionary new AI; it was based on GPT-J, an open-source model from 2021 that was already considered a relic in the fast-moving world of AI.[2, 3] The sellers boasted of their ability to create “fully undetectable malware,” a claim that was met with a healthy dose of skepticism from security researchers and was never actually proven.[5, 3, 9] The real innovation wasn’t in the AI itself, but in the packaging. It was a masterclass in capitalizing on the global AI hype, proving that a market for malicious AI not only existed but was hungry for products.
The intense media spotlight ultimately led to WormGPT’s downfall. Its creator, a 23-year-old programmer, shut down the service and tried to rebrand it as an “ethical” tool – a pivot about as convincing as a wolf claiming to be a vegan.[3] The market was then flooded with low-effort copycats like Evil-GPT and Wolf GPT, many of which were just flimsy wrappers around a jailbroken ChatGPT instance. They’d often give themselves away by defaulting to the standard, “I’m unable to perform illegal activities” message, much to the chagrin of their would-be criminal customers.[4] The underground community grew disillusioned, and the smart money moved on from these bespoke “dark” models to simply getting better at tricking the more powerful, legitimate AIs.
Act II: The Real Danger Isn’t Skynet, It’s a Better Phishing Email
The failure of that first wave of tools doesn’t mean the threat disappeared. It just clarified where the real danger lies. The immediate impact of this technology isn’t about enabling script kiddies to create the next Stuxnet. It’s about the mass improvement and scaling of old, simple attacks.[10]
Social engineering is where these models have found their killer app.
- Phishing Perfected: Remember those phishing emails riddled with spelling errors and awkward grammar? They’re a dying breed. AI can now generate flawless, culturally nuanced, and highly personalized emails at scale.[3, 8, 11] The lure is no longer a poorly written plea from a foreign prince; it’s a perfectly crafted, context-aware message from your boss, your bank, or your IT department.
- Business Email Compromise (BEC) on Steroids: This is where things get truly scary. These tools can analyze a CEO’s public communications – interviews, blog posts, social media – and mimic their writing style with terrifying accuracy.[3, 7] A fraudulent request for a wire transfer no longer just says it’s from the CFO; it sounds exactly like them.
- Malware for the Masses: While the “undetectable” claims were bogus, these tools are perfectly capable of spitting out functional, if basic, malware. A simple prompt can generate a Python keylogger or a small piece of ransomware, which is more than enough to cripple a small business or an individual with lax security.[9, 6]
The strategic shift is profound. The barrier to entry for conducting effective cybercrime has been lowered to the floor.[12, 13, 14] You no longer need to be a hacker; you just need to be a decent project manager for your malicious AI assistant.
Act III: The Nightmare Scenario – Malware with an API Key
If WormGPT was the opening act, then LLM-embedded malware is the headliner we’ve all been dreading. This is the true “dark side” of AI, a paradigm shift that threatens to make much of our current security infrastructure obsolete.
Here’s the concept: instead of a malicious executable file containing hardcoded instructions, this new class of malware is deceptively simple. It might only contain a set of prompts and an API key to a powerful model like GPT-4.[9, 6] When run on a victim’s machine, it doesn’t execute its own code; it calls the LLM and says, “Hey, I’m on a Windows 11 machine with X, Y, and Z security software. Generate a Python script in memory to find all .docx files, encrypt them, and send the keys to this address.”
The malicious payload is generated on the fly, in memory, and can be different every single time.[8, 15] This makes it inherently polymorphic. Traditional antivirus software, which relies on signature-based detection (essentially, a database of “wanted” posters for known malware), is rendered completely useless.[8] You can’t have a wanted poster for a criminal who changes their face every second.
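To make that “wanted poster” problem concrete, here’s a deliberately toy Python sketch (no actual malware involved, just string hashing): two functionally identical payloads that differ by a single throwaway comment produce completely different signatures, which is all a generate-on-every-run attacker needs to slip past a hash database.

```python
import hashlib

# Two *functionally identical* scripts; the second differs only by a comment,
# the kind of trivial variation an LLM introduces on every generation.
payload_v1 = "for f in files: encrypt(f)"
payload_v2 = "for f in files: encrypt(f)  # variant"

def signature(code: str) -> str:
    """Signature-based AV reduced to its essence: a hash of the file contents."""
    return hashlib.sha256(code.encode()).hexdigest()

known_bad = {signature(payload_v1)}  # the "wanted poster" database

print(signature(payload_v1) in known_bad)  # True  -> caught
print(signature(payload_v2) in known_bad)  # False -> walks right past the scanner
```

Scale that one-character tweak up to a model rewriting the entire payload per victim, and the hash database never catches up.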
This isn’t science fiction. The first proofs-of-concept have already been discovered.
- PromptLock: Dubbed the first “AI-powered ransomware,” this Go-based program was found to use a local AI model to generate malicious Lua scripts in real time to exfiltrate and encrypt files.[16, 14]
- MalTerminal: A Windows executable that directly queries the OpenAI GPT-4 API to generate either ransomware code or a reverse shell, giving an attacker remote control.[9]
Tellingly, both of these turned out to be research proofs-of-concept rather than active threats found “in the wild”.[16] This highlights a dangerous acceleration in the research-to-weaponization pipeline. The security community, in its noble effort to demonstrate risks, is inadvertently providing the blueprints for the next generation of attacks.
This forces a fundamental change in defense strategy. We can no longer ask, “Is this file malicious?” We must pivot entirely to behavioral detection and ask, “Is this program’s behavior malicious?”.[8, 15]
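To see what that question looks like in code, here’s a minimal sketch of the behavioral idea, assuming nothing more than a per-entity history of event counts. The numbers, the threshold, and the scenario are illustrative inventions, not taken from any real product:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation that sits more than `threshold` standard deviations
    from this entity's own historical baseline (a toy stand-in for UEBA)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# A process that normally touches a handful of documents per hour...
file_accesses_per_hour = [4, 6, 5, 7, 5, 6, 4]

print(is_anomalous(file_accesses_per_hour, 6))     # False: business as usual
print(is_anomalous(file_accesses_per_hour, 4200))  # True: looks like mass encryption
```

The point isn’t the statistics, which real systems make far more sophisticated; it’s that the verdict depends on what the program does, not on what its file hash is.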
Act IV: The Inevitable Arms Race
This new reality has kicked off a classic technological arms race, pitting offensive AI against defensive AI. It’s a battle between AI’s ability to create novelty and its ability to recognize patterns.[17]
On the offensive side, AI gives attackers unprecedented speed, scale, and adaptability.[18, 19] They can generate polymorphic malware, automate reconnaissance, and even create adaptive threats that change tactics mid-attack.
On the defensive side, we’re fighting AI with AI.
- Behavioral Analytics (UEBA): Defensive AI models create a baseline of “normal” activity on a network. They learn what’s typical for each user and device. When a program starts behaving strangely – like an accountant’s PC suddenly trying to access the core development servers – it gets flagged, regardless of whether the file is a known threat.[20, 21]
- Network Traffic Analysis: For LLM-embedded malware, the smoking gun is the network call. A random, unsigned executable making an API call to api.openai.com is a massive red flag that modern security systems are being trained to catch instantly.[8] (A toy version of such a rule is sketched after this list.)
- Automated Response (SOAR): When a threat is detected, AI-powered systems can now take immediate action – isolating an infected laptop from the network in milliseconds, long before a human analyst has even finished reading the alert.[5, 22]
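Putting the last two ideas together, a toy triage rule for the LLM-malware “smoking gun” might look like the sketch below. Everything here is an assumption for illustration – the event field names, the host list, and the crude process allowlist are hypothetical, and real EDR/NDR products expose far richer telemetry:

```python
# Hypothetical endpoint telemetry fields: dest_host, binary_signed, process_name.
LLM_API_HOSTS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def triage(event: dict) -> str:
    """Score a single outbound-connection event from endpoint telemetry."""
    suspicious_destination = event["dest_host"] in LLM_API_HOSTS
    unsigned_binary = not event["binary_signed"]
    known_llm_client = event["process_name"] in {"chrome.exe", "code.exe"}  # crude allowlist

    if suspicious_destination and unsigned_binary and not known_llm_client:
        return "isolate-and-alert"   # the SOAR-style automated response
    if suspicious_destination:
        return "log-for-hunting"
    return "allow"

print(triage({"dest_host": "api.openai.com",
              "binary_signed": False,
              "process_name": "updater.exe"}))
# -> isolate-and-alert
```

An unsigned updater.exe phoning a public LLM API gets quarantined in milliseconds; a signed browser doing the same merely gets logged for a human hunter to review.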
The attacker only needs to find one hole. The defender needs to plug them all. It’s an asymmetric conflict, and the ultimate decider won’t be the “better AI.” It will be the human-machine partnership. AI can analyze data at a scale no human can, but it lacks intuition, context, and creativity. The human analyst is evolving from a security guard watching monitors to a detective, a strategist, and a hunter, using AI as their indispensable partner.[23]
So, Are We Doomed?
It’s easy to get cynical. The journey from the clumsy marketing of WormGPT to the terrifying elegance of LLM-embedded malware has been alarmingly short. The game has irrevocably changed. Static, signature-based defenses are a relic of a bygone era.
But we’re not doomed. We’re adapting. The future of cybersecurity is behavioral, predictive, and deeply integrated with AI. It demands a new kind of security professional – one who understands that the greatest defense is not a better algorithm, but a better synthesis of human ingenuity and machine intelligence. The arms race is on, and just like in the movie WarGames, the machines are learning at an exponential rate. But unlike in the film, “the only winning move is not to play” is not an option here. The only winning move is to play smarter.
References
- Abnormal Security. (2024, November 26). What Happened to WormGPT? The Shift from Malicious AI Tools to Prompt Engineering Tricks. https://abnormal.ai/blog/what-happened-to-wormgpt-cybercriminal-tools
- Anthropic. (2025, August). Detecting and Countering AI Misuse. https://www.anthropic.com/news/detecting-countering-misuse-aug-2025
- BlackFog. (2024). AI-Powered Malware Detection: BlackFog’s Advanced Solutions. https://www.blackfog.com/ai-powered-malware-detection-blackfogs-advanced-solutions/
- CardinalOps. (2025, May 20). Detecting Polymorphic AI Malware with Existing Security Controls. https://cardinalops.com/blog/polymorphic-ai-malware-detection/
- CCG. (2023, September 28). WormGPT and FraudGPT: The dark side of AI. https://ccgrouppr.com/blog/wormgpt-fraudgpt-the-dark-side-of-ai/
- CrowdStrike. (2025, January 16). What is Dark AI? https://www.crowdstrike.com/en-us/cybersecurity-101/artificial-intelligence/dark-ai/
- CrowdStrike. (n.d.). AI-Powered Cyberattacks. https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/ai-powered-cyberattacks/
- Dropzone.ai. (2024). Why You Need an AI SOC to Defend Against AI-Powered Cybercrime. https://www.dropzone.ai/blog/ai-soc-cyber-defense
- ESET. (2025, August 27). ESET discovers PromptLock, the first AI-powered ransomware. https://www.eset.com/us/about/newsroom/research/eset-discovers-promptlock-the-first-ai-powered-ransomware/
- ESET. (2025, August 28). ESET researcher discovers the first known AI-written ransomware: I feel thrilled but cautious. https://www.eset.com/blog/en/business-topics/threat-landscape/the-first-known-ai-written-ransomware/
- IBM. (n.d.). AI for Cybersecurity. https://www.ibm.com/solutions/ai-cybersecurity
- IRONSCALES. (2023, September 14). Generative AI Fraud: FraudGPT, WormGPT, and Beyond. https://ironscales.com/blog/generative-ai-fraud-fraudgpt-wormgpt-and-beyond
- Kenosha News. (2025, July 31). AI phishing detection & attacks: How to protect against them. https://www.kenosha.com/2025/07/31/ai-phishing-detection-attacks-how-to-protect-against-them/
- Krebs on Security. (2023, August 8). Meet the Brains Behind the Malware-Friendly AI Chat Service: WormGPT. https://krebsonsecurity.com/2023/08/meet-the-brains-behind-the-malware-friendly-ai-chat-service-wormgpt/
- National Cyber Security Centre. (2023). The impact of AI on the cyber threat. https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat
- OnSecurity. (2025, September 18). The AI Cybersecurity Arms Race: Do Attackers or Defenders Have the Upper Hand? https://onsecurity.io/article/the-ai-cybersecurity-arms-race/
- Outpost24. (2025, July 4). Dark AI tools: How profitable are they on the dark web? https://outpost24.com/blog/dark-ai-tools/
- Pawsitive. (2024, May 20). AI vs. AI: Who’s Winning the Cybersecurity Arms Race? https://me.pawankpradhan.com/blogs/ai-vs-ai-whos-winning-the-arms-race
- SentinelOne. (2025, September 19). Prompts as Code & Embedded Keys | The Hunt for LLM-Enabled Malware. https://www.sentinelone.com/labs/prompts-as-code-embedded-keys-the-hunt-for-llm-enabled-malware/
- The Hacker News. (2025, September 20). Researchers Uncover GPT-4-Powered MalTerminal Malware Creating Ransomware, Reverse Shell. https://thehackernews.com/2025/09/researchers-uncover-gpt-4-powered.html
- Trustwave. (2023, September 12). WormGPT and FraudGPT: The Rise of Malicious LLMs. https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/wormgpt-and-fraudgpt-the-rise-of-malicious-llms/
- WeLiveSecurity. (2025, August 26). First known AI-powered ransomware uncovered by ESET Research. https://www.welivesecurity.com/en/ransomware/first-known-ai-powered-ransomware-uncovered-eset-research/