Google’s Threat Intelligence Group (GTIG) has uncovered an experimental malware strain named PROMPTFLUX, which introduces a new and concerning trend in cyber threats: malware that uses AI to rewrite its own code automatically to evade detection.
What Makes PROMPTFLUX Different?
PROMPTFLUX is written in VBScript and calls Google's Gemini model directly, using a hard-coded API key. At runtime it prompts the model for fresh code obfuscation and evasion techniques, which allows it to:
- Rewrite itself on demand
- Avoid signature-based antivirus detection
- Continuously change its executable characteristics
This process is handled by a component named "Thinking Robot," which periodically prompts Gemini to regenerate portions of the malware's code, or even the script in its entirety.
In some variants, PROMPTFLUX is configured to rewrite its entire source code every hour, instructing Gemini to act as an “expert VBScript obfuscator.”
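For defenders, the most direct host-based signal of this pattern is a script that embeds both an LLM API endpoint and a hard-coded API key. The sketch below is a minimal illustration rather than GTIG's tooling: it walks a directory of script files and flags any that reference the public Gemini API hostname or contain a string matching the standard Google API key format. The scan root and file extensions are placeholder assumptions.

```python
import re
from pathlib import Path

# Indicators of a script calling the Gemini API with an embedded key.
# The hostname is the public Gemini API endpoint; the key regex matches
# the standard Google API key format (AIza + 35 URL-safe characters).
GEMINI_HOST = "generativelanguage.googleapis.com"
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

def scan_scripts(root: str, extensions=(".vbs", ".js", ".ps1")) -> list[Path]:
    """Return script files that reference the Gemini endpoint or embed an API key."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix.lower() not in extensions or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if GEMINI_HOST in text or GOOGLE_API_KEY_RE.search(text):
            hits.append(path)
    return hits

if __name__ == "__main__":
    for hit in scan_scripts(r"C:\Users"):
        print(f"[!] Possible embedded LLM client: {hit}")
```

In practice a check like this belongs in an EDR query or YARA rule set rather than an ad-hoc script, but it captures the indicator that makes PROMPTFLUX-style samples stand out.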
Persistence and Self-Propagation
PROMPTFLUX stores its updated versions in the Windows Startup folder to run automatically upon reboot. It can also attempt to spread by copying itself to:
- Removable drives
- Mapped network shares
Although some of the self-modification functions are currently commented out, the malware already writes a log to %TEMP%\thinking_robot_log.txt, and its structure points to active development toward metamorphic malware: code that continually rewrites itself to evade detection.
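Because both the persistence location and the log file name are concrete artifacts from GTIG's write-up, a quick host triage is easy to sketch. The following example (a minimal check, not a full detection) looks for the Thinking Robot log in %TEMP% and lists VBScript files dropped into the standard per-user and all-users Startup folders; the folder paths are the default Windows locations.

```python
import os
from pathlib import Path

def promptflux_triage() -> list[str]:
    """Flag host artifacts reported for PROMPTFLUX: the Thinking Robot log
    in %TEMP% and VBScript files sitting in the Startup folders."""
    findings = []

    # Log file written by the "Thinking Robot" component.
    temp = Path(os.environ.get("TEMP", r"C:\Windows\Temp"))
    log = temp / "thinking_robot_log.txt"
    if log.exists():
        findings.append(f"Thinking Robot log present: {log}")

    # Per-user and all-users Startup folders used for persistence.
    startup_dirs = [
        Path(os.environ.get("APPDATA", "")) / "Microsoft/Windows/Start Menu/Programs/Startup",
        Path(os.environ.get("PROGRAMDATA", "")) / "Microsoft/Windows/Start Menu/Programs/StartUp",
    ]
    for folder in startup_dirs:
        if not folder.is_dir():
            continue
        for script in folder.glob("*.vbs"):
            findings.append(f"VBScript in Startup folder: {script}")

    return findings

if __name__ == "__main__":
    for finding in promptflux_triage():
        print("[!]", finding)
```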
GTIG believes PROMPTFLUX is still in a testing phase and does not yet contain full capabilities to compromise systems at scale. However, its underlying mechanism signals a significant evolution in threat actor behavior.
Part of a Bigger Pattern: AI-Augmented Malware Ecosystems
PROMPTFLUX is not an isolated case. GTIG reports several other malware families incorporating LLM-driven behaviors, including:
- FRUITSHELL: A PowerShell reverse shell containing hard-coded prompts intended to bypass LLM-based security analysis.
- PROMPTLOCK: A cross-platform ransomware written in Go that generates and executes malicious Lua scripts using an LLM.
- PROMPTSTEAL (LAMEHUG): Deployed by the Russian state-backed group APT28; it queries the Qwen2.5-Coder model via the Hugging Face API to generate its data-collection and exfiltration commands at runtime.
- QUIETVAULT: A JavaScript credential stealer targeting GitHub and NPM developer tokens.
State-Backed Abuse of Gemini
Google has observed threat actors from China, Iran, and North Korea misusing Gemini to assist operations such as:
- Crafting phishing lures
- Designing malware components
- Generating code for lateral movement and data exfiltration
- Conducting reconnaissance and infrastructure setup
Some actors reportedly posed as students or Capture-The-Flag (CTF) participants to bypass the model's safety controls and obtain exploit guidance.
Why This Matters
PROMPTFLUX demonstrates a shift from using AI as a helper to embedding AI inside malware for continuous adaptation. This enables:
- Dynamic evasion
- Lower barriers to sophisticated attack development
- Scalable malicious operations
Google warns that AI-driven malware is likely to become increasingly common rather than remaining an outlier.
Defensive Recommendations
Organizations should begin adapting detection strategies now:
- Monitor unusual outbound requests to LLM API endpoints (a minimal monitoring sketch follows this list).
- Implement strict API key management and rotation policies.
- Track changes to Windows Startup and script execution behavior.
- Harden removable drive and network share write permissions.
- Educate teams on prompt-based social engineering and AI misuse.
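For the first recommendation, monitoring outbound LLM API traffic can start with something as simple as scanning proxy or DNS logs for connections to known model-hosting endpoints from systems that have no business making them. The sketch below assumes a CSV log with client and dest_host columns (a hypothetical format chosen for illustration) and uses an illustrative, non-exhaustive list of LLM API hostnames.

```python
import csv
from collections import defaultdict

# Illustrative list of public LLM API hostnames; extend with the providers
# relevant to your environment.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
    "api-inference.huggingface.co",
}

# Hosts that are expected to call LLM APIs (e.g., approved developer machines).
APPROVED_CLIENTS = {"build-server-01"}

def flag_llm_traffic(log_path: str) -> dict[str, set[str]]:
    """Map unapproved clients to the LLM API hostnames they contacted.

    Assumes a CSV log with 'client' and 'dest_host' columns; adapt the
    field names to your proxy or DNS logging format.
    """
    flagged = defaultdict(set)
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            client, dest = row.get("client", ""), row.get("dest_host", "")
            if dest in LLM_API_HOSTS and client not in APPROVED_CLIENTS:
                flagged[client].add(dest)
    return flagged

if __name__ == "__main__":
    for client, hosts in flag_llm_traffic("proxy_log.csv").items():
        print(f"[!] {client} contacted LLM API endpoints: {', '.join(sorted(hosts))}")
```

A rule like this is deliberately coarse; the point is to establish a baseline of which systems legitimately talk to model APIs so that a workstation or server suddenly calling one becomes an investigable event.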
Conclusion
PROMPTFLUX represents the next stage of malware evolution: self-modifying, AI-enhanced, and increasingly autonomous. While still in development, the underlying technique previews a threat landscape in which malware is not just coded, but co-designed in real time by AI.
Security teams should treat AI-misuse detection as a core security capability going forward.