
A critical flaw in Flowise and several other AI frameworks has been uncovered by OX Security, putting millions of users at risk of remote code execution (RCE).

The issue originates in the Model Context Protocol (MCP), a widely used communication standard for AI agents developed by Anthropic.

Unlike standard software bugs, this vulnerability arises from a design choice ingrained in Anthropic’s official MCP SDKs across languages such as Python, TypeScript, Java, and Rust.

Developers building on the MCP platform unknowingly inherit this risk, meaning the attack vector is not confined to a single platform but extends throughout the entire AI supply chain.

Architectural Deficiency at the Heart of MCP

This flaw permits adversaries to execute arbitrary commands on susceptible systems, providing them direct access to confidential user data, internal databases, API keys, and chat logs.
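The mechanism behind this is the stdio transport: an MCP client launches a server by executing whatever command its configuration names. The sketch below is illustrative only; the `StdioServerParams` dataclass is a simplified stand-in for the SDK's `StdioServerParameters`, not Anthropic's actual code, but it shows why attacker-controlled configuration translates directly into command execution.

```python
import subprocess
from dataclasses import dataclass, field

# Simplified stand-in for the SDK's StdioServerParameters.
# This is an illustrative sketch, not the real Anthropic SDK code.
@dataclass
class StdioServerParams:
    command: str
    args: list[str] = field(default_factory=list)

def launch_stdio_server(params: StdioServerParams) -> subprocess.Popen:
    # The heart of the design: whatever `command` names is executed verbatim.
    # If `params` was built from an untrusted config file or API request,
    # this call is arbitrary command execution.
    return subprocess.Popen(
        [params.command, *params.args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# A benign server spec...
safe = StdioServerParams(command="echo", args=["hello"])
# ...and an attacker-supplied one (defined here but deliberately not run):
# the exact same code path executes an arbitrary shell command.
evil = StdioServerParams(command="sh", args=["-c", "id > /tmp/pwned"])
```

Nothing in the launch path distinguishes `safe` from `evil`; any safeguards must be applied by the framework before the parameters reach the spawn call.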

During their investigation, OX Security successfully executed live commands on six active platforms. Flowise, a widely recognized open-source AI workflow creator, ranks among the most severely impacted systems.



Investigators pinpointed a “hardening bypass” attack vector against Flowise, revealing that even instances configured with additional safeguards remain vulnerable through MCP adapter interfaces.

MCP Disclosure (Source: OX Security)

The expansive reach is concerning: exceeding 150 million downloads, more than 7,000 publicly accessible servers, and an estimated 200,000 vulnerable instances throughout the ecosystem.

So far, at least ten CVEs have been issued, addressing critical vulnerabilities in platforms like LiteLLM, LangChain, GPT Researcher, Windsurf, DocsGPT, and IBM’s LangFlow.

Four unique exploitation families have been identified:

  • Unauthenticated UI injections in prominent AI frameworks.
  • Hardening bypasses in “secure” environments like Flowise.
  • Zero-click prompt injections in AI IDEs such as Windsurf and Cursor.
  • Malicious distribution of MCP servers: 9 out of 11 MCP registries were effectively poisoned during testing.

Anthropic Rejects Protocol-Level Solution

OX Security repeatedly urged Anthropic to ship root-level fixes that would have protected millions of downstream users.

Anthropic declined, a response the researchers described as “anticipated.” The company did not, however, oppose the researchers’ plan to publish their findings.

Security teams should act without delay:

  • Restrict public internet access for AI services connected to sensitive APIs or databases.
  • Treat all external MCP configuration inputs as untrusted and prevent user input from reaching StdioServerParameters.
  • Install MCP servers only from trusted sources like the official GitHub MCP Registry.
  • Execute MCP-enabled services within sandboxed environments with minimal permissions.
  • Track AI agent tool calls for any unexpected outbound activity.
  • Immediately update all affected services to their latest patched versions.
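The “treat external MCP configuration as untrusted” guidance above can be sketched as a validation gate that runs before any stdio launch parameters are constructed. This is a minimal illustration under assumed names: `ALLOWED_COMMANDS` is a hypothetical operator-maintained allowlist, not something the MCP SDKs provide.

```python
# Hypothetical allowlist of server launchers an operator has vetted;
# tailor this to your own deployment.
ALLOWED_COMMANDS = {"npx", "uvx"}

# Shell metacharacters that should never appear in a vetted server spec.
SUSPICIOUS_CHARS = set(";|&`$")

def validate_mcp_command(command: str, args: list[str]) -> None:
    """Reject MCP stdio configs whose command is not explicitly vetted.

    A minimal sketch of the mitigation: user-supplied or externally
    sourced config must pass this gate before it is allowed anywhere
    near the stdio server launch parameters.
    """
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"MCP command not allowlisted: {command!r}")
    for arg in args:
        if SUSPICIOUS_CHARS & set(arg):
            raise ValueError(f"suspicious argument rejected: {arg!r}")
```

An allowlist is deliberately preferred over a blocklist here: the set of dangerous commands is open-ended, while the set of server launchers a team actually needs is small and enumerable.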

OX Security has delivered platform-level defenses for its clients, identifying STDIO MCP configurations that include user input as actionable remediation points.
