A critical vulnerability in LangChain's core library (CVE-2025-68664) lets attackers exfiltrate sensitive environment variables and potentially execute code through unsafe deserialization.
Discovered by a Cyata researcher and patched just before Christmas 2025, the flaw affects one of the most widely used AI frameworks, with hundreds of millions of downloads.
The dumps() and dumpd() functions in langchain-core failed to escape user-controlled dictionaries containing the reserved ‘lc’ key, which marks LangChain's internal serialized objects.
As a result, untrusted data could be deserialized (CWE-502) whenever LLM outputs or prompt injections landed in fields such as additional_kwargs or response_metadata and then passed through serialize-deserialize cycles in routine operations like event streaming, logging, and caching. The CNA assigned a CVSS score of 9.3 (Critical), and the advisory lists 12 vulnerable patterns, including astream_events(v1) and Runnable.astream_log().
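The sketch below illustrates the round-trip in concept. It assumes a vulnerable langchain-core (earlier than 0.3.81 / 1.2.5); on patched versions the nested ‘lc’ dictionary is escaped and comes back as plain data. The payload shape is illustrative only, not a working exploit.

```python
# Conceptual repro of the unescaped 'lc' round-trip (vulnerable langchain-core only).
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

# An LLM response whose attacker-influenced additional_kwargs smuggles a dict
# shaped like LangChain's internal serialization envelope (reserved 'lc' key).
message = AIMessage(
    content="hello",
    additional_kwargs={
        "payload": {
            "lc": 1,
            "type": "constructor",
            "id": ["langchain", "prompts", "prompt", "PromptTemplate"],
            "kwargs": {
                "template": "{{ injected }}",
                "input_variables": [],
                "template_format": "jinja2",
            },
        }
    },
)

blob = dumps(message)    # vulnerable versions serialize the nested dict verbatim...
restored = loads(blob)   # ...so loads() treats it as an object to reconstruct
print(type(restored.additional_kwargs["payload"]))
# vulnerable: a PromptTemplate instance; patched: <class 'dict'>
```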
Cyata's researcher found the vulnerability while probing AI trust boundaries, tracing deserialization sinks until the missing escape in the serialization code stood out.
The issue was reported through Huntr on December 4, 2025; LangChain acknowledged it the next day and published the advisory on December 24. Fixes landed in langchain-core 0.3.81 and 1.2.5, which escape ‘lc’-containing dictionaries and disable secrets_from_env by default—previously it was enabled, allowing direct environment variable leakage. The team awarded a record $4,000 bounty.
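A minimal post-patch usage sketch follows, assuming langchain-core 0.3.81 / 1.2.5 or later and the secrets_map / secrets_from_env parameters as described in the advisory: secrets are no longer pulled from the environment implicitly and should be passed explicitly, only for blobs you control.

```python
from langchain_core.load import dumps, loads
from langchain_core.prompts import PromptTemplate

# Serialize an object we built ourselves, i.e. a trusted blob.
trusted_blob = dumps(PromptTemplate.from_template("Summarize: {text}"))

restored = loads(
    trusted_blob,
    secrets_map={},          # supply specific secrets explicitly when needed
    secrets_from_env=False,  # keep the safer post-patch default
)
print(restored.format(text="the CVE-2025-68664 advisory"))
```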

Attackers could craft prompts that instantiate allowlisted classes such as ChatBedrockConverse from langchain_aws, triggering SSRF requests that leak environment variables through headers.
PromptTemplate supports Jinja2 rendering, opening a path to potential RCE if a deserialized template is later invoked (see the illustration below). LangChain's reach magnifies the risk: pepy.tech records roughly 847M total downloads, and pypistats shows about 98M in the last month.
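The snippet below uses plain Jinja2 (not LangChain's formatter) purely to illustrate why rendering an attacker-controlled template is dangerous: template expressions can walk from ordinary values into Python internals, the first step of classic SSTI chains.

```python
from jinja2 import Environment

hostile = "{{ ''.__class__.__mro__ }}"  # attacker-supplied template text
print(Environment().from_string(hostile).render())
# -> (<class 'str'>, <class 'object'>): object internals reachable from the template
```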
Upgrade langchain-core immediately and audit dependencies such as langchain-community. Treat LLM outputs as untrusted, review deserialization paths in streaming and logging pipelines, and keep secret resolution disabled unless inputs are authenticated (a defensive sketch follows below). A parallel vulnerability, CVE-2025-68665, was found in LangChainJS, underscoring the risks in agentic AI infrastructure.
Given the rapid adoption of LLM applications, organizations should inventory their agent deployments now to enable swift triage.
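As one defensive measure, applications can scrub LLM-produced metadata before it enters any serialize/deserialize path. The helper below is hypothetical (not a LangChain API): it drops dicts shaped like the reserved ‘lc’ envelope from nested structures.

```python
# Hypothetical guard: strip 'lc'-envelope-shaped dicts out of LLM metadata
# before caching, logging, or streaming it.
from typing import Any


def strip_lc_envelopes(value: Any) -> Any:
    """Recursively remove dicts that mimic the reserved 'lc' serialization envelope."""
    if isinstance(value, dict):
        if "lc" in value and "type" in value:
            return None  # discard anything shaped like a serialized object
        return {k: strip_lc_envelopes(v) for k, v in value.items()}
    if isinstance(value, list):
        return [strip_lc_envelopes(v) for v in value]
    return value


suspicious = {"payload": {"lc": 1, "type": "constructor", "id": ["..."], "kwargs": {}}}
print(strip_lc_envelopes(suspicious))  # {'payload': None}
```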