Dec 26, 2025 | Ravie Lakshmanan | AI Security / DevSecOps

Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection

A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses by means of prompt injection.

LangChain Core (i.e., langchain-core) is a core Python package that's part of the LangChain ecosystem, providing the base interfaces and model-agnostic abstractions for building applications powered by LLMs.

The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch.

"A serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions," the project maintainers said in an advisory. "The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries."

"The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it's treated as a legitimate LangChain object during deserialization rather than plain user data."

According to Cyata researcher Porat, the crux of the issue is that the two functions fail to escape user-controlled dictionaries containing "lc" keys. The "lc" marker represents LangChain objects in the framework's internal serialization format.

"So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content containing an 'lc' key, they could instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths," Porat said.
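The bug class can be illustrated with a toy serializer (this is a simplified model of the pattern, not langchain-core's actual code): because free-form user dictionaries are written out verbatim, a loader that keys off the reserved "lc" marker cannot distinguish attacker-supplied data from a genuinely serialized object.

```python
import json

def toy_dumps(obj):
    # Flawed by design for illustration: free-form user dicts are emitted
    # verbatim, with no escaping of the reserved "lc" marker key.
    return json.dumps(obj)

def toy_loads(s):
    data = json.loads(s)
    # The loader trusts any dict carrying the "lc" marker and treats it as
    # an instruction to reconstruct an object, not as plain user data.
    if isinstance(data, dict) and data.get("lc") == 1:
        return ("RECONSTRUCTED_OBJECT", data.get("id"))
    return data

# Attacker-supplied "plain data" that smuggles in the marker structure:
user_metadata = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
result = toy_loads(toy_dumps(user_metadata))
print(result)  # ('RECONSTRUCTED_OBJECT', ['OPENAI_API_KEY'])
```

The fix for this class of bug is to escape or wrap any user dictionary whose keys collide with the serializer's reserved marker before writing it out.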

This could have various outcomes, including secret extraction from environment variables when deserialization is performed with "secrets_from_env=True" (previously set by default), instantiation of classes within pre-approved trusted namespaces, such as langchain_core, langchain, and langchain_community, and potentially even arbitrary code execution via Jinja2 templates.
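To see why the old "secrets_from_env" default was dangerous, consider a minimal sketch (the semantics here are assumed for illustration, not langchain-core's real implementation): a dict tagged as a "secret" is resolved by reading the named environment variable, so injected marker data can pull real credentials into the object graph.

```python
import os

def toy_load(data, secrets_from_env=False):
    # If the dict looks like a serialized "secret" marker, resolve it.
    if isinstance(data, dict) and data.get("lc") == 1 and data.get("type") == "secret":
        env_name = data["id"][0]
        if secrets_from_env:
            # The real secret leaks out of the environment.
            return os.environ.get(env_name)
        return None
    return data

os.environ["DEMO_API_KEY"] = "sk-demo-1234"  # stand-in credential for the demo
injected = {"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}
print(toy_load(injected, secrets_from_env=True))   # sk-demo-1234
print(toy_load(injected, secrets_from_env=False))  # None (the safer default)
```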

What's more, the escaping bug allows the injection of LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection.

The patch released by LangChain introduces new restrictive defaults in load() and loads() by way of an allowlist parameter, "allowed_objects," that lets users specify which classes can be serialized/deserialized. In addition, Jinja2 templates are blocked by default, and the "secrets_from_env" option is now set to "False" to disable automatic secret loading from the environment.
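The allowlist approach can be sketched as follows (the parameter name "allowed_objects" follows the advisory; the gating logic is illustrative, not langchain-core's actual code): any marker dict whose object path is not explicitly allowed is rejected instead of instantiated.

```python
def safe_toy_load(data, allowed_objects=()):
    # Gate reconstruction on an explicit allowlist of object paths.
    if isinstance(data, dict) and data.get("lc") == 1:
        obj_path = ".".join(data.get("id", []))
        if obj_path not in allowed_objects:
            raise ValueError(f"deserialization of {obj_path!r} is not allowed")
        return ("LOADED", obj_path)
    return data

payload = {"lc": 1, "type": "constructor",
           "id": ["langchain_core", "prompts", "PromptTemplate"]}

# An explicitly allowed class is loaded:
print(safe_toy_load(payload,
                    allowed_objects=("langchain_core.prompts.PromptTemplate",)))

# Anything outside the allowlist is refused rather than instantiated:
try:
    safe_toy_load(payload, allowed_objects=())
except ValueError as e:
    print(e)
```

Deny-by-default allowlisting is the standard remedy for unsafe deserialization: the loader can only ever construct types the caller has named up front.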

The following versions of langchain-core are affected by CVE-2025-68664 –

  • >= 1.0.0, < 1.2.5 (Fixed in 1.2.5)
  • < 0.3.81 (Fixed in 0.3.81)

It's worth noting that a comparable serialization injection flaw exists in LangChain.js, which likewise stems from improperly escaping objects with "lc" keys, thereby enabling secret extraction and prompt injection. This vulnerability has been assigned the CVE identifier CVE-2025-68665 (CVSS score: 8.6).

It impacts the following npm packages –

  • @langchain/core >= 1.0.0, < 1.1.8 (Fixed in 1.1.8)
  • @langchain/core < 0.3.80 (Fixed in 0.3.80)
  • langchain >= 1.0.0, < 1.2.3 (Fixed in 1.2.3)
  • langchain < 0.3.37 (Fixed in 0.3.37)

In light of the criticality of the vulnerability, users are advised to update to a patched version as soon as possible for optimal protection.

"The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations," Porat said. "This is exactly the kind of 'AI meets classic security' intersection where organizations get caught off guard. LLM output is an untrusted input."
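For teams that cannot upgrade immediately, one generic defensive pattern (an assumption on my part, not official LangChain guidance) is to treat LLM output as untrusted input and recursively scan fields such as additional_kwargs for the reserved "lc" marker before persisting or re-serializing them.

```python
def contains_lc_marker(value):
    # Recursively walk nested dicts/lists looking for the reserved "lc" key.
    if isinstance(value, dict):
        if "lc" in value:
            return True
        return any(contains_lc_marker(v) for v in value.values())
    if isinstance(value, list):
        return any(contains_lc_marker(v) for v in value)
    return False

# Hypothetical LLM response fields influenced by prompt injection:
llm_response_fields = {
    "additional_kwargs": {
        "tool_call": {"lc": 1, "type": "secret", "id": ["API_KEY"]},
    },
}
print(contains_lc_marker(llm_response_fields))  # True -> reject or sanitize
print(contains_lc_marker({"additional_kwargs": {"note": "benign"}}))  # False
```

Such a scan is a stopgap, not a substitute for the patched versions, which fix the escaping at the serializer level.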
