In this tutorial, we build a truly advanced Agentic AI system using LangGraph and OpenAI models by going beyond simple planner-executor loops. We implement adaptive deliberation, where the agent dynamically decides between fast and deep reasoning; a Zettelkasten-style agentic memory graph that stores atomic notes and automatically links related experiences; and a governed tool-use mechanism that enforces constraints during execution. By combining structured state management, memory-aware retrieval, reflexive learning, and controlled tool invocation, we demonstrate how modern agentic systems can reason, act, learn, and evolve rather than respond in a single pass. Check out the FULL CODES here.
!pip -q install -U langgraph langchain-openai langchain-core pydantic numpy networkx requests
import os, getpass, json, time, operator
from typing import List, Dict, Any, Optional, Literal
from typing_extensions import TypedDict, Annotated
import numpy as np
import networkx as nx
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage, AnyMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import InMemorySaver

We set up the execution environment by installing all the required libraries and importing the core modules. We bring together LangGraph for orchestration, LangChain for model and tool abstractions, and supporting libraries for the memory graph and numerical operations.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter OPENAI_API_KEY: ")
MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")
EMB_MODEL = os.environ.get("OPENAI_EMBED_MODEL", "text-embedding-3-small")
llm_fast = ChatOpenAI(model=MODEL, temperature=0)
llm_deep = ChatOpenAI(model=MODEL, temperature=0)
llm_reflect = ChatOpenAI(model=MODEL, temperature=0)
emb = OpenAIEmbeddings(model=EMB_MODEL)

We securely load the OpenAI API key at runtime and initialize the language models used for fast, deep, and reflective reasoning. We also configure the embedding model that powers semantic similarity in memory. This separation lets us swap reasoning depth flexibly while keeping a shared representation space for memory.
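As a quick sanity check (a minimal sketch of our own, assuming the API key above is valid), we can confirm that the shared embedding space behaves as expected: semantically related texts score higher than unrelated ones.

# Sanity-check sketch with illustrative phrases (not from the tutorial's data):
# related texts should have a higher cosine similarity than unrelated ones.
v_a = np.array(emb.embed_query("agentic memory graph"))
v_b = np.array(emb.embed_query("zettelkasten note linking"))
v_c = np.array(emb.embed_query("banana bread recipe"))
cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(round(cos(v_a, v_b), 3), round(cos(v_a, v_c), 3))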
class Note(BaseModel):
    note_id: str
    title: str
    content: str
    tags: List[str] = Field(default_factory=list)
    created_at_unix: float
    context: Dict[str, Any] = Field(default_factory=dict)

class MemoryGraph:
    def __init__(self):
        self.g = nx.Graph()
        self.note_vectors = {}
    def _cos(self, a, b):
        return float(np.dot(a, b) / ((np.linalg.norm(a) + 1e-9) * (np.linalg.norm(b) + 1e-9)))
    def add_note(self, note, vec):
        self.g.add_node(note.note_id, **note.model_dump())
        self.note_vectors[note.note_id] = vec
    def topk_related(self, vec, k=5):
        scored = [(nid, self._cos(vec, v)) for nid, v in self.note_vectors.items()]
        scored.sort(key=lambda x: x[1], reverse=True)
        return [{"note_id": n, "score": s, "title": self.g.nodes[n]["title"]} for n, s in scored[:k]]
    def link_note(self, a, b, w, r):
        if a != b:
            self.g.add_edge(a, b, weight=w, reason=r)
    def evolve_links(self, nid, vec):
        for r in self.topk_related(vec, 8):
            if r["score"] >= 0.78:
                self.link_note(nid, r["note_id"], r["score"], "evolve")

MEM = MemoryGraph()

We construct an agentic memory graph inspired by the Zettelkasten method, where each interaction is stored as an atomic note. We embed every note and connect it to semantically related notes using similarity scores.
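Before the memory graph is wired into the agent, a small standalone sketch (hypothetical notes, run against a throwaway MemoryGraph so the agent's MEM stays untouched) shows how notes are added, retrieved, and auto-linked once similarity crosses the 0.78 threshold.

# Illustrative sketch with made-up notes; uses a separate MemoryGraph instance.
demo_mem = MemoryGraph()
for nid, title, text in [
    ("n1", "LangGraph basics", "Nodes and conditional edges define the agent's control flow."),
    ("n2", "Checkpointing", "InMemorySaver persists graph state across turns on a thread."),
]:
    note = Note(note_id=nid, title=title, content=text, tags=["demo"], created_at_unix=time.time())
    vec = np.array(emb.embed_query(note.title + note.content))
    demo_mem.add_note(note, vec)
    demo_mem.evolve_links(note.note_id, vec)   # links fire only when similarity >= 0.78

query_vec = np.array(emb.embed_query("how does LangGraph route between nodes?"))
print(demo_mem.topk_related(query_vec, k=2))
print(list(demo_mem.g.edges(data=True)))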
@tool
def web_get(url: str) -> str:
    """Fetch up to 25 KB of raw text from a URL."""
    import urllib.request
    with urllib.request.urlopen(url, timeout=15) as r:
        return r.read(25000).decode("utf-8", errors="ignore")

@tool
def memory_search(query: str, k: int = 5) -> str:
    """Return the top-k memory notes semantically related to the query."""
    qv = np.array(emb.embed_query(query))
    hits = MEM.topk_related(qv, k)
    return json.dumps(hits, ensure_ascii=False)

@tool
def memory_neighbors(note_id: str) -> str:
    """Return the notes directly linked to the given note id."""
    if note_id not in MEM.g:
        return "[]"
    return json.dumps([
        {"note_id": n, "weight": MEM.g[note_id][n]["weight"]}
        for n in MEM.g.neighbors(note_id)
    ])

TOOLS = [web_get, memory_search, memory_neighbors]
TOOLS_BY_NAME = {t.name: t for t in TOOLS}

We define the external tools the agent can invoke, including web access and memory-based retrieval. We integrate these tools in a structured way so the agent can query past experiences or fetch new information when necessary.
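As a quick usage check (a sketch of our own, with a placeholder URL and query), LangChain tools are invoked with a dict of arguments, which is exactly how the tools node calls them later in the graph.

# Direct tool invocation, mirroring how the executor node will call them.
print(web_get.invoke({"url": "https://example.com"})[:200])              # placeholder URL
print(memory_search.invoke({"query": "zettelkasten linking", "k": 3}))   # "[]" while memory is empty
print(memory_neighbors.invoke({"note_id": "missing-id"}))                # "[]" for unknown notes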
class DeliberationDecision(BaseModel):
    mode: Literal["fast", "deep"]
    reason: str
    suggested_steps: List[str]

class RunSpec(BaseModel):
    goal: str
    constraints: List[str]
    deliverable_format: str
    must_use_memory: bool
    max_tool_calls: int

class Reflection(BaseModel):
    note_title: str
    note_tags: List[str]
    new_rules: List[str]
    what_worked: List[str]
    what_failed: List[str]

class AgentState(TypedDict, total=False):
    run_spec: Dict[str, Any]
    messages: Annotated[List[AnyMessage], operator.add]
    decision: Dict[str, Any]
    final: str
    budget_calls_remaining: int
    tool_calls_used: int
    max_tool_calls: int
    last_note_id: str

DECIDER_SYS = "Decide fast vs deep."
AGENT_FAST = "Operate fast."
AGENT_DEEP = "Operate deep."
REFLECT_SYS = "Reflect and store learnings."

We formalize the agent's internal representations using structured schemas for deliberation, execution goals, reflection, and global state. We also define the system prompts that guide behavior in fast and deep modes. This ensures the agent's reasoning and decisions remain consistent, interpretable, and controllable.
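To make the contract concrete, here is a minimal sketch (hypothetical goal and constraints) of the run specification that the deliberation and agent nodes consume; because RunSpec is a Pydantic model, a malformed spec fails at validation time rather than mid-run.

# Illustrative RunSpec with made-up values; validation errors surface immediately.
example_spec = RunSpec(
    goal="Summarize what we know about LangGraph memory patterns",
    constraints=["answer in under 200 words", "cite memory note ids if used"],
    deliverable_format="markdown",
    must_use_memory=True,
    max_tool_calls=4,
)
print(example_spec.model_dump_json(indent=2))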
def deliberate(st):
    spec = RunSpec.model_validate(st["run_spec"])
    d = llm_fast.with_structured_output(DeliberationDecision).invoke([
        SystemMessage(content=DECIDER_SYS),
        HumanMessage(content=json.dumps(spec.model_dump()))
    ])
    return {"decision": d.model_dump(), "budget_calls_remaining": st["budget_calls_remaining"] - 1}

def agent(st):
    spec = RunSpec.model_validate(st["run_spec"])
    d = DeliberationDecision.model_validate(st["decision"])
    llm = llm_deep if d.mode == "deep" else llm_fast
    sys = AGENT_DEEP if d.mode == "deep" else AGENT_FAST
    out = llm.bind_tools(TOOLS).invoke([
        SystemMessage(content=sys),
        *st.get("messages", []),
        HumanMessage(content=json.dumps(spec.model_dump()))
    ])
    return {"messages": [out], "budget_calls_remaining": st["budget_calls_remaining"] - 1}

def route(st):
    return "tools" if st["messages"][-1].tool_calls else "finalize"

def tools_node(st):
    msgs = []
    used = st.get("tool_calls_used", 0)
    for c in st["messages"][-1].tool_calls:
        obs = TOOLS_BY_NAME[c["name"]].invoke(c["args"])
        msgs.append(ToolMessage(content=str(obs), tool_call_id=c["id"]))
        used += 1
    return {"messages": msgs, "tool_calls_used": used}

def finalize(st):
    out = llm_deep.invoke(st["messages"] + [HumanMessage(content="Return final output")])
    return {"final": out.content}

def reflect(st):
    r = llm_reflect.with_structured_output(Reflection).invoke([
        SystemMessage(content=REFLECT_SYS),
        HumanMessage(content=st["final"])
    ])
    note = Note(
        note_id=str(time.time()),
        title=r.note_title,
        content=st["final"],
        tags=r.note_tags,
        created_at_unix=time.time()
    )
    vec = np.array(emb.embed_query(note.title + note.content))
    MEM.add_note(note, vec)
    MEM.evolve_links(note.note_id, vec)
    return {"last_note_id": note.note_id}

We implement the core agentic behaviors as LangGraph nodes, including deliberation, action, tool execution, finalization, and reflection. We orchestrate how information flows between these stages and how decisions affect the execution path.
g = StateGraph(AgentState)
g.add_node("deliberate", deliberate)
g.add_node("agent", agent)
g.add_node("tools", tools_node)
g.add_node("finalize", finalize)
g.add_node("reflect", reflect)
g.add_edge(START, "deliberate")
g.add_edge("deliberate", "agent")
g.add_conditional_edges("agent", route, ["tools", "finalize"])
g.add_edge("tools", "agent")
g.add_edge("finalize", "reflect")
g.add_edge("reflect", END)
graph = g.compile(checkpointer=InMemorySaver())

def run_agent(goal, constraints=None, thread_id="demo"):
    if constraints is None:
        constraints = []
    spec = RunSpec(
        goal=goal,
        constraints=constraints,
        deliverable_format="markdown",
        must_use_memory=True,
        max_tool_calls=6
    ).model_dump()
    return graph.invoke({
        "run_spec": spec,
        "messages": [],
        "budget_calls_remaining": 10,
        "tool_calls_used": 0,
        "max_tool_calls": 6
    }, config={"configurable": {"thread_id": thread_id}})

We assemble all the nodes into a LangGraph workflow and compile it with checkpointed state management. We also define a reusable runner function that executes the agent while preserving memory across runs.
In conclusion, we showed how an agent can continuously improve its behavior through reflection and memory rather than relying on static prompts or hard-coded logic. We used LangGraph to orchestrate deliberation, execution, tool governance, and reflection as a coherent graph, while OpenAI models provide the reasoning and synthesis capabilities at each stage. This approach illustrates how agentic AI systems can move closer to autonomy by adapting their reasoning depth, reusing prior knowledge, and encoding lessons as persistent memory, forming a practical foundation for building scalable, self-improving agents in real-world applications.