In this tutorial, we take a deep dive into nanobot, the ultra-lightweight personal AI agent framework from HKUDS that packs full agent capabilities into roughly 4,000 lines of Python. Rather than merely installing and running it out of the box, we crack open the hood and manually recreate each of its core subsystems (the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling) so we understand exactly how they work. We wire everything up with OpenAI's gpt-4o-mini as our LLM provider, enter our API key securely through the terminal (never exposing it in notebook output), and progressively build from a single tool-calling loop all the way to a multi-step research pipeline that reads and writes files, stores long-term memories, and delegates tasks to concurrent background workers. By the end, we don't just know how to use nanobot, we understand how to extend it with custom tools, skills, and our own agent architectures.
import sys
import os
import subprocess

def section(title, emoji="🔹"):
    """Pretty-print a section header."""
    width = 72
    print(f"\n{'═' * width}")
    print(f"  {emoji} {title}")
    print(f"{'═' * width}\n")

def info(msg):
    print(f"  ℹ️ {msg}")

def success(msg):
    print(f"  ✅ {msg}")

def code_block(code):
    print(f"  ┌─────────────────────────────────────────────────")
    for line in code.strip().split("\n"):
        print(f"  │ {line}")
    print(f"  └─────────────────────────────────────────────────")
section("STEP 1 · Installing nanobot-ai & Dependencies", "📦")
info("Installing nanobot-ai from PyPI (latest stable)...")
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "nanobot-ai", "openai", "rich", "httpx"
])
success("nanobot-ai installed successfully!")

import importlib.metadata
nanobot_version = importlib.metadata.version("nanobot-ai")
print(f"  📌 nanobot-ai version: {nanobot_version}")
section("STEP 2 · Secure OpenAI API Key Input", "🔑")
info("Your API key will NOT be printed or stored in notebook output.")
info("It is held only in memory for this session.\n")

try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    if not OPENAI_API_KEY:
        raise ValueError("Not set in Colab secrets")
    success("Loaded API key from Colab Secrets ('OPENAI_API_KEY').")
    info("Tip: You can set this in the Colab → 🔑 Secrets panel in the left sidebar.")
except Exception:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    success("API key captured securely via terminal input.")

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

import openai
client = openai.OpenAI(api_key=OPENAI_API_KEY)
try:
    client.models.list()
    success("OpenAI API key validated — connection successful!")
except Exception as e:
    print(f"  ❌ API key validation failed: {e}")
    print("  Please restart and enter a valid key.")
    sys.exit(1)
section("STEP 3 · Configuring nanobot for OpenAI", "⚙️")

import json
from pathlib import Path

NANOBOT_HOME = Path.home() / ".nanobot"
NANOBOT_HOME.mkdir(parents=True, exist_ok=True)
WORKSPACE = NANOBOT_HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
(WORKSPACE / "memory").mkdir(parents=True, exist_ok=True)

config = {
    "providers": {
        "openai": {
            "apiKey": OPENAI_API_KEY
        }
    },
    "agents": {
        "defaults": {
            "model": "openai/gpt-4o-mini",
            "maxTokens": 4096,
            "workspace": str(WORKSPACE)
        }
    },
    "tools": {
        "restrictToWorkspace": True
    }
}
config_path = NANOBOT_HOME / "config.json"
config_path.write_text(json.dumps(config, indent=2))
success(f"Config written to {config_path}")
agents_md = WORKSPACE / "AGENTS.md"
agents_md.write_text(
    "# Agent Instructions\n\n"
    "You are nanobot 🐈, an ultra-lightweight personal AI assistant.\n"
    "You are helpful, concise, and use tools when needed.\n"
    "Always explain your reasoning step by step.\n"
)
soul_md = WORKSPACE / "SOUL.md"
soul_md.write_text(
    "# Personality\n\n"
    "- Friendly and approachable\n"
    "- Technically precise\n"
    "- Uses emoji sparingly for warmth\n"
)
user_md = WORKSPACE / "USER.md"
user_md.write_text(
    "# User Profile\n\n"
    "- The user is exploring the nanobot framework.\n"
    "- They are interested in AI agent architectures.\n"
)
memory_md = WORKSPACE / "memory" / "MEMORY.md"
memory_md.write_text("# Long-term Memory\n\n_No memories saved yet._\n")

success("Workspace bootstrap files created:")
for f in [agents_md, soul_md, user_md, memory_md]:
    print(f"  📄 {f.relative_to(NANOBOT_HOME)}")
section("STEP 4 · nanobot Architecture Deep Dive", "🏗️")
info("""nanobot is organized into 7 subsystems in ~4,000 lines of code:

 ┌──────────────────────────────────────────────────────────┐
 │                     USER INTERFACES                      │
 │          CLI · Telegram · WhatsApp · Discord             │
 └──────────────────┬───────────────────────────────────────┘
                    │ InboundMessage / OutboundMessage
 ┌──────────────────▼───────────────────────────────────────┐
 │                      MESSAGE BUS                         │
 │          publish_inbound() / publish_outbound()          │
 └──────────────────┬───────────────────────────────────────┘
                    │
 ┌──────────────────▼───────────────────────────────────────┐
 │                 AGENT LOOP (loop.py)                     │
 │  ┌─────────┐   ┌──────────┐   ┌────────────────────┐     │
 │  │ Context │ → │ LLM      │ → │ Tool Execution     │     │
 │  │ Builder │   │ Call     │   │ (if tool_calls)    │     │
 │  └─────────┘   └──────────┘   └────────┬───────────┘     │
 │       ▲                                │ loop back       │
 │       │ ◄──────────────────────────────┘ until done      │
 │  ┌────┴────┐   ┌──────────┐   ┌────────────────────┐     │
 │  │ Memory  │   │ Skills   │   │ Subagent Mgr       │     │
 │  │ Store   │   │ Loader   │   │ (spawn tasks)      │     │
 │  └─────────┘   └──────────┘   └────────────────────┘     │
 └──────────────────────────────────────────────────────────┘
                    │
 ┌──────────────────▼───────────────────────────────────────┐
 │                  LLM PROVIDER LAYER                      │
 │    OpenAI · Anthropic · OpenRouter · DeepSeek · ...      │
 └──────────────────────────────────────────────────────────┘

The Agent Loop iterates up to 40 times (configurable):
  1. ContextBuilder assembles system prompt + memory + skills + history
  2. The LLM is called with the tool definitions
  3. If the response has tool_calls → execute tools, append results, loop
  4. If the response is plain text → return it as the final answer
""")

We set up the entire foundation of the tutorial by importing the required modules, defining helper functions for clean section display, and installing the nanobot dependencies inside Google Colab. We then securely load and validate the OpenAI API key so the rest of the notebook can interact with the model without exposing credentials in the notebook output. After that, we configure the nanobot workspace, create the core bootstrap files (AGENTS.md, SOUL.md, USER.md, and MEMORY.md), and study the high-level architecture so we understand how the framework is organized before moving into implementation.
section("STEP 5 · The Agent Loop — Core Concept in Action", "🔄")
info("We'll manually recreate nanobot's agent loop pattern using OpenAI.")
info("This is exactly what loop.py does internally.\n")

import json as _json
import datetime

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression to evaluate, e.g. '2**10 + 42'"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Relative file path within the workspace"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Relative file path"},
                    "content": {"type": "string", "description": "Content to write"}
                },
                "required": ["path", "content"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Save a fact to the agent's long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "fact": {"type": "string", "description": "The fact to remember"}
                },
                "required": ["fact"]
            }
        }
    }
]
def execute_tool(name: str, arguments: dict) -> str:
    """Execute a tool call — mirrors nanobot's ToolRegistry.execute()."""
    if name == "get_current_time":
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    elif name == "calculate":
        expr = arguments.get("expression", "")
        try:
            # Restricted eval: no builtins, only a few safe helpers exposed
            result = eval(expr, {"__builtins__": {}}, {"abs": abs, "round": round, "min": min, "max": max})
            return str(result)
        except Exception as e:
            return f"Error: {e}"
    elif name == "read_file":
        fpath = WORKSPACE / arguments.get("path", "")
        if fpath.exists():
            return fpath.read_text()[:4000]
        return f"Error: File not found — {arguments.get('path')}"
    elif name == "write_file":
        fpath = WORKSPACE / arguments.get("path", "")
        fpath.parent.mkdir(parents=True, exist_ok=True)
        fpath.write_text(arguments.get("content", ""))
        return f"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}"
    elif name == "save_memory":
        fact = arguments.get("fact", "")
        mem_file = WORKSPACE / "memory" / "MEMORY.md"
        existing = mem_file.read_text()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
        mem_file.write_text(existing + f"\n- [{timestamp}] {fact}\n")
        return f"Memory saved: {fact}"
    return f"Unknown tool: {name}"
def agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """
    Recreates nanobot's AgentLoop._process_message() logic.
    The loop:
      1. Build context (system prompt + bootstrap files + memory)
      2. Call the LLM with tools
      3. If tool_calls → execute → append results → loop
      4. If text response → return final answer
    """
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f"  📨 User: {user_message}")
        print(f"  🧠 System prompt: {len(system_prompt)} chars "
              f"(from {len(system_parts)} bootstrap files)")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"  ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f"  🔧 LLM requested {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"     → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"     ← {result[:100]}{'...' if len(result) > 100 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f"  💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached with no final response."
print("─" * 60)
print(" DEMO 1: Time-aware calculation with tool chaining")
print("─" * 60)
result1 = agent_loop(
    "What's the current time? Also, calculate 2^20 + 42 for me."
)

print("─" * 60)
print(" DEMO 2: File creation + memory storage")
print("─" * 60)
result2 = agent_loop(
    "Write a haiku about AI agents to a file called 'haiku.txt'. "
    "Then remember that I enjoy poetry about technology."
)

We manually recreate the heart of nanobot by defining the tool schemas, implementing their execution logic, and building the iterative agent loop that connects the LLM to tools. We assemble the prompt from the workspace files and memory, send the conversation to the model, detect tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then test this mechanism with practical examples involving time lookups, calculations, file writing, and memory saving, so we can see the loop operate exactly like the internal nanobot flow.
section("STEP 6 · Memory System — Persistent Agent Memory", "🧠")
info("""nanobot's memory system (memory.py) uses two storage mechanisms:

  1. MEMORY.md      — Long-term facts (always loaded into context)
  2. YYYY-MM-DD.md  — Daily journal entries (loaded for recent days)

Memory consolidation runs periodically to summarize and compress
old entries, keeping the context window manageable.
""")
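Since the consolidation logic itself is not reproduced in this notebook, here is a minimal sketch of the loading half of that scheme: MEMORY.md is always included, and daily journal files for the last few days are appended. The function name and the `recent_days` cutoff are illustrative assumptions, not nanobot's actual code.

```python
import datetime
from pathlib import Path

def build_memory_context(memory_dir: Path, recent_days: int = 3) -> str:
    """Assemble memory context: MEMORY.md always, plus recent daily journals."""
    parts = []
    long_term = memory_dir / "MEMORY.md"
    if long_term.exists():
        parts.append(long_term.read_text())
    today = datetime.date.today()
    # Walk backwards over the last `recent_days` days and pull in any journal
    for offset in range(recent_days):
        day = today - datetime.timedelta(days=offset)
        daily = memory_dir / f"{day.isoformat()}.md"
        if daily.exists():
            parts.append(daily.read_text())
    return "\n\n".join(parts)
```

Older journals simply fall out of the window, which is why periodic consolidation into MEMORY.md matters: anything worth keeping long-term has to be promoted before its daily file ages out.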
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("  📂 Current MEMORY.md contents:")
print("  ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f"  │ {line}")
print("  └─────────────────────────────────────────────\n")

today = datetime.datetime.now().strftime("%Y-%m-%d")
daily_file = WORKSPACE / "memory" / f"{today}.md"
daily_file.write_text(
    f"# Daily Log — {today}\n\n"
    "- User ran the nanobot advanced tutorial\n"
    "- Explored agent loop, tools, and memory\n"
    "- Created a haiku about AI agents\n"
)
success(f"Daily journal created: memory/{today}.md")

print("\n  📁 Workspace contents:")
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        print(f"     {'📄' if item.suffix == '.md' else '📝'} {rel} ({size} bytes)")
section("STEP 7 · Skills System — Extending Agent Capabilities", "🎯")
info("""nanobot's SkillsLoader (skills.py) reads Markdown files from the
skills/ directory. Each skill has:

  - A name and description (for the LLM to decide when to use it)
  - Instructions the LLM follows when the skill is activated
  - Some skills are 'always loaded'; others are loaded on demand

Let's create a custom skill and see how the agent uses it.
""")
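To make the loader's job concrete, here is a hypothetical parser for the skill-file layout used in this step (`## Description`, `## Instructions`, `## Always Available` sections). This is a sketch of the pattern, not nanobot's actual SkillsLoader, which may parse its files differently.

```python
from pathlib import Path

def parse_skill(path: Path) -> dict:
    """Parse a skill markdown file into name / description / instructions."""
    skill = {"name": path.stem, "description": "", "instructions": "", "always": False}
    current = None
    buf = {"description": [], "instructions": [], "always": []}
    for line in path.read_text().splitlines():
        if line.startswith("## "):
            # Map the section header to one of the known buckets
            header = line[3:].strip().lower()
            current = {"description": "description",
                       "instructions": "instructions",
                       "always available": "always"}.get(header)
        elif current:
            buf[current].append(line)
    skill["description"] = "\n".join(buf["description"]).strip()
    skill["instructions"] = "\n".join(buf["instructions"]).strip()
    skill["always"] = "\n".join(buf["always"]).strip().lower() == "true"
    return skill
```

A loader built on this could inject "always available" skills into every system prompt and surface only the name/description of the rest, letting the LLM request the full instructions on demand.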
skills_dir = WORKSPACE / "skills"
skills_dir.mkdir(exist_ok=True)

data_skill = skills_dir / "data_analyst.md"
data_skill.write_text("""# Data Analyst Skill
## Description
Analyze data, compute statistics, and provide insights from numbers.
## Instructions
When asked to analyze data:
1. Identify the data type and structure
2. Compute relevant statistics (mean, median, range, std dev)
3. Look for patterns and outliers
4. Present findings in a clear, structured format
5. Suggest follow-up questions
## Always Available
false
""")

review_skill = skills_dir / "code_reviewer.md"
review_skill.write_text("""# Code Reviewer Skill
## Description
Review code for bugs, security issues, and best practices.
## Instructions
When reviewing code:
1. Check for common bugs and logic errors
2. Identify security vulnerabilities
3. Suggest performance improvements
4. Evaluate code style and readability
5. Rate the code quality on a 1-10 scale
## Always Available
true
""")

success("Custom skills created:")
for f in skills_dir.iterdir():
    print(f"  🎯 {f.name}")

print("\n  🧪 Testing skill-aware agent interaction:")
print("  " + "─" * 56)
skills_context = "\n\n## Available Skills\n"
for skill_file in skills_dir.glob("*.md"):
    content = skill_file.read_text()
    skills_context += f"\n### {skill_file.stem}\n{content}\n"

# Pass the skills along with the request so the agent can follow them
result3 = agent_loop(
    skills_context +
    "\n\nReview this Python code for issues:\n\n"
    "```python\n"
    "def get_user(id):\n"
    "    query = f'SELECT * FROM users WHERE id = {id}'\n"
    "    result = db.execute(query)\n"
    "    return result\n"
    "```"
)
We move into the persistent memory system by inspecting the long-term memory file, creating a daily journal entry, and reviewing how the workspace evolves after the earlier interactions. We then extend the agent with a skills system by creating markdown-based skill files that describe specialized behaviors such as data analysis and code review. Finally, we simulate how skill-aware prompting works by exposing these skills to the agent and asking it to review a Python function, which helps us see how nanobot can be guided through modular capability descriptions.
section("STEP 8 · Custom Tool Creation — Extending the Agent", "🔧")
info("""nanobot's tool system uses a ToolRegistry with a simple interface.
Each tool needs:

  - A name and description
  - A JSON Schema for parameters
  - An execute() method

Let's create custom tools and wire them into our agent loop.
""")
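The registry class itself is not reproduced in this notebook (we use a plain if/elif dispatcher instead), so here is a small sketch of what a registry with the interface described above could look like. The class shape and method names other than `execute()` are assumptions for illustration, not nanobot's actual API.

```python
from typing import Callable

class ToolRegistry:
    """Sketch of a tool registry: name + description + schema + execute()."""

    def __init__(self):
        self._tools: dict[str, dict] = {}

    def register(self, name: str, description: str, parameters: dict,
                 fn: Callable[..., str]) -> None:
        # Store both the OpenAI-style schema and the callable together
        self._tools[name] = {
            "schema": {"type": "function",
                       "function": {"name": name, "description": description,
                                    "parameters": parameters}},
            "fn": fn,
        }

    def schemas(self) -> list[dict]:
        """Tool definitions in the format the chat completions API expects."""
        return [t["schema"] for t in self._tools.values()]

    def execute(self, name: str, arguments: dict) -> str:
        if name not in self._tools:
            return f"Unknown tool: {name}"
        try:
            return str(self._tools[name]["fn"](**arguments))
        except Exception as e:
            return f"Error: {e}"

# Usage: register a toy tool, then dispatch a call to it by name
registry = ToolRegistry()
registry.register("echo", "Echo a message back in upper case.",
                  {"type": "object",
                   "properties": {"text": {"type": "string"}},
                   "required": ["text"]},
                  lambda text: text.upper())
```

Centralizing tools this way means the agent loop only ever touches `schemas()` and `execute()`, so adding a tool never requires editing the loop itself.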
import random

CUSTOM_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll one or more dice with a given number of sides.",
            "parameters": {
                "type": "object",
                "properties": {
                    "num_dice": {"type": "integer", "description": "Number of dice to roll", "default": 1},
                    "sides": {"type": "integer", "description": "Number of sides per die", "default": 6}
                },
                "required": []
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "text_stats",
            "description": "Compute statistics about a text: word count, char count, sentence count, reading time.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to analyze"}
                },
                "required": ["text"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "generate_password",
            "description": "Generate a random secure password.",
            "parameters": {
                "type": "object",
                "properties": {
                    "length": {"type": "integer", "description": "Password length", "default": 16}
                },
                "required": []
            }
        }
    }
]
_original_execute = execute_tool

def execute_tool_extended(name: str, arguments: dict) -> str:
    if name == "roll_dice":
        n = arguments.get("num_dice", 1)
        s = arguments.get("sides", 6)
        rolls = [random.randint(1, s) for _ in range(n)]
        return f"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})"
    elif name == "text_stats":
        text = arguments.get("text", "")
        words = len(text.split())
        chars = len(text)
        sentences = text.count('.') + text.count('!') + text.count('?')
        reading_time = round(words / 200, 1)
        return _json.dumps({
            "words": words,
            "characters": chars,
            "sentences": max(sentences, 1),
            "reading_time_minutes": reading_time
        })
    elif name == "generate_password":
        import string
        length = arguments.get("length", 16)
        chars = string.ascii_letters + string.digits + "!@#$%^&*"
        pwd = ''.join(random.choice(chars) for _ in range(length))
        return f"Generated password ({length} chars): {pwd}"
    # Fall back to the built-in tools from STEP 5
    return _original_execute(name, arguments)

execute_tool = execute_tool_extended
ALL_TOOLS = TOOLS + CUSTOM_TOOLS
def agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """Agent loop with the extended custom tools."""
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f"  📨 User: {user_message}")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"  ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=ALL_TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f"  🔧 {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"     → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"     ← {result[:120]}{'...' if len(result) > 120 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f"  💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached."

print("─" * 60)
print(" DEMO 3: Custom tools in action")
print("─" * 60)
result4 = agent_loop_v2(
    "Roll 3 six-sided dice for me, then generate a 20-character password, "
    "and finally analyze the text stats of this sentence: "
)
section("STEP 9 · Multi-Turn Conversation — Session Management", "💬")
info("""nanobot's SessionManager (session/manager.py) maintains conversation
history per session_key (format: 'channel:chat_id'). History is stored
in JSON files and loaded into context for each new message.

Let's simulate a multi-turn conversation with persistent state.
""")

We expand the agent's capabilities by defining new custom tools such as dice rolling, text statistics, and password generation, and then wiring them into the tool execution pipeline. We update the executor, merge the built-in and custom tool definitions, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool invocations, demonstrating how easy it is to extend nanobot with our own functions while keeping the same overall interaction pattern.
class SimpleSessionManager:
    """
    Minimal recreation of nanobot's SessionManager.
    Stores conversation history and provides context continuity.
    """
    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.sessions: dict[str, list[dict]] = {}

    def get_history(self, session_key: str) -> list[dict]:
        return self.sessions.get(session_key, [])

    def add_turn(self, session_key: str, role: str, content: str):
        if session_key not in self.sessions:
            self.sessions[session_key] = []
        self.sessions[session_key].append({"role": role, "content": content})

    def save(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))

    def load(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        if fpath.exists():
            self.sessions[session_key] = _json.loads(fpath.read_text())

session_mgr = SimpleSessionManager(WORKSPACE)
SESSION_KEY = "cli:tutorial_user"
def chat(user_message: str, verbose: bool = True):
    """Multi-turn chat with session persistence."""
    session_mgr.add_turn(SESSION_KEY, "user", user_message)
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    system_prompt = "\n\n".join(system_parts)
    history = session_mgr.get_history(SESSION_KEY)
    messages = [{"role": "system", "content": system_prompt}] + history
    if verbose:
        print(f"  👤 You: {user_message}")
        print(f"     (conversation history: {len(history)} messages)")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024
    )
    reply = response.choices[0].message.content or ""
    session_mgr.add_turn(SESSION_KEY, "assistant", reply)
    session_mgr.save(SESSION_KEY)
    if verbose:
        print(f"  🐈 nanobot: {reply}\n")
    return reply

print("─" * 60)
print(" DEMO 4: Multi-turn conversation with memory")
print("─" * 60)
chat("Hi! My name is Alex and I'm building an AI agent.")
chat("What's my name? And what am I working on?")
chat("Can you suggest 3 features I should add to my agent?")

success("Session persisted with full conversation history!")
session_file = WORKSPACE / f"session_{SESSION_KEY.replace(':', '_')}.json"
session_data = _json.loads(session_file.read_text())
print(f"  📄 Session file: {session_file.name} ({len(session_data)} messages)")
section("STEP 10 · Subagent Spawning — Background Task Delegation", "🚀")
info("""nanobot's SubagentManager (agent/subagent.py) lets the main agent
delegate tasks to independent background workers. Each subagent:

  - Gets its own tool registry (no SpawnTool, to prevent recursion)
  - Runs up to 15 iterations independently
  - Reports results back via the MessageBus

Let's simulate this pattern with concurrent tasks.
""")
import asyncio
import uuid

async def run_subagent(task_id: str, goal: str, verbose: bool = True):
    """
    Simulates nanobot's SubagentManager._run_subagent().
    Runs an independent LLM loop for a specific goal.
    """
    if verbose:
        print(f"  🔹 Subagent [{task_id[:8]}] started: {goal[:60]}")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a focused research assistant. "
             "Complete the assigned task concisely in 2-3 sentences."},
            {"role": "user", "content": goal}
        ],
        max_tokens=256
    )
    result = response.choices[0].message.content or ""
    if verbose:
        print(f"  ✅ Subagent [{task_id[:8]}] done: {result[:80]}...")
    return {"task_id": task_id, "goal": goal, "result": result}

async def spawn_subagents(goals: list[str]):
    """Spawn multiple subagents concurrently — mirrors SubagentManager.spawn()."""
    tasks = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        tasks.append(run_subagent(task_id, goal))
    print(f"\n  🚀 Spawning {len(tasks)} subagents concurrently...\n")
    results = await asyncio.gather(*tasks)
    return results
goals = [
    "What are the 3 key components of a ReAct agent architecture?",
    "Explain the difference between tool-calling and function-calling in LLMs.",
    "What is MCP (Model Context Protocol) and why does it matter for AI agents?",
]

try:
    # Inside Colab/Jupyter a loop is already running, so patch it with nest_asyncio
    loop = asyncio.get_running_loop()
    import nest_asyncio
    nest_asyncio.apply()
    subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(goals))
except RuntimeError:
    # No running loop (plain script) — asyncio.run works directly
    subagent_results = asyncio.run(spawn_subagents(goals))
except ModuleNotFoundError:
    print("  ℹ️ Running subagents sequentially (install nest_asyncio for async)...\n")
    subagent_results = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Complete the task concisely in 2-3 sentences."},
                {"role": "user", "content": goal}
            ],
            max_tokens=256
        )
        r = response.choices[0].message.content or ""
        print(f"  ✅ Subagent [{task_id[:8]}] done: {r[:80]}...")
        subagent_results.append({"task_id": task_id, "goal": goal, "result": r})

print(f"\n  📋 All {len(subagent_results)} subagent results collected!")
for i, r in enumerate(subagent_results, 1):
    print(f"\n  ── Result {i} ──")
    print(f"  Goal:   {r['goal'][:60]}")
    print(f"  Answer: {r['result'][:200]}")

We simulate multi-turn conversation management by building a lightweight session manager that stores, retrieves, and persists conversation history across turns. We use that history to maintain continuity in the chat, allowing the agent to remember details from earlier in the interaction and respond more coherently and statefully. After that, we model subagent spawning by launching concurrent background tasks that each handle a focused objective, which helps us understand how nanobot can delegate parallel work to independent agent workers.
section("STEP 11 · Scheduled Tasks — The Cron Pattern", "⏰")
info("""nanobot's CronService (cron/service.py) uses APScheduler to trigger
agent actions on a schedule. When a job fires, it creates an
InboundMessage and publishes it to the MessageBus.

Let's demonstrate the pattern with a simulated scheduler.
""")
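APScheduler is not installed in this notebook, so the fire-then-publish pattern can be sketched with the stdlib `sched` module instead. The `publish_inbound` stand-in and the self-re-arming job are illustrative assumptions about the pattern, not nanobot's actual CronService code.

```python
import sched
import time

fired: list[str] = []

def publish_inbound(message: str) -> None:
    """Stand-in for nanobot's MessageBus.publish_inbound()."""
    fired.append(message)
    print(f"  [bus] inbound: {message}")

def cron_job(s: sched.scheduler, interval: float, message: str, repeats: int) -> None:
    """Fire the job, then re-arm it — the recurring-trigger pattern."""
    publish_inbound(message)
    if repeats > 1:
        s.enter(interval, 1, cron_job, (s, interval, message, repeats - 1))

scheduler = sched.scheduler(time.monotonic, time.sleep)
# Fire a "health check" 3 times, 0.05 s apart (intervals shortened for the demo)
scheduler.enter(0.05, 1, cron_job, (scheduler, 0.05, "Run a system health check.", 3))
scheduler.run()  # blocks until the queue is empty, i.e. after the last fire
```

In nanobot the published message then flows through the normal agent loop, so a cron job is simply a user message that the clock, rather than a human, sends.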
from datetime import timedelta

class SimpleCronJob:
    """Mirrors nanobot's cron job structure."""
    def __init__(self, name: str, message: str, interval_seconds: int):
        self.id = str(uuid.uuid4())[:8]
        self.name = name
        self.message = message
        self.interval = interval_seconds
        self.enabled = True
        self.last_run = None
        self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)

jobs = [
    SimpleCronJob("morning_briefing", "Give me a brief morning status update.", 86400),
    SimpleCronJob("memory_cleanup", "Review and consolidate my memories.", 43200),
    SimpleCronJob("health_check", "Run a system health check.", 3600),
]

print("  📋 Registered Cron Jobs:")
print("  ┌──────────┬────────────────────┬──────────┬──────────────────┐")
print("  │ ID       │ Name               │ Interval │ Next Run         │")
print("  ├──────────┼────────────────────┼──────────┼──────────────────┤")
for job in jobs:
    interval_str = f"{job.interval // 3600}h" if job.interval >= 3600 else f"{job.interval}s"
    print(f"  │ {job.id} │ {job.name:<18} │ {interval_str:>8} │ {job.next_run.strftime('%Y-%m-%d %H:%M')} │")
print("  └──────────┴────────────────────┴──────────┴──────────────────┘")

print(f"\n  ⏰ Simulating a cron trigger for '{jobs[2].name}'...")
cron_result = agent_loop_v2(jobs[2].message, verbose=True)
section("STEP 12 · Full Agent Pipeline — End-to-End Demo", "🎬")
info("""Now let's run a complex, multi-step task that exercises the full
nanobot pipeline: context building → tool use → memory → file I/O.
""")

print("─" * 60)
print(" DEMO 5: Complex multi-step research task")
print("─" * 60)
complex_result = agent_loop_v2(
    "I need you to help me with a small project:\n"
    "1. First, check the current time\n"
    "2. Write a short project plan to 'project_plan.txt' about building "
    "a personal AI assistant (3-4 bullet points)\n"
    "3. Remember that my current project is 'building a personal AI assistant'\n"
    "4. Read back the project plan file to confirm it was saved correctly\n"
    "Then summarize everything you did.",
    max_iterations=15
)
section("STEP 13 · Final Workspace Summary", "📊")
print("  📁 Full workspace state after the tutorial:\n")
total_files = 0
total_bytes = 0
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        total_files += 1
        total_bytes += size
        icon = {"md": "📄", "txt": "📝", "json": "📋"}.get(item.suffix.lstrip("."), "📎")
        print(f"  {icon} {rel} ({size:,} bytes)")

print(f"\n  ── Summary ──")
print(f"  Total files: {total_files}")
print(f"  Total size:  {total_bytes:,} bytes")
print(f"  Config:      {config_path}")
print(f"  Workspace:   {WORKSPACE}")

print("\n  🧠 Final Memory State:")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("  ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f"  │ {line}")
print("  └─────────────────────────────────────────────")
section("COMPLETE · What's Next?", "🎉")
print("""  You've explored the core internals of nanobot! Here's what to try next:

  🔹 Run the real CLI agent:
     nanobot onboard && nanobot agent

  🔹 Connect to Telegram:
     Add a bot token to config.json and run `nanobot gateway`

  🔹 Enable web search:
     Add a Brave Search API key under tools.web.search.apiKey

  🔹 Try MCP integration:
     nanobot supports Model Context Protocol servers for external tools

  🔹 Explore the source (~4K lines):
     https://github.com/HKUDS/nanobot

  🔹 Key files to read:
     • agent/loop.py     — The agent iteration loop
     • agent/context.py  — Prompt assembly pipeline
     • agent/memory.py   — Persistent memory system
     • agent/tools/      — Built-in tool implementations
     • agent/subagent.py — Background task delegation
""")

We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, listing their intervals and next run times, and simulating the trigger of an automated agent task. We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into a single multi-step workflow, so we can see the full pipeline working together on a realistic task. At the end, we inspect the final workspace state, review the saved memory, and close the tutorial with clear next steps that connect this notebook implementation to the real nanobot project and its source code.
In conclusion, we walked through every major layer of nanobot's architecture, from the iterative LLM-tool loop at its core to the session manager that gives our agent conversational memory across turns. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in a single runnable script. What stands out is how nanobot proves that a production-grade agent framework doesn't need hundreds of thousands of lines of code; the patterns we implemented here (context assembly, tool dispatch, memory consolidation, and background task delegation) are the same patterns that power far larger systems, just stripped down to their essence. We now have a working mental model of agentic AI internals and a codebase small enough to read in a single sitting, which makes nanobot an excellent choice for anyone looking to build, customize, or research AI agents from the ground up.
