

In this tutorial, we build an advanced multi-agent incident response system using AgentScope. We orchestrate several ReAct agents, each with a clearly defined role such as routing, triage, analysis, writing, and review, and connect them through structured routing and a shared message hub. By integrating OpenAI models, lightweight tool calling, and a simple internal runbook, we demonstrate how complex, real-world agentic workflows can be composed in pure Python without heavy infrastructure or brittle glue code. Check out the FULL CODES here.

!pip -q install "agentscope>=0.1.5" pydantic nest_asyncio


import os, json, re
from getpass import getpass
from typing import Literal
from pydantic import BaseModel, Field
import nest_asyncio
nest_asyncio.apply()


from agentscope.agent import ReActAgent
from agentscope.message import Msg, TextBlock
from agentscope.model import OpenAIChatModel
from agentscope.formatter import OpenAIChatFormatter
from agentscope.memory import InMemoryMemory
from agentscope.tool import Toolkit, ToolResponse, execute_python_code
from agentscope.pipeline import MsgHub, sequential_pipeline


if not os.environ.get("OPENAI_API_KEY"):
   os.environ["OPENAI_API_KEY"] = getpass("Enter OPENAI_API_KEY (hidden): ")


OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")

We set up the execution environment and install all required dependencies so the tutorial runs reliably on Google Colab. We securely load the OpenAI API key and initialize the core AgentScope components that will be shared across all agents. Check out the FULL CODES here.
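Before moving on, we can optionally confirm that the key is loaded and see which model name will be used. This quick check is a minimal sketch and not part of the pipeline itself.

# Optional sanity check: the key must be present, and OPENAI_MODEL controls which model is used
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
print("Using model:", OPENAI_MODEL)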

RUNBOOK = [
   {"id": "P0", "title": "Severity Policy", "text": "P0 critical outage, P1 major degradation, P2 minor issue"},
   {"id": "IR1", "title": "Incident Triage Checklist", "text": "Assess blast radius, timeline, deployments, errors, mitigation"},
   {"id": "SEC7", "title": "Phishing Escalation", "text": "Disable account, reset sessions, block sender, preserve evidence"},
]


def _score(q, d):
   q = set(re.findall(r"[a-z0-9]+", q.lower()))
   d = re.findall(r"[a-z0-9]+", d.lower())
   return sum(1 for w in d if w in q) / max(1, len(d))


async def search_runbook(query: str, top_k: int = 2) -> ToolResponse:
   ranked = sorted(RUNBOOK, key=lambda r: _score(query, r["title"] + r["text"]), reverse=True)[: max(1, int(top_k))]
   text = "\n\n".join(f"[{r['id']}] {r['title']}\n{r['text']}" for r in ranked)
   return ToolResponse(content=[TextBlock(type="text", text=text)])


toolkit = Toolkit()
toolkit.register_tool_function(search_runbook)
toolkit.register_tool_function(execute_python_code)

We define a lightweight internal runbook and implement a simple relevance-based search tool over it. We register this function along with a Python execution tool, enabling agents to retrieve policy information or compute results dynamically. This demonstrates how we augment agents with external capabilities beyond pure language reasoning. Check out the FULL CODES here.
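To see what the retrieval tool returns, we can call search_runbook directly in a notebook cell; the query below is only an illustrative example, and the printed output is the list of text blocks carried inside the ToolResponse.

# Illustrative direct call to the runbook search tool (top-level await works in Colab)
preview = await search_runbook("phishing email reported by finance", top_k=1)
print(preview.content)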

def make_model():
   return OpenAIChatModel(
       model_name=OPENAI_MODEL,
       api_key=os.environ["OPENAI_API_KEY"],
       generate_kwargs={"temperature": 0.2},
   )


class Route(BaseModel):
   lane: Literal["triage", "analysis", "report", "unknown"] = Field(...)
   objective: str = Field(...)


router = ReActAgent(
   name="Router",
   sys_prompt="Route the request to triage, analysis, or report and output structured JSON only.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
)


triager = ReActAgent(
   name="Triager",
   sys_prompt="Classify severity and immediate actions using runbook search when helpful.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
   toolkit=toolkit,
)


analyst = ReActAgent(
   name="Analyst",
   sys_prompt="Analyze logs and compute summaries using the python tool when helpful.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
   toolkit=toolkit,
)


writer = ReActAgent(
   name="Writer",
   sys_prompt="Write a concise incident report with clear structure.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
)


reviewer = ReActAgent(
   name="Reviewer",
   sys_prompt="Critique and improve the report with concrete fixes.",
   model=make_model(),
   formatter=OpenAIChatFormatter(),
   memory=InMemoryMemory(),
)

We construct several specialized ReAct agents and a structured router that decides how each user request should be handled. We assign clear responsibilities to the triage, analysis, writing, and review agents, ensuring separation of concerns. Check out the FULL CODES here.
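We can also probe the router on its own to inspect the structured Route it emits; the request below is an illustrative example, and the exact metadata depends on the model's routing decision.

# Illustrative structured-routing probe: the Route fields come back via the message metadata
probe = await router(
   Msg("user", "Summarize error rates from the checkout logs.", "user"),
   structured_model=Route,
)
print(probe.metadata)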

LOGS = """timestamp,service,standing,latency_ms,error
2025-12-18T12:00:00Z,checkout,200,180,false
2025-12-18T12:00:05Z,checkout,500,900,true
2025-12-18T12:00:10Z,auth,200,120,false
2025-12-18T12:00:12Z,checkout,502,1100,true
2025-12-18T12:00:20Z,search,200,140,false
2025-12-18T12:00:25Z,checkout,500,950,true
"""


def msg_text(m: Msg) -> str:
   blocks = m.get_content_blocks("text")
   if blocks is None:
       return ""
   if isinstance(blocks, str):
       return blocks
   if isinstance(blocks, list):
       return "\n".join(str(x) for x in blocks)
   return str(blocks)

We introduce sample log data and a utility function that normalizes agent outputs into clean text. We ensure that downstream agents can safely consume and refine previous responses without format issues. This focuses on making inter-agent communication robust and predictable. Check out the FULL CODES here.
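For intuition, here is a plain-Python sketch of the kind of per-service error summary the Analyst could compute through execute_python_code; it is illustrative only and runs outside the agent pipeline.

# Plain-Python sketch of a per-service error summary over the sample logs
import csv, io
from collections import defaultdict

totals, errors = defaultdict(int), defaultdict(int)
for row in csv.DictReader(io.StringIO(LOGS)):
   totals[row["service"]] += 1
   errors[row["service"]] += row["error"] == "true"
for svc in totals:
   print(f"{svc}: {errors[svc]}/{totals[svc]} requests errored")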

async def run_demo(user_request: str):
   route_msg = await router(Msg("user", user_request, "user"), structured_model=Route)
   lane = (route_msg.metadata or {}).get("lane", "unknown")


   if lane == "triage":
       first = await triager(Msg("user", user_request, "user"))
   elif lane == "analysis":
       first = await analyst(Msg("user", user_request + "\n\nLogs:\n" + LOGS, "user"))
   elif lane == "report":
       draft = await writer(Msg("user", user_request, "user"))
       first = await reviewer(Msg("user", "Review and improve:\n\n" + msg_text(draft), "user"))
   else:
       first = Msg("system", "Could not route request.", "system")


   async with MsgHub(
       members=[triager, analyst, writer, reviewer],
       announcement=Msg("Host", "Refine the ultimate reply collaboratively.", "assistant"),
   ):
       await sequential_pipeline([triager, analyst, writer, reviewer])


   return {"route": route_msg.metadata, "initial_output": msg_text(first)}


result = await run_demo(
   "We see repeated 5xx errors in checkout. Classify severity, analyze logs, and produce an incident report."
)
print(json.dumps(result, indent=2))

We orchestrate the complete workflow by routing the request, executing the appropriate agent, and running a collaborative refinement loop using a message hub. We coordinate multiple agents in sequence to improve the final output before returning it to the user. This brings together all previous components into a cohesive, end-to-end agentic pipeline.
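Note that the top-level await above relies on the notebook event loop (which is why we applied nest_asyncio earlier); in a standalone Python script we would instead wrap the coroutine with asyncio.run. The sample request string below is arbitrary and only illustrative.

# Running the same demo as a plain script instead of a notebook cell
import asyncio

if __name__ == "__main__":
   result = asyncio.run(run_demo("Customers report checkout failures; assess severity and draft a report."))
   print(json.dumps(result, indent=2))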

In conclusion, we showed how AgentScope enables us to design robust, modular, and collaborative agent systems that go beyond single-prompt interactions. We routed tasks dynamically, invoked tools only when needed, and refined outputs through multi-agent coordination, all within a clean and reproducible Colab setup. This pattern illustrates how we can scale from simple agent experiments to production-style reasoning pipelines while maintaining clarity, control, and extensibility in our agentic AI applications.


Check out the FULL CODES here.


