

In this tutorial, we build a complete AgentScope workflow from the ground up and run everything in Colab. We begin by wiring OpenAI into AgentScope and validating a basic model call to understand how messages and responses are handled. From there, we define custom tool functions, register them in a toolkit, and inspect the auto-generated schemas to see how tools are exposed to the agent. We then move into a ReAct-based agent that dynamically decides when to call tools, followed by a multi-agent debate setup using MsgHub to simulate structured interaction between agents. Finally, we implement structured outputs with Pydantic and execute a concurrent multi-agent pipeline in which several specialists analyze a problem in parallel and a synthesiser combines their insights.

import subprocess, sys


subprocess.check_call([
   sys.executable, "-m", "pip", "install", "-q",
   "agentscope", "openai", "pydantic", "nest_asyncio",
])


print("✅  All packages installed.\n")


import nest_asyncio
nest_asyncio.apply()


import asyncio
import json
import getpass
import math
import datetime
from typing import Any


from pydantic import BaseModel, Field


from agentscope.agent import ReActAgent
from agentscope.formatter import OpenAIChatFormatter, OpenAIMultiAgentFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg, TextBlock, ToolUseBlock
from agentscope.model import OpenAIChatModel
from agentscope.pipeline import MsgHub, sequential_pipeline
from agentscope.tool import Toolkit, ToolResponse


OPENAI_API_KEY = getpass.getpass("🔑  Enter your OpenAI API key: ")
MODEL_NAME = "gpt-4o-mini"


print(f"\n✅  API key captured. Using model: {MODEL_NAME}\n")
print("=" * 72)


def make_model(stream: bool = False) -> OpenAIChatModel:
   return OpenAIChatModel(
       model_name=MODEL_NAME,
       api_key=OPENAI_API_KEY,
       stream=stream,
       generate_kwargs={"temperature": 0.7, "max_tokens": 1024},
   )


print("\n" + "═" * 72)
print("  PART 1: Basic Model Call")
print("═" * 72)


async def part1_basic_model_call():
    model = make_model()
    response = await model(
        messages=[{"role": "user", "content": "What is AgentScope in one sentence?"}],
    )
    text = response.content[0]["text"]
    print(f"\n🤖  Model says: {text}")
    print(f"📊  Tokens used: {response.usage}")


asyncio.run(part1_basic_model_call())

We install all required dependencies and patch the event loop so that asynchronous code runs smoothly in Colab. We securely capture the OpenAI API key and configure the model through a helper function for reuse. We then run a basic model call to verify the setup and inspect the response and token usage.
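As a minimal illustration of the message shape the call above relies on — this is the generic OpenAI-style chat format, not an AgentScope API, and `build_messages` is a hypothetical helper for illustration — a request reduces to a list of role/content dictionaries:

```python
from typing import Optional

def build_messages(user_text: str, system_text: Optional[str] = None) -> list:
    """Assemble a chat-completion message list from optional system and user text."""
    messages = []
    if system_text:
        messages.append({"role": "system", "content": system_text})
    messages.append({"role": "user", "content": user_text})
    return messages

# The single-message form matches what part1_basic_model_call sends.
print(build_messages("What is AgentScope in one sentence?"))
```

The formatter classes imported earlier handle this conversion for agents automatically; only the raw model call takes the list directly.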

print("\n" + "═" * 72)
print("  PART 2: Custom Tool Functions & Toolkit")
print("═" * 72)


async def calculate_expression(expression: str) -> ToolResponse:
    """Evaluate a math expression using a restricted set of allowed names."""
    allowed = {
        "abs": abs, "round": round, "min": min, "max": max,
        "sum": sum, "pow": pow, "int": int, "float": float,
        "sqrt": math.sqrt, "pi": math.pi, "e": math.e,
        "log": math.log, "sin": math.sin, "cos": math.cos,
        "tan": math.tan, "factorial": math.factorial,
    }
    try:
        result = eval(expression, {"__builtins__": {}}, allowed)
        return ToolResponse(content=[TextBlock(type="text", text=str(result))])
    except Exception as exc:
        return ToolResponse(content=[TextBlock(type="text", text=f"Error: {exc}")])


async def get_current_datetime(timezone_offset: int = 0) -> ToolResponse:
    """Return the current date and time at the given UTC offset."""
    now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=timezone_offset)))
    return ToolResponse(
        content=[TextBlock(type="text", text=now.strftime("%Y-%m-%d %H:%M:%S %Z"))],
    )


toolkit = Toolkit()
toolkit.register_tool_function(calculate_expression)
toolkit.register_tool_function(get_current_datetime)


schemas = toolkit.get_json_schemas()
print("\n📋  Auto-generated tool schemas:")
print(json.dumps(schemas, indent=2))


async def part2_test_tool():
    result_gen = await toolkit.call_tool_function(
        ToolUseBlock(
            type="tool_use", id="test-1",
            name="calculate_expression",
            input={"expression": "factorial(10)"},
        ),
    )
    async for resp in result_gen:
        print(f"\n🔧  Tool result for factorial(10): {resp.content[0]['text']}")


asyncio.run(part2_test_tool())

We define custom tool functions for mathematical evaluation and datetime retrieval using controlled execution. We register these tools into a toolkit and inspect their auto-generated JSON schemas to understand how AgentScope exposes them. We then simulate a direct tool call to validate that the tool execution pipeline works correctly.
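The restricted-`eval` pattern inside `calculate_expression` is worth seeing in isolation: passing an empty `__builtins__` as the globals blocks every built-in (including `__import__`), so only explicitly whitelisted names resolve. A stdlib-only sketch:

```python
import math

# Whitelist of names an expression may reference; anything else fails to resolve
# because the globals dict carries an empty __builtins__.
ALLOWED = {"sqrt": math.sqrt, "factorial": math.factorial, "pi": math.pi}

def safe_eval(expression: str):
    """Evaluate a math expression against the whitelist only."""
    return eval(expression, {"__builtins__": {}}, ALLOWED)

print(safe_eval("factorial(5)"))   # 120
try:
    safe_eval("__import__('os')")  # blocked: __import__ is not in scope
except NameError as exc:
    print(f"Blocked: {exc}")
```

Note this hardening is best-effort rather than a true sandbox; for untrusted input a dedicated expression parser is safer.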

print("\n" + "═" * 72)
print("  PART 3: ReAct Agent with Tools")
print("═" * 72)


async def part3_react_agent():
    agent = ReActAgent(
        name="MathBot",
        sys_prompt=(
            "You are MathBot, a helpful assistant that solves math problems. "
            "Use the calculate_expression tool for any computation. "
            "Use get_current_datetime when asked about the time."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIChatFormatter(),
        toolkit=toolkit,
        max_iters=5,
    )

    queries = [
        "What's the current time in UTC+5?",
    ]
    for q in queries:
        print(f"\n👤  User: {q}")
        msg = Msg("user", q, "user")
        response = await agent(msg)
        print(f"🤖  MathBot: {response.get_text_content()}")
        agent.memory.clear()


asyncio.run(part3_react_agent())


print("\n" + "═" * 72)
print("  PART 4: Multi-Agent Debate (MsgHub)")
print("═" * 72)


DEBATE_TOPIC = (
    "Should artificial general intelligence (AGI) research be open-sourced, "
    "or should it remain behind closed doors at major labs?"
)

We assemble a ReAct agent that reasons about when to use tools and dynamically executes them. We pass user queries and observe how the agent combines reasoning with tool usage to produce answers. We also reset memory between queries to ensure independent and clean interactions.
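The reason-act-observe cycle the agent runs internally can be sketched with a toy loop. Everything below (`fake_model`, the `tools` dict, the message format) is a hypothetical stand-in for illustration, not AgentScope internals:

```python
# Toy ReAct loop: at each step the "model" either requests a tool call or
# emits a final answer; tool results are fed back as observations.

def fake_model(history: list) -> dict:
    """Stand-in for an LLM: asks for a tool until it has seen an observation."""
    if not any("observation" in h for h in history):
        return {"action": "tool", "name": "add", "input": (2, 3)}
    return {"action": "final", "text": "The sum is 5."}

tools = {"add": lambda a, b: a + b}

def react_loop(query: str, max_iters: int = 5) -> str:
    history = [f"user: {query}"]
    for _ in range(max_iters):
        step = fake_model(history)              # reason: decide what to do next
        if step["action"] == "final":
            return step["text"]
        result = tools[step["name"]](*step["input"])  # act: call the tool
        history.append(f"observation: {result}")      # observe: feed result back
    return "Max iterations reached."

print(react_loop("What is 2 + 3?"))  # The sum is 5.
```

The `max_iters=5` argument on `ReActAgent` plays the same role as `max_iters` here: it caps how many reason-act cycles the agent may run before it must answer.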

async def part4_debate():
    proponent = ReActAgent(
        name="Proponent",
        sys_prompt=(
            f"You are the Proponent in a debate. You argue IN FAVOR of open-sourcing AGI research. "
            f"Topic: {DEBATE_TOPIC}\n"
            "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )

    opponent = ReActAgent(
        name="Opponent",
        sys_prompt=(
            f"You are the Opponent in a debate. You argue AGAINST open-sourcing AGI research. "
            f"Topic: {DEBATE_TOPIC}\n"
            "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )

    num_rounds = 2
    for rnd in range(1, num_rounds + 1):
        print(f"\n{'─' * 60}")
        print(f"  ROUND {rnd}")
        print(f"{'─' * 60}")

        async with MsgHub(
            participants=[proponent, opponent],
            announcement=Msg("Moderator", f"Round {rnd} — begin. Topic: {DEBATE_TOPIC}", "assistant"),
        ):
            pro_msg = await proponent(
                Msg("Moderator", "Proponent, please present your argument.", "user"),
            )
            print(f"\n✅  Proponent:\n{pro_msg.get_text_content()}")

            opp_msg = await opponent(
                Msg("Moderator", "Opponent, please respond and present your counter-argument.", "user"),
            )
            print(f"\n❌  Opponent:\n{opp_msg.get_text_content()}")

    print(f"\n{'─' * 60}")
    print("  DEBATE COMPLETE")
    print(f"{'─' * 60}")


asyncio.run(part4_debate())


print("\n" + "═" * 72)
print("  PART 5: Structured Output with Pydantic")
print("═" * 72)


class MovieReview(BaseModel):
    title: str = Field(description="The title of the movie.")
    year: int = Field(description="The release year.")
    genre: str = Field(description="Primary genre of the movie.")
    rating: float = Field(description="Rating from 0.0 to 10.0.")
    pros: list[str] = Field(description="List of 2-3 strengths of the movie.")
    cons: list[str] = Field(description="List of 1-2 weaknesses of the movie.")
    verdict: str = Field(description="A one-sentence final verdict.")
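Under the hood, a Pydantic model like this compiles to a JSON schema that can be handed to the LLM as the required output shape, and validation then checks (and coerces) the raw JSON the model returns. A minimal sketch using the standard Pydantic v2 API, with a smaller hypothetical `MiniReview` model:

```python
from pydantic import BaseModel, Field

class MiniReview(BaseModel):
    year: int = Field(description="The release year.")
    rating: float = Field(description="Rating from 0.0 to 10.0.")

# The JSON schema is what gets exposed to the LLM as the target output shape.
schema = MiniReview.model_json_schema()
print(sorted(schema["properties"]))  # ['rating', 'year']

# Validation coerces and checks the model's raw JSON output.
review = MiniReview.model_validate({"year": 2010, "rating": "8.8"})
print(review.rating)  # 8.8
```

This is why structured output is reliable downstream: a response that fails validation is caught immediately instead of propagating malformed fields.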

We create two agents with opposing roles and connect them using MsgHub for a structured multi-agent debate. We simulate multiple rounds in which each agent responds to the other while maintaining context through shared communication. We observe how agent coordination enables coherent argument exchange across turns.
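The coordination MsgHub provides can be approximated with a toy broadcast hub — a stdlib-only sketch of the idea, not the AgentScope implementation: every message a participant produces is delivered into every other participant's context.

```python
# Toy broadcast hub mimicking MsgHub's core behavior: each participant's
# output is appended to all other participants' running contexts.

class ToyHub:
    def __init__(self, participants: list):
        # One running context (messages seen) per participant.
        self.contexts = {p: [] for p in participants}

    def broadcast(self, sender: str, text: str) -> None:
        """Deliver sender's message to every other participant."""
        for p, ctx in self.contexts.items():
            if p != sender:
                ctx.append(f"{sender}: {text}")

hub = ToyHub(["Proponent", "Opponent"])
hub.broadcast("Proponent", "Open-sourcing accelerates safety research.")
hub.broadcast("Opponent", "Open weights also accelerate misuse.")

print(hub.contexts["Proponent"])  # ['Opponent: Open weights also accelerate misuse.']
print(hub.contexts["Opponent"])   # ['Proponent: Open-sourcing accelerates safety research.']
```

In the real debate, MsgHub does this automatically for agent memories inside the `async with` block, which is why each debater can address the other's points without any manual message passing.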

async def part5_structured_output():
    agent = ReActAgent(
        name="Critic",
        sys_prompt="You are a professional movie critic. When asked to review a movie, provide a thorough analysis.",
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIChatFormatter(),
    )

    msg = Msg("user", "Review the movie 'Inception' (2010) by Christopher Nolan.", "user")
    response = await agent(msg, structured_model=MovieReview)

    print("\n🎬  Structured Movie Review:")
    print(f"    Title   : {response.metadata.get('title', 'N/A')}")
    print(f"    Year    : {response.metadata.get('year', 'N/A')}")
    print(f"    Genre   : {response.metadata.get('genre', 'N/A')}")
    print(f"    Rating  : {response.metadata.get('rating', 'N/A')}/10")
    pros = response.metadata.get('pros', [])
    cons = response.metadata.get('cons', [])
    if pros:
        print(f"    Pros    : {', '.join(str(p) for p in pros)}")
    if cons:
        print(f"    Cons    : {', '.join(str(c) for c in cons)}")
    print(f"    Verdict : {response.metadata.get('verdict', 'N/A')}")

    print(f"\n📝  Full text response:\n{response.get_text_content()}")


asyncio.run(part5_structured_output())


print("\n" + "═" * 72)
print("  PART 6: Concurrent Multi-Agent Pipeline")
print("═" * 72)


async def part6_concurrent_agents():
    specialists = {
        "Economist": "You are an economist. Analyze the given topic from an economic perspective in 2-3 sentences.",
        "Ethicist": "You are an ethicist. Analyze the given topic from an ethical perspective in 2-3 sentences.",
        "Technologist": "You are a technologist. Analyze the given topic from a technology perspective in 2-3 sentences.",
    }

    agents = []
    for name, prompt in specialists.items():
        agents.append(
            ReActAgent(
                name=name,
                sys_prompt=prompt,
                model=make_model(),
                memory=InMemoryMemory(),
                formatter=OpenAIChatFormatter(),
            )
        )

    topic_msg = Msg(
        "user",
        "Analyze the impact of large language models on the global workforce.",
        "user",
    )

    print("\n⏳  Running 3 specialist agents concurrently...")
    results = await asyncio.gather(*(agent(topic_msg) for agent in agents))

    for agent, result in zip(agents, results):
        print(f"\n🧠  {agent.name}:\n{result.get_text_content()}")

    synthesiser = ReActAgent(
        name="Synthesiser",
        sys_prompt=(
            "You are a synthesiser. You receive analyses from an Economist, "
            "an Ethicist, and a Technologist. Combine their perspectives into "
            "a single coherent summary of 3-4 sentences."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )

    combined_text = "\n\n".join(
        f"[{agent.name}]: {r.get_text_content()}" for agent, r in zip(agents, results)
    )
    synthesis = await synthesiser(
        Msg("user", f"Here are the specialist analyses:\n\n{combined_text}\n\nPlease synthesise.", "user"),
    )
    print(f"\n🔗  Synthesised Summary:\n{synthesis.get_text_content()}")


asyncio.run(part6_concurrent_agents())


print("\n" + "═" * 72)
print("  🎉  TUTORIAL COMPLETE!")
print("  You have covered:")
print("    1. Basic model calls with OpenAIChatModel")
print("    2. Custom tool functions & auto-generated JSON schemas")
print("    3. ReAct agent with tool use")
print("    4. Multi-agent debate with MsgHub")
print("    5. Structured output with Pydantic models")
print("    6. Concurrent multi-agent pipelines")
print("═" * 72)

We implement structured outputs using a Pydantic schema to extract consistent fields from model responses. We then build a concurrent multi-agent pipeline where multiple specialist agents analyze a topic in parallel. Finally, we aggregate their outputs using a synthesiser agent to produce a unified and coherent summary.
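The fan-out/fan-in shape of Part 6 reduces to `asyncio.gather`: launch the coroutines together, then combine their results. A stdlib-only sketch with dummy specialists standing in for the agents:

```python
import asyncio

async def specialist(name: str, delay: float) -> str:
    """Dummy specialist that 'analyzes' after a short delay."""
    await asyncio.sleep(delay)
    return f"{name}: analysis done"

async def pipeline() -> str:
    # Fan out: the specialists run concurrently, so total wall time is
    # roughly max(delays) rather than their sum.
    results = await asyncio.gather(
        specialist("Economist", 0.01),
        specialist("Ethicist", 0.01),
        specialist("Technologist", 0.01),
    )
    # Fan in: combine the individual outputs for a synthesis step.
    return "\n".join(results)

print(asyncio.run(pipeline()))
```

`asyncio.gather` preserves input order in its results list, which is what lets Part 6 safely `zip(agents, results)` when printing and building `combined_text`.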

In conclusion, we have implemented a full-stack agentic system that goes beyond simple prompting into orchestrated reasoning, tool usage, and collaboration. We now understand how AgentScope manages memory, formatting, and tool execution under the hood, and how ReAct agents bridge reasoning with action. We also saw how multi-agent systems can be coordinated both sequentially and concurrently, and how structured outputs ensure reliability in downstream applications. With these building blocks, we are prepared to design more advanced agent architectures, extend tool ecosystems, and deploy scalable, production-ready AI systems.

