In this tutorial, we demonstrate how to design a contract-first agentic decision system using PydanticAI, treating structured schemas as non-negotiable governance contracts rather than optional output formats. We define a strict decision model that encodes policy compliance, risk assessment, confidence calibration, and actionable next steps directly into the agent's output schema. By combining Pydantic validators with PydanticAI's retry and self-correction mechanisms, we ensure that the agent cannot produce logically inconsistent or non-compliant decisions. Throughout the workflow, we focus on building an enterprise-grade decision agent that reasons under constraints, making it suitable for real-world risk, compliance, and governance scenarios rather than toy prompt-based demos. Check out the FULL CODES here.
!pip -q install -U pydantic-ai pydantic openai nest_asyncio
import os
import time
import asyncio
import getpass
from dataclasses import dataclass
from typing import List, Literal
import nest_asyncio
nest_asyncio.apply()
from pydantic import BaseModel, Field, field_validator
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None
if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()

We set up the execution environment by installing the required libraries and configuring asynchronous execution for Google Colab. We securely load the OpenAI API key and make sure the runtime can handle async agent calls. This establishes a stable foundation for running the contract-first agent without environment-related issues.
class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)

class DecisionOutput(BaseModel):
    # Fields are declared in dependency order: a field validator only sees
    # previously validated fields via info.data, so compliance_passed and
    # identified_risks must precede decision and confidence.
    compliance_passed: bool
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))

    @field_validator("confidence")
    @classmethod
    def confidence_vs_risk(cls, v, info):
        risks = info.data.get("identified_risks") or []
        if any(r.severity == "high" for r in risks) and v > 0.70:
            raise ValueError("confidence too high given high-severity risks")
        return v

    @field_validator("decision")
    @classmethod
    def reject_if_non_compliant(cls, v, info):
        if info.data.get("compliance_passed") is False and v != "reject":
            raise ValueError("non-compliant decisions must be reject")
        return v

    @field_validator("conditions")
    @classmethod
    def conditions_required_for_conditional_approval(cls, v, info):
        d = info.data.get("decision")
        if d == "approve_with_conditions" and (not v or len(v) < 2):
            raise ValueError("approve_with_conditions requires at least 2 conditions")
        if d == "approve" and v:
            raise ValueError("approve must not include conditions")
        return v

We define the core decision contract using strict Pydantic models that precisely describe a valid decision. We encode logical constraints such as confidence-risk alignment, compliance-driven rejection, and conditional approvals directly into the schema. This ensures that any agent output must satisfy business logic, not just syntactic structure.
@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6

model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)

agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt="""
You are a corporate decision review agent.
You must evaluate risk, compliance, and uncertainty.
All outputs must strictly satisfy the DecisionOutput schema.
""",
)

We inject business context through a typed dependency object and initialize the OpenAI-backed PydanticAI agent. We configure the agent to produce only structured decision outputs that conform to the predefined contract. This step formalizes the separation between business context and model reasoning.
from pydantic_ai import ModelRetry

@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    if len(result.identified_risks) < 2:
        raise ModelRetry("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ModelRetry("at least one medium or high risk required")
    return result

@agent.output_validator
def enforce_policy_controls(result: DecisionOutput) -> DecisionOutput:
    policy = CURRENT_DEPS.company_policy.lower()  # available for richer policy checks
    text = (
        result.rationale
        + " " + " ".join(result.next_steps)
        + " " + " ".join(result.conditions)
    ).lower()
    if result.compliance_passed:
        if not any(k in text for k in ["encryption", "audit", "logging", "access control", "key management"]):
            raise ModelRetry("missing concrete security controls")
    return result

We add output validators that act as governance checkpoints after the model generates a response. We force the agent to identify meaningful risks and to explicitly reference concrete security controls when claiming compliance. If these constraints are violated (signaled here by raising ModelRetry), we trigger automatic retries to enforce self-correction.
async def run_decision():
    global CURRENT_DEPS
    CURRENT_DEPS = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )
    prompt = """
Decision request:
Deploy an AI-powered customer analytics dashboard using a third-party cloud vendor.
The system processes user behavior and transaction metadata.
Audit logging is not implemented and customer-managed keys are uncertain.
"""
    result = await agent.run(prompt, deps=CURRENT_DEPS)
    return result.output

decision = asyncio.run(run_decision())

from pprint import pprint
pprint(decision.model_dump())

We run the agent on a realistic decision request and capture the validated structured output. We demonstrate how the agent evaluates risk, policy compliance, and confidence before producing a final decision. This completes the end-to-end contract-first decision workflow in a production-style setup.
In conclusion, we demonstrate how to move from free-form LLM outputs to governed, reliable decision systems using PydanticAI. By enforcing hard contracts at the schema level, we can automatically align decisions with policy requirements, risk severity, and confidence realism without manual prompt tuning. This approach lets us build agents that fail safely, self-correct when constraints are violated, and produce auditable, structured outputs that downstream systems can trust. Ultimately, contract-first agent design allows us to deploy agentic AI as a trustworthy decision layer within production and enterprise environments.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.