There's a particular kind of tedium that every AI engineer knows intimately: the prompt-tuning loop. You write a system prompt, run your agent against a benchmark, read the failure traces, tweak the prompt, add a tool, rerun. Repeat this a few dozen times and you might move the needle. It's grunt work dressed up in Python files. Now, a new open-source library called AutoAgent, built by Kevin Gu at thirdlayer.inc, proposes an unsettling alternative: don't do that work yourself. Let an AI do it.
AutoAgent is an open-source library for autonomously improving an agent on any domain. In a 24-hour run, it hit #1 on SpreadsheetBench with a score of 96.5%, and achieved the #1 GPT-5 score on TerminalBench with 55.1%.

What Is AutoAgent, Really?
AutoAgent is described as being 'like autoresearch but for agent engineering.' The idea: give an AI agent a task, and let it build and iterate on an agent harness autonomously overnight. It modifies the system prompt, tools, agent configuration, and orchestration, runs the benchmark, checks the score, keeps or discards the change, and repeats.
To understand the analogy: Andrej Karpathy's autoresearch does the same thing for ML training: it loops through propose-train-evaluate cycles, keeping only changes that improve validation loss. AutoAgent ports that same ratchet loop from ML training into agent engineering. Instead of optimizing a model's weights or training hyperparameters, it optimizes the harness: the system prompt, tool definitions, routing logic, and orchestration strategy that determine how an agent behaves on a task.
A harness, in this context, is the scaffolding around an LLM: what system prompt it receives, what tools it can call, how it routes between sub-agents, and how tasks are formatted as inputs. Most agent engineers hand-craft this scaffolding. AutoAgent automates the iteration on that scaffolding itself.
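To make the harness concept concrete, here is a minimal sketch of the kind of scaffolding being described. The `Harness` class, its fields, and its methods are illustrative assumptions, not AutoAgent's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    """Illustrative harness: everything around the model that a
    meta-agent could rewrite. Not AutoAgent's actual API."""
    system_prompt: str                                              # instructions the LLM receives
    tools: dict[str, Callable] = field(default_factory=dict)       # callable tool definitions
    subagents: dict[str, "Harness"] = field(default_factory=dict)  # registry for routing

    def format_task(self, instruction: str) -> str:
        # How a benchmark task is presented as model input.
        return f"{self.system_prompt}\n\nTask:\n{instruction}"

    def route(self, task_kind: str) -> "Harness":
        # Orchestration: hand specialized task kinds to sub-agents.
        return self.subagents.get(task_kind, self)
```

Every field and method here is an edit surface: tweaking the prompt, adding a tool, or changing the routing are exactly the moves the meta-agent makes.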
The Architecture: Two Agents, One File, One Directive
The GitHub repo has a deliberately simple structure. agent.py is the entire harness under test in a single file: it contains the config, tool definitions, agent registry, routing/orchestration, and the Harbor adapter boundary. The adapter section is explicitly marked as fixed; the rest is the primary edit surface for the meta-agent. program.md contains instructions for the meta-agent plus the directive (what kind of agent to build), and this is the only file the human edits.
Think of it as a separation of concerns between human and machine. The human sets the direction in program.md. The meta-agent (a separate, higher-level AI) then reads that directive, inspects agent.py, runs the benchmark, diagnoses what failed, rewrites the relevant parts of agent.py, and repeats. The human never touches agent.py directly.
A critical piece of infrastructure that keeps the loop coherent across iterations is results.tsv, an experiment log automatically created and maintained by the meta-agent. It tracks every experiment run, giving the meta-agent a history to learn from and calibrate what to try next. The full project structure also includes Dockerfile.base, an optional .agent/ directory for reusable agent workspace artifacts such as prompts and skills, a tasks/ folder for benchmark payloads (added per benchmark branch), and a jobs/ directory for Harbor job outputs.
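The repo's exact schema for results.tsv isn't documented here, so the snippet below is a plausible sketch of how such a log could be appended to. The column names and the `log_experiment` helper are assumptions:

```python
import csv
from pathlib import Path

LOG = Path("results.tsv")  # experiment log the meta-agent reads back each iteration

def log_experiment(run_id: int, change: str, score: float, kept: bool) -> None:
    """Append one experiment row. Columns are assumed, not the
    library's documented schema."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        if is_new:
            writer.writerow(["run_id", "change", "score", "kept"])
        writer.writerow([run_id, change, f"{score:.3f}", kept])

# Usage: log_experiment(7, "shortened system prompt; added a retry tool", 0.642, True)
```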
The metric is the total score produced by the benchmark's task test suites. The meta-agent hill-climbs on this score. Every experiment produces a numeric score: keep the change if it is better, discard it if not; the same loop as autoresearch.
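In sketch form, that hill-climbing logic amounts to the loop below. The callables are placeholders for steps the meta-agent performs (editing agent.py, running the Harbor benchmark); this illustrates the ratchet, not AutoAgent's actual code:

```python
from typing import Callable

def ratchet_loop(
    run_benchmark: Callable[[], float],    # run the task suites, return the total score
    propose_and_apply: Callable[[], str],  # meta-agent edits agent.py, returns a snapshot for revert
    revert: Callable[[str], None],         # restore agent.py from the snapshot
    iterations: int = 20,
) -> float:
    """Keep an edit only if it beats the best score so far; otherwise
    roll it back. A sketch of the loop's logic, not AutoAgent's code."""
    best = run_benchmark()  # baseline score of the current harness
    for _ in range(iterations):
        snapshot = propose_and_apply()
        score = run_benchmark()
        if score > best:
            best = score       # keep the change
        else:
            revert(snapshot)   # discard the change
    return best
```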
The Task Format and Harbor Integration
Benchmarks are expressed as tasks in Harbor format. Each task lives under tasks/my-task/ and includes a task.toml for config such as timeouts and metadata, an instruction.md that is the prompt sent to the agent, a tests/ directory with a test.sh entry point that writes a score to /logs/reward.txt, and a test.py for verification using either deterministic checks or LLM-as-judge. An environment/Dockerfile defines the task container, and a files/ directory holds reference files mounted into the container. Tests write a score between 0.0 and 1.0 to the verifier logs. The meta-agent hill-climbs on this.
The LLM-as-judge pattern here is worth flagging: instead of only checking answers deterministically (like unit tests), the test suite can use another LLM to judge whether the agent's output is 'correct enough.' This is common in agentic benchmarks where correct answers aren't reducible to string matching.
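For the deterministic case, a verifier can be as small as a script that computes a score and writes it to /logs/reward.txt. The sketch below is hypothetical: the output path /app/output.txt and the expected answer are invented for illustration.

```python
# Hypothetical tests/test.py for a Harbor-format task. Assumes the agent
# was instructed to write its answer to /app/output.txt; the path and the
# expected answer are invented for illustration.
from pathlib import Path

EXPECTED = "42"

def score() -> float:
    try:
        answer = Path("/app/output.txt").read_text().strip()
    except FileNotFoundError:
        return 0.0  # no output at all scores zero
    return 1.0 if answer == EXPECTED else 0.0

if __name__ == "__main__":
    # Harbor reads the reward from /logs/reward.txt.
    Path("/logs/reward.txt").write_text(f"{score():.1f}\n")
```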
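A minimal sketch of that pattern, assuming an OpenAI-compatible client as the judge; the model name, rubric prompt, and score parsing are illustrative, not the benchmark's actual judge:

```python
# Illustrative LLM-as-judge scorer. Assumes an OpenAI-compatible client;
# the model name, rubric, and parsing are assumptions.
from openai import OpenAI

client = OpenAI()

def judge(instruction: str, agent_output: str) -> float:
    """Ask a judge model to grade the output on a 0.0-1.0 scale."""
    prompt = (
        "You are grading an AI agent's answer to a task.\n"
        f"Task: {instruction}\n"
        f"Answer: {agent_output}\n"
        "Reply with only a number between 0.0 and 1.0."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable judge model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    try:
        value = float(resp.choices[0].message.content.strip())
    except (TypeError, ValueError):
        return 0.0  # an unparsable judgment counts as failure
    return min(max(value, 0.0), 1.0)
```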
Key Takeaways
- Autonomous harness engineering works: AutoAgent proves that a meta-agent can replace the human prompt-tuning loop entirely, iterating on agent.py overnight without any human touching the harness files directly.
- Benchmark results validate the approach: in a 24-hour run, AutoAgent hit #1 on SpreadsheetBench (96.5%) and the top GPT-5 score on TerminalBench (55.1%), beating every other entry that was hand-engineered by humans.
- 'Model empathy' may be a real phenomenon: a Claude meta-agent optimizing a Claude task agent appeared to diagnose failures more accurately than when optimizing a GPT-based agent, suggesting that same-family model pairing may matter when designing your AutoAgent loop.
- The human's job shifts from engineer to director: you don't write or edit agent.py. You write program.md, a plain Markdown directive that steers the meta-agent. The distinction mirrors the broader shift in agentic engineering from writing code to setting goals.
- It's plug-and-play with any benchmark: because tasks follow Harbor's open format and agents run in Docker containers, AutoAgent is domain-agnostic. Any scorable task (spreadsheets, terminal commands, or your own custom domain) can become a target for autonomous self-optimization.
Check out the Repo and the Tweet for more details.