


Build Better AI Agents with Google Antigravity Skills and Workflows
Image by Editor

 

Introduction

 
Chances are, you already have the feeling that the new, agent-first artificial intelligence era is here, with developers turning to new tools that, instead of just generating code reactively, genuinely understand the unique processes behind code generation.

Google Antigravity has a lot to say on this matter. This tool holds the key to building highly customizable agents. This article unveils part of its potential by demystifying three cornerstone concepts: rules, skills, and workflows.

In this article, you will learn how to link these key concepts together to build more robust agents and powerful automated pipelines. Specifically, we will follow a step-by-step process to set up a code quality assurance (QA) agent workflow based on specified rules and skills. Off we go!

 

Understanding the Three Core Concepts

 

Before getting our hands dirty, it is worth breaking down the following three elements of the Google Antigravity ecosystem:

  • Rule: These are the baseline constraints that dictate the agent's behavior, as well as how to adapt it to our stack and match our style. They are stored as markdown files.
  • Skill: Think of a skill as a reusable bundle of knowledge that instructs the agent on how to handle a concrete task. Skills live in a dedicated folder that contains a file named SKILL.md.
  • Workflow: These are the orchestrators that put it all together. Workflows are invoked using command-like instructions preceded by a forward slash, e.g. /deploy. Simply put, workflows guide the agent through a well-structured action plan, or trajectory, consisting of several steps. This is the key to automating repetitive tasks without loss of precision.

 

Taking Action

 

Let's move on to our practical example. We will see how to configure Antigravity to review Python code, apply correct formatting, and generate tests, all without the need for additional third-party tools.

Before taking these steps, make sure you have downloaded and installed Google Antigravity on your computer first.

Once installed, open the desktop application and open your Python project folder. If you are new to the tool, you will be asked to define a folder in your computer's file system to act as the project folder. Either way, the way to add a manually created folder into Antigravity is through the "File >> Add Folder to Workspace…" option in the upper menu toolbar.

Say you have a new, empty workspace folder. In the root of the project directory (left-hand side), create a new folder and give it the name .agents. Inside this folder, we will create two subfolders: one called rules and one named skills. You may guess that these two are where we will define the two pillars of our agent's behavior: rules and skills.

 

Project folder hierarchy
The project folder hierarchy | Image by Author
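If you prefer the terminal, the same hierarchy can be created with two commands. A minimal sketch, assuming a Unix-like shell and that your current directory is the workspace root:

```shell
# Create the Antigravity customization folders in the workspace root
mkdir -p .agents/rules .agents/skills

# List the resulting layout to verify it
find .agents -type d
```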

 

Let’s outline a rule first, containing our baseline constraints that may make sure the agent’s adherence to Python formatting requirements. We do not want verbose syntax to do that: in Antigravity, we outline it utilizing clear directions in pure language. Contained in the guidelines subfolder, you may create a file named python-style.md and paste the next content material:

# Python Model Rule
All the time use PEP 8 requirements. When offering or refactoring code, assume we're utilizing `black` for formatting. Preserve dependencies strictly to free, open-source libraries to make sure our challenge stays free-friendly.

 

If you want to nail it, go to the agent customizations panel on the right-hand side of the editor, open it, and find and select the rule we just defined:

 

Customizing activation of agent rules
Customizing the activation of agent rules | Image by Author

 

Customization options will appear above the file we just edited. Set the activation mode to "Glob" and specify this glob pattern: **/*.py, as shown below:

 

Setting glob activation mode
Setting the glob activation mode | Image by Author

 

With this, you just ensured that the agent we launch later always applies the rule we defined whenever we are specifically working on Python scripts.

Next, it is time to define (or "teach") the agent some skills. In our case, that will be the skill of performing robust tests on Python code, something extremely useful in today's demanding software development landscape. Inside the skills subfolder, we will create another folder with the name pytest-generator. Create a SKILL.md file inside it, with the following content:

 

Defining agent skills
Defining agent skills within the workspace | Image by Author
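If the screenshot is hard to read, a minimal SKILL.md along these lines works; the exact wording below is an illustrative assumption rather than a required format:

```markdown
# Skill: pytest-generator

Generates comprehensive `pytest` unit tests for Python code in this workspace.

## Instructions
1. Inspect the target Python file and identify every public function.
2. For each function, write tests covering normal inputs, edge cases, and expected exceptions.
3. Place the tests in a `test_<module>.py` file that imports from the original module.
4. Follow the project's Python Style Rule (PEP 8, `black` formatting).
```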

 

Now it is time to put it all together and launch our agent, but not without first having an example Python file containing "poor-quality" code in our project workspace to try it all on. If you don't have any, try creating a new .py file in the root directory, calling it something like flawed_division.py, and adding this code:

def divide_numbers( x,y ):
  return x/y

 

You may have noticed this Python code is deliberately messy and flawed. Let's see what our agent can do about it. Go to the customization panel on the right-hand side, and this time focus on the "Workflows" navigation pane. Click "+ Workspace" to create a new workflow we will call qa-check, with this content:

# Title: Python QA Check
# Description: Automates code review and test generation for Python files.

Step 1: Review the currently open Python file for bugs and style issues, adhering to our Python Style Rule.
Step 2: Refactor any inefficient code.
Step 3: Call the `pytest-generator` skill to write comprehensive unit tests for the refactored code.
Step 4: Output the final test code and suggest running `pytest` in the terminal.

 

All these pieces, when glued together by the agent, will transform the development loop as a whole. With the messy Python file still open in the workspace, we will put our agent to work by clicking the agent icon in the right-hand side panel, typing the /qa-check command, and hitting enter to run the agent:

 

Putting the agent to work
Invoking the QA workflow via the agent console | Image by Author

 

After execution, the agent will have revised the code and automatically suggested a new version in the Python file, as shown below:

 

Code improvements generated by the agent
The refactored code suggested by the agent | Image by Author
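Since agent output varies between runs, here is a sketch of what the refactored version typically looks like; the type hints, docstring, and exact ValueError message are assumptions consistent with the generated tests shown further down:

```python
def divide_numbers(x: float, y: float) -> float:
    """Divide x by y, guarding against a zero denominator."""
    if y == 0:
        raise ValueError("Cannot divide by zero")
    return x / y
```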

 

But that is not all: the agent also delivers the comprehensive quality check we were looking for, generating a number of code excerpts you can use to run different types of tests using pytest. For the sake of illustration, this is what some of those tests might look like:

import pytest
from flawed_division import divide_numbers

def test_divide_numbers_normal():
    assert divide_numbers(10, 2) == 5.0
    assert divide_numbers(9, 3) == 3.0

def test_divide_numbers_negative():
    assert divide_numbers(-10, 2) == -5.0
    assert divide_numbers(10, -2) == -5.0
    assert divide_numbers(-10, -2) == 5.0

def test_divide_numbers_float():
    assert divide_numbers(5.0, 2.0) == 2.5

def test_divide_numbers_zero_numerator():
    assert divide_numbers(0, 5) == 0.0

def test_divide_numbers_zero_denominator():
    with pytest.raises(ValueError, match="Cannot divide by zero"):
        divide_numbers(10, 0)

 

This sequential process carried out by the agent consisted of first analyzing the code under the constraints we defined through rules, then autonomously calling the newly defined skill to produce a comprehensive testing strategy tailored to our codebase.

 

Wrapping Up

 

Looking back, in this article we have shown how to combine three key elements of Google Antigravity (rules, skills, and workflows) to turn generic agents into specialized, robust, and efficient workmates. We illustrated how to make an agent that specializes in correctly formatting messy code and defining QA tests.
 
 

Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.
