
Image by Author
# Introduction
AI coding tools are getting impressively good at writing Python code that works. They can build complete applications and implement complex algorithms in minutes. However, the code AI generates is often a pain to maintain.
If you are using tools like Claude Code, GitHub Copilot, or Cursor's agentic mode, you have probably experienced this. The AI helps you ship working code fast, but the cost shows up later. You have likely refactored a bloated function just to understand how it works weeks after it was generated.
The problem is not that AI writes bad code (though it sometimes does); it is that AI optimizes for "working now" and completing the requirements in your prompt, when you need code that is readable and maintainable in the long run. This article shows you how to bridge this gap, with a focus on Python-specific techniques.
# Avoiding the Blank Canvas Trap
The biggest mistake developers make is asking AI to start from scratch. AI agents work best with constraints and guidelines.
Before you write your first prompt, set up the basics of the project yourself. This means choosing your project structure, installing your core libraries, and implementing a few working examples to set the tone. This may seem counterproductive, but it helps the AI write code that aligns better with what you need in your application.
Start by building a couple of features manually. If you are building an API, implement one complete endpoint yourself with all the patterns you want: dependency injection, proper error handling, database access, and validation. This becomes the reference implementation.
Say you write this first endpoint manually:
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session

router = APIRouter()

# Assume get_db and the User model are defined elsewhere
@router.get("/users/{user_id}")
async def get_user(user_id: int, db: Session = Depends(get_db)):
    user = db.query(User).filter(User.id == user_id).first()
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    return user
When AI sees this pattern, it understands how we handle dependencies, how we query the database, and how we handle missing data.
The same applies to your project structure. Create your directories, set up your imports, and configure your testing framework. AI should not be making these architectural decisions.
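For instance, a minimal `tests/conftest.py` sketch (assuming a FastAPI app importable from a hypothetical `src.main`) already decides how every test gets a client, so the AI does not improvise its own setup:

```python
# tests/conftest.py -- a minimal sketch; src.main and its app object are assumptions.
import pytest
from fastapi.testclient import TestClient

from src.main import app  # hypothetical location of the FastAPI app


@pytest.fixture
def client() -> TestClient:
    # Every AI-generated test reuses this fixture instead of building its own client.
    return TestClient(app)
```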
# Making Python's Type System Do the Heavy Lifting
Python's dynamic typing is flexible, but that flexibility becomes a liability when AI is writing your code. Make type hints essential guardrails instead of a nice-to-have in your application code.
Strict typing catches AI mistakes before they reach production. When you require type hints on every function signature and run mypy in strict mode, the AI cannot take shortcuts. It cannot return ambiguous types or accept parameters that might be strings or might be lists.
More importantly, strict types force better design. For example, an AI agent trying to write a function that accepts data: dict can make many assumptions about what is in that dictionary. However, an AI agent writing a function that accepts data: UserCreateRequest, where UserCreateRequest is a Pydantic model, has exactly one interpretation.
# This constrains AI to write correct code
from pydantic import BaseModel, EmailStr

class UserCreateRequest(BaseModel):
    name: str
    email: EmailStr
    age: int

class UserResponse(BaseModel):
    id: int
    name: str
    email: EmailStr

def process_user(data: UserCreateRequest) -> UserResponse:
    pass

# Rather than this
def process_user(data: dict) -> dict:
    pass
Use libraries that enforce contracts: SQLAlchemy 2.0 with type-checked models and FastAPI with response models are excellent choices. These are not just good practices; they are constraints that keep AI on track.
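As a rough illustration of what "type-checked models" means here, SQLAlchemy 2.0's `Mapped` annotations give mypy something concrete to verify; the `User` columns below are just an example:

```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"

    # Mapped[...] annotations let mypy check attribute access and query results.
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column()
    email: Mapped[str] = mapped_column(unique=True)
```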
Set mypy to strict mode and make passing type checks non-negotiable. When AI generates code that fails type checking, it will iterate until it passes. This automated feedback loop produces better code than any amount of prompt engineering.
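As a small sketch of that feedback loop, the kind of shortcut `mypy --strict` refuses looks like this (the function is made up purely for illustration):

```python
# Typical AI shortcut: no annotations, ambiguous return type.
# Under `mypy --strict` this fails with an error similar to
# "Function is missing a type annotation".
def parse_age(value):
    if value:
        return int(value)
    return None


# What the strict feedback loop forces instead: explicit input and output types.
def parse_age_typed(value: str | None) -> int | None:
    if value:
        return int(value)
    return None
```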
# Creating Documentation to Guide AI
Most projects have documentation that developers ignore. For AI agents, you need documentation they will actually use, such as a README.md file with guidelines. This means a single file with clear, specific rules.
Create a CLAUDE.md or AGENTS.md file at your project root. Do not make it too long. Focus on what is unique about your project rather than general Python best practices.
Your AI guidelines should specify:
- Project structure and where different types of code belong
- Which libraries to use for common tasks
- Specific patterns to follow (point to example files)
- Explicitly forbidden patterns
- Testing requirements
Here is an example AGENTS.md file:
# Project Guidelines
## Structure
/src/api - FastAPI routers
/src/services - business logic
/src/models - SQLAlchemy models
/src/schemas - Pydantic models
## Patterns
- All services inherit from BaseService (see src/services/base.py)
- All database access goes through the repository pattern (see src/repositories/)
- Use dependency injection for all external dependencies
## Standards
- Type hints on all functions
- Docstrings using Google style
- Functions under 50 lines
- Run `mypy --strict` and `ruff check` before committing
## Never
- No bare except clauses
- No type: ignore comments
- No mutable default arguments
- No global state
The key is being specific. Do not simply say "follow best practices." Point to the exact file that demonstrates the pattern. Do not just say "handle errors properly"; show the error handling pattern you want.
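For example, a rule like "raise domain errors from the service layer, never use a bare except" is far more actionable when it points at a small pattern file; the module below is a hypothetical sketch of what that reference could look like:

```python
# src/services/errors.py -- hypothetical pattern file the guidelines can point to.
class ServiceError(Exception):
    """Base class for errors raised by the service layer."""


class NotFoundError(ServiceError):
    """Raised when a requested record does not exist."""


# The pattern to reference: catch the narrow exception, re-raise a domain error,
# and never swallow the original cause.
def get_user_name(users: dict[int, str], user_id: int) -> str:
    try:
        return users[user_id]
    except KeyError as exc:
        raise NotFoundError(f"User {user_id} not found") from exc
```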
# Writing Prompts That Point to Examples
Generic prompts produce generic code. Specific prompts that reference your existing codebase produce more maintainable code.
Instead of asking AI to "add authentication," walk it through the implementation with references to your patterns. Here is an example of such a prompt that points to examples:
Implement JWT authentication in src/services/auth_service.py. Follow the same structure as UserService in src/services/user_service.py. Use bcrypt for password hashing (already in requirements.txt).
Add the authentication dependency in src/api/dependencies.py following the pattern of get_db.
Create Pydantic schemas in src/schemas/auth.py similar to user.py.
Add pytest tests in tests/test_auth_service.py using fixtures from conftest.py.
Notice how every instruction points to an existing file or pattern. You are not asking AI to invent an architecture; you are asking it to apply your existing patterns to a new feature.
When the AI generates code, review it against your patterns. Does it use the same dependency injection approach? Does it follow the same error handling? Does it organize imports the same way? If not, point out the discrepancy and ask it to align with the existing pattern.
# Planning Before Implementing
AI agents can move fast, which can sometimes make them less useful if speed comes at the expense of structure. Use plan mode or ask for an implementation plan before any code gets written.
A planning step forces the AI to think through dependencies and structure. It also gives you a chance to catch architectural problems, such as circular dependencies or redundant services, before they are implemented.
Ask for a plan that specifies:
- Which files will be created or modified
- What dependencies exist between components
- Which existing patterns will be followed
- What tests are needed
Review this plan like you would review a design document. Check that the AI understands your project structure. Verify that it is using the right libraries and make sure it is not reinventing something that already exists.
If the plan looks good, let the AI execute it. If not, correct the plan before any code gets written. It is easier to fix a bad plan than to fix bad code.
# Asking AI to Write Tests That Actually Test
AI is great and super fast at writing tests. However, AI is not effective at writing useful tests unless you are specific about what "useful" means.
Default AI test behavior is to test the happy path and nothing else. You get tests that verify the code works when everything goes right, which is exactly when you do not need tests.
Specify your testing requirements explicitly. For every feature, require:
- A happy path test
- Validation error tests to check what happens with invalid input
- Edge case tests for empty values, None, boundary conditions, and more
- Error handling tests for database failures, external service failures, and the like
Point AI to your existing test files as examples. If you already have good test patterns, AI will write useful tests, too. If you do not have good tests yet, write a few yourself first.
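Here is a rough sketch of that mix of tests, reusing the UserCreateRequest model from earlier; the import path is hypothetical, and email validation assumes `pydantic[email]` is installed:

```python
# tests/test_user_schema.py -- a sketch of the happy path, validation, and edge case mix.
import pytest
from pydantic import ValidationError

from src.schemas.user import UserCreateRequest  # hypothetical module path


def test_create_request_happy_path() -> None:
    user = UserCreateRequest(name="Ada", email="ada@example.com", age=36)
    assert user.age == 36


def test_invalid_email_rejected() -> None:
    # Validation error test: bad input must fail loudly, not slip through.
    with pytest.raises(ValidationError):
        UserCreateRequest(name="Ada", email="not-an-email", age=36)


def test_missing_age_rejected() -> None:
    # Edge case test: required fields cannot be silently omitted.
    with pytest.raises(ValidationError):
        UserCreateRequest.model_validate({"name": "Ada", "email": "ada@example.com"})
```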
# Validating Output Systematically
After AI generates code, do not just check whether it runs. Run it through a checklist.
Your validation checklist should include questions like the following:
- Does it pass mypy strict mode?
- Does it follow patterns from existing code?
- Are all functions under 50 lines?
- Do tests cover edge cases and errors?
- Are there type hints on all functions?
- Does it use the specified libraries correctly?
Automate what you can. Set up pre-commit hooks that run mypy, Ruff, and pytest. If AI-generated code fails these checks, it does not get committed.
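One way to wire this up, without assuming any particular hook manager, is a small check script that a Git pre-commit hook or CI job can call; the script name and paths are illustrative:

```python
# scripts/check.py -- a minimal sketch; assumes mypy, ruff, and pytest are installed.
import subprocess
import sys

CHECKS = [
    ["mypy", "--strict", "src"],
    ["ruff", "check", "src", "tests"],
    ["pytest", "-q"],
]


def main() -> int:
    for cmd in CHECKS:
        # Fail fast: if any gate fails, the AI-generated change does not get committed.
        if subprocess.run(cmd).returncode != 0:
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```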
For what you cannot automate, you will learn to spot common anti-patterns after reviewing enough AI code: functions that do too much, error handling that swallows exceptions, or validation logic mixed with business logic.
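The swallowed exception is a good example of what to flag in review; a made-up but typical case looks like this:

```python
import json
from typing import Any


# Anti-pattern: the broad except hides missing files, bad JSON, and permission
# errors behind an empty dict, so failures surface far from their cause.
def load_config(path: str) -> dict[str, Any]:
    try:
        with open(path) as f:
            data: dict[str, Any] = json.load(f)
            return data
    except Exception:
        return {}
```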
# Implementing a Practical Workflow
Let us now put together everything we have discussed so far.
You start a new project. You spend time setting up the structure, choosing and installing libraries, and writing a couple of example features. You create CLAUDE.md with your guidelines and write specific Pydantic models.
Now you ask AI to implement a new feature. You write a detailed prompt pointing to your examples. AI generates a plan. You review and approve it. AI writes the code. You run type checking and tests. Everything passes. You review the code against your patterns. It matches. You commit.
Total time from prompt to commit might be around 15 minutes for a feature that would have taken you an hour to write manually. More importantly, the code you get is easier to maintain because it follows the patterns you established.
The next feature goes faster because AI has more examples to learn from. The code becomes more consistent over time because every new feature reinforces the existing patterns.
# Wrapping Up
With AI coding tools proving so useful, your job as a developer or a data professional is changing. You are now spending less time writing code and more time on:
- Designing systems and choosing architectures
- Creating reference implementations of patterns
- Writing constraints and guidelines
- Reviewing AI output and maintaining the quality bar
The skill that matters most is not writing code faster. Rather, it is designing systems that constrain AI to write maintainable code. It is knowing which practices scale and which create technical debt. I hope you found this article helpful even if Python is not your programming language of choice. Let us know what else you think we can do to keep AI-generated Python code maintainable. Keep exploring!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she is working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.