How I Set Up a SECURITY.md When Building With AI Agents
I asked an AI agent to add a “save to watchlist” feature to a Next.js + Supabase app. Within seconds it generated a working implementation. The user could click, the row appeared in the database, the feature was done.
Then I read the code. The reason it worked was that the agent had set the table's INSERT policy to WITH CHECK (true), helpfully, in the same generation. Anyone could write any row, with any user_id, into that table. The agent had hit the wall (RLS blocking the write) and “fixed” it by removing the wall.
That's the moment I started writing SECURITY.md files.
AI coding agents are useful. They can scaffold features, write migrations, wire up API routes, and refactor entire modules in seconds. But they move fast enough to introduce security holes before you notice: hardcoded keys, permissive database policies, service role clients exposed to the browser. I needed guardrails that worked before code review, not after. The solution was a SECURITY.md file at the root of the repo that acts as a contract between me and any agent writing code in the project.
Here's the template I use, the thinking behind each section, and a before/after that shows what changes when an agent reads it.
Why a SECURITY.md?
AI agents read your repo. They index your files, scan your project structure, and use that context to generate code. A SECURITY.md gives agents explicit rules about what they must and must not do, in plain language they can parse and follow.
Think of it as a system prompt for your codebase. It doesn't replace proper security practices, but it catches the most common mistakes at the point of generation, before they ever reach a pull request.
The agent rules
No secrets in code
This is rule number one. Never commit service role keys, JWT secrets, database passwords, or API keys. Client-side environment variables must be limited to NEXT_PUBLIC_* and must be non-sensitive. AI agents love to helpfully inline a key “to get things working.” This rule stops that.
Assume public keys are public
Anyone can extract client-side keys from a web app. All data protection must be enforced via Row Level Security (RLS), safe RPCs, and server-side checks. If an agent writes a client-side query that assumes the anon key provides access control, the security model has already failed.
Service role usage is restricted
Service role keys may only be used in Next.js Route Handlers (src/app/api/**), Edge Functions, or trusted backend workers. Never in the browser. This is the single most common mistake AI agents make when working with Supabase. They reach for the service role client because it “just works” without RLS getting in the way.
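One way to make this rule mechanically enforceable (the module path is my convention, not a Next.js requirement) is to put the admin client behind the `server-only` package, so any import from a Client Component fails at build time:

```typescript
// src/lib/supabase-admin.ts — a sketch, assuming @supabase/supabase-js v2
import 'server-only'; // build error if this module is ever imported client-side
import { createClient } from '@supabase/supabase-js';

// No NEXT_PUBLIC_ prefix on the key, so Next.js never inlines it into the browser bundle.
export const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
);
```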
RLS is mandatory
Any public or semi-public table must have RLS enabled. No permissive write policies like WITH CHECK (true) for INSERT, UPDATE, or DELETE. AI agents will sometimes generate these to “unblock” a feature. The security file makes it clear that's not acceptable.
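To make the contrast concrete (table and column names are illustrative, taken from the watchlist scenario), here is the policy the rule forbids next to one it allows:

```sql
-- What an agent generates to "unblock" a feature: anyone can insert any row.
create policy "watchlist_insert" on public.watchlist
  for insert with check (true);  -- forbidden by SECURITY.md

-- What the rule requires: RLS enabled, writes scoped to the authenticated user.
alter table public.watchlist enable row level security;

create policy "watchlist_insert_own" on public.watchlist
  for insert to authenticated
  with check (auth.uid() = user_id);
```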
Server endpoints for public writes
Any endpoint that records analytics, clicks, or user actions must validate input (e.g. with Zod), apply rate limits, and use service-role DB writes on the server. Client components must not call write RPCs directly. This pattern prevents the entire class of “unauthenticated write” vulnerabilities.
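The rate-limit piece doesn't need infrastructure to start with. A minimal fixed-window limiter (an illustrative sketch; a shared store like Redis replaces the in-memory `Map` once you have more than one server) looks like this:

```typescript
// Fixed-window in-memory rate limiter: allow `limit` hits per key per window.
const hits = new Map<string, { count: number; windowStart: number }>();

export function rateLimit(key: string, limit = 10, windowMs = 60_000): boolean {
  const now = Date.now();
  const entry = hits.get(key);
  if (!entry || now - entry.windowStart >= windowMs) {
    // First hit in a fresh window: reset the counter and allow.
    hits.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit; // false once the caller exceeds the limit
}
```

Keying on IP for anonymous endpoints and on user id for authenticated ones covers most cases.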
Function hardening
All SECURITY DEFINER functions must have a fixed search_path and restricted execution grants. Only grant EXECUTE to roles that actually need it, often service_role only.
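In migration terms (function name and body are hypothetical), the hardening looks like this:

```sql
-- SECURITY DEFINER with a fixed search_path: no schema hijacking.
create or replace function public.record_click(p_item_id uuid)
returns void
language sql
security definer
set search_path = public, pg_temp
as $$
  insert into public.clicks (item_id) values (p_item_id);
$$;

-- Restrict execution: drop the default public grant, then grant narrowly.
revoke execute on function public.record_click(uuid) from public, anon, authenticated;
grant execute on function public.record_click(uuid) to service_role;
```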
Pre-merge checklist
Rules only work if you verify them. The security file includes a merge checklist that I run before every PR lands:
- Repository scan: search for leaked service role keys, JWTs, and tokens
- Database advisors: run security and performance advisors after any DB, RLS, or function change
- Verify public write paths: confirm no public tables allow anonymous inserts without validation
- Confirm all public write actions route through a server endpoint with rate limiting
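The repository scan can be a one-liner in CI (a rough sketch; a dedicated scanner like gitleaks catches far more patterns). Supabase service role keys are JWTs, so flagging JWT-shaped strings outside env files catches the most dangerous leak:

```shell
# Fail the merge if a JWT-shaped string (three base64url segments, "eyJ..." header)
# appears anywhere outside env files. `!` inverts grep: exit 0 only when clean.
! grep -RInE \
    --exclude-dir=node_modules --exclude-dir=.git --exclude='*.env*' \
    'eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}' .
```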
Enforced architecture
Server routes for privileged writes
Direct writes to protected tables from client code are not allowed. All privileged writes must go through Next.js Route Handlers that validate input, rate limit, authenticate via JWT, and use the service role for DB writes. This is the backbone of the architecture. It means an AI agent can't accidentally expose a write path by generating a client-side Supabase call.
Anonymous activity stays local
Anonymous activity must remain local until login. Cloud state for user data (watchlists, recently viewed items, watch progress) is authenticated-only. This prevents a whole category of bugs where an agent creates a “save to cloud” feature that accidentally works without auth.
Before and after
Here's what an agent generates for a watchlist write endpoint without a SECURITY.md in the repo:
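(I'm reconstructing the shape of that generation rather than quoting it; table and field names follow the watchlist scenario.)

```typescript
// src/app/api/watchlist/route.ts — the kind of handler an agent produces unprompted
import { NextResponse } from 'next/server';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // service role: bypasses RLS entirely
);

export async function POST(request: Request) {
  const { user_id, item_id } = await request.json(); // identity taken from the body
  const { error } = await supabase
    .from('watchlist')
    .insert({ user_id, item_id }); // no validation, no rate limit, no auth check
  if (error) return NextResponse.json({ error: error.message }, { status: 500 });
  return NextResponse.json({ ok: true });
}
```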
Looks fine. It isn't. The user_id comes from the request body, so any caller can write to any user's watchlist. There's no validation, no rate limit, no auth check. The service role is used because the agent reached for what works without RLS in the way.
And here's what the same agent generates with a SECURITY.md in the repo:
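(Again a representative sketch, not a verbatim quote; `rateLimit` stands in for whatever limiter the repo provides.)

```typescript
// src/app/api/watchlist/route.ts — same feature under SECURITY.md rules
import { NextResponse } from 'next/server';
import { createClient } from '@supabase/supabase-js';
import { z } from 'zod';
import { rateLimit } from '@/lib/rate-limit'; // hypothetical helper

const BodySchema = z.object({ item_id: z.string().uuid() }); // validate input

const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // service role stays server-side
);

export async function POST(request: Request) {
  // Rate limit by caller IP before doing any work.
  const ip = request.headers.get('x-forwarded-for') ?? 'unknown';
  if (!rateLimit(`watchlist:${ip}`)) {
    return NextResponse.json({ error: 'Too many requests' }, { status: 429 });
  }

  // Identity comes from the verified JWT, never from the request body.
  const token = request.headers.get('authorization')?.replace('Bearer ', '');
  if (!token) return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  const { data: { user } } = await supabaseAdmin.auth.getUser(token);
  if (!user) return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });

  const parsed = BodySchema.safeParse(await request.json());
  if (!parsed.success) {
    return NextResponse.json({ error: 'Invalid input' }, { status: 400 });
  }

  const { error } = await supabaseAdmin
    .from('watchlist')
    .insert({ user_id: user.id, item_id: parsed.data.item_id });
  if (error) return NextResponse.json({ error: 'Write failed' }, { status: 500 });
  return NextResponse.json({ ok: true });
}
```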
Same feature. Different code. The agent reads the rules in the file (validate input, rate limit, identity from auth not from body, service role server-side only) and applies them automatically. It's the same model. The difference is the context.
The template
Here's the full template I drop into every production repo. Adapt it to your stack. The specific tools (Supabase, Next.js, Zod) are less important than the patterns (server-only writes, mandatory RLS, no secrets in code).
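My full file runs longer than this; the condensed sketch below shows the shape, with every rule drawn from the sections above:

```markdown
# SECURITY.md

## Agent rules
1. **No secrets in code.** Never commit service role keys, JWT secrets, database
   passwords, or API keys. Client-side env vars are limited to `NEXT_PUBLIC_*`
   and must be non-sensitive.
2. **Assume public keys are public.** All data protection is enforced via RLS,
   safe RPCs, and server-side checks.
3. **Service role is server-only.** Allowed in Route Handlers (`src/app/api/**`),
   Edge Functions, and trusted backend workers. Never in the browser.
4. **RLS is mandatory** on any public or semi-public table. No `WITH CHECK (true)`
   policies for INSERT, UPDATE, or DELETE.
5. **Public writes go through server endpoints** with input validation (Zod),
   rate limiting, and service-role DB writes. No direct write RPCs from clients.
6. **Harden functions.** `SECURITY DEFINER` functions get a fixed `search_path`
   and restricted `EXECUTE` grants.

## Pre-merge checklist
- [ ] Repo scanned for leaked service role keys, JWTs, and tokens
- [ ] Security/performance advisors run after any DB, RLS, or function change
- [ ] No public table allows anonymous inserts without validation
- [ ] Every public write action routes through a rate-limited server endpoint

## Architecture
- Privileged writes: client → Route Handler (validate, rate limit, authenticate
  via JWT, service-role write). Never client → protected table.
- Anonymous activity stays local until login; cloud state is authenticated-only.
```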
Why this works
AI agents are context-driven. They read your files, follow your patterns, and generate code that matches the style of your repo. A security file doesn't guarantee perfect output, but it dramatically reduces the chance of an agent generating an insecure pattern in the first place.
The key insight is that this isn't just documentation for humans. It's a directive for machines. When an agent sees “service role keys may only be used in Route Handlers,” it follows that rule. When it sees “RLS is mandatory,” it generates policies instead of skipping them.
The file also serves as a shared contract between you and anyone else on the team. It's the source of truth for “how we handle security here,” whether the code is written by a person or a model.
If you're shipping production apps with AI agents, a security file isn't optional. It's the cheapest, highest-leverage safety net you can add.
What I do now
Every new repo I start with an AI agent gets a SECURITY.md before the first feature is generated. The file isn't finished. I add to it whenever I catch the agent generating something I don't want to see again. Each new rule prevents the next class of mistake.
I'm more curious now about how far this pattern goes. A SECURITY.md works because it sits in the context window. The same trick works for accessibility, performance, naming conventions, anything you want the agent to enforce. The file is the prompt you don't have to type.