How I Set Up a SECURITY.md When Building With AI Agents
AI coding agents are incredible. They can scaffold features, write migrations, wire up API routes, and refactor entire modules in seconds. But they also move fast enough to introduce security holes before you notice – hardcoded keys, overly permissive database policies, service role clients exposed to the browser.
When I started building production apps with AI agents (Cursor, Claude Code, Copilot), I realised I needed guardrails that worked before code review – not after. The solution was a SECURITY.md file at the root of the repo that acts as a contract between me and any agent writing code in the project.
Here's the template I use, and the thinking behind each section.
Why a SECURITY.md?
AI agents read your repo. They index your files, scan your project structure, and use that context to generate code. A SECURITY.md gives agents explicit rules about what they must and must not do – in plain language they can parse and follow.
Think of it as a system prompt for your codebase. It doesn't replace proper security practices, but it catches the most common mistakes at the point of generation – before they ever reach a pull request.
The agent rules
No secrets in code
This is rule number one. Never commit service role keys, JWT secrets, database passwords, or API keys. Client-side environment variables must be limited to NEXT_PUBLIC_* and must be non-sensitive. AI agents love to helpfully inline a key “to get things working” – this rule stops that.
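A lightweight way to back this rule up in code is a guard for reading server-only secrets. This is a hedged sketch (the helper name is mine, not a library API): it refuses to run in a browser context and fails loudly when a variable is missing, instead of letting an agent inline the key as a string.

```typescript
// Sketch of a server-only env guard. `requireServerEnv` is a hypothetical
// helper, not part of Next.js or Supabase.
function requireServerEnv(name: string): string {
  // `window` exists only in the browser; reading a server secret there means
  // the module was bundled into client code by mistake.
  if (typeof (globalThis as { window?: unknown }).window !== "undefined") {
    throw new Error(`${name} accessed from client-side code`);
  }
  const value = process.env[name];
  if (!value) throw new Error(`Missing required server env var: ${name}`);
  return value;
}

// Server code only — never a NEXT_PUBLIC_* variable:
// const serviceKey = requireServerEnv("SUPABASE_SERVICE_ROLE_KEY");
```

Failing at startup rather than silently falling back to a hardcoded default is the point: an agent that hits this error has to route the secret through the environment.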
Assume public keys are public
Anyone can extract client-side keys from a web app. All data protection must be enforced via Row Level Security (RLS), safe RPCs, and server-side checks. If an agent writes a client-side query that assumes the anon key provides access control, the security model has already failed.
Service role usage is restricted
Service role keys may only be used in Next.js Route Handlers (src/app/api/**), Edge Functions, or trusted backend workers. Never in the browser. This is the single most common mistake AI agents make when working with Supabase – they reach for the service role client because it “just works” without RLS getting in the way.
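One pattern that makes this rule hard to violate is a single server-only factory for the service role client. The sketch below stubs `createClient` to stay self-contained; in a real app you would import it from `@supabase/supabase-js` and add `import "server-only"` so the Next.js build fails if client code ever imports the module.

```typescript
// Stub standing in for @supabase/supabase-js's createClient, so the sketch
// runs on its own. The guard logic is the part that matters.
type SupabaseClient = { url: string; key: string };
const createClient = (url: string, key: string): SupabaseClient => ({ url, key });

function createServiceRoleClient(): SupabaseClient {
  // The service role bypasses RLS — it must never reach the browser.
  if (typeof (globalThis as { window?: unknown }).window !== "undefined") {
    throw new Error("Service role client requested in a browser context");
  }
  const url = process.env.SUPABASE_URL;
  const key = process.env.SUPABASE_SERVICE_ROLE_KEY;
  if (!url || !key) throw new Error("Missing Supabase server credentials");
  return createClient(url, key);
}
```

Route Handlers and backend workers call `createServiceRoleClient()`; everything else gets the anon client and lives under RLS.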
RLS is mandatory
Any public or semi-public table must have RLS enabled. No permissive write policies like WITH CHECK (true) for INSERT, UPDATE, or DELETE. AI agents will sometimes generate these to “unblock” a feature – the security file makes it clear that's not acceptable.
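What an acceptable policy looks like, sketched against a hypothetical user-owned `watchlists` table (table and policy names are illustrative):

```sql
-- Enable RLS before the table is reachable from the client.
alter table public.watchlists enable row level security;

-- Owner-scoped access: the check is on the row, never WITH CHECK (true).
create policy "watchlists_select_own" on public.watchlists
  for select using (auth.uid() = user_id);

create policy "watchlists_insert_own" on public.watchlists
  for insert with check (auth.uid() = user_id);
```

If an agent can't express who owns a row, that's a signal the write belongs behind a server endpoint instead of a client-side policy.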
Server endpoints for public writes
Any endpoint that records analytics, clicks, or user actions must validate input (e.g. with Zod), apply rate limits, and use service-role DB writes on the server. Client components must not call write RPCs directly. This pattern prevents the entire class of “unauthenticated write” vulnerabilities.
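The validate-then-rate-limit half of that pipeline can be sketched in plain TypeScript. Validation is hand-rolled here to stay dependency-free; in the real handler you'd use a Zod schema. The fixed-window limiter and its parameters are illustrative.

```typescript
// Shape of an accepted analytics event.
type ClickEvent = { itemId: string; action: "click" | "view" };

// Reject anything that isn't exactly the expected shape (a Zod schema would
// replace this in a real endpoint).
function parseClickEvent(body: unknown): ClickEvent | null {
  if (typeof body !== "object" || body === null) return null;
  const b = body as Record<string, unknown>;
  if (typeof b.itemId !== "string" || b.itemId.length === 0) return null;
  if (b.action !== "click" && b.action !== "view") return null;
  return { itemId: b.itemId, action: b.action };
}

// Fixed-window rate limiter keyed by caller IP: at most `limit` writes per
// window. In production this state would live in Redis or similar, not memory.
const hits = new Map<string, { count: number; windowStart: number }>();
function allowRequest(ip: string, limit = 10, windowMs = 60_000, now = Date.now()): boolean {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= windowMs) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit;
}
```

Only requests that pass both gates reach the service-role DB write, so a malformed or hammering client never touches the database.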
Function hardening
All SECURITY DEFINER functions must have a fixed search_path and restricted execution grants. Only grant EXECUTE to roles that actually need it – often service_role only.
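In Postgres terms, that hardening looks like this (function and table names are hypothetical):

```sql
-- Definer function with a pinned search_path, so a malicious schema can't
-- shadow the objects it references.
create or replace function public.record_click(p_item_id uuid)
returns void
language sql
security definer
set search_path = public
as $$
  insert into public.clicks (item_id) values (p_item_id);
$$;

-- Strip the default PUBLIC grant, then grant narrowly.
revoke execute on function public.record_click(uuid) from public, anon, authenticated;
grant execute on function public.record_click(uuid) to service_role;
```

The revoke step matters because Postgres grants EXECUTE to PUBLIC on new functions by default; without it the restriction is an illusion.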
Pre-merge checklist
Rules only work if you verify them. The security file includes a merge checklist that I run before every PR lands:
- Repository scan – search for leaked service role keys, JWTs, and tokens
- Database advisors – run security and performance advisors after any DB, RLS, or function change
- Verify public write paths – confirm no public tables allow anonymous inserts without validation
- Confirm all public write actions route through a server endpoint with rate limiting
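The first checklist item can be partially automated. This is a minimal sketch of a pattern scan, not a substitute for a real secret scanner like gitleaks or trufflehog; the patterns are illustrative and deliberately loose.

```typescript
// Patterns that should never appear in committed client code. A JWT-shaped
// string (three base64url segments) covers Supabase service role keys too.
const SECRET_PATTERNS: RegExp[] = [
  /eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}/, // JWT-shaped
  /service_role/i, // the literal string is a red flag in client bundles
];

// Returns the patterns a given source string trips, for reporting.
function findSecretLike(source: string): string[] {
  return SECRET_PATTERNS.filter((p) => p.test(source)).map((p) => String(p));
}
```

Wire it into CI over `git diff` output and a leaked key fails the PR before a human ever reviews it.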
Enforced architecture
Server routes for privileged writes
Direct writes to protected tables from client code are not allowed. All privileged writes must go through Next.js Route Handlers that validate input, rate limit, authenticate via JWT, and use the service role for DB writes. This is the backbone of the architecture – it means an AI agent can't accidentally expose a write path by generating a client-side Supabase call.
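The skeleton of such a handler, written against the Web-standard Request/Response types that Next.js Route Handlers are built on. JWT verification and the service-role write are stubbed placeholders; the route shape and names are illustrative, not the exact production code.

```typescript
// Placeholder: a real handler would verify the Supabase JWT signature here.
async function verifyJwt(token: string | null): Promise<{ userId: string } | null> {
  return token === "valid-token" ? { userId: "user-1" } : null;
}

// Placeholder for the server-only, service-role DB write.
async function serviceRoleInsert(userId: string, itemId: string): Promise<void> {}

// Shape of src/app/api/**/route.ts: authenticate, validate, then write.
async function POST(req: Request): Promise<Response> {
  const user = await verifyJwt(req.headers.get("authorization"));
  if (!user) return new Response("Unauthorized", { status: 401 });

  const body = await req.json().catch(() => null);
  if (!body || typeof body.itemId !== "string") {
    return new Response("Invalid payload", { status: 400 });
  }

  await serviceRoleInsert(user.userId, body.itemId);
  return Response.json({ ok: true });
}
```

Because the only path to the table runs through this function, an agent generating a client-side Supabase insert simply gets an RLS denial instead of a silent data leak.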
Anonymous activity stays local
Anonymous activity must remain local until login. Cloud state for user data – watchlists, recently viewed items, watch progress – is authenticated-only. This prevents a whole category of bugs where an agent creates a “save to cloud” feature that accidentally works without auth.
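A sketch of that local-first shape, assuming a Storage-like backend (localStorage in the browser) and a placeholder sync method; the class and its API are illustrative, not a published library.

```typescript
// Minimal Storage-like interface so the sketch isn't tied to the browser.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

class LocalFirstWatchlist {
  constructor(private store: KVStore, private key = "watchlist") {}

  // Anonymous writes land only in local storage — nothing leaves the device.
  add(itemId: string): void {
    const items = this.items();
    if (!items.includes(itemId)) items.push(itemId);
    this.store.setItem(this.key, JSON.stringify(items));
  }

  items(): string[] {
    return JSON.parse(this.store.getItem(this.key) ?? "[]");
  }

  // Called after login: without a session, sync is a hard no-op.
  async syncToServer(session: { userId: string } | null): Promise<boolean> {
    if (!session) return false;
    // In the real app: await fetch("/api/watchlist", ...) with the user's JWT.
    return true;
  }
}
```

The session check inside `syncToServer` is the guardrail: an agent can't wire up a "save to cloud" feature that works without auth, because the unauthenticated path doesn't exist.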
The template
Here's the full template I drop into every production repo. Adapt it to your stack – the specific tools (Supabase, Next.js, Zod) are less important than the patterns (server-only writes, mandatory RLS, no secrets in code).
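A condensed version, assembled from the rules above (section names and exact wording are illustrative):

```markdown
# SECURITY.md — rules for all contributors, human or AI

## Agent rules
1. No secrets in code. Client env vars are NEXT_PUBLIC_* and non-sensitive only.
2. Assume public keys are public. Enforce all data protection via RLS, safe RPCs, and server-side checks.
3. Service role keys only in Route Handlers (src/app/api/**), Edge Functions, or trusted backend workers. Never in the browser.
4. RLS is mandatory on public and semi-public tables. No WITH CHECK (true) write policies.
5. Public writes go through a server endpoint: validate input, rate limit, service-role write on the server only.
6. SECURITY DEFINER functions: fixed search_path, restricted EXECUTE grants.

## Pre-merge checklist
- [ ] Scan the repo for leaked service role keys, JWTs, and tokens
- [ ] Run DB security and performance advisors after any DB, RLS, or function change
- [ ] Confirm no public table allows anonymous inserts without validation
- [ ] Confirm all public write actions route through a rate-limited server endpoint

## Architecture
- Privileged writes go through server routes only; no direct client writes to protected tables.
- Anonymous activity stays local until login; cloud state is authenticated-only.
```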
Why this works
AI agents are context-driven. They read your files, follow your patterns, and generate code that matches the style of your repo. A security file doesn't guarantee perfect output – but it dramatically reduces the chance of an agent generating an insecure pattern in the first place.
The key insight is that this isn't just documentation for humans. It's a directive for machines. When an agent sees “service role keys may only be used in Route Handlers,” it follows that rule. When it sees “RLS is mandatory,” it generates policies instead of skipping them.
The file also serves as a shared contract between you and anyone else on the team. It's the source of truth for “how we handle security here” – whether the code is written by a person or a model.
If you're shipping production apps with AI agents, a security file isn't optional. It's the cheapest, highest-leverage safety net you can add.