I'm a Designer Who Ships With AI Agents. Here's How.

March 2026 · 8 min read

Two years ago, my workflow was: design in Figma, write a spec, hand it to engineering, wait, review, iterate, wait again. The gap between what I designed and what shipped was measured in weeks and compromises.

Today, I design the interface and build it myself, often in the same session. The difference isn't that I became a 10x engineer overnight. It's that AI agents became good enough to close the gap between design intent and working code.

This is how I work now: the five steps that turn a fuzzy idea into shipped product, what I've built with this approach, and what I've learned about the emerging role of the design engineer.

The five steps

1. Research and map the logic

I start with ChatGPT. Before any code, I research the problem, map the user flow, define the data model, write the PRD, and structure the logic I need. This replaces the traditional design phase for me and moves about 10x faster than opening Figma. By the time I start building, I know exactly what I need.

2. Describe the behaviour to the agent

With the flow and logic mapped, I describe the component to an AI agent: the states it needs to handle, the data it consumes, the interactions it supports. I think of this as context engineering: giving the agent enough structure to generate correct code, not just any code. A well-structured codebase is a well-prompted agent.
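A minimal sketch of what that context looks like in practice, in TypeScript. The component, names, and states here are hypothetical, not from a real project: the point is that a discriminated union of states, written before the agent runs, enumerates everything the generated code must handle.

```typescript
// Hypothetical spec for a file-upload widget, written before prompting the
// agent. The discriminated union enumerates every state the component must
// handle, so generated code can't quietly skip loading or error handling.
type UploadState =
  | { kind: "idle" }
  | { kind: "uploading"; progress: number } // 0–100
  | { kind: "done"; fileUrl: string }
  | { kind: "error"; message: string; retryable: boolean };

interface UploadWidgetProps {
  accept: string[]; // MIME types, e.g. ["image/png"]
  maxBytes: number;
  onStateChange: (state: UploadState) => void;
}

// One concrete behaviour the spec pins down: which states end the flow.
function isTerminal(state: UploadState): boolean {
  return state.kind === "done" || state.kind === "error";
}
```

Handing the agent types like these, rather than a loose sentence, is the difference between correct code and plausible code.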

3. Build with the agent

The agent generates the component, I review it, adjust, and iterate. This isn't “vibe coding” where you accept whatever comes out. I read every line. The difference is that the first draft appears in seconds instead of hours, and iterations happen in real time instead of across sprint cycles.

4. Handle the hard parts myself

Edge cases, error states, accessibility, performance. These still need human judgement. AI agents are excellent at scaffolding but they don't understand the why behind design decisions. I handle the refinements that make the difference between a prototype and a product.

5. Ship

Because I control both the design and the code, there's no handoff friction. What I designed is what ships. The fidelity gap that plagues most design-to-engineering workflows simply doesn't exist.

The stack

My primary tools are Windsurf (agentic IDE with Cascade) and Claude Code (terminal-based AI agent). Windsurf for feature development. Its Cascade agent understands the full codebase and handles multi-file changes. Claude Code for targeted work like debugging, refactoring, wiring up API routes. ChatGPT for research and PRDs upstream of both.

The rest of the stack (Next.js, TypeScript, Supabase, Tailwind, Vercel) is chosen because AI agents work well with these tools. Strong type systems and well-documented frameworks give agents better context, which means better output.

One project, end to end

The clearest example is OpenWatch, a real-time security intelligence platform for Nigeria: AI-powered incident extraction from news sources, semantic deduplication with pgvector, an interactive tactical map, automated daily briefs, and a Twitter bot.

I started in ChatGPT, mapping out what counts as an incident, what taxonomy to use, and what the dedup logic should look like. I wrote the PRD and the data model before opening an editor. Then I moved to Windsurf for the pipeline scaffolding (RSS scrapers, GPT extraction prompts, pgvector schema, BullMQ orchestration), and to Claude Code for the precise work: tuning the extraction prompt against real Nigerian news, debugging the dedup fusion score, fixing the publish-gate logic.
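To make the dedup idea concrete, here is an illustrative sketch of a fusion score. The function names, weights, and threshold are my assumptions for this example, not OpenWatch's actual logic: it blends embedding cosine similarity with simple metadata agreement and compares the result against a cutoff.

```typescript
// Illustrative dedup "fusion score" (weights and threshold are assumptions):
// blend semantic similarity between incident embeddings with coarse
// metadata agreement, then flag pairs above a cutoff as duplicates.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface Incident {
  embedding: number[];
  state: string;     // administrative region
  dayBucket: string; // e.g. "2026-03-01"
}

function fusionScore(a: Incident, b: Incident): number {
  const semantic = cosine(a.embedding, b.embedding);
  const sameState = a.state === b.state ? 1 : 0;
  const sameDay = a.dayBucket === b.dayBucket ? 1 : 0;
  return 0.7 * semantic + 0.2 * sameState + 0.1 * sameDay;
}

const isDuplicate = (a: Incident, b: Incident): boolean =>
  fusionScore(a, b) >= 0.85;
```

Tuning something like this against real news data is exactly the kind of precise, judgement-heavy work the post describes handing to Claude Code rather than leaving to a single agent pass.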

The interesting part is what I didn't hand to the agents. The taxonomy decisions (what counts as an incident, what doesn't). The confidence threshold tuning (when to publish, when to queue). The map interaction design (how to handle a hundred markers in one state). Those are the human-judgement parts, and they're also the parts that made the product worth building. The agents shipped the foundation in days. I spent the rest of the time on the parts only a human could decide.
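The publish-or-queue decision mentioned above can be sketched as a small gate function. The thresholds and the `verifiedSource` signal here are hypothetical, chosen only to illustrate the shape of the decision: high-confidence extractions publish automatically, mid-confidence ones queue for human review, the rest are discarded.

```typescript
// Hypothetical publish gate (thresholds are illustrative, not the real
// OpenWatch values): route each extracted incident to one of three outcomes.
type Gate = "publish" | "review" | "discard";

function publishGate(confidence: number, verifiedSource: boolean): Gate {
  // Trust known outlets slightly more than unverified ones.
  const publishAt = verifiedSource ? 0.8 : 0.9;
  if (confidence >= publishAt) return "publish";
  if (confidence >= 0.5) return "review";
  return "discard";
}
```

The code is trivial; picking the numbers is not. That split between mechanical scaffolding and human threshold-setting is the whole argument of this section.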

What would have been a team project became a solo build, in weeks instead of months.

Other things shipped this way

ShipSecure is a CLI that generates security policies for AI-assisted codebases. 30+ policy files, a 100-point audit scanner, shipped to npm in days.

Filmflux is a movie discovery app with 10k monthly users across web, iOS, and Android. Three-layer cache, hybrid semantic search, full admin dashboard. Sole developer.

This portfolio. Every page, animation, API integration, and interactive component on this site was built with AI agents. The live weather, the GitHub graph, the changelog feeds, the design system docs. Proof of the workflow, not just a description.

Two things I've learned

Context engineering is the real skill

The quality of AI output depends almost entirely on the context you provide. File structure, type definitions, naming conventions, SECURITY.md files, clear component boundaries. These aren't just good engineering practices, they're prompts. The most leveraged thing I do every day is shape the context the agent reads before it writes a line.

Speed changes what you attempt

When building something takes days instead of weeks, you experiment more. You try the ambitious interaction, the real-time feature, the complex animation. The cost of exploration drops so low that the rational choice is to be bold. Most of my best work in the last year came from ideas I would have descoped in a traditional workflow.

The design engineer moment

The industry is converging on a role that didn't exist at scale three years ago: the design engineer. Someone who designs the product and builds it. Not a designer who dabbles in code, or an engineer with an eye for aesthetics, but someone who owns the full path from concept to production.

AI agents are what make this role viable at the level of quality and speed the industry demands. They don't replace design thinking or engineering rigour. They compress the execution gap between the two, so one person can do what used to require a team.

I'm not arguing every designer should learn to code, or every engineer should learn design. I'm saying the tools now exist for people who want to do both, and the results speak for themselves.

Design Engineering · AI Agents · Windsurf · Claude Code · Product Design · Workflow