
BrandForge — Generative Brand Identity Engine

A multi-agent generative system that turns a single natural-language brief into a complete, production-ready brand kit — strategy, palette, type, logo, voice, mood board.

BrandForge — Generative Brand Identity Engine — cover
01 · The brief

One prompt in, a brand kit out.

The user enters five fields — brand name, industry, target audience, mood keywords, and an optional description. That's it. From there, the system has to produce something that would normally cost a small studio a month: strategy, archetype, six-slot colour palette, typography pairing, logo directions, voice guidelines, copy scenarios, a favicon, a QR code, a mood board, social mockups, a brand story, and an FAQ. The interface promises this can happen in under a minute. The architecture has to deliver on that promise.
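The five fields can be thought of as a small typed contract. A minimal sketch, assuming field names inferred from the description above (the actual schema may differ):

```typescript
// Hypothetical shape of the brief — names are assumptions, not the real schema.
interface BrandBrief {
  brandName: string;
  industry: string;
  targetAudience: string;
  moodKeywords: string[];
  description?: string; // the optional fifth field
}

// Guard: the four required fields must be non-empty before the pipeline runs.
function isCompleteBrief(b: Partial<BrandBrief>): b is BrandBrief {
  return Boolean(
    b.brandName?.trim() &&
    b.industry?.trim() &&
    b.targetAudience?.trim() &&
    b.moodKeywords?.length
  );
}
```

Everything the system produces is a deterministic function of this one object, which is what makes caching and reproducibility possible later.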

BrandForge prompt input screen
The home screen — five fields and a single CTA. Everything downstream flows from this.
02 · Three agents, one chain

Strategist → Visual Director → Copywriter.

I deliberately chose multi-agent chain-of-prompts over RAG. The task is creative synthesis from a brief, not factual retrieval against a corpus — RAG would have been the wrong tool. Each agent does one thing well: the Strategist decides on archetype, positioning, and brand pillars; the Visual Director picks palette, type, logo direction; the Copywriter writes voice guidelines, story, FAQ. Each agent's output is strictly-typed JSON, validated on the way through, so a downstream agent can rely on what the upstream one produced.
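The validation step between agents can be sketched like this — a hand-rolled guard on the Strategist's output, with field names assumed from the description above rather than taken from the real codebase:

```typescript
// Illustrative contract for the first agent's output; not BrandForge's actual types.
type StrategistOut = {
  archetype: string;
  positioning: string;
  pillars: string[];
};

// Validate the raw model response before the Visual Director consumes it,
// so an off-schema reply fails fast instead of corrupting the rest of the kit.
function parseStrategist(raw: string): StrategistOut {
  const o = JSON.parse(raw);
  if (
    typeof o?.archetype !== "string" ||
    typeof o?.positioning !== "string" ||
    !Array.isArray(o?.pillars) ||
    !o.pillars.every((p: unknown) => typeof p === "string")
  ) {
    throw new Error("Strategist output failed schema validation");
  }
  return o as StrategistOut;
}
```

The point of failing at the boundary is that each agent only ever sees data that has already passed its upstream contract, which keeps debugging local to one step of the chain.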

BrandForge agent pipeline composing the brand kit
Agents at work — JSON flows down the chain; each step is independently visible.
03 · Resilience

Fallbacks, dedup, and security.

Generative systems break in interesting ways — image APIs throttle, model outputs drift off-schema, identical prompts shouldn't pay the model bill twice. BrandForge has multi-layer image fallbacks (so a failed mood-board image swaps to a deterministic alternative, not a broken state), a deterministic client-side SVG logo generator (so even if every image API fails, you still get a logo), response-cache deduplication (same brief, same kit, no second bill), and a prompt-injection filter at the input layer. Auth and storage are on Supabase with row-level security, so users only see their own kits.
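The deterministic SVG fallback is the last line of that defence: same brand name in, same mark out, with no network dependency. A minimal sketch — the hashing and shape choices here are illustrative, not BrandForge's actual algorithm:

```typescript
// Cheap deterministic string hash (illustrative, not cryptographic).
function hashString(s: string): number {
  let h = 0;
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h;
}

// Client-side fallback logo: a rounded tile in a hash-derived hue
// with the brand's initial. Same input always yields the same SVG,
// so even a total image-API outage never produces a broken state.
function fallbackLogoSvg(brandName: string): string {
  const h = hashString(brandName);
  const hue = h % 360;
  const initial = brandName.trim().charAt(0).toUpperCase();
  return [
    `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64">`,
    `<rect width="64" height="64" rx="${8 + (h % 16)}" fill="hsl(${hue} 70% 45%)"/>`,
    `<text x="32" y="42" font-size="32" text-anchor="middle" fill="#fff">${initial}</text>`,
    `</svg>`,
  ].join("");
}
```

The same determinism underpins the cache dedup: hash the brief, key the stored kit on it, and an identical brief returns the saved result instead of a second model bill.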

Outcome

Deployed to Vercel and working end-to-end. A user can land on the home page, type a brief, and watch a complete brand kit assemble in under a minute.

Reflection

More iterations of the visual-output layer are coming — better logo quality, richer mood boards, exportable design tokens. The agent architecture is the part I'd keep; the rendering layer is where the next month of work goes.