AI creative director Jamey Gannon has a repeatable system for generating consistent brand imagery in Midjourney, and complex prompts are not part of it. Her workflow centers on three levers: mood boards built in Pinterest or Cosmos, Midjourney's SREF (style reference) codes, and personalization codes that steer outputs toward a learned aesthetic profile. SREFs, she argues, outperform general mood boards for consistency because they lock stylistic parameters rather than suggest them. The full session runs from first mood board to client-ready deliverable.
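As a rough sketch of how these levers combine in a single prompt (the SREF value below is a placeholder, not one of Gannon's actual codes), Midjourney's `--sref` parameter attaches a style reference, `--p` applies the account's personalization profile, and `--sw` (style weight, 0–1000) controls how strongly the SREF is enforced:

```text
/imagine prompt: minimalist product shot, soft studio lighting --sref 1234567890 --p --sw 400
```

Raising `--sw` pushes generations harder toward the locked style, which is the mechanism behind the consistency claim: the same SREF and weight reproduce the same stylistic parameters across otherwise different prompts.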

The technical sequence matters here. Gannon tests style consistency using standardized prompt sets at the 8:45 mark, iterates on SREFs through a defined refinement loop starting at 12:33, and brings in Nano Banana, Google's image generation tool, at 35:48 specifically for targeted element fixes without full regeneration. She also covers building AI self-portraits for content use at 38:23, a practical application most brand workflows ignore. Each stage has a discrete tool assignment, which is what makes the system transferable.

The real value in watching the full episode is not the conclusion but the troubleshooting section at 46:50, where Gannon shows what to do when Midjourney resists a style direction. That segment alone reframes how most practitioners diagnose generation failures. The client packaging section also addresses a gap most tutorials skip: how to hand off a generative system so clients can produce on-brand assets without the original operator.
