The Digital Doppelgänger: A Creator’s Guide to Consistent AI Characters with Midjourney
You’ve spent hours trying to get your main character to look the same in two different AI-generated images. One minute she’s a battle-hardened warrior, the next she has different eyes and a completely new outfit. It’s the single biggest frustration for narrative artists using generative AI. As of October 12, 2024, that frustration is a relic of the past. Forget the consistency problem. Today, we’re not just creating images; we’re creating digital actors. We’ll use Midjourney’s revolutionary Character Reference feature to forge a consistent character you can direct across any scene, any mood, any story. This isn’t about replacing you. It’s about giving you a digital stunt double, an infinitely patient model, and a concept artist all rolled into one.
The Old Way vs. The New Workflow
For decades, character design has meant a mountain of sketch pages, model sheets, and turnarounds. It’s laborious but necessary work to ensure your character looks the same from panel to panel, frame to frame. The ‘old’ way of using AI was even worse for consistency—a slot machine of random faces.
Our new workflow transforms this. Think of it in three distinct phases:
- The Casting Call: Rapidly generate dozens of high-quality character concepts.
- The Character Lock: Select your ‘hero’ concept and create a ‘digital DNA’ reference.
- The Scene Rehearsal: Place your consistent character into any environment or action sequence you can imagine.
Let’s step into the lab and build our character from scratch.
Phase 1: The Casting Call – Generating Your Protagonist
Our first goal is variety, not consistency. We want to see a range of possibilities before we commit. We’re designing a protagonist for a cyberpunk comic book: a young, resourceful scavenger navigating a neon-drenched metropolis. We will tell the AI the genre, role, and key visual elements.
The Prompting Studio: Initial Concept Generation
Head over to Midjourney (on Discord or their web alpha). We’ll start with a detailed prompt focused on style and theme.
Copy and paste this prompt:
/imagine prompt: concept art sheet of a female cyberpunk scavenger, Rei Ayanami hairstyle, techwear jacket, determined expression, manga-inspired art style, vibrant neon and chrome palette, character design --ar 4:5 --style raw --stylize 250
After you press Enter, Midjourney will deliver four unique takes on this concept. This is your initial lineup.
Strategist’s Log (Deconstructing the Prompt): We used ‘concept art sheet’ and ‘character design’ to signal our intent. Referencing a ‘Rei Ayanami hairstyle’ gives the AI a concrete stylistic anchor. The `--ar 4:5` parameter creates a portrait-friendly aspect ratio, ideal for focusing on the character. `--stylize 250` (the default is 100) tells Midjourney to be more artistically adventurous with the initial designs.
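If you run casting calls often, it helps to keep the prompt’s pieces separate so each one is easy to swap. Here’s a minimal Python sketch of that idea — pure string assembly, not any official Midjourney API; the function name and defaults are my own:

```python
# Sketch: build a Midjourney concept prompt from swappable parts.
# This only assembles a string you then paste into Midjourney yourself.

def build_prompt(subject, style_tags, ar="4:5", style="raw", stylize=250):
    """Join the subject and style tags, then append the parameters."""
    description = ", ".join([subject] + style_tags)
    return f"/imagine prompt: {description} --ar {ar} --style {style} --stylize {stylize}"

prompt = build_prompt(
    "concept art sheet of a female cyberpunk scavenger",
    [
        "Rei Ayanami hairstyle",
        "techwear jacket",
        "determined expression",
        "manga-inspired art style",
        "vibrant neon and chrome palette",
        "character design",
    ],
)
print(prompt)
```

Swapping one tag (say, the hairstyle reference) and regenerating is how you iterate through a lineup without retyping the whole prompt.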
Phase 2: The Character Lock – Creating the Digital DNA
From the four options generated, let’s say we love the top-right version (Image #2). She has the perfect blend of vulnerability and toughness. Now, we’ll lock her in. This is where the magic happens.
Upscale the image you love. Once you have the full-size image, right-click (or long-press on mobile) and select “Copy Image Address”. This URL is now the ‘DNA’ for your character.
The Prompting Studio: Locking with --cref
We’ll use a new prompt, but this time we’ll include the new Character Reference parameter.
Construct your prompt like this:
/imagine prompt: [Paste the Image URL you copied here] A full-body shot of the character standing in a futuristic city street, looking up at giant holographic ads, cinematic --cref [Paste the SAME Image URL again here] --cw 100
This might seem redundant, but the first URL is an image prompt to guide the style, and the second, after `--cref`, is the crucial command telling Midjourney to lock the character’s facial and physical features.
Strategist’s Log (Understanding --cref and --cw): The --cref parameter tells Midjourney: “Pay attention to the character features in this reference image and apply them to the new generation.” The --cw parameter stands for ‘Character Weight’ and ranges from 0 to 100. A value of `--cw 0` focuses only on the face (good for changing outfits), while `--cw 100` attempts to copy the face, hair, and clothing. We’re using 100 here to get the full character into our new scene.
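Every scene prompt from here on follows the same pattern — scene description, then `--cref` with your URL, then `--cw` — so you can template it. A small Python sketch of that template (again, just string assembly I wrote for illustration; the URL below is a hypothetical placeholder, not a real reference):

```python
# Sketch: template a character-locked scene prompt.
# cw=0 locks the face only; cw=100 locks face, hair, and clothing.

def build_cref_prompt(scene, ref_url, cw=100, ar=None):
    """Append --cref/--cw (and optionally --ar) to a scene description."""
    if not 0 <= cw <= 100:
        raise ValueError("--cw must be between 0 and 100")
    prompt = f"/imagine prompt: {scene} --cref {ref_url} --cw {cw}"
    if ar:
        prompt += f" --ar {ar}"
    return prompt

# Hypothetical reference URL standing in for your character's 'DNA' link.
print(build_cref_prompt(
    "A full-body shot of the character standing in a futuristic city street, "
    "looking up at giant holographic ads, cinematic",
    "https://example.com/hero.png",
    cw=100,
))
```

Keeping the reference URL in one variable means every rehearsal scene reuses the same ‘DNA’ without copy-paste mistakes.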
Phase 3: The Scene Rehearsal – Your Character in Action
Now for the ultimate test. Can we put our newly-minted character into a completely different scenario? Let’s try an action sequence. This is where AI transcends being a static image generator and becomes a dynamic storytelling partner.
The Prompting Studio: Dynamic Scene Test
We’ll use the same Character Reference URL. The only thing that changes is the description of the scene.
Copy and paste this new prompt:
/imagine prompt: cinematic action shot of a young cyberpunk scavenger sprinting through a rain-soaked alley, dodging security drones, motion blur, dynamic angle, intense emotion --cref [Paste Your Character’s URL] --cw 80 --ar 16:9
The results will be astonishing. You’ll see the same character, with the same face and hairstyle, now in a high-stakes action scene. You’ve successfully directed your digital actor.
Strategist’s Log (Directorial Choices): Notice how we lowered the character weight to `--cw 80`. This gives Midjourney a little more creative freedom to adapt her clothing to the action (e.g., making her jacket flow with the motion) while keeping her core features locked. Changing the aspect ratio to `--ar 16:9` makes the shot feel more cinematic, like a still from an animated film. You are no longer just a prompter; you are a director, a cinematographer, and a stylist.
The Final 20%: Bringing The Human Touch
The generated image is a spectacular starting point, but it’s not the final product. Your artistry is the final, essential ingredient. Import the AI-generated scene into your tool of choice—Photoshop, Krita, Procreate—and use it as an underpainting. Trace the lines to match your personal style, adjust the colors, add custom textures, hand-letter speech bubbles, and composite elements from different generations. The AI provides the composition, perspective, and lighting in seconds; you provide the soul and the story.
The Big Questions: Your AI Debrief
“Is this going to replace character designers?”
Absolutely not. It changes the job of a character designer. The emphasis shifts from laborious rendering to high-level creative direction. Your value is in the initial concept, the curatorial eye to select the right ‘casting call’ option, the directorial skill to craft prompts that test the character, and the artistic talent to perform the final integration. This tool automates the grunt work, freeing you up for the actual storytelling.
“How do I maintain my unique art style?”
The key is twofold: prompt-craft and post-production. First, infuse your prompts with stylistic keywords unique to you (e.g., ‘in the style of Mike Mignola,’ ‘hatching like Bernie Wrightson,’ ‘colors like a Studio Ghibli film’). Second, and more importantly, is the final integration. Never use a raw AI image as your final piece. Always bring it into your primary art software. By redrawing key lines, adding your signature color palette, and applying your own textures, you transmute the AI’s output into your own distinct creation.
“What about copyright and using this commercially?”
The legal landscape for AI is still evolving. As of late 2024, the general consensus is that raw AI output is not copyrightable. However, work that involves significant human authorship and transformation (like our proposed workflow of tracing, repainting, and compositing) has a much stronger claim to copyright protection. Always check the terms of service for the specific AI tool you’re using. For commercial projects, this workflow—using AI for ideation and as a base layer for substantial human artwork—is the safest and most creatively authentic path forward.
Your Creative Sandbox Assignment
Your mission is to create your own digital doppelgänger. First, define a character in three words (e.g., “Jaded, Veteran, Astronaut”). Use Phase 1 to generate initial concepts. Select one and create your character reference URL. Now, for the test: write a new prompt for a scene that contradicts one of their traits (e.g., put your ‘Jaded Astronaut’ in a scene of ‘a child’s birthday party with confetti and cake’). Use the `--cref` parameter and see how the AI interprets their expression. Does their jaded nature still show through? This exercise teaches you to direct for emotional nuance.
Your AI Integration Plan This Week
- Monday: Dedicate 20 minutes to a ‘Casting Call.’ Write one detailed character concept prompt and generate at least three variations (by re-rolling the prompt).
- Wednesday: Choose your favorite character from Monday. Get their reference URL and spend 20 minutes on ‘Scene Rehearsals.’ Put them in three completely different situations (e.g., a quiet library, a crowded market, a fantasy forest).
- Friday: Select the most compelling scene you generated. Take it into Photoshop, Krita, or your preferred software. Spend 30 minutes just doing a rough line art layer over the top. Don’t worry about perfection; just feel how your hand and style merge with the AI’s base.
- Sunday: Review your work. You’ve now gone through the entire character-to-scene workflow. You have a model sheet, scene concepts, and a hybrid art piece. You’re no longer just using AI; you’re collaborating with it.