Exploring Emotion Through AI: Fashion Experiments in ComfyUI
Starting Point: Curiosity Meets Control
As an art director, I’m always looking for ways to bridge storytelling and technology. I’ve worked with immersive spaces, 3D platforms, and interactive retail, but AI feels different. It’s not just a new medium; it’s a new collaborator. And as powerful a tool as it is, I strongly believe that, at the end of the day, taste is what creates an end result that resonates and sparks.
So I started with a simple challenge: can I maintain brand consistency and emotional tone using ComfyUI alone?
I chose three brands that live at the intersection of futurism and intimacy: Gentle Monster, Prada, and Miu Miu. Each has a strong design language, rigid yet romantic, and I wanted to see if an AI workflow could preserve those identities through image and motion.
Building the Workflow
Using ComfyUI, I set up a visual pipeline that balanced precision and play (a sketch of the kind of graph I was queuing follows this list):
Tested product realism and lighting consistency across multiple prompts.
Layered brand elements like logos, textures, and typography to see how AI handled distinct visual codes.
Experimented with image-to-video generation to maintain continuity and natural motion between frames. Here I generated stills with Nano Banana, then ran them through Wan 2.2 for image-to-video.
Tuned style consistency nodes to keep the Gentle Monster visual tone cool and sculptural, while Prada leaned minimal and architectural, and Miu Miu stayed soft and cinematic.
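For readers curious what that looks like in practice, here is a minimal sketch of a ComfyUI graph in its API format, queued over the local HTTP endpoint. It assumes a ComfyUI instance on the default port and a generic SDXL checkpoint; the checkpoint filename, prompt text, and sampler settings are illustrative, not the exact ones from these experiments.

```python
import json
import uuid
import urllib.request

# A minimal text-to-image graph in ComfyUI's API format: each key is a node id,
# and each ["node_id", slot] pair wires one node's output into another's input.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # assumed checkpoint
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1],
                     "text": "eyewear product shot, soft gloss under gallery light, "
                             "cool sculptural tone"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, text, watermark"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 28, "cfg": 6.5,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "gm_test"}},
}

# Queue the graph on a local ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow, "client_id": str(uuid.uuid4())}).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```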
The process wasn’t linear — it was iterative, intuitive, and occasionally chaotic. But that chaos produced unexpected beauty.

Discoveries Along the Way
AI doesn’t replicate luxury; it interprets it, so prompts must be detailed and strategic.
I noticed that subtle prompts (“soft gloss under gallery light”) mattered more than brand names themselves. By treating the tool as a design partner rather than a generator, I could coax it into emotional accuracy, the kind that feels human-made.
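To make that concrete, here is a toy sketch of the prompt scaffolding I mean: texture and light language up front, the brand reduced to a tone phrase. The descriptors echo the tones described elsewhere in this piece; the function and names are illustrative, not part of any actual workflow file.

```python
# Hypothetical prompt scaffold: concrete material and light cues first,
# brand identity expressed as a tone phrase rather than a brand name.
BRAND_TONES = {
    "gentle_monster": "cool, sculptural, gallery-like",
    "prada": "minimal, architectural, precise",
    "miu_miu": "soft, cinematic, romantic",
}

def build_prompt(subject: str, brand: str) -> str:
    """Compose a detailed prompt: texture cues first, tone second."""
    return (f"{subject}, soft gloss under gallery light, "
            f"tactile close focus, {BRAND_TONES[brand]} mood")

print(build_prompt("acetate sunglasses on a stone plinth", "gentle_monster"))
```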
It reminded me of traditional art direction: you define a feeling first, then build the system to express it. ComfyUI just happens to visualize that system in real time.
As I continued prompting and trying out different images and shots, I felt much like a photographer art directing on-site at a shoot: I had to arrive with my idea, my vision, and my taste, and direct the scene.

From Still to Motion
Next, I extended the experiment into short-form motion.
Using the same node pipeline, I generated looping sequences that felt like high-end campaign teasers with moody lighting, controlled camera drift, and tactile product focus.
The results weren’t perfect, but they were powerful: each loop carried a recognizable brand DNA, achieved entirely through AI compositing and motion consistency.
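Mechanically, this amounted to exporting the image-to-video graph in ComfyUI’s API format (via “Save (API Format)”) and re-queuing it once per still with a fresh seed. Below is a rough sketch under that assumption; the node ids and filenames are placeholders that depend entirely on your own exported graph.

```python
import json
import random
import urllib.request

# Re-queue an image-to-video graph exported from ComfyUI ("Save (API Format)").
# "wan_i2v_api.json" is a stand-in name for whatever you exported.
with open("wan_i2v_api.json") as f:
    graph = json.load(f)

SEED_NODE = "3"    # hypothetical: the sampler node id in your export
IMAGE_NODE = "10"  # hypothetical: the LoadImage node feeding the video model

for still in ["gm_test_00001_.png", "gm_test_00002_.png"]:
    graph[IMAGE_NODE]["inputs"]["image"] = still
    graph[SEED_NODE]["inputs"]["seed"] = random.randrange(2**31)
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": graph}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```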
Why It Matters
These small studies are helping me rethink how we define creative authorship.
As tools like ComfyUI evolve, they aren’t replacing art direction; they’re expanding it.
They make it possible to prototype brand emotion before any shoot, render, or spatial build.
And in an era where creative speed matters as much as vision, that blend of intuition and iteration is what keeps storytelling human.
Results
Gentle Monster:

Prada:

Miu Miu: