AI tools can generate images. They still can't understand what a designer actually needs.

Cue is a context-aware AI intent layer that translates designer thinking into precise AI output, without leaving your design tool or learning prompt engineering.

ROLE

Product Designer

TYPE

Self-Initiated Concept

TIMELINE

4 Weeks

PLATFORM

Desktop / Design Tool Panel

THE MOMENT IT BREAKS

I needed a background image. What followed was 30 minutes across three applications.

I was designing a social media post for a luxury hotel brand called Maison. Simple brief. I needed an atmospheric background image, warm, minimal, and editorial, that would sit behind large gold typography. I opened the AI generation tool. It showed me a blank prompt field and waited.

I typed what felt natural: "luxury hotel lobby warm background." Six words. I hit generate.

What came back was generic. Stock photography energy. Warm orange lens flare, ornate chandeliers, nothing close to the modern minimal aesthetic the brand needed. So I did what most designers do when a tool doesn't understand them. I left. I opened ChatGPT in another tab, described my need in plain English, asked it to write me a proper prompt, copied the result, went back to the generation tool, pasted it in, generated again.

The results were better. But the tool had hallucinated text into the images, garbled letterforms baked into the architecture. I couldn't use them. I downloaded the least broken one anyway, went back to Illustrator, placed it behind my text.

It looked wrong. Thirty minutes gone. I was back where I started.

THE BROKEN WORKFLOW

Six steps. Three applications. One design task that should have taken two minutes.

These are real screenshots from an actual design session. Nothing is recreated or staged.

STEP 01

Adobe Illustrator workspace. The artboard has the MAISON brand name in gold. The background is empty. I need an AI-generated image to fill it.

STEP 02

I open Adobe Firefly. A blank prompt field. I type what feels natural, six words. This is where most designers are already stuck.

STEP 03

The results. Generic warm hotel lobbies. Orange lens flare. Ornate chandeliers. Nothing close to the modern minimal luxury aesthetic the brand needs. The tool gave me what the words said, not what I meant.

STEP 04

I leave the design tool entirely. I open ChatGPT and describe my need in plain English. ChatGPT generates a 200-word structured prompt. This is what the AI generation tool actually needed to understand me.

STEP 05

Back in Firefly with the structured prompt. The results are dramatically better. Cool marble, minimal architecture, editorial lighting. But the tool has hallucinated text into every image. Garbled letterforms baked into the walls. Unusable.

STEP 06

Back in Illustrator. The best available image placed behind the MAISON text. The composition doesn't work. The colors clash. The hallucinated architecture competes with the typography. Thirty minutes later, I'm back where I started.

“The problem wasn’t the AI’s capability. The results at Step 05 proved the technology works. The problem was that communicating designer intent to an AI tool requires a completely separate skill set that has nothing to do with design ability.”

INDUSTRY PATTERN

Every AI generation tool has the same problem. I checked.

Before designing anything I wanted to understand whether this was a single tool’s failure or an industry-wide pattern. I opened every major AI generation tool available and looked at one specific thing: what does the tool show you when you need to create an image?

The answer, across every tool I tested, was identical. A blank text field. Waiting.

ADOBE FIREFLY

Blank prompt field. No context. No guidance. No knowledge of what you're designing. The entire interaction model assumes you already know how to communicate with an AI. "Describe what you want to generate." But how?

CANVA AI

Friendlier tone but fundamentally identical. A blank field waiting for a prompt. There are Style and Aspect Ratio dropdowns, but those are technical parameters; the tool still assumes you can describe your creative need in text. No knowledge of any existing design context.

DALL-E / CHATGPT

The most conversational framing. It offers idea shortcuts like "Studio headshot" and "Blueprint poster", the closest any tool gets to guided generation. But these are generic categories completely disconnected from your specific design task. Still starts blind.

MICROSOFT DESIGNER

The most honest about what it's asking. A massive empty text area. The example prompt shown is about a flamingo with surreal features. That tells you exactly who this tool thinks its user is. Not a professional designer working on a luxury brand campaign.

Four tools. Four companies. Four different headlines asking the same question. None of them know anything about what you’re designing.

THE REAL PROBLEM

Prompt engineering is not a design skill. It never should have been.

Every major AI generation tool today puts a blank text field in front of the designer and waits. The assumption baked into that interaction model is that the designer knows how to write a structured, detailed, technically precise prompt that will reliably produce what they have in their head.

That assumption is wrong. And it affects every designer equally regardless of experience level. A designer with 20 years of professional experience and a designer in their first year face exactly the same barrier in front of a blank prompt field. Because neither was trained in prompt engineering. They were trained to design. Prompt quality has no correlation with design skill, creative vision, or professional experience. It is a completely separate discipline with its own learning curve.

There is a second problem layered underneath. AI generation tools are blind to context. They don’t know what you’re designing, what’s already on your artboard, what colors you’re working with, what mood you’re trying to achieve, or that you need clean negative space in the center because you’re overlaying typography. Every generation starts from zero. The designer has to re-communicate all of this context every single time.
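To make that gap concrete, here is a minimal sketch of what an intent layer like Cue could do with context the design tool already holds. Everything below is illustrative: the interface, field names, and function are hypothetical, not a real design-tool API.

```typescript
// Illustrative only: a context-aware layer reads what is already on the
// artboard and folds it into the prompt automatically.
// Every name below is hypothetical, not a real design-tool API.

interface DesignContext {
  brandName: string;
  palette: string[];                           // hex values already in the file
  mood: string[];                              // tags the designer selects, not writes
  typographyZone: "center" | "top" | "bottom"; // where overlaid text will sit
}

function buildPrompt(ctx: DesignContext, subject: string): string {
  return [
    `${subject} for ${ctx.brandName}`,
    `color palette: ${ctx.palette.join(", ")}`,
    `mood: ${ctx.mood.join(", ")}`,
    `clean negative space at the ${ctx.typographyZone} for overlaid typography`,
    "no text, lettering, or typography in the image", // guards against Step 05's hallucinated letterforms
  ].join(". ");
}

// The designer still types six natural words; the layer supplies
// the context they would otherwise re-communicate every time.
const prompt = buildPrompt(
  {
    brandName: "MAISON",
    palette: ["#C9A227", "#F5F1E8"], // illustrative gold and warm-white values
    mood: ["warm", "minimal", "editorial"],
    typographyZone: "center",
  },
  "luxury hotel lobby warm background"
);

console.log(prompt);
```

The designer's input stays at six words. The structure that Step 04 required a detour through ChatGPT to produce is assembled by the layer instead.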

EXPLORATION A

List-based navigation felt too much like a file directory. B2B clients needed inspiration, not a menu to navigate.

EXPLORATION B

Category-first approach added friction. Clients had to know the right category before they could see anything visual.

EXPLORATION C

Filter-based gallery let clients browse visually first and understand the categories as they went. No prior knowledge needed.

KEY DECISIONS

Every template had to be real — not aspirational.

The most important constraint I set was honesty. Every design in the showcase had to be fully achievable within Cvent's actual editor. No custom code that clients couldn't replicate, no design that required development work beyond what the platform supported.

This meant reviewing modern web patterns and templates from across the industry, then asking: can we build this within the platform's constraints? Sometimes the answer was yes with modifications. Sometimes it was no, and we moved on. We weren't making a design portfolio. We were making a promise to clients about what they could actually have.


WHAT WE BUILT

A living platform that sales teams still use today.

The showcase organized templates by event type, industry vertical, color palette, and complexity, giving clients multiple ways to find something that felt relevant to them. Every template was real, implementable, and linked to guidance on how to build it. Two event builders from the technical team worked alongside us to set up the template infrastructure within Cvent's backend, so the design team could focus entirely on front-end visual execution without getting blocked on platform configuration.
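As a rough sketch of the faceted filtering this implies (fields and values are invented for illustration, not Cvent's actual data model):

```typescript
// Rough sketch of the showcase's faceted filtering.
// Fields and sample values are invented for illustration.

interface Template {
  name: string;
  eventType: string;                              // e.g. "conference", "webinar"
  industry: string;                               // e.g. "hospitality", "tech"
  palette: string;                                // dominant color family
  complexity: "simple" | "moderate" | "advanced";
}

// Every facet is optional, so clients can browse visually first
// and narrow down only once the categories make sense to them.
type Filters = Partial<Omit<Template, "name">>;

function filterTemplates(templates: Template[], filters: Filters): Template[] {
  return templates.filter((t) =>
    Object.entries(filters).every(
      ([key, value]) => value === undefined || t[key as keyof Template] === value
    )
  );
}

const allTemplates: Template[] = [
  { name: "Summit", eventType: "conference", industry: "tech", palette: "blue", complexity: "moderate" },
  { name: "Gala", eventType: "dinner", industry: "hospitality", palette: "gold", complexity: "simple" },
];

// No filters selected returns everything: the gallery-first default.
const visible = filterTemplates(allTemplates, { industry: "hospitality" });
```

Making every facet optional is the whole difference from Exploration B: nothing has to be known before something can be seen.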

The homepage — filter by theme type, industry vertical, and color to find relevant examples fast.

Individual templates shown in full — every detail achievable within Cvent's editor, nothing that required custom development.

Survey and email templates — extending the showcase beyond registration to cover the full attendee journey.

Hospitality and onsite solutions — showing clients the platform's full breadth, not just its most visible features.

WHAT CHANGED

Design started solving problems before projects even began.

The shift was visible quickly. Sales teams adopted the showcase as their primary demo tool. Instead of describing what Cvent could do, they could show it. Client conversations changed character. Expectations were better calibrated before projects started, which meant fewer surprises during delivery.

Escalations dropped. Client satisfaction improved. Sales conversions increased. These weren't marginal changes. They were felt across the team. The showcase is still live today and continues to be used by Cvent's global sales and client teams.

~30%

Reduction in escalations

Fewer misaligned expectations reaching delivery

~20%

Improvement in client satisfaction

Clients felt more confident earlier in the process

~25%

Lift in sales conversions

Teams could show rather than explain

Metrics are approximate, based on team reporting from the quarter following launch.

LOOKING BACK

Showing is almost always more effective than explaining.

This project taught me something I now apply to everything. When there is a gap between what something can do and what people believe it can do, the answer is rarely a better explanation. It is a better demonstration.

If I were extending this further, I would push toward tighter integration between the showcase and the actual platform: letting clients start building directly from a template they discovered in the showcase, reducing the gap between inspiration and action even further.

I would also involve the sales team earlier in the design process. We built something they ended up loving, but we mostly designed it based on our own understanding of their problem. Getting them in the room from week one would have made the early template selection sharper.