Here’s a dirty secret about enterprise software development: user interfaces almost never get enough iteration.
The budget runs out. The dev team gets frustrated with pixel-level feedback. The business signs off on “close enough.” And six months later, the sales team quietly goes back to their spreadsheet because the custom tool feels clunky and nobody wants to say it out loud.
I’ve seen this pattern for twenty years on the Salesforce platform. And it’s not because anyone’s doing bad work — it’s because UI iteration is expensive, slow, and emotionally draining for everyone involved.
Last December, I discovered something that changes the math entirely.
What Happened in December
I was building two production Salesforce apps under tight deadlines. One of them — a trade program management system for a client whose sales team actively hated their existing tool — needed an interface that felt intuitive on day one. No training manuals, no “you’ll get used to it.” It had to work the way their brains worked, or they’d never adopt it.
The other project — custom quote-to-cash functionality — had a component for applying account credits to orders. The reps would be using it on every transaction. If it was confusing or slow, it would create friction on every single sale.
Both of these were cases where UI quality wasn’t a nice-to-have. It was the difference between adoption and rejection.
What “Live Wireframing” Actually Looks Like
With AI-assisted development, I could do something that I’m calling “live wireframing” — iterating through multiple complete versions of a UI component in a single working session, using natural language feedback that would make a traditional developer want to quit.
Here’s what the Edit Rebate component went through in one session:
I started with a basic layout. It worked, technically. But the information hierarchy was wrong — the most important fields were buried. So I said something like: “Move the margin summary to the top. Make it the first thing they see.”
New version in minutes. Better, but the summary cards felt cramped. “Give those cards more breathing room. And the margins should be color-coded — green when they’re above target, red when they’re below.”
New version. Now the color coding was too aggressive — it looked like a Christmas decoration. “Tone down the colors. More subtle. Think muted indicators, not traffic lights.”
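Feedback like “green above target, red below” usually boils down to a line or two of display logic, which is part of why the iteration is so cheap. A hypothetical sketch in plain JavaScript — the function and CSS class names are mine, not from the actual component:

```javascript
// Hypothetical display helper for a margin summary card.
// Maps a margin vs. its target to a CSS class; the "muted vs.
// traffic light" feedback then becomes a stylesheet tweak,
// not a logic change.
function marginClass(marginPct, targetPct) {
  return marginPct >= targetPct
    ? 'margin-above-target'   // styled as a muted green
    : 'margin-below-target';  // styled as a muted red
}

// A 14% margin against a 12% target reads as healthy.
console.log(marginClass(14, 12)); // 'margin-above-target'
```

Because the color decision lives in one place, “tone down the colors” touches only the two CSS classes and never the component logic.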
New version. Getting close, but the action buttons were competing with the data. “Push the action buttons to the bottom right. They’re secondary to reading the data.”
This went on for eight iterations. In a single session.
Let me be clear about what’s happening here: the kind of feedback I was giving — “more breathing room,” “tone it down,” “push those to the bottom right” — is exactly the kind of feedback that drives human developers up a wall. Not because it’s bad feedback, but because each round means another cycle of changes, another review, another round of “almost, but…”
AI doesn’t get frustrated. It doesn’t sigh when you say “actually, go back to the version before this one.” It doesn’t start cutting corners on iteration seven because it’s tired of the same component. Each version gets the same attention as the first.
Why This Matters More Than You Think
If you’re the person managing a Salesforce org — the ops leader, the CRO, the admin who’s been told “we can build you something custom” — this might be the single most important thing AI changes about your experience.
Here’s why: you probably already know what you want. You’ve seen good interfaces. You’ve used tools that felt right. The problem has never been your vision — it’s been the cost of translating that vision into reality through a process that charges by the hour and communicates through JIRA tickets.
Live wireframing collapses that translation gap. Instead of writing a requirements document that says “the dashboard should be intuitive and visually clean,” you can sit with someone like me and say “I don’t love how that looks” and see a new version before your coffee gets cold.
That means the tool your team actually gets is closer to the tool you imagined. Not the compromised version that survived the budget.
The Account Credits Screen: A Real Example
The quote-to-cash project had a component where sales reps select and apply account credits to orders. Sounds simple. It’s not.
Reps needed to see available credits, their expiration dates, how much remained on each, and select which ones to apply — sometimes partially. The interaction had to be obvious because reps would be doing this under time pressure, often on the phone with a customer.
The first version laid everything out in a table. Functional, but dense. Reps would have to scan and think. “Make the available amount bigger. And the expiration date should be more visible — if a credit is about to expire, they need to know that at a glance.”
Next version. Better hierarchy, but the selection mechanism — checkboxes plus an amount-to-apply field — wasn’t clear enough. “When they check a credit, pre-fill the amount to the full remaining balance. They can change it, but the default should be ‘use it all.’”
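The “default to the full remaining balance” rule is a one-line decision inside the selection handler. A hypothetical sketch of that logic in plain JavaScript — field names like `remainingBalance` and `amountToApply` are assumptions for illustration, not the actual component’s API:

```javascript
// Hypothetical selection handler for the credits screen.
// Checking a credit defaults the amount-to-apply to its full
// remaining balance ("use it all"); the rep can still edit it.
function toggleCredit(credit, checked) {
  return {
    ...credit,
    selected: checked,
    // Pre-fill on check; clear the amount on uncheck.
    amountToApply: checked ? credit.remainingBalance : 0,
  };
}

const credit = { id: 'C-001', remainingBalance: 250, selected: false, amountToApply: 0 };
const picked = toggleCredit(credit, true);
console.log(picked.amountToApply); // 250
```

The point of the default is that the common case — apply the whole credit — becomes zero extra keystrokes, while partial application stays one edit away.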
The input focus was jumping around weirdly after each change — every time a rep typed a number, the screen flickered. We found the cause and fixed it in the same session. The kind of subtle UX bug that would normally get filed as a ticket, deprioritized, and fixed in a follow-up sprint two weeks later got caught and resolved before anyone stood up from the desk.
By the end of that session, the credits screen worked the way a rep’s brain works: see what’s available, pick what to use, confirm, move on. The kind of interface that doesn’t need a training session because it’s just… obvious.
What This Doesn’t Replace
I’d be dishonest if I didn’t mention the limits.
Live wireframing works brilliantly for components where the interaction pattern is conversational — forms, dashboards, selection screens, summary views. It’s less useful for complex data visualizations or interactions that require deep understanding of user workflows you haven’t observed yourself.
AI will happily iterate on a layout forever, but it can’t tell you whether the layout makes sense for your users. That judgment — understanding how a sales rep actually works, what they look at first, what makes them trust or distrust a number — still comes from domain knowledge. From sitting in meetings, watching people use the tool, hearing what they complain about.
The live wireframing approach plays to AI’s strength — tireless, ego-free iteration at near-zero marginal cost — while relying on your knowledge of what “right” looks like for your users.
The Practical Takeaway
If you’re building custom Salesforce UI — whether it’s a Lightning Web Component, a Flow screen, or a record page layout — here’s what I’d suggest: stop finalizing designs before you start building.
Instead, start with a rough version and iterate live. Tell the AI what bugs you. Be as vague or as specific as you want — “this feels cluttered” is just as useful as “reduce the padding on the header by 4 pixels.” Push for the version that feels right, not just the version that works.
The reason most enterprise UIs are mediocre isn’t that people don’t care. It’s that the iteration budget runs out before the interface is actually good. AI removes that constraint.
If you’re building custom Salesforce functionality and have settled for “good enough” on the interface, I’d love to hear about it — because that’s exactly the kind of problem that live wireframing was made for. Find me on LinkedIn.