If you’ve been following the Salesforce ecosystem this year, you’ve probably seen the demos. Someone types a prompt, AI generates a component, and the audience applauds. It looks like magic.
I’ve now exchanged thousands of AI messages across multiple production Salesforce projects. I can tell you with confidence: it’s not magic. It’s better than magic, actually — it’s a real methodology that works. But the gap between what demos show and what production requires is significant, and nobody’s really talking about it.
This is that conversation.
The Projects
Last December, two clients came to me with variations of the same problem: they needed custom Salesforce functionality, they’d been quoted enterprise-level timelines and budgets, and they needed to be live by January.
Client A needed a trade program management system. Their sales team was using a third-party tool they hated — margins were inaccurate, the interface was clunky, and nobody trusted the numbers. The system needed to handle complex margin calculations across multiple product categories, support different rebate types, flag exceptions when margins dipped below targets, and provide real-time visibility into trade program performance.
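The margin-and-exception logic described above can be sketched in a few lines. This is a hypothetical illustration, not the client's actual code; the field names (netPrice, unitCost, targetMarginPct) and the rounding rule are assumptions:

```javascript
// Hypothetical sketch of per-line margin calculation with exception flagging.
// All property names are illustrative, not the client's real schema.
function evaluateLineMargin(line) {
  const { netPrice, unitCost, targetMarginPct } = line;
  if (netPrice <= 0) {
    // Can't compute a meaningful margin; flag for review.
    return { marginPct: null, flagged: true, reason: 'Non-positive net price' };
  }
  const marginPct = ((netPrice - unitCost) / netPrice) * 100;
  return {
    marginPct: Math.round(marginPct * 100) / 100, // round to 2 decimals
    flagged: marginPct < targetMarginPct,
    reason: marginPct < targetMarginPct ? 'Margin below target' : null,
  };
}
```

The real system layered rebate types and product categories on top of this, but the core pattern — compute, compare to target, flag — is the same.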
Client B needed custom quote-to-cash functionality including account credits — the ability to apply accumulated credits to orders, track credit usage across transactions, and handle partial applications and expirations. Enterprise add-on tools existed but didn’t fit the budget or the specific workflow.
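The core of the credits feature — apply accumulated credits to an order, allow partial amounts, skip expired balances — reduces to a small allocation loop. A minimal sketch, with hypothetical shapes for the credit records (id, balance, expiresOn):

```javascript
// Hypothetical sketch of applying accumulated credits to an order total.
// Supports partial application and skips expired credits; names are illustrative.
function applyCredits(orderTotal, credits, today = new Date()) {
  let remaining = orderTotal;
  const applications = [];
  for (const credit of credits) {
    if (remaining <= 0) break; // order fully covered
    if (credit.expiresOn && new Date(credit.expiresOn) < today) continue; // expired
    const amount = Math.min(credit.balance, remaining); // partial application
    if (amount <= 0) continue;
    applications.push({ creditId: credit.id, amount });
    remaining -= amount;
  }
  return { applications, amountDue: remaining };
}
```

In production this kind of logic lives server-side with locking and an audit trail per application, but the allocation shape is the same.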
Both projects involved the kind of complexity that typically requires a dedicated development team: custom Apex classes, Lightning Web Components, complex business logic, multi-object data models, and approval workflows.
I built both using AI-assisted development as my primary approach, working as a solo consultant with deep platform expertise. Both shipped on time. Both are in production today.
Three Patterns That Actually Work
Across thousands of messages and two full production builds, three patterns consistently produced the best results. I’ve since formalized these as The Grounded Build Method.
1. Share Raw Materials, Not Crafted Prompts
The internet is full of advice about “prompt engineering” — the idea that carefully structured, detailed prompts produce better AI output. In my experience building production Salesforce apps, the opposite is true.
My most effective “prompts” were almost always one sentence, a pasted error message, or a screenshot with an arrow pointing at the problem. When the trade program summary component displayed margin percentages incorrectly, I didn’t write a detailed prompt explaining my data model. I typed: “The margins are displaying as if they’re multiplied by 100.”
The AI immediately identified the issue: lightning-formatted-number with format-style="percent" multiplies its value by 100 for display, so a value already stored as a whole-number percentage gets scaled a second time. Fix deployed in minutes.
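The shape of the fix is simple once you see it: format-style="percent" expects a decimal fraction (0.28 renders as "28%"), so a whole-number value needs dividing down first (the component also offers format-style="percent-fixed", which displays the value as-is). A minimal sketch of the conversion, with illustrative property names:

```javascript
// lightning-formatted-number with format-style="percent" multiplies by 100
// for display, so it expects 0.28, not 28. If the record stores the
// whole-number form, convert before binding. Property names are illustrative.
function toPercentFraction(record) {
  return {
    ...record,
    marginForDisplay: record.marginPct != null ? record.marginPct / 100 : null,
  };
}
```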
When a component wasn’t finding records and the screen showed no error, I opened the browser console, copied the TypeError, and pasted it directly into the conversation. The AI parsed the error, identified the data type mismatch, and proposed a fix.
Why this works: When you craft a detailed prompt, you’re doing two jobs — diagnosing the problem and communicating the diagnosis. AI is often better at diagnosis than you are at description. An error message contains structured, specific information that AI is optimized to parse. Your interpretation of that same error is a lossy compression.
The practical takeaway: Share what’s in front of you. The error message you think is too messy? Share it. The screenshot where you circled the broken thing? Send it. Your job is to be honest about what’s happening, not eloquent about why.
2. Use AI as a Live Wireframing Tool for UI
User interfaces are where AI-assisted development shines brightest — and where the productivity gains are most visible to the business.
On the trade program project, I iterated through 8+ versions of the Edit Rebate component in a single session. Feedback like “move the margin summary to the top,” “make the colors more subtle,” and “push the action buttons to the bottom right” — the kind of pixel-level direction that exhausts human developers — costs essentially nothing when AI is building.
On the second project, the account credits selection screen went through multiple rounds of iteration. The interaction between selecting credits, entering partial amounts, and seeing the running total had to feel immediate and obvious. We refined the layout, the default behaviors (pre-filling the full amount when a credit is selected), and even fixed a focus-jumping bug caused by overly aggressive re-rendering — all in the same working session.
Why this matters for the business: The reason most enterprise Salesforce UIs are mediocre isn’t that people don’t care — it’s that UI iteration is expensive. Every “make it a little more like this” costs another development cycle. When iteration is nearly free, the tool your team gets is dramatically closer to the tool you imagined.
The practical takeaway: Start ugly, iterate fast. Give the AI a rough idea of what you need, then refine through conversation. “I’ll know it when I see it” becomes a legitimate development strategy.
3. Your Platform Expertise Directs the Build
This is the pillar that separates production results from demo results.
AI is a brilliant coder that has never actually logged into a Salesforce org. It doesn’t know that percent fields store values differently depending on context. It doesn’t know that cross-object formulas have a 15-relationship limit. It doesn’t know that the pragmatic Salesforce solution to a data access problem is often denormalization, not a complex calculation pipeline.
On the trade program project, the AI confidently wrote code that turned 28% into 2800% — a Salesforce percent field storage behavior it had no way to reason about. When we hit the formula relationship limit, it proposed an elaborate workaround when a simple field copy was the right answer.
These aren’t failures of AI. They’re the expected behavior of a tool that knows about Salesforce but doesn’t know Salesforce. The difference matters, and it’s where experienced platform professionals add irreplaceable value.
Why this matters: AI-assisted development doesn’t reduce the need for platform expertise. It increases it. The speed of code generation means architectural mistakes compound faster. Someone needs to catch them before they ship, and that someone needs to know the platform deeply enough to spot what looks right but isn’t.
The practical takeaway: AI handles the “how.” You provide the “what” and the “why.” And critically, you provide the “not like that” when the AI heads down a path that won’t work on your platform.
The Honest Numbers
Across both projects, here’s what the reality looked like:
What AI did well: UI components, calculation logic, data transformation, SOQL queries, test scaffolding, error handling patterns, and repetitive boilerplate. These categories probably represented 60-70% of the total code written, and AI handled them dramatically faster than manual development.
What required significant human input: Architecture decisions, data model design, business rule translation (rules that lived in people’s heads, not in any document), platform-specific optimizations, and code review. The review step alone consumed roughly 40% of my active working time — pruning defensive over-engineering, catching platform quirks, and ensuring cohesion across the codebase.
What AI couldn’t do: Gather requirements from stakeholders. Understand organizational context. Make judgment calls about what to build versus what to skip. Navigate the politics of replacing a tool people had opinions about.
Both projects shipped on time and within budget. The total build cost was a fraction of what traditional development engagements would have quoted. The client whose sales team hated their old tool? They’re using the replacement daily. The account credits system? It handles real transactions in production every day.
Should You Try This?
Here’s my framework for when AI-assisted Salesforce development makes sense:
It works when you have clear business requirements, the implementation is the bottleneck (not the discovery), and you have access to someone with deep platform expertise to direct the build and review the output.
It’s risky when the requirements are unclear, the platform knowledge is thin, or you’re expecting AI to make architectural decisions it’s not equipped to make.
It’s transformative when you pair it with real platform expertise and use the speed gains not just to build faster, but to build more — shipping features and polish that traditional budgets would have cut.
The Grounded Build Method isn’t about being impressed by AI. It’s about being honest with it — sharing what’s actually happening instead of performing for the machine — while bringing the expertise and judgment that turns generated code into software people actually use.
The age of AI-assisted Salesforce development is here. It works. And it works best when the humans in the loop know what they’re doing.