
Domain Grilling: D4 Solution Architecture

AI-Generated Content — Use for Reference Only

This content is AI-generated and has only been validated by AI review processes. It has NOT been reviewed or validated by certified Salesforce CTAs or human subject matter experts. Do not rely on this content as authoritative or completely accurate. Use it solely as a reference point for your own study and preparation. Always verify architectural recommendations against official Salesforce documentation.

Solution Architecture covers the selection of declarative versus programmatic tools, AppExchange evaluation, build versus buy decisions, and modern platform features like Agentforce. Judges test whether you can justify each technology choice with specific reasoning and demonstrate awareness of the full solution design spectrum.

Type 1: Invalid — “Your Solution Won’t Work”

These questions challenge a flaw in your design. The judge believes your approach is technically incorrect or impossible.

Q1.1: Flow with Governor Limit Violations

Judge: “You designed a Record-Triggered Flow on Opportunity that does a Get Records for related Quote Lines, loops through them recalculating prices, and then updates each one individually inside the loop. With 200 quote lines per opportunity, that’s 200 DML statements in a single transaction. You’ll hit the 150 DML limit. How does this work?”

What they’re testing: Whether you understand Flow governor limits and bulkification patterns.

Model answer: “You’re right — that’s a governor limit violation. Performing DML inside a loop is an anti-pattern in both Apex and Flow. I need to restructure the Flow. Instead of updating each quote line individually inside the loop, I would use a Collection Variable to accumulate all the modified records during the loop, and then perform a single Update Records element outside the loop on the entire collection. This reduces the DML from 200 statements to 1. I also need to ensure the Get Records element returns all 200 quote lines in a single query rather than querying inside the loop. The corrected pattern is: one Get Records to fetch all quote lines, one Loop to calculate prices and add each to a collection, one Update Records on the collection. This follows the same bulkification principle as Apex — never put DML or SOQL inside a loop.”
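The same collect-then-commit pattern can be sketched in Apex. This is an illustrative sketch only — the object choice (`QuoteLineItem`) and the pricing helper are placeholders, not from the scenario:

```apex
// Sketch: bulkified recalculation — one query, one loop, one DML statement.
public with sharing class QuoteLinePricingService {
    public static void recalculate(Set<Id> opportunityIds) {
        // Single SOQL query outside any loop
        List<QuoteLineItem> lines = [
            SELECT Id, UnitPrice, Quantity
            FROM QuoteLineItem
            WHERE Quote.OpportunityId IN :opportunityIds
        ];
        List<QuoteLineItem> toUpdate = new List<QuoteLineItem>();
        for (QuoteLineItem line : lines) {
            line.UnitPrice = applyDiscount(line); // calculate in memory
            toUpdate.add(line);                   // accumulate, do not update here
        }
        update toUpdate; // one DML statement for all lines, not one per line
    }
    private static Decimal applyDiscount(QuoteLineItem line) {
        return line.UnitPrice * 0.9; // placeholder pricing rule
    }
}
```

The Flow equivalent maps one-to-one: Get Records is the query, the assignment-into-collection is `toUpdate.add`, and the single Update Records element is the final `update`.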


Q1.2: Screen Flow as Automated Background Process

Judge: “You proposed using a Screen Flow for the automated order fulfillment process that runs when a payment is confirmed. But Screen Flows require user interaction — they display screens. An automated background process can’t present screens. How does this run unattended?”

What they’re testing: Whether you understand the difference between Flow types and their execution contexts.

Model answer: “You’re correct — I specified the wrong Flow type. Screen Flows are interactive and require a user to navigate through screens. For an automated process triggered by a payment confirmation event, I should use either a Record-Triggered Flow (if the trigger is a record change on a Payment or Order object) or a Platform Event-Triggered Flow (if the payment system publishes a Platform Event). If the flow needs to run asynchronously to avoid blocking the triggering transaction, I would use the ‘Run Asynchronously’ path on the Record-Triggered Flow or an Autolaunched Flow invoked by a Platform Event trigger. The logic itself — updating order status, creating fulfillment records, sending notifications — is the same, but the execution context must be non-interactive.”


Q1.3: AppExchange Package with Namespace Conflicts

Judge: “You recommended installing two AppExchange packages for document generation and e-signature. But both packages create triggers on the Opportunity object and you already have custom triggers there. With three sets of triggers, you’ll have unpredictable execution order and potential conflicts. Have you tested this?”

What they’re testing: Whether you understand the risks of multiple trigger execution order and managed package interactions.

Model answer: “That’s a valid concern I should have addressed in my design. Salesforce does not guarantee trigger execution order across namespaces. Two managed package triggers plus my custom trigger on Opportunity could fire in any order, and if they each perform DML or modify the same fields, the results are unpredictable. I would mitigate this in three ways. First, before recommending both packages, I would install them in a sandbox and run integration tests to verify there are no conflicts — checking for field locking issues, recursion, and governor limit consumption when all three triggers fire together. Second, I would consolidate my custom trigger into a single trigger handler framework so I have a predictable execution path for the custom logic. Third, if conflicts are found, I would evaluate whether one package can be replaced by custom development that I control, or whether one of the two packages offers both document generation and e-signature, eliminating the conflict entirely. The installation testing should happen before finalizing the architecture.”



Q1.4: Visualforce for New Development

Judge: “You proposed building a custom Visualforce page for the customer-facing configuration tool. But Visualforce is a legacy technology — Salesforce is investing in Lightning Web Components. Why aren’t you using LWC?”

What they’re testing: Whether you are current on platform direction and can justify technology choices.

Model answer: “You’re right that LWC should be the default for new UI development. Salesforce has been clear that LWC is the strategic path forward, with Visualforce in maintenance mode. I would revise this to use a Lightning Web Component exposed through an Experience Cloud site. LWC provides better performance through the Shadow DOM, native Web Components standards, and reactive data binding. The only scenario where I might still use Visualforce is if I need renderAs="pdf" for server-side PDF generation, which LWC does not support natively. For the configuration tool specifically, LWC gives me a better user experience on mobile, smaller payload sizes, and access to the Lightning Design System natively. If any part of the UI needs PDF output, I would use a combination of LWC for the interactive experience and a Visualforce page invoked only for PDF rendering.”

Type 2: Missed — “You Haven’t Addressed…”

These questions point out a requirement you didn’t cover.

Q2.1: Order of Execution Awareness

Judge: “You have Record-Triggered Flows, Apex triggers, validation rules, and workflow rules all on the Opportunity object. You haven’t described how these interact. What’s the order of execution, and how do you prevent conflicts?”

What they’re testing: Whether you understand the Salesforce order of execution and can prevent unintended interactions.

Model answer: “I should have addressed execution order explicitly. The order is: system validation first (required fields, field formats), then before-save record-triggered Flows, then before triggers (Apex), then custom validation rules, then the record is saved to the database, then after triggers (Apex), then assignment/auto-response/workflow rules, then after-save Flows (immediate path), then after-save Flows (asynchronous path). To prevent conflicts, I follow three principles. First, consolidate automation per object — one master Record-Triggered Flow and one trigger handler class, not multiple competing automations. Second, use before-save Flows for field defaulting and simple calculations that only modify the triggering record, because they are faster and don’t consume DML. Third, separate concerns clearly: Apex handles complex business logic and cross-object operations, Flows handle simple field updates and user-facing automation, and validation rules handle data integrity. I would document the automation inventory for each object in a matrix showing what fires when, in what order.”
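The "one trigger handler class per object" consolidation mentioned above is often sketched like this — a sketch with illustrative class and method names:

```apex
// Sketch: a single trigger per object delegating to one handler class,
// so custom Apex logic has exactly one predictable entry point.
trigger OpportunityTrigger on Opportunity (
    before insert, before update, after insert, after update
) {
    OpportunityTriggerHandler handler = new OpportunityTriggerHandler();
    if (Trigger.isBefore) {
        handler.beforeSave(Trigger.new, Trigger.oldMap); // defaults, simple calcs
    } else {
        handler.afterSave(Trigger.new, Trigger.oldMap);  // cross-object DML
    }
}
```

The handler class then sequences the individual pieces of logic internally, which is what makes the execution path documentable in the automation matrix.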


Q2.2: Custom Scheduling Engine Not Evaluated

Judge: “You proposed building a custom Apex scheduling engine for field service appointment optimization. Have you evaluated existing solutions? This sounds like it could be an AppExchange or native Salesforce capability.”

What they’re testing: Whether you applied the build-vs-buy evaluation framework before defaulting to custom development.

Model answer: “You’re right — I should have evaluated existing solutions first. Salesforce Field Service includes Enhanced Scheduling and Optimization (ESO), which is a native scheduling optimizer that considers technician skills, location, travel time, SLA priorities, and resource availability. ESO handles most appointment scheduling scenarios out of the box and is configurable through scheduling policies without custom code. Before building custom, I should have evaluated whether ESO meets the requirements and what gaps exist. If the scenario requires custom optimization logic beyond what ESO provides — such as specific business rules for technician rotation or custom cost optimization algorithms — I would extend ESO with custom scheduling rules rather than replacing it entirely. Building a custom scheduling engine from scratch would take months of development, require operations research expertise, and need ongoing maintenance as the business rules evolve. Unless the scenario has truly unique scheduling requirements that ESO cannot accommodate, the native or extended solution is always preferred.”


Q2.3: Agentforce / AI Evaluation

Judge: “The scenario mentions that customer service agents handle 5,000 routine inquiries per week with predictable answers. You haven’t mentioned Agentforce or Einstein Bots at all. Shouldn’t you be evaluating AI-assisted service?”

What they’re testing: Whether you are current on modern platform capabilities and evaluate AI where appropriate.

Model answer: “You’re right — with 5,000 routine, predictable inquiries per week, this is an ideal use case for AI-assisted service. I would recommend evaluating Agentforce with the Service Cloud integration. Agentforce agents can be configured with topics and actions that handle routine inquiries — order status checks, return initiation, FAQ responses — using grounded responses from Knowledge articles and CRM data via Data Cloud. The Atlas Reasoning Engine determines the appropriate action based on the customer’s intent. For the implementation, I would define specific topics for the top 10 inquiry categories, configure actions that query Order, Case, and Knowledge objects, and set up a human handoff escalation for queries the agent cannot resolve. The expected deflection rate for routine inquiries is typically 30-50% in the first phase, reducing the 5,000 weekly inquiries that require human agents. The trade-off is the initial configuration effort and the need for Data Cloud to ground the agent’s responses in accurate, current data.”


Q2.4: Exit Strategy for AppExchange Package

Judge: “You recommended a third-party AppExchange CPQ package. What’s your exit strategy if the vendor gets acquired, discontinues the product, or raises prices to unacceptable levels?”

What they’re testing: Whether you considered vendor risk and data portability in your build-vs-buy analysis.

Model answer: “Every AppExchange dependency should have an exit plan. For the CPQ package, my exit strategy has three components. First, data portability: the CPQ package creates custom objects with the vendor’s namespace. I need to verify that the quote data, pricing rules, and configuration data can be exported and remapped to either Salesforce CPQ (now Revenue Cloud) or custom objects. I would maintain a parallel data dictionary mapping vendor-namespace fields to standard terminology. Second, contractual protection: the licensing agreement should include data export rights and a minimum 12-month notice period for product discontinuation. Third, architectural isolation: I would wrap the CPQ package’s API with a facade layer so that my integrations, Flows, and reports reference the facade rather than the vendor’s objects directly. If we need to swap the vendor, only the facade implementation changes, not the entire integration and reporting layer. The facade adds development cost but significantly reduces the blast radius of a vendor change.”
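The facade layer described above can be sketched as follows. The `VendorCPQ__` namespace, object, and field names are placeholders (in practice the interface and implementation would be two separate Apex classes):

```apex
// Sketch: callers depend on the interface, never on the vendor namespace.
public interface QuoteService {
    Decimal getQuoteTotal(Id opportunityId);
}

public with sharing class VendorCpqQuoteService implements QuoteService {
    public Decimal getQuoteTotal(Id opportunityId) {
        // Only this class references vendor-namespaced objects and fields
        AggregateResult ar = [
            SELECT SUM(VendorCPQ__NetTotal__c) total
            FROM VendorCPQ__QuoteLine__c
            WHERE VendorCPQ__Opportunity__c = :opportunityId
        ];
        return (Decimal) ar.get('total');
    }
}
```

If the vendor must be swapped, a new `QuoteService` implementation is written against the replacement's data model; Flows, integrations, and reports built on the facade are untouched.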

Type 3: Suboptimal — “Have You Considered…?”

These questions suggest a potentially better approach.

Q3.1: Custom Apex vs Flow for Simple Logic

Judge: “You built a custom Apex solution for the lead assignment logic, but looking at it, it’s just field-based routing with 8 conditions. Could this be done with Flow?”

What they’re testing: Whether you defaulted to code when a declarative solution would suffice.

Model answer: “You’re right — 8-condition field-based routing is well within Flow’s capability. A Record-Triggered Flow with Decision elements can evaluate the 8 conditions and assign the lead owner accordingly, and it would be maintainable by the admin team without developer involvement. I chose Apex because the original requirement mentioned potential expansion to 50+ routing rules, and I was concerned about Flow maintainability at that scale. However, for the current 8 conditions, Flow is the right choice — it is faster to build, easier to modify, and doesn’t require a deployment cycle for rule changes. I would implement it as a before-save Record-Triggered Flow that sets the OwnerId field based on the conditions, and plan to re-evaluate when the rule count exceeds 25-30, at which point a Custom Metadata Type-driven Apex engine might scale better. Starting with Flow and evolving to Apex when complexity warrants it is better than over-engineering from day one.”
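The Custom Metadata Type-driven engine mentioned as the scale-up path might look like this sketch — `Lead_Routing_Rule__mdt` and its fields are illustrative names, not a real schema:

```apex
// Sketch: metadata-driven routing so new rules are config records, not code.
// Assumes a CMDT with Field_API_Name__c, Match_Value__c, Owner_Id__c (text),
// and Priority__c fields.
public with sharing class LeadRouter {
    public static void assignOwners(List<Lead> leads) {
        List<Lead_Routing_Rule__mdt> rules = [
            SELECT Field_API_Name__c, Match_Value__c, Owner_Id__c, Priority__c
            FROM Lead_Routing_Rule__mdt
            ORDER BY Priority__c
        ];
        for (Lead l : leads) {
            for (Lead_Routing_Rule__mdt rule : rules) {
                if (String.valueOf(l.get(rule.Field_API_Name__c)) == rule.Match_Value__c) {
                    l.OwnerId = rule.Owner_Id__c; // first matching rule wins
                    break;
                }
            }
        }
        // Called from a before-save context, so no explicit DML is needed
    }
}
```

Adding rule number 31 then means inserting a metadata record, not modifying a 31-branch Flow or redeploying code.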


Q3.2: Custom Portal vs Experience Cloud

Judge: “You proposed building a custom portal on Heroku for the partner deal registration workflow. Have you considered using Experience Cloud? It has native Salesforce data access, built-in authentication, and template-based design.”

What they’re testing: Whether you evaluated the platform-native option before going off-platform.

Model answer: “I considered Experience Cloud and chose Heroku because the original requirement mentioned a complex, highly interactive UI with real-time collaboration features that seemed beyond what Experience Cloud templates provide. However, I should re-evaluate. Experience Cloud with the LWR (Lightning Web Runtime) framework now supports custom Lightning Web Components, which can deliver sophisticated UIs. For the deal registration workflow — creating deals, uploading documents, tracking approval status — Experience Cloud provides native Salesforce data access without API calls, built-in sharing through partner roles, and authenticated access through standard login flows. The Heroku approach requires building and maintaining a separate authentication layer, API integration, and hosting infrastructure. Unless the real-time collaboration feature requires WebSocket connections or custom backend processing that Salesforce cannot support, Experience Cloud is the simpler, more maintainable choice. I would revise to Experience Cloud and only use Heroku for specific components that require custom compute.”


Q3.3: Custom LWC vs OmniStudio

Judge: “You’re building a multi-step guided process with 12 screens for customer onboarding. Have you considered OmniStudio FlexCards and OmniScripts instead of custom LWC? They’re designed for exactly this type of guided interaction.”

What they’re testing: Whether you are aware of OmniStudio and can evaluate when it is a better fit than custom development.

Model answer: “I considered OmniStudio and it is a strong fit for this use case. OmniScripts are specifically designed for multi-step guided processes with conditional branching, data pre-population, and integration callouts — exactly what the 12-screen onboarding flow requires. The advantage over custom LWC is development speed: OmniStudio’s declarative designer can build and modify the flow without Apex or LWC code, and FlexCards can display customer data summaries at each step. The trade-off is that OmniStudio requires its own license (included in some Industry Cloud editions, add-on for others) and the development team needs OmniStudio expertise, which is a narrower skill set than LWC. If the team already has OmniStudio skills or the organization is on an Industry Cloud, OmniStudio is the clear choice. If this is a one-off guided flow and the team is experienced in LWC, the custom approach avoids a new technology dependency. For 12 screens with complex branching, I would revise to OmniStudio if licensing is available.”


Q3.4: Custom Integration vs MuleSoft Composer

Judge: “You’re building custom Apex integration code for a simple Salesforce-to-Slack notification. Have you considered MuleSoft Composer? It’s a low-code integration tool for exactly these point-to-point scenarios.”

What they’re testing: Whether you evaluated simpler integration tools before writing custom code.

Model answer: “That’s a valid suggestion. For a straightforward Salesforce-to-Slack notification — triggering when a deal closes and posting to a Slack channel — MuleSoft Composer provides a no-code workflow that an admin can configure and maintain. My custom Apex approach requires a developer to build the HTTP callout, manage the Slack webhook URL in a Named Credential, handle error scenarios, and maintain the code as the Slack API evolves. MuleSoft Composer has pre-built Salesforce and Slack connectors that handle authentication and API versioning. The trade-off is that Composer has limitations for complex logic — if the notification needs conditional routing to different channels based on deal attributes, custom message formatting with complex data aggregation, or retry logic with dead letter queues, custom Apex gives more control. For this simple notification use case, Composer is the right level of tooling. I would reserve custom Apex integration for the more complex system-to-system integrations in the architecture.”
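For contrast, the custom-Apex alternative being weighed here is roughly this sketch — `Slack_Webhook` is a placeholder Named Credential, and production code would need real error logging rather than a debug statement:

```apex
// Sketch: the developer-maintained callout that Composer replaces.
public with sharing class SlackNotifier {
    @future(callout=true) // async so the triggering DML transaction isn't blocked
    public static void postDealClosed(String message) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Slack_Webhook'); // URL and auth live in setup
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(JSON.serialize(new Map<String, String>{ 'text' => message }));
        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            // Retry and error handling is custom work here; Composer's
            // connectors provide it out of the box
            System.debug('Slack notification failed: ' + res.getStatus());
        }
    }
}
```

Everything in this class — the endpoint management, serialization, and failure handling — is maintenance the admin-configured Composer flow would absorb.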

Type 4: Rationale Missing — “WHY Did You Choose…?”

These questions probe the reasoning behind a correct decision.

Q4.1: Declarative vs Code Decision

Judge: “Walk me through your decision framework for when you used Flow versus Apex in this architecture. What criteria drove each choice?”

What they’re testing: Whether you have a consistent, articulable framework rather than ad hoc decisions.

Model answer: “I apply four criteria to every automation decision. First, complexity: if the logic involves simple field updates, record creation, or straightforward branching with fewer than 15-20 conditions, Flow is preferred because admins can maintain it. When logic involves complex data transformations, recursive processing, or multi-step calculations, Apex provides better control. Second, volume: for automations processing fewer than 10,000 records per batch, Flow performs well. Above that, Batch Apex or Queueable Apex provides better governor limit management because each batch chunk gets its own limit context. Third, maintainability: who will modify this in 2 years? If the answer is an admin, Flow. If it is a developer with Apex expertise, Apex. Fourth, error handling: simple try-catch-notify patterns work in Flow. Complex retry logic, partial success handling, and integration error management are better in Apex. In this architecture, I used Flow for lead assignment, field defaulting, and notification logic. I used Apex for the high-volume nightly data processing, the complex pricing calculation, and all integration handlers.”
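The volume criterion above — each batch chunk getting its own limit context — can be sketched with a Batch Apex skeleton; the query, field, and discount are illustrative:

```apex
// Sketch: each execute() invocation gets a fresh set of governor limits,
// which is why Batch Apex wins above ~10k records.
public with sharing class NightlyPricingBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator(
            'SELECT Id, UnitPrice FROM QuoteLineItem WHERE Quote.Status = \'Draft\''
        );
    }
    public void execute(Database.BatchableContext bc, List<QuoteLineItem> scope) {
        for (QuoteLineItem line : scope) {
            line.UnitPrice = line.UnitPrice * 0.95; // placeholder recalculation
        }
        update scope; // one DML per chunk, limits reset for the next chunk
    }
    public void finish(Database.BatchableContext bc) {
        // e.g. notify admins or chain the next job
    }
}
// Kicked off with: Database.executeBatch(new NightlyPricingBatch(), 2000);
```

A single Flow transaction processing the same million rows would exhaust its one shared limit context; the batch job spreads the work across hundreds of independent contexts.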


Q4.2: Build vs Buy Reasoning

Judge: “You chose to build a custom document generation solution instead of using an AppExchange product like Conga or Formstack. Why? Those are mature, proven solutions.”

What they’re testing: Whether you can justify building custom when proven buy options exist.

Model answer: “I applied the vendor evaluation scorecard and the AppExchange options scored below 70% for this specific use case. The scenario requires generating documents that combine Salesforce data with real-time data from two external APIs — current pricing from the ERP and compliance status from a regulatory API. The AppExchange document generation tools I evaluated support Salesforce data merge but have limited support for real-time external API callouts during document generation. To achieve this with an AppExchange tool, I would need to replicate the external data into Salesforce before document generation, which introduces data freshness issues. Second, the document templates require dynamic table generation with variable-length sections based on product configurations — this exceeds the template complexity of most AppExchange tools. Third, the volume is low — approximately 200 documents per week — so the custom development cost is manageable. If the requirement were standard Salesforce-data-only document generation at high volume, Conga or similar would be the obvious choice.”


Q4.3: AppExchange Package Evaluation

Judge: “You recommended the OwnBackup AppExchange package for data backup. What were your evaluation criteria? How did you assess vendor risk?”

What they’re testing: Whether you followed a structured evaluation process, not just name recognition.

Model answer: “I evaluated OwnBackup against four competing solutions using a weighted scorecard. Functionally, OwnBackup scored highest on backup completeness — it backs up data, metadata, files, and Chatter, with point-in-time recovery and sandbox seeding. Technically, it passed the Salesforce security review, provides its own API for programmatic restore, and stores data in the customer’s chosen cloud region for data residency compliance. From a vendor perspective, OwnBackup (rebranded as Own Company) has thousands of customers and was acquired by Salesforce in 2024, which provides strong financial backing but concentrates the backup dependency on the platform vendor itself — I documented this and identified Grax as the fallback alternative. Operationally, it supports push upgrades with release notes, has a documented API for automation, and provides data export in standard formats for portability. The contractual terms include data export rights and a service-level commitment on recovery time. The weighted score was 82%, which exceeded my 70% threshold. The primary risk is vendor concentration following the Salesforce acquisition, which I mitigate by ensuring data is always exportable.”


Q4.4: LWC vs Aura Component Choice

Judge: “Why did you choose Lightning Web Components over Aura Components for the custom UI?”

What they’re testing: Whether you understand the technical differences and strategic direction.

Model answer: “I chose LWC for four reasons. First, performance: LWC uses native Web Components standards, which means the browser handles the component lifecycle natively rather than through Aura’s custom framework layer. This results in faster rendering and smaller bundle sizes. Second, Salesforce’s strategic investment is in LWC — new features and components are being released for LWC first, and Aura is in maintenance mode. Third, developer experience: LWC uses standard HTML, JavaScript, and CSS with minimal framework overhead, making it easier to hire developers and leverage the broader web development ecosystem. Fourth, interoperability: LWC components can be embedded in Aura containers (for backward compatibility) but not vice versa, so starting with LWC ensures forward compatibility. The only scenario where I would still use Aura is if I needed to embed a component in a context that only supports Aura — certain Lightning Out deployments or Visualforce pages with Lightning components. For this architecture, all UI components are in Lightning pages or Experience Cloud, where LWC is fully supported.”

Type 5: Cascading — “If You Change X, What Happens to Y?”

These questions test cross-domain dependency awareness.

Q5.1: Moving from Declarative to Programmatic Impact

Judge: “You just acknowledged that the Flow-based pricing logic won’t scale and needs to be rebuilt in Apex. What else changes when you move from declarative to programmatic?”

What they’re testing: Whether you understand the operational and governance implications of switching from clicks to code.

Model answer: “Moving from Flow to Apex cascades through governance, testing, and operations. First, deployment: Flows can be activated and deactivated directly in production by admins. Apex requires a deployment pipeline — development in sandbox, test coverage at 75% minimum, deployment via change set or CLI. This means the pricing logic now requires developer involvement for any changes, extending the change cycle from hours to days. Second, testing: Flow test coverage is informal. Apex requires unit tests with assertions, which is better for quality but adds development effort. Third, error handling: Flow’s fault paths are visual and admin-understandable. Apex try-catch blocks require developer interpretation. I need to build custom error logging that admins can monitor. Fourth, the team skills matrix changes — the admin who previously maintained the pricing Flow now depends on a developer. Fifth, the CI/CD pipeline must be updated to include the new Apex class and test class in the deployment package. I would document a decision record explaining why the migration was necessary and what the ongoing maintenance model looks like.”
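The testing obligation described above — formal unit tests with assertions, which Flow never required — looks like this sketch; `PricingEngine` and its `calculate` method are hypothetical names:

```apex
// Sketch: the 75%-coverage unit test that migrating to Apex newly requires.
@isTest
private class PricingEngineTest {
    @isTest
    static void appliesVolumeDiscount() {
        Opportunity opp = new Opportunity(
            Name = 'Test Deal',
            StageName = 'Prospecting',
            CloseDate = Date.today().addDays(30)
        );
        insert opp;

        Test.startTest();
        Decimal price = PricingEngine.calculate(opp.Id, 250); // hypothetical API
        Test.stopTest();

        // Assertions make pricing regressions visible at deploy time —
        // a quality gate the Flow version never had
        System.assertNotEquals(null, price, 'Price should be calculated');
    }
}
```

This is the concrete cost of the migration (test authoring and maintenance) and also its concrete benefit (every deployment re-verifies the pricing logic).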


Q5.2: Removing an AppExchange Package Impact

Judge: “The vendor for your CPQ package just announced end-of-life in 18 months. What is the impact on your architecture and what’s the migration path?”

What they’re testing: Whether you understand the blast radius of removing an embedded AppExchange package.

Model answer: “Removing a CPQ package has one of the largest blast radii of any architectural change. Data: all quotes, quote lines, pricing rules, and configuration data stored in the vendor’s namespaced objects must be migrated to new objects — either Salesforce CPQ (Revenue Cloud), another vendor, or custom objects. This is a significant data migration project. Integrations: any integration that reads or writes CPQ objects needs new field mappings and API endpoints. Reports and dashboards: every report referencing vendor-namespaced fields must be rebuilt. Automations: Flows and Apex that reference vendor objects and fields need to be rewritten. UI: any Lightning pages, page layouts, or Experience Cloud pages that display vendor components need replacement. User training: the sales team needs retraining on the new quoting workflow. The migration path over 18 months would be: months 1-3, evaluate replacements and select the target; months 4-8, build the new CPQ solution in parallel; months 9-12, migrate data and retrain; months 13-15, parallel run with both systems; months 16-18, cutover and decommission. The facade layer I mentioned earlier would significantly reduce the blast radius if I had implemented it from the start.”


Q5.3: Adding Agentforce Impact on Data Architecture

Judge: “You want to add Agentforce to handle routine service inquiries. How does that change your data architecture?”

What they’re testing: Whether you understand that AI features have data infrastructure requirements.

Model answer: “Adding Agentforce cascades into the data layer in several ways. First, Data Cloud: Agentforce uses Data Cloud for grounding — providing the agent with accurate, current customer context. This means I need to configure Data Cloud with data streams ingesting from Service Cloud (Cases, Knowledge, Orders) and potentially external systems, landing in Data Lake Objects mapped to Data Model Objects. The identity resolution in Data Cloud must be configured to unify customer records so the agent has a 360-degree view. Second, Knowledge management: the agent’s ability to answer questions depends on having a well-structured Knowledge base. I need to ensure Knowledge articles are current, categorized, and tagged appropriately for agent retrieval. Third, the Prompt Templates that define the agent’s behavior reference specific Salesforce objects and fields — any data model changes to the objects the agent queries would require updating the prompt templates. Fourth, the agent’s actions — creating cases, updating order status, initiating returns — must align with the existing automation on those objects, so the agent’s DML operations trigger the correct Flows and triggers. Fifth, monitoring: I need to track agent performance data, which means new objects or fields for conversation logs, resolution metrics, and escalation patterns.”


Q5.4: Switching from Custom to AppExchange Impact

Judge: “You decide to replace your custom scheduling solution with Salesforce Field Service. What ripple effects does this have across your architecture?”

What they’re testing: Whether you understand the broad architectural impact of adopting a platform product that touches multiple domains.

Model answer: “Adopting Field Service is not just a scheduling replacement — it introduces a comprehensive data model and automation framework. Data model: Field Service adds Work Orders, Service Appointments, Service Resources, Service Territories, and related objects. My custom scheduling objects must be migrated to these standard objects, and any references in Apex, Flows, or reports must be updated. Sharing model: Field Service has its own sharing patterns — Service Resource records determine who sees which work orders, and territory-based sharing replaces my custom sharing logic. Mobile: Field Service Mobile replaces any custom mobile solution for technicians, which means re-evaluating the mobile strategy and offline requirements. Integration: the ERP integration that fed scheduling data to my custom objects now needs to integrate with the Field Service data model — different objects, different fields, different API patterns. Licensing: Field Service is a Service Cloud add-on with its own per-user licensing, adding cost. Skills and training: the admin and dev team need Field Service certification or training. The payoff is eliminating custom maintenance and gaining native scheduling optimization, but the migration is a 3-6 month program.”

This is a personal study site for Salesforce CTA exam preparation. Built with AI assistance. Not affiliated with Salesforce.