Domain Grilling: D6 Development Lifecycle
AI-Generated Content — Use for Reference Only
This content is AI-generated and has only been validated by AI review processes. It has NOT been reviewed or validated by certified Salesforce CTAs or human subject matter experts. Do not rely on this content as authoritative or completely accurate. Use it solely as a reference point for your own study and preparation. Always verify architectural recommendations against official Salesforce documentation.
Development Lifecycle is the domain most candidates shortchange — often giving generic answers like “we will use CI/CD” without specifying tools, gates, or branching strategies. The review board expects concrete, scenario-specific details about how you build, test, deploy, and govern the solution.
Type 1: Invalid — “Your Solution Won’t Work”
Q1.1: Unlocked package component migration
Judge: “You proposed unlocked packages — what happens to your deployment if you need to move a custom object between packages?”
What they’re testing: Understanding of unlocked package constraints around component ownership.
Model answer: “This is a known challenge with unlocked packages. A metadata component can only belong to one package at a time. Moving a custom object from the Core package to the Sales package requires: first, removing it from the Core package in a new version, then adding it to the Sales package. However, if any other package has a dependency on that object through Core, those dependencies break. The correct approach is to plan the package architecture carefully upfront — shared objects belong in the Core base package, and only domain-specific objects go in domain packages. If this scenario genuinely requires moving a custom object, I would first remove references to it from other packages, release a new version of the source package without the object, then add it to the target package. This is effectively a destructive change and requires a coordinated release across all dependent packages.”
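That upfront package architecture is captured in sfdx-project.json. A minimal sketch, assuming hypothetical package names, paths, and version numbers — shared objects live in Core, and Sales declares an explicit dependency on it:

```json
{
  "packageDirectories": [
    {
      "path": "core",
      "package": "Core",
      "versionNumber": "1.2.0.NEXT",
      "default": true
    },
    {
      "path": "sales",
      "package": "Sales",
      "versionNumber": "1.0.0.NEXT",
      "dependencies": [
        { "package": "Core", "versionNumber": "1.2.0.LATEST" }
      ]
    }
  ],
  "namespace": "",
  "sourceApiVersion": "60.0"
}
```

Moving an object from core/ to sales/ means releasing a new Core version that no longer contains it, which is why dependent packages must be promoted in a coordinated order.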
Q1.2: Change sets with rollback requirement
Judge: “Your deployment plan says ‘deploy via change sets with rollback capability.’ Change sets do not support rollback. How do you handle a failed production deployment?”
What they’re testing: Whether you understand change set limitations.
Model answer: “You are correct — change sets have no rollback capability, no version history, and no automated testing. My statement was inaccurate. For a scenario requiring reliable rollback, I would use Salesforce CLI deployments from a Git repository. Each deployment is a versioned artifact in Git, so rollback means redeploying the previous commit. For critical deployments, I would run a check-only deployment first to validate without changes, then a full deployment during the change window. If the team is mature enough, unlocked packages provide even better rollback — you install the previous promoted package version. Change sets are only acceptable for small admin teams making low-risk configuration changes, and even then, I would recommend migrating to CLI-based deployments.”
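The check-only/rollback flow reduces to a handful of CLI steps. A sketch, assuming a target-org alias of prod, a source-format project, and a hypothetical release tag — exact flag names vary by sf CLI version and should be verified:

```shell
# Validate against production without saving anything (check-only deploy)
sf project deploy start --dry-run --source-dir force-app --target-org prod

# Real deployment during the change window
sf project deploy start --source-dir force-app --target-org prod

# Rollback = redeploy the previous known-good commit from Git
git checkout v2024.10.1   # hypothetical release tag
sf project deploy start --source-dir force-app --target-org prod
```

One caveat worth stating at the board: redeploying a previous commit overwrites changed components but does not remove components the failed deployment added — removals require an explicit destructive-changes manifest.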
Q1.3: Performance testing on a Developer sandbox
Judge: “You say you will performance test in a Developer sandbox. That sandbox has 200 MB of storage and no production data. How is that valid?”
What they’re testing: Understanding of which tests require which environments.
Model answer: “A Developer sandbox is completely invalid for performance testing. Performance tests are only meaningful against production-scale data volumes, which requires a Full Copy sandbox. My test environment mapping should be: unit tests and component tests in scratch orgs or Developer sandboxes, integration tests in the SIT Partial Copy sandbox, UAT in a dedicated Partial Copy sandbox with representative data, and performance tests exclusively in the Full Copy Staging sandbox. The Staging sandbox has the same storage as production and copies all production data, making it the only environment where query performance, batch processing times, and page load metrics reflect real-world behavior. I would revise my plan accordingly.”
Q1.4: Scratch orgs for UAT with business users
Judge: “You proposed scratch orgs for UAT. How do your business users who are not familiar with Salesforce CLI access a scratch org?”
What they’re testing: Understanding the difference between scratch orgs (developer-centric, ephemeral) and sandboxes (persistent, user-accessible).
Model answer: “Scratch orgs are not suitable for UAT. They are ephemeral — they expire after a maximum of 30 days, default 7 — and require CLI access to create and manage. Business users need a persistent environment with representative data and a stable URL they can bookmark. For UAT, I would use a Partial Copy sandbox with a sandbox template that includes enough production data to be representative. Business users log in with their production username with the sandbox name appended, so no new credentials are needed. Test data masking ensures PII compliance. The scratch orgs in my architecture serve CI/CD pipeline validation and developer feature isolation, not end-user testing.”
Type 2: Missed — “You Haven’t Addressed…”
Q2.1: Merge conflict management
Judge: “You have 4 developer sandboxes and one UAT sandbox. Who manages merge conflicts and how?”
What they’re testing: Whether your development lifecycle addresses the realities of parallel development.
Model answer: “I should have addressed merge conflict management explicitly. With 4 developers working in parallel, my branching strategy uses feature branches off a develop integration branch. Each developer works in their own feature branch and Developer sandbox. When a feature is ready, the developer creates a pull request to merge into develop. The PR triggers a CI pipeline that creates a scratch org, deploys the merged code, and runs all tests. If the merge has conflicts — particularly common with Salesforce metadata XML files like custom object definitions — the developer resolves conflicts locally using source format, which decomposes metadata into individual files for cleaner diffs. The tech lead reviews and approves PRs. The develop branch deploys to the SIT Partial Copy sandbox for integration testing. This flow ensures conflicts are caught and resolved before they reach UAT.”
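Source format is what makes those metadata merges tractable: instead of one monolithic .object file, each field and list view is its own file. A sketch of the decomposed layout for a hypothetical Invoice__c object:

```
force-app/main/default/objects/Invoice__c/
├── Invoice__c.object-meta.xml
├── fields/
│   ├── Amount__c.field-meta.xml
│   └── Status__c.field-meta.xml
└── listViews/
    └── All_Invoices.listView-meta.xml
```

Two developers adding different fields now change different files, so Git merges cleanly instead of conflicting inside one large XML document.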
Q2.2: Post-refresh sandbox automation
Judge: “Your Full Copy staging sandbox contains production customer data. What happens after a refresh? Who masks the PII?”
What they’re testing: Data masking and post-refresh automation.
Model answer: “Every sandbox refresh triggers automated post-copy processing. I implement the SandboxPostCopy Apex interface that runs automatically after each refresh. The post-copy script performs four critical tasks: first, PII fields — names, emails, phone numbers, and financial data — are anonymized via Salesforce Data Mask or a custom masking script, which is a compliance requirement under GDPR and, where applicable, HIPAA. Second, integration endpoint URLs are updated from production to sandbox equivalents via Named Credential overrides. Third, email deliverability is disabled to prevent test emails reaching real customers. Fourth, test users and admin accounts are created with appropriate profiles. This is automated, not manual — manual post-refresh checklists get skipped under time pressure and create compliance risk.”
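The automation hook itself is a small Apex class. A minimal sketch — the SandboxPostCopy interface and runApexClass signature are standard, but the helper classes (SandboxDataMasker, SandboxEndpointConfig, SandboxUserFactory) are hypothetical:

```apex
// Registered as the "Apex Class" when creating or refreshing the sandbox;
// Salesforce invokes runApexClass automatically after each copy completes.
global class PrepareSandbox implements SandboxPostCopy {
    global void runApexClass(System.SandboxContext context) {
        // 1. Mask PII (hypothetical helper; Salesforce Data Mask runs as its own job)
        SandboxDataMasker.maskAll();
        // 2. Repoint integrations: apply sandbox-specific Named Credential values
        SandboxEndpointConfig.applyFor(context.sandboxName());
        // 3. Create test users and admin accounts with appropriate profiles
        SandboxUserFactory.createTestUsers();
        // Note: email deliverability is an org setting, toggled via Metadata API,
        // not from Apex.
    }
}
```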
Q2.3: Definition of Done missing
Judge: “What is your definition of done and why is it important?”
What they’re testing: Governance maturity and shared team standards.
Model answer: “The definition of done is the shared checklist that a feature must pass before it is considered complete. For this project, a feature is done when: code has 85% or higher meaningful test coverage with assertions that validate business outcomes, not just code paths. A pull request has been reviewed and approved by at least one peer. Static analysis via PMD for Apex and ESLint for LWC shows no critical issues. The feature has been deployed to SIT and passes integration tests. UAT test scripts have been written. And documentation has been updated, including ADRs for any architectural decisions. Without a shared definition of done, the team produces inconsistent quality, and ‘done’ becomes a negotiation rather than a standard. This is particularly important when mixing declarative admins and programmatic developers on the same team.”
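What “meaningful coverage with assertions that validate business outcomes” looks like in practice: the test asserts the expected result, not merely that lines executed. A sketch with hypothetical class, method, and field names:

```apex
@IsTest
private class DiscountServiceTest {
    @IsTest
    static void appliesTierDiscountForGoldAccounts() {
        // Arrange: a Gold-tier account (Tier__c is a hypothetical field)
        Account a = new Account(Name = 'Acme', Tier__c = 'Gold');
        insert a;

        Test.startTest();
        Decimal price = DiscountService.priceFor(a.Id, 100); // hypothetical service
        Test.stopTest();

        // Assert the business outcome, not just that the method ran
        Assert.areEqual(90, price, 'Gold accounts should receive a 10% discount');
    }
}
```

A test that calls priceFor and asserts nothing would still earn coverage — which is exactly why the definition of done specifies assertions, not a percentage alone.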
Q2.4: No regression testing strategy
Judge: “You’ve described your initial testing plan but said nothing about regression testing for future releases. How do you prevent breaking existing functionality?”
What they’re testing: Long-term testing sustainability beyond the initial implementation.
Model answer: “Regression testing is critical as the org evolves. My strategy has three layers. First, an automated regression suite using Provar or Copado Robotic Testing that covers the top 20 critical business workflows — these run on every deployment to SIT and UAT. Second, the Apex unit tests serve as a code-level regression net — every PR must maintain 85%+ coverage and all existing tests must pass. Third, for declarative changes like Flow modifications, I would implement Flow test coverage using Salesforce’s native Flow testing framework to catch regressions in automation. The regression suite is maintained as part of the definition of done — any new feature must include regression tests for the scenarios it could affect.”
Type 3: Suboptimal — “Have You Considered…?”
Q3.1: Hotfix strategy with GitFlow
Judge: “Your branching strategy has feature branches merging to main. How do you handle a hotfix that needs to skip the current release cycle?”
What they’re testing: Whether your branching strategy accounts for emergency scenarios.
Model answer: “My modified GitFlow handles this. A hotfix branches directly off main, not develop. The developer creates a hotfix/JIRA-999-description branch, implements the fix, and the CI pipeline validates it against a scratch org. The fix follows an expedited CAB approval process — CAB chair approval only, with a post-implementation review within 48 hours. After production deployment, the hotfix branch is merged back into both main and develop to keep both branches current. The key distinction is that the hotfix completely bypasses the release/2024-Q4 branch and the UAT cycle, which is acceptable for emergency fixes but requires the post-implementation review as a governance gate. I would also use a feature flag if the hotfix risks side effects, allowing a quick kill switch.”
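The merge-back step is the part teams forget. A runnable sketch of the flow in plain Git, with hypothetical branch and ticket names — after shipping, the hotfix must land on both main and develop:

```shell
set -e
cd "$(mktemp -d)" && git init -q -b main
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "release"
git branch develop                          # long-lived integration branch

git checkout -q -b hotfix/JIRA-999 main     # hotfix branches off main, not develop
echo "fix" > patch.txt && git add patch.txt
git -c user.email=ci@example.com -c user.name=ci commit -q -m "hotfix: JIRA-999"

git checkout -q main    && git merge -q --no-edit hotfix/JIRA-999  # ship to production
git checkout -q develop && git merge -q --no-edit hotfix/JIRA-999  # keep develop current
git log --oneline develop | grep -c "JIRA-999"   # prints 1: fix is on develop too
```

Skipping the second merge is what causes the classic regression where the next regular release silently reverts the hotfix.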
Q3.2: Sandboxes vs scratch orgs for the scenario
Judge: “Why sandboxes instead of scratch orgs for your development environments in this scenario?”
What they’re testing: Whether you have evaluated both options and chosen deliberately.
Model answer: “I considered both. For this scenario, I chose Developer sandboxes for individual development because the implementation relies heavily on declarative configuration — Flows, page layouts, validation rules — that admins build directly in the org and retrieve to source control. Scratch orgs require deploying all metadata from source every time, which works well for code-centric development but creates friction for admin-heavy work. However, I use scratch orgs in the CI/CD pipeline for automated validation — each PR creates a fresh scratch org, deploys, and runs tests, then the scratch org is destroyed. This hybrid approach gives admins a persistent workspace while giving the pipeline clean, reproducible validation environments. If the team were primarily developers building LWC and Apex, I would lean more heavily toward scratch orgs for development as well.”
Q3.3: Manual UAT vs automated regression
Judge: “If you change your testing strategy from manual UAT to automated regression, what impact does that have on your timeline and sandbox strategy?”
What they’re testing: Understanding of the cascading impact of testing strategy changes.
Model answer: “Introducing automated regression testing has three impacts. First, timeline: building the automated regression suite takes 2-3 sprints of upfront investment for test creation and framework setup using a tool like Provar. This extends the initial delivery but reduces each subsequent release cycle by eliminating manual regression test execution. Second, sandbox strategy: automated tests can run in the CI pipeline against scratch orgs, reducing the dependency on the UAT Partial Copy sandbox for regression. The UAT sandbox becomes focused on exploratory testing and new feature validation by business users. Third, team skills: the QA team needs training on the automation tool. The trade-off is higher upfront investment for lower long-term cost and faster release cadence.”
Q3.4: DevOps Center vs CLI pipeline
Judge: “Have you considered using Salesforce DevOps Center instead of building a custom CI/CD pipeline with GitHub Actions?”
What they’re testing: Awareness of platform-native alternatives and when they are sufficient.
Model answer: “I evaluated DevOps Center. It provides a UI layer over Git-based source control that is accessible to admins who are not comfortable with CLI tools, and it supports both declarative and programmatic developers working together. For a mixed admin/developer team with moderate complexity, DevOps Center is a viable choice. However, for this scenario with 15+ developers, complex package dependencies, and integration with Jira and Slack for notifications, a custom GitHub Actions pipeline gives more control over quality gates — PMD static analysis, custom scratch org definition files, parallel test execution, and integration with third-party monitoring. I would recommend DevOps Center for the admin team’s declarative changes and the GitHub Actions pipeline for the developer team’s programmatic changes, both backed by the same Git repository.”
Type 4: Rationale Missing — “WHY Did You Choose…?”
Q4.1: CI/CD specifics demanded
Judge: “You said ‘we will use CI/CD.’ Specifically, what tool, what triggers deployment, and what gates must pass before code reaches production?”
What they’re testing: Whether your CI/CD answer is concrete or hand-waving.
Model answer: “The CI/CD pipeline uses GitHub Actions. The trigger is a pull request to the develop branch. The pipeline runs six gates in sequence: first, static analysis with PMD for Apex and ESLint for JavaScript, which blocks merge on critical findings. Second, a scratch org is created from a definition file that mirrors production features. Third, source is deployed to the scratch org. Fourth, Apex unit tests run with an 85% coverage minimum and all tests must pass. Fifth, LWC Jest tests run. Sixth, a check-only deployment to the SIT sandbox validates metadata compatibility. After merge, a deployment trigger deploys to SIT automatically. Deployment to UAT requires manual approval by the PM. Deployment to production requires CAB approval, a check-only validation, and execution during the approved deployment window — Tuesdays and Thursdays, 6-10 PM.”
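The first five gates map to a single GitHub Actions job. A sketch, assuming hypothetical file paths and a Dev Hub already authenticated via JWT in an earlier step — the sf and pmd invocations follow current CLI syntax but should be verified against the versions you pin:

```yaml
name: pr-validation
on:
  pull_request:
    branches: [develop]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Salesforce CLI
        run: npm install --global @salesforce/cli
      # Gate 1: static analysis blocks merge on critical findings
      - name: Static analysis
        run: |
          pmd check --dir force-app --rulesets config/pmd-ruleset.xml
          npx eslint force-app/main/default/lwc
      # Gates 2-4: fresh scratch org, deploy, Apex tests with coverage
      - name: Scratch org validation
        run: |
          sf org create scratch --definition-file config/project-scratch-def.json \
            --alias ci --set-default --wait 15
          sf project deploy start --source-dir force-app
          sf apex run test --code-coverage --result-format human --wait 30
      # Gate 5: LWC Jest tests
      - name: Jest tests
        run: npm test -- --ci
      - name: Clean up scratch org
        if: always()
        run: sf org delete scratch --target-org ci --no-prompt
```

Gate 6 — the check-only validation against SIT — runs as a separate job so a scratch-org failure surfaces before a sandbox deployment is attempted.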
Q4.2: Branching strategy rationale
Judge: “Why modified GitFlow instead of trunk-based development for this team?”
What they’re testing: Whether you chose your branching strategy deliberately or by default.
Model answer: “I chose modified GitFlow for three reasons specific to this scenario. First, the team has 12 developers across 3 workstreams with a monthly release cadence — long-lived feature branches and a release branch allow parallel development without destabilizing the integration branch. Trunk-based development requires short-lived branches merged within hours, which demands high CI/CD maturity and comprehensive automated testing — this team is not there yet. Second, Salesforce metadata merges are more painful than traditional code merges — XML-based metadata files create verbose diffs, making frequent trunk merges risky. Source format helps, but merge conflicts on shared objects are still common. Third, the customer requires a formal UAT sign-off before production — the release branch provides a stable cut point for UAT while development continues on the develop branch.”
Q4.3: CoE model justification
Judge: “You recommended a Hybrid Center of Excellence. Why not centralized, given this is a single Salesforce org?”
What they’re testing: Understanding of organizational governance models beyond just technology.
Model answer: “Even with a single org, the customer has 3 business units with different Salesforce needs — Sales, Service, and Marketing. A centralized CoE creates a bottleneck: every change request from any BU routes through one central team, slowing delivery and disconnecting the CoE from business context. A purely federated model gives each BU autonomy but risks inconsistent standards, duplicated effort, and architectural drift. The Hybrid CoE balances both: the central team owns architecture standards, the ARB, shared platform services like security and integration patterns, and the CI/CD pipeline. Each BU has a delivery team that builds features within those standards. This gives BUs the agility to deliver at their own pace while the central team ensures consistency, governs shared objects, and prevents technical debt accumulation.”
Q4.4: Deployment window rationale
Judge: “Why Tuesdays and Thursdays for deployments? What is the reasoning behind that schedule?”
What they’re testing: Whether your deployment schedule is deliberate and risk-informed.
Model answer: “The deployment schedule is based on four risk factors. First, avoid Mondays — the team has the least context on Friday’s changes and Monday is the highest-traffic business day. Second, avoid Fridays — a failed Friday deployment leaves the team scrambling over the weekend with reduced staff. Third, mid-week gives 2-3 business days of monitoring before the weekend to catch issues. Fourth, the 6-10 PM window is off-peak for user activity, reducing the blast radius of any deployment issues. Two weekly windows give enough frequency for the monthly release cadence while providing fallback — if Tuesday’s deployment fails, Thursday is the backup. This is standard ITIL change management practice adapted for Salesforce.”
Type 5: Cascading — “If You Change X, What Happens to Y?”
Q5.1: Testing strategy change cascading
Judge: “You just doubled your estimated data volume. What changes in your environment strategy and testing approach?”
What they’re testing: Whether environment and testing decisions are tied to data volume assumptions.
Model answer: “Doubling data volume cascades through three areas. First, environment strategy: my Partial Copy sandbox templates need revision — the 10K records per object limit may be insufficient for representative integration testing. I may need to upgrade the SIT sandbox from Partial Copy to Full Copy, which changes the refresh cycle from 5 days to 29 days and impacts sprint planning. Second, testing approach: performance testing becomes critical rather than nice-to-have — I need load testing with production-scale data, SOQL query plan analysis for selectivity, and batch Apex testing with the actual record counts. Third, my Bulk API integration must be re-evaluated — 500K records might now be 1 million, requiring partitioned jobs and potentially off-peak scheduling to stay within the 15,000 daily batch limit.”
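The Bulk API headroom claim is worth backing with quick arithmetic (assumed numbers matching the scenario; 10,000 is the Bulk API records-per-batch maximum):

```shell
records=1000000      # doubled volume: 1M records per load
batch_size=10000     # Bulk API maximum records per batch
echo "$(( records / batch_size )) batches per job"   # 100 — far below the 15,000/24h batch limit
```

Even at 1M records the job consumes well under 1% of the daily batch allowance, so the real constraints are job runtime and off-peak scheduling, not the limit itself.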
Q5.2: Adding a new cloud cascading impact
Judge: “The business just told you they are also implementing Marketing Cloud. How does that change your development lifecycle?”
What they’re testing: Multi-cloud development lifecycle complexity.
Model answer: “Adding Marketing Cloud impacts the lifecycle in five ways. First, a separate deployment pipeline — Marketing Cloud uses its own deployment tools (Marketing Cloud package manager, SFMC DevTools) that do not integrate with the Salesforce CLI pipeline. I need a parallel CI/CD workflow. Second, environment strategy: Marketing Cloud has separate sandbox tiers (business units in a separate instance), and sandbox refreshes must be coordinated with Salesforce sandbox refreshes to keep integration endpoints aligned. Third, testing: the integration between Sales Cloud and Marketing Cloud via MC Connect needs integration tests in a coordinated sandbox pair. Fourth, governance: the ARB needs Marketing Cloud representation, and the CoE needs a Marketing Cloud specialist. Fifth, source control: Marketing Cloud assets — email templates, journeys, automations — need their own repository or a subdirectory in the monorepo with separate build triggers.”
Q5.3: Switching from change sets to packages
Judge: “The customer currently uses change sets. You are proposing unlocked packages. Walk me through the migration path and what happens to their existing metadata.”
What they’re testing: Practical migration planning, not just the ideal end state.
Model answer: “The migration from change sets to unlocked packages is a phased transition, not a big bang. Phase 1: establish a Git repository and retrieve the full production metadata using Salesforce CLI. Convert to source format. This becomes the source of truth. Phase 2: design the package architecture — typically a Core base package for shared objects and fields, plus domain packages for Sales, Service, and Integration metadata. Phase 3: create the package definitions in sfdx-project.json, mapping existing metadata to packages. Phase 4: create initial package versions from the existing metadata — this is the riskiest step because any metadata that does not fit cleanly into a package must be resolved. Phase 5: set up CI/CD pipelines that build and validate packages. During the transition, the team may use CLI deployments as an intermediate step before fully adopting packages. The existing change set history is lost, but Git history replaces it. I would allow 2-3 sprints for this migration alongside feature work.”
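Phase 1 is mostly CLI mechanics. A sketch, assuming an authenticated org alias of prod — command and flag names follow current sf CLI syntax and should be verified before use:

```shell
# Scaffold a source-format project
sf project generate --name org-source && cd org-source

# Build a manifest of everything in production, then retrieve it
sf project generate manifest --from-org prod --name package.xml
sf project retrieve start --manifest package.xml --target-org prod

# Commit the retrieved source as the new source of truth
git init && git add . && git commit -m "Baseline: production metadata"
```

From this baseline, the package directories in sfdx-project.json are carved out incrementally in Phases 2-4.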
Q5.4: Release cadence change impact
Judge: “The business wants to move from monthly releases to weekly releases. What has to change in your lifecycle?”
What they’re testing: Understanding of what enables faster release cadence.
Model answer: “Moving from monthly to weekly releases requires changes across four areas. First, branching strategy: modified GitFlow with release branches becomes too heavy for weekly cadence — I would shift toward trunk-based development with short-lived feature branches and feature flags for incomplete work. Second, testing: manual UAT for every release is not feasible weekly — I need automated regression testing covering the critical business workflows, with manual UAT reserved for net-new features. Third, deployment: the Tuesday/Thursday deployment windows need to expand, and deployment must be fully automated with a single-click production deploy after automated gate passage. Fourth, governance: the CAB cannot meet weekly for every change — I would introduce a ‘standard change’ classification for pre-approved change types that deploy without CAB review, reserving CAB for ‘normal’ and ‘major’ changes. This is a significant maturity jump that requires investment in automation and team training.”
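On-platform feature flags are commonly implemented as custom permissions. A sketch — New_Quote_Flow and the service classes are hypothetical, but FeatureManagement.checkPermission is standard Apex:

```apex
// Ship incomplete work "dark": the new path only runs for users who have
// been granted the custom permission via a permission set.
if (FeatureManagement.checkPermission('New_Quote_Flow')) {
    QuoteServiceV2.process(quoteId);   // new code path (hypothetical)
} else {
    QuoteService.process(quoteId);     // existing behavior (hypothetical)
}
```

Assigning or removing the permission set acts as the kill switch, with no deployment required — which is what makes trunk-based development safe at a weekly cadence.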