User Acceptance Testing Checklist

Scope and Environment Setup

    Pull the Jira epic and confirm every story tagged for this release has acceptance criteria written. Stories without explicit AC are the most common reason UAT slides — testers don't know what 'done' looks like and product owners disagree at sign-off. Flag any AC gaps to the PM before test planning starts.

    Deploy the release candidate build to the dedicated UAT cluster. Confirm it points at the UAT database, the UAT Stripe test keys, and sandbox endpoints for any third-party integrations (Salesforce, HubSpot, etc.) — never prod credentials.
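
    A lightweight startup guard can enforce this; the TypeScript sketch below is one way to fail fast if the UAT deployment is wired to anything that looks like production. The env var names are placeholders for whatever your deploy pipeline sets; Stripe test-mode secret keys do start with sk_test_.

```typescript
// uat-env-guard.ts — fail fast if the UAT build is pointed at prod resources.
// Env var names are illustrative; substitute whatever your pipeline provides.
const required = ["DATABASE_URL", "STRIPE_SECRET_KEY", "SALESFORCE_BASE_URL"] as const;

for (const name of required) {
  const value = process.env[name];
  if (!value) throw new Error(`${name} is not set in the UAT environment`);

  // Stripe test-mode secret keys begin with sk_test_; a live key here means
  // prod credentials leaked into the UAT config.
  if (name === "STRIPE_SECRET_KEY" && !value.startsWith("sk_test_")) {
    throw new Error("STRIPE_SECRET_KEY is not a test-mode key");
  }
  // Crude but effective: refuse anything that references a production host.
  if (name !== "STRIPE_SECRET_KEY" && /prod/i.test(value)) {
    throw new Error(`${name} appears to reference a production resource: ${value}`);
  }
}

console.log("UAT environment checks passed");
```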

    Run the data scrubber to mask PII (emails, phone numbers, names) before loading into UAT. Synthetic-only data misses real-world edge cases (long tenant names, unicode, accounts with 50k records); raw prod data violates GDPR and SOC 2 logical-access controls.
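
    A minimal masking pass might look like the sketch below, which assumes customer rows arrive as plain objects with name, email, and phone fields; a real scrubber also has to handle nested JSON, free-text columns, and cross-table consistency.

```typescript
import { createHash } from "node:crypto";

// Deterministic masking: the same real value always maps to the same fake value,
// so foreign-key-style relationships between tables survive the scrub.
function maskEmail(email: string): string {
  const digest = createHash("sha256").update(email).digest("hex").slice(0, 10);
  return `user-${digest}@uat.example.com`;
}

function maskPhone(phone: string): string {
  // Keep the shape of the number but replace every digit.
  return phone.replace(/\d/g, "5");
}

interface CustomerRow {
  id: number;
  name: string;
  email: string;
  phone: string;
}

function scrubRow(row: CustomerRow): CustomerRow {
  return {
    ...row,
    name: `Customer ${row.id}`,
    email: maskEmail(row.email),
    phone: maskPhone(row.phone),
  };
}
```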

    Confirm LaunchDarkly (or the equivalent) flag values in UAT match the planned production rollout — flags shipped 'on' in UAT but 'off' in prod cause testers to validate code paths users will never see. Confirm SAML SSO with the test IdP works for both admin and standard roles.
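
    One way to catch flag drift before testing starts is to diff the two environments' flag states; the sketch below assumes you can export each environment as a simple flag-name-to-boolean map (via the LaunchDarkly REST API or your own wrapper) and the flag names shown are illustrative.

```typescript
type FlagState = Record<string, boolean>;

// Report every flag whose UAT value differs from the planned production rollout.
function diffFlags(uat: FlagState, plannedProd: FlagState): string[] {
  const mismatches: string[] = [];
  for (const key of new Set([...Object.keys(uat), ...Object.keys(plannedProd)])) {
    if (uat[key] !== plannedProd[key]) {
      mismatches.push(`${key}: UAT=${uat[key] ?? "missing"} prod=${plannedProd[key] ?? "missing"}`);
    }
  }
  return mismatches;
}

const mismatches = diffFlags(
  { "new-billing-ui": true, "bulk-export": false },
  { "new-billing-ui": false, "bulk-export": false },
);
if (mismatches.length > 0) {
  console.error("Flag drift between UAT and planned prod rollout:\n" + mismatches.join("\n"));
  process.exit(1);
}
```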

Test Plan and Data Preparation

    Each AC should map to at least one positive and one negative test case. Use TestRail, Xray, or a shared sheet — whatever the team already runs. Cases without expected results are not test cases; they are TODOs.

    Identify modules touched by the release (use the diff and CODEOWNERS). Pull the regression suite for those modules from prior releases — billing, auth, and reporting are the usual suspects where a small change ripples downstream.
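
    The touched-module list can be generated rather than eyeballed; in the sketch below the branch names and the "first path segment equals module" convention are assumptions to adapt to your repo layout.

```typescript
import { execSync } from "node:child_process";

// List the modules (top-level directories) touched between main and the release branch.
const diff = execSync("git diff --name-only origin/main...release/next", {
  encoding: "utf8",
});

const touchedModules = new Set(
  diff
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((path) => path.split("/")[0]),
);

console.log("Modules touched by this release:", [...touchedModules].sort().join(", "));
```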

    Document which test accounts to use for which scenarios — admin user, read-only user, multi-tenant user, customer with expired subscription. Confirm no real customer emails are used in notification tests; route all UAT email through a sink (Mailtrap, MailHog) so a stray notification doesn't reach a customer.
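
    Pointing the app's mailer at the sink is usually a one-line transport change; the Nodemailer sketch below assumes a MailHog instance reachable at an internal hostname on its default SMTP port (1025), with the production transport shown only for contrast.

```typescript
import nodemailer from "nodemailer";

// In UAT, all outbound mail goes to MailHog (SMTP on 1025, web UI on 8025)
// instead of the real provider, so no notification can reach a live customer.
const transport = nodemailer.createTransport(
  process.env.APP_ENV === "uat"
    ? { host: "mailhog.uat.internal", port: 1025, secure: false }
    : { host: "smtp.sendgrid.net", port: 587, auth: { user: "apikey", pass: process.env.SENDGRID_KEY } },
);

await transport.sendMail({
  from: "noreply@example.com",
  to: "qa-scenario-3@example.com",
  subject: "Subscription expiring",
  text: "UAT notification test",
});
```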

    If the release includes a schema migration or backfill, schedule it against a clone of production-scale data. Lock duration, replication lag, and rollback path are what the dry run measures — a migration that takes 4 hours on a 50M-row table needs a maintenance window or a batched approach.
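
    If the dry run shows a single-statement migration will not fit the window, a batched backfill keeps each lock short; the sketch below assumes Postgres via node-postgres, an indexed id column, and purely illustrative table and column names.

```typescript
import { Client } from "pg";

// Backfill a new column in small batches so each UPDATE holds row locks briefly
// instead of locking a 50M-row table for hours.
const client = new Client({ connectionString: process.env.UAT_CLONE_DATABASE_URL });
await client.connect();

const batchSize = 10_000;
let updated = 0;
for (;;) {
  const result = await client.query(
    `UPDATE invoices
        SET normalized_total = amount_cents / 100.0
      WHERE id IN (
        SELECT id FROM invoices
         WHERE normalized_total IS NULL
         ORDER BY id
         LIMIT $1
      )`,
    [batchSize],
  );
  const count = result.rowCount ?? 0;
  if (count === 0) break;            // nothing left to backfill
  updated += count;
  console.log(`Backfilled ${updated} rows so far`);
}

await client.end();
```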

Test Execution

    Run the primary user journeys end-to-end against UAT. Capture screenshots or Loom recordings for any case that fails — 'it didn't work' tickets without repro steps stall triage for days.
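
    Playwright can capture the failure evidence automatically; these are standard playwright.config.ts options, with the UAT base URL assumed to come from an environment variable.

```typescript
// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    baseURL: process.env.UAT_BASE_URL,     // e.g. the UAT cluster URL
    screenshot: "only-on-failure",         // attach a screenshot to every failing test
    trace: "retain-on-failure",            // DOM, network, and console for repro
    video: "retain-on-failure",
  },
});
```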

    Exercise the negative paths: boundary values, expired sessions, malformed inputs, concurrent updates, and permission denials. The bugs that ship to prod are almost always in this category, not the happy path.
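
    As a concrete illustration of those negative paths, here is a hedged Playwright sketch for expired sessions and permission denials; the routes, copy, and role fixtures are assumptions.

```typescript
import { test, expect } from "@playwright/test";

// Negative-path checks: how the app fails matters more in UAT than the happy
// path, which automated smoke tests already cover.
test("expired session redirects to login, not a blank page", async ({ page, context }) => {
  await context.clearCookies();                 // simulate an expired/invalid session
  await page.goto("/billing/invoices");
  await expect(page).toHaveURL(/\/login/);
});

test("read-only user cannot reach admin settings", async ({ page }) => {
  // Assumes a storageState fixture for the read-only test account is configured.
  await page.goto("/admin/settings");
  await expect(page.getByText("You do not have permission")).toBeVisible();
});
```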

    Trigger the Playwright/Cypress regression run against UAT, not staging. Investigate every failure — 'it's flaky' is the wrong default; quarantine flaky tests in a separate file so they don't mask real regressions.
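
    One way to keep quarantined tests visible without letting them mask regressions is to tag and exclude them; Playwright's grep/grepInvert project options support this, and the @flaky tag convention below is just one choice.

```typescript
// playwright.config.ts — the main regression project skips anything tagged @flaky;
// a separate project runs only the quarantined tests so they stay visible.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: { baseURL: process.env.UAT_BASE_URL },
  projects: [
    { name: "regression", grepInvert: /@flaky/ },
    { name: "quarantine", grep: /@flaky/ },
  ],
});
```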

    Run the migration script against the prod-scale clone. Record start/end timestamps, peak replication lag, and any locks held. If the dry run took longer than the planned maintenance window, flag now — not at the cutover.
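
    Timing and lag are easy to sample during the dry run; the sketch below assumes a Postgres replica of the clone and polls replay lag on a timer while the migration runs.

```typescript
import { Client } from "pg";

// Poll the replica's replay lag every 30 seconds while the migration runs,
// so the dry-run report has peak lag alongside start/end timestamps.
const replica = new Client({ connectionString: process.env.CLONE_REPLICA_URL });
await replica.connect();

let peakLagSeconds = 0;
const startedAt = new Date();

const timer = setInterval(async () => {
  const { rows } = await replica.query(
    "SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()) AS lag_seconds",
  );
  peakLagSeconds = Math.max(peakLagSeconds, Number(rows[0].lag_seconds ?? 0));
}, 30_000);

// ... run the migration here, then:
clearInterval(timer);
console.log(
  `Started ${startedAt.toISOString()}, finished ${new Date().toISOString()}, peak lag ${peakLagSeconds}s`,
);
await replica.end();
```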

    Log each defect with: environment, build SHA, repro steps, expected vs. actual, and severity (SEV1 = blocks release, SEV2 = workaround exists, SEV3 = cosmetic). Link each defect back to the test case ID it failed on so the verification pass can be tracked.

Defect Triage and Verification

    Walk the defect list with the PM and tech lead. For each: fix in this release, defer with documented workaround, or won't-fix. Record the decision in the ticket — a verbal agreement in standup is forgotten by the end of the week.

    For each defect closed by engineering, re-run the original repro steps and the regression cases on adjacent functionality. Mark the defect verified or reopen with a fresh comment — no silent reopens.

    Compare row counts and checksums between source and migrated tables. Off-by-one or partial backfills are the most common migration defects and rarely surface in functional tests; only a count comparison catches them.
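
    A minimal reconciliation pass, assuming Postgres and illustrative table names: compare row counts first, then an order-independent checksum over the columns the migration touched (for very large tables, run the checksum in id ranges).

```typescript
import { Client } from "pg";

// Reconcile the migrated table against its source: row count first,
// then a checksum over the business columns, aggregated in id order.
const db = new Client({ connectionString: process.env.UAT_CLONE_DATABASE_URL });
await db.connect();

const checksumSql = (table: string) => `
  SELECT count(*)::bigint AS row_count,
         md5(string_agg(md5(id::text || '|' || coalesce(email, '') || '|' || amount_cents::text),
                        '' ORDER BY id)) AS checksum
    FROM ${table}`;

const source = await db.query(checksumSql("invoices_legacy"));
const migrated = await db.query(checksumSql("invoices"));

if (source.rows[0].row_count !== migrated.rows[0].row_count) {
  console.error("Row count mismatch:", source.rows[0].row_count, "vs", migrated.rows[0].row_count);
} else if (source.rows[0].checksum !== migrated.rows[0].checksum) {
  console.error("Checksum mismatch: same count but differing content");
} else {
  console.log("Reconciliation clean");
}
await db.end();
```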

    Pull the migration log, identify which batches dropped or duplicated rows, and rerun the affected batches against a fresh clone. Do not proceed to sign-off until the reconciliation is clean — a missing-rows bug discovered post-cutover is a customer-data incident.

Sign-Off and Release Readiness

    Product owner and (where applicable) the sponsoring business unit confirm the build meets acceptance criteria, including any deferred-with-workaround items. The signature here is the audit artifact for SOC 2 change-management evidence.

    The customer or internal user-group rep who ran their own scenarios signs off independently. Their failure list is often different from QA's — they catch workflow ergonomics that pass technical AC but break daily usage.

    One-page summary: scope tested, defects found / fixed / deferred, migration dry-run timing, sign-off names. File in Confluence and link from the release ticket. This is the document the change-advisory board reviews and what auditors pull at the next SOC 2 walkthrough.
