Workbook

Make the Mission Yours

Role: QA Engineer

Use these activities to apply each principle to your current product, service, or project. These activities are a sample to get you started, not an exhaustive list. Adapt and expand them based on your team's context and needs. Capture your answers, share them with your team, and revisit them as you learn.

⚠️

Important: When Using AI Tools

When using AI-assisted activities, always double-check outputs for accuracy and meaning. AI tools can accelerate your work, but human judgment, validation, and critical thinking remain essential.

Review AI-generated content with your team, validate it against real user feedback and domain knowledge, and ensure it truly serves your mission and user outcomes before proceeding.

1) Shared Mission and Vision

Anchor tests to mission outcomes and user goals.

💡

Learn More

For a deeper understanding of this principle, see the 1) Shared Mission and Vision section in the framework.

Workbook Activities (do now)

  • ☐ Restate the mission in your test plan and map each suite to a user outcome (e.g., checkout success, claim submitted).
  • ☐ For the current story, write down the user behavior you are protecting and the exact signal you will observe post-release.
  • ☐ Add a "why this matters" note to one critical test, tied to the mission and a specific user scenario (see the sketch after this list).
  • ☐ Review today’s top ticket and restate acceptance in terms of user intent and success criteria.
  • ☐ Walk a developer through how this test defends a user outcome; adjust if the mission link is weak.
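
A minimal sketch of what a mission-anchored test with a "why this matters" note can look like, assuming pytest and a hypothetical CheckoutClient; the checkout journey, names, and post-release signal are illustrative, not prescribed by the framework:

```python
import pytest


class CheckoutClient:
    """Stand-in for your real checkout service client."""

    def submit_order(self, cart):
        # Reject empty carts; confirm anything else (illustrative logic only).
        if not cart:
            return {"status": "rejected", "reason": "empty cart"}
        return {"status": "confirmed", "order_id": "A-1"}


@pytest.fixture
def client():
    return CheckoutClient()


def test_checkout_succeeds_for_valid_cart(client):
    """Why this matters: checkout success is the mission-critical user
    outcome for this journey; a failure here means users cannot buy at all.
    Post-release signal to watch: checkout error rate on the orders dashboard.
    """
    result = client.submit_order(["sku-123"])
    assert result["status"] == "confirmed"
```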

AI Assisted Activities

  • ☐ Use AI to help draft test plans that map to mission outcomes, but have your team review and refine them to ensure tests truly protect user value.
  • ☐ Ask AI to generate potential test scenarios based on user outcomes, then validate each scenario against direct user feedback and real-world usage patterns.
  • ☐ Use AI to help structure your "why this matters" notes in test cases, but ensure human team members validate that each test truly serves the mission before executing.
  • ☐ Have AI analyze past test plans to identify mission alignment patterns, then use those insights in team discussions to improve how tests connect to user outcomes.

Evidence of Progress

  • ☐ Your test cases cite user outcomes, not just components.
  • ☐ You can explain how a failing test ties to a user impact.

2) Break Down Silos

Prevent over-the-wall surprises by co-designing quality.

💡

Learn More

For a deeper understanding of this principle, see the 2) Break Down Silos section in the framework.

Workbook Activities (do now)

  • ☐ Pair with a developer in grooming to co-author acceptance tests and edge cases for this story (see the sketch after this list).
  • ☐ Join a designer/PM review to agree on the critical user journey and negative paths you will test.
  • ☐ Create a "ready for QA" checklist for this feature (data, environments, UX states) and circulate it today.
  • ☐ Host a 10-minute sync with DevOps/ops to confirm logging and observability for your test focus.
  • ☐ Share a WIP test note with design/PM to confirm UX acceptance before executing.
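
One way to capture co-authored acceptance tests is as parametrized cases that QA and developers own together. The sketch below assumes pytest; validate_discount_code and every case row are hypothetical stand-ins for whatever your grooming session agrees on:

```python
import pytest


def validate_discount_code(code):
    """Stand-in for the real implementation agreed on in grooming."""
    return code.isalnum() and 4 <= len(code) <= 12


# Each row was agreed between QA, the developer, and the designer in grooming.
CASES = [
    ("SAVE10", True),    # happy path from the story
    ("", False),         # edge case raised by QA
    ("A" * 13, False),   # boundary raised by the developer
    ("SAVE 10", False),  # negative path from the designer's UX review
]


@pytest.mark.parametrize("code,expected", CASES)
def test_discount_code_acceptance(code, expected):
    assert validate_discount_code(code) is expected
```

Because the case table is plain data, it doubles as the handoff record: a developer can read the agreed edge cases before implementation, which is exactly the behavior the evidence items below look for.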

AI Assisted Activities

  • ☐ When AI generates test cases or test data, have cross-functional team members (developers, designers, product managers) review them together to ensure they serve users and cover the right scenarios.
  • ☐ Use AI to help draft acceptance criteria or test plans, but ensure all roles contribute their perspectives during the actual test design session.
  • ☐ Have AI analyze test patterns and bug reports to identify handoff friction, then use those insights in cross-functional discussions to improve collaboration.
  • ☐ Use AI to help structure test collaboration sessions, but ensure human team members make decisions together about what to test and how it serves users.

Evidence of Progress

  • ☐ Fewer reopened bugs due to unclear acceptance or missed UX criteria.
  • ☐ Developers reference your acceptance notes before handing off.

3) User Engagement

Test with real user signals and empathy.

💡

Learn More

For a deeper understanding of this principle, see the 3) User Engagement section in the framework.

Workbook Activities (do now)

  • ☐ Observe or replay a user session and extract three real data values/flows to seed your tests.
  • ☐ Translate a top support issue into a regression scenario and add it to the suite (see the sketch after this list).
  • ☐ Run an exploratory session mimicking how a real user might fail; log unexpected behaviors.
  • ☐ For this story, capture one user quote and turn it into a test note for empathy and clarity.
  • ☐ Validate one assumed user behavior by pairing with support/PM and adjusting test data accordingly.
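
A sketch of turning a support issue into a regression test, seeded with a realistic (anonymized) input pattern rather than a synthetic one; normalize_phone and the phone-number scenario are hypothetical examples, not taken from your backlog:

```python
import re


def normalize_phone(raw):
    """Stand-in for the real normalizer that failed in the field."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:] if len(digits) >= 10 else digits


def test_regression_phone_with_country_code_and_spaces():
    """Regression for a support-reported failure: a real user pasted a
    number with a country code, spaces, and punctuation, and the old
    code rejected it. The input below mirrors the real session data.
    """
    assert normalize_phone("+1 (415) 555-0134") == "4155550134"
```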

AI Assisted Activities

  • ☐ Use AI to analyze user feedback, support tickets, or error logs to identify patterns for test scenarios, but always validate AI insights through direct user observation or usability testing.
  • ☐ Have AI generate test questions or scenarios based on your assumptions about user behavior, then use those scenarios in real conversations with users to build genuine empathy.
  • ☐ Use AI to help summarize user research findings for test planning, but ensure you review the summaries and add your own observations from direct user interactions.
  • ☐ Have AI analyze user behavior patterns from telemetry, then discuss those patterns with actual users to understand the "why" behind the behavior before writing tests.

Evidence of Progress

  • ☐ You use real-world data patterns in tests, not only synthetic inputs.
  • ☐ A support-reported issue is now covered by an automated/regression test.

4) Outcomes Over Outputs

Measure quality by escaped defects and user-visible impact.

💡

Learn More

For a deeper understanding of this principle, see the 4) Outcomes Over Outputs section in the framework.

Workbook Activities (do now)

  • ☐ Define one quality outcome for this release (e.g., escaped defects for journey X) and log it in the plan.
  • ☐ After release, review logs/tickets for this journey and link each issue to a prevention in tests.
  • ☐ Add a post-release check for the journey metric you guarded, such as task success rate or error rate (see the sketch after this list).
  • ☐ For one failed outcome, propose a specific test or guard to add this sprint, then implement it.
  • ☐ Share a short quality readout: what you protected, what moved, and what to improve next.
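
A sketch of a post-release metric guard, assuming a hypothetical fetch_error_rate helper; wire it to your actual telemetry or metrics backend, and replace the journey and budget with the quality outcome you logged in the plan:

```python
CHECKOUT_ERROR_RATE_BUDGET = 0.02  # the quality outcome agreed for this release


def fetch_error_rate(journey):
    """Stand-in: replace with a query against your real metrics backend."""
    return 0.011  # hypothetical observed value


def test_post_release_checkout_error_rate_within_budget():
    # Outcomes over outputs: this guard reports on the metric, not test counts.
    observed = fetch_error_rate("checkout")
    assert observed <= CHECKOUT_ERROR_RATE_BUDGET, (
        f"checkout error rate {observed:.1%} exceeds the "
        f"{CHECKOUT_ERROR_RATE_BUDGET:.1%} budget for this release"
    )
```

A check like this could run as a scheduled post-release job rather than in the pre-merge suite, so the signal reflects real traffic for the journey you guarded.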

AI Assisted Activities

  • ☐ When AI generates test cases or test automation, define quality outcome metrics upfront and measure whether AI-generated tests achieve intended user outcomes, not just coverage.
  • ☐ Use AI to help analyze test outcome data and identify patterns, but have human team members interpret what those patterns mean for users and the mission.
  • ☐ Have AI help draft quality outcome definitions and success criteria for your tests, but ensure the team validates them against real user needs and business goals before proceeding.
  • ☐ Use AI to track and report on quality outcome metrics, but schedule human team reviews to discuss what the metrics mean and how to adjust tests based on observed impact.

Evidence of Progress

  • ☐ You report on a quality outcome metric, not just test counts.
  • ☐ You closed the loop from post-release issues to added/updated tests.

5) Domain Knowledge

Test with domain constraints and ecosystem awareness.

💡

Learn More

For a deeper understanding of this principle, see the 5) Domain Knowledge section in the framework.

Workbook Activities (do now)

  • ☐ Map upstream/downstream systems for this journey; design one targeted test per critical dependency.
  • ☐ Use a service map to add a test that covers a backstage failure surfacing in the UI.
  • ☐ Review one policy/regulatory constraint and craft a compliance test for this story.
  • ☐ Identify a data contract assumption; create a test that fails loudly if the contract breaks (see the sketch after this list).
  • ☐ Tag one domain risk in your test plan and confirm coverage with the domain owner.
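
A sketch of a loud data-contract guard; REQUIRED_FIELDS and the sample payload are hypothetical, and in practice the payload would come from the upstream service or a recorded fixture:

```python
# The contract below is hypothetical; replace it with the fields the
# upstream orders service actually promises.
REQUIRED_FIELDS = {"order_id": str, "amount_cents": int, "currency": str}


def assert_order_contract(payload):
    """Fail loudly, naming the exact field that broke the contract."""
    for field, expected_type in REQUIRED_FIELDS.items():
        assert field in payload, f"contract break: missing '{field}'"
        assert isinstance(payload[field], expected_type), (
            f"contract break: '{field}' is {type(payload[field]).__name__}, "
            f"expected {expected_type.__name__}"
        )


def test_upstream_order_payload_honors_contract():
    # Replace this literal with a payload sampled from the upstream service.
    payload = {"order_id": "A-1", "amount_cents": 1999, "currency": "USD"}
    assert_order_contract(payload)
```

Keeping the failure message specific to the broken field makes a contract break diagnosable from the CI log alone, which is what "fails loudly" means in practice.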

AI Assisted Activities

  • ☐ Use AI to help summarize domain documentation, API contracts, or system architecture for test planning, but validate AI-generated domain knowledge through direct engagement with domain experts.
  • ☐ Have AI generate questions about domain constraints or ecosystem relationships for your tests, then use those questions in conversations with domain experts to build deep understanding.
  • ☐ Use AI to help draft test coverage maps or dependency diagrams, but ensure team members review them with domain experts to verify accuracy and completeness.
  • ☐ Have AI analyze past incidents or domain-related test gaps, then discuss those insights with the team and domain experts to identify patterns and prevent similar problems.

Evidence of Progress

  • ☐ Your tests cover dependency and policy constraints explicitly.
  • ☐ You can explain how a system failure would present to users and which test covers it.

6) The Art of Storytelling

Tell the story of quality in user terms.

💡

Learn More

For a deeper understanding of this principle, see the 6) The Art of Storytelling section in the framework.

Workbook Activities (do now)

  • ☐ Share a "bug story" in retro: user impact → root cause → new test that prevents it.
  • ☐ Write a short narrative for the critical path you just tested: user success and how tests enforce it.
  • ☐ Prepare two summaries of a run: one for engineers (coverage/edge cases) and one for stakeholders (risk reduced).
  • ☐ Add a user quote or data point to your test summary to make impact tangible.
  • ☐ Record a 60-second walkthrough of a critical test explaining the user risk it guards.

AI Assisted Activities

  • ☐ Use AI to help structure or draft test summaries and bug stories, but refine them with real user anecdotes, emotions, and personal observations from direct user interactions.
  • ☐ Have AI generate different versions of test reports for different audiences (technical peers vs. stakeholders), but ensure each version includes authentic human stories about real user impact.
  • ☐ Use AI to help summarize test results in demos, but lead presentations with human stories about real users affected by bugs, using AI-generated summaries as supporting material.
  • ☐ Have AI help draft test documentation or quality reports, but always include real user quotes, data points, or anecdotes that connect your test work to human impact.

Evidence of Progress

  • ☐ Stakeholders can restate the risk reduced from your test summary.
  • ☐ The team references your bug story to justify quality work.