Quality control effectiveness dimension

1 - Reactive

Description

  • Test engineers have minimal awareness and alignment with the tribe's Quality and Test Strategy.
  • The teams have minimal engagement with the system level automation framework, primarily using it as provided without active maintenance.
  • Ad-hoc system-level testing is performed, often manually, without a structured strategy or prioritization.
  • There is little or no visibility into testing assumptions, limitations, risks, and results.

Completeness

  • System level test coverage is sporadic, predominantly focusing on new functionalities.

Efficiency & execution

  • Test execution is mostly manual and often inconsistent.
  • Failures are not promptly addressed.
  • Tests are rarely maintained and may quickly become outdated.

Review processes

  • Minimal or no peer-review of test cases/scripts; limited collaboration among team members.
  • The tribe/squads have little understanding of what is not automated or tested at all.

Results reporting

  • Test results reporting is mostly reactive; when it occurs, it is typically relevant only to the individual teams, with no consideration for the broader project context.

Improvement focus

  • Begin familiarizing the teams with the tribe's Quality Strategy.
  • Identify the core functionalities of the product and start planning to automate them at the system level.
  • Document the design architecture and constraints of the system-level automation framework.
  • Begin training the teams on the importance of test results reporting.
  • Introduce the concept of peer reviews among the teams (covering new functionalities, user stories, acceptance criteria, test strategy and coverage, test results, etc.).

2 - Managed

Description

  • Test engineers show basic adherence to the tribe's Quality and Test Strategy.
  • Routine maintenance of the automation framework is performed, but proactive improvements are limited.
  • The tribe/squads conduct quality control activities, but they are inconsistently applied or lack rigorous enforcement.
  • Basic test planning is performed based on identified requirements, mainly for core and/or critical features.
  • Test cases are derived from requirements with no clear strategy.

Completeness

  • All new user-facing functionalities are validated through automated system-level tests.
  • Partial automated coverage is achieved in critical/core areas, but some gaps remain.

Efficiency & execution

  • All automated tests are run regularly (at the PR level + nightly).
  • Failures are sometimes addressed, but not consistently. Some effort is made to maintain tests, but it is reactive.
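Moving failure handling from reactive to consistent can start with something as simple as aggregating nightly results to surface tests that keep failing. A minimal sketch (the data shape and function name are illustrative assumptions, not tied to any particular test framework):

```python
from collections import Counter

def recurring_failures(nightly_runs, threshold=3):
    """Return tests that failed in at least `threshold` of the given runs.

    `nightly_runs` is a list of runs; each run is a dict mapping
    test name -> "pass" | "fail" (a hypothetical result format).
    """
    fail_counts = Counter(
        name
        for run in nightly_runs
        for name, outcome in run.items()
        if outcome == "fail"
    )
    return {name: n for name, n in fail_counts.items() if n >= threshold}

runs = [
    {"login": "pass", "checkout": "fail", "search": "fail"},
    {"login": "pass", "checkout": "fail", "search": "pass"},
    {"login": "fail", "checkout": "fail", "search": "pass"},
]
print(recurring_failures(runs, threshold=3))  # checkout failed in all 3 runs
```

Reviewing such a list at a recurring team ceremony is one lightweight way to make failure triage a habit rather than an afterthought.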

Review processes

  • Occasional peer reviews involving test engineers and developers at the team level, but not consistent or structured.
  • The tribe is somewhat aware of areas that are not automated but lacks a formalized process to capture them.

Results reporting

  • Test results are regularly collected and reported but may lack granularity or detail.
  • Test result reports are created as needed, typically without a consistent format or schedule.

Improvement focus

  • Work on improving the alignment with the tribe's Quality and Test Strategy.
  • Increase the extent and depth of system-level test automation.
  • Prioritize consistent reporting of test results.
  • Set regular inter-team meetings to discuss test plans and overlaps.
  • Periodically review and refine core scenarios as the product evolves.

3 - Defined

Description

  • Test engineers consistently align with the tribe's Quality and Test Strategy and actively apply its principles.
  • The test engineering team consistently develops, updates, and shares System-Level Test Strategies for their assigned functionalities.
  • The teams actively maintain and initiate basic improvements to the system-level automation framework.
  • Comprehensive testing is performed for new features, though a backlog remains for older, untested components or features.
  • Clear understanding and visibility of testing assumptions, limitations, active risks, and what is not automated.
  • The team systematically engages in UAT, exploratory testing, and begins dogfooding exercises to refine the release.
  • All issues and risks identified are tracked, prioritized, labeled with appropriate severity levels, and their trends are consistently analyzed and made accessible.
  • New automated tests are created or tracked for each bug fix.
  • The system-level test framework provides detailed documentation for execution, interpretation, and debugging, as well as guidelines for external contributions.
  • Performance, stress, and load testing are consistently developed and tracked as part of quality control.
  • Customer and User Feedback is constantly considered and integrated into the Quality Control process.

Completeness

  • System-level coverage is balanced, covering new, existing, core and critical functionalities while increasingly addressing functional and non-functional aspects.
  • The Quality Control process enforces dogfooding, UAT, and Exploratory Testing.

Efficiency & execution

  • Risk-based testing strategies are applied, with defined scope, objectives, and resources.
  • Test failures are prioritized and addressed in a timely manner.

Review processes

  • Regular peer reviews of tests, at each level, are standard practice.
  • The teams systematically identify and track areas not covered (or with poor coverage) by automation.
  • The team/tribe follows a well-defined Labelling Convention for issues, risks, and enhancements.

Results reporting

  • The tribe follows a defined process for reporting test results, with established templates and guidelines to ensure clarity, consistency, and uniformity at the tribe and project level.
  • Comprehensive test result reports are produced nightly and additionally for each release.
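An established template keeps nightly and release reports uniform across squads. The following sketch shows one possible shape for such a report; the field names ("name", "status", "duration_s") and layout are assumptions for illustration, not a prescribed format:

```python
def render_report(run_id, results):
    """Render a nightly test-report summary from a list of result dicts.

    Each result is a dict with hypothetical keys: "name", "status"
    ("pass" / "fail" / "skip"), and "duration_s".
    """
    total = len(results)
    passed = sum(r["status"] == "pass" for r in results)
    failed = sum(r["status"] == "fail" for r in results)
    skipped = total - passed - failed
    duration = sum(r["duration_s"] for r in results)
    lines = [
        f"Nightly system-test report: {run_id}",
        f"Total: {total}  Passed: {passed}  Failed: {failed}  Skipped: {skipped}",
        f"Total duration: {duration:.1f}s",
    ]
    # List failures explicitly so they are visible without opening raw logs.
    for r in results:
        if r["status"] == "fail":
            lines.append(f"  FAIL {r['name']} ({r['duration_s']:.1f}s)")
    return "\n".join(lines)

report = render_report("nightly-001", [
    {"name": "login_flow", "status": "pass", "duration_s": 12.3},
    {"name": "checkout_flow", "status": "fail", "duration_s": 30.1},
])
print(report)
```

The value is less in the code than in the convention: every squad emitting the same summary makes tribe-level aggregation trivial.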

Improvement focus

  • Promote a more consistent peer review process for both new and existing tests at the tribe level (involving both developers and test engineers).
  • Focus on refining the test automation process to cover edge cases and improve efficiency.
  • Encourage a culture of continuous improvement by regularly reviewing and updating the Quality and Test Strategy.
  • Engage in more exploratory testing and encourage feedback loops.

4 - Measured

Description

  • Test engineers not only adhere to but also actively contribute to enhancing the tribe's Quality and Test Strategy.
  • The System-Level Test Strategies are continuously reviewed and improved based on past experiences, feedback from peer reviews inside the tribe, and post-deployment monitoring.
  • Active and systematic maintenance, paired with significant improvements, ensures the automation framework's effectiveness and resilience.
  • Advanced release testing methodologies, including comprehensive exploratory testing, rigorous UAT, and consistent dogfooding, are enforced to ensure a polished, quality release.
  • Clear metrics** and dashboards for tracking test effectiveness and release quality.

Completeness

  • Thorough and consistent coverage spans all functionalities, emphasizing both functional and non-functional scenarios with data-driven refinements.

Efficiency & execution

  • With a metrics-driven approach, the tribe's test execution is refined, and failures are methodically analyzed and addressed to continually improve the strategy.

Review processes

  • Regular peer reviews at both team and tribe levels are a consistent practice, leading to continuous improvements in coverage and efficiency.

Results reporting

  • Comprehensive test result reports are automatically generated, integrated into the tribe's reporting, and easily accessible to the entire tribe organisation.

Improvement focus

  • Prioritize metrics and analytics to measure and enhance test efficiency, effectiveness, and execution.
  • Initiate feedback loops from end-users and stakeholders to further refine test scenarios.
** Some of the core metrics are: Defect Leakage, CI/CD Failure Rate, Time to Run All Automated Tests, Flakiness Score, Automated vs. Manual Test Ratio, and Total System Test Duration.
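A few of the metrics above reduce to simple ratios over raw counts. The sketch below shows one plausible set of formulas; the exact definitions (e.g. what counts as "flaky", which defects enter the leakage denominator) vary per tribe and are illustrative assumptions here:

```python
def core_metrics(prod_defects, preprod_defects, ci_runs, ci_failures,
                 automated_tests, manual_tests, pass_histories):
    """Compute a few core quality metrics from raw counts.

    `pass_histories` maps test name -> list of booleans (pass/fail per run);
    a test counts as flaky if it both passed and failed within the window.
    `max(..., 1)` guards against division by zero on empty inputs.
    """
    defect_leakage = prod_defects / max(prod_defects + preprod_defects, 1)
    ci_failure_rate = ci_failures / max(ci_runs, 1)
    automation_ratio = automated_tests / max(automated_tests + manual_tests, 1)
    flaky = sum(1 for h in pass_histories.values() if any(h) and not all(h))
    flakiness_score = flaky / max(len(pass_histories), 1)
    return {
        "defect_leakage": defect_leakage,
        "ci_failure_rate": ci_failure_rate,
        "automation_ratio": automation_ratio,
        "flakiness_score": flakiness_score,
    }

metrics = core_metrics(
    prod_defects=5, preprod_defects=45,
    ci_runs=200, ci_failures=20,
    automated_tests=300, manual_tests=100,
    pass_histories={"a": [True, True], "b": [True, False],
                    "c": [False, False], "d": [True, False]},
)
print(metrics)  # defect_leakage 0.1, ci_failure_rate 0.1, ...
```

Whatever the exact formulas, the point at this level is that they are agreed, automated, and trended on a dashboard rather than computed ad hoc.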

5 - Optimized

Description

  • Test engineers masterfully champion the tribe's Quality and Test Strategy and actively mentor other teams in its adoption and execution.
  • The tribe masterfully mentors others in the utilization and enhancement of the automation framework, leading proactive improvements in efficiency, stability, and performance.
  • The tribe excels in multifaceted release testing, actively seeking user feedback to refine and elevate the quality of their releases.

Completeness

  • Holistic coverage across all functionalities is achieved, with a special focus on eliminating redundancy and ensuring thoroughness in both functional and non-functional aspects.

Efficiency & execution

  • The tribe epitomizes efficiency in test execution, integrating flawlessly into the tribe's framework while preserving the stability of existing tests.

Review processes

  • Deep collaboration at the tribe level ensures test deduplication and optimization of test coverage.
  • The tribe has a healthy ratio of unit-to-integration-to-system tests.
  • Peer reviews are deeply collaborative, and the shared insights lead to the optimization of the testing process across all teams.
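What counts as a "healthy" unit-to-integration-to-system ratio is not fixed; a common rule of thumb is simply that the counts form a pyramid, with each layer larger than the one above it. A toy check under that assumption:

```python
def pyramid_shape_ok(unit, integration, system):
    """Heuristic check that test counts form a pyramid: each layer is
    strictly larger than the one above it. The shape rule is a rough
    rule of thumb, not a fixed standard."""
    return unit > integration > system > 0

print(pyramid_shape_ok(800, 150, 40))   # pyramid-shaped suite
print(pyramid_shape_ok(100, 300, 40))   # inverted middle layer
```

Tracking this shape over time is cheap and flags drift toward top-heavy (slow, brittle) suites early.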

Results reporting

  • All testing, including functional, non-functional, cross-browser, per OS, and all levels per the test pyramid, is fully integrated into a seamless reporting and decision-making process at the tribe level.

Improvement focus

  • Serve as mentors and leaders within the tribe, sharing knowledge and guiding other teams to higher levels of maturity.
  • Always stay updated with the latest trends and best practices in quality control to continue pushing boundaries.

Guiding questions

  1. Process Evaluation: How effectively do our current quality control processes identify and address deficiencies, and what steps are we taking to continuously improve these processes?
  2. Error Identification and Resolution: What mechanisms are in place for identifying and resolving errors and bugs in your code, and how efficient are these processes in maintaining code and product quality?
  3. Performance Metrics: Which metrics do you use to measure the effectiveness of your quality control processes, and how do these metrics drive improvements and decision-making?
  4. Preventive Measures: What proactive steps are we taking to anticipate and mitigate potential quality issues before they arise?
  5. Feedback and Adaptation: How do you incorporate feedback from end-users, peers, and other stakeholders into your quality control process, and how does this feedback influence future developments?
  6. Team Knowledge and Skills: How well-equipped is your team in terms of knowledge and skills related to quality control, and what training or learning opportunities are (or could be) provided to bridge any gaps?
  7. Continuous Improvement: In what ways does your team incorporate lessons learned from the past to continuously improve quality control practices?
  8. Resource Allocation: How effectively are we utilizing our resources, including tools and engineering talent, in our quality control processes, what strategies are we implementing to optimize their use for maximum efficiency and impact, and how do you determine if these are sufficient?
  9. Testing Structure: How comprehensive and structured is our approach to system-level testing (especially for new, existing, and critical functionalities), and do we have clear visibility into testing assumptions, limitations, risks, and results?
  10. Testing Efficiency: How efficient are our testing processes in terms of time, resources, and consistency? Are test failures addressed promptly, and are test cases regularly maintained and updated?