Software Test Strategy – How to Define It

I have been asked many times about the software test strategy: what is it? How do you define it? What is its purpose?

In today’s article I will detail the software test strategy for a software development product.

The document should include the following details: purpose, scope, test levels, test types, test approaches, test processes, test roles and stakeholders, test metrics, test tools and approval/sign-off of the document.

What is the purpose?

According to the International Software Testing Qualifications Board (ISTQB) a Test Strategy provides a high-level description of Test Levels to be performed and testing activities within those levels for an Organization or Programme.

The purpose of the document is to provide the general test strategy for projects and IT Products under the responsibility of the Testing department.

The strategy describes how testing should be applied to manage product maintenance and project risks, how testing is divided into levels, and the high-level activities associated with testing.

Why testing is needed is defined in the Test Policy.

What is the scope?

The Test Strategy covers the software testing managed by the Testing department for new software development activities.

The Test Strategy concerns internal development activities in the Testing department, as well as development by an external service provider.

In some organizations the Testing department has a different name, or testing is not a separate department at all but part of the development department.

Testing Principles

Principle 1 – Testing shows presence of defects

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the developed solution but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing of all combinations of inputs and preconditions is not feasible, except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus the testing efforts.

Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software development lifecycle and shall be focused on defined objectives.

Principle 4 – Defect clustering
Testing effort shall be focused proportionally on the expected and later observed defect density of modules.

A small number of modules usually contain most of the defects discovered during pre-release testing, or are responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, the same set of test cases will eventually no longer find any new defects.

To overcome this “pesticide paradox”, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system, so that potentially more defects can be detected.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts; for example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not bring value if the system built is unusable and does not fulfill the users’ needs and expectations.

The following principles should also be considered:

Principle 8 – Testers are not the Quality Police
The tester should be a collaborator, not an enforcer.

Testers, developers and business owners share one common goal: to produce a solution that is fit for the users’ needs.

Principle 9 – Keep it simple
Test documentation should be practical.

The goal of testing is not to produce lots of documents but to achieve a good-quality IT product. There is no need for a test plan of tens of pages; a simple one-page test plan with references to the test strategy is enough.

Principle 10 – Keep it simple II
The test execution should start with the “happy paths” for the highest-risk and highest-priority requirements.

Once the “happy path” is acceptable, then continue with the exceptions.

Principle 11 – Enough is enough
One of the most important aspects of test management is finding the balance between ‘not enough testing’, which results in costly production defects, and ‘too much testing’, which adds no extra value and also increases the costs.

Principle 12 – Testing should be objective
As testing results in an assessment of the solution under test, this assessment should be objective.

Therefore, testing should be:
– conducted against predefined test cases and agreed acceptance criteria
– performed by an independent role, to ensure that quality is not compromised by project or maintenance priorities.

Test levels

A Test Level is a group of test activities that are organized and managed together.

A Test Level is linked to the responsibilities in a project.

The Test Levels are:

  • Unit Testing, including unit integration
  • System Testing
  • System Integration Testing
  • User Acceptance Testing

Let’s take them one by one and describe them in detail.

Unit Testing

  • Definition – Testing of individual units, and of the integration between units, to expose defects in the interfaces and interactions between the integrated units.
  • Test Objective – To ensure that the units work as defined, are integrated correctly and work together as expected
  • Test Basis – Code, Detailed design, Unit requirements, For the unit integration test: unit-tested units
  • Test Objects – Interfaces, Units, Programs, Data conversion / migration programs, Database modules
  • Test executed by – development team
  • Entry Criteria – Code is finished (except in the case of test-driven development)
  • Exit Criteria – All unit tests are executed successfully, No open defects
  • Test Deliverables – N/A
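
To make this level concrete, here is a minimal unit-test sketch in pytest; the calculate_discount function and its rules are hypothetical, used only to show the shape of a unit test and its pass/fail exit criterion.

```python
# A minimal unit-test sketch (pytest); calculate_discount is a
# hypothetical function used purely for illustration.
import pytest

def calculate_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def test_discount_applied():
    # The unit behaves as defined: 20% off 100.0 is 80.0.
    assert calculate_discount(100.0, 20.0) == 80.0

def test_invalid_percent_rejected():
    # Invalid input is rejected, as the unit requirements would demand.
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150.0)
```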

System Testing

  • Definition – Testing an integrated system to verify that it meets specified requirements.
  • Test Objective – System testing is concerned with the behaviour of the whole system / IT Product. Its objective is to ensure that the system is correctly behaving as defined in the requirements.
  • Test Basis – Solution requirements (functional and non-functional), Stakeholder requirements, Risk analysis (if available)
  • Test Objects – System, user and operation manuals, System configuration and configuration data
  • Test prepared by – Test Analyst
  • Test executed by – Tester
  • Entry Criteria – Unit Testing finished
  • Exit Criteria – 100% of the solution requirements in the agreed scope are verified, with no open defects (exception: selected defects may be left open if justified and agreed; this must be documented)
  • Test Deliverables – QA Report

System Integration Testing

  • Definition – Testing performed to:
    – expose defects in the interfaces and in the interactions between integrated units or systems;
    – expose defects in the interfaces and interactions between hardware and software components;
    – test the integration of systems and packages (if any);
    – test interfaces to external organizations (if any)
  • Test Objective – The objective is to ensure that the system interacts with its environment (different systems, hardware and software, external environment such as the web) as expected.
  • Test Basis – Solution requirements (functional and non-functional) (including data and user migration), Stakeholder requirements, High level design, Risk analysis (if available)
  • Test Objects – Subsystems, Database implementation, Infrastructure, Interfaces, System configuration and configuration data
  • Test prepared by – Test Analyst
  • Test executed by – Tester
  • Entry Criteria – System test successfully completed, Release package is installed successfully, Smoke test has run successfully
  • Exit Criteria – System Integration test has run successfully
  • Test Deliverables – QA Report

User Acceptance Testing

  • Definition – Testing with respect to user needs, requirements and business processes, which is conducted to determine whether a system satisfies the acceptance criteria (defined as part of the stakeholder’s requirements) and to enable the users and other stakeholders to determine whether to accept the solution.
  • Test Objective – The goal of acceptance testing is to establish confidence in the solution or in specific non-functional characteristics of the solution. The main focus of acceptance testing is the solution’s readiness for deployment to the production environment, although detecting defects remains a goal as well.
  • Test Basis – Stakeholder requirements, Solution requirements (including, but not limited to, data and user migration), Business processes, Risk analysis report (if available)
  • Test Objects – Business processes in fully integrated system, Operational and maintenance processes, User procedures, Forms, Reports, Configuration data
  • Test prepared by – BO (Business Owner) and requirement provider or delegated role (BA – Business Analyst)
  • Test executed by – BO (Business Owner), requirement provider, user
  • Entry Criteria – System Integration Testing finished
  • Exit Criteria – Sign off (acceptance by the stakeholders that were involved in the acceptance tests)
  • Test Deliverables – Official Acceptance per testing stakeholder

Test Types

A Test Type is a group of test activities aimed at testing a Unit or IT product on a specific Test Objective. A Test Type may take place on one or more Test Levels.

The following Test Types should be performed:

  • Functional testing:
    Testing based on an analysis of the specification of the functionality of a component or solution (testing “what” the solution does)
  • Non-functional testing:
    Testing of the attributes of a component or system that do not relate to functionality (testing “how well” the solution works).

Non-functional testing includes, but is not limited to: deployment, performance, load, stress, security, privacy, usability, maintainability, reliability, portability and installability testing.

  • Regression Testing:
    Testing of a previously tested solution following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the solution.

It is performed when the software or its environment is changed.

The regression test consists of previously defined and used test cases.
The size of the regression test set is based on the product risk of the solution.

  • Confirmation testing
    Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
  • Smoke testing
    Testing of a subset of all defined or planned test cases that cover the main functionality of a component or system in order to ensure that the most crucial functions of a solution work (there is no need to test in detail). It is recommended to automate the smoke tests as much as possible.
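
To make the smoke-test recommendation concrete, here is a minimal sketch using a custom pytest marker; the health_check stand-in is a hypothetical placeholder for a real check of the solution’s most crucial function.

```python
# A minimal automated smoke-test sketch; health_check is a hypothetical
# stand-in for a real check of the solution's most crucial function.
import pytest

def health_check() -> bool:
    # In a real suite this would, for example, ping the application's
    # main entry point or load its start page.
    return True

@pytest.mark.smoke
def test_service_is_up():
    assert health_check() is True
```

Registering the `smoke` marker in pytest.ini then lets you run only this subset with `pytest -m smoke` before committing to a full test run.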

Test Strategies

As per Principle 2, exhaustive testing is not possible; therefore a choice must be made about what the priorities are and what to test.

There are different ways of selecting, allocating and prioritizing tests; these are called Test Strategies.

  • Risk-based Testing
  • Requirement-based Testing
  • Business Processes-based Testing
  • Session-based Testing
  • Methodical Testing

Risk-based testing is the basis for the test strategy, but techniques from other approaches will also be used, such as requirements-based, business-process-based and, for exploratory testing, session-based testing.

Different test levels may require different approaches; for example, User Acceptance Testing may require business-process-based and requirements-based testing, while System Integration Testing requires risk-based and (non-)functional requirements-based testing.

Risk-based Testing

Definition: An approach to testing aiming to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project or maintenance. It involves the identification of product risks and the use of risk levels to guide the test process.

The product risks are taken into account to define what to test, when, and how much, based on risks, thus ensuring that risky IT Product parts are tested more intensively and earlier than parts with a lower risk.

The risks guide the planning, test specification and test execution activities. These risks are related to requirements.

It is also a way to confirm that any potential risk remaining after testing is visible and acceptable to relevant stakeholders.

When time constraints do not allow testing all requirements, risk-based testing focuses on the scenarios with the highest impact and probability of failure.

Deciding on the priority of test cases:

During requirements elicitation, the Business Analyst determines, for each requirement, the impact (low, medium or high) and the probability (low, medium or high) of the requirement not being correctly implemented.

The risk value is decided by considering the impact and probability of the risk.
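
Here is a minimal sketch of how such a risk value can drive test prioritization, assuming a simple 3×3 impact/probability matrix; the requirement IDs, ratings and scoring values are illustrative assumptions.

```python
# A minimal risk-based prioritization sketch; scoring values and
# requirement data are illustrative assumptions.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_value(impact: str, probability: str) -> int:
    """Combine impact and probability into a single risk score (1-9)."""
    return LEVELS[impact] * LEVELS[probability]

# Hypothetical ratings, as determined by the Business Analyst.
requirements = [
    {"id": "REQ-001", "impact": "high", "probability": "medium"},
    {"id": "REQ-002", "impact": "low", "probability": "low"},
    {"id": "REQ-003", "impact": "high", "probability": "high"},
]

# Test the riskiest requirements first, and more intensively.
for req in sorted(requirements,
                  key=lambda r: risk_value(r["impact"], r["probability"]),
                  reverse=True):
    print(req["id"], "risk =", risk_value(req["impact"], req["probability"]))
```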

Requirements-based Testing

Definition: An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

If the requirements are prioritized, these priorities can be used to allocate effort and priorities to the test cases.

Prerequisite: the quality of the requirements must be sufficient (complete, testable and unambiguous).
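
As a sketch of how requirement priorities can be carried over to test cases, assuming a simple one-requirement-per-test-case traceability; all IDs and priorities below are hypothetical.

```python
# A minimal requirements-to-test-case traceability sketch; the IDs and
# priorities are hypothetical.
requirement_priority = {"REQ-001": "high", "REQ-002": "low"}

# Each test case is derived from one requirement and inherits its priority.
test_cases = [
    {"id": "TC-001", "requirement": "REQ-001"},
    {"id": "TC-002", "requirement": "REQ-002"},
]

for tc in test_cases:
    tc["priority"] = requirement_priority[tc["requirement"]]
    print(tc["id"], "covers", tc["requirement"], "with priority", tc["priority"])
```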

Business process-based testing

Definition: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

By applying this approach, it is possible to simulate the users’ day-to-day work.

Session-based Testing

Definition: An approach to testing in which test activities are planned as uninterrupted sessions of test design and execution, often used in conjunction with exploratory testing.

During the creation of the test documentation, a decision should first be made on which areas to focus during exploratory testing, and that decision is then applied.

In practice, a small form called a Test Charter is created for each of these areas.

The exploratory testing is split into test sessions of about two hours each; each session covers one test charter, to maintain focus and to ensure that by the end of the testing all areas are covered.
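
As an illustration, a test charter can be as small as the following sketch; the fields and example values are assumptions, not a prescribed format.

```python
# A minimal test-charter sketch for session-based testing; the fields
# and example values are illustrative, not a prescribed format.
from dataclasses import dataclass

@dataclass
class TestCharter:
    area: str                    # part of the solution to explore
    mission: str                 # what the session tries to learn or verify
    duration_hours: float = 2.0  # one uninterrupted session
    notes: str = ""              # filled in during and after the session

charter = TestCharter(
    area="Checkout flow",
    mission="Explore discount-code handling for boundary and error cases",
)
print(charter)
```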

Methodical Testing
Definition: An approach to testing whereby a predetermined set of test conditions is used, such as a quality standard, a checklist, or a collection of generalized, logical test conditions that may relate to a particular domain, application or type of testing.

Test process

According to ISO/IEC/IEEE 29119, Software and systems engineering – Software testing, the test process model consists of three layers:

  • The maintenance of the Test Policy and Test Strategy
  • The Test Management process – planning, monitoring and control of test activities, as well as test completion
  • The Test Level process – takes place at each test level, and can also be executed for a test type

Test roles and key stakeholders

Test roles

Test Manager responsibilities: test planning, monitoring, control and reporting, and collecting and reporting on metrics

Test Analyst responsibilities: definition and creation of the needed test data, setting up and maintenance of the test environments, definition of test conditions, review of the requirements, review of high-level designs

Tester responsibilities: creation of test cases, execution of test cases

Key stakeholders

A Testing Stakeholder is anyone who has an interest in the testing activities, the testing work products, or the quality of the final system or deliverable. The stakeholder’s interest can be direct or indirect involvement in the testing activities, direct or indirect receipt of testing work products, or being directly or indirectly affected by the quality of the deliverables produced by the project or maintenance.

Testing Stakeholders may vary, depending on the project, the product, the organization and other factors. They can include the following roles (this list is not comprehensive):

  • Business Owner
    The Business Owner is responsible overall for validating that the system requirements are delivered as agreed and that the solution satisfies user and other stakeholder needs and is fit for business purpose (e.g. through User Acceptance Testing). The Business Owner takes the ‘go-live’ decision based on the test results.
  • Users
    These stakeholders use the software directly (i.e., they are the end-users), or receive outputs or services produced or supported by the software.
  • Developers, Lead Developers and Development Managers
    These stakeholders can be internal or external. They implement the software under test, receive test results, and often need to take action based on those results (e.g. fix reported defects).
  • Testing team
    These stakeholders can be internal or external. They plan, define, execute and report on the testing.
  • Project Manager, Product Manager
    These stakeholders are responsible for managing their projects to success or monitoring maintenance activities, which requires balancing quality, schedule, feature and budget priorities. They collaborate with the Test Manager in test planning and control.
  • Solution Architect and Designers
    These stakeholders design the software, receive test results, and may need to take action on those results.
  • Requirement Providers
    These stakeholders determine the features and the level of quality inherent in those features that must be present in the software. They validate the requirements and their implementation (e.g. through User Acceptance Testing).

Test Metrics

Test Metrics are used to provide insight on the quality and progress of the testing process.

Test Metrics are defined to give an answer to the following questions regarding testing and testing related processes:

  • What is the quality of the testing done by the Software Development Service Provider?
  • What is the duration of fixing defects from System Integration and Acceptance testing by the Software Development Service Provider?
  • What is the quality of the performed System Integration Testing and Acceptance Testing?
  • What is the progress of the ongoing testing?
  • How much effort is spent on testing as a percentage of the total effort on project or maintenance?

The measurements apply to both external and internal service providers. These metrics are proposals and shall be revised during their implementation and, if necessary, incorporated as part of project or product maintenance measurements.
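
A minimal sketch of how two of these questions could be answered from raw test data; the counts, dates and field names are illustrative assumptions, not a prescribed reporting format.

```python
# A minimal test-metrics sketch; all data and field names are
# illustrative assumptions.
from datetime import date

executed, passed, total_planned = 120, 110, 200
defects = [
    {"id": "DEF-1", "opened": date(2024, 1, 10), "fixed": date(2024, 1, 14)},
    {"id": "DEF-2", "opened": date(2024, 1, 12), "fixed": date(2024, 1, 13)},
]

# Progress of the ongoing testing, and quality of what was executed.
progress = 100 * executed / total_planned
pass_rate = 100 * passed / executed

# Average duration of fixing defects, in days.
avg_fix_days = sum((d["fixed"] - d["opened"]).days for d in defects) / len(defects)

print(f"progress: {progress:.0f}%, pass rate: {pass_rate:.0f}%, "
      f"avg fix time: {avg_fix_days:.1f} days")
```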

Test Tools

Depending on the organization, you could have a test management tool, a test execution tool (which could be the same as the test management tool), test automation tools and a website security tool.

Conclusion

There you have it: the software test strategy – purpose, scope, testing principles, test levels, test types, test strategies, test process, test roles, test metrics and test tools.

All the information provided can be adapted to each organization’s needs.

The structure detailed above is what I use. Put all of the above-mentioned points in a document and there you have it: the test strategy.

I hope it will help you to define yours. Let me know how it goes.
