You have been asked to create a software test plan template, but what is it?
According to IEEE Std 829-2008, the IEEE Standard for Software and System Test Documentation, the following document formats are defined:
- Master Test Plan (MTP): The purpose of the Master Test Plan (MTP) is to provide an overall test planning and test management document for multiple levels of test (either within one project or across multiple projects).
- Level Test Plan (LTP): For each LTP, the scope, approach, resources, and schedule of the testing activities for its specified level of testing need to be described. The items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the associated risk(s) need to be identified.
- Level Test Design (LTD): Detailing test cases and the expected results as well as test pass criteria.
- Level Test Case (LTC): Specifying the test data for use in running the test cases identified in the Level Test Design.
- Level Test Procedure (LTPr): Detailing how to run each test, including any set-up preconditions and the steps that need to be followed.
- Level Test Log (LTL): To provide a chronological record of relevant details about the execution of tests, e.g. recording which test cases were run, who ran them, in what order, and whether each test passed or failed.
- Anomaly Report (AR): To document any event that occurs during the testing process that requires investigation. This may be called a problem, test incident, defect, trouble, issue, anomaly, or error report. This document is deliberately named an anomaly report, not a fault report, because a discrepancy between expected and actual results can occur for a number of reasons other than a fault in the system. These include the expected results being wrong, the test being run incorrectly, or an inconsistency in the requirements that allows more than one interpretation. The report contains all details of the incident, such as actual and expected results, when it failed, and any supporting evidence that will help in its resolution. It will also include, if possible, an assessment of the impact of the incident upon testing.
- Level Interim Test Status Report (LITSR): To summarize the interim results of the designated testing activities and optionally to provide evaluations and recommendations based on the results for the specific test level.
- Level Test Report (LTR): To summarize the results of the designated testing activities and to provide evaluations and recommendations based on the results after test execution has finished for the specific test level.
- Master Test Report (MTR): To summarize the results of the designated testing activities across test levels and to provide evaluations based on these results. This report may be used by any organization using the MTP. It is a management report that presents any important information uncovered by the tests, including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Anomaly Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose, according to whether or not it has met the acceptance criteria defined by project stakeholders.
Source: https://en.wikipedia.org/wiki/Software_test_documentation
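To make the record-keeping document types above more concrete, the Level Test Log and the Anomaly Report can be sketched as simple data structures. This is only an illustration; the field names are assumptions, not part of IEEE 829-2008:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestLogEntry:
    """One line of a Level Test Log (LTL): who ran what, when, and the result."""
    test_case_id: str
    executed_by: str
    executed_at: datetime
    passed: bool

@dataclass
class AnomalyReport:
    """An Anomaly Report (AR): a discrepancy needing investigation,
    which is not necessarily a fault in the system under test."""
    anomaly_id: str
    test_case_id: str
    expected_result: str
    actual_result: str
    observed_at: datetime
    impact_on_testing: str = "unknown"
    supporting_evidence: list = field(default_factory=list)

# Example: a failed run recorded in the log, then raised as an anomaly.
entry = TestLogEntry("TC-042", "j.doe", datetime(2024, 5, 1, 10, 30), passed=False)
report = AnomalyReport("AR-007", entry.test_case_id,
                       expected_result="HTTP 200", actual_result="HTTP 500",
                       observed_at=entry.executed_at,
                       impact_on_testing="blocks TC-043 through TC-045")
```

Note that the anomaly deliberately carries expected and actual results plus an impact assessment, mirroring the AR description above.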
Each Test Plan has a unique identifier.
Introduction
1. Purpose of the document
The purpose of this document is to define, at a high level, how testing will be done, which test levels will be executed, and by whom.
The document shall also define a high-level test schedule in line with the project planning; risks with contingency and mitigation actions; requirements for the test environment and test data; testing deliverables; and the acceptance, entry and exit, and suspension and resumption criteria.
2. IT Project Description and Objectives
This section provides a high-level description of the IT Product under test.
The main objectives of the IT Product should be described in this section and reference to the project plan/release plan should be used, if necessary.
3. Scope
This section defines what will and will not be tested. Scope can be defined in terms of functionalities, interfaces, and requirements (functional and non-functional).
4. Dependencies
In this section, dependencies on other projects/releases, availability of hardware/software tools, etc. are documented.
5. Relationship with other plans
The Master test plan should be in line with the project/release plan.
If the project/release plan changes, this should be reflected in the master test plan, as well as in the level test plans (if available).
The relevant relationships with the project/release plans, and with the underlying level test plans if available, should be documented in this section.
6. Test Approach
This section is filled in if the approach is not defined in the Master test plan or if more detail is needed. In other cases, a reference to the Master test plan suffices.
7. Test Automation
In this section, it is described which tests should be automated and which tools will be used for test automation.
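As an illustration of what an automated test might look like, here is a minimal sketch assuming a Python code base and pytest as the automation tool (both are assumptions; this template does not mandate any tool, and the function under test is hypothetical):

```python
# Hypothetical function under test; in a real project this would be
# imported from the application code base.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Automated regression checks; pytest discovers functions named test_*.
def test_apply_discount_nominal():
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_zero_percent():
    assert apply_discount(49.99, 0) == 49.99
```

Tests like these are good automation candidates because they are deterministic, fast, and likely to be rerun every release as part of regression testing.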
8. Test Criteria
The test criteria are defined in the Master test plan. This section is filled in only in case of deviations.
Test Strategy And Approach
1. Approach per Test Level
This section defines the test approach per test level needed for a project or maintenance activity, and explains any tailoring of the Test Strategy.
| Test Level | Test Approach | Y/N | Comments |
| --- | --- | --- | --- |
| System Test | Risk-based | | |
| | Requirements-based | | |
| | Session-based | | |
| Integration Test | Risk-based | | |
| | Requirements-based | | |
| Acceptance Test: | | | |
| Operational Acceptance Test | Risk-based | | |
| | Requirements-based | | |
| User Acceptance Test | Business process-based | | |
| | Requirements-based | | |
| Service Desk Acceptance Test | Documentation review | | |
| Security Acceptance Test | Requirements-based | | |
2. Approach per Test Type
This section defines the test types per test level needed for a project or maintenance activity, and explains any tailoring of the Test Strategy.
| Test Level | Test Type | Y/N | Comments |
| --- | --- | --- | --- |
| System Test | Functional testing | | |
| | Non-functional testing | | |
| | Regression testing | | |
| System Integration Test | Smoke testing | | |
| | Functional testing | | |
| | Non-functional testing | | |
| | Regression testing | | |
| Acceptance Test: | | | |
| Operational Acceptance Test | Smoke testing | | |
| | Non-functional testing | | |
| User Acceptance Test | Smoke testing | | |
| | Functional testing | | |
| | Non-functional testing | | |
| Security Acceptance Test | Non-functional testing | | |
Risks
In this section, testing risks and issues should be listed in the table below with probability and impact, together with contingency and mitigation actions, if applicable.
| # | Risk/Issue description | Probability | Impact severity | Mitigation action (M) | Actions |
| --- | --- | --- | --- | --- | --- |
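A common way to prioritize the rows of such a table is to score risk exposure as probability times impact and handle the highest exposure first. The sketch below illustrates this; the 1-5 scales and the example entries are assumptions, not part of the template:

```python
# Hypothetical risk register; probability and impact are on a 1-5 scale.
risks = [
    {"id": 1, "description": "Test environment unstable", "probability": 4, "impact": 5},
    {"id": 2, "description": "Late delivery of test objects", "probability": 3, "impact": 4},
    {"id": 3, "description": "Key tester unavailable", "probability": 2, "impact": 3},
]

# Risk exposure = probability x impact; mitigate the highest exposure first.
for risk in risks:
    risk["exposure"] = risk["probability"] * risk["impact"]

prioritized = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in prioritized:
    print(f'{r["exposure"]:>2}  {r["description"]}')
```

Ordering risks this way makes it easier to decide which ones justify a concrete mitigation action and which can simply be monitored.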
Test Planning
1. Assumptions
This section is used to list all assumptions on which the test planning is based:
- Number of test runs needed during system testing
- Defect resolution time for UAT defects is [x] days
- Availability of [n] test resources
- Test environment availability is 95% minimum
- Test objects are delivered as per the project planning and the number of available resources as of [date]
2. Test Schedule
This section should provide a link to the detailed test planning.
The test plan must be in line with the Master test plan or the testing section of the PMP or release plan.
3. Resource planning
Record all dates, effort and remarks with the actual values.
Check that this planning is in line with the Master test plan; in case of deviation, contact the allocated test manager or, if none is allocated, the project or release manager.
Note: The Master Test Plan should include information for all test levels/test types.
4. Deliverable planning
This section provides information on the deliverables of testing and the delivery dates.
The table below should be filled in accordingly. Check that this planning is in line with the Master test plan; in case of deviation, contact the allocated test manager or, if none is allocated, the project or release manager.
Testing Stakeholders
In this section, all stakeholders and their roles in the testing should be defined:
- Project Manager: in charge of project management.
- Business Analyst: in charge of supporting the Test Team in clarifying the functional and non-functional requirements (performance and load), the system integration, and the browser requirements. This also includes the acceptance criteria and the expected behavior of the new solution.
- IT Infrastructure Service Provider: in charge of providing the Test Team with the access to the test environment, and fix any issue at this level (connectivity, security…) to ensure a permanent availability of the test environment.
- Test Team: in charge of performing all the test activities in the scope of this project and reporting on these activities to the Project Manager.
- Development Team: in charge of developing the solution and correcting defects found during testing.
Testing Environments
In this section all requirements for the test environment(s) should be listed:
- What environments are needed (system test, integration test, acceptance test, etc.)?
- Where will the test data come from?
- If a copy of the production data is used, are there any constraints from the confidentiality policy?
- What tools are used?
1. Environments
2. Test Data
3. Privacy
4. Tools
Criteria
1. Acceptance Criteria
The acceptance criteria for an IT Product are documented in the IT Product quality acceptance criteria:
- The IT Product is implemented, and 100% of the IT Product requirements are verified and validated according to the agreed scope (from the project plan), with no open defects of severity “critical” or “high”
- Meets the acceptance criteria as defined in the project or release plan
2. Entry and Exit Criteria
List the entry and exit criteria for the chosen test levels, as defined in the Test Strategy document.
3. Suspension Criteria
In this section the conditions when test execution may be suspended should be listed:
- Blocking defects which block all test cases that still need to be executed
- Configuration issues
- Issues with the architecture
- Environment not available
The test execution can resume as soon as a solution is provided.
Suspension or resumption of the test execution is a joint decision between the Project Manager and the test team.
Communication
1. Meeting
List here the meetings that will take place. For each meeting, state: Name, Purpose, Attendees, Frequency, Minutes.
Meetings that can be mentioned:
- Testing kick-off meeting
- Review meetings
- Progress meeting
- Defect triage
- Test Closure
2. Reporting
Define how the reporting will be done:
- What is to be reported
- To whom
- Frequency
- Format (QA report)
Defect Workflow
The defect workflow should be described in the testing guidelines.
Approvals
The names and roles of all persons who should approve the plan shall be detailed.
The test plan should contain a date and signature section as well.
Conclusion
Now you know what a test plan is and what it should contain.
Adjust the test plan template described above to your needs.
The test plan is a document describing the purpose, approach, resources, and schedule for the testing activities that will take place.
It shows the strategy that will be used to verify that a product or system meets design specifications and other requirements.
It identifies the functionalities and attributes to be tested, who will test what, the test environment, the techniques for creating and measuring tests, as well as the risks that may occur in planning.
Are you feeling prepared to create a test plan for your project?
Let me know how it went for you.
If you liked the article, give it a LIKE, SHARE and COMMENT. Feel free to share it with anyone who may benefit from it.