The key objectives for IV&V of software testing practices are to provide the customer: a means of early detection and correction of deficiencies in software quality assurance (QA) practices; insight into process and operational risks; and evidence of compliance (or non-compliance) with program performance, schedule, and budget requirements. The scope of the assessment depends greatly on the extent of the QA operations to be assessed, which typically drives the level of resources needed and the underlying activities, tools, and work products to be examined. The discussions that follow define a practical IV&V methodology for assessing the appropriateness of QA activities at the various phases of the software QA lifecycle, including the following:
Test planning involves completion of the necessary activities by the Quality Assurance (QA) Team to define the project’s test strategy and foster development of its test plan. The test plan specifies test objectives and goals, the test environment, test schedules, and preparation tasks. It identifies those requirements that are to be tested and the sources of the requirements (Functional Requirements Document [FRD], SOW, ADP Service Request, etc.). It also describes the testing procedure. In addition, the test plan addresses the allocation of resources and the responsibilities for testing within the functional, regression, performance, and acceptance phases of testing. It may be updated and enhanced as details of the project evolve. Test planning also includes environment planning by QA to ensure needed testing resources will be available to support testing. As a result, IV&V should examine the following:
- Are appropriate activities conducted to define the project’s test strategy and test plan?
- Are test plans developed that specify test objectives, goals, test environment, test schedules and preparation tasks?
- Do the test plans address the allocation of resources and responsibilities for the specified testing phases?
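As an illustration, an IV&V reviewer might capture the test-plan elements above as a simple completeness check. The section names below are assumptions drawn from this document, not a standard template, and the function is a hypothetical sketch rather than a prescribed tool:

```python
# Hypothetical IV&V check that a test plan addresses the elements listed above.
# REQUIRED_SECTIONS is an assumption based on this document, not a standard.
REQUIRED_SECTIONS = {
    "test objectives", "test environment", "test schedules",
    "preparation tasks", "resource allocation", "responsibilities",
}

def missing_sections(plan_sections) -> set:
    """Return required test-plan sections absent from the reviewed plan."""
    return REQUIRED_SECTIONS - {s.lower() for s in plan_sections}

# Example review of an (invented) incomplete plan
plan = ["Test Objectives", "Test Environment", "Test Schedules"]
print(sorted(missing_sections(plan)))
# -> ['preparation tasks', 'resource allocation', 'responsibilities']
```

A check of this kind gives the assessment a repeatable, documentable basis for the first two questions above rather than an ad hoc reading of the plan.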
During requirements analysis, QA will develop a requirements traceability matrix (RTM) to ensure all user requirements are satisfied. Thus, test planning begins by verifying that the user’s requirements are fully documented, unambiguous, understandable, complete, appropriate, verifiable, and testable. QA ensures the requirements are clearly and properly stated and consistent with one another. QA also ensures a “testable requirement” is sufficiently defined to permit writing test procedures that demonstrate whether the capability or capabilities defined by the requirement are embodied in the computer code and/or supporting databases or products.
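The RTM described above can be sketched as a minimal data structure that maps each requirement and its source to the test cases that trace to it, so that uncovered requirements surface automatically. All identifiers and requirement text below are invented for illustration:

```python
# Minimal sketch of an RTM coverage check; identifiers are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                 # e.g. "FRD-1.1" (hypothetical ID scheme)
    source: str                 # e.g. "FRD", "SOW", "ADP Service Request"
    text: str
    test_cases: list = field(default_factory=list)

    @property
    def is_covered(self) -> bool:
        """A requirement is covered if at least one test case traces to it."""
        return len(self.test_cases) > 0

def untraced(rtm: list) -> list:
    """Return IDs of requirements with no test case traced to them."""
    return [r.req_id for r in rtm if not r.is_covered]

rtm = [
    Requirement("FRD-1.1", "FRD", "System shall authenticate users", ["TC-001"]),
    Requirement("SOW-2.4", "SOW", "System shall log all transactions", []),
]
print(untraced(rtm))  # -> ['SOW-2.4']
```

In practice the RTM would live in a test management tool or spreadsheet; the point of the sketch is that traceability is a checkable property, not just a document.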
Test development also involves creation of the testing scenarios. These are developed by QA during the coding phase, but may be started earlier. Test scenarios are documented on Test Events Forms or in the RTM. The Test Events Forms or RTM specify the acceptance criteria for the product, which are the satisfactory demonstration of the expected results. During IV&V, the assessment team will ensure practices exist to define appropriate test scenarios that:
- Verify that the product satisfies the user’s requirements;
- Test exceptions;
- Test all conditional situations;
- Test for boundaries and data value ranges;
- Test that errors are handled correctly; and
- Check calculations if applicable.
It will ensure each scenario consists of a set of test steps that a tester can perform and a corresponding set of expected results that the tester can compare with the observed operation of the software. It will also verify that test scenarios are reusable and maintainable and, where necessary, are commented to document what is being tested and why.
IV&V will also verify that the test scenarios are sufficiently detailed to satisfy acceptance of the final, tested product. This includes mapping test cases to the requirements and testing to ensure that the application and products support these requirements.
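A concrete scenario touching the characteristics listed above might look like the following. The function under test is entirely hypothetical (a discount calculation invented for illustration); the structure, not the subject, is the point:

```python
# Illustrative test scenario exercising boundaries, exceptions, conditional
# behavior, error handling, and calculations, as listed above. The function
# under test (apply_discount) is an invented example, not from this document.
def apply_discount(price: float, pct: float) -> float:
    """Apply a percentage discount; reject out-of-range inputs."""
    if not (0 <= pct <= 100):
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

# Boundary and data-value-range checks
assert apply_discount(100.0, 0) == 100.0      # lower boundary: no discount
assert apply_discount(100.0, 100) == 0.0      # upper boundary: full discount

# Exception / error-handling check: out-of-range input must be rejected
try:
    apply_discount(100.0, 101)
    raise AssertionError("expected ValueError for pct > 100")
except ValueError:
    pass

# Calculation check: 25% off 80.00 should yield 60.00
assert apply_discount(80.0, 25) == 60.0
```

Each assertion pairs a performable step with an expected result a tester can compare against observed behavior, which is exactly the scenario shape IV&V should look for.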
As software development is completed, tests are executed as specified in the test plan and test scenarios. The QA staff performs testing based on the customer’s agreement for QA involvement at planned intervals within the project’s SDLC.
As a result, QA staff will perform test execution and management activities to address the performance of testing based on the scope and span of the product requirements. In this area, IV&V would include the following validations:
- Functional testing – Are the proper testing activities conducted to verify that each module works in relation to, and in concert with, other modules? Are the proper steps conducted to assess the combined parts of an application to determine whether they function correctly together? The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc.
- Regression testing – Are the proper processes and tools in place to re-test application systems and modules after fixes or modifications to the software or its environment? Are industry-accepted automated testing tools, such as HP QuickTest Professional (QTP), in use?
- Load/performance testing – Are the appropriate tools and testing practices in place to assess whether applications will have acceptable response times during peak usage? Are industry-accepted tools such as HP’s LoadRunner in use for load and performance testing? Can the team simulate large numbers of concurrent users of a web site and support analysis of measures such as average response time? Are there practices in place to assess the system under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.?
- Acceptance testing – This is the final phase of testing that is coordinated by QA. It is performed by the end users or customers over some limited period of time and determines whether the software is satisfactory to them. As a result, IV&V should be conducted to:
o Verify that a method exists for the end-user to document its approval that the software satisfies the requirements and that there are no mismatches between the tested and delivered configurations of the software that would invalidate the acceptance.
o Verify that each defect has been assigned a tracking number and is managed to closure or a final disposition by QA through the test management process.
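The defect-tracking practice above can be sketched minimally: every defect receives a tracking number at entry and can only leave the open state through a recognized disposition. The field names, ID format, and states below are assumptions, not any particular tool’s schema:

```python
# Minimal sketch of defect tracking to closure or final disposition.
# ID format and state names are invented for illustration.
import itertools

_ids = itertools.count(1)
OPEN, CLOSED, DEFERRED = "open", "closed", "deferred"

def log_defect(summary: str) -> dict:
    """Assign a tracking number to a newly reported defect."""
    return {"id": f"DEF-{next(_ids):04d}", "summary": summary, "state": OPEN}

def disposition(defect: dict, state: str) -> dict:
    """Move a defect to a final disposition; anything else is rejected."""
    if state not in (CLOSED, DEFERRED):
        raise ValueError("final disposition must be 'closed' or 'deferred'")
    defect["state"] = state
    return defect

d = log_defect("Login page rejects valid passwords")  # invented defect
print(d["id"])  # -> DEF-0001
disposition(d, CLOSED)
assert d["state"] == CLOSED
```

The IV&V concern is not the tool itself but that this invariant holds: no defect exists without a tracking number, and none disappears without a recorded disposition.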
At the end of the life cycle, metrics and lessons learned should be captured for process improvement. Therefore, IV&V activities should be conducted to ensure that useful metrics are being captured (such as the number of action items per release), that reviews are conducted to improve software development practices, and that methods are in place to measure whether defects are being minimized before the product leaves the development stages.
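One way to make the metrics above concrete is a simple release-over-release trend check. The release names and counts below are invented sample data, and the improvement criterion (strictly declining escaped defects) is an illustrative assumption, not a mandated threshold:

```python
# Sketch of release metrics: action items per release and a defect-escape
# trend, to gauge whether defects are being minimized before release.
# All sample data is invented for illustration.
releases = [
    {"name": "R1.0", "action_items": 42, "defects_escaped": 9},
    {"name": "R1.1", "action_items": 30, "defects_escaped": 5},
    {"name": "R1.2", "action_items": 21, "defects_escaped": 2},
]

def escape_trend(history) -> bool:
    """True if escaped-defect counts decline release over release."""
    counts = [r["defects_escaped"] for r in history]
    return all(a > b for a, b in zip(counts, counts[1:]))

for r in releases:
    print(f'{r["name"]}: {r["action_items"]} action items, '
          f'{r["defects_escaped"]} escaped defects')
print("Improving:", escape_trend(releases))  # -> Improving: True
```

Captured consistently across releases, even metrics this simple give management an objective signal for the follow-up reviews discussed next.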
Also, IV&V should assess whether follow-up reviews are being conducted to evaluate test procedures and identify opportunities for improvement. It should also determine whether the findings are being fed back into the software engineering process for discussion and incorporation into the procedures, building a knowledge base for the management of future testing initiatives.