When to Start and Stop Testing?
Testing is sometimes incorrectly thought of as an after-the-fact activity, performed only after programming is done. Instead, testing should be performed at every stage of the product's development. Test data sets must be derived, and their correctness and consistency monitored, throughout the development process.
If we divide the software development lifecycle into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs: not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather, testing should be involved throughout the SDLC in order to bring out a quality product.
Testing Activities in Each Phase
The following testing activities should be performed during these phases:
1. Requirements Analysis
- Determine correctness
- Generate functional test data.
2. Design
- Determine correctness and consistency
- Generate structural and functional test data.
3. Programming/Construction
- Determine correctness and consistency
- Generate structural and functional test data
- Apply test data
- Refine test data.
4. Operation and Maintenance
- Retest.
Details of testing in all phases:
1. Requirements Analysis
The following test activities should be performed during this stage:
1.1 Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.
The requirements statement should record the following information and decisions:
a. Program function - what the program must do.
b. The form, format, data types and units for input.
c. The form, format, data types and units for output.
d. How exceptions, errors and deviations are to be handled.
e. For scientific computations, the numerical method or at least the required accuracy of the solution.
f. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).
Deciding the above issues is one of the activities related to testing that should be performed during this stage.
1.2 Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner and for each class a representative element should be included in the test data.
In addition, the following should also be included in the data set:
(1) boundary values
(2) any non-extreme input values that would require special handling.
The output domain should be treated similarly.
Invalid input requires the same analysis as valid input.
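As an illustration, here is a minimal sketch of such a requirements-derived test set in Python. The program under test, classify_age, and its input classes are hypothetical stand-ins for whatever the real requirements would define:

```python
import unittest

def classify_age(age):
    """Hypothetical program under test: classify an age in whole years."""
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    if age < 0 or age > 130:
        raise ValueError("age out of range")
    return "minor" if age < 18 else "adult"

class RequirementsBasedTests(unittest.TestCase):
    def test_representative_values(self):
        # One representative value from each input class the program
        # treats uniformly.
        self.assertEqual(classify_age(10), "minor")
        self.assertEqual(classify_age(40), "adult")

    def test_boundary_values(self):
        # Values at the edges of each class.
        self.assertEqual(classify_age(0), "minor")     # lower bound
        self.assertEqual(classify_age(17), "minor")    # just below the class edge
        self.assertEqual(classify_age(18), "adult")    # the class edge itself
        self.assertEqual(classify_age(130), "adult")   # upper bound

    def test_invalid_input(self):
        # Invalid input gets the same analysis as valid input.
        self.assertRaises(ValueError, classify_age, -1)
        self.assertRaises(ValueError, classify_age, 131)
        self.assertRaises(TypeError, classify_age, "forty")

if __name__ == "__main__":
    unittest.main()
```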
1.3 The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.
2. Design
The design document aids in programming, communication, error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution, i.e. what the program will do and how it will be done.
The design document should contain:
· Principal data structures.
· Functions, algorithms, heuristics or special techniques used for processing.
· The program organization: how it will be modularized, and its external and internal interfaces.
· Any additional information.
Here the testing activities should consist of:
- Analysis of design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.
- Analysis of design to check whether it satisfies the requirements - check that both the requirements and the design document use the same form, format and units for input and output, and that all functions listed in the requirements document have been included in the design document. Selected test data generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.
- Generation of test data based on the design - the tests generated should cover the structure as well as the internal functions of the design, such as the data structures, algorithms, functions, heuristics and general program structure. Standard, extreme and special values should be included, and the expected output should be recorded in the test data.
- Re-examination and refinement of the test data set generated at the requirements analysis phase.
The first two steps should also be performed by a colleague, not only by the designer/developer.
3. Programming/Construction
Here the main testing points are:
- Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.
- Perform the testing process in an organized and systematic manner, with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which succeeded previously) must be rerun and recorded.
- Ask a colleague for assistance - some independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to the reviewer, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.
- Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.
- Apply stress to the program - testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.
- Test one at a time - pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation - the insertion of code into the program solely to measure various program characteristics - can be useful here; a small sketch appears after the coverage discussion below. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.
- Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.
The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.
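To make the instrumentation and coverage ideas above concrete, here is a minimal Python sketch. The function discount and its hand-inserted counters are hypothetical; a real project would normally use an off-the-shelf coverage tool such as coverage.py rather than manual counters:

```python
# Counters inserted solely to measure which branch exits are taken -
# the kind of instrumentation described above.
branch_counts = {"if_taken": 0, "if_skipped": 0}

def discount(price_cents, is_member):
    reduction = 0
    if is_member:
        branch_counts["if_taken"] += 1
        reduction = price_cents // 10      # hypothetical 10% member discount
    else:                                  # present only to count this exit
        branch_counts["if_skipped"] += 1
    return price_cents - reduction

# One test executes every statement of the original (uninstrumented)
# function, so statement testing is satisfied...
assert discount(1000, True) == 900

# ...but the false exit of the branch has never been taken, so branch
# testing is not.
assert branch_counts["if_skipped"] == 0

# A second test exercises the remaining exit and completes branch
# coverage for this small function.
assert discount(1000, False) == 1000
assert branch_counts["if_skipped"] == 1
```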
4. Operation and Maintenance
Corrections, modifications and extensions are bound to occur even for small programs, and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist. They must be modified to accommodate the program changes, and all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.
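A minimal sketch of such a regression test set in Python follows. The function compute and the recorded cases are hypothetical; the point is that the cases saved when the original program was accepted are replayed unchanged after every modification:

```python
import unittest

def compute(x):
    """Hypothetical program under maintenance."""
    return 2 * x + 1

# The saved test set: inputs and expected outputs recorded for the
# original program. Cases are added as the program changes, but the
# existing cases are always re-run.
SAVED_CASES = [
    (0, 1),
    (5, 11),
    (-3, -5),
]

class RegressionTests(unittest.TestCase):
    def test_saved_cases(self):
        for x, expected in SAVED_CASES:
            with self.subTest(x=x):
                self.assertEqual(compute(x), expected)

if __name__ == "__main__":
    unittest.main()
```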
When to stop testing?
"When to stop testing" is one of the most difficult questions to a test engineer. The following are few of the common Test Stop criteria:
- All the high-priority bugs are fixed.
- The rate at which bugs are found is too small.
- The testing budget is exhausted.
- The project duration is completed.
- The risk in the project is within acceptable limits.
Practically, the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with the testing that has been done. The risk can be measured by formal risk analysis, but for a short-duration/low-budget/low-resource project it can be estimated simply from (see the sketch after this list):
- Measured test coverage.
- The number of test cycles completed.
- The number of open high-priority bugs.
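For such a project, even a back-of-the-envelope check over those three measures can support the stop decision. Here is a sketch; the thresholds are entirely hypothetical and would have to be set by the project's own management:

```python
def ready_to_stop(statement_coverage, test_cycles, open_high_priority_bugs):
    """Crude stop-testing check based on the three measures above.

    The thresholds are illustrative examples, not industry norms.
    """
    return (statement_coverage >= 0.85       # enough of the code exercised
            and test_cycles >= 3             # enough full test cycles run
            and open_high_priority_bugs == 0)

# 90% coverage over four cycles, but one high-priority bug still open:
# keep testing.
print(ready_to_stop(0.90, 4, 1))   # False
print(ready_to_stop(0.90, 4, 0))   # True
```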