Sunday, August 10, 2008

Testing - when requirements are changing continuously

Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to...

- In the project's initial schedule, allow some extra time commensurate with probable changes.
- Ensure customers and management understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are acceptable.
- Balance the effort put into setting up automated tests against the expected effort required to redo them to deal with changes.
- Design some flexibility into automated test scripts (see the data-driven sketch after this list).
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
- Design some flexibility into test cases; this is not easily done. The best bet is to minimize the detail in the test cases or to set up only higher-level, generic test plans.
- Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.
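One way to picture the "flexibility" advice above is to keep expected values in a data table rather than hard-coding them in test logic. The sketch below is illustrative only; `validate_discount` and its 0-50 rule are hypothetical stand-ins for whatever requirement is currently in flux.

```python
# Data-driven sketch: the boundary lives in one table, so a
# requirements change means editing data, not rewriting test logic.

def validate_discount(percent):
    """Hypothetical rule under test: accept discounts of 0-50 percent."""
    return 0 <= percent <= 50

TEST_TABLE = [          # (input, expected) - update here when rules move
    (0, True),
    (50, True),
    (51, False),
    (-1, False),
]

def test_validate_discount():
    for value, expected in TEST_TABLE:
        assert validate_discount(value) == expected, value

if __name__ == "__main__":
    test_validate_discount()
    print("all discount checks passed")
```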

The Testing Estimation Process

One of the most difficult and critical activities in IT is the estimation process. I believe this is because when we say that a project will be accomplished in a given time at a given cost, it must happen. If it does not, several things may follow: from peers' comments and senior management's warnings to being fired, depending on the reasons and the seriousness of the failure.

Here are a few rules for effective testing estimation:

Rule 1: Estimation shall always be based on the software requirements
All estimation should be based on what would be tested, i.e., the software requirements.
In many cases, the software requirements are established by the development team alone, with little or no participation from the testing team. After the specifications have been established and the project costs and duration have been estimated, the development team asks how long it would take to test the solution.

Instead:
The software requirements shall be read and understood by the testing team, too. Without the testing team's participation, no serious estimation can be made.

Rule 2: Estimation shall be based on expert judgment
Before estimating, the testing team classifies the requirements into the following categories:
- Critical: The development team has little knowledge of how to implement it;
- High: The development team has good knowledge of how to implement it, but it is not an easy task;
- Normal: The development team has good knowledge of how to implement it.

The experts in each requirement should say how long it would take to test it. The categories help the experts estimate the testing effort for each requirement; the sketch below shows one way to weight their estimates by category.
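As a rough illustration of how the categories could feed an estimate, this sketch weights an expert's base hours by category. The multipliers are invented for illustration; real factors would come from your own project history.

```python
# Rule 2 sketch: expert base estimates weighted by requirement category.
# The multipliers below are assumptions, not prescribed values.

CATEGORY_FACTOR = {"critical": 2.0, "high": 1.5, "normal": 1.0}

def estimate_hours(requirements):
    """requirements: list of (name, category, expert_base_hours)."""
    return sum(base * CATEGORY_FACTOR[cat] for _, cat, base in requirements)

reqs = [
    ("login", "normal", 8),
    ("payment gateway", "critical", 8),
    ("report export", "high", 4),
]
print(estimate_hours(reqs))  # 8*1.0 + 8*2.0 + 4*1.5 = 30.0 hours
```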

Rule 3: Estimation shall be based on previous projects
All estimation should be based on previous projects. If a new project has requirements similar to those of a previous one, the estimation can start from that project.

Rule 4: Estimation shall be recorded
All decisions should be recorded. This is very important: if requirements change for any reason, the records help the testing team estimate again without having to retrace every step and make the same decisions over. Sometimes it is also an opportunity to adjust an earlier estimate.

Rule 5: Estimation shall be supported by tools
Tools that help reach an estimate quickly (e.g., a spreadsheet containing metrics) should be used. In this case, the spreadsheet automatically calculates the costs and duration of each testing phase; a minimal stand-in sketch follows.
Also, a document containing sections such as a cost table, risks, and free notes should be created and sent to the customer. It also shows the different testing options, which can help the customer decide which kind of testing he needs.
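A minimal stand-in for such a spreadsheet might look like this; the phases, hours, and hourly rate are made-up figures.

```python
# Estimation-sheet sketch: phase effort and cost from one rate table.

HOURLY_RATE = 40  # illustrative cost per testing hour

phases = {                 # phase -> estimated hours (invented figures)
    "test planning": 16,
    "test case design": 40,
    "test execution": 60,
    "regression": 24,
}

print(f"{'phase':<18}{'hours':>6}{'cost':>8}")
for phase, hours in phases.items():
    print(f"{phase:<18}{hours:>6}{hours * HOURLY_RATE:>8}")
total = sum(phases.values())
print(f"{'total':<18}{total:>6}{total * HOURLY_RATE:>8}")
```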

Rule 6: Estimation shall always be verified
Finally, all estimation should be verified. Another spreadsheet can be created for recording the estimates. Each new estimate is compared to the previously recorded ones to see whether it follows a similar trend. If it deviates significantly from the recorded ones, a re-estimation should be made; the sketch below shows one way to flag such a deviation.
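One simple way to flag a deviation, assuming a 20% threshold chosen purely for illustration:

```python
# Rule 6 sketch: compare a new estimate against the recorded history.

def needs_reestimation(new_estimate, history, threshold=0.20):
    mean = sum(history) / len(history)
    return abs(new_estimate - mean) / mean > threshold

recorded = [120, 135, 128]                # hours from comparable projects
print(needs_reestimation(200, recorded))  # True: out of trend, re-estimate
print(needs_reestimation(130, recorded))  # False: within trend
```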

Boundary Value Analysis (BVA)

Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside the boundaries, just outside the boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.

E.g., if a text box is supposed to accept values in the range 1 to 100, try providing the following values (a parametrized sketch follows the list):
1) 1, 100, and values between 1 and 100 on a sampling basis,
2) 0, 101,
3) Negative values,
4) Extremely large values
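A hedged pytest-style sketch of these cases, with a hypothetical `accepts` validator standing in for the text box:

```python
# Boundary-value sketch for the 1-100 text box.
import pytest

def accepts(value):
    """Hypothetical validator for the text box."""
    return 1 <= value <= 100

@pytest.mark.parametrize("value,expected", [
    (1, True), (100, True),      # on the boundaries
    (2, True), (99, True),       # just inside
    (0, False), (101, False),    # just outside
    (-5, False),                 # negative value
    (10**9, False),              # extremely large value
])
def test_boundaries(value, expected):
    assert accepts(value) == expected
```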

Responsibilities of a Test Lead

Responsibilities of a Test leader:

− Prepare the Software Test Plan.

− Check / review the test case documents (System, Integration, and User Acceptance) prepared by test engineers.
− Analyze requirements during the requirements analysis phase of projects.
− Keep track of the new requirements from the Project.
− Forecast / estimate the project's future requirements.
− Arrange the hardware and software required for the test setup.
− Develop and implement test plans.
− Escalate the issues about project requirements (Software, Hardware, Resources) to Project Manager / Test Manager.
− Escalate the issues in the application to the Client.
− Assign tasks to all testing team members and ensure that all of them have sufficient work in the project.
− Organize the meetings.
− Prepare the Agenda for the meeting, for example: Weekly Team meeting etc.
− Attend the regular client call and discuss the weekly status with the client.
− Send the Status Report (Daily, Weekly etc.) to the Client.
− Frequent status check meetings with the Team.
− Communication by means of Chat / emails etc. with the Client (If required).
− Act as the single point of contact between Development and Testers for iterations, Testing and deployment activities.
− Track and report upon testing activities, including testing results, test case coverage, required resources, defects discovered and their status, performance baselines, etc.
− Assist in performing any applicable maintenance to tools used in Testing and resolve issues if any.
− Ensure the content and structure of all testing documents / artifacts are documented and maintained.
− Ensure that all processes and procedures for testing are documented, implemented, monitored, and enforced as per the standards defined by the organization.
− Review various reports prepared by Test engineers.
− Log project related issues in the defect tracking tool identified for the project.
− Check for timely delivery of different milestones.
− Identify Training requirements and forward it to the Project Manager (Technical and Soft skills).
− Attend weekly Team Leader meeting.
− Motivate team members.
− Organize / Conduct internal trainings on various products.

Responsibilities of a Test Manager

Responsibilities of a Test Manager:

− Manage the Testing Department.
− Allocate resources to projects.
− Review weekly Testers' status reports and take necessary actions
− Escalate Testers' issues to the Sr. Management.
− Estimate for testing projects.
− Enforce the adherence to the company's Quality processes and procedures.
− Decide on the procurement of software testing tools for the organization.
− Coordinate between the various departments and groups.
− Provide technical support to the Testing team.
− Continuous monitoring and mentoring of Testing team members.
− Review of test plans and test cases.
− Attend weekly meeting of projects and provide inputs from the Testers' perspective.
− Immediate notification/escalation of problems to the Sr. Test Manager / Senior Management.
− Ensure processes are followed as laid down.

More Testing types: Funny

Have you ever done Testing to Obsession?
Well, we did it once, and with that particularly long and painful bout of regression testing we came up with this list of other types of testing we'd like not to see:


Aggression Testing: If this doesn't work, I'm gonna kill somebody.
Compression Testing: [ ]
Confession Testing: Okay, Okay, I did program that bug.
Depression Testing: If this doesn't work, I'm gonna kill myself.
Digression Testing: Well, it works, but can I tell you about my truck...
Expression Testing: #@%^&*!!! a bug.
Obsession Testing: I'll find this bug if it's the last thing I do.
Oppression Testing: Test this now!
Repression Testing: It's not a bug it's a feature.
Suggestion Testing: Well, it works, but wouldn't it be better if...

System Testing

This type of test involves examining the whole system: all the software components, all the hardware components, and any interfaces.
The whole system is checked not only for validity but also for whether it meets its objectives. The complete system is configured in a controlled environment, and test cases are developed to simulate real-life scenarios in that test environment.
System testing should include recovery testing, security testing, stress testing and performance testing.

Recovery testing uses test cases designed to examine how easily and completely the system can recover from a disaster (power shut down, blown circuit, disk crash, interface failure, insufficient memory, etc.). It is desirable to have a system capable of recovering quickly and with minimal human intervention. It should also have a log of activities happening before the crash (these should be part of daily operations) and a log of messages during the failure (if possible) and upon re-start.

Security testing involves testing the system in order to make sure that unauthorized personnel or other systems cannot gain access to the system and information or resources within it. Programs that check for access to the system via passwords are tested along with any organizational security procedures established.

Stress testing encompasses creating unusual loads on the system in an attempt to break it. The system is monitored for performance loss and susceptibility to crashing during the load times. If it does crash as a result of high load, that simply provides one more recovery test.

Performance testing involves monitoring and recording the performance levels during regular, low-stress, and high-stress loads. It measures resource usage under the conditions just described and serves as a basis for forecasting additional resources needed (if any) in the future. It is important to note that performance objectives should have been developed during the planning stage; performance testing is there to assure that these objectives are being met. However, these tests may also be run in the initial stages of production to compare actual usage to the forecast figures. A minimal timing sketch follows.
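As a rough sketch of the idea, the snippet below times a stand-in operation under a baseline and a heavier simulated load; `operation` is a placeholder, and real performance tests would measure the actual system against its stated objectives.

```python
# Toy performance measurement: throughput under two load levels.
import time
from concurrent.futures import ThreadPoolExecutor

def operation():
    time.sleep(0.01)  # stand-in for a real transaction

def measure(concurrent_users, requests_per_user=20):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users * requests_per_user):
            pool.submit(operation)
    elapsed = time.perf_counter() - start
    total = concurrent_users * requests_per_user
    print(f"{concurrent_users:>3} users: {total / elapsed:7.1f} req/s")

measure(1)    # regular load baseline
measure(20)   # high-stress load, compared against the objectives
```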

Writing effective test cases

Testing cannot ensure the complete eradication of errors, and the various types of testing have their own limitations. Consider exhaustive black box and white box testing, which in practice can never be as "exhaustive" as they are required to be, owing to resource constraints. When testing a software product, one of the most important things is the design of effective test cases. In black box testing, a tester tries to ensure quality output by identifying the subset of all possible test cases that has the highest probability of detecting most of the errors.

A test case describes how you intend to empirically verify that the software being developed conforms to its specifications. In other words, the author needs to cover all the possibilities for verifying that it correctly carries out its intended functions. The test case should be written with enough clarity and detail that an independent tester can carry out the tests properly.

Each test case would ideally have the actual input data to be provided and the expected output. The author of test cases should mention any manual calculations necessary to determine the expected outputs. Say a program converts Fahrenheit to Celsius; having the conversion formula in the test case makes it easier for the tester to verify the result in black box testing. Test data can be tabulated as a column of input items and a corresponding column of expected outputs, as in the sketch below.
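For the Fahrenheit-to-Celsius example, such a table might look like this, with the formula C = (F - 32) * 5 / 9 kept right next to the data; `f_to_c` is a hypothetical program under test.

```python
# Tabulated test data with the conversion formula in the test case.

def f_to_c(fahrenheit):
    """Hypothetical program under test: C = (F - 32) * 5 / 9."""
    return (fahrenheit - 32) * 5 / 9

CASES = [            # (input deg F, expected deg C)
    (32, 0.0),
    (212, 100.0),
    (-40, -40.0),
    (98.6, 37.0),
]

for f, expected in CASES:
    assert abs(f_to_c(f) - expected) < 1e-6, (f, expected)
print("conversion table verified")
```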

Random input testing, by contrast, offers little chance of detecting most of the defects. Hence, the author needs to give more attention to certain details.

Effective test design requires a thought process that allows the tester to select a set of test data intelligently. The tester tries to cover as many error-prone scenarios as possible, generating many scenarios for the test cases, and also looks for other possible errors to ensure that the test cases can demonstrate the presence or absence of errors in the product.

The focus here is on the author's/tester's approach to black box testing.

Black box Testing:
Functional testing, known as black box testing, addresses the overall behavior of the program by testing transaction flows, input validation, and functional completeness. Four techniques are significantly important for deriving a minimum set of test cases and the input data for them.

Equivalence partitioning:
An equivalence class is a subset of a larger class of input data. One representative value from each class is tested, rather than undertaking exhaustive testing of every value in the larger set; this is equivalence partitioning. For example, a payroll program that edits professional tax deduction limits within Rs. 100 to Rs. 400 would have three equivalence partitions:

Less than Rs.100/- (Invalid Class)
Between Rs.100 to Rs.400/- (Valid Class)
Greater than Rs.400/- (Invalid Class)

If one test case from an equivalence class results in an error, all other test cases in that class would be expected to produce the same error. The tester therefore needs to write very few test cases, which saves precious time and resources; a minimal sketch follows.
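A minimal sketch of the three partitions, assuming a hypothetical `valid_deduction` validator for the Rs. 100-400 rule:

```python
# One representative test value per equivalence class.

def valid_deduction(amount):
    """Hypothetical payroll rule: Rs.100-400 is the valid range."""
    return 100 <= amount <= 400

representatives = {
    "below range (invalid)": (50, False),
    "within range (valid)": (250, True),
    "above range (invalid)": (700, False),
}

for label, (amount, expected) in representatives.items():
    assert valid_deduction(amount) == expected, label
print("one test per partition is enough")
```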

Boundary Value Analysis:
Experience shows that test cases which explore boundary conditions have a higher payoff than test cases that do not. Boundary conditions are the situations directly on, just above, and just beneath the edges of input and output equivalence classes.

This technique consists of generating test cases and a relevant set of data focused on the input and output boundaries of a given function. In the above example of professional tax limits, boundary value analysis would derive test cases for (a sketch of these cases follows the list):

Low boundary plus or minus one (Rs.99/- and Rs.101/-)
On the boundary (Rs.100/- and Rs.400/-)
Upper boundary plus or minus one (Rs.399 and Rs.401/-)
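The same hypothetical validator from the equivalence partitioning sketch, exercised at exactly the boundary values listed above:

```python
# Boundary cases for the Rs.100-400 limit.

def valid_deduction(amount):
    return 100 <= amount <= 400

boundary_cases = [
    (99, False), (101, True),    # low boundary plus or minus one
    (100, True), (400, True),    # on the boundaries
    (399, True), (401, False),   # upper boundary plus or minus one
]

for amount, expected in boundary_cases:
    assert valid_deduction(amount) == expected, amount
print("all boundary cases behave as expected")
```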

Error Guessing:
This is based on the theory that test cases can be developed from the intuition and experience of the test engineer. Some people adapt very naturally to program testing; we can say these people have a knack for "smelling out" errors without following any particular methodology.
This error-guessing quality enables a tester to practice more efficient, result-oriented testing than test cases alone could guide him to.
It is difficult to give a procedure for the error guessing technique, since it is a largely intuitive and ad hoc process. For example, where one of the inputs is a date, a test engineer may try February 29, 2000 or 9/9/99.

Orthogonal Array:
This technique is particularly useful for finding errors associated with region faults, an error category associated with faulty logic within a software component.

For example, suppose there are three parameters (A, B, and C), each of which can take one of three possible values. Exhaustive testing would require 3x3x3 = 27 test cases. But because of the way the program works, it is more likely that a fault will depend on the values of only two parameters. In that case, the fault may occur for each of these three test cases:
1. A=1, B=1, C=1
2. A=1, B=1, C=2
3. A=1, B=1, C=3

Since the value of C seems to be irrelevant to the occurrence of this particular fault, any one of the three test cases will suffice. Based on this assumption, the test engineer may derive only nine test cases, which show all possible pairs within the three variables. The array is orthogonal because, for each pair of parameters, every combination of their values occurs exactly once.

That is, all possible pairwise combinations between parameters A & B, B & C, and C & A are shown. Since we are thinking in terms of pairs, we say this array has strength 2. It does not have strength 3, because not all three-way combinations occur (A=1, B=2, C=3, for example, does not appear), but it covers the pairwise possibilities, which is what we are concerned about. The sketch below shows the nine-row array and checks its pairwise coverage.
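Here is one standard nine-row array for this example, with a check that every pair of columns covers all nine value pairs; the array itself is a textbook L9, not anything specific to the original example.

```python
# L9 orthogonal array (strength 2) for three 3-level parameters,
# plus a verification of its pairwise coverage.
from itertools import combinations, product

L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

for c1, c2 in combinations(range(3), 2):       # every pair of columns
    seen = {(row[c1], row[c2]) for row in L9}
    assert seen == set(product((1, 2, 3), repeat=2))
print("9 rows cover all pairwise combinations (vs 27 exhaustive)")
```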

White box Testing:
Structural testing, known as white box testing, includes path testing, code coverage testing and analysis, logic testing, nested loop testing, and many similar techniques.

1. Statement Coverage: Execute all the statements at least once.
2. Decision Coverage: Execute each decision direction at least once.
3. Condition Coverage: Execute each condition with all possible outcomes at least once.
4. Decision / Condition Coverage: Execute each condition with all possible outcomes and each decision direction at least once. Treat all iterations as two-way conditions, exercising each loop zero times and once.
5. Multiple Condition Coverage: Execute all possible combinations of condition outcomes in each decision, invoking each point of entry at least once.

A tester would choose a combination of the above techniques appropriate for the application and the available time frame. An overly detailed focus on all these aspects can, at times, yield too much vague information. The sketch below illustrates the difference between statement and decision coverage.
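To see why statement coverage is the weakest of these criteria, consider this tiny invented example: one test executes every statement of `absolute`, but decision coverage also demands the branch-not-taken case.

```python
# Statement vs decision coverage on a one-branch function.

def absolute(x):
    if x < 0:        # the only decision
        x = -x
    return x

assert absolute(-5) == 5   # statement coverage: every line executed
assert absolute(3) == 3    # decision coverage: the False direction too
print("both decision directions exercised")
```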

Checklist for Acceptance Test

Test Preparation

- Has the plan for acceptance testing been submitted?
- Have all possible interactions been described?
- Are all input data required for testing available?
- Is it possible to automatically document the test runs?
- Have the customer specific constraints been considered?
- Have you defined acceptance criteria (e.g. performance, portability, throughput, etc.) on which the completion of the acceptance test will be judged?
- Has the method of handling problems detected during acceptance testing and their disposition been agreed between you and the customer?
- Have you defined the testing procedure, e.g. benchmark test?
- Have you designed test cases to discover contradictions, if any, between the software product and the requirements?
- Have you established test cases to review if timing constraints are met by the system?

Test Execution and Evaluation

- Has the acceptance test been performed according to the test plan?
- Have all steps of the test run been documented?
- Have the users reviewed the test results?
- Do the services provided by the system conform to the user requirements stated earlier?
- Have the users judged acceptability according to the predetermined criteria?
- Has the user signed off on the output?

Levels of testing

A step into the world of software testing might seem like a nightmare, with memories of unending test scripts
providing weeks of tedious repetitive testing and mountains of problem reports. Even after apparently exhaustive testing, bugs are still found, there is rework to carry out and inevitably yet more testing. But with careful planning, software testing can go like a dream.

Levels of testing:
The key to successful test strategies is picking the right level of testing at each stage in a project. The first point to take into account is that testing will not find all bugs, so a decision has to be made about the level of coverage required. This will depend on the software application and the level of quality required by the end users. A safety-critical system such as a commercial fly-by-wire aircraft will clearly need a far higher degree of testing than a cheap games package for the domestic market.

This sort of decision needs to be made at an early stage in a project and will be documented in an overall software test plan. Typically this general plan will be drafted at the same stage that the requirements are catalogued and would be finalized once functional design work has been completed. It is now that confusion over the different levels of testing often arises. The adoption of the simple philosophy that each design document requires a corresponding test document can cut right through this confusion. The definition of the requirements will therefore be followed closely by the preparation of a test document - the Acceptance Tests.

Acceptance Tests are written using the Requirements Specification, which, apart from the overall test plan, is the only document used in writing them. The point of Acceptance Tests is to demonstrate that the requirements have been met, and therefore there needs to be no other input. It follows that the Acceptance Tests will be written at a fairly abstract level, as there is little detail at this stage of how the software will operate. An important point to bear in mind for all testing is that the tests should be written by a person who has some degree of independence from the project. This may be achieved by employing an external software consultant, or it may be adequate to use someone other than the author of the Requirements Specification. This will help to remove ambiguity from the design documents and to ensure that the tests reflect the design, rather than testing that has already been performed.

Following the classic software lifecycle, the next major document to be produced is the Functional Specification (External Design or Logical Design), which provides the first translation of the Requirements into a working system, and it is here that the system tests are written. As for the overall system it would be usual to write a System Test Plan, which will describe the general approach to how the system testing will be carried out. This might define the level of coverage (e.g. the level of validation testing to be carried out on user enterable fields) and the degree of regression testing to be included. In any case, it would normally only exercise the system as far as described in the Functional Specification. System Tests will be based on the Functional Specification, and this will often be the only document used as input.

The lowest level of testing is module level, and this is based on the individual module designs. This testing will normally be the most detailed, and will usually include areas such as range checking on input fields, exercising of output messages, and other module-level details. Splitting the tests into these different segments helps to keep re-testing to a minimum: a problem discovered in one module during System Testing will simply mean re-executing that module test, followed by the System Tests.

Other levels:
Many other types of testing are available (e.g. installation, load, performance), though all will normally fall under one of the previous categories. One of the most important is Regression Testing, which is used to confirm that a new release of software has not regressed (i.e. lost functionality). In an extreme case, (which might be the product of a poorly planned test program) all previous tests must be re-executed on the new system. This may be acceptable for a system with automated testing capabilities, but for an interactive system it could mean an extensive use of test staff. For a software release it may be perfectly adequate to re-execute a subset of the module and system tests, and to write a new acceptance test based only on the requirements for this release.

Conclusion
Testing can make or break a project - it is vital that testing is planned to ensure that adequate coverage is
achieved. The emphasis here is adequate - too much testing can be uneconomic, too little can result in poor quality software.

What is Testware?

As we know, hardware development engineers produce hardware and software development engineers produce software. In the same way, software test engineers produce testware.

Testware is produced by both verification and validation testing methods. Testware includes test cases, test plans, test reports, and so on. Like software, testware should be placed under the control of a configuration management system, saved, and faithfully maintained.

Like software, testware has significant value because it can be reused. The tester's job is to create testware that will have a specified lifetime and be a valuable asset to the company.

Link for WinRunner Demo

Click here for a free trial version of WinRunner, Mercury Interactive's enterprise functional testing tool.

Note that you'll need to contact their sales department in your region.

Funny replies developers give to testers..

Top 20 replies by Programmers to Testers when their programs don't work:

20. "That's weird..."
19. "It's never done that before."
18. "It worked yesterday."
17. "How is that possible?"
16. "It must be a hardware problem."
15. "What did you type in wrong to get it to crash?"
14. "There is something funky in your data."
13. "I haven't touched that module in weeks!"
12. "You must have the wrong version."
11. "It's just some unlucky coincidence."
10. "I can't test everything!"
9. "THIS can't be the source of THAT."
8. "It works, but it hasn't been tested."
7. "Somebody must have changed my code."
6. "Did you check for a virus on your system?"
5. "Even though it doesn't work, how does it feel?
4. "You can't use that version on your system."
3. "Why do you want to do it that way?"
2. "Where were you when the program blew up?"
1. "It works on my machine"

Security Testing

Security Testing:
Testing which confirms that the program restricts access to authorized personnel and that authorized personnel can access only the functions available to their security level. Security testing determines how well the system is protected against unauthorized internal or external access, or willful damage.

Types of Security Testing:
1. Vulnerability Scanning
2. Security Scanning
3. Penetration Testing
4. Risk Assessment
5. Security Auditing
6. Ethical Hacking
7. Posture Assessment & Security Testing

Vulnerability Scanning uses automated software to scan one or more systems against known vulnerability signatures. Vulnerability analysis is a systematic review of networks and systems that determines the adequacy of security measures, identifies security deficiencies, and evaluates the effectiveness of existing and planned safeguards. It justifies the resources required to secure the organization's perimeter or, alternatively, gives you the peace of mind that your network is secure. Examples of such software are Nessus, SARA, and ISS; a toy probe sketch follows.
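The sketch below shows only the automated flavor of scanning: it probes a host for open TCP ports, which is merely the first step real scanners take before matching known vulnerability signatures. Run it only against hosts you own.

```python
# Toy port probe - not a vulnerability scanner, just an illustration.
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```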

Security Scanning is a Vulnerability Scan plus Manual verification. The Security Analyst will then identify network weaknesses and perform a customized professional analysis.

Penetration Testing takes a snapshot of the security on one machine, the "trophy". The Tester will attempt to gain access to the trophy and prove his access, usually, by saving a file on the machine. It is a controlled and coordinated test with the client to ensure that no laws are broken during the test. This is a live test mimicking the actions of real life attackers. Is the security of IT systems up to the task? Conducting a penetration test is a valuable experience in preparing your defenses against the real thing.

Risk Assessment involves a security analysis based on interviews, combined with research into business, legal, and industry justifications.

Security Auditing involves hands on internal inspection of Operating Systems and Applications, often via line-by-line inspection of the code. Thorough and frequent security audits will mean your network is more secure and less prone to attack.

Ethical Hacking is basically a number of Penetration Tests on a number of systems on a network segment.

Posture Assessment & Security Testing combine Security Scanning, Ethical Hacking and Risk Assessments to show an overall Security Posture of the organization. It needs a methodology to follow.

The 6 testing sections include:
1. Information Security
2. Process Security
3. Internet Technology Security
4. Communications Security
5. Wireless Security
6. Physical Security

The Information Security section is where an initial Risk Assessment is performed. All pertinent documentation is compiled and analyzed to compute "Perfect Security". This level of Perfect Security then becomes the benchmark for the rest of the test. Throughout the other five sections, all testing results are reviewed against this benchmark and the final report includes a gap analysis providing solutions to all outstanding vulnerabilities.

Process Security addresses Social Engineering. Through Request, Guided Suggestion, and Trusted Persons testing, the tester can gauge the security awareness of your personnel.

The Internet Technology Security Testing section contains what most people view as a security test. Various scans and exploit research will point out any software and configuration vulnerabilities along with comparing the business justifications with what is actually being deployed.

Communications Security Testing involves testing Fax, Voicemail and Voice systems. These systems have been known to be exploited causing their victims to run up costly bills. Most of these exploits will go unknown without being tested.

Wireless Security Testing addresses wireless technology, which has been gaining use rapidly over the last few years. This section was created to address the gaping exploits that can be found due to misconfigurations by engineers with limited knowledge of the recent technology.

Physical Security Testing checks areas such as physical access control and the environmental and political situations surrounding the site. For example, has your data center been placed in the flight path of an airport runway? What is the risk of an airliner engine crashing into your server rack? If you have a redundant data center, then the risk may be assumable. Another risk is having your call center located in a flood plain.

Have I Tested Enough?

One of the most intriguing and difficult questions to answer in any software development life cycle is whether the software is defect free. No matter what the level of maturity of the software application, it is next to impossible to say that it is defect free. We can answer this question to a certain level by collecting data on the performance and reliability of the software. This data is collected on the basis of what is commonly known as 'Testing'.

This gives rise to another not so easily answerable question, has the software been tested enough? Unfortunately there is no formula that can answer this question in black and white. And hence the tester has to rely on certain vital signs that can help him release the code. It is important for us to understand the objectives of testing before we can answer these questions.

Some of the popular views of the objectives of testing are -
Testing is a process of executing a program with the intent of finding an error.
A good test case is one that has a high probability of finding an as-yet undiscovered error.
A successful test is one that uncovers an as-yet undiscovered error.

The above axioms are taken from Glenford Myers' book on software testing, and they give us an altogether different perspective. Testing has long been regarded as an activity that helps us say that the program works. But the actual objective of testing is to find the many ways in which the program can go wrong and to fix those 'bugs'. Another important thing to understand here is that testing can only show the existence of bugs by uncovering them, not their absence. A bug detected is good, but a bug undetected is a cause for concern, simply because the cost of fixing it goes up almost exponentially over time. Therefore, the sooner a bug is detected, the better it is for your financial coffers.

The whole process of testing involves various testing strategies, procedures, documentation, planning and execution, tracking of defects, collecting metrics et al. No wonder with all these activities, testing is given the status of a phase in itself under the software development life cycle. Testing is a very critical activity that can make the difference between a satisfied customer and a lost customer. That is why it is of paramount importance that the testing team is a creative unit and well focused on the job of testing. There are many ways and techniques that a testing team can adopt to test a particular program or software or requirement. We will not get into the details of those methodologies. The scope of this discussion is to look for those 'vital signs' that will help us in making a decision call with regards to marking the end of a testing phase.

Following are the vital signs -
Test Planning
Test Case Design and Coverage
New Requirements
Defect log
Regression Test suite
Project Deadlines and Test Budget

Now let us look at each of these vital signs individually so that we get a better understanding of how we can use them as vital signs.

Test Planning
It is essential to have an effective plan for testing. The Test Plan should clearly indicate the entry and exit criteria. Having defined these criteria well, it is easier to analyze the stability of the software after tests have been performed. Requirement traceability analysis can be done to be sure that the requirements identified as testable have corresponding test cases written for them; a traceability matrix can be used for this exercise (a minimal sketch follows). It is also important to prioritize the test cases, accounting for risk, into mandatory, required, and desired test cases, so that the important ones can be executed first and the effect of time constraints on these critical test cases is reduced.
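A traceability matrix can be as simple as a mapping from requirement IDs to test case IDs; the IDs below are invented for illustration.

```python
# Traceability-matrix sketch: flag requirements with no test cases.

traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],                # testable, but no test case yet
}

uncovered = [req for req, tcs in traceability.items() if not tcs]
print("requirements without test cases:", uncovered or "none")
```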

Test Case Design and Coverage
Test Coverage deals with making sure that the test cases that have been designed for testing the requirements of a particular release of a software application are optimal and address the requirements correctly. The traceability matrix made while test planning will give us an idea of whether there are corresponding test cases for every requirement. Whereas a coverage analysis will be able to tell us whether the test cases drafted are the right ones and if they are enough. The most effective way of analyzing is through Reviews or Inspections. Reviews and Inspections can involve conducting meetings wherein the participants review certain documents like test cases in this case. The test cases are sent to the reviewers in advance to go through them and during the course of the meeting any inadequacies in the test cases can be dug out. Certain testing techniques like Boundary Value Analysis, Equivalence Partitioning, Valid and Invalid test cases can be incorporated while designing test cases to address the coverage issue.

New Requirements
Most of the software releases generally have a set of new requirements or features implemented to enhance the ability of the software. And when testing a particular release of a software application, the concentration is more on these new requirements. Prioritizing the execution of test cases for the new requirements helps the tester to stay focused on the job on hand.
A high priority can be assigned to these test cases while planning test execution. Sometimes software releases go through many changes in requirements and additions of new requirements. Though this is not ideal, the business demands additions of new requirements or changes to already-frozen requirements, which is exactly why software development is such a dynamic process. In such a scenario, it is essential for the tester to keep track of the changes in requirements and have test cases designed for them. There may not be enough time to go through the entire process of test planning and reviews at this juncture, but keeping track of these changes through documents assists the testers in knowing whether these requirements have been tested or not.

Defect Log
The defect log gives a clear indication of the quality and stability of the product. If there are severe defects that are still open, it indicates that the quality of the product is still not up to the mark. And that testing still needs to be done to uncover more such severe defects. But on the other hand if there are no high severity defects open and the number of low severity defects is relatively low, the development team can negotiate with the testing team in order to move the software into production. The use of a proper defect tracking system or tool is advisable to keep a defect log and generate reports on defect status.

Regression Test suite
While planning the testing phase, it is important to plan regression testing cycles as well. A minimum of two to three regression cycles is necessary to gain confidence in the stability of the software. The advantages of regression testing are twofold:
- Uncovers any defects that have gone unnoticed in previous builds
- Uncovers defects that arise due to fixes to existing defects

Automation tools can be used to write scripts that perform a regression test, reducing the testing cycle time. Assigning criticalities to test cases helps in choosing and creating a regression test suite and in prioritizing the execution of manual test cases (a selection sketch follows). But automated scripts are the best way to run and log regression testing. If no defect is found while running these scripts, we can be assured that the existing functionality is relatively stable.
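A sketch of such criticality-based selection, reusing the mandatory/required/desired split mentioned under Test Planning (names invented):

```python
# Choose a regression suite by criticality label.

test_cases = [
    ("TC-101 login", "mandatory"),
    ("TC-102 payment", "mandatory"),
    ("TC-205 report layout", "required"),
    ("TC-301 tooltip text", "desired"),
]

def regression_suite(cases, include=("mandatory",)):
    return [name for name, level in cases if level in include]

print(regression_suite(test_cases))                             # tight cycle
print(regression_suite(test_cases, ("mandatory", "required")))  # fuller cycle
```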

Project Deadlines and Test Budget
In most real-life scenarios, the end of testing is defined by the project deadlines and the depletion of the budget for testing. Though many software products and services go into production with open issues negotiated away due to time constraints, it is advisable to utilize the test budget fully. Since the project deadlines and the budget are known beforehand, testing can be planned effectively and all the resources for testing can be utilized optimally.

Finally, having a mechanism that represents the confidence levels of these vital signs on a scale of 1 to 10 will clearly indicate the quality of the testing activity and the ability to capture critical defects sooner rather than later. A simple bar graph with the vital signs on the x-axis and values 1 to 10 on the y-axis will be sufficient for this. If each of the bars is above a certain minimum level mutually agreed to by the development and testing teams, then we can safely conclude that most of the testing is, and will be, effectively done. Though testing is a very critical and essential activity, the load on testing can be reduced by reviewing and inspecting the various development activities and artifacts, right from requirement analysis through to the code being written, in order to detect bugs early in the development life cycle and reduce the impact of a testing phase that may not be completed wholly.
