Sunday, August 10, 2008

What is Testware?

As we know, hardware development engineers produce hardware and software development engineers produce software. In the same way, software test engineers produce testware.

Testware is produced by both verification and validation testing methods. It includes test cases, test plans, test reports, and so on. Like software, testware should be placed under the control of a configuration management system, saved, and faithfully maintained.

Like software, testware has significant value because it can be reused. The tester's job is to create testware that has a specified lifetime and is a valuable asset to the company.


Link for WinRunner Demo

Click here for a free trial version of WinRunner, Mercury Interactive's enterprise functional testing tool.

Note that you'll need to contact their sales department in your region.


Funny replies developers give to testers...

Top 20 replies by Programmers to Testers when their programs don't work:

20. "That's weird..."
19. "It's never done that before."
18. "It worked yesterday."
17. "How is that possible?"
16. "It must be a hardware problem."
15. "What did you type in wrong to get it to crash?"
14. "There is something funky in your data."
13. "I haven't touched that module in weeks!"
12. "You must have the wrong version."
11. "It's just some unlucky coincidence."
10. "I can't test everything!"
9. "THIS can't be the source of THAT."
8. "It works, but it hasn't been tested."
7. "Somebody must have changed my code."
6. "Did you check for a virus on your system?"
5. "Even though it doesn't work, how does it feel?"
4. "You can't use that version on your system."
3. "Why do you want to do it that way?"
2. "Where were you when the program blew up?"
1. "It works on my machine."


Security Testing

Security Testing:
Testing which confirms that the program restricts access to authorized personnel and that authorized personnel can access the functions available to their security level. The purpose of security testing is to determine how well the system is protected against unauthorized internal or external access, or willful damage.

Types of Security Testing:
1. Vulnerability Scanning
2. Security Scanning
3. Penetration Testing
4. Risk Assessment
5. Security Auditing
6. Ethical Hacking
7. Posture Assessment & Security Testing

Vulnerability Scanning uses automated software to scan one or more systems against known vulnerability signatures. Vulnerability analysis is a systematic review of networks and systems that determines the adequacy of security measures, identifies security deficiencies, and evaluates the effectiveness of existing and planned safeguards. It can justify the resources required to improve the organization's perimeter security, or alternatively give you the peace of mind that your network is secure. Examples of this software are Nessus, Sara, and ISS.
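
To make the idea concrete, here is a minimal Python sketch of the kind of probing such a scanner automates. It is only an illustration: real scanners like Nessus match service banners and versions against a database of vulnerability signatures rather than merely checking open ports, and the target host and port list below are placeholder values.

import socket

# Hypothetical target and ports. A real vulnerability scanner would compare
# the services found against known vulnerability signatures, not just report
# whether the port answers.
TARGET_HOST = "192.0.2.10"          # placeholder TEST-NET address
COMMON_PORTS = [21, 22, 23, 80, 443, 3389]

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port in COMMON_PORTS:
        state = "open" if probe(TARGET_HOST, port) else "closed/filtered"
        print(f"{TARGET_HOST}:{port} is {state}")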

Security Scanning is a Vulnerability Scan plus Manual verification. The Security Analyst will then identify network weaknesses and perform a customized professional analysis.

Penetration Testing takes a snapshot of the security on one machine, the "trophy". The tester will attempt to gain access to the trophy and prove his access, usually by saving a file on the machine. It is a controlled and coordinated test with the client to ensure that no laws are broken during the test. This is a live test mimicking the actions of real-life attackers. Is the security of your IT systems up to the task? Conducting a penetration test is a valuable experience in preparing your defenses against the real thing.

Risk Assessment involves a security analysis based on interviews, combined with research into business, legal, and industry justifications.

Security Auditing involves hands on internal inspection of Operating Systems and Applications, often via line-by-line inspection of the code. Thorough and frequent security audits will mean your network is more secure and less prone to attack.

Ethical Hacking is basically a number of Penetration Tests on a number of systems on a network segment.

Posture Assessment & Security Testing combines Security Scanning, Ethical Hacking and Risk Assessments to show the overall security posture of the organization. It requires a methodology to follow.

The 6 testing sections include:
1. Information Security
2. Process Security
3. Internet Technology Security
4. Communications Security
5. Wireless Security
6. Physical Security

The Information Security section is where an initial Risk Assessment is performed. All pertinent documentation is compiled and analyzed to compute "Perfect Security". This level of Perfect Security then becomes the benchmark for the rest of the test. Throughout the other five sections, all testing results are reviewed against this benchmark and the final report includes a gap analysis providing solutions to all outstanding vulnerabilities.

Process Security addresses Social Engineering. Through Request, Guided Suggestion, and Trusted Persons testing, the tester can gauge the security awareness of your personnel.

The Internet Technology Security Testing section contains what most people view as a security test. Various scans and exploit research will point out any software and configuration vulnerabilities along with comparing the business justifications with what is actually being deployed.

Communications Security Testing involves testing fax, voicemail and voice systems. These systems have been known to be exploited, causing their victims to run up costly bills. Most of these exploits would go unnoticed without testing.

Wireless Security: Wireless technology has been gaining in use rapidly over the last few years. The Wireless Security Testing section was created to address the gaping exploits that can result from misconfigurations by engineers with limited knowledge of this recent technology.

Physical Security: This section checks areas such as physical access control and the environmental and political situations surrounding the site. For example, if your data center has been placed in the flight path of an airport runway, what is the risk of an airliner engine landing in your server rack? If you have a redundant data center, the risk may be acceptable. Another risk is having your call center located in a flood plain.


Have I Tested Enough?

One of the most intriguing and difficult questions to answer in any software development life cycle is whether the software is defect free. No matter how mature a software application is, it is next to impossible to say that it is defect free. We can answer this question to a certain level by collecting data on the performance and reliability of the software. This data is collected on the basis of what is commonly known as 'testing'.

This gives rise to another question that is not so easily answered: has the software been tested enough? Unfortunately there is no formula that can answer this question in black and white, and hence the tester has to rely on certain vital signs that can help him decide when the code can be released. It is important for us to understand the objectives of testing before we can answer these questions.

Some of the popular views of the objectives of testing are -
Testing is a process of executing a program with the intent of finding an error.
A good test case is one that has a high probability of finding an as-yet undiscovered error.
A successful test is one that uncovers an as-yet undiscovered error.

The above axioms come from Glenford Myers (The Art of Software Testing) and give us an altogether different perspective on testing. Testing has long been regarded as an activity that helps us say that the program works, but the actual objective of testing is to find the many ways in which the program can go wrong and to fix those 'bugs'. Another important thing to understand here is that testing can only show the presence of bugs by uncovering them, not their absence. A detected bug is good news; an undetected bug is a cause for concern, simply because the cost of fixing it goes up almost exponentially over time. Therefore, the sooner a bug is detected, the better it is for your financial coffers.

The whole process of testing involves various testing strategies, procedures, documentation, planning and execution, tracking of defects, collecting metrics, and so on. No wonder that, with all these activities, testing is given the status of a phase in itself within the software development life cycle. Testing is a very critical activity that can make the difference between a satisfied customer and a lost customer. That is why it is of paramount importance that the testing team is a creative unit, well focused on the job of testing. There are many ways and techniques that a testing team can adopt to test a particular program, software product, or requirement. We will not get into the details of those methodologies; the scope of this discussion is to look for those 'vital signs' that will help us make a decision call with regard to marking the end of a testing phase.

Following are the vital signs -
Test Planning
Test Case Design and Coverage
New Requirements
Defect log
Regression Test suite
Project Deadlines and Test Budget

Now let us look at each of these vital signs individually so that we get a better understanding of how we can use them as vital signs.

Test Planning
It is essential to have an effective plan for testing. The test plan should clearly indicate the entry and exit criteria. Once these criteria are well defined, it becomes easier to analyze the stability of the software after tests have been performed. Requirement traceability analysis can be done to make sure that the requirements identified as testable have corresponding test cases written for them; a traceability matrix can be used for this exercise. It is also important to prioritize the test cases, according to risk, into mandatory, required, and desired test cases so that the important ones can be executed first, reducing the effect of time constraints on these critical test cases.
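
As a rough illustration, a traceability check can be as simple as mapping each requirement to the test cases that cover it and flagging any requirement left uncovered. The requirement and test case IDs below are hypothetical.

# Hypothetical requirement IDs and the test cases written against them.
requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]
test_cases = {
    "TC-01": ["REQ-001"],
    "TC-02": ["REQ-001", "REQ-002"],
    "TC-03": ["REQ-004"],
}

# Build the traceability matrix: requirement -> covering test cases.
matrix = {req: [] for req in requirements}
for tc, covered in test_cases.items():
    for req in covered:
        matrix.setdefault(req, []).append(tc)

for req, tcs in matrix.items():
    status = ", ".join(tcs) if tcs else "NOT COVERED"
    print(f"{req}: {status}")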

Test Case Design and Coverage
Test coverage deals with making sure that the test cases designed for testing the requirements of a particular release of a software application are optimal and address the requirements correctly. The traceability matrix made during test planning gives us an idea of whether there are corresponding test cases for every requirement, whereas a coverage analysis tells us whether the test cases drafted are the right ones and whether they are enough. The most effective way of analyzing this is through reviews or inspections, which involve meetings in which the participants review documents such as, in this case, the test cases. The test cases are sent to the reviewers in advance so that they can go through them, and during the course of the meeting any inadequacies in the test cases can be dug out. Certain testing techniques, like boundary value analysis, equivalence partitioning, and valid and invalid test cases, can be incorporated while designing test cases to address the coverage issue.
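
For example, if a field is specified to accept integers from 1 to 100 (a hypothetical range), boundary value analysis and equivalence partitioning suggest a small, representative set of inputs, as in this sketch:

def boundary_values(lo, hi):
    """Boundary value analysis picks values at and just around each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """One representative from each partition: below range, in range, above range."""
    return {"invalid_low": lo - 10, "valid": (lo + hi) // 2, "invalid_high": hi + 10}

if __name__ == "__main__":
    print("Boundary values:", boundary_values(1, 100))
    print("Equivalence class representatives:", equivalence_classes(1, 100))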

New Requirements
Most software releases have a set of new requirements or features implemented to enhance the capability of the software, and when testing a particular release, the concentration is more on these new requirements. Prioritizing the execution of test cases for the new requirements helps the tester stay focused on the job at hand.
A high priority can be assigned to these test cases while planning test execution. Sometimes software releases go through many changes in requirements and additions of new requirements. Though this is not ideal, the business demands the addition of new requirements or changes to already frozen requirements, which is exactly what makes software development such a dynamic process. In such a scenario, it is essential for the tester to keep track of the changes in requirements and have test cases designed for them. There may not be enough time at this juncture to go through the entire process of test planning and reviews, but keeping track of these changes through documents assists the testers in knowing whether these requirements have been tested or not.

Defect Log
The defect log gives a clear indication of the quality and stability of the product. If there are severe defects that are still open, it indicates that the quality of the product is not yet up to the mark and that testing still needs to be done to uncover more such severe defects. On the other hand, if there are no high-severity defects open and the number of low-severity defects is relatively low, the development team can negotiate with the testing team to move the software into production. The use of a proper defect tracking system or tool is advisable to keep a defect log and generate reports on defect status.
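
As an illustration, an exit-criteria check against the defect log might simply count the open defects per severity and compare them with thresholds agreed between the development and testing teams. The defect records and thresholds below are made up for the example.

from collections import Counter

# Hypothetical snapshot of the defect log: (id, severity, status).
defects = [
    ("D-101", "high", "closed"),
    ("D-102", "medium", "open"),
    ("D-103", "low", "open"),
    ("D-104", "high", "open"),
]

# Example exit criteria: no open high-severity defects, at most 5 open low/medium.
open_counts = Counter(sev for _, sev, status in defects if status == "open")
ready_to_ship = open_counts["high"] == 0 and (open_counts["medium"] + open_counts["low"]) <= 5

print("Open defects by severity:", dict(open_counts))
print("Exit criteria met:", ready_to_ship)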

Regression Test suite
While planning the testing phase, it is important to plan the regression testing cycles as well. A minimum of two to three regression cycles is necessary to gain confidence in the stability of the software. The advantages of regression testing are twofold:
- Uncovers any defects that have gone unnoticed in previous builds
- Uncovers defects that arise due to fixes to existing defects

Automation tools can be used to write scripts that perform a regression test in order to reduce the cycle time for testing. Assigning criticalities to test cases helps in choosing and creating a regression test suite and in prioritizing the execution of manual test cases, but automated scripts are the best way to run regression testing and record a log of it. If no defects are found while running these scripts, we can be reasonably assured that the existing functionality is stable.
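
As a simple illustration (and not the WinRunner-style scripting many teams use), a lightweight regression suite can be expressed with Python's unittest module. The apply_discount function below is a hypothetical piece of existing functionality standing in for the real application under test.

import unittest

def apply_discount(price, percent):
    """Function under test: a hypothetical, previously shipped piece of business logic."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    """Re-run on every build to confirm existing behaviour has not changed."""

    def test_standard_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_full_discount(self):
        self.assertEqual(apply_discount(50.0, 100), 0.0)

if __name__ == "__main__":
    unittest.main(verbosity=2)   # the run output doubles as the regression log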

Project Deadlines and Test Budget
In most real-life scenarios the end of testing is defined by the project deadlines and the depletion of the budget for testing. Though many software products and services go into production after negotiating on open issues due to time constraints, it is advisable to utilize the test budget fully. Since the project deadlines and the budget are known beforehand, the testing can be planned effectively and all the resources for testing can be utilized optimally.

Finally, having a mechanism that represents the confidence levels of these vital signs on a scale of 1 to 10 will clearly indicate the quality of the testing activity and the ability to capture critical defects sooner rather than later. A simple bar graph with the vital signs on the x-axis and values 1 to 10 on the y-axis is sufficient for this. If each of the bars is above a certain minimum level that is mutually agreed by the development and testing teams, then we can safely conclude that most of the testing is, and will be, effectively done. Though testing is a very critical and essential activity, the load on testing can be reduced by reviewing and inspecting the various development activities and artifacts, starting right from requirement analysis through to the code being written, in order to detect bugs early in the development life cycle and reduce the impact of a testing phase that may not be completed in full.
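
For instance, such a vital-signs chart could be produced with a few lines of Python using matplotlib; the scores and the agreed minimum level below are hypothetical values.

import matplotlib.pyplot as plt

# Hypothetical confidence scores (1-10) for each vital sign.
vital_signs = ["Test Planning", "Coverage", "New Requirements",
               "Defect Log", "Regression", "Deadlines/Budget"]
scores = [8, 7, 6, 9, 7, 5]
threshold = 6   # example minimum level agreed by the development and testing teams

plt.bar(vital_signs, scores)
plt.axhline(threshold, linestyle="--", label=f"Agreed minimum ({threshold})")
plt.ylim(0, 10)
plt.ylabel("Confidence (1-10)")
plt.title("Testing vital signs")
plt.legend()
plt.xticks(rotation=30, ha="right")
plt.tight_layout()
plt.show()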
