Saturday, February 20, 2010

Levels of Testing

A step into the world of software testing might seem like a nightmare: memories of unending test scripts, weeks of tedious, repetitive testing, and mountains of problem reports. Even after apparently exhaustive testing, bugs are still found, there is rework to carry out and, inevitably, yet more testing. But with careful planning, software testing can go like a dream.

Levels of testing:
The key to a successful test strategy is picking the right level of testing at each stage of a project. The first point to take into account is that testing will not find all bugs, so a decision has to be made about the level of coverage required. This will depend on the software application and the level of quality required by the end users. A safety-critical system such as a commercial fly-by-wire aircraft will clearly need a far higher degree of testing than a cheap games package for the domestic market.

This sort of decision needs to be made at an early stage in a project and will be documented in an overall software test plan. Typically this general plan will be drafted at the same stage that the requirements are catalogued and would be finalized once functional design work has been completed. It is now that confusion over the different levels of testing often arises. The adoption of the simple philosophy that each design document requires a corresponding test document can cut right through this confusion. The definition of the requirements will therefore be followed closely by the preparation of a test document - the Acceptance Tests.

Acceptance Tests are written from the Requirements Specification, which, apart from the overall test plan, is the only document used in their preparation. The point of Acceptance Tests is to demonstrate that the requirements have been met, so no other input is needed. It follows that the Acceptance Tests will be written at a fairly abstract level, as there is little detail at this stage of how the software will operate. An important point to bear in mind for all testing is that the tests should be written by a person who has some degree of independence from the project. This may be achieved by employing an external software consultant, or it may be adequate to use someone other than the author of the Requirements Specification. This will help to remove ambiguity from the design documents and to ensure that the tests reflect the design rather than testing that has already been performed.
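As an illustration only, an acceptance test written at this abstract level might look like the following sketch. The requirement text, module name and function are invented for the example, and any test framework would serve equally well:

```python
# Hypothetical acceptance test, written only from the Requirements
# Specification. Requirement R-12 is invented for illustration:
# "The system shall reject orders that exceed the customer's credit limit."
import pytest

from ordering import OrderRejected, place_order  # hypothetical module


def test_r12_order_over_credit_limit_is_rejected():
    # Abstract, behaviour-level check: no reference to screens, field
    # names or internal design, none of which exist at this stage.
    with pytest.raises(OrderRejected):
        place_order(customer="C001", total=10_001, credit_limit=10_000)
```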

Following the classic software lifecycle, the next major document to be produced is the Functional Specification (also called the External Design or Logical Design), which provides the first translation of the requirements into a working system, and it is here that the System Tests are written. It would also be usual to write a System Test Plan describing the general approach to the system testing: for example, the level of coverage (such as the level of validation testing to be carried out on user-enterable fields) and the degree of regression testing to be included. In any case, the System Tests would normally only exercise the system as far as described in the Functional Specification, which will often be the only document used as input.

The lowest level of testing is module level, and this is based on the individual module designs. This testing will normally be the most detailed and will usually cover areas such as range checking on input fields, exercising of output messages and other module-level details. Splitting the tests into these different segments helps to keep re-testing to a minimum: a problem discovered in one module during System Testing will simply mean re-executing that module test, followed by the System Tests.
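For instance, a module test might exercise range checking on a single input field, as in this sketch (the module, its valid range of 1-999 and its error message are all assumptions made for illustration):

```python
# Hypothetical module-level tests for an input-parsing routine.
import pytest

from order_entry import parse_quantity  # hypothetical module under test


@pytest.mark.parametrize("value", [1, 500, 999])
def test_quantity_in_range_is_accepted(value):
    # Boundary and mid-range values should parse cleanly.
    assert parse_quantity(str(value)) == value


@pytest.mark.parametrize("text", ["0", "1000", "-1", "abc", ""])
def test_quantity_out_of_range_is_rejected(text):
    # Range checking and message exercising, as described above.
    with pytest.raises(ValueError, match="quantity must be between 1 and 999"):
        parse_quantity(text)
```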

Other levels:
Many other types of testing are available (e.g. installation, load, performance), though all will normally fall under one of the previous categories. One of the most important is Regression Testing, which is used to confirm that a new release of software has not regressed (i.e. lost functionality). In an extreme case (which might be the product of a poorly planned test program), all previous tests must be re-executed on the new system. This may be acceptable for a system with automated testing capabilities, but for an interactive system it could mean extensive use of test staff. For a typical release it may be perfectly adequate to re-execute a subset of the module and system tests, and to write a new acceptance test based only on the requirements for this release.
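With a marker-based test runner, selecting such a subset is cheap to set up. A minimal sketch, assuming pytest and an invented `regression` marker:

```python
# Hypothetical use of a pytest marker to tag the regression subset.
import pytest


@pytest.mark.regression
def test_saved_order_can_be_reloaded():
    ...  # previously released behaviour that must not regress


def test_new_discount_rules_for_this_release():
    ...  # new functionality, covered by this release's own tests
```

Running `pytest -m regression` then re-executes only the tagged subset; registering the marker in `pytest.ini` keeps the run warning-free.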

Conclusion
Testing can make or break a project, so it is vital that testing is planned to ensure that adequate coverage is achieved. The emphasis here is on adequate: too much testing can be uneconomic, too little can result in poor-quality software.


What is TestWare?

As we know, hardware development engineers produce hardware and software development engineers produce software. In the same way, software test engineers produce testware.

Testware is produced by both verification and validation testing methods. It includes test cases, test plans, test reports and so on. Like software, testware should be placed under the control of a configuration management system, saved, and faithfully maintained.

Like software, testware has significant value because it can be reused. The tester's job is to create testware that will have a specified lifetime and be a valuable asset to the company.


Ability to identify the hot spots of a release from the Bug Database

A Bug Database for a product might accumulate thousands of issues over a period of time, against various builds and releases. Even though these issues are fixed over time, it can be hard to derive meaningful metrics about a release from them.

We need to support these releases on production systems, and it is helpful to capture the hot spots / risk elements of the release. Most of the issues here deal with the respective features, compatibility with other features / technologies, and performance.

The usual metrics of the number of issues against a module and their severity levels may not always be of help.

How easy is it to derive the following from the Bug Database for a given release? (A minimal sketch follows the list.)

  1. Identify the issues that originated from Requirements, Design and Implementation
  2. Identify the issues by category (Functional, Performance, Security, Compatibility, Usability, etc.)
  3. Identify the issues, along with their origin and category, by feature rather than by module / component
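A minimal sketch of such a query, assuming the tracker can be exported to a SQLite table with origin, category, feature and release columns (an invented schema, not that of any particular tool):

```python
# Deriving release hot spots from a bug database export.
import sqlite3
from collections import Counter

conn = sqlite3.connect("bugs.db")  # hypothetical export of the tracker
rows = conn.execute(
    "SELECT origin, category, feature FROM bugs WHERE release = ?",
    ("4.2",),  # the release under analysis
).fetchall()

by_origin = Counter(origin for origin, _, _ in rows)        # 1. Requirements / Design / Implementation
by_category = Counter(category for _, category, _ in rows)  # 2. Functional, Performance, Security, ...
by_feature = Counter(feature for _, _, feature in rows)     # 3. feature-wise rather than module-wise

for feature, count in by_feature.most_common(5):
    print(f"hot spot: {feature} ({count} issues)")
```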


The Life Cycle of a Bug – Different Stages in It

In this post, I will explore the different stages of a bug, from its inception to its closure; a small sketch of the transitions follows the list. This is my fourth post in the Bug Life Cycle series.

  1. The bug is found and logged into the Bug Tracking System. It is treated as a New bug in the system.
  2. The bug is assigned to the concerned developer for resolution.
  3. The developer looks into the possibilities for a resolution and takes a call: fix it, or defer it based on the information provided.
  4. The tester validates the resolved issue in the build and checks the regression scenarios around the fix.
  5. If the issue is found to be fixed, the tester chooses to Close it; otherwise he / she will Re-open it.
  6. The cycle repeats for the re-opened issue until it gets Closed.

[Figure: Bug Life Cycle]
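The stages above form a small state machine. Here is a minimal sketch of it, with the states and transitions taken from the list; real bug trackers usually add further states such as Deferred or Duplicate:

```python
# The bug life cycle modelled as a simple state machine.
from enum import Enum


class State(Enum):
    NEW = "new"            # 1. logged in the Bug Tracking System
    ASSIGNED = "assigned"  # 2. assigned to a developer
    RESOLVED = "resolved"  # 3. fixed or deferred by the developer
    REOPENED = "reopened"  # 5. fix did not hold, back to the developer
    CLOSED = "closed"      # 5. verified as fixed by the tester


ALLOWED = {
    State.NEW: {State.ASSIGNED},
    State.ASSIGNED: {State.RESOLVED},
    State.RESOLVED: {State.CLOSED, State.REOPENED},  # 4. tester validates
    State.REOPENED: {State.ASSIGNED},                # 6. the cycle repeats
    State.CLOSED: set(),
}


def move(current: State, target: State) -> State:
    # Reject transitions the process does not allow.
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```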

It is worth doing the following activities:

  1. Capture the required and reusable information in the Bug Report at each of its stages.
  2. Check all the Closed bugs of Severity 1 and 2 against the final build for the release (see the sketch below).
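For activity 2, a minimal sketch using the same invented SQLite schema as the hot-spot example above:

```python
# Confirm that every Severity 1 and 2 bug is Closed against the final build.
import sqlite3

conn = sqlite3.connect("bugs.db")  # hypothetical export of the tracker
open_critical = conn.execute(
    "SELECT id, severity, status FROM bugs "
    "WHERE release = ? AND severity IN (1, 2) AND status != 'closed'",
    ("4.2",),
).fetchall()

assert not open_critical, f"release blocked by open Sev 1/2 bugs: {open_critical}"
```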

In the next post, I will share my thoughts on useful metrics from the Bug Tracking System.

Happy Testing!


Thursday, December 3, 2009

Explain the IEEE 829 standard and other software testing standards.

The IEEE 829 standard is used for software test documentation: it specifies the format of a set of documents to be used at the different stages of software testing.
The documents are:
Test Plan – a planning document with information about the scope, resources, duration, test coverage and other details.
Test Design – has information about the test pass criteria, along with test conditions and expected results.
Test Case – has information about the test data to be used (a sketch of this document as a record follows the list).
Test Procedure – has information about the test steps to be followed and how to execute them.
Test Log – has details of the test cases run, their pass / fail status, the order of execution, and who tested them.
Test Incident Report – has information about a failed test, comparing the actual result with the expected result.
Test Summary Report – has information about the testing done and the quality of the software; it also assesses whether the software has met the requirements given by the customer.
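As a rough sketch of how one of these documents is structured, the Test Case Specification can be pictured as a simple record whose fields follow the standard's outline (the example values are invented):

```python
# An IEEE 829-style Test Case Specification as a plain record.
from dataclasses import dataclass, field


@dataclass
class TestCaseSpecification:
    identifier: str
    test_items: list[str]
    input_specifications: str
    output_specifications: str
    environmental_needs: str = ""
    special_procedural_requirements: str = ""
    intercase_dependencies: list[str] = field(default_factory=list)


# Invented example content:
tc = TestCaseSpecification(
    identifier="TC-LOGIN-001",
    test_items=["login module v1.2"],
    input_specifications="valid user name, wrong password, three attempts",
    output_specifications="account is locked after the third failed attempt",
    environmental_needs="test database with user fixtures loaded",
)
```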

The other standards related to software testing are:
IEEE 1008 for unit testing
IEEE 1012 for software verification and validation
IEEE 1028 for software reviews and inspections
IEEE 1061 for the software quality metrics methodology
IEEE 1233 for guiding the development of system requirements specifications
IEEE/EIA 12207 for software life cycle processes
