Tuesday, June 26, 2007

Testers' Questions from Microsoft and Google

Questions for Testers


A friend of mine sent along some questions he was asked for an SDE/T position at Microsoft/Google (Software Design Engineer in Test):

  1. "How would you deal with changes being made a week or so before the ship date?
  2. "How would you deal with a bug that no one wants to fix? Both the SDE and his lead have said they won't fix it.
  3. "Write a function that counts the number of primes in the range [1-N]. Write the test cases for this function.
  4. "Given a MAKEFILE (yeah a makefile), design the data structure that a parser would create and then write code that iterates over that data structure executing commands if needed.
  5. "Write a function that inserts an integer into a linked list in ascending order. Write the test cases for this function.
  6. "Test the save dialog in Notepad. (This was the question I enjoyed the most).
  7. "Write the InStr function. Write the test cases for this function.
  8. "Write a function that will return the number of days in a month (no using System.DateTime).
  9. "You have 3 jars. Each jar has a label on it: white, black, or white&black. You have 3 sets of marbles: white, black, and white&black. One set is stored in one jar. The labels on the jars are guaranteed to be incorrect (i.e. white will not contain white). Which jar would you choose from to give you the best chances of identifying the which set of marbles in is in which jar.
  10. "Why do you want to work for Microsoft.
  11. "Write the test cases for a vending machine.

    "Those were the questions I was asked. I had a lot of discussions about how to handle situations. Such as a tester is focused on one part of an SDK. During triage it was determined that that portion of the SDK was not on the critical path, and the tester was needed elsewhere. But the tester continued to test that portion because it is his baby. How would you get him to stop testing that portion and work on what needs to be worked on?

    "Other situations came up like arranging tests into the different testing buckets (functional, stress, perf, etc.)."


Huge Collection of Testing FAQs

So many questions come to mind when I think of interview questions for a Software Testing / QA Engineer. Would you be able to answer all of them? ;)

1. What is Software Testing?
2. What is the Purpose of Testing?
3. What types of testing do testers perform?
4. What is the Outcome of Testing?
5. What kind of testing have you done?
6. What is the need for testing?
7. What are the entry criteria for Functionality and Performance testing?
8. What are test metrics?
9. Why do you go for White box testing, when Black box testing is available?
10. What are the entry criteria for Automation testing?
11. When to start and Stop Testing?
12. What is Quality?
13. What is a Baseline document? Can you name any two?
14. What is verification?
15. What is validation?
16. What is quality assurance?
17. What is quality control?
18. What is SDLC and TDLC?
19. What are the Qualities of a Tester?
20. When to start and Stop Testing?
21. What are the various levels of testing?
22. What are the types of testing you know and you experienced?
23. What exactly is Heuristic checklist approach for unit testing?
24. After completing testing, what would you deliver to the client?
25. What is a Test Bed?
26. What are Data Guidelines?
27. Why do you go for Test Bed?
28. What is Severity and Priority and who will decide what?
29. Can Automation testing replace manual testing? If so, how?
30. What is a test case?
31. What is a test condition?
32. What is the test script?
33. What is the test data?
34. What is an Inconsistent bug?
35. What is the difference between Re-testing and Regression testing?
36. What are the different types of testing techniques?
37. What are the different types of test case techniques?
38. What are the risks involved in testing?
39. Differentiate Test bed and Test Environment?
40. What is the difference between defect, error, bug, failure and fault?
41. What is the difference between quality and testing?
42. What is the difference between White & Black Box Testing?
43. What is the difference between Quality Assurance and Quality Control?
44. What is the difference between Testing and debugging?
45. What is the difference between bug and defect?
46. What is the difference between verification and validation?
47. What is the difference between functional spec. and Business requirement specification?
48. What is the difference between unit testing and integration testing?
49. What is the difference between Volume & Load Testing?
50. What is the difference between Volume & Stress Testing?
51. What is the difference between Stress & Load Testing?
52. What is the difference between Two-Tier & Three-Tier Architecture?
53. What is the difference between Client Server & Web Based Testing?
54. What is the difference between Integration & System Testing?
55. What is the difference between Code Walkthrough & Code Review?
56. What is the difference between a walkthrough and an inspection?
57. What is the difference between SIT & IST?
58. What is the difference between static and dynamic testing?
59. What is the difference between alpha testing and beta testing?
60. What are the Minimum requirements to start testing?
61. What is Smoke Testing & when will it be done?
62. What is Adhoc Testing? When can it be done?
63. What is cookie testing?
64. What is security testing?
65. What is database testing?
66. What is the relationship between Quality & Testing?
67. How do you determine what is to be tested?
68. How do you go about testing a project?
69. What is the Initial Stage of testing?
70. What is Web Based Application Testing?
71. What is Client Server Application Testing?
72. What is Two Tier & Three tier Architecture?
73. What is the use of Functional Specification?
74. Why do we prepare test conditions, test cases and test scripts (before starting testing)?
75. Is it not a waste of time to prepare the test conditions, test cases & test scripts?
76. How do you go about testing of Web Application?
77. How do you go about testing of Client Server Application?
78. What is meant by Static Testing?
79. Can the static testing be done for both Web & Client Server Application?
80. In Static Testing, what all can be tested?
81. Can test condition, test case & test script help you in performing the static testing?
82. What is meant by dynamic testing?
83. Is dynamic testing a form of functional testing?
84. Is static testing a form of functional testing?
85. What are the functional tests you perform?
86. What is meant by Alpha Testing?
87. What kind of document do you need for functional testing?
88. What is meant by Beta Testing?
89. At what stage does unit testing have to be done?
90. Who can perform the Unit Testing?
91. When will the Verification & Validation be done?
92. What is meant by Code Walkthrough?
93. What is meant by Code Review?
94. What is the testing that a tester performs at the end of Unit Testing?
95. What are the things you prefer & prepare before starting testing?
96. What is Integration Testing?
97. What is Incremental Integration Testing?
98. What is meant by System Testing?
99. What is meant by SIT?
100. When do you go for Integration Testing?
101. Can the System testing be done at any stage?
102. What are stubs & drivers?
103. What is the concept of Top-Down & Bottom-Up approaches in integration testing?
104. What is the final Stage of Integration Testing?
105. Where in the SDLC does testing start?
106. What is the Outcome of Integration Testing?
107. What is meant by GUI Testing?
108. What is meant by Back-End Testing?
109. What are the features, you take care in Prototype testing?
110. What is Mutation testing & when can it be done?
111. What is Compatibility Testing?
112. What is Usability Testing?
113. What is the Importance of testing?
114. What is meant by regression Testing?
115. When do we prefer Regression Testing & at what stages do we go for it?
116. What is performance testing?
117. Which performance tests can be done manually and which automatically?
118. What is Volume, Stress & Load Testing?
119. What is a Bug?
120. What is a Defect?
121. What is the defect Life Cycle?
122. What is the Priority in fixing the Bugs?
123. Explain how you rate the severity of the bugs found?
124. What is the difference between UAT & IST?
125. What is meant by UAT?
126. What all are the requirements needed for UAT?
127. What are the docs required for Performance Testing?
128. What is risk analysis?
129. How to do risk management?
130. What are test closure documents?
131. What is traceability matrix?
132. What process did you follow for defect management?

and finally
133. What is the difference between Smoke Testing and Sanity Testing?


MICROSOFT Software Testing Interview Questions

Software Testing Interview Questions

Test Automation:
1. What automated testing tools are you familiar with?
2. How did you use automated testing tools in your job?
3. Describe some problem that you had with an automated testing tool.
4. How do you plan test automation?
5. Can test automation improve test effectiveness?
6. What is data-driven automation?
7. What are the main attributes of test automation?
8. Does automation replace manual testing?
9. How will you choose a tool for test automation?
10. How will you evaluate the tool for test automation?

Load Testing:
1. What criteria would you use to select Web transactions for load testing?
2. For what purpose are virtual users created?
3. Why is it recommended to add verification checks to all your scenarios?
4. In what situation would you want to parameterize a text verification check?
5. Why do you need to parameterize fields in your virtual user script?

General questions:
1. What types of documents would you need for QA, QC, and Testing?
2. What did you include in a test plan?
3. Describe any bug you remember.
4. What is the purpose of testing?
5. What do you like (not like) in this job?
6. What is quality assurance?
7. What is the difference between QA and testing?
8. How do you scope, organize, and execute a test project?
9. What is the role of QA in a development project?
10. What is the role of QA in a company that produces software?


TOP 50 Software tester (SQA) interview questions

Software tester (SQA) interview questions

These questions are used for software tester or SQA (Software Quality Assurance) positions. Refer to The Real World of Software Testing for more information in the field.

  1. Top management felt that, when there are changes in the technology being used, development schedules, etc., it was a waste of time to update the Test Plan; they emphasized that you should put your time into testing rather than working on the test plan. Your Project Manager asked for your opinion. You have argued that the Test Plan is very important and needs to be updated from time to time, that it's not a waste of time, and that testing activities are more effective when the plan is clear. Using some metrics, how would you support your argument that the test plan should be kept consistently up to date?
  2. The QAI is starting a project to put the CSTE certification online. They will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results and sending out certificates. Write a brief test plan for this new project.
  3. The project had a very high cost of testing. After looking into the details, someone found out that the testers were spending their time on software that doesn't have many defects. How will you make sure that this is correct?
  4. What are the disadvantages of overtesting?
  5. What happens to the test plan if the application has a functionality not mentioned in the requirements?
  6. You are given two scenarios to test. Scenario 1 has only one terminal for entry and processing, whereas scenario 2 has several terminals where the data input can be made. Assuming that the processing work is the same, what would be the specific tests that you would perform in Scenario 2 which you would not carry out on Scenario 1?
  7. Your customer does not have experience in writing Acceptance Test Plan. How will you do that in coordination with customer? What will be the contents of Acceptance Test Plan?
  8. How do you know when to stop testing?
  9. What can you do if the requirements are changing continuously?
  10. What is the need for Test Planning?
  11. What are the various status reports you will generate to Developers and Senior Management?
  12. Define and explain any three aspects of code review?
  13. Why do you need test planning?
  14. Explain 5 risks in an e-commerce project. Identify the personnel that must be involved in the risk analysis of a project and describe their duties. How will you prioritize the risks?
  15. What are the various status reports that you need generate for Developers and Senior Management?
  16. You have been asked to design a Defect Tracking system. Think about the fields you would specify in the defect tracking system?
  17. Write a sample Test Policy?
  18. Explain the various types of testing after arranging them in a chronological order?
  19. Explain what test tools you will need for client-server testing and why?
  20. Explain what test tools you will need for Web app testing and why?
  21. Explain the pros and cons of testing done by the development team versus testing by an independent team?
  22. Differentiate Validation and Verification?
  23. Explain Stress, Load and Performance testing?
  24. Describe automated capture/playback tools and list their benefits?
  25. How can software QA processes be implemented without stifling productivity?
  26. How is testing affected by object-oriented designs?
  27. What is extreme programming and what does it have to do with testing?
  28. Write a test transaction for a scenario where a 6.2% tax deduction on the first $62,000 of income has to be applied? (A sketch follows this list.)
  29. What would be the Test Objective for Unit Testing? What are the quality measurements to assure that unit testing is complete?
  30. Prepare a checklist for the developers on Unit Testing before the application comes to testing department.
  31. Draw a pictorial diagram of a report you would create for developers to determine project status.
  32. Draw a pictorial diagram of a report you would create for users and management to determine project status.
  33. What 3 tools would you purchase for your company for use in testing? Justify the need?
  34. Put the following concepts in order and provide a brief description of each:
    • system testing
    • acceptance testing
    • unit testing
    • integration testing
    • benefits realization testing
  35. What are two primary goals of testing?
  36. If your company is going to conduct a review meeting, who should be on the review committee and why?
  37. Write any three attributes which will impact the Testing Process?
  38. What activity is done in Acceptance Testing, which is not done in System testing?
  39. You are a tester for testing a large system. The system data model is very large with many attributes and there are a lot of inter-dependencies within the fields. What steps would you use to test the system and also what are the effects of the steps you have taken on the test plan?
  40. Explain and provide examples for the following black box techniques? (The sketch after this list also illustrates boundary values.)
    • Boundary Value testing
    • Equivalence testing
    • Error Guessing
  41. What are the product standards for?
    • Test Plan
    • Test Script and Test Report
  42. You are the test manager starting on system testing. The development team says that due to a change in the requirements, they will be able to deliver the system for SQA 5 days past the deadline. You cannot change the resources (work hours, days, or test tools). What steps will you take to be able to finish the testing in time?
  43. Your company is about to roll out an e-commerce application. It’s not possible to test the application on all types of browsers on all platforms and operating systems. What steps would you take in the testing environment to reduce the business risks and commercial risks?
  44. In your organization, developers are delivering code for system testing without performing unit testing. Give an example of test policy:
    • Policy statement
    • Methodology
    • Measurement
  45. Testers in your organization are performing tests on the deliverables even after significant defects have been found. This has resulted in unnecessary testing of little value, because re-testing needs to be done after defects have been rectified. You are going to update the test plan with recommendations on when to halt testing. What recommendations are you going to make?
  46. How do you measure:
    • Test Effectiveness
    • Test Efficiency
  47. You found out that senior testers are making more mistakes than junior testers; you need to communicate this to the senior testers, and you don't want to lose them. How should one go about constructive criticism?
  48. You are assigned to be the test lead for a new program that will automate take-offs and landings at an airport. How would you write a test strategy for this new program?
  49. Here is a product (a pen or a stapler): how will you test it?
  50. We have this (some specific project). How do you think you will be able to adjust yourself to this project?
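
Question 28 above is at heart a boundary-value exercise (compare the techniques in question 40): the 6.2% deduction applies only to the first $62,000 of income, so the interesting test transactions cluster around that cap. A sketch in C, assuming a flat cap and ignoring rounding rules:

    #include <assert.h>

    #define TAX_RATE 0.062    /* 6.2% */
    #define TAX_CAP  62000.00 /* deduction applies to the first $62,000 only */

    /* Returns the tax deduction for a given income under the stated rule. */
    double tax_deduction(double income) {
        double taxable = income < TAX_CAP ? income : TAX_CAP;
        return taxable * TAX_RATE;
    }

    int main(void) {
        /* Boundary-value test transactions around the $62,000 cap. */
        assert(tax_deduction(0.00) == 0.00);               /* lower boundary */
        assert(tax_deduction(61999.99) < 62000 * 0.062);   /* just below cap */
        assert(tax_deduction(62000.00) == 62000 * 0.062);  /* exactly at cap */
        assert(tax_deduction(62000.01) == 62000 * 0.062);  /* just above cap */
        assert(tax_deduction(100000.00) == 62000 * 0.062); /* far above cap  */
        return 0;
    }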


LoadRunner interview questions

LoadRunner interview questions

  1. What is load testing? - Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.
  2. What is Performance testing? - Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.
  3. Did you use LoadRunner? What version? - Yes. Version 7.2.
  4. Explain the Load testing process? -
    Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.
    Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
    Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
    Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
    Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
    Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
  5. When do you do load and performance testing? - We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it hold so many hundreds and thousands of users, etc. This is when we do load and performance testing.
  6. What are the components of LoadRunner? - The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.
  7. What Component of LoadRunner would you use to record a Script? - The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
  8. What Component of LoadRunner would you use to play Back the script in multi user mode? - The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.
  9. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time. (See the sketch after this list.)
  10. What is a scenario? - A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
  11. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.
  12. Why do you create parameters? - Parameters are like script variables. They are used to vary input to the server and to emulate real users. Different sets of data are sent to the server each time the script is run. This better simulates the usage model for more accurate testing from the Controller: one script can emulate many different users on the system.
  13. What is correlation? Explain the difference between automatic correlation and manual correlation? - Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. Correlation provides the value to avoid errors arising out of duplicate values and also optimizes the code (to avoid nested queries). Automatic correlation is where we set some rules for correlation; it can be application server specific. Here values are replaced by data which are created by these rules. In manual correlation, we scan for the value we want to correlate and use 'create correlation' to correlate it.
  14. How do you find out where correlation is required? Give a few examples from your projects? - Two ways: first, we can scan for correlations and see the list of values which can be correlated, and from this we can pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look in the difference file for the values which need to be correlated. In my project, there was a unique id developed for each customer - it was nothing but the Insurance Number - which was generated automatically, sequential and unique. I had to correlate this value in order to avoid errors while running my script. I did it using scan for correlation.
  15. Where do you set automatic correlation options? - Automatic correlation from the web point of view can be set in the recording options, correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done by using the show output window, scanning for correlation, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do create correlation for the value and specify how the value is to be created.
  16. What is a function to capture dynamic values in the web Vuser script? - The web_reg_save_param function saves dynamic data information to a parameter. (See the sketch after this list.)
  17. When do you disable log in Virtual User Generator? When do you choose standard and extended logs? - Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log Option: When you select Standard log, it creates a standard log of functions and messages sent during script execution to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. Extended Log Option: Select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended log options.
  18. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.
  19. How do you write user defined functions in LR? Give me a few functions you wrote in your previous project? - Before we create the user defined functions we need to create the external library (DLL) with the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user defined function as a parameter. The function should have the following format: __declspec(dllexport) char* <function name>(char*, char*). GetVersion, GetCurrentTime and GetPlatform are some of the user defined functions used in my earlier project.
  20. What are the changes you can make in run-time settings? - The Run Time Settings that we make are: a) Pacing - this has the iteration count. b) Log - under this we have Disable Logging, Standard Log, and Extended Log. c) Think Time - in think time we have two options: Ignore think time and Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as multithreading, and whether each step is a transaction.
  21. Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
  22. How do you perform functional testing under load? - Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.
  23. What is Ramp up? How do you set this? - This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set and a value to wait between intervals can be specified. To set Ramp Up, go to 'Scenario Scheduling Options'.
  24. What is the advantage of running the Vuser as thread? - VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.
  25. If you want to stop the execution of your script on error, how do you do that? - The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the 'Continue on error' option in Run-Time Settings. (See the sketch after this list.)
  26. What is the relation between Response Time and Throughput? - The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
  27. Explain the Configuration of your systems? - The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.
  28. How do you identify the performance bottlenecks? - Performance Bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors and network monitors. They help in finding out the troubled area in our scenario which causes increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.
  29. If web server, database and Network are all fine where could be the problem? - The problem could be in the system itself or in the application server or in the code written for the application.
  30. How did you find web server related issues? - Using Web resource monitors we can find the performance of web servers. Using these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.
  31. How did you find database related issues? - By running the 'Database' monitor, with the help of the 'Data Resource Graph', we can find database related issues. E.g. you can specify the resource you want to measure before running the Controller, and then you can see the database related issues.
  32. Explain all the web recording options?
  33. What is the difference between Overlay graph and Correlate graph? - Overlay Graph: It overlays the content of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph's values, and the right Y-axis shows the values of the graph that was merged. Correlate Graph: It plots the Y-axis of two graphs against each other. The active graph's Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph's Y-axis.
  34. How did you plan the Load? What are the Criteria? - Load test is planned to decide the number of users, what kind of machines we are going to use and from where they are run. It is based on 2 important documents, Task Distribution Diagram and Transaction profile. Task Distribution Diagram gives us the information on number of users for a particular transaction and the time of the load. The peak usage and off-usage are decided from this Diagram. Transaction profile gives us the information about the transactions name and their priority levels with regard to the scenario we are deciding.
  35. What does the vuser_init action contain? - The vuser_init action contains procedures to log in to a server.
  36. What does the vuser_end action contain? - The vuser_end section contains log-off procedures.
  37. What is think time? How do you change the threshold? - Think time is the time that a real user waits between actions. Example: When a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as the think time. Changing the Threshold: Threshold level is the level below which the recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of VuGen.
  38. What is the difference between standard log and extended log? - The standard log sends a subset of functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging when we want information about: parameter substitution, data returned by the server, advanced trace.
  39. Explain the following functions: - lr_debug_message - The lr_debug_message function sends a debug message to the output log when the specified message class is set. lr_output_message - The lr_output_message function sends notifications to the Controller Output window and the Vuser log file. lr_error_message - The lr_error_message function sends an error message to the LoadRunner Output window. lrd_stmt - The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The lrd_fetch function fetches the next row from the result set.
  40. Throughput - If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.
  41. Types of Goals in Goal-Oriented Scenario - LoadRunner provides you with five different types of goals in a goal-oriented scenario:
    • The number of concurrent Vusers
    • The number of hits per second
    • The number of transactions per second
    • The number of pages per minute
    • The transaction response time that you want your scenario to achieve
  42. Analysis Scenario (Bottlenecks): In the Running Vuser graph correlated with the response time graph, you can see that as the number of Vusers increases, the average response time of the check itinerary transaction very gradually increases. In other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server. That is the mean time before failure (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.
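
As a concrete anchor for questions 9, 16 and 25 above, here is a fragment of a web Vuser script in VuGen's C dialect. The URLs, the parameter boundaries and the rendezvous name are invented for illustration; the calls themselves (web_reg_save_param, web_url, lr_eval_string, lr_error_message, lr_abort, lr_think_time, lr_rendezvous) are standard LoadRunner functions.

    Action()
    {
        /* Question 16: register the capture BEFORE the request that returns the
           dynamic value; the text between LB and RB is saved into {SessionID}. */
        web_reg_save_param("SessionID",
                           "LB=session=", "RB=;",
                           "NotFound=warning",  /* do not fail the step if absent */
                           LAST);

        web_url("login", "URL=http://server/login", LAST);

        /* Question 25: abort this Vuser on a specific error condition.
           An uncaptured parameter evaluates to its own literal name. */
        if (strcmp(lr_eval_string("{SessionID}"), "{SessionID}") == 0) {
            lr_error_message("Session id not captured - aborting");
            lr_abort();  /* runs vuser_end; the Vuser status becomes Stopped */
        }

        lr_think_time(5);  /* emulate a user pause (question 37) */

        /* Question 9: all Vusers wait here, then hit the server together. */
        lr_rendezvous("deposit_cash");
        web_url("deposit", "URL=http://server/deposit", LAST);

        return 0;
    }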


Interview questions on WinRunner


Hi All,
I am posting some more questions, and their answers, on WinRunner over here.
Do not forget to post your comments and valuable suggestions.

Galactus.

Interview questions on WinRunner

  1. Have you used WinRunner in your project? - Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.
  2. Explain WinRunner testing process? - WinRunner testing process involves six main stages
    • Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested
    • Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
    • Debug Test: run tests in Debug mode to make sure they run smoothly
    • Run Tests: run tests in Verify mode to test your application.
    • View Results: determines the success or failure of the tests.
    • Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.
  3. What is contained in the GUI map? - WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file has a logical name and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
  4. How does WinRunner recognize objects on the application? - WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object's description in the GUI map and then looks for an object with the same properties in the application being tested.
  5. Have you created test scripts and what is contained in the test scripts? - Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.
  6. How does WinRunner evaluate test results? - Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.
  7. Have you performed debugging of the scripts? - Yes, I have performed debugging of scripts. We can debug a script by executing it in the debug mode. We can also debug a script using the Step, Step Into, and Step Out functionalities provided by WinRunner.
  8. How do you run your test scripts? - We run tests in Verify mode to test your application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.
  9. How do you analyze results and report the defects? - Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.
  10. What is the use of Test Director software? - TestDirector is Mercury Interactive's software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.
  11. Have you integrated your automated scripts with TestDirector? - When you work with WinRunner, you can choose to save your tests directly to your TestDirector database, or, while creating a test case in TestDirector, we can specify whether the script is automated or manual. If it is an automated script, then TestDirector will build a skeleton for the script that can later be modified into one which could be used to test the AUT.
  12. What are the different modes of recording? - There are two types of recording in WinRunner. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
  13. What is the purpose of loading WinRunner Add-Ins? - Add-Ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only those functions in the selected add-in will be listed in the function generator, and while executing the script only those functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.
  14. What are the reasons that WinRunner fails to identify an object on the GUI? - WinRunner fails to identify an object in a GUI due to various reasons: the object is not a standard Windows object, or the browser used is not compatible with the WinRunner version, in which case the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
  15. What is meant by the logical name of the object? - An object's logical name is determined by its class. In most cases, the logical name is the label that appears on an object.
  16. If the object does not have a name then what will be the logical name? - If the object does not have a name then the logical name could be the attached text.
  17. What is the difference between GUI map and GUI map files? - The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
  18. What is a GUI Map file? - A GUI Map file is a file which contains the windows and the objects learned by WinRunner, with their logical names and physical descriptions.
  19. How do you view the contents of the GUI map? - The GUI Map Editor displays the content of a GUI Map. We can invoke the GUI Map Editor from the Tools menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned into them, with their logical names and physical descriptions.
  20. When you create a GUI map, do you record all the objects or only specific objects? - If we are learning a window, then WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in a window which are to be learned, since we will be working with only those objects while creating scripts.


Friday, June 22, 2007

Top 50 Software Testing/SQA FAQs you may be asked in an Interview !

Top 50 Software Testing/SQA FAQs you may be asked in an Interview !


For a long time, some of my blog readers/friends have been asking me to compile a list of Software Testing Interview Questions. At last I have managed to compile a list of the Top 50 Software Testing/SQA FAQs you may be asked in an interview! So here it goes...


1. What is 'Software Quality Assurance'?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.


2. What is 'Software Testing'?

Testing involves operation of a system or application under controlled conditions and evaluating the results. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should.


3. Does every software project need testers?

It depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers. If the project is a short-term, small, low risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed. For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. The use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives.


4. What is Regression testing?

Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.


5. Why does software have bugs?

Some of the reasons are:
a. Miscommunication or no communication.
b. Programming errors
c. Changing requirements
d. Time pressures


6. How can new Software QA processes be introduced in an existing Organization?

It depends on the size of the organization and the risks involved.
a. For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects.
b. By incremental self-managed team approaches.


7. What is verification? Validation?

Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed.


8. What is a 'walkthrough'? What's an 'inspection'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.


9. What kinds of testing should be considered?

Some of the basic kinds of testing involve: Black box testing, White box testing, Integration testing, Functional testing, Smoke testing, Acceptance testing, Load testing, Performance testing, User acceptance testing.


10. What are 5 common problems in the software development process?
a. Poor requirements
b. Unrealistic Schedule
c. Inadequate testing
d. Changing requirements
e. Miscommunication


11. What are 5 common solutions to software development problems?
a. Solid requirements
b. Realistic Schedule
c. Adequate testing
d. Clarity of requirements
e. Good communication among the Project team


12. What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.


13. What are some recent major computer system failures caused by software bugs?

Trading on a major Asian stock exchange was brought to a halt in November of 2005, reportedly due to an error in a system software upgrade. A May 2005 newspaper article reported that a major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling. Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project.


14. What is 'good code'? What is 'good design'?

'Good code' is code that works, is bug free, and is readable and maintainable. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.


15. What is SEI? CMM? CMMI? ISO? Will it help?

These are all standards that determine effectiveness in delivering quality software. They help organizations to identify the best practices useful in helping them increase the maturity of their processes.


16. What steps are needed to develop and run software tests?
a. Obtain requirements, functional design, and internal design specifications and other necessary documents
b. Obtain budget and schedule requirements.
c. Determine Project context.
d. Identify risks.
e. Determine testing approaches, methods, test environment, test data.
f. Set Schedules, testing documents.
g. Perform tests.
h. Perform reviews and evaluations
i. Maintain and update documents


17. What's a 'test plan'? What's a 'test case'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly.


18. What should be done after a bug is found?

The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere.


19. Will automated testing tools make testing easier?

It depends on the Project size. For small projects, the time needed to learn and implement them may not be worth it unless personnel are already familiar with the tools. For larger projects, or on-going long-term projects they can be valuable.


20. What's the best way to choose a test automation tool?

Some of the points that can be noted before choosing a tool would be:
a. Analyze the non-automated testing situation to determine the testing activity that is being performed.
b. Testing procedures that are time-consuming and repetitive.
c. Cost/Budget of tool, Training and implementation factors.
d. Evaluation of the chosen tool to explore the benefits.


21. How can it be determined if a test environment is appropriate?

The test environment should match, as closely as possible, the hardware, software, network, data, and usage characteristics of the expected live environment in which the software will be used.


22. What's the best approach to software test estimation?

The 'best approach' is highly dependent on the particular organization and project and the experience of the personnel involved. Some of the approaches to be considered are:
a. Implicit Risk Context Approach
b. Metrics-Based Approach
c. Test Work Breakdown Approach
d. Iterative Approach
e. Percentage-of-Development Approach


23. What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs.


24. How can it be known when to stop testing?

Common factors in deciding when to stop are:
a. Deadlines (release deadlines, testing deadlines, etc.)
b. Test cases completed with certain percentage passed
c. Test budget depleted
d. Coverage of code/functionality/requirements reaches a specified point
e. Bug rate falls below a certain level
f. Beta or alpha testing period ends


25. What if there isn't enough time for thorough testing?
a. Use risk analysis to determine where testing should be focused.
b. Determine the important functionalities to be tested.
c. Determine the high-risk aspects of the project.
d. Prioritize the kinds of testing that need to be performed.
e. Determine the tests that will have the best high-risk-coverage to time-required ratio.


26. What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.


27. How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Load/stress/performance testing may be useful in determining client/server application limitations and capabilities.


28. How can World Wide Web sites be tested?

Some of the considerations might include:
a. Testing the expected loads on the server
b. Performance expected on the client side
c. Testing the required securities to be implemented and verified.
d. Testing the HTML specification, external and internal links
e. cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled


29. How is testing affected by object-oriented designs?

Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. If the application was well designed this can simplify test design.


30. What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. For testing ('extreme testing'), programmers are expected to write unit and functional test code first, before writing the application code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing.


31. What makes a good Software Test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.


32. What makes a good Software QA engineer?

They must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.


33. What's the role of documentation in QA?

QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. Change management for documentation should be used.


34. What is a test strategy? What is the purpose of a test strategy?

It is a plan for conducting the test effort against one or more aspects of the target system. A test strategy needs to be able to convince management and other stakeholders that the approach is sound and achievable, and it also needs to be appropriate both in terms of the software product to be tested and the skills of the test team.


35. What information does a test strategy capture?

It captures an explanation of the general approach that will be used and the specific types, techniques, and styles of testing.


36. What is test data?

It is a collection of test input values that are consumed during the execution of a test, and the expected results referenced for comparative purposes during that execution.
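
For instance, test data for a hypothetical discount() function can be kept as (input, expected result) pairs that are consumed while the test executes:

    def discount(total):  # hypothetical function under test
        return total * 0.9 if total >= 100 else total

    test_data = [     # (input value, expected result)
        (99, 99),     # below the threshold: no discount
        (100, 90.0),  # boundary value: discount applies
        (200, 180.0),
    ]

    for value, expected in test_data:
        actual = discount(value)
        print(value, "PASS" if actual == expected else f"FAIL (got {actual})")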


37. What is Unit testing?

It is implemented against the smallest testable elements (units) of the software, and involves testing the internal structure, such as logic and dataflow, as well as the unit's function and observable behaviors.
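
A minimal sketch of a unit test, reusing the hypothetical discount() function from the previous answer with Python's unittest module, checks the unit's observable behavior at and around its boundary:

    import unittest

    def discount(total):
        """Hypothetical unit under test: 10% off orders of 100 or more."""
        return total * 0.9 if total >= 100 else total

    class DiscountTests(unittest.TestCase):
        def test_no_discount_below_threshold(self):
            self.assertEqual(discount(99), 99)

        def test_discount_at_boundary(self):
            self.assertAlmostEqual(discount(100), 90.0)

        def test_discount_above_threshold(self):
            self.assertAlmostEqual(discount(200), 180.0)

    if __name__ == "__main__":
        unittest.main()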


38. How can the test results be used in testing?

Test Results are used to record the detailed findings of the test effort and to subsequently calculate the different key measures of testing.


39. What is Developer testing?

Developer testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake.


40. What is independent testing?

Independent testing denotes the test design and implementation most appropriately performed by someone who is independent from the team of developers.


41. What is Integration testing?

Integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case.


42. What is System testing?

A series of tests designed to ensure that the modified program interacts correctly with other system components. These test procedures are typically performed by the system maintenance staff in their development library.


43. What is Acceptance testing?

User acceptance testing is the final test action taken before deploying the software. The goal of acceptance testing is to verify that the software is ready, and that it can be used by end users to perform those functions and tasks for which the software was built.


44. What is the role of a Test Manager?

The Test Manager role is tasked with the overall responsibility for the test effort's success. The role involves quality and test advocacy, resource planning and management, and resolution of issues that impede the test effort.


45. What is the role of a Test Analyst?

The Test Analyst role is responsible for identifying and defining the required tests, monitoring detailed testing progress and results in each test cycle and evaluating the overall quality experienced as a result of testing activities. The role typically carries the responsibility for appropriately representing the needs of stakeholders that do not have direct or regular representation on the project.


46. What is the role of a Test Designer?

The Test Designer role is responsible for defining the test approach and ensuring its successful implementation. The role involves identifying the appropriate techniques, tools and guidelines to implement the required tests, and giving guidance on the corresponding resource requirements for the test effort.


47. What are the roles and responsibilities of a Tester?

The Tester role is responsible for the core activities of the test effort, which involves conducting the necessary tests and logging the outcomes of that testing. The tester is responsible for identifying the most appropriate implementation approach for a given test, implementing individual tests, setting up and executing the tests, logging outcomes and verifying test execution, and analyzing and recovering from execution errors.


48. What are the skills required to be a good tester?

A tester should have knowledge of testing approaches and techniques, diagnostic and problem-solving skills, knowledge of the system or application being tested, and knowledge of networking and system architecture.


49. What is test coverage?

Test coverage is the measurement of testing completeness. It is expressed either as the coverage of test requirements and test cases, or as the coverage of executed code.
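
A small sketch of requirements coverage, assuming a hypothetical traceability mapping from requirements to the test cases that exercise them:

    traceability = {   # requirement -> test cases exercising it (invented)
        "REQ-1": ["TC-1", "TC-2"],
        "REQ-2": ["TC-3"],
        "REQ-3": [],   # no test covers this requirement yet
    }

    covered = sum(1 for tests in traceability.values() if tests)
    print(f"Requirements coverage: {covered / len(traceability):.0%}")  # 67%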


50. What is a test script?

The step-by-step instructions that realize a test, enabling its execution. Test scripts may take the form of either documented textual instructions that are executed manually or computer-readable instructions that enable automated test execution.

Read more...

System Testing


This type of test examines the entire system: all the software components, all the hardware components, and any interfaces. The whole system is checked not only for validity but also for whether it meets its objectives. The complete system is configured in a controlled environment, and test cases are developed to simulate real-life scenarios.
System testing should include recovery testing, security testing, stress testing and performance testing.

Recovery testing uses test cases designed to examine how easily and completely the system can recover from a disaster (power shutdown, blown circuit, disk crash, interface failure, insufficient memory, etc.). It is desirable to have a system capable of recovering quickly and with minimal human intervention. It should also have a log of activities happening before the crash (these should be part of daily operations) and a log of messages during the failure (if possible) and upon restart.

Security testing involves testing the system to make sure that unauthorized personnel or other systems cannot gain access to the system or the information and resources within it. Programs that check for access to the system via passwords are tested along with any organizational security procedures established.

Stress testing encompasses creating unusual loads on the system in an attempt to break it. The system is monitored for performance loss and susceptibility to crashing during the load times. If it does crash as a result of high load, that provides for just one more recovery test.

Performance testing involves monitoring and recording the performance levels during regular, low, and high stress loads. It tests the amount of resource usage under the conditions just described and serves as a basis for forecasting additional resources needed (if any) in the future. It is important to note that performance objectives should have been developed during the planning stage; performance testing assures that these objectives are being met. However, these tests may also be run in the initial stages of production to compare actual usage to the forecasted figures.
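
As a rough sketch of the load and performance idea, the snippet below times a stand-in process_request() operation (an invented placeholder; substitute the real call under test) at regular, high, and stress-level concurrent loads:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def process_request(n):
        """Stand-in for the operation under test."""
        time.sleep(0.01)  # simulate work
        return n

    def measure(load):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=load) as pool:
            list(pool.map(process_request, range(load)))
        elapsed = time.perf_counter() - start
        print(f"load={load:4d}  elapsed={elapsed:.2f}s  throughput={load / elapsed:.0f}/s")

    for load in (10, 100, 500):  # regular, high, and stress-level loads
        measure(load)

Comparing the measured throughput at each load against the performance objectives set during planning is what turns a sketch like this into an actual performance test.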

Read more...

What is Testware?


Hardware development engineers produce hardware. Software development engineers produce software. In the same way, software test engineers produce testware.

Testware is produced by both verification and validation testing methods. Testware includes test cases, test plans, test reports, etc. Like software, testware should be placed under the control of a configuration management system, saved, and faithfully maintained.

Like software, testware has significant value because it can be reused. The tester's job is to create testware that is going to have a specified lifetime and is a valuable asset to the company.

Read more...

Software Testing Interview Questions



I have so many questions in my mind when I think of any interview questions for Software testing / QA Engineer. Would you be able to answer all of them? ;)

1. What is Software Testing?
2. What is the Purpose of Testing?
3. What types of testing do testers perform?
4. What is the Outcome of Testing?
5. What kind of testing have you done?
6. What is the need for testing?
7. What are the entry criteria for Functionality and Performance testing?
8. What is test metrics?
9. Why do you go for White box testing, when Black box testing is available?
10. What are the entry criteria for Automation testing?
11. When to start and Stop Testing?
12. What is Quality?
13. What is Baseline document, Can you say any two?
14. What is verification?
15. What is validation?
16. What is quality assurance?
17. What is quality control?
18. What is SDLC and TDLC?
19. What are the Qualities of a Tester?
20. When to start and Stop Testing?
21. What are the various levels of testing?
22. What are the types of testing you know and you experienced?
23. What exactly is Heuristic checklist approach for unit testing?
24. After completing testing, what would you deliver to the client?
25. What is a Test Bed?
26. What are Data Guidelines?
27. Why do you go for Test Bed?
28. What is Severity and Priority and who will decide what?
29. Can Automation testing replace manual testing? If so, how?
30. What is a test case?
31. What is a test condition?
32. What is the test script?
33. What is the test data?
34. What is an Inconsistent bug?
35. What is the difference between Re-testing and Regression testing?
36. What are the different types of testing techniques?
37. What are the different types of test case techniques?
38. What are the risks involved in testing?
39. Differentiate Test bed and Test Environment?
40. What is the difference between defect, error, bug, failure, fault?
41. What is the difference between quality and testing?
42. What is the difference between White & Black Box Testing?
43. What is the difference between Quality Assurance and Quality Control?
44. What is the difference between Testing and debugging?
45. What is the difference between bug and defect?
46. What is the difference between verification and validation?
47. What is the difference between functional spec. and Business requirement specification?
48. What is the difference between unit testing and integration testing?
49. What is the difference between Volume & Load?
50. What is the difference between Volume & Stress?
51. What is the difference between Stress & Load Testing?
52. What is the difference between Two Tier & Three Tier Architecture?
53. What is the difference between Client Server & Web Based Testing?
54. What is the difference between Integration & System Testing?
55. What is the difference between Code Walkthrough & Code Review?
56. What is the difference between walkthrough and inspection?
57. What is the difference between SIT & IST?
58. What is the difference between static and dynamic testing?
59. What is the difference between alpha testing and beta testing?
60. What are the Minimum requirements to start testing?
61. What is Smoke Testing & when will it be done?
62. What is Adhoc Testing? When can it be done?
63. What is cookie testing?
64. What is security testing?
65. What is database testing?
66. What is the relationship between Quality & Testing?
67. How do you determine, what to be tested?
68. How do you go about testing a project?
69. What is the Initial Stage of testing?
70. What is Web Based Application Testing?
71. What is Client Server Application Testing?
72. What is Two Tier & Three tier Architecture?
73. What is the use of Functional Specification?
74. Why do we prepare test condition, test cases, test script (Before Starting Testing)?
75. Is it not waste of time in preparing the test condition, test case & Test Script?
76. How do you go about testing of Web Application?
77. How do you go about testing of Client Server Application?
78. What is meant by Static Testing?
79. Can the static testing be done for both Web & Client Server Application?
80. In the Static Testing, what all can be tested?
81. Can test condition, test case & test script help you in performing the static testing?
82. What is meant by dynamic testing?
83. Is the dynamic testing a functional testing?
84. Is the Static testing a functional testing?
85. What are the functional testing you perform?
86. What is meant by Alpha Testing?
87. What kind of documents do you need for functional testing?
88. What is meant by Beta Testing?
89. At what stage the unit testing has to be done?
90. Who can perform the Unit Testing?
91. When will the Verification & Validation be done?
92. What is meant by Code Walkthrough?
93. What is meant by Code Review?
94. What is the testing that a tester performs at the end of Unit Testing?
95. What are the things, you prefer & Prepare before starting Testing?
96. What is Integration Testing?
97. What is Incremental Integration Testing?
98. What is meant by System Testing?
99. What is meant by SIT?
100. When do you go for Integration Testing?
101. Can the System testing be done at any stage?
102. What are stubs & drivers?
103. What is the concept of Top-Down & Bottom-Up integration testing?
104. What is the final Stage of Integration Testing?
105. Where in the SDLC, the Testing Starts?
106. What is the Outcome of Integration Testing?
107. What is meant by GUI Testing?
108. What is meant by Back-End Testing?
109. What are the features, you take care in Prototype testing?
110. What is Mutation testing & when can it be done?
111. What is Compatibility Testing?
112. What is Usability Testing?
113. What is the Importance of testing?
114. What is meant by regression Testing?
115. When we prefer Regression & what are the stages where we go for Regression Testing?
116. What is performance testing?
117. Which performance tests can be done manually, and which automatically?
118. What is Volume, Stress & Load Testing?
119. What is a Bug?
120. What is a Defect?
121. What is the defect Life Cycle?
122. What is the Priority in fixing the Bugs?
123. Explain the Severity you rate for the bugs found?
124. What is the difference between UAT & IST?
125. What is meant by UAT?
126. What all are the requirements needed for UAT?
127. What are the docs required for Performance Testing?
128. What is risk analysis?
129. How to do risk management?
130. What are test closure documents?
131. What is traceability matrix?
132. What ways have you followed for defect management?

and finally
133. What is the difference between Smoke Testing and Sanity Testing?

Read more...

Monday, June 18, 2007

History's Worst Software Bugs


By Simson Garfinkel | 11.08.05 | 2:00 AM

Last month automaker Toyota announced a recall of 160,000 of its Prius hybrid vehicles following reports of vehicle warning lights illuminating for no reason, and cars' gasoline engines stalling unexpectedly. But unlike the large-scale auto recalls of years past, the root of the Prius issue wasn't a hardware problem -- it was a programming error in the smart car's embedded code. The Prius had a software bug.

With that recall, the Prius joined the ranks of the buggy computer -- a club that began in 1945 when engineers found a moth in Panel F, Relay #70 of the Harvard Mark II system. The computer was running a test of its multiplier and adder when the engineers noticed something was wrong. The moth was trapped, removed and taped into the computer's logbook with the words: "first actual case of a bug being found."

Sixty years later, computer bugs are still with us, and show no sign of going extinct. As the line between software and hardware blurs, coding errors are increasingly playing tricks on our daily lives. Bugs don't just inhabit our operating systems and applications -- today they lurk within our cell phones and our pacemakers, our power plants and medical equipment. And now, in our cars.

But which are the worst?

It's all too easy to come up with a list of bugs that have wreaked havoc. It's harder to rate their severity. Which is worse -- a security vulnerability that's exploited by a computer worm to shut down the internet for a few days or a typo that triggers a day-long crash of the nation's phone system? The answer depends on whether you want to make a phone call or check your e-mail.

Many people believe the worst bugs are those that cause fatalities. To be sure, there haven't been many, but cases like the Therac-25 are widely seen as warnings against the widespread deployment of software in safety critical applications. Experts who study such systems, though, warn that even though the software might kill a few people, focusing on these fatalities risks inhibiting the migration of technology into areas where smarter processing is sorely needed. In the end, they say, the lack of software might kill more people than the inevitable bugs.

What seems certain is that bugs are here to stay. Here, in chronological order, is the Wired News list of the 10 worst software bugs of all time … so far.

July 28, 1962 -- Mariner I space probe. A bug in the flight software for the Mariner 1 causes the rocket to divert from its intended path on launch. Mission control destroys the rocket over the Atlantic Ocean. The investigation into the accident discovers that a formula written on paper in pencil was improperly transcribed into computer code, causing the computer to miscalculate the rocket's trajectory.

1982 -- Soviet gas pipeline. Operatives working for the Central Intelligence Agency allegedly (.pdf) plant a bug in a Canadian computer system purchased to control the trans-Siberian gas pipeline. The Soviets had obtained the system as part of a wide-ranging effort to covertly purchase or steal sensitive U.S. technology. The CIA reportedly found out about the program and decided to make it backfire with equipment that would pass Soviet inspection and then fail once in operation. The resulting event is reportedly the largest non-nuclear explosion in the planet's history.

1985-1987 -- Therac-25 medical accelerator. A radiation therapy device malfunctions and delivers lethal radiation doses at several medical facilities. Based upon a previous design, the Therac-25 was an "improved" therapy system that could deliver two different kinds of radiation: either a low-power electron beam (beta particles) or X-rays. The Therac-25's X-rays were generated by smashing high-power electrons into a metal target positioned between the electron gun and the patient. A second "improvement" was the replacement of the older Therac-20's electromechanical safety interlocks with software control, a decision made because software was perceived to be more reliable.

What engineers didn't know was that both the 20 and the 25 were built upon an operating system that had been kludged together by a programmer with no formal training. Because of a subtle bug called a "race condition," a quick-fingered typist could accidentally configure the Therac-25 so the electron beam would fire in high-power mode but with the metal X-ray target out of position. At least five patients die; others are seriously injured.

Read more...

Free Black Box Testing Course Video


Cem Kaner has posted, free, an evolving online version (with video, course notes, examples, exercises, papers, tests, and more) of his Black Box Software Testing course.

From the course overview: "Black box testing is the craft of testing a program from the external view. We look at how the program operates in its context, getting to know needs and reactions of the users, hardware and software platforms, and programs that communicate with it."

This course is an introduction to black box testing. It is a superset of the Software Testing 1 introductory courses that Florida Tech requires in its undergraduate (CSE 3411) and graduate (SWE 5411) software engineering degree programs. The full set of materials is equivalent to about a two-semester course.

And from the learning objectives for the course: "Testing is a type of technical investigation. If you think of it as a routine task, you'll do crummy testing. So, the essence of what I hope you'll learn in the course is how to investigate, how to plan your investigation, how to report the results, and how to tell whether you're doing a good job. Excellence in the course will require you to extend your critical thinking, writing, and teamworking skills. Finally, the work that you do in this course might help you land a job. Many of the course tasks were designed to be realistic or impressive (to an employer) and to give you a chance to do professional-quality work that you can show off during a job interview."

http://www.testingeducation.org/BBST/index.html

Read more...

Risk Driven Testing


Whenever there is a lot of testing to be done and not enough time to do it, we have to prioritize so that at least the most important things get done. In such a scenario, the question becomes: how do we find the most important bugs first?

How to find the most important bugs first?
In testing, there's never enough time or resources, and the consequences of skipping something important are severe.

So prioritization has received a lot of attention. The approach is called Risk Driven Testing, where Risk has a very specific meaning.

Here's how you do it:

Take the pieces of your system, whatever you use - modules, functions, sections of the requirements - and rate each piece on two variables, Impact and Likelihood.

Impact is what would happen if this piece somehow malfunctioned. Would it destroy the customer database? Or would it just mean that the column headings in a report didn't quite line up?

Likelihood is an estimate of how probable it is that this piece would fail.

Together, Impact and Likelihood determine Risk for the piece.
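
A minimal sketch of this rating scheme in Python; the pieces and their 1-5 ratings are invented for illustration:

    # Rate each piece 1-5 on Impact and Likelihood; Risk = Impact x Likelihood.
    pieces = {
        "customer database writes": (5, 3),  # (impact, likelihood)
        "login authentication":     (5, 2),
        "report column headings":   (1, 4),
        "help screen wording":      (1, 1),
    }

    for name, (impact, likelihood) in sorted(
            pieces.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
        print(f"{name}: risk = {impact * likelihood}")

The highest-risk pieces come out on top, and that is where the first hours of testing should go.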

For example, suppose we're testing new two-wheelers as they come off the production line. We only have one hour to test each vehicle.

We could easily justify spending all that time testing the brakes. After all, if the brakes don't work right, there'll be a really large Impact - literally, in this case. Therefore, while it's important to see that the vehicle runs, it's even more important to be sure that if it does run, it will stop.

Just as in testing software, we say that something "works" if it works every time, under all combinations of conditions.

So we could soak the brakes with water, freeze them, overheat them, and so on - all the standard engineering tests. We could spend the whole time doing this.

But we also have to make sure that the vehicle starts and accelerates, and check that the lights, horn, etc. work.

Unfortunately, in the real world there's never enough time to do all the testing we'd like to do. So, as in the above example, we have to prioritize testing by Risk. That makes sure that we're doing the most important things first. But we also need to allow some time to get some coverage of the less risky features and usage scenarios.

Risk drives good testing!!!

Read more...

Software Development Life Cycle (SDLC)


Software Development Life Cycle (SDLC) is the overall multistep process of developing information systems, from investigation of initial requirements through analysis, design, implementation and maintenance. There are many different models and methodologies, but each generally consists of a series of defined steps or stages.

Once upon a time, software development consisted of a programmer writing code to solve a problem or automate a procedure. Nowadays, systems are so big and complex that teams of architects, analysts, programmers, testers and users must work together to create the millions of lines of custom-written code that drive our enterprises. To manage this, a number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize.

The oldest of these, and the best known, is the waterfall: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:

Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.

Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.

Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.

Implementation: The real code is written here.

Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.

Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.

Maintenance: What happens during the rest of the software's life: changes, correction, additions, moves to a different computing platform and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.

Read more...

Rapid Testing


Rapid Software Testing - The style of rapid testing is tester-centric (as opposed to technique-centric) and is a blend of heuristic testing, risk-based testing, and exploratory testing.


How is Rapid Software Testing different from normal software testing?

Software Testing practice differs from industry to industry, company to company, tester to tester and person to person. But there are some elements that most test projects have in common. Let's call those common elements "normal testing". In our experience, normal testing involves writing test cases against some kind of specification. These test cases are fragmentary plans or procedures that loosely specify what a tester will do to test the product. The tester is then expected to perform these test cases on the product, repeatedly, throughout the course of the project.

Rapid testing differs from traditional testing in several major ways:

  • Don’t Waste Time. The most rapid action is no action at all. So, in rapid testing we eliminate any activity that isn't necessary. Traditional testing, by comparison, is a bloated mess. It takes some training and experience to know what to eliminate, of course. In general, streamline documentation and devote testing primarily to risk areas. Don't repeat tests just because someone told you that repetition is good. Make sure that you get good information value from every test. Consider the opportunity cost of each testing activity.

  • Mission. In Rapid Testing we don't start with a task ("write test cases"), we start with a mission. Our mission may be "find important problems fast". If so, then writing test cases may not be the best approach to the test process. If, on the other hand, our mission is "please the FDA auditors", then we not only will have to write test cases, we'll have to write certain kinds of test cases and present them in a specifically approved format. Proceeding from an understanding of our mission, we take stock of our situation and look for the most efficient and effective actions we can take right now to move towards fulfilling that mission.

  • Skills. To do any testing well requires skill. Normal testing downplays the importance of skill by focusing on the format of test documentation rather than the robustness of tests. Rapid Testing, as we describe it, highlights skill. It isn't a mechanical technique like making microwave popcorn, or filling out forms at the DMV. Robust tests are very important, so we practice critical thinking and experimental design skills. A novice tester will not do RT very well unless supervised and coached by a senior tester who is trained (or self-trained) in the art. We hope the articles and presentations on this site will help you work on those skills.


  • Risk. Normal testing is focused on functional and structural product coverage. In other words, if the product can do X, then try X. Rapid Testing focuses on important problems. We gain an understanding of the product we're testing to the point where we can imagine what kinds of problems are more likely to happen and what problems would have more impact if they happened. Then we put most of our effort into testing for those problems. Rapid Testing is concerned with uncovering the most important information as soon as possible.


  • Heuristics. We must beware of overthinking the testing problem, so we use heuristics (loose translation: rules of thumb) to help us get out of analysis paralysis and test more quickly. Heuristics are essentially reflexes -- biases to respond in a certain way -- that generally help us test the right things at the right time. Rapid Testers collect, memorize and practice using helpful heuristics. In normal testing, heuristics are also used, but testers are often not aware of them and do not fully control them.


  • Exploration. Rapid Testing is also rapid learning, so we use exploratory testing. We avoid pre-scripting test cases unless there is a clear and compelling need. We prefer to let the next test we do be influenced by the last test we did. This is a good thing, because we're not imprisoned by pre-conceived notions about what we should test, but let ourselves develop better ideas as we go. Other approaches to doing testing quickly, such as extensive test automation, run the risk that you'll end up with a lot of very fast tests that don't help you find the important problems in the product.

  • Teaming. Rapid Testers cheat. That is, they do things that our former elementary school teachers would consider cheating: we figure things out in advance where possible, we borrow other people's work, we use every resource and tool we can find. For example, one important technique of Rapid Testing is pair testing: two heads, one computer. This idea has proven valuable in the practice of Extreme Programming, and it works for testing just as well. In our experience of normal testing, testers usually work quietly and alone, rather than hunting for bugs like a pack of rapid wolves.

Read more...