Tuesday, April 14, 2015

Regression testing vs Retesting


Below are the differences between Regression Testing and Retesting.
a) Retesting is carried out to verify a defect fix or fixes. Regression testing is done to check that the defect fix(es) have not impacted other functionality of the application that was working fine before the code changes were applied.
b) Retesting is planned based on the defect fixes listed in the build notes. Regression testing is generic, may not always be specific to any defect fix or code change, and can be planned as regional (partial) or full regression testing.
c) Retesting involves executing test cases that failed earlier, whereas regression testing involves executing test cases that passed in earlier builds, i.e., functionality that was working in earlier builds.
d) Retesting involves rerunning the failed test cases associated with the defect fix(es) being verified. Regression testing does not verify defect fixes; it only executes the regression test cases.
e) Retesting always takes higher priority over regression testing, i.e., regression testing is done after retesting is completed. In some projects with ample testing resources, regression testing is carried out in parallel with retesting.

f) Though retesting and regression testing have different objectives and priorities, they are equally important for the project's success.
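As an illustration of how the two activities are often kept separate in practice, here is a minimal, hypothetical pytest sketch. The marker names ("retest", "regression"), the test names and the defect ID DEF-101 are all invented for this example and would follow a real project's own conventions.

import pytest

# Hypothetical markers, assumed to be registered under [pytest] markers in pytest.ini:
# "retest" tags cases tied to a specific defect fix, "regression" tags previously
# passing cases that guard functionality which worked in earlier builds.

@pytest.mark.retest
def test_cancel_order_refunds_full_amount():
    # Re-run of the test case that originally failed and raised defect DEF-101.
    # The real assertions against the application would go here.
    ...

@pytest.mark.regression
def test_submit_order_with_cash_payment():
    # Previously passing case, executed to confirm the fix did not break anything else.
    ...

With tags like these, retesting can be run first with "pytest -m retest" as soon as the fix is deployed, followed by "pytest -m regression" (or the full suite) for regression coverage.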

Smoke and sanity testing


Below are the differences between Smoke and Sanity Testing.
Smoke Testing: The objective of smoke testing is to verify that all the critical and major functionality of an application works as expected before going ahead with full-fledged testing, i.e., functional or regression testing.
Sanity Testing: The objective of sanity testing is to confirm that the new build, environment and external services are stable enough to carry out any testing at all, i.e., even before carrying out a smoke test.

Smoke Testing: Smoke tests are broad and shallow. They are designed to catch critical or high-severity defects across all the important functionalities, i.e., show stoppers (that were not caught during the sanity test) or blocker defects, meaning defects that indicate a particular flow or functionality cannot be tested.
Sanity Testing: Sanity tests are very narrow (usually a single flow, or two at the most) and are not designed to test all the important functionality of the application. They are intended to verify that the application is available (up and running) and able to interact successfully with the database, external services and external devices, if any. Sanity tests are designed to catch show-stopper defects, for example being unable to log in to the application, or the application not functioning due to a JDBC connection failure.

Smoke Testing: Smoke testing is done by the testing team only, as the focus of this testing is on validating application functionality.
Sanity Testing: Sanity testing is mostly done by the build deployment / operations team after every new build is deployed, or once the environment is brought up after scheduled application or environment maintenance. The build deployment team performs it because the issues encountered immediately after a new build deployment are most often configuration, database access and other setup issues. In some bigger projects, the testing team may be asked to perform sanity testing.

Smoke Testing: Smoke testing is usually done after sanity testing is completed.
Sanity Testing: Sanity testing is done immediately after a new build deployment, or after scheduled application / environment maintenance.
Smoke Testing: Smoke test cases are mostly documented. The smoke test suite is built by picking functional test cases that validate all the critical and important functionalities of the application, e.g.:
a) Submit orders and pay by different tender types (Cash, Credit, Debit, Gift Card etc.)
b) Verify cancel order is working fine.
c) Verify return functionality is working fine.
d) Verify sales data in Oracle – daily sales report.
And more test cases to cover all the important functionality of the application.
Sanity Testing: Sanity test cases are usually not documented, i.e., there are no written test cases. Most companies follow a sanity checklist, e.g.:
a) Verify login.
b) Submit an order and pay by cash.
c) Verify Oracle reports can be opened.
As you can see, the intent of sanity testing is to find show-stopper issues that can make the system completely untestable. In the above example, the tests are very narrow and do not cover all the important functionality of the application; they only check whether the application is in a testable state.
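To make the distinction concrete, below is a minimal, hypothetical sketch of an automated sanity checklist in Python. The base URL, database connection string and report endpoint are placeholders invented for this example, not real services; the point is only that each check asks "is the system up and reachable?" rather than "does a feature behave correctly?".

import psycopg2   # assumed database driver; any connectivity check would do
import requests

BASE_URL = "https://example.internal/app"        # placeholder environment URL
DB_DSN = "dbname=orders user=sanity host=db01"   # placeholder connection string

def check_application_is_up():
    # Show-stopper check: can we even reach the login page?
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200

def check_database_connection():
    # Show-stopper check: does the database connection succeed at all?
    with psycopg2.connect(DB_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            assert cur.fetchone() == (1,)

def check_reports_can_be_opened():
    # Show-stopper check: is the reporting page reachable?
    response = requests.get(f"{BASE_URL}/reports/daily-sales", timeout=10)
    assert response.status_code == 200

if __name__ == "__main__":
    for check in (check_application_is_up, check_database_connection, check_reports_can_be_opened):
        check()
        print(f"PASS: {check.__name__}")

A smoke suite, by contrast, would exercise the important business flows themselves (submit order, cancel order, returns, reports) rather than just connectivity.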

Defect Life Cycle

The defect life cycle (or bug life cycle) comprises all the status changes a defect undergoes from the time it is logged until it is closed or cancelled. It indicates the flow in which defects are analyzed, assigned, fixed, verified and finally closed or cancelled.
Description of each defect status:
1) New – A defect has been logged and is yet to be assigned to a developer. Usually the Project Manager or Dev Lead decides which defects are assigned to which developer.
2) Assigned – Indicates that the developer who will fix the defect has been identified and has started analyzing and working on the defect fix.
3) Duplicate – The Manager or Developer updates the status of a defect to “Duplicate” if the defect has already been reported.
4) Rejected / Not Reproducible – This status indicates that the developer does not consider the defect valid, for one of the following reasons:
a) Not able to reproduce it.
b) Not a valid defect; the behavior is as per the requirement.
c) The test data used was invalid.
d) The requirement the defect refers to has been de-scoped from the current release and the tester was not aware of this late change.
5) Deferred – The defect fix has been held back because of time or budget constraints, and the project team has obtained the customer's approval to defer the defect to the next or a future release.
6) Fixed – The developer has fixed the defect and unit tested the fix. The code changes are deployed in the test environment for verification of the defect fix.
7) Reopen – A tester changes the status to “Reopen” when they find the defect is not fixed or only partially fixed. The developer who fixed the defect looks into the comment provided by the tester at the time of reopening, changes the status back to “Assigned” and starts working on the fix again. In case the developer wants the tester to re-verify the defect, he/she adds a comment and changes the defect status back to “Fixed”.
8) Closed – The tester verifies defects that are in “Fixed” status and, once they find that a defect is indeed fixed, changes its status to “Closed”. This is the last status of the defect life cycle.
9) Cancelled – This status indicates that the tester realized the defect they logged was invalid and agreed to cancel it.

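The status flow described above can be viewed as a small state machine. The sketch below is a minimal, hypothetical Python model of these statuses and transitions; real defect trackers (JIRA, Bugzilla, etc.) configure their own workflows, so the allowed transitions here are only an assumption based on the descriptions in this post.

from enum import Enum

class DefectStatus(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    DUPLICATE = "Duplicate"
    REJECTED = "Rejected / Not Reproducible"
    DEFERRED = "Deferred"
    FIXED = "Fixed"
    REOPEN = "Reopen"
    CLOSED = "Closed"
    CANCELLED = "Cancelled"

# Allowed transitions, assumed from the flow described above.
ALLOWED_TRANSITIONS = {
    DefectStatus.NEW: {DefectStatus.ASSIGNED, DefectStatus.DUPLICATE,
                       DefectStatus.REJECTED, DefectStatus.CANCELLED},
    DefectStatus.ASSIGNED: {DefectStatus.FIXED, DefectStatus.DUPLICATE,
                            DefectStatus.REJECTED, DefectStatus.DEFERRED},
    DefectStatus.FIXED: {DefectStatus.CLOSED, DefectStatus.REOPEN},
    DefectStatus.REOPEN: {DefectStatus.ASSIGNED},
    DefectStatus.DEFERRED: {DefectStatus.ASSIGNED},
}

def change_status(current, new):
    # Return the new status if the transition is allowed, otherwise raise an error.
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {new.value}")
    return new

# Example: the happy path New -> Assigned -> Fixed -> Closed.
status = DefectStatus.NEW
for nxt in (DefectStatus.ASSIGNED, DefectStatus.FIXED, DefectStatus.CLOSED):
    status = change_status(status, nxt)
    print(status.value)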

Test Plan

A Test Plan (or Software Test Plan) is a document that describes the scope, approach, schedule, resources, environments, test cycles and other details involved in the testing activities.
The Test Plan also documents the features to be tested, the features that will not or cannot be tested, entry and exit criteria, assumptions, identified risks, how those risks are tracked and mitigated, and the roles and responsibilities of the people involved.
Test plan template, based on IEEE 829 format
1) Test Plan Identifier:- A unique identifier, e.g. TestPlan_project_app-Name_release-Num_version-Num.doc. The naming convention depends on the organization's standards.
2) References:- Business Requirements, Functional Requirements, Project Plan, High Level and Detail design documents, documents detailing the organization's processes, etc.
3) Introduction:- The objective or scope of the Test Plan, and the process to be used for change control and communication.
4) Test Items:- The functionality that will be tested. It also contains the delivery schedule of key deliverables.
5) Software Risk Issues:- Known or anticipated risks associated with the project, the testing activities, tools or people.
6) Features to be tested:- List of features that will be tested.
7) Features not to be tested:- List of features that will 'not' be tested. There are several reasons why some features are not tested:
a) The functionality already exists, has been found to be stable and is not impacted by the current implementation.
b) The functionality will not be used in this release.
8) Approach:- This is the most important part of the Test Plan. The approach covers:
a) Types of tests carried out and details of the responsible team/individuals.
b) Test pass execution details.
c) Hardware, software and tools used for testing.
d) Levels of regression testing that will be carried out.
e) CM (Configuration Management) setup and usage.
f) Metrics collected during different stages of the project.
9) Item Pass/Fail Criteria:- Criteria used to determine whether each test item has passed or failed.
10) Entry & Exit Criteria:- Explains when to start and when to stop testing.
11) Suspension Criteria and Resumption Requirements:- Criteria used to suspend all or a portion of the testing activities; resumption criteria specify when testing can resume after it has been suspended.
12) Test Deliverables:- Documents, process deliverables, metrics and reports to be generated during the different phases of testing.
13) Remaining Test Tasks:- Details the parts of the application that this plan does not address, for example because the testing will be done by an external team or company.
14) Environmental Needs:- Specific details of hardware configuration, operating system and other software requirements.
15) Staffing and Training Needs:- Training needs such as domain knowledge, automation or any other tools required for testing, etc.
16) Responsibilities:- Details of who is responsible for which task and what the escalation mechanism is.
17) Planning Risks and Contingencies:- Details the overall risks of the project, with more detail on the risks associated with the testing phase, along with a plan for how to mitigate each risk.
18) Approvals:- Different stakeholders of the project approve specific deliverables, e.g. Business approves the UID (User Interface Design) document. Most deliverables require approvals from multiple stakeholders.


Entry and Exit Criteria

Entry and exit criteria are the sets of conditions that must be met in order to start and to close a particular project phase or stage. Each SDLC (Software Development Life Cycle) phase or stage will have one or more entry/exit criteria conditions defined, documented and signed off.
In case any of the conditions specified in the entry and exit criteria cannot be met after they have been documented and signed off, approval must be obtained from the stakeholders who signed off on the entry/exit criteria or on the document containing them. For example, the Application Test Approach document contains the entry and exit criteria for the APT phase, so any change to the entry and exit criteria conditions in that document requires written approval from the stakeholders who signed off on it.
Every test stage, be it AT (Assembly Test), APT (Application Product Test), IPT (Integration Product Test), Performance Test or User Acceptance Test, will have its own set of entry and exit criteria.
In case a modification of the entry/exit criteria involves a waiver of a deliverable (e.g. the Requirements Traceability Matrix), a waiver request should be sent to the SQA (Software Quality Assurance) team and written approval should be obtained.
Below are sample entry and exit criteria for the Application Product Test stage:
Entry Criteria
• Build notes are provided to the APT team.
• All defects logged during earlier phases (Requirements, Design or Development) that are planned to be fixed during the APT phase are logged in the test management software with a target resolution date.
• Business Analysts, Technical Architects, Developers, DBAs (Database Administrators), build deployment and support resources are identified and are made available as required during APT (Application Product Test) testing.
• RTM (Requirements Traceability Matrix) is signed off by the required stakeholders.
• Test Closure Report for AT (Assembly Testing) is signed off by the required stakeholders.
• The following deliverables are completed and signed off before APT (Application Product Test) can start: Test approach, Test conditions and expected results, Test scenarios, Test scripts, and the common Test data sheet.
• The APT (Application Product Test) environment is ready in terms of hardware, software and build, and is made available to the APT team.
• The build deployed in the Application Product Test environment has met the exit criteria defined for Assembly Testing.
• There are no pending Severity 1 defects logged during the Unit Testing or AT (Assembly Testing) phases.

Exit Criteria
• All planned test scripts of Pass3 are executed and 95% of the Pass3 test cases have passed.
• Any Application Product Test cases marked as NA (Not Applicable) have been reviewed and approved prior to exiting APT (Application Product Test).
• There are no open/pending Severity 1 or Severity 2 defects; any pending Severity 3 and Severity 4 defects can be deferred only if they have been reviewed and approved by UAT users, business users and other project stakeholders.
• All defects found in pre-APT phases are closed or deferred with approval from all the required stakeholders.
• Application Product Test resources, Business Analysts, Technical Architects, Developers, DBAs (Database Administrators), build deployment and support resources are identified and made available for the next phase of testing, IPT (Integration Product Test).
• Test Closure Report for APT (Application Product Test) is signed off by the required stakeholders and handed off to the IPT (Integration Product Test) team lead.
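As an illustration only, the quantitative parts of these exit criteria (the 95% Pass3 pass rate and the absence of open Severity 1/2 defects) could be evaluated automatically. The sketch below is hypothetical Python; the data structures are invented for this example and do not come from any particular test management tool.

def exit_criteria_met(test_results, open_defects, required_pass_rate=0.95):
    # test_results: list of "pass"/"fail" strings for the planned Pass3 scripts.
    # open_defects: list of dicts with a "severity" key (1 = most severe).
    executed = len(test_results)
    passed = sum(1 for result in test_results if result == "pass")
    pass_rate_ok = executed > 0 and passed / executed >= required_pass_rate

    # No open Severity 1 or Severity 2 defects are allowed at exit.
    no_blocking_defects = all(defect["severity"] > 2 for defect in open_defects)

    return pass_rate_ok and no_blocking_defects

# Example: 96 of 100 Pass3 cases passed and one Severity 3 defect is still open.
results = ["pass"] * 96 + ["fail"] * 4
defects = [{"id": "DEF-204", "severity": 3}]
print(exit_criteria_met(results, defects))  # True

The remaining exit criteria (sign-offs, handovers, resource availability) are process gates and would still be verified manually.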

Thursday, August 29, 2013

Software Testing Interview Questions



1. Tell me something about yourself?
2. Tell me about your current project?
3. What is your primary role in your project?
4. What is your daily routine in your office?
5. What is the end to end process that is followed in your project?
6. What is a test plan and who prepares it?
7. How do you make sure that you have understood the entire requirement given by the client?
8. What do you do when you find something wrong in the requirement?
9. What is a traceability matrix, and what is the purpose of it?
10. What do you do when you receive a build?
11. What is the difference between retesting and regression?
12. Why do you do sanity and regression and what is the difference?
13. How do you write test cases?
14. How can we test without requirements?
15. How do we know all scenarios are covered in test cases?
16. What is the role of a tester in the requirement phase?
17. If there is only one day left before the release date, how will you test the product in one day?
18. What is the review process followed in your project/organisation?
19. What is the test environment and how is it set up in your project?
20. How do you report a bug?
21. What is a bug life cycle?
22. What are severity and priority?
23. Who decides severity and priority?
24. How do you know something is a bug?
25. Which is the most important bug you have reported so far?
26. Give an example of a high severity but low priority bug.
27. Give an example of a low severity but high priority bug.
28. What test case design techniques do you know?
29. Explain boundary value analysis with an example.
30. What is equivalence partitioning?
31. How do you reduce the number of test cases?
32. When do you say your product is ready for release?
33. What are suspension and resumption criteria in a test plan?
34. Which major fields/information are required when reporting a defect?
35. What is the difference between static and dynamic testing?
36. What is the difference between system testing and functional testing?
37. What are alpha and beta testing?
38. What are verification and validation?
39. What is monkey testing and why is it necessary?
40. What is the pesticide paradox?
41. If you report a bug and the developer says it is not a bug, how would you handle the situation?
42. What is performance testing?
43. What is the difference between load and stress testing?
44. What is gorilla testing?
45. How many types of integration testing strategies are there?
46. What are stubs and drivers in integration testing?
47. When can we go for automation testing?
48. What is parameterization in automation testing?
49. What is correlation in performance testing?
50. How do you perform installation testing?
51. What is usability testing?
52. What are virtual users in performance testing?
53. How does system testing differ from integration testing?
54. What are CMMI levels?
55. What is the difference between an inspection and a walkthrough?
56. What are code coverage tools?
57. What are Testing Effectiveness (TE) and Defect Removal Efficiency (DRE)?
58. Who are PQA, DPA and CC in a project?
59. Can automation replace manual testing, and can manual testing replace automation?
60. What is defect masking?
61. Which estimation model is used for your project work?
62. Who assigns work in your project and how?
63. Why do you want to change your job?
64. Do you have any problem working late nights or on weekends?
65. Do you have any experience in managing a software testing project?

General interview questions
1. Tell me something about yourself.
2. Why did you leave your last job? Why do you want to change your job?
3. What experience do you have in this field?
4. Do you consider yourself successful?
5. What do co-workers say about you?
6. What do you know about this organization?
7. What have you done to improve your knowledge in the last year?
8. Are you applying for other jobs?
9. Why do you want to work for this organization?
10. Do you know anyone who works for us?
11. What kind of salary do you expect?
12. Are you a team player?
13. How long would you expect to work for us if hired?
14. Have you ever had to fire anyone? How did you feel about that?
15. What is your philosophy towards work?
16. If you had enough money to retire right now, would you?
17. Have you ever been asked to leave a position?
18. Explain how you would be an asset to this organization.
19. Why should we hire you?
20. Tell me about a suggestion you have made.
21. What irritates you about co-workers?
22. What is your greatest strength?
23. Tell me about your dream job.
24. Why do you think you would do well at this job?
25. What are you looking for in a job?
26. What kind of person would you refuse to work with?
27. What is more important to you: the money or the work?
28. What would your previous supervisor say your strongest point is?
29. Tell me about a problem you had with a supervisor.
30. What has disappointed you about a job?
31. Tell me about your ability to work under pressure.
32. Do your skills match this job or another job more closely?
33. What motivates you to do your best on the job?
34. Are you willing to work beyond normal work hours, and on weekends?
35. How would you know you were successful on this job?
36. Would you be willing to relocate if required?
37. Are you willing to put the interests of the organization ahead of your own?
38. Describe your management style.
39. What have you learned from mistakes on the job?
40. Do you have any blind spots?
41. If you were hiring a person for this job, what would you look for?
42. Do you think you are overqualified for this position?
43. How do you propose to compensate for your lack of experience?
44. What qualities do you look for in a boss?
45. Tell me about a time when you helped resolve a dispute between others.
46. What position do you prefer on a team working on a project?
47. Describe your work ethic.
48. What has been your biggest professional disappointment?
49. Tell me about the most fun you have had on the job.
50. Do you have any questions for me?