Explain in detail the QA process. He asked me to take the example of a vending machine that dispenses coke or water.
- Check whether your card works.
- Check behavior with an improper card.
- Check that a proper item and an improper item can be selected/deselected (water/coke).
- Try to access an item when none is in stock.
- Check with more than the available number of items stuffed in the tray.
Installation testing
- Check if the vending machine can be installed properly, with proper connections.
- Check if it is movable; test on different plugs and power: 110 V, 220 V, 440 V.
Usability testing
- Check if the vending machine has proper buttons and a dispenser/provision for hands to fit in to take the product.
- Check the height for operability.
Performance testing
- How quickly does it drop the item?
Stress testing
- Shake the machine.
- Switch the machine on and off alternately and make a selection.
- Does it work in severely cold/hot atmospheres?
- Insert nothing and keep pressing.
Compatibility testing: different cards.
Capability testing.
Equivalence partitioning / boundary value analysis
- Valid: correct amount, correct selection of an item.
- Invalid: incorrect or insufficient amount, invalid item selection.
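For instance, these partitions could be expressed as a minimal sketch of TestNG tests. The VendingMachine stub, prices, and item codes below are invented for illustration; a real machine's API would differ.

import java.util.Map;
import org.testng.Assert;
import org.testng.annotations.Test;

public class VendingMachineBoundaryTest {

    // Hypothetical stub standing in for the real machine; prices are illustrative.
    static class VendingMachine {
        private final Map<String, Double> prices = Map.of("COKE", 1.50, "WATER", 1.00);
        private double inserted;

        void insertAmount(double amount) { inserted += amount; }

        // Dispenses only if the item exists and enough money was inserted.
        boolean select(String item) {
            Double price = prices.get(item);
            return price != null && inserted >= price;
        }
    }

    // Valid partition: exact amount, valid item.
    @Test
    public void validExactAmountAndSelection() {
        VendingMachine vm = new VendingMachine();
        vm.insertAmount(1.50);
        Assert.assertTrue(vm.select("COKE"));
    }

    // Boundary value: one cent below the price must be rejected.
    @Test
    public void invalidAmountJustBelowPrice() {
        VendingMachine vm = new VendingMachine();
        vm.insertAmount(1.49);
        Assert.assertFalse(vm.select("COKE"));
    }

    // Invalid partition: selecting an item the machine does not stock.
    @Test
    public void invalidItemSelection() {
        VendingMachine vm = new VendingMachine();
        vm.insertAmount(1.50);
        Assert.assertFalse(vm.select("TEA"));
    }
}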
- Unit testing: I had candidates going really deep into the circuitry of the machine, testing each individual transistor. Most go to a higher level and test each individual functional component.
- Functional testing: Some candidates approach functional testing through the actual mechanics of the machine (buttons, display, etc.), while others approach it from a software angle, leaving the mechanics aside. The best testers cover both.
- White box/black box: Again, your mileage may vary, and you really see where your candidate is most comfortable.
- Performance/reliability testing: How fast can I get my coffee? How many coffees can I deliver in rush hour, MTBF, etc.? You can have a lot of fun in this area.
- Usability: So much to do in this area, but most candidates forget about it.
- Localization testing: Again, most candidates don't think about this one.
- Security testing: Most candidates don't go there, but it's always a good sign if they do. I had candidates forcing the door to get the money, throwing water on the machine to see how far they could go before a power surge, sending 300 V into the thing to understand how it would react, etc.
Agile methodology: It starts with the sprint planning meeting, where the Product Owner and the team negotiate which stories the team will tackle that sprint, estimate the hours of work, divide the stories into tasks (user stories), and start working on them. VersionOne is used as the agile tracking tool and Jira for bug tracking. The Scrum Master conducts the daily scrum meetings, tracks each member's progress on a daily basis, finds any blockers, removes dependencies, etc. Sprint progress is tracked using a burn-down chart: if the actual line stays inside (below) the ideal line, the team is in good shape; if it stays outside (above), the team is not meeting its timelines. Once all bugs are tracked and fixed and the product manager's acceptance is done, the sprint ends with a retrospective meeting, which discusses what went wrong in this release and how the process can be improved next time.
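As a rough sketch of the burn-down comparison just described (all numbers hypothetical): the ideal line drops linearly from the total estimated hours to zero over the sprint, and the team is on track whenever the actual remaining work is at or below it.

public class BurnDownSketch {
    public static void main(String[] args) {
        double totalHours = 100.0;   // sprint estimate (hypothetical)
        int sprintDays = 10;
        // Hypothetical actual remaining hours recorded at the start of each day.
        double[] actualRemaining = {100, 92, 85, 75, 68, 55, 44, 30, 18, 5};

        for (int day = 0; day < sprintDays; day++) {
            // Ideal remaining work after 'day' days of a linear burn-down.
            double ideal = totalHours * (1.0 - (double) day / sprintDays);
            String status = actualRemaining[day] <= ideal ? "on track" : "behind";
            System.out.printf("Day %2d: ideal %5.1f h, actual %5.1f h -> %s%n",
                    day, ideal, actualRemaining[day], status);
        }
    }
}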
How to write a test case/test plan
1.0 INTRODUCTION
2.0 OBJECTIVES AND TASKS
2.1 Objectives
2.2 Tasks
3.0 SCOPE
4.0 Testing Strategy
4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing
5.0 Hardware Requirements
6.0 Environment Requirements
6.1 Main Frame
6.2 Workstation
7.0 Test Schedule
8.0 Control Procedures
9.0 Features to Be Tested
10.0 Features Not to Be Tested
11.0 Resources/Roles & Responsibilities
12.0 Schedules
13.0 Significantly Impacted Departments (SIDs)
14.0 Dependencies
15.0 Risks/Assumptions
16.0 Tools
17.0 Approvals
Test case:
Test Case ID: A unique ID for each test case. Follow some convention to indicate the type of test, e.g. 'TC_UI_1' indicating 'user interface test case #1'.
Test Priority (Low/Medium/High): This is useful during test execution. Test priority for business rules and functional test cases can be medium or higher, whereas minor user interface cases can be low priority. Test priority should be set by the reviewer.
Module Name: Mention the name of the main module or sub-module.
Test Designed By: Name of the tester.
Test Designed Date: Date when the test was written.
Test Executed By: Name of the tester who executed this test. To be filled in after test execution.
Test Execution Date: Date when the test was executed.
Test Title/Name: Test case title, e.g. verify the login page with a valid username and password.
Test Summary/Description: Describe the test objective in brief.
Pre-condition: Any prerequisite that must be fulfilled before execution of this test case. List all pre-conditions needed to execute it successfully.
Dependencies: Mention any dependencies on other test cases or test requirements.
Test Steps: List all test execution steps in detail, in the order in which they should be executed. Provide as much detail as you can. Tip: to manage test cases efficiently with fewer fields, use this field to describe the test conditions, test data, and user roles for running the test.
Test Data: The test data used as input for this test case. You can provide different data sets with the exact values to be used as input.
Expected Result: What the system output should be after test execution. Describe the expected result in detail, including any message/error that should be displayed on screen.
Post-condition: What the state of the system should be after executing this test case.
Actual Result: The actual test result, to be filled in after test execution. Describe the system behavior after test execution.
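As a sketch, a filled-in test case following this template might look like the following (the application, IDs, and data values are hypothetical):

Test Case ID: TC_UI_1
Test Priority: High
Module Name: Login
Test Title: Verify the login page with a valid username and password
Test Summary/Description: Verify that a registered user can log in with valid credentials.
Pre-condition: The user account exists and the login page is reachable.
Dependencies: None
Test Steps: 1. Open the login page. 2. Enter a valid username and password. 3. Click Login.
Test Data: username = testuser1, password = Pass@123 (hypothetical values)
Expected Result: The user is logged in and redirected to the home page.
Post-condition: A session is created for the logged-in user.
Actual Result: To be filled in after execution.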
How to log a bug, and what information should be included
Reporter: Your name and email address.
Product: The product in which you found this bug.
Version: The product version, if any.
Component: The major sub-modules of the product.
Platform: Mention the hardware platform on which you found this bug, e.g. 'PC', 'Mac', 'HP', 'Sun', etc.
Operating system: Mention all operating systems on which you found the bug, such as Windows, Linux, Unix, SunOS, or Mac OS. Also mention the specific OS versions if applicable, like Windows NT, Windows 2000, Windows XP, etc.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning "fix the bug with the highest priority" and P5 meaning "fix when time permits".
Severity:
This describes the impact of the bug.
Types of Severity:
- Blocker: No further testing work can be done.
- Critical: Application crash, loss of data.
- Major: Major loss of function.
- Minor: Minor loss of function.
- Trivial: Some UI enhancements.
- Enhancement: Request for a new feature or some enhancement to an existing one.
Status:
When you log the bug in a bug-tracking system, the bug status is 'New' by default.
Later the bug goes through various stages like Fixed, Verified, Reopened, Won't Fix, etc.
Assign To:
If you know which developer is responsible for the module in which the bug occurred, you can specify that developer's email address. Otherwise keep it blank; the bug will be assigned to the module owner, or the manager will assign it to a developer. Possibly add the manager's email address to the CC list.
URL:
The page URL on which the bug occurred.
Summary:
A brief summary of the bug, mostly in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.
Description:
A detailed description of the bug. Use the following fields within the description: reproduce steps, expected result, and actual result.
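As an illustration, a logged bug using these fields might look like this (the reporter, product, version, and details are all hypothetical):

Reporter: Jane Tester, jane.tester@example.com
Product: ExampleShop
Version: 2.3.1
Component: Login
Platform: PC
Operating system: Windows 10
Priority: P2
Severity: Major
Status: New
URL: https://exampleshop.example/login
Summary: Login button does nothing in Firefox after entering valid credentials
Description:
Reproduce steps: 1. Open the login page in Firefox. 2. Enter a valid username and password. 3. Click Login.
Expected result: The user is logged in and redirected to the home page.
Actual result: Nothing happens and no error message is shown.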
What should you do if the developer says it is not a bug?
Show them screenshots, share your screen, call them over or sit with them and reproduce it in the dev environment, and show them the logs.
Any experience with an offshore team?
Experience working with a team in India: coordinating the team, getting the work done, and presenting the team's status to the project manager.
If it is close to the deadline and you have to verify all the browsers, what kind of tests should you perform? (JUnit and TestNG)
A few simple steps using TestNG :)
Step 1: Create your script using TestNG annotations. Define parameters (using @Parameters) to take an input value, i.e., which browser should be used for running the test.
Step 2: Create a TestNG XML file for running your script.
Step 3: Configure the TestNG XML to pass the parameters, i.e., to tell it which browser should be used for running the test.
Step 4: Run the TestNG XML, which passes the appropriate browser name to the script so that the test case is executed in the specified browser.
Enhance your TestNG XML to run the test case on different browsers simultaneously, speeding up the execution process :)
Use the parallel-execution feature provided by TestNG: set the "parallel" attribute to "tests" so that all the browser tests can be executed simultaneously, as sketched below.
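A minimal sketch of these steps, assuming Selenium WebDriver and TestNG are on the classpath; the class name, test URL, and suite layout are illustrative, not from the original notes.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class CrossBrowserTest {

    private WebDriver driver;

    // Step 1: the "browser" parameter is injected from the TestNG XML below.
    @Test
    @Parameters("browser")
    public void verifyHomePageLoads(String browser) {
        if (browser.equalsIgnoreCase("chrome")) {
            driver = new ChromeDriver();
        } else if (browser.equalsIgnoreCase("firefox")) {
            driver = new FirefoxDriver();
        } else {
            throw new IllegalArgumentException("Unsupported browser: " + browser);
        }
        driver.get("https://example.com"); // illustrative URL
        // ... assertions against the page would go here ...
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}

Steps 2-4: a matching testng.xml. Setting parallel="tests" makes the <test> blocks run simultaneously; a third block for another browser would follow the same pattern.

<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="CrossBrowserSuite" parallel="tests">
  <test name="ChromeRun">
    <parameter name="browser" value="chrome"/>
    <classes>
      <class name="CrossBrowserTest"/>
    </classes>
  </test>
  <test name="FirefoxRun">
    <parameter name="browser" value="firefox"/>
    <classes>
      <class name="CrossBrowserTest"/>
    </classes>
  </test>
</suite>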