Manual Testing and Scenario Based Interview Questions for the Interview Selection Process
Verification: It is the process of checking documents, design, and code without executing the software, to confirm that we are building the product right.
Validation: It is the process of executing the software and checking whether it meets the customer's requirements, i.e., that we have built the right product.
Q3: What is the difference between quality assurance and quality control?
Quality Assurance: It is process-oriented; it focuses on preventing defects by defining and improving the processes used to build the product.
Eg:- verification
Quality Control: It is product-oriented; it focuses on identifying defects in the actual product that has been built.
Eg:- validation
The Software Development Life Cycle refers to all the activities that are performed
during software development, including requirement analysis, design,
implementation, testing, deployment, and maintenance phases.
The software testing life cycle refers to all the activities performed during the testing of a software product. The phases include requirement analysis, test planning, test case development, test environment setup, test execution, and test closure.
Dynamic testing: It involves executing the code and validating the actual output against the expected outcome.
Static testing: It involves reviewing the documents to identify defects in the early stages of the SDLC.
Positive: It checks that the application works as expected with valid input data.
Negative: It checks that the application behaves gracefully (for example, shows proper error messages) with invalid input or unexpected user actions.
It is a combination of both black-box and white-box testing. The tester who works on this type of testing needs access to the design documents, which helps in creating better test cases.
It is a document which contains the plan for all the testing activities.
Q14: What is a test scenario?
It gives the idea of what we have to test; the testable part of an application is called a test scenario (TS).
It is the input data used to test the software program. It is divided into 2 categories:
1. Positive (+ve) test data, which is generally given to the system to generate the expected result.
2. Negative (-ve) test data, which is given to the system to verify that it handles invalid or unexpected input gracefully.
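As an illustration, here is a minimal pytest sketch of the two categories; the is_valid_username rule (4-12 alphanumeric characters) is a hypothetical example, not taken from this document:

```python
import re
import pytest

def is_valid_username(name: str) -> bool:
    """Hypothetical rule under test: 4-12 alphanumeric characters."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{4,12}", name))

# Positive test data: expected to be accepted.
@pytest.mark.parametrize("name", ["user1", "Tester99", "abcd"])
def test_valid_usernames(name):
    assert is_valid_username(name)

# Negative test data: expected to be rejected (empty, too short, space, symbol, too long).
@pytest.mark.parametrize("name", ["", "ab", "user name", "user@123", "x" * 13])
def test_invalid_usernames(name):
    assert not is_valid_username(name)
```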
Defect Life Cycle or Bug Life Cycle is the specific set of states that a bug goes through from its discovery to its fix and closure.
Bug Life Cycle phases/statuses: The number of states that a defect goes through varies from project to project; the list below covers all possible states.
• New: When a new defect is logged and posted for the first time. It is
assigned a status as NEW.
• Assigned: Once the bug is posted by the tester, the lead of the tester
approves the bug and assigns the bug to the developer team.
• Open: The developer starts analyzing and works on the defect fix.
• Fixed: When a developer makes a necessary code change and verifies the
change, he or she can make the bug status “Fixed.”
• Pending retest: After fixing the defect, the developer hands the fixed code back to the tester for retesting. While the retesting is still pending on the tester’s end, the status assigned is “Pending retest.”
• Retest: Tester does the retesting of the code, to check whether the defect is
fixed by the developer or not and changes the status to “Re-test.”
• Verified: The tester re-tests the bug after it got fixed by the developer. If
there is no bug detected in the software, then the bug is fixed and the
status assigned is “verified.”
• Reopen: If the bug persists even after the developer has fixed the bug, the
tester changes the status to “reopened”. Once again the bug goes through
the life cycle.
• Closed: If the bug no longer exists then the tester assigns the status
“Closed.”
• Duplicate: If the defect is reported twice or corresponds to the same underlying bug as an existing report, the status is changed to “Duplicate.”
• Rejected: If the developer feels the defect is not a genuine defect, he or she changes its status to “Rejected.”
• Deferred: If the present bug is not of a prime priority and if it is expected to
get fixed in the next release, then the status “Deferred” is assigned to such
bugs
• Not a bug: If the reported issue does not affect the functionality of the application, then the status assigned is “Not a bug.”
• New – A bug or defect when detected is in a new state
• Assigned – The newly detected bug when assigned to the corresponding
developer is in the Assigned state
• Open – When the developer works on the bug, the bug lies in the Open
state
• Rejected/Not a bug – A bug lies in rejected state in case the developer
feels the bug is not genuine
• Deferred – A deferred bug is one whose fix is deferred for some time (to the next releases) based on the urgency and criticality of the bug
• Fixed – When a bug is resolved by the developer it is marked as fixed
• Test – When fixed the bug is assigned to the tester and during this time the
bug is marked as in Test
• Reopened – If the tester is not satisfied with the issue resolution the bug is
moved to the Reopened state
• Verified – After the Test phase if the tester feels the bug is resolved, it is
marked as verified
• Closed – After the bug is verified, it is moved to Closed status.
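As a rough illustration only, the states above can be modelled as a small state machine; the state names follow the list above, while the transition map is an assumption about a typical workflow rather than a fixed standard:

```python
from enum import Enum, auto

class BugStatus(Enum):
    NEW = auto(); ASSIGNED = auto(); OPEN = auto(); FIXED = auto()
    PENDING_RETEST = auto(); RETEST = auto(); VERIFIED = auto()
    REOPENED = auto(); CLOSED = auto(); REJECTED = auto()
    DEFERRED = auto(); DUPLICATE = auto()

# Illustrative transition map based on the life cycle described above (assumed, not universal).
ALLOWED_TRANSITIONS = {
    BugStatus.NEW: {BugStatus.ASSIGNED, BugStatus.REJECTED, BugStatus.DEFERRED, BugStatus.DUPLICATE},
    BugStatus.ASSIGNED: {BugStatus.OPEN},
    BugStatus.OPEN: {BugStatus.FIXED, BugStatus.REJECTED, BugStatus.DEFERRED},
    BugStatus.FIXED: {BugStatus.PENDING_RETEST},
    BugStatus.PENDING_RETEST: {BugStatus.RETEST},
    BugStatus.RETEST: {BugStatus.VERIFIED, BugStatus.REOPENED},
    BugStatus.VERIFIED: {BugStatus.CLOSED},
    BugStatus.REOPENED: {BugStatus.ASSIGNED},
    BugStatus.CLOSED: set(),
    BugStatus.REJECTED: set(),
    BugStatus.DEFERRED: {BugStatus.ASSIGNED},
    BugStatus.DUPLICATE: set(),
}

def move(current: BugStatus, target: BugStatus) -> BugStatus:
    """Change a bug's status, rejecting transitions the workflow does not allow."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target

# Example: a typical happy path from discovery to closure.
status = BugStatus.NEW
for nxt in (BugStatus.ASSIGNED, BugStatus.OPEN, BugStatus.FIXED,
            BugStatus.PENDING_RETEST, BugStatus.RETEST,
            BugStatus.VERIFIED, BugStatus.CLOSED):
    status = move(status, nxt)
print(status.name)  # CLOSED
```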
Smoke Testing: It is performed on a new build to verify that the critical functionalities of the application work, so as to decide whether the build is stable enough for further testing.
Sanity Testing: It is a quick, narrow check performed after a minor change or bug fix to verify that the affected functionality works as expected.
Entry:
It describes when to start testing, i.e., what we have to test should be stable enough to test.
Ex: if we want to test the home page, the SRS/BRS/FRS documents and the test cases must be ready and stable enough to test.
Exit:
It describes when to stop testing, i.e., once the agreed conditions (such as all planned test cases executed and no open critical defects) are fulfilled, the software can be released; this is known as the exit criteria.
A blocker is a bug of high priority and high severity. It prevents or blocks testing of
some other major portion of the application as well.
Testing whether a changed component has introduced any error into the unchanged components is called regression testing. It is performed on the QA or production environment, depending on the project.
To test whether the reported bug has been resolved by the developer team or not,
is known as retesting.
Priority: It means how urgently the defect needs to be fixed from a business point of view.
Severity: It means how bad the defect is and what impact it can cause in our application.
A defect priority is the urgency of fixing the defect. Normally the defect priority is set on a scale of P0 to P3, with a P0 defect having the most urgency to fix.
Defect severity is the severity of the defect impacting the functionality. Based on
the organization, we can have different levels of defect severity ranging from
minor to critical or show stopper.
Alpha Testing: It is performed by internal employees/testers at the developer's site, in a controlled environment, before the product is released to real users.
Beta Testing: It is performed by a limited number of real users/customers at their own site, on a pre-release version, to collect feedback before the final release.
Boundary value analysis is a software testing technique for designing test cases wherein the boundary values of the classes from equivalence class partitioning are taken as input to the test cases. It is also considered a part of stress and negative testing.
E.g., if the test data lies in the range 0-100, boundary value analysis will include the test data 0, 1, 99, 100.
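A small sketch of the technique, assuming a hypothetical accept_marks function whose valid range is 0-100; the boundary values are generated rather than hand-picked, and the expected result comes from the 0-100 specification:

```python
import pytest

def accept_marks(value: int) -> bool:
    """Hypothetical function under test: accepts marks in the range 0-100."""
    return 0 <= value <= 100

def boundary_values(lower: int, upper: int):
    """Boundary value analysis: values at, just inside, and just outside each boundary."""
    return [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

@pytest.mark.parametrize("value", boundary_values(0, 100))
def test_marks_boundaries(value):
    # Expected behaviour is defined by the 0-100 specification.
    assert accept_marks(value) == (0 <= value <= 100)
```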
In the case of top-down integration testing, the lower-level modules are often not yet developed when testing/integration begins with the top-level modules. In those cases, stubs (dummy modules) are used; they simulate the working of the missing modules by providing hard-coded or expected output based on the input values.
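For example, a minimal sketch (with hypothetical module names) of a stub standing in for an undeveloped lower-level payment module while a top-level order module is tested:

```python
class PaymentServiceStub:
    """Stub for a lower-level module that is not developed yet.
    It returns a hard-coded response instead of doing real processing."""
    def charge(self, card_number: str, amount: float) -> dict:
        return {"status": "SUCCESS", "transaction_id": "TXN-0001"}

class OrderModule:
    """Top-level module under test; it depends on the payment service."""
    def __init__(self, payment_service):
        self.payment_service = payment_service

    def place_order(self, card_number: str, amount: float) -> str:
        result = self.payment_service.charge(card_number, amount)
        return "ORDER CONFIRMED" if result["status"] == "SUCCESS" else "ORDER FAILED"

# The top-level module is exercised with the stub in place of the real payment module.
order = OrderModule(PaymentServiceStub())
assert order.place_order("4111111111111111", 49.99) == "ORDER CONFIRMED"
```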
In the case of bottom-up integration testing, drivers are used to simulate the working of the top-level modules in order to test the related modules lower in the hierarchy.
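Conversely, a sketch of a driver (again with hypothetical names) that calls an already-developed lower-level module when the real top-level module does not exist yet:

```python
def calculate_discount(order_total: float) -> float:
    """Lower-level module already developed and under test."""
    return order_total * 0.10 if order_total >= 100 else 0.0

def driver_for_discount_module():
    """Driver: simulates the missing top-level module by calling the lower-level
    module with representative inputs and checking the results."""
    test_inputs = [(50.0, 0.0), (100.0, 10.0), (250.0, 25.0)]
    for order_total, expected in test_inputs:
        actual = calculate_discount(order_total)
        print(f"input={order_total} expected={expected} actual={actual} "
              f"{'PASS' if actual == expected else 'FAIL'}")

if __name__ == "__main__":
    driver_for_discount_module()
```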
The Waterfall Model and Agile Methodology are two different approaches to
software development:
Waterfall Model: A sequential approach in which each phase (requirements, design, implementation, testing, deployment, maintenance) is completed before the next begins; requirements are fixed up front and there is a strong emphasis on documentation.
Agile Methodology: An iterative and incremental approach in which working software is delivered in short cycles (sprints); it welcomes changing requirements and relies on continuous collaboration with the customer, with less emphasis on heavy documentation.
The Waterfall Model is suitable for projects with stable requirements and a focus
on predictability. Agile Methodology is ideal for projects requiring flexibility,
collaboration, and the ability to adapt to evolving requirements.
Q45: What is a test plan, and what are the steps to create one?
A test plan is a document that outlines the strategy, scope, resources, and
schedule for testing a product. It is an important part of the software
development process, as it helps ensure that the product is of high quality and
meets the requirements and specifications.
1. Identify the goals of the testing. What do you want to achieve with the testing? What are the objectives of the test plan?
2. Define the scope of the testing. What features and functions of the product will be tested? What environments and platforms will the testing be conducted on?
3. Determine the resources needed for testing. What personnel, equipment, and tools will be required?
4. Develop a testing schedule. When will the testing take place? How long will it take?
5. Determine the test approach. How will the testing be conducted? What types of testing will be used (e.g., unit testing, integration testing, system testing, acceptance testing)?
6. Create a test matrix. This is a table that maps the test cases to the requirements or functions being tested.
7. Write the test cases. A test case is a set of steps and expected results that a tester follows to verify that a feature or function of the product is working correctly.
8. Review and revise the test plan. Make sure that the test plan is complete, accurate, and feasible.
9. Execute the testing. Follow the test plan and test cases to test the product.
10. Document the results of the testing. This includes any issues or defects that were found, and how they were addressed.
By following these steps, you can create a comprehensive and effective test plan that will help ensure the quality and reliability of your product.
Integration Testing: Its primary goal is to find flaws in how components or sections communicate with one another.
System Testing: Its primary goal is to confirm that the system satisfies the requirements and is suitable for use in the intended environment.
It is also known as static testing, where we ensure whether “we are developing the right product or not.” It also checks that the application being developed fulfils all the requirements given by the client.
Authentication:
• In this, the user or client and server are verified.
• It requires the login details of the user, such as username and password.
• Data is provided through token IDs.
Authorization:
• In this, it is verified whether the user is allowed access through the defined policies and rules.
• It requires the user’s privilege or security level.
• Data is provided through access tokens.
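A minimal sketch of the difference in code, using a hypothetical in-memory user store: authentication checks who the user is, while authorization checks what that user is allowed to do:

```python
# Hypothetical in-memory user store for illustration only.
USERS = {
    "alice": {"password": "s3cret", "role": "admin"},
    "bob": {"password": "pa55word", "role": "viewer"},
}

def authenticate(username: str, password: str) -> bool:
    """Authentication: verify the identity via login credentials."""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username: str, required_role: str) -> bool:
    """Authorization: verify the (already authenticated) user's privilege level."""
    user = USERS.get(username)
    return user is not None and user["role"] == required_role

assert authenticate("alice", "s3cret")       # identity is valid
assert authorize("alice", "admin")           # and she may perform admin actions
assert not authorize("bob", "admin")         # bob may log in but is not an admin
```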
The test case review process ensures the reliability and effectiveness of test
cases, improving the overall quality of the testing effort.
Regression Testing:
• Regression testing can be automated; doing it manually could be expensive and time-consuming.
• Regression testing is known as generic testing.
• Regression testing is done for passed test cases.
• Regression testing checks for unexpected side-effects.
Retesting:
• You cannot automate the test cases for retesting.
• Re-testing is a planned testing.
• Retesting is done only for failed test cases.
• Re-testing makes sure that the original fault has been corrected.
Testing a complete module exhaustively is time-consuming; that is why we use equivalence partitioning, as it saves time.
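For example, a sketch with a hypothetical age field (valid range 18-60): one representative value is tested per equivalence class instead of every possible input:

```python
import pytest

def is_eligible_age(age: int) -> bool:
    """Hypothetical rule under test: ages 18-60 are accepted."""
    return 18 <= age <= 60

# One representative value per equivalence class instead of every possible age.
EQUIVALENCE_CLASSES = [
    (10, False),   # class: below the valid range
    (35, True),    # class: inside the valid range
    (75, False),   # class: above the valid range
]

@pytest.mark.parametrize("age, expected", EQUIVALENCE_CLASSES)
def test_age_equivalence_classes(age, expected):
    assert is_eligible_age(age) == expected
```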
Q53: What are the important Black Box Test Design Techniques?
• Equivalence partitioning
• Boundary value analysis
• Decision Table Testing
• State Transition Testing
• Use Case Testing
The purpose of test design techniques is to identify test conditions and test scenarios through which effective and efficient test cases can be written. Using test design techniques is a better approach than picking test cases out of thin air. Test design techniques help in defining tests that will provide insight into the quality of the test object.
Use case testing is a black-box test design technique in which test cases are designed to execute user scenarios or business scenarios.
Test design techniques are categorized into two types. They are: static test design techniques and dynamic test design techniques.
The dynamic techniques are subdivided into three more categories. They are: specification-based (black-box) techniques, structure-based (white-box) techniques, and experience-based techniques.
a. Equivalence partitioning.
b. Boundary value analysis.
c. Decision tables.
d. State transition testing.
e. Use case testing.
Walk-through: In a walk-through, the author of the document leads the participants through it and gathers their feedback. It is informal in nature, and its main purpose is to establish a common understanding of the document and to collect comments.
The aim of this review technique is to achieve consensus about the technical aspects of the document. Technical reviews are informal in nature, and it is the experts who identify defects in the document. The experts who are part of the review are architects, chief designers, key users, etc.; peers may also take part. In a technical review, the value of the technical concepts and alternatives is assessed, and it is ensured that the right technical concepts are used.
Static test techniques provide a great way to improve the quality and productivity of software development. They include reviews of work products without executing the code. The primary objective of static testing is to improve the quality of software products by assisting engineers to recognize and fix their own defects early in the software development process.
• Since static testing can start early in the life cycle, early feedback on quality issues can be established. As defects are detected at an early stage, the rework cost is most often relatively low. Development productivity is likely to increase because of the reduced rework effort.
• Types of the defects that are easier to find during the static testing are:
deviation from standards, missing requirements, design defects, non-
maintainable code and inconsistent interface specifications.
• Static tests contribute to the increased awareness of quality issues.
This kick-off meeting is an optional step in a review procedure. The goal of this
step is to give a short introduction on the objectives of the review and the
documents to everyone in the meeting. The relationships between the document
under review and the other documents are also explained, especially if the
number of related documents is high. At customer sites, we have measured results of up to 70% more major defects found per page as a result of performing a
kick-off.
1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up
Informal reviews are applied many times during the early stages of the life cycle of a document. A two-person team can conduct an informal review. In later stages these reviews often involve more people and a meeting. The goal is to help the author and to improve the quality of the document. The most important thing to keep in mind about informal reviews is that they are not documented.
A black box test design technique in which test cases are designed to execute the
combinations of inputs and/or stimuli (causes) shown in a decision table.
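As an illustration, a small decision-table sketch for a hypothetical login rule (access is granted only when both the username and the password are correct):

```python
import pytest

def login(correct_username: bool, correct_password: bool) -> str:
    """Hypothetical rule implemented from the decision table below."""
    return "GRANT" if correct_username and correct_password else "DENY"

# Decision table: every combination of conditions with its expected action.
DECISION_TABLE = [
    # (correct_username, correct_password, expected_action)
    (True,  True,  "GRANT"),
    (True,  False, "DENY"),
    (False, True,  "DENY"),
    (False, False, "DENY"),
]

@pytest.mark.parametrize("user_ok, pass_ok, expected", DECISION_TABLE)
def test_login_decision_table(user_ok, pass_ok, expected):
    assert login(user_ok, pass_ok) == expected
```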
Experience-based tests utilize testers’ skills and intuition, along with their
experience with similar applications or technologies. These tests are effective at
finding defects but are not as appropriate as other techniques to achieve specific
test coverage levels or produce reusable test procedures.
• Error guessing.
• Checklist-based.
• Exploratory.
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
There are three main categories of defects: wrong (a requirement implemented incorrectly), missing (a requirement that was not implemented at all), and extra (functionality implemented that was not part of the requirements).
Q78: A defect which could have been removed during the initial stage is
removed in a later stage. How does this affect the cost?
If a defect is identified at the initial stage, it should be removed during that stage/phase itself rather than at some later stage. It is a fact that if a defect is carried to later phases, it becomes more costly to fix, and the cost keeps increasing as the phases move forward.
If a defect is identified and removed during the design phase, it is the most cost
effective but when removed during maintenance it becomes twenty times
costlier.
Q79: On what basis can you arrive at an estimation for your project?
Load Testing: It is the simplest form of performance testing, done by increasing the load step by step until the defined limit or goal is reached.
Stress Testing: We can also call it negative testing, as we test the system beyond its capacity to find the breaking point of the system.
Concurrent user load can be defined as multiple users hitting the same functionality or transaction at the same time.
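A minimal sketch of simulating concurrent users with a thread pool; the target URL and the number of users are placeholders, and a real load test would normally use a dedicated tool such as JMeter or LoadRunner and realistic requests rather than plain GETs:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8000/login"   # placeholder endpoint
CONCURRENT_USERS = 20                        # placeholder load level

def hit_endpoint(user_id: int) -> float:
    """One simulated user hitting the same transaction; returns the response time."""
    start = time.perf_counter()
    with urlopen(TARGET_URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    # All simulated users fire at roughly the same time via the thread pool.
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(hit_endpoint, range(CONCURRENT_USERS)))
    print(f"avg response time: {sum(timings) / len(timings):.3f}s, max: {max(timings):.3f}s")
```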
Sample test scenarios to give you an idea of the kind of security tests that are
available −
Ideally, a soak test should run as long as possible, though the duration will depend on the system under test. Below are some examples of when soak testing can be considered:
When a bank announces that it will be closing, its system is expected to handle a
large number of transactions during the closing days of the bank. This event is
rare and unexpected, but your system has to handle this unlikely situation
anyway.
Functional Testing:
• It tests whether the actual result works according to the expected result.
• It is carried out manually. Example: black-box testing method.
Performance Testing:
• It checks the response time and speed of the software under specific conditions.
• It is more feasible to test using automated tools. Example: LoadRunner.
Q88: How do you map STLC to SDLC? Specify what testing activities are held in
each phase of SDLC.
At the system testing level, testers prepare test cases under the guidance of the test lead.
Q90: What is the relation between test case and test data?
Some test data we (testers) prepare, some test data we collect from developers, and some test data we collect from customers and other sources.
We derive test cases from test scenarios, and test scenarios are derived from requirements.
For example, take yahoomail.com: the requirement says that the username can accept alphanumeric characters. So test cases must be written to check different combinations, such as testing with only alphabets, only numbers, and alphanumeric characters. The test data that you give for each test case is different for each combination. Like this, we can write any number of test cases, but optimization of these test cases is important.
Q95: What is the difference between test scenario and test case?
A test scenario gives the idea of what we have to test (the testable part of an application), whereas a test case is a detailed set of steps, test data, and expected results used to verify that functionality; one test scenario can be covered by several test cases.
Agile is an iterative and incremental approach to software development, providing rapid delivery along with adapting to changing needs at the same time.
• Because of this meeting, if one person is absent, another person from the same team can complete his work. So the project isn’t paused and dependency on a single person does not arise. This is the main advantage of this model.
• A sprint divides the project into modules and distributes these modules among the teams so that the teams work in parallel.
• When to use: when the project is big/medium and we have to deliver it as soon as possible, we use this model. Quality is maintained.
• Product Owner – The product owner owns the whole development of the
product, assigns tasks to the team, and acts as an interface between the
scrum team(development team) and the stakeholders.
• Scrum Master – The scrum master monitors that scrum rules get followed
in the team and conducts scrum meetings.
• Scrum Team – A scrum team participate in the scrum meetings and
perform the tasks assigned.
15. Account lockout: Enter incorrect credentials multiple times and verify that
the account gets locked after a certain number of failed attempts.
16. Social media login: Test the functionality of logging in with social media
accounts (e.g., Facebook, Google) and ensure that the integration works
correctly.
17. Accessibility: Verify that the login page is accessible to users with
disabilities by using screen readers or other assistive technologies.
18. Browser compatibility: Test the login page on different browsers (Chrome,
Firefox, Safari, etc.) and ensure consistent behavior and appearance.
19. Performance under load: Simulate a high load on the login page by
sending multiple login requests simultaneously and verify that it responds
efficiently.
20. Session timeout: Log in and wait for the session timeout period to expire.
Confirm that the user is automatically logged out and prompted to log in
again.
These test scenarios cover a range of scenarios to ensure that the login page
functions properly and handles different situations gracefully.
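For two of the scenarios above (account lockout and session timeout), a hedged Selenium sketch is shown below; the URL, element locators, lockout threshold, and timeout value are assumptions that would have to match the real application, and in practice explicit waits (WebDriverWait) would replace the fixed sleep:

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_URL = "https://example.com/login"   # placeholder URL
LOCKOUT_ATTEMPTS = 5                      # assumed lockout policy
SESSION_TIMEOUT_SECONDS = 60              # assumed timeout policy

def test_account_lockout():
    driver = webdriver.Chrome()
    try:
        driver.get(LOGIN_URL)
        # Submit wrong credentials repeatedly until the assumed lockout threshold.
        for _ in range(LOCKOUT_ATTEMPTS):
            driver.find_element(By.ID, "username").clear()
            driver.find_element(By.ID, "username").send_keys("valid_user")
            driver.find_element(By.ID, "password").clear()
            driver.find_element(By.ID, "password").send_keys("wrong_password")
            driver.find_element(By.ID, "login-button").click()
        assert "account locked" in driver.page_source.lower()
    finally:
        driver.quit()

def test_session_timeout():
    driver = webdriver.Chrome()
    try:
        driver.get(LOGIN_URL)
        driver.find_element(By.ID, "username").send_keys("valid_user")
        driver.find_element(By.ID, "password").send_keys("correct_password")
        driver.find_element(By.ID, "login-button").click()
        time.sleep(SESSION_TIMEOUT_SECONDS + 5)   # wait past the assumed timeout
        driver.refresh()
        # After the timeout the user should be redirected back to the login page.
        assert "login" in driver.current_url.lower()
    finally:
        driver.quit()
```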
Q102: Write 10 test cases for adding an item to the cart in an e-commerce
application.
1. Valid item: Select a valid item from the product catalog and verify that it is
successfully added to the cart.
2. Empty cart: Attempt to add an item to an empty cart and ensure that the
cart is updated with the added item.
3. Multiple items: Add multiple items to the cart and confirm that all the
selected items are accurately displayed.
4. Quantity selection: Select different quantities of an item (e.g., 1, 5, or 10) and
verify that the correct quantity is reflected in the cart.
5. Product details: Verify that the added item in the cart displays the correct
product information, such as name, price, and image.
6. Out of stock: Try adding an item that is out of stock and validate that an
appropriate message is displayed, and the cart remains unchanged.
7. Invalid item: Attempt to add an item that does not exist in the product
catalog and ensure that it is not added to the cart.
8. Cross-browser compatibility: Test the functionality of adding an item to the
cart on different browsers (Chrome, Firefox, Safari, etc.) and verify
consistent behavior.
9. Concurrent users: Simulate multiple users simultaneously adding items to
the cart and confirm that each user’s cart remains separate and
unaffected by others.
10. Add-ons or options: Test adding an item with additional options or add-
ons (e.g., size, color, customization) and verify that the selected options are
correctly reflected in the cart.
Q103: You are testing an e-commerce website that offers multiple payment
options, including credit cards, PayPal, and bank transfers. During your testing,
you encounter a scenario where a customer successfully places an order using
a credit card, but the order status does not update in the system, and the
payment is not recorded. How would you go about investigating this issue? Can
you outline the steps you would follow to isolate the problem?
• Reproduce the issue: Start by replicating the scenario where the problem
occurred. Follow the same steps as the customer, including selecting the
product, adding it to the cart, proceeding to the checkout process, and
making the payment using a credit card. Note down any specific details or
steps that could be relevant to the issue.
• Check for error messages: After making the payment, carefully review the
website for any error messages or notifications related to the order status
or payment process. Note down any error codes or messages displayed on
the screen.
• Review system logs: Examine the system logs to gather information about
the order and payment process. Look for any error logs, warnings, or
exceptions related to the specific order and payment. Pay attention to
timestamps, error codes, and any relevant log entries.
• Check payment gateway integration: Verify the integration with the credit card payment gateway. Ensure that the communication between the website and the payment gateway (requests, responses, and confirmation callbacks) is working and that the payment confirmation is received and recorded by the order system.
By following these steps, you can systematically investigate the issue and gather
relevant information to isolate the problem. Collaboration with different
stakeholders, including support teams and developers, is crucial to ensure a
comprehensive investigation and resolution of the issue.
Q104. When our release is in the next 2 hours & as a Tester, we found a blocker
bug; then what will you do?
First, raise the bug; we need an ID to track it. Then discuss with the Lead/Manager and call the development team to see if they can replicate and fix it, or whether there is a workaround. If we have a workaround, it is always suggested to go with it for now, as the fix may create new issues. If we don't have one, then discuss it with the Product Owner and analyze the business impact; based on that, either we fix it, or we notify the customer that it's a known issue for this patch and will be fixed later, and then deploy.
Q105. You have 30 test cases to execute and limited time to deliver, and you cannot take another resource for help. How will you execute these test cases to deliver on time? What will be your strategy?
• Prioritize the test cases depending on risk, cost, time, and other factors
• Ask what would happen if you don’t execute 50% of those 30 test cases
• Understand the 30 test cases and create a mental model to ascertain the
best way to test rather than running through step by step for 30 test cases.
• Look for ways to automate at lower levels fully or partly for the 30 test
cases.
• Ask yourself why we end up in this situation.
• Everybody has 24 hours, if it cannot be done in 24 hours, there is no way
anybody can bring in the 25th hour, so be candid in your feedback.
• Skip executing 50% of the 30 test cases; for the remaining 50%, monitor in production what those test cases would actually cover and how the system behaves. Leverage unit test cases that can cover those 30 test cases.
• Use feature toggle, where you release the product, test in production and
release again with just a toggle once you are sure it works as expected.
• Use Blue–Green deployments.
• Highlight the risks to the stakeholders.
Q106. You are testing a login page, and when you enter valid credentials, the
page remains stuck with a loading spinner, and you cannot proceed further.
What could be the possible reasons for this issue?
Q107. You are testing an e-commerce website, and after placing an order, the
confirmation email is not sent to the user. How would you approach this
problem?
• First, verify that the user’s email address is correctly recorded during the
order placement.
• Check the email server logs to see if any errors occurred during the email-
sending process.
• Manually trigger the order confirmation email to see if there’s a delay in the
email delivery.
• Validate the SMTP settings and credentials to ensure the email server is
configured correctly.
• Confirm if the email is not landing in the spam folder.
• Investigate if any recent code changes might have impacted the email
functionality.
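For the "manually trigger the email" and "validate the SMTP settings" steps above, a small sketch using Python's standard smtplib; the host, port, credentials, and addresses are placeholders for the environment's actual configuration:

```python
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"     # placeholder settings taken from the app's config
SMTP_PORT = 587
SMTP_USER = "orders@example.com"
SMTP_PASSWORD = "app-password"

def send_test_confirmation(recipient: str) -> None:
    """Manually trigger an order-confirmation style email to verify the SMTP settings."""
    msg = EmailMessage()
    msg["Subject"] = "Test: order confirmation"
    msg["From"] = SMTP_USER
    msg["To"] = recipient
    msg.set_content("This is a test of the order confirmation email path.")

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=15) as server:
        server.starttls()                       # fails fast if TLS is misconfigured
        server.login(SMTP_USER, SMTP_PASSWORD)  # fails fast if credentials are wrong
        server.send_message(msg)

# Example: send_test_confirmation("customer@example.com")
```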
Q108. You are testing a mobile app, and when you rotate the device from
portrait to landscape mode, the app crashes. How do you troubleshoot this
issue?
To troubleshoot the app crash on rotation issue, we would perform the following
steps:
Q109. You are testing a financial application, and when you enter negative
values in a transaction, the application allows it, leading to incorrect
calculations. How do you approach this issue?
Q110. You are testing a web application, and when you navigate through
different pages, the URL remains the same. How do you handle this situation?
• Ensure that the expected functionality is not dependent solely on the URL
changes, as modern web applications often use AJAX and single-page
application (SPA) techniques.
• Confirm that essential data is not solely stored in the URL parameters.
• Use other elements like page content, breadcrumb navigation, or page
titles to verify if the correct page is loaded.
• Check the usage of history.pushState() or similar methods to modify the URL dynamically in SPA applications.
• If the issue persists and URL changes are indeed essential for the
application, communicate the problem to the development team to
investigate further.
Q111. You are testing a messaging app, and users report that sometimes messages are delivered to the wrong recipients. How do you approach this issue?
Q112. You are testing a video streaming service, and some users complain about
buffering issues and playback interruptions. How do you troubleshoot this
problem?
Q113. You are testing a healthcare app that stores sensitive patient data, and
users report concerns about data privacy. How would you ensure the app
complies with privacy regulations?
• Verify that the app collects only the necessary patient data as per the
defined requirements.
• Check if the app implements proper encryption techniques for storing and
transmitting sensitive data.
• Test the app’s authentication and authorization mechanisms to prevent
unauthorized access to patient records.
• Validate if the app enforces proper user roles and access controls to limit
data access to authorized personnel.
• Ensure that the app provides users with the option to consent to data
collection and understand the privacy policy.
• Collaborate with the development team to address any identified security
vulnerabilities promptly.
• Stay updated with relevant privacy regulations and guidelines to ensure
continuous compliance.
Q114. You are testing a gaming application, and users report that the game
crashes randomly during gameplay. How do you troubleshoot this issue?
• Replicate the issue by playing the game under various conditions and
scenarios.
• Check if the problem occurs on specific levels, actions, or devices.
• Monitor the system resources (CPU, memory, GPU) during gameplay to
identify resource-intensive areas.
• Analyze the game logs or error reports generated upon each crash.
• Validate if the crashes are consistent with certain actions or patterns.
• Collaborate with the game developers to identify any bugs or memory
leaks that could be causing the crashes.
• Test the game on different devices and operating systems to verify if the
issue is platform-specific.
Q115. You are testing a social media app, and users report that they sometimes
receive notifications for activities they did not perform. How do you handle this
situation?