Unit 5 Software Testing Notes
TEST PLAN
A Test Plan is a detailed document that describes the test strategy, objectives, schedule,
estimation, deliverables, and resources required to perform testing for a software product.
The test plan helps us determine the effort needed to validate the quality of the application under test. It serves as a blueprint for conducting software testing activities as a defined process, minutely monitored and controlled by the test manager.
“A Test Plan is a document describing the scope, approach, resources, and schedule of intended test activities.”
[Figure: features of the Guru99 demo banking website.]
Based on these features, you can define the Test Objectives of the project Guru99 as follows:
• Check whether the Guru99 website functionality (Account, Deposit…) works as expected, without errors or bugs, in a real business environment
• Check that the external interfaces of the website, such as the UI, work as expected and meet customer needs
• Verify the usability of the website: are these functionalities convenient for the user or not?
Define Test Criteria
Test Criteria is a standard or rule on which a test procedure or test judgment can be based.
There are two types of test criteria, as follows.
Suspension Criteria
Specify the critical suspension criteria for a test. If the suspension criteria are met during testing, the active test cycle is suspended until the criteria are resolved.
Test Plan Example: If your team members report that 40% of test cases have failed, you should suspend testing until the development team fixes all the failed cases.
Exit Criteria
It specifies the criteria that denote a successful completion of a test phase. The exit criteria
are the targeted results of the test and are necessary before proceeding to the next phase of
development. Example: 95% of all critical test cases must pass.
Some methods of defining exit criteria are by specifying a targeted run rate and pass rate (see the sketch after this list).
• Run rate is the ratio between the number of test cases executed and the total test cases in the test specification. For example, the test specification has 120 TCs in total, but the tester only executed 100 TCs, so the run rate is 100/120 = 0.83 (83%).
• Pass rate is the ratio between the number of test cases passed and the test cases executed. For example, of the 100 TCs executed above, 80 TCs passed, so the pass rate is 80/100 = 0.8 (80%).
• The run rate is required to be 100% unless a clear reason is given.
• The pass rate depends on the project scope, but achieving a high pass rate is always the goal.
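As a quick illustration of these two metrics, here is a minimal Python sketch using the example figures above.

    # A minimal sketch of the run-rate and pass-rate calculations described
    # above. The numbers mirror the example: 120 specified test cases,
    # 100 executed, 80 passed.

    def run_rate(executed: int, total: int) -> float:
        """Run rate = executed test cases / total test cases in the spec."""
        return executed / total

    def pass_rate(passed: int, executed: int) -> float:
        """Pass rate = passed test cases / executed test cases."""
        return passed / executed

    total_tcs, executed_tcs, passed_tcs = 120, 100, 80

    print(f"Run rate:  {run_rate(executed_tcs, total_tcs):.0%}")    # 83%
    print(f"Pass rate: {pass_rate(passed_tcs, executed_tcs):.0%}")  # 80%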
TEST MANAGEMENT
Test Management is the process of managing testing activities in order to ensure high-quality, high-end testing of the software application. It consists of organizing and controlling the testing process and ensuring its traceability and visibility, in order to deliver a high-quality software application. It ensures that the software testing process runs as expected.
Test Management Phases:
Execution
Test Monitoring and Control
What will you do when your project runs out of resources or exceeds its time schedule? You need to monitor and control test activities to bring the project back on schedule.
Test Monitoring and Control is the process of overseeing all the metrics necessary to ensure that the project is running well, on schedule, and within budget.
Monitoring
Monitoring is the process of collecting, recording, and reporting information about project activity that the project manager and stakeholders need to know.
To monitor, the Test Manager does the following activities (a small sketch follows this list):
• Define the project goal or project performance standard
• Observe the project performance and compare the actual performance against the planned expectations
• Record and report any detected problem that happens in the project
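As a minimal sketch of this monitoring loop, the following Python snippet compares actual progress against the plan and flags a deviation; the figures and the 10% threshold are illustrative assumptions, not fixed rules.

    # Compare actual progress against the plan and report any deviation.
    planned_executed = 60   # test cases the plan says should be done by today
    actual_executed = 45    # test cases actually executed
    threshold = 0.10        # report a problem if we slip more than 10%

    deviation = (planned_executed - actual_executed) / planned_executed
    if deviation > threshold:
        print(f"Deviation of {deviation:.0%} exceeds {threshold:.0%}: "
              "record and report the problem, then take corrective action.")
    else:
        print("Project is on track.")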
Controlling
Project Controlling is the process of using data from the monitoring activity to bring actual performance in line with planned performance.
In this step, the Test Manager takes action to correct deviations from the plan. In some cases, the plan has to be adjusted to the project situation.
Issue Management
As mentioned at the beginning of these notes, all projects carry potential risks. When a risk happens, it becomes an issue.
In the life cycle of any project, there will always be unexpected problems and questions that crop up. For example:
• The company cuts down your project budget
• Your project team lacks the skills to complete the project
• The project schedule is too tight for your team to finish the project by the deadline
Risks to be avoided while testing:
• Missing the deadline
• Exceeding the project budget
• Losing the customer’s trust
When these issues arise, you have to be ready to deal with them, or they can potentially affect the project’s outcome. Handling such problems as they surface is what issue management is about.
Test Report & Evaluation
The project has been completed; it is now time to look back at what you have done.
The Test Evaluation Report describes the results of the testing in terms of test coverage and exit criteria. The data used in the test evaluation are the test results data and the test result summary.
Testing a web application covers much the same ground as testing any other application, such as testing functionality, configuration, or compatibility, but it also includes analysis of web-specific faults compared to general software faults. Web applications need to be tested on different browsers and platforms so that we can identify the areas that require special focus while testing.
Types of Web Testing:
Basically, there are four types of web-based testing, and all four are discussed below:
• Static Website Testing: A static website is a type of website in which the content shown is exactly the same as it is stored on the server. This type of website may have a great UI but does not have any dynamic feature that a user or visitor can use. In static testing, we generally focus on the UI, as it is the most important part of a static website. We check things like font size, color, and spacing; testing also includes checking the contact-us form and verifying the URLs or links used in the website (see the link-checking sketch after this list).
• Dynamic Website Testing: A dynamic website consists of both a frontend (the UI) and a backend (a database, etc.). This type of website is updated or changed regularly as per the users’ requirements. Here many functionalities are involved, such as what a button does when it is pressed and whether error messages are shown properly at the defined time. We check whether the backend is working properly, for example whether data entered in the GUI or frontend is updated in the database.
• E-Commerce Website Testing: An e-commerce website is very difficult to maintain, as it consists of many different pages and functionalities. In this testing, the tester has to check various things, such as whether the shopping cart works as per the requirements and whether user registration and login work properly. The most important checks are whether a user can successfully make a payment and whether the website is secure. There are many more things a tester needs to test beyond these.
• Mobile-Based Web Testing: In this testing, the developer or tester checks the website’s compatibility on different devices, especially on mobile devices, because many users open the website on their phones. Keeping that in mind, we must check that the site is responsive on all devices and platforms.
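As mentioned under static website testing, link verification can be automated. Here is a minimal Python sketch using the requests library; the URLs are hypothetical placeholders.

    # Fetch each URL and flag broken links. Requires: pip install requests
    import requests

    links_to_check = [
        "https://example.com/",            # placeholder URLs
        "https://example.com/contact-us",
    ]

    for url in links_to_check:
        try:
            response = requests.get(url, timeout=10)
            status = "OK" if response.status_code < 400 else f"BROKEN ({response.status_code})"
        except requests.RequestException as exc:
            status = f"BROKEN ({exc})"
        print(f"{url}: {status}")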
Points to be Considered While Testing a Website:
As a website consists of a frontend, a backend, and servers, things like HTML pages, internet protocols, firewalls, and other applications running on the servers should be considered while testing it. There are various considerations that need to be checked while testing a web application. Some of them are:
• Do all pages have valid internal and external links or URLs?
• Does the website work across the systems it is supposed to be compatible with?
• For the user interface: are the display sizes optimal and the best fit for the website?
• What type of security does the website need (if currently unsecured)?
• What are the requirements for gathering website analytics and for controlling graphics, URLs, etc.?
• Should a contact-us or customer-assistance feature be added to the page?
In web-based testing, various areas have to be tested to find potential errors and bugs. Steps for testing a web app are given below:
• App Functionality: In web-based testing, we have to check the specified functionality, features, and operational behavior of a web application to ensure they correspond to its specifications. For example: testing all the mandatory fields, testing that the asterisk sign is displayed for all mandatory fields, testing that the system does not display an error message for optional fields, and checking all links (external links, internal links, anchor links, and mailing links) so that any broken link can be removed. We can do this with functional testing, in which we test the app’s functional requirements and specifications.
• Usability: While testing usability, developers face issues with scalability and interactivity. As varying numbers of users will be using the website, it is the developers’ responsibility to form a group to test the application across different browsers using different hardware. For example, whenever a user browses an online shopping website, several questions may come to mind: is the website credible, do shipping charges apply, and so on.
• Load Issues: We perform this testing to check the behavior of the system under a specific load, so that we can measure important transactions while the load on the database, the application server, etc. is also monitored.
• Storage and Database: Testing the storage or the database of a web application is also an important component, and we must make sure the database is properly tested. We test things like errors raised while executing DB queries, the response time of queries, and whether the data retrieved from the database is correctly shown on the website (see the sketch below).
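As a minimal sketch of these database checks, the following Python snippet runs a query against a throwaway in-memory SQLite database, verifies the retrieved data, and measures the query’s response time; the table and data are illustrative.

    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

    start = time.perf_counter()
    rows = conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
    elapsed = time.perf_counter() - start

    # Verify the data retrieved from the database matches what was stored.
    assert rows == [(1, "alice"), (2, "bob")], f"Unexpected result: {rows}"
    print(f"Query returned {len(rows)} rows in {elapsed * 1000:.2f} ms")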
Here, we will explore the challenges and solution strategies for testing COTS-based
applications. We will also see a process for testing COTS-based applications.
Case Study
The Big Insurance Company plans to deploy a new system to allow its 1,200 agents to track
customer and client information. Instead of writing its own application, the company has
chosen to buy a site license of a popular contact management application. The solution
appears to be cost-effective, as the total cost of the software to be deployed to all agents will
be about $100,000 as compared to an in-house development estimate of $750,000. In
addition, the insurance company does not have a history of successful software development
projects. There are, however, some considerations that the company realized only after making the purchase decision.
The deployment plan was to have all agents download and install the new application over a
weekend. Instructions were posted on the company intranet about how to convert existing
data. A help line was established to provide support to the agents. On deployment weekend,
98% of the agents downloaded the new software and installed it on their notebook computers.
About 20% of the agents had problems installing the software due to incompatibilities with
hardware and operating systems. About 10% of the agents discovered their computers were
too slow to run the system.
The real problems, however, started on Monday when the agents began using the system. Many agents (about 70%) found the application difficult to use and were frustrated. In addition, all of the agents found that the new application could not perform some of the functions the old contact database could. Fortunately, many of the agents had kept their old contacts database.
After four weeks, the company decided to implement another product, but this time more
field testing was performed, other customers of the product were referenced, and more
extensive testing was performed for interoperability, compatibility, correctness, and usability.
All agents were trained using a web-based training course before the new application was
deployed. The second deployment was a huge success.
The customer has no access to source code in COTS products. This forces testers to adopt an
external, black-box, test approach. Although black-box testing is certainly not foreign to
testers, it limits the view and expands the scope of testing. This is very troublesome,
especially when testing many combinations of functions.
Functional testing is redundant by its very nature. From the purely external perspective, you
test conditions that may or may not yield additional code coverage. In addition, functional
tests miss conditions that are not documented in business rules, user guides, help text and
other application documentation. The bottom line is that in functional testing, you can test
against a defined set of criteria, but there will likely be features and behavior that the criteria
will not include. That's why structural testing is also important. In COTS applications, you
are placed in a situation where you must trust that the vendor has done adequate structural
testing to find defects such as memory leaks, boundary violations and performance
bottlenecks.
Solution Strategies: Avoid complex combinations of tests and the idea of "testing everything."
Instead, base tests on functional or business processes used in the real-world environment.
The initial tendency of people in testing COTS applications is to start defining tests based on
user interfaces and all of the combinations of features. This is a slippery slope which can lead
to many test scenarios, some meaningful and others with little value.
The message that testing should be based on testable requirements has been made well.
Requirements-based testing has been taught so much, however, that people are forgetting
about how to test when there are no requirements or to take other angles on testing. Testing
from the real-world perspective is validation, and validation is the kind of testing that is
primary in a customer or user's test of a COTS product.
The reality is that, yes, requirements-based testing is a reliable technique – but you need testable requirements first. In COTS you may have defined user needs, but you do not have the benefit of documents that specify those needs to the developer for building the software. In fact, the developer of the software may not have had the benefit of documented requirements for tests either. For the customer, this means you have to look elsewhere for test cases.
Solution Strategy:
Design tests that are important to how you will use the product. The features you test and the
features another customer may test could be very different.
Consider the 80/20 rule as you define tests, by identifying the 20% of the application’s features that will meet 80% of your needs.
The COTS product will have defects; you just don’t know where they are or how many there will be.
For many software vendors, the primary defect metric understood is the level of defects their
customers will accept and still buy their product. I know that sounds rather cynical, but once
again, let's face facts. Software vendors are in business to make a profit. Although perfection
is a noble goal and (largely) bug-free software is a joy to use, a vendor will not go to needless
extremes to find and fix some defects. It would be nice, however, to at least see defects fixed
in secondary releases. Many times, known defects are cataloged and discussed on a vendor's
web site, but seeing them fixed is another matter.
This aspect of COTS is where management may have the most unrealistic expectations. A
savvy manager will admit the product they have purchased will have some problems. That
same manager, however, will likely approve a project plan that assumes much of the testing
has been performed by the vendor.
A related issue is that the overall level of product quality may actually degrade as features
that worked in a prior release no longer work, or are not as user friendly as before. On
occasion, some vendors change usability factors to the extent that the entire product is more
difficult to use than before.
Solution Strategy:
Do not assume any level of product quality without at least a preliminary test. A common
strategy is not to be an early customer of a new release. It's often wise to wait and see what
other users are saying about the product. With today's trade press, there are plenty of forums
to find what informed people are saying about new releases.
Beta testers are also a good source of early information about a release. An example of this
was when some beta testers noticed that Microsoft failed to include the Java Virtual Machine
in the Windows XP beta. Prior to the revelation, Microsoft had not indicated their intention.
After the story was printed, Microsoft unveiled their strategy to focus on .Net.
Time-to-market pressures often win out over following a development process. It’s difficult, if not impossible, for a customer to see what methods a vendor’s development team uses in
building software. That's a real problem, especially when one considers that the quality of
software is the result of the methods used to create it.
Here are some things about a vendor’s process you might like to know, but probably will not be able to find out.
This is a tough issue to deal with because the vendors and their staffs do not want to reveal
trade secrets. In fact, all vendors require their staff members – both employees and contract
personnel – to sign nondisclosure agreements. Occasionally, you will see books or articles about certain vendors, but these are often subjective works and hardly ever address specific product methods.
Independent assessments may help, but like any kind of audit or review, people know what to
show and what to hide. Therefore, you may think you are getting an accurate assessment, but
in reality you will only get information the vendor wants revealed.
Software vendors, especially those in the PC-based arena, have a huge challenge in trying to
create software that will work correctly and reliably in a variety of hardware and operating
system environments. When you also consider peripherals, drivers, and many other variables,
the task of achieving compatibility is impossible. Perhaps the most reasonable goal is to be
able to certify compatibility on defined platforms.
Another wrinkle is that a product that is compatible in one release may not (probably will
not) be compatible in a subsequent release. Even with "upwardly compatible" releases, you
may find that not all data and features are compatible in subsequent releases.
Finally, be careful to consider compatibility between users in your organization who are using varying release levels of the same product. When you upgrade a product version, you need a plan that addresses these cross-version compatibility issues.
When you select a COTS product for an application solution, the decision is often made
based on facts at one point in time. Although the current facts about a product are the only
ones that are known and relevant during the acquisition process, the product's future direction
will have a major impact in the overall return on investment for the customer. The problem is
that upgrade schedules fluctuate greatly, are impacted by other events such as new versions of
operating systems and hardware platforms, and are largely unknown quantities in terms of
quality.
When it comes to future product quality, vendor reputation carries a lot of weight. Also, past
performance of the product is often an indicator of future performance. This should be a
motivator for vendors to maintain high levels of product quality, but we find ourselves back
at the point of understanding that as long as people keep buying the vendor's product at a
certain level of quality, the vendor really has no reason to improve product quality except for
competing with vendors of similar products.
Solution Strategies:
Keep open lines of communication with the vendor. This may include attending user group meetings, online forums, and focus groups, and becoming a beta tester. Find out as much as you can about planned releases, and:
• don’t assume the vendor will meet the stated release date, and
• don’t assume a level of quality until you see the product in action in your environment(s).
Vendor support is often high on the list of acquisition criteria. However, how can you know
for sure your assessment is correct? The perception of vendor support can be a subjective
one. Most people judge the quality of support based on one or a few incidents.
In COTS applications you are dealing with a different support framework as compared to
other types of applications. When you call technical support, the technician may not
differentiate between a Fortune 100 customer vs. an individual user at home.
Furthermore, when you find defects and report them to the vendor, there is no guarantee they
will be fixed, even in future releases of the product.
Solution Strategies:
Talk to other users about their support experiences, keeping in mind that people will have a
wide variety of experiences, both good and bad.
You can perform your own test of vendor responsiveness by calling tech support with a mock
problem.
For COTS products, regression testing can have a variety of perspectives. One perspective is
to view a new release as a new version of the same basic product. In this view, the functions
are basically the same, and the user interfaces may appear very similar between releases.
Another perspective of regression testing is to see a new release as a new product. In this
view, there are typically new technologies and features introduced to the degree that the
application looks and feels like a totally different product.
The goal of regression testing is to validate that functions work correctly as they did before an application was changed. For COTS, this means that the product still meets your needs in your environment as it did in the previous version used, even where the functions appear different at points.
This leaves us with an ROI based on the repeatability of the automated tests. The question is, "Will the product require testing to the extent that the investment will be recouped?"
If you are planning to test only once or twice per release, probably not. However, if you plan to use automated tools to test product performance on a variety of platforms, or just to test the correctness of installation, then you may well get a good return on your automation investment (see the sketch below).
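One way to reason about that question is a simple break-even calculation, sketched below in Python; all cost figures are illustrative assumptions, not benchmarks.

    # How many test runs before automation pays for itself?
    automation_cost = 400.0         # hours to build and maintain the automated suite
    manual_cost_per_run = 25.0      # hours to run the suite manually once
    automated_cost_per_run = 2.0    # hours to run the automated suite once

    saving_per_run = manual_cost_per_run - automated_cost_per_run
    break_even_runs = automation_cost / saving_per_run

    print(f"Automation breaks even after {break_even_runs:.1f} runs")
    # With these numbers: 400 / 23 = ~17.4 runs. One or two runs per release
    # will not recoup the cost; repeated cross-platform runs may.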
For the scope concern, much of the problem arises from the inability to identify effective test cases. Testing business and operational processes, rather than combinations of interface functions, often helps reduce the scope and makes the tests more meaningful.
Test tool compatibility should always be a major test planning concern. Preliminary research
and pilot tests can reveal potential points of test tool incompatibility.
When dealing with the spider web of application interfaces and the subsequent processing on all sides of those interfaces, the complexity of testing interoperability becomes quite high.
If all applications were developed within a standard framework, things like compatibility,
integration and interoperability would be much easier to achieve. However, there is a tradeoff
between standards and innovation. As long as rapid innovation and time-to-market are
primary business motivators, standards are not going to be a major influence on application
development.
Some entities, such as the Department of Defense, have developed environments to certify
applications as interoperable with an approved baseline before they can be integrated into the
production baseline. This approach achieves a level of integration, but limits the availability
of solutions in the baseline. Other organizations have made large investments in
interoperability and compatibility test labs to measure levels of interoperability and
compatibility. However, the effort and expense to build and maintain test labs can be large. In
addition, you can only go so far in simulating environments where combinations of
components are concerned.
TRACKING DEFECTS
Defects:
A software bug arises when the expected result doesn’t match the actual result. A bug can also be an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and errors made by developers or architects.
The following methods help prevent programmers from introducing bugs during development:
• Programming Techniques adopted
• Software Development methodologies
• Peer Review
• Code Analysis
A defect report typically records the following fields (a small sketch follows this list):
• Defect Id
• Priority
• Severity
• Created by
• Created Date
• Assigned to
• Resolved Date
• Resolved By
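As a minimal sketch, the fields above can be modeled as a small record type; the Python dataclass below is illustrative, and the field types and example values are assumptions.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Defect:
        defect_id: str
        priority: str        # e.g. "High", "Medium", "Low"
        severity: str        # e.g. "Critical", "Major", "Minor"
        created_by: str
        created_date: date
        assigned_to: str
        resolved_date: Optional[date] = None  # unset until the bug is fixed
        resolved_by: Optional[str] = None

    # Hypothetical example record
    bug = Defect("DEF-101", "High", "Critical", "tester1",
                 date(2023, 1, 10), "dev1")
    print(bug)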
Defect/Bug tracking tool
We have various types of defect tracking tools available in software testing that help us track bugs related to the software or the application.
Some of the most commonly used defect tracking tools are as follows:
o Jira
o Bugzilla
o BugNet
o Redmine
o Mantis
o Trac
o Backlog
Jira
Jira is one of the most important defect/bug tracking tools. Jira is a commercial tool developed by Atlassian that is used for bug tracking, project management, and issue tracking. Jira includes different features, like reporting, recording, and workflow. In Jira, we can track all kinds of bugs and issues that are related to the software and raised by the test engineers (a hedged API sketch follows).
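As an illustration of how such tracking can be automated, here is a hedged Python sketch that logs a bug through Jira’s REST API (POST /rest/api/2/issue). The site URL, credentials, and project key are placeholders; check the payload shape against Atlassian’s documentation for your Jira version before relying on this.

    import requests

    JIRA_URL = "https://your-company.atlassian.net"  # placeholder
    AUTH = ("user@example.com", "api-token")         # placeholder credentials

    payload = {
        "fields": {
            "project": {"key": "TEST"},              # placeholder project key
            "summary": "Deposit page throws error on submit",
            "description": "Steps to reproduce: ...",
            "issuetype": {"name": "Bug"},
        }
    }

    response = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                             json=payload, auth=AUTH, timeout=10)
    response.raise_for_status()
    print("Created issue:", response.json()["key"])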
Bugzilla
Bugzilla is another important bug tracking tool, widely used by many organizations to track bugs. It is an open-source tool that helps customers and clients keep track of bugs. It is also used as a test management tool because it can easily be linked with test case management tools such as ALM, Quality Center, etc. It supports various operating systems such as Windows, Linux, and Mac.
Features of the Bugzilla tool
Bugzilla has some features which help us report bugs easily:
o A bug list can be generated in multiple formats
o Email notifications controlled by user preferences
o Advanced searching capabilities
o Excellent security
o Time tracking
BugNet
It is an open-source defect tracking and project issue management tool, written in ASP.NET and C# and supporting the Microsoft SQL Server database. The objective of BugNet is to reduce the complexity of the code, which makes deployment easy. An advanced version of BugNet is licensed for commercial use.
Features of BugNet tool
The features of the BugNet tool are as follows:
o It provides excellent security with simple navigation and administration.
o BugNet supports multiple projects and databases.
o With the help of this tool, we can get email notifications.
o It can manage projects and milestones.
o This tool has an online support community.
Redmine
It is an open-source, web-based project management and issue tracking tool. Redmine is written in the Ruby programming language and is compatible with multiple databases like MySQL, Microsoft SQL Server, and SQLite.
Using the Redmine tool, users can also manage multiple projects and related subprojects.
Features of Redmine tool
Some of the common characteristics of the Redmine tool are as follows:
o Flexible role-based access control
o Time tracking functionality
o A flexible issue tracking system
o Feeds and email notification
o Multiple languages support (Albanian, Arabic, Dutch, English, Danish and so on)
MantisBT
o MantisBT stands for Mantis Bug Tracker. It is a web-based bug tracking system, and it is also an open-source tool. MantisBT is used to track software defects. It is implemented in the PHP programming language.
Features of MantisBT
Some of the standard features are as follows:
o It provides full-text search accessibility.
o Audit trails of changes made to issues
o It provides the revision control system integration
o Revision control of text fields and notes
o Notifications
o Plug-ins
o Graphing of relationships between issues.
Trac
Another defect/bug tracking tool is Trac, which is also an open-source, web-based tool. It is
written in the Python programming language. Trac supports various operating systems such
as Windows, Mac, UNIX, Linux, and so on. Trac is helpful in tracking the issues for software
development projects.
We can access it through code, view changes, and view history. This tool supports multiple
projects, and it includes a wide range of plugins that provide many optional features, which
keep the main system simple and easy to use.
Backlog
Backlog is widely used to manage IT projects and track bugs. It is mainly built for development teams to report bugs with complete details of the issues, comments, updates, and status changes. It is project management software.
The features of the Backlog tool are as follows:
o Gantt and burndown charts
o It supports Git and SVN repositories
o It has an IP access control feature
o Native iOS and Android apps are supported
The test management tool Tarantula gives you the option to define a Test Object, which identifies the actual software/version/release being tested. You can also create a Test Execution: an execution is a collection of test cases run against a selected test object, so for each test object there may be several different executions, e.g. smoke tests, integration tests, performance tests, etc.
At the touch of a button you start the test according to your definition. Tarantula gives you the general case information, the steps and actions for entering defects and comments, and at the end a toolbar for entering the step result.
After the test, Tarantula provides easy-to-read reports and dashboards. These reports and dashboards are easy to share with your coworkers and managers.
The dashboard offers a quick status view of your report. It is based on the Test Object, meaning that you can select a particular release/version to be viewed.
The Project Status report is often useful for periodic reporting to top managers; you can easily share the report with your boss by email, or deliver a printed report.
Tarantula also gives you the option to see the case execution list, which is often used in reports with detailed information about the executed cases.