Unit 5 Software Testing Notes


UNIT 5

TEST MANAGEMENT AND APPLICATIONS

TEST PLAN
A Test Plan is a detailed document that describes the test strategy, objectives, schedule,
estimation, deliverables, and resources required to perform testing for a software product.
A Test Plan helps us determine the effort needed to validate the quality of the application
under test. The test plan serves as a blueprint for conducting software testing activities as a
defined process, minutely monitored and controlled by the test manager.

“Test Plan is A document describing the scope, approach, resources, and schedule of
intended test activities.”

Importance of Test Plan


Creating a Test Plan document has multiple benefits:
• It helps people outside the test team, such as developers, business managers, and
customers, understand the details of testing.
• The Test Plan guides our thinking. It is like a rule book that needs to be followed.
• Important aspects like test estimation, test scope, and test strategy are documented in
the Test Plan, so they can be reviewed by the management team and re-used for other projects.

How to write a Test Plan


You already know that making a Test Plan is the most important task of the Test Management
Process. Follow the eight steps below to create a test plan as per the IEEE standard:
1. Analyze the product
2. Design the Test Strategy
3. Define the Test Objectives
4. Define Test Criteria
5. Resource Planning
6. Plan Test Environment
7. Schedule & Estimation
8. Determine Test Deliverables
Step 1) Analyze the product
How can you test a product without any information about it? The answer is: you can't.
You must learn a product thoroughly before testing it.
The product under test is the Guru99 banking website. You should research the clients and
the end users to know their needs and expectations for the application:
• Who will use the website?
• What is it used for?
• How will it work?
• What software/hardware does the product use?
You should also take a look around the website and review the product documentation.
Reviewing the product documentation helps you understand all the features of the
website as well as how to use it. If you are unclear on any items, you
might interview the customer, developers, or designers to get more information.
Step 2) Develop the Test Strategy
The Test Strategy is a critical step in making a Test Plan in software testing. A Test Strategy
document is a high-level document, usually developed by the Test Manager. This
document defines:
• The project's testing objectives and the means to achieve them
• The testing effort and costs
You need to develop a Test Strategy for testing the banking website. Follow the steps
below.
Step 2.1) Define the Scope of Testing
Before the start of any test activity, the scope of the testing should be known. You must think
hard about it.
• The components of the system to be tested (hardware, software, middleware, etc.) are
defined as "in scope"
• The components of the system that will not be tested also need to be clearly defined as
being "out of scope."
Defining the scope of your testing project is very important for all stakeholders. A precise
scope helps you:
• Give everyone confidence in, and accurate information about, the testing you are doing
• Ensure all project members have a clear understanding of what is tested and what is
not
How do you determine the scope of your project?
To determine the scope, you must consider:
• The precise customer requirements
• The project budget
• The product specification
• The skills and talent of your test team
Now you should clearly define what is "in scope" and "out of scope" for the testing.
• As per the software requirement specs, the Guru99 Bank project focuses only on testing
all the functions and the external interface of the Guru99 Bank website (in-scope testing)
• Nonfunctional testing such as stress, performance, or logical database testing will
not be performed for now (out of scope)
Problem Scenario
The customer wants you to test his API, but the project budget does not permit it. In
such a case, what will you do?
Well, in such a case you need to convince the customer that API testing is extra work and will
consume significant resources. Give him data supporting your facts. Tell him that if API testing
is included in scope, the budget will increase by XYZ amount.
The customer agrees, and accordingly the new in-scope and out-of-scope items are:
• In-scope items: Functional Testing, API Testing
• Out-of-scope items: Database Testing, hardware, and any other external interfaces
Step 2.2) Identify the Testing Types
A Testing Type is a standard test procedure that gives an expected test outcome.
Each testing type is formulated to identify a specific type of product bug, but all testing
types are aimed at one common goal: "early detection of all the defects before
releasing the product to the customer."
The commonly used testing types are described in the following figure.
Commonly Used Testing Types
There are tons of testing types for testing a software product. Your team will not
have enough effort to handle every kind of testing. As Test Manager, you must prioritize
the Testing Types:
• Which testing types should be the focus of web application testing?
• Which testing types can be skipped to save cost?
Step 2.3) Document Risks & Issues
A risk is an uncertain future event with a probability of occurrence and a potential for loss.
When the risk actually happens, it becomes an "issue."
In Risk Analysis and Solution, you have already identified the potential risks in the project.
In the QA Test Plan, you document those risks.
Step 2.4) Create the Test Logistics
In Test Logistics, the Test Manager should answer the following questions:
• Who will test?
• When will the test occur?
Who will test?
You may not know the exact names of the testers who will test, but the type of tester can be
defined.
To select the right member for a specified task, you have to consider whether their skills are
qualified for the task, and also estimate the project budget. Selecting the wrong member for
the task may cause the project to fail or be delayed.
A person with the following skills is ideal for performing software testing:
• Ability to understand the customer's point of view
• Strong desire for quality
• Attention to detail
• Good cooperation
In your project, the member who takes charge of the test execution is the tester. Based
on the project budget, you can choose an in-house or outsourced member as the tester.
When will the test occur?
Test activities must be matched with the associated development activities.
You will start testing when you have all the required items shown in the following figure.

Step 3) Define the Test Objectives

The Test Objective is the overall goal and achievement of the test execution. The objective of
the testing is to find as many software defects as possible and to ensure that the software
under test is bug-free before release.
To define the test objectives, you should follow these 2 steps:
1. List all the software features (functionality, performance, GUI…) which may need to
be tested.
2. Define the target or goal of the tests based on the above features.
Let's apply these steps to find the test objectives of your Guru99 Bank testing project.
You can choose the 'TOP-DOWN' method to find the website's features which may need to
be tested. In this method, you break the application under test down into components and
sub-components.
In the previous topic, you already analyzed the requirement specs and walked through the
website, so you can create a Mind-Map of the website's features as follows.

This figure shows all the features which the Guru99 website may have.
Based on the above features, you can define the Test Objectives of the Guru99 project as
follows:
• Check whether the Guru99 website functionality (Account, Deposit…) works as
expected without any errors or bugs in a real business environment
• Check that the external interface of the website, such as the UI, works as expected and
meets the customer's needs
• Verify the usability of the website. Are the functionalities convenient for the user or
not?
Step 4) Define Test Criteria
Test Criteria are standards or rules on which a test procedure or test judgment can be based.
There are 2 types of test criteria, as follows.
Suspension Criteria
Specify the critical suspension criteria for a test. If the suspension criteria are met during
testing, the active test cycle is suspended until the criteria are resolved.
Test Plan Example: If your team members report that 40% of test cases have failed, you
should suspend testing until the development team fixes all the failed cases.
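As a minimal illustration (not from the original text), the 40% suspension rule above can be expressed as a simple check on the raw test counts:

```python
# Minimal sketch of the suspension-criteria check from the example above.
# The 40% threshold comes from the example; adjust it per project.

def should_suspend(executed: int, failed: int, threshold: float = 0.40) -> bool:
    """Return True when the failure ratio reaches the suspension threshold."""
    if executed == 0:
        return False  # nothing executed yet, nothing to judge
    return failed / executed >= threshold

print(should_suspend(executed=100, failed=40))  # True -> suspend the cycle
```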

Exit Criteria
These specify the criteria that denote the successful completion of a test phase. The exit criteria
are the targeted results of the test and are necessary before proceeding to the next phase of
development. Example: 95% of all critical test cases must pass.
Some methods of defining exit criteria are specifying a targeted run rate and pass rate.
• The run rate is the ratio between the number of test cases executed and the total test
cases in the test specification. For example, if the test specification has 120 TCs in total
but the tester only executed 100 TCs, the run rate is 100/120 = 0.83 (83%).
• The pass rate is the ratio between the number of test cases passed and the number of
test cases executed. For example, if 80 of the 100 executed TCs passed, the pass rate is
80/100 = 0.8 (80%).
• The run rate must be 100% unless a clear reason is given.
• The pass rate depends on the project scope, but achieving a high pass rate is always a goal.
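The run-rate and pass-rate arithmetic above is simple enough to script. A minimal sketch using the numbers from the example (120 specified, 100 executed, 80 passed); the critical-case figures in the exit check are illustrative:

```python
# Run rate and pass rate as defined above, using the example figures.

total_cases = 120     # test cases in the test specification
executed = 100        # test cases actually executed
passed = 80           # executed test cases that passed

run_rate = executed / total_cases    # 100/120 ≈ 0.83 (83%)
pass_rate = passed / executed        # 80/100 = 0.80 (80%)

print(f"Run rate:  {run_rate:.0%}")   # Run rate:  83%
print(f"Pass rate: {pass_rate:.0%}")  # Pass rate: 80%

# Exit-criteria check, e.g. "95% of all critical test cases must pass";
# the counts below are made up for illustration.
critical_total, critical_passed = 40, 39
print("Exit criteria met:", critical_passed / critical_total >= 0.95)  # True
```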

Test Plan Example:

Your team has already finished the test executions. They report the test results to you, and
they want you to confirm the Exit Criteria.
Step 5) Resource Planning
A resource plan is a detailed summary of all types of resources required to complete the
project tasks. Resources can be the humans, equipment, and materials needed to complete a project.
Resource planning is an important factor in test planning because it helps determine
the number of resources (employees, equipment…) to be used for the project.
The Test Manager can then make the correct schedule and estimation for the project.
Step 6) Plan the Test Environment
What is a Test Environment?
A test environment is a setup of software and hardware on which the testing team executes
test cases. The test environment consists of the real business and user environment,
as well as the physical environment, such as servers and the front-end running environment.
How to set up the Test Environment
Back to your project: how do you set up the test environment for this banking website?
To finish this task, you need strong cooperation between the Test Team and the Development
Team.
Step 7) Schedule & Estimation
In the Test Estimation article, you already used some techniques to estimate the effort needed
to complete the project. Now include that estimation, as well as the schedule, in the
Test Plan.
Then you create the schedule to complete these tasks.
Making a schedule is a common task in project management. By creating a solid schedule in
the Test Plan, the Test Manager can use it as a tool for monitoring project progress and
controlling cost overruns.
To create the project schedule, the Test Manager needs several types of input, as below:
• Employee and project deadline: the working days, the project deadline, and resource
availability are factors that affect the schedule
• Project estimation: based on the estimation, the Test Manager knows how long it
takes to complete the project, so he can make an appropriate project schedule
• Project risk: understanding the risks helps the Test Manager add enough extra time to
the project schedule to deal with them
Step 8) Determine Test Deliverables
Test Deliverables is a list of all the documents, tools, and other components that have to be
developed and maintained in support of the testing effort.
There are different test deliverables at every phase of the software development lifecycle.

Test deliverables provided before the testing phase:

• Test Plan document
• Test Cases documents
• Test Design specifications
Test deliverables provided during the testing:
• Test Scripts
• Simulators
• Test Data
• Test Traceability Matrix
• Error logs and execution logs
Test deliverables provided after the testing cycle is over:
• Test Results/reports
• Defect Report
• Installation/Test procedure guidelines
• Release notes

TEST MANAGEMENT
Test Management is the process of managing testing activities in order to ensure high-quality
and high-end testing of the software application. The method consists of organizing and
controlling the testing process, and ensuring its traceability and visibility, in order to deliver a
high-quality software application. It ensures that the software testing process runs as
expected.
Test Management Phases:

Test Management Process

The Test Management Process is a procedure for managing software testing activities from
start to end. It provides planning, controlling, tracking, and monitoring facilities
throughout the whole project cycle. The process involves several activities like test planning,
designing, and test execution. It gives an initial plan and discipline to the software testing
process. Test management tools can help manage and streamline these activities.
There are two main parts of Test Management Process: –
• Planning
1. Risk Analysis
2. Test Estimation
3. Test Planning
4. Test Organization
• Execution
1. Test Monitoring and Control
2. Issue Management
3. Test Report and Evaluation
Planning:
Risk Analysis and Solution
Risk is the potential loss (an undesirable outcome, though not necessarily so) resulting from
a given action or activity.
Risk Analysis is the first step a Test Manager should take before starting any project.
Because all projects may contain risks, early detection of risks and identification of their
solutions helps the Test Manager avoid potential losses in the future and save on project costs.
Test Estimation
An estimate is a forecast or prediction. Test Estimation is approximately determining how
long a task would take to complete. Estimating the effort for testing is one of
the major and important tasks in Test Management.
Benefits of correct estimation:
1. Accurate test estimates lead to better planning, execution, and monitoring of the tasks
under a test manager's attention.
2. They allow more accurate scheduling and help realize results more confidently.
Test Planning
A Test Plan can be defined as a document describing the scope, approach, resources,
and schedule of intended Testing activities.
A project may fail without a complete Test Plan. Test planning is particularly important in
large software system development.
In software testing, a test plan gives detailed testing information regarding an upcoming
testing effort, including:
• Test Strategy
• Test Objective
• Exit /Suspension Criteria
• Resource Planning
• Test Deliverables
Test Organization
Test Organization in software testing is the procedure of defining roles in the testing
process. It defines who is responsible for which activities in the testing process. The same
process also explains the test functions, facilities, and activities. The competencies and
knowledge of the people involved are also defined; however, everyone is responsible for the
quality of the testing process.
Now you have a plan, but how will you stick to the plan and execute it? To answer that
question, you have the Test Organization phase.
Generally speaking, you need to organize an effective testing team. You have to assemble a
skilled team to run the ever-growing testing engine effectively.

Execution
Test Monitoring and Control
What will you do when your project runs out of resources or exceeds its schedule?
You need to monitor and control the test activities to bring the project back on schedule.
Test Monitoring and Control is the process of overseeing all the metrics necessary to ensure
that the project is running well, on schedule, and not over budget.
Monitoring
Monitoring is the process of collecting, recording, and reporting information about the
project activity that the project manager and stakeholders need to know.
To monitor, the Test Manager does the following activities:
• Define the project goal, or project performance standard
• Observe the project performance, and compare the actual and planned
performance expectations
• Record and report any detected problem in the project
Controlling
Project controlling is the process of using data from the monitoring activity to bring actual
performance in line with planned performance.
In this step, the Test Manager takes action to correct deviations from the plan. In some
cases, the plan has to be adjusted according to the project situation.
Issue Management
As mentioned at the beginning of the article, all projects may have potential risks. When a
risk happens, it becomes an issue.
In the life cycle of any project, there will always be unexpected problems and questions that
crop up. For example:
• The company cuts your project budget
• Your project team lacks the skills to complete the project
• The project schedule is too tight for your team to finish the project by the deadline
Risks to be avoided while testing:
• Missing the deadline
• Exceeding the project budget
• Losing the customer's trust
When these issues arise, you have to be ready to deal with them – or they can
potentially affect the project's outcome. Dealing with such issues is what issue
management is about.
Test Report & Evaluation
The project has now been completed. It's time to look back at what you have done.
The Test Evaluation Report describes the results of the testing in terms of test
coverage and exit criteria. The data used in the test evaluation are based on the test
results data and the test result summary.

TESTING WEB-BASED SYSTEMS (WEB-BASED TESTING)

Web testing is a software testing technique for testing web applications or websites in order
to find errors and bugs. A web application must be tested properly before it goes to the end
users. Also, testing a web application does not only mean finding common bugs or errors, but
also testing the quality-related risks associated with the application. Software
testing should be done with the proper tools and resources, and should be done effectively. We
should know the architecture and key areas of a web application to effectively plan and
execute the testing.

Testing a web application is much like testing any other application in areas such as
functionality, configuration, or compatibility, but it additionally includes the analysis of
web-specific faults compared to general software faults. Web applications also have to be
tested on different browsers and platforms so that we can identify the areas that need
special focus while testing the web application.
Types of Web Testing:
Basically, there are 4 types of web-based testing, and all four of them are
discussed below:
• Static Website Testing: A static website is a type of website in which the
content shown or displayed is exactly the same as it is stored on the server. This
type of website may have a great UI but does not have any dynamic feature that a user or
visitor can use. In static testing, we generally focus on testing things like the UI, as it
is the most important part of a static website. We check things like font size, color,
spacing, etc. Testing also includes checking the contact-us form and verifying the URLs
or links that are used on the website (see the link-checking sketch after this list).
• Dynamic Website Testing: A dynamic website is a type of website that consists
of both a frontend, i.e., the UI, and a backend, such as a database. This
type of website gets updated or changed regularly as per the users' requirements.
On such a website, a lot of functionality is involved: what a button does when it
is pressed, whether error messages are shown properly at their defined time,
etc. We check whether the backend is working properly, for example whether data
entered in the GUI or frontend gets updated in the database.
• E-Commerce Website Testing: An e-commerce website is very difficult to
maintain, as it consists of many different pages and functionalities. In this
testing, the tester or developer has to check various things, such as whether the
shopping cart works as per the requirements and whether user registration and
login work properly. The most important things in this testing are whether a user
can successfully make a payment and whether the website is secure. There are
many more things a tester needs to test apart from those given here.
• Mobile-Based Web Testing: In this testing, the developer or tester checks the
website's compatibility on different devices, especially mobile devices, because
many users open the website on their mobile devices. So,
keeping that in mind, we must check that the site is responsive on all
devices and platforms.
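As referenced in the static-website item above, here is a minimal link-checking sketch. It uses the third-party requests and beautifulsoup4 packages, and the URL is a placeholder, not from the original text:

```python
# Minimal link-checker sketch for static website testing.
# Assumes `pip install requests beautifulsoup4`; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def check_links(page_url: str) -> None:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for anchor in soup.find_all("a", href=True):
        link = urljoin(page_url, anchor["href"])  # resolve relative URLs
        try:
            status = requests.head(link, timeout=10, allow_redirects=True).status_code
        except requests.RequestException as exc:
            print(f"BROKEN  {link}  ({exc})")
            continue
        print(f"{'BROKEN' if status >= 400 else 'OK':6}  {link}  (HTTP {status})")

check_links("https://example.com/")
```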
Points to Be Considered While Testing a Website:
As a website consists of a frontend, a backend, and servers, things like HTML pages,
internet protocols, firewalls, and other applications running on the servers should be
considered while testing a website. There are various examples of considerations that need
to be checked while testing a web application. Some of them are:
• Do all pages have valid internal and external links or URLs?
• Does the website work as per the system compatibility requirements?
• As per the user interface: is the display size optimal and the best
fit for the website?
• What type of security does the website need (if it is unsecured)?
• What are the requirements for website analytics, and for controlling
graphics, URLs, etc.?
• Should a contact-us or customer-assistance feature be added to the page?
In web-based testing, various areas have to be tested to find potential errors and
bugs. The areas for testing a web app are given below:
• App Functionality: In web-based testing, we have to check the specified
functionality, features, and operational behavior of a web application to ensure
they correspond to its specifications. For example: testing all the mandatory
fields, testing that the asterisk sign is displayed for all mandatory fields,
testing that the system does not display an error message for optional fields, and
checking all links (external links, internal links, anchor links, and mailing links)
properly, removing any damaged link that is found. We can do this with functional
testing, in which we test the app's functional requirements and specifications.
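As an illustration of the mandatory-field checks just described, here is a hedged Selenium sketch; the URL, element IDs, and CSS class are hypothetical, not taken from the original text:

```python
# Sketch of a functional test for mandatory fields using Selenium.
# Requires `pip install selenium`; the URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/register")  # placeholder URL

    # The asterisk should be displayed next to every mandatory field.
    label = driver.find_element(By.ID, "email-label")  # hypothetical ID
    assert "*" in label.text, "mandatory field is missing its asterisk"

    # Submitting with a mandatory field left empty should show an error.
    driver.find_element(By.ID, "submit").click()
    errors = driver.find_elements(By.CLASS_NAME, "field-error")
    assert errors, "expected an error message for the empty mandatory field"
finally:
    driver.quit()
```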

• Usability: While testing usability, the developers face issues with scalability and
interactivity. Since varying numbers of users will be using the website, it is the
responsibility of the developers to form a group for testing the application across
different browsers using different hardware. For example, whenever a user
browses an online shopping website, several questions may come to his/her
mind, like checking the credibility of the website or testing whether the shipping
charges are applicable.

• Browser Compatibility: To check the compatibility of the website, i.e., that it works
the same in different browsers, we test the web application to verify whether the
content on the website is displayed correctly across all browsers.
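One common way to run the same checks across browsers is a parametrized pytest fixture. A minimal sketch; the browser set, URL, and expected title are assumptions:

```python
# Sketch of a cross-browser check with pytest + Selenium.
# Requires `pip install pytest selenium` and the matching browser drivers.
import pytest
from selenium import webdriver

BROWSERS = {"chrome": webdriver.Chrome, "firefox": webdriver.Firefox}

@pytest.fixture(params=list(BROWSERS))
def driver(request):
    drv = BROWSERS[request.param]()  # launch one browser per parameter
    yield drv
    drv.quit()

def test_homepage_renders(driver):
    driver.get("https://example.com/")  # placeholder URL
    assert "Example" in driver.title    # same content in every browser
```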

• Security: Security plays an important role in every website that is available on
the internet. As part of security testing, the testers check things such as: unauthorized
access to secure pages should not be permitted, and files that are restricted to
certain users should not be downloadable without proper access.
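A minimal sketch of the unauthorized-access check just described, using the requests package; the URLs are placeholders:

```python
# Sketch: a secure page requested without credentials should be denied
# or redirected to a login page, never served. URLs are placeholders.
import requests

SECURE_PAGES = ["https://example.com/admin", "https://example.com/reports"]

for url in SECURE_PAGES:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    assert resp.status_code in (301, 302, 401, 403), (
        f"{url} returned HTTP {resp.status_code} without authentication"
    )
    print(f"{url}: correctly denied (HTTP {resp.status_code})")
```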

• Load Issues: We perform this testing to check the behavior of the system under
a specific load, so that we can measure some important transactions; the load
on the database, the application server, etc. is also monitored.
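Dedicated tools such as JMeter or Locust are the usual choice for load testing, but the idea can be sketched with a thread pool firing concurrent requests and timing the responses; the URL and load level are assumptions:

```python
# Crude load-test sketch: fire N concurrent requests and report latencies.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL, USERS = "https://example.com/", 50  # placeholder URL and load level

def timed_request(_):
    start = time.perf_counter()
    status = requests.get(URL, timeout=30).status_code
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(timed_request, range(USERS)))

latencies = sorted(elapsed for _, elapsed in results)
print("errors:", sum(status >= 400 for status, _ in results))
print(f"median latency: {latencies[len(latencies) // 2]:.3f}s")
```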

• Storage and Database: Testing the storage or database of any web
application is also an important component, and we must make sure the database is
properly tested. We test things like errors while executing DB queries, the
response time of queries, and whether data retrieved from the database is
correctly shown on the website.
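The database checks listed above (query errors, response time) can be sketched with Python's built-in sqlite3 module; the database file, table, and column names are hypothetical:

```python
# Sketch of simple database checks: query errors and response time.
# The schema below is hypothetical.
import sqlite3
import time

conn = sqlite3.connect("guru99.db")  # placeholder database file
try:
    start = time.perf_counter()
    rows = conn.execute(
        "SELECT account_id, balance FROM accounts WHERE balance < 0"
    ).fetchall()
    elapsed = time.perf_counter() - start

    assert elapsed < 1.0, f"query too slow: {elapsed:.2f}s"
    assert not rows, f"found {len(rows)} accounts with negative balance"
except sqlite3.Error as exc:
    print("query failed:", exc)  # any SQL error is itself a defect
finally:
    conn.close()
```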

TESTING OFF-THE-SHELF SOFTWARE:


Commercial Off-the-Shelf (COTS) software is becoming an ever-increasing part of
organizations' total IT strategy for building and delivering systems. A common perception
held by many people is that since a vendor developed the software, much of the testing
responsibility is carried by the software vendor. However, people are learning that as they
buy and deploy COTS-based systems, the test activities are not necessarily reduced, but
shifted to other types of testing not seen on some in-house developed systems.

Here, we will explore the challenges and solution strategies for testing COTS-based
applications. We will also see a process for testing COTS-based applications.

Case Study
The Big Insurance Company plans to deploy a new system to allow its 1,200 agents to track
customer and client information. Instead of writing its own application, the company has
chosen to buy a site license of a popular contact management application. The solution
appears to be cost-effective, as the total cost of the software to be deployed to all agents will
be about $100,000 as compared to an in-house development estimate of $750,000. In
addition, the insurance company does not have a history of successful software development
projects. There are, however, some considerations that the company realized after they made
the purchase decision:

• New versions of the application will be released each year.
• An annual maintenance fee will be required for vendor support.
• The interfaces between the contact management software and the office suite currently being
used are not as seamless as originally thought.
• Some computers being used by agents are too old or too new for the software.
• Each agent has an existing client contact database of about 1,000 people, but the data is
stored in a variety of products and database formats.
In planning the purchase and deployment of the application, the project manager allocated
ample time to perform a pilot deployment with 10 field agents using a variety of computers.
Initial feedback from the 10 agents indicated that the new application worked correctly, but
some tasks were hard to understand and perform. Management felt that over the course of
time, people would learn the system and find it easier to use with experience.

The deployment plan was to have all agents download and install the new application over a
weekend. Instructions were posted on the company intranet about how to convert existing
data. A help line was established to provide support to the agents. On deployment weekend,
98% of the agents downloaded the new software and installed it on their notebook computers.
About 20% of the agents had problems installing the software due to incompatibilities with
hardware and operating systems. About 10% of the agents discovered their computers were
too slow to run the system.

The real problems, however, started on Monday when the agents began using the system.
Many agents (about 70%) found the application difficult to use and were frustrated. In
addition, all of the agents found that the new application could not perform some of the
functions the old contact databases could. Fortunately, many of the agents had kept their old
contact databases.

After four weeks, the company decided to implement another product, but this time more
field testing was performed, other customers of the product were referenced, and more
extensive testing was performed for interoperability, compatibility, correctness, and usability.
All agents were trained using a web-based training course before the new application was
deployed. The second deployment was a huge success.

In the first project, the results were:
• A loss of time and productivity for the agents
• A loss of credibility for the project team and the IT department
• A loss of sales, as agents could not use the system to follow up with prospects quickly
In the second deployment:
• The initial product was more usable and had more useful features
• Agents were trained, to avoid confusion in how to use the product
• Testing was more complete, which gave a higher level of confidence in deploying the
application
In comparing the deployments, the company learned that:
• Application features are just one aspect of the product's quality.
• End users must understand how to use the product.
• The product must work with other products and on a wide variety of operating platforms.
• Although the vendor tested the product, the customer has the responsibility to test items the
vendor can't test.
• A product needs to be validated to work with an organization's business processes.
This case study is not a true story, but it is based on representative projects I have seen in
acquiring and deploying COTS products. From this example, we can see the need for testing.
But what are the issues in COTS testing, and how do we solve them?

Unique Challenges of Testing COTS-based Applications

Challenge #1 - COTS is a Black Box

The customer has no access to the source code of COTS products. This forces testers to adopt an
external, black-box test approach. Although black-box testing is certainly not foreign to
testers, it limits the view and expands the scope of testing. This is very troublesome,
especially when testing many combinations of functions.

Functional testing is redundant by its very nature. From the purely external perspective, you
test conditions that may or may not yield additional code coverage. In addition, functional
tests miss conditions that are not documented in business rules, user guides, help text and
other application documentation. The bottom line is that in functional testing, you can test
against a defined set of criteria, but there will likely be features and behavior that the criteria
will not include. That's why structural testing is also important. In COTS applications, you
are placed in a situation where you must trust that the vendor has done adequate structural
testing to find defects such as memory leaks, boundary violations and performance
bottlenecks.

Solution Strategies: Avoid complex combinations of tests and the idea of "testing everything."
Instead, base tests on the functional or business processes used in the real-world environment.
The initial tendency of people testing COTS applications is to start defining tests based on
user interfaces and all of the combinations of features. This is a slippery slope that can lead
to many test scenarios, some meaningful and others with little value.

Challenge #2 - Lack of Functional and Technical Requirements

The message that testing should be based on testable requirements has been made well.
Requirements-based testing has been taught so much, however, that people are forgetting
how to test when there are no requirements, or how to take other angles on testing. Testing
from the real-world perspective is validation, and validation is the kind of testing that is
primary in a customer's or user's test of a COTS product.
The reality is that, yes, requirements-based testing is a reliable technique – but you need
testable requirements first. With COTS you may have defined user needs, but you do not have
the benefit of documents that specify those user needs to the developer for building the
software. In fact, the developer of the software may not have had the benefit of documented
requirements for tests either. For the customer, this means you have to look elsewhere for
test cases, such as:
• Exploring the application
• Business processes
• User guides
There is also a good degree of professional judgment required in designing validation test
cases. Finding test cases is one thing. Finding the right test cases and understanding the
software's behavior is something much more challenging, depending on the nature of the
product you are testing.

Solution Strategies:
• Design tests that are important to how you will use the product. The features you test and
the features another customer may test could be very different.
• Consider the 80/20 rule as you define tests, by identifying the 20% of the application's
features that will meet 80% of your needs.

Challenge #3 - The Level of Quality is Unknown

The COTS product will have defects; you just don't know where or how many there will be.
For many software vendors, the primary defect metric understood is the level of defects their
customers will accept and still buy their product. I know that sounds rather cynical, but once
again, let's face facts. Software vendors are in business to make a profit. Although perfection
is a noble goal and (largely) bug-free software is a joy to use, a vendor will not go to needless
extremes to find and fix some defects. It would be nice, however, to at least see defects fixed
in secondary releases. Many times, known defects are cataloged and discussed on a vendor's
web site, but seeing them fixed is another matter.

This aspect of COTS is where management may have the most unrealistic expectations. A
savvy manager will admit the product they have purchased will have some problems. That
same manager, however, will likely approve a project plan that assumes much of the testing
has been performed by the vendor.

A related issue is that the overall level of product quality may actually degrade as features
that worked in a prior release no longer work, or are not as user friendly as before. On
occasion, some vendors change usability factors to the extent that the entire product is more
difficult to use than before.

Solution Strategies:

Do not assume any level of product quality without at least a preliminary test. A common
strategy is not to be an early customer of a new release. It's often wise to wait and see what
other users are saying about the product. With today's trade press, there are plenty of forums
for finding what informed people are saying about new releases.
Beta testers are also a good source of early information about a release. An example of this
was when some beta testers noticed that Microsoft had failed to include the Java Virtual
Machine in the Windows XP beta. Prior to that revelation, Microsoft had not indicated its
intention. After the story was printed, Microsoft unveiled its strategy to focus on .NET.

Challenge #4 - Unknown Development Processes and Methods

Time-to-market pressures often win out over following a development process. It's difficult,
if not impossible, for a customer to see what methods a vendor's development team uses in
building the software. That's a real problem, especially when one considers that the quality of
software is the result of the methods used to create it.

Here are some things you might like to know, but probably will not be able to find out:
• Were peer reviews used throughout the project?
• How experienced are the developers?
• Which phases of testing were performed?
• Which types of testing were performed?
• Are test tools used?
• Are defects tracked?
• How do developers collaborate on projects?
• How are product features conceived and conveyed to developers?
• What type of development methodology is used?
• Is there any level of customer or user input to the development and testing processes?
Solution Strategies:

This is a tough issue to deal with, because vendors and their staffs do not want to reveal
trade secrets. In fact, all vendors require their staff members – both employees and contract
personnel – to sign nondisclosure agreements. Occasionally, you will see books or articles
about certain vendors, but these are often subjective works and hardly ever address specific
product methods.
Independent assessments may help, but as with any kind of audit or review, people know what
to show and what to hide. Therefore, you may think you are getting an accurate assessment,
but in reality you will only get the information the vendor wants revealed.

Challenge #5 - Compatibility Issues

Software vendors, especially those in the PC-based arena, have a huge challenge in trying to
create software that will work correctly and reliably in a variety of hardware and operating
system environments. When you also consider peripherals, drivers, and many other variables,
the task of achieving compatibility is impossible. Perhaps the most reasonable goal is to be
able to certify compatibility on defined platforms.

The job of validating software compatibility falls to the customer, to be performed in their
own environments. With the widely diverse environments in use today, it's a safe bet that
each environment is unique at some point.

Another wrinkle is that a product that is compatible in one release may not (and probably will
not) be compatible in a subsequent release. Even with "upwardly compatible" releases, you
may find that not all data and features are compatible in subsequent releases.
Finally, be careful to consider compatibility between users in your organization who are using
varying release levels of the same product. When you upgrade a product version, you need a
plan that addresses:
• When users will have their products upgraded
• Which users will have their products upgraded
• Hardware and other upgrades that may be needed
• Data conversions that may be needed
• Contingency plans in case the upgrade is not successful
Solution Strategies:
• Test a product in your environment before deploying it to the entire organization.
• Have an upgrade plan in place to avoid incompatibility between users of the same product.

Challenge #6 - Uncertain Upgrade Schedules and Quality

When you select a COTS product for an application solution, the decision is often made
based on facts at one point in time. Although the current facts about a product are the only
ones that are known and relevant during the acquisition process, the product's future direction
will have a major impact in the overall return on investment for the customer. The problem is
that upgrade schedules fluctuate greatly, are impacted by other events such as new versions of
operating systems and hardware platforms, and are largely unknown quantities in terms of
quality.

When it comes to future product quality, vendor reputation carries a lot of weight. Also, past
performance of the product is often an indicator of future performance. This should be a
motivator for vendors to maintain high levels of product quality, but we find ourselves back
at the point of understanding that as long as people keep buying the vendor's product at a
certain level of quality, the vendor really has no reason to improve product quality except for
competing with vendors of similar products.

Solution Strategies:

Keep open lines of communication with the vendor. This may include attending user group
meetings, online forums, and focus groups, and becoming a beta tester. Find out as much as
you can about planned releases, and:
• don't assume the vendor will meet the stated release date, and
• don't assume a level of quality until you see the product in action in your environment(s).

Challenge #7 - Varying Levels of Vendor Support

Vendor support is often high on the list of acquisition criteria. However, how can you know
for sure your assessment is correct? The perception of vendor support can be a subjective
one. Most people judge the quality of support based on one or a few incidents.

With COTS applications you are dealing with a different support framework compared to
other types of applications. When you call technical support, the technician may not
differentiate between a Fortune 100 customer and an individual user at home.
Furthermore, when you find defects and report them to the vendor, there is no guarantee they
will be fixed, even in future releases of the product.

Solution Strategies:
• Talk to other users about their support experiences, keeping in mind that people will have a
wide variety of experiences, both good and bad.
• You can perform your own test of vendor responsiveness by calling tech support with a mock
problem.

Challenge #8 - Difficulty in Regression Testing and Test Automation

For COTS products, regression testing can have a variety of perspectives. One perspective is
to view a new release as a new version of the same basic product. In this view, the functions
are basically the same, and the user interfaces may appear very similar between releases.

Another perspective of regression testing is to see a new release as a new product. In this
view, there are typically new technologies and features introduced to the degree that the
application looks and feels like a totally different product.

The goal of regression testing is to validate that functions work correctly as they did before
an application was changed. For COTS, this means that the product still meets your needs in
your environment as it did in the previous version used. Although the functions may appear
different at points, the main concerns are that:
• Features you use often have not been dropped
• Performance has not degraded
• Usability factors have not degraded
• New features do not distract from core application processes
• New technology does not require major infrastructure changes
It's hard to discuss regression testing without discussing test automation. Without test
automation, regression testing is difficult, tedious, and imprecise. However, test automation of
COTS products is challenging due to:
• Changing technologies between releases and versions
• Low return on investment
• The large scope of testing
• Test tool incompatibility with the product
The crux of the issue is that test automation requires a significant investment in creating test
cases and test scripts. The only ways to recoup the investment are:
• Finding defects whose value outweighs the cost of creating the tests
• Repeating the tests enough times to outweigh the manual testing effort
While it is possible that a defect may be found in the regression testing of a COTS product
that may carry a high potential loss value, the more likely types of defects will be found in
other forms of testing and will relate more to integration, interoperability, performance,
compatibility, security and usability factors rather than correctness.

This leaves us with an ROI based on the repeatability of the automated tests. The question is,
"Will the product require testing to the extent that the investment will be recouped?"
If you are planning to test only one or two times per release, probably not. However, if you
plan to use automated tools to test product performance on a variety of platforms, or just to
test the correctness of the installation, then you may well get a good return on your automation
investment.
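The repeatability argument above reduces to simple break-even arithmetic. A sketch with illustrative, made-up cost figures:

```python
# Break-even sketch for automation ROI; all cost figures are illustrative.
# Automation pays off once the accumulated manual effort saved exceeds the
# up-front cost of building the automated suite.
import math

build_cost = 80.0             # hours to script the regression suite
manual_cost_per_run = 10.0    # hours per manual regression pass
automated_cost_per_run = 1.0  # hours to run and review an automated pass

saving_per_run = manual_cost_per_run - automated_cost_per_run
breakeven = math.ceil(build_cost / saving_per_run)
print(f"Investment recouped after {breakeven} test runs")  # 9 runs here
```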

As for the scope concern, much of the problem arises from the inability to identify effective
test cases. Testing business and operational processes, rather than combinations of interface
functions, often helps reduce the scope and makes the tests more meaningful.

Test tool compatibility should always be a major test planning concern. Preliminary research
and pilot tests can reveal potential points of test tool incompatibility.

Solution Strategies:
• View regression testing as a business or operational process validation, as opposed to a
purely functional correctness test.
• Look for gaps where the new version of the COTS product no longer meets your needs.
• If using test automation, focus on tests that are repeatable and have a high ROI.
• Perform pilot tests to determine test tool compatibility.
Challenge #9 - Interoperability and Integration Issues

When dealing with the spider web of application interfaces and the subsequent processing on
all sides of the interfaces, the complexity level of testing interoperability becomes quite high.

Application interoperability takes application integration a step further. While integration
addresses the ability to pass data and control between applications and components,
interoperability addresses the ability of the sending and receiving applications to use the
passed data and control to create correct processing results. It's one thing to pass the data; it's
another thing for the receiving application to use it correctly.

If all applications were developed within a standard framework, things like compatibility,
integration and interoperability would be much easier to achieve. However, there is a tradeoff
between standards and innovation. As long as rapid innovation and time-to-market are
primary business motivators, standards are not going to be a major influence on application
development.

Some entities, such as the Department of Defense, have developed environments to certify
applications as interoperable with an approved baseline before they can be integrated into the
production baseline. This approach achieves a level of integration, but limits the availability
of solutions in the baseline. Other organizations have made large investments in
interoperability and compatibility test labs to measure levels of interoperability and
compatibility. However, the effort and expense to build and maintain test labs can be large. In
addition, you can only go so far in simulating environments where combinations of
components are concerned.

Solution Strategies:
• Make interoperability an acquisition requirement, and measure it using a suite of
interoperability test cases.
• Base any test lab investments on reasonable levels of platform and application coverage,
realizing you will not be able to cover all possible production environments.
• Prioritize interoperability tests to model your most critical and most often used applications.
• Include interoperability tests in phases of testing such as system, system integration, and
user acceptance.

TRACKING DEFECTS
Defects:
A software bug arises when the expected result doesn't match the actual result. It can also
be an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and
errors made by developers or architects.

The following methods help prevent programmers from introducing bugs during
development:
• Programming techniques adopted
• Software development methodologies
• Peer review
• Code analysis

Common Types of Defects


Following are the common types of defects that occur during development:
• Arithmetic Defects
• Logical Defects
• Syntax Defects
• Multithreading Defects
• Interface Defects
• Performance Defects

Defect Logging and Tracking

Defect logging is the process of finding defects in the application under test or product, by
testing or by recording feedback from customers, and making new versions of the product
that fix the defects found or act on the clients' feedback.

Defect tracking is an important process in software engineering, as complex and business-
critical systems have hundreds of defects. One of the challenging factors is managing,
evaluating, and prioritizing these defects. The number of defects multiplies over a period
of time, and to manage them effectively, a defect tracking system is used to make the job easier.

Defect Tracking Parameters

Defects are tracked based on various parameters such as:

• Defect Id
• Priority
• Severity
• Created by
• Created Date
• Assigned to
• Resolved Date
• Resolved By
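The tracking parameters above map naturally onto a small record type. A minimal sketch (the field values in the example are made up):

```python
# Minimal sketch of a defect record carrying the tracking parameters above.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Defect:
    defect_id: str
    priority: str            # e.g. "P1".."P4"
    severity: str            # e.g. "Critical", "Major", "Minor"
    created_by: str
    created_date: date
    assigned_to: Optional[str] = None
    resolved_date: Optional[date] = None
    resolved_by: Optional[str] = None

bug = Defect("DEF-101", "P1", "Critical", "tester1", date(2024, 1, 15))
bug.assigned_to = "dev3"
print(bug)
```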
Defect/Bug tracking tool
We have various types of defect tracking tools available in software testing that help us
track bugs related to the software or the application.
Some of the most commonly used defect tracking tools are as follows:
o Jira
o Bugzilla
o BugNet
o Redmine
o Mantis
o Trac
o Backlog

Jira
Jira is one of the most important defect/bug tracking tools. Jira is a commercial tool from
Atlassian that is used for bug tracking, project management, and issue tracking. Jira includes
different features, like reporting, recording, and workflow. In Jira, we can track all kinds of
bugs and issues that are related to the software and raised by the test engineers.
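Jira also exposes a REST API through which defects can be logged programmatically. A hedged sketch using the v2 create-issue endpoint; the server URL, project key, and credentials are placeholders, and field details vary between Jira versions and configurations:

```python
# Sketch: logging a bug through Jira's REST API (v2 create-issue endpoint).
# Server URL, project key, and credentials are placeholders.
import requests

JIRA_URL = "https://jira.example.com"  # placeholder
payload = {
    "fields": {
        "project": {"key": "GURU"},    # hypothetical project key
        "summary": "Deposit page shows wrong balance",
        "description": "Steps to reproduce: ...",
        "issuetype": {"name": "Bug"},
    }
}

resp = requests.post(
    f"{JIRA_URL}/rest/api/2/issue",
    json=payload,
    auth=("user", "api-token"),  # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print("Created issue:", resp.json()["key"])
```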

Bugzilla
Bugzilla is another important bug tracking tool, widely used by many organizations to track
bugs. It is an open-source tool that helps the customer and the client keep track of bugs. It is
also used as a test management tool because we can easily link it to other test case
management tools such as ALM, Quality Center, etc. It supports various operating systems
such as Windows, Linux, and Mac.
Features of the Bugzilla tool
Bugzilla has some features which help us report bugs easily:
o A bug list can be generated in multiple formats
o Email notifications controlled by user preferences
o It has advanced searching capabilities
o This tool ensures excellent security
o Time tracking
BugNet
It is an open-source defect tracking and project issue management tool, written in ASP.NET
and C# and supporting the Microsoft SQL Server database. The objective of BugNet is to
reduce the complexity of the code, which makes deployment easy. The advanced version of
BugNet is licensed for commercial use.
Features of the BugNet tool
The features of the BugNet tool are as follows:
o It provides excellent security with simple navigation and administration.
o BugNet supports multiple projects and databases.
o With the help of this tool, we can get email notifications.
o It has the capability to manage projects and milestones.
o This tool has an online support community.
Redmine
It is an open-source issue tracking and web-based project management tool. Redmine is
written in the Ruby programming language and is compatible with multiple databases like
MySQL, Microsoft SQL Server, and SQLite.
While using the Redmine tool, users can also manage various projects and related
subprojects.
Features of the Redmine tool
Some of the common characteristics of Redmine are as follows:
o Flexible role-based access control
o Time tracking functionality
o A flexible issue tracking system
o Feeds and email notifications
o Support for multiple languages (Albanian, Arabic, Dutch, English, Danish, and so on)
MantisBT
MantisBT stands for Mantis Bug Tracker. It is a web-based bug tracking system, and it is
also an open-source tool. MantisBT is used to follow up on software defects. It is implemented
in the PHP programming language.
Features of MantisBT
Some of the standard features are as follows:
o With the help of this tool, we have full-text search accessibility.
o Audit trails of changes made to issues
o It provides revision control system integration
o Revision control of text fields and notes
o Notifications
o Plug-ins
o Graphing of relationships between issues
Trac
Another defect/bug tracking tool is Trac, which is also an open-source, web-based tool. It is
written in the Python programming language. Trac supports various operating systems such
as Windows, Mac, UNIX, Linux, and so on. Trac is helpful in tracking issues for software
development projects.
Through it, we can access the code, view changes, and view history. This tool supports multiple
projects, and it includes a wide range of plugins that provide many optional features, which
keeps the main system simple and easy to use.
Backlog
Backlog is widely used to manage IT projects and track bugs. It is mainly built for
development teams to report bugs with complete details of the issues, comments, updates,
and status changes. It is a project management software.
Features of the Backlog tool are as follows:
o Gantt and burndown charts
o It supports Git and SVN repositories
o It has an IP access control feature
o Native iOS and Android apps are supported

OPEN SOURCE TEST MANAGEMENT TOOL - TARANTULA

(Tarantula.fi)
Tarantula is a good tool for managing software testing in agile software projects, and it is an
open-source test management tool.
It offers you a free system, licensed as open-source software, and aims to be the best open-
source test management tool in various areas:
• Agile testing
• Test management
• Reporting
• Usability
When you log in to the Tarantula system, you'll be able to add cases for new features; Tarantula
opens up to a dashboard view displaying interesting statistics about the current project.

Tarantula then gives you the option to define a Test Object, which identifies the actual
"software/version/release" being tested. You can also create a Test Execution. An execution is
a collection of test cases run for the selected test object, so for each test object there may be
several different executions, e.g. Smoke Test, Integration Test, Performance Tests, etc.

At the touch of a button you start the test according to your definition. The Tarantula system
gives you general case information, steps and actions, entry of defects and comments, and, at
the end, a toolbar for entering the step result.

After the test, Tarantula provides comfortable-to-read reports and dashboards. These reports
and dashboards are easy to share with your coworkers and managers.
The dashboard offers a quick status view of your report. It is based on the Test Object, meaning
that you can select a particular "release/version" to view.
Project Status is often useful for periodic reporting to top managers; you can easily share the
report with your boss by email, or deliver a printed report to him.
Tarantula also gives you the option to see the Case Execution List, which is often used in a
report with detailed information about executed cases.
