Performance Lifecycle in Banking Domain
Abstract: Performance assurance and testing play a key role in complex applications and are an essential element of the application development life cycle. This case study is about integrating performance at a large national bank. Learn how custom monitoring, Six Sigma techniques, performance testing, and daily production reports played an important role in identifying production issues. This paper illustrates and examines the challenges and successes of performance planning, testing, analysis, and optimization after the release of X Bank's CRM application.
1. INTRODUCTION
Software Performance Lifecycle (SPL) is an approach that can be applied to all types of technologies and industries. This solution allows the true possibilities of the system and software under test to be examined. It allows for precise planning and budgeting. The SPL approach can begin at any stage of the Software Development Life Cycle (SDLC) and will mature as the wheel turns (or as the life cycle progresses). However, the earlier performance is evaluated, the sooner design and architectural flaws can be addressed, and the faster and cheaper the software development life cycle becomes. The wheel in this case is the development life cycle as a whole, not just one application release but all releases from the start of the application. The SPL steps include planning, testing, monitoring, analysis, tuning, and optimization. These steps will be discussed in conjunction with this case study. The idea is to begin the SPL approach at any point on the wheel (or development life cycle). As the wheel turns, the SPL approach will position itself to start earlier and earlier in the development life cycle for future release levels. In the example explained below, SPL started at the end of the first release of the application to be tested. Due to the late introduction, we ran into different issues and problems, but we jumped on and started the performance lifecycle. The introduction of the SPL approach will save significant time and money, while ensuring end user satisfaction. Our organization used this set of techniques and procedures and called it the Software Performance Lifecycle (SPL).

2. CASE STUDY
This performance case study involves a major national bank with over 2,000 branches, fictionally named X Bank for this study. The bank was facing performance issues with various portions of their CRM application. They were experiencing high response times, degraded throughput, poor scaling properties, and other issues. This caused dissatisfaction among their end users and customer base.

During the first round of implementing performance at X Bank, our responsibility as consultants was to help alleviate their performance issues. We were in charge of managing and executing the Software Performance Life Cycle, which included items such as planning, scripting, testing, analyzing, tuning, and managing the performance lab. As the application grew in size, so did the team. It started with two Senior Performance Engineers and evolved to a Senior Performance Engineer, a Senior Application Developer, two Scripting Resources, an Environment Team Resource, and a part-time Database Admin.

The production environment consisted of five ACS (Application Combined Servers) and one NT database server. The rollout plan for X Bank called for 500 branches every 6 months until reaching the goal of 2,000 branches. The CRM application technologies consisted of an ASP front end on IIS web servers, a C++ middle tier on MQSeries, and an Oracle database.

2.1 Planning & Setup Phase
The first step in the SPL process is planning; this entails planning for the entire process, creating a performance test plan, and setting up the performance environment. The initial responsibility of the two Performance Engineers was to interact with the bank's resources (business analysts, developers, system administrators, database administrators, application engineers, and others) to gather enough information to devise a performance test plan. To help create the performance test plan, we needed to fully understand the application behavior at X Bank. First, we sat down with business analysts to understand the major pain points and learn the application usage at the bank. We also made trips out to different branches and spoke to actual end users of the application to analyze their user experience and performance concerns.

Next, we needed to get a better understanding of the database volumes in production to allow us to properly populate the performance test lab database. To do this we received database row counts from the production database administrator for the previous three months, and we used that information in conjunction with the growth projections to appropriately populate the performance test lab database.

All the information gathered during the planning phase gave us a better understanding of the application and positioned us to create a performance test plan. The test plan included actual use case/business process steps, SLA goals, database sizing information, performance lab specifications, and exit criteria for this round.

Next, we set out to create an onsite performance test lab. The test lab included load testing servers, application servers, a database server, an Integrated Architecture (IA) server, and a host system. The performance lab environment mimicked production in terms of the application servers but lacked in terms of the number of CPUs on the IA Server. The load testing software of choice was LoadRunner and WinRunner. We utilized one load testing controller, two load generators, and two end-user workstations. The workstations were used in conjunction with WinRunner to truly understand the end user experience under load. Lastly, a custom dashboard application was written to monitor application transaction response times at the web and application tiers and to provide server statistics information.

The last step was to install the application on the servers and load the appropriate data volumes into the performance database server. We used Perl scripts to generate the appropriate volumes of test data, which gave us the flexibility to increase the volume as needed.
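As a minimal sketch of this data-generation step (the table names, row counts, growth factor, and CSV layout are illustrative assumptions, not the bank's actual scripts), a Perl script along these lines can scale production row counts by the growth projection and emit that many synthetic rows per table:

#!/usr/bin/perl
# Hypothetical sketch: scale production row counts by a growth
# projection and generate that many synthetic CSV rows per table.
use strict;
use warnings;

# Assumed inputs: production row counts and a projected growth factor.
my %prod_rows = (customers => 120_000, accounts => 350_000, contacts => 80_000);
my $growth    = 1.25;    # 25% projected growth for the next rollout wave

for my $table (sort keys %prod_rows) {
    my $target = int($prod_rows{$table} * $growth);
    open my $out, '>', "$table.csv" or die "Cannot write $table.csv: $!";
    for my $i (1 .. $target) {
        # Synthetic but structurally valid row: id, name, branch id (1..2000)
        print {$out} join(',', $i, "TEST_\U$table\E_$i", 1 + $i % 2000), "\n";
    }
    close $out;
    print "Generated $target rows for $table\n";
}

The generated files could then be bulk loaded into the Oracle lab database (for example with SQL*Loader) at whatever multiple of production volume a given test round called for.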
One of the dashboard response-time entries was the front-end time, which does not include GUI rendering time (shown as a dotted line below). Because the front-end time includes the back-end times, the difference provided us with just the web server response time, giving us another data point for our analysis. The last two entries provided the exact request size and response size of each transaction, allowing us to verify the correct data sizes for the appropriate business processes.
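To make the timing arithmetic concrete, the sketch below (the record layout, field order, and expected payload sizes are assumptions for illustration, not the actual dashboard format) subtracts the back-end time from the front-end time to isolate the web server share and flags transactions whose request or response sizes deviate from the values expected for their business process:

#!/usr/bin/perl
# Hypothetical sketch of the dashboard arithmetic: web-tier time is the
# front-end time minus the back-end time, and request/response sizes are
# checked against expected values per business process.
use strict;
use warnings;

# Assumed expected payload sizes (bytes) per business process.
my %expected = (
    OpenAccount  => { req => 2_048, resp => 8_192 },
    CustomerView => { req => 1_024, resp => 16_384 },
);

# Assumed record format: process,front_end_ms,back_end_ms,req_bytes,resp_bytes
while (my $line = <DATA>) {
    chomp $line;
    my ($proc, $front, $back, $req, $resp) = split /,/, $line;
    my $web_only = $front - $back;    # web server share of the response time
    printf "%-14s web tier: %4d ms", $proc, $web_only;
    my $exp = $expected{$proc};
    if ($exp && ($req != $exp->{req} || $resp != $exp->{resp})) {
        print "  ** payload size mismatch (req=$req, resp=$resp)";
    }
    print "\n";
}

__DATA__
OpenAccount,950,780,2048,8192
CustomerView,620,540,1024,20480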
3. PRODUCTION MONITORING
After we rolled all the changes into production and completed the first iteration of SPL at X Bank, we began monitoring production on a daily basis. The application production team provided daily server statistics for all of production. We used Six Sigma techniques such as regression analysis, processor capability charts, Xbar-S charts, and boxplots to aid in our production monitoring. First, we performed daily regression analysis on server processes to isolate any top-consuming processes. The regression analysis consisted of gathering process information from all servers, supported by Perfmon, Minitab, and Perl. Perfmon is a Windows monitoring solution, Minitab is a statistical computing system, and Perl, a scripting language, was used to parse and format the Perfmon data to fit Minitab.
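A minimal sketch of that parse-and-format step is shown below; the Perfmon counter paths and the tab-separated output layout are assumptions about what a Minitab-importable worksheet could look like, not the actual production script:

#!/usr/bin/perl
# Hypothetical sketch: reformat a Perfmon process-counter CSV into a
# tab-separated table (one column per process, one row per sample)
# that can be imported into Minitab as a worksheet.
use strict;
use warnings;

my $in = shift @ARGV or die "usage: $0 perfmon.csv > worksheet.txt\n";
open my $fh, '<', $in or die "Cannot open $in: $!";

# Perfmon CSVs quote every field; strip the outer quotes and split on ","
my @header = split /","/, do { my $h = <$fh>; $h =~ s/^"|"\r?\n?$//g; $h };

# Reduce counter paths such as \\HOST\Process(store)\% Processor Time
# to just the process name; keep the first column as the sample time.
my @cols = ('Time');
for my $i (1 .. $#header) {
    my ($proc) = $header[$i] =~ /Process\(([^)]+)\)/;
    push @cols, defined $proc ? $proc : $header[$i];
}
print join("\t", @cols), "\n";

while (my $line = <$fh>) {
    $line =~ s/^"|"\r?\n?$//g;
    print join("\t", split /","/, $line), "\n";
}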
Next, the formatted data was imported into Minitab and a Minitab worksheet was created. After the Minitab worksheet was created, a regression analysis was performed to find the top-consuming processes [MINI03]. For this analysis, Processor Total was the Response (y variable) and the processes to be analyzed were the Predictors (x variables). See the picture below.
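As a simplified stand-in for that step (a correlation screen rather than the actual Minitab regression), the sketch below ranks each process column by its Pearson correlation with the Processor Total column to surface candidate top consumers; the process names and sample values are purely illustrative:

#!/usr/bin/perl
# Simplified screening sketch (not the Minitab regression itself):
# rank each process column by its Pearson correlation with the
# Processor Total column to surface the top-consuming processes.
use strict;
use warnings;

# Assumed worksheet: column 0 = Processor Total, remaining columns = processes.
my @names = qw(ProcessorTotal store.exe sqlservr inetinfo svchost);
my @rows  = (
    [82, 40, 25, 10, 3],
    [65, 28, 22,  9, 2],
    [91, 55, 20, 11, 4],
    [47, 15, 18,  8, 2],
);

my @y = map { $_->[0] } @rows;
my %r;
for my $c (1 .. $#names) {
    my @x = map { $_->[$c] } @rows;
    $r{ $names[$c] } = pearson(\@x, \@y);
}
printf "%-14s r = %+.3f\n", $_, $r{$_} for sort { $r{$b} <=> $r{$a} } keys %r;

sub pearson {
    my ($x, $y) = @_;
    my ($mx, $my) = (mean($x), mean($y));
    my ($sxy, $sxx, $syy) = (0, 0, 0);
    for my $i (0 .. $#$x) {
        my ($dx, $dy) = ($x->[$i] - $mx, $y->[$i] - $my);
        $sxy += $dx * $dy;
        $sxx += $dx * $dx;
        $syy += $dy * $dy;
    }
    return $sxx && $syy ? $sxy / sqrt($sxx * $syy) : 0;
}

sub mean { my ($v) = @_; my $s = 0; $s += $_ for @$v; return $s / @$v }

In practice the predictors would come from the formatted Perfmon worksheet rather than literals, and Minitab's regression output additionally provides coefficients and p-values for each process.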
The table below shows the list of top CPU-consuming processes:

The processor capability graph [MINI03] below shows the processor distribution model around the CPU utilization for the server on a given day. We used this information to see how many times (or parts per million, PPM) the data exceeded our upper specification limit (USL). In our case, any data point outside the 80% USL mark is considered defective because the application degraded after CPU utilization hit 80%. In this graph there are a few things to keep in mind: the Left Boundary (LB), the Upper Specification Limit (USL), and parts per million (PPM). In Graph 1, PPM > USL is 2274, indicating that for every 1 million data points of CPU utilization, we are falling outside our acceptable range 2274 times.
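That PPM figure can be reproduced with a few lines of the same scripting approach; the sketch below uses synthetic utilization samples (an assumption, standing in for a day of Perfmon data) and simply counts the fraction of samples above the 80% USL, scaled to parts per million:

#!/usr/bin/perl
# Hypothetical sketch: compute the parts-per-million (PPM) of CPU
# utilization samples falling above the 80% upper specification limit.
use strict;
use warnings;

my $usl     = 80;                                   # application degrades beyond 80% CPU
my @samples = map { 30 + rand(60) } 1 .. 10_000;    # stand-in for a day of Perfmon samples

my $defects = grep { $_ > $usl } @samples;
my $ppm     = 1_000_000 * $defects / @samples;

printf "USL=%d%%  defects=%d of %d samples  PPM=%.0f\n",
    $usl, $defects, scalar @samples, $ppm;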
3.1 Conclusion
Start the SPL approach at any stage of the SDLC. The bank faced performance issues in production, which caused dissatisfaction among end users and bank personnel and risked the loss of future product upgrades. This could have cost millions of dollars in product revenue and maintenance licenses. But it did not, even though we started the SPL process of planning, testing, monitoring, analysis, tuning, and optimization after release 1 was in production. Since we were already on the wheel, we were able to include SPL earlier and earlier in the Software Development Life Cycle as the product grew, or as the wheel turned. The SPL approach saved significant time and money, ensured production readiness, improved performance and scalability, and built confidence.

4. REFERENCES
[MINI03] MINITAB Statistical Software