
Rohith P S

E-mail: psrohi@gmail.com Contact No: +91 9884538156


LinkedIn: www.linkedin.com/in/rohith-p-s-43983213

Objective

To build a career that makes me better, enhances my knowledge and skills, and gives me adequate
opportunity to realize my innate potential.

Work Experience

➢ Over 16 years of diverse experience in resilience, reliability, performance and chaos
engineering, observability, performance tuning, application/performance monitoring, production
incident analysis, and performance testing of large Java, microservices, PCF and big data
applications.
➢ Strong experience in implementing observability using AppDynamics and Kibana for
performance monitoring, customizing performance metrics, creating end-to-end flow dashboards
and performing log analysis.
➢ Strong knowledge in defining CI/CD models using Git and Jenkins.
➢ Exposure to AWS, Kubernetes and OpenShift.
➢ Have strong background in performance tuning of Java, Spark, Storm, HBase, Hive, Kafka, DB2
LUW and COBOL applications.
➢ Strong knowledge in application profiling, tuning, heap memory analysis, thread contention
analysis, system monitoring, Resilience validation and performance testing.
➢ Scripting knowledge in Bash & Python (a brief illustrative sketch follows this list).
➢ Strong knowledge in Resilience, Reliability, and chaos engineering concepts.
➢ Good experience/knowledge in Pivotal Cloud Foundry and Microservices Architecture.
➢ Proficient in performance engineering concepts, defining Non-Functional Requirements,
designing performance test scenarios, isolating problems and providing recommendations for
improvements.
➢ Extensive experience in analysis of CPU utilization, memory usage, garbage collection and DB
connections to verify the performance of the application.
➢ Analyzed performance, scalability and availability issues in production.
➢ Experience in Capacity planning for Hadoop HBase.
➢ Experience in performance testing using LoadRunner.
➢ Experience executing performance, load, stress and other non-functional tests.
➢ Experience working in Windows, UNIX and Mainframe environments.
➢ Experience in Onsite - Offshore working model and direct client interaction.
➢ Flexible & capable of successfully managing multiple projects simultaneously.
➢ Excellent communicator: leverage technical acumen to communicate effectively with client
executives and their respective teams, managers and other teams in the enterprise at all levels.
➢ Effective problem-solving skills, strong analytical skills, outstanding interpersonal skills, good in
written and verbal communication.
➢ Ability to work independently as well as within a team environment. Driven to meet deadlines.
Motivated to produce robust, high-performance software.
➢ Experience in Agile methodologies and the Software Development Life Cycle (SDLC) Waterfall model.
➢ Worked with Tata Consultancy Services Pvt. Ltd. for 3.8 years as a Mainframe developer.
➢ Currently working with Cognizant Technology Solutions as Performance Architect/SRE
since September 2009.
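
As an illustration of the Bash & Python scripting mentioned above, here is a minimal sketch of a
log-triage helper of the kind used in day-to-day SRE work. The log path and error patterns are
hypothetical placeholders, not tied to any specific engagement.

#!/usr/bin/env python3
# Minimal log-triage sketch: count the most frequent error lines in a log.
# The path and patterns below are illustrative placeholders.
import re
import sys
from collections import Counter

ERROR_PATTERN = re.compile(r"ERROR|Exception|Timeout")  # hypothetical patterns

def top_errors(log_path, limit=10):
    """Return the most frequent error-matching lines in the log."""
    counts = Counter()
    with open(log_path) as fh:
        for line in fh:
            if ERROR_PATTERN.search(line):
                counts[line.strip()[:120]] += 1  # truncate very long lines
    return counts.most_common(limit)

if __name__ == "__main__":
    log = sys.argv[1] if len(sys.argv) > 1 else "app.log"  # placeholder path
    for text, count in top_errors(log):
        print(f"{count:6d}  {text}")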

Software Proficiency

Technical Languages: Java, JCL, COBOL, PL/1, Python
Java Frameworks: Spring Boot, Spring Batch, Spring Cloud, JPA, Spring Actuator, Circuit Breaker, Service Discovery
Hadoop: Spark, Storm, Kafka, HBase, HDFS, Hive, YARN
Tools: AppDynamics, Kibana, Git, Jenkins, Strobe, APA, LoadRunner, Instana, Datadog, Prometheus, Grafana
Cloud: PCF, AWS, Kubernetes
Operating Systems: OS/390, Windows, UNIX
Databases: HBase, Oracle, DB2, IMS
Other Technologies: RabbitMQ, GemFire, Mainframe
Certification: AWS Certified Cloud Practitioner

Projects Profile

Discover

Technology: Java, PCF, Spark, Storm, Kafka, Hive, HBase, RabbitMQ Domain: Banking
Location: USA & India Duration: Nov 2016 till Date
Description: Discover, after its merger with the Pulse network, decided to create an Enterprise
Payments Platform (EPP) to support all of its businesses, i.e. Discover, Diners Club and Pulse. The
EPP program aimed to integrate and create shared, multi-tenant services by migrating multiple
legacy systems onto Big Data Hadoop with microservices-based RESTful APIs. The platform also aims
to set a new industry standard by moving traditional batch processing to near-real-time/real-time
payments processing. The EPP architecture involves multiple components such as a DFTP server,
Network and Edge servers (Java-based applications), Cloak (security tokenization), RabbitMQ queues,
GemFire DB, microservices on Pivotal Cloud Foundry, and Hadoop (Kafka, Storm, Spark and HBase).
Role: Service Reliability Engineer
Responsibilities:

• Implement observability to monitor the end-to-end application using AppDynamics and Kibana.
• Evaluate the non-functional requirements and acceptance criteria. Derive SLI/SLO and define
health rules and alerts for all the application components.
• Conduct Resilience validation for all the application components.
• Configure AppDynamics agents/extensions on all the servers, Hadoop components, Java
processes and PCF services.
• Root cause analysis for production health rule violations.
• Define the CI/CD model to deploy microservices to PCF and Spring Batch JVMs onto virtual machines
using Jenkins and GitHub.
• Define and build an end-to-end application monitoring model by integrating all the application
components in AppDynamics.
• Performance tuning of the big data application by monitoring metrics and statistics like process
latency, worker process utilization, compaction queue depth & latencies, etc. for Hadoop components:
HBase, YARN, Spark, Hive, HDFS, MapReduce, Storm, Kafka and ZooKeeper.
• Performance tuning of Pivotal Cloud Foundry application containers and microservices.
• Capacity planning and forecasting model for the Hadoop data store, Pivotal Cloud Foundry and
AppDynamics.
• Analyze and identify processes for application monitoring; derive critical business flows for
monitoring.
• Customize performance metrics to monitor RabbitMQ, Hadoop components (Kafka, Storm,
Spark, Hive and HBase) and PCF container.
• Build business and performance dashboards using AppDynamics and Kibana for all the
processes, critical business flows and end-to-end application monitoring.
• Define Performance Engineering strategy based on current and future workloads.
• Monitor the end-to-end application and collect detailed performance metrics: hardware metrics
(CPU, memory, I/O and network traffic), application metrics (heap utilization, garbage
collection, response time, errors and transactions per second), PCF Diego, BOSH and container
metrics, and Hadoop big data metrics for HBase, Spark, Storm and Kafka.
• Conduct deep dive analysis on performance metrics using tools like AppDynamics, Grafana,
Kibana, common logging system dashboards, Flame Graph and FireHose.
• Provide recommendations for improvements.
• Derive PCF container memory and parallel instance counts. Use regression algorithms and
machine learning to create the forecast model for application usage (see the sketch after this list).
• Deliver monthly performance report with detailed application performance comparison.
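
A minimal sketch of the regression-based forecast mentioned above, assuming hypothetical monthly
transaction volumes; the real model and data are internal and not reproduced here.

# Linear-regression capacity forecast sketch; the volumes are made-up
# placeholders, not actual production data.
import numpy as np

months = np.arange(1, 13)                          # last 12 months
volume = np.array([410, 418, 431, 440, 455, 462,
                   478, 490, 505, 514, 530, 544])  # hypothetical volumes (millions)

# Fit a first-order trend: volume ~ slope * month + intercept
slope, intercept = np.polyfit(months, volume, 1)

# Project six months ahead to size HBase storage / PCF instance counts early
for m in range(13, 19):
    print(f"month {m}: ~{slope * m + intercept:.0f}M transactions")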

HealthNet

Technology: Java, VMS COBOL, Oracle Rdb, PCA, LoadRunner Domain: HealthCare
Location: USA Duration: Dec 2013 till Oct 2016
Description: Automated Business System (ABS) is the internal core system that manages the
complete health management system, including member, group, provider and claims details. ICI
initially developed the ABS system; Health Net bought the code and application from ICI in 1986 and
started programming its own applications. ABS was developed using VAX BASIC, SCRGEN,
COBOL & Oracle Rdb technology. ABS interacts with different interfaces and third-party tools to
manage member, group, provider and claim details. It is a fully integrated system with the following
modules: Eligibility/Membership, Finance, Benefits, Claims and Provider. This project involves
various enhancements to increase Health Net's business and comply with CMS standards. There are
also enhancements for the Duals Eligibility Demonstration Program, which would be implemented by
2013; the groundwork, analysis & implementation of various enhancements is in progress for this
project. There are multiple batch jobs to accomplish the billing cycles, printing of ID cards and other
major functionalities. The project follows the onsite-offshore model, meaning new requirements
requested by clients are worked on from offshore as well as onsite. The project includes analysis
and solution proposition of service requests raised by business users. A service request is basically a
new enhancement that is driven by business or another application. A service request involves
analysis, design document preparation, coding, testing, implementation & post-implementation
support.

Medi-Cal is California's Medicaid program that provides health care services for low-income individuals,
including families with children, seniors, persons with disabilities, foster care, pregnant women and low-
income people with specific diseases such as tuberculosis, breast cancer or HIV/AIDS, and is similar to
HMO business. QCare is the core system for Medi-Cal and provides 100% of the system support to
Health Net of California's Medi-Cal line of business.

Role: Performance Engineer


Responsibilities:
• Participated in project brief/BARR/work order assessments.
• Estimate the cost and time duration for the project from a performance engineering perspective.
Identify critical workloads and business flows/scenarios.
• Share the analysis made on the requirement with the offshore team through meetings/mails so that
onsite and offshore are on the same page.
• Simulate performance issues for the identified transactions of the ABS application in the
given environment.
• Responsible for monitoring the application server resource utilization, thread usage, connection
pool usage and performance characteristics of applications involved during performance testing.
• Responsible for monitoring the database server resource utilization, session management and long-
running Oracle Rdb SQL queries.
• Used MONITOR/PCA reports to identify the root cause of performance bottlenecks at the
application and database ends.
• Responsible for tuning the application to meet the performance requirement without any
degradation in the quality of service.
• Used the LoadRunner automation tool to find the application breaking point as the load
increased linearly (illustrated in the sketch after this list).
• Analyzed the potential performance and scalability bottlenecks and their root cause from the
logs and resource utilization of application as well as database systems.
• Provide recommendations to resolve the identified performance bottlenecks and to improve
the overall application performance & scalability.
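
The breaking-point tests above were run with LoadRunner; purely as a toy illustration of the
linear ramp-up idea, here is a Python sketch. The URL, user counts and thresholds are hypothetical.

# Toy linear ramp-up sketch: increase concurrency step by step and watch
# latency and error rate. Illustrative only; the real tests used LoadRunner.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.internal/health"  # hypothetical endpoint

def hit(url):
    """Issue one request; return (elapsed_seconds, success)."""
    start = time.time()
    try:
        with urllib.request.urlopen(url, timeout=5):
            return time.time() - start, True
    except Exception:
        return time.time() - start, False

def ramp(max_users=50, step=5, requests_per_user=10):
    for users in range(step, max_users + 1, step):
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = list(pool.map(hit, [URL] * (users * requests_per_user)))
        errors = sum(1 for _, ok in results if not ok)
        avg = sum(t for t, _ in results) / len(results)
        print(f"{users:3d} users: avg {avg:.3f}s, errors {errors}/{len(results)}")

if __name__ == "__main__":
    ramp()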

Cigna

Technology: Java , DB2 LUW Domain: Insurance


Location: USA Duration: May 2013 till Nov 2013
Description: This is a performance engineering assignment for optimizing the GCM reporting application.
This is a data warehouse application that gets feeds from different external vendors and loads them
into the data warehouse. This loading of data happens daily in the nightly batch cycle. Execution times
for most of the jobs in the nightly batch cycle were not meeting the SLA. The scope of this project is
to performance-tune and optimize the jobs that are not meeting the SLA.
Role: Performance Engineer
Responsibilities:
• Interaction with various business and technical stakeholders of the project for gathering non-
functional requirements and environment details.
• Responsible for getting the list of the top CPU-consuming jobs using DB2 monitoring views and DB2
snapshots.
• Collate and analyze several reports generated by the monitoring tools to identify the hot
spots in the application.
• Profiled the application using the profiler tool to identify the root cause of the
performance bottlenecks.
• Responsible for providing suggestions and tuning the application to meet the performance
requirement without any degradation in the quality of service.
• Worked closely with key project stakeholders such as Subject Matter Experts (SMEs), solution
architects and business analysts.
• Collect DB2 snapshots.
• Analyze the DB utilization and DB2 snapshot.
• Analyze the deadlock and lock timeout events.
• Tune the queries that exceed the SLA.
• Redesign the stored procedures.

MetLife

Technology: Cobol, JCL, DB2 Domain: Insurance


Location: India Duration: May 2012 till Apr 2013
Description: This is a performance engineering assignment for optimizing MIPS. The project was
started to find out the mainframe utilization and usage trend over the last year. MetLife has both
online and batch applications. We extracted the EIBR and INSIGHT reports for the past 6 months and
analyzed CPU utilization. From these reports we got the list of the top CPU-consuming jobs, then used
Strobe to profile the jobs and enhance their performance.
Role: Performance Engineer
Responsibilities:
• Interaction with various business and technical stakeholders of the project for gathering the
requirements.
• Responsible for getting the list of the top CPU-consuming jobs using INSIGHT and EIBR reports.
• Collate and analyze several reports generated by the monitoring tools to identify the hot
spots in the application.
• Profiled the application using the profiler tool to identify the root cause of the
performance bottlenecks.
• Responsible for providing suggestions and tuning the application to meet the performance
requirement without any degradation in the quality of service.
• Collect DB2 snapshots.
• Analyze the DB utilization and DB2 snapshot.
• Analyze the deadlock and lock timeout events.
• Tune the queries that exceed the SLA.
• Redesign the stored procedures.

Travelport

Technology: Java, .NET, DB2 LUW Domain: Travel


Location: India Duration: Jan 2010 till Mar 2012
Description: This is a performance engineering assignment to optimize Travelport applications.
UProfiles is an online application developed in Java, and the database used is DB2 LUW. The main
objective of this project is to tune the Java code and the SQL queries. In this engagement we would
run the online application and then monitor DB utilization, application server utilization and
transaction response times. We would pick the transactions that were above the SLA and tune them.
While running, we would collect DB2 snapshots, pick all the queries that were above the SLA, and
analyze them for tuning; we also monitored lock timeout and deadlock events (a sketch of this
snapshot triage step follows the responsibilities below).
Role: Performance Engineer
Responsibilities:
• Performance tuning of DB2.
• Collect DB2 snapshots.
• Analyze the DB utilization and DB2 snapshot.
• Analyze the deadlock and lock timeout events.
• Tune the queries that exceed the SLA.
• Redesign the stored procedures.
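
As referenced in the description above, a sketch of the snapshot triage step: flag SQL statements
whose average execution time exceeds the SLA. The CSV layout and column names here are a
hypothetical export, not the literal DB2 snapshot format.

# Hypothetical snapshot triage: flag statements breaching the SLA.
import csv

SLA_SECONDS = 2.0  # illustrative SLA threshold

def slow_queries(snapshot_csv):
    """Yield (statement, avg_seconds) for statements exceeding the SLA."""
    with open(snapshot_csv) as fh:
        for row in csv.DictReader(fh):
            runs = int(row["num_executions"]) or 1
            avg = float(row["total_exec_time_s"]) / runs
            if avg > SLA_SECONDS:
                yield row["stmt_text"], avg

if __name__ == "__main__":
    for stmt, avg in sorted(slow_queries("db2_snapshot.csv"), key=lambda r: -r[1]):
        print(f"{avg:7.2f}s  {stmt[:80]}")
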
Capital One

Technology: COBOL, JCL, DB2 Domain: Banking


Location: India Duration: Sept 2009 till Dec 2009
Description: This is a performance engineering assignment for optimizing MIPS for the CardIT
application. CardIT is an online and batch application, but this engagement concentrated only on the
batch application.
As part of this project we extracted the SMF and RMF reports for the past 2 months and analyzed
CPU utilization. From these reports we got a list of the top CPU-consuming jobs and concentrated on
these jobs for tuning. In some of the jobs it was a COBOL-DB2 program that was taking more time; in
those cases we used APA for tuning.
Role: Performance Engineer
Responsibilities:
• Extract SMF and RMF reports.
• Analyze the CPU utilization of the application.
• Identify the top CPU-consuming jobs and tune them.
• Used APA for COBOL-DB2 programs.
• Provide recommendations.

AXA

Technology: COBOL, JCL, IMS DB/DC Domain: Insurance


Location: India Duration: June 2009 to Sept 2009
Description: This is an enhancement project carried out for an insurance client from the USA under
the products department. The main objective of this project is to enhance and maintain the existing
RLAS system.
System Overview: RLAS is an online and batch IMS application that supports all the related
functions for the payment of pension benefits. The functions include establishing individuals at the
time of retirement, making one-time and periodic payments to retirees via paper checks or
electronic funds transfers, related tax and premium deduction withholding and reporting, fund
accounting, client bulk processing and reporting, and valuation support.
Role: Developer
Responsibilities:
• Enhanced the existing RLAS system by adding new options to the online RLAS screens.
For this we made changes to existing COBOL modules and in some cases developed new
programs.
• Maintained the RLAS system: if there is an abend in the production environment, fix
the issue as soon as possible.
• Took care of all the quality processes required at the project level, such as IQA, EQA
and FI.
• Did all types of project management activities, like assigning tasks to team members and
ensuring deliverables are delivered on time.
• Coordinated with the client for all the mainframe changes.

Safeco

Technology: COBOL, JCL Domain: Insurance


Location: India Duration: May 2008 to June 2009
Description: This project is carried out for an insurance client from the USA under the claims
department. The main objective of this project is to integrate 'ClaimIQ' (a vendor tool from Mitchell
Int.) into the existing claims management system.
New system: Loss details captured in Safeco's claims application are sent to a third-party tool
(ClaimIQ) for the data investigation, evaluation and negotiation process. Claim data is sent to ClaimIQ
automatically when the BI (bodily injury) claim criteria are met, or via a manual push. Only updated
claim data is sent to ClaimIQ for evaluation. Claim data is sent to the Mitchell vendor for evaluation
only if it is a BI claim; after completing the evaluation, the evaluated data is sent back to the claims
management system.
Claim data is sent to 'ClaimIQ' using IBM message queues. Once the loss details are completely
captured, a COBOL module runs business rules and validates whether it is a BI claim. If the claim is a
BI claim, the COBOL module puts the entire claim data on a message queue that is subscribed to by
'ClaimIQ', and some DB2 tables are updated with an indicator that the claim data has been sent to
'ClaimIQ'.
Role: Project Leader
Responsibilities:
• Coded new COBOL modules for executing the business rules, a COBOL stored procedure for
updating DB2 tables and writing messages to the message queue, and COBOL modules for
updating the IMS DB.
• Added new batch jobs and updated the scheduling of some existing jobs.
• Took care of all the quality processes required at the project level, such as IQA, EQA
and FI.
• Did all types of project management activities, like assigning tasks to team members and
ensuring deliverables are delivered on time.
• Coordinated with the client for all the mainframe changes.

USAA

Technology: COBOL, JCL, PL1, IMS DB/DC, DB2 Domain: Insurance


Location: India Duration: Nov 2007 to May 2008
Description: The main objective of this project is to migrate the workitem UDB to the host (OS/390).
When business processing is done, this database is updated.
Old system: When business processing is done, rules are executed to update the workitem UDB. In
the old system these rules are coded in Java and the UDB is updated directly.
New system: We are migrating the workitem UDB to the host. All the business rules required for
updating the workitem DB are coded in COBOL modules. There are two scenarios in which the
workitem DB is updated:
1) When an IMS transaction is run on the host: here we made changes in the processing modules
of the IMS transaction to call the new COBOL utilities for executing the rules and updating the
workitem DB.
2) When executing business processes from the online internet screen: we used 'IMS Connect'
to connect to the host for executing these COBOL modules.
Role: Developer
Responsibilities:
• Coded new COBOL modules for executing the business rules and updating the workitem DB.
• Made changes in the processing module of the IMS transaction and in the PL/1 controller of 'IMS
Connect' to call the new utility for updating the workitem DB.
• Added new batch jobs and updated the scheduling of some existing jobs.
• Took care of all the quality processes required at the project level, such as IQA, EQA
and FI.
• Did all types of project management activities, like assigning tasks to team members and
ensuring deliverables are delivered on time.
• Coordinated with the client for all the mainframe changes.

USAA

Technology: COBOL, JCL, PL1, IMS DC/DB, DB2 Domain: Insurance


Location: India Duration: Jan 2006 till Nov 2007
Description: This project is carried out for an insurance client from the USA under the claims
department. The project is designed to automate all the steps carried out for an insurance claim/loss.
Old system: All the business processing is carried out through IMS transactions on the host via IMS
screens. A claim handler (the person handling the loss) must run IMS transactions to carry out any
business tasks, and can see the business tasks carried out on a loss only in the host IMS screens.
He can neither view nor do any business processing from online internet screens.
New system: We enhanced the old system. Now a claim handler can do all the business processing
from the online internet screens without going to the host to run IMS transactions. We used IMS
Connect to connect to the host for running the business rules and for updating the IMS DB and DB2
tables. Now the claim handler can see all the business processing carried out through IMS
transactions in the online internet screens. We made changes in all the IMS transactions to reflect
the business processing carried out in IMS in the online internet screens as well. We used IBM
message queues to communicate between the host and Java. We made changes in the processing
modules of the IMS transactions to publish events to the mid-tier (Java). The COBOL modules write
events to message queues, which are subscribed to by Java subscribers that update their DB and
reflect the changes in the online internet screens. We also have batch jobs for processing business
tasks; we made changes in the batch JCL to publish events to the mid-tier (Java).
Role: Developer
Responsibilities:
• Coded new COBOL modules for executing the business rules and updating the IMS DB and DB2
tables.
• Made changes in IMS transactions for publishing events to the mid-tier (Java) using IBM message
queues.
• Made changes in some batch job JCL to publish events to the mid-tier (Java) via message queues.
• Made changes in existing PL/1 controller modules of IMS Connect.

Education

2005: B.E. (Computer Science and Engineering), Visveswaraiah Technological University, Belgaum
2001: HSC [10+2], Karnataka Pre-University Board, Bangalore
1999: 10th, C.B.S.E.
