Resume - MGayathri


AZURE DEVOPS ENGINEER

MOULIKA GAYATHRI
Email: moulika028@gmail.com
Mobile: +1 (281) 810-8965
LinkedIn: www.linkedin.com/in/moulika-gayathri-nuthalapati-aa6150192
__________________________________________________________________________________________
Professional Summary
 Highly skilled and experienced Azure DevOps Engineer with 10+ years of expertise in managing and optimizing
cloud infrastructure, automation, and continuous integration/continuous delivery (CI/CD) pipelines.
 Seeking a challenging role as an Azure DevOps Engineer to leverage technical proficiency and problem-solving abilities to drive efficiency, scalability, and reliability within a dynamic organization; experienced across various Linux and Windows server environments and in adopting AWS-based cloud strategies.
 Experience working on Azure Cloud Services, Azure Storage, Azure Active Directory, and Azure Service Bus; managed clients' Microsoft Azure based PaaS and IaaS environments.
 Experienced in Azure IaaS: provisioning VMs, virtual hard disks, and virtual networks; deploying Web Apps and creating WebJobs; Azure Windows Server, Microsoft SQL Server, Microsoft Visual Studio, Windows PowerShell, and cloud infrastructure.
 Used Terraform and AWS OpsWorks to deploy the infrastructure necessary to create development, test, and production environments for a software development project.
 Experience in automating, configuring, and deploying instances on AWS cloud environments and data centers; familiar with EC2, S3, ELB, CloudWatch, and SNS, and with managing Security Groups and IAM on AWS.
 Hands-on experience creating IAM users and groups on AWS and configuring them with the AWS CLI.
 Expertise in container systems (Docker), container orchestration (Kubernetes), and the enterprise OpenShift container platform.
 Experience with container-based deployments using Docker: working with Docker images, Docker Hub, and Docker registries, and installing, configuring, and clustering Kubernetes.
 Familiar with designing and deploying container-based production clusters using Docker.
 Designed and created multiple deployment strategies using CI/CD Pipelines using Jenkins.
 Configured and administered Jenkins for automated builds; responsible for installing Jenkins master/slave nodes.
 Experience in building CI/CD pipelines to automate the Java code release process using integration tools like Git, GitHub, Jenkins, and artifact repositories.
 Knowledge of Java, JSP, Servlets, SOAP web services, RESTful web services, Spring Boot, XML, HTML, and CSS.
 Experience in software build tools like Apache Maven and Apache Ant, writing pom.xml and build.xml respectively.
 Expertise in developing Chef recipes to configure, deploy, and maintain software components of existing infrastructure.
 Expertise in branching, tagging, and maintaining the versions across environments using GIT/SVN.
 Experience in Designing, Installing, and Implementing Ansible configuration management system and in writing
playbooks for Ansible and deploying applications.
 Expert in deploying code to web application servers such as WebSphere, WebLogic, Apache Tomcat, and JBoss.
 Extensive experience in automating and deployment of different objects in Oracle EBS R12/11i.
 Expertise in querying RDBMS such as Oracle and MySQL using SQL for data integrity.
 In-depth understanding of the principles and best practices of Software Configuration Management (SCM).
 Strong knowledge on source controller concepts like Branches, Merges and Tags.
 Worked with engineers, QA, and other teams to ensure automated test efforts were tightly integrated with the build system, and fixed errors during builds and deployments.
 Experience in running web scale services on Amazon Web Services.
 Performed Analytical Warranty System (AWS) and Early Claims Binning (ECB) claims analysis.
 Exposed to all aspects of the software development life cycle (SDLC), such as analysis, planning, development, testing, implementation, and post-production analysis of projects.
 Experience in using bug tracking systems like JIRA, Remedy, HP Quality Center, and IBM Clear Quest.
 Proficient in tracing complex build problems, release issues & environment issues in multi-component environment.
 Strong knowledge of and experience in creating Jenkins CI pipelines and deployments, and used Maven to automate most of the build-related tasks.
 Set up test environments for patches and hotfixes, and implemented patches for application testing and staged deployment.
Technical Skills Summary

Cloud Services: Azure, AWS, GCP
Microsoft Azure: (IaaS) Azure Cloud Services, Azure Virtual Machines, Azure Virtual Networks/Storage/SQL, Azure Backup, AzCopy, Azure Load Balancer, Azure Traffic Manager, Azure DNS; (SaaS) Azure AD and AD Connect; (PaaS) App Services & Function Apps
Build Tools: ANT, Maven, Groovy, Gradle, Argo CD
Version Control Systems: Git, Bitbucket, SVN, GitHub, GitLab
Configuration Management Tools: Puppet, Chef, Ansible
Databases: MySQL, SQL, Milvus, PostgreSQL, MongoDB, Amazon Redshift
Container Services: Docker, Kubernetes, OpenShift
Scripting Languages: PowerShell, Python, Bash
Artifact Repositories: Nexus, Artifactory
CI Tools: Jenkins, Azure DevOps
Monitoring Tools: Nagios, CloudWatch, Prometheus, Grafana
Application Servers/Middleware: Tomcat, Apache HTTP Server, WebLogic, Red Hat
Bug Tracking and Ticketing Tools: ClearQuest, JIRA
Network Protocols: HTTP, SMTP, SNMP, ICMP, TCP/IP
Operating Systems: Windows, Linux (RHEL 6.9, 7.0, 7.2), Solaris (SPARC 10, 11)
SDLC: Agile and Waterfall methodologies

Educational Qualification
 Bachelor of Science, Jawaharlal Nehru Technological University (2012)
Project Details
Client : Cox Automotive Inc – Redwood City, CA (Remote) December 2020 – Present
Role : Azure DevOps Engineer

Responsibilities:
 Involved in AWS & Azure DevOps migration/automation processes for build and deploy systems.
 Implemented the Build automation process for all the assigned projects in Vertical Apps domain.
 Set up automated provisioning and SCM systems from scratch so that any new system added to the infrastructure by auto-scaling mechanisms would be provisioned and configured to a service-ready state automatically; built using AWS Auto Scaling, user-data scripts, Chef, ELB, IAM, and SNS.
 Launching Amazon EC2 cloud instances using Amazon Machine Images (Linux/Ubuntu) and configuring launched instances for specific applications (see the boto3 launch sketch after this list).
 Monitor the performance of the application and the AKS cluster using Azure Monitor and Prometheus/Grafana.
 Implement horizontal and vertical scaling strategies to handle varying workloads (a programmatic scaling sketch appears after this list).
 Optimize resource allocation for containers and nodes to ensure efficient resource utilization.
 Automate the process of building container images, running tests, and deploying to the AKS cluster.
 Debug and troubleshoot problems within the AKS cluster, such as node failures or networking issues.
 Integrated IoT devices, including smart thermostats, smart plugs, and energy monitoring sensors, to gather real-time
data on energy usage from various appliances.
 Developed a cloud-based backend using platforms like AWS IoT Core to securely collect, process, and store the IoT
data.
 Building and managing Azure Infrastructure and Working on Microsoft Azure (Public) Cloud to provide IaaS
support.
 Utilized CloudWatch metrics and alarms to monitor the ELB's performance and trigger scaling actions.
 Used an Elastic Load Balancer (ELB) to switch traffic between the blue and green environments during deployments.
 Set up and configure API gateways in APIM to manage incoming API requests, traffic routing, and load balancing.
 Implement policies to enforce security, traffic management, rate limiting, and transformation of requests/responses.
 Define versioning strategies and manage the lifecycle of APIs through stages such as development, testing, staging,
and production.
 Publish, unpublish, and retire APIs as needed, while ensuring minimal disruption to consumers.
 Maintain comprehensive documentation on API usage, policies, and best practices for both internal teams and
external developers.
 Provide training to development teams and stakeholders on APIM usage and features.
 Plan for scaling the API management infrastructure to accommodate increasing usage and demand.
 Set up APIM in a highly available configuration to ensure uninterrupted service.
 Orchestrate container deployments using Kubernetes or Amazon ECS to achieve scalability and manage container
lifecycles.
 Set up monitoring and alerting using AWS CloudWatch or third-party tools to track system health, performance, and
resource utilization.
 Design architectures for high availability and fault tolerance across multiple AWS Availability Zones.
 Plan and execute the migration of on-premises systems or applications to the AWS cloud.
 Applied cloud technologies, automation tools, and best practices to ensure the successful deployment and operation of applications in AWS environments.
 Integrated SonarQube code quality analysis into Azure DevOps pipelines to enforce coding standards and identify
potential issues early in the development lifecycle.
 Defined Kubernetes manifests for microservices and associated resources as code within version-controlled
repositories (Git).
 Installed and configured Argo CD in the Kubernetes cluster, integrating it with the Git repositories to monitor
application configurations.
 Analyzed the microservices architecture and categorized applications based on their functionalities, ensuring proper
isolation and independent deployments.
 Created Argo CD application definitions for each microservice, specifying the desired state and configuration details (see the Application sketch after this list).
 Established a GitOps workflow by configuring Argo CD to monitor the Git repositories and automatically
synchronize applications with the desired state.
 Orchestrated automated deployments of microservices using Argo CD applications, ensuring consistent releases
across environments.
 Integrated Argo CD with the CI/CD pipeline to trigger deployments automatically after successful build and testing
stages.
 Implemented Role-Based Access Control (RBAC) within Argo CD to manage access to applications and resources
based on user roles.
 Create and manage user accounts, roles, and permissions within the Moogsoft platform.
 Ensure the ongoing health and performance of the Moogsoft system, including applying updates and patches.
 Used Moogsoft to monitor infrastructure performance, detect anomalies, and respond to potential issues.
 Integrated Moogsoft into the CI/CD pipeline for automated deployment and configuration management.
 Implemented the GitOps workflow, leading to improved collaboration between development and operations teams
and enhancing transparency in the deployment process.
 Worked with accumulated transactional and customer data that required a powerful, scalable solution for real-time analysis.
 Extracted data from various sources, including the company's sales system, customer interactions, website behavior, and marketing campaigns; data integration tools were used to clean, transform, and load the data into Amazon Redshift.
 Implemented a process for regular incremental updates to the data warehouse: new data was loaded into staging tables and then merged into the main tables using efficient SQL merge operations (see the merge sketch after this list).
 Set up an ETL pipeline that continuously loaded real-time data into a dedicated table in Redshift, allowing business analysts to run immediate analyses on the latest data.
 Adjusted the configuration settings to ensure efficient resource allocation during peak usage times.
 Regular backups and snapshots were configured to ensure data integrity and provide a fallback option in case of data
loss or system failure.
 Set up and manage data sources (e.g., Prometheus, InfluxDB) for Grafana.
 Install, update, and configure Grafana plugins.
 Transform raw data into a format suitable for visualization in Grafana.
 Set up Amazon CloudWatch alarms to proactively monitor the health and performance of the Redshift cluster, covering metrics like CPU utilization, storage usage, and query performance (see the alarm sketch after this list).
 Develop data pipelines to efficiently ingest and preprocess data into Milvus. This might involve integrating with
various data sources, data cleaning, and transformation tasks.
 Install and configure Milvus according to the project's requirements, fine-tuning parameters like indexing methods, similarity metrics, and storage options for optimal performance (a pymilvus sketch appears after this list).
 Implement indexing strategies that allow for fast and accurate similarity searches within the vector space. Experiment
with different indexing techniques provided by Milvus to find the best fit for the project's needs.
 Integrate Milvus with machine learning models that generate or utilize vector embeddings. This could involve storing,
retrieving, and updating vectors associated with specific data points.
 Set up monitoring tools to track the health and performance of the Milvus database. Implement automated alerts and
responses to address potential issues promptly.
 Document the system architecture, setup procedures, configuration choices, and any custom implementations. Share
this knowledge with team members for future reference and onboarding.
 Follow Azure and OAuth security best practices, such as token encryption, token expiration policies, and proper handling of refresh tokens, to mitigate security risks.
 Secure microservices architecture on Azure using OAuth-based authentication and authorization patterns, ensuring seamless communication between services while enforcing access controls.
 Implement Azure Policy and access controls to enforce OAuth-related security policies and compliance requirements across Azure resources.
 Incorporate OAuth-related configurations into Azure DevOps pipelines, enabling secure deployment and continuous integration of applications that rely on OAuth-based authentication.
 Successfully integrated SonarQube into Azure DevOps pipelines for multiple projects, resulting in improved code
quality and reduced technical debt.
 Design, deploy, and maintain the infrastructure required for Adobe Experience Manager (AEM) in Azure, including
virtual machines, databases, and networking components.
 Ensure scalability and high availability of AEM instances.
 Configure Azure services such as Azure Blob Storage, Azure SQL Database, and Azure Active Directory for AEM.
 Set up and configure Azure DevOps pipelines for building, testing, and deploying AEM applications.
 Implementing AWS high-availability, fault tolerance using AWS Elastic Load Balancing (ELB), which performed
load balancing across instances in multiple availability zones.
 Developed AWS Cloud Formation templates to create custom sized VPC, subnets, EC2 instances, ELB, Security
groups.
 Managed and maintained PostgreSQL database instances, including installation, configuration, and upgrades.
 Monitored database performance and conducted performance tuning to optimize query execution and response times.
 Implemented backup and recovery strategies to ensure data integrity and availability.
 Administered user access, roles, and permissions to enforce data security and compliance.
 Troubleshot and resolved database-related issues to minimize downtime and disruptions.
 Applied security patches and updates to keep the PostgreSQL environment protected.
 Provided training and support to development teams on best practices for PostgreSQL usage.
 Building pipelines in Jenkins while also fully scripting the creation and provisioning of Jenkins. Implementing a
continuous delivery framework using Jenkins, Ansible in Linux environment.
 Design and implement OAuth-based authentication and authorization mechanisms for Azure applications, ensuring
secure and controlled access to resources.
 Integrate Azure Active Directory (Azure AD) as an identity provider to enable OAuth-based single sign-on (SSO) and
user authentication for Azure-hosted applications.
 Configure OAuth settings in Azure AD, including defining application registrations, redirect URIs, access policies,
and token lifetimes to meet security and compliance requirements.
 Manage OAuth tokens and refresh tokens, including validation, rotation, and revocation, to ensure the integrity and
security of token-based authentication.
 Orchestrated and migrated CI/CD processes using CloudFormation and Terraform templates, and containerized the infrastructure using Docker set up in Vagrant, AWS, and Amazon VPCs.
 Installed, managed, and configured monitoring tools such as Splunk, Nagios, and CloudWatch for monitoring log files, network monitoring, log trace monitoring, and hard drive status.
 Worked with JIRA and configured various workflows, customizations and plug-ins for Jira bug/issue tracker and
depended on Confluence for documenting the progress of Projects and Sprints.
 Provided continuous improvement to agile software development teams by working with Jenkins under the CI/CD
pipeline. Integrated Maven, Nexus, Jenkins, SVN, GIT and JIRA.
 Hands on experience on Azure VPN-Point to Site, Virtual Networks, Azure Custom security, Endpoint Security,
firewall, Windows Azure name resolution, Scheduler, Automation and Traffic Manager.
 Work experience in Azure App & Cloud Services, PaaS, Azure Data Factory, Azure data lake, Azure Data Lake
Analytics, Azure SQL Data Warehouse, Power BI, Azure Blob Storage, Web API, VM creation, ARM Templates,
PowerShell scripts, IaaS, Lift & Shift, Storage, and database.
 Deployed Java/J2EE application packages on to the Apache Tomcat server and configured it to host the websites by
Coordinating with software development teams and QA teams.
 Involved in development of applications using Java technologies.
 Developed automation scripting to deploy and manage Java applications developed using spring framework, across
Linux servers.
 Using Grafana dashboards to identify and troubleshoot performance issues.
 Generate reports and presentations using Grafana dashboards.
 Involved in requirement analysis, design, and development of the project; the design followed standard Java design patterns.
 Managed routine system backups, scheduled jobs such as disabling and enabling system logging, took servers down for maintenance, and performed performance tuning and system testing to eliminate bugs.
 Implemented Docker, Kubernetes and OpenShift to manage micro services for development of continuous integration
and continuous delivery.
 Managed version control using Git and Azure Repos, enforcing code review processes, and ensuring code quality.
 Worked with Terraform key features such as Infrastructure as code, Execution plans, Resource Graphs, Change
Automation and using terraform for changing, and versioning infrastructure safely and efficiently.
 Involved in the automation of AWS infrastructure via Jenkins - software and service configuration via Chef
cookbooks and working with Chef Cookbooks for virtual and physical instance provisioning.
 Assisted in setting up the CI/CD pipeline utilizing Jenkins, Maven, GitHub, Ansible playbooks, and AWS.
 Maintained source code in Git; worked with version control systems including Subversion (SVN), Git, and GitHub, and developed build scripts using Maven as the build tool to create build artifacts such as war and jar files.
 Using Docker to easily deploy applications in a sandbox to run on Linux and working on the Kubernetes for Docker
containers and troubleshooting issues, also created Kubernetes cluster from scratch.
 Used Kubernetes to deploy, scale, load-balance, and manage Docker containers with multiple namespaced versions.
 Developing the automated scripts to provision the EKS cluster and deploying the pods in Kubernetes.
 Implemented build stage- to build the micro service and push the Docker container image to the private Docker
registry.
 Container management uses Docker by writing Docker files and setting up the automated build on Docker HUB and
installing and configuring Kubernetes.
 Used Jenkins pipelines to drive all microservices builds out to the AWS ECR and then deployed to Kubernetes,
Created Pods and managed using Kubernetes.
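A minimal sketch, in Python with boto3, of the scripted EC2 provisioning described above, launching an instance with a user-data bootstrap script; the AMI ID, key pair, and security group are placeholders, not values from this engagement.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User-data script that brings the instance to a service-ready state on first boot
user_data = """#!/bin/bash
yum -y update
yum -y install httpd
systemctl enable --now httpd
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder Linux AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="example-keypair",                   # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    UserData=user_data,                          # boto3 base64-encodes this for us
)
print(response["Instances"][0]["InstanceId"])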
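A minimal sketch of the programmatic horizontal scaling referenced above, using the official Kubernetes Python client against an AKS cluster; it assumes kubeconfig credentials (e.g., fetched via az aks get-credentials), and the deployment name and namespace are hypothetical.

from kubernetes import client, config

# Load cluster credentials from the local kubeconfig
config.load_kube_config()

apps = client.AppsV1Api()

# Horizontally scale a hypothetical microservice deployment to five replicas
apps.patch_namespaced_deployment_scale(
    name="orders-service",
    namespace="production",
    body={"spec": {"replicas": 5}},
)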
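A sketch of the kind of Argo CD Application definition used for the GitOps workflow above, created here through the Kubernetes CustomObjectsApi; the repository URL, path, and resource names are hypothetical.

from kubernetes import client, config

config.load_kube_config()

# Argo CD Application custom resource: the desired state lives in Git,
# and Argo CD keeps the cluster synchronized with it
app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "orders-service", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://github.com/example/orders-manifests.git",  # hypothetical repo
            "targetRevision": "HEAD",
            "path": "overlays/production",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "production",
        },
        # Automated sync with pruning and self-healing enforces the desired state
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="argoproj.io",
    version="v1alpha1",
    namespace="argocd",
    plural="applications",
    body=app,
)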
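A minimal sketch of the staging-to-main merge pattern on Redshift, driven from Python with psycopg2; the connection details and table names are placeholders.

import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)
conn.autocommit = True  # let the explicit BEGIN/COMMIT below control the transaction

merge_sql = """
BEGIN;
-- Replace main-table rows that have fresher versions in staging
DELETE FROM sales USING sales_staging
    WHERE sales.sale_id = sales_staging.sale_id;
-- Insert the incremental batch from staging
INSERT INTO sales SELECT * FROM sales_staging;
COMMIT;
-- Clear staging for the next load (TRUNCATE auto-commits on Redshift)
TRUNCATE sales_staging;
"""

with conn.cursor() as cur:
    cur.execute(merge_sql)
conn.close()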
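A minimal boto3 sketch of the CloudWatch alarms described above for the Redshift cluster; the cluster identifier and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average CPU across the cluster stays above 80% for 15 minutes
cloudwatch.put_metric_alarm(
    AlarmName="redshift-high-cpu",
    Namespace="AWS/Redshift",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "example-cluster"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],          # placeholder topic
)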
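A minimal pymilvus sketch of the Milvus setup described above: create a collection, build an IVF_FLAT index, and run a similarity search. The collection name, dimensionality, and tuning parameters are hypothetical.

from pymilvus import connections, Collection, CollectionSchema, FieldSchema, DataType

connections.connect(host="localhost", port="19530")

# Schema for a hypothetical 128-dimensional embedding collection
fields = [
    FieldSchema(name="id", dtype=DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=128),
]
collection = Collection("product_embeddings", CollectionSchema(fields))

# IVF_FLAT index with L2 distance; nlist is tuned per workload
collection.create_index(
    field_name="embedding",
    index_params={"index_type": "IVF_FLAT", "metric_type": "L2", "params": {"nlist": 128}},
)
collection.load()

# Top-5 nearest neighbors for a placeholder query vector
results = collection.search(
    data=[[0.0] * 128],
    anns_field="embedding",
    param={"metric_type": "L2", "params": {"nprobe": 16}},
    limit=5,
)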

Environment: AWS, UNIX, Linux, Shell, Jenkins, Nagios, Redshift, AEM, ANT, Ansible, ELB, Docker, Milvus, Argo CD, Kubernetes, SaaS, PaaS, IaaS, Maven, SVN, PostgreSQL, GIT.
Client : Homesite Group Incorporated, Boston, MA June 2018 to September 2020
Role : Azure Cloud Engineer

Responsibilities:
 Experienced in the Software Development Life Cycle (SDLC), Agile methodologies, and Waterfall.
 Planning and implementation of data and storage management solutions in Azure (SQL, Azure Files, Queue storage, Blob storage); migrated current code to a CI/CD pipeline, moving from Ant to Maven and Jenkins.
 Hands on experience on Azure VPN-Point to Site, Virtual Networks, Azure Custom security, Endpoint Security,
firewall, Windows Azure name resolution, Scheduler, Automation and Traffic Manager.
 Work experience in Azure App & Cloud Services, PaaS, Azure Data Factory, Azure data lake, Azure Data Lake
Analytics, Azure SQL Data Warehouse, Power BI, Azure Blob Storage, Web API, VM creation, ARM Templates,
PowerShell scripts, IaaS, Lift & Shift, Storage, and database.
 Developing AWS Cloud Formation templates to create custom sized VPC, subnets, EC2 instances, ELB, Security
Groups.
 Building pipelines in Jenkins while also fully scripting the creation and provisioning of Jenkins. Implementing a
continuous delivery framework using Jenkins, Ansible in Linux environment.
 Orchestrated and migrated CI/CD processes using Cloud Formation, terraform templates and containerized the
infrastructure using Docker setup in Vagrant, AWS, and Amazon VPCs.
 Implement CI/CD (Continuous Integration/Continuous Deployment) processes for AEM projects.
 Integrate version control (e.g., Git) with AEM codebase. Implement backup and disaster recovery solutions for AEM
assets and content in Azure.
 Conduct regular backup tests and recovery drills to ensure data integrity.
 Collaborate closely with AEM developers and content authors to understand application requirements.
 Assist in troubleshooting AEM-related issues during development and production phases.
 Monitor and optimize Azure resource costs associated with AEM deployments.
 Installed, Managed and Configured monitoring tools such as Splunk, Nagios and Cloud Watch for monitoring the log
files, Network Monitoring, log trace monitoring and the hard drives status.
 Worked with JIRA as defect tracking system and configured various workflows, customizations and plug-ins for Jira
bug/issue tracker and depended on Confluence for documenting the progress of Projects and Sprints.
 Provided continuous improvement to agile software development teams by working with Jenkins under the CI/CD
pipeline. Integrated Maven, Nexus, Jenkins, SVN, GIT and JIRA.
 Deployed Java/J2EE application packages on to the Apache Tomcat server and configured it to host the websites by
Coordinating with software development teams and QA teams.
 Involved in development of applications using Java technologies.
 Developed automation scripting to deploy and manage Java applications developed using spring framework, across
Linux servers.
 Implemented a GitOps workflow by configuring Argo CD applications to automatically synchronize with the Git
repository upon changes, ensuring the desired state is maintained.
 Orchestrated application deployments using Argo CD applications, enabling easy rollouts, rollbacks, and tracking of
changes.
 Create and manage user accounts, roles, and permissions within the Moogsoft platform.
 Ensure the ongoing health and performance of the Moogsoft system, including applying updates and patches.
 Used Moogsoft to monitor infrastructure performance, detect anomalies, and respond to potential issues.
 Integrated Moogsoft into the CI/CD pipeline for automated deployment and configuration management.
 Integrated GitOps principles with the development workflow by requiring changes to go through Git pull requests,
ensuring review and validation before deployment.
 Managed custom resource definitions for Argo CD, allowing the creation of custom application templates and
deployment strategies.
 Design and implement database schemas based on application requirements.
 Normalize data structures to ensure data integrity and reduce redundancy.
 Optimize database performance through efficient table design and indexing strategies.
 Set up and manage data sources (e.g., Prometheus, InfluxDB) for Grafana (see the provisioning sketch after this list).
 Install, update, and configure Grafana plugins.
 Transform raw data into a format suitable for visualization in Grafana.
 Install, configure, and upgrade PostgreSQL instances.
 Monitor database health, performance, and availability.
 Manage user access, roles, and permissions to ensure data security (see the role-management sketch after this list).
 Handle database backups, recovery, and replication to maintain data integrity.
 Develop ETL (Extract, Transform, Load) processes to migrate data between different systems.
 Ensure data consistency and accuracy during the migration process.
 Handle data transformation and validation to meet target system requirements.
 Involved in requirement analysis, design, and development of the project; the design followed standard Java design patterns.
 Worked with accumulated transactional and customer data that required a powerful, scalable solution for real-time analysis.
 Extracted data from various sources, including the company's sales system, customer interactions, website behavior, and marketing campaigns; data integration tools were used to clean, transform, and load the data into Amazon Redshift.
 Implemented a process for regular incremental updates to the data warehouse: new data was loaded into staging tables and then merged into the main tables using efficient SQL merge operations.
 Set up an ETL pipeline that continuously loaded real-time data into a dedicated table in Redshift, allowing business analysts to run immediate analyses on the latest data.
 Adjusted the configuration settings to ensure efficient resource allocation during peak usage times.
 Regular backups and snapshots were configured to ensure data integrity and provide a fallback option in case of data
loss or system failure.
 Using Grafana dashboards to identify and troubleshoot performance issues.
 Generate reports and presentations using Grafana dashboards.
 Set up Amazon CloudWatch alerts to proactively monitor the health and performance of the Redshift cluster, with alerts configured for metrics like CPU utilization, storage usage, and query performance.
 Implement indexing strategies that allow for fast and accurate similarity searches within the vector space. Experiment
with different indexing techniques provided by Milvus to find the best fit for the project's needs.
 Integrate Milvus with machine learning models that generate or utilize vector embeddings. This could involve storing,
retrieving, and updating vectors associated with specific data points.
 Set up monitoring tools to track the health and performance of the Milvus database. Implement automated alerts and
responses to address potential issues promptly.
 Develop data pipelines to efficiently ingest and preprocess data into Milvus. This might involve integrating with
various data sources, data cleaning, and transformation tasks.
 Install and configure Milvus according to the project's requirements. Fine-tune parameters like indexing methods,
similarity metrics, and storage options for optimal performance.
 Experienced in branching, tagging, and maintaining Kubernetes versions across environments using SCM tools like Git, GitHub, Subversion (SVN), and TFS on Windows platforms.
 Exposed Virtual machines and cloud services in VNets to the Internet using Azure External Load Balancer.
 The Docker containers leverage Linux containers with the Azure components baked in; converted our staging and production environments from a handful of Azure nodes to a single bare-metal host running Docker.
 Experience in VSTS, TFS, Terraform, Groovy, Release Management, and PowerShell; configured Virtual Networks, designed subnets and gateway subnets, set up DNS at the Virtual Network level, and defined User Defined Routes (UDRs).
 Implemented Backup and Restore for the application data using Azure, worked with security team to make sure Azure
data is highly secure and configured BGP routes to enable Express Route connections between on premise data
centers and Azure cloud.
 Design, deploy, and maintain the infrastructure required for Adobe Experience Manager (AEM) in Azure, including
virtual machines, databases, and networking components.
 Ensure scalability and high availability of AEM instances.
 Configure Azure services such as Azure Blob Storage, Azure SQL Database, and Azure Active Directory for AEM.
 Set up and configure Azure DevOps pipelines for building, testing, and deploying AEM applications.
 Moderate and contribute to the support forums (specific to Azure Networking, Virtual Machines, Azure Active
Directory, and Azure Storage) for Microsoft Developers Network including Partners and MVPs.
 Working on different protocols like TCP, UDP, DNS, DHCP, HTTP, SSH, FTP, SNMP, and LDAP.
 Cloud migration, configuration, and installation of various services provided by Azure, Jenkins, Docker, and MariaDB, integrated in Azure.
 Experienced in Python scripting to automate and monitor deployment processes, project management, and project releases.
 Deployed Docker Engine on virtualized platforms for containerization of multiple apps.
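A minimal sketch of registering a Prometheus data source through Grafana's HTTP API with Python requests, as referenced above; the URLs and API token are placeholders.

import requests

GRAFANA_URL = "http://grafana.example.com:3000"  # placeholder
API_TOKEN = "..."                                # placeholder service-account token

# Register a Prometheus data source via Grafana's HTTP API
payload = {
    "name": "Prometheus",
    "type": "prometheus",
    "url": "http://prometheus.example.com:9090",  # placeholder
    "access": "proxy",
    "isDefault": True,
}
response = requests.post(
    f"{GRAFANA_URL}/api/datasources",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
response.raise_for_status()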
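A minimal psycopg2 sketch of the PostgreSQL role and permission management described above; the connection details and role name are hypothetical.

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb", user="postgres", password="...")
conn.autocommit = True  # DDL below takes effect immediately

with conn.cursor() as cur:
    # Create a read-only reporting role and grant it SELECT on the public schema
    cur.execute("CREATE ROLE reporting_ro WITH LOGIN PASSWORD %s;", ("change-me",))
    cur.execute("GRANT USAGE ON SCHEMA public TO reporting_ro;")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting_ro;")
conn.close()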

Environment: Azure, Jenkins, AWS, Webhooks, Maven, Gradle, AEM, SaaS, PaaS, IaaS, ELB, Redshift, WebLogic, SonarQube, Milvus, Argo CD, Informatica, PostgreSQL, Hadoop, GIT, SVN, ANT, Docker, MariaDB, Nagios, VMware, Python, Kubernetes.

Client: Herbalife – Winston-Salem, NC December 2015 – March 2018


Role: DevOps Engineer

Responsibilities:
 Worked in cross platforms, resolved many severe issues, and tackled complex situations tactfully as a part of my job.
 Wrote SQL queries and DML operations using SQL programming knowledge.
 Active participation in Software Development Life Cycle (SDLC) specifically waterfall and Agile Scrum
methodology.
 Created and Maintained Subversion (SVN) repositories, branches, and tags.
 Involved in monitoring each Service Deployment and validating the Services across all Environments.
 Planned, scheduled, and tracked software configuration management activities across multiple projects.
 Performed merges between different branches and resolved all merge conflicts successfully by working with development teams.
 Involved in DevOps migration/automation processes for build and deploy systems.
 Implemented the Build automation process for all the assigned projects in Vertical Apps domain.

 Monitored the UAT/Production environments for any downtime issues by performing regular cron job updates on servers.
 Customized TFS Item Templates and Workflow (Transitions Matrix) of the Work Items.
 Evolved new tools and methodologies to improve the existing process and show better results to all stakeholders.
 Administered and implemented CI tools Hudson/Jenkins, Bamboo, Build Forge, Team Foundation Server (TFS), and AnthillPro for automated builds.
 Configured the TFS 2010 environment along with SharePoint Services and Reporting Services.
 Supported the code builds by integrating with continuous integration tool (Jenkins).
 Hands on experience with using Linux, Amazon Web Services, and supporting AWS infrastructure.
 Working experience on Jenkins, SVN, and PowerShell to automate deployment tasks.
 Built a Continuous Integration environment (Jenkins, Nexus) and a Continuous Delivery environment (Puppet, Yum, rsync), integrating the CI and CD process using Jenkins, Nexus, Yum, and Puppet (see the Jenkins automation sketch after this list).
 Experience with orchestration template technologies such as AWS CloudFormation.
 Designed EC2 instance architecture to meet high-availability application requirements; deployed, configured, and managed servers in AWS.
 Utilized Puppet for configuration management of hosted Instances within AWS.
 Integrated existing systems into AWS/EC2 cloud infrastructure. Built/maintained a puppet master server and used that
to push out bi-weekly application updates.
 Deployed Java/J2EE applications to the WebLogic server using Jenkins builds.
 Automated deployments process using UC4 automation tool.
 Responsible for Installation of various Hadoop Ecosystems and Hadoop Daemons.
 Worked on maintaining MySQL databases: creating databases, setting up users, and maintaining backups.
 Involved in Hadoop Cluster environment administration that includes adding and removing cluster nodes, cluster
capacity planning, performance tuning, cluster Monitoring, troubleshooting.
 Involved in end-to-end testing of build deployment process developed using Shell scripting.
 Worked on Data Warehousing Concepts (extracting, transforming, and loading) and RDBMS (Relational Database
Management System) SQL reporting, analytics, and Business intelligence.
 Created ETL mappings using Informatica PowerCenter to extract data from multiple sources like flat files, Oracle, XML, CSV, and delimited files; transformed it based on business requirements and loaded it to the Data Warehouse.
 Installed Hadoop ecosystem components like Pig, Hive, HBase, and Sqoop in a cluster.
 Worked on complex mappings using different transformations like Source Qualifiers, Expressions, Filters, Joiners,
Routers, Union, Unconnected/Connected Lookups, Aggregators, Stored Procedures and Normalizer transformations.
 Defined, implemented, and documented software deployment strategies and installation procedures.
 Provided weekend support to deploy code in Production environment.
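A minimal sketch of driving Jenkins from Python with the python-jenkins library, in the spirit of the build automation above; the server URL, credentials, and job name are placeholders.

import jenkins  # pip install python-jenkins

server = jenkins.Jenkins(
    "http://jenkins.example.com:8080",  # placeholder master URL
    username="admin",
    password="...",
)

# Trigger a parameterized build; recent python-jenkins versions return the queue item number
queue_id = server.build_job("app-deploy", parameters={"ENV": "uat"})
print("Queued as item", queue_id)

# Report the result of the last completed build
job_info = server.get_job_info("app-deploy")
last = job_info["lastCompletedBuild"]["number"]
print(server.get_build_info("app-deploy", last)["result"])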

Environment: UNIX, Azure, AWS, SonarQube, Linux, Maven, Milvus, GitHub, Docker, ELB, Argo CD, Jenkins,
Informatica, Hadoop, UC4, SVN, ANT.

Client: XPO Logistics – Greensboro, NC November 2013 – October 2015


Role: Software Engineer

Responsibilities:
 Developed and maintained software applications, adhering to coding standards and best practices.
 Software Development: Designing, coding, testing, and debugging software applications; this involves writing clean, efficient, and maintainable code to meet project requirements.
 Requirements Analysis: Collaborating with stakeholders, such as product managers and clients, to understand their needs and translate them into technical requirements; analyzing and documenting software specifications.
 System Design: Creating high-level and detailed designs for software systems, including architecture, data structures, and algorithms; ensuring scalability, reliability, and performance.
 Involved in the automation of Azure infrastructure via Jenkins - software and service configuration via Chef
cookbooks and working with Chef Cookbooks for virtual and physical instance provisioning, configuration
management, patching, and software deployment.
 Designed, implemented, and managed Azure resources to support multiple applications, ensuring high availability
and scalability.
 Collaborated with development teams to automate application deployment using Azure, reducing deployment time.
 Implemented Azure Monitoring solutions to proactively identify and resolve performance bottlenecks, reducing
application downtime.
 Implemented backup and disaster recovery strategies using Azure Site Recovery and Azure Backup, ensuring
business continuity.
 Configured Azure Security Center policies and Network Security Groups to enhance security posture and ensure
compliance with industry standards.
 Collaborated with cross-functional teams to gather requirements and translate them into technical specifications.
 Wrote SQL queries and DML operations using SQL programming knowledge.
 Created and Maintained Subversion (SVN) repositories, branches, and tags.
 Designed and implemented scalable and efficient solutions, ensuring high performance and reliability.
 Conducted thorough testing and debugging of software applications, identifying, and resolving issues promptly.
 Worked with version control systems, such as Git, to manage codebase and facilitate collaborative development.
 Participated in code reviews, providing constructive feedback, and ensuring code quality.
 Documented software designs, processes, and user manuals to facilitate knowledge sharing and onboarding of new
team members.
 Assisted in the deployment and production support of software applications, troubleshooting, and resolving issues
as needed.
 Deployed Java/J2EE applications to the WebLogic server using Jenkins builds.
 Automated deployments process using UC4 automation tool.
 Responsible for Installation of various Hadoop Ecosystems and Hadoop Daemons.
 Working experience maintaining MySQL databases: creating databases, setting up users, and maintaining backups.
 Conducted unit testing and participated in quality assurance activities to ensure software quality.
 Worked closely with senior team members to learn best practices and gain practical experience.
 Contributed to the documentation of software processes and workflows.

Environment: Azure, UNIX, Linux, Jenkins, Informatica, Hadoop, UC4, SVN, ANT.
