BHAGWATI PRAJAPATI
DATABASE MODELING:
Dimension Modeling, ER Modeling, Star Schema Modeling, Snowflake Modeling

CERTIFICATIONS:
1. Azure Data Engineer Associate certified
2. M001 - MongoDB
3. CSM - ScrumMaster
4. DB2-730

RECOGNITION AND AWARDS:
1. Employee of the Month - March and Oct 2018
2. Shining Star - 2016
3. Employee of the Year - 2017

EDUCATION

ACCOMPLISHMENTS:
Migrated the in-house database from 32-bit Linux to 64-bit AIX. Assisted the development team with improving the performance of the OLTP system to meet all business requirements.
Migrated the existing DB2 version from 10.5.4 to 11.1.4 with disaster recovery, covering database servers, DB2 Connect and DB2 clients (nearly 12,000 servers), on the production system without the need for human interaction.
Successfully configured the DSM tool for the LUW production servers, which simplified performance and SQL tuning for the DB team and delivered the best performance to the business.
Migrated SQL Server 2005 instances to SQL Server 2012.
Implemented a DB2 AutoStart script using the DB2 fault monitor in all DB2 environments to avoid business impact.

EXPERIENCE SUMMARY:
Highly adaptable and experienced database engineer, developer and DBA with a proven track record of high client satisfaction.
Adept at adapting work pace to evolving client deadlines.
Superior ability to support and develop analysis solutions and data transformations.
Excellent ability to develop and implement data and reporting solutions.
High achievement record in process quality improvement and efficient data solutions management.
Superb maintenance of all working relationships.
Develops, tests, improves and maintains existing and new databases to help users in their data retrieval process.
Focus on data warehouse architecture in MongoDB, DB2, Sybase, Greenplum, MSSQL and DataStage, with various databases and programming.
At this point in time, data is the need of the hour: there is an extensive need to understand patterns in data, to scale systems through distributed computing, and to provide long-lasting solutions from the ever-expanding data points for any resource. I am a go-getter and a believer in perseverance; solutions come to those who seek them until found.
WORK HISTORY:
Company: Morgan Stanley - Aug 2021 – Present
Database Engineer
Experienced in high availability, data modeling, database performance tuning, and data migration.
Experienced in business intelligence, including SSIS, ETL, ELT, data integration, data warehousing, data modeling, creating VMs and ingesting data, and Power BI.
Knowledgeable in building big data pipelines using related technologies such as GitLab with CI/CD, Docker, Mapping Data Flows and more.
Experienced in cloud-based computing and cloud deployment models, cloud data migration, AWS and Azure SQL, Cosmos DB, NoSQL, MongoDB and Cassandra.
Hands-on experience with big data, Spark Streaming, prediction models and data mining.
Experience working with Azure Databricks, Azure Data Factory and Azure Data Lake.
Familiar with machine learning frameworks.
Worked on Azure Data Factory to integrate data from both on-prem (MySQL, Cassandra) and cloud (Blob Storage, Azure SQL DB) sources, and applied transformations before loading the data back to Azure Synapse.
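The integration above follows the standard ADF copy pattern. As a hedged illustration (not the project's actual code), here is a minimal Python sketch that triggers and polls such a pipeline with the azure-mgmt-datafactory SDK; the subscription, resource group, factory and pipeline names are hypothetical placeholders.

```python
# Hedged sketch: trigger and poll an ADF pipeline run from Python.
# Subscription, resource group, factory and pipeline names are hypothetical.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data-platform"       # hypothetical
FACTORY_NAME = "adf-ingest"               # hypothetical
PIPELINE_NAME = "copy_onprem_to_synapse"  # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the copy pipeline that lands on-prem and cloud sources in Synapse.
run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)
print(f"Run {run.run_id} finished: {status}")
```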
Managed, configured and scheduled resources across the cluster using Azure Kubernetes Service.
Monitored the Spark cluster using Log Analytics and the Ambari Web UI. Transitioned log storage from Cassandra to Azure SQL Data Warehouse and improved query performance.
Involved in developing data ingestion pipelines on an Azure HDInsight Spark cluster using Azure Data Factory and Spark SQL. Also worked with Cosmos DB (SQL API and Mongo API).
Developed dashboards and visualizations to help business users analyze data, and provided data insights to upper management, with a focus on Microsoft products such as SQL Server Reporting Services (SSRS) and Power BI.
Performed the migration of large data sets to Databricks (Spark); created and administered clusters, loaded data, configured data pipelines, and loaded data from ADLS Gen2 to Databricks using ADF pipelines.
Created various pipelines to load data from Azure Data Lake into a staging SQL DB and then into Azure SQL DB.
Created Databricks notebooks to streamline and curate data for various business use cases, and mounted Blob Storage on Databricks.
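For context, mounting Blob Storage in Databricks is typically a one-time dbutils call. A minimal sketch, assuming hypothetical account, container and secret-scope names (dbutils and spark exist only inside a Databricks notebook):

```python
# Hedged sketch: one-time mount of a Blob Storage container in a Databricks
# notebook. dbutils/spark exist only inside Databricks; names are hypothetical.
ACCOUNT = "examplestore"   # hypothetical storage account
CONTAINER = "curated"      # hypothetical container

dbutils.fs.mount(
    source=f"wasbs://{CONTAINER}@{ACCOUNT}.blob.core.windows.net",
    mount_point="/mnt/curated",
    extra_configs={
        # Pull the storage key from a secret scope, not the notebook source.
        f"fs.azure.account.key.{ACCOUNT}.blob.core.windows.net":
            dbutils.secrets.get(scope="kv-scope", key="storage-key")
    },
)

# Once mounted, curation notebooks read the container like a local path.
df = spark.read.json("/mnt/curated/events/")
```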
Utilized Azure Logic Apps to build workflows that schedule and automate batch jobs by integrating apps, ADF pipelines and other services such as HTTP requests, email triggers, etc.
Worked extensively on Azure Data Factory, including data transformations, integration runtimes, Azure Key Vault, triggers, and migrating Data Factory pipelines to higher environments using ARM templates.
Ingested data in mini-batches and performed RDD transformations on those mini-batches using Spark Streaming, to run streaming analytics in Databricks.
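The bullet above describes the classic DStream mini-batch pattern. A minimal, illustrative sketch of that pattern (not the project's code), with hypothetical paths and a 60-second batch interval:

```python
# Hedged sketch of the DStream mini-batch pattern; paths are hypothetical.
from pyspark.streaming import StreamingContext

# In a Databricks notebook, `sc` (the SparkContext) already exists.
ssc = StreamingContext(sc, batchDuration=60)  # one mini-batch per minute

# Each new file in the landing folder joins the next mini-batch.
lines = ssc.textFileStream("/mnt/landing/clickstream")

# Plain RDD transformations applied to every mini-batch.
events = (lines.map(lambda line: line.split(","))
               .filter(lambda fields: len(fields) == 3))

# Persist each mini-batch under a per-batch timestamped folder.
events.foreachRDD(lambda t, rdd: rdd.saveAsTextFile(
    "/mnt/bronze/clickstream/" + t.strftime("%Y%m%d-%H%M%S")))

ssc.start()
ssc.awaitTermination()
```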
Imported and exported databases using SQL Server Integration Services (SSIS) and Data Transformation Services (DTS packages).
Coded Teradata BTEQ scripts to load and transform data and to fix defects such as SCD2 date chaining and duplicate cleanup.
Developed a reusable framework, to be leveraged for future migrations, that automates ETL from RDBMS systems to the data lake utilizing Spark Data Sources and Hive data objects.
Conducted data blending and data preparation using Alteryx and SQL for Tableau consumption, and published data sources to Tableau Server.
Developed Kibana dashboards based on Logstash data and integrated different source and target systems into Elasticsearch for near-real-time log analysis and monitoring of end-to-end transactions.
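As a rough illustration of the Elasticsearch side of that work, here is a minimal sketch using the elasticsearch-py 8.x client; the host, index name and document fields are hypothetical.

```python
# Hedged sketch: index a transaction event and query recent failures with
# the elasticsearch-py 8.x client. Host, index and fields are hypothetical.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Logstash normally does this in bulk; one document shown for clarity.
es.index(
    index="txn-logs",
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": "payments",
        "status": "FAILED",
        "latency_ms": 812,
    },
)

# The kind of search a Kibana failure panel would run.
hits = es.search(index="txn-logs", query={"match": {"status": "FAILED"}})
print(hits["hits"]["total"])
```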
Implemented AWS Step Functions to automate and orchestrate Amazon SageMaker tasks such as publishing data to S3, training the ML model and deploying it for prediction.
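A minimal sketch of that orchestration pattern with boto3: a one-state machine whose task is the synchronous SageMaker training integration. All ARNs, names and the (elided) training spec are hypothetical placeholders, not values from this engagement.

```python
# Hedged sketch: a one-state Step Functions machine that runs a SageMaker
# training job and waits for it (.sync). ARNs/names/spec are hypothetical.
import json

import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "TrainModel",
    "States": {
        "TrainModel": {
            "Type": "Task",
            # ".sync" makes Step Functions wait for the job to finish.
            "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
            "Parameters": {
                "TrainingJobName.$": "$.job_name",
                # AlgorithmSpecification, RoleArn, Input/OutputDataConfig,
                # ResourceConfig and StoppingCondition go here in real use.
            },
            "End": True,
        }
    },
}

machine = sfn.create_state_machine(
    name="train-and-deploy",                       # hypothetical
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn",  # hypothetical
)

sfn.start_execution(
    stateMachineArn=machine["stateMachineArn"],
    input=json.dumps({"job_name": "demand-model-2021-09"}),
)
```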
Integrated Apache Airflow with AWS to monitor multi-stage ML workflows with the tasks running on Amazon SageMaker.
Environment: Azure SQL DW, Databricks, Azure Synapse, Cosmos DB, ADF, SSRS, Power BI, Azure Data Lake, ARM, Azure HDInsight, Blob Storage, Apache Spark.
Designed and set up an Enterprise Data Lake to support various use cases, including analytics, processing, storage and reporting of voluminous, rapidly changing data.
Responsible for maintaining quality reference data in the source by performing operations such as cleaning and transformation, and for ensuring integrity in a relational environment by working closely with the stakeholders and solution architect.
Designed and developed a security framework to provide fine-grained access to objects in AWS S3 using AWS Lambda and DynamoDB.
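One common shape for such a framework is a Lambda function that consults a DynamoDB entitlement table before issuing a short-lived presigned URL. A hedged sketch of that shape, with a hypothetical table, bucket and event contract (not the framework's actual design):

```python
# Hedged sketch: Lambda checks a DynamoDB entitlement table, then issues a
# short-lived presigned URL. Table, bucket and event fields are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
entitlements = dynamodb.Table("s3-entitlements")  # hypothetical table
BUCKET = "enterprise-data-lake"                   # hypothetical bucket

def lambda_handler(event, context):
    user = event["user_id"]
    key = event["object_key"]

    item = entitlements.get_item(Key={"user_id": user}).get("Item", {})
    allowed = item.get("allowed_prefixes", [])

    # Grant access only when the object falls under an allowed prefix.
    if not any(key.startswith(prefix) for prefix in allowed):
        return {"statusCode": 403, "body": "access denied"}

    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # five-minute, narrowly scoped URL
    )
    return {"statusCode": 200, "body": url}
```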
Set up and worked on Kerberos authentication principals to establish secure network communication on the cluster, and tested HDFS, Hive and MapReduce cluster access for new users.
Performed end-to-end architecture and implementation assessments of various AWS services such as Amazon EMR, Redshift and S3.
Implemented machine learning algorithms in Python to predict the quantity a user might want to order for a specific item, so suggestions can be made automatically, using Kinesis Data Firehose and the S3 data lake.
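For illustration, publishing the model's input events through Kinesis Data Firehose into S3 usually reduces to a single put_record call. A minimal sketch with a hypothetical stream name and payload shape:

```python
# Hedged sketch: one order event sent through Kinesis Data Firehose, which
# buffers and delivers batches into S3. Stream name/payload are hypothetical.
import json

import boto3

firehose = boto3.client("firehose")

def publish_order_event(user_id: str, item_id: str, quantity: int) -> None:
    """Send one newline-delimited JSON record to the delivery stream."""
    payload = {"user_id": user_id, "item_id": item_id, "quantity": quantity}
    firehose.put_record(
        DeliveryStreamName="order-events-to-s3",  # hypothetical stream
        Record={"Data": (json.dumps(payload) + "\n").encode("utf-8")},
    )

publish_order_event("u-1001", "sku-42", 3)
```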
Used AWS EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple
Storage Service (Amazon S3) and Amazon DynamoDB.
Environment: AWS EMR, S3, RDS, Redshift, Lambda, DynamoDB, Amazon SageMaker, Apache Spark, HBase, Apache Kafka, Hive, MapReduce, Snowflake, Python, Tableau.
Administered and maintained over 200 schemas on 8,000-10,000 servers with over 52 TB of data.
Supported multiple concurrent projects from development to production implementation for a variety of client initiatives (R1/R2/ISAS) on DB2 data warehouse replacement projects.
Addressed performance-related issues in production, test and dev environments to meet or exceed business requirements.
Developed and optimized automated installation/uninstallation processes and procedures to reduce the overall man-hours needed to support existing and new DB2 infrastructure, using PowerShell, shell, VBS, batch and Windows scripting.
Configured SSL (Secure Sockets Layer)/TLS (Transport Layer Security) for the critical databases.
Created databases and redesigned the database architecture to consolidate database servers and remove unneeded database hardware from the existing environments.
Installed DB2 ODBC drivers on the SQL Server hosts and cataloged DB2 databases.
Planned HADR with multiple standbys, or a suitable disaster-recovery alternative, after discussion with the UPS client.
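A hedged sketch of what a multiple-standby HADR configuration looks like at the DB2 CLP level, scripted from Python; the hostnames, ports, sync mode and database name are hypothetical, and the plan actually agreed with the client may have differed.

```python
# Hedged sketch of multiple-standby HADR configuration via the DB2 CLP,
# driven from Python; run as the instance owner with db2profile sourced.
# Database name, hosts, ports and sync mode are hypothetical.
import subprocess

DB = "PRODDB"  # hypothetical database
PRIMARY_CFG = {
    "HADR_LOCAL_HOST": "db2prim01",
    "HADR_LOCAL_SVC": "55001",
    "HADR_SYNCMODE": "NEARSYNC",
    # Multiple standbys go in HADR_TARGET_LIST (principal standby first).
    "HADR_TARGET_LIST": "db2stby01:55002|db2stby02:55003",
}

def db2(cmd: str) -> None:
    subprocess.run(f'db2 "{cmd}"', shell=True, check=True)

for param, value in PRIMARY_CFG.items():
    db2(f"update db cfg for {DB} using {param} {value}")

# Standbys must be started first ("as standby"), then the primary.
db2(f"start hadr on db {DB} as primary")
```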
Worked on the DB2 migration and upgrade/backout process, automating moves between different DB2 versions for the DB2 projects.
Worked on conceptual, logical and physical data modeling using Erwin, Toad and IBM Data Studio tools.
Applied fix packs, monitored IBM maintenance releases and communicated them to the business.
Installation, migration and administration of DB2 Connect and client gateways used to access DB2 z/OS subsystems.
Solely responsible for setup, installation, configuration, performance monitoring and issue resolution for DB2 DB/Connect/client servers during the production cut-over project.
Configured the Data Server Manager (DSM) tool, which is heavily used for diagnosing performance issues.
Contributed to effort estimation, task allocation, PMR presentation review meetings, audits and lessons-learned documents for the organization domain.
Established and maintained the backup strategy and recovery policies and procedures.
Monitored and optimized database performance through runstats, reorgs, db2advis and the DB2 optimizer using DB2 utilities, and enabled DB2 AutoStart on all DB2 servers using the db2fm process.
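A minimal sketch of such a maintenance pass, wrapping the DB2 CLP from Python; the database and table names are hypothetical, and a production script would add logging and error handling.

```python
# Hedged sketch of a statistics/maintenance pass wrapping the DB2 CLP from
# Python. Database and table names are hypothetical; connect and command
# share one shell invocation so the CLP session carries the connection.
import subprocess

DB = "PRODDB"                             # hypothetical
TABLES = ["APP.ORDERS", "APP.CUSTOMERS"]  # hypothetical

def db2(sql: str) -> None:
    subprocess.run(
        f'db2 connect to {DB} && db2 "{sql}" && db2 connect reset',
        shell=True,
        check=True,
    )

for table in TABLES:
    # Refresh optimizer statistics with distribution and index detail.
    db2(f"runstats on table {table} with distribution and detailed indexes all")
    # Flag tables and indexes that need a reorg.
    db2(f"reorgchk update statistics on table {table}")

# db2advis can then recommend indexes for a captured workload file.
subprocess.run(f"db2advis -d {DB} -i workload.sql -t 5", shell=True, check=True)
```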
Scheduled jobs in the ESP scheduler, a mainframe z/OS tool (Changeman packages, Docutext).
Provided cross platform training and mentored associates to provide working knowledge of DB2.
Developed the implementation process for database releases to match Agile/Sprint release methodology.
Efficiently resolved problem tickets of different severity levels (change control and HDFS cases, ServiceNow tool).
Technical and organizational process design, documentation and implementation.
Attended to day-to-day user queries, sorting out database problems.
Refined the physical design to meet system storage requirements.
Ensured that storage and archiving procedures functioned correctly.
Ensured the performance, integrity and security of the databases.
Coordinated with user departments and attended to their requirements for smooth functioning.
Understood the strategic directions of our clients to ensure the best consistency between existing systems and current and future goals.
Ability to organize and plan work independently.
Ability to multi-task between different activities and teams.
Company: Birla Soft - October 2015 – Jan 2016
Clients: BBG, Diageo and Heinz - Senior Software Engineer – DB2 LUW
Designed highly available and fault-tolerant databases, taking advantage of technologies such as clustering, RAID, log shipping, replication, federated and parallel servers on DB2 UDB, SQL Server and Oracle RDBMS platforms.
Monitored, analyzed and improved existing DB2 database environments.
Handled database installation, including extent sizing, backup/restore policies, security settings (users/roles), file allocation, application integration and release/upgrade management, for all major systems.
I solemnly declare that all the information furnished in this document is free of errors to the best of my knowledge.