Raja Shekar Reddy
Cloud Data Engineer

Phone: 9063031635
Email: rajasekharreddynimmakayala369@gmail.com

Profile Summary

• 6 years of experience in IT.
• 3.5 years of experience in Microsoft Azure cloud technologies.
• Around 2 years of experience as a SQL DBA.
• Hands-on experience with Azure analytics services: Azure Data Lake Store (ADLS), Azure SQL Data Warehouse, Azure Data Factory (ADF), Azure Databricks, etc.
• Excellent knowledge of ADF building components: Integration Runtime, Linked Services, Datasets, Pipelines, and Activities.
• Good knowledge of PolyBase external tables in SQL Data Warehouse.
• Designed and developed an audit/error-logging framework in ADF.
• Knowledge of Azure Databricks (ADB) with Spark-Python.
• Knowledge of Spark RDDs, DataFrames, and Spark SQL.
• Knowledge of big data concepts: storage and computing frameworks.
• Orchestrated data integration pipelines in ADF using activities such as Get Metadata, Lookup, ForEach, Wait, Execute Pipeline, Set Variable, Filter, and Until.
• Implemented a dynamic pipeline that extracts multiple files into multiple targets with a single pipeline.
• Automated execution of ADF pipelines using triggers.
• Knowledge of basic ADF administration activities, such as granting access to ADLS using a service principal, installing the Integration Runtime, and creating services such as ADLS and Logic Apps.
• Managed data recovery for Azure Data Factory pipelines.
• Hands-on experience with Azure Data Factory and its core concepts: datasets, pipelines and activities, scheduling, and execution.
• Designed and developed data ingestion pipelines from on-premises sources into different layers of ADLS using Azure Data Factory (ADF V2).
• Used the Logic Apps service to send e-mail notifications on success and failure of ADF pipelines.
• Extensively used the Azure Key Vault resource in ADF linked services.
• Experience with integration of data from multiple data sources.
• Extensively worked on the Copy Data activity.
• Familiar with pipeline execution methods (Debug vs. Triggers).
• Monitored and managed Azure Data Factory.
• Deployed code with the help of a CI/CD process.
• Extensively used ETL methodology to support extraction, transformation, and processing of data from sources such as Oracle, SQL Server, and flat files into Azure Data Lake Storage using Azure Data Factory (a minimal PySpark sketch of this pattern follows this list).
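For illustration only, the extract-and-land pattern in the last bullet could look roughly like the following PySpark cell when run from an ADF-triggered Databricks notebook (where spark and dbutils are predefined). The server, database, table, secret scope, and storage paths are placeholders, not details from any actual project.

    # Sketch: pull one table from an on-premises SQL Server source over JDBC
    # and land it in the raw zone of Azure Data Lake Storage as Parquet.
    # All names below are placeholders.
    jdbc_url = "jdbc:sqlserver://onprem-sql01:1433;databaseName=SalesDB"

    source_df = (
        spark.read.format("jdbc")
        .option("url", jdbc_url)
        .option("dbtable", "dbo.Orders")
        .option("user", "etl_user")
        # Credential fetched from an Azure Key Vault-backed secret scope.
        .option("password", dbutils.secrets.get(scope="kv-scope", key="sql-password"))
        .load()
    )

    # Write the extract to the raw layer of the data lake as Parquet.
    (
        source_df.write
        .mode("overwrite")
        .parquet("abfss://raw@mydatalake.dfs.core.windows.net/salesdb/orders/")
    )
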
Technical Skills

Data Integration Tools: Azure Data Factory, Azure Databricks
Databases: Azure Synapse Analytics, Azure SQL Database, Azure SQL Data Warehouse, MS SQL Server
Azure Storage Accounts: Azure Blob Storage, Azure Data Lake Storage
Code Migration: Azure DevOps, GitHub
Languages: SQL, Python, PySpark

Educational Details

B.Tech - Nova Academy of Rural Education & Research Institute of Engineering

Work Experience

Organization: SPEROWARE Technologies Pvt. Ltd., Oct 2017 - till date

Professional Experience

Project 1: Development and Support

Organization: SPEROWARE Technologies Pvt. Ltd.
Client: Network Rail (UK)

Description

• Involved in creating Azure Data Factory pipelines that move, transform, and analyse data from a wide variety of sources.
• Transformed data to Parquet format and implemented file-based incremental loads as per the vendor refresh schedule (a minimal PySpark sketch follows this list).
• Created triggers to run pipelines as per schedule.
• Configured ADF pipeline parameters and variables.
• Created pipelines in a parent-and-child pattern.
• Created triggers to execute pipelines sequentially.
• Monitored the Dremio Data Lake Engine to deliver data to customers as per business needs.
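As referenced above, a minimal PySpark sketch of the file-based incremental load and Parquet conversion is shown below. It assumes a Databricks notebook (spark predefined) and uses placeholder container, folder, and date values; in practice the refresh date would be passed in from the ADF trigger.

    from pyspark.sql.functions import lit

    # Placeholder refresh date; in practice supplied by the ADF pipeline.
    refresh_date = "2021-06-01"

    landing_path = (
        "abfss://landing@mydatalake.dfs.core.windows.net/"
        f"vendor_feed/{refresh_date}/*.csv"
    )
    curated_path = "abfss://curated@mydatalake.dfs.core.windows.net/vendor_feed/"

    # Read only the files delivered for this vendor refresh.
    df = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv(landing_path)
    )

    # Convert to Parquet, partitioned by refresh date so each delivery is
    # appended without touching earlier loads.
    (
        df.withColumn("refresh_date", lit(refresh_date))
        .write
        .mode("append")
        .partitionBy("refresh_date")
        .parquet(curated_path)
    )
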
Project 2: Development and Support

Organization: SPEROWARE Technologies Pvt. Ltd.
Client: Atlas Systems

Description

• Created linked services for multiple source systems (Oracle, SQL Server, Teradata, SAP HANA, ADLS, Blob, File Storage, and Table Storage).
• Created pipelines to extract data from on-premises source systems to Azure cloud data lake storage; extensively worked on copy activities and implemented copy behaviours such as flatten hierarchy, preserve hierarchy, and merge hierarchy; implemented error handling through the copy activity.
• Exposure to Azure Data Factory activities such as Lookup, Stored Procedure, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter, and Wait.
• Configured Logic Apps to send e-mail notifications to end users and key stakeholders with the help of the Web activity; created dynamic pipelines to handle extraction from multiple sources to multiple targets; extensively used Azure Key Vault to configure connections in linked services.
• Configured and implemented Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled pipelines and configured alerts to get notified of failed pipelines.
• Implemented delta-logic extractions for various sources with the help of a control table; implemented data frameworks to handle deadlocks, recovery, and logging of pipeline data.
• Deployed code to multiple environments with the help of a CI/CD process; worked on code defects during SIT and UAT testing, provided support for data loads during testing, and implemented reusable components to reduce manual intervention.
• Knowledge of Azure Databricks to run Spark-Python notebooks through ADF pipelines.
• Used Databricks widgets to pass parameters at run time from ADF to Databricks (illustrated in the sketch after this list).
• Created triggers, PowerShell scripts, and parameter JSON files for deployments.
• Reviewed individual work on ingesting data into Azure Data Lake and provided feedback based on the reference architecture, naming conventions, guidelines, and best practices.
• Implemented end-to-end logging frameworks for Data Factory pipelines.
• Worked extensively on different types of transformations, such as Query, Merge, Case, Validation, Map Operation, History Preserving, and Table Comparison transformations.
• Extensively used ETL to load data from flat files and from relational databases.
• Used BODS scripts and global variables.
• Extracted data from different sources such as flat files and Oracle to load into a SQL database.
• Ensured proper dependencies and proper running of loads (incremental and complete loads).
• Maintained warehouse metadata, naming standards, and warehouse standards for future application development.
• Involved in preparation and execution of unit, integration, and end-to-end test cases.
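As referenced above, the widget-based parameter passing and control-table delta logic could look roughly like the following Databricks notebook cell (spark and dbutils are predefined in a notebook). The widget, table, column, and path names are placeholders; the watermark value is assumed to be looked up from the control table by the calling ADF pipeline and passed in as a base parameter.

    # Widgets are populated at run time by the ADF Databricks Notebook
    # activity's base parameters; the defaults below are only fallbacks.
    dbutils.widgets.text("source_table", "dbo_orders")
    dbutils.widgets.text("watermark_value", "1900-01-01 00:00:00")

    source_table = dbutils.widgets.get("source_table")
    watermark = dbutils.widgets.get("watermark_value")

    # Delta logic: pick up only rows changed since the last successful load
    # recorded in the control table (column name is a placeholder).
    incremental_df = spark.sql(
        f"""
        SELECT *
        FROM {source_table}
        WHERE last_modified_ts > '{watermark}'
        """
    )

    # Append the delta to the curated zone of the data lake.
    (
        incremental_df.write
        .mode("append")
        .parquet(f"abfss://curated@mydatalake.dfs.core.windows.net/{source_table}/")
    )
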
