Rajshekarreddy (6y - 1m) - Cloud Data Engineer
Phone: 9063031635
Email: rajasekharreddynimmakayala369@gmail.com
Profile Summary
Educational Details
Professional Experience
Description
• Involved in creating Azure Data Factory pipelines that move, transform, and analyse data from a wide variety of sources
• Transforming the data to Parquet format and performing file-based incremental loads as per the vendor refresh schedule (a minimal Parquet load sketch follows this list)
• Creating Triggers to run pipelines as per schedule
• Configuring ADF pipeline parameters and variables
• Creating pipelines in a parent-child pattern
• Creating Triggers to execute pipelines sequentially
• Monitoring Dremio Data Lake Engine to deliver data to the customers as per business needs
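
A minimal PySpark sketch of the file-based incremental Parquet load mentioned above. The storage account, container names, and dated-folder convention are assumptions for illustration only:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.appName("vendor-incremental-load").getOrCreate()

# Hypothetical values: in practice these would arrive as ADF pipeline parameters.
load_date = "2024-01-15"
source_path = f"abfss://raw@examplelake.dfs.core.windows.net/vendor/{load_date}/*.csv"
target_path = "abfss://curated@examplelake.dfs.core.windows.net/vendor_parquet/"

# Read only the files delivered for this refresh cycle and convert them to Parquet,
# partitioning by load date so each vendor refresh is appended incrementally.
df = spark.read.option("header", "true").csv(source_path)
(df.withColumn("load_date", lit(load_date))
   .write.mode("append")
   .partitionBy("load_date")
   .parquet(target_path))
```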
Project 2: Development and Support
Organization: SPEROWARE Technologies Pvt. Ltd.
Client: Atlas Systems
Description
• Created Linked Services for multiple source systems (e.g., Oracle, SQL Server, Teradata, SAP HANA, ADLS, Blob Storage, File Storage and Table Storage).
• Created pipelines to extract data from on-premises source systems to Azure Data Lake Storage; worked extensively on Copy activities and implemented copy behaviours such as flatten hierarchy, preserve hierarchy and merge files; implemented error handling through the Copy activity.
• Exposure to Azure Data Factory activities such as Lookup, Stored Procedure, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter and Wait.
• Configured Logic Apps to send email notifications to end users and key stakeholders via the Web activity; created dynamic pipelines to handle extraction from multiple sources to multiple targets; extensively used Azure Key Vault to configure the connections in linked services (a notification sketch follows this list).
• Configured and implemented Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled pipelines and configured alerts to get notified of failed pipelines (a run-and-monitor sketch follows this list).
• Implemented delta-logic extractions for various sources with the help of a control table; implemented data frameworks to handle deadlocks, recovery and logging of pipeline data (a control-table watermark sketch follows this list).
• Deployed code to multiple environments through the CI/CD process; worked on code defects during SIT and UAT testing, provided support for test data loads, and implemented reusable components to reduce manual intervention.
• Knowledge of Azure Databricks to run Spark (Python) notebooks through ADF pipelines.
• Used Databricks widget utilities to pass parameters at run time from ADF to Databricks (a notebook-side widget sketch follows this list).
• Created triggers, PowerShell scripts and parameter JSON files for deployments.
• Reviewed individual work on ingesting data into Azure Data Lake and provided feedback based on the reference architecture, naming conventions, guidelines and best practices.
• Implemented end-to-end logging frameworks for Data Factory pipelines.
• Worked extensively on different types of transformations such as Query, Merge, Case, Validation, Map Operation, History Preserving and Table Comparison.
• Extensively used ETL to load data from flat files as well as from relational databases.
• Used BODS Script and Global Variables.
• Extracted data from different sources such as flat files and Oracle to load into a SQL database.
• Ensured proper dependencies and proper execution of loads (incremental and complete loads).
• Maintained warehouse metadata, naming standards and warehouse standards for future application development.
• Involved in preparation and execution of unit, integration and end-to-end test cases.
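
An illustrative sketch of the email notification flow, assuming the Logic App is exposed through an HTTP-trigger URL that the ADF Web activity (or any HTTP client) can POST to; the URL and payload fields are assumptions:

```python
import requests

# Hypothetical HTTP-trigger URL of the notification Logic App
# (in practice stored in Azure Key Vault, not in code).
logic_app_url = "https://prod-00.eastus.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke"

# Payload the Logic App maps onto an email to end users and key stakeholders.
payload = {
    "pipelineName": "pl_copy_oracle_to_adls",   # illustrative pipeline name
    "runId": "0000-1111",
    "status": "Failed",
    "message": "Copy activity failed for the Oracle source",
}

response = requests.post(logic_app_url, json=payload, timeout=30)
response.raise_for_status()
```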
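A hedged sketch of launching and monitoring an ADF pipeline run with the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory, pipeline and parameter names are placeholders:

```python
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers - replace with real values.
subscription_id = "<subscription-id>"
resource_group = "rg-data-platform"
factory_name = "adf-example"
pipeline_name = "pl_parent_vendor_load"

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Kick off the pipeline with run-time parameters (e.g. the load date).
run = adf_client.pipelines.create_run(
    resource_group, factory_name, pipeline_name,
    parameters={"load_date": "2024-01-15"},
)

# Poll the run status until it finishes; alerting would hook in on failure.
while True:
    status = adf_client.pipeline_runs.get(resource_group, factory_name, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline {pipeline_name} finished with status: {status}")
```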
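A minimal sketch of the control-table-driven delta extraction. The table and column names (control.watermarks, last_loaded_at, updated_at) are assumptions, and the watermark update assumes the control table is a Delta table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-extract").getOrCreate()

source_table = "sales.orders"   # illustrative source table

# 1. Read the last successful watermark for this source from the control table.
watermark = (spark.table("control.watermarks")
                  .where(f"source_table = '{source_table}'")
                  .collect()[0]["last_loaded_at"])

# 2. Pull only rows changed since the watermark (the delta).
delta_df = spark.table(source_table).where(f"updated_at > '{watermark}'")

# 3. Append the delta to the lake and record the new watermark so the next run
#    resumes from here (deadlock recovery and logging live in the framework).
delta_df.write.mode("append").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)

new_watermark = delta_df.agg({"updated_at": "max"}).collect()[0][0]
if new_watermark is not None:
    spark.sql(
        f"UPDATE control.watermarks SET last_loaded_at = '{new_watermark}' "
        f"WHERE source_table = '{source_table}'"
    )
```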
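A short sketch of the notebook side of the parameter hand-off: the ADF Databricks Notebook activity supplies base parameters, and the notebook reads them through dbutils widgets. The parameter names are illustrative:

```python
# Databricks notebook cell - dbutils is provided by the notebook runtime.
# ADF passes these values as baseParameters on the Databricks Notebook activity.
dbutils.widgets.text("load_date", "")
dbutils.widgets.text("source_system", "")

load_date = dbutils.widgets.get("load_date")
source_system = dbutils.widgets.get("source_system")

print(f"Processing {source_system} data for {load_date}")

# Optionally return a value to the ADF pipeline (readable from the activity output).
dbutils.notebook.exit(f"loaded:{source_system}:{load_date}")
```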