Hadoop Developer Resume



Master's degree in Electrical & Computer Engineering with two years of Hadoop developer experience.
Experience on Hadoop ecosystem projects such as MapReduce, Hive, and Spark.
Experience in processing data using HiveQL, Pig Latin, and custom MapReduce programs in Java and shell script.
Used Spark with Kafka to implement a receiver and process data using Spark Streaming.
Used Sqoop to transfer bulk data from relational databases to Hadoop and vice versa.
Collected and aggregated large amounts of log data using Apache Flume, storing the data in HDFS for further analysis.
Worked with job/workflow scheduling and monitoring tools such as Oozie and ZooKeeper.
Experience in different phases of the Software Development Life Cycle (SDLC), including design, implementation, and testing, during the development of software applications.
Proficiency in database programming using Oracle, SQL Server, HQL, and MySQL: creating stored procedures, triggers, indexes, functions, views, joins, etc.
Experience in programming, designing, testing, deploying, and supporting equipment and systems using platforms such as PLC, SCADA, and HMI.
Excellent communicator, self-motivated to implement complex rules.
Quick learner and excellent team player with the ability to meet deadlines.
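
The custom MapReduce programs mentioned above were written in Java; as a rough, self-contained illustration of the map/shuffle/reduce pattern itself, here is a toy word count in plain Python (function names are illustrative only, not the Hadoop API):

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in a line.
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle phase: group values by key, as the framework would.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: sum the counts for one word.
    return (key, sum(values))

def word_count(lines):
    pairs = chain.from_iterable(mapper(line) for line in lines)
    return dict(reducer(k, v) for k, v in shuffle(pairs).items())

print(word_count(["hive spark hive", "spark kafka"]))
# {'hive': 2, 'spark': 2, 'kafka': 1}
```

In Hadoop the shuffle is handled by the framework between the map and reduce tasks; it is made explicit here only to show all three phases.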

Streaming Technologies: Kafka, Spark Streaming, Flume

Big Data Ecosystem: Hadoop, MapReduce, HDFS, Hive, Impala, Sqoop, Oozie, ZooKeeper, shell

Operating Systems: Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, Linux

Tools: Xilinx ISE, PSpice, Microwind, ModelSim, MATLAB, SCADA, HMI, PLC, Microsoft CRM

Programming Languages: Scala, Java, C, C++, VHDL, Verilog, SystemVerilog

Extracted data from various Oracle tables to HDFS using Sqoop and transformed the data using Spark RDDs/DataFrames.
Loaded data from a Hive external table into a dynamically partitioned (partitioned on date) Hive managed table.
Developed Hive scripts to join and aggregate data at various levels (positions, instruments, risk engine).
Loaded the aggregated data into Oracle tables via Sqoop.
Wrapped the Hive scripts and other housekeeping operations (shell scripts/Java modules/email alerts) in an Oozie workflow.
Scheduled the entire application flow via an Oozie coordinator.
Monitored the Oozie jobs via the Oozie console/Hue and the health of the cluster via Ambari.
Worked with the Hadoop technical architect on the selection of Hadoop tools for aggregation and transformation by developing POCs.
Coordinated with Hadoop admin teams to ensure the health of Hadoop services, user creation, and Hue login creation for the development cluster.
Connected with the Hortonworks support team and the Driven support team (Cascading) to resolve issues.
Performed unit testing during development.
Coordinated with the QA team for functional testing and provided knowledge transfer on running Oozie coordinator jobs in the Hadoop cluster and monitoring them via the Oozie console/Hue.
Coordinated with admin/build teams to deploy code to the UAT and production Hadoop clusters.
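
The dynamic-partition load above was done in HiveQL on the cluster; as a rough stand-alone illustration of the idea (each row routed to a partition chosen by its date value), here is a minimal Python sketch, with made-up table and column names:

```python
from collections import defaultdict

def partition_by_date(rows, date_key="trade_date"):
    # One bucket per distinct date value, mirroring one HDFS
    # directory per partition in a Hive managed table.
    # "trade_date" and the row fields below are hypothetical.
    partitions = defaultdict(list)
    for row in rows:
        partitions[row[date_key]].append(row)
    return dict(partitions)

rows = [
    {"trade_date": "2016-01-04", "position": 100},
    {"trade_date": "2016-01-05", "position": 250},
    {"trade_date": "2016-01-04", "position": -40},
]
parts = partition_by_date(rows)
print(sorted(parts))             # ['2016-01-04', '2016-01-05']
print(len(parts["2016-01-04"]))  # 2
```

In Hive itself the same routing is expressed declaratively with an INSERT ... PARTITION statement over the external table; the sketch only shows the grouping the engine performs.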

Worked on loading and transforming large sets of structured, semi-structured, and unstructured data.
Worked on different file formats such as SequenceFiles, XML files, and MapFiles using MapReduce programs.
Developed Spark code using Scala and Spark SQL/Streaming for faster data processing.
Imported and exported data between different relational data sources (RDBMS, Teradata) and HDFS using Sqoop.
Involved in creating Hive tables, loading them with data, and writing Hive queries that invoke and run MapReduce jobs in the backend.
Involved in writing APIs to read HBase tables, cleanse data, and write to another HBase table.
Created a Kafka consumer group, subscribed to the appropriate topic, and received messages, validating them and writing the results.
Processed data ingested from Kafka in Spark using map and reduce operations and pushed it onward to the database.
Configured build scripts for multi-module projects with Maven.
Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
Worked on Hadoop clusters using Cloudera CDH3 and CDH4.
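
The consume/validate/write loop described above can be sketched in plain Python with a stubbed message stream; no broker is involved, and `is_valid`, the message shape, and the in-memory "database" list are all stand-ins for the real Kafka consumer and database sink:

```python
def is_valid(message):
    # Hypothetical validation rule: a message must carry a non-empty payload.
    return bool(message.get("payload"))

def process(messages, database):
    # Consume each message, validate it, and write valid payloads out.
    written, rejected = 0, 0
    for msg in messages:
        if is_valid(msg):
            database.append(msg["payload"])  # stand-in for a DB write
            written += 1
        else:
            rejected += 1
    return written, rejected

stream = [{"payload": "tick:101.5"}, {"payload": ""}, {"payload": "tick:99.8"}]
db = []
print(process(stream, db))  # (2, 1)
print(db)                   # ['tick:101.5', 'tick:99.8']
```

In the real job the `stream` came from a Kafka consumer group subscribed to a topic, and the valid records were written to the database rather than a list; only the control flow is shown here.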
