Hadoop Engineer
Ref No.: 18-00011
Location: Union, New Jersey
Position Type: Full Time/Contract
Start Date: 05/24/2018
Job Requirements:
· Participate in collaborative software development and implementation of the new Enterprise Data Lake on the Hortonworks HDP and HDF distributions
· Design and implement complex big data solutions
· Assess the quality of datasets for a Hadoop data lake
· Troubleshoot and debug Oozie/Hive jobs for runtime issues
· Develop applications and custom integration solutions using Spark Streaming and Hive (a minimal sketch follows this list)
· Apply deep learning capabilities to improve understanding of user behavior and data
· Full life-cycle experience in Data Integration, Data Warehousing, and Data Lakes
· Expert understanding of ETL/ELT and the architectural principles of data integration and data warehousing
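
For illustration only, a minimal sketch of the kind of Spark Streaming and Hive integration described above, assuming a Kafka source on HDP; the broker, topic, table, and checkpoint names are hypothetical placeholders, not part of this posting:

import org.apache.spark.sql.{DataFrame, SparkSession}

object EventIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-ingest")
      .enableHiveSupport()      // allow saveAsTable() against the Hive metastore
      .getOrCreate()

    // Read a stream of raw events from a Kafka topic (placeholder names).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "user-events")
      .load()
      .selectExpr(
        "CAST(key AS STRING)   AS event_key",
        "CAST(value AS STRING) AS payload",
        "timestamp")

    // Append each micro-batch to a Hive table through the batch writer;
    // the checkpoint directory lets the job recover after a failure.
    val query = events.writeStream
      .option("checkpointLocation", "/tmp/checkpoints/user-events")
      .foreachBatch { (batch: DataFrame, _: Long) =>
        batch.write.mode("append").saveAsTable("data_lake.user_events")
      }
      .start()

    query.awaitTermination()
  }
}

Writing through foreachBatch rather than a direct streaming sink is one common pattern for landing streaming data in Hive-managed tables.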
 
Qualifications:
  • Candidate should possess 8+ years of overall IT experience
  • Candidate should possess 5+ years of Data Warehousing experience
  • 3+ years of work experience in a Big Data environment (Hortonworks)
  • Minimum of 5 years' experience as a Hive and Hadoop developer
  • Proven experience in developing Big Data pipelines in a Hadoop ecosystem using Hive, Hive w/Tez, HDFS, HBase, and Spark on YARN (Hortonworks HDP preferred)
  • Experience in Hadoop-related technologies is a must (HDFS, MapReduce, HBase, and Hive)
  • Big Data background with experience designing and implementing large-scale systems
  • Hands-on expertise in Java/Scala, Pig, and Hive
  • Working experience with Hadoop, pub/sub messaging (Kafka), stream processing (Storm, NiFi, Spark Streaming, etc.), and ETL processing with tools such as Ab Initio and Talend
  • Candidate should possess technical expertise in application integration (real time, data streams, XML, etc.)
  • History of working successfully with cross-functional engineering teams
  • Ability to multitask and work comfortably in a large organization across multiple teams
  • Demonstrated competency in all phases of software development. 
  • Expert understanding of MPP (Teradata, Netezza) and/or large SMP environments.
  • Solid understanding of data modeling and design considerations for analytical systems.
  • Experience with Hadoop-based ETL (Extract, Transform, Load) tools such as Pentaho or Talend and NoSQL environments such as MongoDB
  • Knowledge of relational database techniques, data warehouse concepts, and architecture
  • Experience working in a Unix environment with a scripting language (shell/Python)
  • Advanced SQL capabilities
  • Strong problem solving skills and proactive attitude
  • Familiar with agile development practices
  • Certification on the Hortonworks Data Platform
  • Bachelor's Degree in Information Systems, Computer Science or related field