Hadoop Developer
Ref No.: 18-04437
Location: Cary, North Carolina
Position: Hadoop Developer


This is a temp-to-hire role, so please submit only US Citizens or Green Card holders, as the client will be unable to sponsor an H-1B visa at the time of FTE conversion.

Key Responsibilities:
• Ingesting large volumes of data from various platforms for analytics needs.
• Building and implementing ETL processes using big data tools such as Spark (Scala/Python), NiFi, etc.
• Monitoring performance and advising on any necessary infrastructure changes.
• Defining data security principals and policies using Ranger and Kerberos.
• Working with IT and business customers to develop and design requirements and formulate the technical design.

Supervisory Responsibilities: Leads and motivates project team members who are not direct reports, and provides work direction to lower-level staff members.


Essential Business Experience and Technical Skills:
• 10+ years of experience in solutions development

• Building and implementing ETL processes using big data tools such as Spark (Scala/Python), HBase, Apache Solr, Kafka, NiFi, etc.
• Proficient understanding of distributed computing principles
• Management of a Hadoop cluster with all included services, preferably Hortonworks.
• Proficiency with Hadoop v2, MapReduce, HDFS
• Proficiency in Unix scripting
• Experience building stream-processing systems using solutions such as Storm or Spark Streaming
• Extensive experience with Spark and Scala
• Experience with Java/MapReduce, Storm, Kafka, Flume, and data security using Ranger
• Experience with integration of data from multiple data sources using Sqoop
• Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
• Performance tuning and problem-solving skills are a must
• Experience analyzing images and videos using big data tools is preferred
• Experience with search applications such as Apache Solr, version 6.5+


Required:
• Experience with HDFS, Hive, Spark, Scala, Python, HBase, Pig, Flume, Kafka, etc.
• Unix & Python scripting
• Experience with SQL tuning and data warehousing (DW) concepts

Preferred:
• Experience with big data machine learning toolkits, such as Mahout, Spark ML, or H2O
• Any experience building RESTful APIs
• Exposure to analytical tools such as SAS or SPSS is a plus
• Experience with Informatica PC 10 and implementing pushdown processing into the Hadoop platform is a huge plus