Hadoop Admin
Ref No.: 17-00089
Location: Jericho, New York
Position Type: Contract
Start Date: 06/21/2017
Description
This position is responsible for delivering the systems infrastructure solutions for assigned big data applications; identifying and documenting big data use case requirements; leading the design and engineering of the systems infrastructure; and taking accountability for implementing the solutions, rolling them out to production, and training the production staff for steady-state support. The infrastructure solutions delivered must be highly available, scalable, secure, and optimally performing.

Position Responsibilities:
  • Manage scalable Hadoop virtual and physical cluster environments.
  • Manage the backup and disaster recovery for Hadoop data.
  • Optimize and tune the Hadoop environments to meet performance requirements.
  • Install and configure monitoring tools for all the critical Hadoop systems and services.
  • Work in tandem with big data developers to design use-case-specific, scalable, supportable infrastructure.
  • Provide responsive support for day-to-day requests from development, support, and business analyst teams.
  • Analyze and debug slow-running development and production processes.
  • Maintain a solid technical understanding of services such as Drill, Hive, Hue, and Oozie.
  • Work with Linux server admin team in administering the server hardware and operating system.
  • Assist with developing and maintaining system documentation.
  • Create and publish various production metrics including system performance and reliability information to systems owners and management.
  • Perform ongoing capacity management forecasts including timing and budget considerations.
  • Coordinate root cause analysis (RCA) efforts to minimize future system issues.
  • Mentor, develop and train other systems operations staff members as needed.
  • Provide off-hours support.
Technical Qualifications:
  • Demonstrated experience in architecture, engineering and implementation of enterprise-grade production big data use cases.
  • Extensive knowledge of Hadoop architecture and HDFS.
  • Extensive hands-on experience with MapReduce, Hive, Pig, Java, HBase, Solr, and the following Hadoop ecosystem products: Sqoop, Flume, Oozie, Storm, Spark, and/or Kafka.
  • Hands-on delivery experience with popular Hadoop distribution platforms such as Cloudera, HortonWorks, or MapR.
  • Hands-on experience in the architectural design and solution implementation of large-scale big data use cases.
  • Shell scripting, Python, Java, and/or C/C++ programming experience.
  • Understanding of industry patterns for big data solutions.
  • AWS or other Cloud management experience.
  • Experience with monitoring tools such as Nagios and SolarWinds.
General Qualifications:
  • Demonstrated experience in working with vendors and user communities to research and test new technologies that enhance the technical capabilities of the existing Hadoop cluster.
  • Demonstrated experience in working with Hadoop architects and big data users to implement new Hadoop ecosystem technologies that support a multi-tenant cluster.
  • Ability and desire to "think out of the box" to not only meet requirements but exceed them whenever possible.
  • Ability to multi-task in a fast-paced environment and complete assigned tasks on time.
  • Ability to effectively interact with team members using strong verbal and written communication skills.
  • Self-motivated and enthusiastic when working with difficult problems and tight deadlines.
  • A strong desire to learn and the ability to understand new concepts quickly.