Candidates must have enterprise-grade DBA experience (any database) prior to stepping into Hadoop administration.
Sr. Hadoop Admin Position Responsibilities:
• Manage scalable Hadoop virtual and physical cluster environments.
• Manage the backup and disaster recovery for Hadoop data.
• Optimize and tune the Hadoop environments to meet performance requirements.
• Install and configure monitoring tools for all the critical Hadoop systems and services.
• Work in tandem with big data developers and designers to build use-case-specific, scalable, supportable infrastructure.
• Provide responsive support for day-to-day requests from development, support, and business analyst teams.
• Analyze and debug slow-running development and production jobs.
• Solid technical understanding of services such as HBase, Kafka, Spark, Hive, HDFS, YARN, and Ambari.
• Work with Linux server admin team in administering the server hardware and operating system.
• Assist with developing and maintaining system documentation.
• Create and publish various production metrics including system performance and reliability information to systems owners and management.
• Perform ongoing capacity management forecasts including timing and budget considerations.
• Coordinate root cause analysis (RCA) efforts to minimize future system issues.
• Mentor, develop and train other systems operations staff members as needed.
• Provide off-hours support as required and participate in the on-call rotation.
• Isilon-based Hadoop experience is a plus.
Required Skills and Experience:
1) At least 3 years' experience installing and configuring Hortonworks Hadoop clusters
2) Backup and recovery of the HDFS file system (a Java-based distributed filesystem)
3) Administration of the MySQL databases used by the cluster
4) Configure and maintain high availability (HA) of HDFS, the YARN (Yet Another Resource Negotiator) ResourceManager, MapReduce, Hive, HBase, Kafka, and Spark
5) Experience setting up Kerberos and administering a Kerberized cluster.
6) Ranger service configuration and policy setup (authorization)
7) Knox service configuration (reverse proxy and perimeter access control for Hadoop)
8) Strong understanding of and experience with ODBC/JDBC connectivity from clients such as Tableau and MicroStrategy to server components such as Hive, Spark, and HBase
9) Diagnose and resolve individual service issues.
10) Monitor and tune cluster component performance.
11) Very strong Linux command-line skills for day-to-day operations, and the ability to analyze and write shell scripts.
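As a sketch of the shell-scripting skill item 11 describes, the snippet below checks a cluster storage metric against an alert threshold. The threshold, the usage value, and the commented-out `hdfs dfsadmin -report` parsing are illustrative assumptions, not values or tooling specified by this posting.

```shell
#!/bin/sh
# Minimal day-to-day admin check: warn when HDFS usage crosses a threshold.
THRESHOLD=80  # illustrative alert threshold (percent)

# check_usage USED_PCT: succeeds when usage is strictly below the threshold.
check_usage() {
  [ "$1" -lt "$THRESHOLD" ]
}

# On a live cluster the percentage would come from the HDFS CLI, e.g.:
#   used_pct=$(hdfs dfsadmin -report | awk -F'[ %]+' '/DFS Used%/ {print int($3); exit}')
used_pct=75  # illustrative stand-in value

if check_usage "$used_pct"; then
  echo "HDFS usage ${used_pct}% is below ${THRESHOLD}%"
else
  echo "WARNING: HDFS usage ${used_pct}% exceeds ${THRESHOLD}%"
fi
```

In practice a script like this would run from cron and page the on-call admin instead of echoing.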
Overall, 10 years of IT experience in middleware technologies.
• Demonstrated experience working with vendors and user communities to research and test new technologies that enhance the capabilities of the existing Hadoop cluster.
• Demonstrated experience working with Hadoop architects and big data users to implement new Hadoop ecosystem technologies in support of a multi-tenant cluster.
• Ability and desire to "think outside the box" to not only meet requirements but exceed them whenever possible.
• Ability to multi-task in a fast-paced environment and complete assigned tasks on time.
• Ability to effectively interact with team members using strong verbal and written communication skills.
• Self-motivated and enthusiastic when working with difficult problems and tight deadlines.
A strong desire to learn and the ability to understand new concepts quickly.