Technology Lead - US
Ref No.: 17-00432
Location: Naperville, Illinois
Start Date: 10/12/2017
Analytics
Grid Computing Platforms, HADOOP
24085BR
Job Description
Infosys - Client - TL - US - Hadoop Administrator (Multiple sites across the US)

Infosys is a global leader in technology services and consulting. We enable clients in more than 50 countries to create and execute strategies for their digital transformation. From engineering to application development, knowledge management and business process management, we help our clients find the right problems to solve, and to solve these effectively. Our global team of 200,000 innovators is differentiated by the imagination, knowledge and experience that we bring to every project we undertake, across industries and technologies.

Wanted: Global Innovators To Help Us Build Tomorrow's Enterprise

In the role of Technology Lead, you will interface with key stakeholders and apply your technical proficiency across different stages of the Software Development Life Cycle, including Requirements Elicitation, Application Architecture Definition and Design. You will play an important role in creating high-level design artifacts. You will also deliver high-quality code for a module, lead validation for all types of testing, and support activities related to implementation, transition and warranty. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.

U.S. citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to provide sponsorship at this time.

(Final mapping of role and location will be determined after the interview)
Qualifications 
Basic
• Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
• At least 4 years of overall IT experience
Preferred
•  At least 4 years of experience in Implementation and Administration of Hadoop infrastructure
•  At least 2 years of experience in Architecture, Design, Implementation and Administration of Hadoop infrastructure
•  At least 2 years of experience in Project life cycle activities on development and maintenance projects.
•  Should be able to advise client and internal teams on which product or distribution is best suited to a given situation and setup
•  Operational expertise in troubleshooting; understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
•  Hadoop, MapReduce, HBase, Hive, Pig, Mahout
•  Hadoop Administration skills: experience working with Cloudera Manager or Ambari, plus monitoring tools such as Ganglia and Nagios
•  Experience in using Hadoop Schedulers - FIFO, Fair Scheduler, Capacity Scheduler
•  Experience in Job Schedule Management - Oozie or Enterprise Schedulers like Control-M, Tivoli 
•  Good knowledge of Linux (RHEL, CentOS, Ubuntu)
•  Experience in setting up AD/LDAP/Kerberos authentication models
•  Experience in data encryption techniques
Responsibilities:
•  Upgrades and Data Migrations
•  Hadoop ecosystem and cluster maintenance, as well as creation and removal of nodes
•  Perform administrative activities with Cloudera Manager/Ambari and tools like Ganglia, Nagios
•  Setting up and maintaining Infrastructure and configuration for Hive, Pig and MapReduce 
•  Monitor Hadoop Cluster Availability, Connectivity and Security
•  Setting up Linux users, groups, Kerberos principals and keys 
•  Aligning with the Systems engineering team in maintaining hardware and software environments required for Hadoop
•  Software installation, configuration, patches and upgrades 
•  Working with data delivery teams to set up Hadoop application development environments
•  Performance tuning of Hadoop clusters and Hadoop MapReduce routines
•  Screen Hadoop cluster job performance and perform capacity planning
•  Data modelling, Database backup and recovery
•  Manage and review Hadoop log files
•  File system management, disk space management and monitoring (Nagios, Splunk, etc.)
•  HDFS support and maintenance
•  Planning of backup, high availability and disaster recovery infrastructure
•  Diligently teaming with Infrastructure, Network, Database, Application and Business Intelligence teams to guarantee high data quality and availability
•  Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades
•  Implementation of a strategic operating model in line with best practices
•  Point of contact for vendor escalations
•  Ability to work in a team in a diverse, multi-stakeholder environment
•  Analytical skills 
•  Experience in, and desire to work in, a global delivery environment
The job entails sitting and working at a computer for extended periods of time. The candidate should be able to communicate by telephone, email or face to face. Travel may be required as per the job requirements.