Analytics and Big Data Hadoop Engineer
Ref No.: 17-07651
Location: Bellevue, Washington
Position Type: Contract
Key Responsibilities
• Translate complex functional and technical requirements into detailed designs
• Experience in application deployment architectures and concerns such as scalability, performance, availability, reliability, security, etc.
• Experience delivering distributed and highly scalable applications on NoSQL/sharded relational databases/MapReduce
• Experience in one or more of the following technologies:
o 1+ yrs on Hadoop (Apache/Hortonworks/Cloudera) and/or other MapReduce platforms
o 1+ yrs on Hive, Pig, Sqoop, Kafka, Flume, and/or Mahout
o 1+ yrs as a J2EE architect with strong core Java experience
o 1+ yrs architecting NoSQL solutions, plus competent SQL skills; Spark, Storm
o Strong shell scripting/Linux programming skills
o Good knowledge of any data integration and/or data warehouse (DW) tools is a plus
• Candidate should have worked on solutions using open-source software
• Develop in Hadoop and Hadoop-related tools for ETL and other performance-related work
• Develop in SQL and PL/SQL, including relational database development
• Develop and unit test in a big data Hadoop environment using Agile methodology
• Create scalable and high-performance web services for data tracking
• Should have knowledge of advanced analytical tools, languages, or libraries (e.g. SAS, Mahout)
• Strong understanding of, and ability to promote, best practices and standards for Hadoop application design and implementation is required
• Real working experience on big data use cases and development is a must
• Load data from disparate systems
• Analyze large volumes of data and uncover insights
• Previous experience with highly scalable or distributed RDBMSs (Teradata, Netezza, Exadata, etc.)
• Knowledge of cloud computing infrastructure (e.g. Amazon Web Services EC2, Elastic MapReduce) and considerations for scalable, distributed systems is desired.
• Knowledge of NoSQL platforms (e.g. key-value stores, graph databases, RDF triple stores) is an added advantage.
• Hands-on experience using messaging technologies (preferably open source: RabbitMQ, ActiveMQ, Qpid, etc.)
• Hands-on experience using Apache Solr preferred
• Excellent written and verbal communication skills

Junior Developer
• 1-2 years of experience as a big data engineer; 2-3 years of Scrum and DevOps methodology

Senior Developer
• 4-6 years of experience as a big data engineer; 4-5 years of Scrum and DevOps methodology