Big Data Consultant
Ref No.: 18-63073
Location: Denver, Colorado
Position Type: Contract
Start Date: 08/29/2018
As a Big Data (Hadoop) Architect, you will be responsible for Cloudera Hadoop development and high-speed querying; managing and deploying Flume, Hive, and Pig; testing prototypes and overseeing handover to operational teams; and proposing best practices and standards. Requires expertise in designing, building, installing, configuring, and developing a Cloudera Hadoop ecosystem.

Principal Duties and Responsibilities (Essential Functions):
• Work with cross-functional consulting teams within the data science and analytics team to design, develop, and execute solutions that derive business insights and solve clients' operational and strategic problems.
• Build the platform using cutting-edge capabilities and emerging technologies, including the Data Lake and the Cloudera data platform, which will be used by thousands of users.
• Work in a Scrum-based Agile team environment using Hadoop.
• Install and configure the Hadoop and HDFS environment using the Cloudera data platform.
• Create ETL and data ingest jobs using MapReduce, Pig, or Hive.
• Work with and integrate multiple types of data, including unstructured, structured, and streaming.
• Support the development of data science and analytics solutions and products that improve existing processes and decision making.
• Build internal capabilities to better serve clients and demonstrate thought leadership in the latest innovations in data science, big data, and advanced analytics.
• Contribute to business and market development.

Specific skills and abilities:
• Strong computer science and programming background
• Deep experience in data modeling, EDW, star, snowflake, and other schemas, and cubing technologies (OLAP)
• Ability to design and build data models and a semantic layer for accessing data sets
• Ability to own a complete functional area, from analysis to design to development and full support
• Ability to translate high-level business requirements into detailed design
• Ability to build integrations between data systems (RESTful API, micro-batch, streaming) using technologies such as SnapLogic (iPaaS), Spark SQL, HQL, Sqoop, Kafka, Pig, and Storm
• Hands-on experience working with the Cloudera Hadoop ecosystem and technologies
• Strong desire to learn a variety of technologies and processes with a "can do" attitude
• Experience guiding and mentoring 5-8 developers on various tasks
• Aptitude to identify, create, and use best practices and reusable elements
• Ability to solve practical problems and deal with a variety of concrete variables in situations where only limited standardization exists

Qualifications & Skills:
• Bachelor's degree; Master's degree required.
• Expertise with HBase, NoSQL, HDFS, and Java MapReduce for Solr indexing, data transformation, back-end programming, Java, JavaScript, Node.js, and OOAD.
• Hands-on experience in Scala and Python.
• 7+ years of experience in programming and data engineering, with a minimum of 2 years of experience in Cloudera Hadoop.
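For candidates new to the MapReduce paradigm that the ETL duties above reference, the core idea can be sketched in a few lines of plain Python. This is a toy in-process illustration of the map, shuffle, and reduce phases (a word count), not an actual Hadoop or Cloudera job; all function names here are illustrative.

```python
from collections import defaultdict

def map_phase(records):
    """Emit (key, 1) pairs for each word, like a Hadoop Mapper."""
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    """Group values by key, like Hadoop's shuffle/sort step."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Sum the grouped values per key, like a Hadoop Reducer."""
    return {key: sum(values) for key, values in grouped.items()}

def word_count(records):
    return reduce_phase(shuffle_phase(map_phase(records)))

if __name__ == "__main__":
    logs = ["hive pig hadoop", "hadoop hive hive"]
    print(word_count(logs))  # {'hive': 3, 'pig': 1, 'hadoop': 2}
```

In a real cluster job the same three phases run distributed across nodes, with mappers and reducers written as Java MapReduce classes or expressed declaratively in Pig or Hive.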