Hadoop Big Data
Ref No.: 18-65405
Location: Charlotte, North Carolina
Position Type: Full Time/Contract
Start Date: 09/06/2018
Job Description

1. Sound understanding of and experience with the Hadoop ecosystem (Cloudera). Able to understand and explore the constantly evolving tools within the Hadoop ecosystem and apply them appropriately to the problems at hand.
2. Experience working with a Big Data implementation in a production environment
3. Experience in HDFS, MapReduce, Hive, Impala, and Linux/Unix technologies is mandatory
4. Experience in Flume/Kafka/Spark is an added advantage
5. Experience in Unix shell scripting is mandatory
6. Able to analyze existing shell script/Python/Perl code to debug issues or enhance the code
7. Sound knowledge of relational databases (SQL) and experience with large SQL-based systems.
8. Strong IT consulting experience in various data warehousing engagements, handling large data volumes and architecting big data environments.
9. Deep understanding of algorithms, data structures, performance optimization techniques and software development in a team environment.
10. Benchmark and debug critical issues with algorithms and software as they arise.
11. Lead and assist with the technical design/architecture and implementation of the big data cluster in various environments.
12. Able to guide and mentor the development team, for example in creating custom common utilities/libraries that can be reused across multiple big data development efforts.
13. Exposure to ETL tools (e.g., DataStage) and NoSQL databases (HBase, Cassandra, MongoDB)
14. Work with line of business (LOB) personnel, external vendors, and internal Data Services team to develop system specifications in compliance with corporate standards for architecture adherence and performance guidelines.
15. Provide technical resources to assist in the design, testing and implementation of software code and infrastructure to support data infrastructure and governance activities.
16. Support multiple projects with competing deadlines.