Big Data Architect
Ref No.: 16-10150
Location: Jupiter, Florida
Job responsibilities:
  1. Cover the various IM groups and guide them on a variety of tools and technologies: Kafka, Spark, Hive, HBase, WebHDFS, Oozie, and Jenkins, to name a few.
  2. Have expertise in all or most of the technologies listed above and guide all IM groups.
  3. Extensive prior experience developing applications across these technologies in the Hadoop ecosystem
  4. Ability to advise on what has and has not worked at different customer sites
  5. Ability to establish best practices in these technologies (to improve the quality and turnaround time of development)
  6. Ability to act as a Big Data CoE leader and provide consulting to different groups
Expected Skill Set:
  • Design and develop data ingestion, aggregation, integration and advanced analytics in Hadoop
  • Define development standards and design patterns to process and store high volume data sets
  • Multidisciplinary work supporting real-time streams, ETL pipelines, data warehouses, and reporting services
  • Integrate Big Data tools into traditional enterprise architectures
  • Develop automated unit and integration tests for your code, and review others' code to ensure quality
  • Establish sound coding and testing practices to ensure quality software builds
  • Bring new and innovative solutions to the table to resolve challenging software issues as needed throughout the project life cycle
  • Establish frameworks and best practices across the Hadoop ecosystem
  • Build scalable, performant applications in Spark, covering areas such as Spark Streaming, Spark SQL, and GraphX
  • Build scalable, performant data streaming applications using Spark and Kafka
  • Experience with Big Data technologies and frameworks such as Hive, HBase, WebHDFS, Spark, Flume, and Kafka
  • Technical experience implementing multiple Big Data programs