Extract Transform and Load (ETL) Engineer (CMS)
Ref No.: 18-00270
Location: Woodlawn, Maryland
Position Type: Contract
Start Date: 01/16/2018
Client: CMS (Centers for Medicare & Medicaid Services)
Role: Extract Transform and Load (ETL) Engineer
Location: Woodlawn, MD
Duration: Full Time / Long Term Contract
 
Job Title/Labor Category: Extract Transform and Load (ETL) Engineer
Location: Woodlawn, MD
Project Duration: March 27, 2018 – March 27, 2023
Educational Requirements: BS/BA
Preferred Certifications: Certified Developer from a major Hadoop distributor such as Cloudera, Hortonworks, or MapR
Letter of Intent Required: Yes, it is mandatory. Please see below for the required format.
 
Required/Must have Skills/Experience:
  • Must have 5-7 years of experience in the ETL Engineer role
  • 5 years of proficient experience with Hadoop data ingestion, data processing, and data modeling processes
  • Experience working on Big Data projects or Health Insurance Market Place projects
  • Prior Health IT experience with CMS or HHS
  • Functional knowledge of the CMS IT XLC (Expedited Life Cycle), MIDAS (Multidimensional Information Data Analytics System), the ACA (Affordable Care Act), and the Health Insurance Portability and Accountability Act (HIPAA)
  • Must have experience in NiFi, Spark, AWS Glue, IBM InfoSphere DataStage, Informatica, and/or MS SQL Server Integration Services
  • Experience in Data Warehousing environment, experience in handling large volume of data
  • Experience analyzing, making recommendations on, and resolving ETL, view, or query performance problems
  • Experience designing, developing, and testing ETL using SQL programming
  • Experience defining and developing source-to-target data mapping specifications
  • Experience developing ETL code from internal and external sources using a variety of ETL tools
  • Experience working in DevOps and Agile environments
  • Experience in ETL development for relational databases with an advanced SQL programming background
  • Experience with datasets large enough to require extended storage
 
Preferred Skills/Experience:
  • Experience performing data analysis and data quality checks and recommending data remediation plans
  • Experience in writing Unix shell scripts
  • Experience reviewing, updating, and maintaining existing IT documentation
  • Experience creating, debugging, and extending ETL workflows
 
Familiarity with Technologies/Tools:
  • AWS, Redshift Spectrum, RDS, Aurora, Apache Spark, Scala, Python, XML, Java, JSON, SQL, S3, ETL, AWS Athena, QuickSight, Glue, Redshift, Apache NiFi, Jupyter