Senior Data Engineer - Remote
Ref No.: 24-177271
Location: Remote, Washington
Position Type: Contract
Experience Level: 4 Years
Start Date: 10/02/2024
Title: Senior Data Engineer

Location: Remote

Duration: 6+ months

Summary: The main function of the Engineer is to design, collect, and process data within Azure. This role involves developing data acquisition tooling and reporting to derive product insights and better align the product with its design specifications.

Job Responsibilities:

  • Attend and contribute to daily team syncs
  • Take ownership of data analytics projects and drive adoption and insights for our data consumers.
  • Design, develop, and document robust data flow processes
  • Build and manage data pipelines using Azure Data Factory and Azure Synapse.
  • Monitor and optimize the performance of data systems to ensure they operate efficiently and securely.
  • Provide support for customer issues, outages, and new feature requests.
  • Collaborate with cross-functional teams to integrate disparate datasets into a common data view
  • Take responsibility for system health and monitoring, as well as ETL development and support
  • Contribute clear and accurate documentation and user guides to enable data consumers
  • Use Azure tooling to create data pipelines, security configurations, and automation; this includes C# or Python coding to configure Azure Functions or set up automated processes (a sketch follows this list). An understanding of Azure Synapse is necessary for ADF pipeline functions.
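
For illustration, the following is a minimal Python sketch of the kind of automated process referenced above, assuming a timer-triggered Azure Function on the v2 programming model; the function name, schedule, and log messages are placeholders rather than details of the actual role.

    import logging

    import azure.functions as func

    app = func.FunctionApp()

    # Hypothetical hourly automation step; the schedule and function name are placeholders.
    @app.timer_trigger(schedule="0 0 * * * *", arg_name="timer", run_on_startup=False)
    def refresh_product_metrics(timer: func.TimerRequest) -> None:
        if timer.past_due:
            logging.warning("Timer is past due; the refresh is running late.")
        # Real work (querying Synapse, moving blobs, calling downstream services) would go here.
        logging.info("Hourly product-metrics refresh completed.")

A comparable function could be written in C# instead; Python is shown only because both languages are listed as acceptable.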

Typical Day in the Role:

  • Purpose of the Team: This team works with the Mixed Reality Product Analytics team, which develops data collection and processing systems to improve the product's capabilities
  • Key projects: This role will work with multiple disparate datasets coming from different teams and locations, and will stand up secure services to optimize data consumption and analytics.
  • Typical task breakdown and operating rhythm: The role will consist of:
  • 20-30 min daily team sync
  • First: Drawing up diagrams of how the system is supposed to work
  • Second: Implementing it within the Azure tooling and environment (a Python sketch of kicking off an existing Data Factory pipeline run follows this list).
  • Depending on the day: C# or Python coding work to configure Azure Functions or to set up automated processes.
  • Finally: Customer support issues, outages, new feature requests, and other interrupt-driven work depending on what needs to be done.
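
As a rough illustration of the "implementing it within Azure" step, the sketch below starts and checks a run of an existing Azure Data Factory pipeline from Python; the subscription, resource group, factory, pipeline, and parameter names are all placeholders, not references to the team's actual resources.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"    # placeholder
    RESOURCE_GROUP = "rg-product-analytics"  # placeholder
    FACTORY_NAME = "adf-product-analytics"   # placeholder

    credential = DefaultAzureCredential()
    adf = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

    # Start a run of an existing pipeline, passing a parameter to scope the load.
    run = adf.pipelines.create_run(
        RESOURCE_GROUP,
        FACTORY_NAME,
        "pl_copy_telemetry_to_sql",          # placeholder pipeline name
        parameters={"load_date": "2024-10-02"},
    )

    # Check the run status (Queued, InProgress, Succeeded, Failed, or Cancelled).
    status = adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    print(status.status)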

Skills:

  • Creativity, verbal and written communication skills, analytical and problem-solving ability.
  • Team player and detail oriented.
  • 4-5+ years of experience with Azure data infrastructure, including Azure Synapse, Azure Data Factory, and Azure SQL.
  • Experience analyzing and optimizing Azure processes.
  • 8-10+ years of experience with common file formats (CSV, JSON, HTML) and transporting data into SQL databases (a loading sketch follows this list).
  • Strong analytical and problem-solving abilities.
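
Below is a minimal sketch of that kind of load, assuming pandas, SQLAlchemy, and ODBC Driver 18 with Azure AD authentication; the server, database, table, and file names are placeholders, and a real setup would pull connection secrets from Key Vault or a managed identity rather than hard-coding them.

    import urllib.parse

    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connection details for an Azure SQL database.
    odbc_str = (
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:example-server.database.windows.net,1433;"
        "Database=product_analytics;"
        "Authentication=ActiveDirectoryDefault;"
        "Encrypt=yes;"
    )
    engine = create_engine(
        "mssql+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(odbc_str)
    )

    # Read the raw CSV extract and append it to a staging table for downstream ETL.
    df = pd.read_csv("telemetry_extract.csv")
    df.to_sql("stg_telemetry", engine, schema="dbo", if_exists="append", index=False)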

Education/Experience:

  • 8-10 years of experience in data engineering or a related field.

Requirements:

  • Years of Experience Required: 8+ overall years of experience in the field.
  • Experience with data interpretation layers: Azure Synapse, Azure security settings, Azure Blob Storage, Azure Function Apps, Azure SQL, Power BI, JMP
  • Microsoft Synapse or Azure SQL-based tools
  • Performance Indicators: Performance will be assessed and tracked using ADO (Azure DevOps), quality of work, and adherence to timelines

Top 3 Hard Skills Required + Years of Experience:

  • 8-10+ years of experience with data pipeline development, ETL, and data warehouse design; preference for experience using Azure Synapse, Azure Data Factory, and Azure SQL
  • 8-10+ years of experience with a programming language, Python or C#
  • Experience troubleshooting and resolving complex issues