Job Description: Competitive Sourcing
*Position requires export compliance*
Location: Upon return to office, the candidate can sit in Bellevue, Menlo Park, or Seattle
We are looking for a Data Engineer to not only build data pipelines but also extend the next generation of our data tools. As a Data Engineer, you will develop a clear sense of connection with our organization and its leadership, as Data Engineering is the eyes through which they see the product.
As a member of Infrastructure Strategy Data Engineering, you will belong to a centralized Data Science/Data Engineering team that partners closely with teams in Facebook’s Infrastructure organization. Through the consulting nature of our team, you will contribute to a variety of projects and technologies, depending on partner needs. Projects include analytics, Client modeling, tooling, services, and more.
Data Engineer, Infrastructure Strategy Responsibilities
Partner with leadership, engineers, program managers and data scientists to understand data needs.
Design, build and launch extremely efficient and reliable data pipelines to move data across a number of platforms including Data Warehouse, online caches and real-time systems.
Communicate, at scale, through multiple media: presentations, dashboards, company-wide datasets, bots and more.
Educate your partners: Use your data and analytics experience to ‘see what’s missing’, identifying and addressing gaps in their existing logging and processes.
A broad range of partners equates to a broad range of projects and deliverables: Client Models, datasets, measurements, services, tools and processes.
Leverage data and business principles to solve large scale web, mobile and data infrastructure problems.
Build data expertise and own data quality for your areas.
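The pipeline work described above can be sketched, in miniature, as a toy extract-transform-load job. This is a hypothetical illustration only; the data, field names, and in-memory "warehouse" are invented for the example and do not represent Facebook's actual stack:

```python
# Toy extract-transform-load (ETL) pipeline: hypothetical example only.

def extract():
    # Pretend source: raw event rows as they might arrive from a log.
    return [
        {"user_id": 1, "event": "click", "latency_ms": "120"},
        {"user_id": 2, "event": "view", "latency_ms": "45"},
        {"user_id": 1, "event": "click", "latency_ms": "80"},
    ]

def transform(rows):
    # Clean types and aggregate: total events and mean latency per user.
    summary = {}
    for row in rows:
        stats = summary.setdefault(row["user_id"], {"events": 0, "latency_total": 0})
        stats["events"] += 1
        stats["latency_total"] += int(row["latency_ms"])
    return {
        user: {"events": s["events"],
               "avg_latency_ms": s["latency_total"] / s["events"]}
        for user, s in summary.items()
    }

def load(summary, warehouse):
    # Stand-in for a write to a warehouse table (here, just a dict).
    warehouse.update(summary)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse[1]["events"])  # 2
```

Real pipelines of this shape would move data between a Data Warehouse, online caches, and real-time systems rather than Python dicts, but the extract/transform/load separation is the same.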
Strong SQL skills are a MUST
Data Modeling skills are a MUST and are considered part of the SQL signals we look for
For coding, the candidate should be familiar with major concepts and comfortable with at least one language of their choice
Problem Solving is required
Partnership is less of a factor here, as most of the work will be directed by FTE Data Engineers; however, good soft skills are a PLUS
5+ years of Python development experience.
5+ years of SQL experience.
3+ years of experience with workflow management engines (e.g. Airflow, Luigi, Prefect, Dagster, digdag.io, Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M).
3+ years of experience with Data Modeling.
Experience analyzing data to identify Client opportunities and address gaps.
5+ years of experience in custom ETL design, implementation and maintenance.
Experience working with cloud or on-prem Big Data/MPP analytics platforms (e.g. Netezza, Teradata, AWS Redshift, Google BigQuery, Azure Data Warehouse, or similar).
Experience with more than one coding language.
Experience designing and implementing real-time pipelines.
Experience with data quality and validation.
Experience with SQL performance tuning and end-to-end process optimization.
Experience with anomaly/outlier detection.
Experience with notebook-based Data Science workflow.
Experience with Airflow.
Experience querying massive datasets using Spark, Presto, Hive, Impala, etc.
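As a small illustration of the anomaly/outlier detection experience listed above, here is a minimal z-score sketch using only the Python standard library. The threshold and sample data are arbitrary assumptions for the example, not a prescribed method:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily pipeline row counts with one obvious spike.
counts = [100, 102, 98, 101, 99, 100, 500]
print(zscore_outliers(counts, threshold=2.0))  # [500]
```

In practice, a data quality check like this would run over warehouse metrics and alert on the flagged values rather than printing them.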