Develop data pipelines to extract and load data from various sources into cloud data warehouses
Work closely with data scientists and analysts across the business to provide them with clean, reliable data
Collaborate with stakeholders across the organization to understand business needs, design reusable data assets, and implement models, dashboards, and automations to solve problems with data
Provide guidance and mentorship to engineers, analysts, and other data practitioners on best practices for data modeling, visualization, and analysis to enable self-service
Implement and champion data governance and quality standards to ensure the security and reliability of our data
What You Bring
3+ years of software development experience
Bachelor’s degree in Computer Science, Applied Math, Economics, or another computationally intensive field; or an equivalent combination of technical education and work experience
Experience with data storage and data pipeline solutions and technologies within and outside the modern data stack, such as dbt, Redshift/Snowflake, Fivetran, and Airflow
Experience with business intelligence tools such as Tableau, Looker, and Power BI
Strong experience with SQL and familiarity with server-side languages such as Python, Java, and TypeScript