Senior Data Engineer – Remote
Department: Product & Technology
Location: Remote – Anywhere
We are looking for a Senior Data Engineer to join our rapidly growing Data Engineering team. The team is primarily based in the United States; however, we are a remote-first company, and applicants from anywhere in the world are encouraged to apply. This full-time position will work closely with the Business Intelligence team to deliver clean datasets that help drive the business forward. It will also serve as a data engineering expert on the Product team, designing data-intensive applications and services that support new applications and partnerships and accelerate our revenue growth. This role requires excellence in data architecture technologies such as RDBMS and DAG-based workflow tooling, as well as experience with DevOps and software engineering best practices.
Responsibilities:
- Serve as a technical lead on our data engineering team, providing training and mentorship to help develop the skills and talents of others
- Architect RDBMS models
- Create logical data flows within the data warehouse
- Develop ETLs from a wide range of data sources
- Analyze data for completeness and accuracy
- Automate data and report QA functionality
- Assist in the creation of DevOps processes and utilize various software tools to codify those processes
- Develop machine learning model test and deployment strategies
- Assist in the creation of data-intensive web applications
- Help implement engineering best practices including unit tests and integration tests in a CI/CD environment
Requirements:
- Preferred but not required – Bachelor's or Master's degree in CS or a related field. We find that the best software developers come from many diverse backgrounds.
- 5-10 years of experience in a data engineering, analytics, or related software development environment, with at least 5 of those years in a data engineering role
- An understanding of software engineering best practices and a commitment to writing clean, self-documenting code
- A high level of proficiency in SQL and Python
- Proven track record in developing highly scalable batch and streaming ETL/ELT pipelines
- Experience with cloud environments, preferably AWS
- Strong writing, presentation, and verbal communication skills
Nice to Have:
- Experience with a modern data stack, including tools like Snowflake, Looker, Airflow, and dbt
- Back-end web development experience in a microservices architecture
Our Values:
Velocity: Exceeding expectations of our customers, colleagues, and ourselves by delivering swift and effective results.
Integrity: Doing right by our customers, colleagues, and ourselves through honest and ethical actions.
Be Bold: Daring to take risks, learn, and grow to benefit our customers, colleagues, and ourselves.
Empowerment: Owning our decisions and being accountable for the impact we have on our customers, colleagues, and ourselves.
Service: Supporting our customers, colleagues, and ourselves with respect and empathy.