About the Role

Title: Senior Data Engineer

Location: Remote

Job Description:

Who we are

In a world where acquisition costs are skyrocketing, funding is scarce, and ecommerce merchants are forced to do more with less, the most innovative DTC brands understand that subscription strategy is business strategy.

Recharge is simplifying retention and growth for innovative ecommerce brands. As the #1 subscription platform, Recharge is dedicated to empowering brands to easily set up and manage subscriptions, create dynamic experiences at every customer touchpoint, and continuously evaluate business performance. From no-code customer portals to personalized offers and customizable bundles, Recharge helps merchants seamlessly manage, grow, and delight their subscribers while reducing operating costs and churn. Today, Recharge powers more than 20,000 merchants serving 90 million subscribers, including brands such as Blueland, Hello Bello, CrunchLabs, Verve Coffee Roasters, and Bobbie. Recharge doesn't just help merchants sell products; it helps build buyer routines that last.

Recharge has been recognized on Deloitte's Technology Fast 500 for the third consecutive year and is Great Place to Work Certified.

Overview

The centralized Data and Analytics team at Recharge delivers critical analytic capabilities and insights for Recharge’s business and customers.

As a Senior Data Engineer, you will build scalable data pipelines and infrastructure that power internal business analytics and customer-facing data products. Your work will empower data analysts to derive deeper strategic insights from our data and enable developers to build applications that surface those insights directly to our merchants.

What you’ll do

  • Build data pipeline, ELT, and infrastructure solutions to power internal data analytics/science and external, customer-facing data products.
  • Create automated monitoring, auditing and alerting processes that ensure data quality and consistency.
  • Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
  • Design, develop, implement, and optimize ETL processes that merge data from disparate sources for consumption by data analysts, business owners, and customers.
  • Seek ways to continually improve the operations, monitoring, and performance of the data warehouse.
  • Influence and communicate with all levels of stakeholders including analysts, developers, business users, and executives.
  • Live by and champion our values: #day-one, #ownership, #empathy, #humility.

What you’ll bring

  • Typically 5+ years of experience in a data engineering role (Data Engineer, Data Platform Engineer, Analytics Engineer, etc.) with a track record of building scalable data pipeline, transformation, and platform solutions.
  • 3+ years of hands-on experience designing and building data pipelines and models that ingest, transform, and deliver large volumes of data from multiple sources into a dimensional (star schema) data warehouse or data lake.
  • Experience with a variety of data warehouse, data lake, and enterprise data management platforms (Snowflake preferred; Redshift, Databricks, MySQL, Postgres, Oracle, RDS, AWS, GCP).
  • Experience building data pipelines, models, and infrastructure powering external, customer-facing analytics applications (in addition to internal, business-facing ones).
  • Solid grasp of data warehousing methodologies such as Kimball and Inmon.
  • Experience working with a variety of ETL tools (Fivetran, dbt, Python, etc.).
  • Experience with workflow orchestration engines such as Airflow and Cloud Composer.
  • Hands-on experience with data infrastructure tools such as Kubernetes and Docker.
  • Expert proficiency in SQL.
  • Strong Python proficiency.
  • Experience with ML Operations is a plus.

APPLY HERE