About the Role

Title: Senior Data Engineer

Location: Remote – US

Job Description:

At Evolve we’re a hardworking team serious about hospitality. Our teams work every day to make vacation rental easy for everyone — from the owners who trust Evolve to build their business, to the guests who rest easy with every stay, to the Evolvers who make this difference a reality. Our values anchor our daily decisions and interactions with our customers, communities, and each other. Join our inclusive culture in one of the most rapidly growing segments in travel. Find your home at Evolve.

Why this role

As a tech-enabled disruptor in the vacation rental industry, Evolve runs on data: it is the backbone of how we are fundamentally changing the way guests interact with vacation rentals and the way homeowners maximize rental income. Evolve’s Data & Analytics Engineering team enables the company to innovate and make data-informed decisions at scale by ensuring data is a trusted and valued asset, available at all levels of the organization for innovation, visualization, analytics, reporting, data hygiene, and data science.

In this role, you will be a key contributor responsible for equipping the business with the right information, tools, and technologies to gain valuable insights and make data-driven decisions. The Senior Data Engineer is a technical leadership role on the team: you will help establish data pipeline best practices, define standards, and mentor teammates. You will also build, support, and optimize the data pipelines that power mission-critical workflows for our reporting, analytics, business operations, and data science teams. This role is a critical element of Evolve’s success and helps position us as an innovator and thought leader in the vacation rental space.

What you’ll do

  • Collaborate as a trusted partner with business stakeholders, data analysts, data engineers, analytics engineers, and data architects to build a solid data foundation
  • Mentor data engineers, analytics engineers, and data analysts across the organization to support their growth and ensure that best practices and consistent business rules are applied when turning data into information
  • Translate ambiguous or complex business logic into technical solutions
  • Build, support, and optimize data pipelines using tools like Fivetran, dbt, Prefect, and Python to move data to and from Snowflake, SaaS APIs, and other data stores (a minimal orchestration sketch follows this list)
  • Design, modify, and implement data structures in Snowflake to support data ingestion, integration, and analytics
  • Curate and transform data into appropriate structures for analytics and data science using SQL, Python, Snowflake scripting, and data transformation tools like Matillion and dbt
  • Design and implement processes to automate monitoring and alerting on source data quality, data ingestion and transformation processes, and the overall health of our data infrastructure (see the monitoring sketch after this list)
  • Develop a deep understanding of the data you are working with and the relevant business processes, strategies, and goals
  • Ensure the quality and trustworthiness of data sources used for analytics
  • Maintain and optimize Evolve’s cloud data platform, environment, and infrastructure by solving problems and tuning performance for underlying data structures, systems, and processes
  • Manage the deployment and monitoring of scheduled data ingestion and transformation processes
  • Research, recommend, and implement new and enhanced tools and methods that support Evolve’s data ecosystem
  • Lead the definition of quality standards for ELT, Python, Prefect, Snowflake, Fivetran, dbt, and AWS, and document and train teammates on those standards
  • Collaborate with peers through code reviews and technical documentation
  • Provide advanced support for data ingestion and data pipelines
  • Partner with stakeholders to develop scalable solutions for new and modified data sources
  • Prioritize multiple tasks and projects efficiently, and clearly communicate progress and status
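
To give a concrete feel for the day-to-day work, here is a minimal sketch of the kind of pipeline this role owns: a Prefect flow that pulls records from a SaaS API, lands them in Snowflake, and triggers a dbt run. The endpoint, credentials, table, and dbt selector are hypothetical placeholders, not Evolve’s actual systems.

```python
# Minimal sketch of a Prefect flow that lands a SaaS extract in Snowflake
# and kicks off a dbt run. The API endpoint, credentials, table, and dbt
# selector below are hypothetical placeholders.
import os
import subprocess

import requests
import snowflake.connector
from prefect import flow, task


@task(retries=3, retry_delay_seconds=60)
def extract_bookings() -> list[dict]:
    # Hypothetical SaaS endpoint; managed sources would typically come in
    # through Fivetran rather than hand-rolled requests.
    resp = requests.get(
        "https://api.example-pms.com/v1/bookings",
        headers={"Authorization": f"Bearer {os.environ['PMS_API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]


@task
def load_to_snowflake(rows: list[dict]) -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        database="RAW",
        schema="PMS",
    )
    try:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO BOOKINGS_STG (ID, GUEST_ID, AMOUNT) VALUES (%s, %s, %s)",
                [(r["id"], r["guest_id"], r["amount"]) for r in rows],
            )
    finally:
        conn.close()


@task
def run_dbt_models() -> None:
    # Rebuild the downstream models; a dbt Cloud job or Prefect's dbt
    # integration could replace this shell-out in production.
    subprocess.run(["dbt", "run", "--select", "staging.bookings+"], check=True)


@flow(name="bookings-ingest")
def bookings_pipeline() -> None:
    rows = extract_bookings()
    load_to_snowflake(rows)
    run_dbt_models()


if __name__ == "__main__":
    bookings_pipeline()
```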
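
In the same spirit, here is a minimal sketch of the automated monitoring described above: a freshness check that queries Snowflake for a source table’s last load time and posts to an alerting webhook when it breaches an assumed SLA. The table, audit column, threshold, and webhook are again hypothetical.

```python
# Minimal freshness check: alert when a source table has not loaded
# recently. Table, column, SLA, and webhook are hypothetical placeholders.
import os
from datetime import datetime, timedelta, timezone

import requests
import snowflake.connector

STALENESS_SLA = timedelta(hours=6)  # assumed per-source freshness SLA


def latest_load_time(table: str) -> datetime:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            # Assumes a TIMESTAMP_TZ load-audit column, so the connector
            # returns a timezone-aware datetime.
            cur.execute(f"SELECT MAX(_LOADED_AT) FROM RAW.PMS.{table}")
            (loaded_at,) = cur.fetchone()
            return loaded_at
    finally:
        conn.close()


def check_freshness(table: str) -> None:
    loaded_at = latest_load_time(table)
    age = datetime.now(timezone.utc) - loaded_at
    if age > STALENESS_SLA:
        requests.post(
            os.environ["ALERT_WEBHOOK_URL"],  # e.g. a Slack incoming webhook
            json={"text": f"{table} is stale; last load was {loaded_at:%Y-%m-%d %H:%M %Z}"},
            timeout=10,
        )


if __name__ == "__main__":
    check_freshness("BOOKINGS_STG")
```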

What makes you a great fit

  • 8+ years in a developer, architect, engineer, or DBA role working with large data sets
  • Subject matter expert in data ingestion concepts and best practices
  • Subject matter expert in data pipeline design, development and automation
  • Comfortable working with DevOps teams to optimize CI/CD pipelines
  • Advanced SQL skills are required
  • Experience coding with Python is required
  • Experience with Snowflake, Fivetran, dbt, Tableau, and AWS is preferred
  • Experience with Git version control and repository management in GitLab
  • Experience with advanced ELT tool administration (code deployment, security, setup, configuration, and governance)
  • Experience with enterprise ELT tools such as Fivetran, dbt, or Matillion
  • Expertise with one or more cloud-based data warehouses, such as Snowflake, is required
  • Expertise extracting raw data from APIs using industry-standard ingestion techniques (a pagination sketch follows this list)
  • Ability to explain complex information and concepts to technical and non-technical audiences
  • Enjoy supporting team members by sharing technical knowledge and helping solve problems
  • Enjoy a connected, collegial environment, even though our team spans remote, hybrid, and on-site work
  • Familiarity with documenting data definitions and code
  • Driven by a fast-paced, energetic, results-oriented environment
  • Exemplary organizational skills with the ability to manage multiple competing priorities
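
For a concrete picture of what industry-standard ingestion looks like in practice, here is a minimal sketch of cursor-based pagination against a REST API; the endpoint, auth scheme, and response shape are hypothetical.

```python
# Minimal sketch of cursor-based pagination against a REST API. The
# endpoint, auth scheme, and response shape are hypothetical placeholders.
import os
from typing import Iterator

import requests


def fetch_all(endpoint: str, page_size: int = 100) -> Iterator[dict]:
    """Yield every record from a cursor-paginated endpoint."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {os.environ['API_TOKEN']}"
    params: dict = {"limit": page_size}
    while True:
        resp = session.get(endpoint, params=params, timeout=30)
        resp.raise_for_status()  # surface auth/rate-limit errors early
        body = resp.json()
        yield from body["data"]
        cursor = body.get("next_cursor")  # assumed pagination contract
        if not cursor:
            break
        params["cursor"] = cursor


if __name__ == "__main__":
    for record in fetch_all("https://api.example.com/v1/listings"):
        print(record["id"])
```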

APPLY HERE