Employer: divvyDOSE

divvyDOSE is a rapidly growing healthcare startup headquartered in Chicago. Our vision is a life where medicine does what it’s supposed to, and people get the attention and care they deserve. We strive to improve the quality of life through innovative design and compassionate customer service that allow medicine to get out of the way of our customers’ lives.

Vision / Mission

Our vision is to fix and reimagine healthcare for everyone. Our mission is to engage customers by fixing all the problems around getting and taking their medications. We’ve taken the first step by simplifying adherence.

We’re now part of Optum and the UnitedHealth Group family of businesses, backed by the resources of a global health organization working to help people live healthier lives and to help make the health system work better for everyone. Are you looking for a way to create next-level results with a human-level approach? Then look at opportunities with divvyDOSE, where changing the world is just one result of doing your life’s best work.SM

Job Description

divvyDOSE is seeking an innovative, passionate, and positive-minded Senior Data Engineer to join our rapidly growing Data Platform Engineering Team. We are the Center of Excellence for data and analytics engineering, turning millions of data points into insights and data sets that power key business and product decisions to help build a best-in-class digital pharmacy.

As a senior member of the Data Platform Engineering team, you will build and deploy platform-level tools and applications that power high-quality data products and the infrastructure to support them. You will have the opportunity to collaborate with our Data Architect, Data Scientists, Data Analysts, and Software Engineers focused on developing multiple areas of our product as well as the platform itself. You will implement data pipelines with requirements for high scalability, availability, security, and quality.

We believe that data is a first-class concern and not just a byproduct of our day-to-day processes. Data Platform Engineering at divvyDOSE focuses on self-service data strategies that encourage everyone to be data-driven. Above all, we believe in empowering all of our engineers through good DevOps and DataOps practices; on the Platform Engineering team, your focus will be on crafting an environment that is a joy for technical and non-technical minds to build upon.

Responsibilities

  • Build and support a scalable data platform to accelerate data ingestion, processing, orchestration, discoverability, and usage for Engineering, Product, and Business teams
  • Own and manage the data warehouse architecture
  • Define, build, and own key datasets and the quality and evolution of these datasets in a Data Catalog as use cases grow
  • Implement data ingestion and processing frameworks, both real-time and batch, applying best practices in data modeling and ETL/ELT processes and leveraging AWS technologies and big data tools
  • Collaborate with product engineers to uphold a Data Mesh architecture
  • Collaborate with data scientists to create rich data sets for optimization, statistical analysis, prediction, clustering, and machine learning
  • Drive and improve ongoing reporting and analysis processes, automating or simplifying self-service support for data consumers
  • Mentor junior data engineers in the use and adoption of new tools and best practices
  • Develop and maintain automated solutions, tools, libraries, and/or infrastructure related to the following areas:
    • Data Ingestion & Processing
    • Data Quality
    • Data Modeling
    • Data Versioning & Management
    • Data Security & Compliance

Requirements

Required Skills

  • At least 4 years of prior experience in software engineering with a deep understanding of the SDLC and agile practices
  • At least 2 years of prior experience in a data engineering or data science role with a deep understanding of DevOps, DataOps, and/or MLOps processes
  • Proficient in one or more of the following: Python, Scala, Java
  • Proficient in SQL
  • Experience with SQL/NoSQL databases (PostgreSQL, MySQL, DynamoDB, MongoDB, Cassandra, Bigtable)
  • Experience building scalable data and analytics pipelines using cloud technologies (AWS, Azure, GCP)
  • Experience with a cloud data warehouse (Snowflake, BigQuery, Redshift)
  • Experience with streaming technologies (Kinesis, Flink, Dataflow, Pub/Sub)
  • Experience with CI/CD technologies (Jenkins, CircleCI, Bamboo, Bitbucket)
  • Experience with dataflow orchestration (Airflow, Luigi, Prefect, Dagster)

Desired experience in one or more of the following areas:

  • Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation
  • DevOps practices
  • Observability solutions such as Datadog, New Relic, or Honeycomb
  • HIPAA, PII, PCI, and/or PHI data
  • dbt
  • Looker/LookML
  • ML/AI infrastructure and lifecycles
  • Data migration processes
  • SageMaker, Google AI Platform
  • Containerized services
  • Graph databases

Technologies we use:

  • Python and SQL
  • Snowflake
  • Looker
  • Kinesis
  • dbt
  • Git
  • Terraform
  • RDS (PostgreSQL), DynamoDB, and Elasticsearch
  • CircleCI and Jenkins
  • AWS Lambda (Serverless framework)
  • Datadog

Benefits

  • In addition to a competitive salary, our company offers comprehensive medical, dental, and vision plans and contributes to an HSA annually.

APPLY HERE