Job Description

Title: DataOps Engineer

Location: Anywhere in the U.S. (Remote)

At BetterUp, we give people the coaching, support, and push they need to reach peak performance and unlock their limitless potential, in all they do, personally and professionally. We work with world-class experts and Coaches, pioneer innovative technology, and foster human touch at scale to fuel the BetterUp experience, making growth and development achievable for all. We believe that strong mental fitness is a never-ending practice of maintaining and building the strengths that precede health, happiness, and success.

And we’re looking to build out a diverse and ambitious team of go-getters to join us as we grow. Exciting opportunities lie ahead, as well as work that makes a real difference, not only in the lives of others but also in your own personal and professional growth. Join us as we continue to bring BetterUp to more people everywhere, and create impactful change for our members and for you.

We’re looking for an experienced DataOps Engineer who cares deeply about their craft, and who wants to use their skills to bring about positive change in the world while working in a high-performing organization. At BetterUp, the opportunities to apply data and ML for Social Good are ever-expanding: they range from developing personalization systems that meet people where they are and help them reach their full potential, to understanding multimodal human-to-human communication and what makes coaching effective through state-of-the-art natural language processing, computer vision, and audio signal processing.

We’re looking for someone who is comfortable with the rapidly changing nature of a startup environment but also adept at moving relentlessly forward: doing what needs to be done to unblock projects that truly deliver value to our users. At BetterUp we delight in supporting and pushing each other to bring out the best in our colleagues, and we would love for someone who shares our passion for empathy, excellence, and continuous improvement to join the team. We also deeply understand that a key to peak performance is balance, and our culture is focused on providing the support our people need to bring their whole selves to bear in service of our mission.

Role and Responsibilities:

  • Data product evangelist: Bring, build, and drive MLOps/DataOps culture and practices, enabling the engineering org to build better, more reliable, and secure data products faster.
  • ML and Data Systems Designer: Passion for and expertise in ML pipelines and end-to-end lifecycle management.
  • Act as an owner: It may start with research, but it’s not done until it’s in production. Adept at moving projects forward and able to unblock them regardless of where we are in the lifecycle.
  • Do less, deliver more: Familiar with the terms YAGNI and yak shaving? Focus your efforts on high-impact initiatives that really move the needle.
  • Impress yourself: We hold ourselves to a standard of quality above and beyond something that just gets it done. Each system or line of code is an opportunity to blend craftspersonship with playfulness.
  • Collaborate without ego: Work together with teams to drive the data science and ML technical roadmap, and be willing to take on roles small or large in order to further the mission at hand.
  • Stay on your edge: Continuously learning and applying emerging technologies. Pushing yourself and your team to new heights.
  • Practice imagination: Hypothesize meaningful questions and challenge the status quo.

If you have some or all of the following, please apply:

  • 2+ years of relevant DataOps and data infrastructure experience (high-growth startup experience is a plus)
  • 4+ years of overall engineering and data infrastructure experience
  • Expertise in ML/DataOps lifecycle management (build, deploy, and production support)
  • Familiarity with popular Ops tooling from vendors like AWS (SageMaker) and GCP (Vertex AI), as well as BentoML, MLflow, Kubeflow, etc.
  • Broad understanding of data engineering and machine learning lifecycles and enablement, experiment and project tracking, and data version control
  • Experience with model registries and lineage tracking (training data, configuration, model parameters, etc.)
  • Deployment management and monitoring: provisioning/orchestration, CI/CD, real-time and batch inference, outlier/anomaly detection, data drift monitoring.
  • Experience with Feature Stores for offline training and online serving (production) is a big plus
  • Experience building and maintaining data pipelines using tools like Airflow, Kafka, Cassandra, Hadoop, Kubernetes, etc.
  • Strong background in cloud computing and distributed systems
  • Infrastructure-as-code development (e.g., Terraform, CloudFormation, Ansible, Chef)
  • 3+ years of experience coding in Python (preferred) or other languages like Java, C#, Golang, etc.
  • A track record of success in a remote work environment
  • Excellent verbal and written communication skills

Benefits:

At BetterUp, we are committed to living out our mission every day and that starts with providing benefits that allow our employees to care for themselves, support their families, and give back to their community.

  • Access to BetterUp coaching: one for you and one for a friend or family member
  • A competitive compensation plan with opportunity for advancement
  • Full coverage for medical, dental and vision insurance
  • Employer Paid Life, AD&D, STD and LTD insurance
  • Flexible paid time off
  • Per year:
    • 13 paid holidays
    • 4 BetterUp Inner Work days (https://www.betterup.co/inner-work)
    • 5 Volunteer Days to give back
    • Learning and Development stipend
  • Year-round charitable contribution of your choice on behalf of BetterUp
  • 401(k) self-contribution

APPLY HERE