About the Role

Title: Data Engineer

Location: Philadelphia, United States

Job Description:

Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year with extensive growth potential ahead.

At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We’re not just any public company – we made history in 2021 by becoming a public benefit corporation (PBC), legally bound to balance the interests of customers, employees, society, and investors.

As a Work Anywhere company, we support your flexibility to work from home or in the office, so you can thrive in your ideal environment.

Join us in transforming the life sciences industry as we remain committed to making a positive impact on our customers, employees, and communities.

The Role

Veeva OpenData supports the industry by providing real-time reference data across the complete healthcare ecosystem to support commercial sales execution, compliance, and business analytics. We drive value to our customers through constant innovation, using cloud-based solutions and state-of-the-art technologies to deliver product excellence and customer success.

As a Data Engineer in OpenData, you will take responsibility for the OpenData data processing workflows in the US. You will build and maintain data processing tools, pipelines, and reports, ensuring data quality in our reference data. We value end-to-end ownership, which gives you the freedom to determine the correct course of action, do all due diligence, and execute solutions in your own creative way.

Veeva is not sponsoring H-1B visas or supporting H-1B transfers for this role.

What You’ll Do

  • Build and maintain data processing pipelines and tools using state-of-the-art technologies
  • Work with Python on Spark-based data pipelines
  • Develop algorithms to build complex data relationships
  • Build analytical data structures to support reporting
  • Build and maintain Data Quality processes
  • Collaborate with the Product team to adapt our reference data to changing demands in the market

Requirements

  • 3+ years of experience developing data pipelines using cloud-managed Spark clusters (e.g. AWS EMR, Databricks)
  • Fluent in the Python programming language and PySpark (3+ years of experience)
  • Previous experience building tools and libraries to automate and streamline data processing workflows
  • Proficient with SQL/SparkSQL
  • Hands-on experience working with a Data Lakehouse
  • Good verbal and written communication skills, and proven experience working and delivering in an Agile environment

Nice to Have

  • Experience running data workflows through DevOps pipelines
  • Experience developing data pipelines with orchestration tools (e.g. Airflow)
  • Experience with AWS services for data processing like EMR, MWAA, etc.
  • Previous experience in the Life Sciences sector

APPLY HERE