Job Description
Title: Product Owner, Data Engineering Framework (Remote Opportunity)
Location: Remote / Field US
Job Type: Full-Time
Category: IT, Data & Tech
Position Summary:
As the Data Engineering Product Owner on the Enterprise Data Platform (EDP) Team, you will maximize the delivery of the data engineering team while ensuring a high level of quality, in line with business requirements, data quality, and data management best practices, using an agile methodology. You will work with EDP leaders and business stakeholders to define and refine the data engineering capability roadmap for the EDP platform, and with data engineers to define and develop an enterprise data engineering framework for data ingestion and onboarding pipelines, delta lakes, lakehouses, and data warehouses across a variety of infrastructure (both on-prem and cloud). The candidate must be able to work effectively in an agile team to define, evaluate, design, and develop data engineering capabilities and features for our Enterprise Data Platform. This position offers an exciting opportunity to work on processes that interface with multiple systems, including AWS, Oracle, middleware, and ERPs.
What will you do?
- Work closely with leaders of EDP Program in defining and refining Product Roadmap for the Data Engineering Platform
- Own the Product Roadmap for the Data Engineering Framework on EDP
- Develop scope and define backlog items (epics/features/user stories) for the data engineering team
- Develop and refine the quarterly plan based on business priorities, and create and plan sprint backlogs for future sprint execution based on quarterly objectives and the progress of current sprint activities
- Lead and work with the team in fulfilling user stories for each sprint by providing thought leadership and technical guidance in solution delivery
- Take on and fulfill user stories directly when needed
- Create, refine, and delete work items in the team backlog to keep it organized, relevant, and progressively elaborated just in time
- Work with Product Management, customers, and stakeholders to ensure their wants and needs are captured in the backlog in succinct, plain language
- Own the priority of work items in the backlog at the team level, with some priorities inherited from the parent program backlog
- Host and facilitate sprint reviews/demos, and formulate the relevant attendee list for each session
- Represent the team at PO Sync to coordinate and communicate shifting priorities
- Work with the Release Train Engineer, Product Management, and the Scrum Master to ensure the team is prepared in advance of program-level events such as PI Planning, Inspect and Adapt, and system or PI demos
- Ensure there is enough ready backlog to sustain team planning every two weeks
How will you get here?
- Master’s degree in computer science or engineering from an accredited university (desired)
- 4-year degree with a major in computer science or engineering (or equivalent) from an accredited university (preferred); a minimum of 5-7 years of professional IT experience may substitute.
Experience, Knowledge, Skills, Abilities
- 10+ years of working experience in data integration and pipeline development.
- Experience with Agile Framework
- Experience as a Solutions Architect for Data Engineering Framework
- Extensive experience with Databricks and Apache Spark.
- Data lake and Delta Lake experience with AWS Glue and Athena.
- 2+ years of experience with AWS Cloud data integration across the Apache Spark, Glue, Kafka, Elasticsearch, Lambda, S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
- Strong hands-on experience in Python development, especially PySpark, in an AWS Cloud environment.
- Strong analytical database experience, including writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
- Strong experience with source control systems such as Git, and with build and continuous integration tools such as Jenkins.
- Highly self-driven and execution-focused, with a willingness to do “what it takes” to deliver results, as you will be expected to rapidly cover a considerable volume of data integration demands.
- Understanding of development methodology and actual experience writing functional and technical design specifications.
- Excellent verbal and written communication skills, in person, by telephone, and with large teams.
- Strong prior technical development background in either data services or engineering.
- Demonstrated experience resolving complex data integration problems.
- Must be able to work cross-functionally. Above all else, must be equal parts data-driven and results-driven.