High impact, high responsibility. Build a cloud-based data platform, warehouse, and data lake to support industry-leading data applications, data science, predictive modeling, and machine learning deployments. Plan and deploy an ecosystem of data storage and tools to access millions of log records from digital touchscreen devices.
This position offers an excellent opportunity to build a brand-new cloud data platform from the ground up, along with flexibility and the chance to influence decisions across the business unit.
PatientPoint is an entrepreneurial environment that values innovative thinking and data-driven problem solving in a collaborative setting. Minimal travel. Growth potential.
Responsibilities:
- Architect, design, and build data platforms and systems with modern cloud-based technologies such as AWS and Snowflake.
- Design and build efficient, cost-effective data pipelines for varied workload patterns, such as large datasets.
- Design and build data processing systems for both batch and real-time streaming data, such as user impressions.
- Analyze, plan, and execute data warehouse development, monitoring, and performance tuning.
- Prototype ideas, run experiments, and iterate to design better data-driven solutions to business problems.
- Create queries and scripts for feature development and deployment, and for production database and data warehouse support.
- Design and develop automated processes to perform scheduled tasks, ETL, and maintenance activities.
- Collaborate on the design and development of backup and recovery plans and processes.
- Proactively analyze data platform performance and make recommendations for improvements.
- Communicate insights and recommend areas for further data platform enhancements.
Education, Training and Experience:
- Bachelor’s degree required, Master’s preferred, in information systems, computer science, business analytics, informatics, information technology, or a related field.
- 5+ years of hands-on, real-world experience with various cloud-based databases (Amazon Redshift, Snowflake, Azure SQL Database).
- Mastery of SQL and Python programming and of relational data ETL, including data definition.
- Knowledge and demonstrated ability with ETL tools.
- Experience obtaining data from external APIs and flat files required.
- Mastery of various relational databases such as Oracle or MariaDB helpful.
- Experience working with streaming data platforms such as Kafka and other big data platforms such as Databricks is helpful.
- Experience working with ETL job orchestration tools such as Airflow and Azure Data Factory (ADF) is helpful.
- Experience in working with clickstream data for digital ad impressions is helpful.
- Experience with storage and retrieval of data in NoSQL databases helpful.
- Experience with graph databases helpful.
- Experience with GIS and spatial data analysis helpful.
- Experience with CI/CD tools such as Jenkins and code repository tools such as GitHub is useful.
Location: ND, SD, NE, KS, OK, TX, MN, IA, MO, AR, LA, WI, IL, KY, MS, AL, MI, IN, TN, GA, FL, OH, NC, SC, WV, VA, PA, DC, CT, NJ, NY, RI, NH, ME, MD, DE, VT