Employer: PatientPoint

What you will be doing

You will build data pipelines and data applications using modern data technologies on clouds such as AWS and Azure. Far beyond creating views and using outer joins, this position creates fully automated data flows that ingest data from new and external APIs and transform it into usable formats for downstream dashboards, predictive models, and machine learning.
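As a rough illustration of this kind of ingest-and-transform step, the sketch below parses a JSON API response and flattens nested records into warehouse-ready rows. The payload shape, field names, and the `flatten_events` helper are hypothetical examples for this posting, not PatientPoint's actual schema:

```python
import json

def flatten_events(payload: str) -> list[dict]:
    """Parse a JSON API response and flatten nested event records
    into flat rows suitable for loading into a warehouse table."""
    data = json.loads(payload)
    rows = []
    for event in data.get("events", []):
        rows.append({
            "device_id": event["device"]["id"],   # nested field pulled to top level
            "event_type": event["type"],
            "timestamp": event["timestamp"],
        })
    return rows

# Example response body such as an external API might return (illustrative only).
sample = '''
{"events": [
  {"device": {"id": "scr-001"}, "type": "tap",   "timestamp": "2024-01-01T00:00:00Z"},
  {"device": {"id": "scr-002"}, "type": "swipe", "timestamp": "2024-01-01T00:05:00Z"}
]}
'''

rows = flatten_events(sample)
```

In a real pipeline the payload would come from an HTTP call and the flat rows would feed a load step, but the parse-and-flatten pattern is the same.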

Build data ingestion and transformation processes to support industry-leading data science applications in predictive modeling and machine learning deployments. Design, create, and deploy solutions that aggregate and transform data, enabling the use of billions of log records from digital touchscreen devices.

This position offers a great opportunity to shape an entire ETL architecture from the ground up, with the flexibility and responsibility to develop a stable and reliable data platform. You will play an important role in selecting technologies and vendors and in developing team roles.

As a developer, you will prototype new ideas, run experiments, and iterate to better design data-driven solutions to business problems. On other projects, you might create queries and scripts to deploy features in the production database and enterprise data warehouse, including obtaining data from multiple external APIs.

Design and develop automated processes to perform scheduled tasks, ETL, data validation, and maintenance activities, enabling quantitative analysts to tackle problems in sales planning, touchscreen user interaction, predictive maintenance, operations, and many other subject areas.
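A common shape for the validation step in such automated processes is to split incoming records into valid and rejected sets so bad data never reaches the warehouse. The `validate_rows` helper and field names below are hypothetical illustrations of that pattern:

```python
def validate_rows(rows: list[dict], required_fields: list[str]) -> tuple[list, list]:
    """Split records into (valid, rejected): a row is valid only if every
    required field is present and non-null. Rejected rows can then be
    quarantined and reported rather than silently loaded."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(f) is not None for f in required_fields):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

# Illustrative batch: one clean row, one null field, one missing field.
raw = [
    {"device_id": "scr-001", "ts": "2024-01-01T00:00:00Z"},
    {"device_id": None,      "ts": "2024-01-01T00:05:00Z"},
    {"device_id": "scr-003"},
]
valid, rejected = validate_rows(raw, ["device_id", "ts"])
```

A scheduler (cron, Airflow, etc.) would run this on each batch before the load step.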

Plus: minimal travel, a new office space, career growth opportunities, interesting projects, and free coffee!

Background Requirements:

Minimum Qualifications

  • 4+ years of professional work experience
  • Mastery of relational databases such as SQL Server and Redshift is required; Snowflake experience is helpful.
  • Mastery of SQL programming and relational data extraction, transformation, and loading (ETL), including data definition, transformation, and pre-aggregation techniques.
  • Experience building data processing systems in Python.
  • Experience obtaining data from external APIs in JSON and XML formats is required; you will be asked to provide several examples during the interview process.
  • Demonstrated knowledge of and ability with database administration and performance tuning.
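The pre-aggregation technique named above can be sketched briefly: roll raw log records up into a small summary table so dashboards query the rollup instead of billions of rows. The example below uses an in-memory SQLite database purely for illustration; the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Raw event log (stand-in for billions of touchscreen records).
conn.execute("CREATE TABLE logs (device_id TEXT, day TEXT, taps INTEGER)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [("scr-001", "2024-01-01", 3),
     ("scr-001", "2024-01-01", 2),
     ("scr-002", "2024-01-01", 5)],
)

# Pre-aggregate: one row per device per day, computed once at load time
# so downstream dashboards never scan the raw table.
conn.execute("""
    CREATE TABLE daily_taps AS
    SELECT device_id, day, SUM(taps) AS total_taps
    FROM logs
    GROUP BY device_id, day
""")

result = conn.execute(
    "SELECT device_id, total_taps FROM daily_taps ORDER BY device_id"
).fetchall()
```

In production the same `GROUP BY` rollup would run in the warehouse (Redshift, Snowflake) on a schedule, but the technique is identical.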

Preferred Qualifications

  • Experience with storage and retrieval of data in NoSQL databases such as MongoDB is a plus.
  • Knowledge of other ETL tools such as Dell Boomi or Informatica helpful.
  • Knowledge of other RDBMSs such as Oracle is helpful.
  • Working knowledge of the R programming language.
  • Experience with geospatial analysis.
  • Experience with reporting using Salesforce Wave Analytics.
  • Working knowledge of graph databases.

Location: ND, SD, NE, KS, OK, TX, MN, IA, MO, AR, LA, WI, IL, KY, MS, AL, MI, IN, TN, GA, FL, OH, NC, SC, WV, VA, PA, DC, CT, NJ, NY, RI, NH, ME, MD, DE, VT