Build the data backbone behind life-changing stroke recovery tech. You’ll turn messy, high-impact healthcare data into clean, governed, insight-ready assets that help clinicians and business teams move faster and smarter.

About Kandu Inc.
In April 2025, Kandu Health and Neurolutions merged to form Kandu Inc., combining FDA-cleared brain-computer interface technology with personalized telehealth to support stroke recovery. Kandu’s IpsiHand® device helps chronic stroke survivors regain upper-extremity function at home, backed by expert clinicians and evidence-based care. Kandu extends recovery beyond the hospital through illness navigation, education, care coordination, and advocacy.

Schedule
Full-time, remote role. You’ll partner cross-functionally with clinical, commercial, operations, and leadership teams, with expectations for strong documentation, reliable delivery, and clear communication across teams and time zones.

What You’ll Do

⦁ Build and maintain data pipelines across core systems such as EHR, CRM, ERP, claims/finance, and manufacturing/ops

⦁ Design ELT workflows to ingest, normalize, and transform data using tools like Fivetran/Airbyte, Airflow/Prefect, dbt, and Python

⦁ Architect and implement a centralized enterprise data warehouse (Snowflake, BigQuery, or Redshift) including schemas, semantic layers, and dimensional models

⦁ Ensure data quality, lineage, governance, reliability, observability, version control, and documentation across pipelines and models

⦁ Develop dashboards, operational monitors, alerts, and recurring analytics to give real-time visibility into performance across the business

⦁ Automate reporting workflows and orchestration, including scheduling, monitoring, and error handling

⦁ Enable self-service analytics with intuitive, consistent, well-documented data models

⦁ Support early predictive and prescriptive modeling (prior auth likelihood, adherence patterns, denial risk, forecasting) in partnership with the VP of ESI

⦁ Handle PHI responsibly and ensure HIPAA-, GDPR-, and SOC 2-aligned data practices across storage and transformation

What You Need

⦁ 5–8 years in data engineering, analytics engineering, or a full-stack data role spanning pipelines, modeling, and dashboards

⦁ Strong SQL and Python skills, with confidence building production-grade pipelines

⦁ Hands-on experience with modern data stack tools (Airflow/Prefect, dbt, Fivetran/Airbyte, Snowflake/BigQuery/Redshift)

⦁ Experience designing star schemas, dimensional models, and/or semantic layers

⦁ Dashboarding experience in tools like Looker, Tableau, Power BI, Mode, Sigma, or similar

⦁ Comfort working with messy SaaS data and integrating via APIs, webhooks, and vendor interfaces

⦁ Clear communication skills and the ability to translate technical work into business impact

⦁ Ability to thrive in ambiguity and build v1 systems with limited infrastructure

Benefits

⦁ Competitive compensation ($150,000–$175,000 annually) plus stock options

⦁ Medical, dental, and vision insurance

⦁ 401(k) with company contribution

⦁ Unlimited PTO and holidays

⦁ Life insurance, short-term disability, and long-term disability

This role is hiring now. If you like building from scratch and want your work to matter, move on this.

Bring the systems mindset, bring the curiosity, and help build an insight-driven enterprise that improves lives.

Happy Hunting,
~Two Chicks…

APPLY HERE