A strong opportunity for an experienced Python Data Engineer who thrives in cloud environments and enjoys building efficient, scalable data pipelines. Ideal for someone who wants remote flexibility while working with modern tools like Snowflake, AWS, and PySpark.
About the Company
This role supports a global technology organization focused on building reliable, enterprise-grade data systems. The team values clean engineering, collaboration, and delivering solutions that scale. You’ll join a group committed to improving data flow, data quality, and automation across the business.
Schedule
- Remote role
- Must work Eastern (ET) or Central (CT) hours
- Full-time, long-term contract
What You’ll Do
- Develop and maintain ETL and ELT pipelines using Python, PySpark, and Snowflake
- Design, test, and optimize workflows for structured and semi-structured data
- Build scalable data solutions on AWS, including the Snowflake Cloud Data Warehouse
What You Need
- 5+ years of Python development experience with ETL/ELT and Snowflake
- Strong proficiency in PySpark, complex SQL, SnowSQL, and performance tuning
- Experience writing stored procedures, tasks, streams, and Snowpark workflows
- Clear communication skills to explain technical concepts
Nice to Have
- Job scheduling experience (Control-M, Autosys, or Airflow)
- Background in data quality, validation, and error-handling automation
- SnowPro Core Certification
Benefits
- Competitive pay range: $55.00–$60.60/hr
- Remote work with a collaborative technical team
- Exposure to a modern, cloud-native data stack
This role is moving fast, and qualified engineers are in high demand.
Ready to step into a high-impact engineering role?
Happy Hunting,
~Two Chicks…