Senior Data Platform Engineer II
Although we have a Chicago-based HQ that employees are welcome to work out of, whether they’re local or just visiting, this position is also eligible for remote work from CA, CO, FL, GA, IL, IN, KY, MI, MN, NC, NY, OH, OR, PA, SC, TN, TX, UT, VA, WA, or WI.
Full-Time · Senior
Salary: 175,000 - 195,000 USD per year
Job Details
- Experience: 7–9+ years
- Required Skills: Python, SQL, Apache Airflow, Kafka, Snowflake, Spark, CI/CD, Terraform, dbt, Databricks
Requirements
- 7–9+ years of data engineering or data platform experience with hands-on ownership of production systems
- Experience building and operating a data lakehouse, data lake, or modern warehouse architecture (Snowflake, Databricks, or comparable)
- Deep fluency with Apache Airflow or comparable orchestration: DAG design, task dependencies, sensors, and production operations (see the sketch after this list)
- Solid understanding of open table formats (Iceberg, Delta, Hudi) and storage formats (Parquet, ORC, Avro), including how format choices affect query performance, storage efficiency, and schema evolution
- Strong Python: production-grade code, testing, packaging, and debugging
- Advanced SQL: complex transformations, performance tuning, and debugging against a cloud warehouse
- Hands-on experience with relational schema design, ideally in a multi-tenant SaaS context
- Terraform or comparable IaC for managing cloud data resources; CI/CD for pipeline or infrastructure deployment
- Familiarity with AWS data infrastructure: S3, IAM, and relevant managed services
- Experience using AI-assisted development tools (Claude Code, Cursor, Copilot, or similar) to accelerate engineering workflows
- Demonstrated ownership of systems you’ve inherited and systems you’ve built from scratch - you can assess an unfamiliar codebase and improve it, and you’re just as effective designing something new
- Clear written communication: you can describe a system’s state, a problem, or a recommendation in plain language
- Experience mentoring other engineers through code review, pairing, or technical guidance
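
To make the orchestration requirement concrete, here is a minimal sketch of the Airflow fluency described above: DAG design, task dependencies, and a sensor. All names (DAG id, bucket, keys, schedule) are illustrative assumptions, not details of our platform.

```python
# Illustrative only: a minimal Airflow DAG showing a sensor and an explicit
# task dependency. Bucket, key, and task names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor


def transform(**context):
    # Placeholder for a real transformation step.
    print("transforming partition", context["ds"])


with DAG(
    dag_id="example_ingest",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Sensor: wait for the day's raw file to land in S3 before transforming.
    wait_for_file = S3KeySensor(
        task_id="wait_for_raw_file",
        bucket_name="example-bucket",            # hypothetical bucket
        bucket_key="raw/{{ ds }}/events.parquet",
        poke_interval=300,
        timeout=60 * 60,
    )

    run_transform = PythonOperator(
        task_id="run_transform",
        python_callable=transform,
    )

    wait_for_file >> run_transform  # explicit task dependency
```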
Responsibilities
- Build and operate data pipelines that move data across systems - supporting data lake ingestion, compliance workloads, and cross-domain data flows
- Own pipeline operations end to end: monitoring, incident resolution, data quality, and documentation that lets any team member respond independently
- Identify technical debt and reliability risks and raise them with clear context and proposed next steps
- Design and maintain schemas across relational, warehouse, and lakehouse layers, working with application engineers and product to get data models right
- Build out the platform’s service layer, infrastructure-as-code, and data quality frameworks - this role spans design and implementation (a simple data quality check sketch follows this list)
- Keep platform documentation at a level where any team member can understand what exists, how it works, and where the risks are
- Contribute to evaluations of the current platform against emerging architectures and tooling, helping produce trade-off analyses and recommendations
- Bring what you see day to day in the systems you operate into the team’s improvement roadmap and technical direction
- Mentor peers and junior engineers through code review, pairing, and technical guidance
- Help uphold engineering standards and collaborate cross-functionally with application engineering, product, and analytics as a reliable technical partner
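
As a hedged illustration of the data quality framework work described above, here is a sketch of a tiny declarative check runner. Table names, column names, and thresholds are hypothetical; a real implementation would execute the queries through the warehouse’s Python connector.

```python
# Illustrative only: a tiny declarative data-quality check of the kind a
# platform framework might run against warehouse tables. Table and column
# names are hypothetical.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Check:
    name: str
    sql: str            # query expected to return a single failure count
    max_failures: int = 0


CHECKS = [
    Check(
        name="orders_no_null_customer_id",
        sql="SELECT COUNT(*) FROM orders WHERE customer_id IS NULL",
    ),
    Check(
        name="orders_fresh_within_24h",
        sql=(
            "SELECT COUNT(*) FROM orders "
            "WHERE loaded_at < DATEADD('hour', -24, CURRENT_TIMESTAMP())"
        ),
        max_failures=100,  # tolerate a small backlog
    ),
]


def run_checks(execute: Callable[[str], int]) -> list[str]:
    """Run each check via the supplied query executor; return failures."""
    failures = []
    for check in CHECKS:
        count = execute(check.sql)
        if count > check.max_failures:
            failures.append(f"{check.name}: {count} bad rows")
    return failures
```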