Senior Data Engineer / Data Platform Lead
Fully remote within the continental US. Available during business hours, Monday–Friday, 9am–6pm EST. Full-time, Senior.
Salary: $185k–$225k / year
Job Details
- Experience
- 8–12+ years in data engineering, with at least 4 years primarily in Azure
- Required Skills
- Python, SQL, C#, CI/CD, Terraform, NetSuite
Requirements
- 8–12+ years in data engineering
- At least 4 years primarily in Azure (Data Factory, Synapse, Fabric, Functions, ADLS, Azure Monitor, or comparable)
- Demonstrated ownership of a production data platform
- Strong SQL experience (query tuning, execution plans, indexing strategy, incremental models, data-quality validation)
- Production-level C# and/or Python experience for custom connector work
- Solid experience with messy commerce/ERP APIs (Amazon SP-API, NetSuite, or similar)
- IaC and CI/CD discipline (Bicep, Terraform, or ARM, with dev/staging/prod separation, peer review, rollback thinking)
- Demonstrated on-call experience
- Experience partnering with a technical executive without layers of management
- Up-to-date Mac or Windows computer with anti-virus protection and a reliable high-speed internet connection
- Quiet, distraction-free workspace
Responsibilities
- Audit every existing data source (what's flowing, what's broken, what's missing)
- Replace our manual CSV-based Klaviyo ingestion with a direct API pipeline
- Stand up production pipelines for Amazon SP-API, Shopify, and NetSuite with proper monitoring
- Establish our infrastructure-as-code practice and CI/CD pipeline
- Define the target architecture: ingestion patterns, orchestration standards, environment separation, data-quality gates, deployment workflow
- Own ingestion (Amazon SP-API, Shopify Admin API, NetSuite SuiteAnalytics Connect, Klaviyo, Recharge, YouTube, GA4, Google Search Console, Google Ads, Meta Ads, Triple Whale, plus ~15 more)
- Own orchestration (decide what runs when, e.g., Finance data, YouTube API quotas, Klaviyo events)
- Own reliability (health checks, schema validation, freshness SLAs, source reconciliation, data-quality gates, alerts, runbooks)
- Own cost and performance (partitioning, query tuning, and incremental processing for large-scale data)
- Own data contracts (pipelines stop and alert if a source violates its contract rather than passing bad data downstream)