- 3+ years building data pipelines (batch and streaming) in production.
- Strong in Python, SQL, and data infrastructure (e.g., Kafka, Airflow, Flink, Spark).
- Experience with data integrity, schema evolution, partitioning, and compaction.
- Deep understanding of performance, latency, and indexing.
- Comfortable designing for failure: retries, backfills, and idempotency.
- Experience in fraud, security, or abuse domains.
- Familiarity with real-time feature serving or feature store systems.
- Exposure to internal tooling APIs or data mesh architectures.