Senior Software Engineer - Data Platform

G-PSaaS
United States: California or Philadelphia, Pennsylvania · Full-Time · Senior
Salary not disclosed

Job Details

Experience
5+ years
Required Skills
Kafka, Spark, Databricks

Requirements

  • 5+ years of experience building and operating production-grade data systems at massive scale
  • Deep, hands-on mastery of the Databricks/Spark ecosystem (Delta Lake, DLT, Spark UI debugging, and performance tuning)
  • Proven track record of building Real-time/Streaming architectures (Spark Structured Streaming, Kafka, or Kinesis) as a core production requirement
  • Experience managing and optimizing cloud costs in a high-growth environment
  • Experience building APIs, tools, or frameworks used by other internal engineering teams

Responsibilities

  • Architect the Data Platform, leading the design and implementation of internal SDKs and self-service frameworks
  • Shift from "pipeline building" to "platform engineering," creating reusable patterns for batch and real-time event processing
  • Own the cost-effectiveness of the Databricks ecosystem, tuning Spark execution plans and optimizing shuffle partitions
  • Implement auto-scaling strategies to manage DBU consumption and ensure platform performance
  • Drive Data Contracts & Governance by implementing Schema-on-Write validation
  • Partner with the Data Architect & Data Stewards to enforce data privacy, security standards, and metadata lineage
  • Lead an AI-Infused SDLC, championing AI-assisted development tools
  • Mentor engineers on distributed computing best practices and conduct deep-dive code reviews
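For context on the "Schema-on-Write validation" responsibility above, the idea is to reject non-conforming records before they reach storage rather than cleaning them up afterward. A minimal sketch in plain Python follows; the schema and field names are hypothetical illustrations, not part of the posting:

```python
# Hypothetical data contract: required fields and their expected types.
SCHEMA = {
    "event_id": str,
    "user_id": int,
    "amount_cents": int,
}

def validate_on_write(record: dict) -> dict:
    """Enforce the contract at write time: fail fast on bad records."""
    missing = set(SCHEMA) - set(record)
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    for field, expected in SCHEMA.items():
        if not isinstance(record[field], expected):
            raise TypeError(
                f"field {field!r} expected {expected.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return record

# A conforming record passes through unchanged; a malformed one raises
# before anything is written downstream.
good = validate_on_write(
    {"event_id": "e1", "user_id": 42, "amount_cents": 500}
)
```

In a production Spark/Delta Lake pipeline the same principle is applied declaratively (e.g. enforced table schemas and column constraints) rather than with hand-rolled checks like this.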