- Extensive experience building and operating real-time, event-driven data processing systems at scale
- Hands-on experience managing and scaling Kafka clusters and streaming data pipelines in production
- Deep expertise in event streaming architectures and messaging systems (Kafka, Kinesis, or similar)
- Strong proficiency with Snowflake, ClickHouse, and data modeling
- Experience working across multi-cloud environments (GCP and AWS preferred)
- Proficiency in SQL and at least one of Go, Python, or Java
- Familiarity with observability and monitoring tools (Datadog, Grafana, Monte Carlo, etc.)
- Experience with source/version control (Git) and CI/CD workflows for data systems
- Demonstrated ability to write production code and design scalable, fault-tolerant data infrastructure
- Track record of mentoring engineers