- 8+ years of advanced SQL development experience, coupled with strong Python or Java programming skills.
- 4+ years of hands-on experience with data warehousing technologies (Snowflake, BigQuery, Redshift).
- 3+ years of experience with streaming technologies (Spark, Flink, Kafka), NoSQL, or related technologies.
- Extensive experience building and optimizing large-scale ETL pipelines, data modeling, and schema design, including CDC and both structured and unstructured data.
- Strong dedication to code quality, automation, and operational excellence: CI/CD pipelines, unit/integration tests.
- Demonstrated experience working in AWS, building web services, and/or working with ML pipelines and LLMs.
- Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, Operations Research, Data Science, or a related field.
- Solid understanding of data security and compliance requirements.
- Effective communication skills, including the ability to convey complex technical concepts to non-technical stakeholders.
- Ability to thrive in a fast-paced, high-growth startup environment, balancing autonomy with collaboration.