- Own the end-to-end architecture for MLS and property data: streaming and batch pipelines, microservices, storage layers, and APIs
- Design and evolve event-driven, Kafka-based data flows
- Drive technical design reviews and set engineering best practices
- Design, build, and operate backend services (Python or Java) that expose data via robust APIs and microservices
- Implement scalable data processing with Spark or Flink on EMR (or similar), orchestrated via Airflow and running on Kubernetes
- Champion observability and operational excellence for data and backend services
- Build and maintain high-volume, schema-evolving streaming and batch pipelines
- Ensure data quality, lineage, and governance are built into the platform
- Partner with analytics engineering and data science to make data discoverable and usable
- Collaborate with ML/AI engineers to design and scale AI agents
- Work with frameworks such as PydanticAI, LangChain, or similar to integrate LLM-based agents
- Help define and implement evaluation, logging, and feedback loops for AI agents
- Collaborate with Product, Engineering, and Operations to shape the roadmap
- Translate ambiguous business and customer problems into clear technical strategies
- Mentor and unblock other engineers