- Own end-to-end architecture for MLS and property data: streaming and batch pipelines, microservices, storage, and APIs
- Design and evolve event-driven, Kafka-based data flows
- Drive technical design reviews and set engineering best practices
- Design, build, and operate backend services (Python or Java) that expose data via APIs/microservices
- Implement scalable data processing with Spark or Flink on EMR, orchestrated via Airflow and running on Kubernetes
- Champion observability and operational excellence for data and backend services
- Build and maintain high-volume, schema-evolving streaming and batch pipelines
- Ensure data quality, lineage, and governance are built into the platform
- Partner with analytics engineering and data science to make data discoverable
- Collaborate with ML/AI engineers to design and scale AI agents
- Work with frameworks such as PydanticAI, LangChain, or similar to integrate LLM-based agents
- Help define and implement evaluation, logging, and feedback loops for AI agents and data products
- Collaborate with Product, Engineering, and Operations to shape the data platform roadmap
- Translate business problems into technical strategies
- Mentor and unblock other engineers