- Own the end-to-end architecture for MLS and property data, including streaming and batch pipelines, microservices, storage, and APIs.
- Design and evolve event-driven, Kafka-based data flows.
- Drive technical design reviews and set engineering best practices.
- Design, build, and operate backend services (Python or Java) that expose data via APIs and microservices.
- Implement scalable data processing with Spark or Flink on EMR, orchestrated with Airflow and running on Kubernetes.
- Champion observability and operational excellence for data and backend services.
- Build and maintain high-volume, schema-evolving streaming and batch pipelines for ingesting and normalizing MLS and third-party data.
- Ensure data quality, lineage, and governance are built into the platform.
- Partner with analytics engineering and data science to make data discoverable and usable.
- Collaborate with ML/AI engineers to design and scale AI agents for automation.
- Work with frameworks such as PydanticAI, LangChain, or similar to integrate LLM-based agents.
- Define and implement evaluation, logging, and feedback loops for AI agents and data products.
- Collaborate with Product, Engineering, and Operations to shape the data platform roadmap.
- Translate business problems into technical strategies and delivery plans.
- Mentor and unblock other engineers, and elevate technical decision-making.