- Design, build, and scale data pipelines using ClickHouse, Python, and dbt.
- Build systems for large-scale streaming and batch data, with an emphasis on correctness and stability.
- Own the end-to-end lifecycle of data pipelines, from ingestion to clean, consumable datasets.
- Improve pipeline observability, data quality checks, and failure handling (see the sketch after this list).
- Collaborate with data consumers to define dataset contracts and schemas.
- Use AI tools such as Cursor, MCPs, and LLMs to accelerate development.
- Stay current with best practices and continuously evolve your toolkit.
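To make the data-quality and failure-handling responsibility concrete, here is a minimal sketch of a post-load check against ClickHouse. It assumes the clickhouse-connect Python client and a hypothetical `events` table with `ingested_at` and `user_id` columns; the thresholds and connection details are illustrative, not prescribed by this role.

```python
"""Minimal sketch of a post-load data quality check, assuming the
clickhouse-connect client and a hypothetical `events` table."""
import logging

import clickhouse_connect  # pip install clickhouse-connect

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dq_checks")


def check_events_table(client) -> None:
    # Freshness: fail loudly if no rows landed in the last hour
    # (the one-hour window is an illustrative threshold).
    fresh = client.query(
        "SELECT count() FROM events WHERE ingested_at >= now() - INTERVAL 1 HOUR"
    ).result_rows[0][0]
    if fresh == 0:
        raise RuntimeError("freshness check failed: no rows in the last hour")

    # Completeness: null rate on a key column must stay under 1%.
    total, nulls = client.query(
        "SELECT count(), countIf(user_id IS NULL) FROM events"
    ).result_rows[0]
    null_rate = nulls / total if total else 0.0
    if null_rate > 0.01:
        raise RuntimeError(f"completeness check failed: {null_rate:.2%} null user_id")

    log.info("checks passed: %d fresh rows, %.2f%% null user_id", fresh, null_rate * 100)


if __name__ == "__main__":
    # Connection details are placeholders; point these at a real ClickHouse instance.
    client = clickhouse_connect.get_client(host="localhost", port=8123)
    check_events_table(client)
```

A check like this would typically run as a step after each batch load or dbt run, with a failed check halting downstream publishing rather than silently shipping bad data.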