- Develop a data pipeline that processes large volumes of data reliably and at high throughput, using Python, Go, MongoDB, and GCP.
- Improve worker efficiency, monitoring, and infrastructure autoscaling.
- Improve the throughput and reliability of imports.
- Implement integrations with additional data stores.
- Export data from the platform to Google BigQuery using Dataflow, PySpark, and Apache Beam.
- Run and support production services handling high-volume traffic on Google Cloud Platform and Kubernetes.
- Review peers' code.
- Participate in the on-call rotation.
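The "throughput and reliability of imports" duty can be illustrated with a minimal Python sketch. This is a hypothetical example, not code from the role: the helper names `batched` and `write_with_retries` are invented, and the actual sink (e.g. MongoDB or BigQuery) is abstracted behind a plain callable. Batching amortizes per-request round-trip cost (throughput), and exponential backoff with jitter handles transient write failures (reliability).

```python
import random
import time
from typing import Callable, Iterable, Iterator, List, TypeVar

T = TypeVar("T")


def batched(items: Iterable[T], size: int) -> Iterator[List[T]]:
    """Yield fixed-size batches so each write amortizes round-trip cost."""
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch


def write_with_retries(write: Callable[[List[T]], None],
                       batch: List[T],
                       max_attempts: int = 5,
                       base_delay: float = 0.1) -> None:
    """Retry a batch write with exponential backoff plus jitter.

    `write` is any callable that persists one batch (hypothetical stand-in
    for a MongoDB bulk insert or a BigQuery load); it is retried on any
    exception up to `max_attempts` times, then the error is re-raised.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            write(batch)
            return
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff with random jitter to avoid thundering herd.
            time.sleep(base_delay * (2 ** (attempt - 1)) * (1 + random.random()))
```

In a real worker, each batch yielded by `batched` would be passed to `write_with_retries` with the client's bulk-write method; batch size and backoff parameters would be tuned against the sink's rate limits.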