- Design, optimize, and deploy computer vision and multimodal ML models that run efficiently on constrained edge platforms powering Samsara's in-vehicle camera systems.
- Apply advanced model optimization techniques, such as quantization, pruning, and distillation, to achieve real-time inference under strict CPU, memory, and thermal constraints.
- Partner with ML research and product teams to translate new AI detections into deployable, maintainable edge models.
- Collaborate with firmware, ML research, and hardware teams to productize our ML runtime pipeline, bringing scalable, reliable, and testable on-device inference to production.
- Develop performance benchmarking, profiling, and validation frameworks for edge-deployed models to ensure robustness across millions of deployed devices.
- Drive continuous improvement of our edge ML toolchain and advocate for best practices in model optimization, inference reliability, and deployment efficiency.
- Mentor peers on efficient inference design and collaborate cross-functionally to accelerate feature delivery for safety and driver experience programs.