- Design and implement real-time, onboard, multimodal perception models with a focus on robustness and efficiency
- Fine-tune deep learning models to maximize performance on embedded hardware (NVIDIA Jetson), applying quantization, pruning, and neural architecture search
- Develop custom CUDA kernels and TensorRT plugins for optimized pre- and post-processing
- Lead R&D efforts in multiview sensor fusion and scene understanding, with an emphasis on anomaly detection
- Manage and experiment with complex multimodal datasets (camera, lidar, radar) from multi-agent autonomous systems
- Document and present findings internally and externally, contributing to team knowledge and potential publications