- 3+ years of experience in AI/ML research, with a focus on safety, robustness, and interpretability
- Strong background in machine learning theory, with practical experience implementing models and algorithms
- Expertise in AI safety frameworks, fault tolerance, and risk mitigation strategies for AI systems
- Experience with reinforcement learning, adversarial training, and robustness testing of AI models
- Proficiency in programming languages such as Python, C++, or Go, with hands-on experience in AI development libraries (e.g., TensorFlow, PyTorch)
- Strong understanding of AI ethics, fairness, and the real-world impact of machine learning algorithms
- Ability to identify potential safety risks in AI-driven systems and design solutions to address them
- Familiarity with distributed systems, cloud infrastructure, and build/test automation frameworks