Director of Safety Machine Learning
This role is fully remote-friendly within the United States.
Full-Time, Director level
Salary: 276,700 - 387,400 USD per year
Job Details
- Experience: 10+ years in Machine Learning, AI, or applied research; 5+ years leading multi-disciplinary ML teams of 25+ members.
- Required Skills: Artificial Intelligence, Machine Learning, NLP
Requirements
- 10+ years of experience in Machine Learning, AI, or applied research, with a strong background in Trust & Safety, abuse prevention, detection, or content integrity.
- 5+ years of experience leading multi-disciplinary ML teams of 25+ team members (applied science, engineering, analytics) in a high-growth or high-impact environment.
- Must have experience managing people leaders (managers with direct reports).
- Proven track record of shipping ML systems at scale in production, ideally including transformer-based models and LLM fine-tuning.
- Deep expertise in NLP, content understanding, detection systems, and supervised and weak-supervision techniques.
- Strong cross-functional leadership skills, with the ability to influence executives and foster alignment across Safety, Product, and Engineering.
- Thought leadership in responsible AI, safety ML research, or safety measurement frameworks.
- Entrepreneurial mindset, with experience founding or scaling a product or ML organization.
Responsibilities
- Set the vision and strategy for applying ML to Trust & Safety, ensuring scalable, proactive protection against evolving abuse patterns.
- Lead and grow a high-performing Safety ML organization, including applied research, model development, productionization, and continuous improvement.
- Develop and deploy cutting-edge Safety ML systems (including fine-tuned LLMs and transformer models) that outperform state-of-the-art solutions.
- Partner with Trust & Safety, Product, Moderation, and AI/ML Platform teams to identify safety risks, emerging harm vectors, and ML opportunities.
- Drive successful experimentation, evaluation, and model lifecycle management, ensuring high precision, recall, explainability, and policy alignment.
- Champion ethical and responsible AI practices in all Safety ML solutions.
- Track performance through metrics, research-based iteration, and alignment with Reddit’s safety policies and regulatory standards.
- Represent Safety ML leadership internally and externally.