Sr. Staff AI Security Engineer, AI Native Platform

Remote, USA · Full-Time · Staff
Salary: 209,000 - 309,000 USD per year

Job Details

Experience
12+ years
Required Skills
OAuth, LLM

Requirements

  • 12+ years in security engineering with depth in application security, cloud security, IAM, or detection — and a track record of building controls that earn adoption, not just approval.
  • Hands-on builder shipping security controls that hold up in production. You're not an advisor but a practitioner who can define patterns that last.
  • Hands-on fluency with LLM and agentic systems. You've built with these tools, broken them, and shipped fixes for prompt pipelines, RAG architectures, and multi-agent orchestration from the inside.
  • Solid grounding in IAM for non-human systems: service identities, OAuth, secrets management, RBAC/ABAC, and least-privilege architecture at scale.
  • Experience with production telemetry and detection — defining detections and building response paths for threat surfaces without established playbooks.
  • Comfort with ambiguity and in-flight builds. You're energized by figuring things out — writing first-draft standards, testing approaches, and scaling what works.
  • Strong cross-functional communication and the ability to push back when it matters. You carry risk, tradeoffs, and technical decisions across engineering, product, and security leadership without losing precision, and can challenge a risky decision clearly and constructively.
  • Familiarity with NIST AI RMF, OWASP LLM Top 10, and adjacent compliance environments for consumer data at scale.
  • Bachelor's degree or equivalent experience in Computer Science, Information Security, or a related field.

Responsibilities

  • Secure how Life360 accesses frontier models. Design, build, and iterate the access controls, policy enforcement, and authorization patterns that govern how systems interact with the frontier models they rely on.
  • Build secure patterns for MCP access and tool-use authorization. Own the controls that vet, risk-tier, and govern how we integrate with external tools and services via MCP as adoption expands across engineering teams.
  • Design and build the identity and authorization model for autonomous agents: service identities, scoped credentials, and least-privilege access patterns.
  • Design and build agentic observability and adversarial defenses. Build the telemetry pipelines and behavioral monitoring that provide visibility into AI system behavior. Implement architecture-level defenses against prompt injection and related adversarial attack classes.
  • Shape security for the common AI end-user platform. Lead design reviews and build the access controls, data boundary enforcement, and abuse detection that keep a shared AI environment safe across users with different privilege levels.
  • Secure the shared knowledge layer. Define access control and data governance for retrieval-augmented and reasoning systems, ensuring AI-powered tools don't surface sensitive data to the wrong systems or users.
  • Build AI supply chain integrity into the platform. Develop model provenance practices, service vetting, and dependency controls that keep the AI stack trustworthy as it grows.
  • Partner with Privacy, Legal, and Data Platform to ensure the right controls are built into pipelines handling real-time location, family relationship data, and data involving minors.