Staff AI Architect, Family AI Lab
All positions, unless otherwise specified, can be performed remotely (within the US and Canada)
Full-Time, Staff
Salary: 197,000 - 290,000 USD per year
Job Details
- Required Skills: Written communication, Prompt Engineering
Requirements
- Deep engineering experience, with significant time spent building and running AI systems in production at meaningful scale. Real users, real load, real on-call.
- Fluent in modern LLM serving (vLLM, TGI, SGLang, hosted APIs — whatever's right for the job). No religion about it.
- You think in evals. You've built harnesses. You know the difference between vibes-based iteration and real progress because you've felt the pain of working without an eval harness.
- You can build a cloud system from the ground up. Pick the services, wire them together, make them reliable, keep the bill in check.
- Strong opinions about cost. You've seen inference bills run out of control and have instincts for where the leaks come from.
- AI-native in your daily workflow. Hands-on with Claude Code, Cursor, or equivalent. You think natively in agentic workflows, prompt engineering, context window management, MCP / function calling.
- Comfortable deciding with incomplete information. The spec will be a moving target. You don't need it to settle before you start building.
- Strong written communication. You write specs, decision records, and playbooks that an agent (and a human) can act on precisely.
- Experience with agent systems, retrieval pipelines, or long-context personalization.
- Privacy architecture experience in multi-user systems where context is shared across people with different rights to see it.
- You've built something where the data itself was the moat — and you understand why that's different from building on public data.
Responsibilities
- Architect the inference pipeline that turns Life360 into a context layer LLMs can reason over: soul files, family context, behavioral signals, document understanding.
- Choose models. Choose hosting. Decide what is self-hosted, API-based, fine-tuned, or prompt-engineered. Revisit those decisions as the landscape changes.
- Build the serving infrastructure. Latency budgets, batching, caching, fallbacks, graceful degradation. Make it run at scale before scale gets here.
- Build the eval loop. Know whether changes make the product better or worse, not just whether the demo works.
- Own the cost model. Track spend per user, per feature, per family. Find the levers (context compression, model routing, pre-computation) that keep unit economics working from a test group to millions.
- Define what gets persisted, what gets summarized, and what gets thrown away. The architecture decision of "what does our data look like to an LLM" lives with you.
- Set the bar for safety, privacy, and trust at the infrastructure layer. Sensitive categories filtered before they propagate. Encryption, access controls, audit trails built in, not bolted on.