Senior Software Engineer - AI Product & Platform Engineering
Remote (US-based), within ±2 hours of America/Chicago · Full-Time · Senior
Salary not disclosed
Job Details
- Required Skills: Artificial Intelligence, Full Stack Development, Git, TypeScript, React
Requirements
- Senior-level TypeScript + React experience in production systems
- Experience building React full-stack apps (routing, server rendering, server components/actions, backend-for-frontend patterns)
- Async + streaming experience (SSE/streams; cancellation/backpressure awareness)
- Comfortable on macOS and Linux; fluent with the command line for local development, debugging, automation, and scripting against CLI tools
- Strong Git discipline (PR workflows, code reviews, conventional commits, clean commits) and ability to raise the bar through review
- Strong testing discipline and ability to build testable abstractions
- Strong API integration experience; ability to ship, measure, and iterate in ambiguity
- Experience with the Vercel AI SDK (Core + UI) for streaming experiences, tool calling, and chat UX patterns
- LLM integration experience (OpenAI/Anthropic/Bedrock or similar) with tool calling / structured outputs
- Experience building internal platform primitives with real adoption (SDKs, shared libraries, paved roads)
- Experience with AI evaluation frameworks and regression testing for model outputs
- Experience with API design, plus depth in observability and/or security for AI systems
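To make the async + streaming requirement concrete, here is a minimal TypeScript sketch of the pattern it describes: a pull-based stream reader (the consumer sets the pace, which is the backpressure) with cancellation wired through an AbortController. The names (`readStream`, the toy in-memory `source`) are illustrative, not from this posting, and a real SSE body would arrive over the network rather than from an in-memory stream.

```typescript
// Pull-based consumption of a streamed response with cancellation.
// Assumes Node 18+ (global web ReadableStream / AbortController).
async function readStream(
  stream: ReadableStream<string>,
  signal: AbortSignal
): Promise<string[]> {
  const reader = stream.getReader();
  const chunks: string[] = [];
  // Cancellation: an abort on the signal cancels the underlying stream,
  // which resolves any pending read with done = true.
  const onAbort = () => void reader.cancel(signal.reason);
  signal.addEventListener("abort", onAbort, { once: true });
  try {
    while (true) {
      // Backpressure via the pull model: the source only produces
      // as fast as we call read().
      const result = await reader.read();
      if (result.done) break;
      chunks.push(result.value);
    }
  } finally {
    signal.removeEventListener("abort", onAbort);
    reader.releaseLock();
  }
  return chunks;
}

// Toy stand-in for a streamed (e.g. SSE) body.
const source = new ReadableStream<string>({
  start(controller) {
    for (const token of ["Hello", " ", "world"]) controller.enqueue(token);
    controller.close();
  },
});
```

The same read loop works unchanged whether the stream is an in-memory test double, as here, or a `fetch` response body from a model provider.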
Responsibilities
- Ship end-to-end product increments (spec → build → release → operate)
- Use the right tool for the job; our current default is a modern TypeScript + React full-stack (server rendering, RSC-style patterns, streaming), and we stay flexible as we learn
- Build AI features with disciplined patterns: tool calling, structured outputs, grounding, and streaming UI
- Partner with product, design, and SMEs to define outcomes, validate assumptions, and iterate quickly
- Turn lessons from shipped features into reusable "paved road" primitives (eval harnesses, guardrails, shared tool/prompt patterns)
- Build evaluation + release loops (tests, golden datasets, regressions, targeted human review; LLM grading where it fits)
- Own reliability, performance, and cost; instrument with OpenTelemetry and define SLOs for key flows
- Enforce security and privacy by design (safe tool access, authZ, auditability, prompt-injection mitigations, PII handling)
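The evaluation + release loop above has a simple core shape: run every golden-dataset case through the model and a grader, and gate the release on the pass rate. This TypeScript sketch uses entirely hypothetical names (`GoldenCase`, `regressionPassRate`); a production harness would add async model calls, per-case reporting, and LLM-based grading where exact match is too strict.

```typescript
// One golden-dataset case: an input and its expected output.
type GoldenCase = { input: string; expected: string };

// Run every case through the model, grade each output against the
// golden expectation, and return the fraction that passed.
function regressionPassRate(
  cases: GoldenCase[],
  model: (input: string) => string,
  grade: (actual: string, expected: string) => boolean
): number {
  if (cases.length === 0) return 1;
  let passed = 0;
  for (const c of cases) {
    if (grade(model(c.input), c.expected)) passed += 1;
  }
  return passed / cases.length;
}
```

Swapping the `grade` function is where "LLM grading where it fits" plugs in: exact match for structured outputs, a rubric-driven grader for free-form text.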