Lead Performance Tester (SDET)

United Kingdom · Full-Time · Lead
Salary not disclosed

Job Details

Required Skills
AWS, Python, Java, Jenkins, JMeter, TypeScript, Grafana, GitHub Actions

Requirements

  • Extensive hands-on experience in backend performance engineering, test automation, and SDET leadership within complex distributed systems.
  • Strong proficiency in performance testing tools such as k6, JMeter, or Gatling, including large-scale test design and execution.
  • Proven experience with AWS cloud services (e.g., EC2, ECS, RDS, API Gateway) and performance optimization in cloud environments.
  • Strong programming skills in Python, Java, or TypeScript for building performance frameworks and automation tooling.
  • Deep understanding of distributed systems, microservices, event-driven architectures, and high-throughput APIs.
  • Experience implementing CI/CD-integrated performance testing using tools such as GitHub Actions or Jenkins.
  • Strong knowledge of system design principles, including latency, throughput, concurrency, and fault tolerance.
  • Hands-on experience with observability platforms such as Grafana, CloudWatch, or similar monitoring tools.
  • Proven ability to lead and coordinate QA/SDET teams, driving performance engineering standards across multiple squads.
  • Strong analytical mindset with the ability to translate performance data into actionable engineering decisions.
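As an illustration of the last requirement, turning raw performance data into a pass/fail engineering decision often comes down to comparing tail-latency percentiles against an SLO budget. The sketch below is hypothetical (the function names, nearest-rank percentile method, and 250 ms budget are assumptions, not part of this role description):

```python
# Illustrative sketch only: names, the nearest-rank percentile method,
# and the 250 ms SLO budget are assumptions for demonstration.
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

def check_slo(latencies_ms, p95_budget_ms=250.0):
    """Summarise a latency sample set and flag whether p95 is within budget."""
    p95 = percentile(latencies_ms, 95)
    return {
        "p95_ms": p95,
        "mean_ms": statistics.mean(latencies_ms),
        "within_slo": p95 <= p95_budget_ms,
    }

# Example: 100 samples, mostly fast with a slow 10% tail.
samples = [50.0] * 90 + [300.0] * 10
result = check_slo(samples)
```

Note how the mean alone (75 ms here) would hide the slow tail that the p95 exposes, which is why percentile-based checks are the usual basis for SLO decisions.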

Responsibilities

  • Define and lead the overall performance engineering strategy across cloud-native, high-availability systems, ensuring scalability, reliability, and resilience.
  • Lead and mentor a distributed guild of SDETs, guiding performance testing practices and embedding performance ownership across engineering teams.
  • Design, build, and evolve performance testing frameworks using tools such as k6, Python, Java, or TypeScript, focused on APIs, databases, and distributed infrastructure.
  • Implement and manage advanced performance testing approaches including traffic replay, load simulation, and multi-tenant data generation to replicate production-scale conditions.
  • Drive performance, load, stress, endurance, and chaos testing strategies, ensuring system robustness under failure scenarios and peak demand.
  • Integrate performance testing into CI/CD pipelines to enable continuous validation, automated regression detection, and early performance issue identification.
  • Collaborate with architecture and data teams to identify bottlenecks, optimize system design, and validate database and messaging infrastructure performance.
  • Leverage observability tools to monitor, analyse, and troubleshoot system behaviour, translating insights into actionable engineering improvements.
  • Define and maintain performance benchmarks, SLAs, SLOs, and error budgets aligned with business and technical requirements.
  • Work across monolithic and microservices architectures, ensuring consistent performance validation strategies across evolving systems.
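The CI/CD-integration responsibility above is commonly realised as a pipeline step that runs a load-test script on every change. A minimal GitHub Actions sketch, assuming a k6 script at a hypothetical path `perf/smoke.js` (the workflow name, trigger, and path are illustrative, not taken from this posting):

```yaml
# Hypothetical workflow sketch; job name, script path, and trigger
# are assumptions for illustration.
name: performance-smoke
on:
  pull_request:

jobs:
  k6-smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run k6 smoke test
        run: |
          # Pipe the script to the official k6 image via stdin
          docker run --rm -i grafana/k6 run - <perf/smoke.js
```

Failing the step on threshold breaches (k6 exits non-zero when configured thresholds are crossed) is what enables the automated regression detection described above.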