AI Benchmark Engineer - Native Language Specialist - Turkish

LILT (Production) · AI, Language Technology
Turkey (Remote) · Contract · Mid-level
Salary not disclosed

Job Details

Languages
Turkish, English
Experience
5+ years
Required Skills
Python

Requirements

  • 5+ years of industry experience in software engineering.
  • Proven track record at leading technology companies and/or graduation from top-tier engineering universities.
  • Native or near-native fluency in Turkish, with a deep understanding of its grammar, register, and phrasing rules.
  • High English proficiency.
  • Strong proficiency in Python.
  • Strong proficiency in standard shell scripting.
  • Strong proficiency in data processing (parsing, cleaning, and transforming text datasets).
  • Extensive experience with Terminal/CLI-based development workflows.
  • Working familiarity with coding agents.
  • Deep technical understanding of multilingual text processing pitfalls.
  • Experience with encoding/decoding robustness and Unicode normalization.
  • Experience with locale-dependent conventions (collation, casing, non-Gregorian dates).
  • Experience with Text I/O, toolchain interoperability, and safe string operations.
  • Experience with Bidirectional/RTL handling, font fallbacks, and rendering/typography in UI or artifacts (for specific languages).
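The casing and normalization pitfalls listed above are especially sharp in Turkish, whose alphabet distinguishes dotted i/İ from dotless ı/I. A minimal sketch of both pitfalls (the `turkish_lower` helper is illustrative, not a library function):

```python
import unicodedata

# Python's default str.lower() is locale-independent: it maps 'I' to 'i',
# which is correct for English but wrong for Turkish ('I' should become 'ı').
def turkish_lower(s: str) -> str:
    """Illustrative Turkish-aware lowercasing: handle I/İ before str.lower()."""
    return s.replace('I', 'ı').replace('İ', 'i').lower()

print('ISTANBUL'.lower())        # default casing: 'istanbul'
print(turkish_lower('ISTANBUL')) # Turkish casing:  'ıstanbul'
print(turkish_lower('İZMİR'))    # 'izmir'

# Normalization pitfall: 'İ' (U+0130) canonically decomposes to
# 'I' + U+0307 (combining dot above), so byte-wise comparisons can
# disagree unless both sides are normalized first.
assert unicodedata.normalize('NFD', '\u0130') == 'I\u0307'
assert '\u0130' != 'I\u0307'
```

Robust task verifiers typically normalize (e.g. to NFC) before comparing strings, precisely to avoid false failures on such canonically equivalent inputs.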

Responsibilities

  • Design, build, and validate benchmarks to test large language models on multilingual software challenges.
  • Measure multilingual robustness across prompt language effects, non-English data processing, and complex locale/encoding edge cases.
  • Create high-signal, high-quality tasks that genuinely test a model's ability to handle multilingual environments without relying on English translation.
  • Evaluate coding agents through task engineering.
  • Build realistic task environments using datasets and files in native Turkish, ensuring assets remain in the target language.
  • Identify AI failure points in native Turkish through prompting and translation.
  • Support the development of robust solutions (reference implementations).
  • Write highly reliable, deterministic verifier scripts.
  • Analyze execution logs and calibrate task difficulty (Easy to Very Hard) using standard Terminal-Bench run configurations against various model tiers.
  • Participate in a rigorous, 4-layer human quality control process (creation, human review, calibration review, and audit) alongside automated LLM-based checks to ensure fairness, grammatical accuracy, and benchmark integrity.
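A deterministic verifier of the kind described above might look like the following minimal sketch (the file-based pass/fail protocol and path arguments are assumptions for illustration, not the actual harness contract):

```python
"""Minimal sketch of a deterministic verifier script.

Assumed protocol: the agent under test writes its answer to one UTF-8 file,
the task ships a canonical expected file, and the verifier exits 0 on pass,
1 on fail, so an automated harness can consume the result.
"""
import sys
import unicodedata
from pathlib import Path


def canonical(text: str) -> str:
    # Make the comparison deterministic across OSes and editors:
    # unify line endings, strip trailing whitespace per line,
    # and apply NFC so canonically equivalent Unicode compares equal.
    lines = text.replace('\r\n', '\n').split('\n')
    return '\n'.join(unicodedata.normalize('NFC', ln.rstrip()) for ln in lines).strip()


def verify(output_path: str, expected_path: str) -> bool:
    out = Path(output_path).read_text(encoding='utf-8')
    exp = Path(expected_path).read_text(encoding='utf-8')
    return canonical(out) == canonical(exp)


if __name__ == '__main__' and len(sys.argv) == 3:
    sys.exit(0 if verify(sys.argv[1], sys.argv[2]) else 1)
```

Normalizing before comparison keeps the verdict stable regardless of whether Turkish characters arrive precomposed or decomposed, which is exactly the kind of encoding robustness the role calls for.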