- M.Sc. or higher in Computer Science, Cybersecurity, Criminology, Security Studies, AI Policy, Risk Management, or a related field.
- Demonstrated experience with complex systems modeling, risk assessment methodologies, or security analysis.
- Strong understanding of dual-use technologies and the factors influencing their offensive and defensive applications.
- Deep understanding of modern AI systems (LLMs, multimodal models, autonomous agents) and their technical architectures.
- Experience in one or more of: security mindset, security studies research, cybersecurity, safety engineering, AI governance, operational risk management, systems dynamics modeling, network theory, complexity science, adversarial analysis, or technical standards development.
- Ability to develop qualitative frameworks and quantitative models that capture sociotechnical interactions.
- Record of relevant publications or research contributions related to technology risk, governance, or security.
- Exceptional analytical thinking, with the ability to identify non-obvious path dependencies and feedback loops in complex systems.