Reinforcement Learning Environments Engineer - Cybersecurity

Preference Model

San Francisco, CA, USA

Posted on May 13, 2026

Employment Type

Full time

Location Type

On-site

Department

Engineering

About Us

Preference Model is building automated ML research engineering. Existing frontier models are brittle when applied to real-world ML tasks. The present bottleneck is the lack of high-quality RL training environments. Our first step is to build RL environments that reflect real-world complexity, with diverse tasks and robust reward functions.

Our founding team previously worked on Anthropic’s data team, building the data infrastructure and datasets behind Claude. We are partnering with leading AI labs to push AI closer to achieving its transformative potential.

About the Role

Our goal is to automate every role at a hypothetical AI research lab. One important capability we care about is models' understanding of cybersecurity.

We're hiring experienced Security Engineers to design and build reinforcement learning environments that teach LLMs to reason about and solve real-world cybersecurity problems, from finding vulnerabilities in production codebases to generating working exploits and patching them safely.

You'll join a small, high-ownership team and contribute directly to the data layer that powers frontier LLM capability in security.

What You Will Do

  • Design and build RL environments and reward functions that produce clean, learnable signals for frontier models on offensive and defensive security tasks across diverse programming languages.

  • Build environments covering the full vulnerability lifecycle: discovery in source code, exploitation, and patching.

  • Build environments for reverse engineering tasks across binaries, bytecode, and obfuscated code.

  • Construct verifiable reward signals using fuzzers, sanitizers, symbolic execution, static analyzers, exploit-success checks, and patch-correctness validation.

  • Collaborate with teammates to develop new ideas and tools that improve the environment-building process.

What We Are Looking For

  • Strong security fundamentals and broad interests across both offensive and defensive work. You read advisories, papers, and writeups, understand vulnerabilities deeply, and have the creativity to translate them into RLVR problems.

  • Hands-on experience finding, exploiting, or patching real vulnerabilities through CTFs, bug bounty work, security research, red/blue team engagements, or shipped security work in industry.

  • Proficiency in Python and systems programming, plus working comfort in at least one low-level language (C, C++, Rust) and one web/application stack.

  • Familiarity with security tooling: fuzzers, sanitizers, debuggers, and disassemblers.

  • A problem-solving mindset: you take ownership and drive solutions end-to-end.

  • Passion for staying current with the rapidly evolving security and ML landscape.

  • Ability to meet throughput expectations and respond quickly to feedback.

About You

  • Published security research, CVEs, or notable bug bounty findings.

  • Strong CTF background or competitive results at events like DEF CON CTF or similar competitions.

  • Deep expertise in a specific area: binary exploitation, kernel security, browser/V8 internals, hypervisor security, cryptographic implementation, web application security, or cloud/container security.

  • Experience building or contributing to fuzzing infrastructure, vulnerability scanners, or automated program analysis tools.

  • Experience with ML for code or security.

  • Experience building complex interactive RL environments, agent harnesses, or sandboxed evaluation infrastructure.

What We Offer

  • Competitive cash and equity compensation (>90th percentile)

  • Ownership and autonomy in a fast-moving startup environment

  • Opportunity to work with top machine learning engineers

  • Health, vision, and dental benefits

  • 401(k) match

  • Visa sponsorship & relocation support available

We value diverse perspectives and experiences. If you're excited about this role but don't check every box, we still encourage you to apply.