Software Engineer

Preference Model

Posted on Apr 27, 2026

Location

San Francisco, CA, USA

Employment Type

Full time

Location Type

On-site

Department

Engineering

About Us

Preference Model is building automated ML research engineering.

Existing frontier models are brittle when applied to real-world ML tasks. The present bottleneck is the lack of high-quality RL training environments. Our first step is to build RL environments that reflect real-world complexity, with diverse tasks and robust reward functions.

Our founding team previously worked on Anthropic's data team, building the data infrastructure and datasets behind Claude. We are partnering with leading AI labs to push AI closer to achieving its transformative potential.

About the Role

Frontier models still fail at the complex, judgment-heavy work that would make them genuinely transformative: long-horizon research, system design under constraints, iterative debugging in unfamiliar environments. The bottleneck isn't compute; it's training data. We build the RL environments that expose those failures and the infrastructure that turns them into reward signal.

What You Will Do

  • Design and build RL environments end-to-end: Own the full lifecycle — tasks, reward functions, grading infrastructure, failure analysis, and iteration until environments produce clean signal.

  • Build RL training infrastructure: Develop scalable post-training systems including orchestration, performance optimization, and monitoring.

  • Create model evaluations: Define what good agent performance looks like and build the tooling to measure it.

  • Shape technical strategy: Drive architecture decisions and help build our engineering culture as an early team member.

What We Are Looking For

  • 4+ years of software engineering experience with strong project ownership

  • Deep expertise in at least one domain: infra, distributed systems, performance, security, or research tooling

  • Skilled in Python, Rust, or TypeScript across the full stack

  • Hands-on experience with Kubernetes, AWS, or GCP

  • Extensive experience working with coding agents

  • Thrive working independently on ambiguous, high-ownership problems

Nice-to-Haves

  • ML infrastructure or RL systems experience

  • Simulation environments or LLM eval pipelines

  • Distributed systems or performance optimization

No prior ML experience is required.

This role is not a good fit if you want a product role shipping features to end users.

We value diverse perspectives and experiences. If you're excited about this role but don't check every box, we still encourage you to apply.