Research Engineer / Research Scientist

Preference Model

San Francisco, CA, USA

Posted on Apr 27, 2026

Location

San Francisco

Employment Type

Full time

Location Type

On-site

Department

Engineering

About Us

Preference Model is building automated ML research engineering.

Existing frontier models are brittle when applied to real-world ML tasks. The present bottleneck is the lack of high-quality RL training environments. Our first step is to build RL environments that reflect real-world complexity, with diverse tasks and robust reward functions.

Our founding team previously worked on Anthropic's data team, building the data infrastructure and datasets behind Claude. We are partnering with leading AI labs to push AI closer to its transformative potential.

About the Role

Models of the future will be able to train themselves on tasks they are not yet good at. We are interested in how far we can push the boundaries of self-directed learning. We are looking for Research Engineers and Research Scientists to push the frontier of post-training for large language models in a role that blends research and engineering: you will implement novel approaches and shape research directions.

What You Will Do:

  • Train and evaluate models on our proprietary RL environments to validate data quality, surface gaps in task coverage, and close the feedback loop between environment design and model capability.

  • Architect and optimize our RL training infrastructure, from training abstractions to distributed experiment management, using frameworks like Verl, OpenRLHF, or similar. Help scale our systems to handle increasingly complex research workflows.

  • Design, implement, and test training environments, evaluations, and methodologies for RL agents.

  • Profile and optimize training runs end-to-end, from data loading through reward computation, to maximize experiment throughput and shorten the research iteration cycle.

What We Are Looking For:

  • Experience running end-to-end LLM post-training pipelines

  • Proficiency in Python and PyTorch or JAX

  • Experience with at least one modern RL training framework

  • Experience building and operating ML infrastructure at scale

You may be a good fit if you also:

  • Have experience evaluating model outputs and building reward or evaluation signals

  • Stay current on post-training research and can translate papers into running code

  • Have strong opinions (loosely held) about how to structure RL training code for reproducibility and fast iteration

  • Can balance research exploration with engineering rigor

  • Have strong systems design and communication skills

Candidates don't need a PhD or extensive publications. Some of the best researchers have no formal ML training and gained their experience building industry products. We believe adaptability, combined with exceptional communication and collaboration skills, is the most important ingredient for successful startup research.

What We Offer:

  • Competitive cash and equity compensation (>90th percentile)

  • Ownership and autonomy in a fast-moving startup environment

  • Opportunity to work with top machine learning engineers

  • Health, vision, and dental benefits

  • 401(k) match

  • Visa sponsorship & relocation support available

We value diverse perspectives and experiences. If you're excited about this role but don't check every box, we still encourage you to apply.