Lead Software Engineer, Model Serving Platform

Sciforium

Software Engineering
San Francisco, CA, USA
Posted on Dec 8, 2025

Location

San Francisco

Employment Type

Full time

Location Type

On-site

Department

Engineering

Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. Backed by multi-million-dollar funding and direct sponsorship from AMD, with hands-on support from AMD engineers, the team is scaling rapidly to build the full stack powering frontier AI models and real-time applications.

We offer a fast-moving, collaborative environment where engineers have meaningful impact, learn quickly, and tackle deep technical challenges across the AI systems stack.

Role Overview

This is a rare chance to help architect and lead the development of Sciforium’s next-generation model serving platform, the high-performance engine that will bring a multimodal, highly efficient foundation model to market. As a senior technical leader, you’ll not only build core components yourself but also guide and mentor other engineers, influencing engineering direction, standards, and execution quality.

You will learn and shape the full AI stack: from GPU kernels and quantized execution paths to distributed serving, scheduling, and the APIs that power real-time AI applications. If you enjoy deep systems work, thrive on ownership, and want to lead engineers in building foundational AI infrastructure, this role puts you at the center of Sciforium’s mission and growth.

Key Responsibilities

  • Lead the technical direction of the model serving platform, owning architecture decisions and guiding engineering execution.

  • Build core serving components including execution runtimes, batching, scheduling, and distributed inference systems.

  • Develop high-performance C++ and CUDA/HIP modules, including custom GPU kernels and memory-optimized runtimes.

  • Collaborate with ML researchers to productionize new multimodal models and ensure low-latency, scalable inference.

  • Build Python APIs and services that expose model capabilities to downstream applications.

  • Mentor and support other engineers through code reviews, design discussions, and hands-on technical guidance.

  • Drive performance profiling, benchmarking, and observability across the inference stack.

  • Ensure high reliability and maintainability through testing, monitoring, and engineering best practices.

  • Troubleshoot and resolve complex issues across GPU, runtime, and service layers.

Must-Haves

  • Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent practical experience.

  • 5+ years of experience designing and building scalable, reliable backend systems or distributed infrastructure.

  • Strong understanding of LLM inference mechanics (prefill vs. decode, batching, KV cache).

  • Experience with Kubernetes or Ray and with containerization.

  • Strong proficiency in C++ and Python.

  • Strong debugging, profiling, and performance optimization skills at the system level.

  • Ability to collaborate closely with ML researchers and translate model or runtime requirements into production-grade systems.

  • Effective communication skills and the ability to lead technical discussions, mentor engineers, and drive engineering quality.

  • Comfortable working from the office and contributing to a fast-moving, high-ownership team culture.

Nice to Have

  • Experience with ML systems engineering, distributed GPU scheduling, and open-source inference engines such as vLLM, SGLang, or TensorRT-LLM.

  • Experience building large-scale ML/MLOps infrastructure.

  • Proficiency in CUDA or ROCm and experience with GPU profiling tools.

  • Experience at an AI/ML startup, research lab, or Big Tech infrastructure/ML team.

  • Familiarity with multimodal model architectures, raw-byte models, or efficient inference techniques.

  • Contributions to open-source ML or HPC infrastructure.

Why Join Us

  • Opportunity to build frontier-scale AI infrastructure powering next-generation LLMs and multimodal models.

  • Work with top-tier engineers and researchers across systems, GPUs, and ML frameworks.

  • Tackle high-impact performance and scalability challenges in training and inference.

  • Access state-of-the-art GPU clusters, datasets, and tooling.

  • Opportunity to publish, patent, and push the boundaries of modern AI.

  • Join a culture of innovation, ownership, and fast execution in a rapidly scaling AI organization.

Benefits include

  • Medical, dental, and vision insurance

  • 401(k) plan

  • Daily lunch, snacks, and beverages

  • Flexible time off

  • Competitive salary and equity

Equal opportunity

Sciforium is an equal opportunity employer. All applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability.