Senior HPC & GPU Infrastructure Engineer
Sciforium
Location
San Francisco
Employment Type
Full time
Location Type
On-site
Department
Engineering
Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. Backed by multi-million-dollar funding and direct sponsorship from AMD, with hands-on support from AMD engineers, the team is scaling rapidly to build the full stack powering frontier AI models and real-time applications.
We offer a fast-moving, collaborative environment where engineers have meaningful impact, learn quickly, and tackle deep technical challenges across the AI systems stack.
Role Overview
We are seeking a Senior HPC & GPU Infrastructure Engineer to take full ownership of the health, reliability, and performance of our GPU compute cluster. You will be the primary custodian of our high-density accelerator environment and the linchpin between hardware operations, distributed systems, and machine learning workflows. This role spans everything from hands-on Linux systems engineering and GPU driver bring-up to maintaining the ML software stack (CUDA/ROCm, PyTorch, JAX, vLLM). If you love squeezing every bit of performance out of hardware, enjoy debugging GPUs at scale, and want to build world-class AI infrastructure, this role is for you.
Key Responsibilities
1. System Health & Reliability (SRE)
On-Call Response: Act as the primary responder for system outages, GPU failures, node crashes, and cluster-wide incidents. Minimize downtime by resolving issues rapidly.
Cluster Monitoring: Implement and maintain monitoring for GPU health, thermal behavior, PCIe/NVLink topology issues, memory errors, and overall system load.
Vendor Liaison: Coordinate with data center staff, hardware vendors, and on-site technicians for repairs, RMA processing, and physical maintenance of the cluster.
2. Linux & Network Administration
OS Management: Install, patch, and maintain Linux distributions (Ubuntu / CentOS / RHEL). Ensure consistent configuration, kernel tuning, and automation for large node fleets.
Security & Access Controls: Configure VPNs, iptables/firewalls, SSH hardening, and network routing to secure our compute infrastructure.
Identity & Storage Management: Manage LDAP/FreeIPA/AD for user identity, and administer distributed file systems such as NFS, GPFS, or Lustre.
3. GPU & ML Stack Engineering
Deployment & Bring-Up: Lead deployment of new GPU nodes, including BIOS configuration, NUMA tuning, GPU topology validation, and cluster integration.
Driver & Kernel Management: Build and optimize kernel modules, maintain GPU drivers and runtime stacks for both NVIDIA (CUDA) and AMD (ROCm).
Software Stack Maintenance: Maintain and optimize ML frameworks and libraries: PyTorch, JAX, CUDA toolkit, cuDNN, ROCm, NCCL, and supporting runtime systems.
Advanced Debugging: Troubleshoot complex interactions involving GPUs, compilers, ML frameworks, and distributed training runtimes (e.g., vLLM compilation failures, CUDA memory leaks, ROCm kernel crashes).
Must-Haves
5+ years of experience in HPC, GPU cluster operations, Linux systems engineering, or similar roles.
Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field.
Strong expertise with NVIDIA (H100/B200) or AMD (MI325X/MI355X) GPUs, including driver and kernel-level debugging.
Deep understanding of Linux internals, kernel modules, hardware bring-up, and systems performance tuning.
Experience with network security, including VPNs, iptables/firewalld, SSH, and identity management (LDAP/FreeIPA/AD).
Proficiency in Bash and Python for scripting, automation, and workflow tooling.
Familiarity with ML software stacks: CUDA toolkit, cuDNN, NCCL, ROCm, JAX/PyTorch runtime behavior.
Deep debugging experience with NVLink/NVSwitch fabrics and RDMA networking.
Nice-to-Haves
Experience with job schedulers such as Slurm, Kubernetes, or Run:AI.
Exposure to vLLM, model serving optimizations, or inference systems.
Hands-on experience with configuration management tools (Ansible, SaltStack, Terraform).
Previous experience supporting ML research teams in a startup or research-heavy environment.
Why Join Us
Opportunity to build frontier-scale AI infrastructure powering next-generation LLMs and multimodal models.
Work with top-tier engineers and researchers across systems, GPUs, and ML frameworks.
Tackle high-impact performance and scalability challenges in training and inference.
Access state-of-the-art GPU clusters, datasets, and tooling.
Opportunity to publish, patent, and push the boundaries of modern AI.
Join a culture of innovation, ownership, and fast execution in a rapidly scaling AI organization.
Benefits include
Medical, dental, and vision insurance
401(k) plan
Daily lunch, snacks, and beverages
Flexible time off
Competitive salary and equity
Equal opportunity
Sciforium is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.
