Reactor

Founding Engineer, ML Infrastructure

San Francisco, CA • Full-time

About the Role

We're looking for a Founding Infrastructure Engineer with deep expertise in building and scaling cloud-native systems. This is a highly technical, high-impact role focused on designing and evolving the foundation that powers our AI platform.

You'll work across the entire infrastructure stack, from GPU orchestration to networking and observability, ensuring our systems are reliable, performant, and cost-efficient. You'll shape the architecture that supports large-scale AI workloads, set best practices for how we operate, and establish the infrastructure patterns that will carry Reactor forward over the next 1, 2, and 5 years.

We want to build a world-class infrastructure platform for serving AI at scale, and you'll own this critical part of our stack.

What You'll Do

  • Design and scale the infrastructure for real-time AI inference, delivering ultra-low latency, high throughput, and cost efficiency.
  • Orchestrate GPUs and manage multi-tenant workloads with Kubernetes, service mesh, and global traffic routing.
  • Build and operate core systems including Terraform-based infrastructure, Kubernetes, observability (Prometheus, Grafana), distributed storage, and networking.
  • Implement cross-cutting capabilities: authentication, rate limiting, monitoring, alerting, and telemetry for inference systems.
  • Define the roadmap for infrastructure growth, making tradeoffs across performance, reliability, and cost.
  • Partner closely with ML engineers to productionize and optimize model serving pipelines.

Required Skills

  • Proven experience in infrastructure engineering, DevOps, or ML platform engineering.
  • Deep expertise in Kubernetes at scale, GPU orchestration, service mesh, and cloud-native automation.
  • Experience designing and operating global load balancing and high-availability traffic routing.
  • Fluency in infrastructure-as-code, modern CI/CD, and observability stacks.
  • Strong systems background: distributed systems, performance tuning, caching, concurrency, and cost optimization.
  • Hands-on experience with ML inference serving frameworks (e.g., Triton, ONNX Runtime, vLLM, TensorRT).
  • Solid understanding of cloud security and data management strategies for inference workloads.
  • Startup mindset: thrive in fast-paced environments, embrace ambiguity, and own projects end to end.

Logistics

We work in person in San Francisco. We believe the best ideas and work come from being together.

Benefits

  • Competitive San Francisco salary and meaningful early equity.
  • Visa sponsorship: we're committed to working through the process together with the right candidates, and if you're currently outside the US, we'll support your relocation.
  • Generous health, dental, and vision coverage, plus relocation support as needed.

If this sounds like you, we'd love to hear from you.

team@reactor.inc
© 2025 Reactor Technologies, Inc.
All rights reserved.