About the Role
We're looking for a Founding Engineer, ML Inference with deep expertise in high-performance ML engineering. This is a highly technical, high-impact role focused on squeezing every drop of performance from generative media models.
You'll work across the inference stack, designing novel frameworks, optimizing inference performance, and shaping Reactor's competitive edge in ultra-low-latency, high-throughput environments.
What You'll Do
• Drive our frontier position on model performance for diffusion models
• Design and implement a high-performance in-house inference runtime
• Implement optimizations using torch.compile, custom CUDA kernels, and specialized inference frameworks
• Optimize neural network models through quantization, pruning, and architectural modifications
• Profile and benchmark model performance to identify computational bottlenecks
• Collaborate directly with model partner teams to integrate their models into our platform
Required Skills
• Bachelor's degree in Computer Science, Electrical Engineering, or a related technical field (or equivalent practical experience)
• Strong foundation in systems programming, with a track record of identifying and resolving bottlenecks
• Deep expertise in PyTorch, TensorRT, TransformerEngine, Nsight, and ONNX Runtime
• Experience with model compilation, quantization (INT8/FP16), and advanced serving architectures
• Working knowledge of NVIDIA GPU hardware
• Strong understanding of transformer architectures and modern ML optimization techniques
Logistics
In-person in San Francisco. We believe the best ideas come from being together.
Benefits
• Competitive SF salary and meaningful early equity
• Visa sponsorship and relocation support
• Generous health, dental, and vision coverage
If this sounds like you, we'd love to hear from you.
team@reactor.inc