Join Assert Labs

We are a team of engineers and researchers with an ambitious mission: to move the world toward error-free software. We're doing this by building tools to autonomously test and analyze code. We are backed by Accel, Guillermo Rauch (Vercel), Thomas Wolf (Hugging Face), David Cramer (Sentry), Charlie Marsh (Astral), and a number of other open source developers, machine learning researchers, and entrepreneurs. If you'd like to learn more, read about our mission and values here.

Our Culture

  • We are assembling a lean group of generalists. While we welcome experience, we're primarily looking for creative, curious problem solvers with a keen interest in open-ended, technically challenging problems.
  • We want to work with people who care deeply about the company's mission and enjoy the work required to achieve it.
  • We work in person in our office in San Francisco. We believe that community is critical for learning, growth, and camaraderie, and we pay generously for relocation assistance.
  • We invest heavily in our team's growth. We sponsor conference attendance (NeurIPS, ICML, RustConf, PyCon, etc.), run an internal paper reading group, and encourage blogging and publishing research. If something will help you grow as an engineer or researcher, we actively want to support it.
  • We believe in compensating exceptional talent exceptionally well. We benchmark our compensation at or above the 90th percentile for each role, and when we're excited about a candidate, we work to make sure they're excited too.
  • We have both an academic and a competitive spirit. We enjoy thought exercises, math puzzles, intellectual inquiry, and vigorous debate. Outside of working hours we play board games like chess, Go, and Diplomacy.

Role: Machine Learning Engineer

Design, build, train, evaluate, deploy, and own models. We're pushing the boundaries of what's possible with large language models and deep learning in the domain of runtime analysis. Our work combines cutting-edge research with practical engineering to create models that can understand and analyze program behavior at scale. This role offers the opportunity to conduct novel research while building systems that directly impact software reliability and performance.

Responsibilities

  • Train and fine-tune models to analyze program execution, detect anomalies, and predict runtime characteristics
  • Develop rigorous evaluation frameworks to benchmark model performance across diverse runtime analysis tasks
  • Optimize model architectures and inference pipelines for real-time program analysis
  • Develop data-efficient training strategies through techniques like LoRA and quantization, while exploring synthetic data generation to overcome data limitations and optimize for compute constraints
  • Publish technical blog posts, papers, and open-source releases about machine learning for runtime analysis

Qualifications

  • Strong background in deep learning research and implementation, with experience in modern architectures and concepts (transformers, MoE, reasoning, etc.)
  • Track record of publishing results at top-tier conferences (NeurIPS, ICML, ICLR)
  • Expertise in Python and ML frameworks (PyTorch, JAX)
  • Experience with MLOps and model optimization (ONNX, TensorRT, serving infrastructure)
  • Familiarity with systems programming across the stack: Rust/C/C++, CUDA/GPU optimization, and kernel-level performance tuning
  • Experience with reinforcement learning, particularly in systems optimization and decision-making tasks

Interview Process

Our interview process is designed to be unusually thorough yet efficient, respecting your time as well as ours. For some candidates, we may accelerate the process or skip steps.

1. Application: Send your resume, a personal introduction, and evidence of exceptional ability to hiring@assertlabs.dev.

2. Technical Discussion: A conversation with our engineers about your background and a deep dive into one of your past projects.

3. Take-Home: A 4-hour assessment of your ability to handle a challenge and deliver work without supervision.

4. Coding: A 1-hour live coding interview.

5. System Design: A 1-hour system design interview.

6. Quantitative Reasoning: A live session testing your ability to think abstractly and reason about complex problems.

7. Final Onsite: Collaborate with our team on a real problem in our codebase or design process.
