Our Mission
To improve the quality of software by building the world's best tooling for reviewing code.
What's Broken Today
Review tools have not caught up with state-of-the-art programming practices, and automated reviews target the wrong end user.
Code review is not fun or effective
While tools for writing code have advanced dramatically, with powerful editors offering semantic navigation (LSP), intelligent tab-complete suggestions, inline diagnostics, and integrated debugging, code review hasn't fundamentally changed in over a decade. Reviewing code is tedious and requires you to reverse-engineer the author's intent to understand what's happening. At times it's faster to write code than to review it, and substantial review effort does not guarantee any improvement in the quality of a pull request.
Nonetheless, bypassing it altogether is not a viable option for teams that value accountability, maintainability, and visibility. If anything, reviewing code is more important than ever in a world where a significant portion of code is generated by language models.
Automated code review has fundamental limitations
One solution to the tedium of reviewing code and the second-order effects of code generation is automated review. If developers are using AI to generate code, why not use AI to review it? Many teams now rely on bots living in GitHub or their preferred code hosting platform. We feel that this is the wrong abstraction, or at least not the abstraction that interests us. In our view, the best AI-native tools add value as close as possible to the workflow they aim to improve.
First, it's unclear to us who the end user of a review bot is and whether the experience has been designed well for them. Presumably it's the author of the pull request? In that case, why would they not want that feedback earlier and less publicly? Our end user is the reviewer.
Second, it's extremely difficult for automated tools to replicate the depth of context and understanding that a professional engineer has about their own codebase. We should leverage this human expertise, not bypass it. We need tools that help engineers review code more quickly and effectively — tools that support and accelerate their work, rather than attempt to fully automate it.
Third, the value of code review goes far beyond simply catching bugs. Review is about building team awareness, making sure every change is seen and understood, establishing clear accountability for what goes into production, and fostering the sharing of information and best practices. We need tools that make review fun, fast, and effective.
What Needs to Exist
These are the broad areas of improvement that really excite us:
A Guided Review
We believe review can be dramatically easier with smarter guidance. Imagine opening a pull request and immediately seeing intelligent suggestions for where you, specifically, should focus your attention: what is most relevant to your expertise, and which parts might most benefit from your eyes. These recommendations would be informed by the current review state, your previous comments, and what has or hasn't been addressed. Context for every change, such as links to related pull requests, documentation, and similar code, could be presented right when you need it, eliminating time spent searching. The review interface should also order changes in a way that helps you make the most of limited time, and lets you quickly choose how deeply you want to review each part: skim, comment, or dig deep. We envision tools that help reviewers stay focused, reduce duplicate effort, and make fast, high-quality decisions, all while respecting their time and attention.
Better State Management
While some tools have introduced features like stacking and rebasing pull requests, countless core issues, for both authors and reviewers, remain unsolved. Handling changes that are pushed while a review is underway is still clunky, making it difficult to see what has actually changed after multiple updates. Reviewers are often left to repeatedly comb through the same pull request just to determine what's new. And partial reviews, where feedback could flow more quickly, are awkward or impossible, needlessly delaying the feedback loop. We envision a much better experience: reviewers should instantly see what's new after each update, quickly focus on exactly what needs attention, and easily track every reviewer's feedback. Partial reviews should be easy and natural, allowing for faster, more continuous collaboration. Tools should make stacking, rebasing, and merging feel seamless for both authors and reviewers, letting everyone stay focused on meaningful work instead of getting bogged down in awkward review mechanics.
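To make the "what's new since my last look" problem concrete, here is a minimal sketch, entirely our own illustration rather than a description of any existing tool: instead of re-diffing the latest revision against the base branch on every visit, diff two successive revisions of the pull request against each other. Git's range-diff command explores a related idea for rebased patch series; this toy ignores rebases entirely.

```python
# Toy "interdiff": show a returning reviewer only what changed between
# two successive revisions of a pull request, not the whole diff again.
# Assumes no rebase between rounds; real tooling must handle that too.
import difflib

base = ["def total(xs):", "    return sum(xs)"]
round_1 = ["def total(xs):", "    # TODO: validate input", "    return sum(xs)"]
round_2 = ["def total(xs):", "    if not xs:", "        return 0", "    return sum(xs)"]

# What reviewers are usually shown on every visit: the full diff against base.
full = "\n".join(difflib.unified_diff(base, round_2, "base", "round-2", lineterm=""))

# What a returning reviewer actually needs: only the delta between rounds.
incremental = "\n".join(
    difflib.unified_diff(round_1, round_2, "round-1", "round-2", lineterm="")
)
print(incremental)
```

Even this naive interdiff surfaces only the guard added in the second round, instead of replaying the entire change for the reviewer.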
A Richer Diff
The standard diff has remained largely unchanged for decades, but we believe it can be made much richer, clearer, and more interactive. Imagine a diff that intelligently visualizes moved or refactored code so reviewers immediately see where logic has shifted. Repetitive or boilerplate changes could be collapsed by default, letting you expand only what matters, so you aren't overwhelmed with noise. We see room for algorithmic advances too: semantic diff algorithms could group related changes. Less visual clutter means a reviewer's attention is always where it's most impactful. Interaction can also be dramatically improved. Reviewers should be able to edit code directly within the diff to make suggestions and leave comments that span multiple hunks or files. Imagine inline references to related documentation or the ability to invoke LSP-like actions without leaving the code review. A richer diff, both in visualization and interaction, can make reviews more intuitive and productive.
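To give a flavor of what move detection could look like, here is a deliberately naive sketch, again our own toy rather than a shipped algorithm: take the deletions and insertions a plain line diff reports, then pair up lines with identical text so a UI could render them as a single move instead of two unrelated edits.

```python
# Toy move detection: pair identical lines that a plain diff reports as a
# deletion in one place and an insertion in another, so a UI could render
# them as one "move" instead of two unrelated edits.
import difflib
from collections import defaultdict

def find_moved_lines(old, new):
    sm = difflib.SequenceMatcher(a=old, b=new)
    deleted, added = [], []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("delete", "replace"):
            deleted.extend(range(i1, i2))
        if op in ("insert", "replace"):
            added.extend(range(j1, j2))
    by_text = defaultdict(list)          # deleted lines indexed by content
    for i in deleted:
        by_text[old[i]].append(i)
    moves = []
    for j in added:                      # match insertions with identical text
        if by_text[new[j]]:
            moves.append((by_text[new[j]].pop(0), j, new[j]))
    return moves                         # (old line, new line, text) triples

old = ["import os", "def main():", "    run()"]
new = ["def main():", "    run()", "import os"]
print(find_moved_lines(old, new))        # [(0, 2, 'import os')]
```

A production version would match whole blocks, tolerate whitespace and small edits, and skip low-information lines like blanks and braces; the point is simply that the raw diff already contains the signal a richer visualization needs.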
There are countless other hard problems we're excited to tackle to improve the quality of code review; version control itself might need extension and improvement. Our goal is not to replace code review, but to evolve it.