React Doctor is a developer tool for scanning React codebases and reporting a health score with diagnostics. The repository describes it as a way to catch problematic React output from coding agents, but its scope is broader than agent-generated code: it can be run directly against a project, added to GitHub Actions, or wired into existing lint workflows.
The project’s README positions the tool around React, Next.js, Vite, and React Native projects. It focuses on practical categories such as state and effects, performance, architecture, security, accessibility, and dead code, while also offering configuration paths for teams that need to suppress specific rules or scan only part of a codebase.
Why this kind of tool matters
React projects can accumulate issues that are not obvious from a successful build. A component can render, pass TypeScript checks, and still contain fragile effect logic, unnecessary derived state, risky patterns, or accessibility problems.
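The derived-state case is a good illustration. The sketch below uses tiny stand-ins for React's `useState` and `useEffect` (they are not React, just enough scaffolding to run standalone), and the component names are made up. The anti-pattern stores a value in state and syncs it with an effect, which type-checks fine but renders a stale value first; the fix computes the value during render.

```typescript
// Minimal stand-ins for React hooks so this sketch runs without a renderer.
// These are NOT React -- just enough to show the shape of the problem.
type Setter<T> = (v: T) => void;

const hookState: unknown[] = [];
let cursor = 0;

function useState<T>(initial: T): [T, Setter<T>] {
  const i = cursor++;
  if (hookState.length <= i) hookState.push(initial);
  return [hookState[i] as T, (v) => { hookState[i] = v; }];
}

function useEffect(fn: () => void): void {
  fn(); // in this sketch, effects run after the value is read, like React
}

// Anti-pattern: fullName is derived state kept in sync by an effect.
// It compiles and "works", but the first render shows an empty name,
// and every change costs an extra render cycle in real React.
function FragileGreeting(first: string, last: string): string {
  cursor = 0;
  const [fullName, setFullName] = useState("");
  useEffect(() => setFullName(`${first} ${last}`));
  return `Hello, ${fullName}`;
}

// Fix: derive the value during render -- no state, no effect, no lag.
function SolidGreeting(first: string, last: string): string {
  return `Hello, ${first} ${last}`;
}
```

Both versions pass a build and a type check; only the second is sound, which is exactly the class of issue a successful compile cannot surface.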
That is the gap React Doctor is trying to occupy. Instead of being just another formatter or syntax checker, it frames the result as a project health signal. The score is not a replacement for review, tests, or product judgment, but it can create a shared checklist for teams that want to catch recurring React problems earlier.
What the repository describes
The README describes a CLI that scans a codebase and returns a score from 0 to 100, along with diagnostics. It also documents score labels: 75 and above is considered great, 50 to 74 needs work, and below 50 is critical.
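The documented bands are simple enough to state as a function. The thresholds below come from the README's labels; the label strings are paraphrases for illustration, not the tool's exact output.

```typescript
// Score bands as documented: >=75 great, 50-74 needs work, <50 critical.
// Label strings are paraphrases, not React Doctor's literal output.
type ScoreLabel = "great" | "needs work" | "critical";

function labelForScore(score: number): ScoreLabel {
  if (score >= 75) return "great";
  if (score >= 50) return "needs work";
  return "critical";
}
```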
The source material highlights several usage modes:
- Running the scanner directly from a project root.
- Installing guidance for coding agents so they can follow React best practices.
- Using the included composite GitHub Action in pull request or push workflows.
- Posting findings as pull request comments when a GitHub token is configured.
- Exposing a score output for later workflow steps.
The documented rule areas include state and effects, performance, architecture, security, accessibility, and dead code. The README also says rules can adapt based on the framework and React version detected in the project.
Where it fits best
React Doctor looks most useful for teams that already have React code in production and want an additional automated review layer focused on code quality. It may be especially relevant for projects where AI coding assistants or automated agents produce a meaningful share of changes.
It also fits solo developers who want a second pass before shipping, particularly in React projects that have grown beyond a few components. The GitHub Actions integration makes it suitable for repositories where pull requests are the normal review boundary.
For teams that already use ESLint or oxlint, React Doctor may be more interesting as a complementary ruleset than as a replacement. The README describes both an oxlint plugin and an ESLint plugin, which suggests the project is designed to meet developers inside existing workflows.
Adoption notes
The simplest adoption path is likely to start with a local scan, review the diagnostics, and decide whether the output is useful enough to add to continuous integration. A team should treat the first score as a baseline rather than a verdict on engineering competence.
If the first run produces noisy findings, the configuration options matter. The README documents ignore rules, ignored files, overrides for specific file patterns, package.json configuration, and respect for common ignore files. That makes it possible to start broad, then narrow exceptions around generated files or intentional patterns.
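To make the narrowing strategy concrete, here is a hypothetical config shape written as a TypeScript object. The key names and rule names below are invented for illustration; the actual keys come from the project's README and will likely differ.

```typescript
// HYPOTHETICAL config sketch -- key names and rule names are made up to
// illustrate "start broad, then narrow exceptions", not copied from the tool.
const reactDoctorConfig = {
  ignoreRules: ["some-noisy-rule"],       // suppress one rule globally (name invented)
  ignoreFiles: ["src/generated/**"],      // skip generated files entirely
  overrides: [
    // carve out intentional patterns in legacy code without muting them everywhere
    { files: ["legacy/**"], ignoreRules: ["another-rule"] },
  ],
};
```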
For CI, a cautious rollout would avoid failing builds immediately. The documented fail-on option allows teams to decide whether errors, warnings, or no diagnostics should fail a workflow. That matters because newly introduced rules can lower scores across releases.
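The gating logic a fail-on option implies can be sketched as follows. This is an illustration of the policy choice, not React Doctor's implementation, and the severity and option names are assumptions.

```typescript
// Sketch of fail-on semantics: severity and option names are assumptions.
// "warning" fails on any diagnostic; "error" only on errors; "never" always passes.
type Severity = "error" | "warning";
type FailOn = "error" | "warning" | "never";

function shouldFailBuild(diagnostics: Severity[], failOn: FailOn): boolean {
  if (failOn === "never") return false;
  if (failOn === "warning") return diagnostics.length > 0;
  return diagnostics.some((d) => d === "error");
}
```

A cautious rollout might start at "never" to collect a baseline, then tighten to "error" once the findings have been triaged.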
Caveats and limits
A health score is useful only when the team understands what it measures. React Doctor’s scoring formula counts unique rules triggered, not the total number of violations, so the number should be read as a summary of how many distinct issue types are present rather than a raw defect count.
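An illustrative reading of that distinction: in the sketch below, three violations collapse to two distinct rules, and only the distinct count would feed the score. The rule names and data are invented, and the actual deduction formula is not documented here.

```typescript
// Illustrative only: rule names are invented, and this is not the tool's
// actual formula -- it just shows unique-rule counting vs raw violation counting.
function uniqueRulesTriggered(violations: { rule: string }[]): number {
  return new Set(violations.map((v) => v.rule)).size;
}

const findings = [
  { rule: "no-derived-state" },
  { rule: "no-derived-state" }, // same rule firing twice
  { rule: "missing-alt-text" },
];
// Three violations, but only two distinct rules count toward the score.
```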
The repository also notes that scores may decrease as new rules are added. That is sensible for a tool that evolves, but it means teams that depend on stable CI thresholds may want to pin versions instead of always tracking the latest release.
There is also the usual limitation of static analysis: it can flag patterns, not prove product quality. React Doctor can help with code hygiene and recurring React mistakes, but it cannot know the full context of a design decision, user workflow, or performance tradeoff.
Editorial verdict
React Doctor is an interesting addition to the React tooling stack because it packages React-specific diagnostics as a visible health signal rather than burying everything in generic lint output. The project appears particularly timely for teams using coding agents, where a fast automated review layer can catch repeated anti-patterns before humans spend time on them.
The strongest case for it is not that every score should become a hard gate. It is that a repeatable scan can make code review more focused: reviewers can spend less energy on predictable React pitfalls and more on architecture, product behavior, and maintainability.
Primary link
Learn more at: https://github.com/millionco/react-doctor