Dependabot tells you "react is outdated." DepBrief tells you "here are the 18 files in your repo that import React, here's what changed in the new version, and here's what it means for your Next.js app specifically."

That's the difference. And that's why I built it.

The Problem

Every developer knows this workflow:

  1. Dependabot opens a PR: "Bump express from 4.18.2 to 4.19.0"
  2. You look at the PR. It changes one line in package.json.
  3. You think: "Is this safe? What changed? Does it affect me?"
  4. You open the changelog. Skim. Maybe click through to a GitHub compare link.
  5. You merge it and hope for the best.

This is broken. Not because Dependabot is bad — it does exactly what it promises. But "your dependency is outdated" is the least useful thing you can tell a developer. What I actually need to know:

  • What changed in the new version, in plain language
  • Which files in my codebase import or use this dependency
  • Whether I was exposed to any security vulnerability that was patched
  • What I should do — merge blindly, review specific files, or test a particular code path

No tool gives you all of that. So I built one.

The Pipeline

DepBrief runs a 5-layer pipeline for every dependency update:

Layer 1: Parse

Read package.json (or requirements.txt, Cargo.toml, etc.). Extract every dependency with its current and available versions.

// Nothing fancy here — just structured extraction
interface DependencyDelta {
  name: string;
  current: string;
  latest: string;
  type: 'major' | 'minor' | 'patch';
}

Layer 2: Version Check

For each dependency, fetch the version history from the registry. Compute what changed between your pinned version and the latest.

This layer determines whether the update is a patch (probably safe), minor (check the changelog), or major (read carefully).
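The classification itself is simple semver arithmetic. Here's a minimal sketch, assuming plain `major.minor.patch` version strings (no pre-release tags or range operators — the real checker has to handle those too):

```typescript
// Classify a version delta as major / minor / patch.
// Assumes clean semver strings like '4.18.2'.
type BumpType = 'major' | 'minor' | 'patch' | 'none';

function bumpType(current: string, latest: string): BumpType {
  const [cMaj, cMin, cPat] = current.split('.').map(Number);
  const [lMaj, lMin, lPat] = latest.split('.').map(Number);
  if (lMaj > cMaj) return 'major';  // read carefully
  if (lMin > cMin) return 'minor';  // check the changelog
  if (lPat > cPat) return 'patch';  // probably safe
  return 'none';
}
```

So `bumpType('4.18.2', '4.19.0')` comes back `'minor'`, which routes the update into the changelog-check path.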

Layer 3: Upstream Changes

Pull the changelog, release notes, and commit history between the two versions. This is the raw material — unprocessed, straight from the source.

For npm packages, this means hitting the GitHub API for the compare view. For Python, it's PyPI release metadata plus the repo's CHANGELOG.
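For the GitHub side, the compare endpoint takes two refs. A sketch of the URL construction, assuming the upstream repo tags releases as `v<version>` — a common convention, but not universal, so the real fetcher has to resolve tags rather than guess them:

```typescript
// Build the GitHub API compare URL between two release tags.
// Assumes 'v' + version tag naming (e.g. v4.18.2), which many
// but not all repos follow.
function compareUrl(repo: string, current: string, latest: string): string {
  return `https://api.github.com/repos/${repo}/compare/v${current}...v${latest}`;
}

// compareUrl('expressjs/express', '4.18.2', '4.19.0')
// → .../repos/expressjs/express/compare/v4.18.2...v4.19.0
```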

Layer 4: AI Summary

Here's where it gets interesting. An LLM reads the raw upstream changes and produces a structured summary:

  • What was added
  • What was fixed
  • What was deprecated
  • What broke (if major version)
  • Security patches with CVE references

But — and this is critical — the AI summarizes. It does not invent. Every claim in the summary maps back to a specific commit or release note. If the AI can't find evidence for something, it doesn't include it.

I'll talk more about this verification architecture below.
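One way to make "no evidence, no claim" structural rather than aspirational is to bake evidence into the schema. The field names below are illustrative, not DepBrief's actual types; the point is that a claim physically cannot exist without its source:

```typescript
// Illustrative shape for the structured summary. Every claim
// carries the evidence it maps back to — a commit SHA or a
// release-note URL — so unevidenced claims have nowhere to live.
interface SummaryClaim {
  text: string;      // the plain-language statement
  evidence: string;  // commit SHA or release-note URL
}

interface UpdateSummary {
  added: SummaryClaim[];
  fixed: SummaryClaim[];
  deprecated: SummaryClaim[];
  breaking: SummaryClaim[];  // populated only for major bumps
  security: { claim: SummaryClaim; cve: string }[];
}
```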

Layer 5: Codebase Impact Scan

This is the layer that makes DepBrief different from everything else.

It scans your actual repository for every import, require, and usage of the dependency being updated. It finds the specific files and line numbers where you use the things that changed.

express 4.18.2 → 4.19.0
  Changed: res.redirect() now validates URLs by default

  Your codebase:
    src/api/auth.ts:42      — res.redirect(returnUrl)
    src/api/callback.ts:18  — res.redirect(req.query.next)
    src/middleware/oauth.ts:67 — res.redirect('/dashboard')

  Impact: Medium — 2 of 3 redirect calls use dynamic URLs.
  Review src/api/auth.ts and src/api/callback.ts.

Same dependency update, completely different briefing depending on your codebase. A Next.js app that never calls res.redirect directly gets "Low impact — no direct usage found." An Express API with 30 redirect calls gets a detailed file list.
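At its core the scan is a line-by-line search for imports and requires of the package in question. This is a simplified sketch — the real scanner also follows re-exports and tracks symbol-level usage, and a production version would need to escape regex metacharacters in package names:

```typescript
// Find lines in a file that import or require a given package.
// Simplified: catches direct `import ... from 'pkg'` and
// `require('pkg')` only, not re-exports or deep imports.
import * as fs from 'fs';

interface UsageHit {
  file: string;
  line: number; // 1-indexed, for the briefing's file:line output
  text: string;
}

function scanFile(path: string, pkg: string): UsageHit[] {
  const pattern = new RegExp(`(from\\s+['"]${pkg}['"]|require\\(['"]${pkg}['"]\\))`);
  return fs
    .readFileSync(path, 'utf8')
    .split('\n')
    .flatMap((text, i) =>
      pattern.test(text) ? [{ file: path, line: i + 1, text: text.trim() }] : []
    );
}
```

Run that over every source file in the repo and you have the raw file:line list that Layer 5 cross-references against the changed APIs.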

Verification Architecture

This is the part I'm most careful about.

AI is good at summarizing changelogs. It's also good at hallucinating file paths that don't exist and inventing CVEs that were never issued. So DepBrief has a hard rule:

Every factual claim is verified before it reaches the developer.

The pipeline works in two phases:

  1. AI generates a draft summary from upstream data
  2. Deterministic verification checks every claim against ground truth

Specifically:

  • File paths: Every path mentioned in the impact report is verified to exist in the repo via fs.existsSync. If the AI says src/utils/helper.ts:15 uses the affected API, that file must exist and line 15 must contain a relevant reference.
  • Version numbers: Checked against the registry. The AI doesn't get to invent versions.
  • CVE references: Cross-referenced against the advisory database. No phantom vulnerabilities.
  • Line numbers: Validated by actually reading the file and confirming the import or usage is there.

If verification fails, the claim is dropped. Not softened, not flagged — dropped. I'd rather miss an edge case than tell a developer to review a file that doesn't exist.
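The file and line checks reduce to a filter. A minimal sketch of the drop-on-failure rule, assuming each claim names a file, a 1-indexed line, and the symbol it says appears there (the field names are mine, not DepBrief's):

```typescript
// Keep only claims that survive ground-truth checks: the file
// exists on disk and the named line actually references the symbol.
// Failures are dropped, not flagged.
import * as fs from 'fs';

interface FileClaim {
  path: string;
  line: number;   // 1-indexed
  symbol: string; // e.g. 'res.redirect'
}

function verifyClaims(claims: FileClaim[]): FileClaim[] {
  return claims.filter((c) => {
    if (!fs.existsSync(c.path)) return false;
    const lines = fs.readFileSync(c.path, 'utf8').split('\n');
    const text = lines[c.line - 1];
    return text !== undefined && text.includes(c.symbol);
  });
}
```

Because it's a filter rather than a score, there's no threshold to tune and no "probably fine" middle ground: a claim either checks out against the repo or it never reaches the developer.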

This is the same philosophy I use in Owen's decision engine: deterministic verification over probabilistic trust. AI is a tool for processing unstructured text. It is not a source of truth.

The Personalized Part

Here's the thing that surprised me most during development: the same dependency update produces wildly different briefings for different projects.

lodash 4.17.20 → 4.17.21:

  • For a Next.js app using 3 lodash functions through a utility wrapper: "Low impact. Your usage is isolated to src/utils/data.ts. The patched function (template) is not in your import graph."
  • For an Express API with lodash used across 40 files: "Medium impact. The template function vulnerability affects src/email/renderer.ts:23 where you pass user-supplied template strings."
  • For a CLI tool that doesn't use lodash at all (it's a transitive dep): "No direct impact. This is a transitive dependency via ink. No action required."

Three developers. Same Dependabot PR. Three completely different answers to "should I care?"

That's what "personalized" means. Not marketing personalization. Codebase-specific, file-level, usage-aware analysis.

Current Status

All 8 build milestones are complete:

  1. Dependency parser (npm)
  2. Version checker (npm registry + GitHub advisory database)
  3. Upstream change fetcher (GitHub Releases API + changelog fallback)
  4. AI summarizer (structured extraction, constrained to verified facts)
  5. Codebase impact scanner (import detection, file + line + symbol level)
  6. PR generator (verified, cited, section-based output)
  7. GitHub App shell (webhook handler, JWT auth, manual trigger endpoint)
  8. End-to-end demo (pipeline verified against real repos)

The core pipeline works end-to-end: point it at a repo and it produces a briefing for every outdated dependency.

The GitHub App shell is built — the OAuth flow, webhook handling, and PR comment integration are all scaffolded. What's left is real GitHub App registration and connecting it to the hosted pipeline.

The landing page is in progress. I want people to understand what this does before they install it.

What's Next

Beta users. I need real repos with real dependency debt to test against. The synthetic tests pass, but I want to see how the impact scanner handles monorepos, workspaces, and the weird import patterns that exist in production codebases.

GitHub App registration. The shell is built. Registering the real app and connecting it to a hosted pipeline is the next infrastructure milestone.

Landing page launch. Explaining this tool is the hardest part. "Smarter dependency updates" undersells it. "AI-powered dependency analysis" oversells it. The truth is somewhere in the middle: deterministic scanning plus AI summarization, verified before delivery.

Why I Built This

I maintain multiple repos. I was mass-merging Dependabot PRs without reading them. Everyone does this. It's rational — the cost of reading every changelog exceeds the cost of occasionally breaking something.

But that math changes if reading the changelog takes 10 seconds instead of 10 minutes. If someone hands you a one-paragraph summary with the specific files in your repo that are affected, suddenly it's worth reading.

That's DepBrief. Not a replacement for Dependabot — a layer on top that makes its output actually useful.
