Today I finished the DepBrief prototype. All 8 milestones. In one day.

I want to write down what that looked like before I forget, because it was a different kind of build session than I usually have.

The Plan

I'd spent the past week on research — competitive analysis, customer interviews, pricing models, architecture docs. By this morning I had 15 documents and zero lines of product code.

The build plan had 8 milestones:

  1. Project setup
  2. Dependency parser
  3. Version checker (npm registry + CVE detection)
  4. Upstream change fetcher (GitHub Releases API)
  5. AI summarizer (constrained to verified facts)
  6. Codebase impact scanner
  7. PR generator
  8. GitHub App shell

Estimated: 38–61 hours. I did it in about 4.

How

I run a pipeline of sub-agents. Each milestone gets its own Claude Code session with a focused prompt: here's the interface, here's the test list, here's the acceptance criteria. Go.

While one is running I'm either reviewing what the last one built or prepping the next prompt. Milestones 5, 6, and 7 ran back-to-back in under 15 minutes wall-clock time.

This is how I build now. I don't type code — I write specifications and review output. The quality bar matters: I read every file, run every test, catch every shortcut. But the raw throughput is unlike anything I've done before.

What Actually Got Built

The pipeline takes a repo path and runs it through 5 layers:

Layer 1 — Parse. Reads package.json and lockfile. Produces a typed snapshot of every dependency with declared ranges and resolved versions.
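A minimal sketch of what that typed snapshot could look like. The type and function names here are my illustration, not DepBrief's actual code, and the lockfile is abstracted to a simple name-to-version map:

```typescript
// Hypothetical shapes — DepBrief's real types may differ.
interface DependencySnapshot {
  name: string;
  declaredRange: string;   // e.g. "^5.4.0" from package.json
  resolvedVersion: string; // e.g. "5.4.2" from the lockfile
  dev: boolean;
}

function parseDependencies(
  pkg: {
    dependencies?: Record<string, string>;
    devDependencies?: Record<string, string>;
  },
  // name -> resolved version, pre-extracted from the lockfile
  lockResolved: Record<string, string>,
): DependencySnapshot[] {
  const collect = (deps: Record<string, string> | undefined, dev: boolean) =>
    Object.entries(deps ?? {}).map(([name, declaredRange]) => ({
      name,
      declaredRange,
      resolvedVersion: lockResolved[name] ?? "unknown",
      dev,
    }));
  return [...collect(pkg.dependencies, false), ...collect(pkg.devDependencies, true)];
}
```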

Layer 2 — Check. Hits the npm registry for latest versions. Classifies each outdated package as patch/minor/major. Queries the GitHub Advisory Database for CVEs.
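The patch/minor/major classification is a plain semver comparison. A sketch, assuming simple x.y.z versions (no prerelease tags) and that the latest version has already been fetched from the registry's metadata for the package:

```typescript
type BumpKind = "patch" | "minor" | "major" | "current";

// Classify the gap between the installed version and the registry's latest.
// Assumes plain x.y.z versions; prerelease tags would need a real semver lib.
function classifyBump(current: string, latest: string): BumpKind {
  const [cMaj, cMin, cPat] = current.split(".").map(Number);
  const [lMaj, lMin, lPat] = latest.split(".").map(Number);
  if (lMaj > cMaj) return "major";
  if (lMaj === cMaj && lMin > cMin) return "minor";
  if (lMaj === cMaj && lMin === cMin && lPat > cPat) return "patch";
  return "current";
}
```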

Layer 3 — Fetch. For each outdated package, finds the GitHub source repo and fetches release notes between the current and latest versions via the Releases API.
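The interesting part is the windowing: keep only releases strictly newer than the installed version and no newer than the latest. A sketch of that filter, assuming releases shaped like the GitHub Releases API response (`tag_name`, `body`) and simple `vX.Y.Z` tags:

```typescript
// Shape matches the fields this sketch uses from the GitHub Releases API.
interface Release {
  tag_name: string;
  body: string;
}

// Keep releases with current < version <= latest. Assumes vX.Y.Z tags;
// the real fetcher would also have to handle odd tagging schemes.
function releasesBetween(releases: Release[], current: string, latest: string): Release[] {
  const ver = (tag: string) => tag.replace(/^v/, "").split(".").map(Number);
  const gt = (a: number[], b: number[]) =>
    a[0] !== b[0] ? a[0] > b[0] : a[1] !== b[1] ? a[1] > b[1] : a[2] > b[2];
  const cur = ver(current);
  const lat = ver(latest);
  return releases.filter((r) => {
    const v = ver(r.tag_name);
    return gt(v, cur) && !gt(v, lat);
  });
}
```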

Layer 4 — Summarize. Sends release notes to Claude with a structured prompt that extracts breaking changes, security fixes, deprecations, and new features. The key constraint: the AI categorizes and summarizes, but cannot invent file paths, version numbers, or counts. Those come from the scanner.
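One way to enforce that constraint is to validate the model's output against a fixed schema and degrade anything off-schema instead of trusting it. The names below are my sketch, not DepBrief's real types:

```typescript
const RISK_LEVELS = ["low", "medium", "high", "critical", "unknown"] as const;
type RiskLevel = (typeof RISK_LEVELS)[number];

interface UpdateSummary {
  breakingChanges: string[];
  securityFixes: string[];
  deprecations: string[];
  newFeatures: string[];
  riskLevel: RiskLevel;
}

// Coerce untrusted model output into the schema: unknown fields are
// dropped, malformed lists become empty, and an unrecognized riskLevel
// degrades to "unknown" rather than being passed through.
function validateSummary(raw: any): UpdateSummary {
  const strings = (v: unknown) =>
    Array.isArray(v) ? v.filter((x): x is string => typeof x === "string") : [];
  return {
    breakingChanges: strings(raw?.breakingChanges),
    securityFixes: strings(raw?.securityFixes),
    deprecations: strings(raw?.deprecations),
    newFeatures: strings(raw?.newFeatures),
    riskLevel: RISK_LEVELS.includes(raw?.riskLevel) ? raw.riskLevel : "unknown",
  };
}
```

Note the model never gets a field for file paths or counts — the schema simply has nowhere to put an invented one.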

Layer 5 — Scan. Walks the repo with regex-based import detection. For every file that imports an outdated package, records the path, line number, which symbols are imported, and file context (client component, API route, test, etc.). Cross-references against the AI-extracted breaking changes to produce high/medium/low confidence match scores.
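The core of the import detection can be sketched in a few lines. This is a simplified version of the idea — one static-import pattern, no `require()` or dynamic imports, and no regex-escaping of the package name:

```typescript
interface ImportSite {
  line: number;      // 1-based line number in the file
  symbols: string[]; // imported names, or the default-import name
}

// Find lines importing `pkg` via `import { a, b } from "pkg"` or
// `import x from "pkg"`. A sketch — the real scanner also records
// file context (client component, API route, test, etc.).
function findImports(source: string, pkg: string): ImportSite[] {
  const sites: ImportSite[] = [];
  const re = new RegExp(
    `import\\s+(?:\\{([^}]*)\\}|(\\w+))\\s+from\\s+["']${pkg}["']`,
  );
  source.split("\n").forEach((text, i) => {
    const m = re.exec(text);
    if (m) {
      const symbols = m[1]
        ? m[1].split(",").map((s) => s.trim()).filter(Boolean)
        : [m[2]];
      sites.push({ line: i + 1, symbols });
    }
  });
  return sites;
}
```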

The output is a FactSet — a typed, sourced data structure where every claim traces back to a file path, registry URL, or advisory ID.
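"Typed, sourced" could look something like this — every fact carries a provenance tag. This is my guess at the shape, purely illustrative:

```typescript
// Hypothetical provenance types; DepBrief's actual FactSet may differ.
type Source =
  | { kind: "file"; path: string; line: number }
  | { kind: "registry"; url: string }
  | { kind: "advisory"; id: string };

interface Fact<T> {
  value: T;
  source: Source; // every claim traces back to exactly one source
}

interface FactSet {
  dependencies: Fact<{ name: string; current: string; latest: string }>[];
  advisories: Fact<{ package: string; severity: string }>[];
  importSites: Fact<{ package: string; file: string; line: number }>[];
}

// Small constructor so a fact can never be built without a source.
function fact<T>(value: T, source: Source): Fact<T> {
  return { value, source };
}
```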

On top of that: a PR generator that takes the FactSet and produces a DepBrief-format PR description. Every cited file path gets verified to exist before it's emitted. And a GitHub App shell with webhook handling and a manual trigger endpoint.
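The verify-before-emit guard is a one-line filter in spirit. A sketch with the existence check injectable for testing (by default it hits the disk):

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Drop any cited path that does not exist under the repo root, so the
// PR description can never quote a file that isn't really there.
// Illustrative — function name and signature are my assumptions.
function verifiedPaths(
  paths: string[],
  repoRoot: string,
  exists: (p: string) => boolean = existsSync,
): string[] {
  return paths.filter((p) => exists(join(repoRoot, p)));
}
```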

What the Demo Showed

I ran DepBrief against its own codebase:

  • 9 dependencies parsed
  • 3 outdated (@types/node, typescript, vitest)
  • 14 vitest import sites across 14 test files
  • AI summarization: all low risk (correct)
  • Impact scan: 0 breaking change matches for vitest (correct — no breaking changes in that update)
  • Full pipeline runtime: ~25 seconds

Everything accurate. Every file path real. Every count verified.

On a larger repo (Playa, ~25 deps), the pipeline hit a performance wall — sequential Ollama calls at ~2–3s each were too slow. Fixed the same day: parallel summarization in batches of 4, plus a --skip-ai flag for fast mode.

What I Found

Two real issues:

riskLevel inflation. When the upstream fetcher couldn't find release notes for a package (no detectable GitHub repo), the AI summarizer was defaulting to critical instead of unknown. Fixed: summarizer now returns unknown when it has no source material, and labels its confidence explicitly.

Sequential AI is slow. Parallelizing summarization to 4 concurrent calls cut wall-clock time by ~4x on large repos. Should have been parallel from day one.
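The batching itself is a generic helper — roughly this shape, with the name and signature mine:

```typescript
// Run `fn` over `items`, at most `batch` in flight at a time.
// Within a batch the calls run concurrently; batches run sequentially,
// so concurrency never exceeds the batch size.
async function inBatches<T, R>(
  items: T[],
  batch: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const out: R[] = [];
  for (let i = 0; i < items.length; i += batch) {
    const chunk = items.slice(i, i + batch);
    out.push(...(await Promise.all(chunk.map(fn))));
  }
  return out;
}
```

With the summarizer as `fn` and `batch = 4`, a 25-package repo goes from 25 sequential model calls to 7 batched rounds.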

What's Next

The pipeline works. The GitHub App shell is built but not registered. The landing page exists but isn't deployed.

The remaining items are external-facing: register the GitHub App, buy the domain, deploy the landing page, find the first 10 beta users. That's the next sprint.

The build part — the part I control — is done.


DepBrief is a smart dependency update tool I'm building. It tells you exactly which files in your codebase are affected by a dependency change, whether you were vulnerable, and why the update matters to you specifically. If you want early access, join the waitlist.
