What is ShipLens?

ShipLens is an engineering intelligence platform built for CTOs and engineering managers who want to understand their team's work through commit analysis, not surveillance.

The Problem

Engineering leaders face a paradox: they need visibility into team performance, but most approaches either reduce engineers to ticket counters or create a surveillance culture that erodes trust.

Counting commits, lines of code, or PRs merged tells you almost nothing about the quality, complexity, or impact of the work. And stack-ranking developers by these vanity metrics actively harms team culture.

Our Approach

ShipLens takes a fundamentally different path:

  1. Understand commits in context — We use LLMs to read each commit the way a senior engineer would: understanding what changed, why it matters, and how complex the work was, all within the context of your specific project.

  2. Separate analysis from scoring — The expensive LLM analysis happens once per commit. Scoring uses cheap, transparent, configurable rules. You can re-score your entire history with different weights without re-analyzing anything.

  3. Patterns over scores — Work patterns (sessions, streaks, focus areas, commit sizing) reveal how your team works. These are descriptive, not prescriptive.

  4. Flags, never penalties — Gaming detection and AI slop detection raise flags for human review. They never automatically reduce scores. Trust your engineers, but verify patterns.
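The "flags, never penalties" separation can be sketched as follows. This is an illustrative sketch only: the names (`detect_gaming_flags`, `CommitReview`) and the heuristics are assumptions for the example, not ShipLens's actual API. The key property it demonstrates is that detectors attach flags for human review while the score passes through untouched.

```python
from dataclasses import dataclass, field

@dataclass
class CommitReview:
    score: float
    flags: list = field(default_factory=list)  # surfaced for human review only

def detect_gaming_flags(commit: dict) -> list:
    """Illustrative heuristics; real detection would be far richer."""
    flags = []
    if commit.get("files_touched", 0) == 1 and commit.get("lines_added", 0) > 2000:
        flags.append("single-file mega-commit")
    if commit.get("message", "").strip().lower() in {"wip", "fix", "update"}:
        flags.append("low-information commit message")
    return flags

def review(commit: dict, base_score: float) -> CommitReview:
    # Flags are attached, but the score is passed through unchanged:
    # flags never automatically reduce scores.
    return CommitReview(score=base_score, flags=detect_gaming_flags(commit))
```

Because scoring and flagging never interact, a flagged commit scores identically to an unflagged one; the flag only changes what a human looks at.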

Core Philosophy

Each principle, and what it means:

  • Score-agnostic analysis: LLMs assess complexity and impact; scoring rules are separate and configurable
  • Transparency: Every score shows its components, so you can always see why
  • Fairness: Complex commits get deeper analysis; trivial commits get quick triage
  • Cost efficiency: Deterministic triage before any LLM call; tiered depth saves cost
  • No surveillance: Patterns inform conversations; they don't replace them

How It Works

Phase 1: Project Understanding

Before analyzing any commits, ShipLens builds a contextual understanding of your project:

  • Crawl the repository structure
  • Sample representative files from each domain
  • Map domains and their criticality
  • Embed project knowledge into a vector store (pgvector)

This context is used during commit analysis so the LLM understands your codebase, not just generic code.
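The crawl-and-map steps above can be sketched roughly as follows. The path-prefix rules, domain names, and criticality labels are illustrative assumptions (a real implementation would infer them from the repository), and the actual embedding into pgvector is out of scope here.

```python
from collections import defaultdict

# Illustrative path-prefix -> (domain, criticality) rules; assumed
# for this sketch, not derived from ShipLens itself.
DOMAIN_RULES = [
    ("src/auth/", ("auth", "critical")),
    ("src/billing/", ("billing", "critical")),
    ("src/ui/", ("ui", "normal")),
    ("docs/", ("docs", "low")),
]

def map_domains(paths):
    """Group crawled file paths into domains with a criticality label."""
    domains = defaultdict(lambda: {"criticality": "normal", "files": []})
    for path in paths:
        for prefix, (name, crit) in DOMAIN_RULES:
            if path.startswith(prefix):
                domains[name]["criticality"] = crit
                domains[name]["files"].append(path)
                break
    return dict(domains)

def sample_representative(domains, per_domain=2):
    """Pick a few files per domain to summarize and embed."""
    return {name: info["files"][:per_domain] for name, info in domains.items()}
```

The sampled files, not the whole tree, are what get summarized and embedded, which keeps the one-time project-understanding cost bounded.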

Phase 2: Commit Analysis

Each commit goes through a deterministic triage that assigns analysis depth, then gets analyzed at the appropriate level:

  • Shallow — Merge commits, CI-only changes, docs-only changes. No LLM cost.
  • Standard — Most commits. Analyzed by Claude Haiku with lightweight context.
  • Deep — Security-related or flagged commits. Agentic Claude Sonnet with codebase tools.

The output is a Commit Report containing: commit type, summary, areas affected, complexity, impact, quality signals, risk signals, slop dimensions, and confidence level.
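The triage step above can be sketched as a deterministic function over commit metadata. The specific path patterns here are assumptions for illustration; the point is that tier assignment needs no LLM call.

```python
def triage(commit: dict) -> str:
    """Assign an analysis depth tier before any LLM call.

    Illustrative rules only; the real triage is richer.
    """
    files = commit.get("files", [])
    if commit.get("is_merge"):
        return "shallow"
    if files and all(f.startswith((".github/", "docs/")) or f.endswith(".md") for f in files):
        return "shallow"  # CI-only or docs-only changes: no LLM cost
    if commit.get("security_flag") or any("auth" in f or "crypto" in f for f in files):
        return "deep"  # agentic analysis with codebase tools
    return "standard"  # most commits: lightweight-context analysis
```

Because the tier is computed from cheap metadata first, the expensive models only ever see the commits that warrant them.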

Phase 3: Scoring

Commit Reports are scored using a transparent, rules-based engine. The current V2 formula uses a multiplicative core (complexity × impact), plus additive bonuses for effort, quality signals, and risk signals.

Scoring is cheap and repeatable — you can change weights, switch presets, or create custom configs and re-score your entire history instantly.
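Under the stated structure, a minimal scoring sketch might look like this. The weights and signal handling are illustrative assumptions, not the actual V2 defaults; what the sketch shows is the multiplicative core plus additive bonuses, and why re-scoring with new weights is instant.

```python
DEFAULT_WEIGHTS = {"effort": 0.5, "quality": 1.0, "risk": 1.0}  # illustrative values

def score_commit(report: dict, weights=DEFAULT_WEIGHTS) -> float:
    """Multiplicative core (complexity x impact) plus additive bonuses.

    `report` is a previously produced Commit Report; scoring never
    calls an LLM, so re-scoring history with different weights is cheap.
    """
    core = report["complexity"] * report["impact"]
    bonuses = (
        weights["effort"] * report.get("effort", 0)
        + weights["quality"] * len(report.get("quality_signals", []))
        + weights["risk"] * len(report.get("risk_signals", []))
    )
    return core + bonuses
```

Swapping in a different weights dict re-scores the same report without touching the analysis, which is the separation described under "Our Approach".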

What You Get

  • Contributor profiles with composite scores, work patterns, and trend analysis
  • Weekly digests summarizing team activity with comparisons to prior weeks
  • 1:1 prep reports with coaching-oriented talking points and evidence-based observations
  • Alerts for silent contributors, high fix ratios, and low test discipline
  • Gaming flags for suspicious commit patterns
  • AI slop scores measuring AI-generated code quality across seven dimensions

Built with intelligence, not surveillance.