Forecasting & Trending
ShipLens doesn't just tell you what happened — it tells you what's about to happen. Forecasting uses historical commit data to predict future engineering metrics, detect trends, and flag anomalies before they become problems.
Why Forecasting?
A CTO who only looks at last week's numbers is always reacting. Forecasting shifts the conversation from "what went wrong" to "what's changing" — giving you time to intervene before a trend becomes a crisis.
Forecasted Metrics
Seven metrics are tracked, trended, and predicted:
| # | Metric | Unit | What It Measures |
|---|---|---|---|
| 1 | Velocity | commits/week | Raw throughput of the team |
| 2 | Cycle time | hours | Time from first commit on a branch to PR merge |
| 3 | Average score | 0-10 | Mean commit score (V2 scoring) |
| 4 | Commit frequency | commits/day/contributor | How consistently contributors are shipping |
| 5 | PR cycle time | hours | Time from PR open to PR merge |
| 6 | Slop index | 0.0-1.0 | Proportion of AI-generated code without human refinement |
| 7 | Deploy frequency | deploys/week | How often code reaches production |
Methods
Moving Averages
Each metric is computed as a 7-day simple moving average (SMA) and a 28-day SMA:

$$\mathrm{SMA}_n(t) = \frac{1}{n} \sum_{i=0}^{n-1} x_{t-i}, \qquad n \in \{7, 28\}$$

The 7-day SMA captures short-term momentum. The 28-day SMA captures the underlying trend. When the 7-day SMA crosses above or below the 28-day SMA, it signals a trend change.
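A minimal sketch of the moving averages and the crossover check in Python (illustrative only, not the ShipLens implementation; the function names are hypothetical):

```python
def sma(values, window):
    """Simple moving average over the trailing `window` points.

    Returns None until enough data points exist.
    """
    if len(values) < window:
        return None
    return sum(values[-window:]) / window


def crossover(values, short=7, long=28):
    """Detect whether the short SMA crossed the long SMA at the latest point.

    Returns "up", "down", or None (no crossing / not enough data).
    """
    prev_short, prev_long = sma(values[:-1], short), sma(values[:-1], long)
    cur_short, cur_long = sma(values, short), sma(values, long)
    if None in (prev_short, prev_long, cur_short, cur_long):
        return None
    if prev_short <= prev_long and cur_short > cur_long:
        return "up"    # short-term momentum now above the underlying trend
    if prev_short >= prev_long and cur_short < cur_long:
        return "down"  # short-term momentum now below the underlying trend
    return None
```

For example, 28 flat days followed by a spike pulls the 7-day SMA above the 28-day SMA, which `crossover` reports as `"up"`.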
Linear Regression
For forward predictions, ShipLens fits a simple linear regression over the last 28 data points:

$$\hat{y} = \beta_0 + \beta_1 t$$

Where:

- $\hat{y}$ = predicted metric value
- $t$ = time (days from start of window)
- $\beta_0$ = intercept (baseline value)
- $\beta_1$ = slope (rate of change per day)

The slope $\beta_1$ drives both the forward predictions and the trend classification described below.
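The least-squares fit over a 28-point window can be sketched as follows (a self-contained illustration; `fit_linear` and `predict` are hypothetical names, not ShipLens APIs):

```python
def fit_linear(ys):
    """Ordinary least-squares fit of y = b0 + b1*t over t = 0..n-1.

    Returns (intercept, slope).
    """
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    sxx = sum((t - t_mean) ** 2 for t in ts)
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
    slope = sxy / sxx
    intercept = y_mean - slope * t_mean
    return intercept, slope


def predict(ys, days_ahead):
    """Extrapolate the fitted line `days_ahead` days past the last point."""
    b0, b1 = fit_linear(ys)
    return b0 + b1 * (len(ys) - 1 + days_ahead)
```

On a perfectly linear series such as `[2*t + 1 for t in range(28)]`, the fit recovers intercept 1 and slope 2, and a 7-day-ahead prediction extrapolates the same line.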
Anomaly Detection
An anomaly is flagged when a metric deviates more than 2 standard deviations from its 28-day moving average:

$$|x_t - \mathrm{SMA}_{28}(t)| > 2\sigma_{28}$$

Where $x_t$ is the metric value at time $t$ and $\sigma_{28}$ is the standard deviation over the same 28-day window.
Examples of anomalies:
- Velocity suddenly drops by 40% mid-sprint
- Slop index spikes from 0.1 to 0.5 in one week
- Cycle time doubles without an obvious cause
Anomalies are surfaced in the UI with a visual flag and included in weekly digests.
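The 2-standard-deviation rule can be sketched in a few lines of Python (an illustration of the rule only; the window and threshold defaults mirror the description above):

```python
import statistics


def is_anomaly(values, window=28, k=2.0):
    """Flag the latest point if it deviates more than k standard
    deviations from the mean of the trailing `window` points."""
    history = values[-(window + 1):-1]  # the window, excluding the latest point
    if len(history) < window:
        return False  # not enough history to judge
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return abs(values[-1] - mean) > k * sd
```

For a metric oscillating mildly around 10, a sudden jump to 20 trips the flag while an ordinary reading of 10.5 does not.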
Confidence Intervals
Every prediction includes a 90% confidence interval:

$$\hat{y} \pm 1.645 \cdot s$$

Where $\hat{y}$ is the regression prediction, $s$ is the standard error of the estimate, and 1.645 is the z-value corresponding to 90% coverage.
Wider intervals indicate less predictable metrics — which is itself useful information. A metric with very wide confidence intervals may be too volatile to forecast meaningfully, suggesting underlying process instability.
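One way to produce such an interval is to band the regression prediction with the residual standard error, as in this sketch (an assumption about the exact interval construction; the z-value 1.645 corresponds to 90% coverage):

```python
import math


def predict_with_ci(ys, days_ahead, z=1.645):
    """Linear-regression prediction with a simple z * standard-error band."""
    n = len(ys)
    ts = list(range(n))
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    sxx = sum((t - t_mean) ** 2 for t in ts)
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / sxx
    intercept = y_mean - slope * t_mean
    # Residual standard error: how far the data scatters around the fit.
    residuals = [y - (intercept + slope * t) for t, y in zip(ts, ys)]
    se = math.sqrt(sum(r * r for r in residuals) / (n - 2))
    yhat = intercept + slope * (n - 1 + days_ahead)
    return yhat, yhat - z * se, yhat + z * se
```

A noisy metric produces large residuals and therefore a wide band, which is exactly the "too volatile to forecast" signal described above.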
Trend Classification
Each metric is classified into one of three trend states based on the regression slope $\beta_1$:

| Trend | Condition | Interpretation |
|---|---|---|
| Improving | $\beta_1$ significantly positive | Metric is getting better with statistical confidence |
| Declining | $\beta_1$ significantly negative | Metric is getting worse with statistical confidence |
| Stable | $\beta_1$ not significantly different from zero | No meaningful change detected |
For metrics where lower is better (cycle time, PR cycle time, slop index), the direction is inverted: a negative slope is "improving."
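The classification, including the direction inversion for lower-is-better metrics, can be sketched as (the `threshold` parameter stands in for whatever significance test ShipLens applies; the function is hypothetical):

```python
def classify_trend(slope, threshold, lower_is_better=False):
    """Map a regression slope to a trend label.

    `threshold` is a stand-in for the statistical-significance cutoff.
    """
    # For cycle time, PR cycle time, and slop index, a falling line is good.
    effective = -slope if lower_is_better else slope
    if effective > threshold:
        return "improving"
    if effective < -threshold:
        return "declining"
    return "stable"
```

So a slope of −0.5 on velocity classifies as declining, while the same slope on cycle time classifies as improving.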
TIP
A "stable" classification is often good news. It means the team is operating consistently. Not every metric needs to be improving all the time — sustainability matters more than constant acceleration.
Background Processing
Forecasts are generated by the ForecastWorker, an Oban background job that runs daily:
| Job | Schedule | What It Does |
|---|---|---|
| ForecastWorker | Daily (early morning) | Computes SMAs, fits regressions, classifies trends, flags anomalies |
The worker processes all metrics for all projects and squads. Results are stored as forecast snapshots, allowing historical comparison of predictions vs actuals.
Route
`/c/:slug/forecasts`

The forecasts page shows:
- Current trend classification for each metric (improving / stable / declining)
- 7-day and 14-day predictions with confidence intervals
- Historical trend lines with SMA overlays
- Anomaly flags with timestamps and severity
- Per-squad and per-contributor breakdowns
