Backlog Forecast
Forecast a single backlog or epic for one team
Throughput Data
Feature Scope
PERT: Values near the midpoint are more likely than the extremes. Produces bell-shaped distributions. Uses Beta distribution with formula: mean = (Low + 4×Mid + High) / 6. Better when you believe the middle is most realistic.
Confidence Levels
Completion Distribution
Monte Carlo Simulation Paths
Each line is one trial — showing remaining items burned down over iterations. The spread shows uncertainty.
Portfolio Forecast
Forecast multiple features with WIP limits and allocation
Simulation Settings
PERT: Values near the midpoint are more likely. Produces bell-shaped distributions. Better when you believe the middle is most realistic.
Features in Progress
Per feature: Each feature has its own throughput data. Good when different sub-teams or individuals own different features.
| Feature | Low | High | Split | Alloc % |
|---|---|---|---|---|

- Low: Minimum remaining items (optimistic scope). Each trial picks a random value between Low and High. If scope is known exactly, set Low = High. (Magennis Size-Growth-Pace model)
- High: Maximum remaining items (pessimistic scope). Accounts for scope uncertainty, hidden work, etc.
- Split: Split/growth rate multiplier (Magennis). Stories often split during development. A value of 1.2 means 20% scope growth on average. Enter a range like "1.0-1.5", or a single value like "1.0" or leave empty for no growth.
- Alloc %: What % of the shared throughput goes to this feature. Leave empty for equal distribution.
Monte Carlo Simulation Paths
Each line is one trial — showing total remaining items burned down over iterations. The spread shows uncertainty.
Completion Distribution
Confidence Levels
Per-Feature Breakdown
Portfolio Forecaster
Sequential cut-line forecast (Magennis method). Features processed in priority order — throughput flows to the next feature as each completes. Shows when each feature will be done at different confidence levels.
Capacity Forecast
How much can your team deliver in a given time window?
Team Throughput
Planning Window
Feature Capacity
How many right-sized features can your team deliver?
Items Forecast Distribution
Total items your team could complete. Confidence lines show: with X% certainty you'll complete at least this many.
Monte Carlo Simulation Paths
Each line is one trial — showing cumulative items completed over iterations. The spread shows uncertainty in total output.
Product Forecast
Model a product built by multiple teams — with or without dependencies — and get probability-weighted completion dates
Teams & Dependencies
One row per team. Each team has its own throughput. The "Depends on" column takes team names or row numbers (comma-separated). Leave empty for no dependencies.
| # | Team | Epic | Low | High | Split | Throughput | Depends on |
|---|---|---|---|---|---|---|---|

- Low: Minimum remaining items (optimistic scope).
- High: Maximum remaining items (pessimistic scope).
- Split: Split/growth rate, e.g. "1.0-1.5". Leave empty for no growth.
- Throughput: This team's items completed per iteration (comma-separated). Include zeros.
- Depends on: Team names or row numbers this team depends on (comma-separated). This team cannot start until ALL listed teams finish. E.g. "Team Alpha, Team Beta" or "1, 2".
Settings
PERT: Values near the midpoint are more likely. Produces bell-shaped distributions. Better when you believe the middle is most realistic.
How it works
Product Completion Confidence
Overall product done = the last team to finish in each trial.
Product Completion Distribution
Team Portfolio
Per-team forecasts. Bottom row = product (MAX of all teams per trial).
Dependency Impact Analysis
How much do dependencies shift each team's forecast? Includes compound probability calculations.
Quick Guide
Understand Monte Carlo forecasting and how to use each tool
What is Monte Carlo forecasting?
Traditional estimation asks: "How long will this take?" and produces a single number that's almost always wrong. Monte Carlo forecasting asks a better question: "What are the possible outcomes, and how likely is each?"
The simulation runs hundreds or thousands of trials. Each trial randomly samples from your real throughput history and simulates work progressing week by week until complete. The result is a probability distribution — not one answer, but a range of outcomes with confidence levels.
How it works: a step-by-step example
Imagine your team has 20 user stories remaining, and over the past 6 weeks they completed: 3, 5, 4, 2, 6, 4 stories per week.
Trial 1 might pick: 3, 5, 4, 2, 6 → finishes in 5 weeks.
Trial 2 might pick: 2, 3, 4, 2, 4, 6 → finishes in 6 weeks.
Trial 3 might pick: 6, 5, 6, 4 → finishes in 4 weeks.
...
After 500 trials, you might find: 50% finish by week 5, 85% by week 6, 95% by week 7.
Each trial is different because the throughput is randomly sampled each week — just like in reality, your team's output varies. The simulation captures this uncertainty naturally, without anyone having to guess.
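Mechanically, each trial is just a burndown loop that draws one random week from the throughput history until the backlog is empty. A minimal Python sketch (an illustration of the idea, not the tool's actual code):

```python
import random

def forecast(remaining, throughput_history, trials=500):
    """Run Monte Carlo trials; return sorted finish times in weeks."""
    finish_weeks = []
    for _ in range(trials):
        left, weeks = remaining, 0
        while left > 0:
            left -= random.choice(throughput_history)  # sample one week's output
            weeks += 1
        finish_weeks.append(weeks)
    return sorted(finish_weeks)

weeks = forecast(20, [3, 5, 4, 2, 6, 4])
p85 = weeks[int(0.85 * len(weeks))]  # 85% of trials finished by this week
```

Sorting the finish times makes confidence lookups trivial: the value at index 0.85 × trials is the week by which 85% of trials had finished.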
Scope uncertainty: the Size-Growth-Pace model
In reality, you rarely know the exact scope. The tool uses Troy Magennis's Size-Growth-Pace (SGP) model to capture this:
- Size: remaining items as a range, e.g. 15–25 items
- Growth: split/growth rate multiplier, e.g. 1.0–1.3 (up to 30% growth)
- Pace: historical throughput samples, e.g. 3, 5, 4, 2, 6, 4
Each trial randomly picks a scope within Low–High, multiplies by a random split rate, then simulates week-by-week throughput until done. This means every trial faces different scope and different throughput — capturing both sources of uncertainty.
Scope model toggle: Choose between Uniform (every value between Low and High equally likely — Magennis default, conservative) and PERT (values near the midpoint are more likely, producing bell-shaped distributions using mean = (Low + 4×Mid + High) / 6). Use PERT when you believe the middle of your range is the most realistic estimate.
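The two scope models can be sketched as below. The helper names are hypothetical, and since the tool exposes no separate Mid input, this sketch assumes the PERT mode sits at the midpoint of Low and High:

```python
import random

def sample_scope(low, high, model="uniform", mid=None):
    """One trial's scope draw under either scope model."""
    if low == high:
        return float(low)  # scope known exactly
    if model == "uniform":
        return random.uniform(low, high)  # Magennis default: every value equally likely
    # PERT: a Beta distribution whose mode sits at 'mid'
    mid = (low + high) / 2 if mid is None else mid  # ASSUMPTION: midpoint mode
    a = 1 + 4 * (mid - low) / (high - low)
    b = 1 + 4 * (high - mid) / (high - low)
    return low + random.betavariate(a, b) * (high - low)

def trial_scope(low, high, split=(1.0, 1.0), model="uniform"):
    """Size x Growth: scope draw times a random split-rate multiplier."""
    return sample_scope(low, high, model) * random.uniform(*split)
```

With a symmetric range like 15–25, the PERT draws average out near the mean (15 + 4×20 + 25) / 6 = 20, while uniform draws spread evenly across the whole range.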
The tools
Forecast a single backlog or epic for one team.
Forecast multiple features sharing one team's throughput.
WIP limit: Controls how many features run in parallel. Lower WIP = more focus per feature, individual features finish sooner, but overall portfolio may take longer.
Allocation %: In shared pool mode, you can weight how much throughput each feature receives (e.g. 50% to Feature A, 25% each to B and C).
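In shared pool mode, the weighting can be pictured as splitting each sampled week's throughput by allocation percentage (a simplified sketch; the actual tool may round or redistribute leftover capacity differently):

```python
def split_throughput(week_total, alloc_pct):
    """Split one week's sampled throughput across features by allocation %."""
    return {feature: week_total * pct / 100 for feature, pct in alloc_pct.items()}

split_throughput(6, {"A": 50, "B": 25, "C": 25})  # {'A': 3.0, 'B': 1.5, 'C': 1.5}
```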
How much can your team deliver in a given time window?
Forecast a product delivered by multiple teams with optional dependencies.
Product completion = MAX of all team finish times.
Dependency impact shows the exact time cost: how much later the product ships because of dependencies, with side-by-side comparison of with-deps vs without-deps scenarios.
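The dependency rule and the MAX aggregation can be sketched for a single trial as follows (a simplified model under the assumptions above: each team draws from its own throughput history, and a dependent team starts only after all of its upstream teams finish):

```python
import random

def team_finish(items, history, start=0):
    """Weeks until this team finishes, beginning work at week 'start'."""
    weeks = start
    while items > 0:
        items -= random.choice(history)
        weeks += 1
    return weeks

def product_trial(teams):
    """teams: name -> (items, history, deps). Product done = last team done."""
    done = {}
    def finish(name):
        if name not in done:
            items, history, deps = teams[name]
            start = max((finish(d) for d in deps), default=0)  # wait for ALL deps
            done[name] = team_finish(items, history, start)
        return done[name]
    return max(finish(name) for name in teams)
```

Running the same trial with the deps lists emptied gives the without-dependencies scenario, and the per-trial difference is the time cost of the dependencies.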
Reading the results
70% — probable, good for internal targets
85% — recommended for commitments
95% — high confidence, conservative
Burndown paths: Lines going down — remaining work over time.
Buildup paths: Lines going up — items completed over time.
Dashed lines: Confidence thresholds on the chart.
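Once the trials are done, the confidence levels above are simple percentile lookups over the sorted finish times, e.g.:

```python
def confidence(finish_weeks, levels=(70, 85, 95)):
    """Map each confidence level to the week by which that share of trials finished."""
    s = sorted(finish_weeks)
    return {p: s[min(len(s) - 1, int(p / 100 * len(s)))] for p in levels}

confidence([4, 5, 5, 6, 6, 6, 7, 7, 8, 9])  # {70: 7, 85: 8, 95: 9}
```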
Research by Nick Brown across 25 teams found that 85th percentile forecasts were met or exceeded ~90% of the time. This makes 85% the practical sweet spot — confident enough for commitments without excessive padding.
Input reference
| Input | What to enter | Example |
|---|---|---|
| Throughput | Items completed per iteration, comma-separated. Include zeros. | 3, 5, 4, 0, 6, 4, 3, 5 |
| Scope Low / High | Range of remaining items. Low = optimistic, High = pessimistic. | Low: 12 High: 18 |
| Split rate | Scope growth multiplier. "1.0" = no growth. "1.0-1.3" = 0–30% growth. | 1.0-1.2 |
| Iteration length | Duration of one iteration: 1, 2, or 3 weeks. | 1 week |
| Trials | Number of simulation runs. 500 is enough for most cases. | 500 or 10,000 |
| Data quality | Auto-calculated: (n−1)/(n+1). Aim for 8+ throughput samples (78%+). | 8 samples → 78% |
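The data-quality figure in the last row follows directly from the (n−1)/(n+1) formula:

```python
def data_quality(n_samples):
    """Heuristic share of throughput variability captured by n samples."""
    return (n_samples - 1) / (n_samples + 1)

round(data_quality(8) * 100)  # 78
```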
Tips for better forecasts
Methodology
This tool implements probabilistic forecasting methods developed by: