Backlog Forecast

Forecast a single backlog or epic for one team

Throughput Data

?Enter your team's actual throughput per iteration. Each number = items completed in that iteration. Include good and bad iterations, zeros included.
?How long is each iteration/sprint? Your throughput data should match this cadence.
?More trials = more stable results. 500 is fast and usually sufficient; 10,000 gives smoother histograms.

Feature Scope

?Minimum remaining items (optimistic). Each trial picks a random scope between Low and High.
?Maximum remaining items (pessimistic). Accounts for scope uncertainty and hidden work.
?Split/growth rate multiplier. E.g. "1.0-1.3" means 0-30% growth. Leave empty for no growth.
?Uniform: Every value between Low and High is equally likely. Conservative — makes no assumptions. (Magennis SGP default)

PERT: Values near the midpoint are more likely than the extremes. Produces bell-shaped distributions. Uses Beta distribution with formula: mean = (Low + 4×Mid + High) / 6. Better when you believe the middle is most realistic.

Confidence Levels

Completion Distribution

Monte Carlo Simulation Paths

Each line is one trial — showing remaining items burned down over iterations. The spread shows uncertainty.

Portfolio Forecast

Forecast multiple features with WIP limits and allocation

Simulation Settings

?Enter your team's actual throughput per iteration. Each number = items completed in that iteration. More data points = more representative forecast. Don't cherry-pick — include good and bad iterations.
?How long is each iteration/sprint? Your throughput data should match this cadence. E.g. if you run 2-week sprints, select "2 weeks" and enter items completed per sprint.
?The date from which to project forward. Results will show calendar dates based on this starting point.
?Set a target deadline. The portfolio table will compare each feature's forecast against this date using traffic lights: green = on time, yellow = within 1 iteration, red = late.
?More trials = smoother, more stable results. 500 is fast and usually sufficient. 10,000 gives smoother histograms. 100,000 gives very precise percentiles but takes a moment.
?Uniform: Every value between Low and High is equally likely. Conservative. (Magennis SGP default)

PERT: Values near the midpoint are more likely. Produces bell-shaped distributions. Better when you believe the middle is most realistic.

Features in Progress

?Shared pool: Team has one throughput number, distributed across features by allocation %. Good when one team works on multiple features.

Per feature: Each feature has its own throughput data. Good when different sub-teams or individuals own different features.
?Work In Progress: how many features the team works on in parallel. Higher WIP = more uncertainty = wider forecast spread. The simulation distributes each iteration's throughput randomly across this many active features.
3
Feature
Low ?Minimum remaining items (optimistic scope). Each trial picks a random value between Low and High. If scope is known exactly, set Low = High. (Magennis Size-Growth-Pace model)
High ?Maximum remaining items (pessimistic scope). Accounts for scope uncertainty, hidden work, etc.
Split ?Split/growth rate multiplier (Magennis). Stories often split during development. A value of 1.2 means 20% scope growth on average. Enter a range like "1.0-1.5" or a single value. Leave empty (or enter "1.0") for no growth.
Alloc % ?What % of the shared throughput goes to this feature. Leave empty for equal distribution.

Monte Carlo Simulation Paths

Each line is one trial — showing total remaining items burned down over iterations. The spread shows uncertainty.

Completion Distribution

Confidence Levels

Per-Feature Breakdown

Portfolio Forecaster

Sequential cut-line forecast (Magennis method). Features are processed in priority order; throughput flows to the next feature as each one completes. Shows when each feature will be done at different confidence levels.
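The cut-line mechanic can be sketched per trial like this (a simplified illustration with invented scopes and throughput; not the tool's implementation):

```python
import random

def cutline_trial(scopes, history, rng):
    """Process features in priority order: each iteration's sampled
    throughput goes to the top unfinished feature, overflowing to the
    next one. Records the iteration in which each feature completes."""
    remaining = list(scopes)
    finishes, iteration = [], 0
    while remaining:
        iteration += 1
        capacity = rng.choice(history)  # this iteration's throughput
        while capacity > 0 and remaining:
            burn = min(capacity, remaining[0])
            remaining[0] -= burn
            capacity -= burn
            if remaining[0] == 0:
                remaining.pop(0)
                finishes.append(iteration)
    return finishes

rng = random.Random(5)
finishes = cutline_trial([10, 8, 12], [3, 5, 4, 2, 6, 4], rng)
print(finishes)  # finish iteration of each feature, in priority order
```

Running many such trials and taking percentiles per feature yields the confidence-level table.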

Capacity Forecast

How much can your team deliver in a given time window?

Team Throughput

?Enter your team's actual throughput per iteration. Each number = items completed in that iteration. More data points = more representative forecast. Include good and bad iterations.
?How long is each iteration/sprint? Your throughput data should match this cadence.
?Number of Monte Carlo simulation trials. More trials = more stable results.

Planning Window

?How far ahead you're planning. E.g. 12 weeks for a quarter. Converted to iterations automatically.
?85th percentile of historical feature sizes. Used to convert total items into feature count. Start with 5-8 if unknown.

Feature Capacity

How many right-sized features can your team deliver?

Items Forecast Distribution

Total items your team could complete. Confidence lines show: with X% certainty you'll complete at least this many.

Monte Carlo Simulation Paths

Each line is one trial — showing cumulative items completed over iterations. The spread shows uncertainty in total output.

Product Forecast

Model a product built by multiple teams — with or without dependencies — and get probability-weighted completion dates

Teams & Dependencies

One row per team. Each team has its own throughput. The "Depends on" column takes team names or row numbers (comma-separated). Leave empty for no dependencies.

# Team Epic
Low ?Minimum remaining items (optimistic scope).
High ?Maximum remaining items (pessimistic scope).
Split ?Split/growth rate, e.g. "1.0-1.5". Leave empty for no growth.
Throughput ?This team's items completed per iteration (comma-separated). Include zeros.
Depends on ?Team names or row numbers this team depends on (comma-separated). This team cannot start until ALL listed teams finish. E.g. "Team Alpha, Team Beta" or "1, 2".

Settings

?All teams use the same iteration cadence. Throughput data must match this.
?Deadline for the product. Traffic lights compare each team's 85th percentile against this date. Also used for compound probability calculations.
?Uniform: Every value between Low and High is equally likely. Conservative. (Magennis SGP default)

PERT: Values near the midpoint are more likely. Produces bell-shaped distributions. Better when you believe the middle is most realistic.

How it works

No dependencies: Each team runs in parallel. Product completion = MAX of all team finish dates (the slowest team determines when the product is done).
With dependencies: Dependent teams start only after ALL prerequisites finish (per trial). This naturally captures compound probability — e.g. if A and B each have 85% chance of finishing by a date, the chance BOTH finish = ~72%.
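The compound-probability effect falls out of per-trial simulation automatically. A minimal sketch with hypothetical finish-time distributions (the numbers are invented for illustration):

```python
import random

rng = random.Random(3)
trials = 10_000
deadline = 10  # iterations

# Hypothetical finish-iteration distribution for a team, calibrated so
# ~85% of trials land on or before the deadline (weights 25+30+30 = 85).
def finish_time(rng):
    return rng.choices([8, 9, 10, 11, 12], weights=[25, 30, 30, 10, 5])[0]

a_done = b_done = both_done = 0
for _ in range(trials):
    a, b = finish_time(rng), finish_time(rng)
    a_done += a <= deadline
    b_done += b <= deadline
    both_done += max(a, b) <= deadline  # product done = last team to finish

print(f"A by deadline: {a_done/trials:.0%}, B: {b_done/trials:.0%}, "
      f"both: {both_done/trials:.0%}")  # both is roughly 0.85 * 0.85 = ~72%
```

No explicit probability math is needed: taking the MAX per trial produces the compound result directly.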

Product Completion Confidence

Overall product done = the last team to finish in each trial.

Product Completion Distribution

Team Portfolio

Per-team forecasts. Bottom row = product (MAX of all teams per trial).

Dependency Impact Analysis

How much do dependencies shift each team's forecast? Includes compound probability calculations.

Quick Guide

Understand Monte Carlo forecasting and how to use each tool

What is Monte Carlo forecasting?

Traditional estimation asks: "How long will this take?" and produces a single number that's almost always wrong. Monte Carlo forecasting asks a better question: "What are the possible outcomes, and how likely is each?"

The simulation runs hundreds or thousands of trials. Each trial randomly samples from your real throughput history and simulates work progressing week by week until complete. The result is a probability distribution — not one answer, but a range of outcomes with confidence levels.

How it works: a step-by-step example

Imagine your team has 20 user stories remaining, and over the past 6 weeks they completed: 3, 5, 4, 2, 6, 4 stories per week.

1. Trial 1 begins. Randomly pick a throughput: 4. Remaining: 20 − 4 = 16 stories.
2. Week 2. Randomly pick: 6. Remaining: 16 − 6 = 10.
3. Week 3. Pick: 3. Remaining: 10 − 3 = 7.
4. Week 4. Pick: 5. Remaining: 7 − 5 = 2.
5. Week 5. Pick: 4. Remaining: 2 − 4 → 0 (remaining never drops below zero). Done! Trial 1 result: 5 weeks.
Trial 2 might pick: 2, 3, 2, 4, 5, 3 → finishes in 6 weeks.
Trial 3 might pick: 6, 5, 6, 4 → finishes in 4 weeks.
...
After 500 trials, you might find: 50% finish by week 5, 85% by week 6, 95% by week 7.

Each trial is different because the throughput is randomly sampled each week — just like in reality, your team's output varies. The simulation captures this uncertainty naturally, without anyone having to guess.
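The walkthrough above can be sketched in a few lines of Python (a minimal illustration using the example's numbers; the tool's actual implementation may differ):

```python
import random

def run_trial(backlog, history, rng):
    """One Monte Carlo trial: sample a historical throughput each week
    until the backlog is burned down. Returns the number of weeks."""
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= rng.choice(history)  # random sample from real data
        weeks += 1
    return weeks

rng = random.Random(42)
history = [3, 5, 4, 2, 6, 4]  # items completed per week, last 6 weeks
results = sorted(run_trial(20, history, rng) for _ in range(500))

# The week by which X% of trials had finished
for pct in (50, 85, 95):
    print(f"{pct}% of trials finish by week "
          f"{results[int(len(results) * pct / 100) - 1]}")
```

Sorting the trial results and indexing into them gives the confidence levels directly.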

Scope uncertainty: the Size-Growth-Pace model

In reality, you rarely know the exact scope. The tool uses Troy Magennis's Size-Growth-Pace (SGP) model to capture this:

Size
Low–High range
e.g. 15–25 items
Growth
Split rate multiplier
e.g. 1.0–1.3 (up to 30% growth)
Pace
Throughput per iteration
e.g. 3, 5, 4, 2, 6, 4

Each trial randomly picks a scope within Low–High, multiplies by a random split rate, then simulates week-by-week throughput until done. This means every trial faces different scope and different throughput — capturing both sources of uncertainty.
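A single SGP trial can be sketched like this (an illustrative sketch using the example ranges above; not the tool's code):

```python
import random

def sgp_trial(low, high, split_low, split_high, history, rng):
    """One Size-Growth-Pace trial: random scope times a random split
    rate, then weekly throughput sampling until done."""
    scope = rng.uniform(low, high)               # Size: within Low-High
    scope *= rng.uniform(split_low, split_high)  # Growth: split multiplier
    weeks = 0
    while scope > 0:                             # Pace: sample history
        scope -= rng.choice(history)
        weeks += 1
    return weeks

rng = random.Random(7)
weeks = sorted(sgp_trial(15, 25, 1.0, 1.3, [3, 5, 4, 2, 6, 4], rng)
               for _ in range(500))
print("85th percentile:", weeks[int(len(weeks) * 0.85) - 1], "weeks")
```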

Scope model toggle: Choose between Uniform (every value between Low and High equally likely — Magennis default, conservative) and PERT (values near the midpoint are more likely, producing bell-shaped distributions using mean = (Low + 4×Mid + High) / 6). Use PERT when you believe the middle of your range is the most realistic estimate.
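The two scope models differ only in how the scope value is drawn each trial. A sketch, assuming the standard PERT-Beta parameterization with the midpoint of Low-High as the most-likely value:

```python
import random

def sample_scope(low, high, mode="uniform", rng=random):
    """Uniform: flat between Low and High.
    PERT: Beta-shaped around the midpoint, mean = (Low + 4*Mid + High) / 6."""
    if mode == "uniform":
        return rng.uniform(low, high)
    mid = (low + high) / 2                         # midpoint as most-likely
    alpha = 1 + 4 * (mid - low) / (high - low)     # standard PERT-Beta shapes
    beta = 1 + 4 * (high - mid) / (high - low)
    return low + rng.betavariate(alpha, beta) * (high - low)

rng = random.Random(1)
pert = [sample_scope(12, 18, "pert", rng) for _ in range(10_000)]
print("PERT mean ~", round(sum(pert) / len(pert), 1))  # close to 15, the midpoint
```

With the midpoint as the mode, the shape parameters come out equal (here 3 and 3), giving the symmetric bell described above.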

The tools

Backlog Forecast

Forecast a single backlog or epic for one team.

Enter scope (Low/High), split rate, and throughput history. Get a probabilistic completion date with confidence levels and a burndown chart showing simulation paths.
Portfolio Forecast

Forecast multiple features sharing one team's throughput.

Throughput modes: Shared pool splits one throughput stream proportionally across features. Per-feature gives each feature its own independent throughput data.
WIP limit: Controls how many features run in parallel. Lower WIP = more focus: individual features finish sooner, though the overall portfolio may take longer.
Allocation %: In shared pool mode, you can weight how much throughput each feature receives (e.g. 50% to Feature A, 25% each to B and C).
Capacity Forecast

How much can your team deliver in a given time window?

Enter throughput history and a planning horizon (e.g. 12 weeks for a quarter). Get probabilistic item counts at each confidence level, converted to feature counts using a right-size value. Includes a buildup chart showing simulation paths.
Product Forecast

Forecast a product delivered by multiple teams with optional dependencies.

Each team has its own throughput, scope, and split rate. Dependencies mean a team cannot start until all its prerequisites finish — simulated per trial, so the delay compounds naturally.
Product completion = MAX of all team finish times.
Dependency impact shows the exact time cost: how much later the product ships because of dependencies, with side-by-side comparison of with-deps vs without-deps scenarios.

Reading the results

Confidence levels
50% — coin flip, too risky for commitments
70% — probable, good for internal targets
85% — recommended for commitments
95% — high confidence, conservative
Charts
Histogram: Distribution of outcomes. Narrow = low risk. Wide = high uncertainty.
Burndown paths: Lines going down — remaining work over time.
Buildup paths: Lines going up — items completed over time.
Dashed lines: Confidence thresholds on the chart.

Research by Nick Brown across 25 teams found that 85th percentile forecasts were met or exceeded ~90% of the time. This makes 85% the practical sweet spot — confident enough for commitments without excessive padding.

Input reference

Throughput: Items completed per iteration, comma-separated. Include zeros. Example: 3, 5, 4, 0, 6, 4, 3, 5
Scope Low / High: Range of remaining items. Low = optimistic, High = pessimistic. Example: Low: 12, High: 18
Split rate: Scope growth multiplier. "1.0" = no growth. "1.0-1.3" = 0–30% growth. Example: 1.0-1.2
Iteration length: Duration of one iteration: 1, 2, or 3 weeks. Example: 1 week
Trials: Number of simulation runs. 500 is enough for most cases. Example: 500 or 10,000
Data quality: Auto-calculated as (n−1)/(n+1). Aim for 8+ throughput samples (78%+). Example: 8 samples → 78%
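The data-quality score is a direct function of sample count. A quick check of the stated formula:

```python
def data_quality(n):
    """Confidence that n throughput samples represent the team,
    per the (n - 1) / (n + 1) formula."""
    return (n - 1) / (n + 1)

for n in (4, 8, 12, 20):
    print(n, "samples ->", f"{data_quality(n):.0%}")
# 8 samples -> 7/9 = 78%, matching the 8+ sample guideline
```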

Tips for better forecasts

Use real data. Pull throughput from your team's last 8–12 iterations. Don't guess or average — the raw values capture natural variation.
Include zero weeks. Sprints with zero delivery (holidays, incidents, blockers) are part of your real throughput pattern. Excluding them makes forecasts too optimistic.
Keep items similar in size. Throughput-based forecasting works best when work items are roughly comparable. If they vary wildly, break them down first.
Use 85% for commitments. The sweet spot between optimism and padding. Research shows it's met ~90% of the time.
Widen scope when unsure. Don't know if it's 12 or 20 items? Set Low=12, High=20. The simulation will show you the impact of that uncertainty.
Update regularly. Re-run as you get new data and scope clarity. Forecasts converge toward reality over time.

Methodology

This tool implements probabilistic forecasting methods developed by:

Focused Objective
Size-Growth-Pace scope model, empirical throughput sampling (not fitted distributions), linear interpolation percentiles, data quality formula.
Real World Agility
Practical Monte Carlo for agile teams, 85th percentile validation across 25 teams, multi-team dependency simulation, compound probability analysis.