Forecast Comparison
The Forecast Comparison view lets you compare the outputs of different forecast models side by side. Each model uses one of Tether’s 11 forecast algorithms, and comparing their outputs helps you choose the best model for your products.
Accessing Forecast Comparison
You can compare models in two ways:
- Comparison page — Navigate to Demand Forecast → Forecast Comparison.
- Inline comparison — Click any row in the forecast table to open a modal showing all model outputs for that SKU-channel combination.
Model comparison and selection are available to Admin users. Non-admin users can view the currently active model’s output on the forecast dashboard.
Why compare forecasts?
Comparing forecasts helps you:
| Purpose | Benefit |
|---|---|
| Model selection | Choose the best-performing algorithm for your product mix |
| Accuracy analysis | Understand how well each model predicts actual sales |
| Segment optimization | Find which algorithms work best for specific product types |
| Ongoing improvement | Track forecast accuracy over time and switch models when needed |
Comparison view
Layout
The comparison view shows:
- Model list — all defined models with their algorithm type
- Side-by-side outputs — forecast values from each model for the same SKU, channel, and period
- Actuals — historical sales values for completed periods (visually distinguished from forecasts)
- Variance — differences between each model’s forecast and actual sales
Available algorithms
Each model in the comparison uses one of the following algorithms. See Forecast Algorithms for full details.
| Algorithm | Best for |
|---|---|
| 90-Day Rolling Average | Stable, predictable demand |
| 120-Day Rolling Average (Recursive) | Volatile products, strategic planning |
| Exponentially Weighted Moving Average (EWMA) | Trending products, recent pattern changes |
| Linear Trend | Products with consistent growth or decline |
| Simplified Seasonal | Seasonal and holiday-driven products |
| Simple Seasonal with Trend Adjustment | Seasonal products with growth trends |
| Seasonal Year-over-Year Growth | Seasonal products with observed annual growth |
| Rolling Momentum | Accelerating or decelerating products |
| Holt-Winters (Triple Exponential Smoothing) | Complex seasonal patterns with trends |
| SARIMA | Complex time series with seasonal components |
| Static Baseline | External or manually uploaded forecasts |
Comparing models
Selecting models to compare
Open the comparison view
Navigate to Demand Forecast → Forecast Comparison, or click a row in the forecast table to open the inline model selection dialog.
View model outputs
The view shows each defined model as a row, with columns representing time periods. The currently active model is highlighted.
Compare values
Review forecast values side by side. For completed periods, all models show the actual sales value. For current and future periods, each model shows its own forecast output.
Side-by-side view
The table shows forecast outputs from each model for the same SKU-channel combination:
| SKU | Period | 90-Day Rolling Avg | EWMA | Holt-Winters | Actual | Variance (Rolling Avg) | Variance (EWMA) |
|---|---|---|---|---|---|---|---|
| SKU-001 | Jan | 100 | 110 | 108 | 105 | -5 | +5 |
| SKU-001 | Feb | 120 | 115 | 117 | 118 | +2 | -3 |
| SKU-002 | Jan | 50 | 55 | 52 | 48 | +2 | +7 |
Completed periods display actual sales history, visually distinguished from forecast values. All models show the same actual value for completed periods.
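Variance here is simply forecast minus actual. A quick sketch (not Tether’s implementation) using the SKU-001 January values from the table above:

```python
def variance(forecast: float, actual: float) -> float:
    """Forecast minus actual: positive = over-forecast, negative = under-forecast."""
    return forecast - actual

# SKU-001, Jan: Rolling Average forecast 100, EWMA forecast 110, actual 105.
print(variance(100, 105))  # prints -5 (Rolling Average under-forecast by 5 units)
print(variance(110, 105))  # prints 5 (EWMA over-forecast by 5 units)
```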
Accuracy metrics
Key metrics
Tether calculates four accuracy metrics for each model’s forecast against actual sales:
| Metric | Description | Formula | Good value |
|---|---|---|---|
| MAPE | Mean Absolute Percentage Error — average percentage difference between forecast and actual | mean(\|actual - forecast\| / actual) × 100 | Lower is better (< 20%) |
| MAE | Mean Absolute Error — average absolute difference in units | mean(\|actual - forecast\|) | Lower is better |
| Bias | Systematic over- or under-forecasting tendency | mean(forecast - actual) / mean(actual) × 100 | Close to 0% |
| Hit Rate | Percentage of periods where forecast falls within an acceptable range of actual | % of periods where \|error\| < threshold | Higher is better (> 75%) |
Accuracy metrics are only calculated for completed periods where actual sales data is available. Future periods do not contribute to accuracy scores.
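As a rough illustration, the four metrics can be computed like this — a minimal Python sketch, not Tether’s implementation, with sample data reusing the 90-Day Rolling Average values from the side-by-side table above:

```python
from statistics import mean

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent (assumes no zero actuals)."""
    return mean(abs(a - f) / a for a, f in zip(actuals, forecasts)) * 100

def mae(actuals, forecasts):
    """Mean Absolute Error, in units."""
    return mean(abs(a - f) for a, f in zip(actuals, forecasts))

def bias(actuals, forecasts):
    """Systematic over- (+) or under- (-) forecasting tendency, in percent."""
    return mean(f - a for a, f in zip(actuals, forecasts)) / mean(actuals) * 100

def hit_rate(actuals, forecasts, threshold):
    """Percentage of periods where the absolute error is below `threshold` units."""
    hits = sum(abs(a - f) < threshold for a, f in zip(actuals, forecasts))
    return hits / len(actuals) * 100

# 90-Day Rolling Average forecasts vs. actuals from the comparison table.
actuals = [105, 118, 48]
forecasts = [100, 120, 50]
```

With this data, MAPE comes out around 3.5% and Bias is slightly negative, i.e., a mild under-forecast overall.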
Understanding MAPE
MAPE (Mean Absolute Percentage Error) is the primary metric used for parameter optimization and model comparison:
| MAPE range | Interpretation |
|---|---|
| < 10% | Excellent accuracy |
| 10–20% | Good accuracy |
| 20–30% | Acceptable |
| > 30% | Needs improvement |
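The bands above can be expressed as a small helper. This is a sketch only; how Tether treats the exact boundary values (e.g., exactly 20%) is an assumption:

```python
def mape_grade(mape_pct: float) -> str:
    """Map a MAPE percentage to the interpretation bands above.

    Boundary handling is an assumption, not documented behavior.
    """
    if mape_pct < 10:
        return "Excellent accuracy"
    if mape_pct < 20:
        return "Good accuracy"
    if mape_pct <= 30:
        return "Acceptable"
    return "Needs improvement"
```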
Metrics cards
Summary cards show overall performance for each model.
Filtering comparisons
By product
Compare model performance for specific products:
- Filter by collection, tag, or individual SKU
- See how each algorithm performs for different product types
- Identify algorithms that excel for specific demand patterns
By time period
Analyze accuracy over different periods:| Period type | Insight |
|---|---|
| Recent (last 30–90 days) | Current model performance |
| Historical (6+ months) | Long-term accuracy trends |
| Seasonal (peak periods) | Performance during seasonal demand spikes |
| Post-event (after promotions) | Accuracy around demand disruptions |
By channel
Compare accuracy across sales channels:
- Some algorithms may perform better for specific channels (e.g., wholesale vs. DTC)
- Channel-specific demand patterns affect algorithm suitability
- Consider creating separate models for channels with very different demand profiles
Variance analysis
Understanding variance
| Variance | Meaning |
|---|---|
| Positive | Forecast was higher than actual (over-forecast) |
| Negative | Forecast was lower than actual (under-forecast) |
| Near zero | Accurate forecast |
Variance patterns
Look for systematic patterns in variance to diagnose model fit:
| Pattern | Indicates | Action |
|---|---|---|
| Consistently positive | Model over-forecasts | Try a more reactive algorithm (e.g., EWMA) or reduce trend parameters |
| Consistently negative | Model under-forecasts | Check if growth trends are being captured; try Linear Trend or Seasonal YoY Growth |
| Random | No bias, normal variance | Model is well-calibrated |
| Seasonal misalignment | Seasonal pattern timing is off | Switch to a seasonal algorithm (Simplified Seasonal, Holt-Winters) or verify seasonal period settings |
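A simple way to spot the first three patterns programmatically is to look at the sign of each period’s variance. This is a hypothetical sketch; the 60% cutoff is an arbitrary illustrative choice, not a Tether setting:

```python
def diagnose_variance(variances, share_threshold=0.6):
    """Classify a variance series as systematic over-/under-forecasting or unbiased.

    `share_threshold` is the fraction of periods that must share a sign
    before the pattern counts as systematic (arbitrary illustrative cutoff).
    """
    n = len(variances)
    positive_share = sum(v > 0 for v in variances) / n
    negative_share = sum(v < 0 for v in variances) / n
    if positive_share >= share_threshold:
        return "consistently positive (over-forecasting)"
    if negative_share >= share_threshold:
        return "consistently negative (under-forecasting)"
    return "no systematic bias"
```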
Model performance by segment
Product segments
Different algorithms tend to perform best for different demand patterns:
| Segment | Recommended algorithms | Why |
|---|---|---|
| High volume, stable | 90-Day Rolling Average, EWMA | Simple averages smooth noise on stable products |
| Seasonal products | Simplified Seasonal, Holt-Winters, SARIMA | Capture repeating seasonal patterns |
| Products with growth trends | Linear Trend, Seasonal YoY Growth | Model upward or downward demand trajectories |
| New or volatile products | EWMA, Rolling Momentum | React quickly to recent data without requiring long history |
| Complex seasonal + trend | Holt-Winters, SARIMA | Handle both trend and seasonality simultaneously |
| Externally planned | Static Baseline | Use your own uploaded forecast values |
Analyzing by segment
- Filter to a product segment (by collection, tag, or channel)
- Compare model metrics across the filtered set
- Identify which algorithm performs best for that segment
- Consider creating a model specifically configured for that segment
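The steps above amount to grouping accuracy scores by segment and picking the winner. A minimal sketch with made-up MAPE values (not real Tether data):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-SKU records: (segment_tag, model_name, mape_pct).
records = [
    ("seasonal", "Holt-Winters", 12.0),
    ("seasonal", "90-Day Rolling Average", 28.0),
    ("stable", "Holt-Winters", 15.0),
    ("stable", "90-Day Rolling Average", 9.0),
]

# Average MAPE per (segment, model) pair.
scores = defaultdict(list)
for segment, model, mape_pct in records:
    scores[(segment, model)].append(mape_pct)

# Pick the lowest-MAPE model per segment.
best = {}
for (segment, model), values in scores.items():
    avg = mean(values)
    if segment not in best or avg < best[segment][1]:
        best[segment] = (model, avg)
```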
Best model selection
Choosing the right algorithm
Consider these factors when selecting an algorithm for a model:
- Overall accuracy — Which has the lowest MAPE across your product set?
- Bias — Is there systematic over- or under-forecasting?
- Stability — Is accuracy consistent across periods, or does it vary widely?
- Data requirements — Does the algorithm need more history than you have? (e.g., SARIMA and Holt-Winters need 2+ years of data for reliable seasonality)
- Product fit — Does your demand pattern match what the algorithm models?
Algorithm selection guide
| If your products have… | Consider… |
|---|---|
| Stable, predictable demand | 90-Day Rolling Average, 120-Day Rolling Average |
| Recent trend changes | EWMA, Rolling Momentum |
| Clear seasonality | Simplified Seasonal, Holt-Winters |
| Seasonal patterns + growth | Seasonal YoY Growth, Simple Seasonal with Trend Adjustment |
| Complex seasonal patterns | SARIMA, Holt-Winters |
| Linear growth or decline | Linear Trend |
| External or manual forecasts | Static Baseline |
Using comparison results
Switching models
If comparison reveals a better-performing model:
- Note which algorithm performs best and for which products
- Go to Forecast Admin → Model Selection
- Either update the active model or create a new model with the better algorithm
- Monitor accuracy after the switch to confirm improvement
Per-SKU optimization
Tether supports different models for different SKU-channel combinations:
- Create multiple models, each with a different algorithm
- Assign the best-performing model per SKU-channel pair
- Use comparison metrics to validate your choices over time
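Per-SKU assignment can be thought of as picking the lowest-MAPE model for each SKU-channel pair. A sketch with hypothetical scores (the pair keys and MAPE values are made up for illustration):

```python
# Hypothetical MAPE (%) per model for each SKU-channel pair.
mape_by_pair = {
    ("SKU-001", "DTC"): {"EWMA": 8.0, "Holt-Winters": 11.0},
    ("SKU-002", "Wholesale"): {"EWMA": 22.0, "Linear Trend": 14.0},
}

# Assign each pair the model with the lowest MAPE.
assignments = {
    pair: min(scores, key=scores.get)
    for pair, scores in mape_by_pair.items()
}
```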
Reporting
Exporting comparison data
Using comparison reports
Exports are useful for:
- Monthly accuracy reviews with stakeholders
- Documenting model selection decisions
- Identifying trends in forecast performance over time
Best practices
Compare regularly
Review model performance periodically:
- Monthly for businesses with frequent demand changes
- Quarterly for stable, predictable businesses
- After major events — promotions, product launches, or supply disruptions
Use enough data
Don’t judge algorithms on too little history:
- Minimum 3–6 months of actuals for meaningful comparison
- Include at least one full seasonal cycle for seasonal algorithms
- Account for unusual events (stockouts, promotions) that may skew metrics
Consider business context
Pure accuracy isn’t the only factor:
- Under-forecasting may cause stockouts and lost sales
- Over-forecasting ties up capital in excess inventory
- Check the Bias metric to see if a model consistently over- or under-forecasts
- Choose the error direction that’s less costly for your business
Document model changes
When switching models:
- Record which algorithm was replaced and why
- Note the accuracy improvement you expect
- Set a follow-up date to validate the change worked
Next steps
Forecast Algorithms
Learn how each algorithm works and when to use it
Forecast Admin
Configure and manage forecast models
Forecast Dashboard
View and edit forecasts
Sales History
Analyze historical data