
Forecast Comparison

The Forecast Comparison view lets you compare the outputs of different forecast models side by side. Each model uses one of Tether’s 11 forecast algorithms, and comparing their outputs helps you choose the best model for your products.

Accessing Forecast Comparison

You can compare models in two ways:
  1. Comparison page — Navigate to Demand Forecast → Forecast Comparison.
  2. Inline comparison — Click any row in the forecast table to open a modal showing all model outputs for that SKU-channel combination.
Model comparison and selection is available to Admin users. Non-admin users can view the currently active model’s output on the forecast dashboard.

Why compare forecasts?

Comparing forecasts helps you:
Purpose | Benefit
--- | ---
Model selection | Choose the best-performing algorithm for your product mix
Accuracy analysis | Understand how well each model predicts actual sales
Segment optimization | Find which algorithms work best for specific product types
Ongoing improvement | Track forecast accuracy over time and switch models when needed

Comparison view

Layout

The comparison view shows:
  • Model list — all defined models with their algorithm type
  • Side-by-side outputs — forecast values from each model for the same SKU, channel, and period
  • Actuals — historical sales values for completed periods (visually distinguished from forecasts)
  • Variance — differences between each model’s forecast and actual sales

Available algorithms

Each model in the comparison uses one of the following algorithms. See Forecast Algorithms for full details.
Algorithm | Best for
--- | ---
90-Day Rolling Average | Stable, predictable demand
120-Day Rolling Average (Recursive) | Volatile products, strategic planning
Exponentially Weighted Moving Average (EWMA) | Trending products, recent pattern changes
Linear Trend | Products with consistent growth or decline
Simplified Seasonal | Seasonal and holiday-driven products
Simple Seasonal with Trend Adjustment | Seasonal products with growth trends
Seasonal Year-over-Year Growth | Seasonal products with observed annual growth
Rolling Momentum | Accelerating or decelerating products
Holt-Winters (Triple Exponential Smoothing) | Complex seasonal patterns with trends
SARIMA | Complex time series with seasonal components
Static Baseline | External or manually uploaded forecasts

Comparing models

Selecting models to compare

  1. Open the comparison view — Navigate to Demand Forecast → Forecast Comparison, or click a row in the forecast table to open the inline model selection dialog.
  2. View model outputs — The view shows each defined model as a row, with columns representing time periods. The currently active model is highlighted.
  3. Compare values — Review forecast values side by side. For completed periods, all models show the actual sales value. For current and future periods, each model shows its own forecast output.
  4. Select a model — If you want to switch, select a different model and click Apply to make it the active forecast.

Side-by-side view

The table shows forecast outputs from each model for the same SKU-channel combination:
SKU | Period | 90-Day Rolling Avg | EWMA | Holt-Winters | Actual | Variance (Rolling Avg) | Variance (EWMA)
--- | --- | --- | --- | --- | --- | --- | ---
SKU-001 | Jan | 100 | 110 | 108 | 105 | -5 | +5
SKU-001 | Feb | 120 | 115 | 117 | 118 | +2 | -3
SKU-002 | Jan | 50 | 55 | 52 | 48 | +2 | +7
Completed periods display actual sales history, visually distinguished from forecast values. All models show the same actual value for completed periods.
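The variance columns above follow directly from the forecast and actual values. A minimal sketch of the derivation, using the example table's numbers (variance = forecast - actual, so positive values are over-forecasts):

```python
# Example rows from the side-by-side table above:
# (sku, period, rolling_avg_forecast, ewma_forecast, actual)
rows = [
    ("SKU-001", "Jan", 100, 110, 105),
    ("SKU-001", "Feb", 120, 115, 118),
    ("SKU-002", "Jan", 50, 55, 48),
]

# Variance per model = forecast - actual
variances = [
    (sku, period, rolling - actual, ewma - actual)
    for sku, period, rolling, ewma, actual in rows
]
print(variances)
```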

Accuracy metrics

Key metrics

Tether calculates four accuracy metrics for each model’s forecast against actual sales:
Metric | Description | Formula | Good value
--- | --- | --- | ---
MAPE | Mean Absolute Percentage Error — average percentage difference between forecast and actual | mean(abs(actual - forecast) / actual) × 100 | Lower is better (< 20%)
MAE | Mean Absolute Error — average absolute difference in units | mean(abs(actual - forecast)) | Lower is better
Bias | Systematic over- or under-forecasting tendency | mean(forecast - actual) / mean(actual) × 100 | Close to 0%
Hit Rate | Percentage of periods where forecast falls within an acceptable range of actual | % of periods where abs(error) < threshold | Higher is better (> 75%)
Accuracy metrics are only calculated for completed periods where actual sales data is available. Future periods do not contribute to accuracy scores.
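A minimal sketch of the four metrics, following the formulas in the table above (this is an illustration, not Tether's internal implementation; the 15% hit-rate band is an assumed threshold):

```python
def accuracy_metrics(actuals, forecasts, hit_threshold=0.15):
    """Compute MAPE, MAE, Bias, and Hit Rate over completed periods."""
    pairs = list(zip(actuals, forecasts))
    n = len(pairs)
    mape = sum(abs(a - f) / a for a, f in pairs) / n * 100
    mae = sum(abs(a - f) for a, f in pairs) / n
    # mean(forecast - actual) / mean(actual) simplifies to the ratio of sums
    bias = sum(f - a for a, f in pairs) / sum(actuals) * 100
    hit_rate = sum(abs(a - f) / a <= hit_threshold for a, f in pairs) / n * 100
    return {"MAPE": mape, "MAE": mae, "Bias": bias, "HitRate": hit_rate}

# Using the actuals and the 90-Day Rolling Avg forecasts from the example table:
print(accuracy_metrics(actuals=[105, 118, 48], forecasts=[100, 115, 50]))
```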

Understanding MAPE

MAPE (Mean Absolute Percentage Error) is the primary metric used for parameter optimization and model comparison:
MAPE = mean(|Actual - Forecast| / Actual) × 100%
MAPE range | Interpretation
--- | ---
< 10% | Excellent accuracy
10–20% | Good accuracy
20–30% | Acceptable
> 30% | Needs improvement
MAPE can be misleading for SKUs with very low sales volumes — a difference of 1 unit on a product that sells 2 units shows 50% MAPE. Use MAE alongside MAPE for low-volume items.
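The caveat above, in numbers: a 1-unit miss on a SKU that sells 2 units per period looks alarming in MAPE but trivial in MAE.

```python
actual, forecast = 2, 1

# Percentage error: a single unit off is half the sales volume
mape = abs(actual - forecast) / actual * 100   # 50.0, "needs improvement" territory

# Absolute error: the same miss is just one unit
mae = abs(actual - forecast)                   # 1 unit, negligible in absolute terms

print(mape, mae)
```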

Metrics cards

Summary cards show overall performance per model. For example:
90-Day Rolling Average
├── MAPE: 18.4%
├── MAE: 23 units
├── Bias: +2.1%
└── Hit Rate: 72%

Holt-Winters
├── MAPE: 12.8%
├── MAE: 19 units
├── Bias: -1.5%
└── Hit Rate: 82%

EWMA
├── MAPE: 15.1%
├── MAE: 21 units
├── Bias: +0.8%
└── Hit Rate: 78%
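One way to turn the summary cards above into a selection is to rank by lowest MAPE with Hit Rate as a tiebreaker. A hypothetical sketch using the example card numbers (not live data):

```python
# Example metrics from the summary cards above
models = {
    "90-Day Rolling Average": {"MAPE": 18.4, "HitRate": 72},
    "Holt-Winters": {"MAPE": 12.8, "HitRate": 82},
    "EWMA": {"MAPE": 15.1, "HitRate": 78},
}

# Lower MAPE wins; higher Hit Rate breaks ties
best = min(models, key=lambda name: (models[name]["MAPE"], -models[name]["HitRate"]))
print(best)  # Holt-Winters has the lowest MAPE of the three
```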

Filtering comparisons

By product

Compare model performance for specific products:
  1. Filter by collection, tag, or individual SKU
  2. See how each algorithm performs for different product types
  3. Identify algorithms that excel for specific demand patterns

By time period

Analyze accuracy over different periods:
Period type | Insight
--- | ---
Recent (last 30–90 days) | Current model performance
Historical (6+ months) | Long-term accuracy trends
Seasonal (peak periods) | Performance during seasonal demand spikes
Post-event (after promotions) | Accuracy around demand disruptions

By channel

Compare accuracy across sales channels:
  • Some algorithms may perform better for specific channels (e.g., wholesale vs. DTC)
  • Channel-specific demand patterns affect algorithm suitability
  • Consider creating separate models for channels with very different demand profiles

Variance analysis

Understanding variance

Variance | Meaning
--- | ---
Positive | Forecast was higher than actual (over-forecast)
Negative | Forecast was lower than actual (under-forecast)
Near zero | Accurate forecast

Variance patterns

Look for systematic patterns in variance to diagnose model fit:
Pattern | Indicates | Action
--- | --- | ---
Consistently positive | Model over-forecasts | Try a more reactive algorithm (e.g., EWMA) or reduce trend parameters
Consistently negative | Model under-forecasts | Check if growth trends are being captured; try Linear Trend or Seasonal YoY Growth
Random | No bias, normal variance | Model is well-calibrated
Seasonal misalignment | Seasonal pattern timing is off | Switch to a seasonal algorithm (Simplified Seasonal, Holt-Winters) or verify seasonal period settings
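A rough sketch of how the "consistently positive" and "consistently negative" patterns could be flagged from a list of per-period variances (forecast - actual). The 80% threshold is an assumed cutoff for illustration, not Tether's actual rule:

```python
def variance_pattern(variances, share=0.8):
    """Classify a variance series as systematically biased or mixed."""
    n = len(variances)
    if sum(v > 0 for v in variances) / n >= share:
        return "consistently positive (over-forecast)"
    if sum(v < 0 for v in variances) / n >= share:
        return "consistently negative (under-forecast)"
    return "mixed (no systematic bias)"

print(variance_pattern([5, 3, 8, 2, 6]))    # every period over-forecast
print(variance_pattern([5, -3, 2, -6, 1]))  # errors in both directions
```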

Model performance by segment

Product segments

Different algorithms tend to perform best for different demand patterns:
Segment | Recommended algorithms | Why
--- | --- | ---
High volume, stable | 90-Day Rolling Average, EWMA | Simple averages smooth noise on stable products
Seasonal products | Simplified Seasonal, Holt-Winters, SARIMA | Capture repeating seasonal patterns
Products with growth trends | Linear Trend, Seasonal YoY Growth | Model upward or downward demand trajectories
New or volatile products | EWMA, Rolling Momentum | React quickly to recent data without requiring long history
Complex seasonal + trend | Holt-Winters, SARIMA | Handle both trend and seasonality simultaneously
Externally planned | Static Baseline | Use your own uploaded forecast values
See Forecast Algorithms for detailed descriptions and parameter information for each algorithm.

Analyzing by segment

  1. Filter to a product segment (by collection, tag, or channel)
  2. Compare model metrics across the filtered set
  3. Identify which algorithm performs best for that segment
  4. Consider creating a model specifically configured for that segment

Best model selection

Choosing the right algorithm

Consider these factors when selecting an algorithm for a model:
  1. Overall accuracy — Which has the lowest MAPE across your product set?
  2. Bias — Is there systematic over- or under-forecasting?
  3. Stability — Is accuracy consistent across periods, or does it vary widely?
  4. Data requirements — Does the algorithm need more history than you have? (e.g., SARIMA and Holt-Winters need 2+ years of data for reliable seasonality)
  5. Product fit — Does your demand pattern match what the algorithm models?

Algorithm selection guide

If your products have… | Consider…
--- | ---
Stable, predictable demand | 90-Day Rolling Average, 120-Day Rolling Average
Recent trend changes | EWMA, Rolling Momentum
Clear seasonality | Simplified Seasonal, Holt-Winters
Seasonal patterns + growth | Seasonal YoY Growth, Simple Seasonal with Trend Adjustment
Complex seasonal patterns | SARIMA, Holt-Winters
Linear growth or decline | Linear Trend
External or manual forecasts | Static Baseline

Using comparison results

Switching models

If comparison reveals a better-performing model:
  1. Note which algorithm performs best and for which products
  2. Go to Forecast Admin → Model Selection
  3. Either update the active model or create a new model with the better algorithm
  4. Monitor accuracy after the switch to confirm improvement

Per-SKU optimization

Tether supports different models for different SKU-channel combinations:
  • Create multiple models, each with a different algorithm
  • Assign the best-performing model per SKU-channel pair
  • Use comparison metrics to validate your choices over time
Start with a broadly accurate algorithm (like EWMA or Holt-Winters), then create specialized models for product segments where a different algorithm significantly outperforms.
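Conceptually, per-SKU assignment can be thought of as a lookup from SKU-channel pair to model, with a broadly accurate default as the fallback. A hypothetical sketch (names and structure are illustrative, not Tether's data model):

```python
# Illustrative per-SKU-channel model assignments
assignments = {
    ("SKU-001", "DTC"): "Holt-Winters",
    ("SKU-002", "Wholesale"): "EWMA",
}

# Broadly accurate fallback for unassigned pairs
DEFAULT_MODEL = "EWMA"

def model_for(sku, channel):
    """Return the assigned model for a SKU-channel pair, or the default."""
    return assignments.get((sku, channel), DEFAULT_MODEL)

print(model_for("SKU-001", "DTC"))  # Holt-Winters
print(model_for("SKU-999", "DTC"))  # no assignment, falls back to EWMA
```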

Reporting

Exporting comparison data

  1. Set up your view — Configure the models, filters, and date range you want to export.
  2. Export — Click the Export button to download comparison data.
  3. Review — The export includes forecast values, actuals, variances, and accuracy metrics for all selected models.

Using comparison reports

Exports are useful for:
  • Monthly accuracy reviews with stakeholders
  • Documenting model selection decisions
  • Identifying trends in forecast performance over time

Best practices

Review model performance periodically:
  • Monthly for businesses with frequent demand changes
  • Quarterly for stable, predictable businesses
  • After major events — promotions, product launches, or supply disruptions
Don’t judge algorithms on too little history:
  • Minimum 3–6 months of actuals for meaningful comparison
  • Include at least one full seasonal cycle for seasonal algorithms
  • Account for unusual events (stockouts, promotions) that may skew metrics
Pure accuracy isn’t the only factor:
  • Under-forecasting may cause stockouts and lost sales
  • Over-forecasting ties up capital in excess inventory
  • Check the Bias metric to see if a model consistently over- or under-forecasts
  • Choose the error direction that’s less costly for your business
When switching models:
  • Record which algorithm was replaced and why
  • Note the accuracy improvement you expect
  • Set a follow-up date to validate the change worked

Next steps

Forecast Algorithms

Learn how each algorithm works and when to use it

Forecast Admin

Configure and manage forecast models

Forecast Dashboard

View and edit forecasts

Sales History

Analyze historical data