sMAPE Demystified: A Thorough Guide to sMAPE, SMAPE and Symmetric Forecast Error

Forecast accuracy is a perennial concern for analysts, data scientists and decision makers alike. When comparing predictive models, it is essential to use an error metric that reflects the behaviour of forecasts in a balanced way. The symmetric mean absolute percentage error, commonly written as sMAPE or SMAPE, is one of the most widely used metrics for this purpose. In this guide, we unpack the concept, explain how to calculate it, discuss when to use it, and show practical implementations across popular tools. Whether you are new to time series evaluation or looking to refine your toolbox, this article offers clear explanations, practical examples and actionable tips.
What is sMAPE? The Role of SMAPE in Forecast Evaluation
Defining the metric
The sMAPE family of metrics is designed to measure the accuracy of forecasted values relative to the real observations in a scale-invariant way. Unlike the traditional mean absolute percentage error, the sMAPE formulation accounts for both the forecast and the actual value, ensuring that overestimation and underestimation are treated symmetrically. In practice, sMAPE provides a percentage error figure that makes it easier to compare forecasts across different series and units.
The correct forms: sMAPE, SMAPE and their cousins
In literature and practice you will encounter several spellings and capitalisations. The most common variants are:
- sMAPE — with a lowercase ‘s’ and an uppercase ‘MAPE’ (the widely used standard in forecasting literature).
- SMAPE — all capital letters, sometimes used in software documentation or older articles.
- smape — a fully lowercase form that some practitioners adopt for readability in plain-text contexts such as code and file names.
All refer to the same underlying concept: a symmetric mean absolute percentage error. When writing headings or formal sections, you may see sMAPE or SMAPE used interchangeably; for readability, many guides prefer sMAPE to emphasise the symmetry of the denominator and the equal treatment of positive and negative errors.
Why symmetric error matters
MAPE, the mean absolute percentage error, can disproportionately penalise forecasts when the actual values are small. This happens because the percentage error becomes very large as the denominator shrinks. The sMAPE formulation mitigates this by using a symmetric denominator that depends on the sum of the absolute actual and forecast values. In practical terms, this makes sMAPE less sensitive to scale and more robust when comparing forecasts across different blocks of data or across products with widely varying sales levels.
How to Calculate sMAPE: Formula and Intuition
The standard formula
The most widely used expression for sMAPE is either of the following equivalent forms. Both produce a percentage result, typically interpreted as a forecast error relative to the observed scale.
Form 1 (explicit, with the averaged denominator):
sMAPE = (100% / n) × ∑t=1 to n [ |Ft − At| / ( (|At| + |Ft|) / 2 ) ]
Form 2 (often used in software and literature):
sMAPE = (200% / n) × ∑t=1 to n [ |Ft − At| / ( |At| + |Ft| ) ]
Where:
– Ft is the forecast for period t,
– At is the actual value observed in period t,
– n is the number of forecast-observation pairs.
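Since the two forms differ only in where the factor of two sits, they always produce the same number. A quick NumPy sketch with made-up values (chosen non-zero so no special handling is needed) confirms the equivalence:

```python
import numpy as np

# Hypothetical three-period example (all values non-zero, so no guard needed)
A = np.array([50.0, 20.0, 100.0])   # actuals
F = np.array([60.0, 25.0, 90.0])    # forecasts
n = len(A)

# Form 1: denominator is the average of |A| and |F|
form1 = (100.0 / n) * np.sum(np.abs(F - A) / ((np.abs(A) + np.abs(F)) / 2))

# Form 2: the factor of 1/2 is folded into the leading constant
form2 = (200.0 / n) * np.sum(np.abs(F - A) / (np.abs(A) + np.abs(F)))

print(form1, form2)  # identical values — the two forms are algebraically equal
```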
Handling zeros and near-zeros
A notable nuance of sMAPE is its behaviour when both At and Ft are zero. In the most common formulations, the term is defined as 0 for that period, since there is no error when both values are zero. If only one of the two values is zero, the denominator reduces to the absolute value of the non-zero term, and the Form 2 ratio equals exactly 1 (equivalently, the Form 1 ratio equals 2), so that period contributes the maximum 200%/n to the overall average. In practice, teams often add a small epsilon to the denominator to avoid division by zero in edge cases, or they treat such periods specially to maintain interpretability.
Interpreting the result
The sMAPE value typically ranges from 0% to 200%, with lower values indicating better forecast accuracy. Values near 0% imply near-perfect forecasts, while values approaching 200% indicate substantial discrepancies, especially when one of the values is zero while the other is not. Because the metric is bounded, it provides a straightforward gauge of relative error that can be compared across time series and products with different scales.
Practical Examples: How sMAPE Works in Real Data
Simple numerical example
Consider a tiny forecast task with the following actuals and forecasts:
- Period 1: A = 50, F = 60
- Period 2: A = 20, F = 25
- Period 3: A = 0, F = 10
- Period 4: A = 100, F = 90
Using Form 2, compute each term:
- Period 1: |60 − 50| / (|50| + |60|) = 10 / 110 ≈ 0.0909
- Period 2: |25 − 20| / (20 + 25) = 5 / 45 ≈ 0.1111
- Period 3: |10 − 0| / (0 + 10) = 10 / 10 = 1.0
- Period 4: |90 − 100| / (100 + 90) = 10 / 190 ≈ 0.0526
Sum of terms ≈ 0.0909 + 0.1111 + 1.0 + 0.0526 ≈ 1.2546. Multiply by 200% / n (n = 4):
sMAPE ≈ (200% / 4) × 1.2546 = 50% × 1.2546 ≈ 62.7%.
Interpretation: on average, the forecasts deviate by about 62.7% of a representative scale across these four periods, with the zero-to-nonzero transition (Period 3) driving most of the error.
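The arithmetic above is easy to verify programmatically; a short NumPy check of the same four periods:

```python
import numpy as np

A = np.array([50.0, 20.0, 0.0, 100.0])   # actuals from the example
F = np.array([60.0, 25.0, 10.0, 90.0])   # forecasts from the example

terms = np.abs(F - A) / (np.abs(A) + np.abs(F))  # Form 2 per-period terms
smape = (200.0 / len(A)) * terms.sum()
print(round(smape, 1))  # ≈ 62.7
```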
Edge case exploration
Suppose A = 0 and F = 0 in a period. The term for that period is typically defined as 0, contributing nothing to the overall sMAPE. If A = 0 and F = 5 in a period, the term becomes:
|5 − 0| / (|0| + |5|) = 5 / 5 = 1.0, so that period contributes 200% / n to the overall sMAPE — the maximum possible per-period penalty, reached whenever one of the two values is zero and the other is not.
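Both edge cases can be confirmed with a small helper that guards the zero denominator (the function name is illustrative):

```python
import numpy as np

def smape_terms(actual, forecast):
    """Per-period Form 2 terms, defining 0/0 periods as zero error."""
    a = np.asarray(actual, dtype=float)
    f = np.asarray(forecast, dtype=float)
    denom = np.abs(a) + np.abs(f)
    terms = np.zeros_like(denom)
    mask = denom != 0          # leave the 0/0 periods at zero
    terms[mask] = np.abs(f[mask] - a[mask]) / denom[mask]
    return terms

print(smape_terms([0.0], [0.0]))  # [0.] — both zero: defined as no error
print(smape_terms([0.0], [5.0]))  # [1.] — maximum possible per-period term
```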
When to Use sMAPE: Practical Guidance for Forecast Evaluation
Situations favouring sMAPE
- You compare forecasts across multiple products or regions with different scales.
- You want a symmetric treatment of over- and under-forecasting errors.
- You need a bounded metric that communicates error as a percentage.
When sMAPE is not the best choice
- If the data include many near-zero actuals and the forecast is noisy, sMAPE can produce volatile results.
- When you primarily care about relative under- or over-prediction across a narrow band of outcomes, alternative metrics such as MAPE or MASE might be informative.
- When your goal includes penalising large absolute errors regardless of the scale, consider complementary metrics beyond sMAPE.
Comparing sMAPE to MAPE and other metrics
MAPE (mean absolute percentage error) has intuitive appeal but can distort comparisons when series vary in scale or contain zeros. sMAPE mitigates some of these issues by using a symmetric denominator, yet it introduces a non-linear relationship with error magnitudes, especially near zero. In practice, many teams use sMAPE alongside MAPE to obtain a fuller picture of forecast accuracy. In other words, no single metric tells the whole story; using a small battery of metrics often yields the most robust conclusions.
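A side-by-side computation on a toy series with one near-zero actual makes the contrast concrete (the data are illustrative):

```python
import numpy as np

A = np.array([100.0, 100.0, 1.0])   # one near-zero actual
F = np.array([110.0, 95.0, 3.0])

# MAPE divides by |A| alone; the A = 1 period dominates the average
mape = 100.0 * np.mean(np.abs(F - A) / np.abs(A))

# sMAPE's symmetric denominator caps that period's term at 1.0
smape = 200.0 * np.mean(np.abs(F - A) / (np.abs(A) + np.abs(F)))

print(round(mape, 1), round(smape, 1))  # MAPE ≈ 71.7, sMAPE ≈ 38.2
```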
Common Pitfalls and How to Avoid Them
Denominator pitfalls
Periods with very small |At| and |Ft| can inflate the term, because even a tiny absolute error is large relative to a tiny denominator. To address this, researchers and practitioners often add a small epsilon (e.g., 1e-6) to the denominator or implement a threshold below which the period is treated specially. This reduces the undue influence of noisy low-value observations on the overall sMAPE score.
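One way to implement the epsilon guard is a minimal variant of the Form 2 computation; the epsilon value here is a tunable assumption, not a standard:

```python
import numpy as np

def smape_eps(y_true, y_pred, eps=1e-6):
    """sMAPE (Form 2) with a small epsilon added to the denominator,
    so near-zero pairs cannot divide by zero or blow up the average."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = np.abs(y_true) + np.abs(y_pred) + eps
    return 200.0 * np.mean(np.abs(y_pred - y_true) / denom)

print(round(smape_eps([0.0, 50.0], [0.0, 60.0]), 2))  # 9.09 — the 0/0 pair contributes nothing
```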
Zero and near-zero values
Zeros present a unique challenge because the ratio becomes highly sensitive to small forecast errors when the actual value is zero or near zero. The usual remedies are to define the term as zero when both values are zero, to add a small epsilon to the denominator, and to report a complementary metric alongside sMAPE that captures absolute deviation without percentage scaling.
Scale, units and comparability
Although sMAPE is designed to be scale-invariant, careful preparation of data matters. Ensure that the series you compare are meaningful to aggregate; for instance, you would not compare a retail price index in pounds with a different currency series without adjusting for units or performing a consistent transformation.
Handling missing data
In real-world datasets, periods may be missing values for actuals or forecasts. Exclude such periods from the sMAPE calculation or impute missing values using a transparent and justifiable method. Do not simply fill gaps with zeros or with arbitrary constants without documenting how gaps were treated, as this can bias the metric.
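With pandas, the exclusion can be made explicit and auditable; a minimal sketch (column names and values are assumptions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "actual":   [50.0, np.nan, 0.0, 100.0],
    "forecast": [60.0, 25.0, 10.0, np.nan],
})

# Drop periods where either value is missing, and report how many were excluded
clean = df.dropna(subset=["actual", "forecast"])
print(f"excluded {len(df) - len(clean)} of {len(df)} periods")

denom = clean["actual"].abs() + clean["forecast"].abs()
terms = (clean["forecast"] - clean["actual"]).abs() / denom  # no zero denominators here
smape = 200.0 * terms.mean()
print(round(smape, 1))  # ≈ 109.1
```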
Implementing sMAPE in Practice: Quick Start with Popular Tools
Excel and Google Sheets
In spreadsheets, you can implement the sMAPE formula with a few rows of arithmetic. For Form 2, a compact approach is:
= (200% / n) * SUM( ABS(F - A) / (ABS(A) + ABS(F)) )
Where A is the actuals range, F is the forecast range, and n is the number of non-empty pairs. Note that this must be evaluated as an array formula (Ctrl+Shift+Enter in older Excel versions; ARRAYFORMULA in Google Sheets). To handle the edge case of A = F = 0, you can wrap the term in an IF statement to return 0 for that row:
= (200% / n) * SUM( IF( ABS(A) + ABS(F) = 0, 0, ABS(F - A) / (ABS(A) + ABS(F)) ) )
Python (NumPy and pandas)
Python users often handle sMAPE with NumPy or pandas. Here is a concise function using the Form 2 definition:
import numpy as np

def sMAPE(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = np.abs(y_true) + np.abs(y_pred)
    # Define the term as 0 where both values are zero (denominator is 0)
    mask = denom != 0
    terms = np.zeros_like(y_true, dtype=float)
    terms[mask] = np.abs(y_pred[mask] - y_true[mask]) / denom[mask]
    return 200.0 * np.mean(terms)
In this snippet, periods where both actual and forecast are zero contribute 0 to the mean, and periods with a non-zero sum contribute their proportionate error.
R
For R users, a vectorised approach is also straightforward:
sMAPE <- function(actual, forecast) {
  denom <- abs(actual) + abs(forecast)
  term <- ifelse(denom == 0, 0, abs(forecast - actual) / denom)
  return(200 * mean(term))
}
Interpreting sMAPE in Real-World Scenarios
Product portfolios and category comparisons
When evaluating forecasts across a portfolio of products, sMAPE provides a consistent scale-free measure that facilitates fair comparisons. If you forecast demand for multiple SKUs with widely different average volumes, sMAPE helps to avoid undue emphasis on high-volume items simply due to their scale. However, you should still interpret the metric in the context of each product, as some categories may inherently exhibit higher variability.
Time-series vs cross-sectional forecasts
For time-series forecasting, sMAPE can illuminate how well the model tracks patterns over time, including seasonality and trend. When comparing across cross-sectional units (e.g., stores, regions), the symmetry of the denominator makes sMAPE particularly appealing, as it reduces bias toward any single unit with extreme values.
Communicating results to stakeholders
Because sMAPE is expressed as a percentage, it is intuitive for stakeholders. A report that states “our sMAPE is 15% on average” can be more accessible than a raw error figure. It is helpful, though, to supplement the main metric with a short narrative describing where the model performs well and where it struggles, supported by concrete examples or visual diagnostics.
Related Metrics and How They Compare to sMAPE
MAPE and MAE: complementary perspectives
MAPE (mean absolute percentage error) emphasises proportional errors but can be distorted by small denominators. MAE (mean absolute error) focuses on absolute deviations without normalisation. Both can be useful alongside sMAPE to provide a fuller picture of forecast accuracy. In practice, look at a small suite of metrics rather than relying on a single measure.
MASE and the role of benchmarks
MASE (mean absolute scaled error) compares forecast accuracy with a naïve benchmark, which is particularly useful when your data contain seasonality. While MASE and sMAPE capture different aspects of forecast error, reporting both can assist in differentiating model quality and practical usefulness.
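For completeness, here is a minimal non-seasonal MASE sketch; seasonal variants scale by the seasonal naïve error instead, and the data below are illustrative:

```python
import numpy as np

def mase(y_train, y_true, y_pred):
    """Mean absolute scaled error: forecast MAE divided by the
    in-sample MAE of the one-step naive forecast on the training data."""
    y_train = np.asarray(y_train, dtype=float)
    naive_mae = np.mean(np.abs(np.diff(y_train)))  # |y_t - y_{t-1}| averaged
    fc_mae = np.mean(np.abs(np.asarray(y_pred, dtype=float)
                            - np.asarray(y_true, dtype=float)))
    return fc_mae / naive_mae

train = [10.0, 12.0, 11.0, 13.0]            # history used for scaling
actual, forecast = [14.0, 15.0], [13.0, 16.0]
print(round(mase(train, actual, forecast), 3))  # 0.6 — better than naive
```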
Other symmetric or percentage-based metrics
There are variations that attempt to refine the treatment of zero values or to adjust for skew in the data. When you explore alternative metrics, ensure you understand the exact formula and the implications for interpretation, particularly in how the denominator behaves across the range of observed values.
Best Practices for Using sMAPE in Forecast Evaluation
- Use sMAPE as part of a metric suite: combine with MAPE, MAE, or MASE to gain a comprehensive view of forecast quality.
- Document how you handle zeros and missing values to ensure reproducibility and transparency.
- Prefer Form 2 (the 200% formulation) when communicating results to non-technical stakeholders for a clear interpretation of percentage errors.
- Be mindful of the scale of the data; for high-variance series, consider segmenting the analysis to avoid masking poor performance in specific subgroups.
- Include visual diagnostics alongside sMAPE numbers, such as forecast error distributions, residual plots, and time-series overlays to provide context.
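The segmentation advice above is straightforward with a pandas groupby; a sketch with hypothetical SKU data:

```python
import pandas as pd

# Illustrative per-SKU data; column names and values are assumptions
df = pd.DataFrame({
    "sku":      ["A", "A", "B", "B"],
    "actual":   [100.0, 120.0, 5.0, 8.0],
    "forecast": [110.0, 115.0, 9.0, 6.0],
})

# Per-period Form 2 terms, then averaged within each SKU
df["term"] = (df["forecast"] - df["actual"]).abs() / (
    df["actual"].abs() + df["forecast"].abs()
)
per_sku = 200.0 * df.groupby("sku")["term"].mean()
print(per_sku.round(1))  # A ≈ 6.9, B ≈ 42.9 — the low-volume SKU is much worse
```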
Frequently Asked Questions About sMAPE
Is sMAPE always a percentage?
Yes. By design, sMAPE is expressed as a percentage, typically between 0% and 200%, reflecting the average proportional difference between forecast and actual values.
What happens when both actual and forecast are zero?
In most formulations, the corresponding term is defined as zero to avoid spurious inflation of the score. This ensures the metric cleanly reflects non-zero comparisons.
Can sMAPE be negative?
No. By its mathematical construction, sMAPE is non-negative, since it aggregates absolute differences and non-negative denominators.
Which version should I report, sMAPE or SMAPE?
Both refer to the same concept. The choice often comes down to organisational convention or readability. If you publish reports externally, select the version that aligns with your branding and ensure consistency throughout the document.
Conclusion: Elevating Forecast Evaluation with sMAPE
The sMAPE family of metrics — whether you encounter it written as sMAPE, SMAPE or lowercase smape in documentation — offers a robust, symmetric lens for assessing forecast accuracy. Its defining strength lies in its balanced treatment of over- and under-forecasting, reducing bias that can arise when relying solely on traditional percentage-based errors. When used thoughtfully, alongside complementary metrics and sound data practices, sMAPE can illuminate where models excel and where improvements are needed, guiding better decisions and more reliable forecasts in business, finance and science.
Glossary: Key Terms You Might See When Reading About sMAPE
- sMAPE: The symmetric mean absolute percentage error, a percentage-based forecast error metric emphasising symmetry between actuals and forecasts.
- SMAPE: An alternative capitalisation of the same metric, used in different texts and software.
- Forecast accuracy: A measure of how closely predicted values match observed data.
- Denominator symmetry: The concept of treating over- and under-forecast errors equally by using a symmetric denominator.
- Edge cases: Periods with zero or near-zero actual and forecast values that require careful handling in the calculation.
- Normalization: The process of expressing error relative to the scale of the data, enabling cross-series comparisons.
With these foundations, you can apply sMAPE confidently to evaluate, compare and refine forecasting models. Remember to balance numerical rigour with clear communication, and to supplement sMAPE figures with contextual insights that tell the full story of model performance.