
Evaluation Metrics

The built-in accuracy metrics for evaluating forecasts.


Error Metrics

MAE (Mean Absolute Error)

Average absolute difference.

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| actual | DOUBLE[] | Yes | - | Actual values |
| predicted | DOUBLE[] | Yes | - | Predicted values |

Formula: MAE = Σ|actual - predicted| / n

Range: 0 to infinity. Lower is better.

Example

SELECT anofox_fcst_ts_mae(LIST(actual), LIST(predicted)) as MAE
FROM comparison;
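The formula can be sketched in plain Python (an illustration of the math only, not the extension's implementation; the function name and sample data are made up):

```python
def mae(actual, predicted):
    # MAE = sum(|actual - predicted|) / n
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

print(mae([10, 12, 14], [11, 11, 15]))  # each absolute error is 1, so MAE = 1.0
```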

RMSE (Root Mean Squared Error)

Square root of mean squared error.

Formula: RMSE = sqrt(Σ(actual - predicted)² / n)

Range: 0 to infinity. Lower is better.

Example

SELECT anofox_fcst_ts_rmse(LIST(actual), LIST(predicted)) as RMSE
FROM comparison;
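As a plain-Python sketch of the formula (illustrative, not the extension's code):

```python
import math

def rmse(actual, predicted):
    # RMSE = sqrt(sum((actual - predicted)^2) / n)
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

print(rmse([10, 12, 14], [11, 11, 15]))  # each squared error is 1, so RMSE = 1.0
```

Because errors are squared before averaging, RMSE penalizes large individual errors more heavily than MAE does.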

MAPE (Mean Absolute Percentage Error)

Average percentage error.

Formula: MAPE = (100 / n) × Σ(|actual - predicted| / |actual|)

Undefined when any actual value is zero.

Range: 0 to infinity (usually 0-100%).

Example

SELECT anofox_fcst_ts_mape(LIST(actual), LIST(predicted)) as MAPE_pct
FROM comparison;
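A plain-Python sketch of the formula (illustrative only; function name and data are made up):

```python
def mape(actual, predicted):
    # MAPE = (100 / n) * sum(|actual - predicted| / |actual|)
    # Undefined when any actual value is zero.
    return 100 / len(actual) * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    )

print(mape([100, 200], [110, 180]))  # both errors are 10% of actual, so ≈ 10.0
```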

SMAPE (Symmetric MAPE)

Symmetric percentage error.

Formula: SMAPE = (100 / n) × Σ(2 × |actual - predicted| / (|actual| + |predicted|))

Range: 0 to 200%.

Example

SELECT anofox_fcst_ts_smape(LIST(actual), LIST(predicted)) as SMAPE_pct
FROM comparison;
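A plain-Python sketch of the formula (illustrative, not the extension's implementation):

```python
def smape(actual, predicted):
    # SMAPE = (100 / n) * sum(2 * |actual - predicted| / (|actual| + |predicted|))
    return 100 / len(actual) * sum(
        2 * abs(a - p) / (abs(a) + abs(p)) for a, p in zip(actual, predicted)
    )

# Second point contributes 2 * 100 / 300, so the mean is about 33.3%
print(smape([100, 200], [100, 100]))
```

Unlike MAPE, SMAPE stays bounded (at 200%) even when actuals are near zero, because the denominator uses both actual and predicted values.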

MASE (Mean Absolute Scaled Error)

Error scaled relative to naive baseline.

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| actual | DOUBLE[] | Yes | - | Actual values |
| predicted | DOUBLE[] | Yes | - | Predicted values |
| seasonal_period | INTEGER | Yes | - | Seasonal period |

Formula: MASE = MAE(forecast) / MAE(seasonal naive forecast)

Interpretation:

  • MASE < 1: Better than naive
  • MASE = 1: Same as naive
  • MASE > 1: Worse than naive

Example

SELECT anofox_fcst_ts_mase(LIST(actual), LIST(predicted), 7) as MASE
FROM comparison;
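A plain-Python sketch of the idea (an assumption about the computation, not the extension's code: here the seasonal-naive baseline is measured on the same series, whereas implementations often measure it on the training window):

```python
def mase(actual, predicted, seasonal_period):
    # Forecast MAE divided by the MAE of a seasonal-naive forecast
    # that predicts actual[t - seasonal_period] for each point.
    n = len(actual)
    mae_forecast = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    naive_errors = [abs(actual[t] - actual[t - seasonal_period])
                    for t in range(seasonal_period, n)]
    mae_naive = sum(naive_errors) / len(naive_errors)
    return mae_forecast / mae_naive

print(mase([10, 12, 11, 13], [11, 11, 12, 12], 1))  # ≈ 0.6: better than naive
```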

ME (Mean Error)

Average signed error (with direction).

Formula: ME = Σ(actual - predicted) / n

Interpretation:

  • Positive: Forecast too low (pessimistic)
  • Negative: Forecast too high (optimistic)
  • Zero: Unbiased

Example

SELECT anofox_fcst_ts_me(LIST(actual), LIST(predicted)) as ME
FROM comparison;
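A plain-Python sketch of the formula (illustrative only):

```python
def me(actual, predicted):
    # ME = sum(actual - predicted) / n; the sign shows the direction of bias
    return sum(a - p for a, p in zip(actual, predicted)) / len(actual)

print(me([10, 12], [9, 10]))  # forecasts run low, so the bias is positive: 1.5
```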

Goodness of Fit

R² (Coefficient of Determination)

Proportion of variance explained.

Formula: R² = 1 - (Σ(actual - predicted)²) / (Σ(actual - mean(actual))²)

Range: -infinity to 1. Higher is better.

Interpretation:

  • R² = 1.0: Perfect prediction
  • R² = 0.8: Explains 80% of variance
  • R² = 0: No better than predicting mean
  • R² < 0: Worse than mean

Example

SELECT anofox_fcst_ts_r2(LIST(actual), LIST(predicted)) as R2
FROM comparison;
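A plain-Python sketch of the formula (illustrative, not the extension's implementation):

```python
def r2(actual, predicted):
    # R^2 = 1 - SS_res / SS_tot
    mean_actual = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

print(r2([1, 2, 3], [1, 2, 3]))  # perfect prediction -> 1.0
print(r2([1, 2, 3], [2, 2, 2]))  # same as predicting the mean -> 0.0
```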

Interval Validation

Coverage

Percentage of actuals within prediction intervals.

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| actual | DOUBLE[] | Yes | - | Actual values |
| lower | DOUBLE[] | Yes | - | Lower bounds |
| upper | DOUBLE[] | Yes | - | Upper bounds |

Formula: Coverage = (count of actuals with lower ≤ actual ≤ upper) / n

Range: 0 to 1 (0% to 100%)

Interpretation:

  • Expected: 95% for 95% confidence
  • Too low: Intervals too narrow (risky)
  • Too high: Intervals too wide (wasteful)

Example

SELECT anofox_fcst_ts_coverage(LIST(actual), LIST(lower), LIST(upper)) as coverage
FROM comparison;

Ideal range: 92-98% for 95% CI
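A plain-Python sketch of the formula (illustrative only; the sample bounds are made up):

```python
def coverage(actual, lower, upper):
    # Fraction of actual values falling inside their [lower, upper] interval
    hits = sum(1 for a, lo, hi in zip(actual, lower, upper) if lo <= a <= hi)
    return hits / len(actual)

# Three of the four actuals land inside their intervals -> 0.75
print(coverage([5, 10, 15, 20], [4, 9, 16, 18], [6, 11, 17, 22]))
```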


Quantile Loss

Pinball loss for evaluating quantile forecasts.

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| actual | DOUBLE[] | Yes | - | Actual values |
| predicted | DOUBLE[] | Yes | - | Quantile predictions |
| quantile | DOUBLE | Yes | - | Target quantile (0-1) |

Formula: QL = Σ(q - 1{actual < predicted}) × (actual - predicted) / n

Range: 0 to infinity. Lower is better.

Example

-- Evaluate 90th percentile forecast
SELECT anofox_fcst_ts_quantile_loss(LIST(actual), LIST(q90), 0.9) as QL_90
FROM comparison;
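A plain-Python sketch of the pinball loss (illustrative, not the extension's implementation):

```python
def quantile_loss(actual, predicted, q):
    # Pinball loss averaged over points:
    # q * (a - p) when the actual lies above the prediction,
    # (1 - q) * (p - a) when it lies below.
    total = sum((q - (1 if a < p else 0)) * (a - p)
                for a, p in zip(actual, predicted))
    return total / len(actual)

print(quantile_loss([10], [8], 0.9))   # actual above the q90 forecast: 0.9 * 2 = 1.8
print(quantile_loss([10], [12], 0.9))  # actual below: (1 - 0.9) * 2, about 0.2
```

The asymmetry is the point: for q = 0.9, under-predictions cost nine times as much as over-predictions, which pushes the forecast toward the 90th percentile.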

Mean Quantile Loss (MQLoss)

Average quantile loss across multiple quantiles.

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| actual | DOUBLE[] | Yes | - | Actual values |
| predicted | DOUBLE[] | Yes | - | Predictions |
| quantiles | DOUBLE[] | Yes | - | Target quantiles |

Formula: MQLoss = Σ QuantileLoss(q) / n_quantiles

Range: 0 to infinity. Lower is better.

Example

-- Evaluate across multiple quantiles
SELECT anofox_fcst_ts_mqloss(
    LIST(actual),
    LIST(predicted),
    [0.1, 0.5, 0.9]
) as mqloss
FROM comparison;

Use for: Evaluating probabilistic forecasts with multiple prediction intervals.
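A plain-Python sketch of the averaging (illustrative only; the exact shape of the predictions argument is not shown above, so this sketch assumes one prediction series per quantile):

```python
def mqloss(actual, predictions_per_quantile, quantiles):
    # Average the pinball loss over every (prediction series, quantile) pair.
    def ql(preds, q):
        return sum((q - (1 if a < p else 0)) * (a - p)
                   for a, p in zip(actual, preds)) / len(actual)
    return sum(ql(preds, q)
               for preds, q in zip(predictions_per_quantile, quantiles)) / len(quantiles)

# Losses 0.2, 0.0 and ~0.2 across the three quantiles average to about 0.133
print(mqloss([10], [[8], [10], [12]], [0.1, 0.5, 0.9]))
```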


Complete Metrics Example

WITH comparison AS (
    SELECT
        actual,
        forecast,
        lower_95,
        upper_95
    FROM forecast_results
)
SELECT
    ROUND(anofox_fcst_ts_mae(LIST(actual), LIST(forecast)), 2) as MAE,
    ROUND(anofox_fcst_ts_rmse(LIST(actual), LIST(forecast)), 2) as RMSE,
    ROUND(anofox_fcst_ts_mape(LIST(actual), LIST(forecast)), 2) as MAPE_pct,
    ROUND(anofox_fcst_ts_smape(LIST(actual), LIST(forecast)), 2) as SMAPE_pct,
    ROUND(anofox_fcst_ts_mase(LIST(actual), LIST(forecast), 7), 2) as MASE,
    ROUND(anofox_fcst_ts_me(LIST(actual), LIST(forecast)), 2) as ME,
    ROUND(anofox_fcst_ts_r2(LIST(actual), LIST(forecast)), 4) as R2,
    ROUND(anofox_fcst_ts_coverage(LIST(actual), LIST(lower_95), LIST(upper_95)), 3) as coverage_95
FROM comparison;

Metric Selection Guide

| Goal | Primary | Secondary |
| --- | --- | --- |
| Minimize errors | MAE or RMSE | MAPE |
| Executive report | MAPE | MAE |
| Avoid big mistakes | RMSE | MAE |
| Compare products | MAPE or MASE | SMAPE |
| Confidence needed | Coverage | |
| Inventory planning | ME (bias) | MASE |

Typical Metric Ranges

| Metric | Excellent | Good | Fair | Poor |
| --- | --- | --- | --- | --- |
| MAPE | < 5% | 5-10% | 10-20% | > 20% |
| RMSE | < 10% of mean | 10-20% | 20-30% | > 30% |
| R² | > 0.95 | 0.80-0.95 | 0.60-0.80 | < 0.60 |
| MASE | < 0.8 | 0.8-1.0 | 1.0-1.2 | > 1.2 |
| Coverage | 93-97% | 90-98% | 85-99% | outside 85-99% |
