Time Series Forecasting Fundamentals
Accurate forecasting drives better decisions across inventory, staffing, and capacity planning. Here are the fundamental techniques you should master before reaching for complex ML models.
Understanding Time Series Components
Every time series can be decomposed into:
- Trend: Long-term direction (up, down, flat)
- Seasonality: Regular patterns (weekly, monthly, yearly)
- Cyclical: Recurring fluctuations without a fixed period (e.g., multi-year business cycles)
- Residual: Random noise
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Decompose a monthly series into trend, seasonal, and residual components.
# Use model='multiplicative' when the seasonal swings grow with the level
# (requires strictly positive values); otherwise use model='additive'.
result = seasonal_decompose(df['sales'], model='multiplicative', period=12)

# Plot the observed series and each component
result.plot()
plt.tight_layout()
plt.show()
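The individual components are also available as attributes of the result, which is handy for inspecting or removing them directly; a small sketch, assuming the multiplicative decomposition above:
# Each component is a pandas Series aligned with the original index
print(result.trend.dropna().head())
deseasonalized = df['sales'] / result.seasonal  # divide out multiplicative seasonality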
Simple Methods That Work
Start simple—often these baselines are hard to beat:
Moving Average
import numpy as np

# Simple moving average over the last 12 observations
df['sma_12'] = df['sales'].rolling(window=12).mean()

# Weighted moving average: more recent observations get larger weights
weights = np.array([0.1, 0.2, 0.3, 0.4])
df['wma_4'] = df['sales'].rolling(4).apply(
    lambda x: np.dot(x, weights) / weights.sum()
)
Exponential Smoothing
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Triple exponential smoothing (Holt-Winters): additive trend,
# multiplicative seasonality (requires strictly positive values)
model = ExponentialSmoothing(
    df['sales'],
    trend='add',
    seasonal='mul',
    seasonal_periods=12
)
fitted = model.fit()
forecast = fitted.forecast(steps=12)
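A quick way to sanity-check a fit like this is to hold out the final season and forecast it; a minimal sketch, assuming df['sales'] is monthly with at least a few full years of history:
# Refit on everything except the last 12 months, then forecast them
train, test = df['sales'][:-12], df['sales'][-12:]
holdout_fit = ExponentialSmoothing(
    train, trend='add', seasonal='mul', seasonal_periods=12
).fit()
holdout_forecast = holdout_fit.forecast(steps=12)

# Mean absolute error on the held-out year (metrics are covered below)
print(abs(holdout_forecast.values - test.values).mean())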
ARIMA Family
For more complex patterns, ARIMA models remain powerful:
from statsmodels.tsa.arima.model import ARIMA
from pmdarima import auto_arima

# Automatic parameter selection with a seasonal search
auto_model = auto_arima(
    df['sales'],
    seasonal=True,
    m=12,  # Seasonality period (12 for monthly data)
    trace=True,
    suppress_warnings=True
)
print(auto_model.summary())
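If you prefer to do the forecasting in statsmodels, the ARIMA import above can refit the orders that auto_arima selected; a sketch, assuming auto_model was fit as shown (pmdarima exposes the chosen orders as .order and .seasonal_order):
# Refit the chosen orders with statsmodels' ARIMA and forecast 12 steps ahead
sarima = ARIMA(
    df['sales'],
    order=auto_model.order,                   # (p, d, q)
    seasonal_order=auto_model.seasonal_order  # (P, D, Q, m)
).fit()
print(sarima.forecast(steps=12))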
Evaluation Metrics
Measure forecast accuracy appropriately:
- MAE: Mean Absolute Error (same units as the data, easy to interpret)
- MAPE: Mean Absolute Percentage Error (relative, but breaks down when actuals are near zero)
- RMSE: Root Mean Squared Error (penalizes large errors more heavily)
- MASE: Mean Absolute Scaled Error (scale-free, good for comparing across series)
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# actual: held-out observations; predicted: the corresponding forecasts
mae = mean_absolute_error(actual, predicted)
rmse = np.sqrt(mean_squared_error(actual, predicted))
# MAPE is undefined when actual contains zeros
mape = np.mean(np.abs((actual - predicted) / actual)) * 100
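MASE is not in scikit-learn; a minimal sketch that scales the forecast's MAE by the in-sample MAE of a seasonal naive baseline (train here stands for the series the model was fit on, with m=12 assumed for monthly data):
def mase(train, actual, predicted, m=12):
    # In-sample MAE of the seasonal naive forecast (repeat last season's value)
    train = np.asarray(train)
    naive_mae = np.mean(np.abs(train[m:] - train[:-m]))
    return mean_absolute_error(actual, predicted) / naive_mae

# Values below 1 mean the forecast beats the seasonal naive baseline
print(mase(train, actual, predicted))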
Common Pitfalls
- Ignoring stationarity: Test and transform if needed (see the sketch after this list)
- Overfitting: Always use holdout sets
- Ignoring external factors: Promotions, holidays, events
- Not updating models: Patterns change over time
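For the stationarity point above, a common check is the augmented Dickey-Fuller test followed by differencing; a minimal sketch using statsmodels:
from statsmodels.tsa.stattools import adfuller

# Null hypothesis: the series has a unit root (is non-stationary)
stat, p_value, *_ = adfuller(df['sales'].dropna())
if p_value > 0.05:
    # Can't reject non-stationarity: difference once and test again
    differenced = df['sales'].diff().dropna()
    print('After differencing, p =', adfuller(differenced)[1])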
Master these fundamentals before moving to Prophet, LSTM, or Transformer-based approaches.