Tutorial for time series

In this tutorial we describe how to use MapieTimeSeriesRegressor to estimate prediction intervals associated with time series forecasts.

Here, we use the Victoria electricity demand dataset used in the book “Forecasting: Principles and Practice” by R. J. Hyndman and G. Athanasopoulos. The electricity demand features daily and weekly seasonalities and is impacted by the temperature, considered here as an exogenous variable.

Before estimating prediction intervals with MAPIE, we optimize the base model, here a Random Forest model. The hyperparameters are optimized with RandomizedSearchCV using a sequential TimeSeriesSplit cross-validation, in which the training set always precedes the validation set.

Once the base model is optimized, we can use MapieTimeSeriesRegressor to estimate the prediction intervals associated with one-step ahead forecasts through the EnbPI method [1].

Like its parent class MapieRegressor, MapieTimeSeriesRegressor has two main arguments: “cv” and “method”. In order to implement EnbPI, “method” must be set to “enbpi” (the default value) while “cv” must be set to the BlockBootstrap class, which block-bootstraps the training set. This sampling method is used in [1] instead of the traditional bootstrap strategy as it is more suited for time series data.

The EnbPI method allows you to update the residuals during prediction, each time new observations become available, so that a deterioration of the predictions, or an increase in the noise level, can be dynamically taken into account. This is done with MapieTimeSeriesRegressor through the partial_fit method, called at every step.

[1] Chen Xu and Yao Xie. “Conformal Prediction Interval for Dynamic Time-Series.” International Conference on Machine Learning (ICML, 2021).
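
Before diving into the data, here is a compact, self-contained sketch of this workflow; the toy data, model, and parameter values below are placeholders for illustration only, while the rest of the tutorial uses the Victoria demand dataset and a tuned random forest.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from mapie.regression import MapieTimeSeriesRegressor
from mapie.subsample import BlockBootstrap

# Toy data, used only to illustrate the API.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
X_fit, X_new, y_fit, y_new = X[:150], X[150:], y[:150], y[150:]

mapie_ts = MapieTimeSeriesRegressor(
    RandomForestRegressor(n_estimators=20, random_state=0),
    method="enbpi",  # EnbPI prediction intervals
    cv=BlockBootstrap(n_resamplings=5, n_blocks=5, random_state=0),
    agg_function="mean",
)
mapie_ts.fit(X_fit, y_fit)
y_pred, y_pis = mapie_ts.predict(X_new, alpha=0.05, ensemble=True)
# Once the true values are observed, update the residuals
# before issuing the next forecasts.
mapie_ts.partial_fit(X_new, y_new)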

import warnings

import numpy as np
import pandas as pd
from matplotlib import pylab as plt
from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

from mapie.metrics import (regression_coverage_score,
                           regression_mean_width_score)
from mapie.subsample import BlockBootstrap
from mapie.regression import MapieTimeSeriesRegressor

warnings.simplefilter("ignore")

1. Load input data and dataset preparation

The Victoria electricity demand dataset can be downloaded directly from the MAPIE GitHub repository. It consists of the hourly electricity demand (in GW) of the Victoria state in Australia, together with the temperature (in degrees Celsius). We extract temporal features out of the date and hour.

num_test_steps = 24 * 7

url_file = (
    "https://raw.githubusercontent.com/scikit-learn-contrib/MAPIE/master/"
    "examples/data/demand_temperature.csv"
)
demand_df = pd.read_csv(
    url_file, parse_dates=True, index_col=0
)
demand_df["Date"] = pd.to_datetime(demand_df.index)
demand_df["Weekofyear"] = demand_df.Date.dt.isocalendar().week.astype("int64")
demand_df["Weekday"] = demand_df.Date.dt.isocalendar().day.astype("int64")
demand_df["Hour"] = demand_df.index.hour
n_lags = 5
for hour in range(1, n_lags):
    demand_df[f"Lag_{hour}"] = demand_df["Demand"].shift(hour)

We now introduce an abrupt changepoint in the test set by decreasing the electricity demand by 2 GW on February 22. It simulates an event, such as a blackout or a lockdown due to a pandemic, that was not taken into account by the model during its training.

# Use .loc to avoid chained-assignment issues with recent pandas versions.
demand_df.loc[demand_df.index[-num_test_steps // 2:], "Demand"] -= 2

The last week of the dataset is used as the test set; the remaining data are used as the training set.

demand_train = demand_df.iloc[:-num_test_steps, :].copy()
demand_test = demand_df.iloc[-num_test_steps:, :].copy()
features = ["Weekofyear", "Weekday", "Hour", "Temperature"]
features += [f"Lag_{hour}" for hour in range(1, n_lags)]

X_train = demand_train.loc[
    ~np.any(demand_train[features].isnull(), axis=1), features
]
y_train = demand_train.loc[X_train.index, "Demand"]
X_test = demand_test.loc[:, features]
y_test = demand_test["Demand"]

Let’s now visualize the training and test sets with the changepoint.

plt.figure(figsize=(16, 5))
plt.plot(y_train)
plt.plot(y_test)
plt.ylabel("Hourly demand (GW)")
plt.legend(["Training data", "Test data"])
plt.show()
[Figure: training and test electricity demand, with the changepoint in the test set]

2. Optimize the base estimator

Before estimating the prediction intervals with MAPIE, let’s optimize the base model, here a RandomForestRegressor, through a RandomizedSearchCV with a temporal cross-validation strategy. To save computation time, the best hyperparameters found by this search are hard-coded below.

model_params_fit_not_done = False
if model_params_fit_not_done:
    # CV parameter search
    n_iter = 100
    n_splits = 5
    tscv = TimeSeriesSplit(n_splits=n_splits)
    random_state = 59
    rf_model = RandomForestRegressor(random_state=random_state)
    rf_params = {"max_depth": randint(2, 30), "n_estimators": randint(10, 100)}
    cv_obj = RandomizedSearchCV(
        rf_model,
        param_distributions=rf_params,
        n_iter=n_iter,
        cv=tscv,
        scoring="neg_root_mean_squared_error",
        random_state=random_state,
        verbose=0,
        n_jobs=-1,
    )
    cv_obj.fit(X_train, y_train)
    model = cv_obj.best_estimator_
else:
    # Model: Random Forest previously optimized with a cross-validation
    model = RandomForestRegressor(
        max_depth=10, n_estimators=50, random_state=59
    )

3. Estimate prediction intervals on the test set

We now use MapieTimeSeriesRegressor to build prediction intervals associated with one-step ahead forecasts. As explained in the introduction, we use the EnbPI method [1].

Prediction intervals can be estimated in two ways:

  • with a regular .fit and .predict process, in which only the training set residuals are used to build the prediction intervals;

  • using .partial_fit in addition to .fit and .predict, allowing MAPIE to use new residuals from the test points as new data become available.

The latter method is particularly useful to adjust the prediction intervals to sudden changepoints in the test set that the model has not seen during training.

Following [1], we use the BlockBootstrap sampling method instead of the traditional bootstrap strategy for training the model since the former is more suited for time series data. Here, we choose to perform 10 resamplings with 10 blocks.

alpha = 0.05
gap = 1
cv_mapiets = BlockBootstrap(
    n_resamplings=10, n_blocks=10, overlapping=False, random_state=59
)
mapie_enbpi = MapieTimeSeriesRegressor(
    model, method="enbpi", cv=cv_mapiets, agg_function="mean", n_jobs=-1
)

Let’s start by estimating prediction intervals without partial fit.

mapie_enbpi = mapie_enbpi.fit(X_train, y_train)
y_pred_npfit, y_pis_npfit = mapie_enbpi.predict(
    X_test, alpha=alpha, ensemble=True, optimize_beta=True
)
coverage_npfit = regression_coverage_score(
    y_test, y_pis_npfit[:, 0, 0], y_pis_npfit[:, 1, 0]
)
width_npfit = regression_mean_width_score(
    y_pis_npfit[:, 0, 0], y_pis_npfit[:, 1, 0]
)

Let’s now estimate prediction intervals with partial fit. As discussed previously, the update of the residuals and the one-step ahead predictions are performed sequentially in a loop.
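
Below is a minimal sketch of this loop, written to be consistent with the predict call above and with the y_pred_pfit, y_pis_pfit, coverage_pfit and width_pfit variables used in the next section; the forecast horizon of each iteration is controlled by the gap variable defined earlier, and the exact loop may differ slightly in the original script.

# Re-fit so that the conformity scores start from the training residuals only.
mapie_enbpi = mapie_enbpi.fit(X_train, y_train)

y_pred_pfit = np.zeros(y_pred_npfit.shape)
y_pis_pfit = np.zeros(y_pis_npfit.shape)

# First step: predict from the training residuals only.
y_pred_pfit[:gap], y_pis_pfit[:gap, :, :] = mapie_enbpi.predict(
    X_test.iloc[:gap, :], alpha=alpha, ensemble=True, optimize_beta=True
)

# Then, at each step, feed the residuals of the previous step to MAPIE
# with partial_fit before predicting the next one.
for step in range(gap, len(X_test), gap):
    mapie_enbpi.partial_fit(
        X_test.iloc[(step - gap):step, :],
        y_test.iloc[(step - gap):step],
    )
    (
        y_pred_pfit[step:step + gap],
        y_pis_pfit[step:step + gap, :, :],
    ) = mapie_enbpi.predict(
        X_test.iloc[step:(step + gap), :],
        alpha=alpha,
        ensemble=True,
        optimize_beta=True,
    )

coverage_pfit = regression_coverage_score(
    y_test, y_pis_pfit[:, 0, 0], y_pis_pfit[:, 1, 0]
)
width_pfit = regression_mean_width_score(
    y_pis_pfit[:, 0, 0], y_pis_pfit[:, 1, 0]
)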

4. Plot estimated prediction intervals on one-step ahead forecast

Let’s now compare the prediction intervals estimated by MAPIE with and without update of the residuals.

y_preds = [y_pred_npfit, y_pred_pfit]
y_pis = [y_pis_npfit, y_pis_pfit]
coverages = [coverage_npfit, coverage_pfit]
widths = [width_npfit, width_pfit]

fig, axs = plt.subplots(
    nrows=2, ncols=1, figsize=(14, 8), sharey="row", sharex="col"
)
for i, (ax, w) in enumerate(zip(axs, ["without", "with"])):
    ax.set_ylabel("Hourly demand (GW)")
    ax.plot(
        y_train[int(-len(y_test)/2):],
        lw=2,
        label="Training data", c="C0"
    )
    ax.plot(y_test, lw=2, label="Test data", c="C1")

    ax.plot(
        y_test.index, y_preds[i], lw=2, c="C2", label="Predictions"
    )
    ax.fill_between(
        y_test.index,
        y_pis[i][:, 0, 0],
        y_pis[i][:, 1, 0],
        color="C2",
        alpha=0.2,
        label="Prediction intervals",
    )
    title = f"EnbPI, {w} update of residuals. "
    title += f"Coverage:{coverages[i]:.3f} and Width:{widths[i]:.3f}"
    ax.set_title(title)
    ax.legend()
fig.tight_layout()
plt.show()
[Figure: EnbPI prediction intervals, without update of residuals (Coverage:0.423 and Width:0.285) and with update of residuals (Coverage:0.571 and Width:0.906)]

Let’s now compare the coverages obtained by MAPIE with and without update of the residuals on a 24-hour rolling window of prediction intervals.

window = 24
rolling_coverage_pfit, rolling_coverage_npfit = [], []
for i in range(window, len(y_test), 1):
    rolling_coverage_pfit.append(
        regression_coverage_score(
            y_test[i-window:i], y_pis_pfit[i-window:i, 0, 0],
            y_pis_pfit[i-window:i, 1, 0]
        )
    )
    rolling_coverage_npfit.append(
        regression_coverage_score(
            y_test[i-window:i], y_pis_npfit[i-window:i, 0, 0],
            y_pis_npfit[i-window:i, 1, 0]
        )
    )

plt.figure(figsize=(10, 5))
plt.ylabel(f"Rolling coverage [{window} hours]")
plt.plot(
    y_test[window:].index,
    rolling_coverage_npfit,
    label="Without update of residuals"
)
plt.plot(
    y_test[window:].index,
    rolling_coverage_pfit,
    label="With update of residuals"
)
plt.legend()
plt.show()
[Figure: 24-hour rolling coverage, with and without update of residuals]

The training data do not contain a changepoint, hence the base model cannot anticipate it. Without update of the residuals, the prediction intervals are built from the distribution of the training set residuals: they therefore do not cover the true observations after the changepoint, leading to a sudden drop in coverage. With the partial update of the residuals, however, the method captures the increased uncertainty of the model predictions. Note that this widening of the intervals happens about one day late, because enough new residuals need to be collected before the quantiles of the residual distribution change.
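
To illustrate this last point, the toy example below (made-up numbers, unrelated to the actual residuals of this tutorial) shows that the 95th percentile of a pool of residuals barely moves until enough large post-changepoint residuals have been added to the pool:

import numpy as np

rng = np.random.default_rng(0)
# Toy pool of absolute residuals collected during training (small values).
residuals = np.abs(rng.normal(scale=0.1, size=100))

# After a changepoint, the new residuals are much larger (here, 2.0).
for n_new in (0, 2, 4, 6, 8, 12):
    pool = np.concatenate([residuals, np.full(n_new, 2.0)])
    print(f"{n_new:2d} new residuals -> 95th percentile = {np.quantile(pool, 0.95):.2f}")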
