Gaussian processes (2/3) - Fitting a Gaussian process kernel
In the previous post we introduced the Gaussian process model with the exponentiated quadratic covariance function. In this post we will introduce parametrized covariance functions (kernels), fit them to real-world data, and use them to make posterior predictions. This post is the second part of a series on Gaussian processes.
We will implement the Gaussian process model in TensorFlow Probability, which allows us to build and tune the model without having to worry about the implementation details.
# Imports
import os
from itertools import islice
import warnings
warnings.simplefilter("ignore")
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_probability as tfp
from tqdm.notebook import tqdm
import bokeh
import bokeh.io
import bokeh.plotting
import bokeh.models
from IPython.display import display, HTML
bokeh.io.output_notebook(hide_banner=True)
tfb = tfp.bijectors
tfd = tfp.distributions
tfk = tfp.math.psd_kernels
np.random.seed(42)
tf.random.set_seed(42)
#
Mauna Loa CO₂ data
The dataset used in this example consists of the monthly average atmospheric CO₂ concentrations (in parts per million, ppm) collected at the Mauna Loa Observatory in Hawaii. The observatory has been collecting these CO₂ concentrations since 1958 and showed the first significant evidence of rapidly increasing CO₂ levels in the atmosphere.
These measurements show several distinct characteristics, such as a long-term rising trend, seasonal variation, and smaller irregularities. This has made the dataset a canonical example in Gaussian process modelling ⁽¹⁾.
In this post the data is downloaded as a CSV file from the Scripps CO₂ Program website. It is loaded into a pandas DataFrame and plotted below.
# Load the data
# Load the data from the Scripps CO2 program website.
co2_df = pd.read_csv(
# Source: https://scrippsco2.ucsd.edu/assets/data/atmospheric/stations/in_situ_co2/monthly/monthly_in_situ_co2_mlo.csv
'./monthly_in_situ_co2_mlo.csv',
header=3, # Data starts here
skiprows=[4, 5], # Headers consist of multiple rows
usecols=[3, 4], # Only keep the 'Date' and 'CO2' columns
na_values='-99.99', # NaNs are denoted as '-99.99'
dtype=np.float64
)
# Drop missing values
co2_df.dropna(inplace=True)
# Remove whitespace from column names
co2_df.rename(columns=lambda x: x.strip(), inplace=True)
#
# Plot data
fig = bokeh.plotting.figure(
width=600, height=300,
x_range=(1958, 2021), y_range=(310, 420))
fig.xaxis.axis_label = 'Date'
fig.yaxis.axis_label = 'CO₂ (ppm)'
fig.add_layout(bokeh.models.Title(
text='In situ air measurements at Mauna Loa Observatory, Hawaii',
text_font_style="italic"), 'above')
fig.add_layout(bokeh.models.Title(
text='Atmospheric CO₂ concentrations',
text_font_size="14pt"), 'above')
fig.line(
co2_df.Date, co2_df.CO2, legend_label='All data',
line_width=2, line_color='midnightblue')
fig.legend.location = 'top_left'
fig.toolbar.autohide = True
bokeh.plotting.show(fig)
#
# Split the data into observed and to predict
date_split_predict = 2010
df_observed = co2_df[co2_df.Date < date_split_predict]
print('{} measurements in the observed set'.format(len(df_observed)))
df_predict = co2_df[co2_df.Date >= date_split_predict]
print('{} measurements in the test set'.format(len(df_predict)))
#
Gaussian process model
We're going to use a Gaussian process model to make posterior predictions of the atmospheric CO₂ concentrations for 2010 and after based on the observed data from before 2010.
A Gaussian process is uniquely defined by its mean function $m(x)$ and covariance function $k(x,x')$:
$$f(x) \sim \mathcal{GP}(m(x), k(x,x'))$$
# Define the mean function as the mean of the observations
observations_mean = tf.constant(
[np.mean(df_observed.CO2.values)], dtype=tf.float64)
mean_fn = lambda _: observations_mean
#
Kernel function
To model the different characteristics of our dataset we will create a covariance (kernel) function by combining different kernels already available in TensorFlow Probability. The different data characteristics will be modelled as follows:
- Long term smooth change in CO₂ levels over time, modelled by an exponentiated quadratic kernel, defined in the code below as smooth_kernel.
- Seasonality, based on a local periodic kernel: an exponentiated sine squared kernel multiplied with an exponentiated quadratic kernel so that the seasonal effect decays the further we get from the observations. This seasonal periodic kernel is defined in the code below as local_periodic_kernel.
- Short to medium term irregularities, modelled by a rational quadratic kernel, defined in the code below as irregular_kernel.
- Observational noise, modelled directly by the observation_noise_variance parameter of the TensorFlow Gaussian process model.
These different kernels will be summed into a single kernel function $k_{\theta}(x_a, x_b)$ that allows all these effects to occur together; it is defined as kernel in the code below. Each of the kernels has hyperparameters $\theta$ that can be tuned. These hyperparameters are defined as variables so they can be fitted on the observed data. This post provides more insight into the kernels used here and the effect of their hyperparameters.
# Define the kernel with trainable parameters.
# Note we transform some of the trainable variables to ensure
# they stay positive.
# Use float64 so that the kernel matrix has fewer numerical issues
# when computing the Cholesky decomposition.
# Constrain to make sure certain parameters are strictly positive
constrain_positive = tfb.Shift(np.finfo(np.float64).tiny)(tfb.Exp())
# Smooth kernel hyperparameters
smooth_amplitude = tfp.util.TransformedVariable(
initial_value=10., bijector=constrain_positive, dtype=np.float64,
name='smooth_amplitude')
smooth_length_scale = tfp.util.TransformedVariable(
initial_value=10., bijector=constrain_positive, dtype=np.float64,
name='smooth_length_scale')
# Smooth kernel
smooth_kernel = tfk.ExponentiatedQuadratic(
amplitude=smooth_amplitude,
length_scale=smooth_length_scale)
# Local periodic kernel hyperparameters
periodic_amplitude = tfp.util.TransformedVariable(
initial_value=5.0, bijector=constrain_positive, dtype=np.float64,
name='periodic_amplitude')
periodic_length_scale = tfp.util.TransformedVariable(
initial_value=1.0, bijector=constrain_positive, dtype=np.float64,
name='periodic_length_scale')
periodic_period = tfp.util.TransformedVariable(
initial_value=1.0, bijector=constrain_positive, dtype=np.float64,
name='periodic_period')
periodic_local_length_scale = tfp.util.TransformedVariable(
initial_value=1.0, bijector=constrain_positive, dtype=np.float64,
name='periodic_local_length_scale')
# Local periodic kernel
local_periodic_kernel = (
tfk.ExpSinSquared(
amplitude=periodic_amplitude,
length_scale=periodic_length_scale,
period=periodic_period) *
tfk.ExponentiatedQuadratic(
length_scale=periodic_local_length_scale))
# Short-medium term irregularities kernel hyperparameters
irregular_amplitude = tfp.util.TransformedVariable(
initial_value=1., bijector=constrain_positive, dtype=np.float64,
name='irregular_amplitude')
irregular_length_scale = tfp.util.TransformedVariable(
initial_value=1., bijector=constrain_positive, dtype=np.float64,
name='irregular_length_scale')
irregular_scale_mixture = tfp.util.TransformedVariable(
initial_value=1., bijector=constrain_positive, dtype=np.float64,
name='irregular_scale_mixture')
# Short-medium term irregularities kernel
irregular_kernel = tfk.RationalQuadratic(
amplitude=irregular_amplitude,
length_scale=irregular_length_scale,
scale_mixture_rate=irregular_scale_mixture)
# Noise variance of observations
# Start out with medium-to-high noise variance
observation_noise_variance = tfp.util.TransformedVariable(
initial_value=1, bijector=constrain_positive, dtype=np.float64,
name='observation_noise_variance')
trainable_variables = [v.variables[0] for v in [
smooth_amplitude,
smooth_length_scale,
periodic_amplitude,
periodic_length_scale,
periodic_period,
periodic_local_length_scale,
irregular_amplitude,
irregular_length_scale,
irregular_scale_mixture,
observation_noise_variance
]]
#
# Sum all kernels to single kernel containing all characteristics
kernel = (smooth_kernel + local_periodic_kernel + irregular_kernel)
Tuning the hyperparameters
We can tune the hyperparameters $\theta$ of our Gaussian process model based on the data. This post leverages TensorFlow to fit the parameters by maximizing the marginal likelihood $p(\mathbf{y} \mid X, \theta)$ of the Gaussian process distribution based on the observed data $(X, \mathbf{y})$.
$$\hat{\theta} = \underset{\theta}{\text{argmax}} \left( p(\mathbf{y} \mid X, \theta) \right)$$
The marginal likelihood of the Gaussian process is the likelihood of a multivariate Gaussian distribution, which is defined as:
$$p(\mathbf{y} \mid \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^d \lvert\Sigma\rvert}} \exp{ \left( -\frac{1}{2}(\mathbf{y} - \mu)^{\top} \Sigma^{-1} (\mathbf{y} - \mu) \right)}$$
The mean and covariance are calculated from their parameterized functions using the observed data $X$ as input: $\mu_{\theta} = m_{\theta}(X)$ and $\Sigma_{\theta} = k_{\theta}(X, X)$, so we can write the marginal likelihood as:
$$p(\mathbf{y} \mid X, \theta) = \frac{1}{\sqrt{(2\pi)^d \lvert \Sigma_{\theta} \rvert}} \exp{ \left( -\frac{1}{2}(\mathbf{y} - \mu_{\theta})^\top \Sigma_{\theta}^{-1} (\mathbf{y} - \mu_{\theta}) \right)}$$
With $d$ the dimensionality of the marginal and $\lvert \Sigma_{\theta} \rvert$ the determinant of the kernel matrix. We can get rid of the exponent by taking the log and maximizing the log marginal likelihood:
$$\log{p(\mathbf{y} \mid X, \theta)} = -\frac{1}{2}(\mathbf{y} - \mu_{\theta})^\top \Sigma_{\theta}^{-1} (\mathbf{y} - \mu_{\theta}) - \frac{1}{2} \log{\lvert \Sigma_{\theta} \rvert} - \frac{d}{2} \log{2 \pi}$$
The first term, $-\frac{1}{2}(\mathbf{y} - \mu_{\theta})^\top \Sigma_{\theta}^{-1} (\mathbf{y} - \mu_{\theta})$, is the data-fit term, while the rest, $-\frac{1}{2}(\log{\lvert \Sigma_{\theta} \rvert} + d\log{2 \pi})$, is a complexity penalty, also known as the differential entropy ⁽¹⁾.
The optimal parameters $\hat{\theta}$ can then be found by minimizing the negative of the log marginal likelihood:
$$\hat{\theta} = \underset{\theta}{\text{argmax}} \left( p(\mathbf{y} \mid X, \theta) \right) = \underset{\theta}{\text{argmin}} \left( -\log{p(\mathbf{y} \mid X, \theta)} \right)$$
Since in this case the log marginal likelihood is differentiable with respect to the kernel hyperparameters, we can use a gradient-based approach to minimize the negative log marginal likelihood (NLL). In this post we use gradient descent to train the hyperparameters on minibatches of the observed data.
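To make the formula concrete, the short sketch below computes the negative log marginal likelihood explicitly from a kernel matrix via a Cholesky decomposition and splits it into the data-fit and complexity terms. This is an illustrative stand-alone example (the helper function and the tiny random kernel matrix are made up for this sketch); the training code below relies on the log_prob method of the TensorFlow GaussianProcess instead.
# Illustrative sketch: negative log marginal likelihood computed explicitly
# from a kernel matrix via a Cholesky decomposition (not used for training).
def negative_log_marginal_likelihood(K, y, mu, noise_variance):
    d = y.shape[0]
    Sigma = K + noise_variance * np.eye(d)
    chol = np.linalg.cholesky(Sigma)   # Sigma = chol @ chol.T
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, y - mu))
    data_fit = 0.5 * (y - mu) @ alpha  # 0.5 (y-mu)^T Sigma^-1 (y-mu)
    # log|Sigma| = 2 * sum(log(diag(chol)))
    complexity = np.sum(np.log(np.diag(chol))) + 0.5 * d * np.log(2 * np.pi)
    return data_fit + complexity

# Tiny random example (5 points, exponentiated quadratic kernel)
X_toy = np.random.randn(5, 1)
K_toy = np.exp(-0.5 * (X_toy - X_toy.T)**2)
y_toy = np.random.randn(5)
print(negative_log_marginal_likelihood(
    K_toy, y_toy, mu=np.zeros(5), noise_variance=0.1))
#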
# Define mini-batch data iterator
batch_size = 128
batched_dataset = (
tf.data.Dataset.from_tensor_slices(
(df_observed.Date.values.reshape(-1, 1), df_observed.CO2.values))
.shuffle(buffer_size=len(df_observed))
.repeat(count=None)
.batch(batch_size)
)
#
We will represent the Gaussian process marginal distribution by the GaussianProcess object, which has a log_prob function to get the marginal log likelihood. The Adam optimizer is used to minimize the negative marginal log likelihood.
# Use tf.function for more efficient function evaluation
@tf.function(autograph=False, experimental_compile=False)
def gp_loss_fn(index_points, observations):
    """Gaussian process negative-log-likelihood loss function."""
    gp = tfd.GaussianProcess(
        mean_fn=mean_fn,
        kernel=kernel,
        index_points=index_points,
        observation_noise_variance=observation_noise_variance)
    negative_log_likelihood = -gp.log_prob(observations)
    return negative_log_likelihood
# Fit hyperparameters
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
# Training loop
batch_nlls = [] # Batch NLL for plotting
full_ll = [] # Full data NLL for plotting
nb_iterations = 10001
for i, (index_points_batch, observations_batch) in tqdm(
        enumerate(islice(batched_dataset, nb_iterations)), total=nb_iterations):
    # Run optimization for single batch
    with tf.GradientTape() as tape:
        loss = gp_loss_fn(index_points_batch, observations_batch)
    grads = tape.gradient(loss, trainable_variables)
    optimizer.apply_gradients(zip(grads, trainable_variables))
    batch_nlls.append((i, loss.numpy()))
    # Evaluate the NLL on all observed data every 100 iterations
    if i % 100 == 0:
        ll = gp_loss_fn(
            index_points=df_observed.Date.values.reshape(-1, 1),
            observations=df_observed.CO2.values)
        full_ll.append((i, ll.numpy()))
#
# Plot NLL over iterations
fig = bokeh.plotting.figure(
width=600, height=350,
x_range=(0, nb_iterations), y_range=(50, 200))
fig.add_layout(bokeh.models.Title(
text='Negative Log-Likelihood (NLL) during training',
text_font_size="14pt"), 'above')
fig.xaxis.axis_label = 'iteration'
fig.yaxis.axis_label = 'NLL batch'
# First plot
fig.line(
*zip(*batch_nlls), legend_label='Batch data',
line_width=2, line_color='midnightblue')
# Second plot
# Setting the second y axis range name and range
fig.extra_y_ranges = {
'fig1ax2': bokeh.models.Range1d(start=130, end=250)}
fig.line(
*zip(*full_ll), legend_label='All observed data',
line_width=2, line_color='red', y_range_name='fig1ax2')
# Adding the second axis to the plot.
fig.add_layout(bokeh.models.LinearAxis(
y_range_name='fig1ax2', axis_label='NLL all'), 'right')
fig.legend.location = 'top_right'
fig.toolbar.autohide = True
bokeh.plotting.show(fig)
#
Despite the complexity term in the log marginal likelihood it is still possible to overfit to the data. While we didn't guard against overfitting in this post, you could prevent it by adding a regularization term on the hyperparameters ⁽²⁾, or by splitting the observations into training and validation sets and selecting, from the models trained on the training set, the one that best fits the validation data.
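As a rough sketch of the first option (the log-normal prior below is an assumption chosen for illustration, not a prior used in this post), a regularization term can be added to the loss by subtracting the log-density of a prior placed on a hyperparameter, as described in sidenote 2:
# Sketch only: regularize a hyperparameter with a (hypothetical) log-normal
# prior by adding its negative log-density to the loss.
amplitude_prior = tfd.LogNormal(loc=np.float64(0.), scale=np.float64(1.))

def regularized_gp_loss_fn(index_points, observations):
    gp = tfd.GaussianProcess(
        mean_fn=mean_fn,
        kernel=kernel,
        index_points=index_points,
        observation_noise_variance=observation_noise_variance)
    nll = -gp.log_prob(observations)
    # Penalize unlikely values of smooth_amplitude under the prior
    regularizer = -amplitude_prior.log_prob(
        tf.convert_to_tensor(smooth_amplitude))
    return nll + regularizer
#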
# Show values of parameters found
variables = [
smooth_amplitude,
smooth_length_scale,
periodic_amplitude,
periodic_length_scale,
periodic_period,
periodic_local_length_scale,
irregular_amplitude,
irregular_length_scale,
irregular_scale_mixture,
observation_noise_variance
]
data = list([(var.variables[0].name[:-2], var.numpy()) for var in variables])
df_variables = pd.DataFrame(
data, columns=['Hyperparameters', 'Value'])
display(HTML(df_variables.to_html(
index=False, float_format=lambda x: f'{x:.4f}')))
#
The fitted kernel and its components are illustrated in more detail in a follow-up post.
Posterior predictions
The TensorFlow GaussianProcess class can only represent an unconditional Gaussian process. To make predictions by posterior inference conditioned on observed data we will need to create a GaussianProcessRegressionModel with the fitted kernel, mean function, and observed data.
The posterior predictions conditioned on the observed data before 2010 are plotted in the figure below together with their 95% prediction interval.
Notice that our model captures the different dataset characteristics, such as the trend and seasonality, quite well. The predictions deviate more from the true data, and the prediction interval widens, the further away we get from the observed data the model was conditioned on.
# Posterior GP using fitted kernel and observed data
gp_posterior_predict = tfd.GaussianProcessRegressionModel(
mean_fn=mean_fn,
kernel=kernel,
index_points=df_predict.Date.values.reshape(-1, 1),
observation_index_points=df_observed.Date.values.reshape(-1, 1),
observations=df_observed.CO2.values,
observation_noise_variance=observation_noise_variance)
# Posterior mean and standard deviation
posterior_mean_predict = gp_posterior_predict.mean()
posterior_std_predict = gp_posterior_predict.stddev()
# Plot posterior predictions
# Get posterior predictions
μ = posterior_mean_predict.numpy()
σ = posterior_std_predict.numpy()
# Plot
fig = bokeh.plotting.figure(
width=600, height=400,
x_range=(2010, 2021.3), y_range=(384, 418))
fig.xaxis.axis_label = 'Date'
fig.yaxis.axis_label = 'CO₂ (ppm)'
fig.add_layout(bokeh.models.Title(
text='Posterior predictions conditioned on observations before 2010.',
text_font_style="italic"), 'above')
fig.add_layout(bokeh.models.Title(
text='Atmospheric CO₂ concentrations',
text_font_size="14pt"), 'above')
fig.circle(
co2_df.Date, co2_df.CO2, legend_label='True data',
size=2, line_color='midnightblue')
fig.line(
df_predict.Date.values, μ, legend_label='μ (predictions)',
line_width=2, line_color='firebrick')
# Prediction interval
band_x = np.append(
df_predict.Date.values, df_predict.Date.values[::-1])
band_y = np.append(
(μ + 2*σ), (μ - 2*σ)[::-1])
fig.patch(
band_x, band_y, color='firebrick', alpha=0.4,
line_color='firebrick', legend_label='2σ')
fig.legend.location = 'top_left'
fig.toolbar.autohide = True
bokeh.plotting.show(fig)
#
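As a quick check of how well the 95% prediction interval holds up on the held-out data (a sketch using the μ and σ arrays computed above; the exact number will depend on the fitted hyperparameters), we can count how many of the true measurements after 2010 fall within μ ± 2σ:
# Fraction of held-out observations that fall inside the ±2σ interval
inside_interval = np.abs(df_predict.CO2.values - μ) <= 2 * σ
print('Empirical coverage of the 2σ interval: {:.1%}'.format(
    inside_interval.mean()))
#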
This post illustrated how to use TensorFlow Probability to combine multiple out-of-the-box kernels and fit their hyperparameters on observed data. The fitted model was then used to make posterior predictions.
This post was the second part of a series on Gaussian processes.
Sidenotes
- Note that the determinant $\lvert \Sigma \rvert$ is equal to the product of its eigenvalues, and that $\lvert \Sigma \rvert$ can be interpreted as the volume spanned by the covariance matrix $\Sigma$. Reducing $\lvert \Sigma \rvert$ will thus decrease the dispersion of points drawn from the distribution with covariance matrix $\Sigma$ and reduce the complexity. A small numerical check of this identity is given after these sidenotes.
- Using TensorFlow Probability distributions you can define a prior distribution for each hyperparameter and use the log-likelihood of this distribution as a regularization term on your parameter.
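A minimal numerical check of the determinant identity from the first sidenote (arbitrary random matrix, for illustration only):
# Check that the determinant equals the product of the eigenvalues
A = np.random.randn(4, 4)
Sigma_check = A @ A.T + 4 * np.eye(4)  # symmetric positive-definite matrix
eigenvalues = np.linalg.eigvalsh(Sigma_check)
print(np.isclose(np.linalg.det(Sigma_check), np.prod(eigenvalues)))  # True
#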
References
- Gaussian Processes for Machine Learning. Chapter 5: Model Selection and Adaptation of Hyperparameters by Carl Edward Rasmussen and Christopher K. I. Williams.
- Gaussian Processes for Regression by Christopher K. I. Williams and Carl Edward Rasmussen.
# Python package versions used
%load_ext watermark
%watermark --python
%watermark --iversions
#
This post at peterroelants.github.io is generated from a Python notebook file. Link to the full IPython notebook file