Gaussian processes (2/3) - Fitting a Gaussian process kernel

In the previous post we introduced the Gaussian process model with the exponentiated quadratic covariance function. In this post we will introduce parametrized covariance functions (kernels), fit them to real world data, and use them to make posterior predictions. This post is the second part of a series on Gaussian processes:

  1. Understanding Gaussian processes
  2. Fitting a Gaussian process kernel (this)
  3. Gaussian process kernels

We will build the Gaussian process model with TensorFlow Probability, which allows us to easily define and tune the model without having to worry about the implementation details.

In [1]:

Mauna Loa CO₂ data

The dataset used in this example consists of the monthly average atmospheric CO₂ concentrations (in parts per million (ppm)) collected at the Mauna Loa Observatory in Hawaii. The observatory has been collecting these CO₂ measurements since 1958 and provided the first significant evidence of rapidly increasing CO₂ levels in the atmosphere.

These measurements of atmospheric CO₂ concentrations show different characteristics such as a long-term rising trend, seasonal variation, and smaller irregularities. This has made the dataset a canonical example in Gaussian process modelling⁽¹⁾.

In this post the data is downloaded as a CSV file from the Scripps CO₂ Program website. The data is loaded into a pandas dataframe and plotted below.
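As an illustration, loading the downloaded data could look roughly like the following sketch (the filename and column names here are assumptions; the actual notebook cell may differ):

import pandas as pd

# Hypothetical local copy of the downloaded Scripps CSV with a fractional-year
# 'Date' column and a monthly average 'CO2' column (names assumed to match the
# rest of this post)
df = pd.read_csv('./monthly_co2_mlo.csv')
df = df[['Date', 'CO2']].dropna()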

In [2]:
In [3]:
In [4]:
617 measurements in the observed set
133 measurements in the test set
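The split into the observed and test sets printed above could be made along these lines (a sketch assuming the dataframe from before; the variable names match the code used further below):

# Observations before 2010 are used for fitting, 2010 and later is held out
df_observed = df[df.Date < 2010]
df_predict = df[df.Date >= 2010]
print(f'{len(df_observed)} measurements in the observed set')
print(f'{len(df_predict)} measurements in the test set')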

Gaussian process model

We're going to use a Gaussian process model to make posterior predictions of the atmospheric CO₂ concentrations for 2010 and after based on the observed data from before 2010.

A Gaussian process is uniquely defined by its mean function $m(x)$ and covariance function $k(x,x')$:

$$f(x) \sim \mathcal{GP}(m(x),k(x,x'))$$

Mean function

Since most of the interesting effects will be modelled by the kernel function, we keep the mean function simple. In this example the mean function is a constant function that always returns the mean of the observations.
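A minimal sketch of such a constant mean function, assuming the df_observed dataframe defined earlier:

import numpy as np
import tensorflow as tf

# Constant mean function that always returns the mean of the observed CO2 values
observations_mean = tf.constant(
    [np.mean(df_observed.CO2.values)], dtype=tf.float64)
mean_fn = lambda _: observations_mean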

In [5]:

Kernel function

To model the different characteristics of our dataset we will create a covariance (kernel) function by combining different kernels already available in TensorFlow Probability. The different data characteristics will be modelled as:

  • Long-term smooth change in CO₂ levels over time, modelled by an exponentiated quadratic kernel, defined in the code below as smooth_kernel .
  • Seasonality, based on a local periodic kernel, which consists of an exponentiated sine squared kernel multiplied by an exponentiated quadratic kernel so that the seasonal component decays the further it gets from the observations. This seasonal periodic kernel is defined in the code below as local_periodic_kernel .
  • Short- to medium-term irregularities, modelled by a rational quadratic kernel, defined in the code below as irregular_kernel .
  • Observational noise, modelled directly by the observation_noise_variance parameter of the TensorFlow Gaussian process model .

These different kernels will be summed into one single kernel function $k_{\theta}(x_a, x_b)$ that allows all these effects to occur together. This combined kernel is defined as kernel in the code below. Each of the kernels has hyperparameters $\theta$ that can be tuned; they are defined as variables so they can be fitted on the observed data. This post provides more insight on the kernels used here and the effect of their hyperparameters.
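For illustration, the kernel components and their trainable hyperparameters could be set up with tfp.math.psd_kernels roughly as sketched below (the initial values are assumptions; the actual notebook cells may differ):

import tensorflow as tf
import tensorflow_probability as tfp

tfk = tfp.math.psd_kernels

def positive_variable(initial_value, name):
    # Trainable hyperparameter constrained to be positive via a softplus bijector
    return tfp.util.TransformedVariable(
        initial_value, tfp.bijectors.Softplus(), dtype=tf.float64, name=name)

# Long-term smooth trend
smooth_amplitude = positive_variable(10., 'smooth_amplitude')
smooth_length_scale = positive_variable(10., 'smooth_length_scale')
smooth_kernel = tfk.ExponentiatedQuadratic(
    amplitude=smooth_amplitude, length_scale=smooth_length_scale)

# Seasonal component: periodic kernel damped by an exponentiated quadratic
periodic_amplitude = positive_variable(1., 'periodic_amplitude')
periodic_length_scale = positive_variable(1., 'periodic_length_scale')
periodic_period = positive_variable(1., 'periodic_period')
periodic_local_length_scale = positive_variable(1., 'periodic_local_length_scale')
local_periodic_kernel = (
    tfk.ExpSinSquared(
        amplitude=periodic_amplitude,
        length_scale=periodic_length_scale,
        period=periodic_period)
    * tfk.ExponentiatedQuadratic(length_scale=periodic_local_length_scale))

# Short- to medium-term irregularities
irregular_amplitude = positive_variable(1., 'irregular_amplitude')
irregular_length_scale = positive_variable(1., 'irregular_length_scale')
irregular_scale_mixture = positive_variable(1., 'irregular_scale_mixture')
irregular_kernel = tfk.RationalQuadratic(
    amplitude=irregular_amplitude,
    length_scale=irregular_length_scale,
    scale_mixture_rate=irregular_scale_mixture)

# Observational noise
observation_noise_variance = positive_variable(1., 'observation_noise_variance')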

In [6]:
In [7]:
# Sum all kernels into a single kernel containing all characteristics
kernel = (smooth_kernel + local_periodic_kernel + irregular_kernel)

Tuning the hyperparameters

We can tune the hyperparameters $\theta$ of our Gaussian process model based on the data. This post leverages TensorFlow to fit the parameters by maximizing the marginal likelihood $p(\mathbf{y} \mid X, \theta)$ of the Gaussian process distribution based on the observed data $(X, \mathbf{y})$.

$$\hat{\theta} = \underset{\theta}{\text{argmax}} \left( p(\mathbf{y} \mid X, \theta) \right)$$

The marginal likelihood of the Gaussian process is the likelihood of a multivariate Gaussian distribution, which is defined as:

$$p(\mathbf{y} \mid \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^d \lvert\Sigma\rvert}} \exp{ \left( -\frac{1}{2}(\mathbf{y} - \mu)^{\top} \Sigma^{-1} (\mathbf{y} - \mu) \right)}$$

The mean and covariance are calculated from their parameterized functions using the observed data $X$ as input: $\mu_{\theta} = m_{\theta}(X)$ and $\Sigma_{\theta} = k_{\theta}(X, X)$, so we can write the marginal likelihood as:

$$ p(\mathbf{y} \mid X, \theta) = \frac{1}{\sqrt{(2\pi)^d \lvert \Sigma_{\theta} \rvert}} \exp{ \left( -\frac{1}{2}(\mathbf{y} - \mu_{\theta})^\top \Sigma_{\theta}^{-1} (\mathbf{y} - \mu_{\theta}) \right)}$$

Here $d$ is the dimensionality of the marginal and $\lvert \Sigma_{\theta} \rvert$ the determinant of the kernel matrix. We can get rid of the exponent by taking the logarithm and maximizing the log marginal likelihood instead:

$$ \log{p(\mathbf{y} \mid X, \theta)} = -\frac{1}{2}(\mathbf{y} - \mu_{\theta})^\top \Sigma_{\theta}^{-1} (\mathbf{y} - \mu_{\theta}) - \frac{1}{2} \log{\lvert \Sigma_{\theta} \rvert} - \frac{d}{2} \log{2 \pi}$$

The first term ($-0.5 (\mathbf{y} - \mu_{\theta})^\top \Sigma_{\theta}^{-1} (\mathbf{y} - \mu_{\theta})$) is the data-fit term, while the rest ($-0.5(\log{\lvert \Sigma_{\theta} \rvert} + d\log{2 \pi})$) is a complexity penalty, also known as differential entropy⁽¹⁾.
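For concreteness, this log marginal likelihood can be computed directly from a mean vector and covariance matrix, for example with NumPy (a sketch of the formula above; in practice the GaussianProcess distribution used below computes this for us via log_prob):

import numpy as np

def log_marginal_likelihood(y, mu, sigma):
    """Log likelihood of y under a multivariate Gaussian N(mu, sigma)."""
    d = y.shape[0]
    residual = y - mu
    # Cholesky factorisation for a numerically stable inverse and determinant
    chol = np.linalg.cholesky(sigma)
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, residual))
    log_det = 2. * np.sum(np.log(np.diag(chol)))
    return -0.5 * (residual @ alpha + log_det + d * np.log(2. * np.pi))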

The optimal parameters $\hat{\theta}$ can then be found by minimizing the negative of the log marginal likelihood:

$$ \hat{\theta} = \underset{\theta}{\text{argmax}} \left( p(\mathbf{y} \mid X, \theta) \right) = \underset{\theta}{\text{argmin}} { \;-\log{ p(\mathbf{y} \mid X, \theta)}}$$

Since in this case the log marginal likelihood is differentiable with respect to the kernel hyperparameters, we can use a gradient-based optimizer to minimize the negative log marginal likelihood (NLL). In this post we train the hyperparameters by gradient descent on mini-batches of the observed data.

In [8]:

We will represent the Gaussian process marginal distribution by the GaussianProcess object, which has a log_prob method that returns the marginal log likelihood. The Adam optimizer is used to minimize the negative log marginal likelihood.

In [9]:
# Use tf.function for more efficient function evaluation
@tf.function(autograph=False, experimental_compile=False)
def gp_loss_fn(index_points, observations):
    """Gaussian process negative-log-likelihood loss function."""
    gp = tfd.GaussianProcess(
        mean_fn=mean_fn,
        kernel=kernel,
        index_points=index_points,
        observation_noise_variance=observation_noise_variance
    )
    
    negative_log_likelihood = -gp.log_prob(observations)
    return negative_log_likelihood
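A possible mini-batch training loop is sketched below (the batch size, learning rate, and number of iterations are assumptions, and the trainable variables come from the kernel sketch above; the actual notebook cell may differ):

# Collect the underlying tf.Variables of all trainable hyperparameters
trainable_variables = [var.trainable_variables[0] for var in (
    smooth_amplitude, smooth_length_scale,
    periodic_amplitude, periodic_length_scale, periodic_period,
    periodic_local_length_scale,
    irregular_amplitude, irregular_length_scale, irregular_scale_mixture,
    observation_noise_variance)]

optimizer = tf.optimizers.Adam(learning_rate=0.01)
batch_size = 128

for step in range(1000):
    # Sample a random mini-batch of observations
    batch = df_observed.sample(batch_size)
    index_points = batch.Date.values.reshape(-1, 1)
    observations = batch.CO2.values
    # Compute the NLL and update the hyperparameters by gradient descent
    with tf.GradientTape() as tape:
        loss = gp_loss_fn(index_points, observations)
    gradients = tape.gradient(loss, trainable_variables)
    optimizer.apply_gradients(zip(gradients, trainable_variables))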
In [10]:
In [11]:

Despite the complexity term in the log marginal likelihood it is still possible to overfit to the data. While we didn't add anything to prevent overfitting in this post, you could do so by adding a regularization term on the hyperparameters⁽²⁾, or by splitting the observations into training and validation sets and selecting, among models trained on the training set, the one that fits the validation data best.
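For example, following sidenote 2, a prior could be added to the loss roughly as follows (a hypothetical sketch; the choice of prior and its parameters are assumptions):

# Hypothetical log-normal prior on one of the length scales
smooth_length_scale_prior = tfd.LogNormal(
    loc=np.float64(np.log(50.)), scale=np.float64(1.))

def regularized_loss_fn(index_points, observations):
    nll = gp_loss_fn(index_points, observations)
    # Penalize hyperparameter values that are unlikely under the prior
    regularization = -smooth_length_scale_prior.log_prob(
        tf.convert_to_tensor(smooth_length_scale))
    return nll + regularization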

In [12]:
Hyperparameter                 Value
smooth_amplitude               107.26746259399933
smooth_length_scale            90.0444561884546
periodic_amplitude             3.0688642653575795
periodic_length_scale          1.6477734668235333
periodic_period                0.9997593772006111
periodic_local_length_scale    105.71284661396933
irregular_amplitude            0.9969816727859772
irregular_length_scale         1.3755380538700162
irregular_scale_mixture        0.11173795870100905
observation_noise_variance     0.05365458289291863

The fitted kernel and its components are illustrated in more detail in a follow-up post.

Posterior predictions

The TensorFlow GaussianProcess class can only represent an unconditional Gaussian process. To make predictions by posterior inference conditioned on the observed data we need to create a GaussianProcessRegressionModel with the fitted kernel, mean function, and observed data.

The posterior predictions conditioned on the observed data before 2010 are plotted in the figure below together with their 95% prediction interval.

Notice that our model captures the different dataset characteristics, such as the trend and seasonality, quite well. The predictions start to deviate the further we get from the observed data the model was conditioned on, and the prediction interval widens accordingly.

In [13]:
# Posterior GP using fitted kernel and observed data
gp_posterior_predict = tfd.GaussianProcessRegressionModel(
    mean_fn=mean_fn,
    kernel=kernel,
    index_points=df_predict.Date.values.reshape(-1, 1),
    observation_index_points=df_observed.Date.values.reshape(-1, 1),
    observations=df_observed.CO2.values,
    observation_noise_variance=observation_noise_variance)

# Posterior mean and standard deviation
posterior_mean_predict = gp_posterior_predict.mean()
posterior_std_predict = gp_posterior_predict.stddev()
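The 95% prediction interval shown in the figure below can then be derived from the posterior mean and standard deviation, for example:

# 95% prediction interval: mean ± 1.96 posterior standard deviations
lower_bound = posterior_mean_predict - 1.96 * posterior_std_predict
upper_bound = posterior_mean_predict + 1.96 * posterior_std_predict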
In [14]:

This post illustrated how to use TensorFlow Probability to combine multiple out-of-the-box kernels and fit their hyperparameters on observed data. The fitted model was then used to make posterior predictions.

Sidenotes

  1. Note that the determinant $\lvert \Sigma \rvert$ is equal to the product of its eigenvalues, and that $\lvert \Sigma \rvert$ can be interpreted as the volume spanned by the covariance matrix $\Sigma$. Reducing $\lvert \Sigma \rvert$ thus decreases the dispersion of points drawn from the distribution with covariance matrix $\Sigma$ and reduces the complexity.
  2. Using TensorFlow Probability distributions you can define a prior distribution for each hyperparameter and use the log-likelihood of this prior as a regularization term on that parameter.

References

  1. Gaussian Processes for Machine Learning. Chapter 5: Model Selection and Adaptation of Hyperparameters by Carl Edward Rasmussen and Christopher K. I. Williams.
  2. Gaussian Processes for Regression by Christopher K. I. Williams and Carl Edward Rasmussen.
In [15]:
Python implementation: CPython
Python version       : 3.9.4
IPython version      : 7.23.1

bokeh                 : 2.3.2
pandas                : 1.2.4
tensorflow_probability: 0.12.2
tensorflow            : 2.5.0
numpy                 : 1.19.5

This post at peterroelants.github.io is generated from a Python notebook file. Link to the full IPython notebook file

Originally published on January 6, 2019.