Gaussian processes (3/3) - exploring kernels

This post goes more in-depth into the kernels used in our previous example of fitting a Gaussian process to model atmospheric CO₂ concentrations. We will describe and visually explore each part of the kernel used in our fitted model, which is a combination of the exponentiated quadratic kernel, exponentiated sine squared kernel, and rational quadratic kernel. This post is the last part of a series on Gaussian processes:

  1. Understanding Gaussian processes
  2. Fitting a Gaussian process kernel
  3. Gaussian process kernels (this post)

Kernel function

A kernel (or covariance function) describes the covariance of the Gaussian process random variables. Together with the mean function, the kernel completely defines a Gaussian process.

In the first post we introduced the concept of the kernel, which defines a prior on the Gaussian process distribution. To summarize: the kernel function $k(x, x')$ models the covariance between each pair of points in $x$. The kernel function, together with the mean function $m(x)$, defines the Gaussian process distribution:

$$y \sim \mathcal{GP}(m(x),k(x,x'))$$
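
Concretely, evaluated at any finite set of points $x$, a Gaussian process reduces to a multivariate normal distribution with mean vector $m(x)$ and covariance matrix $k(x, x)$. A minimal NumPy sketch (with an illustrative zero mean function and an exponentiated quadratic kernel, which is introduced below):

import numpy as np

# At finitely many points a GP is just a multivariate normal distribution.
x = np.linspace(-3., 3., 50)
m = np.zeros_like(x)                              # zero mean function
k = np.exp(-(x[:, None] - x[None, :])**2 / 2.)    # kernel matrix k(x, x)
y = np.random.multivariate_normal(mean=m, cov=k)  # one sampled function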

Valid kernels

In order to be a valid kernel function, the resulting kernel matrix $\Sigma = k(X, X)$ should be positive definite, which implies that the matrix is symmetric. Being positive definite also means that the kernel matrix is invertible.

The process of defining a new valid kernel from scratch is not always trivial. Typically, pre-defined kernels are used to model a variety of processes. In what follows we will visually explore some of the pre-defined kernels that we used in our fitting example.
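
As a quick illustration, positive definiteness of a given kernel matrix can be checked by attempting a Cholesky decomposition, which only succeeds for positive-definite matrices (a sketch, not part of the original notebook):

import numpy as np

def is_positive_definite(kernel_matrix):
    """Return True if the matrix admits a Cholesky decomposition."""
    try:
        np.linalg.cholesky(kernel_matrix)
        return True
    except np.linalg.LinAlgError:
        return False

x = np.linspace(-1., 1., 5)
k = np.exp(-(x[:, None] - x[None, :])**2 / 2.)  # exponentiated quadratic matrix
print(is_positive_definite(k))  # True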


White noise kernel

The white noise kernel represents independent and identically distributed noise added to the Gaussian process distribution.

$$k(X, X) = \sigma^2 I_n$$

With:

  • $\sigma^2$ the variance of the noise.
  • $I_n$ the identity matrix.

This formula results in a covariance matrix that is zero everywhere except on its diagonal, which contains the variances of the individual random variables. All covariances between distinct samples are zero because the noise is uncorrelated.
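
A minimal NumPy sketch of this covariance matrix and samples drawn from it (the helper below is illustrative, not taken from the original notebook):

import numpy as np

def white_noise_kernel(n, sigma=1.):
    """White noise covariance: sigma^2 on the diagonal, zeros elsewhere."""
    return sigma**2 * np.eye(n)

# Draw 3 samples of 50 points each from the zero-mean white noise process
cov = white_noise_kernel(50, sigma=1.)
samples = np.random.multivariate_normal(np.zeros(50), cov, size=3)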

Samples from the white noise kernel together with a visual representation of the covariance matrix are plotted in the next figure.

[Figure: samples from the white noise kernel and a visual representation of its covariance matrix]

Exponentiated quadratic kernel

The exponentiated quadratic kernel (also known as squared exponential kernel, Gaussian kernel or radial basis function kernel) is one of the most popular kernels used in Gaussian process modelling. It can be computed as:

$$k(x_a, x_b) = \sigma^2 \exp \left(-\frac{ \left\Vert x_a - x_b \right\Vert^2}{2\ell^2}\right)$$

With:

  • $\sigma^2$ the overall variance ($\sigma$ is also known as amplitude).
  • $\ell$ the lengthscale.

Using the exponentiated quadratic kernel will result in a smooth prior on functions sampled from the Gaussian process.

The exponentiated quadratic is visualized in the next figures. The first figure shows the distance plot with respect to $0$: $k(0, x)$. Note that the similarity outputted by the kernel decreases exponentially towards $0$ the farther we move away from the center, and that the similarity is maximal at the center $x_a = x_b$.
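
A direct NumPy implementation of this kernel could look as follows (a sketch; the notebook itself uses the TensorFlow Probability equivalent tfk.ExponentiatedQuadratic):

import numpy as np

def exponentiated_quadratic(xa, xb, sigma=1., length_scale=1.):
    """Exponentiated quadratic kernel between points xa (n, d) and xb (m, d)."""
    # Squared Euclidean distance between every pair of points
    sq_dist = np.sum((xa[:, None, :] - xb[None, :, :])**2, axis=-1)
    return sigma**2 * np.exp(-sq_dist / (2. * length_scale**2))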

[Figure: exponentiated quadratic distance plot $k(0, x)$ for different lengthscales and amplitudes]

The following figure shows samples from the exponentiated quadratic kernel together with a visual representation of its covariance matrix.

Observe in the previous and following figures that increasing the lengthscale parameter $\ell$ increases the spread of the covariance, and that increasing the amplitude parameter $\sigma$ increases the maximum value of the covariance.

[Figure: samples from the exponentiated quadratic kernel and a visual representation of its covariance matrix for different lengthscales and amplitudes]

Rational quadratic kernel

$$k(x_a, x_b) = \sigma^2 \left( 1 + \frac{ \left\Vert x_a - x_b \right\Vert^2}{2 \alpha \ell^2} \right)^{-\alpha}$$

With:

  • $\sigma^2$ the overall variance ($\sigma$ is also known as amplitude).
  • $\ell$ the lengthscale.
  • $\alpha$ the scale-mixture ($\alpha > 0$).

Similar to the exponentiated quadratic, the rational quadratic kernel results in a somewhat smooth prior on functions sampled from the Gaussian process. The rational quadratic can be interpreted as an infinite sum of exponentiated quadratic kernels with different lengthscales, with $\alpha$ determining the relative weighting of the different lengthscales. When $\alpha \rightarrow \infty$ the rational quadratic kernel converges to the exponentiated quadratic kernel.
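
This convergence can be checked numerically against the exponentiated quadratic implementation sketched above (again an illustrative snippet, not from the original notebook):

import numpy as np

def rational_quadratic(xa, xb, sigma=1., length_scale=1., alpha=1.):
    """Rational quadratic kernel between points xa (n, d) and xb (m, d)."""
    sq_dist = np.sum((xa[:, None, :] - xb[None, :, :])**2, axis=-1)
    return sigma**2 * (1. + sq_dist / (2. * alpha * length_scale**2))**(-alpha)

x = np.linspace(-3., 3., 25)[:, None]
k_rq = rational_quadratic(x, x, alpha=1e6)  # very large scale-mixture
k_eq = exponentiated_quadratic(x, x)        # defined above
print(np.max(np.abs(k_rq - k_eq)))          # close to 0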

The rational quadratic is visualized in the next figures. The first figure shows the distance plot with respect to $0$: $k(0, x)$ with the amplitude $\sigma$ fixed to $1$.

Note that, just like the exponentiated quadratic, the similarity outputted by the kernel decreases towards $0$ the farther we move away from the center, and that the similarity is maximal at the center $x_a = x_b$.

[Figure: rational quadratic distance plot $k(0, x)$ for different lengthscales and scale-mixtures]

The following figure shows samples from the rational quadratic kernel together with a visual representation of its covariance matrix. The amplitude $\sigma$ is fixed to $1$ in all figures; changing it has the same effect as in the exponentiated quadratic.

Observe in the previous and following figures that increasing the lengthscale parameter $\ell$ increases the overall spread of the covariance. Decreasing the scale-mixture $\alpha$ allows for more small-scale local variation while keeping the longer-range trends defined by the lengthscale parameter; increasing the scale-mixture to a large value reduces these local variations.

[Figure: samples from the rational quadratic kernel and a visual representation of its covariance matrix for different lengthscales and scale-mixtures]

Periodic kernel

$$k(x_a, x_b) = \sigma^2 \exp \left(-\frac{2}{\ell^2}\sin^2 \left( \pi \frac{\lvert x_a - x_b \rvert}{p}\right) \right)$$

With:

  • $\sigma^2$ the overall variance ($\sigma$ is also known as amplitude).
  • $\ell$ the lengthscale.
  • $p$ the period, which is the distance between repetitions.

The periodic kernel allows us to model periodic functions.

The periodic kernel is visualized in the next figures. The first two figures show the distance plot with respect to $0$: $k(0, x)$ with the amplitude $\sigma$ fixed to $1$ and different variations of the other parameters.

In [8]:
import tensorflow as tf
import tensorflow_probability as tfp

tfk = tfp.math.psd_kernels

def periodic_tf(length_scale, period):
    """Periodic (ExpSinSquared) kernel TensorFlow operation."""
    amplitude_tf = tf.constant(1, dtype=tf.float64)
    length_scale_tf = tf.constant(length_scale, dtype=tf.float64)
    period_tf = tf.constant(period, dtype=tf.float64)
    kernel = tfk.ExpSinSquared(
        amplitude=amplitude_tf,
        length_scale=length_scale_tf,
        period=period_tf)
    return kernel

def periodic(xa, xb, length_scale, period):
    """Evaluate the periodic kernel matrix between points xa and xb."""
    kernel = periodic_tf(length_scale, period)
    # TensorFlow 2 executes eagerly; .numpy() converts the covariance
    # matrix tensor to a NumPy array.
    return kernel.matrix(xa, xb).numpy()

[Figure: periodic kernel distance plots $k(0, x)$ for different periods and lengthscales]

The following figure shows samples from the periodic kernel together with a visual representation of its covariance matrix. The amplitude $\sigma$ is fixed to $1$ in all figures.

Observe that in the previous and following figures, increasing the period $p$ increases the distance between the repetitions (increasing the wavelength). Increasing the lengthscale parameter $\ell$ decreases the local variations within a repetition in the same way that increasing the lengthscale in the exponentiated quadratic kernel decreases the variations over a longer range.

[Figure: samples from the periodic kernel and a visual representation of its covariance matrix for different periods and lengthscales]

Combining kernels by multiplication

Kernels can be combined by multiplying them together. Multiplying kernels is an elementwise multiplication of their corresponding covariance matrices, so the resulting covariance is only high where both original covariances are high. The multiplication operation can thus be interpreted as an AND operation.
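
TensorFlow Probability kernels overload the * operator for this purpose. The sketch below (an illustrative check, reusing the tfk alias defined earlier) verifies the elementwise interpretation:

import numpy as np
import tensorflow_probability as tfp

tfk = tfp.math.psd_kernels

x = np.linspace(-2., 2., 10, dtype=np.float64)[:, None]
k1 = tfk.ExpSinSquared(
    amplitude=np.float64(1.), length_scale=np.float64(1.), period=np.float64(1.))
k2 = tfk.ExponentiatedQuadratic(length_scale=np.float64(2.))
product_kernel = k1 * k2  # kernel multiplication

# The product kernel's covariance equals the elementwise product
expected = k1.matrix(x, x).numpy() * k2.matrix(x, x).numpy()
assert np.allclose(product_kernel.matrix(x, x).numpy(), expected)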

Local periodic kernel

The local periodic kernel is a multiplication of the periodic kernel with the exponentiated quadratic kernel, which allows the shape of the repeating pattern to vary over longer distances. Note that the variance parameters $\sigma^2$ of the two kernels are combined into one.

$$k(x_a, x_b) = \sigma^2 \exp \left(-\frac{2}{\ell_p^2}\sin^2 \left( \pi \frac{\lvert x_a - x_b \rvert}{p}\right) \right) \exp \left(-\frac{ \left\Vert x_a - x_b \right\Vert^2}{2\ell_{eq}^2}\right)$$

With:

  • $\sigma^2$ the overall variance ($\sigma$ is also known as amplitude).
  • $\ell_p$ the lengthscale of the periodic function.
  • $p$ the period.
  • $\ell_{eq}$ the lengthscale of the exponentiated quadratic.
In [11]:
def get_local_periodic_kernel(periodic_length_scale, period, amplitude, local_length_scale):
    """Local periodic kernel: periodic kernel multiplied with an exponentiated quadratic."""
    periodic = tfk.ExpSinSquared(amplitude=amplitude, length_scale=periodic_length_scale, period=period)
    local = tfk.ExponentiatedQuadratic(length_scale=local_length_scale)
    return periodic * local  # elementwise multiplication of the covariances

The local periodic kernel is visualized in the next figures. The first figure shows the distance plot with respect to $0$: $k(0, x)$ with only variations of the lengthscale of the exponentiated quadratic.

[Figure: local periodic distance plot $k(0, x)$ for different exponentiated quadratic lengthscales $\ell_{eq}$]

The following figure shows samples from the local periodic kernel together with a visual representation of its covariance matrix. Only variations of the lengthscale of the exponentiated quadratic $\ell_{eq}$ are shown; the rest of the parameters are fixed to $1$.

Observe that increasing the lengthscale parameter $\ell_{eq}$ extends the periodic covariance over longer distances, making nearby repetitions more consistent with each other.

[Figure: samples from the local periodic kernel and a visual representation of its covariance matrix for different $\ell_{eq}$ values]

Combining kernels by addition

Kernels can be combined by adding them together. Adding kernels is an elementwise addition of their corresponding covariance matrices, so the resulting covariance is only low where both original covariances are low. The addition operation can thus be interpreted as an OR operation.
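
Analogously, TensorFlow Probability kernels overload the + operator (a sketch reusing k1, k2, and x from the multiplication example above):

sum_kernel = k1 + k2  # kernel addition

# The sum kernel's covariance equals the elementwise sum
expected = k1.matrix(x, x).numpy() + k2.matrix(x, x).numpy()
assert np.allclose(sum_kernel.matrix(x, x).numpy(), expected)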

Atmospheric CO₂ kernel

An example of a kernel combined by addition is the kernel we fitted in our previous example of modelling atmospheric CO₂ concentrations with a Gaussian process. This kernel needed to combine different characteristics of the data: a long-term smooth change in CO₂ levels, seasonality, and short- to medium-term irregularities.

The final kernel was defined as the sum of the exponentiated quadratic kernel, the local periodic kernel, and the rational quadratic kernel, plus observational white noise. The hyperparameters were fitted on the data using a maximum likelihood method.

The prior defined by this kernel with fitted hyperparameters is illustrated in the following figures. These figures will show some samples of the kernel and a visual representation of its covariance over different ranges of input.

In [14]:
def get_combined_kernel():
    """Sum of kernels with hyperparameters fitted on the CO₂ data."""
    # Long-term smooth change in CO₂ levels
    smooth_kernel = tfk.ExponentiatedQuadratic(
        amplitude=107.26,
        length_scale=90.044)
    # Seasonal periodicity
    local_periodic_kernel = get_local_periodic_kernel(
        periodic_length_scale=1.65,
        period=1.,
        amplitude=3.07,
        local_length_scale=131.)
    # Short to medium term irregularities
    irregular_kernel = tfk.RationalQuadratic(
        amplitude=1.,
        length_scale=1.38,
        scale_mixture_rate=0.111)
    # The observational white noise is not part of this sum; it is
    # handled separately when defining the Gaussian process.
    return smooth_kernel + local_periodic_kernel + irregular_kernel
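
To draw samples from the prior defined by this kernel, one option is tfp.distributions.GaussianProcess; a minimal sketch (the index points and number of samples are arbitrary illustrative choices, not from the original notebook):

import numpy as np
import tensorflow_probability as tfp

tfd = tfp.distributions

# Index points spanning 10 "years" (float32 to match the kernel parameters)
x = np.linspace(0., 10., 200, dtype=np.float32)[:, None]
gp_prior = tfd.GaussianProcess(kernel=get_combined_kernel(), index_points=x)
samples = gp_prior.sample(3)  # 3 functions drawn from the prior
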
[Figure: samples from the combined CO₂ kernel and a visual representation of its covariance matrix over different input ranges]

In [16]:
Python implementation: CPython
Python version       : 3.9.4
IPython version      : 7.23.1

matplotlib            : 3.4.2
tensorflow            : 2.5.0
tensorflow_probability: 0.12.2
seaborn               : 0.11.1
numpy                 : 1.19.5

This post at peterroelants.github.io is generated from a Python notebook file. Link to the full IPython notebook file

Originally published on January 7, 2019.