2023
ReAct REPL Agent (07 Apr 2023)
A GPT-based ReAct Python REPL agent with access to method retrieval. It can execute simple workflows by chaining API calls.
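For a flavor of the idea, here is a minimal, hypothetical sketch of a ReAct-style loop around a Python REPL; the `llm_complete` callable and the prompt conventions are placeholders, not the post's actual implementation.

```python
# A minimal sketch of a ReAct-style loop, assuming a hypothetical
# llm_complete(prompt) -> str LLM call; not the post's actual implementation.
import io
import contextlib

def python_repl(code: str) -> str:
    """Execute code and capture stdout, returning it as the observation."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # toy sandbox; a real agent needs proper isolation
    return buffer.getvalue()

def react_loop(task: str, llm_complete, max_steps: int = 5) -> str:
    prompt = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm_complete(prompt)          # model emits Thought/Action text
        prompt += step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:                # run the proposed code, feed back
            code = step.split("Action:", 1)[1].strip()
            prompt += f"\nObservation: {python_repl(code)}\n"
    return "No answer within step budget."
```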
2021
Tagging GitHub Pages (15 May 2021)
How to use Jekyll tags on GitHub Pages blogs, and how to sort tags using Liquid templating.
Hosting Jupyter Notebooks on a Blog (15 May 2021)
How this blog, hosted on GitHub Pages, uses Jekyll and Jupyter Notebooks to publish notebooks as blog posts.
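The core conversion step can be sketched with nbconvert's Markdown exporter; the filenames and front matter below are illustrative placeholders, not this blog's exact pipeline.

```python
# A rough sketch of the notebook-to-post conversion idea using nbconvert;
# the blog's actual pipeline may differ. Filenames here are placeholders.
from nbconvert import MarkdownExporter

exporter = MarkdownExporter()
body, resources = exporter.from_filename("notebook_post.ipynb")

# Prepend Jekyll front matter so GitHub Pages renders it as a post.
front_matter = "---\nlayout: post\ntitle: Example post\n---\n\n"
with open("_posts/2021-05-15-example.md", "w") as f:
    f.write(front_matter + body)
```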
2019
Gaussian processes (2/3) - Fitting a Gaussian process kernel (06 Jan 2019)
Fit a parameterized Gaussian process kernel on the Mauna Loa CO₂ dataset. We'll use TensorFlow Probability to implement the model and fit the kernel parameters. Updated to use TensorFlow 2.
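A condensed sketch of the kernel-fitting idea with TensorFlow Probability is shown below; the post's Mauna Loa model combines several kernels, while this toy version fits a single exponentiated quadratic kernel on stand-in data.

```python
# Condensed sketch of kernel fitting with TensorFlow Probability;
# toy data and a single kernel stand in for the full Mauna Loa model.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

x_obs = np.linspace(0., 10., 50, dtype=np.float32)[:, None]
y_obs = (np.sin(x_obs[:, 0]) + 0.1 * np.random.randn(50)).astype(np.float32)

# Constrain kernel parameters to be positive via a softplus bijector.
amplitude = tfp.util.TransformedVariable(
    1., tfp.bijectors.Softplus(), name="amplitude")
length_scale = tfp.util.TransformedVariable(
    1., tfp.bijectors.Softplus(), name="length_scale")
kernel = tfp.math.psd_kernels.ExponentiatedQuadratic(amplitude, length_scale)

gp = tfd.GaussianProcess(kernel=kernel, index_points=x_obs,
                         observation_noise_variance=0.1)

# Fit by maximizing the log marginal likelihood of the observations.
optimizer = tf.optimizers.Adam(learning_rate=0.05)
for _ in range(200):
    with tf.GradientTape() as tape:
        loss = -gp.log_prob(y_obs)
    grads = tape.gradient(loss, gp.trainable_variables)
    optimizer.apply_gradients(zip(grads, gp.trainable_variables))
```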
Gaussian processes (1/3) - From scratch (05 Jan 2019)
This post explores some concepts behind Gaussian processes, such as stochastic processes and the kernel function. We will build a deeper understanding of Gaussian process regression by implementing it from scratch using Python and NumPy.
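As a taste of the from-scratch approach, here is a minimal sketch of an exponentiated quadratic kernel and samples drawn from the GP prior it defines.

```python
# Minimal sketch of the from-scratch ingredients: an exponentiated
# quadratic kernel and samples drawn from the GP prior it defines.
import numpy as np

def exponentiated_quadratic(xa, xb, length_scale=1.0):
    """Covariance k(xa, xb) = exp(-(xa - xb)^2 / (2 * length_scale^2))."""
    sq_dist = (xa[:, None] - xb[None, :]) ** 2
    return np.exp(-0.5 * sq_dist / length_scale**2)

x = np.linspace(-4, 4, 100)
cov = exponentiated_quadratic(x, x) + 1e-8 * np.eye(len(x))  # jitter for stability
samples = np.random.multivariate_normal(np.zeros(len(x)), cov, size=3)
```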
2018
Regression quattro stagioni (22 Oct 2018)
Linear regression implemented in four different ways. We'll describe the model and estimate its parameters using four methods: MLE, OLS, gradient descent, and MCMC.
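For instance, the OLS/maximum-likelihood estimate has a closed form via the normal equations; a minimal sketch on toy data:

```python
# Sketch of one of the four estimators: the OLS / maximum-likelihood
# closed-form solution via the normal equations (toy data).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
t = 2.5 * x + rng.normal(0, 0.2, 100)      # targets from a known slope

X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept
w_ols = np.linalg.solve(X.T @ X, X.T @ t)  # solves (X^T X) w = X^T t
```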
Multivariate normal distribution (28 Sep 2018)
Introduction to the multivariate normal distribution (Gaussian). We'll describe how to sample from this distribution and how to compute its conditionals and marginals.
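As a small illustration, conditioning a bivariate normal on one observed component follows the standard formulas; a sketch with made-up numbers:

```python
# Sketch of conditioning a bivariate normal: p(x1 | x2 = v) has
# mean mu1 + s12/s22 * (v - mu2) and variance s11 - s12^2/s22.
import numpy as np

mu = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.8],
                [0.8, 2.0]])

v = 2.0  # observed value of x2
cond_mean = mu[0] + cov[0, 1] / cov[1, 1] * (v - mu[1])
cond_var = cov[0, 0] - cov[0, 1] ** 2 / cov[1, 1]
```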
Multi-armed bandit implementation (26 Sep 2018)
How to implement a Bayesian multi-armed bandit model in Python, and use it to simulate an online test. The model is based on the beta distribution and Thompson sampling.
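A minimal sketch of the Beta-Bernoulli Thompson sampling loop on simulated conversion rates (the rates below are made up):

```python
# Minimal sketch of Beta-Bernoulli Thompson sampling for a simulated test.
import numpy as np

rng = np.random.default_rng(42)
true_rates = [0.05, 0.08, 0.12]     # hidden conversion rates (made up)
alpha = np.ones(3)                  # Beta posterior: successes + 1
beta = np.ones(3)                   # Beta posterior: failures + 1

for _ in range(5000):
    theta = rng.beta(alpha, beta)   # sample a rate from each posterior
    arm = int(np.argmax(theta))     # play the most promising arm
    reward = rng.random() < true_rates[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
```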
2015
How to implement an RNN (2/2) - Tensor data and non-linearities (27 Sep 2015)
How to implement and train a simple recurrent neural network (RNN) with input data stored as a tensor. The RNN will learn to perform binary addition as a toy problem. RMSProp with Nesterov momentum is used as the gradient-based optimization method during training.
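For reference, a sketch of the RMSProp update rule (Nesterov-style momentum omitted for brevity; this is a generic formulation, not the post's exact code):

```python
# Sketch of the RMSProp update: scale the step by a running average
# of squared gradients. `grad` is a gradient of the loss.
import numpy as np

def rmsprop_update(param, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    cache = decay * cache + (1 - decay) * grad**2
    param = param - lr * grad / (np.sqrt(cache) + eps)
    return param, cache
```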
How to implement an RNN (1/2) - Minimal example (27 Sep 2015)
How to implement a minimal recurrent neural network (RNN) from scratch with Python and NumPy. The RNN is simple enough to visualize the loss surface and explore why vanishing and exploding gradients can occur during optimization. For stability, the RNN will be trained with backpropagation through time using the RProp optimization algorithm.
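A sketch along the lines of the post's minimal setup, assuming a linear scalar state updated with an input weight `wx` and a recurrent weight `wh`:

```python
# Sketch of a minimal linear RNN forward pass: a scalar state updated
# over time with input weight wx and recurrent weight wh.
import numpy as np

def forward_states(X, wx, wh):
    """X has shape (batch, time); returns states of shape (batch, time+1)."""
    S = np.zeros((X.shape[0], X.shape[1] + 1))
    for t in range(X.shape[1]):
        S[:, t + 1] = X[:, t] * wx + S[:, t] * wh
    return S
```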
How to implement a neural network (5/5) - Generalization to multiple layers (16 Jun 2015)
Generalization of neural networks to multiple layers. Illustrated on a simple network built from scratch using Python and NumPy. The network is trained on a digit classification toy problem using stochastic gradient descent.
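The generalization idea can be sketched as layers chained in a list, so the forward pass is a simple fold over the network (a simplified sketch, not the post's exact classes):

```python
# Sketch of the generalization: layers as objects chained in a list,
# so the forward pass loops over an arbitrary stack of layers.
import numpy as np

class Linear:
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * 0.1
        self.b = np.zeros(n_out)
    def forward(self, X):
        return X @ self.W + self.b

class Logistic:
    def forward(self, X):
        return 1 / (1 + np.exp(-X))

def forward_pass(layers, X):
    """Propagate activations through the stack, keeping intermediates."""
    activations = [X]
    for layer in layers:
        activations.append(layer.forward(activations[-1]))
    return activations

layers = [Linear(64, 20), Logistic(), Linear(20, 10), Logistic()]
```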
How to implement a neural network (4/5) - vectorization of operations (15 Jun 2015)
Vectorization of the neural network and backpropagation algorithm for multi-dimensional data, illustrated on a simple network implemented using Python and NumPy. The network is trained on a toy problem using gradient descent with momentum.
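The gist of vectorization: process a whole batch with one matrix product instead of looping over samples. A small sketch with stand-in values:

```python
# Sketch of vectorization: one matrix product replaces a per-sample loop.
import numpy as np

X = np.random.randn(64, 10)   # batch of 64 samples, 10 features
W = np.random.randn(10, 3)    # weights for 3 output units
b = np.zeros(3)

Y = X @ W + b                 # all 64 forward passes at once

# Vectorized gradient of a loss w.r.t. W, given dL/dY for the batch:
dY = np.random.randn(64, 3)   # stand-in upstream gradient
dW = X.T @ dY / X.shape[0]    # averaged over the batch
```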
How to implement a neural network (3/5) - backpropagation (14 Jun 2015)
Transition from single-layer linear models to a multi-layer neural network by adding a hidden layer with a nonlinearity. A minimal network is implemented using Python and NumPy. This minimal network is simple enough to visualize its parameter space. The model will be optimized on a toy problem using backpropagation and gradient descent, for which the gradient derivations are included.
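A condensed sketch of backpropagation through one logistic hidden layer, applying the chain rule layer by layer (shapes and data are illustrative):

```python
# Sketch of backprop through one logistic hidden layer; gradients follow
# the chain rule layer by layer (shapes: X (n,2), Wh (2,3), Wo (3,1)).
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

X = np.random.randn(20, 2)
T = np.random.randint(0, 2, (20, 1)).astype(float)
Wh, Wo = np.random.randn(2, 3), np.random.randn(3, 1)

H = sigmoid(X @ Wh)            # hidden activations
Y = sigmoid(H @ Wo)            # output probabilities

dZo = Y - T                    # output error (cross-entropy + sigmoid)
dWo = H.T @ dZo                # gradient for output weights
dH = dZo @ Wo.T * H * (1 - H)  # error propagated through the nonlinearity
dWh = X.T @ dH                 # gradient for hidden weights
```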
How to implement a neural network (2/5) - classification (13 Jun 2015)
How to implement, and optimize, a logistic regression model from scratch using Python and NumPy. The logistic regression model will be approached as a minimal classification neural network. The model will be optimized using gradient descent, for which the gradient derivations are provided.
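A minimal sketch of the training loop: with a sigmoid output and cross-entropy loss, the gradient simplifies to X^T (y - t) / N.

```python
# Sketch of logistic regression trained by gradient descent; the
# cross-entropy gradient w.r.t. the weights is X^T (y - t) / N.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
t = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable targets

w = np.zeros(2)
for _ in range(200):
    y = sigmoid(X @ w)
    w -= 0.1 * X.T @ (y - t) / len(t)      # gradient descent step
```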
How to implement a neural network (1/5) - gradient descent (12 Jun 2015)
How to implement, and optimize, a linear regression model from scratch using Python and NumPy. The linear regression model will be approached as a minimal regression neural network. The model will be optimized using gradient descent, for which the gradient derivations are provided.
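A minimal sketch of the setup: fit a single weight w by gradient descent on the squared-error cost (toy data with a known slope):

```python
# Sketch of fitting a single weight w by gradient descent on the
# squared-error cost, whose gradient is 2/N * x . (x*w - t).
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 20)
t = 2.0 * x + rng.normal(0, 0.2, 20)  # noisy targets, true slope 2

w = 0.0
for _ in range(100):
    grad = 2 * x @ (x * w - t) / len(x)
    w -= 0.1 * grad                   # step against the gradient
```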
Softmax classification with cross-entropy (2/2) (11 Jun 2015)
Description of the softmax function used to model multiclass classification problems. Contains derivations of the gradients used for optimizing any parameters with respect to the cross-entropy loss function.
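A sketch of a numerically stable softmax and the resulting gradient, which reduces to y - t for one-hot targets:

```python
# Sketch of a numerically stable softmax and the gradient of the
# cross-entropy loss w.r.t. its inputs, which reduces to y - t.
import numpy as np

def softmax(z):
    """Shift by the row max for numerical stability; rows are samples."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

z = np.array([[2.0, 1.0, 0.1]])
t = np.array([[1.0, 0.0, 0.0]])  # one-hot target
y = softmax(z)
dz = y - t                        # gradient of cross-entropy w.r.t. z
```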
Logistic classification with cross-entropy (1/2) (10 Jun 2015)
Description of the logistic function used to model binary classification problems. Contains derivations of the gradients used for optimizing any parameters with respect to the cross-entropy loss function.
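A small sketch that checks the derived gradient, dξ/dz = y - t, against a finite-difference approximation:

```python
# Sketch of the logistic function and a finite-difference check that the
# cross-entropy gradient w.r.t. the input z is indeed y - t.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def cross_entropy(z, t):
    y = sigmoid(z)
    return -(t * np.log(y) + (1 - t) * np.log(1 - y))

z, t, eps = 0.7, 1.0, 1e-6
analytic = sigmoid(z) - t
numeric = (cross_entropy(z + eps, t) - cross_entropy(z - eps, t)) / (2 * eps)
assert np.isclose(analytic, numeric)
```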