Deep learning is also a useful tool for biologists

In this entry, I show how to harness deep mutational scanning data and embeddings derived from a protein language model to predict whether a receptor binding domain (RBD) variant of the SARS-CoV-2 spike protein (S) has increased affinity for the human ACE2 receptor. At the end of the article, I also show how to reuse these embeddings for other purposes, such as predicting antibody escape mutants.
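As a rough sketch of the kind of pipeline this involves, the snippet below trains a simple classifier on precomputed protein-language-model embeddings. The file names, array shapes and the choice of logistic regression are assumptions for illustration, not the exact setup used in the article.

```python
# Minimal sketch: classify RBD variants as ACE2 binders / non-binders
# from precomputed protein-language-model embeddings.
# 'rbd_embeddings.npy' and 'rbd_labels.npy' are hypothetical file names.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X = np.load("rbd_embeddings.npy")   # shape: (n_variants, embedding_dim)
y = np.load("rbd_labels.npy")       # 1 = increased ACE2 affinity, 0 = otherwise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```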

A Quick Overview

The protein responsible for binding SARS-CoV-2 to human cells (and thus starting the infection) is the spike protein (S); specifically, it binds the ACE2 receptor through the region known…


Visualise the decision process (weights) of a neural network

Neural networks (NNs) are often deemed a ‘black box’, meaning we cannot easily pinpoint exactly how they make decisions. Given that NNs store their knowledge in their weights, it makes sense that examining those weights should reveal some insight into their decision process.

In this article, we are going to train NNs that recognise handwritten numbers (0–9) and then open their ‘black box’ by visualising their weights.
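As a minimal sketch of the idea, the snippet below trains a single dense layer on MNIST and plots the learned weights as images, one per digit class. The architecture (no hidden layer) is a deliberate simplification so that each output unit's weight vector can be reshaped into a 28×28 image; the networks in the article may differ.

```python
# Minimal sketch: train a single dense layer on MNIST and display its
# weights as 28x28 images, one per digit class.
import matplotlib.pyplot as plt
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,))
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)

weights = model.layers[0].get_weights()[0]          # shape: (784, 10)
fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for digit, ax in enumerate(axes.ravel()):
    ax.imshow(weights[:, digit].reshape(28, 28), cmap="coolwarm")
    ax.set_title(str(digit))
    ax.axis("off")
plt.show()
```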

All the code is written in Python and can be found here.

Reading handwritten digits with a neural network

We are going to use handwritten digits that are stored in the MNIST database. Each digit is in…


In this post I’ll try to explain the intuition behind maximum likelihood estimation (MLE), which is widely used in inferential statistics to estimate the parameters of a statistical model.

Introduction

In many cases, we want to use the information contained in a sample (X) to estimate properties of the population that generated it. To do that, we need to make some assumptions about the data-generating process; in other words, we need to come up with a statistical model.

A very popular and sometimes very reasonable assumption is that the sample was generated from an approximately normal distribution (ND). …
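To make the intuition concrete, here is a small sketch that estimates the parameters of a normal distribution by maximum likelihood on a synthetic sample, comparing a direct numerical maximisation of the log-likelihood with the closed-form estimates. The data and starting values are purely illustrative.

```python
# Minimal sketch: estimate the mean and standard deviation of a sample
# by maximum likelihood, assuming the data come from a normal distribution.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=500)   # synthetic sample X

def negative_log_likelihood(params):
    mu, sigma = params
    return -np.sum(norm.logpdf(sample, loc=mu, scale=sigma))

result = minimize(negative_log_likelihood, x0=[0.0, 1.0],
                  bounds=[(None, None), (1e-6, None)])

print("Numerical MLE  :", result.x)
print("Closed-form MLE:", sample.mean(), sample.std(ddof=0))
```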


Sit back and let a gang of neural networks explore the optimisation landscape for you by harnessing Darwinian selection

Darwin knew the secret to tame wild neural networks (Adapted from: Charles Darwin — NHM, London)

In this article I’ll try to explain 3 things:

  1. The basic idea of an evolutionary algorithm and how you can evolve a population of neural networks by exploiting Darwinian selection
  2. How you can solve the CartPole-v0 environment from OpenAI Gym using an evolutionary framework (a minimal sketch of the loop follows this list)
  3. How you can easily recycle the same framework to serially evolve agents that perform well in the rest of the control environments
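As a rough illustration of how such a framework can look, the sketch below runs an evolutionary loop for CartPole-v0 with a simple linear policy: the best-scoring agents of each generation are mutated with Gaussian noise to form the next one. It assumes the classic Gym API (env.reset() returns an observation and env.step() returns four values), and the hyperparameters are illustrative rather than the article's exact setup.

```python
# Minimal sketch of an evolutionary loop for CartPole-v0: each agent is a
# tiny linear policy; top performers are kept and mutated every generation.
import gym
import numpy as np

env = gym.make("CartPole-v0")
OBS_DIM, N_ACTIONS = env.observation_space.shape[0], env.action_space.n

def evaluate(weights, episodes=3):
    """Average episode return of a linear policy: action = argmax(obs @ W)."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = int(np.argmax(obs @ weights))
            obs, reward, done, _ = env.step(action)
            total += reward
    return total / episodes

population = [np.random.randn(OBS_DIM, N_ACTIONS) for _ in range(50)]
for generation in range(20):
    scores = [evaluate(w) for w in population]
    elite = [population[i] for i in np.argsort(scores)[-10:]]   # keep top 10
    # Next generation: mutated copies of randomly chosen elite agents.
    population = [elite[np.random.randint(len(elite))] +
                  0.1 * np.random.randn(OBS_DIM, N_ACTIONS)
                  for _ in range(50)]
    print(f"generation {generation}: best score {max(scores):.1f}")
```

Selecting the top scorers and perturbing them with Gaussian noise is the simplest form of Darwinian selection; the article's framework may differ in the selection and mutation details.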

If you are impatient to see the agents that evolved under this strategy, you can watch them in action here and find the code here.

Solving optimisation problems with evolution-inspired algorithms

Evolutionary algorithms (EAs) are optimisation…

Luis F. Camarillo-Guerrero

PhD in Genomics at University of Cambridge — Bioinformatics/Phages. MSc in Bioinformatics and Theoretical Systems Biology — Imperial College London
