Visualizing statistical models – it’s child’s play

Before you ask a mathematician if they can visualize the fourth dimension, ask them if they can truly visualize a three-dimensional object, like the boundary of a four-dimensional football. If they tell you it’s easy, and their name isn’t Maryna Viazovska, they’re probably lying.

Making an accurate picture of an object from a high-dimensional space is very challenging. In this blog post we’ll see a surprising case where it turns out to be possible. We’ll visualize an interesting seven-dimensional object, which comes from a question in statistics.

Let’s consider the probability that each of the teams in the quarter-finals of the Men’s FIFA 2018 World Cup would win. The teams were (Uruguay, France, Brazil, Belgium, Russia, Croatia, Sweden, England). Today we know the probabilities of the teams winning, in that order, are (0,1,0,0,0,0,0,0), because France has already won. Back on 3rd July the probabilities (according to FiveThirtyEight) were (0.06, 0.15, 0.3, 0.11, 0.05, 0.12, 0.07, 0.14), and on 7th July the probabilities were (0,0.29,0,0.26,0,0.18,0,0.27).

In a recent project we were studying which probability distributions lie in a particular statistical model. We found out that our statistical model is given by inequalities that the eight probabilities need to satisfy. If we call the probabilities (a,b,c,d,e,f,g,h), the inequalities are:

(ad-bc)(eh-fg) \geq 0, \quad (af-be)(ch - dg) \geq 0, \quad (ag-ce)(bh-df) \geq 0 .
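If you’d like to experiment, here is a minimal Mathematica sketch of this membership test (the function name inModelQ is my own, not from the project). The 7th July probabilities pass, with all three products exactly zero:

inModelQ[{a_, b_, c_, d_, e_, f_, g_, h_}] :=
  (a*d - b*c)*(e*h - f*g) >= 0 &&
   (a*f - b*e)*(c*h - d*g) >= 0 &&
   (a*g - c*e)*(b*h - d*f) >= 0

inModelQ[{0, 0.29, 0, 0.26, 0, 0.18, 0, 0.27}]  (* True: each of the three products is 0 *)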

The probabilities have to sum to 1, so a + b + c + d + e + f + g + h = 1. We want to visualize the part of seven-dimensional space in which the inequalities hold. How can we do it?

The first step is to notice that some combinations of letters do not affect whether the inequalities hold or not. They are:

(a + b + c + d) - (e + f + g + h) , \quad (a + c + e + g) - (b + d + f + h) , \quad (a + b + e + f) - (c + d + g + h)

So we can apply a change of coordinates that removes these three directions, leaving something four-dimensional. Finally, to get something three-dimensional we can assume that the four remaining coordinates lie on the sphere.

We end up with a picture that looks like this:

Blobs

The part of space that lies inside the statistical model consists of the points outside the blue, the green, and the yellow blobs.

These days, we have an even better way to visualize the statistical model, truly in 3D. It even doubles up as a handmade toy for children.

IMG_20180716_094013 (1)
Order yours here


We can’t help but wonder – which other children’s toys are really statistical models in disguise?


A duality of pictures

Duality relates objects that seem different at first but turn out to be similar. The concept of duality occurs almost everywhere in maths. If two objects seem different but are actually the same, we can view each object in a “usual” way and in a “dual” way – the new vantage point gives us a new understanding of the object. In this blog post we’ll see a pictorial example of a mathematical duality.

How are these two graphs related?

bg1

bg2

In the first graph, we have five vertices, the five black dots, and six green edges which connect them. For example, the five vertices could represent cities (San Francisco, Oakland, Sausalito, etc.) and the edges could be bridges between them.

In the second graph, the roles of the cities and the bridges have been swapped. Now the bridges are the vertices, and the edges (or hyperedges) are the cities. For example, we can imagine that the cities are large metropolises and the green vertices are the bridge tolls between one city and the next.

Apart from swapping the roles of the vertices and the edges, the information in the two graphs is the same. If we shrink each city down to a dot in the second graph, and grow each bridge toll into a full bridge, we get the first graph. We will see that the graphs are dual to each other.

We represent each graph by a labeled matrix: we label the rows by the vertices and the columns by the edges, and we put a 1 in the matrix whenever the vertex is in the edge. For example, the entry for vertex 1 and edge a is 1, because edge a contains vertex 1. The matrix on the left is for the first graph, and the one on the right is for the second graph.

bg4

We can see that the information in the two graphs is the same from looking at the two matrices – they are the same matrix, transposed (or flipped). The matrix of a hypergraph is the transpose of the matrix of the dual hypergraph.
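If you want to play with this duality yourself, here is a small Mathematica sketch. The edge sets below are made up for illustration; they are not read off from the pictures. We build the vertex-edge incidence matrix, and its transpose is the incidence matrix of the dual hypergraph:

incidenceMatrix[vertices_, edges_] :=
  Table[Boole[MemberQ[edge, v]], {v, vertices}, {edge, edges}]

edges = {{1, 2}, {2, 3}, {1, 3}, {3, 4}, {4, 5}, {3, 5}};  (* hypothetical edges *)
m = incidenceMatrix[Range[5], edges];
Transpose[m]  (* incidence matrix of the dual hypergraph: its rows are the old edges *)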

Mathematicians are always on the look-out for hidden dualities between seemingly different objects, and we are happy when we find them. For example, in a recent project we studied the connection between graphical models, from statistics, and tensor networks, from physics. We showed that the two constructions are the duals of each other, using the hypergraph duality we saw in this example.

Flattening a cube

If you conduct a survey consisting of three YES/NO questions among some friends, how can you summarize the responses?

I conducted a survey recently at a conference. The three questions were:

  • Is it your first time at the Mathematisches Forschungsinstitut Oberwolfach?
  • Do you like the weather?
  • Have you played any games?

Screen Shot 2017-08-15 at 11.49.45 AM

There are eight options for how someone could respond to three YES/NO questions. Taking YES=1, and NO=0, the eight options are labelled by the binary strings: 000, 001, 010, 100, 011, 101, 110, 111.

We can think of 0 and 1 as coordinates in space, and arrange the eight strings as the corners of a cube:

cube2
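As a quick aside, Mathematica’s Tuples enumerates the eight labels, read as the coordinates of the cube’s corners, in one line:

Tuples[{0, 1}, 3]
(* {{0,0,0}, {0,0,1}, {0,1,0}, {0,1,1}, {1,0,0}, {1,0,1}, {1,1,0}, {1,1,1}} *)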

This 3D arrangement reflects the fact that there are three questions in the survey. Since our dataset is small, there’s not much need for further analysis to compress or visualize the data. But for a larger survey, we will summarize the structural information in the data using principal components.

The first step of principal component analysis is to restructure the 3D cube of data into a 2D matrix. This is called “flattening” the cube. We combine two YES/NO questions from the survey into a single question with four possible responses. There are three choices for which questions to combine, so there are three possible ways to flatten the cube into a matrix:

\begin{bmatrix} p_{000} & p_{001} & p_{010} & p_{011} \\ p_{100} & p_{101} & p_{110} & p_{111} \end{bmatrix} \qquad \begin{bmatrix} p_{000} & p_{001} & p_{100} & p_{101} \\ p_{010} & p_{011} & p_{110} & p_{111} \end{bmatrix} \qquad \begin{bmatrix} p_{000} & p_{010} & p_{100} & p_{110} \\ p_{001} & p_{011} & p_{101} & p_{111} \end{bmatrix}
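In Mathematica, the three flattenings can be computed with level specifications for Flatten. Here is a short sketch with a symbolic 2 x 2 x 2 array of probabilities; each command returns a 2 x 4 matrix matching one of the three matrices above:

t = Array[p, {2, 2, 2}, {0, 0, 0}];   (* entries p[0,0,0] through p[1,1,1] *)
Flatten[t, {{1}, {2, 3}}]             (* rows indexed by the first question *)
Flatten[t, {{2}, {1, 3}}]             (* rows indexed by the second question *)
Flatten[t, {{3}, {1, 2}}]             (* rows indexed by the third question *)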

Our analysis of the data depends on which flattening we choose! Generally speaking, it’s bad news if an arbitrary decision has an impact on the conclusions of an analysis.

So we need to understand…

How do the principal components depend on the choice of flattening?

This picture gives an answer to that question:

dec3

All points inside the star-shaped surface correspond to valid combinations of principal components from the three flattenings, while points outside are the invalid combinations. More details can be found here.

Understanding the brain using topology: the Blue Brain project

ALERT ALERT! Applied topology has taken the world by storm once more. This time techniques from algebraic topology are being applied to model networks of neurons in the brain, in particular how the brain processes information when exposed to a stimulus. Ran Levi, one of the ‘co-senior authors’ of the recent paper published in Frontiers in Computational Neuroscience, is based in Aberdeen, and he was kind enough to let me show off their pictures in this post. The paper can be found here.

So what are they studying?

When a brain is exposed to a stimulus, neurons fire seemingly at random. We can detect this firing and create a ‘movie’ to study. The firing rate increases towards peak activity, after which it rapidly decreases. In the case of chemical synapses, synaptic communication flows from one neuron to another and you can view this information by drawing a picture with neurons as dots and possible flows between neurons as lines, as shown below. In this image more recent flows show up as brighter.

Image credit: Blue Brain project. This image shows a depiction of neurons and synaptic connections between them. The more recently a synaptic communication has been fired, the brighter it is depicted in the image.

Numerous studies have been conducted to better understand the pattern of this build-up and rapid decrease in neuron spikes, and this study contains significant new findings as to how neural networks are built up and decay throughout the process, both at a local and a global scale. This new approach could provide substantial insights into how the brain processes and transfers information. The brain is one of the main mysteries of medical science, so this is huge! For me the most exciting part is that the researchers build their theory through the lens of algebraic topology, and I will try to explain the main players in their game here.

Topological players: cliques and cavities

The study used a digitally constructed model of a rat’s brain, which reproduced neuron activity from experiments in which the rats were exposed to stimuli. From this model, ‘movies’ of neural activity could be extracted and analysed. The study then compared its findings to real data and found that the same phenomenon occurred.

Neural networks have previously been studied using graphs, in which the neurons are represented by vertices and possible synaptic connections between neurons by edges. This throws away quite a lot of information, since during chemical synapses the synaptic communication flows, over a minuscule time period, from one neuron to another. The study takes this into account and uses directed graphs, in which an edge has a direction emulating the synaptic flow. This is the structural graph of the network that they study. They also study functional graphs, which are subgraphs of the structural graph. These contain only the connections that fire within a certain ‘time bin’. You can think of these as synaptic connections that occur in a ‘scene’ of the whole ‘movie’. There is one graph for each scene, and this research studies how these graphs change throughout the movie.

The main structural objects discovered and consequently studied in these movies are subgraphs called directed cliques. These are graphs in which every vertex is connected to every other vertex. There is a source neuron from which all edges are directed away, and a sink neuron towards which all edges are directed. In this sense the flow of information has a natural direction. Directed cliques consisting of n neurons are called simplices of dimension (n-1). Certain sub-simplices of a directed clique form their own directed cliques, called sub-cliques, when the vertices of the sub-simplex contain their own source and sink neuron. Below are some examples of the directed clique simplices.

Image credit: EPFL. This image shows examples of directed cliques.
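To get a feel for the definition, here is a toy Mathematica sketch (my own code, nothing like the scale of the paper’s computations) that tests whether a set of vertices forms a directed clique: every pair must be joined by an edge, and the induced subgraph must have no directed cycles, which forces a unique source and a unique sink.

directedCliqueQ[g_Graph, vs_List] :=
  AllTrue[Subsets[vs, {2}],
    (EdgeQ[g, DirectedEdge @@ #] || EdgeQ[g, DirectedEdge @@ Reverse[#]]) &] &&
   AcyclicGraphQ[Subgraph[g, vs]]

g = Graph[{1 -> 2, 1 -> 3, 2 -> 3, 3 -> 4}];
directedCliqueQ[g, {1, 2, 3}]  (* True: a 2-simplex with source 1 and sink 3 *)
directedCliqueQ[g, {2, 3, 4}]  (* False: vertices 2 and 4 are not joined by an edge *)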

And the images below show these simplices occurring naturally in the neural network.

Image credit: Frontiers in Computational Neuroscience, ‘Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function’, Figure 1A. This image shows a reconstructed microcircuit produced using the model of neural activity. A 5-neuron clique is shown in red.
Image credit: Frontiers in Computational Neuroscience, ‘Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function’, Figure 1B3. This image shows a zoomed in depiction of the 5 neuron clique in the image above, with its corresponding simplex on the right.

Image credit: Frontiers in Computational Neuroscience, ‘Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function’, Adaptation of Figure 2C. This image shows a 6-simplex (a directed clique with 7 vertices) on the left and a 7-simplex on the right, with representations of how these cliques appear in the neural network shown in the centre.

The researchers found that over time, simplices of higher and higher dimension were born in abundance, as synaptic communication increased and information flowed between neurons. Then suddenly all the cliques vanished: the brain had finished processing the new information. This relates the neural activity to an underlying structure which we can now study in more detail. It is a very local structure: simplices of up to 7 dimensions were detected, that is, cliques of 8 neurons in a microcircuit containing tens of thousands. It was the pure abundance of this local structure that made it significant, where in this setting local means concerning a small number of vertices in the structural graph.

As well as considering this local structure, the researchers also identified a global structure, in the form of cavities. Cavities are formed when cliques share neurons, but not enough neurons to form a larger clique. An example of this sharing is shown below, though please note that this is not yet an example of a cavity. When many cliques together bound a hollow space, this forms a cavity. Cavities represent homology classes, and you can read my post introducing homology here. An example of a 2-dimensional cavity is also shown below.

An example of simplices sharing neurons.
Image credit: Frontiers in Computational Neuroscience, ‘Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function’, Figure 5A. This image shows an example of a two-dimensional cavity. It is bounded by 2-simplices (triangles), which are directed cliques with 3 neurons.

The graph below shows the formation of cavities over time. The x-axis corresponds to the first Betti number, which gives an indication of the number of 1-dimensional cavities, and the y-axis similarly gives an indication of the number of 3-dimensional cavities, via the third Betti number. The spiral is drawn out over time, as indicated by the text specifying milliseconds on the curve. We see that at the beginning there is an increase in the first Betti number, before an increase in the third alongside a decrease in the first, and finally a sharp decrease to no cavities at all. Considering the neural movie, we view this as an initial appearance of many 1-dimensional simplices, creating 1-dimensional cavities. Over time, the number of 2- and 3-dimensional simplices increases, by filling in extra connections between 1-dimensional simplices, so the lower-dimensional cavities are replaced with higher-dimensional ones. When the number of higher-dimensional cavities is maximal, the whole thing collapses. The brain has finished processing the information!

Image credit: Frontiers in Computational Neuroscience, ‘Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function’, Figure 6B
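To see what a Betti number computation looks like in principle, here is a tiny Mathematica sketch for the simplest possible cavity, a hollow triangle (my own toy example: just linear algebra, and nowhere near the scale of the computations in the paper):

d1 = {{-1, -1,  0},
      { 1,  0, -1},
      { 0,  1,  1}};   (* boundary matrix of a hollow triangle: rows are the vertices, columns the edges {1,2}, {1,3}, {2,3} *)
betti0 = 3 - MatrixRank[d1]          (* 1: one connected component *)
betti1 = (3 - MatrixRank[d1]) - 0    (* 1: one 1-dimensional cavity; the 0 is the rank of the boundary map from 2-simplices, of which there are none *)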

The time dependent formation of the cliques and cavities in this model was interpreted to try and measure both local information flow, influenced by the cliques, and global flow across the whole network, influenced by cavities.

So why is topology important?

These topological players provide a strong mathematical framework for measuring the activity of a neural network, and the process a brain undergoes when exposed to stimuli. The framework works without parameters (for example there is no measurement of distance between neurons in the model) and one can study the local structure by considering cliques, or how they bind together to form a global structure with cavities. By continuing to study the topological properties of these emerging and disappearing structures alongside neuroscientists we could come closer to understanding our own brains! I will leave you with a beautiful artistic impression of what is happening.

Image credit: Blue Brain project. This image shows an artist’s depiction of their interpretation of the results, projected into 3 dimensions. The simplices are represented by the clique-like small structures, and the centre is the artist’s depiction of a cavity.

There is a great video of Kathryn Hess (EPFL) speaking about the project, watch it here.

For those of you who want to read more, check out the following blog and news articles (I’m sure there will be more to come and I will try to update the list)

Frontiers blog

Wired article

Newsweek article

Combing braids

I’m going to a conference next week, and it’s all about braids! So I thought I would write a wee post on combing, a technique which dates back to Artin in the 1940s. In fact the paper where he introduces the concept of combing finishes with the following amusing warning:

“Although it has been proved that every braid can be deformed into a similar normal form the writer is convinced that any attempt to carry this out on a living person would only lead to violent protests and discrimination against mathematics. He would therefore discourage such an experiment.” – Artin 1946

but I really don’t see it as so bad!

Combing is a technique for starting with any braid (see my introductory post on braids here) and ending up with a braid in which first the leftmost strand moves while the others stay put, then the next strand moves while the rest stay put, and so on. It’s much nicer to show this in pictures.

We want to start with any old braid, say this one:

original


and transform it into a braid where the strands move one at a time, like the following one. I’ve coloured the strands here so you can see that, reading the braid from top to bottom, first the red strand moves (i.e. all crossings involve the red strand, until it is finished), then the green, and then the blue.

final coloured


For convenience I’ll only look at braids called pure braids, where each strand starts and ends at the same position. You can easily comb non-pure braids: you just need to add an appropriate twist right at the end to make the strands finish in the correct positions.
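As an aside, whether a braid is pure depends only on the permutation its strands trace out, which is easy to compute. Here is a small Mathematica sketch; encoding a braid word as a list of generator indices (negative for inverse crossings) is my own convention for illustration:

braidPermutation[word_List] :=
  PermutationProduct @@ (Cycles[{{Abs[#], Abs[#] + 1}}] & /@ word)

pureBraidQ[word_List] := braidPermutation[word] === Cycles[{}]

pureBraidQ[{1, 1}]  (* True: sigma_1^2 returns both strands to their starting positions *)
pureBraidQ[{1, 2}]  (* False: the strands end up permuted *)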

So how do we do this? Consider the first strand; I’ve coloured it red to make it clear. We want all the crossings between red and black strands to happen before (higher up than) any crossing of two black strands. So in this case the crossings circled in yellow are okay, because they happen lower down than any crossing involving the red strand. The crossings circled in blue and green need to be changed.

strand 1 highlight


We can slide some crossings of black strands down past the red and black crossings, as they don’t interfere. Here we can do it with the crossing circled in blue, as shown:

strand 1 step 1


We can start to do the same with the crossing circled in green, but we encounter a problem, as it won’t simply slide past the red-strand crossing below it. Moving this crossing down requires using some of the braid relations (see braid post) to replace a few crossings with an equivalent section in which the red strand moves first, as follows:

strand 1 step 2

Even though this braid looks different from the previous one, they are in fact the same (you can always test this with string!). Now we have a braid in which the first strand moves before any others. Since all the first-strand action is now at the top of the braid, we can ignore the first strand altogether, and consider the rest of the braid, as shown below:

strand 1 forgetting


We only need to consider the following section now, and again we can put this into a form where only the first strand moves.

strand 2 beginning

In this case using braid relations gives us the following:

strand 2 final

And we can now ignore the green strand!

strand 2 forgetting

Colouring the first strand in this final section, we see that every crossing involves the first strand:

strand 3 start

and we colour the last strand yellow for fun!

strand 3

Remembering all the pieces we have ignored gives us the full combed braid, where we focus on the leftmost strand until it ‘runs out of moves’ before looking to the next one.

final coloured

And this is exactly the same as the original braid, which looks a lot messier when coloured:

original coloured

Why might we want to do this? In some cases it makes mathematical proofs a lot easier. As for me, recently I have been focusing only on what the first strand is doing, and so I want a technique to push the other strands down and away!

Tea with (Almond) Milk

Making a cup of tea in a hurry is a challenge. I want the tea to be as drinkable (cold) as possible after a short amount of time. Say, 5 minutes. What should I do: should I add milk to the tea at the beginning of the 5 minutes or at the end?

tea

The rule we will use to work this out is Newton’s Law of Cooling. It says “the rate of heat loss of the tea is proportional to the difference in temperature between the tea and its surroundings”.

This means the temperature of the tea follows the differential equation T' = -k (T - T_s), where k is a positive constant of proportionality. The minus sign is there because the tea is warmer than the room – so it is losing heat. Solving this differential equation, we get T = T_s + (A - T_s) e^{-kt}, where A is the initial temperature of the tea.
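If you’d like to check the solution, Mathematica’s DSolve will do it (here ts stands for T_s and a for the initial temperature A):

DSolve[{T'[t] == -k (T[t] - ts), T[0] == a}, T[t], t]
(* the solution is equivalent to T[t] -> ts + (a - ts) Exp[-k t], though Mathematica may arrange the terms differently *)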

We’ll start by defining some variables, to set the question up mathematically. Most of them we won’t end up needing. Let’s say the tea, straight from the kettle, has temperature T_0. The cold milk has temperature m. We want to mix tea and milk in the ratio L:l. The temperature of the surrounding room is T_s.

Option 1: Add the milk at the start

We begin by immediately mixing the tea with the milk. This leaves us with a mixture whose temperature is \frac{T_0 L + m l }{L + l}. Now we leave the tea to cool. Its cooling follows the equation T = T_s +\left( \frac{T_0 L + m l }{L + l} - T_s \right) e^{-kt}. After five minutes, the temperature is

Option 1 = T_s +\left( \frac{T_0 L + m l }{L + l}- T_s \right) e^{-5k} .

Option 2: Add the milk at the end

For this option, we first leave the tea to cool. Its cooling follows the equation T = T_s + (T_0 - T_s) e^{-kt}. After five minutes, it has temperature T = T_s + (T_0 - T_s) e^{-5k}. Then, we add the milk in the specified ratio. The final concoction has temperature

Option 2 = \frac{(T_s + (T_0 - T_s) e^{-5k}) L + m l }{L + l}.

So which temperature is lower: the “Option 1” temperature or the “Option 2” temperature?

It turns out that most of the terms in the two expressions cancel out, and the comparison boils down to the sign of the difference

Option 1 - Option 2 = \frac{(1 - e^{-5k}) \, l \, (T_s - m)}{L + l} .

So the answer depends on whether T_s - m > 0. For our cup of tea, it is: the milk is colder than the surroundings (m < T_s). [What does the quantity l(T_s - m) represent?] Hence, since k is positive, we have e^{-5k} < 1, the difference is positive, and option 2 wins: add the milk at the end.
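Mathematica confirms the cancellation. Defining the two options symbolically (with L and l the amounts of tea and milk, as above):

option1 = ts + ((t0 L + m l)/(L + l) - ts) Exp[-5 k];
option2 = ((ts + (t0 - ts) Exp[-5 k]) L + m l)/(L + l);
Simplify[option1 - option2]
(* equivalent to (1 - Exp[-5 k]) l (ts - m)/(L + l) *)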

But, does it really make a difference? (What’s the point of calculus?)

Well, we could plug in reasonable values for all the letters (T_0 = 95^o C, etc.) and see how different the two expressions are.
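For instance, continuing the sketch above with made-up but plausible numbers (tea from the kettle at 95°C, room at 20°C, milk at 5°C, 200 ml of tea to 40 ml of milk, and a cooling constant k = 0.2 per minute, chosen arbitrarily):

{option1, option2} /. {t0 -> 95, ts -> 20, m -> 5, L -> 200, l -> 40, k -> 0.2}
(* roughly {42.1, 40.5} -- about 1.6 degrees in favour of adding the milk at the end *)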

So, why tea with Almond milk?

My co-blogger Rachael is vegan. She inspires me to make my tea each morning with Almond milk.

Finally, here’s a picture of an empirical experiment from other people (thenakedscientists) tackling this important question:

graph-tea

Planes, trains and Kummer Surfaces

Here’s a short blog post for the holiday season, inspired by this article from Wolfram MathWorld. The topic is Kummer surfaces, a particular family of algebraic varieties in 3-dimensional space. They make beautiful mathematical pictures, like these from their Wikipedia page:

kummer_surface

A Kummer surface consists of the points in space where a particular equation is satisfied. One way to describe these surfaces is as the zero-sets of equations like:

{(x^2 + y^2 + z^2 - \mu^2 )}^2 - \lambda (1 - z - \sqrt{2} x)(1 - z + \sqrt{2} x)(1 + z + \sqrt{2} y)(1 + z - \sqrt{2} y) = 0 .

The variables x, y , z are coordinates in 3-dimensional space, and \lambda and \mu are two parameters, related by the equation \lambda ( 3 - \mu^2) = 3 \mu^2 - 1. As we change the value of the parameter, the equation changes, and its zero set changes too.

What does the Kummer Surface look like as the parameter \mu changes?

When the parameter \mu^2 = 3, the non-linearity of the Kummer surface disappears: the surface degenerates to a union of four planes.

planes

When the parameter is close to 3, we’re between planes and Kummer surfaces:

trains

And for \mu^2 = 1.5, we see the 16 singular points surrounding five almost-tetrahedra in the center. A zoomed-in version is in my other blog post that featured Kummer surfaces.

kummer-surfaces

Ok, I can see “planes” and “Kummer surface”, but what about “trains”? Well, I guess you could say that when a parameter is changing, often something is being trained. Though, er, not here.

This equation is not for a Kummer surface, but it’s not so dissimilar either. It came up recently in one of my research projects:

{\left( x^2 + y^2 + z^2 - 2( x y + x z + y z ) \right)}^2  - 2(x + y - z )( x - y + z ) ( - x + y + z )
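To take a look at this surface, a single ContourPlot3D call along the lines of the animation code below will draw it (the plotting box is an arbitrary choice of mine):

ContourPlot3D[
 (x^2 + y^2 + z^2 - 2 (x y + x z + y z))^2 -
   2 (x + y - z)*(x - y + z)*(-x + y + z) == 0,
 {x, -3, 3}, {y, -3, 3}, {z, -3, 3}]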

P.S. The Mathematica code that I used to make the video is here:

sq2 = Sqrt[2];  (* the square root of 2 appearing in the equation *)
anim = Animate[
  ContourPlot3D[{(x^2 + y^2 + z^2 - musq)^2 -
      ((3*musq - 1)/(3 - musq))*(1 - z - sq2*x)*(1 - z + sq2*x)*
       (1 + z + sq2*y)*(1 + z - sq2*y) == 0},
   {x, -5, 5}, {y, -5, 5}, {z, -5, 5},
   PerformanceGoal -> "Quality", BoxRatios -> 1,
   PlotRange -> 1], {musq, 3.001, 1, 0.0002}];  (* musq is the parameter mu^2 *)