Mapping class groups and curves in surfaces

Firstly, thanks to Rachael for inviting me to write this post after meeting me at the ECSTATIC conference at Imperial College London, and to her and Anna for creating such a great blog!

My research is all about surfaces. One of the simplest examples of a surface is a sphere. We are all familiar with this – think of a globe or a beach ball. Really we should think of this beach ball as having no thickness at all; in other words, it is 2-dimensional. We are allowed to stretch and squeeze it so that it doesn’t look round, but stretching and squeezing alone can’t produce every surface. The next distinct surface we come to is the torus. Instead of a beach ball, this is like an inflatable ring (see this post by Rachael). We say that the genus of the torus is 1 because it has one “hole” in it. If a surface has g of these holes then it has genus g. The sphere doesn’t have any holes so has genus 0. We can also alter a surface by cutting out a disc. This creates an edge called a boundary component. If we tried to walk past this edge on the surface, we would fall off. Here are a few examples of surfaces.

surfaceexamples

As with the sphere, topology allows us to deform these surfaces in certain ways without them being considered to be different. The classification of surfaces tells us that if two surfaces have the same genus and the same number of boundary components then they are topologically the same, or homeomorphic.

Now that we have a surface, we can start to think about its properties. A recurring theme across mathematics is the idea of symmetries. In topology, the symmetries we have are called self-homeomorphisms. Strictly speaking, all of the self-homeomorphisms we will consider will be orientation-preserving.

Let’s think about some symmetries of the genus 3 surface.

Here is a rotation which has order 2, that is, if we apply it twice, we get back to what we started with.

hyperelliptic

Here is another order 2 rotation.

halfrotation

And here is a rotation of order 3. Remember that we are allowed to deform the surface so that it looks a bit different to the pictures above but still has genus 3.

thirdrotation

However, not all symmetries of a surface have finite order. Let’s look at a Dehn twist. The picture (for the genus 2 surface) shows the three stages – first we cut along a loop in the surface, then we rotate the part of the surface on just one side of this loop by one full turn, then we stick it back together.

twist

A Dehn twist has infinite order, that is, if we keep on applying it again and again, we never get back to what we started with.

If we compose two homeomorphisms (that is, apply one after the other) then we get another homeomorphism. The self-homeomorphisms also satisfy some other properties which mean that they form a group under composition. However, this group is very big and quite nasty to study, so we usually consider two homeomorphisms to be the same if they are isotopic. This is quite a natural relationship between two homeomorphisms and roughly means that there is a nice continuous way of deforming one into the other. Now we have the set of all isotopy classes of orientation-preserving self-homeomorphisms of the surface, which we call mapping classes. These still form a group under composition – the mapping class group. This group is much nicer. It still (usually) has infinitely many elements, but now we can find a finite list of elements which form a generating set for the group. This means that every element of the group can be made by composing elements from this list. Groups with finite generating sets are often easier to study than groups which don’t have one.

An example of a mapping class group appears in Rachael’s post below. The braid group on n strands is the mapping class group of the disc with n punctures (where all homeomorphisms fix the boundary pointwise). Punctures are places where a point is removed from the surface. In some ways punctures are similar to boundary components, where an open disc is removed, but a mapping class can exchange punctures with other punctures.

So how can we study what a mapping class does? Rachael described in her post how we can study the braid group by looking at arcs on the punctured disc. Similarly, in the pictures of self-homeomorphisms above, the effect of each homeomorphism is indicated by a few coloured curves. More precisely, these are simple closed curves, which means they are loops which join up without any self-intersections. Suppose we are given a mapping class for a surface but not told which one it is. If we are told that it takes a certain curve to a certain other curve then we can start to narrow it down. If we get information about other curves we can narrow it down even more until eventually we know exactly what the mapping class is.

Now I can tell you a little about what I mainly think about in my research: the curve graph. In topology, a graph consists of a set of points – the vertices – with some pairs of vertices joined by edges.

graphexample

Each vertex in the curve graph represents an isotopy class of curves. As in the case of homeomorphisms, isotopy is a natural relationship between two curves, which more or less corresponds to pushing and pulling a curve into another curve without cutting it open. For example, the two green curves in the picture are isotopic, as are the two blue curves, but green and blue are not isotopic to each other.

isotopic

Also, we don’t quite want to use every isotopy class of curves. Curves that can be squashed down to a point (inessential) or into a boundary component (peripheral) don’t tell us very much, so we will ignore them. Here are a few examples of inessential and peripheral curves.

inessential

We now have infinitely many vertices, one for every isotopy class of essential, non-peripheral curves, and it is time to add edges. We put an edge between two vertices if they have representative curves which do not intersect. In other words, two vertices are joined by an edge exactly when any crossings between representative curves can be removed by an isotopy. Here’s an example of some edges in the curve graph of the genus 2 surface. In the picture, all of the curves intersect minimally, so if they intersect here they cannot be isotoped to be disjoint.

I should emphasise that this is only a small subgraph of the curve graph of the genus 2 surface. Not only does the curve graph have infinitely many vertices, but it is also locally infinite – at each vertex, there are infinitely many edges going out! This isn’t too hard to see – if we take any vertex, this represents some curve (up to isotopy). If we cut along this curve we get either one or two smaller surfaces. These contain infinitely many isotopy classes of curves, none of which intersects the original curve.

So why is this graph useful? Well, as we noted above, we can record the effect of a mapping class by what it does to curves. Importantly, the property of whether two curves are disjoint is preserved by a mapping class. So not only does a mapping class take vertices of the curve graph (curves) to vertices, but it preserves whether or not two vertices are connected by an edge. Thus a mapping class gives us a map from the curve graph back to itself, where the vertices may be moved around but, if we ignore the labels, the graph is left looking the same. We say that the mapping class group has an isometric action on the curve graph, so to every element of the group we associate an isometry of the graph, which is a map which preserves distances between elements. The distance between two points in the graph is just the smallest number of edges we need to pass along to get from one to the other. When we have an isometric action of a group on a space, this is really useful for studying the geometry of the group, but that would be another story.
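
To make the idea of distance concrete, here is a tiny sketch in Python (the five-vertex graph is invented for illustration, and is not a real piece of any curve graph) computing the smallest number of edges between two vertices by a breadth-first search:

```python
from collections import deque

def graph_distance(edges, start, end):
    """Smallest number of edges we need to pass along to get from start to end."""
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        vertex, dist = queue.popleft()
        if vertex == end:
            return dist
        for nxt in neighbours[vertex]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # the two vertices lie in different components

# Vertices a..e stand in for isotopy classes of curves; an edge means
# the two classes have disjoint representatives.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("a", "c")]
print(graph_distance(edges, "a", "e"))  # 3, via the path a-c-d-e
```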

Hall of mirrors: Coxeter Groups and the Davis Complex

I’ve spent a lot of time this summer thinking about the best way to present maths. When someone gives a talk or lecture they normally write on the board in chalk or present via Beamer slides. Occasionally though, someone comes along with some great hand drawn slides that make listening to a talk that wee bit more exciting. So the images in this blog are part of my tribute to this new idea.

I’ve talked about Coxeter groups before (here), but I’ll start afresh for this post. It is worth mentioning now that Coxeter groups arise across maths, in areas such as combinatorics, geometric group theory and Lie theory, as well as topology.

A Coxeter group is a group generated by reflections, subject to “braid type” relations. Informally, you can imagine yourself standing in the middle of a room with lots of mirrors around you, angled differently. Your reflection in a single mirror can be viewed as a generator of the group, and any other reflection through a series of mirrors can be viewed as a word in the group. Here is a silly picture to show this:

silly picture

Formally, a Coxeter group is defined by a Coxeter matrix on the generating set S. This is an S by S matrix, with one entry for each ordered pair of generators. The entries on the diagonal of the matrix have to be 1, while each entry off the diagonal has to be either a whole number that is at least 2, or infinity (\infty ). The matrix also has to be symmetric, having the same entry for (t,s) as for (s,t) . See below:

coxeter groups1

Given this matrix you can then define the Coxeter group to be the group generated by S with relations given by the corresponding entry in the matrix.
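
In symbols, the group defined by a Coxeter matrix with entries m_{st} is

W = \langle S \mid (st)^{m_{st}} = e \text{ for all } s,t \in S \text{ with } m_{st} \neq \infty \rangle ,

where an entry m_{st} = \infty means that no relation at all is imposed between s and t .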

coxeter groups2

In particular notice that the diagonal of the matrix being 1 gives that each generator squares to the identity, i.e. it is an involution. It can be hard to see what is going on in the matrix so there is a nicer way to display this information: a Coxeter diagram. This is a graph with a vertex for every generator, and edges which tell you the relations, as described below:

coxeter groups3

The relation (st)^{m_{st}} = e can also be rewritten as ststs...=tstst... where there are m_{st} letters on each side. This is reminiscent of the braid relations, hence why I called them “braid type” before. In the mirror analogy, this says the mirrors are angled towards each other in such a way that you get the same reflection by bouncing between them m_{st} times, independent of which of the two mirrors you turn towards to begin with.
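
For instance, when m_{st} = 3 , multiplying out (st)^3 = e using s^2 = t^2 = e gives

ststst = e \quad \Longleftrightarrow \quad sts = tst .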

There exist both finite and infinite Coxeter groups. Here is an example of a finite Coxeter group, with two generators s and t. If you view them as reflections on a hexagon (as drawn) then doing first s and then t gives a rotation of 120 degrees, and so doing st 3 times gives the identity, as required.

coxeter example
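
We can check this example numerically. Here is a minimal sketch (the coordinates are my own choice: the two mirrors are lines through the origin at 60 degrees to each other, matching the hexagon in the picture):

```python
import numpy as np

def reflection(theta):
    """Matrix of the reflection in the line through the origin at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

s = reflection(0.0)        # mirror along the x-axis
t = reflection(np.pi / 3)  # mirror at 60 degrees to the first

identity = np.eye(2)
rot120 = np.array([[np.cos(2 * np.pi / 3), -np.sin(2 * np.pi / 3)],
                   [np.sin(2 * np.pi / 3),  np.cos(2 * np.pi / 3)]])

print(np.allclose(s @ s, identity))  # True: s is an involution
print(np.allclose(t @ s, rot120))    # True: doing s then t rotates by 120 degrees
print(np.allclose(np.linalg.matrix_power(s @ t, 3), identity))  # True: (st)^3 = e
```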

On the other hand, if you add another generator u with a braid-3 relation with both s and t , then the group is infinite. You can imagine tiling the infinite plane with triangles. If you take s , t and u to be reflections in the 3 sides of one of these triangles then they satisfy the relations they need to, and you can use these three reflections to transport the central triangle to any other one. If you think about this for a while, this shows the group is infinite. A (somewhat truncated) picture is shown below.

coxeter example 1

Examples of Coxeter groups don’t just live in 2-D Euclidean space. There is another finite group which acts as reflections on the permutahedron:

coxeter-complex

And other Coxeter groups which act as reflections on the hyperbolic plane.

The mathematical object I am working with at the moment is called the Davis Complex. You can build it out of finite subgroups of a Coxeter group (side note for the mathematicians: taking cosets of finite subgroups and forming a poset which you can then realise). Even infinite Coxeter groups have lots of finite subgroups. The great thing about the Davis complex being built out of finite things is that there is a classification of finite Coxeter groups! What this means is that when you have a finite Coxeter group its diagram either looks like one of the diagrams below, or a disjoint collection of them.

classification of finite coxeter

So because we only have a few diagrams to look at in the finite case, we can prove some things! Right now I am working on some formulas for staring at the Coxeter diagrams and working out the homology of the group. I’m using the Davis complex and all its nice properties to do this. I’ll leave you with a picture of the Davis complex for our first example.

davis example 2

davis example

Deleting edges to save cows – guest post by Dr Kitty Meeks

many-trades

I met Rachael at the LMS Women in Mathematics Day in Edinburgh in April, where I gave a talk about using graph theory to understand the spread of disease in livestock, and she invited me to write a blog on this topic.  Everything I’m going to say here is based on joint work with Dr Jess Enright from the University of Stirling (supported by Scottish Government as part of EPIC: Scotland’s Centre of Expertise on Animal Disease Outbreaks), who knows a lot more about cattle than I do!

It seems obvious that, when we want to understand how a disease spreads through a population, we should consider which members of the population have contact with each other – this simple observation has given rise to the field of Network Epidemiology.  If we’re trying to understand the spread of disease in humans, one of the key challenges is to guess the underlying contact network – who has contact with whom – and a lot of interesting work has been done on this.  Looking at the spread of disease in livestock, however, we don’t have to worry about this: all movements of farm animals have to be reported to the relevant government agencies, so we (well, people I collaborate with, who provide advice to the Scottish government on controlling animal disease outbreaks) know exactly which animals have contact with each other.

Depending on the specific disease we are interested in, there are several different kinds of contact we might need to consider.   If animals need to be in close contact to transmit the disease, then there are two main ways the disease could spread from one farm to another: it could spread between farms that are geographically adjacent (if cows on opposite sides of the fence decide to have a chat) or by trade when an animal is sold from an infected farm to another farm that is not yet infected.  Some other diseases are spread by biting insects, in which case any farm within a certain radius of an infected farm would be at risk.

Mathematically, we can represent these potentially infectious contacts by means of a graph: a vertex is used to represent each farm, and two farms are connected by an edge if there is the potential for the disease to spread from one to the other (in the case of infection due to trade, this would normally be one-way, so we could capture more information by using a directed edge in the direction of the trade).  My first picture illustrates what part of this graph might look like if we are just considering the spread of disease across shared fence lines.  The graph we get in this situation will normally be planar, that is, it can be drawn in the plane without any edges crossing (although there are some circumstances where this is not necessarily true, for example if one farmer owns several fields that are not connected to each other).

farms-planar

Now we can add to this graph the contacts that come from trade.  It’s not so obvious what we would expect the graph of trade links to look like, but this is exactly what Jess and I have been trying to understand.

many-trades

First, though, I should explain why we want to know about the structural properties of these contact graphs.  We’re interested in restricting the spread of disease, and Jess and her colleagues are asked by policy makers to investigate how it might be possible, in theory, to modify the network to make it less vulnerable to an epidemic.  When we talk about modifying the network, we’re considering actions that correspond roughly to deleting edges and/or vertices in the graph: to “delete” a vertex, we might vaccinate all animals at the particular farm (so that it can no longer play any role in the transmission of disease), whereas “deleting” an edge might correspond to putting up a double fence-line between two farms (keeping animals on adjacent farms just a little bit further apart) or introducing extra monitoring on a specific trade route.  What do we want to achieve by modifying the network?  Well, there are lots of different parameters we could look at which relate to how vulnerable a particular graph is to an epidemic, but the simplest one we could think of to start with is the maximum component size (the largest number of vertices that can be reached from any single starting vertex in the graph by traveling along edges).  In the picture below, if the farm circled in red is infected, then all of the others are potentially at risk.

cow-network-1

However, if we delete one edge from this network, we can be sure that the disease will not spread to the three farms on the right hand side (now highlighted in green).

cow-network-2
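
Here is a toy version of that calculation (the network and the choice of deleted edge are invented for illustration), using networkx to measure the largest component before and after the deletion:

```python
import networkx as nx

# A made-up farm network: a cluster of four farms, a single bridging
# contact, and a cluster of three farms.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (1, 3),   # left cluster
                  (4, 5),                           # the bridging contact
                  (5, 6), (6, 7), (5, 7)])          # right cluster

def max_component_size(graph):
    return max(len(c) for c in nx.connected_components(graph))

print(max_component_size(G))  # 7: every farm is reachable from every other

G.remove_edge(4, 5)           # "delete" the bridging contact
print(max_component_size(G))  # 4: the right-hand cluster is now safe
```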

Of course, all of these possible ways to modify the network are costly, so we want to do this in the most efficient way possible, i.e. make the fewest modifications required to achieve our goal.  This motivates the following question: given a graph, what is the minimum number of edges (or vertices) we need to delete so that the maximum component size of the resulting graph is at most h?  This is an example of a graph modification problem, a kind of problem which comes up a lot in theoretical computer science.  It turns out that, like many graph modification problems, this particular problem is NP-complete: unless most mathematicians and computer scientists are wrong, and P=NP, there is no efficient algorithm to answer this question for all possible inputs.  But we don’t have to give up there!  Just because a problem is intractable on arbitrary inputs doesn’t mean we can’t find a fast way to solve it on input graphs with specific structural properties – most of my research is based on exactly this idea – so in order to solve the problem efficiently on the real data we want to understand the structure of the livestock contact networks.

Jess and I haven’t yet considered the general problem as I described it above: instead, we’ve been looking only at the trade network, and only at edge deletion.  This special case is relevant in the management of a particular kind of persistent cattle trade link that exists in Scotland, so it seemed as good a place to start as any.  Thinking about this problem, we did what any graph theorist does when faced with a computationally hard problem: we tried to solve it on trees (graphs that don’t have any cycles, so there’s exactly one way to get between any two vertices).  And, after dealing with a few slightly awkward details, it worked in exactly the way any graph theorist would expect: we can solve the problem recursively, starting with the leaves (vertices with only one neighbour) and working our way up the tree – the fact there are no cycles means we never have to rethink the answers we’ve already calculated.

trees
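
Here is a rough sketch of that leaves-upward idea (a simplified greedy version of my own, not the exact algorithm with all its awkward details handled: grow a component upwards from the leaves, keeping the smallest child components first, and cut an edge whenever keeping it would push the component above h):

```python
def min_cuts(tree, root, h):
    """Greedy sketch: number of edge deletions so that every component
    of the tree has at most h vertices.  tree maps a vertex to its children."""
    cuts = 0

    def component_size(v):
        nonlocal cuts
        size = 1  # count v itself
        # Children's components, smallest first: keep while we can, cut the rest.
        for s in sorted(component_size(c) for c in tree.get(v, [])):
            if size + s <= h:
                size += s   # keep this edge; the components merge
            else:
                cuts += 1   # cut the edge to this child
        return size

    component_size(root)
    return cuts

# A small made-up tree, rooted at vertex 0:
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
print(min_cuts(tree, 0, h=3))  # 2: cut both root edges, leaving {0}, {1,3,4}, {2,5,6}
```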

But cattle trade networks aren’t trees, so is this any use in the real world?  Somewhat surprisingly, the answer seems to be yes!  Although the trade networks aren’t exactly trees, many of them seem to have small treewidth – this is a hard-to-define graph parameter that essentially captures how “tree-like” a particular graph is.  And the great thing about graphs of small treewidth is that algorithms that work on trees can usually be adapted (taking just a bit more time) to work on graphs of small treewidth in exactly the same way.  The next picture shows how the treewidth of the Scottish cattle trade network grows as we add in more trades, based on real data from 2009.  The trades that took place that year are added to the graph one day at a time, and the treewidth (in fact, for technical reasons, an upper bound on the treewidth) of the resulting graph is shown.

2009_data

We can see from this graph that, even if we look at trades taking place in the whole year (much longer than the timescale we would need to consider for many disease outbreaks) the treewidth of the trade network is still less than 20, for a graph that has nearly 3000 vertices and 8000 edges – this is much lower than we would expect for a random graph with the same number of vertices and edges.
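
For anyone who wants to experiment, networkx ships a heuristic that produces exactly this kind of upper bound. A sketch, with a random graph standing in for the real trade data:

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# A stand-in for the trade network: a sparse random graph with roughly
# the vertex and edge counts mentioned above.
G = nx.gnm_random_graph(3000, 8000, seed=0)

width, decomposition = treewidth_min_degree(G)
print(width)  # an upper bound on the treewidth of G
```

On a random graph like this the bound should come out far higher than 20, which is one way to see how unusually tree-like the real trade data is.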

Initial experiments suggest that our algorithm that exploits the treewidth performs significantly better on some real datasets than a generic approach based on constraint programming (which doesn’t take into account any structural properties of the data): there were examples where our algorithm found the optimal solution in under a minute, whereas the alternative couldn’t solve the problem in 6 hours.

We’ve really only scratched the surface of this problem, though.  The planar graphs that arise from geographical proximity definitely don’t have small treewidth, so we will need a different approach here – but they certainly have other distinctive structural properties that we hope to exploit.  What interests me most to investigate next, though, is why many cattle trade networks have surprisingly low treewidth: farmers aren’t looking at the horrible definition of this parameter when they decide where to sell their animals, so there’s something else going on here – and if we could find a good way to model this “something else” mathematically, would it let us devise even better algorithms?

Persistent homology applied to evolution and Twitter

In this post I’ll let you know about an application and a variation of persistent homology I learnt about at the Young Topologists Meeting 2015. You might want to read my post on persistent homology first!

In his talks Gunnar Carlsson (Stanford) gave lots of examples of applications of persistent homology. A really interesting one for me was applying persistent homology to evolution trees. Remember that homology tells us about the shape of the data, and in particular if there are any holes or loops in it. We tend to think of evolution as a tree:

Tree of life

but in reality the reason why all our models for evolution are trees is that we take the data and try to fit it to the best tree we can. We don’t even think that it might have a different shape!

In reality, as well as vertical evolution, where one species becomes another (or splits into two distinct species) over time, we have something called horizontal or reticulate evolution. This is where two species combine to form a hybrid species. In their paper Topology of viral evolution, Chan, Carlsson and Rabadan show how the homology (think of this as something describing the shape of the data, specifically the holes or loops that appear) of trees may be different if we take into account species merging together:

Figure from http://www.pnas.org/content/110/46/18566.full

They go on to show how persistent homology can detect such loops caused by horizontal evolution, in the example of viral evolution. This is a brand new approach and really exciting as we now have a way of finding out how many loops are in a given evolutionary dataset, and which data points they correspond to. This can tell us about horizontal evolution, as well as vertical!
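
To see persistent homology detect a loop in practice, here is a minimal sketch (assuming the ripser Python package): we sample noisy points from a circle, which has one genuine loop, and pick out the most persistent 1-dimensional feature.

```python
import numpy as np
from ripser import ripser

# 100 noisy points on a circle: a dataset with one genuine loop in it.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 100)
points = np.column_stack([np.cos(angles), np.sin(angles)])
points += rng.normal(scale=0.05, size=points.shape)

diagrams = ripser(points)['dgms']
h1 = diagrams[1]  # birth/death pairs of 1-dimensional features (loops)

# One bar should be much longer than all the others: the loop of the circle.
print(h1[np.argmax(h1[:, 1] - h1[:, 0])])
```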

Up next is work from Balchin and Pillin (University of Leicester) on a variation of persistent homology inspired by directed graphs. The images in this section are from the slides of Scott Balchin’s talk at the Young Topologists Meeting! The motivation for their variation is: what if you don’t simply have data points, but some extra information as well? Take this example of people following people on Twitter: draw an arrow from person A to person B if person A follows person B.

Twitter example

We see that Andy follows Cara but Cara does not reciprocate! If you just had Andy and Cara connected by an edge then this information would be lost. Balchin and Pillin looked at a way of encoding this extra information into the complex, taking into account the number of arrows you would need to move along to get from Andy to Cara (1) and also from Cara to Andy (2, via Bill). I will post a link to their paper here as soon as it is released. When the data is considered without this extra information, persistent homology gives a (crazy) barcode that looks like this:

Barcode

but when you include the directions you get a slightly less mysterious barcode:

Barcode

which is in a lot of ways more accurate and easier to interpret.

Balchin gave another example of a system where direction matters: non-transitive dice. If you have a few 6-sided dice, you can represent each one by a circle with 6 numbers in it: the numbers on the sides of the dice! Then put an arrow from dice A to dice B if dice A beats dice B on average.

Dice

“Non-transitive” means that sometimes there are loops where dice A beats dice B which beats dice C, but then dice C beats dice A! You can actually buy non-transitive dice and play with them in real life. As you can probably tell, the arrows in this picture are important and so we want to make sure we don’t lose the directions when considering the homology!
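
You can verify a non-transitive cycle directly. Here is a quick sketch with one classic set of dice (these particular numbers are a standard example, not necessarily the ones from the talk):

```python
from itertools import product

# A classic non-transitive triple of six-sided dice.
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def beats(x, y):
    """Fraction of the 36 equally likely face pairs where x shows more than y."""
    return sum(a > b for a, b in product(x, y)) / 36

print(beats(A, B))  # 5/9 > 1/2, so A beats B on average
print(beats(B, C))  # 5/9, so B beats C
print(beats(C, A))  # 5/9, and yet C beats A -- a loop!
```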

There are a few more applications of persistent homology I would like to share with you and hopefully I will get the chance some other week!

Causality

In my last post I showed a picture of a surface in 3D space that gave us information about a probability distribution. This week’s post also finishes with such an image!

It is a problem of central importance in all walks of science to be able to say whether or not “X causes Y”. It’s important to know when we have enough information to be able to make such a declaration.

One way to look at causality is via a “directed acyclic graph” or “DAG”. This is a collection of vertices with arrows connecting them, such that there is no way to follow some arrows and get back to your starting point (there are no “cycles” in the graph). Here is an example, from this paper about the transgenerational impact of nicotine exposure – whether being around smokers makes you want to smoke:

srep07513-f5
Relationships amongst pathways of genes that are changed by Nicotine exposure

Given some observations, we would like to be able to build such a graph. One condition which allows us to do this is called the “faithfulness assumption”, which imposes conditions on the conditional independences of the observed variables. For example, it tells us information of the form “X is independent of Y given Z”: the only way that X and Y are related is via Z.

This condition is explained in greater detail in the paper “Geometry of the Faithfulness Assumption in Causal Inference” by Caroline Uhler et al., on which this blog post is based and from which the following two images are taken.

They consider the following graph, which is much smaller and simpler than the one pictured above:

Screen Shot 2015-06-22 at 6.35.12 PM

We have arrows 1 \to 2 , 2 \to 3 and 1 \to 3. Note that whilst it might look as though this graph contains a cycle, it is not possible to follow the directions of the edges and travel in a full loop around the triangle.
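
A quick way to convince yourself there is no cycle (a minimal sketch using networkx):

```python
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 3), (1, 3)])
print(nx.is_directed_acyclic_graph(G))  # True: no directed cycle
print(list(nx.topological_sort(G)))     # [1, 2, 3]: a consistent causal ordering
```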

The strength of the causal relation between each pair of the vertices 1, 2 and 3 is given by the weight of the edge that connects them. We have three edges appearing above, so we can consider the distribution as being a point, (x,y,z), in 3-dimensional space, where

  1. x is the weight of the edge 1 \to 2
  2. y is the weight of the edge 1 \to 3
  3. and z is the weight of the edge 2 \to 3.

Since “faithful” combinations of (x,y,z) allow us to make inferences, we want to look at the potential problem areas where we are close to an “unfaithful” combination of (x,y,z). The pictures from the paper show when we are in a problem area for each of the three problems (green, blue and red) which may occur, and the last picture combines these to show the points in 3D space which experience any one of the three possible problems:

Screen Shot 2015-06-22 at 6.55.09 PM

In order to make accurate conclusions in applications we would have to ensure that our distribution does not lie close to any of these problem areas.

Coxeter Complex

In my last post (https://picturethismaths.wordpress.com/2015/03/21/coxeter-dynkin-diagrams/) I talked about Coxeter groups and their corresponding Coxeter-Dynkin diagrams. Here is a little recap of the definition of a Coxeter group:

A Coxeter group is generated by a set S and relations between elements of S, of the form

(st)^{m(s,t)} = e

where s and t are elements in S, e is the identity element, and m(s,t) is a whole number (or infinity) which changes depending on what s and t are. A Coxeter group has the property that all of its generators are involutions: m(s,s)=1 for every s, so each relation (ss)^{1} = e says that s^{2} = e. When you ‘multiply out’ the other relations you get equations that look like

st=ts (when m(s,t)=2), sts=tst (when m(s,t)=3), or in general ststst…sts=tststs…tst

and these are called braid relations.

Now I’m going to describe a simplicial complex called the Coxeter complex. One of these can be constructed out of any Coxeter group. Let’s bullet point the steps of the construction and after each follow the example of the group with two commuting generators, given by the Coxeter-Dynkin diagram below

Screen Shot 2015-04-06 at 09.06.08

and the relations s^2 = e, t^2 = e, and st=ts.

  • Firstly, take all the parabolic subgroups of the Coxeter group. These are the subgroups generated by subsets of the generators (the vertices of the diagram).

For our example the parabolic subgroups are

<e> = {e}, <s> = {e, s}, <t> = {e, t}, <s,t> = {e, s, t, st = ts}

  • Secondly form a list of all the cosets of these subgroups. Cosets are things you get from premultiplying your subgroup by a chosen element in the main group.

For our example the cosets for each of the parabolic subgroups are

  <e> = {e} has cosets <e> = {e}, s<e> = {s}, t<e> = {t}, and st<e> = {st}
  <s> = {e, s} has cosets <s> = {e, s}, s<s> = {e, s} = <s>, t<s> = {t, ts = st}, and st<s> = {t, st}
  <t> = {e, t} has cosets <t> = {e, t}, s<t> = {s, st}, t<t> = {e, t} = <t>, and st<t> = {s, st}
  <s,t> = {e, s, t, st} has cosets <s,t> = s<s,t> = t<s,t> = st<s,t> = {e, s, t, st}

We see that some of these are the same so we end up with cosets:

Cosets with one element: <e>, s<e>, t<e>, st<e>

Cosets with two elements: <s>, t<s>, <t>, s<t>

Cosets with four elements: <s,t>

  • Create your complex by setting the vertices to be the cosets, and creating an edge (1-simplex) between two cosets whenever the elements of one are a subset of the elements of another. Then create a triangle (2-simplex) whenever you have one coset inside another which is in turn inside another. Continue like this with triangular pyramids from 4 nested cosets (3-simplex) and so on.
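
This bookkeeping can also be done mechanically, as in the sketch below (written for our two-generator example only, using the fact that this particular group is abelian with s^2 = t^2 = e, so an element can be stored as the set of generators appearing in it and multiplication becomes symmetric difference):

```python
# The group <s, t | s^2 = t^2 = e, st = ts> has elements {e, s, t, st}.
elements = [frozenset(), frozenset('s'), frozenset('t'), frozenset('st')]

def mult(g, h):
    return g ^ h  # symmetric difference of the generator sets

def coset(g, subgroup):
    return frozenset(mult(g, h) for h in subgroup)

# The four parabolic subgroups <e>, <s>, <t> and <s,t>:
subgroups = [frozenset([frozenset()]),
             frozenset([frozenset(), frozenset('s')]),
             frozenset([frozenset(), frozenset('t')]),
             frozenset(elements)]

# All distinct cosets, found by premultiplying each subgroup by each element:
cosets = {coset(g, H) for g in elements for H in subgroups}
for c in sorted(cosets, key=len):
    print(sorted(''.join(sorted(x)) or 'e' for x in c))

# Edges of the complex: one coset strictly contained in another.
edges = [(a, b) for a in cosets for b in cosets if a < b]
print(len(cosets), 'vertices,', len(edges), 'edges')  # 9 vertices, 16 edges
# (8 of those edges go to the whole-group coset <s,t>, which the pictures omit.)
```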

Finally we get to draw some pictures! Here are the vertices:

Cosets

And here are the edges (there is an edge between every other coset and <s,t> since <s,t> is the whole group, so we have missed these out):

Edges

We can rearrange the edges, then putting in the triangles (and colouring them pink) gives us:

Coxeter complex

Many mathematicians like to miss out the middle coset, which represents the whole group. Then our complex would look like this:

Coxeter Complex

Notice that s and t act on the Coxeter complex. Multiplying all the cosets by s reflects in the middle vertical line and multiplying all the cosets by t reflects in the middle horizontal line:

Group action

This is true for any Coxeter group and its Coxeter complex. As another example, here is a photo of a model for the Coxeter complex of the symmetric group on 4 elements, with Coxeter-Dynkin diagram

Coxeter-dynkin diagram

The model was made by someone in my department.

Coxeter complex

We see hexagons and squares. The middle vertex acts as a pair with the left vertex on half of the hexagons and with the right vertex on the other half. They are hexagons because those pairs of vertices have a line between them in the diagram, and thus their generators satisfy a braid relation of order 3. The two outer vertices (which commute) act on the squares, like our square in the previous example! The shapes actually fit together and tessellate, as shown in this next picture.

Tesselating

Determinant vs Permanent

The determinant and the permanent are famous polynomials in the world of maths (some say the determinant is the most famous in the world). Nobody currently knows how the complexity of computing the permanent compares with that of the determinant, and this is an important hurdle in complexity theory. A google search for “determinant vs. permanent” returns 432,000 results, which amounts to a very big hurdle. It is closely related to the P vs. NP problem.

Given a 2 \times 2 matrix M = \left( \begin{matrix} a & b \\ c & d \end{matrix} \right), the determinant is ad - bc. The less familiar permanent is ad + bc. Both are polynomials in the entries, a, b, c and d, of the matrix.

In general, the permanent is what we get when we put a “+” sign everywhere we see a “-” in the expression for the determinant.
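
In symbols, for an n \times n matrix M = (m_{ij}) the two polynomials differ only in the signs:

\det(M) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} m_{i,\sigma(i)}, \qquad \mathrm{perm}(M) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} m_{i,\sigma(i)},

where the sums run over all permutations \sigma of \{ 1, 2, \cdots, n \} .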

Despite this apparent similarity, the two polynomials behave very differently.

Gaussian Elimination is an algorithm that allows us to calculate the determinant quickly. In the UK, you learn it at school aged 17. The algorithm might seem laborious: a 3 \times 3 matrix requires about 30 operations. But, at O(n^3) operations for a matrix of size n, it is a very fast algorithm.
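
The contrast is easy to feel in code. In this sketch, numpy’s determinant routine uses fast elimination-style linear algebra, while the permanent is computed straight from the definition, summing over all n! permutations:

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Permanent from the definition: n! terms and no cancellation to exploit."""
    n = len(M)
    return sum(np.prod([M[i][sigma[i]] for i in range(n)])
               for sigma in permutations(range(n)))

M = np.array([[1, 2], [3, 4]])
print(np.linalg.det(M))  # ad - bc = -2.0 (fast even for large matrices)
print(permanent(M))      # ad + bc = 10  (hopeless beyond small n)
```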

In contrast, there is no known polynomial-time algorithm for calculating the permanent; there is no known O(n^r) algorithm for any r. An active line of research is to try to write the permanent of a smaller matrix as the determinant of a much larger matrix. Here is an example in the case of a 3 \times 3 matrix:

Permanental Adjacency Matrix
Expressing the permanent as the determinant of a larger matrix, using adjacency matrices of graphs.

This picture is taken from Jan Draisma’s article on “Geometry, Invariants and the Search for Elusive Complexity Bounds” from the March 2nd SIAM news. It depicts a method by Bruno Grenet to find the permanent of a smaller matrix as the determinant of a larger matrix.

The method shows that we can find the permanent of a matrix of size n by finding the determinant of a matrix of size 2^n -1. It works as follows:

  1. We make the partially ordered set (poset) formed from all possible subsets of the numbers \{ 1, 2, \cdots, n \}. Its graph is the diagram shown on the left hand side of the above picture.
  2. We make paths in our poset by starting with the empty set, and adding in elements one at a time until we have all the numbers \{ 1, 2, \cdots, n \}.
  3. Next we make the matrix, on the right hand side of the picture. First, we label the rows and columns of the matrix by the 2^n - 1 non-empty subsets of \{ 1, 2, \cdots, n \}.
  4. The entries of the matrix are given by the weighting of the edge of the graph that goes from the column’s index to the row’s index in the poset picture.
  5. Each term in the determinant of the matrix corresponds to a path in the graph.

Here is Bruno Grenet’s paper from 2012.