Mapping class groups and curves in surfaces

Firstly, thanks to Rachael for inviting me to write this post after meeting me at the ECSTATIC conference at Imperial College London, and to her and Anna for creating such a great blog!

My research is all about surfaces. One of the simplest examples of a surface is a sphere. We are all familiar with this – think of a globe or a beach ball. Really we should think of this beach ball as having no thickness at all; in other words, it is 2-dimensional. We are allowed to stretch and squeeze it so that it doesn’t look round, but no amount of stretching and squeezing produces every surface in this way. The next distinct surface we come to is the torus. Instead of a beach ball, this is like an inflatable ring (see this post by Rachael). We say that the genus of the torus is 1 because it has one “hole” in it. If a surface has $g$ of these holes then it has genus $g$. The sphere doesn’t have any holes, so it has genus 0. We can also alter a surface by cutting out a disc. This creates an edge called a boundary component. If we were walking along the surface and reached this edge, we would fall off. Here are a few examples of surfaces.

As with the sphere, topology allows us to deform these surfaces in certain ways without them being considered to be different. The classification of surfaces tells us that if two surfaces have the same genus and the same number of boundary components then they are topologically the same, or homeomorphic.

Now that we have a surface, we can start to think about its properties. A recurring theme across mathematics is the idea of symmetries. In topology, the symmetries we have are called self-homeomorphisms. Strictly speaking, all of the self-homeomorphisms we will consider will be orientation-preserving.

Let’s think about some symmetries of the genus 3 surface.

Here is a rotation which has order 2, that is, if we apply it twice, we get back to what we started with.

Here is another order 2 rotation.

And here is a rotation of order 3. Remember that we are allowed to deform the surface so that it looks a bit different to the pictures above but still has genus 3.

However, not all symmetries of a surface have finite order. Let’s look at a Dehn twist. The picture (for the genus 2 surface) shows the three stages – first we cut along a loop in the surface, then we rotate the part of the surface on just one side of this loop by one full turn, then we stick it back together.

A Dehn twist has infinite order, that is, if we keep on applying it again and again, we never get back to what we started with.

If we compose two homeomorphisms (that is, apply one after the other) then we get another homeomorphism. The self-homeomorphisms also satisfy some other properties which mean that they form a group under composition. However, this group is very big and quite nasty to study, so we usually consider two homeomorphisms to be the same if they are isotopic. This is quite a natural relationship between two homeomorphisms and roughly means that there is a nice continuous way of deforming one into the other. Now we have the set of all isotopy classes of orientation-preserving self-homeomorphisms of the surface, which we call mapping classes. These still form a group under composition – the mapping class group. This group is much nicer. It still (usually) has infinitely many elements, but now we can find a finite list of elements which form a generating set for the group. This means that every element of the group can be made by composing elements from this list. Groups with finite generating sets are often easier to study than groups which don’t have one.

An example of a mapping class group appears in Rachael’s post below. The braid group on $n$ strands is the mapping class group of the disc with $n$ punctures (where all homeomorphisms fix the boundary pointwise). Punctures are places where a point is removed from the surface. In some ways punctures are similar to boundary components, where an open disc is removed, but a mapping class can exchange punctures with other punctures.

So how can we study what a mapping class does? Rachael described in her post how we can study the braid group by looking at arcs on the punctured disc. Similarly, in the pictures above of examples of self-homeomorphisms the effect of the homeomorphism is indicated by a few coloured curves. More precisely, these are simple closed curves, which means they are loops which join up without any self-intersections. Suppose we are given a mapping class for a surface but not told which one it is. If we are told that it takes a certain curve to a certain other curve then we can start to narrow it down. If we get information about other curves we can narrow it down even more until eventually we know exactly what the mapping class is.

Now I can tell you a little about what I mainly think about in my research: the curve graph. In topology, a graph consists of a set of points – the vertices – with some pairs of vertices joined by edges.

Each vertex in the curve graph represents an isotopy class of curves. As in the case of homeomorphisms, isotopy is a natural relationship between two curves, which more or less corresponds to pushing and pulling a curve into another curve without cutting it open. For example, the two green curves in the picture are isotopic, as are the two blue curves, but green and blue are not isotopic to each other.

Also, we don’t quite want to use every isotopy class of curves. Curves that can be squashed down to a point (inessential) or into a boundary component (peripheral) don’t tell us very much, so we will ignore them. Here are a few examples of inessential and peripheral curves.

We now have infinitely many vertices, one for every isotopy class of essential, non-peripheral curves, and it is time to add edges. We put an edge between two vertices if they have representative curves which do not intersect. So if two curves from these isotopy classes cross each other we can pull one off the other by an isotopy. Here’s an example of some edges in the curve graph of the genus 2 surface. In the picture, all of the curves are intersecting minimally, so if they intersect here they cannot be isotoped to be disjoint.

I should emphasise that this is only a small subgraph of the curve graph of the genus 2 surface. Not only does the curve graph have infinitely many vertices, but it is also locally infinite – at each vertex, there are infinitely many edges going out! This isn’t too hard to see – if we take any vertex, this represents some curve (up to isotopy). If we cut along this curve we get either one or two smaller surfaces. These contain infinitely many isotopy classes of curves, none of which intersects the original curve.

So why is this graph useful? Well, as we noted above, we can record the effect of a mapping class by what it does to curves. Importantly, the property of whether two curves are disjoint is preserved by a mapping class. So not only does a mapping class take vertices of the curve graph (curves) to vertices, but it preserves whether or not two vertices are connected by an edge. Thus a mapping class gives us a map from the curve graph back to itself, where the vertices may be moved around but, if we ignore the labels, the graph is left looking the same. We say that the mapping class group has an isometric action on the curve graph, so to every element of the group we associate an isometry of the graph, which is a map which preserves distances between elements. The distance between two points in the graph is just the smallest number of edges we need to pass along to get from one to the other. When we have an isometric action of a group on a space, this is really useful for studying the geometry of the group, but that would be another story.
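To make the notion of distance concrete, here is a small Python sketch of the breadth-first search idea: the fewest edges needed to get from one vertex to another. The adjacency list below is a made-up toy graph, not a real curve graph (which has infinitely many vertices), and the vertex names are purely illustrative.

```python
from collections import deque

def graph_distance(adjacency, start, end):
    """Breadth-first search: number of edges on a shortest path, or None."""
    if start == end:
        return 0
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        vertex, dist = frontier.popleft()
        for neighbour in adjacency[vertex]:
            if neighbour == end:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return None  # start and end lie in different components

# A tiny hypothetical fragment of a curve graph: vertices are (isotopy
# classes of) curves, and an edge joins curves with disjoint representatives.
adjacency = {
    "a": ["b"],
    "b": ["a", "c"],
    "c": ["b", "d"],
    "d": ["c"],
}
print(graph_distance(adjacency, "a", "d"))  # 3
```

An isometry of the graph is then exactly a relabelling of the vertices that leaves every one of these distances unchanged.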

A correspondence between braids and arcs

I’ve been thinking a lot about the braid group recently, and different ways of studying it. You can read my original post on the braid group here. The arcs I will talk about in this post have really taken over my PhD in the past few weeks, so much so that I’ve even started drawing them on beer mats in the pub!

There is a rather nice correspondence between braids in the braid group and arcs on a punctured disc (think pancake with a few little holes in the middle). In this post I will try and explain this correspondence in the case of the three strand braid group. In this instance we have to imagine a disc with 3 punctures (little holes):

We will start off with the arc that corresponds to the identity braid, which we draw from the centre of the bottom of the disc to the left puncture.

Below we show the identity arc on the left and the identity braid on the right.

So what happens when we have a braid with a twist, like the one below? How does the arc have to change to incorporate information about this braid?

Consider the braid: it has three strands and we have three punctures! The left strand crosses over the middle strand and we can incorporate this into the picture by starting with the identity arc and letting our left puncture and middle puncture rotate around each other to swap places, with the left one taking the upper path (so a clockwise rotation). We let the arc follow the puncture as though it is attached, making sure it does not intersect itself or the other punctures. This process is pictured in the two images below.

For every braid in the braid group (not just the simple ones with one crossing) an arc can be drawn. When you smoosh (see the braid group post) some simpler braids together to make a longer braid, you can draw the corresponding arc one step at a time, following the braid in a downwards direction to find out the next step you should take.

Below is an image of the arc changing as the braid grows longer. At the first crossing the arc moves over the left puncture as before but then there is a second crossing where now the middle strand of the braid moves under the right strand and therefore the middle puncture will rotate with the right one. This time the middle one moves under since the middle strand moves under on the braid, giving us an anticlockwise rotation.

Below is an example of a more complicated arc. If you want a fun challenge you could try to draw out the braid that it is related to!

You might wonder what use it is to draw these arcs when we already know how to work with the braids. One reason is that we can learn more about certain braids by drawing the corresponding arcs and recognising patterns or algorithms that weren’t obvious from studying the braids. This technique is used quite a lot in mathematical proof, where an answer may be a lot simpler to see when the question is formulated in a different way (for example with arcs instead of braids).

I’ll end with a couple of pictures from my ‘everyday life’. Here is a wee set-up I put together to help me figure out the twists and turns for some arc examples I was working on (a hard morning’s work…)

And finally here is a shot of my current blackboard, a mess of arc drawing in progress!

Hall of mirrors: Coxeter Groups and the Davis Complex

I’ve spent a lot of time this summer thinking about the best way to present maths. When someone gives a talk or lecture they normally write on the board in chalk or present via Beamer slides. Occasionally though, someone comes along with some great hand drawn slides that make listening to a talk that wee bit more exciting. So the images in this blog are part of my tribute to this new idea.

I’ve talked about Coxeter groups before (here), but I’ll start afresh for this post. It is worth mentioning now that Coxeter groups arise across maths, in areas such as combinatorics, geometric group theory, Lie theory etc. as well as topology.

A Coxeter group is a group generated by reflections, and “braid type” relations. Informally, you can imagine yourself standing in the middle of a room with lots of mirrors around you, angled differently. Your reflection in a single mirror can be viewed as a generator of the group, and any other reflection through a series of mirrors can be viewed as a word in the group. Here is a silly picture to show this:

Formally, a Coxeter group is defined by a Coxeter matrix on the generating set S. This is an S by S matrix, with one entry for each ordered pair in the generating set. Each diagonal entry has to be 1, and each off-diagonal entry has to be a whole number that is at least 2, or infinity ($\infty$). The matrix also has to be symmetric, having the same entry for $(t,s)$ as $(s,t)$. See below:

Given this matrix you can then define the Coxeter group to be the group generated by S with relations given by the corresponding entry in the matrix.
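As a concrete sketch, here is a hypothetical Python helper that checks the defining conditions on a candidate matrix; the 3 by 3 example encodes three generators with braid-3 relations between each pair.

```python
import math

def is_coxeter_matrix(m):
    """Check the defining conditions: 1 on the diagonal, symmetric,
    and every off-diagonal entry an integer >= 2 or infinity."""
    n = len(m)
    for i in range(n):
        for j in range(n):
            if m[i][j] != m[j][i]:
                return False          # must be symmetric
            if i == j and m[i][j] != 1:
                return False          # diagonal entries are 1
            if i != j and not (m[i][j] == math.inf or m[i][j] >= 2):
                return False          # off-diagonal: >= 2 or infinity
    return True

# Generators s, t, u with a braid-3 relation between each pair
M = [[1, 3, 3],
     [3, 1, 3],
     [3, 3, 1]]
print(is_coxeter_matrix(M))  # True
```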

In particular notice that the diagonal of the matrix being 1 gives that each generator squares to the identity, i.e. it is an involution. It can be hard to see what is going on in the matrix so there is a nicer way to display this information: a Coxeter diagram. This is a graph with a vertex for every generator, and edges which tell you the relations, as described below:

The relation $(st)^{m_{st}} = e$ can also be rewritten as $ststs\ldots = tstst\ldots$, where there are $m_{st}$ letters on each side. This is reminiscent of the braid relations, which is why I called them “braid like” before. In the mirror analogy, this says the mirrors are angled towards each other in such a way that you get the same reflection by bouncing between them $m_{st}$ times, independent of which of the two mirrors you turn towards to begin with.

There exist both finite and infinite Coxeter groups. Here is an example of a finite Coxeter group, with two generators $s$ and $t$. If you view them as reflections on a hexagon (as drawn) then doing first $s$ and then $t$ gives a rotation of 120 degrees, and so doing $st$ 3 times gives the identity, as required.
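We can check these relations by hand. As a sketch, instead of the hexagon picture itself, identify $s$ and $t$ with the two adjacent transpositions of three letters – an isomorphic realisation of this group as the symmetric group on 3 letters.

```python
def compose(p, q):
    """Apply q first, then p (permutations as tuples: i -> p[i])."""
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2)   # identity
s = (1, 0, 2)   # swap the first two letters
t = (0, 2, 1)   # swap the last two letters

st = compose(s, t)
# s and t are involutions (diagonal entries of the Coxeter matrix are 1) ...
assert compose(s, s) == e and compose(t, t) == e
# ... st has order 3, so (st)^3 = e, the braid-3 relation ...
assert compose(st, compose(st, st)) == e
# ... equivalently sts = tst, with m_st = 3 letters on each side
assert compose(s, compose(t, s)) == compose(t, compose(s, t))
print("relations hold")
```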

On the other hand, if you add another generator $u$ with a braid-3 relation with both $s$ and $t$, then the group is infinite. You can imagine tiling the infinite plane with triangles. If you take $s$, $t$ and $u$ to be reflections in the 3 sides of one of these triangles then they satisfy the relations they need to, and you can use these three reflections to transport the central triangle to any other one. If you think about this for a while, this shows the group is infinite. A (somewhat truncated) picture is shown below.

Examples of Coxeter groups don’t just live in 2-D Euclidean space. There is another finite group which acts as reflections on the permutahedron:

And other Coxeter groups which act as reflections on the hyperbolic plane.

The mathematical object I am working with at the moment is called the Davis Complex. You can build it out of finite subgroups of a Coxeter group (side note for the mathematicians: taking cosets of finite subgroups and forming a poset which you can then realise). Even infinite Coxeter groups have lots of finite subgroups. The great thing about the Davis complex being built out of finite things is that there is a classification of finite Coxeter groups! What this means is that when you have a finite Coxeter group its diagram either looks like one of the diagrams below, or a disjoint collection of them.

So because we only have a few diagrams to look at in the finite case, we can prove some things! Right now I am working on some formulas for staring at the Coxeter diagrams and working out the homology of the group. I’m using the Davis complex and all its nice properties to do this. I’ll leave you with a picture of the Davis complex for our first example.

Introducing homology

A lot of things have happened since my last post, and I’ve been waiting for a great way to follow Anna’s fantastic series of SIAGA posts.

On February 16th Professor Robert Ghrist from the University of Pennsylvania gave the annual Potter lecture at the University of Aberdeen. The Potter lecture is aimed at a general audience, and his title was “Putting Topology to Work”. He discussed applications of topology to various areas of engineering and science, and his talk included a great introduction to a topological invariant called homology.

Topologists work with homology a LOT. It appeared in the title of my undergraduate mathematics dissertation and my master’s essay, and I am pretty sure my PhD thesis title (touch wood) will also contain it. However, I have never been good at explaining what homology is in layman’s terms (despite many attempts), so Professor Ghrist’s lecture was particularly inspirational.

A couple of weeks ago I gave a short talk at a London Mathematical Society Women in Mathematics day and tried to give a better description of homology than I have done before. There are some pictures involved so I thought I would recreate that section of my talk here. I’ll screenshot some of my slides and also add text and some extra sketches.

Homology is a process where we start with a topological space X and associate to it a sequence of abelian groups called homology groups, and denoted $H_*(X)$ where $*$ is a natural number (0, 1, 2, 3, …). Some examples of topological spaces are spheres, surfaces and manifolds (which are higher dimensional analogues of surfaces i.e. I can’t draw them).

So what do these groups tell us? $H_0(X)$ tells us about the connected components of our space. If the space is one point, the rank of $H_0(X)$ will be 1; if it is a circle, the rank of $H_0(X)$ will still be 1; but if it is two disjoint points or circles, the rank of $H_0(X)$ will be 2, and so on.

$H_1(X)$ tells us in some sense about ‘holes which look like a circle’. So it will let us know that a circle has one ‘hole that looks like a circle’, a figure of 8 has two, etcetera.

Similarly $H_2(X)$ tells us about ‘holes which look like a 2-sphere’, in the sense that you can blow up a beach ball and what you get is a 2-sphere, so $H_2(X)$ will tell you there is a hole in your beach ball which ‘looks like a 2-sphere hole’. You can also blow up a rubber ring or inner tube, and in the same sense $H_2(X)$ will tell us these torus surfaces have ‘holes which look like 2-spheres’, or ‘holes which look like beach ball holes’. We can’t really visualise what the homology tells us after $H_2(X)$, since it tells us about holes in higher dimensions than 2.
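These ranks can be computed with a little linear algebra. As a sketch, model the circle combinatorially as a hollow triangle (3 vertices, 3 edges, no faces); the ranks of $H_0$ and $H_1$ then fall out of the rank of a boundary matrix.

```python
import numpy as np

# Hollow triangle: vertices v0, v1, v2; edges v0v1, v1v2, v2v0; no faces.
# The boundary matrix d1 sends each edge to (endpoint) - (start point):
# rows are vertices, columns are edges.
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])

rank_d1 = np.linalg.matrix_rank(d1)
b0 = 3 - rank_d1           # rank of H_0: vertices minus rank of d1
b1 = (3 - rank_d1) - 0     # rank of H_1: nullity of d1 minus rank of d2 (no faces)
print(b0, b1)  # 1 1  -- one component, one circle-shaped hole
```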

So why do we do this? We might want to know something about a topological space, but maybe we can’t simply draw the space as it lives in a very high dimension. But the homology of a space is a sequence of groups which tells us about holes of all dimensions: and we know lots about groups! We can try to work out what the homology groups of a space are, we can do things such as study maps between these groups, and there is generally a lot more structure in the sequence of groups for us to take advantage of. So by looking at homology we can learn things about a space that we cannot draw or visualise.

Homology is also functorial in the sense that if we have two spaces X and Y and a map between them (the downwards black arrow in the diagram below), we can look at the homology of X and the homology of Y (horizontal wiggly arrows) and the map between X and Y will induce a map between the homologies (the dotted green arrow). So because we know a lot about maps between groups this can tell us something about the possible maps between X and Y.

In my talk I was focusing on the homology of a group rather than that of a space, so how do we do that? Well we start off with a group and we associate to it something called a classifying space (see my previous post for an example). Calculating the homology of this space is then the same as calculating the homology of the group.

I also used homology with different coefficients, such as $\mathbb{Z}_2$ instead of the usual integer coefficients $\mathbb{Z}$. This allows us to manipulate what sort of abelian groups we get when taking homology, for instance using $\mathbb{Q}$ coefficients will give us a $\mathbb{Q}$-vector space. Sometimes we do this to make our problem easier to solve, or sometimes the problem itself prescribes that we use different coefficients.

So now I have told you about homology, next time I will follow up with a post on the hot topic of homological stability!

And to reward you for reading to the end, here is a great comic drawn by my friend Tom!

SIAGA: Tensors

Seven pictures from Applied Algebra and Geometry: Picture #6

The Society for Industrial and Applied Mathematics, SIAM, has recently released a journal of Applied Algebra and Geometry called SIAGA. See here for more information on the new journal. They will start taking online submissions on March 23rd.

The poster for the journal features seven pictures. In this penultimate blog post I will talk about the sixth picture, on the subject of Tensors. In the first section of this post, “The Context”, I’ll set the mathematical scene. In the second section, “The Picture”, I’ll talk about this particular image.

The Context

Tensors are the higher-dimensional analogues of matrices. They are data arrays with three or more dimensions, and are represented by an array of size $n_1 \times \cdots \times n_d$, where $n_k$ is the number of ‘rows’ in the $k$th direction of the array. The entries of the tensor $A$ are denoted by $A_{i_1 \ldots i_d}$ where $i_k \in \{ 1, \ldots, n_k \}$ tells you which row in the $k$th direction you are looking at. Just as for a matrix, the entries of a tensor are elements in some field, for example real or complex numbers.

Tensors occur naturally when it makes sense to organize data by more than two indices. For example, if we have a function that depends on three or more discretized inputs $f(x,y,z)$ where $x \in \{ x_1, \ldots, x_{n_1} \}$, $y \in \{ y_1, \ldots, y_{n_2} \}$ and $z \in \{ z_1, \ldots, z_{n_3} \}$, then we can organize the values $A_{ijk} = f(x_i,y_j,z_k)$ into a tensor of size $n_1 \times n_2 \times n_3$. Tensors are increasingly widely used in many applications, especially signal processing, where the uniqueness of a tensor’s decomposition allows the different signals comprising a mixture to be found. They have also been used in machine learning, genomics, geometric complexity theory and statistics.
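As a quick sketch of the previous paragraph, here is how numpy broadcasting organises values of a made-up function $f(x,y,z) = x + 2y + 3z$ into a $2 \times 3 \times 4$ tensor.

```python
import numpy as np

# Discretised inputs: x has 2 values, y has 3, z has 4.
x = np.arange(2)
y = np.arange(3)
z = np.arange(4)

# A_{ijk} = f(x_i, y_j, z_k); broadcasting fills the whole 3-way array.
A = x[:, None, None] + 2 * y[None, :, None] + 3 * z[None, None, :]

print(A.shape)     # (2, 3, 4)
print(A[1, 2, 3])  # f(1, 2, 3) = 1 + 4 + 9 = 14
```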

Much of our data analysis machinery is currently limited to a matrix-centric perspective. To overcome this, there has been tremendous effort to extend the well-understood properties of matrices to the higher-dimensional world of tensors. A greater understanding of tensors paves the way for very exciting new developments that can cater to the natural structure of tensor-based data, for example in experimental design or confounding factor analysis. This analysis and understanding uses interesting and complicated geometry.

One requirement for computability of a tensor is to have a good low-rank approximation. Tensors of size $n_1 \times \cdots \times n_d$ have $n_1 \cdots n_d$ entries and, for applications, this quickly becomes unreasonably large. Matrices are analyzable via their singular value decomposition, and the best low-rank approximation is obtained directly from it by truncating at the $r$th largest singular value. We can extend many of the useful notions from linear algebra to tensors: we have eigenvectors and singular vectors of tensors, and a higher-order version of the singular value decomposition.
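In the matrix case the truncation really is a one-liner; here is a numpy sketch on a random example matrix (nothing from the post, just an illustration of the Eckart–Young truncation).

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 5))   # a generic 6 x 5 matrix

# Singular value decomposition: M = U diag(s) Vt, with s sorted decreasingly.
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Best rank-r approximation: keep only the r largest singular values.
r = 2
M_r = U[:, :r] * s[:r] @ Vt[:r, :]

print(np.linalg.matrix_rank(M_r))  # 2
```

The Frobenius-norm error of this truncation is exactly the square root of the sum of the squared discarded singular values, which is what makes it the *best* rank-$r$ approximation.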

The Picture

As well as being a picture of the well-known Rubik’s cube, this picture is a cartoon of a tensor of size $3 \times 3 \times 3$. Such a tensor consists of 27 values.

To understand the structure contained in a tensor, we use its natural symmetry group to find a presentation of it that is simple and structurally transparent. This motivation also underlies the Rubik’s puzzle although the symmetries can be quite different: a change of basis transformation for the tensor case, and a permutation of pieces in the case of the puzzle.

Despite being small, a $3 \times 3 \times 3$ tensor has interesting geometry. It is known that a generic tensor of size $3 \times 3 \times 3$ has seven eigenvectors in $\mathbb{P}^2$. In the paper “Eigenconfigurations of Tensors” by Abo, Seigal and Sturmfels, we show that any configuration of seven eigenvectors can arise, provided no six of the seven points lie on a conic.

The braid group

Apologies for the delay in writing this post. Sometimes when one does maths every day, the last thing they feel like doing when they get home is writing about maths, and I hope that’s enough of an excuse.

This post is going to focus on braid groups, what they are and how we can visualise them. The braid group is the most popular/simplest example of an Artin group, and I guess in some sense my whole PhD is on Artin groups.

So what is the braid group? We have all heard of hair braids

And the hair braid is an example of a braid in the braid group on 3 strands

A group can be thought of as a collection of elements with some sort of operation which for the purpose of the post we will call smooshing i.e. you can smoosh two elements together to get a new element in the group. There is an identity element which doesn’t do anything when you smoosh it with other elements and each element has an inverse element: when you smoosh an element and its inverse you get the identity element back.

So what is the braid group? The braid group is defined on a set number of strands. An element of this group looks like you have taken these strands, neatly laid them out, and then twisted them together in some way. Here are some elements of the braid group on 4 strands, where we have drawn the pictures so that if one strand passes over the top of another, the bottom strand seems to break.

So what is the smooshing operation on these elements? If we want to smoosh together two braids, we simply tie the bottom of one braid to the top of the other, like so:

The identity element is the braid where no strands are twisted, as tying this to any other braid doesn’t change it (it makes it a bit longer but we don’t care about stretching and squashing because we are doing topology).

And here is an example of a braid and its inverse: when you smoosh them together you get the identity element.

Okay, so we have a few cool pictures and we are starting to understand what the braid group is, but if we want to do some maths then we had better be able to write down the elements. And writing down a complicated twisty thing might get tricky! A solution to this problem is to work with generators of the group. We can get any element of the braid group on 4 strands by working with just these 6 elements and smooshing them together in various ways. Notice that the 6 elements are in fact three elements and their inverses.

If we give them all names (or in this case numbers) then we can consequently write down any element by saying the order we have to smoosh these 6 generators together to get it. So we are ready to do some maths!
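One cheap way to start doing this on a computer: encode a braid word as a list of generator numbers, with negative numbers for inverses. This is only a sketch – it performs free cancellation of a generator against its inverse, whereas genuinely deciding when two braids are equal would also need the braid relations.

```python
def smoosh(b1, b2):
    """Concatenate two braid words, then cancel adjacent inverse pairs."""
    word = list(b1) + list(b2)
    done = False
    while not done:
        done = True
        for i in range(len(word) - 1):
            if word[i] == -word[i + 1]:   # generator next to its inverse
                del word[i:i + 2]
                done = False
                break
    return word

def inverse(b):
    """Read the word backwards, inverting each crossing."""
    return [-g for g in reversed(b)]

b = [1, -2, 1]                # a braid on 4 strands, as a word in generators
print(smoosh(b, inverse(b)))  # [] -- smooshing with the inverse gives the identity
```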

What I’m actually trying to do involves the classifying space of the braid group and fitting braid groups inside each other by adding an extra straight strand to elements.

Classifying space of the symmetric group

Groups first appear when we study undergraduate algebra. The group we will talk about today is one of the first that undergraduates meet: it is called the symmetric group, and elements of the nth symmetric group permute the numbers {1,2,…,n}.

For instance an element of the 7th symmetric group permutes {1,…,7} and might send the elements to each other like this:

1 >>> 3

2 >>> 5

3 >>> 4

4 >>> 1

5 >>> 6

6 >>> 7

7 >>> 2
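This permutation can be written straight down in code; here is a minimal Python sketch (the dict encoding is just one convenient choice).

```python
# The element of the 7th symmetric group described above: i -> sigma[i].
sigma = {1: 3, 2: 5, 3: 4, 4: 1, 5: 6, 6: 7, 7: 2}

def compose(p, q):
    """The permutation 'first apply q, then p'."""
    return {i: p[q[i]] for i in q}

# Swapping the roles of keys and values gives the inverse permutation,
# and composing the two recovers the identity.
sigma_inv = {v: k for k, v in sigma.items()}
identity = compose(sigma, sigma_inv)
print(identity == {i: i for i in range(1, 8)})  # True
```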

Sometimes we want to use geometry and topology to talk about algebraic things such as groups. An example of this is homology of a group, which is a nice invariant. Homology is only defined for spaces though, and so to even be able to talk about the homology of a group, we need to associate the group with a space. This space is called the classifying space of the group. We will define the classifying space for the symmetric group, using some pictures I made for a talk last week.

For the nth symmetric group, the classifying space is constructed by first considering configurations (conf) of n points in infinite real space. What does it mean to describe a point in ‘infinite real space’? It means that there are an infinite number of coordinates describing the position of the point, but only finitely many are non-zero. Below is a picture of a configuration of 7 points, {1,…,7}.

The nth symmetric group permutes the labels of these n points: below is the same configuration of 7 points, with the labels permuted by the element of the 7th symmetric group we described earlier. This permutation of the labels is called an action of the symmetric group on the configuration space.

When we have an action, we can ‘divide’ by that action. In the case of our action, this ‘dividing’ means that we think of all the different ways of labeling this configuration with {1,…,7} as the same: we no longer care about how the points are numbered, just which 7 points we chose. It leaves us with the space of unlabeled configurations: this is the classifying space of the symmetric group.