**(1) How long have you been here at LSE?**

I started at the LSE in September 2016, so I have been here for a bit more than two years. Before this, I was a Senior Lecturer at UCL for two years and a Senior Research Fellow at Oxford for three years. I did my PhD in the Statistics department at Columbia University.

**(2) In layman’s terms, what are your main fields of research/interest?**

My main interests are questions arising at the intersection of finance, probability, and statistics. For example, I work in Stochastic Portfolio Theory, a framework for analysing the behaviour of portfolios and the structure of large equity markets. What I like about this field is that it is empirically driven, mathematically interesting, and has insightful and practical results for long-term investors. An example of the research in this field is this non-mathematical paper, which illustrates how and why certain naive trading strategies (such as the monkey portfolio) outperform the market in the long run. My PhD student Weiguan Wang and I have also started to study the applications of machine learning techniques to finance.

**(3) How did you first become interested in this area?**

During my doctoral studies, my adviser, Ioannis Karatzas, invited me to this fantastic series of informal research meetings with other university researchers and research-active industry practitioners. Among the participants was also Bob Fernholz, who was a former academic, then founded his own asset management company, and formalised Stochastic Portfolio Theory. These meetings were inspiring – lots of research ideas were formulated there. Since then I have been working on and off on Stochastic Portfolio Theory. Sometimes more, sometimes less; oscillating between more applied and more theoretical questions.

**(4) What are your favourite courses to teach, or favourite part of teaching those courses?**

I’ve taught a couple of courses at the LSE, at both the undergraduate and the graduate level. It’s great to have students with diverse backgrounds and experiences, as is common in LSE mathematics courses. I especially enjoyed developing a new summer school course from scratch last summer with my colleague Luitgard Veraart. This course connects theory with practical implementations, and I greatly enjoy teaching these links. The course was part of a longer list of Financial Mathematics courses that we introduced to the summer school last year. They all turned out to be extremely successful and got excellent student feedback – so we decided to continue this coming summer.

**(5) What is the best part of being at LSE?**

LSE has one of the largest research groups in Financial Mathematics worldwide (it might even be the largest). There are also plenty of opportunities to link with the many London-based practitioners who work in banks, in hedge funds, and in Fintech. This allows for a very active research programme with lots of visitors, a continuous exchange and flow of ideas, an inspiring research atmosphere, and lots of joint projects.

At the same time, LSE actually feels like a small university – it’s easy to connect with academics in other disciplines, all the research seminars are close by, and people are very friendly.


**(1) How long have you been here at LSE?**

I came in the summer of 2013, and have been loving it here ever since! Prior to coming to the LSE I held a Royal Society University Research Fellowship at the University of Leeds, and before that a Marie Curie Fellowship at the University of Siena.

**(2) In layman’s terms, what are your main fields of research/interest?**

I work in a number of areas. A lot of my research has been in Computability Theory, which is normally classified as part of Mathematical Logic. Roughly speaking, researchers in Computability Theory are concerned with the limits of computation — what computers can and cannot achieve, what the “incomputable universe” looks like. Then I’ve also been doing lots of work in Complex Systems, which has manifested in research in Statistical Mechanics, Theoretical Population Biology and Network Science.

Most recently my work in this area has led to investigations in cryptocurrencies. That’s an interesting area to work in because it’s so new, and while there are lots of excellent academics getting into it, most people developing the ideas at this point are in industry.

**(3) How did you first become interested in this area?**

What drew me into Computability Theory in the first place was Gödel’s Theorems. There just seemed to be something so profound about a mathematical theorem which speaks to fundamental limits on what one can expect to achieve. I found that idea rather beautiful. Later on I was also drawn to work on matters with immediate societal impact. That’s partly what attracts me to the recent work in cryptocurrencies.

**(4) What are your favourite courses to teach, or favourite part of teaching those courses?**

Ha… I’m teaching two courses at the moment, and if I pick a favourite then it will look bad for the students of the other course! I’m teaching Coding and Cryptography, and also the Mathematics of Networks, which is a relatively new course which I introduced a couple of years ago. The best part of teaching has to be when students get excited by the subject matter and want to pursue ideas further.

**(5) What is the best part of being at LSE?**

One of the nice things about the maths department here from my perspective is the strong algorithmic flavour to a lot of the research that people do. I’ve always existed somewhere on the boundary between maths and computer science, and this is a nice department in that sense — there are quite a few of us who could be in computer science departments. The fact that it’s also quite a small department probably helps it remain friendly, and I’ve always been impressed with how democratically things are run around here. Of course being in London is another major motivation for being here.

The London Mathematical Society organises an undergraduate summer school every year: a two-week course held at a UK university, consisting of short lecture courses with problem-solving sessions as well as colloquium-style talks.

This year, the LMS Undergraduate Summer School was hosted by the University of Glasgow, and the two of us were fortunate to have had the opportunity to attend it. The programme consisted of six short lecture courses (three lectures and two exercise sessions each) and eight colloquia. We have selected four of the lecture courses to discuss below.

The first lecture series, on the Compactness Theorem, was given by Professor Mike Prest from the University of Manchester. After providing us with an introduction to ultrafilters and ultraproducts, he used Zorn’s lemma to prove the existence of an ultrafilter containing any given filter. He defined (diagonal) embeddings, structures and definable sets, stated Łoś’s theorem, and we proved the Compactness Theorem. Finally, a sketch proof of Łoś’s theorem followed. What we found particularly interesting about these lectures was how abstract algebra was used to prove such a fundamental result in predicate logic. Since neither of us had ever taken such advanced courses in either field, we were fascinated by the challenging material and the highly demanding problem sets.

This lecture course focused on a question in combinatorial group and semigroup theory called the **word problem**.

Here we look at the word problem in the specific context of rewriting systems only. Consider a set of letters A={*a*,*b*}. Then words generated by this set are any finite sequence of letters in this set, such as 1 (the empty word, which contains no letters), *a*, *b*, *aa*, *aba*, *bbbaaaaabb*, etc. To form a rewriting system (A,R), the set of letters is paired with a set of rewriting rules (call this R) that allow us to replace a word with another in the forward direction. For example, the rewriting rule *aa*→*b* allows us to state that *baab*→*bbb*, *bbaa*→*bbb* and *abaaaab*→*abaabb*→*abbbb* (we simply replaced *aa* with *b*). In this case, the words *baab* and *bbaa* **represent the same element** in this rewriting system since they can be transformed to each other using the rewriting rules. Our notation for this rewriting system would then be (A,R)=({*a*,*b*},{*aa*→*b*}).

A rewriting system has a **decidable word problem** if there is an algorithm which for any two words in the system decides whether or not they represent the same element in the rewriting system. In other words, is it possible to write a set of instructions such that for any two words, we can determine whether these two words can be rewritten into each other or some common word?

A rewriting system is **noetherian** if there is no infinite descending chain with the rewriting rules defined as reductions. The rewriting system ({*a*,*b*}, {*a*→*b*}) is noetherian because for any word, taking the reduction *a*→*b*, we can only replace *a* with *b* (remember that rewriting rules only apply in the forward direction in this context), so the number of *a*’s keeps decreasing (bounded below by 0), and there cannot be an infinite descending chain (i.e. at some point we cannot reduce further). For example, *aabba*→*babba*→*babbb*→*bbbbb*. The rewriting system ({*a*,*b*},{*a*→*b*,*b*→*a*}) is not noetherian since *a*→*b*→*a*→*b*→*a*→*b*→… is an infinite descending chain (we can just keep alternating between the two rules forever). Another example of a non-noetherian rewriting system is ({*a*},{*a*→*aa*}).

A rewriting system is **confluent** if whenever a word has two or more options for reduction (by applying two different rules or applying the same rule at two different positions), there is a word that both products can eventually be reduced to. The rewriting system ({*a*,*b*}, {*a*→*b*}) is also confluent because with any word, our only option is to replace *a* with *b*, and the order in which we do this does not matter. For example, *aa*→*ab*→*bb* and also *aa*→*ba*→*bb* (note that the first reduction is done differently, but we reach *bb* in both cases). The rewriting system ({*a*,*b*,*c*},{*a*→*b*,*a*→*c*}) is not confluent since *a*→*b* and *a*→*c* (we have two options to reduce the word *a*), but we cannot apply any sequence of rules so that *b* and *c* are reduced to a common word (since there are no rules that act on *b* or *c*).

A theorem states that **if A and R are both finite, and (A,R) is both noetherian and confluent, then (A,R) has a decidable word problem**. Given any two words in the rewriting system, a simple word problem algorithm would be to reduce both of them until they can no longer be reduced (we know that this can be done, since by the noetherian property there are no infinite descending chains), and then check whether these two irreducible words are the same (a word is irreducible if no rewriting rule can be applied to it).
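As a sketch of this algorithm (our own illustration, not from the lectures), assuming the system is finite, noetherian and confluent, so that repeated reduction terminates and the resulting irreducible word is unique:

```python
def reduce_word(word, rules):
    """Apply rewriting rules (forward direction only) until no rule
    applies; this terminates whenever the system is noetherian."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)  # rewrite leftmost match
                changed = True
                break
    return word

def same_element(u, v, rules):
    """For a finite, noetherian and confluent system, two words represent
    the same element iff they reduce to the same irreducible word."""
    return reduce_word(u, rules) == reduce_word(v, rules)

# The example from the text: baab and bbaa both reduce to bbb under aa -> b.
print(same_element("baab", "bbaa", [("aa", "b")]))  # True
```

Note that the reduction strategy (always rewriting the leftmost match of the first applicable rule) is arbitrary; confluence is exactly what guarantees that the final irreducible word does not depend on this choice.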

This is merely a small part of the word problem, which can be studied with various other approaches. In the short lecture course, we also looked at approaches using Cayley graphs (shown in the picture below) as well as one-relator groups.

Gaspard Monge’s problem is the problem of moving a pile of sand to form an embankment. Which grain in the pile shall we move to which part of the embankment for maximum efficiency? This depends on how we measure efficiency. We consider a cost function c(s), where s is the displacement when moving each particle. If the cost function is convex (for example, s^2), it makes sense to translate (minimising the sum of squares of the distances moved for each particle); if the cost function is concave (for example, |s|^0.5), it makes sense to flip (minimising the sum of square roots of the distances moved for each particle).
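This convex/concave contrast can be checked on a toy instance (our own illustration, not from the lectures): two grains at positions 0 and 1, two target spots at 10 and 11, brute-forcing both possible matchings.

```python
from itertools import permutations

def best_matching(sources, targets, cost):
    """Return the assignment of targets to sources with minimum total cost
    (brute force over all matchings; fine for tiny instances)."""
    return min(permutations(targets),
               key=lambda perm: sum(cost(t - s) for s, t in zip(sources, perm)))

sources, targets = [0.0, 1.0], [10.0, 11.0]

convex = lambda s: s ** 2           # convex: long moves are penalised heavily
concave = lambda s: abs(s) ** 0.5   # concave: long moves are relatively cheap

# Convex cost: translating (0->10, 1->11) costs 100+100=200, flipping costs 121+81=202.
print(best_matching(sources, targets, convex))   # (10.0, 11.0): translate
# Concave cost: flipping (0->11, 1->10) costs sqrt(11)+3 < 2*sqrt(10).
print(best_matching(sources, targets, concave))  # (11.0, 10.0): flip
```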

This led us to draw parallels with what we learnt in microeconomics, in the sense that our optimal household bundle depends on the characteristics of our utility function, and convex and concave utility functions lead to ‘opposite’ optimal bundles.

The lecture series presented a variety of theorems and lemmas aiming to help us understand the “local-global” question: if f has a root in R and in Q_{p} for all primes p, does f then have a root in Q? These lectures introduced us to beautiful ways in which analysis and abstract algebra are used to tackle a variety of number-theoretic problems. The material in this lecture series was difficult (almost driving one of the authors to give up on maths completely), and we discuss below one of the more interesting results (using field extensions).

For the field Q and a prime p, we defined a nonarchimedean absolute value | |_{p}. Afterwards we proved the following two theorems:

Now, recall that a sequence (x_{n}) is **Cauchy** if for every ε > 0 there is an N such that d(x_{m}, x_{n}) < ε whenever m, n ≥ N, and that a metric space is **complete** if every Cauchy sequence converges.

Also, we say that a subset A of a topological space X is **dense** if every point *x* in *X* either belongs to *A* or is a limit point of A.

(*) The above theorems allowed us to extend the field Q to a field Q_{p}, equipped with a nonarchimedean valuation | |_{p}, such that Q_{p} contains Q and:

- Q_{p} is complete; and
- Q is dense in Q_{p}.

Now, | |_{p} on Q_{p} satisfies the first two theorems, and together with (*) we get the following mind-blowing result:

To conclude, don’t tell the first year but the p-adic world is actually much nicer than the real one!

The University of Glasgow has a beautiful campus set in a historic town. We thoroughly enjoyed exploring the university and other sights in Glasgow. The LMS Summer School exposed us to exciting areas of mathematics which we would not normally learn in school, and motivated us to continue learning and deepening our mathematical knowledge.

Anyone who wants to explore the material further can check out the programme web page or email the authors at x.dimitrakopoulou@lse.ac.uk or j.tan18@lse.ac.uk.


**(1) Hi Tom. Perhaps you could start by telling us what you’ve been up to since leaving the LSE?**

I left the LSE in 2016 to take up a role as Assistant Professor of Management Science and Information Systems at Rutgers University, New Jersey. It has taken a bit of time to get used to the different way things are done in the US, especially with regard to teaching, but I’m happy to say that I’m settling in well and enjoying my new life across the pond. I’m also pleased to continue being a part of the LSE Department of Mathematics as a Visiting Fellow, and have been continuing my LSE collaborations with Katerina Papadaki, László Végh, Bernhard von Stengel and Steve Alpern.

**(2) It’s interesting you highlight differences in the way university life works in the US. Could you elaborate on that?**

In many ways things are a lot more informal in the US. In the UK, course syllabuses must be agreed months in advance and exams are double and triple checked before being printed in a standardised font. In the US, several people may teach the same course simultaneously with their own syllabus and their own grading criteria. Exams can be written the day before they are taken. One senior professor told me he used to make up exams on the spot and write the questions on the blackboard. In some ways it makes the lecturer’s job easier, but it also puts more responsibility on them to be fair to the students and give them a good learning experience.

**(3) ..which system do you think is most effective? Should we be learning lessons here in the UK?**

That’s a difficult question. I think the difference might be partly down to culture. In the US, it sometimes seems like every lecturer is trying to “sell” their course in the free market of higher education credits. If a lecturer is unsuccessful and unappealing to students, their course will fail, so perhaps everything works itself out. That might work in the American “everyone for themselves” culture, but I’m not sure it’d work so well in the UK.

**(4) Presumably Management Science also presents quite a different work environment to Mathematics?**

Yes, though I would say that my department is pretty mathematical on the scale of business school departments. The faculty here have a very broad range of interests, from cryptography to stochastic optimization to Boolean functions. The department’s origins were in RUTCOR, an operations research interest group established at Rutgers in the 1980s, with members from departments all across the university, including maths, statistics, engineering and computer science. But yes, I certainly agree that the students (particularly the undergraduates) are overwhelmingly here to get a business degree so that they can get out there and be successful. I suppose that’s not too different to the LSE!

**(5) .. and mathematically what have you been up to? Could you outline the project you are most excited about at the moment in simple terms?**

I’m most excited about my work on ordering problems, where several tasks have to be completed in some order. For example, some hiding places must be searched one by one with the aim of finding some hidden bad guys in the least possible time, or a machine has to process a set of jobs in some order to minimize the average time to finish them. Ordering problems can be hard to solve because the number of possible orderings of a set grows factorially with its size. For example, the number of ways to order a set of size 60 is roughly equal to the number of atoms in the universe. But by using set functions to write these types of problems in a very general form, it’s possible to understand the structure of them better and find approximate solutions, or precise ones in special cases.
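The claim about orderings is easy to verify directly (the 10^80 figure for atoms in the observable universe is a common back-of-the-envelope estimate, not from the interview):

```python
import math

orderings = math.factorial(60)  # number of ways to order a 60-element set
atoms = 10 ** 80                # rough estimate of atoms in the observable universe

# 60! is about 8.3 * 10**81, within a couple of orders of magnitude of the estimate.
print(orderings > atoms)        # True
```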

**(6) How’s New Jersey? Is this your first time living in the USA? How are you settling into the culture? **

Yes, this is my first time living in the US. New Jersey has some beautiful parts, and I like living in Newark. It’s a city that was once an industrial centre of the east coast (the intersection of Broad Street and Market Street, a couple of blocks from where I live, was once said to be the busiest intersection in the country). But Newark suffered a great decline over the second half of the twentieth century, and it’s only recently that the city seems to be turning a corner and making a resurgence. Given that it’s only 20 minutes by train from Manhattan, I can see it becoming a popular commuter hub sometime in the near future!

Thank you! I’m happy to be here. I work on packing and covering problems. Given a finite family of finite sets, what’s the minimum number of sets whose union is everything? What’s the maximum number of elements that intersect every set at most once? What’s the maximum number of pairwise disjoint sets? What’s the minimum number of elements needed to intersect every set at least once?

Two basic integer programs, called set packing and set covering – and their duals – model these problems. These integer programs turn out to be NP-hard. Their natural linear relaxations however are polynomially solvable. So the question becomes: when are the linear relaxations just as strong as the integer programs?
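A tiny example (our own, not from the interview) of an instance where the relaxation is strictly weaker: covering the edges of a triangle by vertices, a special case of set covering. The integer optimum is 2, while the fractional solution x = (1/2, 1/2, 1/2) is feasible with value 1.5.

```python
from itertools import product

edges = [(0, 1), (1, 2), (0, 2)]  # each edge must be covered by a chosen endpoint

# Integer program: minimise the number of chosen vertices covering every edge.
int_opt = min(sum(x) for x in product([0, 1], repeat=3)
              if all(x[u] + x[v] >= 1 for u, v in edges))

# Fractional relaxation: half a unit on each vertex is feasible with value 1.5,
# so on this instance the linear relaxation is strictly weaker than the IP.
frac = (0.5, 0.5, 0.5)
frac_feasible = all(frac[u] + frac[v] >= 1 for u, v in edges)

print(int_opt, sum(frac), frac_feasible)  # 2 1.5 True
```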

This concept leads to perfect graphs for one of the dual pairs of linear programs, and to ideal and Mengerian clutters for the other. I study the latter.

Packing and covering problems are incredibly simple to state. And it was this simplicity that pulled me in.

I spent a summer during my undergrad years at the University of Waterloo with Bertrand Guenin, who later became my Masters and PhD adviser. We worked on packing odd T-joins in graphs. Our project turned out to be successful. But it turned out to be just the tip of the iceberg.

Over the years I moved on to deeper problems, leading to the regime of ideal and Mengerian clutters, only one instance of which corresponded to the odd T-joins problem.

Good question! The project I have been working on for the past few years is very much on the theoretical side, though the theory of ideal and Mengerian clutters is partly motivated by its applications to computational complexity. So currently it is mainly the theory that’s been keeping me busy. Packing and covering problems are also among the most basic integer programs for computational optimizers, and I would like to study them from that perspective, too.

What drew me to London, first and foremost, are the research groups at LSE Maths and how strong they are. London is a big city, and having lived in Tehran and Toronto, I do like life in big cities and the challenges it entails. I’ve visited London many times before, so I’ve explored a fair bit already. My favourite places so far are the cafes, the theatres, and the British Museum.

Reading, politics, playing tennis, and good coffee!

The central solution concept in game theory is Nash equilibrium, which is required to provide a precise, complete prediction about the players’ behaviour in a game. Polyequilibrium legitimises looser predictions, or polyequilibrium results, such as “the equilibrium price is higher than five” or “the outcome is socially efficient”. Technically, a polyequilibrium is a collection of strategy profiles that share the property in question. Put differently, strategy profiles that do not have the property (say, those representing a socially inefficient outcome) are excluded. The exclusion has to be justified in the following sense: every excluded strategy for a player has an adequate substitute, a non-excluded strategy that does at least as well against all non-excluded strategy profiles. A relatively small polyequilibrium, which excludes many strategy profiles, provides a sharper prediction about the outcome of the game than a larger polyequilibrium does. However, both are legitimate polyequilibria. Thus, this solution concept reflects a somewhat different philosophy than that of Nash equilibrium, in that it is content with learning something interesting about the players’ equilibrium behaviour and does not require that the strategy choices be completely pinned down.

**(2) Are there applications you have in mind for your work in polyequilibria outside pure mathematics?**

Yes. Consider the following simple example of bilateral trade. A buyer offers a price for a particular item, which the seller can only accept or reject. Suppose, for simplicity, that the item has zero value for the seller. It is then a reasonable strategy for the seller to accept any price greater than zero. In fact, this is a dominant strategy. But it is not an equilibrium strategy, because the buyer does not have a best response to it: offering any positive price *p* is less profitable than offering, say, half that price. The polyequilibrium concept solves this conundrum in the obvious way: it makes “offering no more than *p*” a legitimate choice for the buyer, for any given *p*. This is not a strategy but a polystrategy: a collection of price offers. Together with the aforementioned seller’s strategy, it constitutes a polyequilibrium, at which the result that the item is sold at a price not exceeding *p* holds.

Another area where the idea that the players’ strategies may only be partially specified seems very natural is dynamic games. In such games, the notion of subgame perfection requires that players not only respond optimally to the other players’ actual moves but would also do so off-equilibrium, that is, as a response to all possible deviations of the others from their equilibrium strategies. This is a very sensible requirement, as it eliminates non-credible threats: those that a rational player would not actually carry out. But in a large game, it can be quite cumbersome to check that it holds, as all counter-threats need also be credible, and so on. With polyequilibrium, it may be sufficient to specify actions only at some of the game’s decision nodes, say, those at or close to the actual path. For example, there is no need to specify a response to an action that is unequivocally detrimental to the acting player. Not specifying a response means that none of the possible reactions is excluded. This natural choice for the responder is, again, one that the Nash equilibrium solution concept does not allow.

**(3) More generally speaking, what is the state of play at the moment, with regard to applications of game theory? How much symbiosis is there between the interests of pure mathematicians or computer scientists and the interests of economists here?**

Game theory is more relevant now than it ever was. The move to online economic activity means that clever and sophisticated trading mechanisms can be implemented with relative ease. Ad spaces, for example, can be sold in large auctions where the reaction times are measured in milliseconds and the rules can be as complicated as one wishes. The design of good, efficient mechanisms of this kind is a challenge to game theorists and computer scientists, whose pursuits increasingly converge.

A nice example of this convergence is matching problems: matching pupils to schools, residents to hospitals, kidney donors to recipients, and so on. The matching algorithms must not only be reasonably easy to implement as code but also have to be incentive compatible. That is, participants on at least one side of the market should be assured that stating their true preferences always leads to the best outcome they can possibly get. Finding such algorithms, or even figuring out whether they exist, can be a non-trivial game-theoretic problem. The practical importance of these matching algorithms is immense.
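The best-known algorithm of this kind is Gale-Shapley deferred acceptance, which produces a stable matching and is strategy-proof for the proposing side. A minimal sketch (variable names and the two-sided toy instance are our own; preference lists are assumed complete):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance. proposer_prefs[p] is p's ordered
    list of receivers, receiver_prefs[r] likewise. Returns a stable
    matching as a dict receiver -> proposer."""
    # Precompute each receiver's ranking of proposers for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)          # currently unmatched proposers
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                           # receiver -> tentatively held proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # p's best not-yet-tried receiver
        next_choice[p] += 1
        if r not in match:
            match[r] = p                 # r tentatively accepts p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])        # r trades up, rejecting its old match
            match[r] = p
        else:
            free.append(p)               # r rejects p; p will propose again
    return match

proposers = {"ann": ["X", "Y"], "bob": ["Y", "X"]}
receivers = {"X": ["ann", "bob"], "Y": ["bob", "ann"]}
print(deferred_acceptance(proposers, receivers))  # {'Y': 'bob', 'X': 'ann'}
```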

**(4) ..so presumably these are some of the areas you would encourage younger people starting out in game theory to focus on? Any general advice you would give to somebody starting a PhD? What would you do differently a second time around?**

There’s nothing as exciting and satisfying as blazing one’s own trail. Ultimately, you go where your ideas take you. The more you hear and learn, the more you expose yourself to new research directions, the greater the chance that you’ll come up with something new. So my advice would be to start with whatever you find most exciting, but then not to be afraid of making sharp turns, pursuing new ideas as they come along. Another piece of advice would be to try, from time to time, to make contributions that go beyond the incremental: to write papers that other people might find inspiring, papers that will result in follow-up work. Attaining technical proficiency and “mathematical maturity” is also important, so my advice to PhD students is to take as many advanced maths courses as they can possibly bear.

**(5) To finish on a lighter note – what do you enjoy outside of work? Which book did you read last?**

A lively book I just finished is The Life Project, by Helen Pearson. It tells the story of the British cohort studies, which started in 1946. It’s a tale of science, scientific endeavour, and the sociology and politics of science. The heroes are the doctors and scientists who envisioned these studies and struggled to make them a reality and maintain them throughout the many, sometimes difficult years. Their achievements are the medical, sociological and economic insights that resulted from following the lives of the thousands of individuals involved and observing how their starting points and their decisions through the years affected their health, wealth and well-being, and the policy changes that this understanding helped bring about. It’s a story about the value and beauty of science – recommended reading!

Algorithmic randomness is about the existence (or not) of structure or patterns in individual mathematical objects (e.g. a file in your computer, viewed as a string of letters). It is called “algorithmic” because it is defined with respect to algorithms, i.e. processes that can be executed by computers. Every object is non-random with respect to itself, since it is quite structured and predictable if we already have total access to it. Hence randomness makes sense with respect to an origin, a fixed point of view, a coordinate system.

Algorithmic information theory allows us to study, qualify and quantify the randomness or the predictability of *individual* objects. In contrast, classical information theory (based on probability) studies the predictability of probabilistic sources, i.e. ensembles of outcomes that occur according to a certain probability distribution.

For example, a sequence of a million 0s (say, as the outcome of coin tosses) is as probable as a million bits from the output of a nuclear decay radiation source (quantum mechanics predicts that such sources are truly unpredictable). However, algorithmic randomness regards the sequence of 0s as highly non-random, as opposed to the seemingly patternless stream of quantum noise. The difference is that a sequence of a million 0s can be produced as the output of a relatively short computer program (or description, in a standard language) while quantum noise does not have such short descriptions.

To sum up, the idea behind algorithmic randomness is that something is random if it does not have short descriptions (in standard languages). This simple idea leads quite fast to the multiple facets of randomness such as incompressibility (a random object is incompressible) and unpredictability.
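The incompressibility facet can be made concrete with an ordinary compressor as a crude stand-in for shortest descriptions (our own illustration; note the irony that pseudo-random bytes do have a short description, namely the generating program and seed, so they are only incompressible to zlib, not algorithmically random):

```python
import random
import zlib

structured = b"0" * 1_000_000   # a million zeros: admits a very short description

random.seed(0)                  # pseudo-randomness standing in for quantum noise
noise = bytes(random.getrandbits(8) for _ in range(1_000_000))

print(len(zlib.compress(structured)) < 10_000)   # True: compresses to ~1 KB
print(len(zlib.compress(noise)) > 990_000)       # True: essentially incompressible
```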

Learning theory traditionally aims at understanding the way concepts can be learned by animals (including humans) or machines. In algorithmic learning theory a learner is a machine which typically receives texts from a (formal) language and produces guesses for a grammar that generates the given language, with the expectation that *in the limit* it will eventually settle on a correct guess. Some classes of languages are learnable by machines in this way, while others are not.

Recently it was suggested that one can use this theory in order to approach a fundamental problem in statistics, namely the identification of probability distributions from random data. These days there is an abundance of data collected; probabilistic models of the data can be used to make useful predictions for future outcomes. This basic problem has been approached from all sorts of angles. The new angle that has recently been introduced is to use the methodology of “identification in the limit” from algorithmic learning theory, and require that a learner (machine) eventually produces a description of a probability distribution with respect to which the stream of data that it is reading is (algorithmically) random. One can see this approach as a combination of traditional algorithmic learning theory with algorithmic information theory. What we recently showed (with Fang Nan from Heidelberg and Frank Stephan from Singapore) is that there are many parallels between this theory of learning from random data and the classic theory of learning from structured text. We constructed tools which allow the direct use of the existing classic theory for the development of this new probabilistic learning theory.

Working in the Chinese Academy of Sciences in Beijing is great. The research environment is very motivating and there is a lot of potential for research interactions within the institutions here. It is an extremely dynamic environment and I find this both exciting and challenging.

While it is true that some westerners find it slightly hard to feel at home here, this really depends on one’s personality. Having worked in many places in the past (the U.K., Europe, New Zealand, the U.S.) I think that the differences in the universities are perhaps not as big as most people think. I would say that the defining difference with most western institutions is the frequency of changes (in policies, regulations, employment etc.) which is characteristic of dynamic developing environments. This can mean some lack of security and predictability on the one hand, but also a stable stream of new unexpected opportunities on the other.

Many of the stereotypes regarding the mathematics and the computer science communities are true to some degree. In mathematics, researchers traditionally published less, and did not follow strict deadlines for the completion of their work.

Mathematics conferences are usually rather informal, and publications in the proceedings are usually not as high profile as premium journal publications. On the other hand, in most areas of computer science there are competitive high-profile conferences which drive much of the research activity, with strict deadlines and a rather formal structure. I found that many computer science researchers start a project with a specific conference deadline in mind, rather than a wider goal independent of publication prospects. This culture can be both good and bad. The bad side is that the publication volume is much higher in the computer science community, and most agree that overall quality is lowered by the many submissions that are not ready for public dissemination.

The good side is that deadlines drive research, which seems much more streamlined and structured in computer science than in mathematics.

I personally like and try to assimilate both cultures and I don’t regard myself on one side or the other. I would also go as far as to say that the distinction is rather superficial, and in the end what matters is the quality of research, and the impact that it has (sooner or later).

I’m reluctant to give advice as such, but here are some thoughts.

Working on a PhD requires a lot of focus on a specific area, even a specific problem, and such an exercise seems necessary in order to gain depth and expertise on a topic. After this stage it is a good idea for a young mathematician to branch out to different topics or even areas, learning new things and using the PhD experience as a guide. This is perhaps not as easy as continuing to work in one’s PhD area, but in the long run it pays off; branching out is also much more interesting. Having said this, such choices also depend on the employment opportunities that one has. The general message here is to keep an open mind for new ideas and concepts that might appear foreign at first, but often are deeply connected to things we already know.

**What are you currently researching?
**I am mostly interested in discrete geometry, which includes combinatorial questions about geometric objects. Sometimes I also think about purely combinatorial questions, for example I am currently working on a question about partitioning edge coloured hypergraphs into monochromatic cycles.

**Why did you choose this area of study?
**I like combinatorics and geometry, and this area is a mixture of these two.

**What do you hope to do career-wise, long term?
**I would like to stay in academia and do research.

**Can you provide any advice to prospective students about the most effective way to approach research and keep stress levels down?
**Of course, this varies from individual to individual and from area to area, but some things can be useful in general. Set realistic expectations: you should not anticipate finding results quickly. It is a slow process. For me the recipe is to try to be happy even with small results: don’t let failure disappoint you too greatly. I think it is also good not to separate weekdays and weekends too much; when you have ideas and feel motivated, don’t stop for the weekend, but treat yourself to much-needed rest days later.

**What resources are available at LSE to help young researchers?
**There are several funds at both School and Departmental level. Mathematicians need whiteboards – we’re lucky to have many in our PhD office, plus all the basic provisions we could ever need (stationery, printing, equipment, etc.). Our PhD Office itself is a really good, productive environment to work in, where we can focus solidly on our research but also collaborate and share thoughts. The Department as a whole, alongside the PhD Academy and our Research Manager, assists with the essential practicalities of PhD study. The Department invites key visitors to present at our seminar series. Crucially, we have a fantastic coffee machine in the Department.

**In a few words, what is the best thing about studying at LSE?
**Everyone is very nice; I am a valued member of the Department.

Cathy visited the Department of Mathematics, LSE in July 2017 to present a Public Lecture entitled “Weapons of Math Destruction: how big data increases inequality and threatens democracy”, related to her book of the same name. Whilst visiting, she took the time to talk with Martin Anthony and Andy Lewis-Pye (LSE Maths) about how she came to be an author, her latest book, how to define these ‘weapons’ and what the future holds.

A podcast featuring highlights from Cathy’s Public Lecture is available here. The introduction is provided by Martin Anthony (Head of the Department of Mathematics). The recording ends with a great selection of Q&As from a very enthusiastic, engaged audience.

I’ve just been at a conference called “Kinks 2”, or that might be “knkx 2” (I never saw it written down) *(NB from blog editor: it was KnKx2)*. But look at the list of topics: dynamics, functional equations, infinite combinatorics and probability. How could I resist that?

Of course, with a conference like this, the first question is: is the conference on the union of these topics, or their intersection? The conference logo strongly suggests the latter. But as usual I was ready to plunge in with something connecting with more than one of the topics. I have thought about functional equations, but only in the power series ring (or the ring of species); but it seemed to me that sum-free sets would be the best fit: there is infinite combinatorics and probability; there is no dynamics, but some of the behaviour looks as if there should be!

The organisers were Adam Ostaszewski and Nick Bingham, so the conference was centred on their common interests. I will say a bit about this in due course. Unfortunately, for personal reasons, I was not able to travel south until Monday morning, and even getting up at 6am I didn’t get to the conference room until a little after 2pm. So I missed the opening talk by Nick Bingham (which no doubt set the scene, and according to Charles Goldie was a sermon) and the talk by Jaroslav Smítal; I arrived shortly after the first afternoon lecture, by Vladimir Bogachev, had begun.

I will try to give an overview of what I heard, but will not cover everything at the same level. Of course, a topologist, analyst or probabilist would give a very different account!

Vladimir’s talk was something which tied in to the general theme in a way I didn’t realise until later. He talked about topological vector spaces carrying measures; in such a space, consider the set of vectors along which the measure is differentiable (in one of several senses), one of which is apparently called the Cameron–Martin space. He had something called *D_{C}*, and explained that he originally named it after a Russian mathematician whose name began with S (in transliterated form); he had used the Cyrillic form, never imagining in those days that one day he would be lecturing about it in London!

David Applebaum talked about extending the notion of a Feller–Markov process from compact topological groups to symmetric spaces, and considering five questions that have been studied in the classical case, asking how they extend. He wrote on the board, which didn’t actually slow him down much. David said “I want to prove something”, but Nick, who was chairing, said “No, just tell the story”.

Finally for the day, Christopher Good explained two concepts to us: *shifts of finite type* (this means closed subspaces of the space of sequences over a finite alphabet, closed under the shift map, and defined by finitely many excluded subwords), and *shadowing* (this is relevant if you are iterating some function using the computer; typically the computed value of *f*(*x_{i}*) will not be exact, but will be a point close to it, and the shadowing property guarantees that such an approximate orbit stays close to a genuine orbit of the map).
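To make the first of these concepts concrete, here is a tiny sketch of my own (a toy illustration, not from the talk) of the best-known shift of finite type, the “golden mean shift”: sequences over the alphabet {0,1} with the single excluded subword 11. Counting the admissible finite words of each length recovers the Fibonacci numbers.

```python
from itertools import product

# Golden mean shift: alphabet {0,1}, one excluded subword "11".
EXCLUDED = ["11"]

def admissible(word: str) -> bool:
    """A finite word is admissible iff it avoids every excluded subword."""
    return not any(bad in word for bad in EXCLUDED)

def count_admissible(n: int) -> int:
    """Brute-force count of admissible words of length n."""
    return sum(admissible("".join(w)) for w in product("01", repeat=n))

print([count_admissible(n) for n in range(1, 6)])  # [2, 3, 5, 8, 13]
```

Swapping in a different list of excluded subwords gives any other shift of finite type over this alphabet.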

I was the first speaker next morning. I arrived half an hour early, to find the coffee laid out but nobody else around. Soon Rebecca Lumb came along and logged in to the computer, so I could load my talk. I found that the clicker provided had the feature that the left button advances the slides, so I took it out and put in my own, which works the right way round. The talk went well, and I enjoyed a gasp of surprise from the audience when I displayed my empirical approximation to the density spectrum of a random sum-free set. My last slide, a picture of an apparently non-periodic sum-free set generated by a periodic input, was also admired. It was suggested that a series of such pictures, in striking colours, would be worthy of an art exhibition. The slides are available here.

After a coffee break, Imre Leader spoke about sumsets (so not too far from my talk but not at all the same). As usual, he wrote on the board. The question was: if you colour the natural numbers with finitely many colours, is there an infinite set for which all pairwise sums have the same colour? The answer is yes if you take pairwise sums of distinct elements. (Colour the pair {*i,j*} with the colour of *i*+*j*. By Ramsey’s theorem, there is an infinite set with all pairs of the same colour; this does it!) The first surprise was that if you allow *x*+*x* as a sum as well, then it is impossible; he showed us the nice two-step argument for this. (The first step is simple; you can always colour so that *x* and 2*x* have different colours: take two colours, and give *x* the colour red if the exponent of 2 in the factorisation of *x* is even, blue if it is odd. The second step is a bit more elaborate.) What about larger systems? He showed us why the answer is No in the integers, and No in the rationals, but (surprisingly) Yes in the reals, if you assume the Continuum Hypothesis (and indeed, Yes in a vector space of sufficiently large dimension over the rationals; precisely, dimension beth_{ω}).
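The first step of that argument is easy to play with in code. Here is a small sketch of my own of the colouring described above: colour *x* red when the exponent of 2 in *x* is even and blue when it is odd, so that *x* and 2*x* never share a colour.

```python
def colour(x: int) -> str:
    """Red if the exponent of 2 in the factorisation of x is even, blue if odd."""
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return "red" if v % 2 == 0 else "blue"

# Doubling x increases the exponent of 2 by exactly one, flipping its parity,
# so x and 2x always receive different colours.
print(all(colour(x) != colour(2 * x) for x in range(1, 10_000)))  # True
```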

First after lunch was Dugald Macpherson, who talked about automorphism groups of countable relational structures, especially closed oligomorphic groups (and more specially, automorphism groups of homogeneous structures over finite relational languages). His talk was in three parts, of which he skipped the second for lack of time. The first part was stuff I knew well, the connection between Ramsey structures and topological dynamics (the Kechris–Pestov–Todorcevic theorem and what happened after). The second part would have been about simplicity. The third concerned the existence of “ample generics” and applications of this to many things including the small index property, uncountable cofinality, and the Bergman property, and then sufficient conditions for ample generics (with names like EPPA and APPA).

Eliza Jabłońska talked about “Haar meager sets”, an ideal of subsets of a topological group having many similarities to the Haar null sets in the case of a locally compact group. Some, but not all, of the classic results about measure and category for the real numbers go through to this situation.

Finally, Janusz Brzdęk talked about the generalised Steinhaus property. In loose terms, this says that if *A* is a subset of a topological group which is “large enough and decent”, then *A*−*A* has non-empty interior; more generally, if *A* is as above and *B* is “not small”, then *A*−*B* has non-empty interior. This kind of result has applications in the theory of functional equations, for example showing that if you have a solution of *f*(*x*+*y*) = *f*(*x*)+*f*(*y*) on a large enough and decent subset of *G*, then this solution can be extended to the whole of *G*. There are also applications to “automatic continuity” (but I don’t know what this is). He started off with some very general results which apply in any magma (set with a binary operation) with identity. You have to redefine *A*−*B* in such a case, since inverses don’t exist; it is the set of *z* for which *z*+*B* intersects *A*. He went on to a discussion of microperiodic functions (which have arbitrarily small periods): on the reals, such a function, if continuous, must be constant, and if Lebesgue measurable, must be constant almost everywhere. There are also approximately microperiodic functions, sub-microperiodic functions, and so on.

Then off to a pleasant conference dinner at the local Thai restaurant, where conversation ranged over doing, teaching and understanding mathematics, along with many other topics.

Wednesday was a beautiful day, so I walked in to the LSE, past St Pauls and down Fleet Street.

The first speaker, Harry Miller, was talking to us by Skype from Sarajevo, since his health is not good enough to allow him to make the trip. The technology worked reasonably well, though the sound quality was not great and the handwritten slides were packed with information. He gave us half a dozen different conditions saying that a subset of the unit interval is “large” (not including the well-known ones, measure 1 and residual), and a number of implications between them and applications. One nice observation: are there compact sets *A* and *B* such that *A*+*B* contains an interval but *A*−*B* doesn’t? Such sets had been constructed by “transfinite mumbo-jumbo”, but he showed us a simple direct construction: *A* is the set of numbers in [0,1] whose base-7 “decimal” expansion has only digits 0, 4 and 6, while *B* is the set using digits 0, 2 and 6.
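One can check the flavour of this construction numerically. Below is a small sketch of my own (not from the talk) that truncates the base-7 expansions at *k* digits: every residue mod 7^*k* occurs as a sum of the truncated sets (carries let *A*+*B* fill out an interval), while a difference can never be congruent to 3 mod 7, so *A*−*B* is riddled with gaps.

```python
from itertools import product

K = 3  # truncate the base-7 expansions at K digits

def truncated(digits, k):
    """Integers whose k base-7 digits all lie in the given set."""
    return {sum(d * 7**i for i, d in enumerate(ds)) for ds in product(digits, repeat=k)}

A = truncated((0, 4, 6), K)
B = truncated((0, 2, 6), K)

sums = {(a + b) % 7**K for a in A for b in B}
print(len(sums) == 7**K)  # True: every residue occurs as a sum

diffs = {(a - b) % 7 for a in A for b in B}
print(sorted(diffs))      # [0, 1, 2, 4, 5, 6] -- the residue 3 never occurs
```

The last digit of *a*−*b* is a difference of a digit in {0,4,6} and one in {0,2,6}, and no such difference is 3 or −4, which is why residue 3 mod 7 is unreachable.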

After this, Marta Štefánková talked about hyperspace: this is the space *K*(*X*) of all compact subsets of a compact metric space *X*, with the Hausdorff metric. Given a map *f*, how do the dynamics of *f* on *X* relate to the dynamics of the induced map on *K*(*X*)? She introduced four kinds of “Li–Yorke chaos”: a map can be generically or densely chaotic, or generically or densely epsilon-chaotic. There are a number of implications between the resulting eight situations, but some non-implications as well; almost everything is known if *X* is the unit interval, but in other cases there are still many mysteries.

Adrian Mathias, whom I haven’t seen for donkey’s years, talked about connections between descriptive set theory and dynamics (he said, inspired by a trip to Barcelona, where he found the dynamicists sadly lacking in knowledge of descriptive set theory). The subtitle was “analytic sets under attack”. (The basic idea is that iteration of a map *f* is continued into the transfinite (I missed the explanation of how this is done), and *x* attacks *y* if there is an increasing sequence of ordinals such that the corresponding sequence of iterates of *f* applied to *x* has limit *y*.)

Dona Strauss talked about the space β**N**, the Stone–Čech compactification of the natural numbers (realised as the set of ultrafilters on the natural numbers). It inherits a semigroup structure from **N**, and the interplay of algebra and topology is very interesting. Her main result was that many subsets of β**N** which are very natural from an algebraic point of view are not Borel sets: these include the set of idempotents, the minimal ideal, any principal right ideal, and so on. (The Borel sets form a hierarchy, but any beginners’ text on descriptive set theory tells you not to worry too much: all interesting sets are Borel, and in fact lie very low in the hierarchy.)

Peter Allen took time out from looking after his five-week-old baby to come and tell us about graph limits. It was a remarkable talk; I have heard several presentations of that theory, including one by Dan Kral in Durham, but Peter managed to throw new light on it by taking things in a different order. He also talked about some applications, such as a version of the theory for sparse graphs, and some major results already found using this approach: these include

- a considerable simplification of the Green–Tao theorem that the primes contain arbitrarily long arithmetic progressions; and
- a solution of an old problem: given two bounded sets *A*, *B* in Euclidean space of dimension at least 3, having the same measure, each of which “covers” the other (this means finitely many congruent copies of *A* cover *B* and vice versa), there is a finite partition of *A* into measurable pieces, and a collection of isometries (one for each piece) so that the images of the pieces under the isometries partition *B*.

Finally it was time for Adam’s talk. His title was “Asymptotic group actions and their limits”, but he had the brilliant subtitle for a talk at the London School of Economics and Political Science: “On quantifier easing”. He explained how the notion of regular variation had led him to various functional equations, the simplest of which is additivity, and then he had discovered that these equations were actually the same up to a twist. There was quite a lot of technical stuff, and my concentration was beginning to fade, so I didn’t catch all the details.

That was the end of the conference, and we all went our separate ways.
