When LSE was founded in 1895, Mathematics was not one of the foundation subjects. However, from the very start, Statistics was regarded as an important tool for the social scientist, and A.L. Bowley gave regular lectures on the subject. In 1901 his book *Elements of Statistics* was published. It went through several editions with only minor changes, but in his preface to the 1920 edition Bowley indicated a major change of emphasis. ‘In the first edition an effort was made to obtain the principal results without the use of the Calculus; but as the subject has developed during the past twenty years, it has become necessary to abandon this attempt.’ In fact, Bowley’s concessions to mathematics were rather limited: he described the normal distribution in terms of an integral, and introduced a small amount of differential calculus. However, from this time forward it is clear that Statistics at the LSE was taught in a more rigorous way. The appointment of mathematically trained statisticians such as Roy Allen (1928) and Maurice Kendall (1949) was an important factor.

It is worth mentioning that one of the first benefactors of the LSE was Bertrand Russell, already becoming famous for his work in logic and the philosophy of mathematics. Indeed, he is reported to have given a course of lectures at the School in 1896, but sadly they were on German Social Democracy.

A more important impetus came, in due course, from Economics. By the beginning of the twentieth century the leading economists, such as Alfred Marshall, were accustomed to using mathematics in their work. In a letter to Bowley, Marshall formulated his famous Six Principles of how to use mathematics in economics:

1. Use mathematics as a shorthand language, rather than as an engine of enquiry.
2. Keep to them till you have done.
3. Translate into English.
4. Then illustrate by examples that are important in real life.
5. Burn the mathematics.
6. If you can’t succeed in 4, burn 3. This I do often.

It must be remembered that, to the founders of the LSE, ‘Economics’ was a vague term, covering a broad range of mainly historical enquiries. The prevailing attitude is well-illustrated in a letter from the Director (Hewins) to Sidney Webb, written in 1898: ‘you may be gratified to know … that in Germany you and Mrs Webb are held in the highest estimation of all English writers on Economics. Marshall is nowhere.’ Thus it is no surprise to find little evidence of mathematics being required by LSE economists until the 1920’s. John Hicks, the future Nobel Laureate, came to the LSE as a temporary lecturer in Economics in 1926/7. He had done one year of mathematics at Oxford before switching to PPE, and found that it was ‘sufficient to cope with what anyone (then) used in economics’. But changes were afoot. In 1931 Roy Allen began to lecture on mathematical analysis to LSE students of economics, and his book was published in 1938. He gave a logical development of the subject in the ‘modern’ style, although he referred to Hardy’s *Pure Mathematics* for the foundations of the real number system. In addition, he covered a wide range of economic applications, based on the work of Marshall, Edgeworth, Hicks, F.P. Ramsey, and others.

In the next twenty years the face of economics changed significantly, and when Allen came to revise his book in 1956 he adopted a different approach. His new book was entitled *Mathematical Economics*, and its theme was the exposition of economics in a mathematical way. He incorporated the mathematics as it was needed, and so we find matrices and vectors discussed as a prelude to the theory of games and linear programming.

These developments produced a climate of opinion in the LSE (or at least part of it) sympathetic to the application of mathematics in the social sciences, and the Robbins Report on the expansion of higher education in the UK provided the opportunity for action. The Minutes of the Academic Board held on 26 May 1965 contain the following paragraph:

“Over the past few years there has been a great increase in the use of mathematics related to the School’s subjects and there are more students coming up with Advanced level mathematics combined with Arts and Social Science subjects who want to continue the study of mathematics at the School. It is felt that the School should have a group of pure mathematicians to support and expand the work which is already being done by applied mathematicians in social sciences. There is a growing national need for persons qualified in mathematics with reference to the social sciences. The more traditional form of the mathematics degree which contains a large element of applied physics is not as directly relevant to such occupations as operational research, statistics and econometrics. The new degree which is being proposed would remedy this deficiency, and would enable a mathematical specialist to study his subject in relation to social science disciplines rather than those of the physical sciences.”

In due course it was agreed that this plan should be adopted. Cyril Offord, a classical analyst, was appointed to a new Chair of Mathematics. He moved from Birkbeck College, where he had become frustrated by the constraints imposed by that institution’s commitment to part-time students. At the LSE, Offord was faced with the task of setting up a programme of mathematics teaching under a different set of constraints. The core degree was the BSc (Econ), which had a rather rigid structure: in particular, three first-year courses (Economics, Government, and History) were compulsory. This left little room for laying the foundations of a mathematics degree. After some delicate negotiations, Offord, with the help of Roy Allen, persuaded the LSE to establish a new BSc degree, distinct from the BSc (Econ).

Part one of the new degree consisted of five courses: (1) Economics, (2) Analysis and Set Theory, (3) Algebra and Methods of Analysis, (4) Further Algebra and Theory of Probability, and (5) either Elementary Statistical Theory or Introduction to Logic. There was also a dispensation that ‘in specially approved cases’ students could take Analysis and Set Theory in the first year of the BSc (Econ). Offord wrote to a number of schools telling them about the new degree. He pointed out that ‘applied mathematics’, which was traditionally represented by Mechanics and Theoretical Physics, would in this case be represented by Economics and Statistics.

While the framework was being sorted out, recruitment of staff continued. Offord was joined by Haya Freedman in 1967, and by Richard Hornblower and John Bell in 1968. The last of these has written an anecdotal account of the early years, including the part played by the controversial philosopher Imre Lakatos in his appointment. Bell himself was a colourful character – a mathematical child prodigy who went up to Oxford at the age of 15 and became a logician. After twenty years at the LSE he moved to a Chair in Philosophy at the University of Western Ontario.

By the early 1970’s the mathematics group comprised six people with rather diverse mathematical interests. Another complicating factor was that Offord was due to retire, which he duly did in 1973. His successor was Anatole Beck, a well-known American mathematician with wide interests in classical mathematics. When Beck decided to return to the USA in 1975, the Chair was filled by Ken Binmore, who had joined the staff as a lecturer in 1969. Binmore had made his name as a complex analyst at Imperial College, but he became attracted to the mathematical aspects of game theory, which was to become one of the fundamental paradigms of theoretical economics.

As the years passed, there were minor alterations in the content and structure of the degree. One course was specifically devoted to ‘Mathematical Methods’, and this attracted many students from other disciplines. However, the number of students studying serious mathematics remained small, even after Mathematics and Economics became a ‘special subject’ in the BSc (Econ) degree. The lack of progress was due mainly to the LSE’s traditional approach to internal organisation. This is exemplified by John Bell’s letter of appointment (1968), which did not mention a department, or give any hint as to his duties or to whom he was responsible. Gradually the need for some structure became clear, and the mathematicians became part of a Department of Statistical and Mathematical Sciences (SAMS). In due course SAMS had five ‘sub-departments’, each led by an unfortunate individual with the title of ‘sub-convener’. The BSc (Econ) still played a major part in the life of the School, although several departments had established course-unit degrees, similar to the BSc in Mathematical Sciences.

By 1986-7 changes were afoot. In twenty years the mathematics group had been granted only one new half-post in addition to the original complement of six. Binmore was increasingly moving towards Economics, where he had established an international reputation for his work on Game Theory, and he wished to formalise that position. John Bell was leaning more towards Philosophy and Haya Freedman was close to retirement. The outcome might have been disastrous for mathematics, but fortunately a Review Group in 1987 recommended that the core group of mathematicians should be maintained. As a result the School decided to appoint a new professor of mathematics, and made a commitment to replace Bell and Freedman. The first step was implemented by the appointment in 1988 of Norman Biggs. In accordance with the School’s traditions, his qualifications did not include any specific knowledge of economics or social science.


**(1) Can you tell us a little about yourself? **

After I finished my Diploma in Würzburg (my hometown in Bavaria), I moved to Berlin for my PhD. I learned a lot about algorithmic and discrete mathematics there. Additionally, I enjoyed living in a big city. After one year as a postdoc in Lausanne with lakes and mountains and less big city life, I am looking forward to my time in London.

**(2) In layman’s terms, what is your main field of research/study?**

During my PhD, I learned ‘tropical mathematics’. You replace the usual addition by maximum and multiplication by the usual addition. This makes calculations easier but can actually reveal the deeper structure of algorithms by just replacing the operations and seeing how the algorithm behaves then.
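As a rough illustration of that substitution (a toy sketch added here, not taken from the interview), max-plus arithmetic looks like this in Python; the tropical matrix ‘product’ then computes heaviest paths, an example of the hidden structure such a substitution can reveal.

```python
# Max-plus ("tropical") arithmetic: "addition" is max, "multiplication" is ordinary addition.
NEG_INF = float("-inf")  # the tropical "zero" (neutral element for max)

def trop_mul(x, y):
    return x + y

def trop_mat_mul(A, B):
    """Matrix product over the max-plus semiring."""
    n, m, k = len(A), len(B), len(B[0])
    return [[max(trop_mul(A[i][j], B[j][l]) for j in range(m)) for l in range(k)]
            for i in range(n)]

# W[i][j] = weight of the edge i -> j (or -inf if there is no edge).
W = [
    [NEG_INF, 3, NEG_INF],
    [NEG_INF, NEG_INF, 4],
    [NEG_INF, NEG_INF, NEG_INF],
]

# The tropical square of W gives, in entry (i, j), the largest total weight
# of a path from i to j that uses exactly two edges.
W2 = trop_mat_mul(W, W)
print(W2[0][2])  # 7 = 3 + 4, the heaviest two-edge path from node 0 to node 2
```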

That approach is particularly promising for questions about different kinds of complexity. The buzzwords are ‘strongly’, ‘weakly’ and ‘pseudopolynomial’. This just distinguishes how much the run-time of the algorithm really depends on the size of the input.

There are several interesting mathematical objects also flying around these questions (like polytopes, matroids, matchings) which I study.

**(3) How did you first become interested in this area?**

After my Diploma thesis, I wanted to learn something new and exotic. Therefore, I started a PhD on questions from tropical geometry, which involves a lot of work on polytopes.

During my PhD, I was a teaching assistant for an introductory course on algorithmic questions in mathematics. Before that, I was more into pure maths but I really liked the algorithmic way of thinking. This motivated me to apply the tropical methods to get a better understanding of the complexity of problems and algorithms.

**(4) What other interests do you have? How do you ‘switch off’ from mathematics?**

While I used to play a lot of music myself, I am mainly a consumer now. Music is great for heating and cooling my mind. Also I like to play and watch basketball. Furthermore, I am looking forward to exploring the great variety of food and culture in London!

**(5) Best part of LSE so far?**

It is great how helpful and welcoming everybody is! Coming from abroad, I had to face several logistical problems but I always got support. I also enjoyed several fruitful mathematics discussions.

I am sure that I will see many other nice parts in the time to come.

If you favour a second referendum on Brexit (a prospect that is now, February 2019, receding), you should think not only of what you should ask the people, but how you would reconcile their choices. This is a central question of **Mathematical Social Choice**, with attempts to answer it dating back to the Middle Ages (discussed in the ‘History’ section later in this post).

The question becomes interesting when there are more than two choices on the ballot paper. Suppose the choices are:

- D = leave the EU with a negotiated Deal (also called “soft Brexit”),
- N = leave the EU with No Deal (a “hard Brexit”), or
- R = Remain in the EU.

Every voter marks a first and a second choice, which represent their most and second-most preferred outcomes. An example of a ballot is here:

On this ballot, the voter has expressed their choice as 1. R (Remain) and 2. D (Deal) with No Deal as the implicit third choice.

Assume we have 9 voters (or equal-sized voting groups) who have the following preferences:

Here, the first 4 columns are Remain voters whose preference is 1. Remain, 2. Deal, 3. No Deal. The next 3 columns are “Hard Brexiters” whose preference is exactly the reverse. The last 2 columns are voters whose first preference is Deal, with one of them having Remain as their second choice, the other No Deal as second choice.

These preferences and their distribution are not unrealistic.

A *voting rule* now tells us how to distill “the will of the people” from these preferences. But which rule should we choose? There are several contenders for such a rule.

The *plurality rule* (“first past the post”) declares as winner the option that has received the most first-choice votes (so one does not even need a second choice and the ballot paper is simpler). In parliamentary elections in the UK, the MP representing a constituency is chosen in this way. Here the most first-choice votes (4 out of 9) are for Remain, but this is not a majority of all votes – 5 out of 9 would rather leave the EU, with or without a deal.

Under the *instant-runoff rule* (the “alternative vote”), the option that gets the fewest first-choice votes is discarded, and the second preference of the voters who made that choice is counted instead (as if they were asked to vote again in a “runoff”, assuming that the others stay with their first choice). Here, these are the voters who chose “D”, and their votes are split, one of them for “R” and the second for “N”. The total is now 5 for Remain and 4 for No Deal, and Remain is the winner as the declared “will of the people” according to this rule.
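For the concrete 9-voter profile above, both counts can be reproduced in a few lines of Python (a small sketch added for illustration, not part of the original post):

```python
from collections import Counter

# Each ballot lists the options in order of preference (the 9-voter profile above).
ballots = 4 * [["R", "D", "N"]] + 3 * [["N", "D", "R"]] + [["D", "R", "N"], ["D", "N", "R"]]

first = Counter(b[0] for b in ballots)
print(first)                       # plurality: R=4, N=3, D=2 -> R leads without a majority

loser = min(first, key=first.get)  # D has the fewest first-choice votes and is discarded
runoff = Counter(b[0] if b[0] != loser else b[1] for b in ballots)
print(runoff)                      # runoff: R=5, N=4 -> Remain wins under this rule
```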

The rule seems clear and fair enough, but it has its problems. The main problem is called *strategic voting*, which means that voters have an incentive to mis-state their true preference. Namely, the minority (of people who chose “D”) now have in effect the casting choice between two polarised outcomes. If the above preferences were known (supported by opinion polls, say), then an “N” voter as above would have an incentive to mis-state their preference instead as 1=D 2=N 3=R (that is, swap their first and second choices), to let N become the decisive minority with 2 out of 9, after D which now has 3 out of 9 first choices. The other 2 N voters would then both count for D, creating the final vote of 4 for R versus 5 for D, meaning to leave the EU with a deal. While it remains doubtful that voters are that strategic, it would, on the other hand, create an incentive to be a bit more moderate.

However, not all voting rules favour R for the above voter preferences.

The *Condorcet rule* looks at the stated order of preferences and compares any two outcomes with each other. That is, the preferences of the voters are now used to answer a question such as “do you prefer D over N?”

Here we get the following answers:

- D beats R by 5 against 4 votes
- D beats N by 6 against 3 votes
- R beats N by 5 against 4 votes

This gives the following clear collective preference: a strict majority prefers D over R, and another strict majority prefers R over N, and another strict majority prefers D over N (which does *not* follow from the first two, see below). Here the “will of the people” is D first, R second, N third. Sounds great, but D was a first choice for only 2 out of 9 voters. Does this rule head for mediocre choices? Or for useful compromise?
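The pairwise comparisons above are easy to check mechanically; here is a small sketch (again an added illustration, not from the original post):

```python
from itertools import combinations

ballots = 4 * [["R", "D", "N"]] + 3 * [["N", "D", "R"]] + [["D", "R", "N"], ["D", "N", "R"]]

for x, y in combinations(["D", "R", "N"], 2):
    x_over_y = sum(1 for b in ballots if b.index(x) < b.index(y))
    print(f"{x} vs {y}: {x_over_y} against {len(ballots) - x_over_y}")
# D vs R: 5 against 4, D vs N: 6 against 3, R vs N: 5 against 4 -> collective order D, R, N
```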

The Condorcet rule has a drawback that none of the other rules share: it may not produce a clear winner but instead create *cycles*, as the following modified voter profile shows (three voters suffice):

Here, two thirds of the voters prefer R over D, two thirds prefer D over N, and two thirds prefer N over R. Such voter preferences may not be realistic, but who knows? The fact that such cycles are theoretically unavoidable for any reasonable voting system is known as “Arrow’s Impossibility Theorem”, after the economist Kenneth Arrow (1921-2017).

Yet another voting system, known as the *Borda count*, tries to avoid cycles by giving points for first and second choice, for example 2 points for first choice, 1 point for second choice, 0 for third choice (instead of points 2,1,0 we could also give 3,2,1 with the same effect, which is just an extra point everywhere). The option with the highest total number of points wins.

In our 9-voter example, this gives points

- R = 2+2+2+2+0+0+0+1+0 = 9
- N = 0+0+0+0+2+2+2+0+1 = 7
- D = 1+1+1+1+1+1+1+2+2 = 11

which again makes D the winner. But hey, what if someone does not put an “X” for their second choice at all? How should that shift the points? You get only one point for your first choice, and zero for the others? Or two points for your first choice, and zero for the others (which would surely let the Remain voters above drop their points for “D” in second place)?

Or we could, like in football, give 3 points for first choice, 1 point for second choice, resulting in

- R = 3+3+3+3+0+0+0+1+0 = 13
- N = 0+0+0+0+3+3+3+0+1 = 10
- D = 1+1+1+1+1+1+1+3+3 = 13

with R and D tied. But why this rule?
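Both point systems can be checked with the same kind of sketch (an added illustration, not part of the original post):

```python
ballots = 4 * [["R", "D", "N"]] + 3 * [["N", "D", "R"]] + [["D", "R", "N"], ["D", "N", "R"]]

def scores(points):
    """points[i] = points awarded for being ranked in position i."""
    totals = {"R": 0, "N": 0, "D": 0}
    for b in ballots:
        for position, option in enumerate(b):
            totals[option] += points[position]
    return totals

print(scores([2, 1, 0]))  # {'R': 9, 'N': 7, 'D': 11} -> D wins
print(scores([3, 1, 0]))  # {'R': 13, 'N': 10, 'D': 13} -> R and D tied
```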

The Condorcet method is named after the Marquis de Condorcet (1743-1794), who died in a prison cell, poisoned, during the French Revolution. However, it was already invented in 1299 by the Majorcan polymath Ramon Llull (ca. 1232-1316). To remember that name (which has 4 letters “L” in it and one vowel) think of the binary number 11011. Llull indeed invented the binary system and is considered by some as the inventor of information theory. He was so enthralled by it that he thought St Mary should be added to the Holy Trinity to make their number a power of two. Heretic stuff that did not make him popular with the church authorities. With the discovery in 2001 of his lost manuscripts, Ars notandi, Ars eleccionis, and Alia ars eleccionis, Llull is given credit for discovering the Borda count (re-discovered several times in later centuries) and the Condorcet criterion.

A very accessible book for general readers on these problems is Szpiro’s *Numbers Rule: The Vexing Mathematics of Democracy, from Plato to the Present*.

One conclusion is that you probably shouldn’t put more than two options on a ballot paper, or maybe not hold a referendum in the first place. On what voting system should you agree even to *determine* the “will of the people”, when we have enough trouble determining it when they made one out of two choices? At any rate, you will appreciate “Strong Arrow’s Theorem” from the geeky cartoon XKCD (one of my favourites):

Can we ever agree on anything? **Ask the mathematician!**

I have been at LSE since September 2006 (wow, that went quickly!). Before coming to LSE, I held postdoctoral fellowships at the University of Pennsylvania, the University of Texas at Austin, and Simon Fraser University. In the distant past, I did my PhD at Cornell University.

**(2) In layman’s terms, what are your main fields of research/interest?**

My main research interests are in algorithms and the theory of computation in general. More specifically, most of my research relates to sublinear algorithms, which is a very active area of research exploring models of computation and problems that aim to capture the limitations and difficulties when the input to a computational problem is very large (the buzzword nowadays is ‘Big Data’).

The common theme of the problems in this area is that the algorithms are very limited in resources, such as time and memory (that is, limited by a sublinear function in the size of the input data), so much so that the algorithm cannot even read or remember the whole input. We explore the limits of what can be computed under such severe restrictions. You can find several surveys of this research area here.
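As a toy illustration of the flavour of sublinear computation (an example added here, not taken from the interview): a statistic of a huge input can often be estimated by reading only a small random sample of it, in time that does not grow with the input size.

```python
# Toy example: estimate the fraction of even entries in a huge list by sampling
# a fixed number of random positions, so the work is independent of the input length.
import random

data = [random.randrange(1_000_000) for _ in range(10_000_000)]  # stand-in for "Big Data"

def estimate_even_fraction(xs, samples=1000):
    hits = sum(xs[random.randrange(len(xs))] % 2 == 0 for _ in range(samples))
    return hits / samples

print(estimate_even_fraction(data))  # close to 0.5, after reading only 1,000 of 10,000,000 entries
```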

**(3) How did you first become interested in this area? **

In the first year of my PhD study, I was introduced to this research area through the work of a faculty member at Cornell, Ronitt Rubinfeld, who was a pioneer in the field and who later became my PhD supervisor. I found these research problems interesting because they required new approaches compared to more conventional models. Those were the early days of prolific research in this field. It was exciting to witness and to contribute to the rapid developments of that era.

**(4) What are your favourite courses to teach, or favourite part of teaching those courses?**

Not surprisingly, I especially enjoy teaching courses in the topics related to my research interests. I find that one of the best parts of teaching is sharing your enthusiasm about the subject with others. Hence, it is even better when you have lots of enthusiasm for the subject to start with.

**(5) What is the best part of being at LSE? **

I think what makes being part of LSE most enjoyable for me is the Mathematics Department. It provides a very collegial and friendly environment. The outstanding and dedicated staff – both academic and support – make it an inspiring place to work. It should also be said that the excellent quality of its students and its location in London are significant fringe benefits.


**(1) How long have you been here at LSE? **

I started at the LSE in September 2016, hence have been here a bit more than two years. Before this, I was a Senior Lecturer at UCL for two years and a Senior Research Fellow at Oxford for three years. I did my PhD at Columbia University in its Statistics department.

**(2) In layman’s terms, what are your main fields of research/interest?**

My main interests are questions arising at the intersection of finance, probability, and statistics. For example, I work in Stochastic Portfolio Theory, a framework to analyse the behaviour of portfolios and the structure of large equity markets. What I like about this field is that it is empirically driven, mathematically interesting, and has insightful and practical results for long-term investors. An example of the research in this field is this non-mathematical paper. This article illustrates how and why certain naive trading strategies (such as the monkey portfolio) outperform the market in the long run. Together with my PhD student Weiguan Wang, I have also started to study the applications of machine learning techniques to finance.

**(3) How did you first become interested in this area?**

During my doctoral studies, my adviser, Ioannis Karatzas, invited me to this fantastic series of informal research meetings with other university researchers and research-active industry practitioners. Among the participants was also Bob Fernholz, who was a former academic, then founded his own asset management company, and formalised Stochastic Portfolio Theory. These meetings were inspiring – lots of research ideas were formulated there. Since then I have been working on and off on Stochastic Portfolio Theory. Sometimes more, sometimes less; oscillating between more applied and more theoretical questions.

**(4) What are your favourite courses to teach, or favourite part of teaching those courses? **

I’ve taught a couple of courses at the LSE, at both the undergraduate and the graduate level. It’s great to have students with diverse backgrounds and experiences, as is common in LSE mathematics courses. I especially enjoyed developing a new summer school course from scratch last summer, with my colleague Luitgard Veraart. This course connects theory with practical implementations, and I enjoy teaching these links a lot. The course was part of a longer list of Financial Mathematics courses that we introduced last year to the summer school. They all turned out to be extremely successful and got excellent student feedback – so we decided to continue this coming summer.

**(5) What is the best part of being at LSE? **

LSE has one of the largest research groups in Financial Mathematics worldwide (it might even be the largest). There are also plenty of possibilities to link with the many London-based practitioners who work in banks, in hedge funds, and in Fintech. This allows for a very active research programme with lots of visitors, a continuous exchange and flow of ideas, an inspiring research atmosphere, and lots of joint projects.

At the same time, LSE actually feels like a small university – it’s easy to connect to academics in other disciplines, all the research seminars are close by, and people are very friendly.


**(1) How long have you been here at LSE?**

I came in the summer of 2013, and have been loving it here ever since! Prior to coming to the LSE I held a Royal Society University Research Fellowship at the University of Leeds, and before that a Marie Curie Fellowship at the University of Siena.

**(2) In layman’s terms, what are your main fields of research/interest?**

I work in a number of areas. A lot of my research has been in Computability Theory, which is normally classified as part of Mathematical Logic. Roughly speaking, researchers in Computability Theory are concerned with the limits of computation — what computers can and cannot achieve, what the “incomputable universe” looks like. Then I’ve also been doing lots of work in Complex Systems, which has manifested in research in Statistical Mechanics, Theoretical Population Biology and Network Science.

Most recently my work in this area has led to investigations in cryptocurrencies. That’s an interesting area to work in because it’s so new, and while there are lots of excellent academics getting into it, most people developing the ideas at this point are in industry.

**(3) How did you first become interested in this area?**

What drew me into Computability Theory in the first place was Gödel’s Theorems. There just seemed to be something so profound about a mathematical theorem which speaks to fundamental limits on what one can expect to achieve. I found that idea rather beautiful. Later on I became drawn to work also on matters with immediate societal impact. That’s partly what attracts me to the recent work in cryptocurrencies.

**(4) What are your favourite courses to teach, or favourite part of teaching those courses?**

Ha… I’m teaching two courses at the moment, and if I pick a favourite then it will look bad for the students of the other course! I’m teaching Coding and Cryptography, and also the Mathematics of Networks, a relatively new course which I introduced a couple of years ago. The best part of teaching has to be when students get excited by the subject matter and want to pursue ideas further.

**(5) What is the best part of being at LSE?**

One of the nice things about the maths department here from my perspective is the strong algorithmic flavour to a lot of the research that people do. I’ve always existed somewhere on the boundary between maths and computer science, and this is a nice department in that sense — there are quite a few of us who could be in computer science departments. The fact that it’s also quite a small department probably helps it remain friendly, and I’ve always been impressed with how democratically things are run around here. Of course being in London is another major motivation for being here.

The London Mathematical Society organises an undergraduate summer school every year: a two-week course held at a UK university. The summer school consists of short lecture courses, which include problem-solving sessions, as well as colloquium-style talks.

This year, the LMS Undergraduate Summer School was hosted by the University of Glasgow, and the two of us were fortunate to have had the opportunity to attend it. The programme consisted of six short lecture courses (three lectures and two exercise sessions each) and eight colloquia. We have selected four of the lecture courses to discuss below.

The first lecture series, on the Compactness Theorem, was given by Professor Mike Prest from the University of Manchester. After providing us with an introduction to ultrafilters and ultraproducts, he used Zorn’s lemma to prove the existence of a maximal ultrafilter containing any given filter. He defined (diagonal) embeddings, structures and definable sets, introduced Łoś’s theorem, and we proved the Compactness Theorem. Finally, a sketch proof of Łoś’s theorem followed. What we found particularly interesting about these lectures was how abstract algebra was used to prove such a fundamental result in predicate logic. Since neither one of us had ever taken such advanced courses in either field, we were fascinated by the challenging material and the highly demanding problem sets.

This lecture course focused on a question in combinatorial group and semigroup theory called the **word problem**.

Here we look at the word problem in the specific context of rewriting systems only. Consider a set of letters A={*a*,*b*}. Then the words generated by this set are all finite sequences of letters in this set, such as 1 (the empty word, which contains no letters), *a*, *b*, *aa*, *aba*, *bbbaaaaabb*, etc. To form a rewriting system (A,R), the set of letters is paired with a set of rewriting rules (call this R) that allow us to replace a word with another in the forward direction. For example, the rewriting rule *aa*→*b* allows us to state that *baab*→*bbb*, *bbaa*→*bbb* and *abaaaab*→*abaabb*→*abbbb* (we simply replaced *aa* with *b*). In this case, the words *baab* and *bbaa* **represent the same element** in this rewriting system since they can both be rewritten to the common word *bbb* using the rewriting rules. Our notation for this rewriting system would then be (A,R)=({*a*,*b*},{*aa*→*b*}).

A rewriting system has a **decidable word problem** if there is an algorithm which for any two words in the system decides whether or not they represent the same element in the rewriting system. In other words, is it possible to write a set of instructions such that for any two words, we can determine whether these two words can be rewritten into each other or some common word?

A rewriting system is **noetherian** if there is no infinite descending chain with the rewriting rules defined as reductions. The rewriting system ({*a*,*b*}, {*a*→*b*}) is noetherian because for any word, taking the reduction *a*→*b*, we can only replace *a* with *b* (remember that rewriting rules only apply in the forward direction in this context), so the number of *a*’s keeps decreasing (bounded below by 0), and there cannot be an infinite descending chain (i.e. at some point we cannot reduce further). For example, *aabba*→*babba*→*babbb*→*bbbbb*. The rewriting system ({*a*,*b*},{*a*→*b*,*b*→*a*}) is not noetherian since *a*→*b*→*a*→*b*→*a*→*b*→… is an infinite descending chain (we can just keep alternating between the two rules forever). Another example of a non-noetherian rewriting system is ({*a*},{*a*→*aa*}).

A rewriting system is **confluent** if whenever a word has two or more options for reduction (by applying two different rules or applying the same rule at two different positions), there is a word that both products can eventually be reduced to. The rewriting system ({*a*,*b*}, {*a*→*b*}) is also confluent because with any word, our only option is to replace *a* with *b*, and the order in which we do this does not matter. For example, *aa*→*ab*→*bb* and also *aa*→*ba*→*bb* (note that the first reduction is done differently, but we reach *bb* in both cases). The rewriting system ({*a*,*b*,*c*},{*a*→*b*,*a*→*c*}) is not confluent since *a*→*b* and *a*→*c* (we have two options to reduce the word *a*), but we cannot apply any sequence of rules so that *b* and *c* are reduced to a common word (since there are no rules that act on *b* or *c*).

A theorem states that **if A and R are both finite, and (A,R) is both noetherian and confluent, then (A,R) has a decidable word problem**. Given any two words in the rewriting system, a simple word problem algorithm would be to reduce both of them until they can no longer be reduced (we know that this can be done, since by the noetherian property there are no infinite descending chains), and then check whether these two irreducible words are the same (a word is irreducible if no rewriting rule can be applied to it).
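A minimal sketch of this algorithm (our own illustration, using the system ({*a*,*b*},{*aa*→*b*}) from above) might look as follows; it relies on the noetherian property for termination and on confluence for the irreducible form being unique:

```python
# Word-problem algorithm for a finite, noetherian and confluent rewriting system:
# reduce both words to their irreducible forms and compare.
RULES = [("aa", "b")]  # the rewriting system ({a, b}, {aa -> b})

def reduce_word(word):
    """Apply rewriting rules (at the leftmost match) until no rule applies.
    Termination is guaranteed by the noetherian property."""
    changed = True
    while changed:
        changed = False
        for left, right in RULES:
            if left in word:
                word = word.replace(left, right, 1)
                changed = True
                break
    return word

def same_element(u, v):
    # By confluence, the irreducible form does not depend on the order of reductions.
    return reduce_word(u) == reduce_word(v)

print(reduce_word("abaaaab"))        # 'abbbb', as in the example above
print(same_element("baab", "bbaa"))  # True: both reduce to 'bbb'
```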

This is merely a small part of the word problem, which can be studied with various other approaches. In the short lecture course, we also looked at approaches using Cayley graphs (shown in the picture below) as well as one-relator groups.

Gaspard Monge’s problem is the problem of moving a pile of sand to form an embankment. Which grain in the pile shall we move to which part of the embankment for maximum efficiency? This depends on how we measure efficiency. We consider the cost function c(s), where s is the displacement when moving each particle. If the cost function is convex (for example, s^2), it makes sense to translate (minimising the sum of squares of the distances moved for each particle); if the cost function is concave (for example, |s|^0.5), it makes sense to flip (minimising the sum of square roots of the distances moved for each particle).
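A tiny worked example (added here for illustration, not from the lecture): take two grains at positions 0 and 1 that must end up at positions 2 and 3.

```python
# Two grains at positions 0 and 1 must be moved to target positions 2 and 3.
grains = [0, 1]

def total_cost(targets, c):
    """Cost of sending grain i to targets[i], under displacement cost c."""
    return sum(c(abs(t - g)) for g, t in zip(grains, targets))

translate = [2, 3]   # keep the order: 0 -> 2, 1 -> 3
flip      = [3, 2]   # reverse the order: 0 -> 3, 1 -> 2

convex  = lambda s: s ** 2      # convex cost
concave = lambda s: s ** 0.5    # concave cost

print(total_cost(translate, convex), total_cost(flip, convex))    # 8 vs 10: translating wins
print(total_cost(translate, concave), total_cost(flip, concave))  # ~2.83 vs ~2.73: flipping wins
```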

This led us to draw parallels with what we learnt in microeconomics, in the sense that our optimal household bundle depends on the characteristics of our utility function, and convex and concave utility functions lead to ‘opposite’ optimal bundles.

The lecture series presented a variety of theorems and lemmas aiming to help us understand the “local-global” question: if f has a root in R and in Q_p for all primes p, does f then have a root in Q? These lectures introduced us to beautiful ways in which analysis and abstract algebra are used to tackle a variety of number-theoretic problems. The material in this lecture series was difficult (almost driving one of the authors to give up on math completely), and we discuss below one of the more interesting results (using field extensions).

For the field Q and a prime p, we defined a nonarchimedean value | |_p. Afterwards we proved the following two theorems:

Now, recall that a sequence (x_n) is **Cauchy** if for every ε > 0 there is an N such that |x_m − x_n| < ε whenever m, n ≥ N, and that a metric space is **complete** if every Cauchy sequence converges.
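As a quick numerical illustration (added here; we take the value to be the usual p-adic one, |x|_p = p^(−k) where p^k is the largest power of p dividing x), sequences like p, p², p³, … are Cauchy and tend to 0 under | |_p, quite unlike under the ordinary absolute value:

```python
# The p-adic absolute value of a non-zero integer: |x|_p = p**(-k),
# where p**k is the largest power of p dividing x.
def padic_abs(x, p):
    if x == 0:
        return 0.0
    k = 0
    while x % p == 0:
        x //= p
        k += 1
    return p ** (-k)

print(padic_abs(12, 2))                          # 0.25, since 12 = 2**2 * 3
print([padic_abs(2 ** n, 2) for n in range(6)])  # 1, 0.5, 0.25, ... : 2**n -> 0 under | |_2
```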

Also, we say that a subset A of a topological space X is **dense** if every point *x* in *X* either belongs to *A* or is a limit point of A.

(*) The above theorems allowed us to extend any field Q to a field Q_p, such that together with a nonarchimedean valuation | |_p, Q_p contains Q and:

- Q_p is complete; and
- Q is dense in Q_p.

Now, | |_p in Q_p satisfies the first two theorems, and together with (*) we get the following mind-blowing result:

To conclude, don’t tell the first year but the p-adic world is actually much nicer than the real one!

The University of Glasgow has a beautiful campus set in a historic town. We thoroughly enjoyed exploring the university and other sights in Glasgow. The LMS Summer School exposed us to exciting areas of mathematics which we would not normally learn in school, and motivated us to continue learning and deepening our mathematical knowledge.

Anyone who wants to explore the material further can check out the programme web page or email the authors at x.dimitrakopoulou@lse.ac.uk or j.tan18@lse.ac.uk.


**(1) Hi Tom. Perhaps you could start by telling us what you’ve been up to since leaving the LSE? **

I left the LSE in 2016 to take up a role as Assistant Professor of Management Science and Information Systems at Rutgers University, New Jersey. It has taken a bit of time to get used to the different way things are done in the US, especially with regard to teaching, but I’m happy to say that I’m settling in well and enjoying my new life across the pond. I’m also pleased to continue being a part of the LSE Department of Mathematics as a Visiting Fellow, and have been continuing my LSE collaborations with Katerina Papadaki, László Végh, Bernhard von Stengel and Steve Alpern.

**(2) It’s interesting you highlight differences in the way university life works in the US. Could you elaborate on that? **

In many ways things are a lot more informal in the US. In the UK, course syllabuses must be agreed months in advance and exams are double and triple checked before being printed in a standardised font. In the US, several people may teach the same course simultaneously with their own syllabus and their own grading criteria. Exams can be written the day before they are taken. One senior professor told me he used to make up exams on the spot and write the questions on the blackboard. In some ways it makes the lecturer’s job easier, but it also puts more responsibility on them to be fair to the students and give them a good learning experience.

**(3) ..which system do you think is most effective? Should we be learning lessons here in the UK? **

That’s a difficult question. I think the difference might be partly down to culture. In the US, it sometimes seems like every lecturer is trying to “sell” their course in the free market of higher education credits. If a lecturer is unsuccessful and unappealing to students, their course will fail, so perhaps everything works itself out. That might work in the American “everyone for themselves” culture, but I’m not sure it’d work so well in the UK.

**(4) Presumably Management Science also presents quite a different work environment to Mathematics? **

Yes, though I would say that my department is pretty mathematical on the scale of business school departments. The faculty here have a very broad range of interests, from cryptography to stochastic optimization to Boolean functions. The department’s origins were in RUTCOR, which was an operations research interest group established at Rutgers in the 1980’s, with members from departments all across the university, including maths, statistics, engineering and computer science. But yes, I certainly agree that the students (particularly the undergraduates) are overwhelmingly here to get a business degree so that they can get out there and be successful. I suppose that’s not too much different to the LSE!

**(5) .. and mathematically what have you been up to? Could you outline the project you are most excited about at the moment in simple terms?**

I’m most excited about my work on ordering problems, where several tasks have to be completed in some order. For example, some hiding places must be searched one by one with the aim of finding some hidden bad guys in the least possible time, or a machine has to process a set of jobs in some order to minimize the average time to finish them. Ordering problems can be hard to solve because the number of possible orderings of a set increases exponentially with its size. For example, the number of ways to order a set of size 60 is roughly equal to the number of atoms in the universe. But by using set functions to write these types of problems in a very general form, it’s possible to understand the structure of them better and find approximate solutions, or precise ones in special cases.
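That back-of-the-envelope comparison is easy to check (the ~10^80 figure for atoms in the observable universe is a commonly quoted estimate):

```python
import math

print(math.factorial(60))            # roughly 8.3 * 10**81 possible orderings of 60 tasks
print(math.factorial(60) >= 10**80)  # True: comparable to common estimates of the number of atoms
```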

**(6) How’s New Jersey? Is this your first time living in the USA? How are you settling into the culture? **

Yes, this is my first time living in the US. New Jersey has some beautiful parts, and I like living in Newark. It’s a city that was once an industrial centre of the east coast (the intersection of Broad Street and Market Street, a couple of blocks from where I live, was once said to be the busiest intersection in the country). But Newark suffered a great decline over the second half of the twentieth century, and it’s only recently that the city seems to be turning a corner and making a resurgence. Given that it’s only 20 minutes by train from Manhattan, I can see it becoming a popular commuter hub sometime in the near future!

Thank you! I’m happy to be here. I work on packing and covering problems. Given a finite family of finite sets, what’s the minimum number of sets whose union is everything? What’s the maximum number of elements that intersect every set at most once? What’s the maximum number of pairwise disjoint sets? What’s the minimum number of elements needed to intersect every set at least once?

Two basic integer programs, called set packing and set covering – and their duals – model these problems. These integer programs turn out to be NP-hard. Their natural linear relaxations, however, are polynomially solvable. So the question becomes: when are the linear relaxations just as strong as the integer programs?
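As a concrete illustration (in notation added here, not the interviewee’s), the ‘minimum number of sets whose union is everything’ question over a family S of subsets of a ground set E reads:

```latex
% Set covering as an integer program, and its linear relaxation
% (S = family of sets, E = ground set; notation added for illustration).
\begin{align*}
\text{(IP)}\qquad & \min \sum_{S \in \mathcal{S}} x_S
  \quad \text{s.t.} \quad \sum_{S \,\ni\, e} x_S \ge 1 \ \text{ for all } e \in E,
  \qquad x_S \in \{0,1\}, \\
\text{(LP)}\qquad & \min \sum_{S \in \mathcal{S}} x_S
  \quad \text{s.t.} \quad \sum_{S \,\ni\, e} x_S \ge 1 \ \text{ for all } e \in E,
  \qquad 0 \le x_S \le 1 .
\end{align*}
```

Roughly speaking, the relaxation is “just as strong” when (LP) already attains an integral optimum for every non-negative cost function.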

This concept leads to perfect graphs for one of the dual pair of linear programs, and to ideal and Mengerian clutters for the other one. I study the latter.

Packing and covering problems are incredibly simple to state. And it was this simplicity that pulled me in.

I spent a summer during my undergrad years at the University of Waterloo with Bertrand Guenin, who later became my Master’s and PhD adviser. We worked on packing odd T-joins in graphs. Our project was successful. But it turned out to be just the tip of the iceberg.

Over the years I moved on to deeper problems, leading to the regime of ideal and Mengerian clutters, only one instance of which corresponded to the odd T-joins problem.

Good question! The project I have been working on for the past few years is very much on the theoretical side, though the theory of ideal and Mengerian clutters is partly motivated by its applications to computational complexity. So currently it is mainly the theory that’s been keeping me busy. Packing and covering problems are also among the most basic integer programs for computational optimizers, and I would like to study them from that perspective, too.

What drew me to London, first and foremost, are the research groups at LSE Maths and how strong they are. London is a big city, and having lived in Tehran and Toronto, I do like life in big cities and the challenges it entails. I’ve visited London many times before, so I’ve explored a fair bit already. My favourite places so far are the cafes, the theatres, and the British Museum.

Reading, politics, playing tennis, and good coffee!

The central solution concept in game theory is Nash equilibrium, which is required to provide a precise, complete prediction about the players’ behaviour in a game. Polyequilibrium legitimises looser predictions, or polyequilibrium results, such as “the equilibrium price is higher than five” or “the outcome is socially efficient”. Technically, a polyequilibrium is a collection of strategy profiles that share the property in question. Put differently, strategy profiles that do not have the property (say, those representing a socially inefficient outcome) are excluded. The exclusion has to be justified in the following sense. Every excluded strategy for a player has an adequate substitute: a non-excluded strategy that does at least as well against all non-excluded strategy profiles. A relatively small polyequilibrium, which excludes many strategy profiles, provides a sharper prediction about the outcome of the game than a larger polyequilibrium does. However, both are legitimate polyequilibria. Thus, this solution concept reflects a somewhat different philosophy than that of Nash equilibrium, in that it is content with learning something interesting about the players’ equilibrium behaviour and does not require that the strategy choices be completely pinned down.

**(2) Are there applications you have in mind for your work in polyequilibria outside pure mathematics? **

Yes. Consider the following simple example of bilateral trade. A buyer offers a price for a particular item, which the seller can only accept or reject. Suppose, for simplicity, that the item has zero value for the seller. It is then a reasonable strategy for the seller to accept any price greater than zero. In fact, this is a dominant strategy. But it is not an equilibrium strategy, because the buyer does not have a best response to it: offering any positive price *p* is less profitable than offering, say, half that price. The polyequilibrium concept solves this conundrum in the obvious way: it makes “offering no more than *p*” a legitimate choice for the buyer, for any given *p*. This is not a strategy but a polystrategy: a collection of price offers. Together with the aforementioned seller’s strategy, it constitutes a polyequilibrium, at which the result that the item is sold at a price not exceeding *p* holds.
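In symbols (with v denoting the buyer’s value for the item – notation added here for illustration):

```latex
% Buyer's payoff when the seller accepts every positive price (v = buyer's value):
\[
  u_B(q) \;=\; v - q \quad \text{for any offer } q > 0,
  \qquad\text{and}\qquad
  \sup_{q>0}\,(v - q) \;=\; v \ \text{ is not attained,}
\]
% so no single offer is a best response. The polystrategy "offer at most p",
% i.e. the set of offers {q : 0 < q <= p}, excludes exactly the offers q > p,
% and each excluded offer has the non-excluded substitute p, since
\[
  v - p \;\ge\; v - q \qquad \text{for every } q > p .
\]
```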

Another area where the idea that the players’ strategies may only be partially specified seems very natural is dynamic games. In such games, the notion of subgame perfection requires that players not only respond optimally to the other players’ actual moves but would also do so off-equilibrium, that is, as a response to all possible deviations of the others from their equilibrium strategies. This is a very sensible requirement, as it eliminates non-credible threats: those that a rational player would not actually carry out. But in a large game, it can be quite cumbersome to check that it holds, as all counter-threats need also be credible, and so on. With polyequilibrium, it may be sufficient to specify actions only at some of the game’s decision nodes, say, those at or close to the actual path. For example, there is no need to specify a response to an action that is unequivocally detrimental to the acting player. Not specifying a response means that none of the possible reactions is excluded. This natural choice for the responder is, again, one that the Nash equilibrium solution concept does not allow.

**(3) More generally speaking, what is the state of play at the moment, with regard to applications of game theory? How much symbiosis is there between the interests of pure mathematicians or computer scientists and the interests of economists here?**

Game theory is more relevant now than it ever was. The move to online economic activity means that clever and sophisticated trading mechanisms can be implemented with relative ease. Ad spaces, for example, can be sold in large auctions where the reaction times are measured in milliseconds and the rules can be as complicated as one wishes. The design of good, efficient mechanisms of this kind is a challenge to game theorists and computer scientists, whose pursuits increasingly converge.

A nice example of this convergence is matching problems: matching pupils to schools, residents to hospitals, kidney donors to recipients, and so on. The matching algorithms need not only be reasonably easy to implement as code but also have to be incentive compatible. That is, participants on at least one side of the market should be assured that stating their true preferences always leads to the best outcome they can possibly get. Finding such algorithms, or even figuring out whether they exist, can be a non-trivial game-theoretic problem. The practical importance of these matching algorithms is immense.

**(4) ..so presumably these are some of the areas you would encourage younger people starting out in game theory to focus on? Any general advice you would give to somebody starting a PhD? What would you do differently a second time around? **

There’s nothing as exciting and satisfying as blazing one’s own trail. Ultimately, you go where your ideas take you. The more you hear and learn, the more you expose yourself to new research directions, the greater is the chance that you’ll come up with something new. So my advice would be to start with whatever you find most exciting, but then not to be afraid of making sharp turns, pursuing new ideas as they come along. Another piece of advice would be to try, from time to time, to make contributions that go beyond the incremental, to write papers that other people might find inspiring, papers that will result in follow-up work. Attaining technical proficiency and “mathematical maturity” are also important, so my advice to PhD students is to take as many advanced math courses as they can possibly bear.

**(5) To finish on a lighter note – what do you enjoy outside of work? Which book did you read last? **

A lively book I just finished is *The Life Project*, by Helen Pearson. It tells the story of the British cohort studies, which started in 1946. It’s a tale of science, scientific endeavour, and the sociology and politics of science. The heroes are the doctors and scientists who envisioned these studies and struggled to make them a reality and maintain them throughout the many, sometimes difficult years. Their achievements are the medical, sociological and economic insights that resulted from following the lives of the thousands of individuals involved – observing how their starting points and their decisions through the years affected their health, wealth and well-being – and the policy changes that this understanding helped bring about. It’s a story about the value and beauty of science – recommended reading!
