About Chris Fryer

Goal: carrier-grade reliability. Nines. Lots of nines.

LSE to go wholly online forever, I think

Big news from LSE tonight (I think, anyway). I was in a meeting with colleagues in which I’m pretty sure I learned that we will be selling the campus in its entirety because we have no further use for the buildings.

I say “pretty sure” because my kid was on Fortnite and the WiFi kept dropping out.

But the gist of it was: everyone is so completely comfortable with working, teaching, learning, researching and so forth at home that we really don’t need this massive investment in bricks and mortar, so why not sell it?

Word is, Mike Ashley is interested in turning the Old Building into a Sports Direct and Tim Martin has made a very reasonable offer for the Three Tuns.

Someone with a cat avatar in MS Teams said:

LSE has a grand tradition, stretching back at least two weeks, of making bold decisions and acting on them swiftly. We are proud to be the first of the Russell Group universities to follow the Open University into the online space. Stop hitting your sister with that. Don’t make me come upstairs. One… TWOOOOO

The advantages are legion, and even I can think of at least one, despite having been woken by my children at 04:30 because someone thought it would be fun to put the clocks forward in the middle of a pandemic.

No you shut up.

As I was saying: the “academic hour” is at an end. No longer will I have to scoot across campus from Clement House to 32 Lincoln’s Inn Fields and show up late for a meeting. I can simply drop out of my eleven o’clock and spend the following ten minutes saying: “Can you hear me? No? What about now? Ugh! I hate this computer. Wait, I have a headset somewhere…”

All watched over by machines of cold indifference

This post is based on a presentation I gave at this afternoon’s M25 Learning Technology Group meeting at King’s College London.

The title of this post refers to an Adam Curtis documentary series from 2011, itself taken from a Richard Brautigan poem.  I’ve reproduced the last stanza:

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

This is a lyric expression of something that’s come to be known as Technological Utopianism.  This isn’t merely the preserve of beatniks and hippies; Bertrand Russell wrote, in his 1932 essay In Praise of Idleness, “four hours’ work a day should entitle a man to the necessities and elementary comforts of life, and that the rest of his time should be his to use as he might see fit,” because:

Leisure is essential to civilization, and in former times leisure for the few was rendered possible only by the labors of the many. But their labors were valuable, not because work is good, but because leisure is good. And with modern technique it would be possible to distribute leisure justly without injury to civilization.

And John Maynard Keynes wrote, in his 1930 essay Economic Possibilities for our Grandchildren, that within 100 years the “economic problem” would be solved.  In 2030 we would all be working “three-hour shifts or a fifteen-hour week” and:

For the first time since his creation man will be faced with his real, his permanent problem – how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.

Keynes’s grandchildren are eleven years from this horizon, and (needless to say) things haven’t quite worked out that way.  Why not?

Russell’s “modern technique”


The Jacquard Loom in the Musée des Arts et Métiers in Paris [Moof (CC BY 2.0)]

The Musée des Arts et Métiers in Paris is a temple to scientific progress.  In its galleries you’ll find hundreds of machines, including a lovely example of the Jacquard loom.  Looking at the machine, you’ll see some punched cards which, as the mechanism moves, raise and lower different warp threads, producing patterned textiles.  The cards can be re-ordered to create different patterns.

Those punched cards may look familiar to computer users of a certain age; they are practically identical to those once used to program computers.  The comparison is not lost on the curators of the museum; the exhibition leads finally to a room containing a Cray 2 supercomputer. Nor was it lost on Charles Babbage, who understood punched cards could be used to program his Analytical Engine.


A fifteen hour week?

The textile industry gave Britain its first full-blown industrial relations crisis: the outbreak of machine breaking by the Luddites.  The Luddites were not, contrary to popular opinion, opposed to technology per se; textile workers had been using stocking frames since Tudor times, but in a highly-regulated industry.  The Luddites’ machine breaking was instead a response to the use of machinery in “a fraudulent and deceitful manner,” particularly by unskilled apprentices working without the supervision of master craftsmen.  In Eric Hobsbawm’s memorable phrase, the Luddites were conducting “collective bargaining by riot”.

The textile workers’ expertise, hitherto distributed among men and machines in a cottage industry, became concentrated in machines housed in factories owned by capitalists.

Did the textile workers share in the profits that followed? Did they reduce their hours to fifteen a week?  Of course not:

[Wages] could be compressed by direct wage-cutting, by the substitution of cheaper machine-tenders for dearer skilled workers, and by the competition of the machine.  This last reduced the average weekly wage of the handloom weaver from 33s. in 1795 to 4s. 1½d. in 1829.

Eric Hobsbawm, The Age of Revolution: Europe 1789-1848

In thirty-four years, their wages fell to one-eighth of their former value.

So much for history

This is not a phenomenon confined to the pages of history books.  A similar battle is playing out right now, on the streets of London, between black cab drivers and Uber.

In order to become a cabbie, you need “the knowledge,” earned by learning 80 runs across the city, getting at least 60% on two written exams, and passing three oral exams.  This can take between three and five years to accomplish.  By contrast, becoming an Uber driver in London requires that you have a TfL Private Hire licence, which I estimate takes a minimum of eight weeks to obtain. The process includes what TfL calls a “topographical skills assessment”, which (being brutally honest) ensures you are able to read a map.

It is very difficult to find out the earnings of black cab drivers or Uber drivers, because they are self-employed and not required to reveal their earnings to impertinent systems administrators.  But the New York Times estimates Uber fares to be about 30% cheaper than black cabs, and Uber extracts a fee upwards of 25% from its drivers.  As Daniel Markovits, author of The Meritocracy Trap, points out, a cabbie can earn enough to own a home, provide for his family, and go on holiday.  The precarious finances of the Uber driver, on the other hand, are legendary.

Naturally all this enrages the black cab drivers; as with the looms in the dark satanic mills of the 19th century, their previously distributed expertise is becoming concentrated in the machines of capitalists.

But this time, there are no looms to smash.  Uber has developed not one machine learning algorithm but so many that its engineers have built a bespoke machine learning platform, allowing the teams behind the myriad components of the Uber service to automate them more easily.

So they want to replace you with a machine. But is it any good?

Models used in forecasting have a property called “skill”, which measures how good they are at what they’re intended to do.  I’d like you to consider a specific example which, while detailed, is readable enough for a non-technical audience.  Amazon Web Services will rent you a machine learning service, which you can bend to your requirements.  In this example, Denis Batalov shows how you can use Amazon Machine Learning to predict customer churn from a mobile phone service.

For mobile providers, obtaining new customers is costly. Those special offers you see advertised are loss leaders designed to lure you into signing a contract. They will absorb the loss because they assume you are too lazy to switch providers at the end of your contract, at which point they can milk you for profit. Those customers who do leave are said to “churn”.

The data set uses comparatively few data points: for example, how long the customer has had the service, how much they use their phone, how much the service costs, and how many times they’ve called Customer Services.  The goal of the exercise is to identify those customers most likely to churn, and to stage an automated intervention, buttering them up with free minutes, a new handset, and so on.

The algorithm is first trained on data where it can see the outcome: customer x with the following attributes remained a customer, but customer y with these attributes decided to leave.  Then, to test its skill, it is shown the data without seeing the outcome.  More successful models are selected for evolution, and the remainder are culled.  This continues until returns diminish to the point where further tweaking yields no improvement.
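For the curious, the train-then-test loop can be sketched in a few lines of Python.  This is a toy, with invented data and a one-feature “model” (a simple threshold on calls to Customer Services); it is emphatically not the actual Amazon pipeline, just the shape of the procedure:

```python
# Toy sketch of train-then-test.  All data and the churn rule are invented.
import random

random.seed(0)

# Invented rule: the more often a customer calls Customer Services,
# the likelier they are to churn.
def make_customers(n):
    data = []
    for _ in range(n):
        calls = random.randint(0, 9)
        churned = random.random() < (0.05 + 0.08 * calls)
        data.append((calls, churned))
    return data

train, test = make_customers(1000), make_customers(1000)

# "Training": pick the calls-threshold that best separates churners,
# looking only at data where the outcome is visible.
def accuracy(threshold, data):
    return sum((calls >= threshold) == churned
               for calls, churned in data) / len(data)

best = max(range(11), key=lambda t: accuracy(t, train))

# "Skill": measure the chosen model on data it has never seen.
print(f"Chosen threshold: {best} calls")
print(f"Held-out accuracy: {accuracy(best, test):.1%}")
```

The important move is the last line: skill is always measured on data the model was not trained on, otherwise you are grading the student on questions it has already seen.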


In this data set, 14.5% of customers “churned”. Can the machine identify those who will stay, and those who will leave?  Well, it can identify 86% of them.  But this is, in practical terms, the same as having no model at all, or (to put it another way) having a model which assumes all customers are loyal but is wrong about 14.5% of them.
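The arithmetic behind that claim is simple enough to sketch.  The 14.5% churn rate and 86% accuracy come from the example; the comparison is mine:

```python
# Why 86% accuracy is barely better than having no model at all.
churn_rate = 0.145

# A "model" that predicts every customer will stay is right whenever
# the customer does in fact stay:
baseline_accuracy = 1 - churn_rate   # 85.5%

model_accuracy = 0.86

print(f"Assume-everyone-is-loyal baseline: {baseline_accuracy:.1%}")
print(f"Trained model:                     {model_accuracy:.1%}")
print(f"Improvement:                       {model_accuracy - baseline_accuracy:.1%}")
```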

However, since losing customers is expensive, and offering butter-ups is (comparatively) cheap, you can tweak the model so that it is more wrong than having no model at all, and yet saves the company $22.15 per customer.  Scaled up, this is big money (and a big bonus for the ML developer).

Less accurate than guessing, but much more profitable.
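In code, the tweak amounts to attaching a dollar cost to each quadrant of the confusion matrix and minimising cost rather than error.  The dollar figures and confusion matrices below are invented for illustration; only the principle, that a less accurate model can be cheaper, comes from Batalov’s example:

```python
# Choosing a churn model by expected cost, not accuracy.
# All numbers here are invented for illustration.

# Per-customer costs for each quadrant of the confusion matrix:
COST_FN = 100.0   # missed churner: customer leaves, revenue lost
COST_FP = 10.0    # loyal customer offered a needless retention sweetener
COST_TP = 10.0    # churner caught: sweetener cost (assume it works)
COST_TN = 0.0     # loyal customer, no action

def expected_cost(tp, fp, tn, fn):
    """Average cost per customer for one confusion matrix."""
    n = tp + fp + tn + fn
    return (tp * COST_TP + fp * COST_FP + fn * COST_FN + tn * COST_TN) / n

# Two hypothetical models scored on 1,000 customers (145 churners):
accurate   = expected_cost(tp=30,  fp=10,  tn=845, fn=115)  # 87.5% accurate
aggressive = expected_cost(tp=120, fp=200, tn=655, fn=25)   # 77.5% accurate

print(f"Accurate model:   ${accurate:.2f} per customer")
print(f"Aggressive model: ${aggressive:.2f} per customer")
# The aggressive model makes many more "mistakes", yet costs less,
# because catching a churner is worth far more than being right.
```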

What has this to do with Learning Technology?  I’ve seen ML models not very different to the above, deployed in a VLE and using very few data points, making predictions about whether a student will pass or fail a particular module, or whether a student will drop out or remain enrolled.  The problems come in the “costs” we attach to the quadrants of the confusion matrix, and in concentrating expertise in a machine at the expense of our distributed expertise as individual educators.

Seen it all before

Any forecasting discipline also suffers the problem of bias.  In human actors, we hope it is unconscious.  But in machine learning, it is built in, because we are training our AIs on historical data.

In 2017, Amazon announced it had shuttered an experimental programme to train an AI for recruitment. Scouring LinkedIn or sifting a pile of applications is time-consuming (and thus costly), repetitive, boring, and tiring.  These characteristics belong to tasks which IT professionals immediately select for automation.  But, just as when training a model to predict customer churn, historical data are required, with all the perils inherent therein.  Amazon’s AI was chucking women’s applications on the discard pile, because the company had, in the past, consistently favoured male applicants over female ones.

Again, what has this to do with learning technology, or even with IT in HE?  I’ve found institutions which were at least toying with the idea of using machine learning to sift admissions applications.  What will that do to our efforts to widen participation?  Instead of artificial intelligence, we will have automated ignorance.

When you combine bias-as-code with the kinds of de-skilling discussed in the cases of the textile workers and taxi drivers, you have a potent recipe for problematic decision making.  The most recent example is the “sexist AI” behind Apple Card, which assessed a married couple who, on the face of it, presented identical credit risks.  It offered the husband 20x the credit limit it offered his wife. Apple’s customer services people threw their hands up: “It’s just the algorithm”.  Even The Woz waded in, observing “It’s hard to get to a human for a correction though. It’s big tech in 2019.”  Once again, we see expertise, previously distributed among financial advisers, concentrated in a machine.

Weird, inscrutable logic

Even if Apple had retained the human capability to consider an appeal against the AI’s decision, it wouldn’t have been able to explain that decision, because ML algorithms do not admit of human scrutiny: software that is evolved is unreadable.

As a systems administrator, I like code that can be audited.  When something aberrant happens, I like to be able to see if there’s a logic problem.  But in discussions with non-systems (i.e. normal) people, I’ve come to agree that it’s acceptable, in some circumstances, to audit a system only knowing its inputs and its outputs.  An example is the pocket calculator.  You can ask it to solve 5 x 5, 10 – 8, etc, and compare it with your own working.  Eventually you come to trust the system and ask it to solve the square root of pi, and because it’s been right about everything until now, you believe that it’s right about this.

But as we’ve seen, ML is being asked to solve more complex problems than the root of pi.  It’s being asked to make predictions and decisions, with multiple inputs that it may or may not be using to draw its inferences, some of which could be wildly inappropriate.  There are, after all, a lot of spurious correlations in the world.

So I finish on an appeal: if your institution is ever considering the use of AI to admit applicants, or mark students’ work, or predict their likely success, press as hard as you can for the institution to retain a human in the process.  Because if the past is any guide — and it surely is, because that’s the basis on which we’re training our machines — if you don’t, there won’t be anyone left to hear an appeal.

Vacancy: Learning Technology Systems Support Specialist

LSE’s Learning Technology and Innovation team are recruiting a Learning Technology Systems Support Specialist, with a focus on managing our lecture recording system, Echo360 Active Learning Platform.

We offer a salary in the range £35,999 to £43,360, with the potential to progress to £46,617 pa (inclusive of London allowance).

The School is undertaking a programme of expansion by making the system available in more teaching rooms, and moving to an opt-out (record by default) model.

The post-holder will help to manage the integration between Echo360 and the School’s timetable system, Scientia Syllabus Plus.  The successful candidate will have an opportunity to champion lecture recording at LSE, working with teaching, learning technology, audio-visual and systems integration colleagues to make lecture recording and video-on-demand a core component of the student experience.

The successful candidate will have experience of managing Echo360 Active Learning Platform (or a similar enterprise-grade lecture recording system), and excellent planning, organisation and communication skills. Experience with SQL and the use of Talend Open Studio for Data Integration, or a similar ETL tool, would be an advantage, but training will be available for the successful candidate.

For more information, and to apply, please visit, and for informal enquiries, please contact Chris Fryer,

The closing date for receipt of applications is Sunday, 17th February 2019 (23.59 UTC). Regrettably, we are unable to accept any late applications.

January 21st, 2019 | Announcements, Lecture recording

Improvements to the lecture recording service for Lent Term 2019

We are pleased to announce that our colleagues in Estates Division and Data and Technology Services (formerly IMT) have worked over the Christmas break to improve the lecture recording system.  Recording facilities are newly installed in the following rooms:
  • CLM.2.04
  • CLM.2.05
  • CLM.2.06
If you are scheduled to teach in these rooms and would like to have your lectures recorded, please let us know by completing the form in LSE for You.
The following rooms now have High Definition video, in addition to audio and display recording facilities:
  • 32L.LG.03
  • 32L.LG.18
  • NAB – Alumni Theatre
  • NAB – Wolfson Theatre
  • NAB.1.07
  • NAB.1.09
  • NAB.1.10
  • NAB.1.14
  • NAB.1.15
  • NAB.1.17
  • NAB.1.18
  • NAB.1.19
  • NAB.2.06
  • NAB.2.08
  • NAB.2.09
  • NAB.2.13
  • NAB.2.14
  • NAB.2.16
  • OLD.3.21
  • OLD.4.10
  • PAR.1.02
  • PAR.2.03
  • PAR.LG.03
  • TW2.2.04

“On air” lights

Lights have been installed to help you determine when a recording is taking place.  The lights change colour according to the state of the recording system.  When the light is a steady green, no recording is taking place, but the system is operational.  When the light is a steady yellow, a recording is due to start in the next five minutes.  When the light is a steady red, a recording is in progress.
The light also doubles as a push-button control system.  When the light is a steady red (meaning a recording is in progress) you can push down on the light to pause the recording.  You may wish to do this when there is a break in your lecture, or when you otherwise feel that continuing to record is not appropriate.  When the recording is paused, the light will blink yellow.  Push down on the light to resume recording.  Wait for the light to return to a steady red before continuing with your teaching.
Please note that the light will only function while the PC in the room is switched on.

Opt in, or opt out?

Lecture recording remains opt in pending the ratification of the policy document by Academic Board. Some lecturers expressed concern about paragraph 2, which governs intellectual property. We will be sending a revised draft to the departments and the UCU branch for comment before final submission to Academic Board.
So the procedure for Lent Term continues to be as outlined in this post:
January 15th, 2019 | Announcements, Learning Spaces, Lecture recording

Lecture recording for 2018/19

Lecturers will need to opt in for their lectures to be recorded. Please visit LSE for You and complete the form so that we can record your lectures. Please note that this task cannot be delegated to anyone else.  There are detailed instructions available on our website.

If your lectures do not appear on that form, please check the timetable to ensure your lectures have been allocated to you correctly. Contact to correct any errors.

Once the first recording is complete you will receive an email informing you that the lecture is ready to view. Please see our guidance on how to publish your recordings in Moodle.

If you already have a link to lecture recordings in your Moodle course, it is likely that this is to last academic year’s recordings. Unless you want 2018/19 students to watch those recordings, please remove or hide that link.

Recording seminars

LTI are aware that some departments classify their teaching as “seminars” so that they benefit from a register, and can schedule more than one session in a week. Unfortunately, these sessions do not appear for selection in LSE for You. We are working with the Law, Finance and Management departments (who are most affected by this problem) to collate sessions that must be recorded. Nevertheless, if you are not in one of those departments and wish to have teaching sessions not classified as “Lectures” recorded, please email and give details of the sessions.

Guest lecturers

Lecturers who are not members of LSE will not be able to complete the form in LSE for You. Please ensure you have their consent to be recorded by asking them to complete the release form.  Email this, with details of the session to be recorded, to

Further advice

Before making an enquiry not covered above, please see our Frequently Asked Questions about lecture recording.

September 28th, 2018 | Lecture recording

Secondment opportunity in LTI

LSE Learning Technology and Innovation are offering a 12 month secondment opportunity for the role of Learning Technology Systems Support Specialist, looking after our lecture recording system, Echo360.

This role is a secondment opportunity to enable existing employees to broaden their knowledge, experience and skills by transferring to a different team/department. Existing employees wishing to apply for this role as a secondment opportunity should discuss the role and seek agreement from their line manager in advance of their application.

Please see the full details in LSE Jobs.

The closing date for receipt of applications is 28th August 2018.

August 3rd, 2018 | Announcements, Lecture recording

Vacancy: Learning Technology Systems Support Specialist

LSE’s Learning Technology and Innovation team are recruiting a Learning Technology Systems Support Specialist, with a focus on managing our lecture recording system, Echo360 Active Learning Platform.

We offer a salary in the range £34,736 to £42,019, with the potential to progress to £45,212 (inclusive of London allowance).

The successful candidate will have experience of managing Echo360 Active Learning Platform or a similar enterprise-grade lecture recording system, working knowledge of SQL, and excellent planning, organisation and communication skills.

The post-holder will manage the integration between Echo360 and the School’s timetable system, Scientia Syllabus Plus.  Experience using Talend Open Studio for Data Integration, or a similar ETL tool, would be an advantage, but training will be available for the successful candidate.

The School currently operates an opt-in lecture recording policy, and records 42% of teaching.  We plan to expand coverage by making the system available in more teaching rooms, and to move to an opt-out (record by default) model.

The successful candidate will have an opportunity to champion lecture recording at LSE, working with teaching, learning technology, audio-visual and systems integration colleagues to make lecture recording and video-on-demand a core component of the student experience.

For more information, and to apply, please visit, and for informal enquiries, please contact Chris Fryer,

The closing date for receipt of applications is 14th June 2018 (23.59 BST). Regrettably, we are unable to accept any late applications.

May 24th, 2018 | Announcements

Lecture recording now available in KSW.1.04

If you are timetabled to deliver lectures in KSW.1.04 during Lent Term 2018, you can now choose to have your lectures recorded.  See our guide to setting your lecture recording preferences. Note that your sessions will need to be classified as “Lectures” in the timetable for you to be able to book them in LSE for You.

December 20th, 2017 | Learning Spaces, Lecture recording

Two new vacancies at LSE

LSE’s Learning Technology and Innovation team are recruiting for two vacancies: a Learning Technology Systems Officer, to provide first and second line technical support for learning technologies including Moodle and lecture capture, ​and a Learning Technology Content Developer, to produce a range of engaging media content for use in the School’s online and blended learning provision.

The closing date for receipt of applications is 12th November 2017 (23.59 GMT).

October 19th, 2017 | Announcements

The way you link to reading lists in Moodle is changing

After we upgrade Moodle to version 3.1 this summer, one of the ways you currently link to reading lists won’t work. The integration between Talis Aspire and Moodle was written by the University of Kent and has not been updated to work with the new system.

How can I tell if I need to make changes?

If your reading lists bear this icon, then they will no longer work after Tuesday 15th August.

Instead, you will need to use one of the following methods:

June 20th, 2017 | Announcements, Moodle