Ed-Tech news and issues

All watched over by machines of cold indifference

This post is based on a presentation I gave at this afternoon’s M25 Learning Technology Group meeting at King’s College London.

The title of this post refers to an Adam Curtis documentary series from 2011: All watched over by machines of loving grace. This in turn was taken from the title of a Richard Brautigan poem.  I’ve reproduced the last stanza:

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

This is a lyric expression of something that’s come to be known as Technological Utopianism.  This isn’t merely the preserve of beatniks and hippies; Bertrand Russell wrote, in his 1932 essay In Praise of Idleness, “four hours’ work a day should entitle a man to the necessities and elementary comforts of life, and that the rest of his time should be his to use as he might see fit,” because:

Leisure is essential to civilization, and in former times leisure for the few was rendered possible only by the labors of the many. But their labors were valuable, not because work is good, but because leisure is good. And with modern technique it would be possible to distribute leisure justly without injury to civilization.

And John Maynard Keynes wrote, in his 1930 essay Economic Possibilities for our Grandchildren, that within 100 years the “economic problem” would be solved. In 2030 we would all be working “three-hour shifts or a fifteen-hour week” and:

For the first time since his creation man will be faced with his real, his permanent problem – how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.

Keynes’s grandchildren are eleven years from this horizon, and (needless to say) things haven’t quite worked out that way.  Why not?

Jacquard Loom

The Jacquard Loom in the Musée des Arts et Métiers in Paris [Moof (CC BY 2.0)]

The Musée des Arts et Métiers in Paris is a temple to scientific progress.  In its galleries you’ll find hundreds of machines, including a lovely example of the Jacquard loom.  Looking at the machine, you’ll see some punched cards through which threads can be strung, and as the mechanism moves, patterned textiles are produced.  The cards can be re-ordered to produce different patterns.

Those punched cards may look familiar to computer users of a certain age; they are practically identical to those once used to program computers.  The comparison is not lost on the curators of the museum; the exhibition leads finally to a room containing a Cray 2 supercomputer. Nor was it lost on Charles Babbage, who understood punched cards could be used to program his Analytical Engine.

Luddite

The textile industry gave Britain its first full-blown industrial relations crisis in the shape of the outbreak of machine breaking by the Luddites.  The Luddites were not, contrary to popular opinion, opposed to technology per se; textile workers had been using stocking frames since Tudor times, but in a highly regulated industry.  Their machine breaking was a response to the use of automation in “a fraudulent and deceitful manner,” particularly by unskilled apprentices working without the supervision of master craftsmen.  In Eric Hobsbawm’s memorable phrase, the Luddites were conducting “collective bargaining by riot”.

Their expertise, which had hitherto been distributed among men and machines in a cottage industry, was becoming concentrated in machines owned by capitalists.  Did the textile workers find themselves sharing in the profits generated by these machines, and reducing their hours to fifteen a week?  Of course not:

[Wages] could be compressed by direct wage-cutting, by the substitution of cheaper machine-tenders for dearer skilled workers, and by the competition of the machine. This last reduced the average weekly wage of the handloom weaver from 33s. in 1795 to 4s. 1½d. in 1829.

Eric Hobsbawm, The Age of Revolution: Europe 1789-1848

In thirty-four years, their wages were reduced to one eighth.

This phenomenon is not confined to the pages of history books.  A similar battle is playing out right now, on the streets of London, between black cab drivers and Uber.  In order to become a cabbie, you need “the knowledge,” earned by learning 80 runs across the city, getting at least 60% on two written exams, and passing three oral exams.  This can take between three and five years to accomplish.  By contrast, becoming an Uber driver in London requires that you have a TfL Private Hire licence, and I estimate this to take a minimum of eight weeks. This process includes what TfL calls a “topographical skills assessment”, which (being brutally honest) ensures you are able to read a map.

It is very difficult to find out the earnings of black cab drivers or Uber drivers, because they are self-employed and not required to reveal their earnings to impertinent systems administrators.  But the New York Times estimates Uber fares to be about 30% cheaper than black cabs, and Uber extracts a fee upwards of 25% from its drivers.  As Daniel Markovits, author of The Meritocracy Trap, points out, a cabbie can earn enough to own a home, provide for his family, and go on holiday.  The precarious finances of the Uber driver, on the other hand, are legendary.

Naturally all this enrages the black cab drivers; as with the looms in the dark satanic mills of the 19th century, their previously distributed expertise is becoming concentrated in the machines of capitalists.

But this time, there are no looms to smash.  Uber has developed not one machine learning algorithm, but so many that their engineers have created a bespoke machine learning service, so teams working on the myriad components of the Uber service can automate them more easily.

So they want to replace you with a machine. Is it any good?

Models used in forecasting have a property called “skill”, which measures how good they are at what they’re intended to do.  I’d like you to consider a specific example which, while detailed, is readable enough for a non-technical audience.  Amazon Web Services will rent you a machine learning service, which you can bend to your requirements.  In this example, Denis Batalov shows how you can use Amazon Machine Learning to predict customer churn from a mobile phone service.

The data set uses comparatively few data points, for example how long the customer has had the service, how much they use their phone, how much the service costs and how many times they’ve called Customer Services.  The goal of the exercise is to identify those customers most likely to churn, and to stage an automated intervention, buttering the customers up with free minutes, a new handset, etc.

The algorithm is first trained on data where it can see the outcome: customer x with the following attributes remained a customer, but customer y with these attributes decided to leave.  Then, to test its skill, it is shown the data without seeing the outcome.  More successful models are selected for evolution, and the remainder are culled.  This continues until the returns diminish to the extent that there’s no more tweaking to be done.
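The train-then-test loop described above can be sketched in a few lines. This is a minimal illustration using scikit-learn on synthetic data, not the actual Amazon Machine Learning workflow; the churn rate and sample sizes are stand-ins:

```python
# Minimal sketch of the train-then-test procedure described above,
# using synthetic data as a stand-in for the customer records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "churn" data: roughly 14.5% of customers leave.
X, y = make_classification(n_samples=2000, n_features=5,
                           weights=[0.855], random_state=0)

# Train on data where the model can see the outcome...
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# ...then measure its skill on data it has not seen.
acc = accuracy_score(y_te, model.predict(X_te))
print(f"accuracy on unseen data: {acc:.1%}")
```

In a real exercise the selection-and-culling step repeats this measurement across many candidate models, keeping the most skilful.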

In the example data set, 14.5% of customers “churned”.  Can the machine identify those who will stay, and those who will leave?  Well, it can correctly classify 86% of them.  But this is, in practical terms, the same as having no model at all: a “model” which simply assumes all customers are loyal would be right about 85.5% of them.

However, since losing customers is expensive, and offering butter-ups is (comparatively) cheap, you can tweak the model so that it is more wrong than having no model at all, and yet saves the company $22.15 per customer.  Scaled up, this is big money (and a big bonus for the ML developer).
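This logic can be made concrete by attaching costs to the quadrants of the truth table and picking the classification threshold that minimises expected cost rather than maximising accuracy. The dollar figures and the scoring function below are hypothetical, chosen only to illustrate the trade-off:

```python
# Illustration of the cost argument above: a model can be "more wrong"
# than no model at all and still save money, once the quadrants of the
# truth table are given (hypothetical) dollar costs.
import numpy as np

COST_LOST_CUSTOMER = 500.0   # hypothetical: a churner we fail to retain
COST_BUTTER_UP = 50.0        # hypothetical: free minutes, new handset, etc.

def cost_per_customer(y_true, churn_score, threshold):
    flagged = churn_score >= threshold           # customers we intervene on
    missed = (~flagged) & (y_true == 1)          # churners we fail to flag
    return (flagged.sum() * COST_BUTTER_UP
            + missed.sum() * COST_LOST_CUSTOMER) / len(y_true)

rng = np.random.default_rng(0)
y = (rng.random(10_000) < 0.145).astype(int)     # 14.5% churn, as in the example
# A noisy score that is merely correlated with the truth.
score = np.clip(0.5 * y + rng.normal(0.3, 0.2, y.size), 0.0, 1.0)

baseline = cost_per_customer(y, score, threshold=1.1)   # flag no one: "no model"
best = min(np.arange(0.05, 1.0, 0.05),
           key=lambda t: cost_per_customer(y, score, t))
print(f"no model: ${baseline:.2f}/customer; "
      f"threshold {best:.2f}: ${cost_per_customer(y, score, best):.2f}/customer")
```

With no model, every churner walks; a low threshold wastes butter-ups on loyal customers but catches most churners, and the cheapest threshold sits in between.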

What has this to do with Learning Technology?  I’ve seen ML models not very different to the above, deployed in a VLE and using very few data points, making predictions about whether a student will pass or fail a particular module, or whether a student will drop out or remain enrolled.  The problems come in the “costs” we attach to the quadrants in the truth table, and in concentrating expertise in a machine at the expense of our distributed expertise as individual educators.

Any forecasting discipline also suffers the problem of bias.  In human actors, we hope it is unconscious.  But in machine learning, it is built in, because we are training our AIs on historical data.  In 2017, Amazon announced it had shuttered an experimental programme to train an AI for recruitment.  Scouring LinkedIn or sifting a pile of applications is time-consuming (and thus costly), repetitive, boring, and tiring.  These are exactly the characteristics of tasks which IT professionals immediately select for automation.  But, just as when training a model to predict customer churn, historical data are required, with all the perils inherent therein.  Amazon’s AI was chucking women’s applications on the discard pile, because historically the company had favoured male applicants over female ones.

Again, what has this to do with learning technology, or even with IT in HE?  I’ve found institutions which were at least toying with the idea of using machine learning to sift admissions applications.  What will that do to our efforts to widen participation?  Instead of artificial intelligence, we will have automated ignorance.

When you combine bias-as-code with the kinds of de-skilling discussed in the cases of the textile workers and taxi drivers, you have a potent recipe for problematic decision making.  The most recent example is the “sexist AI” behind Apple Card, which assessed a married couple who, on the face of it, presented identical credit risks.  It offered the husband 20x the credit limit it offered his wife. Apple’s customer services people threw their hands up: “It’s just the algorithm”.  Even The Woz waded in, observing “It’s hard to get to a human for a correction though. It’s big tech in 2019.”  Once again, we see expertise, previously distributed among financial advisers, concentrated in a machine.

But even if Apple had retained a human capability to consider an appeal against the AI’s decision, it wouldn’t have been able to explain that decision, because ML algorithms do not admit of human scrutiny: software that is evolved is unreadable.

As a systems administrator, I like code that can be audited.  When something aberrant happens, I like to be able to see if there’s a logic problem.  But in discussions with non-systems people, I’ve come to agree that it’s acceptable, in some circumstances, to audit a system only knowing its inputs and its outputs.  An example is a pocket calculator.  You can ask it to solve 5 x 5, 10 – 8, etc, and compare it with your own working.  Eventually you come to trust the system and ask it to solve the square root of pi, and because it’s been right about everything until now, you believe that it’s right about this.
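That style of input/output audit can itself be automated: probe the black box with cases you can verify by your own working, and only then extend trust to answers you cannot easily check. A toy sketch (the “black box” here is just a stand-in function):

```python
# Toy sketch of auditing a black box by inputs and outputs alone.
import math

# A stand-in for an opaque system whose internals we cannot read.
def black_box(x):
    return math.sqrt(x)

# Cases we can verify against our own working: 5 x 5 = 25, and so on.
known_cases = {25: 5.0, 4: 2.0, 81: 9.0, 144: 12.0}

for x, expected in known_cases.items():
    assert math.isclose(black_box(x), expected), f"aberrant output for {x}"

# Having passed the audit, the system earns our trust for an answer we
# cannot easily check ourselves: the square root of pi.
print(black_box(math.pi))
```

The catch, as the next paragraph argues, is that this only builds justified trust when the probe cases resemble the questions we will later ask.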

But as we’ve seen, ML is being asked to solve more complex problems than the root of pi.  It’s being asked to make predictions and decisions, with multiple inputs that it may or may not be using to draw its inferences, some of which could be wildly inappropriate.  There are, after all, a lot of spurious correlations in the world.

So I finish on an appeal: if your institution is ever considering the use of AI to admit applicants, or mark students’ work, or predict their likely success, press as hard as you can for the institution to retain a human in the process.  Because if the past is any guide — and it surely is, because that’s the basis on which we’re training our machines — there won’t be anyone left to hear an appeal.

LSE Turnitin Guidance for dealing with requests to view student papers

LSE is using Turnitin to check the similarity of students’ work against the submitted work of other students and various web sources. Papers submitted by students are added to Turnitin’s repository, which allows teachers at one institution to find matches with student work submitted at other institutions that use Turnitin. Moreover, teachers can request to view papers submitted to other institutions if they think it necessary.

What is a paper view request?

Turnitin enables academics to find matches of students’ submitted work to other students’ work. When a match is found, the LSE representative can only view part of the source. To view the full source, the LSE representative can make a paper view request to the institution where that source was originally submitted. These requests are made through Turnitin, who pass the request on to the relevant institution.

LSE has recently developed guidance for dealing with requests to view student papers. The guidance (approved by the Academic Board) explains what a paper view request is and when and how to make one, and answers important questions such as: on what basis you should decline or accept a paper view request from another institution, what the procedure is for declining or accepting requests, and what a paper view request email looks like.

For further advice and questions about the paper view requests, please contact Martin Johnson, Assessment Regulations Manager.

Paper view requests to another institution should normally be made when the suspicion of plagiarism is high enough that the assessment in question needs to be investigated in accordance with the School’s Regulations on Assessment Offences: Plagiarism.

Also, don’t forget to visit our Staff and Students support page on using Turnitin, which provides useful information about interpreting similarity reports, FAQs for staff and students, and more. For general advice on using Turnitin, contact LTI.


April 2nd, 2019 | eAssessment News, Ed-Tech news and issues, Teaching & Learning, Tools & Technologies

An evaluation of LSE’s new informal learning spaces

In the 2016-2017 academic year, staff at LTI undertook an evaluation of the use of new LSE informal learning spaces. The findings and lessons learnt can be found in our final report. Here are the highlights.

Background

As part of a School-wide objective to provide students with more informal learning spaces across campus, “forgotten” spaces were redeveloped and opened for the 2016-17 academic year. Staff at LTI led the design of six spaces, one at each landing of Clement House’s back stairwell, along with Estates, the Teaching and Learning Centre and AV services.

While each space was designed to fulfil a specific function, such as collaborative work or quiet study, they were also intended to be flexible so that students could own and shape them.

This work was also an opportunity for LTI to experiment with new configurations and technologies across a variety of modular spaces, with an eye to LSE’s future buildings. LTI’s report investigates how effectively students used the six spaces and whether that use matched the design intentions. It also provides a context for understanding how the spaces fit into students’ overall experience of informal learning spaces at the School.


Findings

Although the effective use of the spaces did not always match the original design intentions, the spaces were welcomed by both students and staff and saw high levels of occupancy.

As far as use is concerned, students seemed to favour individual use of the spaces, even on those floors fitted with collaborative furniture. This aligns with the most common approach to teaching and learning at the School, also reflected in its assessment: quiet study and individual working. It would be interesting to reassess the use of these and similar spaces once other modes of teaching and assessment are adopted as a result of the School-wide initiative to diversify assessment from next year.

With regard to the spaces themselves, students appreciated the calm and relaxed feel of the spaces and the range of equipment available to them.  Areas for improvement include noise levels (especially between classes) and a lack of work space (such as tables or chairs).

Report

More information about the spaces, findings and our analysis can be found in the full report: An Evaluation of Clement House Informal Learning Spaces.

LTI is currently working on the redevelopment of other informal spaces, as well as three rooms in various areas of the campus (more details to follow soon).

Findings from this evaluation and our previous evaluation of new teaching spaces will inform the design of these and future spaces.

We would love to hear your feedback, please use the comments below or email LTI to share your thoughts!

June 29th, 2017 | Announcements, Ed-Tech news and issues, innovation, Learning Spaces, Projects, Reports & Papers, Teaching & Learning, TEL Trends

SPARK Grants: results and last call!

The results are in! 

In March the SPARK! Committee reviewed applications from our first call and approved three projects aimed at improving the student learning experience through the use of technology and innovative pedagogical approaches.

The projects include an extension of a very successful students-as-producers project to further develop students’ filmmaking skills; the use of specialist software to create interactive assessments in Maths; and a student-owned digital platform to produce and disseminate student research.

Find out more about these and previously-funded projects on our webpages.

It’s not too late to apply!

Our second call will be closing on Friday 5th May at 12 noon. This means you still have time to talk to us about your ideas and submit your application!

Detailed guidance on the application process can be found on our website. Get in touch now!

Using PowerPoint to create engaging simulations

Last academic year, two PhD students teaching in the Department of International Relations embarked on a journey to make their course more engaging for students. They applied for an LTI SPARK! Grant to support the development of PowerPoint-based simulation games.

Here are the highlights of the project following its completion and evaluation. Quotes are from the two recipients of the grant, Gustav Meibauer and Andreas Aagaard Nohr.

Related outcomes and resources on our website

The rationale

Issues addressed

Currently available IR simulations for teaching purposes are often high-cost, high-tech and, above all, time-intensive: even when they do not require custom-made software packages with difficult interfaces and expensive licensing fees, they are almost without exception targeted at course-long or at least day-long activities that demand extensive preparation from both teachers and students, with book-length manuals, intricate rules, integrated assessment tools, and specific secondary literature. This makes them irrelevant to most undergraduate teaching, especially in introductory courses that often treat a specific concept only once in a 50-minute class. But that should not mean that undergraduate students simply never get the chance to profit from interactive gaming and simulations.

Why simulations? The pedagogy behind the technology

The project is based in the pedagogy of experiential learning, student ownership and self-directed learning, and the use of gaming activities and simulations in the classroom.

Simulations and interactive gaming solutions have long been known to enhance understanding both of specific empirical examples and, more importantly, of theoretical linkages, because they let students experience, rather than only hear about, the factors and variables involved in such different topics as foreign policy decision-making, diplomacy, great power dynamics or identity formation.

Students do not simply passively receive the PowerPoint (as in a standard presentation), but play it, change its outcome (within given options), determine what the next slide will show, and are thus actively involved in what they learn. This is thought to encourage deeper learning.

It is not the outcome of the simulation that matters, but the process of its coming-about. Just as in real-world foreign policy or diplomacy, there is not necessarily a correct path to take or a right decision to find – instead, by playing the simulation, students engage in discussion and compromise, take into account a multitude of different factors, realize their own mistakes, and get a feeling for the complexity of decision-making in multiple settings.

Why PowerPoint?

There is no need to change the course design, overhaul the entire teaching approach, or experiment wildly outside what is currently known and available. Instead, our project aims at diversifying teaching where possible to integrate student-centered, activity-based teaching and learning. It does so by bringing out the true potential of already available teacher skills and learning technologies.

We do this by employing PowerPoint, specifically in-built features such as hyperlinks, interactive pathways, or audio or video integration that can be used interactively rather than passively.

Implementation

Integration into the course

By necessity, simulations do not stand alone: they are accompanied by a set of theoretical structures and debates in which students talk and theorize about their experiences during the gaming activity.

Each of our simulation classes consisted of an introductory stage of about 5 minutes, a simulation stage with multiple discussion periods interspersed (moderated variously by the class teacher or by the students themselves, depending on class dynamics) of about 20-30 minutes, and a discussion stage to tease out theoretical insights of about 20-30 minutes.

Takeaways

“Andreas and Gustav came up with a formula that gave students ownership of their own decisions and helped them to apply their knowledge to difficult real world dilemmas. Students were able to experience the consequences of both the cautious and risk taking approach and the many nuances and customs that apply to foreign policy decisions.”

Sarah Leach, Senior Learning Technologist on the project

Student experience

Overall, results indicate a positive impact on student learning: on average, students found the simulations enjoyable and felt they allowed for stimulating discussion in the classroom and for an experience of expertise and immersion in the topic of the class.

Not only did the simulations add an important additional method to diversify the learning experience and complement more “traditional” instruction styles, they also led to greater overall participation rates in class (compared to more conventional class types, as assessed by teachers, observers, and the students themselves), allowed students to bring in their own previous experience and learn from their peers, and let them try out learned theoretical concepts in class.

They gave students a language to talk about new and often highly abstract concepts, and allowed for smooth and often in-depth reflection and discussion. The simulations also proved entertaining and supported positive group dynamics in class, such as self-moderated discussion and quick exchanges between students without teacher interference.

The teacher’s views

They allowed us as teachers to transition more easily towards roles of moderator and facilitator, as students interact with the simulation and with each other without input or instruction from the teacher.

Students sometimes worry that the simulations divert from the “actual” material they are supposed to learn from the course, which means additional effort has to be put into developing desired learning outcomes and appropriate theoretical teaching materials.

“Andreas and Gustav have demonstrated that engaging students with technology doesn’t have to be daunting or cutting edge, a simple tweak can dramatically change the learning experience for students. To make this step even easier, they have written a ‘how to’ guide for any teachers who want to create simulations for small class teaching. The guide covers every aspect from defining the learning objectives and creating the slides through to teaching plans and evaluation. It’s a great resource.”

Sarah Leach

If you are interested in using technology to support teaching, learning and assessment like Andreas and Gustav, then please get in touch with LTI to discuss your ideas. Take a look at LTI’s SPARK! Grants for more information.

NMC Horizon Report 2017: Key trends and challenges of technology in the global HE Sector

The 2017 (Higher Education) Horizon Report was released a week ago by the New Media Consortium (NMC). It reflects on what the global HE sector is doing with and about (educational) technologies, how it deals with key trends and how it faces critical challenges. Most interestingly, it forecasts which technologies will be taken up by the sector in the short, medium and long term.

It is one report that really repays attention, and it is short enough to be read in a lunch hour. For a shorter read you might look at their summary 10 talking points. Or you can stay with this blog post and have a look at my summary of this year’s trends, challenges and technologies below. I explain some of the terminology used here in my summary of the 2015 Horizon report; technology concepts are explained or linked to below.

Trends, challenges, technologies:

Trends

Key trends in the sector drive technology adoption, and in the short term these are:  Blended Learning Designs and Collaborative Learning. 

In the mid-term, the sector is driven by Growing focus on measuring learning and Redesigning Learning Spaces.

In the long term, the sector is driven by Advancing Cultures of Innovation and Deeper Learning Approaches. 

Challenges

The sector faces plenty of challenges, some of which we know, understand and are able to meet ‘easily’ because we have been facing them for a while now: Improving Digital Literacy and Integrating Formal and Informal Learning.

The Achievement Gap, which “reflects a disparity in the enrollment and academic performance between student groups, defined by socioeconomic status, race, ethnicity, or gender”, and its ‘complement’ challenge to Advance Digital Equity present more of a headache and are a difficult demand on the sector as a whole.

The report identifies as wicked challenges those that are “complex to even define, much less address”: Managing Knowledge Obsolescence, and Rethinking the Roles of Educators.

The former refers to the rapid rate at which technologies crop up and (possibly) vanish again, while the latter refers to how teachers are to cope with that and with the shift towards proper student-centred learning. The latter was mentioned as a key trend for the first time in the 2010 report and continued to appear until 2013, after which it was dropped, presumably as something that had happened. That this year it is highlighted as a wicked challenge suggests that a) it has become a much more pressing issue and b) educators continue to struggle with adapting to the changes in the Higher Education sector, and/or the 21st century as such.

Technologies take-up projection:

In about one year: Adaptive Learning Technologies (Think of it as mimicking the luxury of personal tutoring which reacts to individual students’ progress through their learning as it happens); Mobile Learning (harnessing the awesome computing power that almost all of us have in our phones these days).

In about 2 to 3 years: The Internet of Things (your fridge tells your phone to tell you to buy milk; Moodle tells your students’ Apple Watch to remind them to eat porridge and finish their dissertation); Next-Generation LMS (Moodle, but a bit slicker? So Moodle with a make-over…).

In about 4 to 5 years: Artificial Intelligence (intuitive computer tutors; HAL); Natural User Interfaces (“speech recognition, touchscreen interfaces, gesture recognition, eye-tracking, haptics, and brain computer interface”).

How good at predicting are Horizon reports? In a follow-up post I will offer an overview of ten years of Horizon report predictions.

February 24th, 2017 | Ed-Tech news and issues, TEL Trends

Copyright, the future and Brexit – what does it mean for education?

The following post is based on a post published on the UK Copyright Literacy blog by LSE’s Copyright and Digital Literacy Advisor, Jane Secker, and Chris Morrison, Copyright and Licensing Compliance Officer at the University of Kent. An edited and abridged version appears below.

I’ve now been to two recent events on the future of copyright in the UK following our exit from the European Union. Whatever your views on Brexit, we can’t deny it will happen but there is much uncertainty about what it means for education and what copyright implications there might be. This is because in recent years much UK copyright legislation has been amended following directives from the European Union. And there are important new changes going through the European Parliament currently on Copyright in the Digital Single Market. On 12 January 2017, the Commission’s proposal was debated by the European Parliament’s Committee on Legal Affairs (JURI). This week EIFL (Electronic Information for Libraries) issued a statement on the need for copyright reform across Europe, supporting the statement issued by five key organisations (including LIBER, and the European Universities Association) on ‘Future-proofing European Research Excellence‘. LIBER are also calling for more change to copyright to give Europe a real opportunity to become a global leader in data-driven innovation and research.

So what does the future hold for copyright in the UK? In October last year I was interested to read this LSE blog post from Professor Alison Harcourt of Exeter University, but I thought I would share a few thoughts from recent events. First, in October last year I attended a meeting at the Intellectual Property Office (IPO) to discuss the copyright implications of Brexit for the higher education sector. Then, earlier this week, I attended a conference organised by the Journal of Intellectual Property, Law and Practice (JIPLP). Both events were an opportunity to understand more about how important copyright and IP are, particularly in the context of international trade but also of the increasingly global education offered by the UK. In both meetings all agreed that following Brexit the UK would not have the same relationship with the Court of Justice of the EU, but no one was clear whether decisions of that court might be taken into account by English judges. There were references here to important recent cases on issues such as whether hyperlinking is copyright infringement.

However what is clear is that not only does Brexit mean Brexit (and of course we all know exactly what that means) it also means we are unlikely to get a new copyright act in the UK any time soon. This is despite the view of Sir Richard Arnold, British High Court of Justice judge, that we are much in need of one. On Monday he gave us eight reasons why the Copyright Designs and Patents Act 1988 (as amended and revised) was long overdue a major overhaul, technology being his first reason and Brexit being the last. This last reason was a recent addition – for the original list of seven reasons see his Herchel Smith IP lecture from 2014. However he concluded by saying that copyright is unlikely to be a priority for parliament over the next few years.

So in these dark, rather depressing January days, is there any light on the horizon? The IPO suggested Brexit might be an opportunity to rethink copyright and make it fit for the UK. Organisations such as EIFL and Communia hope their lobbying will convince Brussels that reforming copyright to support education and research is vital. We would like to think that those within the research and education world might be able to play a significant role in shaping the future of copyright in the UK. But it remains to be seen…

January 24th, 2017 | Conferences, copyright, Ed-Tech news and issues, Open Education, Reports & Papers, Teaching & Learning

Mahara, Blogging and Peer Review

Edgar Whitley from the Department of Management tells us about using Mahara as a tool for blogging and peer assessment, and its benefits for teaching, learning and assessment.

LTI in the spotlight

Last week staff from LTI attended the Association for Learning Technology’s annual conference (ALT-C 2016). It was an eventful three days at the University of Warwick for the team, with five of us presenting a total of four papers and one keynote. And oh, we also won the Learning Technologist of the Year Award!

Learning experiences and virtual learning environments: It’s all about design!

A Design for Learning: Learning Experiences for the Post-Digital World – Peter Bryant

In the first part of his presentation, Peter described his new approach to teaching and learning whereby seven learning experiences (found, making, identity, play, discontinuity, authenticity and community) can “shape, influence and enhance the opportunities for students to learn, to share learning and to teach others in a post-digital world”. Participants then discussed how existing learning technology tools could be used to create such learning experiences.

You can find a summary, reflections and slides from Peter’s presentation on his blog.

Innovating from the Outside In: a Creative Hub to Change eLearning Practice – Sonja Grussendorf

Sonja introduced the audience to LTI’s “creative hub”, a project bringing together film makers, artists and designers, and explained how it is being used to design a VLE that can “accentuate communication between participants; support independent learning, collaboration and student creativity; facilitate peer learning and peer assessment and deliver ongoing, two-way feedback opportunities.”

Physical teaching and learning spaces

Learning Spaces: Roles and Responsibilities of the Learning Technologist – Kris Roger and Sarah Ney

While Sonja was presenting on virtual spaces, Kris and I discussed physical teaching and learning spaces. More specifically, we reflected on a recent project to develop new active learning spaces at the LSE, which made us wonder what our roles and responsibilities as learning technologists are in the design of learning spaces.

Copyright and eLearning: who else but Jane Secker?

Jane presented a paper AND a keynote at ALT-C this year!

Lecture Capture: Risky Business or Evolving Open Practice? – co-presented with Chris Morrison, Copyright Licensing and Compliance Officer at the University of Kent.

Jane and Chris presented the findings from a recent survey on institutional attitudes towards intellectual property issues in relation to lecture capture and the content used in lectures. They also reflected on the relationship between good policy and good practice, and how to support staff in implementing and encouraging both.

Keynote: Copyright and eLearning: Understanding our Privileges and Freedoms

Jane presented an entertaining, moving and thought-provoking keynote on how a better understanding of copyright can empower copyright users and educators.

You can view Jane’s full keynote on YouTube.

Last But Not Least: We won!

LTI was presented with the prestigious Team Learning Technologist of the Year Award last Wednesday for its work on Students as Producers. The award recognises “outstanding achievements in the learning technology field and the promotion of intelligent use of Learning Technology on a national scale”.

“LSE are proud to be selected as the Learning Technology team of the year, especially in its 10th year. This recognition by our peers is a celebration of the innovative work being done by academic and LTI staff to better the student experience and provide more opportunities for engaging, positive and transformational education with technology.” – Peter Bryant, Head of LTI

Here are a few pictures from the evening:


Crowdsourcing for Massive Engagement

The London School of Economics and Political Science embarked on a crowdsourced, gamified approach to education and citizenship, harnessing the massive open online space to engage a community of learners in writing a model UK constitution.

The project is a Campus Technology Innovators Award winner for 2016.

Please visit the Campus Technology website to read more about this innovative project, which was led by LSE Professor Conor Gearty of the Institute of Public Affairs (IPA) in partnership with Learning Technology and Innovation (LTI).


More information on the project can be found on our blog.