
Misha Teplitskiy

Soya Park

Neil Thompson

David Karger

July 19th, 2023

Does anyone learn anything new at conferences? Measuring serendipity and knowledge diffusion at academic conferences


The rise of digital networking and conference platforms in recent years has led many to question the value of conferences on environmental and accessibility grounds. Yet, one frequently cited example of their value is the opportunity for serendipitous unplanned interactions. Exploiting evidence from conference scheduling conflicts and subsequent publications, Misha Teplitskiy, Soya Park, Neil Thompson and David Karger find these kinds of chance encounters to be both significant and measurable.


Academics and those in many other industries invest substantial resources into conferencing. The costs are significant in both financial and environmental terms. For example, a single academic conference can represent as much as 7% of an individual’s annual CO2 emissions. What are the benefits of conferencing and are they worth it? The evidence for answering this straightforward question has been surprisingly limited and uneven.

One area where evidence is accumulating is on the networking function of conferences. For example, in an experiment at a medical school Boudreau and colleagues found that placing two scientists in the same room for a networking event – a common occurrence at conferences – increases the probability that they’ll co-author a grant proposal by 75%. Echoing this result, Campos and colleagues found that when a major political science conference was unexpectedly cancelled, the chances that a would-be attendee co-authored a paper with another would-be attendee decreased by 16%.


What is less clear is the diffusion function of conferences – how much do attendees really learn? The question has only become more pressing in recent years with the rise of alternative diffusion channels and conferencing formats. Papers that were once available before publication only through personal connections or conference presentations are now frequently posted as publicly available preprints. The quality and uptake of virtual and hybrid conferencing formats have also increased, with an attendant widening of their accessibility. These large-scale changes reframe what the choice of attending conferences represents – it is increasingly not a choice between having and not having access to information, but between accessing it online, i.e. reading a preprint or watching a video stream, and accessing it synchronously in person. So do conferences lead to diffusion above and beyond the alternative access option, and if so, how?

We can separate diffusion into two types: “directed diffusion”, when attendees see presentations of interest, and “serendipitous diffusion”, when attendees are incidentally exposed to presentations they did not plan to see. The history of science offers abundant cases of serendipity, but how important is serendipity relative to “normal” directed diffusion? Is serendipitous diffusion common enough that individuals should take it into consideration when deciding whether to attend a conference?

Our recent preprint offers unusually strong evidence that conference attendees really do learn a lot in presentations, and that a sizable fraction of the diffusion is serendipitous, i.e. the attendees did not plan on learning the ideas. We use data from the conference scheduling software Confer, which lets attendees Like talks and generate a personal schedule. The data include the personal schedules of 2,404 attendees of 25 computer science conferences held in person between 2013 and 2020. We connect these schedules to attendees’ future publications and, to proxy learning, measure whether attendees cited the presented papers.


Of course, it is very likely that conference attendees Like and attend presentations that interest them, and subsequently cite them more than un-Liked ones. Because Liking papers is not random, we do not compare citations of Liked vs. un-Liked papers. Instead, we focus only on the set of papers a specific user Liked and exploit the fact that the conference sometimes schedules some of these presentations at the same time, reducing, on average, the person’s ability to see them. If seeing presentations is important, we expect the person to cite Liked papers without scheduling conflicts more than those in conflicting timeslots. If seeing presentations is not that important, citation rates should be unaffected by scheduling conflicts. Additionally, we exploit the fact that users will often Like only a paper or two in a particular session, but once they arrive there, they are likely to see the other papers in the same session too. We call these in-same-session-but-not-Liked papers “serendipity” papers. If serendipity is important for diffusion, we expect attendees to cite serendipity papers more in sessions they are more able to attend.
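To make the two comparisons concrete, here is a minimal sketch of the scheduling-conflicts design in Python with pandas. This is not the authors’ code: the table layout, column names, and data below are illustrative assumptions.

```python
import pandas as pd

# Hypothetical data: one row per (attendee, presented paper), recording
# whether the attendee Liked the paper, whether its timeslot conflicted
# with another talk the attendee Liked, and whether the attendee cited
# the paper in a later publication.
rows = pd.DataFrame({
    "attendee": ["a1", "a1", "a1", "a1", "a2", "a2", "a2", "a2"],
    "liked":    [True, True, False, False, True, True, False, False],
    "conflict": [False, True, False, True, False, True, False, True],
    "cited":    [True, False, True, False, True, True, False, False],
})

# Directed diffusion: among Liked papers only, compare citation rates
# for papers presented without vs. with a scheduling conflict.
liked = rows[rows["liked"]]
print(liked.groupby("conflict")["cited"].mean())

# Serendipitous diffusion: the same comparison among un-Liked papers
# presented in sessions the attendee was drawn to by a Liked talk.
serendipity = rows[~rows["liked"]]
print(serendipity.groupby("conflict")["cited"].mean())
```

If seeing presentations matters, the no-conflict citation rate should exceed the conflict rate in the first comparison; if serendipity matters, the same gap should appear in the second.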

Using this “scheduling conflicts” research design, we find that the diffusion function of conferences is substantial. Attendees cite Liked papers presented without scheduling conflicts about 50% more often than those presented in timeslots they could not attend due to conflicts. About 22% of the diffusion caused by presentations is to non-Liked “serendipity” papers. To our knowledge, this is the first time that the contribution of serendipity to diffusion has been quantified. The estimate shows that even when serendipity is defined in this limited sense, it represents a large fraction of what attendees can expect to learn at a conference. Accounting for all the channels of serendipity, such as when attendees learn unexpected ideas in hallway conversations, would increase its estimated contribution to diffusion even more.
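As a back-of-the-envelope illustration of what a 22% serendipity share means, one can decompose presentation-driven diffusion into its directed and serendipitous parts. The magnitudes below are made up for illustration, not the paper’s estimates.

```python
# Illustrative (made-up) magnitudes: extra citations attributable to
# actually seeing a session, split by whether the paper was Liked.
extra_cites_directed = 7.8     # diffusion to Liked papers
extra_cites_serendipity = 2.2  # diffusion to un-Liked, same-session papers

share = extra_cites_serendipity / (extra_cites_directed + extra_cites_serendipity)
print(f"Serendipity share of presentation-driven diffusion: {share:.0%}")  # 22%
```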

Our paper uses a novel research design based on scheduling conflicts to answer an old question – do conferences work for diffusing ideas? At least in the computer science conferences we study, the answer appears to be yes, and a lot of that diffusion is serendipitous. This work also presents clear questions for future research. Can these same benefits of in-person conferences be obtained through other conferencing formats? Do other formats enable serendipity to the same (or even a higher) extent? And, ultimately, are the in-person conference benefits worth the costs, particularly when taking accessibility into account?



The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Adapted from Mikael Kristenson via Unsplash.



About the author

Misha Teplitskiy

Misha Teplitskiy is an Assistant Professor at the University of Michigan School of Information. He is a sociologist of innovation, studying how institutions and technology can help accelerate scientific discovery. His recent work focuses on understanding and mitigating biases in research assessment.

Soya Park

Soya Park is a Ph.D. student at MIT CSAIL. She studies and develops collaborative technology for groups and examines whether it is inclusive within those groups, focusing in particular on the user experience of contextually marginalized individuals, such as low-status workers. Her research is in human-computer interaction. She has received an IBM PhD fellowship and a Kwanjeong fellowship.

Neil Thompson

Neil Thompson is the Director of the FutureTech research project at MIT’s Computer Science and Artificial Intelligence Lab and a Principal Investigator at MIT’s Initiative on the Digital Economy. Previously, he was an Assistant Professor of Innovation and Strategy at the MIT Sloan School of Management, where he co-directed the Experimental Innovation Lab (X-Lab), and a Visiting Professor at the Laboratory for Innovation Science at Harvard. He has advised businesses and government on the future of Moore’s Law, has been on National Academies panels on transformational technologies and scientific reliability, and is part of the Council on Competitiveness’ National Commission on Innovation & Competitiveness Frontiers.

David Karger

David Karger is a member of the Computer Science and Artificial Intelligence Laboratory in the EECS department at MIT. His primary interest is in developing tools that help individuals manage information better. This involves studying people and current tools to understand where the problems are, creating and evaluating tools that address those problems, and deploying those tools to learn how people use them and iterate the whole process. He draws on whatever fields can help: information retrieval, machine learning, databases, and algorithms, but most often human computer interaction.

Posted In: Academic communication | Conferences | Higher education
