
Helena Vieira

January 11th, 2017

‘Collective intelligence’ is not necessarily present in virtual groups


Estimated reading time: 5 minutes



Do groups of smart people perform better than groups of less intelligent people?

Research published in the journal Science in 2010 reported that groups, like individuals, have a certain level of “collective intelligence,” such that some groups perform consistently well across many different types of tasks, while other groups perform consistently poorly. Collective intelligence is analogous to individual intelligence, but operates at the group level.

Interestingly, the Science study found that collective intelligence was not related to the individual intelligence of group members; groups of people with higher intelligence did not perform better than groups with lower intelligence. Instead, the study found that high performing teams had members with higher social sensitivity – the ability to read the emotions of others using visual facial cues.

Social sensitivity is important when we sit across a table from each other. But what about online, when we exchange emails or text messages? Does social sensitivity matter when I can’t see your face?

We examined collective intelligence in an online environment in which groups used text-based, computer-mediated communication. We followed the same procedures as the original Science study, which adapted the approach typically used to measure individual intelligence. In an individual intelligence test, a person completes several small “tasks” or problems. Analysis typically shows that scores on these tasks are correlated: if a person does well on one problem, they are likely to do well on the others.

We used four different tasks that have been well studied in prior research on teams: one task was brainstorming ideas, one was deciding among a set of alternatives, one was negotiating a shared task, and the last was a more complex task that required a bit of all three. Prior to conducting our study, we wanted to ensure that the four tasks were appropriate – that individual intelligence mattered to their performance. We recruited 91 participants to complete the tasks working as individuals (rather than in groups). We found that an individual intelligence factor emerged and that this factor was significantly correlated with scores on a well-known intelligence test (r=0.318; p<0.01). In other words, performance on these four tasks was a good measure of individual intelligence.

For the main study, we recruited 324 participants who were organized into 86 groups of 3-5 members. These groups used Gmail chat software to work on the four tasks.

The results were not what we expected. The correlations between our groups’ performance scores were either not statistically significant or significantly negative, as shown in Table 1. The average correlation between any two tasks was -0.05, indicating that performance on one task was essentially unrelated to performance on the others. In other words, groups that performed well on one task were no more likely than other groups to perform well on the remaining tasks.
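The logic of this check can be illustrated with a small sketch in Python. The scores below are invented for illustration (they are not data from the study): we compute pairwise Pearson correlations between groups’ scores on the four tasks and average the off-diagonal entries. A mean correlation near zero is the pattern described above – knowing how a group did on one task tells us little about how it did on the others.

```python
import numpy as np

# Hypothetical scores for 6 groups on 4 tasks (columns: brainstorming,
# decision-making, negotiation, complex task). Illustrative values only.
scores = np.array([
    [7, 3, 5, 4],
    [2, 8, 4, 6],
    [6, 5, 2, 7],
    [4, 6, 7, 2],
    [8, 2, 6, 5],
    [3, 7, 3, 8],
], dtype=float)

# Pairwise Pearson correlations between the task columns.
corr = np.corrcoef(scores, rowvar=False)

# Average off-diagonal correlation: a value near zero means performance
# on one task carries little information about performance on the others.
off_diag = corr[~np.eye(4, dtype=bool)]
print(corr.round(2))
print("mean pairwise correlation:", off_diag.mean().round(2))
```

With real data, this same calculation over the study’s 86 groups is what produced the -0.05 average reported above.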

Table 1. Correlations


The Science study conducted two separate experiments and found task performance to be correlated in both; accordingly, when the authors conducted a factor analysis of task scores, they found one dominant factor (the “collective intelligence” factor) that explained most of the variation in performance scores across tasks.

In our factor analysis, no single factor emerged (see Figure 1). This is in contrast to our individual-level results as well as the results of face-to-face groups reported in Science. We ran a factor analysis on all combinations of three or more tasks (see Figure 2) and found that no matter what tasks we included in the factor analysis, a single intelligence factor did not emerge.
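To make “a single dominant factor” concrete, here is a simplified sketch using a principal-component proxy (an eigenvalue decomposition of the correlation matrix – a simplification, not the exact factor-analysis procedure used in the paper). It contrasts a toy correlation matrix where tasks are strongly inter-correlated, as in the face-to-face groups in Science, with one where tasks are uncorrelated, as in our online groups:

```python
import numpy as np

# Case A: four tasks strongly inter-correlated (face-to-face pattern).
corr_a = np.full((4, 4), 0.5)
np.fill_diagonal(corr_a, 1.0)

# Case B: four tasks essentially uncorrelated (our online groups).
corr_b = np.eye(4)

def dominant_share(corr):
    """Share of total variance explained by the largest eigenvalue
    (a principal-component proxy for a single 'general' factor)."""
    eig = np.linalg.eigvalsh(corr)
    return eig.max() / eig.sum()

print("correlated tasks:  ", round(dominant_share(corr_a), 2))  # 0.62
print("uncorrelated tasks:", round(dominant_share(corr_b), 2))  # 0.25
```

When tasks correlate, one factor captures well over half the variance; when they do not, each eigenvalue captures an equal quarter and no factor dominates – which is the pattern we observed, regardless of which subset of tasks we analysed.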

Figure 1. Factor analysis results


Figure 2. Combination of tasks


Our findings challenge the conclusion reported in Science that groups have a general collective intelligence analogous to individual intelligence. Our study shows that no collective intelligence factor emerged when groups used a popular commercial text-based online tool. That is, when using tools with limited visual cues, groups that performed well on one task were no more likely to perform well on a different task. Thus the “collective intelligence” factor related to social sensitivity that was reported in Science is not collective intelligence; it is instead a factor associated with the ability to work well using face-to-face communication, and does not transcend media.

Intelligence mattered when individuals performed the tasks we used, but it was unrelated to group performance on the same tasks. Groups of more intelligent people did not perform any better than groups of less intelligent people, and no other factor consistently predicted performance across multiple tasks. In other words, what makes a group work well together in an online environment does not depend on any single group characteristic, nor on a combination of individual characteristics.

This is rather disappointing, because it suggests that managers cannot rely on virtual groups that have previously done well on one task to perform well on a new type of task (at least when considering teams with little experience working together). There may be some consistency within a type of task (creativity, decision making, negotiation), but groups that are highly creative are not necessarily good at making decisions or negotiating, at least when working online with limited social cues.

Although there is an intuitive appeal to selecting highly intelligent group members in the hopes of getting better performance, our research suggests that this approach is no more likely to lead to good group performance than selecting group members based on other characteristics – at least for modern work that involves email and text messaging. Group member intelligence may or may not be necessary for good group performance, but it is not sufficient, except for tasks that do not require consensus (such as brainstorming). As a result, we caution managers that when putting together a group of individuals to complete a task other than brainstorming, choosing the best and the brightest is not a good strategy. Group performance depends more on task requirements and the communication processes groups use than on the intelligence of group members.



  • This blog post is based on the authors’ paper Not As Smart As We Think: A Study of Collective Intelligence in Virtual Groups, in Journal of Management Information Systems, Volume 33, Issue 3. Unfortunately, the article is not open access, but interested parties may contact the authors for more information or a copy of the article.
  • The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
  • Featured image credit: By geralt, under a CC0 licence, via Pixabay
  • Before commenting, please read our Comment Policy.

Jordan B. Barlow is an assistant professor of Information Systems & Decision Sciences at California State University, Fullerton. His two main research interests are group collaboration and IT security. He has published research in several top information systems journals.



Alan R. Dennis is Professor and John T. Chambers Chair of Internet Systems at Indiana University. Prof. Dennis is a Fellow of the Association for Information Systems. His research focuses on three main themes: team collaboration; IT for the subconscious; and the Internet. Prof. Dennis has also co-founded six IT start-up companies, the most recent of which uses big data and analytics to help parents select good baby names.


