Taking first steps in the Twitterverse can be a nerve-wracking experience, with new users unsure what thoughts to tweet to the world. Here, Paul André, Michael Bernstein and Kurt Luther attempt to fill the void and give some insights into what makes interesting and valuable microblog content.
While microblogging has been found to have broad value as a news and communication medium, and increasingly as a valuable tool in academia, little is known about more fine-grained content value. Our tweets might be funny, interesting, confusing, or just plain boring, but with little audience feedback it’s hard to tell; we’re often tweeting into a void. If we understood what content is valued (or not), and why, we might be able to 1) derive design implications for better tools or filters; and 2) develop insight into emerging norms and practices to help users create and consume more valued content.
To understand Twitter content value, we designed a site called Who Gives a Tweet. In exchange for rating friends’ tweets, users of the site would get anonymous feedback from strangers (Mechanical Turkers), and followers (if their followers signed up). In this snapshot of data, 1,443 users rated 43,738 tweets, drawn from 21,014 Twitter accounts. Our user population was skewed towards “Informers” (people who share information), and technologists, largely due to the sites that pushed traffic our way. While our results provide insight into certain subsets of Twitter, future work should investigate whether different communities exist with different value judgments.
Broadly, we found that a little more than a third (36 per cent) of tweets were considered worth reading, while a quarter were not worth reading at all. (39 per cent elicited no strong opinion.) Despite the social nature of Twitter, tweets about current mood, activity or location were particularly disliked, while questions to followers and information sharing were most worthwhile. (A full listing of ratings by category can be seen in the image below.) Personal details and whining were commented upon: “too much personal info”, “He moans about this ALL THE TIME. Seriously.” Location check-ins were particularly called out: “foursquare updates don’t need to be shared on Twitter unless there’s a relevant update to be made”, or, more simply: “4sq, ffs…”
We are able to confirm some commonly held truths about content, as well as quantify the value of categories. Reasons why tweets were liked and disliked provide some insight into accepted practice and emerging norms, as well as simple lessons to improve content.
- With Twitter’s emphasis on real-time information, old news, even links that were fresh this morning, can be seen as annoying.
- One way to add value to links is to include a personal opinion or thought.
- Misuse of retweets and @mentions can make it feel like you’re overhearing someone else’s conversation, and overuse of #hashtags can make it hard to find the real content. E-mail or direct messages might be more appropriate; though unique hashtags are valued when users want to follow a question.
- Cryptic tweets and a lack of context were particularly disliked: “just links are the worst thing in the world”.
- The standout reason for not liking a tweet was it being boring: “… and so what?”
- Professionals should consider why people are following them, perhaps limiting personal details: “I unfollowed you for this tweet. I don’t know you; I followed you because of your job.”
- News organizations may want to pique a user’s curiosity enough to click a link, but not give all the information away in the tweet.
- Conciseness, even within 140 characters, is valued.
- Whining was disliked, while happy sentiments and a human, honest, outlook were valued.
We see two directions for utilizing these results, and draw a comparison to other sites built around social media updates. Facebook, for example, has invested significant time and experimentation in determining whom and what to show in one’s newsfeed. Twitter, on the other hand, has been successful despite, or because of, very simple presentation (essentially showing all updates). Yet does Facebook do any better than the 36 per cent of tweets deemed Worth Reading that we found on Twitter?
Thus, the first direction is technological intervention: design implications to make the most of what is valued, or to reduce or repurpose what is not. The second starts from Twitter’s currently simple presentation and takes a social intervention approach: helping to inform users about perceived value, audience reaction and emerging norms, but ultimately leaving users in control of what they share and what is seen. Both approaches have the potential to address issues of value and audience reaction, improving the experience of microblogging for all.
Further discussion of future work directions, limitations, and detailed analysis can be found in the full paper.
Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.
As far as I understand, the paper shows results of a study that involved general users, not only “academic tweets”. It seems obvious that not everyone agrees on what an academic tweet is, but one could infer it means something more specific than a tweet by someone who happens to work in an educational institution. Academics have all types of conversations; even those conversations that could be labeled “academic” take place in different contexts, with different themes and approaches, nuances, agendas, etc.
I am unavoidably attracted to tools developed using the Twitter API, and I am convinced very interesting conclusions can be drawn from their development, use and the data they provide. Nevertheless, I am seriously concerned that we have come to accept as a fact that we would need “technological intervention: design implications to make the most of what is valued, or reduce or repurpose what is not”, especially when the judgement criteria are so inherently subjective and context-specific. When is critique “whining”? When is geolocation data useful, and when is it “boring”? Maybe millions of users have already read that link, but what about the other potential timelines with users who consume Twitter asynchronously?
The problem I have with these “approaches [with] the potential to address issues of value and audience reaction” (and we can add Topsy here, the service used by the altmetrics tool) is that they too closely resemble what many of us have done for a living, which is market sentiment research. Any trained humanist or social scientist familiar with sentiment research knows that the ways “audience reaction” is classified avoid thorough qualitative analysis and are almost by default simplistic and pre-analytical.
What matters here is that we are supposed to be discussing academic or scholarly social media, not opinions about a new soda. A given user (or millions of them, if you will) may think a certain tweet is boring or devoid of value, but that same tweet may be of great interest to a different type of academic user interested in precisely that which most find uninteresting. Social media services like Twitter and Facebook also have a way of self-regulating: most capable users learn good practices by trial and error. Though the need for a larger discussion on how to establish guidelines for institutional social media good practice remains urgent, I find attempts to suspend users’ ability to freely decide the type of social media feed they want or need, through crowdsourced rating based on utterly subjective categories, frankly horrific. By that token, libraries would stock only whatever is on the current best-sellers list. Long live an academic’s right to decide what she finds of value without imposing it on others!