Communicating Research

Communications practices for university-led research

[Image: Twitter icon flying across an Altmetric report screen]

Altmetric top 100s: social media for research communication

Any set of putative league tables seems out of kilter with true academia and its best traditions. They do, however, appear in increasingly wondrous ways and meet increasingly familiar responses. While institutions make use of whatever statistic seems to validate micro-supremacy (even, at times, to the point of action by advertising standards bodies), those unhappy with their scores find ample defence in what seems a truism: that the measurements being taken, and being used for judgement, have missed the point of what academia does and what universities are for.

With the Altmetric Top 100 now a regular piece of pizzazz in the academic meta-press, it’s worth considering what issues are raised by this notion of social media popularity as a meaningful measure of… something. Something not only meaningful to measure, but to table-ize, weapon-ize and perhaps subject to a heap of other verb-ized and activized methods of aggressive and celebratory competition.

Researchers can call upon many tools and processes in their work, and those in the realm of communication are particularly powerful, particularly misunderstood and particularly misused.

With the move towards a league of ‘most popular research-based story among the social-media-active’, it’s worth asking a few questions about what we get from measuring social media, and particularly where the gaps lie in our understanding of what it might mean. Partly to gather some of what research on bibliometrics has done to shape thinking in that area. Partly to consider whether there might be a shift in the nature of research and its communication. And, more practically, to ask whether this can help articulate new questions for research communicators as to where their energies might be concentrated, and where help can be best given and most appreciated.

Social media and the single researcher

Researchers have made good use of social media. It brings small, disparate communities into contact with each other, it can offer a voice to those in niche areas, and it can extend the reach of well-packaged memes. But popularity? How can the world of social media popularity meet the world of research? At best this seems a relationship that will experience rocky patches: the world of small steps and big truths meets one of ungoverned pronouncements and ill-prepared audiences.

Research in its most traditional sense had a trajectory through which tireless, hyper-dedicated and often very talented people shared their discoveries with those who were best placed to verify or challenge the step forward. Once verified, these were delivered to increasing ranks of generalists or alternative specialists, reaching those who might use the knowledge or be interested in it.

One problem with a media popularity league table is that it encourages a belief that the process is still the same – just with bigger numbers. The pleasure of seeing a hit count rise around a piece of research news is interpreted through a historical hangover from the more formal bibliometrics: I’ve got 4,000 digital impressions, so it ‘feels’ as though I had 4,000 readers of an equivalent printed pamphlet. It ‘feels’ as though I’ve had a recommendation from 4,000 peers. It ‘feels’ as though I’ve had 4,000 users of my knowledge, who may in turn pass it, orally, to many more.

An item that has 4,000 hits and two comments, for example – how popular is it? What is its reach as a meme – 2? 4,000? Somewhere in between and unknown? Closer to 2 or closer to 4,000? Does this popularity have any plausible correlation with the kind of impact researchers would hope to have or governments would hope to measure?
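To make that gap concrete, here is a toy sketch – all numbers hypothetical – of how far apart the plausible readings of such a statistic sit:

```python
# Toy illustration with hypothetical numbers: the same item supports
# wildly different estimates of "reach" depending on what a hit means.
hits = 4000
comments = 2

# Visible engagement is vanishingly small relative to raw impressions.
engagement_rate = comments / hits
print(f"visible engagement: {engagement_rate:.2%}")  # 0.05%

# If we guess at the fraction of hits that represent attentive readers,
# the estimated readership spans three orders of magnitude.
for assumed_reader_fraction in (0.001, 0.01, 0.1, 1.0):
    readers = int(hits * assumed_reader_fraction)
    print(f"assuming {assumed_reader_fraction:.1%} are readers: {readers}")
```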

Creators of the tables trade on an assumed correlation between the traditional measures, long criticised for various failings, and this new super-barometer, which yields excitingly high numbers. While it is useful to see numbers rise around who has touched an item of digital collateral, those numbers are probably used more to justify communications workloads than to persuade the world of new truths.

Yet there is ample commentary persuading the world at large that media representation charts will surface otherwise hidden gems; it notes that citations heavily favour certain subjects and certain styles, and offers this as a fairer alternative. The idea was being mooted in 2010 by Priem and Hemminger: “A novel and promising approach is to examine the use and citation of articles in a new forum: Web 2.0 services like social bookmarking and microblogging. Metrics based on this data could build a ‘Scientometrics 2.0,’ supporting richer and more timely pictures of articles’ impact.”

The THE announcement of the Altmetric Top 100 this year reads: “The term ‘alt-metrics’ is short for ‘alternative metrics’, and refers to the practice of rating papers on things like social media mentions and online citations, rather than simply looking at citations in other journals.” Is this hinting at some improvement in the practice? The words “simply looking at citations” cast the archaic practice as a bleak contrast to a service that covers the far corners of the world wide web. Nor should a word like “rating” be underestimated.

When we get these tables, are we looking at the reach and impact of research? The accessibility and democratisation of knowledge? Or the best work of the nation’s PR and press offices?

Traditional bibliometrics, which evolved alongside the pace of production in the 1920s and 30s, may no longer be an adequate guide to tracing memes across the blogosphere. But how should we weigh an evaluation of research based on its mediated reproduction? In what the authors described as “the first large-scale characterization of the drivers of social media metrics”, Haustein, Costas and Larivière analysed the correlation between altmetric scores and the citation index in ‘Characterizing Social Media Metrics of Scholarly Papers’ (2014).

Mainstream media and social media correlate strongly, according to Haustein, Costas and Larivière. Both differ from patterns of citation, though, and there are interesting differences in the way types of item appear in the data comparisons. Differences emerge with the length of documents and with subject areas:

“while longer papers typically attract more citations, an opposite trend is seen on social media platforms. Finally, contrary to what is observed for citations, it is papers in the Social Sciences and humanities that are the most often found on social media platforms. On the whole, these findings suggest that factors driving social media and citations are different.” – Haustein, Costas and Larivière
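As a gloss on what such a comparison involves, here is a minimal sketch of the kind of rank-correlation analysis these studies run. The counts below are invented for illustration; they are not the paper’s data:

```python
# Sketch of a rank-correlation check between citations and tweets.
# The counts are made up; Haustein, Costas and Larivière work with
# millions of real records.
from scipy.stats import spearmanr

citations = [0, 2, 5, 12, 40, 3, 1, 25, 7, 0]    # hypothetical per-paper citations
tweets    = [15, 0, 1, 3, 0, 60, 2, 1, 0, 110]   # hypothetical per-paper tweets

rho, p = spearmanr(citations, tweets)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A rho near zero (or negative) would echo the finding that the factors
# driving social media attention and citations are different.
```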

Is this supplementary to citations, or should we be leaping off the bandwagon of who is citing what in favour of who is tweeting what? There have been attempts to emphasise that the early tools of Webometrics and Scientometrics, which scaled poorly and offered poor data based on search crawlers, have now been supplemented by social media measuring methods. Priem, Piwowar and Hemminger (2012) were fans of the idea that these tools are non-invasive: “Importantly, these tools do not create new types of scholarly practice so much as they facilitate existing practice. Social reference managers like Mendeley, for example, are an extension of paper-based bibliography collections academics have maintained for centuries, while Twitter facilitates the sort of informal conference chats that have long vivified the academy’s invisible colleges” (‘Altmetrics in the wild: Using social media to explore scholarly impact’).

This only seems likely if we pursue the notion that all bibliometric methods have a wholly neutral effect on the subjects being measured. The different forms of media should not, however, be mistaken for cheaper and more convenient alternatives to the longer-standing methods. Social media does not follow a model in which information is gradually tested against authorities before being gradually released. Nor does it necessarily follow recommended communications practices: no single source of trusted information, no cohesive messaging, no equal acceptability to sender and recipient. Moreover, it creates and shapes new kinds of involvement with information, and these are not neutral. In the context of a ‘top of the researcher pops’, there must be an influence on dissemination practice, and it seems unlikely that this has no effect whatsoever on some of the research practice in disciplines that choose to engage.

Rules of engagement

Each media form (“channel” if you will) is something of its own, and over-infatuation with an alt-metric-type scoring system may mask this. If social media is significantly different from other forms, then what is its effect upon research information, and how might we turn the system to best use? Should the unique patterns of electronic communication help us, for example, to isolate methods of direct use to research dissemination? Cameron Neylon introduces his 2014 post-conference blog post with a recognition that we can look at the data around social media and still not grasp what it is we are looking at:

“It’s important to realise that these data are proxies of things we don’t truly understand. They are signals of the flow of information down paths that we haven’t mapped. To me this is the most exciting possibility and one we are only just starting to explore. What can these signals tell us about the underlying pathways down which information flows? How do different combinations of signals tell us about who is using that information now, and how they might be applying it in the future? Correlation analysis can’t tell us this, but more sophisticated approaches might. And with that information in hand we could truly design scholarly communication systems to maximise their reach, value and efficiency.” – Cameron Neylon

The differences in patterns of engagement, and the definitions of it, are receiving scholarly interest, often with due note that engagement studies have been the domain of marketing departments. Gilstrap notes a confusion that stems directly from the corporate-led enthusiasm for social media. For corporate interaction, ‘engagement’ is something that leads to enhanced commerce, and even in that realm there are debates as to the true value of widespread yet token engagement. Gilstrap finds few scholars joining his quest to understand what engagement means for social media users – whether through curatorial sharing, friending, following, commenting, or simply agreeing to a lightning feed. The study of research engagement through social media is at a similarly early stage, although the community for it is vast.

Engagement is something researchers want – more than they want hit counts. For those genuinely shaping new knowledge, the primary aim is to “effectively share” that knowledge, in the words of the UK REF. Effective sharing of quality research is rarely something that would register as a Twitter tornado on the alt-metric Beaufort scale. An effective share is more likely to register with another laboratory, with a fellow specialist, or with a tight community of practice. The kind of engagement a researcher might seek could be the acceptance of a new truth by an influential group, or at least its consideration in a debate. Is this at all measurable? Does its appearance on social media help?

If, as Thomas Baekdal often says, there is “little correlation between traffic and sale” (cf. e.g. ‘Insight: Sales vs Traffic vs Intent’), then we should also be strongly querying the correlation between engagement with research and engagement with social-media-that-happens-to-be-about-research-activity. That is not to say that the forms measured by citation or book sales have any necessarily closer correlation, but there is something so speedy and so unchecked about social media. Multiple citation is a kind of popularity, but one that does show, at some level, that a work has become an accepted reference in its discipline. Sheer volume of mediated research coverage shows something else, surely – possibly little more than that a community of journalists and PR specialists has gathered to reproduce something for their familiar audiences, an activity classically illustrating Bourdieu’s “circulation circulaire de l’information” (Sur la télévision, Raisons d’agir, p. 22). And as for retweets, likes and so on, these hit counts are as much casual coincidences of browsing habits (time, place, mood) as they are genuine desire to follow up on an idea.

This has something in common with ‘content marketing’, which had a brief boom earlier in the decade and saw marketing managers seeking infomercial writers on a grand scale. Mark Higginson has his own tireless and well-researched view on the value that commercial content marketing cannot possibly have, and there are lessons in it for those uploading research ‘content’. While there is a caveat for those who don’t need sales and for whom content is of itself (regardless of readership) valuable to have online, the main point is again that companies and individuals make fundamental mistakes in their beliefs about how these forms of media work. I am suggesting above that academics hold a token belief in former models of esteem and mistakenly see social media statistics in similar terms. In ‘Content Marketing is Wish Fulfilment’, Higginson brings something similar to an understanding of the patterns of web use:

No business can make the expenditure on the quality and quantity of content required to win significant attention pay a decent return on their investment; at least not in terms of getting people to care enough about them to turn this into sales. The argument that it ‘creates awareness’ is intangible, but more to the point is likewise untrue because of how attention flows on the web.

It’s perhaps tricky, even tricksy, to bring commercial commentaries in alongside research communication. But this simply reflects other, much more deeply rooted confusions – the kind that lead to brand-market-style league tables of academic material. It is of critical importance to understand “how people’s attention flows on the web.”

Attention on the web flows with a tall head and a long, long tail. There are strong reasons why, for example, while there are (or were) a number of bookshops in every town competing for market percentage, there is only one online bookshop worldwide. There is only one major online auction site. There is only one wizard-school entertainment franchise. These are Leviathan examples of the Matthew effect and the grotesquely long tail of digitally developed popularity (cf. Chris Anderson, longtail.com). The long-tail phenomenon may be good news for niche interests and for new delivery mechanisms to sparsely represented areas of scholarship, but this is not what is being lauded in a forum of media popularity.

More pertinent is the distortion caused by disproportionate popularity. From Higginson we learn, “The rule is that only a minority of pages receive a majority of people’s attention over any given period of time.” This, he points out, is directly to do with the way websites link to each other and how algorithms conduct their searches, and it is exacerbated in the quick-fire kiss-chase of social media. Disproportionate popularity is a useful phrase (cf. Réka Albert and Albert-László Barabási, ‘Statistical mechanics of complex networks’, 2002). When we begin to use digital media to measure, to sift or to understand the nuances of a wider research context, we should surely do so in the complete knowledge that the results are exactly that: disproportionate.
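The mechanism is easy to caricature in code. Below is a toy rich-get-richer simulation – a sketch of the dynamic Albert and Barabási formalise, not their model – in which each new visit goes to a page with probability proportional to the attention it already has:

```python
# Toy rich-get-richer simulation: each new visit is drawn with
# probability proportional to a page's existing visit count.
import random

pages = [1] * 20               # twenty pages, one seed visit each

for _ in range(100_000):
    # random.choices weights the draw by current popularity
    winner = random.choices(range(len(pages)), weights=pages)[0]
    pages[winner] += 1

pages.sort(reverse=True)
top_share = sum(pages[:5]) / sum(pages)
print(pages[:5])
# Typically a quarter of the pages ends up with over half of all
# visits: a minority of pages takes a majority of the attention.
print(f"top 5 of 20 pages hold {top_share:.0%} of the attention")
```

Run it a few times: which pages win changes from run to run, but the disproportion itself does not.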

Not necessarily useless, not necessarily a bad thing, but something to take an even bigger pinch of salt with than academics already do.

Alt-metrics · Pierre Bourdieu · Twitter for research

Research Communications • December 19, 2017

