
Could Danish foundations’ reliance on reporting platforms cause clashes with researchers over how to register impact?

A recent British scandal demonstrates the problematic nature of research project reporting. Duncan Thomas takes a look at how to handle potential tensions and contradictions around the impact of funded research.

Duncan A. Thomas, Senior Researcher, The Danish Centre for Studies in Research and Research Policy, Department of Political Science, Aarhus University

Research policy scholars know it can be challenging to design and deploy ‘successful’ research funding instruments across global science. Sometimes advancing ‘excellent’ science is a funder’s goal; at other times, funders need to promote sustainability, as seen in Denmark’s ‘green’ research and innovation agenda.1

Similarly, research funders must often ‘walk a fine line’ in dealing with the outputs of their various funding inputs. This includes how funders understand ‘impact’, how they register it in their internal or external-facing systems, how they account for it in their planning and strategy processes, how they discuss it with their boards, ministry sponsors and other principal stakeholders, and so on.

This is about how you – as a funder – choose the ‘right’ research impact assessment (RIA) approach,2 and select the ‘best’ digital software platform, so that funding recipients can report back to you about research progress and impacts in a meaningful way.

The Researchfish scandal

A popular ‘platform of choice’ for Danish private research foundations to register impacts of their funded research has been Researchfish. This is ‘academic faculty management software’3 pitched to funders as a one-size-fits-all tool to help them ‘track, study, and communicate the total impact of their research.’4

Researchfish is sold by the US/UK-owned private firm Interfolio, and is currently used by the Leo Foundation, the Lundbeck Foundation, the Novo Nordisk Foundation, Steno, and Villum Fonden. It was recently dropped by Kræftens Bekæmpelse (the Danish Cancer Society).5 It is also used by NordForsk.

What happened in the UK from March to April 2022 provided an ‘object lesson’ in the unexpected and unintended effects of using such software. The scandal involved public researchers, the national public funding agency UK Research and Innovation (UKRI), and Researchfish.

UKRI eventually apologised to the entire UK research community. It may yet face legal challenges regarding its alleged breach of GDPR rules.6 UKRI’s reputation also suffered terribly from what researchers perceived as it defending and prioritising the reputation and interests of a private commercial brand, instead of upholding and safeguarding public value, academic freedom of speech, and the data privacy rights of UK researchers.

The Twitter War

The scandal was precipitated by the tweets of a few researchers, who argued that Researchfish’s ‘grant reporting forms’ were ‘time-consuming and repetitive’, and that the system overall had some ‘cumbersome aspects’7 they did not enjoy.

Rather than treat these tweets as ‘user feedback’ of sorts, Researchfish – if online reporting is to be believed – entered into an almost ‘unholy alliance’ with UKRI. It aimed to force the ‘offending’ researchers to delete, and/or publicly apologise for, their apparently ‘abusive’ tweets that allegedly attacked Interfolio and its staff directly.

It seems that Researchfish and UKRI implied that if the researchers in question did not comply with Interfolio’s takedown/apology ‘request’, their UKRI funding would be cancelled. According to some sources, Researchfish shared the Twitter handles of the affected researchers with UKRI. UKRI then – allegedly – used its confidential databases to access personally identifying data on the researchers involved, before emailing them to indicate that their ‘abusive’ behaviour towards Interfolio/Researchfish was not permitted and could result in grant termination.8

Degrees of impact

Forcing researchers to use a ‘tick box’, yes-or-no binary choice system to register impacts may have been part of the problem here. Could the fact that Researchfish is now an apparent ‘dominant design’ for impact reporting in science – signalling over-reliance on a one-size-fits-all approach – create future conflicts? If Danish (and other Nordic) funders are to avoid such scandals, there is a need for nuance, and for acceptance of complexity and flexibility.

One way a funding institution might signal such understanding could be to remove, or at least expand upon, the current yes/no ‘tick box’ accounting for impact. This could involve adding supplementary ‘degrees of impact’, going beyond simple numerical indicators and measurements.

These indicators should aim at registering at least four sets of impact-related process dynamics (see Figure; a minimal illustrative sketch follows the list):
• Degree of impact intentionality
• Degree of impact predictability
• Degree of impact exceptionality (or sensationality)
• Degree of impact discoverability
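
To make the idea concrete, the following is a minimal, purely hypothetical sketch of how such degree scales might replace a boolean tick box in a reporting platform’s data model. The names (Degree, ImpactRecord) and the four-level ordinal scale are my own illustrative assumptions, not the schema of Researchfish or any other existing platform.

from dataclasses import dataclass
from enum import IntEnum

class Degree(IntEnum):
    """A simple ordinal scale, replacing a yes/no tick box (illustrative)."""
    NONE = 0
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class ImpactRecord:
    """One registered impact, scored along the four process dynamics."""
    description: str
    intentionality: Degree   # was the impact intended by the funding?
    predictability: Degree   # was it expected, given project objectives?
    exceptionality: Degree   # does it exceed a 'normal' impact baseline?
    discoverability: Degree  # how easily could the funder learn of it?

# Hypothetical example: an impact stumbled upon by accident,
# like the RCN table case described below.
accidental = ImpactRecord(
    description="Funder reused a published table in internal reforms",
    intentionality=Degree.NONE,
    predictability=Degree.LOW,
    exceptionality=Degree.MODERATE,
    discoverability=Degree.LOW,
)

Even a crude four-point scale like this lets a funder record that an impact happened without forcing it into a binary, and lets analysts later distinguish, say, intended-and-expected impacts from accidental discoveries.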

Intentionality and predictability derive from previous research on whether impacts were intended under the specified funding, and whether they could be expected given the funding and the projects’ objectives.9

Exceptionality captures whether the impact of the funded research went above and beyond what might reasonably be considered a ‘normal’ baseline of impact for this research activity, within that societal system.10 The UK’s Research Excellence Framework ‘impact cases’ are an example of this. There, a ‘sensation’ – or verified impact narrative – is created or co-constructed by researchers and their non-academic partners.

I became aware of the discoverability process by accident, at an event featuring a representative of the Research Council of Norway (RCN). He explained they had made use of a certain table from a paper written by me and my colleagues; the table had supported RCN’s internal deliberations and procedural reforms.11 This impact was thus only ‘discovered’ because I happened to be in the right room at the right time – and a colleague attending, Jochen Gläser, gave this kind of accidental impact registration a label: its ‘discoverability’.

Implementing such degree scales in an RIA platform would of course be no mean feat. It would require preparatory work, testing and validation of the designs and measures – and hopefully also, as a vital and inherent part of such reforms, inclusion and participation from the relevant funded research communities.

Nevertheless, I hope my ideas on how funders may need to open up their current, potentially overly narrow RIA approaches and platforms provide food for thought about avenues to explore, and other ways to approach registering and accounting for the impacts of their funded research projects.

1
2 https://doi.org/10.1186/s12961-022-00888-1
3 https://en.wikipedia.org/wiki/Interfolio
4 https://www.interfolio.com/researchfish/
5 See https://leo-foundation.org/en/grants-and-awards/for-grantees/;
https://lundbeckfonden.com/en/grants-and-prizes/grantrecipients/reporting;
https://impact.novonordiskfonden.dk/grant-reporting-impact-assessment/;
https://steno.dk/en/topics/researchfish/;
https://veluxfoundations.dk/en/content/villum-fonden-introduces-researchfish-0;
https://www.cancer.dk/forskning/stoette-til-forskning/researchfish/
6
7
8 https://www.ukri.org/news/ukri-update-on-researchfish/
9
10 https://doi.org/10.1093/reseval/rvz032
11 https://doi.org/10.1007/s11024-019-09385-2

Photo: Vadym Pastukh