
Fixing the crisis state of scientific evaluation

by Jonathan Tennant and Charlotte Wien

Published on Apr 06, 2020

Originally at OSF.IO: https://osf.io/preprints/socarxiv/f4zk9/ DOI: 10.31235/osf.io/f4zk9


Jonathan P. Tennant1 and Charlotte Wien2 

1 IGDORE, Indonesia. ORCID: 0000-0001-7794-0218 
2 Southern Denmark University Library, Odense, Denmark.
ORCID: 0000-0002-3257-2084 

Our scientific evaluation system is in a state of crisis. Every researcher has heard of the “publish and/or perish” culture: the fact that individual researchers are evaluated primarily on the basis of where we publish, rather than on any intrinsic merit or quality of our work. Countless pages in newspapers, blogs, and journals have been filled with criticism of this evaluation system, and in particular of the pernicious use of journal brands and metrics like the Journal Impact Factor (JIF) (1–3). Without rehashing well-known arguments, the simple fact is that there is virtually no empirical or theoretical evidence, or reasonable justification, to support the use of impact factors or journal brands in research evaluation.
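For readers who have not seen it spelled out, the JIF is simply a two-year citation ratio, calculated at the journal (not article) level; the notation below is ours, but the definition follows the standard one used by the index’s producer:

```latex
\mathrm{JIF}_{Y} \;=\;
\frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}
     {\text{number of citable items published in years } Y-1 \text{ and } Y-2}
```

Nothing in this ratio says anything about the merit of any individual article in the journal, which is precisely why its use in evaluating individual researchers is so difficult to justify.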

There is now talk of how to ‘incentivise’ researchers to adopt certain practices; for example, making their research articles publicly accessible, reproducing existing research results, or sharing the data associated with their own results (4, 5). Incentives take the form of financial rewards, increased citations, public attention, and other things that advance either the egos or the careers of researchers. The idea that researchers require ‘incentives’ to do their jobs effectively and efficiently does not seem particularly noble to us. People are not incentivised to be kind, morally upstanding, or ethical. People are good because it is the right thing to do. Scientists should not need incentives for doing rigorous and responsible research; we should do it because it is our job and because it is good for science (6, 7).

Intellectual merit is not something that can or needs to be identified, measured, ranked, and optimised. 

How did it get to this state? 

To look for the roots of this evil, we must dig back in time to the mid-1960s, when librarian and businessman Eugene Garfield founded the private Institute for Scientific Information (ISI).

ISI's primary product was a printed list of scientific journals. Many similar lists were already available, but this one was different: Garfield’s list also contained information about who had cited which journals. Prior to this, citation was just citation – a qualitative acknowledgement of prior work. But as the scholarly literature grew, Garfield’s citation list ultimately grew into a database of researchers’ academic behaviour, documenting the genealogies and networks through which scholarly knowledge diffuses. We all know this list well, as it is now owned by Clarivate Analytics, whose primary product is the Web of Science (WoS): producer of, you guessed it, the JIF. Just like the Kardashian family, most scientists now simply wish it did not exist and try to ignore it.

It was around this time that digital technologies began to become more widespread, and with them the ISI database began to find its way into more research institutes around the world. It became a valuable resource for librarians to work out which journals were being cited (used) the most by researchers, and therefore which were most worthwhile to dedicate their acquisitions budgets to.

Around the turn of the millennium, New Public Management (NPM) was relentlessly gaining strength across the western academic world (8, 9). The agenda of NPM was remarkably simple: to apply organisational principles of economic reform across the higher education and academic sector in order to increase its efficiency. Such reforms were born in the private sector, comprising a formula of marketisation, privatisation, and, most importantly here, performance measurement. Research, like any other industry, produced outputs; primarily, research papers (10). Therefore, as in the private sector, administrative reform to increase efficiency should lead to more outputs; for example, by facilitating greater free competition, and diverting funds away from ‘low-performing’ sectors and towards ‘high-performing’ ones. This treacherous ideology invaded academic culture at the institutional and national level, where, all of a sudden, research communities became beholden to an administrative beast that valued competition over collaboration. Instead of our cultural institutions striving to produce knowledge for the betterment of global society, they began to behave more like corporate enterprises. One consequence was a shift in notions of value towards premeditated outputs (results), which are explicitly outside the control of researchers, and away from processes, which are under their control.

Scholarly traditions of collegiality and collaboration, and the quest for knowledge for the betterment of society, were gradually replaced by a new quest for “impact” and “excellence” (11, 12). However, excellence is not an outcome or a result; excellence is a process. You can be an excellent researcher and still produce results that have no “impact”.

Likewise, you can be a terrible researcher producing unreliable results and still get published in Nature (13). Researchers began to trade their knowledge for this facade of excellence, in the hope of finding a place in a competitive ‘job market’ increasingly defined by job insecurity and unstable short-term contracts. Thus, it was no longer the knowledge and skills needed to conceive and conduct a scientific investigation that mattered, so much as the impetus for that investigation to yield something that could be measured, and then one’s skill in marketing that work as something of value. Under the growing influence of NPM, this meant that in order to maintain intellectual credentials and standing among one’s colleagues and community, as well as job security and the chance of promotion, researchers were forced to realign traditional ideals of merit to the limits imposed upon them by administrators.

However, institutes still required new ways of continuously documenting and evaluating the productivity and success of their staff in order to meet newly-imposed efficiency goals. Such assessments needed data. Inadvertently, what Garfield had created was a data source that made it very easy for institutes to assess their researchers, irrespective of whether it was appropriate or not. At its core, what citation represented never changed; it was what it became interpreted as, and used for, that did. In 2005, Jorge E. Hirsch proposed the “h-index”, which became another primary indicator of a researcher's productivity (14). A researcher's h-index is calculated from their number of publications and the citations those publications have received – exactly the kind of data Garfield had been collecting since 1964. The h-index quickly became an international de facto standard for researchers' productivity, because such metrics were relatively trivial to calculate, easy to obtain, and simple to use for comparative purposes. As with the JIF, countless pages have now been written on the use and mis-use of the h-index (15, 16). Yet, both are still prominent in research evaluation cultures, especially in western Europe and North America, where neoliberal ideologies are most prominent (3, 17).
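To make concrete just how crude this measure is, here is a minimal sketch of the h-index calculation as Hirsch defined it (14); the citation counts in the example are invented purely for illustration:

```python
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts.

    The h-index is the largest h such that the researcher has at least
    h papers that have each been cited at least h times (Hirsch, 2005).
    """
    # Rank papers from most to least cited.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h


# Invented example: five papers cited [10, 8, 5, 4, 1] times give h = 4.
print(h_index([10, 8, 5, 4, 1]))  # -> 4
```

An entire career is thereby reduced to a single integer that says nothing about field, authorship contribution, career stage, or the content of the work itself – limitations that are well documented in the literature cited above (15, 16).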

NPM, metrics like the h-index and JIF, and readily available databases like WoS collided to form an unholy trinity that transformed academia into a productivity machine. A whole industry now thrives around this, often with the same organisations providing both publishing services and the metrics for evaluating publication activities, in a rather explicit display of conflicts of interest (18, 19); researchers have become the financial dairy cows of our knowledge society, and are being milked dry. It is a tyrannical system designed to abuse the cultural capital of scientific knowledge and suppress our intellectual elite, all for the benefit of a few corporate enterprises and their political desires; a problem that seems to be more universal, and a direct consequence of late-stage capitalism, rather than anything specific to academia.

The consequences are immeasurable 

This neoliberal point-scoring system organises academia into faux-meritocratic hierarchies. It represents a shifting of ethical research boundaries into ones politically defined by whatever the evaluation flavour of the month is. An inevitable consequence is a new and toxic ‘class’ system based on linear and superficial rankings and their coupling with notions of prestige, at both the individual and institutional level. This is exacerbated by the coupling of the evaluation system to the reward system, which is essentially binary: you either get money to do more research, or you don’t. As such, this hierarchical management scheme encourages fabrication, nepotism, and embellishment of research results, while simultaneously forcing researchers to compete to migrate between different output-based castes. Academia then contains only the winners of this ego-driven lottery, while those of the ‘lower class’ are forced into external careers. Academics have little choice but to treat this for what it is: a game of politics, influenced by discrimination, social background and status, institutional privilege, narcissism and egoism, citation rings, and gift authorships. A cult mentality still guards academia, the modern version of the historical ‘old boys’ clubs’, where non-conformity through intellectualism is punished by career execution and unjust dismissal from the system. It is a competitive dogma that has gone on for decades in our universities without restraint.

The dire consequences of this are the continued erosion of all aspects of a healthy academic culture, and the cognitive destruction of generations of our finest intellects (20, 21). 

Academics are overworked, risk- and conflict-averse, and face increasingly gruelling working environments for poor career prospects on unstable budgets and contracts. What better way to keep them apathetic towards the global economic, social, and political problems that we all face? Academics have lost the confrontational and enlightened courage of speaking truth to power, being now professionally vulnerable for daring to be political or controversial. The only activism now seen at any real scale from academia comes when precious public funding sources are threatened. Only then do scientists take to the streets to protest with gusto, while simultaneously gifting away the research that this same public funds to private publishers.

There are no protests against the billions of dollars in net profits that the commercial sector removes from higher education institutes each year. Activism against the structural inequities that plague academia has now been mostly reduced to yelling online into the social media void. Suppression of intellect in any socio-political context is a fantastic way to impose institutional constraints on a demographic that is highly likely to try to disrupt conservative governments. Keep them busy, undervalued, and under constant threat. This is the real, far more sinister, and far more damaging “publish or perish” paradigm: do what we say or you are fired.

Another truly bizarre element of this is that the metrics and databases that have created such a culture are based on information that researchers informally provide for free through their publishing behaviours, and which is then sold back to their employers in order to make them work more efficiently. Our academies and academics are all at once the producers, the data, and the consumers of what has been termed the “para-academic industry” (22).

The commercial sector further capitalised on this, promising (perhaps) glory to researchers who produced ‘positive results’ and chose to publish them in one of their journals. This is why innovation in the scholarly publishing sector lags two decades behind any other industry (23).

Virtually every great ‘innovation’ in scholarly communication is just another journal or another metric dressed in different clothes, within a system in which the slightest hint of disruption becomes neutralised by an injection of cold, hard, capital (18). 

The problem pervades all levels. Governments and institutes, instead of focusing their power and finances on critical and foundational elements of scholarship, such as effective reproducibility, peer review, and digitally reliable infrastructure, are caught in a relentless, and pointless, competition. Together, all of this makes research evaluation reform increasingly difficult. It requires the not-particularly-simple effort of simultaneously breaking the stranglehold that NPM has imposed upon our research culture, stopping the outsourcing of metrics and evaluation to third-party commercial vendors, and recognising that, if we are going to evaluate at all, metrics like the h-index and JIF are simply not up to this complex task. If we want to change the culture of scientific research, it has to start by acknowledging this complexity and scale.

The “Open Science movement” seems to be beginning to grasp the scale and complexity of this problem (24). But it cannot solve it by simply focusing on replacing or diversifying incentives (7): these are neoliberal solutions to neoliberal problems (25). Measures based on existing data sources will continue to be inherently biased and subjective, and will continue to be gamed and abused (26). New or additional metrics will simply diversify the things that researchers need to do in order to have their work ‘count’. All of this pushes further towards a ‘product-oriented’ academia, where science is treated ever more like a private enterprise.

This fundamental model is conceptually misaligned with the goals of scientific research, and instead we need to focus our collective efforts on reshaping how our entire scholarly research management system functions. 

This has to start with strategic accountability. For this, we can turn to Tony Benn and his five democratic questions. What power do you have? Where did you get it? In whose interests do you exercise it? To whom are you accountable? And finally, how do we get rid of you? It is perhaps past time that we asked these powerful questions of our administrative overlords, the para-academic industry, and the chokehold system that they have imposed upon research. The short versions of the answers are that the power was stolen, it came from researchers, it is exercised for commercial interests, they are accountable to no one, and we cannot get rid of them (yet). Those who have created this perverse incentive system have all of the power and none of the responsibility; they have no skin in the game.

At some point, those involved in drafting promotion, tenure, and assessment guidelines must be held accountable for the misconduct of drafting policy statements that lack evidence. People established those rules, and those rules can be rewritten. If we know evaluations are baseless, senseless, and harmful, then tolerating them is unjustified. If researchers applied the same illiterate standards to their own work, it would constitute a form of malpractice, misconduct, and a severe breach of duty. Careers, lives, and tens of billions of dollars every year are channelled through this system, part of an increasing institutionalisation within state and financial structures based around illiberal standards. The politics involved flies in the face of the ethics, objectivity, and rationalism we uphold as researchers. The institutionalised tenure system was built by an authoritarian power that was not elected, is not democratic, and has not been used responsibly.

Possible solutions 

As mentioned above, simply providing “alternative metrics” and new forms of measurement and evaluation misses the point. Research is still treated, through the lens of corporatisation, as something that needs to be incentivised, evaluated, and controlled in order to be better or to produce better or more outputs. No major breakthrough in science ever occurred because it was incentivised.

Here are three potential solutions that could simultaneously banish, or at least severely reduce the negative impact of, the unholy trinity: 

1. Policing the police. Create a new layer of monitoring and accountability that incentivises those in charge of the incentives to provide better incentives. This could be at multiple levels, for example, by having a governing body that oversees evaluation regulations for research assessment bodies. There is always a bigger fish. 

It could also involve having bibliometric or peer review experts sit on evaluation panels to monitor processes. Or, it could require those in charge of research assessment to pass simple competency exams or sign ethics statements. One side effect of this is that it would force metrics vendors to function to a much higher standard, and thus actually inject competition into the scholarly communication market. You could apply the same standard to metrics too, and create another super-metric that evaluates the statistical performance of metrics like the JIF or h-index. One metric to rule them all.

2. Open up peer review. This is not the same thing as “Open Peer Review” (27), but it is related. We already have a universal system of evaluation that covers our entire scholarly research record, stretching back for decades: peer review. At present, we know almost nothing about how peer review actually works (28). However, if peer review is exposed, it can be remodelled, standardised, and itself evaluated to provide an enriched, contextual, and rigorous peer-to-peer framework of evaluation that is closer to the actual research (29). We do not need multiple layers of evaluation, and thus journal brands and their associated metrics become virtually redundant overnight.

As an added extra, you then get all of the additional system-wide benefits associated with increasing transparency and accountability in peer review. 

3. Start from scratch. No university mission statement says anything like “Give our labour and knowledge away to for-profit companies for free”, or “Publish work in high-impact-factor journals”, because universities know that this would be a joke. If the general public ever found out how institutional forms of research evaluation operated, science would probably lose much of its credibility and trust. Therefore, evaluation guidelines should be drafted that are empirically based on the values and mission statements of universities. This will force universities to pay particular attention to new forms of societal impact, and foster a new wave of public engagement with science. Or, as Alex Lancaster puts it, “how do we use open science approaches in the context of retooling our institutions to benefit actual living and breathing humans (scientists and non-scientists)?” (30). Institutes should ask themselves what they value most about research and their workforce: critical thinking, rigorous research processes, creativity, taking risks, being politically and publicly active, truth-seeking, being a good mentor, providing support for students, being kind and compassionate, and promoting a healthy research environment. Once what is most valued is known, a system can be integrated that rewards people for doing those things.

If science is supposed to serve the public, then its impact on the public should be what matters most. How does scientific knowledge shape the cultural legacy of nations? How does it generate a sense of pride and cohesion in historical discoveries? How does it influence our perspective on the world? When we start to think about the context of science in society, we realise just how woefully inadequate journals and metrics really are.

Politics might have a lesson for us here. Whatever dictionary definition of politics you have read, it is wrong. Politics is really about getting someone else to do something that you want, while making them think it is either their own idea or in their own best interests. At the moment, we know that researchers are, by and large, incentivised to do bad research and to play into an ineffective and corrupt communication system. They will play any game they are told to play, so long as they believe that playing it is in their best interests.

Each of the ideas above works because it is frictionless for researchers and does not require any additional effort. Researchers are already doing peer review. Vendors are still vending, and institutes remain responsible for management. By synchronously tackling the combined triad of NPM, commercial metrics vendors, and irresponsible metrics usage, researchers no longer have to worry about “gaming the system” or the invisible but real threat of “publish or perish”, and can simply focus on doing good research, because that is now in their best interests.

Simple solutions to complex problems 

What seems clear, though, is that this is not going to happen magically within our existing institutional structures. Inspiring calls for collective action on research culture reform will require co-operation from our existing institutes on a scale that is likely never to materialise (31). Fundamentally, we need science to be doing something different than it is right now. We need science to embrace different forms of complexity to solve the major problems that we face (24), and a convenient framing for this is the United Nations Sustainable Development Goals (SDGs) (32, 33). At the moment, the reward system does not incentivise researchers even to begin to acknowledge the scale and complexity of this task. Our institutes are so anachronistic, and so locked into ‘tradition’ and the para-academic industry, that by the time they have finished squabbling about alternatives to the JIF, whatever goals we set for achieving the Sustainable Agenda by 2030 will have been missed. Global climate change is indifferent to the journals we publish in.

So, what if we had new institutes? New organisations? A federated, connected network of organisations that starts again from scratch, embraces the multi-faceted complexity of the real-world problems that we face, and is not constrained by artificial neoliberal boundaries.

What are the goals we are trying to achieve? The UN Sustainable Development Goals. How do we ‘incentivise’ researchers to work towards those goals? We reward them for making tangible steps towards achieving them, and for being able to demonstrate societal impact.

Often, the things that matter most to us simply cannot be measured, or ‘indicated’, and this is where peer-to-peer evaluation has an increasing role to play in the future. 

This is a simple institutional vision, based around a healthier, more egalitarian, and more humane research culture. Employment prospects are stable and sustained, with fair wages backed by a committed funding system. Innovation and risk-taking are placed at the core of advancement, balanced in parallel by slow, careful, and rigorous science. Global participation is encouraged through virtual collaborative environments, specifically aimed at meeting the goal-setting framework of the SDGs. Abuse of power dynamics simply does not exist, and neither does the influence of the para-academic industry. Instead, collective intelligence (and responsibility) leads to a total refocusing of the functional role of science in society, where it enjoys greater prominence through being more reliable and trustworthy, and ordinary citizens are empowered by this (34). Critically, the notion that higher education and science need to be subject to the same rules of private management is rendered obsolete: the “knowledge economy” does not function like a market, so let us stop treating it as such.

Deconstructing institutes and then rebuilding them is far more difficult than simply building new ones and allowing the old ones to become redundant in the face of overwhelming superiority. The best way to overcome obstacles is not to destroy them, but to let them destroy themselves. Displacement like this happens all the time in society, and perhaps our academic institutes are overdue such a shock to the system. So, the task is to develop new institutes that have ‘open scholarship’ strategies at their core, working explicitly towards the Sustainable Agenda 2030. We now know that the ‘affordability problem’ of increased public investment in science, research, and development was a political mirage, and there is nothing reasonable to prevent investment in such a new system. Scientific knowledge is not scarce; it is abundant, sustainable, and essential, and it is time to give it the support that it deserves.

Obviously, though, utopic, well-funded, and rationally-organised systems like this do not just spring up overnight. But what if there was a chance that they could? On 30 March 2020, UNESCO announced that it had mobilised the resources of 122 nations for the biggest ever international collaboration on open science, to tackle the SARS-CoV-2 pandemic.

Together with the global scientific community, they have demonstrated that the technical capacity for global open science has always been there, and that the primary barriers to widespread adoption have mostly been political. UNESCO's agenda here focused primarily on:

• Pooling of knowledge, measures to support scientific research, and the reduction of the knowledge gap among countries; 

• Mobilisation of decision-makers, researchers, innovators, publishers and civil society to allow free access to scientific data, research findings, educational resources and research facilities; 

• Reinforcement of links between science and policy decisions, to meet societal needs; and

• Opening of science to society while national borders are closed. 

There must be a parallel commitment from UNESCO and all member nations to manage new forms of research evaluation that place societal impact at their core. The role of science, and open science, in society has never been more striking and foundational than it is right now: it is a matter of basic human decency, social justice, and national security. This vision from UNESCO, though, cannot and will not be realised within our existing institutional structures. They are inappropriately designed, and still beholden to an evaluation system that prevents them from engaging with society in this manner. Even in the wake of the COVID-19 pandemic, and the total disruption of global research institutes, all that many institutes have been able to muster are token gestures that ‘extend the clock’ on tenure applications. If UNESCO truly wants to mobilize the global scientific community to strategically tackle not just the coronavirus pandemic, but all of the global issues that our society faces, its member nations must commit – with resources, and not just words – to a new form of institute that is well-equipped to do so. Five years ago, the very idea was laughable. If we wait another five years, it will be too late.

References 

1. B. Brembs, K. Button, M. Munafò, Deep impact: unintended consequences of journal rank. Frontiers in human Neuroscience. 7, 291 (2013). 

2. V. Lariviere, C. R. Sugimoto, The Journal Impact Factor: A brief history, critique, and discussion of adverse effects. arXiv:1801.08992 [physics] (2018) (available at http://arxiv.org/abs/1801.08992). 

3. E. C. McKiernan, L. A. Schimanski, C. Muñoz Nieves, L. Matthias, M. T. Niles, J. P. Alperin, Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. eLife. 8, e47338 (2019). 

4. Publications Office of the European Union, Evaluation of research careers fully acknowledging Open Science practices: rewards, incentives and/or recognition for researchers practicing Open Science (2017), (available at https://publications.europa.eu/en/publication-detail/-/publication/47a3a330-c9cb-11e7-8e69-01aa75ed71a1/language-en). 

5. S. E. Ali-Khan, L. W. Harris, E. R. Gold, Motivating participation in open science by examining researcher incentives. eLife. 6, e29319 (2017). 

6. R. Owen, P. Macnaghten, J. Stilgoe, Responsible research and innovation: From science in society to science for society, with society. Sci Public Policy. 39, 751–760 (2012). 

7. J. Wilsdon, J. Bar-Ilan, R. Frodeman, E. Lex, I. Peters, P. F. Wouters, Next-Generation Metrics: Responsible Metrics and Evaluation for Open Science. Report of the European Commission Expert Group on Altmetrics (2017). 

8. P. Dunleavy, C. Hood, From old public administration to new public management. Public Money & Management. 14, 9–16 (1994). 

9. S. Tolofari, New Public Management and Education. Policy Futures in Education. 3, 75–89 (2005). 

10. S. Jong, K. Slavova, When publications lead to products: The open science conundrum in new product development. Research Policy. 43, 645–654 (2014). 

11. S. Moore, C. Neylon, M. P. Eve, D. P. O’Donnell, D. Pattinson, “Excellence R Us”: university research and the fetishisation of excellence. Palgrave Communications. 3, 16105 (2017). 

12. H. Vessuri, J.-C. Guédon, A. M. Cetto, Excellence or quality? Impact of the current competition regime on science and scientific publishing in Latin America and its implications for development. Current Sociology. 62, 647–665 (2014). 

13. B. Brembs, Prestigious Science Journals Struggle to Reach Even Average Reliability. Frontiers in human neuroscience. 12, 37 (2018). 

14. J. E. Hirsch, An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 102, 16569–16572 (2005). 

15. L. Bornmann, H.-D. Daniel, What do we know about the h index? Journal of the American Society for Information Science and Technology. 58, 1381–1385 (2007). 

16. L. Bornmann, H. Daniel, What do citation counts measure? A review of studies on citing behavior. Journal of Documentation. 64, 45–80 (2008). 

17. B. Saenen, R. Morais, V. Gaillard, L. Borrell-Damián, “Research Assessment in the Transition to Open Science: 2019 EUA Open Science and Access Survey Results” (European University Association, 2019), pp. 1–48. 

18. J. Tennant, B. Brembs, “RELX referral to EU competition authority” (Zenodo, 2018), doi:10.5281/zenodo.1472045. 

19. A. Posada, G. Chen, in ELPUB 2018, L. Chan, P. Mounier, Eds. (Toronto, Canada, 2018; https://hal.archives-ouvertes.fr/hal-01816707). 

20. A. Abbott, Stress, anxiety, harassment: huge survey reveals pressures of scientists’ working lives. Nature. 577, 460–461 (2020). 

21. D. K. Smith, The race to the bottom and the route to the top. Nat. Chem. 12, 101–103 (2020). 

22. C. Wien, J. Tennant, Ondets rod: Hemmelighedskræmmeri [The root of the evil: Secrecy]. Weekendavisen. Ideer (2019), p. #44. 

23. J. Tennant, “How open science is fighting against private, proprietary publishing platforms” (preprint, SocArXiv, 2020), doi:10.31235/osf.io/wq4x8. 

24. B. Fecher, Embracing complexity: COVID-19 is a case for academic collaboration and co-creation (2020), doi:10.5281/zenodo.3712898. 

25. E. Kansa, It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough. Impact of Social Sciences (2014), (available at http://blogs.lse.ac.uk/impactofsocialsciences/2014/01/27/its-the-neoliberalism-stupid-kansa/). 

26. C. A. Chapman, J. C. Bicca-Marques, S. Calvignac-Spencer, P. Fan, P. J. Fashing, J. Gogarten, S. Guo, C. A. Hemingway, F. Leendertz, B. Li, I. Matsuda, R. Hou, J. C. Serio-Silva, N. Chr. Stenseth, Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia. Proceedings of the Royal Society B: Biological Sciences. 286, 20192047 (2019). 

27. T. Ross-Hellauer, What is open peer review? A systematic review. F1000Research. 6, 588 (2017). 

28. J. Tennant, T. Ross-Hellauer, The limitations to our understanding of peer review (2019), doi:10.31235/osf.io/jq623. 

29. J. Tennant, “Standardising Peer Review in Paleontology journals” (preprint, PaleorXiv, 2020), doi:10.31233/osf.io/qzycs. 

30. A. Lancaster, Open Science and its Discontents. Ronin Institute (2016), (available at http://ronininstitute.org/open-science-and-its-discontents/1383/). 

31. M. Munafò, Raising research quality will require collective action. Nature. 576, 183 (2019). 

32. J.-C. Burgelman, Viewpoint: COVID-19, open science, and a ‘red alert’ health indicator. Science|Business (2020), (available at https://sciencebusiness.net/viewpoint/viewpoint-covid-19-open-science-and-red-alert-health-indicator). 

33. J. Tennant, W. Francuzik, D. J. Dunleavy, B. Fecher, M. Gonzalez-Marquez, T. Steiner, “Open Scholarship as a mechanism for the United Nations Sustainable Development Goals” (preprint, SocArXiv, 2020), doi:10.31235/osf.io/8yk62. 

34. D. DeBronkart, Open Access as a Revolution: Knowledge Alters Power. Journal of Medical Internet Research. 21, e16368 (2019). 

Acknowledgements 

Thank you to Benedikt Fecher for seeding in my mind the idea of creating whole new institutes to tackle this problem.


About the authors

Jonathan Tennant and Charlotte Wien

Jon Tennant completed his PhD at Imperial College London in the Department of Earth Science and Engineering, where he won the prestigious Janet Watson award for research excellence. His research focused on patterns of diversity and extinction in deep time and their biological, geological and environmental drivers, as well as the early evolution of crocodiles. He founded paleorXiv, and the Open Science MOOC, a peer-to-peer community around open research practices.

For two years, Jon was the Communications Director for the Berlin-based tech startup ScienceOpen. He believed strongly that science should be as accessible as possible, and helped lead a campaign against the privatisation of knowledge with Education International. He gave numerous talks around the world on this topic. He wrote the children's books Excavate Dinosaurs! and World of Dinosaurs. He won a travel award to work with IGDORE at their campus in Bali in 2018. He received a Shuttleworth Foundation flash grant, and the Jean-Claude Guédon 2018 award for his work on peer review. He was an ambassador for the Center for Open Science and ASAPbio, and a member of the Mozilla Open Leadership Cohort.

Jon was Executive Editor of Geoscience Communication, and an Editor for Publications and the Journal of Evidence-Based Healthcare. He was a PLOS Paleo Community Editor for four years. He was most recently an independent researcher, based in Indonesia, working on the future relationships between science and society. Due to his work on open science, he was granted membership of the Global Young Academy.
