What is research funding, how does it influence research, and how is it recorded? Key dimensions of variation

  • Open access
  • Published: 16 September 2023
  • Volume 128, pages 6085–6106 (2023)


  • Mike Thelwall, ORCID: orcid.org/0000-0001-6065-205X (affiliations 1, 2)
  • Subreena Simrick, ORCID: orcid.org/0000-0002-0170-6940 (affiliation 3)
  • Ian Viney, ORCID: orcid.org/0000-0002-9943-4989 (affiliation 4)
  • Peter Van den Besselaar, ORCID: orcid.org/0000-0002-8304-8565 (affiliations 5, 6)


Evaluating the effects of some or all academic research funding is difficult because of the many different and overlapping sources, types, and scopes. It is therefore important to identify the key aspects of research funding so that funders and others assessing its value do not overlook them. This article outlines 18 dimensions along which funding varies substantially, as well as three facets of funding records. For each dimension, a list of common or possible variations is suggested. The main dimensions include the type of funder of time and equipment, any funding sharing, the proportion of costs funded, the nature of the funding, any collaborative contributions, and the amount and duration of the grant. In addition, funding can influence what is researched, how, and by whom. Funding can also be recorded in different places and with different levels of connection to outputs. The many variations, and the lack of a clear divide between "unfunded" and funded research because internal funding can be implicit or unrecorded, greatly complicate assessing the value of funding quantitatively at scale. The dimensions listed here should nevertheless help funding evaluators to consider as many differences as possible and to list the remainder as limitations. They also serve as suggested information to collect for those compiling funding datasets.


Introduction

Academic research grants account for billions of pounds in many countries and so the funders may naturally want to assess their value for money in the sense of financing desirable outcomes at a reasonable cost (Raftery et al., 2016 ). Since many of the benefits of research are long term and difficult to identify or quantify financially, it is common to benchmark against previous results or other funders to judge progress and efficiency. This is a complex task because academic funding has many small and large variations and is influenced by, and may influence, many aspects of the work and environment of the funded academics (e.g., Reale et al., 2017 ). The goal of this article is to support future analyses of the effectiveness or influence of grant funding by providing a typology of the important dimensions to be considered in evaluations (or otherwise acknowledged as limitations). The focus is on grant funding rather than block funding.

The ideal way to assess the value of a funding scheme would be a counterfactual analysis showing its contribution by identifying what would have happened without the funding. Unfortunately, counterfactual analyses are usually impossible because of the large number of alternative funding sources. Similarly, comparisons between successful and unsuccessful bidders face major confounding factors, including groups that lose one grant competition winning another (Neufeld, 2016), and complex research projects attracting funding of different kinds from multiple sources (Langfeldt et al., 2015; Rigby, 2011). Even analyses with effective control groups, such as a study of funded vs. unfunded postdocs (Schneider & van Leeuwen, 2014), cannot separate the effect of the funding from the success of the grant selection process: were better projects funded, or did the funding or reviewer feedback improve the projects? Although qualitative analyses of individual projects help to explain what happened to the money and what it achieved, large scale analyses are sometimes needed to inform management decision making. For example: would a funder get more value for money from larger or smaller, longer or shorter, more specific or more general grants? For such analyses, many simplifying assumptions need to be made. The same is true for checks of the peer review process of research funders. For example, a funder might compute the average citation impact of publications produced by its grants and compare it to a reference set. This reference set might be the outputs of rejected applicants or the outputs of a comparable funder. The selection of the reference set is crucial for any attempt to identify the added value of any funding, however defined.
For example, comparing the work of grant winners with that of high-quality unsuccessful applicants (e.g., those that just failed to be funded) would be useful to detect the added value of the money rather than the success of the procedure to select winners, assuming that there is little difference in potential between winners and narrow losers (Van den Besselaar & Leydesdorff, 2009 ). Because of the need to make comparisons between groups of outputs based on the nature of their funding, it is important to know the major variations in academic research funding types.
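As a concrete illustration of the reference-set comparison described above, the calculation might be sketched as follows (a minimal sketch: the function name, the citation counts, and the field baselines are all hypothetical, and a real analysis would use field- and year-normalised indicators drawn from a bibliometric database):

```python
from statistics import mean

def mean_normalised_citation_impact(outputs, field_baselines):
    """Mean citation count of a set of outputs, with each count
    divided by the world average for its field (MNCS-style)."""
    return mean(c / field_baselines[f] for c, f in outputs)

# Hypothetical outputs as (citation count, field) pairs.
funded = [(12, "oncology"), (8, "oncology"), (3, "public health")]
rejected = [(6, "oncology"), (2, "public health"), (1, "public health")]
baselines = {"oncology": 10.0, "public health": 4.0}  # world averages

funded_mnci = mean_normalised_citation_impact(funded, baselines)
reference_mnci = mean_normalised_citation_impact(rejected, baselines)
```

Whether any gap between the two figures reflects the money itself or the success of the selection procedure is exactly the confound discussed above.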

The dimensions of funding analysed in previous evaluations can point to how the above issues have been tackled. Unfortunately, most evaluations of the effectiveness, influence, or products of research funding (however defined) have probably been private reports for or by research funders, but some are in the public domain. Two non-funder studies have analysed whether funding improves research in specific contexts: peer review scores for scoliosis conference submissions (Roach et al., 2008), and the methods of randomised controlled trials in urogynecology (Kim et al., 2018). Another compared research funded by China with that funded by the EU (Wang et al., 2020). An interesting view on the effect of funding on research output suggests that a grant does not necessarily result in increased research output compared to participation in a grant competition (Ayoubi et al., 2019; Jonkers et al., 2017). Finally, a science-wide study of funding for journal articles from the UK suggested that funding was associated with higher quality research in at least some, and possibly all, fields (the last figure in: Thelwall et al., 2023).

From a different perspective, at least two studies have investigated whether academic funding has commercial value. The UK Medical Research Council (MRC) has analysed whether medical spinouts fared better if they were from teams that received MRC funding rather than from unsuccessful applicants, suggesting that funding helped spin-outs to realise commercial value from their health innovations (Annex A2.7 of: MRC, 2019 ). Also in the UK, firms participating in UK research council funded projects tended to grow faster afterwards compared to comparator firms (ERC, 2017 ).

Discussing the main variations in academic research funding types to inform analyses of the value of research funding is the purpose of the current article. Few prior studies seem to have made any systematic attempt to characterise the key dimensions of research funding, although some have listed several different types (e.g., four in: Garrett-Jones, 2000; three in: Paulson et al., 2011; nine in: Versleijen et al., 2007). The focus of the current paper is on grant-funded research conducted at least partly by people employed by an academic institution rather than by people researching as part of their job in a business, government, or other non-academic organisation. The latter are presumably usually funded by their employer, although they may sometimes conduct collaborative projects with academics or win academic research funding. The focus is also on research outputs, such as journal articles, books, patents, performances, or inventions, rather than research impacts or knowledge generation. Nevertheless, many of the options apply to the more general case. The list of dimensions relevant to evaluating the value of research funding has been constructed from a literature review of academic research about funding, insights from discussions with funders, and analyses of funding records. The influence of funding on individual research projects is analysed, rather than systematic effects of funding, such as at the national level (for this, see, e.g.: Sandström & Van den Besselaar, 2018; Van den Besselaar & Sandström, 2015). The next sections discuss dimensions of difference in the funding awarded, the influence of the funding on the research, and the way in which the funding is recorded.

Funding sources

There are many types of funders of academic research (Hu, 2009). An effort to distinguish between types of funding schemes, based on a detailed analysis of the Dutch government budget and the annual reports of the main research funders in the Netherlands, found the following nine types of funding instruments (Versleijen et al., 2007); the remainder of this section gives a finer-grained breakdown of types. The current paper is primarily concerned with all of these except the basic funding category, which includes the block grants that many universities receive for general research support. Block grants were originally uncompetitive but may now also be fully competitive, as in the UK, where they depend on Research Excellence Framework scores, or partly competitive, as in the Netherlands, where they partly depend on performance-based parameters like PhD completions (see also: Jonkers & Zacharewicz, 2016).

Contract research (project—targeted—small scale)

Open competition (project—free—small scale)

Thematic competition (project—targeted—small scale)

Competition between consortia (project—targeted—large scale)

Mission oriented basic funding (basic—targeted—large scale)

Funding of infrastructure and equipment (basic—targeted—diverse)

Basic funding for universities and public research institutes (basic—free—large scale)

International funding of programs and institutes (basic, both, mainly large scale)

EU funding (which can be subdivided into the previous eight types)
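For analyses that code funders or schemes, the typology above can be captured directly in data. A minimal sketch in Python (the dictionary keys and axis labels are paraphrased from the list above; EU funding is omitted because it subdivides into the other eight types):

```python
# Versleijen et al.'s (2007) funding instruments, coded along the three
# axes used above: mode (project/basic), orientation (free/targeted/both),
# and scale (small/large/diverse).
INSTRUMENT_TYPES = {
    "contract research": ("project", "targeted", "small"),
    "open competition": ("project", "free", "small"),
    "thematic competition": ("project", "targeted", "small"),
    "competition between consortia": ("project", "targeted", "large"),
    "mission oriented basic funding": ("basic", "targeted", "large"),
    "infrastructure and equipment": ("basic", "targeted", "diverse"),
    "basic funding": ("basic", "free", "large"),
    "international programmes and institutes": ("basic", "both", "large"),
}

# e.g., select all targeted project-mode instruments:
targeted_projects = [
    name
    for name, (mode, orientation, _) in INSTRUMENT_TYPES.items()
    if mode == "project" and orientation == "targeted"
]
```

A coding scheme like this makes it straightforward to tabulate a funding dataset against the typology rather than against ad hoc funder names.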

Many studies of the influence of research funding have focused on individual funders (Thelwall et al., 2016), and funding agencies' (frequently unpublished) internal analyses presumably often compare between their own funding schemes, compare overall against a world benchmark, or check whether a funding scheme's performance has changed over time (BHF, 2022). Public evaluations sometimes analyse individual funding schemes, particularly for large funders (e.g., Defazio et al., 2009). The source of funding for a project could be the employing academic institution, academic research funders, or other organisations that sometimes fund research. There are slightly different sets of possibilities for equipment and time funding.

Who funded the research project (type of funder)?

A researcher may be funded by their employer, a specialist research funding organisation (e.g., government-sponsored or non-profit) or an organisation that needs the research. Commercial funding seems likely to have different requirements and goals from academic funding (Kang & Motohashi, 2020 ), such as a closer focus on product or service development, different accounting rules, and confidentiality agreements. The source of funding is an important factor in funding analysis because funders have different selection criteria and methods to allocate and monitor funding. This is a non-exhaustive list.

Self-funded or completely unfunded (individual). Although the focus of this paper is on grant funding, this (and the item below) may be useful to record because it may partly underpin projects with other sources and may form parts of comparator sets (e.g., for the research of unfunded highly qualified applicants) in other contexts.

University employer. This includes funding reallocated from national competitive (e.g., performance-based research funding: Hicks, 2012) or non-competitive block research grants, and from teaching income, investments, and other sources, that is allocated for research in general rather than for equipment, time, or specific projects.

Other university (e.g., as a visiting researcher on a collaborative project).

National academic research funder (e.g., the UK’s Economic and Social Research Council: ESRC).

International academic research funder (e.g., European Union grants).

Government (contract, generally based on a tender and not from a pot of academic research funding)

Commercial (contract or research funding), sometimes called industry funding.

NGO (contract or research funding, e.g., Cancer Research charity). Philanthropic organisations not responsible to donors may have different motivations to charities, so it may be useful to separate the two sometimes.

Who funded the time needed for the research?

Research typically needs both people and equipment, and these two are sometimes supported separately. The funding for a researcher, if any, might be generic and implicit (it is part of their job to do research) or explicit in terms of a specified project that needs to be completed. Clinicians can have protected research time too: days that are reserved for research activities as part of their employment, including during advanced training (e.g., Elkbuli et al., 2020; Voss et al., 2021). For academics, research time is sometimes "borrowed" from teaching time (Bernardin, 1996; Olive, 2017). Time for a project may well be funded differently between team members, such as the lead researcher being institutionally supported but using a grant to hire a team of academic and support staff. Inter-institutional research may also have a separate funding source for each team. The following list covers a range of common arrangements.

Independent researcher, own time (e.g., not employed by but emeritus or affiliated with a university).

University researcher, own time (e.g., holidays, evenings, weekends).

University, percentage of the working time of academic staff devoted to research. In some countries this is largely related to the amount of block funding versus project funding (Sandström & Van den Besselaar, 2018).

University, time borrowed from other activities (e.g., teaching, clinical duties, law practice).

Funder, generic research time funding (e.g., Gates chair of neuropsychology, long term career development funding for a general research programme).

University/Funder, specific time allocated for research programme (e.g., five years to develop cybersecurity research expertise).

University/Funder, employed for specific project (e.g., PhD student, postdoc supervised by member of staff).

University/Funder, specific time allocated for specific study (e.g., sabbatical to write a book).

Who funded the equipment or other non-human resources used in the research?

The resources needed for a research project might be funded as part of the project by the main funder, it may be already available to the researcher (e.g., National Health Service equipment that an NHS researcher could expect to access), or it may be separately funded and made available during the project (e.g., Richards, 2019 ). Here, “equipment” includes data or samples that are access-controlled as well as other resources unrelated to pay, such as travel. These types can be broken down as follows.

Researcher’s own equipment (e.g., a musician’s violin for performance-based research or composition; an archaeologist’s Land Rover to transport equipment to a dig).

University equipment, borrowed/repurposed (e.g., PC for teaching, unused library laptop).

University equipment, dual purpose (e.g., PC for teaching and research, violin for music teaching and research).

University/funder equipment for generic research (e.g., research group’s shared microbiology lab).

University/funder equipment for a research programme (e.g., GPU cluster to investigate deep learning).

University/funder equipment for specific project (e.g., PCs for researchers recruited for project; travel time).

University/funder equipment for single study (e.g., travel for interviews).

Of course, a funder may only support the loan or purchase of equipment on the understanding that the team will find other funding for research projects using it (e.g., “Funding was provided by the Water Research Commission [WRC]. The Covidence software was purchased by the Water Research fund”: Deglon et al., 2023 ). Getting large equipment working for subsequent research (e.g., a space telescope, a particle accelerator, a digitisation project) might also be the primary goal of a project.

How many funders contributed?

Although many projects are funded by a single source, some have multiple funders sharing the costs by agreement or by chance (Davies, 2016 ), and the following seem to be the logical possibilities for cost sharing.

Partially funded from one source, partly unfunded.

Partially funded from multiple sources, partly unfunded.

Fully funded from multiple sources.

Fully funded from a single source.
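Coded in data, these four possibilities reduce to two attributes: the number of funders and the fraction of costs covered. A minimal sketch (the function name and its inputs are hypothetical; in practice the funded fraction is rarely recorded explicitly in funding acknowledgements):

```python
def cost_sharing_category(n_funders: int, funded_fraction: float) -> str:
    """Map a funder count and the funded fraction of project costs
    (0.0-1.0) onto the four cost-sharing possibilities listed above."""
    if n_funders < 1 or not 0.0 < funded_fraction <= 1.0:
        raise ValueError("expected at least one funder and a fraction in (0, 1]")
    coverage = "fully" if funded_fraction == 1.0 else "partially"
    sources = "multiple sources" if n_funders > 1 else "a single source"
    return f"{coverage} funded from {sources}"
```

For example, the unplanned cost sharing described below (a funded post plus a later equipment grant) would be coded as partially funded from multiple sources.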

As an example of unplanned cost sharing, a researcher might have their post funded by one source and then subsequently bid for funding for equipment and support workers to run a large project. This project would then be part funded by the two sources, but not in a coordinated way. It seems likely that a project with a single adequate source of funding might be more efficient than a project with multiple sources that need to be coordinated. Conversely, a project with multiple funders may have passed through many different quality control steps or shown relevance to a range of different audiences. Those funded by multiple sources may also be less dependent on individual funders and therefore more able to autonomously follow their own research agenda, potentially leading to more innovative research.

How competitive was the funding allocation process?

Whilst government and charitable funding is often awarded on a competitive basis, the degree of competition (e.g., success rate) clearly varies between countries and funding calls and changes over time. In contrast, commercial funding may be gained without transparent competition (Kang & Motohashi, 2020), perhaps as part of ongoing work in an established collaboration or even due to a chance encounter. In between these, block research grants and prizes may be awarded for past achievements, so they are competitive, but the recipients are relatively free to spend on any type of research and do not need to write proposals (Franssen et al., 2018). Similarly, research centre grants may be won competitively but give the freedom to conduct a wide variety of studies over a long period. This gives the following three basic possibilities.

The success rate from the funding call (i.e., the percentage of initial applicants that were funded) OR

The success rate based on funding awarded for past performance (e.g., prize or competitive block grant, although this may be difficult to estimate) OR

The contract or other funding was allocated non-competitively (e.g., non-competitive block funding).
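The first of these options is straightforward to compute when application counts are available. A minimal sketch (the counts in the example are hypothetical):

```python
def success_rate(funded: int, applicants: int) -> float:
    """Percentage of initial applicants that were funded in a call."""
    if applicants < 1 or funded < 0 or funded > applicants:
        raise ValueError("invalid counts")
    return 100.0 * funded / applicants

# e.g., a hypothetical call funding 24 of 300 initial applications:
rate = success_rate(24, 300)  # 8.0
```

The second and third options have no such simple formula, which is one reason why competitiveness is hard to compare across schemes.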

How was the funding decision made?

Who decides which researchers receive funding, and through which processes, is also relevant (Van den Besselaar & Horlings, 2011). This is perhaps one of the most important considerations for funders.

The procedure for grant awarding: who decided and how?

There is a lot of research into the relative merits of different selection criteria for grants, such as a recent project to assess whether randomisation could be helpful (Fang & Casadevall, 2016 ; researchonresearch.org/experimental-funder). Peer review, triage, and deliberative committees are common, but not universal, components (Meadmore et al., 2020 ) and sources of variation include whether non-academic stakeholders are included within peer review teams (Luo et al., 2021 ), whether one or two stage submissions are required (Gross & Bergstrom, 2019 ) and whether sandpits are used (Meadmore et al., 2020 ). Although each procedure may be unique in personnel and fine details, broad information about it would be particularly helpful in comparisons between funders or schemes.

What were the characteristics of the research team?

The characteristics of successful proposals or applicants are relevant to analyses of competitive calls (Grimpe, 2012), although there are too many to list individually. Some deserve particular attention here.

What are the characteristics of the research team behind the project or output (e.g., gender, age, career status, institution)?

What is the track record of the research team (e.g., citations, publications, awards, previous grants, service work)?

Gender bias is an important consideration and whether it plays a role is highly disputed in the literature. Recent findings suggest that there is gender bias in reviews, but not success rates (Bol et al., 2022 ; Van den Besselaar & Mom, 2021 ). Some funding schemes have team requirements (e.g., established vs. early career researcher grants) and many evaluate applicants’ track records. Applicants’ previous achievements may be critical to success for some calls, such as those for established researchers or funding for leadership, play a minor role in others, or be completely ignored (e.g., for double blind grant reviewing). In any case, research team characteristics may be important for evaluating the influence of the funding or the fairness of the selection procedure.

What were the funder’s goals?

Funding streams or sources often have goals that influence what type of research can be funded. Moreover, researchers can be expected to modify their aspirations to align with the funding stream. The funder may have different types of goal, from supporting aspects of the research process to supporting relevant projects or completing a specific task (e.g., Woodward & Clifton, 1994 ), to generating societal benefits (Fernández-del-Castillo et al., 2015 ).

A common distinction is between basic and applied research, and the category “strategic research” has also been used to capture basic research aiming at long term societal benefits (Sandström, 2009 ). The Frascati Manual uses Basic Research, Applied Research and Experimental Development instead (OECD, 2015 ), but this is more relevant for analyses that incorporate industrial research and development.

Research funding does not always aim directly at funding research, because some streams support network formation in the expectation that the network will access other resources to support studies (Aagaard et al., 2021). European Union COST (European Cooperation in Science and Technology) Actions are an example (cost.eu). Others may have indirect goals, such as capacity building, creating a strong national research base that helps industry or attracts international business research investment (Cooksey, 2006), or promoting a topic (e.g., educational research: El-Sawi et al., 2009). As a corollary to the last point, some topics may be of little interest to most funders, for example because they would mainly benefit marginalised communities (Woodson & Williams, 2020).

Since the early 2000s, many countries have also issued so-called career grants, which have become prestigious. At the European level, career grants started in 2009 with the European Research Council (ERC) grants. These grants have a career effect (Bloch et al., 2014; Danell & Hjerm, 2013; Schroder et al., 2021; Van den Besselaar & Sandström, 2015), but this dimension, and the longer-term effects of funding other than on specific outputs, is not considered here. A funding scheme may also have several of the following goals.

Basic research (e.g., the Malaysia Toray Science Foundation supports basic research by young scientists to boost national capacity: www.mtsf.org ).

Strategic research (e.g., the UK Natural Environment Research Council’s strategic research funding targets areas of important environmental concern, targeting long term solutions: www.ukri.org/councils/nerc/ ).

Applied research (e.g., the Dutch NWO [Dutch Research Council] applied research fund to develop innovations supporting food security: www.nwo.nl/en/researchprogrammes/food-business-research ).

Technology transfer (i.e., applying research knowledge or skills to a non-research problem) or translational research.

Researcher development and training (including career grants).

Capacity building (e.g., to support research in resource-poor settings).

Collaboration formation (e.g., industry-academia, international, inter-university).

Research within a particular field.

Research with a particular application area (e.g., any research helping Alzheimer’s patients, including a ring-fenced proportion of funding within a broader call).

Tangible academic outputs (e.g., articles, books).

Tangible non-academic outputs (e.g., policy changes, medicine accreditation, patents, inventions).

Extent of the funding

The extent of funding of a project can vary substantially from a small percentage, such as for a single site visit, to 100%. A project might even make a surplus if it is allowed to keep any money left over, its equipment survives the project, or it generates successful intellectual property. The financial value of funding is clearly an important consideration because a cheaper project delivering similar outcomes to a more expensive one would have performed better. Nevertheless, grant size is often ignored in academic studies of the value of funding (e.g., Thelwall et al., 2023 ) because it is difficult to identify the amount and to divide it amongst grant outputs. This section covers four dimensions of the extent of a grant.

What proportion of the research was funded?

A research project might be fully funded, funded for the extras needed above what is already available, or deliberately partly funded (Comins, 2015). This last approach is sometimes called "cost sharing". A grant awarded on the Full Economic Cost (FEC) model would pay for the time and resources used by the researchers as well as the administrative support and accommodation provided by their institution. The following seem to be the main possibilities.

Partly funded.

Fully funded, but on a partial FEC or sub-FEC cost-sharing model.

FEC plus surplus.

The Frascati Manual on collecting research and development statistics distinguishes between funding internal and external to a unit of analysis (OECD, 2015), but here the distinction is between explicit and implicit funding, with the latter being classed as "unfunded".

How was the funding delivered?

Whilst a research grant would normally be financial, a project might be supported in kind by the loan or gift of equipment or time. For instance, agricultural research might be supported with access to relevant land or livestock (Tricarico et al., 2022 ). Here are three common approaches for delivering funding.

In kind—lending time or loaning/giving equipment or other resources.

Fixed amount of money.

A maximum amount of money, with actual spending justified by receipts.

How much funding did the project receive?

Project funding can be tiny, such as a few pounds for travel expenses, or enormous, such as for a particle accelerator. Grants of a few thousand pounds can also be common in some fields and for some funders (e.g., Gallo et al., 2014; Lyndon, 2018). In competitive processes, the funder normally indicates the grant size range that it is prepared to fund. The amount of funding for research has increased over time (Bloch & Sørensen, 2015).

The money awarded and/or claimed by the project.

How long was the funding for?

Funded projects can be short term, such as for a one-day event, or very long term, such as a 50-year nuclear fusion reactor programme. There seems to be a trend for longer term and larger amounts of funding, such as for centres of excellence that can manage multiple different lines of research (Hellström, 2018 ; OECD, 2014 ).

The intended or actual (e.g., due to costed or non-costed extensions) duration of the project.

Influence of the funding on the research project

A variety of aspects of the funding system were discussed in the previous sections; this section and the next switch to the effects of funding on what research is conducted and how. Whilst some grant schemes explicitly try to direct research (e.g., funding calls to build national artificial intelligence research capacity), even open calls may have indirect influences on team formation, goals, and broader research directions. This section discusses three different ways in which funding can influence a research project.

Influence on what the applicant did

Whilst funding presumably has a decisive influence on whether a study occurs most of the time because of the expense of the equipment or effort (e.g., to secure ethical approval for medical studies: Jonker et al., 2011 ), there may be exceptions. For example, an analysis of unfunded medical research found that it was often hospital-based (Álvarez-Bornstein et al., 2019 ), suggesting that it was supported by employers. Presumably the researcher applying for funding would usually have done something else research-related if they did not win the award, such as conducting different studies or applying for other funding. The following seem to be the main dimensions of variation here.

No influence (the study would have gone ahead without the funding).

Improved existing study (e.g., more time to finish, more/better equipment, more collaborators, constructive ideas from the peer review process). An extreme example of the latter is the Medical Research Council’s Developmental Pathway Funding Scheme (DPFS), which has expert input and decision making throughout a project.

Made the study possible, replacing other research-related activities (e.g., a different type of investigation, supporting another project, PhD mentoring).

Made the study possible, replacing non-research activities (e.g., teaching, clinical practice).

Researchers may conduct unfunded studies if financing is not essential and they would like to choose their own goals (Edwards, 2022 ; Kayrooz et al., 2007 ), or if their research time can be subsidised by teaching revenue (Olive, 2017 ). Some types of research are also inherently cheaper than others, such as secondary data analysis (Vaduganathan et al., 2018 ) and reviews in medical fields, so may not need funding. At the other extreme, large funding sources may redirect the long-term goals of an entire research group (Jeon, 2019 ). In between these two, funding may improve the quality of a study that would have gone ahead anyway, such as by improving its methods, including the sample size or the range of analyses used (Froud et al., 2015 ). Alternatively, it may have changed a study without necessarily improving it, such as by incorporating funder-relevant goals, methods, or target groups. Scholars with topics that do not match the major funding sources may struggle to be able to do research (Laudel, 2005 ).

Influence on research goals or methods

In addition to supporting the research, the influence of the source of funding can be minor or major from the perspective of the funded researcher. It seems likely that most funding requires some changes to what a self-funded researcher might otherwise do, if only to give reassurance that the proposed research will deliver tangible outputs (Serrano Velarde, 2018), or to fit specific funder requirements (Luukkonen & Thomas, 2016). Funding influence can perhaps be split into the following broad types, although they are necessarily imprecise, with considerable overlaps.

No influence (the applicant did not modify their research goals for the funder, or ‘relabelled’ their research goals to match the funding scheme).

Partial influence (the applicant modified their research goals for the funder)

Strong influence (the applicant developed new research goals for the funder, such as a recent call for non-AI researchers to retrain to adopt AI).

Full determination (the funder specified the project, such as a pharmaceutical industry contract to test a new vaccine).

Focusing on more substantial changes only, the funding has no influence if the academic did not need to consider funder-related factors when proposing their study, or could select a funder that fully aligned with their goals. On the other hand, the influence is substantial if the researcher changed their goals to fit the funder requirements (Currie-Alder, 2015 ; Tellmann, 2022 ). In between, a project's goals may be tailored to a funder or its funding requirements (Woodward & Clifton, 1994 ). An indirect way in which health-related funders often influence research is by requiring Patient and Public Involvement (PPI) at all levels of a project, including strategy development (e.g., Brett et al., 2014 ). Funding initiatives may aim to change researchers’ goals, such as to encourage the growth of a promising new field (Gläser et al., 2016 ). The wider funding environment may also effectively block some research types or topics if they are not in scope for most grants (Laudel & Gläser, 2014 ).

It seems likely that funding sources have the greatest influence on researchers’ goals in resource intensive areas, presumably including most science and health research, and especially those that routinely issue topic-focused calls (e.g., Laudel, 2006 ; Woelert et al., 2021 ). The perceived likelihood of receiving future funding may also influence research methods, such as by encouraging researchers to hoard resources (e.g., perform fewer laboratory experiments for a funded paper) when future access may be at risk (Laudel, 2023 ).

Influence on research team composition

The funder call may list eligibility requirements of various types. For example, the UK national funders specify that applicants must be predominantly UK academics. One common type of specification seems to be team size and composition since many funders (e.g., EU) specify or encourage collaborative projects. Funding may also encourage commercial participants or end user partnerships, which may affect team composition (e.g., Gaughan & Bozeman, 2002 ). Four different approaches may be delineated as follows.

No influence (the funder allows any team size).

Partial influence (the applicant chooses a team size to enhance their perceived success rate).

Funder parameters (the funder specifies parameters, such as a requirement for collaboration, partners from at least three EU countries, or a particular disciplinary composition or interdisciplinarity mandate).

Full determination (the funder specifies the team size, such as individual applicants only for career-related grants).

The influence of funders on research team composition is unlikely to be strict even if they fully determine grant applicant team sizes, because the funded researchers may choose to collaborate with others who are supported by their own grants or who are unfunded.

Influence of the funding on the research outputs

The above categories cover how research funding helps or influences research studies. This section focuses on what may change in the outputs of researchers or projects due to the receipt of funding. This is important to consider because research outputs are the most visible and countable outcomes of research projects, but they are not always necessary (e.g., funding for training or equipment) and different types can be encouraged. Four relevant dimensions of influence are discussed below.

Influence of funding on the applicant’s productivity

Funding can normally be expected to support the production of new outputs by an academic or team (Bloch et al., 2014 ; Danell & Hjerm, 2013 ), but this may be field dependent. For example, one study of the factors affecting productivity found that DFG grants had a positive effect on the productivity of German political scientists (Habicht et al., 2021 ). However, in some cases funding may produce fewer tangible outputs because of the need to collaborate with end users or conduct activities of value to them (Hottenrott & Thorwarth, 2011 ), or if the funding is for long-term high-risk investigations. In areas where funding is inessential or where core/block funding provides some baseline capability, academics who choose not to apply for it can devote all their research time to research rather than grant writing, which may increase their productivity (Thyer, 2011 ). Although simplistic, the outcome may therefore be characterised as one of three situations.

Reduction in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

No change in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

Increase in the number or size of outputs of relevant types by the applicant(s) during and/or after the project.

Funding can also have the long-term indirect effect of improving productivity, through career benefits for those funded, such as making them more likely to attract collaborators and future funding (Defazio et al., 2009 ; Heyard & Hottenrott, 2021 ; Hussinger & Carvalho, 2022 ; Saygitov, 2018 ; Shimada et al., 2017 ). Writing grant applications may also provide an intensive learning process, which may help careers (Ayoubi et al., 2019 ; Jonkers et al., 2017 ).

Influence of funding on the applicant’s research output types

Funding may change what a researcher or research team produces. For example, a commercial component of grants may reduce the number of journal articles produced (Hottenrott & Lawson, 2017 ). Funder policies may have other influences on what a researcher does, such as conditions to disseminate the results in a certain way. This may include open access, providing accessible research data, or writing briefings for policy makers or the public. Whilst this may be considered good practice, some of these requirements may add overhead for the researcher. This may be summarised as follows, although the distinctions are qualitative.

No change in the nature of the outputs produced.

Partial change in the nature of the outputs produced.

Complete change in the nature of the outputs produced (e.g., patents instead of articles).

Influence of funding on the impact or quality of the research

Although cause-and-effect may be difficult to prove (e.g., Aagaard & Schneider, 2017 ), funding seems likely to change the citation, scholarly, societal, or other impacts of what a researcher or research team produces. For example, a reduction in citation impact may occur if the research becomes more application-focused and an increase may occur if the funding improves the quality of the research.

Most studies have focused on citation impact, finding that funded research, or research funded by a particular funder, tends to be more cited than other research (Álvarez-Bornstein et al., 2019 ; Gush et al., 2018 ; Heyard & Hottenrott, 2021 ; Rigby, 2011 ; Roshani et al., 2021 ; Thelwall et al., 2016 ; Yan et al., 2018 ), albeit with a few exceptions (Alkhawtani et al., 2020 ; Jowkar et al., 2011 ; Muscio et al., 2017 ). Moreover, unfunded work, or work that does not explicitly declare funding sources, can occasionally be highly cited in some fields (Sinha et al., 2016 ; Zhao, 2010 ). Logically, however, there are three broad types of influence on the overall impacts of the outputs produced, in addition to changes in the nature of the impacts.

Reduction in the citation/scholarly/societal/other impact of the outputs produced.

No change in the citation/scholarly/societal/other impact of the outputs produced.

Increase in the citation/scholarly/societal/other impact of the outputs produced.

The quality of the research produced is also important and could be assessed by a similar list to the one above. Research quality is normally thought to encompass three aspects: methodological rigour, innovativeness, and societal/scientific impact (Langfeldt et al., 2020 ). Considering quality overall therefore entails attempting to also assess the rigour and innovativeness of research. These seem likely to correlate positively with research impact and are difficult to assess on a large scale. Whilst rigour might be equated with passing journal peer review in some cases, innovation has no simple proxy indicator and is a particular concern for funding decisions (Franssen, et al., 2018 ; Whitley et al., 2018 ).

The number and types of outcomes supported by a grant

When evaluating funding, it is important to consider the nature and number of the outputs and other outcomes produced specifically from it. Research projects often deliver multiple products, such as journal articles, scholarly talks, public-facing talks, and informational websites. There may also be more applied outputs, such as health policy changes, spin-out companies, and new drugs (Ismail et al., 2012 ). Since studies evaluating research funding often analyse only the citation impact of the journal articles produced (because of the ease of benchmarking), it is important to at least acknowledge that other outputs are also produced by researchers, even if it is difficult to take them into account in quantitative analyses.

The number and type of outcomes or outputs associated with a grant.

Of course, the non-citation impacts of research, such as policy changes or drug development, are notoriously difficult to track down even for individual projects (Boulding et al., 2020 ; Raftery et al., 2016 ), although there have been systematic attempts to identify policy citations (Szomszor & Adie, 2022 ). Thus, most types of impacts cannot be analysed on a large scale, and individual qualitative analyses are the only option for detailed impact analyses (Guthrie et al., 2015 ). In parallel with this, studies that compare articles funded by different sources should also consider the number of outputs per grant, since a grant producing more outputs would tend to be more successful. This approach does not seem to be used when average citation impact is compared, which is a limitation.

A pragmatic issue for studies of grants: funding records

Finally, from a pragmatic data collection perspective, the funding for a research output can be recorded in different places, not all of which are public. A logical place to look for this information is within the output, although it may also be recorded within databases maintained by the funder or employer. Related to this, it is not always clear how much of an output can be attributed to an acknowledged funding source. Whilst the location of a funding record presumably has no influence on the effectiveness of the funding, so is not relevant to the goals of this article, it is included here as an important practical consideration that all studies of grant funding must address. Three relevant dimensions of this ostensibly simple issue are discussed below.

Where the funding is recorded inside the output

Funding can be acknowledged explicitly in journal articles (Aagaard et al., 2021 ) and other research outputs, whether to thank the funder or to record possible conflicts of interest. This information may be omitted because the authors forget or do not want to acknowledge some or all funders. Here is a list of common locations.

A Funding section.

An Acknowledgements section.

A Notes section.

A Declaration of Interests section.

The first footnote.

The last footnote.

The last paragraph of the conclusions.

Elsewhere in the output.

Not recorded in the output.

The compulsory funding declaration sections of an increasing minority of journals are the ideal place for funder information. These force corresponding authors to declare funding, although they may not be able to track down all sources for large, multiply-funded teams. This section is also probably the main place where a clear statement that a study was unfunded could be found. A Declaration of Interests section may also announce an absence of funding, although this cannot be inferred from the more usual statement that the authors have no competing interests. Funding statements in other places are unsystematic in the sense that it seems easy for an author to forget them. Nevertheless, field norms may dictate a specific location for funding information (e.g., always a first page footnote), and this seems likely to reduce the chance that this step is overlooked.
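The variety of possible locations complicates automated harvesting of funding statements. As a rough illustration only (the section names, keyword pattern, and sample article below are invented assumptions, not a validated extraction method), a simple keyword scan across an article's sections might look like this:

```python
import re

# Illustrative sketch only: scan article sections for likely funding
# statements. The keyword pattern and sample article are assumptions,
# not a validated extraction method.
FUNDING_PATTERN = re.compile(
    r"\b(funded by|supported by|grant (no|number)|financial support)\b",
    re.IGNORECASE,
)

def find_funding_statements(sections):
    """Return (section_name, sentence) pairs that look like funding statements.

    `sections` maps section names (e.g., 'funding', 'acknowledgements',
    'notes', 'declaration of interests') to their text.
    """
    hits = []
    for name, text in sections.items():
        # Naive sentence split; real articles need more robust handling.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if FUNDING_PATTERN.search(sentence):
                hits.append((name, sentence.strip()))
    return hits

article = {
    "funding": "This work was supported by the Example Trust, grant number 123.",
    "conclusions": "We conclude that the effect is small.",
}
print(find_funding_statements(article))
# → [('funding', 'This work was supported by the Example Trust, grant number 123.')]
```

In practice, such text scanning would need to be combined with structured metadata from bibliographic databases, since statements placed in footnotes or conclusions are easy to miss.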

Where the funding is recorded outside the output

Large funders are likely to keep track of the outputs from their funded research, and research institutions may also keep systematic records (Clements et al., 2017 ). These may be completed by researchers or administrators and may be mandatory or optional. Funders usually also record descriptive qualitative information about funded projects that is not essential for typical large-scale analyses of funded research but is important for keeping track of individual projects. It may also be used in large-scale descriptive analyses of grant portfolio changes over time. For example, the UKRI Gateway to Research information includes project title, abstract (lay and technical), value (amount awarded by UKRI—so usually 80% FEC), funded period (start and end), project status (whether still active), category (broad research grant type—e.g., Fellowship), grant reference, Principal Investigator (PI) (and all co-Investigators), research classifications (e.g. Health Research Classification System [HRCS] for MRC grants), research organisations involved (whether as proposed collaborators or funding recipients/partners), and, as the project progresses, any outputs reported via Researchfish.

Academic employers may also track the outputs and funding of their staff in a current research information system or within locally designed databases or spreadsheets. Dimensions for Funders (Dimensions, 2022 ), for example, compiles funding information from a wide range of sources. Other public datasets include the UKRI Gateway to Research (extensive linkage to outputs), the Europe PMC grant lookup tool (good linkage to publications), the UKCDR COVID-19 funding tracker (some linkage to publications via Europe PMC), the occasional UK Health Research Analysis (.net), and the European Commission CORDIS dataset. There are also some initiatives to comprehensively catalogue who funds what in particular domains, such as for UK non-commercial health research (UKCRC, 2020 ). Of course, there are ad-hoc funding statements too, such as in narrative claims of research impact in university websites or as part of evaluations (Grant & Hinrichs, 2015 ), but these may be difficult to harvest systematically. The following list includes a range of common locations.

In a university/employer public/private funding record.

In the academic’s public/private CV.

In the funder’s public/private record.

In a shared public/private research funding system used by the funder (e.g., Researchfish).

In publicity for the grant award (if output mentioned specifically enough).

In publicity for the output (e.g., a theatre programme for a performance output).

Elsewhere outside the output.

Not recorded outside the output.

From the perspective of third parties obtaining information about funding for outputs, if the employer and/or funder databases are private, or public but difficult to search, then online publicity about the outputs or funding may provide an alternative record.
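Some of these external records are machine readable, which makes large-scale analyses more tractable than text mining alone. As a rough sketch (the record below is hand made to mimic Crossref-style publisher-deposited metadata, which can include funder names and award numbers; field names and values are assumptions for illustration):

```python
# Illustrative sketch only: extract funder names and award identifiers
# from a Crossref-style metadata record. The sample record is invented;
# coverage in real records depends on what publishers deposit.
def extract_funders(record):
    """Return a list of (funder_name, award_ids) pairs from a work record."""
    funders = record.get("message", {}).get("funder", [])
    return [(f.get("name", "unknown"), f.get("award", [])) for f in funders]

sample = {
    "message": {
        "funder": [
            {"name": "Example Research Council", "award": ["ER/123/1"]},
            {"name": "Example Charity"},  # no award number recorded
        ]
    }
}
print(extract_funders(sample))
# → [('Example Research Council', ['ER/123/1']), ('Example Charity', [])]
```

As the second entry illustrates, funder metadata is often incomplete (e.g., a funder without an award number), so such records complement rather than replace the in-output statements discussed above.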

What is the connection between outputs and their declared funders?

Some outputs have a clear identifiable funder or set of funders. For example, a grant may be awarded to write a book and the book would therefore clearly be the primary output of the project. Similarly, a grant to conduct a specified randomised controlled trial seems likely to produce an article reporting the results; this, after passing review, would presumably be the primary research output, even though an unpublished statistical summary of the results might suffice in some cases, especially when time is a factor. More loosely, a grant may specify a programme of research and promise several unspecified or vaguely specified outputs. In this case there may be outputs related to the project but not essential to it that might be classed as being part of it. It is also possible that outputs with little connection to a project are recorded as part of it for strategic reasons, such as to satisfy a project quota or gain a higher end-of-project grade. For example, Researchfish (Reddick et al., 2022 ) allows grant holders to select which publications on their CVs to associate with each grant. There are also genuine mistakes in declaring funding (e.g., Elmunim et al., 2022 ). The situation may be summarised with the following logical categories.

Direct, clear connection (e.g., the study is a named primary output of a project).

Indirect, clear connection (e.g., the study is a writeup of a named project outcome).

Indirect, likely connection (e.g., the study is an output of someone working on the project and the output is on the project topic).

Tenuous connection (e.g., the study was completed before the project started, by personnel not associated with the project, or by project personnel on an unrelated topic).

No connection at all (such as due to a recording error; presumably rare).

Conclusions

This paper has described dimensions along which research funding differs between projects, with a focus on grant funding. This includes dimensions that are important to consider when analysing the value of research funding quantitatively. This list is incomplete, and not all aspects will be relevant to all future analyses of funding. Most qualitative and rarer dimensions of difference associated with funding are omitted, including the exact nature of any societal impact, support for researcher development, and support for wider social, ethical or scientific issues (e.g., promoting open science).

Organisations that compile funding datasets or otherwise record funding information may also consult the lists above when considering the records that are desirable to collect. Of course, the providers of large datasets, such as the Dimensions for Funders system, may often not be able to find this information for inclusion (not provided by funders) or not be able to adequately process it (e.g., simply too many variations in funding types, and no straightforward way to present this data to users).

When comparing funding sources or evaluating the impact of funding, it is important to consider as many dimensions as practically possible to ensure that comparisons are as fair as achievable, whilst acknowledging the remaining sources of variation as limitations. Even at the level of funding schemes, each has unique features; since comparisons must nevertheless be made for management purposes, it is important to account for these differences, or at least be aware of them, when making comparisons.

Aagaard, K., Mongeon, P., Ramos-Vielba, I., & Thomas, D. A. (2021). Getting to the bottom of research funding: Acknowledging the complexity of funding dynamics. PLoS ONE, 16 (5), e0251488.


Aagaard, K., & Schneider, J. W. (2017). Some considerations about causes and effects in studies of performance-based research funding systems. Journal of Informetrics, 11 (3), 923–926.

Alkhawtani, R. H., Kwee, T. C., & Kwee, R. M. (2020). Funding of radiology research: Frequency and association with citation rate. American Journal of Roentgenology, 215 , 1286–1289.

Álvarez-Bornstein, B., Díaz-Faes, A. A., & Bordons, M. (2019). What characterises funded biomedical research? Evidence from a basic and a clinical domain. Scientometrics, 119 (2), 805–825.

Ayoubi, C., Pezzoni, M., & Visentin, F. (2019). The important thing is not to win, it is to take part: What if scientists benefit from participating in research grant competitions? Research Policy, 48 (1), 84–97.

Bernardin, H. J. (1996). Academic research under siege: Toward better operational definitions of scholarship to increase effectiveness, efficiencies and productivity. Human Resource Management Review, 6 (3), 207–229.

BHF. (2022). Research evaluation report—British Heart Foundation. Retrieved from https://www.bhf.org.uk/for-professionals/information-for-researchers/managing-your-grant/research-evaluation

Bloch, C., Graversen, E., & Pedersen, H. (2014). Competitive grants and their impact on career performance. Minerva, 52 , 77–96.

Bloch, C., & Sørensen, M. P. (2015). The size of research funding: Trends and implications. Science and Public Policy, 42 (1), 30–43.

Bol, T., de Vaan, T., & van de Rijt, A. (2022). Gender-equal funding rates conceal unequal evaluations. Research Policy, 51 (2022), 104399.

Boulding, H., Kamenetzky, A., Ghiga, I., Ioppolo, B., Herrera, F., Parks, S., & Hinrichs-Krapels, S. (2020). Mechanisms and pathways to impact in public health research: A preliminary analysis of research funded by the National Institute for health research (NIHR). BMC Medical Research Methodology, 20 (1), 1–20.

Brett, J. O., Staniszewska, S., Mockford, C., Herron-Marx, S., Hughes, J., Tysall, C., & Suleman, R. (2014). Mapping the impact of patient and public involvement on health and social care research: A systematic review. Health Expectations, 17 (5), 637–650.

Clements, A., Reddick, G., Viney, I., McCutcheon, V., Toon, J., Macandrew, H., & Wastl, J. (2017). Let’s Talk-Interoperability between university CRIS/IR and Researchfish: A case study from the UK. Procedia Computer Science, 106 , 220–231.

Comins, J. A. (2015). Data-mining the technological importance of government-funded patents in the private sector. Scientometrics, 104 (2), 425–435.

Cooksey, D. (2006). A review of UK health research funding. Retrieved from https://www.jla.nihr.ac.uk/news-and-publications/downloads/Annual-Report-2007-08/Annexe-8-2007-2008-CookseyReview.pdf

Currie-Alder, B. (2015). Research for the developing world: Public funding from Australia, Canada, and the UK . Oxford University Press.


Danell, R., & Hjerm, R. (2013). The importance of early academic career opportunities and gender differences in promotion rates. Research Evaluation, 22 , 2010–2214.

Davies, J. (2016). Collaborative funding for NCDs—A model of research funding. The Lancet Diabetes & Endocrinology, 4 (9), 725–727.

Defazio, D., Lockett, A., & Wright, M. (2009). Funding incentives, collaborative dynamics and scientific productivity: Evidence from the EU framework program. Research Policy, 38 (2), 293–305.

Deglon, M., Dalvie, M. A., & Abrams, A. (2023). The impact of extreme weather events on mental health in Africa: A scoping review of the evidence. Science of the Total Environment, 881 , 163420.

Dimensions. (2022). Dimensions for funders. Retrieved from https://www.dimensions.ai/who/government-and-funders/dimensions-for-funders/

Edwards, R. (2022). Why do academics do unfunded research? Resistance, compliance and identity in the UK neo-liberal university. Studies in Higher Education, 47 (4), 904–914.

Elkbuli, A., Zajd, S., Narvel, R. I., Dowd, B., Hai, S., Mckenney, M., & Boneva, D. (2020). Factors affecting research productivity of trauma surgeons. The American Surgeon, 86 (3), 273–279.

Elmunim, N. A., Abdullah, M., & Bahari, S. A. (2022). Correction: Elnumin et al. Evaluating the Performance of IRI-2016 Using GPS-TEC measurements over the equatorial region: Atmosphere 2021, 12, 1243. Atmosphere, 13 (5), 762.

El-Sawi, N. I., Sharp, G. F., & Gruppen, L. D. (2009). A small grants program improves medical education research productivity. Academic Medicine, 84 (10), S105–S108.

ERC. (2017). Assessing the business performance effects of receiving publicly-funded science, research and innovation grants. Retrieved from https://www.enterpriseresearch.ac.uk/publications/accessing-business-performance-effects-receiving-publicly-funded-science-research-innovation-grants-research-paper-no-61/

Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified lottery. Mbio, 7 (2), 10–1128.

Fernández-del-Castillo, E., Scardaci, D., & García, Á. L. (2015). The EGI federated cloud e-infrastructure. Procedia Computer Science, 68 , 196–205.

Franssen, T., Scholten, W., Hessels, L. K., & de Rijcke, S. (2018). The drawbacks of project funding for epistemic innovation: Comparing institutional affordances and constraints of different types of research funding. Minerva, 56 (1), 11–33.

Froud, R., Bjørkli, T., Bright, P., Rajendran, D., Buchbinder, R., Underwood, M., & Eldridge, S. (2015). The effect of journal impact factor, reporting conflicts, and reporting funding sources, on standardized effect sizes in back pain trials: A systematic review and meta-regression. BMC Musculoskeletal Disorders, 16 (1), 1–18.

Gallo, S. A., Carpenter, A. S., Irwin, D., McPartland, C. D., Travis, J., Reynders, S., & Glisson, S. R. (2014). The validation of peer review through research impact measures and the implications for funding strategies. PLoS ONE, 9 (9), e106474.

Garrett-Jones, S. (2000). International trends in evaluating university research outcomes: What lessons for Australia? Research Evaluation, 9 (2), 115–124.

Gaughan, M., & Bozeman, B. (2002). Using curriculum vitae to compare some impacts of NSF research grants with research center funding. Research Evaluation, 11 (1), 17–26.

Gläser, J., Laudel, G., & Lettkemann, E. (2016). Hidden in plain sight: The impact of generic governance on the emergence of research fields. The local configuration of new research fields: On regional and national diversity, 25–43.

Grant, J., & Hinrichs, S. (2015). The nature, scale and beneficiaries of research impact: An initial analysis of the Research Excellence Framework (REF) 2014 impact case studies. Retrieved from https://kclpure.kcl.ac.uk/portal/files/35271762/Analysis_of_REF_impact.pdf

Grimpe, C. (2012). Extramural research grants and scientists’ funding strategies: Beggars cannot be choosers? Research Policy, 41 (8), 1448–1460.

Gross, K., & Bergstrom, C. T. (2019). Contest models highlight inherent inefficiencies of scientific funding competitions. PLoS Biology, 17 (1), e3000065.

Gush, J., Jaffe, A., Larsen, V., & Laws, A. (2018). The effect of public funding on research output: The New Zealand Marsden Fund. New Zealand Economic Papers, 52 (2), 227–248.

Guthrie, S., Bienkowska-Gibbs, T., Manville, C., Pollitt, A., Kirtley, A., & Wooding, S. (2015). The impact of the national institute for health research health technology assessment programme, 2003–13: A multimethod evaluation. Health Technology Assessment, 19 (67), 1–291.

Habicht, I. M., Lutter, M., & Schröder, M. (2021). How human capital, universities of excellence, third party funding, mobility and gender explain productivity in German political science. Scientometrics, 126 , 9649–9675.

Hellström, T. (2018). Centres of excellence and capacity building: From strategy to impact. Science and Public Policy, 45 (4), 543–552.

Heyard, R., & Hottenrott, H. (2021). The value of research funding for knowledge creation and dissemination: A study of SNSF research grants. Humanities and Social Sciences Communications, 8 (1), 1–16.

Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41 (2), 251–261.

Hottenrott, H., & Lawson, C. (2017). Fishing for complementarities: Research grants and research productivity. International Journal of Industrial Organization, 51 (1), 1–38.

Hottenrott, H., & Thorwarth, S. (2011). Industry funding of university research and scientific productivity. Kyklos, 64 (4), 534–555.

Hu, M. C. (2009). Developing entrepreneurial universities in Taiwan: The effects of research funding sources. Science, Technology and Society, 14 (1), 35–57.

Hussinger, K., & Carvalho, J. N. (2022). The long-term effect of research grants on the scientific output of university professors. Industry and Innovation, 29 (4), 463–487.

Ismail, S., Tiessen, J., & Wooding, S. (2012). Strengthening research portfolio evaluation at the medical research council: Developing a survey for the collection of information about research outputs. Rand Health Quarterly , 1 (4). Retrieved from https://www.rand.org/pubs/technical_reports/TR743.html

Jeon, J. (2019). Invisibilizing politics: Accepting and legitimating ignorance in environmental sciences. Social Studies of Science, 49 (6), 839–862.

Jonker, L., Cox, D., & Marshall, G. (2011). Considerations, clues and challenges: Gaining ethical and trust research approval when using the NHS as a research setting. Radiography, 17 (3), 260–264.

Jonkers, K., & Zacharewicz, T. (2016). Research performance based funding systems: A comparative assessment. European Commission. Retrieved from https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/research-performance-based-funding-systems-comparative-assessment

Jonkers, K., Fako P., Isella, L., Zacharewicz, T., Sandstrom, U., & Van den Besselaar, P. (2017). A comparative analysis of the publication behaviour of MSCA fellows. Proceedings STI conference . Retrieved from https://www.researchgate.net/profile/Ulf-Sandstroem-2/publication/319547178_A_comparative_analysis_of_the_publication_behaviour_of_MSCA_fellows/links/59b2ae00458515a5b48d133f/A-comparative-analysis-of-the-publication-behaviour-of-MSCA-fellows.pdf

Jowkar, A., Didegah, F., & Gazni, A. (2011). The effect of funding on academic research impact: A case study of Iranian publications. Aslib Proceedings, 63 (6), 593–602.

Kang, B., & Motohashi, K. (2020). Academic contribution to industrial innovation by funding type. Scientometrics, 124 (1), 169–193.

Kayrooz, C., Åkerlind, G. S., & Tight, M. (Eds.). (2007). Autonomy in social science research, volume 4: The View from United Kingdom and Australian Universities . Emerald Group Publishing Limited.


Kim, K. S., Chung, J. H., Jo, J. K., Kim, J. H., Kim, S., Cho, J. M., & Lee, S. W. (2018). Quality of randomized controlled trials published in the international urogynecology journal 2007–2016. International Urogynecology Journal, 29 (7), 1011–1017.

Langfeldt, L., Bloch, C. W., & Sivertsen, G. (2015). Options and limitations in measuring the impact of research grants—Evidence from Denmark and Norway. Research Evaluation, 24 (3), 256–270.

Langfeldt, L., Nedeva, M., Sörlin, S., & Thomas, D. A. (2020). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva, 58 (1), 115–137.

Laudel, G. (2005). Is external research funding a valid indicator for research performance? Research Evaluation, 14 (1), 27–34.

Laudel, G. (2006). The art of getting funded: How scientists adapt to their funding conditions. Science and Public Policy, 33 (7), 489–504.

Laudel, G. (2023). Researchers’ responses to their funding situation. In: B. Lepori & B. Jongbloed (Eds.), Handbook of public funding of research (pp. 261–278).

Laudel, G., & Gläser, J. (2014). Beyond breakthrough research: Epistemic properties of research and their consequences for research funding. Research Policy, 43 (7), 1204–1216.

Luo, J., Ma, L., & Shankar, K. (2021). Does the inclusion of non-academic reviewers make any difference for grant impact panels? Science and Public Policy, 48 (6), 763–775.

Lutter, M., Habicht, I. M., & Schröder, M. (2022). Gender differences in the determinants of becoming a professor in Germany: An event history analysis of academic psychologists from 1980 to 2019. Research Policy, 51 , 104506.

Luukkonen, T., & Thomas, D. A. (2016). The ‘negotiated space’ of university researchers’ pursuit of a research agenda. Minerva, 54 (1), 99–127.

Lyndon, A. R. (2018). Influence of the FSBI small research grants scheme: An analysis and appraisal. Journal of Fish Biology, 92 (3), 846–850.

Meadmore, K., Fackrell, K., Recio-Saucedo, A., Bull, A., Fraser, S. D., & Blatch-Jones, A. (2020). Decision-making approaches used by UK and international health funding organisations for allocating research funds: A survey of current practice. PLoS ONE, 15 (11), e0239757.

MRC. (2019). MRC 10 year translational research evaluation report 2008 to 2018. Retrieved from https://www.ukri.org/publications/mrc-translational-research-evaluation-report/

Muscio, A., Ramaciotti, L., & Rizzo, U. (2017). The complex relationship between academic engagement and research output: Evidence from Italy. Science and Public Policy, 44 (2), 235–245.

Neufeld, J. (2016). Determining effects of individual research grants on publication output and impact: The case of the Emmy Noether Programme (German Research Foundation). Research Evaluation, 25 (1), 50–61.

OECD. (2014). Promoting research excellence: new approaches to funding. OECD. Retrieved from https://www.oecd-ilibrary.org/science-and-technology/promoting-research-excellence_9789264207462-en

OECD. (2015). Frascati manual 2015. Retrieved from https://www.oecd.org/innovation/frascati-manual-2015-9789264239012-en.htm

Olive, V. (2017). How much is too much? Cross-subsidies from teaching to research in British Universities . Higher Education Policy Institute.

Paulson, K., Saeed, M., Mills, J., Cuvelier, G. D., Kumar, R., Raymond, C., & Seftel, M. D. (2011). Publication bias is present in blood and marrow transplantation: An analysis of abstracts at an international meeting. Blood, the Journal of the American Society of Hematology, 118 (25), 6698–6701.

Raftery, J., Hanley, S., Greenhalgh, T., Glover, M., & Blotch-Jones, A. (2016). Models and applications for measuring the impact of health research: Update of a systematic review for the health technology assessment programme. Health Technology Assessment, 20 (76), 1–254. https://doi.org/10.3310/hta20760

Reale, E., Lepori, B., & Scherngell, T. (2017). Analysis of national public research funding-pref. JRC-European Commission. Retrieved from https://core.ac.uk/download/pdf/93512415.pdf

Reddick, G., Malkov, D., Sherbon, B., & Grant, J. (2022). Understanding the funding characteristics of research impact: A proof-of-concept study linking REF 2014 impact case studies with Researchfish grant agreements. F1000Research, 10 , 1291.

Richards, H. (2019). Equipment grants: It’s all in the details. Journal of Biomolecular Techniques: JBT, 30 (Suppl), S49.

Rigby, J. (2011). Systematic grant and funding body acknowledgement data for publications: New dimensions and new controversies for research policy and evaluation. Research Evaluation, 20 (5), 365–375.

Roach, J. W., Skaggs, D. L., Sponseller, P. D., & MacLeod, L. M. (2008). Is research presented at the scoliosis research society annual meeting influenced by industry funding? Spine, 33 (20), 2208–2212.

Roshani, S., Bagherylooieh, M. R., Mosleh, M., & Coccia, M. (2021). What is the relationship between research funding and citation-based performance? A comparative analysis between critical disciplines. Scientometrics, 126 (9), 7859–7874.

Sandström, U. (2009). Research quality and diversity of funding: A model for relating research money to output of research. Scientometrics, 79 (2), 341–349.

Sandström, U., & Van den Besselaar, P. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12 , 365–384.

Saygitov, R. T. (2018). The impact of grant funding on the publication activity of awarded applicants: A systematic review of comparative studies and meta-analytical estimates. bioRxiv, 354662.

Schneider, J. W., & van Leeuwen, T. N. (2014). Analysing robustness and uncertainty levels of bibliometric performance statistics supporting science policy: A case study evaluating Danish postdoctoral funding. Research Evaluation, 23 (4), 285–297.

Schroder, M., Lutter, M., & Habicht, I. M. (2021). Publishing, signalling, social capital, and gender: Determinants of becoming a tenured professor in German political science. PLoS ONE, 16 (1), e0243514.

Serrano Velarde, K. (2018). The way we ask for money… The emergence and institutionalization of grant writing practices in academia. Minerva, 56 (1), 85–107.

Shimada, Y. A., Tsukada, N., & Suzuki, J. (2017). Promoting diversity in science in Japan through mission-oriented research grants. Scientometrics, 110 (3), 1415–1435.

Sinha, Y., Iqbal, F. M., Spence, J. N., & Richard, B. (2016). A bibliometric analysis of the 100 most-cited articles in rhinoplasty. Plastic and Reconstructive Surgery Global Open, 4 (7), e820. https://doi.org/10.1097/GOX.0000000000000834

Szomszor, M., & Adie, E. (2022). Overton: A bibliometric database of policy document citations. arXiv preprint arXiv:2201.07643 .

Tellmann, S. M. (2022). The societal territory of academic disciplines: How disciplines matter to society. Minerva, 60 (2), 159–179.

Thelwall, M., Kousha, K., Abdoli, M., Stuart, E., Makita, M., Font-Julián, C. I., Wilson, P., & Levitt, J. (2023). Is research funding always beneficial? A cross-disciplinary analysis of UK research 2014–20. Quantitative Science Studies, 4 (2), 501–534. https://doi.org/10.1162/qss_a_00254

Thelwall, M., Kousha, K., Dinsmore, A., & Dolby, K. (2016). Alternative metric indicators for funding scheme evaluations. Aslib Journal of Information Management, 68 (1), 2–18. https://doi.org/10.1108/AJIM-09-2015-0146

Thyer, B. A. (2011). Harmful effects of federal research grants. Social Work Research, 35 (1), 3–7.

Tricarico, J. M., de Haas, Y., Hristov, A. N., Kebreab, E., Kurt, T., Mitloehner, F., & Pitta, D. (2022). Symposium review: Development of a funding program to support research on enteric methane mitigation from ruminants. Journal of Dairy Science, 105 , 8535–8542.

UKCRC. (2020). UK health research analysis 2018. Retrieved from https://hrcsonline.net/reports/analysis-reports/uk-health-research-analysis-2018/

Vaduganathan, M., Nagarur, A., Qamar, A., Patel, R. B., Navar, A. M., Peterson, E. D., & Butler, J. (2018). Availability and use of shared data from cardiometabolic clinical trials. Circulation, 137 (9), 938–947.

Van den Besselaar, P., & Horlings, E. (2011). Focus en massa in het wetenschappelijk onderzoek. de Nederlandse onderzoeksportfolio in internationaal perspectief. (In Dutch : Focus and mass in research: The Dutch research portfolio from an international perspective ). Den Haag, Rathenau Instituut.

Van den Besselaar, P. & Mom, C. (2021). Gender bias in grant allocation, a mixed picture . Preprint.

Van den Besselaar, P., & Leydesdorff, L. (2009). Past performance, peer review, and project selection: A case study in the social and behavioral sciences. Research Evaluation, 18 (4), 273–288.

Van den Besselaar, P., & Sandström, U. (2015). Early career grants, performance and careers; a study of predictive validity in grant decisions. Journal of Informetrics, 9 , 826–838.

Versleijen, A., van der Meulen, B., van Steen, J., Kloprogge, P., Braam, R., Mamphuis, R., & van den Besselaar, P. (2007). Dertig jaar onderzoeksfinanciering—trends, beleid en implicaties. (In Dutch: Thirty years research funding in the Netherlands—1975–2005) . Den Haag: Rathenau Instituut 2007.

Voss, A., Andreß, B., Pauzenberger, L., Herbst, E., Pogorzelski, J., & John, D. (2021). Research productivity during orthopedic surgery residency correlates with pre-planned and protected research time: A survey of German-speaking countries. Knee Surgery, Sports Traumatology, Arthroscopy, 29 , 292–299.

Wang, L., Wang, X., Piro, F. N., & Philipsen, N. J. (2020). The effect of competitive public funding on scientific output: A comparison between China and the EU. Research Evaluation, 29 (4), 418–429.

Whitley, R., Gläser, J., & Laudel, G. (2018). The impact of changing funding and authority relationships on scientific innovations. Minerva, 56 , 109–134.

Woelert, P., Lewis, J. M., & Le, A. T. (2021). Formally alive yet practically complex: An exploration of academics’ perceptions of their autonomy as researchers. Higher Education Policy, 34 , 1049–1068.

Woodson, T. S., & Williams, L. D. (2020). Stronger together: Inclusive innovation and undone science frameworks in the Global South. Third World Quarterly, 41 (11), 1957–1972.

Woodward, D. K., & Clifton, G. D. (1994). Development of a successful research grant application. American Journal of Health-System Pharmacy, 51 (6), 813–822.

Yan, E., Wu, C., & Song, M. (2018). The funding factor: A cross-disciplinary examination of the association between research funding and citation impact. Scientometrics, 115 (1), 369–384.

Zhao, D. (2010). Characteristics and impact of grant-funded research: A case study of the library and information science field. Scientometrics, 84 (2), 293–306.


No funding was received for conducting this study.

Author information

Authors and Affiliations

Statistical Cybermetrics and Research Evaluation Group, University of Wolverhampton, Wolverhampton, UK

Mike Thelwall

Information School, University of Sheffield, Sheffield, UK

MRC Secondee, Evaluation and Analysis Team, Medical Research Council, London, UK

Subreena Simrick

Evaluation and Analysis Team, Medical Research Council, London, UK

Department of Organization Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

Peter Van den Besselaar

German Centre for Higher Education Research and Science Studies (DZHW), Berlin, Germany


Corresponding author

Correspondence to Mike Thelwall .

Ethics declarations

Competing interests

The first and fourth authors are members of the Distinguished Reviewers Board of Scientometrics. The second and third authors work for research funders.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Thelwall, M., Simrick, S., Viney, I. et al. What is research funding, how does it influence research, and how is it recorded? Key dimensions of variation. Scientometrics 128 , 6085–6106 (2023). https://doi.org/10.1007/s11192-023-04836-w


Received : 12 February 2023

Accepted : 05 September 2023

Published : 16 September 2023

Issue Date : November 2023

DOI : https://doi.org/10.1007/s11192-023-04836-w


  • Research funding
  • Academic research funding
  • Research funding typology
  • Funding effects

Is research funding always beneficial? A cross-disciplinary analysis of U.K. research 2014–20


Handling Editor: Ludo Waltman

  • Funder(s):  Research England, Scottish Funding Council, Higher Education Funding Council for Wales, and Department for the Economy, Northern Ireland

Mike Thelwall , Kayvan Kousha , Mahshid Abdoli , Emma Stuart , Meiko Makita , Cristina I. Font-Julián , Paul Wilson , Jonathan Levitt; Is research funding always beneficial? A cross-disciplinary analysis of U.K. research 2014–20. Quantitative Science Studies 2023; 4 (2): 501–534. doi: https://doi.org/10.1162/qss_a_00254

Although funding is essential for some types of research and beneficial for others, it may constrain academic choice and creativity. Thus, it is important to check whether it ever seems unnecessary. Here we investigate whether funded U.K. research tends to be higher quality in all fields and for all major research funders. Based on peer review quality scores for 113,877 articles from all fields in the U.K.’s Research Excellence Framework (REF) 2021, we estimate that there are substantial disciplinary differences in the proportion of funded journal articles, from Theology and Religious Studies (16%+) to Biological Sciences (91%+). The results suggest that funded research is likely to be of higher quality overall, for all the largest research funders, and for 30 out of 34 REF Units of Assessment (disciplines or sets of disciplines), even after factoring out research team size. There are differences between funders in the average quality of the research supported, however. Funding seems particularly associated with higher research quality in health-related fields. The results do not show cause and effect and do not take into account the amount of funding received but are consistent with funding either improving research quality or being won by high-quality researchers or projects.


Writing and managing grants occupies a substantial amount of academic time, but it is not clear whether the benefits outweigh the costs in all fields. In some cases, researchers may be unable to experiment without funding, but scholars not needing new equipment, resources, or time buyout may be able to work equally well without financing. Nevertheless, little is known about the proportion of academic time spent on grant writing and administering, so it is difficult to weigh the benefits of funding against its costs. Although many studies report the proportion of time spent by academics on research, teaching, and administration ( Bentley & Kyvik, 2012 ), they rarely ask about grant writing as a separate activity. One exception estimated that each Australian National Health and Medical Research Council grant proposal took 38 working days (nearly two months), or 28 for a resubmitted proposal. In 79% of cases, this effort was unrewarded ( Herbert, Barnett et al., 2013 ), so Australian researchers collectively spent between half a year and a full year writing grant proposals for each one funded. A survey of 12 European countries found that between 51% and 84% of academics (71% in the United Kingdom) wrote grant proposals or otherwise responded to calls for proposals each year ( Drennan, Clarke et al., 2013 ). This work is highly stressful, affecting work–life balance in almost all applicants, but deemed necessary partly due to institutional pressure and expectations from colleagues ( Herbert, Coveney et al., 2014 ). There have also been claims that the constant pressure to win grants undermines the quality of research, particularly in situations where ongoing employment is funding dependent ( Fumasoli, Goastellec, & Kehm, 2015 ), and perhaps through research time lost to grant writing. Thus, it is important to assess whether the benefits of funding always outweigh the costs. 
This article focuses on the narrower issue of whether funding is always beneficial, at least in the sense of being associated with higher research quality outputs.
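The collective-cost estimate above follows from a quick back-of-envelope calculation. The figures below are those reported by Herbert et al. (2013); the arithmetic is only an illustration of how the "half a year to a full year" range arises:

```python
# Figures from Herbert et al. (2013); the calculation itself is illustrative.
days_new_proposal = 38     # working days to write a new proposal
days_resubmission = 28     # working days for a resubmitted proposal
success_rate = 1 - 0.79    # 79% of proposals were unrewarded

# Expected collective writing time per funded grant, bounded by the
# resubmission and new-proposal costs:
lower = days_resubmission / success_rate   # ~133 working days
upper = days_new_proposal / success_rate   # ~181 working days

print(f"{lower:.0f}-{upper:.0f} working days per funded grant")
```

With a typical working year of roughly 220–230 days, 133–181 working days per funded grant corresponds to the "between half a year and a full year" of collective proposal-writing time quoted above.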

Although the time and equipment for early scientific research was self-financed or informally supported by benefactors, the system of competitively awarding national grants for future research emerged from the prize system (for previous achievements) in France before the First World War ( Crosland & Galvez, 1989 ). Over the last half century, university funding in many countries has changed from being awarded unconditionally for the benefit of science, albeit with a focus on government priority areas such as defense, to being mainly accountable and harnessed for societal benefits ( Banchoff, 2002 ; Demeritt, 2000 ; Lepori, van den Besselaar et al., 2007 ; Mustar & Larédo, 2002 ), such as medical priority areas ( Xu, Zhang et al., 2014 ). Resource-seeking behaviors (“academic capitalism”; Slaughter & Leslie, 2001 ) are long-established norms in several major research countries ( Johnson & Hirt, 2011 ; Metcalfe, 2010 ). Research funding is now primarily awarded for achievements (i.e., performance-based funding; Hicks, 2012 ) or future promise, through competitive grants ( OECD, 2014 ). This is supplemented by incentives to seek finances from industry and other nonacademic sources to fund research for nonacademic benefits ( Laudel, 2005 ). External funding is considered valuable despite the huge amount of lost time spent by experts writing grant proposals ( Polster, 2007 ) and the potential to skew science ( Thyer, 2011 ). National grant awards may aim to generally support promising research or researchers, or support research with societal benefits (e.g., Takemura, 2021 ).

The effect of funding seems likely to depend on the researcher, with field-related variations. For example, in specialties needing funding for any kind of research (e.g., areas of medicine or genetics), such as to employ enough assistants or to access equipment or consumables, a researcher without funding cannot research, so what do they do instead? If their role is not changed to teaching only ( Edwards, 2022 ; Nyamapfene, 2018 ) or professional (e.g., clinical doctor), they might use any research time allowance to write grant proposals, accept consultancy or advisory roles, read academic articles, develop aspects of their skills, develop their research methods or theory, or devote more time to teaching, administration, or other roles. They may also support others’ funded research projects in a minor role. In some cases, they may also write short papers about aspects of their research process, such as ethics or minor methods details. In contrast, other researchers may easily be able to conduct unfunded research, even though funding might improve their work with better equipment or larger teams. From a U.K. Research Excellence Framework (REF) perspective, funding might improve research productivity by supporting larger teams (allowing divisions of labor) and give the funded researcher a larger pool of publications from which to choose for REF evaluations. Thus, any comparison between funded and unfunded research presupposes that it is possible to do research in a field without funding and necessarily excludes researchers who can only conduct funded research but did not receive funding in the period examined. The situation is complicated in the United Kingdom because university teaching budgets subsidize 13% of research ( Olive, 2017 ) in a way that may not be recorded.
Researchers with the choice may prefer unfunded research because it gives them autonomy from funder goals and requirements ( Edwards, 2022 ), particularly in the era of challenge-led research ( Olive, 2017 ).

RQ1: How prevalent is research funding for U.K. REF journal articles and are there disciplinary differences in the answer?

RQ2: Is funded research (U.K. REF journal articles) higher quality for all major research funders?

RQ3: Do research funders support different quality research (U.K. REF journal articles)?

RQ4: Is funded research (U.K. REF journal articles) of higher quality irrespective of authorship team size?

RQ5: Are average citation counts effective proxies for average quality for externally funded research (U.K. REF journal articles)?

This section mainly reviews research findings. No theory of the disciplinary organization of science has yet shed light on the relationship between funding and research quality. A partial exception is that grant review outcomes might be expected to be more unpredictable in fields with a low level of agreement on the objects and methods of research (task uncertainty: Whitley, 2000 ), probably including most of the social sciences and humanities.

2.1. Types of Funding

Size : The average size of individual grants has increased over recent decades, for example with long-term funding for large centers of excellence at the expense of sets of individual grants ( Bloch & Sørensen, 2015 ; OECD, 2014 ). In the United States, block-funded National Science Foundation centers do not seem to improve the journal outputs of members, although they do improve commercial partnerships ( Gaughan & Bozeman, 2002 ). Smaller grants seem to help productivity more than larger grants for research centers ( Bloch, Schneider, & Sinkjær, 2016 ). Smaller research awards lead to more citations overall for biological science research ( Gallo, Carpenter et al., 2014 ). In medicine, a small amount of funding from public and private research contracts and consultancies reduces research impact but a large amount increases it ( Muscio, Ramaciotti, & Rizzo, 2017 ). The latter may reflect the large-scale funding needed for effective medical studies in many cases, with underfunded research also being underpowered.

Rationale/source : The source of funding received by a research group influences their research agenda ( Currie-Alder, 2015 ; Hottenrott & Lawson, 2017 ; Tellmann, 2022 ). In terms of quality, spinal research harnessed weaker types of evidence (e.g., case series) when it had industry funding but was more likely to report positive outcomes ( Amiri, Kanesalingam et al., 2014 ).

2.2. Effectiveness of Grant Proposal Peer Review and Characteristics of Recipients

For any analysis of the influence of funding on research, it is difficult to distinguish between cause and effect in terms of funders finding the best research/researchers or the funding improving/allowing research/researchers. Although some grant selection processes focus on applicant characteristics, most concentrate on the proposal, checking its rationale, evaluating its validity, and (often) match with funding criteria ( Chubin, 1994 ; Franssen, Scholten et al., 2018 ; van Arensbergen & van den Besselaar, 2012 ).

There is limited overall evidence of the effectiveness of peer review for grant proposals ( Liaw, Freedman et al., 2017 ), based on evaluations typically using citation indicators as a proxy for research quality or achievements. In some contexts, higher scores or success in winning awards have been shown to associate with more citations ( Bornmann & Daniel, 2006 ; Gallo et al., 2014 ) or more outputs ( Fang, Bowen, & Casadevall, 2016 ; Győrffy, Herman, & Szabó, 2020 ). In contrast, for economic and social sciences research council grants in the Netherlands, while weak researchers tended to be rejected, awardees performed substantially worse in bibliometric terms than rejected researchers with similar scores ( van den Besselaar & Leydesdorff, 2009 ). This suggests that the research council process selected above-average researchers but not the very highest performing (at least bibliometrically), or that the funding was detrimental.

Many studies have found disparities in review outcomes that are suggestive of bias, whether deliberate or accidental, or systemic effects. These biases include gender ( Cruz-Castro, Ginther, & Sanz-Menendez, 2022 ; Tricco, Thomas et al., 2017 ), age ( Levitt & Levitt, 2017 ), ethnicity ( Cruz-Castro et al., 2022 ; Hayden, 2015 ), interdisciplinarity ( Seeber, Vlegels, & Cattaneo, 2022 ), and institutional prestige ( Ali, Bhattacharyya, & Olejniczak, 2010 ; Enger & Castellacci, 2016 ; Horta, Huisman, & Heitor, 2008 ; Jappe & Heinze, 2023 ). All biases seem likely to reduce the effectiveness of the grant allocation process and hence, presumably, the overall benefits of funding.

2.3. The Impact of Grants on Research Productivity and Impact

Funding could be expected to increase the productivity or impact of the funded researchers. The benefits of research funding are impossible to fully quantify, and it is difficult to generate meaningful statistics because of the lack of effective control groups in most cases, and particularly the ability of unfunded groups to receive funding from sources other than the one examined ( Neufeld, 2016 ; Schneider & van Leeuwen, 2014 ). Most previous studies have analyzed individual funding sources and assumed that the papers acknowledging them were primarily caused by the funding, whereas journal articles often draw upon a range of different long-term and short-term funding for equipment and different team members as well as specific project-based grants, at least for biomedical research ( Rigby, 2011 ). Moreover, many studies do not distinguish between selection effects and funding effects ( Neufeld, 2016 ): Are funded researchers more productive because of the money or because better researchers/proposals were selected, or both? Moreover, all studies so far have had limited scope: There are different types of funding and disciplinary differences in funding uses and procedures so there is unlikely to be a simple relationship between funding and impacts. For example, larger funded studies may find it easier to get ethical approval to research in clinical settings ( Jonker, Cox, & Marshall, 2011 ), reducing the number of unfunded studies.

Funding usually associates with (i.e., correlates with but does not necessarily cause) increased research productivity, as measured by journal articles, often even after the end of the funding period ( Chudnovsky, López et al., 2008 ; Godin, 2003 ; Defazio, Lockett, & Wright, 2009 ; Ebadi & Schiffauerova, 2016 ; El-Sawi, Sharp, & Gruppen, 2009 ; Hussinger & Carvalho, 2022 ; Saygitov, 2018 ; Shimada, Tsukada, & Suzuki, 2017 ) but commercial funding can slow academic publishing because of the need to write patents or produce other outcomes ( Hottenrott & Thorwarth, 2011 ). A systematic attempt to track down all funding sources for research from one university suggested that funding increased productivity but not citation impact, although it would be difficult to disentangle disciplinary differences in funding value with this data ( Sandström, 2009 ).

Funding also usually associates with higher citation impact (e.g., Álvarez-Bornstein, Díaz-Faes, & Bordons, 2019 ; Berman, Borkowski et al., 1995 ; Gush, Jaffe et al., 2018 ; Heyard & Hottenrott, 2021 ; Jowkar, Didegah, & Gazni, 2011 ; Levitt, 2011 ; Lewison & Dawson, 1998 ; Neufeld, 2016 ; Peritz, 1990 ; Rigby, 2011 ; Roshani, Bagherylooieh et al., 2021 ; Thelwall, Kousha et al., 2016 ; Yan, Wu, & Song, 2018 ) but there are exceptions ( Alkhawtani, Kwee, & Kwee, 2020 ; Jowkar et al., 2011 ; Langfeldt, Bloch, & Sivertsen, 2015 ; Neufeld, 2016 ; Sandström, 2009 ). In support of the latter, 89% of the most cited rhinoplasty articles published by 2015 were unfunded ( Sinha, Iqbal et al., 2016 ) and 30% of key papers for physics, chemistry, and medicine Nobel Prize winners 2000–2008 declared no funding ( Tatsioni, Vavva, & Ioannidis, 2010 ). Unfunded research might sometimes be highly cited because it has more scope to be innovative, at least in fields such as library and information science not needing expensive resources ( Zhao, 2010 ). Grants may constrain academic freedom, which is a particular threat to the role of social science research in challenging authority and in being able to interpret results free from external pressures ( Kayrooz, Åkerlind, & Tight, 2007 ).

2.4. Levels and Types of Unfunded Research

Most research in the previous century was unfunded, at least as reported in journals. An early study of 900 journal articles in three medical journals from 1987, 1989, and 1991 found high levels of unfunded research (at least without declared funding sources): internal medicine (60%), pathology (62%), and surgery (74%) ( Berman et al., 1995 ). Similarly, in 1987, 1989, and 1991, 84% of journal articles by pathologists were unfunded ( Borkowski, Berman, & Moore, 1992 ) and 63% of emergency medicine articles were unfunded in 1994 ( Ernst, Houry, & Weiss, 1997 ). In 1992, however, only 23% of internal medicine and neurology journal articles were unfunded ( Stein, Rubenstein, & Wachtel, 1993 ). Partly funded research is also common in medicine ( Mai, Agan et al., 2013 ).

Early unfunded research was often different from funded research ( Bodensteiner, 1995 ; Silberman & Snyderman, 1997 ; Stein et al., 1993 ) and a few studies have compared funded with unfunded research types this century. For Spanish virology, cardiology, and cardiovascular scholars, unfunded research was hospital based and clinical, suggesting that it had been internally supported by hospital resources ( Álvarez-Bornstein et al., 2019 ). Unfunded investigations may tend to be desk research or other cheaper types, including secondary data analysis ( Vaduganathan, Nagarur et al., 2018 ), guidelines ( Goddard, James et al., 2011 ), review articles (e.g., Imran, Aamer et al., 2020 ), retrospective records-based analyses (e.g., Brookes, Farr et al., 2021 ; Sedney, Daffner et al., 2016 ), small case studies (e.g., Qi & Wei, 2021 ), or analytical/theoretical/opinion papers without primary data ( Underhill, Dols et al., 2020 ). In nursing, evidence-based practice research may often be unfunded because the data analyzed may come mainly from investigators’ daily work roles ( Higgins, Downes et al., 2019 ). Researching may be a compulsory part of some higher-level courses, such as for radiology, and this may result in many small-scale unfunded studies by educators and learners ( Johnson, Mathews, & Artemakis, 2002 ). In medicine, unfunded research may be disproportionately from general practitioners compared to hospital doctors because they lack the infrastructure to obtain and maintain large grants ( van Driel, Deckx et al., 2017 ).

For this study, the U.K. Research and Innovation (UKRI) national science and research funding government agency gave us the preliminary scores from March 2022 of all 148,977 journal articles submitted to the U.K. REF 2021, excluding those from the University of Wolverhampton (the project host institution, for confidentiality reasons). The REF ( REF, 2022 ) is a periodic (up to seven-year gaps) exercise to evaluate U.K. academic research to, among other things, allocate the U.K. block funding research grants known as “Mainstream QR” and worth over £2 billion per year for up to 7 years. The REF includes postpublication expert review of selected outputs (1–5 per researcher), from which we were given the journal articles. Academics submit only their best outputs over the period and teaching staff do not need to submit anything, so the articles analyzed are likely to represent predominantly the best research produced by U.K. researchers 2014–20. Each article had been given a “quality” score by at least two out of over 1,000 expert assessors (usually full professors), with the grades being 1* (nationally recognized), 2* (internationally recognized), 3* (internationally excellent), and 4* (world leading). The grades reflect originality, significance, and rigor, with different and detailed guidelines for these from each of four overseeing Main Panels ( REF, 2020 , pp. 34–51). There was careful norm referencing between assessors within each of the 34 Units of Assessment (UoAs) to which they had been assigned to ensure that the scores by different pairs of assessors were comparable. There was also overall norm referencing for the entire REF. The peer review process is carefully managed because of its multi-billion pound financial value (about £50,000 per individual score, on average), although the reviewers are not experts in all areas that they need to assess. 
Each UoA covers what might be called a broad field (very broad in some cases, like UoA 34: Communication, Cultural and Media Studies, Library and Information Management) and is either a recognizable discipline (e.g., UoA 18 Law) or a set of related disciplines (e.g., UoA 8 Agriculture, Food and Veterinary Sciences). The four Main Panels group together cognitively related UoAs for administrative and norm referencing purposes.

The REF articles were matched against Scopus records by DOI comparisons ( n = 133,218). REF articles without a DOI in Scopus were matched instead by title, with a manual check to accept or reject all potential matches ( n = 997). The Scopus record was used for funding and citation information. Scopus cross-references information in articles with funding information gained from other sources populating its funding database ( McCullough, 2021 ). Scopus reports a single funder for each paper, at least through its Application Programming Interface (API), as used to gather the data, although some studies have multiple funders. Thus, the funder-level results reported here are based on incomplete data.

Some of the articles were given multiple grades from the same or different UoAs. This is possible because each author is entitled to submit between one and five outputs for which they are a coauthor, and coauthors from different institutions may choose the same article(s). For analysis, duplicate articles were removed within the grouping analyzed (UoA, Main Panel, or all). When an article had received different scores, it was given the median score, choosing randomly between the two middle values when there was an even number of scores.
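This tie-breaking rule can be sketched as follows (a minimal illustration; the function name is ours, not from the original analysis):

```python
import random

def ref_score(scores):
    """Resolve multiple REF grades (1-4) for one article: take the median,
    choosing randomly between the two middle values when the count is even."""
    s = sorted(scores)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return random.choice([s[n // 2 - 1], s[n // 2]])
```

For example, `ref_score([2, 3, 4])` returns 3, while `ref_score([2, 4])` returns 2 or 4 with equal probability.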

Scopus was used for funding information because of its wider coverage than the Web of Science ( Martín-Martín, Thelwall et al., 2021 ) and because Google Scholar does not extract relevant information. Scopus started systematically indexing funding in 2011 ( Rigby, 2011 ) so its data should be mature for the REF period 2014–20. Funding data in academic articles might be in a separate “Funding sources” section, in the acknowledgments, or as a footnote alongside author information. The acknowledgment section was a common place for funding information ( Paul-Hus, Díaz-Faes et al., 2017 ) before the rise of the dedicated funding section.

Some article funders were universities, suggesting that the authors had been allocated internal university money for their research or that it was unfunded but recorded as university-funded to reflect employers allowing research time for the scholars involved, or for university policy reasons. As it was not possible to distinguish between the two, for the regression analysis, research was classed as unfunded if the funder was a university, irrespective of country. For this, we checked the 4,042 funders and classified 1,317 of them as internal university or research institute funding (e.g., Weizmann Institute of Science) and 2,725 as external funding (e.g., American Mathematical Society). After this stage, research was classified as externally funded if it declared a funder in Scopus and the funder name was not one of the 1,317 universities found. When funding information was present (e.g., a grant number) but no name for the funder was given, it was assumed to be externally funded.

3.2. Data Quality Checks

To check whether the Scopus API funding information was accurate, for six UoAs chosen to represent different field types, we read samples of articles for details of research support. For each UoA, 100 unfunded articles, 100 university-funded articles, and 100 nonuniversity funded articles were selected with a random number generator for checking (or 100%, when fewer than 100). The checks were performed by two people, the first author for all and either ES, MM, or MA. Publisher versions of articles were checked for funding information except when the preprint was online with funding information. In one case (Theology, unfunded) we were unable to obtain the article through any method (including interlibrary loans) and it was substituted with the next article selected by the random number generator. A study was counted as university funded if the only funding source mentioned was university based. It was recorded as externally funded if any nonuniversity funding source was mentioned.

Funding could be mentioned in multiple places, although a “Funding” section or an “Acknowledgments” section at the end of the article were common, and a “Disclosure of Interests” end section sometimes also included funding information. Other places included first page footnotes, last page footnotes (rare), a notes section at the end of the article, the first paragraph of the article (rare), and the last paragraph of the conclusions (common in Physics, one example in Theology). Articles sometimes declared that the research was unfunded, usually within a funding section, and sometimes in a disclosure of interests section. In one case, a funding section declared that the research was unfunded but the acknowledgments section thanked a funder, so this was coded as funded. Some articles included author biographies that might have mentioned funding sources but never did.

Funding statements varied in length from short declarations of the funder name and grant number to several paragraphs of thanks. In some fields it was common to thank departments hosting a visit or seminar and current and former employers. An article was classed as funded if this was stated directly (e.g., “funded by”) or if it was suggested by the context, such as by naming a research funding organization or thanking one for an unspecified type of “support.” Acknowledgments of support from universities were not counted as funding if these seemed to be minor and routine, such as hosting a visit or supporting a seminar. University support was counted as funding for the purposes of these checks if the term “funding” was mentioned or “grant” or it was obvious from the context that a financial transaction had occurred, as in the case of a PhD studentship. In a few cases, support in kind was provided, such as through access to equipment, but this was not counted as funding. Research described as part-funded was recorded as funded.

Although in some cases the article appeared to be the primary outcome of a grant, in most cases the relationship between the funding and the output was less clear. For example, the article could be one of the outputs of a PhD studentship or Leverhulme Trust fellowship. Many articles had authors with differing funding sources, suggesting that the study itself had not been funded but had been made possible by funding given to the participants. Such studies were counted as funded. In Medicine and Physics, for example, long paragraphs often recorded the financial support given to all participants as well as the equipment used and the study itself.

Even manually extracted funding information is unreliable. A funded article may have no declaration within the text if the author forgot or the journal style or field norms discouraged it. Checks were made of cases where Scopus recorded a funder but the article did not mention one. These checks found examples where Scopus was correct because the article was listed on a funding website as an output of the grant. Although Scopus has reported that its funding information is imported from the acknowledgment sections of articles ( Beatty, 2017 ), it seems likely that it now automatically links articles to funding records from elsewhere and might also perform wider searches of article text. In other cases the funding was plausible because the scholar thanked the same funder on a different output at a similar time or listed the funder on their online CV. Scopus also seemed to have listed incorrect funders in at least two cases: the wrong funder altogether in one case, and a university in another case where the author had included an acknowledgment that an earlier version had been presented at a seminar at that university. These were not altered in our data because the checks were for quality assessment rather than correction.

Comparing the Scopus API information with manual checks, the Scopus API results were always imperfect and substantially misleading in some cases ( Figure 1 ). Almost all Clinical Medicine and Physics articles were externally funded (i.e., at least one nonuniversity funder) even if the Scopus API listed none. In these cases, Scopus had presumably not found where the funding was listed in the article. Physics article funding statements were often in the last paragraph of the conclusions, where they may have been missed. For all six fields, most articles classed as university funded (i.e., the Scopus API funder was apparently a university) were externally funded. This typically occurred because the Scopus API reports only one external funder, and the manual checks classed an article as externally funded if any of the funders were not universities. For four of the six fields, however, most Scopus API results were correct for unfunded and externally funded articles. This information should be taken into consideration when interpreting the results.

The results of manual checks of random samples of REF2021 articles recorded by Scopus as funded (listing a university or funder) or unfunded for six UoAs.


3.3. Analyses

For RQ1, the proportion of articles declaring research funding was calculated for each UoA and Main Panel.

For RQ2 and RQ3, the average quality of the articles from each funder was calculated and compared to the average quality of unfunded research. The grade point average (GPA) was used for this, which is the arithmetic mean of the quality scores. Although widely used in the United Kingdom in rankings of institutions, the GPA is a convenience and not theoretically informed because there is no reason to believe that a 4* article is four times as good as a 1* article. Nevertheless, it at least gives a straightforward and easily understandable indicator of average quality scores for funded journal articles. The 30 largest funders (including unfunded and unknown funder) were reported. The choice of 30 is relatively arbitrary. The GPAs of small funders with few articles would be imprecise estimates of each funder’s average research quality, and 30 is a common statistical choice for the minimum size to identify a pattern. This calculation ignores funders not reported by the Scopus API, which particularly affects articles with multiple funders. The RQ2 test involves making multiple comparisons using confidence intervals and this increases the chance of getting at least one statistically significant positive result by accident, the problem of familywise error rates. We report confidence intervals without familywise adjustment because the individual funders are of interest, but use Bonferroni corrections (see Perneger, 1998 ) to discuss the results as a group. These lower the significance threshold for each individual comparison so that the chance of making at least one false positive (i.e., a Type I error) across all comparisons is held at the 0.05 level.
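The Bonferroni logic can be sketched as follows (an illustration only; the helper name and the p-values in the usage note are hypothetical):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each p-value, whether it stays significant after a
    Bonferroni correction: each of the m comparisons is tested at alpha / m,
    keeping the familywise chance of at least one false positive below alpha."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]
```

With three comparisons, each is tested at 0.05 / 3 ≈ 0.0167, so `bonferroni_significant([0.001, 0.02, 0.04])` returns `[True, False, False]`.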

We used ordinal regression (the polr function in the R MASS package with the Hess = TRUE option) to answer RQ4, with research quality as the dependent variable and research funding (binary) and the log of the number of authors as independent variables. A similar approach has been used with citations as a proxy for research quality as the dependent variable ( Ebadi & Schiffauerova, 2016 ). We excluded 23 articles with no authors listed from the regressions. We ran a separate regression for each UoA and Main Panel (combining similar UoAs) and for all the data. Ordinal regression only assumes that the four quality scores are in ascending order but does not assume that they are equidistant, so it is better than types of regression requiring a scalar output ( Gutiérrez, Perez-Ortiz et al., 2015 ). By including both authors and funding as independent variables, the regression output can show whether one of the two is redundant in any area. The log of the number of authors was used instead of the number of authors because the relationship between author numbers and log-transformed citation counts is approximately logarithmic ( Thelwall & Maflahi, 2020 ), and the shape is similar for the relationship between REF scores and author numbers ( Thelwall, Kousha et al., 2022a ).

For RQ5, we calculated a field-normalized citation score for every REF2021 article to allow fair comparisons between articles from different fields. For this, we first log normalized each citation count with ln(1 + x ) to reduce skewing in the data set caused by very highly cited articles ( Thelwall & Fairclough, 2017 ). Then, we calculated the average of the logged citations for each of the 330 Scopus narrow fields and each year 2014–18 (i.e., 5 × 330 averages) and divided each article’s logged citation count by the average for its narrow field and year. Articles in multiple fields were instead divided by the average of the relevant field averages. This gives a Normalized Log-transformed Citation Score (NLCS) ( Thelwall, 2017 ) for each journal article. These can be compared between fields and years because, by design, a score of 1 always reflects an average number of citations for the field and year of an article. Averaging the NLCS of all articles associated with a funder gives the funder’s Mean NLCS (MNLCS), which is a measure of the normalized average citation rate for the journal articles it funded. Again, an MNLCS above 1 always reflects a funder that tends to fund articles that are more cited than average for their fields and years. The most recent 2 years were excluded from the calculation to give a citation window of at least 2 years, reducing the influence of short citation windows. Although a 3-year citation window is better ( Wang, 2013 ), it would reduce the amount of data and the log transformation in the NLCS formula reduces the statistical variability caused by short time windows.
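The NLCS calculation can be sketched as follows (a simplified illustration that assumes each article belongs to a single Scopus narrow field; the actual calculation instead averages the relevant field means for multi-field articles):

```python
import math
from collections import defaultdict

def nlcs(articles):
    """Normalized Log-transformed Citation Scores for (field, year, citations)
    tuples: ln(1 + citations) divided by the mean logged citation count of
    the article's field and year."""
    logged = [math.log(1 + c) for (_f, _y, c) in articles]
    groups = defaultdict(list)
    for (f, y, _c), lc in zip(articles, logged):
        groups[(f, y)].append(lc)
    means = {k: sum(v) / len(v) for k, v in groups.items()}
    return [lc / means[(f, y)] for (f, y, _c), lc in zip(articles, logged)]
```

A funder’s MNLCS is then the arithmetic mean of the NLCS values of the articles it funded; by construction, the mean NLCS across all articles in a field and year is 1.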

In all analyses, we did not take into account any dependencies in the data caused by up to five outputs being submitted by a single researcher, and this is a limitation. On average, each full-time equivalent (FTE) researcher submitted 2.5 outputs. A minimum of one and a maximum of five outputs could be submitted by a single academic (whether full time or part time). Accounting for some nonarticle outputs and an unknown number of part timers, researchers (whether full time or part time) probably submitted about two articles each, on average. If every researcher produced uniform quality solo work, then this would reduce the effective sample sizes in all the analyses by 50%. Nevertheless, uniform quality work for all researchers is unrealistic and most work was coauthored, so the effective sample size reduction due to dependency (i.e., two articles are more likely to have the same REF score if they have at least one author in common) is unknown. Because of this, the widths of the confidence intervals in all the graphs should be treated with caution.

4.1. RQ1: Prevalence of Research Funding

Just under two-thirds (63%) of journal articles submitted to REF2021 had funding information recorded by the Scopus API, with substantial disciplinary differences ( Table 1 , Figure 2 ). This figure excludes funded journal articles where the funder was not recorded by the author, the journal did not allow a funding declaration, or a technical issue prevented Scopus from finding the declaration (see Figure 1 ). This also includes research that was internally funded, whether nominally (part of the scholar’s job to research) or more substantially, such as with money for equipment or research assistants. Some universities (e.g., University of Wolverhampton, not in the data set) now require scholars to record their employer as the funder within the internal research information management system for articles not externally funded, and this may encourage them to report the same within their articles.

Numbers of articles, unfunded articles, and university-funded articles, and the number of funders, per UoA, Main Panel, or overall.

The percentage of U.K. REF2021 journal articles with a declared source of funding in Scopus.


Funding is the norm for Main Panels A (80%) and B (76%), but half as prevalent in Main Panels C (40%) and D (32%). The difference is presumably due to the need for equipment and large teams in the health, life, and physical sciences (except for purely theoretical contributions), whereas expensive or perishable equipment is probably rarer in the social sciences, arts, and humanities, except for long-term purchases (e.g., musical instruments). Moreover, there may be more social sciences, arts, and humanities topics that can be researched in small teams or alone. The four UoAs with the highest proportions of funded papers are Biological Sciences (91%), Physics (91%), Clinical Medicine (88%), and Chemistry (87%). All these subjects have subfields that do not need expensive equipment: theoretical physics, theoretical chemistry, biostatistics (related to medicine), and systems biology. Thus, the result may reflect “cheaper” specialties being rare in the United Kingdom or globally.

4.2. RQ2: Is Funded Research Higher Quality for All Major Research Funders?

The GPA of the REF2021 scores of funded journal articles tends to be higher than the unfunded article GPA for most large research funders in Main Panels A–D ( Figures 3 – 6 ). In the few cases where the funded GPA is lower than the unfunded GPA, the confidence interval for the former almost always includes the latter. The sole minor exception is the European Commission funding in Main Panel C ( Figure 5 ). Nevertheless, this exception could be a side effect of the large number of tests (29 × 4), and with a Bonferroni correction, the difference between European Commission-funded research and unfunded research in Main Panel C is not statistically significant. Thus, at the Main Panel level, the results are broadly consistent with research funding being an advantage for all major funders, albeit marginal in some cases.

The average quality score of REF2021 journal articles by research funder for Main Panel A (mainly health and life sciences) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


The average quality score of REF2021 journal articles by research funder for Main Panel B (mainly engineering, physical sciences and mathematics) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


The average quality score of REF2021 journal articles by research funder for Main Panel C (mainly social sciences) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


The average quality score of REF2021 journal articles by research funder for Main Panel D (mainly arts and humanities) for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


For Main Panel A ( Figure 3 ), all research funder GPAs are above the unfunded GPA and none of the research funder confidence intervals contain the unfunded GPA. Thus, funding from a major funder is an advantage in Main Panel A. The same is broadly true for Main Panel B ( Figure 4 ) except that four of the funder GPA confidence intervals contain the unfunded score.

The pattern is mixed for Main Panel C ( Figure 5 ), perhaps because of smaller sample sizes giving less accurate mean estimates and wider confidence intervals. Although there are three funders with GPAs below the unfunded GPA, there are many funders with GPAs substantially above it and with narrow confidence intervals. Thus, there is still a general trend for major funder money to be advantageous in Main Panel C. For Main Panel D, most funders have a GPA above the unfunded GPA, and a few have substantially higher GPAs with narrow confidence intervals, suggesting that major funder money is also an advantage here.

Major funders also tend to support higher quality research when the data are aggregated to the level of individual UoAs, although there are some exceptions. Some illustrative examples are discussed here, focusing on larger UoAs for which the patterns are clearest. For Clinical Medicine (UoA 1, Figure 7 ), Engineering and Physical Sciences Research Council (EPSRC) funded research surprisingly generated lower quality scores than unfunded research. The reason for this may be that UoA 1 assessors did not value research with substantial inputs from nonmedical fields in the context of their UoA (e.g., because of more rigid quality criteria: Whitley, 2000 ). There is no similar problem for UoAs 2 ( Figure 8 ) and 3 ( Figure 9 ).

The average quality score of REF2021 journal articles by research funder for UoA 1 Clinical Medicine for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


The average quality score of REF2021 journal articles by research funder for UoA 2 Public Health, Health Services and Primary Care for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


The average quality score of REF2021 journal articles by research funder for UoA 3 Allied Health Prof., Dentistry, Nursing & Pharmacy for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


The presence of pharmaceutical companies as funders for health and medical research is clear in UoAs 1–3 ( Figures 7 – 9 ). The research that they fund tends to have a substantially higher GPA than unfunded research, suggesting that the commercial income enhances rather than compromises academic quality, or that a commercial funder boosts the significance component of quality for REF assessors.

The second largest UoA, Engineering ( Figure 10 ), illustrates the general advantage of major research funders for quality in this field. Although most of the funders are governmental research funding bodies, military funding clearly produces above-average quality research.

The average quality score of REF2021 journal articles by research funder for UoA 12 Engineering for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


Funding seems to be a marginal advantage for the largest UoA, Business and Management, as eight of the top 28 funders have a below average GPA ( Figure 11 ). Moreover, the core funder, the Economic and Social Research Council (ESRC), confers the relatively minor advantage of a 0.1 higher average GPA. The European Research Council was (pre-Brexit) particularly effective at funding high-quality research, but this is a logical side effect of its strategy of selecting “top researchers” through very competitive grants.

The average quality score of REF2021 journal articles by research funder for UoA 17 Business and Management Studies for the 30 research funders with the most articles. Error bars indicate 95% confidence intervals.


4.3. RQ3: Do Research Funders Support Different Quality Research?

As the graphs above illustrate, there are statistically significant differences in the average quality of research supported by different funders. For example, in UoA 1 Clinical Medicine ( Figure 7 ), the GPAs of the three main research funders differ, with non-overlapping confidence intervals. In particular, the National Institutes of Health (NIH) funded particularly high-quality research, followed by the Wellcome Trust (U.K. charity) and the Medical Research Council (MRC), all of which have large budgets and general funding remits. The NIH’s advantage may be that, as a U.S.-based agency, the research that it funds with U.K. partners would usually be international, and its funding is backed by the greater financial resources of the United States.

4.4. RQ4: Does Authorship Team Size Moderate the Effect of Funding on Research Quality?

Research has shown that articles with more authors tend to be more cited, and funding seems to attract large team sizes, so it is possible that the advantage of funding is sometimes primarily in bringing together many authors. In our data, for all UoAs and Main Panels, funder GPA correlates positively with the average (geometric mean) number of authors on papers associated with the funder ( Figure 12 , GPA vs. authors). In other words, the larger the average authorship team size supported by a funder, the higher the average quality of the research it funds. The correlations tend to be strong in Main Panels A and B.

Pearson correlations between funder MNLCS, GPA, and geometric mean authors by UoA or Main Panel. MNLCS correlations only cover research published 2014–18. UoAs are included only when they have at least 30 funders associated with at least five papers each.


For UoAs with at least 30 funders associated with at least five papers each, the weakest correlation between GPA and authors is for Business and Management Studies (0.06). Thus, for Business and Management Studies research funders, there is almost no relationship between average funded authorship team size and average research quality. This may be due to relatively little variation in GPA between funders and typically small research teams (average three authors per paper for all major funders, varying between 1.7 and 4.3). In contrast, the highest correlation is for Agriculture, Food and Veterinary Sciences (0.84), partly due to medical funders (MRC, NIH, Wellcome) and the Gordon and Betty Moore Foundation supporting large team research with high GPAs.
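The two summary statistics behind these correlations can be sketched as follows (standard formulas; the function names are ours):

```python
import math

def geometric_mean(xs):
    """Geometric mean, used for authors per paper because author counts
    are skewed."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g., funder GPA versus funder geometric mean authors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)
```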

Ordinal regressions for each UoA, Main Panel, and overall (39 regressions) allow the effects of funding and author numbers to be analyzed separately. As a conservative step (see Figure 1 ), university-funded research was classed as unfunded, so the focus is on external funding for research. Because of the incompleteness of the funding data, the results will tend to underestimate any differences that exist. In the regressions, an exponentiated coefficient of 1 indicates that the independent variable (logged number of authors or external funding) has no effect on the odds of higher quality scores (1, 2, 3, or 4). Values greater than 1 indicate that the variable increases the odds of a higher quality score and values less than 1 the opposite. The exponentiated coefficient for funding is an odds ratio: a value of c means that funded research has c times the odds of unfunded research of reaching any higher, rather than lower, quality score. Similarly, the exponentiated coefficient for logged authors is the factor by which these odds are multiplied when the number of authors increases by a factor of e ≈ 2.718.
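The interpretation of exponentiated ordinal regression coefficients can be illustrated numerically (the coefficient values below are hypothetical, not the fitted ones): under a proportional-odds model with logged authors as a predictor, multiplying the number of authors by k multiplies the odds of a higher quality score by the exponentiated coefficient raised to the power ln(k), while funding multiplies the odds by its exponentiated coefficient directly.

```python
import math

def quality_odds_multiplier(exp_coef_funding, exp_coef_log_authors,
                            funded, author_factor):
    """Combined multiplier on the odds of a higher quality score for an
    article that is funded (or not) and has `author_factor` times more
    authors than a baseline article, given exponentiated proportional-odds
    coefficients."""
    multiplier = exp_coef_log_authors ** math.log(author_factor)
    if funded:
        multiplier *= exp_coef_funding
    return multiplier
```

With hypothetical exponentiated coefficients of 1.5 (funding) and 1.2 (logged authors), a funded article with e ≈ 2.718 times more authors than an unfunded baseline has 1.5 × 1.2 = 1.8 times the odds of a higher quality score.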

The results show that, when considered independently from the number of authors, funding associates with improved odds of higher quality research in 33 out of 34 UoAs and all four Main Panels ( Figure 13 ). The confidence intervals exclude the null value 1 for 30 out of 34 (and 30 out of all 39) regressions. These calculations do not include familywise error rate corrections for 39 separate tests, so may include some false positives; hence the true number of UoAs where funding is important may be less than 30. Conversely, because 30 out of 39 regressions are positive and values greater than 1 fall comfortably within all 95% confidence intervals, it is also plausible that, after factoring out the number of authors, funding always associates with an improved chance of higher quality journal articles: the few exceptions could be due to normal levels of chance. Nevertheless, while funding has the most substantial association with quality in Main Panels A and B, its association is marginal in some UoAs from Main Panels C and D. Thus, overall, funding appears to have weak or moderate value, even after factoring out authorship team size, in associating with higher quality research in the social sciences, arts, and humanities, but there is strong evidence that it has considerable value in medicine, the life and physical sciences, and engineering.

Exponentiated ordinal regression coefficients for quality score against external funding (binary) and the logged number of authors for REF2021 articles 2014–20. Error bars show 95% confidence intervals. University-funded research and research without declared funding is classified as unfunded. Each pair of coefficients shown above is from a separate model.


Although less important here, increased author numbers usually, but not always, associate with increased odds of higher quality journal articles, even after factoring out research funding. The exceptions are mainly in the arts and humanities.

4.5. RQ5: Are Average Citation Counts Effective Proxies for Average Quality for Externally Funded Research?

Research funders often have their own evaluation teams to assess the effectiveness of their grants. For this, the main quantitative evidence is likely to be citation data, perhaps with project grades from end-of-grant reviewers in some cases. If they make like-for-like comparisons against similar funders, then the only quantitative data that they would have for both would be citation counts. Thus, it is useful to check whether the average citation impact of funders is an effective proxy for the average quality of the research that they fund.

Correlations between funder citation rates (MNLCS) and average quality (GPA) are strong (> 0.5) in all Main Panels ( Figure 12 ), suggesting that citation impact is a reasonable proxy for research quality overall. The correlations also tend to be moderate or strong in the UoAs of Main Panels A and B ( Figure 12 ), but are variable in the UoAs of Main Panel C. In particular, the correlations are close to 0 (positive or negative) in UoAs 17 (Business and Management Studies), 20 (Social Work and Social Policy), and 23 (Education) and weak (0.2) in UoA 13 (Architecture, Built Environment and Planning). Thus, citation rates are inappropriate proxies for funder quality in these areas. By extension, and due to a lack of evidence, it seems that citation rates should not be used as proxies for funder research quality throughout the social sciences, arts, and humanities, except for Geography and Environmental Studies.
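As an illustration of the comparison behind these correlations, the sketch below computes MNLCS (Mean Normalized Log Citation Score) for a few hypothetical funders and correlates it with their average quality grade; the data and the exact normalization here are assumptions for illustration, not the study's computation:

```python
import math

def mnlcs(citations, field_mean_log):
    """Mean Normalized Log Citation Score: the mean of ln(1+c), divided by
    the field-wide mean of ln(1+c), so 1.0 = field-average citation impact."""
    return sum(math.log(1 + c) for c in citations) / len(citations) / field_mean_log

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical funders: citation counts of their articles and their REF GPA.
funders = {"A": ([2, 5, 9], 2.8), "B": ([10, 22, 40], 3.3), "C": ([1, 1, 3], 2.5)}
all_cites = [c for cites, _ in funders.values() for c in cites]
field_mean_log = sum(math.log(1 + c) for c in all_cites) / len(all_cites)

scores = [mnlcs(cites, field_mean_log) for cites, _ in funders.values()]
gpas = [gpa for _, gpa in funders.values()]
print(f"MNLCS vs GPA correlation: {pearson(scores, gpas):.2f}")
```

A strong positive correlation at the funder level, as in Main Panels A and B, is what licenses using citation impact as a proxy for quality; near-zero correlations, as in several Main Panel C UoAs, do not.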

The results are limited to journal articles from the United Kingdom, and to the best 1–5 journal articles written by U.K. academics 2014–20, so are not representative of typical U.K. research (especially books). Moreover, while the scores given to the articles by the REF assessors are relatively carefully allocated, usually by two senior field experts following written guidelines ( REF, 2022 ) and norm referenced within each UoA and broadly between UoAs, they are imperfect. In particular, an unknown but nontrivial number of articles will have been assessed by people without the knowledge to understand them, so guesswork was needed for these. Moreover, research quality is subjective and other assessors may well have given different scores to the same outputs; the assessors may also have taken into account funding when allocating scores (especially nonacademic funding as an indicator of significance). Nevertheless, the scores seem likely to be broadly reasonable, with unreasonable scores or errors being noise in the data. This hypothesis is sufficient for the above results to make sense, although noise in the data would tend to reduce the magnitude of any differences found. As a caveat, however, there are different ways of conceiving research quality and although the REF definition is relatively universal (combining originality, significance, and rigor: REF, 2022 ), there are others ( Langfeldt, Nedeva et al., 2020 ). Related to this, researchers may not submit their most creative unfunded articles to the REF because of the significance and rigor criteria, and this may influence the results.

Another limitation is that the results only consider the funder reported by the Scopus API, ignoring any that Scopus could not find and all funders except one in the case of multiple-funded articles. This is a substantial limitation, as discussed in the evaluation at the end of Section 3 . In particular, the extent of funding is underestimated in the data here. This does not invalidate the findings because funded research is still more likely to be recorded as such in the API ( Figure 1 ), so the funded and unfunded groups are statistically distinct. This limitation nevertheless indicates that differences found between funded and unfunded research are larger than shown in the data (because the unfunded subsets are “polluted” with funded articles). The errors in the Scopus API data would also tend to reduce the difference between funded and unfunded research for the same reason. This reduction is likely to be largest when the Scopus API has the most missing information (probably lower-numbered UoAs).

The findings ignore the value of each grant, whether the funding was partial, what the money was spent on, how many publications were produced from it, and whether journal articles were the primary outcome of the project or a side effect. They also ignore disciplinary differences in the need to record funding sources, with biomedical fields apparently most affected due to a need to register any potential conflicts of interest. They also ignore the purpose of the funding, which may not be to conduct high-quality research but to develop a technology for industry, to train a PhD student, to develop a junior postdoc, to build research networks, or to support researcher mobility. The results do not differentiate between projects awarded explicit funding by a university and projects without explicit funding but presumably consuming university resources and time: Both are classed as unfunded for the regression and are otherwise recorded as university funded only if this is stated in the funding information. More generally, the results do not take into account the time taken to write funding proposals for either successful or unsuccessful bids. Finally, funding here is tied to publications, although a team may be partly funded and draw on different sources ( Aagaard, Mongeon et al., 2021 ).

5.1. Comparison with Prior Work

The findings mostly have little directly comparable prior work. For RQ1, the prevalence of research funding for any country is reported apparently for the first time, albeit with partial data. The existence of disciplinary differences in funding rates is unsurprising but does not seem to have been previously investigated for all academic fields. The prevalence of funding is much higher than previously reported ( Berman et al., 1995 ; Borkowski et al., 1992 ; Ernst et al., 1997 ; Jowkar et al., 2011 ; Lim, Yoon et al., 2012 ; Shandhi, Goldsack et al., 2021 ; Stein et al., 1993 ), with a few exceptions ( Godin, 2003 ), probably at least partly due to more systematic funding reporting now and partly due to the nature of the U.K. sample (e.g., excluding publishing practitioners/professionals).

The higher quality rates for major funders (RQ2) are a new finding but echo many previous studies of individual funders that have shown funded articles or researchers to be more cited than a comparable group (unfunded articles, unsuccessful applicants, or researchers before the funding) ( Álvarez-Bornstein et al., 2019 ; Berman et al., 1995 ; Gush et al., 2018 ; Heyard & Hottenrott, 2021 ; Levitt, 2011 ; Lewison & Dawson, 1998 ; Peritz, 1990 ; Rigby, 2011 ; Roshani et al., 2021 ; Yan et al., 2018 ), and conflicts with the few studies not showing this or showing the reverse in specific fields or contexts ( Jowkar et al., 2011 ; Muscio et al., 2017 ; Neufeld, 2016 ). The discrepancies include two fields where citations are reasonably reliable indicators of quality—Biology/Biochemistry and Environment/Ecology in Iran ( Jowkar et al., 2011 )—so it is possible that there are international differences in the value of research funding.

The unsurprising finding that funders can support different quality research (RQ3) aligns with prior findings that research funders can support research with different average citation impacts ( Thelwall et al., 2016 ), and that the amount of research funding influences the citation impact of the research ( Muscio et al., 2017 ).

The finding that funded research is higher quality than unfunded research even after factoring out team size (RQ4) is not directly comparable to prior studies. It contradicts claims that the current managerial approach to research in higher education reduces the quality of research in the social sciences by restricting the autonomy of researchers ( Horta & Santos, 2020 ), although it is not clear whether academics with more autonomy but the same amount of funding would produce better work. The evidence of fields in which average citation counts are effective proxies for average quality (as conceived in the REF) for externally funded research (RQ5) is also not directly comparable to prior studies.

5.2. Alternative Causes of Funded Research Being Higher Quality

The higher quality of funded research has multiple possible causes, all of which may be true to some extent. Although it seems self-evident that funding improves research, it is not always true ( Jowkar et al., 2011 ; Muscio et al., 2017 ; Neufeld, 2016 ). There are many pathways that could explain the usually positive relationship.

5.2.1. Funders select more successful researchers to fund

Research, albeit with limited scope, suggests that funding councils may be good at excluding weak researchers but not good at identifying the very best, at least if citations are accepted as a proxy for research quality ( van den Besselaar & Leydesdorff, 2009 ). Assuming that the first group, together with researchers that were unable to submit funding bids, formed a majority or were substantially weaker than the other two groups, this would likely translate into a statistical association between funding and researcher quality. There may also be a REF selection effect that would strengthen the results, with stronger researchers differentially submitting their funded research and weaker researchers often not having funded research.

5.2.2. Funding improves existing research

At the simplest level, funding may allow some researchers to conduct better versions of the research that they had already intended to pursue. For example, the funding might support a larger scale survey, newer equipment, expert collaborators, or additional supporting analyses. It seems unlikely that a project given extra funding would often become worse, for example because new equipment was bought but did not work well, or an expanded survey incorporated lower quality data collection methods in the additional areas.

5.2.3. Funding changes the research carried out, replacing weaker (or no) with stronger work

Funding might allow a study that would be impossible for the applicant(s) without external funding ( Bloch, Graversen, & Pedersen, 2014 ). If the funding was for expensive equipment or other processes (e.g., large-scale in-person interviews) then the work seems likely to be more original than average, assuming that few researchers in a field would have access to funding for investigations with a similar purpose. For example, perhaps an Alzheimer’s researcher gets funding to run a large-scale genetic screening test and produces one of the few studies on this topic. Originality is one of the three components of research quality ( Langfeldt et al., 2020 ), so increasing this would be enough to improve the overall quality grade for an article. Of course, funded types of research could also sometimes tend to be weaker than unfunded research in some fields or contexts. For example, funding commonly supports PhD projects ( Ates & Brechelmacher, 2013 ), and PhD research could be better or worse than average, depending on the field.

5.2.4. Funding-led research goals are more valued

Research projects that align with funders’ strategic priorities may be highly valued if assessors accept these priorities. Although there are open call grants, some researchers may pursue unfunded research because of the freedom to choose their own priorities ( Behrens & Gray, 2001 ; Cheek, 2008 ), so strategic goals seem likely to be more common in funded research. Funding also generates an implicit hierarchy of research value, with even unfunded goals that align with societal needs potentially being undervalued ( Frickel, Gibbon et al., 2010 ).

5.2.5. Funding is regarded as a good in itself

Given high levels of competition for research funding, a funding declaration may be seen as an important achievement, especially as the evaluators are mainly from a U.K. higher education environment in which funding is encouraged and rewarded. Conversely, in funding-rich areas, articles lacking funding may be treated with extra suspicion.

5.2.6. Funding entails impact requirements

Although industry funding typically has commercial value as a goal, research council grants have societal impact requirements and give resources to achieve these through dissemination activities. Thus, funded research may be more impactful through multiple pathways related to the funding sources.

In the United Kingdom, there are substantial disciplinary differences in the proportions of funded research and the extent to which funded research tends to be of higher quality than unfunded research. Although this was only evaluated in a limited U.K. REF context, the results suggest, but do not prove, that there are few (and perhaps no) broad fields of research in which funding does not help academics to produce higher quality research. The main exceptions are a few individual funders in some contexts, and the evidence is weak for the arts and humanities and some social sciences. Moreover, as the results could be equally explained by better researchers being more successful at attracting funding or funding improving the researchers’ outputs, no cause-and-effect relationship can be claimed. The results are not due to funded research tending to involve larger teams because the regressions showed a residual funding advantage after taking into account team size. Overall, however, because the results are at least consistent with research funding adding value nearly universally across disciplines, avoiding grants seems like a risk for all researchers, unless they have good reasons to believe that their research is an exception.

This study does not take into account productivity and the time taken writing successful and unsuccessful bids, so the results cannot be used for a cost–benefit analysis of funding. More detailed research that considers the amount of funding available for each study and the role of the funding (e.g., improving existing research, allowing expensive studies) would be needed to make a reasonable cost–benefit analysis to give useful information about the disciplinary differences in the effectiveness of funding, but this seems unlikely to be possible with current public data.

A secondary finding is that citations are not always effective proxies for average funder quality, especially in the social sciences, arts, and humanities. Funders and studies that use citations as proxies for quality to assess the impact of funding should only do so for the fields identified above where appropriately field-normalized citation counts correlate at least moderately with quality.

Mike Thelwall: Formal analysis, Methodology, Writing—original draft. Kayvan Kousha: Writing—review & editing. Mahshid Abdoli: Formal analysis, Writing—review & editing. Emma Stuart: Formal analysis, Writing—review & editing. Meiko Makita: Formal analysis, Writing—review & editing. Cristina I. Font-Julián: Writing–review & editing. Paul Wilson: Writing—review & editing. Jonathan Levitt: Methodology, Writing—review & editing.

The authors have no competing interests.

This study was funded by Research England, Scottish Funding Council, Higher Education Funding Council for Wales, and Department for the Economy, Northern Ireland as part of the Future Research Assessment Programme ( https://www.jisc.ac.uk/future-research-assessment-programme ). The content is solely the responsibility of the authors and does not necessarily represent the official views of the funders.

The raw data were deleted before submission to follow UKRI policy for REF2021. More data information is available in an associated report ( Thelwall, Kousha et al., 2022b ; with extra information here: https://cybermetrics.wlv.ac.uk/ai/ ).

  • Online ISSN 2641-3337

A product of The MIT Press

Disclosure of Funding Sources

All PLOS authors are required to declare what support they received to carry out their research. Declaring funding sources acknowledges funders’ contributions, fulfills funding requirements, and promotes greater transparency in the research process.

We support GPP2 Good Publication Practice for Communicating Company Sponsored Medical Research .

What to Declare

Each author must individually declare all sources of funding received for the research submitted to the journal. This information includes the name of granting agencies, grant numbers, and a description of each funder’s role. If the funder has played no role in the research, this must be stated as well.

Authors are not required to provide the complete list of every single grant that supports them if the grant is not related to the research published.

Funding Statement

A Funding Statement is included in the metadata of each published article. The Funding Statement includes the funding information declared by the authors.

Inaccurate information about funding discovered after publication may require a correction. 

How to Declare

Funding information is entered in the online submission system. Read the submission guidelines .

Funding from Tobacco Companies

PLOS Biology, PLOS Climate, PLOS Complex Systems, PLOS Digital Health, PLOS Global Public Health, PLOS Medicine, PLOS Mental Health, PLOS Neglected Tropical Diseases, PLOS ONE, PLOS Sustainability and Transformation, and PLOS Water will not consider for publication manuscripts in which any of the research costs or authors' salaries have been funded, in whole or in part, by a tobacco company.


OA funding and policy guidance for articles

Springer Nature offers a free open access support service to make it easier for our authors to discover and apply for funding to cover article processing charges (APCs). In addition to our list of funding sources for OA publication costs, we have a number of additional resources to help authors learn about APC funding. We also provide guidance to help authors understand and meet their funders’ and institutions’ OA policy requirements when publishing in Springer Nature journals.

Funding sources for open access articles


Identify potential sources of APC funding and ensure your publication meets all the OA requirements of your funders and institutions.

From January 2021, if your research has been funded by certain members of cOAlition S, you may be required to make sure that your new submissions to journals are immediately openly accessible on publication under a CC BY licence.

Learn about the types of APC funding available, as well as how to secure such funding and meet eligibility requirements.

Learn how to meet your funders' and institutions' requirements for open access articles.


Tobacco funded research: how even journals with bans find it hard to stem the tide of publications

Linked editorial

Protecting BMJ journals’ content from tobacco industry influence

  • Irene van den Berg, investigative journalist 1
  • Mathilde de Jeu, investigative journalist 1
  • Hristio Boytchev, investigative reporter 2
  • 1 The Investigative Desk
  • hboytchev{at}bmj.com

Tobacco companies are investing billions in pharmaceutical and medical products. These ties complicate the ongoing efforts of researchers, scientific organisations, and journals to distance themselves from the industry, find Irene van den Berg , Mathilde de Jeu , and Hristio Boytchev

Research funded by the tobacco industry is still appearing in highly cited medical journals, despite attempts by some to cut ties altogether, an investigation by The Investigative Desk and The BMJ has found.

Although the tobacco industry has a long history of subverting science, most of the leading medical journals don’t have policies that ban research wholly or partly funded by the industry. Even when publishers, authors, and universities are willing to restrict ties to the industry, evidence indicates that they struggle to identify funding sources, because tobacco companies have funded front groups and have diversified into pharmaceutical and health technology.

“It is awkward to suddenly find yourself affiliated with a company whose sole purpose is to sell cigarettes,” says Wytse van den Bosch, a physician and researcher into pulmonary diseases at the Erasmus MC Sophia Children’s Hospital in the Netherlands. He had a research grant from the drug company Vectura when it was bought by the tobacco giant Philip Morris International in 2021. Nevertheless, his study was still published in a journal with a policy to reject research with connections to the tobacco industry.

Big tobacco’s involvement in therapeutic fields has led to calls for journals to institute bans on research funded not only by tobacco companies but also by their subsidiaries. Nicholas Hopkinson, professor of respiratory medicine at Imperial College London, says that, given the industry’s “longstanding history of dishonesty,” it is “very straightforward” that researchers should cut ties to companies after they are acquired by big tobacco, as otherwise they would be “working with the tobacco industry” and contributing to its profits.

“The point is to bring the tobacco industry to an end,” as its products “are lethal when used as intended,” he says.

Hidden in plain sight

In recent years the “big four” global tobacco companies (according to sales 1 )—Philip Morris International (PMI)/Altria, British American Tobacco (BAT), Imperial Brands, and Japan Tobacco International (JTI)—have invested billions in companies that produce medicines or other medical products. These investments include treatments for conditions caused or aggravated by smoking. For example, Vectura, a subsidiary of PMI since 2021, produces an inhaler used by patients with chronic obstructive pulmonary disease or asthma. And JTI’s pharmaceutical branch produces treatments for lung cancer, skin conditions such as psoriasis and atopic dermatitis, and heart disease.

The Investigative Desk and The BMJ searched PubMed and found hundreds of relationships between big tobacco’s medical and pharmaceutical subsidiaries and medical research.

In 876 studies on PubMed going back to 1996 the declarations of conflicts of interest reveal a relationship between at least one of the researchers and a medical company with financial ties to the tobacco industry. Many of these studies appeared in journals that lack policies on collaboration with tobacco companies. However, at least 13 journals with a tobacco policy in place also published studies (27 in total) that can be linked to companies with tobacco industry investments.
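Linking a declaration of interests to the tobacco industry requires matching against its medical and pharmaceutical subsidiaries rather than the parent companies' names. The sketch below illustrates one simple approach with a hypothetical subsidiary map and declaration text; it is not the investigation's actual method:

```python
# Hypothetical map from medical/pharma subsidiaries to tobacco parents.
SUBSIDIARIES = {
    "Vectura": "Philip Morris International",
    "Fertin Pharma": "Philip Morris International",
    "Kentucky BioProcessing": "British American Tobacco",
    "Torii Pharmaceutical": "Japan Tobacco International",
}

def flag_tobacco_links(coi_statement):
    """Return the tobacco parents implicated by a conflict-of-interest text."""
    text = coi_statement.lower()
    return sorted({parent for sub, parent in SUBSIDIARIES.items()
                   if sub.lower() in text})

statement = ("Two authors are employees of Vectura; one received fees "
             "from Torii Pharmaceutical.")
print(flag_tobacco_links(statement))
# → ['Japan Tobacco International', 'Philip Morris International']
```

Because such matching depends on a curated and up-to-date subsidiary list, acquisitions like PMI's 2021 purchase of Vectura are exactly what makes these links easy to miss.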

The Investigative Desk and The BMJ checked the tobacco policies of 40 top medical journals as ranked by their Clarivate impact factor score: the top 10 general medical journals and the top 10 in each of three therapeutic areas especially affected by smoking (oncology, cardiac and cardiovascular, and respiratory medicine). Of these 40 journals, only eight (20%) had policies prohibiting studies wholly or partly funded by the tobacco industry, and those that did were mainly journals dealing with lung diseases. Six of the 10 journals in respiratory medicine had a tobacco policy, but only one of the top 10 journals in oncology did, and none of those in cardiology. Among the top 10 leading general medical journals, only The BMJ had such a policy.

The Investigative Desk and The BMJ invited comment from all these journals or their publishers. Many of the journals that lacked a specific tobacco policy referred to their standard transparency policies regarding funding and conflicts of interest. “All financial and material support for the research and the work should be clearly and completely identified in an acknowledgement section of the manuscript,” said a JAMA spokesperson.

Some pointed out the nefarious history of the tobacco industry. The Lancet Group, calling attention to the tobacco industry’s “damaging history of distorting research and evidence to further commercial interests,” says that it is “very unlikely to publish tobacco industry funded research unless it provides a clear benefit to public and human health and only then following robust editorial and external peer review.”

Since 2013 the BMJ titles The BMJ , Heart , Thorax , and BMJ Open have had a policy not to consider tobacco funded research. BMJ’s Tobacco Control already had such a policy, as did PLoS Medicine , PLoS One , PLoS Biology , the Journal of Health Psychology , and journals published by the American Thoracic Society.

This hasn’t prevented studies funded by tobacco industry subsidiaries from being published in other BMJ journals that don’t have such a policy. 2 3 4 5

BMJ has this week updated its tobacco policy. In addition to excluding research funded by the tobacco industry, BMJ journals will also exclude work where authors have personal financial ties to the tobacco industry.

“Creating a stronger firewall between the tobacco industry and BMJ content will provide space for editors to curate and publish content that is more independent and trusted, and contributes to a healthier world,” explain the editors. 6

Head of research integrity Helen Macdonald acknowledges the challenges involved in upholding such policies: “The policy is challenging to implement for journals that rely in large part on the accuracy of authors’ declarations. It is further complicated by how the tobacco industry is defined, which also evolves, as recent moves to take over pharmaceutical companies have shown. The nature and timescale of potential authors’ personal interests are also relevant. In making decisions in complex cases BMJ puts patients and the public at the heart of the matter.”

Even among the journals that have a policy, enforcement can be difficult when it involves subsidiaries or organisations associated with the tobacco companies. Last year BMJ Open retracted a paper after it became clear that ECLAT SRL (the listed funder) was receiving sponsorship from the Foundation for a Smoke-Free World, a group supported and fully funded by PMI. 7

“Your gut feeling tells you: just quit”

European Respiratory Society journals are bound by a policy that states they will not publish work by authors who have worked for, consulted for, or had any connections to the tobacco industry or received any benefits from it since 1 January 2000. 8 Nevertheless, the society published research by the Dutch researcher van den Bosch in January 2024. 9 When Vectura, the company funding his research into the role of small airways in asthma, was taken over by PMI in 2021 he had to read about it in the media rather than through any personal communication.

The acquisition posed a dilemma: should he accept money from what was now a subsidiary of the tobacco industry or quit his research, sacrificing years of effort?

“Your gut feeling tells you: just quit. As a pulmonologist I would never get involved with a tobacco company,” says van den Bosch.

He discussed the situation with colleagues, supervisors, and medical ethicists at the Erasmus MC. “From a research perspective, we assessed whether stopping the research would be more harmful for the patient than continuing.” The researchers decided to continue the study but to publish a scientific article on the “unfortunate course of events” afterwards in BMJ Open Respiratory Research as a warning to other researchers. 10

Still, publishing van den Bosch’s research was not in breach of its policy, says the European Respiratory Society. “The research grant from Vectura was initiated in 2018. Therefore, this would not be a breach of the ERS publications and ERS conflict of interest policy relating to tobacco,” says Elin Reeves, the society’s director of publications.

Pushing for stricter policies

Other journals push for more far reaching policies on tobacco industry funding. Responding to an inquiry by The BMJ and the Investigative Desk, Filippo Crea, editor in chief of the European Heart Journal , said that his journal has “not published manuscripts sponsored by tobacco industry in the past few years” and that at present it didn’t have a defined policy but that it “will implement a policy which forbids acceptance of manuscripts sponsored by tobacco industry soon.”

The Swiss scientific publisher Multidisciplinary Digital Publishing Institute (MDPI) has also included a strong statement in its tobacco code: “MDPI does not publish studies funded partially or fully by the tobacco industry.” 11 However, analysis by the Investigative Desk and The BMJ shows that 12 MDPI journals include studies with a financial link to the tobacco industry. For example, in a 2024 publication on tetrahydrocannabinol in the Journal of Clinical Medicine , two of the seven researchers were employees of the PMI subsidiary Vectura, according to their declarations of interests. 12 And five of the authors are paid scientific advisers or consultants.

In 2021 the MDPI journal Vaccines published a research study authored by employees of the BAT subsidiary Kentucky BioProcessing. 13 For Vaccines ’ editor in chief, Ralph Tripp, of the Department of Infectious Diseases at the University of Georgia’s College of Veterinary Medicine, this was not a reason to reject the research. “We were then (and still are) in a SARS-CoV-2 pandemic, thus there was a responsibility to evaluate stable, effective, easy-to-manufacture vaccines needed to stop the covid-19 pandemic. The manuscript in question was reviewed and subsequently accepted based on its scientific merits only. The publication did not promote anything.”

Scientists who received research grants, speaker fees, or lecture fees from JTI’s pharma division Torii Pharmaceuticals saw their studies published in various journals published by MDPI, such as Pathogens , 14 the International Journal of Molecular Sciences , 15 16 Current Issues in Molecular Biology , 17 and the Journal of Clinical Medicine . 18 19

The scientific publisher has always relied on scientists themselves to disclose any ties to the tobacco industry, says Ana Stankovic, research integrity and publication ethics specialist at MDPI. Stankovic adds, “As such, we expect all our authors to fully adhere to all our publication ethics policies and to notify us on submission of any potential conflict of interest, which includes providing detailed information on funders.” As the reaction of Vaccines ’ Tripp shows, this expectation is not always met.

MDPI will investigate all 24 cases brought to light by the Investigative Desk and The BMJ , Stankovic says.

Effects of industry links on researchers

The tobacco industry’s medical investments can have important consequences for scientists, as van den Bosch experienced. He is concerned that PMI’s acquisition of Vectura, also flagged by respiratory medicine societies as a problem, 10 can be another way to infiltrate science and deceive the public. “I would never wilfully enter into a collaboration with anything tobacco related,” he says.

His decision to continue the investigation after PMI’s takeover of Vectura put him in an awkward position, because his name has now become associated with the tobacco industry. “If you want to go to a conference, present an abstract, or submit a paper, the first thing they ask is: do you have any connections with a tobacco manufacturer? I have to say: yes.

“I am at the start of my career, I want to become a paediatrician, and I would very much like to stay active in research. My name is now associated with Vectura, and Vectura with PMI. So I am now affiliated with the tobacco industry myself.”

In other cases the industry’s investment provides an incentive for a scientist to break off relations. Liam Heaney, professor of respiratory medicine at Queen’s University Belfast, told the Investigative Desk and The BMJ in an email that he no longer wants to work with Vectura after it became a subsidiary of PMI.

Ensuring survival

With its pharmaceutical investments the tobacco industry targets many conditions caused or aggravated by smoking. 20 Vectura produces inhalers used by patients with COPD or asthma, and Fertin Pharma, a Danish subsidiary of PMI, produces nicotine gums. JTI’s pharma branch produces drugs for lung cancer, heart disease, and skin conditions, such as psoriasis and atopic dermatitis, known to be adversely affected by smoking. Imperial Brands is investing in medicinal cannabis for people living with severe chronic pain conditions and cancer. The tobacco industry’s “health drive” is taking place through dozens of companies and billions in investment, even though the sums are small when compared with its revenue from cigarettes.

Ruth Malone, professor of social behavioural sciences at the University of California and former editor in chief of BMJ’s Tobacco Control , says that institutions, professionals, and scientific journals should refuse any involvement with the tobacco industry. “Anyone who understands the history, extent, and nature of the epidemic and the tobacco industry’s strategic efforts to divide the public health community knows that industry money should be rejected,” she says.

Funding might go towards work that is only tangentially related to tobacco or to “distracting research” aimed at drawing attention away from studies the industry does not like, or a study may be designed in a way that obscures evidence that is unfavourable to the industry.

“People may not realise how their work will be deployed to advance industry strategic goals,” Malone says. The industry uses science by “stoking division around the endlessly proliferating array of new tobacco and nicotine products and their potential to be less harmful to users.”

“Tobacco companies use sponsored research conducted by their contractors in regulatory applications, public relations announcements, and even in their marketing,” says Robert Jackler, Sewall professor emeritus at the Stanford University School of Medicine. This research is intended to sow doubt, he adds.

“The tobacco industry also works through private research companies that produce ‘research’ which faithfully adheres to the business interests of the sponsoring company,” Jackler says. By transforming themselves into “quasi-pharmaceutical companies,” tobacco companies seek to burnish their tarnished image and forestall adverse regulatory attention, he adds.

Leslie London, chair of public health medicine at the University of Cape Town, says that accepting the tobacco industry’s funding for research on products aiming to ameliorate the consequences of using tobacco “is surely the most insurmountable conflict of interest imaginable.”

Medical research could even help the tobacco companies in their development of products for smoking. Studies of asthma inhalers can, for instance, be valuable for the production of cigarettes and vapes, says Harm Tiddens, paediatrician, emeritus professor of paediatric pulmonology at Erasmus MC, and Wytse van den Bosch’s PhD supervisor. “It is very difficult to get medications that you inhale precisely to the right place in the lungs,” he says. “That was what Wytse’s research was about. This could be interesting for a tobacco manufacturer who wants to know how to absorb nicotine as quickly as possible through the lungs.”

“The tobacco industry has a long history of strategic rebranding, and this is nothing new,” Malone says of the industry’s “beyond nicotine” 21 strategy of diversification. Caution regarding the industry should therefore be extended to its medical subsidiaries. “Once the tobacco companies get out of the business of killing people, we can talk about publishing work funded by their other drug companies,” she says.

“I see no immediate problem”—Maastricht University, the industry, and WHO

Alexander Sack, professor of brain stimulation and applied cognitive neuroscience at Maastricht University in the Netherlands, is a scientific adviser for PlatoScience, a producer of technological tools for mental health conditions such as depression. British American Tobacco owns 20% to 24.99% of the Danish company through its corporate venture capital arm BTomorrow Ventures (BTV). Sack has also published together with PlatoScience employees. 22 23

“As long as British American Tobacco does not interfere with my scientific work, I see no immediate problem,” he says. “I just don’t know anything about the size of investment, reason for investment, overall investor or funding structure, etc,” Sack says of BAT’s involvement. “BTV invested in PlatoScience before my time,” he says, “for the purpose of consumer cognitive sensations.”

In 2021 PlatoScience decided on a “radical change in the company’s focus,” in a direction not favourable to BTV, but “BTV stayed onboard as a minority shareholder,” Sack said. “I have always considered them a bit of a ‘leftover’ from the company’s previous time as a consumer goods company,” he added.

To receive a grant from the World Health Organization, Sack’s employer, Maastricht University, signed a statement that there was no collaboration between the university and the tobacco industry. Pamela Habibović, rector magnificus of the university, says that Sack’s collaboration with PlatoScience “is not a collaboration with the tobacco industry.” She adds, “The studies were not related to the tobacco industry in the broadest sense of the word. Also, PlatoScience is not a tobacco industry company.”

Maastricht University has no policy on staff engagement with tobacco companies, it says. “This does not alter the fact that permission must be requested for each external activity, which is evaluated on a case-by-case basis,” says Habibović. “It is conceivable that permission will not be granted for an external activity at PMI or BAT,” she says, adding, “It is not possible to check in public records whether a company from the tobacco industry is a shareholder of a company that we work with; after all, this is not public information.” However, the Investigative Desk was able to find such information easily online by checking annual reports, press releases, and company databases.

A BAT spokesperson said, “While BTV does have a minority investment in PlatoScience, neither BTV nor BAT is commercially or scientifically collaborating with PlatoScience and [they] have not had a working relationship with the individual in question.

“BTV invests in consumer brands, digital transformation, new technologies, and future sciences but not medical investments. When BTV invested in PlatoScience, the company’s focus was on cognitive improvement, not medical applications.”

The WHO Foundation didn’t respond to a request for comment.

Fig 1

Journal tobacco policies


Fig 2

Tobacco investments and publications. An interactive version of this graphic is available at: https://bit.ly/4aDNwFH

Competing interests: The Investigative Desk is a group of investigative journalists funded through donations, grants, scholarships, and fees. Sponsors do not play a role in or have substantive influence on the research or publications. The Investigative Desk is collaborating with the Tobacco Control Research Group (TCRG) of the University of Bath on tax avoidance research. The University of Bath has also provided some financial support for that research. As the TCRG reports on its web pages, this research group receives funding from a variety of sources, including the National Institute for Health Research and Bloomberg Philanthropies. The collaboration with TCRG did not concern the present investigation.

This feature has been funded by the BMJ Investigations Unit. For details see bmj.com/investigations . This article was a collaboration with John Hansen of the Danish newspaper Politiken and was made possible with the support of Journalismfund Europe. A Dutch version of the investigation is published on Follow the Money.

Commissioning and peer review: Commissioned; externally peer reviewed.

  • Who is “Big Tobacco”? STOP. https://exposetobacco.org/news/who-is-big-tobacco
  • Retraction: International randomised controlled trial evaluating metabolic syndrome in type 2 diabetic cigarette smokers following switching to combustion-free nicotine delivery systems: the DIASMOKE protocol. BMJ Open 2023;13. pmid: 37339831
  • Publication ethics and malpractice statement. European Respiratory Society. 2021. https://www.ersjournals.com/about-us/publication-ethics-and-malpractice-statement
  • MDPI. Research and Publication Ethics. https://www.mdpi.com/ethics#_bookmark17


Publication ethics and research integrity: policy guidelines for authors

We all want to ensure that the research you publish with us meets strict quality guidelines. Your reputation, and ours, depends on it. So does the reputation of your research institution on the global stage.  

Here you will find guidance on the policies that help us achieve this, and what we ask of you to comply with them. We also recommend taking a look at the guidelines provided by the Committee on Publication Ethics (COPE).


Declaration of conflicting interests

If you’ve received research funding, or been paid for attending a talk related to your work, these payments may be interpreted as competing interests. You should declare anything relevant, and we’re here to help you do so easily by providing sample wording for typical scenarios.

  • What do authors need to declare?
  • How should authors make this declaration?
  • What do Editors need to declare?

Funding statements

All research articles need a funding statement, so here’s a quick guide to writing one.

  • Writing a funding statement
  • Industry-funded articles

Ethics approval and informed consent statements

Guidance on preparing ethics statements in support of research findings, with multiple examples of scenarios and statements to get you started.

  • Studies involving humans
  • Studies involving animals
  • Studies not involving humans or animals
  • Global research ethics and inclusion

Sage follows COPE and ICMJE guidance on the declaration of conflicts of interest by authors, reviewers, and editors. A conflict of interest is defined as any direct or indirect interest that may influence the reading, assessment, or conduct of the research reported in the submission. Any interests within the five-year period before beginning the research are considered relevant, although authors must disclose interests outside this time frame if they may have influenced the research.

What do authors need to declare?

Authors are required to disclose any direct or indirect interests that relate to their submission to any Sage journal so that the editor, reviewers and readers may be able to make informed judgements about any potential bias in the research process, writing or publication. The following interests may present a conflict and should be declared upon submission: 

Financial Interests

  • Grants from a funding agency, a commercial entity or any type of payment to authors from organisations that are likely to benefit financially from the research  
  • Employment with or affiliation to an organisation that has an interest in the research and/or is likely to benefit from its publication and dissemination  
  • Stocks, shares, patents or patent applications or other forms of financial holdings that are likely to benefit from the publication and dissemination of the research  
  • Consulting fees, reimbursement or any other payments made to authors for conducting the research  
  • Close relatives who may financially benefit from the publication and dissemination of the research  

Non-financial

  • Affiliation to an organisation that will have an interest in the outcome such as members of a research advisory board, steering or advisory committees, associations or honorary affiliations  
  • Membership of organisations or scientific societies that undertake advocacy work  
  • Ideology, beliefs, thoughts, faith relevant to the research topic  
  • Activism or other advocacy work related to the research   
  • Political leanings or legal action relevant to the research or its potential outcome   
  • Research competition, previous personal disagreements, close working relationships with editors, editorial board members  
  • Editorial responsibilities or membership of the editorial board of the journal  

How should authors make this declaration?

If you are publishing in a journal that requires a declaration of conflicting interests it should be added under the heading ‘Declaration of conflicting interests’ after any Acknowledgments and before Funding, Notes and References. If there are no conflicting interests, we’ll publish this statement: ‘The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article’.

In your Contributor Agreement you will be asked to certify that:

  • All financial support is acknowledged in your article.
  • Any commercial or financial arrangements related to your article have been discussed with your Editor, who will advise whether details should be declared.
  • You have not signed an agreement with any sponsor that might compromise the impartial reporting of data.

Not all Sage journals require a declaration of conflicting interests to be published alongside your article, but authors are strongly encouraged to declare these upon submission by including these in a cover letter or by writing to the Editor.

Submissions will be evaluated fairly and will not necessarily be rejected when any conflicting interests are declared. If a relevant conflict that was not declared by authors becomes apparent at any time during the peer review or publishing process, the Editor reserves the right to reject the submission. Sage will follow COPE guidelines for any conflicts that come to light post-publication.  

What do Editors need to declare?

All Editors are required to declare any conflicts of interest that may impact the peer review and decision-making process. If a conflict arises, an alternative member of the Editorial Board must be appointed and the Editor with the conflict must recuse themselves from the decision-making process. These conflicts include the financial and non-financial interests listed above.

  • If the Editor is based at the same institution as the authors or has previously conducted research with them in the last three years, they should recuse themselves from the peer review and decision-making process.  
  • Journal Editors or members of the editorial board submitting their research to their own journals may do so and must declare their involvement with the journal as a conflict of interest. Where applicable, the Journal Editor or Editorial Board member must recuse themselves from the peer review process. The Journal Editor must appoint another member of the Editorial board who will invite two or more subject experts to evaluate the manuscript.   
  • Guest Editors may submit their research to a special issue or special collection they are leading and must declare a conflict of interest upon submission. Where Guest Editors are involved in peer review and final decisions, their submissions will be handled by an alternate member of the editorial board or the Journal Editor.   

Sage Editorial Staff 

All Sage editorial staff are required to declare their employment at Sage as a conflict of interest in any submission they send to a Sage journal. These submissions are treated as any other submissions within the journal.


Funding statements

All research articles need a funding statement, so here’s a quick guide to writing one. It should appear under the heading ‘Funding’ after any Acknowledgments and Declaration of conflicting interests, and before Notes and References.

This should comprise the text in bold in the example below, followed by the full name of the funding agency, and the grant number in square brackets:

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Medical Research Council [grant number xxx].

Multiple grant numbers should be separated by commas, multiple agencies by semicolons.
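The comma and semicolon conventions above can be sketched as a small helper. This is an illustrative example only, not part of Sage's guidance: the function name and the (agency, grant numbers) tuple format are assumptions made for the sketch.

```python
# Hypothetical helper illustrating the funding-statement conventions above:
# each agency's grant numbers go in square brackets separated by commas,
# and multiple agencies are separated by semicolons.

def funding_statement(funders):
    """funders: list of (agency, [grant_numbers]) tuples; an empty list
    means no financial support was received."""
    if not funders:
        return ("The author(s) received no financial support for the "
                "research, authorship, and/or publication of this article.")
    parts = []
    for agency, grants in funders:
        if grants:
            label = "grant number" if len(grants) == 1 else "grant numbers"
            parts.append(f"{agency} [{label} {', '.join(grants)}]")
        else:
            parts.append(agency)
    return "This work was supported by " + "; ".join(parts) + "."

print(funding_statement([("the Medical Research Council", ["xxx"])]))
# This work was supported by the Medical Research Council [grant number xxx].
```

Passing two agencies, one with two grant numbers, would yield, for example, "This work was supported by the Medical Research Council [grant numbers AAA, BBB]; the Wellcome Trust [grant number CCC]."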

What if your research was supported indirectly by grants available to your institution, but not to you personally? In this case, simply include this statement:

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Note: If you have any concerns that the information you give may compromise your anonymity prior to the peer review process, you can choose to wait until you submit your final accepted manuscript.

Industry-funded articles

We can only consider industry-funded articles if funding is fully declared within the manuscript, together with any role in the design of the study or in the analysis and interpretation of data played by the funder.

Both direct and indirect funding must be declared, whether the funding is in full or in part. Indirect funding includes funding received, by organisations associated with the research and/or by the authors, from one of the industries below, or from any industry where the funder could be seen to have a vested interest in the results of a study.

Industries include but are not limited to:

  • Pharmaceuticals
  • E-cigarettes

Failure to declare industry-specific funding can lead to the rejection of your article at submission, or retraction of the whole article if the oversight comes to light after publication.

Ethics approval and informed consent statements

How do we make sure that we maintain high ethical standards in publishing? By including clear statements in support of results reported in articles. Here is guidance, with examples of sample statements, for:

Studies involving humans

All studies involving humans (including patients, their samples or data, or any other human participants) must be evaluated by a suitably qualified research ethics committee before the research is undertaken, in line with the Declaration of Helsinki .

Ethics statements should be included in the methods section of all relevant submissions (unless the journal you’re submitting to publishes them under a separate heading – the journal’s submission guidelines will make this clear) and must include the name and location of the review board, approval number and date, as in the examples below.

Ethics approval statements

These examples can be used if ethics approval was obtained:

This study was approved by the XXXX Research Ethics Committee (approval no. XYZ123) on Month DD, YYYY.

This study received ethical approval from the XXXX IRB (approval #XYZ123) on Month DD, YYYY.

Examples of ethics approval statements suitable for different scenarios:

The XXXX Ethics Review Committee at XXXX University approved our interviews (approval: XYZ123) on Month DD, YYYY. Respondents gave written consent for review and signature before starting interviews.

This study received ethical approval from the XXXX IRB (approval #XYZ123) on Month DD, YYYY. This is an IRB-approved retrospective study, all patient information was de-identified and patient consent was not required. Patient data will not be shared with third parties.

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of XXXX (no. XYZ123) on Month DD, YYYY, with the need for written informed consent waived.

This study was conducted in accordance with the Declaration of Helsinki. Approval was granted on Month DD, YYYY. The Institutional Review Board (IRB) at XXXX acted as the central IRB, whose review was accepted by all participating institutions’ IRBs (Ref. XYZ123). The central IRB determined that this research involved minimal risk and approved a waiver for informed consent.

This study was approved by the Ethics Committee of XXXX University (Ethics Code: XYZ123) on Month DD, YYYY. All participants provided written informed consent prior to enrolment in the study. This research was conducted ethically in accordance with the World Medical Association Declaration of Helsinki.

The ethics review committee of the XXXX University approved this study on Month DD, YYYY. Number: XYZ123. Date: Month DD, YYYY. Written informed consent for inclusion in this research was obtained from the patients prior to surgery.

The experimental protocols were approved by the Institutional Review Board (IRB) of the XXX University (No. XYZ123) on Month DD, YYYY. All research activities complied with ethical regulations and were performed in accordance with regulations of each hospital. Informed consent to use histopathological samples and pathological diagnostic reports for research purposes was obtained from all patients prior to surgery. They were given the option to refuse to participate by opting out.

These examples can be used if ethics approval and patient consent were waived:

The Ethics Committee of the XXXX waived the need for ethics approval and patient consent for the collection, analysis and publication of the retrospectively obtained and anonymised data for this non-interventional study.

The study was approved by the Danish Data Protection Agency. According to Danish legislation, neither ethics approval nor patient consent is required for registry linkage studies [23].

Case reports

Ethical approval to report this case (or case series) was obtained from *name of ethics committee or institutional review board (approval number/ID)*.  

Our institution does not require ethical approval for reporting individual cases or case series.

Patient or participant consent

In all cases, it’s important to state that consent has been granted to use any personal information, and how that consent has been gathered. Informed consent includes three key components:  

  • Communication of information  
  • Comprehension of information  
  • Voluntary participation

For research articles, you need to state in the methods section whether the participants provided written or verbal informed consent. This statement should include whether the participants provided consent to conduct the study, to publish the study, and to have their photos or other images used. If information has been anonymised, that should be clearly stated too. If you are publishing on an open access basis, patients or participants must be aware that the article will be available to anyone with an internet connection.

The American Psychological Association (APA) recommends that for those who are legally incapable of giving informed consent, researchers should:  

  • Provide an appropriate explanation  
  • Seek the individual's assent  
  • Consider the person’s preferences and best interests  
  • Obtain appropriate permission from a legally authorized person, if such a substitute consent is permitted or required by law  

If the law doesn’t permit or require a legally authorized person to provide consent, you’ll need to show that you’ve taken reasonable steps to protect the individual’s rights and welfare.

Participants should be informed about:

  • The purpose, duration, and procedure of the research  
  • Their right to decline to participate and how to withdraw from the research along with the possible consequences  
  • Factors that may be expected to influence their willingness to participate, such as potential risks, discomfort, or adverse effects  
  • Prospective research benefits and incentives for participation  
  • Limits of confidentiality  
  • Whom to contact for questions about the research and research participants’ rights.  

You can access Sage’s Participant Consent Form here . Authors should not submit completed consent forms alongside the manuscript files unless specifically requested to do so, owing to potential legal/privacy issues with sending and receiving confidential information. Instead, they should confirm in the relevant section of the manuscript text that the individual(s) or their proxy has provided written informed consent for the publication of this information in the present work.

Informed consent for ethnographic research  

Ethnographic research requires similar ethics approval from an Institutional Review Board or an independent local, regional, or national review body, as well as participant consent to conduct and publish the research. Consent should be obtained before the research is conducted, and it should be part of the project design, implementation, and other parts of the research process. Consultation with groups or communities affected by the studies, and consideration of their input on the study design, should happen throughout the entire research period. If consent is not obtained before the study, perhaps because of the research context, process, or researcher/participant relations, you can seek to obtain it retroactively if necessary.

Participants should also understand that there is a possibility that confidentiality might be compromised, despite best efforts. This is especially important to note for small communities where participants may be more easily identifiable.

The examples below can be used if consent was obtained:

1. Written informed consent

The study was approved by the XXXX (Ethical Clearance Reference Number: XYZ123) on Month DD, YYYY. All participants provided written informed consent prior to participating.

Written informed consent was obtained from a legally authorised representative for anonymised patient information to be published in this article.

2. Verbal informed consent

Informed consent was obtained verbally before participation. The consent was audio-recorded in the presence of an independent witness.

3. Patient consent for studies involving minors

Written informed consent to participate in this study was provided by the participants’ legal guardians/next of kin.  

4. Patient or participant consent for use of images including faces

Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

If patient consent was not obtained:

Informed consent for information published in this article was not obtained because [please state the reason].

Studies involving animals

All studies involving animals, particularly vertebrate animals, must be evaluated by a qualified animal ethics or welfare committee. Ethics statements should include the name and location of the reviewing committee, approval number and date, as in these examples:

Protocols for animal experiments were approved by the Animal Experimental Ethics Committee of the XXXX University (Approval no. XYZ123) on Month DD, YYYY, in compliance with the National Institutes of Health guidelines for the care and use of laboratory animals.

All animals were cared for in strict accordance with the Guide for the Care and Use of Laboratory Animals (NIH Publication No. 85-23, revised 1996), and the experimental design was approved by the Ethics Committee of XXXX (Approval no. XYZ123) on Month DD, YYYY.

If IACUC approval was obtained, you can use the example below:

The XXX Institutional Animal Care and Use Committee approved the experimental procedures used in this study (approval no. XYZ123) on Month DD, YYYY.

An example of an animal welfare statement:

All animal housing and experiments were conducted in strict accordance with the institutional Guidelines for Care and Use of Laboratory Animals at [insert name of institution where the research was conducted and/or where the author is based].

Studies not involving humans or animals

These statements can be used in situations where a study did not involve human or animal participants, as well as for non-research articles such as reviews:

This article does not contain any studies with human or animal participants.

There are no human participants in this article and informed consent is not required.

Global research ethics and inclusion

If your article reports on global research, you should follow the TRUST Code . We recommend including a statement in your methods section describing how your work adheres to the Articles in the Code. Some journals may require a statement under the separate heading ‘Global research ethics and inclusion’ at the end of your article. This should cover the following:

  • Confirm that the research has local relevance and has been designed and conducted in collaboration with local communities. Note that contributors meeting the requirements of authorship as per the ICMJE Guidelines must be listed as authors. Sometimes research is conducted in low- and middle-income countries in partnership with researchers from higher income countries. In these cases it is expected that local researchers are listed in first and/or last authorship positions to reflect the significant contribution they will have made. Joint first and/or joint senior authorship is an option here.
  • Confirm that local ethics review was sought and if not, explain the reasons why.
  • Where research involvement presented any kind of risk to participants e.g. discrimination, incrimination or risk to personal health and safety, describe the special measures taken to ensure their safety and wellbeing that were agreed with local partners.
  • Explain how communication was tailored to the requirements of local participants to ensure that informed consent was freely given, and any feedback on findings was comprehensible to local communities. Describe how any potential cultural sensitivities were explored in advance.
  • Confirm that all research was conducted to the highest possible ethical standards, regardless of the requirements of the local setting.

We also recommend the guidance provided by the Consensus statement on measures to promote equitable authorship in the publication of research from international partnerships (Morton, B. et al., 2022).


F1000Research. Version 1: 2021 Feb 11 (PMC8063518.1); Version 2: 2021 Apr 16 (PMC8063518.2).

Top health research funders’ guidance on selecting journals for funded research

Larissa Shamseer

1 School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, K1G 5Z3, Canada

2 Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, M5B 1T8, Canada

3 Centre for Journalology, Ottawa Hospital Research Institute, Ottawa, ON, K1H 8L6, Canada

4 Centre for Implementation Research, Ottawa Hospital Research Institute, Ottawa, ON, K1H 8L6, Canada

Kelly D. Cobey

Matthew J. Page

5 School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, 3004, Australia

Jamie C. Brehaut

Jeremy M. Grimshaw

6 Department of Medicine, University of Ottawa, Ottawa, Ontario, K1H 8M5, Canada

Sharon E. Straus

Lesley A. Stewart

7 Centre for Reviews and Dissemination, University of York, York, YO10 5DD, UK

David Moher

Associated data

Underlying data

Open Science Framework: Audit of health research funder policies and recommendations on journal publication of research: Extracted Data, https://doi.org/10.17605/OSF.IO/YUDP4 63 .

This project contains the following underlying data:

  • - Funders Data analysis data - clean 2020Apr21.dta

Extended data

Open Science Framework: Audit of health research funder policies and recommendations on journal publication of research: Study Forms, https://doi.org/10.17605/OSF.IO/FSUQ2 64

This project contains the following extended data:

  • - Level 1 - Searching funder websites form 2017Nov23.pdf
  • - Level 2 - Data extraction form 2017Nov23.pdf

Open Science Framework: Audit of health research funder policies and recommendations on journal publication of research: Protocol, https://doi.org/10.17605/OSF.IO/J6CSK 65

Registration of overarching OSF project: https://doi.org/10.17605/OSF.IO/Z59U6 66 .

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

Version Changes

Revised: amendments from version 1

We have addressed/responded to the suggestions from reviewers with the following changes:

  • simplified the wording/sentence structure in the introduction and discussion sections;
  • clarified the relevance of this research to health research funders in the introduction (i.e., discoverability, uptake);
  • added more detail on how extraction items were devised;
  • indicated the name of an excluded funder in Figure 1;
  • acknowledged, in the limitations section, the potential benefit of engaging funders in the study process.

Abstract

Background: Funded health research is being published in journals that many regard as “predatory”, deceptive, and non-credible. We do not currently know whether funders provide guidance on how to select a journal in which to publish funded health research.

Methods: We identified the 46 largest philanthropic, public, development assistance, public-private partnership, and multilateral funders of health research by expenditure globally, as well as four public funders from lower-middle income countries, from the list at https://healthresearchfunders.org . One of us identified guidance on disseminating funded research from each funder’s website (August/September 2017), then extracted information about selecting journals, which was verified by another assessor. Discrepancies were resolved by discussion. Results were summarized descriptively. This research used publicly available information; we did not seek verification with funding bodies.

Results: The majority (44/50) of sampled funders indicated funding health research. 38 (of 44, 86%) had publicly available information about disseminating funded research, typically called “policies” (29, 76%). Of these 38, 36 (95%) mentioned journal publication for dissemination, of which 13 (36%) offered variable guidance on selecting a journal, all of it relating to the funder’s open access mandate. Six funders (17%) outlined publisher requirements or features by which to select a journal. One funder linked to a document providing features of journals to look for (e.g., listed in the Directory of Open Access Journals) and to be wary of (e.g., no journal scope statement, uses direct and unsolicited marketing).

Conclusions: Few funders provided guidance on how to select a journal in which to publish funded research. Funders have a duty to ensure that the research they fund is discoverable by others. This research is a benchmark for funder guidance on journal selection prior to the January 2021 implementation of Plan S (a global, funder-led initiative to ensure immediate, open access to funded, published research).

Introduction

Biomedical research studies supported by well-known funding organizations, such as the National Institutes of Health (NIH), are published in so-called “predatory” journals 1 . Predatory journals are regarded as non-credible and are criticized for failing to provide typical or expected publishing services and for their lack of transparent operations 2 , 3 . Such services include peer review, long term preservation of content, and indexing in scientific, bibliographic databases. Among their many shortcomings, the potential failure of predatory journals to ensure permanent discoverability of research threatens the integrity of the scientific record. Such research cannot contribute to science, thus wasting time, money, and resources 1 , 4 . Even if discovered, the potential impact and uptake of funded research in predatory journals may be limited by its appearance in a perceived untrustworthy source. While benefits from investments in research are difficult to quantify 5 , one way funders measure returns on investment is by tracking research outputs, including scholarly journal publications 6 . Predatory journals may limit returns on funders’ investments by undermining the intended promise of scholarly publishing – to enable the results of research to be known for others to build upon 7 .

Health research funders ought to be concerned that the funds they provide may be wasted, or may contribute to research waste, as a result of funded research being published in predatory journals. Research published in predatory journals may not be identifiable or discoverable, potentially wasting millions of dollars of research funding. When research is easily identifiable, it can reduce unintentional redundancy in research efforts and investments. Additional waste occurs when funder investments are used to pay article processing charges (APCs). In biomedicine, research grants and national funding agencies are the largest source of funds supporting publication of at least 50% of open access articles 8 .

Funders & open access

Most major health research funders mandate that funded research outputs be open access 9 . Open access mandates typically require researchers to ensure that research (and sometimes data) is published in an open access journal or is deposited in a publicly accessible digital repository (regardless of whether the publication was published in an open access journal), or both. Some journals may impose an embargo period only after which an article is made publicly available or can be archived in a repository (i.e., delayed-access journals). Many funders’ policies allow for such delays in open access to accommodate publishers’ preferences.

Open access policies are one way for funders to direct funded researchers towards publishing in credible journals abiding by established open access tenets 10 :

  • 1. Research is/should be freely available and accessible to anyone.
  • 2. The copyright accompanying published work should be open, allowing for free use and re-use (i.e., allowing research to be freely built on/adapted with attribution).

To facilitate researcher adherence with funder open access policies, many biomedical journals offering open access have agreements with the PubMed Central (PMC) repository to automatically deposit their published content on authors’ behalf 11 . Additionally, researchers funded by the NIH and 13 partner funding organizations in the USA can upload funder-supported publications to PMC from journals without PMC agreements 12 . Likewise, 29 funders from across Europe can submit funder-supported research to Europe PMC (which is mirrored in PMC) 13 . For some of these organizations, such as the NIH and Wellcome Trust, archiving in PMC or Europe PMC, respectively, is mandatory.

In a possible attempt to attract submissions, predatory journals appear to market themselves as ‘open access’ 14 , 15 . While research in them may indeed be free to access, discovery of their content in scientifically-curated databases is sparse and inconsistent 16 – 18 . Predatory journal articles may haphazardly appear in search engines such as Google Scholar (which indexes anything that appears formatted as a scholarly article) or in PubMed (since it includes author-uploaded articles from PMC) 19 . Additionally, we do not know whether the contents of unindexed/unarchived journals will be perpetually available if a journal ceases to operate. Such preservation is typically achieved through journal/publisher agreements with digital preservation providers (e.g. Lots of Copies Keep Stuff Safe, LOCKSS). For journals indexed in Medline, for example, this is a prerequisite of indexing 20 ; PMC functions as a preservation service (i.e., has a remit to preserve content funded by public money) 21 . It is unknown whether predatory journals, not formally indexed in Medline, PMC, or other databases with similar requirements, have digital preservation arrangements.

Most researchers have a limited understanding of what open access means beyond making research free to read 22 – 25 . Free use and unrestricted re-use of research is a fundamental component of open access, and licensing that permits this is a regular component of open access journals 26 . Journals running nefarious and deceptive publishing operations have likely benefited from or exploited authors’ lack of awareness 27 . Indeed, few predatory journals mention licensing for articles or provide information on the use and re-use of published research 26 . Without explicit licensing for published articles, the legalities around distributing or building on research in predatory journals, for example, is uncertain. Whether researchers are deceived by predatory journals or are knowingly seeking easy and rapid publications in them (these journals tend to deliver quicker turnaround time than credible journals due to subpar or non-existent peer review 28 , 29 ), they are likely breaching the open access policies set by their funders.

In January 2017, the Bill & Melinda Gates Foundation implemented a policy mandating open access to research publications and data, without delay for all funded research 30 . In February 2017, they initiated a one-year partnership pilot with the American Association for the Advancement of Science (AAAS) to enable Gates-funded research to be published as open access in five AAAS journals, including Science 31 . The Gates-AAAS partnership seemed to inspire several other influential journals (i.e., New England Journal of Medicine , Proceedings of the National Academy of Sciences ) to introduce policies ensuring permanent open access for Gates-funded research 32 .

In January 2021, a number of international funders (including UK Research and Innovation, the Gates Foundation, Wellcome Trust, and the World Health Organization), led by Science Europe (a group representing funders across Europe), delivered a radical change to the way that funded research is published, via Plan S ( coalition-s.org ). Plan S, in part, requires research funders to mandate open access to funded research through publication in an open access journal or platform, or by making publications immediately available through an open access repository upon publication. To support this, participating funders will pay article processing charges (APCs), up to an as-yet-unannounced limit, to journals that are immediately and wholly open access (sometimes referred to as ‘gold’ open access).

Whether health research funding bodies, prior to Plan S, provide funded researchers with guidance or support in selecting publishing journals that align with their policies and facilitate proper (and permanent) access to research, and whether they monitor such policies, is unknown. Previous studies confirm that many non-commercial health research funders have policies requiring open access to completed research or results via publication or otherwise 33 , 34 . Yet none seem to have assessed whether funders provide any specific information to researchers to facilitate their choice of publishing journal. For public or charitable funders, providing such guidance or support may be one way of ensuring responsible stewardship of public or donor funds. While research publication routes exist beyond scientific journals (e.g., preprint servers, repositories), the present project examines journals as the primary vehicles of research dissemination due to funders’ and academia’s reliance on them as a gauge of research impact/productivity. The current work will establish a pre-Plan S baseline of health research funders’ guidance on selecting journals in which to publish funded research.

The aim of this research is to describe the policies and recommendations of major health research funding bodies regarding suitable journals for funded research.

Methods

We examined the public websites of 50 health funding bodies for statements, guidance, or policies specifically mentioning the publication of funded research. Detailed methods and rationale for this study are elaborated in an a priori study protocol ( https://doi.org/10.17605/OSF.IO/J6CSK ) and summarized below.

Data source

Global funding bodies with the largest documented health research expenditures were sampled from the curated Health Research Funder’s list found at: www.healthresearchfunders.org 35 . The list was developed as part of an unfunded post-doctoral project by researchers in the Netherlands 35 . It was last updated in 2016; expenditure data are reported in 2013 US dollars (USD, accounting for inflation/deflation rates by country). A detailed account of the systematic process used to identify funders and obtain expenditure data is found here: http://www.healthresearchfunders.org/faq/ . At the time of retrieval for this study (August 2017), 287 health research funding bodies from 30 countries were included on the list. The list distinguishes five categories of funders: [1] philanthropic funders (n=194), [2] public funders (n=77), [3] public funders who fund health research through Official Development Assistance (public ODA) 1 (n=8), [4] multilateral funders (funding across countries) (n=7), and [5] public-private partnerships (PPP) (n=1) ( Table 1 ). While there are some inequities in its coverage (e.g. public funders were selected from only G20 countries; paucity of funders from low income countries), the list is likely the most comprehensive source of global health research funder expenditure information in existence (personal communication, Dr. Beverley Holmes, CEO, Michael Smith Foundation for Health Research) and has been used to construct samples in at least two other studies 34 , 36 . This study excludes commercial funders since their expenditure data are not publicly or readily available.

* Created from data at healthresearchfunders.org .

† OECD: Organisation for Economic Co-operation and Development.

‡ Includes 3 funders with no expenditure data available.

§ Includes 11 funders with no expenditure data available.

** Includes 1 funder with no expenditure data available.

†† Includes 5 funders with no expenditure data available.

To construct our sample, we sought up to 15 funders with the largest expenditures from each of the five funder categories from the list at www.healthresearchfunders.org , and aimed to include all listed lower income countries (n=4) if they were not otherwise represented in the sample. We included the latter group of funders in order to ensure representation from lower income countries, since researchers and journals from these countries have been disproportionately implicated in predatory publishing 26 , 37 . Working with the available number of funders in each category ( Table 1 ), we ended up with 50 funders: 15 philanthropic, 15 public, eight public ODA, seven multilateral, one PPP, and four lower-middle income country funders.
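The per-category sampling described above can be sketched as follows. This is a hypothetical illustration only: the funder records, field names, and the `build_sample` helper are assumptions for exposition, not the study's actual tooling.

```python
from collections import defaultdict

def build_sample(funders, per_category=15):
    """Select up to `per_category` funders with the largest expenditure
    from each funder category (illustration of the sampling described
    in the text)."""
    by_category = defaultdict(list)
    for f in funders:
        by_category[f["category"]].append(f)
    sample = []
    for category, group in by_category.items():
        # Sort by expenditure, largest first; funders with no
        # expenditure data sort last.
        group.sort(key=lambda f: f.get("expenditure") or 0, reverse=True)
        sample.extend(group[:per_category])
    return sample

# Hypothetical funder records
funders = [
    {"name": "Funder A", "category": "philanthropic", "expenditure": 900},
    {"name": "Funder B", "category": "philanthropic", "expenditure": 1200},
    {"name": "Funder C", "category": "public", "expenditure": 700},
]
print([f["name"] for f in build_sample(funders, per_category=1)])
# → ['Funder B', 'Funder C']
```

In the study itself, the available number of funders per category capped some groups below 15, and the four lower-middle income country funders were added to the resulting sample.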

In line with previous investigations into health research funder policies 34 , 38 , we expected that guidance for funded researchers would be publicly available and easily obtained. For each included funder, one of us (LS) visited the website using the URL provided by www.healthresearchfunders.org , or if the URL was not working, found it through a Google search using the funder name. When a funder’s website could not be located/did not work or when the funder was a duplicate, the next largest funder on the list was used. For each funder, we sought and downloaded the website section on policies for funded research in August-September 2017. If no specific policies were found, we searched the SHERPA (Securing a Hybrid Environment for Research Preservation and Access)/Juliet database ( www.sherpa.ac.uk/juliet/index.php ), which lists and links conditions for open access publication for some funders (though this is incomplete as it is reliant on voluntary contributions from funders and other organizations [e.g., libraries] tracking funder policies). If a funder’s website did not mention funding health research (i.e., funded other scientific research) or if the funder did not specifically award grants for research, the funding body was excluded from the sample and replaced with the next largest funder (by expenditure), where possible. Reasons for exclusion are documented in Figure 1 .

Figure 1. Flow diagram of funder sampling and exclusions (image not reproduced; original file f1000research-10-55848-g0000.jpg).

Data extraction

One assessor (LS) extracted information from the downloaded policy documents into an online form in Distiller SR, and a second assessor (KDC or MJP) verified the extracted data. Discrepancies were resolved by discussion. If additional documents were identified during extraction, we saved them and searched them for the desired data. The verification process led to clarifications in collected data or provided additional information. Since no reference standard for funder policies on publishing exists, the extracted items were derived de novo by the study team; no formal consensus process was used. The following information was assessed or extracted, as available:

  • Any statement(s) about the dissemination of outputs from funded research
  • Policy or recommendations about publication of funded research
  • Policies or recommendations on research/data accessibility
  • Policies or recommendations on journal quality, impact factor or other metric, ethical standards, and indexing of funded research;
  • Whether/what information is provided to researchers about predatory or substandard journals, or about journal credibility
  • Strength of any aforementioned policies/recommendations (‘must’, ‘should’, or ‘suggested’)
  • For publication policies, whether adherence will be monitored
  • For publication policies, whether consequences of non-adherence are listed

If non-English websites or documents were encountered and an English-language version was not available on the website, Google Translate was used to automatically translate the websites and documents. Google Translate has recently shown 85%–97% accuracy across nine languages for translating health research studies 39 , including the languages encountered in this study (French and German).

Data analysis

We summarized data descriptively by calculating proportions for dichotomous data; dates of funder policies/recommendations were summarized as medians with interquartile ranges.
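As a minimal sketch of these descriptive summaries (illustrative only: the study's actual analysis used Stata, per the underlying data file, and the helper names and example values here are assumptions):

```python
import statistics

def proportion(numerator, denominator):
    """Percentage for dichotomous data, rounded to the nearest integer."""
    return round(100 * numerator / denominator)

def median_iqr(values):
    """Median and interquartile range (25th and 75th percentiles)."""
    q1, median, q3 = statistics.quantiles(values, n=4)
    return median, (q1, q3)

# e.g., 38 of 44 funders had dissemination information
print(proportion(38, 44))  # → 86

# Policy dates coded as years (hypothetical values)
med, (q1, q3) = median_iqr([2010, 2012, 2014, 2015, 2016])
print(med)  # → 2014.0
```

Note that `statistics.quantiles` defaults to the "exclusive" method; different quantile conventions can shift the reported IQR slightly for small samples like this one (n=35).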

Protocol deviations

In the protocol for this study, we stated that we wanted to determine whether there were differences in the number of funders with policies/statements about journal quality and predatory publishing based on the income-level of the funder country or country being funded. However, as only four funders from lower-middle income countries and none from low income countries were on the list we sampled from, there were not enough funders to enable meaningful comparisons between higher income and lower income countries.

Results

For the 50 funding bodies originally identified using the described sampling technique, three allocated money from a funder (European Commission 2 ) already in the sample and were replaced with the next organizations on the list. One of the replacement funders also allocated money from an included funder and was likewise replaced. Two funders funded non-health research and four funders did not list any research grants (and appeared to fund health development initiatives), so they could not be evaluated for our purposes. Overall, six funders were excluded and lacked replacements in the categories they belonged to. 44 funders remained in the sample, for which grant policies and guidelines were sought ( Figure 1 ). 35 funders are from high income countries, one is from an upper-middle income country (China), three are from a lower-middle income country (India), and five are not classified by income level because they are multilateral (n=3) or fund across the European Union (EU) (n=2, Table 2 ).

ii All funders from India.

iii Income-level not available since funders distributing funds across multiple countries (3 multilateral funders and 2 European Union funders).

iv No funders from “Low income” countries in sample.

38 of 44 funders (86%) had publicly available information for grantees about disseminating funded research outputs ( Table 3 ). Of the six funders that did not have publicly available information, five are from high-income countries (US, Germany, France, UK) and one funds research in the EU through a public-private partnership. Three are philanthropic organizations and two are public-ODA funders. Information about disseminating research was contained within “policies” for 29/38 (76%) funders, within “recommendations” (suggestions and guidance) for 8/38 (21%), and within a “code of conduct” for one funder ( Table 3 ). All but one policy/recommendation referred to funded research (including results) as the unit of dissemination (37/38, 97%). About two thirds of policies/recommendations also specifically mentioned the dissemination of “data” (25/38, 66%). The median implementation date or date listed on collected documents was September 2014 (IQR: Apr 2012 to Apr 2016, n=35).

a Denominator = 44 funders with grant guidelines.

b Denominator = 38 funders with statements about research outputs.

c Policy: uses the words “policy”, “must”, “require”; Recommendation/Guideline: uses the words “recommendation” “recommend”, “suggest”, “should”, “guideline”.

d Other - described as “Code of Conduct”.

e Verbatim: activities of funded organizations; all research outputs, news releases; photos; any and all other published material referencing the research project or grant; code; research materials; protocols; research resources including, but are not limited to, the full range of tools that scientists and technicians use in the laboratory, such as cell lines, antibodies, reagents, animal models, growth factors, combinatorial chemistry, DNA libraries, clones and cloning tools (such as PCR), methods, laboratory equipment and machines; presentations; media interviews; and other professional activities; 'research tools'; metadata; bibliographic metadata; supplementary materials; other supporting artefacts, research resources/tools.

f Out of 35 funders listing this information. Date of implementation was used if available, otherwise, date of document or last update was used. When only year was given, January was used as default month; when a date range was given the most recent date was used.

Open access and journal selection

36 of 38 policies/recommendations (95%) specifically referred to publication in a journal as one form of dissemination for completed research ( Table 4 ). 31 of these (86%) mentioned that research should be open access, either through journal publication (n=24, 77%) or through self-archiving the final report or accepted manuscript in a freely accessible repository (such as PMC) (n=30, 97%). One funder from India (Indian Council of Medical Research), one from France (Institut National de la Santé Et de la Recherche Médicale, INSERM), and three from the USA (US Department of Defense, Congressionally Directed Medical Research Program, and the American Cancer Society) did not mention open access in their policies about research dissemination.

1 Denominator: 36 funders mentioning journal publication.

2 See Table 5 for verbatim text of statements about journal selection.

3 See Table 5 for verbatim text of statements about journal credibility.

4 excluding JIF.

5 2 funders indicate journal should be “high quality, peer reviewed journal”; 1 funder indicates journal should be “quality peer-reviewed journal”; 1 funder indicates what a good journal is: "Good journals have guidelines for reviewers of manuscripts committing them to strict confidentiality/to disclose conflicts of interest and promise to respond to submitted manuscripts within a specified, short time limit, and correspondingly set their reviewers short time limits for their comments."

6 See Table 6 for transparency or ethics standards for publications.

7 Other: 1 funder encourages publication in “primary scientific journals”; 1 funder states “models and mechanisms for publication and access to research must be both efficient and cost effective”.

13 of 36 (36%) policies recommending publication contained some guidance on how to select a journal, and six (17%) listed features or requirements of publishers or journals for researchers to look for ( Table 5 ). These six are described here. Only one funder policy (NIH) included a definition of a journal (i.e., either a publication listed in the journal section of the National Library of Medicine or one meeting stated criteria). Only one funder policy (Canada’s International Development Research Centre, IDRC) appeared to provide any information about ‘questionable’ journals, in a guidance document entitled “Publishing in Open Access Journals”. The document lists journal features to “look for” and to “be wary of” and mentions Beall’s List 3 as a resource ( Table 5 ). One policy (Deutsche Forschungsgemeinschaft/German Research Foundation, DFG) linked to Think, Check, Submit ( www.thinkchecksubmit.org ) – an initiative to facilitate researchers’ assessment of the credentials of a journal – on a page supplementing their open access policy and listing open access resources. Two funders distributing APC fees through the Charitable Open Access Fund (Cancer Research UK and Wellcome Trust) list the requirements of journals whose APCs are eligible for payment through the fund. The Bill and Melinda Gates Foundation provides researchers with a portal (called Chronos) through which to submit manuscripts directly to pre-selected journals whose standards are in line with the foundation’s requirements.

* “Open Choice” is a term used by the publisher Springer to refer to hybrid journals.

The policies of at least three funders (German Federal Ministry of Education and Research [BMBF], Indian Department of Biotechnology [DBT], and Indian Department of Science & Technology [DST]) include a statement that, beyond the requirement to make research freely accessible, researchers’ choice of journal is unrestricted.

Other journal characteristics mentioned by funders

Most funders mentioned that funded research should be peer reviewed or published in a peer reviewed journal ( Table 4 ). Four funders made non-specific reference to selecting a “good” or “quality” journal in relation to publication of funded research; none mentioned the journal impact factor. Eight funders made statements about publication transparency or ethics. For instance, one funder discussed reproducibility in published research, three mentioned adherence to reporting guidelines, and at least six asked that metadata accompany published articles ( Table 6 ).

Adherence to policies/recommendations

Of 38 policies/recommendations providing information about disseminating research outputs, only nine (24%) stated that they monitor adherence to either a policy (n=7) or recommendation (n=2); two philanthropic funders (Wellcome Trust and Bill & Melinda Gates Foundation) specified that they would evaluate publications of funded research reported to them to ensure they are published in journals meeting the funder’s outlined publishing requirements ( Table 7 ). No monitoring or adherence data appear to be publicly available. Only five (13%) funders with policies or recommendations about journal publication indicated that there would be consequences for non-adherence, and only two of those (Wellcome Trust and NIH) stated that they would withhold or suspend payments if articles are not made open access.

Discussion

Most health research funders appear to have active policies about the dissemination of funded research, typically concerning open access, and these often include statements about journal publication. Few policies contain guidance on how to select a journal, list features of journals meeting funder requirements, or address the credibility of publishing outlets. Only one health research funding organization (IDRC) made specific reference to “questionable” journals at the time of data collection (August-September 2017). Additionally, few policies describe whether funded outputs are monitored for compliance with funders’ dissemination policies, and few describe any consequences of researchers’ non-adherence. Information is not available on whether the NIH or Wellcome Trust, both of which promise to withhold or suspend grant funds for breaching their open access policies, have ever actually done so 9 .

For many of the funders in our sample, information to guide research publication was spread across multiple documents and not always within the open access policy statements/documents where publication is mentioned. For example, the only guidance we identified that referred to predatory journals (IDRC) was contained in a PDF (entitled “Publishing in Open Access Journals”) separate from the funder’s main open access policy. The policy did not flag that the document contained information about predatory/questionable/non-credible journals. This inconspicuous placement of guidance or expectations around journal selection relies on researchers’ curiosity, or on their knowledge that important information may be located outside of the main policy webpages or documents. If funders wish to provide guidance about journal credibility and predatory publishing, they may reach more researchers (and increase the likelihood of the guidance being read) by including such information within their main policies.

Comparison to other research

At least four previous studies examining health research funder policies on clinical trial transparency have collected information on funders’ recommendations for disseminating research.

Two studies using similar methods evaluated trial transparency policies (i.e., those related to trial registration, access to summary results, individual data availability) for non-commercial health research funders globally (n=18) 34 and in the USA (n=9) 36 . After accounting for three funders common to both studies, 21 of 24 (87.5%) funders (16 of which are represented in our study) either required or supported publication or registration of trial results (neither study nor its available data distinguished between publication and registration). This is in line with our finding that 86% (38 of 44) of funders had such policies/recommendations.

A third study, published in 2017, which examined research waste-reducing policies of 11 non-commercial funders (six of which are represented in our study), reported that six were explicit in requiring publication of full reports of funded research 33 . In comparison, 36 of 38 policies/recommendations (95%) in our study referred to journal publication as one form of dissemination for completed research but did not indicate that it was mandatory. There may be differences in how the authors of that study and we interpreted language in documents or policies. The names of the six funders ‘requiring’ publication in that study were not apparent in either the publication or the available data, so we are unable to investigate this further.

A study published in 2008 examined 73 UK and international non-commercial health research funders’ guidance for reporting funded clinical trials 42 . Forty-nine funders (67%) explicitly stated that trials could or should be published. All three funders appearing in both the 2008 sample and ours have continued to recommend (but not require) the publication of trial results. That study did not collect whether funders provided any guidance on selecting a journal to publish in.

No previous studies appear to have investigated whether health research funders provide guidance to help funded researchers select a journal for publication; our study appears to be the most comprehensive investigation of this matter. This is surprising, since funders in our sample regard publication as the primary means of disseminating funded research, and studies show that researchers likewise view journal publication as the primary way of disseminating their work 43 , 44 .

Strengths and limitations

This study is the first to examine the information funders provide researchers about selecting a journal in which to publish funded research. All funders in our sample that mention journal publication or provide guidance on selecting journals do so within their open access policies. At a time when the scholarly publishing landscape has been infiltrated and confused by predatory journals, inadvertently leading some researchers seeking open access to publish in predatory journals 45 , funders can play a critical role in steering researchers in the right direction. Funders can be specific and explicit about which journal features researchers should look for in order to select one that meets their open access requirements.

This study provides a benchmark by which to monitor how major health research funders are performing before and after Plan S implementation (January 2021). Data collection occurred in August and September 2017, prior to the September 2018 announcement of Plan S. So far, 24 funders have committed to implementing Plan S, five of which were considered in this research (European Commission, Gates Foundation, MRC/UK Research and Innovation, Wellcome Trust, and the World Health Organization). Two of these, the Wellcome Trust and Gates Foundation, provided guidance (in the form of tools) to facilitate selecting a journal in line with their open access policies at the time of sampling. At least one funder (Wellcome Trust) has made changes to its open access policies in anticipation of Plan S 46 .

Our study relied on publicly available information about funder expectations of funded research, abstracted by a single person with verification by a second (i.e., not two independent abstractors). Six funders in our sample did not provide any relevant public information. We did not seek verification of policies from funders. Data were collected at a time when publishing activities, particularly open access, were changing rapidly, in part in response to funded research being published in predatory journals 45 , 47 . We are aware that the NIH issued a notice on its Public Access Policy in November 2017 (outside of our sampling and data collection period) with recommendations to publish funded research in journals with ‘credible practices’ 48 . Engaging funders in our study may have had the added benefit of increasing uptake of our findings/recommendations into practice.

The focus of this research is limited to health research funders. We have not accounted for or evaluated other potential scientific publishing gatekeepers such as academic institutions, governments, or companies carrying out scholarly research, despite the important role they can play 49 .

Implications and recommendations for funders

Explicit funder policies on publication expectations. Selecting a journal in which to publish research is not a straightforward undertaking 40 , particularly since the emergence of predatory journals. For funders looking to make their expectations around publishing funded research more explicit and transparent, we propose several recommendations in Box 1 , based on the findings of this research and the expertise of the authors. Providing specific information about journal considerations in funders’ policies may facilitate more thoughtful and responsible selection of journals by funded researchers. Several recommendations in Box 1 pertain to the explicitness of article/journal considerations mentioned in Plan S (e.g., persistent identifiers for publications; long-term preservation/archiving; article-level metadata). All health research funders may wish to consider making the publishing-related aspects of their policies more explicit, whether or not they intend to implement Plan S.

Box 1. Recommendations for providing explicit/transparent guidance on journal selection in health research funders' open access policies

1. Use precise wording to describe your agency’s expectations that funded research be published

  • ◦ Indicate whether researchers are expected to publish their research (e.g. use of “must” vs “should”)
  • ◦ Indicate whether open access publication is one of several options for meeting the agency’s open access requirements.

2. Provide a definition of a journal that is suitable to your agency

  • ◦ Decide what essential features a publishing entity should and should not have in order to be considered a suitable place for publication.
  • ◦ Consider referring to/including the Committee on Publication Ethics (COPE) list of Core Practices all journals and publishers should follow: https://publicationethics.org/core-practices
  • ◦ Publication meets the requirements for ISSN (International Standard Serial Number) assignment;
  • ◦ Publication content is issued over time under a common title;
  • ◦ Publication is a collection of articles by different authors;
  • ◦ Publication is intended to be published indefinitely.

3. Indicate your agency’s requirements for access and discoverability of published articles

  • ◦ Published articles are free to access; AND additionally, for open access,
  • ◦ Licensing for published articles permits reuse and building on (typically through a Creative Commons Attribution License, CC BY).
  • ◦ Researchers can determine whether the publishing journal has a permanent archival arrangement in place either through automatic deposition to PMC ( https://www.ncbi.nlm.nih.gov/pmc/journals/ ) or to another archive (via the Keepers Registry: https://keepers.issn.org/keepers-registry ) xxv
  • ◦ PMC-partnered funders can require that researchers upload published research directly to PMC
  • ◦ Journal provides unique permanent identifiers (e.g. digital object identifier [DOI]) (check whether the journal/publisher is registered with CrossRef: https://www.crossref.org/06members/51depositor.html )

4. Be clear about your agency’s support for article processing charges arising from publication of funded research

  • ◦ Indicate how much money is available for each open access publication (e.g. maximum APC amount)
  • ◦ Indicate who will receive APC payment from the funder – the author (institution) or the journal
  • ◦ Indicate when funding will be distributed to support article processing charges
  • ◦ Indicate whether there are any conditions on distribution of APC funds

5. Indicate whether your agency requires archiving in a repository alongside publication

  • ◦ Indicate whether the publication, data, or both, are expected to be deposited in a repository
  • ◦ Indicate when deposition is expected to occur (i.e., immediately or within a specified time frame)
  • ◦ Indicate whether you have a dedicated repository for research publications (e.g., PMC for NIH-funded research), and if not, suggest one or more repositories that are considered acceptable by your agency
  • ◦ Be clear that it is the authors’ responsibility to ensure publications are deposited in a repository
  • ◦ Provide instructions/link to resources on how to deposit research in the suggested repository.

6. Indicate how your agency will monitor that funded research is published in appropriate journals, in line with agency recommendations/mandates

  • ◦ For ease of monitoring, provide instructions for researchers about where and how to include the funding agency name and grant number in published articles (guidance here: https://www.ukri.org/wp-content/uploads/2020/10/RIN-251020-FundersAcknowledgementInScholarlyjournalArticles.pdf )
  • ◦ Provide instructions on if, how, and when to submit publications of funded research to the funding agency, or state how publications will be monitored otherwise
  • ◦ Provide specific actions/consequences that the agency will carry out when funded research is published in a journal that does not meet agency requirements

xxv The Keepers Registry provides global monitoring of archiving arrangements for electronic journals.

The NIH is the only funder in our sample to clearly describe what it considers a journal – either those listed in the journal section of the National Library of Medicine (NLM) ( https://www.ncbi.nlm.nih.gov/nlmcatalog/journals ) or those meeting a comprehensive set of criteria 41 : (1) meets the requirements for ISSN (International Standard Serial Number) assignment; (2) content is issued over time under a common title; (3) is a collection of articles by different authors; and (4) is intended to be published indefinitely. All but the final criterion are straightforward to judge; presumably the last is meant to distinguish a journal from a book or monograph, but neither NIH nor NLM provides guidance on how to judge it. Whether and how journals’ intentions to publish indefinitely can be predicted has not been described. A more meaningful criterion for distinguishing journals from non-journals may be whether the publishing entity has archival arrangements in place (e.g., with LOCKSS, Portico, PubMed Central) to ensure perpetual access to content in the event a journal ceases to operate. Since preserving publisher content may have associated costs 50 , predatory or non-credible journals (which some describe as “primarily fee-collecting operations” 51 ) may be unlikely to seek this service.
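Some of these criteria are, in principle, checkable programmatically. As an illustrative sketch (not an endorsement of any single signal as a sufficient credibility test), the Python snippet below queries the Crossref REST API's journal endpoint for a given ISSN; the parsing helper is also exercised against a canned response fragment so the logic is visible without a network call. Field names follow Crossref's documented response format.

```python
import json
import urllib.request

CROSSREF_JOURNAL_URL = "https://api.crossref.org/journals/{issn}"

def journal_summary(record: dict) -> dict:
    """Extract a few registration-related fields from a Crossref
    journal record (the 'message' object of the API response)."""
    return {
        "title": record.get("title"),
        "publisher": record.get("publisher"),
        "issns": record.get("ISSN", []),
        "total_dois": record.get("counts", {}).get("total-dois", 0),
    }

def fetch_journal(issn: str) -> dict:
    """Look up a journal by ISSN via the Crossref REST API (network call)."""
    with urllib.request.urlopen(CROSSREF_JOURNAL_URL.format(issn=issn)) as resp:
        return journal_summary(json.load(resp)["message"])

# Canned response fragment, so the parsing logic runs without network access:
sample = {"title": "Example Journal", "publisher": "Example Press",
          "ISSN": ["1234-5678"], "counts": {"total-dois": 4200}}
print(journal_summary(sample)["total_dois"])  # -> 4200
```

Crossref registration only shows that a journal deposits DOIs; it says nothing about archival arrangements (LOCKSS, Portico, PMC), which would need to be checked separately, e.g. via the Keepers Registry.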

We were surprised that the three funders from India in our sample (Indian Council for Medical Research, DBT, and DST) did not mention journal credibility or predatory journals, and further, that a common policy for two Indian funders (DST and DBT), dated December 2014, recognizes “the right of researchers to publish their work in journals of their choice, because researchers are the best judges of where to publish their work”. Since at least 2016, there has been an ongoing national initiative to combat predatory journals and to support researchers in their choice of journals across higher education institutes in India. The main product of this work has been a list of approved journals in which academic researchers are permitted to publish, as well as standard templates for researchers to use when communicating with journals 52 . The University Grants Commission (UGC), the regulator and funder of higher education, has been leading the initiative. Given their lack of guidance in this space, it is uncertain whether the country’s largest health research funders are on board. A coordinated approach by a range of stakeholder groups 49 , which includes funders (who have innate authority to implement mandates about publishing), may facilitate improved publication decisions by researchers. Importantly, however, UGC’s list of approved journals has been plagued by numerous credibility concerns in its short existence 53 , 54 . In the absence of a trusted and comprehensive list, explicit recommendations from India’s funders regarding credible and non-credible features of journals in which to publish may be warranted.

Facilitating and monitoring adherence to funder policies. Funders are well-positioned to provide researchers with resources and tools to help ensure that results from funded research are published in credible and discoverable journals, in line with their policies. Several organizations in our sample consistently offer more information about potential publishing routes and tools to facilitate adherence to their policies. We provide a list of tools to facilitate the development of funder policies on research outputs, adherence to such policies, and monitoring of policy adherence ( Table 8 ).

xxvi Previously known as FundRef (until Nov 2015).

Monitoring researchers’ adherence to their policies may help funders understand the extent to which researchers’ publishing practices are guided by those policies 55 . Informing researchers that their adherence to open access policies is being monitored may foster better awareness of such policies and potentially increase adherence to them 56 . A 2018 study examining the accessibility of research supported by 12 research funding agencies across North America and Europe with open access policies found that 62% of almost 1.3 million articles over nine years were freely available 9 . In 2016, 90% of published research supported by the NIH and Wellcome Trust was free to access (via journal, repository, or both) 9 . Both agencies mandate the deposit of published research by publishing journals or funded authors into dedicated repositories (PMC for NIH; PMC Europe for Wellcome Trust). The remaining 10 funders in the sample did not mandate depositing in a repository alongside publication and had lower rates of freely accessible articles. For example, for the Canadian Institutes of Health Research (CIHR), only 55% of published research was freely accessible in 2016, even though the funder had a dedicated repository (PMC Canada) until 2018 (it closed due to low usage and high upkeep costs) 57 . The study’s authors conclude that funders with low compliance rates used less enforcement and had less infrastructure to support compliance with their open access mandates 9 .
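As a concrete illustration of what such monitoring infrastructure might look like, a funder could query the Crossref REST API, which links works to funders via Funder Registry identifiers, and tally the journals in which acknowledged outputs appear. The sketch below is illustrative only: the funder ID format shown is an example, and the aggregation step runs against a canned sample so no network access is needed.

```python
import collections
import json
import urllib.request

# Crossref works filtered by funder ID; IDs come from the Crossref
# Funder Registry (e.g. "100000002" for a Funder Registry DOI of the
# form 10.13039/100000002).
WORKS_URL = "https://api.crossref.org/funders/{funder_id}/works?rows={rows}"

def journals_by_count(works: list) -> collections.Counter:
    """Count container (journal) titles across Crossref work records."""
    counts = collections.Counter()
    for w in works:
        titles = w.get("container-title") or ["(no journal title)"]
        counts[titles[0]] += 1
    return counts

def fetch_works(funder_id: str, rows: int = 100) -> list:
    """Fetch works acknowledging a funder (network call)."""
    url = WORKS_URL.format(funder_id=funder_id, rows=rows)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]["items"]

# Canned sample, so the aggregation logic runs without a network call:
sample_works = [
    {"container-title": ["Journal A"]},
    {"container-title": ["Journal A"]},
    {"container-title": ["Journal B"]},
    {"container-title": []},
]
print(journals_by_count(sample_works).most_common(3))
```

A journal list produced this way could then be compared against the funder's own requirements (or known lists of questionable outlets), though it depends on authors and publishers depositing accurate funding metadata in the first place.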

Areas of future research

An important area of future study is whether researchers are being funded on the basis of grant applications that include research published in predatory journals – or in journals that may not be indexed in trusted databases. Publications in predatory journals have made their way (via CVs submitted by researchers or through institution-initiated database searches) into applications for academic career advancement 58 – 60 . Some have called for such publications to be discounted from consideration, or for researchers who submit them to be denied career advancement altogether 61 , 62 . It is unknown whether researchers are including publications in predatory journals as part of their funding applications; this should be evaluated. If they are, funders may wish to consider whether this should factor into decisions about awarding funding.

Most large health research funders mandate open access to funded research outputs, typically by way of open access journal publication and deposition of published research in digital repositories. Few funders provide guidance on what constitutes a journal (or an open access journal) or check that the research they have funded is indeed meeting specified requirements about how it should be shared. Health research funding organizations have an obligation to support researchers in meeting their mandates so that research can, as intended, contribute to the broader evidence base. The publishing community needs to provide guidance to funders and researchers on universally acceptable and transparent standards for journal operations. Many solutions to improve policies, facilitate adherence, monitor compliance, and work with other funders on large-scale improvements exist and should be implemented.

Journals that fail to make research discoverable breach the basic trust that researchers and their funders place in the current publishing system. Most funded researchers publish their work on the assumption that their journal or publisher is following best practices to ensure future use 7 . Bodies funding health research have a responsibility to protect their investments and, even more importantly, to ensure that funded research is not wasted by being published in non-credible and non-discoverable sources.

Data availability

Acknowledgements

We would like to thank Dr. Beverley Holmes of the Michael Smith Foundation for Health Research for providing valuable insight into global health research funder operations and policies at the outset of this project.

[version 2; peer review: 2 approved]

Funding Statement

SES is supported by a Tier 1 Canada Research Chair in Knowledge Translation and Quality of Care; JMG is supported by a Tier 1 Canada Research Chair in Health Knowledge Transfer and Uptake; DM is supported by a University Research Chair. MJP is supported by an Australian Research Council Discovery Early Career Researcher Award (DE200101618).

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

1 ODA is a term coined by the Development Assistance Committee of the Organisation for Economic Co-operation and Development ( https://data.oecd.org/oda/net-oda.htm )

2 Any European Commission funding program that explicitly stated using the European Horizon 2020 guidelines for grantees were jointly represented as “European Commission”. ( Figure 1 )

3 Beall’s List is an archived listing of potential predatory journals and publishers, as determined by librarian Jeffrey Beall between 2011 and 2017, https://web.archive.org/web/20170103170903/https://scholarlyoa.com/

Author contributions

The study was conceived of by LS, JG, and DM. LS drafted the initial protocol, with approval from the study team. LS managed the research project. LS, KDC, and MJP carried out the research. LS cleaned and analysed the data, prepared tables and figures, and drafted the manuscript. All authors provided feedback and gave their approval prior to submission.

Reviewer response for version 2

Nicholas DeVito

1 The DataLab, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK

Many thanks to the authors for their consideration of my review and their revisions. While many of my comments were addressed, I do have some additional feedback based on the responses from the authors that I believe should be further considered. That said, these points are all relatively minor, so I recommend the manuscript move forward for indexing, as the analysis and related discussion are fundamentally sound overall.

-As indicated, Dr. Holmes is the CEO of a health research funding organization (not included in our sample). Her expertise of the funding landscape and whether a listing of health research funders existed or whether the chosen list was the best source of this information, was an invaluable methodological contribution to our research. We prefer not to remove this acknowledgment.

I still believe this reads awkwardly in the text and is not how personal communication citations are typically used. If it is just an acknowledgement it is ultimately redundant to your thanks of Dr. Holmes in the “Acknowledgements” section at the end of the piece. However this is a minor point.

-Extraction criteria were devised de novo based on expertise within the author team. There is no reference standard for funder policies about publishing and as such we did not use any framework for extraction. We have added a clarification of this point in the methods section rather than in the discussion section.

The addition to the methods is appropriate although I do believe there is an argument to be made that using criteria developed in-house based on expertise, while an acceptable method, could be seen as a limitation as it is lacking further validation and consideration beyond the study team.

-We did not apply an analytical framework to the extracted text and doing so would be a post-hoc analysis. The table/text is provided for reference/transparency of extracted data summarized in Tables 2-4. We do not feel that a summary is necessary.

I do not agree that a full analytic framework is required to condense multiple tables of raw verbatim text within the body of a manuscript into something more easily digestible to readers. I still strongly recommend that the authors reconsider the utility that 7 pages of Tables in the middle of the manuscript (including 4 pages just for Table 5) has for readers when Tables 2-4 summarize the extractions from this raw data as necessary for the endpoints considered and interested parties could examine the very well-curated, transparent, and clearly sign-posted raw data for themselves as necessary.

-To be clear, we did not make a post-hoc justification regarding the lack of outreach to funders. We specifically did not have the a priori intent of verifying our data with funders. We anticipated that the results of our previous work, https://www.nature.com/news/stop-this-waste-of-people-animals-and-money-1.22554, which identified a large number of funders supporting research in predatory journals, may impact/prompt changes in funder policies once published (Sept 2017). Due to the timing of the current study, (Aug – Sept 2017), we decided not to engage funders since we did not want to influence any potential changes to their policies as a result of the study process. We did not find this to be a limitation. As expected, and as noted in our discussion, we are aware of at least one funder (NIH) who, in Nov 2017, clarified their policies about publication as a result of the previous study. The specific impacts (i.e., number and type of clarifications) of engaging funders to clarify extracted data in Goldacre et al’s previous work are unclear, however, we have added text to the limitations section outlining the additional potential benefit of engaging funders in the study process.

While it is true that outreach to sponsors was not specified in the protocol, it is also true that the justification for not doing so was also not provided in the protocol despite the fact that it was used in the study cited as informing your data collection/extraction methods. Therefore, while the decision not to conduct outreach was not post hoc there is no way to know if the provided justification was or not. In my original reading the provided information was not well-supported by even your own findings. Expanding on the context provided above in your revision is helpful however there is another aspect at play here. You relied on a single author for searches and only involved a second party for extractions. There is no acknowledgement of the potential that documents could have been missed by a single searcher. Outreach to the parties under investigation would allow them to confirm whether or not you have missed potentially relevant documentation on the topics of interest. It is a validation step for your methods rather than simply a way to potentially increase “uptake of our findings/ recommendations into practice.”

Is the work clearly and accurately presented and does it cite the current literature?

If applicable, is the statistical analysis and its interpretation appropriate?

Are all the source data underlying the results available to ensure full reproducibility?

Is the study design appropriate and is the work technically sound?

Are the conclusions drawn adequately supported by the results?

Are sufficient details of methods and analysis provided to allow replication by others?

Reviewer Expertise:

Metascience, health policy, publication ethics.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Reviewer response for version 1

Many thanks for the opportunity to review this piece on health research funders’ policies on journal selection for research outputs.

Overall, this piece investigates an interesting topic, has appropriate methods, appears well-executed, and follows open science best principles of pre-registration and open sharing of data for which it should be commended. I ultimately believe it should be recommended for indexing but a minor revision is in order. Specific points are provided below.

Can the authors please revisit the piece for a bit of editing? The informational content all appears very relevant and well-researched but the prose can carry on at times and structurally it jumps back and forth between topics and it is difficult to connect the threads. This is most applicable to the introduction (and perhaps the Discussion as well) but I think taking a critical eye towards developing a more direct, succinct, and straight-forward writing style throughout would enhance readability. A sharper consolidation and organisation of your ideas and arguments, while trying to be more concise overall, will aid readers considerably. A brief example: “Funders ought to be concerned that funded research may be published in journals that do not ensure discoverability of content ensuring it is available to contribute to future science” could surely be consolidated to something like “Funders should ensure published research is discoverable to the scientific community.” 

One area that doesn’t really come through clearly in the Intro is what funders get out of publication of their funded work and how this influences this dynamic. Why are they moving towards Open Access? What about other dissemination routes?

The personal communication citation of Dr. Beverly Holmes feels a bit out of place. Is she merely stating an opinion? If so, why can the authors not just state this opinion directly themselves since I assume they share it? I’m sure Dr. Holmes is perfectly knowledgeable and her opinion is valuable, but is citing her necessary here? I’m most familiar with citations of personal communication to convey official information not published elsewhere, from a direct source, rather than simply used to state an opinion the authors agree with. The average reader won’t have any insight as to why they should trust the word of Dr. Holmes on this issue.

The link to the protocol is upfront and clear and points readers to exactly where they can easily access more detailed information about the search/data extraction methods. Very well done by the authors.

How were the extraction criteria derived? Did the authors base them on anything or did they come up with them de novo? This should be stated and potentially explored in the Discussion.

One major area I was surprised that the authors did not include in their assessments is whether the funder offers to pay the open access/APCs of work they fund. This would be an important part of the dynamic of how, where, and why certain journals may be chosen and why funders should care even more about where the research they fund ends up.

In Figure 1, it is unclear when INSERM is mentioned as to whether that is the funder being removed or the reason another funder is being removed (later context shows it’s the latter, but this is unclear). Also there is a typo (“allocated”) in the “Excluded Funders n=6” box. I think ideally you would list which exact funders are entering or exiting the sample in this figure for clarity as you don’t get a good sense of what funders are actually being assessed until the results. It’s also not necessary to restate in prose everything Figure 1 already conveys (something to be cognisant of throughout).

Table 5 is a bit overwhelming. Perhaps consider summarising each policy in Table 5 and make the full text extractions available as an appendix for interested readers.

I don’t find the reason given for a lack of outreach to sponsors particularly compelling. The authors state they did not verify with sponsors (despite this being a feature of the work from Goldacre et al . they cite as a template for this research) because “data collection took place at a time when the publishing activities...was rapidly changing.” That seems like a post hoc justification rather than an actual reason why it didn’t actually occur. It also isn’t supported by their own data as the median date of effect of policies reported was September 2014, 3 years prior to data collection and the higher IQR range is April 2016, over a year before the searches. Similarly, this article is cross-sectional, so I don’t see that reason as relevant anyway. You could simply have discarded policies from before a cutoff date, etc. Can the authors please revisit their discussion of this limitation?

Table 8 may fit better in an appendix.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Unity Health Toronto, Canada

Thank you for reviewing this work. Please see responses to your comments below.

  • We have made some changes in the introduction and discussion sections to make the wording more concise. 
  • We have clarified that scholarly publishing is intended to facilitate the discoverability and uptake of funded research (into practice) and that predatory journals limit this by having unreliable discovery and questionable trustworthiness. We did not set out to investigate funders’ open access (OA) policies or shifts towards OA since discoverability and other journal features compromised by predatory journals can be achieved independent of openness. We note that several sources (ROARMAP, SHERPA JULIET)  list details of funders' open access policies. This research is instead concerned with hallmarks of journal validity which have been (mis)taken as signals of research quality.
  • Regarding other dissemination routes, we have clarified that this research focuses on journal publications rather than other publications routes (end of introduction section).
  • As indicated, Dr. Holmes is the CEO of a health research funding organization (not included in our sample). Her expertise of the funding landscape and whether a listing of health research funders existed or whether the chosen list was the best source of this information, was an invaluable methodological contribution to our research. We prefer not to remove this acknowledgment.
  • Extraction criteria were devised de novo based on expertise within the author team. There is no reference standard for funder policies about publishing and as such we did not use any framework for extraction. We have added a clarification of this point in the methods section rather than in the discussion section.
  • We have added text to the introduction (2nd para) outlining the waste of research funds & APCs due to predatory journals. While we did not extract specific data on funders' APC provisions, where funders provided information about journal APC support, this can be found in Table 5.
  • We have updated the suggested information in the flow diagram and left the related descriptive text in results for additional reference.
  • We did not apply an analytical framework to the extracted text and doing so would be a post-hoc analysis. The table/text is provided for reference/transparency of extracted data summarized in Tables 2-4. We do not feel that a summary is necessary.
  • To be clear, we did not make a post-hoc justification regarding the lack of outreach to funders: we did not have an a priori intent of verifying our data with funders. We anticipated that the results of our previous work, https://www.nature.com/news/stop-this-waste-of-people-animals-and-money-1.22554 , which identified a large number of funders supporting research in predatory journals, might prompt changes in funder policies once published (Sept 2017). Due to the timing of the current study (Aug–Sept 2017), we decided not to engage funders, since we did not want to influence any potential changes to their policies as a result of the study process. We did not consider this a limitation. As expected, and as noted in our discussion, we are aware of at least one funder (NIH) that, in Nov 2017, clarified its policies about publication as a result of the previous study. The specific impacts (i.e., number and type of clarifications) of engaging funders to clarify extracted data in Goldacre et al.’s previous work are unclear; however, we have added text to the limitations section outlining the additional potential benefit of engaging funders in the study process.
  • F1000 does not allow appendices and we have opted not to move this table to the Open Science Framework repository for this project as supplementary materials. We feel that the recommendations in this table are part of our discussion and are likely to receive little uptake if readers have to navigate away from the full text.

Shubhada Nagarkar

1 Department of Library and Information Science, Savitribai Phule Pune University, Pune, India

2 Centre for Publication Ethics, Savitribai Phule Pune University, Pune, Maharashtra, India

The purpose of this work is clear, and it is explicitly and precisely presented. The researchers set out to describe the policies of major health research funding agencies. The work is technically sound.

The authors selected policies from 44 funding agencies and systematically assessed them for information and recommendations on the dissemination of research output, and for recommended criteria for the selection of journals (quality, impact factor, and other metrics).

The methodology for data collection and analysis is discussed very well, and the protocol followed is described in detail.

The statistical data can be interpreted easily, and the source data are reproducible.

The results obtained through this exercise are well discussed and will be of interest to all funding agencies, especially those from developing countries. The findings of the study show that there is no specific guidance about how to select a journal for publication. These funding agencies should clearly mention, and include cautionary notes about, predatory publishers and their open-access appearance; the authors warn policymakers about this. If funding agencies' policies advised researchers on current trends in publishing, researchers might not fall prey to predatory publishers.

Overall, this work is recommended for indexing. Similar studies could assess the policies of funding agencies other than health research funders.

Bibliometrics, publication ethics, institutional repositories.

Thank you for your thoughtful comments.

  • Letter to the Editor
  • Open access
  • Published: 27 May 2024

Analyzing global research trends and focal points of pyoderma gangrenosum from 1930 to 2023: visualization and bibliometric analysis

  • Sa’ed H. Zyoud   ORCID: orcid.org/0000-0002-7369-2058 1 , 2  

Journal of Translational Medicine, volume 22, Article number: 508 (2024)

135 Accesses


To the Editor, I read with great interest the publication entitled “An approach to the diagnosis and management of patients with pyoderma gangrenosum from an international perspective: results from an expert forum” [ 1 ]. Pyoderma gangrenosum is an ulcerative cutaneous condition with distinctive clinical characteristics, first described in 1930 [ 2 ]. Given the importance of the subject, I searched the databases and found no bibliometric studies on this topic. In recent years, researchers have successfully applied bibliometric analysis in various domains, including dermatology, contributing to the development of novel theories and the assessment of research frontiers. Nonetheless, no comprehensive bibliometric analysis of P. gangrenosum has been performed. This study addresses that gap by conducting a thorough bibliometric analysis of the field of P. gangrenosum at the global level. The goal is to help researchers swiftly grasp the knowledge structure and current focal points of the field, generate new research topic ideas, and enhance the overall quality of research on P. gangrenosum.

This bibliometric analysis sought to delineate research endeavors concerning P. gangrenosum, pinpoint the primary contributing countries, and discern prevalent topics within this domain. Using a descriptive cross-sectional bibliometric methodology, this study extracted pertinent documents from the Scopus database covering the period from 1930 to December 31, 2023. The search strategy included keywords related to ‘pyoderma gangrenosum.’ VOSviewer software (version 1.6.20) was used to illustrate the most recurrent terms and themes [ 3 ]. Retrieval was restricted to journal articles; other document types were excluded.

Overall, 4,326 papers about P. gangrenosum were published between 1930 and 2023. Among these were 3,095 (71.54%) original papers, 548 (12.67%) letters, 477 (11.03%) reviews, and 206 (4.76%) other kinds of articles, such as conference abstracts, editorials, or notes. With 3,454 publications, English was the most often used language, followed by French ( n  = 253), German ( n  = 190), and Spanish ( n  = 163), accounting for 93.85% of all related publications.

Figure 1 shows the distribution of these publications. Between 1930 and 2023, the number of publications on P. gangrenosum increased steadily (R2 = 0.9257; P value < 0.001). Growth and productivity trends in P. gangrenosum-related publications have been influenced by developments in medical research, clinical practice, and patient care [4, 5]. All of these factors have advanced our knowledge of the condition, enhanced our methods of treatment, and helped to create standardized outcomes for clinical studies.

Figure 1: Annual growth of published research related to P. gangrenosum (1930–2023)
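The reported growth trend (an R2 of 0.9257 from a fit to annual publication counts) can be reproduced with an ordinary least-squares regression of counts on year. A minimal sketch follows; the annual counts are hypothetical stand-ins, since the underlying Scopus export is not included here:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b, returning (a, b, r_squared)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx                      # slope: average yearly growth
    b = my - a * mx                    # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot   # R^2: share of variance explained

# Hypothetical annual counts standing in for the Scopus export
years = list(range(2014, 2024))
counts = [80, 88, 95, 104, 110, 121, 128, 137, 146, 152]
slope, intercept, r2 = linear_fit(years, counts)
```

A positive slope with an R2 close to 1, as in the article's figure, indicates a steady near-linear increase in output.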

The top 10 countries with the most publications on P. gangrenosum are listed in Table 1, led by the USA (n = 1073; 24.80%), the UK (n = 345; 7.98%), Japan (n = 335; 7.74%), and Germany (n = 296; 6.84%). With 65 articles, the Mayo Clinic in the USA led the institutions; Oregon Health & Science University in the USA and Università degli Studi di Milano in Italy followed with 60 articles each.

To create a term co-occurrence map in VOSviewer 1.6.20, terms had to appear in titles and abstracts at least forty times, using binary counting. The map was built from the terms with the highest relevance scores and visualized as a network; the algorithm draws larger bubbles for frequently co-occurring terms and places highly similar terms close together. The larger circles in Fig. 2A represent frequently occurring terms in titles and abstracts. Four primary topic clusters are distinguished by color: “Treatment modalities” (green), “epidemiology and clinical presentation” (blue), “improved diagnostic methods” (red), and “the links between P. gangrenosum and other morbidities such as inflammatory bowel disease or autoimmune conditions” (yellow).

Figure 2: Mapping of terms used in research on P. gangrenosum. A: The co-occurrence network of terms extracted from the title or abstract of at least 40 articles. The colors represent groups of terms that are relatively strongly linked to each other. The size of a term signifies the number of publications related to P. gangrenosum in which the term appeared, and the distance between two terms represents an estimated indication of their relatedness. B: The same map colored by time, with blue indicating terms that appeared mainly in early years and red terms that appeared mainly in late years.
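The binary counting used above (each term counted at most once per document before thresholding) can be sketched independently of VOSviewer. The toy documents and the `min_occurrences` threshold below are illustrative assumptions, not the actual VOSviewer implementation:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(documents, min_occurrences=2):
    """Pairwise term co-occurrence counts with binary counting:
    each term contributes at most once per document."""
    # Binary counting: a term's frequency is the number of documents it appears in
    term_doc_freq = Counter()
    for doc in documents:
        term_doc_freq.update(set(doc))
    kept = {t for t, n in term_doc_freq.items() if n >= min_occurrences}
    # Count each unordered pair once per document in which both terms occur
    pairs = Counter()
    for doc in documents:
        for a, b in combinations(sorted(set(doc) & kept), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy "title/abstract" term lists standing in for real records
docs = [
    ["pyoderma", "treatment", "biologics"],
    ["pyoderma", "diagnosis", "treatment", "treatment"],  # duplicate counted once
    ["pyoderma", "diagnosis"],
]
pairs = cooccurrence(docs)
```

In a tool like VOSviewer, such pair counts then drive the layout: frequently co-occurring terms are drawn close together, and rarely linked terms far apart.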

Interestingly, after 2012, terms related to “treatment modalities” and “epidemiology and clinical presentation” received more attention, whereas earlier work (pre-2012) focused on “improved diagnostic methods” and the links between P. gangrenosum and other morbidities such as inflammatory bowel disease or autoimmune conditions. Figure 2B shows this tendency.

In conclusion, there has recently been an increase in P. gangrenosum research, especially in the last decade. The current focus of research is on treatment challenges, obstacles to diagnosis, and connections to underlying diseases. Furthermore, efforts are being made to create core outcome sets and standardized diagnostic criteria for clinical trials. These patterns demonstrate continuous attempts to comprehend, identify, and treat this illness with greater effectiveness. This recent increase in research has important implications for clinical practice. Clinicians can improve patient care by remaining current in emerging trends and areas of interest. Moreover, an in-depth analysis of previous studies can identify knowledge gaps, directing future research efforts toward the most important issues. In the end, a deeper comprehension of the body of research can result in better clinical judgment based on best practices, which could enhance patient outcomes and advance the dermatological field.

Data availability

This published article contains all the information produced or examined in this research. Additional datasets utilized during this study can be obtained from the corresponding author.

Haddadin OM, Ortega-Loayza AG, Marzano AV, Davis MDP, Dini V, Dissemond J, Hampton PJ, Navarini AA, Shavit E, Tada Y, et al. An approach to diagnosis and management of patients with pyoderma gangrenosum from an international perspective: results from an expert forum. Arch Dermatol Res. 2024;316(3):89.


Brunsting LA. Pyoderma (Echthyma) Gangrenosum. Arch Derm Syphilol. 1930;22(4):655–80.


van Eck NJ, Waltman L. Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics. 2010;84(2):523–38.

Kridin K, Cohen AD, Amber KT. Underlying systemic diseases in Pyoderma Gangrenosum: a systematic review and Meta-analysis. Am J Clin Dermatol. 2018;19(4):479–87.

McKenzie F, Arthur M, Ortega-Loayza AG. Pyoderma Gangrenosum: what do we know now? Curr Dermatol Rep. 2018;7(3):147–57.


Acknowledgements

The author thanks An-Najah National University for all the administrative assistance during the implementation of the project.

No support was received for conducting this study.

Author information

Authors and Affiliations

Department of Clinical and Community Pharmacy, College of Medicine and Health Sciences, An-Najah National University, Nablus, 44839, Palestine

Sa’ed H. Zyoud

Clinical Research Centre, An-Najah National University Hospital, Nablus, 44839, Palestine


Contributions

Sa’ed H. Zyoud significantly contributed to the conceptualization and design of the research project, overseeing data management and analysis, generating figures, and making substantial contributions to the literature search and interpretation. Furthermore, Sa’ed H. Zyoud authored the manuscript, which he reviewed and approved as the sole author.

Corresponding author

Correspondence to Sa’ed H. Zyoud .

Ethics declarations

Ethics approval and consent to participate

Ethics committee approval was unnecessary because the study did not involve human participants.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Zyoud, S.H. Analyzing global research trends and focal points of pyoderma gangrenosum from 1930 to 2023: visualization and bibliometric analysis. J Transl Med 22, 508 (2024). https://doi.org/10.1186/s12967-024-05306-4


Received : 10 May 2024

Accepted : 14 May 2024

Published : 27 May 2024

DOI : https://doi.org/10.1186/s12967-024-05306-4


Journal of Translational Medicine

ISSN: 1479-5876



Milwaukee schools superintendent resigns amid potential loss of millions in funding


MILWAUKEE (AP) — The superintendent of Milwaukee Public Schools resigned Tuesday, with Wisconsin’s largest school district in jeopardy of losing millions of dollars in state funding after not submitting required financial reports to the state.

Keith Posley, a former teacher who has been superintendent since 2018, resigned hours after a public hearing at which more than 100 parents, school district staff members and community members called for his ouster, Wisconsin Public Radio reported .

In addition to potentially losing millions from the state due to missing report filing deadlines, millions of dollars in federal funding are also being withheld from the district’s Head Start program after officials discovered abuse and lack of supervision in Milwaukee Public Schools programs.

Federal officials cited “deficiencies,” such as failure to supervise students in the early education and nutrition program for low-income children.

The district received $14 million from the federal Head Start program in the most recent school year, according to district budget materials.

The district also hired Todd Gray, the former Waukesha School District superintendent and a financial consultant who helped the Glendale-River Hills School District through a financial problem. He will be paid $48,000 to help the district through the end of July.


Milwaukee is the state’s largest school district, with about 68,000 students.

“We want you gone. No more playing games,” resident Elizabeth Brown said during Monday’s meeting calling for Posley’s removal.

Complaints went beyond the current crisis, calling into question the board and administration’s decisions about spending and staffing, about communication with families, and about students’ poor academic performance.

The school board accepted Posley’s resignation around 2 a.m. Tuesday after a closed session that began at about 8 p.m. Monday.

Concerns came to light after the state Department of Public Instruction released a letter May 24 showing the district had not provided “key financial data,” despite numerous meetings with the state.

The delay in sending the documents makes it impossible for state education officials to calculate aid estimates for other public school districts for the upcoming academic year.

Gov. Tony Evers, a former state superintendent of education, was asked last week about the district’s problems.

“Am I concerned? Hell, yes,” Evers said. “Frankly, it does not look good.”


  • Correspondence
  • Published: 18 April 2023

Addicted to the grant lottery? Dysfunctional reward structures of competitive research funding

  • Martin Dresler   ORCID: orcid.org/0000-0001-7441-3818 1 &
  • Murat Yücel 2  

Nature Mental Health, volume 1, page 232 (2023)

162 Accesses

1 Citations

2 Altmetric


Obtaining a research grant is a great reward for the intense efforts that scientists invest in both the composition of funding applications and the research that brought them to the position of successfully applying for funding. However, the decisions and distribution practices of most research funding schemes are increasingly questioned: grant review fails strikingly in its task of reliably ranking research proposals, and grant success is a poor predictor of research impact 1 , 2 . In addition, many funding distribution practices are highly inefficient, sometimes even having negative returns on investment: the expected value of a grant call may be less than the required time investment for application 3 . Alternative funding distribution strategies that refrain from competitive calls for extensive project proposals do exist 2 , 4 — so why are more researchers not rebelling against the norms of a dysfunctional funding system?
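The negative-return claim above can be made concrete with a back-of-envelope expected-value calculation. All figures below are hypothetical, chosen only to illustrate the arithmetic, and are not taken from the cited studies:

```python
def expected_net_value(success_rate, grant_value, writing_hours, hour_value):
    """Expected payoff of one application minus the cost of writing it."""
    return success_rate * grant_value - writing_hours * hour_value

# Hypothetical call: 8% success rate, a 100,000 grant, and 200 hours of
# proposal writing valued at 60 per hour of researcher time.
net = expected_net_value(0.08, 100_000, 200, 60)
# net is negative here: the expected return (8,000) is below the
# application cost (12,000), i.e. a negative return on investment.

# Pool view: with 100 applicants to such a call, collective writing
# costs can exceed the total money actually distributed.
pool_writing_cost = 100 * 200 * 60   # 1,200,000 of researcher time
money_distributed = 8 * 100_000      # 800,000 across 8 funded grants
```

Under these illustrative numbers, both the individual applicant and the applicant pool as a whole lose expected value by participating, which is the dysfunction the correspondence highlights.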

One potential explanation is the very ‘lottery’ character of existing funding practices: applicants typically cannot predict how their project proposal will fare. Lack of clear information regarding the timing of outcomes and unspecific feedback add to this uncertainty. Research on behavioral addictions such as gambling disorders has demonstrated that reinforcement structures with such high reward uncertainty are particularly addictive 5 .


Conix, S., De Block, A. & Vaesen, K. F1000 Res. 10 , 1126 (2021).


Düzgüneş, N. F1000 Res. 11 , 961 (2022).

Dresler, M. et al. Nat. Hum. Behav. 6 , 607–608 (2022).


Dresler, M. Eur. J. Neurosci . https://doi.org/10.1111/ejn.15818 (2023).

Anselme, P. & Robinson, M. J. F. Front. Behav. Neurosci. 7 , 182 (2013).


Acknowledgements

We are grateful for discussions on this topic within Die Junge Akademie and the FENS-Kavli Network of Excellence (FKNE).

Author information

Authors and Affiliations

Radboud University Medical Center, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands

Martin Dresler

BrainPark, Monash University, Clayton, Victoria, Australia

Murat Yücel


Corresponding authors

Correspondence to Martin Dresler or Murat Yücel .

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Reprints and permissions

About this article

Cite this article

Dresler, M., Yücel, M. Addicted to the grant lottery? Dysfunctional reward structures of competitive research funding. Nat. Mental Health 1, 232 (2023). https://doi.org/10.1038/s44220-023-00045-9


Published : 18 April 2023

Issue Date : April 2023

DOI : https://doi.org/10.1038/s44220-023-00045-9





  16. Funding & Support

    Funding & support services. Springer Nature offers a free open access support service to make it easier for our authors to discover and apply for funding to cover article processing charges (APCs) and/or book processing charges (BPCs). We can: Provide personalised information on OA funds that may be available to you.

  17. Disclosure of Funding Sources

    Disclosure of Funding Sources. The following policy applies to all PLOS journals, unless otherwise noted. All PLOS authors are required to declare what support they received to carry out their research. Declaring funding sources acknowledges funders' contributions, fulfills funding requirements, and promotes greater transparency in the ...

  18. funding and policy guidance for articles

    Open access funding and policy checklist for articles. Identify potential sources of APC funding and ensure your publication meets all the OA requirements of your funders and institutions. Plan S compliance for Springer Nature authors. From January 2021, if your research has been funded by certain members of cOAlition S, you may be required to ...

  19. Tobacco funded research: how even journals with bans find it hard to

    Tobacco companies are investing billions in pharmaceutical and medical products. These ties complicate the ongoing efforts of researchers, scientific organisations, and journals to distance themselves from the industry, find Irene van den Berg , Mathilde de Jeu , and Hristio Boytchev Research funded by the tobacco industry is still appearing in highly cited medical journals, despite attempts ...

  20. Research funding: dial back the competition

    Research funding: dial back the competition. You call for lotteries as a tie-breaker in applications for research funding ( Nature 609, 653; 2022 ). But there is an argument for rethinking funding ...

  21. Publication ethics and research integrity: policy guidelines for

    Journal Editors or members of the editorial board submitting their research to their own journals may do so and must declare their involvement with the journal as a conflict of interest. Where applicable, the Journal Editor or Editorial Board member must recuse themselves from the peer review process. ... Funding statements. All research ...

  22. Top health research funders' guidance on selecting journals for funded

    Introduction. Biomedical research studies supported by well-known funding organizations such as the National Institutes of Health (NIH), are published in so-called "predatory" journals 1.Predatory journals are regarded as non-credible and are criticized for failing to provide typical or expected publishing services and their lack of transparent operations 2, 3.

  23. International public opinion on climate change: drivers, challenges and

    ABSTRACT. Climate change enhanced by human beings has become one of the greatest challenges humanity has faced in its entire existence. In this paper, an analysis is made of the variables that influence the environmental perception of the population of thirty countries —28 European countries, USA and China—, in order to estimate the most explanatory factors and identify the causes of the ...

  24. Analyzing global research trends and focal points of pyoderma

    Mapping of terms used in research on P. gangrenosum. A: The co-occurrence network of terms extracted from the title or abstract of at least 40 articles.The colors represent groups of terms that are relatively strongly linked to each other. The size of a term signifies the number of publications related to P. gangrenosum in which the term appeared, and the distance between two terms represents ...

  25. All of Us Research Program lays off 45 workers ...

    However, the program is expecting a drop in funding from the 21st Century Cures Act that was signed into law in 2016, earmarking $6.3 billion for biomedical research and to fast-track drug ...

  26. Milwaukee schools superintendent resigns amid potential loss of

    The district received $14 million from the federal Head Start program in the most recent school year, according to district budget materials. The district also hired Todd Gray, the former Waukesha School District superintendent and a financial consultant who helped the Glendale-River Hills School District through a financial problem.

  27. Volume 120 Issue 7

    An official journal of the European Society of Cardiology. Publishes basic and translational research across different disciplines and areas. The Journal aims to enhance insight in cardiovascular disease mechanisms and the perspective for innovation.

  28. Experts analyze Silicon Valley AI companies' office needs and funding

    First quarter funding for venture-backed AI startups in Silicon Valley reached 2.3 billion this year, compared to an average of $1.7 billion over the past 9 years, JLL research showed.

  29. Saguaro Biosciences Secures $3M in Seed Funding to Transform ...

    QUEBEC CITY--(BUSINESS WIRE)-- Saguaro Biosciences, a pioneering developer of live cell reagents for AI-enhanced discoveries, today announced the successful closing of a $3 million seed funding round, comprised of equity investments and non-dilutive grant funding.The funding round was led by AQC Capital and Anges Québec, featuring significant contributions from notable figures including ...

  30. Addicted to the grant lottery? Dysfunctional reward structures of

    Obtaining a research grant is a great reward for the intense efforts that scientists invest in both the composition of funding applications and the research that brought them to the position of ...