
Article Contents

1. Introduction: what is meant by impact?
2. Why evaluate research impact?
3. Evaluating research impact
4. Impact and the REF
5. The challenges of impact evaluation
6. Developing systems and taxonomies for capturing impact
7. Indicators, evidence, and impact within systems
8. Conclusions and recommendations


Assessment, evaluations, and definitions of research impact: A review


Teresa Penfield, Matthew J. Baker, Rosa Scoble, Michael C. Wykes, Assessment, evaluations, and definitions of research impact: A review, Research Evaluation, Volume 23, Issue 1, January 2014, Pages 21–32, https://doi.org/10.1093/reseval/rvt021


This article explores what is understood by the term ‘research impact’ and provides a comprehensive assimilation of the available literature and information, drawing on global experience to understand the potential for methods and frameworks of impact assessment to be applied to UK impact assessment. We then look more closely at the impact component of the UK Research Excellence Framework (REF) taking place in 2014, the challenges of evaluating impact, the role that systems might play in the future in capturing the links between research and impact, and the requirements we have for such systems.

When considering the impact that is generated as a result of research, a number of authors and government recommendations have advised that a clear definition of impact is required (Duryea, Hochman, and Parfitt 2007; Grant et al. 2009; Russell Group 2009). From the outset, we note that the understanding of the term impact differs between users and audiences. There is a distinction between ‘academic impact’, understood as the intellectual contribution to one’s field of study within academia, and ‘external socio-economic impact’ beyond academia. In the UK, evaluation of academic and broader socio-economic impact takes place separately. ‘Impact’ has become the term of choice in the UK for research influence beyond academia. This distinction is not so clear in impact assessments outside of the UK, where academic outputs and socio-economic impacts are often viewed as one, to give an overall assessment of the value and change created through research.

For the purposes of the REF, impact is defined as:

‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’

Impact is assessed alongside research outputs and environment to provide an evaluation of research taking place within an institution. As such, research outputs, for example, knowledge generated and publications, can be translated into outcomes, for example, new products and services, and impacts or added value (Duryea et al. 2007). Although some might find the distinction somewhat marginal or even confusing, this differentiation between outputs, outcomes, and impacts is important, and has been highlighted not only for the impacts derived from university research (Kelly and McNicoll 2011) but also for work done in the charitable sector (Ebrahim and Rangan 2010; Berg and Månsson 2011; Kelly and McNicoll 2011). The Social Return on Investment (SROI) guide (The SROI Network 2012) suggests that ‘The language varies “impact”, “returns”, “benefits”, “value” but the questions around what sort of difference and how much of a difference we are making are the same’. It is perhaps assumed here that a positive or beneficial effect will be considered as an impact, but what about changes that are perceived to be negative? Wooding et al. (2007) adapted the terminology of the Payback Framework, developed for the health and biomedical sciences, from ‘benefit’ to ‘impact’ when modifying the framework for the social sciences, arguing that the positive or negative nature of a change is subjective and can also change with time. This is commonly illustrated with the drug thalidomide, which was introduced in the 1950s to help with, among other things, morning sickness, but was withdrawn in the early 1960s owing to teratogenic effects that resulted in birth defects. Thalidomide has since been found to have beneficial effects in the treatment of certain types of cancer. Clearly the impact of thalidomide would have been viewed very differently in the 1950s compared with the 1960s or today.

In viewing impact evaluations, it is important to consider not only who has evaluated the work but also the purpose of the evaluation, in order to determine the limits and relevance of an assessment exercise. In this article, we draw on a broad range of examples, with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). As part of this review, we aim to explore the following questions:

What are the reasons behind trying to understand and evaluate research impact?

What are the methodologies and frameworks that have been employed globally to assess research impact and how do these compare?

What are the challenges associated with understanding and evaluating research impact?

What indicators, evidence, and impacts need to be captured within developing systems?

What are the reasons behind trying to understand and evaluate research impact?

Throughout history, the activities of a university have been to provide both education and research, but the fundamental purpose of a university was perhaps described in the writings of mathematician and philosopher Alfred North Whitehead (1929):

‘The justification for a university is that it preserves the connection between knowledge and the zest of life, by uniting the young and the old in the imaginative consideration of learning. The university imparts information, but it imparts it imaginatively. At least, this is the function which it should perform for society. A university which fails in this respect has no reason for existence. This atmosphere of excitement, arising from imaginative consideration transforms knowledge.’

In undertaking excellent research, we anticipate that great things will come, and as such one of the fundamental reasons for undertaking research is to generate and transform knowledge that will benefit society as a whole.

One might consider that by funding excellent research, impacts (including those that are unforeseen) will follow, and traditionally the assessment of university research has focused on academic quality and productivity. Aspects of impact, such as the value of Intellectual Property, are currently recorded by UK universities through their Higher Education Business and Community Interaction Survey return to the Higher Education Statistics Agency; however, as with other public and charitable sector organizations, showcasing impact is an important part of attracting and retaining donors and support (Kelly and McNicoll 2011).

The reasoning behind the move towards assessing research impact is undoubtedly complex, involving both political and socio-economic factors, but, nevertheless, we can differentiate between four primary purposes.

HEIs overview. To enable research organizations including HEIs to monitor and manage their performance and understand and disseminate the contribution that they are making to local, national, and international communities.

Accountability. To demonstrate to government, stakeholders, and the wider public the value of research. There has been a drive from the UK government, through the Higher Education Funding Council for England (HEFCE) and the Research Councils (HM Treasury 2004), to account for the spending of public money by demonstrating the value of research to tax payers, voters, and the public in terms of socio-economic benefits (European Science Foundation 2009), in effect justifying this expenditure (Davies, Nutley, and Walter 2005; Hanney and González-Block 2011).

Inform funding. To understand the socio-economic value of research and subsequently inform funding decisions. By evaluating the contribution that research makes to society and the economy, future funding can be allocated where it is perceived to bring about the desired impact. As Donovan (2011) comments, ‘Impact is a strong weapon for making an evidence based case to governments for enhanced research support’.

Understand. To understand the methods and routes by which research leads to impacts, in order to maximize the findings that come out of research and develop better ways of delivering impact.

The growing trend for accountability within the university system is not limited to research and is mirrored in assessments of teaching quality, which now feed into evaluation of universities to ensure fee-paying students’ satisfaction. In demonstrating research impact, we can provide accountability upwards to funders and downwards to users on a project and strategic basis ( Kelly and McNicoll 2011 ). Organizations may be interested in reviewing and assessing research impact for one or more of the aforementioned purposes and this will influence the way in which evaluation is approached.

It is important to emphasize that ‘Not everyone within the higher education sector itself is convinced that evaluation of higher education activity is a worthwhile task’ (Kelly and McNicoll 2011). Once plans for the new assessment of university research were released, the University and College Union (2011) organized a petition calling on the UK funding councils to withdraw the inclusion of impact assessment from the REF proposals. This petition was signed by 17,570 academics (52,409 academics were returned to the 2008 Research Assessment Exercise), including Nobel laureates and Fellows of the Royal Society (University and College Union 2011). Impact assessments raise concerns over the steer of research towards disciplines and topics in which impact is more easily evidenced and that provide economic impacts, which could subsequently lead to a devaluation of ‘blue skies’ research. Johnston (1995) notes that by developing relationships between researchers and industry, new research strategies can be developed. This raises the questions of whether UK business and industry should not themselves invest in the research that will deliver them impacts, and of who will fund basic research if not the government. Donovan (2011) asserts that there should be no disincentive for conducting basic research. By asking academics to consider the impact of the research they undertake, and by reviewing and funding them accordingly, the result may be to compromise research by steering it away from the imaginative and creative quest for knowledge. Professor James Ladyman, at the University of Bristol, a vocal opponent of awarding funding based on the assessment of research impact, has been quoted as saying that ‘…inclusion of impact in the REF will create “selection pressure,” promoting academic research that has “more direct economic impact” or which is easier to explain to the public’ (Corbyn 2009).

Despite the concerns raised, the broader socio-economic impacts of research will be included and will count for 20% of the overall research assessment as part of the REF in 2014. From an international perspective, this represents a step change in the comprehensiveness with which impact will be assessed within universities and research institutes, incorporating impact from across all research disciplines. Understanding what impact looks like across the various strands of research, and the variety of indicators and proxies used to evidence impact, will be important to developing a meaningful assessment.

What are the methodologies and frameworks that have been employed globally to evaluate research impact and how do these compare?

The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review (Grant 2006). Evidence of academic impact may be derived through various bibliometric methods, one example of which is the H index, which incorporates factors such as the number of publications and citations. These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example, within Excellence in Research for Australia and STAR Metrics in the USA, in which quantitative measures are used to assess impact, for example, publications, citations, and research income. These ‘traditional’ bibliometric techniques can be regarded as giving only a partial picture of full impact (Bornmann and Marx 2013), with no link to causality. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgment, are each considered by some (Vonortas and Link 2012) to have shortcomings when used to measure impacts.
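To make the bibliometric starting point concrete, the short sketch below computes an h-index from a list of per-paper citation counts (the largest h such that at least h papers have at least h citations each). This is a generic illustration in Python; the function name and the sample citation counts are our own and are not drawn from any of the assessments discussed here.

def h_index(citation_counts):
    # Largest h such that at least h papers have at least h citations each.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# Example: six papers with these citation counts give an h-index of 3.
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3

Even a correctly computed h-index says nothing about who used the work or what changed as a result, which is precisely the gap that the broader impact frameworks discussed below try to address.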

Incorporating assessment of the wider socio-economic impact began using metrics-based indicators such as Intellectual Property registered and commercial income generated ( Australian Research Council 2008 ). In the UK, more sophisticated assessments of impact incorporating wider socio-economic benefits were first investigated within the fields of Biomedical and Health Sciences ( Grant 2006 ), an area of research that wanted to be able to justify the significant investment it received. Frameworks for assessing impact have been designed and are employed at an organizational level addressing the specific requirements of the organization and stakeholders. As a result, numerous and widely varying models and frameworks for assessing impact exist. Here we outline a few of the most notable models that demonstrate the contrast in approaches available.

The Payback Framework is possibly the most widely used and adapted model for impact assessment (Wooding et al. 2007; Nason et al. 2008). Developed during the mid-1990s by Buxton and Hanney, working at Brunel University, it incorporates both academic outputs and wider societal benefits (Donovan and Hanney 2011) to assess the outcomes of health sciences research. The Payback Framework systematically links research with the associated benefits (Scoble et al. 2010; Hanney and González-Block 2011) and can be thought of in two parts: first, a model that allows the research and subsequent dissemination process to be broken into specific components within which the benefits of research can be studied; and second, a multi-dimensional classification scheme into which the various outputs, outcomes, and impacts can be placed (Hanney and González-Block 2011). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institutes of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong (Bernstein et al. 2006; Nason et al. 2008; CAHS 2009; Spaapen et al. n.d.). The Payback Framework enables health and medical research and impact to be linked and the process by which impact occurs to be traced. For more extensive reviews of the Payback Framework, see Davies et al. (2005), Wooding et al. (2007), Nason et al. (2008), and Hanney and González-Block (2011).

A very different approach, known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context and has a central theme of capturing ‘productive interactions’ between researchers and stakeholders by analysing the networks that evolve during research programmes (Spaapen and Drooge 2011; Spaapen et al. n.d.). SIAMPI is based on the widely held assumption that interactions between researchers and stakeholders are an important prerequisite to achieving impact (Donovan 2011; Hughes and Martin 2012; Spaapen et al. n.d.). This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. SIAMPI has been used within the Netherlands Institute for Health Services Research (SIAMPI n.d.). ‘Productive interactions’, which can perhaps be viewed as instances of knowledge exchange, are widely valued and supported internationally as mechanisms for enabling impact, and are often supported financially, for example by Canada’s Social Sciences and Humanities Research Council, which funds knowledge exchange with a view to enabling long-term impact. In the UK, the Department for Business, Innovation and Skills provided funding of £150 million for knowledge exchange in 2011–12 to ‘help universities and colleges support the economic recovery and growth, and contribute to wider society’ (Department for Business, Innovation and Skills 2012). While valuing and supporting knowledge exchange is important, SIAMPI perhaps takes this a step further in enabling these exchange events to be captured and analysed. One of the advantages of this method is that less input is required compared with capturing the full route from research to impact. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less suitable method where showcasing the benefits of research is desirable or where justification of funding based on impact is required.

The first attempt globally to comprehensively capture the socio-economic impact of research across all disciplines was undertaken for the Australian Research Quality Framework (RQF), using a case study approach. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework, a pilot assessment was undertaken by the Australian Technology Network. Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel, who concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research (Duryea et al. 2007). To evaluate impact, case studies were interrogated and verifiable indicators assessed to determine whether research had led to reciprocal engagement, adoption of research findings, or public value. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF.

In developing the UK REF, HEFCE commissioned a report, in 2009, from RAND to review international practice for assessing research impact and provide recommendations to inform the development of the REF. RAND selected four frameworks to represent the international arena ( Grant et al. 2009 ). One of these, the RQF, they identified as providing a ‘promising basis for developing an impact approach for the REF’ using the case study approach. HEFCE developed an initial methodology that was then tested through a pilot exercise. The case study approach, recommended by the RQF, was combined with ‘significance’ and ‘reach’ as criteria for assessment. The criteria for assessment were also supported by a model developed by Brunel for ‘measurement’ of impact that used similar measures defined as depth and spread. In the Brunel model, depth refers to the degree to which the research has influenced or caused change, whereas spread refers to the extent to which the change has occurred and influenced end users. Evaluation of impact in terms of reach and significance allows all disciplines of research and types of impact to be assessed side-by-side ( Scoble et al. 2010 ).

The range and diversity of frameworks developed reflect the variation in purpose of evaluation including the stakeholders for whom the assessment takes place, along with the type of impact and evidence anticipated. The most appropriate type of evaluation will vary according to the stakeholder whom we are wishing to inform. Studies ( Buxton, Hanney and Jones 2004 ) into the economic gains from biomedical and health sciences determined that different methodologies provide different ways of considering economic benefits. A discussion on the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006) .

Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues, for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact ( Scoble et al. working paper ).

Although based on the RQF, the REF did not adopt all of the suggestions held within, for example, the option of allowing research groups to opt out of impact assessment should the nature or stage of research deem it unsuitable ( Donovan 2008 ). In 2009–10, the REF team conducted a pilot study for the REF involving 29 institutions, submitting case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) ( REF2014 2010 ). These case studies were reviewed by expert panels and, as with the RQF, they found that it was possible to assess impact and develop ‘impact profiles’ using the case study approach ( REF2014 2010 ).

From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s. Differences between these two assessments include the removal of indicators of esteem and the addition of assessment of socio-economic research impact. The REF will therefore assess three aspects of research:

Outputs

Impact

Environment

Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, through impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a). HEFCE initially indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced to 20% for the 2014 REF. This reduction perhaps reflects feedback and lobbying, for example from the Russell Group and Million+ group of universities, which called for impact to count for 15% (Russell Group 2009; Jump 2011), and guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF impact assessment would be in a developmental phase and that a lower weighting for impact would therefore be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010).
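For illustration, the sketch below shows how a 20% impact weighting combines with the other elements of a submission. The 65% (outputs) and 15% (environment) weightings are the published REF 2014 figures; the single score per element is a simplification of our own, since actual REF results are expressed as quality profiles (the percentage of a submission at each star level) rather than one number.

# Published REF 2014 weightings: outputs 65%, impact 20%, environment 15%.
WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

def weighted_assessment(scores):
    # Combine per-element scores (here on a simplified 0-4 scale) into one weighted figure.
    return sum(WEIGHTS[element] * score for element, score in scores.items())

# Hypothetical submission: strong outputs, weaker impact evidence.
example = {"outputs": 3.2, "impact": 2.5, "environment": 3.0}
print(round(weighted_assessment(example), 2))  # prints 3.03

Raising or lowering the impact weighting changes how much an institution's overall result depends on its impact case studies, which is what the lobbying described above was essentially about.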

The quality and reliability of impact indicators will vary according to the impact we are trying to describe and link to research. In the UK, evidence and research impacts will be assessed for the REF within research disciplines. Although it can be envisaged that the range of impacts derived from research of different disciplines are likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example, from business development to cultural changes or saving lives? An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impact from specific disciplines.

Providing advice and guidance within specific disciplines is undoubtedly helpful. It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that impact and evidence are expected to vary according to discipline (REF2014 2012). Why should this be the case? Two areas of research impact, health and biomedical sciences and the social sciences, have received particular attention in the literature by comparison with, for example, the arts. Reviews and guidance on developing and evidencing impact in particular disciplines include the London School of Economics (LSE) Public Policy Group’s impact handbook (LSE n.d.), a review of the social and economic impacts arising from the arts produced by Reeves (2002), and a review by Kuruvilla et al. (2006) on the impact arising from health research. Perhaps it is time for a generic guide based on types of impact rather than research discipline?

What are the challenges associated with understanding and evaluating research impact?

In endeavouring to assess or evaluate impact, a number of difficulties emerge and these may be specific to certain types of impact. Given that the type of impact we might expect varies according to research discipline, impact-specific challenges present us with the problem that an evaluation mechanism may not fairly compare impact between research disciplines.

5.1 Time lag

The time lag between research and impact varies enormously. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. In developing the RQF, The Allen Consulting Group (2005) highlighted that defining a time lag between research and impact was difficult. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time limit be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). To be considered for inclusion within the REF, impact must be underpinned by research that took place between 1 January 1993 and 31 December 2013, with impact occurring during an assessment window from 1 January 2008 to 31 July 2013. However, there has been recognition that this time window may be insufficient in some instances, with architecture being granted an additional 5-year period (REF2014 2012); why only architecture has been granted this dispensation is not clear, when similar cases could be made for medicine, physics, or even English literature. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF as to what the view of the panel will be and whether, if the extension is deemed inappropriate, the case study will be rendered ‘unclassified’.
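As a simple restatement of the eligibility windows just described, the sketch below checks whether a hypothetical case study qualifies: the underpinning research must fall between 1 January 1993 and 31 December 2013, and the impact must occur between 1 January 2008 and 31 July 2013. The function and example dates are our own, and the extended window granted to architecture is ignored here.

from datetime import date

RESEARCH_WINDOW = (date(1993, 1, 1), date(2013, 12, 31))
IMPACT_WINDOW = (date(2008, 1, 1), date(2013, 7, 31))

def ref_eligible(research_date, impact_date):
    # True only if both the underpinning research and the impact fall within their windows.
    in_window = lambda d, window: window[0] <= d <= window[1]
    return in_window(research_date, RESEARCH_WINDOW) and in_window(impact_date, IMPACT_WINDOW)

# A 1995 discovery whose impact arrived in 2012 qualifies; the same discovery
# with impact arriving only in 2015 falls outside the assessment window.
print(ref_eligible(date(1995, 6, 1), date(2012, 3, 1)))  # True
print(ref_eligible(date(1995, 6, 1), date(2015, 3, 1)))  # False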

5.2 The developmental nature of impact

Impact is not static; it will develop and change over time, and this development may be an increase or decrease in the current degree of impact. Impact can be temporary or long-lasting. The point at which assessment takes place will therefore influence the degree and significance of that impact. For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. Clearly there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits; the time at which an impact assessment takes place will therefore influence the degree of impact observed. If impact is short-lived and has come and gone within an assessment period, how will it be viewed and considered? Again, the objective and perspective of the individuals and organizations assessing impact will be key to understanding how temporary and dissipated impact will be valued in comparison with longer-term impact.

5.3 Attribution

Impact is derived not only from targeted research but also from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straightforward. Husbands-Fealing suggests that, to assist identification of causality for impact assessment, it is useful to develop a theoretical framework to map the actors, activities, linkages, outputs, and impacts within the system under evaluation, showing how later phases result from earlier ones. Such a framework should not be linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system. Impact is often the culmination of work spanning several research communities (Duryea et al. 2007). Concerns over how to attribute impacts have been raised many times (The Allen Consulting Group 2005; Duryea et al. 2007; Grant et al. 2009), and differentiating between the various major and minor contributions that lead to impact is a significant challenge.

Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact, or effect of complementary assets, increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but because of this time, and the increasing complexity of the networks involved in translating the research and interim impacts, the impact becomes more difficult to attribute and link back to a contributing piece of research.

Figure 1. Time, attribution, impact. Replicated from Hughes and Martin (2012).

This presents particular difficulties in research disciplines conducting basic research, such as pure mathematics, where the impact of research is unlikely to be foreseen. Research findings will be taken up in other branches of research and developed further before socio-economic impact occurs, by which point attribution becomes a huge challenge. If this research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking: ‘one is able to see further by standing on the shoulders of giants’.

5.4 Knowledge creep

It is acknowledged that one of the outcomes of developing new knowledge through research can be ‘knowledge creep’ where new data or information becomes accepted and gets absorbed over time. This is particularly recognized in the development of new government policy where findings can influence policy debate and policy change, without recognition of the contributing research ( Davies et al. 2005 ; Wooding et al. 2007 ). This is recognized as being particularly problematic within the social sciences where informing policy is a likely impact of research. In putting together evidence for the REF, impact can be attributed to a specific piece of research if it made a ‘distinctive contribution’ ( REF2014 2011a ). The difficulty then is how to determine what the contribution has been in the absence of adequate evidence and how we ensure that research that results in impacts that cannot be evidenced is valued and supported.

5.5 Gathering evidence

Gathering evidence of the links between research and impact is a challenge not only where that evidence is lacking. The introduction of impact assessments with the requirement to collate evidence retrospectively poses difficulties because evidence, measurements, and baselines have, in many cases, not been collected and may no longer be available. Looking forward, we will be able to reduce this problem; however, identifying, capturing, and storing the evidence in such a way that it can be used in the decades to come is a difficulty that we will need to tackle.

Collating the evidence and indicators of impact is a significant task being undertaken within universities and institutions globally. Decker et al. (2007) surveyed researchers at top US research institutions during 2005; the survey of more than 6,000 researchers found that, on average, more than 40% of their time was spent on administrative tasks. It is desirable that the assignment of administrative tasks to researchers is limited, and therefore, to assist the tracking and collating of impact data, systems are being developed through numerous projects internationally, including STAR Metrics in the USA, the ERC (European Research Council) Research Information System, and Lattes in Brazil (Lane 2010; Mugabushaka and Papazoglou 2012).

Ideally, systems within universities internationally would be able to share data, allowing direct comparisons, accurate storage of information developed in collaborations, and transfer of comparable data as researchers move between institutions. To achieve compatible systems, a shared language is required. CERIF (Common European Research Information Format) was developed for this purpose and first released in 1991; a number of projects and systems across Europe, such as the ERC Research Information System (Mugabushaka and Papazoglou 2012), are being developed to be CERIF-compatible.
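To illustrate the idea of a shared, structured record, the sketch below defines a minimal research-information record that two institutions could exchange. It is emphatically not the CERIF data model, which is far richer and more formally specified; every field name here is invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ResearchOutputRecord:
    # Illustrative fields only; not the CERIF schema.
    identifier: str                                   # e.g. a DOI or internal ID
    title: str
    institutions: List[str] = field(default_factory=list)
    funders: List[str] = field(default_factory=list)
    linked_impacts: List[str] = field(default_factory=list)  # IDs of related impact records

record = ResearchOutputRecord(
    identifier="doi:10.0000/example",
    title="Hypothetical collaborative study",
    institutions=["University A", "University B"],
    funders=["Funder X"],
)
record.linked_impacts.append("impact:policy-change-001")
print(record)

Because both institutions would interpret the same fields in the same way, a record like this could follow a researcher who moves, or be merged when a collaboration reports jointly, which is the interoperability benefit a standard such as CERIF is designed to provide.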

In the UK, there have been several Jisc-funded projects in recent years to develop systems capable of storing research information, for example, MICE (Measuring Impacts Under CERIF), the UK Research Information Shared Service, and the Integrated Research Input and Output System, all based on the CERIF standard. To allow comparisons between institutions, identifying a comprehensive taxonomy of impact, and of the evidence for it, that can be used universally is seen as very valuable. However, the Achilles heel of any such attempt, as critics suggest, is the creation of a system that rewards what it can measure and codify, with the knock-on effect of directing research projects to deliver within the measures and categories that are rewarded.

Attempts have been made to categorize impact evidence and data; for example, the aim of the MICE Project was to develop a set of impact indicators to enable impact to be fed into a CERIF-based system. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King’s College London, or outlined in relevant publications (MICE Project n.d.). A taxonomy of impact categories was then produced onto which impact could be mapped. What emerged on testing the MICE taxonomy (Cooke and Nadim 2011), by mapping impacts from case studies, was that detailed categorization of impact was too prescriptive. Every piece of research results in a unique tapestry of impact, and despite the MICE taxonomy having more than 100 indicators, it was found that these did not suffice. It is perhaps worth noting that the expert panels who assessed the pilot exercise for the REF commented that the evidence provided by research institutes to demonstrate impact was ‘a unique collection’. Where quantitative data were available, for example, audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. The Goldsmith report (Cooke and Nadim 2011) recommended making indicators ‘value free’, enabling the value or quality to be established in an impact descriptor that could be assessed by expert panels. The report concluded that general categories of evidence would be more useful, such that indicators could encompass dissemination and circulation, re-use and influence, collaboration and boundary work, and innovation and invention.

While defining the terminology used to understand impact and indicators will enable comparable data to be stored and shared between organizations, we would recommend that any categorization of impacts be flexible, such that impacts arising via non-standard routes can be accommodated. It is worth considering the degree to which indicators are defined, and providing broader definitions with greater flexibility.

It is possible to incorporate both metrics and narratives within systems, for example, within the Research Outcomes System and Researchfish, currently used by several of the UK research councils to allow impacts to be recorded. Although recording narratives has the advantage of allowing some context to be documented, it may make the evidence less flexible for use by different stakeholder groups (which include government, funding bodies, research assessment agencies, research providers, and user communities), for whom the purpose of analysis may vary (Davies et al. 2005). Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al. n.d.). Systems need to be able to capture links between, and evidence of, the full pathway from research to impact, including knowledge exchange, outputs, outcomes, and interim impacts, to allow the route to impact to be traced. This database of evidence needs to establish both where impact can be directly attributed to a piece of research and the various contributions to impact made along the pathway.

Baselines and controls need to be captured alongside change to demonstrate the degree of impact. In many instances, controls are not feasible as we cannot look at what impact would have occurred if a piece of research had not taken place; however, indications of the picture before and after impact are valuable and worth collecting for impact that can be predicted.

It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). This is being done for the collation of academic impact and outputs, for example, by the Research Portfolio Online Reporting Tools, which uses PubMed and text mining to cluster research projects, and STAR Metrics in the USA, which uses administrative records and research outputs and is also being implemented by the ERC using data in the public domain (Mugabushaka and Papazoglou 2012). These techniques have the potential to transform data capture and impact assessment (Jones and Grant 2013). Mugabushaka and Papazoglou (2012) acknowledge that it will take years to fully incorporate the impacts of ERC funding. For systems to be able to capture the full range of impacts, definitions and categories of impact need to be determined that can be incorporated into system development. To adequately capture interactions taking place between researchers, institutions, and stakeholders, the introduction of tools to enable this would be very valuable. If knowledge exchange events could be captured, for example, electronically as they occur, or automatically if flagged in an electronic calendar or diary, then far more of these events could be recorded with relative ease. Capturing knowledge exchange events would greatly assist the linking of research with impact.
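As a hedged illustration of the text-mining step described above, the snippet below clusters short project descriptions with TF-IDF and k-means using the scikit-learn library. It is a generic sketch of the technique, not the actual pipeline used by the Research Portfolio Online Reporting Tools, STAR Metrics, or the ERC; the project descriptions are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented descriptions standing in for grant or award abstracts.
projects = [
    "gene therapy trial for inherited retinal disease",
    "novel gene editing approach to retinal degeneration",
    "flood risk modelling for coastal planning policy",
    "hydrological models informing flood defence policy",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(projects)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # projects with similar wording land in the same cluster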

The transition to routine capture of impact data requires not only the development of tools and systems to help with implementation but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities.

What indicators, evidence, and impacts need to be captured within developing systems?

There is a great deal of interest in collating terms for impact and indicators of impact. Consortia for Advancing Standards in Research Administration Information, for example, has put together a data dictionary with the aim of setting the standards for terminology used to describe impact and indicators that can be incorporated into systems internationally, and seems to be building a certain momentum in this area. A variety of types of indicators can be captured within systems; however, it is important that these are universally understood. Here we address types of evidence that need to be captured to enable an overview of impact to be developed. In the majority of cases, a number of types of evidence will be required to provide an overview of impact.

7.1 Metrics

Metrics have commonly been used as a measure of impact, for example, in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. Any information on the context of the data will be valuable to understanding the degree to which impact has taken place.

SROI perhaps indicates the desire of some organizations to be able to demonstrate the monetary value of investment and impact. SROI aims to provide a valuation of the broader social, environmental, and economic impacts, providing a metric that can be used to demonstrate worth. This metric has been used within the charitable sector (Berg and Månsson 2011) and also features as evidence in the REF guidance for panel D (REF2014 2012). More details on SROI can be found in ‘A guide to Social Return on Investment’ produced by The SROI Network (2012).
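As a worked illustration of the idea behind SROI, the ratio is essentially the present value of the social value created divided by the value of the investment. The figures below are invented, and a real SROI analysis involves careful valuation of outcomes and adjustments for deadweight, displacement, and attribution that are omitted here.

def sroi_ratio(annual_social_value, years, discount_rate, investment):
    # Present value of the projected social value divided by the investment.
    present_value = sum(
        annual_social_value / (1 + discount_rate) ** year
        for year in range(1, years + 1)
    )
    return present_value / investment

# Invented example: 100,000 of social value a year for 5 years, discounted at
# 3.5%, against a 200,000 investment gives a ratio of roughly 2.26:1.
print(round(sroi_ratio(100_000, 5, 0.035, 200_000), 2))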

Although metrics can provide evidence of quantitative changes or impacts from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The main risks associated with the use of standardized metrics are that:

the full impact will not be realized, as we focus on easily quantifiable indicators;

we will focus attention towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research;

they risk being monetized or converted into a lowest common denominator in an attempt to compare the cost of a new theatre against that of a hospital.

7.2 Narratives

Narratives can be used to describe impact; the use of narratives enables a story to be told and the impact to be placed in context and can make good use of qualitative information. They are often written with a reader from a particular stakeholder group in mind and will present a view of impact from a particular perspective. The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. Where narratives are used in conjunction with metrics, a complete picture of impact can be developed, again from a particular perspective but with the evidence available to corroborate the claims made. Table 1 summarizes some of the advantages and disadvantages of the case study approach.

Table 1. The advantages and disadvantages of the case study approach.

By allowing impact to be placed in context, we answer the ‘so what?’ question that can result from quantitative data analyses, but is there a risk that the full picture may not be presented to demonstrate impact in a positive light? Case studies are ideal for showcasing impact, but should they be used to critically evaluate impact?

7.3 Surveys and testimonies

One way in which change of opinion and user perceptions can be evidenced is by gathering of stakeholder and user testimonies or undertaking surveys. This might describe support for and development of research with end users, public engagement and evidence of knowledge exchange, or a demonstration of change in public opinion as a result of research. Collecting this type of evidence is time-consuming, and again, it can be difficult to gather the required evidence retrospectively when, for example, the appropriate user group might have dispersed.

The ability to record and log these types of data is important for enabling the path from research to impact to be established, and the development of systems that can capture them would be very valuable.

7.4 Citations (outside of academia) and documentation

Citations (outside of academia) and documentation can be used as evidence to demonstrate the use of research findings in developing new ideas and products, for example. This might include the citation of a piece of research in policy documents or reference to a piece of research within the media. A collation of several indicators of impact may be enough to make a convincing case that an impact has taken place. Even where we can evidence changes and benefits linked to our research, understanding the causal relationship may be difficult. Media coverage is a useful means of disseminating our research and ideas and may be considered alongside other evidence as contributing to, or as an indicator of, impact.

The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. The transfer of information electronically can be traced and reviewed to provide data on where and to whom research findings are going.

The understanding of the term impact varies considerably and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated.

While aspects of impact can be adequately interpreted using metrics, narratives, and other evidence, the mixed-method case study approach is an excellent means of pulling all available information, data, and evidence together, allowing a comprehensive summary of the impact within context. While the case study is a useful way of showcasing impact, its limitations must be understood if we are to use it for evaluation purposes. The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. It is time-intensive to both assimilate and review case studies, and we therefore need to ensure that the resources required for this type of evaluation are justified by the knowledge gained. The ability to write a persuasive, well-evidenced case study may itself influence the assessment of impact. Over the past year, a number of new posts have been created within universities, such as for writing impact case studies, and a number of companies now offer this as a contract service. A key concern here is that universities that can afford to employ either consultants or impact ‘administrators’ may generate the best case studies.

The development of tools and systems for assisting with impact evaluation would be very valuable. We suggest that developing systems that focus on recording impact information alone will not provide all that is required to link research to ensuing events and impacts; systems require the capacity to capture any interactions between researchers, the institution, and external stakeholders and to link these with research findings and outputs or interim impacts to provide a network of data. In designing systems and tools for collating data related to impact, it is important to consider who will populate the database and to ensure that the time and capability required for capturing the information are considered. Capturing data, interactions, and indicators as they emerge increases the chance of capturing all relevant information, and tools that enable researchers to capture much of this would be valuable. However, it must be remembered that, in the case of the UK REF, only impact based on research that has taken place within the institution submitting the case study is considered. It is therefore in an institution’s interest to have a process by which all the necessary information is captured to enable a story to be developed in the absence of a researcher who may have left the employment of the institution. Figure 2 demonstrates the information that systems will need to capture and link; a toy sketch of how such items might be linked follows the figure caption below.

Research findings including outputs (e.g., presentations and publications)

Communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc)

Feedback from stakeholders and communication summaries (e.g., testimonials and altmetrics)

Research developments (based on stakeholder input and discussions)

Outcomes (e.g., commercial and cultural, citations)

Impacts (changes, e.g., behavioural and economic)

Figure 2. Overview of the types of information that systems need to capture and link.
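The toy data model below sketches, under our own naming, how the items listed in Figure 2 might be linked so that a route from a research output, through interactions and outcomes, to an eventual impact can be traced. It illustrates the structure of such a network only; it does not describe any existing system such as Researchfish or the Research Outcomes System.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    kind: str       # "output", "interaction", "feedback", "outcome", or "impact"
    summary: str
    links: List[str] = field(default_factory=list)   # IDs of downstream nodes

pathway: Dict[str, Node] = {
    "out-1":  Node("output", "journal article on flood modelling", ["int-1"]),
    "int-1":  Node("interaction", "workshop with a local planning authority", ["outc-1"]),
    "outc-1": Node("outcome", "model adopted in planning guidance", ["imp-1"]),
    "imp-1":  Node("impact", "revised flood defence policy"),
}

def trace(pathway, start):
    # Walk the recorded links from a research output towards its impacts.
    visited, stack = [], [start]
    while stack:
        node_id = stack.pop()
        visited.append(node_id)
        stack.extend(pathway[node_id].links)
    return visited

print(trace(pathway, "out-1"))  # ['out-1', 'int-1', 'outc-1', 'imp-1']

Capturing interactions and interim outcomes as separate, linked records is what would allow the contribution of a piece of research to be reconstructed later, even after the original researcher has left the institution.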

Attempting to evaluate impact to justify expenditure, showcase our work, and inform future funding decisions will only prove to be a valuable use of time and resources if we can take measures to ensure that assessment attempts do not ultimately have a negative influence on the impact of our research. There are areas of basic research where the impacts are so far removed from the research, or are so impractical to demonstrate, that it might be prudent to accept the limitations of impact assessment and provide the potential for exclusion in appropriate circumstances.

This work was supported by Jisc [DIINN10].



Research Article

Assessing the impact of healthcare research: A systematic review of methodological frameworks

  • Samantha Cruz Rivera
  • Derek G. Kyte
  • Olalekan Lee Aiyegbusi
  • Thomas J. Keeley
  • Melanie J. Calvert

Affiliation: Centre for Patient Reported Outcomes Research, Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom

* E-mail: [email protected]

  • Published: August 9, 2017
  • https://doi.org/10.1371/journal.pmed.1002370


Background

Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix.

Methods and findings

Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) ‘primary research-related impact’, (2) ‘influence on policy making’, (3) ‘health and health systems impact’, (4) ‘health-related and societal impact’, and (5) ‘broader economic impact’. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched.

Conclusions

The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research.

Author summary

Why was this study done?

  • There is a growing interest in demonstrating the impact of research in order to minimise research waste, allocate resources efficiently, and maximise the benefit of research. However, there is no consensus on which is the most appropriate tool to measure the impact of research.
  • To our knowledge, this review is the first to synthesise existing methodological frameworks for healthcare research impact, and the associated impact metrics by which various authors have proposed impact should be measured, into a unified matrix.

What did the researchers do and find?

  • We conducted a systematic review identifying 24 existing methodological research impact frameworks.
  • We scrutinised the sample, identifying and summarising 5 proposed impact categories, 16 impact subcategories, and over 80 metrics into an impact matrix and methodological framework.

What do these findings mean?

  • This simplified consolidated methodological framework will help researchers to understand how a research study may give rise to differing forms of impact, as well as in what ways and at which time points these potential impacts might be measured.
  • Incorporating these insights into the design of a study could enhance impact, optimizing the use of research resources.

Citation: Cruz Rivera S, Kyte DG, Aiyegbusi OL, Keeley TJ, Calvert MJ (2017) Assessing the impact of healthcare research: A systematic review of methodological frameworks. PLoS Med 14(8): e1002370. https://doi.org/10.1371/journal.pmed.1002370

Academic Editor: Mike Clarke, Queens University Belfast, UNITED KINGDOM

Received: February 28, 2017; Accepted: July 7, 2017; Published: August 9, 2017

Copyright: © 2017 Cruz Rivera et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the paper and supporting files.

Funding: Funding was received from Consejo Nacional de Ciencia y Tecnología (CONACYT). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript ( http://www.conacyt.mx/ ).

Competing interests: I have read the journal's policy and the authors of this manuscript have the following competing interests: MJC has received consultancy fees from Astellas and Ferring pharma and travel fees from the European Society of Cardiology outside the submitted work. TJK is in full-time paid employment for PAREXEL International.

Abbreviations: AIHS, Alberta Innovates—Health Solutions; CAHS, Canadian Academy of Health Sciences; CIHR, Canadian Institutes of Health Research; CINAHL+, Cumulative Index to Nursing and Allied Health Literature; EMBASE, Excerpta Medica Database; ERA, Excellence in Research for Australia; HEFCE, Higher Education Funding Council for England; HMIC, Health Management Information Consortium; HTA, Health Technology Assessment; IOM, Impact Oriented Monitoring; MDG, Millennium Development Goal; NHS, National Health Service; MEDLINE, Medical Literature Analysis and Retrieval System Online; PHC RIS, Primary Health Care Research & Information Service; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; PROM, patient-reported outcome measures; QALY, quality-adjusted life year; R&D, research and development; RAE, Research Assessment Exercise; REF, Research Excellence Framework; RIF, Research Impact Framework; RQF, Research Quality Framework; SDG, Sustainable Development Goal; SIAMPI, Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions between science and society

Introduction

In 2010, approximately US$240 billion was invested in healthcare research worldwide [ 1 ]. Such research is utilised by policy makers, healthcare providers, and clinicians to make important evidence-based decisions aimed at maximising patient benefit, whilst ensuring that limited healthcare resources are used as efficiently as possible to facilitate effective and sustainable service delivery. It is therefore essential that this research is of high quality and that it is impactful—i.e., it delivers demonstrable benefits to society and the wider economy whilst minimising research waste [ 1 , 2 ]. Research impact can be defined as ‘any identifiable benefit to, or positive influence on, the economy, society, public policy or services, health, the environment, quality of life, or academia’ (p. 26) [ 3 ].

There are many purported benefits associated with the measurement of research impact, including the ability to (1) assess the quality of the research and its subsequent benefits to society; (2) inform and influence optimal policy and funding allocation; (3) demonstrate accountability, the value of research in terms of efficiency and effectiveness to the government, stakeholders, and society; and (4) maximise impact through better understanding the concept and pathways to impact [ 4 – 7 ].

Measuring and monitoring the impact of healthcare research has become increasingly common in the United Kingdom [ 5 ], Australia [ 5 ], and Canada [ 8 ], as governments, organisations, and higher education institutions seek a framework to allocate funds to projects that are more likely to bring the most benefit to society and the economy [ 5 ]. For example, in the UK, the 2014 Research Excellence Framework (REF) has recently been used to assess the quality and impact of research in higher education institutions, through the assessment of impact case studies and selected qualitative impact metrics [ 9 ]. This is the first initiative to allocate research funding based on the economic, societal, and cultural impact of research, although it should be noted that research impact only drives a proportion of this allocation (approximately 20%) [ 9 ].

In the UK REF, the measurement of research impact is seen as increasingly important. However, the impact element of the REF has been criticised in some quarters [ 10 , 11 ]. Critics deride the fact that REF impact is determined in a relatively simplistic way, utilising researcher-generated case studies, which commonly attempt to link a particular research outcome to an associated policy or health improvement despite the fact that the wider literature highlights great diversity in the way research impact may be demonstrated [ 12 , 13 ]. This led to the current debate about the optimal method of measuring impact in the future REF [ 10 , 14 ]. The Stern review suggested that research impact should not only focus on socioeconomic impact but should also include impact on government policy, public engagement, academic impacts outside the field, and teaching to showcase interdisciplinary collaborative impact [ 10 , 11 ]. The Higher Education Funding Council for England (HEFCE) has recently set out the proposals for the REF 2021 exercise, confirming that the measurement of such impact will continue to form an important part of the process [ 15 ].

With increasing pressure for healthcare research to lead to demonstrable health, economic, and societal impact, there is a need for researchers to understand existing methodological impact frameworks and the means by which impact may be quantified (i.e., impact metrics; see Box 1 , 'Definitions’) to better inform research activities and funding decisions. From a researcher’s perspective, understanding the optimal pathways to impact can help inform study design aimed at maximising the impact of the project. At the same time, funders need to understand which aspects of impact they should focus on when allocating awards so they can make the most of their investment and bring the greatest benefit to patients and society [ 2 , 4 , 5 , 16 , 17 ].

Box 1. Definitions

  • Research impact: ‘any identifiable benefit to, or positive influence on, the economy, society, public policy or services, health, the environment, quality of life, or academia’ (p. 26) [ 3 ].
  • Methodological framework: ‘a body of methods, rules and postulates employed by a particular procedure or set of procedures (i.e., framework characteristics and development)’ [ 18 ].
  • Pathway: ‘a way of achieving a specified result; a course of action’ [ 19 ].
  • Quantitative metrics: ‘a system or standard of [quantitative] measurement’ [ 20 ].
  • Narrative metrics: ‘a spoken or written account of connected events; a story’ [ 21 ].

Whilst previous researchers have summarised existing methodological frameworks and impact case studies [ 4 , 22 – 27 ], they have not summarised the metrics for use by researchers, funders, and policy makers. The aim of this review was therefore to (1) identify the methodological frameworks used to measure healthcare research impact using systematic methods, (2) summarise common impact themes and metrics in an impact matrix, and (3) provide a simplified consolidated resource for use by funders, researchers, and policy makers.

Search strategy and selection criteria

Initially, a search strategy was developed to identify the available literature on the different methods used to measure research impact. The keywords ‘Impact’, ‘Framework’, and ‘Research’, together with their synonyms, were used to search the Medical Literature Analysis and Retrieval System Online (MEDLINE; Ovid) database, the Excerpta Medica Database (EMBASE), the Health Management Information Consortium (HMIC) database, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL+) database (inception to May 2017; see S1 Appendix for the full search strategy). Additionally, the nonindexed Journal of Research Evaluation was hand searched over the same timeframe using the keyword ‘Impact’. Other relevant articles were identified through 3 Internet search engines (Google, Google Scholar, and Google Images) using the keywords ‘Impact’, ‘Framework’, and ‘Research’, with the first 50 results screened. Google Images was included because methodological frameworks are often summarised in a single image and can therefore be identified readily through this search engine. Finally, additional publications were sought through communication with experts.
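To make the Boolean structure of such a strategy concrete, the sketch below shows how synonym blocks might be combined, with OR within a concept and AND across concepts. This is an illustration only: the synonym lists are hypothetical placeholders, and the exact terms used in the review are those given in S1 Appendix.

```python
# Illustrative sketch of the Boolean search logic described above.
# The synonym lists are hypothetical placeholders, not the actual terms.

keyword_blocks = {
    "Impact": ["impact", "benefit*", "payback"],
    "Framework": ["framework", "model", "method*"],
    "Research": ["research", "study", "studies"],
}

def build_query(blocks):
    """OR the synonyms within each concept block, then AND the blocks."""
    grouped = ["(" + " OR ".join(terms) + ")" for terms in blocks.values()]
    return " AND ".join(grouped)

print(build_query(keyword_blocks))
# (impact OR benefit* OR payback) AND (framework OR model OR method*) AND (research OR study OR studies)
```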

Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (see S1 PRISMA Checklist ), 2 independent investigators systematically screened for publications describing, evaluating, or utilising a methodological research impact framework within the context of healthcare research [ 28 ]. Papers were eligible if they included full or partial methodological frameworks or pathways to research impact; both primary research and systematic reviews fitting these criteria were included. We included any methodological framework identified (original or modified versions) at the point of first occurrence. In addition, methodological frameworks were included if they were applicable to the healthcare discipline without requiring modification of their structure. We defined ‘methodological framework’ as ‘a body of methods, rules and postulates employed by a particular procedure or set of procedures (i.e., framework characteristics and development)’ [ 18 ], whereas we defined ‘pathway’ as ‘a way of achieving a specified result; a course of action’ [ 19 ]. Studies were excluded if they presented an existing (unmodified) methodological framework previously available elsewhere, did not explicitly describe a methodological framework but rather focused on a single metric (e.g., bibliometric analysis), focused on the impact or effectiveness of interventions rather than that of the research, or presented case study data only. There were no language restrictions.
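Expressed as a decision procedure, the eligibility rules above might be encoded roughly as follows. The field names are hypothetical stand-ins for the judgements reviewers made about each record, not part of the published protocol.

```python
def is_eligible(record):
    """Apply the stated inclusion/exclusion rules to one screened record.

    `record` is a dict of reviewer judgements; the keys are hypothetical
    labels for the criteria described in the text.
    """
    # Inclusion: a full or partial methodological framework or pathway,
    # applicable to healthcare research.
    if not record.get("describes_framework_or_pathway"):
        return False
    if not record.get("applicable_to_healthcare"):
        return False
    # Exclusions listed in the text.
    if record.get("existing_framework_unmodified"):
        return False        # already available elsewhere, unchanged
    if record.get("single_metric_only"):
        return False        # e.g., a bibliometric analysis alone
    if record.get("intervention_impact_only"):
        return False        # impact of interventions, not of the research
    if record.get("case_study_data_only"):
        return False
    return True             # note: no language restrictions were applied
```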

Data screening

Records were downloaded into Endnote (version X7.3.1), and duplicates were removed. Two independent investigators (SCR and OLA) conducted all screening following a pilot aimed at refining the process. The records were screened by title and abstract before full-text articles of potentially eligible publications were retrieved for evaluation. A full-text screening identified the publications included for data extraction. Discrepancies were resolved through discussion, with the involvement of a third reviewer (MJC, DGK, and TJK) when necessary.
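As a workflow, the dual-reviewer, two-stage screening with third-reviewer arbitration could be sketched along the following lines; this is an illustration of the process described, not the actual tooling used (records were managed in Endnote).

```python
def dual_screen(records, reviewer_a, reviewer_b, arbiter):
    """Two-stage screening: title/abstract, then full text.

    Each reviewer callable returns True (keep) or False (exclude) for a
    record at a given stage; disagreements are settled by the arbiter,
    mirroring the involvement of a third reviewer.
    """
    for stage in ("title_abstract", "full_text"):
        kept = []
        for rec in records:
            a, b = reviewer_a(rec, stage), reviewer_b(rec, stage)
            decision = a if a == b else arbiter(rec, stage)
            if decision:
                kept.append(rec)
        records = kept
    return records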

Data extraction and analysis

Data extraction occurred after the final selection of included articles. SCR and OLA independently extracted details of impact methodological frameworks, the country of origin, and the year of publication, as well as the source, the framework description, and the methodology used to develop the framework. Information regarding the methodology used to develop each methodological framework was also extracted from framework webpages where available. Investigators also extracted details regarding each framework’s impact categories and subgroups, along with their proposed time to impact (‘short-term’, ‘mid-term’, or ‘long-term’) and the details of any metrics that had been proposed to measure impact, which are depicted in an impact matrix. The structure of the matrix was informed by the work of M. Buxton and S. Hanney [ 2 ], P. Buykx et al. [ 5 ], S. Kuruvila et al. [ 29 ], and A. Weiss [ 30 ], with the intention of mapping metrics presented in previous methodological frameworks in a concise way. A consensus meeting with MJC, DGK, and TJK was held to resolve disagreements and finalise the data extraction process.
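The fields extracted above map naturally onto a simple tabular structure from which an impact matrix (impact category and subgroup against framework, with proposed metrics and time horizons) can be assembled. The sketch below is illustrative only; the framework name and metric values shown are invented, not extracted data.

```python
from collections import defaultdict

# One record per framework, mirroring the items extracted above.
# The example values are invented for illustration.
extraction_records = [
    {
        "framework": "Example framework",
        "country": "UK",
        "year": 2010,
        "category": "Primary research-related impact",
        "subgroup": "Research and innovation outcomes",
        "time_to_impact": "short-term",
        "metrics": ["number of publications", "citation rates"],
    },
]

# Impact matrix: (category, subgroup) -> {framework: proposed metrics}
impact_matrix = defaultdict(dict)
for rec in extraction_records:
    key = (rec["category"], rec["subgroup"])
    impact_matrix[key][rec["framework"]] = rec["metrics"]
```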

Included studies

Our original search strategy identified 359 citations from MEDLINE (Ovid), EMBASE, CINAHL+, HMIC, and the Journal of Research Evaluation, and 101 citations were returned using other sources (Google, Google Images, Google Scholar, and expert communication) (see Fig 1) [ 28 ]. In total, we retrieved 54 full-text articles for review. At this stage, 39 articles were excluded, as they did not propose new or modified methodological frameworks. An additional 15 articles were included following backward and forward citation searching. A total of 31 relevant articles were included in the final analysis, of which 24 were articles presenting unique frameworks and the remaining 7 were systematic reviews [ 4 , 22 – 27 ]. The search strategy was rerun on 15 May 2017. A further 19 publications were screened, and 2 were taken forward to full-text screening but were ineligible for inclusion.

Fig 1. https://doi.org/10.1371/journal.pmed.1002370.g001

Methodological framework characteristics

The characteristics of the 24 included methodological frameworks are summarised in Table 1, ‘Methodological framework characteristics’. Fourteen publications proposed academic-orientated frameworks, which focused on measuring academic, societal, economic, and cultural impact using narrative and quantitative metrics [ 2 , 3 , 5 , 8 , 29 , 31 – 39 ]. Five publications assessed the impact of research by focusing on the interaction process between stakeholders and researchers (‘productive interactions’), which is a requirement for achieving research impact; this approach seeks to address the difficulty of attributing research impact to particular metrics [ 7 , 40 – 43 ]. Two frameworks focused on the importance of partnerships between researchers and policy makers as a core element in accomplishing research impact [ 44 , 45 ]. An additional 2 frameworks focused on evaluating the pathways to impact, i.e., the linking processes between research and impact [ 30 , 46 ]. One framework assessed the ability of health technology to influence the efficiency of healthcare systems [ 47 ]. Eight frameworks were developed in the UK [ 2 , 3 , 29 , 37 , 39 , 42 , 43 , 45 ], 6 in Canada [ 8 , 33 , 34 , 44 , 46 , 47 ], 4 in Australia [ 5 , 31 , 35 , 38 ], 3 in the Netherlands [ 7 , 40 , 41 ], and 2 in the United States [ 30 , 36 ], with 1 model developed with input from various countries [ 32 ].

Table 1. Methodological framework characteristics. https://doi.org/10.1371/journal.pmed.1002370.t001

Methodological framework development

The included methodological frameworks varied in their development process, but there were some common approaches employed. Most included a literature review [ 2 , 5 , 7 , 8 , 31 , 33 , 36 , 37 , 40 – 46 ], although none of them used a recognised systematic method. Most also consulted with various stakeholders [ 3 , 8 , 29 , 31 , 33 , 35 – 38 , 43 , 44 , 46 , 47 ] but used differing methods to incorporate their views, including quantitative surveys [ 32 , 35 , 43 , 46 ], face-to-face interviews [ 7 , 29 , 33 , 35 , 37 , 42 , 43 ], telephone interviews [ 31 , 46 ], consultation [ 3 , 7 , 36 ], and focus groups [ 39 , 43 ]. A range of stakeholder groups were approached across the sample, including principal investigators [ 7 , 29 , 43 ], research end users [ 7 , 42 , 43 ], academics [ 3 , 8 , 39 , 40 , 43 , 46 ], award holders [ 43 ], experts [ 33 , 38 , 39 ], sponsors [ 33 , 39 ], project coordinators [ 32 , 42 ], and chief investigators [ 31 , 35 ]. However, some authors failed to identify the stakeholders involved in the development of their frameworks [ 2 , 5 , 34 , 41 , 45 ], making it difficult to assess their appropriateness. In addition, only 4 of the included papers reported using formal analytic methods to interpret stakeholder responses. These included the Canadian Academy of Health Sciences framework, which used conceptual cluster analysis [ 33 ], and the Research Contribution [ 42 ], Research Impact [ 29 ], and Primary Health Care Research & Information Service [ 31 ] frameworks, which used a thematic analysis approach. Finally, some authors went on to pilot their frameworks, with the results informing refinements until the framework was finalised. Methods used to pilot the frameworks included a case study approach [ 2 , 3 , 30 , 32 , 33 , 36 , 40 , 42 , 44 , 45 ], contrasting results against the available literature [ 29 ], the use of stakeholders’ feedback [ 7 ], and assessment tools [ 35 , 46 ].

Major impact categories

1. Primary research-related impact.

A number of methodological frameworks advocated the evaluation of ‘research-related impact’. This encompassed content related to the generation of new knowledge, knowledge dissemination, capacity building, training, leadership, and the development of research networks. These outcomes were considered the direct or primary impacts of a research project, as these are often the first evidenced returns [ 30 , 62 ].

A number of subgroups were identified within this category, with frameworks supporting the collection of impact data across the following constructs: ‘research and innovation outcomes’; ‘dissemination and knowledge transfer’; ‘capacity building, training, and leadership’; and ‘academic collaborations, research networks, and data sharing’.

1 . 1 . Research and innovation outcomes . Twenty of the 24 frameworks advocated the evaluation of ‘research and innovation outcomes’ [ 2 , 3 , 5 , 7 , 8 , 29 – 39 , 41 , 43 , 44 , 46 ]. This subgroup included the following metrics: number of publications; number of peer-reviewed articles (including journal impact factor); citation rates; requests for reprints; number of reviews and meta-analyses; and new products or changes to existing products (interventions or technology), patents, and research. Additionally, some frameworks also sought to gather information regarding ‘methods/methodological contributions’. These advocated the collection of systematic reviews and appraisals in order to identify gaps in knowledge and determine whether the knowledge generated had been assessed before being put into practice [ 29 ].

1 . 2 . Dissemination and knowledge transfer . Nineteen of the 24 frameworks advocated the assessment of ‘dissemination and knowledge transfer’ [ 2 , 3 , 5 , 7 , 29 – 32 , 34 – 43 , 46 ]. This comprised collection of the following information: number of conferences, seminars, workshops, and presentations; teaching output (i.e., number of lectures given to disseminate the research findings); number of reads for published articles; article download rate and number of journal webpage visits; and citation rates in nonjournal media such as newspapers and mass and social media (e.g., Twitter and blogs). Furthermore, this impact subgroup considered the measurement of research uptake and translatability and the adoption of research findings in technological and clinical applications and by different fields. These can be measured through patents, clinical trials, and partnerships between industry and business, government and nongovernmental organisations, and university research units and researchers [ 29 ].

1 . 3 . Capacity building , training , and leadership . Fourteen of 24 frameworks suggested the evaluation of ‘capacity building, training, and leadership’ [ 2 , 3 , 5 , 8 , 29 , 31 – 35 , 39 – 41 , 43 ]. This involved collecting information regarding the number of doctoral and postdoctoral studentships (including those generated as a result of the research findings and those appointed to conduct the research), as well as the number of researchers and research-related staff involved in the research projects. In addition, authors advocated the collection of ‘leadership’ metrics, including the number of research projects managed and coordinated and the membership of boards and funding bodies, journal editorial boards, and advisory committees [ 29 ]. Additional metrics in this category included public recognition (number of fellowships and awards for significant research achievements), academic career advancement, and subsequent grants received. Lastly, the impact metric ‘research system management’ comprised the collection of information that can lead to preserving the health of the population, such as modifying research priorities, resource allocation strategies, and linking health research to other disciplines to maximise benefits [ 29 ].

1 . 4 . Academic collaborations , research networks , and data sharing . Lastly, 10 of the 24 frameworks advocated the collection of impact data regarding ‘academic collaborations (internal and external collaborations to complete a research project), research networks, and data sharing’ [ 2 , 3 , 5 , 7 , 29 , 34 , 37 , 39 , 41 , 43 ].

2. Influence on policy making.

Methodological frameworks addressing this major impact category focused on measurable improvements within a given knowledge base and on interactions between academics and policy makers, which may influence the development and implementation of policy. The returns generated in this impact category are generally considered intermediate or midterm (1 to 3 years). These represent an important interim stage in the process towards the final expected impacts, such as quantifiable health improvements and economic benefits, without which policy change may not occur [ 30 , 62 ]. The following impact subgroups were identified within this category: ‘type and nature of policy impact’, ‘level of policy making’, and ‘policy networks’.

2 . 1 . Type and nature of policy impact . The most common impact subgroup, mentioned in 18 of the 24 frameworks, was ‘type and nature of policy impact’ [ 2 , 7 , 29 – 38 , 41 – 43 , 45 – 47 ]. Methodological frameworks addressing this subgroup stressed the importance of collecting information regarding the influence of research on policy (i.e., changes in practice or terminology). For instance, a 2003 study of trafficked adolescents and women influenced the WHO ethical and safety guidelines (2003) for interviewing this particular group [ 17 , 21 , 63 ].

2 . 2 . Level of policy impact . Thirteen of 24 frameworks addressed aspects surrounding the need to record the ‘level of policy impact’ (international, national, or local) and the organisations within a level that were influenced (local policy makers, clinical commissioning groups, and health and wellbeing trusts) [ 2 , 5 , 8 , 29 , 31 , 34 , 38 , 41 , 43 – 47 ]. Authors considered it important to measure the ‘level of policy impact’ to provide evidence of collaboration, coordination, and efficiency within health organisations and between researchers and health organisations [ 29 , 31 ].

2 . 3 . Policy networks . Five methodological frameworks highlighted the need to collect information regarding collaborative research with industry and staff movement between academia and industry [ 5 , 7 , 29 , 41 , 43 ]. A policy network emphasises the relationship between policy communities, researchers, and policy makers. This relationship can influence and lead to incremental changes in policy processes [ 62 ].

3. Health and health systems impact.

A number of methodological frameworks advocated the measurement of impacts on health and healthcare systems across the following impact subgroups: ‘quality of care and service delivery’, ‘evidence-based practice’, ‘improved information and health information management’, ‘cost containment and cost-effectiveness’, ‘resource allocation’, and ‘health workforce’.

3 . 1 . Quality of care and service delivery . Twelve of the 24 frameworks highlighted the importance of evaluating ‘quality of care and service delivery’ [ 2 , 5 , 8 , 29 – 31 , 33 – 36 , 41 , 47 ]. There were a number of suggested metrics that could be potentially used for this purpose, including health outcomes such as quality-adjusted life years (QALYs), patient-reported outcome measures (PROMs), patient satisfaction and experience surveys, and qualitative data on waiting times and service accessibility.

3 . 2 . Evidence-based practice . ‘Evidence-based practice’, mentioned in 5 of the 24 frameworks, refers to making changes in clinical diagnosis, clinical practice, treatment decisions, or decision making based on research evidence [ 5 , 8 , 29 , 31 , 33 ]. The suggested metrics to demonstrate evidence-based practice were adoption of health technologies and research outcomes to improve the healthcare systems and inform policies and guidelines [ 29 ].

3 . 3 . Improved information and health information management . This impact subcategory, mentioned in 5 of the 24 frameworks, refers to the influence of research on the provision of health services and management of the health system to prevent additional costs [ 5 , 29 , 33 , 34 , 38 ]. Methodological frameworks advocated the collection of health system financial, nonfinancial (i.e., transport and sociopolitical implications), and insurance information in order to determine constraints within a health system.

3 . 4 . Cost containment and cost-effectiveness . Six of the 24 frameworks advocated the subcategory ‘cost containment and cost-effectiveness’ [ 2 , 5 , 8 , 17 , 33 , 36 ]. ‘Cost containment’ comprised the collection of information regarding how research has influenced the provision and management of health services and its implications for healthcare resource allocation and use [ 29 ]. ‘Cost-effectiveness’ refers to information concerning economic evaluations to assess improvements in effectiveness and health outcomes—for instance, the cost-effectiveness (cost and health outcome benefits) assessment of introducing a new health technology to replace an older one [ 29 , 31 , 64 ].
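For readers less familiar with the economic-evaluation quantities behind this subgroup, the standard QALY and incremental cost-effectiveness calculations can be sketched as below. These are textbook definitions rather than anything prescribed by the included frameworks, and the figures are invented.

```python
def qalys(utility_by_year):
    """QALYs: sum of health-state utility weights (0 to 1) over the years lived.
    Discounting, often applied in practice, is omitted for simplicity."""
    return sum(utility_by_year)

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    when a new technology replaces an older one."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Invented figures for illustration only.
old_tech = {"cost": 10_000, "qalys": qalys([0.5] * 5)}    # 2.5 QALYs
new_tech = {"cost": 18_000, "qalys": qalys([0.75] * 5)}   # 3.75 QALYs
print(icer(new_tech["cost"], new_tech["qalys"],
           old_tech["cost"], old_tech["qalys"]))           # 6400.0 per QALY gained
```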

3 . 5 . Resource allocation . ‘Resource allocation’, mentioned in 6 frameworks, can be measured through 2 impact metrics: new funding attributed to the intervention in question and equity while allocating resources, such as improved allocation of resources at an area level; better targeting, accessibility, and utilisation; and coverage of health services [ 2 , 5 , 29 , 31 , 45 , 47 ]. The allocation of resources and targeting can be measured through health services research reports, with the utilisation of health services measured by the probability of providing an intervention when needed, the probability of requiring it again in the future, and the probability of receiving an intervention based on previous experience [ 29 , 31 ].

3 . 6 . Health workforce . Lastly, ‘health workforce’, present in 3 methodological frameworks, refers to the reduction in the days of work lost because of a particular illness [ 2 , 5 , 31 ].

4. Health-related and societal impact.

Three subgroups were included in this category: ‘health literacy’; ‘health knowledge, attitudes, and behaviours’; and ‘improved social equity, inclusion, or cohesion’.

4 . 1 . Health knowledge , attitudes , and behaviours . Eight of the 24 frameworks suggested the assessment of ‘health knowledge, attitudes, behaviours, and outcomes’, which could be measured through the evaluation of levels of public engagement with science and research (e.g., National Health Service (NHS) Choices end-user visit rate) or by using focus groups to analyse changes in knowledge, attitudes, and behaviour among society [ 2 , 5 , 29 , 33 – 35 , 38 , 43 ].

4 . 2 . Improved equity , inclusion , or cohesion and human rights . Other methodological frameworks, 4 of the 24, suggested capturing improvements in equity, inclusion, or cohesion and human rights. Authors suggested these could be assessed using a resource such as the United Nations Millennium Development Goals (MDGs) (superseded by the Sustainable Development Goals [SDGs] in 2015) and human rights [ 29 , 33 , 34 , 38 ]. For instance, a cluster-randomised controlled trial in Nepal involving female participants demonstrated a reduction in neonatal mortality through the introduction of maternity health care, the distribution of delivery kits, and home visits, illustrating how research can target vulnerable and disadvantaged groups. This research was subsequently taken up by the World Health Organisation to support the MDG ‘improve maternal health’ [ 16 , 29 , 65 ].

4 . 3 . Health literacy . Some methodological frameworks, 3 of the 24, focused on tracking changes in the ability of patients to make informed healthcare decisions, reduce health risks, and improve quality of life, which were demonstrably linked to a particular programme of research [ 5 , 29 , 43 ]. For example, a systematic review showed that when HIV health literacy/knowledge is spread among people living with the condition, antiretroviral adherence and quality of life improve [ 66 ].

5. Broader economic impacts.

Some methodological frameworks, 9 of 24, included aspects related to the broader economic impacts of health research—for example, the economic benefits emerging from the commercialisation of research outputs [ 2 , 5 , 29 , 31 , 33 , 35 , 36 , 38 , 67 ]. Suggested metrics included the amount of funding for research and development (R&D) that was competitively awarded by the NHS, medical charities, and overseas companies. Additional metrics were income from intellectual property, spillover effects (any secondary benefit gained as a repercussion of investing directly in a primary activity, i.e., the social and economic returns of investing in R&D) [ 33 ], patents granted, licences awarded and brought to the market, the development and sales of spinout companies, research contracts, and income from industry.

The benefits contained within the categories ‘health and health systems impact’, ‘health-related and societal impact’, and ‘broader economic impacts’ are considered the expected and final returns of the resources allocated in healthcare research [ 30 , 62 ]. These benefits commonly arise in the long term, beyond 5 years according to some authors, but there was a recognition that this could differ depending on the project and its associated research area [ 4 ].

Data synthesis

Five major impact categories were identified across the 24 included methodological frameworks: (1) ‘primary research-related impact’, (2) ‘influence on policy making’, (3) ‘health and health systems impact’, (4) ‘health-related and societal impact’, and (5) ‘broader economic impact’. These major impact categories were further subdivided into 16 impact subgroups. The included publications proposed 80 different metrics to measure research impact. This impact typology synthesis is depicted in ‘the impact matrix’ ( Fig 2 and Fig 3 ).

Fig 2. The impact matrix. CIHR, Canadian Institutes of Health Research; HTA, Health Technology Assessment; PHC RIS, Primary Health Care Research & Information Service; RAE, Research Assessment Exercise; RQF, Research Quality Framework. https://doi.org/10.1371/journal.pmed.1002370.g002

Fig 3. The impact matrix (continued). AIHS, Alberta Innovates—Health Solutions; CAHS, Canadian Academy of Health Sciences; IOM, Impact Oriented Monitoring; REF, Research Excellence Framework; SIAMPI, Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions between science and society. https://doi.org/10.1371/journal.pmed.1002370.g003

Commonality and differences across frameworks

The ‘Research Impact Framework’ and the ‘Health Services Research Impact Framework’ were the models that encompassed the largest number of the metrics extracted. The most dominant methodological framework was the Payback Framework; 7 other methodological frameworks used the Payback Framework as a starting point for their development [ 8 , 29 , 31 – 35 ]. Additional methodological frameworks that were commonly incorporated into other tools included the CIHR framework, the CAHS model, the AIHS framework, and the Exchange model [ 8 , 33 , 34 , 44 ]. The capture of ‘research-related impact’ was the most widely advocated concept across methodological frameworks, illustrating the importance with which primary short-term impact outcomes were viewed by the included papers. Thus, measurement of impact via the number of publications, citations, and peer-reviewed articles was the most common. ‘Influence on policy making’ was the predominant midterm impact category, specifically the subgroup ‘type and nature of policy impact’, in which frameworks advocated the measurement of (i) changes to legislation, regulations, and government policy; (ii) influence and involvement in decision-making processes; and (iii) changes to clinical or healthcare training, practice, or guidelines. Within more long-term impact measurement, evaluation of changes in the ‘quality of care and service delivery’ was commonly advocated.

In light of the commonalities and differences among the methodological frameworks, and informed by the data extraction and the construction of the impact matrix, the ‘pathways to research impact’ diagram ( Fig 4 ) was developed to provide researchers, funders, and policy makers with a more comprehensive and exhaustive way to measure healthcare research impact. The diagram collates all the impact metrics proposed by the 24 included frameworks and groups them into impact subgroups and broader impact categories. Prospectively, this global picture should help researchers, funders, and policy makers plan strategies to achieve multiple pathways to impact before carrying the research out.

Fig 4. Pathways to research impact. NHS, National Health Service; PROM, patient-reported outcome measure; QALY, quality-adjusted life year; R&D, research and development. https://doi.org/10.1371/journal.pmed.1002370.g004

This review has summarised existing methodological impact frameworks together for the first time using systematic methods ( Fig 4 ). It allows researchers and funders to consider pathways to impact at the design stage of a study and to understand the elements and metrics that need to be considered to facilitate prospective assessment of impact. Users do not necessarily need to cover all the aspects of the methodological framework, as every research project can impact on different categories and subgroups. This review provides information that can assist researchers to better demonstrate impact, potentially increasing the likelihood of conducting impactful research and reducing research waste. Existing reviews have not presented a methodological framework that includes different pathways to impact, health impact categories, subgroups, and metrics in a single methodological framework.

Academic-orientated frameworks included in this review advocated the measurement of impact predominantly using so-called ‘quantitative’ metrics—for example, the number of peer-reviewed articles, journal impact factor, and citation rates. This may be because they are well-established measures, relatively easy to capture and objective, and are supported by research funding systems. However, these metrics primarily measure the dissemination of research findings rather than their impact [ 30 , 68 ]. Whilst it is true that wider dissemination, especially when delivered via world-leading international journals, may well lead eventually to changes in healthcare, this is by no means certain. For instance, case studies evaluated by Flinders University of Australia demonstrated that some research projects with non-peer-reviewed publications led to significant changes in health policy, whilst the studies with peer-reviewed publications did not result in any type of impact [ 68 ]. As a result, contemporary literature has tended to advocate the collection of information regarding a variety of different potential forms of impact alongside publication/citation metrics [ 2 , 3 , 5 , 7 , 8 , 29 – 47 ], as outlined in this review.

The 2014 REF exercise adjusted UK university research funding allocation based on evidence of the wider impact of research (through narrative case studies and quantitative metrics), rather than simply according to the quality of research [ 12 ]. The intention was to ensure funds were directed to high-quality research that could demonstrate actual realised benefit. The inclusion of a mixed-method approach to the measurement of impact in the REF (narrative and quantitative metrics) reflects a widespread belief—expressed by the majority of authors of the included methodological frameworks in the review—that individual quantitative impact metrics (e.g., number of citations and publications) do not necessarily capture the complexity of the relationships involved in a research project and may exclude measurement of specific aspects of the research pathway [ 10 , 12 ].

Many of the frameworks included in this review advocated the collection of a range of academic, societal, economic, and cultural impact metrics; this is consistent with recent recommendations from the Stern review [ 10 ]. However, a number of these metrics encounter research ‘lag’: i.e., the time between the point at which the research is conducted and when the actual benefits arise [ 69 ]. For instance, some cardiovascular research has taken up to 25 years to generate impact [ 70 ]. Likewise, the impact may not arise exclusively from a single piece of research. Different processes (such as networking interactions and knowledge and research translation) and multiple individuals and organisations are often involved [ 4 , 71 ]. Therefore, attributing the contribution made by each of the different actors involved in the process can be a challenge [ 4 ]. An additional problem associated with attribution is the lack of evidence linking research and impact. The outcomes of research may emerge slowly and be absorbed gradually. Consequently, it is difficult to determine the influence of research on the development of a new policy, practice, or guideline [ 4 , 23 ].

A further problem is that impact evaluation is conducted ‘ex post’, after the research has concluded. Collecting information retrospectively can be an issue, as the data required might not be available. ‘Ex ante’ assessment is vital for funding allocation, as it is necessary to determine the potential forthcoming impact before the research is carried out [ 69 ]. Additionally, ex ante evaluation of potential benefit can overcome the issues regarding identifying and capturing evidence, which can be used in the future [ 4 ]. In order to conduct ex ante evaluation of potential benefit, some authors suggest the early involvement of policy makers in a research project, coupled with a well-designed dissemination strategy [ 40 , 69 ].

Providing an alternate view, the authors of methodological frameworks such as the SIAMPI, Contribution Mapping, Research Contribution, and the Exchange model suggest that the problems of attribution are a consequence of assigning the impact of research to a particular impact metric [ 7 , 40 , 42 , 44 ]. To address these issues, these authors propose focusing on the contribution of research by assessing the processes and interactions between stakeholders and researchers, which arguably take into consideration all the processes and actors involved in a research project [ 7 , 40 , 42 , 43 ]. Additionally, contribution approaches highlight the importance of interactions between stakeholders and researchers from an early stage in the research process, supporting both ex ante and ex post evaluation by setting expected impacts and by determining how research outcomes have been utilised, respectively [ 7 , 40 , 42 , 43 ]. However, contribution metrics are generally harder to measure than academic-orientated indicators [ 72 ].

Currently, there is a debate surrounding the optimal methodological impact framework, and no tool has proven superior to another. The most appropriate methodological framework for a given study will likely depend on stakeholder needs, as each employs different methodologies to assess research impact [ 4 , 37 , 41 ]. This review allows researchers to select individual existing methodological framework components to create a bespoke tool with which to facilitate optimal study design and maximise the potential for impact depending on the characteristics of their study ( Fig 2 and Fig 3 ). For instance, if researchers are interested in assessing how influential their research is on policy making, considering a suite of appropriate metrics drawn from multiple methodological frameworks may provide a more comprehensive method than adopting a single methodological framework. In addition, research teams may wish to use a multidimensional approach to methodological framework development, adopting existing narrative and quantitative metrics, as well as elements from contribution frameworks. This approach would arguably present a more comprehensive method of impact assessment; however, further research is warranted to determine its effectiveness [ 4 , 69 , 72 , 73 ].
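Continuing the illustrative matrix structure sketched earlier, assembling such a bespoke suite of metrics for one area of interest (here, policy impact) could be as simple as filtering the matrix by category. Again, this is a sketch built on the hypothetical structure above, not a prescribed tool.

```python
def select_metrics(impact_matrix, category):
    """Collect every metric any framework proposes for one impact category.

    `impact_matrix` maps (category, subgroup) -> {framework: [metrics]},
    as in the earlier sketch; returns a de-duplicated metric list per subgroup.
    """
    suite = {}
    for (cat, subgroup), by_framework in impact_matrix.items():
        if cat == category:
            metrics = {m for metric_list in by_framework.values() for m in metric_list}
            suite[subgroup] = sorted(metrics)
    return suite

# e.g., a bespoke policy-impact tool drawing on all frameworks at once:
# policy_suite = select_metrics(impact_matrix, "Influence on policy making")
```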

Finally, it became clear during this review that the included methodological frameworks had been constructed using varied methodological processes. At present, there are no guidelines or consensus around the optimal pathway that should be followed to develop a robust methodological framework. The authors believe this is an area that should be addressed by the research community, to ensure future frameworks are developed using best-practice methodology.

For instance, the Payback Framework drew upon a literature review and was refined through a case study approach. Arguably, this approach could be considered inferior to approaches involving extensive stakeholder engagement, such as that used to develop the CIHR framework [ 8 ]. Nonetheless, 7 methodological frameworks were developed based upon the Payback Framework [ 8 , 29 , 31 – 35 ].

Limitations

The present review is the first to summarise systematically existing impact methodological frameworks and metrics. The main limitation is that 50% of the included publications were found through methods other than bibliographic database searching, indicating poor indexing. Therefore, some relevant articles may not have been included in this review if they failed to indicate the inclusion of a methodological impact framework in their title/abstract. We did, however, make every effort to try to find these potentially hard-to-reach publications, e.g., through forwards/backwards citation searching, hand searching reference lists, and expert communication. Additionally, this review only extracted information regarding the methodology followed to develop each framework from the main publication source or framework webpage. Therefore, further evaluations may not have been included, as they are beyond the scope of the current paper. A further limitation was that although our search strategy did not include language restrictions, we did not specifically search non-English language databases. Thus, we may have failed to identify potentially relevant methodological frameworks that were developed in a non-English language setting.

In conclusion, the measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise benefit, and to help minimise research waste. This review provides a collective summary of existing methodological impact frameworks and metrics, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research.

Supporting information

S1 Appendix. Search strategy.

https://doi.org/10.1371/journal.pmed.1002370.s001

S1 PRISMA Checklist. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist.

https://doi.org/10.1371/journal.pmed.1002370.s002

Acknowledgments

We would also like to thank Mrs Susan Bayliss, Information Specialist, University of Birmingham, and Mrs Karen Biddle, Research Secretary, University of Birmingham.

  • 3. HEFCE. REF 2014: Assessment framework and guidance on submissions 2011 [cited 2016 15 Feb]. Available from: http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf .
  • 8. Canadian Institutes of Health Research. Developing a CIHR framework to measure the impact of health research 2005 [cited 2016 26 Feb]. Available from: http://publications.gc.ca/collections/Collection/MR21-65-2005E.pdf .
  • 9. HEFCE. HEFCE allocates £3.97 billion to universities and colleges in England for 2015–1 2015. Available from: http://www.hefce.ac.uk/news/newsarchive/2015/Name,103785,en.html .
  • 10. Stern N. Building on Success and Learning from Experience—An Independent Review of the Research Excellence Framework 2016 [cited 2016 05 Aug]. Available from: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/541338/ind-16-9-ref-stern-review.pdf .
  • 11. Matthews D. REF sceptic to lead review into research assessment: Times Higher Education; 2015 [cited 2016 21 Apr]. Available from: https://www.timeshighereducation.com/news/ref-sceptic-lead-review-research-assessment .
  • 12. HEFCE. The Metric Tide—Report of the Independent Review of the Role of Metrics in Research Assessment and Management 2015 [cited 2016 11 Aug]. Available from: http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/The,Metric,Tide/2015_metric_tide.pdf .
  • 14. LSE Public Policy Group. Maximizing the impacts of your research: A handbook for social scientists. http://www.lse.ac.uk/government/research/resgroups/LSEPublicPolicy/Docs/LSE_Impact_Handbook_April_2011.pdf . London: LSE; 2011.
  • 15. HEFCE. Consultation on the second Research Excellence Framework. 2016.
  • 18. Merriam-Webster Dictionary 2017. Available from: https://www.merriam-webster.com/dictionary/methodology .
  • 19. Oxford Dictionaries—pathway 2016 [cited 2016 19 June]. Available from: http://www.oxforddictionaries.com/definition/english/pathway .
  • 20. Oxford Dictionaries—metric 2016 [cited 2016 15 Sep]. Available from: https://en.oxforddictionaries.com/definition/metric .
  • 21. WHO. WHO Ethical and Safety Guidelines for Interviewing Trafficked Women 2003 [cited 2016 29 July]. Available from: http://www.who.int/mip/2003/other_documents/en/Ethical_Safety-GWH.pdf .
  • 31. Kalucy L, et al. Primary Health Care Research Impact Project: Final Report Stage 1 Adelaide: Primary Health Care Research & Information Service; 2007 [cited 2016 26 Feb]. Available from: http://www.phcris.org.au/phplib/filedownload.php?file=/elib/lib/downloaded_files/publications/pdfs/phcris_pub_3338.pdf .
  • 33. Canadian Academy of Health Sciences. Making an impact—A preferred framework and indicators to measure returns on investment in health research 2009 [cited 2016 26 Feb]. Available from: http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf .
  • 39. HEFCE. RAE 2008—Guidance in submissions 2005 [cited 2016 15 Feb]. Available from: http://www.rae.ac.uk/pubs/2005/03/rae0305.pdf .
  • 41. Royal Netherlands Academy of Arts and Sciences. The societal impact of applied health research—Towards a quality assessment system 2002 [cited 2016 29 Feb]. Available from: https://www.knaw.nl/en/news/publications/the-societal-impact-of-applied-health-research/@@download/pdf_file/20021098.pdf .
  • 48. Weiss CH. Using social research in public policy making: Lexington Books; 1977.
  • 50. Kogan M, Henkel M. Government and research: the Rothschild experiment in a government department: Heinemann Educational Books; 1983.
  • 51. Thomas P. The Aims and Outcomes of Social Policy Research. Croom Helm; 1985.
  • 52. Bulmer M. Social Science Research and Government: Comparative Essays on Britain and the United States: Cambridge University Press; 2010.
  • 53. Booth T. Developing Policy Research. Aldershot: Gower; 1988.
  • 55. Kalucy L, et al. Exploring the impact of primary health care research. Stage 2, Primary Health Care Research Impact Project. Adelaide: Primary Health Care Research & Information Service (PHCRIS); 2009 [cited 2016 26 Feb]. Available from: http://www.phcris.org.au/phplib/filedownload.php?file=/elib/lib/downloaded_files/publications/pdfs/phcris_pub_8108.pdf .
  • 56. CHSRF. Canadian Health Services Research Foundation 2000. Health Services Research and Evidence-based Decision Making [cited 2016 February]. Available from: http://www.cfhi-fcass.ca/migrated/pdf/mythbusters/EBDM_e.pdf .
  • 58. W.K. Kellogg Foundation. Logic Model Development Guide 2004 [cited 2016 19 July]. Available from: http://www.smartgivers.org/uploads/logicmodelguidepdf.pdf .
  • 59. United Way of America. Measuring Program Outcomes: A Practical Approach 1996 [cited 2016 19 July]. Available from: https://www.bttop.org/sites/default/files/public/W.K.%20Kellogg%20LogicModel.pdf .
  • 60. Nutley S, Percy-Smith J and Solesbury W. Models of research impact: a cross sector review of literature and practice. London: Learning and Skills Research Centre 2003.
  • 61. Spaapen J, van Drooge L. SIAMPI final report [cited 2017 Jan]. Available from: http://www.siampi.eu/Content/SIAMPI_Final%20report.pdf .
  • 63. LSHTM. The Health Risks and Consequences of Trafficking in Women and Adolescents—Findings from a European Study 2003 [cited 2016 29 July]. Available from: http://www.oas.org/atip/global%20reports/zimmerman%20tip%20health.pdf .
  • 70. Russell G. Response to second HEFCE consultation on the Research Excellence Framework 2009 [cited 2016 04 Apr]. Available from: http://russellgroup.ac.uk/media/5262/ref-consultation-response-final-dec09.pdf .
Published: 18 March 2015

A narrative review of research impact assessment models and methods

Andrew J Milat, Adrian E Bauman & Sally Redman

Health Research Policy and Systems, volume 13, Article number: 18 (2015)

Research funding agencies continue to grapple with assessing research impact. Theoretical frameworks are useful tools for describing and understanding research impact. The purpose of this narrative literature review was to synthesize evidence that describes processes and conceptual models for assessing policy and practice impacts of public health research.

The review involved keyword searches of electronic databases, including MEDLINE, CINAHL, PsycINFO, EBM Reviews, and Google Scholar in July/August 2013. Review search terms included ‘research impact’, ‘policy and practice’, ‘intervention research’, ‘translational research’, ‘health promotion’, and ‘public health’. The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks, and systematic reviews describing processes and conceptual models for assessing research impact. The review was conducted in two phases: initially, abstracts were retrieved and assessed against the review criteria, followed by the retrieval and assessment of full papers against the review criteria.

Thirty-one primary studies and one systematic review met the review criteria, with 88% of studies published since 2006. Studies comprised assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, health service research, and public health research. Six studies had an explicit focus on assessing impacts of health promotion or public health research, and one had a specific focus on intervention research impact assessment. A total of 16 different impact assessment models were identified, with the ‘payback model’ the most frequently used conceptual framework. Typically, impacts were assessed across multiple dimensions using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The vast majority of studies relied on principal investigator interviews and/or peer review to assess impacts, instead of interviewing policymakers and end-users of research.

Conclusions

Research impact assessment is a new field of scientific endeavour and there are a growing number of conceptual frameworks applied to assess the impacts of research.

Background

There is increasing recognition that health research investment should lead to improvements in policy [ 1 - 3 ], practice, resource allocation, and, ultimately, the health of the community [ 4 , 5 ]. However, research impacts are complex, non-linear, and unpredictable in nature and there is a propensity to ‘count what can be easily measured’, rather than measuring what ‘counts’ in terms of significant, enduring changes [ 6 ].

Traditional academic-oriented indices of research productivity, such as number of papers, impact factors of journals, citations, research funding, and esteem measures, are well established and widely used by research granting bodies and academic institutions [ 7 ], but they do not always relate well to the ultimate goals of applied health research [ 6 , 8 , 9 ]. Governments are signaling that research metrics of research quality and productivity are insufficient to determine research value because they say little about the real world benefits of research [ 10 - 12 ]. At the same time, research funders continue to grapple with the fundamental problem of assessing broader impacts of research. This task is made more challenging because there are currently no agreed systematic approaches to measuring broader research impacts, particularly impacts on health policy and practice [ 13 , 14 ].

Recent years have seen the development of a number of frameworks that can assist in better describing and understanding the impact of research. Conceptual frameworks can help organize data collection, analysis, and reporting to promote clarity and consistency in the impact assessments made. In the context of this review, research impact is defined as: “… any type of output of research activities which can be considered a ‘positive return’ for the scientific community, health systems, patients, and the society in general ” [ 13 ], p. 2.

In light of these gaps in the literature, the purpose of this narrative literature review was to synthesize evidence that describes processes and conceptual models for assessing research impacts, with a focus on policy and practice impacts of public health research.

Literature review search strategy

The review involved keyword searches of electronic databases including MEDLINE (general medicine), CINAHL (nursing and allied health), PsycINFO (psychology and related behavioural and social sciences), EBM Reviews, Cochrane Database of Systematic Reviews 2005 to May 2013, and Google Scholar. Review search terms included ‘research impact’ OR ‘policy and practice’ AND ‘intervention research’ AND ‘translational research’ AND ‘health promotion’ AND ‘public health’.

The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks, and systematic reviews describing processes and conceptual models for assessing research impact.

The review was conducted in two phases in July/August 2013. In phase 1, abstracts were retrieved and assessed against the review criteria. For abstracts that met the review criteria in phase 1, full papers were retrieved and were assessed for inclusion in the final review. Studies included in the review met the following criteria: i) published in English from January 1990 to June 2013; ii) described processes, theories, or frameworks associated with the assessment of research impact; and iii) were theoretical and opinion pieces, case studies, descriptive studies, frameworks, or systematic reviews.

Due to the dearth of public health and health promotion-specific research impact assessment, papers with a focus on clinical or health services research impact assessment were included. The reference lists of the final papers were also checked for further relevant papers; where such articles were considered relevant, they were included in the review. The search process is shown in Figure 1.

Figure 1. Literature search process and numbers of papers identified, excluded, and included in the review of research impact assessment.

Findings of the literature review

An initial review of abstracts in electronic databases against the inclusion criteria yielded 431 abstracts, and searches of reference lists and the grey literature identified a further 9 documents. Of the 434 abstracts and documents reviewed, 39 met the inclusion criteria and full papers were retrieved. Upon review of the full publications against the review criteria, a further 7 papers were excluded, leaving 32 publications in the review [ 8 , 9 , 13 , 15 - 44 ]. A summary of the characteristics of the included studies (reference details, study type, domains of impact, methods and indicators, frameworks applied or proposed, and key lessons learned) is provided in Additional file 1: Table S1.

Study characteristics

The review identified 31 primary studies and 1 systematic review that met the review criteria. Six of the studies were reports found in the grey literature. Interestingly, 88% of studies that met the review criteria were published since 2006. The studies in the review included assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, health service research, as well as public health research. Six studies [ 22 , 23 , 34 , 36 , 40 , 43 ] had an explicit focus on assessing impacts of health promotion or public health research and 1 had a specific focus on intervention research impact assessment [ 36 ].

The majority of studies were conducted in Australia, the United Kingdom, and North America, noting that the review was limited to studies published in English. The unit of assessment varied greatly, from researchers and research teams [22] to whole institutions [15], research disciplines (e.g., prevention research [23], cancer research [41], tobacco control research [43]), and types of grant, for example those awarded by public funding bodies [17, 24]. The most frequently applied research methods across studies, in rank order, were publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The nature of frameworks and methods used to measure research impacts will now be examined in greater detail.

Frameworks and methods for measuring research impacts

Indices of traditional research productivity such as number of papers, impact factors of journals, and citations figured prominently in studies in the literature review [ 18 , 23 , 41 ].

Across the majority of studies in this review, research impact was assessed using multiple dimensions and methodological approaches. A total of 16 different impact assessment models were identified, with the 'payback model' being the most frequently used conceptual framework [15, 24, 29, 31, 44]. Other frequently used models included health economics frameworks [19, 21, 37], variants of Research Program Logic Models [9, 35, 42], and the Research Impact Framework [8, 30]. A number of recent frameworks, including the Health Services Research Impact Framework [20] and the Banzi Health Research Impact Framework [13, 34, 36], are hybrids of earlier conceptual approaches that integrate and categorise impacts and benefits across multiple dimensions. The most commonly applied frameworks identified in the review, namely the payback model, the Research Impact Framework, health economics models, and the newer hybrid Health Research Impact Framework, will now be examined in greater detail.

The payback model was developed by Buxton and Hanney [45] and takes into account resources, research processes, primary outputs, dissemination, secondary outputs and applications, and benefits or final outcomes provided by the research. Categories of outcome in the 'payback' framework include i) knowledge production (journal articles, books/book chapters, conference proceedings, reports); ii) use of research in the research system (acquisition of formal qualifications by members of the research team, career advancement, and use of project findings for methodology in subsequent research); iii) use of research project findings in health system policy/decision making (findings used in policy/decision making at any level of the health service, such as the geographic or organisational level); iv) application of the research findings through changed behaviour (changes in behaviour observed or expected through the application of findings to research-informed policies at geographical, organisational, and population levels); v) factors influencing the utilisation of research (impact of research dissemination in terms of policy/decision making/behavioural change); and vi) health/health service/economic benefits (improved service delivery, cost savings, improved health, or increased equity).

The model is usually applied as a semi-structured interview guide for researchers to identify the impact of their research and is often accompanied by bibliometric analysis and verification processes. The payback categories have been found to be applicable to assessing impact of research [ 15 , 24 , 29 ], especially the more proximal impacts on knowledge production, research targeting, capacity building and absorption, and informing practice, policy, and product development. The model has been found to be less effective in eliciting information about the longer term categories of impact on health and health sector benefits and economics [ 29 ].
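To make the six payback categories concrete, a simple coding structure along the following lines could be used when tallying evidence gathered from semi-structured interviews. This is a hypothetical sketch under our own assumptions, not the instrument used in the cited studies; the category labels are paraphrased from the list above.

```python
# Hypothetical coding structure for the six Payback categories (illustrative only).
from dataclasses import dataclass, field

PAYBACK_CATEGORIES = [
    "knowledge_production",
    "research_system_benefits",
    "informing_policy_decision_making",
    "application_through_changed_behaviour",
    "factors_influencing_utilisation",
    "health_service_and_economic_benefits",
]

@dataclass
class ProjectPayback:
    project_id: str
    evidence: dict = field(default_factory=lambda: {c: [] for c in PAYBACK_CATEGORIES})

    def add_evidence(self, category: str, item: str) -> None:
        if category not in self.evidence:
            raise ValueError(f"Unknown payback category: {category}")
        self.evidence[category].append(item)

    def summary(self) -> dict:
        # Number of evidence items recorded per category, e.g., for comparing projects.
        return {c: len(items) for c, items in self.evidence.items()}

project = ProjectPayback("demo-project")
project.add_evidence("knowledge_production", "two peer-reviewed articles")
project.add_evidence("informing_policy_decision_making", "findings cited in a regional health plan")
print(project.summary())
```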

The Research Impact Framework was developed in the UK by Kuruvilla et al. [8, 30]. It draws on both the research impact literature and UK research assessment criteria for publicly funded research, and was validated through empirical analysis of research projects at the London School of Hygiene & Tropical Medicine. The framework is built around four categories of impact, namely i) research related, ii) policy, iii) service, and iv) societal. Within each of these areas, further descriptive categories are identified. For example, the nature of research impact on policy can be described using the Weiss categorisation of 'instrumental use', where research findings drive policy-making; 'mobilisation of support', where research provides support for policy proposals; 'conceptual use', where research influences the concepts and language of policy deliberations; and 'redefining/wider influence', where research leads to rethinking and changing established practices and beliefs [30]. The framework is applied as a semi-structured interview guide for researchers to identify the impact of their research. Users of the framework have reported that it enables the systematic identification of a range of specific and verifiable impacts and allows consideration of the unintended effects of research [30].

The framework proposed by Banzi et al. [13] is an adaptation of the Canadian Academy of Health Sciences impact model [25] in light of a systematic review and includes five broad categories of research impact, namely i) advancing knowledge, ii) capacity building, iii) informing decision-making, iv) health and other sector benefits, and v) broad socio-economic benefits. The Banzi framework proposes a set of indicators for each domain. To illustrate, indicators for informing decision-making include citation in guidelines, policy documents, and plans; references used as background for successful funding proposals; consulting, support activity, and contributions to advisory committees; patents and industrial collaboration; and packages of materials and communications to key target audiences about findings. This multidimensional framework takes into account several aspects of research impact and use, as well as comprehensive analytical approaches including bibliometric analysis, surveys, audit, document review, case studies, and panel assessment. Panel assessments generally involve asking experts to assess the merits of research against impact criteria.

Economic models used to assess the impacts of research varied from cost-benefit analysis to return on investment and employed a variety of methods for determining the economic benefits of research. The 1993 National Institutes of Health study [37] was among the first to attempt to systematically monetise the benefits of medical research. It provided estimates of savings for health care systems (direct costs) and savings for the community as a whole (indirect costs), and quantified benefits in terms of quality-adjusted life years. The Deloitte Access Economics study [21] built on the foundations of the 1993 analysis to estimate the returns on investment in research in Australia for the main disease areas, employing health system expenditure modelling and monetising the total quality-adjusted life years gained. According to Buxton et al. [19], measuring only health care savings is generally seen as too narrow a focus, and their analysis considered the benefits, or indirect cost savings, of avoiding lost production and the further activity stimulated by research.
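The general logic shared by these economic models can be shown with a toy calculation: monetise the QALYs gained at an assumed willingness-to-pay threshold, add direct health system savings, and compare the total against the research investment. The figures and threshold in the sketch below are purely hypothetical assumptions and are not drawn from the cited studies.

```python
# Illustrative only: a toy return-on-investment calculation in the spirit of the
# health economic models discussed above. All figures are hypothetical.

def research_roi(qalys_gained, value_per_qaly, health_system_savings, research_cost):
    """Return (net benefit, benefit-cost ratio) for a research investment.

    Monetises QALYs at an assumed willingness-to-pay threshold and adds
    direct health system savings, following the general cost-benefit logic.
    """
    total_benefit = qalys_gained * value_per_qaly + health_system_savings
    net_benefit = total_benefit - research_cost
    benefit_cost_ratio = total_benefit / research_cost
    return net_benefit, benefit_cost_ratio

# Hypothetical inputs: 10,000 QALYs valued at $50,000 each, $100m in direct
# health system savings, against a $200m research investment.
net, ratio = research_roi(10_000, 50_000, 100_000_000, 200_000_000)
print(f"Net benefit: ${net:,.0f}; benefit-cost ratio: {ratio:.1f}")  # $400,000,000; 3.0
```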

The aforementioned models all attempted to quantify a mix of more proximal research and policy and practice impacts, as well as more distal societal and economic benefits of research. It is also interesting to note that across the studies in this review, only four [ 16 , 29 , 34 , 36 ] interviewed non-academic end-users of research in impact assessment processes, with the vast majority of studies relying on principal investigator interviews and/or peer review processes to assess impacts.

Comprehensive monitoring and measurement of research impact is a complex undertaking requiring the involvement of many actors within the research pipeline [ 13 ]. Interestingly, 90% of studies that met the review criteria were published since 2006, indicating that this is a new field of research. Given the dearth of literature on public health research impact assessment, this review included assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, and health service research as well as public health research.

The review of both the published and grey literature also revealed that there are a number of conceptual frameworks currently being applied that describe processes of assessing research impact. These frameworks differ in their terminology and approaches. The lack of a common understanding of terminology and metrics makes the task of quantifying research efforts, outputs, and, ultimately, performance in this area more difficult.

Most of the models identified in the review used a multidimensional conceptualisation and categorisation of research impact. These multidimensional models, such as the payback model, the Research Impact Framework, and the Banzi Health Research Impact Framework, shared common features, including assessment of traditional research outputs, such as publications and research funding, as well as a broader range of potential benefits, including capacity building, policy and product development, and service development, together with broader societal and economic impacts. Assessments that considered more than one category were valued for their ability to capture multifaceted impact processes [13, 36, 44]. Interestingly, these frameworks recognised that research often has impacts not only in the country within which the research is conducted, but also internationally. However, for practical reasons, most studies limited assessment and verification of impacts to a single country [19, 34, 36].

Several methods were used to practically assess research impact, including desk analysis, bibliometrics, panel assessments, interviews, and case studies. A number of studies highlighted the utility of case study methods noting that a considerable range of research paybacks and perspectives would not have been identified without employing a structured case study approach [ 13 , 36 , 44 ]. However, it was noted that case studies can be at risk of ‘conceptualization bias’ and ‘reporting bias’ especially when they are designed or carried out retrospectively [ 13 ]. The costs of conducting case studies can also be a barrier when assessing large volumes of research [ 13 , 36 ].

Despite recent efforts, little is known about the nature and mechanisms of the influence that health research has on health policy or practice. This review suggests that, to date, most primary studies of health research impacts have been small-scale case studies or reviews of medical and health services research funding [27, 31, 35, 39, 41], with only two studies offering comprehensive assessments of the policy and practice impacts of public health research, both focusing on prevention research in Australia.

The first of these studies examined the impact of population health surveillance studies on obesity prevention policy and practice [34], while the second [36] examined the policy and practice impacts of intervention research funded through the NSW Health Promotion Demonstration Research Grants Scheme 2000–2006. Both studies used comprehensive mixed methods to assess impacts, including semi-structured interviews with both investigators and end-users, bibliometric analysis, document review, verification processes, and case studies. These studies concluded that research projects can achieve the greatest policy and practice impacts if they address proximal needs of the policy context by engaging end-users from the inception of research projects, utilising existing policy networks and structures, and using a range of strategies to disseminate findings that go beyond traditional peer-reviewed publications.

This review suggests that the research sector often still uses bibliometric indices to assess research impacts, rather than measuring more enduring and arguably more important policy and practice outcomes [6]. However, governments are increasingly signalling that traditional metrics of research quality are insufficient to determine research value because they say little about the real-world benefits of research [10-12]. The Australian Excellence in Innovation trial [26] and the UK's Research Excellence Framework trials [28, 46] were commissioned by governments to determine the public benefit from research spending [10, 16, 47].

These attempts raise an important question: how can an impact assessment process be constructed that assesses multidimensional impacts while remaining feasible to implement at a system level? For example, can the 28 indicators across the 4 domains of the Research Impact Framework be realistically measured in practice? The same could be said of the framework proposed by Banzi et al. [13], which has 26 indicators, and the Research Excellence Framework pilot indicators reported by Ovseiko et al. [38], which total 20 impact indicators. If such methods are to be widely used in practice by research funders and academic institutions to assess research impacts, the right balance between comprehensiveness and feasibility must be struck.

Though a number of studies suggest it is difficult to determine longer-term societal and economic benefits of research as part of multi-dimensional research impact assessment processes [ 13 , 36 , 44 ], the health economic impact models presented in this review and the broader literature demonstrate that it is feasible to undertake these analyses, particularly if the right methods are used [ 19 , 21 , 37 , 48 ].

The review revealed that, where broader policy and practice impacts of research have been assessed in the literature, the vast majority of studies have relied on principal investigator interviews and/or peer review to assess impacts, instead of interviewing policymakers and other important end-users of research. This would seem to be a methodological weakness of previous research, as solely relying on principal investigator assessments, particularly of impacts of their own research, has an inherent bias, leaving the research impact assessment process open to ‘gilding the lily’. In light of this, future impact assessment processes should routinely engage end-users of research in interviews and assessment processes, but also include independent documentary verification, thus addressing methodological limitations of previous research.

One of the greatest practical issues in measuring research impact, including the impact of public health research, is the long lag time before impacts manifest. It has been observed that, on average, it takes over 6 years for research evidence to reach reviews, papers, and textbooks, and a further 9 years for this evidence to be implemented into practice [49]. In light of this, it is important to allow sufficient time for impacts to manifest, while not waiting so long that these impacts cannot be verified by stakeholders involved in the production and use of the research. Studies in this review have addressed this issue by assessing only studies that had been completed for at least 24 months [36].

As identified in previous research [13], a major challenge is attribution of impacts and understanding what would have happened without the individual research activity, or what some describe as the 'counterfactual'. Creating a control situation for this type of research is difficult but, where possible, identification of baseline measures and contextual factors is important in understanding what counterfactual situations may have arisen. Confidence in attribution of effects can be improved by undertaking independent verification of processes and engaging end-users in assessments, rather than relying solely on investigators' accounts of impacts [36].

The research described in this review has some limitations that merit closer examination. Given the paucity of research in this area, the review criteria had to be broadened beyond public health research to include all health research. It was also challenging to make direct comparisons across studies, mostly due to the heterogeneity of studies and the lack of a standard terminology, hence the broad definition of 'research impact' ultimately applied in the review criteria. Although the majority of studies were found in the traditional biomedical databases (e.g., MEDLINE), 18% were found in the grey literature, highlighting the importance of using multiple data sources in future review processes. Another methodological limitation, also identified in previous reviews [13], is that we did not estimate the level of publication bias and selective publication in this emerging field. Finally, as our analysis included studies published up to June 2013, we may not have captured more recent approaches to impact assessment.

Research impact assessment is a new field of scientific endeavour, and impacts are typically assessed using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The literature is characterised by an over-reliance on bibliometric methods to assess research impact. Future impact assessment processes could be strengthened by routinely engaging the end-users of research in interviews and assessment processes. If multidimensional research impact assessment methods are to be widely used in practice by research funders and academic institutions, the right balance between comprehensiveness and feasibility must be determined.

Anderson W, Papadakis E. Research to improve health practice and policy. Med J Aust. 2009;191(11/12):646–7.

Cooksey D. A review of UK health research funding. London: HMSO; 2006.

Health and Medical Research Strategic Review Committee. The virtuous cycle: working together for health and medical research. Canberra: Commonwealth of Australia; 1998.

National Health and Medical Research Council Public Health Advisory Committee. Report of the Review of Public Health Research Funding in Australia. Canberra: NHMRC; 2008.

Campbell DM. Increasing the use of evidence in health policy: practice and views of policy makers and researchers. Aust New Zealand Health Policy. 2009;6:21.

Wells R, Whitworth JA. Assessing outcomes of health and medical research: do we measure what counts or count what we can measure? Aust New Zealand Health Policy. 2007;4:14.

Australian Government Australian Research Council. Excellence in Research in Australia 2012. Canberra: Australian Research Council; 2012.

Kuruvilla S, Mays N, Walt G. Describing the impact of health services and policy research. J Health Serv Res Policy. 2007;12 Suppl 1:S1-23-31.

Weiss AP. Measuring the impact of medical research: moving from outputs to outcomes. Am J Psychiatr. 2007;164(2):206–14.

Bornmann L. Measuring the societal impact of research. Eur Mol Biol Organ. 2012;13(8):673–6.

Holbrook JB. Re-assessing the science–society relation: The case of the US National Science Foundation’s broader impacts merit review criterion (1997–2011). In: Frodeman R, Holbrook JB, Mitcham C, Xiaonan H, editors. Peer Review, Research Integrity, and the Governance of Science–Practice, Theory, and Current Discussions. Dalian: People’s Publishing House and Dalian University of Technology; 2012. p. 328–62.

Holbrook JB, Frodeman R. Science’s social effects. Issues in Science and Technology. 2007. http://issues.org/23-3/p_frodeman-3/ .

Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011;9:26.

Boaz A, Fitzpatrick S, Shaw B. Assessing the impact of research on policy: A review of the literature for a project on bridging research and policy through outcome evaluation. London: Policy Studies Institute London; 2008.

Aymerich M, Carrion C, Gallo P, Garcia M, López-Bermejo A, Quesada M, et al. Measuring the payback of research activities: a feasible ex-post evaluation methodology in epidemiology and public health. Soc Sci Med. 2012;75(3):505–10.

Barber R, Boote JD, Parry GD, Cooper CL, Yeeles P, Cook S. Can the impact of public involvement on research be evaluated? A mixed methods study. Health Expect. 2012;15(3):229–41.

Barker K. The UK Research Assessment Exercise: the evolution of a national research evaluation system. Res Eval. 2007;16(1):3–12.

Boyack KW, Jordan P. Metrics associated with NIH funding: a high-level view. J Am Med Inform Assoc. 2011;18(4):423–31.

Buxton M, Hanney S, Morris S, Sundmacher L, Mestre-Ferrandiz J, Garau M, et al. Medical research: what's it worth? Estimating the economic benefits from medical research in the UK. Report for the MRC, Wellcome Trust and the Academy of Medical Sciences. 2008. http://www.wellcome.ac.uk/stellent/groups/corporatesite/@sitestudioobjects/documents/web_document/wtx052110.pdf .

Buykx P, Humphreys J, Wakerman J, Perkins D, Lyle D, McGrail M, et al. ‘Making evidence count’: A framework to monitor the impact of health services research. Aust J Rural Health. 2012;20(2):51–8.

Deloitte Access Economics. Extrapolated returns on investment in NHMRC medical research. Canberra: Australian Society for Medical Research; 2012.

Derrick GE, Haynes A, Chapman S, Hall WD. The association between four citation metrics and peer rankings of research influence of Australian researchers in six fields of public health. PLoS One. 2011;6(4):e18521.

Franks AL, Simoes EJ, Singh R, Gray BS. Assessing prevention research impact: a bibliometric analysis. Am J Prev Med. 2006;30(3):211–6.

Graham KE, Chorzempa HL, Valentine PA, Magnan J. Evaluating health research impact: development and implementation of the Alberta Innovates–Health Solutions impact framework. Res Eval. 2012;21(5):354–67.

Canadian Institutes of Health Research. Developing a CIHR framework to measure the impact of health research. Ottawa: Canadian Institutes of Health Research; 2005.

Group of Eight. Excellence in innovation: research impacting our nation’s future – assessing the benefits. Adelaide: Australian Technology Network of Universities; 2012.

Hanney S. An assessment of the impact of the NHS Health Technology Assessment Programme. Southampton: National Coordinating Centre for Health Technology Assessment, University of Southampton; 2007.

Higher Education Funding Council for England. Panel criteria and working methods. London: Higher Education Funding Council for England; 2012.

Kalucy EC, Jackson-Bowers E, McIntyre E, Reed R. The feasibility of determining the impact of primary health care research projects using the Payback Framework. Health Res Policy Syst. 2009;7:11.

Kuruvilla S, Mays N, Pleasant A, Walt G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv Res. 2006;6(1):134.

Kwan P, Johnston J, Fung AYK, Chong DSY, Collins RA, Lo SV. A systematic evaluation of payback of publicly funded health and health services research in Hong Kong. BMC Health Serv Res. 2007;7(1):121.

Landry R, Amara N, Lamari M. Climbing the ladder of research utilization: Evidence from social science research. Sci Commun. 2001;22:396–422.

Lavis J, Ross S, McLeod C, Gildiner A. Measuring the impact of health research. J Health Serv Res Policy. 2003;8(3):165–70.

Laws R, King L, Hardy LL, Milat AJ, Rissel C, Newson R, et al. Utilization of a population health survey in policy and practice: a case study. Health Res Policy Syst. 2013;11:4.

Liebow E, Phelps J, Van Houten B, Rose S, Orians C, Cohen J, et al. Toward the assessment of scientific and public health impacts of the National Institute of Environmental Health Sciences Extramural Asthma Research Program using available data. Environ Health Perspect. 2009;117(7):1147.

Milat AJ, Laws R, King L, Newson R, Rychetnik L, Rissel C, et al. Policy and practice impacts of applied research: a case study analysis of the New South Wales Health Promotion Demonstration Research Grants Scheme 2000–2006. Health Res Policy Syst. 2013;11:5.

National Institutes of Health. Cost savings resulting from NIH research support. Bethesda, MD: United States Department of Health and Human Services National Institute of Health; 1993.

Ovseiko PV, Oancea A, Buchan AM. Assessing research impact in academic clinical medicine: a study using Research Excellence Framework pilot impact indicators. BMC Health Serv Res. 2012;12:478.

Schapper CC, Dwyer T, Tregear GW, Aitken M, Clay MA. Research performance evaluation: the experience of an independent medical research institute. Aust Health Rev. 2012;36(2):218–23.

Spoth RL, Schainker LM, Hiller-Sturmhöefel S. Translating family-focused prevention science into public health impact: illustrations from partnership-based research. Alcohol Res Health. 2011;34(2):188.

Sullivan R, Lewison G, Purushotham AD. An analysis of research activity in major UK cancer centres. Eur J Cancer. 2011;47(4):536–44.

Taylor J, Bradbury-Jones C. International principles of social impact assessment: lessons for research? J Res Nurs. 2011;16(2):133–45.

Warner KE, Tam J. The impact of tobacco control research on policy: 20 years of progress. Tob Control. 2012;21(2):103–9.

Wooding S, Hanney S, Buxton M, Grant J. The returns from arthritis research. Volume 1: Approach analysis and recommendations. Netherlands: RAND Europe; 2004.

Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1(1):35–43.

Higher Education Funding Council for England. Decisions on assessing research impact. Bristol: Higher Education Funding Council for England; 2011.

Grant J, Brutscher P-B, Kirk SE, Butler L, Wooding S. Capturing research impacts: a review of international practice. Documented Briefing. RAND Corporation; 2010. http://www.rand.org/pubs/documented_briefings/DB578.html .

Murphy KM, Topel RH. Measuring the gains from medical research: an economic approach. Chicago: University of Chicago Press; 2010.

Balas EA, Boren SA. Managing clinical knowledge for health care improvement. In: Bemmel J, McCray AT, editors. Yearbook of Medical Informatics 2000: Patient-Centered Systems. Stuttgart, Germany: Schattauer Verlagsgesellschaft mbH; 2000. p. 65–70.

Author information

Authors and affiliations.

New South Wales Ministry of Health, 73 Miller St, North Sydney, NSW, 2060, Australia

Andrew J Milat

School of Public Health, University of Sydney, Level 2, Medical Foundation Building K25, Sydney, NSW, 2006, Australia

Andrew J Milat, Adrian E Bauman & Sally Redman

Sax Institute, Level 2, 10 Quay St, Haymarket, Sydney, NSW, 2000, Australia

Sally Redman

Corresponding author

Correspondence to Andrew J Milat .

Additional information

Competing interests.

The authors declare that they have no competing interests.

Authors’ contributions

AJM conceived the study, designed the methods, and conducted the literature searches. AJM drafted the manuscript and all authors contributed to data interpretation and have read and approved the final manuscript.

Additional file

Additional file 1: Table S1.

Characteristics of studies focusing on processes, theories, or frameworks assessing research impact.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/4.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article.

Milat, A.J., Bauman, A.E. & Redman, S. A narrative review of research impact assessment models and methods. Health Res Policy Sys 13 , 18 (2015). https://doi.org/10.1186/s12961-015-0003-1

Received : 07 November 2014

Accepted : 16 February 2015

Published : 18 March 2015

DOI : https://doi.org/10.1186/s12961-015-0003-1

  • Policy and practice impact
  • Research impact
  • Research returns

  • Perspective
  • Open access
  • Published: 04 October 2019

Engaging with research impact assessment for an environmental science case study

  • Kirstie A. Fryirs   ORCID: orcid.org/0000-0003-0541-3384 1 ,
  • Gary J. Brierley   ORCID: orcid.org/0000-0002-1310-1105 2 &
  • Thom Dixon   ORCID: orcid.org/0000-0003-4746-2301 3  

Nature Communications volume  10 , Article number:  4542 ( 2019 ) Cite this article

  • Environmental impact
  • Research management

An Author Correction to this article was published on 08 November 2019

This article has been updated

Impact assessment is embedded in many national and international research rating systems. Most applications use the Research Impact Pathway to track inputs, activities, outputs and outcomes of an invention or initiative to assess impact beyond scholarly contributions to an academic research field (i.e., benefits to environment, society, economy and culture). Existing approaches emphasise easy to attribute ‘hard’ impacts, and fail to include a range of ‘soft’ impacts that are less easy to attribute, yet are often a dominant part of the impact mix. Here, we develop an inclusive 3-part impact mapping approach. We demonstrate its application using an environmental initiative.

Introduction.

Universities around the world are increasingly required to demonstrate and measure the impact of their research beyond academia. The Times Higher Education (THE) World University Rankings now include a measure of knowledge transfer and impact as an indicator of an institution’s quality, and THE released its inaugural University Impact Rankings in 2019. With the global rise of impact assessment, most nations adopt a variant of the Organisation for Economic Cooperation and Development (OECD) definition of impact 1 ; “the contribution that research makes to the economy, society, environment or culture, beyond the contribution to academic research.” Yet research impact mapping provides benefits beyond just meeting the requirements for assessment 1 . It provides an opportunity for academics to reflect on and consider the impact their research can, and should, have on the environment, our social networks and wellbeing, our economic prosperity and our cultural identities. If considered at the development stage of research practices, the design and implementation of impact mapping procedures and frameworks can provide an opportunity to better plan for impact and create an environment where impact is more likely to be achieved.

Almost all impact assessments use variants of the Research Impact Pathway (Fig. 1 ) as the conceptual framework and model with which to document, measure and assess the environmental, social, economic and cultural impacts of research 1 . This Pathway starts with inputs, followed by activities; outputs and outcomes are then produced, and these lead to impact. Writing for Nature Outlook: Assessing Science , Morgan 2 reported on how Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) mapped impact using this approach. However, the literature contains very few worked examples to guide academics and co-ordinators in the process of research impact mapping. This is particularly evident for environmental initiatives and innovations 3 , 4 .

Here we provide a new, 3-part impact mapping approach that can accommodate non-linearity in the impact pathway and can more broadly include and assess both ‘hard’ impacts, those that can be directly attributed to an initiative or invention, and ‘soft’ impacts, those that can be indirectly attributed to an initiative or invention. We then present a worked example for an environmental innovation called the River Styles Framework, developed at Macquarie University, Sydney, Australia. The River Styles Framework is an approach to analysis, interpretation and application of geomorphic insights into river landscapes as a tool to support management applications 5 , 6 . We document and map how this Framework has shaped, and continues to shape, river management practice in various parts of the world. Through mapping impact we demonstrate how the River Styles Framework has contributed to environmental, social and economic benefits at local, national and international scales. Cvitanovic and Hobday (2018) 3  in Nature Communications might consider this case study a ‘bright spot’ that sits at the environmental science-policy-practice interface and is representative of examples that are seldom documented.

Figure 1: The Research Impact Pathway (modified from ref. 2 ).

This case study is presented from the perspective of the researchers who developed the River Styles Framework, and the University Impact co-ordinator who has worked with the researchers to document and measure the impact as part of ex post assessment 1 , 7 . We highlight challenges in planning for impact, as the research impact pathway evolves and entails significant lag times 8 . We discuss challenges that remain in the mapping process, particularly when trying to measure and attribute ‘soft’ impacts such as a change in practice or philosophy, an improvement in environmental condition, or a reduction in community conflict to a particular initiative or innovation 9 . We then provide a personal perspective of the challenges faced and lessons learnt in applying and mapping research impact so that others, particularly in the environmental sciences and related interdisciplinary fields, can undertake similar exercises for their own research impact assessments.

Brief background on research impact assessment and reporting

Historical reviews of research policy record long-term shifts towards the incorporation of concerns for research impact within national funding agencies. In the 1970s the focus was on ‘research utilisation’ 10 ; more recently it has been on ‘knowledge mobilisation’ 11 . The focus is always on seeking to understand the actual manner and pathways through which research becomes incorporated into policy, and through which research has an economic, social, cultural and environmental impact. These are often far from linear circumstances, entailing multiple pathways.

Since the 1980s, higher education systems around the world have been transitioning to performance-based research funding systems (PRFS). The initial application of the PRFS in university contexts occurred as part of the first Research Assessment Exercise (RAE) in the United Kingdom in 1986 12 . PRFS systems have been designed to reward and perpetuate the highest quality research, presenting notionally rational criteria with which to support more intellectually competitive institutions 13 . The United Kingdom’s (UK) RAE was replicated in Australia as the Research Quality Framework (RQF), and more recently as the Excellence in Research for Australia (ERA) assessment. In 2010, 15 countries engaged in some form of PRFS 14 . These frameworks focus almost solely on academic research performance and productivity, rather than the contribution and impact that research makes to the economy, society, environment or culture.

In the last decade, research policy frameworks have increasingly focused on facilitating national prosperity through the transfer, translation and commercialisation of knowledge 15 , 16 , combined with the integration of research findings into government policy-making 17 . In 2009, the Higher Education Funding Council for England conducted a year-long review and consultation process regarding the structure of the Research Excellence Framework (REF) 18 . Following this review, in 2010 the Higher Education Funding Council for England (HEFCE) commissioned a series of impact pilot studies designed to produce narrative-style case studies by 29 higher education institutions. The pilot studies featured five units of assessment: clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature 12 . These pilot studies became the basis of the REF conducted in the UK in 2014 9 , 19 with research impact reporting comprising a 20% component of the overall assessment.

In Canada, in 2009 and from 2014 the Canadian Academy of Health Sciences and Manitoba Research, respectively, developed an impact framework and narrative outputs to evaluate the returns on investment in health research 20 , 21 . Similarly the UK National Institute for Health Research (NIHR) regularly produces impact synthesis case studies 22 . In Ireland, in 2012, the Science Foundation Ireland placed research impact assessment at the core of its scientific and engineering research vision, called Agenda 2020 23 . In the United States, in 2016, the National Science Foundation, National Institute of Health, US Department of Agriculture, and US Environmental Protection Authority developed a repository of data and tools for assessing the impact of federal research and development investments 24 . In 2016–2017, the European Union (EU) established a high-level group to advise on how to maximise the impact of the EU’s investment in research and innovation, focussing on the future of funding allocation and the implementation of the remaining years of Horizon 2020 25 . In New Zealand, in 2017, the Ministry of Business, Innovation and Employment released a discussion paper proposing the introduction of an impact ‘pillar’ into the science investment system 26 . In 2020, Hong Kong will include impact assessment in their Research Assessment Exercise (RAE) for the first time 27 . Other countries including Denmark, Finland and Israel have scoped the use of research impact assessments of their major research programs as part of the Small Advanced Economies Initiative 28 .

In 2017, the Australian Research Council (ARC) conducted an Engagement and Impact Assessment Pilot (EIAP) 7 . While engagement is not analogous to impact, it is an evidential mechanism that elucidates the potential beneficiaries, stakeholders, and partners of academic research 12 , 16 . In addition to piloting narrative-style impact case study reporting, the EIAP characterised and mapped patterns of academic engagement with end users that create and enable research impact. The 2017 EIAP assessed a selection of disciplines for engagement, and a selection of disciplines for impact. Environmental science was a discipline selected for the impact pilot. These pilots became the basis for the Australian Engagement and Impact (EI) assessment in 2018 7 that ran in parallel with the ERA, and from which the case study in this paper is drawn.

Research impact assessment does not just include ex post reporting that can feed into a national PRFS. A large component of academic impact assessment involves ex ante impact reporting in research funding applications. In both the UK and Australia, the perceived merit of a research funding application has been linked in part to its planning and potential for external research impact. In the UK this is labelled a ‘Pathways to Impact’ statement (used by the Research Council UK), in Australia this is an Impact statement (used by the ARC), with a national interest statement also implemented in 2018. These statements explicitly draw from the ‘pathway to impact’ model which simplifies a direct and linear relationship between research excellence, research engagement, and research impact 29 . These ex ante impact statements can be difficult for academics, especially early career researchers, if they do not understand the process, nature and timing of impact. This issue exists in ex post impact reporting and assessment as well, with many researchers finding it difficult to supply evidence that directly or indirectly links their research to impacts that may have taken decades to manifest 1 , 7 , 8 . Also, the simplified linearity of the Research Impact Pathway model makes it difficult to adequately represent the transformation of research into impact.

For research impact statements and assessments to be successful, researchers need to understand the patterns and pathways by which impact occurs prior to articulating how their own research project might achieve impact ex ante, or has had impact ex post. The quality of research impact assessment will improve if researchers and funding agencies understand the types and qualities of impact that can reasonably be expected to arise from a research project or initiative.

Given the plethora of interest in, and a growing global movement towards, both ex ante and ex post research impact assessment and reporting, it is surprising that very few published examples demonstrate how to map research impact. Even in the business, economics and corporate sectors where impact assessment and reporting is common practice 30 , 31 , 32 , very few published examples exist. This hinders prospects for researchers and co-ordinators to develop a more critical understanding of impact, inhibiting more nuanced understandings of the pathways to impact model. Mapping impact networks and recording a cartography of impact for research projects and initiatives provides an appropriate basis to conduct such tasks. This paper provides a new method by which this can be achieved.

The research impact pathway and impact mapping

Many impact assessment frameworks around the world share common characteristics, often structured around the Research Impact Pathway model (Fig. 1 ). This model can be identified in a series of 2009 and 2016 Organisation for Economic Cooperation and Development (OECD) reports that investigated the mechanisms of impact reporting 1 , 33 . The Research Impact Pathway is presented as a sequence of steps by which impact is realised. This pathway can be visualised for an innovation or initiative using an impact mapping approach. It starts with inputs, which can include funding, staff, background intellectual property and support structures (e.g., administration, facilities). This is followed by activities, the ‘doing’ elements: the work of discovery (i.e., research) and of translation, such as courses, workshops, conferences, and processes of community and stakeholder engagement.

Outputs are the results of inputs and activities. They include publications, reports, databases, new intellectual property, patents and inventions, policy briefings, media, and new courses or teaching materials. Inputs, activities and outputs can be planned and somewhat controlled by the researcher, their collaborators and their organisations (universities). Outcomes then occur under the direct influence of the researcher(s) and reflect intended results. These may include commercial products and licences, job creation, new contracts, grants or programs, citations of work, new companies or spin-offs, and new joint ventures and collaborations.

Impacts (sometimes called benefits) tend to occur via uptake and use of an innovation or initiative by independent parties under indirect (or no) influence from the original researcher(s). Impacts can be ‘hard’ or ‘soft’ and have intended and unintended consequences. They span four main areas outside of academia, including environmental, social, economic and cultural spaces. Impacts can include improvements in environmental health, quality of life, changes in industry or agency philosophy and practice, implementation or improvement in policy, improvements in monitoring and reporting, cost-savings to the economy or industry, generation of a higher quality workforce, job creation, improvements in community knowledge, better inter-personal relationships and collaborations, beneficial transfer and use of knowledge, technologies, methods or resources, and risk-reduction in decision making.
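One way to operationalise these pathway stages when assembling an impact map is to record each item as a structured entry. The sketch below is our illustrative assumption of such a data model, covering stage, year, domain, hard/soft attribution, and an international flag; it is not part of the published Research Impact Pathway or the River Styles materials, and the example entries are hypothetical.

```python
# Hypothetical data model for impact-map entries; field names are assumptions.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Stage(Enum):
    INPUT = "input"        # blue hexagons on the map
    ACTIVITY = "activity"  # green
    OUTPUT = "output"      # yellow
    OUTCOME = "outcome"    # red
    IMPACT = "impact"      # purple

class Domain(Enum):
    ENVIRONMENTAL = "environmental"
    SOCIAL = "social"
    ECONOMIC = "economic"
    CULTURAL = "cultural"

@dataclass
class MapEntry:
    year: int
    stage: Stage
    description: str
    domain: Optional[Domain] = None  # mainly relevant for impacts
    hard: bool = False               # directly attributable ('hard') vs 'soft'
    international: bool = False      # heavier-bordered items on the map
    evidence: str = ""               # report, policy, citation, testimonial, ...

entries = [
    MapEntry(1998, Stage.ACTIVITY, "Foundational geomorphic field research"),
    MapEntry(2006, Stage.IMPACT, "Framework cited in a river management plan",
             domain=Domain.ENVIRONMENTAL, hard=True, evidence="plan document"),
]
print(sum(e.stage is Stage.IMPACT for e in entries), "impact entries recorded")
```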

The challenge: applying the research impact pathway to map impact for a case study

The River Styles Framework 5 , 34 aligns with UN Sustainable Development Goals of Life on Land and Clean Water and Sanitation that have a 2020 target to “ensure the conservation, restoration and sustainable use of terrestrial and inland freshwater ecosystems and their services” and a 2030 target to urgently “implement integrated water resources management at all levels” 35 .

The River Styles Framework is a catchment-scale approach to analysis and interpretation of river geomorphology 36 . It is an open-ended, generic approach for use in any landscape or environmental setting. The Framework has four stages (see refs. 5 , 37 , 38 , 39 ); (1) Analysis of river types, behaviour and controls, (2) Assessment of river condition, (3) Forecasting of river recovery potential, and (4) Vision setting and prioritisation for decision making.

River Styles Framework development, uptake, extension and training courses have contributed to a global change in river management philosophy and practice, resulting in improved on-ground river condition, use of geomorphology in river management, and end-user professional development. Using the River Styles Framework has changed the way river management decisions are made and the level of intervention and resources required to reach environmental health targets. This has been achieved through the generation of catchment-scale and regional-level templates derived from use of the Framework 6 . These templates are integrated with other biophysical science tools and datasets to enhance planning, monitoring and forecasting of freshwater resources 6 . The Framework is based on foundation research on the form and function of streams and their interaction with the landscape through which they flow (fluvial geomorphology) 5 , 40 .

The Framework has a pioneering structure and coherence due to its open-ended and generic approach to river analysis and interpretation. Going well beyond off-the-shelf imported manuals for river management, the Framework has been adopted because of its innovative approach to geomorphic analysis of rivers. The Framework is tailored for the landscape and institutional context of any given place to produce scaffolded, coherent and consistent datasets for catchment-specific decision making. Through on-ground communication of place-based results, the application of the Framework spans local, state, national and international networks and initiatives. The quality of the underlying science has been key to generating the confidence required in industry and government to adopt geomorphology as a core scientific tool to support river management in a range of geographical, societal and scientific contexts 6 .

The impact of this case study spans conceptual use, instrumental use and capacity building 4 . These are defined, respectively, as shaping ways of thinking and alerting policy makers and practitioners to an issue; direct use of research in policy and planning decisions; and education, training and development of end-users 4 , 41 , 42 . The River Styles Framework has led to the establishment of new decision-making processes while also changing philosophy and practice so that on-ground impacts can be realised.

Impact does not just occur at one point in time. Rather, it comes and goes, or builds and is sustained. Representing and measuring this is challenging, particularly for an environmental case study, and especially for an initiative built around a framework that does not produce a traditional ‘product’, ‘widget’, or ‘invention’ 4 . More traditional metrics-based indicators, such as the number of lives saved or the amount of money generated, cannot be deployed for these types of case studies 4 , 9 . It is particularly challenging to unravel the commercial value and benefits of adopting and using an initiative (or Framework) that is part of a much bigger, international paradigm shift in river management philosophy and practice.

Similarly, how do you measure the environmental, social, economic or cultural impacts of an initiative where the benefits can take many years (and in the case of rivers, decades) to emerge, and how do you then link and attribute those impacts directly to the design, development, use and extension of that initiative in many different places at many different times? For the River Styles Framework, on-ground impacts in terms of improved river condition and recovery are occurring 43 , but other environmental, social and economic benefits may be years or decades away. Impactful initiatives often reshape the contextual setting that frames the next phase of science and management practices, which in turn has further implications for policy and institutional settings, and for societal (socio-cultural) and environmental benefits. This is currently the case in assessing the impact of the River Styles Framework.

The method: a new, 3-part impact mapping approach

Using the River Styles framework as an environmental case study, Fig. 2 presents a 3-part impact mapping approach that contains (1) a context strip, (2) an impact map, and (3) soft impact intensity strips to capture the scope of the impact and the conditions under which it has been realised. This approach provides a template that can be used or replicated by others in their own impact mapping exercises 44 .

Figure 2: The research impact map for the River Styles Framework case study. The map contains three parts: a context strip, an impact map, and soft impact intensity strips.

The cartographic approach to mapping impact shown in Fig. 2 provides a mechanism to display a large amount of complex information and interactions in a style that conveys and communicates an immediate snapshot of the research impact pathway, its components and associated impacts. The map can be analysed to identify patterns and interactions between components as part of ex post assessment, and as a basis for ex ante impact forecasting.

The 3-part impact map output is produced in an interactive online environment, acknowledging that impact maps are live, open-ended documents that evolve as new impacts emerge and inputs, activities, outputs and outcomes continue. The map changes when activities, outputs or outcomes that the developers had forgotten, or considered to be peripheral, later re-appear as having been influential to a stakeholder, community or network not originally considered as an end-user. Such activities, outputs and outcomes can be inserted into a live map to broaden its base and understand the impact. Also, by clicking on each icon on the map, pop-up bubbles contain details that are specific to each component of the case study. This functionality can also be used to journal or archive important information and evidence in the ‘back-end’ of the map. Such evidence is often required, or called upon, in research impact assessments. Figure 2 only provides a static reproduction of the map output for the River Styles Framework. The fully worked, interactive, River Styles Framework impact map can be viewed at https://indd.adobe.com/view/c9e2a270–4396–4fe3-afcb-be6dd9da7a36 .

Context is a key driver of research impact 1 , 45 . Context can provide goals for research agendas and impact that feeds into ex ante assessments, or provide a lens through which to analyse the conditions within which certain impacts emerged and occurred as part of ex post assessment. Part 1 of our mapping approach produces a context strip that situates the case study (Fig. 2 ). This strip is used to document settings occurring outside of academia before, during and throughout the case study. Context can be local, national or global and examples can be gathered from a range of sources such as reports, the media and personal experience. For the River Styles case study only key context moments are shown. Context for this case study is the constantly changing communities of practice in global river restoration that are driven by (or inhibited by) the environmental setting (coded with a leaf symbol), policy and institutional settings (coded with a building symbol), social and cultural settings (coded with a crowd symbol), and economic settings (coded with a dollar symbol). For most case studies, these extrinsic setting categories will be similar, but others can be added to this part of the map if needed.

Part 2 of our mapping approach produces an impact map using the Research Impact Pathway (Fig. 1 ). This impact map (Fig. 2 ) documents the time-series of inputs (coded with a blue hexagon), activities (coded with a green hexagon), outputs (coded with a yellow hexagon), outcomes (coded with a red hexagon) and impacts (coded with a purple hexagon) that occurred for the case study. Heavier bordered hexagons and intensity strips represent international aspects and uptake. To start, only the primary inputs, activities, outputs and outcomes are mapped. A hexagon appears when there is evidence that an input, activity, output or outcome has occurred. Evidence includes event advertisements, reports, publications, website mentions, funding applications, awards, personnel appointments and communications products.

However, in conducting this standard mapping exercise it soon became evident that it is difficult to map and attribute impacts, particularly for an initiative that has a wide range of both direct and indirect impacts. To address this, our approach distinguishes between ‘hard’ impacts and ‘soft’ impacts. Hard impacts can be directly attributed to an initiative or invention, whereas soft impacts can be indirectly attributed to an initiative or invention. The inclusion of soft impacts is critical as they are often an important and sometimes dominant part of the impact mix. Both quantitative and qualitative measures and evidence can be used to attribute hard or soft impacts. There is not a direct one-to-one relationship between quantitative measurement of hard impacts and qualitative appraisal of soft impacts.

Hard impacts are represented as purple hexagons in the body of the impact map. For the River Styles Framework we have only placed a purple hexagon on the impact map where the impact can be ‘named’ and for which there is ‘hard’ evidence (in the form of a report, policy, strategic plan or citation) that directly mentions and therefore attributes the impact to River Styles. Most of these are multi-year impacts and the position of the hexagons on the map is noted at the first mention.

For many case studies, particularly those that impact on the environment, society and culture, attributing impact directly to an initiative or invention is not necessarily easy or straightforward. To address this, our approach contains a third element, soft impact intensity strips (Fig. 2 ), to recognise, document, capture and map the extent and influence of impact created by an initiative or invention. This is represented as a heat intensity chart (coded as a purple bar of varying intensity) and organised under the environmental, social and economic categories that are often used to measure Triple-Bottom-Line (TBL) benefits in sustainability and research and development (R&D) reporting (e.g., refs. 7 , 46 ). Within these broad categories, soft impacts are categorised according to the dimensions of impacts of science used by the OECD 1 . These include environmental, societal, cultural, economic, policy, organisational, scientific, symbolic and training impacts. Each impact strip for soft impacts uses different levels of purple shading (to match the purple hexagon colour in the impact map) to visualise the timing and intensity of soft impacts. For the River Styles Framework, the intensity of the purple colour is used to show those impacts that have been most impactful (darker purple), the timing of initiation, growth or step-change in intensity of each impact, the rise and wane of some impacts, and the longevity of others. A heavy black border is used to note the timing of internationalisation of some impacts. The heat intensity chart was constructed by quantitatively representing qualitative sentiment in testimonials, interviews, course evaluations and feedback, surveys and questionnaires, acknowledgements and recognitions, documentation of collaborations and networks, use of River Styles concepts, and reports on the development of spin-off frameworks. Quantitative representation of qualitative sentiment was achieved using time-series keyword searches and expert judgement; these are just two methods by which the level of heat intensity can be measured and assigned 9 .
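As a concrete illustration of how intensity levels might be derived from time-series keyword counts, the sketch below maps yearly mention counts onto discrete shading levels. The thresholds, counts and four-level scale are hypothetical assumptions for illustration only, not the values or procedure used to build Fig. 2.

```python
# Illustrative only: converting yearly keyword-mention counts from qualitative
# sources (testimonials, feedback, reports) into 'soft impact' shading levels.

def intensity(count: int) -> int:
    """Map a yearly mention count onto a 0-3 shading level (assumed thresholds)."""
    if count == 0:
        return 0          # no shading
    if count <= 2:
        return 1          # light purple
    if count <= 5:
        return 2          # mid purple
    return 3              # dark purple

# Hypothetical yearly mention counts of the initiative in qualitative sources.
mentions = {2004: 1, 2008: 3, 2012: 6, 2016: 9}
strip = {year: intensity(n) for year, n in mentions.items()}
print(strip)  # {2004: 1, 2008: 2, 2012: 3, 2016: 3}
```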

The outcome: impact of the River Styles Framework case study

Figure 2 , and its interactive online version, present the impact map for the River Styles Framework initiative, and Table 1 documents the detail of the River Styles impact story from pre-1996 to post-2020. The distribution of colour-coded hexagons and the intensity of purple on the soft impact intensity strips in Fig. 2 demonstrate the development and maturation of the initiative and the emergence of impact.

In the first phase (pre-1996–2002), blue input, green activity and yellow output hexagons dominate. The next phase (2002–2005) was an intensive phase of output production (yellow hexagons); it is during this phase that red outcome hexagons appear and intensify. From 2006, purple impact hexagons appear for the first time, representing hard impact outside of academia. Soft impacts also start to emerge more intensely (Fig. 2 ). 2008–2015 represents a phase of domestic consolidation of yellow outputs, red outcomes and purple impacts, and the start of international uptake. Some of this impact is under direct influence and some is independent of the developers of the River Styles Framework (Fig. 1 ). Purple impact hexagons become more numerous during the 2008–2015 period and soft impacts intensify further. 2016–2018 (and beyond) represents a phase of extension into international markets, collaborations and impact (heavier bordered hexagons and intensity strips; Fig. 2 ). The domestic impacts that emerged most intensively post-2006 continue in the background. Green activity hexagons re-appear during this period, much as in the 1996–2002 phase, but in an international context: foundational science re-emerges, particularly internationally, with new collaborations. At the same time, yellow outputs and red outcomes continue.

For the River Styles case study, the challenge remains how to adequately attribute, measure and provide evidence for soft impacts 4 , which include:

a change in river management philosophy and practice

an improvement in river health and conservation of threatened species

the provision of an operational Framework that provides a common and consistent approach to analysis

the value of knowledge generation and databases for monitoring river health and informing river management decision-making for years to come

the integration into, and improvement in, river management policy

a change in prioritisation that reduces risk in decision-making and cost savings on-the-ground

professional development to produce a better trained, higher quality workforce and increased graduate employability

the creation of stronger networks of river professionals and a common suite of concepts that enable communication

more confident and appropriate use of geomorphic principles by river management practitioners

an improvement in citizen knowledge and reduced community conflict in river management practice

Lessons learnt by applying research impact mapping to a real case study

When applying the Research Impact Pathway and undertaking impact mapping for a case study it becomes obvious that generating and realising impact is not a linear process and it is never complete, and in many aspects it cannot be planned 8 , 9 , 29 . Rather, the pathway has many highways, secondary roads, intersections, some dead ends or cul-de-sacs and many unexpected detours of interest along the way.

Cycles of input, activity, outputs, outcomes and impact occur throughout the process. There are phases where greater emphasis is placed on inputs and activities, or phases of productivity that produce outputs and outcomes, and there are phases where the innovation or initiative gains momentum and produces a flurry of benefits and impacts. However, throughout the journey, inputs, activities, outputs and outcomes are always occurring, and the impact pathway never ends. Some impacts come and go while others are sustained.

The saying “being in the right place at the right time with the right people” has some truth. Impact can be probabilistically generated ex ante by the researcher(s) regularly placing themselves and their outputs in key locations or ‘rooms’ and in ‘moments’ where the chance of non-academic translation is high 47 . Context is also critical 45 . Economic, political, institutional, social and environmental conditions need to come together if an innovation or initiative is to ‘get off the ground’, gain traction and lead to impact (e.g., Fig. 2 ). Ongoing and sustained support is vital. An innovation funded 10 years ago may not receive funding today, and an innovation funded today may not lead to impact unless the right sets of circumstances and support are in place. This is, in part, a serendipitous process that involves the calculated creation of circumstances aligned to evoke the ‘black swan’ event of impact 48 . The ‘black swan’ effect, coined by Nassim Nicholas Taleb, is a metaphor for an unanticipated event that becomes reinterpreted with the benefit of hindsight, or alternatively, an event that exists ‘outside the model’. For example, black swans were presumed by Europeans not to exist until they were encountered in Australia and scientifically described in 1790. Such ‘black swan’ events are a useful device in ex post assessment for characterising those pivotal moments when a research program translates into research impact. While the exact nature of such events cannot be anticipated, by understanding the ways in which ‘black swan’ events take place in the context of research impact, researchers can manufacture scenarios that optimise the probability of provoking a ‘black swan’ event and therefore translating their research project into research impact, albeit in an unexpected way. One ‘black swan’ event for the River Styles Framework occurred between 1996 and 2002 (Table 1 ). Initial motivations for developing the Framework reflected inappropriate use of geomorphic principles derived elsewhere to address management concerns for distinctive river landscapes and ecosystems in Australia. Although initial applications and testing of the Framework were local (regional-scale), senior-level personnel in the original funding agency, Land and Water Australia (blue input hexagon in 1997; Fig. 2 ), advised that the principles be made generic so that the Framework could be used in any landscape setting. The impact of this ‘moment’ only became apparent much later, when the Framework was adopted to inform place-based, catchment-specific river management applications in various parts of the world.

What is often not recognised is the time lag in the research impact process 9 . Depending on the innovation or initiative, this is, at best, a decadal process. Of critical importance is setting the foundations for impact. The ‘gem of an idea’ needs to be translated into a sound program of research, testing (proof of concept), peer review and demonstration. These foundations must generate a level of confidence in the innovation or initiative before uptake. A level of branding may be required to make the innovation or initiative stand out from the crowd. Drivers are required, both within and beyond the university setting, to incentivise academics and encourage them to go outside their comfort zone to apply and translate their research in ‘real-world’ settings. Maintaining passion, patience and persistence throughout the journey is one of the most hidden and unrecognised parts of this process.

Some impacts are not foreseeable and surprises are inevitable. Activities, outputs and outcomes that may initially have seemed like a dead end often re-appear in a different context or in a different network. Other outputs or outcomes take off very quickly and are implemented with immediate impact. Catalytic moments are sometimes required for uptake and impact to be realised 8 . These surprises are particularly obvious when an innovation or initiative enters the independent uptake stage, called impact under indirect influence in Fig. 1 . In this phase the originating researchers, developers or inventors are often absent or peripheral to the impact process. Other people or organisations have the confidence to use the innovation or initiative (as intended, or in some cases not as intended), and find new ways of taking the impact further. The innovation or initiative takes on a life of its own in a snowball effect. Independent uptake is not easily measured, but it is a critical indicator of impact. Unless the foundations are solid and sound, prospects for sustained impact are diminished.

The maturity and type of impact also vary in different places at different times. This is particularly the case for innovations and initiatives where local and domestic uptake is strong, but international impact lags. Some places may be well advanced on the uptake part of the impact journey, firmly embedding the benefits while developing new extensions, add-ons and spin-offs with inputs and activities. Elsewhere, uptake will only have just begun, such that outputs and outcomes are the primary focus for now, with the aim of generating impact soon. In some instances, authorities and practitioners are either unaware or are yet to be convinced that the innovation or initiative is relevant and useful for their circumstances. In these places the focus is on the input and activity phases necessary to generate outputs and outcomes relevant to their situation and context. Managing this variability while maintaining momentum is critical to creating impact.

Future directions for the practice of impact mapping and assessment

The process of engaging with impact and undertaking impact mapping for an environmental case study has been a reflective, positive but challenging experience. Our example is typical of many of the issues that must be addressed when undertaking research impact mapping and assessments where both ‘hard’ and ‘soft’ impacts are generated. Our 3-part impact mapping approach helps deal with these challenges and provides a mechanism to visualise and enhance communication of research impact to a broad range of scientists and policy practitioners from many fields, including industry and government agencies, as well as citizens who are interested in learning about the tangible and intangible benefits that arise from investing in research.

Such impact mapping work cannot be undertaken quickly 44, 45 . Lateral thinking is required about what research impact really means, moving beyond the perception in academia that outputs and outcomes equal impact 4, 9, 12 . This is not the case. The research impact journey does not end at outcomes. The real measure of research impact is when an initiative gains a ‘life of its own’ and is independently picked up and used for environmental, social or economic benefit in the ‘real world’. This is when an initiative moves from the original researcher(s) owning the entirety of the impact to a situation in which the researcher(s) make an ongoing contribution to vastly scaled-up sets of collective impacts that are no longer controlled by any one actor, community or network. Penfield et al. 9 relate this to ‘knowledge creep’, where new data, information or frameworks become accepted and are absorbed over time.

Mapping impact requires careful consideration of how an initiative is developed, how it emerges and is used, and what benefits result. This process, in its own right, provides solid foundations for future planning and consideration of possible (or unforeseen) opportunities to develop the impact further as part of ex ante impact forecasting 1, 44 . Its value also lies in communicating and teaching others, using worked case studies, about what impact can mean, demonstrating how it can evolve and mature, and outlining the possible pathways of impact as part of ex post impact assessment 1, 44 .

With greater emphasis being placed on impact in research policy and reporting in many parts of the world, it is timely to consider the level of ongoing support required to genuinely capture and assess impact over yearly and decadal timeframes 20 . Creation of environments and cultures in which impact can be incubated, nourished and supported aids effective planning, knowledge translation and engagement. Ongoing research is required to consider, more broadly and laterally, what is measured, what indicators are used, and the evidence required to assign attribution. This remains a challenge not just for the case study documented here, but for the process of impact assessment more generally 1 , 9 . Continuous monitoring of impacts (both intended and unintended) is needed. To do this requires support and systems to gather, archive and track data, whether quantitative or qualitative, and adequately build evidence portfolios 20 . A keen eye is needed to identify, document and archive evidence that may seem insignificant at the time, but can lead to a step-change in impact, or a re-appearance elsewhere on the pathway.

Impact reporting extends beyond traditional outreach and service roles in academia 16 , 19 . Despite the increasing recognition of the importance of impact and its permeation into academic lives, it is yet to be formally built into many academic and professional roles 9 . To date, the rewards are implicit rather than explicit 44 . Support is required if impact planning and reporting for assessment is to become a new practice for academics.

Managing the research impact process is vital, but it is also important to be open to new ideas and avenues for creating impact at different stages of the process. It is important to listen and to be attuned to developments outside of academia, and learn to live with the creative spark of uncertainty as we expect the unexpected!

Change history

08 November 2019

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

Organisation for Economic Cooperation and Development (OECD). Enhancing Research performance through Evaluation, Impact Assessment and Priority Setting  (Directorate for Science, Technology and Innovation, Paris, 2009). This is a ‘go-to’ guide for impact assessment in Research and Development, used in OECD countries .

Morgan, B. Income for outcome. Australia and New Zealand are experimenting with ways of assessing the impact of publicly funded research. Nat. Outlook 511, S72–S75 (2014). This Nature Outlook article reports on how Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) mapped their research programs against impact classes using the Research Impact Pathway .

Cvitanovic, C. & Hobday, A. J. Building optimism at the environmental science-policy-practice interface through the study of bright spots. Nat. Commun. 9 , 3466 (2018). This Nature Communications paper presents a commentary on the key principles that underpin what are termed ‘bright spots’, case studies where science and research has successfully influenced and impacted on policy and practice, as a means to inspire optimism in humanity’s capacity to address environmental challenges .

Rau, H., Goggins, G. & Fahy, F. From invisibility to impact: recognising the scientific and societal relevance of interdisciplinary sustainability research. Res. Policy 47 , 266–276 (2018). This paper uses interdisciplinary sustainability research as a centrepiece for arguing the need for alternative approaches for conceptualising and measuring impact that recognise and capture the diverse forms of engagement between scientists and non-scientists, and diverse uses and uptake of knowledge at the science-policy-practice interface .

Brierley, G. J. & Fryirs, K. A. Geomorphology and River Management: Applications of the River Styles Framework . 398 (Blackwell Publications, Oxford, 2005). This book contains the full River Styles Framework set within the context of the science of fluvial geomorphology .

Brierley, G. J. et al. Geomorphology in action: linking policy with on-the-ground actions through applications of the River Styles framework. Appl. Geogr. 31 , 1132–1143 (2011).

Australian Research Council (ARC). EI 2018 Framework  (Commonwealth of Australia, Canberra, 2017). This document and associated website contains the procedures for assessing research impact as part of the Australian Research Council Engagement and Impact process, and the national report, outcomes and impact cases studies assessed in the 2018 round .

Matt, M., Gaunand, A., Joly, P.-B. & Colinet, L. Opening the black box of impact–Ideal type impact pathways in a public agricultural research organisation. Res. Policy 46, 207–218 (2017). This article presents a metrics-based approach to impact assessment, called the Actor Network Theory approach, to systematically code variables used to measure ex-post research impact in the agricultural sector .

Penfield, T., Baker, M. J., Scoble, R. & Wykes, M. C. Assessment, evaluations, and definitions of research impact: a review. Res. Eval. 23 , 21–32 (2014). This article reviews the concepts behind research impact assessment and takes a focussed look at how impact assessment was implemented for the UK’s Research Excellence Framework (REF) .

Weiss, C. H. The many meanings of research utilization. Public Adm. Rev. 39 , 426–431 (1979).

Cooper, A. & Levin, B. Some Canadian contributions to understanding knowledge mobilisation. Evid. Policy 6 , 351–369 (2010).

Watermeyer, R. Issues in the articulation of ‘impact’: the responses of UK academics to ‘impact’ as a new measure of research assessment. Stud. High. Educ. 39 , 359–377 (2014).

Hicks, D. Overview of Models of Performance-based Research Funding Systems. In: Organisation for Economic Cooperation and Development (OECD), Performance-based Funding for Public Research in Tertiary Education Institutions: Workshop Proceedings . 23–52 (OECD Publishing, Paris, 2010). https://doi.org/10.1787/9789264094611-en (Accessed 27 Aug 2019).

Hicks, D. Performance-based university research funding systems. Res. Policy 41, 251–261 (2012).

Etzkowitz, H. Networks of innovation: science, technology and development in the triple helix era. Int. J. Technol. Manag. Sustain. Dev. 1 , 7–20 (2002).

Perkmann, M. et al. Academic engagement and commercialisation: a review of the literature on university-industry relations. Res. Policy 42 , 423–442 (2013).

Leydesdorff, L. & Etzkowitz, H. Emergence of a Triple Helix of university—industry—government relations. Sci. Public Policy 23 , 279–286 (1996).

Higher Education Funding Council for England (HEFCE). Research Excellence Framework . Second consultation on the assessment and funding of research. London. https://www.hefce.ac.uk (Accessed 12 Aug 2019).

Smith, S., Ward, V. & House, A. ‘Impact’ in the proposals for the UK’s Research Excellence Framework: Shifting the boundaries of academic autonomy. Res. Policy 40 , 1369–1379 (2011).

Canadian Academy of Health Sciences (CAHS). Making an Impact. A Preferred Framework and Indicators to Measure Returns on Investment in Health Research  (Canadian Academy of Health Sciences, Ottawa, 2009). This report presents the approach to research impact assessment adopted by the health science industry in Canada using the Research Impact Pathway .

Research Manitoba. Impact Framework . Research Manitoba, Winnipeg, Manitoba, Canada. (2012–2019). https://researchmanitoba.ca/impacts/impact-framework/ (Accessed 3 June 2019).

United Kingdom National Institute for Health Research (UKNIHR). Research and Impact . (NIHR, London, 2019).

Science Foundation Ireland (SFI). Agenda 2020: Excellence and Impact . (SFI, Dublin, 2012).

StarMetrics. Science and Technology for America’s Reinvestment: Measuring the Effects of Research on Innovation, Competitiveness and Science. Process Guide (Office of Science and Technology Policy, Washington DC, 2016).

European Commission (EU). Guidelines on Impact Assessment . (EU, Brussels, 2015).

Ministry of Business, Innovation and Employment (MBIE). The impact of science: Discussion paper . (MBIE, Wellington, 2018).

University Grants Committee. Panel-specific Guidelines on Assessment Criteria and Working Methods for RAE 2020. University Grants Committee, (Government of the Hong Kong Special Administrative Region, Hong Kong, 2018).

Harland, K. & O’Connor, H. Broadening the Scope of Impact: Defining, assessing and measuring impact of major public research programmes, with lessons from 6 small advanced economies . Public issue version: 2, Small Advanced Economies Initiative, (Department of Foreign Affairs and Trade, Dublin, 2015).

Chubb, J. & Watermeyer, R. Artifice or integrity in the marketization of research impact? Investigating the moral economy of (pathways to) impact statements within research funding proposals in the UK and Australia. Stud. High. Educ. 42 , 2360–2372 (2017).

Oliver Schwarz, J. Ex ante strategy evaluation: the case for business wargaming. Bus. Strategy Ser. 12 , 122–135 (2011).

Neugebauer, S., Forin, S. & Finkbeiner, M. From life cycle costing to economic life cycle assessment-introducing an economic impact pathway. Sustainability 8 , 428 (2016).

Legner, C., Urbach, N. & Nolte, C. Mobile business application for service and maintenance processes: Using ex post evaluation by end-users as input for iterative design. Inf. Manag. 53 , 817–831 (2016).

Organisation for Economic Cooperation and Development (OECD). Fact sheets: Approaches to Impact Assessment; Research and Innovation Process Issues; Causality Problems; What is Impact Assessment?; What is Impact Assessment? Mechanisms . (Directorate for Science, Technology and Innovation, Paris, 2016).

River Styles. https://riverstyles.com (Accessed 2 May 2019).

United Nations Sustainable Development Goals. https://sustainabledevelopment.un.org (Accessed 2 May 2019).

Kasprak, A. et al. The Blurred Line between form and process: a comparison of stream channel classification frameworks. PLoS ONE 11 , e0150293 (2016).

Fryirs, K. Developing and using geomorphic condition assessments for river rehabilitation planning, implementation and monitoring. WIREs Water 2 , 649–667 (2015).

Fryirs, K. & Brierley, G. J. Assessing the geomorphic recovery potential of rivers: forecasting future trajectories of adjustment for use in river management. WIREs Water 3 , 727–748 (2016).

Fryirs, K. A. & Brierley, G. J. What’s in a name? A naming convention for geomorphic river types using the River Styles Framework. PLoS ONE 13 , e0201909 (2018).

Fryirs, K. A. & Brierley, G. J. Geomorphic Analysis of River Systems: An Approach to Reading the Landscape . 345 (John Wiley and Sons: Chichester, 2013).

Meagher, L., Lyall, C. & Nutley, S. Flows of knowledge, expertise and influence: a method for assessing policy and practice impacts from social science research. Res. Eval. 17 , 163–173 (2008).

Meagher, L. & Lyall, C. The invisible made visible. Using impact evaluations to illuminate and inform the role of knowledge intermediaries. Evid. Policy 9 , 409–418 (2013).

Fryirs, K. A. et al. Tracking geomorphic river recovery in process-based river management. Land Degrad. Dev. 29 , 3221–3244 (2018).

Kuruvilla, S., Mays, N., Pleasant, A. & Walt, G. Describing the impact of health research: a Research Impact Framework. BMC Health Serv. Res. 6 , 134 (2006).

Barjolle, D., Midmore, P. & Schmid, O. Tracing the pathways from research to innovation: evidence from case studies. EuroChoices 17 , 11–18 (2018).

Department of Environment and Heritage (DEH). Triple bottom line reporting in Australia. A guide to reporting against environmental indicators . (Commonwealth of Australia, Canberra, 2003).

Le Heron, E., Le Heron, R. & Lewis, N. Performing Research Capability Building in New Zealand’s Social Sciences: Capacity–Capability Insights from Exploring the Work of BRCSS’s ‘sustainability’ Theme, 2004–2009. Environ. Plan. A 43 , 1400–1420 (2011).

Taleb, N. N. The Black Swan: The Impact of the Highly Improbable . 2nd edn. (Penguin, London, 2010).

Fryirs, K. A. & Brierley, G. J. Practical Applications of the River Styles Framework as a Tool for Catchment-wide River Management : A Case Study from Bega Catchment. (Macquarie University Press, Sydney, 2005).

Brierley, G. J. & Fryirs, K. A. (eds) River Futures: An Integrative Scientific Approach to River Repair . (Island Press, Washington, DC, 2008).

Fryirs, K., Wheaton, J., Bizzi, S., Williams, R. & Brierley, G. To plug-in or not to plug-in? Geomorphic analysis of rivers using the River Styles Framework in an era of big data acquisition and automation. WIREs Water . https://doi.org/10.1002/wat2.1372 (2019).

Rinaldi, M. et al. New tools for the hydromorphological assessment and monitoring of European streams. J. Environ. Manag. 202 , 363–378 (2017).

Rinaldi, M., Surian, N., Comiti, F. & Bussettini, M. A method for the assessment and analysis of the hydromorphological condition of Italian streams: The Morphological Quality Index (MQI). Geomorphology 180–181 , 96–108 (2013).

Rinaldi, M., Surian, N., Comiti, F. & Bussettini, M. A methodological framework for hydromorphological assessment, analysis and monitoring (IDRAIM) aimed at promoting integrated river management. Geomorphology 251 , 122–136 (2015).

Gurnell, A. M. et al. A multi-scale hierarchical framework for developing understanding of river behaviour to support river management. Aquat. Sci. 78 , 1–16 (2016).

Belletti, B., Rinaldi, M., Buijse, A. D., Gurnell, A. M. & Mosselman, E. A review of assessment methods for river hydromorphology. Environ. Earth Sci. 73 , 2079–2100 (2015).

Belletti, B. et al. Characterising physical habitats and fluvial hydromorphology: a new system for the survey and classification of river geomorphic units. Geomorphology 283 , 143–157 (2017).

O’Brien, G. et al. Mapping valley bottom confinement at the network scale. Earth Surf. Process. Landf. 44 , 1828–1845 (2019).

Sinha, R., Mohanta, H. A., Jain, V. & Tandon, S. K. Geomorphic diversity as a river management tool and its application to the Ganga River, India. River Res. Appl. 33 , 1156–1176 (2017).

O’Brien, G. O. & Wheaton, J. M. River Styles Report for the Middle Fork John Day Watershed, Oregon . Ecogeomorphology and Topographic Analysis Lab, Prepared for Eco Logical Research, and Bonneville Power Administration, Logan. 215 (Utah State University, Utah, 2014).

Marçal, M., Brierley, G. J. & Lima, R. Using geomorphic understanding of catchment-scale process relationships to support the management of river futures: Macaé Basin, Brazil. Appl. Geogr. 84 , 23–41 (2017).

Acknowledgements

We thank Simon Mould for building the online interactive version of the impact map for River Styles and Dr Faith Welch, Research Impact Manager at the University of Auckland for comments on the paper. The case study documented in this paper builds on over 20 years of foundation research in fluvial geomorphology and strong and lasting collaboration between researchers, scientists and managers at various universities and government agencies in many parts of the world.

Author information

Authors and affiliations.

Department of Environmental Sciences, Macquarie University, Sydney, NSW, 2109, Australia

Kirstie A. Fryirs

School of Environment, University of Auckland, Auckland, 1010, New Zealand

Gary J. Brierley

Research Services, Macquarie University, Sydney, NSW, 2109, Australia

Contributions

K.F. conceived, developed and wrote this paper. G.B., T.D. contributed to, and edited, the paper. K.F., T.D. conceived, developed and produced the impact mapping toolbox.

Corresponding author

Correspondence to Kirstie A. Fryirs .

Ethics declarations

Competing interests.

K.F. and G.B. are co-developers of the River Styles Framework. River Styles foundation research has been supported through competitive grant schemes and university grants. Consultancy-based River Styles short courses taught by K.F. and G.B. are administered by Macquarie University. River Styles contract research is administered by Macquarie University and the University of Auckland. The River Styles trade mark expires in May 2020. T.D. declares no conflict of interest.

Additional information

Peer review information Nature Communications thanks Barbara Belletti and Gary Goggins for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Fryirs, K.A., Brierley, G.J. & Dixon, T. Engaging with research impact assessment for an environmental science case study. Nat Commun 10 , 4542 (2019). https://doi.org/10.1038/s41467-019-12020-z

Received : 17 June 2019

Accepted : 15 August 2019

Published : 04 October 2019

DOI : https://doi.org/10.1038/s41467-019-12020-z

This article is cited by

Applying a framework to assess the impact of cardiovascular outcomes improvement research.

  • Mitchell N. Sarkies
  • Suzanne Robinson

Health Research Policy and Systems (2021)


Data Analysis in Research: Types & Methods

Content Index

  • What is data analysis in research?
  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis

What is data analysis in research?

Definition of research in data analysis: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction, achieved through summarization and categorization, which helps find patterns and themes in the data for easy identification and linking. The third is data analysis itself, which researchers carry out in both top-down and bottom-up fashion.

On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “data analysis and data interpretation is a process representing the application of deductive and inductive logic to the research data.”

Why analyze data in research?

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But what if there is no question to ask? It is still possible to explore data without a problem – we call it ‘data mining’, which often reveals interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when initiating the analysis. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.

Types of data in research

Every kind of data has the quality of describing things once a specific value is assigned to it. For analysis, these values need to be organized, processed, and presented in a given context to make them useful. Data can be in different forms; here are the primary data types.

  • Qualitative data: When the data presented has words and descriptions, we call it qualitative data . Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data . This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: questions about age, rank, cost, length, weight, scores, etc. all produce this type of data. You can present such data in graphical format or charts, or apply statistical analysis methods to it. The Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups. However, an item included in the categorical data cannot belong to more than one group. Example: a person responding to a survey by stating their living style, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data (a short sketch follows this list).
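
As a concrete illustration of the chi-square test mentioned above, here is a minimal Python sketch using SciPy on a made-up contingency table (smoking habit by marital status); the counts are invented and serve only to show the mechanics.

```python
from scipy.stats import chi2_contingency

# Rows: marital status (single, married); columns: smoker, non-smoker (made-up counts)
observed = [
    [30, 70],
    [45, 105],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value would suggest the two categorical variables are not independent.
```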

Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complicated information is a challenging process; hence it is typically used for exploratory research and data analysis .

Although there are several ways to find patterns in textual information, a word-based method is the most relied upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual. Here the researchers usually read the available data and find repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.
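
A minimal Python sketch of such a word-frequency pass is shown below; the open-ended responses and the stopword list are invented for illustration.

```python
from collections import Counter
import re

responses = [
    "Lack of food is the biggest problem in our village",
    "Hunger and food prices worry every family",
    "Clean water and food security come first",
]  # hypothetical open-ended answers

stopwords = {"the", "and", "of", "in", "is", "our", "every", "come", "a"}
words = []
for text in responses:
    words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]

# The most frequent terms ('food' here) become candidate codes for further analysis
print(Counter(words).most_common(5))
```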

The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’
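
The keyword-in-context idea can likewise be sketched in a few lines of Python; the transcript and the keyword ‘diabetes’ below are hypothetical.

```python
import re

def kwic(text, keyword, window=4):
    """Return the words immediately before and after each occurrence of keyword."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            hits.append((tokens[max(0, i - window):i], tokens[i + 1:i + 1 + window]))
    return hits

transcript = ("My mother managed her diabetes with diet alone, "
              "but I worry my diabetes will need medication.")
for before, after in kwic(transcript, "diabetes"):
    print(" ".join(before), "| diabetes |", " ".join(after))
```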

The scrutiny-based technique is also one of the highly recommended  text analysis  methods used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, used to differentiate how a specific text is similar to or different from another.

For example: To find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types .

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from the enormous data.

There are several techniques to analyze the data in qualitative research, but here are some commonly used methods,

  • Content Analysis:  It is widely accepted and the most frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from text, images, and sometimes from physical items. When and where to use this method depends on the research questions.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observation, and  surveys . Most of the time, the stories or opinions shared by people are analyzed to find answers to the research questions.
  • Discourse Analysis:  Similar to narrative analysis, discourse analysis is used to analyze interactions with people. Nevertheless, this particular method considers the social context under which or within which the communication between the researcher and respondent takes place. In addition, discourse analysis also focuses on the lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory:  When you want to explain why a particular phenomenon happened, using grounded theory to analyze qualitative data is the best approach. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When researchers use this method, they might alter explanations or produce new ones until they arrive at a conclusion.

Data analysis in quantitative research

Preparing data for analysis

The first stage in research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to understand whether the collected data sample meets the pre-set standards or is a biased data sample. It is again divided into four different stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent has answered all the questions in an online survey, or that the interviewer asked all the questions devised in the questionnaire.

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in some fields incorrectly or sometimes skip them accidentally. Data editing is a process wherein the researchers confirm that the provided data is free of such errors. They conduct necessary consistency and outlier checks to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses . If a survey is completed with a sample size of 1,000, the researcher will create age brackets to distinguish the respondents based on their age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile.
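
The three preparation phases can be illustrated with a short, hypothetical pandas sketch; the column names, values and bracket boundaries are assumptions made purely for the example.

```python
import pandas as pd

df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "age": [23, 41, 250, 35],          # 250 is an obvious entry error
    "satisfaction": [4, None, 5, 3],   # a missing answer
})

# Phase I (validation): flag incomplete responses
incomplete = df[df["satisfaction"].isna()]

# Phase II (editing): flag implausible values for manual checking
age_outliers = df[(df["age"] < 15) | (df["age"] > 100)]

# Phase III (coding): group ages into brackets so small buckets can be analysed
df["age_bracket"] = pd.cut(df["age"], bins=[0, 25, 45, 65, 120],
                           labels=["18-25", "26-45", "46-65", "65+"])

print(incomplete, age_outliers, df[["respondent_id", "age_bracket"]], sep="\n\n")
```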

Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical analysis is the most favored approach for analyzing numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. The method is again classified into two groups: first, ‘descriptive statistics’, used to describe data; second, ‘inferential statistics’, which helps in comparing the data.

Descriptive statistics

This method is used to describe the basic features of versatile types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not go beyond making conclusions; the conclusions are again based on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods (a short code sketch follows these lists).

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to demonstrate distribution by various points.
  • Researchers use this method when they want to showcase the most commonly or averagely indicated response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • Range equals the difference between the highest and lowest scores.
  • Variance and standard deviation quantify how far observed scores deviate from the mean.
  • It is used to identify the spread of scores by stating intervals.
  • Researchers use this method to show how spread out the data is, and to identify how much the spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores helping researchers to identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
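
As promised above, here is a minimal Python sketch computing one measure from each family on a small, made-up set of test scores.

```python
import numpy as np
import pandas as pd

scores = pd.Series([52, 61, 61, 70, 74, 78, 83, 90, 95])  # invented scores

frequency = scores.value_counts()                                       # measures of frequency
mean, median, mode = scores.mean(), scores.median(), scores.mode()[0]   # central tendency
value_range = scores.max() - scores.min()                               # dispersion
variance, std = scores.var(), scores.std()
quartiles = np.percentile(scores, [25, 50, 75])                         # measures of position

print(mean, median, mode, value_range, round(std, 1), quartiles)
```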

For quantitative research, the use of descriptive analysis often gives absolute numbers, but these alone are never sufficient to demonstrate the rationale behind those numbers. Nevertheless, it is necessary to think of the best method for research and data analysis suiting your survey questionnaire and the story researchers want to tell. For example, the mean is the best way to demonstrate students’ average scores in schools. It is better to rely on descriptive statistics when the researchers intend to keep the research or outcome limited to the provided  sample  without generalizing it. For example, when you want to compare the average voting done in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample that represents that population. For example, you can ask some 100-odd audience members at a movie theater whether they like the movie they are watching. Researchers then use inferential statistics on the collected  sample  to reason that about 80-90% of people like the movie. (A short code sketch of both areas follows the list below.)

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and demonstrates something about the population parameter.
  • Hypothesis test: It’s about sampling research data to answer the survey research questions. For example, researchers might be interested to understand whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.
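
A minimal sketch of both ideas in Python, using invented numbers for the movie-theatre and multivitamin examples above; it is illustrative only, not a recommended analysis plan.

```python
import numpy as np
from scipy import stats

# Estimating a parameter: suppose 84 of 100 sampled viewers said they liked the film.
liked, n = 84, 100
p_hat = liked / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se  # ~95% confidence interval
print(f"Estimated share who like the film: {p_hat:.0%} (95% CI {ci_low:.0%}-{ci_high:.0%})")

# Hypothesis test: do children taking the multivitamin score higher at a game? (made-up scores)
with_vitamin = [12, 15, 14, 16, 13, 17, 15]
without_vitamin = [11, 12, 13, 12, 14, 12, 13]
t_stat, p_value = stats.ttest_ind(with_vitamin, without_vitamin)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```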

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables,  cross-tabulation  is used to analyze the relationship between multiple variables. Suppose the provided data has age and gender categories presented in rows and columns; a two-dimensional cross-tabulation helps with seamless data analysis and research by showing the number of males and females in each age category.
  • Regression analysis: For understanding the strength of the relationship between two variables, researchers rely on the primary and commonly used regression analysis method, which is also a type of predictive analysis. In this method you have an essential factor called the dependent variable, along with one or more independent variables, and you undertake efforts to find out the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to be ascertained in an error-free random manner.
  • Frequency tables: This statistical procedure summarizes how often each value or category of a variable occurs, and is a quick way to check distributions and spot data-entry errors before deeper analysis.
  • Analysis of variance (ANOVA): This statistical procedure is used to test the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means the research findings were significant. In many contexts, ANOVA testing and variance analysis are similar. (A short sketch of cross-tabulation, regression, and ANOVA follows this list.)
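
The sketch below illustrates three of these methods (cross-tabulation, simple regression, and one-way ANOVA) on a small, made-up dataset; the column names and values are assumptions for demonstration only.

```python
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "age_band": ["18-30", "18-30", "31-50", "31-50", "18-30", "31-50", "31-50", "18-30"],
    "ad_spend": [10, 12, 15, 18, 9, 20, 17, 11],
    "sales":    [25, 27, 33, 40, 22, 44, 38, 26],
    "region":   ["north", "north", "south", "south", "east", "east", "south", "north"],
})

# Cross-tabulation: counts of gender within each age band
print(pd.crosstab(df["age_band"], df["gender"]))

# Regression: effect of the independent variable (ad_spend) on the dependent variable (sales)
slope, intercept, r_value, p_value, std_err = stats.linregress(df["ad_spend"], df["sales"])
print(f"sales = {intercept:.1f} + {slope:.2f} * ad_spend (R^2 = {r_value**2:.2f})")

# One-way ANOVA: does mean sales differ across regions?
groups = [g["sales"].values for _, g in df.groupby("region")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_anova:.3f}")
```
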
Considerations in research data analysis

  • Researchers must have the necessary research skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers must possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Usually, research and data analytics projects differ by scientific discipline; therefore, getting statistical advice at the beginning of analysis helps design a survey questionnaire, select data collection methods, and choose samples.

  • The primary aim of data research and analysis is to derive ultimate insights that are unbiased. Any mistake in collecting data, selecting an analysis method, or choosing an audience sample, or approaching any of these with a biased mind, will lead to a biased inference.
  • However sophisticated the research data and analysis, it cannot rectify poorly defined objectives or outcome measurements. Whether the design is at fault or the intentions are not clear, a lack of clarity might mislead readers, so avoid that practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find a way to deal with everyday challenges like outliers, missing data, data altering, data mining, or developing graphical representation.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that the enterprises willing to survive in the hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to the new market needs.

QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.


Institute for Employment Research National Guidance Research Forum

Impact Analysis: Can the Guidance Community Learn Anything about Impact Research from Other Disciplines?

This briefing paper introduces the concepts of impact analysis and evidence-based practice (EBP). Drawing on two empirical examples, it compares the use of randomised controlled trials to measure the impact of interventions in the medical and guidance professions, highlighting the benefits and limitations of the approach.

Impact Analysis.

Impact analysis is an evaluative process, designed to provide scientifically credible information to legitimise the existence of a service or use of an intervention, which is intended to make a difference or induce benefit[1]. In essence, impact analysis is a method of measuring outcomes in order to address the question ‘Are we making a difference?' There are various outcomes that can be measured, in both the medical and guidance fields, all of which cannot be addressed in this paper, but which could include:

providing value for money; achieving organisational goals; achieving government agendas; providing a meaningful service which is of value and use to recipients; benefiting an individual client/patient on a personal level.

This paper focuses on the impact analysis of specific interventions through a comparison between a medical intervention - diagnosis, treatment, prognosis, therapy, and medication - and a guidance intervention - computer-assisted guidance. It considers the use of experimental and quasi-experimental measurements and concentrates on the impact of such interventions at the level of the individual client/patient.

Evidence-based Practice

Evidence-based Practice (EBP) has its origins in the medical field and emerged in the early 1990s. It has subsequently been adopted by many other disciplines throughout the UK, including the guidance community. Sackett et al. 1997:71 defined evidence-based medicine as

‘The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients, based on skills which allow the doctor to evaluate both personal experience and external evidence in a systematic and objective manner’.[2]

EBP is based on the premise that practice should be based on knowledge derived from scientific endeavour. By adopting an EBP approach, practitioners should be able to base their decisions in relation to a client/patient on readily available empirical research; thus providing the best possible treatment/service to that individual[3].

EBP is a cyclical process, the first ‘loop’ of which involves the identification of existing research evidence, an assessment of its relevance, validity and reliability and the application of appropriate research evidence to practice in order to inform decision making. The second loop of the cycle involves empirical evaluation and reflection on current practice, including the dissemination of findings so that they can inform the work of other practitioners[4].

Advocates of EBP argue that an EBP approach to professional decision-making results in best practice. It empowers professionals and increases their confidence in their ability to ‘do a good job’.

‘Medicine is effective because of the application of biomedical science to the understanding of disease’. [5]

However, opponents claim that EBP can limit a practitioner’s autonomy and that it is over-simplistic, only providing evidence of questions that can be measured and controlled and of those that can be analysed using quantitative methods[6]. It has been argued that the approach does not take into account practitioners’ knowledge and expertise. Clinical decisions are not solely based on empirical research but on the practitioners’ judgement based on experience.

Impact analysis can, therefore, fulfil a significant role in informed professional decision-making. However, it is imperative that the evidence base used to inform practice is relevant, useful, valid, and is based on sound methodology[7]. The remainder of this paper will consider how far the lesson learned in the field of medicine in relation to EBP can be applied to the field of guidance.

Experimental/Quasi-experimental Methods.

Experimental and quasi-experimental methods are one approach to assessing impact. Experimental methods and evidence based on randomised control trials (RCT) enjoy ‘near hegemony’[8] in the health and medical field and are regarded as the ‘gold standard’[9].

‘The classical and still standard methodological paradigm for outcome evaluation is experimental design and its various quasi-experimental approximations. The superiority of experimental methods for investigating the causal effects of deliberate intervention is widely acknowledged’. [10]

‘They [RCTs] are the standard and accepted approach for evaluating interventions. This design provides the most scientifically rigorous methodology and avoids the bias which limit the usefulness of alternative non-randomised designs’ [11]

A randomised control trial involves the random selection and assignment of participants into a ‘treatment group’ and a ‘comparison or control group’. Typically, the ‘treatment group’ receives a new intervention, while the ‘control’ group receives an existing intervention or a placebo. The purpose is to allow the investigator to evaluate the impact of the new intervention relative to the existing intervention or no intervention at all[12]. Neither the investigator nor the participant knows in advance which intervention the participant will receive.
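
As a simple illustration of the random-assignment step only (not of blinding or trial conduct), the following Python sketch allocates a hypothetical pool of participants to treatment and control groups; the participant labels and seed are arbitrary.

```python
import random

participants = [f"P{i:03d}" for i in range(1, 39)]  # e.g. 38 recruited participants

rng = random.Random(2024)   # fixed seed so the allocation can be reproduced and audited
shuffled = participants[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
treatment_group = shuffled[:half]   # receive the new intervention
control_group = shuffled[half:]     # receive the existing intervention or placebo

print(len(treatment_group), "assigned to treatment;", len(control_group), "to control")
```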

Professionals and academics in the medical field have identified several limitations of randomised control trials, which mirror the challenges faced by the guidance community when using these methods to assess and analyse impact. These limitations include:

the practicalities of implementing random assignment; controlling external variables after assignment that could potentially influence the trial.[13] Researcher bias - variation in processes which occur after assignment including: - poor programme implementation; - augmentation of the control group with non-programme services; - poor retention of participants in programme and control conditions; - receipt of incomplete or inconsistent programme services by participants; and - attrition or incomplete follow up measurement. [14]

• ethical and legal restraints resulting from withholding services from otherwise eligible people;
• empirically measuring the impact of questions and phenomena that cannot be controlled, measured and counted: a participant’s life, history and feelings cannot easily be translated into biomedical variables and statistics;[15]
• statistical prediction of the likely effect of an intervention is usually based on the ‘average’ outcome aggregated across all participants in the trial; is it possible to generalise these findings to the wider population?[16]
• small samples can yield only a small amount of information, which cannot necessarily explain why certain effects were or were not found;
• decisions and methods of intervention are based on much more than just the results of controlled experiments; professional knowledge consists of interpretive action and interaction, factors that involve communication, opinions and experience;[17]
• experimental methods produce evidence of cause-and-effect relationships, but they take no account of the process or the context within which the intervention occurred and cannot explain why it occurred[18].

RCTs are used to measure impact in both the medical and guidance professions. Although the two professions face some similar challenges, it could be argued that the use of these methods is, on balance, more applicable to medicine. The following examples illustrate how RCTs have been used in the respective fields; the benefits and limitations of their use are then discussed.

A Guidance Randomised Controlled Trial:

Effects of DISCOVER on the Career Maturity of Middle School Students:[19]

This study evaluated the effects of DISCOVER, a computer-assisted career guidance system, on the career maturity of 38 students enrolled in a rural middle school in the United States. Students randomly assigned to the treatment group worked with DISCOVER for approximately 1 hour a day over a 2-week period, whereas students in the control group did not have access to the DISCOVER programme. Career maturity was measured by Screening Form A-2 of the Career Maturity Inventory’s Attitude Scale (CMI-AS; Crites, 1978), completed before and after the intervention. This scale includes 50 true-false items representing a variety of attitudes toward the career decision-making process. Students in the control group were taught a unit on oral and written business communication skills in the regular classroom and did not have access to the computer lab. Furthermore, students in the treatment group were asked not to discuss their experiences with DISCOVER with other students, to avoid an exchange of information about the programme. Results indicated significant gains in career maturity among students in the treatment group (p < .05).
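As a rough illustration of the kind of pre/post comparison such a study relies on, the sketch below computes gain scores for a treatment and a control group and tests the difference with an independent-samples t-test. The data are invented and the test is a generic choice; the original study’s data and statistical procedure are not reproduced here.

```python
# Illustrative sketch with invented data: comparing pre/post gain scores
# between a treatment and a control group using an independent-samples t-test.
from scipy import stats

treatment_gain = [4, 6, 3, 5, 7, 2, 5, 6, 4, 5]   # post-test minus pre-test scores
control_gain = [1, 0, 2, -1, 1, 2, 0, 1, 1, 0]

t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in gains is significant at the 5% level")
```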

A Medical Randomised Controlled Trial:

Treatment of active ankylosing spondylitis with infliximab: a randomised controlled multi-centre trial. [20]

The aim of this study was to assess the effectiveness of infliximab, an antibody to tumour necrosis factor, in the treatment of patients with ankylosing spondylitis (a chronic inflammatory rheumatic disease), a disease with very few treatment options. In this 12-week placebo-controlled multi-centre study, 35 patients with active ankylosing spondylitis were randomly assigned to intravenous infliximab and 35 to placebo. The primary outcome was a regression of disease activity of at least 50% at week 12; this was achieved by a significantly greater proportion of the infliximab group than of the placebo group (9%). Function and quality of life also improved significantly on infliximab but not on placebo. Treatment with infliximab was generally well tolerated, but three patients had to stop treatment because of side effects. To assess response, validated clinical criteria from the ankylosing spondylitis working group were used, including disease activity, functional indices, metrology and quality of life.

Benefits and Limitations

The Participants.

In clinical trials it is possible to ensure that the ‘treatment’ and ‘control’ groups are appropriately matched according to the physical condition from which they are suffering. In the majority of clinical trials, the participants are patients who are diagnosed with the same physical condition, in the same stage of development and who have the same prognosis. The physical condition is the only characteristic that is of concern, and individual personalities and characteristics are irrelevant. The criteria for inclusion ensure that generalisations about the effectiveness of a drug for patients with a specified condition at a particular stage of development can be made.

In a guidance setting, participants can be selected according to specific characteristics including age, gender, socio-economic status, and employment status. However, it is difficult to determine and match individual traits and behaviours. Traits such as motivation, capability, capacity and level of ability cannot be controlled for but will have implications for the effectiveness of an intervention and significantly affect the results. For example, in the study above, it was possible to ensure that participants were in the same year groups and that the sample was representative in terms of ethnicity and gender. However, the study could not take into account the students’ level of motivation and other personal characteristics, making generalisations more problematic.

The intervention:

In terms of medical interventions, when the intervention is the administration of a particular treatment or drug, no other intervention between pre-test and post-test (other than an act of God) will affect the patient’s condition. The patient in the treatment group receives the new drug, whereas the patient in the control group receives either an existing drug or a placebo; patient care is otherwise the same.

In terms of guidance interventions, although a specific intervention such as a computer-assisted guidance programme may be undertaken by the participants in the treatment group and not by the control group, confounding variables are harder to control. There may be opportunities for the treatment group to pass on their knowledge to the control group. In the above example, students in the treatment group were asked not to disclose their experiences of the DISCOVER programme, but this is not a reliable control against contamination of results, especially when the participants are children. Other confounding influences include:

• Chance encounters: after the treatment, a participant may, on returning home, encounter a friend or stranger who is aware of an employment or business opportunity and pass on information, which may result in the participant finding employment in a way that cannot be considered a result of the intervention.
• Simultaneous interventions: for example, the advice of friends and family.

The outcome of a clinical trial is whether or not the new treatment or drug benefits the patients in the treatment group, in that their physical condition improves. There is something tangible to measure and compare with the control group, such as the regression of ankylosing spondylitis above. A physical effect is significantly easier to measure and record statistically.

In a guidance setting, there is no immediate single tangible outcome that can be measured effectively with statistics. There are various outcomes that a guidance intervention may have, whether increased motivation, increased self-awareness, increased employability, increased knowledge, or indeed securing employment. It may be possible to measure such outcomes qualitatively but not quantitatively.

Conclusions

The evidence suggests that although both professions face many of the same challenges when using scientific methods to analyse the impact of interventions, the nature of guidance-related research lends itself to less scientific approaches that capture the qualitative as well as the quantitative difference an intervention can make to individual clients. Although lessons can be learned from the use of RCTs to measure impact in the medical profession, attempting to adopt similar experimental or quasi-experimental methods for guidance research is unlikely to yield valid and reliable results.

However, the methods adopted to measure impact at an organisational or national level may be more comparable. Services are often assessed on the basis of the following performance indicators:

• Customer satisfaction
• Access to services
• Waiting times for appointments and follow-up appointments
• Outcomes

In both professions, data can be collected using similar methods including:

• Observation
• Customer feedback
• Management information

These quantitative measures are an indication of organisational achievement against prescribed targets, some but not all of which will be impact measures.

In my research into the medical profession, I have come across several documents which may be of use in a future report on this topic; they may be found at www.doh.gov.uk/picconsultation/nhspaf.htm, under the headings of the NHS Plan and the NHS Performance Assessment Framework.

Guidelines and Regulation of Research

Clinical trials in the UK are subject to research governance[21] and numerous guidelines. A number of changes have taken place in recent years designed to ensure that clinical research is of the highest achievable scientific and ethical standard. Several of these changes relate specifically to research connected to drug development and have been introduced at the international level, including the International Conference on Harmonisation Good Clinical Practice Guideline (ICH GCP)[22]. Actions taken to improve the performance of clinical research in the UK can be summarised as follows:

• The introduction of the Research Governance Framework for Health and Social Care has set in place a comprehensive set of principles for the organisation, management and corporate governance of research within healthcare.
• The NHS Research and Development Forum has been established as the body responsible for the dissemination of good practice in research management in the health service[23].
• The Medical Research Council (MRC)[24] has been associated with randomised controlled trials for over 50 years. It produces guidelines such as the MRC guidelines for good clinical practice in clinical trials and ensures that those who are funded to conduct research on its behalf involving human participation agree to adhere to guidelines that safeguard participants and ensure that the data gathered are of high quality.
• The MRC is also involved in joint projects with the Department of Health to address issues arising from the implementation of the EU Clinical Trials Directive (Directive 2001/20/EC). This directive aims to protect trial participants and to simplify and harmonise trials across Europe. The UK’s Medicines & Healthcare Products Regulatory Agency (MHRA) consulted widely on the draft regulations and legislation to give effect to the Directive; the regulations are the Medicines for Human Use (Clinical Trials) Regulations 2003 (MLX 287).
• A model Clinical Trial Agreement[25] has been developed for use in connection with contract clinical trials sponsored by pharmaceutical companies and carried out by NHS Trusts in England. There is a requirement under the NHS Research Governance Framework for Health and Social Care (RGF) for pharmaceutical companies to enter into a contractual agreement with NHS Trusts when clinical trials involve NHS patients.

In a field where life is at stake, such as the medical profession, clear and comprehensive guidelines governing the conduct of clinical trials are imperative.

Research in the guidance sector is not governed by statutory policies and procedures in the same way as in the medical profession. This is perhaps unsurprising, as research into guidance policy and practice is highly unlikely to put life at risk. However, as career education and guidance becomes increasingly diverse and takes account of wider personal and social issues, failure to conduct research in a safe and ethical manner could have serious implications for the participants, the interventions they receive and the researchers.

In 1993, the British Psychological Society (BPS) published its Ethical Principles for Conducting Research with Human Participants, which state that:

…investigations must consider the ethical implications and psychological consequences for the participants in their research…The investigation should be considered from the standpoint of all participants; foreseeable threats to their psychological well-being, health, values or dignity should be eliminated.’[26]

Other considerations include informed consent, privacy, harm, exploitation, confidentiality, coercion and consequences for the future[27].

In January 1999, ESOMAR produced guidelines[28] for interviewing children and young people that recommend that the welfare of participants should be the overriding consideration. The rights of the child must be safeguarded and researchers must be protected against possible allegations of misconduct. At the very least, researchers working with children and young people under the age of 18 and vulnerable adults should apply for Criminal Records Bureau Clearance.

Although it is not obligatory for social researchers to comply with these guidelines, adherence should be encouraged to protect both the researcher and the participants.

BIBLIOGRAPHY/FURTHER READING SUGGESTIONS:

Abrams, H. (Feb, 2001) Outcome measures: In health care today, you can’t afford not to do them. Hearing Journal

Alderson, P. Roberts, I. (Feb 5, 2000) Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research. British Medical Journal. Vol.320, pp. 376 - 377.

Barber, J.A. Thompson, S.G. (Oct 31, 1998) Analysis and interpretation of cost data in randomised controlled trials: review of published studies. British Medical Journal. Vol. 317, pp. 1195 - 1200.

Barker, J. Gilbert, D. (Feb 19, 2000) Evidence produced in evidence-based medicine needs to be relevant. (Letter to the Editor). British Medical Journal. Vol. 320, pp. 515.

Barton, S. (Jul 29, 2000) Which clinical studies provide the best evidence? (Editorial). British Medical Journal. Vol.321, pp.255 - 256.

Barton, S. (Mar 3, 2001) Using clinical evidence: having the evidence in your hand is just a start – but a good one. (Editorial). British Medical Journal. Vol. 322, pp. 503 - 504.

Braun, J. et al. (2002) Treatment of Active Ankylosing Spondylitis With Infliximab: A Randomised Controlled Multi-centre Trial. THE LANCET. Vol. 359, pp. 1187-1193.

Chantler, C. (2002) The second greatest benefit to mankind? THE LANCET, Vol. 360, pp. 1870-1877.

Culpepper, L. Gilbert, T.T. (1999) Evidence and ethics. THE LANCET. Vol. 353, pp. 829-31.

DeMets, D.L. Pocock, S.J. Julian, D.G. (1999) The agonising negative trend in monitoring of clinical trials. THE LANCET. Vol 354, pp. 1983-88

Falshaw, M. Carter, Y.H. Gray, R.W. (Sept 2, 2000) Evidence should be accessible as well as relevant. (Letter to the Editor). British Medical Journal. Vol. 321, p.567.

Gilber, J. Morgan, A. Harmon, R.J. (Apr 2003) Pretest-posttest comparison group designs: analysis and interpretation. (clinicians’ Guide to Research Methods and Statistics). Journal of the American Academy of Child and Adolescent Psychiatry. Vol. 42:4, pp. 500

Glanville, J. Haines, M. Auston, I. (Jul 18, 1998) Finding information on clinical effectiveness. British Medical Journal. Vol. 317, pp. 200 - 203.

Haynes, B. Haines, A. (Jul 25, 1998) Barriers and bridges to evidence based clinical practice. (Getting Research Findings into Practice, part 4). British Medical Journal. Vol. 317, pp. 273 - 276.

Irvine, D. (Apr 3, 1999) The performance of doctors: the new professionalism. THE LANCET. Vol.353, pp.1174-1177.

Lipsey, M.W. Cordray, D.S. (2000) Evaluation methods for social intervention. Annual Review of Psychology. Vol. 51, pp. 345-375.

Lock, K. (May 20, 2000) Health impact assessment. British Medical Journal. Vol. 320, pp. 1395 - 1398.

Luzzo, D.A., Pierce, G. (1996) Effects of DISCOVER on the Career Maturity of Middle School Students. Career Development Quarterly. Vol.45(2), pp.170-172.

Malterud, K. (Aug 4, 2001) The art and science of clinical knowledge: evidence beyond measures and numbers. THE LANCET. Vol. 358, pp. 397-400.

Mant, D. (Feb 27, 1999) Can randomised trials inform clinical decisions about individual patients? British Medical Journal. Vol. 353, pp. 743-746.

March, J.S. Curry, J.F. (Feb, 1998) Predicting the outcome of treatment. Journal of Abnormal Child Psychology. Vol. 26 (1), pp. 39-51

Mariotto, A. Lam, A.M. Bleck, T.P. (Jul 22, 2000) Alternatives to evidence based medicine. British Medical Journal. Vol. 321, p.239

McColl, A. Smith, H. White, P. Field, J. (Jan 31, 1998) General practitioners’ perceptions of the route to evidence based medicine: a questionnaire survey. British Medical Journal. Vol. 316, pp. 361 - 365.

Medical Research Council (MRC) (Apr 2000) A Framework for Development and Evaluation of RCTs For Complex Interventions to Improve Health: A discussion document. www.mrc.ac.uk

Medical Research Council (MRC) (Nov 2002) Cluster randomised trials: Methodological and ethical considerations: MRC Clinical trials series. www.mrc.ac.uk

Moher, D. et al (Aug 22, 1998) Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? THE LANCET. Vol. 352, pp. 609-613.

Newburn, T. (2001) What do we mean by evaluation? Children & Society, Vol. 15, pp. 5-13.

Paton, C.R. (Jan 16, 1999) Evidence-Based Medicine. British Medical Journal. Vol. 318, pp. 201.

Pogue, J. Salim, Yusuf. (Jan 3, 1998) Overcoming the limitations of current meta-analysis of randomised control trials. THE LANCET. Vol. 351, pp. 47-52.

Rosser, W.W. (Feb20, 1999) Application of evidence from randomised controlled trials to general practice. THE LANCET. Vol. 353, pp. 661-664.

Sheldon, T.A. Guyatt, G.H. Haines, A. (Jul 11, 1998) When to act on the evidence. (Getting Research Findings Into Practice, part 2). British Medical Journal. Vol. 317, pp. 139 - 142.

Smith, G.D. Ebrahim, S. Frankel, S. (Jan 27, 2001) how policy informs the evidence: “evidence based” thinking can lead to debased policy making. (Editorial). British Medical Journal. Vol. 322, pp. 184 - 185.

Sniderman, A.D. (Jul 24, 1999) Clinical trials, consensus conferences, and clinical practice. THE LANCET. Vol. 354, pp. 327-330.

Trinder, L. Reynolds, S. (2000) Evidence-Based Practice: A Critical Appraisal. Blackwell Science Ltd. Chapters 1-2.

Van Weel, C. Knottnerus, J.A. (1999) Evidence-based interventions and comprehensive treatment. THE LANCET. Vol. 353, pp. 916-18.

Medical Research Council electronic publications/ information.

http://www.mrc.ac.uk/index/publications/publications-electronic_publications-link2

http://www.mrc.ac.uk/index/current-research.htm

http://www.mrc.ac.uk/index/about.htm

http://www.mrc.ac.uk/index/current-research/current-clinical_research.htm

http://www.mrc.ac.uk/index/publications.htm

National Library of Medical Electronic Publications.

http://www.ncbi.nlm.nih.gov/pubmed

Database of Controlled Trials.

http://www.controlled-trials.com/isrctn/

Find articles search engine – articles in BMJ, HSJ.

http://www.findarticles.com/cf_0/m0902/n1_v26/20565427/p1/article.jhtml

British Medical Journal

http://bmj.bmjjournals.com/

Electronic publications of articles appearing in the LANCET journal

www.thelancet.com

Health service journal

http://www.hsj.co.uk/

National Institute for Clinical Excellence

http://www.nice.org.uk/

Department of Health website

www.doh.gov.uk

Centre for Guidance Studies, December 2003

[1] See Lipsey, M.W., Cordray, D.S. (2000) Evaluation Methods for Social Intervention. Annual Review of Psychology. Vol. 51, pp. 345-375.
[2] Trinder, L., Reynolds, S. (2000) Evidence-Based Practice: A Critical Appraisal. Blackwell Science Ltd. p. 19.
[3] Ibid. pp. 18-19.
[4] Ibid. pp. 22-23.
[5] Chantler, C. (2002) The second greatest benefit to mankind? THE LANCET. Vol. 360, pp. 1870-1877.

[6] Malterud, K. (Aug 2001) The Art and Science of Clinical Knowledge: Evidence Beyond Measures and Numbers. Qualitative Research Series. THE LANCET. Vol. 358, p. 397.
[7] See Barker, J., Gilbert, D. (Feb 19, 2000) Evidence Produced in Evidence Based Medicine Needs to be Relevant. (Letter to the Editor). British Medical Journal. Vol. 320, p. 515; and Alderson, P., Roberts, I. (Feb 5, 2000) Should Journals Publish Systematic Reviews that Find no Evidence to Guide Practice? British Medical Journal. Vol. 320, pp. 376-377.
[8] Newburn, T. (2001) What do We Mean By Evaluation? Children & Society. Vol. 15, pp. 5-13.
[9] Barton, S. (Jul 29, 2000) Which Clinical Studies Provide the Best Evidence? (Editorial). British Medical Journal. Vol. 321, pp. 255-256.
[10] Lipsey, M.W., Cordray, D.S. op. cit.
[11] Barber, J.A., Thompson, S.G. (Oct 31, 1998) Analysis and Interpretation of Cost Data in Randomised Controlled Trials: Review of Published Studies. British Medical Journal. Vol. 317, pp. 1195-1200.
[12] Gliner, J.A., Morgan, G.A., Harmon, R.J. (Apr 2003) Pretest-posttest Comparison Group Designs: Analysis and Interpretation. (Clinicians’ Guide to Research Methods and Statistics). Journal of the American Academy of Child and Adolescent Psychiatry. Vol. 42:4, p. 500.
[13] The above three limitations are recognised by Lipsey, M.W. and Cordray, D.S. op. cit.
[14] Ibid.
[15] Malterud, op. cit.
[16] Mant, D. (Feb 27, 1999) Can Randomised Trials Inform Clinical Decisions About Individual Patients? Evidence and Primary Care. THE LANCET. Vol. 353, pp. 743-746.
[17] Irvine, D. (Apr 3, 1999) The performance of doctors: the new professionalism. THE LANCET. Vol. 353, pp. 1174-1177.

[18]
[19] Luzzo, D.A., Pierce, G. (1996) Effects of DISCOVER on the Career Maturity of Middle School Students. Career Development Quarterly. Vol. 45(2), pp. 170-172.

[20] Braun, J. et al. (2002) Treatment of Active Ankylosing Spondylitis With Infliximab: A Randomised Controlled Multi-centre Trial. THE LANCET. Vol. 359, pp. 1187-1193.
[21] See the NHS Research Governance Framework for Health and Social Care at www.mrc.ac.uk
[22] See Guidance for R&D Managers in NHS Trusts and Clinical Research Departments in the Pharmaceutical Industry at www.doh.gov.uk
[23] http://www.doh.gov.uk/research/rd3/nhsrandd/rd3index.htm
[24] The information below is taken from the following website: www.mrc.ac.uk
[25] See www.doh.gov.uk/pictf/
[26] British Psychological Society (1993) Ethical Principles for Conducting Research with Human Participants. The Psychologist. Vol. 6, pp. 33-35.
[27] Hammersley, M., & Atkinson, P. (1995) Ethnography. London: Routledge.
[28] ESOMAR The World Association of Research Professionals (1999) Guideline on Interviewing Children and Young People. Published at http://www.esomar.nl/guidelines/interviewing_children_99.html

Explaining research performance: investigating the importance of motivation

  • Original Paper
  • Open access
  • Published: 23 May 2024
  • Volume 4, article number 105 (2024)


  • Silje Marie Svartefoss,
  • Jens Jungblut,
  • Dag W. Aksnes,
  • Kristoffer Kolltveit &
  • Thed van Leeuwen


In this article, we study the motivation and performance of researchers. More specifically, we investigate what motivates researchers across different research fields and countries and how this motivation influences their research performance. The basis for our study is a large-N survey of economists, cardiologists, and physicists in Denmark, Norway, Sweden, the Netherlands, and the UK. The analysis shows that researchers are primarily motivated by scientific curiosity and practical application and less so by career considerations. There are limited differences across fields and countries, suggesting that the mix of motivational aspects has a common academic core less influenced by disciplinary standards or different national environments. Linking motivational factors to research performance, through bibliometric data on publication productivity and citation impact, our data show that those driven by practical application aspects of motivation have a higher probability for high productivity. Being driven by career considerations also increases productivity but only to a certain extent before it starts having a detrimental effect.


Introduction

Motivation and abilities are known to be important factors in explaining employees’ job performance (Van Iddekinge et al. 2018), and in the vast scientific literature on motivation, it is common to differentiate between intrinsic and extrinsic motivational factors (Ryan and Deci 2000). In this context, path-breaking individuals are said to often be intrinsically motivated (Jindal-Snape and Snape 2006; Thomas and Nedeva 2012; Vallerand et al. 1992), and it has been found that the importance of these types of motivation differs across occupations and career stages (Duarte and Lopes 2018).

In this article, we address the issue of motivation for one specific occupation, namely researchers working at universities. Specifically, we investigate what motivates researchers across fields and countries (RQ1) and how this motivation is linked to their research performance (RQ2). The question of why people are motivated to do their jobs is interesting to address in an academic context, where work is usually harder to control and individuals tend to have a great deal of freedom in structuring their work. Moreover, there have been indications that academics possess an especially high level of motivation for their tasks that is not driven by a search for external rewards but by an intrinsic satisfaction derived from academic work (Evans and Meyer 2003; Leslie 2002). At the same time, elements of researchers’ performance are measurable through indicators of their publication activity: their productivity through the number of outputs they produce and the impact of their research through the number of citations their publications receive (Aksnes and Sivertsen 2019; Wilsdon et al. 2015).

Elevating research performance is high on the agenda of many research organisations (Hazelkorn 2015 ). How such performance may be linked to individuals’ motivational aspects has received little attention. Thus, a better understanding of this interrelation may be relevant for developing institutional strategies to foster environments that promote high-quality research and research productivity.

Previous qualitative research has shown that scientists are mainly intrinsically motivated (Jindal-Snape and Snape 2006 ). Other survey-based contributions suggest that there can be differences in motivations across disciplines (Atta-Owusu and Fitjar 2021 ; Lam 2011 ). Furthermore, the performance of individual scientists has been shown to be highly skewed in terms of publication productivity and citation rates (Larivière et al. 2010 ; Ruiz-Castillo and Costas 2014 ). There is a large body of literature explaining these differences. Some focus on national and institutional funding schemes (Hammarfelt and de Rijcke 2015 ; Melguizo and Strober 2007 ) and others on the research environment, such as the presence of research groups and international collaboration (Jeong et al. 2014 ), while many studies address the role of academic rank, age, and gender (see e.g. Baccini et al. 2014 ; Rørstad and Aksnes 2015 ). Until recently, less emphasis has been placed on the impact of researchers’ motivation. Some studies have found that different types of motivations drive high levels of research performance (see e.g. Horodnic and Zaiţ 2015 ; Ryan and Berbegal-Mirabent 2016 ). However, researchers are only starting to understand how this internal drive relates to research performance.

While some of the prior research on the impact of motivation depends on self-reported research performance evaluations (Ryan 2014 ), the present article combines survey responses with actual bibliometric data. To investigate variation in research motivation across scientific fields and countries, we draw on a large-N survey of economists, cardiologists, and physicists in Denmark, Norway, Sweden, the Netherlands, and the UK. To investigate how this motivation is linked to their research performance, we map the survey respondents’ publication and citation data from the Web of Science (WoS).

This article is organised as follows. First, we present relevant literature on research performance and motivation. Next, the scientific fields and countries are presented before we elaborate on our methodology. In the empirical analysis, we investigate variations in motivation across fields, gender, age, and academic position and then relate motivation to publications and citations as our two measures of research performance. In the concluding section, we discuss our findings and implications for national decision-makers and individual researchers.

Motivation and research performance

As noted above, the concepts of intrinsic and extrinsic motivation play an important role in the literature on motivation and performance. Here, intrinsic motivation refers to doing something for its inherent satisfaction rather than for some separable consequence. Extrinsic motivation refers to doing something because it leads to a separable outcome (Ryan and Deci 2000 ).

Some studies have found that scientists are mainly intrinsically motivated (Jindal-Snape and Snape 2006; Lounsbury et al. 2012). Research interests, curiosity, and a desire to contribute to new knowledge are examples of such motivational factors. Intrinsic motives have also been shown to be crucial when people select research as a career choice (Roach and Sauermann 2010). Nevertheless, scientists are also motivated by extrinsic factors. Several European countries have adopted performance-based research funding systems (Zacharewicz et al. 2019). In these systems, researchers do not receive direct financial bonuses when they publish, although such practices may occur at local levels (Stephan et al. 2017). Therefore, extrinsic motivation for such researchers may include salary increases, peer recognition, promotion, or expanded access to research resources (Lam 2011). According to Tien and Blackburn (1996), both types of motivation operate simultaneously, and their importance varies and may depend on the individual’s circumstances, personal situation, and values.

The extent to which different kinds of motivations play a role in scientists’ performance has been investigated in several studies. In these studies, bibliometric indicators based on the number of publications are typically used as outcome measures. Such indicators play a critical role in various contexts in the research system (Wilsdon et al. 2015 ), although it has also been pointed out that individuals can have different motivations to publish (Hangel and Schmidt-Pfister 2017 ).

Based on a survey of Romanian economics and business administration academics combined with bibliometric data, Horodnic and Zait ( 2015 ) found that intrinsic motivation was positively correlated with research productivity, while extrinsic motivation was negatively correlated. Their interpretations of the results are that researchers motivated by scientific interest are more productive, while researchers motivated by extrinsic forces will shift their focus to more financially profitable activities. Similarly, based on the observation that professors continue to publish even after they have been promoted to full professor, Finkelstein ( 1984 ) concluded that intrinsic rather than extrinsic motivational factors have a decisive role regarding the productivity of academics.

Drawing on a survey of 405 research scientists working in biological, chemical, and biomedical research departments in UK universities, Ryan (2014) found that (self-reported) variations in research performance can be explained by instrumental motivation based on financial incentives and internal motivation based on the individual’s view of themselves (traits, competencies, and values). In the study, instrumental motivation was found to have a negative impact on research performance: as the desire for financial rewards increases, the level of research performance decreases. In other words, researchers mainly motivated by money will be less productive and effective in their research. Conversely, internal motivation was found to have a positive impact on research performance. This was explained by highlighting that researchers motivated by their self-concept set internal standards that become a reference point that reinforces perceptions of competency in their environments.

Nevertheless, it has also been argued that intrinsic and extrinsic motivations for publishing are intertwined (Ma 2019 ). According to Tien and Blackburn ( 1996 ), research productivity is neither purely intrinsically nor purely extrinsically motivated. Publication activity is often a result of research, which may be intrinsically motivated or motivated by extrinsic factors such as a wish for promotion, where the number of publications is often a part of the assessment (Cruz-Castro and Sanz-Menendez 2021 ; Tien 2000 , 2008 ).

The negative relationship between external/instrumental motivation and performance and the positive relationship between internal/self-concept motivation and performance are underlined by Ryan and Berbegal-Mirabent ( 2016 ). Drawing on a fuzzy set qualitative comparative analysis of a random sampling of 300 of the original respondents from Ryan ( 2014 ), they find that scientists working towards the standards and values they identify with, combined with a lack of concern for instrumental rewards, contribute to higher levels of research performance.

Based on the above, this article will address two research questions concerning different forms of motivation and the relationship between motivation and research performance.

How does the motivation of researchers vary across fields and countries?

How do different types of motivations affect research performance?

In this study, the roles of three different motivational factors are analysed. These are scientific curiosity, practical and societal applications, and career progress. The study aims to assess the role of these specific motivational factors and not the intrinsic-extrinsic distinction more generally. Of the three factors, scientific curiosity most strongly relates to intrinsic motivation; practical and societal applications also entail strong intrinsic aspects. On the other hand, career progress is linked to extrinsic motivation.

In addition to variation in researchers’ motivations by field and country, we consider differences in relation to age, position and gender. Additionally, when investigating how motivation relates to scientific performance we control for the influence of age, gender, country and funding. These are dimensions where differences might be found in motivational factors given that scientific performance, particularly publication productivity, has been shown to differ along these dimensions (Rørstad and Aksnes 2015 ).

Research context: three fields, five countries

To address the research question about potential differences across fields and countries, the study is based on a sample consisting of researchers in three different fields (cardiology, economics, and physics) and five countries (Denmark, Norway, Sweden, the Netherlands, and the UK). Below, we describe this research context in greater detail.

The fields represent three different domains of science: medicine, social sciences, and the natural sciences, where different motivational factors may be at play. This means that the fields cover three main areas of scientific investigations: the understanding of the world, the functioning of the human body, and societies and their functions. The societal role and mission of the fields also differ. While a primary aim of cardiology research and practice is to reduce the burden of cardiovascular disease, physics research may drive technology advancements, which impacts society. Economics research may contribute to more effective use of limited resources and the management of people, businesses, markets, and governments. In addition, the fields also differ in publication patterns (Piro et al. 2013 ). The average number of publications per researcher is generally higher in cardiology and physics than in economics (Piro et al. 2013 ). Moreover, cardiologists and physicists mainly publish in international scientific journals (Moed 2005 ; Van Leeuwen 2013 ). In economics, researchers also tend to publish books, chapters, and articles in national languages, in addition to international journal articles (Aksnes and Sivertsen 2019 ; van Leeuwen et al. 2016 ).

We sampled the countries with a twofold aim. On the one hand, we wanted to have countries that are comparable so that differences in the development of the science systems, working conditions, or funding availability would not be too large. On the other hand, we also wanted to assure variation among the countries regarding these relevant framework conditions to ensure that our findings are not driven by a specific contextual condition.

The five countries in the study are all located in the northwestern part of Europe, with science systems that are foremost funded by block grant funding from the national governments (unlike, for example, the US, where research grants by national funding agencies are the most important funding mechanism) (Lepori et al. 2023 ).

In all five countries, the missions of the universities are composed of a blend of education, research, and outreach. Furthermore, the science systems in Norway, Denmark, Sweden, and the Netherlands have a relatively strong orientation towards the Anglo-Saxon world: publishing in the national language still occurs, but publishing in internationally oriented journals in which English is the language of publication is the norm (Kulczycki et al. 2018). These framework conditions ensure that those working in the five countries have somewhat similar missions to fulfil in their professions while also belonging to a common, mainly Anglophone science system.

However, in Norway, Denmark, Sweden, and the Netherlands, research findings in some of the social sciences, law, and the humanities are still published in a variety of languages. Hence, we avoided selecting a humanities field for this study due to a potential issue with cross-country comparability (Sivertsen 2019; Sivertsen and Van Leeuwen 2014; Van Leeuwen 2013).

Finally, the chosen countries vary regarding their level of university autonomy. When combining the scores for organisational, financial, staffing, and academic autonomy presented in the latest University Autonomy in Europe Scorecard published by the European University Association (EUA), the UK, the Netherlands, and Denmark have higher levels of autonomy compared to Norway and Sweden, with Swedish universities having less autonomy than their Norwegian counterparts (Pruvot et al. 2023). This variation is relevant for our study, as it ensures that our findings are not driven by responses from a higher education system with especially high or low autonomy, which can influence the motivation and satisfaction of academics working in it (Daumiller et al. 2020).

Data and methods

The data used in this article are a combination of survey data and bibliometric data retrieved from the WoS. The WoS database was chosen for this study due to its comprehensive coverage of research literature across all disciplines, encompassing the three specific research areas under analysis. Additionally, the WoS database is well-suited for bibliometric analyses, offering citation counts essential for this study.

Two approaches were used to identify the sample for the survey. Initially, a bibliometric analysis of the WoS using journal categories (‘Cardiac & cardiovascular systems’, ‘Economics’, and ‘Physics’) enabled the identification of key institutions with a minimum number of publications within these journal categories. Following this, relevant organisational units and researchers within these units were identified through available information on the units’ webpages. Included were employees in relevant academic positions (tenured academic personnel, post-docs, and researchers, but not PhD students, adjunct positions, guest researchers, or administrative and technical personnel).

Second, based on the WoS data, people were added to this initial sample if they had a minimum number of publications within the field and belonged to any of the selected institutions, regardless of unit affiliation. For economics, the minimum was five publications within the selected period (2011–2016). For cardiology and physics, where the individual publication productivity is higher, the minimum was 10 publications within the same period. The selection of the minimum publication criteria was based on an analysis of publication outputs in these fields between 2011 and 2016. The thresholds were applied to include individuals who are more actively engaged in research while excluding those with more peripheral involvement. The higher thresholds for cardiology and physics reflect the greater frequency of publications (and co-authorship) observed in these fields.
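A minimal sketch of this threshold step, with invented data and column names (not the authors’ code), might look as follows: authors at the selected institutions are retained only if they reach the field-specific publication minimum for 2011–2016.

```python
# Hypothetical sketch of the field-specific publication thresholds described above.
import pandas as pd

min_pubs = {"Economics": 5, "Cardiology": 10, "Physics": 10}

authors = pd.DataFrame({
    "author_id": [1, 2, 3, 4],
    "field": ["Economics", "Economics", "Physics", "Cardiology"],
    "pubs_2011_2016": [7, 3, 12, 9],
})

# Keep authors meeting the minimum publication count for their field
authors["meets_threshold"] = authors.apply(
    lambda row: row["pubs_2011_2016"] >= min_pubs[row["field"]], axis=1
)
print(authors[authors["meets_threshold"]])
```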

The benefit of this dual-approach strategy to sampling is that we obtain a more comprehensive sample: the full scope of researchers within a unit and the full scope of researchers that publish within the relevant fields. Overall, 59% of the sample were identified through staff lists and 41% through the second step involving WoS data.

The survey data were collected through an online questionnaire first sent out in October 2017 and closed in December 2018. In this period, several reminders were sent to increase the response rate. Overall, the survey had a response rate of 26.1% ( N  = 2,587 replies). There were only minor variations in response rates between scientific fields; the variations were larger between countries. Tables  1 and 2 provide an overview of the response rate by country and field.

Operationalisation of motivation

Motivation was measured by a question in the survey asking respondents what motivates or inspires them to conduct research, of which three dimensions are analysed in the present paper. The first two answer categories were related to intrinsic motivation (‘Curiosity/scientific discovery/understanding the world’ and ‘Application/practical aims/creating a better society’). The third answer category was more related to extrinsic motivation (‘Progress in my career [e.g. tenure/permanent position, higher salary, more interesting/independent work]’). Appendix Table A1 displays the distribution of respondents and the mean value and standard deviation for each item.

These three aspects of motivation do not measure the same phenomenon but seem to capture different dimensions of motivation (see Pearson’s correlation coefficients in Appendix Table A2). Curiosity/scientific discovery is not correlated with either career progress or practical application. However, there is a weak but significant positive correlation between career progress and practical application. These findings indicate that those motivated by career considerations are to some degree also motivated by practical application.
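For illustration, pairwise Pearson correlations between the three motivation items can be computed as in the small sketch below; the survey values shown are invented, not the study’s data.

```python
# Invented example data: pairwise Pearson correlations between motivation items.
import pandas as pd

survey = pd.DataFrame({
    "curiosity": [5, 5, 4, 5, 3, 4, 5, 4],
    "application": [3, 4, 2, 5, 3, 4, 2, 3],
    "career": [2, 3, 1, 4, 2, 3, 1, 2],
})
print(survey.corr(method="pearson").round(2))
```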

In addition to investigating how researchers’ motivation varies by field and country, we consider the differences in relation to age, position and gender as well. Field of science differentiates between economics, cardiology, physics, and other fields. The country variables differentiate between the five countries. Age is a nine-category variable. The position variable differentiates between full professors, associate professors, and assistant professors. The gender variable has two categories (male or female). For descriptive statistics on these additional variables, see Appendix Table A3 .

Publication productivity and citation impact

To analyse the respondents’ bibliometric performance, the Centre for Science and Technology Studies (CWTS) in-house WoS database was used. We identified the publication output of each respondent during 2011–2017 (limited to regular articles, reviews, and letters). For 16% of the respondents, no publications were identified in the database. These individuals had apparently not published in international journals covered by the database. However, in some cases, the lack of publications may be due to identification problems (e.g. change of names). Therefore, we decided not to include the latter respondents in the analysis.

Two main performance measures were calculated: publication productivity and citation impact. As an indicator of productivity, we counted the number of publications for each individual (as author or co-author) during the period. To analyse citation impact, a composite measure using three different indicators was used: the total number of citations (citation counts for all articles the respondent contributed to during the period, counting citations up to and including 2017), the mean normalised citation score (MNCS), and the proportion of publications among the 10% most cited articles in their fields (Waltman and Schreiber 2013). Here, the MNCS is an indicator for which the citation count of each article is normalised by subject, article type, and year, where 1.00 corresponds to the world average (Waltman et al. 2011). Based on these data, averages for the total publication output of each respondent were calculated. By using three different indicators, we can avoid biases or limitations attached to each of them. For example, using the MNCS alone, a respondent with only one publication would appear as a high-impact researcher if this article was highly cited; when the total citation count is also considered, this individual would usually perform less well.
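The normalisation idea behind the MNCS can be sketched as follows. This is a simplification with placeholder reference values, not the CWTS implementation: each article’s citation count is divided by the average citation count of publications with the same field, document type, and year, and the ratios are then averaged per researcher.

```python
# Simplified sketch of MNCS-style normalisation; reference averages are placeholders.
from statistics import mean

# (field, document type, year) -> world-average citation count (invented values)
reference_averages = {
    ("Economics", "article", 2014): 6.0,
    ("Physics", "article", 2014): 12.0,
}

def mncs(publications):
    """Mean of field/type/year-normalised citation ratios; 1.0 = world average."""
    ratios = [
        p["citations"] / reference_averages[(p["field"], p["doc_type"], p["year"])]
        for p in publications
    ]
    return mean(ratios)

pubs = [
    {"citations": 9, "field": "Economics", "doc_type": "article", "year": 2014},
    {"citations": 3, "field": "Economics", "doc_type": "article", "year": 2014},
]
print(round(mncs(pubs), 2))  # 1.0 here, i.e. exactly the (invented) world average
```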

The bibliometric scores were skewedly distributed among the respondents. Rather than using the absolute numbers, in this paper, we have classified the respondents into three groups according to their scores on the indicators. Here, we have used percentile rank classes (tertiles). Percentile statistics are increasingly applied in bibliometrics (Bornmann et al. 2013 ; Waltman and Schreiber 2013 ) due to the presence of outliers and long tails, which characterise both productivity and citation distributions.

As the fields analysed have different publication patterns, the respondents within each field were ranked according to their scores on the indicators, and their percentile rank was determined. For the productivity measure, this means that there are three groups that are equal in terms of number of individuals included: 1: Low productivity (the group with the lowest publication numbers, 0–33 percentile), 2: Medium productivity (33–67 percentile), and 3: High productivity (67–100 percentile). For the citation impact measure, we conducted a similar percentile analysis for each of the three composite indicators. Then everyone was assigned to one of the three percentile groups based on their average score: 1: Low citation impact (the group with lowest citation impact, 0–33 percentile), 2: Medium citation impact (33–67 percentile), and 3: High citation impact (67–100 percentile), cf. Table  3 . Although it might be argued that the application of tertile groups rather than absolute numbers leads to a loss of information, the advantage is that the results are not influenced by extreme values and may be easier to interpret.
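The within-field tertile classification can be illustrated with the short sketch below; the data frame and its values are invented, and the grouping uses a generic quantile cut rather than the authors’ exact procedure.

```python
# Invented data: rank respondents within their field and split into tertiles.
import pandas as pd

respondents = pd.DataFrame({
    "respondent": ["a", "b", "c", "d", "e", "f"],
    "field": ["Physics"] * 3 + ["Economics"] * 3,
    "n_publications": [4, 18, 40, 2, 7, 15],
})

labels = ["low", "medium", "high"]
respondents["productivity_group"] = (
    respondents.groupby("field")["n_publications"]
               .transform(lambda s: pd.qcut(s, q=3, labels=labels).astype(str))
)
print(respondents)
```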

Via this approach, we can analyse the two important dimensions of the respondents’ performance. However, it should be noted that the WoS database does not cover the publication output of the fields equally. Generally, physics and cardiology are very well covered, while the coverage of economics is somewhat lower due to different publication practices (Aksnes and Sivertsen 2019 ). This problem is accounted for in our study by ranking the respondents in each field separately, as described above. In addition, not all respondents may have been active researchers during the entire 2011–2017 period, which we have not adjusted for. Despite these limitations, the analysis provides interesting information on the bibliometric performance of the respondents at an aggregated level.

Regression analysis

To analyse the relationship between motivation and performance, we apply multinomial logistic regression rather than ordered logistic regression because we assume that the odds of respondents belonging to each category of the dependent variables are not equal (Hilbe 2017). The implication of this choice of model is that the model tests the probability of respondents being in one category compared to another (Hilbe 2017). This means that a reference or baseline category must be selected for each of the dependent variables (productivity and citation impact). Furthermore, the coefficient estimates show how the probability of being in one of the other categories decreases or increases compared to being in the reference category.

For this analysis, we selected the medium performers as the reference or baseline category for both our dependent variables. This enables us to evaluate how the independent variables affect the probability of being in the low performers group compared to the medium performers and the high performers compared to the medium performers.

To evaluate model fit, we started with a baseline model where only the types of motivation were included as independent variables. Subsequently, the additional variables were introduced into the model, and based on measures of model fit (Pseudo R2, -2LL, and the Akaike Information Criterion (AIC)), we concluded that the model with all additional variables included provides the best fit to the data for both dependent variables (see Appendix Tables A5 and A6). Additional control variables include age, gender, country, and funding. We include these variables as controls to obtain robust effects of motivation and not effects driven by other underlying factors. The type of funding was measured by variables where the respondents answered the following question: ‘How has your research been funded the last five years?’ The funding variable initially consisted of four categories: ‘No source’, ‘Minor source’, ‘Moderate source’, and ‘Major source’. In this analysis, we have combined ‘No source’ and ‘Minor source’ into one category (0) and ‘Moderate source’ and ‘Major source’ into another category (1). Descriptive statistics for the funding variables are available in Appendix Table A4. We do not control for the influence of field because of how the scientific performance variables are operationalised: the field normalisation implies that there is no variation across fields. We also do not control for position, as this variable is highly correlated with age, and we are therefore unable to include these two variables in the same model.
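A minimal sketch of this modelling strategy, using simulated data and statsmodels’ multinomial logit, is shown below. The variable names and values are assumptions for illustration only; the outcome is coded so that the medium-performer group (0) is the baseline against which the low (1) and high (2) groups are compared.

```python
# Illustrative sketch with simulated data: multinomial logistic regression with
# the medium performance group as the baseline category.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "curiosity": rng.integers(1, 6, n),    # motivation items on a 1-5 scale
    "application": rng.integers(1, 6, n),
    "career": rng.integers(1, 6, n),
    "age": rng.integers(1, 10, n),         # nine-category age variable
    "female": rng.integers(0, 2, n),
    "prod_group": rng.integers(0, 3, n),   # 0 = medium (baseline), 1 = low, 2 = high
})

X = sm.add_constant(df[["curiosity", "application", "career", "age", "female"]])
model = sm.MNLogit(df["prod_group"], X).fit(disp=False)
print(model.summary())
print("AIC:", round(model.aic, 1))
```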

The motivation of researchers

In the empirical analysis, we first investigate variation in motivation and then relate it to publications and citations as our two measures of research performance.

As Fig.  1 shows, the respondents are mainly driven by curiosity and the wish to make scientific discoveries. This is by far the most important motivation. Practical application is also an important source of motivation, while making career progress is not identified as being very important.

Figure 1. Motivation of researchers – percentage

As Table 4 shows, at the level of fields, there are no large differences, and the motivational profiles are relatively similar. However, physicists tend to view practical application as somewhat less important than cardiologists and economists do. Moreover, career progress is emphasised most by economists. Furthermore, as Table 5 shows, there are some differences in motivation between countries. For curiosity/scientific discovery and practical application, the variations across countries are minor, but researchers in Denmark tend to view career progress as somewhat more important than researchers in the other countries.

Furthermore, as Table 6 shows, women seem to view practical application and career progress as more important motivations than men do; these differences are also significant. Similar gender disparities have been reported in a previous study (Zhang et al. 2021).

There are also some differences in motivation across the additional variables worth mentioning, as Table 7 shows. Unsurprisingly, perhaps, there are significant moderate negative correlations between career progress and both age and position. This means that the importance of career progress as a motivation seems to decrease with increasing age or a move up the position hierarchy.

In the second part of the analysis, we relate motivation to research performance. We first investigate publications and productivity using the percentile groups. Here, we present the results using predicted probabilities because they are more easily interpretable than coefficient estimates. For the model with the productivity percentile groups as the dependent variable, the estimates for career progress were negative when comparing the medium productivity group to the high productivity group and the medium productivity group to the low productivity group. This result indicates that the probability of being in the high and low productivity groups decreases compared to the medium productivity group as the value of career progress increases, which may point towards a curvilinear relationship between the variables. A similar pattern was also found in the model with the citation impact group as the dependent variable, although it was less apparent.

As a result of this apparent curvilinear relationship, we included quadratic terms for career progress in both models, and these were significant. Likelihood ratio tests also show that the models with quadratic terms included have a significantly better fit to the data. Furthermore, the AIC was also lower for these models compared to the initial models without quadratic terms (see Appendix Tables A5–A7). Consequently, we base our results on these models, which can be found in Appendix Table A7. Due to a low number of respondents in the low categories of the scientific curiosity/discovery variable, we also combined the first three values into one, which results in a reduced three-value variable for scientific curiosity/discovery in the regression analysis.
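Continuing the simulated example above, the sketch below shows how a quadratic term for career progress could be added, the nested models compared via a likelihood-ratio test and AIC, and predicted probabilities computed across the career-progress scale. Again, this is an assumption-laden illustration, not the authors’ code.

```python
# Continuation of the simulated example: add a quadratic career-progress term,
# compare models, and compute predicted probabilities over the career scale.
from scipy import stats

df["career_sq"] = df["career"] ** 2
X2 = sm.add_constant(df[["curiosity", "application", "career", "career_sq", "age", "female"]])
model_sq = sm.MNLogit(df["prod_group"], X2).fit(disp=False)

# Likelihood-ratio test: the quadratic term adds one parameter per outcome contrast
lr_stat = 2 * (model_sq.llf - model.llf)
dof = model_sq.df_model - model.df_model
print(f"LR = {lr_stat:.2f}, df = {dof}, p = {stats.chi2.sf(lr_stat, dof):.3f}")
print("AIC without / with quadratic term:", round(model.aic, 1), round(model_sq.aic, 1))

# Predicted probabilities across the career-progress scale,
# holding the other covariates at their means
means = X2.mean()
grid = pd.DataFrame([means] * 5)
grid["career"] = range(1, 6)
grid["career_sq"] = grid["career"] ** 2
print(model_sq.predict(grid))  # one column of probabilities per outcome group
```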

Results– productivity percentile group

Using the productivity percentile group as the dependent variable, we find that the motivational aspects of practical application and career progress, but not curiosity/scientific discovery, have a significant effect on the probability of being in the low, medium, or high productivity group. In Figs. 2 and 3, each line represents the probability of being in each group across the scale of each motivational aspect.

Figure 2. Predicted probability of being in each of the productivity groups according to the value on the ‘practical application’ variable

Figure 3. Predicted probability of being in the low and high productivity groups according to the value on the ‘progress in my career’ variable

Figure 2 shows that at low values of practical application, there are no significant differences between the probability of being in either of the groups. However, from around value 3 of practical application, the differences between the probabilities of being in each group increase, and these differences are significant. As a result, we concluded that high scores on practical application are related to an increased probability of being in the high productivity group.

In Fig. 3, we excluded the medium productivity group from the figure because there are no significant differences between this group and the high and low productivity groups. Nevertheless, we found significant differences between the low productivity and the high productivity groups. Since we added a quadratic term for career progress, the two lines in Fig. 3 have a curvilinear shape. Figure 3 shows that there are only significant differences between the probability of being in the low or high productivity group at mid and high values of career progress. In addition, the probability of being in the high productivity group is at its highest at mid values of career progress. This indicates that being motivated by career progress increases the probability of being in the high productivity group but only up to a certain point, after which it begins to have a negative effect on the probability of being in this group.

We also included age and gender as variables in the model, and Figs. 4 and 5 show the results. Figure 4 shows that age especially affects the probability of being in the high and low productivity groups. The lowest age category (< 30–34 years) has the highest probability of being in the low productivity group, while from 50 years and above the probability of being in the high productivity group is highest. This means that increased age is related to an increased probability of high productivity. The variable controlling for the effect of funding also showed some significant results (see Appendix Table A7). The most relevant finding is that receiving competitive grants from external public sources had a very strong and significant positive effect on being in the high productivity group and a medium-sized significant negative effect on being in the low productivity group. This shows that receiving external funding in the form of competitive grants has a strong effect on productivity.

Fig. 4 Predicted probability of being in each of the productivity groups according to age

Figure 5 shows that there is a difference between male and female respondents. For females, there are no significant differences in the probability of being in any of the groups, while males have a higher probability of being in the high productivity group compared to the medium and low productivity groups.

Fig. 5 Predicted probability of being in each of the productivity groups according to gender

Results – citation impact group

For the citation impact group as the dependent variable, we found that career progress, but not curiosity/scientific discovery or practical application, has a significant effect on the probability of being in the low or high citation impact group. Figure 6 shows that the probability of being in the high citation impact group increases as the value of career progress increases and is higher than that of being in the low citation impact group, but only up to a certain point. This indicates that career progress increases the probability of being in the high citation impact group to some degree but that very high values are not beneficial for high citation impact. However, it should also be noted that the effect of career progress is weak and that it is difficult to draw conclusions about how very low or very high values of career progress affect the probability of being in the two groups.

Fig. 6 Predicted probability for being in each of the citation impact groups according to the value on the ‘progress in my career’ variable

We also included age and gender as variables in the model and found a similar pattern to that in the model with the productivity percentile group as the dependent variable, although the relationships are weaker. Figure 7 shows that the probability of being in the high citation impact group increases with age, but there is no significant difference between the probability of being in the high citation impact group and the medium citation impact group. We only see significant differences when each of these groups is compared to the low citation impact group. In addition, the increase in probability is more moderate in this model.

Fig. 7 Predicted probability of being in each of the citation impact groups according to age

Figure 8 shows that there are differences between male and female respondents. Male respondents have a significantly higher probability of being in the medium or high citation impact group compared to the low citation impact group, but there is no significant difference between the high and medium citation impact groups. For female respondents, there are no significant differences. The effect of age also appears more moderate in this model than in the model with productivity percentile groups as the dependent variable. In addition, the effect of funding sources on citation impact is more moderate than on productivity (see Appendix Table A7). Competitive grants from external public sources still have the most relevant effect, but the effect size and level of significance are lower than in the model where productivity groups are the dependent variable. Respondents who received a large amount of external funding through competitive grants are more likely to be highly cited, but the effect size is much smaller, and the result is only significant at p < 0.1. Those who do not receive much funding from this source are more likely to be in the low impact group. Here, the effect size is large, and the coefficient is highly significant.

Fig. 8 Predicted probability for being in each of the citation impact groups according to gender

Concluding discussion

This article aimed to explore researchers’ motivations and investigate the impact of motivation on research performance. By addressing these issues across several fields and countries, we provided new evidence on the motivation and performance of researchers.

Most researchers in our large-N survey found curiosity/scientific discovery to be a crucial motivational factor, with practical application being the second most supported aspect. Only a smaller number of respondents saw career progress as an important inspiration to conduct their research. This supports the notion that researchers are mainly motivated by core aspects of academic work such as curiosity, discoveries, and practical application of their knowledge and less so by personal gains (see Evans and Meyer 2003 ). Therefore, our results align with earlier research on motivation. In their interview study of scientists working at a government research institute in the UK, Jindal-Snape and Snape ( 2006 ) found that the scientists were typically motivated by the ability to conduct high quality, curiosity-driven research and de-motivated by the lack of feedback from management, difficulty in collaborating with colleagues, and constant review and change. Salaries, incentive schemes, and prospects for promotion were not considered a motivator for most scientists. Kivistö and colleagues ( 2017 ) also observed similar patterns in more recent survey data from Finnish academics.

As noted in the introduction, the issue of motivation has often been analysed in the literature using the intrinsic-extrinsic distinction. In our study, we have not applied these concepts directly. However, it is clear that the curiosity/scientific discovery item should be considered a type of intrinsic motivation, as it involves performing the activity for its inherent satisfaction. Moreover, the practical application item should probably be considered mainly intrinsic, as it involves creating a better society (for others) without primarily focusing on gains for oneself. The career progress item explicitly mentions personal gains such as position and higher salary and is, therefore, a type of extrinsic motivation. This means that our results support the notion that there are very strong elements of intrinsic motivation among researchers (Jindal-Snape and Snape 2006 ).

When analysing the three aspects of motivation, we found some differences. Physicists tend to view practical application as less important than researchers in the two other fields, while career progress was most emphasised by economists. Regarding country differences, our data suggest that career progress is most important for researchers in Denmark. Nevertheless, given the limited effect sizes, the overall picture is that motivational factors seem to be relatively similar regarding disciplinary and country dimensions.

Regarding gender aspects of motivation, our data show that women seem to view practical application and career progress as more important than men do. One explanation for this could be the continued gender differences in academic careers, which tend to disadvantage women, thus creating a greater incentive for female scholars to focus on and be motivated by career progress aspects (Huang et al. 2020 ; Lerchenmueller and Sorenson 2018 ). Unsurprisingly, respondents’ age and academic position influenced the importance of different aspects of motivation, especially regarding career progress. Here, increased age and moving up the positional hierarchy are linked to a decrease in importance. This highlights that older academics and those in more senior positions draw more motivation from sources that are not directly linked to their personal career gains. This can probably be explained by the academic career ladder plateauing at a certain point, as there are often no additional titles and very limited recognition beyond becoming a full professor. Finally, the type of funding that scholars received also had an influence on their productivity and, to a certain extent, citation impact.

Overall, there is little support for the idea that researchers across various fields and countries are very different when it comes to their motivation for conducting research. Rather, there seems to be a strong common core of academic motivation that varies mainly by gender and age/position. Rather than talking about researchers’ motivation per se, our study therefore suggests that one should talk about motivation across gender, at different stages of the career, and, to a certain degree, in different fields. Thus, motivation seems to be a multi-faceted construct, and the importance of different aspects of motivation varies between different groups.

In the second step of our analysis, we linked motivation to performance. Here, we focused on both scientific productivity and citation impact. Regarding the former, our data show that both practical application and career progress have a significant effect on productivity. The relationship between practical application aspects and productivity is linear, meaning that those who indicate that this aspect of motivation is very important to them have a higher probability of being in the high productivity group. The relationship between career aspects of motivation and productivity is curvilinear, and we found significant differences between the high and low productivity groups only at mid and high values of the motivation scale. This indicates that being more motivated by career progress increases productivity but only to a certain extent before it starts having a detrimental effect. A common assumption has been that intrinsic motivation has a positive and instrumental effect and extrinsic motivation has a negative effect on the performance of scientists (Peng and Gao 2019 ; Ryan and Berbegal-Mirabent 2016 ). Our results do not generally support this, as motives related to career progress are positively linked with productivity up to a certain point. Possibly, this can be explained by the fact that the number of publications is often especially important in the context of recruitment and promotion (Langfeldt et al. 2021 ; Reymert et al. 2021 ). Thus, it will be beneficial from a scientific career perspective to have many publications when trying to get hired or promoted.

Regarding citation impact, our analysis highlights that only the career aspects of motivation have a significant effect. Similar to the results regarding productivity, being more motivated by career progress increases the probability of being in the high citation impact group, but only up to a certain value, after which the difference stops being significant. It needs to be pointed out that the effect strength is weaker than in the analysis that focused on productivity. Thus, these results should be treated with greater caution.

Overall, our results shed light on some important aspects regarding the motivation of academics and how this translates into research performance. Regarding our first research question, it seems to be the case that there is not one type of motivation but rather different contextual mixes of motivational aspects that are strongly driven by gender and academic position/age. We found only limited effects of research fields and even less pronounced country effects, suggesting that while situational, the mix of motivational aspects also has a common academic core that is less influenced by different national environments or disciplinary standards. Regarding our second research question, our results challenge the common assumption that intrinsic motivation has a positive effect and extrinsic motivation has a negative effect on the performance of scientists. Instead, we show that motives related to career are positively linked to productivity, at least up to a certain point. Our analysis of citation patterns yielded similar results. Combined with the finding regarding the importance of current academic position and age for specific patterns of motivation, it could be argued that the fact that the number of publications is often used as a measurement in recruitment and promotion makes academics who are more driven by career aspects publish more, as this is perceived as a necessary condition for success.

Our study has a clear focus on the research side of academic work. However, most academics do both teaching and research, which raises the question of how far our results can also inform our knowledge regarding the motivation for teaching. On the one hand, previous studies have highlighted that intrinsic motivation is also of high importance for the quality of teaching (see e.g. Wilkesmann and Lauer 2020 ), which fits well with our findings. At the same time, the literature also highlights persistent goal conflicts of academics (see e.g. Daumiller et al. 2020 ), given that extra time devoted to teaching often comes at the cost of publications and research. Given that other findings in the literature show that research performance continues to be of higher importance than teaching in academic hiring processes (Reymert et al. 2021 ), the interplay between research performance, teaching performance, and different types of motivation is most likely more complicated and demands further investigation.

While offering several relevant insights, our study still comes with certain limitations that must be considered. First, motivation is a complex construct. Thus, there are many ways one could operationalise it, and not one specific understanding so far seems to have emerged as best practice. Therefore, our approach to operationalisation and measurement should be seen as an addition to this broader field of measurement approaches, and we do not claim that this is the only sensible way of doing it. Second, we rely on self-reported survey data to measure the different aspects of motivation in our study. This means that aspects such as social desirability could influence how far academics claim to be motivated by certain aspects. For example, claiming to be mainly motivated by personal career gains may be considered a dubious motive among academics.

With respect to the bibliometric analyses, it is important to realise that we have grouped researchers into categories, thereby ‘smoothing’ individual performances into group performances under the various variables. As a consequence, some extraordinary scores that might have been interesting to analyse separately, and that could have shed further light on the relationships we studied, may have become invisible. However, breaking the material down to the level of individual researchers also comes with a limitation: at the level of the individual academic, bibliometric indicators become quite sensitive to the underlying numbers, which are in turn affected by the coverage of the database used, the publishing cultures in various countries and fields, and the age and position of the individuals. Therefore, the level of the individual academic has not been analysed in our study, even though we acknowledge that such a study could yield interesting results.

Finally, our sample is drawn from northwestern European countries and a limited set of disciplines. We would argue that we have sufficient variation in countries and disciplines to make the results relevant to a broader audience. While our results show rather small country or discipline differences, we are aware that there might be country- or discipline-specific effects that we cannot capture due to the sampling approach we used. Moreover, as we had to balance sufficient variation in framework conditions with the comparability of cases, the geographical generalisation of our results has limitations.

This article investigated what motivates researchers across different research fields and countries and how this motivation influences their research performance. The analysis showed that the researchers are mainly motivated by scientific curiosity and practical application and less so by career considerations. Furthermore, the analysis shows that researchers driven by practical application aspects of motivation have a higher probability of high productivity. Being driven by career considerations also increases productivity but only to a certain extent before it starts having a detrimental effect.

The article is based on a large-N survey of economists, cardiologists, and physicists in Denmark, Norway, Sweden, the Netherlands, and the UK. Building on this study, future research should expand the scope and study the relationship between motivation and productivity as well as citation impact in a broader disciplinary and geographical context. In addition, we encourage studies that develop and validate our measurement and operationalisation of aspects of researchers’ motivation.

Finally, a long-term panel study design that follows respondents throughout their academic careers and investigates how far their motivational patterns shift over time would allow for more fine-grained analysis and thereby a richer understanding of the important relationship between motivation and performance in academia.

Data availability

The data set for this study is available from the corresponding author upon reasonable request.

Aksnes DW, Sivertsen G (2019) A criteria-based assessment of the coverage of Scopus and web of Science. J Data Inform Sci 4(1):1–21. https://doi.org/10.2478/jdis-2019-0001


Atta-Owusu K, Fitjar RD (2021) What motivates academics for external engagement? Exploring the effects of motivational drivers and organizational fairness. Sci Public Policy. https://doi.org/10.1093/scipol/scab075 . November, scab075

Baccini A, Barabesi L, Cioni M, Pisani C (2014) Crossing the hurdle: the determinants of individual scientific performance. Scientometrics 101(3):2035–2062. https://doi.org/10.1007/s11192-014-1395-3

Bornmann L, Leydesdorff L, Mutz R (2013) The use of percentiles and percentile rank classes in the analysis of bibliometric data: opportunities and limits. J Informetrics 7(1):158–165. https://doi.org/10.1016/j.joi.2012.10.001

Cruz-Castro L, Sanz-Menendez L (2021) What should be rewarded? Gender and evaluation criteria for tenure and promotion. J Informetrics 15(3):1–22. https://doi.org/10.1016/j.joi.2021.101196

Daumiller M, Stupnisky R, Janke S (2020) Motivation of higher education faculty: theoretical approaches, empirical evidence, and future directions. Int J Educational Res 99:101502. https://doi.org/10.1016/j.ijer.2019.101502

Duarte H, Lopes D (2018) Career stages and occupations impacts on workers motivations. Int J Manpow 39(5):746–763. https://doi.org/10.1108/IJM-02-2017-0026

Evans IM, Meyer LH (2003) Motivating the professoriate: why sticks and carrots are only for donkeys. High Educ Manage Policy 15(3):151–167. https://doi.org/10.1787/hemp-v15-art29-en

Finkelstein MJ (1984) The American academic profession: a synthesis of social scientific inquiry since World War II. Ohio State University, Columbus


Hammarfelt B, de Rijcke S (2015) Accountability in context: effects of research evaluation systems on publication practices, disciplinary norms, and individual working routines in the Faculty of arts at Uppsala University. Res Evaluation 24(1):63–77. https://doi.org/10.1093/reseval/rvu029

Hangel N, Schmidt-Pfister D (2017) Why do you publish? On the tensions between generating scientific knowledge and publication pressure. Aslib J Inform Manage 69(5):529–544. https://doi.org/10.1108/AJIM-01-2017-0019

Hazelkorn E (2015) Rankings and the reshaping of higher education: the battle for world-class excellence. Palgrave McMillan, Basingstoke


Hilbe JM (2017) Logistic regression models. Taylor & Francis Ltd, London

Horodnic IA, Zaiţ A (2015) Motivation and research productivity in a university system undergoing transition. Res Evaluation 24(3):282–292

Huang J, Gates AJ, Sinatra R, Barabási A-L (2020) Historical comparison of gender inequality in scientific careers across countries and disciplines. Proceedings of the National Academy of Sciences 117(9):4609–4616. https://doi.org/10.1073/pnas.1914221117

Jeong S, Choi JY, Kim J-Y (2014) On the drivers of international collaboration: the impact of informal communication, motivation, and research resources. Sci Public Policy 41(4):520–531. https://doi.org/10.1093/scipol/sct079

Jindal-Snape D, Snape JB (2006) Motivation of scientists in a government research institute: scientists’ perceptions and the role of management. Manag Decis 44(10):1325–1343. https://doi.org/10.1108/00251740610715678

Kivistö J, Pekkola E, Lyytinen A (2017) The influence of performance-based management on teaching and research performance of Finnish senior academics. Tert Educ Manag 23(3):260–275. https://doi.org/10.1080/13583883.2017.1328529

Kulczycki E, Engels TCE, Pölönen J, Bruun K, Dušková M, Guns R et al (2018) Publication patterns in the social sciences and humanities: evidence from eight European countries. Scientometrics 116(1):463–486. https://doi.org/10.1007/s11192-018-2711-0

Lam A (2011) What motivates academic scientists to engage in research commercialization: gold, ribbon or puzzle? Res Policy 40(10):1354–1368. https://doi.org/10.1016/j.respol.2011.09.002

Langfeldt L, Reymert I, Aksnes DW (2021) The role of metrics in peer assessments. Res Evaluation 30(1):112–126. https://doi.org/10.1093/reseval/rvaa032

Larivière V, Macaluso B, Archambault É, Gingras Y (2010) Which scientific elites? On the concentration of research funds, publications and citations. Res Evaluation 19(1):45–53. https://doi.org/10.3152/095820210X492495

Lepori B, Jongbloed B, Hicks D (2023) Introduction to the handbook of public funding of research: understanding vertical and horizontal complexities. In: Lepori B, Jongbloed B, Hicks D (eds) Handbook of public funding of research. Edward Elgar Publishing, Cheltenham, pp 1–19


Lerchenmueller MJ, Sorenson O (2018) The gender gap in early career transitions in the life sciences. Res Policy 47(6):1007–1017. https://doi.org/10.1016/j.respol.2018.02.009

Leslie DW (2002) Resolving the dispute: teaching is academe’s core value. J High Educ 73(1):49–73

Lounsbury JW, Foster N, Patel H, Carmody P, Gibson LW, Stairs DR (2012) An investigation of the personality traits of scientists versus nonscientists and their relationship with career satisfaction: relationship of personality traits and career satisfaction of scientists and nonscientists. R&D Manage 42(1):47–59. https://doi.org/10.1111/j.1467-9310.2011.00665.x

Ma L (2019) Money, morale, and motivation: a study of the output-based research support scheme in University College Dublin. Res Evaluation 28(4):304–312. https://doi.org/10.1093/reseval/rvz017

Melguizo T, Strober MH (2007) Faculty salaries and the maximization of prestige. Res High Educt 48(6):633–668

Moed HF (2005) Citation analysis in research evaluation. Springer, Dordrecht

Netherlands Observatory of Science (NOWT) (2012) Report to the Dutch Ministry of Science, Education and Culture (OC&W). Den Haag 1998

Peng J-E, Gao XA (2019) Understanding TEFL academics’ research motivation and its relations with research productivity. SAGE Open 9(3):215824401986629. https://doi.org/10.1177/2158244019866295

Piro FN, Aksnes DW, Rørstad K (2013) A macro analysis of productivity differences across fields: challenges in the measurement of scientific publishing. J Am Soc Inform Sci Technol 64(2):307–320. https://doi.org/10.1002/asi.22746

Pruvot EB, Estermann T, Popkhadze N (2023) University autonomy in Europe IV. The scorecard 2023. Retrieved from Brussels. https://eua.eu/downloads/publications/eua autonomy scorecard.pdf

Reymert I, Jungblut J, Borlaug SB (2021) Are evaluative cultures national or global? A cross-national study on evaluative cultures in academic recruitment processes in Europe. High Educ 82(5):823–843. https://doi.org/10.1007/s10734-020-00659-3

Roach M, Sauermann H (2010) A taste for science? PhD scientists’ academic orientation and self-selection into research careers in industry. Res Policy 39(3):422–434. https://doi.org/10.1016/j.respol.2010.01.004

Rørstad K, Aksnes DW (2015) Publication rate expressed by age, gender and academic position– A large-scale analysis of Norwegian academic staff. J Informetrics 9(2):317–333. https://doi.org/10.1016/j.joi.2015.02.003

Ruiz-Castillo J, Costas R (2014) The skewness of scientific productivity. J Informetrics 8(4):917–934. https://doi.org/10.1016/j.joi.2014.09.006

Ryan JC (2014) The work motivation of research scientists and its effect on research performance: work motivation of research scientists. R&D Manage 44(4):355–369. https://doi.org/10.1111/radm.12063

Ryan JC, Berbegal-Mirabent J (2016) Motivational recipes and research performance: a fuzzy set analysis of the motivational profile of high-performing research scientists. J Bus Res 69(11):5299–5304. https://doi.org/10.1016/j.jbusres.2016.04.128

Ryan RM, Deci EL (2000) Intrinsic and extrinsic motivations: classic definitions and new directions. Contemp Educ Psychol 25(1):54–67. https://doi.org/10.1006/ceps.1999.1020

Sivertsen G (2019) Understanding and evaluating research and scholarly publishing in the social sciences and humanities (SSH). Data Inform Manage 3(2):61–71. https://doi.org/10.2478/dim-2019-0008

Sivertsen G, Van Leeuwen T (2014) Scholarly publication patterns in the social sciences and humanities and their relationship with research assessment

Stephan P, Veugelers R, Wang J (2017) Reviewers are blinkered by bibliometrics. Nature 544(7651):411–412. https://doi.org/10.1038/544411a

Thomas D, Nedeva M (2012) Characterizing researchers to study research funding agency impacts: the case of the European Research Council’s starting grants. Res Evaluation 21(4):257–269. https://doi.org/10.1093/reseval/rvs020

Tien FF (2000) To what degree does the desire for promotion motivate faculty to perform research? Testing the expectancy theory. Res High Educt 41(6):723–752. https://doi.org/10.1023/A:1007020721531

Tien FF (2008) What kind of faculty are motivated to perform research by the desire for promotion? High Educ 55(1):17–32. https://doi.org/10.1007/s10734-006-9033-5

Tien FF, Blackburn RT (1996) Faculty rank system, research motivation, and faculty research productivity: measure refinement and theory testing. J High Educ 67(1):2. https://doi.org/10.2307/2943901

Vallerand RJ, Pelletier LG, Blais MR, Briere NM, Senecal C, Vallieres EF (1992) The academic motivation scale: a measure of intrinsic, extrinsic, and amotivation in education. Educ Psychol Meas 52(4):1003–1017. https://doi.org/10.1177/0013164492052004025

Van Iddekinge CH, Aguinis H, Mackey JD, DeOrtentiis PS (2018) A meta-analysis of the interactive, additive, and relative effects of cognitive ability and motivation on performance. J Manag 44(1):249–279. https://doi.org/10.1177/0149206317702220

Van Leeuwen T (2013) Bibliometric research evaluations, Web of Science and the social sciences and humanities: A problematic relationship? Bibliometrie - Praxis Und Forschung, September, Bd. 2(2013). https://doi.org/10.5283/BPF.173

Van Leeuwen T, van Wijk E, Wouters PF (2016) Bibliometric analysis of output and impact based on CRIS data: a case study on the registered output of a Dutch university. Scientometrics 106(1):1–16. https://doi.org/10.1007/s11192-015-1788-y

Waltman L, Schreiber M (2013) On the calculation of percentile-based bibliometric indicators. J Am Soc Inform Sci Technol 64(2):372–379. https://doi.org/10.1002/asi.22775

Waltman L, van Eck NJ, van Leeuwen TN, Visser MS, van Raan AFJ (2011) Towards a new crown indicator: an empirical analysis. Scientometrics 87(3):467–481. https://doi.org/10.1007/s11192-011-0354-5

Wilkesmann U, Lauer S (2020) The influence of teaching motivation and new public management on academic teaching. Stud High Educ 45(2):434–451. https://doi.org/10.1080/03075079.2018.1539960

Wilsdon J, Allen L, Belfiore E, Campbell P, Curry S, Hill S, Jones R et al (2015) The metric tide: report of the independent review of the role of metrics in research assessment and management. https://doi.org/10.13140/RG.2.1.4929.1363

Zacharewicz T, Lepori B, Reale E, Jonkers K (2019) Performance-based research funding in EU member states—A comparative assessment. Sci Public Policy 46(1):105–115. https://doi.org/10.1093/scipol/scy041

Zhang L, Sivertsen G, Du H, Huang Y, Glänzel W (2021) Gender differences in the aims and impacts of research. Scientometrics 126(11):8861–8886. https://doi.org/10.1007/s11192-021-04171-y


Acknowledgements

We are thankful to the R-QUEST team for input and comments to the paper.

The authors disclosed the receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Research Council Norway (RCN) [grant number 256223] (R-QUEST).

Open access funding provided by University of Oslo (incl Oslo University Hospital)

Author information

Silje Marie Svartefoss

Present address: TIK Centre for Technology, Innovation and Culture, University of Oslo, 0317, Oslo, Norway

Authors and Affiliations

Nordic Institute for Studies in Innovation, Research and Education (NIFU), Økernveien 9, 0608, Oslo, Norway

Silje Marie Svartefoss & Dag W. Aksnes

Department of Political Science, University of Oslo, 0315, Oslo, Norway

Jens Jungblut & Kristoffer Kolltveit

Centre for Science and Technology Studies (CWTS), Leiden University, 2311, Leiden, The Netherlands

Thed van Leeuwen


Contributions

All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Silje Marie Svartefoss, Jens Jungblut, Dag W. Aksnes, Kristoffer Kolltveit, and Thed van Leeuwen. The first draft of the manuscript was written by all authors in collaboration, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Silje Marie Svartefoss .

Ethics declarations

Competing interests.

The authors have no competing interests to declare that are relevant to the content of this article.

Informed consent

Informed consent was obtained from the participants in this study.

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Svartefoss, S.M., Jungblut, J., Aksnes, D.W. et al. Explaining research performance: investigating the importance of motivation. SN Soc Sci 4 , 105 (2024). https://doi.org/10.1007/s43545-024-00895-9

Download citation

Received : 14 December 2023

Accepted : 15 April 2024

Published : 23 May 2024

DOI : https://doi.org/10.1007/s43545-024-00895-9


Keywords: Performance, Productivity


How do organisations implement research impact assessment (RIA) principles and good practice? A narrative review and exploratory study of four international research funding and administrative organisations

Adam Kamenetzky

1 National Institute for Health Research Central Commissioning Facility, Twickenham, TW1 3NL United Kingdom

2 Policy Institute at King’s College London, Strand Campus, London, WC2B 6LE United Kingdom

Saba Hinrichs-Krapels

3 King’s Global Health Institute, King’s College London, Denmark Hill, London, SE5 9RJ United Kingdom

Associated Data

Not applicable.

Public research funding agencies and research organisations are increasingly accountable for the wider impacts of the research they support. While research impact assessment (RIA) frameworks and tools exist, little is known or shared about how these organisations implement RIA activities in practice.

We conducted a review of academic literature to search for research organisations’ published experiences of RIAs. We followed this with semi-structured interviews with a convenience sample (n = 7) of representatives of four research organisations deploying strategies to support and assess research impact.

We found only five studies reporting empirical evidence on how research organisations put RIA principles into practice. From our interviews, we observed a disconnect between published RIA frameworks and tools, and the realities of organisational practices, which tended not to be reported.

We observed varying maturity and readiness with respect to organisations’ structural set ups for conducting RIAs, particularly relating to leadership, skills for evaluation and automating RIA data collection. Key processes for RIA included efforts to engage researcher communities to articulate and plan for impact, using a diversity of methods, frameworks and indicators, and supporting a learning approach. We observed outcomes of RIAs as having supported a dialogue to orient research to impact, underpinned shared learning from analyses of research, and provided evidence of the value of research in different domains and to different audiences.

Conclusions

Putting RIA principles and frameworks into practice is still in early stages for research organisations. We recommend that organisations (1) get set up by considering upfront the resources, time and leadership required to embed impact strategies throughout the organisation and wider research ‘ecosystem’, and develop methodical approaches to assessing impact; (2) work together by engaging researcher communities and wider stakeholders as a core part of impact pathway planning and subsequent assessment; and (3) recognise the benefits that RIA can bring about as a means to improve mutual understanding of the research process between different actors with an interest in research.

There is an increasing drive for organisations that fund, support and/or administer research (hereafter referred to as ‘research organisations’) to be held accountable not only for various administrative and research governance functions but also for the longer-term impacts of the research that their activities and funding support. This is evidenced by the proliferation of approaches to assess research processes, policies and productivity [ 1 ]. The emerging practice of research impact assessment (RIA) is an area where there have been a number of developments – be these analytical tools to help conceptualise impact (in its myriad forms), such as the Payback framework [ 2 ], the inclusion of impact as a criterion to determine the allocation of public funds to higher educational institutions (e.g. the United Kingdom’s Research Excellence Framework) or methods to determine the wider ‘spill-over’ effects from government and charitable investments in research, as a means to advocate for the value of the combined contribution of these sectors to national research and development efforts [ 3 ].

What is RIA and why do research organisations have a role to play?

RIA falls within a series of practices referred to – somewhat interchangeably – as ‘the science of science’, ‘research on research’ or ‘meta-research’. The founding Editors-in-Chief of the journal Health Research Policy and Systems defined such practices, in their broadest sense, as being “ … conducted for a variety of purposes, including to strengthen the capacity to undertake scientifically valid and relevant research and to maximise and more equitably spread the benefits that can come from investing in research ” [ 4 ].

The International School on Research Impact Assessment (ISRIA) was set up by research organisations who recognised a need for research impact strategies and associated assessment efforts to be given an explicit practitioner focus based on principles of good practice and the application of robust and repeatable evaluation methods. Drawing on insights from over 400 scholars and practitioners attending five ISRIA schools – with stakeholders from research funding agencies, and the health sector as the most highly represented among these – ISRIA developed a series of best practice guidelines [ 5 ]. These guidelines aim to distil down ‘what works’ in RIA, and how to situate such practices in a broader organisational context. Essentially, the ISRIA guidelines encourage organisations to (1) analyse the research context, (2) reflect on the purpose of RIA, (3) identify stakeholders’ needs, (4) engage with the research community, (5) choose appropriate conceptual frameworks, (6) choose appropriate evaluation methods and data sources, (7) choose indicators and metrics responsibly, (8) consider ethics and conflicts of interest, (9) communicate results, and (10) share learning.

National RIA exercises tend to define research impact as a change or benefit demonstrably realised beyond academia as a result of research activity(ies) [ 6 ]. Most of these RIA exercises, and thus impact definitions derived from them, are driven by research funding organisations or funders of funders (i.e. governmental or other public research funding agencies) [ 7 ]. Yet, in spite of calls for RIA to be deployed as a means for robust analysis of aspects such as the effectiveness, efficiency and equity of research [ 8 ], most impact definitions exhibit a clear positivity bias, thus encouraging the use of RIA as, at best, a route to bolster organisational advocacy efforts or, at worst, a means “ to count what is easily measured ” [ 7 ]. A concern is consequently that research organisations with a role in supporting research and related activities are not deploying RIA with sufficient consideration of its potential to understand the real-world processes, for example, community involvement [ 9 ] or engagement activities [ 10 ], that might augment research having wider societal impacts. This is borne out by studies of research organisations’ roles. Most organisations do not base their efforts to encourage the translation of research into meaningful impacts on people’s lives around evidence of what works in practice [ 11 ]. Studies of funding managers themselves highlight their limited knowledge of complex phenomena such as ‘implementation’, risking a blurring of responsibility and thus hampering any potential facilitative role for research organisations [ 12 ].

There is a need to see how research organisations undertake RIA

To evaluate impact, ISRIA’s guidelines recommend that research organisations use “ a multitude of methods from social science disciplines to examine the research process with a view to maximising its societal and economic impacts ” [ 5 ]. Crucially, these guidelines do not advocate for any specific framework but recommend to “ critically choose frameworks in a way that fits the context and purpose of a given RIA exercise and to explicitly state the limitations of the chosen framework ” [ 5 ]. Despite the need for accountability of research funds, the growing activities around RIA, and the development of RIA principles and a community of practice (all alluded to above), we do not know if there is empirical evidence that demonstrates if and how such critical and methodical approaches to RIA work in practice within research organisations. Though empirically grounded policy research has informed the setup and operations of integrated public research funding agencies, such as the United Kingdom National Institute for Health Research (NIHR) [ 13 ], and national RIA exercises of higher educational institutions, such as the United Kingdom Research Excellence Framework [ 14 ], our observations are that institutional/organisational policies for RIA – in particular those of grant-awarding research funding agencies – are lacking in an empirical basis (in common with findings from other studies of research organisations’ practices, as noted above). Our question of interest for this study was therefore to ask what experiences research organisations had in putting into practice RIA activities, frameworks and approaches as well as how these experiences might inform others as they develop policies around impact and its assessment.

Examination of organisational RIA activities in practice, on the ground, is important for a number of reasons. Firstly, much of the scientific literature on research impact is theoretical in nature, with the concept of ‘impact’ itself emergent and complex. Attempts to draw together this literature demonstrate the challenges. In their systematic review for the NIHR’s Health Technology Assessment programme, Raftery et al. describe a wide range of underlying philosophical ‘ideal types’ of impact [ 1 ]; they conclude that a logic model approach “ with scope for interpretative case studies to explain unique and/or non-linear influences ” is appropriate for assessing the impact of the bulk of Health Technology Assessment-funded research. They also make the case for further work being needed to determine appropriate models and tools for RIA in other NIHR research programmes.

Secondly, those organisations with sufficient time and energy to engage with the scientific literature will themselves discover many such RIA models and tools but little guidance on what could work and for whom, from the ‘lived’ perspective of those working within the organisation. One systematic review in healthcare research identified 24 unique methodological frameworks for RIA, proposing a total of 80 different metrics [ 15 ]. Another narrative review identified 16 different RIA models, also pointing out that a majority of RIAs did not involve policy-makers and end-users of research, and thus risked promulgating a bureaucratic bias to organisations’ consideration of impact [ 16 ], compounding issues previously highlighted.

The impetus for our conducting this study has come from initial observations afforded through one author (AK) being appointed as a researcher-in-residence with NIHR to explore questions around impact and routes to embed methodical approaches to RIA. NIHR is the largest public funder of health and care research in the United Kingdom, whose management of upwards of £1 billion annual funding is directed through a series of independently commissioned coordinating centres. Via interactions with staff, including a series of sequential cohorts enrolled into a formal impact training co-designed with colleagues at King’s College London, it was evident that data required to conduct RIAs are hard to come by, exist in a variety of forms, and are not systematically captured or published across multiple NIHR programmes. This is further complicated by a breadth of evaluative approaches employed; within NIHR alone, the range of published results of RIAs include econometric [ 17 – 19 ], case-study based [ 20 ], narrative synthesis [ 21 ] and documentary review [ 22 ]. Additionally, semi-standardised impact data collection systems, such as Researchfish™, have had little empirical validation since being rolled out across multiple NIHR programmes and, indeed, other United Kingdom and international research organisations [ 1 ].

Thus, our concern is that research organisations, in spite of having a crucial role to play in setting expectations and procedures around impact, are under-served by much of the ‘science of science’ literature, insomuch as it does not extend to practical application or application within a complex research funding landscape.

Our aim is to address this knowledge and practice gap by describing the experiences of research organisations in putting into practice RIA activities, frameworks and approaches. We examined this by (1) identifying published research that provides empirical evidence of organisational experiences of research impact and its assessment, particularly from the funders’ perspective, and (2) supplementing this with reflections from interviews with a convenience sample of four regional and national public research organisations contributing to international best practice in this emerging field [ 5 ].

Narrative literature review

We searched the English language scientific literature in November 2017 with the aim of determining the extent of published empirical observations of research organisations’ experiences of research impact and RIA. Studies of particular interest were those reporting on interviews with or observational/participatory/operational/action (i.e. qualitative) research from the perspective of research organisations undertaking impact assessment. We searched the databases listed below, setting the timespan for the searches to the maximum allowable (noted in years following the name of each database) – AGRIS (18), EMBASE (43), MEDLINE (70), Global Health (47), HMIC (38), PsycINFO (201), and Social Policy and Practice (36). We used a search string modified from Deeming et al. [ 23 ], which included terms specific to papers exploring health and medical RIA frameworks. As our intention was to identify literature reporting on the experiences of national/international public (e.g. government, charity, not-for-profit, health and/or general medical) research organisations, we included additional terms to generate a larger initial pool of publications of potential interest ( Appendix 1 ). We excluded studies reporting only conceptual/theoretical impact assessment frameworks, systematic or narrative reviews, and/or studies reporting the results of RIAs that did not include empirical reflections from/observations of the organisations themselves undertaking or commissioning these activities.

Our preliminary synthesis involved reading abstracts and, where relevant, full texts of the studies returned from the database searches and noting whether they met the aims of the narrative review, as described above. We also noted the primary aims and focus of excluded studies.

To aid our description of the extent of literature reporting observations of research organisations undertaking RIA, we grouped findings of included studies under three broad domains of focus relating to the ‘structures’, ‘processes’ and ‘outcomes’ relevant to organisations’ various RIA activities. This approach, set out originally by Donabedian as a means to evaluate healthcare quality and applied widely in health services research [ 24 ], is used here as an aid to present key features of studies included from the literature and, subsequently, interviewees’ reflections on RIA within their own organisations. It is not intended as a formal means of evaluating the quality of RIA practices, rather to explore, at a more abstracted level, how RIA is situated within organisations whose roles span various aspects of the health research funding landscape.

Interviews with research organisations

The second stage involved an enquiry of a convenience sample of representatives from four regional and national public research organisations contributing to international best practice in this emerging field, identified by their participation in the last of the five ISRIA schools, held in November 2017. Given the relative newness of RIA as an area of expertise, ISRIA provided a unique opportunity to identify the main research organisations actively engaging in, and contributing to, its practice. We approached four organisations, chosen to represent different global regions and varying levels of experience in conducting RIAs, and note details in Table  1 . We conducted four interviews with a total of seven staff (i.e. two joint and two individual interviews), whose roles within their organisations spanned senior executive responsibility for research performance/evaluation/management, research impact management and/or research impact analysis. AK conducted the interviews and was also a participant and part-time facilitator at the ISRIA conference. AK and SHK worked together on designing the interview topic guide. SHK was a faculty member at ISRIA.

Details of ISRIA 2017 faculty member organisations interviewed in convenience sample

a Converted into equivalent UK£ at Sept 2018 exchange rates

We used semi-structured interviews based around a topic guide ( Appendix 2 ). Areas of omission in the literature that limited the practical application of RIA activities (for example, local context, resources and challenges of implementation) formed a particular focus for the questions. We transcribed interviews verbatim and then undertook thematic analysis, grouping themes against the three Donabedian domains of focus previously described [ 24 ]. The first ‘structure’ explores themes relating to the setup of the four organisations we interviewed and factors relating to the organisations themselves. The second ‘process’ looks at the assessment activities that organisations carry out to support impact and its assessment. The final domain ‘outcome’ presents interviewees’ reflections on what doing RIA has meant – the value RIA has brought to the organisation, to researchers, and to wider communities and stakeholders.

Document analysis

Interviews were followed by desktop research for documents relating to each organisation's approaches to impact and its assessment, which included annual/impact reports, published online strategies, and any studies published in the literature (as guided by interviewees, and if not already forming part of the literature search, detailed above).

Findings from narrative literature review

Of 129 papers identified using our search criteria, we found only five published examples of research organisations describing and/or reflecting back on their approaches to RIA [ 25 – 29 ]. We have summarised these by Donabedian’s domains of focus in Table  2 [ 24 ] and discuss key findings below.

Published studies meeting inclusion criteria by key domains of focus reported on in the study (per Donabedian [ 24 ], as described above)

Structural aspects of RIA reported in the literature

Searles et al. present a conceptual model to support and evaluate impact at Australia’s Hunter Medical Research Institute [ 25 ], which considers the likely resource intensity of different evaluation frameworks, having systematically compared their various capabilities [ 23 ]. They explicitly set out a dual purpose both to support processes of ‘research translation’ (which they provide a working definition of) and measuring ‘research impact’ (also defined and tailored for health and medical research). Their prototype ‘framework to assess the impact from translational health research’ focuses on a (micro) research-level modified programme logic model – a blend of ‘Payback framework’ domains [ 2 ], social return on investment and case studies. The authors recognise that their model is as yet untested, though they reflect thoughtfully on potential opportunities and issues relating to its successful implementation (discussed further in ‘processes’, below).

Greenhalgh et al. set out the United Kingdom NIHR Oxford Biomedical Research Centre’s plans to apply an evidence base to – and research how – regional partnerships between universities and healthcare organisations play out with respect to the Centre’s ambitions of translational research [ 26 ]. As part of a protocol to ‘maximise value’, they propose future RIA activities designed around organisational case studies, informed by action research. As with Searles et al. [ 25 ], above, the protocol is untested; nonetheless, the authors set out a number of wider contextual aspects relating to the wider research funding environment, governance, collaboration and resourcing that underpin the proposed initiative and the work of the Centre more broadly. They also note a series of operational objectives that will form the basis of future RIA activities, including establishment of a ‘partnerships’ external advisory group and associated stakeholder engagement activities, and use of research on research methodologies to evaluate progress and impact.

Trochim et al. present a series of principles to guide evaluations of translational biomedical research [ 27 ], building on their previous work such as the Evaluation of Large Initiatives project, which had been designed to evaluate research of a large centre funded via the US National Cancer Institute [ 30 ]. They reflect on the importance of high-quality research evaluations, and set out key issues and practices to guide the community ‘during evaluation planning, implementation, and utilization’ for the National Institutes of Health (NIH)‘s Clinical and Translational Science Awards (CTSAs). They highlight a number of factors relating to the organisational structure of the CTSAs, including linking evaluation to formal planning cycles, local pilots of smaller scale but nonetheless rigorous approaches, and considering how to integrate RIA at differing organisational levels (e.g. local and national) across NIH’s CTSA programme. While they consider a number of nuanced aspects of evaluation pertinent to RIA more broadly, these are set out as future-looking recommendations – the authors report ‘lived’ reflections on the processes and outcomes of RIA in the CTSA programme in a separate, later paper by Rubio et al. [ 28 ], summarised below.

Lastly, McLean et al. [ 29 ] reflect on specific objectives (e.g. learning and development, accountability, resource constraints) that are addressed by their protocol to evaluate the Canadian Institutes of Health Research (CIHR)’s knowledge translation funding programmes. They describe a novel method of “ participatory, utilization-focussed evaluation ”, a methodology aligned with the principles of “ integrated knowledge transfer ”, which formed the topic of the evaluation itself. In particular, they focus on the efficacy of participatory evaluation, as judged by those who will use its results (described further in ‘processes’, below).

Processes of RIA reported in the literature

Searles et al. [ 25 ] describe ‘how’ questions around impact and its assessment as partially informed by a steering group, established to look at issues of bias, communication of findings and scaling issues. The steering group provided a platform for co-design of the prototype framework and was made up of a blend of researchers, clinicians/healthcare staff, and funder and university administrators. The authors note future stakeholder engagement activities as an important feature of the framework’s aim to encourage translation – by defining impact aims and determining relevant metrics for RIA, including process metrics, as part of a dialogue with researchers. To encourage such a dialogue, they propose combining RIA results onto a project scorecard that acts as a record of impact as the research progresses. The authors provide hypothetical scorecard examples, given that the framework was yet to be put into practice.

In the literature relating to assessment of the NIH CTSAs, Trochim et al. [ 27 ] had previously set out a number of nuanced aspects relevant to consideration of evaluation methodologies, uses and policies, including guidance on how to build capacity for the development of RIA as a practice within NIH. Of particular note were aspects relating to the scope of evaluations, including stakeholder engagement, scale, professional standards, and intensity of resource and ambition required to evaluate innovative programmes. Reflecting back on these activities following a pilot exercise to develop a common set of metrics across the CTSAs, Rubio et al. [ 28 ] – notably, the only group we found to have published reflections on lessons learned subsequent to undertaking RIA activities (noted further in ‘outcomes’, below) – describe their strategy for engaging with individual CTSA institutions via their participation in a Common Metrics Workgroup. This Workgroup was a subgroup of a CTSA-wide Evaluation Key Function Committee, made up of evaluators from all 62 CTSAs. Key factors noted as important to the effort to develop common metrics were to prioritise those that were of low burden to both researchers and the CTSA, but high value to the research institution and the CTSA. They also recommended working iteratively, using formative evaluation methodology, to pilot and revise individual measures with regular feedback (e.g. surveys and conference calls) from those collecting data.

McLean et al. [ 29 ] reflect on the need to collaborate with multiple stakeholders, with a view to improving the ultimate use of RIAs by end-users. They propose multiple methodologies to elucidate qualitative and quantitative evidence on CIHR’s role (in this case, enabling and promoting knowledge translation) as well as activities to situate CIHR alongside similar organisations around the world. They also propose an expert review panel to offer an independent opinion on the activities and analysis of the Evaluation Working Group. Subsequent web searches revealed this group had published an evaluation of CIHR’s Knowledge Translation Funding Program on CIHR’s website [ 31 ] which, while not (strictly) meeting our inclusion criteria for publication in the scientific literature, we felt worth including in our sample of organisations taking an empirically robust and reflexive approach to RIA.

Outcomes of RIA reported in the literature

Rubio et al. [ 28 ] were the only group to report observations across all three domains of focus – structure, process and outcomes – relating to their experiences of establishing and piloting a common approach to metrics for RIA, across the portfolio of NIH CTSAs. They note success in that the pilot identified a number of metrics that could be consistently reported despite CTSA institutions having different legacy processes and data systems. This provided a template for further efforts to simplify and reduce the burden of RIA. They also note the value of having taken a systematic and methodical approach to developing common metrics as providing an ‘empirical anchor’ to use as the basis for more in-depth evaluations of CTSA performance and their role in underpinning research translation. The detailed and reflexive nature of this and the previous Trochim et al. study [ 27 ] noting intended evaluation principles and approaches for the CTSAs would seem particularly relevant to governmental/federal funders of (especially biomedical and health) research with an interest in evaluation, and we explore a number of aspects raised by this group further in our discussion.

The remaining 124 studies identified in our search did not meet our inclusion criteria given that they reported only the results or outcomes of RIAs rather than the organisational processes of undertaking and/or learning from RIAs; presented generalised reflections on features of RIA frameworks, approaches or activities from an external perspective (i.e. academic or consultancy role), rather than any ‘lived’ practical application; or reviewed RIA approaches with the intention of applying them in an organisational setting, but presented no empirical data or perspectives from the organisation(s) themselves.

Findings from research organisation interviews

The findings from the narrative review confirm that there is a relative paucity of empirical data in the published scientific literature that looks holistically at features of the research organisations themselves doing RIA as well as the theory, design and results (i.e. outputs) of RIA studies. Thus, we present results from the second stage of the research, namely findings from the interviews with a convenience sample of research organisations contributing to international best practice by virtue of their status as faculty members of ISRIA.

Structural factors relevant to research organisations’ RIA practices

Common across all interviewees was the notion that RIA practices were in their infancy, and we observed varying levels of maturity with respect to structural set-ups for conducting RIAs. Interviewees described a number of systemic ‘rate-limiting’ factors that shaped the success of efforts to implement and scale up robust RIA processes. In particular, these included support from senior management and strong leadership, developing a skills base for evaluation, and automating data collection wherever feasible.

Senior management support, leadership and resourcing of RIA

A key factor relating to the organisational structures that supported RIA was the aspect of leadership.

Research organisation #2 reflected on the importance of having a well-respected leader acting as a spokesperson for more rigorous and comprehensive approaches to RIA. They felt that “ the right people and right drivers at the right time ” helped them to make headway. But leadership for impact was more than just a ‘top down’ exercise, as highlighted by this quote:

“ Managing upwards is exhausting. Trying to keep impact on the radar of the executives, of the board, of the CEOs, and them understanding what the hell impact is, how it links back to the core person [within the organisation] … that's exhausting and has massive challenges. Because they're distracted by everyone else throwing their framework, their idea, their area of research, or their area, even, of other support services trying to stay on the radar. ” (Interviewee A)

Research organisation #2 noted that setting up RIA activities required senior management to create conditions that would allow for constructive engagement with research communities (we discuss engagement under ‘processes relevant to RIA’, below). When it came to scaling RIA activities up, strong leadership (noted by research organisations #2 and #3) and appropriate resourcing (noted by research organisations #2, #3 and #4) appeared to be common determinants of research organisations’ abilities to meet demands for the RIA data.

Developing a skills base for evaluation

Business models for conducting RIA varied both between and within research organisations, with a number of different approaches in place. These ranged from external commissioning of evaluations through to internal programme reviews. The evaluation model an organisation used was largely determined by whether it had the right skillset for conducting RIA, which varied considerably from organisation to organisation. We know from another study looking at the capabilities of research organisations using RIA data that, while larger organisations may have in-house evaluation and analysis teams to produce analytical reports, it has taken them considerable time to develop this capacity and capability, and such resources may not be available to smaller organisations [ 32 ].

Independent validation was important to those we interviewed as a means of providing rigour, though it could be costly. Research organisation #4 noted that they worked with external consultants in a collaborative fashion, such that in time they could ultimately bring elements of these analyses in-house.

A strategy to reduce consultancy costs was illustrated by interviewee A’s description of commissioning economic impact assessments via procurement-approved panels of impact consultants. They described a model whereby they initially paid external evaluators upwards of £50,000 to generate detailed, mixed-method case studies (i.e. combining qualitative evidence and econometric data) of the impacts of specific research investments. Having taken the time to standardise methods for generating these case studies, and train staff accordingly, the organisation reduced the cost of each case study to under £15,000. Internal teams now work with researchers across a ‘pipeline’ of impact-related activities (be these planning, monitoring or evaluation), while gathering novel data to feed into downstream case studies, applying the same standardised methods.

Automating data collection wherever feasible

A major structural factor that facilitated cross-organisational RIA activities was the availability of records – and in particular well-curated electronic records – to identify research topic areas, extract data and aggregate these for the purposes of assessing impact.

Organisation #2 identified the administrative challenge of manually retrieving studies against a particular topic. They recognised the value of algorithmic and semi-automated approaches such as Digital Science’s Dimensions tool, to help search for and validate records in a particular research domain or theme, before going on to query the extent and availability of impact data to analyse manually.
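
To make the shape of such semi-automated triage concrete, the following is a minimal, hypothetical sketch of keyword-based shortlisting of portfolio records against a theme before manual validation. It does not use the Dimensions API or any specific vendor tool; the field names, keywords and records are illustrative assumptions only.

```python
# Minimal, hypothetical sketch of semi-automated triage of portfolio records against
# a topic of interest, so that only a shortlist goes forward for manual impact review.
# This is not the Dimensions API or any vendor tool; all names and data are illustrative.

TOPIC_KEYWORDS = {"antimicrobial", "resistance", "stewardship"}  # hypothetical theme


def matches_topic(record: dict, keywords: set) -> bool:
    """Return True if any keyword appears in the record's title or abstract."""
    text = f"{record.get('title', '')} {record.get('abstract', '')}".lower()
    return any(keyword in text for keyword in keywords)


def shortlist_for_manual_review(records: list, keywords: set) -> list:
    """Filter a full portfolio of records down to candidates for human validation."""
    return [record for record in records if matches_topic(record, keywords)]


if __name__ == "__main__":
    portfolio = [
        {"id": "A1", "title": "Antimicrobial resistance surveillance", "abstract": "..."},
        {"id": "A2", "title": "Bridge engineering materials", "abstract": "..."},
    ]
    shortlist = shortlist_for_manual_review(portfolio, TOPIC_KEYWORDS)
    print([record["id"] for record in shortlist])  # -> ['A1']
```

The point of the sketch is only that automated filtering narrows the record set; the extent and availability of impact data for each shortlisted record would still be queried and analysed manually, as the interviewee describes.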

Two research organisations (#2 and #4) used Researchfish® (an online platform that enables research organisations to capture and track the impact of research investments and/or activities, and enables researchers to log the outcomes of their work) as a means to capture and categorise research outputs electronically. However, research organisation #2 made the point that their use of this electronic tool was only a relative success as a result of efforts to engage their local research community around the purposes and principles of undertaking RIA across their funding portfolios. They pointed out that researchers may not always report on their impact activities adequately, and so engagement with them was necessary (more on this in ‘engaging researcher communities’, below). They noted that acceptance of reporting on impact, and of contributing to the organisation’s RIA, was due in part to the way Researchfish® was rolled out: feedback from researchers was acted upon, for example by acknowledging researchers’ concerns about the administrative burden of reporting and easing that burden by no longer requiring written annual reports once the system had been implemented.

Processes relevant to RIA

Under processes we include research organisations’ accounts of the activities that they carry out for RIA: grassroots efforts to engage researcher communities to articulate and plan for impact; use of a diversity of methods, frameworks and indicators for impact assessment; and support for a learning approach.

Engaging researcher communities to articulate and plan for impact

All research organisations spoke of RIA as a highly collaborative practice that ought not to be undertaken without thoughtful and sustained efforts to engage with the researcher community and others – both internal and external to their own organisations – with an interest in the research being assessed.

Reinforcing this were interviewees’ descriptions of embedded activities, designed as an integral part of research planning and activity monitoring (i.e. not just for the purposes of assessing impact). Throughout, it was apparent that these activities required dedicated time and resource, not least staff with sufficient skills, to run them – and we reflect further on resource implications for research organisations below.

At the earliest stages of the research process, research organisation #1 spoke of routinely conducting workshops involving the research team, ‘end-users’ (i.e. those whom the research is intended to involve and/or benefit) and other stakeholders, to help them articulate their intended impacts. Research teams’ willingness to involve a suitably diverse stakeholder group in these discussions was used as a heuristic for whether they were ‘RIA ready’ (i.e. whether it was yet appropriate for them to consider being part of more formal evaluations, requiring more than the process and activity data collected by the organisation as part of its standard portfolio monitoring).

As another example, they described an instance where researchers working in the manufacturing sector were initially reluctant to describe the potential for health benefits from their work, not themselves having the expertise to evaluate impacts in this field. They spoke about how they supported these researchers to articulate these impacts, and offered resources to help future evaluation in these domains:

“ It was trying to push them, going, ‘Actually, there are indicators that we could use. You don't have to be the one that collects it. We can either get a social scientist, or sometimes, the organisation itself collects that information.’ So it was trying to wean them off thinking that they were the only ones that had the right to collect the information, and to go into areas that weren't their area of expertise as well .” (Interviewee A)

A telling lesson – particularly relevant for research organisations looking to embark on formal RIA exercises for the first time – came from this same interviewee reflecting on the need to bring researchers along with any strategy, rather than impose it on them:

“ If we're going to do a post-assessment of any of our projects, at the moment, they were never set up and designed to be actually monitored in reporting impact. So you'll find it can be a little bit hit and miss on what type of data they collected along the way, and what type of evidence to the claims that they're making, and all those types of things .” (Interviewee A)

Research organisation #3 noted how transparency in their approach to RIA was a motivating factor for many researchers, who had responded enthusiastically to the opportunity to have their work form part of a snapshot of research activities. They noted that this was particularly the case for early career researchers, whose work tended to be less well represented by ‘traditional’ (mostly citation-based) assessment metrics.

Using a diversity of methods, frameworks and indicators

Despite aspirations to develop a ‘common language’ with which to explain organisational impacts and approaches, it was striking how different these approaches appeared across the research organisations we interviewed. RIA methods (or perhaps more accurately, methodologies) ranged from regular survey-based assessments of whole portfolios of research, to populating logic models with routinely captured programme outputs, to externally commissioned evaluations in specific domains of impact (e.g. cost-benefit analysis), to in-depth case studies co-produced and guided by research teams according to the availability of RIA data in their field.
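
As an illustration of one of the lighter-weight approaches mentioned above, the sketch below represents a programme logic model as a simple data structure populated with routinely captured outputs. The categories and example entries are hypothetical assumptions, not the framework of any organisation we interviewed.

```python
# A minimal sketch of a programme logic model held as a simple data structure and
# populated with routinely captured outputs. Categories and entries are illustrative only.

from dataclasses import dataclass, field


@dataclass
class LogicModel:
    programme: str
    inputs: list = field(default_factory=list)      # e.g. funding, staff time
    activities: list = field(default_factory=list)  # e.g. funded projects, trials
    outputs: list = field(default_factory=list)     # e.g. publications, tools
    outcomes: list = field(default_factory=list)    # e.g. guideline or practice changes
    impacts: list = field(default_factory=list)     # e.g. health or economic gains


model = LogicModel(
    programme="Hypothetical knowledge translation programme",
    inputs=["£2m over 3 years"],
    activities=["12 funded projects"],
    outputs=["30 publications", "2 clinical decision tools"],
    outcomes=["decision tool adopted in 5 hospital trusts"],
    impacts=["reduced readmission rates (to be evidenced via a case study)"],
)
print(f"{model.programme}: {len(model.outputs)} outputs logged")
```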

Interviewee responses all indicated that managing a hierarchy of RIA activities – i.e. determining the ‘unit of analysis’, and determining which research projects/programmes are responsible for reporting into that analysis – was not a trivial task. They reported pragmatism, as much as any formal methodology, as guiding their initial efforts.

There was recognition by all interviewees that a diversity of approaches was needed for RIA, highlighted by the following quote:

“ No one system is going to answer all of the questions and meet all your needs ." (Interviewee C)

Research organisations variously noted the value of RIA frameworks and indicators as a means to help situate conversations with research communities, communicate with other funders (if funding in similar areas of research) and benchmark between organisations’ activities. While all interviewees drew on published literature as the basis of their approaches – from formal bespoke RIA frameworks, to broader conceptualising tools such as the use of programme logic models – two research organisations described additional activities that our analysis finds less well reported in the literature, that we note below.

Research organisation #1 spoke of ‘learning by doing’, taking an established RIA framework, approach or method and adapting it through experimentation and discussion with the research community.

Research organisation #2 described initial data audits (as opposed to more formal ‘research on research’ studies) as a means to use existing reports, grant data and other statistics to populate a draft RIA framework; this could be at the project-by-project or programme-by-programme level. Starting off with a limited – if imperfect – number of impact categories or indicators, nonetheless derived in an iterative and reflexive fashion, could overcome ‘death by indicators’, as described by the following interviewee:

“ Where do the indicators end? It can be overwhelming, if you say: ‘You might have some social impacts. Do you want to open up the Excel spreadsheet that has a look at social indicators?’ – and there are 5,000 of them. They're going to shut it and not even engage with it .” (Interviewee A)

Research organisation #1 also described the value of a framework in terms of expressing the organisation’s main areas of focus for RIA (e.g. economic, social and environmental impacts), with indicators helping to progress a more granular conversation with researchers around the kinds of activities they might be in a position to report on (as opposed to those that would require more in-depth and bespoke evaluative activities to capture).

They also described the value of workshops to help populate an organisation-wide impact framework or test the appropriateness and usability of existing frameworks and impact indicators across multiple funders. These activities, conducted transparently and with the intention of engaging wider communities at their core, ensured that impact indicator sets (as well as organisational ambitions) had a degree of legitimacy with different stakeholder groups – helping ambitions, as noted above, to develop a ‘common language’.

Supporting a learning approach

Looking outward, international engagement with other funders helped organisations to learn from each other, as part of a ‘community of practice’, ultimately connecting different actors across the global innovation system.

Two research organisations (#1 and #2) noted the value of asking other funders ‘ How do you do it? ’, as a means to move RIA to a more mature level of practice. Research organisation #2 described efforts to set up a series of workshops, based around the ISRIA core teaching materials, 1 to bring together different actors in the funding community and across other sectors, as part of a peer-to-peer learning process. They noted a major driver being an aspiration to develop common RIA languages and tools:

“ A lot of the organisations are being asked the same questions. It only makes sense to put our minds and our experiences together to figure that out, because there is no clear path forward, necessarily .” (Interviewee C)

Having a set of publicly available RIA guidelines provided a means to ensure research organisation #1 applied a consistent approach, while embedding learning in evaluation techniques and approaches more widely:

“ If you are to conduct an impact case study within [the organisation] , it will follow these guidelines, otherwise we won’t recognise it as an impact case study. We've just come together to go, ‘Now, what have we learnt? How can we update the guidelines where we thought this was a particular method we should be using? What other information can we be adding?’ and those types of things .” (Interviewee A)

Whether inward or outward-facing, all research organisations recognised the value of a learning approach to inform an aggregate picture of research impact, given that neither they nor others in the wider research community operated in isolation.

Outcomes of conducting RIA activities

In this last section we report on reflections on what doing RIA has meant to research organisations – the value RIA has brought to the organisation, to researchers, and to wider communities and stakeholders. Interviewees noted RIA data as supporting a dialogue to orient research to impact, underpinning shared learning from analyses of research, and providing evidence of the value of research in different domains and to different audiences.

Supporting a dialogue to orient research to impact

Research organisation #3 spoke of the transformative nature of RIA efforts and narratives as “ orienting research to impact, advocating for a different way to design research ” (Interviewee D).

Research organisation #1 spoke of a systemic shift in impact from being the focus of centralised assessment exercises, to being part of a more informed dialogue between researcher and funder throughout the research lifecycle:

“ Now [the research teams] are really starting to link impact clearly to their strategy. They've got a person that works with all their teams to do their impact pathway planning. Not just from my [central] team; they've assigned a full-time person to do that. They then work with my team to be checking that they're doing the right thing. So you've got them investing time, effort, strategic alignment that goes beyond my team having to do it for everyone .” (Interviewee A)

While research organisation #2 felt the proposition of RIA as a means to optimise funders’ return on investments was still somewhat aspirational, they reflected that having impact as a high-level ‘performance indicator’ could help research teams to focus on the capabilities they needed to have an impact, rather than the limitations of their current capabilities. In terms of business development, this helped them to have more realistic conversations around resources required to deliver grantees’ research ambitions – though they also noted that this seemed to be a sensitive subject for a majority of researchers.

Regardless of the mode of assessment, a strong emerging theme was the value of research teams themselves taking ownership of impact plans and ambitions as well as assessment. By linking the organisations’ evaluative practices to researchers’ own strategies, a majority of research organisations (#1, #2 and #3) provided upfront support for impact pathway planning as well as downstream support to identify the availability of impact data and appropriate methods to source it. For these organisations, engaging with research teams to help them see the value of RIA indicated a ‘culture shift’ away from centralised performance management to a more self-motivated and participatory mode of evaluation.

Underpinning shared learning from analyses of research

All research organisations stressed the value of regularly collecting and making publicly available RIA data, to share what they were learning from their analyses of research activities.

Basic questions of accountability – answering questions such as ‘ How much did we invest in X, and what did we do? ’ – determined organisations’ early decisions on the appropriate systems to collect and link impact data.

Research organisation #2 described an initial process of reviewing and categorising ‘known’ outputs from research across various programmes, as a first step, before undertaking analysis of impact at a more systematic level. One described a series of programme reviews, designed to feed data into a common impact ‘architecture’ – in their case, a series of programme profiles to commonly describe the duration, objectives, levels of investment and intended outputs for each research programme.
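
To give a sense of what such a common ‘architecture’ of programme profiles might look like in practice, the sketch below defines a simple profile structure and a basic accountability roll-up of the kind described above (‘how much did we invest, and what did we do?’). The structure, programme names and figures are hypothetical assumptions for illustration only.

```python
# Illustrative sketch of a common 'programme profile' structure and a basic
# accountability roll-up across programmes. All names and figures are hypothetical.

from dataclasses import dataclass


@dataclass
class ProgrammeProfile:
    name: str
    duration_years: int
    objectives: str
    investment_gbp: float
    intended_outputs: tuple
    reported_outputs: int  # count of outputs logged via annual reporting


profiles = [
    ProgrammeProfile("Programme A", 5, "Improve stroke care pathways", 4_000_000,
                     ("publications", "care guidelines"), reported_outputs=38),
    ProgrammeProfile("Programme B", 3, "Antimicrobial stewardship", 1_500_000,
                     ("publications", "training materials"), reported_outputs=12),
]

total_invested = sum(profile.investment_gbp for profile in profiles)
total_outputs = sum(profile.reported_outputs for profile in profiles)
print(f"Invested £{total_invested:,.0f} across {len(profiles)} programmes; "
      f"{total_outputs} outputs reported to date.")
```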

Research organisation #4 described a twofold pathway by which their organisation was introducing RIA practices: firstly, a pragmatic exercise to capture and publish data from annual accountability surveys that could feed ‘rather quick and dirty’ analyses for decision-makers and, secondly, a prospective strategy for more analytical assessments.

An example of more in-depth analytical work by this organisation was their exploration of the nature of researchers’ collaborations with industry. Informed by descriptive statistics initially collected via annual reporting cycles, the research organisation had designed a more detailed analysis (involving semi-structured interviews and combining their datasets with their national register of companies), through which they might learn about the motivations of researchers to work with industry (and vice versa).

Reflecting on value to the research system itself, all interviewees noted that RIA data could help researchers learn how to have a greater impact. Down the line, research organisation #2 noted that sharing the results of RIAs engendered a greater spirit of cooperation from researchers, as part of a virtuous circle:

“ I think it is really important to ensure that we are sharing back with our stakeholders and the research community [ … ] We are in this together. If we collect data and we never report back, it is really not much of a collaborative relationship. It’s important for us to share the results that we are achieving, not just for us, but we always acknowledge the efforts that the researchers make. This is their work that we simply help fund and support .” (Interviewee C)

Providing evidence of the value of research

All interviewees indicated that RIA data was providing underpinning evidence to communicate the value of research, in a number of ways.

In the case of research organisation #1, their efforts to standardise methods to generate return on investment data meant that they were now able to calculate an aggregate figure representative of the organisation’s overall return on investment, across its entire research portfolio. This figure, calculated every 2 years, was now being released publicly by the Chairman of the Board and CEO. Interviewees more broadly noted the value of RIA data in supporting wider organisational engagement activities, particularly involving their organisations’ communications teams. They described how impact-led communications were themselves more engaging – one noting that their organisation could not “ pump them out fast enough ” for the demands of their communications team and ministerial liaison office.
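
To illustrate the arithmetic behind such an aggregate figure – not the organisation’s actual method, which is not described here in detail – the sketch below rolls up a portfolio-level return-on-investment ratio from a set of standardised case studies, each recording an investment and a monetised benefit. All figures are hypothetical.

```python
# A minimal sketch of rolling up an aggregate return-on-investment figure from
# standardised case studies. Each case study records the research investment and the
# monetised benefit estimated by the evaluation; all figures are hypothetical.

case_studies = [
    {"title": "Case study 1", "investment_gbp": 2_000_000, "monetised_benefit_gbp": 7_500_000},
    {"title": "Case study 2", "investment_gbp": 850_000, "monetised_benefit_gbp": 1_900_000},
    {"title": "Case study 3", "investment_gbp": 3_200_000, "monetised_benefit_gbp": 9_100_000},
]

total_investment = sum(case["investment_gbp"] for case in case_studies)
total_benefit = sum(case["monetised_benefit_gbp"] for case in case_studies)
aggregate_roi = total_benefit / total_investment  # estimated benefit per £1 invested

print(f"Across {len(case_studies)} case studies: £{aggregate_roi:.2f} of estimated "
      f"benefit per £1 invested.")
```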

‘Soft’ advocacy formed the principal reason to scale up RIA practices in this organisation. They couched this not in terms of direct appeals for funding, but in terms of being able to provide a more robust series of answers to generic questions such as return on investment, or the contribution of research investments in different thematic portfolios (e.g. health, environmental sciences, engineering) to specific issues forming the focus of public policy initiatives (e.g. tackling CO2 emissions). They noted RIA as a means to generate “ the feedback that’s required in regards to decision-making ” as well as “ accountability back to the public on what we’ve spent their money on, and what they’re getting for it ” (Interviewee A).

Interestingly, none of the research organisations we interviewed indicated that either their RIA activities or the evidence they generated were yet sophisticated or systematic enough to justify high-level decisions around the allocation of research funds. Indeed, research organisation #2 urged caution against responding, without due care, to what they perceived as an increasing demand for evidence on how best to allocate public resources for research:

“ If around the world there is not strong evidence in research and innovation, or science, supporting how to allocate investments using any credible good designs or evidence, then we have to identify the imperfect science for that aim. Then, as a community, if the decision-makers and policymakers are asking more for it, we have to figure out, as a community, how we are going to respond to that demand. We are treading extremely carefully in terms of allocation. Extremely .” (Interviewee B)

Thus, while RIA was recognised as providing one line of evidence to help inform programme decisions, it was by no means the only evidence that could do so – stakeholder consultation was recognised by a majority of research organisations (#1, #2 and #3) as crucial to exploring the consequences of RIA data being used in future allocation decisions, such as that undertaken by the United Kingdom’s Research Excellence Framework.

A spirit of cooperation or shared endeavour in RIA had the potential for research organisations and researchers alike to make the case for continued investment in research – and thus act in concert to benefit society, as expressed by the following interviewee:

“ I think, we are very fortunate that our research community, by and large, understand the need to start demonstrating impact, and that we are in this together. That we fund them. They generate outcomes and impact. That we need to show what our researchers are getting, so that we can advocate for continued funding from the government. That it is not them in isolation, or us in isolation. That we really do need to bring our efforts together .” (Interviewee C)

Our aim in conducting this study was to describe the experiences of research organisations in putting into practice RIA activities, frameworks and approaches. We found that the scientific literature on impact, though containing a number of examples of RIA practices of potential relevance to research organisations, was abstracted from the realities of organisations’ lived experiences. By comparing observed sets of RIA structures, processes and outcomes from our interviews with what is represented by the limited (n = 5) examples of experiences published in the scientific literature, we can begin to suggest what might be considered good ‘realistic’ practice in this highly emergent field. We thus set out three high-level recommendations for research organisations derived from our analysis and linking to the Donabedian domains of focus [ 24 ] – getting set up for RIA, working with diverse stakeholder groups to plan for impact and its evaluation, and realising the benefits of RIA as a means to underpin shared sectoral learning.

Get set up: impact strategies need leadership, skilled evaluators, effective data systems, and time to set up and deliver

Our overarching impression from the accounts of all interviewees was that organisational maturity and readiness for RIA varied considerably within each organisation, not least across the four different organisations in our convenience sample – all of which were already part of engaged learning activities through their participation at ISRIA. Interviewees described various structural aspects that could be considered ‘rate-limiting’ factors to developing impact strategies and, within this, the design and delivery of RIA. Thus, our first recommendation calls on research organisations to consider carefully and prospectively the structures within their own organisations that may facilitate their future capacity and capability to undertake RIA.

Two research organisations we interviewed noted gains to be made from semi-automated methods to search, validate and analyse impact data. However, they noted a limiting factor to be the extent and availability of electronic records. How to make best use of existing systems, and how to find efficiencies in data analysis to service a range of potential evaluation questions, would seem a critical point of reflection. In their guidelines, ISRIA make the point that RIA practitioners themselves need to have a nuanced understanding of the merits of different approaches to evaluation, to gather data via methods that address assessment questions efficiently and effectively [ 5 ]. Trochim et al. reinforce this point in their broad-ranging series of guidelines for the NIH’s CTSA initiative, reflecting that evaluators ought to aspire to high professional standards of practice and be sufficiently skilled to apply innovative approaches [ 27 ]. However, they make the point that bringing in and/or training evaluation professionals to this level requires upfront allocation of resources. We observed that research organisations’ own resource constraints for conducting evaluative activities were a critical factor in their scale-up of RIA.

Looking to RIA skills, our interviewees reflected on a number of business models applied to undertake RIA, and the advantages of encouraging a more decentralised (i.e. collaborative) approach. Yet, only one interviewee noted efforts to train staff within their organisation. As part of a team responsible for co-developing an impact training programme for staff working across the United Kingdom’s NIHR, we have observed first-hand the importance of efforts to develop skills and competencies when it comes to exploring questions around research impact and its assessment. ‘Impact literacy’ as a concept has indeed been noted by others working to advance institutional practices in this area, particularly to ensure that approaches (either inherently, or as applied at an organisational level) do not encourage instrumentalism or short-termism [ 33 ].

While we would encourage further necessary reflection from research organisations to understand their capabilities across the above important structural domains of RIA, we consider an overarching – and potentially underrepresented – issue to be that of leadership. This was noted by a number of our interviewees as vital for making headway in organisations adopting more rigorous approaches to RIA. Trochim et al. are clear in their recommendation that evaluation practices be integrated into research programme planning and implementation [ 27 ]. While they note that responsibility for timely and high-quality evaluation lies across all stakeholders, they particularly highlight the role of research programme leaders – both at the level of the funding organisation and at local research centres – to embed evaluation as “ an ongoing function distributed across all cores ” . In the case of the CTSA programme, such an evaluation function was prospectively and explicitly mandated by NIH at the commissioning stage, to be planned and costed into requests for funding by prospective centres, as part of the application process [ 27 ].

Taking just these three structural factors – better impact data, RIA skills and leadership – we note the importance of considering the resource requirements that an embedded, strategically relevant and methodical approach to evaluation demands if good RIA practices are to become integral to effective research programming. While we agree with the warnings from a number of commentators against the dangers of research organisations taking an overly bureaucratic approach to impact and its assessment [ 7 ], we find concerning the mounting evidence that a number of major RIA programmes are insufficiently resourced to achieve their intended objectives [ 34 ], or that impact is overlooked entirely as a focus of evaluation by research commissioning programmes operating in the translational/implementation space [ 35 ]. While we recognise our own institutional biases (working as mixed methodologists in policy analysis – itself a fairly ‘blended’ research area), we would stress the effort and commitment required to introduce what can be seen as unorthodox methodological approaches to large organisations with well-established working practices. The work of Swan et al. sets out a cautionary tale – they describe how dominant institutional, so-called ‘mode 1’, logics can prevail over, and ultimately lead to the abandonment of, initiatives that deliberately try to set a focus towards use-led, so-called ‘mode 2’, research [ 36 ].

We reflect that structural factors played a large, if not dominant, role in organisations’ early efforts to undertake RIA. These lead us to recommend that research organisations consider their own maturity, and ultimately capability, when determining the purpose and scope of RIA. Ensuring complementarity between RIA activities at different levels of a complex programme(s) of research requires effort to plan and coordinate, if RIA is ultimately to meet the needs of stakeholders with different interests and expectations, as we describe further below.

Work together: engaging researcher communities and wider stakeholders should be at the core of impact pathway planning and RIA

Engaging with research communities and wider stakeholders, including other funding agencies and government departments, is important for a number of reasons: in particular, to improve the quality of the impact data reported by researchers, to help research organisations better understand what impact means in their research ecosystem and, not least, to address the ethical considerations of collecting impact data.

Interviewees described a range of thoughtful and sustained efforts to engage within and across their organisations’ teams, with other research organisations, and a range of stakeholders with an interest in the research being assessed. These activities ranged from means to encourage peer-to-peer learning of appropriate RIA methods and approaches between organisations, to platforms to motivate research groups to plan and articulate their ambitions for impact. Interviewees also described a spectrum of enabling work for RIA that spanned from inwardly focussed efforts to gain managerial support, to outwardly focussed efforts to promote a cooperative ethos among research teams being assessed, such that they guided the organisation as to which data they were in a position to collect as part of an ongoing dialogue around anticipated or actual impacts that their work had contributed to.

ISRIA’s guidelines set out a number of practical steps that encourage processes of identifying and engaging with stakeholders, and reflecting on their interests, as critical steps in RIA [ 5 ]. The authors note that doing so can support the social robustness of knowledge derived from RIA and, by extension, the science that it represents. They also point out that given limited resources for evaluative activities, stakeholder engagement can help to prioritise areas for RIA. Yet, warning signs are apparent from other commentators of the dangers of instrumental approaches to engagement. Jude Fransman, in her nuanced and comprehensive synthesis of the history and ecology of research engagement practices, points out the dominance of academic conceptualisations of engagement [ 10 ].

Indeed, we observed that the process of conducting RIA activities in itself supports a dialogue between researcher and research organisation to orient research to impact, based on co-ownership of impact plans and a focus on shared capabilities to deliver and evaluate impact. McLean et al., in their study of research organisations’ roles in translating research, speak of the power held by funders, in particular, in stimulating and incentivising action among the wider research community [ 11 ]. This power ought to be used with careful reflection by funders on their motivation for conducting RIA. McLean et al. make the case that under-investment in critical reflection is not a sustainable means for research organisations to cut costs [ 11 ].

In their guidelines for effective RIA, ISRIA highlight the need for clarity from research organisations on their rationale for assessment. Researchers’ perceptions of RIA matter; research organisations ought to consider the ethical implications of their requests for information, particularly where assessment might create perverse incentives (e.g. if linked to further funding or other conflicts of interest). Funders in particular must recognise and mitigate any ‘conflict of commitment’ arising from the time and effort spent by researchers in responding to requests for information [ 5 ].

This sentiment is echoed and expanded upon by Trochim et al. [ 27 ], who recognise that setting out the implications of RIA can support policy and action as well as clarify conceptual concerns and engage thinking amongst researchers. They encourage research organisations to work collaboratively with local groups and explore the merits of different approaches to evaluation. In their view, the role of research organisations in this area is to provide general guidance, not explicit requirements, to allow scope for local ownership and contextually relevant planning of evaluation activities to take place. Nonetheless, they encourage research organisations developing written evaluation policies or guidance to ensure that these address important topics over and above management and methods, such as goals, roles, participation, as well as the use, dissemination and meta-evaluation of such policies. They reinforce that metrics alone do not make for good evaluations and recommend piloting small, rigorous sets of definitions, metrics and measures. We would echo calls for research organisations to be aware of and adopt, where practical, calls for responsible metrics [ 37 ].

Finally, we found that a learning approach, through international collaboration and the sharing of emerging RIA practices, helped research organisations to apply more mature methods and generate better (i.e. more rigorous and more strategically relevant) impact data. This is perhaps not surprising, given all four interviewees were faculty members of a collaborative international programme dedicated to “ learning to assess research with the aim to optimise returns ”. Indeed, a call for mutual learning with the RIA community forms one of the ten guideline points published by this group [ 5 ]. A logical first step ought to be mapping the context for evaluative activities – a ‘macro’ example comparing United Kingdom versus Australian perspectives on RIA being that of Williams and Grant [ 38 ]. This can help understand the wider environment for RIA and benchmark strengths and weaknesses of the research environment.

Recognise benefits: a focus on impact can lead to greater engagement between research organisations and researchers, improved communications and ultimately better evidence on the value of research

Throughout our interviewees’ accounts, we noted themes of organisational improvement and benefits to the research system brought about by taking an open, reflexive and methodical approach to RIA. We feel it is necessary to present some of these benefits, and encourage other organisations to do likewise, in the spirit of ‘continuous improvement’ that seeks to improve research funding practices for wider societal gains [ 39 ].

Proximal benefits of RIA included research organisations’ access to data that underpinned a learning and cooperative approach to achieving wider impacts. Interviewees spoke of ‘bringing the community along’, echoing calls to improve the social robustness of research and the social desirability of impacts, by involving public and other stakeholder groups with an interest in research throughout the process of impact delivery and impact evaluation [ 5 , 40 ]. Evaluation processes and their results ought to be open and accessible [ 27 ]. Thus, RIA, by setting out a methodical and transparent approach to research evaluation, ought to help serve organisations seeking a diversified communications offering, where no one form of impact is preferred over another [ 5 ].

More distal benefits of RIA included better evidence of the value of research to inform decisions and, ultimately, organisations’ ambitions to optimise returns. All of our interviewees reflected on the emergent and relatively new practice of RIA, but spoke of this as a learning opportunity. Our own experience in designing training for the United Kingdom NIHR is that research staff are responsive to training on impact and RIA methods, as part of a wider programme of work to build capacity in this emerging area. In their guidelines to the wider community, ISRIA notes that a responsible approach to RIA provides one line of evidence by which organisations can make better decisions [ 5 ]. Experimentation and variation in approaches to RIA are appropriate: research organisations need not be put off from encouraging smaller, localised efforts that can act as incubators or testing grounds for larger/macro approaches [ 27 ]. ‘Joining forces’ via fora that bring evaluators together is one way organisations might tap into the heterogeneity of approaches in this emergent field, noted by Kane et al. as both “ a liability and a strength ” [ 41 ]. Softer approaches for knowledge exchange between organisations, such as a series of ‘impact coffee clubs’ set up by the United Kingdom’s Association of Medical Research Charities and the NIHR, could also act to share learning and grow organisational capacity in RIA [ 42 ].

Thus, our findings echo calls by others tasked with exploring and mapping emerging practice in this area: making the case for the need for investment to support methodological innovation in research evaluation, the better use of existing datasets, and the wider education and cultural change of the research sector as a whole, if we are to understand the benefits that arise from research and, ultimately, set in place policies to realise the full value of public investments in R&D [ 43 ].

Limitations

In conducting this study, we recognise the emergent and relatively under-explored nature of research organisations’ own impact practices. We did not search grey literature sources systematically, and appreciate that this body of work may well include results from (and potentially reflections on) methodical and robust RIA activities. However, our aim was to identify the extent to which empirical data and reflections on research organisations’ own impact practices were being reported in the scientific literature, and to supplement this with our own access to organisations willing to go ‘on the record’ with their own lived experiences.

Our use of interviews was designed to provide illustrative case studies from a convenience sample of research organisations contributing to emerging RIA practices, and thus not necessarily representative nor appropriate to generalise to research organisations in other contexts (for further discussion on merits of case study method and its value in generating practical knowledge, see Flyvbjerg [ 44 ] and, as specifically applied to a study of research impact, Greenhalgh [ 45 ]). We sought to apply rigour in how we conducted and analysed interviews, ensuring prior ethical review and approval, and the confidentiality of interviewees and their explicit approval of any quotes.

Though our interview sample was restricted to ISRIA faculty members, we feel this is justified given our aim was to shed light on organisational activities and behaviours (as opposed to theory or principles) of RIA. We recognise that, to a degree, terminology and responses may reflect an already engaged group seeking to contribute to the community of practice in RIA. We appreciate that this engaged group may not be representative of the majority of research organisations, either in terms of capabilities or capacity. Thus, we have made efforts to present our findings in a logical fashion, starting with structural aspects, moving on to procedural aspects and, eventually, the outcomes of conducting RIAs, such that others might be inspired to join this growing community of practice. Wherever possible we have sought to situate responses in the context of activities to which other research organisations can relate, and to provide specific examples and quotes wherever these do not identify the organisation in question. We would welcome critical feedback and insights from any individuals or organisations who feel motivated to contribute.

There are very few examples that provide empirical evidence of how research organisations put RIA principles into practice in the scientific literature. From our interviews, we find evidence of the value of RIA, but also a disconnect between published RIA tools and results, and the realities of organisational practices, which tend not to be reported.

Our analysis suggests a number of common areas where research organisations are aligning their practices to optimise research impact and its evaluation. We observed varying structural set ups for conducting RIAs, which included support from senior management and strong leadership, developing a skills base for evaluation, and automating data collection wherever feasible. With respect to processes, we described grassroots efforts to engage researcher communities to articulate and plan for impact, using a diversity of methods, frameworks and indicators for impact assessment, and supporting a learning approach both within and across organisations. Finally, under outcomes of conducting RIA activities, we reported on interviewees’ reflections on the value that RIA has brought to their organisation, to researchers, and to wider communities and stakeholders, including that RIA helps with supporting a dialogue to orient research to impact, underpinning shared learning from analyses of research, and providing evidence of the value of research in different domains and to different audiences.

We suggest three factors that can enable good ‘realistic’ practice in RIA, derived from our analysis, as follows: (1) getting set up for RIA in terms of data, skills, time and supportive leadership able to allocate sufficient resources to developing strategy; (2) working with researchers and other funders and stakeholders collaboratively; and (3) realising RIA benefits such as better data on impact, transparency and the potential to obtain evidence on the value of research.

We conclude that, while theoretical and conceptual RIA models abound, the research organisations’ challenge is to adapt, and experiment with, practical RIA approaches in their own context. Other than the very few notable exceptions that we describe, the ‘science of science’ agenda seems insufficiently embedded in organisational practices if it is to usefully inform RIA. Given research organisations’ key role in shaping research systems, and a growing emphasis on impact, efforts are needed to address this ‘knowledge to practice’ gap.

Assessment of research impact implicitly requires value judgements, on choices of frameworks, indicators, methods, tools, themes and priorities, to name but a few practical considerations. We see from our interviews that research organisations have dedicated time and effort to reflecting on how they go about making those decisions and, crucially, to engaging researcher communities as part of the process. Examples from these organisations that are taking a grassroots, researcher-centric approach to RIA suggest that equal, if not greater, emphasis be placed on strategies to encourage dialogue with researchers and their wider communities around impact, as on evaluative activities to evidence impact. Research organisations benefit from taking a collaborative approach that encourages shared learning as a primary ambition of RIA.

We see a need for investment in skills and supportive structures, as well as efforts to make funder datasets more accessible for analysis and to publish results to encourage shared learning. We call for research organisations to adapt RIA practices based on a clear sight of ‘what works’ in other organisations, as we hope to have begun detailing here. By situating reflections, analysis and further ‘research on research’ within their own working practices, we believe that research organisations can work cooperatively with researchers to orient and optimise research towards societal impacts.

Acknowledgements

The authors thank all interviewees and members of the ISRIA faculty who shared information on their organisations and approaches to impact. We also thank colleagues across NIHR, and in particular Dr. Mark A Taylor, whose thoughtful reflections on current practices and challenges in RIA provided the impetus for this as part of a wider programme of organisational and cultural change relating to impact and its assessment, and Dr. Claire Vaughan, whose role in establishing training, support and collegiate reflection on what impact means to NIHR and its wider stakeholders has been crucial to developing a learning approach for this and related work.

Abbreviations

Search string

List of search terms (and sequence of database searches)

  • (translation* adj1 (research or knowledge)) or “knowledge mobili?ation*” or “research into practice” or “translation to health application*” or “translation to patient*” or “translation to health practice*” or “translation to population health impact” or “research impact” or “knowledge into practice” or “populari?ation of research” or “research generated knowledge” or valorization or “value for money” or “social return” or sroi
  • metric* or framework* or payback or measure* or “financial return*” or “political impact” or “policy impact*” or “social impact*” or bibliometrics or econometrics or “economic evaluation*” or “cost effectiveness” or “cost benefit analysis” or assessment or evaluation
  • (government* or charit* or non-profit* or not-for-profit* or public or health or medic*) adj1 (research or scien*)
  • (research or scien*) adj1 (fund* or organi?ation or institution* or grant* or charit* or NGO)
  • remove duplicates from 7

Interview topic guide

Structures and processes of research impact assessment (RIA) activities

When did you start undertaking RIA as a formal activity?

Can you briefly describe the methods you currently employ to undertake RIA?

(frameworks/tools/approaches, based on theory/adapted for own use/developed own?)

What data do you collect, and how often do you collect it?

What informed your decisions in this regard?

(literature review/training e.g. ISRIA/own research?)

Who actually does the work?

Outcomes and value of RIA

Can you recall your primary purpose when you first set out to do RIA?

Looking back, what have you found to be the most valuable aspect of doing RIA? (explore esp. if different from primary purpose/evolved over time?)

Have you found that undertaking RIA has led to improvements in research translation and impact? If so, do you have any evidence of this?

(perceived/experiential/substantiated/measured?)

If RIA helped to identify when research translation wasn’t occurring, what did you do as a result with the information that you gathered?

(explore organisational links & intentions vs. power to effect change)

Has RIA facilitated your organisation’s research (+ impact) communications?

(which audiences, to what effect, any evidence or materials exemplifying RIA?)

Challenges/lessons learned

What were some of the challenges you faced as you implemented RIA practices? (describe/how overcome/what learned?)

Is there evidence of others having benefitted from the approach to RIA you’ve taken?

(organisational or personal reflections/write-ups/reviews/policies?)

(How) has your approach to RIA developed since you first put it into practice?

(what informed this?)

Have you/do you plan to publish materials describing your experiences of RIA?

Authors’ contributions

AK led study investigation, formal analysis and original drafting of the manuscript. AK & SHK contributed equally to study conceptualisation, methodology, and manuscript review and editing. SHK led study supervision. All authors read and approved the final manuscript.

AK receives salary funding from LGC Ltd. on behalf of its independent grant management function for the National Institute for Health Research (NIHR) Central Commissioning Facility, formerly (2016–18) via a researcher-in-residence grant to the Policy Institute at King’s College London and currently (2019 onwards) as an employed senior research fellow, exploring questions around NIHR’s impact, value and approaches to evaluation.

Availability of data and materials

Ethics approval and consent to participate

This study was eligible for and received King’s College London’s minimal risk ethical approval (ref. MR/17/18–49) in advance of approaching interviewees for their consent to participate.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

1 These are freely available under a CC-BY-NC-SA 4.0 license at: https://www.theinternationalschoolonria.com/resources.php

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The state of AI in early 2024: Gen AI adoption spikes and starts to generate value

If 2023 was the year the world discovered generative AI (gen AI) , 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey  on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year , with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.

About the authors

This article is a collaborative effort by Alex Singla, Alexander Sukharevsky, Lareina Yee, and Michael Chui, with Bryce Hall, representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.

Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.

AI adoption surges

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; this year, however, more than two-thirds of respondents in nearly every region say their organizations are using AI (organizations based in Central and South America are the exception, with 58 percent of respondents there reporting AI adoption). Looking by industry, the biggest increase in adoption can be found in professional services, which here includes organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training.

Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).

Gen AI adoption is most common in the functions where it can create the most value

Most respondents now report that their organizations—and they as individuals—are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development—two functions in which previous research (“The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023) determined that gen AI adoption could generate the most value—as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.

Gen AI also is weaving its way into respondents’ personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen AI tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.

Investments in gen AI and analytical AI are beginning to create value

The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.

Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations—in line with what we found last year—as well as meaningful revenue increases from AI use in marketing and sales.

Inaccuracy: The most recognized and experienced risk of gen AI use

As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).

Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks, and they are not increasing efforts to mitigate those risks.

In fact, inaccuracy—which can affect use cases across the gen AI value chain, ranging from customer journeys and summarization to coding and creative content—is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.

Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.

Our previous research has found that there are several elements of governance that can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place (see “Implementing generative AI with speed and safety,” McKinsey Quarterly, March 13, 2024). For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.

Bringing gen AI capabilities to bear

The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions: takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch (see “Technology’s generational moment with generative AI: A CIO and CTO guide,” McKinsey, July 11, 2023). Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs—though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents’ business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.

Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.

Gen AI high performers are excelling despite facing challenges

Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.

To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.

What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.

Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions—that is, to “shift left.” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.

In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.

About the research

The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
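To make the weighting step concrete, the following is a minimal, illustrative Python sketch of how responses could be weighted by each nation's share of global GDP. The country names, GDP shares, and response records are hypothetical placeholders, and the sketch is an assumption about the general approach rather than a description of McKinsey's actual procedure.

# Minimal sketch (assumed approach, illustrative data only): weight survey
# responses by each respondent nation's share of global GDP, so that over-
# or under-represented countries do not skew the aggregate adoption rate.
from collections import Counter

gdp_share = {"US": 0.25, "China": 0.18, "Germany": 0.04, "Brazil": 0.02}

# One record per respondent: (country, has the organization adopted AI?)
responses = [
    ("US", True), ("US", False), ("China", True),
    ("Germany", True), ("Brazil", False),
]

respondents_per_country = Counter(country for country, _ in responses)

def weight(country: str) -> float:
    # Each country's total weight equals its GDP share, split evenly
    # across that country's respondents.
    return gdp_share[country] / respondents_per_country[country]

weighted_adopters = sum(weight(c) for c, adopted in responses if adopted)
total_weight = sum(weight(c) for c, _ in responses)

print(f"Weighted AI adoption rate: {weighted_adopters / total_weight:.1%}")

Under this scheme each country contributes to the headline figure in proportion to its GDP share rather than in proportion to how many of its respondents happened to complete the survey, which is the intent of such a response-rate adjustment.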

Alex Singla and Alexander Sukharevsky are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey’s Chicago and London offices, respectively; Lareina Yee is a senior partner in the Bay Area office, where Michael Chui, a McKinsey Global Institute partner, is a partner; and Bryce Hall is an associate partner in the Washington, DC, office.

They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.

This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.

