– Which elements should be prioritized as the focal point for attention? (Squires et al 2013, Kelly et al 2017).
– consider the philosophical foundations of the primary studies. Would it be appropriate to favour a method, such as thematic synthesis, that is less reliant on epistemological considerations? (Barnett-Page and Thomas 2009).
– consider what type of qualitative evidence synthesis will be feasible and manageable within the time frame available (Booth et al 2016).
– consider whether the ambition of the review matches the available resources. Will the extent of the scope and the sampling approach of the review need to be limited? (Benoot et al 2016, Booth et al 2016).
– consider access to expertise, both within the review team and among a wider group of advisors. Does the available expertise match the qualitative evidence synthesis approach chosen? (Booth et al 2016).
– consider the intended audience and purpose of the review. Do the approach to question formulation, the scope of the review and the intended outputs meet their needs? (Booth et al 2016).
– consider the type of data present in typical studies for inclusion. To what extent are candidate studies conceptually rich and contextually thick in their detail?
The review question is critical to development of the qualitative evidence synthesis (Harris et al 2018). Question development affords a key point for integration with the intervention review. Complementary guidance supports novel thinking about question development, application of question development frameworks and the types of questions to be addressed by a synthesis of qualitative evidence (Cargo et al 2018, Harris et al 2018, Noyes et al 2018a, Booth et al 2019b, Flemming et al 2019).
Research questions for quantitative reviews are often mapped using structures such as PICO. Some qualitative reviews adopt this structure, or use an adapted variation of it (e.g. SPICE (Setting, Perspective, Intervention or Phenomenon of Interest, Comparison, Evaluation) or SPIDER (Sample, Phenomenon of Interest, Design, Evaluation, Research type)) (Cooke et al 2012). Booth and colleagues (Booth et al 2019b) propose an extended question framework (PerSPecTIF) to describe both wider context and immediate setting that is particularly suited to qualitative evidence synthesis and complex intervention reviews (see Table 21.5.a).
Detailed attention to the question and specification of context at an early stage is critical to many aspects of qualitative synthesis (see Petticrew et al (2019) and Booth et al (2019a) for a more detailed discussion). Specifying the context enables a review team to identify opportunities for integration with the intervention review, or opportunities for maximizing use and interpretation of evidence as a mixed-method review progresses (see Figure 21.2.a), and informs both the interpretation of the observed effects and assessment of the strength of the evidence available in addressing the review question (Noyes et al 2019). Subsequent application of GRADE-CERQual (Lewin et al 2015, Lewin et al 2018), an approach to assessing confidence in synthesized qualitative findings, requires further specification of context in the review question.
Table 21.5.a PerSPecTIF question formulation framework for qualitative evidence syntheses (Booth et al 2019b). Reproduced with permission of BMJ Publishing Group
| Perspective | Setting | Phenomenon of interest/Problem | Environment | Comparison (optional) | Time/Timing | Findings |
| --- | --- | --- | --- | --- | --- | --- |
| From the perspective of a pregnant woman | In the setting of rural communities | How does facility-based care | Within an environment of poor transport infrastructure and distantly located facilities | Compared with traditional birth attendants at home | Up to and including delivery | In relation to the woman’s perceptions and experiences? |
Additional guidance is available on formulation of questions to understand and assess intervention implementation (Cargo et al 2018). A strong understanding of how an intervention is thought to work, and how it should be implemented in practice, will enable a critical consideration of whether any observed lack of effect might be due to a poorly conceptualized intervention (i.e. theory failure) or poor intervention implementation (i.e. implementation failure). Heterogeneity needs to be considered for both the underlying theory and the ways in which the intervention was implemented. An a priori scoping review (Levac et al 2010), concept analysis (Walker and Avant 2005), critical review (Grant and Booth 2009) or textual narrative synthesis (Barnett-Page and Thomas 2009) can be undertaken to classify interventions and/or to identify the programme theory, logic model or implementation measures and processes. The Intervention Complexity Assessment Tool for Systematic Reviews (iCAT_SR) (Lewin et al 2017) may be helpful in classifying complexity in interventions and developing associated questions.
An existing intervention model or framework may be used within a new topic or context. The ‘best-fit framework’ approach to synthesis (Carroll et al 2013) can be used to establish the degree to which the source context (from where the framework was derived) resembles the new target context (see Figure 21.2.a). In the absence of an explicit programme theory and detail of how implementation relates to outcomes, an a priori realist review, meta-ethnography or meta-interpretive review can be undertaken (Booth et al 2016). For example, Downe and colleagues (Downe et al 2016) undertook an initial meta-ethnography review to develop an understanding of the outcomes of importance to women receiving antenatal care.
However, these additional activities are very resource-intensive and are only recommended when the review team has sufficient resources to supplement the planned qualitative evidence syntheses with an additional explanatory review. Where resources are less plentiful a review team could engage with key stakeholders to articulate and develop programme theory (Kelly et al 2017, De Buck et al 2018).
Review authors can develop a more comprehensive representation of question features through use of logic models, programme theories, theories of change, templates and pathways (Anderson et al 2011, Kneale et al 2015, Noyes et al 2016a) (see also Chapter 17, Section 17.2.1 and Chapter 2, Section 2.5.1 ). These different forms of social theory can be used to visualize and map the research question, its context, components, influential factors and possible outcomes (Noyes et al 2016a, Rehfuess et al 2018).
Finally, review authors need to engage stakeholders, including consumers affected by the health issue and interventions, or likely users of the review from clinical or policy contexts. From the preparatory stage, this consultation can ensure that the review scope and question are appropriate and that the resulting products address the implementation concerns of decision makers (Kelly et al 2017, Harris et al 2018).
In comparison with identification of quantitative studies (see also Chapter 4 ), procedures for retrieval of qualitative research remain relatively under-developed. Particular challenges in retrieval are associated with non-informative titles and abstracts, diffuse terminology, poor indexing and the overwhelming prevalence of quantitative studies within data sources (Booth et al 2016).
Principal considerations when planning a search for qualitative studies, and the evidence that underpins them, have been characterized using a 7S framework from Sampling and Sources through Structured questions, Search procedures, Strategies and filters and Supplementary strategies to Standards for Reporting (Booth et al 2016).
A key decision, aligned to the purpose of the qualitative evidence synthesis, is whether to use the comprehensive, exhaustive approaches that characterize quantitative searches or to use purposive sampling that is more sensitive to the qualitative paradigm (Suri 2011). The latter, which is used when the intent is to generate an interpretative understanding, for example, when generating theory, draws upon a versatile toolkit that includes theoretical sampling, maximum variation sampling and intensity sampling. Sources of qualitative evidence are more likely to include book chapters, theses and grey literature reports than standard quantitative study reports, and so a search strategy should place extra emphasis on these sources. Local databases may be particularly valuable given the criticality of context (Stansfield et al 2012).
Another key decision is whether to use study filters or simply to conduct a topic-based search where qualitative studies are identified at the study selection stage. Search filters for qualitative studies lack the specificity of their quantitative counterparts. Nevertheless, filters may facilitate efficient retrieval by study type (e.g. qualitative (Rogers et al 2018) or mixed methods (El Sherif et al 2016) or by perspective (e.g. patient preferences (Selva et al 2017)) particularly where the quantitative literature is overwhelmingly large and thus increases the number needed to retrieve. Poor indexing of qualitative studies makes citation searching (forward and backward) and the Related Articles features of electronic databases particularly useful (Cooper et al 2017). Further guidance on searching for qualitative evidence is available (Booth et al 2016, Noyes et al 2018a). The CLUSTER method has been proposed as a specific named method for tracking down associated or sibling reports (Booth et al 2013). The BeHEMoTh approach has been developed for identifying explicit use of theory (Booth and Carroll 2015).
Four potential approaches are available to identify process evaluations.
More detailed guidance is provided by Cargo and colleagues (Cargo et al 2018).
Assessment of the methodological strengths and limitations of qualitative research remains contested within the primary qualitative research community (Garside 2014). However, within systematic reviews and evidence syntheses it is considered essential, even when studies are not to be excluded on the basis of quality (Carroll et al 2013). One review found almost 100 appraisal tools for assessing primary qualitative studies (Munthe-Kaas et al 2019). Limitations included a focus on reporting rather than conduct and the presence of items that are separate from, or tangential to, consideration of study quality (e.g. ethical approval).
Authors should distinguish between assessment of study quality and assessment of risk of bias by focusing on assessment of methodological strengths and limitations as a marker of study rigour (what we term a ‘risk to rigour’ approach (Noyes et al 2019)). In the absence of a definitive risk to rigour tool, we recommend that review authors select from published, commonly used and validated tools that focus on the assessment of the methodological strengths and limitations of qualitative studies (see Box 21.8.a ). Pragmatically, we consider a ‘validated’ tool as one that has been subjected to evaluation. Issues such as inter-rater reliability are afforded less importance given that identification of complementary or conflicting perspectives on risk to rigour is considered more useful than achievement of consensus per se (Noyes et al 2019).
The CASP tool for qualitative research (as one example) maps onto the domains in Box 21.8.a (CASP 2013). Tools not meeting the criterion of focusing on assessment of methodological strengths and limitations include those that integrate assessment of the quality of reporting (such as scoring of the title and abstract, etc) into an overall assessment of methodological strengths and limitations. As with other risk of bias assessment tools, we strongly recommend against the application of scores to domains or calculation of total quality scores. We encourage review authors to discuss the studies and their assessments of ‘risk to rigour’ for each paper and how the study’s methodological limitations may affect review findings (Noyes et al 2019). We further advise that qualitative ‘sensitivity analysis’, exploring the robustness of the synthesis and its vulnerability to methodologically limited studies, be routinely applied regardless of the review authors’ overall confidence in synthesized findings (Carroll et al 2013). Evidence suggests that qualitative sensitivity analysis is equally advisable for mixed methods studies from which the qualitative component is extracted (Verhage and Boels 2017).
Box 21.8.a Example domains that provide an assessment of methodological strengths and limitations to determine study rigour
– Clear aims and research question
– Congruence between the research aims/question and research design/method(s)
– Rigour of case and/or participant identification, sampling and data collection to address the question
– Appropriate application of the method
– Richness/conceptual depth of findings
– Exploration of deviant cases and alternative explanations
– Reflexivity of the researchers*

*Reflexivity encourages qualitative researchers and reviewers to consider the actual and potential impacts of the researcher on the context, research participants and the interpretation and reporting of data and findings (Newton et al 2012). Being reflexive entails making conflicts of interest transparent, discussing the impact of the reviewers and their decisions on the review process and findings, and making transparent any issues discussed and subsequent decisions.
Adapted from Noyes et al (2019) and Alvesson and Sköldberg (2009)
Few assessment tools explicitly address rigour in process evaluation or implementation evidence. For qualitative primary studies, the 8-item process evaluation tool developed by the EPPI-Centre (Rees et al 2009, Shepherd et al 2010) can be used to supplement tools selected to assess methodological strengths and limitations and risks to rigour in primary qualitative studies. One of these items, a question on usefulness (framed as ‘how well the intervention processes were described and whether or not the process data could illuminate why or how the interventions worked or did not work’ ) offers a mechanism for exploring process mechanisms (Cargo et al 2018).
Decisions about inclusion or exclusion of studies can be more complex in qualitative evidence syntheses compared to reviews of trials that aim to include all relevant studies. Decisions on whether to include all studies or to select a sample of studies depend on a range of general and review-specific criteria that Noyes and colleagues (Noyes et al 2019) outline in detail. The number of qualitative studies selected needs to be consistent with a manageable synthesis, and the contexts of the included studies should enable integration with the trials in the effectiveness analysis (see Figure 21.2.a). The guiding principle is transparency in the reporting of all decisions and their rationale.
Authors will typically find that they cannot select an appropriate synthesis method until the pool of available qualitative evidence has been thoroughly scoped. Flexible options concerning choice of method may need to be articulated in the protocol.
The INTEGRATE-HTA guidance on selecting methodology and methods for qualitative evidence synthesis and health technology assessment offers a useful starting point when selecting a method of synthesis (Booth et al 2016, Booth et al 2018). Some methods are designed primarily to develop findings at a descriptive level and thus feed directly into lines of action for policy and practice. Others hold the capacity to develop new theory (e.g. meta-ethnography and theory-building approaches to thematic synthesis). Noyes and colleagues (Noyes et al 2019) and Flemming and colleagues (Flemming et al 2019) elaborate on key issues for consideration when selecting a method that is particularly suited to a Cochrane Review and decision-making context (see Table 21.10.a). Three qualitative evidence synthesis methods (thematic synthesis, framework synthesis and meta-ethnography) are recommended to produce syntheses that can subsequently be integrated with an intervention review or analysis.
Table 21.10.a Recommended methods for undertaking a qualitative evidence synthesis for subsequent integration with an intervention review, or as part of a mixed-method review (adapted from an original source developed by convenors (Flemming et al 2019, Noyes et al 2019))
| Method | Key strengths and limitations |
| --- | --- |
| Thematic synthesis (Thomas and Harden 2008) | Most accessible form of synthesis. Clear approach; can be used with ‘thin’ data to produce descriptive themes and with ‘thicker’ data to develop descriptive themes into more in-depth analytic themes. Themes are then integrated within the quantitative synthesis. May be limited in interpretive ‘power’ and risks over-simplistic use, thus not truly informing decision making such as guidelines. Complex synthesis process that requires an experienced team. Theoretical findings may combine empirical evidence, expert opinion and conjecture to form hypotheses. More work is needed on how GRADE-CERQual, which assesses confidence in synthesized qualitative findings (see Section ), can be applied to theoretical findings. May lack clarity on how higher-level findings translate into actionable points. |
| Framework synthesis (Oliver et al 2008, Dixon-Woods 2011); best-fit framework synthesis (Carroll et al 2011) | Works well within reviews of complex interventions by accommodating complexity within the framework, including representation of theory. The framework allows a clear mechanism for integration of qualitative and quantitative evidence in an aggregative way (see Noyes et al (2018a)). Works well where there is broad agreement about the nature of interventions and their desired impacts. Requires identification, selection and justification of a framework. A framework may be revealed as inappropriate only once extraction/synthesis is underway. Risk of simplistically forcing data into a framework for expedience. |
| Meta-ethnography (Noblit and Hare 1988) | Primarily interpretive synthesis method leading to creation of descriptive as well as new higher-order constructs. Descriptive and theoretical findings can help inform decision making such as guidelines. Explicit reporting standards have been developed. Complex methodology and synthesis process that requires a highly experienced team. Can take more time and resources than other methodologies. Theoretical findings may combine empirical evidence, expert opinion and conjecture to form hypotheses. May not satisfy requirements for an audit trail (although new reporting guidelines will help overcome this (France et al 2019)). More work is needed to determine how CERQual can be applied to theoretical findings. May be unclear how higher-level findings translate into actionable points. |
21.11 Data extraction
Qualitative findings may take the form of quotations from participants, subthemes and themes identified by the study’s authors, explanations, hypotheses or new theory, or observational excerpts and author interpretations of these data (Sandelowski and Barroso 2002). Findings may be presented as a narrative, or summarized and displayed as tables, infographics or logic models and potentially located in any part of the paper (Noyes et al 2019).
Methods for qualitative data extraction vary according to the synthesis method selected. Data extraction is not sequential and linear; often, it involves moving backwards and forwards between review stages. Review teams will need regular meetings to discuss and further interrogate the evidence and thereby achieve a shared understanding. It may be helpful to draw on a key stakeholder group to help in interpreting the evidence and in formulating key findings. Additional approaches (such as subgroup analysis) can be used to explore evidence from specific contexts further.
Irrespective of the review type and choice of synthesis method, we consider it best practice to extract detailed contextual and methodological information on each study and to report this information in a table of ‘Characteristics of included studies’ (see Table 21.11.a). The Template for Intervention Description and Replication (TIDieR) checklist (Hoffmann et al 2014) and the iCAT_SR tool may help with specifying key information for extraction (Lewin et al 2017). Review authors must ensure that they preserve the context of the primary study data during the extraction and synthesis process to prevent misinterpretation of primary studies (Noyes et al 2019).
Table 21.11.a Contextual and methodological information for inclusion within a table of ‘Characteristics of included studies’. From Noyes et al (2019). Reproduced with permission of BMJ Publishing Group
| Category | Information to extract |
| --- | --- |
| Context and participants | Important elements of study context, relevant to addressing the review question and locating the context of the primary study; for example, the study setting, population characteristics, participants and participant characteristics, the intervention delivered (if appropriate), etc. |
| Study design and methods used | Methodological design and approach taken by the study; methods for identifying and recruiting the sample; the specific data collection and analysis methods utilized; and any theoretical models used to interpret or contextualize the findings. |
Noyes and colleagues (Noyes et al 2019) provide additional guidance and examples of the various methods of data extraction. It is usual for review authors to select one method. In summary, extraction methods can be grouped as follows.
The GRADE system has long featured in assessing the certainty of quantitative findings, and application of its qualitative counterpart, GRADE-CERQual, is recommended for Cochrane qualitative evidence syntheses (Lewin et al 2015). CERQual has four components (relevance, methodological limitations, adequacy and coherence) which are used to formulate an overall assessment of confidence in the synthesized qualitative finding. Guidance on its components and reporting requirements has been published in a series in Implementation Science (Lewin et al 2018).
A range of methods and tools is available for data integration or mixed-method synthesis (Harden et al 2018, Noyes et al 2019). As noted at the beginning of this chapter, review authors can integrate a qualitative evidence synthesis with an existing intervention review published on a similar topic (sequential approach), or conduct a new intervention review and qualitative evidence syntheses in parallel before integration (convergent approach). Irrespective of whether the qualitative synthesis is sequential or convergent to the intervention review, we recommend that qualitative and quantitative evidence be synthesized separately using appropriate methods before integration (Harden et al 2018). The scope for integration can be more limited with a pre-existing intervention review unless review authors have access to the data underlying the intervention review report.
Harden and colleagues and Noyes and colleagues outline the following methods and tools for integration with an intervention review (Harden et al 2018, Noyes et al 2019):
Review authors can use the above methods in combination (e.g. patterns observed through juxtaposing findings within a matrix can be tested using subgroup analysis or QCA). Analysing programme theory, using logic models and undertaking QCA require members of the review team with specific skills in these methods. Subgroup analysis and QCA are not suitable when only limited evidence is available (Harden et al 2018, Noyes et al 2019). (See also Chapter 17 on intervention complexity.)
Reporting standards and tools designed for intervention reviews (such as Cochrane’s MECIR standards (http://methods.cochrane.org/mecir) or the PRISMA Statement (Liberati et al 2009)) may not be appropriate for qualitative evidence syntheses or an integrated mixed-method review. Additional guidance on how to choose, adapt or create a hybrid reporting tool is provided as a 5-point ‘decision flowchart’ (Figure 21.14.a) (Flemming et al 2018). Review authors should consider whether a specific set of reporting guidance is available (e.g. eMERGe for meta-ethnographies (France et al 2015)); whether generic guidance (e.g. ENTREQ (Tong et al 2012)) is suitable; or whether additional checklists or tools are appropriate for reporting a specific aspect of the review.
Figure 21.14.a Decision flowchart for choice of reporting approach for syntheses of qualitative, implementation or process evaluation evidence (Flemming et al 2018). Reproduced with permission of Elsevier
Authors: Jane Noyes, Andrew Booth, Margaret Cargo, Kate Flemming, Angela Harden, Janet Harris, Ruth Garside, Karin Hannes, Tomás Pantoja, James Thomas
Acknowledgements: This chapter replaces Chapter 20 in the first edition of this Handbook (2008) and subsequent Version 5.2. We would like to thank the previous Chapter 20 authors Jennie Popay and Alan Pearson. Elements of this chapter draw on previous supplemental guidance produced by the Cochrane Qualitative and Implementation Methods Group Convenors, to which Simon Lewin contributed.
Funding: JT is supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
Ames HM, Glenton C, Lewin S. Parents' and informal caregivers' views and experiences of communication about routine childhood vaccination: a synthesis of qualitative evidence. Cochrane Database of Systematic Reviews 2017; 2 : CD011787.
Anderson LM, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P. Using logic models to capture complexity in systematic reviews. Research Synthesis Methods 2011; 2 : 33-42.
Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Medical Research Methodology 2009; 9 : 59.
Benoot C, Hannes K, Bilsen J. The use of purposeful sampling in a qualitative evidence synthesis: a worked example on sexual adjustment to a cancer trajectory. BMC Medical Research Methodology 2016; 16 : 21.
Bonell C, Jamal F, Harden A, Wells H, Parry W, Fletcher A, Petticrew M, Thomas J, Whitehead M, Campbell R, Murphy S, Moore L. Public Health Research. Systematic review of the effects of schools and school environment interventions on health: evidence mapping and synthesis . Southampton (UK): NIHR Journals Library; 2013.
Booth A, Harris J, Croot E, Springett J, Campbell F, Wilkins E. Towards a methodology for cluster searching to provide conceptual and contextual "richness" for systematic reviews of complex interventions: case study (CLUSTER). BMC Medical Research Methodology 2013; 13 : 118.
Booth A, Carroll C. How to build up the actionable knowledge base: the role of 'best fit' framework synthesis for studies of improvement in healthcare. BMJ Quality and Safety 2015; 24 : 700-708.
Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessment for complex interventions 2016. https://www.integrate-hta.eu/wp-content/uploads/2016/02/Guidance-on-choosing-qualitative-evidence-synthesis-methods-for-use-in-HTA-of-complex-interventions.pdf
Booth A. Qualitative evidence synthesis. In: Facey K, editor. Patient involvement in Health Technology Assessment . Singapore: Springer; 2017. p. 187-199.
Booth A, Noyes J, Flemming K, Gehardus A, Wahlster P, Jan van der Wilt G, Mozygemba K, Refolo P, Sacchini D, Tummers M, Rehfuess E. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. Journal of Clinical Epidemiology 2018; 99 : 41-52.
Booth A, Moore G, Flemming K, Garside R, Rollins N, Tuncalp Ö, Noyes J. Taking account of context in systematic reviews and guidelines considering a complexity perspective. BMJ Global Health 2019a; 4 : e000840.
Booth A, Noyes J, Flemming K, Moore G, Tuncalp Ö, Shakibazadeh E. Formulating questions to address the acceptability and feasibility of complex interventions in qualitative evidence synthesis. BMJ Global Health 2019b; 4 : e001107.
Candy B, King M, Jones L, Oliver S. Using qualitative synthesis to explore heterogeneity of complex interventions. BMC Medical Research Methodology 2011; 11 : 124.
Cargo M, Harris J, Pantoja T, Booth A, Harden A, Hannes K, Thomas J, Flemming K, Garside R, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 4: methods for assessing evidence on intervention implementation. Journal of Clinical Epidemiology 2018; 97 : 59-69.
Carroll C, Booth A, Cooper K. A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Medical Research Methodology 2011; 11 : 29.
Carroll C, Booth A, Leaviss J, Rick J. "Best fit" framework synthesis: refining the method. BMC Medical Research Methodology 2013; 13 : 37.
Carroll C. Qualitative evidence synthesis to improve implementation of clinical guidelines. BMJ 2017; 356 : j80.
CASP. Making sense of evidence: 10 questions to help you make sense of qualitative research: Public Health Resource Unit, England; 2013. http://media.wix.com/ugd/dded87_29c5b002d99342f788c6ac670e49f274.pdf .
Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qualitative Health Research 2012; 22 : 1435-1443.
Cooper C, Booth A, Britten N, Garside R. A comparison of results of empirical studies of supplementary search techniques and recommendations in review methodology handbooks: a methodological review. Systematic Reviews 2017; 6 : 234.
De Buck E, Hannes K, Cargo M, Van Remoortel H, Vande Veegaete A, Mosler HJ, Govender T, Vandekerckhove P, Young T. Engagement of stakeholders in the development of a Theory of Change for handwashing and sanitation behaviour change. International Journal of Environmental Research and Public Health 2018; 28 : 8-22.
Dixon-Woods M. Using framework-based synthesis for conducting reviews of qualitative studies. BMC Medicine 2011; 9 : 39.
Downe S, Finlayson K, Tuncalp, Metin Gulmezoglu A. What matters to women: a systematic scoping review to identify the processes and outcomes of antenatal care provision that are important to healthy pregnant women. BJOG: An International Journal of Obstetrics and Gynaecology 2016; 123 : 529-539.
El Sherif R, Pluye P, Gore G, Granikov V, Hong QN. Performance of a mixed filter to identify relevant studies for mixed studies reviews. Journal of the Medical Library Association 2016; 104 : 47-51.
Flemming K, Booth A, Hannes K, Cargo M, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 6: reporting guidelines for qualitative, implementation, and process evaluation evidence syntheses. Journal of Clinical Epidemiology 2018; 97 : 79-85.
Flemming K, Booth A, Garside R, Tuncalp O, Noyes J. Qualitative evidence synthesis for complex interventions and guideline development: clarification of the purpose, designs and relevant methods. BMJ Global Health 2019; 4 : e000882.
France EF, Ring N, Noyes J, Maxwell M, Jepson R, Duncan E, Turley R, Jones D, Uny I. Protocol-developing meta-ethnography reporting guidelines (eMERGe). BMC Medical Research Methodology 2015; 15 : 103.
France EF, Cunningham M, Ring N, Uny I, Duncan EAS, Jepson RG, Maxwell M, Roberts RJ, Turley RL, Booth A, Britten N, Flemming K, Gallagher I, Garside R, Hannes K, Lewin S, Noblit G, Pope C, Thomas J, Vanstone M, Higginbottom GMA, Noyes J. Improving reporting of meta-ethnography: the eMERGe reporting guidance. BMC Medical Research Methodology 2019; 19 : 25.
Garside R. Should we appraise the quality of qualitative research reports for systematic reviews, and if so, how? Innovation: The European Journal of Social Science Research 2014; 27 : 67-79.
Glenton C, Colvin CJ, Carlsen B, Swartz A, Lewin S, Noyes J, Rashidian A. Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2013; 10 : CD010414.
Glenton C, Lewin S, Norris S. Chapter 15: Using evidence from qualitative research to develop WHO guidelines. In: Norris S, editor. World Health Organization Handbook for Guideline Development . 2nd. ed. Geneva: WHO; 2016.
Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal 2009; 26 : 91-108.
Greenhalgh T, Kristjansson E, Robinson V. Realist review to understand the efficacy of school feeding programmes. BMJ 2007; 335 : 858.
Harden A, Oakley A, Weston R. A review of the effectiveness and appropriateness of peer-delivered health promotion for young people. London: Institute of Education, University of London; 1999.
Harden A, Thomas J, Cargo M, Harris J, Pantoja T, Flemming K, Booth A, Garside R, Hannes K, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 5: methods for integrating qualitative and implementation evidence within intervention effectiveness reviews. Journal of Clinical Epidemiology 2018; 97 : 70-78.
Harris JL, Booth A, Cargo M, Hannes K, Harden A, Flemming K, Garside R, Pantoja T, Thomas J, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 2: methods for question formulation, searching, and protocol development for qualitative evidence synthesis. Journal of Clinical Epidemiology 2018; 97 : 39-48.
Harris KM, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self management interventions for asthma in children and adolescents: a mixed methods systematic review (Protocol). Cochrane Database of Systematic Reviews 2015; 4 : CD011651.
Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, Altman DG, Barbour V, Macdonald H, Johnston M, Lamb SE, Dixon-Woods M, McCulloch P, Wyatt JC, Chan AW, Michie S. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014; 348 : g1687.
Houghton C, Murphy K, Meehan B, Thomas J, Brooker D, Casey D. From screening to synthesis: using nvivo to enhance transparency in qualitative evidence synthesis. Journal of Clinical Nursing 2017; 26 : 873-881.
Hurley M, Dickson K, Hallett R, Grant R, Hauari H, Walsh N, Stansfield C, Oliver S. Exercise interventions and patient beliefs for people with hip, knee or hip and knee osteoarthritis: a mixed methods review. Cochrane Database of Systematic Reviews 2018; 4 : CD010842.
Kahwati L, Jacobs S, Kane H, Lewis M, Viswanathan M, Golin CE. Using qualitative comparative analysis in a systematic review of a complex intervention. Systematic Reviews 2016; 5 : 82.
Kelly MP, Noyes J, Kane RL, Chang C, Uhl S, Robinson KA, Springs S, Butler ME, Guise JM. AHRQ series on complex intervention systematic reviews-paper 2: defining complexity, formulating scope, and questions. Journal of Clinical Epidemiology 2017; 90 : 11-18.
Kneale D, Thomas J, Harris K. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews. PloS One 2015; 10 : e0142187.
Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implementation Science 2010; 5 : 69.
Lewin S, Munabi-Babigumira S, Glenton C, Daniels K, Bosch-Capblanch X, van Wyk BE, Odgaard-Jensen J, Johansen M, Aja GN, Zwarenstein M, Scheel IB. Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database of Systematic Reviews 2010; 3 : CD004015.
Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Medicine 2015; 12 : e1001895.
Lewin S, Hendry M, Chandler J, Oxman AD, Michie S, Shepperd S, Reeves BC, Tugwell P, Hannes K, Rehfuess EA, Welch V, McKenzie JE, Burford B, Petkovic J, Anderson LM, Harris J, Noyes J. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Medical Research Methodology 2017; 17 : 76.
Lewin S, Booth A, Glenton C, Munthe-Kaas H, Rashidian A, Wainwright M, Bohren MA, Tuncalp O, Colvin CJ, Garside R, Carlsen B, Langlois EV, Noyes J. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implementation Science 2018; 13 : 2.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009; 339 : b2700.
Moore G, Audrey S, Barker M, Bond L, Bonell C, Harderman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 2015; 350 : h1258.
Munabi-Babigumira S, Glenton C, Lewin S, Fretheim A, Nabudere H. Factors that influence the provision of intrapartum and postnatal care by skilled birth attendants in low- and middle-income countries: a qualitative evidence synthesis. Cochrane Database of Systematic Reviews 2017; 11 : CD011558.
Munthe-Kaas H, Glenton C, Booth A, Noyes J, Lewin S. Systematic mapping of existing tools to appraise methodological strengths and limitations of qualitative research: first stage in the development of the CAMELOT tool. BMC Medical Research Methodology 2019; 19 : 113.
National Institute for Health Care Excellence. NICE Process and Methods Guides. Methods for the Development of NICE Public Health Guidance . London: National Institute for Health and Care Excellence (NICE); 2012.
Newton BJ, Rothlingova Z, Gutteridge R, LeMarchand K, Raphael JH. No room for reflexivity? Critical reflections following a systematic review of qualitative research. Journal of Health Psychology 2012; 17 : 866-885.
Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies . Newbury Park: Sage Publications, Inc; 1988.
Noyes J, Hendry M, Booth A, Chandler J, Lewin S, Glenton C, Garside R. Current use was established and Cochrane guidance on selection of social theories for systematic reviews of complex interventions was developed. Journal of Clinical Epidemiology 2016a; 75 : 78-92.
Noyes J, Hendry M, Lewin S, Glenton C, Chandler J, Rashidian A. Qualitative "trial-sibling" studies and "unrelated" qualitative studies contributed to complex intervention reviews. Journal of Clinical Epidemiology 2016b; 74 : 133-143.
Noyes J, Booth A, Flemming K, Garside R, Harden A, Lewin S, Pantoja T, Hannes K, Cargo M, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings. Journal of Clinical Epidemiology 2018a; 97 : 49-58.
Noyes J, Booth A, Cargo M, Flemming K, Garside R, Hannes K, Harden A, Harris J, Lewin S, Pantoja T, Thomas J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 1: introduction. Journal of Clinical Epidemiology 2018b; 97 : 35-38.
Noyes J, Booth A, Moore G, Flemming K, Tuncalp O, Shakibazadeh E. Synthesising quantitative and qualitative evidence to inform guidelines on complex interventions: clarifying the purposes, designs and outlining some methods. BMJ Global Health 2019; 4 (Suppl 1) : e000893.
O'Neill J, Tabish H, Welch V, Petticrew M, Pottie K, Clarke M, Evans T, Pardo Pardo J, Waters E, White H, Tugwell P. Applying an equity lens to interventions: using PROGRESS ensures consideration of socially stratifying factors to illuminate inequities in health. Journal of Clinical Epidemiology 2014; 67 : 56-64.
Oliver S, Rees R, Clarke-Jones L, Milne R, Oakley A, Gabbay J, Stein K, Buchanan P, Gyte G. A multidimensional conceptual framework for analysing public involvement in health services research. Health Expectations 2008; 11 : 72-84.
Petticrew M, Knai C, Thomas J, Rehfuess E, Noyes J, Gerhardus A, Grimshaw J, Rutter H. Implications of a complexity perspective for systematic reviews and guideline development in health decision making. BMJ Global Health 2019; 4 (Suppl 1) : e000899.
Rees R, Oliver K, Woodman J, Thomas J. Children's views about obesity, body size, shape and weight. A systematic review. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2009.
Rehfuess EA, Booth A, Brereton L, Burns J, Gerhardus A, Mozygemba K, Oortwijn W, Pfadenhauer LM, Tummers M, van der Wilt GJ, Rohwer A. Towards a taxonomy of logic models in systematic reviews and health technology assessments: A priori, staged, and iterative approaches. Research Synthesis Methods 2018; 9 : 13-24.
Robbins SCC, Ward K, Skinner SR. School-based vaccination: a systematic review of process evaluations. Vaccine 2011; 29 : 9588-9599.
Rogers M, Bethel A, Abbott R. Locating qualitative studies in dementia on MEDLINE, EMBASE, CINAHL, and PsycINFO: a comparison of search strategies. Research Synthesis Methods 2018; 9 : 579-586.
Sandelowski M, Barroso J. Finding the findings in qualitative studies. Journal of Nursing Scholarship 2002; 34 : 213-219.
Selva A, Sola I, Zhang Y, Pardo-Hernandez H, Haynes RB, Martinez Garcia L, Navarro T, Schünemann H, Alonso-Coello P. Development and use of a content search strategy for retrieving studies on patients' views and preferences. Health and Quality of Life Outcomes 2017; 15 : 126.
Shepherd J, Kavanagh J, Picot J, Cooper K, Harden A, Barnett-Page E, Jones J, Clegg A, Hartwell D, Frampton GK, Price A. The effectiveness and cost-effectiveness of behavioural interventions for the prevention of sexually transmitted infections in young people aged 13-19: a systematic review and economic evaluation. Health Technology Assessment 2010; 14 : 1-206, iii-iv.
Squires JE, Valentine JC, Grimshaw JM. Systematic reviews of complex interventions: framing the review question. Journal of Clinical Epidemiology 2013; 66 : 1215-1222.
Stansfield C, Kavanagh J, Rees R, Gomersall A, Thomas J. The selection of search sources influences the findings of a systematic review of people's views: a case study in public health. BMC Medical Research Methodology 2012; 12 : 55.
SURE (Supporting the Use of Research Evidence) Collaboration. SURE Guides for Preparing and Using Evidence-based Policy Briefs: 5 Identifying and Addressing Barriers to Implementing the Policy Options. Version 2.1, updated November 2011. https://epoc.cochrane.org/sites/epoc.cochrane.org/files/public/uploads/SURE-Guides-v2.1/Collectedfiles/sure_guides.html
Suri H. Purposeful sampling in qualitative research synthesis. Qualitative Research Journal 2011; 11 : 63-75.
Thomas J, Harden A, Oakley A, Oliver S, Sutcliffe K, Rees R, Brunton G, Kavanagh J. Integrating qualitative research with trials in systematic reviews. BMJ 2004; 328 : 1010-1012.
Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology 2008; 8 : 45.
Thomas J, Brunton J, Graziosi S. EPPI-Reviewer 4.0: software for research synthesis [Software]. EPPI-Centre Software. Social Science Research Unit, Institute of Education, University of London UK; 2010. https://eppi.ioe.ac.uk/CMS/Default.aspx?alias=eppi.ioe.ac.uk/cms/er4& .
Thomas J, O'Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Systematic Reviews 2014; 3 : 67.
Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Medical Research Methodology 2012; 12 : 181.
van Grootel L, van Wesel F, O'Mara-Eves A, Thomas J, Hox J, Boeije H. Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: broadening the matrix approach. Research Synthesis Methods 2017; 8 : 303-311.
Verhage A, Boels D. Critical appraisal of mixed methods research studies in a systematic scoping review on plural policing: assessing the impact of excluding inadequately reported studies by means of a sensitivity analysis. Quality & Quantity 2017; 51 : 1449-1468.
Walker LO, Avant KC. Strategies for theory construction in nursing . Upper Saddle River (NJ): Pearson Prentice Hall; 2005.
Empirical evidence, information gathered directly or indirectly through observation or experimentation that may be used to confirm or disconfirm a scientific theory or to help justify, or establish as reasonable, a person’s belief in a given proposition. A belief may be said to be justified if there is sufficient evidence to make holding the belief reasonable.
The concept of evidence is the basis of philosophical evidentialism, an epistemological thesis according to which a person is justified in believing a given proposition p if and only if the person’s evidence for p is proper or sufficient. In this context, the Scottish Enlightenment philosopher David Hume (1711–76) famously asserted that the “wise man…proportions his belief to the evidence.” In a similar vein, the American astronomer Carl Sagan popularized the statement, “Extraordinary claims require extraordinary evidence.”
Foundationalists, however, defend the view that certain basic, or foundational, beliefs are either inherently justified or justified by something other than another belief (e.g., a sensation or perception) and that all other beliefs may be justified only if they are directly or indirectly supported by at least one foundational belief (that is, only if they are either supported by at least one foundational belief or supported by other beliefs that are themselves supported by at least one foundational belief). The most influential foundationalist of the modern period was the French philosopher and mathematician René Descartes (1596–1650), who attempted to establish a foundation for justified beliefs regarding an external world in his intuition that, for as long as he is thinking, he exists (“I think, therefore I am”; see cogito, ergo sum). A traditional argument in favour of foundationalism asserts that no other account of inferential justification—the act of justifying a given belief by inferring it from another belief that itself is justified—is possible. Thus, assume that one belief, Belief 1, is justified by another belief, Belief 2. How is Belief 2 justified? It cannot be justified by Belief 1, because the inference from Belief 2 to Belief 1 would then be circular and invalid. It cannot be justified by a third nonfoundational Belief 3, because the same question would then apply to that belief, leading to an infinite regress. And one cannot simply assume that Belief 2 is not justified, for then Belief 1 would not be justified through the inference from Belief 2. Accordingly, there must be some beliefs whose justification does not depend on other beliefs, and those justified beliefs must function as a foundation for the inferential justification of other beliefs.
Empirical evidence can be quantitative or qualitative. Typically, numerical quantitative evidence can be represented visually by means of diagrams, graphs, or charts, reflecting the use of statistical or mathematical data and the researcher’s neutral noninteractive role. It can be obtained by methods such as experiments, surveys, correlational research (to study the relationship between variables), cross-sectional research (to compare different groups), causal-comparative research (to explore cause-effect relationships), and longitudinal studies (to test a subject during a given time period).
Qualitative evidence, on the other hand, can foster a deeper understanding of behaviour and related factors and is not typically expressed by using numbers. Often subjective and resulting from interaction between the researcher and participants, it can stem from the use of methods such as interviews (based on verbal interaction), observation (informing ethnographic research design), textual analysis (involving the description and interpretation of texts), focus groups (planned group discussions), and case studies (in-depth analyses of individuals or groups).
Empirical evidence is subject to assessments of its validity. Validity can be internal, involving the soundness of an experiment’s design and execution and the accuracy of subsequent data analysis, or external, involving generalizability to other research contexts (see ecological validity).
Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.
Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.
Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.
Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.
Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.
Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.
Approach | What does it involve?
---|---
Grounded theory | Researchers collect rich data on a topic of interest and develop theories.
Ethnography | Researchers immerse themselves in groups or organizations to understand their cultures.
Action research | Researchers and participants collaboratively link theory to practice to drive social change.
Phenomenological research | Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
Narrative research | Researchers examine how stories are told to understand how participants perceive and make sense of their experiences.
Note that qualitative research is at risk of certain research biases, including the Hawthorne effect, observer bias, recall bias, and social desirability bias. While not always entirely avoidable, awareness of potential biases as you collect and analyze your data can keep them from unduly affecting your work.
Each of these research approaches involves using one or more data collection methods, such as interviews, observations, focus groups, or surveys.
Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.
For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.
Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.
Most types of qualitative data analysis share the same five general steps: preparing and organizing the data, reviewing and exploring it, developing a coding system, assigning codes to the data, and identifying recurring themes.
There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.
Approach | When to use | Example
---|---|---
Content analysis | To describe and categorize common words, phrases, and ideas in qualitative data. | A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
Thematic analysis | To identify and interpret patterns and themes in qualitative data. | A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
Textual analysis | To examine the content, structure, and design of texts. | A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
Discourse analysis | To study communication and how language is used to achieve effects in specific contexts. | A political scientist could use discourse analysis to study how politicians generate trust in election campaigns.
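The content analysis approach in the table can be made concrete with a minimal sketch: counting how often terms of interest occur across a small corpus. The app descriptions and terms below are invented for illustration; real content analysis would also involve a coding frame and human judgement, not just mechanical counting:

```python
# Toy content analysis: tally word frequencies in invented app descriptions.
import re
from collections import Counter

descriptions = [
    "A calming app to reduce stress and improve sleep.",
    "Track your mood and build calming daily habits.",
    "Guided meditation for stress relief and better sleep.",
]

word_counts = Counter()
for text in descriptions:
    # lowercase and split into word tokens before counting
    word_counts.update(re.findall(r"[a-z']+", text.lower()))

for term in ["calming", "stress", "sleep"]:
    print(term, word_counts[term])
```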
Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:
- The data collection and analysis process can be adapted as new ideas or patterns emerge; they are not rigidly decided beforehand.
- Data collection occurs in real-world contexts or in naturalistic ways.
- Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.
- Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.
Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:
- The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.
- Because of the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be exactly replicated; the researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.
- Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.
- Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
There are five common approaches to qualitative research: grounded theory, ethnography, action research, phenomenological research, and narrative research.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis, but they all share the same five steps: preparing and organizing the data, reviewing and exploring it, developing a coding system, assigning codes to the data, and identifying recurring themes.
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved June 19, 2024, from https://www.scribbr.com/methodology/qualitative-research/
Empirical research is based on phenomena that can be observed and measured. Empirical research derives knowledge from actual experience rather than from theory or belief.
Key characteristics of empirical research include systematic observation or experimentation, measurable data, and conclusions drawn from that data rather than from theory or belief alone.
(based on the original from the Connelly Library of La Salle University)
Empirical Research: Qualitative vs. Quantitative
A quantitative research project is characterized by having a population about which the researcher wants to draw conclusions, even though it is not possible to collect data on the entire population.
Statistical methods are used in all three stages of a quantitative research project.
For observational studies, the data are collected using statistical sampling theory. Then, the sample data are analyzed using descriptive statistical analysis. Finally, generalizations are made from the sample data to the entire population using statistical inference.
For experimental studies, the subjects are allocated to experimental and control group using randomizing methods. Then, the experimental data are analyzed using descriptive statistical analysis. Finally, just as for observational data, generalizations are made to a larger population.
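A minimal sketch of that experimental workflow, with invented subjects and outcome scores (the seed and all numbers are assumptions for the example), might look like this:

```python
# Sketch of the experimental workflow described above: random allocation
# to groups, then descriptive statistics on invented outcome data.
import random
from statistics import mean

random.seed(42)  # reproducible allocation for the example

subjects = list(range(1, 21))  # 20 hypothetical subjects
random.shuffle(subjects)
treatment, control = subjects[:10], subjects[10:]

# Invented outcome scores for each group
treatment_scores = [68, 72, 75, 70, 74, 69, 73, 71, 76, 70]
control_scores = [64, 66, 63, 67, 65, 62, 68, 64, 66, 65]

diff = mean(treatment_scores) - mean(control_scores)
print(f"group sizes: {len(treatment)} vs {len(control)}")
print(f"mean difference: {diff:.1f}")
```

Statistical inference (e.g. a t-test or permutation test) would then assess whether a difference of this size could plausibly have arisen by chance.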
Iversen, G. (2004). Quantitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.), Encyclopedia of social science research methods . (pp. 897-898). Thousand Oaks, CA: SAGE Publications, Inc.
What makes a work deserving of the label qualitative research is the demonstrable effort to produce richly and relevantly detailed descriptions and particularized interpretations of people and the social, linguistic, material, and other practices and events that shape and are shaped by them.
Qualitative research typically includes, but is not limited to, discerning the perspectives of these people, or what is often referred to as the actor’s point of view. Although both philosophically and methodologically a highly diverse entity, qualitative research is marked by certain defining imperatives that include its case (as opposed to its variable) orientation, sensitivity to cultural and historical context, and reflexivity.
In its many guises, qualitative research is a form of empirical inquiry that typically entails some form of purposive sampling for information-rich cases; in-depth, open-ended interviews, lengthy participant/field observations, and/or document or artifact study; and techniques for analysis and interpretation of data that move beyond the data generated and their surface appearances.
Sandelowski, M. (2004). Qualitative research . In M. Lewis-Beck, A. Bryman, & T. Liao (Eds.), Encyclopedia of social science research methods . (pp. 893-894). Thousand Oaks, CA: SAGE Publications, Inc.
Empirical research forms the backbone of scientific inquiry, offering a systematic approach to gathering and analyzing data to answer research questions. This section surveys the main methodologies, such as experiments, surveys, and observations, and uses real-world examples to show how researchers apply them across diverse fields to advance knowledge and support evidence-based decision-making.
Empirical research is a fundamental aspect of scientific inquiry, providing a systematic approach to understanding the world around us. In this blog section, we will delve into the various methods used in empirical research, highlighting the differences between quantitative and qualitative approaches, exploring different data collection techniques, and discussing various data analysis methods.
By understanding the various methods in empirical research, researchers can enhance the rigor and validity of their studies, ultimately contributing to the advancement of knowledge in their respective fields. The choice of research methods should align with the research questions and objectives, ensuring that the data collected and analyzed provide meaningful insights. Researchers should also consider ethical considerations in data collection and analysis, maintaining the integrity of the research process and upholding participant confidentiality and consent. Continuous learning and adaptation of research methods based on emerging trends and technologies are essential for staying at the forefront of empirical research practices.
Medical studies.
Empirical research in the medical field involves conducting experiments and studies to gather data on the effectiveness of treatments, medications, and medical procedures. This type of research is crucial in advancing medical knowledge and improving patient outcomes. Medical studies often include clinical trials, cohort studies, and case-control studies to assess the impact of various interventions on health outcomes. Researchers in this field work tirelessly to ensure that medical practices are evidence-based and safe for patients.
Social sciences.
Empirical research in the social sciences focuses on studying human behavior, society, and relationships through systematic observation and experimentation. It helps in understanding social phenomena, developing theories, and making evidence-based decisions. The social sciences encompass disciplines such as psychology, sociology, anthropology, and economics, each contributing valuable insights into how individuals and societies function. Researchers in the social sciences often use surveys, interviews, and experiments to collect data and draw meaningful conclusions.
Educational research.
Educational research uses empirical methods to study teaching, learning, and educational systems. Researchers gather data to evaluate educational interventions, assess student performance, and improve teaching practices. This research is essential for enhancing educational outcomes and informing policy decisions. Educational researchers explore topics such as curriculum development, classroom management, and educational technology to enhance the learning experience for students of all ages. By conducting empirical studies, educators can continuously improve teaching methods and contribute to the overall advancement of the education sector.
Empirical research plays a vital role in various fields by providing concrete evidence to support or refute hypotheses. In the medical field, empirical studies help in determining the efficacy of new treatments, identifying potential risks, and improving patient care. Social sciences research relies on empirical data to understand complex human behaviors, societal trends, and cultural dynamics. Educational research uses empirical findings to enhance teaching strategies, develop innovative educational tools, and address challenges in the education system.
While empirical research offers valuable insights, researchers often face challenges such as obtaining reliable data, ensuring ethical practices, and interpreting results accurately. However, these challenges also present opportunities for further exploration and refinement of research methodologies. Collaborative efforts among researchers from different disciplines can lead to interdisciplinary studies that offer comprehensive solutions to complex problems.
The future of empirical research is promising, with advancements in technology enabling researchers to collect and analyze data more efficiently. Interdisciplinary collaborations are likely to increase, leading to innovative research projects that address multifaceted issues. As the importance of evidence-based practices continues to grow across various sectors, empirical research will remain a cornerstone in driving progress and innovation.
Empirical research serves as a cornerstone in advancing knowledge, informing decision-making, and driving positive change across diverse fields. By embracing empirical methods and fostering a culture of evidence-based inquiry, researchers can contribute significantly to the growth and development of their respective disciplines.
Common challenges faced in empirical research.
Data Collection: Ensuring accurate and relevant data is collected can be a major challenge in empirical research. This involves not only gathering data but also ensuring its quality and relevance to the research questions being addressed.
Sample Size: Determining the appropriate sample size to ensure statistical significance and generalizability of results. A small sample size may lead to unreliable findings, while an excessively large sample size may be impractical or wasteful.
Research Design: Developing a robust research design that minimizes bias and confounding variables. It is crucial to carefully plan the research methodology to control for potential sources of error and ensure the validity of the study findings.
Data Analysis: Properly analyzing the data to draw valid conclusions and avoid misinterpretation. Utilizing appropriate statistical techniques and ensuring the accuracy of data interpretation are essential for deriving meaningful insights from empirical research.
Best practices for success.
Clear Objectives: Define clear research objectives and hypotheses to guide the study. Clearly outlining the purpose of the research helps maintain focus and ensures that the study addresses relevant research questions.
Rigorous Methodology: Follow a systematic and rigorous methodology to ensure the validity and reliability of results. Adhering to established research protocols and best practices enhances the credibility of the study outcomes.
Peer Review: Seek feedback from peers and experts in the field to improve the quality of research. Peer review provides valuable insights, helps identify potential flaws, and enhances the overall robustness of the research.
Transparency: Maintain transparency in reporting methods, data collection, and analysis procedures. Transparent reporting promotes reproducibility and allows other researchers to assess the validity of the study findings.
Continuous Learning: Stay updated with the latest research methods and tools to enhance the quality of empirical research. Embracing a culture of continuous learning and professional development enables researchers to adopt innovative approaches and improve the rigor of their research practices.
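The sample-size point above can be made concrete with a minimal power calculation. The sketch below uses only Python's standard library; the effect size, significance level, and power are conventional but arbitrary choices for illustration, not values from any particular study:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, using the normal approximation.

    effect_size is Cohen's d: the expected difference in means divided
    by the common standard deviation.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value of the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at the conventional 5% level and 80% power:
print(n_per_group(0.5))  # → 63 participants per group
```

Note how the required sample grows sharply as the expected effect shrinks, which is exactly the trade-off between reliability and practicality described above.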
In the realm of empirical research, challenges often arise that can impede the progress and quality of a study. Data collection stands out as a fundamental challenge, where the accuracy and relevance of collected data significantly impact the research outcomes. Moreover, determining the optimal sample size is crucial to ensure the reliability and generalizability of findings. Researchers must carefully consider these aspects to avoid pitfalls that could compromise the integrity of their research.
Research design plays a pivotal role in empirical studies, influencing the validity of results. By developing a robust research design that accounts for biases and confounding variables, researchers can enhance the credibility of their findings. Additionally, proper data analysis techniques are essential to draw accurate conclusions and derive meaningful insights from the collected data.
To conduct successful empirical research, it is imperative to establish clear objectives and hypotheses, guiding the study in a focused direction. A rigorous methodology, adherence to best practices, and seeking peer feedback contribute to the overall quality of research. Transparency in reporting methods and a commitment to continuous learning further elevate the standards of empirical research, fostering a culture of excellence and innovation in the scientific community.
Demystifying empirical research involves understanding the various techniques and examples used in the process. By exploring the methodologies, data collection methods, and analysis techniques discussed in this blog, researchers can gain a clearer insight into how empirical research is conducted. Embracing empirical research not only enhances the quality of academic work but also contributes to the advancement of knowledge in various fields. By applying the techniques and examples shared here, researchers can navigate the complexities of empirical research with confidence and precision.
What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being “qualitative,” the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term “qualitative.” Then, drawing on ideas we find scattered across existing work, and based on Becker’s classic study of marijuana consumption, we formulate and illustrate a definition that tries to capture its core elements. We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. This formulation is developed as a tool to help improve research designs while stressing that a qualitative dimension is present in quantitative work as well. Additionally, it can facilitate teaching, communication between researchers, diminish the gap between qualitative and quantitative researchers, help to address critiques of qualitative methods, and be used as a standard of evaluation of qualitative research.
If we assume that there is something called qualitative research, what exactly is this qualitative feature? And how could we evaluate qualitative research as good or not? Is it fundamentally different from quantitative research? In practice, most active qualitative researchers working with empirical material intuitively know what is involved in doing qualitative research, yet perhaps surprisingly, a clear definition addressing its key feature is still missing.
To address the question of what is qualitative we turn to the accounts of “qualitative research” in textbooks and also in empirical work. In his classic, explorative, interview study of deviance Howard Becker ( 1963 ) asks ‘How does one become a marijuana user?’ In contrast to pre-dispositional and psychological-individualistic theories of deviant behavior, Becker’s inherently social explanation contends that becoming a user of this substance is the result of a three-phase sequential learning process. First, potential users need to learn how to smoke it properly to produce the “correct” effects. If not, they are likely to stop experimenting with it. Second, they need to discover the effects associated with it; in other words, to get “high,” individuals not only have to experience what the drug does, but also to become aware that those sensations are related to using it. Third, they require learning to savor the feelings related to its consumption – to develop an acquired taste. Becker, who played music himself, gets close to the phenomenon by observing, taking part, and by talking to people consuming the drug: “half of the fifty interviews were conducted with musicians, the other half covered a wide range of people, including laborers, machinists, and people in the professions” (Becker 1963 :56).
Another central aspect derived through the common-to-all-research interplay between induction and deduction (Becker 2017 ), is that during the course of his research Becker adds scientifically meaningful new distinctions in the form of three phases—distinctions, or findings if you will, that strongly affect the course of his research: its focus, the material that he collects, and which eventually impact his findings. Each phase typically unfolds through social interaction, and often with input from experienced users in “a sequence of social experiences during which the person acquires a conception of the meaning of the behavior, and perceptions and judgments of objects and situations, all of which make the activity possible and desirable” (Becker 1963 :235). In this study the increased understanding of smoking dope is a result of a combination of the meaning of the actors, and the conceptual distinctions that Becker introduces based on the views expressed by his respondents. Understanding is the result of research and is due to an iterative process in which data, concepts and evidence are connected with one another (Becker 2017 ).
Indeed, there are many definitions of qualitative research, but if we look for a definition that addresses its distinctive feature of being “qualitative,” the literature across the broad field of social science is meager. The main reason behind this article lies in the paradox, which, to put it bluntly, is that researchers act as if they know what it is, but they cannot formulate a coherent definition. Sociologists and others will of course continue to conduct good studies that show the relevance and value of qualitative research addressing scientific and practical problems in society. However, our paper is grounded in the idea that providing a clear definition will help us improve the work that we do. Among researchers who practice qualitative research there is clearly much knowledge. We suggest that a definition makes this knowledge more explicit. If the first rationale for writing this paper refers to the “internal” aim of improving qualitative research, the second refers to the increased “external” pressure that especially many qualitative researchers feel; pressure that comes both from society as well as from other scientific approaches. There is a strong core in qualitative research, and leading researchers tend to agree on what it is and how it is done. Our critique is not directed at the practice of qualitative research, but we do claim that the type of systematic work we do has not yet been done, and that it is useful to improve the field and its status in relation to quantitative research.
The literature on the “internal” aim of improving, or at least clarifying qualitative research is large, and we do not claim to be the first to notice the vagueness of the term “qualitative” (Strauss and Corbin 1998 ). Also, others have noted that there is no single definition of it (Long and Godfrey 2004 :182), that there are many different views on qualitative research (Denzin and Lincoln 2003 :11; Jovanović 2011 :3), and that more generally, we need to define its meaning (Best 2004 :54). Strauss and Corbin ( 1998 ), for example, as well as Nelson et al. (1992:2 cited in Denzin and Lincoln 2003 :11), and Flick ( 2007 :ix–x), have recognized that the term is problematic: “Actually, the term ‘qualitative research’ is confusing because it can mean different things to different people” (Strauss and Corbin 1998 :10–11). Hammersley has discussed the possibility of addressing the problem, but states that “the task of providing an account of the distinctive features of qualitative research is far from straightforward” ( 2013 :2). This confusion, as he has recently further argued (Hammersley 2018 ), is also salient in relation to ethnography where different philosophical and methodological approaches lead to a lack of agreement about what it means.
Others (e.g. Hammersley 2018; Fine and Hancock 2017) have also identified the threat to qualitative research that comes from external forces, seen from the point of view of "qualitative research." This threat can be further divided into that which comes from inside academia, such as the critique voiced by "quantitative research," and that from outside of academia, including, for example, New Public Management. Hammersley (2018), zooming in on one type of qualitative research, ethnography, has argued that it is under threat. Similarly to Fine (2003), and before him Gans (1999), he writes that ethnography has acquired a range of meanings and comes in many different versions, often reflecting sharply divergent epistemological orientations. And already more than twenty years ago, while reviewing Denzin and Lincoln's Handbook of Qualitative Methods, Fine argued:
While this increasing centrality [of qualitative research] might lead one to believe that consensual standards have developed, this belief would be misleading. As the methodology becomes more widely accepted, querulous challengers have raised fundamental questions that collectively have undercut the traditional models of how qualitative research is to be fashioned and presented (1995:417).
According to Hammersley, there are today "serious threats to the practice of ethnographic work, on almost any definition" (2018:1). He lists five external threats: (1) that social research must be accountable and able to show its impact on society; (2) the current emphasis on "big data" and the emphasis on quantitative data and evidence; (3) the labor market pressure in academia that leaves less time for fieldwork (see also Fine and Hancock 2017); (4) problems of access to fields; and (5) the increased ethical scrutiny of projects, to which ethnography is particularly exposed. Hammersley discusses some more or less insufficient existing definitions of ethnography.
The current situation, as Hammersley and others note—and in relation not only to ethnography but also qualitative research in general, and as our empirical study shows—is not just unsatisfactory, it may even be harmful for the entire field of qualitative research, and does not help social science at large. We suggest that the lack of clarity of qualitative research is a real problem that must be addressed.
Seen in an historical light, what is today called qualitative, or sometimes ethnographic, interpretative research – or a number of other terms – has more or less always existed. At the time the founders of sociology – Simmel, Weber, Durkheim and, before them, Marx – were writing, and during the era of the Methodenstreit (“dispute about methods”) in which the German historical school emphasized scientific methods (cf. Swedberg 1990 ), we can at least speak of qualitative forerunners.
Perhaps the most extended discussion of what later became known as qualitative methods in a classic work is Bronisław Malinowski's (1922) Argonauts of the Western Pacific, although even this study does not explicitly address the meaning of "qualitative." In Weber's ([1921–22] 1978) work we find a tension between scientific explanations that are based on observation and quantification and interpretative research (see also Lazarsfeld and Barton 1982).
If we look through major sociology journals like the American Sociological Review, American Journal of Sociology, or Social Forces we will not find the term qualitative sociology before the 1970s. And certainly before then much of what we consider qualitative classics in sociology, like Becker's study (1963), had already been produced. Indeed, the Chicago School often combined qualitative and quantitative data within the same study (Fine 1995). Our point is that before a disciplinary self-awareness the term quantitative preceded qualitative, and the articulation of the former was a political move to claim scientific status (Denzin and Lincoln 2005). In the US, World War II seems to have sparked a critique of sociological work, including "qualitative work," that did not follow the scientific canon (Rawls 2018), which was underpinned by a scientifically oriented and value-free philosophy of science. As a result the attempts and practice of integrating qualitative and quantitative sociology at Chicago lost ground to sociology that was more oriented to surveys and quantitative work at Columbia under Merton-Lazarsfeld. The quantitative tradition was also able to present textbooks (Lundberg 1951) that facilitated the use of this approach and its "methods." The practices of the qualitative tradition, by and large, remained tacit or were part of the mentoring transferred from the renowned masters to their students.
This glimpse into history leads us back to the lack of a coherent account condensed in a definition of qualitative research. Many of the attempts to define the term do not meet the requirements of a proper definition: A definition should be clear, avoid tautology, demarcate its domain in relation to the environment, and ideally only use words in its definiens that themselves are not in need of definition (Hempel 1966 ). A definition can enhance precision and thus clarity by identifying the core of the phenomenon. Preferably, a definition should be short. The typical definition we have found, however, is an ostensive definition, which indicates what qualitative research is about without informing us about what it actually is :
Qualitative research is multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives. (Denzin and Lincoln 2005 :2)
Flick claims that the label “qualitative research” is indeed used as an umbrella for a number of approaches ( 2007 :2–4; 2002 :6), and it is not difficult to identify research fitting this designation. Moreover, whatever it is, it has grown dramatically over the past five decades. In addition, courses have been developed, methods have flourished, arguments about its future have been advanced (for example, Denzin and Lincoln 1994) and criticized (for example, Snow and Morrill 1995 ), and dedicated journals and books have mushroomed. Most social scientists have a clear idea of research and how it differs from journalism, politics and other activities. But the question of what is qualitative in qualitative research is either eluded or eschewed.
We maintain that this lacuna hinders systematic knowledge production based on qualitative research. Paul Lazarsfeld noted the lack of “codification” as early as 1955 when he reviewed 100 qualitative studies in order to offer a codification of the practices (Lazarsfeld and Barton 1982 :239). Since then many texts on “qualitative research” and its methods have been published, including recent attempts (Goertz and Mahoney 2012 ) similar to Lazarsfeld’s. These studies have tried to extract what is qualitative by looking at the large number of empirical “qualitative” studies. Our novel strategy complements these endeavors by taking another approach and looking at the attempts to codify these practices in the form of a definition, as well as to a minor extent take Becker’s study as an exemplar of what qualitative researchers actually do, and what the characteristic of being ‘qualitative’ denotes and implies. We claim that qualitative researchers, if there is such a thing as “qualitative research,” should be able to codify their practices in a condensed, yet general way expressed in language.
Lingering problems of “generalizability” and “how many cases do I need” (Small 2009 ) are blocking advancement – in this line of work qualitative approaches are said to differ considerably from quantitative ones, while some of the former unsuccessfully mimic principles related to the latter (Small 2009 ). Additionally, quantitative researchers sometimes unfairly criticize the first based on their own quality criteria. Scholars like Goertz and Mahoney ( 2012 ) have successfully focused on the different norms and practices beyond what they argue are essentially two different cultures: those working with either qualitative or quantitative methods. Instead, similarly to Becker ( 2017 ) who has recently questioned the usefulness of the distinction between qualitative and quantitative research, we focus on similarities.
The current situation also impedes both students and researchers in focusing their studies and understanding each other’s work (Lazarsfeld and Barton 1982 :239). A third consequence is providing an opening for critiques by scholars operating within different traditions (Valsiner 2000 :101). A fourth issue is that the “implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm” (Goertz and Mahoney 2012 :9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research involving strategies to improve it and to develop standards of evaluation in qualitative research. However, a specific focus on its distinguishing feature of being “qualitative” while being implicitly acknowledged, was discussed only briefly (for example, Best 2004 ).
In 2014 a theme issue was published in this journal on "Methods, Materials, and Meanings: Designing Cultural Analysis," discussing central issues in (cultural) qualitative research (Berezin 2014; Biernacki 2014; Glaeser 2014; Lamont and Swidler 2014; Spillman 2014). We agree with many of the arguments put forward, such as the risk of methodological tribalism, and that we should not waste energy on debating methods separated from research questions. Nonetheless, a clarification of the relation to what is called "quantitative research" is of utmost importance to avoid misunderstandings and misguided debates between "qualitative" and "quantitative" researchers. Our strategy means that researchers, whether "qualitative" or "quantitative," may in their actual practice combine qualitative and quantitative work.
In this article we accomplish three tasks. First, we systematically survey the literature for meanings of qualitative research by looking at how researchers have defined it. Drawing upon existing knowledge we find that the different meanings and ideas of qualitative research are not yet coherently integrated into one satisfactory definition. Next, we advance our contribution by offering a definition of qualitative research and illustrate its meaning and use partially by expanding on the brief example introduced earlier related to Becker’s work ( 1963 ). We offer a systematic analysis of central themes of what researchers consider to be the core of “qualitative,” regardless of style of work. These themes – which we summarize in terms of four keywords: distinction, process, closeness, improved understanding – constitute part of our literature review, in which each one appears, sometimes with others, but never all in the same definition. They serve as the foundation of our contribution. Our categories are overlapping. Their use is primarily to organize the large amount of definitions we have identified and analyzed, and not necessarily to draw a clear distinction between them. Finally, we continue the elaboration discussed above on the advantages of a clear definition of qualitative research.
In a hermeneutic fashion we propose that there is something meaningful that deserves to be labelled “qualitative research” (Gadamer 1990 ). To approach the question “What is qualitative in qualitative research?” we have surveyed the literature. In conducting our survey we first traced the word’s etymology in dictionaries, encyclopedias, handbooks of the social sciences and of methods and textbooks, mainly in English, which is common to methodology courses. It should be noted that we have zoomed in on sociology and its literature. This discipline has been the site of the largest debate and development of methods that can be called “qualitative,” which suggests that this field should be examined in great detail.
In an ideal situation we should expect that one good definition, or at least some common ideas, would have emerged over the years. This common core of qualitative research should be so accepted that it would appear in at least some textbooks. Since this is not what we found, we decided to pursue an inductive approach to capture maximal variation in the field of qualitative research; we searched in a selection of handbooks, textbooks, book chapters, and books, to which we added the analysis of journal articles. Our sample comprises a total of 89 references.
In practice we focused on the discipline that has had a clear discussion of methods, namely sociology. We also conducted a broad search in the JSTOR database to identify scholarly sociology articles published between 1998 and 2017 in English with a focus on defining or explaining qualitative research. We specifically zoom in on this time frame because we would have expected that this more mature period would have produced clear discussions on the meaning of qualitative research. To find these articles we combined a number of keywords to search the content and/or the title: qualitative (which was always included), definition, empirical, research, methodology, studies, fieldwork, interview and observation.
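The keyword strategy just described — one mandatory term combined with at least one optional term, matched against title or content — can be pictured as a simple filter over article records. The record structure and field names below are hypothetical illustrations, not the actual JSTOR interface:

```python
REQUIRED = "qualitative"
OPTIONAL = {"definition", "empirical", "research", "methodology",
            "studies", "fieldwork", "interview", "observation"}

def matches(record: dict) -> bool:
    """True if the record's title or content mentions the required
    keyword together with at least one optional keyword."""
    text = (record.get("title", "") + " " + record.get("content", "")).lower()
    return REQUIRED in text and any(kw in text for kw in OPTIONAL)

# Invented example records, not real search results:
articles = [
    {"title": "A definition of qualitative research", "content": "..."},
    {"title": "Survey methods", "content": "quantitative analysis only"},
]
hits = [a for a in articles if matches(a)]  # only the first record matches
```

A real database query would of course use the search engine's own boolean syntax; the point here is only the AND-of-ORs logic of the keyword combination.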
As a second phase of our research we searched within nine major sociological journals ( American Journal of Sociology , Sociological Theory , American Sociological Review , Contemporary Sociology , Sociological Forum , Sociological Theory , Qualitative Research , Qualitative Sociology and Qualitative Sociology Review ) for articles also published during the past 19 years (1998–2017) that had the term “qualitative” in the title and attempted to define qualitative research.
Lastly we picked two additional journals, Qualitative Research and Qualitative Sociology, in which we could expect to find texts addressing the notion of "qualitative." From Qualitative Research we chose Volume 14, Issue 6, December 2014, and from Qualitative Sociology we chose Volume 36, Issue 2, June 2017. Within each of these we selected the first article; then we picked the second article of three prior issues. Again we went back another three issues and investigated article number three. Finally we went back another three issues and perused article number four. This selection procedure was used to get a manageable sample for the analysis.
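As we read it, the stepped selection rule amounts to a small algorithm: take article n from the issue 3(n−1) steps back, for n = 1..4. The sketch below is purely illustrative, with synthetic article identifiers rather than the journals' real contents:

```python
def select_articles(issues):
    """Apply the stepped rule described above: take the first article of
    the starting issue, then step back three issues at a time, taking
    the second, third, and fourth article in turn.

    `issues` is a list of issues ordered from the starting issue
    backwards in time; each issue is a list of article identifiers.
    Positions that do not exist are skipped.
    """
    picks = []
    for k in range(4):                       # articles 1 through 4
        issue_idx, article_idx = 3 * k, k    # 0, 3, 6, 9 issues back
        if issue_idx < len(issues) and article_idx < len(issues[issue_idx]):
            picks.append(issues[issue_idx][article_idx])
    return picks

# Twelve synthetic issues of ten articles each:
issues = [[f"issue{j}-art{i+1}" for i in range(10)] for j in range(12)]
print(select_articles(issues))
# → ['issue0-art1', 'issue3-art2', 'issue6-art3', 'issue9-art4']
```

Spreading the picks across issues and article positions in this way reduces the risk of sampling only lead articles or only one editorial period.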
The coding process of the 89 references we gathered in our selected review began soon after the first round of material was gathered, and we reduced the complexity created by our maximum variation sampling (Snow and Anderson 1993 :22) to four different categories within which questions on the nature and properties of qualitative research were discussed. We call them: Qualitative and Quantitative Research, Qualitative Research, Fieldwork, and Grounded Theory. This – which may appear as an illogical grouping – merely reflects the “context” in which the matter of “qualitative” is discussed. If the selection process of the material – books and articles – was informed by pre-knowledge, we used an inductive strategy to code the material. When studying our material, we identified four central notions related to “qualitative” that appear in various combinations in the literature which indicate what is the core of qualitative research. We have labeled them: “distinctions”, “process,” “closeness,” and “improved understanding.” During the research process the categories and notions were improved, refined, changed, and reordered. The coding ended when a sense of saturation in the material arose. In the presentation below all quotations and references come from our empirical material of texts on qualitative research.
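To make the coding bookkeeping concrete, one might tally the four notions across coded excerpts and watch for saturation roughly as follows. This is an illustrative sketch, not the authors' actual procedure, and the excerpt data are invented:

```python
from collections import Counter

# The four notions the authors identified in their coding.
NOTIONS = ["distinctions", "process", "closeness", "improved understanding"]

def tally_notions(coded_excerpts):
    """Count how often each notion was assigned across coded excerpts,
    where each excerpt carries a set of labels."""
    counts = Counter()
    for labels in coded_excerpts:
        counts.update(lab for lab in labels if lab in NOTIONS)
    return counts

def new_labels_per_excerpt(coded_excerpts):
    """Number of previously unseen labels each successive excerpt adds;
    a long tail of zeros is one crude signal of saturation."""
    seen, fresh_counts = set(), []
    for labels in coded_excerpts:
        fresh = set(labels) - seen
        fresh_counts.append(len(fresh))
        seen |= fresh
    return fresh_counts

excerpts = [{"process"}, {"process", "closeness"}, {"closeness"}]
print(tally_notions(excerpts))           # process and closeness each appear twice
print(new_labels_per_excerpt(excerpts))  # → [1, 1, 0]
```

In real qualitative coding, saturation is a judgment about meaning, not a count; the tally only mechanizes the narrow question of when new labels stop appearing.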
In this section we describe the four categories we identified in the coding, how they differently discuss qualitative research, as well as their overall content. Some salient quotations are selected to represent the type of text sorted under each of the four categories. What we present are examples from the literature.
This analytic category comprises quotations comparing qualitative and quantitative research, a distinction that is frequently used (Brown 2010 :231); in effect this is a conceptual pair that structures the discussion and that may be associated with opposing interests. While the general goal of quantitative and qualitative research is the same – to understand the world better – their methodologies and focus in certain respects differ substantially (Becker 1966 :55). Quantity refers to that property of something that can be determined by measurement. In a dictionary of Statistics and Methodology we find that “(a) When referring to *variables, ‘qualitative’ is another term for *categorical or *nominal. (b) When speaking of kinds of research, ‘qualitative’ refers to studies of subjects that are hard to quantify, such as art history. Qualitative research tends to be a residual category for almost any kind of non-quantitative research” (Stiles 1998:183). But it should be obvious that one could employ a quantitative approach when studying, for example, art history.
The same dictionary states that quantitative is “said of variables or research that can be handled numerically, usually (too sharply) contrasted with *qualitative variables and research” (Stiles 1998:184). From a qualitative perspective “quantitative research” is about numbers and counting, and from a quantitative perspective qualitative research is everything that is not about numbers. But this does not say much about what is “qualitative.” If we turn to encyclopedias we find that in the 1932 edition of the Encyclopedia of the Social Sciences there is no mention of “qualitative.” In the Encyclopedia from 1968 we can read:
Qualitative Analysis. For methods of obtaining, analyzing, and describing data, see [the various entries:] CONTENT ANALYSIS; COUNTED DATA; EVALUATION RESEARCH, FIELD WORK; GRAPHIC PRESENTATION; HISTORIOGRAPHY, especially the article on THE RHETORIC OF HISTORY; INTERVIEWING; OBSERVATION; PERSONALITY MEASUREMENT; PROJECTIVE METHODS; PSYCHOANALYSIS, article on EXPERIMENTAL METHODS; SURVEY ANALYSIS, TABULAR PRESENTATION; TYPOLOGIES. (Vol. 13:225)
Some, like Alford, divide researchers into methodologists or, in his words, “quantitative and qualitative specialists” (Alford 1998 :12). Qualitative research uses a variety of methods, such as intensive interviews or in-depth analysis of historical materials, and it is concerned with a comprehensive account of some event or unit (King et al. 1994 :4). Like quantitative research it can be utilized to study a variety of issues, but it tends to focus on meanings and motivations that underlie cultural symbols, personal experiences, phenomena and detailed understanding of processes in the social world. In short, qualitative research centers on understanding processes, experiences, and the meanings people assign to things (Kalof et al. 2008 :79).
Others simply say that qualitative methods are inherently unscientific (Jovanović 2011:19). Hood, for instance, argues that words are intrinsically less precise than numbers, and that they are therefore more prone to subjective analysis, leading to biased results (Hood 2006:219). Qualitative methodologists, in turn, have raised concerns over the limitations of quantitative templates (Brady et al. 2004:4). Scholars such as King et al. (1994), for instance, argue that non-statistical research can produce more reliable results if researchers pay attention to the rules of scientific inference commonly stated in quantitative research. Likewise, researchers such as Becker (1966:59; 1970:42–43) have asserted that, if conducted properly, qualitative research, and in particular ethnographic field methods, can lead to more accurate results than quantitative studies, in particular survey research and laboratory experiments.
Some researchers, such as Kalof, Dan, and Dietz (2008:79), claim that the boundaries between the two approaches are becoming blurred, and Small (2009) argues that currently much qualitative research (especially in North America) tries unsuccessfully and unnecessarily to emulate quantitative standards. For others, qualitative research tends to be more humanistic and discursive (King et al. 1994:4). Ragin (1994), and similarly Becker (1996:53) and Marchel and Owens (2007:303), think that the main distinction between the two styles is overstated and does not rest on the simple dichotomy of “numbers versus words” (Ragin 1994:xii). Some claim that quantitative data can be utilized to discover associations, but that in order to unveil cause and effect a complex research design involving the use of qualitative approaches needs to be devised (Gilbert 2009:35). Consequently, qualitative data are useful for understanding the nuances lying beyond those processes as they unfold (Gilbert 2009:35). Others contend that qualitative research is particularly well suited both to identifying causality and to uncovering fine descriptive distinctions (Fine and Hallett 2014; Lichterman and Reed 2014; Katz 2015).
There are other ways to separate these two traditions, including normative statements about what qualitative research should be (that is, better or worse than quantitative approaches, or concerned with scientific approaches to societal change, or vice versa; Snow and Morrill 1995; Denzin and Lincoln 2005), or whether it should develop falsifiable statements (Best 2004).
We propose that quantitative research is largely concerned with pre-determined variables (Small 2008 ); the analysis concerns the relations between variables. These categories are primarily not questioned in the study, only their frequency or degree, or the correlations between them (cf. Franzosi 2016 ). If a researcher studies wage differences between women and men, he or she works with given categories: x number of men are compared with y number of women, with a certain wage attributed to each person. The idea is not to move beyond the given categories of wage, men and women; they are the starting point as well as the end point, and undergo no “qualitative change.” Qualitative research, in contrast, investigates relations between categories that are themselves subject to change in the research process. Returning to Becker’s study ( 1963 ), we see that he questioned pre-dispositional theories of deviant behavior working with pre-determined variables such as an individual’s combination of personal qualities or emotional problems. His take, in contrast, was to understand marijuana consumption by developing “variables” as part of the investigation. Thereby he presented new variables, or as we would say today, theoretical concepts, but which are grounded in the empirical material.
This category contains quotations that describe qualitative research without making comparisons with quantitative research. Researchers such as Denzin and Lincoln, who have written a series of influential handbooks on qualitative methods (1994; Denzin and Lincoln 2003; 2005), citing Nelson et al. (1992:4), argue that because qualitative research is “interdisciplinary, transdisciplinary, and sometimes counterdisciplinary” it is difficult to derive one single definition of it (Jovanović 2011:3). According to them, in fact, “the field” is “many things at the same time,” involving contradictions and tensions over its focus, methods, and how to derive interpretations and findings (2003:11). Similarly, others, such as Flick (2007:ix–x), contend that agreeing on an accepted definition has become increasingly problematic, and that qualitative research may have developed different identities. However, Best holds that “the proliferation of many sorts of activities under the label of qualitative sociology threatens to confuse our discussions” (2004:54). Atkinson’s position is more definite: “the current state of qualitative research and research methods is confused” (2005:3–4).
Qualitative research is about interpretation (Blumer 1969 ; Strauss and Corbin 1998 ; Denzin and Lincoln 2003 ), or Verstehen [understanding] (Frankfort-Nachmias and Nachmias 1996 ). It is “multi-method,” involving the collection and use of a variety of empirical materials (Denzin and Lincoln 1998; Silverman 2013 ) and approaches (Silverman 2005 ; Flick 2007 ). It focuses not only on the objective nature of behavior but also on its subjective meanings: individuals’ own accounts of their attitudes, motivations, behavior (McIntyre 2005 :127; Creswell 2009 ), events and situations (Bryman 1989) – what people say and do in specific places and institutions (Goodwin and Horowitz 2002 :35–36) in social and temporal contexts (Morrill and Fine 1997). For this reason, following Weber ([1921-22] 1978), it can be described as an interpretative science (McIntyre 2005 :127). But could quantitative research also be concerned with these questions? Also, as pointed out below, does all qualitative research focus on subjective meaning, as some scholars suggest?
Others also distinguish qualitative research by claiming that it collects data using a naturalistic approach (Denzin and Lincoln 2005 :2; Creswell 2009 ), focusing on the meaning actors ascribe to their actions. But again, does all qualitative research need to be collected in situ? And does qualitative research have to be inherently concerned with meaning? Flick ( 2007 ), referring to Denzin and Lincoln ( 2005 ), mentions conversation analysis as an example of qualitative research that is not concerned with the meanings people bring to a situation, but rather with the formal organization of talk. Still others, such as Ragin ( 1994 :85), note that qualitative research is often (especially early on in the project, we would add) less structured than other kinds of social research – a characteristic connected to its flexibility and that can lead both to potentially better, but also worse results. But is this not a feature of this type of research, rather than a defining description of its essence? Wouldn’t this comment also apply, albeit to varying degrees, to quantitative research?
In addition, Strauss (2003), along with others such as Alvesson and Kärreman (2011:10–76), argues that qualitative researchers struggle to capture and represent complex phenomena partly because they tend to collect large amounts of data. While his analysis is correct on some points – “It is necessary to do detailed, intensive, microscopic examination of the data in order to bring out the amazing complexity of what lies in, behind, and beyond those data” (Strauss 2003:10) – much of it concerns the supposed focus of qualitative research and its challenges, rather than exactly what it is about. Even in this instance, it would be a weak case to argue that these are strictly the defining features of qualitative research. Some researchers seem to focus on the approach or the methods used, or even on the way material is analyzed. Several researchers stress the naturalistic assumption of investigating the world, suggesting that meaning and interpretation appear to be a core matter of qualitative research.
We can also see that in this category there is no consensus about specific qualitative methods nor about qualitative data. Many emphasize interpretation, but quantitative research, too, involves interpretation; the results of a regression analysis, for example, certainly have to be interpreted, and the form of meta-analysis that factor analysis provides indeed requires interpretation. However, there is no interpretation of quantitative raw data, i.e., numbers in tables. One common thread is that qualitative researchers have to get to grips with their data in order to understand what is being studied in great detail, irrespective of the type of empirical material being analyzed. This observation is connected to the fact that qualitative researchers routinely make several adjustments of focus and research design as their studies progress, in many cases until the very end of the project (Kalof et al. 2008). If you, like Becker, do not start out with a detailed theory, adjustments such as the emergence and refinement of research questions will occur during the research process. We have thus found a number of useful reflections about qualitative research scattered across different sources, but none of them effectively describes the defining characteristics of this approach.
Although qualitative research does not appear to be defined in terms of a specific method, fieldwork, i.e., research in which the researcher spends considerable time in the field under study and uses the knowledge gained as data, is certainly commonly seen as emblematic of, or even identical to, qualitative research. Because fieldwork tends to focus primarily on the collection and analysis of qualitative data, we expected to find within it discussions on the meaning of “qualitative.” But, again, this was not the case.
Instead, we found material on the history of this approach (for example, Frankfort-Nachmias and Nachmias 1996 ; Atkinson et al. 2001), including how it has changed; for example, by adopting a more self-reflexive practice (Heyl 2001), as well as the different nomenclature that has been adopted, such as fieldwork, ethnography, qualitative research, naturalistic research, participant observation and so on (for example, Lofland et al. 2006 ; Gans 1999 ).
We retrieved definitions of ethnography, such as “the study of people acting in the natural courses of their daily lives,” involving a “resocialization of the researcher” (Emerson 1988:1) through intense immersion in others’ social worlds (see also examples in Hammersley 2018). This may be accomplished by direct observation and also participation (Neuman 2007:276), although others, such as Denzin (1970:185), have long recognized other types of observation, including non-participant (“fly on the wall”). In this category we have also isolated claims and opposing views arguing that this type of research is distinguished primarily by where it is conducted (natural settings) (Hughes 1971:496), by how it is carried out (a variety of methods are applied) or, for some most importantly, by involving an active, empathetic immersion in those being studied (Emerson 1988:2). We also retrieved descriptions of the goals it pursues in relation to how it is taught (understanding the subjective meanings of the people studied, primarily developing theory, or contributing to social change) (see for example Corte and Irwin 2017; Frankfort-Nachmias and Nachmias 1996:281; Trier-Bieniek 2012:639) by collecting the richest possible data (Lofland et al. 2006) to derive “thick descriptions” (Geertz 1973), and/or by aiming at theoretical statements of general scope and applicability (for example, Emerson 1988; Fine 2003). We have identified guidelines on how to evaluate it (for example, Becker 1996; Lamont 2004) and have retrieved instructions on how it should be conducted (for example, Lofland et al. 2006). For instance, analysis should take place while the data gathering unfolds (Emerson 1988; Hammersley and Atkinson 2007; Lofland et al. 2006), observations should be of long duration (Becker 1970:54; Goffman 1989), and data should be of high quantity (Becker 1970:52–53). We also found other, questionable distinctions between fieldwork and other methods:
Field studies differ from other methods of research in that the researcher performs the task of selecting topics, decides what questions to ask, and forges interest in the course of the research itself . This is in sharp contrast to many ‘theory-driven’ and ‘hypothesis-testing’ methods. (Lofland and Lofland 1995 :5)
But could not, for example, a strictly interview-based study be carried out with the same amount of flexibility, such as sequential interviewing (for example, Small 2009 )? Once again, are quantitative approaches really as inflexible as some qualitative researchers think? Moreover, this category stresses the role of the actors’ meaning, which requires knowledge and close interaction with people, their practices and their lifeworld.
It is clear that field studies – which are seen by some as the “gold standard” of qualitative research – are nonetheless only one way of doing qualitative research. There are other methods, but it is not clear why some are more qualitative than others, or why they are better or worse. Fieldwork is characterized by interaction with the field (the material) and understanding of the phenomenon that is being studied. In Becker’s case, he had general experience from fields in which marihuana was used, based on which he did interviews with actual users in several fields.
Another major category we identified in our sample is Grounded Theory. We found descriptions of it most clearly in Glaser and Strauss’ ([1967] 2010 ) original articulation, Strauss and Corbin ( 1998 ) and Charmaz ( 2006 ), as well as many other accounts of what it is for: generating and testing theory (Strauss 2003 :xi). We identified explanations of how this task can be accomplished – such as through two main procedures: constant comparison and theoretical sampling (Emerson 1998:96), and how using it has helped researchers to “think differently” (for example, Strauss and Corbin 1998 :1). We also read descriptions of its main traits, what it entails and fosters – for instance, an exceptional flexibility, an inductive approach (Strauss and Corbin 1998 :31–33; 1990; Esterberg 2002 :7), an ability to step back and critically analyze situations, recognize tendencies towards bias, think abstractly and be open to criticism, enhance sensitivity towards the words and actions of respondents, and develop a sense of absorption and devotion to the research process (Strauss and Corbin 1998 :5–6). Accordingly, we identified discussions of the value of triangulating different methods (both using and not using grounded theory), including quantitative ones, and theories to achieve theoretical development (most comprehensively in Denzin 1970 ; Strauss and Corbin 1998 ; Timmermans and Tavory 2012 ). We have also located arguments about how its practice helps to systematize data collection, analysis and presentation of results (Glaser and Strauss [1967] 2010 :16).
Grounded theory offers a systematic approach which requires researchers to get close to the field; closeness is a requirement of identifying questions and developing new concepts or making further distinctions with regard to old concepts. In contrast to other qualitative approaches, grounded theory emphasizes the detailed coding process, and the numerous fine-tuned distinctions that the researcher makes during the process. Within this category, too, we could not find a satisfying discussion of the meaning of qualitative research.
In sum, our analysis shows that some notions reappear in the discussion of qualitative research, such as understanding, interpretation, “getting close” and making distinctions. These notions capture aspects of what we think is “qualitative.” However, a comprehensive definition that is useful and that can further develop the field is lacking, and not even a clear picture of its essential elements emerges. In other words, no definition emerges from our data, and in our research process we have moved back and forth between our empirical data and the attempt to present a definition. Our concrete strategy, as stated above, is to relate qualitative and quantitative research, or more specifically, qualitative and quantitative work. We use an ideal-typical notion of quantitative research which relies on taken-for-granted, numbered variables. This means that the data consist of variables on different scales, such as ordinal, but frequently ratio and absolute scales, and that the assignment of numbers to the variables, i.e., the justification for attributing numbers to an object or phenomenon, is not questioned, though its validity may be. In this section we return to the notion of quality and try to clarify it while presenting our contribution.
Broadly, research refers to the activity performed by people trained to obtain knowledge through systematic procedures. Notions such as “objectivity” and “reflexivity,” “systematic,” “theory,” “evidence” and “openness” are here taken for granted in any type of research. Next, building on our empirical analysis we explain the four notions that we have identified as central to qualitative work: distinctions, process, closeness, and improved understanding. In discussing them, ultimately in relation to one another, we make their meaning even more precise. Our idea, in short, is that only when these ideas that we present separately for analytic purposes are brought together can we speak of qualitative research.
We believe that the possibility of making new distinctions is one of the defining characteristics of qualitative research. It clearly sets it apart from quantitative analysis, which works with taken-for-granted variables, although, as mentioned, meta-analyses such as factor analysis may result in new variables. “Quality” refers essentially to distinctions, as already pointed out by Aristotle, who comments on the term: “By a quality I mean that in virtue of which things are said to be qualified somehow” (Aristotle 1984:14). Quality is about what something is or has, which means that the distinction from its environment is crucial. We see qualitative research as a process in which significant new distinctions are made available to the scholarly community; to make distinctions is a key aspect of obtaining new knowledge, a point that, as we will see, also has implications for “quantitative research.” The notion of being “significant” is paramount. New distinctions by themselves are not enough; merely adding concepts increases complexity without furthering our knowledge. The significance of new distinctions is judged against the communal knowledge of the research community. To enable this discussion and these judgements, central elements of rational discussion are required (cf. Habermas [1981] 1987; Davidsson [1988] 2001) in order to identify what is new and relevant scientific knowledge. Relatedly, Ragin alludes to the idea of new and useful knowledge at a more concrete level: “Qualitative methods are appropriate for in-depth examination of cases because they aid the identification of key features of cases. Most qualitative methods enhance data” (1994:79). When Becker (1963) studied deviant behavior and investigated how people became marihuana smokers, he made distinctions between the ways in which people learned how to smoke.
This is a classic example of how the strategy of “getting close” to the material, for example the text, people or pictures that are subject to analysis, may enable researchers to obtain deeper insight and new knowledge by making distinctions – in this instance on the initial notion of learning how to smoke. Others have stressed the making of distinctions in relation to coding or theorizing. Emerson et al. (1995), for example, hold that “qualitative coding is a way of opening up avenues of inquiry,” meaning that the researcher identifies and develops concepts and analytic insights through close examination of and reflection on data (Emerson et al. 1995:151). Goodwin and Horowitz highlight making distinctions in relation to theory-building, writing: “Close engagement with their cases typically requires qualitative researchers to adapt existing theories or to make new conceptual distinctions or theoretical arguments to accommodate new data” (2002:37). In ideal-typical quantitative research, only existing and, so to speak, given variables would be used. If this is the case, no new distinctions are made. But would not many “quantitative” researchers also make new distinctions?
Process does not merely suggest that research takes time. It mainly implies that new qualitative knowledge results from a process that involves several phases, and above all iteration. Qualitative research is about oscillation between theory and evidence, between analysis and the generation of material, between first- and second-order constructs (Schütz 1962:59), between getting in contact with something, finding sources, becoming deeply familiar with a topic, and then distilling and communicating some of its essential features. The main point is that the categories that the researcher uses, and perhaps takes for granted at the beginning of the research process, usually undergo qualitative changes resulting from what is found. Becker describes how he tested hypotheses and let the jargon of the users develop into theoretical concepts. This happens over time while the study is being conducted, exemplifying what we mean by process.
In the research process, a pilot-study may be used to get a first glance of, for example, the field, how to approach it, and what methods can be used, after which the method and theory are chosen or refined before the main study begins. Thus, the empirical material is often central from the start of the project and frequently leads to adjustments by the researcher. Likewise, during the main study categories are not fixed; the empirical material is seen in light of the theory used, but it is also given the opportunity to kick back, thereby resisting attempts to apply theoretical straightjackets (Becker 1970 :43). In this process, coding and analysis are interwoven, and thus are often important steps for getting closer to the phenomenon and deciding what to focus on next. Becker began his research by interviewing musicians close to him, then asking them to refer him to other musicians, and later on doubling his original sample of about 25 to include individuals in other professions (Becker 1973:46). Additionally, he made use of some participant observation, documents, and interviews with opiate users made available to him by colleagues. As his inductive theory of deviance evolved, Becker expanded his sample in order to fine tune it, and test the accuracy and generality of his hypotheses. In addition, he introduced a negative case and discussed the null hypothesis ( 1963 :44). His phasic career model is thus based on a research design that embraces processual work. Typically, process means to move between “theory” and “material” but also to deal with negative cases, and Becker ( 1998 ) describes how discovering these negative cases impacted his research design and ultimately its findings.
Obviously, all research is process-oriented to some degree. The point is that the ideal-typical quantitative process does not imply change of the data, nor iteration between data, evidence, hypotheses, empirical work, and theory. The data, quantified variables, are in most cases fixed. Merging of data, which of course can be done in a quantitative research process, does not mean new data. New hypotheses are frequently tested, but the “raw data” are often the same. Obviously, over time new datasets are made available and put into use.
Another characteristic that is emphasized in our sample is that qualitative researchers – and in particular ethnographers – can, or as Goffman put it, ought to ( 1989 ), get closer to the phenomenon being studied and their data than quantitative researchers (for example, Silverman 2009 :85). Put differently, essentially because of their methods qualitative researchers get into direct close contact with those being investigated and/or the material, such as texts, being analyzed. Becker started out his interview study, as we noted, by talking to those he knew in the field of music to get closer to the phenomenon he was studying. By conducting interviews he got even closer. Had he done more observations, he would undoubtedly have got even closer to the field.
Additionally, the ethnographic design enables researchers to follow the field over time, and the research they do is almost by definition longitudinal, though the time spent in the field obviously differs between studies. The general characteristic of closeness over time maximizes the chances of unexpected events, new data (related, for example, to archival research as an additional source, and in ethnography to situations not necessarily previously thought of as instrumental – what Mannay and Morgan (2015) term the “waiting field”), serendipity (Merton and Barber 2004; Åkerström 2013), and possibly reactivity, as well as the opportunity to observe disrupted patterns that translate into exemplars of negative cases. Two classic examples of this are Becker’s finding of what medical students call “crocks” (Becker et al. 1961:317), and Geertz’s (1973) study of “deep play” in Balinese society.
By getting and staying so close to their data – be it pictures, text or humans interacting (Becker was himself a musician) – for a long time, as the research progressively focuses, qualitative researchers are prompted to continually test their hunches, presuppositions and hypotheses. They test them against a reality that often (but certainly not always), practically as well as metaphorically, talks back, whether by validating them or by disqualifying their premises – correctly, as well as incorrectly (Fine 2003; Becker 1970). This testing nonetheless often leads to new directions for the research. Becker, for example, says that he was initially reading psychological theories, but when facing the data he developed a theory that looks at, one might say, everything but psychological dispositions to explain the use of marihuana. Researchers involved with ethnographic methods in particular have a fairly unique opportunity to dig up and then test (in a circular, continuous and temporal way) new research questions and findings as the research progresses, and thereby to derive previously unimagined and uncharted distinctions by getting closer to the phenomenon under study.
Let us stress that getting close is by no means restricted to ethnography. The notion of the hermeneutic circle, and hermeneutics as a general way of understanding, implies that we must get close to the details in order to get the big picture. This also means that qualitative researchers can quite literally make use of details of pictures as evidence (cf. Harper 2002). Thus, researchers may get closer both when generating the material and when analyzing it.
Quantitative research, we maintain, in the ideal-typical representation cannot get closer to the data. The data is essentially numbers in tables making up the variables (Franzosi 2016 :138). The data may originally have been “qualitative,” but once reduced to numbers there can only be a type of “hermeneutics” about what the number may stand for. The numbers themselves, however, are non-ambiguous. Thus, in quantitative research, interpretation, if done, is not about the data itself—the numbers—but what the numbers stand for. It follows that the interpretation is essentially done in a more “speculative” mode without direct empirical evidence (cf. Becker 2017 ).
While distinction, process and getting closer refer to the qualitative work of the researcher, improved understanding refers to the conditions and the outcome of this work. Understanding cuts deeper than explanation, which to some may mean a causally verified correlation between variables. The notion of explanation presupposes the notion of understanding, since explanation does not include an idea of how knowledge is gained (Manicas 2006:15). Understanding, we argue, is the core concept of what we call the outcome of the process, when research has made use of all the other elements that were integrated in the research. Understanding, then, has a special status in qualitative research since it refers both to the conditions of knowledge and to the outcome of the process. Understanding can to some extent be seen as the condition of explanation and occurs in a process of interpretation, which naturally refers to meaning (Gadamer 1990). It is fundamentally connected to knowing, and to knowing how to do things (Heidegger [1927] 2001). Conceptually, the term hermeneutics is used to account for this process. Heidegger ties hermeneutics to human being, and holds that it cannot be separated from the understanding of being (1988). Here we use it in a broader sense, more connected to method in general (cf. Seiffert 1992). The abovementioned aspects of the approach, for example “objectivity” and “reflexivity,” are conditions of scientific understanding. Understanding is the result of a circular process and means that the parts are understood in light of the whole, and vice versa. Understanding presupposes pre-understanding, or in other words, some knowledge of the phenomenon studied. In the qualitative research process, which we see as iterative, this pre-understanding, even in the form of prejudices, is questioned and changes gradually or suddenly through the iteration of data, evidence and concepts.
Qualitative research generates understanding in this iterative process, as the researcher gets closer to the data, e.g., by going back and forth between field and analysis in a process that generates new data that changes the evidence and, ultimately, the findings. Questioning, that is, asking questions and putting what one assumes, one’s prejudices and presumptions, in question, is central to understanding something (Heidegger [1927] 2001; Gadamer 1990:368–384). We propose that this iterative process, in which understanding occurs, is characteristic of qualitative research.
Improved understanding means that we obtain scientific knowledge of something that we as a scholarly community did not know before, or that we get to know something better. It means that we understand more about how parts are related to one another, and to other things we already understand (see also Fine and Hallett 2014 ). Understanding is an important condition for qualitative research. It is not enough to identify correlations, make distinctions, and work in a process in which one gets close to the field or phenomena. Understanding is accomplished when the elements are integrated in an iterative process.
It is, moreover, possible to understand many things, and researchers, just like children, may come to understand new things every day as they engage with the world. This subjective condition of understanding – namely, that a person gains a better understanding of something – is easily met. To qualify as “scientific,” the understanding must be general and useful to many; it must be public. But even this generally accessible understanding is not enough to speak of “scientific understanding.” Though we as a collective can increase understanding of everything in virtually all potential directions, also as a result of qualitative work, we refrain from this “objective” way of understanding, which has no means of discriminating among what we gain in understanding. Scientific understanding means that it is deemed relevant from the scientific horizon (compare Schütz 1962:35–38, 46, 63), and that it rests on the pre-understanding that scientists have and must have in order to understand. In other words, the understanding gained must be deemed useful by other researchers, so that they can build on it. We thus see understanding from a pragmatic, rather than a subjective or objective, perspective. Improved understanding is related to the question(s) at hand. Understanding, in order to represent an improvement, must be an improvement in relation to the existing body of knowledge of the scientific community (James [1907] 1955). Scientific understanding is, by definition, collective, as expressed in Weber’s famous note on objectivity, namely that scientific work aims at truths “which … can claim, even for a Chinese, the validity appropriate to an empirical analysis” ([1904] 1949:59). By qualifying “improved understanding” we argue that it is a general defining characteristic of qualitative research. Becker’s (1966) study and other research on deviant behavior increased our understanding of the social learning processes through which individuals take up a behavior.
And it also added new knowledge about the labeling of deviant behavior as a social process. Few studies, of course, make the same large contribution as Becker’s, but are nonetheless qualitative research.
Understanding in the phenomenological sense, which we argue is a hallmark of qualitative research, requires meaning, and this meaning is derived from the context, and above all from the data being analyzed. Ideal-typical quantitative research operates with given variables that take on different numbers. This type of material is not enough to establish meaning at the level that truly justifies understanding. In other words, many social science explanations offer ideas about correlations or even causal relations, but this does not mean that the meaning at the level of the data analyzed is understood. This leads us to say that there are indeed many explanations that meet the criteria of understanding, for example Becker’s explanation of how one becomes a marihuana smoker. However, we may also understand a phenomenon without explaining it, and we may have potential explanations, or rather correlations, that are not really understood.
We may speak more generally of quantitative research and its data to clarify what we see as an important distinction. The “raw data” to which quantitative research, as an ideal-typical activity, refers are not available for further analysis; the numbers, once created, are not to be questioned (Franzosi 2016: 138). If the researcher is to do “more” or “change” something, this will be done by conjectures based on theoretical knowledge or on the researcher’s lifeworld. Both qualitative and quantitative research are based on the lifeworld, and all researchers use prejudices and pre-understanding in the research process. This idea is present in the works of Heidegger ([1927] 2001) and Heisenberg (cited in Franzosi 2010: 619). Qualitative research, as we argued, involves the interaction and questioning of concepts (theory), data, and evidence.
Ragin (2004: 22) points out that “a good definition of qualitative research should be inclusive and should emphasize its key strengths and features, not what it lacks (for example, the use of sophisticated quantitative techniques).” We define qualitative research as an iterative process in which improved understanding, novel to the scientific community, is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. Qualitative research, as defined here, is consequently a combination of two criteria: (i) how to do things, namely generating and analyzing empirical material in an iterative process in which one gets closer by making distinctions, and (ii) the outcome, namely improved understanding that is novel to the scholarly community. Is our definition applicable to our own study? In this study we have closely read the empirical material that we generated, and the novel distinction of the notion “qualitative research” is the outcome of an iterative process, involving both deduction and induction, in which we identified the categories that we analyzed. We thus claim to meet the first criterion, “how to do things.” The second criterion can be judged by us only partially, namely whether the outcome, in concrete form the definition, improves the understanding of others in the scientific community.
We have defined qualitative research, or qualitative scientific work, in relation to quantitative scientific work. Given this definition, qualitative research is about questioning pre-given (taken-for-granted) variables, but it is also about making new distinctions regarding any type of phenomenon, for example by coining new concepts, including the identification of new variables. This process, as we have discussed, is carried out in relation to empirical material and previous research, and thus in relation to theory. Theory and previous research cannot be escaped or bracketed. According to hermeneutic principles, all scientific work is grounded in the lifeworld, and as social scientists we can thus never fully bracket our pre-understanding.
We have proposed that quantitative research, as an ideal type, is concerned with pre-determined variables (Small 2008). Variables are epistemically fixed, but can vary along dimensions such as frequency or number. Age is an example; as a variable it can take on different numbers. In contrast to quantitative research, qualitative research does not reduce its material to numbers and variables. If this is done, the process comes to a halt: the researcher becomes more distanced from her data, and it is no longer possible to make new distinctions that increase our understanding. We have discussed above the components of our definition in relation to quantitative research. Our conclusion is that in the research that is called quantitative there are frequent and necessary qualitative elements.
Further, we propose that comparative empirical research on researchers primarily working with “quantitative” approaches and those working with “qualitative” approaches would perhaps show many similarities in the practices of the two approaches. This is not to deny dissimilarities, or the different epistemic and ontic presuppositions that may be more or less strongly associated with the two strands (see Goertz and Mahoney 2012). Our point is nonetheless that prejudices and preconceptions about researchers are unproductive, that, as other researchers have argued, differences may be exaggerated (e.g., Becker 1996: 53, 2017; Marchel and Owens 2007: 303; Ragin 1994), and that a qualitative dimension is present in both kinds of work.
Several things follow from our findings. The most important concerns the relation to quantitative research. In our analysis we have separated qualitative research from quantitative research. The point is not to label individual researchers, methods, projects, or works as either “quantitative” or “qualitative.” By analyzing, i.e., taking apart, the notions of quantitative and qualitative, we hope to have shown the elements of qualitative research. Our definition captures these elements and how they, when combined in practice, generate understanding. As many of the quotations we have used suggest, one conclusion of our study is that qualitative approaches are not inherently connected with a specific method. Put differently, none of the methods that are frequently labelled “qualitative,” such as interviews or participant observation, is inherently “qualitative.” What matters, given our definition, is whether one works qualitatively or quantitatively in the research process, until the results are produced. Consequently, our analysis also suggests that those researchers working with what in the literature and in jargon is often called “quantitative research” are almost bound to make use of what we have identified as qualitative elements in any research project. Our findings also suggest that many “quantitative” researchers, at least to some extent, are engaged in qualitative work, such as when research questions are developed, variables are constructed and combined, and hypotheses are formulated. Furthermore, a research project may hover between “qualitative” and “quantitative,” or start out as “qualitative” and later move into a “quantitative” phase (a distinct strategy that is not the same as “mixed methods” or simply combining induction and deduction). More generally speaking, the categories of “qualitative” and “quantitative” unfortunately often cover up practices, and they may lead to “camps” of researchers opposing one another.
For example, regardless of whether the researcher is primarily oriented to “quantitative” or “qualitative” research, the role of theory is often neglected (cf. Swedberg 2017). Our results open up for an interaction characterized not by differences, but by different emphases and by similarities.
Let us take two examples to briefly indicate how qualitative elements can fruitfully be combined with quantitative ones. Franzosi (2010) has discussed the relations between quantitative and qualitative approaches, and more specifically the relation between words and numbers. He analyzes texts and argues that scientific meaning cannot be reduced to numbers. Put differently, the meaning of the numbers is to be understood through what is taken for granted, through what is part of the lifeworld (Schütz 1962). Franzosi shows how one can use qualitative and quantitative methods and data to address scientific questions, analyzing violence in Italy at the time when fascism was rising (1919–1922). Aspers (2006) studied meaning among fashion photographers. He uses an empirical phenomenological approach and establishes meaning at the level of actors. In a second step, this meaning, and the different ideal-typical photographers constructed as a result of participant observation and interviews, are tested against quantitative data from a database: first to verify the different ideal types, and then to use these types to establish new knowledge about them. In both of these cases, and more examples can be found, the authors move from qualitative data and try to keep the meaning established when using the quantitative data.
A second main result of our study is that a definition, and we have provided one, offers a way for researchers to clarify, and even evaluate, what is done. Hence, our definition can guide researchers and students, informing them how to think about the concrete research problems they face and showing what it means to get closer in a process in which new distinctions are made. The definition can also be used as a standard of evaluation of results (cf. Hammersley 2007): to see whether new distinctions are made and whether this improves our understanding of what is researched, in addition to evaluating how the research was conducted. By making explicit what qualitative research is, it becomes easier to communicate findings, and much harder to fly under the radar with substandard research, since there are standards of evaluation that make it easier to separate “good” from “not so good” qualitative research.
To conclude, our analysis, which ends with a definition of qualitative research, can thus address both the “internal” issue of what qualitative research is and the “external” critiques that make it harder to do qualitative research, critiques fed by pressure from quantitative methods as well as by broader changes in society.
Åkerström, Malin. 2013. Curiosity and serendipity in qualitative research. Qualitative Sociology Review 9 (2): 10–18.
Alford, Robert R. 1998. The craft of inquiry. Theories, methods, evidence . Oxford: Oxford University Press.
Alvesson, Mats, and Dan Kärreman. 2011. Qualitative research and theory development. Mystery as method . London: SAGE Publications.
Aspers, Patrik. 2006. Markets in fashion: A phenomenological approach . London: Routledge.
Atkinson, Paul. 2005. Qualitative research. Unity and diversity. Forum: Qualitative Social Research 6 (3): 1–15.
Becker, Howard S. 1963. Outsiders. Studies in the sociology of deviance . New York: The Free Press.
Becker, Howard S. 1966. Whose side are we on? Social Problems 14 (3): 239–247.
Becker, Howard S. 1970. Sociological work. Method and substance . New Brunswick: Transaction Books.
Becker, Howard S. 1996. The epistemology of qualitative research. In Ethnography and human development. Context and meaning in social inquiry , ed. Jessor Richard, Colby Anne, and Richard A. Shweder, 53–71. Chicago: University of Chicago Press.
Becker, Howard S. 1998. Tricks of the trade. How to think about your research while you're doing it . Chicago: University of Chicago Press.
Becker, Howard S. 2017. Evidence . Chicago: University of Chicago Press.
Becker, Howard, Blanche Geer, Everett Hughes, and Anselm Strauss. 1961. Boys in White, student culture in medical school . New Brunswick: Transaction Publishers.
Berezin, Mabel. 2014. How do we know what we mean? Epistemological dilemmas in cultural sociology. Qualitative Sociology 37 (2): 141–151.
Best, Joel. 2004. Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , ed. Charles Ragin, Joane Nagel, and Patricia White, 53–54. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf .
Biernacki, Richard. 2014. Humanist interpretation versus coding text samples. Qualitative Sociology 37 (2): 173–188.
Blumer, Herbert. 1969. Symbolic interactionism: Perspective and method . Berkeley: University of California Press.
Brady, Henry, David Collier, and Jason Seawright. 2004. Refocusing the discussion of methodology. In Rethinking social inquiry. Diverse tools, shared standards , ed. Brady Henry and Collier David, 3–22. Lanham: Rowman and Littlefield.
Brown, Allison P. 2010. Qualitative method and compromise in applied social research. Qualitative Research 10 (2): 229–248.
Charmaz, Kathy. 2006. Constructing grounded theory . London: Sage.
Corte, Ugo, and Katherine Irwin. 2017. The form and flow of teaching ethnographic knowledge: Hands-on approaches for learning epistemology. Teaching Sociology 45 (3): 209–219.
Creswell, John W. 2009. Research design. Qualitative, quantitative, and mixed method approaches . 3rd ed. Thousand Oaks: SAGE Publications.
Davidson, Donald. [1988] 2001. The myth of the subjective. In Subjective, intersubjective, objective , ed. Donald Davidson, 39–52. Oxford: Oxford University Press.
Denzin, Norman K. 1970. The research act: A theoretical introduction to sociological methods . Chicago: Aldine Publishing Company.
Denzin, Norman K., and Yvonna S. Lincoln. 2003. Introduction. The discipline and practice of qualitative research. In Collecting and interpreting qualitative materials , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–45. Thousand Oaks: SAGE Publications.
Denzin, Norman K., and Yvonna S. Lincoln. 2005. Introduction. The discipline and practice of qualitative research. In The Sage handbook of qualitative research , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–32. Thousand Oaks: SAGE Publications.
Emerson, Robert M., ed. 1988. Contemporary field research. A collection of readings . Prospect Heights: Waveland Press.
Emerson, Robert M., Rachel I. Fretz, and Linda L. Shaw. 1995. Writing ethnographic fieldnotes . Chicago: University of Chicago Press.
Esterberg, Kristin G. 2002. Qualitative methods in social research . Boston: McGraw-Hill.
Fine, Gary Alan. 1995. Review of “handbook of qualitative research.” Contemporary Sociology 24 (3): 416–418.
Fine, Gary Alan. 2003. Toward a peopled ethnography: Developing theory from group life. Ethnography 4 (1): 41–60.
Fine, Gary Alan, and Black Hawk Hancock. 2017. The new ethnographer at work. Qualitative Research 17 (2): 260–268.
Fine, Gary Alan, and Timothy Hallett. 2014. Stranger and stranger: Creating theory through ethnographic distance and authority. Journal of Organizational Ethnography 3 (2): 188–203.
Flick, Uwe. 2002. Qualitative research. State of the art. Social Science Information 41 (1): 5–24.
Flick, Uwe. 2007. Designing qualitative research . London: SAGE Publications.
Frankfort-Nachmias, Chava, and David Nachmias. 1996. Research methods in the social sciences . 5th ed. London: Edward Arnold.
Franzosi, Roberto. 2010. Sociology, narrative, and the quality versus quantity debate (Goethe versus Newton): Can computer-assisted story grammars help us understand the rise of Italian fascism (1919- 1922)? Theory and Society 39 (6): 593–629.
Franzosi, Roberto. 2016. From method and measurement to narrative and number. International journal of social research methodology 19 (1): 137–141.
Gadamer, Hans-Georg. 1990. Wahrheit und Methode, Grundzüge einer philosophischen Hermeneutik . Band 1, Hermeneutik. Tübingen: J.C.B. Mohr.
Gans, Herbert. 1999. Participant Observation in an Age of “Ethnography”. Journal of Contemporary Ethnography 28 (5): 540–548.
Geertz, Clifford. 1973. The interpretation of cultures . New York: Basic Books.
Gilbert, Nigel. 2009. Researching social life . 3rd ed. London: SAGE Publications.
Glaeser, Andreas. 2014. Hermeneutic institutionalism: Towards a new synthesis. Qualitative Sociology 37: 207–241.
Glaser, Barney G., and Anselm L. Strauss. [1967] 2010. The discovery of grounded theory. Strategies for qualitative research. Hawthorne: Aldine.
Goertz, Gary, and James Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences . Princeton: Princeton University Press.
Goffman, Erving. 1989. On fieldwork. Journal of Contemporary Ethnography 18 (2): 123–132.
Goodwin, Jeff, and Ruth Horowitz. 2002. Introduction. The methodological strengths and dilemmas of qualitative sociology. Qualitative Sociology 25 (1): 33–47.
Habermas, Jürgen. [1981] 1987. The theory of communicative action . Oxford: Polity Press.
Hammersley, Martyn. 2007. The issue of quality in qualitative research. International Journal of Research & Method in Education 30 (3): 287–305.
Hammersley, Martyn. 2013. What is qualitative research? Bloomsbury Publishing.
Hammersley, Martyn. 2018. What is ethnography? Can it survive should it? Ethnography and Education 13 (1): 1–17.
Hammersley, Martyn, and Paul Atkinson. 2007. Ethnography. Principles in practice . London: Tavistock Publications.
Heidegger, Martin. [1927] 2001. Sein und Zeit . Tübingen: Max Niemeyer Verlag.
Heidegger, Martin. [1923] 1988. Ontologie. Hermeneutik der Faktizität , Gesamtausgabe II. Abteilung: Vorlesungen 1919–1944, Band 63. Frankfurt am Main: Vittorio Klostermann.
Hempel, Carl G. 1966. Philosophy of the natural sciences . Upper Saddle River: Prentice Hall.
Hood, Jane C. 2006. Teaching against the text. The case of qualitative methods. Teaching Sociology 34 (3): 207–223.
James, William. [1907] 1955. Pragmatism . New York: Meridian Books.
Jovanović, Gordana. 2011. Toward a social history of qualitative research. History of the Human Sciences 24 (2): 1–27.
Kalof, Linda, Amy Dan, and Thomas Dietz. 2008. Essentials of social research . London: Open University Press.
Katz, Jack. 2015. Situational evidence: Strategies for causal reasoning from observational field notes. Sociological Methods & Research 44 (1): 108–144.
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing social inquiry: Scientific inference in qualitative research . Princeton: Princeton University Press.
Lamont, Michelle. 2004. Evaluating qualitative research: Some empirical findings and an agenda. In Report from workshop on interdisciplinary standards for systematic qualitative research , ed. M. Lamont and P. White, 91–95. Washington, DC: National Science Foundation.
Lamont, Michèle, and Ann Swidler. 2014. Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology 37 (2): 153–171.
Lazarsfeld, Paul, and Alan Barton. 1982. Some functions of qualitative analysis in social research. In The varied sociology of Paul Lazarsfeld , ed. Patricia Kendall, 239–285. New York: Columbia University Press.
Lichterman, Paul, and Isaac Ariail Reed. 2014. Theory and contrastive explanation in ethnography. Sociological Methods & Research . Prepublished 27 October 2014. https://doi.org/10.1177/0049124114554458 .
Lofland, John, and Lyn Lofland. 1995. Analyzing social settings. A guide to qualitative observation and analysis . 3rd ed. Belmont: Wadsworth.
Lofland, John, David A. Snow, Leon Anderson, and Lyn H. Lofland. 2006. Analyzing social settings. A guide to qualitative observation and analysis . 4th ed. Belmont: Wadsworth/Thomson Learning.
Long, Andrew F., and Mary Godfrey. 2004. An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology 7 (2): 181–196.
Lundberg, George. 1951. Social research: A study in methods of gathering data . New York: Longmans, Green and Co.
Malinowski, Bronislaw. 1922. Argonauts of the Western Pacific: An account of native enterprise and adventure in the archipelagoes of Melanesian New Guinea . London: Routledge.
Manicas, Peter. 2006. A realist philosophy of science: Explanation and understanding . Cambridge: Cambridge University Press.
Marchel, Carol, and Stephanie Owens. 2007. Qualitative research in psychology. Could William James get a job? History of Psychology 10 (4): 301–324.
McIntyre, Lisa J. 2005. Need to know. Social science research methods . Boston: McGraw-Hill.
Merton, Robert K., and Elinor Barber. 2004. The travels and adventures of serendipity. A Study in Sociological Semantics and the Sociology of Science . Princeton: Princeton University Press.
Mannay, Dawn, and Melanie Morgan. 2015. Doing ethnography or applying a qualitative technique? Reflections from the ‘waiting field‘. Qualitative Research 15 (2): 166–182.
Neuman, Lawrence W. 2007. Basics of social research. Qualitative and quantitative approaches . 2nd ed. Boston: Pearson Education.
Ragin, Charles C. 1994. Constructing social research. The unity and diversity of method . Thousand Oaks: Pine Forge Press.
Ragin, Charles C. 2004. Introduction to session 1: Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , ed. Charles C. Ragin, Joane Nagel, and Patricia White, 22. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf
Rawls, Anne. 2018. The wartime narrative in US sociology, 1940–7: Stigmatizing qualitative sociology in the name of ‘science.’ European Journal of Social Theory (Online first).
Schütz, Alfred. 1962. Collected papers I: The problem of social reality . The Hague: Nijhoff.
Seiffert, Helmut. 1992. Einführung in die Hermeneutik . Tübingen: Franke.
Silverman, David. 2005. Doing qualitative research. A practical handbook . 2nd ed. London: SAGE Publications.
Silverman, David. 2009. A very short, fairly interesting and reasonably cheap book about qualitative research . London: SAGE Publications.
Silverman, David. 2013. What counts as qualitative research? Some cautionary comments. Qualitative Sociology Review 9 (2): 48–55.
Small, Mario L. 2008. Lost in translation: How not to make qualitative research more scientific. In Workshop on interdisciplinary standards for systematic qualitative research , ed. Michèle Lamont and Patricia White, 165–171. Washington, DC: National Science Foundation.
Small, Mario L. 2009. “How many cases do I need?” On science and the logic of case selection in field-based research. Ethnography 10 (1): 5–38.
Snow, David A., and Leon Anderson. 1993. Down on their luck: A study of homeless street people . Berkeley: University of California Press.
Snow, David A., and Calvin Morrill. 1995. New ethnographies: Review symposium: A revolutionary handbook or a handbook for revolution? Journal of Contemporary Ethnography 24 (3): 341–349.
Strauss, Anselm L. 2003. Qualitative analysis for social scientists . 14th ed. Cambridge: Cambridge University Press.
Strauss, Anselm L., and Juliette M. Corbin. 1998. Basics of qualitative research. Techniques and procedures for developing grounded theory . 2nd ed. Thousand Oaks: Sage Publications.
Swedberg, Richard. 1990. The new ‘battle of methods’. Challenge 33 (1): 33–38.
Swedberg, Richard. 2017. Theorizing in sociological research: A new perspective, a new departure? Annual Review of Sociology 43: 189–206.
Timmermans, Stefan, and Iddo Tavory. 2012. Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory 30 (3): 167–186.
Trier-Bieniek, Adrienne. 2012. Framing the telephone interview as a participant-centred tool for qualitative research. A methodological discussion. Qualitative Research 12 (6): 630–644.
Valsiner, Jaan. 2000. Data as representations. Contextualizing qualitative and quantitative research strategies. Social Science Information 39 (1): 99–113.
Weber, Max. [1904] 1949. ‘Objectivity’ in social science and social policy. In The methodology of the social sciences , ed. Edward A. Shils and Henry A. Finch, 49–112. New York: The Free Press.
Financial Support for this research is given by the European Research Council, CEV (263699). The authors are grateful to Susann Krieglsteiner for assistance in collecting the data. The paper has benefitted from the many useful comments by the three reviewers and the editor, comments by members of the Uppsala Laboratory of Economic Sociology, as well as Jukka Gronow, Sebastian Kohl, Marcin Serafin, Richard Swedberg, Anders Vassenden and Turid Rødne.
Authors and affiliations.
Department of Sociology, Uppsala University, Uppsala, Sweden
Patrik Aspers
Seminar for Sociology, Universität St. Gallen, St. Gallen, Switzerland
Department of Media and Social Sciences, University of Stavanger, Stavanger, Norway
Correspondence to Patrik Aspers .
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Aspers, P., and U. Corte. 2019. What is qualitative in qualitative research. Qualitative Sociology 42: 139–160. https://doi.org/10.1007/s11133-019-9413-7 . Published 27 February 2019.
Quantitative Research
Purpose: Supports a hypothesis through a review of the literature.
Aim: Provides a statistical model of what the literature presents.
Previous knowledge: The researcher already knows what has been discovered.
Phase in process: Generally occurs later in the research process.
Research design: Designed before research begins.
Data-gathering: Data are gathered using tools such as surveys or computer programs.
Form of data: Data are numerical.
Objectivity of research: More objective; the researcher measures and analyzes data.
Keywords: quantitative, survey, literature review.
Qualitative Research
Purpose: Used for exploration; generates a hypothesis.
Aim: Provides an in-depth description of the research methods to be used.
Previous knowledge: The researcher has a general idea of what will be discovered.
Phase in process: Usually occurs early in the research process.
Research design: Design is developed during research.
Data-gathering: The researcher gathers data from interviews, etc.
Form of data: Data take the form of interviews, videos, artifacts.
Objectivity of research: More subjective; the researcher interprets events.
Keywords: qualitative, methods, results, interviews.
Empirical Studies
Keywords: empirical, experiment, methodology, observation, outcomes, sample size, statistical analysis, study.
Mixed Methods Research
Mixed Methods Research uses strategies from both qualitative and quantitative research processes to provide a greater understanding of the subject matter.
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
Scarbrough H, D’Andreta D, Evans S, et al. Networked innovation in the health sector: comparative qualitative study of the role of Collaborations for Leadership in Applied Health Research and Care in translating research into practice. Southampton (UK): NIHR Journals Library; 2014 May. (Health Services and Delivery Research, No. 2.13.)
Chapter 4. Empirical analysis and findings: qualitative investigation.
As noted previously, to be able to address the different dimensions of CLAHRC activity relevant to our study, we adopted a ‘multilevel’ approach in our fieldwork and analysis 97 that sought to integrate evidence from both our CLAHRC-level and our project-level data collection to provide a coherent, narratively structured account of the CLAHRCs’ development.
The overall approach that we adopted to data analysis incorporated a hybrid process combining both inductive and deductive thematic analysis of interview data. 98 At a basic level, thematic analysis of interview data is simply where coding ‘is used to break up and segment the data into simpler, general categories and expand and tease out the data in order to formulate new questions and levels of interpretation’ (p. 30). 99 It was important to recognise that, in building our study on a theoretical concern with networked innovation, 31 , 79 we had already made assumptions and developed ideas about the focus for the analysis. However, it was also important to allow our analysis to be data driven to allow new ideas to emerge during the process of coding. Therefore, we needed to develop an approach that allowed us to make use of our preconceived ideas and theoretical underpinning, while still maintaining the inductive flexibility of an approach that supports the generation and development of new ideas.
As interpretive research still needs to demonstrate credibility and trustworthiness by being founded on systematic evidence of the research process, our data analysis was supported by a structured method. This combined ‘data-driven’ steps, in which we inductively developed codes based on interesting ideas and themes that emerged from our study of transcripts, with phases of review in which we reflected on how these ideas fitted with the overall objectives of our study. Therefore, although our analysis was based on a linear ‘step-by-step’ procedure, it still facilitated an iterative and reflexive process. 98 In following a structured approach, we were able to continually reframe our analysis, drawing both on ideas emerging from inductive study and on our theoretical grounding as an integral part of the generation of codes.
We used NVivo to support our data analysis. While NVivo can be used to support a more objective and logical categorisation of codes, we should recognise that this is only an aid to the organisation of the material and is not in itself an interpretive device.
To structure the individual case narratives outlined below, we have adopted three major headings which reflect our conceptual framework and support critical concerns around the development of the CLAHRCs. These headings are as follows: governance, management and organisation; collaboration and networks; and KT. To begin our account, however, we focus on the way in which the goals of the CLAHRC initiative were appropriated by individual CLAHRCs in terms of the vision which they defined for themselves.
As the three case sites of translational initiatives within the UK were all created through the same UK NIHR funding programme, they were all designed to meet the same aim and generic mission. However, there was significant flexibility in the way in which this mission was interpreted by the leaders of different CLAHRCs. We term these interpretive acts of leadership as different ‘visions’ of collaborative translational research. Within our study we have explored how the vision of each CLAHRC has emerged from and interacted with the structuring of the initiative, particularly in terms of management and governance. By studying the CLAHRCs’ development over time, then, the qualitative fieldwork has been able to explore how these distinctive features of each CLAHRC influence their approach to KT.
The Bluetown CLAHRC is based on a partnership with organisations from a large urban area. It is led by a university hospital with a strong established reputation for conducting research. The health-care partners represent a range of organisation types, including acute hospital, primary care and mental health trusts, comprising both organisations with extensive research experience and those that have previously been less involved. The CLAHRC was originally established around a simple ‘hub-and-spoke’ model of a small central management team and nine clinical project teams. The core of the CLAHRC, including its management team and several of the clinical project teams, is centred on a traditional medical school public health department with high-profile academic expertise in clinical sciences research and historic links with the lead NHS site. Each project team is largely composed of members working from the same geographical base: a number of teams are based at the university, while the other clinical project teams are located within one of the health-care partner organisations. Specialist support services were included as a CLAHRC-wide resource, providing each clinical project team with access to people who could contribute expertise in medical sociology, health economics, statistics and methods such as systematic reviewing.
The director was integral to developing the vision for this CLAHRC and to embedding it within the different clinical projects. Throughout the development of this CLAHRC, his vision has strongly influenced the form that the work programmes have taken as they have progressed. In particular, a clinical scientific tradition was incorporated into the design of study protocols at the start of the programme, with particular attention placed on scientific methodological rigour, especially the production of outputs suitable for top-quality, peer-reviewed academic publication. Accordingly, all of the clinical projects were designed as prospective clinical–academic evaluation studies and, therefore, constituted a set of work programmes linked by a common scientific approach.
It’s to prospectively evaluate service delivery as it happens. And where possible to interact, you know, with managers and how the service delivery takes place. So that the product will be examples where this has happened prospectively and good examples that have been published in good places. In the process of doing that to export the idea or develop the idea in the local area. BLUETOWN001
This illustrates a cornerstone of the Bluetown CLAHRC model: the vision emphasises that the quality of the evidence being produced is crucial to its ultimate impact. As a result, the vision of this CLAHRC was founded on the view that any programme of work should first be grounded in a rigorous scientific approach, as only high-quality evidence should be taken up within health-care policy and practice.
The CLAHRC was originally formed around a small central leadership team, with the vision of the director strongly influencing the focus and direction of the CLAHRC model. As the director had a historically strong reputation in the local area, this helped to legitimise the CLAHRC as something that was perceived as of value by those in senior positions within the partner organisations.
The vision of the core management team has strongly influenced the approach that each programme of work uses. Each programme of work is expected to use a rigorous scientific design and methodology in order to produce robust evidence suitable for publication in high-quality academic journals. The model therefore builds on the approach to scientific work historically conducted by the lead organisations, with the CLAHRC emphasising that, through these work programmes, the teams should foster collaborative relationships with relevant service areas. This vision is reinforced through the role of the leaders within each of the project teams, who provide scientific and methodological direction to the programme of work. However, although central management makes clear the overarching objectives expected of each team, there is no CLAHRC-wide strategy for how each team should be structured or how collaborative relationships should be formed and developed. The lead of each project team has therefore been given extensive flexibility in how their individual programme is organised. As a consequence, each team tended to foster relationships with the particular groups and communities relevant to the local services on which they were gathering evaluative evidence. The influence of the CLAHRC was important here in formalising and legitimising this collaboration between clinical academics and targeted groups in the NHS.
Without CLAHRC, we would have some of those connections but I think the momentum, thrust and energy that’s going into current programme really wouldn’t be there . . . associating with individuals from other fields, groups that we wouldn’t normally be part of. This has really allowed us to reflect more objectively on work, and the direction we’re going. BLUETOWN022
Structural features of the CLAHRC were used to communicate the overarching objectives of central management to the clinical project teams. This involved regular interactions between the centre and the projects, management representation at project team meetings, and programmed meetings for project leaders and project managers. The positions that project members held in other environments (i.e. outside their own team environment) were typically construed as ‘honorary’ – that is, not part of the main role that project members perform within their clinical team or central management group (Box 1).
Box 1 Case example: organisational processes by which the CLAHRC vision is distributed. Observation of the interactions within a project meeting illustrated how the vision of the overall initiative is emphasised through the presence of a member of the core management.
Although the majority of clinical project team members share similar types of disciplinary expertise, with most having clinical–academic experience, the structural organisation of the initiative provides access to other types of expertise. The extent of the CLAHRC-wide resources means that individuals with expertise in areas such as health economics, statistics, systematic reviewing, sociology and communication are readily available for project teams to access. In the sociology theme, for example, each project team allocated a small proportion of its own resources to support the employment of a select number of people with this type of expertise. Although these team members come from working cultures different from those of the majority of the CLAHRC members, it is clear to the clinical project teams that the director values and respects the expertise that these individuals can provide. This helps to legitimise their contribution within the teams, even in sociological territory that such teams would not normally view as part of their remit. At the start of the CLAHRC, the cross-cutting activity for sociology was an undefined programme of work, but this provided an opportunity for these members to liaise with the clinical project teams to identify how they could support their programmes of work. As relationships were built up, they quickly identified certain project teams where they could add value to the work already planned by the team lead.
We are officially termed as a cross-cutting theme but we’re also embedded in the individual research, as in our jobs are paid out of individual projects. BLUETOWN003
This cross-cutting work has become an embedded part of a proportion of the clinical project teams. Although they provide a different type of expertise to the clinical projects, the members of this cross-cutting theme enact their role in such a way as to fit in with the overall work programme. While, overall, the cross-cutting theme constitutes only a small part of the CLAHRC, resources were deliberately allocated so that the members of this group would be highly skilled and experienced, and therefore able to achieve this. They have also been able to contribute guidance to more junior members of the project teams who are involved with areas that overlap with their area of expertise, such as qualitative components. Observation from in-depth studies of the four clinical themes indicates that this approach has facilitated the ‘embedding’ of the cross-cutting theme members within the project teams.
The qualitative interviews demonstrated that, from the early stages of the CLAHRC’s development, a clear objective was understood: to work with stakeholder groups, for example by collaborating with NHS practitioners and managers within the clinical project work. There was also acknowledgement that this required some compromise with established academic work practices, with some effort being required to produce work that is suitable for practitioners.
Getting researchers to understand practitioners is a covert aim of CLAHRC. So that you don’t go away for five years and then tell them what they should have done in the first place because practitioners don’t want to hear that. CMBLUETOWN007
The activity of each of the clinical project teams creates links to defined health-care organisations involved in the CLAHRC partnership. The interaction of project team members (e.g. the project lead) with these organisations is integral to fostering relationships with the official partner organisations of the CLAHRC. As many senior CLAHRC members had pre-existing collaborative relationships with NHS trusts, they were able to enact ‘senior’ boundary-spanning roles. Many of the theme leads were in clinical–academic dual-activity roles and held honorary contract positions with NHS organisations. However, their leadership typically reflected the wider ‘epistemic’ community of a university setting, emphasising academic values rather than the practical concerns seen in the health service environment. For university-based teams, the collaborative interaction was framed by the values of the academic community, with high-level academic publication considered important for demonstrating value to these groups. This was seen as consistent with the vision of the CLAHRC, in that the collaboration is seen as creating a culture within health-care settings that is more receptive to high-quality scientific evidence.
It’s nonsense to say . . . the PCT health, local authority or the voluntary sector don’t consider evidence. They do. They just consider evidence perhaps in a different way than you or I perhaps might consider evidence . . . The CLAHRC process is about the better, the optimal decision making that we can bring, the greater rigour, to set different parameters for making the decision. That is the value. BLUETOWN014
Thus, the CLAHRC emphasises clinical–academic evaluation of service delivery in terms of producing high-quality evidence. As a result, the CLAHRC’s applied health research is conducted on terms set by the clinical–academic community, to which other groups must adapt. At the same time, the CLAHRC’s strong scientific reputation within certain fields helps it to build links with communities that value this type of evidence, such as national policy groups and certain local clinical groups; certain groups of health-care managers and policy-makers in particular were perceived as valuing exactly this type of evidence.
Publishing all this information in top quality journals as absolutely, absolutely key. Not just to the academic credibility of CLAHRC but to the managerial credibility of CLAHRC . . . dealing with managers, when they want to know how good somebody is the first thing they do is see whether other academics respect that person. And one method of that is you publish in the top journals. They don’t want to deal with somebody who’s not up there at the cutting edge. They want to be with the front people. BLUETOWN001
The work taking place within these themes tends to be dominated by the traditions of the theme lead and the culture of the location where the theme is physically based. Teams not based in health-care settings do use ‘in-built boundary spanners’, pre-existing contacts and mechanisms such as advisory boards to draw on insights about the local health-care context (Box 2). This approach enables the teams to focus on their own academic areas of expertise, but perhaps supports depth rather than breadth of approach.
Box 2 Case example: clinical boundary spanner within a university-based team.
Bluetown CLAHRC also includes a number of themes based directly in health-care organisations. As the teams working on these themes are closer to the issues of practice, their work has the potential to impact more easily on health-care practice. In ‘speaking the same language’ as the practice and community groups they intend to influence, they are better positioned to integrate the perspectives of these groups. This should facilitate the implementation of findings and the local impact that these teams will have. However, as these teams are further away from the core CLAHRC, greater attention has been required to ensure that these themes feel part of the Bluetown CLAHRC community.
Bluetown CLAHRC was not built on an explicit CLAHRC-wide strategy for KT. However, as each project team was given extensive flexibility in how it organised its clinical study, each individual theme has developed its own strategy for dissemination of outputs, with the flexibility to interpret the aims of the CLAHRC to fit its own clinical context. As a result, the themes have developed their own tailored approaches to relating their findings to local health-care practice and policy. These approaches are not, however, linked to an overall CLAHRC plan or strategy. Additionally, they do not appear to be shared between the themes, and their context-specific design may limit wider application.
Overall, the Bluetown CLAHRC is formed from members who represent both academic disciplines and health-care professional groups. This means that the cumulative work of Bluetown CLAHRC has the potential to incorporate many different perspectives innovatively and to draw from different types of expertise. Currently, however, connections tend to form at the level of individual project teams, which limits the potential for a co-ordinated Bluetown CLAHRC effect across the region. The lack of a co-ordinated strategy to link with the different types and levels of the Bluetown partners may have limited the capacity-building of the CLAHRC as a whole, notwithstanding the benefit of the strong relationships fostered by each individual team.
As the CLAHRC developed, a new theme of knowledge management was introduced by the central CLAHRC management team. This was designed as a high-level cross-cutting theme, with activity separate from the clinical project work.
But having said all that I am very keen to in addition add a specific knowledge management function to the CLAHRC but I am very clear this is not based on discoveries made in the CLAHRC, although I would not exclude those, but it’s not based on that, it’s based on the knowledge out there in the world, what can we make of that. So knowledge exchange if you like. BLUETOWN001
However, it drew on the infrastructure that the teams had used and fostered during the early stages of the CLAHRC. In particular, it was felt that, as the clinical project teams had demonstrated activity within the NHS organisations at an early stage, NHS managers and executives valued the work of the CLAHRC, and that this facilitated buy-in for this new venture.
Because of the financial model of the CLAHRC, no specific CLAHRC funding was allocated to this area; instead, NIHR Flexibility and Sustainability Funding was used to support the activity. As part of this, the core management team was expanded to support the clinical teams and the CLAHRC as a whole with external engagement, communication and dissemination activities. In addition, a knowledge-exchange forum was developed, targeted at high-level managers within NHS and local authority organisations. The meetings acted as a place where NHS organisations could discuss issues that they considered important areas for further work, and the CLAHRC team developed these ideas into discrete programmes of work, providing the resource for tailored evidence to be produced.
In this sense, as led by the director, this CLAHRC has evolved to incorporate new objectives as it has developed over time. The original vision of the CLAHRC model has, however, been central to supporting the development of these new components. In particular, the knowledge-exchange forum has drawn on the connections fostered by the clinical teams and on the reputation established within partner organisations and beyond. This has allowed the forum to build a membership of executive-level personnel who are in decision-making and management positions within their own organisations. Overall, the conceptual vision of the CLAHRC has helped to expand the CLAHRC programme of work beyond the initial set of clinical projects (Box 3).
Box 3 Case example: evolving collaborative relationships to provide additional services.
The Bluetown CLAHRC’s vision and structure enables it to collaborate readily with those communities that are more aligned with the working practices of its members. The partner organisations and the members involved in the initiative, therefore, tend to support the principle that only rigorous evidence should be used to inform service developments. However, within this constraint, the Bluetown initiative does support new operational approaches, and in particular emphasises that project teams need to develop collaborative relationships with external communities, especially management and decision-makers.
With no overarching CLAHRC-wide strategy for KT activity, each theme has developed approaches tailored to its own local communities and clinical context to support the mobilisation of the knowledge produced through its programme of work. However, as the CLAHRC has progressed over time, central management has augmented the original structure with new elements that support the overall Bluetown CLAHRC work. For example, it identified that CLAHRC-wide support was required for the dissemination activities of the individual project teams. As a result, new central support was established to facilitate engagement of the project teams with the end-users of their research, and to support the translation of research findings through a CLAHRC-wide communication strategy within the region.
The Greentown CLAHRC is led by a mental health trust, and the core of the initiative builds on established academic-research links between this health-care trust, a university hospital acute trust, and a university institution. However, an aim of this CLAHRC is to spread beyond the organisations that have traditionally been involved with research in order to build research capacity in localities further away from this core. As such, partner organisations are spread over a large geographical area, comprising a mix of urban and rural areas. The overall aim of the CLAHRC is to use an organisational learning model to facilitate a change in how research is conducted and utilised within the region. One key component of the model of this CLAHRC is about using the research experience of members from the ‘core’ organisations as a foundation from which this expertise can be dispersed and built on in other areas that have not traditionally worked in this way.
One of the things that CLAHRC was trying to do is move outside traditional university clinical centres, that would be located in this region . . . So it was not just about getting research into practice per se but broadening research into practice . . . In [another area] it’s red neck territory . . . best practice isn’t as evident over there as it is around the university, the trusts closely located to the university. GREENTOWN001
The CLAHRC model is built on an organisational learning approach which aims to close the gap between academia and practice. Within this approach, the vision of this CLAHRC is to develop the initiative in such a way that it can bring about a ‘step-change’ in how research is delivered and services are designed, by facilitating a change in how the different communities involved conceptualise and undertake these types of activities. From the outset, the Greentown CLAHRC model created a number of structural features intended to embed this vision in its operational organisation. Key features include the clustering of work programmes within a small number of defined clinical themes, which supports the building of communities around these clinical areas. There was also resourcing of dedicated ‘knowledge broker’ roles, through which a selected group of practitioners would support KT from project teams to the wider NHS. Cross-cutting themes were formed with the aim of providing clinical project team members with specialist expertise in areas such as KT, synthesis of evidence, external engagement and communication, and statistical support. A CLAHRC-wide approach guided a similar constitution for all project teams, which included explicit mechanisms to support boundary spanning between different communities by building into the structure of the CLAHRC links between academics and practitioners and managers in health services. This illustrates a key attribute of the Greentown vision, in which the model being developed is designed to close the gap between research and practice by changing the working culture of the various stakeholder groups involved. However, as the CLAHRC has progressed, issues with the original approach were recognised, and amendments to this vision have been made.
Greentown CLAHRC has had several changes in leadership since the decision to apply for a CLAHRC was made, with each one bringing different types of professional expertise to the role, ranging from clinical–academic, through business school academic, to health-services executive management. These changes in leadership are reflected in the ways in which the vision and approach of the CLAHRC has evolved from its conception and funding application through to its latter stages. Each new director has contributed a particular characteristic to the CLAHRC through their leadership. As the first director of the CLAHRC observed, the novelty of the role itself created a need for it to be ‘interpreted’ and ‘enacted’:
Even if they’d had a visible leadership role before the structured things were set up the brokering, engagement, it can be quite intangible. These sort of things have come to the fore . . . I am doing a leadership role that’s much more strategic and autonomous and I think there will be a degree of variability and interpretation of what the role is and how it’s enacted. My first port of call was to get practice on board. Those chief execs are quite frightening people sometimes. So get practice on board. So I went round every trust board that was a partner in CLAHRC and presented the CLAHRC. GREENTOWN001
As this suggests, the agency demonstrated by the leadership of the CLAHRC has been important in shaping its approach to KT. The original head of the CLAHRC bid was a clinical academic with a strong reputation in mental health research. A change in leadership was made to a specialist in organisational studies, with the aim of emphasising that the CLAHRC model was about a different way of working compared with traditional, standalone clinical–academic projects.
Overall, however, Greentown CLAHRC is structured around a set of clinical research programmes of work designed by clinical academics (i.e. typically a professor from one of the medical school clinical subdisciplines). These focus on applied health research issues in the delivery of services for chronic and mental health conditions. Although the basic form of the clinical themes persisted over time, they were eventually complemented by the creation of explicit structures for cross-cutting work. A considerable proportion of the finance was reallocated away from clinical research into implementation science to emphasise the integral role of this approach in the proposal. This process of evolving the planned model for the CLAHRC has generally been viewed as a positive process that enabled the CLAHRC strategy to be well defined by the time it started, supporting earlier progress.
And I mean, that letter and the feedback, you know, it was fairly explicit and it provided a platform for the reworking in the bid. You know, along implementation lines and gave me legitimacy to lead it . . . And I think that referral process was really useful for us. I think we were able to hit the ground running to a much greater extent than the other CLAHRCs. GREENTOWN001
Initiative-wide structures connected each clinical project team to members who provided specialist academic expertise, such as KT or statistical support. These fitted in with CLAHRC-wide activities, and were incorporated into the Greentown CLAHRC organisational model as a way of facilitating the sharing of different types of expertise with members from varied disciplinary and professional backgrounds.
The incorporation of this greater emphasis on implementation work came after the majority of the core individuals had already ‘signed up’ to be part of the bid, and at a point when much of the focus of the clinical research projects had already been decided. In effect, the new KT strategy was grafted onto a CLAHRC structure that had evolved around a more traditional clinical–academic research model. This new emphasis on an explicit implementation strategy therefore had to be adopted by CLAHRC members with established interpretations of their roles, and at a time when much of the content of the CLAHRC’s work programme had already been specified.
The majority of CLAHRC members are employed by the university, and many are co-located. However, because the CLAHRC spans multiple university departments, bringing together academics from a clinical–academic background in community health sciences with social scientists from business and sociology departments, it has created significant challenges for members with different types of expertise trying to work together.
For the clinical scientists this is a completely new way for them to do any work . . . They’ve never thought about the wider implementation aspects and actually how do you physically get research into practice. GREENTOWN002
As Greentown CLAHRC is built on a cross-disciplinary academic collaboration, it has faced the additional challenges of working across the academic norms and practices of several different disciplines, including clinical sciences, nursing and allied health, and management and social sciences. In practice, it has sometimes been difficult to align these different spheres, especially in communicating the conceptual remit of the cross-cutting themes. A further change of leadership was made in the middle phase of the CLAHRC’s time frame, bringing in a director with a background in NHS executive management. This move helped to bring about a shift towards greater engagement with different types and levels of NHS organisations, in order to make the CLAHRC more NHS facing. At the same time, a key part of the restructuring following the mid-term review of Greentown was designed to combat uncertainty about the role of different programme components, particularly the cross-cutting themes. In particular, as the CLAHRC’s vision emphasised new ways of working, it had been difficult for individuals to comprehend what their role should involve, or to fit it into traditional career trajectories.
As part of this restructuring, roles were divided into academic and non-academic parts, with the aim of allowing each member to focus on their own form of expertise and of providing clear access to other types of skills. As a result, the CLAHRC was able to become more NHS facing, with a clearly defined team to assist with the delivery of research to practitioner and policy communities.
A typical configuration for the arrangement of positions within project teams had been designed by central management and was used across the initiative. Each core team was established around the team leader, and generally involved other colleagues from within the same academic subdivision, together with other team members trained in similar academic areas who were employed to fill designated roles in research and in management of the programme of work. Additionally, management created a team structure that aimed to create connections between the core members of each project team and other team members who could contribute different forms of expertise. These included ‘knowledge broker’ roles aimed at fostering externally directed ties within health-care or community organisations, which acted as an explicit mechanism to support KT between different communities. An account provided by one team member who had taken on one such externally focused ‘knowledge brokering’ position describes some of her responsibilities within the team, and highlights how working across boundaries was achieved. She describes how an important part of this aspiration is for the team members in the designated brokering positions to spend time with the external groups who are relevant stakeholders for the intervention being studied.
I did a lot of meetings etc., doing a presentation, explaining about the project. You need to tell them what’s happening with the project and how it’s going to be implemented locally, what the practice can have on them. And they’re a key part of making it work. Whether you . . . take their ideas up or we’re bringing ideas from the [initiative] to them, I just think it works both ways. So that won’t work or this won’t work and what about these participants, how is that going to work? Is there any money for that and then have we got stuff for that. Stuff that people in the university don’t understand.
Members in defined boundary-spanning roles within academic-centred project teams describe how they use team meetings to, for example, bring insights about public health issues to the academic core team members. We observed in one discussion how project meetings were used as a place where the implications of the information provided by external groups were incorporated into a discussion on developing a sustainable implementation of the intervention that the project team were researching. Senior academic members of the team integrated this information by aligning it with the scientific and methodological approach of the project work, for example by considering how it related to the study design, and the implications that changes to the protocol would have for demonstrating the academic rigour required for high-quality journal publications.
I think a CLAHRC is about doing things differently to what’s been done before. I think the only way to do that is to bring in people who have different backgrounds and different experiences, who understand the world in a different way. Because I think one of the problems that the CLAHRCs are set up to address is that academic research is done for academics and clinical research is done for clinicians and the twain don’t meet. And I think this CLAHRC has been sensible in bringing in people from different backgrounds. GREENTOWN002
The structure of Greentown is centred on themes defined by related clinical areas. This supports collaboration between CLAHRC members and relevant stakeholders in the local area. The commonality of many activities covering mental health, and of related projects within the stroke rehabilitation theme, has provided an opportunity for CLAHRC members to develop communities around these clinical areas. Building on the formal roles linking project teams with external groups, efforts were made by the project team leadership to develop relationships with communities relevant to their research topic. As the senior leaders already had a reputation within their field, this provided a platform for connecting with stakeholders from within the local region, and for presenting a body of ‘CLAHRC-type’ work to national clinical groups.
As part of the CLAHRC-wide strategy, the building of ‘communities of practice’ is actively promoted as an approach that can ultimately support the implementation of evidence into local policy and practice. Four groups are perceived as particularly important: academics, clinical practitioners, health services decision-makers (managers and commissioners) and service user representatives. In particular, creating connections and building on the links of existing networks of practitioners is viewed as a strategy to support this aim. In addition, developing relationships with opinion leaders within the local area has been described as an important part of the implementation process. The CLAHRC has developed an inclusive register of associates, through which anyone interested in the CLAHRC work can sign up to receive updates and invitations to events. Many of the activities and events are organised around clinical topics of mutual interest, creating an emergent community through which people within the local area can connect to particular groups within the CLAHRC.
Greentown adopted an explicitly translational approach for informing external communities, such as commissioners, decision-makers and clinical practitioners, about the results of the clinical teams’ programme of work. This was based on a standard template and style of writing which emphasises the implications for policy and practice. These ‘bite-sized’ outputs operate as a tool to support KT of the work of the CLAHRC to the local practitioner and commissioning community. They require the academic teams (with assistance from specialist support members of the CLAHRC, and those with insight into practice) to tailor the way in which their work is presented for different types of audiences (Box 4).
Case example: using a workshop event to discuss interim findings and facilitate KT. A clinical project team organised a workshop to share interim findings from their programme of work.
In addition to organisational-level mechanisms, roles performed by individuals acted as bridges to support KT between different communities. Members in defined boundary-spanning roles within academic-centred project teams describe how they use team meetings to bring the academic core team members insights into issues around using the public health intervention in a community setting:
I add an ability to look at things from an outsider’s perspective and say, ‘why are you doing it that way?’ And I think there’s as much challenge to academia as there is to the NHS. I think this CLAHRC along with the others were set up with ‘we’re the academics, we’ve got the knowledge, we’re going to tell you what you need to know’. And actually one of the biggest shifts that’s happening here is that academics have to get a sort of understanding of a new process around the democratisation of research. GREENTOWN009
Ultimately, these individuals were able to work with the academic team by enacting roles, such as ‘translating’ how information should be presented for different audiences. They also acted as ‘interpreters’ when providing insights into the practical implications of particular implementations of interventions, and helped to negotiate or legitimise the CLAHRC’s work in non-academic settings:
One of the goals was to try and encourage evidence based practice, and it’s using people like [the clinical practitioner who is a team member] to try and overcome some of the barriers. He’s very useful, as being a clinician he would command quite a lot of respect. GREENTOWN018
However, this KT occurred very much at the periphery of the project team’s work, with little impact on academic work practices. The creation of explicit knowledge-broker roles provided a formalised link for the team to work with those with academic expertise. We observed how the project team meetings were used by its members as a focused time and space in which the insight and knowledge of those from beyond the community of the core team, such as academic advisors and those representing the vision of the central management of the initiative, could be considered.
As noted, this CLAHRC has explicitly aimed at developing a model in which its members work in new and different ways. However, the qualitative data suggest that, in practice, there have been issues with the effectiveness of the CLAHRC’s original model. Some members (including leads) have failed to fully engage with the work of the CLAHRC (a key objective of the CLAHRC model), as they have struggled to understand the purpose and remit of certain elements of this CLAHRC’s structure (e.g. what the purpose of the implementation theme is; what the role of knowledge brokers is), and how they can integrate the CLAHRC model into their clinical science programmes of work.
The changes that took place in the mid-term of the CLAHRC’s life cycle were designed to take these challenges into account. Overall, the vision of the CLAHRC was reaffirmed, including the organisational learning model and its structural features and roles. However, it was recognised that the original CLAHRC model had led to some confusion about the contribution of particular roles and themes. As a result, the restructuring involved redefining roles to emphasise the specialisms of particular members, while providing greater resources for individuals with the expertise to make the CLAHRC more ‘NHS facing’.
Browntown CLAHRC is based on a partnership between the universities and health-care organisations within a region that comprises several metropolitan boroughs. The core activity of the initiative is centred on organisations based within the same city that have historically been engaged with research activity. Many of the CLAHRC members are employed either by the universities or by the acute health-care organisation within this city, but the CLAHRC was also designed to build extensive numbers of new collaborative relationships between different communities based across these organisations. In particular, the initiative brings together academics from across different departments and universities who have not previously worked together. In addition, a range of health-care organisations are involved as partners, including acute hospital trusts, primary care trusts and mental health services.
This vision of the CLAHRC is described as bringing about a step-change in the way research evidence is used, so as to influence the design of health-care services within the region and to improve their quality and effectiveness.
I initially thought the initiative would have been that the content and the research questions that were there, although soon I began to realise that although they’re important, it was about a much bigger transformation in the way that things are done. To get a paradigm shift really. BROWNTOWN014
The metropolitan area has a high level of health inequalities, and the CLAHRC programme of work is structured around groupings of clinical research projects for various long-term conditions which are undertaken using an applied health approach. In addition, there are a number of separate activities that use KT approaches to undertake later-stage implementation programmes of work designed to directly interact with and impact current health-care policy and services. There is a strong emphasis on capacity building within the various partner organisations across the region in relation to developing expertise on how evidence is handled. Overall, both the clinical research projects and the KT programmes of work are based on a model of integrating members from both research and practice by embedding a focus on practice-based issues.
It’s about addressing the second gap in translation. So it’s about getting research very close to practice or as part of practice. So it’s about undertaking applied research or making sure that research is implemented into practice . . . I don’t think it’s on practice, it’s research with practice . . . really integrating research as practice almost to improve services as you go along through reflection and action. The implementation arm is exactly the same, it’s really trying to get research so close to practice that you can’t really see a difference. So I think it’s about the full bridge really. So it’s about not saying research is one place and practice is another, it’s about trying to make things a lot more connected and integrated. BROWNTOWN021
The CLAHRC’s leadership is largely from an allied health and nursing background, with the CLAHRC membership including a high percentage of academic researchers and health-care practitioners with an allied health background. The vision of this CLAHRC is consequently informed by the practice-based approaches traditionally used within allied health professions, including an operating model centred on applied activities that aim to seamlessly integrate research and practice. However, while the approach of this CLAHRC fits well with the working practices of the allied health professions (both research and practice), it faces greater challenges in involving other groups, such as academics from other disciplines.
Browntown CLAHRC is built around a novel collaboration between the two main academic centres within the region: one which focuses on traditional forms of clinical science work, and another with expertise in allied health. This allied health ethos is reflected in the types of partnerships that are being built with the NHS, with strong links with senior and middle-level management in nursing and allied health within the partner trusts. These have been influential in informing the Browntown programmes of work.
This is not centric about the university. This is about an NHS collaboration that is distributed across a health environment, a health economy . . . It is about long-term conditions, and about knowledge mobilisation. BROWNTOWN002
In practice, this CLAHRC model faced challenges in creating a coherent overall programme of work that integrated members from a wide range of professional and disciplinary backgrounds, including different types of academics, and different groups of health-care practitioners. Nevertheless, especially at the core management level, the overall emphasis of the CLAHRC is to develop integrated work programmes involving both academic and NHS groups, and for these connections to be across different levels (i.e. to create links at high- and medium-level health-care management, and also NHS practice-level participation), and also to build the work of the CLAHRC across the whole Browntown region. In particular, the capacity-building objective of the CLAHRC is about building up the skills and expertise in all partner organisations, and ultimately bringing about an evidence-based applied health research culture across the Browntown region.
Overall, Browntown CLAHRC is formed from members of a wide mixture of academic disciplines and different groups of health-care professionals. This heterogeneous composition means that the work of Browntown CLAHRC has the potential to innovatively incorporate many different perspectives and draw from different types of expertise. This provides this CLAHRC with a strong foundation to develop a novel approach for an inclusive and collaborative model of applied research – one better able to span the boundaries between the ‘producers’ and ‘users’ of research. The clinical project work is structured into a small number of clusters which each constitute a programme of work within a common clinical topic. This deployment of members into relatively large subcommunities helps to foster collaboration between members who come from different departments and organisations. On being involved in a project that was one of a set within a larger programme of work, one respondent comments on the benefits of belonging to the translational initiative, as opposed to doing standalone project work:
It was a group of like-minded people where we could work together, because often we would all be doing independent things and not working together. So it was an opportunity for people to work together in some sort of funded activity which would have more power and influence than an individual academic working on their own. BROWNTOWN010
It was felt that an interlinked set of projects could potentially have a greater impact than standalone work. The overall programmes of work were often designed to inform different levels or aspects of health services for the same clinical areas, and this more ‘holistic’ package of activity was perceived to carry greater weight. In belonging to a larger team, there could be a co-ordinated presentation of activities to policy groups, with team members able to draw on established connections with other groups to legitimise their own work through the high reputation of colleagues. However, as members remain part of their ‘home’ organisation, this presents some operational challenges: limited day-to-day interaction can mean that some members cannot easily access tacit information, or do not have day-to-day reinforcement of the CLAHRC ‘approach’.
Many of the team members hold several roles across different parts of the initiative, such as being members of clinical project teams while holding positions within the core management of the initiative. The central management team comprises a large group of CLAHRC members, meaning that internally the CLAHRC adheres to a distributed leadership style. This helps to diffuse the values and aspirations of the central management team within the project work, and reinforces the vision of the CLAHRC across the whole work programme. Many senior CLAHRC members, both within the core management and among the theme leads, are implicit boundary spanners – sometimes by dint of having ‘dual contracts’ with both NHS and university bodies – who contribute hybrid expertise and ‘belong’ to more than one community. They help to support the aim that the various programmes of work should incorporate different perspectives, and the overlap between a large core management group and those in positions of leadership within the project teams helps to spread this vision. A smaller core group co-ordinates the overall CLAHRC-wide organisation, presents the external-facing view of this CLAHRC model, and is influential in driving the vision throughout the CLAHRC. Overall, the onus is on project leadership to co-ordinate the different types of knowledge into one coherent programme. The leaders of the CLAHRC itself view their role as one of facilitating a new form of collaboration, rather than providing specific scientific or methodological expertise (Box 5).
Case example: the role of a project team leader. The role of the project leader is to co-ordinate different areas of work that are producing knowledge aligned to different disciplines, with the aim of producing one coherent programme of work.
The work taking place within these themes or project work is not dominated by the vision or traditions of one individual (e.g. the theme lead), but a culture has been created where individuals can contribute from their own perspective. As there is no one dominant culture influencing the approach of the work programmes, members with ‘specialist’ types of expertise are integrated members of project teams, allowing different types of knowledge to routinely inform the programme of work.
In [this translational initiative] you’re going into situations all the time where everyone in the room has got lots of different roles. That can be a bit of a challenge at times with people having to approach things from lots of different perspectives. It’s very much going in and out of roles sometimes. BROWNTOWN013
Members within the Browntown CLAHRC typically evolved more flexible and overlapping roles, reflecting the need for expertise to support different aspects of the work programme across the initiative.
Although the overall theme of each programme of work remained largely unchanged during the study period, the CLAHRC model and leadership in Browntown has allowed different groups to shape the focus and direction of the work packages. By collaboratively working with other stakeholder groups, the work packages were shaped by the values and insight of different communities, in what one respondent describes as an ‘organic process’.
It’s a tool for facilitating research, applied healthcare research. To enable patients and clinicians and commissioners to make sense of decisions about what to, about what types of treatment to provide . . . The overall structure of the research design didn’t change but it was such an organic process really, what we set out to do is what we’re doing, but their support and interest and feedback was important. CMBROWNTOWN002
In describing the work programme within the clinical theme, the participant highlights the flexibility of the plans for the project work, and how this allowed the integration of insights from different groups to inform and shape the direction of the work (Box 6).
Case example: collaboration with different types of communities. In this example, we observed a project team respond to external groups’ requests for outputs to inform their service development by refocusing their research.
The project teams across the CLAHRC were composed of a mix of academics from different disciplines and clinical practitioners. The senior management group actively encouraged teams to continue developing their original proposals based on discussion and dialogue with stakeholder groups. This more emergent approach was enabled by certain features of the CLAHRC model and membership. Within the project team, for example, several of the team members were what we will term ‘hybrid’ individuals, that is, they were affiliated to both academic and practitioner communities, and they helped to foster an environment where no one group dominated the direction of work programmes. Instead, all team members were encouraged to actively interact with other groups, and to be flexible in doing their project work. Thus, the vision of the CLAHRC fostered an environment where members were encouraged to develop new work practices and build relationships, rather than to conform to particular disciplinary approaches.
Across the Browntown CLAHRC, many of the members were able to act as boundary spanners precisely by virtue of their ‘hybrid’ academic and health practitioner background. This supported a more fluid integration between research and practice and the building of sustainable relationships, as the overlapping roles conferred membership of both the CLAHRC community and of external groups of managers and commissioners. At the same time, those in leadership positions helped to create an environment where knowledge from different perspectives was routinely shared across teams. Boundary-spanning mechanisms, such as project meetings, were used to support the fluid integration of different perspectives into the various components of work involved.
You see everyone has got a different perspective. Whether you’re a commissioner, you know, perhaps coming from a public health or social services background, but you’re commissioning. Or a manager in the NHS, perhaps social services seconded to NHS. Or a doctor or a nurse or a psychologist or a GP or a service user. You’ve all got a different understanding of what the care pathway is and what needs to be done to improve it. And so very much we deliberately wanted to incorporate a collaborative project between all those different groups. BROWNTOWN008
This CLAHRC’s KT approach drew from an established implementation model, the Canadian ‘knowledge into action’ cycle. In particular, many of the members of Browntown CLAHRC come from the NHS partner organisations, and many of the academics involved with this CLAHRC also hold NHS contracts and have been practically involved with roles at the local NHS organisations. In this sense, the CLAHRC organisation itself acts as a key mechanism to support the translation of knowledge between different communities.
I think CLAHRC is a boundary spanner. That’s its job, that’s what it is. Because the organisations, the NHS organisations, I mean, they do talk to each other because of, you know, Department of Health policy and stuff but they all have different ways of implementing policy. And so they don’t necessarily talk to each other but through us there’s work happening that can be, you know, translated across the different organisations. So we do act as kind of a, it’s almost like a phone exchange. BROWNTOWN017
The Browntown CLAHRC model included a number of work programmes which were designed from the outset to focus on explicit late-stage implementation activity that would produce tangible impact at an early stage in the CLAHRC time frame. These themes were not designed to produce new research evidence, but were intended to align with the NIHR aim of learning more about KT by conducting implementation activity.
We need to demonstrate progress with implementation from the start of CLAHRC. They are implementation projects using research methods, as well as making a difference to practice, and we’re very much committed to adding to the body of knowledge about knowledge translation . . . Implementation work is really a cross between action research and participatory research, and you need to shape the projects as you work with the key stakeholders. BROWNTOWN001
The implementation programmes of work were designed to produce results and impact on practice at an early stage. Thus, although they drew on established research expertise, they also emphasised collaboration with the local partners, including at all levels (executives, middle-management and ground-level staff) with whom the activities were taking place. These strands of work thus helped to reinforce and promote the expectation that all programmes of work, including the research themes, would generate outputs that could be applied to inform health-care practice.
The KT themes, in particular, focus on developing evidence that will help to make a practical difference to local health-care services. The clinical priorities for implementation are identified collaboratively, and the project work is seen as socially embedded in the organisations where change is happening. This facilitates the effective translation of knowledge into action by individuals and teams.
There’s a sense in which often researchers come to the NHS with their research ideas and then you try and get sign-up from them to take a project forward. This is turning the coin over completely and it’s saying, ‘we’re a resource; we want to work with you. What are your priorities?’ It’s been getting that ownership that I think has meant that we work in a different way, but also we get a different response back from NHS managers and clinicians. But also seeing, I guess it’s the role of brokers in all of this. BROWNTOWN001
The main focus of the work of these themes is on piloting and evaluating innovative strategies for implementation, which are then planned to be rolled out across the NIHR CLAHRC for Browntown partnership (with further evaluation of their impact). One key mechanism in this effort involves focusing resources on individuals who are employed by the health-care trusts. A role was designed for these members to act as ‘facilitators’ to build up research activity within their organisations based on the priorities and general ethos of Browntown CLAHRC-type work.
We have what are called ‘research and development facilitators’. The model which was developed was to actively engage each partner healthcare organisation by having a person working within them. So we would bring CLAHRC to these organisations. My role is to bring research evidence to my organisation, and encourage people to use the research evidence much more determining the way that work is completed. And also to help people here to articulate some of their, some of the issues which they have which could have a research solution. They could find a solution through research or at least some preliminary studies. BROWNTOWN022
It is clear from our analysis that members from throughout the CLAHRC recognise that there is an underlying strategy for this initiative to facilitate KT, and that this is conceptualised as involving an integrated, collaborative approach between academics, health-care practitioners and managers in order to facilitate implementation work. Thus, although the CLAHRC does designate certain work packages as ‘implementation’ projects, the vision enacted throughout the CLAHRC emphasises the need to bridge the second translational gap, and not limit KT activity to discrete programmes of work.
It’s not that simple as just having an implementation arm in CLAHRC . . . I think that a lot of people have been practitioners and have done research, there are very few pure academics who have never really linked in and have been in practice or not in the NHS . . . if you look at nearly every lead they’ve either been a practitioner or had a role in the NHS before as well as doing research. They’re all boundary spanners. BROWNTOWN021
However, while this priority given to KT is very apparent at the core management level of the CLAHRC, it is perhaps more difficult for this type of approach to radically influence all areas of the CLAHRC work, and in particular for innovative approaches to shape the research-focused clinical theme work. In addition, the challenges of conducting implementation research are also highlighted, as the inductive co-production approach differs from the design of conventional academic projects. Thus, while achieving KT through the seamless integration of research and practice is an important part of the Browntown CLAHRC vision, it depends heavily on the contribution of those occupying ‘hybrid roles’. Crucially, the individuals within these roles are not deemed peripheral to the project teams, but are well positioned to incorporate insights from different community perspectives into the work of the teams.
I work between a number of different organisations, so principally the NHS and academia. It was useful that I am actually from an academic background myself. It makes it a little bit easier in terms of understanding what academic opportunities there might be which the NHS might be able to tap into . . . I work with the initiative really because I work for the NHS and ensure that the CLAHRC work is embedded within this NHS organisation. So it’s very much that boundary-spanning role, I have two identities. BROWNTOWN022
The vision of the Browntown CLAHRC collaboration aims to foster a change in the culture of members across the partner organisations by building the capacity of its members to engage with applied health activities. The fact that the core of the CLAHRC is not university-centric, but is instead based on the traditions of allied health academics and practitioners, has helped to privilege health services’ concerns in shaping the programme of work. Although discrete KT activities were established at its inception, the vision of the Browntown CLAHRC is for all programmes of work to engage with diverse perspectives and traditions. In this way, the CLAHRC builds capacity for innovation through an environment in which working practices are able to draw on a diverse range of perspectives.
The Browntown CLAHRC has evolved incrementally over time as new activities have been developed in response to partner need. From its inception, the CLAHRC has emphasised the scope for such incremental growth both through grafting on new programmes of work and through the expansion of the CLAHRC community to include new types of partners. Although not emphasised in the original bid, collaboration with industry has also emerged as an important strand of this evolution. One result of this approach has been the ability of the CLAHRC Browntown model to adapt to a changing policy landscape, as the ‘organisational memory’ of relationships with the primary care trusts has been used to forge relationships with new commissioning organisations.
The CLAHRCs were given extensive flexibility in interpreting the NIHR remit. They thus represent a ‘natural experiment’ in how to focus, organise and manage applied health research that will have an impact on a local health-care environment. Our study of three CLAHRCs has highlighted how it has been necessary for each model to be tailored to its own local context.
The senior management of Bluetown CLAHRC strongly emphasises a common vision throughout its work, and contributes technical scientific support to work programmes. The ambition here is to produce high-quality scientific evidence through a rigorous methodological approach. Within this broad remit, each project team has extensive flexibility in the operational management of their work programmes. Instead of drawing on a CLAHRC-wide approach for KT, each project team here developed their own approach to translating the findings from their work programmes into practice. In this sense, the Bluetown model for KT is about supporting the operational autonomy of each project team to develop its own locally tailored approach.
Greentown CLAHRC draws on an explicit CLAHRC-wide organisational structure to facilitate KT activity. This structure emphasises a common operational management championed by the core leadership. The overall approach draws on a cross-disciplinary conceptual model which integrates different types of knowledge, including both clinical and social science academic traditions. However, each project team develops its own approach to accessing and developing requisite scientific and technical expertise, which allows team members to retain their pre-existing working practices.
The Browntown CLAHRC model was based on adapting the explicit Canadian framework for KT. Operationally, the CLAHRC work was situated within heterogeneous teams embedded within the local partner organisations. This, together with features such as overlapping community memberships and hybrid roles, helped to support the fluid, and often tacit, integration of different types of knowledge across all work programmes.
Notably, all three of the CLAHRCs we studied were able to develop and adapt their model over the 5-year funding time frame. The focus of these developments, and the extent to which modifications occurred, varied depending on each CLAHRC context. Bluetown CLAHRC started with a conventional model of the relationship between research and practice, which allowed work activity to commence straight away, and did not require radical changes to the working practices of its members. However, as the CLAHRC developed over time, new activities were introduced to support KT and knowledge management and engagement activities. Importantly, the established relationships of the early CLAHRC model are credited with helping to gain support for these new types of activities.
Greentown CLAHRC has experienced several changes in leadership since the initial formation of its model. Its development over time has particularly focused on refining the original organisational learning model in order to improve effectiveness. As such, this CLAHRC has engaged in changes to its operational management by restructuring the groups within the CLAHRC, and, in particular, making adaptations to the work of the specialist support services to support the clinical project teams more effectively.
The model of Browntown CLAHRC has evolved incrementally and has maintained consistent support for capacity building across local health-care communities. In particular, this CLAHRC has sought to develop and integrate new work programmes reflective of its overall vision into its CLAHRC community. It has continually expanded its engagement with its original health-care partners, and also with new types of stakeholder groups, such as industry. In keeping with its distributed leadership style, the central management team itself grew over time, so as to assimilate representatives of different work components and incorporate their views into the organisation of the CLAHRC’s work.

Our study of the three different CLAHRCs helps to explain the way in which the broad CLAHRC remit has been appropriated in distinctive ways, according to the social networks and local contexts that have shaped the CLAHRCs’ development. In the process, we have also observed the generative effect of leadership and vision. These variations in the interpretation and enactment of the CLAHRC mission underline the importance of their differing network structures and sense-making cognitions, as addressed through the other research strands in our study (and succeeding sections of this report). In respect of KT, it is clear that each CLAHRC has developed its own distinctive approach, some aspects of which are managed and articulated at senior management level, while others (e.g. the implications of hybrid roles) make a more implicit contribution.
The next part of the report will describe, analyse and discuss the three North American cases. The first case is the Canada-Coordination, an initiative involving a number of health-care players in the Ottawa district, Ontario (Canada), and aiming to improve the co-ordination and quality of health-care delivery. The initiative specifically targets a small number of children with complex care needs (each of these children is followed by at least five different specialists). At the time of our fieldwork, the Canada-Coordination initiative included 23 such children. The second case is the Canada-Translation, an initiative involving a community hospital (Com-Hospital) and a large university (Uni-Canada) in the same city in Quebec, aiming to promote KT processes from academics (Uni-Canada) to practitioners (Com-Hospital) and, more generally, aiming to increase collaboration and cross-fertilisation activities between the hospital and the university. The third case is US-Health, an initiative involving a number of universities, hospitals and consultancy companies specialising in health-care management and applied research, and aiming to promote implementation research across the USA.
All three initiatives are different to the CLAHRCs in terms of size, being either somewhat smaller (e.g. the Canada-Translation) or larger (US-Health). In addition, their thematic focus and governance arrangements are necessarily different to the CLAHRCs’ (e.g. the Canada-Coordination is more focused on KT and collaboration across existing health-care organisations). However, these three cases were chosen because they are all organised to promote implementation research and KT processes on a networked basis, by exploiting existing networks and/or facilitating the creation of new collaborative networks.
The qualitative analysis of the North American cases, in line with the analysis of the CLAHRCs, is structured as follows (for each case): firstly, we introduce the case; secondly, we break down the analysis into the three main theoretical elements (or dimensions) of the framework supporting this study (governance and management aspects, networks and collaborations, and KT); and, thirdly, we discuss the implications of the case. As with the qualitative analysis of the CLAHRCs, each case involves three case examples, one for each dimension of the framework. We conclude this section with a discussion of the three initiatives and a comparison between these (North American) cases and the UK CLAHRCs.
Canada-Coordination is a pilot project housed at the White Hospital, located in Ottawa, ON, Canada, and involves the hospital itself (a world-class tertiary paediatric centre), and several paediatric organisations and agencies in the Ottawa community.
The pilot project involves four main players (organisations): (1) the Regional Community Care Access Centre (RCAC), a community health provider that organises home, school, and hospital care, develops customised ‘care plans’, and provides a range of care and support services through health-care professionals, including nurses, physiotherapists, social workers, registered dieticians, occupational therapists, speech therapists, and personal support workers; (2) the Ottawa Association to Support Children (OASC), another community health provider, which provides specialised care for children and youth in Ontario with multiple physical, developmental, and associated behavioural needs; (3) the social services (SS), an agency that develops case resolution mechanisms to provide recommendations and referrals for families with children with complex care needs who are experiencing difficulties accessing support and services in the community; and (4) the White Hospital, which is the ‘hub’ of the project in that the children with complex care needs are patients of the hospital and, therefore, the main treatments are provided at the White Hospital.
The pilot project funds three key people to manage the project: (1) a project manager, (2) the most responsible physician and (3) the nurse co-ordinator. The project manager supervises the pilot project: she ensures that processes, communication pathways and flow maps have been developed; conducts staff training sessions; co-ordinates the meetings and presentations for the steering and advisory committees; and prepares reports. The most responsible physician reviews the overall complex medical needs of each child and co-ordinates communication with all the specialists at the White Hospital, specialists at other tertiary paediatric centres, and the community physicians. The nurse co-ordinator works very closely with the most responsible physician, interfacing between the doctors, nurses, and managers at the White Hospital and the other agencies; the nurse co-ordinator is the link person for all of the families of the children in the project. Interestingly, along with a solid governance structure, informal relationships played a central role in the pilot project, as is outlined in the example below (Box 7).
Case example: governance structure and informal relationships. The governance structure of the pilot project is relatively formal, including steering and advisory committees that meet monthly, in addition to an external entity – the White Hospital (more...)
From the case example in Box 7, it is clear that informal relationships facilitated tight collaborations and promoted trust among the players involved in the pilot project. The development of trust, as we will show below, was also a relevant element in supporting the health-care network of the Ottawa community.
The pilot project exploits existing networks between the players involved, including RCAC, OASC, SS, independent paediatricians, and White Hospital personnel (doctors, nurses and staff). In fact, while the project involved hiring some key people who could facilitate collaboration among the players, many of the relationships, for example between RCAC and the nurses and doctors at the White Hospital, were already tight. This aspect (prior networks) positively affected the overall project because, while formal networks are relatively easy to establish, it takes time for people to begin working together productively. Interestingly, the pilot project network developed collaborations aimed at improving the quality of health-care delivery that were not limited to the players involved in the initial project. The managers of the four agencies soon realised that, in order to clearly identify the needs of the children involved in the pilot project, it was important to involve the patients directly. Therefore, the project manager, in agreement with the directors of RCAC, OASC, and SS, decided to involve the parents of two children in the project. This involvement included having the parents sit on the project’s steering committee, which meets monthly and makes decisions regarding how the co-ordination of care at the White Hospital and across its network can be improved.
As the project manager highlighted regarding the evaluation process of the project: ‘So the two parents from family forum that also sit on our steering committee for this Pilot Project reviewed our questionnaires and helped us get to the questions we wanted so that it’s more a participatory evaluation approach’. This quote underlines a collaborative climate in which actors belonging to different networks are willing to bring their contributions and are supported by a common aim: to identify ways to improve the quality of health-care delivery services for children with complex care needs. One of the most relevant issues that emerged from the feedback from the two families who sit on the steering committee was the difficulty of keeping all of the different community services up to date with the most recent changes in a child’s condition – a problem compounded by the fact that each child is frequently seen by a number of specialists.
Knowledge translation within Canada-Coordination was focused on relations between clinicians and with family members of the children involved. It was facilitated not only by direct interactions among these groups, but also by the development of artefacts that could help span the boundaries between them. In this context, the introduction of the Single Point of Care (SPOC) document is an important example because it radically improved the relationship between the White Hospital and the health-care networks. The SPOC is a paper-based medical sheet including all basic information about a child’s health, such as current health status, current treatments (medications), and any other detail that can be helpful at school (e.g. current allergies) or in emergency situations (particular drugs that the child needs to take if his/her condition suddenly becomes severe or life-threatening). The SPOC is issued by the White Hospital and is carried by the families, who no longer have the difficult and at times confusing task of themselves collating all the medical information related to their children produced by different specialists. The SPOC is also shared with a number of organisations in the Ottawa community, such as the police and schools. This matters in practice: if, for example, the child develops a new allergy or changes a medication, the school nurse is aware of the change. While the creation of the SPOC is discussed more broadly in the next section – the SPOC being a KT tool – the following case example highlights how the SPOC contributed to promoting networks and collaboration across different players in the Ottawa community (Box 8).
Case example: SPOC as a tool for KT. The SPOC originated from discussions between the families of the children involved in the pilot project and the doctors and managers during steering committee sessions. Its value is highlighted here through the example (more...)
The SPOC also helped to reduce redundant examinations because the parents show the medical sheet to each specialist, who edits it as appropriate and records information about forthcoming tests. In sum, the introduction of the SPOC improved efficiency (acting as a co-ordination mechanism for different specialists) and gave the families a better understanding of the conditions of their children. While on the one hand the SPOC is acknowledged by any doctor as an official document, because it is issued by a hospital, on the other hand the (simple) way the SPOC is structured allows the family to interpret (in general terms) the health status of their child.
In sum, according to the project founders (the leaders of RCAC, OASC and SS), the involvement of the parents produced significant benefits. This point was also confirmed by a number of interviews that we conducted with the families of the children involved in the pilot project, in which several co-ordination issues facing the families emerged that could then be brought up for discussion in the project’s steering committee meetings.
The pilot project is an example of how informal networks are effective in promoting KT and collaboration across community partners, including health-care partners such as a hospital and the SS, as well as non-health-care partners such as the police department and the schools. The management of the project includes periodic reviews undertaken by an independent entity, the White Hospital Research Institute, which surveys doctors and families of the children involved in the project and makes sure that co-ordination and health-care delivery services are continuously improved. This is evidenced by feedback from both clinicians and patient families.
The Canada-Translation Centre is an initiative that originated at Com-Hospital, a community and university-affiliated health centre in Quebec, Canada, that serves a multicultural population in southern Quebec.
The Canada-Translation Centre started in 2010 with the aim of promoting and co-ordinating clinical research carried out by clinicians who work at Com-Hospital and are also appointed by Uni-Canada, a university in Quebec. According to its regulatory framework, the Canada-Translation Centre carries out clinical, epidemiological, and health services research studies, provides consultation to other researchers in the hospital, and provides educational services related to research, including seminars and workshops.
The Canada-Translation Centre is led by Johanna, Associate Professor at Uni-Canada and the vice president (VP) of Academic Affairs at Com-Hospital. The Canada-Translation Centre organises monthly meetings (on the last Thursday of every month) that are held in the hospital. As the Canada-Translation Centre was in its initial stage of development at the time of our research, the main objective of the meetings that we observed was the establishment of guidelines and systems.
From a governance perspective, the three main bodies that collaborate with the Canada-Translation Centre are (a) the REC, which is responsible for reviewing the scientific and ethical aspects of all research projects involving human subjects; (b) the research review office, which is co-ordinated by the research administrative secretary and provides administrative support to both the EC and the REC; and (c) the Quality Assessment (QA) Unit, which provides assistance to hospital staff on various aspects of QA analysis and evaluation, that is, projects that assess current performance or practice.
The governance aspects of the Canada-Translation Centre are extremely formalised, reflecting the interest of the academics at Uni-Canada to secure control of the development of the project, especially in its early stages. The board of directors supervises all research activities, while the VP of Academic Affairs (who is also the Canada-Translation Centre’s Director) supervises all main research committees. The Quality and Risk Management Committee is an independent body chaired by the VP of Professional Services. According to an interview with Johanna, the rationale of this design lies in the desire to have objective control of research activities by an independent body (the Quality Committee).
The network of the Canada-Translation Centre is formed by people who work in the hospital and also hold at least a teaching appointment or, more often, a professorial appointment at Uni-Canada. The design of the three key bodies of the centre sought to promote collaboration and cross-fertilisation between the hospital and Uni-Canada. In particular, within Family Medicine at Uni-Canada there are four departments that have always been very independent and loosely coupled; however, with the start of the Canada-Translation Centre, members of these four departments started meeting twice a month to try to pursue common research objectives. In sum, the main objectives of the Canada-Translation Centre are to (1) tighten the collaboration between Uni-Canada and Com-Hospital by promoting networks and (2) bring in expertise from external collaborations.
In terms of improving the network within the Canada-Translation Centre (Com-Hospital and Uni-Canada), one of the initial challenges was to build relationships between key actors who had no experience of working together. For instance, quality and risk management staff rarely interacted with academics prior to the start of the project. This group quickly became committed to collaborating with the academic groups, as highlighted by the director of quality and risk management at Com-Hospital:
And I convinced Johanna that the uniqueness of the relationship between Quality and Research here has to live somehow and has to be formalised in whatever plan she does. And I think she saw that as an opportunity. If it were someone else here than me I don’t know if it would work the same. It’s just because I started out working in Research and I, you know, I did that for ten, twelve years and I know the language and I know their challenges and I know, you know, how they think and how they . . . operate.
While it is important that people within Com-Hospital are able to work together (e.g. the research team and the quality and risk management department, as highlighted above), it is also important that the (more practice-oriented) researchers at Com-Hospital collaborate and develop networks with the professors at Uni-Canada. The associate dean of Inter-Hospital Affairs plays a role in this by seeking to promote fruitful collaborations between the hospital and the university. As he highlighted to us:
It was a natural thing to develop research along these lines. Again my role has been to facilitate communication between the hospital and the university, both with the Dean of Medicine and with other research leaders within the university. I’m not a researcher myself. My role is mainly to put people together. And to use the links that we have with the Ministries to help support what the hospitals do. The other way in which I interplay with the hospital is I’m on the board of directors. So being on the board of directors of course some of these initiatives to fund this research infrastructure comes to the board and at times I speak to it in a supportive fashion in order to help promote this venture for the hospital.
The Canada-Translation Centre’s mission for the period 2010–15 identifies five main domains of research to be pursued:
From the above it is clear that the Canada-Translation Centre has a specific focus on KT. In fact, each research member (i.e. a professor) must also be a clinician (i.e. he/she needs to spend a minimum number of hours per week in the hospital), a rule which is aimed at promoting applied research. As outlined by the case example below (Box 9), one of the strategies that the Canada-Translation Centre adopts for promoting implementation research is to work on small projects whose short-term tangible results are directly testable in the hospital environment.
Case example: short-term implementation research. One example of how the Canada-Translation Centre promotes implementation research is to try to develop very small projects where implementation requires months, if not just weeks. Through the exploitation (more...)
In contrast with the pilot project in Ottawa, this initiative is much more complex, involving more than 50 people, including Com-Hospital physicians, Uni-Canada academics, and administrative personnel. Staff involved are fully aware of the barriers between academics and practitioners and between people who work in different departments (in the case of Com-Hospital) and in different institutions (at the hospital and at Uni-Canada). Therefore, specific boundary spanners – such as the associate dean of Inter-Hospital Affairs – were identified to connect people with different backgrounds and different professional and personal interests.
US-Health is a model of field-based research designed to promote innovation in health-care delivery by speeding up the development, implementation, diffusion, and uptake of evidence-based tools, strategies, and findings. In particular, US-Health develops, and aims to disseminate, scientific evidence to improve health-care delivery systems.
The US-Health network includes a number of large partnerships (the ‘contractors’) and collaborating organisations that provide health care to more than 100 million Americans and is a 5-year implementation model of research that is field based and that fosters public–private collaboration aiming to provide concrete results in the short term. The US-Health partnerships involve most US states and provide access to large numbers of providers, major health plans, hospitals, long-term care facilities, ambulatory care settings, and other health-care structures. Each partnership includes health-care systems with large, robust databases, clinical and research expertise, and the authority to implement health-care innovations.
US-Health focuses on a wide variety of demand-driven, practical, applied topics of interest to the partnerships’ own operational leaders as well as the project funders. The programme emphasises projects that are addressed to user needs and operational interests and which, ideally, are expected to be generalisable across a number of settings.
US-Health partnerships operate under multi-year contracts, with proposals solicited on a rolling basis throughout each 5-year cycle. Projects need to be undertaken in the short term; they are awarded under separate task orders and are completed within 12–24 months. The US-Health network is promoted by a national health agency called Federal-Health, whose mission is to improve the quality, safety, efficiency, and effectiveness of US health care. Federal-Health supports research that helps people make more informed decisions and improves the quality of health-care services.
US-Health’s research has two main characteristics: it is practice based and implementation oriented. Thus, it supports field-based research to explore practical, applied topics that are responsive to diverse user needs and operational interests. By testing innovations directly in the practical settings in which they are intended to be adopted, US-Health increases the likelihood of their eventual successful uptake. US-Health research is also designed to increase knowledge about the process of implementing innovations and the contextual factors influencing implementation. It aims to promote understanding of how and why specific strategies work or fail.
The governance structure of US-Health is relatively flat. Federal-Health is the government organisation that manages the tendering process for contracts, and US-Health (the network) is one of the recipients of these contracts. Informal relationships built up between Federal-Health and its collaborators over time, however, help to ensure that contracts are tailored to the strengths of members of the US-Health network. For example, Federal-Health frequently discusses potential research projects with the contractors of previous bids to elicit their interest.
Once a contract is signed, a project officer is assigned to each contractor; there are a few project officers, each managing multiple contracts nationwide. The main task of the project officer is to make sure that all milestones are met during the very short period of the contract. Moreover, the project officer can review ongoing documents, can visit a contractor to see how the research project is progressing, and can provide suggestions and guidance. The project officers need to write periodic reports for Federal-Health; therefore, Federal-Health can closely monitor each contractor (and each research project). The way a contractor (e.g. a member of the US-Health network) manages the research project is left largely to its own discretion: while a proposal on how the objectives will be reached needs to be provided to Federal-Health, no specific guidelines on how to undertake the research are given. However, the contracts (and the research projects) are generally very specific in their objectives and concrete results. Very often, the research output is a ‘toolkit’, that is, a document with specific recommendations on how to manage a health-care problem in practice.
Many of the people whom we interviewed and who were involved in one or more US-Health research projects (i.e. contracts) highlighted that the success of the project depended to a large extent on the project officer. The effects of the project officer role can be both positive and negative, as shown by the example below (Box 10).
Case example: the role of project officers in US-Health. This interview was undertaken with a professor in a department of family medicine who has been involved in a Federal-Health project for several years. She describes with examples points of strength (more...)
US-Health has a very complex network including, as we previously noted, a number of large partnerships. In this section we provide some insights focusing on the two main partnerships where we were able to conduct interviews: Health-partnership and Health-Consulting. We chose these two networks because they are very different (one is a partnership whose participants are academics, while the other is a consultancy company with expertise in health care). Although collaboration on bid writing between partners was limited by a lack of funding, some collaboration did take place across US-Health contractors.
Health-partnership (primary contractor) is a partnership led by a US university (name not disclosed) which managed some 10 contracts with Federal-Health within US-Health. The contracts of Health-partnership involve themes such as improving hepatitis C virus-screening practices and testing uptake in select primary care providers, preventing pressure ulcers in hospitals, co-ordinating care across primary care practices, using innovative communication technology to improve the health of minorities, avoiding readmissions in hospitals using technology, and reducing infections caused by particular bacteria.
Health-Consulting is a much smaller entity, being a private consultancy company with some 15 senior consultants. As a result, it is not in a position to develop clinical research involving patients. Health-Consulting’s strategy within US-Health was, therefore, to find collaborators (subcontractors) who could undertake implementation research in health-care structures. To do this, they drew on a wide network of subcontractors. The contracts managed by Health-Consulting focused on (1) the relationship between patient information management in hospitals and the risks of complications and mortality while patients are hospitalised (this contract investigated both electronic and paper-based medical records) and (2) the development of performance measures for injurious falls in nursing homes and for rehospitalisation of patients discharged from hospitals to home care.
With the overall goal of translating research into practice, US-Health links many of the largest health-care systems in the USA with top health services researchers who are identified through a tendering process and are managed through short- to medium-term contracts.
It provides a network of delivery-affiliated researchers and sites with a means of testing the application and uptake of research knowledge. US-Health is the successor to another large-scale initiative, which was completed in 2005. All of the large partnerships (i.e. the prime contractors, who work with a specified range of other organisations) have a demonstrated capacity to turn research into practice for proven interventions, targeting those who manage, deliver or receive health-care services. As noted above, both Health-partnership and Health-Consulting develop research that is focused on a wide variety of demand-driven, practical, applied topics which are of interest to the partnerships’ own operational leaders as well as to the project funders. The overall programme (US-Health) emphasises projects that are broadly responsive to user needs and operational interests and which are expected to be generalisable across a number of settings. An example of implementation research conducted by Health-Consulting is outlined in the case example below (Box 11).
Case example: implementation research at Health-Consulting. Health-Consulting developed a tool to improve pharmacists’ communications with patients. This was informed by evidence that only 12% of US adults understand and use health information (more...)
The US-Health case, compared with the two other North American cases, involves a number of partnerships, each involving a number of organisations and players. Moreover, the funding model is distinctive, depending on bids for contracts (very often agreed between the parties – e.g. Federal-Health and, in our case, US-Health). Even though there was little interaction across the US-Health network as a whole, collaborations proliferated within partnerships. Moreover, most projects have been successful and some projects were adopted nationwide.
Included under terms of UK Non-commercial Government License.
Evidence-based green human resource management: a systematic literature review.
1.1. State of the Art of GHRM
1.2. Previous Reviews of GHRM
1.3. Aims of the Research
2.1. Identification: Information Sources and Search Strategies
2.1.1. Information Sources
2.1.2. Search Strategies
2.2. Screening: Selection Process and Criteria
2.3. Included: Data Processing and Analyzing
3. Results and Discussion
3.1. General Overview of GHRM Literature
3.1.1. Publications by Years
3.1.2. Publications by Sources
3.1.3. Publications by Contexts
3.1.4. Publications by Methodologies
3.2. GHRM Research Foci and Trends
3.2.1. Bibliographic Coupling Analysis
3.2.2. Keyword Evolution Analysis
3.3. GHRM Conceptualizations and Research Framework
3.3.1. Dimensions and Measurements of GHRM
3.3.2. Theoretical Bases of GHRM
3.3.3. Framework of GHRM
3.4. Recommendations for Practitioners and Researchers
3.4.1. Context and Trend of GHRM
3.4.2. Access and Approach to GHRM
3.4.3. Mechanism and Innovation in GHRM
4. Conclusions
4.1. Summary of the Research
4.2. Contribution of the Research
4.3. Limitations and Proposed Solutions
Author Contributions
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Database | Search Terms (N = initial records, search date) | Eligibility Criteria Set in Automatic Filters (n = records marked as ineligible) |
|---|---|---|
| Scopus | TITLE-ABS-KEY = (“green hr*” OR “green human resource*” OR “green attract*” OR “green recruit*” OR “green select*” OR “green train*” OR “green performance manag*” OR “green performance apprais*” OR “green pay*” OR “green compensat*” OR “green reward” OR “green employee*”). N = 963 (19/04/2023) | Publication year: 2022; Document type: Article; Source type: Journal; Publication stage: Final; Subject area: Business, Management, and Accounting; Language: English. n = 636 (excluded) |
| Web of Science Core Collection | TS = (“green hr*” OR “green human resource*” OR “green attract*” OR “green recruit*” OR “green select*” OR “green train*” OR “green performance manag*” OR “green performance apprais*” OR “green pay*” OR “green compensat*” OR “green reward” OR “green employee*”). N = 796 (19/04/2023) | Publication year: 2022; Document type: Article; Publication stage: Final; Language: English; Citation Topics Meso: Management; Research areas: Business Economics, Environmental Sciences Ecology, Social Sciences Other Topics. n = 406 (excluded) |
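The automatic filters in the search table can be thought of as a programmatic screening step applied to exported records. The sketch below is purely illustrative, not the review's actual workflow: the record fields (`title`, `year`, `doc_type`, `language`, etc.) and the dictionary structure are assumptions, and the wildcard search terms are translated into regular expressions.

```python
# Illustrative sketch only: mimics the "automatic filter" screening stage of a
# PRISMA-style search on a list of exported database records. Field names and
# record structure are assumptions, not the authors' actual pipeline.
import re

# The review's wildcard search terms, rewritten as regex patterns ("*" -> "\w*")
GHRM_TERMS = [
    r"green hr\w*", r"green human resource\w*", r"green attract\w*",
    r"green recruit\w*", r"green select\w*", r"green train\w*",
    r"green performance manag\w*", r"green performance apprais\w*",
    r"green pay\w*", r"green compensat\w*", r"green reward", r"green employee\w*",
]
PATTERN = re.compile("|".join(GHRM_TERMS), re.IGNORECASE)

def passes_filters(record: dict) -> bool:
    """Apply the eligibility filters from the search table to one record:
    2022 journal article in English whose title/abstract/keywords match a term."""
    return (
        record.get("year") == 2022
        and record.get("doc_type") == "Article"
        and record.get("language") == "English"
        and any(PATTERN.search(record.get(field) or "")
                for field in ("title", "abstract", "keywords"))
    )

# Two toy records: only the first mentions a GHRM term
records = [
    {"title": "Green HRM and employee outcomes", "abstract": "", "keywords": "",
     "year": 2022, "doc_type": "Article", "language": "English"},
    {"title": "Lean manufacturing review", "abstract": "", "keywords": "",
     "year": 2022, "doc_type": "Article", "language": "English"},
]
eligible = [r for r in records if passes_filters(r)]
print(len(eligible))  # 1
```

In practice these filters were applied inside Scopus and Web of Science themselves; a script like this would only be needed when merging and deduplicating exports from both databases before manual screening.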
| RQs | Evidence from GHRM Literature | New Findings on GHRM |
|---|---|---|
| RQ1. What is the status quo of empirical GHRM research? | 1. Significant growth, varying across research disciplines; 2. Research contexts: mainly Asia, developing economies, and the service and manufacturing sectors; 3. Research clusters: theory implementation and framework extension, links with employees, links with the organization; 4. Keyword trends: pro-environmental behavior, green creativity, and competitive advantage appear in recent studies. | 1. Attention not only from HRM but from across disciplines; 2. Lack of evidence in the primary sector; lack of industry-level description for the manufacturing sector; 3. Three main research streams; 4. Recent interest centres on employee-related issues. |
| RQ2. How has GHRM been conceptualized, and how have theories been used in empirical research? | 1. GHRM dimensions: most studies adopted bundled GHRMPs, and GT is among the most examined GHRMP aspects; bundled GHRMPs and GT were found to relate positively to desired outcomes in most cases; 2. Theory application: around half of the studies were based on a single theory; AMO theory was used most, followed by RBV, SET, and SIT; some recent studies used multiple theories. | 1. Preference for bundled GHRMPs and GT, which show a high likelihood of positive effects; comparisons between individual GHRMPs were rarely made; 2. Tendency to use multiple theories and to conduct research from different perspectives. |
| RQ3. What suggestions can be offered for future GHRM development based on the evidence? | 1. Research adopting the same measurement scales for GHRM constructs shows similar choices in the design of GHRM dimensions; 2. Research applying the same theories showed similar inclinations in construct levels (organizational vs. employee level). | 1. The selection of GHRM dimensions influences the choice of measurement scales; 2. Theory application is interrelated with construct choice and framework design. |
Xie, H.; Lau, T.C. Evidence-Based Green Human Resource Management: A Systematic Literature Review. Sustainability 2023 , 15 , 10941. https://doi.org/10.3390/su151410941
Another hint: some scholarly journals use a specific layout, called the "IMRaD" format, to communicate empirical research findings. Such articles typically have 4 components: Introduction : sometimes called "literature review" -- what is currently known about the topic -- usually includes a theoretical framework and/or discussion of previous ...
The point is that the ideal-typical quantitative process does not imply change of the data, or iteration between data, evidence, hypotheses, empirical work, and theory. The data, quantified variables, are in most cases fixed. ... Lamont M. Evaluating qualitative research: Some empirical findings and an agenda. In: Lamont M, White P, editors.
Empirical research is defined as any research in which the conclusions of the study are drawn strictly from concrete, and therefore "verifiable", empirical evidence. This empirical evidence can be gathered using quantitative market research and qualitative market research methods. For example: research is being conducted to find out if ...
Empirical evidence is information that is acquired by observation or experimentation and is used to prove or disprove a hypothesis. ... Qualitative research, often used in the social sciences, ...
The evidence collected during empirical research is often referred to as "data." ... Qualitative research -- an approach to documenting reality that relies on words and images as the primary data source (p. 33). Both quantitative and qualitative methods are empirical. If you can recognize that a research study is quantitative or qualitative ...
The aim of this chapter is to provide authors (who already have experience of undertaking qualitative research and qualitative evidence synthesis) with additional guidance on undertaking a qualitative evidence synthesis that is subsequently integrated with an intervention review. ... Theoretical findings may combine empirical evidence, expert ...
Empirical evidence can be quantitative or qualitative. Typically, numerical quantitative evidence can be represented visually by means of diagrams, graphs, or charts, reflecting the use of statistical or mathematical data and the researcher's neutral noninteractive role.
Empirical research is research using empirical evidence. ... Quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated.
While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...
Qualitative research is used to gain insights into people's feelings and thoughts, which may provide the basis for a future stand-alone qualitative study or may help researchers to map out survey instruments for use in a quantitative study. ... the researchers must have felt that there was anecdotal evidence of differences in access to ...
Bott ends the article by saying the project has shown "how the 'othering' of researcher subjectivity by research subjects and how we as researchers feel about those often 'othering' appropriations, as well as our own intellectual reactions to them, can affect the empirical, epistemological and ethical concerns of the research" (p. 172).
Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research. Qualitative research is the opposite of quantitative research, which involves collecting and ...
Abstract. Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework ...
the logic behind the empirical research process, whether qualitative, quantitative or mixed methods. (1) The pre-empirical stage of research. A sharp distinction is made in the early part of this book between the pre-empirical and empirical stages of research. It is stressed in Chapters 4 and 5 that
Strategies for Empirical Research in Writing is a particularly accessible approach to both qualitative and quantitative empirical research methods, helping novices appreciate the value of empirical research in writing while easing their fears about the research process. This comprehensive book covers research methods ranging from traditional ...
Stake mentions four defining characteristics of qualitative research which are valid for qualitative case studies as well: they are "holistic," "empirical," "interpretive," and "emphatic." Whether the study is experimental or quasi-experimental, the data collection and analysis methods are known to hide some details (Yazan, 2015).
In its many guises, qualitative research is a form of empirical inquiry that typically entails some form of purposive sampling for information-rich cases; in-depth interviews and open-ended interviews, lengthy participant/field observations, and/or document or artifact study; and techniques for analysis and interpretation of data that move ...
Conclusion. Demystifying empirical research involves understanding the various techniques and examples used in the process. By exploring the methodologies, data collection methods, and analysis techniques discussed in this blog, researchers can gain a clearer insight into how empirical research is conducted.
Qualitative research involves the studied use and collection of a variety of empirical materials - case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts - that describe routine and problematic moments and meanings in individuals' lives.
A case study is one of the most commonly used methodologies of social research. This article attempts to look into the various dimensions of a case study research strategy, the different epistemological strands which determine the particular case study type and approach adopted in the field, discusses the factors which can enhance the effectiveness of a case study research, and the debate ...
An empirical study is research derived from actual observation or experimentation. The written articles resulting from empirical studies undergo a rigorous review by experts in the field of study prior to being published in journals.
Pluye and Hong 52 define mixed-methods research as "a research approach in which a researcher integrates (a) qualitative and quantitative research questions, (b) qualitative research methods* and quantitative research designs, (c) techniques for collecting and analyzing qualitative and quantitative evidence, and (d) qualitative findings and quantitative results".A mixed-method synthesis ...
Empirical evidence supporting the potential of MMLA in assisting teachers to facilitate students' reflective practices during intricate collaborative learning scenarios. ... Additionally, there is a need for more research on the qualitative aspects of communication and the development of measures to capture these aspects in the MMLA solution ...
These themes were not designed to produce new research evidence, but were intended to align with the NIHR aim of learning more about KT by conducting implementation activity. ... Empirical analysis and findings: qualitative investigation - Networked innovation in the health sector: comparative qualitative study of the role of Collaborations for ...
To better understand the empirical development of green human resource management (GHRM) research and theories and to provide evidence-based suggestions, the article conducts a systematic review of evidence-based studies within the academic field of GHRM. The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Protocol 2020 to select GHRM-focused and ...
The empirical evidence shows that the experiences of school counsellors in reporting child maltreatment are seldom investigated despite school counsellors being considered frontline members of the school-based child protection workforce for decades (World Health Organisation, 2019). The existing evidence is limited to having been generated in ...