How to Write a Research Design – Guide with Examples

Published by Alaxendra Bets on August 14th, 2021; revised on October 3, 2023

A research design is a structure that combines the different components of research. It involves using different data collection and data analysis techniques in a logical way to answer the research questions.

Before starting the research process, you need to make some decisions about how to address the research questions adequately; this is achieved with the help of the research design.

Below are the key aspects of the decision-making process:

  • Data type required for research
  • Research resources
  • Participants required for research
  • Hypothesis based upon research question(s)
  • Data analysis methodologies
  • Variables (independent, dependent, and confounding)
  • The location and timescale for data collection
  • The time period required for research

The research design provides the strategy of investigation for your project. Furthermore, it defines the parameters and criteria for compiling the data, evaluating the results, and drawing conclusions.

Your project’s validity depends on the data collection and interpretation techniques. A strong research design reflects a strong dissertation, scientific paper, or research proposal.

Steps of Research Design

Step 1: Establish Priorities for Research Design

Before conducting any research study, you must address an important question: “how to create a research design.”

The research design depends on the researcher’s priorities and choices because every research has different priorities. For a complex research study involving multiple methods, you may choose to have more than one research design.

Multimethodology, or multimethod research, involves using more than one data collection method or research design in a research study or a set of related studies.

If one research design is weak in one area, then another research design can cover that weakness. For instance, a dissertation analyzing different situations or cases will have more than one research design.

For example:

  • Experimental research involves experimental investigation and laboratory experience, but it does not accurately reflect the real world.
  • Quantitative research is good for the statistical part of the project, but it may not provide an in-depth understanding of the topic.
  • Also, correlational research will not provide experimental results because it is a technique that assesses the statistical relationship between two variables.

While scientific considerations are a fundamental aspect of the research design, it is equally important that the researcher think practically before deciding on its structure. Here are some questions you should think about:

  • Do you have enough time to gather data and complete the write-up?
  • Will you be able to collect the necessary data by interviewing a specific person or visiting a specific location?
  • Do you have in-depth knowledge of the different statistical analysis and data collection techniques needed to address the research questions or test the hypothesis?

If you think that the chosen research design cannot answer the research questions properly, you can refine your research questions to gain better insight.

Step 2: Data Type You Need for Research

Decide on the type of data you need for your research. The type of data you need to collect depends on your research questions or research hypothesis. Two types of research data can be used to answer the research questions:

  • Primary data vs. secondary data
  • Qualitative data vs. quantitative data

Also see: Research methods, design, and analysis.


Step 3: Data Collection Techniques

Once you have selected the type of research to answer your research question, you need to decide where and how to collect the data.

It is time to determine your research method to address the research problem. Research methods involve the procedures, techniques, materials, and tools used for the study.

For instance, a dissertation research design includes the different resources and data collection techniques and helps establish your dissertation’s structure.

The following table shows the characteristics of the most commonly employed research methods.

Research Methods

Step 4: Procedure of Data Analysis

Use of the correct data and statistical analysis techniques is necessary for the validity of your research. Therefore, you need to be certain about the data type that would best address the research problem. Choosing an appropriate analysis method is the final step of the research design. It can be split into two main categories:

Quantitative Data Analysis

The quantitative data analysis technique involves analyzing numerical data with the help of different applications such as SPSS, Stata, Excel, OriginLab, etc.

This data analysis strategy examines different variables using measures such as spread, frequencies, averages, and more. The research question and the hypothesis must be established in order to identify the variables for testing.
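
This kind of descriptive analysis is typically done in packages such as SPSS, Stata, or Excel, as mentioned above. As a rough illustration only, the sketch below shows the same idea in Python with pandas; the column names and values are hypothetical and not taken from any real study.

```python
# Illustrative sketch: frequencies and averages for made-up survey data.
import pandas as pd

responses = pd.DataFrame({
    "age": [23, 31, 27, 45, 38, 29],
    "satisfaction": [4, 5, 3, 4, 2, 5],  # e.g. a 1-5 Likert item
})

print(responses["satisfaction"].value_counts().sort_index())  # frequencies per rating
print(responses.mean(numeric_only=True))                      # averages per column
```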

Qualitative Data Analysis

Qualitative data analysis of figures, themes, and words allows for flexibility and the researcher’s subjective opinions. This means that the researcher’s primary focus will be interpreting patterns, tendencies, and accounts and understanding the implications and social framework.

You should be clear about your research objectives before starting to analyze the data. For example, you should ask yourself whether you need to explain respondents’ experiences and insights, or whether you also need to evaluate their responses with reference to a certain social framework.

Step 5: Write your Research Proposal

The research design is an important component of a research proposal because it plans the project’s execution. You can share it with your supervisor, who will evaluate the feasibility and capacity of the results and conclusion.

Read our guidelines on writing a research proposal if you have already formulated your research design. The research proposal is written in the future tense because you are writing it before conducting the research.

The research methodology or research design, on the other hand, is generally written in the past tense.

How to Write a Research Design – Conclusion

A research design is the plan, structure, and strategy of investigation conceived to answer the research question and test the hypothesis. The dissertation research design can be classified based on the type of data and the type of analysis.

The five steps mentioned above are the answer to how to write a research design. Follow these steps to formulate the perfect research design for your dissertation.

ResearchProspect writers have years of experience creating research designs that align with the dissertation’s aim and objectives. If you are struggling with your dissertation methodology chapter, you might want to look at our dissertation part-writing service.

Our dissertation writers can also help you with the full dissertation paper . No matter how urgent or complex your need may be, ResearchProspect can help. We also offer PhD level research paper writing services.

Frequently Asked Questions

What is research design?

Research design is a systematic plan that guides the research process, outlining the methodology and procedures for collecting and analysing data. It determines the structure of the study, ensuring the research question is answered effectively, reliably, and validly. It serves as the blueprint for the entire research project.

How to write a research design?

To write a research design, define your research question, identify the research method (qualitative, quantitative, or mixed), choose data collection techniques (e.g., surveys, interviews), determine the sample size and sampling method, outline data analysis procedures, and highlight potential limitations and ethical considerations for the study.

How to write the design section of a research paper?

In the design section of a research paper, describe the research methodology chosen and justify its selection. Outline the data collection methods, participants or samples, instruments used, and procedures followed. Detail any experimental controls, if applicable. Ensure clarity and precision to enable replication of the study by other researchers.

How to write a research design in methodology?

To write a research design in methodology, clearly outline the research strategy (e.g., experimental, survey, case study). Describe the sampling technique, participants, and data collection methods. Detail the procedures for data collection and analysis. Justify choices by linking them to research objectives, addressing reliability and validity.

Research Methods Guide: Research Design & Method


FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal.  It depends on what subjects (and who) you want to study.  Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus.  To answer these questions, you need to make a decision about how to collect your data.  Most frequently used methods include:

  • Observation / Participant Observation
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider.  An observation or interview method, a so-called qualitative approach, helps you collect richer information, but it takes time.  Using a survey helps you collect more data quickly, yet it may lack detail.  So, you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).


Research Design and Methodology

Submitted: 23 January 2019 Reviewed: 08 March 2019 Published: 07 August 2019

DOI: 10.5772/intechopen.85731


A number of approaches are used in this research design. The purpose of this chapter is to design the methodology of the research approach through mixed types of research techniques. The research approach also guides the researcher on how to arrive at the research findings. In this chapter, the general design of the research and the methods used for data collection are explained in detail. It includes three main parts. The first part gives a highlight of the dissertation design. The second part discusses the qualitative and quantitative data collection methods. The last part illustrates the general research framework. The purpose of this section is to indicate how the research was conducted throughout the study period.

  • research design
  • methodology
  • data sources

Author Information

Kassu Jilcha Sileyew*

  • School of Mechanical and Industrial Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa, Ethiopia

*Address all correspondence to: [email protected]

1. Introduction

Research methodology is the path through which researchers conduct their research. It shows the path through which they formulate their problem and objective and present their results from the data obtained during the study period. This research design and methodology chapter also shows how the research outcome will be obtained in line with meeting the objective of the study. This chapter hence discusses the research methods that were used during the research process, covering the methodology of the study from the research strategy to the dissemination of results. For emphasis, the author outlines the research strategy, research design, research methodology, the study area, data sources (primary and secondary), population considerations and sample size determination (for the questionnaires and for the workplace site exposure measurements), data collection methods (workplace site observation, desk review, questionnaires, experts’ opinion, workplace site exposure measurement, and the data collection tool pretest), secondary data collection methods, methods of data analysis (quantitative and qualitative), data analysis software, the reliability and validity analysis of the quantitative data, data quality management, inclusion criteria, ethical considerations, and dissemination and utilization of the results. In order to satisfy the objectives of the study, both qualitative and quantitative research methods are used. The study used these mixed strategies because data were obtained from all aspects of the data source during the study time. Therefore, the purpose of this methodology is to satisfy the research plan and target devised by the researcher.

2. Research design

The research design is intended to provide an appropriate framework for a study. A very significant decision in the research design process is the choice of research approach, since it determines how relevant information for the study will be obtained; however, the research design process involves many interrelated decisions [1].

This study employed a mixed type of methods. The first part of the study consisted of a series of well-structured questionnaires (for management, employees’ representatives, and technicians of industries) and semi-structured interviews with key stakeholders (government bodies, ministries, and industries) in participating organizations. The other methods used were interviews with employees, to learn how they feel about the safety and health of their workplace, and field observation at the selected industrial sites.

Hence, this study employs a descriptive research design to assess the effects of an occupational safety and health management system on employee health, safety, and property damage in selected manufacturing industries. Saunders et al. [2] and Miller [3] say that descriptive research portrays an accurate profile of persons, events, or situations. This design offers the researchers a profile of the relevant aspects of the phenomena of interest from an individual, organizational, and industry-oriented perspective. Therefore, this research design enabled the researchers to gather data from a wide range of respondents on the impact of safety and health on manufacturing industries in Ethiopia, and helped in analyzing how the responses obtained relate to workplace safety and health in these industries. The overall research design and flow process are depicted in Figure 1.

Figure 1. Research methods and processes (author design).

3. Research methodology

To address the key research objectives, this research used both qualitative and quantitative methods and a combination of primary and secondary sources. The qualitative data support the quantitative data analysis and results. The results obtained are triangulated, since the researcher utilized both qualitative and quantitative data types in the analysis. The study area, data sources, and sampling techniques are discussed in this section.

3.1 The study area

According to Fraenkel and Warren [4], a population refers to the complete set of individuals (subjects or events) having common characteristics in which the researcher is interested. The population of the study was determined based on a random sampling system. Data collection was conducted from March 07, 2015 to December 10, 2016, in selected manufacturing industries found in and around Addis Ababa city. The manufacturing companies were selected based on their number of employees, year of establishment, prevailing accident potential, and manufacturing industry type, even though all criteria were difficult to satisfy.

3.2 Data sources

3.2.1 Primary data sources

Primary data were obtained from the original sources of information. The primary data were considered more reliable and allowed a higher level of confidence in decision-making, since they were collected in direct contact with the occurrence of the events. The primary data sources were the industries’ working environments (through observation, pictures, and photographs) and industry employees, both management and shop-floor workers (through interviews, questionnaires, and discussions).

3.2.2 Secondary data

A desk review was conducted to collect data from various secondary sources. This includes reports and project documents in each manufacturing sector (mostly at the medium and large level). Secondary data were obtained from the literature on OSH, and the remaining data came from the companies’ manuals, reports, and some management documents included in the desk review. Reputable journals, books, articles, periodicals, proceedings, magazines, newsletters, newspapers, websites, and other sources on the manufacturing industrial sectors were considered. Data obtained from existing working documents, manuals, procedures, reports, statistical data, policies, regulations, and standards were also taken into account in the review.

In general, for this research study, the desk review was completed to this end, and it was refined and modified based on the manuals and documents obtained from the selected companies.

4. Population and sample size

4.1 Population

The study population consisted of manufacturing industries’ employees in and around Addis Ababa city, where the most representative manufacturing industrial clusters are found. To select a representative manufacturing population, the industry types expected to be more prone to accidents were considered, using random and purposive sampling. The population was drawn from the textile, leather, metal, chemicals, and food manufacturing industries. A total of 189 industries, from the priority areas of the government, responded to the questionnaire survey. Random and disproportionate sampling methods were used; 80 responses came from wood, metal, and iron works; 30 from food, beverage, and tobacco products; 50 from leather, textile, and garments; 20 from chemical and chemical products; and 9 from the remaining 9 clusters of manufacturing industries.

4.2 Questionnaire sample size determination

Simple random sampling and purposive sampling methods were used to select the representative manufacturing industries and respondents for the study. Simple random sampling ensures that each member of the population has an equal chance of being selected and responding (or more than an equal chance, depending on the data analysis justification). A sample size determination procedure was used to obtain optimal and reasonable information. In this study, both probability (simple random) and nonprobability (convenience, quota, purposive, and judgmental) sampling methods were used, as the nature of the industries varied. This is because the characteristics of the data sources permitted the researchers to follow multiple methods, which helps to triangulate the data obtained and increases the reliability of the research outcome and the decisions based on it. The companies’ establishment time and length of operation, their number of employees and its proportion, the owner type (government or private), the type of manufacturing industry/production, the types of resources used at work, and their location in and around the city were some of the criteria for the selection.

The sample size was determined using the formula adopted from Daniel [5] and Cochran [6]. The formula, used for an unknown population size, is given in Eq. (1) as

n = Z² P (1 − P) / d²      (1)

where n = sample size, Z = statistic for a level of confidence, P = expected prevalence or proportion (in proportion of one; if 50%, P = 0.5), and d = precision (in proportion of one; if 6%, d = 0.06). For the conventional confidence level of 95%, the Z value is 1.96. In this study, the investigators present their results with 95% confidence intervals (CI).

The expected sample size was 267 manufacturing industries at a margin of error of 6% and a 95% confidence interval. However, only 189 responses were used for the analysis, after rejecting some with too many missing values. Hence, the actual data collection resulted in a 71% response rate, and the sample was assumed to be satisfactory and representative for the data analysis.
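
As a quick check of the figures above, the short sketch below plugs the stated values (Z = 1.96, P = 0.5, d = 0.06) into Eq. (1). This is only an illustrative calculation in Python; the chapter does not specify any software for this step.

```python
# Recompute the expected sample size from Eq. (1): n = Z^2 * P * (1 - P) / d^2.
import math

Z, P, d = 1.96, 0.5, 0.06
n = Z**2 * P * (1 - P) / d**2
print(math.ceil(n))                            # 267 expected questionnaires

responded = 189
print(round(100 * responded / math.ceil(n)))   # ~71% response rate
```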

4.3 Workplace site exposure measurement sample determination

The sample size for the experimental exposure measurements of the physical work environment was determined based on the physical data prepared for the questionnaires and respondents. Positive responses were used to select the exposure factors to be measured for physical-environment health and disease causation, such as noise intensity, light intensity, pressure/stress, vibration, temperature (coldness or hotness), and dust particles, at 20 workplace sites. The selection method used random sampling in line with the purposive method. The measurement of the exposure factors was done in collaboration with the Addis Ababa city Administration and Oromia Bureau of Labour and Social Affair (AACBOLSA), and some measuring instruments were obtained from these bureaus.

5. Data collection methods

Data collection focused on the following basic techniques: secondary and primary data collection, covering both qualitative and quantitative data as defined in the previous section. The data collection mechanisms were devised and prepared with their proper procedures.

5.1 Primary data collection methods

Primary data sources are qualitative and quantitative. The qualitative sources are field observation, interviews, and informal discussions, while the quantitative sources are survey questionnaires and interview questions. The next sections elaborate how the data were obtained from the primary sources.

5.1.1 Workplace site observation data collection

Observation is an important aspect of science. Observation is tightly connected to data collection, and there are different sources for it: documentation, archival records, interviews, direct observations, and participant observations. Observational research findings are considered strong in validity because the researcher is able to collect a depth of information about a particular behavior. In this dissertation, the researchers used the observation method as one tool for collecting information and data before the questionnaire design and after the start of the research. The researcher made more than 20 specific observations of manufacturing industries in the study areas. During the observations, the researcher gained a deeper understanding of the working environment, the different sections of the production system, and OSH practices.

5.1.2 Data collection through interview

An interview is a loosely structured, qualitative, in-depth conversation with people who are considered to be particularly knowledgeable about the topic of interest. The semi-structured interview is usually conducted in a face-to-face setting, which permits the researcher to seek new insights, ask questions, and assess phenomena from different perspectives. It let the researcher understand in depth the influential factors and consequences of the present working environment. It provided opportunities for refining data collection efforts and examining specialized systems or processes. It was used when the researcher faced limitations in written records or published documents, or wanted to triangulate the data obtained from other primary and secondary sources.

This dissertation was also conducted with a qualitative approach that included interviews. The advantage of using interviews as a method is that they allow respondents to raise issues that the interviewer may not have expected. All interviews with employees, management, and technicians were conducted by the corresponding researcher on a face-to-face basis at the workplace. All interviews were recorded and transcribed.

5.1.3 Data collection through questionnaires

The main tool for gaining primary information in practical research is the questionnaire, because the researcher can decide on the sample and the types of questions to be asked [2].

In this dissertation, each respondent was requested to reply to an identical list of questions, presented in a mixed order so that bias was prevented. Initially, the questionnaire was coded and the items from each specific topic were mixed, based on uniform structures. Consequently, the questionnaire produced the valuable data required to achieve the dissertation objectives.

The questionnaires were based on a five-point Likert scale. Responses were given to each statement using a five-point Likert-type scale, from 1 = “strongly disagree” to 5 = “strongly agree.” The responses were summed up to produce a score for the measures.
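
A minimal sketch of the scoring just described, assuming hypothetical item names: each statement is rated 1–5 and the ratings are summed into a score per respondent. The chapter does not prescribe software for this step; Python is used here purely for illustration.

```python
# One respondent's ratings on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree).
likert_ratings = {
    "item_1": 4,
    "item_2": 5,
    "item_3": 2,
    "item_4": 3,
}

score = sum(likert_ratings.values())
print(score)  # summed score for this respondent (range 4-20 for four items)
```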

5.1.4 Data obtained from experts’ opinion

Data were also obtained from experts’ opinions comparing knowledge, management, collaboration, and technology utilization, including their sub-factors. The data obtained in this way were used for prioritizing OSH-improving factors and for decision-making. The factors were prioritized using Saaty scales (1–9) and then converted to fuzzy set values obtained from previous research using a triangular fuzzy set [7].

5.1.5 Workplace site exposure measurement

The researcher measured the workplace environment for dust, vibration, heat, pressure, light, and noise to determine the level of each variable. The planned primary data sources and the actual coverage are compared in Table 1.

Table 1. Planned versus actual coverage of the survey.

The response rate for the proposed data sources was good, and the pilot test also proved the reliability of the questionnaires. Interviews/discussions resulted in an 87% response rate among the respondents; the survey questionnaire response rate was 71%, and the field observation response rate was 90% for the whole data analysis process. Hence, the quality of the data organization was not compromised.

This response rate is considered representative for studies of organizations. A response rate of 30% is considered acceptable [8], and Saunders et al. [2] argued that a 20% response rate for a questionnaire with a scale response is acceptable. A low response rate should not discourage researchers, because a great deal of published research also achieves low response rates. Hence, the response rate of this study is acceptable and very good for the purpose of meeting the study objectives.

5.1.6 Data collection tool pretest

A pretest of the questionnaires, interviews, and tools was conducted to check whether the tool content was valid in the sense of the respondents’ understanding. Hence, content validity (the questions address the target without excluding important points), internal validity (the questions raised answer the outcomes of the researchers’ target), and external validity (the results can be generalized from the survey sample to the whole population) were reflected. This was verified with the pilot test prior to the start of the main data collection. Following the feedback process, a few minor changes were made to the originally designed data collection tools. The pilot test of the questionnaire was conducted on a sample of 10 respondents selected randomly from the target sectors and experts.

5.2 Secondary data collection methods

Secondary data refer to data that were collected by someone other than the user. This data source gives insight into the current state of the art in the research area. It also reveals the research gap that needs to be filled by the researcher. These secondary data sources can be internal and external sources of information and may cover a wide range of areas.

Literature/desk review and industry documents and reports: To achieve the dissertation’s objectives, the researcher conducted an extensive review of documents and company reports, in both online and offline modes. From a methodological point of view, literature reviews can be comprehended as content analysis, where quantitative and qualitative aspects are mixed to assess structural (descriptive) as well as content criteria.

A literature search was conducted using database sources like MEDLINE; Emerald; Taylor and Francis publications; EMBASE (medical literature); PsycINFO (psychological literature); Sociological Abstracts (sociological literature); accident prevention journals; US Statistics of Labor; the European Safety and Health database; ABI Inform; Business Source Premier (business/management literature); EconLit (economic literature); Social Service Abstracts (social work and social service literature); and other related materials. The search strategy focused on articles or reports that measure one or more of the dimensions within the research OSH model framework. This search strategy was based on a framework and measurement filter strategy developed by the Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) group. Based on screening, articles unrelated to the research model and objectives were excluded. Prior to screening, the researcher (principal investigator) reviewed a sample of more than 2000 articles, websites, reports, and guidelines to determine whether they should be included for further review or rejected. Discrepancies were thoroughly identified and resolved before the review of the main group of more than 300 articles commenced. After excluding articles based on the title, keywords, and abstract, the remaining articles were reviewed in detail, and information was extracted on the instrument used to assess the dimension of research interest. A complete list of items was then collated within each research target or objective and reviewed to identify any missing elements.

6. Methods of data analysis

The data analysis follows the procedures listed in the following sections and answers the basic questions raised in the problem statement. The experiences of developed and developing countries with OSH in the manufacturing industries were analyzed in detail, discussed, compared and contrasted, and synthesized.

6.1 Quantitative data analysis

Quantitative data were obtained from the primary and secondary sources discussed above in this chapter. The analysis was based on the data type, using Excel, SPSS 20.0, Office Word format, and other tools, and focuses on numerical/quantitative data.

Before analysis, the responses were coded. In order to analyze the data obtained from the questionnaires easily, they were coded in SPSS 20.0. This task involved identifying, classifying, and assigning a numeric or character symbol to the data, which was done in only one way, pre-coded [9, 10]. In this study, all of the responses were pre-coded: they were taken from the list of responses, and a number corresponding to a particular selection was given. This process was applied to every question that needed this treatment. Upon completion, the data were entered into a statistical analysis software package, SPSS version 20.0 on Windows 10, for the next steps.

In the data analysis, the data were explored with descriptive statistics and graphical analysis. The analysis included exploring the relationships between variables and comparing how groups affect each other. This was done using cross tabulation/chi-square tests, correlation, factor analysis, and nonparametric statistics.
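
The chapter ran these analyses in SPSS. For readers without SPSS, the sketch below shows an equivalent cross tabulation and chi-square test of independence in Python with pandas and SciPy; the variables and codes are hypothetical, not taken from the study data.

```python
# Cross tabulation and chi-square test on made-up data.
import pandas as pd
from scipy.stats import chi2_contingency

data = pd.DataFrame({
    "industry":   ["textile", "metal", "textile", "food", "metal", "food"],
    "had_injury": ["yes", "no", "no", "yes", "yes", "no"],
})

table = pd.crosstab(data["industry"], data["had_injury"])  # contingency table
chi2, p, dof, expected = chi2_contingency(table)           # test of independence
print(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```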

6.2 Qualitative data analysis

Qualitative data analysis was used for triangulation of the quantitative data analysis. The interview, observation, and report records were used to support the findings. This analysis was incorporated with the quantitative results in the data analysis sections.

6.3 Data analysis software

The data were entered and analyzed using SPSS 20.0 on Windows 10. The SPSS-supported analysis contributed much to the findings and to the validation and correctness of the results. The software analyzed and compared the results of the different variables used in the research questionnaires. Excel was also used to draw figures and calculate some analytical solutions.

7. The reliability and validity analysis of the quantitative data

7.1 Reliability of data

The reliability of a measurement specifies the extent to which it is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument [8]. In the reliability analysis, the stability and consistency of the data were checked, and the researcher verified the accuracy and precision of the measurement procedure. Reliability has numerous definitions and approaches, but in several environments the concept comes down to consistency [8]. A measurement fulfills the requirements of reliability when it produces consistent results during the data analysis procedure. Reliability was determined through Cronbach’s alpha, as shown in Table 2.

Table 2. Internal consistency and reliability test of questionnaire items.

K stands for knowledge; M, management; T, technology; C, collaboration; P, policy, standards, and regulation; H, hazards and accident conditions; PPE, personal protective equipment.

7.2 Reliability analysis

Cronbach’s alpha is a measure of internal consistency, i.e., how closely related a set of items are as a group [11]. It is considered to be a measure of scale reliability. The internal consistency reliability is most often measured by the Cronbach’s alpha value, and a reliability coefficient of 0.70 and above is considered “acceptable” in most research situations [12]. In this study, after deleting 13 items, the reliability coefficient for the remaining 76 Likert-scale items was 0.964, and the coefficients for the individual groupings are shown in Table 2. The instruments were thus found to be internally consistent using the Cronbach’s alpha test. Table 2 shows the internal consistency of the seven major instruments, whose reliability falls in the acceptable range for this research.
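
The reliability analysis above was done with SPSS. As a sketch of what the statistic computes, the function below implements the standard Cronbach’s alpha formula, alpha = (k/(k−1))·(1 − Σσ²_item/σ²_total), in Python with NumPy; the response matrix is made up for illustration.

```python
# Cronbach's alpha: internal consistency of a set of items.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

scores = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
])
print(round(cronbach_alpha(scores), 3))
```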

7.3 Validity

Face validity, as defined by Babbie [13], is an indicator that makes an instrument seem a reasonable measure of some variable; it is the subjective judgment that the instrument measures what it intends to measure in terms of relevance [14]. Thus, in this study, when developing the instruments the researcher ensured that uncertainties were eliminated by using appropriate words and concepts in order to enhance clarity and general suitability [14]. Furthermore, the researcher submitted the instruments to the research supervisor and the joint supervisor, who are both occupational health experts, to ensure the validity of the measuring instruments and determine whether they could be considered valid on face value.

In this study, the researcher was guided by the reviewed literature related to compliance with occupational health and safety conditions and data collection methods before developing the measuring instruments. In addition, the pretest study conducted prior to the main study helped the researcher avoid uncertainties about the contents of the data collection instruments. A thorough inspection of the measuring instruments by the statistician, the researcher’s supervisor, and joint experts, to ensure that all concepts pertaining to the study were included, ensured that the instruments were enriched.

8. Data quality management

Insight was given to the data collectors on how to approach companies, and many of the questionnaires were distributed through MSc students at the Addis Ababa Institute of Technology (AAiT) and experienced experts from the manufacturing industries. This made the data quality reliable, as it was continually discussed with them. The questionnaire was pretested on 10 workers to assure the quality of the data and to improve the data collection tools. Supervision during data collection was carried out to see how the data collectors handled the questionnaire, and each completed questionnaire was checked for completeness, accuracy, clarity, and consistency on a daily basis, either face-to-face or by phone/email. Data expected to be of poor quality were rejected during the screening stage. Of the 267 planned questionnaires, 189 were returned. Finally, the data were analyzed by the principal investigator.

9. Inclusion criteria

Data were collected from company representatives with knowledge of OSH. Articles written in English and Amharic were included in this study. Database information was obtained for articles in the OSH area covering intervention methods, methods of accident identification, the impact of occupational accidents, types of occupational injuries/diseases, the impact of occupational accidents and diseases on productivity and company costs, and the use of at least one form of feedback mechanism. No specific time period was chosen, in order to access all available published papers. Duplicate questionnaire statements were excluded from the data analysis.

10. Ethical consideration

Ethical clearance was obtained from the School of Mechanical and Industrial Engineering, Institute of Technology, Addis Ababa University. Official letters were written from the School of Mechanical and Industrial Engineering to the respective manufacturing industries. The purpose of the study was explained to the study subjects. The study subjects were told that the information they provided would be kept confidential and that their identities would not be revealed in association with the information they provided. Informed consent was secured from each participant. Where the assessment found a poor working environment, feedback will be given to the manufacturing industries involved in the study. There is a plan to give a copy of the results to the respective manufacturing industries’ and ministries’ offices. Respondents’ privacy was protected, and their responses were not individually analyzed or included in the report.

11. Dissemination and utilization of the result

The results of this study will be presented to Addis Ababa University, AAiT, School of Mechanical and Industrial Engineering. They will also be communicated to the Ethiopian manufacturing industries, the Ministry of Labor and Social Affair, the Ministry of Industry, and the Ministry of Health, from which the data were collected. The results will also be made available through publication and online presentation on Google Scholar. To this end, about five articles have been published and disseminated worldwide.

12. Conclusion

The research methodology and design section indicated the overall flow of the research for the given study, together with the data sources and data collection methods used. The overall research strategies and framework are indicated in this research process, from problem formulation to problem validation, including all the parameters. It lays a foundation for how a research methodology is devised and framed, and can serve researchers as a sample and model for data collection and the research process, from the problem statement to the research findings. In particular, this research flow helps researchers who are new to the research environment and methodology.

Conflict of interest

There is no conflict of interest.

  • 1. Aaker A, Kumar VD, George S. Marketing Research. New York: John Wiley & Sons Inc; 2000
  • 2. Saunders M, Lewis P, Thornhill A. Research Methods for Business Student. 5th ed. Edinburgh Gate: Pearson Education Limited; 2009
  • 3. Miller P. Motivation in the Workplace. Work and Organizational Psychology. Oxford: Blackwell Publishers; 1991
  • 4. Fraenkel FJ, Warren NE. How to Design and Evaluate Research in Education. 4th ed. New York: McGraw-Hill; 2002
  • 5. Daniel WW. Biostatistics: A Foundation for Analysis in the Health Sciences. 7th ed. New York: John Wiley & Sons; 1999
  • 6. Cochran WG. Sampling Techniques. 3rd ed. New York: John Wiley & Sons; 1977
  • 7. Saaty TL. The Analytical Hierarchy Process. Pittsburgh: PWS Publications; 1990
  • 8. Sekaran U, Bougie R. Research Methods for Business: A Skill Building Approach. 5th ed. New Delhi: John Wiley & Sons, Ltd; 2010. pp. 1-468
  • 9. Luck DJ, Rubin RS. Marketing Research. 7th ed. New Jersey: Prentice-Hall International; 1987
  • 10. Wong TC. Marketing Research. Oxford, UK: Butterworth-Heinemann; 1999
  • 11. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951; 16 :297-334
  • 12. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. International Journal of Medical Education. 2011; 2 :53-55. DOI: 10.5116/ijme.4dfb.8dfd
  • 13. Babbie E. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth; 2010
  • 14. Polit DF, Beck CT. Generating and Assessing Evidence for Nursing Practice. 8th ed. Lippincott Williams and Wilkins; 2008

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461),
  • Daeria O. Lawson,
  • Livia Puljak,
  • David B. Allison &
  • Lehana Thabane

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1. Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [23, 24]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [25]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high-impact journals [26]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [27]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [28]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been at the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [5].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. A simple random sample may, however, underrepresent groups of research reports that are few in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used, and we encourage researchers to justify their selected approach based on the study objective.
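To make these sampling options concrete, here is a minimal Python sketch (using pandas) of drawing a simple random sample versus a stratified sample from a sampling frame of research reports. The file name and the column names (id, review_type) are hypothetical placeholders, not taken from any cited study.

```python
# Minimal sketch: simple random vs. stratified sampling of research reports.
# "candidate_reports.csv" and its columns ("id", "review_type") are hypothetical.
import pandas as pd

frame = pd.read_csv("candidate_reports.csv")

# Simple random sample of 200 reports from the sampling frame.
simple_sample = frame.sample(n=200, random_state=2020)

# Stratified sample: the same number of reports drawn from each group,
# useful when one group would otherwise be underrepresented.
stratified_sample = (
    frame.groupby("review_type", group_keys=False)
         .apply(lambda g: g.sample(n=100, random_state=2020))
)

print(simple_sample["review_type"].value_counts())
print(stratified_sample["review_type"].value_counts())
```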

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journals’ websites). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period or to study types of interest. Furthermore, individual journals’ websites may have different search functionalities, which do not necessarily yield consistent output.
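As an illustration of building such a sampling frame programmatically, the following sketch uses Biopython's Entrez utilities to run a PubMed query restricted to a journal, a publication period and a study type. The journal name, date range, e-mail address and query are placeholders to be adapted to your own eligibility criteria; this is a minimal sketch, not a recommended search strategy.

```python
# Minimal sketch: retrieving PubMed IDs for a sampling frame with Biopython.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address

# Hypothetical query: one journal, a five-year window, randomized trials only.
query = (
    '"Journal of Hypothetical Plastic Surgery"[ta] '
    'AND ("2015/01/01"[dp] : "2019/12/31"[dp]) '
    'AND randomized controlled trial[pt]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
record = Entrez.read(handle)
handle.close()

pmids = record["IdList"]  # PubMed IDs forming the sampling frame
print(f"{record['Count']} records found; {len(pmids)} retrieved")
```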

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and they help to avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study and easy retrieval by searching databases such as PubMed. The disadvantages of publishing protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These concerns relate to the selection of the sample, the comparability of groups, and the ascertainment of exposures or outcomes. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
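For instance, a confidence-interval based justification can be computed with the standard normal-approximation formula for a proportion, n = z^2 * p(1 - p) / d^2. The sketch below assumes an anticipated proportion of 30% and a desired margin of error of plus or minus 5 percentage points; both numbers are illustrative and not taken from the cited study.

```python
# Minimal sketch: articles needed to estimate a proportion with a given
# absolute margin of error (confidence interval approach).
import math
from scipy.stats import norm

def n_for_proportion(p_expected: float, margin: float, conf: float = 0.95) -> int:
    """Sample size to estimate a proportion within +/- `margin` at confidence `conf`."""
    z = norm.ppf(1 - (1 - conf) / 2)   # e.g. 1.96 for 95% confidence
    return math.ceil((z ** 2) * p_expected * (1 - p_expected) / margin ** 2)

# Expect ~30% of articles to report the item of interest; aim for a 95% CI
# no wider than +/- 5 percentage points.
print(n_for_proportion(0.30, 0.05))   # about 323 articles
```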

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimating equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
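As an illustration of this approach, the following sketch fits a generalized estimating equations (GEE) model in Python with statsmodels, treating journals as clusters. The data file and variable names (adequate_reporting, industry_funded, sample_size, journal) are hypothetical; this is a minimal sketch of the general technique, not a reproduction of any cited analysis.

```python
# Minimal sketch: GEE to account for clustering of articles within journals.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("extracted_data.csv")   # one row per article (hypothetical file)

model = smf.gee(
    "adequate_reporting ~ industry_funded + sample_size",
    groups="journal",                     # articles clustered within journals
    data=df,
    family=sm.families.Binomial(),        # binary outcome
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())   # robust standard errors reflect within-journal correlation
```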

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid new advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
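A small sketch of how duplicate extraction can be checked is shown below: it computes chance-corrected agreement (Cohen's kappa) between two extractors for one item and lists the articles that need consensus. The file names and the extraction item are hypothetical, and the example assumes both extraction files cover the same articles.

```python
# Minimal sketch: comparing duplicate data extraction between two reviewers.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

r1 = pd.read_csv("extraction_reviewer1.csv").set_index("article_id").sort_index()
r2 = pd.read_csv("extraction_reviewer2.csv").set_index("article_id").sort_index()

item = "blinding_reported"   # a hypothetical yes/no extraction item

# Chance-corrected agreement between the two extractors for this item.
kappa = cohen_kappa_score(r1[item], r2[item])
print(f"Cohen's kappa for '{item}': {kappa:.2f}")

# Articles where the two extractions disagree, to be resolved by discussion
# or a third reviewer.
disagreements = r1.index[r1[item] != r2[item]]
print(disagreements.tolist())
```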

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are reported better [ 56 , 57 ], while others have found no difference [ 53 , 58 ]. The presence of funding may indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were reported better [ 60 ]. Khan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting the sample to high-JIF journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for whether the journal endorses the relevant reporting guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In a methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
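As a minimal sketch of the statistical-adjustment option, the following snippet compares a crude and an adjusted logistic regression, adjusting the association between funding and reporting for journal endorsement of a guideline. All variable names are hypothetical and assumed to be coded 0/1; a real analysis might also need to address clustering (see the GEE sketch earlier in this section).

```python
# Minimal sketch: crude vs. confounder-adjusted logistic regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("extracted_data.csv")   # hypothetical file, 0/1 coded variables

# Unadjusted association between funding and complete reporting.
crude = smf.logit("complete_reporting ~ industry_funded", data=df).fit()

# Adjusted for whether the journal endorses the relevant reporting guideline.
adjusted = smf.logit(
    "complete_reporting ~ industry_funded + journal_endorses_guideline", data=df
).fit()

print("Crude log-odds:   ", crude.params["industry_funded"])
print("Adjusted log-odds:", adjusted.params["industry_funded"])
```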

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.
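The two common operationalizations of “high impact” mentioned above (a fixed JIF cut-off versus the top quartile of the JCR list for a given year) can be expressed in a few lines. The file and column names below are hypothetical.

```python
# Minimal sketch: two ways of operationalizing "high impact" journals.
import pandas as pd

journals = pd.read_csv("jcr_2019_general_medicine.csv")   # hypothetical columns: journal, jif

# Fixed cut-off, e.g. JIF greater than 5.
high_impact_cutoff = journals[journals["jif"] > 5]

# Top (first) quartile of the JIF distribution for that year.
q3 = journals["jif"].quantile(0.75)
high_impact_quartile = journals[journals["jif"] >= q3]

print(len(high_impact_cutoff), "journals above the cut-off;",
      len(high_impact_quartile), "journals in the first quartile")
```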

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
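A minimal sketch of such descriptive summaries, assuming a hypothetical extraction data set, is shown below: counts (percent) for a categorical item and median (interquartile range) for a continuous one.

```python
# Minimal sketch: descriptive summaries typically reported in methodological studies.
import pandas as pd

df = pd.read_csv("extracted_data.csv")   # hypothetical file, one row per article

# Count (percent) of articles reporting a given item.
counts = df["reports_sample_size_calc"].value_counts()
percents = df["reports_sample_size_calc"].value_counts(normalize=True).mul(100).round(1)
print(pd.concat([counts, percents], axis=1, keys=["n", "%"]))

# Median (IQR) of a continuous characteristic, e.g. number of participants.
q1, med, q3 = df["n_participants"].quantile([0.25, 0.5, 0.75])
print(f"Median {med:.0f} (IQR {q1:.0f} to {q3:.0f})")
```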

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
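As an illustration of the analytical case, the sketch below tests the null hypothesis that the proportions of reviews with positive conclusions are equal in two groups, using a two-proportion z-test. The counts are invented for illustration and are not the figures from the cited study.

```python
# Minimal sketch: comparing proportions of "positive" conclusions between two groups.
from statsmodels.stats.proportion import proportions_ztest

positives = [30, 55]   # reviews with positive conclusions in each group (hypothetical)
totals = [100, 100]    # reviews assessed in each group (hypothetical)

stat, p_value = proportions_ztest(count=positives, nobs=totals)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```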

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
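A small sketch of moving from the article level to the item level is shown below: each review contributes one row per planned subgroup analysis. The data are invented purely to illustrate the reshaping step.

```python
# Minimal sketch: switching the unit of analysis from the review to the
# planned subgroup analysis.
import pandas as pd

reviews = pd.DataFrame({
    "review_id": ["R1", "R2", "R3"],
    "subgroup_analyses": [["age", "sex"], [], ["dose", "region", "sex"]],
})

# One row per planned subgroup analysis (the item-level unit of analysis).
items = (
    reviews.explode("subgroup_analyses")
           .dropna(subset=["subgroup_analyses"])
)
print(len(reviews), "reviews;", len(items), "planned subgroup analyses")
```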

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–1195.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Received : 27 May 2020

Accepted : 27 August 2020

Published : 07 September 2020

DOI : https://doi.org/10.1186/s12874-020-01107-7

Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

Research Design – Types, Methods and Examples

Research Design

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction : This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods : This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results : This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion : This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References : This section lists the sources cited in the research design.

Example of Research Design

An Example of Research Design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach : The research approach will be quantitative as it involves collecting numerical data to test the hypothesis.
  • Research design : The research design will be a quasi-experimental design, with a pretest-posttest control group design.
  • Sample : The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection : The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis : The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups (a minimal code sketch of this comparison follows this list).
  • Limitations : The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
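
As referenced in the data analysis item above, here is a minimal Python sketch of how the planned comparison of mean scores between the experimental and control groups might look; every number is invented for illustration, and an independent-samples t-test is shown only as one common way to test for a group difference.

```python
# Minimal sketch of the planned group comparison; all scores are made up for illustration.
from scipy import stats

# Hypothetical end-of-year academic performance scores (0-100 scale)
experimental_group = [72, 68, 75, 70, 66, 74, 69, 71, 73, 67]
control_group      = [78, 74, 80, 76, 72, 79, 75, 77, 81, 73]

# An independent-samples t-test checks whether the mean difference between the
# two groups is larger than would be expected by chance.
t_stat, p_value = stats.ttest_ind(experimental_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```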

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis : Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan : If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods : Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns : Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.

Research Design Vs Research Methodology

In brief, the research design is the specific plan or blueprint for a single study, while the research methodology is the broader framework that describes and justifies the methods used to carry out that plan.


How To Choose Your Research Methodology

Qualitative vs Quantitative vs Mixed Methods

By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021

Without a doubt, one of the most common questions we receive at Grad Coach is “ How do I choose the right methodology for my research? ”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!

In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.

Overview: Choosing Your Methodology

  • Understanding the options: qualitative research, quantitative research, mixed methods research
  • Choosing a research methodology: nature of the research, research area norms, practicalities


1. Understanding the options

Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative , quantitative and mixed methods -based research. Each of these options takes a different methodological approach.

Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words , descriptions , concepts or ideas – while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them.

Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture. In contrast to this, quantitative methods are usually used to confirm or test hypotheses. In other words, they have distinctly different purposes. The comparison below highlights a few of the key differences between qualitative and quantitative research.

Qualitative research:

  • Uses an inductive approach
  • Is used to build theories
  • Takes a subjective approach
  • Adopts an open and flexible approach
  • The researcher is close to the respondents
  • Interviews and focus groups are oftentimes used to collect word-based data
  • Generally draws on small sample sizes
  • Uses qualitative data analysis techniques (e.g. content analysis, thematic analysis, etc.)

Quantitative research:

  • Uses a deductive approach
  • Is used to test theories
  • Takes an objective approach
  • Adopts a closed, highly planned approach
  • The researcher is disconnected from the respondents
  • Surveys or laboratory equipment are often used to collect number-based data
  • Generally requires large sample sizes
  • Uses statistical analysis techniques to make sense of the data

Mixed methods -based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.

In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach , unless the nature of their study truly warrants a mixed-methods approach.

The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job. 

Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.


2. How to choose a research methodology

To choose the right research methodology for your dissertation or thesis, you need to consider three important factors . Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling , data collection methods and analysis techniques (we discuss these separately in other posts ).

The three factors you need to consider are:

  • The nature of your research aims, objectives and research questions
  • The methodological approaches taken in the existing literature
  • Practicalities and constraints

Let’s take a look at each of these.

Factor #1: The nature of your research

As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions .

But, what types of research exist?

Broadly speaking, research can fall into one of three categories:

  • Exploratory – getting a better understanding of an issue and potentially developing a theory regarding it
  • Confirmatory – confirming a potential theory or hypothesis by testing it empirically
  • A mix of both – building a potential theory or hypothesis and then testing it

As a rule of thumb, exploratory research tends to adopt a qualitative approach , whereas confirmatory research tends to use quantitative methods . This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilize a mixed-methods approach.


Let’s look at an example in action.

If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.

If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs .

So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims, objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.

The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.

If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.


Factor #2: The disciplinary norms

Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.

A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments .
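
As a rough illustration of what a "high Cronbach's alpha" means in practice, the sketch below computes alpha for a small, hypothetical 4-item scale; the response matrix is invented, and the calculation simply follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
# Minimal sketch of computing Cronbach's alpha for a survey scale.
# The item responses below are hypothetical (5 respondents x 4 Likert items).
import numpy as np

responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])  # rows = respondents, columns = scale items

k = responses.shape[1]                        # number of items in the scale
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)

# Cronbach's alpha: k / (k - 1) * (1 - sum of item variances / variance of total scores)
alpha = k / (k - 1) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```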

Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.

Factor #3: Practicalities

When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design ) and doing what’s practical , given your constraints . This is the nature of doing research and there are always trade-offs, as with anything else.

But what constraints, you ask?

When you’re evaluating your methodological options, you need to consider the following constraints:

  • Data access
  • Time
  • Money
  • Equipment and software
  • Your knowledge and skills

Let’s look at each of these.

Constraint #1: Data access

The first practical constraint you need to consider is your access to data . If you’re going to be undertaking primary research , you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews , you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.

If you’re wanting to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require to achieve statistically significant results. For many statistical tests, a sample of a few hundred respondents is typically needed to develop convincing conclusions.
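
If you want a more principled estimate than a rule of thumb, an a priori power analysis can be sketched as below; the assumed effect size, significance level, and target power are illustrative defaults, and statsmodels is used only as one readily available tool for the calculation.

```python
# Minimal sketch of an a priori sample-size calculation for a two-group comparison.
# The effect size, alpha and power values are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,   # assumed small-to-medium standardised effect (Cohen's d)
    alpha=0.05,        # conventional significance level
    power=0.8,         # conventional target power
)
print(f"Approximate respondents needed per group: {n_per_group:.0f}")
```

Under these assumptions the calculation lands at a few hundred respondents in total, which is consistent with the rule of thumb above; a larger assumed effect size would reduce the required sample.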

So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.

Constraint #2: Time

The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements . Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.

Another practical implication of time limits is that it will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional . For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon. 

Constraint #3: Money

As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost near zero to execute, others may require a substantial budget .

Some of the costs that may arise include:

  • Software costs – e.g. survey hosting services, analysis software, etc.
  • Promotion costs – e.g. advertising a survey to attract respondents
  • Incentive costs – e.g. providing a prize or cash payment incentive to attract respondents
  • Equipment rental costs – e.g. recording equipment, lab equipment, etc.
  • Travel costs
  • Food & beverages

These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.


Constraint #4: Equipment & software

Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.

Constraint #5: Your knowledge and skillset

The final practical constraint is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.

Some of the questions you should ask yourself are:

  • Am I more of a “numbers person” or a “words person”?
  • How much do I know about the analysis methods I’ll potentially use (e.g. statistical analysis)?
  • How much do I know about the software and/or hardware that I’ll potentially use?
  • How excited am I to learn new research skills and gain new knowledge?
  • How much time do I have to learn the things I need to learn?

Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.

So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.

Recap: Choosing a methodology

In this post, we’ve discussed how to go about choosing a research methodology. The three major deciding factors we looked at were:

  • The nature of your research (exploratory, confirmatory, or a combination of both)
  • Research area norms
  • Practicalities (data access, time, money, equipment and software, and your knowledge and skillset)


How to Write a Research Proposal | Examples & Templates

Published on 30 October 2022 by Shona McCombes and Tegan George. Revised on 13 June 2023.

Structure of a research proposal

A research proposal describes what you will investigate, why it’s important, and how you will conduct your research.

The format of a research proposal varies between fields, but most proposals will contain at least these elements:

  • Introduction
  • Literature review
  • Research design
  • Reference list

While the sections may vary, the overall objective is always the same. A research proposal serves as a blueprint and guide for your research plan, helping you get organised and feel confident in the path forward you choose to take.

Table of contents

  • Research proposal purpose
  • Research proposal examples
  • Research design and methods
  • Contribution to knowledge
  • Research schedule
  • Frequently asked questions

Academics often have to write research proposals to get funding for their projects. As a student, you might have to write a research proposal as part of a grad school application , or prior to starting your thesis or dissertation .

In addition to helping you figure out what your research can look like, a proposal can also serve to demonstrate why your project is worth pursuing to a funder, educational institution, or supervisor.

Research proposal length

The length of a research proposal can vary quite a bit. A bachelor’s or master’s thesis proposal can be just a few pages, while proposals for PhD dissertations or research funding are usually much longer and more detailed. Your supervisor can help you determine the best length for your work.

One trick to get started is to think of your proposal’s structure as a shorter version of your thesis or dissertation , only without the results , conclusion and discussion sections.


Writing a research proposal can be quite challenging, but a good starting point could be to look at some examples. We’ve included a few for you below.

  • Example research proposal #1: ‘A Conceptual Framework for Scheduling Constraint Management’
  • Example research proposal #2: ‘ Medical Students as Mediators of Change in Tobacco Use’

Like your dissertation or thesis, the proposal will usually have a title page that includes:

  • The proposed title of your project
  • Your supervisor’s name
  • Your institution and department

The first part of your proposal is the initial pitch for your project. Make sure it succinctly explains what you want to do and why.

Your introduction should:

  • Introduce your topic
  • Give necessary background and context
  • Outline your  problem statement  and research questions

To guide your introduction, include information about:

  • Who could have an interest in the topic (e.g., scientists, policymakers)
  • How much is already known about the topic
  • What is missing from this current knowledge
  • What new insights your research will contribute
  • Why you believe this research is worth doing

As you get started, it’s important to demonstrate that you’re familiar with the most important research on your topic. A strong literature review  shows your reader that your project has a solid foundation in existing knowledge or theory. It also shows that you’re not simply repeating what other people have already done or said, but rather using existing research as a jumping-off point for your own.

In this section, share exactly how your project will contribute to ongoing conversations in the field by:

  • Comparing and contrasting the main theories, methods, and debates
  • Examining the strengths and weaknesses of different approaches
  • Explaining how you will build on, challenge, or synthesise prior scholarship

Following the literature review, restate your main objectives. This brings the focus back to your own project. Next, your research design or methodology section will describe your overall approach, and the practical steps you will take to answer your research questions.

To finish your proposal on a strong note, explore the potential implications of your research for your field. Emphasise again what you aim to contribute and why it matters.

For example, your results might have implications for:

  • Improving best practices
  • Informing policymaking decisions
  • Strengthening a theory or model
  • Challenging popular or scientific beliefs
  • Creating a basis for future research

Last but not least, your research proposal must include correct citations for every source you have used, compiled in a reference list . To create citations quickly and easily, you can use our free APA citation generator .

Some institutions or funders require a detailed timeline of the project, asking you to forecast what you will do at each stage and how long it may take. While not always required, be sure to check the requirements of your project.


If you are applying for research funding, chances are you will have to include a detailed budget. This shows your estimates of how much each part of your project will cost.

Make sure to check what type of costs the funding body will agree to cover. For each item, include:

  • Cost : exactly how much money do you need?
  • Justification : why is this cost necessary to complete the research?
  • Source : how did you calculate the amount?

To determine your budget, think about:

  • Travel costs : do you need to go somewhere to collect your data? How will you get there, and how much time will you need? What will you do there (e.g., interviews, archival research)?
  • Materials : do you need access to any tools or technologies?
  • Help : do you need to hire any research assistants for the project? What will they do, and how much will you pay them?

Once you’ve decided on your research objectives , you need to explain them in your paper, at the end of your problem statement.

Keep your research objectives clear and concise, and use appropriate verbs to accurately convey the work that you will carry out for each one.

I will compare …

A research aim is a broad statement indicating the general purpose of your research project. It should appear in your introduction at the end of your problem statement , before your research objectives.

Research objectives are more specific than your research aim. They indicate the specific ways you’ll address the overarching aim.

A PhD, which is short for philosophiae doctor (doctor of philosophy in Latin), is the highest university degree that can be obtained. In a PhD, students spend 3–5 years writing a dissertation , which aims to make a significant, original contribution to current knowledge.

A PhD is intended to prepare students for a career as a researcher, whether that be in academia, the public sector, or the private sector.

A master’s is a 1- or 2-year graduate degree that can prepare you for a variety of careers.

All master’s involve graduate-level coursework. Some are research-intensive and intend to prepare students for further study in a PhD; these usually require their students to write a master’s thesis . Others focus on professional training for a specific career.

Critical thinking refers to the ability to evaluate information and to be aware of biases or assumptions, including your own.

Like information literacy , it involves evaluating arguments, identifying and solving problems in an objective and systematic way, and clearly communicating your ideas.


McCombes, S. & George, T. (2023, June 13). How to Write a Research Proposal | Examples & Templates. Scribbr. Retrieved 9 April 2024, from https://www.scribbr.co.uk/the-research-process/research-proposal-explained/


Types of Research Designs Compared | Guide & Examples

Published on June 20, 2019 by Shona McCombes . Revised on June 22, 2023.

When you start planning a research project, developing research questions and creating a  research design , you will have to make various decisions about the type of research you want to do.

There are many ways to categorize different types of research. The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by:

  • The type of knowledge you aim to produce
  • The type of data you will collect and analyze
  • The sampling methods , timescale and location of the research

This article takes a look at some common distinctions made between different types of research and outlines the key differences between them.

Table of contents

  • Types of research aims
  • Types of research data
  • Types of sampling, timescale, and location

The first thing to consider is what kind of knowledge your research aims to contribute.


The next thing to consider is what type of data you will collect. Each kind of data is associated with a range of specific research methods and procedures.

Finally, you have to consider three closely related questions: how will you select the subjects or participants of the research? When and how often will you collect data from your subjects? And where will the research take place?

Keep in mind that the methods that you choose bring with them different risk factors and types of research bias . Biases aren’t completely avoidable, but can heavily impact the validity and reliability of your findings if left unchecked.

Choosing between all these different research types is part of the process of creating your research design , which determines exactly how your research will be conducted. But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study.


McCombes, S. (2023, June 22). Types of Research Designs Compared | Guide & Examples. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/methodology/types-of-research/


What is Research Methodology? Definition, Types, and Examples


Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.

The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.

What is research methodology?

A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what is research methodology, you also need to know why it is important to pick the right methodology.

Why is research methodology important?

Having a good research methodology in place has the following advantages: 3

  • Helps other researchers who may want to replicate your research; the explanations will be of benefit to them.
  • You can easily answer any questions about your research if they arise at a later stage.
  • A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
  • It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
  • A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
  • It also helps ensure that ethical guidelines are followed while conducting research.
  • A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.


Types of research methodology

There are three types of research methodology based on the type of research and the data required. 1

  • Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
  • Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology requires fewer participants but is still more time consuming because the time spent per participant is quite large. This method is used in exploratory research where the research problem being investigated is not clearly defined.
  • Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.

What are the types of sampling designs in research methodology?

Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.

  • Probability sampling

In this type of sampling design, a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are listed below (a short code sketch of the systematic and stratified approaches follows the list):

  • Systematic —sample members are chosen at regular intervals. It requires selecting a random starting point and a fixed sampling interval that is then repeated through the population list. Because the selection rule is predefined, it is the least time-consuming method.
  • Stratified —researchers divide the population into smaller groups that don’t overlap but represent the entire population. While sampling, these groups can be organized, and then a sample can be drawn from each group separately.
  • Cluster —the population is divided into clusters based on demographic parameters like age, sex, location, etc.
  • Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
  • Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and the understanding of the target audience.
  • Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
  • Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.
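
As promised above, here is a minimal Python sketch of the systematic and stratified approaches over a hypothetical sampling frame of 500 people grouped by school; the frame, group labels, and sample size are all invented for illustration.

```python
# Minimal sketch of two probability sampling schemes over a hypothetical sampling frame.
import random

random.seed(1)
population = [{"id": i, "school": random.choice(["A", "B", "C"])} for i in range(1, 501)]

# Systematic sampling: pick a random start, then take every k-th member of the frame.
sample_size = 50
k = len(population) // sample_size                 # sampling interval
start = random.randrange(k)                        # random starting point
systematic_sample = population[start::k][:sample_size]

# Stratified sampling: draw separately from each non-overlapping group (stratum).
strata = {}
for person in population:
    strata.setdefault(person["school"], []).append(person)

stratified_sample = []
for school, members in strata.items():
    n = round(sample_size * len(members) / len(population))   # proportional allocation
    stratified_sample.extend(random.sample(members, n))

print(len(systematic_sample), "systematic;", len(stratified_sample), "stratified")
```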

What are data collection methods?

During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.

Qualitative research 5

  • One-on-one interviews: Helps the interviewers understand a respondent’s subjective opinion and experience pertaining to a specific topic or event
  • Document study/literature review/record keeping: Researchers’ review of already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
  • Focus groups: Constructive discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinion on a given topic.
  • Qualitative observation : Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).

Quantitative research 6

  • Sampling: The most common type is probability sampling.
  • Interviews: Commonly telephonic or done in-person.
  • Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
  • Document review: Reviewing existing research or documents to collect evidence for supporting the research.
  • Surveys and questionnaires: Surveys can be administered both online and offline depending on the requirement and sample size.


What are data analysis methods?

The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.

Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.

Descriptive analysis is used to describe the basic features of different types of data to present it in a way that ensures the patterns become meaningful. The different types of descriptive analysis methods are listed below (a brief code sketch computing them follows the list):

  • Measures of frequency (count, percent, frequency)
  • Measures of central tendency (mean, median, mode)
  • Measures of dispersion or variation (range, variance, standard deviation)
  • Measure of position (percentile ranks, quartile ranks)
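
The short sketch below computes each of these descriptive measures for a small, hypothetical set of scores; the numbers are invented, and NumPy is used only for convenience.

```python
# Minimal sketch of common descriptive statistics for a hypothetical set of scores.
from collections import Counter
import numpy as np

scores = np.array([67, 72, 72, 75, 78, 81, 81, 81, 85, 90])

print("count:", scores.size)                                 # measure of frequency
print("mean:", round(scores.mean(), 2))                      # measures of central tendency
print("median:", np.median(scores))
print("mode:", Counter(scores.tolist()).most_common(1)[0][0])
print("range:", scores.max() - scores.min())                 # measures of dispersion
print("variance:", round(scores.var(ddof=1), 2))
print("std dev:", round(scores.std(ddof=1), 2))
print("quartiles:", np.percentile(scores, [25, 50, 75]))     # measures of position
```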

Inferential analysis is used to make predictions about a larger population based on the analysis of the data collected from a smaller population. This analysis is used to study the relationships between different variables. Some commonly used inferential data analysis methods are listed below (a short regression and ANOVA sketch follows the list):

  • Correlation: To understand the relationship between two or more variables.
  • Cross-tabulation: Analyze the relationship between multiple variables.
  • Regression analysis: Study the impact of independent variables on the dependent variable.
  • Frequency tables: To understand the frequency of data.
  • Analysis of variance (ANOVA): To test whether the mean values of two or more groups differ significantly in an experiment.
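
To show two of these inferential methods in action, the sketch below runs a simple linear regression and a one-way ANOVA on hypothetical data; all values, group labels, and variable names are assumptions made for illustration.

```python
# Minimal sketch of two inferential analyses on hypothetical data:
# a simple linear regression and a one-way analysis of variance (ANOVA).
from scipy import stats

# Regression: impact of a hypothetical independent variable (hours of tutoring)
# on a dependent variable (test score).
hours = [1, 2, 3, 4, 5, 6, 7, 8]
score = [52, 55, 61, 64, 70, 72, 78, 83]
reg = stats.linregress(hours, score)
print(f"slope = {reg.slope:.2f}, R^2 = {reg.rvalue**2:.2f}, p = {reg.pvalue:.4f}")

# One-way ANOVA: do the mean scores of three hypothetical groups differ?
group_a = [70, 72, 68, 75, 71]
group_b = [65, 63, 66, 60, 64]
group_c = [80, 78, 82, 79, 81]
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```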

Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include:

  • Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts (a basic word-frequency sketch follows this list).
  • Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
  • Discourse analysis: For analyzing interactions with people considering the social context, that is, the lifestyle and environment, under which the interaction occurs.
  • Grounded theory: Involves hypothesis creation by data collection and analysis to explain why a phenomenon occurred.
  • Thematic analysis: To identify important themes or patterns in data and use these to address an issue.
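
As a very rough illustration of content analysis, the sketch below counts how often a few pre-chosen concepts appear in a tiny set of invented interview excerpts; real content analysis would involve a proper coding scheme, but the basic word-counting idea is the same.

```python
# Minimal sketch of a simple content analysis: counting how often selected concepts
# appear in a small, hypothetical set of interview transcripts.
import re
from collections import Counter

transcripts = [
    "I feel supported by my teachers, but the workload causes a lot of stress.",
    "The workload is heavy and stress builds up before exams.",
    "Peer support helps me cope with stress during the semester.",
]
concepts = ["support", "workload", "stress"]

counts = Counter()
for text in transcripts:
    words = re.findall(r"[a-z]+", text.lower())   # basic tokenisation
    for concept in concepts:
        counts[concept] += sum(1 for w in words if w.startswith(concept))

print(dict(counts))
```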

How to choose a research methodology?

Here are some important factors to consider when choosing a research methodology: 8

  • Research objectives, aims, and questions —these would help structure the research design.
  • Review existing literature to identify any gaps in knowledge.
  • Check the statistical requirements —if data-driven or statistical results are needed then quantitative research is the best. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
  • Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
  • Constraints —constraints of time, geography, and resources can help define the appropriate methodology.


How to write a research methodology?

A research methodology should include the following components: 3,9

  • Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
  • Research method —this can be quantitative, qualitative, or mixed-method.
  • Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
  • Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
  • Sampling —this involves selecting a representative subset of the population being studied.
  • Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
  • Data analysis —describe the data analysis methods you will use once you’ve collected the data.
  • Research limitations —mention any limitations you foresee while conducting your research.
  • Validity and reliability —validity helps identify the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions.
  • Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.


Frequently Asked Questions

Q1. What are the key components of research methodology?

A1. A good research methodology has the following key components:

  • Research design
  • Data collection procedures
  • Data analysis methods
  • Ethical considerations

Q2. Why is ethical consideration important in research methodology?

A2. Ethical considerations are important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly mention the ethical norms and standards followed during the conduct of the research and also mention if the research has been cleared by any institutional board. The following 10 points are the important principles related to ethical considerations: 10

  • Participants should not be subjected to harm.
  • Respect for the dignity of participants should be prioritized.
  • Full consent should be obtained from participants before the study.
  • Participants’ privacy should be ensured.
  • Confidentiality of the research data should be ensured.
  • Anonymity of individuals and organizations participating in the research should be maintained.
  • The aims and objectives of the research should not be exaggerated.
  • Affiliations, sources of funding, and any possible conflicts of interest should be declared.
  • Communication in relation to the research should be honest and transparent.
  • Misleading information and biased representation of primary data findings should be avoided.

Q3. What is the difference between methodology and method?

A3. Research methodology is different from a research method, although both terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance surveys, questionnaires, interviews, etc.

Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.

Accelerate your research paper writing with Paperpal. Try for free now!

References

  1. Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
  2. Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
  3. The basics of research methodology: A key to quality research. Voxco. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
  4. Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
  5. What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
  6. What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
  7. Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
  8. Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
  9. What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
  10. Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/



Indian J Anaesth. 2016 Sep; 60(9).

How to write a research proposal?

Devika Rani Duggappa

Department of Anaesthesiology, Bangalore Medical College and Research Institute, Bengaluru, Karnataka, India

Writing the proposal of a research work in the present era is a challenging task due to the constantly evolving trends in the qualitative research design and the need to incorporate medical advances into the methodology. The proposal is a detailed plan or ‘blueprint’ for the intended study, and once it is completed, the research project should flow smoothly. Even today, many of the proposals at post-graduate evaluation committees and application proposals for funding are substandard. A search was conducted with keywords such as research proposal, writing proposal and qualitative using search engines, namely, PubMed and Google Scholar, and an attempt has been made to provide broad guidelines for writing a scientifically appropriate research proposal.

INTRODUCTION

A clean, well-thought-out proposal forms the backbone for the research itself and hence becomes the most important step in the process of conduct of research.[ 1 ] The objective of preparing a research proposal is to obtain approvals from various committees, including the ethics committee [details under the ‘Research methodology II’ section (Table 1) in this issue of IJA], and to request grants. However, there are very few universally accepted guidelines for the preparation of a good quality research proposal. A search was performed with keywords such as research proposal, funding, qualitative and writing proposals using search engines, namely, PubMed, Google Scholar and Scopus.

Table 1. Five ‘C’s while writing a literature review [table not reproduced].

BASIC REQUIREMENTS OF A RESEARCH PROPOSAL

A proposal needs to show how your work fits into what is already known about the topic and what new paradigm it will add to the literature, while specifying the question that the research will answer, establishing its significance, and the implications of the answer.[ 2 ] The proposal must be capable of convincing the evaluation committee about the credibility, achievability, practicality and reproducibility (repeatability) of the research design.[ 3 ] Four categories of audience with different expectations may be present in the evaluation committees, namely academic colleagues, policy-makers, practitioners and lay audiences. Tips for the preparation of a good research proposal include: ‘be practical, be persuasive, make broader links, aim for crystal clarity and plan before you write’. A researcher must be balanced, with a realistic understanding of what can be achieved. Being persuasive implies that the researcher must be able to convince other researchers, research funding agencies, educational institutions and supervisors that the research is worth approving. The aim of the researcher should be clearly stated in simple language that describes the research in a way that non-specialists can comprehend, without the use of jargon. The proposal must not only demonstrate that it is based on an intelligent understanding of the existing literature but also show that the writer has thought about the time needed to conduct each stage of the research.[ 4 , 5 ]

CONTENTS OF A RESEARCH PROPOSAL

The contents or formats of a research proposal vary depending on the requirements of evaluation committee and are generally provided by the evaluation committee or the institution.

In general, a cover page should contain the (i) title of the proposal, (ii) name and affiliation of the researcher (principal investigator) and co-investigators, (iii) institutional affiliation (degree of the investigator and the name of the institution where the study will be performed), and (iv) contact details such as phone numbers and e-mail IDs, along with lines for the signatures of the investigators.

The main contents of the proposal may be presented under the following headings: (i) introduction, (ii) review of literature, (iii) aims and objectives, (iv) research design and methods, (v) ethical considerations, (vi) budget, (vii) appendices and (viii) citations.[ 4 ]

Introduction

It is also sometimes termed the ‘need for study’ or ‘abstract’. The introduction is the initial pitch of an idea; it sets the scene and puts the research in context.[ 6 ] The introduction should be designed to create interest in the reader about the topic and proposal. It should convey to the reader what you want to do, what necessitates the study and your passion for the topic.[ 7 ] Some questions that can be used to assess the significance of the study are: (i) Who has an interest in the domain of inquiry? (ii) What do we already know about the topic? (iii) What has not been answered adequately in previous research and practice? (iv) How will this research add to knowledge, practice and policy in this area? Some evaluation committees expect the last two questions to be elaborated under a separate heading of ‘background and significance’.[ 8 ] The introduction should also contain the hypothesis behind the research design. If a hypothesis cannot be constructed, the line of inquiry to be used in the research must be indicated.

Review of literature

It refers to all sources of scientific evidence pertaining to the topic of interest. In the present era of digitalisation and easy accessibility, there is an enormous amount of relevant data available, making it a challenge for the researcher to include all of it in the review.[ 9 ] It is crucial to structure this section intelligently so that the reader can grasp the argument of your study in relation to that of other researchers, while still demonstrating that your work is original and innovative. It is preferable to summarise each article in a paragraph, highlighting the details pertinent to the topic of interest. The progression of the review can move from the more general to the more focused studies, or a historical progression can be used to develop the story, without making it exhaustive.[ 1 ] The literature should include supporting data, disagreements and controversies. Five ‘C’s may be kept in mind while writing a literature review[ 10 ] [ Table 1 ].

Aims and objectives

The research purpose (or goal or aim) gives a broad indication of what the researcher wishes to achieve in the research. The hypothesis to be tested can be the aim of the study. The objectives related to parameters or tools used to achieve the aim are generally categorised as primary and secondary objectives.

Research design and method

The objective here is to convince the reader that the overall research design and methods of analysis will correctly address the research problem and to impress upon the reader that the methodology/sources chosen are appropriate for the specific topic. It should be unmistakably tied to the specific aims of your study.

In this section, the methods and sources used to conduct the research must be discussed, including specific references to sites, databases, key texts or authors that will be indispensable to the project. There should be specific mention about the methodological approaches to be undertaken to gather information, about the techniques to be used to analyse it and about the tests of external validity to which researcher is committed.[ 10 , 11 ]

The components of this section include the following:[ 4 ]

Population and sample

Population refers to all the elements (individuals, objects or substances) that meet certain criteria for inclusion in a given universe,[ 12 ] and sample refers to the subset of the population that meets the inclusion criteria for enrolment into the study. The inclusion and exclusion criteria should be clearly defined. The details pertaining to sample size are discussed in the article “Sample size calculation: Basic principles” published in this issue of IJA.

Data collection

The researcher is expected to give a detailed account of the methodology adopted for the collection of data, including the time frame required for the research. The methodology should be tested for its validity, and it should be ensured that, in pursuit of the results, the participants’ lives are not jeopardised. The author should anticipate and acknowledge any potential barriers and pitfalls in carrying out the research design and explain plans to address them, thereby avoiding lacunae due to incomplete data collection. If the researcher plans to acquire data through interviews or questionnaires, a copy of the questions used should be attached as an annexure to the proposal.

Rigor (soundness of the research)

This addresses the strength of the research with respect to its neutrality, consistency and applicability. Rigor must be reflected throughout the proposal.

Neutrality

Neutrality refers to the robustness of a research method against bias. The author should describe, in detail, the measures taken to avoid bias, such as blinding and randomisation, thus ensuring that the result obtained with the adopted method is not merely due to chance or influenced by confounding variables.

Consistency

Consistency considers whether the findings will be consistent if the inquiry was replicated with the same participants and in a similar context. This can be achieved by adopting standard and universally accepted methods and scales.

Applicability

Applicability refers to the degree to which the findings can be applied to different contexts and groups.[ 13 ]

Data analysis

This section deals with the reduction and reconstruction of the data and its analysis, including the sample size calculation. The researcher is expected to explain the steps adopted for coding and sorting the data obtained. The various tests to be used to analyse the data for robustness and significance should be clearly stated. The author should also mention the name of the statistician and the software to be used in the course of data analysis, and their contribution to the data analysis and sample size calculation.[ 9 ]
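For illustration only (this worked example is not from the article): a commonly used approximate formula for the number of participants per group when comparing two means, assuming a two-sided significance level \(\alpha\), power \(1-\beta\), an expected difference \(\delta\) and a common standard deviation \(\sigma\), is

\(n = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}}\)

For example, with \(\alpha = 0.05\) (so \(z_{1-\alpha/2} = 1.96\)), 80% power (\(z_{1-\beta} = 0.84\)), \(\sigma = 10\) and \(\delta = 5\), this gives \(n = 2 \times (2.80)^{2} \times 100/25 \approx 63\) participants per group, before any allowance for dropouts.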

Ethical considerations

Medical research introduces special moral and ethical problems that are not usually encountered by other researchers during data collection, and hence, the researcher should take special care in ensuring that ethical standards are met. Ethical considerations refer to the protection of the participants' rights (right to self-determination, right to privacy, right to autonomy and confidentiality, right to fair treatment and right to protection from discomfort and harm), obtaining informed consent and the institutional review process (ethical approval). The researcher needs to provide adequate information on each of these aspects.

Informed consent needs to be obtained from the participants (details discussed in further chapters), as well as the research site and the relevant authorities.

Budget

When the researcher prepares a research budget, he/she should predict and cost all aspects of the research and then add an additional allowance for unpredictable disasters, delays and rising costs. All items in the budget should be justified.

Appendices

Appendices are documents that support the proposal and application. The appendices will be specific to each proposal, but documents that are usually required include the informed consent form, supporting documents, questionnaires, measurement tools and patient information about the study in layman's language.

Citations

As with any scholarly research paper, you must cite the sources you used in composing your proposal. Although the words ‘references’ and ‘bibliography’ are different, they are often used interchangeably; here, the term refers to all references cited in the research proposal.

Successful, qualitative research proposals should communicate the researcher's knowledge of the field and method and convey the emergent nature of the qualitative design. The proposal should follow a discernible logic from the introduction to presentation of the appendices.

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.

  • Open access
  • Published: 10 April 2024

An integrated design concept evaluation model based on interval valued picture fuzzy set and improved GRP method

Qing Ma, Zhe Chen, Yuhang Tan & Jianing Wei

Scientific Reports, volume 14, Article number: 8433 (2024)


  • Computational methods
  • Computational science
  • Information technology

The objective of this research is to enhance the precision and efficiency of design concept assessments during the initial stages of new product creation. Design concept evaluation, which occurs at the end of the conceptual design phase, is a critical step in product development. The outcome of this evaluation significantly impacts the product's eventual success, as flawed design concepts are difficult to remedy in later stages. However, the evaluation of new product concepts is a procedure that encompasses elements of subjectivity and ambiguity. To deal with this problem, a novel decision-making method for choosing more logical new product concepts is introduced. The evaluation process is outlined in three main phases: the construction of an evaluation index system for design concept alternatives, the calculation of weights for evaluation criteria and decision-makers, and the selection of the best design concept alternative. These stages are composed of a hybrid method based on the Kano model, the multiplicative analytic hierarchy process (AHP) method, the entropy of the interval-valued picture fuzzy set (IVPFS), and an improved grey relational projection (GRP) method under IVPFS. The novel approach integrates the strength of interval-valued picture fuzzy numbers in handling vagueness, the advantages of multiplicative AHP, and the merit of the improved GRP method in modelling multi-criteria decision-making. Finally, the effectiveness of the proposed model is validated through comparisons with other models. The potential applications of this study include, but are not limited to, product development, industrial design, and innovation management, providing decision-makers with a more accurate and comprehensive design concept evaluation tool.

Introduction

New Product Development (NPD) is crucial for manufacturers to excel in competitive markets. As a key corporate function, NPD involves critical decision-making, with design concept evaluation being a standout step. This process assesses potential designs against criteria to select the most viable option. Since a large portion of a product's cost and quality is set in the conceptual phase, accurate evaluations are vital to avoid costly redesigns 1 , 2 . Effective evaluations also help managers quickly focus on promising ideas, streamlining development and boosting NPD success rates.

In the evaluation process of NPD, uncertainty and ambiguity arise from the different cognitive levels and experiences of DMs. These factors can negatively affect the evaluation process and the results of design concept evaluation. Therefore, how to eliminate information ambiguity is an important issue in product concept design evaluation 3 .

In order to resolve the ambiguity and uncertainty of DMs' evaluation information, previous researchers have proposed interval set 4 , rough set 5 and fuzzy set (FS) 6 theories. The interval number provides DMs with a clearer understanding of the meaning of design choices and helps them make sound decisions in the face of uncertainty and change. However, interval theory oversimplifies practical problems when dealing with uncertainty, ignoring the fuzziness and probability distribution of parameters. FS, along with its extended forms such as intuitionistic fuzzy sets (IFS) 7 , hesitant fuzzy sets (HFS) 8 , neutrosophic sets (NS) 9 , 10 , Pythagorean fuzzy sets 11 , and picture fuzzy sets (PFS) 12 , can compensate for the deficiencies of interval sets. The combination of interval theory and FS can express the degree of uncertainty of parameters within intervals using fuzzy membership functions. Compared to its extended forms, FS still falls short in describing the ambiguity and uncertainty of DMs' evaluation information. For instance, FS only considers membership degrees without taking into account non-membership, hesitation, or abstention degrees. This may be insufficient to fully describe DMs' preferences in practical situations, leading to inaccurate evaluation results.

In order to overcome the above issues, this study proposes a novel and reasonable framework to select design concept schemes. The main innovations and contributions of this study are organized as:

This is the first study to apply the mapping relation between CRs and the evaluation index to determine the criteria for design concept evaluation.

This study proposes a transformation of linguistic values into IVPFNs to express DM evaluation information, which addresses the uncertainty in the design concept evaluation process.

This study proposes an improved GRP method to determine the best alternative in the product design concept evaluation process.

The subsequent sections of this study are organized as follows: In Section “ Literature review ”, an overview of the relevant literature is presented. Section “ Basic preliminaries ” sets out various essential concepts within the IVPFS, introduces fundamental operating principles of IVPFN. Section “ Proposed methodology ” elaborates a distinctive framework for assessing and selecting design concept alternatives, incorporating the Kano model and an enhanced GRP method with IVPFS. To showcase the applicability of the proposed approach, a case study is expounded upon in Section “ Case study ”. Section “ Conclusion ” summarizes the findings of the study and explores potential future applications.

Literature review

Our research aims to assess design concept alternatives using the Kano model, IVPFS, and an improved GRP method. Consequently, the literature review is divided into three sections: (1) research on the Kano model, (2) research on uncertainty and fuzzy modeling in evaluation information, and (3) research on ranking the schemes through the improved GRP method under IVPFS.

Kano and his colleagues first put forth the Kano model 13 . The Kano model aims to categorize the features of a product or service based on their ability to meet customer needs. In practical terms, the properties of the Kano model can be classified into five groups, as illustrated in Fig.  1 and Table 1 .

figure 1

Kano model.

Applying the Kano model to define quality categories aids designers in understanding customers’ actual requirements. This, in turn, enables more precise control over quality and satisfaction during the product design and development process 14 . Wu et al. 15 proposed that an evaluation procedure based on the Kano model is mainly to help identify attractive customer requirements (CRs) through the use of the Kano model. To capture CRs and provide inspiring insights for emotional design from the perspective of businesses, Jin et al. 16 created the Kansei-integrated Kano model. In our research, we utilize the Kano model to categorize CRs, identify the ultimate CRs, and establish the evaluation index system by mapping the connection between CRs and attributes.

Uncertainty and fuzzy modeling in evaluation information

In the process of design concept evaluation, the fuzziness of individual experience and knowledge of DMs leads to uncertainty in evaluation information 17 . To ensure the accuracy of evaluation results, interval theory and various FS have been introduced, including IFS, NS, Pythagorean fuzzy sets and PFS.

Interval theory represents fuzziness by defining upper and lower bounds. This approach describes DMs' uncertainty about evaluation information more intuitively and is especially suitable for situations in which precise values are difficult to define. Jiang et al. 18 proposed a new interval comparison relation, applied it to interval number programming, and established two transformation models for linear and nonlinear interval number programming problems to solve practical engineering problems. Yao et al. 19 defined an interval number ordering method considering symmetry axis compensation and verified its feasibility and validity through examples. However, interval theory also suffers from insufficient accuracy, as it typically represents uncertainty through ranges and fails to provide detailed fuzzy membership functions. FS uses membership functions to model fuzziness, but its simplified treatment of varying degrees of fuzziness limits its expressive power when dealing with complex design information. IFS emphasizes the subjective cognition and experience of DMs. Wang et al. 20 combined intuitionistic fuzzy sets with the VIKOR method for the project investment decision-making process. Zeng et al. 21 proposed the weighted intuitionistic fuzzy IOWA weighted average operator and developed a procedure for solving multi-attribute group decision-making problems with it. Nevertheless, IFS has certain shortcomings, such as the inability to accurately express DMs' attitudes of affirmation, neutrality, negation, and rejection. NS theory has more extensive applications than FS and IFS theory; however, the values of its three membership functions are subsets of non-standard unit intervals, making it difficult to apply to practical problems. Compared to the others, PFS, as a novel form of FS, introduces the concepts of membership degree, non-membership degree, neutrality degree, and abstention degree, which more comprehensively capture the psychological state of DMs during evaluation. The membership degree describes the degree to which an element belongs to the fuzzy set, the non-membership degree reflects the degree to which it does not, and the abstention degree expresses the DMs' uncertainty about certain elements. This comprehensive treatment of information makes PFS more adaptable, allowing it to reflect the psychological state of DMs in actual decision-making situations more accurately and comprehensively and to provide more precise information support for design concept evaluation. Kahraman 22 proposed proportion-based models for PFS, facilitating its use by incorporating accurate data that more effectively reflects the judgments of DMs. Luo et al. 23 introduced a novel distance metric for PFS employing three-dimensional divergence aggregation, which is then utilized to address MCDM problems. Wang et al. 24 devised a multi-attributive border approximation area comparison method based on prospect theory in a picture fuzzy environment, demonstrating the algorithm's applicability and advantages through a numerical example.

However, in MCDM, due to the limitations of DMs' understanding of the decision object and the ambiguity of the decision environment, DMs are often faced with situations that are difficult to define precisely, and thus prefer to give an interval number. In order to better deal with this challenge, the IVPFS has been proposed 12 . The innovation of IVPFS lies in its ability to represent membership degree, non-membership degree, neutrality degree, and abstention degree in the form of interval numbers 25 , 26 . In contrast, the interval-valued Pythagorean fuzzy set is composed of three parts: membership degree, non-membership degree, and hesitancy degree 27 , 28 . IVPFS can better describe and express the uncertainty and fuzziness of DMs in practical decision-making. This theory is proposed to improve the credibility of decision-making outcomes thus enhancing the usefulness and adaptability of DMs participation in MCDM problems. Cao et al. 29 proposed an innovative similarity measure for IVPFS, taking into account the impact of the margin of the degree of refusal membership. Mahmood et al. 30 introduced the interval-valued picture fuzzy frank averaging operator, and discussed their properties. The relationship between IVPFS and other sets is shown in Table 2 .

Improved grey relational projection method

In the process of evaluating design concepts, one must choose a favorite from a multitude of options, a task that constitutes an MCDM problem. Traditional methods for solving MCDM problems include AHP, TOPSIS, EDAS, and VIKOR, each of which has the advantage of targeting specific decision scenarios. However, these methods generally have limitations when dealing with the early stages of design concepts. As a multi-factor statistical analysis method, the GRP method excels at handling correlations between attributes. The main reasons for applying the GRP method to design concept evaluation are as follows. The GRP method's key benefits include easy-to-understand calculations, high accuracy, and reliance on actual data. In the decision-making process of design concept evaluation, the attributes are not independent of one another; although the internal relationship is not explicit, some correlation actually exists, which is in essence a grey relationship. Therefore, decision analysis of such a system is actually a grey MCDM problem. Decision making in the GRP approach is a mapping of the set of decision metrics: once the set of attributes is identified, the alternatives can be evaluated. This approach combines the effects of the entire decision indicator space. In particular, when the attributes have discrete sample data, the GRP method avoids unilateral bias, i.e., the bias that arises from comparing a single attribute for each alternative, and thus integrates the analysis of the relationships between the indicators, reflecting the impact of the entire indicator space. Since most GRP methods are based on a single base point (the ideal alternative), our study builds on the existing literature and improves the GRP method by determining the final score for each design alternative based on the IVPFS.

Table 3 contains a summary comparing the proposed technique with other multi-criteria concept evaluation approaches. These scholars investigated a number of potential aspects that could influence the decision-making process. However, significant obstacles remain in concept evaluation, which is the focus of this paper's research. To address the above issues thoroughly, a design concept evaluation technique is proposed that incorporates the Kano model, the mapping relation, IVPFS, and the improved GRP method to produce the best concept.

Basic preliminaries

We review several fundamental ideas in this section to provide some required background knowledge.

Construct the index of design concept evaluation

The Kano model finds extensive application in the realm of MCDM. The creation of the design concept evaluation indicator system, as proposed in this paper, primarily involves the following steps. First, relevant CRs for evaluating the design concept scheme are gathered. Then, employing the Kano model, requirement attributes are assessed, filtering out less critical requirements and retaining the most important ones. Ultimately, the evaluation index system for the design concept is formulated by establishing the mapping relationship between requirements and the evaluation indices.

Initially, we gathered and organized the primary CRs for the design concept schemes, as illustrated in Table 4 .

Next, we designed a questionnaire for CRs considering both a product with and without the same functional requirement. Each question in the questionnaire includes a description of the functional requirement to aid customers in comprehending its significance. To ensure uniform understanding among users, we provided consistent explanations for the meaning of the options in the questionnaire. This facilitates easy comprehension for users, allowing them to indicate their responses effectively. The design of the Kano questionnaire is presented in Table 5 .

Subsequently, we processed the feedback data from the returned questionnaires. Quantifying the two dimensions, namely “with function” and “without function,” we obtained an overlapping result by referencing Table 6 for the options corresponding to the scores. This approach allows us to discern the type of CRs.
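To make this classification step concrete, the short sketch below shows how a pair of questionnaire answers can be mapped to a Kano category. The paper's own answer wording and scoring tables (Tables 5 and 6) are not reproduced here, so the sketch uses the widely cited standard Kano evaluation table; the answer labels and response data are hypothetical and the code is illustrative rather than the authors' exact procedure.

```python
# A minimal sketch of Kano classification based on the standard Kano
# evaluation table (an assumption; the paper's own Tables 5-6 may differ).
from collections import Counter

ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: answer when the product HAS the feature; columns: when it LACKS it.
# A=attractive, O=one-dimensional, M=must-be, I=indifferent, R=reverse, Q=questionable
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],  # like
    ["R", "I", "I", "I", "M"],  # must-be
    ["R", "I", "I", "I", "M"],  # neutral
    ["R", "I", "I", "I", "M"],  # live-with
    ["R", "R", "R", "R", "Q"],  # dislike
]

def classify(with_feature: str, without_feature: str) -> str:
    """Kano category for one respondent's pair of answers to a requirement."""
    return KANO_TABLE[ANSWERS.index(with_feature)][ANSWERS.index(without_feature)]

def dominant_category(responses) -> str:
    """Modal category over all respondents for one customer requirement."""
    return Counter(classify(w, wo) for w, wo in responses).most_common(1)[0][0]

print(classify("like", "dislike"))                       # 'O'
print(dominant_category([("like", "dislike")] * 70 +
                        [("neutral", "dislike")] * 50))  # 'O'
```

Requirements whose dominant category is A, M, or O would then be retained as core requirements, in line with the selection rule described later in the case study.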

The CRs established in this study are derived from an analysis of issues identified by research customers during product use in specific scenarios. The fulfillment of these requirements indicates customer satisfaction with the product’s usage. Consequently, the CRs serve as indicator factors for users to assess the design concept. The mapping relationship between the two is depicted in Fig.  2 .

figure 2

The mapping relation between CRs and the evaluation index.

Ultimately, by excluding indicators that fall outside the scope of CRs, the evaluation index system for design concept alternatives based on CRs can be established.

The multiplicative AHP method

AHP is widely used for attribute weight determination, relying on an additive value function and making decisions through pairwise comparisons. However, AHP may encounter rank reversals, potentially leading to incorrect results. An enhanced method, the multiplicative AHP, addresses this by introducing a structured hierarchical approach, mitigating the rank reversal issues associated with the original AHP 46 . In the multiplicative AHP method, DMs are tasked with comparing schemes in pairs and rendering decisions based on attributes. Subsequently, these judgments are aggregated, and the criteria weights are calculated using the compiled information 47 . The specific steps of the multiplicative AHP approach are as follows. Assume there are \(t\) experts in the decision-making group \(E\) , denoted as \(E=\{{e}_{1},{e}_{2},\dots ,{e}_{t}\}\) . \({A}_{j}\) and \({A}_{k}\) are two alternatives; the expert's preferences for \({A}_{j}\) and \({A}_{k}\) are represented by two stimuli \({S}_{j}\) and \({S}_{k}\) , and expert \(e\) in group \(E\) is assigned to make pairwise comparisons according to an attribute using the linguistic information in Table 7 . The linguistic information is then converted into numerical scales denoted as \({\delta }_{jke}\) . Comparisons made by expert \(e\) are denoted as \({\delta }_{12e}\) , \({\delta }_{13e}\) ,…, \({\delta }_{23e}\) , \({\delta }_{24e}\) , … , \({\delta }_{(t-1)(t)e}\) . To eliminate bias caused by individual emotional factors, comparisons involving the expert themselves are treated as invalid and are not included in the evaluation. Hence, for expert group \(E\) , the maximum number of valid judgements is \((t-1)(t-2)/2\) .

Step 1 : From the judgements made by the experts in group \(E\) , establish the decision matrix \({\{r}_{jke}\}\) by combining the judgements of the experts, denoted as:

Here the variable \(\gamma\) denotes a scale parameter commonly equal to \({\text{ln}}2\) , and \(j=\mathrm{1,2},\dots ,t\) .

Step 2 : Determine the approximate vector \(p\) of stimulus values by the logarithmic least-squares method:

where \({S}_{jk}\) denotes the expert set who judged \({S}_{j}\) with respect to \({S}_{k}\) . Let \({\lambda }_{j}={\text{ln}}{p}_{j}\) , \({\lambda }_{k}={\text{ln}}{p}_{k}\) and \({q}_{jke}={\text{ln}}{r}_{jke}=\upgamma {\delta }_{jke}\) . Rewrite Eq. ( 2 ) with these substitutions as

Let \({N}_{jk}\) be the cardinality of the expert set \({S}_{jk}\) , Eq. ( 3 ) can be transferred to

If the comparisons including the expert are not considered, then

As the maximum pairwise comparison is \(\left(t-1\right)\left(t-2\right)\) , Eq. ( 4 ) can be rewritten as

A simplified style of the equation is

Step 3 : From Table 7 , for \({A}_{k}\) and \({A}_{j}\) , the sum of the numerical scale \({\delta }_{jke}\) and \({\delta }_{kje}\) is equal to 0, which means \({q}_{jky}=-{q}_{kjy}\) . Hence \({q}_{jjy}=0\) , so let \({\sum }_{k=1,k\ne j}^{t}{{\text{w}}}_{k}=0\) . Equation ( 7 ) can be further simplified and \({\lambda }_{j}\) can be determined as

Hence, the \({p}_{j}\) can be computed as:

Step 4 : Calculate the normalized weight \({w}_{j}\) determined by multiplicative AHP as
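The equations referenced in Steps 1–4 are not reproduced above, so the following is only a minimal sketch of the weighting step, assuming the logarithmic least-squares (geometric-mean) solution of the multiplicative AHP described in the text, with the scale parameter \(\gamma = \ln 2\); the judgement matrix in the example is hypothetical and the treatment of self-comparisons is simplified.

```python
# Illustrative sketch of multiplicative AHP weighting (not the paper's exact
# Eqs. (1)-(10)): r_jke = exp(gamma * delta_jke), stimulus values from the
# average log-preference, weights by normalization.
import numpy as np

GAMMA = np.log(2)  # scale parameter, commonly ln 2

def multiplicative_ahp_weights(delta):
    """delta[e][j][k]: numerical-scale judgement of stimulus j vs k by expert e
    (antisymmetric: delta[e][k][j] = -delta[e][j][k], diagonal = 0)."""
    delta = np.asarray(delta, dtype=float)
    q = GAMMA * delta                 # q_jke = ln r_jke
    lam = q.mean(axis=(0, 2))         # average log-preference of each stimulus j
    p = np.exp(lam)                   # approximate stimulus values
    return p / p.sum()                # normalized weights

# Hypothetical example: two experts comparing three decision-makers
delta = [
    [[0, 2, 4], [-2, 0, 2], [-4, -2, 0]],
    [[0, 2, 2], [-2, 0, 0], [-2, 0, 0]],
]
print(multiplicative_ahp_weights(delta).round(3))
```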

Interval-valued picture fuzzy set

In 2013, Cuong et al. proposed the concept of the IVPFN to quantify vague DMs' perceptions based on the basic principles of IVPFS. IVPFNs capture the genuine insights of DMs more accurately, thus increasing the objectivity of the evaluation data. According to Cuong et al., the definition of IVPFS is given below.

Definition 1

12 Consider a designated domain of discourse denoted as \(X\) , where U [0,1] signifies the set of subintervals of the interval [0,1], and \(X\ne \emptyset\) is a given set. In this study, the IVPFS is defined as follows:

The intervals \({\varrho }_{B}\left(x\right),{\xi }_{B}\left(x\right),{\upsilon }_{B}\left(x\right)\) represent the positive, negative and neutral membership degrees of \(B\) . Additionally, \({\varrho }_{B}^{L}\left(x\right), {\varrho }_{B}^{U}\left(x\right), {\xi }_{B}^{L}\left(x\right), {\xi }_{B}^{U}\left(x\right), {\upsilon }_{B}^{L}\left(x\right), {\upsilon }_{B}^{U}\left(x\right)\) represent their lower and upper end points. Consequently, the IVPFS \(B\) can be expressed as:

where \({\varrho }_{B}^{L}\left(x\right)\ge 0\), \({\xi }_{B}^{L}\left(x\right)\ge 0\), \({\upsilon }_{B}^{L}\left(x\right)\ge 0\) and \(0\le {\varrho }_{B}^{U}\left(x\right)+{\xi }_{B}^{U}\left(x\right)+{\upsilon }_{B}^{U}\left(x\right)\le 1\). The refusal membership degree, expressed by \({\sigma }_{B}\), can be calculated using Eq. ( 13 ).

Definition 2

48 Let that \({{\text{B}}}_{{\text{i}}}=(\left[{\varrho }_{{\text{i}}}^{{\text{L}}},{\varrho }_{{\text{i}}}^{{\text{U}}}\right],\left[{\xi }_{{\text{i}}}^{{\text{L}}},{\xi }_{{\text{i}}}^{{\text{U}}}\right],\left[{\upsilon }_{{\text{i}}}^{{\text{L}}},{\upsilon }_{{\text{i}}}^{{\text{U}}}\right])({\text{i}}=\mathrm{1,2},\ldots ,{\text{n}})\) be the IVPFN, \(\Omega\) is the set of IVPFNs. \(\upomega ={\left({\upomega }_{1},{\upomega }_{2},\ldots ,{\upomega }_{{\text{n}}}\right)}^{{\text{T}}}\) as the weight vector of them, a mapping IVPFOWIA: \({\Omega }^{{\text{n}}}\to\Omega\) of dimension n is an IVPFOWIA operator, with \(\sum_{i=1}^{n}{\omega }_{i}=1\) , \({\omega }_{i}=\left[\mathrm{0,1}\right]\) . Then,

Definition 3

49 Let \(A={(\varrho }_{A}\left(x\right),{\xi }_{A}\left(x\right),{\upsilon }_{A}\left(x\right))\) and \({B=(\varrho }_{B}\left(x\right),{\xi }_{B}\left(x\right),{\upsilon }_{B}\left(x\right))\) be two IVPFNs and let \(\lambda\) be a scalar value with \(\lambda >0\) . The basic and significant operations of IVPFS are as follows:

\(A\oplus B=\left(\left[{\varrho }_{A}^{L}+{\varrho }_{B}^{L}-{\varrho }_{A}^{L}{\varrho }_{B}^{L},{\varrho }_{A}^{U}+{\varrho }_{B}^{U}-{\varrho }_{A}^{U}{\varrho }_{B}^{U}\right],\left[{\xi }_{A}^{L}{\xi }_{B}^{L},{\xi }_{A}^{U}{\xi }_{B}^{U}\right],\left[{\upsilon }_{A}^{L}{\upsilon }_{B}^{L},{\upsilon }_{A}^{U}{\upsilon }_{B}^{U}\right]\right)\)

\(A\otimes B=([{\varrho }_{A}^{L}{\varrho }_{B}^{L},{\varrho }_{A}^{U}{\varrho }_{B}^{U}],[{\xi }_{A}^{L}+{\xi }_{B}^{L}-{\xi }_{A}^{L}{\xi }_{B}^{L},{\xi }_{A}^{U}+{\xi }_{B}^{U}-{\xi }_{A}^{U}{\xi }_{B}^{U}],[{\upsilon }_{A}^{L}+{\upsilon }_{B}^{L}-{\upsilon }_{A}^{L}{\upsilon }_{B}^{L},{\upsilon }_{A}^{U}+{\upsilon }_{B}^{U}-{\upsilon }_{A}^{U}{\upsilon }_{B}^{U}])\)

\({A}^{\lambda }=\left(\left[{\left({\varrho }_{A}^{L}\right)}^{\lambda },{\left({\varrho }_{A}^{U}\right)}^{\lambda }\right],\left[1-{\left(1-{\xi }_{A}^{L}\right)}^{\lambda },1-{\left(1-{\xi }_{A}^{U}\right)}^{\lambda }\right],\left[1-{\left(1-{\upsilon }_{A}^{L}\right)}^{\lambda },1-{\left(1-{\upsilon }_{A}^{U}\right)}^{\lambda }\right]\right)\)

\(\lambda A=\left(\left[1-{\left(1-{\varrho }_{A}^{L}\right)}^{\lambda },1-{\left(1-{\varrho }_{A}^{U}\right)}^{\lambda }\right],\left[{({\xi }_{A}^{L})}^{\lambda },{({\xi }_{A}^{U})}^{\lambda }\right],\left[{({\upsilon }_{A}^{L})}^{\lambda },{({\upsilon }_{A}^{U})}^{\lambda }\right]\right)\)
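The \(A\oplus B\) and \(\lambda A\) rules above translate directly into code. The sketch below implements exactly those two operations for an IVPFN stored as three lower/upper intervals, in the order listed in Definition 1 (positive, negative, neutral membership); the numeric values in the example are hypothetical.

```python
# A runnable sketch of the IVPFN operations of Definition 3 (addition and
# scalar multiplication only).
from dataclasses import dataclass

@dataclass
class IVPFN:
    rho: tuple   # (rho_L, rho_U)  positive membership interval
    xi: tuple    # (xi_L, xi_U)    negative membership interval
    ups: tuple   # (ups_L, ups_U)  neutral membership interval

    def __add__(self, other):      # A ⊕ B
        return IVPFN(
            tuple(a + b - a * b for a, b in zip(self.rho, other.rho)),
            tuple(a * b for a, b in zip(self.xi, other.xi)),
            tuple(a * b for a, b in zip(self.ups, other.ups)),
        )

    def scale(self, lam):          # λA
        return IVPFN(
            tuple(1 - (1 - a) ** lam for a in self.rho),
            tuple(a ** lam for a in self.xi),
            tuple(a ** lam for a in self.ups),
        )

a = IVPFN((0.5, 0.6), (0.1, 0.2), (0.1, 0.2))
b = IVPFN((0.4, 0.5), (0.2, 0.3), (0.1, 0.1))
print(a + b)         # A ⊕ B
print(a.scale(0.3))  # 0.3·A
```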

Definition 4

30 Let \({B}_{i}=(\left[{\varrho }_{{\text{i}}}^{{\text{L}}},{\varrho }_{{\text{i}}}^{{\text{U}}}\right],\left[{\xi }_{{\text{i}}}^{{\text{L}}},{\xi }_{{\text{i}}}^{{\text{U}}}\right],\left[{\upsilon }_{{\text{i}}}^{{\text{L}}},{\upsilon }_{{\text{i}}}^{{\text{U}}}\right])\) be an IVPFN, then the score function \(SF\left({B}_{i}\right)\) and the accuracy function \(AF\left({B}_{i}\right)\) of the IVPFNs can be described as:

Based on the \(SF\left({B}_{i}\right)\) and \(AF\) of each IVPFN, the comparison rules 50 between two IVPFNs are given as follows:

For any two IVPFNs \({B}_{1}, {B}_{2}\) ,

If \(SF\left({B}_{1}\right)> SF\left({B}_{2}\right)\) , then \({B}_{1}>{ B}_{2}\) ;

If \(SF\left({B}_{1}\right)= SF\left({B}_{2}\right)\) , then

If \(AF\left({B}_{1}\right)> AF\left({B}_{2}\right)\) , then \({B}_{1}>{ B}_{2};\)

If \(AF\left({B}_{1}\right)= AF\left({B}_{2}\right)\) , then \({B}_{1}={ B}_{2}\) .
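The comparison rules above amount to a lexicographic ordering on \((SF, AF)\). A minimal sketch is given below; the score and accuracy functions are passed in as callables because their exact forms follow the cited references and are not reproduced here, and the example values are hypothetical.

```python
# Ranking IVPFNs by score first, accuracy as tie-breaker (Definition 4 rules).
def rank_ivpfns(items, sf, af):
    """Sort from best to worst: higher SF first, AF breaks ties."""
    return sorted(items, key=lambda b: (sf(b), af(b)), reverse=True)

# Example with already-computed (name, SF, AF) triples standing in for IVPFNs
scored = [("B1", 0.42, 0.80), ("B2", 0.42, 0.65), ("B3", 0.50, 0.10)]
ranked = rank_ivpfns(scored, sf=lambda b: b[1], af=lambda b: b[2])
print([name for name, *_ in ranked])   # ['B3', 'B1', 'B2']
```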

Definition 5

Let \({B}_{1}=\left(\left[{\varrho }_{1}^{{\text{L}}},{\varrho }_{1}^{{\text{U}}}\right], \left[{\xi }_{1}^{{\text{L}}},{\xi }_{1}^{{\text{U}}}\right], \left[{\upsilon }_{1}^{{\text{L}}},{\upsilon }_{1}^{{\text{U}}}\right]\right)\) and \({B}_{2}=(\left[{\varrho }_{2}^{{\text{L}}},{\varrho }_{2}^{{\text{U}}}\right], \left[{\xi }_{2}^{{\text{L}}},{\xi }_{2}^{{\text{U}}}\right],\left[{\upsilon }_{2}^{{\text{L}}},{\upsilon }_{2}^{{\text{U}}}\right])\) represent two IVPFNs, The Hamming distance between \({B}_{1}\) and \({B}_{2}\) is defined as follows:

The Euclidean distance of \({B}_{1}\) and \({B}_{2}\) is as follows:
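The explicit distance expressions (Eqs. 16–17) are not reproduced above. As a hedged illustration, the sketch below uses the commonly adopted normalized forms that average the six interval end-point differences of two IVPFNs; the paper's exact formulas may weight the terms differently.

```python
# Illustrative normalized Hamming and Euclidean distances between two IVPFNs,
# each given as six end points (rho_L, rho_U, xi_L, xi_U, ups_L, ups_U).
def hamming(b1, b2):
    return sum(abs(x - y) for x, y in zip(b1, b2)) / 6

def euclidean(b1, b2):
    return (sum((x - y) ** 2 for x, y in zip(b1, b2)) / 6) ** 0.5

b1 = (0.5, 0.6, 0.1, 0.2, 0.1, 0.2)
b2 = (0.4, 0.5, 0.2, 0.3, 0.1, 0.1)
print(round(hamming(b1, b2), 3), round(euclidean(b1, b2), 3))
```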

The entropy of interval-valued picture fuzzy set

In this section, the entropy of IVPFS method is used to calculate criteria weights 48 . This method can handle uncertainty more flexibly and effectively capture measurement errors and fuzziness in practical problems by describing the membership degree of criteria through intervals. The specific calculation formula is as follows:

Finally, use Eq. ( 19 ) to calculate the weight of the criteria.

for all \(j=\mathrm{1,2},\ldots ,n.\)
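Equations (18)–(19) are not reproduced above, so the sketch below only illustrates the final weighting step that is standard in entropy-based methods: criteria whose evaluations are less fuzzy (lower entropy) receive larger weights. How the per-criterion entropy of an interval-valued picture fuzzy column is measured follows the paper's Eq. (18) and is treated here as a given input; the normalization rule shown is a common assumption, not necessarily the paper's exact Eq. (19).

```python
# Illustrative entropy-to-weight conversion: w_j = (1 - e_j) / sum_k (1 - e_k).
def entropy_weights(e):
    """e: list of per-criterion entropies in [0, 1] (computed elsewhere)."""
    d = [1 - ej for ej in e]          # degree of divergence of each criterion
    total = sum(d)
    return [dj / total for dj in d]

print(entropy_weights([0.6, 0.8, 0.3]))  # the least fuzzy criterion gets most weight
```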

Proposed methodology

In this section, we introduce a new framework for selecting yacht design alternatives based on IVPFS and the enhanced GRP technique. The procedural phases of the IVPFS-Improved GRP method are illustrated in Fig.  3 , comprising three stages: (1) Construct the collective IVPF decision matrix, (2) Enhance the GRP method under IVPFS theory, and (3) case study. In phase 1, the evaluation index system of the design concept is established using the Kano model, and the weight of each DM is computed through the multiplicative AHP method. With the help of IVPFOWIA, the collective IVPF decision matrix is formulated. In phase 2, the GRP technique is improved within the context of IVPFS to calculate the relative grey relational projection for each alternative. Finally, in phase 3, leveraging the outcomes from phases 1 and 2, the final ranking of different design concept schemes is determined.

figure 3

The process of the improved GRP method based on IVPFS.

For the MCDM problem of design concept evaluation, we denote the set of DMs as \(D=\left\{{D}_{1},{D}_{2},\dots ,{D}_{k}\right\}\) , the set of design criteria \(C=\left\{{C}_{1},{C}_{2},\cdots ,{C}_{n}\right\}\) , and the set of design schemes as \(A=\left\{{A}_{1},{A}_{2},\dots ,{A}_{m}\right\}\) . The weights of design criteria are presented by \(w=({w}_{1},{w}_{2},\cdots ,{w}_{j})\) , where \(\sum_{{\text{j}}=1}^{{\text{n}}}{{\text{w}}}_{{\text{j}}}=1, 0\le {{\text{w}}}_{{\text{j}}}\le 1\) . The next sections discuss the specifics of the established design alternative evaluation model based on these assumptions.

Phase 1: Construct the collective IVPF decision matrix

Step 1 : Establish the evaluation index system of the design concept using the Kano model.

Step 2 : Generate the IVPF decision matrix for each DM.

where \({r}_{ij}^{(k)}=\left\{\left[{\varrho }_{ij}^{L(k)}, {\varrho }_{ij}^{U(k)}\right],\left[{\xi }_{ij}^{L(k)}, {\xi }_{ij}^{U(k)}\right],\left[{\upsilon }_{ij}^{L(k)}, {\upsilon }_{ij}^{U(k)}\right]\right\}\) represents an IVPFN. This IVPFN signifies the evaluation value of the alternative \({A}_{i}\) concerning the criterion \({C}_{j}\) as provided by the DM \({D}_{k}\in D\) . And

To specify each \({r}_{ij}^{(k)}\) , a 5-scale evaluation was conducted throughout this process. Table 8 illustrates the details of these linguistic scales and their IVPFN equivalents.

Step 3 : Apply the multiplicative AHP approach to determine the weight for each DM.

In this stage, we calculate the weight of each DM using the multiplicative AHP approach.

Step 4 : Build the collective IVPF decision matrix.

To improve the GRP method in the process of group decision-making, it is essential to aggregate all individual decision matrices \({R}^{(k)}={\left({r}_{ij}^{(k)}\right)}_{m\times n}\) into the collective IVPF decision matrix \(\widetilde{R}={\left({\widetilde{r}}_{ij}\right)}_{m\times n}\) . This cluster is achieved through the application of the IVPFOWIA operator, as specified in Eq. ( 14 ):
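Equation (14), the IVPFOWIA operator itself, is not reproduced above. As a rough illustration of the aggregation idea, the sketch below forms a weighted interval-valued picture fuzzy average \(\oplus_{k}\,\omega_{k}\,r_{ij}^{(k)}\) using only the Definition 3 operations; the ordering step of the full IVPFOWIA operator is omitted, so this is a simplification under stated assumptions rather than the paper's exact operator.

```python
# Illustrative weighted aggregation of one cell of the decision matrix across
# decision-makers. IVPFN values are six-tuples
# (rho_L, rho_U, xi_L, xi_U, ups_L, ups_U).
def scale(r, lam):                       # λ·r (Definition 3)
    rl, ru, xl, xu, ul, uu = r
    return (1 - (1 - rl) ** lam, 1 - (1 - ru) ** lam,
            xl ** lam, xu ** lam, ul ** lam, uu ** lam)

def add(a, b):                           # a ⊕ b (Definition 3)
    return (a[0] + b[0] - a[0] * b[0], a[1] + b[1] - a[1] * b[1],
            a[2] * b[2], a[3] * b[3], a[4] * b[4], a[5] * b[5])

def aggregate(values, weights):
    """values: one IVPFN per DM for a fixed (i, j); weights: DM weights summing to 1."""
    acc = scale(values[0], weights[0])
    for v, w in zip(values[1:], weights[1:]):
        acc = add(acc, scale(v, w))
    return acc

r1 = (0.5, 0.6, 0.1, 0.2, 0.1, 0.2)      # hypothetical evaluations by two DMs
r2 = (0.4, 0.5, 0.2, 0.3, 0.1, 0.1)
print(tuple(round(x, 3) for x in aggregate([r1, r2], [0.6, 0.4])))
```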

Phase 2: Improve GRP method under IVPFS

The traditional GRP method is based on a single base point, and the similarity between the alternatives and the ideal solution is determined by calculating the cosine of the angle between them. Our research improves the GRP method on the basis of the existing literature by calculating the relative grey relational projection of each yacht design alternative under IVPFS theory and using it to select the optimal design alternative. The extended GRP method not only improves the accuracy of the evaluation but also enhances the rationality and effectiveness of decision-making. The specific steps of the improved GRP method are as follows:

Step 1 : Normalize the decision-making evaluation matrix. In MCDM, we distinguish between two types of criteria: benefit type and cost type. Consequently, the collective evaluation matrix \(\widetilde{R}={\left({\widetilde{r}}_{ij}\right)}_{m\times n}\) is transformed into a normalized decision matrix \({\widetilde{R}}^{*}={\left({\widetilde{r}}_{ij}^{*}\right)}_{m\times n}\) , where:

For \(i=\mathrm{1,2},\cdots m,j=\mathrm{1,2}\cdots ,n\) .

Step 2 : Based on the normalized evaluation decision matrix obtained by Eq. ( 23 ):

(a) Determine the interval-valued picture fuzzy positive ideal solution (IVPF-PIS): \({{\text{R}}}^{+}\) can be obtained using Eq. ( 24 ):

(b) Determine the interval-valued picture fuzzy negative ideal solution (IVPF-NIS), \({{\text{R}}}^{-}\) can be determined using Eq. ( 25 ):

Step 3 : Calculate positive and negative correlation matrices.

Represent the grey relational coefficient matrix between the i-th sample and the positive (negative) ideal sample as \({\varphi }^{+}\) ( \({\varphi }^{-}\) ), where \({\varphi }_{ij}^{+}\) and \({\varphi }_{ij}^{-}\) are the individual elements:

where \(\rho\) is referred to as the resolution coefficient, serving to modify the scale of the comparison environment. \(\rho =0\) implies the absence of a surrounding environment, while \(\rho = 1\) signifies no alteration in the surrounding environment. Typically, \(\rho = 0.5\) . The term \(d\left({\widetilde{r}}_{ij},{\widetilde{r}}_{j}^{+(-)}\right)\) represents the distance between \({\widetilde{r}}_{ij}\) and \({\widetilde{r}}_{j}^{+}({\widetilde{r}}_{j}^{-})\) , calculable using Eq. ( 17 ).

Through the \({\varphi }_{ij}^{+\left(-\right)}\left(i=\mathrm{1,2},\cdots ,m,j=\mathrm{1,2},\cdots ,n\right)\) , we can construct the two grey relational coefficient matrices:

Step 4 : Construct the two weighted grey relational coefficient matrices.

Two weighted grey relational coefficient matrices \({\psi }^{+}={\left({\psi }_{ij}^{+}\right)}_{m\times n}\) and \({\psi }^{-}={\left({\psi }_{ij}^{-}\right)}_{m\times n}\) can be calculated by Eqs. ( 31 ) and ( 32 ), respectively.

where \({\psi }_{ij}^{+}={w}_{j}{\varphi }_{ij}^{+}\) , \({\psi }_{ij}^{-}={w}_{j}{\varphi }_{ij}^{-}\) . \({w}_{j}\) is the weight of the criterion \({C}_{j}\) , we can calculate it by Eqs. ( 18 ) and ( 19 ).
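Equations (26)–(32) are not reproduced above. The sketch below assumes the classical Deng-style grey relational coefficient, \(\varphi_{ij} = (d_{\min} + \rho\, d_{\max})/(d_{ij} + \rho\, d_{\max})\), applied to the IVPFN distances \(d(\widetilde{r}_{ij}, \widetilde{r}_{j}^{+(-)})\) described in the text, followed by the weighting \(\psi_{ij} = w_{j}\varphi_{ij}\) of Step 4; the distance values in the example are hypothetical, so treat this as illustrative rather than the paper's exact formulas.

```python
# Illustrative grey relational coefficient and weighting (Steps 3-4), assuming
# the classical Deng coefficient with resolution coefficient rho (default 0.5).
import numpy as np

def grey_relational_matrix(dist, rho=0.5):
    """dist[i][j]: distance of alternative i to the ideal value of criterion j."""
    dist = np.asarray(dist, dtype=float)
    d_min, d_max = dist.min(), dist.max()
    return (d_min + rho * d_max) / (dist + rho * d_max)

def weighted(phi, w):
    """psi_ij = w_j * phi_ij."""
    return phi * np.asarray(w)

dist_to_pis = [[0.05, 0.10], [0.20, 0.00]]   # hypothetical distances to the PIS
phi_plus = grey_relational_matrix(dist_to_pis)
print(weighted(phi_plus, [0.6, 0.4]).round(3))
```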

Step 5 : Calculate the grey relational projections of each scheme \({A}_{i} (i = \mathrm{1,2},\dots ,m)\) on the IVPF-PIS and IVPF-NIS, respectively.

Phase 3: Sort according to the final results and select the best design scheme

The relative grey relational projection of every alternative to the IVPF-PIS \({\psi }_{0}^{+}=\left({w}_{1},{w}_{2},\ldots ,{w}_{n}\right)\) is defined as follows:

The alternatives are ranked according to the values of \({\tau }_{i}\) . The relative closeness \({\tau }_{i}\) signifies the proximity of scheme \({A}_{i}\) to the ideal scheme: the greater the relative closeness, the better the scheme.
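Equations (33)–(36) are not reproduced above, so the following is only a hedged sketch of one common way to realise Phase 3: each alternative's weighted grey relational coefficient vector is projected onto the ideal vector (taken here as the criterion weights, as stated above), and the relative closeness combines the projections onto the positive and negative ideals. The numbers used are hypothetical and the exact projection formula may differ from the paper's.

```python
# Illustrative grey relational projection and relative closeness.
import numpy as np

def projection(psi_row, ideal):
    """Projection of an alternative's weighted coefficient vector onto the ideal vector."""
    ideal = np.asarray(ideal, dtype=float)
    return float(np.dot(psi_row, ideal) / np.linalg.norm(ideal))

def relative_closeness(psi_plus_row, psi_minus_row, w):
    p_pos = projection(psi_plus_row, w)   # projection on the positive ideal
    p_neg = projection(psi_minus_row, w)  # projection on the negative ideal
    return p_pos / (p_pos + p_neg)        # larger tau -> closer to the ideal

w = [0.6, 0.4]
tau = relative_closeness([0.40, 0.20], [0.25, 0.30], w)
print(round(tau, 3))
```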

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Choosing the optimal alternative with the proposed methodology

In this phase, the aforementioned approach is employed to identify the optimal design among yacht alternatives. All DMs are seasoned experts in yacht design, possessing extensive design expertise. These DMs constitute an evaluation and selection group, comprising 10 members denoted as \(D=\left\{{D}_{1},{D}_{2},\ldots ,{D}_{10}\right\}\) , and considering three concept design alternatives \(A=\left\{{A}_{1},{A}_{2},{A}_{3}\right\}\) . The data, assessed by the 10 DMs, is represented as IVPFNs after statistical processing. Refer to the table below for the decision-making information. Following the outlined procedures of the proposed model, the specific steps for design concept evaluation are detailed as follows:

Step 1 : Determine the evaluation index system of the design concept using the Kano model. First, we analyzed the data collected through questionnaires, and the initial CRs for yacht design were determined as shown in Table 9 .

During Kano model evaluation on the attribute set shown in Table 9 , 126 questionnaires were issued and returned, including 120 valid results. The statistical results are shown in Table 10 .

According to Kano’s customer satisfaction model, the fundamental elements with A/M/O attributes are considered core requirements. By utilizing the mapping relationship shown in Fig.  2 , CRs are translated into evaluation criteria for the assessment of design concepts, as illustrated in Fig.  4 . It is crucial to understand that there is a unique, one-to-one correspondence in this mapping process.

figure 4

The mapping relation of CRs- design concept evaluation index.

Step 2 : Construct the IVPF decision matrix for each DM.

Taking DM \({{\text{R}}}^{1}\) for example, the decision matrix for DM \({{\text{R}}}^{1}\) is built as shown in Table 11 . And all the DMs evaluated three yachts design alternatives \(A=\left\{{A}_{1},{A}_{2},{A}_{3}\right\}\) according to the attributes, as shown in Appendix A .

The linguistic evaluation value matrix in Table 8 can be converted into an IVPFN matrix through Table 11 , as shown in Table 12 .

Step 3 : Determine the weights of DMs by the multiplicative AHP approach.

With the help of the multiplicative AHP approach, we compute the weights of DMs \(\omega ={\left({\omega }_{1},{\omega }_{2},\dots ,{\omega }_{10}\right)}^{T}={\left(\mathrm{0.213,0.213,0.213,0.0533,0.0533,0.0533,0.0503,0.0503,0.0503,0.0503}\right)}^{T}\)

Step 4 : Construct the collective IVPF decision matrix.

Through the application of the IVPFOWIA, the collective decision matrix is derived, as depicted in Table 13 .

Step 5 : With the help of Eqs. ( 18 )–( 19 ), the entropy weights of the criteria \(C=\left\{{C}_{1},{C}_{2},{C}_{3},{C}_{4},{C}_{5},{C}_{6},{C}_{7},{C}_{8}\right\}\) are determined as \(w={\left(\mathrm{0.167,0.133,0.37,0.048,0.119,0.223,0.090,0.082}\right)}^{T}\) .

Phase 2: Improved GRP method under IVPFS

Step 1 : Given that all eight criteria are benefits (not costs), according to Eq. ( 23 ), the standardized evaluation decision matrix aligns with the contents of Table 13 .

Step 2 : The IVPF-PIS and IVPF-NIS of the collective decision matrix are calculated through Eqs. ( 24 )–( 25 ).

Step 3 : Determine the grey relational coefficient matrices by Eqs. ( 29 ) and ( 30 ).

Step 4 : Calculate the weighted grey relational coefficient matrices through Eqs. ( 31 ) and ( 32 ), respectively.

Compute the grey relational projections of each alternative \({A}_{i} (i = \mathrm{1,2},3)\) on the IVPF-PIS and IVPF-NIS through Eqs. ( 33 )–( 35 ), respectively. The detailed parameters and alternatives are provided in Table 14 .

According to the \({\tau }_{i}\) , the ranking order is A 3 ≻ A 2 ≻ A 1 .

Sensitivity analysis

In this section, in order to further investigate the evaluation process of the IVPF-improved GRP method, a sensitivity analysis of the resolution coefficient \(\rho\) was conducted. When \(\rho =0.5\) , the ranking of the three design concept alternatives is A 3 ≻ A 2 ≻ A 1 . Table 15 shows the \({\tau }_{i}\) for different resolution coefficients \(\rho\) , and the corresponding figures are shown in Fig.  5 . As shown in Fig.  5 , A3 is consistently the optimal choice among the three design concept alternatives. It can be observed from Fig.  5 that as the resolution coefficient \(\rho\) changes, the gap between alternative 2 and alternative 3 gradually narrows. However, the ranking of the design concept alternatives remains unchanged (A 3 ≻ A 2 ≻ A 1 ). Therefore, the proposed improved GRP method based on IVPFS demonstrates stability and reliability in the evaluation of design concept alternatives.

figure 5

Sensitivity analysis by different resolution coefficient \(\rho\) .

Alternatively, sensitivity analysis allows for a variety of other variation techniques. Because of space constraints, this research only considers variation of the resolution coefficient \(\rho\) . More extensions can be added to improve the sensitivity analysis in future research.

Comparative analysis and discussion

To assess the effectiveness of the proposed methodology, comparative studies are conducted alongside the case study, utilizing the Rough Entropy TOPSIS-PSI method 52 , Interval-Valued Intuitionistic Fuzzy (IVIF)-Improved GRP method, IVPF-VIKOR method 53 and IVPF-TOPSIS method. Table 16 and Fig.  6 present the results of a comprehensive comparison among different methodologies.

figure 6

The close index between the four MAGDM methods.

From Fig.  6 it can be seen that \({A}_{3}\) represents the best alternative for yacht design through the Rough Entropy TOPSIS-PSI, IVPF-improved GRP, IVPF-VIKOR and IVPF-TOPSIS. From Fig.  6 , it can be seen that there are certain differences between different optimization models. These differences are reflected in the entire design optimization process or certain data processing stages. The specific details are summarized as follows:

Rough Entropy TOPSIS-PSI method: proposed by Chen, this method is fundamentally rooted in rough sets. The ranking approach emphasizes the subjectivity of the data, establishes a fuzzy environment using rough numbers, and finalizes scheme selection through proximity coefficients based on the TOPSIS method. Notably, this method does not consider DM weights in the calculation process. Additionally, an interval weight calculation method based on entropy weights in interval form is introduced for attribute weight calculation.

IVIF-Improved GRP method: The main difference between this method and our model is the fuzzy environment used. As a method based on IVIFS, the IVIF-Improved GRP method has been applied successfully, but IVIFS, as an extended form of interval fuzzy sets, does not take the degree of neutrality into account when describing uncertain information. This means that IVIFS cannot describe uncertainty in as detailed and accurate a manner as IVPFS.

IVPF-TOPSIS method: The IVPF-TOPSIS method differs from our proposed model in the ranking model; it ranks the alternatives based on relative proximity. This method can be computationally more time-consuming, especially when dealing with a large amount of data or many attributes, and it cannot capture the trends and similarities of the data sequences, which may lead to inaccurate final ranking results.

IVPF-VIKOR method: in this method, uncertainty and ambiguity in the decision-making process are handled thanks to the IVPFS environment, and the VIKOR method is used to reflect the multiple criteria of the selection problem in the compromise solution. However, the VIKOR method can be affected by outliers, which may lead to unstable decision results in the presence of extreme values.
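
To illustrate why a single extreme value can sway the VIKOR compromise, the sketch below implements the standard \(S_i\), \(R_i\), \(Q_i\) aggregation on hypothetical data; because \(R_i\) keeps only the worst criterion of each alternative, an outlier in one attribute directly drives the ranking. The data, the weights and the strategy weight \(v = 0.5\) are assumptions made for illustration.

```python
# Standard VIKOR aggregation (S, R, Q) on hypothetical crisp data.
import numpy as np

X = np.array([
    [0.62, 0.55, 0.70, 0.48],
    [0.71, 0.66, 0.64, 0.60],
    [0.78, 0.69, 0.73, 0.65],
])
w = np.array([0.3, 0.25, 0.25, 0.2])
v = 0.5                                            # weight of the "group utility" strategy

f_star, f_minus = X.max(axis=0), X.min(axis=0)     # best / worst value per criterion
gap = (f_star - X) / (f_star - f_minus)            # normalised regret per criterion
S = (w * gap).sum(axis=1)                          # group utility (average regret)
R = (w * gap).max(axis=1)                          # individual regret (worst criterion only)
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print(np.round(Q, 3))                              # smaller Q_i means a better alternative
```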

The comparison with the Rough Entropy TOPSIS-PSI method is presented in Table 17. Despite certain dissimilarities, the two methods share a foundation in membership relationships and linguistic information, and both ultimately apply a compromise-theory-based model for design concept scheme optimization and ranking. Additionally, the grey relational projection value \({\tau }_{i}\) in our method is similar in form to the closeness coefficient \({CI}_{i}\) of the Rough Entropy TOPSIS-PSI method: both take values in the interval [0, 1] and are positively related to the preference of an alternative. Consequently, \({\tau }_{i}\) and \({CI}_{i}\) are compared, as depicted in Fig. 7. The results indicate that the scheme ranking of the Rough Entropy TOPSIS-PSI method aligns with the membership-relationship-based method proposed in this manuscript; in both cases \({A}_{3}>{A}_{2}>{A}_{1}\), so \({A}_{3}\) is the optimal design concept scheme. Notably, the method introduced in this manuscript differentiates the three schemes more clearly, showing a greater level of distinction than the Rough Entropy TOPSIS-PSI method.

Figure 7. The closeness index obtained by the two MAGDM methods.

Figure 8 presents a comparison between the method proposed in this paper, the IVIF-Improved GRP method and the IVPF-TOPSIS method. The results of the proposed method and of the IVIF-Improved GRP method are similar. Compared with the IVIF-Improved GRP method, however, our model possesses distinct advantages in addressing MADM problems: as an extension of IVIFS, IVPFS incorporates an additional neutral membership degree, providing richer decision information and aligning more closely with human cognition.

Figure 8. Comparison of the proposed method with the IVIF-Improved GRP and IVPF-TOPSIS methods.

Furthermore, the IVPF-TOPSIS method differs from the above two methods in its ranking model, which leads to some variation in the results; the ranking of the schemes, however, does not change significantly. Consequently, we consider the IVPF-Improved GRP approach proposed in this manuscript to be more reliable and accurate in decision-making processes.

The comparison of the method proposed in this study with the IVPF-VIKOR method is shown in Fig. 9, from which it can be seen that \({A}_{3}\) is again the best design concept alternative. However, apart from alternative 3, which is consistent, the two models produce somewhat different rankings for the remaining alternatives. One reason is that the attributes are not independent of one another during design concept evaluation; although their internal relationships are not explicit, some correlation does exist, and the VIKOR method cannot handle this correlation between indicators. A second reason is that, when the attributes have discrete sample data, the improved GRP method avoids unilateral bias, i.e., the bias that results from comparing a single attribute for each alternative; it therefore analyses the relationships between the criteria comprehensively and reflects the influence of the whole attribute space.

Figure 9. Comparison of the proposed method with the IVPF-VIKOR method.

Ultimately, the improved GRP approach under IVPF can be adapted to any number of alternatives and evaluation criteria with only a minimal increase in complexity. Consequently, this extended version of the GRP method is applicable to any MCDM problem formulated in the IVPFS context.

Conclusion

The evaluation of design concepts plays a crucial role in the product development process. The purpose of this study was to introduce an innovative approach for design concept evaluation that accounts for the ambiguity and uncertainty inherent in the available information. The main contributions of this research are summarized as follows:

Utilizing the Kano model and the mapping relation between CRs and the evaluation indices, we construct the decision attribute set for design concept evaluation.

By applying IVPFS theory, this research effectively identifies and characterizes ambiguity and uncertainty in design concept evaluation. Specifically, we adopt a practical approach that transforms the linguistic information in concept design evaluation into IVPFNs, enabling flexible decision-making procedures (an illustrative sketch of such a transformation is given after this list of contributions).

Enhancements to the GRP method lead to the construction of the IVPF-PIS and IVPF-NIS. The distance between each scheme and the IVPF-PIS and IVPF-NIS is calculated, and the optimal design concept scheme is determined by comparing the relative grey relational projection of each scheme. This improvement avoids the inaccurate results that traditional GRP methods can produce when calculations are based on a single base point.

Results from a real yacht design case demonstrate the success of our proposed method in addressing the challenges of evaluating product conceptual designs in uncertain and ambiguous environments. The method was compared with the Rough Entropy TOPSIS-PSI, IVIF-Improved GRP, IVPF-VIKOR and IVPF-TOPSIS methods, and the results show that the proposed approach can effectively evaluate product concept design schemes.
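
As referenced in the second contribution above, the following sketch illustrates one possible way to encode linguistic ratings as IVPFNs and to compare them with a simple normalised Hamming-type distance over the interval endpoints. The linguistic scale, the membership values and the distance measure shown here are illustrative assumptions; the scale and measure actually adopted in this study may differ.

```python
# Illustrative linguistic-term -> IVPFN mapping and a simple Hamming-type distance.
from typing import Dict, Tuple

Interval = Tuple[float, float]
IVPFN = Tuple[Interval, Interval, Interval]   # (membership, neutral, non-membership)

SCALE: Dict[str, IVPFN] = {                   # hypothetical excerpt of a linguistic scale
    "poor":   ((0.10, 0.20), (0.15, 0.25), (0.50, 0.55)),
    "medium": ((0.35, 0.45), (0.20, 0.30), (0.20, 0.25)),
    "good":   ((0.55, 0.65), (0.10, 0.20), (0.05, 0.15)),
}

def hamming(a: IVPFN, b: IVPFN) -> float:
    """Average absolute difference over the six interval endpoints."""
    diffs = [abs(x - y) for ia, ib in zip(a, b) for x, y in zip(ia, ib)]
    return sum(diffs) / len(diffs)

print(hamming(SCALE["good"], SCALE["poor"]))   # larger value = more dissimilar ratings
```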

Furthermore, our research lays the groundwork for future applications such as green supply chain management, project ranking, urban planning and environmental governance. Future studies can further explore the applicability and effectiveness of this framework across different industries and decision-making contexts, as well as how to optimize the model further for broader applications.

References

Qi, J., Hu, J. & Peng, Y. Modified rough VIKOR based design concept evaluation method compatible with objective design and subjective preference factors. Appl. Soft Comput. https://doi.org/10.1016/j.asoc.2021.107414 (2021).

Sun, H. Y., Ma, Q., Chen, Z. & Si, G. Y. A novel decision-making approach for product design evaluation using improved TOPSIS and GRP method under picture fuzzy set. Int. J. Fuzzy Syst. 25 , 1689–1706. https://doi.org/10.1007/s40815-023-01471-8 (2023).

Dou, Y. B. et al. A concept evaluation approach based on incomplete information: Considering large-scale criteria and risk attitudes. Adv. Eng. Inform. https://doi.org/10.1016/j.aei.2023.102234 (2023).

Li, J., Shao, Y. & Qi, X. On variable-precision-based rough set approach to incomplete interval-valued fuzzy information systems and its applications. J. Intell. Fuzzy Syst. Appl. Eng. Technol. 40 , 463–475 (2021).

Shidpour, H., Da Cunha, C. & Bernard, A. Group multi-criteria design concept evaluation using combined rough set theory and fuzzy set theory. Expert Syst. Appl. 64 , 633–644. https://doi.org/10.1016/j.eswa.2016.08.022 (2016).

Zadeh, L. A. Fuzzy sets. Inf. Control 8 , 338–353. https://doi.org/10.1016/S0019-9958(65)90241-X (1965).

Atanassov, K. & Vassilev, P. Intuitionistic fuzzy sets and other fuzzy sets extensions representable by them. J. Intell. Fuzzy Syst. 38 , 525–530. https://doi.org/10.3233/jifs-179426 (2020).

Torra, V. Hesitant fuzzy sets. Int. J. Intell. Syst. 25 , 529–539. https://doi.org/10.1002/int.20418 (2010).

Luo, M., Sun, Z., Xu, D. & Wu, L. Fuzzy inference full implication method based on single valued neutrosophic t-representable t-norm: Purposes, strategies, and a proof-of-principle study. Neutrosophic Syst. Appl. 14 , 1–16. https://doi.org/10.61356/j.nswa.2024.104 (2024).

Mohamed, A., Mohammed, J. & Sameh, S. A. A neutrosophic framework for assessment of distributed circular water to give neighborhoods analysis to prepare for unexpected stressor events. Neutrosophic Syst. Appl. 5 , 27–35. https://doi.org/10.61356/j.nswa.2023.25 (2023).

Ganie, A. H., Singh, S., Khalaf, M. M. & Al-Shamiri, M. M. A. On some measures of similarity and entropy for Pythagorean fuzzy sets with their applications. Comput. Appl. Math. https://doi.org/10.1007/s40314-022-02103-x (2022).

Cuong, B. C. & Kreinovich, V. Picture fuzzy sets—A new concept for computational intelligence problems. In Third World Congress on Information and Communication Technologies (WICT) 1–6 (IEEE, 2013).

Kano. Attractive quality and must-be quality. J. Jpn. Soc. Qual. Control 14 , 147–156 (1984).

Shang, B., Chen, Z., Ma, Q. & Tan, Y. H. A comprehensive mortise and tenon structure selection method based on Pugh’s controlled convergence and rough Z-number MABAC method. PLoS ONE https://doi.org/10.1371/journal.pone.0283704 (2023).

Wu, C. T., Wang, M. T., Liu, N. T. & Pan, T. S. Developing a Kano-based evaluation model for innovation design. Math. Probl. Eng. https://doi.org/10.1155/2015/153694 (2015).

Jin, J., Jia, D. P. & Chen, K. J. Mining online reviews with a Kansei-integrated Kano model for innovative product design. Int. J. Prod. Res. 60 , 6708–6727. https://doi.org/10.1080/00207543.2021.1949641 (2022).

Zhu, G. N., Hu, J. & Ren, H. L. A fuzzy rough number-based AHP-TOPSIS for design concept evaluation under uncertain environments. Appl. Soft Comput. https://doi.org/10.1016/j.asoc.2020.106228 (2020).

Jiang, C., Han, X. & Li, D. A new interval comparison relation and application in interval number programming for uncertain problems. Cmc-Comput. Mater. Contin. 27 , 275–303 (2012).

Yao, N., Ye, Y., Wang, Q. & Hu, N. Interval number ranking method considering multiple decision attitudes. Iran. J. Fuzzy Syst. 17 , 115–127 (2020).

Caichuan, W., Jiajun, L., Hasmat, M., Gopal, C. & Smriti, S. Project investment decision based on VIKOR interval intuitionistic fuzzy set. J. Intell. Fuzzy Syst. 42 , 623–631 (2022).

Zeng, S., Llopis-Albert, C. & Zhang, Y. A novel induced aggregation method for intuitionistic fuzzy set and its application in multiple attribute group decision making. Int. J. Intell. Syst. 33 , 2175–2188. https://doi.org/10.1002/int.22009 (2018).

Kahraman, C. Proportional picture fuzzy sets and their AHP extension: Application to waste disposal site selection. Expert Syst. Appl. https://doi.org/10.1016/j.eswa.2023.122354 (2024).

Luo, M. X. & Zhang, G. F. Divergence-based distance for picture fuzzy sets and its application to multi-attribute decision-making. Soft Comput. https://doi.org/10.1007/s00500-023-09205-6 (2023).

Wang, T., Wu, X. X., Garg, H., Liu, Q. & Chen, G. R. A prospect theory-based MABAC algorithm with novel similarity measures and interactional operations for picture fuzzy sets and its applications. Eng. Appl. Artif. Intell. https://doi.org/10.1016/j.engappai.2023.106787 (2023).

Naeem, M., Qiyas, M. & Abdullah, S. An approach of interval-valued picture fuzzy uncertain linguistic aggregation operator and their application on supplier selection decision-making in logistics service value concretion. Math. Probl. Eng. 2021 , 8873230. https://doi.org/10.1155/2021/8873230 (2021).

Khalil, A. M., Li, S. G., Garg, H., Li, H. & Ma, S. New operations on interval-valued picture fuzzy set, interval-valued picture fuzzy soft set and their applications. IEEE Access 7 , 51236–51253. https://doi.org/10.1109/ACCESS.2019.2910844 (2019).

Mishra, A. R., Rani, P., Alrasheedi, A. F. & Dwivedi, R. Evaluating the blockchain-based healthcare supply chain using interval-valued Pythagorean fuzzy entropy-based decision support system. Eng. Appl. Artif. Intell. https://doi.org/10.1016/j.engappai.2023.107112 (2023).

Hua, Z. & Jing, X. C. A generalized Shapley index-based interval-valued Pythagorean fuzzy PROMETHEE method for group decision-making. Soft Comput. 27 , 6629–6652. https://doi.org/10.1007/s00500-023-07842-5 (2023).

Cao, G. & Shen, L. X. A novel parameter similarity measure between interval-valued picture fuzzy sets with its application in pattern recognition. J. Intell. Fuzzy Syst. 44 , 10239 (2023).

Mahmood, T., Waqas, H. M., Ali, Z., Ullah, K. & Pamucar, D. Frank aggregation operators and analytic hierarchy process based on interval-valued picture fuzzy sets and their applications. Int. J. Intell. Syst. 36 , 7925–7962. https://doi.org/10.1002/int.22614 (2021).

Zhang, D. & Hu, J. H. A novel multi-interval-valued fuzzy set model to solve MADM problems. Expert Syst. Appl. https://doi.org/10.1016/j.eswa.2023.122248 (2024).

Büyüközkan, G. & Göçer, F. Application of a new combined intuitionistic fuzzy MCDM approach based on axiomatic design methodology for the supplier selection problem. Appl. Soft Comput. 52 , 1222–1238. https://doi.org/10.1016/j.asoc.2016.08.051 (2017).

Jing, L. T. et al. A rough set-based interval-valued intuitionistic fuzzy conceptual design decision approach with considering diverse customer preference distribution. Adv. Eng. Inform. https://doi.org/10.1016/j.aei.2021.101284 (2021).

Singh, A. & Kumar, S. Picture fuzzy set and quality function deployment approach based novel framework for multi-criteria group decision making method. Eng. Appl. Artif. Intell. https://doi.org/10.1016/j.engappai.2021.104395 (2021).

Kahraman, C., Oztaysi, B. & Onar, S. A novel interval valued picture fuzzy TOPSIS method: Application on supplier selection. J. Mult.-Valued Logic Soft Comput. 39 , 635 (2022).

Akay, D., Kulak, O. & Henson, B. Conceptual design evaluation using interval type-2 fuzzy information axiom. Comput. Ind. 62 , 138–146. https://doi.org/10.1016/j.compind.2010.10.007 (2011).

Zhu, G.-N., Hu, J., Qi, J., Gu, C.-C. & Peng, Y.-H. An integrated AHP and VIKOR for design concept evaluation based on rough number. Adv. Eng. Inform. 29 , 408–418. https://doi.org/10.1016/j.aei.2015.01.010 (2015).

Aikhuele, D. & Turan, F. An integrated fuzzy dephi and interval-valued intuitionistic fuzzy M-Topsis model for design concept selection. Pak. J. Stat. Oper. Res. 13 , 425 (2017).

Tiwari, V., Jain, P. K. & Tandon, P. An integrated Shannon entropy and TOPSIS for product design concept evaluation based on bijective soft set. J. Intell. Manuf. 30 , 1645–1658 (2017).

Hayat, K., Ali, M. I., Karaaslan, F., Cao, B. Y. & Shah, M. H. Design concept evaluation using soft sets based on acceptable and satisfactory levels: An integrated TOPSIS and Shannon entropy. Soft Comput. 24 , 2229–2263. https://doi.org/10.1007/s00500-019-04055-7 (2020).

Wenyan, S., Zixuan, N. & Pai, Z. Design concept evaluation of smart product-service systems considering sustainability: An integrated method. Comput. Ind. Eng. 159 , 107485 (2021).

Qi, J., Hu, J., Huang, H. Q. & Peng, Y. H. New customer-oriented design concept evaluation by using improved Z-number-based multi-criteria decision-making method. Adv. Eng. Inform. https://doi.org/10.1016/j.aei.2022.101683 (2022).

Zhou, T. T., Chen, Z. H. & Ming, X. G. Multi-criteria evaluation of smart product-service design concept under hesitant fuzzy linguistic environment: A novel cloud envelopment analysis approach. Eng. Appl. Artif. Intell. https://doi.org/10.1016/j.engappai.2022.105228 (2022).

Huang, G. Q., Xiao, L. M. & Zhang, G. B. An integrated design concept evaluation method based on best-worst entropy and generalized TODIM considering multiple factors of uncertainty. Appl. Soft Comput. https://doi.org/10.1016/j.asoc.2023.110165 (2023).

Yang, Q. et al. Concept design evaluation of sustainable product-service systems: A QFD-TOPSIS integrated framework with basic uncertain linguistic information. Group Decis. Negot. https://doi.org/10.1007/s10726-023-09870-w (2024).

Barfod, M. B., van den Honert, R. & Salling, K. B. Modeling group perceptions using stochastic simulation: Scaling issues in the multiplicative AHP. Int. J. Inf. Technol. Decis. Making 15 , 453–474. https://doi.org/10.1142/s0219622016500103 (2016).

Chen, Z., Zhong, P., Liu, M., Ma, Q. & Si, G. A novel integrated MADM method for design concept evaluation. Sci. Rep. 12 , 15885. https://doi.org/10.1038/s41598-022-20044-7 (2022).

Ma, Q., Sun, H., Chen, Z. & Tan, Y. A novel MCDM approach for design concept evaluation based on interval-valued picture fuzzy sets. PLoS ONE 18 , e0294596. https://doi.org/10.1371/journal.pone.0294596 (2023).

Fan, J. P., Zhang, H. & Wu, M. Q. Dynamic multi-attribute decision-making based on interval-valued picture fuzzy geometric heronian mean operators. IEEE Access 10 , 12070–12083. https://doi.org/10.1109/access.2022.3142283 (2022).

Cuong, B. C., Kreinovitch, V. & Ngan, R. T. 19–24.

Zulkifli, N., Abdullah, L. & Garg, H. An integrated interval-valued intuitionistic fuzzy vague set and their linguistic variables. Int. J. Fuzzy Syst. 23 , 182–193. https://doi.org/10.1007/s40815-020-01011-8 (2021).

Chen, Z., Zhong, P., Liu, M., Sun, H. & Shang, K. A novel hybrid approach for product concept evaluation based on rough numbers, shannon entropy and TOPSIS-PSI. J. Intell. Fuzzy Syst. 40 , 12087–12099. https://doi.org/10.3233/JIFS-210184 (2021).

Göçer, F. A novel interval value extension of picture fuzzy sets into group decision making: An approach to support supply chain sustainability in catastrophic disruptions. IEEE Access 9 , 117080–117096. https://doi.org/10.1109/access.2021.3105734 (2021).

Acknowledgements

This work was supported by the Shandong Province Intelligent Yacht Cruise Technology Laboratory.

Author information

Authors and Affiliations

Shandong Jiaotong University, Jinan, 250357, China

Qing Ma, Zhe Chen, Yuhang Tan & Jianing Wei

Contributions

Conceptualization: All authors; Methodology: Q.M., Z.C., Y.T. and J.W.; Data collection: Y.T., J.W.; Data Analysis: Q.M., Z.C.; Writing—original draft preparation: Q.M., Z.C.; Writing—review and editing: Q.M., Z.C. and Y.T.

Corresponding author

Correspondence to Zhe Chen .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Ma, Q., Chen, Z., Tan, Y. et al. An integrated design concept evaluation model based on interval valued picture fuzzy set and improved GRP method. Sci Rep 14 , 8433 (2024). https://doi.org/10.1038/s41598-024-57960-9

Received: 26 January 2024

Accepted: 23 March 2024

Published: 10 April 2024

DOI: https://doi.org/10.1038/s41598-024-57960-9


Keywords

  • Design concept evaluation
  • Multiplicative AHP method
  • Entropy of IVPFS
  • Improved GRP method
