Descriptive Research Designs: Types, Examples & Methods

busayo.longe

One of the components of research is getting enough information about the research problem: the what, how, when, and where answers. This is why descriptive research is an important type of research. It is very useful when conducting research whose aim is to identify characteristics, frequencies, trends, correlations, and categories.

This research method takes a problem with little to no relevant information and gives it a befitting description using qualitative and quantitative research methods. Descriptive research aims to accurately describe a research problem.

In the subsequent sections, we will be explaining what descriptive research means, its types, examples, and data collection methods.

What is Descriptive Research?

Descriptive research is a type of research that describes a population, situation, or phenomenon that is being studied. It focuses on answering the how, what, when, and where questions of a research problem, rather than the why.

This is mainly because it is important to have a proper understanding of what a research problem is about before investigating why it exists in the first place. 

For example, an investor considering an investment in the ever-changing Amsterdam housing market needs to understand what the current state of the market is, how it changes (increasing or decreasing), and when it changes (time of the year) before asking for the why. This is where descriptive research comes in.

What Are The Types of Descriptive Research?

Descriptive research is classified into different types according to the approach used in conducting it. The different types of descriptive research are highlighted below:

  • Descriptive-survey

Descriptive survey research uses surveys to gather data about varying subjects. The data aims to determine the extent to which different conditions hold among these subjects.

For example, a researcher wants to determine the qualifications of employed professionals in Maryland. He uses a survey as his research instrument, and each item on the survey related to qualifications requires a Yes/No answer.

This way, the researcher can describe the qualifications possessed by the employed demographics of this community. 

  • Descriptive-normative survey

This is an extension of the descriptive survey, with the addition being the normative element. In the descriptive-normative survey, the results of the study should be compared with the norm.

For example, an organization that wishes to test the skills of its employees by team may have them take a skills test. The skills test is the evaluation tool in this case, and the result of the test is compared with the norm for each role.

If the score of a team is one standard deviation above the mean, it is rated very satisfactory; if within one standard deviation of the mean, satisfactory; and if one standard deviation below the mean, unsatisfactory.
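As a minimal sketch of how such a normative rating could be computed (the norm scores and the team score below are hypothetical, and we use the sample mean and standard deviation of the norm group):

```python
from statistics import mean, stdev

# Hypothetical norm data: skills-test scores for a given role
norm_scores = [62, 70, 74, 68, 81, 77, 65, 72, 79, 69]
norm_mean = mean(norm_scores)
norm_sd = stdev(norm_scores)

def rate_score(score: float) -> str:
    """Rate a test score against the norm, one standard deviation at a time."""
    z = (score - norm_mean) / norm_sd
    if z >= 1:
        return "very satisfactory"  # at least 1 SD above the mean
    if z > -1:
        return "satisfactory"       # within 1 SD of the mean
    return "unsatisfactory"         # 1 SD or more below the mean

print(rate_score(85))  # -> "very satisfactory"
```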

  • Descriptive-status

This is a quantitative description technique that seeks to answer questions about real-life situations. For example, a researcher may study the income of the employees in a company and its relationship with their performance.

A survey will be carried out to gather enough data about the income of the employees; then their performance will be evaluated and compared to their income. This will help determine whether higher income means better performance and lower income means lower performance, or vice versa.

  • Descriptive-analysis

The descriptive-analysis method of research describes a subject by further analyzing it, which in this case involves dividing it into two parts. For example, the HR personnel of a company that wishes to analyze the job role of each employee may divide the employees into those who work at the US headquarters and those who work from the Oslo, Norway office.

A questionnaire is devised to analyze the job role of employees with similar salaries and who work in similar positions.

  • Descriptive classification

This method is employed in the biological sciences for the classification of plants and animals. A researcher who wishes to classify sea animals into different species will collect samples from various research stations, then classify them accordingly.

  • Descriptive-comparative

In descriptive-comparative research, the researcher considers two variables that are not manipulated and establishes a formal procedure to conclude that one is better than the other. For example, an examination body may want to determine the better method of conducting tests between paper-based and computer-based tests.

A random sample of potential participants may be asked to use the two different methods, and factors like failure rates, completion times, and others will be evaluated to arrive at the better method.
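As a minimal sketch, the evaluated factors might be tabulated per method like this (all counts are hypothetical):

```python
# Hypothetical results from a sample that tried both test formats
results = {
    "paper-based":    {"taken": 120, "failed": 30, "avg_minutes": 95},
    "computer-based": {"taken": 120, "failed": 21, "avg_minutes": 78},
}

for method, stats in results.items():
    failure_rate = stats["failed"] / stats["taken"]
    print(f"{method}: failure rate {failure_rate:.1%}, "
          f"average time {stats['avg_minutes']} min")
```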

  • Correlative Survey

Correlative surveys are used to determine whether the relationship between two variables is positive, negative, or neutral; that is, whether two variables, say X and Y, are directly proportional, inversely proportional, or not related to each other.
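Pearson's correlation coefficient r is one common way to quantify such a relationship. The sketch below computes it from scratch on hypothetical paired data: r near +1 means directly proportional, near -1 inversely proportional, and near 0 unrelated.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired observations of variables X and Y
x = [2, 4, 5, 7, 9]
y = [10, 12, 15, 18, 21]
print(f"r = {pearson_r(x, y):.2f}")  # close to +1 for this data
```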

Examples of Descriptive Research

There are different examples of descriptive research that may be highlighted from its types, uses, and applications. However, we will restrict ourselves to only three distinct examples in this article.

  • Comparing Student Performance:

An academic institution may wish to compare the performance of its junior high school students in English language and Mathematics. This may be used to classify students into two major groups, with one group going on to study science courses while the other studies courses in the Arts & Humanities field.

Students who are more proficient in mathematics will be encouraged to go into STEM and vice versa. Institutions may also use this data to identify students’ weak points and work on ways to assist them.

  • Scientific Classification

During the major scientific classification of plants, animals, and periodic table elements, the characteristics and components of each subject are evaluated and used to determine how they are classified.

For example, living things may be classified into kingdom Plantae or kingdom Animalia depending on their nature. Further classification may group animals into mammals, Pisces, vertebrates, invertebrates, etc.

All these classifications are made as a result of descriptive research, which describes what they are.

  • Human Behavior

When studying human behaviour based on a factor or event, the researcher observes the characteristics, behaviour, and reactions, then uses them to draw conclusions. A company that wants to sell to its target market needs to first study the behaviour of that market.

This may be done by observing how its target audience reacts to a competitor's product, then using that to determine their behaviour.

What are the Characteristics of Descriptive Research?  

The characteristics of descriptive research can be highlighted from its definition, applications, data collection methods, and examples. Some characteristics of descriptive research are:

  • Quantitativeness

Descriptive research uses a quantitative research method by collecting quantifiable information to be used for statistical analysis of the population sample. This is very common when dealing with research in the physical sciences.

  • Qualitativeness

It can also be carried out using the qualitative research method, to properly describe the research problem. This is because descriptive research is more explanatory than exploratory or experimental.

  • Uncontrolled variables

In descriptive research, researchers cannot control the variables like they do in experimental research.

  • The basis for further research

The results of descriptive research can be further analyzed and used in other research methods. It can also inform the next line of research, including the research method that should be used.

This is because it provides basic information about the research problem, which may give birth to other questions like why a particular thing is the way it is.

Why Use Descriptive Research Design?  

Descriptive research can be used to investigate the background of a research problem and get the required information needed to carry out further research. It is used in multiple ways by different organizations, especially when getting the required information about their target audience.

  • Define subject characteristics:

It is used to determine the characteristics of the subjects, including their traits, behaviour, opinions, etc. This information may be gathered with the use of surveys, which are shared with the respondents, who in this case are the research subjects.

For example, a survey evaluating the number of hours millennials in a community spend on the internet weekly will help a service provider make informed business decisions regarding the market potential of that community.

  • Measure Data Trends

It helps to measure changes in data over time through statistical methods. Consider the case of individuals who want to invest in stock markets: they evaluate the changes in prices of the available stocks to make an investment decision.

However, brokerage companies are the ones who carry out the descriptive research process, while individuals view the data trends and make decisions.

Descriptive research is also used to compare how different demographics respond to certain variables. For example, an organization may study how people with different income levels react to the launch of a new Apple phone.

This kind of research may take the form of a survey that helps determine which groups of individuals are purchasing the new Apple phone. Do low-income earners also purchase the phone, or do only high-income earners?

Further research using another technique will explain why low-income earners are purchasing the phone even though they can barely afford it. This will help inform strategies that will lure other low-income earners and increase company sales.

  • Validate existing conditions

When you are not sure about the validity of an existing condition, you can use descriptive research to ascertain the underlying patterns of the research object. This is because descriptive research methods make an in-depth analysis of each variable before making conclusions.

  • Conducted Over Time

Descriptive research is conducted over a period of time to ascertain the changes observed at each point. The more times it is conducted, the more authentic the conclusion will be.

What are the Disadvantages of Descriptive Research?  

  • Response and Non-response Bias

Respondents may either decide not to respond to questions or give incorrect responses if they feel the questions are too sensitive. When researchers use observational methods, respondents may also behave in a particular manner because they feel they are being watched.

  • Researcher bias: The researcher may influence the results of the research due to personal opinions or bias towards a particular subject. For example, a stockbroker who also has a business of his own may try to lure investors into investing in his own company by manipulating results.
  • Poor representativeness: A case study or sample taken from a large population is not representative of the whole population.
  • Limited scope: The scope of descriptive research is limited to the what of research, with no information on the why.

What are the Data Collection Methods in Descriptive Research?  

There are three main data collection methods in descriptive research, namely: the observational method, the case study method, and survey research.

1. Observational Method

The observational method allows researchers to collect data based on their view of the behaviour and characteristics of the respondent, with the respondents themselves not directly having an input. It is often used in market research, psychology, and some other social science research to understand human behaviour.

It is also an important aspect of physical scientific research, being one of the most effective methods of conducting descriptive research. This process can be either quantitative or qualitative.

Quantitative observation involves the objective collection of numerical data, whose results can be analyzed using numerical and statistical methods.

Qualitative observation, on the other hand, involves the monitoring of characteristics rather than the measurement of numbers. The researcher makes observations from a distance, records them, and uses them to inform conclusions.
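A minimal sketch of how quantitative observation data might be tallied into frequencies and percentages (the observed behaviours listed are hypothetical):

```python
from collections import Counter

# Hypothetical field notes: one entry per observed shopper
observations = [
    "read label", "compared prices", "read label", "asked staff",
    "compared prices", "read label", "picked immediately",
]

counts = Counter(observations)
total = len(observations)
for behaviour, n in counts.most_common():
    print(f"{behaviour}: {n} ({n / total:.0%})")
```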

2. Case Study Method

A case study is a sample group (an individual, a group of people, organizations, events, etc.) whose characteristics are used to describe the characteristics of a larger group in which the case study is a subgroup. The information gathered from investigating a case study may be generalized to serve the larger group.

This generalization may, however, be risky because case studies are not sufficient to make accurate predictions about larger groups; they are a poor basis for generalization.

3. Survey Research

This is a very popular data collection method in research designs. In survey research, researchers create a survey or questionnaire and distribute it to respondents who give answers.

Generally, it is used to obtain quick information directly from the primary source and also for conducting rigorous quantitative and qualitative research. In some cases, survey research uses a blend of both qualitative and quantitative strategies.

Survey research can be carried out both online and offline using the following methods:

  • Online Surveys: This is a cheap method of carrying out surveys and getting enough responses. It can be carried out using Formplus, an online survey builder. Formplus has amazing tools and features that will help increase response rates.
  • Offline Surveys: This includes paper forms, mobile offline forms, and SMS-based forms.

What Are The Differences Between Descriptive and Correlational Research?  

Before going into the differences between descriptive and correlational research, we need a proper understanding of what correlational research is about. Therefore, we give a summary of correlational research below.

Correlational research is a type of descriptive research, which is used to measure the relationship between two variables, with the researcher having no control over them. It aims to find whether there is a positive correlation (both variables change in the same direction), a negative correlation (the variables change in opposite directions), or zero correlation (there is no relationship between the variables).

Correlational research may be used in two situations:

(i) when trying to find out if there is a relationship between two variables, and

(ii) when a causal relationship is suspected between two variables, but it is impractical or unethical to conduct experimental research that manipulates one of the variables. 

Below are some of the differences between correlational and descriptive research:

  • Definitions:

Descriptive research is a type of research that provides an in-depth understanding of the study population, while correlational research is the type of research that measures the relationship between two variables.

  • Characteristics:

Descriptive research provides descriptive data explaining what the research subject is about, while correlational research explores the relationship between data points rather than their description.

  • Predictions:

Predictions cannot be made in descriptive research, while correlational research accommodates the possibility of making predictions.

Descriptive Research vs. Causal Research

Descriptive research and causal research are both research methodologies; however, the former focuses on a subject's behaviors while the latter focuses on cause-and-effect relationships. To buttress the above point, descriptive research aims to describe and document the characteristics, behaviors, or phenomena of a specific population or situation.

It focuses on providing an accurate and detailed account of an already existing state of affairs between variables. Descriptive research answers the questions of “what,” “where,” “when,” and “how” without attempting to establish any causal relationships or explain any underlying factors that might have caused the behavior.

Causal research, on the other hand, seeks to determine cause-and-effect relationships between variables. It aims to point out the factors that influence or cause a particular result or behavior. Causal research involves manipulating variables, controlling conditions, and observing the resulting effects. The primary objective of causal research is to establish a cause-effect relationship and provide insights into why certain phenomena happen the way they do.

Descriptive Research vs. Analytical Research

Descriptive research provides a detailed and comprehensive account of a specific situation or phenomenon. It focuses on describing and summarizing data without making inferences or attempting to explain underlying factors or their causes.

It is primarily concerned with providing an accurate and objective representation of the subject of research. Analytical research, by contrast, goes beyond describing the phenomena and seeks to analyze and interpret data to discover whether there are patterns, relationships, or underlying factors.

It examines the data critically, applies statistical techniques or other analytical methods, and draws conclusions based on the discovery. Analytical research also aims to explore the relationships between variables and understand the underlying mechanisms or processes involved.

Descriptive Research vs. Exploratory Research

Descriptive research is a research method that focuses on providing a detailed and accurate account of a specific situation, group, or phenomenon. This type of research describes the characteristics, behaviors, or relationships within the given context without looking for an underlying cause. 

Descriptive research typically involves collecting and analyzing quantitative or qualitative data to generate descriptive statistics or narratives. Exploratory research differs from descriptive research because it aims to explore and gain firsthand insights or knowledge into a relatively unexplored or poorly understood topic. 

It focuses on generating ideas, hypotheses, or theories rather than providing definitive answers. Exploratory research is often conducted at the early stages of a research project to gather preliminary information and identify key variables or factors for further investigation. It involves open-ended interviews, observations, or small-scale surveys to gather qualitative data.

Read More – Exploratory Research: What are its Methods & Examples?

Descriptive Research vs. Experimental Research

Descriptive research aims to describe and document the characteristics, behaviors, or phenomena of a particular population or situation. It focuses on providing an accurate and detailed account of the existing state of affairs. 

Descriptive research typically involves collecting data through surveys, observations, or existing records and analyzing the data to generate descriptive statistics or narratives. It does not involve manipulating variables or establishing cause-and-effect relationships.

Experimental research, on the other hand, involves manipulating variables and controlling conditions to investigate cause-and-effect relationships. It aims to establish causal relationships by introducing an intervention or treatment and observing the resulting effects. 

Experimental research typically involves randomly assigning participants to different groups, such as control and experimental groups, and measuring the outcomes. It allows researchers to control for confounding variables and draw causal conclusions.

Related – Experimental vs Non-Experimental Research: 15 Key Differences

Descriptive Research vs. Explanatory Research

Descriptive research focuses on providing a detailed and accurate account of a specific situation, group, or phenomenon. It aims to describe the characteristics, behaviors, or relationships within the given context. 

Descriptive research is primarily concerned with providing an objective representation of the subject of study without explaining underlying causes or mechanisms. Explanatory research seeks to explain the relationships between variables and uncover the underlying causes or mechanisms. 

It goes beyond description and aims to understand the reasons or factors that influence a particular outcome or behavior. Explanatory research involves analyzing data, conducting statistical analyses, and developing theories or models to explain the observed relationships.

Descriptive Research vs. Inferential Research

Descriptive research focuses on describing and summarizing data without making inferences or generalizations beyond the specific sample or population being studied. It aims to provide an accurate and objective representation of the subject of study. 

Descriptive research typically involves analyzing data to generate descriptive statistics, such as means, frequencies, or percentages, to describe the characteristics or behaviors observed.

Inferential research, however, involves making inferences or generalizations about a larger population based on a smaller sample. 

It aims to draw conclusions about the population characteristics or relationships by analyzing the sample data. Inferential research uses statistical techniques to estimate population parameters, test hypotheses, and determine the level of confidence or significance in the findings.
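To make the contrast concrete, here is a minimal sketch on hypothetical sample data: the first two outputs are purely descriptive, while the confidence interval is inferential because it generalizes to the population. A normal approximation is used here; a t-interval would be more exact for a sample this small.

```python
from statistics import NormalDist, mean, stdev

# Hypothetical sample: weekly internet hours for 10 surveyed people
sample = [12, 15, 9, 20, 14, 11, 18, 13, 16, 10]

# Descriptive: summarizes this sample only
m, s = mean(sample), stdev(sample)
print(f"sample mean = {m:.1f} h, sample SD = {s:.1f} h")

# Inferential: estimates the population mean with a 95% confidence interval
z = NormalDist().inv_cdf(0.975)  # ~1.96
margin = z * s / len(sample) ** 0.5
print(f"95% CI for the population mean: {m - margin:.1f} to {m + margin:.1f} h")
```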

Related – Inferential Statistics: Definition, Types + Examples

Conclusion  

The uniqueness of descriptive research partly lies in its ability to explore both quantitative and qualitative research methods. Therefore, when conducting descriptive research, researchers have the opportunity to use a wide variety of techniques that aid the research process.

Descriptive research explores research problems in depth, beyond the surface level, thereby giving a detailed description of the research subject. That way, it can aid further research in the field, including other research methods.

It is also very useful in solving real-life problems in various fields of social science, physical science, and education.



18 Descriptive Research Examples


Descriptive research involves gathering data to provide a detailed account or depiction of a phenomenon without manipulating variables or conducting experiments.

A scholarly definition is:

“Descriptive research is defined as a research approach that describes the characteristics of the population, sample or phenomenon studied. This method focuses more on the “what” rather than the “why” of the research subject.” (Matanda, 2022, p. 63)

The key feature of descriptive research is that it merely describes phenomena and does not attempt to manipulate variables or determine cause and effect.

To determine cause and effect, a researcher would need to use an alternate methodology, such as an experimental research design.

Common approaches to descriptive research include:

  • Cross-sectional research: A cross-sectional study gathers data on a population at a specific time to get descriptive data that could include categories (e.g. age or income brackets) to get a better understanding of the makeup of a population.
  • Longitudinal research: Longitudinal studies return to a population to collect data at several different points in time, allowing for description of changes in categories over time. However, as it's descriptive, it cannot infer cause and effect (Erickson, 2017).

Methods that could be used include:

  • Surveys: For example, sending out a census survey to be completed at the exact same date and time by everyone in a population.
  • Case Study: For example, an in-depth description of a specific person or group of people to gain in-depth qualitative information that can describe a phenomenon but cannot be generalized to other cases.
  • Observational Method: For example, a researcher taking field notes in an ethnographic study (Siedlecki, 2020).

Descriptive Research Examples

1. Understanding Autism Spectrum Disorder (Psychology): Researchers analyze various behavior patterns, cognitive skills, and social interaction abilities specific to children with Autism Spectrum Disorder to comprehensively describe the disorder’s symptom spectrum. This detailed description classifies it as descriptive research, rather than analytical or experimental, as it merely records what is observed without altering any variables or trying to establish causality.

2. Consumer Purchase Decision Process in E-commerce Marketplaces (Marketing): By documenting and describing all the factors that influence consumer decisions on online marketplaces, researchers don’t attempt to predict future behavior or establish causes—just describe observed behavior—making it descriptive research.

3. Impacts of Climate Change on Agricultural Practices (Environmental Studies): Descriptive research is seen as scientists outline how climate changes influence various agricultural practices by observing and then meticulously categorizing the impacts on crop variability, farming seasons, and pest infestations without manipulating any variables in real-time.

4. Work Environment and Employee Performance (Human Resources Management): A study of this nature, describing the correlation between various workplace elements and employee performance, falls under descriptive research as it merely narrates the observed patterns without altering any conditions or testing hypotheses.

5. Factors Influencing Student Performance (Education): Researchers describe various factors affecting students’ academic performance, such as studying techniques, parental involvement, and peer influence. The study is categorized as descriptive research because its principal aim is to depict facts as they stand without trying to infer causal relationships.

6. Technological Advances in Healthcare (Healthcare): This research describes and categorizes different technological advances (such as telemedicine, AI-enabled tools, digital collaboration) in healthcare without testing or modifying any parameters, making it an example of descriptive research.

7. Urbanization and Biodiversity Loss (Ecology): By describing the impact of rapid urban expansion on biodiversity loss, this study serves as a descriptive research example. It observes the ongoing situation without manipulating it, offering a comprehensive depiction of the existing scenario rather than investigating the cause-effect relationship.

8. Architectural Styles across Centuries (Art History): A study documenting and describing various architectural styles throughout centuries essentially represents descriptive research. It aims to narrate and categorize facts without exploring the underlying reasons or predicting future trends.

9. Media Usage Patterns among Teenagers (Sociology): When researchers document and describe the media consumption habits among teenagers, they are performing a descriptive research study. Their main intention is to observe and report the prevailing trends rather than establish causes or predict future behaviors.

10. Dietary Habits and Lifestyle Diseases (Nutrition Science): By describing the dietary patterns of different population groups and correlating them with the prevalence of lifestyle diseases, researchers perform descriptive research. They merely describe observed connections without altering any diet plans or lifestyles.

11. Shifts in Global Energy Consumption (Environmental Economics): When researchers describe the global patterns of energy consumption and how they’ve shifted over the years, they conduct descriptive research. The focus is on recording and portraying the current state without attempting to infer causes or predict the future.

12. Literacy and Employment Rates in Rural Areas (Sociology): A study aims at describing the literacy rates in rural areas and correlating it with employment levels. It falls under descriptive research because it maps the scenario without manipulating parameters or proving a hypothesis.

13. Women Representation in Tech Industry (Gender Studies): A detailed description of the presence and roles of women across various sectors of the tech industry is a typical case of descriptive research. It merely observes and records the status quo without establishing causality or making predictions.

14. Impact of Urban Green Spaces on Mental Health (Environmental Psychology): When researchers document and describe the influence of green urban spaces on residents’ mental health, they are undertaking descriptive research. They seek purely to understand the current state rather than exploring cause-effect relationships.

15. Trends in Smartphone Usage among the Elderly (Gerontology): Research describing how the elderly population utilizes smartphones, including popular features and challenges encountered, serves as descriptive research. The researchers' aim is merely to capture what is happening without manipulating variables or posing predictions.

16. Shifts in Voter Preferences (Political Science): A study describing the shift in voter preferences during a particular electoral cycle is descriptive research. It simply records the preferences revealed without drawing causal inferences or suggesting future voting patterns.

17. Understanding Trust in Autonomous Vehicles (Transportation Psychology): This comprises research describing public attitudes and trust levels when it comes to autonomous vehicles. By merely depicting observed sentiments, without engineering any situations or offering predictions, it’s considered descriptive research.

18. The Impact of Social Media on Body Image (Psychology): This is descriptive research outlining the experiences and perceptions of individuals relating to body image in the era of social media. Observing these elements without altering any variables qualifies it as descriptive research.

Descriptive vs Experimental Research

Descriptive research merely observes, records, and presents the actual state of affairs without manipulating any variables, while experimental research involves deliberately changing one or more variables to determine their effect on a particular outcome.

De Vaus (2001) succinctly explains that descriptive studies find out what is going on, but experimental research finds out why it's going on.

Simple definitions are below:

  • Descriptive research is primarily about describing the characteristics or behaviors in a population, often through surveys or observational methods. It provides rich detail about a specific phenomenon but does not allow for conclusive causal statements; however, it can offer essential leads or ideas for further experimental research (Ivey, 2016).
  • Experimental research, often conducted in controlled environments, aims to establish causal relationships by manipulating one or more independent variables and observing the effects on dependent variables (Devi, 2017; Mukherjee, 2019).

Experimental designs often involve a control group and random assignment. While experimental research can provide compelling evidence for cause and effect, its artificial setting might not perfectly mirror real-world conditions, potentially affecting the generalizability of its findings.

These two types of research are complementary, with descriptive studies often leading to hypotheses that are then tested experimentally (Devi, 2017; Zhao et al., 2021).

Benefits and Limitations of Descriptive Research

Descriptive research offers several benefits: it allows researchers to gather a vast amount of data and present a complete picture of the situation or phenomenon under study, even within large groups or over long time periods.

It’s also flexible in terms of the variety of methods used, such as surveys, observations, and case studies, and it can be instrumental in identifying patterns or trends and generating hypotheses (Erickson, 2017).

However, it also has its limitations.

The primary drawback is that it can’t establish cause-effect relationships, as no variables are manipulated. This lack of control over variables also opens up possibilities for bias, as researchers might inadvertently influence responses during data collection (De Vaus, 2001).

Additionally, the findings of descriptive research are often not generalizable since they are heavily reliant on the chosen sample’s characteristics.

See More Types of Research Design Here

De Vaus, D. A. (2001). Research Design in Social Research. SAGE Publications.

Devi, P. S. (2017). Research Methodology: A Handbook for Beginners. Notion Press.

Erickson, G. S. (2017). Descriptive research design. In New Methods of Market Research and Analysis (pp. 51-77). Edward Elgar Publishing.

Gresham, B. B. (2016). Concepts of Evidence-based Practice for the Physical Therapist Assistant. F.A. Davis Company.

Ivey, J. (2016). Is descriptive research worth doing? Pediatric Nursing, 42(4), 189. (Source)

Krishnaswamy, K. N., Sivakumar, A. I., & Mathirajan, M. (2009). Management Research Methodology: Integration of Principles, Methods and Techniques. Pearson Education.

Matanda, E. (2022). Research Methods and Statistics for Cross-Cutting Research: Handbook for Multidisciplinary Research. Langaa RPCIG.

Monsen, E. R., & Van Horn, L. (2007). Research: Successful Approaches. American Dietetic Association.

Mukherjee, S. P. (2019). A Guide to Research Methodology: An Overview of Research Problems, Tasks and Methods. CRC Press.

Siedlecki, S. L. (2020). Understanding descriptive research designs and methods. Clinical Nurse Specialist, 34(1), 8-12. (Source)

Zhao, P., Ross, K., Li, P., & Dennis, B. (2021). Making Sense of Social Research Methodology: A Student and Practitioner Centered Approach. SAGE Publications.


Dave Cornell (PhD)

Dr. Cornell has worked in education for more than 20 years. His work has involved designing teacher certification for Trinity College in London and in-service training for state governments in the United States. He has trained kindergarten teachers in 8 countries and helped businessmen and women open baby centers and kindergartens in 3 countries.


Chris Drew (PhD)

This article was peer-reviewed and edited by Chris Drew (PhD). The review process on Helpful Professor involves having a PhD level expert fact check, edit, and contribute to articles. Reviewers ensure all content reflects expert academic consensus and is backed up with reference to academic studies. Dr. Drew has published over 20 academic articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education and holds a PhD in Education from ACU.



Descriptive Research | Definition, Types, Methods & Examples

Published on May 15, 2019 by Shona McCombes. Revised on June 22, 2023.

Descriptive research aims to accurately and systematically describe a population, situation or phenomenon. It can answer what, where, when and how questions, but not why questions.

A descriptive research design can use a wide variety of research methods to investigate one or more variables. Unlike in experimental research, the researcher does not control or manipulate any of the variables, but only observes and measures them.

Table of contents

  • When to use a descriptive research design
  • Descriptive research methods
  • Other interesting articles

When to use a descriptive research design

Descriptive research is an appropriate choice when the research aim is to identify characteristics, frequencies, trends, and categories.

It is useful when not much is known yet about the topic or problem. Before you can research why something happens, you need to understand how, when and where it happens.

Descriptive research question examples

  • How has the Amsterdam housing market changed over the past 20 years?
  • Do customers of company X prefer product X or product Y?
  • What are the main genetic, behavioural and morphological differences between European wildcats and domestic cats?
  • What are the most popular online news sources among under-18s?
  • How prevalent is disease A in population B?


Descriptive research methods

Descriptive research is usually defined as a type of quantitative research, though qualitative research can also be used for descriptive purposes. The research design should be carefully developed to ensure that the results are valid and reliable.

Surveys

Survey research allows you to gather large volumes of data that can be analyzed for frequencies, averages and patterns. Common uses of surveys include:

  • Describing the demographics of a country or region
  • Gauging public opinion on political and social topics
  • Evaluating satisfaction with a company’s products or an organization’s services

Observations

Observations allow you to gather data on behaviours and phenomena without having to rely on the honesty and accuracy of respondents. This method is often used by psychological, social and market researchers to understand how people act in real-life situations.

Observation of physical entities and phenomena is also an important part of research in the natural sciences. Before you can develop testable hypotheses , models or theories, it’s necessary to observe and systematically describe the subject under investigation.

Case studies

A case study can be used to describe the characteristics of a specific subject (such as a person, group, event or organization). Instead of gathering a large volume of data to identify patterns across time or location, case studies gather detailed data to identify the characteristics of a narrowly defined subject.

Rather than aiming to describe generalizable facts, case studies often focus on unusual or interesting cases that challenge assumptions, add complexity, or reveal something new about a research problem .

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias


Descriptive Research: Methods And Examples


Descriptive Research Design

A research project always begins with selecting a topic. The next step is for researchers to identify the specific areas of interest. After that, they tackle the key component of any research problem: how to gather enough quality information. If we opt for a descriptive research design, we have to ask the correct questions to access the right information.

For instance, researchers may choose to focus on why people invest in cryptocurrency, knowing how dynamic the market is rather than asking why the market is so shaky. These are completely different questions that require different research approaches. Adopting the descriptive method can help capitalize on trends the information reveals. Descriptive research examples show the thorough research involved in such a study. 


Descriptive Research Meaning


A descriptive method of research is one that describes the characteristics of a phenomenon, situation or population. It uses quantitative and qualitative approaches to describe problems with little relevant information. Descriptive research accurately describes a research problem without asking why a particular event happened. By researching market patterns, the descriptive method answers how patterns change, what changed and when the change occurred, instead of dwelling on why the change happened.

Features of Descriptive Research Design

Descriptive research refers to questions, study design and analysis of data conducted on a particular topic. It is a strictly observational research methodology with no influence on variables. Some distinctive features of descriptive research are:

  • It’s a research method that collects quantifiable information for statistical analysis of a sample. It’s a quantitative market research tool that can analyze the nature of a demographic
  • In a descriptive method of research , the nature of research study variables is determined with observation, without influence from the researcher
  • Descriptive research is cross-sectional and different sections of a group can be studied
  • The analyzed data is collected and serves as information for other research techniques. In this way, a descriptive research design becomes the basis of further research

To understand the descriptive research meaning, data collection methods, examples and applications, we need a deeper understanding of its features.

Types of Descriptive Research

Different ways of approaching the descriptive method help break it down further. Let's look at the different types of descriptive research:

Descriptive Survey

This type uses surveys to gather data about varying subjects and describe the extent to which different conditions hold among them.

Descriptive Normative Survey

This extends the descriptive survey by comparing the results of the study against a norm, such as a benchmark score for a role.

Descriptive Status

This type of research quantitatively describes real-life situations. For example, to understand the relation between wages and performance, research on employee salaries and their respective performances can be conducted.

Descriptive Analysis

This technique analyzes a subject further. Once the relation between wages and performance has been established, an organization can further analyze employee performance by comparing the output of those who work from an office with that of those who work from home.

Descriptive Classification

Descriptive classification is mainly used in the field of biological science. It helps researchers classify species once they have studied the data collected from different research stations.

Descriptive Comparative

Comparing two variables can show if one is better than the other. Doing this through tests or surveys can reveal all the advantages and disadvantages associated with the two. For example, this technique can be used to find out if paper ballots are better than electronic voting devices.

Correlative Survey

A correlative survey determines whether the relationship between two variables is positive, negative or neutral.

Descriptive Research Methods

The researcher has to effectively interpret the area of the problem and then decide the appropriate technique of descriptive research design.

A researcher can choose one of the following methods to solve research problems and meet research goals:

Observational Method

With this method, a researcher observes the behaviors, mannerisms and characteristics of the participants. It is widely used in psychology and market research and does not require the participants to be involved directly. It's an effective method and can be both qualitative and quantitative, given the sheer volume and variety of data it generates.

Survey Research

It’s a popular method of data collection in research. It follows the principle of obtaining information quickly and directly from the main source. The idea is to use rigorous qualitative and quantitative research methods and ask crucial questions essential to the business for the short and long term.

Case Study Method

Case studies tend to fall short in situations where researchers are dealing with highly diverse people or conditions. Surveys and observations can be carried out effectively in such cases, though the time of execution differs significantly between the two.

Applications of Descriptive Research

There are multiple applications of descriptive research design, but executives must learn that it's crucial to clearly define the research goals first. Here's how organizations use descriptive research to meet their objectives:

  • As a tool to analyze participants: It's important to understand the behaviors, traits and patterns of the participants to draw a conclusion about them. Close-ended questions can reveal their opinions and attitudes. Descriptive research can help understand the participants and assist in making strategic business decisions
  • Designed to measure data trends: It's a statistically capable research design that, over time, allows organizations to measure data trends (see the sketch after this list). A survey can reveal unfavorable scenarios and give an organization the time to fix unprofitable moves
  • Scope of comparison: Surveys and research can allow an organization to compare two products across different groups. This can provide a detailed comparison of the products and an opportunity for the organization to capitalize on a large demographic
  • Conducting research at any time: An analysis can be conducted at any time and any number of variables can be evaluated. It helps to ascertain differences and similarities
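As a minimal sketch of measuring a data trend over time (the monthly average scores below are hypothetical):

```python
# Hypothetical monthly average survey scores for one product
monthly_scores = {"Jan": 7.1, "Feb": 7.4, "Mar": 6.8, "Apr": 6.2}

months = list(monthly_scores)
for prev, cur in zip(months, months[1:]):
    change = monthly_scores[cur] - monthly_scores[prev]
    trend = "up" if change > 0 else "down" if change < 0 else "flat"
    print(f"{prev} -> {cur}: {change:+.1f} ({trend})")
```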

Descriptive research is widely used due to its non-invasive nature. Quantitative observations allow in-depth analysis and a chance to validate any existing condition.

Descriptive Research Examples

There are several different descriptive research examples that highlight the types, applications and uses of this research method. Let's look at a few:

  • Before launching a new line of gym wear, an organization chose more than one descriptive method to gather vital information. Their objective was to find the kind of gym clothes people like wearing and the ones they would like to see in the market. The organization chose to conduct a survey by recording responses in gyms, sports shops and yoga centers. As a second method, they chose to observe members of different gyms and fitness institutions. They collected volumes of vital data such as color and design preferences and the amount of money people are willing to spend on it.
  • To get a good idea of people’s tastes and expectations, an organization conducted a survey by offering a new flavor of the sauce and recorded people’s responses by gathering data from store owners. This let them understand how people reacted, whether they found the product reasonably priced, whether it served its purpose and their overall general preferences. Based on this, the brand tweaked its core marketing strategies and made the product widely acceptable .

Descriptive research can be used by an organization to understand the spending patterns of customers as well as by a psychologist who has to deal with mentally ill patients. In both these professions, the individuals will require thorough analyses of their subjects and large amounts of crucial data to develop a plan of action.

Every method of descriptive research can provide information that is diverse, thorough and varied. This supports future research and hypotheses. But although they can be quick, cheap and easy to conduct in the participants’ natural environment, descriptive research design can be limited by the kind of information it provides, especially with case studies. Trying to generalize a larger population based on the data gathered from a smaller sample size can be futile. Similarly, a researcher can unknowingly influence the outcome of a research project due to their personal opinions and biases. In any case, a manager has to be prepared to collect important information in substantial quantities and have a balanced approach to prevent influencing the result. 

Harappa’s Thinking Critically program harnesses the power of information to strengthen decision-making skills. It’s a growth-driven course for young professionals and managers who want to be focused on their strategies, outperform targets and step up to assume the role of leader in their organizations. It’s for any professional who wants to lay a foundation for a successful career and business owners who’re looking to take their organizations to new heights.



Descriptive Research 101: Definition, Methods and Examples

Parvathi vijayamohan.

18 October 2023

Table Of Contents

  • What is Descriptive Research?
  • Key Characteristics of Descriptive Research
  • Descriptive Research Methods: The 3 You Need to Know!
  • Observation
  • Case Studies
  • 7 Types of Descriptive Research
  • Descriptive Research: Examples to Build Your Next Study
  • Tips to Excel at Descriptive Research

Imagine you are a detective called to a crime scene. Your job is to study the scene and report whatever you find: whether that’s the half-smoked cigarette on the table or the large “RACHE” written in blood on the wall. That, in a nutshell, is  descriptive research .

Researchers often need to do descriptive research on a problem before they attempt to solve it. So in this guide, we’ll take you through:

  • What is descriptive research + characteristics
  • Descriptive research methods
  • Types of descriptive research
  • Descriptive research examples
  • Tips to excel at the descriptive method

Click to jump to the section that interests you.

Definition: As its name says, descriptive research describes the characteristics of the problem, phenomenon, situation, or group under study.

So the goal of all descriptive studies is to explore the background, details, and existing patterns in the problem to fully understand it. In other words, preliminary research.

However, descriptive research can be both preliminary and conclusive. You can use the data from a descriptive study to make reports and get insights for further planning.

What descriptive research isn't: Descriptive research finds the what/when/where of a problem, not the why/how.

Because of this, we can’t use the descriptive method to explore cause-and-effect relationships where one variable (like a person’s job role) affects another variable (like their monthly income).

  • Answers the "what," "when," and "where" of a research problem. For this reason, it is popularly used in market research, awareness surveys, and opinion polls.
  • Sets the stage for a research problem. As an early part of the research process, descriptive studies help you dive deeper into the topic.
  • Opens the door for further research. You can use descriptive data as the basis for more profound research, analysis and studies.
  • Qualitative and quantitative. It is possible to get a balanced mix of numerical responses and open-ended answers from the descriptive method.
  • No control or interference with the variables. The researcher simply observes and reports on them. However, specific research software has filters that allow her to zoom in on one variable.
  • Done in natural settings. You can get the best results from descriptive research by talking to people, surveying them, or observing them in a suitable environment. For example, suppose you are a website owner beta testing an app feature. In that case, descriptive research invites users to try the feature, tracking their behavior and then asking their opinions.
  • Can be applied to many research methods and areas. Examples include healthcare, SaaS, psychology, political studies, education, and pop culture.

Descriptive Research Methods: The Top Three You Need to Know!

Surveys

In short, survey research is a brief interview or conversation with a set of prepared questions about a topic.

So you create a questionnaire, share it, and analyze the data you collect for further action. Learn about the differences between surveys and questionnaires  here .

You can access free survey templates, 20+ question types, and pass data to 1,500+ applications with survey software like SurveySparrow. It enables you to create surveys, share them, and capture data with very little effort.


  • Surveys can be hyper-local, regional, or global, depending on your objectives.
  • Share surveys in person, offline, via SMS, email, or QR codes – so many options!
  • Easy to automate if you want to conduct many surveys over a period.

Observation

The observational method is a type of descriptive research in which you, the researcher, observe ongoing behavior.

Now, there are several (non-creepy) ways you can observe someone. In fact, observational research has three main approaches:

  • Covert observation: In true spy fashion, the researcher mixes in with the group undetected or observes from a distance.
  • Overt observation : The researcher identifies himself as a researcher – “The name’s Bond. J. Bond.” – and explains the purpose of the study.
  • Participatory observation : The researcher participates in what he is observing to understand his topic better.
  • Observation is one of the most accurate ways to get data on a subject’s behavior in a natural setting.
  • You don’t need to rely on people’s willingness to share information.
  • Observation is a universal method that can be applied to any area of research.

Case Studies

In the case study method, you do a detailed study of a specific group, person, or event over a period.

This brings us to a frequently asked question: “What’s the difference between case studies and longitudinal studies?”

A case study goes very in-depth into the subject with one-on-one interviews, observations, and archival research. Case studies are also primarily qualitative, though they sometimes use numbers and stats. A longitudinal study, by contrast, follows the same, usually larger, group over an extended period to track how it changes.

An example of longitudinal research would be a study of the health of night shift employees vs. general shift employees over a decade. An example of a case study would involve in-depth interviews with Casey, an assistant director of nursing who’s handled the night shift at the hospital for ten years now.

  • Due to the focus on a few people, case studies can give you a tremendous amount of information.
  • Because of the time and effort involved, a case study engages both researchers and participants.
  • Case studies are helpful for ethically investigating unusual, complex, or challenging subjects. An example would be a study of the habits of long-term cocaine users.

Descriptive Research: Examples to Build Your Next Study

1. Case Study: Airbnb’s Growth Strategy

In an excellent case study, Tam Al Saad, Principal Consultant, Strategy + Growth at Webprofits, deep dives into how Airbnb attracted and retained 150 million users.

“What Airbnb offers isn’t a cheap place to sleep when you’re on holiday; it’s the opportunity to experience your destination as a local would. It’s the chance to meet the locals, experience the markets, and find non-touristy places.

Sure, you can visit the Louvre, see Buckingham Palace, and climb the Empire State Building, but you can do it as if it were your hometown while staying in a place that has character and feels like a home.” – Tam Al Saad, Principal Consultant, Strategy + Growth at Webprofits

2. Observation – Better Tech Experiences for the Elderly

We often think that our elders are hopeless with technology. But we’re not getting any younger either, and tech is changing at breakneck speed! This article by Annemieke Hendricks shares a wonderful example where researchers compare the levels of technological familiarity between age groups and how that influences usage.

“It is generally assumed that older adults have difficulty using modern electronic devices, such as mobile telephones or computers. Because this age group is growing in most countries, changing products and processes to adapt to their needs is increasingly more important.” – Annemieke Hendricks, Marketing Communication Specialist, Noldus

3. Surveys – Decoding Sleep with SurveySparrow

SRI International (formerly Stanford Research Institute) – an independent, non-profit research center – wanted to investigate the impact of stress on adolescents’ sleep. To get those insights, two actions were essential: tracking sleep patterns through wearable devices and sending surveys at a pre-set time – the pre-sleep period.

“With SurveySparrow’s recurring surveys feature, SRI was able to share engaging surveys with their participants exactly at the time they wanted and at the frequency they preferred.”

Read more about this project: How SRI International decoded sleep patterns with SurveySparrow

Tips to Excel at Descriptive Research

#1: Answer the six Ws –

  • Who should we consider?
  • What information do we need?
  • When should we collect the information?
  • Where should we collect the information?
  • Why are we obtaining the information?
  • In what way (the sixth W) should we collect the information?

#2: Introduce and explain your methodological approach

#3: Describe your methods of data collection and/or selection.

#4: Describe your methods of analysis.

#5: Explain the reasoning behind your choices.

#6: Collect data.

#7: Analyze the data. Use software to speed up the process and reduce overthinking and human error (a minimal code sketch follows these steps).

#8: Report your conclusions and explain how you reached them.
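If you use software for step #7, the sketch below shows the kind of number-crunching a descriptive study typically reports: averages, spreads, and frequencies. It is a minimal Python example with invented data; the file name and column names (age, satisfaction, channel) are hypothetical placeholders, not part of any real survey.

```python
# Minimal descriptive-analysis sketch. The CSV file and column names
# (age, satisfaction, channel) are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Averages, spread, and quartiles for the numeric variables.
print(df[["age", "satisfaction"]].describe())

# Frequencies of a categorical variable, as counts and as percentages.
print(df["channel"].value_counts())
print(df["channel"].value_counts(normalize=True).mul(100).round(1))

# Average satisfaction per channel: a simple descriptive cross-cut.
print(df.groupby("channel")["satisfaction"].mean().round(2))
```

Note that nothing here tests cause and effect; the script only describes the sample, which is exactly the scope of descriptive research.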

Wrapping Up

That’s all, folks!



Descriptive Research Design | Definition, Methods & Examples

Published on 5 May 2022 by Shona McCombes. Revised on 10 October 2022.

Descriptive research aims to accurately and systematically describe a population, situation or phenomenon. It can answer what, where, when, and how questions, but not why questions.

A descriptive research design can use a wide variety of research methods to investigate one or more variables. Unlike in experimental research, the researcher does not control or manipulate any of the variables, but only observes and measures them.

Table of contents

  • When to use a descriptive research design
  • Descriptive research methods

Descriptive research is an appropriate choice when the research aim is to identify characteristics, frequencies, trends, and categories.

It is useful when not much is known yet about the topic or problem. Before you can research why something happens, you need to understand how, when, and where it happens.

Examples of descriptive research questions:

  • How has the London housing market changed over the past 20 years?
  • Do customers of company X prefer product Y or product Z?
  • What are the main genetic, behavioural, and morphological differences between European wildcats and domestic cats?
  • What are the most popular online news sources among under-18s?
  • How prevalent is disease A in population B?


Descriptive research is usually defined as a type of quantitative research , though qualitative research can also be used for descriptive purposes. The research design should be carefully developed to ensure that the results are valid and reliable .

Surveys

Survey research allows you to gather large volumes of data that can be analysed for frequencies, averages, and patterns. Common uses of surveys include:

  • Describing the demographics of a country or region
  • Gauging public opinion on political and social topics
  • Evaluating satisfaction with a company’s products or an organisation’s services

Observations

Observations allow you to gather data on behaviours and phenomena without having to rely on the honesty and accuracy of respondents. This method is often used by psychological, social, and market researchers to understand how people act in real-life situations.

Observation of physical entities and phenomena is also an important part of research in the natural sciences. Before you can develop testable hypotheses , models, or theories, it’s necessary to observe and systematically describe the subject under investigation.

Case studies

A case study can be used to describe the characteristics of a specific subject (such as a person, group, event, or organisation). Instead of gathering a large volume of data to identify patterns across time or location, case studies gather detailed data to identify the characteristics of a narrowly defined subject.

Rather than aiming to describe generalisable facts, case studies often focus on unusual or interesting cases that challenge assumptions, add complexity, or reveal something new about a research problem .

McCombes, S. (2022, October 10). Descriptive Research Design | Definition, Methods & Examples. Scribbr. https://www.scribbr.co.uk/research-methods/descriptive-research-design/


What is descriptive research?

Last updated 5 February 2023 • Reviewed by Cathy Heath

Descriptive research is a common investigatory model used by researchers in various fields, including social sciences, linguistics, and academia.

Read on to understand the characteristics of descriptive research and explore its underlying techniques, processes, and procedures.


Descriptive research is an exploratory research method. It enables researchers to precisely and methodically describe a population, circumstance, or phenomenon.

As the name suggests, descriptive research describes the characteristics of the group, situation, or phenomenon being studied without manipulating variables or testing hypotheses. The data can be gathered using surveys, observational studies, and case studies. You can use both quantitative and qualitative methods to compile the data.

Besides making observations and then comparing and analyzing them, descriptive studies often develop knowledge concepts and provide solutions to critical issues. They aim to answer how the event occurred, when and where it occurred, and what the problem or phenomenon is.

Characteristics of descriptive research

The following are some of the characteristics of descriptive research:

Quantitativeness

Descriptive research can be quantitative, as it gathers quantifiable data to statistically analyze a population sample. These numbers can show patterns, connections, and trends over time and can be discovered using surveys, polls, and structured observations.

Qualitativeness

Descriptive research can also be qualitative. It gives meaning and context to the numbers supplied by quantitative descriptive research .

Researchers can use tools like interviews, focus groups, and ethnographic studies to illustrate why things are what they are and help characterize the research problem. This is because qualitative description is more explanatory than exploratory or experimental work.

Uncontrolled variables

Descriptive research differs from experimental research in that researchers cannot manipulate the variables. They are recognized, scrutinized, and quantified instead. This is one of its most prominent features.

Cross-sectional studies

Descriptive research is often cross-sectional, examining several characteristics of the same group at once. It involves obtaining data on multiple variables at the individual level during a certain period. It’s helpful when trying to understand a larger community’s habits or preferences.

Carried out in a natural environment

Descriptive studies are usually carried out in the participants’ everyday environment, which allows researchers to avoid influencing responders by collecting data in a natural setting. You can use online surveys or survey questions to collect data or observe.

Basis for further research

You can further dissect descriptive research’s outcomes and use them for different types of investigation. The outcomes also serve as a foundation for subsequent investigations and can guide future studies. For example, you can use the data obtained in descriptive research to help determine future research designs.

Descriptive research methods

There are three basic approaches for gathering data in descriptive research: observational, case study, and survey.

Surveys

You can use surveys to gather data in descriptive research. This involves gathering information from many people using questionnaires and interviews.

Surveys remain the dominant research tool for descriptive research design. Researchers can conduct various investigations and collect multiple types of data (quantitative and qualitative) using surveys with diverse designs.

You can conduct surveys over the phone, online, or in person. Your survey might be a brief interview or conversation with a set of prepared questions intended to obtain quick information from the primary source.

Observation

This descriptive research method involves observing and gathering data on a population or phenomena without manipulating variables. It is employed in psychology, market research , and other social science studies to track and understand human behavior.

Observation is an essential component of descriptive research. It entails gathering data and analyzing it to see whether there is a relationship between variables in the study. This strategy usually allows for both qualitative and quantitative data analysis.

Case studies

A case study can outline a specific topic’s traits. The topic might be a person, group, event, or organization.

It involves using a subset of a larger group as a sample to characterize the features of that larger group.

You can sometimes generalize knowledge gained from a case study to benefit a broader audience.

This approach entails carefully examining a particular group, person, or event over time. You can learn something new about the study topic by using a small group to better understand the dynamics of the entire group.

Types of descriptive research

There are several types of descriptive study. The most well-known include cross-sectional studies, census surveys, sample surveys, case reports, and comparison studies.

Case reports and case series

In the healthcare and medical fields, a case report is used to explain a patient’s circumstances when suffering from an uncommon illness or displaying certain symptoms. A case series is a collection of related case reports. Both have aided the advancement of medical knowledge on countless occasions.

Descriptive-normative survey

The normative component is an addition to the descriptive survey. In the descriptive–normative survey, you compare the study’s results to the norm.

Descriptive survey

This descriptive type of research employs surveys to collect information on various topics. The data help determine the degree to which certain conditions are present among the subjects.

Sample survey

You can extrapolate or generalize the information you obtain from sample surveys to the larger group being researched.

Correlative survey

Correlative surveys help establish if there is a positive, negative, or neutral connection between two variables.
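To make “positive, negative, or neutral” concrete: a correlation coefficient near +1 indicates a positive connection, near −1 a negative one, and near 0 none. Here is a minimal sketch with invented numbers; the variables and values are hypothetical, purely for illustration.

```python
# Hypothetical correlative-survey data: weekly study hours vs. test scores.
from scipy.stats import pearsonr

study_hours = [2, 4, 5, 7, 8, 10, 12]
test_scores = [55, 60, 62, 70, 74, 80, 85]

r, p = pearsonr(study_hours, test_scores)
print(f"r = {r:.2f}, p = {p:.4f}")  # r close to +1 suggests a positive connection
```

Even a strong correlation here remains descriptive; it does not establish cause and effect.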

Census survey

Performing census surveys involves gathering relevant data on several aspects of a given population. These units include individuals, families, organizations, objects, characteristics, and properties.

Cross-sectional study

In a cross-sectional study, you gather data on variables of interest from a specific population at a single point in time. Cross-sectional studies provide a glimpse of a phenomenon’s prevalence and features in a population. They raise few ethical challenges and are quite simple and inexpensive to carry out.

Comparative studies

These surveys compare the conditions or characteristics of two or more subjects. The subjects may include research variables, organizations, plans, and people.

Three important factors affect how well and accurately comparative studies are conducted: the comparison points, the assumed similarities, and the criteria of comparison.

For instance, descriptive research can help determine how many CEOs hold a bachelor’s degree and what proportion of low-income households receive government help.

Pros and cons

The primary advantage of descriptive research designs is that researchers can create a reliable and beneficial database for additional study. To conduct any inquiry, you need access to reliable information sources that can give you a firm understanding of a situation.

Quantitative studies are time- and resource-intensive, so knowing which hypotheses are viable for testing is crucial. The basic overview of descriptive research provides helpful hints as to which variables are worth quantitatively examining. This is why it’s employed as a precursor to quantitative research designs.

Some experts view this research as untrustworthy and unscientific because the findings cannot be statistically verified: the researcher does not manipulate any variables.

Cause-and-effect relationships also can’t be established through descriptive investigations. Additionally, observational findings are difficult to replicate, which prevents independent review of the results.

The absence of statistical and in-depth analysis and the rather superficial character of the investigative procedure are drawbacks of this research approach.

Descriptive research examples and applications

Several descriptive research examples are highlighted below, organized by type, purpose, and application. Research questions in these studies often begin with “What is…”. These studies help find solutions to practical issues in social science, physical science, and education.

Here are some examples and applications of descriptive research:

Determining consumer perception and behavior

Organizations use descriptive research designs to determine how various demographic groups react to a certain product or service.

For example, a business looking to sell to its target market should research the market’s behavior first. When researching human behavior in response to a cause or event, the researcher pays attention to the traits, actions, and responses before drawing a conclusion.

Scientific classification

Scientific descriptive research enables the classification of organisms and their traits and constituents.

Measuring data trends

A descriptive study design’s statistical capabilities allow researchers to track data trends over time. It’s frequently used to determine the study target’s current circumstances and underlying patterns.
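As a small illustration of trend tracking, here is a minimal sketch with invented dates and ratings; the column names are hypothetical placeholders.

```python
# Hypothetical trend measurement: average rating per month.
import pandas as pd

data = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-15", "2023-01-20", "2023-02-10",
                            "2023-02-25", "2023-03-05"]),
    "rating": [3.8, 4.0, 4.1, 4.3, 4.5],
})

# Group by calendar month and average; rising values indicate an upward trend.
monthly = data.groupby(data["date"].dt.to_period("M"))["rating"].mean()
print(monthly.round(2))
```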

Conduct comparison

Organizations can use a descriptive research approach to learn how various demographics react to a certain product or service. For example, you can study how the target market responds to a competitor’s product and use that information to infer their behavior.

Bottom line

A descriptive research design is suitable for exploring certain topics and serving as a prelude to larger quantitative investigations. It provides a comprehensive understanding of the “what” of the group or thing you’re investigating.

This research type acts as the cornerstone of other research methodologies . It is distinctive because it can use quantitative and qualitative research approaches at the same time.

What is descriptive research design?

Descriptive research design aims to systematically obtain information to describe a phenomenon, situation, or population. More specifically, it helps answer the what, when, where, and how questions regarding the research problem rather than the why.

How does descriptive research compare to qualitative research?

Despite certain parallels, descriptive research concentrates on describing phenomena, while qualitative research aims to understand people better.

How do you analyze descriptive research data?

Data analysis involves applying various methodologies that enable the researcher to evaluate the findings and report on their validity and reliability.



How to Write a Title for a Compare and Contrast Essay

Last Updated: August 10, 2021

This article was co-authored by Emily Listmann, MA. Emily Listmann is a private tutor in San Carlos, California. She has worked as a Social Studies Teacher, Curriculum Coordinator, and an SAT Prep Teacher. She received her MA in Education from the Stanford Graduate School of Education in 2014.

The title is an important part of any essay. After all, it’s the first thing people read. When you write a title for your compare and contrast essay, it needs to let your reader know what subjects you want to compare and how you plan to compare them. Some essays need more formal, informative titles while others benefit from creative titles. No matter what, just remember to keep your title short, readable, and relevant to your writing.

Creating an Informative Title

Step 1 Establish your audience.

  • Informative titles like “The Benefit of Owning a Cat vs. a Dog”, for example, would be better for a classroom setting, while a creative title like “My Dog is Better than a Cat” would be better for a blog. [2]

Step 2 List what you want to compare.

  • You only need to include the broad topics or themes you want to compare, such as dogs and cats. Don’t worry about putting individual points in your title. Those points will be addressed in the body of your essay.
  • You may be comparing something to itself over time or space, like rock music in the 20th and 21st centuries, or Renaissance art in Italy and the Netherlands. If that’s the case, list the subject you want to compare, and places or timeframes that you are using for your comparison.

Step 3 Decide if your essay is meant to be persuasive or not.

  • Persuasive essay titles might use words like “benefit,” “better,” “advantages,” “should,” “will,” and other words that convey a sense that one subject has an advantage over the other.
  • Informative titles might use words like “versus,” “compared,” or “difference”. These words don’t suggest that one subject is better or worse, they simply point out they are not the same.

Step 4 Write your informative title.

  • The end result should be a title that lets readers know what you want to compare and contrast, and how you plan on doing so, in just a few words. If, for example, you’re comparing rock music across time, your title might be The Difference in Chord Progressions of 20th and 21st-century Rock Music. [4]


Generating a Creative Title

Step 1 Establish your purpose.

  • If, for example, you just want to compare white and milk chocolate, you are providing facts. Your goal will not be to make your audience think one particular chocolate is better. Your title, then, may be something like "Loco for Cocoa: The Differences Between Types of Chocolate."
  • If, however, you want to tell your audience why milk chocolate is better, you are reinforcing a popular idea. If you want to explain why white chocolate is better, you are going against a popular idea. In that case, a better title might be "Milking it - Why White Chocolate is Totally the Best Chocolate."

Step 2 Avoid direct comparison words.

  • “Do Hash Browns Stack up Against Fries as a Burger Side?” creates a sense of tension between your subjects and challenges a popular opinion. It is a more engaging title for your readers than “Comparing Hash Browns and Fries as Burger Sides.”

Step 3 Use a colon.

  • For example, if you want to write an essay comparing two works of art by Van Gogh, you may use a title like, “Look at Him Gogh: Comparing Floral Composition in Almond Blossoms and Poppy Flowers.”

Keeping Your Title Relevant and Readable

Step 1 Write the paper first.

  • Your essay is where you will make your arguments. Your title just needs to convey your subjects and establish that you plan to compare and contrast them in some way.

Step 3 Ask a friend for their opinion.

Expert Q&A

  • If you're struggling to figure out a title, try writing your thesis at the top of a blank page, then brainstorming all the titles you can think of below. Go through slowly to see which ones fit your paper the best and which you like the most.



References

  1. https://www.kibin.com/essay-writing-blog/how-to-write-good-essay-titles/
  2. http://www.schooleydesigns.com/compare-and-contrast-essay-title/
  3. http://www.editage.com/insights/3-basic-tips-on-writing-a-good-research-paper-title
  4. http://canuwrite.com/article_titles.php
  5. http://writing.umn.edu/sws/assets/pdf/quicktips/titles.pdf
  6. http://www.aacstudents.org/tips-for-essay-writing-asking-friends-to-help-you-out.php



80+ Great Research Titles Examples in Various Academic Fields


Coming up with a research title for an academic paper is one of the most challenging parts of the writing process. Even though there is an unlimited quantity of research titles to write about, knowing which one is best for you can be hard. We have done the research for you and compiled eighty examples of research titles to write on. Additionally, we have divided the research titles examples into sections to make them easier to choose.

Contents

  • Research Study Examples of Current Events
  • Examples of Research Topics on Ethics
  • Title of Research Study Examples on Health
  • Research Paper Title Examples on Social Concerns
  • Examples of Research Title on Art and Culture
  • Example of Research Interest in Religion
  • Samples of Research Study Topics on Technology
  • Research Examples of Environmental Studies
  • Good Research Title Examples on History
  • Specific Topic Examples Regarding Education
  • Research Title Examples for Students on Family, Food, and Nutrition
  • Research Problems Examples in Computer Science
  • Samples of Research Title About Business, Marketing, and Communications
  • Sample of Research Study Topics in Women’s Studies
  • Research Problem Example on Politics
  • What Are Some Examples of Research Paper Topics on Law?
  • Final Words About Research Titles

When it comes to choosing a good sample research title, research is one of the best tips you can get. By reading widely, including your school notes and scholarly articles, you will find problems or lines of interest to pursue. Then, you can derive a question from areas that appear to have a knowledge gap and proceed with researching the answer. As promised, below are eighty research title examples categorized into different areas, including social media research topics.

  • Discuss the peculiar policies of a named country – for example, discuss the impacts of the one-child policy of China.
  • Research on the influence of a named political leader, say a president, on the country they governed and other countries around. For instance, you can talk about how Trump’s presidency has changed international relations.
  • Conduct an analysis of a particular aspect of two named countries – for example, the history of the relationship between the U.S. and North Korea.
  • Compare the immigration laws in two or more named countries – for example, discuss how immigration laws in the U.S. compare with those of other countries.
  • Discuss how the Black Lives Matter movement has affected the view and discussions about racism in the United States.
  • Enumerate the different ways the government of the United States can reduce deaths arising from the unregulated use of guns.
  • Analyze the place of ethics in medicine or of medical practitioners. For instance, you can discuss the prevalence of physician-assisted suicides in a named country. You may also talk about the ethicality of such a practice and whether it should be legal.
  • Explain how recent research breakthroughs have affected that particular field – for instance, how stem cell research has impacted the medical field.
  • Explain if and why people should be able to donate organs in exchange for money.
  • Discuss ethical behaviors in the workplace and (or) the educational sector. For example, talk about whether or not affirmative action is still important or necessary in education or the workplace.
  • Weigh the benefits and risks of vaccinating children and decide which one outweighs the other. Here, you might want to consider the different types of vaccinations and the nature and frequency of associated complications.
  • Investigate at least one of the health issues that currently pose a threat to humanity and which are under investigation. These issues can include Alzheimer’s, cancer, depression, autism, and HIV/AIDS. Research how these issues affect individuals and society and recommend solutions to alleviate cost and suffering.
  • Study some individuals suffering from and under treatment for depression. Then, investigate the common predictors of the disease and how this information can help prevent the issue.

Tip : To make this example of a research title more comprehensive, you can focus on a certain age range – say, teenagers.

  • Discuss whether or not free healthcare and medication should be available to people and the likely implications.
  • Identify and elucidate different methods or programs that have been most effective in preventing or reducing teen pregnancy.
  • Analyze different reasons and circumstances for genetic manipulation and the different perspectives of people on this matter. Then, discuss whether or not parents should be allowed to engineer designer babies.
  • Identify the types of immigration benefits, including financial, medical, and education, your country provides for refugees and immigrants. Then, discuss how these benefits have helped them in settling down and whether more or less should be provided.
  • Discuss the acceptance rate of the gay community in your country or a specific community. For example, consider whether or not gay marriage is permitted, whether they can adopt children, and whether they are welcome in religious gatherings.
  • Explore and discuss if terrorism truly creates a fear culture that can become a society’s unintended terrorist.
  • Consider and discuss the different techniques one can use to identify pedophiles on social media.

Tip : Social issues research topics are interesting, but ensure you write formally and professionally.

  • Investigate the importance or lack of importance of art in primary or secondary education. You can also recommend whether or not it should be included in the curriculum and why.

Tip : You can write on this possible research title based on your experiences, whether positive or negative.

  • Discuss the role of illustration in children’s books and how it facilitates easy understanding in children. You may focus on one particular book or select a few examples and compare and contrast.
  • Should the use of art in books for adults be considered, and what are the likely benefits?
  • Compare and contrast the art of two named cultural Renaissances – for instance, the Northern Renaissance and the Italian Renaissance.
  • Investigate how sexism is portrayed in different types of media, including video games, music, and film. You can also talk about whether or not the amount of sexism portrayed has reduced or increased over the years.
  • Explore different perspectives and views on dreams; are they meaningful or simply a game of the sleeping mind? You can also discuss the functions and causes of dreams, like sleeping with anxiety, eating before bed, and prophecies.
  • Investigate the main reasons why religious cults are powerful and appealing to the masses, referring to individual cases.
  • Investigate the impact of religion on the crime rate in a particular region.

Tip : Narrow down this research title by choosing to focus on a particular age group, say children or teenagers, or family. Alternatively, you can focus on a particular crime in the research to make the paper more extensive.

  • Explore reasons why Martin Luther decided to split with the Catholic church.
  • Discuss the circumstances in Siddhartha’s life that led to him becoming the Buddha.

Tip : It is important to remove sentiments from your research and base your points instead on clear evidence from a sound study. This ensures your title of research does not lead to unsubstantiated value judgments, which reduces the quality of the paper.

  • Discuss how the steel sword, gunpowder, biological warfare, longbow, or atomic bomb has changed the nature of warfare.

Tip : For this example of the research problem, choose only one of these technological developments or compare two or more to have a rich research paper.

  • Explore the changes computers, tablets, and smartphones have brought to human behaviors and culture, using published information and personal experience.

Tip: Approach each research study example in a research paper context or buy a research paper online, giving a formal but objective view of the subject.

  • Are railroads and trains primary forces in the industrialization, exploitation, and settlement of your homeland or continent?
  • Discuss how the use of fossil fuels has changed or shaped the world.

Tip : Narrow down this title of the research study to focus on a local or particular area or one effect of fossil fuels, like oil spill pollution.

  • Discuss what progress countries have made with artificial intelligence. You can focus on one named country or compare the progress of one country with another.
  • Investigate the factual status of global warming – that is, is it a reality or a hoax? If it is a reality, explore the primary causes and how humanity can make a difference.
  • Conduct in-depth research on endangered wildlife species in your community and discuss why they have become endangered. You can also enumerate what steps the community can take to prevent these species from going extinct and increase their chances of survival.
  • Investigate the environmental soundness of the power sources in your country or community. Then, recommend alternative energy sources that might be best suited for the area and why.
  • Consider an area close to wildlife reserves and national parks, and see whether oil and mineral exploration has occurred there. Discuss whether this action should be allowed or not, with fact-backed reasons.
  • Investigate how the use and abolishment of DDT have affected the population of birds in your country.

Tip : Each example research title requires that you consult authoritative scientific reports to improve the quality of your paper. Furthermore, specificity and preciseness are required in each example of research title and problem, which only an authority source can provide.

  • Discuss the importance of a major historical event and why it was so important in the day. These events can include the assassination of John F. Kennedy or some revolutionary document like the Magna Carta.
  • Consider voyagers such as the Vikings and the Chinese, as well as native populations, and investigate whether Columbus discovered America first.
  • Study a named historical group, family, or individual through their biographies, examining how readers have responded to them.
  • Research people of different cultural orientations and their responses to the acts of others who live around them.
  • Investigate natural disasters in a named country and how the government has responded to them. For example, explore how the response of the New Orleans government to natural disasters has changed since Hurricane Katrina.

Tip : Focus this research title sample on one particular country or natural disaster or compare the responses of two countries with each other.

  • Explore the educational policy, “no child left behind,” investigating its benefits and drawbacks.
  • Investigate the concept of plagiarism in the twenty-first century, its consequences, and its prevalence in modern universities. Take a step further to investigate how and why many students don’t understand the gravity of their errors.
  • Do in-depth research on bullying in schools, explaining the seriousness of the problem in your area in particular. Also, recommend actions schools, teachers, and parents can take to improve the situation if anything.
  • Explore the place of religion in public schools; if it has a place, explain why, and if it does not, explain why not.
  • Does a student’s financial background have any effect on his or her academic performance? In this sample research title, you can compare students from different financial backgrounds, from wealthy to average, and their scores on standardized tests.
  • Is spanking one’s child considered child abuse; if so, why? In this research problem example for students, consider whether or not parents should be able to spank their children.
  • Investigate the relationship between family health and nutrition, focusing on particular nutrition. This example of the title of the research study, for instance, can focus on the relationship between breastfeeding and baby health.
  • Elucidate on, if any, the benefits of having a home-cooked meal and sitting down as a family to eat together.
  • Explore the effect of fast-food restaurants on family health and nutrition, and whether or not they should be regulated.
  • Research local food producers and farms in your community, pinpointing how much of your diet is acquired from them.

Tip : These are great research titles from which you can coin research topics for STEM students .

  • Compare and contrast the two major operating systems: Mac and Windows, and discuss which one is better.

Tip : This title of the research study example can lead to strong uninformed opinions on the matter. However, it is important to investigate and discuss facts about the two operating systems, basing your conclusions on these.

  • Explain the effect of spell checkers, autocorrect functions, and grammar checkers on the writing skills of computer users. Have these tools improved users’ writing skills or weakened them?

Tip : For this example of title research, it is better to consider more than one of these tools to write a comprehensive paper.

  • Discuss the role(s) artificial intelligence is playing now or will likely play in the future as regards human evolution.
  • Identify and investigate the next groundbreaking development in computer science (like the metaverse), explaining why you believe it will be important.
  • Discuss a particular trendsetting technological tool, like blockchain technology, and how it has benefited different sectors.

Tip: For this research title example, you may want to focus on the effect of one tool on one particular sector. This way, you can investigate the example more thoroughly and give as many details as possible.

  • Consider your personal experiences as well as those of close friends and family. Then, determine how marketing has invaded your lives and whether these impersonal communications are more positive than negative or vice versa.
  • Investigate the regulations (or lack thereof) that apply to marketing items to children in your region. Do you think these regulations are unfounded, right, or inadequate?
  • Investigate the merits and demerits of outsourcing customer services; you can compare the views of businesses with those of their customers.
  • How has the communication we do through blog sites, messaging, social media, email, and other online platforms improved interpersonal communication, if it has?
  • Can understanding culture change the way you do business? Discuss how.

Tip : Ensure you share your reasoning on this title of the research study example and provide evidence-backed information to support your points.

  • Learn everything you can about eating disorders like bulimia and anorexia, as well as their causes and symptoms. Then, investigate and discuss the significance of their impact and recommend actions that might improve the situation.
  • Research a major development in women’s history, like the admission of women to higher institutions and the legalization of abortion. Discuss the short-term and (or) long-term implications of the named event or development.
  • Discuss gender inequality in the workplace – for instance, the fact that women tend to earn less than men for doing the same job. Provide specific real-life examples as you explain the reasons for this and recommend solutions to the problem.
  • How have beauty contests helped women: have they empowered them in society or objectified them?

Tip : You may shift the focus of this topic research example to female strippers or women who act in pornographic movies.

  • Investigate exceptional businesswomen in the 21st century; you can focus on one or compare two or more.

Tip : When writing on the title of a research example related to women, avoid using persuasion tactics; instead, be tactful and professional in presenting your points.

  • Discuss the unique nature and implications of Donald Trump’s presidency on the United States and the world.
  • Investigate the conditions and forces related to the advent and rise of Nazi Germany. You can shift the focus of this research title example to major wars like WWI or the American Civil War.
  • Is the enormous amount of money spent during election campaigns a legitimate expense?
  • Investigate a named major political scandal that recently occurred in your region or country. Discuss how it started, how its news spread, and its impacts on individuals in that area.
  • Discuss the impacts British rule had on India.
  • Investigate the rate of incarceration in your region and compare it with that of other countries or other regions.
  • Is incarcerating criminals an effective solution in promoting the rehabilitation of criminals and controlling crime rates?
  • Consider various perspectives on the issue of gun control and coin several argumentative essay topics on the matter.
  • Why do drivers continue to text while driving despite legal implications and dire consequences?
  • Discuss the legality of people taking their own lives due to suffering from a debilitating terminal disease.

Each example of the research title provided in this article will make for a rich, information-dense research paper. However, you have a part to play in researching thoroughly on the example of the research study. To simplify the entire process for you, hiring our writing services is key as you wouldn’t have to worry about choosing topics. Our team of skilled writers knows the right subject that suits your research and how to readily get materials on them.


On Evaluating Curricular Effectiveness: Judging the Quality of K-12 Mathematics Evaluations (2004), National Academies Press

Chapter 5: Comparative Studies

It is deceptively simple to imagine that a curriculum’s effectiveness could be easily determined by a single well-designed study. Such a study would randomly assign students to two treatment groups, one using the experimental materials and the other using a widely established comparative program. The students would be taught the entire curriculum, and a test administered at the end of instruction would provide unequivocal results that would permit one to identify the more effective treatment.

The truth is that conducting definitive comparative studies is not simple, and many factors make such an approach difficult. Student placement and curricular choice are decisions that involve multiple groups of decision makers, accrue over time, and are subject to day-to-day conditions of instability, including student mobility, parent preference, teacher assignment, administrator and school board decisions, and the impact of standardized testing. This complex set of institutional policies, school contexts, and individual personalities makes comparative studies, even quasi-experimental approaches, challenging, and thus demands an honest and feasible assessment of what can be expected of evaluation studies (Usiskin, 1997; Kilpatrick, 2002; Schoenfeld, 2002; Shafer, in press).

Comparative evaluation study is an evolving methodology, and our purpose in conducting this review was to evaluate and learn from the efforts undertaken so far and advise on future efforts. We stipulated the use of comparative studies as follows:

A comparative study was defined as a study in which two (or more) curricular treatments were investigated over a substantial period of time (at least one semester, and more typically an entire school year) and a comparison of various curricular outcomes was examined using statistical tests. A statistical test was required to ensure the robustness of the results relative to the study’s design.
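For readers who want to see what such a test looks like in practice, the sketch below runs a two-sample t-test on end-of-year scores from two hypothetical treatment groups. The numbers are invented for illustration and come from no study in our database; real curricular evaluations must also contend with design issues discussed later in this chapter, such as non-random assignment and students nested within classrooms.

```python
# Illustrative two-sample t-test on invented end-of-year test scores.
from scipy.stats import ttest_ind

experimental_group = [72, 75, 80, 68, 77, 74, 81, 79]  # new curriculum
comparison_group   = [70, 69, 74, 66, 71, 73, 68, 72]  # established curriculum

t_stat, p_value = ttest_ind(experimental_group, comparison_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p: difference unlikely due to chance
```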

We read and reviewed a set of 95 comparative studies. In this report we describe that database, analyze its results, and draw conclusions about the quality of the evaluation database both as a whole and separated into evaluations supported by the National Science Foundation and commercially generated evaluations. In addition to describing and analyzing this database, we also provide advice to those who might wish to fund or conduct future comparative evaluations of mathematics curricular effectiveness. We have concluded that the process of conducting such evaluations is in its adolescence and could benefit from careful synthesis and advice in order to increase its rigor, feasibility, and credibility. In addition, we took an interdisciplinary approach to the task, noting that various committee members brought different expertise and priorities to the consideration of what constitutes the most essential qualities of rigorous and valid experimental or quasi-experimental design in evaluation. This interdisciplinary approach has led to some interesting observations and innovations in our methodology of evaluation study review.

This chapter is organized as follows:

  • Study counts disaggregated by program and program type.
  • Seven critical decision points and identification of at least minimally methodologically adequate studies.
  • Definition and illustration of each decision point.
  • A summary of results by student achievement in relation to program types (NSF-supported, University of Chicago School Mathematics Project (UCSMP), and commercially generated) in relation to their reported outcome measures.
  • A list of alternative hypotheses on effectiveness.
  • Filters based on the critical decision points.
  • An analysis of results by subpopulations.
  • An analysis of results by content strand.
  • An analysis of interactions among content, equity, and grade levels.
  • Discussion and summary statements.

In this report, we describe our methodology for review and synthesis so that others might scrutinize our approach and offer criticism on the basis of our methodology and its connection to the results stated and conclusions drawn. In the spirit of scientific, fair, and open investigation, we welcome others to undertake similar or contrasting approaches and compare and discuss the results. Our work was limited by the short timeline set by the funding agencies resulting from the urgency of the task. Although we made multiple efforts to collect comparative studies, we apologize to any curriculum evaluators if comparative studies were unintentionally omitted from our database.

Of these 95 comparative studies, 65 were studies of NSF-supported curricula, 27 were studies of commercially generated materials, and 3 included two curricula each from one of these two categories. To avoid the problem of double coding, two studies, White et al. (1995) and Zahrt (2001), were coded within studies of NSF-supported curricula because more of the classes studied used the NSF-supported curriculum. These studies were not used in later analyses because they did not meet the requirements for the at least minimally methodologically adequate studies, as described below. The other, Peters (1992), compared two commercially generated curricula, and was coded in that category under the primary program of focus. Therefore, of the 95 comparative studies, 67 studies were coded as NSF-supported curricula and 28 were coded as commercially generated materials.

The 11 evaluation studies of the UCSMP secondary program that we reviewed, not including White et al. and Zahrt as previously mentioned, benefit from the maturity of the program, while demonstrating an orientation to both establishing effectiveness and improving a product line. For these reasons, at times we will present the summary of UCSMP’s data separately.

The Saxon materials also present a somewhat different profile from the other commercially generated materials because many of the evaluations of these materials were conducted in the 1980s and the materials were originally developed with a rather atypical program theory. Saxon (1981) designed its algebra materials to combine distributed practice with incremental development. We selected the Saxon materials as a middle grades commercially generated program, and limited its review to middle school studies from 1989 onward when the first National Council of Teachers of Mathematics (NCTM) Standards (NCTM, 1989) were released. This eliminated concerns that the materials or the conditions of educational practice have been altered during the intervening time period. The Saxon materials explicitly do not draw from the NCTM Standards nor did they receive support from the NSF; thus they truly represent a commercial venture. As a result, we categorized the Saxon studies within the group of studies of commercial materials.

At times in this report, we describe characteristics of the database by particular curricular program evaluations, in which case all 19 programs are listed separately. At other times, when we seek to inform ourselves on policy-related issues of funding and evaluating curricular materials, we use the NSF-supported, commercially generated, and UCSMP distinctions. We remind the reader of the artificial aspects of this distinction because at the present time, 18 of the 19 curricula are published commercially. In order to track the question of historical inception and policy implications, a distinction is drawn between the three categories. Figure 5-1 shows the distribution of comparative studies across the 14 programs.

[FIGURE 5-1 The distribution of comparative studies across programs. Programs are coded by grade band: black bars = elementary, white bars = middle grades, and gray bars = secondary. In this figure, there are six studies that involved two programs and one study that involved three programs. NOTE: Five programs (MathScape, MMAP, MMOW/ARISE, Addison-Wesley, and Harcourt) are not shown above since no comparative studies were reviewed.]

The first result the committee wishes to report is the uneven distribution of studies across the curricula programs. There were 67 coded studies of the NSF curricula, 11 studies of UCSMP, and 17 studies of the commercial publishers. The 14 evaluation studies conducted on the Saxon materials compose the bulk of these 17 non-UCSMP and non-NSF-supported curricular evaluation studies. As these results suggest, we know more about the evaluations of the NSF-supported curricula and UCSMP than about the evaluations of the commercial programs. We suggest that three factors account for this uneven distribution of studies. First, evaluations have been funded by the NSF both as a part of the original call, and as follow-up to the work in the case of three supplemental awards to two of the curricula programs. Second, most NSF-supported programs and UCSMP were developed at university sites where there is access to the resources of graduate students and research staff. Finally, there was some reported reluctance on the part of commercial companies to release studies that could affect perceptions of competitive advantage. As Figure 5-1 shows, there were quite a few comparative studies of Everyday Mathematics (EM), Connected Mathematics Project (CMP), Contemporary Mathematics in Context (Core-Plus Mathematics Project [CPMP]), Interactive Mathematics Program (IMP), UCSMP, and Saxon.

In the programs with many studies, we note that a significant number of studies were generated by a core set of authors. In some cases, the evaluation reports follow a relatively uniform structure applied to single schools, generating multiple studies or following cohorts over years. Others use a standardized evaluation approach to evaluate sequential courses. Any reports duplicating exactly the same sample, outcome measures, or forms of analysis were eliminated. For example, one study of Math Trailblazers (Carter et al., 2002) reanalyzed data from the larger ARC Implementation Center study (Sconiers et al., 2002), so it was not included separately. Synthesis studies referencing a variety of evaluation reports are summarized in Chapter 6, but relevant individual studies referenced in them were sought out and included in this comparative review.

Other less formal comparative studies are conducted regularly at the school or district level, but such studies were not included in this review unless we could obtain formal reports of their results, and the studies met the criteria outlined for inclusion in our database. In our conclusions, we address the issue of how to collect such data more systematically at the district or state level in order to subject the data to the standards of scholarly peer review and make it more systematically and fairly a part of the national database on curricular effectiveness.

A standard for evaluation of any social program requires that an impact assessment is warranted only if two conditions are met: (1) the curricular program is clearly specified, and (2) the intervention is well implemented. Absent this assurance, one must have a means of ensuring or measuring treatment integrity in order to make causal inferences. Rossi et al. (1999, p. 238) warned that:

two prerequisites [must exist] for assessing the impact of an intervention. First, the program's objectives must be sufficiently well articulated to make it possible to specify credible measures of the expected outcomes, or the evaluator must be able to establish such a set of measurable outcomes. Second, the intervention should be sufficiently well implemented that there is no question that its critical elements have been delivered to appropriate targets. It would be a waste of time, effort, and resources to attempt to estimate the impact of a program that lacks measurable outcomes or that has not been properly implemented. An important implication of this last consideration is that interventions should be evaluated for impact only when they have been in place long enough to have ironed out implementation problems.

These same conditions apply to the evaluation of mathematics curricula. The comparative studies in this report varied in the quality of documentation of these two conditions; however, all addressed them to some degree. By reviewing the studies, we initially identified one general design template consisting of seven critical decision points, and we determined that it could be used to develop a framework for conducting our meta-analysis. The seven critical decision points we identified were:

Choice of type of design: experimental or quasi-experimental;

For those studies that do not use random assignment: what methods of establishing comparability of groups were built into the design—this includes student characteristics, teacher characteristics, and the extent to which professional development was involved as part of the definition of a curriculum;

Definition of the appropriate unit of analysis (students, classes, teachers, schools, or districts);

Inclusion of an examination of implementation components;

Definition of the outcome measures and disaggregated results by program;

The choice of statistical tests, including statistical significance levels and effect size; and

Recognition of limitations to generalizability resulting from design choices.

These are critical decisions that affect the quality of an evaluation. We further identified a subset of these evaluation studies that met a set of minimum conditions that we termed at least minimally methodologically adequate studies. Such studies are those with the greatest likelihood of shedding light on the effectiveness of these programs. To be classified as at least minimally methodologically adequate, and therefore to be considered for further analysis, each evaluation study was required to:

Include quantifiably measurable outcomes such as test scores, responses to specified cognitive tasks of mathematical reasoning, performance evaluations, grades, and subsequent course taking; and

Provide adequate information to judge the comparability of samples. In addition, a study must have included at least one of the following additional design elements:

A report of implementation fidelity or professional development activity;

Results disaggregated by content strands or by performance by student subgroups; and/or

Multiple outcome measures or precise theoretical analysis of a measured construct, such as number sense, proof, or proportional reasoning.
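These inclusion criteria can be read as a simple decision rule. Below is a minimal sketch in Python, assuming hypothetical boolean flags coded for each study; the field names are ours for illustration, not the committee's actual coding scheme.

```python
# Hypothetical study record: flags coded from each evaluation report.
# Field names are illustrative, not the committee's actual coding scheme.
from dataclasses import dataclass

@dataclass
class Study:
    has_measurable_outcomes: bool       # test scores, grades, course taking, etc.
    samples_comparable: bool            # enough information to judge comparability
    reports_implementation: bool        # implementation fidelity or PD activity
    disaggregates_results: bool         # by content strand or student subgroup
    multiple_or_precise_measures: bool  # multiple outcomes or a precise construct

def minimally_adequate(s: Study) -> bool:
    """Both required conditions plus at least one additional design element."""
    required = s.has_measurable_outcomes and s.samples_comparable
    additional = (s.reports_implementation
                  or s.disaggregates_results
                  or s.multiple_or_precise_measures)
    return required and additional
```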

Using this rubric, the committee identified a subset of 63 comparative studies to classify as at least minimally methodologically adequate and to analyze in depth to inform the conduct of future evaluations. There are those who would argue that any threat to the validity of a study discredits the findings, thus claiming that until we know everything, we know nothing. Others would claim that from the myriad of studies, examining patterns of effects and patterns of variation, one can learn a great deal, perhaps tentatively, about programs and their possible effects. More importantly, we can learn about methodologies and how to concentrate and focus to increase the likelihood of learning more quickly. As Lipsey (1997, p. 22) wrote:

In the long run, our most useful and informative contribution to program managers and policy makers and even to the evaluation profession itself may be the consolidation of our piecemeal knowledge into broader pictures of the program and policy spaces at issue, rather than individual studies of particular programs.

We do not wish to imply that we devalue studies of student affect or conceptions of mathematics, but we decided that unless these indicators were connected to direct indicators of student learning, we would eliminate them from further study. As a result of this sorting, we eliminated 19 studies of NSF-supported curricula and 13 studies of commercially generated curricula. Of these, 4 were eliminated for their sole focus on affect or conceptions; 3 were eliminated for their comparative focus on outcomes other than achievement, such as teacher-related variables; and 19 were eliminated for their failure to meet the minimum additional characteristics specified in the criteria above. In addition, six others were excluded from the studies of commercial materials because they were not conducted within the grade-level band specified by the committee for the selection of that program. From this point onward, all references can be assumed to refer to at least minimally methodologically adequate studies, unless a study is referenced for illustration, in which case we label it "EX" to indicate that it is excluded from the summary analyses. Studies labeled "EX" are occasionally referenced because they can provide useful information on certain aspects of curricular evaluation, but not on overall effectiveness.

The at least minimally methodologically adequate studies reported on a variety of grade levels. Figure 5-2 shows the different grade levels of the studies. At times, the choice of grade levels was dictated by the years in which high-stakes tests were given. Most of the studies reported on multiple grade levels, as shown in Figure 5-2.

FIGURE 5-2 Single-grade studies by grade and multigrade studies by grade band.

Using the seven critical design elements of at least minimally methodologically adequate studies as a design template, we describe the overall database and discuss the array of choices on critical decision points, with examples. Following that, we report the results of the at least minimally methodologically adequate studies by program type. To do so, the results of each study were coded as either statistically significant or not. Those studies that contained statistically significant results were assigned a percentage of positive outcomes (in favor of the treatment curriculum), based on the number of statistically significant comparisons reported relative to the total number of comparisons reported, and a percentage of negative outcomes (in favor of the comparison curriculum). The remainder were coded as the percentage of nonsignificant outcomes. Then, using the seven critical decision points as filters, we identified and examined more closely the sets of studies that exhibited the strongest designs and would therefore be most likely to increase our confidence in the validity of the evaluation. In the last section, we consider alternative hypotheses that could explain the results.
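As an illustration of this coding, here is a minimal sketch assuming each reported comparison has been recorded as "positive" (favoring the treatment curriculum), "negative" (favoring the comparison curriculum), or "ns" (nonsignificant); the labels and data are invented, not drawn from the database.

```python
from collections import Counter

def outcome_percentages(comparisons):
    """Percentage of a study's reported comparisons that are significantly
    positive (favoring the treatment), significantly negative (favoring the
    comparison curriculum), or nonsignificant."""
    counts = Counter(comparisons)
    total = len(comparisons)
    return {k: 100 * counts[k] / total for k in ("positive", "negative", "ns")}

# Example: a study reporting 10 comparisons, 4 favoring the treatment.
print(outcome_percentages(["positive"] * 4 + ["negative"] * 1 + ["ns"] * 5))
# {'positive': 40.0, 'negative': 10.0, 'ns': 50.0}
```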

The committee emphasizes that we did not directly evaluate the materials. We present no analysis of results aggregated across studies by naming individual curricular programs, because we did not consider the magnitude or rigor of the database for individual programs substantial enough to do so. Nevertheless, there are studies that provide compelling data concerning the effectiveness of a program in a particular context. We do report on individual studies and their results to highlight issues of approach and methodology, but to remain within our primary charge, which was to evaluate the evaluations, we do not summarize results of the individual programs.

DESCRIPTION OF COMPARATIVE STUDIES DATABASE ON CRITICAL DECISION POINTS

An experimental or quasi-experimental design.

We separated the studies into experimental and quasi-experimental, and found that 100 percent of the studies were quasi-experimental (Campbell and Stanley, 1966; Cook and Campbell, 1979; Rossi et al., 1999).1 Within the quasi-experimental studies, we identified three subcategories of comparative study. In the first case, we identified a study as cross-curricular comparative if it compared the results of curriculum A with curriculum B. A few studies in this category also compared two samples within the same curriculum under different conditions, such as high and low implementation quality.

A second category of quasi-experimental study involved comparisons that could shed light on effectiveness through time series designs. These studies compared the performance of a sample of students in the curriculum under investigation across time, such as in a longitudinal study of the same students over time. A third category of comparative study involved a comparison to some form of externally normed results, such as populations taking state, national, or international tests, or prior research assessments from a published study or studies. We categorized these studies, divided them into NSF, UCSMP, and commercial, and labeled them by the categories above (Figure 5-3).

FIGURE 5-3 The number of comparative studies in each category.

In nearly all studies in the comparative group, the titles of the experimental curricula were explicitly identified. The only exception was the ARC Implementation Center study (Sconiers et al., 2002), in which three NSF-supported elementary curricula were examined but their effects were pooled in the results. In contrast, in the majority of cases the comparison curriculum was referred to simply as "traditional." In only 22 cases were comparisons made between two identified curricula. Many others surveyed the array of curricula at comparison schools and reported on the most frequently used, but did not identify a single curriculum. This design strategy was often used because other factors drove the selection of comparison groups, and the additional requirement of a single identified curriculum at these sites would often have made matching difficult. Studies were categorized into specified (including a single or multiple identified curricula) and nonspecified comparison curricula. In the 63 studies, the central group was compared to an NSF-supported curriculum (1), an unnamed traditional curriculum (41), a named traditional curriculum (19), or one of the six commercial curricula (2). To our knowledge, any systematic impact of such a decision on results has not been studied, but we express concern that when a specified curriculum is compared to an unspecified mix of many informal curricula, the comparison may favor the coherence and consistency of the single curriculum; we consider this possibility subsequently under alternative hypotheses. We believe that a quality study should at least report the array of curricula that make up the comparison group and include a measure of the frequency of use of each, though a well-defined alternative is more desirable.

If a study was both longitudinal and comparative, it was coded as comparative. When a study only examined the performance of a group over time, as in some longitudinal studies, it was coded as quasi-experimental normed. In longitudinal studies, the problems created by student mobility were evident. In one study, Carroll (2001), a five-year longitudinal study of Everyday Mathematics, the sample began with 500 students, 24 classrooms, and 11 schools. By 2nd grade, the longitudinal sample was 343. By 3rd grade, the number of classes increased to 29 while the number of original students decreased to 236. At the completion of the study, approximately 170 of the original students were still in the sample. This high rate of attrition suggests that mobility is a major challenge in curricular evaluation, and that the effects of curricular change on mobile students need to be studied as a potential threat to the validity of the comparison. Mobility is also a challenge in curriculum implementation, because students coming into a program partway through do not experience its cumulative, developmental effect.

Longitudinal studies also have unique challenges associated with outcome measures; a study by Romberg et al. (in press) (EX) discussed one approach to this problem. In this study, an external assessment system and a problem-solving assessment system were used. In the external assessment system, items from the National Assessment of Educational Progress (NAEP) and the Third International Mathematics and Science Study (TIMSS) were balanced across four strands (number, geometry, algebra, probability and statistics), and 20 items of moderate difficulty, called anchor items, were repeated on each grade-specific assessment (p. 8). Because the analyses of the results were still under way, the evaluators could not provide us with final results, so the study is coded as EX.

However, such longitudinal studies can provide substantial evidence of the effects of a curricular program because they may be more sensitive to an accumulation of modest effects and/or can reveal whether rates of learning change over time within curricular change.

TABLE 5-1 Scores in Percentage Correct by Everyday Mathematics Students and Various Comparison Groups Over a Five-Year Longitudinal Study

The longitudinal study by Carroll (2001) showed that the effects of curricula may often accrue over time, but measurements of achievement present challenges to drawing such conclusions as the content and grade level change. A variety of measures were used over time to demonstrate growth in relation to comparison groups. The author chose a set of measures used previously in studies involving two Asian samples and an American sample to provide a contrast to the students in EM over time. For 3rd and 4th grades, where data from the comparison group were not available, the author selected items from the NAEP to bridge the gap. Table 5-1 summarizes the scores of the different comparison groups over five years. Scores are reported as the mean percentage correct for a series of tests on number computation, number concepts and applications, geometry, measurement, and data analysis.

It is difficult to compare performances on different tests across different groups over time against a single longitudinal group from EM, and it is not possible to determine whether the students' performance is increasing or whether the changes in the tests at each grade level are producing the results. Thus, results from longitudinal studies lacking a control group or sophisticated methodological analysis may be suspect and should be interpreted with caution.

In the Hirsch and Schoen (2002) study, based on a sample of 1,457 students' scores on the Ability to Do Quantitative Thinking (ITED-Q), a subtest of the Iowa Tests of Educational Development, students in Core-Plus showed increasing performance relative to national norms over the three-year period. The authors describe the content of the ITED-Q test and point out that "although very little symbolic algebra is required, the ITED-Q is quite demanding for the full range of high school students" (p. 3). They further point out that "[t]his 3-year pattern is consistent, on average, in rural, urban, and suburban schools, for males and females, for various minority groups, and for students for whom English was not their first language" (p. 4). In this case, one sees that studies over time are important: results over shorter periods may mask cumulative effects of consistent and coherent treatments, and such studies can also show increases that do not persist over longer trajectories. One approach to longitudinal study was used by Webb and Dowling in their studies of the Interactive Mathematics Program (Webb and Dowling, 1995a, 1995b, 1995c). These researchers conducted transcript analyses as a means to examine student persistence and success in subsequent course taking.

The third category of quasi-experimental comparative studies measured student outcomes on a particular curricular program and simply compared them to performance on national or international tests. When these tests were of good quality and drawn from a genuinely representative sample of a relevant population, such as NAEP reports or TIMSS results, the reports often provided a reasonable indicator of the effects of the program if combined with a careful description of the sample. Sometimes the national or state tests used were norm-referenced tests producing national percentiles or grade-level equivalents. The normed studies were considered weaker in establishing effectiveness, but were still considered valid as examples of comparing samples to populations.

For Studies That Do Not Use Random Assignment: What Methods of Establishing Comparability Across Groups Were Built into the Design

The most fundamental question in an evaluation study is whether the treatment has had an effect on the chosen criterion variable. In our context, the treatment is the curriculum materials and, in some cases, related professional development, and the outcome of interest is academic learning. To establish whether there is a treatment effect, one must logically rule out as many other explanations as possible for differences in the outcome variable. There is a long tradition of how this is best done, and the principle from a design point of view is to assure that there are no differences between the treatment conditions (in these evaluations, often only the new curriculum materials and a control group) either at the outset of the study or during its conduct.

To ensure the first condition, the ideal procedure is the random assignment of the appropriate units to the treatment conditions. The second condition requires that the treatment be administered reliably over the length of the study, assured through careful observation and control of the situation. Without randomization, there are a host of possible confounding variables that could differ among the treatment conditions and that are themselves related to the outcome variables. Put another way, the treatment effect is a parameter that the study is set up to estimate, and statistically, an unbiased estimate is desired: its expected value over repeated samplings should equal the true value of the parameter. Without randomization at the onset of a study, there is no way to assure this unbiasedness. Variables that differ across treatment conditions and are related to the outcomes are confounding variables, which bias the estimation process.
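A small simulated example, not drawn from any of the reviewed studies, can make this mechanism concrete: when a confounder (say, prior achievement) both raises the chance of receiving the treatment and raises outcomes, the naive group difference looks positive even though the true treatment effect is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
prior = rng.normal(size=n)                           # confounder: prior achievement
treated = rng.random(n) < 1 / (1 + np.exp(-prior))   # higher prior -> more likely treated
outcome = 0.0 * treated + 0.8 * prior + rng.normal(size=n)  # true treatment effect is zero

naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"naive treatment 'effect': {naive:.2f}")      # biased well above zero
```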

Only one study we reviewed, Peters (1992), used randomization in the assignment of students to treatments, and because that study was limited to one teacher teaching two sections and included substantial qualitative methods, we coded it as quasi-experimental. Others report partially assigning teachers randomly to treatment conditions (Thompson et al., 2001; Thompson et al., 2003). Two primary reasons seem to account for the lack of pure experimental designs. First, to justify the conduct and expense of a randomized field trial, the program must be described adequately and there must be relative assurance that its implementation has occurred over the duration of the experiment (Peterson et al., 1999). Additionally, one must be sure that the outcome measures are appropriate for the range of performances in the groups and valid relative to the curricula under investigation. Seldom can such conditions be assured for all students and teachers over the duration of a year or more.

A second reason is that random assignment of classrooms to curricular treatment groups typically is not permitted or encouraged under normal school conditions. As one evaluator wrote, “Building or district administrators typically identified teachers who would be in the study and in only a few cases was random assignment of teachers to UCSMP Algebra or comparison classes possible. School scheduling and teacher preference were more important factors to administrators and at the risk of losing potential sites, we did not insist on randomization” (Mathison et al., 1989, p. 11).

The Joint Committee on Standards for Educational Evaluation (1994, p. 165) recognized the likelihood of limitations on randomization, writing:

The groups being compared are seldom formed by random assignment. Rather, they tend to be natural groupings that are likely to differ in various ways. Analytical methods may be used to adjust for these initial differences, but these methods are based upon a number of assumptions. As it is often difficult to check such assumptions, it is advisable, when time and resources permit, to use several different methods of analysis to determine whether a replicable pattern of results is obtained.

Does the dearth of pure experimentation render the results of the studies reviewed worthless? Bias is not an either-or proposition; it is a quantity of varying degrees. Through careful measurement of the most salient potential confounding variables, precise theoretical description of constructs, and appropriate methods of statistical analysis, it is possible to reduce the amount of bias in the estimated treatment effect. Identifying the most likely confounding variables, measuring them, and adjusting for them can greatly reduce bias and help estimate an effect that is more reflective of the true value. A theoretically fully specified model is an alternative to randomization: by including all relevant variables, it allows unbiased estimation of the parameter. The only problem is knowing when the model is fully specified.

We recognized that we can never have enough knowledge to assure a fully specified model, especially in the complex and unstable conditions of schools. However, a key issue in determining the degree of confidence we have in these evaluations is to examine how they have identified, measured, or controlled for such confounding variables. In the next sections, we report on the methods of the evaluators in identifying and adjusting for such potential confounding variables.

One method to eliminate confounding variables is to examine the extent to which the samples investigated are equated either by sample selection or by methods of statistical adjustments. For individual students, there is a large literature suggesting the importance of social class to achievement. In addition, prior achievement of students must be considered. In the comparative studies, investigators first identified participation of districts, schools, or classes that could provide sufficient duration of use of curricular materials (typically two years or more), availability of target classes, or adequate levels of use of program materials. Establishing comparability was a secondary concern.

These two major factors were generally used in establishing the comparability of the sample:

Student population characteristics, such as demographic characteristics of students in terms of race/ethnicity, economic levels, or location type (urban, suburban, or rural).

Performance-level characteristics such as performance on prior tests, pretest performance, percentage passing standardized tests, or related measures (e.g., problem solving, reading).

In general, four methods of comparing groups were used in the studies we examined, and they permit different degrees of confidence in their results. In the first type, a matching class, school, or district was identified.

Studies were coded as this type if specified characteristics were used to select the schools systematically. In some of these studies, the methodology was relatively complex: correlates of performance on the outcome measures were found empirically, and matches were created on that basis (Schneider, 2000; Riordan and Noyce, 2001; Sconiers et al., 2002). For example, in the Sconiers et al. study, in which the total sample of more than 100,000 students was drawn from five states and three elementary curricula (Everyday Mathematics, Math Trailblazers [MT], and Investigations [IN]) were reviewed, a highly systematic method was developed. After defining eligibility as a "reform school," the evaluators conducted separate regression analyses for the five states at each tested grade level to identify the strongest predictors of average school mathematics score. They reported, "reading score and low-income variables … consistently accounted for the greatest percentage of total variance. These variables were given the greatest weight in the matching process. Other variables—such as percent white, school mobility rate, and percent with limited English proficiency (LEP)—accounted for little of the total variance but were typically significant. These variables were given less weight in the matching process" (Sconiers et al., 2002, p. 10). To provide a fair and complete comparison, further adjustments were made based on regression analysis of the scores to minimize bias prior to calculating the difference in scores and reporting effect sizes. In their results the evaluators report, "The combined state-grade effect sizes for math and total are virtually identical and correspond to a percentile change of about 4 percent favoring the reform students" (p. 12).
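A minimal sketch of matching in this spirit, with invented covariates and weights standing in for the regression results described above (this is our illustration, not the evaluators' actual procedure):

```python
import numpy as np

def match_comparison_schools(reform, candidates, weights):
    """For each reform school, find the nearest candidate school under a
    weighted distance on standardized covariates (e.g., mean reading score,
    percent low income). Weights stand in for each covariate's share of
    variance explained in the state-level regressions."""
    pool = np.vstack([reform, candidates])
    z = (pool - pool.mean(axis=0)) / pool.std(axis=0)   # standardize covariates
    zr, zc = z[: len(reform)], z[len(reform):]
    w = np.asarray(weights)
    dists = (((zr[:, None, :] - zc[None, :, :]) ** 2) * w).sum(axis=2)
    return dists.argmin(axis=1)                         # index of best match

# Covariates per school: [mean reading score, percent low income]
reform = np.array([[210.0, 40.0], [195.0, 65.0]])
candidates = np.array([[212.0, 42.0], [190.0, 70.0], [225.0, 15.0]])
print(match_comparison_schools(reform, candidates, weights=[0.6, 0.4]))  # -> [0 1]
```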

A second type of matching procedure was used in the UCSMP evaluations. For example, in an evaluation centered on geometry learning, evaluators advertised in NCTM and UCSMP publications and set conditions for participation in terms of schools' length of program use and grade level. After selecting schools with heterogeneous grouping and no tracking, the researchers used a matched-pair design in which they selected classes from the same school on the basis of mathematics ability. They used a pretest to determine this, and because the pretest consisted of two parts, they adjusted their significance level using the Bonferroni method.2 Pairs were discarded if the differences in means and variance were significant for all students or for those students completing all measures, or if class sizes became too variable. In the algebra study, the matching resulted in 20 pairs; because they were comparing three experimental conditions—first edition, second edition, and comparison classes—in the comparison study relevant to this review, their matching procedure identified 8 pairs. When possible, teachers were assigned randomly to treatment conditions. Most results are presented for the eight identified pairs along with an accumulated set of means. The outcomes of this particular study are described below in a discussion of outcome measures (Thompson et al., 2003).
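The Bonferroni adjustment itself is straightforward: with k tests conducted at family-wise significance level α, each individual test is held to α/k. A one-line sketch for the two-part pretest described above:

```python
alpha, k = 0.05, 2           # two pretest parts compared per pair
per_test_alpha = alpha / k   # each comparison must reach p < 0.025
print(per_test_alpha)
```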

A third method was to measure factors such as prior performance or socio-economic status (SES) based on pretesting, and then to use analysis of covariance or multiple regression in the subsequent analysis to factor in the variance associated with these factors. These studies were coded as “control.” A number of studies of the Saxon curricula used this method. For example, Rentschler (1995) conducted a study of Saxon 76 compared to Silver Burdett with 7th graders in West Virginia. He reported that the groups differed significantly in that the control classes had 65 percent of the students on free and reduced-price lunch programs compared to 55 percent in the experimental conditions. He used scores on California Test of Basic Skills mathematics computation and mathematics concepts and applications as his pretest scores and found significant differences in favor of the experimental group. His posttest scores showed the Saxon experimental group outperformed the control group on both computation and concepts and applications. Using analysis of covariance, the computation difference in favor of the experimental group was statistically significant; however, the difference in concepts and applications was adjusted to show no significant difference at the p < .05 level.
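As a sketch of this "control" approach, here is a minimal analysis of covariance in Python using statsmodels, with invented data and hypothetical column names: the posttest is regressed on group membership with the pretest as a covariate, so the group coefficient estimates the adjusted treatment difference.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: pretest, posttest, and curriculum group.
df = pd.DataFrame({
    "pretest":  [48, 52, 55, 61, 47, 50, 58, 63],
    "posttest": [55, 60, 59, 70, 50, 54, 61, 66],
    "group":    ["saxon"] * 4 + ["control"] * 4,
})

# ANCOVA as a linear model: adjusted group difference, controlling for pretest.
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(model.params)   # coefficient on C(group)[T.saxon] is the adjusted effect
print(model.pvalues)
```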

A fourth method was noted in studies that used less rigorous methods of sample selection and comparison of prior achievement or similar demographics. These studies were coded as "compare." Typically, there was no explicit procedure to decide whether the comparison was good enough. In some of the studies, it appeared that the comparison was used not as a means of selection, but rather as a more informal device to convince the reader of the plausibility of the equivalence of the groups. Clearly, the studies that used a more precise method of selection produce results in which one can place greater confidence.

Definition of Unit of Analysis

A major decision in forming an evaluation design is the unit of analysis. The unit of selection or randomization used to assign elements to treatment and control groups is closely linked to the unit of analysis. As noted in the National Research Council (NRC) report (1992, p. 21):

If one carries out the assignment of treatments at the level of schools, then that is the level that can be justified for causal analysis. To analyze the results at the student level is to introduce a new, nonrandomized level into the study, and it raises the same issues as does the nonrandomized observational study…. The implications … are twofold. First, it is advisable to use randomization at the level at which units are most naturally manipulated. Second, when the unit of observation is at a "lower" level of aggregation than the unit of randomization, then for many purposes the data need to be aggregated in some appropriate fashion to provide a measure that can be analyzed at the level of assignment. Such aggregation may be as simple as a summary statistic or as complex as a context-specific model for association among lower-level observations.
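A sketch of the simplest aggregation the quotation describes, with a hypothetical student-level table collapsed to classroom means before any between-condition comparison:

```python
import pandas as pd

# Hypothetical student-level records; assignment happened at the classroom level.
students = pd.DataFrame({
    "classroom": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "condition": ["new", "new", "new", "ctrl", "ctrl", "new", "new", "new"],
    "score":     [72, 80, 75, 68, 71, 83, 79, 88],
})

# Aggregate to the unit of assignment: one summary statistic per classroom.
class_means = (students
               .groupby(["classroom", "condition"], as_index=False)["score"]
               .mean())
print(class_means)   # the analysis then proceeds on these classroom means
```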

In many studies, inadequate attention was paid to the fact that the unit of selection would later become the unit of analysis. The unit of analysis, for most curriculum evaluators, needs to be at least the classroom, if not the school or even the district. The units must be independently responding units because instruction is a group process: students within a class are not independent, and even classrooms are not entirely independent when teachers work together on instruction within a school, in which case the school becomes the appropriate unit. Care needed to be taken to ensure that an adequate number of units would be available to provide sufficient statistical power to detect important differences.

A curriculum is experienced by students in a group, and this implies that individual student responses and what they learn are correlated. As a result, the appropriate unit of assignment and analysis must at least be defined at the classroom or teacher level. Other researchers (Bryk et al., 1993) suggest that the unit might be better selected at an even higher level of aggregation. The school itself provides a culture in which the curriculum is enacted as it is influenced by the policies and assignments of the principal, by the professional interactions and governance exhibited by the teachers as a group, and by the community in which the school resides. This would imply that the school might be the appropriate unit of analysis. Even further, to the extent that such decisions about curriculum are made at the district level and supported through resources and professional development at that level, the appropriate unit could arguably be the district. On a more practical level, we found that arguments can be made for a variety of decisions on the selection of units, and what is most essential is to make a clear argument for one’s choice, to use the same unit in the analysis as in the sample selection process, and to recognize the potential limits to generalization that result from one’s decisions.

We would argue that in all cases, reports of how sites were selected must be explicit in the evaluation report. For example, one set of evaluation studies (UCSMP) selected sites through advertisements in a journal distributed by the program and in NCTM journals (Thompson et al., 2001; Thompson et al., 2003). The samples in these studies tended to be affluent suburban populations and predominantly white populations. Other conditions of inclusion, such as frequency of use, also might have influenced this outcome, but it is important that, over a set of studies on effectiveness, all populations of students be adequately sampled. When a study is not randomized, adjustments for these confounding variables should be included. In our analysis of equity, we report on concerns about the representativeness of the overall samples and their impact on the generalizability of the results.

Implementation Components

The complexity of doing research on curricular materials introduces a number of possible confounding variables. Due to the documented complexity of curricular implementation, most comparative study evaluators attempt to monitor implementation in some fashion. A valuable outcome of a well-conducted evaluation is to determine not only if the experimental curriculum could ideally have a positive impact on learning, but whether it can survive or thrive in the conditions of schooling that are so variable across sites. It is essential to know what the treatment was, whether it occurred, and if so, to what degree of intensity, fidelity, duration, and quality. In our model in Chapter 3 , these factors were referred to as “implementation components.” Measuring implementation can be costly for large-scale comparative studies; however, many researchers have shown that variation in implementation is a key factor in determining effectiveness. In coding the comparative studies, we identified three types of components that help to document the character of the treatment: implementation fidelity, professional development treatments, and attention to teacher effects.

Implementation Fidelity

Implementation fidelity is a measure of the basic extent of use of the curricular materials. It does not address issues of instructional quality. In some studies, implementation fidelity is synonymous with "opportunity to learn." In examining implementation fidelity, a variety of data were reported, including, most frequently, the extent of coverage of the curricular material, the consistency of the instructional approach to content in relation to the program's theory, reports of pedagogical techniques, and the length of use of the curricula at the sample sites. Other, less frequently used approaches documented the calendar of curricular coverage, requested teacher feedback by textbook chapter, conducted student surveys, and gauged homework policies, use of technology, and other particular program elements. Interviews with teachers and students, classroom surveys, and observations were the most frequently used data-gathering techniques. Classroom observations were conducted infrequently in these studies, except when comparative studies were combined with case studies, typically with small numbers of schools and classes, where observations were conducted for long or frequent periods. In our analysis, we coded only the presence or absence of one or more of these methods.

If the extent of implementation was used in interpreting the results, then we classified the study as having adjusted for implementation differences. Across all 63 at least minimally methodologically adequate studies, 44 percent reported some type of implementation fidelity measure, 3 percent reported and adjusted for it in interpreting their outcome measures, and 53 percent recorded no information on this issue. Differences among studies, by study type (NSF, UCSMP, and commercially generated), showed variation on this issue, with 46 percent of NSF reporting or adjusting for implementation, 75 percent of UCSMP, and only 11 percent of the other studies of commercial materials doing so. Of the commercial, non-UCSMP studies included, only one reported on implementation. Possibly, the evaluators for the NSF and UCSMP Secondary programs recognized more clearly that their programs demanded significant changes in practice that could affect their outcomes and could pose challenges to the teachers assigned to them.

A study by Abrams (1989) (EX)3 on the use of Saxon algebra by ninth graders showed that concerns for implementation fidelity extend to all curricula, even those like Saxon whose methods may seem more consistent with common practice. Abrams wrote, "It was not the intent of this study to determine the effectiveness of the Saxon text when used as Saxon suggests, but rather to determine the effect of the text as it is being used in the classroom situations. However, one aspect of the research was to identify how the text is being taught, and how closely teachers adhere to its content and the recommended presentation" (p. 7). Her findings showed that for the 9 teachers and 300 students, treatment effects favoring the traditional group (using Dolciani's Algebra I textbook, Houghton Mifflin, 1980) were found on the algebra test, the algebra knowledge/skills subtest, and the problem-solving test for this population of teachers (fixed effect). No differences were found between the groups on an algebra understanding/applications subtest, overall attitude toward mathematics, mathematical self-confidence, anxiety about mathematics, or enjoyment of mathematics. She suggests that the lack of differences might be due to the ways in which teachers supplement materials, change test conditions, emphasize and deemphasize topics, use their own tests, vary the proportion of time spent on development and practice, use calculators and group work, and generally adapt the materials to their own interpretation and method. Many of these practices conflict directly with the recommendations of the authors of the materials.

A study by Briars and Resnick (2000) (EX) in Pittsburgh schools directly confronted issues relevant to professional development and implementation. Evaluators contrasted the performance of students of teachers with high and low implementation quality, and showed the results on two contrasting outcome measures, Iowa Test of Basic Skills (ITBS) and Balanced Assessment. Strong implementers were defined as those who used all of the EM components and provided student-centered instruction by giving students opportunities to explore mathematical ideas, solve problems, and explain their reasoning. Weak implementers were either not using EM or using it so little that the overall instruction in the classrooms was “hardly distinguishable from traditional mathematics instruction” (p. 8). Assignment was based on observations of student behavior in classes, the presence or absence of manipulatives, teacher questionnaires about the programs, and students’ knowledge of classroom routines associated with the program.

From the identification of strong- and weak-implementing teachers, strong- and weak-implementation schools were identified as those with strong- or weak-implementing teachers in 3rd and 4th grades over two consecutive years. The performance of students with 2 years of EM experience in these settings composed the comparative samples. Three pairs of strong- and weak-implementation schools with similar demographics in terms of free and reduced-price lunch (range 76 to 93 percent), student living with only one parent (range 57 to 82 percent), mobility (range 8 to 16 percent), and ethnicity (range 43 to 98 percent African American) were identified. These students’ 1st-grade ITBS scores indicated similarity in prior performance levels. Finally, evaluators predicted that if the effects were due to the curricular implementation and accompanying professional development, the effects on scores should be seen in 1998, after full implementation. Figure 5-4 shows that on the 1998 New Standards exams, placement in strong- and weak-implementation schools strongly affected students’ scores. Over three years, performance in the district on skills, concepts, and problem solving rose, confirming the evaluator’s predictions.

FIGURE 5-4 Percentage of students who met or exceeded the standard: districtwide grade 4 New Standards Mathematics Reference Examination (NSMRE) performance for 1996, 1997, and 1998, by level of Everyday Mathematics implementation. Error bars denote the 99 percent confidence interval for each data point.

SOURCE: Re-created from Briars and Resnick (2000, pp. 19-20).

An article by McCaffrey et al. (2001) examining the interactions among instructional practices, curriculum, and student achievement illustrates the point that the terms traditional and reform teaching are often inadequately linked to measurement tools. In this study, researchers conducted an exploratory factor analysis that led them to create two scales for instructional practice: Reform Practices and Traditional Practices. The reform scale measured the frequency, by means of teacher report, of teacher and student behaviors associated with reform instruction and assessment practices, such as using small-group work, explaining reasoning, representing and using data, writing reflections, or performing tasks in groups. The traditional scale focused on explanations to whole classes, the use of worksheets, practice, and short-answer assessments. The correlation between the two scale scores was –0.32 for integrated curriculum teachers and 0.27 for traditional curriculum teachers. This shows that it is overly simplistic to think of reform and traditional practices as oppositional; the relationship among a variety of instructional practices is more complex as they interact with curriculum and various student populations.
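A sketch of what such a scale correlation involves, using invented item-level survey data (the item counts and scoring are ours, not McCaffrey et al.'s): each scale score is the mean of its items, and the reported statistic is the Pearson correlation between the two scale scores across teachers.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 40
reform_items = rng.integers(1, 6, size=(n_teachers, 5))       # 5 reform-practice items
traditional_items = rng.integers(1, 6, size=(n_teachers, 4))  # 4 traditional-practice items

# Scale score = mean of a teacher's item responses on that scale.
reform_scale = reform_items.mean(axis=1)
traditional_scale = traditional_items.mean(axis=1)

r = np.corrcoef(reform_scale, traditional_scale)[0, 1]
print(f"correlation between scales: {r:.2f}")  # near zero for random data
```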

Professional Development

Professional development and teacher effects were separated in our analysis from implementation fidelity. We recognized that professional development could be viewed by the readers of this report in two ways. As indicated in our model, professional development can be considered a program element or component or it can be viewed as part of the implementation process. When viewed as a program element, professional development resources are considered mandatory along with program materials. In relation to evaluation, proponents of considering professional development as a mandatory program element argue that curricular innovations, which involve the introduction of new topics, new types of assessment, or new ways of teaching, must make provision for adequate training, just as with the introduction of any new technology.

For others, the inclusion of professional development among the program elements, without a concomitant inclusion of equal amounts of professional development for the comparative treatment, interjects a priori disproportionate treatments and biases the results. We hoped for an array of evaluation studies that might shed some empirical light on this dispute, and hence separated professional development from implementation fidelity, coding whether studies reported on the amount of professional development provided for the treatment and/or comparison groups. A study was coded as positive if it either reported on the professional development provided to the experimental group or reported the data for both treatments. Across all 63 at least minimally methodologically adequate studies, 27 percent reported some type of professional development measure, 1.5 percent reported and adjusted for it in interpreting their outcome measures, and 71.5 percent recorded no information on the issue.

A study by Collins (2002) (EX)4 illustrates the critical and controversial role of professional development in evaluation. Collins studied the use of Connected Mathematics over three years in three middle schools under threat of being classified as low performing in the Massachusetts accountability system. A comparison was made between one school (School A) that engaged substantively in the professional development opportunities accompanying the program and two that did not (Schools B and C). In School A, between 100 and 136 hours of professional development were recorded for all seven teachers in grades 6 through 8. In School B, 66 hours were reported for two teachers, and in School C, 150 hours were reported for eight teachers over three years. Results showed significant differences in subsequent performance by students at the school with higher participation in professional development (School A), which became a districtwide top performer; the other two schools remained at risk for low performance. No controls for teacher effects were possible, but the results suggest either the centrality of professional development for successful implementation or the possibility that the results were due to professional development rather than the curriculum materials. That these two interpretations cannot be separated is a problem when professional development is provided to one group and not the other: the effect could be due to the textbook, to the professional development, or to an interaction between the two. Research designs should be adjusted to account for these issues when different conditions of professional development are provided.

Teacher Effects

These studies make it obvious that there are potential confounding factors of teacher effects. Many evaluation studies devoted inadequate attention to the variable of teacher quality. A few studies (Goodrow, 1998; Riordan and Noyce, 2001; Thompson et al., 2001; and Thompson et al., 2003) reported on teacher characteristics such as certification, length of service, experience with curricula, or degrees completed. Those studies that matched classrooms and reported by matched results rather than aggregated results sought ways to acknowledge the large variations among teacher performance and its impact on student outcomes. We coded any effort to report on possible teacher effects as one indicator of quality. Across all 63 at least minimally methodologically adequate studies, 16 percent reported some type of teacher effect measure, 3 percent reported and adjusted for it in interpreting their outcome measures, and 81 percent recorded no information on this issue.

One can see that the potential confounding factors of teacher effects, in terms of the provision of professional development or the measurement of teacher effects, are not adequately considered in most evaluation designs. Some studies mention the problem and give a subjective judgment as to its nature, but this is descriptive at most. Hardly any of the studies do anything analytical with it, and because these are such important potential confounding variables, this presents a serious challenge to the efficacy of these studies. Figure 5-5 shows how attention to these factors varies across program categories among NSF-supported, UCSMP, and studies of commercial materials. In general, evaluations of NSF-supported studies were the most likely to measure these variables; UCSMP had the most standardized use of methods to do so across studies; and commercial material evaluators seldom reported on issues of implementation fidelity.

FIGURE 5-5 Treatment of implementation components by program type.

NOTE: PD = professional development.

Identification of a Set of Outcome Measures and Forms of Disaggregation

Using the selected student outcomes identified in the program theory, one must conduct an impact assessment that refers to the design and measurement of student outcomes. In addition to selecting what outcomes should be measured within one's program theory, one must determine how these outcomes are measured, when those measures are collected, and what purpose they serve from the perspective of the participants. In the case of curricular evaluation, there are significant issues involved in how these measures are reported. To provide insight into the level of curricular validity, many evaluators prefer to report results by topic, content strand, or item cluster. Such reports often present the level of specificity of outcome needed to inform curriculum designers, especially when efforts are made to document patterns of errors, distributions of results across multiple choices, or analyses of student methods. In these cases, whole-test scores, reporting only average performance, may mask essential differences in impact among curricula at the level of content topics.

On the other hand, many large-scale assessments depend on methods of test equating that rely on whole-test scores, making comparative interpretations of different test administrations by content strand of questionable reliability. Furthermore, there are questions such as whether to present only gain scores or effect sizes, how to link pretests and posttests, and how to determine the relative curricular sensitivity of various outcome measures.
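These reporting choices can be made concrete with a small sketch (invented numbers): a gain score is simply posttest minus pretest, and one common standardized effect size, Cohen's d, divides the mean gain difference by a pooled standard deviation. Cohen's d is one of several effect size conventions, not necessarily the one used in any particular study reviewed here.

```python
import numpy as np

treat_pre, treat_post = np.array([50, 54, 61, 58]), np.array([60, 63, 72, 65])
ctrl_pre,  ctrl_post  = np.array([51, 55, 60, 57]), np.array([56, 59, 66, 60])

# Gain scores: posttest minus pretest for each student.
treat_gain = treat_post - treat_pre
ctrl_gain = ctrl_post - ctrl_pre

# Cohen's d on gain scores: mean difference over a pooled standard deviation.
pooled_sd = np.sqrt((treat_gain.var(ddof=1) + ctrl_gain.var(ddof=1)) / 2)
d = (treat_gain.mean() - ctrl_gain.mean()) / pooled_sd
print(f"gain-score effect size d = {d:.2f}")
```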

The findings of comparative studies are reported in terms of the outcome measure(s) collected. To describe the nature of the database with regard to outcome measures and to facilitate our analyses of the studies, we classified each of the included studies on four outcome measure dimensions:

Total score reported;

Disaggregation by content strand, subtest, performance level, SES, or gender;

Use of an outcome measure specific to the curriculum; and

Use of multiple outcome measures.

Most studies reported a total score, but we did find studies that reported only subtest scores or only scores on an item-by-item basis. For example, in the Ben-Chaim et al. (1998) evaluation study of Connected Mathematics, the authors were interested in students' proportional reasoning proficiency as a result of using the curriculum. They asked students from eight seventh-grade CMP classes and six seventh-grade control classes to solve a variety of tasks categorized as rate and density problems. The authors provide precise descriptions of the cognitive challenges in the items; however, they do not explain whether the problems written up were representative of performance on a larger set of items. A special rating form was developed to code responses in three major categories (correct answer, incorrect answer, and no response), with subcategories indicating the quality of the accompanying work. No reports on the reliability of coding were given. Performance on standardized tests indicated that control students' scores were slightly higher than CMP students' at the beginning of the year and lower at the end. Twenty-five percent of the experimental group members were interviewed about their approaches to the problems. The CMP students outperformed the control students (53 percent versus 28 percent) in providing correct answers with supporting work, and 27 percent of the control group gave an incorrect answer or showed incorrect thinking, compared to 13 percent of the CMP group. An item-level analysis permitted the researchers to evaluate the actual strategies used by the students. They reported, for example, that 82 percent of CMP students used a "strategy focused on package price, unit price, or a combination of the two; those effective strategies were used by only 56 of 91 control students (62 percent)" (p. 264).

The use of item-level or content strand-level comparative reports had the advantage of permitting the evaluators to assess student learning strategies specific to a curriculum's program theory. For example, at times evaluators wanted to gauge the effectiveness of using problems different from those on typical standardized tests. In this case, problems were drawn from familiar circumstances but carefully designed to create significant cognitive challenges, in order to assess how well the informal strategies approach in CMP works in comparison to traditional instruction. The disadvantages of such an approach include the use of only a small number of items and concerns about reliability in scoring. These studies seem to represent a method of creating hybrid research models that build on the detailed analyses possible in case studies while still reporting on samples that provide comparative data. This possibly reflects the concerns of some mathematicians and mathematics educators that the effectiveness of materials needs to be evaluated relative to very specific, research-based issues of learning, which are often inadequately measured by multiple-choice tests. However, a decision not to report total scores leads to a trade-off in the reliability and representativeness of the reported data, which must be addressed to increase the objectivity of the reports.

Second, we coded whether outcome data were disaggregated in some way. Disaggregation involved reporting data on dimensions such as content strand, subtest, test item, ethnic group, performance level, SES, and gender. We found disaggregated results particularly helpful in understanding the findings of studies that found main effects, and also in examining patterns across studies. We report the results of the studies’ disaggregation by content strand in our reports of effects. We report the results of the studies’ disaggregation by subgroup in our discussions of generalizability.

Third, we coded whether a study used an outcome measure that the evaluator reported as being sensitive to a particular treatment—a subcategory of what our framework defined as "curricular validity of measures." In such studies, the rationale was that readily available measures such as state-mandated tests, norm-referenced standardized tests, and college entrance examinations do not measure some of the aims of the program under study. A frequently cited instance was that "off the shelf" instruments do not measure well students' ability to apply their mathematical knowledge to problems embedded in complex settings. Thus, some studies constructed a collection of tasks assessing this ability and collected data on them (Ben-Chaim et al., 1998; Huntley et al., 2000).

Finally, we recorded whether a study used multiple outcome measures. Some studies used a variety of achievement measures and other studies reported on achievement accompanied by measures such as subsequent course taking or various types of affective measures. For example, Carroll (2001, p. 47) reported results on a norm-referenced standardized achievement test as well as a collection of tasks developed in other studies.

A study by Huntley et al. (2000) illustrates how a variety of these techniques were combined in outcome measures. They developed three assessments: the first emphasized contextualized problem solving, drawing on items from the American Mathematical Association of Two-Year Colleges and other sources; the second covered context-free symbolic manipulation; and the third required collaborative problem solving. To link these measures to the overall evaluation, they articulated an explicit model of cognition based on how one links an applied situation to mathematical activity through processes of formulation and interpretation. Their assessment strategy permitted them to investigate algebraic reasoning as an ability to use algebraic ideas and techniques to (1) mathematize quantitative problem situations, (2) use algebraic principles and procedures to solve equations, and (3) interpret the results of reasoning and calculations.

In presenting their data comparing performance under Core-Plus and a traditional curriculum, they presented both main effects and comparisons on subscales. Their design of outcome measures permitted them to examine differences in performance with and without context and to conclude with statements such as “This result illustrates that CPMP students perform better than control students when setting up models and solving algebraic problems presented in meaningful contexts while having access to calculators, but CPMP students do not perform as well on formal symbol-manipulation tasks without access to context cues or calculators” (p. 349). The authors go on to present data on the relationship between knowing how to plan or interpret solutions and knowing how to carry them out. The correlations between these variables were weak but significantly different (0.26 for control groups and 0.35 for Core-Plus). The advantage of multiple measures carefully tied to program theory is that they permit one to test fine content distinctions, which are likely to be the level of adjustment needed to fine-tune and improve curricular programs.

Another interesting approach to the use of outcome measures is found in the UCSMP studies. In many of these studies, evaluators collected information from teachers’ reports and chapter reviews as to whether the topics for items on the posttests were taught, calling this an “opportunity to learn” measure. The authors reported results from three types of analyses: (1) total test scores, (2) fair test scores (scores reported by program, but only on items covering topics taught), and (3) conservative test scores (scores on common items taught in both programs). Table 5-2 reports the variations across the multiple-choice test scores for the Geometry study (Thompson et al., 2003) on a standardized test, High School Subject Tests-Geometry Form B, and the UCSMP-constructed Geometry test, and for the Advanced Algebra study on the UCSMP-constructed Advanced Algebra test (Thompson et al., 2001). The table shows the mean scores for UCSMP classes and comparison classes. In each cell, mean percentage correct is reported first for the whole test, then for the fair test, and then for the conservative test.

TABLE 5-2 Mean Percentage Correct on the Subject Tests

The authors explicitly compare the items from the standard Geometry test with the items from the UCSMP test and indicate overlap and difference. They constructed their own test because, in their view, the standard test was not adequately balanced among skills, properties, and real-world uses. The UCSMP test included items on transformations, representations, and applications that were lacking in the national test. Only five items were taught by all teachers; hence in the case of the UCSMP geometry test, there is no report on a conservative test. In the Advanced Algebra evaluation, only a UCSMP-constructed test was viewed as appropriate to cover the treatment of the prior material and alignment to the goals of the new course. These data sets demonstrate the challenge of selecting appropriate outcome measures, the sensitivity of the results to those decisions, and the importance of full disclosure of decision-making processes in order to permit readers to assess the implications of the choices. The methodology utilized sought to ensure that the material in the course was covered adequately by treatment teachers while finding ways to make comparisons that reflected content coverage.
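The report does not include code, but the three scoring schemes are easy to express. The following sketch uses invented item-level data and hypothetical helper names; it shows one plausible way to compute whole, fair, and conservative scores from per-item results and teacher reports of opportunity to learn.

```python
import numpy as np

# Hypothetical item-level data: rows = students, columns = test items.
# correct[i, j] == 1 if student i answered item j correctly.
rng = np.random.default_rng(0)
correct = rng.integers(0, 2, size=(25, 40))

# Hypothetical teacher reports of opportunity to learn, one flag per item.
taught_by_program = rng.random(40) < 0.9      # topics the program class covered
taught_by_comparison = rng.random(40) < 0.7   # topics the comparison class covered

def mean_percent(correct, item_mask):
    """Mean percentage correct, restricted to the items selected by item_mask."""
    return 100 * correct[:, item_mask].mean()

whole = mean_percent(correct, np.ones(40, dtype=bool))            # all items
fair = mean_percent(correct, taught_by_program)                   # items this class was taught
conservative = mean_percent(correct, taught_by_program & taught_by_comparison)  # items taught in both

print(f"whole: {whole:.1f}  fair: {fair:.1f}  conservative: {conservative:.1f}")
```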

Only one study reported on its outcomes using embedded assessment items employed over the course of the year. In a study of Saxon and UCSMP, Peters (1992) (EX) studied the use of these materials with two classrooms taught by the same teacher. In this small study, he randomly assigned students to treatment groups and then measured their performance on four unit tests composed of items common to both curricula and their progress on the Orleans-Hanna Algebraic Prognosis Test.

Peters’ study showed no significant difference in placement scores between Saxon and UCSMP on the posttest, but did show differences on the embedded assessment. Figure 5-6 (Peters, 1992, p. 75) shows an interesting display of the differences on a “continuum” that shows both the direction and magnitude of the differences and provides a level of concept specificity missing in many reports. This figure and a display (Figure 5-7) in a study by Senk (1991, p. 18) of students’ mean scores on Curriculum A versus Curriculum B, with a 10 percent range of differences marked, represent two excellent means of communicating the kinds of detailed content outcome information that promise to be informative to curriculum writers, publishers, and school decision makers. In Figure 5-7, 16 items listed by number were taken from the Second International Mathematics Study. The Functions, Statistics, and Trigonometry sample averaged 41 percent correct on these items, whereas the U.S. precalculus sample averaged 38 percent. As shown in the figure, differences of 10 percent or less fall inside the banded area and differences greater than 10 percent fall outside, producing a display that makes it easy for readers and designers to identify the relative curricular strengths and weaknesses of topics.

While we value detailed outcome measure information, we also recognize the importance of examining curricular impact on students’ standardized test performance. Many developers, but not all, are explicit in rejecting standardized tests as adequate measures of the outcomes of their programs, claiming that these tests focus on skills and manipulations, that they are overly reliant on multiple-choice questions, and that they are often poorly aligned to new content emphases such as probability and statistics, transformations, use of contextual problems and functions, and process skills such as problem solving, representation, or use of calculators. However, national and state tests are being revised to include more content on these topics and to draw on more advanced reasoning. Furthermore, these high-stakes tests are of major importance in school systems, determining graduation, passing standards, school ratings, and so forth. For this reason, if a curricular program demonstrated positive impact on such measures, we referred to that in Chapter 3 as establishing “curricular alignment with systemic factors.” Adequate performance on these measures is of paramount importance to the survival of reform (to large groups of parents and school administrators). These examples demonstrate how careful attention to outcome measures is an essential element of valid evaluation.

FIGURE 5-6 Continuum of criterion score averages for studied programs.

SOURCE: Peters (1992, p. 75).

In Table 5-3, we document the number of studies using each of the types of outcome measures we used to code the data, and we also report on the types of tests used across the studies.


FIGURE 5-7 Achievement (percentage correct) on Second International Mathematics Study (SIMS) items by U.S. precalculus students and functions, statistics, and trigonometry (FST) students.

SOURCE: Re-created from Senk (1991, p. 18).

TABLE 5-3 Number of Studies Using a Variety of Outcome Measures by Program Type

A Choice of Statistical Tests, Including Statistical Significance and Effect Size

In our first review of the studies, we coded which methods of statistical evaluation were used by different evaluators. Most common were t-tests; less frequently one found Analysis of Variance (ANOVA), Analysis of Covariance (ANCOVA), and chi-square tests. In a few cases, results were reported using multiple regression or hierarchical linear modeling. Some studies used multiple tests; hence the total exceeds 63 (Figure 5-8).

FIGURE 5-8 Statistical tests most frequently used.

One of the difficult aspects of doing curriculum evaluations concerns using the appropriate unit both in terms of the unit to be randomly assigned in an experimental study and the unit to be used in statistical analysis in either an experimental or quasi-experimental study.

For our purposes, we decided that unless a study concerned an intact student population, such as the freshmen at a single university, where a student-level comparison is the correct unit, the unit for statistical tests should be at least the classroom. Judgments were made for each study as to whether the appropriate unit was utilized. This question is an important one because statistical significance is related to sample size, and as a result, studies that inappropriately use the student as the unit of analysis could be concluding that significant differences exist where they do not. For example, if achievement differences between two curricula are tested in 16 classrooms with 400 students, it will always be easier to show significant differences using scores from those 400 students than using the 16 classroom means.
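To make the point concrete, here is a minimal simulation, with invented numbers rather than data from any study reviewed here, of 16 classrooms (8 per curriculum, 25 students each) in which there is no true curricular difference. Because classmates share a classroom effect, treating the 400 students as the unit of analysis inflates the false-positive rate well above the nominal 5 percent, while testing the 16 class means does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_classes(n_classes, mean=70.0, class_sd=5.0, student_sd=10.0, n_students=25):
    """One score array per classroom; a shared class effect correlates classmates."""
    return [mean + rng.normal(0, class_sd) + rng.normal(0, student_sd, n_students)
            for _ in range(n_classes)]

n_trials, reject_students, reject_classes = 2000, 0, 0
for _ in range(n_trials):
    a = simulate_classes(8)   # 8 classrooms per curriculum, 25 students each:
    b = simulate_classes(8)   # 16 classrooms, 400 students, no true difference
    # Wrong unit: pool all 400 students and ignore the clustering.
    reject_students += stats.ttest_ind(np.concatenate(a), np.concatenate(b)).pvalue < 0.05
    # Correct unit: one mean per classroom (n = 8 per group).
    reject_classes += stats.ttest_ind([c.mean() for c in a], [c.mean() for c in b]).pvalue < 0.05

print(f"false-positive rate, student unit: {reject_students / n_trials:.1%}")
print(f"false-positive rate, class-mean unit: {reject_classes / n_trials:.1%}")
```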

Fifty-seven studies used students as the unit of analysis in at least one test of significance. Three of these were coded as correct because they involved whole populations. In all, 10 studies were coded as using the correct unit of analysis; hence, 7 studies used teachers, classes, or schools. For some studies where multiple tests were conducted, a judgment was made as to whether the primary conclusions drawn treated the unit of analysis adequately. For example, Huntley et al. (2000) compared the performance of CPMP students with students in a traditional course on a measure of ability to formulate and use algebraic models to answer various questions about relationships among variables. The analysis used students as the unit of analysis and showed a significant difference, as shown in Table 5-4.

TABLE 5-4 Performance on Applied Algebra Problems with Use of Calculators, Part 1

To examine the robustness of this result, we reanalyzed the data using an independent-samples t-test and a matched-pairs t-test, with class means as the unit of analysis in both tests (Table 5-5). As can be seen from the analyses, in neither statistical test was the difference between groups found to be significant (p < .05), emphasizing the importance of using the correct unit in analyzing the data.

TABLE 5-5 Reanalysis of Algebra Performance Data
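The committee’s two reanalyses can be sketched as follows. The class means below are invented placeholders, since the study’s data are not reproduced here; the point is only the mechanics of running both tests on class means.

```python
from scipy import stats

# Hypothetical class means for 6 matched pairs of classrooms
# (one treatment and one comparison class per school).
treatment_means = [62.1, 58.4, 70.3, 55.0, 66.7, 60.2]
comparison_means = [57.9, 59.1, 64.8, 54.2, 61.0, 58.8]

t_ind, p_ind = stats.ttest_ind(treatment_means, comparison_means)   # classes as independent units
t_rel, p_rel = stats.ttest_rel(treatment_means, comparison_means)   # pairs matched by school

print(f"independent samples: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"matched pairs:       t = {t_rel:.2f}, p = {p_rel:.3f}")
```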

Reanalysis of student-level data using class means will not always result in a change in finding. Furthermore, using class means as the unit of analysis does not suggest that significant differences will not be found. For example, a study by Thompson et al. (2001) compared the performance of UCSMP students with the performance of students in a more traditional program across several measures of achievement. They found significant differences between UCSMP students and the non-UCSMP students on several measures. Table 5-6 shows results of an analysis of a multiple-choice algebraic posttest using class means as the unit of analysis. Significant differences were found in five of eight separate classroom comparisons, as shown in the table. They also found a significant difference using a matched-pairs t-test on class means.

TABLE 5-6 Mean Percentage Correct on Entire Multiple-Choice Posttest: Second Edition and Non-UCSMP

The lesson to be learned from these reanalyses is that the choice of unit of analysis and the way the data are aggregated can impact study findings in important ways including the extent to which these findings can be generalized. Thus it is imperative that evaluators pay close attention to such considerations as the unit of analysis and the way data are aggregated in the design, implementation, and analysis of their studies.

Second, effect size has become a relatively common and standard way of gauging the practical significance of findings. Statistical significance indicates only whether the main-level differences between two curricula are large enough not to be attributable to chance, assuming they come from the same population. When statistical differences are found, the question remains whether such differences are large enough to matter. Because any innovation has its costs, the question becomes one of cost-effectiveness: Are the differences in student achievement large enough to warrant the costs of change? Quantifying the practical effect once statistical significance is established is one way to address this issue. There is a statistical literature for doing this, and for the purposes of this review, the committee simply noted whether these studies estimated such an effect. However, the committee further noted that in conducting meta-analyses across these studies, effect size was likely to be of little value. These studies used an enormous variety of outcome measures, and even using effect size as a means to standardize units across studies is not sensible when the measures in each study address such a variety of topics, forms of reasoning, content levels, and assessment strategies.
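As one illustration of the kind of practical-effect estimate noted above (a sketch of a standard choice, not the committee’s procedure, and with invented summary statistics), Cohen’s d standardizes the difference in group means by a pooled standard deviation:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Hypothetical summary statistics for a treatment and a comparison group.
print(f"d = {cohens_d(72.0, 11.0, 180, 68.5, 12.0, 175):.2f}")  # ~0.30, a small-to-moderate effect
```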

We note that very few studies drew upon advances in modeling methodologies, including causal modeling, hierarchical linear modeling (Bryk and Raudenbush, 1992; Bryk et al., 1993), and selection bias modeling (Heckman and Hotz, 1989). Although developing detailed specifications for these approaches is beyond the scope of this review, we wish to emphasize that these methodological advances should be considered within future evaluation designs.

Results and Limitations to Generalizability Resulting from Design Constraints

One also must consider what generalizations can be drawn from the results (Campbell and Stanley, 1966; Caporaso and Roos, 1973; Boruch, 1997). Generalization is a matter of external validity in that it determines to what populations the study results are likely to apply. In designing an evaluation study, one must carefully consider, in the selection of units of analysis, how various characteristics of those units will affect the generalizability of the study. It is common for evaluators to conflate issues of representativeness for the purpose of generalizability (external validity) with comparativeness (the selection of or adjustment for comparison groups, a matter of internal validity). Not all studies must be representative of the population served by mathematics curricula to be internally valid. But to be generalizable beyond restricted communities, representativeness must be obtained by random selection of the basic units. Clearly specifying such limitations to generalizability is critical. Furthermore, on the basis of equity considerations, one must be sure that if overall effectiveness is claimed, the studies have been conducted and analyzed with reference to all relevant subgroups.

Thus, depending on the design of a study, its results may be limited in generalizability to other populations and circumstances. We identified four typical kinds of limitations on the generalizability of studies and coded them to determine, on the whole, how generalizable the results across studies might be.

First, there were studies whose designs were limited by the ability or performance level of the students in the samples. It was not unusual to find that when new curricula were implemented at the secondary level, schools kept in place systems of tracking that assigned the top students to traditional college-bound curriculum sequences. As a result, studies either used comparative groups who were matched demographically but less skilled than the population as a whole in relation to prior learning, or their results compared samples of less well-prepared students to samples of students with stronger preparation. Alternatively, some studies reported on the effects of curricular reform on gifted and talented students or on college-attending students. In these cases, the study results would likewise be limited in generalizability to similar populations. Reports using limited samples of students’ ability and prior performance levels were coded as a limitation to the generalizability of the study.

For example, Wasman (2000) conducted a study of one school (six teachers) and examined the students’ development of algebraic reasoning after one (n=100) and two years (n=73) in CMP. In this school, the top 25 percent of the students are counseled to take a more traditional algebra course, so her experimental sample, which was 61 percent white, 35 percent African American, 3 percent Asian, and 1 percent Hispanic, consisted of the lower 75 percent of the students. She reported on the student performance on the Iowa Algebraic Aptitude Test (IAAT) (1992), in the subcategories of interpreting information, translating symbols, finding relationships, and using symbols. Results for Forms 1 and 2 of the test, for the experimental and norm group, are shown in Table 5-7 for 8th graders.

In our coding of outcomes, this study was coded as showing no significant differences, although arguably its results demonstrate a positive set of outcomes, as the treatment group was weaker than the control group. Had the researcher used a prior achievement measure and a different statistical technique, significance might have been demonstrated, although potential teacher effects confound interpretation of the results.

TABLE 5-7 Comparing Iowa Algebraic Aptitude Test (IAAT) Mean Scores of the Connected Mathematics Project Forms 1 and 2 to the Normative Group (8th Graders)

A second limitation to generalizability arose when comparative studies were conducted entirely at curriculum pilot sites, which were developed as a means to conduct formative evaluations of the materials with close contact with and advice from teachers. Typically, pilot sites have unusual levels of teacher support, whether in the form of daily technical support in the use of materials or technology or in increased quantities of professional development. These sites are often selected for study because they have established cooperative agreements with the program developers and because other sources of data, such as classroom observations, are already available. We coded whether a study was conducted at a pilot site to signal potential limitations in the generalizability of the findings.

Third, studies were also coded as being of limited generalizability if they failed to disaggregate their data by socioeconomic class, race, gender, or some other potentially significant sources of restriction on the claims. We recorded the categories in which disaggregation occurred and compiled their frequency across the studies. Because of the need to open the pipeline to advanced study in mathematics by members of underrepresented groups, we were particularly concerned about gauging the extent to which evaluators factored such variables into their analysis of results and not just in terms of the selection of the sample.

Of the 46 included studies of NSF-supported curricula, 19 disaggregated their data by student subgroup. Nine of the 17 studies of commercial materials disaggregated their data. Figure 5-9 shows the number of studies that disaggregated outcomes by race or ethnicity, SES, gender, LEP, special education status, or prior achievement. Studies using multiple categories of disaggregation were counted multiple times by program category.

FIGURE 5-9 Disaggregation of subpopulations.

The last category of restricted generalization occurred in studies of limited sample size. Although such studies may have provided more in-depth observations of implementation and reports on professional development factors, the smaller numbers of classrooms and students limit the extent of generalization that can be drawn from them. Figure 5-10 shows the distribution of sample sizes, in terms of numbers of students, by study type.

FIGURE 5-10 Proportion of studies by sample size and program.

Summary of Results by Student Achievement Among Program Types

We present the results of the studies as a means to further investigate their methodological implications. To this end, for each study, we counted across outcome measures the number of findings that were positive, negative, or indeterminate (no significant difference) and then calculated the proportion of each. We represented the calculation for each study as a triplet (a, b, c), where a indicates the proportion of the results that were positive and statistically significantly stronger than the comparison program, b indicates the proportion that were negative and statistically significantly weaker than the comparison program, and c indicates the proportion that showed no significant difference between the treatment and the comparative group. For studies with a single outcome measure, without disaggregation by content strand, the triplet is always composed of two zeros and a single one. For studies with multiple measures or disaggregation by content strand, the triplet is typically a set of three decimal values that sum to one. For example, a study with one outcome measure in favor of the experimental treatment would be coded (1, 0, 0), while one with multiple measures and mixed results more strongly in favor of the comparative curriculum might be listed as (.20, .50, .30). This triplet would mean that for 20 percent of the comparisons examined, the evaluators reported statistically significant positive results; for 50 percent of the comparisons, the results were statistically significant in favor of the comparison group; and for 30 percent of the comparisons, no significant difference was found. Overall, the mean score on these distributions was (.54, .07, .40), indicating that across all the studies, 54 percent of the comparisons favored the treatment, 7 percent favored the comparison group, and 40 percent showed no significant difference. Table 5-8 shows the comparison by curricular program types. We present the results by individual program types because each program type relies on a similar program theory and hence could lead to patterns of results that would be lost in combining the data. If the studies of commercial materials are all grouped together to include UCSMP, their pattern of results is (.38, .11, .51). Again we emphasize that, in light of our call for increased methodological rigor and the use of multiple methods, this result is not sufficient to establish with adequate certainty the curricular effectiveness of these programs as a whole.
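The triplet bookkeeping is straightforward to mechanize. This sketch, using invented study codings, computes each study’s (a, b, c) proportions and the unweighted mean across studies in the manner described above.

```python
from collections import Counter

# Each study is a list of its comparisons, coded "+" (favoring the treatment),
# "-" (favoring the comparison), or "0" (no significant difference). Invented data.
studies = [
    ["+"],                      # single outcome favoring the treatment -> (1, 0, 0)
    ["+", "-", "0", "0", "-"],  # mixed results across five comparisons
    ["0", "+", "+"],
]

def triplet(outcomes):
    """Proportions (positive, negative, no significant difference) for one study."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return tuple(counts[k] / n for k in ("+", "-", "0"))

triplets = [triplet(s) for s in studies]
# Unweighted mean across studies, as in the committee's summary calculation.
mean_triplet = tuple(sum(t[i] for t in triplets) / len(triplets) for i in range(3))
print([tuple(round(x, 2) for x in t) for t in triplets])
print(tuple(round(x, 2) for x in mean_triplet))
```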

We caution readers that these results are summaries of the results presented across a set of evaluations that meet only the standard of at least minimally methodologically adequate. Calculations of statistical significance of each program’s results were reported by the evaluators; we have made no adjustments for weaknesses in the evaluations, such as inappropriate use of units of analysis in calculating statistical significance. Evaluations that consistently used the correct unit of analysis, such as UCSMP, could have fewer reports of significant results as a consequence. Furthermore, these results are not weighted by study size. Within any study, the results pay no attention to comparative effect size or to the established credibility of an outcome measure. Similarly, these results do not take into account differences in the populations sampled, an important consideration in generalizing the results. For example, within the same set of studies, UCSMP studies used volunteer samples who responded to advertisements in their newsletters, resulting in samples with disproportionately Caucasian subjects from wealthier schools compared to national samples. As a result, we would suggest that these results are useful only as baseline data for future evaluation efforts. Our purpose in calculating these results is to permit us to create filters from the critical decision points and test how the results change as one applies more rigorous standards.

TABLE 5-8 Comparison by Curricular Program Types

Given that none of the studies adequately addressed all of the critical criteria, we do not offer these results as definitive, only suggestive—a hypothesis for further study. In effect, given the limitations of time and support, and the urgency of providing advice related to policy, we offer this filtering approach as an informal meta-analytic technique sufficient to permit us to address our primary task, namely, evaluating the quality of the evaluation studies.

This approach reflects the committee’s view that to deeply understand and improve methodology, it is necessary to scrutinize the results and to determine what inferences they provide about the conduct of future evaluations. Analogous to debates on consequential validity in testing, we argue that to strengthen methodology, one must consider what current methodologies are able (or not able) to produce across an entire series of studies. The remainder of the chapter considers in detail what claims are made by these studies, and how robust those claims are when subjected to challenge by alternative hypotheses, filtering by tests of increasing rigor, and examination of results and patterns across the studies.

Alternative Hypotheses on Effectiveness

In the spirit of scientific rigor, the committee sought to consider rival hypotheses that could explain the data. Given the weaknesses in the designs generally, these alternative hypotheses often cannot be dismissed. However, we believed that only after examining the configuration of results and alternative hypotheses can the next generation of evaluations be better informed and better designed. We began by generating alternative hypotheses to explain the positive directionality of the results in favor of experimental groups. Alternative hypotheses included the following:

The teachers in the experimental groups tended to be self-selecting early adopters, and thus able to achieve effects not likely in regular populations.

Changes in student outcomes reflect the effects of professional development instruction, or level of classroom support (in pilot sites), and thus inflate the predictions of effectiveness of curricular programs.

A Hawthorne effect (Franke and Kaul, 1978) occurs when treatments are compared to everyday practices, due to motivational factors that influence experimental participants.

The consistent difference is due to the coherence and consistency of a single curricular program when compared to multiple programs.

The significance level is only achieved by the use of the wrong unit of analysis to test for significance.

Supplemental materials or new teaching techniques produce the results and not the experimental curricula.

Significant results reflect inadequate outcome measures that focus on a restricted set of activities.

The results are due to evaluator bias because too few evaluators are independent of the program developers.

At the same time, one could argue that the results actually underestimate the performance of these materials and are conservative measures; these alternative hypotheses also deserve consideration:

Many standardized tests are not sensitive to these curricular approaches, and by eliminating studies focusing on affect, we eliminated a key indicator of the appeal of these curricula to students.

Poor implementation or increased demands on teachers’ knowledge dampens the effects.

Often in the experimental treatment, top-performing students are missing as they are advised to take traditional sequences, rendering the samples unequal.

Materials are not well aligned with the expectations of universities and colleges, because tests for placement and for success in early courses focus extensively on algebraic manipulation.

Program implementation has been undercut by negative publicity and the fears of parents concerning change.

There are also a number of possible hypotheses that may be affecting the results in either direction, and we list a few of these:

Examining the role of the teacher in curricular decision making is an important element in effective implementation, yet the mandates of evaluation design can make this impossible to study (consider the positives and negatives of single- versus dual-track curricula, as in Lundin, 2001).

Local tests that are sensitive to the curricular effects typically are not mandatory and hence may lead to unpredictable performance by students.

Different types and extent of professional development may affect outcomes differentially.

Persistence or attrition may affect the mean scores and is often not considered in the comparative analyses.

One could also generate reasons why the curricular programs produced results showing no significance when one program or the other is actually more effective. These could include high degrees of variability in the results, samples that used the correct unit of analysis but did not obtain consistent participation across enough cases, implementation that did not show enough fidelity, or outcome measures insensitive to the effects. Subsequent designs should be better informed by these findings to improve the likelihood of producing less ambiguous results, and replication of studies could also give more confidence in the findings.

It is beyond the scope of this report to consider each of these alternative hypotheses separately and to seek confirmation or refutation of them. However, in the next section, we describe a set of analyses carried out by the committee that permits us to examine and consider the impact of various critical evaluation design decisions on the patterns of outcomes across sets of studies. A number of analyses shed some light on various alternative hypotheses and may inform the conduct of future evaluations.

Filtering Studies by Critical Decision Points to Increase Rigor

In examining the comparative studies, we identified seven critical decision points that we believed would directly affect the rigor and efficacy of the study design. These decision points were used to create a set of 16 filters. These are listed as the following questions:

Was there a report on comparability relative to SES?

Was there a report on comparability of samples relative to prior knowledge?

Was there a report on treatment fidelity?

Was professional development reported on?

Was the comparative curriculum specified?

Was there any attempt to report on teacher effects?

Was a total test score reported?

Was total test score(s) disaggregated by content strand?

Did the outcome measures match the curriculum?

Were multiple tests used?

Was the appropriate unit of analysis used in their statistical tests?

Did they estimate effect size for the study?

Was the generalizability of their findings limited by use of a restricted range of ability levels?

Was the generalizability of their findings limited by use of pilot sites for their study?

Was the generalizability of their findings limited by not disaggregating their results by subgroup?

Was the generalizability of their findings limited by use of small sample size?

The studies were coded to indicate whether they reported having addressed these considerations. In some cases, the decision points were coded dichotomously, as present or absent in the studies; in other cases, they were coded trichotomously, as present, absent, or statistically adjusted for in the results. For example, a study may or may not report on the comparability of the samples in terms of race, ethnicity, or socioeconomic status. If a report on SES was given, the study was coded as “present” on this decision; if a report was missing, it was coded as “absent”; and if SES status or ethnicity was used in the analysis to actually adjust outcomes, it was coded as “adjusted for.” For each coding, the table that follows reports the number of studies that met that condition, and then reports the mean percentage of statistically significant results and of results showing no significant difference for that set of studies. A significance test is run to see if the application of the filter produces changes in the probabilities that are significantly different.

In the cases in which studies are coded into three distinct categories—present, absent, and adjusted for—a second set of filters is applied. First, the studies coded as present or adjusted for are combined and compared to those coded as absent; this is what we refer to as a weak test of the rigor of the study. Second, the studies coded as present or absent are combined and compared to those coded as adjusted for; this is what we refer to as a strong test. For dichotomous codings, there can be as few as three comparisons, and for trichotomous codings, there can be nine comparisons with accompanying tests of significance. Trichotomous codes were used for adjustments for SES and prior knowledge, examining treatment fidelity, professional development, teacher effects, and reports on effect sizes. All others were dichotomous.
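A sketch of the filtering logic follows, using invented codings and triplets. It groups studies by a trichotomous code (here, the treatment of SES) and compares mean triplets under the weak and strong tests; the significance test itself is omitted, since the text does not specify which test was run.

```python
import numpy as np

# Invented codings: each study has an SES code and a (pos, neg, nsd) triplet.
studies = [
    ("present",  (0.47, 0.10, 0.43)),
    ("absent",   (0.57, 0.07, 0.36)),
    ("adjusted", (0.72, 0.00, 0.28)),
    ("absent",   (0.50, 0.10, 0.40)),
    ("present",  (0.60, 0.05, 0.35)),
]

def mean_triplet(subset):
    """Unweighted mean (pos, neg, nsd) over a subset of studies."""
    return tuple(np.mean([t for _, t in subset], axis=0).round(2)) if subset else None

by_code = lambda *codes: [s for s in studies if s[0] in codes]

print("present only:", mean_triplet(by_code("present")))
print("weak test  (present or adjusted):", mean_triplet(by_code("present", "adjusted")),
      "vs absent:", mean_triplet(by_code("absent")))
print("strong test (adjusted):", mean_triplet(by_code("adjusted")),
      "vs present or absent:", mean_triplet(by_code("present", "absent")))
```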

NSF Studies and the Filters

For example, there were 11 studies of NSF-supported curricula that simply reported on issues of SES in creating equivalent samples for comparison; for this subset, the mean probabilities of getting positive, negative, or no-significant-difference results were (.47, .10, .43). If no report on SES was supplied (n=21), those probabilities become (.57, .07, .37), indicating an increase in positive results and a decrease in results showing no significant difference. When an adjustment is made in outcomes based on differences in SES (n=14), the probabilities change to (.72, .00, .28), showing a still higher likelihood of positive outcomes. The probabilities that result from filtering should always be compared back to the overall results of (.59, .06, .35) (see Table 5-8) to permit one to judge the effects of more rigorous methodological constraints. This suggests that a simple report on SES without adjustment is least likely to produce positive outcomes, that no report produces the next most positive outcomes, and that studies adjusting for SES tend to have the highest proportion of comparisons producing positive results.

The second method of applying the filter (the weak test of rigor) for the treatment of SES compares the probabilities when a report is either given or adjusted for against those when no report is offered. The combined probabilities for studies in which SES is reported or adjusted for are (.61, .05, .34), while those for no report remain, as reported previously, (.57, .07, .37). A final filter compares the probabilities of the studies in which SES is adjusted for with those that either report it only or do not report it at all; here we compare (.72, .00, .28) to (.53, .08, .37) in what we call a strong test. In each case we compared the probabilities produced by the whole group to those of the filtered studies and conducted a test of whether the differences were significant. These differences were not significant. These findings indicate that to date, with this set of studies, there is no statistically significant difference in results when one reports or adjusts for SES. It appears that adjusting for SES increases the positive results, and this result deserves closer examination for its implications should it hold up over larger sets of studies.

We ran tests that report the impact of the filters on the number of studies, the percentage of studies, and the effects described as probabilities for each of the study categories, NSF-supported and commercially generated with UCSMP included. We claim that when a pattern of probabilities of results does not change after filtering, one can have more confidence in that pattern. When the pattern of results changes, there is a need for an explanatory hypothesis, and that hypothesis can shed light on experimental design. We propose that this “filtering process” constitutes a test of the robustness of the outcome measures as they are subjected to increasing degrees of rigor.

Results of Filtering on Evaluations of NSF-Supported Curricula

For the NSF-supported curricular programs, 5 of 15 filters produced a probability that differed significantly at the p < .1 level: treatment fidelity, specification of the control group, choice of the appropriate statistical unit, generalizability restricted by ability, and generalizability restricted by lack of disaggregation by subgroup. For each filter, there were from three to nine comparisons, as we examined how the probabilities of outcomes changed as tests became more stringent, across the categories of positive results, negative results, and results with no significant differences. Out of a total of 72 possible tests, only 11 produced a probability that differed significantly at the p < .1 level. With 85 percent of the comparisons showing no significant difference after filtering, we suggest the results of the studies were relatively robust in relation to these tests. At the same time, when rigor is increased for the five filters just listed, the results become generally more ambiguous and signal the need for further research with more careful designs.

Studies of Commercial Materials and the Filters

To ensure enough studies to conduct the analysis (n=17), our filtering analysis of the commercially generated studies included UCSMP (n=8). In this case, there were six filters that produced a probability that differed significantly at the p < .1 level. These were treatment fidelity, disaggregation by content, use of multiple tests, use of effect size, generalizability by ability, and generalizability by sample size. In this case, because there were no studies in some possible categories, there were a total of 57 comparisons, and 9 displayed significant differences in the probabilities after filtering at the p < .1 level. With 84 percent of the comparisons showing no significant difference after filtering, we suggest the results of the studies were relatively robust in relation to these tests. Table 5-9 shows the cases in which significant differences were recorded.

Impact of Treatment Fidelity on Probabilities

A few of these differences are worthy of comment. In the cases of both the NSF-supported and commercially generated curricula evaluation studies, studies that reported treatment fidelity differed significantly from those that did not. In the case of the studies of NSF-supported curricula, it appeared that a report or adjustment on treatment fidelity led to proportions with less positive effects and more results showing no significant differences. We hypothesize that this is partly because larger studies often do not examine actual classroom practices, but can obtain significance more easily due to large sample sizes.

In the studies of commercial materials, the presence or absence of measures of treatment fidelity worked differently. Studies reporting on or adjusting for treatment fidelity tended to have significantly higher probabilities in favor of the experimental treatment, fewer results favoring the comparative treatment, and a greater likelihood of results with no significant differences. We hypothesize, and confirm with a separate analysis, that this is because UCSMP frequently reported on treatment fidelity in their designs while studies of Saxon typically did not, and the change reflects the preponderance of these different curricular treatments among the studies of commercially generated materials.

Impact of Identification of Curricular Program on Probabilities

The significant differences reported under specificity of curricular comparison also merit discussion for studies of NSF-supported curricula. When the comparison group is not specified, a higher percentage of mean scores in favor of the experimental curricula is reported. In the studies of commercial materials, a failure to name specific curricular comparisons also produced a higher percentage of positive outcomes for the treatment, but the difference was not statistically significant. This suggests the possibility that when a specified curriculum is compared to an unspecified curriculum, reports of impact may be inflated. This finding may suggest that in studies of effectiveness, specifying comparative treatments would provide more rigorous tests of experimental approaches.

When studies of commercial materials disaggregated their results by content strand or used multiple measures, their reports of positive outcomes increased, negative outcomes decreased, and in one case the results showed no significant differences. A significant difference in the probabilities was recorded in only one comparison within each of these filters.

TABLE 5-9 Cases of Significant Differences

Impact of Units of Analysis on Probabilities

For the evaluations of the NSF-supported materials, a significant difference was reported in the outcomes for the studies that used the correct unit of analysis compared with those that did not. The probabilities for those with the correct unit were (.30, .40, .30), compared to (.63, .01, .36) for those that used the incorrect unit. These results suggest that our prediction that using the correct unit of analysis would decrease the percentage of positive outcomes is likely to be correct. They also suggest that the most serious threat to the apparent conclusions of these studies comes from selecting an incorrect unit of analysis. Applying this filter causes a decrease in favorable results, making the results more ambiguous, but never reverses the direction of the effect. This is a concern that merits major attention in the conduct of further studies.

For the commercially generated studies, most of those coded as using the correct unit of analysis were UCSMP studies. Because of the small number of studies involved, we could not break these out from the overall filtering of studies of commercial materials, but we report this issue to assist readers in interpreting the relative patterns of results.

Impact of Generalizability on Probabilities

Both types of studies yielded significant differences for some of the comparisons coded as restrictions to generalizability. Investigating these is important in order to understand the effects of these curricular programs on different subpopulations of students. In the case of the studies of commercially generated materials, significantly different results occurred in the categories of ability and sample size. In the studies of NSF-supported materials, the significant differences occurred in ability and disaggregation by subgroups.

In relation to generalizability, the studies of NSF-supported curricula reported significantly more positive results in favor of the treatment when they included all students. Because studies coded as “limited by ability” were restricted either to higher achieving or to lower achieving students, we sorted these two groups. For higher performing students (n=3), the probabilities of effects were (.11, .67, .22). For lower performing students (n=2), the probabilities were (.39, .025, .59). The first two comparisons are significantly different at p < .05. These findings are based on a total of only five studies, but they suggest that these programs may serve weaker students more effectively than stronger students, and both groups less well than they serve whole heterogeneous groups. For the studies of commercial materials, there were only three studies restricted to limited populations. The results for those three studies were (.23, .41, .32), and for all students (n=14) were (.42, .53, .09). These were significantly different at p = .004. All three studies included UCSMP, and one also included Saxon and was limited by serving primarily high-performing students. This means both categories of programs show weaker results when used with high-ability students.

Finally, the studies of NSF-supported materials were disaggregated by subgroup in 28 studies. A complete analysis of this set follows, but the studies that did not report results disaggregated by subgroup generated probabilities of (.48, .09, .43), whereas those that did disaggregate reported (.76, 0, .24). These gains in positive effects came from significant losses in reports of no significant differences. Studies of commercial materials also reported a small decrease in the likelihood of negative effects for the comparison program when disaggregation by subgroup was reported, offset by increases in positive results and results with no significant differences, although these comparisons were not significantly different. A further analysis of this topic follows.

Overall, these results suggest that increased rigor seems to lead in general to less strong outcomes, but never to reports of completely contrary results. They also suggest that in recommending design considerations to evaluators, careful attention should be paid to having evaluators include measures of treatment fidelity; consider the impact on all students as well as on particular subgroups; use the correct unit of analysis; and use multiple tests that are also disaggregated by content strand.

Further Analyses

We conducted four further analyses: (1) an analysis of the outcome probabilities by test type; (2) content strands analysis; (3) equity analysis; and (4) an analysis of the interactions of content and equity by grade band. Careful attention to the issues of content strand, equity, and interaction is essential for the advancement of curricular evaluation. Content strand analysis provides the detail that is often lost by reporting overall scores; equity analysis can provide essential information on what subgroups are adequately served by the innovations, and analysis by content and grade level can shed light on the controversies that evolve over time.

Analysis by Test Type

Different studies used varied combinations of outcome measures. Because of the bearing of outcome measures on test results, we examined whether the probabilities for the studies changed significantly across different types of outcome measures (national test, local test). The most frequent usages across all studies were a combination of national and local tests (n=18 studies), a local test only (n=16), and national tests only (n=17). Other test combinations were used by three or fewer studies. The percentages of various outcomes by test type, in comparison to all studies, are described in Table 5-10.

These data (Table 5-11) suggest that national tests tend to produce fewer positive results, with the difference falling into results showing no significant differences, an indication that national tests demonstrate less curricular sensitivity and specificity.
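The test-type comparison is the same grouping computation applied to a different code. A sketch with invented rows follows; the use of pandas here is an assumption, and any tabular tool would do.

```python
import pandas as pd

# Invented codings: one row per study, with its test combination and (pos, neg, nsd) triplet.
df = pd.DataFrame({
    "test_type": ["national", "local", "both", "national", "local", "both"],
    "pos": [0.40, 0.65, 0.55, 0.35, 0.70, 0.50],
    "neg": [0.10, 0.05, 0.10, 0.15, 0.00, 0.10],
    "nsd": [0.50, 0.30, 0.35, 0.50, 0.30, 0.40],
})

# Mean outcome proportions by test type, mirroring the kind of comparison in Table 5-10.
print(df.groupby("test_type")[["pos", "neg", "nsd"]].mean().round(2))
```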

TABLE 5-10 Percentage of Outcomes by Test Type

TABLE 5-11 Percentage of Outcomes by Test Type and Program Type

TABLE 5-12 Number of Studies That Disaggregated by Content Strand

Content Strand

Curricular effectiveness is not an all-or-nothing proposition. A curriculum may be effective in some topics and less effective in others. For this reason, it is useful for evaluators to include an analysis of curricular strands and to report on the performance of students on those strands. To examine this issue, we conducted an analysis of the studies that reported their results by content strand. Thirty-eight studies did this; the breakdown is shown in Table 5-12 by type of curricular program and grade band.

To examine the evaluations of these content strands, we began by listing all of the content strands reported across studies, as well as the frequency of report by the number of studies at each grade band. These results are shown in Figure 5-11, which is broken down by content strand, grade level, and program type.

FIGURE 5-11 Study counts for all content strands.

Although there are numerous content strands, some of them were reported on infrequently. To allow the analysis to focus on the key results from these studies, we separated out the most frequently reported strands, which we call the “major content strands.” We defined these as strands that were examined in at least 10 percent of the studies; they are marked with an asterisk in Figure 5-11. When we conduct analyses across curricular program types or grade levels, we use these major strands to facilitate comparisons.
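Identifying major strands is a simple threshold count. A sketch with invented strand reports:

```python
from collections import Counter

# Invented data: the content strands examined by each of several hypothetical studies.
strand_reports = [
    ["computation", "geometry"],
    ["computation", "algebra concepts", "probability/statistics"],
    ["geometry"],
    ["computation", "word problems"],
]

n_studies = len(strand_reports)
counts = Counter(strand for study in strand_reports for strand in study)

# "Major" strands: those examined in at least 10 percent of the studies.
major = sorted(s for s, c in counts.items() if c / n_studies >= 0.10)
print(major)
```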

A second phase of our analysis was to examine the performance of students by content strand in the treatment group in comparison to the control groups. Our analysis was conducted across the major content strands at the level of NSF-supported versus commercially generated materials, initially for all studies and then by grade band. Such analysis permitted some patterns to emerge that might prove helpful to future evaluators in considering the overall effectiveness of each approach. To do this, we coded the number of times any particular strand was measured across all studies that disaggregated by content strand. Then, we coded the proportion of times that this strand was reported as favoring the experimental treatment, favoring the comparative curricula, or showing no significant difference. These data are presented across the major content strands for the NSF-supported curricula (Figure 5-12) and the commercially generated curricula (Figure 5-13; except in the case of the elementary curricula, where no data were available), in the form of percentages, with the frequencies listed in the bars.

The presentation of results by strands must be accompanied by the same restrictions as stated previously. These results are based on studies identified as at least minimally methodologically adequate. The quality of the outcome measures in measuring the content strands has not been examined. Their results are coded in relation to the comparison group in the study and are indicated as statistically in favor of the program, as in favor of the comparative program, or as showing no significant differences. The results are combined across studies with no weighting by study size. Their results should be viewed as a means for the identification of topics for potential future study. It is completely possible that a refinement of methodologies may affect the future patterns of results, so the results are to be viewed as tentative and suggestive.


FIGURE 5-12 Major content strand result: All NSF (n=27).

According to these tentative results, future evaluations should examine whether the NSF-supported programs produce sufficient competency among students in the areas of algebraic manipulation and computation. In computation, approximately 40 percent of the results were in favor of the treatment group, no significant differences were reported in approximately 50 percent of the results, and results in favor of the comparison were revealed 10 percent of the time. Interpreting that final proportion of no significant difference is essential. Some would argue that because computation has not been emphasized, findings of no significant differences are acceptable. Others would suggest that such findings indicate weakness, because the development of the materials and accompanying professional development yielded no significant difference in key areas.


FIGURE 5-13 Major content strand result: All commercial (n=8).

Figure 5-13, which shows findings from studies of commercially generated curricula, indicates that mixed results are commonly reported. In evaluations of commercial materials, the lack of significant differences in computations/operations, word problems, and probability and statistics suggests that careful attention should be given to measuring these outcomes in future evaluations.

Overall, the grade band results for the NSF-supported programs, while consistent with the aggregated results, provide more detail. At the elementary level, evaluations of NSF-supported curricula (n=12) report better performance in mathematics concepts, geometry, and reasoning and problem solving, and some weaknesses in computation. No content strand analysis for commercially generated materials was possible. At the middle grades, evaluations of NSF-supported curricula (n=6) showed strength in measurement, geometry, and probability and statistics, and some weaknesses in computation. In the studies of commercial materials, evaluations (n=4) reported favorable results in reasoning and problem solving and some unfavorable results in algebraic procedures, contextual problems, and mathematics concepts. Finally, at the high school level, the evaluations (n=9) by content strand for the NSF-supported curricula showed strong favorable results in algebra concepts, reasoning/problem solving, word problems, probability and statistics, and measurement. Results in favor of the control were reported in 25 percent of the algebra procedures measures and 33 percent of the computation measures.

For the studies of commercial materials (n=4), only in geometry did results favor the control group, 25 percent of the time, with 50 percent of results favoring the treatment. Algebra concepts, reasoning, and probability and statistics also produced favorable results.

Equity Analysis of Comparative Studies

When the goal of providing a standards-based curriculum to all students was proposed, most people could recognize its merits: the replacement of dull, repetitive, largely dead-end courses with courses that would leave all students able, if they so chose and qualified, to pursue careers in mathematics-reliant fields. It was clear that the NSF-supported projects, a stated goal of which was to provide standards-based courses to all students, called for curricula that would address the problem of too few students persisting in the study of mathematics. For example, as stated in the NSF Request for Proposals (RFP):

Rather than prematurely tracking students by curricular objectives, secondary school mathematics should provide for all students a common core of mainstream mathematics differentiated instructionally by level of abstraction and formalism, depth of treatment and pace (National Science Foundation, 1991, p. 1).

In the elementary-level solicitation, a similar statement on courses for all students was made (National Science Foundation, 1988, pp. 4-5).

Some, but not enough attention has been paid to the education of students who fall below the average of the class. On the other hand, because the above average students sometimes do not receive a demanding education, it may be incorrectly assumed they are easy to teach (National Science Foundation, 1989, p. 2).

Likewise, with increasing numbers of students in urban schools, and increased demographic diversity, the challenges of equity are equally significant for commercial publishers, who feel increasing pressures to demonstrate the effectiveness of their products in various contexts.

The problem was clearly identified: poorer performance by certain subgroups of students (non-Asian minorities, LEP students, and sometimes females) and a resulting lack of representation of such groups in mathematics-reliant fields. In addition, a secondary problem was acknowledged: highly talented American students were not being provided adequate challenge and stimulation in comparison with their international counterparts. We relied on the concept of equity in examining the evaluations. Equity was contrasted with equality, where one assumes all students should be treated exactly the same (Secada et al., 1995). Equity was defined as providing opportunities and eliminating barriers so that membership in a subgroup does not subject one to undue and systematically diminished possibility of success in pursuing mathematical study. Appropriate treatment therefore varies according to the needs of, and obstacles facing, any subgroup.

Applying the principles of equity to evaluate the progress of curricular programs is a conceptually thorny challenge: how should one evaluate curricular programs on their progress toward equity in meeting the needs of a diverse student body? Consider how the following questions provide a variety of perspectives on the effectiveness of curricular reform regarding equity:

Does one expect all students to improve performance, thus raising the bar, but possibly not to decrease the gap between traditionally well-served and under-served students?

Does one focus on reducing the gap and devote less attention to overall gains, thus closing the gap but possibly not raising the bar?

Or, does one seek evidence that progress is made on both challenges—seeking progress for all students and arguably faster progress for those most at risk?

Evaluating each of the first two questions independently seems relatively straightforward. When one opts for a combination of the two, the potential for tension between them becomes more evident. For example, how can one differentiate the case in which the gap is closed because talented students are being underchallenged from the case in which the gap is closed because low-performing students improved at an increased rate? Many believe that nearly all mathematics curricula in this country are insufficiently challenging and rigorous. Therefore, achieving modest gains across all ability levels with evidence of accelerated progress by at-risk students may still be criticized for failing to stimulate the top-performing student group adequately. Evaluating curricula in this regard therefore requires judgment and careful methodological attention.

Depending on one’s view of equity, different implications for the collection of data follow. These considerations made examination of the quality of the evaluations as they treated questions of equity challenging for the committee members. Hence we spell out our assumptions as precisely as possible:

Evaluation studies should include representative samples of student demographics, which may require particular attention to the inclusion of underrepresented minority students from lower socioeconomic groups, females, and special needs populations (LEP, learning disabled, gifted and talented students) in the samples. This may require one to solicit participation by particular schools or districts, rather than to follow the patterns of commercial implementation, which may lead to an unrepresentative sample in aggregate.

Analysis of results should always consider the impact of the program on the entire spectrum of the sample to determine whether the overall gains are distributed fairly among differing student groups, and not achieved as improvements in the mean(s) of an identifiable subpopulation(s) alone.

Analysis should examine whether any group of students is systematically less well served by curricular implementation, causing losses or weakening the rate of gains. For example, this could occur if one neglected the continued development of programs for gifted and talented students in mathematics in order to implement programs focused on improving access for underserved youth, or if one improved programs solely for one group of language learners while ignoring the needs of others, or if one's study systematically failed to report high attrition affecting rates of participation, success, or failure.

Analyses should examine whether gaps in scores between significantly disadvantaged or underperforming subgroups and advantaged subgroups are decreasing both in relation to eliminating the development of gaps in the first place and in relation to accelerating improvement for underserved youth relative to their advantaged peers at the upper grades.

In reviewing the outcomes of the studies, the committee reports first on what kinds of attention to these issues were apparent in the database, and second on what kinds of results were produced. Some of the studies used multiple methods to provide readers with information on these issues. In our report on the evaluations, we both provide descriptive information on the approaches used and summarize the results of those studies. Developing more effective methods to monitor the achievement of these objectives may need to go beyond what is reported in this study.

Among the 63 at least minimally methodologically adequate studies, 26 reported on the effects of their programs on subgroups of students. The other 37 reported on the effects of the curricular intervention on the means of whole groups and their standard deviations, but did not report on their data in terms of the impact on subpopulations. Of those 26 evaluations, 19 studies were on NSF-supported programs and 7 were on commercially generated materials. Table 5-13 reports the most common subgroups used in the analyses and the number of studies that reported on that variable. Because many studies used multiple categories for disaggregation (ethnicity, SES, and gender), the number of reports is more than double the number of studies. For this reason, we report the study results in terms of the "frequency of reports on a particular subgroup" and distinguish this from what we refer to as "study counts." The advantage of this approach is that it permits reporting on studies that investigated multiple ways to disaggregate their data. The disadvantage is that, in a sense, studies undertaking multiple disaggregations become overrepresented in the data set as a result. A similar distinction and approach were used in our treatment of disaggregation by content strands.

TABLE 5-13 Most Common Subgroups Used in the Analyses and the Number of Studies That Reported on That Variable (table not reproduced here)
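To make this counting convention concrete, consider a minimal sketch in Python. The study records below are invented for illustration and are not the committee's database; a study that disaggregates by three subgroup variables contributes three reports but only one study count.

```python
import pandas as pd

# Invented records: each row is a study and the subgroup variables
# it used for disaggregation (illustration only).
studies = pd.DataFrame({
    "study": ["A", "B", "C"],
    "subgroups": [["gender", "ethnicity", "SES"], ["gender"], ["ethnicity"]],
})

study_count = len(studies)              # 3 studies
reports = studies.explode("subgroups")  # one row per subgroup report
report_count = len(reports)             # 5 reports across the 3 studies

print(report_count, "reports from", study_count, "studies")
print(reports["subgroups"].value_counts())  # report frequency per subgroup
```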

It is apparent from these data that the evaluators of NSF-supported curricula documented more equity-based outcomes, as they reported 43 of the 56 comparisons. However, the same percentage of NSF-supported evaluations disaggregated their results by subgroup as did commercially generated evaluations (41 percent in both cases). This is an area where evaluations of curricula could benefit greatly from standardization of expectation and methodology. Given the importance of the topic of equity, it should be standard practice to include such analyses in evaluation studies.

In summarizing these 26 studies, the first consideration was whether representative samples of students were evaluated. As we have learned from medical studies, if conclusions on effectiveness are drawn without careful attention to representativeness of the sample relative to the whole population, then the generalizations drawn from the results can be seriously flawed. In Chapter 2 we reported that across the studies, approximately 81 percent of the comparative studies and 73 percent of the case studies reported data on school location (urban, suburban, rural, or state/region), with suburban students being the largest percentage in both study types. The proportions of students studied indicated a tendency to undersample urban and rural populations and oversample suburban schools. With a high concentration of minorities and lower SES students in these areas, there are some concerns about the representativeness of the work.

A second consideration was to see whether the achievement effects of curricular interventions were achieved evenly among the various subgroups. Studies answered this question in different ways. Most commonly, evaluators reported on the performance of various subgroups in the treatment conditions as compared to those same subgroups in the comparative condition. They reported outcome scores or gains from pretest to posttest. We refer to these as “between” comparisons.

Other studies reported on the differences among subgroups within an experimental treatment, describing how well one group does in comparison with another group. Again, these reports were done in relation either to outcome measures or to gains from pretest to posttest. Often these reports contained a time element, reporting on how the internal achievement patterns changed over time as a curricular program was used. We refer to these as “within” comparisons.
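Both kinds of comparison can be expressed compactly in code. The sketch below uses invented gain scores, not data from any reviewed study, to show a "between" comparison (each subgroup in the treatment versus the same subgroup in the control) and a "within" comparison (the gap between subgroups inside the treatment condition alone).

```python
import pandas as pd

# Invented pretest-to-posttest gains for illustration only.
df = pd.DataFrame({
    "condition": ["treatment"] * 4 + ["control"] * 4,
    "subgroup":  ["A", "A", "B", "B"] * 2,
    "gain":      [12, 10, 7, 9, 8, 9, 6, 5],
})

# "Between" comparison: each subgroup's mean gain, treatment vs. control.
between = df.groupby(["subgroup", "condition"])["gain"].mean().unstack()
print(between)

# "Within" comparison: the gap between subgroups inside the treatment.
treat = df[df["condition"] == "treatment"]
gaps = treat.groupby("subgroup")["gain"].mean()
print("within-treatment gap (A - B):", gaps["A"] - gaps["B"])
```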

Some studies reported both between and within comparisons. Others did not report findings by comparing mean scores or gains, but rather created regression equations that predicted the outcomes and examined whether demographic characteristics were related to performance. Six studies (all on NSF-supported curricula) used this approach with variables related to subpopulations. Twelve studies used ANCOVA or multivariate analysis of variance (MANOVA) to study disaggregation by subgroup, and two reported on comparative effect sizes. Among the studies using statistical tests other than t-tests or chi-squares, two were evaluations of commercially generated materials and the rest were of NSF-supported materials.
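The regression approach can be sketched as follows. The data and variable names here are hypothetical stand-ins, not the variables used in the six NSF-supported studies; the ANCOVA-style variant differs only in adding the pretest score as a covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data for illustration only (not from any reviewed study).
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),  # 1 = experimental curriculum
    "female":    rng.integers(0, 2, n),
    "minority":  rng.integers(0, 2, n),
    "pretest":   rng.normal(50, 10, n),
})
df["posttest"] = df["pretest"] + 3 * df["treatment"] + rng.normal(0, 5, n)

# Regression approach: are demographic characteristics related to the outcome?
model = smf.ols("posttest ~ treatment + female + minority", data=df).fit()

# ANCOVA-style variant: adjust for prior achievement as a covariate.
ancova = smf.ols("posttest ~ treatment + female + minority + pretest",
                 data=df).fit()
print(ancova.params)
```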

Of the studies that reported on gender (n=19), the NSF-supported ones (n=13) reported five cases in which the females outperformed their counterparts in the controls and one case in which the female-male gap decreased within the experimental treatments across grades. In most cases, the studies present a mixed picture with some bright spots, with the majority showing no significant difference. One study reported significant improvements for African-American females.

In relation to race, 15 of 16 reports on African Americans showed positive effects in favor of the treatment group for NSF-supported curricula. Two studies reported decreases in the gaps between African Americans and whites or Asians. One of the two evaluations of African Americans' performance reported for the commercially generated materials showed significant positive results, as mentioned previously.

For Hispanic students, 12 of 15 reports of the NSF-supported materials were significantly positive, with the other 3 showing no significant difference. One study reported a decrease in the gaps in favor of the experimental group. No evaluations of commercially generated materials were reported on Hispanic populations. Other reports on ethnic groups occurred too seldom to generalize.

Students from lower socioeconomic groups fared well, according to reported evaluations of NSF-supported materials (n=8), in that experimental groups outperformed control groups in all but one case. The one study of commercially generated materials that included SES as a variable reported no significant difference. For students with limited English proficiency, of the two evaluations of NSF-supported materials, one reported significantly more positive results for the experimental treatment. Likewise, one study of commercially generated materials yielded a positive result at the elementary level.

We also examined the data for ability differences and found reports by quartiles for a few evaluation studies. In these cases, the evaluations showed results across quartiles in favor of the NSF-supported materials. In one case using the same program, the lower quartiles showed the most improvement, and in the other, the gains were in the middle and upper groups for the Iowa Test of Basic Skills and evenly distributed for the informal assessment.

Summary Statements

After reviewing these studies, the committee observed that differences by gender, race, SES, and performance level should be examined as a regular part of any review of effectiveness. We recommend that all comparative studies report on both "between" and "within" comparisons so that the audience of an evaluation can simply and easily consider the level of improvement, its distribution across subgroups, and the impact of curricular implementation on any gaps in performance. Each of the major categories—gender, race/ethnicity, SES, and achievement level—contributes a significant and contrasting view of curricular impact. Furthermore, more sophisticated accounts would begin to permit, across studies, finer distinctions to emerge, such as the effect of a program on young African-American women or on first-generation Asian students.

In addition, the committee encourages further study and deliberation on the use of more complex approaches to the examination of equity issues. This is particularly important because of the overlaps among these categories; poverty, for example, can show itself as its own variable but may also be highly correlated with prior performance. Hence, the use of one variable can mask differences that should be more directly attributable to another. The committee recommends that a group of measurement and equity specialists confer on the most effective designs for advancing these questions.

Finally, it is imperative that evaluation studies systematically include demographically representative student populations and distinguish evaluations that follow the commercial patterns of use from those that seek to establish effectiveness with a diverse student population. Along these lines, it is also important that studies report impact data on all substantial ethnic groups, including whites. Many studies, perhaps because whites were the majority population, failed to report on this ethnic group in their analyses. As we saw in one study, in which Asian students were from poor homes and first generation, any subgroup can be an at-risk population in some setting, and gains in means cannot be assumed to translate into gains for all subgroups, or even for the majority subgroup. More complete and thorough descriptions of the characteristics of the subgroups being served at any location—with careful attention to interactions—are needed in evaluations.

Interactions Among Content and Equity, by Grade Band

By examining disaggregation by content strand at each grade band, along with disaggregation by diverse subpopulations, the committee began to discover grade band patterns of performance that should be useful in the conduct of future evaluations. Examining each of these issues in isolation can mask some of the overall effects of curricular use. Two examples of such analysis are provided. The first examines all the evaluations of NSF-supported curricula at the elementary level. The second examines the set of evaluations of NSF-supported curricula at the high school level; a parallel analysis cannot be carried out on evaluations of commercially generated programs because they lack disaggregation by student subgroup.

Example One

At the elementary level, the review of evaluations of data on the effectiveness of NSF-supported curricula reports consistent patterns of benefits to students. Across the studies, it appears that positive results are enhanced when accompanied by adequate professional development and the use of pedagogical methods consistent with those indicated by the curricula. The benefits are most consistently evidenced in the broadening topics of geometry, measurement, probability, and statistics, and in applied problem solving and reasoning. It is important to consider whether the outcome measures in these areas demonstrate a depth of understanding. In early understanding of fractions and algebra, there is some evidence of improvement. Weaknesses are sometimes reported in the areas of computational skills, especially in the routinization of multiplication and division. These assertions are tentative due to possible flaws in the designs, but they are quite consistent across studies, and future evaluations should seek to replicate, modify, or discredit these results.

The way to most efficiently and effectively link informal reasoning and formal algorithms and procedures is an open question. Further research is needed to determine how to most effectively link the gains and flexibility associated with student-generated reasoning to the automaticity and generalizability often associated with mastery of standard algorithms.

The data from these evaluations at the elementary level generally present credible evidence of increased success in engaging minority students and students in poverty, based on reported gains that are modestly higher for these students than for the comparative groups. What is less well documented in the studies is the extent to which the curricula counteract the tendency for gaps in performance by gender and minority group membership to emerge and persist as students move up the grades. However, the evaluations do indicate that these curricula can help, and almost never do harm. Finally, on the question of adequate challenge for advanced and talented students, the data are equivocal. More attention to this issue is needed.

Example Two

The data at the high school level produced the most conflicting results, and in conducting future evaluations, evaluators will need to examine this level more closely. We identify the high school as the crucible for curricular change for three reasons: (1) the transition to postsecondary education puts considerable pressure on these curricula; (2) the criteria outlined in the NSF RFP specify significant changes from traditional practice; and (3) high school freshmen arrive from a myriad of middle school curricular experiences. For the NSF-supported curricula, the RFP required that the programs provide a core curriculum "drawn from statistics/probability, algebra/functions, geometry/trigonometry, and discrete mathematics" (NSF, 1991, p. 2) and use "a full range of tools, including graphing calculators and computers" (NSF, 1991, p. 2). The NSF RFP also specified the inclusion of "situations from the natural and social sciences and from other parts of the school curriculum as contexts for developing and using mathematics" (NSF, 1991, p. 1). It was during the fourth year that "course options should focus on special mathematical needs of individual students, accommodating not only the curricular demands of the college-bound but also specialized applications supportive of the workplace aspirations of employment-bound students" (NSF, 1991, p. 2). Because this set of requirements comprises a significant departure from conventional practice, the implementation of the high school curricula should be studied in particular detail.

We report on a Systemic Initiative for Montana Mathematics and Science (SIMMS) study by Souhrada (2001) and Brown et al. (1990), in which students were permitted to select traditional, reform, and mixed tracks. It became apparent that the students were quite aware of the choices they faced, as illustrated in the following quote:

The advantage of the traditional courses is that you learn—just math. It’s not applied. You get a lot of math. You may not know where to use it, but you learn a lot…. An advantage in SIMMS is that the kids in SIMMS tell me that they really understand the math. They understand where it comes from and where it is used.

This quote succinctly captures the tensions reported as experienced by students. It suggests that student perceptions are an important source of evidence in conducting evaluations. As we examined these curricular evaluations across the grades, we paid particular attention to the specificity of the outcome measures in relation to curricular objectives. Overall, a review of these studies would lead one to draw the following tentative summary conclusions:

There is some evidence of discontinuity in the articulation between high school and college, resulting from the organization and emphasis of the new curricula. This discontinuity can emerge in scores on college admission tests, placement tests, and first semester grades where nonreform students have shown some advantage on typical college achievement measures.

The most significant areas of disadvantage seem to be in students’ facility with algebraic manipulation, and with formalization, mathematical structure, and proof when isolated from context and denied technological supports. There is some evidence of weakness in computation and numeration, perhaps due to reliance on calculators and varied policies regarding their use at colleges (Kahan, 1999; Huntley et al., 2000).

There is also consistent evidence that the new curricula present strengths in areas of solving applied problems, the use of technology, new areas of content development such as probability and statistics, and functions-based reasoning in the use of graphs, using data in tables, and producing equations to describe situations (Huntley et al., 2000; Hirsch and Schoen, 2002).

Despite early performance on standard outcome measures at the high school level showing equivalent or better performance by reform students (Austin et al., 1997; Merlino and Wolff, 2001), the common standardized outcome measures (Preliminary Scholastic Assessment Test [PSAT] scores or national tests) are too imprecise to support more specific comparisons between the NSF-supported and comparison approaches, while program-generated measures lack evidence of external validity and objectivity. There is an urgent need for a set of measures that would provide detailed information on specific concepts and conceptual development over time; such measures may need to be used as embedded as well as summative assessment tools to provide precise enough data on curricular effectiveness.

The data also report some progress in strengthening the performance of underrepresented groups in mathematics relative to their counterparts in the comparative programs (Schoen et al., 1998; Hirsch and Schoen, 2002).

This reported pattern of results should be viewed as very tentative, as there are only a few studies in each of these areas, and most do not adequately control for competing factors, such as the nature of the course received in college. Difficulties in the transition may also be the result of a lack of alignment of measures, especially as placement exams often emphasize algebraic proficiencies. These results are presented only for the purpose of stimulating further evaluation efforts. They further emphasize the need to be certain that such designs examine the level of mathematical reasoning of students, particularly their knowledge and understanding of the role of proofs and definitions and their facility with algebraic manipulation, as well as carefully document the competencies taught in the curricular materials. In our framework, gauging the ease of transition to college study is an issue of examining curricular alignment with systemic factors, and it needs to be considered along with tests that demonstrate the curricular validity of measures. Furthermore, the results raising concerns about college success need replication before secure conclusions are drawn.

Also, it is important that subsequent evaluations examine curricular effects on students' interest in mathematics and willingness to persist in its study. Walker (1999) reported that there may be some systematic differences in these behaviors among different curricula and that interest and persistence may help students across a variety of subgroups to survive entry-level hurdles, especially if technical facility with symbol manipulation can be improved. In the context of declines in advanced study in mathematics by American students (Hawkins, 2003), evaluations of curricular impact on students' interest, beliefs, persistence, and success are needed.

The committee takes the position that ultimately the question of the impact of different curricula on performance at the collegiate level should be resolved by whether students are adequately prepared to pursue careers in mathematical sciences, broadly defined, and to reason quantitatively about societal and technological issues. It would be a mistake to focus evaluation efforts solely or primarily on performance on entry-level courses, which can clearly function as filters and may overly emphasize procedural competence, but do not necessarily represent what concepts and skills lead to excellence and success in the field.

These tentative patterns of findings indicate that at the high school level, it is necessary to conduct individual evaluations that examine the transition to college carefully in order to gauge the level of success in preparing students for college entry and the successful negotiation of majors. Equally, it is imperative to examine the impact of high school curricula on other possible student trajectories, such as obtaining high school diplomas, moving into worlds of work or through transitional programs leading to technical training, two-year colleges, and so on.

These two analyses of programs by grade-level band, content strand, and equity represent a methodological innovation that could strengthen the empirical database on curricula significantly and provide the level of detail needed by curriculum designers to improve their programs. In addition, it appears that one could characterize the NSF programs (though not the commercial programs as a group) as representing a particular approach to curriculum, as discussed in Chapter 3. It is an approach that integrates content strands; relies heavily on the use of situations, applications, and modeling; encourages the use of technology; and includes a significant dose of mathematical inquiry. One could ask whether this approach as a whole is "effective." That question is beyond the charge and scope of this report, but it is a worthy target of investigation if one uses proper care in design, execution, and analysis. Likewise, other approaches to curricular change should be investigated at the aggregate level, using careful and rigorous design.

The committee believes that a diversity of curricular approaches is a strength in an educational system that maintains local and state control of curricular decision making. While "scientifically established as effective" should be an increasingly important consideration in curricular choice, local cultural differences, needs, values, and goals will also properly influence curricular choice. A diverse set of effective curricula would be ideal. Finally, the committee emphasizes once again the importance of basing the studies on measures with established curricular validity and avoiding corruption of indicators as a result of inappropriate amounts of teaching to the test, so as to be certain that the outcomes are the product of genuine student learning.

CONCLUSIONS FROM THE COMPARATIVE STUDIES

In summary, the committee reviewed a total of 95 comparative studies. There were more evaluations of NSF-supported programs than of commercial ones, and the commercial evaluations were primarily on Saxon or UCSMP materials. Of the 19 curricular programs reviewed, 23 percent of the NSF-supported materials and 33 percent of the commercially generated materials selected had no comparative reviews. This finding is particularly disturbing in light of the legislative mandate in No Child Left Behind (U.S. Department of Education, 2001) for scientifically based curricular programs and materials to be used in the schools. It suggests that more explicit protocols for the conduct of evaluations of programs that include comparative studies need to be required and utilized.

Sixty-nine percent of NSF-supported and 61 percent of commercially generated program evaluations met basic conditions to be classified as at least minimally methodologically adequate studies for the evaluation of effectiveness. These studies were ones that met the criteria of including measures of student outcomes on mathematical achievement, reporting a method of establishing comparability among samples and reporting on implementation elements, disaggregating by content strand, or using precise, theoretical analyses of the construct or multiple measures.

Most of these studies had both strengths and weaknesses in their quasi-experimental designs. The committee reviewed the studies and found that evaluators had developed a number of features that merit inclusion in future work. At the same time, many had internal threats to validity that suggest a need for clearer guidelines for the conduct of comparative evaluations.

Many of the strengths and innovations came from the evaluators’ understanding of the program theories behind the curricula, their knowledge of the complexity of practice, and their commitment to measuring valid and significant mathematical ideas. Many of the weaknesses came from inadequate attention to experimental design, insufficient evidence of the independence of evaluators in some studies, and instability and lack of cooperation in interfacing with the conditions of everyday practice.

The committee identified 10 elements of comparative studies needed to establish a basis for determining the effectiveness of a curriculum. We recognize that not all studies will be able to implement all elements successfully and that experimental design variations will be based largely on study size and location. The list begins with the seven elements corresponding to the seven critical decisions and adds three additional elements that emerged as a result of our review:

1. A better balance needs to be achieved between experimental and quasi-experimental studies. The virtual absence of large-scale experimental studies does not provide a way to determine whether the use of quasi-experimental approaches is being systematically biased in unseen ways.

2. If a quasi-experimental design is selected, it is necessary to establish comparability. When quasi-experimentation is used, it "pertains to studies in which the model to describe effects of secondary variables is not known but assumed" (NRC, 1992, p. 18). This will lead to weaker and potentially suspect causal claims, which should be acknowledged in the evaluation report, but may be necessary in relation to feasibility (Joint Committee on Standards for Educational Evaluation, 1994). In general, to date, studies have assumed that prior achievement measures, ethnicity, gender, and SES are acceptable variables on which to match samples or on which to make statistical adjustments. But other variables often need such control in these evaluations as well, including opportunity to learn, teacher effectiveness, and implementation (see element 4 below).

3. The selection of a unit of analysis is of critical importance to the design. To the extent possible, it is useful to randomly assign the unit for the different curricula. The number of units of analysis necessary for the study to establish statistical significance depends not on the number of students, but on this unit of analysis. Classrooms and schools appear to be the most likely units of analysis. In addition, increasingly sophisticated means of conducting studies are needed that recognize that the level of the educational system at which experimentation occurs affects research designs.

4. It is essential to examine the implementation components through a set of variables that include the extent to which the materials are implemented, teaching methods, the use of supplemental materials, professional development resources, teacher background variables, and teacher effects. Gathering these data to gauge the level of implementation fidelity is essential for evaluators to ensure adequate implementation. Studies could also include nested designs to support analysis of variation by implementation components.

5. Outcome data should include a variety of measures of the highest quality. These measures should vary by question type (open ended, multiple choice), by type of test (international, national, local), and by relation of testing to everyday practice (formative, summative, high stakes), and they should ensure curricular validity of measures and assess curricular alignment with systemic factors. The use of comparisons among total tests, fair tests, and conservative tests, as done in the evaluations of UCSMP, permits one to gain insight into teacher effects and to contrast test results by items included. Tests should also include content strands to aid disaggregation, at the level of major content strands (see Figure 5-11) and of content-specific items relevant to the experimental curricula.

6. Statistical analysis should be conducted on the appropriate unit of analysis and should include more sophisticated methods of analysis such as ANOVA, ANCOVA, MANCOVA, linear regression, and multiple regression analysis as appropriate.

7. Reports should include clear statements of the limitations to generalization of the study. These should include indications of limitations in populations sampled, sample size, unique population inclusions or exclusions, and levels of use or attrition. Data should also be disaggregated by gender, race/ethnicity, SES, and performance levels to permit readers to see comparative gains across subgroups both between and within studies.

8. It is useful to report effect sizes. It is also useful to present item-level data across treatment programs and to show when performances of the two groups are within the 10 percent confidence interval of each other. These two extremes document how crucial it is for curriculum developers to garner both precise and generalizable information to inform their revisions (see the sketch following this list).

9. Careful attention should also be given to the selection of samples of populations for participation. These samples should be representative of the populations to whom one wants to generalize the results. Studies should be clear about whether they are generalizing to groups who have already selected the materials (prior users) or to populations who might be interested in using the materials (demographically representative).

10. The control group should use an identified comparative curriculum or curricula to avoid comparisons to unstructured instruction.
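Several elements above lend themselves to a concrete illustration: the unit of analysis (element 3), the statistical comparison (element 6), and effect sizes (element 8). The following minimal Python sketch uses invented classroom records, not data from any reviewed evaluation; it aggregates student scores to classroom means (the assumed unit of analysis) and computes Cohen's d on those means.

```python
import math
import pandas as pd

# Invented student-level records; classrooms are the unit of analysis.
students = pd.DataFrame({
    "classroom": ["c1", "c1", "c2", "c2", "c3", "c3", "c4", "c4"],
    "condition": ["treatment"] * 4 + ["control"] * 4,
    "score":     [55, 61, 58, 64, 52, 50, 57, 49],
})

# Aggregate to the unit of analysis before comparing groups.
rooms = students.groupby(["classroom", "condition"], as_index=False)["score"].mean()
treat = rooms.loc[rooms["condition"] == "treatment", "score"]
ctrl = rooms.loc[rooms["condition"] == "control", "score"]

# Cohen's d on classroom means: mean difference over the pooled SD.
pooled_sd = math.sqrt(
    ((len(treat) - 1) * treat.var() + (len(ctrl) - 1) * ctrl.var())
    / (len(treat) + len(ctrl) - 2)
)
print(f"effect size d = {(treat.mean() - ctrl.mean()) / pooled_sd:.2f}")
```

With real data, the number of classrooms rather than the number of students would drive statistical power, which is the point of element 3.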

In addition to these prototypical decisions to be made in the conduct of comparative studies, the committee suggests that it would be ideal for future studies to consider some of the overall effects of these curricula and to test more directly and rigorously some of the findings and alternative hypotheses. Toward this end, the committee reported the tentative findings of these studies by program type. Although these results are subject to revision, based on the potential weaknesses in design of many of the studies summarized, the form of analysis demonstrated in this chapter provides clear guidance about the kinds of knowledge claims and the level of detail that we need to be able to judge effectiveness. Until we are able to achieve an array of comparative studies that provide valid and reliable information on these issues, we will be vulnerable to decision making based excessively on opinion, limited experience, and preconceptions.

This book reviews the evaluation research literature that has accumulated around 19 K-12 mathematics curricula and breaks new ground in framing an ambitious and rigorous approach to curriculum evaluation that has relevance beyond mathematics. The committee that produced this book consisted of mathematicians, mathematics educators, and methodologists who began with the following charge:

  • Evaluate the quality of the evaluations of the thirteen National Science Foundation (NSF)-supported and six commercially generated mathematics curriculum materials;
  • Determine whether the available data are sufficient for evaluating the efficacy of these materials, and, if not,
  • Develop recommendations about the design of a project that could result in the generation of more reliable and valid data for evaluating such materials.

The committee collected, reviewed, and classified almost 700 studies, solicited expert testimony during two workshops, developed an evaluation framework, established dimensions/criteria for three methodologies (content analyses, comparative studies, and case studies), drew conclusions on the corpus of studies, and made recommendations for future research.


Chapter 8: Clarifying Quantitative Research Designs

Chapter Overview

  • Identifying Designs Used in Nursing Studies
  • Descriptive Designs (Typical Descriptive Design; Comparative Descriptive Design)
  • Correlational Designs (Descriptive Correlational Design; Predictive Correlational Design; Model Testing Design)
  • Understanding Concepts Important to Causality in Designs (Multicausality; Probability; Bias; Control; Manipulation)
  • Examining the Validity of Studies (Statistical Conclusion Validity; Internal Validity; Construct Validity; External Validity)
  • Elements of Designs Examining Causality
  • Examining Interventions in Nursing Studies
  • Experimental and Control or Comparison Groups
  • Quasi-Experimental Designs (Pretest and Post-test Designs with Comparison Group)
  • Experimental Designs (Classic Experimental Pretest and Post-test Designs with Experimental and Control Groups; Post-test–Only with Control Group Design)
  • Randomized Controlled Trials
  • Introduction to Mixed-Methods Approaches
  • Key Concepts
  • References

Learning Outcomes

After completing this chapter, you should be able to:

1. Identify the nonexperimental designs (descriptive and correlational) and experimental designs (quasi-experimental and experimental) commonly used in nursing studies.
2. Critically appraise descriptive and correlational designs in published studies.
3. Describe the concepts important to examining causality: multicausality, probability, bias, control, and manipulation.
4. Examine study designs for strengths and threats to statistical conclusion, internal, construct, and external validity.
5. Describe the elements of designs that examine causality.
6. Critically appraise the interventions implemented in studies.
7. Critically appraise the quasi-experimental and experimental designs in published studies.
8. Examine the quality of randomized controlled trials (RCTs) conducted in nursing.
9. Discuss the implementation of mixed-methods approaches in nursing studies.

Key Terms

Bias; blinding; causality; comparative descriptive design; construct validity; control; control or comparison group; correlational design; cross-sectional design; descriptive correlational design; descriptive design; design validity; experimental designs; experimental or treatment group; experimenter expectancy; external validity; internal validity; intervention; intervention fidelity; longitudinal design; low statistical power; manipulation; mixed-methods approaches; model testing design; multicausality; nonexperimental designs; predictive correlational design; probability; quasi-experimental design; randomized controlled trial (RCT); research design; statistical conclusion validity; study validity; threats to validity; triangulation; typical descriptive design.

A research design is a blueprint for conducting a study. Over the years, several quantitative designs have been developed for conducting descriptive, correlational, quasi-experimental, and experimental studies. Descriptive and correlational designs are focused on describing and examining relationships of variables in natural settings. Quasi-experimental and experimental designs were developed to examine causality, or the cause and effect relationships between interventions and outcomes.
The designs focused on causality were developed to maximize control over factors that could interfere with or threaten the validity of the study design. The strengths of the design validity increase the probability that the study findings are an accurate reflection of reality. Well-designed studies, especially those focused on testing the effects of nursing interventions, are essential for generating sound research evidence for practice (Brown, 2014; Craig & Smyth, 2012). Being able to identify the study design and evaluate design flaws that might threaten the validity of the findings is an important part of critically appraising studies. Therefore this chapter introduces you to the different types of quantitative study designs and provides an algorithm for determining whether a study design is descriptive, correlational, quasi-experimental, or experimental. Algorithms are also provided so that you can identify specific types of designs in published studies. A background is provided for understanding causality in research by defining the concepts of multicausality, probability, bias, control, and manipulation. The different types of validity—statistical conclusion validity, internal validity, construct validity, and external validity—are described. Guidelines are provided for critically appraising descriptive, correlational, quasi-experimental, and experimental designs in published studies. In addition, a flow diagram is provided to examine the quality of randomized controlled trials conducted in nursing. The chapter concludes with an introduction to mixed-methods approaches, which include elements of quantitative designs and qualitative procedures in a study.

Identifying Designs Used in Nursing Studies

A variety of study designs are used in nursing research; the four most commonly used types are descriptive, correlational, quasi-experimental, and experimental. These designs are categorized in different ways in textbooks (Fawcett & Garity, 2009; Hoe & Hoare, 2012; Kerlinger & Lee, 2000). Sometimes, descriptive and correlational designs are referred to as nonexperimental designs because the focus is on examining variables as they naturally occur in environments and not on the implementation of a treatment by the researcher. Some of these nonexperimental designs include a time element. Designs with a cross-sectional element involve data collection at one point in time. A cross-sectional design involves examining a group of subjects simultaneously in various stages of development, levels of education, severity of illness, or stages of recovery to describe changes in a phenomenon across stages. The assumption is that the stages are part of a process that will progress over time. Selecting subjects at various points in the process provides important information about the totality of the process, even though the same subjects are not monitored throughout the entire process (Grove, Burns, & Gray, 2013). A longitudinal design involves collecting data from the same subjects at different points in time and might also be referred to as repeated measures. Repeated measures might be included in descriptive, correlational, quasi-experimental, or experimental study designs. Quasi-experimental and experimental studies are designed to examine causality, or the cause and effect relationship between a researcher-implemented treatment and a selected study outcome.
The designs for these studies are sometimes referred to as experimental because the focus is on examining the differences in dependent variables thought to be caused by independent variables or treatments. For example, the researcher-implemented treatment might be a home monitoring program for patients initially diagnosed with hypertension, and the dependent or outcome variable could be blood pressure measured at 1 week, 1 month, and 6 months. This chapter introduces you to selected experimental designs and provides examples of these designs from published nursing studies. Details on other study designs can be found in a variety of methodology sources (Campbell & Stanley, 1963; Creswell, 2014; Grove et al., 2013; Kerlinger & Lee, 2000; Shadish, Cook, & Campbell, 2002).

The algorithm shown in Figure 8-1 may be used to determine the type of design (descriptive, correlational, quasi-experimental, or experimental) used in a published study. This algorithm includes a series of yes or no responses to specific questions about the design. The algorithm starts with the question, "Is there a treatment?" The answer leads to the next question, with the four types of designs being identified in the algorithm. Sometimes, researchers combine elements of different designs to accomplish their study purpose. For example, researchers might conduct a cross-sectional, descriptive, correlational study to examine the relationship of body mass index (BMI) to blood lipid levels in early adolescence (ages 13 to 16 years) and late adolescence (ages 17 to 19 years). It is important that researchers clearly identify the specific design they are using in their research report.

Fig 8-1 Algorithm for determining the type of study design.

Descriptive Designs

Descriptive studies are designed to gain more information about characteristics in a particular field of study. The purpose of these studies is to provide a picture of a situation as it naturally happens. A descriptive design may be used to develop theories, identify problems with current practice, make judgments about practice, or identify trends of illnesses, illness prevention, and health promotion in selected groups. No manipulation of variables is involved in a descriptive design. Protection against bias in a descriptive design is achieved through (1) conceptual and operational definitions of variables, (2) sample selection and size, (3) valid and reliable instruments, and (4) data collection procedures that might partially control the environment. Descriptive studies differ in level of complexity. Some contain only two variables; others may include multiple variables that are studied over time. You can use the algorithm shown in Figure 8-2 to determine the type of descriptive design used in a published study. Typical descriptive and comparative descriptive designs are discussed in this chapter. Grove and colleagues (2013) have provided details about additional descriptive designs.

Fig 8-2 Algorithm for determining the type of descriptive design.

Typical Descriptive Design

A typical descriptive design is used to examine variables in a single sample (Figure 8-3). This descriptive design includes identifying the variables within a phenomenon of interest, measuring these variables, and describing them. The description of the variables leads to an interpretation of the theoretical meaning of the findings and the development of possible relationships or hypotheses that might guide future correlational or quasi-experimental studies.
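As a footnote to Figure 8-1, the decision chain it describes can be rendered as a short function. This is a minimal sketch: the figure itself is not reproduced in the text, so the branch questions after "Is there a treatment?" are assumptions based on the chapter's definitions of the four design types.

```python
def classify_design(has_treatment: bool,
                    random_assignment: bool = False,
                    examines_relationships: bool = False) -> str:
    """Classify a quantitative study design, loosely following Figure 8-1.

    The branches after "Is there a treatment?" are assumed from the
    chapter's definitions, not taken directly from the figure.
    """
    if has_treatment:
        # Experimental designs add random assignment and greater control.
        return "experimental" if random_assignment else "quasi-experimental"
    # Nonexperimental designs differ in whether relationships are examined.
    return "correlational" if examines_relationships else "descriptive"

print(classify_design(has_treatment=False, examines_relationships=True))
# -> correlational
```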
Fig 8-3 Typical descriptive design.

Critical Appraisal Guidelines: Descriptive and Correlational Designs

When critically appraising the designs of descriptive and correlational studies, you need to address the following questions:

1. Is the study design descriptive or correlational? Review the algorithm in Figure 8-1 to determine the type of study design.
2. If the study design is descriptive, use the algorithm in Figure 8-2 to identify the specific type of descriptive design implemented in the study.
3. If the study design is correlational, use the algorithm in Figure 8-5 to identify the specific type of correlational design implemented in the study.
4. Does the study design address the study purpose and/or objectives or questions?
5. Was the sample appropriate for the study?
6. Were the study variables measured with quality measurement methods?

Research Example: Typical Descriptive Design

Research Study Excerpt

Maloni, Przeworski, and Damato (2013) studied women with postpartum depression (PPD) after pregnancy complications for the purpose of describing their barriers to treatment for PPD, use of online resources for assistance with PPD, and preference for Internet treatment for PPD. This study included a typical descriptive design; key aspects of this study's design are presented in the following excerpt.

"Methods: An exploratory descriptive survey design was used to obtain a convenience sample of women who self-report feelings of PPD across the past week [sample size n = 53]. Inclusion criteria were women between 2 weeks and 6 months postpartum who had been hospitalized for pregnancy complications. Women were excluded if they had a score of < 6 on the Edinburgh Postnatal Depression Scale (EPDS)…. EPDS is a widely used screening instrument to detect postpartum depression…. In addition, a series of 26 descriptive questions assessed women's barriers to PPD treatment, whether they sought information about depression after birth from any sources and their information seeking about PPD from the Internet, how often they sought the information, and whether the information was helpful. Questions were developed from review of the literature.… Content validity was established by a panel of four experts.… The survey was posted using a university-protected website using standardized software for surveys." (Maloni et al., 2013, pp. 91-92)

Critical Appraisal

Maloni and associates (2013) clearly identified their study design as descriptive and indicated that the data were collected using an online survey. This type of design was appropriate to address the study purpose. The sample selection was strengthened by using the EPDS to identify women with PPD and using the sample criteria to ensure that the women had been hospitalized for pregnancy complications. However, the sample size of 53 was small for a descriptive study. The 26-item questionnaire had content validity and was consistently implemented online using standard survey software. This typical descriptive design was implemented in a way to provide quality study findings.

Implications for Practice

Maloni and co-workers (2013) noted that of the 53 women who were surveyed because they reported PPD, 70% had major depression. The common barriers that prevented them from getting treatment included time and the stigma of a PPD diagnosis. Over 90% of the women did use the Internet as a resource to learn about coping with PPD and expressed an interest in a web-based PPD treatment.
Comparative Descriptive Design

A comparative descriptive design is used to describe variables and examine differences in variables in two or more groups that occur naturally in a setting. A comparative descriptive design compares descriptive data obtained from different groups, which might have been formed using gender, age, educational level, medical diagnosis, or severity of illness. Figure 8-4 provides a diagram of this design's structure.

Fig 8-4 Comparative descriptive design.

Research Example: Comparative Descriptive Design

Research Study Excerpt

Buet and colleagues (2013) conducted a comparative descriptive study to describe and determine differences in the hand hygiene (HH) opportunities and adherence of clinical (e.g., nurses and physicians) and nonclinical (e.g., teachers and parents) caregivers for patients in pediatric extended-care facilities (ECFs). The following study excerpt includes key elements of this comparative descriptive design:

"Eight children across four pediatric ECFs were observed for a cumulative 128 hours, and all caregiver HH opportunities were characterized by the World Health Organization [WHO] '5 Moments for HH.'… A convenience sample of two children from each site (n = 8) was observed.… Four observers participated in two hours of didactic training and two hours of monitored practice observations at one of the four study sites to ensure consistent documentation and interpretation of observations. Observers learned how to accurately record HH opportunities and HH adherence using the WHO '5 Moments of HH' data acquisition tool, discussed below. Throughout the study, regular debriefings were also held to review and discuss data recording.… The World Health Organization (WHO, 2009) '5 Moments for HH' define points of contact when healthcare workers should perform HH: 'before touching a patient, before clean/aseptic procedures, after body fluid exposure/risk, after touching a patient, and after touching patient surroundings.'… During approximately 128 hours of observation, 865 HH opportunities were observed." (Buet et al., 2013, pp. 72-73)

Critical Appraisal

Buet and associates (2013) clearly described the aspects of their study design but did not identify the specific type of design used in their study. The design was comparative descriptive because the HH opportunities and adherence for clinical and nonclinical caregivers were described and compared. The study included 128 hours of observation (16 hours per child) of 865 HH opportunities in four different ECF settings. Thus the sampling process was strong and seemed focused on accomplishing the study purpose. The data collectors were well trained and monitored to ensure consistent observation and recording of data. HH was measured using an observational tool based on international standards (WHO, 2009) for HH.

Implications for Practice

Buet and co-workers (2013) found that the HH adherence of the clinical caregivers was significantly higher than that of the nonclinical caregivers. However, the overall HH adherence for the clinical caregivers was only 43%. The low HH adherence suggested increased potential for transmission of infections among children in ECFs. Additional HH education is needed for clinical and nonclinical caregivers of these children to prevent future adverse events.
Quality and Safety Education for Nurses (QSEN, 2013) implications from this study encourage nurses to follow evidence-based practice (EBP) guidelines in adhering to HH measures to ensure safe care of their patients and reduce their risk of potentially life-threatening infections (Sherwood & Barnsteiner, 2012).

Correlational Designs

The purpose of a correlational design is to examine relationships between or among two or more variables in a single group in a study. This examination can occur at any of several levels: descriptive correlational, in which the researcher seeks to describe a relationship; predictive correlational, in which the researcher predicts relationships among variables; or model testing, in which all the relationships proposed by a theory are tested simultaneously. In correlational designs, a large range in the variable scores is necessary to determine the existence of a relationship. Therefore the sample should reflect the full range of scores possible on the variables being measured. Some subjects should have very high scores and others very low scores, and the scores of the rest should be distributed throughout the possible range. Because of the need for wide variation in scores, correlational studies generally require large sample sizes. Subjects are not divided into groups, because group differences are not examined. To determine the type of correlational design used in a published study, use the algorithm shown in Figure 8-5. More details on the correlational designs referred to in this algorithm are available from other sources (Grove et al., 2013; Kerlinger & Lee, 2000).

Fig 8-5 Algorithm for determining the type of correlational design.

Descriptive Correlational Design

The purpose of a descriptive correlational design is to describe variables and examine relationships among these variables. Using this design facilitates the identification of many interrelationships in a situation (Figure 8-6). The study may examine variables in a situation that has already occurred or is currently occurring. Researchers make no attempt to control or manipulate the situation. As with descriptive studies, variables must be clearly identified and defined conceptually and operationally (see Chapter 5).

Fig 8-6 Descriptive correlational design.

Research Example: Descriptive Correlational Design

Research Study Excerpt

Burns, Murrock, and Graor (2012) conducted a correlational study to examine the relationship between BMI and injury severity in adolescent males attending a National Boy Scout Jamboree. The key elements of this descriptive correlational design are presented in the following study excerpt.

"Design: This study used a descriptive, correlational design to examine the relationship between obesity and injury severity.… The convenience sample consisted of the 611 adolescent males, aged 11-17 years, who received medical attention for an injury at one of eight participating medical facilities. Exclusion criteria were adolescent males presenting with medical complaints unrelated to an injury (e.g., sore throat, dehydration, insect bite) and those who were classified as 'special needs' participants because of the disability affecting their mobility or requiring the use of an assistive device.… There were 20 medical facilities located throughout the 2010 National Boy Scout Jamboree. Each facility was equipped to manage both medical complaints and injuries.…" (Burns et al., 2012, pp. 509–510)
"Measures. Past medical history, weight (in pounds) and height (in inches) were obtained from the HMR [health and medical record]. BMI [body mass index] and gender-specific BMI percentage were calculated electronically using online calculators from the Centers for Disease Control and Prevention [CDC] and the height and weight data. The BMI value was plotted on the CDC's gender-specific BMI-for-age growth chart to obtain a percentile ranking (BMI-P).… BMI-P defines four weight status categories: less than 5% is considered underweight, 5% to less than 85% is categorized healthy weight, 85% to less than 95% is the overweight category, and 95% or greater is categorized as obese. Age was measured in years and was self-reported. Severity of injury was measured using the ESI [Emergency Severity Index] Version 4. This five-level triage rating scale was developed by the Agency for Healthcare Research and Quality and provides rapid, reproducible, clinically relevant stratification of patients into levels based on acuity and resource needs.… Training sessions were held for each medical facility to educate staff on the project, process, data collection techniques, and injury severity scoring methods.… All BMI and BMI-P values were recalculated to verify accuracy. To assess interrater reliability for injury severity scoring, ESI scores reported were compared with the primary researcher's scores. When discrepancies were found, the primary researcher reviewed the treatment record to determine the most accurate score." (Burns et al., 2012, p. 510)

Critical Appraisal

Burns and colleagues (2012) clearly identified their study design in their research report. The sampling method was a nonrandom sample of convenience, which is commonly used in descriptive and correlational studies. Nonrandom sampling methods decrease the sample's representativeness of the population; however, the sample size was large and included 20 medical facilities at a national event. The exclusion sampling criteria ensured that the subjects selected were the most appropriate to address the study purpose. The adolescents' height and weight were obtained from their medical records, but the researchers did not indicate whether these values were self-reported or measured by healthcare professionals. Self-reported height and weight could decrease the accuracy of the BMI and BMI-P calculated in a study. The BMI-P and injury severity scores were obtained using reliable and valid measurement methods, and the data from the medical facilities were checked for accuracy. The design of this study seemed strong, and the knowledge generated provides a basis for future research.

Implications for Practice

Burns and associates (2012) found a significant relationship between BMI-P and injury severity. They noted that overweight/obese adolescents may have increased risks of serious injuries. Additional research is needed to examine the relationship of BMI to injury risk and to identify ways to prevent injuries in these adolescents. The findings from this study also emphasize the importance of healthy weight in adolescents to prevent health problems. QSEN (2013) implications are that evidence-based knowledge about the relationship between obesity and severity of injury provides nurses and students with information for educating adolescents to promote their health.
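As an illustration of the kind of analysis a descriptive correlational design leads to, here is a minimal sketch computing a Pearson correlation between two variables measured in a single group. The data are simulated (not the Burns et al. values); only the sample size mirrors the study. Note how the simulated predictor spans a wide range of scores, as the design requires:

```python
# Minimal sketch of a descriptive correlational analysis: one group,
# two variables, Pearson's r. Data are simulated, not from Burns et al. (2012).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 611  # sample size mirroring the study; the values themselves are made up

bmi_percentile = rng.uniform(1, 99, n)  # wide range of scores, as the design requires
noise = rng.normal(0, 1.2, n)
injury_severity = 3 - 0.02 * bmi_percentile + noise  # ESI: lower score = higher acuity

r, p = pearsonr(bmi_percentile, injury_severity)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```

If the sample clustered in a narrow band of BMI percentiles, the restricted range would shrink r toward zero, which is exactly why correlational studies need the full spread of scores.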
Predictive Correlational Design

The purpose of a predictive correlational design is to predict the value of one variable based on the values obtained for another variable or variables. Prediction is one approach to examining causal relationships between variables. Because causal phenomena are being examined, the terms dependent and independent are used to describe the variables. The variable to be predicted is classified as the dependent variable, and all other variables are independent or predictor variables. A predictive correlational design study attempts to predict the level of a dependent variable from the measured values of the independent variables. For example, the dependent variable of medication adherence could be predicted using the independent variables of age, number of medications, and medication knowledge of patients with congestive heart failure. The independent variables that are most effective in prediction are highly correlated with the dependent variable but are not highly correlated with the other independent variables used in the study. The predictive correlational design structure is presented in Figure 8-7.

Predictive correlational designs require the development of a theory-based mathematical hypothesis proposing the variables expected to predict the dependent variable effectively. Researchers then use regression analysis to test the hypothesis (see Chapter 11).

Fig 8-7 Predictive correlational design.

Research Example: Predictive Correlational Design

Research Study Excerpt

Coyle (2012) used a predictive correlational design to determine whether depressive symptoms were predictive of self-care behaviors in adults who had suffered a myocardial infarction (MI). The following study excerpt presents key elements of this design.

"Design, Setting, and Sample. A descriptive correlational design examined the relationship between the independent variable of depressive symptoms [agitation and loss of energy] and the dependent variable of self-care. Data were collected from 62 patients in one hospital, who were recovering from an MI, in the metropolitan Washington area.…" (Coyle, 2012, p. 128)

Measures

"Beck Depression Inventory II. Depressive symptoms were measured using the BDI-II [Beck Depression Inventory II], a well-validated, 21-item scale designed to measure self-reported depressive symptomatology.… Internal-consistency estimates (coefficient alpha) of the total scores were .92 for psychiatric outpatients and .93 for college students. Construct validity was .93 (p < .001) when correlated with the BDI-I. In this study, the BDI-II Cronbach's alpha was .68 at baseline." (Coyle, 2012, p. 128)

"Health Behavior Scale. Self-care behaviors after an MI were measured by the Health Behavior Scale (HBS), developed specifically for measuring the extent to which persons with cardiac disease perform prescribed self-care behaviors.… This self-report, 20-item instrument assesses the degree to which patients perform five types of prescribed self-care (following diet, limiting smoking, performing activities, taking medications, and changing responses to stressful situations).… Cronbach's alphas for different self-care behaviors ranged from .82 to .95.
In this study, reliability was measured by Cronbach's alpha and was .62 at 2 weeks and .71 at 30 days.… Prior to hospital discharge, the Medical and Demographic Characteristics Questionnaire and BDI-II were administered by the researcher.… At 2 weeks and at 30 days after hospital discharge, participants were contacted by telephone to determine responses to the HBS." (Coyle, 2012, pp. 128-129)

Critical Appraisal

Coyle (2012) might have identified her study design more clearly as predictive correlational, but she did clearly identify the dependent variable as self-care and the independent variables as depressive symptoms. The design also included longitudinal measurement of self-care with the HBS at 2 weeks and 30 days. The design was appropriate to accomplish the study purpose. The sample of 62 subjects was adequate because the study findings indicated significant results. The BDI-II has documented reliability (Cronbach's alphas > 0.7) and validity from previous studies, but its reliability in this study (.68) was low. Reliability indicates how consistently the scale measured depression; in this study it had 68% consistency and 32% error (1.00 − 0.68 = 0.32, i.e., 32%; see Chapter 10). The HBS had strong reliability in previous studies, but the validity of the scale was not addressed. The reliability of the HBS was limited at 2 weeks (62% reliable, 38% error) but acceptable at 30 days (71% reliable, 29% error). This study had a strong design with more strengths than weaknesses, and the findings are probably an accurate reflection of reality. The study needs to be replicated with stronger measurement methods and a larger sample.

Implications for Practice

Coyle (2012) found that the depressive symptoms of agitation and loss of energy were significantly predictive of self-care performance in patients with an MI at 30 days post-hospital discharge. Coyle recommended screening post-MI patients for depressive symptoms so that their symptoms might be managed before they were discharged. Further research is recommended to examine depression and self-care behaviors after hospital discharge to identify and treat potential problems.
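The regression step described above can be sketched in a few lines. This example uses the chapter's own illustration (predicting medication adherence from age, number of medications, and medication knowledge) with simulated data; the coefficients and the "true" relationship are invented for illustration only:

```python
# Minimal sketch of a predictive correlational analysis: predict a dependent
# variable from several independent (predictor) variables via regression.
# All data are simulated for illustration, not drawn from any study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 150
age = rng.uniform(40, 85, n)
n_meds = rng.integers(1, 12, n)
knowledge = rng.uniform(0, 100, n)  # medication-knowledge score

# hypothetical "true" relationship plus noise
adherence = 40 + 0.45 * knowledge - 1.5 * n_meds + 0.05 * age + rng.normal(0, 8, n)

X = sm.add_constant(np.column_stack([age, n_meds, knowledge]))
fit = sm.OLS(adherence, X).fit()
print(fit.params)    # intercept and regression coefficients
print(fit.pvalues)   # tests of each predictor's contribution
print(f"R^2 = {fit.rsquared:.2f}")
```

The p-values on the coefficients are what "testing the theory-based hypothesis" amounts to in practice: each one asks whether that predictor contributes to the dependent variable beyond the others.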
Model Testing Design

Some studies are designed specifically to test the accuracy of a hypothesized causal model (see Chapter 7 for content on middle range theory). The model testing design requires that all concepts relevant to the model be measured and the relationships among these concepts examined. A large, heterogeneous sample is required. Correlational analyses are conducted to determine the relationships among the model concepts, and the results are presented in the framework model for the study. This type of design is very complex; this text provides only an introduction, using a model testing design implemented by Battistelli, Portoghese, Galletta, and Pohl (2013).

Research Example: Model Testing Design

Research Study

Battistelli and co-workers (2013) developed and tested a theoretical model to examine the turnover intentions of nurses working in hospitals. The concepts of work-family conflict, job satisfaction, community embeddedness, and organizational affective commitment were identified as predictive of nurse turnover intention. The researchers collected data on these concepts from a sample of 440 nurses in a public hospital. The analysis of study data identified significant relationships (p < 0.05) among all concepts in the model. The results of this study are presented in Figure 8-8 and indicate the importance of these concepts in predicting nurse turnover intention.

Fig 8-8 Results of the structural equation modeling analysis of the hypothesized model of turnover intention on the cross-validation sample (n = 440, standardized path loadings, p < 0.05, two-tailed). (From Battistelli, A., Portoghese, I., Galletta, M., & Pohl, S. [2012]. Beyond the tradition: Test of an integrative conceptual model on nurse turnover. International Nursing Review, 60[1], p. 109.)

Understanding Concepts Important to Causality in Designs

Quasi-experimental and experimental designs were developed to examine causality, that is, the effect of an intervention on selected outcomes. Causality basically says that things have causes, and causes lead to effects. In a critical appraisal, you need to determine whether the purpose of the study is to examine causality, examine relationships among variables (correlational designs), or describe variables (descriptive designs). You may be able to determine whether the purpose of a study is to examine causality by reading the purpose statement and the propositions within the framework (see Chapter 7). For example, the purpose of a causal study may be to examine the effect of a specific preoperative early ambulation educational program on length of hospital stay. The proposition may state that preoperative teaching results in shorter hospitalizations. However, the preoperative early ambulation educational program is not the only factor affecting length of hospital stay. Other important factors include the diagnosis, type of surgery, patient's age, the patient's physical condition before surgery, and complications that occurred after surgery.

Researchers usually design quasi-experimental and experimental studies to examine causality, that is, the effect of an intervention (independent variable) on a selected outcome (dependent variable), using a design that controls extraneous variables. Critically appraising studies designed to examine causality requires an understanding of concepts such as multicausality, probability, bias, control, and manipulation.

Multicausality

Very few phenomena in nursing can be clearly linked to a single cause and a single effect. A number of interrelating variables can be involved in producing a particular effect. Therefore studies developed from a multicausal perspective will include more variables than those using a strict causal orientation. The presence of multiple causes for an effect is referred to as multicausality. For example, patient diagnosis, age, presurgical condition, and complications after surgery will all be involved in determining the length of hospital stay. Because of the complexity of causal relationships, a theory is unlikely to identify every element involved in causing a particular outcome. However, the greater the proportion of causal factors that can be identified and examined or controlled in a single study, the clearer the understanding will be of the overall phenomenon. This greater understanding is expected to increase the ability to predict and control the effects of study interventions.
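The first analytic step of a model testing design, examining the interrelationships among all model concepts at once, can be sketched as a correlation matrix. The data below are simulated with invented relationships among concepts loosely named after the Battistelli et al. model; a full structural equation model would require dedicated SEM software and is beyond this sketch:

```python
# Minimal sketch of the correlational step in a model testing design:
# examine interrelationships among several model concepts simultaneously.
# Data and effect sizes are simulated, NOT from Battistelli et al. (2013).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 440  # sample size mirroring the study; the values are made up
work_family_conflict = rng.normal(3.0, 0.8, n)
job_satisfaction = 4.2 - 0.4 * work_family_conflict + rng.normal(0, 0.5, n)
commitment = 0.6 * job_satisfaction + rng.normal(0, 0.4, n)
turnover_intention = (3.5 - 0.5 * commitment
                      + 0.3 * work_family_conflict + rng.normal(0, 0.5, n))

concepts = pd.DataFrame({
    "work_family_conflict": work_family_conflict,
    "job_satisfaction": job_satisfaction,
    "affective_commitment": commitment,
    "turnover_intention": turnover_intention,
})
print(concepts.corr().round(2))  # correlation matrix among model concepts
```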

Structure of comparative research questions

There are five steps required to construct a comparative research question: (1) choose your starting phrase; (2) identify and name the dependent variable; (3) identify the groups you are interested in; (4) identify the appropriate adjoining text; and (5) write out the comparative research question. Each of these steps is discussed in turn:

Choose your starting phrase

Identify and name the dependent variable

Identify the groups you are interested in

Identify the appropriate adjoining text

Write out the comparative research question

FIRST Choose your starting phrase

Comparative research questions typically start with one of two phrases: "What is the difference in…?" or "What are the differences in…?"

These starting phrases open each of the examples below:

What is the difference in the daily calorific intake of American men and women?

What is the difference in the weekly photo uploads on Facebook between British male and female university students?

What are the differences in perceptions towards Internet banking security between adolescents and pensioners?

What are the differences in attitudes towards music piracy when pirated music is freely distributed or purchased?

SECOND Identify and name the dependent variable

All comparative research questions have a dependent variable. You need to identify what this is. However, how the dependent variable is written out in a research question and what you call it are often two different things. In the four examples above, the dependent variables are daily calorific intake, weekly photo uploads on Facebook, perceptions towards Internet banking security, and attitudes towards music piracy.

Note that the name of the dependent variable (e.g., daily calorific intake) and the way it is written out in the question (e.g., "the daily calorific intake of American men and women") often differ.

THIRD Identify the groups you are interested in

All comparative research questions have at least two groups. You need to identify these groups. In the examples below, the groups appear at the end of each question.

What is the difference in the daily calorific intake of American men and women?

What is the difference in the weekly photo uploads on Facebook between British male and female university students?

What are the differences in perceptions towards Internet banking security between adolescents and pensioners?

What are the differences in attitudes towards music piracy when pirated music is freely distributed or purchased?

It is often easy to identify groups because they reflect different types of people (e.g., men and women, adolescents and pensioners), as highlighted by the first three examples. However, sometimes the two groups you are interested in reflect two different conditions, as highlighted by the final example. In this final example, the two conditions (i.e., groups) are pirated music that is freely distributed and pirated music that is purchased. So we are interested in how attitudes towards music piracy differ when pirated music is freely distributed as opposed to when it is purchased.

FOURTH Identify the appropriate adjoining text

Before you write out the groups you are interested in comparing, you typically need to include some adjoining text. This adjoining text usually consists of the words between or amongst, but other words may be more appropriate; in the examples above, of (American men and women) and when (pirated music is freely distributed or purchased) also play this role.

FIFTH Write out the comparative research question

Once you have these details, namely (1) the starting phrase, (2) the name of the dependent variable, (3) the names of the groups you are interested in comparing, and (4) any potential adjoining words, you can write out the comparative research question in full, as in the four example questions discussed above.
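As a lightweight illustration of these steps, the sketch below assembles a comparative research question from its parts. The helper function is hypothetical, written only for this illustration; it is not from the original guide:

```python
# Illustrative helper (hypothetical, not part of the original guide): build a
# comparative research question from the components discussed above.
def comparative_question(starting_phrase, dependent_variable, adjoining_text, groups):
    """Combine starting phrase + dependent variable + adjoining text + groups."""
    return f"{starting_phrase} {dependent_variable} {adjoining_text} {' and '.join(groups)}?"

print(comparative_question(
    "What is the difference in",
    "the daily calorific intake",
    "of",
    ["American men", "women"],
))
# -> What is the difference in the daily calorific intake of American men and women?
```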

In the section that follows, the structure of relationship-based research questions is discussed.

Structure of relationship-based research questions

There are six steps required to construct a relationship-based research question: (1) choose your starting phrase; (2) identify the independent variable(s); (3) identify the dependent variable(s); (4) identify the group(s); (5) identify the appropriate adjoining text; and (6) write out the relationship-based research question. Each of these steps is discussed in turn.

Identify the independent variable(s)

Identify the dependent variable(s)

Identify the group(s)

Write out the relationship-based research question

Relationship-based research questions typically start with one of two phrases, as the examples below show: "What is the relationship between…?" or "What is the relationship of… on…?"

What is the relationship between gender and attitudes towards music piracy amongst adolescents?

What is the relationship between study time and exam scores amongst university students?

What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers?

SECOND Name the independent variable(s)

All relationship-based research questions have at least one independent variable. You need to identify what this is. In the example that follows, the independent variables are career prospects, salary and benefits, and physical working conditions.

What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers?

When doing a dissertation at the undergraduate and master's level, it is likely that your research question will only have one or two independent variables, but this is not always the case.

THIRD Name the dependent variable(s)

All relationship-based research questions also have at least one dependent variable. You also need to identify what this is. At the undergraduate and master's level, it is likely that your research question will only have one dependent variable. In the examples above, the dependent variables are attitudes towards music piracy, exam scores, and job satisfaction.

FOURTH Name the group(s)

All relationship-based research questions have at least one group, but can have multiple groups. You need to identify this group or groups. In the examples below, the group(s) appear at the end of each question.

What is the relationship between gender and attitudes towards music piracy amongst adolescents?

What is the relationship between study time and exam scores amongst university students?

What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers?

FIFTH Identify the appropriate adjoining text

Before you write out the group(s) you are interested in, you typically need to include some adjoining text, usually the words between or amongst, as shown in the examples above.

SIXTH Write out the relationship-based research question

Once you have these details, namely (1) the starting phrase, (2) the independent variable(s), (3) the dependent variable(s), (4) the group(s) you are interested in, and (5) any potential adjoining words, you can write out the relationship-based research question in full, as in the example questions discussed above.
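The same kind of sketch works for the six-step relationship-based structure. Again, the helper is hypothetical and written only for this illustration:

```python
# Illustrative helper (hypothetical): assemble a relationship-based research
# question from the components discussed above.
def relationship_question(starting_phrase, independent_vars, dependent_variable,
                          adjoining_text, groups):
    ivs = " and ".join(independent_vars)
    return (f"{starting_phrase} {ivs} and {dependent_variable} "
            f"{adjoining_text} {' and '.join(groups)}?")

print(relationship_question(
    "What is the relationship between",
    ["study time"],
    "exam scores",
    "amongst",
    ["university students"],
))
# -> What is the relationship between study time and exam scores amongst university students?
```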

STEP FOUR Write out the problem or issues you are trying to address in the form of a complete research question

In the previous sections, we illustrated how to write out the three types of research question (i.e., descriptive, comparative, and relationship-based). Whilst these rules should help you when writing out your research question(s), the main thing to keep in mind is whether your research question(s) flow and are easy to read.

Comparative Research

Although not everyone would agree, comparing is not always bad; it can bring real benefits. There are times in life when we feel lost. You may not be getting the job you want or the physique you have been working towards. Then you cross paths with an old friend who landed the job you always wanted. This scenario can dent your self-esteem, knowing that this friend got what you want while you didn't. Or you can choose to see your friend as proof that your goal is attainable. Come up with a plan to achieve your personal development goal, and perhaps ask for tips from this person or from the people who inspire you. According to an article posted on brit.co, licensed master social worker and therapist Kimberly Hershenson said that comparing yourself to someone successful can be excellent self-motivation for working on your goals.

Aside from self-improvement, as a researcher, you should know that comparison is an essential method in scientific studies, such as experimental research and descriptive research . Through this method, you can uncover the relationship between two or more variables of your project in the form of comparative analysis .

What is Comparative Research?

Comparative research aims to compare two or more variables in a study; experts commonly apply it in the social sciences to compare countries and cultures across a particular region or the entire world. Despite its proven effectiveness, keep in mind that different countries have different rules about sharing data, so consider such factors when gathering specific information.

Quantitative and Qualitative Research Methods in Comparative Studies

In comparing variables, the statistical and mathematical data collection and analysis that quantitative research methodology naturally uses to uncover the correlational connection between variables can be essential. Additionally, since quantitative research requires a specific research question, this method can help you quickly come up with one particular comparative research question.

The goal of comparative research is to draw conclusions from the similarities and differences between the focal variables. Through non-experimental or qualitative research, you can also include this type of research method in your comparative research design.
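To ground this, here is a minimal sketch of the kind of quantitative comparison such a design produces: an independent-samples t-test on one variable measured in two groups. The data are simulated and the group labels hypothetical:

```python
# Minimal sketch of a quantitative group comparison (hypothetical data):
# independent-samples t-test on daily calorific intake for two groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
group_a = rng.normal(2500, 300, 80)  # e.g., men (simulated values)
group_b = rng.normal(2200, 280, 80)  # e.g., women (simulated values)

t, p = ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"mean A = {group_a.mean():.0f}, mean B = {group_b.mean():.0f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```

Welch's variant is used here because it does not assume the two groups have equal variances, a safe default when comparing naturally occurring groups.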

13+ Comparative Research Examples

Know more about comparative research by going over the following examples. You can download these zipped documents in PDF and MS Word formats.

1. Comparative Research Report Template
2. Business Comparative Research Template
3. Comparative Market Research Template
4. Comparative Research Strategies Example
5. Comparative Research in Anthropology Example
6. Sample Comparative Research Example
7. Comparative Area Research Example
8. Comparative Research on Women's Employment Example
9. Basic Comparative Research Example
10. Comparative Research in Medical Treatments Example
11. Comparative Research in Education Example
12. Formal Comparative Research Example
13. Comparative Research Designs Example
14. Causal Comparative Research in DOC

Best Practices in Writing an Essay for Comparative Research in Visual Arts

If you are going to write an essay for a comparative research paper, this section is for you. There are common mistakes that students make in essay writing; to avoid them, follow these pointers.

1. Compare the Artworks Not the Artists

One of the mistakes students make when writing a comparative essay is comparing the artists instead of the artworks. Unless your instructor asked you to write a biographical essay, focus your writing on the works of the artists you choose.

2. Consult Your Instructor

There is a broad range of information you can find on the internet for your project. Some students, however, choose their images randomly; doing so rarely produces a successful comparative study. We therefore recommend discussing your selections with your teacher.

3. Avoid Redundancy

It is common for students to repeat ideas they have already listed in the comparison section. Keep in mind that the space for this activity is limited, so reserve each section for more thoroughly argued ideas.

4. Be Minimal

Unless instructed otherwise, it is practical to include only a few items (artworks). This way, you can focus on developing well-argued points for your study.

5. Master the Assessment Method and the Goals of the Project

We get it: you are doing this project because your instructor told you to. However, you can make your study more valuable by understanding the goals of the project and knowing how you can apply what you learn. You should also know the criteria your teachers use to assess your output; this gives you a chance to maximize your grade.

Comparing things is one way to learn what to improve in various aspects of life. Whether you are aiming to attain a personal goal or attempting to solve a certain task, you can accomplish it by knowing how to conduct a comparative study. Use this content as a tool to expand your knowledge of this research methodology.


How to Practice Academic Medicine and Publish from Developing Countries?, pp. 185–192

How to Choose a Title?

Samiran Nundy, Atul Kakar & Zulfiqar A. Bhutta

Open Access | First Online: 24 October 2021


A title should predict the content of a paper, should be interesting, reflect the tone of writing, and contain important keywords so that it can be easily located.


1 Why is the Title So Important in Biomedical Research?

‘What’s in a name? That which we call a rose by any other name would smell as sweet’ is a famous quote from Shakespeare’s play ‘Romeo and Juliet’. However, in biomedical research, the title or name of the article is without any reservation the most important part of the paper and the most read part in the journal. The title is the face of the research and it should sum up the main notion of the experiment/research in such a way that in the fewest possible words one can summarize the facts of the paper and attract the reader as well. ‘Being concise, precise, and meticulous is the key’ for planning a title [ 1 ].

The title should not be very lengthy, nor should it contain unnecessary words. For example, 'A Study to Investigate the Safety and Efficacy of Hydroxychloroquine in Subjects Who Are Infected with COVID-19 During the Pandemic' contains many extra words and could easily be replaced by 'Safety and Efficacy of Hydroxychloroquine During the COVID-19 Pandemic'. Conversely, a title that is too brief will not convey what the paper is about; for example, 'COVID Depression' does not provide the necessary information to the readers of the paper.

The title thus has two functions: first, to help scientific databases index the paper, and second, to act as a billboard or advertisement to sell the paper.

2 What Parameters Help to Formulate a Suitable Research Paper Title?

Time should be spent in planning the title. Editors often reject an article based on its title [2]. Typically, principal investigators and their co-investigators should select the title carefully so that it captures the central idea of the research. Devoting time to the title can help writers relook at the main purpose of the study and also reconsider whether 'they are drifting off on a tangent while writing'. There are many adjectives to describe the title of a medical paper, including 'simple, direct, accurate, appropriate, specific, functional, interesting, attractive/appealing, concise/brief, precise/focused, unambiguous, memorable, captivating and informative' [2, 3, 4]. The following information is important while designing the title.

The aim of the research project.

The type of the study.

The methodology used in the project.

PICOT: population/problem, intervention (test, drug or surgery), control/comparison and time.

3 How to Plan Effective Titles in Academic Research Papers?

Although the title appears at the top of the article, it should be written only after the abstract has been written and finalized. There are five basic steps for writing the title. After doing this exercise, one should jot down two or three options and then choose the best one for the paper. Titles are typically arranged as a phrase, but can also be in the form of a question. The grammar should be correct, and the principal words should be capitalized. In academic papers, a title is rarely followed by an exclamation mark. Ideally, the title should be 'descriptive, direct, accurate, appropriate, interesting, concise, precise and unique' [1].

Step 1: Write scope of the study and the major hypothesis in a point format.

Step 2: Use current nomenclature from the manuscript or keywords and dependent and independent variables.

Step 3: Break down the study into the various components of PICOT.

Step 4: Frame phrases that give a positive impression and stimulate reader interest.

Step 5: Finally organize and then reorganize the title (Fig. 16.1 ).

Fig. 16.1 Steps to frame the title.

4 What Things Should Be Avoided in the Title?

One should avoid the following.

Avoid using abbreviations and symbols in the title.

Limit the word count to 10–15.

Do not include ‘study of’, ‘analysis of’, or a similar assembly of words.

Avoid using unfamiliar jargon not used in the text.

The title should not be misleading.

Amusing titles may be taken less seriously by readers and may be cited less often [2, 5].
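Purely as an illustration, a few of these rules of thumb are easy to automate. The checker below is hypothetical (not a tool described in this chapter) and flags only the mechanical rules: word count, filler phrases, and abbreviations:

```python
# Hypothetical title checker illustrating the rules of thumb above;
# not a tool described in the chapter.
import re

FILLER = re.compile(r"\b(study of|analysis of|an? study to investigate)\b", re.IGNORECASE)

def check_title(title: str) -> list[str]:
    problems = []
    n_words = len(title.split())
    if not 10 <= n_words <= 15:
        problems.append(f"word count is {n_words}; aim for 10-15")
    if FILLER.search(title):
        problems.append("contains filler such as 'study of' / 'analysis of'")
    if re.search(r"\b[A-Z]{2,}\b", title):
        problems.append("possible abbreviation; avoid abbreviations in titles")
    return problems

print(check_title("A Study to Investigate the Safety and Efficacy of "
                  "Hydroxychloroquine in Subjects Infected with COVID-19"))
```

Of course, the qualities that matter most (accuracy, interest, and fit with the paper's content) still require human judgment.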

5 What are the Types of Titles for an Academic Paper?

Classically three types of titles have been described in the literature, i.e., descriptive, declarative, or interrogative. The fourth category which is Creative is also called the combined type. The details are given in Fig. 16.2 .

Fig. 16.2 Types of titles for an academic paper.

5.1 Descriptive or Neutral Title

This has the vital components of the research paper, i.e., information on the subjects, study design, the interventions used, comparisons/control, and the outcome. It is the PICOT style of title but does not disclose the observations, results, or conclusions [4, 6]. The descriptive title is based on multiple keywords and provides an opportunity for the reader to judge the results in an unbiased manner. This type of title is usually preferred in original articles and is also more read and cited than the other types [6, 7]. Examples are given in Table 16.1.

5.2 Declarative Title

This type of title states the main results of the study, which reduces the reader's curiosity, and thus it should be avoided. A few examples are cited in Table 16.2.

5.3 Interrogative Title

In this, the title ends with a question mark which increases the reads and the downloads. When the title is in the form of a query it dramatizes the subject and the readers become inquisitive. A few examples are cited in Table 16.3 .

5.4 Creative Phrase/Combined Type of Title

This type of title is used for editorials or viewpoints. It can sometimes be used in original articles, but in such cases an informative phrase is combined with a creative one. Usually the informative part is the major element and the creative phrase the minor one; the latter gives the punch. The two can be separated by a colon or hyphen (Table 16.4).

6 What is a Short-running Title?

Many journals will ask for a short running title that is published at the top of each page. The requirements for this (e.g., the word count) should be checked with the journal.


7 Conclusions

The title provides the most important information which helps in indexing and also attracts readers.

The word count for the title should be less than 16.

There are four types of title: descriptive, declarative, interrogative, and creative. The majority of original articles have a descriptive title.

There are five basic steps that you need to follow when designing a title. They start with writing the hypothesis and finish with a phrase that can hold the attention of the reader.

References

1. Tullu MS. Writing the title and abstract for a research paper: being concise, precise, and meticulous is the key. Saudi J Anaesth. 2019;13(Suppl 1):S12–7.
2. Bavdekar SB. Formulating the right title for a research article. J Assoc Physicians India. 2016;64:53–6.
3. Tullu MS, Karande S. Writing a model research paper: a roadmap. J Postgrad Med. 2017;63:143–6.
4. Dewan P, Gupta P. Writing the title, abstract and introduction: looks matter! Indian Pediatr. 2016;53:235–41.
5. Sagi I, Yechiam E. Amusing titles in scientific journals and article citation. J Inf Sci. 2008;34:680–7.
6. Cals JWL, Kotz D. Effective writing and publishing scientific papers, part II: title and abstract. J Clin Epidemiol. 2013;66:585.
7. Jamali HR, Nikzad M. Article title type and its relation with the number of downloads and citations. Scientometrics. 2011;88:653–61.
8. Jax T, Stirban A, Terjung A, et al. A randomised, active- and placebo-controlled, three-period crossover trial to investigate short-term effects of the dipeptidyl peptidase-4 inhibitor linagliptin on macro- and microvascular endothelial function in type 2 diabetes. Cardiovasc Diabetol. 2017;16(1):1–6.
9. Kelly AS, Auerbach P, Barrientos-Perez M, et al. A randomized, controlled trial of liraglutide for adolescents with obesity. N Engl J Med. 2020;382(22):2117–28.
10. Xia S, Duan K, Zhang Y, et al. Effect of an inactivated vaccine against SARS-CoV-2 on safety and immunogenicity outcomes: interim analysis of 2 randomized clinical trials. JAMA. 2020;324(10):951–60.
11. Jackson JB, MacDonald KL, Cadwell J, et al. Absence of HIV infection in blood donors with indeterminate Western blot tests for antibody to HIV-1. N Engl J Med. 1990;322(4):217–22.
12. Seeman E, Hopper JL, Bach LA, et al. Reduced bone mass in daughters of women with osteoporosis. N Engl J Med. 1989;320(9):554–8.
13. Weber R, Bryan RT, Owen RL, Wilcox CM, Gorelkin L, Visvesvara GS. Improved light-microscopical detection of microsporidia spores in stool and duodenal aspirates. The Enteric Opportunistic Infections Working Group. N Engl J Med. 1992;326:161–6.
14. Paul S, Ridge JA, Quan SH, Elin RS. Ann Surg. 1990;211:67–71.
15. Clavien PA. Hepatic vein embolization for safer liver surgery. Ann Surg. 2020;272:206–9.
16. Rosenberger PB. Dyslexia—is it a disease? N Engl J Med. 1992;326:192–3.
17. Winthrop KL, Mariette X. To immunosuppress: whom, when and how? That is the question with COVID-19. Ann Rheum Dis. 2020;79:1129–31.
18. Issaka RB. Good for us all. JAMA. 2020;324(6):556–7. https://doi.org/10.1001/jama.2020.12630
19. Kay R. Old wine in new bottle. Hong Kong Med J. 2007;13(1):4.
20. Tandon V, Botha JF, Banks J, Pontin AR, Pascoe MD, Kahn D. A tale of two kidneys—how long can a kidney transplant wait? Clin Transpl. 2000;14:189–92. https://doi.org/10.1034/j.1399-0012.2000.140302
21. Stewart G. Pain, panic, and panting—the reality of "shortness of breath". BMJ. 2020.
22. Aronson J. When I use a word … clowns. The BMJ blog. 2020. https://blogs.bmj.com/bmj/2020/08/14/jeffrey-aronson-when-i-use-a-word-clowns/
Nundy S, Kakar A, Bhutta ZA. How to Choose a Title? In: How to Practice Academic Medicine and Publish from Developing Countries? Singapore: Springer; 2022. https://doi.org/10.1007/978-981-16-5248-6_16

© 2022 The Author(s). This chapter is licensed under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).


500+ Quantitative Research Titles and Topics


Quantitative Research Topics

Quantitative research involves collecting and analyzing numerical data to identify patterns, trends, and relationships among variables. This method is widely used in the social sciences, psychology, economics, and other fields where researchers aim to understand human behavior and phenomena through statistical analysis. If you are looking for a quantitative research topic, there are numerous areas to explore, from analyzing data on a specific population to studying the effects of a particular intervention or treatment. In this post, we provide ideas for quantitative research topics that may inspire you and help you narrow down your interests.

Quantitative Research Titles

Quantitative Research Titles are as follows:

Business and Economics

  • “Statistical Analysis of Supply Chain Disruptions on Retail Sales”
  • “Quantitative Examination of Consumer Loyalty Programs in the Fast Food Industry”
  • “Predicting Stock Market Trends Using Machine Learning Algorithms”
  • “Influence of Workplace Environment on Employee Productivity: A Quantitative Study”
  • “Impact of Economic Policies on Small Businesses: A Regression Analysis”
  • “Customer Satisfaction and Profit Margins: A Quantitative Correlation Study”
  • “Analyzing the Role of Marketing in Brand Recognition: A Statistical Overview”
  • “Quantitative Effects of Corporate Social Responsibility on Consumer Trust”
  • “Price Elasticity of Demand for Luxury Goods: A Case Study”
  • “The Relationship Between Fiscal Policy and Inflation Rates: A Time-Series Analysis”
  • “Factors Influencing E-commerce Conversion Rates: A Quantitative Exploration”
  • “Examining the Correlation Between Interest Rates and Consumer Spending”
Education

  • “Standardized Testing and Academic Performance: A Quantitative Evaluation”
  • “Teaching Strategies and Student Learning Outcomes in Secondary Schools: A Quantitative Study”
  • “The Relationship Between Extracurricular Activities and Academic Success”
  • “Influence of Parental Involvement on Children’s Educational Achievements”
  • “Digital Literacy in Primary Schools: A Quantitative Assessment”
  • “Learning Outcomes in Blended vs. Traditional Classrooms: A Comparative Analysis”
  • “Correlation Between Teacher Experience and Student Success Rates”
  • “Analyzing the Impact of Classroom Technology on Reading Comprehension”
  • “Gender Differences in STEM Fields: A Quantitative Analysis of Enrollment Data”
  • “The Relationship Between Homework Load and Academic Burnout”
  • “Assessment of Special Education Programs in Public Schools”
  • “Role of Peer Tutoring in Improving Academic Performance: A Quantitative Study”

Medicine and Health Sciences

  • “The Impact of Sleep Duration on Cardiovascular Health: A Cross-sectional Study”
  • “Analyzing the Efficacy of Various Antidepressants: A Meta-Analysis”
  • “Patient Satisfaction in Telehealth Services: A Quantitative Assessment”
  • “Dietary Habits and Incidence of Heart Disease: A Quantitative Review”
  • “Correlations Between Stress Levels and Immune System Functioning”
  • “Smoking and Lung Function: A Quantitative Analysis”
  • “Influence of Physical Activity on Mental Health in Older Adults”
  • “Antibiotic Resistance Patterns in Community Hospitals: A Quantitative Study”
  • “The Efficacy of Vaccination Programs in Controlling Disease Spread: A Time-Series Analysis”
  • “Role of Social Determinants in Health Outcomes: A Quantitative Exploration”
  • “Impact of Hospital Design on Patient Recovery Rates”
  • “Quantitative Analysis of Dietary Choices and Obesity Rates in Children”

Social Sciences

  • “Examining Social Inequality through Wage Distribution: A Quantitative Study”
  • “Impact of Parental Divorce on Child Development: A Longitudinal Study”
  • “Social Media and its Effect on Political Polarization: A Quantitative Analysis”
  • “The Relationship Between Religion and Social Attitudes: A Statistical Overview”
  • “Influence of Socioeconomic Status on Educational Achievement”
  • “Quantifying the Effects of Community Programs on Crime Reduction”
  • “Public Opinion and Immigration Policies: A Quantitative Exploration”
  • “Analyzing the Gender Representation in Political Offices: A Quantitative Study”
  • “Impact of Mass Media on Public Opinion: A Regression Analysis”
  • “Influence of Urban Design on Social Interactions in Communities”
  • “The Role of Social Support in Mental Health Outcomes: A Quantitative Analysis”
  • “Examining the Relationship Between Substance Abuse and Employment Status”

Engineering and Technology

  • “Performance Evaluation of Different Machine Learning Algorithms in Autonomous Vehicles”
  • “Material Science: A Quantitative Analysis of Stress-Strain Properties in Various Alloys”
  • “Impacts of Data Center Cooling Solutions on Energy Consumption”
  • “Analyzing the Reliability of Renewable Energy Sources in Grid Management”
  • “Optimization of 5G Network Performance: A Quantitative Assessment”
  • “Quantifying the Effects of Aerodynamics on Fuel Efficiency in Commercial Airplanes”
  • “The Relationship Between Software Complexity and Bug Frequency”
  • “Machine Learning in Predictive Maintenance: A Quantitative Analysis”
  • “Wearable Technologies and their Impact on Healthcare Monitoring”
  • “Quantitative Assessment of Cybersecurity Measures in Financial Institutions”
  • “Analysis of Noise Pollution from Urban Transportation Systems”
  • “The Influence of Architectural Design on Energy Efficiency in Buildings”

Quantitative Research Topics

Quantitative Research Topics are as follows:

  • The effects of social media on self-esteem among teenagers.
  • A comparative study of academic achievement among students of single-sex and co-educational schools.
  • The impact of gender on leadership styles in the workplace.
  • The correlation between parental involvement and academic performance of students.
  • The effect of mindfulness meditation on stress levels in college students.
  • The relationship between employee motivation and job satisfaction.
  • The effectiveness of online learning compared to traditional classroom learning.
  • The correlation between sleep duration and academic performance among college students.
  • The impact of exercise on mental health among adults.
  • The relationship between social support and psychological well-being among cancer patients.
  • The effect of caffeine consumption on sleep quality.
  • A comparative study of the effectiveness of cognitive-behavioral therapy and pharmacotherapy in treating depression.
  • The relationship between physical attractiveness and job opportunities.
  • The correlation between smartphone addiction and academic performance among high school students.
  • The impact of music on memory recall among adults.
  • The effectiveness of parental control software in limiting children’s online activity.
  • The relationship between social media use and body image dissatisfaction among young adults.
  • The correlation between academic achievement and parental involvement among minority students.
  • The impact of early childhood education on academic performance in later years.
  • The effectiveness of employee training and development programs in improving organizational performance.
  • The relationship between socioeconomic status and access to healthcare services.
  • The correlation between social support and academic achievement among college students.
  • The impact of technology on communication skills among children.
  • The effectiveness of mindfulness-based stress reduction programs in reducing symptoms of anxiety and depression.
  • The relationship between employee turnover and organizational culture.
  • The correlation between job satisfaction and employee engagement.
  • The impact of video game violence on aggressive behavior among children.
  • The effectiveness of nutritional education in promoting healthy eating habits among adolescents.
  • The relationship between bullying and academic performance among middle school students.
  • The correlation between teacher expectations and student achievement.
  • The impact of gender stereotypes on career choices among high school students.
  • The effectiveness of anger management programs in reducing violent behavior.
  • The relationship between social support and recovery from substance abuse.
  • The correlation between parent-child communication and adolescent drug use.
  • The impact of technology on family relationships.
  • The effectiveness of smoking cessation programs in promoting long-term abstinence.
  • The relationship between personality traits and academic achievement.
  • The correlation between stress and job performance among healthcare professionals.
  • The impact of online privacy concerns on social media use.
  • The effectiveness of cognitive-behavioral therapy in treating anxiety disorders.
  • The relationship between teacher feedback and student motivation.
  • The correlation between physical activity and academic performance among elementary school students.
  • The impact of parental divorce on academic achievement among children.
  • The effectiveness of diversity training in improving workplace relationships.
  • The relationship between childhood trauma and adult mental health.
  • The correlation between parental involvement and substance abuse among adolescents.
  • The impact of social media use on romantic relationships among young adults.
  • The effectiveness of assertiveness training in improving communication skills.
  • The relationship between parental expectations and academic achievement among high school students.
  • The correlation between sleep quality and mood among adults.
  • The impact of video game addiction on academic performance among college students.
  • The effectiveness of group therapy in treating eating disorders.
  • The relationship between job stress and job performance among teachers.
  • The correlation between mindfulness and emotional regulation.
  • The impact of social media use on self-esteem among college students.
  • The effectiveness of parent-teacher communication in promoting academic achievement among elementary school students.
  • The impact of renewable energy policies on carbon emissions
  • The relationship between employee motivation and job performance
  • The effectiveness of psychotherapy in treating eating disorders
  • The correlation between physical activity and cognitive function in older adults
  • The effect of childhood poverty on adult health outcomes
  • The impact of urbanization on biodiversity conservation
  • The relationship between work-life balance and employee job satisfaction
  • The effectiveness of eye movement desensitization and reprocessing (EMDR) in treating trauma
  • The correlation between parenting styles and child behavior
  • The effect of social media on political polarization
  • The impact of foreign aid on economic development
  • The relationship between workplace diversity and organizational performance
  • The effectiveness of dialectical behavior therapy in treating borderline personality disorder
  • The correlation between childhood abuse and adult mental health outcomes
  • The effect of sleep deprivation on cognitive function
  • The impact of trade policies on international trade and economic growth
  • The relationship between employee engagement and organizational commitment
  • The effectiveness of cognitive therapy in treating postpartum depression
  • The correlation between family meals and child obesity rates
  • The effect of parental involvement in sports on child athletic performance
  • The impact of social entrepreneurship on sustainable development
  • The relationship between emotional labor and job burnout
  • The effectiveness of art therapy in treating dementia
  • The correlation between social media use and academic procrastination
  • The effect of poverty on childhood educational attainment
  • The impact of urban green spaces on mental health
  • The relationship between job insecurity and employee well-being
  • The effectiveness of virtual reality exposure therapy in treating anxiety disorders
  • The correlation between childhood trauma and substance abuse
  • The effect of screen time on children’s social skills
  • The impact of trade unions on employee job satisfaction
  • The relationship between cultural intelligence and cross-cultural communication
  • The effectiveness of acceptance and commitment therapy in treating chronic pain
  • The correlation between childhood obesity and adult health outcomes
  • The effect of gender diversity on corporate performance
  • The impact of environmental regulations on industry competitiveness.
  • The impact of renewable energy policies on greenhouse gas emissions
  • The relationship between workplace diversity and team performance
  • The effectiveness of group therapy in treating substance abuse
  • The correlation between parental involvement and social skills in early childhood
  • The effect of technology use on sleep patterns
  • The impact of government regulations on small business growth
  • The relationship between job satisfaction and employee turnover
  • The effectiveness of virtual reality therapy in treating anxiety disorders
  • The correlation between parental involvement and academic motivation in adolescents
  • The effect of social media on political engagement
  • The impact of urbanization on mental health
  • The relationship between corporate social responsibility and consumer trust
  • The correlation between early childhood education and social-emotional development
  • The effect of screen time on cognitive development in young children
  • The impact of trade policies on global economic growth
  • The relationship between workplace diversity and innovation
  • The effectiveness of family therapy in treating eating disorders
  • The correlation between parental involvement and college persistence
  • The effect of social media on body image and self-esteem
  • The impact of environmental regulations on business competitiveness
  • The relationship between job autonomy and job satisfaction
  • The effectiveness of virtual reality therapy in treating phobias
  • The correlation between parental involvement and academic achievement in college
  • The effect of social media on sleep quality
  • The impact of immigration policies on social integration
  • The relationship between workplace diversity and employee well-being
  • The effectiveness of psychodynamic therapy in treating personality disorders
  • The correlation between early childhood education and executive function skills
  • The effect of parental involvement on STEM education outcomes
  • The impact of trade policies on domestic employment rates
  • The relationship between job insecurity and mental health
  • The effectiveness of exposure therapy in treating PTSD
  • The correlation between parental involvement and social mobility
  • The effect of social media on intergroup relations
  • The impact of urbanization on air pollution and respiratory health.
  • The relationship between emotional intelligence and leadership effectiveness
  • The effectiveness of cognitive-behavioral therapy in treating depression
  • The correlation between early childhood education and language development
  • The effect of parental involvement on academic achievement in STEM fields
  • The impact of trade policies on income inequality
  • The relationship between workplace diversity and customer satisfaction
  • The effectiveness of mindfulness-based therapy in treating anxiety disorders
  • The correlation between parental involvement and civic engagement in adolescents
  • The effect of social media on mental health among teenagers
  • The impact of public transportation policies on traffic congestion
  • The relationship between job stress and job performance
  • The effectiveness of group therapy in treating depression
  • The correlation between early childhood education and cognitive development
  • The effect of parental involvement on academic motivation in college
  • The impact of environmental regulations on energy consumption
  • The relationship between workplace diversity and employee engagement
  • The effectiveness of art therapy in treating PTSD
  • The correlation between parental involvement and academic success in vocational education
  • The effect of social media on academic achievement in college
  • The impact of tax policies on economic growth
  • The relationship between job flexibility and work-life balance
  • The effectiveness of acceptance and commitment therapy in treating anxiety disorders
  • The correlation between early childhood education and social competence
  • The effect of parental involvement on career readiness in high school
  • The impact of immigration policies on crime rates
  • The relationship between workplace diversity and employee retention
  • The effectiveness of play therapy in treating trauma
  • The correlation between parental involvement and academic success in online learning
  • The effect of social media on body dissatisfaction among women
  • The impact of urbanization on public health infrastructure
  • The relationship between job satisfaction and job performance
  • The effectiveness of eye movement desensitization and reprocessing therapy in treating PTSD
  • The correlation between early childhood education and social skills in adolescence
  • The effect of parental involvement on academic achievement in the arts
  • The impact of trade policies on foreign investment
  • The relationship between workplace diversity and decision-making
  • The effectiveness of exposure and response prevention therapy in treating OCD
  • The correlation between parental involvement and academic success in special education
  • The impact of zoning laws on affordable housing
  • The relationship between job design and employee motivation
  • The effectiveness of cognitive rehabilitation therapy in treating traumatic brain injury
  • The correlation between early childhood education and social-emotional learning
  • The effect of parental involvement on academic achievement in foreign language learning
  • The impact of trade policies on the environment
  • The relationship between workplace diversity and creativity
  • The effectiveness of emotion-focused therapy in treating relationship problems
  • The correlation between parental involvement and academic success in music education
  • The effect of social media on interpersonal communication skills
  • The impact of public health campaigns on health behaviors
  • The relationship between job resources and job stress
  • The effectiveness of equine therapy in treating substance abuse
  • The correlation between early childhood education and self-regulation
  • The effect of parental involvement on academic achievement in physical education
  • The impact of immigration policies on cultural assimilation
  • The relationship between workplace diversity and conflict resolution
  • The effectiveness of schema therapy in treating personality disorders
  • The correlation between parental involvement and academic success in career and technical education
  • The effect of social media on trust in government institutions
  • The impact of urbanization on public transportation systems
  • The relationship between job demands and job stress
  • The correlation between early childhood education and executive functioning
  • The effect of parental involvement on academic achievement in computer science
  • The effectiveness of cognitive processing therapy in treating PTSD
  • The correlation between parental involvement and academic success in homeschooling
  • The effect of social media on cyberbullying behavior
  • The impact of urbanization on air quality
  • The effectiveness of dance therapy in treating anxiety disorders
  • The correlation between early childhood education and math achievement
  • The effect of parental involvement on academic achievement in health education
  • The impact of global warming on agriculture
  • The effectiveness of narrative therapy in treating depression
  • The correlation between parental involvement and academic success in character education
  • The effect of social media on political participation
  • The impact of technology on job displacement
  • The relationship between job resources and job satisfaction
  • The effectiveness of art therapy in treating addiction
  • The correlation between early childhood education and reading comprehension
  • The effect of parental involvement on academic achievement in environmental education
  • The impact of income inequality on social mobility
  • The relationship between workplace diversity and organizational culture
  • The effectiveness of solution-focused brief therapy in treating anxiety disorders
  • The correlation between parental involvement and academic success in physical therapy education
  • The effect of social media on misinformation
  • The impact of green energy policies on economic growth
  • The relationship between job demands and employee well-being
  • The correlation between early childhood education and science achievement
  • The effect of parental involvement on academic achievement in religious education
  • The impact of gender diversity on corporate governance
  • The relationship between workplace diversity and ethical decision-making
  • The correlation between parental involvement and academic success in dental hygiene education
  • The effect of social media on self-esteem among adolescents
  • The impact of renewable energy policies on energy security
  • The effect of parental involvement on academic achievement in social studies
  • The impact of trade policies on job growth
  • The relationship between workplace diversity and leadership styles
  • The correlation between parental involvement and academic success in online vocational training
  • The effect of social media on self-esteem among men
  • The impact of urbanization on air pollution levels
  • The effectiveness of music therapy in treating depression
  • The correlation between early childhood education and math skills
  • The effect of parental involvement on academic achievement in language arts
  • The impact of immigration policies on labor market outcomes
  • The effectiveness of hypnotherapy in treating phobias
  • The effect of social media on political engagement among young adults
  • The impact of urbanization on access to green spaces
  • The relationship between job crafting and job satisfaction
  • The effectiveness of exposure therapy in treating specific phobias
  • The correlation between early childhood education and spatial reasoning
  • The effect of parental involvement on academic achievement in business education
  • The impact of trade policies on economic inequality
  • The effectiveness of narrative therapy in treating PTSD
  • The correlation between parental involvement and academic success in nursing education
  • The effect of social media on sleep quality among adolescents
  • The impact of urbanization on crime rates
  • The relationship between job insecurity and turnover intentions
  • The effectiveness of pet therapy in treating anxiety disorders
  • The correlation between early childhood education and STEM skills
  • The effect of parental involvement on academic achievement in culinary education
  • The impact of immigration policies on housing affordability
  • The relationship between workplace diversity and employee satisfaction
  • The effectiveness of mindfulness-based stress reduction in treating chronic pain
  • The correlation between parental involvement and academic success in art education
  • The effect of social media on academic procrastination among college students
  • The impact of urbanization on public safety services
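
Many of the titles above are correlational ("the correlation between X and Y"). As a minimal sketch of how such a question might be analyzed, assuming SciPy is available and using invented numbers for a hypothetical screen-time and sleep-quality sample:

```python
# Minimal sketch of a correlational analysis.
# Hypothetical data; SciPy assumed available.
from scipy import stats

screen_time_hours = [1.5, 2.0, 3.5, 4.0, 5.5, 6.0]    # illustrative only
sleep_quality_score = [8.5, 8.0, 7.0, 6.5, 5.0, 4.5]  # illustrative only

# Pearson's r measures the strength and direction of a linear association.
r, p_value = stats.pearsonr(screen_time_hours, sleep_quality_score)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```

As with all descriptive research, a result like this describes the association; it does not explain why it exists.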

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer

Further Reading

  1. Descriptive Research Designs: Types, Examples & Methods

    Descriptive-comparative: In descriptive-comparative research, the researcher considers two variables that are not manipulated and establishes a formal procedure to conclude that one is better than the other. For example, an examination body may want to determine which of two methods of conducting tests, paper-based or computer-based, is better. (A minimal sketch of one such procedure appears after this list.)

  2. Descriptive Research Design

    As discussed earlier, common data analysis methods for descriptive research include descriptive statistics, cross-tabulation, content analysis, qualitative coding, visualization, and comparative analysis. Interpret results: Interpret your findings in light of your research question and objectives. (A short sketch of the first two of these methods appears after this list.)

  3. 18 Descriptive Research Examples (2024)

    18 Descriptive Research Examples. Descriptive research involves gathering data to provide a detailed account or depiction of a phenomenon without manipulating variables or conducting experiments. "Descriptive research is defined as a research approach that describes the characteristics of the population, sample or phenomenon studied."

  4. Types of Research Designs Compared

    Types of Research Designs Compared | Guide & Examples. Published on June 20, 2019 by Shona McCombes. Revised on June 22, 2023. When you start planning a research project, developing research questions and creating a research design, you will have to make various decisions about the type of research you want to do. There are many ways to categorize different types of research.

  5. Descriptive Research

    Descriptive research aims to accurately and systematically describe a population, situation or phenomenon. It can answer what, where, when and how questions, but not why questions. A descriptive research design can use a wide variety of research methods to investigate one or more variables. Unlike in experimental research, the researcher does ...

  6. Demystifying the research process: understanding a descriptive

    The study used a descriptive-comparative research design. This design is intended to describe the differences among groups in a population without manipulating the independent ...

  7. Descriptive Research: Methods And Examples

    Descriptive research examples show the thorough research involved in such a study. ... Descriptive Comparative. Comparing two variables can show if one is better than the other. Doing this through tests or surveys can reveal all the advantages and disadvantages associated with the two. For example, this technique can be used to find out if ...

  8. Descriptive Research 101: Definition, Methods and Examples

    For example, suppose you run a website that is beta testing an app feature. In that case, descriptive research invites users to try the feature, tracks their behavior, and then asks their opinions. Descriptive research can be applied to many research methods and areas; examples include healthcare, SaaS, psychology, political studies, education, and pop culture.

  9. Comparative Studies

    Comparative is a concept derived from the verb "to compare" (Latin comparare, a derivation of par, "equal", with the prefix com-: a systematic comparison). Comparative studies are investigations that analyze and evaluate, with quantitative and qualitative methods, a phenomenon and/or facts among different areas, subjects, and/or objects to detect similarities and/or ...

  10. Descriptive Research Design

    Descriptive research aims to accurately and systematically describe a population, situation or phenomenon. It can answer what, where, when, and how questions, but not why questions. A descriptive research design can use a wide variety of research methods to investigate one or more variables. Unlike in experimental research, the researcher does ...

  11. Descriptive Research: Design, Methods, Examples, and FAQs

    Descriptive research is a common investigatory model used by researchers in various fields, including social sciences, linguistics, and academia. To conduct effective research, you need to know the who, what, and where of a scenario or target population. Obtaining enough knowledge about the research topic is an important component of research.

  12. Comparative Research Methods

    Comparative research in communication and media studies is conventionally understood as the contrast among different macro-level units, such as world regions, countries, sub-national regions, social milieus, language areas and cultural thickenings, at one or more points in time. ... In the latter case, descriptive comparative analysis ...

  13. How to Write a Title for a Compare and Contrast Essay

    2. List what you want to compare. An informative title should tell your reader exactly what you are comparing in your essay. List the subjects you want to compare so that you can make sure they are included in your title. You only need to include the broad topics or themes you want to compare, such as dogs and cats.

  14. 15

    What makes a study comparative is not the particular techniques employed but the theoretical orientation and the sources of data. All the tools of the social scientist, including historical analysis, fieldwork, surveys, and aggregate data analysis, can be used to achieve the goals of comparative research. So, there is plenty of room for the ...

  15. 80+ Exceptional Research Titles Examples in Different Areas

    Examples of Research Topics on Ethics. Enumerate the different ways the government of the United States can reduce deaths arising from the unregulated use of guns. Analyze the place of ethics in medicine or of medical practitioners. For instance, you can discuss the prevalence of physician-assisted suicides in a named country.

  16. 5 Comparative Studies

    In nearly all studies in the comparative group, the titles of experimental curricula were explicitly identified. The only exception to this was the ARC Implementation Center study (Sconiers et al., 2002), where three NSF-supported elementary curricula were examined, but in the results, their effects were pooled. ... but this is descriptive at ...

  17. Clarifying Quantitative Research Designs

    A research design is a blueprint for conducting a study. Over the years, several quantitative designs have been developed for conducting descriptive, correlational, quasi-experimental, and experimental studies. Descriptive and correlational designs are focused on describing and examining relationships of variables in natural settings.

  18. How to structure quantitative research questions

    Structure of comparative research questions. There are five steps required to construct a comparative research question: (1) choose your starting phrase; (2) identify and name the dependent variable; (3) identify the groups you are interested in; (4) identify the appropriate adjoining text; and (5) write out the comparative research question. Each of these steps is discussed in turn.

  19. Comparative Research

    The goal of comparative research is to draw conclusions from the similarities and differences between the variables of interest. Through non-experimental or qualitative research, you can include this type of research method in your comparative research design. 13+ Comparative Research Examples. Learn more about comparative research by going over ...

  20. How to Choose a Title?

    The descriptive title is based on multiple keywords and provides an opportunity for the reader to decide about the results in an unbiased manner. This type of title is usually preferred in original articles and is also read and cited more than the other types [6, 7]. Examples are given in Table 16.1.

  21. Grand Valley State University ScholarWorks@GVSU

    A Descriptive Comparative Study of Student Learning Styles from Selected Medical Education Programs, by Dennis C. Gregory, PA-S, and Steven K. Huisman, PA-S. Submitted to the Physician Assistant Studies Program at Grand Valley State University, Allendale, Michigan, in partial fulfillment of the requirements for the degree of ...

  22. 500+ Quantitative Research Titles and Topics

    Quantitative research involves collecting and analyzing numerical data to identify patterns, trends, and relationships among variables. This method is widely used in social sciences, psychology, economics, and other fields where researchers aim to understand human behavior and phenomena through statistical analysis. If you are looking for a quantitative research topic, there are numerous areas ...
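
As noted under entry 1 above, here is a minimal sketch of one formal procedure for a descriptive-comparative question such as the paper-based vs. computer-based test example: an independent-samples t-test on the two groups' scores. The numbers are invented for illustration, and SciPy is assumed to be available.

```python
# Minimal sketch: a formal comparison of two unmanipulated groups.
# Hypothetical scores; SciPy assumed available.
from scipy import stats

paper_scores = [72, 68, 75, 70, 66, 74]      # illustrative data only
computer_scores = [78, 74, 80, 77, 73, 79]   # illustrative data only

# The independent-samples t-test asks whether the two group means
# differ by more than chance variation would explain.
t_stat, p_value = stats.ttest_ind(paper_scores, computer_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value would support concluding that one test format yields systematically different scores; which procedure is appropriate always depends on the data, so treat this as a sketch rather than a prescription.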

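And for entry 2 above, a minimal sketch of the first two analysis methods it names, descriptive statistics and cross-tabulation, assuming pandas is available; the survey table and its column names are invented for illustration.

```python
# Minimal sketch: descriptive statistics and cross-tabulation.
# Hypothetical survey responses; pandas assumed available.
import pandas as pd

df = pd.DataFrame({
    "income_band": ["low", "mid", "high", "mid", "low", "high"],
    "performance": ["poor", "good", "good", "good", "poor", "good"],
})

# Descriptive statistics: frequency counts for a single variable.
print(df["income_band"].value_counts())

# Cross-tabulation: how performance ratings distribute across income bands.
print(pd.crosstab(df["income_band"], df["performance"]))
```

Note that pd.crosstab here is purely descriptive: it summarizes how the two variables co-occur without implying that one causes the other.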