What is the Scientific Method: How does it work and why is it important?

The scientific method is a systematic process involving steps like defining questions, forming hypotheses, conducting experiments, and analyzing data. It minimizes biases and enables replicable research, leading to groundbreaking discoveries like Einstein's theory of relativity, penicillin, and the structure of DNA. This ongoing approach promotes reason, evidence, and the pursuit of truth in science.

Updated on November 18, 2023

Beginning in elementary school, we are exposed to the scientific method and taught how to put it into practice. As a tool for learning, it prepares children to think logically and use reasoning when seeking answers to questions.

Rather than jumping to conclusions, the scientific method gives us a recipe for exploring the world through observation and trial and error. We use it regularly, sometimes knowingly in academics or research, and sometimes subconsciously in our daily lives.

In this article we will refresh our memories on the particulars of the scientific method, discussing where it comes from, which elements comprise it, and how it is put into practice. Then, we will consider the importance of the scientific method, who uses it and under what circumstances.

What is the scientific method?

The scientific method is a dynamic process that involves objectively investigating questions through observation and experimentation. Applicable to all scientific disciplines, this systematic approach to answering questions is more accurately described as a flexible set of principles than as a fixed series of steps.

The following representations of the scientific method illustrate how it can be both condensed into broad categories and also expanded to reveal more and more details of the process. These graphics capture the adaptability that makes this concept universally valuable as it is relevant and accessible not only across age groups and educational levels but also within various contexts.

[Image: a graph of the scientific method]

Steps in the scientific method

While the scientific method is versatile in form and function, it encompasses a collection of principles that bring a logical progression to the process of problem solving (a small worked example follows this list):

  • Define a question: Constructing a clear and precise problem statement that identifies the main question or goal of the investigation is the first step. The wording must lend itself to experimentation by posing a question that is both testable and measurable.
  • Gather information and resources: Researching the topic in question to find out what is already known and what types of related questions others are asking is the next step in this process. This background information is vital to gaining a full understanding of the subject and to determining the best design for experiments.
  • Form a hypothesis: Composing a concise statement that identifies specific variables and potential results, which can then be tested, is a crucial step that must be completed before any experimentation. An imperfection in the composition of a hypothesis can weaken the entire design of an experiment.
  • Perform the experiments: Testing the hypothesis by performing replicable experiments and collecting the resultant data is another fundamental step of the scientific method. By controlling some elements of an experiment while purposely manipulating others, researchers establish cause-and-effect relationships.
  • Analyze the data: Interpreting the experimental process and results by recognizing trends in the data is a necessary step for comprehending its meaning and supporting the conclusions. Drawing inferences through this systematic process provides substantive evidence for either supporting or rejecting the hypothesis.
  • Report the results: Sharing the outcomes of an experiment, through an essay, presentation, graphic, or journal article, is often regarded as the final step in this process. Detailing the project’s design, methods, and results not only promotes transparency and replicability but also adds to the body of knowledge for future research.
  • Retest the hypothesis: Repeating experiments to see whether a hypothesis holds up in all cases can take several forms. Sometimes a researcher immediately checks their own work or replicates it at a future time, and sometimes another researcher repeats the experiments to further test the hypothesis.
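To make these steps concrete, here is a minimal sketch in Python of one pass through the cycle. The scenario (a fertilizer trial on plants), the simulated measurements, and the 0.05 significance threshold are all invented for illustration; a real experiment would of course use measured data.

```python
import random
from scipy import stats  # SciPy's two-sample t-test

# Steps 1-3: Question: does the fertilizer increase plant height?
# Hypothesis: fertilized plants grow taller than untreated controls.

random.seed(42)  # make this simulated "experiment" reproducible

# Step 4: perform the experiment -- here, simulated heights in cm.
control = [random.gauss(20, 2) for _ in range(30)]
treated = [random.gauss(22, 2) for _ in range(30)]

# Step 5: analyze the data with a two-sample t-test on the group means.
t_stat, p_value = stats.ttest_ind(treated, control)

# Step 6: report the results.
print(f"control mean = {sum(control) / len(control):.1f} cm, "
      f"treated mean = {sum(treated) / len(treated):.1f} cm, p = {p_value:.4f}")
if p_value < 0.05:  # conventional threshold, fixed before the experiment
    print("Data support the hypothesis; step 7 is to retest and replicate.")
else:
    print("Data do not support the hypothesis; revise it and test again.")
```

Replication (step 7) would amount to rerunning the experiment with new samples, ideally by an independent researcher, and checking that the conclusion holds.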

[Image: a chart of the scientific method]

Where did the scientific method come from?

Oftentimes, ancient peoples attempted to answer questions about the unknown by:

  • Making simple observations
  • Discussing the possibilities with others deemed worthy of a debate
  • Drawing conclusions based on dominant opinions and preexisting beliefs

For example, take Greek and Roman mythology. Myths were used to explain everything from the seasons and stars to the sun and death itself.

However, as societies began to grow through advancements in agriculture and language, ancient civilizations like Egypt and Babylonia shifted to a more rational analysis for understanding the natural world. They increasingly employed empirical methods of observation and experimentation that would one day evolve into the scientific method.

In the 4th century BCE, Aristotle, considered the Father of Science by many, suggested these elements, which closely resemble the contemporary scientific method, as part of his approach for conducting science:

  • Study what others have written about the subject.
  • Look for the general consensus about the subject.
  • Perform a systematic study of everything even partially related to the topic.

[Image: a pyramid of the scientific method]

By continuing to emphasize systematic observation and controlled experiments, scholars such as Al-Kindi and Ibn al-Haytham helped expand this concept throughout the Islamic Golden Age.

In his 1620 treatise, Novum Organum, Sir Francis Bacon codified the scientific method, arguing not only that hypotheses must be tested through experiments but also that the results must be replicated to establish a truth. Coming at the height of the Scientific Revolution, this text made the scientific method accessible to European thinkers like Galileo and Isaac Newton, who then put the method into practice.

As science modernized in the 19th century, the scientific method became more formalized, leading to significant breakthroughs in fields such as evolution and germ theory. Today, it continues to evolve, underpinning scientific progress in diverse areas like quantum mechanics, genetics, and artificial intelligence.

Why is the scientific method important?

The history of the scientific method illustrates how the concept developed out of a need to find objective answers to scientific questions by overcoming biases based on fear, religion, power, and cultural norms. This still holds true today.

By implementing this standardized approach to conducting experiments, the impacts of researchers’ personal opinions and preconceived notions are minimized. The organized manner of the scientific method prevents these and other mistakes while promoting the replicability and transparency necessary for solid scientific research.

The importance of the scientific method is best observed through its successes, for example: 

  • “Albert Einstein stands out among modern physicists as the scientist who not only formulated a theory of revolutionary significance but also had the genius to reflect in a conscious and technical way on the scientific method he was using.” Devising a hypothesis based on the prevailing understanding of Newtonian physics eventually led Einstein to the theory of general relativity.
  • Howard Florey: “Perhaps the most useful lesson which has come out of the work on penicillin has been the demonstration that success in this field depends on the development and coordinated use of technical methods.” After discovering a mold that prevented the growth of Staphylococcus bacteria, Dr. Alexander Fleming designed experiments to identify and reproduce it in the lab, thus leading to the development of penicillin.
  • James D. Watson: “Every time you understand something, religion becomes less likely. Only with the discovery of the double helix and the ensuing genetic revolution have we had grounds for thinking that the powers held traditionally to be the exclusive property of the gods might one day be ours. . . .” By using wire models to conceive a structure for DNA, Watson and Crick crafted a hypothesis they tested against combinations of nucleotide bases, X-ray diffraction images, and the current research in atomic physics, resulting in the discovery of DNA’s double helix structure.

Final thoughts

As these cases exemplify, the scientific method is never truly completed, but rather started and restarted. It gave these researchers a structured process that could be easily replicated, modified, and built upon.

While the scientific method may “end” in one context, it never literally ends. When a hypothesis, design, methods, and experiments are revisited, the scientific method simply picks up where it left off. Each time a researcher builds upon previous knowledge, the scientific method is restored with the pieces of past efforts.

By guiding researchers towards objective results based on transparency and reproducibility, the scientific method acts as a defense against bias, superstition, and preconceived notions. As we embrace the scientific method's enduring principles, we ensure that our quest for knowledge remains firmly rooted in reason, evidence, and the pursuit of truth.

The AJE Team


STEM Problem Solving: Inquiry, Concepts, and Reasoning

  • Published: 29 January 2022
  • Volume 32, pages 381–397 (2023)


  • Aik-Ling Tan, ORCID: orcid.org/0000-0002-4627-4977
  • Yann Shiou Ong, ORCID: orcid.org/0000-0002-6092-2803
  • Yong Sim Ng, ORCID: orcid.org/0000-0002-8400-2040
  • Jared Hong Jie Tan


Balancing disciplinary knowledge and practical reasoning in problem solving is needed for meaningful learning. In STEM problem solving, science subject matter with associated practices often appears distant to learners due to its abstract nature. Consequently, learners experience difficulties making meaningful connections between science and their daily experiences. Applying Dewey’s idea of practical and science inquiry and Bereiter’s idea of referent-centred and problem-centred knowledge, we examine how integrated STEM problem solving offers opportunities for learners to shuttle between practical and science inquiry and the kinds of knowledge that result from each form of inquiry. We hypothesize that connecting science inquiry with practical inquiry narrows the gap between science and everyday experiences to overcome isolation and fragmentation of science learning. In this study, we examine classroom talk as students engage in problem solving to increase crop yield. Qualitative content analysis was conducted on the utterances of six classes of 113 eighth graders and their teachers across 3 hours of video recordings. Analysis showed an almost equal amount of science and practical inquiry talk. Teachers and students applied their everyday experiences to generate solutions. Science talk was at the basic level of facts and was used to explain reasons for specific design considerations. There was little evidence of higher-level scientific conceptual knowledge being applied. Our observations suggest opportunities for more intentional connections of science to practical problem solving if we intend to apply higher-order scientific knowledge in problem solving. Deliberate application of and reference to scientific knowledge could improve the quality of solutions generated.


1 Introduction

As we enter the second quarter of the twenty-first century, it is timely to take stock of both the changes and demands that continue to weigh on our education system. A recent report by the World Economic Forum highlighted the need to continuously re-position and re-invent education to meet the challenges presented by the disruptions brought on by the fourth industrial revolution (World Economic Forum, 2020). There is increasing pressure for education to equip children with the necessary, relevant, and meaningful knowledge, skills, and attitudes to create a “more inclusive, cohesive and productive world” (World Economic Forum, 2020, p. 4). Further, the shift in emphasis towards twenty-first century competencies over mere acquisition of disciplinary content knowledge is more urgent since we are preparing students for “jobs that do not yet exist, technology that has not yet been invented, and problems that do not yet exist” (OECD, 2018, p. 2). Tan (2020) concurred with the urgent need to extend the focus of education, particularly in science education, such that learners can learn to think differently about possibilities in this world. Amidst this rhetoric for change, the questions that remain to be answered include: how can science education transform itself to be more relevant; what role does science education play in integrated STEM learning; how can scientific knowledge, skills and epistemic practices of science be infused in integrated STEM learning; what kinds of STEM problems should we expose students to for them to learn disciplinary knowledge and skills; and what is the relationship between learning disciplinary content knowledge and problem solving skills?

In seeking to understand the extent of science learning that took place within integrated STEM learning, we dissected the STEM problems that were presented to students and examined in detail the sense-making processes that students utilized when they worked on the problems. We adopted Dewey’s (1938) theoretical idea of scientific and practical/common-sense inquiry and Bereiter’s ideas of the referent-centred and problem-centred knowledge building process to interpret teacher-student interactions during problem solving. There are two primary reasons for choosing these two theoretical frameworks. Firstly, Dewey’s ideas about the relationship between science inquiry and everyday practical problem-solving are important in helping us understand the role of science subject matter knowledge and science inquiry in solving practical real-world problems that are commonly used in STEM learning. Secondly, Bereiter’s ideas of referent-centred and problem-centred knowledge augment our understanding of the types of knowledge that students can learn when they engage in solving practical real-world problems.

Taken together, Dewey’s and Bereiter’s ideas enable us to better understand the types of problems used in STEM learning and their corresponding knowledge that is privileged during the problem-solving process. As such, the two theoretical lenses offered an alternative and convincing way to understand the actual types of knowledge that are used within the context of integrated STEM and help to move our understanding of STEM learning beyond current focus on examining how engineering can be used as an integrative mechanism (Bryan et al., 2016) or applying the argument of the strengths of trans-, multi-, or inter-disciplinary activities (Bybee, 2013; Park et al., 2020) or mapping problems by the content and context as pure STEM problems, STEM-related problems or non-STEM problems (Pleasants, 2020). Further, existing research (for example, Gale et al., 2000) around STEM education focussed largely on description of students’ learning experiences with insufficient attention given to the connections between disciplinary conceptual knowledge and inquiry processes that students use to arrive at solutions to problems. Clarity in the role of disciplinary knowledge and the related inquiry will allow for more intentional design of STEM problems for students to learn higher-order knowledge. Applying Dewey’s idea of practical and scientific inquiry and Bereiter’s ideas of referent-centred and problem-centred knowledge, we analysed six lessons where students engaged with integrated STEM problem solving to propose answers to the following research questions: What is the extent of practical and scientific inquiry in integrated STEM problem solving? and What conceptual knowledge and problem-solving skills are learnt through practical and science inquiry during integrated STEM problem solving?

2 Inquiry in Problem Solving

Inquiry, according to Dewey (1938), involves the direct control of unknown situations to change them into coherent and unified ones. Inquiry usually encompasses two interrelated activities—(1) thinking about ideas related to conceptual subject-matter and (2) engaging in activities involving our senses or using specific observational techniques. The National Science Education Standards released by the National Research Council in the US in 1996 defined inquiry as “…a multifaceted activity that involves making observations; posing questions; examining books and other sources of information to see what is already known; planning investigations; reviewing what is already known in light of experimental evidence; using tools to gather, analyze, and interpret data; proposing answers, explanations, and predictions; and communicating the results. Inquiry requires identification of assumptions, use of critical and logical thinking, and consideration of alternative explanations” (p. 23). Planning investigations; collecting empirical evidence; using tools to gather, analyse and interpret data; and reasoning are common processes shared by the fields of science and engineering and hence are highly relevant to integrated STEM education.

In STEM education, establishing the connection between general inquiry and its application helps to link disciplinary understanding to epistemic knowledge. For instance, methods of science inquiry are popular in STEM education due to the familiarity that teachers have with scientific methods. Science inquiry, a specific form of inquiry, has appeared in many science curricula (e.g. NRC, 2000) since Dewey proposed in 1910 that the learning of science should be perceived as both subject-matter and a method of learning science (Dewey, 1910a, 1910b). Science inquiry, which involves ways of doing science, should also encompass the ways in which students learn the scientific knowledge and investigative methods that enable scientific knowledge to be constructed. Asking scientifically orientated questions, collecting empirical evidence, crafting explanations, proposing models and reasoning based on available evidence are affordances of scientific inquiry. As such, science should be pursued as a way of knowing rather than merely the acquisition of scientific knowledge.

Building on these affordances of science inquiry, Duschl and Bybee (2014) advocated the 5D model that focused on the practice of planning and carrying out investigations in science and engineering, representing two of the four disciplines in STEM. The 5D model includes science inquiry aspects such as (1) deciding on what and how to measure, observe and sample; (2) developing and selecting appropriate tools to measure and collect data; (3) recording the results and observations in a systematic manner; (4) creating ways to represent the data and patterns that are observed; and (5) determining the validity and the representativeness of the data collected. The focus on planning and carrying out investigations in the 5D model is used to help teachers bridge the gap between the practices of building and refining models and explanation in science and engineering. Indeed, a common approach to incorporating science inquiry in integrated STEM curriculum involves student planning and carrying out scientific investigations and making sense of the data collected to inform engineering design solution (Cunningham & Lachapelle, 2016; Roehrig et al., 2021). Duschl and Bybee (2014) argued that it is needful to design experiences for learners to appreciate that struggles are part of problem solving in science and engineering. They argued that “when the struggles of doing science is eliminated or simplified, learners get the wrong perceptions of what is involved when obtaining scientific knowledge and evidence” (Duschl & Bybee, 2014, p. 2). While we concur with Duschl and Bybee about the need for struggles, in STEM learning, these struggles must be purposeful and grade appropriate so that students will also be able to experience success amidst failure.

The peculiar nature of science inquiry was scrutinized by Dewey (1938) when he cross-examined the relationship between science inquiry and other forms of inquiry, particularly common-sense inquiry. He positioned science inquiry along a continuum with general or common-sense inquiry that he termed “logic”. Dewey argued that common-sense inquiry serves a practical purpose and exhibits features of science inquiry such as asking questions and a reliance on evidence, although the focus of common-sense inquiry tends to be different. Common-sense inquiry deals with issues or problems that are in the immediate environment where people live, whereas the objects of science inquiry are more likely to be distant (e.g. spintronics) from familiar experiences in people’s daily lives. While we acknowledge the fundamental differences (such as novel discovery compared with re-discovering science, ‘messy’ science compared with ‘sanitised’ science) between school science and science that is practiced by scientists, the subject of interest in science (understanding the world around us) remains the same.

The unfamiliarity of the functionality and purpose of science inquiry for improving the daily lives of learners does little to motivate learners to learn science (Aikenhead, 2006; Lee & Luykx, 2006), since learners may not appreciate the connections of science inquiry to their day-to-day needs and wants. Bereiter (1992) also distinguished between two forms of knowledge—referent-centred and problem-centred. Referent-centred knowledge refers to subject-matter that is organised around topics, such as that in textbooks. Problem-centred knowledge is knowledge that is organised around problems, whether they are transient problems, practical problems or problems of explanations. Bereiter argued that the referent-centred knowledge that is commonly taught in schools is limited in its applications and meaningfulness to the lives of students. This lack of familiarity with, and affinity for, referent-centred knowledge parallels what Dewey observed about science subject-matter knowledge. Rather, it is problem-centred knowledge that would be useful when students encounter problems. Learning problem-centred knowledge will allow learners to readily harness the relevant knowledge base that is useful to understand and solve specific problems. This suggests a need to help learners make meaningful connections between science and their daily lives.

Further, Dewey opined that while the contexts in which scientific knowledge arises could be different from our daily common-sense world, careful consideration of scientific activities and applying the resultant knowledge to daily situations for use and enjoyment is possible. Similarly, in arguing for problem-centred knowledge, Bereiter (1992) questioned the value of inert knowledge that plays no role in helping us understand or deal with the world around us. Referent-centred knowledge has a higher tendency to be inert due to the way that the knowledge is organised and the way that the knowledge is encountered by learners. For instance, learning about the equation and conditions for photosynthesis is not going to help learners appreciate how plants are adapted for photosynthesis, how these adaptations can allow plants to survive changes in climate, and how farmers can grow plants better by creating the best growing conditions. Rather, students could be exposed to problems of explanations where they are asked to unravel the possible reasons for low crop yield and suggest possible ways to overcome the problem. Hence, we argue here that the value of referent knowledge is that it forms the basis and foundation for students to be able to discuss or suggest ways to overcome real life problems. Referent-centred knowledge serves as part of the relevant knowledge base that can be harnessed to solve specific problems or as foundational knowledge students need in order to progress to learn higher-order conceptual knowledge that typically forms the foundations or pillars within a discipline. This notion of referent-centred knowledge serving as foundational knowledge that can be and should be activated for application in problem-solving situations is shown by Delahunty et al. (2020). They found that students show high reliance on memory when they are conceptualising convergent problem-solving tasks.

While Bereiter argues for problem-centred knowledge, he cautioned that engagement should be with problems of explanation rather than transient or practical problems. He opined that if learners engage in transient or practical problems alone, they will only learn basic-category types of knowledge and fail to understand higher-order conceptual knowledge. For example, for photosynthesis, basic-level knowledge includes facts about the conditions required for photosynthesis, the products formed from the process of photosynthesis and the fact that green leaves reflect green light. This basic-level knowledge should intentionally help learners learn higher-level conceptual knowledge, such as being able to draw on the conditions for photosynthesis when they encounter a plant that is not growing well or is exhibiting discoloration of leaves.

Transient problems disappear once a solution becomes available, and there is a high likelihood that we will not remember the problem after that. Practical problems, according to Bereiter, are “stuck-door” problems that could be solved with or without basic-level knowledge and often have solutions that lack precise definition. There are usually a handful of practical strategies, such as pulling or pushing the door harder, kicking the door, etc., that will work for such problems. All these solutions lack a well-defined approach related to general scientific principles that is reproducible. Problems of explanations are the most desirable types of problems for learners since these are problems that persist and recur such that they can become organising points for knowledge. Problems of explanations consist of the conceptual representations of (1) a text base that serves to represent the text content and (2) a situation model that shows the portion of the world in which the text is relevant. The idea of a text base to represent text content in solving problems of explanations is like the ideas of domain knowledge and structural knowledge (which refers to knowledge of how concepts within a domain are connected) proposed by Jonassen (2000). He argued that both types of knowledge are required to solve a range of problems, from well-structured problems to ill-structured problems with a simulated context, to simple ill-structured problems and to complex ill-structured problems.

Jonassen indicated that complex ill-structured problems are typically design problems and are likely to be the most useful forms of problems for learners to be engaged in inquiry. Complex ill-structured design problems are the “wicked” problems that Buchanan (1992) discussed. Buchanan’s idea is that design aims to incorporate knowledge from different fields of specialised inquiry to become whole. Complex or wicked problems are akin to the work of scientists, who navigate multiple factors and evidence to offer models that are typically oversimplified, but who apply them to propose possible first-approximation explanations or solutions and iteratively relax constraints or assumptions to refine the model. The connections between the subject matter of science and the design process to engineer a solution are delicate. While it is important to ensure that practical concerns and questions are taken into consideration in designing solutions (particularly a material artefact) to a practical problem, the challenge here lies in ensuring that creativity in design is encouraged even if students initially lack or neglect the scientific conceptual understanding to explain/justify their design. In his articulation of wicked problems and the role of design thinking, Buchanan (1992) highlighted the need to pay attention to categories and placements. Categories “have fixed meanings that are accepted within the framework of a theory or a philosophy and serve as the basis for analyzing what already exists” (Buchanan, 1992, p. 12). Placements, on the other hand, “have boundaries to shape and constrain meaning, but are not rigidly fixed and determinate” (p. 12).

The difference between the ideas presented by Dewey and Bereiter lies in the problem design. For Dewey, scientific knowledge could be learnt from inquiring into practical problems that learners are familiar with. After all, Dewey viewed “modern science as continuous with, and to some degree an outgrowth and refinement of, practical or ‘common-sense’ inquiry” (Brown, 2012). Bereiter acknowledged the importance of familiar experiences, but instead of using them as starting points for learning science, he argued that practical problems are limiting in helping learners acquire higher-order knowledge. Instead, he advocated for learners to organize their knowledge around problems that are complex, persistent and extended and that require explanations to better understand the problems. Learners are to have a sense of the kinds of problems to which a specific concept is relevant before they can be said to have grasped the concept in a functionally useful way.

To connect problem solving, scientific knowledge and everyday experiences, we need to examine ways to re-negotiate the disciplinary boundaries (such as epistemic understanding, object of inquiry, degree of precision) of science and make relevant connections to common-sense inquiry and to the problem at hand. Integrated STEM appears to be one way in which the disciplinary boundaries of science can be re-negotiated to include practices from the fields of technology, engineering and mathematics. In integrated STEM learning, inquiry is seen more holistically as a fluid process in which the outcomes are not absolute but tentative. The fluidity of the inquiry process is reflected in the non-deterministic inquiry approach. This means that students can use science inquiry, engineering design, the design process or any other inquiry approach that fits to arrive at the solution. This hybridity of inquiry between science, common-sense and problems allows some familiar aspects of the science inquiry process to be applied to understand and generate solutions to familiar everyday problems. In attempting to infuse elements of common-sense inquiry with science inquiry in problem-solving, logic plays an important role in helping learners make connections. Hypothetically, we argue that with increasing exposure to less familiar ways of thinking such as those associated with science inquiry, students’ familiarity with scientific reasoning increases, and hence such ways of thinking gradually become part of their common-sense, which students could employ to solve future relevant problems. The theoretical ideas related to the complexities of problems, the different forms of inquiry afforded by different problems and the arguments for engaging in problem solving motivated us to examine empirically how learners engage with ill-structured problems to generate problem-centred knowledge. Of particular interest to us is how learners and teachers weave between practical and scientific reasoning as they inquire to integrate the components in the original problem into a unified whole.

3.1 Context

The integrated STEM activity in our study was planned using the S-T-E-M quartet instructional framework (Tan et al., 2019). The S-T-E-M quartet instructional framework positions complex, persistent and extended problems at its core and focusses on the vertical disciplinary knowledge and understanding of the horizontal connections between the disciplines that could be gained by learners through solving the problem (Tan et al., 2019). Figure 1 depicts the disciplinary aspects of the problem that was presented to the students. The activity has science and engineering as the two lead disciplines. It spanned three 1-h lessons and required students to both learn and apply relevant scientific conceptual knowledge to solve a complex, real-world problem through processes that resemble the engineering design process (Wheeler et al., 2019).

Figure 1: Connections across disciplines in integrated STEM activity

Figure 2: Frequency of different types of reasoning

In the first session (1 h), students were introduced to the problem and its context. The problem pertains to the issue of limited farmland in a land-scarce country that imports 90% of its food (Singapore Food Agency [SFA], 2020). The students were required to devise a solution by applying knowledge of the conditions required for photosynthesis and plant growth to design and build a vertical farming system to help farmers increase crop yield with limited farmland. This context was motivated by the government’s effort to generate interest and knowledge in farming to achieve the 30 by 30 goal—supplying 30% of the country’s nutritional needs by 2030. The scenario was a fictitious one where students were asked to produce 120 tonnes of Kailan (a type of leafy vegetable) with two hectares of land instead of the usual six hectares over a specific period. In addition to the abovementioned constraints, the teacher also discussed relevant success criteria for evaluating the solution with the students. Students then researched existing urban farming approaches. They were given reading materials pertaining to urban farming to help them understand the affordances and constraints of existing solutions. In the second session (6 h), students engaged in ideation to generate potential solutions. They then designed, built and tested their solution and had opportunities to iteratively refine it. Students were given a list of materials (e.g. mounting board, straws, ice-cream sticks, glue, etc.) that they could use to design their solutions. In the final session (1 h), students presented their solution and reflected on how well it met the success criteria. The prior scientific conceptual knowledge that students require to make sense of the problem includes knowledge related to plant nutrition, namely, the conditions for photosynthesis, the nutritional requirements of Kailan and the growth cycle of Kailan. The problem resembles a real-world problem that requires students to engage in some level of explanation of their design solution.

A total of 113 eighth graders (62 boys and 51 girls), 14-year-olds from six classes, and their teachers participated in the study. The students and their teachers were recruited as part of a larger study that examined the learning experiences of students when they work on integrated STEM activities that either begin with a problem, begin with a solution or are focused on the content. Invitations were sent to schools across the country and interested schools opted in to the study. For the study reported here, all students and teachers were from six classes within one school. The teachers had all undergone 3 h of professional development with one of the authors on ways of implementing the integrated STEM activity used in this study. During the professional development session, the teachers learnt about the rationale of the activity, familiarized themselves with the materials and clarified the intentions and goals of the activity. The students mostly worked in groups of three, although a handful of students chose to work independently. The group size was not critical for the analysis of talk in this study, as the analytic focus was on the kinds of knowledge applied rather than on collaborative or group think. We assumed that the types of inquiry adopted by teachers and students were largely dependent on the nature of the problem. Eighth graders were chosen for this study since lower secondary science offered at this grade level is thematic and integrated across biology, chemistry and physics. Furthermore, the topic of photosynthesis is taught under the theme of Interactions at eighth grade (CPDD, 2021). This thematic and integrated nature of science at eighth grade offered an ideal context and platform for integrated STEM activities to be trialled.

The final lessons in the series of three lessons in each of the six classes were analysed and reported in this study. Lessons where students worked on their solutions were not analysed because the recordings had poor audibility due to masking and physical distancing requirements under COVID-19 regulations. At the start of the lesson, the instructions given by the teacher were:

You are going to present your models. Remember the scenario that you were given at the beginning that you were tasked to solve using your model. …. In your presentation, you have to present your prototype and its features, what is so good about your prototype, how it addresses the problem and how it saves costs and space. So, this is what you can talk about during your presentation. ….. pay attention to the presentation and write down questions you like to ask the groups after the presentation… you can also critique their model, you can evaluate, critique and ask questions…. Some examples of questions you can ask the groups are? Do you think your prototype can achieve optimal plant growth? You can also ask questions specific to their models.

3.2 Data collection

Parental consent was sought a month before the start of data collection. The informed consent adhered to confidentiality and ethics guidelines as described by the Institutional Review Board. The data collection took place over a period of one month with weekly video recording. Two video cameras, one at the front and one at the back of the science laboratory, were set up. The front camera captured the students seated at the front, while the back camera recorded the teacher as well as the groups of students at the back of the laboratory. The video recordings were synchronized so that the events captured by each camera could be interpreted from different angles. After transcription of the raw video files, the identities of students were substituted with pseudonyms.

3.3 Data analysis

The video recordings were analysed using the qualitative content analysis approach. Qualitative content analysis allows for patterns or themes and meanings to emerge from the process of systematic classification (Hsieh & Shannon, 2005). Qualitative content analysis is an appropriate analytic method for this study as it allows us to systematically identify episodes of practical inquiry and science inquiry to map them to the purposes and outcomes of these episodes as each lesson unfolds.

In total, 6 h of video recordings in which students presented their ideas while the teachers served as facilitators and mentors were analysed. The video recordings were transcribed, and the transcripts were analysed using the NVivo software. Our unit of analysis is a single turn of talk (one utterance). We have chosen to use utterances as proxy indicators of reasoning practices based on the assumption that an utterance relates to both grammar and context. An utterance is a speech act that reveals both the meaning and intentions of the speaker within specific contexts (Li, 2008).

Our research analytical lens is also interpretative in nature, and the validity of our interpretation rests on inter-rater discussion and agreement. Each utterance at the speaker level in the transcripts was examined and coded as relevant to either practical reasoning or scientific reasoning based on its content. An utterance could be a comment by the teacher, a question by a student or a response by another student. Deductive coding was deployed with the two codes, practical reasoning and scientific reasoning, derived from the theoretical ideas of Dewey and Bereiter as described earlier. Practical reasoning refers to utterances that reflect commonsensical knowledge or the application of everyday understanding. Scientific reasoning refers to utterances that consist of scientifically oriented questions, scientific terms, or the use of empirical evidence to explain. Examples of each type of reasoning are highlighted in the following section. Each coded utterance was then reviewed for a detailed description of the events that led to that specific utterance. The description of the context leading to the utterance is considered an episode. The episodes and codes were discussed and agreed upon by two of the authors. The two coders simultaneously watched the videos to identify and code the episodes. The coders interpreted the content of each utterance, examined the context where the utterance was made and deduced the purpose of the utterance. Once each coder had established the sense-making aspect of the utterance in relation to the context, a code of either practical reasoning or scientific reasoning was assigned. Once that was completed, the two coders compared their coding for similarities and differences. They discussed the differences until agreement was reached. Through this process, an agreement of 85% was reached between the coders. Where disagreement persisted, the codes of the more experienced coder were adopted.
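The 85% figure above is a simple percent agreement. As an illustration only, the sketch below computes percent agreement over a pair of hypothetical code sequences and, for comparison, Cohen’s kappa, which corrects for chance agreement; the data are invented, and the paper does not state that a chance-corrected statistic was used.

```python
from collections import Counter

# Hypothetical codes from two raters over the same ten utterances:
# "P" = practical reasoning, "S" = scientific reasoning.
rater1 = ["P", "P", "S", "P", "S", "S", "P", "P", "S", "P"]
rater2 = ["P", "P", "S", "S", "S", "S", "P", "P", "S", "P"]

n = len(rater1)
agreed = sum(a == b for a, b in zip(rater1, rater2))
p_observed = agreed / n  # simple percent agreement

# Cohen's kappa: subtract the agreement expected by chance.
c1, c2 = Counter(rater1), Counter(rater2)
p_chance = sum(c1[k] * c2[k] for k in set(rater1) | set(rater2)) / n ** 2
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"percent agreement = {p_observed:.0%}, Cohen's kappa = {kappa:.2f}")
# -> percent agreement = 90%, Cohen's kappa = 0.80 for this toy data
```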

4 Results and Discussion

The specific STEM lessons analysed were those in which students presented the models of their solutions to the class for peer evaluation. Every group of students stood in front of the class and placed their model on the bench as they presented. There was also a board where they could sketch or write their explanations should they want to. The instructions given by the teacher to the students were to explain their models and state reasons for their design.

4.1 Prevalence of Reasoning

The 6 h of video consist of 1422 turns of talk. Three hundred and four turns of talk (21%) were identified as talk related to reasoning, either practical reasoning or scientific reasoning. Practical reasoning made up 62% of the reasoning turns while 38% were scientific reasoning (Fig. 2).
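These proportions follow directly from the coded counts; a quick check is sketched below. Note that the 188/116 split of reasoning turns is inferred from the stated 62%/38%, since the raw counts per category are not given in the text.

```python
total_turns = 1422      # all turns of talk in the 6 h of video
reasoning_turns = 304   # turns coded as reasoning
practical = 188         # inferred: 62% of 304 is approximately 188
scientific = reasoning_turns - practical  # 116, approximately 38%

print(f"reasoning share of all talk = {reasoning_turns / total_turns:.0%}")  # 21%
print(f"practical = {practical / reasoning_turns:.0%}, "
      f"scientific = {scientific / reasoning_turns:.0%}")  # 62% / 38%
```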

The two types of reasoning differ in the justifications that are used to substantiate the claims or decisions made. Table 1 describes the differences between the two categories of reasoning.

4.2 Applications of Scientific Reasoning

Instances of engagement with scientific reasoning (for instance, using scientific concepts to justify, raising scientifically oriented questions, or providing scientific explanations) revolved around the conditions for photosynthesis and the concept of energy conversion, and arose when students were presenting their ideas or when they were questioned by their peers. For example, in explaining the reason for including fish in their plant system, one group of students made a connection to cyclical energy transfer: “…so as the roots of the plants submerged in the water, faeces from the fish will be used as fertilizers so that the plant can grow”. The students considered how organic matter that is still trapped within waste materials can be released and taken up by plants to enhance growth. The application of scientific reasoning made their design one that was innovative and sustainable, as evaluated by the teacher. Some students attempted more ecofriendly designs by considering energy efficiencies through incorporating water turbines in their farming systems. They applied the concept of different forms of energy and energy conversion when their peers inquired about their design. The same scientific concepts were explained at different levels of detail by different students. At one level, the students explained in a purely descriptive manner what happens to the different entities in their prototypes, with implied changes to the forms of energy: “…spins then generates electricity. So right, when the water falls down, then it will spin. The water will fall on the fan blade thing, then it will spin and then it generates electricity. So, it saves electricity, and also saves water”. At another level, students defended their design through an explanation of energy conversion: “…because when the water flows right, it will convert gravitational potential energy so, when it reaches the bottom, there is not really much gravitational potential energy”. While these instances of applying scientific reasoning indicated that students have knowledge about the scientific phenomena and can apply them to assist in the problem-solving process, we are not able to establish whether students understood the science behind how the dynamo works to generate electricity. Students in eighth grade only need to know how a generator works at a descriptive level, and the specialized understanding of how a dynamo works is beyond the intended learning outcomes at this grade level.

The application of scientific concepts for justification may not always be accurate. For instance, the naïve conception that students have about plants only respiring at night and not in the day surfaced when one group of students tried to justify the growth rates of Kailan: “…I mean, they cannot be making food 24/7 and growing 24/7. They have nighttime for a reason. They need to respire”. These students do not appreciate that plants respire in the day as well, and hence respiration occurs 24/7. This naïve conception that plants only respire at night is one that is common among learners of biology (e.g. Svandova, 2014), since students learn that plants give off oxygen in the day and take in oxygen at night. The hasty conclusion from that observation is that plants carry out photosynthesis in the day and respire at night. The relative rates of photosynthesis and respiration were not considered by many students.

Besides naïve conceptions, engagement with scientific ideas to solve a practical problem offers opportunities for unusual and alternative ideas about science to surface. For instance, another group of students explained that they lined up their plants so that “they can take turns to absorb sunlight for photosynthesis”. These students appear to be explaining that as the sun moves, some plants may fall under shade depending on its position, and hence rates of photosynthesis depend on the position of the sun. However, this idea could also be interpreted as showing that (1) the students failed to appreciate that sunlight is everywhere, and (2) plants, unlike animals, particularly humans, do not have the concept of turn-taking. These diverse ideas held by students surfaced when students were given opportunities to apply their knowledge of photosynthesis to solve a problem.

4.3 Applications of Practical Reasoning

Teachers and students used more practical reasoning during an integrated STEM activity requiring both science and engineering practices, as seen from the 62% occurrence of practical reasoning compared with 38% for scientific reasoning. The intention of the activity to integrate students’ scientific knowledge related to plant nutrition with the engineering practice of building a model of a vertical farming system could be the reason for the prevalence of practical reasoning. The practical reasoning used related to structural design considerations of the farming system, such as how watering, lighting and harvesting can be carried out in the most efficient manner. Students defended the strengths of their designs using logic based on their everyday experiences. In the excerpt below (transcribed verbatim), we see students applying their everyday experience that when something is “thinner” (likely meaning narrower), it logically saves space, and that to reach a higher level, one stops the machine and climbs up.

Excerpt 1. “Thinner, more space” Because it is more thinner, so like in terms of space, it’s very convenient. So right, because there is – because it rotates right, so there is this button where you can stop it. Then I also installed steps, so that – because there are certain places you can’t reach even if you stop the – if you stop the machine, so when you stop it and you climb up, and then you see the condition of the plants, even though it costs a lot of labour, there is a need to have an experienced person who can grow plants. Then also, when like – when water reach the plants, cos the plants I want to use is soil-based, so as the water reach the soil, the soil will xxx, so like the water will be used, and then we got like – and then there’s like this filter that will filter like the dirt.

In the examples of practical reasoning, we were not able to identify instances where students and teachers engaged with discussion around trade-offs and optimisation. Understanding constraints, trade-offs and optimisations are important ideas in the informed design matrix for engineering suggested by Crismond and Adams (2012). For instance, utterances such as “everything will be reused”, “we will be saving space”, “it looks very flimsy” or “so that it can contains [sic] the plants” were used. These utterances were made both by students justifying their own prototypes and by peers who challenged the designs of others. Longer responses involving practical reasoning were made based on common-sense, everyday logic: “…the product does not require much manpower, so other than one or two supervisors like I said just now, to harvest the Kailan, hence, not too many people need to be used, need to be hired to help supervise the equipment and to supervise the growth”. We infer that the higher incidence of utterances related to practical reasoning could be due to the presence of the concrete artefacts being shown, with the students and teachers focused on questioning the structure at hand. This inference was made as the instructions given by the teacher at the start of students’ presentations focused largely on the model rather than on the scientific concepts or reasoning behind the model.

4.4 Intersection Between Scientific and Practical Reasoning

Comparing science subject matter knowledge and problem-solving to the ideas of categories and placements (Buchanan, 1992), subject matter is analogous to categories, where meanings are fixed with well-established epistemic practices and norms. The problem-solving process and design of solutions are likened to placements, where boundaries are less rigid, hence opening opportunities for students’ personal experiences and ideas to be presented. Placements allow students to apply their knowledge from daily experiences and common-sense logic to justify decisions. Common-sense knowledge and logic are more accessible, and hence we observe a higher frequency of their use. Comparatively, while science subject matter (categories) is also used, it is observed less frequently. This could be due either to less familiarity with the subject matter or to a lack of appropriate opportunities to apply it in practical problem solving. The challenge for teachers during implementation of a STEM problem-solving activity, therefore, lies in balancing the application of scientific and practical reasoning to deepen understanding of disciplinary knowledge in the context of solving a problem in a meaningful manner.

Our observations suggest that engaging students in practical inquiry tasks with some engineering demands, such as the design of modern farm systems, offers opportunities for them to convert their personal lived experiences into feasible concrete ideas that they can share in a public space for critique. The peer critique following the sharing of their practical ideas allows both practical and scientific questions to be asked and gives students the chance to defend their ideas. For instance, after one group of students presented their prototype that had silvered surfaces, a student asked: “what is the function of the silver panels?”, to which his peers replied: “Makes the light bounce. Bounce the sunlight away and then to other parts of the tray.” This exchange indicated that students applied their knowledge that shiny silvered surfaces reflect light, and they used this knowledge to disperse the light to other trays where the crops were growing. An example of a practical question asked was “what is the purpose of the ladder?”, to which the students replied: “To take the plants – to refill the plants, the workers must climb up”. While the process of presentation and peer critique mimics peer review in the science inquiry process, the conceptual knowledge of science may not always be evident, as students paid more attention to the design constraints such as lighting, watering, and space that were set in the activity. Given the context of growing plants, engagement with the science behind the nutritional requirements of plants, the process of photosynthesis, and the adaptations of plants could be more deliberately explored.

5 Conclusion

The goal of our work lies in applying the theoretical ideas of Dewey and Bereiter to better understand reasoning practices in integrated STEM problem solving. We argue that this is a worthy pursuit, as it helps us better understand the roles of scientific reasoning in practical problem solving. One of the goals of integrated STEM education in schools is to enculturate students into the practices of science, engineering and mathematics, which include disciplinary conceptual knowledge, epistemic practices, and social norms (Kelly & Licona, 2018). In the integrated form, the boundaries and approaches to STEM learning are more diverse compared with monodisciplinary ways of problem solving. For instance, in integrated STEM problem solving, besides scientific investigations and explanations, students are also required to understand constraints, design optimal solutions within specific parameters and even construct prototypes. Students could benefit from experiences that allow them to learn the ways of speaking, doing and being needed to participate meaningfully in integrated STEM problem solving in schools.

With reference to the first research question, What is the extent of practical and scientific reasoning in integrated STEM problem solving?, our analysis suggests that there are fewer instances of scientific reasoning compared with practical reasoning. Considering the intention of integrated STEM learning and adopting Bereiter’s idea that students should learn higher-order conceptual knowledge through engagement with problem solving, we argue for a need for scientific reasoning to be featured more strongly in integrated STEM lessons so that students can gain higher-order scientific conceptual knowledge. While the lessons observed were strong in design and building, what was missing in generating solutions was engagement in investigations, where learners collect or are presented with data and make decisions about the data to allow them to assess how viable the solutions are. Integrated STEM problems can be designed so that science inquiry can be infused, such as carrying out investigations to figure out relationships between variables. Duschl and Bybee (2014) have argued for the need to engage students in problematising science inquiry and making choices about what works and what does not.

With reference to the second research question, What is achieved through practical and scientific reasoning during integrated STEM problem solving?, our analyses suggest that utterances of practical reasoning are typically used to justify the physical design of the prototype. These utterances rely largely on what is observable and are associated with basic-level knowledge and experiences. The higher frequency of utterances related to practical reasoning and the nature of these utterances suggest that engagement with practical reasoning is more accessible, since it relates more to students’ lived experiences and common-sense. Bereiter (1992) has urged educators to engage learners in learning that is beyond basic-level knowledge, since accumulation of basic-level knowledge does not lead to higher-level conceptual learning. Students should be encouraged also to use scientific knowledge to justify their prototype design and to apply scientific evidence and logic to support their ideas. Engagement with scientific reasoning is preferred, as the conceptual knowledge, epistemic practices and social norms of science are more widely recognised compared with practical reasoning, which is likely to be more varied since it relies on personal experiences and common-sense. This leads us to assert that both context and content are important in integrated STEM learning. Understanding the context or the solution without understanding the scientific principles that make it work makes the learning less meaningful since we “…cannot strip learning of its context, nor study it in a ‘neutral’ context. It is always situated, always related to some ongoing enterprise” (Bruner, 2004, p. 20).

To further this discussion on how integrated STEM learning experiences harness the ideas of practical and scientific reasoning to move learners from basic-level knowledge to higher-order conceptual knowledge, we propose further studies that involve working with teachers to identify and create relevant problems-of-explanations that focus on feasible, worthy inquiry ideas, such as those related to specific aspects of transportation, alternative energy sources and clean water that have impact on the local community. The design of these problems can incorporate opportunities for systematic scientific investigations and be scaffolded so that there are opportunities to engage in the epistemic practices of the constituent disciplines of STEM. Researchers could then examine the impact of problems-of-explanations on students’ learning of higher-order scientific concepts. During the problem-solving process, more attention can be given to eliciting students’ initial and unfolding (practical) ideas and using them as a basis to start the science inquiry process. Researchers can examine how to encourage discussions that focus on making meaning of scientific phenomena that are embedded within specific problems. This will help students to appreciate how data can be used as evidence to support scientific explanations as well as justifications for the solutions to problems. With evidence, learners can be guided to reason about the phenomena with explanatory models. These aspects should move engagement in integrated STEM problem solving from being purely practical to being explanatory.

6 Limitations

There are four key limitations of our study. Firstly, the degree of generalisation of our observations is limited. This study set out to illustrate how Dewey’s and Bereiter’s ideas can be used as a lens to examine knowledge used in problem solving. As such, the findings that we report here are limited in their ability to generalise across different contexts and problems. Secondly, the lessons that were analysed came from teacher-frontal teaching and group presentations of solutions, and excluded students’ group discussions. We acknowledge that there could be talk involving practical and scientific reasoning within group work. There were two practical considerations for choosing to analyse the first and presentation segments of the suite of lessons. The first is that these two lessons involved participation from everyone in class, and we wanted to survey the use of practical and scientific reasoning by the students as a class. The second is methodological: clarity of utterances is important for accurate analysis, and as students were wearing face masks during data collection, their utterances during group discussions lacked the clarity needed for accurate transcription and analysis. Thirdly, insights from this study were gleaned from a small sample of six classes of students. Further work could involve more classes of students, although that would require more resources devoted to analysis of the videos. Finally, the number of students varied across groups, and this could potentially affect the reasoning practices during discussions.

Data Availability

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Aikenhead, G. S. (2006). Science education for everyday life: Evidence-based practice . Teachers College Press.


Bereiter, C. (1992). Referent-centred and problem-centred knowledge: Elements of an educational epistemology. Interchange, 23 (4), 337–361.


Breiner, J. M., Johnson, C. C., Harkness, S. S., & Koehler, C. M. (2012). What is STEM? A discussion about conceptions of STEM in education and partnership. School Science and Mathematics, 112(1), 3–11. https://doi.org/10.1111/j.1949-8594.2011.00109.x

Brown, M. J. (2012). John Dewey’s logic of science. HOPOS: The Journal of the International Society for the History of Philosophy of Science, 2(2), 258–306.

Bruner, J. (2004). The psychology of learning: A short history. Daedalus (Winter), 13–20.

Bryan, L. A., Moore, T. J., Johnson, C. C., & Roehrig, G. H. (2016). Integrated STEM education. In C. C. Johnson, E. E. Peters-Burton, & T. J. Moore (Eds.), STEM road map: A framework for integrated STEM education (pp. 23–37). Routledge.

Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8 (2), 5–21.

Bybee, R. W. (2013). The case for STEM education: Challenges and opportunities . NSTA Press.

Crismond, D. P., & Adams, R. S. (2012). The informed design teaching and learning matrix. Journal of Engineering Education, 101 (4), 738–797.

Cunningham, C. M., & Lachapelle, P. (2016). Experiences to engage all students. Educational Designer , 3(9), 1–26. https://www.educationaldesigner.org/ed/volume3/issue9/article31/

Curriculum Planning and Development Division [CPDD] (2021). 2021 Lower secondary science express/ normal (academic) teaching and learning syllabus . Singapore: Ministry of Education.

Delahunty, T., Seery, N., & Lynch, R. (2020). Exploring problem conceptualization and performance in STEM problem solving contexts. Instructional Science, 48 , 395–425. https://doi.org/10.1007/s11251-020-09515-4

Dewey, J. (1938). Logic: The theory of inquiry . Henry Holt and Company Inc.

Dewey, J. (1910a). Science as subject-matter and as method. Science, 31 (787), 121–127.

Dewey, J. (1910b). How we think . D.C. Heath & Co Publishers.


Duschl, R. A., & Bybee, R. W. (2014). Planning and carrying out investigations: An entry to learning and to teacher professional development around NGSS science and engineering practices. International Journal of STEM Education, 1(12). https://doi.org/10.1186/s40594-014-0012-6

Gale, J., Alemdar, M., Lingle, J., & Newton, S. (2020). Exploring critical components of an integrated STEM curriculum: An application of the innovation implementation framework. International Journal of STEM Education, 7(5). https://doi.org/10.1186/s40594-020-0204-1

Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15 (9), 1277–1288.

Jonassen, D. H. (2000). Toward a design theory of problem solving. ETR&D, 48 (4), 63–85.

Kelly, G., & Licona, P. (2018). Epistemic practices and science education. In M. R. Matthews (Ed.), History, philosophy and science teaching: New perspectives (pp. 139–165). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-62616-1 .

Lee, O., & Luykx, A. (2006). Science education and student diversity: Synthesis and research agenda . Cambridge University Press.

Li, D. (2008). The pragmatic construction of word meaning in utterances. Journal of Chinese Language and Computing, 18 (3), 121–137.

National Research Council. (1996). The National Science Education standards . National Academy Press.

National Research Council (2000). Inquiry and the national science education standards: A guide for teaching and learning. Washington, DC: The National Academies Press. https://doi.org/10.17226/9596 .

OECD (2018). The future of education and skills: Education 2030. Downloaded on October 3, 2020 from https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf

Park, W., Wu, J.-Y., & Erduran, S. (2020) The nature of STEM disciplines in science education standards documents from the USA, Korea and Taiwan: Focusing on disciplinary aims, values and practices.  Science & Education, 29 , 899–927.

Pleasants, J. (2020). Inquiring into the nature of STEM problems: Implications for pre-college education. Science & Education, 29 , 831–855.

Roehrig, G. H., Dare, E. A., Ring-Whalen, E., & Wieselmann, J. R. (2021). Understanding coherence and integration in integrated STEM curriculum. International Journal of STEM Education, 8(2), https://doi.org/10.1186/s40594-020-00259-8

SFA (2020). The food we eat . Downloaded on May 5, 2021 from https://www.sfa.gov.sg/food-farming/singapore-food-supply/the-food-we-eat

Svandova, K. (2014). Secondary school students’ misconceptions about photosynthesis and plant respiration: Preliminary results. Eurasia Journal of Mathematics, Science, & Technology Education, 10 (1), 59–67.

Tan, M. (2020). Context matters in science education. Cultural Studies of Science Education . https://doi.org/10.1007/s11422-020-09971-x

Tan, A.-L., Teo, T. W., Choy, B. H., & Ong, Y. S. (2019). The S-T-E-M Quartet. Innovation and Education , 1 (1), 3. https://doi.org/10.1186/s42862-019-0005-x

Wheeler, L. B., Navy, S. L., Maeng, J. L., & Whitworth, B. A. (2019). Development and validation of the Classroom Observation Protocol for Engineering Design (COPED). Journal of Research in Science Teaching, 56 (9), 1285–1305.

World Economic Forum (2020). Schools of the future: Defining new models of education for the fourth industrial revolution. Retrieved on Jan 18, 2020 from https://www.weforum.org/reports/schools-of-the-future-defining-new-models-of-education-for-the-fourth-industrial-revolution/


Acknowledgements

The authors would like to acknowledge the contributions of the other members of the research team who gave their comment and feedback in the conceptualization stage.

This study is funded by Office of Education Research grant OER 24/19 TAL.

Author information

Authors and affiliations

Natural Sciences and Science Education, meriSTEM@NIE, National Institute of Education, Nanyang Technological University, Singapore, Singapore

Aik-Ling Tan, Yann Shiou Ong, Yong Sim Ng & Jared Hong Jie Tan


Contributions

The first author conceptualized, researched, read, analysed and wrote the article.

The second author worked on compiling the essential features and the variations tables.

The third and fourth authors worked with the first author on the ideas and refinements of the idea.

Corresponding author

Correspondence to Yann Shiou Ong .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.



About this article

Tan, AL., Ong, Y.S., Ng, Y.S. et al. STEM Problem Solving: Inquiry, Concepts, and Reasoning. Sci & Educ 32 , 381–397 (2023). https://doi.org/10.1007/s11191-021-00310-2


Accepted: 28 November 2021

Published: 29 January 2022

Issue Date: April 2023

DOI: https://doi.org/10.1007/s11191-021-00310-2


  • Practical Inquiry
  • Science Inquiry
  • Referent-centered knowledge
  • Problem-centered knowledge


The scientific method

Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis , or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.
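Because the method is a loop rather than a straight line, it can help to see the iteration written out. Below is a minimal, purely illustrative Python sketch of the hypothesize-test-iterate cycle, borrowing the toaster scenario from the example that follows; the `run_experiment` stand-in and its hard-coded "ground truth" are hypothetical, not part of the original lesson:

```python
# Minimal sketch of the observe -> hypothesize -> predict -> test -> iterate loop.
# A real experiment would replace run_experiment with a physical test.

def run_experiment(hypothesis):
    """Stand-in for a real test: report whether the prediction held."""
    # Hypothetical ground truth for this sketch: the toaster's wire is broken.
    ground_truth = {"outlet is broken": False, "toaster wire is broken": True}
    return ground_truth.get(hypothesis, False)

hypotheses = ["outlet is broken", "toaster wire is broken"]

for hypothesis in hypotheses:  # iterate: each failed test prompts a new hypothesis
    if run_experiment(hypothesis):
        print(f"Supported: '{hypothesis}' -- run follow-up tests to confirm it.")
        break
    print(f"Not supported: '{hypothesis}' -- form a new hypothesis and retest.")
```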

Scientific method example: Failure to toast

1. Make an observation.

2. Ask a question.

3. Propose a hypothesis.

4. Make predictions.

5. Test the predictions.

  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

Logical possibility

Practical possibility

Building a body of evidence

6. Iterate.

  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.


The 6 Scientific Method Steps and How to Use Them


When you’re faced with a scientific problem, solving it can seem like an impossible prospect. There are so many possible explanations for everything we see and experience—how can you possibly make sense of them all? Science has a simple answer: the scientific method.

The scientific method is a method of asking and answering questions about the world. These guiding principles give scientists a model to work through when trying to understand the world, but where did that model come from, and how does it work?

In this article, we’ll define the scientific method, discuss its long history, and cover each of the scientific method steps in detail.

What Is the Scientific Method?

At its most basic, the scientific method is a procedure for conducting scientific experiments. It’s a set model that scientists in a variety of fields can follow, going from initial observation to conclusion in a loose but concrete format.

The number of steps varies, but the process begins with an observation, progresses through an experiment, and concludes with analysis and sharing data. One of the most important pieces of the scientific method is skepticism—the goal is to find truth, not to confirm a particular thought. That requires reevaluation and repeated experimentation, as well as examining your thinking through rigorous study.

There are in fact multiple scientific methods, as the basic structure can be easily modified. The one we typically learn about in school is the basic method, based in logic and problem solving, typically used in “hard” science fields like biology, chemistry, and physics. It may vary in other fields, such as psychology, but the basic premise of making observations, testing, and continuing to improve a theory from the results remains the same.


The History of the Scientific Method

The scientific method as we know it today is based on thousands of years of scientific study. Its development goes all the way back to ancient Mesopotamia, Greece, and India.

The Ancient World

In ancient Greece, Aristotle devised an inductive-deductive process , which weighs broad generalizations from data against conclusions reached by narrowing down possibilities from a general statement. However, he favored deductive reasoning, as it identifies causes, which he saw as more important.

Aristotle wrote a great deal about logic and many of his ideas about reasoning echo those found in the modern scientific method, such as ignoring circular evidence and limiting the number of middle terms between the beginning of an experiment and the end. Though his model isn’t the one that we use today, the reliance on logic and thorough testing are still key parts of science today.

The Middle Ages

The next big step toward the development of the modern scientific method came in the Middle Ages, particularly in the Islamic world. Ibn al-Haytham, a physicist from what we now know as Iraq, developed a method of testing, observing, and deducing for his research on vision. Al-Haytham was critical of Aristotle’s lack of inductive reasoning, and induction played an important role in his own research.

Other scientists, including Abū Rayhān al-Bīrūnī, Ibn Sina, and Robert Grosseteste also developed models of scientific reasoning to test their own theories. Though they frequently disagreed with one another and Aristotle, those disagreements and refinements of their methods led to the scientific method we have today.

Following those major developments, particularly Grosseteste’s work, Roger Bacon developed his own cycle of observation (seeing that something occurs), hypothesis (making a guess about why that thing occurs), experimentation (testing that the thing occurs), and verification (an outside person ensuring that the result of the experiment is consistent).

After joining the Franciscan Order, Bacon was granted a special commission to write about science; typically, Friars were not allowed to write books or pamphlets. With this commission, Bacon outlined important tenets of the scientific method, including causes of error, methods of knowledge, and the differences between speculative and experimental science. He also used his own principles to investigate the causes of a rainbow, demonstrating the method’s effectiveness.

Scientific Revolution

Throughout the Renaissance, more great thinkers became involved in devising a thorough, rigorous method of scientific study. Francis Bacon brought inductive reasoning further into the method, whereas Descartes argued that the laws of the universe meant that deductive reasoning was sufficient. Galileo’s research was also inductive reasoning-heavy, as he believed that researchers could not account for every possible variable; therefore, repetition was necessary to eliminate faulty hypotheses and experiments.

All of this led to the birth of the Scientific Revolution , which took place during the sixteenth and seventeenth centuries. In 1660, a group of philosophers and physicians joined together to work on scientific advancement. After approval from England’s crown , the group became known as the Royal Society, which helped create a thriving scientific community and an early academic journal to help introduce rigorous study and peer review.

Previous generations of scientists had touched on the importance of induction and deduction, but Sir Isaac Newton proposed that both were equally important. This contribution helped establish the importance of multiple kinds of reasoning, leading to more rigorous study.

As science began to splinter into separate areas of study, it became necessary to define different methods for different fields. Karl Popper was a leader in this area—he established falsifiability as a criterion for science: a genuinely scientific claim must be testable in a way that could show it to be wrong. This was particularly tricky for “soft” sciences like psychology and social sciences, which require different methods. Popper’s theories furthered the divide between sciences like psychology and “hard” sciences like chemistry or physics.

Paul Feyerabend argued that Popper’s methods were too restrictive for certain fields, advocating instead a far less restrictive approach, famously summarized as “anything goes,” since great scientists had made discoveries without following the scientific method. Feyerabend suggested that throughout history scientists had adapted their methods as necessary, and that sometimes it would be necessary to break the rules. This approach suited social and behavioral scientists particularly well, leading to a more diverse range of models for scientists in multiple fields to use.


The Scientific Method Steps

Though different fields may have variations on the model, the basic scientific method is as follows:

#1: Make Observations 

Notice something, such as the air temperature during the winter, what happens when ice cream melts, or how your plants behave when you forget to water them.

#2: Ask a Question

Turn your observation into a question. Why is the temperature lower during the winter? Why does my ice cream melt? Why does my toast always fall butter-side down?

This step can also include doing some research. You may be able to find answers to these questions already, but you can still test them!

#3: Make a Hypothesis

A hypothesis is an educated guess of the answer to your question. Why does your toast always fall butter-side down? Maybe it’s because the butter makes that side of the bread heavier.

A good hypothesis leads to a prediction that you can test, phrased as an if/then statement. In this case, we can pick something like, “If toast is buttered, then it will hit the ground butter-first.”

#4: Experiment

Your experiment is designed to test whether your prediction about what will happen is true. A good experiment will test one variable at a time—for example, we’re trying to test whether butter weighs down one side of toast, making it more likely to hit the ground first.

The unbuttered toast is our control condition. If we determine the chance that a slice of unbuttered toast, marked with a dot, will hit the ground on a particular side, we can compare those results to our buttered toast to see if there’s a correlation between the presence of butter and which way the toast falls.

If we decided not to toast the bread, that would be introducing a new question—whether or not toasting the bread has any impact on how it falls. Since that’s not part of our test, we’ll stick with determining whether the presence of butter has any impact on which side hits the ground first.

#5: Analyze Data

After our experiment, we discover that both buttered toast and unbuttered toast have a 50/50 chance of hitting the ground on the buttered or marked side when dropped from a consistent height, straight down. It looks like our hypothesis was incorrect—it’s not the butter that makes the toast hit the ground in a particular way, so it must be something else.
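To make this analysis step concrete, here is a short Python sketch using made-up trial records (not data from the article): it tallies which side hit the ground first for the buttered toast and for the marked, unbuttered control, then compares the proportions:

```python
# Sketch of analyzing the toast-drop experiment; all trial data are hypothetical.
# Each list records which side hit the ground first across ten drops.

buttered = ["butter", "plain", "butter", "plain", "plain",
            "butter", "plain", "butter", "butter", "plain"]
unbuttered = ["marked", "plain", "plain", "marked", "marked",
              "plain", "marked", "plain", "marked", "plain"]

def proportion(trials, side):
    """Fraction of drops in which the given side hit the ground first."""
    return trials.count(side) / len(trials)

p_butter = proportion(buttered, "butter")    # treatment: buttered side down
p_marked = proportion(unbuttered, "marked")  # control: marked side down

print(f"Buttered side down: {p_butter:.0%}")
print(f"Marked side down:   {p_marked:.0%}")

# Similar rates in treatment and control suggest butter has no effect on how
# the toast lands, so the hypothesis is not supported and we iterate.
```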

Since we didn’t get the expected result, it’s back to the drawing board. Our hypothesis wasn’t correct, so we’ll need to start fresh. Now that you think about it, your toast seems to hit the ground butter-first when it slides off your plate, not when you drop it from a consistent height. That can be the basis for your new experiment.

#6: Communicate Your Results

Good science needs verification. Your experiment should be replicable by other people, so you can put together a report about how you ran your experiment to see if other people’s findings are consistent with yours.

This may be useful for class or a science fair. Professional scientists may publish their findings in scientific journals, where other scientists can read and attempt their own versions of the same experiments. Being part of a scientific community helps your experiments be stronger because other people can see if there are flaws in your approach—such as if you tested with different kinds of bread, or sometimes used peanut butter instead of butter—that can lead you closer to a good answer.


A Scientific Method Example: Falling Toast

We’ve run through a quick recap of the scientific method steps, but let’s look a little deeper by trying again to figure out why toast so often falls butter side down.

#1: Make Observations

At the end of our last experiment, where we learned that butter doesn’t actually make toast more likely to hit the ground on that side, we remembered that the times when our toast hits the ground butter side first are usually when it’s falling off a plate.

The easiest question we can ask is, “Why is that?”

We can actually search this online and find a pretty detailed answer as to why this is true. But we’re budding scientists—we want to see it in action and verify it for ourselves! After all, good science should be replicable, and we have all the tools we need to test out what’s really going on.

Why do we think that buttered toast hits the ground butter-first? We know it’s not because it’s heavier, so we can strike that out. Maybe it’s because of the shape of our plate?

That’s something we can test. We’ll phrase our hypothesis as, “If my toast slides off my plate, then it will fall butter-side down.”

Just seeing that toast falls off a plate butter-side down isn’t enough for us. We want to know why, so we’re going to take things a step further—we’ll set up a slow-motion camera to capture what happens as the toast slides off the plate.

We’ll run the test ten times, each time tilting the same plate until the toast slides off. We’ll make note of each time the butter side lands first and see what’s happening on the video so we can see what’s going on.

When we review the footage, we’ll likely notice that the bread starts to flip when it slides off the edge, changing how it falls in a way that didn’t happen when we dropped it ourselves.

That answers our question, but it’s not the complete picture —how do other plates affect how often toast hits the ground butter-first? What if the toast is already butter-side down when it falls? These are things we can test in further experiments with new hypotheses!

Now that we have results, we can share them with others who can verify our results. As mentioned above, being part of the scientific community can lead to better results. If your results were wildly different from the established thinking about buttered toast, that might be cause for reevaluation. If they’re the same, they might lead others to make new discoveries about buttered toast. At the very least, you have a cool experiment you can share with your friends!

Key Scientific Method Tips

Though science can be complex, the benefit of the scientific method is that it gives you an easy-to-follow means of thinking about why and how things happen. To use it effectively, keep these things in mind!

Don’t Worry About Proving Your Hypothesis

One of the important things to remember about the scientific method is that it’s not necessarily meant to prove your hypothesis right. It’s great if you do manage to guess the reason for something right the first time, but the ultimate goal of an experiment is to find the true reason for your observation to occur, not to prove your hypothesis right.

Good science sometimes means that you’re wrong. That’s not a bad thing—a well-designed experiment with an unanticipated result can be just as revealing, if not more, than an experiment that confirms your hypothesis.

Be Prepared to Try Again

If the data from your experiment doesn’t match your hypothesis, that’s not a bad thing. You’ve eliminated one possible explanation, which brings you one step closer to discovering the truth.

The scientific method isn’t something you’re meant to do exactly once to prove a point. It’s meant to be repeated and adapted to bring you closer to a solution. Even if your hypothesis is supported, a good scientist will run an experiment again to be sure that the results are replicable. You can even tweak a successful hypothesis to test another factor, such as if we redid our buttered toast experiment to find out whether different kinds of plates affect whether or not the toast falls butter-first. The more we test our hypothesis, the stronger it becomes!

What’s Next?

Want to learn more about the scientific method? These important high school science classes will no doubt cover it in a variety of different contexts.

Test your ability to follow the scientific method using these at-home science experiments for kids!

Need some proof that science is fun? Try making slime!




A Detailed Characterization of the Expert Problem-Solving Process in Science and Engineering: Guidance for Teaching and Assessment

  • Argenta M. Price
  • Candice J. Kim
  • Eric W. Burkholder
  • Amy V. Fritz
  • Carl E. Wieman

*Address correspondence to: Argenta M. Price ( E-mail Address: [email protected] ).

Department of Physics, Stanford University, Stanford, CA 94305


Graduate School of Education, Stanford University, Stanford, CA 94305

School of Medicine, Stanford University, Stanford, CA 94305

Department of Electrical Engineering, Stanford University, Stanford, CA 94305

A primary goal of science and engineering (S&E) education is to produce good problem solvers, but how to best teach and measure the quality of problem solving remains unclear. The process is complex, multifaceted, and not fully characterized. Here, we present a detailed characterization of the S&E problem-solving process as a set of specific interlinked decisions. This framework of decisions is empirically grounded and describes the entire process. To develop this, we interviewed 52 successful scientists and engineers (“experts”) spanning different disciplines, including biology and medicine. They described how they solved a typical but important problem in their work, and we analyzed the interviews in terms of decisions made. Surprisingly, we found that across all experts and fields, the solution process was framed around making a set of just 29 specific decisions. We also found that the process of making those discipline-general decisions (selecting between alternative actions) relied heavily on domain-specific predictive models that embodied the relevant disciplinary knowledge. This set of decisions provides a guide for the detailed measurement and teaching of S&E problem solving. This decision framework also provides a more specific, complete, and empirically based description of the “practices” of science.

INTRODUCTION

Many faculty members with new graduate students and many managers with employees who are recent college graduates have had similar experiences. Their advisees/employees have just completed a program of rigorous course work, often with distinction, but they seem unable to solve the real-world problems they encounter. The supervisor struggles to figure out exactly what the problem is and how they can guide the person in overcoming it. This paper provides a way to answer those questions in the context of science and engineering (S&E). By characterizing the problem-solving process of experts, it investigates the “mastery” performance level and specifies an overarching learning goal for S&E students, which can be taught and measured to improve teaching.

The importance of problem solving as an educational outcome has long been recognized, but too often postsecondary S&E graduates have serious difficulties when confronted with real-world problems ( Quacquarelli Symonds, 2018 ). This reflects two long-standing educational problems with regard to problem solving: how to properly measure it, and how to effectively teach it. We theorize that the root of these difficulties is that good “problem solving” is a complex multifaceted process, and the details of that process have not been sufficiently characterized. Better characterization of the problem-solving process is necessary to allow problem solving, and more particularly, the complex set of skills and knowledge it entails, to be measured and taught more effectively. We sought to create an empirically grounded conceptual framework that would characterize the detailed structure of the full problem-solving process used by skilled practitioners when solving problems as part of their work. We also wanted a framework that would allow use and comparison across S&E disciplines. To create such a framework, we examined the operational decisions (choices among alternatives that result in subsequent actions) that these practitioners make when solving problems in their discipline.

Various aspects of problem solving have been studied across multiple domains, using a variety of methods (e.g., Newell and Simon, 1972 ; Dunbar, 2000 ; National Research Council [NRC], 2012b ; Lintern et al. , 2018 ). These ranged from expert self-reflections (e.g., Polya, 1945 ), to studies on knowledge lean tasks to discover general problem-solving heuristics (e.g., Egan and Greeno, 1974 ), to comparisons of expert and novice performances on simplified problems across a variety of disciplines (e.g., Chase and Simon, 1973 ; Chi et al. , 1981 ; Larkin and Reif, 1979 ; Ericsson et al. , 2006 , 2018 ). These studies revealed important novice–expert differences—notably, that experts are better at identifying important features and have knowledge structures that allow them to reduce demands on working memory. Studies that specifically gave the experts unfamiliar problems in their disciplines also found that, relative to novices, they had more deliberate and reflective strategies, including more extensive planning and managing of their own behavior, and they could use their knowledge base to better define the problem ( Schoenfeld, 1985 ; Wineburg, 1998 ; Singh, 2002 ). While these studies focused on discrete cognitive steps of the individual, an alternative framing of problem solving has been in terms of “ecological psychology” of “situativity,” looking at how the problem solver views and interacts with the environment in terms of affordances and constraints ( Greeno, 1994 ). “Naturalistic decision making” is a related framework that specifically examines how experts make decisions in complex, real-world, settings, with an emphasis on the importance of assessing the situation surrounding the problem at hand ( Klein, 2008 ; Mosier et al. , 2018 ).

While this work on expertise has provided important insights into the problem-solving process, its focus has been limited. Most has focused on looking for cognitive differences between experts and novices using limited and targeted tasks, such as remembering the pieces on a chessboard ( Chase and Simon, 1973 ) or identifying the important concepts represented in an introductory physics textbook problem ( Chi et al. , 1981 ). It did not attempt to explore the full process of solving, particularly for solving the type of complex problem that a scientist or engineer encounters as a member of the workforce (“authentic problems”).

There have also been many theoretical proposals as to expert problem-solving practices, but with little empirical evidence as to their completeness or accuracy (e.g., Polya, 1945 ; Heller and Reif, 1984 ; Organisation for Economic Cooperation and Development [OECD], 2019 ). The work of Dunbar (2000) is a notable exception to the lack of empirical work, as his group did examine how biologists solved problems in their work by analyzing lab meetings held by eight molecular biology research groups. His groundbreaking work focused on creativity and discovery in the research process, and he identified the importance of analogical reasoning and distributed reasoning by scientists in answering research questions and gaining new insights. Kozma et al. (2000) studied professional chemists solving problems, but their work focused only on the use of specialized representations.

The “cognitive systems engineering” approach (Lintern et al., 2018) takes a more empirically based approach, looking at experts solving problems in their work, and as such tends to span aspects of both the purely cognitive and the ecological psychological theories. It uses both observations of experts in authentic work settings and retrospective interviews about how experts carried out particular work tasks. This theoretical framing and the experimental methods are similar to what we use, particularly in the “naturalistic decision making” area of research (Mosier et al., 2018). That work looks at how critical decisions are made in solving specific problems in their real-world setting. The decision process is studied primarily through retrospective interviews about challenging cases faced by experts. As described below, our methods are adapted from that work (Crandall et al., 2006), though there are some notable differences in focus and field. A particular difference is that we focused on identifying what decisions are to be made, which is more straightforward to establish from retrospective interviews than how those decisions are made. We share the same ultimate goal, however: to improve the training/teaching of the respective expertise.

Problem solving is central to the processes of science, engineering, and medicine, so research and educational standards about scientific thinking and the process and practices of science are also relevant to this discussion. Work by Osborne and colleagues describes six styles of scientific reasoning that can be used to explain how scientists and students approach different problems (Kind and Osborne, 2016). There are also numerous educational standards and frameworks that, based on theory, lay out the skills or practices that science and engineering students are expected to master (e.g., American Association for the Advancement of Science [AAAS], 2011; Next Generation Science Standards Lead States, 2013; OECD, 2019; ABET, 2020). More specifically related to the training of problem solving, Priemer et al. (2020) synthesize literature on problem solving and scientific reasoning to create a “STEM [science, technology, engineering, and mathematics] and computer science framework for problem solving” that lays out steps that could be involved in a student’s problem-solving efforts across STEM fields. These frameworks provide a rich groundwork, but they have several limitations: 1) They are based on theoretical ideas of the practice of science, not empirical evidence, so while each framework contains overlapping elements of the problem-solving process, it is unclear whether they capture the complete process. 2) They are focused on school science, rather than the actual problem solving that practitioners carry out and that students will need to carry out in future STEM careers. 3) They are typically underspecified, so that the steps or practices apply generally, but it is difficult to translate them into measurable learning goals for students to practice. Working to address that, Clemmons et al. (2020) recently sought to operationalize the core competencies from the Vision and Change report (AAAS, 2011), establishing a set of skills that biology students should be able to master.

Our work seeks to augment this prior work by building a conceptual framework that is empirically based, grounded in how scientists and engineers solve problems in practice instead of in school. We base our framework on the decisions that need to be made during problem solving, which makes each item clearly defined for practice and assessment. In our analysis of expert problem solving, we empirically identified the entire problem-solving process. We found this includes deciding when and how to use the steps and skills defined in the work described previously but also includes additional elements. There are also questions in the literature about how generalizable across fields a particular set of practices may be. Here, we present the first empirical examination of the entire problem-solving process, and we compare that process across many different S&E disciplines.

A variety of instructional methods have been used to try and teach science and engineering problem solving, but there has been little evidence of their efficacy at improving problem solving (for a review, see NRC, 2012b ). Research explicitly on teaching problem solving has primarily focused on textbook-type exercises and utilized step-by-step strategies or heuristics. These studies have shown limited success, often getting students to follow specific procedural steps but with little gain in actually solving problems and showing some potential drawbacks ( Heller and Reif, 1984 ; Heller et al. , 1992 ; Huffman, 1997 ; Heckler, 2010 ; Kuo et al. , 2017 ). As discussed later, the framework presented here offers guidance for different and potentially more effective approaches to teaching problem solving.

These challenges can be illustrated by considering three different problems taken from courses in mechanical engineering, physics, and biology, respectively (Figure 1). All of these problems are challenging, requiring considerable knowledge and effort by the student to solve correctly. Problems such as these are routinely used to assess students’ problem-solving skills, and students are expected to learn such skills by practicing problems like them. However, it is obvious to any expert in the respective fields that, while these problems might be complicated and difficult to answer, they are vastly different from solving authentic problems in that field. They all have well-defined answers that can be reached by straightforward solution paths. More specifically, they do not involve needing to use judgment to make any decisions based on limited information (i.e., information insufficient to specify a correct decision with certainty). The relevant concepts and information and assumptions are all stated or obvious. The failure of problems like these to capture the complexity of authentic problem solving underlies the failure of efforts to measure and teach problem solving. Recognizing this failure motivated our efforts to more completely characterize the problem-solving process of practicing scientists, engineers, and doctors.

FIGURE 1. Example problems from courses or textbooks in mechanical engineering, physics and biology. Problems from: Mechanical engineering: Wayne State mechanical engineering sample exam problems (Wayne State, n.d.), Physics: A standard physics problem in nearly every advanced quantum mechanics course, Biology: Molecular Biology of the Cell 6th edition, Chapter 7 end of chapter problems ( Alberts et al ., 2014 ).

We are building on the previous work studying expert–novice differences and problem solving but taking a different direction. We sought to create an empirically grounded framework that would characterize the detailed structure of the full problem-solving process by focusing on the operational decisions that skilled practitioners make when successfully solving authentic problems in their scientific, engineering, or medical work. We chose to identify the decisions that S&E practitioners made, because, unlike potentially nebulous skills or general problem-solving steps that might change with the discipline, decisions are sufficiently specified that they can be individually practiced by students and measured by instructors or departments. The authentic problems that we analyzed are typical problems practitioners encounter in “doing” the science or engineering entailed in their jobs. In the language of traditional problem-solving and expertise research, such authentic problems are “ill-structured” (Simon, 1973) and require “adaptive expertise” (Hatano and Inagaki, 1986) to solve. However, our authentic problems are considerably more complex and unstructured than what is normally considered in those literatures, because not only do they lack a clear solution path, but in many cases, it is not clear a priori that they have any solution at all. Determining that, and whether the problem needs to be redefined to be soluble, is part of the successful expert solution process. Another way in which our set of decisions goes beyond the characterization of what is involved in adaptive expertise is the prominent role of making judgments with limited information.

A common reaction of scientists and engineers to seeing the list of decisions we obtain as our primary result is, “Oh, yes, these are things I always do in solving problems. There is nothing new here.” It is comforting that these decisions all look familiar; that supports their validity. However, what is new is not that experts are making such decisions, but rather that there is a relatively small but complete set of decisions that has now been explicitly identified and that applies so generally.

We have used a much larger and broader sample of experts in this work than used in prior expert–novice studies, and we used a more stringent selection criterion. Previous empirical work has typically involved just a few experts, almost always in a single domain, and included graduate students as “experts” in some cases. Our semistructured interview sample was 31 experienced practitioners from 10 different disciplines of science, engineering, and medicine, with demonstrated competence and accomplishments well beyond those of most graduate students. Also, approximately 25 additional experts from across science, engineering, and medicine served as consultants during the planning and execution of this work.

Our research question was: What are the decisions experts make in solving authentic problems, and to what extent is this set of decisions to be made consistent both within and across disciplines?

Our approach was designed to identify the level of consistency and unique differences across disciplines. Our hypothesis was that there would be a manageable number (20–50) of decisions to be made, with a large amount of overlap of decisions made between experts within each discipline and a substantial but smaller overlap across disciplines. We believed that if we had found that every expert and/or discipline used a large and completely unique set of decisions, it would have been an interesting research result but of little further use. If our hypothesis turned out to be correct, we expected that the set of decisions obtained would have useful applications in guiding teaching and assessment, as they would show how experts in the respective disciplines applied their content knowledge to solve problems and hence provide a model for what to teach. We were not expecting to find the nearly complete degree of overlap in the decisions made across all the experts.

We first conducted 22 relatively unstructured interviews with a range of S&E experts, in which we asked about problem-solving expertise in their fields. From these interviews, we developed an initial list of decisions to be made in S&E problem solving. To refine and validate the list, we then carried out a set of 31 semistructured interviews in which S&E experts chose a specific problem from their work and described the solution process in detail. The semistructured interviews were coded for the decisions represented, either explicitly stated or implied by a choice of action. This provided a framework of decisions that characterize the problem-solving process across S&E disciplines. The research was approved by the Stanford Institutional Review Board (IRB no. 48785), and informed consent was obtained from all the participants.

This work involved interviewing many experts across different fields. We defined experts as practicing scientists, engineers, or physicians with considerable experience working as faculty at highly rated universities or having several years of experience working in moderately high-level technical positions at successful companies. We also included a few longtime postdocs and research staff in biosciences to capture more details of experimental decisions from which faculty members in those fields often were more removed. This definition of expert allows us to identify the practices of skilled professionals; we are not studying what makes only the most exceptional experts unique.

Experts were volunteers recruited through direct contact via the research team's personal and professional networks and referrals from experts in our networks. This recruitment method likely biased our sample toward people who experienced relatively similar training (most were trained in STEM disciplines at U.S. universities within the last 15–50 years). Within this limitation, we attempted to get a large range of experts by field and experience. This included people from 10 different fields (including molecular biology/biochemistry, ecology, and medicine), 11 U.S. universities, and nine different companies or government labs, and the sample was 33% female (though our engineering sample only included one female). The medical experts were volunteers from a select group of medical school faculty chosen to serve as clinical reasoning mentors for medical students at a prestigious university. We only contacted people who met our criteria for being an “expert,” and everyone who volunteered was included in the study. Most of the people who were contacted volunteered, and the only reason given for not volunteering was insufficient time. Other than their disciplinary expertise, there was little to distinguish these experts beyond the fact they were acquaintances with members of the team or acquaintances of acquaintances of team or project advisory board members. The precise number from each field was determined largely by availability of suitable experts.

We defined an “authentic problem” to be one that these experts solve in their actual jobs. Generally, this meant research projects for the science and engineering faculty, design problems for the industry engineers, and patient diagnoses for the medical doctors. Such problems are characterized by complexity, with many factors involved and no obvious solution process, and involve substantial time, effort, and resources. Such problems involve far more complexity and many more decisions, particularly decisions with limited information, than the typical problems used in previous problem-solving research or used with students in instructional settings.

Creating an Initial List of Problem-Solving Decisions

We first interviewed 22 experts (Table 1), most of whom were faculty at a prestigious university, asking them to discuss expertise and problem solving in their fields as it related to their own experiences. This usually resulted in their discussing examples of one or more problems they had solved. Based on the first seven interviews, plus reflections on personal experience from the research team and review of the literature on expert problem solving and teaching of scientific practices (Ericsson et al., 2006; NRC, 2012a; Wieman, 2015), we created a generic list of decisions that were made in S&E problem solving. In the remaining 15 unstructured interviews, we also provided the experts with our list and asked them to comment on any additions or deletions they would suggest. Faculty who had close supervision of graduate students and industry experts who had extensively supervised inexperienced staff were particularly informative. Their observations of the way inexperienced people could fail made them sensitive to the different elements of expertise and where incorrect decisions could be made. Although we initially expected to find substantial differences across disciplines, from early in the process, we noted a high degree of overlap across the interviews in the decisions that were described.

Number of interviews conducted, by field of interviewee

| Discipline | Informal interviews (creation of initial list) | Structured interviews (validation/refinement) | Notes |
|---|---|---|---|
| Biology (5 biochem/molecular bio, 2 cell bio, 1 plant bio, 1 immunology, 1 ecology) | 2 | 8 | Female: 6, URM: 2; 5 faculty, 2 industry, 3 acad. staff/postdoc (year 5+) |
| Medicine (6 internal med or pediatrics, 1 oncology, 2 surgery) | 4 | 6 | Female: 4, URM: 1; all medical faculty |
| Physics (4 experiment, 3 theory) | 2 | 5 | Female: 1, URM: 1; all faculty |
| Electrical Engineering | 4 | 3 | 2 faculty, 4 industry, 1 acad. staff |
| Chemical Engineering | 2 | 2 | Female: 1; 3 industry, 1 acad. staff |
| Mechanical Engineering | 2 | 2 | URM: 1; 2 faculty, 2 industry |
| Earth Science | 1 | 2 | Female: 2; 2 faculty, 1 industry |
| Chemistry | 1 | 2 | Female: 2; all faculty |
| Computer Science | 2 | 1 | Female: 1; 2 faculty, 1 industry |
| Biological Engineering | 2 | 0 | All faculty or acad. staff |
| Total | 22 | 31 | Female: 17, URM: 5 |

URM (under-represented minority) included 3 African American and 2 Hispanic/Latinx experts. One medical faculty member was interviewed twice, in both the informal and structured interviews, for a total of 53 interviews with 52 experts.

Refinement and Validation of the List of Decisions

After creating the preliminary list of decisions from the informal interviews, we conducted a separate set of more structured interviews to test and refine the list. Semistructured interviews were conducted with 31 experts from across science, engineering, and medical fields ( Table 1 ). For these interviews, we recruited experts from a range of universities and companies, though the range of institutions is still limited, given the sample size. Interviews were conducted in person or over video chat and were transcribed for analysis. In the semistructured interviews, experts were asked to choose a problem or two from their work that they could recall the details of solving and then describe the process, including all the steps and decisions they made. So that we could get a full picture of the successful problem-solving process, we decided to focus the interviews on problems that they had eventually solved successfully, though their processes inherently involved paths that needed to be revised and reconsidered. Transcripts from interviewees who agreed to have their interview transcript published are available in the supplemental data set.

Our interview protocol (see Supplemental Text) was inspired in part by the critical decision method of cognitive task analysis ( Crandall et al. , 2006 ; Lintern et al. , 2018 ), which was created for research in cognitive systems engineering and naturalistic decision making. There are some notable differences between our work and theirs, both in research goal and method. First, their goal is to improve training in specific fields by focusing on how critical decisions are made in that field during an unusual or important event; the analysis seeks to identify factors involved in making those critical decisions. We are focusing on the overall problem solving and how it compares across many different fields, which quickly led to attention on what decisions are to be made, rather than how a limited set of those decisions are made. We asked experts to describe a specific, but not necessarily unusual, problem in their work, and focused our analysis on identifying all decisions made, not reasons for making them or identifying which were most critical. The specific order of problem-solving steps was also less important to us, in part because it was clear that there was no consistent order that was followed. Second, we are looking at different types of work. Cognitive systems engineering work has primarily focused on performance in professions like firefighters, power plant operators, military technicians, and nurses. These tend to require time-sensitive critical skills that are taught with modest amounts of formal training. We are studying scientists, engineers, and doctors solving problems that require much longer and less time-critical solutions and for which the formal training occupies many years.

Given our different focus, we made several adaptations to eliminate some of the more time-consuming steps from the interview protocol, allowing us to limit the interview time to approximately 1 hour. Both protocols seek to elicit an accurate and complete reporting of the steps taken and decisions made in the process of solving a problem. Our general strategy was: 1) Have the expert explain the problem and talk step by step through the decisions involved in solving it, with relatively few interruptions from the interviewer except to keep the discussion focused on the specific problem and occasionally to ask for clarifications. 2) Ask follow-up questions to probe for more detail about particular steps and aspects of the problem-solving process. 3) Occasionally ask for general thoughts on how a novice's process might differ.

While some have questioned the reliability of information from retrospective interviews ( Nisbett and Wilson, 1977 ), we believe we avoid these concerns, because we are only identifying a decision to be made, which, in this case, means identifying a well-defined action that was chosen from alternatives. This is less subjective and much more likely to be accurately recalled than is the rationale behind such a decision (see Ericsson and Simon, 1980 ). However, the decisions identified may still be somewhat limited—the process of deciding among possible actions might involve additional decisions in the moment, when the solution is still unknown, that we are unable to capture in the retrospective context. For the decisions we can identify, we are able to check their accuracy and completeness by comparing them with the actions taken in the conduct of the research or design. For example, consider this quote from a physician who had to re-evaluate a diagnosis: “And, in my very subjective sense, he seemed like he was being forthcoming and honest. Granted people can fool you, but he seemed like he was being forthcoming. So we had to reevaluate.” The physician then considered alternative diagnoses that could explain a test result that at first had indicated an incorrect diagnosis. While this quote does describe the (retrospective) reasoning behind a decision, we do not need to know whether that reasoning is accurately recalled. We can simply code it as “decision 18, how believable is info?” The physician followed up by considering alternative diagnoses, which in this context was coded as “26, how good is solution?” and “8, potential solutions?” This was followed by the description of the literature consulted and the additional tests conducted. These indicated actions taken that confirm the physician made a decision about the reliability of the information given by the patient.

Interview Coding

We coded the semistructured interviews in terms of decisions made, through iterative rounds of coding ( Chi, 1997 ), following a “directed content analysis approach,” which involves coding according to predefined theoretical categories and updating the codes as needed based on the data ( Hsieh and Shannon, 2005 ). Our predefined categories were the list of decisions we had developed during the informal interviews. This approach means that we limited the focus of our qualitative analysis—we were able to test and refine the list of decisions, but we did not seek to identify all possible categories of approach to selecting and solving problems. The goals of each iterative round of coding are described in the next three paragraphs.

To code for decisions in general, we matched decisions from the list to statements in each interview, based on the following criteria: 1) there was an explicit statement of a decision or choice made or needing to be made; 2) there was the description of the outcome of a decision, such as listing important features of the problem (that had been decided on) or conclusions arrived at; or 3) there was a statement of actions taken that indicated a decision about the appropriate action had been made, usually from a set of alternatives.

Two examples illustrate the types of comments we identified as decisions. A molecular biologist explicitly stated the decisions required to decompose a problem into subproblems (decision 11): “Which cell do we use? The gene. Which gene do we edit? Which part of that gene do we edit? How do we build the enzyme that is going to do the cutting? … And how do we read out that it worked?” An ecologist made a statement that was also coded as a decomposition decision, because it described the action taken: “So I analyze the bird data first on its own, rather than trying to smash all the taxonomic groups together because they seem really apples and oranges. And just did two kinds of analysis, one was just sort of across all of these cases, around the world.” A single statement could be coded as multiple decisions if they were occurring simultaneously in the story being recalled or were intimately interconnected in the context of that interview, as with the ecology quote, in which the last sentence leads into deciding what data analysis is needed. Inherent in nearly every one of these decisions was that there was insufficient information to know the answer with certainty, so judgment was required.
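To make this coding representation concrete, here is a minimal sketch in Python (the data structure, identifiers, and excerpts are our hypothetical illustration, not the tooling actually used for the analysis) of how statements carrying one or more decision codes could be recorded and rolled up per interview:

```python
# Hypothetical sketch of representing coded interview statements.
# Decision numbers refer to the list in Table 2; excerpts are paraphrased.
from collections import defaultdict
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodedStatement:
    interview_id: str
    excerpt: str
    decisions: List[int] = field(default_factory=list)  # one statement may carry several codes

coded = [
    CodedStatement("mol_bio_01", "Which gene do we edit? Which part of that gene?", [11]),
    # The ecology quote was co-coded: decomposition (11) leads into data analysis (16).
    CodedStatement("ecology_02", "I analyze the bird data first on its own ...", [11, 16]),
]

# Roll up to a per-interview "which decisions appeared at all" tabulation.
per_interview = defaultdict(set)
for s in coded:
    per_interview[s.interview_id].update(s.decisions)

print({k: sorted(v) for k, v in per_interview.items()})
# {'mol_bio_01': [11], 'ecology_02': [11, 16]}
```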

Our primary goal for the first iterative round of coding was to check whether our list was complete, by looking for any missing decisions, as indicated by either an action taken or a stated decision that was not clearly connected to a decision on our initial list. In this round, we also clarified wording and combined decisions that we were consistently unable to differentiate during the coding. A sample of three interviews (from biology, medicine, and electrical engineering) was first coded independently by four coders (AP, EB, CK, and AF), then discussed. The decision list was modified to add decisions and update wording based on that discussion. Then the interviews were recoded with the new list and rediscussed, leading to more refinements to the list. Two additional interviews (from physics and chemical engineering) were then coded by three coders (AP, EB, and CK), and further similar refinements were made. Throughout the subsequent rounds of coding, we continued to check for missing decisions, but after the additions and adjustments made based on these five interviews, we did not identify any more missing decisions.

In our next round of coding, we focused on condensing overlapping decisions and refining wording to improve the clarity of descriptions as they applied across different disciplinary contexts and to ensure consistent interpretation by different coders. Two or three coders independently coded an additional 11 interviews, iteratively meeting to discuss codes identified in the interviews, refining wording and condensing the list to improve agreement and combine overlapping codes, and then using the updated list to code subsequent interviews. We condensed the list by combining decisions that represented the same cognitive process taking place at different times, that were discipline-specific variations on the same decision, or that were substeps involved in making a larger decision. We noticed that some decisions were frequently co-coded with others, particularly in some disciplines, but if they were identified as distinct a reasonable fraction of the time in any discipline, we kept them separate. This process condensed the list from 42 to 29 discrete decisions (plus five non-decision themes that were so prevalent that they are important to describe) and gave good consistency between coders.

Finally, we used the resulting codes to tabulate which decisions occurred in each interview, simplifying our coding process to focus on deciding whether or not each decision had occurred, recording an example to back up each “yes” code, but no longer attempting to capture every time each decision was mentioned. Individual coders identified decisions mentioned in the remaining 15 interviews. Interviews that had been coded with early versions of the list were also recoded to ensure consistency. Coders flagged any decisions they were unsure had occurred in a particular interview, and two to four coders (AP, EB, CK, and CW) met to discuss those debated codes, with most uncertainties resolved by explanations from a team member who had more technical expertise in the field of the interview. Minor wording changes were made during this process to ensure that each description of a decision captured all instantiations of that decision across disciplines, but no significant changes to the list were needed or made.

Coding an interview in terms of decisions made and actions taken in the research often required a high level of expertise in the discipline in question. The coder had to be familiar with the conduct of research in the field in order to recognize which actions corresponded to a decision between alternatives, but our team was assembled with this requirement in mind. It included high-level expertise across five different fields of science, engineering, and medicine and substantial familiarity with several other fields.

Supplemental Table S1 shows the final tabulation of decisions identified in each interview. In the tabulation, most decisions were marked as either “yes” or “no” for each interview, though 65 out of 1054 total codes were marked as “implied,” for one of the following reasons: 1) for 40/65, based on the coder's knowledge of the field, it was clear that a step must have been taken to achieve an outcome or action, even though that decision was not explicitly mentioned (e.g., interviewees described collecting certain raw data and then coming to a specific conclusion, so they must have decided how to analyze the data, even if they did not mention the analysis explicitly); 2) for 15/65, the interview context was important, in that multiple statements from different parts of the interview taken together were sufficient to conclude that the decision must have happened, though no single statement described that decision explicitly; 3) 10/65 involved a decision that was explicitly discussed as an important step in problem solving, but the interviewee did not directly state how it was related to the problem at hand, or stated it only in response to a direct prompt from the interviewer. The proportion of decisions identified in each interview, broken down as explicit only or explicit + implied, is presented in Supplemental Tables S1 and S2. Table 2 and Figure 2 of the main text show explicit + implied decision numbers.
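The tabulation arithmetic above can be sanity-checked in a few lines (a sketch; the reason labels are our shorthand for the three cases just described, and the 1054 total corresponds to 31 interviews × 34 coded items, i.e., 29 decisions plus the five themes):

```python
# Sanity-checking the tabulation arithmetic reported above.
interviews, coded_items = 31, 34   # 29 decisions + 5 non-decision themes
total_codes = interviews * coded_items
assert total_codes == 1054

implied = {"inferred from an action or outcome": 40,
           "pieced together from interview context": 15,
           "stated but not tied to the problem / prompted": 10}
assert sum(implied.values()) == 65

print(f"implied codes: {sum(implied.values())}/{total_codes} "
      f"= {sum(implied.values()) / total_codes:.1%}")   # ~6.2%
```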

TABLE 2. Problem-solving decisions and percentages of expert interviews in which they occur

A. Selection and goals (occur in 100% of interviews)
1. (61%) What is important in field?
2. (77%) Opportunity fits solver's expertise?
3. (100%) Goals, criteria, constraints?

B. Frame problem (100%)
4. (100%) Important features and info?
5. (100%) What predictive framework?
6. (97%) How to narrow down problem?
7. (97%) Related problems?
8. (100%) Potential solutions?
9. (74%) Is problem solvable?

C. Plan process for solving (100%)
10. (100%) Approximations and simplifications to make?
11. (68%) How to decompose into sub-problems?
12. (90%) Most difficult or uncertain areas?
13. (100%) What info needed?
14. (87%) Priorities?
15. (100%) Specific plan for getting information?

D. Interpret info and choose solutions (100%)
16. (81%) Which calculations and data analysis?
17. (68%) How to represent and organize information?
18. (77%) How believable is information?
19. (100%) How does info compare to predictions?
20. (71%) Any significant anomalies?
21. (97%) Appropriate conclusions?
22. (97%) What is best solution?

E. Reflect (100%)
23. (77%) Assumptions and simplifications appropriate?
24. (84%) Additional knowledge needed?
25. (94%) How well is solving approach working?
26. (100%) How good is solution?

F. Implications and communicate results (84%)
27. (65%) Broader implications?
28. (55%) Audience for communication?
29. (68%) Best way to present work?

a See Supplemental Text and Table S2 for a full description and examples of each decision. A set of other non-decision knowledge and skill development themes was also frequently mentioned as important to professional success: staying up to date in the field (84%), intuition and experience (77%), interpersonal and teamwork skills (100%), efficiency (32%), and attitude (68%).

b Percentage of interviews in which the category or decision was mentioned.

c Numbering is for reference only. In practice, the ordering is fluid and involves extensive iteration, with various possible starting points.

d The chosen predictive framework(s) will inform all other decisions.

e Reflection occurs throughout the process and often leads to iteration; reflection on the solution occurs at the end as well.

FIGURE 2. Proportion of decisions coded in interviews by field. This tabulation includes decisions 1–29, not the additional themes. Error bars represent standard deviations. Number of interviews: total = 31; physical science = 9; biological science = 8; engineering = 8; medicine = 6. Compared with the sciences, slightly fewer decisions overall were identified in the coding of engineering and medicine interviews, largely for discipline-specific reasons. See Supplemental Table S2 and associated discussion.

Two of the interviews that had not been discussed during earlier rounds of coding (one physics [AP and EB], one medicine [AP and CK]) were independently coded by two coders to check interrater reliability using the final list of decisions. The goal of our final coding was to tabulate whether or not each expert described making each decision at any point in the problem-solving process, so the level of detail we chose for coding and interrater reliability was whether or not a decision was present in the entire interview. The decisions identified in each interview were compared between the two coders. Codes of “implied” were counted as agreement if the other coder selected either “yes” or “implied.” For each interview, the raters disagreed on only one of the 29 decisions, which equates to a percent agreement of 97% (28 agreements/29 decisions per interview). As a side note, there was also one disagreement per interview on the coding of the five other themes, but those themes were not a focus of this work or the interviews.
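For illustration, here is a minimal sketch of this percent-agreement calculation, assuming the counting rule above (“implied” agrees with “yes” or “implied”); the code lists themselves are hypothetical:

```python
# Percent agreement between two coders over the 29 decisions of one interview.
POSITIVE = {"yes", "implied"}  # "implied" counts as agreeing with "yes" or "implied"

def agree(a: str, b: str) -> bool:
    if a in POSITIVE or b in POSITIVE:
        return a in POSITIVE and b in POSITIVE
    return a == b  # both "no"

def percent_agreement(coder_a, coder_b) -> float:
    return sum(agree(a, b) for a, b in zip(coder_a, coder_b)) / len(coder_a)

# One disagreement out of 29 decisions, as in each checked interview:
a = ["yes"] * 20 + ["implied"] * 4 + ["no"] * 5
b = ["yes"] * 20 + ["yes"] * 4 + ["no"] * 4 + ["yes"]
print(f"{percent_agreement(a, b):.0%}")  # 97% (i.e., 28/29)
```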

We identified a total set of 29 decisions to be made (plus five other themes), all of which were identified in a large fraction of the interviews across all disciplines ( Table 2 and Figure 2 ). There was a surprising degree of overlap across the different fields with all the experts mentioning similar decisions to be made. All 29 were evident by the fifth semistructured interview, and on average, each interview revealed 85% of the 29 decisions. Many decisions occurred multiple times in an interview, with the number of times varying widely, depending on the length and complexity of the problem-solving process discussed.

We focused our analysis on what decisions needed to be made, not on the experts’ processes for making those decisions: noting that a choice happened, not how they chose among different alternatives. This is because, while the decisions to be made were the same across disciplines, how the experts made those decisions varied greatly by discipline and individual. The process of making the decisions relied on specialized disciplinary knowledge and experience and may vary depending on demographics or other factors that our study design (both our sample and the nature of retrospective interviews) did not allow us to investigate. However, while that knowledge was distinct and specialized, we could tell that it was consistently organized according to a common structure we call a “predictive framework,” as discussed in the “ Predictive Framework ” section below. Also, while every “decision” reflected a step in the problem solving involved in the work, and the expert being interviewed was involved in making or approving the decision, that does not mean the decision process was carried out only by that individual. In many cases, the experts described the decisions made in terms of the ideas and results of their teams, and the importance of interpersonal skills and teamwork was a prominent non-decision theme raised in all interviews.

We were particularly concerned with the correctness and completeness of the set of decisions. Although the correctness was largely established by the statements in the interviews, we also showed the list of decisions to these experts at the end of the interviews, as well as to about a dozen other experts. In all cases, they agreed that these decisions were ones they and others in their field made when solving problems. The completeness of the list of decisions was confirmed by: 1) looking carefully at all specific actions taken in the described problem-solving process and checking that each action matched a corresponding decision from the list; and 2) the high degree of consistency in the set of decisions across all the interviews and disciplines. This makes it unlikely that we are missing important decisions, because any such missing decisions would have to be consistently unspoken by all 31 interviewees as well as consistently unrecognized by us from the actions taken in the problem-solving process.

In focusing on experts’ recollections of their successful solving of problems, our study design may have missed decisions that experts only make during failed problem-solving attempts. However, almost all interviews described solution paths that were not smooth and continuous, but rather involved going down numerous dead ends. There were approaches that were tried and failed, data that turned out to be ambiguous and worthless, and so on. Identifying the failed path involved reflection decisions (23–26). Often decision 9 (is problem solvable?) was mentioned, because it described a path that was determined to be not solvable. For example, a biologist explained, “And then I ended up just switching to a different strain that did it [crawling off the plate] less. Because it was just … hard to really get them to behave themselves. I suppose if I really needed to rely on that very particular one, I probably would have exhausted the possibilities a bit more.” Thus, we expect that unsuccessful problem solving would entail a smaller subset of these decisions (particularly a lack of reflection decisions) or poor choices on individual decisions, rather than a different set of decisions.

The set of decisions represents a remarkably consistent structure underlying S&E problem solving. For the purposes of presentation, we have categorized the decisions as shown in Figure 3 , roughly based on the purposes they achieve. However, the process is far less orderly and sequential than implied by this diagram, or in fact by any characterization of an orderly “scientific method.” We were struck by how variable the sequence of decisions was in the descriptions provided. For example, experts who described how they began work on a problem sometimes discussed importance and goals (1–3, what is important in field?; opportunity fits solver’s expertise?; and goals, criteria, constraints?), but others mentioned a curious observation (20, any significant anomalies?), important features of their system that led them to questions (4, important features and info?; 6, how to narrow down problem?), or other starting points.

We also saw that there were flexible connections between decisions and repeated iterations—jumping back to the same type of decision multiple times in the solution process, often prompted by reflection as new information and insights were developed. The sequence and number of iterations described varied dramatically by interview, and we cannot determine to what extent this was due to legitimate differences in the problem-solving process or to how the expert recalled and chose to describe the process. This lack of a consistent starting point, with jumping and iterating between decisions, has also been identified in the naturalistic decision-making literature ( Mosier et al. , 2018 ). Finally, the experts also often described considering multiple decisions simultaneously. In some interviews, a few decisions were always described together, while in others, they were clearly separate decisions. In summary, while the specific decisions themselves are fully grounded in expert practice, the categories and order shown here are artificial simplifications for presentation purposes.

FIGURE 3. Representation of problem-solving decisions by categories. The black arrows represent a hypothetical but unrealistic order of operations, the blue arrows represent more realistic iteration paths. The decisions are grouped into categories for presentation purposes; numbers indicate the number of decisions in each category. Knowledge and skill development were commonly mentioned themes but are not decisions.

The decisions contained in the seven categories are summarized here. See Supplemental Table S2 for specific examples of each decision across multiple disciplines.

Category A. Selection and Goals of the Problem

This category involves deciding on the importance of the problem, what criteria a solution must meet, and how well it matches the capabilities, resources, and priorities of the expert. As an example, an earth scientist described the goal of her project (decision 3, goals, criteria, constraints?) to map and date the earliest volcanic rocks associated with what is now Yellowstone and explained why the project was a good fit for her group (2, opportunity fits solver’s expertise?) and her decision to pursue the project in light of the significance of this type of eruption in major extinction events (1, what is important in field?). In many cases, decisions related to framing (see category B) were mentioned before decisions in this category or were an integral part of the process for developing goals.

1. What is important in the field?

What are important questions or problems? Where is the field heading? Are there advances in the field that open new possibilities?

2. Opportunity fits solver's expertise?

Are there gaps or opportunities in the field, and where? Given experts’ unique perspectives and capabilities, are there opportunities particularly accessible to them? (This could involve challenging the status quo or questioning assumptions in the field.)

3. Goals, criteria, constraints?

a. What are the goals, design criteria, or requirements of the problem or its solution?

b. What is the scope of the problem?

c. What constraints are there on the solution?

d. What will be the criteria on which the solution is evaluated?

Category B. Frame Problem

These decisions lead to a more concrete formulation of the solution process and potential solutions. This involves identifying the key features of the problem and deciding on predictive frameworks to use (see “ Predictive Framework ” section below), as well as narrowing down the problem, often forming specific questions or hypotheses. Many of these decisions are guided by past problem solutions with which the expert is familiar and sees as relevant. The framing decisions of a physician can be seen in his discussion of a patient with liver failure who had previously been diagnosed with HIV but had features (4, important features and info?; 5, what predictive framework?) that made the physician question the HIV diagnosis (5, what predictive framework?; 26, how good is solution?). His team then searched for possible diagnoses that could explain liver failure and lead to a false-positive HIV test (7, related problems?; 8, potential solutions?), which led to their hypothesis the patient might have Q fever (6, how to narrow down problem?; 13, what info needed?; 15, specific plan for getting info?). While each individual decision is strongly supported by the data, the categories are groupings for presentation purposes. In particular, framing (category B) and planning (see category C) decisions often blended together in interviews.

4. Important features and info?

a. Which available information is relevant to problem solving and why?

b. (When appropriate) Create or find a suitable abstract representation of core ideas and information. (Examples: physics, an equation representing the process involved; chemistry, bond diagrams or potential energy surfaces; biology, a diagram of pathway steps.)

5. What predictive framework?

Which potential predictive frameworks to use? (Decide among possible predictive frameworks or create framework.) This includes deciding on the appropriate level of mechanism and structure that the framework needs to embody to be most useful for the problem at hand.

6. How to narrow down the problem?

How to narrow down the problem? This often involves formulating specific questions and hypotheses.

7. Related problems?

What are related problems or work seen before, and what aspects of their problem-solving process and solutions might be useful in the present context? (This may involve reviewing literature and/or reflecting on experience.)

8. Potential solutions?

What are potential solutions? (This is based on experience and on fitting candidate solutions to the criteria the expert holds for a problem with the general key features identified.)

9. Is problem solvable?

Is the problem plausibly solvable and is the solution worth pursuing given the difficulties, constraints, risks, and uncertainties?

Category C. Plan the Process for Solving

These decisions establish the specifics needed to solve the problem, including how to simplify the problem and decompose it into pieces, what specific information is needed, how to obtain that information, and what the necessary resources and priorities are. Planning by an ecologist can be seen in her extensive discussion of her process of simplifying (10, approximations/simplifications to make?) a meta-analysis project about changes in migration behavior, which included deciding what types of data she needed (13, what info needed?), planning how to conduct her literature search (15, specific plan for getting info?), difficulties in analyzing the data (12, most difficult/uncertain areas?; 16, which calculations and data analysis?), and deciding to analyze different taxonomic groups separately (11, how to decompose into subproblems?). In general, decomposition often resulted in multiple iterations through the problem-solving decisions, as subsets of decisions needed to be made about each decomposed aspect of a problem. Framing (category B) and planning (category C) decisions occupied much of the interviews, indicating their importance.

10. Approximations and simplifications to make?

What approximations or simplifications are appropriate? How to simplify the problem to make it easier to solve? Test possible simplifications/approximations against established criteria.

11. How to decompose into subproblems?

How to decompose the problem into more tractable subproblems? (Subproblems are independently solvable pieces with their own subgoals.)

12. Most difficult or uncertain areas?

a. What are acceptable levels of uncertainty with which to proceed at various stages?

13. What info needed?

a. What will be sufficient to test and distinguish between potential solutions?

14. Priorities?

What to prioritize among many competing considerations? What to do first and how to obtain necessary resources?

Considerations could include: What's most important? Most difficult? Addressing uncertainties? Easiest? Constraints (time, materials, etc.)? Cost? Optimization and trade-offs? Availability of resources (facilities/materials, funding sources, personnel)?

15. Specific plan for getting information?

a. What are the general requirements of a problem-solving approach, and what general approach will they pursue? (These decisions are often made early in the problem-solving process as part of framing.)

b. How to obtain needed information? Then carry out those plans. (This could involve many discipline- and problem-specific investigation possibilities such as: designing and conducting experiments, making observations, talking to experts, consulting the literature, doing calculations, building models, or using simulations.)

c. What are achievable milestones, and what are metrics for evaluating progress?

d. What are possible alternative outcomes and paths that may arise during the problem-solving process, both consistent with predictive framework and not, and what would be paths to follow for the different outcomes?

Category D. Interpret Information and Choose Solution(s)

This category includes deciding how to analyze, organize, and draw conclusions from available information, reacting to unexpected information, and deciding upon a solution. A biologist studying aging in worms described how she analyzed results from her experiments, which included representing her results in survival curves and conducting statistical analyses (16, which calculations and data analysis?; 17, how to represent and organize info?), as well as setting up blind experiments (15, specific plan for getting info?) so that she could make unbiased interpretations (18, how believable is info?) of whether a worm was alive or dead. She also described comparing results with predictions to justify the conclusion that worm aging was related to fertility (19, how does info compare to predictions?; 21, appropriate conclusions?; 22, what is best solution?). Deciding how results compared with expectations based on a predictive framework was a key decision that often preceded several other decisions.

16. Which calculations and data analysis?

What calculations and data analysis are needed? Once determined, these must then be carried out.

17. How to represent and organize information?

What is the best way to represent and organize available information to provide clarity and insights? (Usually this will involve specialized and technical representations related to key features of predictive framework.)

18. How believable is the information?

Is information valid, reliable, and believable (includes recognizing potential biases)?

19. How does information compare to predictions?

As new information comes in, particularly from experiments or calculations, how does it compare with expected results (based on the predictive framework)?

20. Any significant anomalies?

a. Does potential anomaly fit within acceptable range of predictive framework(s) (given limitations of predictive framework and underlying assumptions and approximations)?

b. Is potential anomaly an unusual statistical variation or relevant data? Is it within acceptable levels of uncertainty?

21. Appropriate conclusions?

What are appropriate conclusions based on the data? (This involves making conclusions and deciding if they are justified.)

22. What is the best solution?

a. Which of multiple candidate solutions are consistent with all available information and which can be rejected? (This could be based on comparing data with predicted results.)

b. What refinements need to be made to candidate solutions?

Category E. Reflect

Reflection decisions occur throughout the process and include deciding whether assumptions are justified, whether additional knowledge or information is needed, how well the solution approach is working, and whether potential and then final solutions are adequate. These decisions match the categories of reflection identified by Salehi (2018) . A mechanical engineer described developing a model (to inform surgical decisions) of which muscles allow the thumb to function in the most useful manner (22, what is best solution?), including reflecting on how well engineering approximations applied in the biological context (23, assumptions and simplifications appropriate?). He also described reflecting on his approach, that is, why he chose to use cadaveric models instead of mathematical models (25, how well is solving approach working?), and the limitations of his findings in that the “best” muscle identified was difficult to access surgically (26, how good is solution?; 27, broader implications?). Reflection decisions are made throughout the problem-solving process, often lead to reconsidering other decisions, and are critical for success.

23. Assumptions and simplifications appropriate?

a. Do the assumptions and simplifications made previously still look appropriate considering new information?

b. Does the predictive framework need to be modified?

24. Additional knowledge needed?

a. Is solver's relevant knowledge sufficient?

b. Is more information needed and, if so, what?

c. Does some information need to be checked? (Is there a need to repeat experiment or check a different source?)

25. How well is the problem-solving approach working?

How well is the problem-solving approach working, and does it need to be modified? This includes possibly modifying the goals and reflecting on one’s strategy by evaluating progress toward the solution.

26. How good is the solution?

a. Decide by exploring possible failure modes and limitations—“try to break” solution.

b. Does it “make sense” and pass discipline-specific tests for solutions of this type of problem?

c. Does it completely meet the goals/criteria?

Category F. Implications and Communication of Results

These are decisions about the broader implications of the work and how to communicate results most effectively. For example, a theoretical physicist developing a method to calculate the magnetic moment of the muon decided who would be interested in his work (28, audience for communication?) and what would be the best way to present it (29, best way to present work?). He also discussed the implications of preliminary work on a simplified aspect of the problem (10, approximations and simplifications to make?) in terms of evaluating its impact on the scientific community and deciding on next steps (27, broader implications?; 29, best way to present work?). Many interviewees noted that making decisions in this category affected their decisions in other categories.

27. Broader implications?

What are the broader implications of the results, including over what range of contexts does the solution apply? What outstanding problems in the field might it solve? What novel predictions can it enable? How and why might this be seen as interesting to a broader community?

28. Audience for communication?

What is the audience for communication of work, and what are their important characteristics?

29. Best way to present work?

What is the best way to present the work to have it understood, and its correctness and importance appreciated? How to make a compelling story of the work?

Category G. Ongoing Skill and Knowledge Development

Although we focused on decisions in the problem-solving process, the experts volunteered general skills and knowledge they saw as important elements of problem-solving expertise in their fields. These included teamwork and interpersonal skills (strongly emphasized), acquiring experience and intuition, and keeping abreast of new developments in their fields.

30. Stay up to date in field

a. Reviewing the literature (which does involve deciding which work is important).

b. Learning relevant new knowledge (ideas and technology from literature, conferences, colleagues, etc.)

31. Intuition and experience

Acquiring experience and associated intuition to improve problem solving.

32. Interpersonal, teamwork

Includes navigating collaborations, team management, patient interactions, and communication skills, particularly as these apply in the context of the various types of problem-solving processes.

33. Efficiency

Time management including learning to complete certain common tasks efficiently and accurately.

34. Attitude

Motivation and attitude toward the task. Factors such as interest, perseverance, dealing with stress, and confidence in decisions.

Predictive Framework

How the decisions were made was highly dependent on the discipline and problem. However, there was one element that was fundamental and common across all interviews: the early adoption of a “predictive framework” that the experts used throughout the problem-solving process. We define this framework as “a mental model of key features of the problem and the relationships between the features.” All the predictive frameworks involved some degree of simplification and approximation and an underlying level of mechanism that established the relationships between key features. The frameworks provided a structure of knowledge and facilitated the application of that knowledge to the problem at hand, allowing experts to repeatedly run “mental simulations” to make predictions for dependencies and observables and to interpret new information.

As an example, an ecologist described her predictive framework for migration, which incorporated important features such as environmental conditions and genetic differences between species and the mechanisms by which these interacted to impact the migration patterns for a species. She used this framework to guide her meta-analysis of changes in migration patterns, affecting everything from her choice of data sets to include to her interpretation of why migration patterns changed for different species. In many interviews, the frameworks used evolved as additional information was obtained, with additional features being added or underlying assumptions modified. For some problems, the relevant framework was well established and used with confidence, while for other problems, there was considerable uncertainty as to a suitable framework, so developing and testing the framework was a substantial part of the solution process.

A predictive framework contains the expert knowledge organization that has been observed in previous studies of expertise ( Egan and Greeno, 1974 ) but goes further, as here it serves as an explicit tool that guides most decisions and actions during the solving of complex problems. Mental models and mental simulations that are described in the naturalistic decision-making literature are similar, in that they are used to understand the problem and guide decisions ( Klein, 2008 ; Mosier et al. , 2018 ), but they do not necessarily contain the same level of mechanistic understanding of relationships that underlies the predictive frameworks used in science and engineering problem solving. While the use of predictive frameworks was universal, the individual frameworks themselves explicitly reflected the relevant specialized knowledge, structure, and standards of the discipline, and arguably largely define a discipline ( Wieman, 2019 ).

Discipline-Specific Variation

While the set of decisions to be made was highly consistent across disciplines, there were extensive differences within and across disciplines and work contexts, reflecting differences in perspectives and experiences. These differences were usually evident in how experts made each of the specific decisions, but not in the choice of which decisions needed to be made. In other words, the solution methods, which included following standard accepted procedures in each field, were very different. For example, planning in some experimental sciences may involve formulating a multiyear construction and data-collection effort, while in medicine it may be deciding on a simple blood test. Some decisions, notably in categories A, D, and F, were less likely to be mentioned in particular disciplines, because of the nature of the problems. Specifically, decisions 1 (what is important in field?), 2 (opportunity fits solver’s expertise?), 27 (broader implications?), 28 (audience for communication?), and 29 (best way to present work?) were dependent on the scope of the problem being described and the expert's specific role in it. These were mentioned less frequently in interviews where the problem was assigned to the expert (most often engineering or industry) or where the importance or audience was implicit (most often in medicine). Decisions 16 (which calculations and data analysis?) and 17 (how to represent and organize info?) were particularly unlikely to be mentioned in medicine, because test results are typically provided to doctors not in the form of raw data, but already analyzed by a lab or other medical technology professional, so the doctors we interviewed did not need to make decisions themselves about how to analyze or represent the data.

Qualitatively, we also noticed some differences between disciplines in the patterns of connections between decisions. When the problem involved development of a tool or product, most commonly the case in engineering, the interview indicated relatively rapid cycles between goals (3), framing the problem/potential solutions (8), and reflection on the potential solution (26), before going through the other decisions. Biology, the experimental science most represented in our interviews, had strong links between planning (15), deciding on appropriate conclusions (21), and reflection on the solution (26). This is likely because the respective problems involved complex systems with many unknowns, so careful planning was unusually important for achieving definitive conclusions. See Supplemental Text and Supplemental Table S2 for additional notes on decisions that were mentioned at lower frequency and decisions that were likely to be interconnected, regardless of field.

This work has created a framework of decisions to characterize problem solving in science and engineering. This framework is empirically based and captures the successful problem-solving process of all experts interviewed. We see that several dozen experts across many different fields all make a common set of decisions when solving authentic problems. There are flexible linkages between decisions that are guided by reflection in a continually evolving process. We have also identified the nature of the “predictive frameworks” that S&E experts consistently use in problem solving. These predictive frameworks reveal how these experts organize their disciplinary knowledge to facilitate making decisions. Many of the decisions we identified are reflected in previous work on expertise and scientific problem solving. This is particularly true for those listed in the planning and interpreting information categories ( Egan and Greeno, 1974 ). The priority experts give to framing and planning decisions over execution compared with novices has been noted repeatedly (e.g., Chi et al. , 1988 ). Expert reflection has been discussed, but less extensively ( Chase and Simon, 1973 ), and elements of the selection and implications and communication categories have been included in policy and standards reports (e.g., AAAS, 2011 ). Thus, our framework of decisions is consistent with previous work on scientific practices and expertise, but it is more complete, specific, empirically based, and generalizable across S&E disciplines.

A limitation of this study is the small number of experts in total, from each discipline, and from underrepresented groups (especially the lack of female representation in engineering). The lack of randomized selection of participants may also bias the sample toward experts who experienced similar academic training (STEM disciplines at U.S. universities). This means we cannot prove that there are not some experts who follow other paths in problem solving. As with any scientific model, the framework described here should be subjected to further tests and modifications as necessary. However, to our knowledge, this is a far larger sample than used in any previous study of expert problem solving. Although we see a large amount of variation both within and across disciplines in the problem-solving process, this variation is reflected in how experts make decisions, not in what decisions they make. The very high degree of consistency in the decisions made across the entire sample strongly suggests that we are capturing elements common to all experts across science and engineering. A second limitation is that decisions often overlap and co-occur in an interview, so the division between decision items is somewhat ambiguous and could reasonably be drawn differently. As noted, a number of these decisions can be interconnected, and in some fields they are nearly always interconnected.

The set of decisions we have observed provides a general framework for characterizing, analyzing, and teaching S&E problem solving. These decisions likely define much of the set of cognitive skills a student needs to practice and master to perform as a skilled practitioner in S&E. This framework of decisions provides a detailed and structured way to approach the teaching and measurement of problem solving at the undergraduate, graduate, and professional training levels. For teaching, we propose using the process of “deliberate practice” ( Ericsson, 2018 ) to help students learn problem solving. Deliberate practice of problem solving would involve effective scaffolding and concentrated practice, with feedback, at making the specific decisions identified here in relevant contexts. In a course, this would likely involve only an appropriately selected set of the decisions, but a good research mentor would ensure that trainees have opportunities to practice and receive feedback on their performance on each of these 29 decisions. Future work is needed to determine whether there are additional decisions that were not identified in experts but are productive components of student problem solving and should also be practiced. Measurements of individual problem-solving expertise based on our decision list and the associated discipline-specific predictive frameworks will allow a detailed measure of an individual's discipline-specific problem-solving strengths and weaknesses relative to an established expert. This can be used to provide targeted feedback to the learner, and when aggregated across students in a program, feedback on the educational quality of the program. We are currently working on the implementation of these ideas in a variety of instructional settings and will report on that work in future publications.

As discussed in the Introduction , typical science and engineering problems fail to engage students in the complete problem-solving process. By considering which of the 29 decisions are required to answer the problem, we can more clearly articulate why. The biology problem, for example, requires students to decide on a predictive framework and access the necessary content knowledge, and they need to decide which information they need to answer the problem. However, other decisions are not required or are already made for them, such as deciding on important features and identifying anomalies. We propose that different problems, designed specifically to require students to make sets of the problem-solving decisions from our framework, will provide more effective tools for measuring, practicing, and ultimately mastering the full S&E problem-solving process.

Our preliminary work with the use of such decision-based problems for assessing problem-solving expertise is showing great promise. For several different disciplines, we have given test subjects a relevant context, requiring content knowledge covered in courses they have taken, and asked them to make decisions from the list presented here. Skilled practitioners in the relevant discipline respond in very consistent ways, while students’ responses vary widely, in ways that typically correlate with their particular educational experiences. What apparently matters is not what content they have seen, but rather what decisions they have had practice making. Our approach was to identify the decisions made by experts, this being the task that educators want students to master. Our data do not exclude the possibility that students engage in, and/or should learn, other decisions as a productive part of the problem-solving process while they are learning. Future work would seek to identify decisions made at intermediate levels during the development of expertise, to identify potential learning progressions that could be used to teach problem solving more efficiently. What we have seen is consistent with previous work identifying expert–novice differences but provides a much more extensive and detailed picture of a student's strengths and weaknesses and the impacts of particular educational experiences. We have also carried out preliminary development of courses that explicitly involve students making and justifying many of these decisions in relevant contexts, followed by feedback on their decisions. Preliminary results from these courses are also encouraging. Future work will involve the more extensive development and application of decision-based measurement and teaching of problem solving.

ACKNOWLEDGMENTS

We acknowledge the many experts who agreed to be interviewed for this work, M. Flynn for contributions on expertise in mechanical engineering, and Shima Salehi for useful discussions. This work was funded by the Howard Hughes Medical Institute through an HHMI Professor grant to C.E.W.

  • ABET . ( 2020 ). Criteria for accrediting engineering programs, 2020–2021 . Retrieved November 23, 2020, from www.abet.org/accreditation/accreditation-criteria/criteria-for-accrediting-engineering-programs-2020-2021 Google Scholar
  • Alberts, B., Johnson, A., Lewis, J., Morgan, D., Raff, M., Roberts, K., & Walter, P. ( 2014 ). Control of gene expression . In Molecular Biology of the Cell (6th ed., pp. 436–437). New York: Garland Science. Retrieved November 12, 2020, from https://books.google.com/books?id=2xIwDwAAQBAJ Google Scholar
  • American Association for the Advancement of Science . ( 2011 ). Vision and change in undergraduate biology education: A call to action . Washington, DC. Retrieved February 12, 2021, from https://visionandchange.org/finalreport Google Scholar
  • Chi, M. T. H., Glaser, R., & Farr, M. J. (Eds.) ( 1988 ). The nature of expertise . Hillsdale, NJ: Erlbaum. Google Scholar
  • Crandall, B., Klein, G. A., & Hoffman, R. R. ( 2006 ). Working minds: A practitioner's guide to cognitive task analysis . Cambridge, MA: MIT Press. Google Scholar
  • Egan, D. E., & Greeno, J. G. ( 1974 ). Theory of rule induction: Knowledge acquired in concept learning, serial pattern learning, and problem solving . In Gregg, L. W. (Ed.), Knowledge and cognition . Potomac, MD: Erlbaum. Google Scholar
  • Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. , (Eds.) ( 2006 ). The Cambridge handbook of expertise and expert performance . Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • Ericsson, K. A., Hoffman, R. R., Kozbelt, A., & Williams, A. M. (Eds.) ( 2018 ). The Cambridge handbook of expertise and expert performance (2nd ed.). Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • Hatano, G., & Inagaki, K. ( 1986 ). Two courses of expertise . In Stevenson, H. W.Azuma, H.Hakuta, K. (Eds.), A series of books in psychology. Child development and education in Japan (pp. 262–272). New York: Freeman/Times Books/Henry Holt. Google Scholar
  • Klein, G. ( 2008 ). Naturalistic decision making . Human Factors , 50 (3), 456–460. Medline ,  Google Scholar
  • Kozma, R., Chin, E., Russell, J., & Marx, N. ( 2000 ). The roles of representations and tools in the chemistry laboratory and their implications for chemistry learning . Journal of the Learning Sciences , 9 (2), 105–143. Google Scholar
  • Lintern, G., Moon, B., Klein, G., & Hoffman, R. ( 2018 ). Eliciting and representing the knowledge of experts . In Ericsson, K. A., Hoffman, R. R., Kozbelt, A., & Williams, A. M. (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed., pp. 165–191). Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • Mosier, K., Fischer, U., Hoffman, R. R., & Klein, G. ( 2018 ). Expert professional judgments and “naturalistic decision making.” In Ericsson, K. A., Hoffman, R. R., Kozbelt, A., & Williams, A. M. (Eds.), The Cambridge handbook of expertise and expert performance (2nd ed., pp. 453–475). Cambridge, United Kingdom: Cambridge University Press. Google Scholar
  • National Research Council (NRC) . ( 2012a ). A framework for K–12 science education: Practices, crosscutting concepts, and core ideas . Washington, DC: National Academies Press. Google Scholar
  • Newell, A., & Simon, H. A. ( 1972 ). Human problem solving . Prentice-Hall. Google Scholar
  • Next Generation Science Standards Lead States . ( 2013 ). Next Generation Science Standards: For states, by states . Washington, DC: National Academies Press. Google Scholar
  • Polya, G. ( 1945 ). How to solve it: A new aspect of mathematical method . Princeton, NJ: Princeton University Press. Google Scholar
  • Quacquarelli Symonds . ( 2018 ). The global skills gap in the 21st century . Retrieved July 20, 2021, from www.qs.com/portfolio-items/the-global-skills-gap-in-the-21st-century/ Google Scholar
  • Salehi, S. ( 2018 ). Improving problem-solving through reflection (Doctoral dissertation) . Stanford Digital Repository, Stanford University. Retrieved February 18, 2021, from https://purl.stanford.edu/gc847wj5876 Google Scholar
  • Schoenfeld, A. H. ( 1985 ). Mathematical problem solving . Orlando, FL: Academic Press. Google Scholar
  • Wayne State University . ( n.d ). Mechanical engineering practice qualifying exams. Wayne State University Mechanical Engineering department . Retrieved February 23, 2021, from https://engineering.wayne.edu/me/exams/mechanics_of_materials_-_sample_pqe_problems_.pdf Google Scholar
  • Wineburg, S. ( 1998 ). Reading Abraham Lincoln: An expert/expert study in the interpretation of historical texts . Cognitive Science , 22 (3), 319–346. https://doi.org/10.1016/S0364-0213(99)80043-3 Google Scholar


Submitted: 2 December 2020 Revised: 11 June 2021 Accepted: 23 June 2021

© 2021 A. M. Price et al. CBE—Life Sciences Education © 2021 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).


What Is the Scientific Method?


The scientific method is a systematic way of conducting experiments or studies so that you can explore the things you observe in the world and answer questions about them. The scientific method, also known as the hypothetico-deductive method, is a series of steps that can help you accurately describe the things you observe or improve your understanding of them.

Ultimately, your goal when you use the scientific method is to:

  • Find a cause-and-effect relationship by asking a question about something you observed
  • Collect as much evidence as you can about what you observed, as this can help you explore the connection between your evidence and what you observed
  • Determine if all your evidence can be combined to answer your question in a way that makes sense

Francis Bacon and René Descartes are usually credited with formalizing the process in the 16th and 17th centuries. The two philosophers argued that research shouldn't be guided by preset metaphysical ideas of how reality works. Bacon championed inductive reasoning, drawing general conclusions from systematic observation, while Descartes emphasized deducing consequences from well-established first principles; both approaches aimed at coming up with testable ideas and understanding new things about reality.

Scientific Method Steps

The scientific method is a step-by-step problem-solving process. These steps include:

Observe the world around you. This will help you come up with a topic you are interested in and want to learn more about. In many cases, you already have a topic in mind because you have a related question for which you couldn't find an immediate answer.

Either way, you'll start the process by finding out what people before you already know about the topic, as well as any questions that people are still asking about. You may need to look up and read books and articles from academic journals or talk to other people so that you understand as much as you possibly can about your topic. This will help you with your next step.

Ask questions. Asking questions about what you observed and learned from reading and talking to others can help you figure out what the "problem" is. Scientists try to ask questions that are both interesting and specific and can be answered with the help of a fairly easy experiment or series of experiments. Your question should have one part (called a variable) that you can change in your experiment and another variable that you can measure. Your goal is to design an experiment that is a "fair test," which is when all the conditions in the experiment are kept the same except for the one you change (called the experimental or independent variable).
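As a toy illustration of a "fair test" (the scenario and numbers below are made up), only the independent variable changes between groups while everything else is held constant, and you compare the measured variable:

```python
# Toy "fair test": fertilizer vs. no fertilizer, same light, water, and soil.
# The only thing changed is the independent variable (fertilizer);
# the measured (dependent) variable is plant growth in centimeters.
growth_cm = {
    "control (no fertilizer)": [4.1, 3.8, 4.0, 4.3],
    "experimental (fertilizer)": [5.2, 5.6, 5.0, 5.4],
}
for group, values in growth_cm.items():
    print(group, round(sum(values) / len(values), 2))
# control (no fertilizer) 4.05
# experimental (fertilizer) 5.3
```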

Form a hypothesis and make predictions based on it.  A hypothesis is an educated guess about the relationship between two or more variables in your question. A good hypothesis lets you predict what will happen when you test it in an experiment. Another important feature of a good hypothesis is that, if the hypothesis is wrong, you should be able to show that it's wrong. This is called falsifiability. If your experiment shows that your prediction is true, then your hypothesis is supported by your data.

Test your prediction by doing an experiment or making more observations. The way you test your prediction depends on what you are studying. The best support comes from an experiment, but in some cases it's too hard or impossible to change the variables in an experiment. Sometimes you may need to do descriptive research, where you gather more observations instead of doing an experiment. You will carefully gather notes and measurements during your experiments or studies, and you can share them with other people interested in the same question as you. Ideally, you will also repeat your experiment a couple more times, because it's possible to get a result by chance, but it's much less likely to get the same result more than once by chance.
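To see why repetition helps, consider a back-of-the-envelope calculation (the 1-in-20 figure below is purely illustrative): if a fluke result occurs with probability p in a single trial, then, assuming independent trials, seeing the same fluke in every one of n repetitions has probability p to the power n.

```latex
% Illustrative arithmetic: p is an assumed fluke rate, not a measured one.
P(\text{fluke in all } n \text{ trials}) = p^{n}, \qquad
p = \tfrac{1}{20},\; n = 3 \;\Rightarrow\; \left(\tfrac{1}{20}\right)^{3} = \tfrac{1}{8000}
```

So a result that could happen by chance once in 20 tries would recur in three independent repetitions only about once in 8,000 tries.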

Draw a conclusion. You will analyze what you already know about your topic from your literature research and the data gathered during your experiment. This will help you decide if the conclusion you draw from your data supports or contradicts your hypothesis. If your results contradict your hypothesis, you can use this observation to form a new hypothesis and make a new prediction. This is why scientific research is ongoing and scientific knowledge is changing all the time. It's very common for scientists to get results that don't support their hypotheses. In fact, you sometimes learn more about the world when your experiments don't support your hypotheses because it leads you to ask more questions. And this time around, you already know that one possible explanation is likely wrong.

Use your results to guide your next steps (iterate). For instance, if your hypothesis is supported, you may do more experiments to confirm it. Or you could come up with a hypothesis about why it works this way and design an experiment to test that. If your hypothesis is not supported, you can come up with another hypothesis and do experiments to test it. You'll rarely get the right hypothesis in one go. Most of the time, you'll have to go back to the hypothesis stage and try again. Every attempt offers you important information that helps you improve your next round of questions, hypotheses, and predictions.

Share your results. Scientific research isn't something you can do entirely on your own. You may be able to run an experiment or a series of experiments yourself, but you can't come up with all the ideas or do all the experiments by yourself.

Scientists and researchers usually share information by publishing it in a scientific journal or by presenting it to their colleagues during meetings and scientific conferences. These journals are read and the conferences are attended by other researchers who are interested in the same questions. If there's anything wrong with your hypothesis, prediction, experiment design, or conclusion, other researchers will likely find it and point it out to you.

It can be scary, but it's a critical part of doing scientific research. You must let your research be examined by other researchers who are as interested and knowledgeable about your question as you. This process helps other researchers by pointing out hypotheses that have been proved wrong and why they are wrong. It helps you by identifying flaws in your thinking or experiment design. And if you don't share what you've learned and let other people ask questions about it, it's not helpful to your or anyone else's understanding of what happens in the world.

Scientific Method Example

Here's an everyday example of how you can apply the scientific method to understand more about your world so you can solve your problems in a helpful way.

Let's say you put slices of bread in your toaster and press the button, but nothing happens. Your toaster isn't working, but you can't afford to buy a new one right now. You might be able to rescue it from the trash can if you can figure out what's wrong with it. So, let's figure out what's wrong with your toaster.

Observation. Your toaster isn't working to toast your bread.

Ask a question. In this case, you're asking, "Why isn't my toaster working?" You could even do a bit of preliminary research by looking in the owner's manual for your toaster. The manufacturer has likely tested your toaster model under many conditions, and they may have some ideas for where to start with your hypothesis.

Form a hypothesis and make predictions based on it. Your hypothesis should be a potential explanation or answer to the question that you can test to see if it's correct. One possible explanation you could test is that the power outlet is broken. The prediction is that if the outlet is broken, then plugging the toaster into a different outlet should make it work again.

Test your prediction by doing an experiment or making more observations. You plug the toaster into a different outlet and try to toast your bread.

If that works, then your hypothesis is supported by your experimental data. Results that support your hypothesis don't prove it right; they simply suggest that it's a likely explanation. This uncertainty arises because, in the real world, we can't rule out the possibility of mistakes, wrong assumptions, or weird coincidences affecting the results. If the toaster doesn’t work even after plugging it into a different outlet, then your hypothesis is not supported and it's likely the wrong explanation.

Use your results to guide your next steps (iteration). If your toaster worked, you may decide to do further tests to confirm the hypothesis or refine it. For example, you could plug something else that you know is working into the first outlet to see if it stops working too. That would be further evidence that your hypothesis is correct.

If your toaster failed to toast when plugged into the second outlet, you need a new hypothesis. For example, your next hypothesis might be that the toaster has a shorted wire. You could test this hypothesis directly if you have the right equipment and training, or you could take it to a repair shop where they could test that hypothesis for you.

Share your results. For this everyday example, you probably wouldn't want to write a paper, but you could share your problem-solving efforts with your housemates or anyone you hire to repair your outlet or help you test if the toaster has a short circuit.
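For readers who think in code, the observe-hypothesize-test-iterate loop can be pictured as working through candidate hypotheses one at a time, each paired with an experiment that could falsify it. The sketch below is a minimal Python illustration; the hypotheses and the hard-coded test outcomes are made up purely to show the control flow, not to model a real toaster.

```python
# A minimal sketch of the iterate step of the scientific method.
# The test outcomes below are hard-coded assumptions for illustration.

def test_outlet() -> bool:
    """Pretend experiment: does the toaster work in a different outlet?"""
    return False  # assumed outcome: the toaster still fails

def test_shorted_wire() -> bool:
    """Pretend experiment: does a repair shop confirm a shorted wire?"""
    return True   # assumed outcome: a short is found

# Each hypothesis is paired with an experiment that could falsify it.
hypotheses = [
    ("The power outlet is broken", test_outlet),
    ("The toaster has a shorted wire", test_shorted_wire),
]

for claim, experiment in hypotheses:
    if experiment():
        print(f"Supported (not proved): {claim}")
        break
    print(f"Not supported, forming a new hypothesis: {claim}")
```

Note that a supported hypothesis ends the loop but is still only a likely explanation; a disconfirmed one sends you back to the hypothesis stage, exactly as in the toaster example.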

What the Scientific Method Is Used For

The scientific method is useful whenever you need to reason logically about your questions and gather evidence to support your problem-solving efforts. So, you can use it in everyday life to answer many of your questions; however, when most people think of the scientific method, they likely think of using it to:

Describe how nature works . It can be hard to accurately describe how nature works because it's almost impossible to account for every variable that's involved in a natural process. Researchers may not even know about many of the variables that are involved. In some cases, all you can do is make assumptions. But you can use the scientific method to logically disprove wrong assumptions by identifying flaws in the reasoning.

Do scientific research in a laboratory to develop things such as new medicines.

Develop critical thinking skills.  Using the scientific method may help you develop critical thinking in your daily life because you learn to systematically ask questions and gather evidence to find answers. Without logical reasoning, you might be more likely to have a distorted perspective or bias. Bias is the inclination we all have to favor one perspective (usually our own) over another.

The scientific method doesn't perfectly solve the problem of bias, but it does make it harder for an entire field to be biased in the same direction. That's because it's unlikely that all the people working in a field have the same biases. It also helps make the biases of individuals more obvious because if you repeatedly misinterpret information in the same way in multiple experiments or over a period, the other people working on the same question will notice. If you don't correct your bias when others point it out to you, you'll lose your credibility. Other people might then stop believing what you have to say.

Why Is the Scientific Method Important?

When you use the scientific method, your goal is to do research in a fair, unbiased, and repeatable way. The scientific method helps meet these goals because:

It's a systematic approach to problem-solving. It can help you figure out where you're going wrong in your thinking and research if you're not getting helpful answers to your questions. Helpful answers solve problems and keep you moving forward. So, a systematic approach helps you improve your problem-solving abilities if you get stuck.

It can help you solve your problems.  The scientific method helps you isolate problems by focusing on what's important. In addition, it can help you make your solutions better every time you go through the process.

It helps you eliminate (or become aware of) your personal biases. It can help you limit the influence of your own personal, preconceived notions. A big part of the process is considering what other people already know and think about your question. It also involves sharing what you've learned and letting other people ask about your methods and conclusions. At the end of the process, even if you still think your answer is best, you have considered what other people know and think about the question.

The scientific method is a systematic way of conducting experiments or studies so that you can explore the world around you and answer questions using reason and evidence. It's a step-by-step problem-solving process that involves: (1) observation, (2) asking questions, (3) forming hypotheses and making predictions, (4) testing your hypotheses through experiments or more observations, (5) using what you learned through experiment or observation to guide further investigation, and (6) sharing your results.



Chapter 6: Scientific Problem Solving


Science is a method to discover empirical truths and patterns. Roughly speaking, the scientific method consists of

1) Observing

2) Forming a hypothesis

3) Testing the hypothesis and

4) Interpreting the data to confirm or disconfirm the hypothesis.

The beauty of science is that any scientific claim can be tested if you have the proper knowledge and equipment.

You can also use the scientific method to solve everyday problems: 1) Observe and clearly define the problem, 2) Form a hypothesis, 3) Test it, and 4) Confirm the hypothesis... or disconfirm it and start over.

So, the next time you are cursing in traffic or emotionally reacting to a problem, take a few deep breaths and then use this rational and scientific approach. Slow down, observe, hypothesize, and test.

Explain how you would solve these problems using the four steps of the scientific process.

Example: The fire alarm is not working.

1) Observe/Define the problem: it does not beep when I push the button.

2) Hypothesis: it is caused by a dead battery.

3) Test: try a new battery.

4) Confirm/Disconfirm: the alarm now works. If it does not work, start over by testing another hypothesis like “it has a loose wire.”  

  • My car will not start.
  • My child is having problems reading.
  • I owe $20,000, but only make $10 an hour.
  • My boss is mean. I want him/her to stop using rude language towards me.
  • My significant other is lazy. I want him/her to help out more.

6-8. Identify three problems where you can apply the scientific method.

*Answers will vary.

Application and Value

Science is more of a process than a body of knowledge. In our daily lives, we often emotionally react and jump to quick solutions when faced with problems, but following the four steps of the scientific process can help us slow down and discover more intelligent solutions.

In your study of philosophy, you will explore deeper questions about science. For example, are there any forms of knowledge that are nonscientific? Can science tell us what we ought to do? Can logical and mathematical truths be proven in a scientific way? Does introspection give knowledge even though I cannot scientifically observe your introspective thoughts? Is science truly objective?  These are challenging questions that should help you discover the scope of science without diminishing its awesome power.

But the first step in answering these questions is knowing what science is, and this chapter clarifies its essence. Again, science is not so much a body of knowledge as it is a method of observing, hypothesizing, and testing. This method is what all the sciences have in common.

Perhaps, too, science should involve falsifiability, a concept explored in the next chapter.




A Problem-Solving Experiment

Using Beer’s Law to Find the Concentration of Tartrazine

The Science Teacher—January/February 2022 (Volume 89, Issue 3)

By Kevin Mason, Steve Schieffer, Tara Rose, and Greg Matthias


A problem-solving experiment is a learning activity that uses experimental design to solve an authentic problem. It combines two evidence-based teaching strategies: problem-based learning and inquiry-based learning. The use of problem-based learning and scientific inquiry as an effective pedagogical tool in the science classroom has been well established and strongly supported by research (Akinoglu and Tandogan 2007; Areepattamannil 2012; Furtak, Seidel, and Iverson 2012; Inel and Balim 2010; Merritt et al. 2017; Panasan and Nuangchalerm 2010; Wilson, Taylor, and Kowalski 2010).

Floyd James Rutherford, the founder of the American Association for the Advancement of Science (AAAS) Project 2061, once underscored, “To separate conceptually scientific content from scientific inquiry is to make it highly probable that the student will properly understand neither” (1964, p. 84). A more recent study using randomized control trials showed that teachers who used an inquiry- and problem-based pedagogy for seven months improved student performance in math and science (Bando, Nashlund-Hadley, and Gertler 2019). A problem-solving experiment uses problem-based learning by posing an authentic or meaningful problem for students to solve, and inquiry-based learning by requiring students to design an experiment to collect and analyze data to solve the problem.

In the problem-solving experiment described in this article, students used Beer’s Law to collect and analyze data to determine if a person consumed a hazardous amount of tartrazine (Yellow Dye #5) for their body weight. The students used their knowledge of solutions, molarity, dilutions, and Beer’s Law to design their own experiment and calculate the amount of tartrazine in a yellow sports drink (or citrus-flavored soda).

According to the Next Generation Science Standards, energy is defined as “a quantitative property of a system that depends on the motion and interactions of matter and radiation with that system” (NGSS Lead States 2013). Interactions of matter and radiation can be some of the most challenging for students to observe, investigate, and conceptually understand. As a result, students need opportunities to observe and investigate the interactions of matter and radiation. Light is one example of radiation that interacts with matter.

Light is electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle. When light interacts with matter, light can be reflected at the surface, absorbed by the matter, or transmitted through the matter (Figure 1). When a single beam of light enters a substance perpendicularly (at a 90° angle to the surface), the amount of reflection is minimal. Therefore, the light will either be absorbed by the substance or transmitted through the substance. When a given wavelength of light shines into a solution, the amount of light that is absorbed will depend on the identity of the substance, the thickness of the container, and the concentration of the solution.

Figure 1. Light interacting with matter. (Retrieved from https://etorgerson.files.wordpress.com/2011/05/light-reflect-refract-absorb-label.jpg)

Beer’s Law states that the amount of light absorbed is directly proportional to the thickness and concentration of a solution. Beer’s Law is also known as the Beer-Lambert Law. A solution of a higher concentration will absorb more light and transmit less light (Figure 2). Similarly, if the solution is placed in a thicker container that requires the light to pass through a greater distance, then the solution will absorb more light and transmit less light.

Figure 2. Light transmitted through a solution. (Retrieved from https://media.springernature.com/original/springer-static/image/chp%3A10.1007%2F978-3-319-57330-4_13/MediaObjects/432946_1_En_13_Fig4_HTML.jpg)

Figure 3. Definitions of key terms.

Absorbance (A) – the process of light energy being captured by a substance

Beer’s Law (Beer-Lambert Law) – the absorbance (A) of light is directly proportional to the molar absorptivity (ε), thickness (b), and concentration (C) of the solution (A = εbC)

Concentration (C) – the amount of solute dissolved per amount of solution

Cuvette – a container used to hold a sample to be tested in a spectrophotometer

Energy (E) – a quantitative property of a system that depends on motion and interactions of matter and radiation with that system (NGSS Lead States 2013).

Intensity (I) – the amount or brightness of light

Light – electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle

Molar Absorptivity (ε) – a property that represents the amount of light absorbed by a given substance per molarity of the solution and per centimeter of thickness (M⁻¹ cm⁻¹)

Molarity (M) – the number of moles of solute per liter of solution (mol/L)

Reflection – the process of light energy bouncing off the surface of a substance

Spectrophotometer – a device used to measure the absorbance of light by a substance

Tartrazine – widely used food and liquid dye

Transmittance (T) – the process of light energy passing through a substance

The amount of light absorbed by a solution can be measured using a spectrophotometer. A solution of a given concentration is placed in a small container called a cuvette. The cuvette has a known thickness that can be held constant during the experiment. It is also possible to obtain cuvettes of different thicknesses to study the effect of thickness on the absorption of light. Key definitions of the terms related to Beer’s Law and the learning activity presented in this article are provided in Figure 3.
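For a sense of the magnitudes involved, here is one worked instance of Beer’s Law; the molar absorptivity, path length, and concentration are assumed values chosen for illustration, not measurements from this activity.

```latex
% A = absorbance, \varepsilon = molar absorptivity, b = path length, C = molarity.
% All numeric values below are assumptions for illustration.
A = \varepsilon b C
  = (2.6 \times 10^{4}\ \mathrm{M^{-1}\,cm^{-1}})(1.00\ \mathrm{cm})(2.0 \times 10^{-5}\ \mathrm{M})
  = 0.52
```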

Overview of the problem-solving experiment

In the problem presented to students, a 140-pound athlete drinks two bottles of yellow sports drink every day (Figure 4; see Online Connections). When she starts to notice a rash on her skin, she reads the label of the sports drink and notices that it contains a yellow dye known as tartrazine. While tartrazine is safe to drink, it may produce some potential side effects in large amounts, including rashes, hives, or swelling. The students must design an experiment to determine the concentration of tartrazine in the yellow sports drink and the number of milligrams of tartrazine in two bottles of the sports drink.

While a sports drink may have many ingredients, the vast majority of ingredients—such as sugar or electrolytes—are colorless when dissolved in water. The dyes added to the sports drink are responsible for its color. Food manufacturers may use different dyes to color sports drinks to the desired color. Red dye #40 (allura red), blue dye #1 (brilliant blue), yellow dye #5 (tartrazine), and yellow dye #6 (sunset yellow) are the four most common dyes or colorants in sports drinks and many other commercial food products (Stevens et al. 2015). The concentration of the dye in the sports drink affects the amount of light absorbed.

In this problem-solving experiment, the students used the previously studied concept of Beer’s Law—using serial dilutions and absorbance—to find the concentration (molarity) of tartrazine in the sports drink. Based on the evidence, the students then determined if the person had exceeded the maximum recommended daily allowance of tartrazine, given in mg/kg of body mass. The learning targets for this problem-solving experiment are shown in Figure 5 (see Online Connections).

Pre-laboratory experiences

A problem-solving experiment is a form of guided inquiry, which will generally require some prerequisite knowledge and experience. In this activity, the students needed prior knowledge and experience with Beer’s Law and the techniques in using Beer’s Law to determine an unknown concentration. Prior to the activity, students learned how Beer’s Law is used to relate absorbance to concentration, as well as how to use the dilution equation M₁V₁ = M₂V₂ to determine concentrations of dilutions. The students had a general understanding of molarity and using dimensional analysis to change units in measurements.
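As a brief illustration of the dilution equation (the target concentration and volume here are assumptions, not values from the activity), preparing 100 mL of a 1.0 × 10⁻³ M dilution from the 0.01 M stock would require:

```latex
V_1 = \frac{M_2 V_2}{M_1}
    = \frac{(1.0 \times 10^{-3}\ \mathrm{M})(100\ \mathrm{mL})}{0.010\ \mathrm{M}}
    = 10\ \mathrm{mL}
```

That is, 10 mL of stock diluted with distilled water to a total volume of 100 mL.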

The techniques for using Beer’s Law were introduced in part through a laboratory experiment using various concentrations of copper sulfate. A known concentration of copper sulfate was provided, and the students followed a procedure to prepare dilutions. Students learned the technique for choosing the wavelength that provided the maximum absorbance for the solution to be tested (λmax), which is important for Beer’s Law to create a linear relationship between absorbance and solution concentration. Students graphed the absorbance of each concentration in a spreadsheet as a scatterplot and added a linear trend line. Through class discussion, the teacher checked for understanding in using the equation of the line to determine the concentration of an unknown copper sulfate solution.

After the students graphed the data, they discussed how the R² value related to the data set used to construct the graph. After completing this experiment, the students were comfortable making dilutions from a stock solution, calculating concentrations, and using the spectrophotometer with Beer’s Law to determine an unknown concentration.
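The spreadsheet analysis can also be sketched in a few lines of Python. The calibration data below are hypothetical stand-ins for student measurements; the sketch fits the trendline, computes R², and inverts the line to estimate an unknown concentration, which is the same workflow used later with tartrazine.

```python
import numpy as np

# Hypothetical calibration data (illustrative, not student measurements).
conc = np.array([1e-5, 2e-5, 4e-5, 8e-5, 1e-4])           # molarity
absorbance = np.array([0.027, 0.052, 0.101, 0.208, 0.266])

# Fit A = m*C + b; by Beer's Law the intercept should be near zero and
# the slope approximates molar absorptivity times path length.
m, b = np.polyfit(conc, absorbance, 1)

# R^2 for the fit, analogous to the spreadsheet trendline value.
predicted = m * conc + b
ss_res = np.sum((absorbance - predicted) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Invert the calibration line to estimate an unknown concentration.
a_unknown = 0.130                     # measured absorbance (hypothetical)
c_unknown = (a_unknown - b) / m
print(f"R^2 = {r_squared:.4f}, unknown concentration = {c_unknown:.2e} M")
```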

Introducing the problem

After the initial experiment on Beer’s Law, the problem-solving experiment was introduced. The problem presented to students is shown in Figure 4 (see Online Connections). A problem-solving experiment provides students with a valuable opportunity to collaborate with other students in designing an experiment and solving a problem. For this activity, the students were assigned to heterogeneous or mixed-ability laboratory groups. Groups should be diversified based on gender; research has shown that gender diversity among groups improves academic performance, while racial diversity has no significant effect (Hansen, Owan, and Pan 2015). It is also important to support students with special needs when assigning groups. The mixed-ability groups were assigned intentionally to place students with special needs with a peer who has the academic ability and disposition to provide support. In addition, some students may need additional accommodations or modifications for this learning activity, such as an outlined lab report, a shortened lab report format, or extended time to complete the analysis. All students were required to wear chemical-splash goggles and gloves, and use caution when handling solutions and glass apparatuses.

Designing the experiment

During this activity, students worked in lab groups to design their own experiment to solve a problem. The teacher used small-group and whole-class discussions to help students understand the problem. Students discussed what information was provided and what they needed to know and do to solve the problem. In planning the experiment, the teacher did not provide a procedure and intentionally offered only minimal support to the students as needed. The students designed their own experimental procedure, which encouraged critical thinking and problem solving. The students needed to be allowed to struggle to some extent. The teacher provided some direction and guidance by posing questions for students to consider and answer for themselves. Students were also frequently reminded to review their notes and the previous experiment on Beer’s Law to help them better use their resources to solve the problem. The use of heterogeneous or mixed-ability groups also helped each group be more self-sufficient and successful in designing and conducting the experiment.

Students created a procedure for their experiment with the teacher providing suggestions or posing questions to enhance the experimental design, if needed. Safety was addressed during this consultation to correct safety concerns in the experimental design or provide safety precautions for the experiment. Students needed to wear splash-proof goggles and gloves throughout the experiment. In a few cases, students realized some opportunities to improve their experimental design during the experiment. This was allowed with the teacher’s approval, and the changes to the procedure were documented for the final lab report.

Conducting the experiment

A sample of the sports drink and a 0.01 M stock solution of tartrazine were provided to the students. There are many choices of sports drinks available, but it is recommended to check the ingredients to verify that tartrazine (yellow dye #5) is the only colorant added. This will prevent other colorants from affecting the spectroscopy results in the experiment. A citrus-flavored soda could also be used as an alternative because many sodas have tartrazine added as well. It is important to note that tartrazine is considered safe to drink, but it may produce some potential side effects in large amounts, including rashes, hives, or swelling. A list of the materials needed for this problem-solving experiment is shown in Figure 6 (see Online Connections).

This problem-solving experiment required students to create dilutions of known concentrations of tartrazine as a reference to determine the unknown concentration of tartrazine in a sports drink. To create the dilutions, the students were provided with a 0.01 M stock solution of tartrazine. The teacher purchased powdered tartrazine, available from numerous vendors, to create the stock solution. The 0.01 M stock solution was prepared by weighing 0.534 g of tartrazine and dissolving it in enough distilled water to make 100 mL of solution. Yellow food coloring could be used as an alternative, but it would take some research to determine its concentration. Since students had previously explored the experimental techniques, they should know to prepare dilutions that are somewhat darker and somewhat lighter in color than the yellow sports drink sample. Students should use five dilutions for best results.
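As a check on the recipe, the 0.01 M figure follows from the molar mass of tartrazine (C₁₆H₉N₄Na₃O₉S₂, approximately 534.4 g/mol):

```latex
n = \frac{0.534\ \mathrm{g}}{534.4\ \mathrm{g/mol}} \approx 1.0 \times 10^{-3}\ \mathrm{mol},
\qquad
M = \frac{1.0 \times 10^{-3}\ \mathrm{mol}}{0.100\ \mathrm{L}} = 0.010\ \mathrm{M}
```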

Typically, a good range for the yellow sports drink is standard dilutions ranging from 1 × 10⁻³ M to 1 × 10⁻⁵ M. The teacher may need to caution the students that if a dilution is too dark, it will not yield good results and will lower the R² value. Students who used very dark dilutions often realized that eliminating that data point created a better linear trendline, as long as it did not reduce the data set to fewer than four points. Some students even tried to use the 0.01 M stock solution without any dilution. This was much too dark. The students needed to do substantial dilutions to get the solutions in the range of the sports drink.

After the dilutions were created, the absorbance of each dilution was measured using a spectrophotometer. A Vernier SpectroVis (~$400) spectrophotometer was used to measure the absorbance of the prepared dilutions with known concentrations. The students adjusted the spectrophotometer to use different wavelengths of light and selected the wavelength with the highest absorbance reading. The same wavelength was then used for each measurement of absorbance. A wavelength of approximately 425 nanometers (nm), near the absorbance maximum of tartrazine, provided an accurate measurement and a good linear relationship. After measuring the absorbance of the dilutions of known concentrations, the students measured the absorbance of the sports drink with an unknown concentration of tartrazine using the spectrophotometer at the same wavelength. If a spectrophotometer is not available, a color comparison can be used as a low-cost alternative for completing this problem-solving experiment (Figure 7; see Online Connections).

Analyzing the results

After completing the experiment, the students graphed the absorbance and known tartrazine concentrations of the dilutions on a scatterplot to create a linear trendline. In this experiment, absorbance was the dependent variable, which should be graphed on the y-axis. Some students mistakenly reversed the axes on the scatterplot. Next, the students used the graph to find the equation for the line. Then, the students solved for the unknown concentration (molarity) of tartrazine in the sports drink, given the linear equation and the absorbance of the sports drink measured experimentally.

To answer the question posed in the problem, the students also calculated the maximum amount of tartrazine that could be safely consumed by a 140 lb. person, using the information given in the problem. A common error in solving the problem was not converting the units of volume given in the problem from ounces to liters. With the molarity and volume in liters, the students then calculated the mass of tartrazine consumed per day in milligrams. A sample of the graph and calculations from one student group are shown in Figure 8. Finally, based on their calculations, the students answered the question posed in the original problem and determined if the person’s daily consumption of tartrazine exceeded the threshold for safe consumption. In this case, the students concluded that the person did NOT consume more than the allowable daily limit of tartrazine.

Figure 8. Sample graph and calculations from a student group.
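The chain of unit conversions in the students’ solution can be sketched as follows. The measured concentration, bottle size, and daily intake limit in this sketch are assumptions for illustration; they are not the values given in the article’s Figure 4, which should be used for the actual activity.

```python
# All numeric inputs are assumptions for illustration only.
OZ_TO_L = 0.02957        # fluid ounces to liters
LB_TO_KG = 0.4536        # pounds to kilograms
MOLAR_MASS = 534.4       # g/mol, tartrazine (C16H9N4Na3O9S2)

molarity = 5.0e-5                      # mol/L, read off the calibration line
volume_l = 2 * 20 * OZ_TO_L            # two assumed 20 oz bottles per day
consumed_mg = molarity * volume_l * MOLAR_MASS * 1000   # g -> mg

body_kg = 140 * LB_TO_KG               # the athlete in the problem
limit_mg_per_day = 7.5 * body_kg       # assumed limit of 7.5 mg/kg/day

print(f"Consumed: {consumed_mg:.1f} mg/day; limit: {limit_mg_per_day:.0f} mg/day")
print("Within limit" if consumed_mg <= limit_mg_per_day else "Exceeds limit")
```

With these assumed inputs, the daily intake comes out to roughly 32 mg against a limit of about 476 mg, which mirrors the students’ conclusion that the athlete stayed well under the threshold.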

Communicating the results

After conducting the experiment, students reported their results in a written laboratory report that included the following sections: title, purpose, introduction, hypothesis, materials and methods, data and calculations, conclusion, and discussion. The laboratory report was assessed using the scoring rubric shown in Figure 9 (see Online Connections). In general, the students did very well on this problem-solving experiment. Students typically scored a three or higher on each criterion of the rubric. Throughout the activity, the students successfully demonstrated their ability to design an experiment, collect data, perform calculations, solve a problem, and effectively communicate those results.

This activity is authentic problem-based learning in science, as the true concentration of tartrazine in the sports drink was not provided by the teacher or known by the students. The students were generally somewhat biased, as they assumed the experiment would show that the person exceeded the recommended maximum consumption of tartrazine. Some students struggled with reporting that the recommended limit was far higher than the amount of tartrazine in the two sports drinks consumed by the person each day. This allows for a great discussion about the use of scientific methods and evidence to provide unbiased answers to meaningful questions and problems.

The most common errors in this problem-solving experiment were calculation errors, the most frequent being errors in calculating the concentrations of the dilutions (perhaps due to the use of very small concentrations). There were also several common errors in communicating the results in the laboratory report. In some cases, students did not provide enough background information in the introduction of the report. When communicating the results, some students also failed to reference specific data from the experiment. Finally, in the discussion section, some students expressed doubts about the results, not because there was an obvious error, but because they did not believe the level consumed could be so much less than the recommended consumption limit of tartrazine.

The scientific study and investigation of energy and matter are salient topics addressed in the Next Generation Science Standards (Figure 10; see Online Connections). In a chemistry classroom, students should have multiple opportunities to observe and investigate the interaction of energy and matter. In this problem-solving experiment, students used Beer’s Law to collect and analyze data to determine if a person consumed an amount of tartrazine that exceeded the maximum recommended daily allowance. The students correctly concluded that the person in the problem did not consume more than the recommended daily amount of tartrazine for their body weight.

In this activity, students learned to work collaboratively to design an experiment, collect and analyze data, and solve a problem. These skills extend beyond any one science subject or class. Through this activity, students had the opportunity to do real-world science to solve a problem without a previously known result. The process of designing an experiment may be difficult for some students who are accustomed to being given an experimental procedure in their previous science classroom experiences. However, because students sometimes struggled to design their own experiment and perform the calculations, students also learned to persevere in collecting and analyzing data to solve a problem, which is a valuable life lesson for all students. ■

Online Connections

The Beer-Lambert Law at Chemistry LibreTexts: https://bit.ly/3lNpPEi

Beer’s Law – Theoretical Principles: https://teaching.shu.ac.uk/hwb/chemistry/tutorials/molspec/beers1.htm

Beer’s Law at Illustrated Glossary of Organic Chemistry: http://www.chem.ucla.edu/~harding/IGOC/B/beers_law.html

Beer Lambert Law at Edinburgh Instruments: https://www.edinst.com/blog/the-beer-lambert-law/

Beer’s Law Lab at PhET Interactive Simulations: https://phet.colorado.edu/en/simulation/beers-law-lab

Figure 4. Problem-solving experiment problem statement: https://bit.ly/3pAYHtj

Figure 5. Learning targets: https://bit.ly/307BHtb

Figure 6. Materials list: https://bit.ly/308a57h

Figure 7. The use of color comparison as a low-cost alternative: https://bit.ly/3du1uyO

Figure 9. Summative performance-based assessment rubric: https://bit.ly/31KoZRj

Figure 10. Connecting to the Next Generation Science Standards : https://bit.ly/3GlJnY0

Kevin Mason ( [email protected] ) is Professor of Education at the University of Wisconsin–Stout, Menomonie, WI; Steve Schieffer is a chemistry teacher at Amery High School, Amery, WI; Tara Rose is a chemistry teacher at Amery High School, Amery, WI; and Greg Matthias is Assistant Professor of Education at the University of Wisconsin–Stout, Menomonie, WI.

Akinoglu, O., and R. Tandogan. 2007. The effects of problem-based active learning in science education on students’ academic achievement, attitude and concept learning. Eurasia Journal of Mathematics, Science, and Technology Education 3 (1): 77–81.

Areepattamannil, S. 2012. Effects of inquiry-based science instruction on science achievement and interest in science: Evidence from Qatar. The Journal of Educational Research 105 (2): 134–146.

Bando, R., E. Nashlund-Hadley, and P. Gertler. 2019. Effect of inquiry and problem-based pedagogy on learning: Evidence from 10 field experiments in four countries. National Bureau of Economic Research Working Paper 26280.

Furtak, E., T. Seidel, and H. Iverson. 2012. Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research 82 (3): 300–329.

Hansen, Z., H. Owan, and J. Pan. 2015. The impact of group diversity on class performance. Education Economics 23 (2): 238–258.

Inel, D., and A. Balim. 2010. The effects of using problem-based learning in science and technology teaching upon students’ academic achievement and levels of structuring concepts. Pacific Forum on Science Learning and Teaching 11 (2): 1–23.

Merritt, J., M. Lee, P. Rillero, and B. Kinach. 2017. Problem-based learning in K–8 mathematics and science education: A literature review. The Interdisciplinary Journal of Problem-based Learning 11 (2).

NGSS Lead States. 2013. Next Generation Science Standards: For states, by states. Washington, DC: National Academies Press.

Panasan, M., and P. Nuangchalerm. 2010. Learning outcomes of project-based and inquiry-based learning activities. Journal of Social Sciences 6 (2): 252–255.

Rutherford, F.J. 1964. The role of inquiry in science teaching. Journal of Research in Science Teaching 2 (2): 80–84.

Stevens, L.J., J.R. Burgess, M.A. Stochelski, and T. Kuczek. 2015. Amounts of artificial food dyes and added sugars in foods and sweets commonly consumed by children. Clinical Pediatrics 54 (4): 309–321.

Wilson, C., J. Taylor, and S. Kowalski. 2010. The relative effects and equity of inquiry-based and commonplace science teaching on students’ knowledge, reasoning, and argumentation. Journal of Research in Science Teaching 47 (3): 276–301.


Problem-Solving Method in Teaching

The problem-solving method is a highly effective teaching strategy that is designed to help students develop critical thinking skills and problem-solving abilities. It involves providing students with real-world problems and challenges that require them to apply their knowledge, skills, and creativity to find solutions. This method encourages active learning, promotes collaboration, and allows students to take ownership of their learning.

Definition of Problem-Solving Method

Problem-solving is a process of identifying, analyzing, and resolving problems. The problem-solving method in teaching involves providing students with real-world problems that they must solve through collaboration and critical thinking. This method encourages students to apply their knowledge and creativity to develop solutions that are effective and practical.

Meaning of Problem-Solving Method

The meaning and definition of problem-solving have been given by different scholars, including:

Woodworth and Marquis (1948): Problem-solving behavior occurs in novel or difficult situations in which a solution is not obtainable by the habitual methods of applying concepts and principles derived from past experience in very similar situations.

Skinner (1968): Problem-solving is a process of overcoming difficulties that appear to interfere with the attainment of a goal. It is the procedure of making adjustments in spite of interference.

Benefits of Problem-Solving Method

The problem-solving method has several benefits for both students and teachers. These benefits include:

  • Encourages active learning: The problem-solving method encourages students to actively participate in their own learning by engaging them in real-world problems that require critical thinking and collaboration.
  • Promotes collaboration: Problem-solving requires students to work together to find solutions. This promotes teamwork, communication, and cooperation.
  • Builds critical thinking skills: The problem-solving method helps students develop critical thinking skills by providing them with opportunities to analyze and evaluate problems.
  • Increases motivation: When students are engaged in solving real-world problems, they are more motivated to learn and apply their knowledge.
  • Enhances creativity: The problem-solving method encourages students to be creative in finding solutions to problems.

Steps in Problem-Solving Method

The problem-solving method involves several steps that teachers can use to guide their students. These steps include:

  • Identifying the problem: The first step in problem-solving is identifying the problem that needs to be solved. Teachers can present students with a real-world problem or challenge that requires critical thinking and collaboration.
  • Analyzing the problem: Once the problem is identified, students should analyze it to determine its scope and underlying causes.
  • Generating solutions: After analyzing the problem, students should generate possible solutions. This step requires creativity and critical thinking.
  • Evaluating solutions: The next step is to evaluate each solution based on its effectiveness and practicality.
  • Selecting the best solution: The final step is to select the best solution and implement it.

Verification of the Concluded Solution or Hypothesis

The solution arrived at, or the conclusion drawn, must be further verified by applying it to other, similar problems. Only if the derived solution helps in solving these problems can one accept the finding with confidence. The verified solution then becomes a useful product of problem-solving behavior that can be applied to further problems. Working through these steps across a variety of problems also fosters creative thinking in the individual.

The problem-solving method is an effective teaching strategy that promotes critical thinking, creativity, and collaboration. It provides students with real-world problems that require them to apply their knowledge and skills to find solutions. By using the problem-solving method, teachers can help their students develop the skills they need to succeed in school and in life.




Solving Everyday Problems with the Scientific Method: Thinking Like a Scientist (Second Edition)

This book describes how one can use the scientific method to solve everyday problems, including medical ailments, health issues, money management, traveling, shopping, cooking, and household chores. It illustrates how to exploit the information collected from our five senses, how to solve problems when no information is available for the present problem situation, how to increase our chances of success by redefining a problem, and how to extrapolate our capabilities by seeing a relationship among heretofore unrelated concepts. One should formulate a hypothesis as early as possible in order to have a sense of direction regarding which path to follow. Occasionally, by making wild conjectures, creative solutions can transpire. However, hypotheses need to be well tested. In this way, the scientific method can help readers solve problems in both familiar and unfamiliar situations. Containing real-life examples of how various problems are solved — for instance, how some observant patients cure their own illnesses when medical experts have failed — this book will train readers to observe what others may have missed and conceive what others may not have contemplated. With practice, they will be able to solve more problems than they could previously imagine. In this second edition, the authors have added some more theories which they hope can help in solving everyday problems. At the same time, they have updated the book by including quite a few examples which they think are interesting. Readership: General public interested in self-help books; undergraduates majoring in education and behavioral psychology; graduates and researchers with research interests in problem solving, creativity, and scientific research methodology.



  21. Solving Everyday Problems with the Scientific Method

    ISBN: 978-981-3145-32- (ebook) USD 14.95. Also available at Amazon and Kobo. Description. Chapters. Reviews. Supplementary. This book describes how one can use The Scientific Method to solve everyday problems including medical ailments, health issues, money management, traveling, shopping, cooking, household chores, etc.

  22. Solving Everyday Problems with the Scientific Method: Thinking Like a

    Through this way, The Scientific Method can help readers solve problems in both familiar and unfamiliar situations. Containing real-life examples of how various problems are solved — for instance, how some observant patients cure their own illnesses when medical experts have failed — this book will train readers to observe what others may ...