Research methods
Research method(s) | Source |
---|---|
E-questionnaire | [ ] |
Interviews | [ ] |
Experiment and questionnaire | [ ] |
Interviews and e-questionnaire | [ ] |
Interviews and experiment | [ ] |
Statistical methods of analysis
Statistical methods of analysis | Source |
---|---|
ANOVA | [ ] |
PLS-SEM | [ ] |
Factor analysis, exploratory factor analysis, confirmatory factor analysis | [ ] |
t-test, Chi-square | [ ] |
Regression, OLS regression | [ ] |
Correlation analysis | [ ] |
Descriptive statistics | [ ] |
Content analysis | [ ] |
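For readers less familiar with the simpler techniques listed above, descriptive statistics and correlation analysis can be illustrated directly. A minimal sketch in Python's standard library; the Likert-scale responses are invented for illustration, not drawn from any reviewed study:

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point Likert responses: perceived usefulness vs. intention to use
usefulness = [4, 5, 3, 4, 2, 5, 4, 3]
intention = [4, 5, 2, 4, 2, 5, 3, 3]

print(round(statistics.fmean(usefulness), 2))        # 3.75 (mean)
print(round(statistics.stdev(usefulness), 2))        # 1.04 (sample standard deviation)
print(round(pearson_r(usefulness, intention), 2))    # 0.92 (correlation)
```

The more involved methods in the table (PLS-SEM, factor analysis) require dedicated statistical packages and much larger samples.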
Field of study
Fields of study | Sources |
---|---|
Health | [ ] |
Veterinary | [ ] |
Education | [ ] |
Tourism | [ ] |
Banking | [ ] |
Customer service | [ ] |
Business | [ ] |
Mobile commerce | [ ] |
Insurance | [ ] |
Transport | [ ] |
Behavioral theories
Behavioral theories used | Source |
---|---|
UTAUT | [ ] |
UTAUT2 | [ ] |
TAM | [ ] |
TAM and DOI | [ ] |
TAM and ECM and ISS | [ ] |
TAM and SST | [ ] |
TAM and DOI and TOE | [ ] |
U&G | [ ] |
SERVQUAL | [ ] |
TPB | [ ] |
SOR | [ ] |
CAT | [ ] |
TRA | [ ] |
Extended post-acceptance model of IS continuance | [ ] |
Factors influencing intention
Constructs/Factors | Directly affecting chatbot intention-adoption | Quantity | Indirectly affecting chatbot intention-adoption | Quantity |
---|---|---|---|---|
Performance expectancy | [ ] | 5 | | |
Effort expectancy | [ ] | 3 | | |
Habit | [ ] | 3 | | |
Perceived usefulness | [ ] | 6 | [ ] | 1 |
Perceived enjoyment | [ ] | 3 | [ ] | 1 |
Perceived ease of use | [ ] | 3 | [ ] | 2 |
Trust | [ ] | 5 | [ ] | 1 |
Privacy concerns | [ ] | 1 | | |
Perceived humanness | [ ] | 2 | | |
Perceived completeness | [ ] | 1 | | |
Perceived convenience | [ ] | 1 | | |
Personal innovation | [ ] | 1 | | |
Attitude | [ ] | 4 | [ ] | 1 |
Social influence | [ ] | 3 | | |
Facilitating conditions | [ ] | 2 | | |
Anthropomorphism | [ ] | 2 | | |
Reliability | [ ] | 1 | | |
Empathy | [ ] | 1 | | |
Tangibility | [ ] | 1 | | |
Predisposition (to use self-service technologies) | [ ] | 1 | | |
Perceived intelligence | [ ] | 1 | | |
Perceived utility | [ ] | 1 | | |
Communication style | [ ] | 1 | | |
Hedonic motivation | [ ] | 1 | | |
Price value | [ ] | 1 | | |
List of the top-5 most cited papers, as of October 7th, 2021
# | Authors | Year | Journal or conference | Citations |
---|---|---|---|---|
1 | Brandtzaeg and Folstad [ ] | 2017 | 4th International Conference on Internet Science | 410 |
2 | Ciechanowski et al. [ ] | 2019 | Future Generation Computer Systems | 249 |
3 | Go and Sundar [ ] | 2019 | Computers in Human Behavior | 182 |
4 | Nadarzynski et al. [ ] | 2019 | Digital Health | 118 |
5 | Zarouali et al. [ ] | 2018 | Cyberpsychology, Behavior, and Social Networking | 96 |
Chatbot adoption-intention papers per continent and country
Continent | Country | Quantity | Sources |
---|---|---|---|
Asia | China | 2 | [ ] |
| India | 6 | [ ] |
| Indonesia | 1 | [ ] |
| Japan | 1 | [ ] |
| South Korea | 1 | [ ] |
| Philippines | 1 | [ ] |
| Taiwan | 1 | [ ] |
America | USA | 7 | [ ] |
Europe | The United Kingdom | 5 | [ ] |
| Poland | 1 | [ ] |
| Germany | 2 | [ ] |
| Norway | 2 | [ ] |
| The Netherlands | 2 | [ ] |
| Spain | 1 | [ ] |
| Italy | 2 | [ ] |
Africa | Nigeria | 1 | [ ] |
Unknown | | 4 | [ ] |
1 Ashfaq M, Yun J, Yu S, Loureiro SMC. I, Chatbot: modeling the determinants of users' satisfaction and continuance intention of AI-powered service agents. Telematics Inform. 2020;54:17.
2 Przegalinska A, Ciechanowski L, Stroz A, Gloor P, Mazurek G. In bot we trust: a new methodology of chatbot performance measures. Bus Horiz. 2019;62(6):785-97.
3 Radziwill NM, Benton MC. Evaluating quality of chatbots and intelligent conversational agents. Comput Sci (Internet). 2017. Available from: http://arxiv.org/abs/1704.04579.pdf.
4 Sivaramakrishnan S, Wan F, Tang Z. Giving an "e-human touch" to e-tailing: the moderating roles of static information quantity and consumption motive in the effectiveness of an anthropomorphic information agent. J Interact Mark. 2007;21(1):60-75.
5 Dash M, Bakshi S. An exploratory study of customer perceptions of usage of chatbots in the hospitality industry. Int J Cust Relat. 2019;7(2):27-33.
6 Rowley J. Product searching with shopping bots. Internet Res. 2000;10(3):203-14.
7 Smith MD. The impact of shopbots on electronic markets. J Acad Mark Sci. 2002;30(4):446-54.
8 Brandtzaeg PB, Folstad A. Why people use chatbots. In: International Conference on Internet Science; 2017 Nov 22-24; Thessaloniki. 377-92. doi: 10.1007/978-3-319-70284-1_30.
9 Brennan K. The managed teacher: emotional labour, education, and technology. Educ Insights. 2006;10(2):55-65.
10 Shawar BA, Atwell E. Different measurements metrics to evaluate a chatbot system. In: Proceedings of the Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technologies; 2007. 89-96.
11 Zumstein D, Hundertmark S. Chatbots - an interactive technology for personalized communication, transactions and services. IADIS Int J WWW/Internet. 2017;15(1):96-109.
12 Cardona DR, Janssen A, Guhr N, Breitner MH, Milde J. A matter of trust? Examination of chatbot usage in insurance business. In: Proceedings of the 54th Hawaii International Conference on System Sciences; 2021; Maui, Hawaii.
13 Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digital Health. 2019;5:1-12.
14 Quah JT, Chua YW. Chatbot assisted marketing in financial service industry. In: International Conference on Services Computing. Cham: Springer; 2019. 107-14. doi: 10.1007/978-3-030-23554-3_8.
15 Mogaji E, Balakrishnan J, Nwoba AC, Nguyen NP. Emerging-market consumers' interactions with banking chatbots. Telemat Inform. 2021;65:101711.
16 Ng M, Coopamootoo K, Toreini E, Aitken M, Elliot K, van Moorsel A. Simulating the effects of social presence on trust, privacy concerns & usage intentions in automated bots for finance. In: 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW).
17 Li L, Lee KY, Emokpae E, Yang S-B. What makes you continuously use chatbot services? Evidence from Chinese online travel agencies. Electron Mark. 2021. doi: 10.1007/s12525-020-00454-z.
18 Pillai R, Sivathanu B. Adoption of AI-based chatbots for hospitality and tourism. Int J Contemp Hosp Manag. 2020;32(10):3199-226.
19 Melián-González S, Gutiérrez-Taño D, Bulchand-Gidumal J. Predicting the intentions to use chatbots for travel and tourism. Curr Issues Tour. 2021;24(2):192-210.
20 Roy R, Naidoo V. Enhancing chatbot effectiveness: the role of anthropomorphic conversational styles and time orientation. J Bus Res. 2021;126:23-34.
21 Zarouali B, Van den Broeck E, Walrave M, Poels K. Predicting consumer responses to a chatbot on Facebook. Cyberpsychol Behav Soc Netw. 2018;21(8):491-7.
22 Chung M, Ko E, Joung H, Kim SJ. Chatbot e-service and customer satisfaction regarding luxury brands. J Bus Res. 2020;117:587-95.
23 Luo X, Tong S, Fang Z, Qu Z. Frontiers: machines vs. humans: the impact of artificial intelligence chatbot disclosure on customer purchases. Mark Sci. 2019;38(6):937-47.
24 Kvale K, Freddi E, Hodnebrog S, Sell OA, Folstad A. Understanding the user experience of customer service chatbots: what can we learn from customer satisfaction surveys? In: Chatbot Research and Design: 4th International Workshop, CONVERSATIONS 2020; 2020 Nov 23-24. 2021. 205-18.
25 Van den Broeck E, Zarouali B, Poels K. Chatbot advertising effectiveness: when does the message get through? Comput Hum Behav. 2019;98:150-7.
26 Soni R, Pooja B. Trust in chatbots: investigating key factors influencing the adoption of chatbots by Generation Z. MuktShabd J. 2020;9(5):5528-43.
27 Sands S, Ferraro C, Campbell C, Tsao HY. Managing the human-chatbot divide: how service scripts influence service experience. J Serv Manag. 2021.
28 De Cicco R, Da Costa e Silva SCL, Alparone FR. It's on its way: chatbots applied for online food delivery services, social or task-oriented interaction style? J Foodserv Bus Res. 2020;24(2):140-64.
29 Van der Goot MJ, Pilgrim T. Exploring age differences in motivations for and acceptance of chatbot communication in a customer service context. In: International Workshop on Chatbot Research and Design (LNCS, vol. 11970). Springer; 2020. 173-86.
30 Malik P, Gautam S, Srivastava S. A study on behavior intention for using chatbots. In: 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO); 2020 Jun 4-5; Noida, India. IEEE.
31 Huang YS, Kao WK. Chatbot service usage during a pandemic: fear and social distancing. Serv Ind J. 2021;41(13-14):964-84.
32 Sheehan B, Jin HS, Gottlieb U. Customer service chatbots: anthropomorphism and adoption. J Bus Res. 2020;115:14-24.
33 Soni R, Tyagi V. Acceptance of chat bots by millennial consumers. Int J Res Eng Manag. 2019;4(10):429-32.
34 Trapero H, Ilao J, Lacaza R. An integrated theory for chatbot use in air travel: questionnaire development and validation. In: 2020 IEEE Region 10 Conference (TENCON); 2020 Nov 16-19; Osaka, Japan. 652-7. doi: 10.1109/TENCON50793.2020.9293710.
35 Van Eeuwen M. Mobile conversational commerce: messenger chatbots as the next interface between businesses and consumers. Master's thesis, University of Twente; 2017.
36 De Cosmo LM, Piper L, Di Vittorio A. The role of attitude toward chatbots and privacy concern on the relationship between attitude toward mobile advertising and behavioral intent to use chatbots. Ital J Mark. 2021;(1-2):83-102.
37 Kasilingam DL. Understanding the attitude and intention to use smartphone chatbots for shopping. Technol Soc. 2020;62:15.
38 Brachten F, Kissmer T, Stieglitz S. The acceptance of chatbots in an enterprise context – a survey study. Int J Inf Manag. 2021;60(C). doi: 10.1016/j.ijinfomgt.2021.102375.
39 Selamat MA, Windasari NA. Chatbot for SMEs: integrating customer and business owner perspectives. Technol Soc. 2021;66(C):101685.
40 Cardona DR, Werth O, Schonborn S, Breitner MH. A mixed methods analysis of the adoption and diffusion of chatbot technology in the German insurance sector. In: Twenty-fifth Americas Conference on Information Systems; 2019; Cancun, Mexico.
41 Fryer L, Nakao K, Thompson A. Chatbot learning partners: connecting learning experiences, interest and competence. Comput Hum Behav. 2019;93:279-89.
42 Almahri FAJ, Bell D, Merhi M. Understanding student acceptance and use of chatbots in the United Kingdom universities: a structural equation modeling approach. In: 6th IEEE International Conference on Information Management; 2020. 284-8.
43 Misirlis N, Vlachopoulou M. Social media metrics and analytics in marketing - S3M: a mapping literature review. Int J Inf Manag. 2018;38(1):270-6.
44 Folstad A, Nordheim CB, Bjorkli CA. What makes users trust a chatbot for customer service? An exploratory interview study. In: 5th International Conference on Internet Science (INSCI); 2018 Oct 24-26; St. Petersburg, Russia. 194-208.
45 Ciechanowski L, Przegalinska A, Magnuski M, Gloor P. In the shades of the uncanny valley: an experimental study of human-chatbot interaction. Future Gener Comput Syst. 2019;92:539-48.
46 Lee S, Lee N, Sah YJ. Perceiving a mind in a chatbot: effect of mind perception and social cues on co-presence, closeness, and intention to use. Int J Hum Comput Interact. 2020;36(10):930-40.
47 Kuberkar S, Singhal TK. Factors influencing adoption intention of AI powered chatbot for public transport services within a smart city. Int J Emerg Technol. 2020;11(3):948-58.
48 Huang DH, Chueh HE. Chatbot usage intention analysis: veterinary consultation. J Innov Knowl. 2021;6(3):135-44.
49 Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. 2003;27(3):425-78.
50 Venkatesh V, Thong JYL, Xu X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. 2012;36(1):157-78.
51 Davis FD, Bagozzi RP, Warshaw PR. User acceptance of computer technology: a comparison of two theoretical models. Manag Sci. 1989;35(8):982-1003.
52 Cheng X, Bao Y, Zarifis A, Gong W, Mou J. Exploring consumers' response to text-based chatbots in e-commerce: the moderating role of task complexity and chatbot disclosure. Internet Res. 2021;32(2). doi: 10.1108/INTR-08-2020-0460.
53 Silva S, De Cicco R, Alparone F. What kind of chatbot do millennials prefer to interact with? In: Proceedings of the 49th European Marketing Academy Annual Conference; 2020 May 26-29; Budapest.
54 Go E, Sundar SS. Humanizing chatbots: the effects of visual, identity and conversational cues on humanness perceptions. Comput Hum Behav. 2019;97:304-16.
Step-by-Step Guide: How to Use ChatGPT for Writing a Literature Review
Writing a literature review can be a challenging task for researchers and students alike. It requires a comprehensive understanding of the existing body of research on a particular topic. However, with the advent of advanced language models like ChatGPT, the process has become more accessible and efficient.
In this step-by-step guide, iLovePhD explores how you can leverage ChatGPT to write a literature review that is both informative and well structured.
Step 1: Defining Your Research Objective
Before diving into the literature review process, it is crucial to define your research objective.
Clearly articulate the topic, research question, or hypothesis you aim to address through your literature review. This step will help you maintain focus and guide your search for relevant sources.
Step 2: Identifying Keywords and Search Terms
To effectively use ChatGPT to assist in your literature review, you need to identify relevant keywords and search terms related to your research topic.
These keywords will help you narrow down your search and gather pertinent information. Consider using tools like Google Keyword Planner or other keyword research tools to discover commonly used terms in your field.
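The keyword groups from this step can be combined into a database search string mechanically: synonyms within a group are OR-ed, and the groups are AND-ed. A minimal sketch; the helper name and keyword groups below are illustrative, not prescribed by any database:

```python
def build_search_string(groups):
    """Combine keyword groups into a Boolean query:
    synonyms within a group are OR-ed, groups are AND-ed."""
    return " AND ".join(
        "(" + " OR ".join(f'"{kw}"' for kw in group) + ")"
        for group in groups
    )

# Hypothetical keyword groups for a chatbot-adoption review
query = build_search_string([
    ["chatbot", "conversational agent"],
    ["adoption", "acceptance", "intention to use"],
])
print(query)
# ("chatbot" OR "conversational agent") AND ("adoption" OR "acceptance" OR "intention to use")
```

The resulting string can be pasted into most scholarly databases' advanced-search fields, though each database has its own syntax quirks worth checking.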
Step 3: Familiarizing Yourself with ChatGPT
Before engaging with ChatGPT, it is essential to understand its capabilities and limitations. Familiarize yourself with the prompts and commands that work best with the model.
Keep in mind that ChatGPT is an AI language model trained on a vast amount of data, so it can provide valuable insights and suggestions, but it’s important to critically evaluate and validate the information it generates.
Step 4: Generating an Initial Literature Review Outline
Start by creating an outline for your literature review. Outline the main sections, such as the introduction, methodology, results, discussion, and conclusion.
Within each section, jot down the key points or subtopics you want to cover. This will help you organize your thoughts and structure your review effectively.
Step 5: Engaging with ChatGPT for Research Assistance
Once you have your outline ready, engage with ChatGPT for research assistance.
Begin by providing a clear and concise prompt that specifies the topic, context, and any specific questions you have. For example, “What are the current trends in [your research topic]?” or “Can you provide an overview of the main theories on [your research question]?”
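Prompts like the examples above can be kept consistent across a long review by templating them. A minimal sketch; the function and field names are illustrative, not part of any ChatGPT interface:

```python
def make_prompt(template, **fields):
    """Fill a reusable prompt template with topic-specific details."""
    return template.format(**fields)

# Hypothetical topic and timeframe for illustration
trend_prompt = make_prompt(
    "What are the current trends in {topic}? "
    "Focus on {timeframe} and cite the main authors.",
    topic="chatbot adoption in banking",
    timeframe="2018-2021",
)
print(trend_prompt)
# What are the current trends in chatbot adoption in banking? Focus on 2018-2021 and cite the main authors.
```

Keeping templates in one place makes it easy to rerun the same questions for each subtopic of your outline.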
Step 6: Reviewing and Selecting Generated Content
ChatGPT will generate a response based on your prompt. Carefully review the content generated, considering its relevance, accuracy, and coherence.
Extract key points, relevant references, and insightful arguments from the response and incorporate them into your literature review. Be sure to cite and attribute the sources appropriately.
Step 7: Ensuring Coherence and Flow
While ChatGPT can provide valuable content, it's important to ensure the coherence and flow of your literature review.
Use your critical thinking skills to connect the generated content with your research objective and existing knowledge. Rearrange, rephrase, and expand upon the generated text to ensure it aligns with the structure and purpose of your review.
Step 8: Editing and Proofreading
Once you have incorporated the generated content into your literature review, thoroughly edit and proofread the document.
Check for grammatical errors, consistency in referencing, and overall clarity. This step is crucial to ensure your literature review is polished and professional.
When engaging with ChatGPT for research assistance, provide clear and specific instructions in your prompts to guide it in generating relevant and accurate content for your literature review.
Using ChatGPT to write a literature review can greatly facilitate the research process. By following a step-by-step approach, researchers can effectively leverage ChatGPT’s capabilities to gather insights, generate content, and enhance the quality of their literature review. However, it is important to approach the generated content critically, validate it with reliable sources, and ensure coherence within the review.
Are We There Yet? A Systematic Literature Review on Chatbots in Education
Chatbots are a promising technology with the potential to enhance workplaces and everyday life. In terms of scalability and accessibility, they also offer unique possibilities as communication and information tools for digital learning. In this paper, we present a systematic literature review investigating the areas of education where chatbots have already been applied, explore the pedagogical roles of chatbots, the use of chatbots for mentoring purposes, and their potential to personalize education. We conducted a preliminary analysis of 2,678 publications to perform this literature review, which allowed us to identify 74 relevant publications for chatbots’ application in education. Through this, we address five research questions that, together, allow us to explore the current state-of-the-art of this educational technology. We conclude our systematic review by pointing to three main research challenges: 1) Aligning chatbot evaluations with implementation objectives, 2) Exploring the potential of chatbots for mentoring students, and 3) Exploring and leveraging adaptation capabilities of chatbots. For all three challenges, we discuss opportunities for future research.
Educational Technologies enable distance learning models and provide students with the opportunity to learn at their own pace. They have found their way into schools and higher education institutions through Learning Management Systems and Massive Open Online Courses, enabling teachers to scale up good teaching practices ( Ferguson and Sharples, 2014 ) and allowing students to access learning material ubiquitously ( Virtanen et al., 2018 ).
Despite the innovative power of educational technologies, most commonly used technologies do not substantially change teachers’ role. Typical teaching activities like providing students with feedback, motivating them, or adapting course content to specific student groups are still entrusted exclusively to teachers, even in digital learning environments. This can lead to the teacher-bandwidth problem ( Wiley and Edwards, 2002 ), the result of a shortage of teaching staff to provide highly informative and competence-oriented feedback at large scale. Nowadays, however, computers and other digital devices open up far-reaching possibilities that have not yet been fully exploited. For example, incorporating process data can provide students with insights into their learning progress and bring new possibilities for formative feedback, self-reflection, and competence development ( Quincey et al., 2019 ). According to ( Hattie, 2009 ), feedback in terms of learning success has a mean effect size of d = 0.75, while ( Wisniewski et al., 2019 ) even report a mean effect of d = 0.99 for highly informative feedback. Such feedback provides suitable conditions for self-directed learning ( Winne and Hadwin, 2008 ) and effective metacognitive control of the learning process ( Nelson and Narens, 1994 ).
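The effect sizes cited above (d = 0.75, d = 0.99) are Cohen's d values: the difference between two group means divided by their pooled standard deviation. A minimal sketch of the computation, with invented scores rather than data from the cited studies:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference over pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Invented test scores: learners with vs. without informative feedback
with_feedback = [78, 85, 90, 82, 88]
without_feedback = [70, 75, 80, 72, 78]
print(round(cohens_d(with_feedback, without_feedback), 2))
```

By the usual convention, d near 0.8 or above counts as a large effect, which is why the reported feedback effects are considered substantial.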
One of the educational technologies designed to provide actionable feedback in this regard is Learning Analytics. Learning Analytics is defined as the research area that focuses on collecting traces that learners leave behind and using those traces to improve learning ( Duval and Verbert, 2012 ; Greller and Drachsler, 2012 ). Learning Analytics can be used both by students to reflect on their own learning progress and by teachers to continuously assess the students’ efforts and provide actionable feedback. Another relevant educational technology is Intelligent Tutoring Systems. Intelligent Tutoring Systems are defined as computerized learning environments that incorporate computational models ( Graesser et al., 2001 ) and provide feedback based on learning progress. Educational technologies specifically focused on feedback for help-seekers, comparable to raising hands in the classroom, are Dialogue Systems and Pedagogical Conversational Agents ( Lester et al., 1997 ). These technologies can simulate conversational partners and provide feedback through natural language ( McLoughlin and Oliver, 1998 ).
Research in this area has recently focused on chatbot technology, a subtype of dialog systems, as several technological platforms have matured and led to applications in various domains. Chatbots incorporate generic language models extracted from large parts of the Internet and enable feedback by limiting themselves to text or voice interfaces. For this reason, they have also been proposed and researched for a variety of applications in education ( Winkler and Soellner, 2018 ). Recent literature reviews on chatbots in education ( Winkler and Soellner, 2018 ; Hobert, 2019a ; Hobert and Meyer von Wolff, 2019 ; Jung et al., 2020 ; Pérez et al., 2020 ; Smutny and Schreiberova, 2020 ; Pérez-Marín, 2021 ) have reported on such applications as well as design guidelines, evaluation possibilities, and effects of chatbots in education.
In this paper, we contribute to the state-of-the-art of chatbots in education by presenting a systematic literature review, where we examine so-far unexplored areas such as implementation objectives, pedagogical roles, mentoring scenarios, the adaptations of chatbots to learners, and application domains. This paper is structured as follows: First, we review related work (section 2), derive research questions from it, then explain the applied method for searching related studies (section 3), followed by the results (section 4), and finally, we discuss the findings (section 5) and point to future research directions in the field (section 5).
In order to accurately cover the field of research and deal with the plethora of terms for chatbots in the literature (e.g. chatbot, dialogue system or pedagogical conversational agent) we propose the following definition:
Chatbots are digital systems that can be interacted with entirely through natural language via text or voice interfaces. They are intended to automate conversations by simulating a human conversation partner and can be integrated into software, such as online platforms, digital assistants, or be interfaced through messaging services.
Outside of education, typical applications of chatbots are in customer service ( Xu et al., 2017 ), counseling of hospital patients ( Vaidyam et al., 2019 ), or information services in smart speakers ( Ram et al., 2018 ). One central element of chatbots is the intent classification, also named the Natural Language Understanding (NLU) component, which is responsible for the sense-making of human input data. Looking at the current advances in chatbot software development, it seems that this technology’s goal is to pass the Turing Test ( Saygin et al., 2000 ) one day, which could make chatbots effective educational tools. Therefore, we ask ourselves “ Are we there yet? - Will we soon have an autonomous chatbot for every learner?”
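The intent classification step described above can be illustrated, in heavily simplified form, by keyword matching; production NLU components instead use trained statistical models. A minimal sketch in which the intents and keyword sets are invented for illustration:

```python
# Toy intent classifier: scores each intent by keyword overlap with the input.
INTENTS = {
    "ask_deadline": {"deadline", "due", "submit", "when"},
    "request_feedback": {"feedback", "review", "grade", "correct"},
    "greeting": {"hello", "hi", "hey"},
}

def classify_intent(utterance):
    """Return the intent whose keyword set best overlaps the utterance."""
    tokens = {w.strip("?!.,") for w in utterance.lower().split()}
    scores = {intent: len(tokens & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify_intent("When is the essay due?"))  # ask_deadline
```

Real chatbot platforms replace the keyword sets with classifiers trained on example utterances, but the contract is the same: map free-form input to one of a fixed set of intents, with a fallback when nothing matches.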
To understand and underline the current need for research in the use of chatbots in education, we first examined the existing literature, focusing on comprehensive literature reviews. By looking at research questions in these literature reviews, we identified 21 different research topics and extracted findings accordingly. To structure research topics and findings in a comprehensible way, a three-stage clustering process was applied. While the first stage consisted of coding research topics by keywords, the second stage was applied to form overarching research categories ( Table 1 ). In the final stage, the findings within each research category were clustered to identify and structure commonalities within the literature reviews. The result is a concept map, which consists of four major categories. Those categories are CAT1. Applications of Chatbots, CAT2. Chatbot Designs, CAT3. Evaluation of Chatbots and CAT4. Educational Effects of Chatbots. To standardize the terminology and concepts applied, we present the findings of each category in a separate sub-section, respectively ( see Figure 1 , Figure 2 , Figure 3 , and Figure 4 ) and extended it with the outcomes of our own literature study that will be reported in the remaining parts of this article. Due to the size of the concept map a full version can be found in Appendix A .
TABLE 1 . Assignment of coded research topics identified in related literature reviews to research categories.
FIGURE 1 . Applications of chatbots in related literature reviews (CAT1).
FIGURE 2 . Chatbot designs in related literature reviews (CAT2).
FIGURE 3 . Evaluation of chatbots in related literature reviews (CAT3).
FIGURE 4 . Educational Effects of chatbots in related literature reviews (CAT4).
Regarding the applications of chatbots (CAT1), application clusters (AC) and application statistics (AS) have been described in the literature, which we visualized in Figure 1 . The study of ( Pérez et al., 2020 ) identifies two application clusters, defined through chatbot activities: “service-oriented chatbots” and “teaching-oriented chatbots.” ( Winkler and Soellner, 2018 ) identify applications clusters by naming the domains “health and well-being interventions,” “language learning,” “feedback and metacognitive thinking” as well as “motivation and self-efficacy.” Concerning application statistics (AS), ( Smutny and Schreiberova, 2020 ) found that nearly 47% of the analyzed chatbots incorporate informing actions, and 18% support language learning by elaborating on chatbots integrated into the social media platform Facebook. Besides, the chatbots studied had a strong tendency to use English, at 89%. This high number aligns with results from ( Pérez-Marín, 2021 ), where 75% of observed agents, as a related technology, were designed to interact in the English language. ( Pérez-Marín, 2021 ) also shows that 42% of the analyzed chatbots had mixed interaction modalities. Finally, ( Hobert and Meyer von Wolff, 2019 ) observed that only 25% of examined chatbots were incorporated in formal learning settings, the majority of published material focuses on student-chatbot interaction only and does not enable student-student communication, as well as nearly two-thirds of the analyzed chatbots center only on a single domain. Overall, we can summarize that so far there are six application clusters for chatbots for education categorized by chatbot activities or domains. The provided statistics allow for a clearer understanding regarding the prevalence of chatbots applications in education ( see Figure 1 ).
Regarding chatbot designs (CAT2), most of the research questions concerned with chatbots in education can be assigned to this category. We found three aspects in this category visualized in Figure 2 : Personality (PS), Process Pipeline (PP), and Design Classifications (DC). Within these, most research questions can be assigned to Design Classifications (DC), which are separated into Classification Aspects (DC2) and Classification Frameworks (DC1). One classification framework is defined through “flow chatbots,” “artificially intelligent chatbots,” “chatbots with integrated speech recognition,” as well as “chatbots with integrated context-data” by ( Winkler and Soellner, 2018 ). A second classification framework by ( Pérez-Marín, 2021 ) covers pedagogy, social, and HCI features of chatbots and agents, which themselves can be further subdivided into more detailed aspects. Other Classification Aspects (DC2) derived from several publications, provide another classification schema, which distinguishes between “retrieval vs. generative” based technology, the “ability to incorporate context data,” and “speech or text interface” ( Winkler and Soellner, 2018 ; Smutny and Schreiberova, 2020 ). By specifying text interfaces as “Button-Based” or “Keyword Recognition-Based” ( Smutny and Schreiberova, 2020 ), text interfaces can be subdivided. Furthermore, a comparison of speech and text interfaces ( Jung et al., 2020 ) shows that text interfaces have advantages for conveying information, and speech interfaces have advantages for affective support. The second aspect of CAT2 concerns the chatbot processing pipeline (PP), highlighting user interface and back-end importance ( Pérez et al., 2020 ). Finally, ( Jung et al., 2020 ) focuses on the third aspect, the personality of chatbots (PS). 
Here, the study derives four guidelines helpful in education: positive or neutral emotional expressions, a limited amount of animated or visual graphics, a well-considered gender of the chatbot, and human-like interactions. In summary, we have found in CAT2 three main design aspects for the development of chatbots. CAT2 is much more diverse than CAT1 with various sub-categories for the design of chatbots. This indicates the huge flexibility to design chatbots in various ways to support education.
Regarding the evaluation of chatbots (CAT3), we found three aspects assigned to this category, visualized in Figure 3 : Evaluation Criteria (EC), Evaluation Methods (EM), and Evaluation Instruments (EI). Concerning Evaluation Criteria, seven criteria can be identified in the literature. The first and most important in the educational field, according to ( Smutny and Schreiberova, 2020 ) is the evaluation of learning success ( Hobert, 2019a ), which can have subcategories such as how chatbots are embedded in learning scenarios ( Winkler and Soellner, 2018 ; Smutny and Schreiberova, 2020 ) and teaching efficiency ( Pérez et al., 2020 ). The second is acceptance, which ( Hobert, 2019a ) names as “acceptance and adoption” and ( Pérez et al., 2020 ) as “students’ perception.” Further evaluation criteria are motivation, usability, technical correctness, psychological, and further beneficial factors ( Hobert, 2019a ). These Evaluation Criteria show broad possibilities for the evaluation of chatbots in education. However, ( Hobert, 2019a ) found that most evaluations are limited to single evaluation criteria or narrower aspects of them. Moreover, ( Hobert, 2019a ) introduces a classification matrix for chatbot evaluations, which consists of the following Evaluation Methods (EM): Wizard-of-Oz approach, laboratory studies, field studies, and technical validations. In addition to this, ( Winkler and Soellner, 2018 ) recommends evaluating chatbots by their embeddedness into a learning scenario, a comparison of human-human and human-chatbot interactions, and comparing spoken and written communication. Instruments to measure these evaluation criteria were identified by ( Hobert, 2019a ) by naming quantitative surveys, qualitative interviews, transcripts of dialogues, and technical log files. Regarding CAT3, we found three main aspects for the evaluation of chatbots. 
We can conclude that this is a more balanced and structured distribution in comparison to CAT2, providing researchers with guidance for evaluating chatbots in education.
Regarding educational effects of chatbots (CAT4), we found two aspects, visualized in Figure 4 : Effect Size (ES) and Beneficial Chatbot Features for Learning Success (BF). Concerning the effect size, Pérez et al. (2020) identified a strong dependency between learning and the related curriculum, while Winkler and Soellner (2018) elaborate on general student characteristics that influence how students interact with chatbots. They state that students’ attitudes towards technology, learning characteristics, educational background, self-efficacy, and self-regulation skills affect these interactions. Moreover, the study emphasizes chatbot features that can be regarded as beneficial in terms of learning outcomes (BF): “Context-Awareness,” “Proactive guidance by students,” “Integration in existing learning and instant messaging tools,” “Accessibility,” and “Response Time.” Overall, for CAT4, we found two main distinguishing aspects for chatbots; however, the reported studies vary widely in their research design, making high-level results hardly comparable.
Looking at the related work, many research questions for the application of chatbots in education remain open. Therefore, we selected five goals to be further investigated in our literature review. Firstly, we were interested in the objectives for implementing chatbots in education (Goal 1), as the relevance of chatbots for applications within education does not seem to be clearly delineated. Secondly, we aim to explore the pedagogical roles of chatbots in the existing literature (Goal 2) to understand how chatbots can take over tasks from teachers. Winkler and Soellner (2018) and Pérez-Marín (2021) identified research gaps for supporting meta-cognitive skills, such as self-regulation, with chatbots. This requires a chatbot application that takes a mentoring role, as the development of these meta-cognitive skills cannot be achieved solely by information delivery. Within our review, we incorporate this by reviewing the mentoring role of chatbots (Goal 3). Another key element of a mentoring chatbot is adaptation to the learner’s needs. Therefore, Goal 4 of our review lies in the investigation of the adaptation approaches used by chatbots in education. For Goal 5, we want to extend the work of Winkler and Soellner (2018) and Pérez et al. (2020) regarding Application Clusters (AC) and map applications by further investigating specific learning domains in which chatbots have been studied.
To delineate and map the field of chatbots in education, initial findings were collected by a preliminary literature search. One of the takeaways is that the emerging field around educational chatbots has seen much activity in the last two years. Based on the experience of this preliminary search, search terms, queries, and filters were constructed for the actual structured literature review. This structured literature review follows the PRISMA framework ( Liberati et al., 2009 ), a guideline for reporting systematic reviews and meta-analyses. The framework consists of an elaborated structure for systematic literature reviews and sets requirements for reporting information about the review process ( see section 3.2 to 3.4).
Contributing to the state-of-the-art, we investigate five aspects of chatbot applications published in the literature. We therefore guided our research with the following research questions:
RQ1: Which objectives for implementing chatbots in education can be identified in the existing literature?
RQ2: Which pedagogical roles of chatbots can be identified in the existing literature?
RQ3: Which application scenarios have been used to mentor students?
RQ4: To what extent are chatbots adaptable to students’ personal needs?
RQ5: What are the domains in which chatbots have been applied so far?
As data sources, Scopus, Web of Science, Google Scholar, Microsoft Academic, and the educational research database “Fachportal Pädagogik” (including ERIC) were selected, which together cover all major publishers and journals. Martín-Martín et al. (2018) showed that only 29.8% of relevant literature in the social sciences, and 46.8% in engineering and computer science, is included in all of the first three databases. For the topic of chatbots in education, a value between these two numbers can be assumed, which is why an approach integrating several publisher-independent databases was employed here.
Based on the findings from the initial related work search, we derived the following search query:
( Education OR Educational OR Learning OR Learner OR Student OR Teaching OR School OR University OR Pedagogical ) AND Chatbot.
It combines education-related keywords with the “chatbot” keyword. Since chatbots are related to other technologies, the initial literature search also considered keywords such as “pedagogical agents,” “dialogue systems,” or “bots” when composing the search query. However, these increased the number of irrelevant results significantly and were therefore excluded from the query in later searches.
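The composition of the query above can be sketched programmatically. The following is a minimal illustration (not taken from the paper's tooling; the function name is ours) of joining the education-related keywords with OR and conjoining the required "Chatbot" term:

```python
# Illustrative sketch: composing the boolean search query described above.
EDU_KEYWORDS = [
    "Education", "Educational", "Learning", "Learner", "Student",
    "Teaching", "School", "University", "Pedagogical",
]

def build_query(keywords, required_term="Chatbot"):
    """Join the education keywords with OR and AND them with the chatbot term."""
    return f"( {' OR '.join(keywords)} ) AND {required_term}"

query = build_query(EDU_KEYWORDS)
print(query)
# ( Education OR Educational OR ... OR Pedagogical ) AND Chatbot
```

Keeping the query a single conjunction of one keyword group with one required term is what made it easy to run the same string, or close variants, against all five databases.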
The queries were executed on December 23, 2020, and applied twice to each database, first as a title search and second as a keyword search. This resulted in a total of 3,619 hits, which were checked for duplicates, resulting in 2,678 candidate publications. The overall search and filtering process is shown in Figure 5 .
FIGURE 5 . PRISMA flow chart.
In the case of Google Scholar, the number of results per query, sorted by relevance, was limited to 300, as this database also delivers many less relevant works. The value was determined by examining the search results of several queries in detail so as to exclude as few relevant works as possible. This approach showed promising results and, at the same time, did not burden the literature list with irrelevant items.
The further screening consisted of a four-stage filtering process. First, eliminating duplicates in the results of title and keyword queries of all databases independently and second, excluding publications based on the title and abstract that:
• were not available in English
• did not describe a chatbot application
• were not mainly focused on learner-centered chatbot applications in schools or higher education institutions, which, according to the preliminary literature search, is the main application area within education.
Third, we applied another duplicate filter, this time for the merged set of publications. Finally, a filter based on the full text, excluding publications that were:
• limited to improving chatbots technically (e.g., publications that compare or develop new algorithms), as the research questions presented in these publications did not seek additional insights on applications in education
• exclusively theoretical in nature (e.g., publications that discuss new research projects, implementation concepts, or potential use cases of chatbots in education), as they either do not contain research questions or hypotheses or do not provide conclusions from studies with learners.
After the first, second, and third filters, we identified 505 candidate publications. We continued our filtering process by reading the candidate publications’ full texts, resulting in 74 publications that were used for our review. Compared to the 3,619 initial database results, the proportion of relevant publications is therefore about 2.0%.
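The funnel figures reported above can be checked with a quick calculation (the variable names are ours, the counts are those stated in the text):

```python
# Sanity check of the PRISMA funnel figures reported in the text.
total_hits = 3619        # title + keyword queries across all databases
after_dedup = 2678       # after removing duplicates per database
after_screening = 505    # after title/abstract filters and merged-set dedup
final_included = 74      # after full-text filtering

proportion = final_included / total_hits
print(f"{proportion:.1%}")  # 2.0%
```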
The final publication list can be accessed under https://bit.ly/2RRArFT .
To analyze the identified publications and derive results according to the research questions, full texts were coded, considering for each publication the objectives for implementing chatbots (RQ1), pedagogical roles of chatbots (RQ2), their mentoring roles (RQ3), adaptation of chatbots (RQ4), as well as their implementation domains in education (RQ5) as separate sets of codes. To this end, initial codes were identified by open coding and iteratively improved through comparison, group discussion among the authors, and subsequent code expansion. Further, codes were supplemented with detailed descriptions until a saturation point was reached, where all included studies could be successfully mapped to codes, suggesting no need for further refinement. As an example, the codes for RQ2 (Pedagogical Roles) were adapted and refined in terms of their level of abstraction from an initial set of only two codes: 1 ) a code for chatbots in the learning role and 2 ) a code for chatbots in a service-oriented role. After coding a larger set of publications, it became clear that the code for service-oriented chatbots needed to be further differentiated, because it conflated, e.g., automation activities with activities related to self-regulated learning and thus could not be distinguished sharply enough from the learning role. After refining the code set in the next iteration into a learning role, an assistance role, and a mentoring role, it was possible to ensure the separation of the individual codes. To avoid defining new codes for singular or very small numbers of publications, studies were coded as “other” (RQ1) or “not defined” (RQ2) if their code occurred in fewer than eight publications, representing less than 10% of the publications in the final paper list.
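The threshold rule described above, collapsing codes that occur in fewer than eight publications into a fallback category, can be sketched as follows (the function and the toy coding data are ours, not from the review):

```python
# Sketch (hypothetical data): collapsing rare codes into a fallback category,
# mirroring the "< 8 publications" rule described in the text.
from collections import Counter

def collapse_rare_codes(pub_codes, fallback, min_count=8):
    """Replace codes that occur in fewer than `min_count` publications."""
    counts = Counter(pub_codes)
    return [c if counts[c] >= min_count else fallback for c in pub_codes]

# Hypothetical RQ2 coding of 20 publications:
codes = ["learning"] * 10 + ["assisting"] * 8 + ["peer"] * 2
collapsed = collapse_rare_codes(codes, fallback="not defined")
print(Counter(collapsed))
```

Here the two "peer" codings fall below the threshold and are recoded as "not defined", while the other codes survive unchanged.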
By grouping the resulting relevant publications according to their date of publication, it is apparent that chatbots in education are currently in a phase of increased attention. The release distribution shows slightly lower publication numbers in the current than in the previous year ( Figure 6 ), which could be attributed to a time lag between the actual publication of manuscripts and their dissemination in databases.
FIGURE 6 . Identified chatbot publications in education per year.
Applying the curve presented in Figure 6 to Gartner’s Hype Cycle ( Linden and Fenn, 2003 ) suggests that technology around chatbots in education may currently be in the “Innovation Trigger” phase. This phase is where many expectations are placed on the technology, but the practical in-depth experience is still largely lacking.
Regarding RQ1, we extracted implementation objectives for chatbots in education. By analyzing the selected publications, we identified that most of the objectives for chatbots in education can be described by one of the following categories: Skill Improvement, Efficiency of Education, Students’ Motivation, and Availability of Education ( see Figure 7 ). The first objective, Skill Improvement , is the improvement of a student’s skill that the chatbot is supposed to support or achieve. Here, chatbots are mostly seen as a learning aid that supports students; this is the most commonly cited objective for chatbots. The second objective is to increase the Efficiency of Education in general, for example through the automation of recurring tasks or time-saving services for students; it is the second most cited objective. The third objective is to increase Students’ Motivation . Finally, the last objective is to increase the Availability of Education , that is, to provide learning or counseling with temporal flexibility or without the limitation of physical presence. In addition, there are other, more diverse objectives for chatbots in education that are less easy to categorize. Where a publication indicated more than one objective, the publication was distributed evenly across the respective categories.
FIGURE 7 . Objectives for implementing chatbots identified in chatbot publications.
Given these results, we can summarize four major implementation objectives for chatbots. Of these, Skill Improvement is the most popular objective, constituting around one-third of publications (32%). Making up a quarter of all publications, Efficiency of Education is the second most popular objective (25%), while addressing Students’ Motivation and Availability of Education are third (13%) and fourth (11%), respectively. Other objectives also make up a substantial share of these publications (19%), although they were too diverse to categorize uniformly. Examples are inclusivity ( Heo and Lee, 2019 ) and the promotion of student-teacher interactions ( Mendoza et al., 2020 ).
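The even distribution of multi-objective publications across categories, as described for RQ1, amounts to fractional counting. A minimal sketch with hypothetical data (the paper titles and assignments below are invented for illustration):

```python
# Sketch (hypothetical data): fractional counting for publications that
# name more than one implementation objective.
from collections import defaultdict

def fractional_counts(pubs_to_objectives):
    """Split each publication evenly across the objectives it names."""
    counts = defaultdict(float)
    for objectives in pubs_to_objectives.values():
        share = 1 / len(objectives)
        for obj in objectives:
            counts[obj] += share
    return dict(counts)

pubs = {
    "paper_A": ["Skill Improvement"],
    "paper_B": ["Skill Improvement", "Students' Motivation"],
    "paper_C": ["Efficiency of Education"],
}
print(fractional_counts(pubs))
# {'Skill Improvement': 1.5, "Students' Motivation": 0.5,
#  'Efficiency of Education': 1.0}
```

This keeps the category totals summing to the number of publications, so the percentages in Figure 7 remain comparable across categories.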
Regarding RQ2, it is crucial to consider the use of chatbots in terms of their intended pedagogical role. After analyzing the selected articles, we were able to identify three different pedagogical roles: a supporting learning role, an assisting role, and a mentoring role.
In the supporting learning role ( Learning ), chatbots are used as an educational tool to teach content or skills. This can be achieved through a fixed integration into the curriculum, such as conversation tasks (L. K. Fryer et al., 2020 ). Alternatively, learning can be supported through additional offerings alongside classroom teaching, for example, voice assistants for leisure activities at home ( Bao, 2019 ) or chatbots simulating a virtual pen pal abroad ( Na-Young, 2019 ). Conversations with the latter kind of chatbot aim to motivate the students to look up vocabulary, check their grammar, and gain confidence in the foreign language.
In the assisting role ( Assisting ), chatbot actions can be summarized as simplifying the student's everyday life, i.e., taking tasks off the student’s hands in whole or in part. This can be achieved by making information more easily available ( Sugondo and Bahana, 2019 ) or by simplifying processes through the chatbot’s automation ( Suwannatee and Suwanyangyuen, 2019 ). An example of this is the chatbot in ( Sandoval, 2018 ) that answers general questions about a course, such as an exam date or office hours.
In the mentoring role ( Mentoring ), chatbot actions deal with the student’s personal development. In this type of support, the students themselves are the focus of the conversation and should be encouraged to plan, reflect on, or assess their progress on a meta-cognitive level. One example is the chatbot in ( Cabales, 2019 ), which helps students develop lifelong learning skills by prompting in-action reflections.
The distribution of each pedagogical role is shown in Figure 8 . From this, it can be seen that Learning is the most frequently used role of the examined publications (49%), followed by Assisting (20%) and Mentoring (15%). It should be noted that pedagogical roles were not identified for all the publications examined. The absence of a clearly defined pedagogical role (16%) can be attributed to the more general nature of these publications, e.g. focused on students’ small talk behaviors ( Hobert, 2019b ) or teachers’ attitudes towards chatbot applications in classroom teaching (P. K. Bii et al., 2018 ).
FIGURE 8 . Pedagogical roles identified in chatbot publications.
Looking at pedagogical roles in the context of objectives for implementing chatbots, relations among publications can be inspected in a relations graph ( Figure 9 ). According to our results, the strongest relation in the examined publications is between the Skill Improvement objective and the Learning role. This strong relation arises partly because both the Skill Improvement objective and the Learning role are the largest in their respective categories. In addition, two other strong relations can be observed: between the Students’ Motivation objective and the Learning role, as well as between the Efficiency of Education objective and the Assisting role.
FIGURE 9 . Relations graph of pedagogical roles and objectives for implementing chatbots.
Looking at other relations in more detail, there is surprisingly no relation between Skill Improvement , the most common implementation objective, and Assisting , the second most common pedagogical role. Furthermore, it can be observed that the Mentoring role has nearly equal relations to all of the objectives for implementing chatbots.
The relations graph ( Figure 9 ) can be explored interactively at bit.ly/32FSKQM.
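Edge weights for a relations graph such as the one in Figure 9 amount to counting objective-role co-occurrences across the coded publications. A minimal sketch with hypothetical coding data (the publications below are invented for illustration):

```python
# Sketch (hypothetical data): edge weights for an objective-role
# relations graph, counted from coded publications.
from collections import Counter

def relation_weights(publications):
    """Count (objective, role) pairs across coded publications."""
    edges = Counter()
    for pub in publications:
        for obj in pub["objectives"]:
            edges[(obj, pub["role"])] += 1
    return edges

pubs = [
    {"objectives": ["Skill Improvement"], "role": "Learning"},
    {"objectives": ["Skill Improvement", "Students' Motivation"], "role": "Learning"},
    {"objectives": ["Efficiency of Education"], "role": "Assisting"},
]
print(relation_weights(pubs).most_common(1))
# [(('Skill Improvement', 'Learning'), 2)]
```

The heaviest edge in this toy data, Skill Improvement with Learning, mirrors the strongest relation reported for the actual corpus.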
Regarding RQ3, we identified eleven publications that deal with chatbots in a mentoring role. The Mentoring role in these publications can be categorized along two dimensions. Starting with the first dimension, the mentoring method, three methods can be observed:
• Scaffolding ( n = 7)
• Recommending ( n = 3)
• Informing ( n = 1)
An example of Scaffolding can be seen in ( Gabrielli et al., 2020 ), where the chatbot coaches students in life skills, while an example of Recommending can be seen in ( Xiao et al., 2019 ), where the chatbot recommends new teammates. Finally, Informing can be seen in ( Kerly et al., 2008 ), where the chatbot informs students about their personal Open Learner Model.
The second dimension is the addressed mentoring topic, where the following topics can be observed:
• Self-Regulated Learning ( n = 5)
• Life Skills ( n = 4)
• Learning Skills ( n = 2)
While Mentoring chatbots supporting Self-Regulated Learning are intended to encourage students to reflect on and plan their learning progress, Mentoring chatbots supporting Life Skills address general students’ abilities such as self-confidence or managing emotions. Finally, Mentoring chatbots supporting Learning Skills , in contrast to Self-Regulated Learning , address only particular aspects of the learning process, such as new learning strategies or helpful learning partners. An example of a Mentoring chatbot supporting Life Skills is the Logo counseling chatbot, which promotes healthy self-esteem ( Engel et al., 2020 ). CALMsystem is an example of a Self-Regulated Learning chatbot, which informs students about their data in an open learner model ( Kerly et al., 2008 ). Finally, for the Learning Skills topic, the MCQ Bot is an example designed to introduce students to transformative learning (W. Huang et al., 2019 ).
Regarding RQ4, we identified six publications in the final publication list that address the topic of adaptation. Within these publications, five adaptation approaches are described:
The first approach (A1), proposed by Kerly and Bull (2006) and Kerly et al. (2008), deals with discussions with students based on their success and confidence during a quiz; the improvement of self-assessment is the primary focus of this approach. The second approach (A2) is presented in ( Jia, 2008 ), where the personality of the chatbot is adapted to motivate students to talk to the chatbot and, in this case, learn a foreign language. The third approach (A3), shown in the work of Vijayakumar et al. (2019), is characterized by a chatbot that provides personalized formative feedback to learners based on their self-assessment, again in a quiz situation. Here, the focus is on Hattie and Timperley’s three guiding questions: “Where am I going?,” “How am I going?” and “Where to next?” ( Hattie and Timperley, 2007 ). In the fourth approach (A4), exemplified in ( Ruan et al., 2019 ), the chatbot selects questions within a quiz: it estimates the student’s ability and knowledge level based on the quiz progress and sets the next question accordingly. Finally, a similar approach (A5) is shown in ( Davies et al., 2020 ). In contrast to ( Ruan et al., 2019 ), this chatbot adapts the amount of question variation and takes into account psychological features measured beforehand by psychological tests.
We examined these five approaches by organizing them according to their information sources and extracted learner information. The results can be seen in Table 2 .
TABLE 2 . Adaptation approaches of chatbots in education.
Four out of five adaptation approaches (A1, A3, A4, and A5) are observed in the context of quizzes. These adaptations within quizzes can be divided into two main streams: one concerned with students’ feedback (A1 and A3), the other with learning material selection (A4 and A5). The only different adaptation approach is A2, which focuses on the adaptation of the chatbot’s personality within a language learning application.
Regarding RQ5, we identified 20 domains of chatbots in education. These can broadly be divided by their pedagogical role into three domain categories (DC): Learning Chatbots , Assisting Chatbots , and Mentoring Chatbots . The remaining publications are grouped in the Other Research domain category. The complete list of identified domains can be seen in Table 3 .
TABLE 3 . Domains of chatbots in education.
The domain category Learning Chatbots , which deals with chatbots incorporating the pedagogical role Learning , can be subdivided into seven domains: 1 ) Language Learning , 2 ) Learn to Program , 3 ) Learn Communication Skills , 4 ) Learn about Educational Technologies , 5 ) Learn about Cultural Heritage , 6 ) Learn about Laws , and 7 ) Mathematics Learning . With more than half of publications (53%), chatbots for Language Learning play a prominent role in this domain category. They are often used as chat partners to train conversations or to test vocabulary. An example of this can be seen in the work of ( Bao, 2019 ), which tries to mitigate foreign language anxiety by chatbot interactions in foreign languages.
The domain category Assisting Chatbots , which deals with chatbots incorporating the pedagogical role Assisting , can be subdivided into four domains: 1 ) Administrative Assistance , 2 ) Campus Assistance , 3 ) Course Assistance , and 4 ) Library Assistance . With one-third of publications (33%), chatbots in the Administrative Assistance domain, which help to overcome bureaucratic hurdles at the institution while providing round-the-clock services, are the largest group in this domain category. An example of this can be seen in ( Galko et al., 2018 ), where the student enrollment process is completely shifted to a conversation with a chatbot.
The domain category Mentoring Chatbots , which deals with chatbots incorporating the pedagogical role Mentoring , can be subdivided into three domains: 1 ) Scaffolding Chatbots , 2 ) Recommending Chatbots , and 3 ) Informing Chatbots . An example of a Scaffolding Chatbot is the CRI(S) chatbot ( Gabrielli et al., 2020 ), which supports life skills such as self-awareness or conflict resolution in discussion with the student by promoting helpful ideas and tricks.
The domain category Other Research , which deals with chatbots not incorporating any of these pedagogical roles, can be subdivided into three domains: 1 ) General Chatbot Research in Education , 2 ) Indian Educational System , and 3 ) Chatbot Interfaces . The most prominent domain, General Chatbot Research , cannot be classified into one of the other categories but aims to explore cross-cutting issues. An example of this can be seen in the publication of ( Hobert, 2020 ), which researches the importance of small talk abilities of chatbots in educational settings.
In this paper, we investigated the state-of-the-art of chatbots in education according to five research questions. By combining our results with previously identified findings from related literature reviews, we proposed a concept map of chatbots in education. The map, reported in Appendix A , displays the current state of research regarding chatbots in education with the aim of supporting future research in the field.
Concerning RQ1 (implementation objectives), we identified four major objectives: 1 ) Skill Improvement , 2 ) Efficiency of Education , 3 ) Students’ Motivation, and 4 ) Availability of Education . These four objectives cover over 80% of the analyzed publications ( see Figure 7 ). Based on the findings on CAT3 in section 2, we see a mismatch between the objectives for implementing chatbots and their evaluation. Most researchers focus only on narrow aspects in the evaluation of their chatbots, such as learning success, usability, and technology acceptance. This mismatch of implementation objectives and suitable evaluation approaches is also well known from other educational technologies, such as Learning Analytics dashboards ( Jivet et al., 2017 ). A more structured approach to aligning implementation objectives and evaluation procedures is crucial for properly assessing the effectiveness of chatbots. Hobert (2019a) suggested a structured four-stage evaluation procedure beginning with a Wizard-of-Oz experiment, followed by technical validation, a laboratory study, and a field study. This evaluation procedure systematically links hypotheses with outcomes of chatbots, helping to assess chatbots against their implementation objectives. “Aligning chatbot evaluations with implementation objectives” is, therefore, an important challenge to be addressed in the future research agenda.
Concerning RQ2 (pedagogical roles), our results show that chatbots’ pedagogical roles can be summarized as Learning , Assisting , and Mentoring . The Learning role is the support of learning or teaching activities such as gaining knowledge. The Assisting role is the support in terms of simplifying learners’ everyday life, e.g. by providing the opening times of the library. The Mentoring role is the support of students’ personal development, e.g. by supporting Self-Regulated Learning. From a pedagogical standpoint, all three roles are essential for learners and should therefore be incorporated in chatbots. These pedagogical roles are well aligned with the four implementation objectives reported in RQ1. While Skill Improvement and Students’ Motivation are strongly related to Learning , Efficiency of Education is strongly related to Assisting . The Mentoring role, instead, is evenly related to all of the identified objectives for implementing chatbots. In the reviewed publications, chatbots are therefore primarily intended to 1 ) improve skills and motivate students by supporting learning and teaching activities, 2 ) make education more efficient by providing relevant administrative and logistical information to learners, and 3 ) support multiple effects by mentoring students.
Concerning RQ3 (mentoring role), we identified three main mentoring method categories for chatbots: 1 ) Scaffolding , 2 ) Recommending , and 3 ) Informing . However, comparing the mentoring by chatbots currently reported in the literature with the daily mentoring role of teachers, we can summarize that chatbots are not yet at the same level. In order to take over mentoring roles of teachers ( Wildman et al., 1992 ), a chatbot would need to fulfill some of the following activities in its mentoring role. With respect to 1 ) Scaffolding , chatbots should provide direct assistance while learners acquire new skills and, especially, direct beginners in their activities. Regarding 2 ) Recommending , chatbots should provide supportive information, tools, or other materials for specific learning tasks or life situations. With respect to 3 ) Informing, chatbots should encourage students according to their goals and achievements, and support them in developing meta-cognitive skills like self-regulation. Due to this mismatch between teacher and chatbot mentoring, we see here another research challenge, which we call “Exploring the potential of chatbots for mentoring students.”
Regarding RQ4 (adaptation), only six publications were identified that discuss an adaptation of chatbots, while four out of five adaptation approaches (A1, A3, A4, and A5) show similarities by being applied within quizzes. In the context of educational technologies, providing reasonable adaptations for learners requires a high level of experience. Based on our results, the research on chatbots does not seem to be at this point yet. Looking at adaptation literature like ( Brusilovsky, 2001 ) or ( Benyon and Murray, 1993 ), it becomes clear that a chatbot needs to consider the learners’ personal information to fulfill the requirement of the adaptation definition. Personal information must be retrieved and stored at least temporarily, in some sort of learner model. For learner information like knowledge and interest, adaptations seem to be barely explored in the reviewed publications, while the model of ( Brusilovsky and Millán, 2007 ) points out further learner information, which can be used to make chatbots more adaptive: personal goals, personal tasks, personal background, individual traits, and the learner’s context. We identify research in this area as a third future challenge and call it the “Exploring and leveraging adaptation capabilities of chatbots” challenge.
In terms of RQ5 (domains), we identified a detailed map of domains applying chatbots in education and their distribution ( see Table 3 ). By systematically analyzing 74 publications, we identified 20 domains and structured them according to the identified pedagogical roles into four domain categories: Learning Chatbots , Assisting Chatbots , Mentoring Chatbots , and Other Research . These results extend the taxonomy of Application Clusters (AC) for chatbots in education, which previously comprised the work of ( Pérez et al., 2020 ), who took the chatbot activity as the characteristic, and ( Winkler and Soellner, 2018 ), who characterized the chatbots by domains. Our structure draws relationships between these two types of Application Clusters (AC) and organizes them accordingly, incorporating Mentoring Chatbots and Other Research in addition to the “service-oriented chatbots” (cf. Assisting Chatbots ) and “teaching-oriented chatbots” (cf. Learning Chatbots ) identified by ( Pérez et al., 2020 ). Furthermore, the strong tendencies of informing students already mentioned by ( Smutny and Schreiberova, 2020 ) can also be recognized in our results, especially for Assisting Chatbots . Compared to ( Winkler and Soellner, 2018 ), we can confirm the prominent domains of “language learning” within Learning Chatbots and “metacognitive thinking” within Mentoring Chatbots . Moreover, Table 3 reflects a more detailed picture of chatbot applications in education, which could help researchers find similar works or unexplored application areas.
One important limitation to be mentioned here is the exclusion of alternative keywords from our search queries, as we exclusively used “chatbot” as a keyword in order to avoid search results that do not fit our research questions. Though we acknowledge that chatbots share properties with pedagogical agents, dialogue systems, and bots, we carefully considered this trade-off between missing potentially relevant work and inflating our search procedure with related but not necessarily pertinent work. A second limitation may lie in the formation of categories and the coding processes applied, which, due to the novelty of the findings, could not be built upon theoretical frameworks or already existing code books. Although we have focused on ensuring that the codes used contribute to a strong understanding, the chosen level of abstraction might have affected the level of detail of the resulting data representation.
In this systematic literature review, we explored the current landscape of chatbots in education. We analyzed 74 publications, identified 20 domains of chatbots, and grouped them based on their pedagogical roles into four domain categories. These pedagogical roles are the supporting learning role ( Learning ), the assisting role ( Assisting ), and the mentoring role ( Mentoring ). By focusing on objectives for implementing chatbots, we identified four main objectives: 1 ) Skill Improvement , 2 ) Efficiency of Education , 3 ) Students’ Motivation, and 4 ) Availability of Education . As discussed in section 5, these objectives do not fully align with the chosen evaluation procedures. We focused on the relations between pedagogical roles and objectives for implementing chatbots and identified three main relations: 1 ) chatbots to improve skills and motivate students by supporting learning and teaching activities, 2 ) chatbots to make education more efficient by providing relevant administrative and logistical information to learners, and 3 ) chatbots to support multiple effects by mentoring students. We focused on chatbots incorporating the Mentoring role and found that these chatbots are mostly concerned with three mentoring topics, 1 ) Self-Regulated Learning , 2 ) Life Skills , and 3 ) Learning Skills , and three mentoring methods, 1 ) Scaffolding , 2 ) Recommending , and 3 ) Informing . Regarding chatbot adaptations, only six publications with adaptations were identified. Furthermore, the adaptation approaches found were mostly limited to applications within quizzes and thus represent a research gap.
Based on these outcomes we consider three challenges for chatbots in education that offer future research opportunities:
Challenge 1: Aligning chatbot evaluations with implementation objectives. Most chatbot evaluations focus on narrow aspects measuring the tool's usability, acceptance, or technical correctness. If chatbots are to be considered learning aids, student mentors, or facilitators, their effects on the cognitive and emotional levels should also be taken into account in their evaluation. This finding strengthens our conclusion that chatbot development in education is still driven by technology, rather than by a clear pedagogical focus on improving and supporting learning.
Challenge 2: Exploring the potential of chatbots for mentoring students. In order to better understand the potential of chatbots to mentor students, more empirical studies on the information needs of learners are required. These needs obviously differ between schools and higher education. However, so far there are hardly any studies investigating learners' information needs with respect to chatbots, or whether chatbots address these needs sufficiently.
Challenge 3: Exploring and leveraging adaptation capabilities of chatbots. There is a large body of literature on the adaptation capabilities of educational technologies. However, we found very few studies on the effect of adaptation in chatbots for educational purposes. As chatbots are envisioned as systems that personally support learners, adaptable chatbot interactions are an important research aspect that should receive more attention in the near future.
By addressing these challenges, we believe that chatbots can become effective educational tools capable of supporting learners with informative feedback. Therefore, looking at our results and the challenges presented, we conclude, "No, we are not there yet!" There is still much to be done in terms of research on chatbots in education. Still, development in this area seems to have just begun to gain momentum, and we expect to see new insights in the coming years.
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
SW, JS, DM, JW, MR, and HD.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Keywords: chatbots, education, literature review, pedagogical roles, domains
Citation: Wollny S, Schneider J, Di Mitri D, Weidlich J, Rittberger M and Drachsler H (2021) Are We There Yet? - A Systematic Literature Review on Chatbots in Education. Front. Artif. Intell. 4:654924. doi: 10.3389/frai.2021.654924
Received: 17 January 2021; Accepted: 10 June 2021; Published: 15 July 2021.
Copyright © 2021 Wollny, Schneider, Di Mitri, Weidlich, Rittberger and Drachsler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Sebastian Wollny, [email protected] ; Jan Schneider, [email protected]
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
Claude 3.5: from songwriter to dungeon master
Create a song, pretend you’re a choose-your-own-adventure game, tell me a very funny joke, practise learning a language, develop a workout routine, create ascii art.
Among the various AI chatbots and assistants available today, I've found Claude to be particularly impressive in its capabilities and user interaction. While OpenAI's ChatGPT has been a prominent player in the AI assistant space, Claude is quickly making a name for itself as a powerful alternative. What I appreciate about Claude is its more natural conversational tone: in essence, its human-ness.
In my experience, Claude avoids the often impersonal customer-service-chatbot tone that other AI assistants like ChatGPT are prone to, which is a breath of fresh air. Since playing around with Claude, I've been really impressed by its language comprehension, as well as the intuitive interface. It often seems to anticipate what I need, which I've found incredibly helpful — whether I'm working on more complex tasks or broadly exploring its capabilities.
For those seeking an AI chatbot which offers its users a personable interaction blended with optimal efficiency, Claude is definitely worth it. Let's take a closer look at what it can do.
If you already have a Claude account, go to the Claude website and click Continue with Google or email to begin chatting.
If you're using Claude for the first time, go to the Claude website and provide your email address. Once you've done that, you'll be taken to the chat interface. Note that you'll need to verify your phone number for security reasons when signing up.
Type your Claude prompt in the message bar and press Enter to generate a response.
You can also upload up to 5 documents or images, with a size limit of 10 MB each, and ask questions related to them.
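If you're batching files for upload, it can save a round trip to check them against those limits first. The helper below is hypothetical (it's not part of any official Claude tooling); only the limits themselves, five files at 10 MB each, come from the description above.

```python
# Hypothetical pre-flight check mirroring Claude's stated upload limits:
# at most 5 attachments, 10 MB each.
MAX_FILES = 5
MAX_BYTES = 10 * 1024 * 1024  # 10 MB

def check_uploads(sizes):
    """sizes: mapping of filename -> size in bytes.

    Returns a list of problems; an empty list means the batch fits.
    """
    problems = []
    if len(sizes) > MAX_FILES:
        problems.append(f"too many files ({len(sizes)} > {MAX_FILES})")
    for name, size in sizes.items():
        if size > MAX_BYTES:
            problems.append(f"{name} exceeds the 10 MB per-file cap")
    return problems

print(check_uploads({"notes.pdf": 4_000_000, "scan.png": 12_000_000}))
# → ['scan.png exceeds the 10 MB per-file cap']
```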
After Claude generates a response, you have several editing options available. If you're not a fan of Claude's initial response, simply click the Retry button. The chatbot will generate an entirely new response to your original question, often with a different approach or insight.
One of Claude's standout features is its ability to remember the ongoing conversation, allowing you to adjust your query and refine Claude's response without starting over or repeating yourself.
If you find Claude's answer helpful and want to use it elsewhere, click Copy. To provide feedback, click the thumbs up or thumbs down icon.
Now we've covered the fundamentals, let's dive into Claude's additional capabilities!
When I prompted Claude to teach me a new skill, it asked me to choose from the following options: a practical skill, a creative pursuit, a technical skill, a language or communication skill, or a physical activity. I chose technical skill.
Claude decided to teach me the basics of Python. The instructions were really clear and concise, and also included accompanying pictures. There was also an option to move onto a simple project to practice my newfound skill.
Type in the idea for your song. I inputted the following: Develop a song about self-discovery and embracing one's true identity, with lyrics that explore the journey of overcoming doubt and celebrating individuality.
While the lyrics are pretty saccharine, Claude's use of rhyme and metaphor was actually quite impressive! Significantly less cringe than I anticipated.
With this prompt, you can be as general or specific as you want. You could type something simple like: Pretend you're a choose-your-own adventure game. Set the scene and give me four options to proceed to the next stage.
For my prompt, I provided specific instructions for the game: Create a choose-your-own-adventure story where the reader can explore an ancient castle, a hidden cave guarded by a dragon, or an enchanted forest filled with magical creatures. Provide multiple choices at each decision point, leading to different outcomes based on the reader's selections.
I had a lot of fun playing with this and found the range of choices very compelling. Claude did an excellent job of immersing me as a player, making the game feel truly original.
If you're a pro subscriber (you'll need more than five generated responses to really get into it), you won't want to miss this.
I asked Claude to tell me a very funny joke. The results were interesting: a few of the jokes were actually quite clever and did make me chuckle. Fittingly for a more humanised AI chatbot, Claude's offerings felt like your typical dad jokes.
Claude also suggested I could try a joke in a different style of humour. I chose observational humour with an edge. I was really surprised by how well Claude roasted modern society, and it got a genuine laugh out of me, too.
If you're looking to learn and practise speaking a new language, Claude's got you covered. In the prompt box, I typed I'm a beginner, trying to learn Spanish. Offer language learning tips and practice conversations.
Claude provided some basic tips to start with, before asking if I'd like to practise a conversation.
From a practical standpoint, Claude was an excellent teacher. It conversed with me in Spanish, affirming when I got it right and clearly suggesting corrections when I got it wrong. Additionally, it provided follow-up phrases to help expand the conversation. If you use this in tandem with Duolingo, you'll be fluent in no time.
I wanted to test how well Claude could tailor a workout to my needs, so I asked it to develop a routine for someone with a repetitive strain injury in the wrist.
Claude not only provided a detailed breakdown of specific exercises and set times but also included important considerations, such as avoiding exercises that put direct stress on the wrist, like traditional push-ups or planks on hands.
ASCII art is making pictures using just the letters, numbers, and symbols you can type on a keyboard. Imagine trying to draw a smiley face, but you can only use things like colons, dashes, and parentheses. So you might end up with something like this: :-)
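Classic ASCII art generators automate exactly this idea, mapping how dark each region of a picture is onto characters of different visual weight. Here's a toy sketch; the character ramp and the tiny hand-made "image" are my own illustration, not anything Claude uses.

```python
# A toy sketch of how ASCII art generators work: map brightness values
# (0 = darkest, 9 = lightest) onto characters of decreasing visual weight.
RAMP = "@%#*+=-:. "  # ten characters: dense for dark pixels, blank for light

def to_ascii(rows):
    """rows: a grid of brightness values 0-9; returns an ASCII-art string."""
    return "\n".join("".join(RAMP[value] for value in row) for row in rows)

# A hand-made 6x4 "image" of a face: bright background, dark features.
smiley = [
    [9, 0, 9, 9, 0, 9],
    [9, 9, 9, 9, 9, 9],
    [0, 9, 9, 9, 9, 0],
    [9, 0, 0, 0, 0, 9],
]
print(to_ascii(smiley))
```

Real converters do the same thing, just starting from actual pixel brightness instead of a hand-typed grid.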
I asked Claude to create a unique ASCII artwork. At first it created a really simple tree with a triangular canopy. As I encouraged it to be more creative, the results seriously improved.
Claude seemed to generate its best responses when they related to landscapes or objects. It created a ship, a mountain landscape and an underwater scene. The best by far was the train: I thought the use of '@' and '0' to indicate the dissipating steam was a really nice touch.
Anthropic has ambitiously claimed that Claude 3.5 Sonnet outperforms OpenAI's GPT-4, and many users of Claude aren't surprised. The AI chatbot is impressively human-like, funny and at times a little sassy, thanks to character training added during the fine-tuning process. And with three model tiers, there's a Claude for everyone.
Kaycee is an Editor at Tom’s Guide and has been writing for as long as she can remember. Her journey into the tech world began as Cazoo's Knowledge Content Specialist, igniting her enthusiasm for technology. When she’s not exploring the latest gadgets and innovations, Kaycee can be found immersed in her favorite video games, or penning her second poetry collection.
ChatGPT quickly swept us away with its mind-blowing skills. Its latest model, GPT-4o, is faster, cheaper and can generate more text than its predecessors.
In late 2022, OpenAI wowed the world when it introduced ChatGPT, a chatbot with an entirely new level of power, breadth and usefulness, thanks to the generative AI technology behind it. Since then, ChatGPT has continued to evolve, including its most recent development: the launch of its GPT-4o model.
ChatGPT and generative AI aren't a novelty anymore, but keeping track of what they can do poses a challenge as new abilities arrive. Most notably, OpenAI now provides easier access to anyone who wants to use it. It also lets anyone write custom AI apps called GPTs and share them on its own app store, while on a smaller scale ChatGPT can now speak its responses to you. OpenAI has been leading the generative AI charge, but it's hotly pursued by Microsoft, Google and startups far and wide.
Generative AI still hasn't shaken a core problem -- it makes up information that sounds plausible but isn't necessarily correct. But there's no denying AI has fired the imaginations of computer scientists, loosened the purse strings of venture capitalists and caught the attention of everyone from teachers to doctors to artists and more, all wondering how AI will change their work and their lives.
If you're trying to get a handle on ChatGPT, this FAQ is for you. Here's a look at what's up.
ChatGPT is an online chatbot that responds to "prompts" -- text requests that you type. ChatGPT has countless uses. You can request relationship advice, a summarized history of punk rock or an explanation of the ocean's tides. It's particularly good at writing software, and it can also handle some other technical tasks, like creating 3D models.
ChatGPT is called a generative AI because it generates these responses on its own. But it can also display more overtly creative output like screenplays, poetry, jokes and student essays. That's one of the abilities that really caught people's attention.
Much of AI has been focused on specific tasks, but ChatGPT is a general-purpose tool. This puts it more into a category like a search engine.
That breadth makes it powerful but also hard to fully control. OpenAI has many mechanisms in place to try to screen out abuse and other problems, but there's an active cat-and-mouse game afoot by researchers and others who try to get ChatGPT to do things like offer bomb-making recipes.
ChatGPT really blew people's minds when it began passing tests. For example, AnsibleHealth researchers reported in 2023 that "ChatGPT performed at or near the passing threshold" for the United States Medical Licensing Exam, suggesting that AI chatbots "may have the potential to assist with medical education, and potentially, clinical decision-making."
We're a long way from fully fledged doctor-bots you can trust, but the computing industry is investing billions of dollars to solve the problems and expand AI into new domains like visual data, too. OpenAI is among those at the vanguard. So strap in, because the AI journey is going to be a sometimes terrifying, sometimes exciting thrill.
Artificial intelligence algorithms had been ticking away for years before ChatGPT arrived. These systems were a big departure from traditional programming, which follows a rigid if-this-then-that approach. AI, in contrast, is trained to spot patterns in complex real-world data. AI has been busy for more than a decade screening out spam, identifying our friends in photos, recommending videos and translating our Alexa voice commands into computerese.
A Google technology called transformers helped propel AI to a new level, leading to a type of AI called a large language model, or LLM . These AIs are trained on enormous quantities of text, including material like books, blog posts, forum comments and news articles. The training process internalizes the relationships between words, letting chatbots process input text and then generate what it believes to be appropriate output text.
A second phase of building an LLM is called reinforcement learning through human feedback, or RLHF. That's when people review the chatbot's responses and steer it toward good answers or away from bad ones. That significantly alters the tool's behavior and is one important mechanism for trying to stop abuse.
OpenAI's LLM is called GPT, which stands for "generative pretrained transformer." Training a new model is expensive and time-consuming, typically taking weeks and requiring a data center packed with thousands of expensive AI acceleration processors. OpenAI's latest LLM is called GPT-4o. Other LLMs include Google's Gemini (formerly called Bard), Anthropic's Claude and Meta's Llama.
ChatGPT is an interface that lets you easily prompt GPT for responses. When it arrived as a free tool in November 2022, its use exploded far beyond what OpenAI expected.
When OpenAI launched ChatGPT, the company didn't even see it as a product. It was supposed to be a mere "research preview," a test that could draw some feedback from a broader audience, said ChatGPT product leader Nick Turley. Instead, it went viral, and OpenAI scrambled to just keep the service up and running under the demand.
"It was surreal," Turley said. "There was something about that release that just struck a nerve with folks in a way that we certainly did not expect. I remember distinctly coming back the day after we launched and looking at dashboards and thinking, something's broken, this couldn't be real, because we really didn't make a very big deal out of this launch."
ChatGPT, a name only engineers could love, was launched as a research project in November 2022, but quickly caught on as a consumer product.
The ChatGPT website is the most obvious method. Open it up, select the LLM version you want from the drop-down menu in the upper left corner, and type in a query.
As of April 1, OpenAI is allowing consumers to use ChatGPT without first signing up for an account. According to a blog post, the move was meant to make the tool more accessible. OpenAI also said in the post that as part of the move, it's introducing added content safeguards, blocking prompts in a wider range of categories.
However, users with accounts will be able to do more with the tool, such as save and review their history, share conversations and tap into features like voice conversations and custom instructions.
In 2023, OpenAI released a ChatGPT app for iPhones and for Android phones. In February 2024, ChatGPT for Apple Vision Pro arrived, too, adding the chatbot's abilities to the "spatial computing" headset. Be careful to look for the genuine article, because other developers can create their own chatbot apps that link to OpenAI's GPT.
In January 2024, OpenAI opened its GPT Store, a collection of custom AI apps that focus ChatGPT's all-purpose design on specific jobs. A lot more on that later, but in addition to finding them through the store, you can invoke them with the @ symbol in a prompt, the way you might tag a friend on Instagram.
Microsoft uses GPT for its Bing search engine, which means you can also try out ChatGPT there.
ChatGPT has sprouted up in various hardware devices, including Volkswagen EVs, Humane's voice-controlled AI pin and the squarish Rabbit R1 device.
ChatGPT itself is free, though you have to set up an account to take advantage of all of its features.
For more capability, there's also a subscription called ChatGPT Plus that costs $20 per month and offers a variety of advantages: It responds faster, particularly during busy times when the free version is slow or sometimes tells you to try again later. It also offers access to newer AI models, including GPT-4 Turbo, which arrived in late 2023 with more up-to-date responses and an ability to ingest and output larger blocks of text.
The free ChatGPT uses GPT-4o, which launched in May 2024.
ChatGPT is growing beyond its language roots. With ChatGPT Plus, you can upload images, for example, to ask what type of mushroom is in a photo.
Perhaps most importantly, ChatGPT Plus lets you use GPTs.
GPTs are custom versions of ChatGPT from OpenAI, its business partners and thousands of third-party developers who created their own GPTs.
Sometimes when people encounter ChatGPT, they don't know where to start. OpenAI calls it the "empty box problem." Discovering that led the company to find a way to narrow down the choices, Turley said.
"People really benefit from the packaging of a use case -- here's a very specific thing that I can do with ChatGPT," like travel planning, cooking help or an interactive, step-by-step tool to build a website, Turley said.
OpenAI CEO Sam Altman announces custom AI apps called GPTs at a developer event in November 2023.
Think of GPTs as OpenAI trying to make the general-purpose power of ChatGPT more refined the same way smartphones have a wealth of specific tools. (And think of GPTs as OpenAI's attempt to take control over how we find, use and pay for these apps, much like Apple has a commanding role over iPhones through its App Store.)
OpenAI's GPT store now offers millions of GPTs, though as with smartphone apps, you'll probably not be interested in most of them. A range of custom GPT apps are available, including AllTrails personal trail recommendations, a Khan Academy programming tutor, a Canva design tool, a book recommender, a fitness trainer, the Laundry Buddy clothes-washing-label decoder, a music theory instructor, a haiku writer and the Pearl for Pets vet advice bot.
One person excited by GPTs is Daniel Kivatinos, co-founder of financial services company JustPaid . His team is building a GPT designed to take a spreadsheet of financial data as input and then let executives ask questions. How fast is a startup going through the money investors gave it? Why did that employee just file a $6,000 travel expense?
JustPaid hopes that GPTs will eventually be powerful enough to accept connections to bank accounts and financial software. For now, the developers are focusing on guardrails to avoid problems like hallucinations -- those answers that sound plausible but are actually wrong -- or making sure the GPT is answering based on the users' data, not on some general information in its AI model, Kivatinos said.
Anyone can create a GPT, at least in principle. OpenAI's GPT editor walks you through the process with a series of prompts. Just like with the regular ChatGPT, your ability to craft the right prompt will generate better results.
Another notable difference from regular ChatGPT: GPTs let you upload extra data that's relevant to your particular GPT, like a collection of essays or a writing style guide.
Some of the GPTs draw on OpenAI's Dall-E tool for turning text into images, which can be useful and entertaining. For example, there is a coloring book picture creator , a logo generator and a tool that turns text prompts into diagrams like company org charts. OpenAI calls Dall-E a GPT.
ChatGPT's knowledge isn't very current, and that can be a problem. For example, a Bing search using ChatGPT to process results said OpenAI hadn't yet released its ChatGPT Android app. Search results from traditional search engines can help to "ground" AI results, and indeed that's part of the Microsoft-OpenAI partnership that can tweak ChatGPT Plus results.
GPT-4 Turbo is trained on data up through April 2023. But it's nothing like a search engine whose bots crawl news sites many times a day for the latest information.
So can you trust ChatGPT's answers? Not always; you need to be wary.
Large language models work by stringing words together, one after another, based on what's probable each step of the way. But it turns out that the generative AI fueled by LLMs works better and sounds more natural with a little spice of randomness added to the word selection recipe. That's the basic statistical nature that underlies the criticism that LLMs are mere "stochastic parrots" rather than sophisticated systems that in some way understand the world's complexity.
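That word-by-word lottery, probabilities plus "a little spice of randomness", can be sketched in a few lines. This is a toy illustration, not any production system: the candidate words and scores below are invented, and real models sample from tens of thousands of tokens, but the softmax-with-temperature idea is the same.

```python
import math
import random

def sample_next_word(logits, temperature=0.8):
    """Pick the next word from raw model scores ("logits").

    Lower temperature makes the most probable word dominate; higher
    temperature adds more of the randomness described above.
    """
    words = list(logits)
    scaled = [logits[w] / temperature for w in words]
    peak = max(scaled)  # subtract the max before exp() for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(words, weights=weights, k=1)[0]

# Invented scores for words that might follow "The cat sat on the ..."
scores = {"mat": 2.5, "sofa": 1.8, "moon": 0.2}
print(sample_next_word(scores))
```

Run it a few times and "mat" usually wins, but "sofa" or even "moon" occasionally slips through, which is exactly the behavior the "stochastic parrot" critique is pointing at.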
The result of this system, combined with the steering influence of the human training, is an AI that produces results that sound plausible but that aren't necessarily true. ChatGPT does better with information that's well represented in training data and undisputed -- for instance, red traffic signals mean stop, Plato was a philosopher who wrote the Allegory of the Cave, and an Alaskan earthquake in 1964 was the largest in US history at magnitude 9.2.
We humans interact with AI chatbots by writing prompts -- questions or statements that seek an answer from the information stored in the chatbot's underlying large language model.
When facts are more sparsely documented, controversial or off the beaten track of human knowledge, LLMs don't work as well. Unfortunately, they sometimes produce incorrect answers with a convincing, authoritative voice. That's what tripped up a lawyer who used ChatGPT to bolster his legal case, only to be reprimanded when it emerged ChatGPT had fabricated some cases that appeared to support his arguments. "I did not comprehend that ChatGPT could fabricate cases," he said, according to The New York Times.
Such fabrications are called hallucinations in the AI business.
That means when you're using ChatGPT, it's best to double check facts elsewhere.
But there are plenty of creative uses for ChatGPT that don't require strictly factual results.
Want to use ChatGPT to draft a cover letter for a job hunt or give you ideas for a themed birthday party? No problem. Looking for hotel suggestions in Bangladesh? ChatGPT can give useful travel itineraries, but confirm the results before booking anything.
Is anyone working on the hallucination problem? Yes, but we haven't seen a breakthrough.
"Hallucinations are a fundamental limitation of the way that these models work today," Turley said. LLMs just predict the next word in a response, over and over, "which means that they return things that are likely to be true, which is not always the same as things that are true," Turley said.
But OpenAI has been making gradual progress. "With nearly every model update, we've gotten a little bit better on making the model both more factual and more self aware about what it does and doesn't know," Turley said. "If you compare ChatGPT now to the original ChatGPT, it's much better at saying, 'I don't know that' or 'I can't help you with that' versus making something up."
Hallucinations are so much a part of the zeitgeist that Dictionary.com touted "hallucinate" as a new word it added to its dictionary in 2023.
What about using ChatGPT for more dubious ends? You can try, but lots of it will violate OpenAI's terms of use, and the company tries to block it too. The company prohibits use that involves sexual or violent material, racist caricatures, and personal information like Social Security numbers or addresses.
OpenAI works hard to prevent harmful uses. Indeed, its basic sales pitch is trying to bring the benefits of AI to the world without the drawbacks. But it acknowledges the difficulties, for example in its GPT-4 "system card" that documents its safety work.
"GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user's intent, or of widely shared values. It can also generate code that is compromised or vulnerable," the system card says. It also can be used to try to identify individuals and could help lower the cost of cyberattacks.
Through a process called red teaming, in which experts try to find unsafe uses of its AI and bypass protections, OpenAI identified lots of problems and tried to nip them in the bud before GPT-4 launched. For example, a prompt to generate jokes mocking a Muslim boyfriend in a wheelchair was diverted so its response said, "I cannot provide jokes that may offend someone based on their religion, disability or any other personal factors. However, I'd be happy to help you come up with some light-hearted and friendly jokes that can bring laughter to the event without hurting anyone's feelings."
Researchers are still probing LLM limits. For example, Italian researchers discovered they could use ChatGPT to fabricate fake but convincing medical research data. And Google DeepMind researchers found that telling ChatGPT to repeat the same word forever eventually caused a glitch that made the chatbot blurt out training data verbatim. That's a big no-no, and OpenAI barred the approach.
LLMs are still new. Expect more problems and more patches.
And there are plenty of uses for ChatGPT that might be allowed but ill-advised. The website of Philadelphia's sheriff published more than 30 bogus news stories generated with ChatGPT.
ChatGPT is well suited to short essays on just about anything you might encounter in high school or college, to the chagrin of many educators who fear students will type in prompts instead of thinking for themselves.
Microsoft CEO Satya Nadella touted his company's partnership with OpenAI at a November 2023 event for OpenAI developers. Microsoft uses OpenAI's GPT large language model for its Bing search engine, Office productivity tools and GitHub Copilot programming assistant.
ChatGPT also can solve some math problems, explain physics phenomena, write chemistry lab reports and handle all kinds of other work students are supposed to handle on their own. Companies that sell anti-plagiarism software have pivoted to flagging text they believe an AI generated.
But not everyone is opposed, seeing it more like a tool akin to Google search and Wikipedia articles that can help students.
"There was a time when using calculators on exams was a huge no-no," said Alexis Abramson, dean of Dartmouth's Thayer School of Engineering. "It's really important that our students learn how to use these tools, because 90% of them are going into jobs where they're going to be expected to use these tools. They're going to walk in the office and people will expect them, being age 22 and technologically savvy, to be able to use these tools."
ChatGPT also can help kids get past writer's block and can help kids who aren't as good at writing, perhaps because English isn't their first language, she said.
So for Abramson, using ChatGPT to write a first draft or polish their grammar is fine. But she asks her students to disclose that fact.
"Anytime you use it, I would like you to include what you did when you turn in your assignment," she said. "It's unavoidable that students will use ChatGPT, so why don't we figure out a way to help them use it responsibly?"
The threat to employment is real as managers seek to replace expensive humans with cheaper automated processes. We've seen this movie before: elevator operators were replaced by buttons, bookkeepers were replaced by accounting software, welders were replaced by robots.
ChatGPT has all sorts of potential to blitz white-collar jobs: paralegals summarizing documents, marketers writing promotional materials, tax advisers interpreting IRS rules, even therapists offering relationship advice.
But so far, in part because of problems with things like hallucinations, AI companies present their bots as assistants and "copilots," not replacements.
And so far, sentiment is more positive than negative about chatbots, according to a survey by consulting firm PwC. Of 53,912 people surveyed around the world, 52% expressed at least one good expectation about the arrival of AI, for example that AI would increase their productivity. That compares with 35% who had at least one negative thing to say, for example that AI will replace them or require skills they're not confident they can learn.
Software development is a particular area where people have found ChatGPT and its rivals useful. Trained on millions of lines of code, it internalized enough information to build websites and mobile apps. It can help programmers frame up bigger projects or fill in details.
One of the biggest fans is Microsoft's GitHub, a site where developers can host projects and invite collaboration. Nearly a third of people maintaining GitHub projects use its GPT-based assistant, called Copilot, and 92% of US developers say they're using AI tools.
"We call it the industrial revolution of software development," said GitHub Chief Product Officer Inbal Shani. "We see it lowering the barrier for entry. People who are not developers today can write software and develop applications using Copilot."
It's the next step in making programming more accessible, she said. Programmers used to have to understand bits and bytes, then higher-level languages gradually eased the difficulties. "Now you can write coding the way you talk to people," she said.
And AI programming aids still have a lot to prove. Researchers from Stanford and the University of California, San Diego found in a study of 47 programmers that those with access to an OpenAI programming assistant "wrote significantly less secure code than those without access."
And they raise a variation of the cheating problem that some teachers are worried about: copying software that shouldn't be copied, which can lead to copyright problems. That's why Copyleaks, a maker of plagiarism detection software, offers a tool called the Codeleaks Source Code AI Detector designed to spot AI-generated code from ChatGPT, Google Gemini and GitHub Copilot. AIs could inadvertently copy code from other sources, and the latest version is designed to spot copied code based on its semantic structures, not just verbatim software.
At least in the next five years, Shani doesn't see AI tools like Copilot as taking humans out of programming.
"I don't think that it will replace the human in the loop. There's some capabilities that we as humanity have -- the creative thinking, the innovation, the ability to think beyond how a machine thinks in terms of putting things together in a creative way. That's something that the machine can still not do."
CNET's Lisa Lacy contributed to this report.
Academics hope to publish their research in journals, but their initial submissions are often rejected.
Like actors and writers, researchers experience their fair share of rejection. Scientists submit their work to journals, hoping that it will be accepted, but many manuscripts are rejected from their authors’ top-choice publication and eventually get accepted by another. A considerable number of submissions don’t ever find a home.
A study 1 sheds light on this process of rejection and resubmission, which it argues can be skewed by the differing attitudes and behaviours of researchers around the world.
After following the fate of some 126,000 rejected manuscripts, the research team found that authors in Western countries are almost 6% more likely than are those based in other parts of the world to successfully publish a paper after it has been rejected. This could be, the authors suggest, because of regional differences in access to ‘procedural knowledge’ of how to deal with rejection — how to interpret negative reviews, revise accordingly and resubmit to a journal that is likely to accept the work. (Many academic journals are based in Western countries.)
“Maybe it’s something about being in the right networks and being able to get the right kind of advice at the right time,” says co-author Misha Teplitskiy, a sociologist studying innovation in science and technology at the University of Michigan in Ann Arbor.
Teplitskiy and his colleagues worked with data provided by IOP Publishing (IOPP), a company based in Bristol, UK, that publishes more than 90 English-language journals and is owned by the Institute of Physics.
They examined around 203,000 manuscripts that were submitted to 62 of IOPP’s physical-sciences journals between 2018 and 2022. Some 62% were rejected. The researchers scoured a bibliometric database to see whether the same (or similar) work was subsequently published elsewhere. They then sorted these publications by the geographical region of the corresponding author — the researcher who is usually in charge of a study’s publication process — and compared the outcomes for authors from the West (which they define as North America, Europe and Oceania) with those from the rest of the world.
[Chart: ‘Publishing outcomes by country’. Source: Ref. 1]
To compare the fate of rejected papers as fairly as possible, the authors categorized them by quality, using the ratings and comments of the original peer reviewers recorded in the IOPP data. In this way, they could compare ‘like for like’: for example, looking at whether low-quality papers from Western authors had different outcomes from those rated as similar quality but written by authors from other parts of the world.
The analysis — published ahead of peer review as a preprint on the SSRN server 1 — showed that corresponding authors from Western countries are 5.7% more likely to publish a manuscript after rejection than those from other regions. In a process that often takes up to 300 days, they did so 23 days faster, on average. These authors also revised the abstract of their manuscript — a proxy for the overall paper — 5.9% less often, as defined by a computational ‘edit distance’ metric. And, ultimately, they published in journals with 0.8% higher impact factors. This metric reflects how often papers in a journal are cited, but is equated by some with the journal’s reach and prestige.
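The preprint does not specify which edit-distance variant the team computed, but the idea can be illustrated with the classic Levenshtein distance, normalized so that the score ranges from 0 (identical abstracts) to 1 (completely rewritten). Everything below — the function names and the normalization choice — is an illustrative sketch, not the study's actual implementation.

```python
def levenshtein(a: str, b: str) -> int:
    """Dynamic-programming edit distance: the minimum number of
    single-character insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]


def revision_extent(original: str, revised: str) -> float:
    """Normalize by the longer string: 0.0 means no change,
    1.0 means the text was entirely rewritten."""
    if not original and not revised:
        return 0.0
    return levenshtein(original, revised) / max(len(original), len(revised))
```

Comparing two versions of an abstract with `revision_extent` would give a per-manuscript revision score; averaging such scores across groups of authors is one plausible way to arrive at a relative difference like the 5.9% the study reports.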
In a breakdown by country, the team’s analysis showed that around 70% of papers from Asian nations such as China and India were published eventually, compared with 85% from the United States, and close to 90% for many European countries (see ‘Publishing outcomes by country’).
What’s responsible for these differences? It’s hard to be sure, Teplitskiy says, but the results are consistent — at least in part — with the idea that the tacit norms and rules of the publishing process circulate more widely in the West, which leads to a higher likelihood of successful responses by Western scientists to rejections. His team tried to ask the authors of rejected papers about this hypothesis in a follow-up survey, but got few responses.
“People hate surveys in general, but they really don’t like surveys about their rejected papers,” he says.
The way the authors rated and compared papers of similar quality is a good approach, says Honglin Bao, a data scientist at Harvard Business School in Boston, Massachusetts, who worked previously in China: “I think this works.”
The true cost of science’s language barrier for non-native English speakers
Differing procedural knowledge could contribute to the well-known bias in the peer-review system against researchers who are not based in Western countries, Bao says. Another possibility is that cultural factors work against researchers and add to the system’s bias. For example, many journals are written in English, which puts researchers whose first language is not English at a disadvantage, and could contribute to their poorer performance after rejection.
Teplitskiy will now face the possible rejection–resubmission cycle himself. He has submitted the study to the journal Proceedings of the National Academy of Sciences for peer review, but is realistic about the probable outcome. “I think this paper’s great, but I know the process is noisy,” he says. “We expect that it will bounce around early on and then land somewhere.”
doi: https://doi.org/10.1038/d41586-024-02142-w
1. Chen, H., Rider, C. I., Jurgens, D. & Teplitskiy, M. Preprint at SSRN https://ssrn.com/abstract=4872023 (2024).