Table 1 | ||
Role or Position in Organization | ||
Role or Position in Organization | Percentage of Respondents | Number of Respondents |
Senior management (e.g., director, dean, associate dean/director) | 9.09% | 55 |
Middle management (e.g., department head, supervisor, coordinator) | 20.00% | 121 |
Specialist or professional (e.g., librarian, analyst, consultant) | 60.99% | 369 |
Support staff or administrative | 8.93% | 54 |
Other | 0.99% | 6 |
Most of the respondents were primarily involved in Reference and Research Services (25.17%) or Library Instruction and Information Literacy (24.34%)—two areas integral to the academic support infrastructure.
In terms of professional experience, participants exhibited a broad range, from novices with less than a year’s experience (2.81%) to seasoned veterans with over 20 years in the field (22.68%).
Table 2 | ||
Primary Work Area in Academic Librarianship | ||
Primary Work Area in Academic Librarianship | Percentage of Respondents | Number of Respondents |
Administration or management | 10.93% | 66 |
Reference and research services | 25.17% | 152 |
Technical services (e.g., acquisitions, cataloging, metadata) | 8.11% | 49 |
Collection development and management | 4.64% | 28 |
Library instruction and information literacy | 24.34% | 147 |
Electronic resources and digital services | 4.30% | 26 |
Systems and IT services | 3.64% | 22 |
Archives and special collections | 3.31% | 20 |
Outreach, marketing, and communications | 1.66% | 10 |
Other | 13.91% | 84 |
Table 3 | ||
Years of Experience as a Library Employee | ||
Years of Experience as a Library Employee | Percentage of Respondents | Number of Respondents |
Less than 1 year | 2.81% | 17 |
1–5 years | 21.19% | 128 |
6–10 years | 19.54% | 118 |
11–15 years | 19.04% | 115 |
16–20 years | 14.74% | 89 |
More than 20 years | 22.68% | 137 |
The survey group was highly educated, with most holding a master’s degree in library and information science (65.51%), and a significant number having completed a doctoral degree or a master’s in another field.
The survey also collected demographic information. A substantial majority identified as female (71.97%), and the largest age group was 35–44 years (27.97%). While the majority identified as White (76.11%), other ethnicities, including Asian, Black or African American, and Hispanic or Latino, were also represented.
This diverse participant profile offers a broad-based view of AI literacy in the academic library landscape, setting the stage for insightful findings and discussions.
Table 4 | ||
Level of Understanding of AI Concepts and Principles | ||
Level of Understanding of AI Concepts and Principles | % of Respondents | Number of Respondents |
1 (Very Low) | 7.50% | 57 |
2 | 20.13% | 153 |
3 (Moderate) | 45.39% | 345 |
4 | 23.29% | 177 |
5 (Very High) | 3.68% | 28 |
At a broad level, participants expressed a modest understanding of AI concepts and principles, with a significant portion rating their knowledge at an average level. However, the number of respondents professing a high understanding of AI was quite small, revealing a potential area for further training and education.
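As a quick check on this characterization, the mean self-rating implied by Table 4 can be computed directly from the reported percentages. The sketch below simply re-expresses the Table 4 distribution as fractions; it introduces no data beyond the table itself:

```python
# Weighted mean of the 1-5 self-rated understanding scale, using the
# percentage distribution reported in Table 4.
levels = [1, 2, 3, 4, 5]
shares = [0.0750, 0.2013, 0.4539, 0.2329, 0.0368]  # fractions of respondents

mean_rating = sum(level * share for level, share in zip(levels, shares))
# The mean lands just below the scale midpoint of 3, i.e., a moderate self-rating.
```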
A similar pattern was observed when participants were queried about their understanding of generative AI specifically. This suggests that while librarians have begun to grasp AI and its potential, there is a considerable scope for growth in terms of knowledge and implementation (Figure 1).
Figure 1
Understanding of Generative AI
Regarding the familiarity with AI tools, most participants had a moderate level of experience (30.94%). Only a handful of participants reported a high level of familiarity (3.87%), signaling an opportunity for more hands-on training with these tools.
In examining the prevalence of AI usage in the library sector, the researcher found a varied landscape. While some technologies have found significant adoption, others remain relatively unused. Notably, Chatbots and text or data mining tools were the most widely used AI technologies.
Participants’ understanding of specific AI concepts followed a similar trend. More straightforward concepts such as Machine Learning and Natural Language Processing had a higher average rating, whereas complex areas like Deep Learning and Generative Adversarial Networks were less understood. This trend underscores the need for targeted educational programs on AI in library settings.
Table 5 | |
Understanding of Specific AI Concepts | |
AI Concept | Average Rating |
Machine Learning | 2.50 |
Natural Language Processing (NLP) | 2.38 |
Neural Network | 1.93 |
Deep Learning | 1.79 |
Generative Adversarial Networks (GANs) | 1.37 |
Notably, there was an almost nine percent drop in response rate between the earlier questions and those asking about the more technical aspects of AI. This could signify a gap in knowledge or comfort level with these topics among the participants.
In the professional sphere, AI tools have yet to become a staple in library work. The majority of participants do not frequently use these tools, with 41.79% never using generative AI tools and 28.01% using them less than once a month. This might be attributed to a lack of familiarity, resources, or perceived need. However, for those who do use them, text generation and research assistance are the primary use cases.
Concerns about ethical issues, quality, and accuracy of generated content, as well as data privacy, were prevalent among the participants. This finding indicates that while there’s interest in AI technologies, the perceived challenges are significant barriers to full implementation and adoption.
In their personal lives, AI tools have yet to make a significant impact among the participants. The majority (63.98%) reported using these tools either ‘less than once a month’ or ‘never.’ This could potentially reflect the current state of AI integration in non-professional or leisurely activities, and may change as AI continues to permeate our everyday lives.
A chi-square test of independence was performed to examine the relation between the position of the respondent and their understanding of AI concepts and principles. The relation between these variables was significant, χ²(16, N = 760) = 26.31, p = .05. This means that the understanding of AI concepts and principles varies depending on the position of the respondent.
The distributions suggest that—while there is a significant association between the position of the respondent and their understanding of AI concepts and principles—the majority of respondents across all positions have a moderate understanding of AI. However, there are differences in the proportions of respondents who rate their understanding as high or very high, with Senior Management and Middle Management having higher proportions than the other groups.
There is also a significant relation between the area of academic librarianship and the understanding of AI concepts and principles, χ²(36, N = 760) = 68.64, p = .00084. This means that the understanding of AI concepts and principles varies depending on the area of academic librarianship. The distributions show that there are differences in the proportions of respondents who rate their understanding as high or very high, with Administration or management and Library Instruction and Information Literacy having higher proportions than the other groups.
Furthermore, a Chi-Square test shows that the relation between the payment for a premium version of at least one of the AI tools and the understanding of AI concepts and principles is significant, χ²(4, N = 539) = 85.42, p < .001. The distributions suggest that respondents who have paid for a premium version of at least one of the AI tools have a higher understanding of AI concepts and principles compared to those who have not. This could be because those who have paid for a premium version of an AI tool are more likely to use AI in their work or personal life, which could enhance their understanding of AI. Alternatively, those with a higher understanding of AI might be more likely to see the value in paying for a premium version of an AI tool.
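As an illustration of the method behind these results, a chi-square test of independence can be computed from any observed contingency table. The sketch below uses made-up counts (not the survey's data) and only the Python standard library:

```python
# Minimal sketch of the chi-square test of independence used in this study.
# The observed counts below are illustrative placeholders, NOT the survey data.

def chi_square_independence(observed):
    """Return (chi-square statistic, degrees of freedom) for an r x c table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # count expected under independence
            stat += (obs - expected) ** 2 / expected
    dof = (len(observed) - 1) * (len(observed[0]) - 1)
    return stat, dof

# Rows: two hypothetical position groups; columns: low / moderate / high self-rating.
table = [
    [10, 30, 15],
    [40, 160, 50],
]
stat, dof = chi_square_independence(table)
# Compare the statistic to the critical value at alpha = .05 for df = 2 (5.991).
significant = stat > 5.991
```

The values reported above (e.g., χ²(16, N = 760) = 26.31) come from this same computation applied to the full position-by-rating table, with the p-value read from the chi-square distribution for the corresponding degrees of freedom.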
It’s important to note that these findings are based on the respondents’ self-rated understanding of AI, which may not accurately reflect their actual understanding. Further research could involve assessing the respondents’ understanding of AI through objective measures. Additionally, other factors not considered in this analysis, such as the respondent’s educational background, years of experience, and exposure to AI in their work, could also influence their understanding of AI.
In this section, the researcher delved deeper into the gaps in knowledge and confidence among academic library professionals regarding AI applications. These gaps highlight the urgent need for targeted professional development and training in AI literacy.
The survey data pointed to moderate levels of confidence across a spectrum of AI-related tasks, indicating room for growth and learning. For evaluating the ethical implications of using AI, a modest 31.12% of respondents felt somewhat confident (levels 4 and 5 combined), while 29.50% were not confident (levels 1 and 2 combined), and the largest group (39.38%) remained neutral.
Discussing AI integration revealed similar patterns. Here, 32.10% reported high confidence, 34.85% expressed low confidence, and the remaining 33.06% were neutral. These distributions suggest an overall hesitation or lack of assurance in discussing and ethically implementing AI, potentially indicative of inadequate training or exposure to these topics.
When it came to collaborating on AI-related projects, fewer respondents (31.39%) felt confident, while 40.16% reported low confidence, and 28.46% chose a neutral stance. This might point to the necessity of not only individual proficiency in AI but also the need for collaborative skills and shared understanding among teams working with AI.
Troubleshooting AI tools and applications emerged as the most significant gap, with 69.76% rating their confidence as low and only 10.9% expressing high confidence. This highlights an essential area for targeted training, as troubleshooting is a fundamental aspect of successful technology implementation.
Table 6 | |||||
Confidence Levels in Various Aspects of AI | |||||
Aspect | % at Confidence Level 1 | % at Confidence Level 2 | % at Confidence Level 3 | % at Confidence Level 4 | % at Confidence Level 5 |
Evaluating Ethical Implications of AI | 12.48% | 17.02% | 39.38% | 24.64% | 6.48% |
Participating in AI Discussions | 13.29% | 21.56% | 33.06% | 20.75% | 11.35% |
Collaborating on AI Projects | 15.77% | 24.39% | 28.46% | 21.63% | 9.76% |
Troubleshooting AI Tools | 41.79% | 27.97% | 19.35% | 9.76% | 1.14% |
Providing Guidance on AI Resources | 25.65% | 24.51% | 25.81% | 20.13% | 3.90% |
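The combined low- and high-confidence figures quoted above can be reproduced from Table 6 by collapsing the five-point scale into bands. The sketch below copies two of the table's rows verbatim; it adds no new data:

```python
# Collapse Table 6's five confidence levels into low (1-2), neutral (3),
# and high (4-5) bands; the percentages are taken directly from the table.
table6 = {
    "Evaluating Ethical Implications of AI": [12.48, 17.02, 39.38, 24.64, 6.48],
    "Troubleshooting AI Tools": [41.79, 27.97, 19.35, 9.76, 1.14],
}

def collapse(row):
    low, neutral, high = row[0] + row[1], row[2], row[3] + row[4]
    return round(low, 2), round(neutral, 2), round(high, 2)

bands = {aspect: collapse(row) for aspect, row in table6.items()}
# Troubleshooting stands out: roughly 69.76% low vs. 10.90% high confidence.
```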
Approximately one-third of survey participants had engaged in AI-focused professional development, and their accounts revealed several key themes:
The findings emphasize the multifaceted nature of AI in libraries, underlining the need for ongoing, comprehensive professional development. This includes addressing both technical and ethical aspects, equipping librarians with practical AI skills, and fostering a supportive community of practice.
A Chi-square test examining the relationship between the respondents’ positions and their participation in any training focused on generative AI (χ²(4, N = 595) = 26.72, p < .001) indicates a significant association. Upon examining the data, the proportion of respondents who have participated in training or professional development programs focused on generative AI is highest among those in Senior Management (47.27%), followed by Specialist or Professional (37.40%), Middle Management (29.75%), and Other (16.67%). The proportion is lowest among Support Staff or Administrative (3.70%).
This suggests that individuals in higher positions, such as Senior Management and Specialist or Professional roles, are more likely to have participated in training or professional development programs focused on generative AI. This could be due to a variety of reasons, such as these roles potentially requiring a more in-depth understanding of AI and its applications, or these individuals having more access to resources and opportunities for such training. On the other hand, Support Staff or Administrative personnel are less likely to have participated in such programs, which could be due to less perceived need or fewer opportunities for training in these roles.
These findings highlight the importance of providing access to training and professional development opportunities focused on AI across all roles in an organization, not just those in higher positions or those directly involved in AI-related tasks. This could help ensure a more widespread understanding and utilization of AI across the organization.
Despite these efforts, many participants did not feel adequately prepared to utilize generative AI tools professionally. A notable 62.91% disagreed to some extent with the statement: “I feel adequately prepared to use generative AI tools in my professional work as a librarian,” underscoring the need for more effective training programs.
Interestingly, the areas identified for further training were not limited to the basics of AI. Participants showed a clear demand for an advanced understanding of AI concepts and techniques (13.53%), familiarity with AI tools and applications in libraries (14.21%), and guidance on privacy and data security concerns related to generative AI (14.36%). This suggests that librarians are looking to move beyond a basic understanding and are keen to engage more deeply with AI.
Preferred formats for professional development opportunities leaned towards remote and flexible learning opportunities, such as online courses or webinars (26.02%) and self-paced learning modules (22.44%). This preference reflects the current trend towards digital and remote learning, providing a clear direction for future training programs.
Notably, 43.99% of participants rated the need for academic librarians to receive training on AI tools and applications within the next twelve months as 'extremely important.' This emphasis on urgency indicates a significant and immediate gap to be addressed.
In summary, a deeper analysis of the data reveals a landscape where academic librarians possess moderate to low confidence in understanding, discussing, and handling AI-related tasks, despite some exposure to professional development in AI. This finding indicates the need for more comprehensive, in-depth, and accessible AI training programs. By addressing these knowledge gaps, the library community can effectively embrace AI’s potential and navigate its challenges.
The comprehensive results of our survey, as illustrated in Table 7, offer a detailed portrait of librarians’ perceptions towards the integration of generative AI tools in library services and operations.
Table 7 | |||||
Perceptions Towards the Integration of Generative AI Tools In Library Services | |||||
Statement | 1 | 2 | 3 | 4 | 5 |
To what extent do you agree or disagree with the following statement: “I believe generative AI tools have the potential to benefit library services and operations.” (1 = strongly disagree, 5 = strongly agree) | 3.32% | 10.96% | 35.88% | 27.91% | 21.93% |
How important do you think it is for your library to invest in the exploration and implementation of generative AI tools? (1 = not at all important, 5 = extremely important) | 7.24% | 15.95% | 29.93% | 28.78% | 18.09% |
In your opinion, how prepared is your library to adopt generative AI tools and applications in the next 12 months? (1 = not at all prepared, 5 = extremely prepared) | 32.28% | 37.75% | 23.84% | 4.80% | 1.32% |
To what extent do you think generative AI tools and applications will have a significant impact on academic libraries within the next 12 months? (1 = no impact, 5 = major impact) | 2.81% | 20.03% | 36.09% | 26.16% | 14.90% |
How urgent do you feel it is for your library to address the potential ethical and privacy concerns related to the use of generative AI tools and applications? (1 = not at all urgent, 5 = extremely urgent) | 2.15% | 5.46% | 18.05% | 29.47% | 44.87% |
When considering the potential benefits of AI, the responses indicate a degree of ambivalence, with 35.88% choosing a neutral stance. However, combining those who 'agree' and 'strongly agree,' a significant portion, 49.84%, view AI as beneficial to some extent. Similarly, on the question of investment in AI, there is a notable lean toward the upper end of the scale, with 46.87% rating such investment as important (4) or extremely important (5).
However, this optimism is juxtaposed with concerns about readiness. When asked how prepared their libraries are to adopt generative AI tools within the forthcoming year, 70.03% of respondents (ratings of 1 or 2) admitted a lack of preparedness. This suggests that despite recognizing the potential value of AI, considerable obstacles must be overcome before implementation becomes feasible.
The uncertainty surrounding AI's short-term impact on libraries further illuminates this complexity. A significant proportion of librarians (36.09%) chose a neutral response when asked to predict the impact of AI on academic libraries within the next twelve months. Nonetheless, a considerable group (41.06%, ratings of 4 or 5) foresee a significant short-term impact.
A key finding from the survey was the collective recognition of the urgency of addressing ethical and privacy issues tied to AI usage. In fact, 74.34% of respondents rated these concerns as urgent (4) or extremely urgent (5), highlighting the weight of responsibility librarians feel in maintaining the integrity of their services in the age of AI (Figure 2).
Figure 2
Perceived Urgency for Addressing Ethical and Privacy Concerns of Generative AI in Libraries
The qualitative responses provide a rich understanding of the perceptions of generative AI among library professionals and the implications they foresee for the library profession. The responses were categorized into several key themes, each of which is discussed below with relevant quotes from the respondents.
A significant theme that emerged from the responses was the ethical and privacy concerns associated with the use of generative AI tools in libraries. Respondents expressed apprehension about potential misuse of data and violations of privacy. As one respondent noted, “Library leaders should not rush to implement AI tools without listening to their in-house experts and operational managers.” Another respondent cautioned, “We need to be cautious about adopting technologies or practices within our own workflows that pose significant ethical questions, privacy concerns.”
The need for education and training on AI for librarians was another prevalent theme. Respondents emphasized the importance of understanding AI tools and their implications before implementing them. One respondent suggested: “quickly education on AI is needed for librarians. As with anything else, there will be early adopters and then a range of adoption over time.” Another respondent highlighted the need for an AI specialist, stating, “I also think it would be valuable to have an AI librarian, someone who can be a resource for the rest of the staff.”
Respondents expressed concern about the potential for misuse of AI tools, such as generating false citations or over-reliance on AI systems. They emphasized the importance of critical thinking skills, and cautioned against replacing human judgment and learning processes with AI. As one respondent put it, “Critical thinking skills and learning processes are vital and should not be replaced by AI.” Another respondent warned: “there are potential risks from misuse such as false citations being provided or too much dependence on systems.”
Several respondents expressed doubts about the ability of libraries to quickly and effectively implement AI tools. They cited issues such as frequent updates and refinements to AI tools, the need for significant investment, and the potential for AI to be used in ways that do not benefit the library or its users. One respondent noted, “the concern I have with AI tools is the frequent updates and refinements that occur. For libraries with small staff size, it seems daunting to keep up.”
Some respondents suggested specific ways in which AI could be used in libraries, such as for collection development, instruction, and answering frequently asked questions. However, they also cautioned against viewing AI as a panacea for all library challenges. One respondent stated: “using them for FAQs will be more useful than answering a complicated reference question.”
Some respondents expressed concern that the use of AI could lead to job displacement or a devaluation of the human elements of librarianship. They suggested that AI should be used to complement, not replace, human librarians. One respondent expressed that, “I could see a future where only top research institutions have human reference librarians as a concierge service.”
Respondents emphasized the need for critical evaluation of AI tools, including understanding their limitations and potential biases. They suggested that libraries should not rush to implement AI without fully understanding its implications. One respondent advised: “the framing of AI usage as a forgone conclusion is concerning. It’s a tool, not a solution, and should not be implemented without due consideration.”
Some respondents suggested that libraries have a role to play in teaching AI literacy to students and other library users. They emphasized the importance of understanding how AI tools work and how to use them responsibly. One respondent stated: “I think we need to teach AI literacy to students.” Another respondent echoed this sentiment, saying, “it is essential that we prepare our students to use generative AI tools responsibly.”
The perceptions of generative AI among library professionals are multifaceted, encompassing both the potential benefits and challenges of these technologies. While there is recognition of the potential of AI to enhance library services, there is also a strong emphasis on the need for ethical considerations, education and training, critical evaluation, and responsible use of these tools. The implications for the library profession are significant, with concerns about job displacement, the need for new skills and roles, and the potential for changes in library practices and services. These findings highlight the need for ongoing dialogue and research on the use of generative AI in libraries.
While library employees acknowledge the potential advantages of AI in library services, they also express concerns regarding readiness, and emphasize the urgency to address ethical and privacy considerations. These findings indicate the need for support systems, training, and resources to address readiness gaps, alongside rigorous discussion, and guidelines to navigate ethical and privacy issues as libraries explore the possibilities of AI integration.
The survey results cast light on the current state of artificial intelligence literacy, training needs, and perceptions within the academic library community. The findings reveal a landscape of recognition for the potential of AI technologies, yet, simultaneously, a lack of in-depth understanding and preparedness for their adoption.
A detailed examination of the data reveals that a considerable number of library professionals self-assess their understanding of AI as sitting around, or below, the middle. While this does suggest a basic level of familiarity with AI concepts and principles, it likely falls short of the proficiency required to navigate the rapidly evolving AI landscape confidently and competently. This gap in understanding holds implications for the library field as AI continues to infiltrate various sectors and increasingly permeates library services and operations.
Moreover, an analysis of the familiarity of library professionals with AI tools lends further credence to this call for more comprehensive AI education initiatives. An understanding of AI extends beyond mere theoretical comprehension—it necessitates hands-on familiarity with AI tools and the ability to use and apply them in practice. Direct interaction with AI technologies provides an avenue for library professionals to bolster their practical understanding and thus equip them to incorporate these tools into their work more effectively.
However, formulating training initiatives that address these gaps is a multifaceted task. AI usage in libraries is as diverse as the scope of AI applications itself: customer service chatbots, text or data mining tools, and advanced technologies such as neural networks and deep learning systems each offer unique applications and therefore require distinct expertise and understanding. Accordingly, training programs must be flexible and comprehensive, encompassing the full range of potential AI applications while also delving deep enough to provide a solid grasp of each specific tool's functionality and potential uses.
The study also sheds light on the varying degrees of understanding across different AI concepts. Participants generally exhibited a higher level of comprehension for simpler AI concepts. However, their understanding waned when it came to more complex concepts, often the bedrock of cutting-edge AI applications. This variation in comprehension underscores the need for a stratified approach to AI education. Such an approach could start with foundational concepts and gradually progress towards more advanced topics, providing a scaffold on which a deeper understanding of AI can be built.
Addressing the AI literacy gap in the library sector thus requires a concerted approach—one that offers comprehensive and layered educational strategies that bolster both theoretical understanding and practical familiarity with AI. The aim should not only be to impart knowledge, but to empower library professionals to confidently navigate the AI landscape, to adopt and adapt AI technologies in their work effectively and—crucially—responsibly. Through such training and professional development initiatives, libraries can harness the potential of AI, ensuring they continue to be at the forefront of technological advancements.
As the focus shifts to the professional use of AI tools in libraries, the data reveal that their adoption is not yet commonplace. Text generation and research assistance are the most commonly reported uses, reflecting the immediate utility these technologies offer to librarians. However, a significant proportion of participants do not frequently use AI tools, indicating barriers to adoption. These barriers could include a lack of understanding or familiarity with these tools, a perceived lack of necessity for their use, or limitations in the resources necessary for implementation and maintenance. Overcoming them may require more than education and resources alone; demonstrating the tangible benefits and efficiencies AI tools can bring to library work could play a pivotal role in their wider adoption.
The data show a strong enthusiasm among librarians for professional development related to AI. While introductory training modalities are popular, the findings reveal a demand for more advanced, hands-on training. This need aligns with the complexity and rapid evolution of AI technologies, which require a deeper understanding to be fully leveraged in library contexts.
Furthermore, the findings highlight the importance of ethical considerations and the potential benefits of fostering communities of practice in AI training. With the increasing integration of AI technology into library services, the issues related to AI ethics will likely become more complex. Proactively addressing these concerns through in-depth, focused training can help libraries continue to serve as ethical stewards of information. Communities of practice provide a platform for shared learning, mutual support, and the pooling of resources, equipping librarians to better navigate the intricacies of AI integration.
Importantly, the data show that the diversity in librarians’ roles and contexts necessitates a tailored approach to AI training. Libraries differ in their services, target audiences, resources, and strategic goals, and so do their AI training needs. A one-size-fits-all approach to AI training may fall short. Future AI training could therefore take these variations into account, offering specialized tracks or modules catering to specific roles or institutional contexts.
Likewise, the perceptions surrounding the use of generative AI tools in libraries are intricate and multifaceted. While the potential benefits of AI are acknowledged and the importance of investing in its implementation recognized, there is also a pronounced lack of readiness to adopt these tools. This readiness gap could stem from various factors, such as a lack of technical skills, insufficient funding, or institutional resistance. Future research should delve into these possibilities to better understand and address this gap.
Library professionals express uncertainty about the short-term implications of AI for libraries. This could reflect the novelty of these technologies and a lack of clear use cases, or it could echo the experiences of early adopters. The findings also emphasize a heightened sense of urgency in addressing the ethical and privacy concerns associated with AI technologies. These concerns underline the necessity for ongoing dialogue, education, and policy development around AI use in libraries.
The results reveal an intricate landscape of AI understanding, usage, and perception in the library field. While the benefits of AI tools are acknowledged, a comprehensive understanding and readiness to implement these technologies remain less than ideal. This reality underlines the pressing need for an investment in targeted educational strategies and ongoing professional development initiatives.
Crucially, the wide variance in AI literacy, understanding of AI concepts, and hands-on familiarity with AI tools among library professionals points towards the need for a stratified and tailored approach to AI education. Future training programs must aim beyond just knowledge acquisition—they must equip library professionals with the capabilities to apply AI technologies in their roles effectively, ethically, and responsibly. Ethical and privacy concerns emerged as significant considerations in the adoption of AI technologies in libraries. Our findings reinforce the crucial role that libraries have historically played, and must continue to play, in advocating for ethical information practices.
The readiness gap in AI adoption uncovered by the study suggests a disconnect between understanding the potential of AI and the ability to harness it effectively. This invites a deeper investigation into potential barriers, including technical proficiency, resource allocation, and institutional culture, among others.
This study presents a framework for defining AI literacy in academic libraries, encapsulating seven key competencies:
This multidimensional definition of AI literacy for libraries provides a foundation for developing comprehensive training programs and curricula. For instance, the need to understand AI system capabilities and limitations highlighted in the definition indicates that introductory AI education should provide a solid grounding in how common AI technologies like machine learning work, where they excel, and their constraints. This conceptual comprehension equips librarians to set realistic expectations when evaluating or implementing AI.
The definition also accentuates that gaining practical skills to use AI tools appropriately should be a core training component. Hands-on learning focused on identifying appropriate applications, utilizing AI technologies effectively, and critically evaluating outputs can empower librarians to harness AI purposefully.
Moreover, emphasizing critical perspectives and ethical considerations reflects that AI training for librarians should move beyond technical proficiency. Incorporating modules examining biases, privacy implications, misinformation risks, and societal impacts is key for fostering responsible AI integration.
Likewise, the collaborative dimension of the definition demonstrates that cultivating soft skills for productive AI discussions and teamwork should be part of the curriculum. AI literacy has an important social element that training programs need to nurture.
Overall, this definition provides a skills framework that can inform multipronged, context-sensitive AI training tailored to librarians’ diverse needs. It constitutes an actionable guide for developing AI curricula and professional development that advance both technical and social aspects of AI literacy.
Based on the findings and limitations of the current study, the following are specific recommendations for future research:
By pursuing these avenues for future research, we can continue to deepen our understanding of AI literacy in the library profession, inform strategies for enhancing AI literacy, and promote the effective and ethical use of AI in libraries.
Survey flow.
Standard: Block 1 (1 Question)
Block: Knowledge and Familiarity (12 Questions)
Standard: Perceived Competence and Gaps in AI Literacy (5 Questions)
Standard: Training on Generative AI for Librarians (6 Questions)
Standard: Desired Use of Generative AI in Libraries (7 Questions)
Standard: Demographic (10 Questions)
Standard: End of Survey (1 Question)
Start of Block: Block 1
Dr. Leo Lo from the University of New Mexico is conducting a research project. You are invited to participate in a research study aiming to assess AI literacy among academic library employees, identify gaps in AI literacy that require further professional development and training, and understand the differences in AI literacy levels across different roles and demographic factors. Before you begin the survey, please read this Informed Consent Form carefully. Your participation in this study is voluntary, and you may choose to withdraw at any time without any consequences.
Artificial Intelligence (AI) refers to the development of computer systems and software that can perform tasks that would typically require human intelligence. These tasks may include problem-solving, learning, understanding natural language, recognizing patterns, perception, and decision-making.
You are being asked to participate based on the following inclusion and exclusion criteria:
The purpose of this study is to evaluate the current AI literacy levels of academic librarians and identify areas where further training and development may be needed. The findings will help inform the design of targeted professional development programs and contribute to the understanding of AI literacy in the library profession.
If you agree to participate in this study, you will be asked to complete an online survey that will take approximately 15–20 minutes to complete. The survey includes questions about your AI knowledge, familiarity with AI tools and applications, perceived competence in using AI, and your opinions on training needs.
There are no known risks or discomforts associated with participating in this study. Some questions might cause minor discomfort due to self-reflection, but you are free to skip any questions you prefer not to answer.
Benefits: While there are no direct benefits to you for participating in this study, your responses will help contribute to a better understanding of AI literacy among academic librarians and inform the development of relevant professional training programs.
Your responses will be anonymous, and no personally identifiable information will be collected. Data will be stored securely on password-protected devices or encrypted cloud storage services, with access limited to the research team. The results of this study will be reported in aggregate form, and no individual responses will be identifiable. Your information collected for this project will NOT be used or shared for future research, even if we remove the identifiable information like your name.
Your participation in this study is voluntary, and you may choose to withdraw at any time without any consequences. Please note that if you decide to withdraw from the study, the data that has already been collected from you will be kept and used. This is necessary to maintain the integrity of the study and ensure that the data collected is reliable and valid.
If you have any questions or concerns about this study, please contact the principal investigator, Leo Lo, at [email protected] . If you have questions regarding your rights as a research participant, or about what you should do in case of any harm to you, or if you want to obtain information or offer input, please contact the UNM Office of the IRB (OIRB) at (505) 277-2644 or irb.unm.edu
By clicking “I agree” below, you acknowledge that you have read and understood the information provided above, had an opportunity to ask questions, and voluntarily agree to participate.
I agree (1)
I do not agree (2)
Skip To: End of Survey If Q1.1 = I do not agree
End of Block: Block 1
Start of Block: Knowledge and Familiarity
Artificial Intelligence (AI) refers to the development of computer systems and software that can perform tasks that would typically require human intelligence. These tasks may include problem-solving, learning, understanding natural language, recognizing patterns, perception, and decision-making.
Q2.1 Please rate your overall understanding of AI concepts and principles (using a Likert scale, e.g., 1 = very low, 5 = very high)
Q2.2 On a scale of 1 to 5, how would you rate your understanding of generative AI? (1 = not at all knowledgeable, 5 = extremely knowledgeable)
Q2.3 Rate your familiarity with generative AI tools (e.g., ChatGPT, DALL-E, etc.) (using a Likert scale, e.g., 1 = not familiar, 5 = very familiar)
Q2.4 Which of the following AI technologies or applications have you encountered or used in your role as an academic librarian? (Select all that apply)
Q2.5 For each of the following AI concepts, indicate your understanding of the concept by selecting the appropriate response.
Response options: I don’t know what it is (1) | I know what it is but can’t explain it (2) | I can explain it at a basic level (3) | I can explain it in detail (4)
Machine Learning (1)
Natural Language Processing (NLP) (2)
Neural Network (3)
Deep Learning (4)
Generative Adversarial Networks (GANs) (5)
Q2.6 Which of the following generative AI tools have you used at least a few times? (Select all that apply)
Display This Question:
If “Which of the following generative AI tools have you used at least a few times?” (q://QID5) SelectedChoicesCount Is Greater Than 0
Q2.7 Have you ever paid for a premium version of at least one of these AI tools (for example, ChatGPT Plus or a Midjourney subscription plan)?
Q2.8 How frequently do you use generative AI tools in your professional work? (Select one)
Several times per week (2)
A few times per month (4)
Monthly (5)
Less than once a month (6)
Q2.9 For what purposes do you use generative AI tools in your professional work? (Select all that apply)
Q2.10 On a scale of 1 to 5, how would you rate how reliable generative AI tools have been in fulfilling your professional needs? (1 = not at all reliable, 5 = extremely reliable)
Please explain your choice.
1 (1) __________________________________________________
2 (2) __________________________________________________
3 (3) __________________________________________________
4 (4) __________________________________________________
5 (5) __________________________________________________
Q2.11 What level of concern do you have for the following potential challenges in implementing generative AI technologies in academic libraries? (Rate each challenge on a scale of 1 to 5, where 1 = not at all concerned and 5 = extremely concerned)
Rating scale: 1 (1) | 2 (2) | 3 (3) | 4 (4) | 5 (5)
Obtaining adequate funding and resources for AI implementation (1)
Ethical concerns, such as bias and fairness (2)
Intellectual property and copyright issues (3)
Staff resistance or lack of buy-in (4)
Quality and accuracy of generated content (5)
Ensuring accessibility and inclusivity of AI tools for all users (6)
Potential job displacement due to automation (7)
Data privacy and security (8)
Technical expertise and resource requirements (9)
Other (please specify) (10)
Q2.12 How frequently do you use generative AI tools in your personal life? (Select one)
End of Block: Knowledge and Familiarity
Start of Block: Perceived Competence and Gaps in AI Literacy
Q3.1 On a scale of 1 to 5, how confident are you in your ability to evaluate the ethical implications of using AI in your library? (1 = not at all confident, 5 = extremely confident)
Q3.2 On a scale of 1 to 5, how confident are you in your ability to participate in discussions about AI integration within your library? (1 = not at all confident, 5 = extremely confident)
Q3.3 On a scale of 1 to 5, how confident are you in your ability to collaborate with colleagues on AI-related projects in your library? (1 = not at all confident, 5 = extremely confident)
Q3.4 On a scale of 1 to 5, how confident are you in your ability to troubleshoot issues related to AI tools and applications used in your library? (1 = not at all confident, 5 = extremely confident)
Q3.5 On a scale of 1 to 5, how confident are you in your ability to provide guidance to library users about AI resources and tools? (1 = not at all confident, 5 = extremely confident)
End of Block: Perceived Competence and Gaps in AI Literacy
Start of Block: Training on Generative AI for Librarians
Q4.1 Have you ever participated in any training or professional development programs focused on generative AI?
If Q4.1 = Yes
Q4.2 Please briefly describe the nature and content of the training or professional development program(s) you attended.
________________________________________________________________
Q4.3 To what extent do you agree or disagree with the following statement: “I feel adequately prepared to use generative AI tools in my professional work as a librarian.” (1 = strongly disagree, 5 = strongly agree)
Q4.4 In which of the following areas do you feel the need for additional training or professional development related to AI? (Select all that apply)
Q4.5 What types of professional development opportunities related to AI would be most beneficial to you? (Select all that apply)
Q4.6 How important do you think it is for academic librarians to receive training on generative AI tools and applications in the next 12 months? (1 = not at all important, 5 = extremely important)
End of Block: Training on Generative AI for Librarians
Start of Block: Desired Use of Generative AI in Libraries
Q5.1 To what extent do you agree or disagree with the following statement: “I believe generative AI tools have the potential to benefit library services and operations.” (1 = strongly disagree, 5 = strongly agree)
Q5.2 How important do you think it is for your library to invest in the exploration and implementation of generative AI tools? (1 = not at all important, 5 = extremely important)
Q5.3 If you have any additional thoughts or suggestions on how your library could or should use (or not use) generative AI tools, please share them here.
Q5.4 How soon do you think your library should prioritize implementing generative AI tools and applications? (Select one)
Immediately (1)
Within the next 6 months (2)
Within the next year (3)
Within the next 2–3 years (4)
More than 3 years from now (5)
Not a priority at all (6)
Q5.5 In your opinion, how prepared is your library to adopt generative AI tools and applications in the next 12 months? (1 = not at all prepared, 5 = extremely prepared)
Q5.6 To what extent do you think generative AI tools and applications will have a significant impact on academic libraries within the next 12 months? (1 = no impact, 5 = major impact)
Q5.7 How urgent do you feel it is for your library to address the potential ethical and privacy concerns related to the use of generative AI tools and applications? (1 = not at all urgent, 5 = extremely urgent)
End of Block: Desired Use of Generative AI in Libraries
Start of Block: Demographic
Q6.1 In which type of academic institution is your library located? (Select one)
Community college (1)
College or university (primarily undergraduate) (2)
College or university (graduate and undergraduate) (3)
Research university (4)
Specialized or professional school (e.g., law, medical) (5)
Other (please specify) (6) __________________________________________________
Q6.2 Is your library an ARL member library?
Q6.3 Approximately how many students are enrolled at your institution? (Select one)
Fewer than 1,000 (1)
1,000–4,999 (2)
5,000–9,999 (3)
10,000–19,999 (4)
20,000–29,999 (5)
30,000 or more (6)
Q6.4 What is your current role or position in your organization? (Select one)
Senior management (e.g. Director, Dean, associate dean/director) (1)
Middle management (e.g. department head, supervisor, coordinator) (2)
Specialist or professional (e.g., librarian, analyst, consultant) (3)
Support staff or administrative (4)
Other (please specify) (5) __________________________________________________
Q6.5 In which area of academic librarianship do you primarily work? (Select one)
Administration or management (1)
Reference and research services (2)
Technical services (e.g., acquisitions, cataloging, metadata) (3)
Collection development and management (4)
Library instruction and information literacy (5)
Electronic resources and digital services (6)
Systems and IT services (7)
Archives and special collections (8)
Outreach, marketing, and communications (9)
Other (please specify) (10) __________________________________________________
Q6.6 How many years of experience do you have as a library employee?
Less than 1 year (1)
1–5 years (2)
6–10 years (3)
11–15 years (4)
16–20 years (5)
More than 20 years (6)
Q6.7 What is the highest level of education you have completed? (Select one)
High school diploma or equivalent (1)
Some college or associate degree (2)
Bachelor’s degree (3)
Master’s degree in library and information science (e.g., MLIS, MSLS) (4)
Master’s degree in another field (5)
Doctoral degree (e.g., PhD, EdD) (6)
Other (please specify) (7) __________________________________________________
Q6.8 What is your gender? (Select one)
Non-binary / third gender (3)
Prefer not to say (4)
Q6.9 What is your age range?
Under 25 (1)
65 and above (5)
Q6.10 How do you describe your ethnicity? (Select one or more)
End of Block: Demographic
Start of Block: End of Survey
Q7.1 Thank you for participating in our survey!
Your input is incredibly valuable to us and will contribute to our understanding of AI literacy among academic librarians. We appreciate the time and effort you have taken to share your experiences and opinions. The information gathered will help inform future professional development opportunities and address potential gaps in AI knowledge and skills.
We will carefully analyze the responses and share the findings with the academic library community. If you have any further comments or questions about the survey, please do not hesitate to contact us at [email protected].
Once again, thank you for your contribution to this important research. Your insights will help shape the future of AI in academic libraries.
Best regards,
University of New Mexico
End of Block: End of Survey
* Leo S. Lo is Dean, College of University Libraries and Learning Sciences at the University of New Mexico, email: [email protected]. ©2024 Leo S. Lo, Attribution-NonCommercial (https://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.
© 2024 Association of College and Research Libraries, a division of the American Library Association
Print ISSN: 0010-0870 | Online ISSN: 2150-6701
The lowdown on breakdown: open questions in plant proteolysis.
Authors are listed alphabetically (except for the lead author/coordinating editor). All authors contributed to writing and revising the article.
Nancy A Eckardt, Tamar Avin-Wittenberg, Diane C Bassham, Poyu Chen, Qian Chen, Jun Fang, Pascal Genschik, Abi S Ghifari, Angelica M Guercio, Daniel J Gibbs, Maren Heese, R Paul Jarvis, Simon Michaeli, Monika W Murcha, Sergey Mursalimov, Sandra Noir, Malathy Palayam, Bruno Peixoto, Pedro L Rodriguez, Andreas Schaller, Arp Schnittger, Giovanna Serino, Nitzan Shabek, Annick Stintzi, Frederica L Theodoulou, Suayib Üstün, Klaas J van Wijk, Ning Wei, Qi Xie, Feifei Yu, Hongtao Zhang, The lowdown on breakdown: Open questions in plant proteolysis, The Plant Cell, 2024, koae193, https://doi.org/10.1093/plcell/koae193
Proteolysis, including post-translational proteolytic processing as well as protein degradation and amino acid recycling, is an essential component of the growth and development of living organisms. In this article, experts in plant proteolysis pose and discuss compelling open questions in their areas of research. Topics covered include the role of proteolysis in the cell cycle, DNA damage response, mitochondrial function, the generation of N-terminal signals (degrons) that mark many proteins for degradation (N-terminal acetylation, the Arg/N-degron pathway, and the chloroplast N-degron pathway), developmental and metabolic signaling (photomorphogenesis, abscisic acid and strigolactone signaling, sugar metabolism, and post-harvest regulation), plant responses to environmental signals (endoplasmic reticulum-associated degradation, chloroplast-associated degradation, drought tolerance, the growth-defense tradeoff), and the functional diversification of peptidases. We hope these thought-provoking discussions help to stimulate further research.
BMC Medical Education, volume 24, Article number: 736 (2024)
Academic paper writing holds significant importance in the education of medical students, and poses a clear challenge for those whose first language is not English. This study aims to investigate the effectiveness of employing large language models, particularly ChatGPT, in improving the English academic writing skills of these students.
A cohort of 25 third-year medical students from China was recruited. The study consisted of two stages. First, the students were asked to write a mini paper. Second, they were asked to revise the mini paper using ChatGPT within two weeks. The evaluation of the mini papers focused on three key dimensions: structure, logic, and language. The evaluation combined manual scoring with AI scoring using the ChatGPT-3.5 and ChatGPT-4 models. Additionally, we employed a questionnaire to gather feedback on students’ experience in using ChatGPT.
After implementing ChatGPT for writing assistance, manual scores showed a notable increase of 4.23 points. Similarly, AI scoring based on the ChatGPT-3.5 model showed an increase of 4.82 points, while the ChatGPT-4 model showed an increase of 3.84 points. These results highlight the potential of large language models to support academic writing. Statistical analysis revealed no significant difference between manual scoring and ChatGPT-4 scoring, indicating the potential of ChatGPT-4 to assist teachers in the grading process. Feedback from the questionnaire indicated a generally positive response from students, with 92% acknowledging an improvement in the quality of their writing, 84% noting advancements in their language skills, and 76% recognizing the contribution of ChatGPT to supporting academic research.
The study highlighted the efficacy of large language models like ChatGPT in augmenting the English academic writing proficiency of non-native speakers in medical education. Furthermore, it illustrated the potential of these models to make a contribution to the educational evaluation process, particularly in environments where English is not the primary language.
Large language models (LLMs) are artificial intelligence (AI) tools with a remarkable ability to understand and generate text [1, 2]. Trained on substantial amounts of textual data, LLMs have demonstrated their capability to perform diverse tasks such as question answering, machine translation, and writing [3, 4]. In 2022, OpenAI released an LLM called ChatGPT [5]. Since its inception, ChatGPT has been widely applied in the medical domain, especially after testing showed it could perform at a level sufficient to pass the United States Medical Licensing Examination [6]. It can provide a personalized learning experience tailored to the preferences of medical students [7]. Research has shown that the explanations provided by ChatGPT are more accurate and comprehensive than the explanations of basic principles provided in some standardized higher education exams [8]. Therefore, many researchers believe that ChatGPT may improve students’ problem-solving ability and reflective learning [9].
Writing English-language academic papers is very important for the development of medical students in universities. China is a non-native English-speaking country with a large population of medical students, so it is necessary to provide medical education and relevant courses, especially to cultivate students’ ability to write English academic papers [10]. This ability is essential for future engagement in scientific research and clinical work within the field of medicine. However, the English writing ability of these non-native English-speaking medical students is relatively limited, and they need continuous training and improvement [11].
LLMs can be used to generate and modify text content and language styles, and can be applied to improving the quality of scientific papers [12, 13]. ChatGPT exhibits considerable potential in medical paper writing, assisting in literature retrieval, data analysis, knowledge synthesis, and other tasks [14]. Students who received AI-assisted instruction exhibited improved proficiency in multiple aspects of writing, including organization, coherence, grammar, and vocabulary [15]. Additionally, AI-mediated instruction can positively impact English learning achievement and self-regulated learning [16]. LLMs can also perform language translation [13, 17]. Moreover, they can automatically evaluate and score the quality of medical writing and provide suggestions for improvement [18]. These studies indicate that incorporating large language models like ChatGPT into medical education holds promise for various advantages. However, their usage must be accompanied by careful and critical evaluation [19]. As far as we know, no research has yet evaluated the usability and effectiveness of ChatGPT in medical mini-paper writing courses in real classroom teaching scenarios.
Therefore, in this study, we introduced ChatGPT into real-world medical courses to investigate the effectiveness of employing LLMs in improving the academic writing proficiency of non-native English-speaking medical students. By collecting and analyzing data, we aim to provide evidence of the effectiveness of employing an LLM in improving the English academic writing skills of medical students, thereby facilitating better medical education and improving students’ scientific research ability and writing skills.
The research included 27 third-year medical students from the West China School of Medicine at Sichuan University, all of whom are non-native English speakers. They had concluded their fundamental medical coursework but had not yet embarked on specialized subjects. Students who failed to fulfill the requisite homework assignments were excluded.
Initial Stage: The task involved composing an English academic paper in accordance with the stipulations of English thesis education. Considering the students’ junior academic standing, the composition of a discussion section in paper was not mandated. Each student was tasked with authoring a concise, “mini paper.”
Experimental Phase: Upon completing their individual “mini papers,” students first submitted them under the label “group without ChatGPT.” They then engaged with ChatGPT-3.5 for a period of two weeks to refine their English academic manuscripts, after which the revised mini papers were resubmitted under the designation “group with ChatGPT.” Alongside this resubmission, students completed a questionnaire regarding their experience with ChatGPT. The questionnaire was administered in Mandarin, the language commonly used in the research context, and was developed through a thorough discussion within our teaching and research group. Two students who failed to meet the stipulated submission deadline were excluded from the study.
All mini papers underwent evaluation and scoring based on a standardized scoring criterion. The assessment process encompassed three distinct approaches. Firstly, two teachers independently scored each mini paper using a blind review technique, and the final score was determined by averaging the two assessments. Secondly, scoring was performed using ChatGPT-3.5. Lastly, scoring was conducted using ChatGPT-4.
Evaluation Criteria: The scoring was composed of three dimensions: structure, logic, and language, with each dimension carrying a maximum of 20 points, culminating in a total of 60 points. The scores for each section were categorized into four tiers: 0–5 points (Fail), 6–10 points (Below Average), 11–15 points (Good), and 16–20 points (Excellent). The minimum unit for deduction was 0.5 points.
Structure emphasizes the organization and arrangement of the paper. It ensures that the content is placed in the appropriate sections according to the guidelines commonly found in academic journals. Logic refers to the coherence and progression of ideas within the paper. The logical flow should be evident, with each section building upon the previous ones to provide a cohesive argument. A strong logical framework ensures a systematic and well-supported study. Language refers to the correctness and proficiency of English writing. Proper language expression is essential for effectively conveying ideas and ensuring clear communication, and it makes the paper more readable and comprehensible to the intended audience.
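As a concrete reading of the rubric above, the tier boundaries and the 60-point total can be expressed in a few lines of code. This is an illustrative sketch only: the function names are ours, and the handling of half-point scores that fall between the stated bands (e.g., 5.5 or 10.5) is our assumption, not something the rubric specifies.

```python
def tier(score: float) -> str:
    """Map one dimension's score (0-20, in 0.5-point steps) to its rubric tier.

    Assumption: half-point scores falling between the stated bands
    (e.g., 5.5, 10.5, 15.5) are assigned to the higher tier.
    """
    if not 0 <= score <= 20:
        raise ValueError("each dimension is scored 0-20")
    if score <= 5:
        return "Fail"
    if score <= 10:
        return "Below Average"
    if score <= 15:
        return "Good"
    return "Excellent"


def total_score(structure: float, logic: float, language: float) -> float:
    """Overall score out of 60: the sum of the three 20-point dimensions."""
    return structure + logic + language
```

For example, a paper scored 16 for structure, 15.5 for logic, and 15 for language would total 46.5 of 60, close to the pre-ChatGPT group mean reported in the results.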
Experience questionnaire for ChatGPT: The questionnaire comprised 31 questions, detailed in the attached appendix.
The Kruskal-Wallis rank sum test was used to compare students’ baseline scores before and after using ChatGPT. A paired t-test was used to analyze the impact of ChatGPT on the improvement of students’ assignment quality (manual grading). Univariate regression analysis was conducted to investigate the extent of improvement in assignment quality attributable to ChatGPT. Because previous studies have shown discrepancies between males and females in language learning and language-related skills, we applied gender correction techniques, i.e., statistical adjustments to accommodate these gender variations, to mitigate any potential biases [20, 21, 22]. The questionnaire was distributed and collected using the Wenjuanxing platform (Changsha Ran Xing Science and Technology, Shanghai, China; https://www.wjx.cn).
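As a minimal sketch of the paired comparison described above (not the authors' actual R/Empower pipeline, and omitting the Kruskal-Wallis test, the regression analysis, and the gender adjustment), the paired t statistic on matched before/after scores can be computed from the per-student differences:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t-test: t statistic and degrees of freedom for matched scores.

    Each student contributes one (before, after) pair; the test is run on
    the per-student score differences, so inputs must be aligned.
    """
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    se = stdev(diffs) / sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se, n - 1
```

The resulting t statistic is then referred to a t distribution with n − 1 degrees of freedom to obtain the p-value (e.g. via `scipy.stats.t.sf` in practice).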
Statistical analyses were performed using the R software package (version 4.2.0, The R Foundation, Boston, MA, USA), GraphPad Prism 9 (GraphPad Software, CA, USA), and Empower (X&Y Solutions Inc., Boston, MA, USA) [23].
Ultimately, the study included 25 participants, with two students being excluded due to late submission of their assignments. These participants were all third-year undergraduate students, including 14 males (56%) and 11 females (44%). The “group without ChatGPT” consisted of 25 participants who wrote mini papers with an average word count of 1410.56 ± 265.32, cited an average of 16.44 ± 8.31 references, and received a manual score of 46.45 ± 3.59. In contrast, the “group with ChatGPT” of 25 participants produced mini papers with an average word count of 1406.52 ± 349.59, cited 16.80 ± 8.10 references on average, and achieved a manual score of 50.68 ± 2.03. Further details are available in Table 1 .
In terms of manual scoring, medical students demonstrated a significant improvement in the quality of their assignments in the dimensions of logic, structure, language, and overall score after using ChatGPT, as depicted in Fig. 1 .
Using ChatGPT improved the quality of students’ academic papers. A statistical analysis of the manual scoring showed that the quality of students’ academic papers improved after using ChatGPT for revision in terms of structure, logic, language, and overall score. The results showed statistical significance. *** p < 0.001, **** p < 0.0001
We also conducted a univariate analysis of the impact of ChatGPT on medical students’ academic paper writing across all scoring methods. The results indicated significant improvement in all manual scores and those evaluated by ChatGPT-3.5 for paper structure, logic, language, and total score (all p < 0.05). Papers assessed by ChatGPT-4 also showed significant improvements in structure, logic, and total score (all p < 0.05). Although the language scores of papers evaluated by ChatGPT-4 did not show a significant difference, a trend of improvement was observed (β 1.02, 95% confidence interval (CI) -0.15, 2.19, p = 0.1). After adjusting for gender, multivariate regression analysis yielded similar results, with significant improvements in all dimensions of scoring across all methods, except for the language scores evaluated by ChatGPT-4. The total manual scoring of students’ papers improved by 4.23 (95% CI 2.64, 5.82) after revisions with ChatGPT, ChatGPT-3.5 scores increased by 4.82 (95% CI 2.47, 7.17), and ChatGPT-4 scores by 3.84 (95% CI 0.83, 6.85). Further details are presented in Table 2.
Additionally, we investigated whether ChatGPT could assist teachers in assignment assessment. The results showed significant differences between the scores given by ChatGPT-3.5 and manual grading, both for the group with ChatGPT and the group without. Interestingly, the scores from ChatGPT-4 were not significantly different from human grading, which suggests that ChatGPT-4 may have the potential to assist teachers in reviewing and grading student assignments (Fig. 2).
Potential of ChatGPT assisting teachers in evaluating papers. The results showed a statistically significant difference between the ChatGPT-3.5 scores and the manual scores, both for the unrevised mini papers (left) and the mini papers revised using ChatGPT (right). However, there was no statistically significant difference between the ChatGPT-4 scores and the manual scores, which means that ChatGPT-4 might be able to replace teachers in scoring in the future. ns: not significant, *** p < 0.001, **** p < 0.0001
Among the 25 valid questionnaires, social media emerged as the primary channel through which participants became aware of ChatGPT, accounting for 84% of responses. This was followed by recommendations from acquaintances and requirements from schools/offices, each selected by 48% of participants. News media accounted for 44%. (Attachment document)
Regarding the purpose of using ChatGPT (multiple responses allowed), 92% used it mainly to enhance homework quality and improve writing efficiency. 68% utilized ChatGPT for knowledge gathering. 56% employed ChatGPT primarily to improve their language skills. (Attachment document)
In the course of the study, the most widely used feature of ChatGPT in assisting with academic paper writing was English polishing, chosen by 100% of the students, indicating its widespread use for improving the language quality of their papers. Generating outlines and format editing were also popular choices, with 64% and 60% using these features, respectively. (Attachment document)
When asked what they would use ChatGPT for, 92% of participants considered it as a language learning tool for real-time translation and grammar correction. 84% viewed ChatGPT as a tool for assisting in paper writing, providing literature materials and writing suggestions. 76% saw ChatGPT as a valuable tool for academic research and literature review. 48% believed that ChatGPT could serve as a virtual tutor, providing personalized learning advice and guidance. (Attachment document)
Regarding attitudes towards the role of ChatGPT in medical education, 24% of participants had an optimistic view, actively embracing its role, while 52% had a generally positive attitude, and 24% held a neutral stance. This indicates that most participants viewed the role of ChatGPT in medical education positively, with the remainder neutral and none pessimistic. (Attachment document)
Among the participants, when asked about the limitations of ChatGPT in medical education, 96% acknowledged the challenge in verifying the authenticity of information; 72% noted a lack of human-like creative thinking; 52% pointed out the absence of clinical practice insights; and 40% identified language and cultural differences as potential issues. (Attachment document)
The results of the participants’ two-week unrestricted use of ChatGPT to enhance their assignments indicated a noticeable improvement in the quality of student papers. This suggests that large language models could serve as assistive tools in medical education by potentially improving the English writing skills of medical students. Furthermore, the comparative analysis revealed that the ChatGPT-4 model’s evaluations showed no statistical difference from teachers’ manual grading. Therefore, AI might have prospective applications in certain aspects of teaching, such as grading assessments, providing significant assistance to manual efforts.
The questionnaire results indicate that ChatGPT can serve as an important educational tool, beneficial in a range of teaching contexts, including acting as an online classroom Q&A assistant, serving as a virtual tutor, and facilitating language learning [24]. ChatGPT’s expansive knowledge base and advanced natural language processing capability enable it to answer students’ inquiries effectively and to offer valuable literature resources and writing advice [25]. For language learning, it offers real-time translation and grammar correction, aiding learners in improving their language skills through evaluation and feedback [26]. ChatGPT can also deliver personalized educational guidance based on individual student needs, enhancing adaptive learning strategies [27]. Furthermore, in this study, the positive questionnaire feedback on the use of ChatGPT for polishing the English of academic papers, as well as for generating paper outlines and formatting, underscores its acceptance and recognition among students. The evaluation results across the three dimensions reflect a keen focus on enhancing the structural and formatting quality of papers, demonstrating the large AI language model’s teaching efficacy in undergraduate education.
In the questionnaire assessing ChatGPT’s accuracy and quality, 48% of respondents indicated satisfaction with its performance. However, it’s important to consider that the quality and accuracy of responses from any AI model, including ChatGPT, can be influenced by various factors such as the source of data, model design, and training data quality. These results, while indicative, require deeper research and analysis to fully understand the capabilities and limitations of ChatGPT in this field. Furthermore, ongoing discussions about ethics and data security in AI applications highlight the need for continued vigilance and improvement [ 28 ]. Overall, while ChatGPT shows promise in medical education, it is clear that it has limitations that must be addressed to better serve the needs of this specialized field.
Manual grading can be a time-consuming task for teachers, particularly when dealing with a large number of assignments or exams. ChatGPT-4 may provide support to teachers in the grading process, which could free up their time, allowing them to focus on other aspects of teaching, such as providing personalized feedback or engaging with students. However, it may not replace the role of teachers in grading. Teachers possess valuable expertise and contextual knowledge that go beyond simple evaluation of assignments. They consider factors such as student effort, creativity, critical thinking, and the ability to convey ideas effectively. These aspects might be challenging for an AI model to fully capture and evaluate. Furthermore, the use of AI in grading raises important ethical considerations. It is crucial to ensure that the model’s grading criteria align with educational standards and are fair and unbiased.
Despite the potential benefits of using ChatGPT in medical education, it also has limitations, such as language barriers and cultural differences [29, 30]. When prompted in different languages, ChatGPT may have difficulty understanding and generating accurate responses. Medical terms and concepts vary across languages, and even slight differences in translation can lead to misunderstandings. Medical education is also influenced by cultural factors. Different cultures have different communication styles, which can affect the way medical information is exchanged. Recognizing and respecting the diversity of cultural perspectives is crucial for providing patient-centered care, and it should be an important part of medical education, which ChatGPT does not excel at. The model may struggle with translating non-English languages, impacting its effectiveness in a global medical education context. Additionally, while ChatGPT can generate a vast amount of text, it lacks the creative thinking and contextual understanding inherent to human cognition, which can be crucial in medical education. Another concern is the authenticity and credibility of the information generated by ChatGPT [31, 32]. In medical education, where accuracy and reliability of knowledge are paramount, the inability to guarantee the truthfulness of the information poses a significant challenge [32, 33, 34].
These limitations of ChatGPT in medical education may be addressed and potentially rectified with updates and advancements in AI models. For instance, in this study, the scoring results showed no statistical difference between the ChatGPT-4 model and manual grading, unlike the significant discrepancies observed with the ChatGPT-3.5 model. This suggests that ChatGPT-4 has improved capabilities to assist teachers with manual grading, demonstrating greater intelligence and more human-like understanding than the ChatGPT-3.5 model. Similar findings have been noted in other research, highlighting the advancements from version 3.5 to 4. For example, there was clear evidence that version 4 achieved better results than version 3.5 in professional knowledge exams in disciplines such as orthopedics [35], dermatology [36], and ophthalmology [37].
This study aimed to explore the use of ChatGPT in enhancing English writing skills among non-native English-speaking medical students. The results showed that the quality of students’ writing improved significantly after using ChatGPT, highlighting the potential of large language models in supporting academic writing by enhancing structure, logic, and language skills. Statistical analysis indicated that ChatGPT-4 has the potential to assist teachers in grading. As a pilot study in this field, it may pave the way for further research on the application of AI in medical education. This new approach of incorporating AI into English paper writing education for medical students represents an innovative research perspective. This study not only aligns with the evolving landscape of technology-enhanced learning but also addresses specific needs in medical education, particularly in the context of academic writing. In the future, AI models should be more rationally utilized to further enhance medical education and improve medical students’ research writing skills.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, Scales N, Tanwani A, Cole-Lewis H, Pfohl S, et al. Large language models encode clinical knowledge. Nature. 2023;620(7972):172–80.
Tamkin A, Brundage M, Clark J, Ganguli D. Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models. ArXiv 2021, abs/2102.02503.
Korngiebel DM, Mooney SD. Considering the possibilities and pitfalls of generative pre-trained transformer 3 (GPT-3) in healthcare delivery. NPJ Digit Med. 2021;4(1):93.
Zong H, Li J, Wu E, Wu R, Lu J, Shen B. Performance of ChatGPT on Chinese national medical licensing examinations: a five-year examination evaluation study for physicians, pharmacists and nurses. BMC Med Educ. 2024;24(1):143.
ChatGPT. Optimizing Language Models for Dialogue [ https://openai.com/blog/chatgpt/ ]
Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepano C, Madriaga M, Aggabao R, Diaz-Candido G, Maningo J, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198.
Sallam M. ChatGPT Utility in Healthcare Education, Research, and practice: systematic review on the promising perspectives and valid concerns. Healthc (Basel) 2023, 11(6).
Fijacko N, Gosak L, Stiglic G, Picard CT, John Douma M. Can ChatGPT pass the life support exams without entering the American heart association course? Resuscitation 2023, 185:109732.
Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D. How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of Large Language Models for Medical Education and Knowledge Assessment. JMIR Med Educ. 2023;9:e45312.
Wang W. Medical education in China: progress in the past 70 years and a vision for the future. BMC Med Educ. 2021;21(1):453.
Wu C, Zhang YW, Li AW. Peer feedback and Chinese medical students’ English academic writing development: a longitudinal intervention study. BMC Med Educ. 2023;23(1):578.
Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Brief Bioinform 2022, 23(6).
Biswas S. ChatGPT and the future of Medical writing. Radiology. 2023;307(2):e223312.
Li J, Tang T, Wu E, Zhao J, Zong H, Wu R, Feng W, Zhang K, Wang D, Qin Y et al. RARPKB: a knowledge-guide decision support platform for personalized robot-assisted surgery in prostate cancer. Int J Surg 2024.
Song C, Song Y. Enhancing academic writing skills and motivation: assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Front Psychol. 2023;14:1260843.
Wei L. Artificial intelligence in language instruction: impact on English learning achievement, L2 motivation, and self-regulated learning. Front Psychol. 2023;14:1261955.
Panayiotou A, Gardner A, Williams S, Zucchi E, Mascitti-Meuter M, Goh AM, You E, Chong TW, Logiudice D, Lin X, et al. Language Translation Apps in Health Care settings: Expert Opinion. JMIR Mhealth Uhealth. 2019;7(4):e11316.
Veras M, Dyer JO, Rooney M, Barros Silva PG, Rutherford D, Kairy D. Usability and efficacy of Artificial Intelligence Chatbots (ChatGPT) for Health sciences students: protocol for a crossover randomized controlled trial. JMIR Res Protoc. 2023;12:e51873.
Jeyaraman M, K SP, Jeyaraman N, Nallakumarasamy A, Yadav S, Bondili SK. ChatGPT in Medical Education and Research: a Boon or a bane? Cureus 2023, 15(8):e44316.
Saxena S, Wright WS, Khalil MK. Gender differences in learning and study strategies impact medical students’ preclinical and USMLE step 1 examination performance. BMC Med Educ. 2024;24(1):504.
D’Lima GM, Winsler A, Kitsantas A. Ethnic and gender differences in first-year college students’ goal orientation, self-efficacy, and extrinsic and intrinsic motivation. J Educational Res. 2014;107(5):341–56.
Kusnierz C, Rogowska AM, Pavlova I. Examining gender differences, personality traits, academic performance, and motivation in Ukrainian and Polish students of Physical Education: a cross-cultural study. Int J Environ Res Public Health 2020, 17(16).
Empower (X&Y Solutions Inc., Boston, MA) [https://www.empowerstats.com]
Futterer T, Fischer C, Alekseeva A, Chen X, Tate T, Warschauer M, Gerjets P. ChatGPT in education: global reactions to AI innovations. Sci Rep. 2023;13(1):15310.
Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT - reshaping medical education and clinical management. Pak J Med Sci. 2023;39(2):605–7.
Deng J, Lin Y. The Benefits and Challenges of ChatGPT: An Overview. Frontiers in Computing and Intelligent Systems 2023.
Baidoo-Anu D, Owusu Ansah L. Education in the era of Generative Artificial Intelligence (AI): understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN Electron J 2023.
Preiksaitis C, Rose C. Opportunities, challenges, and Future Directions of Generative Artificial Intelligence in Medical Education: scoping review. JMIR Med Educ. 2023;9:e48785.
Albrecht UV, Behrends M, Schmeer R, Matthies HK, von Jan U. Usage of multilingual mobile translation applications in clinical settings. JMIR Mhealth Uhealth. 2013;1(1):e4.
Beh TH, Canty DJ. English and Mandarin translation using Google Translate software for pre-anaesthetic consultation. Anaesth Intensive Care. 2015;43(6):792–3.
Haleem A, Javaid M, Singh RP. An era of ChatGPT as a significant futuristic support tool: a study on features, abilities, and challenges. BenchCouncil Trans Benchmarks Stand Evaluations 2023.
Haque MU, Dharmadasa I, Sworna ZT, Rajapakse RN, Ahmad H. I think this is the most disruptive technology: Exploring Sentiments of ChatGPT Early Adopters using Twitter Data. ArXiv 2022, abs/2212.05856.
Cooper G. Examining Science Education in ChatGPT: an exploratory study of Generative Artificial Intelligence. J Sci Edu Technol. 2023;32:444–52.
Yu C, Zong H, Chen Y, Zhou Y, Liu X, Lin Y, Li J, Zheng X, Min H, Shen B. PCAO2: an ontology for integration of prostate cancer associated genotypic, phenotypic and lifestyle data. Brief Bioinform 2024, 25(3).
Massey PA, Montgomery C, Zhang AS. Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident performance on Orthopaedic Assessment examinations. J Am Acad Orthop Surg. 2023;31(23):1173–9.
Lewandowski M, Lukowicz P, Swietlik D, Baranska-Rybak W. An original study of ChatGPT-3.5 and ChatGPT-4 dermatological knowledge level based on the Dermatology Specialty Certificate examinations. Clin Exp Dermatol; 2023.
Teebagy S, Colwell L, Wood E, Yaghy A, Faustina M. Improved performance of ChatGPT-4 on the OKAP examination: a comparative study with ChatGPT-3.5. J Acad Ophthalmol (2017). 2023;15(2):e184–7.
The authors gratefully thank Dr. Changzhong Chen, Chi Chen, and Xin-Lin Chen (EmpowerStats X&Y Solutions, Inc., Boston, MA) for providing statistical methodology consultation.
This work was supported by the National Natural Science Foundation of China (32070671 and 32270690), and the Fundamental Research Funds for the Central Universities (2023SCU12057).
Jiakun Li, Hui Zong and Erman Wu contributed equally to this work.
Department of Urology and Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, 610041, China
Jiakun Li, Hui Zong, Erman Wu, Rongrong Wu, Zhufeng Peng, Jing Zhao, Lu Yang & Bairong Shen
West China Hospital, West China School of Medicine, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, China
Institutes for Systems Genetics, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, 610041, China
Bairong Shen
Department of Neurosurgery, the First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830054, China
J.L., H.Z. and E.W. contributed equally as first authors of this manuscript. J.L., H.X. and B.S. were responsible for the conception and design of this study. J.L., E.W., R.W., J.Z., L.Y. and Z.P. interpreted the data. J.L., E.W., H.Z. and L.Y. were responsible for the data acquisition. J.L., H.Z. and E.W. wrote the first draft, interpreted the data, and wrote the final version of the manuscript. J.Z. was committed to the language editing of the manuscript. All authors critically revised the manuscript for important intellectual content and approved the final version of the manuscript. H.X. and B.S. contributed equally as the corresponding authors of this manuscript. All authors have read and approved the final manuscript.
Correspondence to Hong Xie or Bairong Shen .
Ethics approval and consent to participate: ethics approval was not required for this study because the research data were anonymised, and the Research Ethics Committee of West China Hospital of Sichuan University determined it was not necessary based on the study’s nature.
Not applicable (NA).
The authors declare no competing interests.
During the writing of this work the author(s) used generative AI and/or AI-assisted technologies for the purpose of English language polishing. The author(s) take responsibility for the content and intended meaning of this article.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Li, J., Zong, H., Wu, E. et al. Exploring the potential of artificial intelligence to enhance the writing of english academic papers by non-native english-speaking medical students - the educational application of ChatGPT. BMC Med Educ 24 , 736 (2024). https://doi.org/10.1186/s12909-024-05738-y
Received : 04 April 2024
Accepted : 02 July 2024
Published : 09 July 2024
DOI : https://doi.org/10.1186/s12909-024-05738-y
ISSN: 1472-6920
Researchers’ attitudes to AI vary significantly across career stage, subject area, and country. While 76% of researchers say they have used some form of AI tool in their research, our survey uncovered unexpected generational differences and polarised opinions on the impact of AI.
We recently conducted a survey of over 2,000 researchers to hear directly from the research community about how they are reacting to and using AI in their work. A statistical cluster analysis identified eight groups illustrating the spectrum of attitudes, from ‘Challengers’ (those fundamentally against AI) through to ‘Pioneers’ (those fully embracing AI).
Jessamine Hopkins , Tamsin Chamberlain , and Laura Richards work at Oxford University Press.
Last updated: 4 July 2024
Dazzle the interviewing team and land the job of your dreams by coming prepared to answer the most commonly asked research interview questions.
Read our article (which includes example answers to get your brain juices flowing) to ensure you put your best foot forward for your next research interview.
If you have set your sights on working in research, you will have to answer research interview questions during the hiring process.
Whether you are interested in working as a research assistant or want to land an academic or industry research position in your chosen field, confidently answering research interview questions is the best way to showcase your skills and land the job.
Designed to be open-ended, research interview questions give your interviewer a chance to:
Get a better understanding of your research experience
Explore your areas of research expertise
Determine if you and your research are a good fit for their needs
Assess if they have the required resources for you to conduct your research effectively
If you want to crush an upcoming interview for a research position, practicing your answers to commonly asked questions is a great place to start.
Read our list of research interview questions and answers to help you get into the pre-interview zone (and, hopefully, land that position!).
General research questions are typically asked at the start of the interview to give the interviewer a sense of your work, personality, experience, and career goals.
They offer a great opportunity to introduce yourself and your skills before you deep-dive into your specific area of expertise.
Interviewers will ask this common kickoff question to learn more about you and your interests and experience. Besides providing the needed information, you can use this question to highlight your unique skills at the beginning of your interview to set the tone.
“My research focuses on the interaction between social media use and teenager mental well-being. I’ve conducted [X number] studies which have been published in [X publications]. I love studying this topic because not only is it a pressing modern issue, it also serves a commonly overlooked population that requires and deserves additional attention and support.”
Another icebreaker, this question allows you to provide some context and backstory into your passion for research.
“After completing my undergraduate degree in mechanical engineering, I had the opportunity to work with my current mentor on their research project. After we conducted the first experiment, I had a million other questions I wanted to explore—and I was hooked. From there, I was fortunate enough to be taken on as an assistant by my mentor, and they have helped me home in on my specific research topic over the past [X years].”
Playing off the classic “What are your greatest strengths and weaknesses?” interview question, this research-specific option often appears in these types of interviews.
This can be a tricky question to answer well. The best way to approach this type of question is to be honest but constructive. This is your opportunity to come across as genuine as you talk about aspects of research that challenge you—because no one wants to hear you like everything about your work!
“My favorite part of research is speaking directly to people in our target demographic to hear about their stories and experiences. My least favorite part is the struggle to secure grants to support my work—though now I have done that process a few times, it is less daunting than when I started.”
Once the interviewer has a basic understanding of you, they will transition into asking more in-depth questions about your work.
Regardless of your level of experience, this is the portion of the interview where you can dazzle your potential employer with your knowledge of your industry and research topic to highlight your value as a potential employee.
As this is a straightforward question, make sure you have a ready list of every place your work has been published. If your work has yet to be published, mention potential future publications and any other academic writing you have worked on throughout your career.
“My research has been published in [X number of publications]. If you want to read my published work, I am happy to share the publication links or print you a copy.”
Getting into the meat and potatoes of your work, this question is the perfect opportunity to share your working process while setting clear expectations for the support you will need.
Research is a collaborative process between team members and your employer, so being clear about how you prefer to work (while acknowledging you will need to make compromises to adjust to existing processes) will help you stand out from other candidates.
“Historically, I have worked alongside a team of researchers to devise and conduct my research projects. Once we determine the topic and gather the needed resources, I strive to be collaborative and open as we design the study parameters and negotiate the flow of our work. I enjoy analyzing data, so in most cases, I take the lead on that portion of the project, but I am happy to jump in and support the team with other aspects of the project as well.”
Depending on the type of research you conduct, this question allows you to deep-dive into the specifics of your data-collection process. Use this question to explain how you ensure you are collecting the right data, including selecting study participants, filtering peer-reviewed papers to analyze, etc.
“Because my research involves collecting qualitative data from volunteers, I use strict criteria to ensure the people I interview are within our target demographic. During the interview, which I like doing virtually for convenience, I use [X software] to create transcripts and pool data to make the analysis process less time-consuming.”
Many research positions require employees to take on leadership responsibilities as they progress throughout their careers.
If this is the case for your job position, have strong answers prepared to the following questions to showcase your leadership and conflict-management skills.
Many employers are looking for candidates with leadership potential who can take on more responsibility as they grow in their careers. If you are interested in pursuing research leadership, use this question to highlight your leadership qualities.
“While I currently do not have much research leadership experience, I have worked with many wonderful mentors, and I would love to fulfill that role for the next generation of academics. Because I am quite organized and attuned to the challenges of research, I am eager to take on leadership responsibilities over time.”
Workplace conflict is always present when working with a team, so it is a common topic for research interview questions.
Despite being tricky to navigate, this type of question allows you to show you are a team player and that you know how to handle periods of interpersonal stress.
“When I'm directly involved in a disagreement with my team members, I do my best to voice my opinion while remaining respectful. I am trained in de-escalation techniques, so I use those skills to prevent the argument from getting too heated. If I am a bystander to an argument, I try to help other team members feel heard and valued while disengaging any big emotions from the conversation.”
Research is a team effort, and employers prioritize candidates who work well in teams. Describing how you support and encourage your team members is essential for crushing your research interview.
“Working in research is hard—so I have had my fair share of offering and receiving support. When I have noticed someone is struggling, I do my best to offset their workload (provided I have the space to assist). Also, because I pride myself on being a friendly and approachable person, I do my best to provide a safe, open space for my team members if they want to talk or vent about any issues.”
As the interview comes to a close, your interviewer may ask you about your aspirations in academia and research.
To seal the deal and leave a positive impression, these types of questions are the perfect opportunity to remind your interviewer about your skills, knowledge base, and passion for your work and future in research.
Many research positions require researchers to be open to exploring alternative research topics. If this applies to the role, coming prepared with topics adjacent to your current studies can help you stand out.
“While my primary interests lie in my area of study, I am also interested in exploring [X additional topics] related to my current work.”
Your employer wants to see you are interested in and invested in growing your research career with them. To scope out your aspirations (and to show you are a good match for their needs), they may ask you to detail your future career goals.
“In five years, I would love to have at least two more published projects, particularly in [X publication]. Past that, as I mature in my research career, I hope to take on more leadership roles in the next 10 to 20 years, including running my own lab or being invited to speak at conferences in my chosen field.”
As a fun hypothetical question, the “ideal world” inquiry allows you to get creative and specific about your wishes and aspirations. If you get asked this question, do your best not to limit yourself. Be specific about what you want; you never know, some of your wishes may already be possible to fulfill!
“In an ideal world, I would love to be the lead of my own research team. We would have our own working space, access to [X specific research tool] to conduct our research, and would be able to attend conferences within our field as keynote speakers.”
Now that you have reviewed the most common research interview questions, you’re ready to prepare strong, confident answers, dazzle your interviewers, and land the research job of your dreams.
Arriving prepared for your interview is a great way to reduce stress, but remember: Showcasing yourself and your passion for your research is the number one way to stand out from the other applicants and get the job.
Best of luck. You’ve got this!
Last updated: 18 April 2023