Articles | November 2024

Droids Don’t Teach, We Do: Addressing AI Anxiety in Language Teaching and Learning

By Joshua M. Paiz, The George Washington University


DOI: https://www.doi.org/10.69732/VFPR7648

In late 2023, I was invited by the Bahrain Teachers College and the Bahraini Ministry of Education to offer AI literacy workshops to K-12 English language teachers and teacher educators as part of a specialist exchange program under the auspices of the U.S. Department of State. Given my experience as a power user and scholar, I assumed my audience would share my enthusiasm for AI integration. After my first session, however, it was clear that skepticism about AI and its educational implications ran far deeper than I had anticipated, especially considering that the Ministry and College leadership had asked me to travel all that way to provide the training. This skepticism hindered participants’ willingness to consider my pedagogical strategies. That evening, I reflected on their repeated expressions of worry and fear: ‘I’m worried about…’, ‘It’s troubling that…’, ‘I’m afraid that…’. The atmosphere of that first session was rife with, in a word, anxiety. That realization sent me into a night of intensive reading on AI anxiety and resistance to new technology in the educational technology, engineering education, and CALL literatures.

What I found was a tool-for-thought, developed at an interdisciplinary crossroads, that could be applied to a very real-world problem. This theoretical construct offered insights into the range of emotions associated with the emergence and proliferation of intelligent systems in educational settings. First emerging in the late 2010s, the idea of “AI anxiety” describes the nervousness or fear related to adopting and integrating AI tools into daily life. This anxiety can arise from worries about the accuracy and fairness of AI algorithms; the impact AI may have on traditional professional practice, decision-making, and expertise; the impact AI may have on job satisfaction and security; the contribution of planetary-scale computing to the climate crisis; and even the influence of AI on interpersonal relationships and willingness to engage in productive struggle. This last point is central to understanding the worries voiced by many in education: productive struggle has value, as it is often an essential part of the learning process (see Crawford, 2021; Johnson & Verdicchio, 2017; Li & Huang, 2020).

Traditional Western news media, which operates 24/7 and often leans on sensationalist reporting, heightens AI anxiety by keeping audiences constantly uneasy. We’ve all seen the articles and blog posts claiming that things like college essays, part-time retail jobs, and even teaching are “doomed”. For those who want a more critical but balanced view of AI, Crawford’s (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence and Buolamwini’s (2023) Unmasking AI: My Mission to Protect What is Human in a World of Machines are required reading, and they actually do offer a critical voice in good faith.

Scholarship about AI anxiety also highlights the key role that users of new technologies play in facilitating their (eventual) adoption by a larger group. For example, the Mid-Atlantic Association for Language Learning Technology (MAALLT) regularly holds “Tech Slams.” Its recent “M.AI the Tech Slam be with You” event featured many AI-friendly presentations, covering everything from using AI to support translanguaging in language courses to using AI to help students navigate Spanish for specific purposes (SSP). It was clear that the presenters, myself included, were primarily early adopters and power users. Early adopters are individuals who are among the first to embrace and experiment with a new technology or process: think of your friend who was contract grading before contract grading was cool. Power users, on the other hand, are tech-savvy individuals who extensively use and push the limits of a new technology or process to maximize its potential: think of your one colleague who does not just know how to use generative AI, but who has fine-tuned a local model with PyTorch and runs it through a platform like GPT4All or Ollama. These groups represent key demographics in facilitating the eventual adoption of a new technology or process because they tend to take a proactive approach to change and have a high tolerance for uncertainty, which they leverage to influence their professional networks’ opinions and actions (see Bennett, 2014; Gillard et al., 2008).
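To make the “power user” end of that spectrum concrete, here is a minimal sketch of what such tinkering might look like: sending a prompt to a locally hosted model through Ollama’s documented REST API. It assumes Ollama is already installed and serving on its default port, that a model such as llama3 has been pulled beforehand, and that Python’s requests library is available; the model name and the sample prompt are purely illustrative.

```python
# A minimal sketch: querying a locally hosted model via Ollama's REST API.
# Assumes Ollama is running on its default port (11434) and that a model
# such as "llama3" has already been pulled with `ollama pull llama3`.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to a local Ollama server and return its reply."""
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # Illustrative prompt; swap in anything relevant to your own teaching.
    print(ask_local_model(
        "Suggest three low-prep speaking activities for B1 English learners."
    ))
```

Nothing here is a prescribed workflow; the point is simply that the distance between “using a chatbot” and “running your own model” is smaller than many anxious colleagues assume.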

Why AI Anxiety Matters in Language Teaching and Learning

Understanding AI anxiety is crucial because, to successfully integrate AI or help resistant teachers see its value, we need to acknowledge and address their concerns, legitimate or otherwise. By creating space to discuss these anxieties, we can facilitate better engagement. Returning to the work that I did in Bahrain with the U.S. Department of State’s English Language Specialist Program provides some additional insights. By recognizing AI anxiety as a legitimate phenomenon, I could better contextualize the hesitation and skepticism I encountered in my workshops. It became clear that to introduce AI integration strategies effectively, I first needed to acknowledge and address these underlying anxieties. This realization shifted my approach from simply presenting the potential benefits of AI to creating a space for open dialogue about concerns and fears. Considering the role of AI anxiety helped me understand the need for a holistic approach, one that accounts not just for the technological aspects of AI integration but also for its psychological and social implications for both teachers and learners.

The causes of AI anxiety in education are multifaceted and interrelated. At its core, many educators express a fear of obsolescence – a worry that AI might eventually replace human teachers altogether. This fear is not entirely unfounded, given the rapid advancements in AI technology. However, it often stems from a misunderstanding of AI’s capabilities and limitations. Perhaps related are anxieties caused by the perceived loss of control over the learning process. Many language educators pride themselves on their ability to tailor instruction to individual student needs, and there is a concern that AI-driven systems might standardize this process, removing the human touch that many consider essential to effective language teaching. 

We also cannot discuss any educational technology tool without also acknowledging the twin issues of data privacy and ethics, as both concerns also contribute significantly to AI anxiety. In an era where data breaches and misuse of personal information are frequently in the news, educators worry about the implications of collecting and analyzing vast amounts of student data. Questions arise about who has access to this data, how it is used, and what long-term consequences it might have for students.

There is also a more subtle yet pervasive cause: the fear of the unknown. AI, especially in its more advanced forms, can seem like a “black box” to many educators. The lack of transparency in how AI systems make decisions can lead to distrust and reluctance to incorporate these tools into teaching practices. This opacity is particularly concerning in education, where understanding the reasoning behind pedagogical decisions is crucial. Educators often feel a sense of powerlessness when faced with AI systems they cannot fully comprehend or control. Moreover, this fear of the unknown extends beyond the technology itself to its potential long-term impacts on learning processes and outcomes. Many teachers worry about unintended consequences of widespread AI adoption in education, fearing that we might be opening a Pandora’s box of educational and ethical challenges that we are not yet equipped to handle. Here is where collective action and voice become important: individually, there is relatively little that any single actor can achieve when pushing for more ethical and professionally sound ed tech development. Indeed, leaders in ed tech and CALL need to join their voices to demand greater transparency and alignment with professional best practices from both our institutional technology leaders and the ed tech developers in industry who create the products we may then adopt in our own classes and institutions.

Spotting AI Anxiety

How these anxieties manifest can vary widely depending on the teaching and learning context. Some researchers have found more pronounced resistance to AI integration in K-12 settings, partly due to stricter curriculum standards and concerns about age-appropriate technology use (see Nazaretsky et al., 2022; Oh & Ahn, 2024; Rinelli, 2013). Here, AI anxiety might manifest as outright rejection of AI tools or a reluctance to engage in professional development related to AI in education. Sometimes, the manifestations are more nuanced. Some instructors might grudgingly accept basic AI tools while resisting more advanced applications. Others might enthusiastically embrace AI for administrative tasks but draw a hard line at its use in actual language instruction or assessment. It is also worth noting that AI anxiety does not just affect teachers; it extends to students and even administrative staff. Students might worry about the fairness of AI-driven assessments or the potential loss of personalized attention from human instructors. Administrators, on the other hand, might grapple with concerns about the cost-effectiveness of AI implementation versus potential job losses among teaching staff.

Understanding these varied manifestations is crucial because it allows us to tailor our approaches to addressing AI anxiety. A one-size-fits-all solution is unlikely to be effective across all these different contexts and stakeholder groups. Instead, we need to develop nuanced, context-specific strategies to address these concerns and foster a more balanced and informed perspective on AI in language education.

Addressing AI Anxiety

During my time working with the teacher educators at Bahrain Teachers College and the language teachers in the Bahraini Ministry of Education, a rather clear starting point for addressing AI anxiety emerged, one with clear roots in my earlier work on inclusive instructional practices in language education. We must create, for want of a better term, a safe space for language teachers to express a variety of feelings and attitudes towards AI in the language classroom and to process those feelings in a respectful and welcoming environment. I say this because teachers may often feel that they are being told how to teach, that certain ed tech is being mandated for them, or that ed tech developers are attempting to sell them (or their institutions) a solution that does not help drive language learning and acquisition. Creating a safe space for engagement can foster an environment where teachers feel less resistant or defensive and, hopefully, more open to the ideas being presented in the training session. This is important not only to facilitate potential AI integration, but also to empower educators with the language and knowledge necessary to better advocate for themselves and their learners when faced with institutional or market pressures to adopt new technologies for the sake of staying “current,” “modern,” or “relevant.”

The first step toward doing so is to name the sense of unease that may be sitting just below the surface for some teachers by introducing them to the idea of AI anxiety. Through this, we accomplish two things. First, we give the teachers present the language with which to engage in sustained future discussions about AI in the classroom with an array of stakeholders. Second, and perhaps most importantly, we legitimize the feeling. We show them that they will not be viewed as contrarian, Luddite, or out-of-touch just because they are experiencing AI anxiety. Instead, we show them that what they are feeling is a legitimate response to transformative, disruptive change.

So, beginning a session on AI, especially training and workshop sessions, with a guided reflection can be a profoundly powerful moment for all involved. While I varied the questions to fit the group of teachers I was working with (e.g., teacher educators, vocational educators, English teachers, etc.), their core themes were consistent. They typically involved questions like: 

  • What aspects of AI in education make me most uncomfortable or anxious?
  • How do I fear AI might impact the teacher-student relationship?
  • What concerns do I have about AI’s influence on language learning and communication skills?
  • In what ways do I worry AI might affect job security in education?
  • How do I think AI could change societal values related to language and cultural exchange?

From here, it can be helpful to use a familiar interactional framework to guide discussion about AI anxiety and give teachers space to make their voices heard. Personally, I am a fan of the think-pair-share framework. I would have teachers talk with their table- or row-mates about their responses to these questions after taking a few minutes to think and write about them on their own. One could even take a more purposeful approach and make sure that each group had one AI early adopter or power user and one AI skeptic, to help create space for perspective-taking. Then, as a whole group, I would guide us in a pulse-taking activity where we tried to establish the overall mood in the room. Here, then, is the second step in addressing AI anxiety: give teachers space to reflect on and engage with it.

We have only just begun to address AI anxiety. The real work involves moving from initial reflections to a deeper, practical understanding of AI’s role in education. To truly confront AI anxiety, we need to provide hands-on experiences and foster critical discussions for teachers to explore and demystify the technology. This involves providing educators with opportunities to interact directly with AI tools relevant to their professional practice. By gaining firsthand experience, teachers can better understand the capabilities and limitations of these technologies, which often helps to dispel unfounded fears and highlight genuine areas of concern. For example, a hands-on lesson planning challenge can be a fruitful space in which teachers explore AI tools and how those tools intersect with their disciplinary expertise. It allows them to “try on” AI to see how, or whether, it will work for them. In this activity, pairs of teachers create lesson plans on the same topic, with one using AI assistance and the other using traditional methods. After comparing their results, teachers discuss the differences, surprises, and potential pitfalls they encountered. This practical exercise allows educators to directly experience AI’s capabilities and limitations in a familiar context and with trusted colleagues, fostering critical thinking about its role in lesson planning. By engaging with the technology firsthand, teachers can begin to see AI as a potential tool rather than a threat, addressing their anxieties through tangible experience and collaborative reflection.
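For facilitators who want a concrete starting point, the sketch below shows one hedged way to standardize the AI-assisted half of the challenge: a simple prompt template that both members of a pair can read before the activity begins, so the comparison stays fair. The template’s fields and wording are my own illustrative assumptions, not a prescribed format; the filled-in prompt can be pasted into whichever assistant the workshop has approved, or sent to a local model as in the earlier sketch.

```python
# A hypothetical prompt template for the AI-assisted half of the lesson
# planning challenge. Fields and wording are illustrative assumptions.
LESSON_PROMPT = """You are assisting a {level} {language} teacher.
Draft a {minutes}-minute lesson plan on "{topic}".
Include: a warm-up, two main activities, a formative assessment,
and one differentiation strategy for mixed-ability groups.
Flag any point where a teacher should verify accuracy."""

def build_lesson_prompt(language: str, level: str, topic: str, minutes: int = 50) -> str:
    """Fill in the template so paired teachers start from an identical brief."""
    return LESSON_PROMPT.format(language=language, level=level, topic=topic, minutes=minutes)

if __name__ == "__main__":
    # Example brief; swap in the topic your workshop pairs agree on.
    print(build_lesson_prompt("Spanish", "intermediate (B1)", "ordering food in a restaurant"))
```

Keeping the brief identical for both teachers is the point of the template: any differences in the resulting plans can then be attributed to the method, not the task.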

Alongside practical exploration, we should engage teachers in discussions about realistic scenarios where AI might be integrated into their classrooms. One could begin by providing teachers with a set of scenarios to discuss, such as: 

  • An AI-powered writing assistant for student essays
  • An automated translation tool for multilingual classrooms
  • A personalized AI tutor for grammar and vocabulary practice
  • AI-generated quizzes and assessments
  • A virtual AI teaching assistant for answering student questions outside class hours

To help guide teachers’ thinking as they work either alone or in pairs, a set of guiding questions can facilitate discussion and engagement. These questions can be a starting point:

  • How might [AI tool] enhance or potentially hinder student learning?
  • What ethical considerations arise from implementing this technology?
  • How could this AI application change your role as a teacher?
  • What potential challenges or unintended consequences should we anticipate?
  • What are our core values and non-negotiable principles regarding EdTech/AI adoption? 

Note that many of these questions should feel familiar, as they mirror those we asked teachers to consider when discussing AI anxiety in the first place. This approach encourages educators to consider both the potential benefits and challenges of AI implementation, grounding their reflections in concrete situations rather than abstract fears.

Moreover, offering training sessions that focus on developing skills that complement AI can be incredibly empowering. These include honing critical thinking, creative problem-solving, and effective technology integration in pedagogy. By focusing on these areas, we show teachers that their role remains crucial and adaptable in an AI-enhanced educational landscape. Additionally, focusing on these skills can model for teachers how they might engage with AI tools alongside their students, or how they might instead craft assignments that move students away from relying on AI supports. For example, teachers can be given an activity where they engage with AI tools to design personalized learning pathways for neurodiverse language learners. Participants use AI language models to generate initial suggestions for learning strategies and resources based on profiles of hypothetical neurodivergent students. The core of the activity involves critically evaluating and refining these AI-generated suggestions and combining them with teachers’ professional expertise to create comprehensive, flexible learning plans. Through this process, educators explore how AI can be leveraged to support individualized instruction while recognizing the essential role of human insight in addressing the complex needs of neurodiverse learners.

It is also vital to facilitate discussions on the ethical implications of AI in education. Here, friendly debate can be a helpful tool. In one activity that my Bahraini participants responded to quite well, teachers are presented with a series of ethical dilemmas involving AI use in language education. For instance, they might debate the use of AI-powered writing assistants in essay composition. Working in small groups, they analyze the scenario, considering various stakeholder perspectives and potential consequences. In my sessions, some groups argued that these tools can help students improve their writing skills and boost confidence, while others contended that overreliance on AI could hinder genuine language acquisition and raise concerns about academic integrity. Groups then engage in a structured debate, presenting arguments for different ethical stances on AI implementation in this context. The exercise concludes with a collaborative effort to draft guidelines for responsible AI use in their classrooms. This activity encourages deep reflection on the ethical implications of AI in education; it not only empowers teachers to make informed decisions about AI use but also equips them to guide students in responsible AI utilization, a critical skill for the future.

Lastly, establishing regular forums or channels for teachers to share their ongoing experiences, concerns, and insights about AI as they continue to engage with it can be immensely valuable. It can be particularly beneficial to create platforms for both early adopters and power users to take on leadership roles in these conversations, sharing their hard-won knowledge and contextualized experiences with their peers. Likewise, AI skeptics should be part of this dialogue in equal measure, as their voices often underscore critical areas of concern that need to be addressed when it comes to AI integration in education. This fosters a supportive community of practice where educators can learn from each other, voice concerns, and collectively navigate the evolving landscape of AI in education.

By actively engaging with AI in these ways, we can help teachers move beyond initial anxiety towards a more nuanced understanding of the role of technology in education. This approach transforms vague fears into specific, addressable concerns and enables teachers to become active participants in shaping how AI is implemented in their classrooms and broader educational contexts. In doing so, we not only address AI anxiety but also empower educators to lead the way in thoughtfully integrating AI into the future of education.

Addressing AI Anxiety with Our Students

Given the nature of our work, it’s equally important to consider how we can help our students navigate their own concerns about AI in education. In many ways, students’ anxieties may mirror those of their teachers – fears about job prospects, worries about the authenticity of their work, and uncertainties about the future of learning. However, students also bring unique perspectives and concerns to the table, often shaped by their supposed digital nativity and the looming specter of an AI-integrated workforce they may enter.

As with teachers, creating a safe space for students to express their feelings about AI is paramount. In my work with students at George Washington University and Montgomery College, I’ve found that open forums or “AI town halls” can be particularly effective. These sessions allow students to voice their concerns, ask questions, and share their experiences with AI tools in a non-judgmental environment. It’s important to validate their feelings, much as we do with teachers, showing them that their anxieties are legitimate and worth exploring. These town halls also proved useful in generating ideas for activities that can meet students where they are when it comes to AI integration and being purposeful, ethical learners. 

One such activity that came from discussions with my TESOL certificate students at Montgomery College was the “day in the life” planning exercise. This activity has resonated well with students in pilot sessions in my college composition and English for academic purposes (EAP) courses. In this activity, students are asked to imagine and map out a typical day in their future professional lives, considering how AI might be integrated into various aspects of their work. They then share these scenarios in small groups, discussing both the exciting possibilities and potential challenges they envision. This exercise not only helps students concretize their thoughts about AI but also encourages them to think critically about its implications for their future careers.

Hands-on experience with AI tools, tailored to the student context, is also vital, just as it is for educators. For instance, in language classes, I’ve had students experiment with AI writing assistants to compose essays in their target language while exploring how language use shifts for different target audiences. For example, how does our linguistic toolkit flex when communicating complex technical information to a general youth audience in North America versus a specialist audience at a Sino-American joint-venture university? Students then engage in a reflection and discussion session, considering questions such as:

  • How did using the AI tool affect your learning process?
  • What aspects of language production did the AI assist with, and what aspects required your own knowledge and skills?
  • How might reliance on such tools impact your language acquisition in the long term?

This practical engagement helps demystify AI tools and allows students to form more nuanced opinions about their use in education.
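For instructors who want to script that comparison rather than run it ad hoc, a small sketch like the one below can keep the experiment controlled: the same source text is wrapped in two prompts that differ only in the stated audience, so any shift in register is attributable to that one variable. The audiences mirror the example above; the source text and prompt wording are illustrative assumptions.

```python
# A hedged sketch of the audience-adaptation experiment described above.
# Students send the same source text to a writing assistant twice, varying
# only the audience, then compare how register, vocabulary, and structure
# shift. The source text and audience labels are illustrative assumptions.
SOURCE_TEXT = (
    "Large language models predict the next token in a sequence based on "
    "patterns learned from vast text corpora."
)

AUDIENCES = [
    "a general youth audience in North America",
    "a specialist audience at a Sino-American joint-venture university",
]

def adaptation_prompt(text: str, audience: str) -> str:
    """Build one prompt per audience so the only variable is the reader."""
    return (
        f"Rewrite the following explanation for {audience}, keeping it "
        f"accurate but adjusting tone, vocabulary, and examples:\n\n{text}"
    )

if __name__ == "__main__":
    # Print both prompts so students can paste them into their assistant.
    for audience in AUDIENCES:
        print(adaptation_prompt(SOURCE_TEXT, audience), end="\n\n")
```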

Perhaps most importantly, we need to empower students to be active participants in shaping how AI is used in their education. This could involve creating student-led committees to provide input on AI policies in schools, or encouraging students to propose innovative ways to integrate AI into their learning experiences. By giving students agency in this process, we not only address their anxieties but also prepare them to be informed and engaged citizens in an AI-augmented world.

Ultimately, addressing AI anxiety with our students is not just about alleviating fears; it’s about equipping them with the tools, knowledge, and critical thinking skills to navigate an AI-integrated future confidently. By fostering open dialogue, providing hands-on experiences, and encouraging ethical reflection, we can help our students move from anxiety to empowerment, ready to harness AI as a tool for their learning and future success.

Closing Thoughts

Addressing AI anxiety will be an important skill not just for language teacher educators and ed tech trainers, but also for classroom teachers as they engage with their own learners and, in primary/secondary settings, with parents and other stakeholders. The approach outlined above creates space for language teachers to engage with their own AI anxiety, while also modeling how to help others (read: students, parents, admin) to confront their own AI anxiety in ways that support lifelong learning and student success. Most importantly, however, in a well-structured workshop or training session, the approach above can also help teachers as they acquire their own critical AI literacy, which is a vital skill not just for work and professional development, but also for being an engaged citizen.

References

Bennett, E. (2014). Learning from the early adopters: Developing the digital practitioner. Research in Learning Technology, 22, 21453. https://doi.org/10.3402/rlt.v22.21453  

Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Random House.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Gillard, S., Bailey, D., & Nolan, E. (2008). Ten reasons for IT educators to be early adopters of IT innovations. Journal of Information Technology Education: Research, 7(1), 21–33. https://doi.org/10.28945/3004 

Johnson, D. G., & Verdicchio, M. (2017). AI anxiety [Opinion]. Journal of the Association for Information Science and Technology, 68(9), 2267-2270. https://doi.org/10.1002/asi.23867

Leffer, L. (2023). ‘AI anxiety’ is on the rise–Here’s how to manage it. Scientific American. https://www.scientificamerican.com/article/ai-anxiety-is-on-the-rise-heres-how-to-manage-it/

Li, J., & Huang, J. (2020). Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technology in Society, 63, 101410. https://doi.org/10.1016/j.techsoc.2020.101410

Nazaretsky, T., Ariely, M., Cukurova, M., & Alexandron, G. (2022). Teachers’ trust in AI‐powered educational technology and a professional development program to improve it. British Journal of Educational Technology, 53(4), 914-931.

Oh, S. Y., & Ahn, Y. (2024, July). Exploring teachers’ perception of artificial intelligence: The socio-emotional deficiency as opportunities and challenges in human-AI complementarity in K-12 education. In International Conference on Artificial Intelligence in Education (pp. 439–447). Springer Nature Switzerland.

Richardson, K. (Ed.). (2017). An anthropology of robots and AI: Annihilation anxieties and machines. Routledge.

Rinelli, K. (2013). Overcoming K–12 teacher resistance to technology and learning using M-learning (Doctoral dissertation, Keiser University).
