
“Is it Okay to Use AI for This?” A Model for Values-Based Decision Making in Language Teaching and Learning
By Kit Pribble, Wake Forest University
DOI: https://www.doi.org/10.69732/NLOX4487
The Problem: Ethical Challenges
“Is it okay to use AI for this?” is a question I have faced often over the past few years. It has been posed by anxious students trying to navigate ethical doubts or avoid a charge of academic dishonesty. I have also heard it from colleagues who are overwhelmed by the rapid spread of “generative artificial intelligence” (GAI) on campuses and by the mixed messages about this technology’s safety and effectiveness coming from administrators, university teaching centers, and an opportunistic edtech industry. It is also a question I have posed to myself. How do I, as a teacher of Russian language and literature, navigate the pervasive hype around GAI to determine its actual pedagogical potential – if such potential even exists? And how do I weigh that potential against GAI’s fundamental limitations and the serious ethical concerns it raises?
A quick note on terminology: I use the term “generative artificial intelligence” or GAI to refer to the various currently popular text-, image-, and video-generative tools – especially those accessed through a chatbot interface – that have emerged since OpenAI’s public launch of ChatGPT (then powered by GPT-3.5) in November 2022. However, it is worth noting that the use of the label “AI” to refer to these tools is contested, as their capabilities are in fact far narrower than the term “artificial intelligence” suggests. Critics have observed that the term plays into tech industry marketing strategies while also contributing to hype of both the apocalyptic and utopian varieties (Lanier, 2023; Goodlad & Stone, 2024). I use it here for the sake of clarity and convenience, as a familiar shorthand for today’s LLM-powered chatbots.
I believe the pedagogical potential of these tools in the field of foreign language (FL) study is real, though not without significant caveats (more on that below). However, the risks and harms they present must be carefully examined before language teachers and learners can make informed decisions about when to use GAI – and, critically, when not to use it.
By now, most of us are at least passingly familiar with the ethical challenges posed by large language models (LLMs), the deep learning systems upon which most currently popular GAI tools are built. I want to briefly enumerate these challenges here, as they provide important context for the decision-making model I propose below.
An especially comprehensive account of these ethical considerations can be found in Kate Crawford’s (2021) “atlas” of AI impact, which moves from the devastating environmental costs of lithium mining and cloud-based computing to the ways in which AI both facilitates and is built upon exploitative labor practices. Crawford details the bias built into AI algorithms, the flawed assumptions about the relationship between language and the object world that underlie computer vision (see also Crawford & Paglen, 2019), and the many ways AI systems have been used to consolidate and uphold state power. Unlike calculators, to which GAI is often misleadingly analogized in discussions of educational impact, AI systems are not neutral, objective tools, but rather algorithmic models that reproduce and uphold the perspectives of the state and corporate entities they serve.
Since 2022, the harms of GAI have garnered increasing public attention. A recent report from the MIT Technology Review attempts to trace the total environmental impact of AI, from building a model in a data center to the cost of a single user query. This turns out to be an impossible task, as leading tech companies are unwilling to disclose information about energy use in the name of trade secrets (O’Donnell & Crownhart, 2025). However, as the report notes, recent data from the Lawrence Berkeley National Laboratory reveals that the AI industry’s energy use has already more than doubled since 2017 and may rival that of nearly a quarter of U.S. households by 2028. Meanwhile, new information is constantly emerging about the implicit biases baked into LLMs as a result of their training data, from dialect prejudice against speakers of African American English (Hofmann et al., 2024) to the perpetuation of harmful stereotypes about disability (Gadiraju et al., 2023), to offer just two representative examples. Moreover, tech companies’ attempts to fix specific instances of bias in their systems in response to public pressure do little to address the larger underlying problem – the fact that these systems reflect and optimize the profit-maximizing interests of those already in power (Abebe & Kasy, 2021). The exploitative nature of the invisible human labor that drives these models has been well documented in pieces about America’s emerging “AI underclass” (Wong, 2023) and the low-paid tedium of data labeling (Dzieza, 2023).
Beyond these broader ethical problems, the question of GAI in teaching and learning raises its own particular set of concerns. Despite edtech industry promises that AI will “transform” or “revolutionize” education, recent studies suggest that access to GAI, while leading to improved performance in the short term, may have a detrimental effect on learning (Bastani et al., 2024), student agency (Darvishi et al., 2024), and cognitive engagement (Kosmyna et al., 2025) in the long term. Of course, research regarding the long-term implications of GAI for education is still in its early stages, and continued investigation is needed to substantiate these preliminary findings; however, it is not unreasonable to suspect that chatbot use impedes creativity and limits cognitive effort. When we outsource intellectual tasks to LLMs, we lose the beneficial friction and “productive confusion” (D’Mello et al., 2014) that make learning possible.
Another Problem: Value Misalignment for FL Teaching and Learning
Additionally, those of us in the field of FL education must consider whether LLMs are aligned with the values and goals of our profession. As I see it, there are currently three main sources of value misalignment in the use of LLMs for foreign language teaching and learning.
First, for all their apparent fluency with natural language, LLMs have no access to meaning. They rely on vector representations of the statistical associations between words captured in their training data to generate output probabilistically; this output has no grounding in semantic understanding or any internal model of the world. LLMs lack reason, judgment, identity, and embodied experience – all critical prerequisites for meaningful language use (Beals, 2024). They also cannot reflect on their own computational processes. In practice, this means they are unable to reliably explain the grammatical principles, syntactic structures, or pragmatic factors underlying their linguistic output. While earlier approaches to natural language processing (NLP) relied on rules-based methods to “teach” AI systems syntax and grammar, modern LLMs rely on statistical modeling (Ramati & Pinchevski, 2017; Stone, Goodlad & Sammons, 2024). As a result, any explanations (e.g., of grammar or syntax) they provide are themselves probabilistic, rather than derived from an underlying rules-based logic. However, because LLM-driven chatbots give the impression of communicative intent – intentionally “court[ing] the ELIZA effect” for the sake of marketability (Stone, Goodlad & Sammons, 2024) – language learners may view them as reliable tutors capable of accurately and consistently explaining concepts in areas like grammar and pragmatics.
A second alignment problem for FL education is the fact that, in “reducing all of semantics to geometry” (Mitchell, 2019, p. 195), LLMs treat the nuances and ambiguities of human language use as a problem to be solved, a complex puzzle in need of simplification. Historically, LLMs emerged from NLP techniques developed in the fields of machine translation (MT) and automatic speech recognition (Stone, Goodlad & Sammons, 2024), themselves grounded in a WWII-era cryptographic approach to language aimed at “the epistemological flattening of complexity into clean signal for the purposes of prediction” (Crawford, 2021, p. 213). In contrast, our goal as language and culture teachers is to introduce complexity and multivalence. We seek to familiarize our students with the (at times painful) process of navigating cultural and linguistic tensions, rather than effacing or simplifying those tensions for the sake of efficiency.
This brings me to the third and final alignment problem for our field: the well-documented Anglocentrism and cross-cultural homogenization embedded in most widely-used LLMs. These systems have been shown to disproportionately reproduce Western and Anglocentric norms and values (Agarwal et al., 2025; Bender et al., 2021; Cao et al., 2023; Elkins, 2024; Naous et al., 2024; Qadri et al., 2023). This homogenizing tendency presents an obvious problem for instructional contexts that seek to expose students to a diverse range of linguistic and cultural perspectives. Left unaddressed, this “technically determined” amplification of Anglocentrism by LLMs (Ledesma, 2024) risks undermining the goals of FL teaching and learning by narrowing students’ exposure to authentic and contextually appropriate instantiations of the target language and culture.
Toward a Solution: Critical Conversations
At this point, the solution to the GAI problem in FL education may seem obvious: Why not simply discourage students from using these tools altogether and return to in-class assessments that render LLMs and MT – itself not only a form of AI but “one of the earliest AI projects” (Mitchell, 2019, p. 198) – irrelevant? After all, this approach would have the added benefit of addressing another major concern I haven’t yet mentioned: the threat these tools pose to academic integrity.
A version of this approach may indeed be effective in Novice-level FL classes. During the first year of language study, an AI-free environment will likely best support the development of foundational skills and familiarization with the process of learning a new language. However, as students reach higher levels of proficiency, we will want to assign longer, multi-stage projects and writing assessments that promote higher-level thinking and exceed what can be reasonably accomplished during class time. In these scenarios, it is vital that we have conversations with students about what appropriate GAI use looks like when studying a foreign language or culture. Without guidelines, students will be left to make their own assumptions about these systems’ abilities. Moreover, if we want our students to continue studying and engaging with the target language (TL) after they have left our classrooms, then we should teach them how to navigate the GAI tools they will inevitably encounter – and to do so critically, with full awareness of these tools’ harms and limitations as well as their possible utility.
Educators have pointed out that involving students actively in the creation of GAI guidelines fosters transparency and buy-in (Kostka, Toncelli, & Fairfield, 2025). Perhaps more importantly, it demystifies these systems and encourages learner agency by prompting students to weigh the risks and benefits of LLMs in language learning tasks for themselves. One method is to conduct a class “AI audit,” in which students assess GAI effectiveness and ethical impact for completing specific tasks. An AI audit involves collectively brainstorming all the ways GAI might be used on a particular assessment – from generating images for a slideshow to providing grammar feedback on a presentation script – and then interrogating the harms and benefits before deciding together which uses are acceptable and which are not (see, for example, Estrada, 2023). At the same time, it is important to emphasize that GAI use should always be optional rather than required, so that no student feels compelled to use a technology they find morally objectionable (Conrad, 2024).
Guiding the Conversation: A GAI Use Decision-Making Flowchart
To help guide my students through the tricky process of determining whether GAI is effective, ethical, and appropriate in a given context, I created a decision-making flowchart that presents a list of key questions I want them to consider. Here is a version of the flowchart in English:

Of course, I don’t expect students to consult this flowchart every time they consider opening ChatGPT or Claude. Rather, the purpose of the flowchart is threefold: 1) to illustrate that justified use of GAI is the exception rather than the rule; 2) to familiarize students with the kinds of ethical and academic concerns they should consider when deciding whether to use GAI; and 3) to guide our class AI audits and actively engage students in determining when GAI use is appropriate in light of the many ethical challenges and value alignment problems outlined above.
The flowchart begins by asking, “Can this task be easily done with a traditional tool (e.g., a search engine, Canva)?”

A key principle of responsible GAI use is that AI should never be used for its own sake. Given the many real and potential harms of LLMs, their use is justified only in situations where they solve an existing problem or enable us to do something that would otherwise be prohibitively difficult or impossible. A good example in the FL context is using an AI chatbot as a conversation partner. Opportunities for speaking practice are limited in traditional classroom settings, especially for learners who live in environments with few proficient or native speakers of the TL. Chatbots serve as accessible conversation partners and are available 24/7, providing a potential solution to a long-standing challenge in FL education – especially in cases where the chatbot has been fine-tuned or customized by an expert in the target language and culture (Elkins, 2024; Meier, 2025). By contrast, using multimodal LLMs to generate images, graphs, or presentation slides for a project is unlikely to be justified, given the existence of less flawed and energy-intensive tools like search engines and Canva for locating or creating visual content.
If the answer to this first question is “Yes,” the chart guides the user to the outcome “Don’t use GAI.” If the answer is “No,” the user is taken to the next question: “Am I allowed to use GAI for this task (per school/work rules)?”

Although this question may seem relatively straightforward, it serves an important function by prompting users to examine institutional and discipline-specific guidelines more closely, particularly in cases where guidelines are not clearly defined.
An answer of “Yes” to this question will lead the user to question 3: “Does the task involve sensitive or private data?”

In my experience, students are prone to a surprising degree of carelessness with their own data and are often unaware of the privacy policies of the online platforms they use. While students and instructors in our field are unlikely to be dealing with highly sensitive data like medical or financial records, it is still critical to discuss the stakes of sharing personal information – for example, in roleplay conversations with a chatbot in the TL that touch on age, location, and personal background – that might be stored or accessed by third parties.
If the answer to this question is “No,” the user is guided to the fourth question: “Are accuracy and factual precision essential?”

The stochastic nature of GAI means that it is typically better suited for creative or exploratory tasks like brainstorming and open-ended language production than tasks requiring a high degree of factual accuracy, e.g., grammar explanations. Under the umbrella of “factual accuracy” I include cultural accuracy and authenticity. As noted above, LLMs sometimes reproduce stereotypes or present homogenized or culturally inappropriate content in a manner that appears authoritative. This limitation is especially damaging in a language learning context that seeks to promote intercultural competence and understanding. Tasked with giving a presentation on a cultural practice, for example, students should be guided to seek authentic images and examples rather than soliciting AI-generated materials.
If the task does require cultural authenticity or factual precision, the user is guided to a follow-up question: “Am I willing and able to fact check everything?” This question makes clear that using AI-generated or AI-augmented content means accepting responsibility for that content’s accuracy and impact.
If factual accuracy is not required, or if the user affirms that they are willing to fact check the output, then the next question is: “Does using GAI impede my learning or skill development?” In other words, by using GAI in this way, are students outsourcing a meaningful learning opportunity? To answer this question, students need to understand both the purpose of the assessment and the rationale behind its design. Discussing this question in class as it applies to a particular assessment creates a shared understanding of the assessment’s value and intent, which in turn leads to deeper engagement and a stronger sense of student buy-in.
At this point, the user is guided to the penultimate question: “Have I taken the time to understand potential harms of GAI in this context (e.g., bias, energy use)?”

While certain harms are difficult to quantify, the intent behind this question is to encourage the user to think through major harm categories and acknowledge possible risks. If a student is considering using GAI to generate a sample dialogue between two people from a particular country or to engage in a culture-specific roleplay, for example, they should consider whether the output might include ethnic, religious, or cultural stereotyping. At a more fundamental level, they should ask whether the target culture is among those disproportionately affected by AI’s extractive labor and environmental practices; using GAI to simulate these cultures without acknowledging that imbalance risks perpetuating extractive dynamics and global inequities. Likewise, a request that GAI generate a list of polite phrases in the TL to use in a specific situation (e.g., a wedding or a funeral) may result in phrases shaped by Western behavioral norms that are inappropriate for the target culture. When it comes to environmental impact, a single query of ChatGPT or Claude is more energy intensive than a Google search, while the energy required to generate a 5-second video with the AI model CogVideoX is “equivalent to riding 38 miles on an e-bike, or running a microwave for over an hour” (O’Donnell & Crownhart, 2025). Every use of GAI in language learning carries ethical considerations that deserve deliberate attention.
If the user affirms that they have taken the time to understand potential harms, they are guided to the final question: “Does using GAI in this situation align with my values?”

Having thought through the potential impacts of GAI for the task at hand, the user is encouraged to consider whether they are willing to take on the ethical responsibility of using GAI in this context.
Answering “Yes” to this final question will guide the user to the conclusion “Use [GAI] with caution.” Importantly, the flowchart’s other seven arrows lead to the outcome “Don’t use GAI” – a visual representation of the fact that GAI should be used sparingly, and only in situations where it provides a real, tangible benefit by solving an existing problem or making possible a learning opportunity that was not possible before. In situations where a non-AI-powered tool is available, where outsourcing a task to GAI deprives the user of a learning opportunity, or where value misalignment is too great, GAI use is not justified.
Conclusion: Finding a Balance
After two years of experimenting with a variety of approaches to GAI in my Russian language classes, I have settled on a hybrid framework. I prioritize closed-note, in-person oral and written assessments whenever possible, especially in the critical early stages of language study (though I think it is worth talking to students about GAI early on – its functions, its implications for language learning, and the fact that machine translation is itself a founding branch of AI). However, I also recognize that independent projects and out-of-class writing assignments are important for deeper engagement with course material, especially at the higher levels of proficiency. Banning GAI use entirely from these projects is, I believe, unrealistic, and would preclude students from engaging in a critical evaluation of LLMs’ capabilities and constraints as a language learning tool. A common theme in my students’ post-assessment reflections is their surprise at how unhelpful GAI turned out to be or how much a chatbot missed compared with instructor feedback (Pribble, 2025). This is a valuable lesson, particularly as we prepare our students to continue studying and engaging with the TL beyond our classrooms. We have a responsibility to help them understand what these tools can — and, crucially, cannot — offer.
I do not mean this in a technodeterministic way, as though the spread of GAI across all areas of human activity were inevitable; nor do I intend for the imperative of “teaching critical AI literacy” to be read as a rationale for frequent and unnecessary GAI use in instructional contexts. Rather, I think we are at a critical moment, when our students are increasingly turning to GAI chatbots for all manner of tasks (I have been told recently by several students that they open the ChatGPT app on their phones more often than they open a web browser) and urgently need guidance on GAI use and misuse across disciplines, including in FL education.
Allowing (heavily scaffolded) GAI use on a final project in my advanced Russian class this spring demonstrated how important it is to provide this guidance. Students were asked to develop and implement a detailed lesson plan in Russian to teach their classmates about a cultural artifact or practice from a Russophone country of their choice. During the brainstorming phase, I asked the students if and how they might want to incorporate GAI in their projects. I was taken aback and, frankly, a little disconcerted by how broadly the class imagined applying these tools. Several students floated the idea of using a text-to-image generator to create visual materials for their presentations, without first appearing to consider a search for authentic images. Others proposed using GAI to generate a full lesson plan from scratch – an idea they supported by arguing that this is “probably what most teachers are doing these days anyway.”
Following this brainstorming session, we used the flowchart above to conduct an audit of the proposed GAI uses, addressing misconceptions along the way. This became an opportunity not only to clarify GAI’s limitations — such as the misplaced assumption that Adobe Firefly could be trusted to produce a culturally accurate image of a Russian gzhel’ porcelain plate or a Kazakh dastarkhan, which it decidedly cannot — but also to discuss the culturally situated nature of language and the extent to which LLMs, trained on decontextualized data, might be misaligned with many of the goals of language learning. We also addressed the students’ concern that instructors might be using GAI to produce lesson plans and materials without telling them. We took a few minutes to talk about what makes a good lesson plan and why I, personally, would never rely on GAI to write one. I believe, and hope, that this conversation helped rebuild some of the trust between me and my students.
In the end, my students’ actual use of GAI on their final projects was relatively limited. Those who did choose to use it included a section in their presentations where they disclosed the use and explained their rationale: for example, using a chatbot to practice responding to questions in Russian to prepare for the open discussion portion of their presentation, or to brainstorm an interactive activity, or to condense and explain Russian-language videos and articles (in Russian) about how to crochet lace napkins. I can’t say that I find all of these uses entirely justified, but what mattered most was that the students had clearly thought through their choices and were ready to defend them using the criteria from our flowchart. At a moment when we are all still figuring out what GAI impact looks like, I would argue that this is precisely the kind of reflective and informed decision-making we should be encouraging.
References
Abebe, R. & Kasy, M. (2021). The means of prediction. Boston Review. https://www.bostonreview.net/forum/ais-future-doesnt-have-to-be-dystopian/the-means-of-prediction/
Agarwal, D., Naaman, M., & Vashistha, A. (2025). AI suggestions homogenize writing toward Western styles and diminish cultural nuances. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25), 1117, 1–21. https://doi.org/10.1145/3706598.3713564
Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI can harm learning. The Wharton School Research Paper. http://dx.doi.org/10.2139/ssrn.4895486
Beals, J. (2024). Grounding AI: Understanding the implications of generative AI in world language & culture education. The FLTMAG. https://fltmag.com/implications-generative-ai/
Bender, E., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Cao, Y., Zhou, L., Lee, S., Cabello, L., Chen, M., & Hershcovich, D. (2023). Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. Proceedings of the First Workshop on Cross-Cultural Conversations in NLP, 53–67. https://aclanthology.org/2023.c3nlp-1.7/
Conrad, K. (2024). A blueprint for an AI bill of rights for education. Critical AI, 2(1). https://doi.org/10.1215/2834703X-11205245
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven and London: Yale University Press.
Crawford, K. & Paglen, T. (2019). Excavating AI: The politics of training sets for machine learning. https://excavating.ai
Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., & Siemens, G. (2024). Impact of AI assistance on student agency. Computers & Education, 210. https://doi.org/10.1016/j.compedu.2023.104967
D’Mello, S., Lehman, B., Pekrun, R., & Graesser, A. (2014). Confusion can be beneficial for learning. Learning and Instruction, 29, 153–170. https://doi.org/10.1016/j.learninstruc.2012.05.003
Dzieza, J. (2023). AI is a lot of work. The Verge. https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots
Elkins, K. (2024). A(I) university in ruins: What remains in a world with large language models? PMLA, 139(3), 559–564. https://doi.org/10.1632/S0030812924000543
Estrada, D. (2023, May 30). How to teach AI to students (AI ethics and @ NJIT audit project, Daniel Estrada). Critical AI TEACHING INSIGHTS series. https://criticalai.org/2023/05/30/teaching-insights-how-to-teach-ai-to-students-ai-ethics-and-njit-audit-project-dr-daniel-estrada/
Gadiraju, V., Kane, S., Dev, S., Taylor, A., Wang, D., Denton, R., & Brewer, R. (2023). “I wouldn’t say offensive but…”: Disability-centered perspectives on large language models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 205–216. https://doi.org/10.1145/3593013.3593989
Goodlad, L. M. E. & Stone, M. (2024). Beyond Chatbot-K: On large language models, ‘generative AI,’ and rise of chatbots–an introduction. Critical AI, 2(1). https://doi.org/10.1215/2834703X-11205147
Hofmann, V., Kalluri, P.R., Jurafsky, D. & King, S. (2024). AI generates covertly racist decisions about people based on their dialect. Nature, 633, 147–154. https://doi.org/10.1038/s41586-024-07856-5
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint. https://www.doi.org/10.48550/arXiv.2506.08872
Kostka, I., Toncelli, R., & Fairfield, C. (2025). Red means stop and green means go: Creating AI guidelines with students. The FLTMAG. https://www.doi.org/10.69732/WNWY9538
Lanier, J. (2023). There is no A.I. The New Yorker. https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
Ledesma, E. (2024). Critical AI studies and the foreign language disciplines: What is to be done? PMLA Theories & Methodologies, 139(3), 533–540. https://doi.org/10.1632/S0030812924000567
Mecias, M. L. (2025). Navigating the AI highway: A traffic light approach to language learning. The FLTMAG. https://www.doi.org/10.69732/FGOG9332
Meier, I. (2025). DavAI: Designing a custom ChatGPT bot for Russian language learners. Russian Language Journal, 74(1). https://doi.org/10.70163/0036-0252.1400
Mitchell, M. (2019). Artificial Intelligence: A Guide for Thinking Humans. New York: Picador.
Naous, T., Ryan, M., Ritter, A., & Xu, W. (2024). Having beer after prayer? Measuring cultural bias in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 1, 16366–16393. https://aclanthology.org/2024.acl-long.862/
O’Donnell, J. & Crownhart, C. (2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech
Pribble, K. (2025) [Forthcoming]. Leveraging generative AI chatbots to assess writing in the Russian language classroom: A model portfolio assignment with chatbot feedback. In D. Pastushenkov & L. Zalaltdinova (eds.), Assessment of Russian as a Foreign Language: Unlocking Proficiency. Routledge Russian Language Pedagogy and Research.
Qadri, R., Shelby, R., Bennett, C., & Denton, R. (2023). AI’s regimes of representation: A community-centered study of text-to-image models in South Asia. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 506–517. https://doi.org/10.1145/3593013.3594016
Ramati, I. & Pinchevski, A. (2017). Uniform multilingualism: A media genealogy of Google Translate. New Media and Society, 20(7), 2550–2565. https://doi.org/10.1177/1461444817726951
Stone, M., Goodlad, L., & Sammons, M. (2024). The origins of generative AI in transcription and machine translation, and why that matters. Critical AI, 2(1). https://doi.org/10.1215/2834703X-11256853
Wong, M. (2023). America already has an AI underclass. The Atlantic. https://www.theatlantic.com/technology/archive/2023/07/ai-chatbot-human-evaluator-feedback/674805/