July 2024

Kasparov, Vygotsky, and ChatGPT: What a Chess Prodigy and Child Psychologist Can Teach Us about AI in Education

By Per Urlaub and Eva Dessein, Massachusetts Institute of Technology


DOI: https://www.doi.org/10.69732/AZNY2894

Introduction

As we approach the second quarter of the twenty-first century, language and intercultural education has been disrupted by two developments: the global pandemic and the proliferation of consumer-oriented AI (artificial intelligence) services. Both developments highlight our profession’s critical mission: to identify curricular approaches that enable us to create and deliver meaningful and engaging educational experiences to our students through digital technologies.

As the pandemic waned, a technological disruption entered our world in the form of generative AI, which is the topic of this essay. “The Great A.I. Awakening” is a phrase coined as early as 2016 in the title of a widely discussed journalistic feature in The New York Times Magazine (Lewis-Kraus, 2016). In this text, Lewis-Kraus predicted the disruptive nature of a massive paradigm change in computing and software development. Machine learning approaches to software engineering began to improve the pace of development and the accuracy of AI applications at previously unthinkable levels. A first glimpse of the great AI awakening came in the form of suddenly and massively improved online machine translation services such as Google Translate and DeepL between 2015 and 2020 (Lewis-Kraus, 2016; Urlaub & Dessein, 2022). The proliferation of impressive generative AI services, most notably the release of OpenAI’s large language model ChatGPT in late 2022, introduced a disruptive paradigm that has been challenging the status quo of language, writing, and humanities education. As with other potential disruptors of educational settings in the past, such as pocket calculators or spell checkers, the initial reaction among many educators was to ban generative AI, but gradually, some instructors started to rethink their pedagogical practice vis-à-vis these AI technologies (Urlaub & Dessein, 2024).

As two language educators and applied linguists, we have worked both collaboratively and independently for close to twenty years on questions that relate to the role of technology in language, literacy, and intercultural education. In those two decades, we have never witnessed as many transformations as we are seeing now. The pace of innovation makes it increasingly difficult for language teachers, learning designers, instructional technologists, and applied linguists to keep up with the latest AI technologies and to evaluate their potential affordances and limitations in the classroom. This essay does not compete in that race. Instead of focusing on the latest AI tool and assessing its impact, our text offers readers an opportunity to pause and reflect on two broader questions that we need to consider as we navigate a world with an ever-accelerating pace of technological innovation. Although our ideas are grounded in theory, this essay offers practical guidance on how to think about the role of AI technology in our classrooms, along with a sustainable template for implementing today’s generative AI technologies and future innovations in a meaningful, impactful, and ethically responsible manner.

In the first part of the text, we focus on the question of why language educators should embrace the integration of cutting-edge technologies into their classrooms. This part has two objectives: to highlight the great potential of human-machine partnership, and to help reduce anxieties among educators about using generative AI in their classrooms. The argument presented in this first part is based on insights about AI from the game of chess. These insights highlight the potential of human-machine collaboration and help us shed fears by deconstructing the persuasive and dystopian “human against machine” narrative and reframing the role of technology through a more productive and optimistic “human with machine” mindset.

After having tackled the “why” question in the first part of the essay, we focus on the “how” in the second part. Here, we argue that we must use learning theory to guide our implementation of generative AI and future technologies in our classrooms. Only thoughtful instructional designs that are guided by theory will help us create learning environments where students grow linguistically and intellectually while collaborating with generative AI technologies. To illustrate the affordances of a theory-guided approach, we sketch out basic principles of sociocultural theory and argue that these ideas, associated with the Soviet child psychologist Lev Vygotsky, can guide our efforts to deploy AI technologies in education meaningfully, effectively, and ethically. This part also demonstrates that close attention to learning theory can provide a practical framework for the design of AI-enhanced language learning environments.

Part 1: Why 

In 1997, the special-purpose supercomputer Deep Blue became the first computer system to beat a human World Chess Champion. After 12 years of development, first at Carnegie Mellon University and then at IBM, Deep Blue’s victory over Garry Kasparov is to this day considered a milestone in the history of computing and artificial intelligence (Hsu, 2002). This event made a deep impact on me (Per Urlaub) as an undergraduate student in Germany. My friends, among them a future computer engineer and a future neuroscientist, and I discussed for hours: how can a machine that we humans build and program outperform not merely an average human chess player, but the reigning World Champion? I must also admit that the fact that a machine successfully challenged humanity at a game that is considered the pinnacle of human intellect and intuition was very liberating. This unexpected outcome destabilized the notion of genius that is to this day so deeply anchored in German culture and society. The fact that the genius of a chess grandmaster could be replicated by a huge, ugly, gray cabinet filled with microchips and cables energized me as an aspiring teacher. If humans can teach a machine this high level of expertise, then we as educators had better take notice.

Thankfully, nobody listened to me 27 years ago. After all, the secret behind Deep Blue’s success was, by today’s standards, quite blunt: probabilities and processing power (Hsu, 2002). The supercomputer was simply able to statistically evaluate millions of moves at any moment of the game in response to any position on the board. Today, a chess program on a smartphone could use this trick to beat Kasparov. Kasparov, by the way, was not a graceful loser. Initially, he accused the engineers at IBM of cheating (Hsu, 2002). IBM discontinued the project and refused a rematch. However, Kasparov was able to engage the chess world and the AI community in a new quest. And this is where it gets very interesting for us as language educators.
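
To make “probabilities and processing power” concrete, the sketch below shows exhaustive game-tree search, the brute-force principle that Deep Blue scaled up to millions of positions per second. A chess engine would not fit in a few lines, so the sketch plays the simple take-away game Nim instead; the game choice, function names, and scoring are our own illustrative assumptions, not a description of IBM’s actual system.

```python
# A minimal sketch of exhaustive game-tree search, using the take-away
# game Nim (remove 1-3 sticks per turn; whoever takes the last stick wins)
# as a stand-in for chess. Illustrative only, not Deep Blue's architecture.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(sticks: int) -> int:
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if sticks == 0:
        return -1  # the previous player took the last stick and won
    # Evaluate every legal move: the machine simply plays out all futures.
    return max(-best_score(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks: int) -> int:
    """Choose the move that leaves the opponent in the worst position."""
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: -best_score(sticks - take))

print(best_move(21))  # the machine's opening move from a pile of 21 sticks
```

Even this toy version examines every reachable position before moving; Deep Blue’s feat was performing the same kind of evaluation, heavily optimized and pruned, at a scale no human can match.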

After Kasparov finally conceded that supercomputers are better at chess than humans, he became interested in exploring a new question: how would the world’s most powerful chess computer hold up against a human collaborating with a computer (Sollinger, 2018)? To explore this question, Kasparov proposed a new game variation: Centaur Chess (also known as Cyborg Chess or Advanced Chess in the international chess community). Like the mythological hybrid creature of the centaur – half-human, half-horse – competitors in this new genre of chess were hybrid teams: half-human and half-AI. But if humans are worse than computers at chess, wouldn’t a Human+AI pair be worse than a solo AI? Wouldn’t the computer’s superior processing powers be sabotaged by human intuition overriding the machine?

Over the past 20 years, chess tournaments featuring the new game variation have provided the answer to this question. A Human+AI centaur usually beats the solo human. But surprisingly, Human+AI centaurs also routinely beat today’s most sophisticated solo computers (Baraniuk, 2015). For educators reflecting on the role of AI in our classrooms, the phenomenal outcome of this second competition carries two important implications, which we describe in the following.

People are simply smarter when they collaborate with technology. This started in the stone age with primitive tools and has continued through the millennia. From writing systems via spell checkers to Wikipedia, there is a long list of innovations that were initially prohibited by educators, then reluctantly accepted, and today are embraced, because they have had a transformational impact on the ways we learn, communicate, write, and think. Technologies have advanced us as a species. Generative AI, we argue, is no different. AI is not very powerful alone, and not nearly as sophisticated as a human being collaborating with the technology. We learned from the centaur chess competitions that a mix of intuitive human intelligence and probabilities-driven artificial intelligence surpasses either one alone. But human-machine collaboration also makes AI applications safer, because as we collaborate with the system, we supervise and critically interrogate the quality and truth value of its output. This can only happen if we learn to partner with technology, and it is critical that young people learn in school how to become effective and ethical users of AI and how to partner with technologies.

Technology works for us and with us – not against us. Many language educators remain reluctant to allow the use of AI in their classrooms. We believe that some reservations arising from ethical considerations are valid and important. But we also see that much of the anxiety vis-à-vis AI in the general population is rooted in fears that powerful machines that appear smarter than us have the power to replace us in the office, behind the wheel of a car, or in the classroom. Educators are not immune to this replacement narrative, and as a result many fear that AI might make them redundant, as the new technologies appear to endow humans with competencies that traditionally require years and years of education. For example, instructors in writing programs fear that ChatGPT will eventually replace the need for students to develop advanced writing skills in their classes. Language educators fear that machine translation services will erode both student motivation and public funding for language education. Even before having had the chance to engage with these AI services, some administrators responded to educators’ calls to ban the technologies.

We argue that many of the anxieties among educators are rooted in the fact that the “Human against Machine” narrative has far more traction than the “Human with Machine” narrative. Many Americans over the age of 45, even if they are not particularly interested in chess or technology, have heard about Deep Blue. The original competitions between Kasparov and Deep Blue were highly publicized, and media coverage turned the machine into a global celebrity at the turn of the millennium. In contrast, few people outside the chess community have heard about centaur chess competitions. Like the general public, educators are affected by these technophobic narratives and by a popular imagination that endows machines with the power to replace us.

We argue that in order to think productively about the role of technology in education, we have to overcome the negative “Human against Machine” narrative and adopt a more positive “Human with Machine” mindset as our guiding model for the effective, impactful, and ethical use of innovative technologies in our classrooms. Once one accepts the merits of a “Human with Machine” narrative, the threat starts to disappear. We realize that if we accept the technology as a partner, we can engage in higher-order problem solving through human-machine collaboration. The same happened with the pocket calculator. The new technology did not replace math education; it augmented and enriched the learning environment. Rather than spending many hours on tedious arithmetic on paper, teachers and their students could shift their time from accuracy-centric skill development towards problem-solving through human-machine collaboration. Technology is not the adversary anymore. It is a partner.

Part 2: How  

Innovative educators who are open to or even enthusiastic about the integration of new technologies into their classrooms sometimes struggle with their implementation. They want to make sure that their learning designs advance students’ linguistic and literacy development. They want to see their students grow not merely as users of a new technology, but also as independent users of the language. They legitimately fear that, if left unguided, students may delegate to the technology tedious but necessary tasks that are central to the learning process. Such a use of the technology would sabotage learning.

Due to the pace of innovation, our profession unfortunately has not had the chance to provide much guidance on principles for how teachers can integrate generative AI technologies into their classes in a way that enhances the learning process. At conferences, we often hear two major arguments for using AI technologies in the classroom: (1) students use them anyway, and (2) students find new technologies engaging. We do not want to dispute these arguments, but we believe they are not the whole story, and we argue that they are not a very constructive way to think about how exactly technologies should enter our classrooms. If this is the only guidance that teachers have, the integration of technology in educational contexts often results in somewhat flashy, surface-level solutions that neglect deeper understanding and meaningful implementation. It results in instructors trying out the new technology in the hope of making tedious but necessary aspects of learning a little more exciting. We believe we need to aim higher.

We argue that to have a truly positive, transformational impact on our learning environments, we need to design learning opportunities where students do not use the technology simply to delegate tasks. We must aim to design classrooms that help students grow through the use of technology, both as effective and ethical collaborators with technology and as users of a language independent of any technology. This is of course an ideal, and it is admittedly a monumental challenge, but we argue that we should aim for this goal, and that it would help us tremendously if we contextualized our use of technology in the classroom through theories of learning.

In our view, sociocultural theory, and in particular Lev Vygotsky’s model of the zone of proximal development and his principle of scaffolding, strongly resonates with what we consider a responsible and powerful integration of AI into our learning environments.

The true depth and relevance of Lev Vygotsky’s ideas were discovered in the West only decades after the Russian child psychologist’s death 90 years ago. Experts in a wide spectrum of fields, from the learning sciences to human development, consider him the father of sociocultural theory. This framework emphasizes the role that environment and culture play in how humans develop and grow cognitively, emotionally, and even in their motor skills over their lives (Wertsch, 1986; Vygotsky, 1994). Vygotsky understood interaction between the individual and their environment as the central mechanism of the developmental process (Daniels, 2001; Kozulin, 2003). Vygotsky’s ideas have had a significant impact on our understanding of the second language acquisition process as well as on instructional design in foreign language classrooms (Lantolf, 1998; Kinginger, 2001; Kinginger, 2002).

Vygotsky’s model of the Zone of Proximal Development postulates that there are three kinds of tasks that our environment demands from us as we venture through the world: (1) tasks that we can accomplish individually; (2) tasks that we cannot accomplish at all; and (3) tasks that we can accomplish only through interaction with a parent, a peer, or a teacher who guides us through them. According to Vygotsky, task environments (1) and (2) do not provide learning opportunities. They simply represent tasks we either can do or cannot do. Learning only happens in the third scenario. In this constellation, an individual finds themself in an environment in which they encounter a task that is too difficult to accomplish alone. It is not completely impossible to accomplish the task, but it needs to be tackled collaboratively, through interaction with a parent, a teacher, or a peer. In this environment, which is the Zone of Proximal Development, the individual receives guidance and grows through interaction with an expert. This interaction is what Vygotsky calls scaffolding. Vygotsky famously stated, “What a child can do in cooperation today, he can do alone tomorrow” (Vygotsky, 1962: 104).

Can generative AI offer scaffolding and engage a learner in the Zone of Proximal Development? Yes, we believe generative AI can do that! Not only do we believe that this is possible in carefully designed learning environments, but we go further and argue that the creation of a scaffolding relationship in the Zone of Proximal Development should be the principal objective in any situation where students are asked to use technology in language education. If this goal is not on our minds as teachers, we miss an opportunity and risk using new technologies in novelty-driven ways, merely for the sake of using them.

If we assume that AI can provide an individual with the kind of scaffolding partnership that is conventionally provided by a human, does this also mean that we as teachers become replaceable by an algorithm? Certainly not! After all, we are not suggesting that we should teach in this manner all the time. Care, encouragement, warmth, emotional rapport, and trust are human dimensions at the core of scaffolding relationships. These aspects are not yet replaceable by a machine, a sentiment that resonates with Beals’ (2024) observation that machines do not have any lived experiences. What we are suggesting in this article is that if we use generative AI in our teaching, we should aim at designing scenarios that simulate scaffolding interactions in the zone of proximal development, where students grow by collaboratively tackling tasks and solving problems, as the sketch below illustrates. If teachers are not able to design such learning environments, they probably serve their students better by not using ChatGPT and similar technologies in their classrooms.
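
To make this design principle tangible, here is a minimal sketch that configures a chat model as a scaffolding partner: it withholds solutions and instead guides the learner with questions, one way to approximate an interaction in the zone of proximal development. It assumes the OpenAI Python client; the model name, the prompt wording, and the three-attempt rule are our own assumptions, offered as a starting point rather than a prescribed implementation.

```python
# A minimal sketch of a scaffolding-oriented tutor, assuming the OpenAI
# Python client (pip install openai). Model name and prompt wording are
# illustrative assumptions, not a prescribed implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system prompt operationalizes Vygotskian scaffolding: the model must
# not complete the task for the learner, only guide them through it.
SCAFFOLDING_PROMPT = """You are a language tutor working with an
intermediate learner of German. Never translate or correct a sentence
outright. Instead, point out where a problem lies, ask one guiding
question at a time, and let the learner attempt the repair themselves.
Offer the solution only after three unsuccessful attempts."""

def tutor_turn(history: list[dict], learner_message: str) -> str:
    """Send one learner turn and return the tutor's scaffolding move."""
    history.append({"role": "user", "content": learner_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "system", "content": SCAFFOLDING_PROMPT}, *history],
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
# A typical learner error (auxiliary "habe" instead of "bin"):
print(tutor_turn(history, "Ich habe gestern ins Kino gegangen."))
```

The pedagogical work happens entirely in the system prompt: it constrains the model to keep the learner working inside the task rather than completing the task for them, which is exactly the delegation risk described above.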

Conclusion

Despite the astonishing performance leap of generative AI technologies, it is important to remind ourselves that artificial intelligence is not human intelligence. Our intelligence is based on understanding, reasoning, reflection, and emotion. It was formed and constantly evolves through a lifetime of real-world interactions in our families and communities, at our workplaces, and in schools. Large language models, on the other hand, are simply fed large quantities of text from the internet and make predictions based on probabilities. Artificial intelligence by itself is not very intelligent, but when paired with human intelligence, AI can make us smarter. A significant limitation of AI technologies in education is that students use these tools to delegate tedious but necessary tasks to machines and thus compromise their learning. To mitigate this limitation, we have made a case for using learning theories to create more meaningful and intentional implementations of AI technologies in education. To illustrate the power of such a framework, we used sociocultural theory. We are sure that there are other productive ways to think about how to implement these new technologies in our classes, and we look forward to learning more about the exciting opportunities and challenges that our profession will face in the context of AI technologies in the future.
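
To ground the claim that language models simply make probability-based predictions over large quantities of text, here is a toy bigram model. It is a deliberate caricature (real large language models are neural networks trained on vastly more data), and the tiny training text is our own invention.

```python
# A toy bigram "language model": it predicts the next word purely from
# counted co-occurrence frequencies in its training text. A caricature of
# how LLMs work, illustrating only probability-based next-word prediction.
from collections import Counter, defaultdict

training_text = (
    "the student writes the essay and the teacher reads the essay "
    "and the teacher writes feedback"
)

# Count how often each word follows each other word.
follows: dict[str, Counter] = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))      # a frequent successor of "the"
print(predict_next("teacher"))  # "reads" or "writes", by raw counts
```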

References

Baraniuk, C. (2015, December 4). The cyborg chess players that can’t be beaten. British Broadcasting Corporation. https://www.bbc.com/future/article/20151201-the-cyborg-chess-players-that-cant-be-beaten

Beals, J. (2024). Grounding AI: Understanding the implications of generative AI in world language & culture education. The FLTMAG. https://fltmag.com/implications-generative-ai/

Daniels, H. (2001). Vygotsky and pedagogy. New York: Routledge Falmer.

Hsu, F.-H. (2002). Behind Deep Blue. Princeton University Press.

Kinginger, C. (2001). i + 1 ≠ ZPD. Foreign Language Annals, 34, 417–425.

Kinginger, C. (2002). Defining the zone of proximal development in US foreign language education. Applied Linguistics, 23, 240–261.

Kozulin, A. (2003). Vygotsky’s educational theory in cultural context. Cambridge, UK: Cambridge University Press.

Lantolf, J. P. (1998). Vygotskian approaches to second language research. Norwood, NJ: Ablex Publishing Corporation.

Lewis-Kraus, G. (2016, December 14). The great A.I. awakening. The New York Times Magazine. 

Sollinger, M. (2018, January 5). Garry Kasparov and the game of artificial intelligence. WGBH The World. https://theworld.org/stories/2018/01/05/garry-kasparov-and-game-artificial-intelligence

Urlaub, P., & Dessein, E. (2022). From disrupted classroom to human-machine collaboration? The pocket calculator, Google Translate, and the future of language education. L2 Journal, 14(1), 45–59.

Urlaub, P., & Dessein, E. (2024). When disruptive innovation drives educational transformation: Literacy, Pocket Calculator, Google Translate, ChatGPT. In An MIT exploration of generative AI: From novel chemicals to opera. Cambridge, MA: MIT Press. https://mit-genai.pubpub.org

Vygotsky, L.S. (1962). Thought and language. Cambridge, MA: MIT Press.

Vygotsky, L.S. (1994). The problem of the environment. In R. Van Der Veer & J. Valsiner (Eds.), The Vygotsky reader (pp. 338–354). Oxford, UK: Blackwell.

Wertsch, J.V. (1986). Introduction. In J. V. Wertsch (Ed.), Culture, communication, and cognition: Vygotskian perspectives (pp. 1–18). Cambridge, UK: Cambridge University Press.
