Red Means Stop and Green Means Go: Creating AI Guidelines with Students
By Ilka Kostka, Rachel Toncelli, and Catherine Fairfield, Northeastern University
DOI: https://www.doi.org/10.69732/WNWY9538
From “Gotcha” to Guidelines
“I’ve been ChatGPT’d!”
Any educator who has been teaching during the rise of generative artificial intelligence (GenAI) knows what this phrase means. When we are “ChatGPT’d,” we see language that sounds either overly formal (e.g., “I am committed to swiftly making up for the material I have missed and ensuring that my academic performance is not adversely affected”) or unlike a student’s own voice (e.g., “Delving into the rich tapestry of human experience, one can uncover a plethora of insights that transcend the boundaries of conventional wisdom, weaving together a symphony of knowledge that is both profound and whimsical.”). It would be much easier to overlook this kind of language and grade accordingly. However, most educators who sense AI-generated writing in student work and correspondence want to respond in some way and understand where this kind of language comes from. As instructors who teach writing, English language, and world languages courses, we have spent a lot of time thinking about exactly how to do that.
Amidst the rise of GenAI, many of us who teach writing courses have returned to a concept that runs through all of our teaching: writing is fundamentally a tool for learning, not simply for expressing knowledge. The act of writing is crucial for how students work through their ideas to form perspectives and make sense of the world. We have had to ask ourselves how students can navigate GenAI in a way that still facilitates learning and what larger purpose our courses serve in their education. We have also been thinking a lot about academic integrity since a large part of GenAI discussions includes fears about cheating and AI detection. The reality is that anyone could create nearly any kind of text using GenAI, which is a daunting idea for educators. However, the three of us wonder if there is another way and whether we could take a pedagogical approach to GenAI use instead of a punitive one. One co-author, Catherine, developed an activity that involves students in co-creating guidelines for AI use with their instructor. Rachel learned about this activity when Catherine presented at a faculty conference and then shared it with Ilka. Since then, the three authors have been using this “stoplight activity” to involve students in creating guidelines for appropriate GenAI use in their courses.
The Process
Students talk through a series of scenarios for using GenAI and decide whether each one belongs in the green, yellow, or red category of use (Fairfield, 2024). The colors correspond to “go” (i.e., a green light and acceptable use of AI), “pause” (i.e., a yellow light to ask for the instructor’s help with the assignment or for clarification about acceptable use), and “stop” (i.e., a red light and not an appropriate use of AI in this course). This activity happens within broader ongoing discussions about AI and the course content. As such, we recommend starting the stoplight activity with a lesson that explains why we need to create boundaries for GenAI tools, as well as how these tools work and what ethical and academic integrity challenges exist when using them. Small group discussions can then help students articulate their own perspectives on how and when GenAI should or should not be used.
The main prep work for us involves writing a series of one-line scenarios, which we create based on past student use of GenAI, our own classroom experimentation, and the likelihood that assignments could be “ChatGPT’d” in the future. For example, scenarios in Catherine’s writing classes have included using GenAI to “check your writing for grammar mistakes” or “rewrite your assignment in a different style,” while Ilka and Rachel have included scenarios such as “use AI to generate presentation content” in listening and speaking courses. When creating the scenarios, we include a range of scenarios that we think may be green or yellow, as well as some that are clearly red. While it may seem counterintuitive to share a scenario like “A student uses AI to generate a whole paper and submits it as their own,” we have found it quite valuable to have explicit and open conversations about why such scenarios might arise (e.g., procrastination) and what students can do to prevent them.
Across our classes, students form small groups to discuss and debate where on the stoplight each scenario belongs. Catherine likes to provide hard copies of the scenarios printed on individual strips of paper to offer students a much-needed screen break; Rachel and Ilka typically use shared Google Docs that students can access in their learning management system. Scenarios can also be displayed on the instructor’s screen in an in-person class. In online classes, the instructor can post this activity on a discussion board for students to reply to. The following table shows an example Google Doc from Ilka’s class.
Exploring AI use
With your group, read each statement in the first column, then put a ✔ in the green, yellow, or red column. Please choose only one answer for each statement.
| Statement | Green (appropriate) | Yellow (be careful; ambiguous) | Red (inappropriate) |
| --- | --- | --- | --- |
| 1. A student uses an AI chatbot to practice English conversation skills. | | | |
| 2. A student asks Copilot for definitions and sample sentences that include new vocabulary. | | | |
| 3. A student asks AI to translate their assignment word-for-word from their native language into English. | | | |
| 4. A student uses AI to create an outline for a class presentation. | | | |
| 5. A student uses AI to brainstorm ideas for a presentation topic. | | | |
| 6. A student uses AI to generate content for their presentation and slides. | | | |
| 7. A student asks AI to fix major grammatical issues in their writing. | | | |
| 8. A student uses AI to write the whole introduction for their speech. | | | |
| 9. A student uses AI to brainstorm ideas for a research poster presentation but does the research on their own. | | | |
| 10. A student asks AI to create a transcript of a listening passage for listening homework. | | | |
We have found that majority voting works for most scenario decisions and leads to class-wide agreement about which scenarios are green, yellow, or red. It also leads to discussion about the technical limitations of AI tools (e.g., what Copilot does or does not do well). In most classes, a few scenarios result in a split vote that needs further discussion and clarifying language. Nonetheless, by the end of class, we have a document with clear stoplight categories that students can refer to whenever they work. The next table shows an example of how students’ comments are included next to the scenarios; these comments stay in the document for the duration of the course. Here, the class talked about why the red and green statements were crystal clear, and because these were obvious to everyone, no additional comments were added. We have found that the most negotiation and lively discussion tends to happen with the ambiguous yellow scenarios. For example, the additional comments in the yellow category in the table capture how nuanced scenarios were clarified in one of our classes. When GenAI use is more nuanced, we can dive deeper into why we need to be careful.
| Inappropriate Use | Additional Comments |
| --- | --- |
| A student has an AI rewrite and rephrase large portions of their research paper. | |
| A student inputs an essay prompt into an AI and submits the AI-generated response to their teacher. | |
| A student has an AI program auto-generate the introduction and conclusion for their paper. | |

| To be discussed with professor should it come up / It depends | Additional Comments |
| --- | --- |
| A student uses an AI writing assistant to get suggestions on how to rephrase a sentence more clearly. | It’s ok if you read the output critically and decide if/how to use it. |
| A student asks an AI to translate a document word-for-word from their native language to English. | It’s ok to compare texts but it is not ok to AVOID WRITING IN ENGLISH. |
| A student uses an AI grammar checker to identify and correct errors in their writing. | It’s ok if you read the output critically and decide if/how to use it. |
| A student uses an AI tool to brainstorm ideas for writing or discussion. | It’s ok to help you get UNSTUCK. |
| A student uses AI to summarize or annotate a reading. | It’s ok to help with comprehension. It’s not ok if you do this to AVOID READING. |

| Appropriate Use | Additional Comments |
| --- | --- |
| A student learning vocabulary asks an AI for sample sentences using new words. | |
| A student uses an AI thesaurus tool to find alternative words to improve word choice. | |
| A student learning a new language uses an AI conversation practice tool. | |
| A student asks an AI tutor to explain grammatical concepts they are struggling with. | |
What we like about this activity is that it is adaptable to different courses and modes of teaching. For instance, while we mostly use the stoplight in our face-to-face classes, we have experimented with the same activity in an online class. Below, we include an adaptation of the activity assigned to students in the first week of an asynchronous online introductory Italian course. Once students categorized various AI and language learning scenarios as part of the assignment, the instructor (Rachel) posted a summary of student responses to a shared discussion board for them to review. Students then formally wrote an agreement or responded to additional questions on the discussion board. The guidelines are still developed and agreed to by all students and inform their work in the course.
AI Guidelines Activity

Consider the Academic Integrity and Statement on AI section on pages 4-5 of the syllabus and use it to complete this activity. As you are deciding if/how to use AI to support your language learning, remember that as we learn language, we often make mistakes. These mistakes are wonderful opportunities to learn! Also, please note that I am always available to help you reflect on what can support your learning. Once everyone has submitted their responses, I will create a summary that outlines our class AI usage guidelines for the semester.
Part I: Open reflection questions
1. How does relying on AI translation tools for assignments differ from using a dictionary? Explain the cognitive processes involved in each approach and their impact on your language acquisition.
2. The policy distinguishes between acceptable and unacceptable AI use. Describe a real scenario where you might be tempted to use AI inappropriately, and explain how you would handle this situation ethically while still advancing your language skills.
3. Consider the policy’s emphasis on revision-based writing. Why might submitting AI-generated work for initial drafts undermine the learning objectives of this process?
Part II: The Stoplight Assessment
Using the academic integrity policy and your language learning objectives, evaluate each scenario according to this classification:
RED: Unacceptable use
– Violates academic integrity
– Circumvents learning process
– Explicitly prohibited by policy

YELLOW: Requires careful consideration
– May have valid uses with proper guidelines
– Discuss with professor
– Could risk learning objectives

GREEN: Acceptable use that supports learning
– Clearly aligns with policy guidelines
– Enhances language acquisition
– No ethical concerns
For each scenario, assign a color and provide 1-2 sentences explaining your reasoning. A good approach to this evaluation is to first ask yourself if this use of AI can enhance your learning!
Scenarios
1. Using an AI tool to look up the meaning of a single word.
2. Using AI to generate a full paragraph for a writing assignment.
3. Using a grammar app to check if a sentence structure is correct.
4. Using Google Translate to translate an entire paragraph from English to Italian.
5. Having ChatGPT write a rough draft of your assignment.
6. Consulting ChatGPT for information about Italian history for a class presentation.
7. Using an AI tool to generate ideas for your essay, but writing the content yourself.
8. Using an AI tool to generate multiple sentences to see vocabulary in context.
9. Practicing conversation with an AI chatbot to improve your speaking skills.
10. Using an AI tool to completely generate your homework.
11. Using an AI tool to write a dialogue for a class presentation.
12. Using an AI tool to create a study schedule for your Italian course.
What Have We Learned?
Navigating rapid AI advancements with our students has been productive because the focus is less about particular AI tools and more about GenAI use broadly. We have found it worth openly discussing the nuances of AI use with them because these conversations help us build a trusting foundation throughout the semester. As one of our students told us, these conversations “not only trigger [the] autonomy of students, but also more give more guidance to teach students how to use AI correctly.” We believe that the stoplight activity helps shift conversations about the use of AI away from a right vs. wrong binary and towards reflection about learning in an increasingly AI-rich world.
So what do our students say about all this? They have told us that they appreciate this activity and love the lively and interesting group discussions that go along with it. As one student said, “I think [the stoplight] is very useful for international students who do not understand the use of AI, and different classroom professors have different attitudes towards AI.” Another student noted that this activity gave him “a clear understanding of how to appropriately use AI in our class,” which reminds us that it must be challenging for students whose professors have a range of attitudes toward AI. This activity is an excellent reminder that students’ perspectives are critical to understanding the impact of AI on teaching and learning.
Additionally, students might worry that they are doing something wrong if they seek AI support in their learning. Indeed, there are many innovative uses of AI that can offer personalized support (e.g., using AI to practice conversation skills in the target language). Exploring AI boundaries with students can alleviate some of this fear. As one student noted, “It was very respectable that we did not avoid the topic of AI but instead somewhat welcome it and use it lightly to achieve our writing goals.” Other students reminded us that the stoplight does more than prevent misuse of AI; it creates permission for students to use GenAI in appropriate ways. For example, one student described feeling “encouraged to experiment with using [GenAI] to help me structure and outline essays since I don’t always know where to start, even if it wasn’t a significant use I would have felt uncomfortable doing that if we weren’t explicitly told it was ok.” These comments remind us that students are also worried and uncertain about AI use and seek guidance, just as many faculty do (Toncelli & Kostka, 2024).
In the next section, we each share our reflections on implementing this activity in our courses.
Ilka’s Reflection: Encouraging Discussion and Deep Thinking
Overall, this stoplight activity has worked well in my classes. I have found that discussions about these statements are exciting, and a bonus of this activity is that it offers students an opportunity to practice speaking and negotiating meaning. That being said, a common question I get from other instructors is, “What happens if students disagree?” I have found that this is a good problem to have because a healthy debate for any given statement sparks discussion and encourages students to think more deeply about the color they chose and why. It also encourages the class to reflect on the purpose of a course. For instance, generating and then reading the transcript of a podcast in a listening and speaking class would not help students strengthen their listening skills and achieve the goals of the course. What is also interesting is that instructors may change their minds after talking to students, as I have sometimes done. I remember that one particular statement I thought was “yellow” ended up being “green” after talking to students in class and hearing their views. Not having answers set in stone allows instructors to reach a shared understanding with students, which makes the activity truly collaborative and solidifies students’ role as thought partners even more.
Rachel’s Reflection: Creating Space to Learn from AI Missteps
Saying whether co-created AI guidelines are an effective preventative measure is a little tricky because it is difficult to “prove” GenAI use. That said, we know our students well, recognize their unique voices, and have a pretty strong sense when we have been “ChatGPT’d.” I would like to say that the stoplight solves this issue completely, but the reality is more complicated. Since I began this more transparent process with students, I feel that I am receiving less AI writing than before, but it still happens occasionally that a student submits work that is clearly not their own. In these rare instances, it is useful to have a negotiated, co-created policy to lean on. For example, I recently received a first draft of a paper that had all the hallmarks of GenAI writing. When I spoke to the student about it, she was able to admit that she did not follow the guidelines that were collectively created by the class. The fact that the agreement was ours, not hers, permitted this open conversation. I have also found that students need space to make and learn from mistakes like this. Because I ask students to submit multiple drafts during their writing process, they can make a mistake on a homework draft, learn from it, and not fail the final paper. Students should be able to make and learn from mistakes, and having shared responsibility for the AI guidelines supports this.
Catherine’s Reflection: Giving Students the Agency to Learn
When I collected student reflections on how our AI boundaries affected their learning after I piloted the stoplight activity in the fall of 2023, a significant number of students from different classes said they had never used tools like ChatGPT and did not intend to use them at any point in the future. These students reminded me that it is important to not paint students in the age of AI with a broad brush and assume that GenAI will appear all over our grading piles. Likewise, as our student feedback shows, the stoplight activity is about more than simply setting boundaries of what not to do. It is about creating a safe and straightforward framework of where to start when you feel like overwhelmingly powerful technologies are eclipsing the work of putting your own thoughts down on paper to share with the world – or at least with your college class. Encouraging students to use their own agency and dialogue skills to navigate these technologies helps to shift the power imbalance away from the technology and show students that their decisions made with their learning community matter. If they can use their problem-solving skills to find their way through AI together, they can also probably use those skills to write an essay that they are proud of. Having used this activity for many semesters, I appreciate that the great gift of student-led agreements is that they free up space for the true work: using writing to learn.
Moving Forward Together
When we provide clear expectations in our classes through this activity, we set the stage for transparent discussions about GenAI use and make it easier for students to focus on learning throughout the rest of the course. Using this activity has also sparked our own collaboration, bringing us together from different programs at our university. For these reasons, we emphasize the importance of regularly talking with peers, exchanging ideas and resources around teaching, and collectively addressing new opportunities and challenges as they come up (Toncelli & Kostka, 2024). Many unknowns remain about GenAI’s impact on higher education. However, including students as thought partners and working closely with colleagues provides a valuable opportunity for exploration and agency. We encourage you to adapt this activity to your own classes and explore what appropriate AI use looks like in your own context.
For another perspective on setting AI guidelines using a stoplight, we invite you to read the article “Navigating the AI Highway: A Traffic Light Approach to Language Learning.”
References
Fairfield, C. (2024, September 4). Collaboratively defining AI boundaries with students. Center for Advancing Teaching and Learning Through Research. https://learning.northeastern.edu/collaboratively-defining-ai-boundaries-with-students/
Toncelli, R., & Kostka, I. (2024). A love-hate relationship: Exploring faculty attitudes towards GenAI and its integration into teaching. International Journal of TESOL Studies, 6(3), 77-94. https://doi.org/10.58304/ijts.20240306