As artificial intelligence tools like ChatGPT continue to evolve at a rapid pace, Ransom Everglades has turned to a universally understood symbol to guide its approach to AI. Much like the traffic lights that dictate our movements on the road, new assignment labels now inform students about AI expectations. A “red” assignment means students may not use AI at all, “yellow” means students may use certain AI tools for specific tasks related to the assignment, and “green” means that students may use AI tools freely for any aspect of the assignment.
Since the establishment of the traffic-light system this past August, the three colors have appeared on the front pages of virtually every assignment and assessment. Is this new approach to adapting alongside artificial intelligence tools like ChatGPT effective? And does Ransom Everglades’ student body perceive AI as a valuable resource to be explored within the classroom, or do they find its potential stifled by the “red” light too often?
Members of the RE community have had varying reactions, ranging from wholehearted endorsement to sincere confusion about why we need these policies in the first place.
Although the AI policies were officially set in place in the fall of this year, they were the product of extensive planning and research during the Spring 2023 semester and the summer. Ms. Jen Nero, Chair of the Humanities Department, convened the AI Ransom Everglades (AIRE) Task Force in early February. The group, which included faculty and staff members from almost every RE department, met weekly throughout the semester to discuss how best to approach AI and to share new insights. It was a hectic period of uncertainty for educators, but the Task Force worked to stay ahead of the curve. Attending numerous webinars and trainings made Ms. Nero “realize that RE was actually ahead of the game compared to a lot of academic communities around the country,” she said.
The Task Force developed the “traffic light” system at a meeting prior to the opening of the 2023-2024 school year. After it was approved by school leadership, it became part of the Student Handbook, and the group shared it with the entire faculty during opening meetings.
With the policy now established at the upper school for more than a semester, what is the RE community’s point of view? Is RE ahead of the game or just learning how to play?
For students like Juliana Rivera ’24 and Nohan Gomez ’25, the limitations of the system have become apparent. Rivera said the system needs to be “a bit more thought-out” as it “leaves wiggle room in the yellow area, because you don’t know how much you can use and how much you can’t.”
For Gomez, the issue with the policy is that it is too open to teacher interpretation. “Some teachers have ‘yellow’ written down for assignments, but using ChatGPT is discouraged,” said Gomez. “I don’t think I’ve ever seen a green assignment,” recalled Gustavo Do Valle ’26.
However, other students said that the system has pushed teachers to be more creative with their assignments. “Because [teachers] know, if they give us a worksheet with questions on it, ChatGPT could do it, so they make us think and force us to like go a little above and beyond,” said Christopher Tsialas ’26.
From Ms. Nero’s perspective, such rethinking is inevitable and necessary. With the pervasiveness of AI tools like ChatGPT, teachers need to reevaluate whether assignments like take-home essays and exams still make sense in a post-ChatGPT world. “Every teacher in America should feel like a new teacher this year, and students to a certain extent too,” she said.
Carlos Horcasitas ’25 said that his teachers have made a significant effort to change the types of assignments they give this year. In his more STEM-oriented classes like Data Science, every assignment is green, as students are encouraged to use AI to help them code and solve problems. But in his Humanities courses, assignments are very different from prior years.
“In AP [Comparative Government] with Ms. Nero, I’ve definitely seen that she’s made an effort to create different assignments, especially at the beginning of the year to get background knowledge. [The assignments] show the power of AI, but also the limitations, as it might not provide the most detailed or correct information.”
Some teachers, like Humanities teacher and Head Speech and Debate Coach Kate Hamm, also see the AI policies as a tool that helps them gauge the extent of their students’ knowledge and learning.
For example, when students use AI tools like Grammarly and ChatGPT to make their writing more focused or to eliminate passive voice, and then cite that AI use, teachers can see what their students needed help with and use class time to help them understand those concepts better. “Seriously, if we can get rid of all the passive verbs with the push of a button, I say push that button,” said Coach Hamm passionately.
At the same time, Rivera acknowledged that tools like ChatGPT and DALL-E are double-edged swords. These tools can foster a sense of creativity in students, but also, “having something that can literally think for you is really dangerous.”
Approaching AI use with caution may not be a problem for some. Do Valle said he doesn’t have an OpenAI account, and “[doesn’t] feel a need to have it.”
Conversely, Gomez, who has used GPT-4 (the newest and most accurate version of the AI tool), argued strongly that the policies should be more lenient, as they currently limit the student body more than they should. “The only reason why we have these regulations here is to uphold the Honor Code. Outside, in the real world, your competitors will be using [AI]. It’s fair game,” he said.
One of the Honor Code-related limitations that Tsialas described is the additional requirement that students cite their AI use on “yellow” or “red” assignments. Tsialas said the process isn’t as transparent and straightforward in practice as it is on paper. From his experience with the AI policies so far, he’s realized that “people are scared to put that they used AI when it was only for inspiration.”
That part of the policy won’t change, however. “There is an expectation that everyone cite AI usage in their work,” said Nero—even if they feel anxiety about doing so. Coach Hamm added that students who are scared of acknowledging AI use in their work “definitely put themselves at risk of their teachers identifying that their student committed a violation of academic integrity.”
In the last few days of the Spring 2023 semester, AIRE sent out a survey about AI use to the RE faculty. One question asked whether faculty members “use ChatGPT or other AI tools in their work at RE.” At the time, 30% of respondents said they did. At the end of the fall semester, AIRE ran a similar survey with the same question. The response shot up from 30% to 60%.
Thirty-six percent of the faculty said that they believed the “traffic light” system to be “very effective” at “communicating AI usage expectations to students.” But 50% of respondents also said that “red” was their most-used assignment label; only 12% said that they most frequently made assignments “green.”
For now, the “traffic light” system will remain in place. But as AI usage increases across the board among both students and faculty, change may be on the horizon. “The truth of the matter is that ChatGPT is going to be part of the technology we have access to. It is an additional resource, and whether we like it or not, we have to adapt alongside it,” Gomez said.