In the year since OpenAI released ChatGPT, high school teacher Vicki Davis has been rethinking every single assignment she gives her students. Davis, a computer science teacher at Sherwood Christian Academy in Georgia, was well-positioned to be an early adopter of the technology. She’s also the IT director at the school and helped put together an AI policy in March: the school opted to allow the use of AI tools for specific projects so long as students discussed it with their teachers and cited the tool. In Davis’s mind, there were good and bad uses of AI, and ignoring its growing popularity was not going to help students unlock the productive uses or understand its dangers.
“It’s actually changed how I design my projects because there are some times I want my students to use AI, and then there are times I don’t want them to,” Davis said. “What am I trying to teach here? Is this an appropriate use of AI or not?”
Like teachers across the US and UK, Davis, who also runs the education blog Cool Cat Teacher, spent the summer thinking through what the release of such a technology could mean for her.
Generative AI can produce images of the pope in a bomber jacket and answer nearly any math problem, so what could it do for students? Educators like her played with the tools and tried to understand how they work, what the utility could be – for teachers and students alike – and, perhaps most pressingly, how the software could be misused. Some took drastic measures, going so far as to abandon homework assignments now that the technology is so accessible.
“It feels like we’re in some sort of lab experimenting with our kids because it’s changing so rapidly,” Davis said. “If you had asked me about any of this last fall, I couldn’t have told you any of it because ChatGPT didn’t exist.”
In her senior-level class, Davis prohibited the use of chatbots for coding because until recently the College Board, which administers standardized tests like the SAT, didn’t permit AI assistance for programming. (That policy was recently changed to allow the use of generative AI as a supplemental tool.) But she has changed an annual project she assigns to incorporate AI into the process. Davis usually asks students to research current laptop models and evaluate which would be the best fit based on where they want to go to college and what they want to study. Now she asks students to feed the research they have done on their computer options into ChatGPT and ask for a recommendation based on their chosen major and college. The students are then tasked with evaluating ChatGPT’s recommendation. Her goal is to show students how they can use their own knowledge and research on a topic to help them better supervise AI.
Teachers who spoke to the Guardian say their primary concern is helping students begin to use AI without enabling cheating. Looming over their futuristic lessons is a fear that overreliance on these new tools could exacerbate the learning loss many students suffered during the pandemic. When OpenAI launched ChatGPT, students had only just returned to in-person instruction after two remote years, and many were still struggling with the huge hit to their ability to learn, or to engage in school at all.
“There’s so much trauma, and AI can’t help me with that,” said one Maryland high school teacher, Kevin Shindel.
***
After a summer spent experimenting with AI, there’s little consensus among teachers on how to address its use in schools. Many educators in a nearly 370,000-person Facebook group called “ChatGPT for teachers” argue the widespread use of AI chatbots is inevitable, and eagerly discuss the best ways to use these tools to make their jobs more efficient and help their students learn. Other teachers the Guardian spoke to suggested student use of the tools be banned until educators learn more about the technology behind them.
Still others have focused largely on mitigating AI-aided cheating; some have stopped assigning homework entirely, opting instead to have their students do supervised work in class. Some teachers have even required students to take handwritten exams or write the first drafts of essays by hand in class to ensure they are coming up with the ideas themselves.
But all those the Guardian spoke to agree on one thing: wherever they land on its use, teachers everywhere are grappling with how to stay on top of constantly evolving generative AI tools.
Shindel, a government teacher at a 3,300-student high school in Maryland, has been teaching his students about how AI impacts government and policy for 15 years, but he wasn’t prepared for how quickly people would adopt ChatGPT. He spent the summer learning about and experimenting with various chatbots, and in July presented his findings to the school board in a 38-slide presentation titled “The promise and peril of ChatGPT in today’s classroom”.
Shindel gave those in attendance ChatGPT-led activities to experiment with and posed questions about the ethics of its use (“What would a code of ethics for data usage and protection look like?”). Ultimately, he urged the school board to come up with a district-wide policy.
“Teachers shouldn’t be responsible for developing classroom policies alone,” Shindel said. “There needs to be some kind of concerted, systemic effort.”
Shindel doesn’t believe teachers and policymakers know enough about how chatbots collect students’ personal information – or how to prevent cheating – to allow students to use them. He also worries the tools could exacerbate the lack of student engagement caused by remote learning. Students and teachers are still reeling from the impacts of the pandemic, Shindel said. A recent Harvard Graduate School of Education study concluded that the average public school student between third and eighth grade was half a year behind in math and reading, and that nearly all students failed to recover the learning lost after returning to in-person instruction. A 2021 review of 10 studies on pandemic learning loss published by the UK’s Department for Education found that “disadvantaged primary school students were disproportionately behind expectations”, with many students 50% further behind.
“I have a couple classes that are almost completely silent. Students don’t interact with each other or answer any questions,” Shindel said.
Though they may be in the minority, some schools have made progress establishing AI policies. Little Falls high school in Minnesota decided to ban the use of AI tools entirely in an addendum to its school-wide cheating policy. Davis’s class policy allows certain tools to be used but requires students to seek permission and review the links the AI cites as sources. Kimberly Van Orman, a University of Georgia philosophy professor currently teaching a course on the ethics of AI, is focusing on transparency: she requires her students to include the prompt they entered into a chatbot, along with its response, in any assignment they use it for, to ensure they don’t “use it in a way that takes the place of learning”.
“If you’re trying to understand a concept from the book and you want to kind of talk it over with ChatGPT, that would be fine,” Van Orman said. “Consulting it on your homework problem would not be fine.”
***
Dozens of AI apps targeting students have cropped up in the past few years. Photomath, for instance, predates the current versions of ChatGPT and pitches itself as the No 1 app for math learning. Users can upload a picture of a math problem or equation, and the app will give them the answer with explanations. But several teachers said students began using it during the pandemic to cheat or, at the very least, to replace the “productive struggle” that results in learning. Inevitably, those teachers said, students who relied on Photomath during the pandemic struggled when they returned to the classroom.
But some tools are being built to refuse to simply hand students the answer. Khanmigo, an AI tutor being piloted by the educational non-profit Khan Academy, is instead trained to ask questions that nudge students toward a better understanding of the material. When the Guardian asked Khanmigo a basic programming question (implement a cache with expirations in JavaScript), the chatbot responded: “I can’t provide direct answers or solutions to coding problems.” When the Guardian asked it to solve for z in the equation “3z = 15” and repeatedly responded “I don’t know” to its prompts, the AI tutor kept providing guidance on how to solve the problem until it finally offered four multiple-choice options. Khanmigo was quicker to provide the right answer when the Guardian responded with an incorrect answer twice. ChatGPT, on the other hand, immediately provided the solution in both cases.
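For readers curious what that cache prompt is actually asking for, here is a minimal sketch in JavaScript, the language the Guardian specified. It is purely illustrative – a generic implementation of a cache with per-entry expirations, not Khanmigo’s or ChatGPT’s actual output:

```javascript
// Illustrative sketch of "a cache with expirations", the task the Guardian
// posed to Khanmigo. Not the chatbot's output; a generic implementation.
class ExpiringCache {
  constructor() {
    this.store = new Map(); // key -> { value, expiresAt }
  }

  // Store a value that expires ttlMs milliseconds from now.
  set(key, value, ttlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  // Return the value if present and unexpired; otherwise undefined.
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // evict lazily on read
      return undefined;
    }
    return entry.value;
  }
}

// Usage: cache an answer for one second, then let it expire.
const cache = new ExpiringCache();
cache.set("answer", 42, 1000);
console.log(cache.get("answer")); // 42
setTimeout(() => console.log(cache.get("answer")), 1100); // undefined
```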
Sal Khan, the founder of Khan Academy, says the organization spent thousands of hours training the system, which is powered by GPT-4, to understand that it’s not supposed to do people’s work for them. “We said stuff like: ‘You’re a Socratic tutor, you are here to make the students actively learn, not just passively,’” he said.
Though Khanmigo is still in an experimental phase, this kind of training is what distinguishes an AI tutor from an AI cheating tool, Khan argues.
“This time next year, you’re going to have 50 [companies] who say that they have an AI tutor,” Khan said. “But probably 90% of them are going to be somewhat shady and they’re just going to slap a little bit of a layer on top of GPT-3.5. They’re going to be mainly cheating tools, and not good ones.”