CHAMPAIGN — Is generative artificial intelligence a useful learning tool or a problem for educators? University of Illinois professors are split on the subject.
For Helaine Silverman, an anthropology professor, concerns about AI are stretching beyond the classroom.
She received an email from a teenager asking about pre-college programs and discussing some of Silverman’s previous research.
“She wrote a long message, beautifully constructed, speaking about her interest,” Silverman said. “But there was one glaring mistake in her message: She referred to our work and then inserted ‘genital mutilation’ into part of her discussion. We never considered that in our death work. Rather, I mentioned it in passing in an earlier volume about cultural heritage and human rights.”
That made Silverman take another look at the rest of the email.
She said that beyond the writing level, which far surpassed an average high school sophomore's, the numeral "1" appeared in strange places, and the student never mentioned which high school she attended.
It also seemed strange that a young student would be reading Silverman’s anthropological research in the first place.
Silverman began to wonder if the email could be some kind of spam or virus, but even if it was written by a real high school student, she wouldn’t be happy if AI was involved.
Either way, she’s going to try to set up a virtual meeting with the student and see what happens.
“If her message is AI, then I’m concerned that she’s off to a very bad start. That message, potentially, is a kind of plagiarism,” Silverman said.
“Artificial intelligence” is a broad term that refers to everything from social-media algorithms to autocorrect to “digital assistants” like Siri.
The specific kind of AI that has gotten attention from educators is generative AI, like ChatGPT, which can “create” original content.
It’s a bit more complicated than that, since computer systems can’t actually generate new ideas and can only recombine the human-created material they were trained on, but the point is that a student could hypothetically use generative AI to produce an entire essay without doing any of the writing themselves.
Anthropology Professor Virginia Dominguez said even though she doesn’t want AI used on her assignments, she isn’t worried because it would be hard to use it to cheat in her class.
“I know some other people are, but I just don’t see how any student could fake an assignment in my courses because I am very specific about what I ask for in any written assignment,” Dominguez said.
Social Work Professor Flavia Andrade said AI is just the newest way for students to take shortcuts with their work.
She put some of the responsibility on the people creating the assignments to formulate them in a way that students will see the educational value of completing the work themselves.
However, Andrade doesn’t think students should be stopped from using AI altogether — they just need to use it in ways that supplement their own abilities.
“When new technologies are introduced, we have to learn how to use them better,” Andrade said. “Think about when calculators and computers were created and how they disrupted how we learn and teach. AI is a new technology we must learn how to benefit from when teaching.”
Stanley Ambrose, another anthropology professor, described himself as being “extremely concerned” about the use of AI because he said that students won’t learn anything if they just cut and paste questions and get fed answers.
He has already caught his students using AI in a way he didn’t agree with on an assignment he knows well; his own work is the most highly cited review of the subject.
Three students submitted papers so similar to one another, and to Ambrose’s own writing, that he ran them through AI-detection software provided by the UI.
“Those with AI scores of 99 percent were flunked. Those with 50 percent were tested one paragraph at a time, and given D or C grades, depending on original content quality,” Ambrose said. “No assignment with AI content is given more than a C grade.”
Last semester, Ambrose began opening classes with discussions on AI and its implications for the world.
“This is such a serious problem in teaching and learning that I question the value of continuing to teach without guarantees that the students actually write what they submit,” Ambrose said. “The wider world is suffering as well, with AI-generated content that most of them cannot recognize and so accept as fact.”
Communications Professor Joshua Barbour, on the other hand, encourages the use of AI in his classes. He said since we can expect AI to change the way we work, students should understand how to use it.
“I have often heard it said that ‘I’m not going to be replaced by a robot or AI, but I may be replaced by someone making better use of these tools,’” Barbour said. “The key for our students is to empower them to make thoughtful, research-based and community-grounded decisions about how they want to use these tools.”
That doesn’t mean he’s telling students to copy/paste his questions into ChatGPT — Barbour said his assignments are too specific for AI to complete accurately.
“It’s pretty easy to tell when students just try to pass off ChatGPT versions of their submissions,” Barbour said. “The prose is different (typically more polished than student work), and it sounds good but isn’t effective in terms of the substance of what we are learning in class.”
He doesn’t mind if students use it to get themselves started, solving the “blank-page problem,” as long as they disclose their use of AI on the assignment.
“My concern is that unthoughtful use of these tools will mean that students miss out on what they might learn by doing the assignments without them, but if I can help them to use them well, then they can augment their work, which may help them produce better submissions and learn more along the way,” Barbour said.
Likewise, Social Work Professor Karen Tabb Dina encourages the use of AI in her class, but it wouldn’t help on the majority of her assignments anyway.
She said she shifted her assignments away from formal papers to more practical skills to combat plagiarism around a decade ago.
The only “paper” she assigns is a transcript of an interview that must be conducted by the student, so it would be very challenging to fake.
Another way to combat the use of AI to plagiarize, she said, is to make sure assignments focus on making students think for themselves.
“In my experience, AI is not very intelligent as it pertains to critical thinking and the application of skills,” Tabb Dina said. “By requiring students to have assignments that challenge their critical thinking, there’s no need to use AI to work for you. It doesn’t work; they cancel each other out.”
Instead, Tabb Dina will ask students to use AI when it could be helpful, like preparing for a last-minute meeting or creating an image of something that doesn’t exist yet to support a proposal.
“I’m teaching the students how to make use of AI skills in critical moments, but not water down their ability to use critical thinking,” Tabb Dina said.
Tabb Dina said that all faculty should be thinking about how to use things like AI and social media as positive tools in their classes.
“I just challenge all faculty to challenge themselves,” Tabb Dina said. “We’re in 2023. Skill sets have changed and faculty needs to be creative.”