Detroit Mercy faculty have been working with and against generative AI since 2022. The broad access that students have to this developing technology has forced educators to make decisions about how AI may be used in their courses, if at all.
Large language models (LLMs) like ChatGPT can produce prose and code much faster than humans can. However, their output often lacks depth and can be inaccurate.
Stephen Pasqualina, an assistant professor of English, said that AI-written text often sounds “intellectually shallow but grammatically perfect.” He added that AI, when given a specific prompt, “often doesn’t do exactly what I’m asking it to do.”
Still, even if a professor suspects that a student's work is AI-generated, such suspicions are difficult to prove definitively.
“AI checkers aren’t 100% accurate,” said Elena Garcia, an assistant professor of English. “Things can be flagged that aren’t AI-generated.”
False negatives are also possible, Garcia said, and content edited by Grammarly may be flagged as well. For this reason, Garcia advised students to save both a pre- and post-Grammarly version of their writing as “evidence of what their work is and what the AI review editor changed.”
Many educators are redesigning their assignments to account for potential AI usage. Caitlin Snyder, a computer science instructor, requires her students to present and answer questions about their Java programs in person.
“I can make sure students really understand their code and … it is good practice for the students to have to verbalize their computational understanding,” Snyder said.
With essays, these adjustments often occur within the prompts themselves.
“I’ve put more of a premium on citations,” Pasqualina said. “AI is not great at that, especially when you’re dealing with specific editions of texts.”
Pasqualina also noted that, in his experience, ChatGPT struggled with “linking up quotations in a text to a conceptual question” and addressing themes that are not “overt” — literary analysis skills that are needed to fully answer essay prompts.
Writing Center director Erin Bell encouraged educators to interact with LLMs themselves.
“If you haven’t tried out ChatGPT, try it out. See what it can do,” Bell said. “To understand the tool, you have to use the tool.”
Meanwhile, some professors are already using AI as a behind-the-scenes assistant.
“I’ve been leveraging generative AI … to create more practice problems for my students to work on and study from,” Snyder said. “It’s been great because I can spend more time brainstorming about the best way to teach students and trying different pedagogical approaches.”
Generative AI is also becoming a topic of discussion within the classroom. Bell gave the example of a colleague who asked his students to evaluate their writing against an LLM that was given the same prompt. Next semester, Snyder will be teaching two courses on AI for the Computer Science Department.
Garcia encouraged professors to “talk openly with students” about AI, stressing the importance of ethical use, in which “it doesn’t work against what we’re trying to teach.”
“I try to really emphasize to students that the most important quality or skill that employers want is effective communication,” Garcia said.
Bell echoed this sentiment, stating that AI “should not replace the critical thinking that students learn to do at the University.”
Nevertheless, a key thing to remember about AI is that “it is not going anywhere,” Snyder said.