CNN
—
When Diana Gayeski, a strategic communications professor at Ithaca College, receives an essay from one of her students, she runs part of it through ChatGPT and asks the AI tool to critique and suggest ways to improve the work.
“The best way to consider AI for grading is as a teaching assistant or research assistant who might be doing the initial review … and AI does a really good job of that,” she told CNN.
She shows her students the feedback from ChatGPT and how the tool rewrote their essays. “I'm also going to share what I think about their intro and discuss it,” she said.
Gayeski asks her class of 15 students to do the same: run their drafts through ChatGPT to see where they can improve.
The advent of AI is reshaping education, bringing real benefits, such as automating some tasks to free up time for more personalized instruction, but also big dangers, from issues of accuracy and plagiarism to maintaining integrity.
Both teachers and students are using the new technology. A report by strategy consultancy Tyton Partners, sponsored by plagiarism detection platform Turnitin, found that half of college students used AI tools in fall 2023. Meanwhile, fewer faculty members used AI, but the share that did grew to 22% in fall 2023, up from 9% in spring 2023.
Teachers are turning to AI tools and platforms such as ChatGPT, Writable, Grammarly, and EssayGrader to help them grade papers, provide feedback, create lesson plans, and write assignments. They are also using these rapidly growing tools to create quizzes, polls, videos, and interactives that raise the bar for what's expected in the classroom.
Students, meanwhile, are leaning on tools such as ChatGPT and Microsoft Copilot, which is built into Word, PowerPoint, and other products.
But while some schools have policies on whether students can use AI for schoolwork, many lack guidelines for teachers. The practice of using AI to write feedback and grade assignments also raises ethical considerations. And parents and students who may already be spending hundreds of thousands of dollars on tuition could well wonder whether an endless feedback loop of AI-generated and AI-graded content in college is worth their time and money.
“If teachers use it solely to grade, and students use it solely to produce a final product, it's not going to work,” Gayeski said.
A time and place for AI
How teachers use AI depends on many factors, particularly when it comes to grading, said Dorothy Leidner, a business ethics professor at the University of Virginia. If the content being tested in large classes is primarily declarative knowledge, meaning there is a clear right and wrong, grading with AI “may even be better than human grading,” she told CNN.
AI lets teachers grade papers more quickly and consistently, avoiding the effects of fatigue and boredom, she said.
But for smaller classes or assignments without definitive answers, Leidner said grading should remain personalized so teachers can provide more specific feedback and get to know a student's work, and therefore their progress, over time.
“Teachers should be responsible for grading, but they can also delegate some responsibility to AI,” she says.
She suggested that teachers use AI to look at certain metrics, such as structure, language use, and grammar, and assign numerical scores on those measures. But when it comes to novelty, creativity, and depth of insight, teachers should grade students' work themselves.
Leslie Layne teaches students how best to use ChatGPT, but she takes issue with how some educators are using it to grade papers. (Provided by Leslie Layne)
Leslie Layne, who teaches ChatGPT best practices in her writing workshop at the University of Lynchburg in Virginia, said she sees the advantages for teachers but also the drawbacks.
“I feel like using feedback that isn't really from me kind of damages that relationship a little bit,” she said.
She also believes that uploading student work to ChatGPT is a “serious ethical consideration” and potentially a violation of intellectual property. AI tools like ChatGPT use such submissions to train their algorithms on everything from speech patterns to sentence construction to facts and figures.
Leidner, the ethics professor, agreed, saying this should particularly be avoided for doctoral dissertations and master's theses, because students may hope to publish the work.
“It is not right to upload materials to AI without first informing the students,” she said. “And the student will probably have to give consent.”
Some teachers use software called Writable, which relies on ChatGPT to help grade papers but “tokenizes” submissions, so they contain no personal information and are not shared directly with the system.
Teachers upload essays to the platform, which was recently acquired by education company Houghton Mifflin Harcourt, and receive suggested feedback to pass along to students.
Other educators use platforms such as Turnitin, which has plagiarism detection tools, to help identify when assignments were written by ChatGPT or other AI. But this type of detection tool is by no means foolproof; OpenAI shut down its own AI detection tool last year, citing what the company called “low accuracy.”
Setting standards
Some schools are actively working on policies for both teachers and students. Alan Reid, a research associate at the Center for Research and Reform in Education (CRRE) at Johns Hopkins University, said he recently spent time working with K-12 educators from across the country who use GPT tools to create personalized end-of-quarter comments on report cards.
But, like Layne, he acknowledged that there are still “limitations” to the technology's ability to produce insightful feedback.
He currently sits on a university committee crafting an AI policy for faculty and staff; debate is ongoing not only over how teachers use AI in the classroom but over how educators in general use it.
He acknowledged that schools are discussing the use of generative AI tools to create promotion and tenure files, performance reviews, job postings, and more.
Nicholas Frank, associate professor of philosophy at the University of Lynchburg, said universities and professors need to be on the same page about policy, but they need to be cautious.
“There are many risks to developing AI policy at this stage,” he said.
He worries that it is still too early to understand how AI will be integrated into everyday life. He also worries that some administrators who don't teach in the classroom may develop policies that miss the nuances of instruction.
“That could run the risk of oversimplifying the issue of using AI in grading and instruction,” he says. “Oversimplification makes for bad policy.”
First, he said, educators can identify obvious misuses of AI and begin developing policy around them.
Meanwhile, Leidner said universities can provide high-level guidance, such as prioritizing transparency (students have a right to know when AI is being used to grade their work) and identifying what types of information should never be uploaded into an AI or asked of an AI.
But universities must also be open to “periodically reevaluating as technology and applications evolve,” she said.