OPINION: From ghost research papers, fabricated news articles and academic gibberish, ChatGPT makes up what it can’t answer, and many students are falling for it.
There was regret, much of it, as my student, pink-eyed and apologetic, confessed to numerous ChatGPT-derived mistakes in her essay, the gist of which she was unable even to explain. I felt partly responsible, having talked up ChatGPT’s potential while not fully appreciating its pitfalls. By that point, she was about the fifth student in a week I’d grilled for submitting what I suspected was a badly AI-written assessment.
As a result, I am failing three times more students than ever before, and grade averages are down. From the conversations I’ve had with other academics, the problem is widespread. Meanwhile, the advantages at the other end of the spectrum, for students using the tool well, have so far been minimal. I expect this will change, and change quickly, but for now there is some painful teething going on.
Advice to fact-check AI content, because it is prone to mistakes, doesn’t cover the half of it, and it has caught out educators and students alike. It’s not only ghost academic papers that are a problem: the chatbot falsifies references to digital news articles too, as I discovered after fielding requests from members of the public trying to trace articles they assumed had been archived.
But the worst attribute of AI from an educational point of view is the genericised language style it produces, which we are now witnessing across a large swath of the student body. It threatens the unique and original voice of the individual and, from a grading point of view, blends in with all the other AI-assisted essays, lowering the mark.