“It would be better for universities to stop thinking of students as numbers and more as real people,”
Beautiful ending to this piece.
The experience, in many ways, was emblematic of his time at the university, he says.
AI, no matter how it turns out in the future, will be in the textbooks (or whatever is left of them) one thousand years from now.
If anything, the AI cheating crisis has exposed how transactional the process of gaining a degree has become. Higher education is increasingly marketised; universities are cash-strapped,
This is a horrifyingly bold statement.
They all agreed that a shift to different forms of teaching and assessment – one-to-one tuition, viva voces and the like – would make it far harder for students to use AI to do the heavy lifting.
When we talked about the blue books in class, I quietly wished to myself that school could go back to how I remember it ten years ago.
One conveyed frustration that her university didn’t seem to be taking academic misconduct seriously any more; she had received a “whispered warning” that she was no longer to refer cases where AI was suspected to the central disciplinary board.
What does this mean? This seems concerning.
using it for an “overview of new concepts”, “as a collaborative coach”, or “supporting time management”.
There should be a course on this so people who do not know much about technology can use this tool.
“I’ve grown desensitised to it,” he says. “Half the students in my class are giving presentations that are clearly not their own work.
This is sad. It is intriguing that both the information students get from AI can be inaccurate, and the detection AI can miss cheating or falsely accuse someone of it.
sent over a suspiciously polished piece of work. The student, David explained, struggled with his English, “and that’s not their fault, but the report was honestly the best I’d ever seen”.
I do not think I have the brain capacity to tell whether someone was using AI; this is kind of a wild accusation.
Researchers at the University of Reading recently conducted a blind test in which ChatGPT-written answers were submitted through the university’s own examination system: 94% of the AI submissions went undetected and received higher scores than those submitted by the humans.
WOW! This is actually insane.
Many academics seem to believe that “you can always tell” if an assignment was written by an AI, that they can pick up on the stylistic traits associated with these tools.
Alarming, considering all AI seems to be unreliable.
the experience “messed with my mental health,” he says. His confidence was severely knocked. “I wasn’t even using spellcheckers to help edit my work because I was so scared.”
I can relate to this. Reading this article and seeing things like Grammarly, which I currently just use for punctuation, makes me scared to use AI at all due to my lack of understanding of it.
“humanisers”, such as CopyGenius and StealthGPT, the latter of which boasts that it can produce undetectable content and claims to have helped half a million students produce nearly 5m papers.
This is actually really alarming! Wow!
a generative AI researcher at British University Vietnam, believes there are “significant limitations” to AI detection software. “All the research says time and time again that these tools are unreliable,”
So does this mean AI will never be 100% reliable?
a student with autism spectrum disorder whose work had been falsely flagged by a detection tool as being written by AI.
Why is this?
One study at Stanford found that a number of AI detectors have a bias towards non-English speakers, flagging their work 61% of the time, as opposed to 5% of native English speakers
Why is this?
Since then, Turnitin has processed more than 130m papers and says it has flagged 3.5m as being 80% AI-written. But it is also not 100% reliable
Will AI ever be 100% reliable? Is that possible?
Turnitin, which scans submissions for signs of plagiarism. In 2023, Turnitin launched a new AI detection tool that assesses the proportion of the text that is likely to have been written by AI.
AI programmed to detect AI? Feels a bit self-aware to me.
In the struggle to stuff the genie back in the bottle, universities have become locked in an escalating technological arms race, even turning to AI themselves to try to catch misconduct.
What a crazy era for technology.
More than half of students now use generative AI to help with their assessments
I am excited and nervous to use AI; it feels like a gateway drug to laziness.
Many such tools are now available, such as Google’s Gemini, Microsoft Copilot, Claude and Perplexity. These large language models absorb and process vast datasets, much like a human brain, in order to generate new material.
AI is so confusing to me.
Two years have passed since ChatGPT was released into the world.
How long has AI been around?
ever created an account with ChatGPT? How about Grammarly? Albert didn’t feel able to defend himself until the end, by which point he was on the verge of tears.
This is where I get confused, because AI is a tool that can be used and misused, and most school tools (e.g. rulers, calculators, textbooks) do not have the ability to completely cheat for you.
It might not have been his best effort, but he’d worked hard on the essay. He certainly didn’t use AI to write it
This worries me as someone who is a returning student after 10+ years away, who barely knows how to use a computer, let alone AI; being accused of using it would be disheartening.