@Third-Eye said in AI In Poses:
When I pull up a detector it’s to have a second sanity check. Because what else have we got, ya know?
Yeah I totally get that impulse. My concern is rooted in psychology more than the technology itself.
Here’s a different example that maybe illustrates my point better: grammar checkers. They are sometimes useful and very often completely, utterly wrong. As a professional writer, I have the skill to sift the wheat from the chaff and find the suggestions that are actually correct and useful. But the teen homeschoolers I work with don’t. If I hadn’t taken the time to teach them why one should be skeptical of a grammar checker’s suggestions, it would be completely understandable for them to just go: “Well, this thing obviously knows more than me; I should do what it says.” (Here’s a neat video essay about the problems with someone who doesn’t know grammar well using Grammarly, btw)
So I’m not saying “never use grammar checkers because they suck and have no value”. I’m just saying that they don’t work well enough to be relied upon, and anybody who uses them needs to be well aware of their limitations. That awareness just doesn’t happen when you’re a layperson whose only information is the marketing hype.
Now, that’s grammar checkers, where we have a tangible baseline to compare against (e.g., the CMS style guide). Plagiarism detectors are the same. One will tell me, “Hey, this seems like it’s ripping off (this article),” and I can go look at the article and decide if it’s right.
With AI detectors, you don’t have that capability. You just have to take the tool’s word for it. If its verdict lines up with your vibe, you’re likely to take that as confirmation even when it’s wrong. If it doesn’t line up with your vibe, you have no way to tell whether it’s wrong or you’re wrong.
I also have concerns about the fundamental way these detectors work. GPTZero analyzes factors like “burstiness”. Yes, sometimes AI writing has low burstiness because it’s overly uniform. But sometimes human writing has low burstiness too, and sometimes AI writing can be massaged to make it burstier.
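To make “burstiness” concrete: GPTZero hasn’t published its exact formula, but the idea is usually described as variation in sentence-level complexity, e.g., a mix of short and long sentences scoring higher than uniform ones. Here’s a toy sketch of that idea (the `burstiness` function and the coefficient-of-variation proxy are my own illustration, not GPTZero’s actual method):

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy burstiness proxy: coefficient of variation of sentence lengths.

    NOT GPTZero's real metric (which isn't public); this just illustrates
    the intuition that "bursty" prose mixes short and long sentences.
    """
    # Naive sentence split on ., !, or ? followed by optional whitespace
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Stop. The storm rolled in faster than anyone in the village expected. Run."

# Uniform sentence lengths score 0; mixed lengths score higher.
print(burstiness(uniform))  # 0.0
print(burstiness(varied) > burstiness(uniform))  # True
```

The point of the sketch is how fragile the signal is: a human who happens to write evenly paced sentences scores “low burstiness” just like AI output would, and anyone can bump the score up by splicing in a few short sentences.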
These tools are new, and there hasn’t been a lot of sound research into the subject (even that big article from U of Chicago was a “working paper” that hasn’t been peer-reviewed, as far as I can tell). Their methodology might suck or it might be brilliant, but until more folks have reproduced the research, it can’t be taken as gospel.