AI In Poses
-
@Faraday
This is a question about the individual service, not the entire category. For instance, Pangram’s policy:
Pangram does not train generalized AI models like ChatGPT, and our AI detection technology is based on a large, proprietary dataset that doesn’t include user-submitted content.
We train an initial model on a small but diverse dataset of approximately 1 million documents comprising public and licensed human-written text. The dataset also includes AI-generated text produced by GPT-4 and other frontier language models. The result of training is a neural network capable of reliably predicting whether text was authored by a human or an AI.
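(For anyone curious what that kind of human-vs-AI classifier looks like in the abstract, here’s a toy Python sketch of the general idea. To be clear, this is an illustration only, not Pangram’s actual pipeline, which per their own description is a neural network trained on roughly a million documents.)
```python
# Toy sketch of the general shape of a human-vs-AI text classifier:
# label some examples, learn patterns, predict on new text.
# Illustration only -- not any real detector's model or training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "idk man, the scene kinda fell apart after that, lol",                    # human-ish
    "Honestly? I loved it. Messy, loud, perfect.",                            # human-ish
    "The event proceeded smoothly, fostering collaboration and engagement.",  # AI-ish
    "Each participant contributed meaningfully to the shared narrative.",     # AI-ish
]
labels = ["human", "human", "ai", "ai"]

# Bag-of-words features plus logistic regression stand in here for the real
# thing's large neural network and million-document training set.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["The gathering concluded in a productive and harmonious manner."]))
```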
If you refuse to use any technology that relies on machine learning, algorithms, or neural networks, regardless of the specifics, then obviously that is your prerogative, but you are going to have a hard time using the internet at all.
-
@Trashcan said in AI In Poses:
If you refuse to use any technology that relies on machine learning, algorithms, or neural networks, regardless of the specifics, then obviously that is your prerogative, but you are going to have a hard time using the internet at all.
Yes, that would be ridiculous, and is not even remotely close to anything I’ve said. I have literally worked on ML software to identify cancer cells on digital pathology scans and categorize COVID risks. My objection is to LLMs trained on material without compensation or consent, designed to replace creative folks with crappy knockoffs.
I just asked if someone knew off-hand how these detector tools worked because I couldn’t find the info quickly myself. Not all of them in existence, but the prominent ones at least.
-
@Third-Eye said in AI In Poses:
I increasingly want a game to be very clear on what its stance on AI is because players policing this themselves is a fucking nightmare.
Yeah I’d agree, this is something that staff should be on top of, one way or another. Any situation that dips into accusation drama would be a player problem and should be dealt with accordingly.
-
@Yam Something like “Using LLMs/AI for any contributions to the game, including but not limited to backgrounds, descriptions, wiki images, poses, etc., is a bannable offence. Being a dick if you suspect someone of using LLMs/AI is also a bannable offence.”
-
@Third-Eye said in AI In Poses:
When I pull up a detector it’s to have a second sanity check. Because what else have we got, ya know?
Yeah I totally get that impulse. My concern is rooted in psychology more than the technology itself.
Here’s a different example that maybe illustrates my point better: Grammar checkers. They are sometimes useful and very often completely, utterly wrong. As a professional writer, I have the skill to sift through the chaff to find the suggestions that are actually correct and useful. But the teen homeschoolers I work with don’t. If I hadn’t taken the time to teach them why one should be skeptical of the suggestions from a grammar checker, it would be completely understandable for them to just be like: “Well, this thing obviously knows more than me; I should do what it says.” (Here’s a neat video essay about the problems with someone who doesn’t know grammar well using Grammarly, btw)
So I’m not saying “never use grammar checkers because they suck and have no value”. I’m just saying that they don’t work well enough to be relied upon, and anybody who uses them needs to be well aware of their limitations. This just doesn’t happen when you’re a layperson whose only info is their marketing hype.
Now that’s grammar checkers, where we have a tangible baseline to compare them against (e.g., the CMS style guide). A plagiarism detector is the same. It’ll tell me: “Hey, this seems like it’s ripping off (this article)” and I can go look at the article and decide if it’s right.
With an AI detector, you don’t have that capability. You just have to take its word for it. If it lines up with your vibe, you’re likely to take that as confirmation even if it’s wrong. If it doesn’t line up with your vibe, you have no way to tell whether it’s wrong or you’re wrong.
I also have concerns about the fundamental way these detectors work. GPTZero analyzes factors like “Burstiness”. Yes, sometimes AI writing has low burstiness because it’s overly uniform. But sometimes human writing has low burstiness too, and sometimes AI writing can be massaged to make it burstier.
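(For the curious: “burstiness” is essentially sentence-to-sentence variation. GPTZero describes it roughly as variation in per-sentence perplexity; the toy sketch below uses plain sentence length as a crude stand-in, just to show the idea. It is not GPTZero’s actual metric.)
```python
# Crude illustration of "burstiness": how much sentence-to-sentence variation a
# passage has. Real detectors use fancier signals (e.g. per-sentence perplexity
# from a language model); this toy version uses sentence length as a stand-in.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words. Higher = burstier."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "I went to the store. I bought some bread. I walked back home."
varied = ("I went to the store. Bread! Then, because the rain had finally "
          "stopped, I took the long way home past the river.")

print(burstiness(uniform))  # low: every sentence is about the same length
print(burstiness(varied))   # higher: short and long sentences mixed together
```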
These tools are new, and there hasn’t been a lot of sound research into the subject (even that big article from U of Chicago was a “working paper” that hasn’t been peer-reviewed, as far as I can tell). Their methodology might suck or it might be brilliant, but until more folks have reproduced the research, it can’t be taken as gospel.
-
@Faraday said in AI In Poses:
So I’m not saying “never use grammar checkers because they suck and have no value”. I’m just saying that they don’t work well enough to be relied upon, and anybody who uses them needs to be well aware of their limitations. This just doesn’t happen when you’re a layperson whose only info is their marketing hype.
Make no mistake, I agree with this, and agree with it more the more I play with the detectors, both the one I value enough to pay actual money for and the various other ones out there. I think grammar checkers are one of the better comparisons that’ve come up, because I both use them and frequently just ignore what they’re highlighting.
Gonna be real real here, and apologies if this sounds dismissive to the people who say they can’t tell or don’t care about this stuff. I honestly do trust my gut more at this point, especially if my gut is confirmed by three or four other acquaintances of mine who I think have a really good sense of this stuff. My experience with false positives has been pretty minimal and limited to sets of text I’ve come to the conclusion the detectors aren’t well-equipped to read. But the false negatives, and they happen not infrequently, drive me a little batty. Because at that point I feel like I just have to accept probably being lied to by this person I’m playing with and shrug my shoulders. But, idk, maybe that’s not the worst thing in the world.
I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.
-
@Third-Eye said in AI In Poses:
I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.
Just to be clear on my stance - I can absolutely believe that there are people whose gut is worse than the detectors, and people whose gut is better than the detectors. I’m just critiquing the detectors in isolation and the danger of someone who already has a bad gut relying on them.
Much the same stance I have with self-driving cars, incidentally. They are definitely better than the worst drivers, and worse than the best drivers. But that aside, they are nowhere near reliable enough that I would trust myself or my loved ones to their care.
-
@Tez said in AI In Poses:
I might not run things through a plagiarism checker, but I literally have seen people steal descriptions from other people and reuse them on other games. (@Roz for example. Someone stole her character desc from Arx and tried to use it on Concordia. As I recall, the player was disciplined. I am not sure if they were banned.)
I have been accused of stealing a desc I wrote bc someone first saw it being used by a person who stole it.
-
@Clarion said in AI In Poses:
They just absolutely are not trustable, and I think human intuition of “wait, this writing feels wordy and bland and disconnected from what’s actually happening in the scene” is both more accurate and more useful right now, because if a person isn’t using AI but does sound wordy and bland and disconnected from the scene, that’s still worth checking in about.
Wordy, bland, and disconnected? Shit. Now I’m starting to think that they trained the AI on my poses.
-
@Third-Eye said in AI In Poses:
I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.
The number of times I’ve seen people on social media assert that something is “clearly AI” simply because it is a thing they themselves have never said or seen is astonishing. I can’t imagine it being any better when it’s something important, like the RP they’re presently having.
-
Aight so we can’t use tools to check, and we can’t use our guts to check, and we apparently can’t use both to check. What the fuck do we do, lie back and think of England? Hope for structural change in society? Assume the doofus that wrote like a chimpanzee 1 pose ago mustered the will and intelligence to get their shit together for this poetry contest?
-
@Yam Maybe it’s selfish of me but I personally just don’t worry about it when I RP. When I RP, it’s to relax and have fun. If the person I am RPing with is fun and his or her character connects with mine, that’s great and I want more. If their RP style, pose style, or writing doesn’t connect? That’s a shame, but there are other people to RP with; I’m not going to think any less of them or worry that they are using AI. It’s not like I’m paying for a service and getting scammed by AI. It is what it is.
Ignorance is bliss for me.
-
AI detectors make me twitch because they don’t actually detect AI. They can’t. What they really do is look for patterns in the writing, like how smooth and predictable it is. Clean sentence structure and sensible word choices. Things like that. Ironically, the better your writing is, the more likely a detector is to call it AI.
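(If you want the “predictable” part in concrete terms: one common proxy is how unsurprising a passage looks to a language model, i.e. its perplexity. The sketch below is a rough illustration of that idea, not any particular detector’s actual method.)
```python
# Rough sketch of the "predictability" signal behind many detectors: score text
# by how unsurprising it looks to a small language model (GPT-2 here).
# Illustration only -- not how any specific commercial detector works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token surprise under GPT-2; lower = more predictable text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return float(torch.exp(loss))

# Smooth, generic prose tends to score lower (more predictable) than quirky,
# idiosyncratic prose -- which is exactly why strong writers can get flagged.
print(perplexity("The meeting was productive and everyone contributed meaningfully."))
print(perplexity("Bread! Then the rain quit, so I ambled home the river way."))
```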
My honest suggestion is this: if you think someone is using AI in their poses and you don’t like it, treat it the same way you would any other RP issue. If you don’t enjoy their writing for whatever reason, stop RPing with them. You’re not obligated to RP with anyone.
-
@Raistlin Do you use AI in your images or writing for the MU/RP space?
-
@Ashkuri The general answer is, sometimes. For images, I might if I can’t find an existing image for what I need. If it’s a fictional location or something like that. I’ve helped other people use it for characters and what not as well.
For RP/text, not really. I don’t generate text with it, but I might use it to organize my own text. Not for RP, but for things that should be clear and concise, I might run it through ChatGPT to get advice on how to restructure it.
-
@Raistlin said in AI In Poses:
I’ve helped other people use it for characters and what not as well.
What does this mean?
-
@Ashkuri They’ve asked me to help them make images of their characters in Midjourney and I did. Helping them refine their prompt or whatever.
-
I don’t think comparing it to bad/boring writing is the whole story. With practice, if you are trying and writing your own poses, you might actually get better at it. You might evolve into being more interesting to interact with. It might be worth throwing some RP to a new/bad writer, just to give them some practice so they can improve!
LLMs never will. I resent throwing my time and effort into a black hole where I am getting nothing from it, and neither is anyone else.
-
Using LLMs to RP is, bare minimum, insulting as fuck. It is also pathetic, but that’s a personal opinion and not a cultural one. Literally the major difference between something like a MU* and other single-player role-playing games is the interaction between people and the collaborative element to storytelling. Subverting that in any way goes beyond missing the point–it’s absurd to even want to.
Like what the fuck are you (you being the hypothetical LLMRP strawman I have made up in my head) doing? Why would I want to create anything with you while you’re playing a choose-your-own text adventure? If I want to RP with an LLM, I literally do not need a person on the other end. If what you’re doing is something I could actually do without you, to functionally the same degree of satisfaction, why the fuck is it even happening?
That’s not even considering the larger conversation about LLMs and ethics or whatever.
Ban that shit and toss it in the fucking garbage.
-
Yea it’s exactly the same as getting scammed by AI, sorry. I’m being scammed. You are tricking me into interacting with AI. It doesn’t matter to me at all that my time instead of money is being wasted. It’s the same basic betrayal either way.
This is specifically re: text, not images. I’ve used AI images before - made by other people because I lack the knack for prompts, not because of moral authority or whatever. I probably won’t again, because the more this stuff becomes pervasive the less comfortable I get with it. I have taken an AI image and brought it to a flesh and blood artist and paid them to draw a character based on it, though, and I felt okay about that.
Re: text, that is the meat of the thing.