Brand MU Day

    Posts

    • RE: Bad Stuff Happening IC

      @Third-Eye said in Bad Stuff Happening IC:

      @Roz said in Bad Stuff Happening IC:

      I answered “if I have control over it” which isn’t EXACTLY right, I don’t want to be needing to dictate the details. But it’s moreso a level of – understanding the risk I’m getting into, and also that I definitely trust some GMs more than others.

      This was both how I voted and I feel. I wax nostalgic about The Greatest Generation MUSH a lot and all my characters dying, but that was only fun because of how OOCly clear the risks going in were.

      Yeah that’s pretty much how I land also. When my PC got blown up unexpectedly on TGG, it was annoying, but I couldn’t complain because I knew what I was signing up for. When my PC got accidentally spaced on SW3, it was way more annoying because that level of “die due to one bad die roll” wasn’t expected.

      I generally welcome any IC drama that isn’t character-ending, but I prefer it to be collaborative. I care about story, and setbacks are important, but it’s also a game. There’s a middle ground.

      posted in Game Gab
      Faraday
    • RE: AI In Poses

      @somasatori said in AI In Poses:

      I am apparently in Reviewer #2 brain these days whenever I look at any research work.

      I’m right there with you. I literally did a whole homeschool lesson with my kids on that whitepaper, showing how to think critically about the potential biases and how the company frames the results.

      Anyway, I didn’t dive too deep into the underlying studies themselves, focusing on the meta-analysis part. @Trashcan was right to point out that some of them were pretty dated.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @somasatori said in AI In Poses:

      while I’m not disputing that Originality.ai is good as I’ve never used it, this is the same vibe as “we have investigated ourselves and found that we’re the best”

      Good to be skeptical, but I don’t think it’s quite that bad. More like “5 out of 6 doctors agree!” advertising. It is a meta-analysis of studies that (as far as I can tell) were done by other people. There are still a host of potential biases in play. My general point was that even with all those potential biases, they’re still admitting that sometimes they’re only getting a “B”.

      @MisterBoring said in AI In Poses:

      “I don’t want to RP with GenAI because I’m here to RP with real people.”

      This. But also: “I think GenAI is terrible and I don’t want to have anything to do with it. I especially don’t want my poses fed into their plagiarism machine.”

      I don’t really care how good it is. Even if they fixed every single one of its flaws and it was a better RPer than everyone else I’d ever played with, I still wouldn’t want to play with someone using it.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Pavel said in AI In Poses:

      @somasatori Or even if you do recognise those as “problems,” it reads more like the typical wannabe Wordsworth or Hemingway crap that I attempt whenever I get too big for my britches.

      Yeah exactly. When I ran some old “overdramatically wordsmithy” poses through the AI checkers, it flagged those too. I just question their methodology.

      @Trashcan said in AI In Poses:

      Not a huge sample size, of course, but I thought it was interesting.

      That is interesting, thanks for sharing.

      And look, even as a skeptic I’m not saying that the AI checkers don’t work at all. That’s clearly not the case. But I did find this interesting whitepaper from Originality.AI. A couple things that stood out to me:

      1. No single tool was the best in every study, and there was significant variance in tool performance across studies. This suggests that the effectiveness of these tools may vary greatly depending on how you’re using them. (which isn’t great if you want something reliable)

      a231b684-158b-4d85-9ee8-68f7ee09664e-image.png

      2. Even the tool that’s claiming it’s the best only got a B+ in a couple of the studies. Maybe that’s good enough for some purposes, but it gives me pause.

      eb695736-1636-4af5-844d-10942fd866a8-image.png

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Hobbie said in AI In Poses:

      it takes only ten minutes to train the damn thing to write passable poses?

      Sure, as long as you don’t mind when it can’t keep the details of Evelyn’s backstory straight, forgets that she has a fan in her hand from one pose to the next, doesn’t take into account how the scene she had last month would affect her dealings with Edward, etc. And heaven help you with theme consistency if the RP is happening in an original or lesser-known setting.

      GenAI is good at generating plausible text - that’s literally its one job. It still isn’t very good at generating a coherent story.

      I look at those example poses and they make me cringe. While I’d be reasonably confident they were AI-generated, to @Yam’s point about being a gamerunner, I don’t know that I’d be confident enough to ban somebody over it. And the AI detectors aren’t trustworthy enough for me. (Some I tried insisted that some poses from 2001 had a better-than-average chance of being AI-generated, lol.)

      I dunno. It sucks. I hate GenAI.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Yam said in AI In Poses:

      Aight so we can’t use tools to check, and we can’t use our guts to check,

      I’m not sure where you’re getting that from.

      Some people think it’s perfectly fine to use the AI detectors.

      Other people think it’s better to use your gut.

      A third group of people prefer to use the detector to back up their gut.

      You do you.

      Personally I’m in the same camp as @KDraygo . If your vibe is off-putting to me, or I’m reasonably convinced you’re using AI, I’m not going to RP with you. I’m sure we’ll both survive.

      When I spoke of “structural change”, it was with regard to education. There I think it’s a bit more complicated, but that’s kinda irrelevant from a gaming perspective.

      posted in Rough and Rowdy
      Faraday
    • RE: AI Megathread

      This English professor asserting that em-dashes are the biggest tell-tale sign of her students using AI is what I’m talking about when I complain about people picking on the dashes.

      posted in No Escape from Reality
      Faraday
    • RE: AI In Poses

      @Third-Eye said in AI In Poses:

      I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.

      Just to be clear on my stance - I can absolutely believe that there are people whose gut is worse than the detectors, and people whose gut is better than the detectors. I’m just critiquing the detectors in isolation and the danger of someone who already has a bad gut relying on them.

      Much the same stance I have with self-driving cars, incidentally. They are definitely better than the worst drivers, and worse than the best drivers. But that aside, they are nowhere near reliable enough that I would trust myself or my loved ones to their care.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Third-Eye said in AI In Poses:

      When I pull up a detector it’s to have a second sanity check. Because what else have we got, ya know?

      Yeah I totally get that impulse. My concern is rooted in psychology more than the technology itself.

      Here’s a different example that maybe illustrates my point better: Grammar checkers. They are sometimes useful and very often completely, utterly wrong. As a professional writer, I have the skill to sift through the chaff to find the suggestions that are actually correct and useful. But the teen homeschoolers I work with don’t. If I hadn’t taken the time to teach them why one should be skeptical of the suggestions from a grammar checker, it would be completely understandable for them to just be like: “Well, this thing obviously knows more than me; I should do what it says.” (Here’s a neat video essay about the problems with someone who doesn’t know grammar well using Grammarly, btw)

      So I’m not saying “never use grammar checkers because they suck and have no value”. I’m just saying that they don’t work well enough to be relied upon, and anybody who uses them needs to be well aware of their limitations. This just doesn’t happen when you’re a layperson whose only info is their marketing hype.

      Now that’s grammar checkers, where we have a tangible baseline to compare against (e.g., the CMS style guide). Plagiarism detectors are the same. They’ll tell me: “Hey, this seems like it’s ripping off (this article)” and I can go look at the article and decide if it’s right.

      With AI detectors, you don’t have that capability. You just have to take its word for it. If it lines up with your vibe, you’re probably likely to take that as confirmation even if it’s wrong. If it doesn’t line up with your vibe, you have no way to tell whether it’s wrong or you’re wrong.

      I also have concerns about the fundamental way these detectors work. GPTZero analyzes factors like “burstiness”. Yes, sometimes AI writing has low burstiness because it’s overly uniform. But sometimes human writing has low burstiness too, and sometimes AI writing can be massaged to make it burstier.
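      As a rough illustration of what a “burstiness” signal might look like (this is a toy proxy based on sentence-length variation, not GPTZero’s actual metric):

      ```python
      import re
      import statistics

      def burstiness(text: str) -> float:
          """Toy burstiness proxy: coefficient of variation of sentence lengths.

          Uniform text (every sentence the same length) scores 0; text that
          mixes short and long sentences scores higher. Illustration only,
          not GPTZero's real formula.
          """
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          lengths = [len(s.split()) for s in sentences]
          if len(lengths) < 2:
              return 0.0
          mean = statistics.mean(lengths)
          return statistics.stdev(lengths) / mean if mean else 0.0

      uniform = "The cat sat down. The dog sat down. The bird sat down."
      varied = "Stop. The storm rolled in off the coast, flattening the grass. We ran."
      print(burstiness(uniform))  # 0.0 - perfectly uniform sentences
      print(burstiness(varied))   # noticeably higher
      ```

      The catch is exactly the one above: plenty of humans write uniformly, and anyone can vary their sentence lengths on purpose, so a low score proves nothing by itself.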

      These tools are new, and there hasn’t been a lot of sound research into the subject. Even that big article from U of Chicago was a “working paper” that (as far as I can tell) hasn’t been peer-reviewed. Their methodology might suck or it might be brilliant, but until more folks have reproduced the research, it can’t be taken as gospel.

      posted in Rough and Rowdy
      Faraday
    • RE: AI Megathread

      @Pavel Which GenAI will almost certainly never be able to because there isn’t enough COBOL stuff out there for it to steal for training data.

      posted in No Escape from Reality
      Faraday
    • RE: AI Megathread

      @Aria said in AI Megathread:

      I’m just sitting here watching this with a look of vague horror on my face

      Oh dear heavens, all the sympathy. That sounds like my worst nightmare.

      I worked in FDA-regulated software for a while. I’m sure @Aria knows this well, but for non-software folks: In safety-critical, regulated industries, it is a well-known fact—learned through bitter experience, horrific recalls, and lost lives—that it is utterly impossible to test software thoroughly enough to ensure it’s safe once it’s already been built. There are just too many edge cases and permutations. Safety/quality has to be baked in through sound design practices.

      AI has no concept of this. You can ask it to “write secure code” or whatever, but it fundamentally doesn’t know how to do that. I sincerely hope it does not take a rash of Therac-25 level disasters to teach the software industry that lesson again.

      posted in No Escape from Reality
      Faraday
    • RE: AI In Poses

      @Trashcan said in AI In Poses:

      If you refuse to use any technology that relies on machine learning, algorithms, or neural networks regardless of the specifics then obviously that is your prerogative but you are going to have a hard time using the internet at all.

      Yes, that would be ridiculous, and is not even remotely close to anything I’ve said. I have literally worked on ML software to identify cancer cells on digital pathology scans and categorize covid risks. My objection is to LLMs trained on material without compensation or consent, designed to replace creative folks with crappy knockoffs.

      I just asked if someone knew off-hand how these detector tools worked because I couldn’t find the info quickly myself. Not all of them in existence, but the prominent ones at least.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      OK genuine question because I couldn’t find anything with a quick search and was too lazy to dive deep, but…

      Don’t AI detectors also use LLMs? And couldn’t they then be training on the stuff they’re scanning?

      If so, by putting poses into them, I could potentially be using other people’s RP to feed the very machine I hate so much.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Yam said in AI In Poses:

      There is a human that’s in charge of disinviting people to games.

      I’m not only talking about staff using AI detectors to ban people, I’m also talking about people running each other’s poses through AI detectors.

      If your human gut is saying that something is AI generated, that’s one thing. I just don’t trust these AI detector tools. Everything I’ve seen about them from experts tells me that the fundamental way they work is flawed, and I’ve seen enough drama around false-positives that I don’t want anything to do with them. YMMV.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Tez said in AI In Poses:

      That’s the case I’m actually interested in, not the anxieties people have that they might accidentally get flagged as AI and banned on an off day. I just don’t think that’s happening.

      Are you limiting your question solely to MUs? In that case, no, I am not aware of anyone getting disciplined falsely for AI use in MUs.

      But in the real world? It’s absolutely happening. There’s no reason to believe it won’t happen here too if people start routinely feeding things into AI detectors.

      @Tez said in AI In Poses:

      We can and do punish players for plagiarism in this hobby, so if we treat them as equivalent, then why wouldn’t we punish them?

      I didn’t say I wouldn’t ban someone for plagiarism if I believed they did it; I said I don’t routinely run poses through a plagiarism detector hunting for violations. Also, while I may personally feel that GenAI is a plagiarism machine, I do acknowledge that mainstream society doesn’t see something AI-generated as plagiarism. So I do not attribute the same malice to the action.

      posted in Rough and Rowdy
      Faraday
    • RE: AI Megathread

      @InkGolem said in AI Megathread:

      “It’s not x. It’s y.”

      The reason that this—or anything else GenAI does—comes up frequently is because the algorithms are recognizing patterns in actual writing. It doesn’t make this stuff up out of thin air.

      Now yes, sometimes it uses those constructs in the wrong place / wrong way and that can be a tell. Or you can find it in weird places, like an email from your friend. But the construct itself isn’t a tell of AI use. The AI is just copying what it sees.

      For example, from a few book sources in the 1900s:

      It’s not about dieting. It’s about freedom from the diet.

      It’s not a newspaper. It’s a public responsibility.

      It’s not being moved. It’s simply joy.

      It’s not love. It’s something higher than love.

      It took about 60 seconds to find these and a zillion other instances on a Google Ngram search of published literature.

      posted in No Escape from Reality
      Faraday
    • RE: AI In Poses

      @Ashkuri I doubt I would try to enforce such a policy for individual poses, just as I don’t routinely run other people’s poses through a plagiarism checker. But speaking hypothetically…

      If I did engage, I’d probably do so on the merits (or lack thereof) of the poses themselves. “It seems that you’re struggling with the theme in your poses…” or “I’ve noticed a change in your poses recently. It’s giving AI vibes…” with some constructive criticism.

      Ultimately, you have the right to boot someone from your own game for any reason or no reason. If they’re giving you a bad vibe, you don’t need to prove it beyond a shadow of a doubt. You just need to be convinced yourself that you’re doing the right thing by showing them the door.

      For a less extreme solution, just stop playing with them. If their poses are that nonsensical, others will probably stop playing with them too. Feels like kind of a self-limiting problem to me.

      posted in Rough and Rowdy
      Faraday
    • RE: AI Megathread

      @Yam said in AI Megathread:

      if the computer drives the car better than my anxious ass, I’ll ride along.

      That’s a big “if” though, and is the crux of my argument.

      @Trashcan said in AI Megathread:

      If this facial recognition program does a better job than humans, yes I am okay with it. Humans are notoriously poor eye witnesses.

      The difference is that many people know that humans are notoriously poor eye witnesses. Many people trust machines more than they trust other humans, even when said machines are actually worse than the humans they’re replacing. That’s the psychological effect I’m referring to.

      posted in No Escape from Reality
      Faraday
    • RE: AI Megathread

      @Yam That isn’t exactly what I said. It’s a complex issue requiring multiple lines of defense, better education, and structural change. But I am saying that even 99% accuracy is too low.

      For example, say you have a self-driving car. Are you OK if it gets into an accident 1 out of every 100 times you drive it?

      Say you have a facial recognition program that law enforcement leans heavily on. Are you OK if it mis-identifies 1 out of every 100 suspects?

      I’m not.

      1% failure doesn’t sound like much until you multiply it across millions of cases.
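      Back-of-the-envelope, with made-up numbers purely to show the scaling:

      ```python
      # Hypothetical figures, just to illustrate how a 1% failure rate compounds.
      failure_rate = 0.01          # the "99% accurate" tool

      # One driver, two trips a day for a year:
      trips_per_year = 2 * 365
      expected_failures_one_driver = failure_rate * trips_per_year
      print(expected_failures_one_driver)   # roughly 7 expected incidents a year

      # Spread across a million drivers:
      print(expected_failures_one_driver * 1_000_000)

      # Chance of at least one failure somewhere in that year of trips:
      p_at_least_one = 1 - (1 - failure_rate) ** trips_per_year
      print(p_at_least_one)   # well above 0.999: failure is near-certain
      ```

      The per-use number looks tiny; the population-level number is what actually matters.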

      posted in No Escape from Reality
      Faraday
    • RE: AI Megathread

      @Trashcan I think you’re underestimating the psychological effect that takes place when people trust in tools. There’s a big difference between “I think this student may have cheated” and “This tool is telling me this student cheated” when laypeople don’t understand the limitations of the tool.

      I’ve studied human factors design, and there’s something that happens with people’s mindsets once a computer gets involved. We see this all the time - whether it’s reliance on facial recognition in criminal applications, self-driving cars, automated medical algorithms, etc.

      Also, plagiarism detectors are less concerning because they can point to a source, and the teacher can do a human review to determine whether they think it’s too closely copied. That doesn’t work for AI detection. It’s all based on vibes, which can disproportionately impact minority populations (like neurodivergent and ESL students). I also highly doubt that hundreds of thousands of students are falsely accused of plagiarism each year, but I can’t prove it.

      As for the alternative? I don’t think there is one single silver bullet. IMHO we need structural change.

      posted in No Escape from Reality
      Faraday