Brand MU Day
    • Profile
    • Following 0
    • Followers 1
    • Topics 6
    • Posts 560
    • Groups 0

    Posts

    • RE: Bad Stuff Happening IC

      @Roz I agree we’re mostly on the same page. I think I was just viewing bleed as a specific type of maladaptive behavior where one over-identifies with the character. Self-insert gone awry. The classic example being: two characters are in love and one player starts letting that bleed over to their behavior toward the other player.

      That feels very different from, say, ragequitting and throwing your controller across the room after losing a Fortnite match. That’s also unhealthy, obviously, but I personally wouldn’t call it bleed.

      The kind of bleed I’m describing feels closer to the parasocial relationships people form with influencers.

      I dunno, maybe they’re all just different sides of the same coin and I’m trying to make an unnecessary distinction.

      posted in Game Gab
      Faraday
    • RE: Bad Stuff Happening IC

      @Roz said in Bad Stuff Happening IC:

      I think bleed — by which I just mean having an emotional response to RP or IC events that is bad enough to feel harmful or maladaptive in some fashion, stuff that goes beyond the standard sort of emotional reaction you’d have to fiction – is incredibly common. Like, the vast majority of RPers will experience it in some fashion at one point or another.

      That’s an interesting perspective. I’m not sure that I define “bleed” the same way, because I think the line between “standard sort of emotional reaction to fiction” and “maladaptive” is not well-defined.

      People have emotional responses to fiction. People have emotional responses to gaming. It’s natural that someone is going to have emotional responses to fiction-gaming. I don’t personally call that “bleed”.

      Bleed to me is when you fail to keep a healthy boundary between you and the character. Like when I cry at Titanic, it’s not because I think I’m Rose. It’s not because I’m over-empathizing with the character, or her emotions are bleeding into mine. It’s just a tragic story. Whereas I see bleed as taking on the character’s emotions as your own to an unhealthy degree.

      posted in Game Gab
      Faraday
    • RE: Bad Stuff Happening IC

      @Pavel said in Bad Stuff Happening IC:

      Bleed, to my mind, is usually accidental

      I guess it depends on your definition. If ink bleeds through paper, it could be because you tried to put something under it but it wasn’t enough, but it could also be that you didn’t even try at all (through innocent ignorance or recklessness). Either way the effect is the same.

      That said, I agree with your basic premise that a concerning number of RPers don’t seem to believe that preventing bleed is even important/valuable in the first place.

      posted in Game Gab
      Faraday
    • RE: Bad Stuff Happening IC

      @Ashkuri said in Bad Stuff Happening IC:

      Interesting to me that no one (yet) voted for “Yes but only physical peril, not social.” Social to me covers the like

      @Yam said in Bad Stuff Happening IC:

      someone’s gonna’ post a proclamation the next morning LORD EIRAN, LAYABOUT OF THE LAURENTS, SEEN BEING AN UNMARRIAGEABLE IDIOT

      that kind of thing. I would consider that a Social Bad Thing for a person to encounter.

      One thing I find interesting is that other people tend to transfer IC humiliation onto the humiliated player. Like I had one character who was constantly screwing up (by design), and a non-trivial number of players acted like I was the idiot. It was very puzzling. I don’t know if it was just so alien to them that someone would willingly set their own character up for humiliation, or if they just genuinely thought I was dumb because my character did something stupid or what. But it wasn’t a particularly fun experience.

      posted in Game Gab
      Faraday
    • RE: Bad Stuff Happening IC

      @Third-Eye said in Bad Stuff Happening IC:

      @Roz said in Bad Stuff Happening IC:

      I answered “if I have control over it” which isn’t EXACTLY right, I don’t want to be needing to dictate the details. But it’s moreso a level of – understanding the risk I’m getting into, and also that I definitely trust some GMs more than others.

      This was both how I voted and I feel. I wax nostalgic about The Greatest Generation MUSH a lot and all my characters dying, but that was only fun because of how OOCly clear the risks going in were.

      Yeah that’s pretty much how I land also. When my PC got blown up unexpectedly on TGG, it was annoying, but I couldn’t complain because I knew what I was signing up for. When my PC got accidentally spaced on SW3, it was way more annoying because that level of “die due to one bad die roll” wasn’t expected.

      I generally welcome any IC drama that isn’t character-ending, but I prefer it to be collaborative. I care about story, and setbacks are important, but it’s also a game. There’s a middle ground.

      posted in Game Gab
      Faraday
    • RE: AI In Poses

      @somasatori said in AI In Poses:

      I am apparently in Reviewer #2 brain these days whenever I look at any research work.

      I’m right there with you. I literally did a whole homeschool lesson with my kids on that whitepaper, showing how to think critically about the potential biases and how the company frames the results.

      Anyway, I didn’t dive too deep into the underlying studies themselves, focusing on the meta-analysis part. @Trashcan was right to point out that some of them were pretty dated.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @somasatori said in AI In Poses:

      while I’m not disputing that Originality.ai is good as I’ve never used it, this is the same vibe as “we have investigated ourselves and found that we’re the best”

      Good to be skeptical, but I don’t think it’s quite that bad. More like “5 out of 6 doctors agree!” advertising. It is a meta-analysis of studies that (as far as I can tell) were done by other people. There are still a host of potential biases in play. My general point was that even with all those potential biases, they’re still admitting that sometimes they’re only getting a “B”.

      @MisterBoring said in AI In Poses:

      “I don’t want to RP with GenAI because I’m here to RP with real people.”

      This. But also: “I think GenAI is terrible and I don’t want to have anything to do with it. I especially don’t want my poses fed into their plagiarism machine.”

      I don’t really care how good it is. Even if they fixed every single one of its flaws and it was a better RPer than everyone else I’d ever played with, I still wouldn’t want to play with someone using it.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Pavel said in AI In Poses:

      @somasatori Or even if you do recognise those as “problems,” it reads more like the typical wannabe Wordsworth or Hemingway crap that I attempt whenever I get too big for my britches.

      Yeah exactly. When I ran some old “overdramatically wordsmithy” poses through the AI checkers, it flagged those too. I just question their methodology.

      @Trashcan said in AI In Poses:

      Not a huge sample size, of course, but I thought it was interesting.

      That is interesting, thanks for sharing.

      And look, even as a skeptic I’m not saying that the AI checkers don’t work at all. That’s clearly not the case. But I did find this interesting whitepaper from Originality.AI. A couple things that stood out to me:

      1. No single tool was the best in every study, and there was significant variance in tool performance across studies. This suggests that the effectiveness of these tools may vary greatly depending on how you’re using them. (Which isn’t great if you want something reliable.)


      2. Even the tool that’s claiming to be the best only got a B+ in a couple of the studies. Maybe that’s good enough for some purposes, but it gives me pause.


      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Hobbie said in AI In Poses:

      it takes only ten minutes to train the damn thing to write passable poses?

      Sure, as long as you don’t mind when it can’t keep the details of Evelyn’s backstory straight, forgets that she has a fan in her hand from one pose to the next, doesn’t take into account how the scene she had last month would affect her dealings with Edward, etc. And heaven help you with theme consistency if the RP is happening in an original or lesser-known setting.

      GenAI is good at generating plausible text - that’s literally its one job. It still isn’t very good at generating a coherent story.

      I look at those example poses and they make me cringe. While I’d be reasonably confident they were AI-generated, to @Yam’s point about being a gamerunner, I don’t know that I’d be confident enough to ban somebody over it. And the AI detectors aren’t trustworthy enough for me. (Some I tried insisted that some poses from 2001 had a better-than-average chance of being AI-generated, lol.)

      I dunno. It sucks. I hate GenAI.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Yam said in AI In Poses:

      Aight so we can’t use tools to check, and we can’t use our guts to check,

      I’m not sure where you’re getting that from.

      Some people think it’s perfectly fine to use the AI detectors.

      Other people think it’s better to use your gut.

      A third group of people prefer to use the detector to back up their gut.

      You do you.

      Personally I’m in the same camp as @KDraygo. If your vibe is off-putting to me, or I’m reasonably convinced you’re using AI, I’m not going to RP with you. I’m sure we’ll both survive.

      When I spoke of “structural change”, it was in regard to education. There I think it’s a bit more complicated, but that’s kinda irrelevant from a gaming perspective.

      posted in Rough and Rowdy
      Faraday
    • RE: AI Megathread

      This English professor asserting that em-dashes are the biggest tell-tale sign of her students using AI is what I’m talking about when I complain about people picking on the dashes.

      posted in No Escape from Reality
      Faraday
    • RE: AI In Poses

      @Third-Eye said in AI In Poses:

      I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.

      Just to be clear on my stance - I can absolutely believe that there are people whose gut is worse than the detectors, and people whose gut is better than the detectors. I’m just critiquing the detectors in isolation and the danger of someone who already has a bad gut relying on them.

      Much the same stance I have with self-driving cars, incidentally. They are definitely better than the worst drivers, and worse than the best drivers. But that aside, they are nowhere near reliable enough that I would trust myself or my loved ones to their care.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Third-Eye said in AI In Poses:

      When I pull up a detector it’s to have a second sanity check. Because what else have we got, ya know?

      Yeah I totally get that impulse. My concern is rooted in psychology more than the technology itself.

      Here’s a different example that maybe illustrates my point better: Grammar checkers. They are sometimes useful and very often completely, utterly wrong. As a professional writer, I have the skill to sift through the chaff to find the suggestions that are actually correct and useful. But the teen homeschoolers I work with don’t. If I hadn’t taken the time to teach them why one should be skeptical of the suggestions from a grammar checker, it would be completely understandable for them to just be like: “Well, this thing obviously knows more than me; I should do what it says.” (Here’s a neat video essay about the problems with someone who doesn’t know grammar well using Grammarly, btw)

      So I’m not saying “never use grammar checkers because they suck and have no value”. I’m just saying that they don’t work well enough to be relied upon, and anybody who uses them needs to be well aware of their limitations. This just doesn’t happen when you’re a layperson whose only info is their marketing hype.

      Now that’s grammar checkers, where we have a tangible baseline to compare against (e.g., the CMS style guide). Plagiarism detectors are the same. One will tell me: “Hey, this seems like it’s ripping off (this article)” and I can go look at the article and decide if it’s right.

      With AI detectors, you don’t have that capability. You just have to take their word for it. If it lines up with your vibe, you’re likely to take that as confirmation even if it’s wrong. If it doesn’t line up with your vibe, you have no way to tell whether it’s wrong or you’re wrong.

      I also have concerns about the fundamental way these detectors work. GPTZero analyzes factors like “burstiness”. Yes, sometimes AI writing has low burstiness because it’s overly uniform. But sometimes human writing has low burstiness too, and sometimes AI writing can be massaged to make it burstier.
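      For anyone curious what a metric like that even looks like under the hood, here’s a toy sketch. This is my own illustration (treating burstiness as variation in sentence length), not GPTZero’s actual formula, which isn’t public as far as I know:

      ```python
      import re
      import statistics

      def burstiness(text: str) -> float:
          """Rough proxy for 'burstiness': how much sentence lengths vary.

          Returns the coefficient of variation of sentence word counts;
          higher means more 'bursty' (uneven) writing, 0 means uniform.
          """
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          lengths = [len(s.split()) for s in sentences]
          if len(lengths) < 2:
              return 0.0
          return statistics.stdev(lengths) / statistics.mean(lengths)

      uniform = "The cat sat down. The dog ran off. The bird flew up."
      varied = ("Stop. The storm rolled in off the bay for hours before "
                "anyone thought to close the shutters. Silence.")

      print(burstiness(uniform) < burstiness(varied))  # True: varied text scores higher
      ```

      The point being: a statistic this crude can be gamed in either direction, and a human who naturally writes uniform sentences scores “low burstiness” just like a bot.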

      These tools are new, and there hasn’t been a lot of sound research into the subject. (Even that big article from U of Chicago was a “working paper” that hasn’t been peer-reviewed, as far as I can tell.) Their methodology might suck or it might be brilliant, but until more folks have reproduced the research, it can’t be taken as gospel.

      posted in Rough and Rowdy
      Faraday
    • RE: AI Megathread

      @Pavel Which GenAI will almost certainly never be able to because there isn’t enough COBOL stuff out there for it to steal for training data.

      posted in No Escape from Reality
      Faraday
    • RE: AI Megathread

      @Aria said in AI Megathread:

      I’m just sitting here watching this with a look of vague horror on my face

      Oh dear heavens, all the sympathy. That sounds like my worst nightmare.

      I worked in FDA-regulated software for a while. I’m sure @Aria knows this well, but for non-software folks: in safety-critical, regulated industries, it is a well-known fact—learned through bitter experience, horrific recalls, and lost lives—that it is utterly impossible to test software thoroughly enough to ensure it’s safe once it’s already been built. There are just too many edge cases and permutations. Safety and quality have to be baked in through sound design practices.

      AI has no concept of this. You can ask it to “write secure code” or whatever, but it fundamentally doesn’t know how to do that. I sincerely hope it does not take a rash of Therac-25 level disasters to teach the software industry that lesson again.

      posted in No Escape from Reality
      Faraday
    • RE: AI In Poses

      @Trashcan said in AI In Poses:

      If you refuse to use any technology that relies on machine learning, algorithms, or neural networks regardless of the specifics then obviously that is your prerogative but you are going to have a hard time using the internet at all.

      Yes, that would be ridiculous, and is not even remotely close to anything I’ve said. I have literally worked on ML software to identify cancer cells on digital pathology scans and categorize COVID risks. My objection is to LLMs trained on material without compensation or consent, designed to replace creative folks with crappy knockoffs.

      I just asked if someone knew off-hand how these detector tools worked because I couldn’t find the info quickly myself. Not all of them in existence, but the prominent ones at least.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      OK genuine question because I couldn’t find anything with a quick search and was too lazy to dive deep, but…

      Don’t AI detectors also use LLMs? And couldn’t they then be training on the stuff they’re scanning?

      If so, by putting poses into them, I could potentially be using other people’s RP to feed the very machine I hate so much.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Yam said in AI In Poses:

      There is a human that’s in charge of disinviting people to games.

      I’m not only talking about staff using AI detectors to ban people, I’m also talking about people running each other’s poses through AI detectors.

      If your human gut is saying that something is AI generated, that’s one thing. I just don’t trust these AI detector tools. Everything I’ve seen about them from experts tells me that the fundamental way they work is flawed, and I’ve seen enough drama around false-positives that I don’t want anything to do with them. YMMV.

      posted in Rough and Rowdy
      Faraday
    • RE: AI In Poses

      @Tez said in AI In Poses:

      That’s the case I’m actually interested in, not the anxieties people have that they might accidentally get flagged as AI and banned on an off day. I just don’t think that’s happening.

      Are you limiting your question solely to MUs? In that case, no, I am not aware of anyone getting disciplined falsely for AI use in MUs.

      But in the real world? It’s absolutely happening. There’s no reason to believe it won’t happen here too if people start routinely feeding things into AI detectors.

      @Tez said in AI In Poses:

      We can and do punish players for plagiarism in this hobby, so if we treat them as equivalent, then why wouldn’t we punish them?

      I didn’t say I wouldn’t ban someone for plagiarism if I believed they did it, I said I don’t routinely run poses through a plagiarism detector hunting for violations. Also while I may personally feel that GenAI is a plagiarism machine, I do acknowledge that mainstream society doesn’t see something AI-generated as plagiarism. So I do not attribute the same malice to the action.

      posted in Rough and Rowdy
      Faraday
    • RE: AI Megathread

      @InkGolem said in AI Megathread:

      “It’s not x. It’s y.”

      The reason that this—or anything else GenAI does—comes up frequently is that the algorithms are recognizing patterns in actual writing. It doesn’t make this stuff up out of thin air.

      Now yes, sometimes it uses those constructs in the wrong place / wrong way and that can be a tell. Or you can find it in weird places, like an email from your friend. But the construct itself isn’t a tell of AI use. The AI is just copying what it sees.

      For example, from a few book sources in the 1900s:

      It’s not about dieting. It’s about freedom from the diet.

      It’s not a newspaper. It’s a public responsibility.

      It’s not being moved. It’s simply joy.

      It’s not love. It’s something higher than love.

      It took about 60 seconds to find these and a zillion other instances on a Google Ngram search of published literature.

      posted in No Escape from Reality
      Faraday