AI Megathread
-
Spotted in the wild; made me lol at least as much as it made me groan.
-
@Pavel said in AI Megathread:
@Faraday said in AI Megathread:
The LLM doesn’t know whether something is true, and it doesn’t care.
I know this may seem like a quibble, but I feel it’s an important distinction: It can’t do either of those things, because it’s not intelligent. It’s a very fancy word predictor, it can’t think, it can’t know, it can’t create.
(This very obvious rant is not directed to anyone in particular but I still feel like it needed to be said…)
Neither can a computer. It can’t think, it can’t know, it can’t create any more than an AI can; it can’t even get as close as AI can. But every person here still uses computers. No one’s given up the machine and accessories filled with toxic metals, clawed from the earth as quickly as possible with no regard for the environments being disturbed, all in the name of profit. Metals so valuable that they can mean the end or continuation of wars that kill hundreds of thousands, with body counts that climb every day that possession of those metals isn’t agreed upon or bargained away to stop the death and dying. Metals that go into the computers we all use, and that will likely return to the earth to sit in a landfill forever, with parts that will never biodegrade, in exchange for a few short years of limited, non-thinking, non-knowing, non-creative use.
Everyone here also uses the internet, with all the filth and hate and disgusting things that can be found online. Darkwebnuffsaid. The same internet that contributed to the death of newspapers and the decline of journalism, along with people’s recognition of truth and facts. The same internet that allowed Amazon to put countless small businesses out of business across the world. But no one here has stopped using that. Everyone is still contributing online to companies that track and store massive amounts of information on as many people as possible, on servers around the globe that devour energy like a bottomless pit and exude heat like a hell-connected furnace. Those companies then weaponize that information for profit and political leverage intense enough to turn elections in the most powerful country in the world, very specifically by weaponizing fear and hate, turning those dials up to 11 on people’s internet feeds to sway votes, with no regard for what those people might do with their fear and hate outside the voting booth.
How is it that AI is the problem?
You can’t be serious.
Please.
One might think you are being just a tad too…
Critical.
But okay. Rant done. Get on your computer and tell me on the internet how AI is so problematic you won’t use it.
I’m totally listening.
-
@MisterBoring said in AI Megathread:
Is it sad that I’ve met people whose natural writing and RP are so bad that I incorrectly assumed they were LLMs?
-
@Jynxbox said in AI Megathread:
But okay. Rant done. Get on your computer and tell me on the internet how AI is so problematic you won’t use it.
I can. I did in previous posts, but I’ll recap briefly: There is a benefit/harm ratio to every invention mankind has ever made.
Harnessing fire can bring warmth, but it can also destroy. Splitting the atom can power a nuclear power plant or wreak untold destruction. The internet contains filth, but it also powers information, education, and human connection. We have to gauge tools based on how they are used. GenAI, by my estimation, brings tremendous harm and next to zero actual good. Every use case I’ve seen - from customer service chatbots to research - is terrible compared to the human counterparts it’s driving out of business.
We also, traditionally, have gauged tools based on their legality. Napster brought free music to millions. Some saw that as a good thing, but it was illegal, and it was stopped. The GenAI industry is committing copyright infringement on a scale that would make Napster blush.
There are many people who boycott Amazon, social media, or whatever based on the harms they perceive. There are environmentalists who would scold me for the plastic soda bottles I use. We’re all allowed to pick our battles. Fighting GenAI is one of mine.
-
@Juniper said in AI Megathread:
People call this “hallucination”. I think we should stop letting them assign a new name to an existing phenomenon. The LLM is malfunctioning.
No, it is not. What the AI spits out is meaningless to the AI. It just spits out the most likely next set of words based on the input it received and what it has already generated. For it to malfunction, it would need to start writing sentences devoid of grammar. As long as what it writes is grammatically correct and a somewhat rational statement, it has succeeded.
It is saying things that are wrong.
Yes, but factually accurate answers are not the purpose. Grammatically correct sentences are.
It is failing to do what it was designed to do.
No. It is doing exactly what it is designed to do. It’s failing at doing what entrepreneurs, marketers, and other PT Barnum snake oil salespeople want the public to think it can do.
EDIT: To actually contribute something to the thread, about the only thing I am willing to use AI on for MU purposes is played-bys and maybe creating images on the wiki for the various venues on the grid. I have grown tired of seeing Jason Momoa as the image for Seksylonewulf McRiptabs #247.
EDIT 2: Typos.
-
@Faraday I boycott Amazon because AWS absolutely sucks.
I’d explain more but I have to go write more Lambda functions while closing every pop-up telling me to use Q to streamline my processes.
-
@Jynxbox said in AI Megathread:
@Pavel said in AI Megathread:
@Faraday said in AI Megathread:
The LLM doesn’t know whether something is true, and it doesn’t care.
I know this may seem like a quibble, but I feel it’s an important distinction: It can’t do either of those things, because it’s not intelligent. It’s a very fancy word predictor, it can’t think, it can’t know, it can’t create.
(This very obvious rant is not directed to anyone in particular but I still feel like it needed to be said…)
Neither can a computer. It can’t think, it can’t know, it can’t create any more than an AI can. It can’t get nearly as close as AI can.
This seems like a strange separation: AI is run on computers. A computer is simply a larger tool that you can run all sorts of smaller tools on.
A computer can know some things, if you program it to do so. If you program a calculator, you instruct it on immutable facts of how numbers work.
If GenAI successfully reports that 1+1=2, all it’s saying is that a lot of people on the internet have mentioned that’s probably the case. It’s searching a massive database of random shit and finding a bunch of instances where someone mentioned the text “1+1” and seeing that a bunch of those instances ended with “=2”. It’s giving you a statistically probable sentence. Due to this, it’s ridiculously, laughably easy to manipulate.
The calculator on your computer knows that 1+1=2 because it knows what 1 is, and it knows what addition is, and it knows how to sum two instances of 1 together. Computers are very good at following strict rules and working within them when they are programmed to do so. And computers are very good at analyzing and iterating, and people have written really effective automation and AI tools (of the non-generative variety) to do that over the years.
But yes: as you said, computers can’t produce raw creation. Which is kind of the point being made.
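The contrast being drawn here, deterministic rule-following versus statistical guessing, can be sketched in a few lines of Python. This is a toy illustration, not how any real calculator app or LLM is implemented, and the corpus counts are invented for the example:

```python
import random
from collections import Counter

def calculator_add(a, b):
    """Deterministic: applies the rules of arithmetic, same answer every time."""
    return a + b

# Invented stand-in for internet text: most completions of "1+1=" say 2,
# but the joke answer 3 shows up too.
corpus_completions = ["2"] * 90 + ["3"] * 10

def llm_style_answer(completions, rng):
    """Statistical: picks an answer in proportion to how often it appeared
    in the 'training' text. No arithmetic is ever performed."""
    counts = Counter(completions)
    answers = list(counts)
    weights = [counts[a] for a in answers]
    return rng.choices(answers, weights=weights, k=1)[0]

print(calculator_add(1, 1))                                   # always 2
print(llm_style_answer(corpus_completions, random.Random()))  # usually "2", occasionally "3"
```

The calculator’s answer never varies; the sampler’s answer depends on the dice, which is the whole point of the contrast.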
-
@Roz said in AI Megathread:
If you program a calculator, you instruct it on immutable facts of how numbers work.
I mean… kinda? A calculator app doesn’t really know math facts the way a third grader does. It doesn’t intuitively know that 1+1=2. It just responds to keypresses, turns them into bits, and shuffles the bits around in a prescribed manner to get an answer.
I don’t point that out to be pedantic, but just to further contrast it with the way an LLM handles “what is 1+1”. Like you said, it’s based on statistical associations. It may conclude that 1+1=2 because that’s most common, but it could just as easily land on 1+1=3 because that’s a common joke on the internet. LLMs contain deliberate randomization to keep the outputs from being too repetitive. This is the exact opposite of the behavior you want in a calculator or fact-finder. And if you ask it some uncommon question, like 62.7x52.841, you’ll just get nonsense.
Now sure, some GenAI apps have put some scaffolding around their LLMs to specifically handle math problems. But an LLM itself is still ill-suited for such a task. And when we understand why, we can start to understand why it’s also ill-suited for giving other accurate information.
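The “deliberate randomization” mentioned above is typically implemented as temperature sampling over the model’s next-token scores. Here is a minimal sketch; the token list and scores are made up for illustration, not taken from any real model:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax over temperature-scaled scores, then sample one index.
    Low temperature concentrates probability on the top score; high
    temperature flattens the distribution so odd picks become likelier."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Made-up next-token scores for completing "1+1=":
tokens = ["2", "3", "fish"]
logits = [5.0, 2.0, -1.0]

rng = random.Random(42)
low_t = [tokens[sample_with_temperature(logits, 0.1, rng)] for _ in range(10)]
high_t = [tokens[sample_with_temperature(logits, 2.0, rng)] for _ in range(10)]
# With a big score gap and low temperature, low_t is effectively always "2";
# at high temperature, "3" (and even "fish") become much more likely.
```

A calculator with this behavior would be returned to the store, which is exactly the point the post is making.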
-
@MisterBoring now I have to add to my +finger: I don’t use LLMs to write my poses, sometimes I just suck as a writer
thx
-
@Faraday said in AI Megathread:
@Roz said in AI Megathread:
If you program a calculator, you instruct it on immutable facts of how numbers work.
I mean… kinda? A calculator app doesn’t really know math facts the way a third grader does. It doesn’t intuitively know that 1+1=2. It just responds to keypresses, turns them into bits, and shuffles the bits around in a prescribed manner to get an answer.
Yeah, sorry, my point is more that – computers know as much or as little as they’re programmed to know. The calculator is given strict rules to calculate input, whereas LLMs are literally just guessing at a probable answer.
-
@Faraday said in AI Megathread:
But an LLM itself is still ill-suited for such a task. And when we understand why, we can start to understand why it’s also ill-suited for giving other accurate information.
Sorry for double post, but wanted to add:
When we understand why it’s bad at giving accurate information, then we have to ask: So what IS it good for? And apart from a few niche word-parsing and pattern-matching tasks, the only answer I’ve seen is: replace the humans who generate content (artists, authors, customer support agents, narrators, etc.) with a machine that generates worse content. And that core idea is fundamentally the problem I have with GenAI.
-
@Faraday Getting a computer closer to passing the Turing Test than any other computer has managed before. That’s about it.
-
AI cannot improve your writing. I’m not saying the LLMs are bad at writing, or worse than you are, but I am saying that if you use AI it’s no longer your writing. Being that I am only RPing with you to interact with you and read your writing, kindly keep that shit away from me.
If you want a sentence to hit different then think harder.
-
I’ve said it before (possibly in this thread, idk, it’s been A Year) and I’ll say it again, all I want from someone using ChatGPT to write poses/descs/thematic stuff is disclosure that they’re doing that. Now, I want this because I don’t want to engage with LLM content in my travels through this hobby and it saves me time and frustration, but at this point this stuff isn’t going away and clearly a lot of people don’t care. So as long as it’s on the tin there’s not much I can complain about.
What I have observed is that nobody doing this, even in very blatant ways, cops to it.
Which is something I find worth interrogating. People are a lot more upfront about using Midjourney and the like to create PBs and wiki art, maybe because they think it’s more obvious. I guess there’s a level of embarrassment involved that isn’t present for some people when it comes to visual art, idk.
-
While I’m not trying to change anyone’s opinion, I thought I’d share my personal experiences with generative AI accusations both in MU environments and the real world.
First, I’m an author by trade. I make money and pay bills by writing and selling novels. It’s not my sole source of income yet, but I’m hopeful it will be one day. Sadly, the “anti-AI” witch hunt often impacts authors like myself who don’t even use such tools.
I’ve received negative reviews claiming my work is “AI slop” for the following reasons:
- Using the phrase “with practiced ease” once
- Publishing 2 books in a 4-month period
- Having chapter lengths that are too consistent
- Creating covers that are “obviously AI”
I don’t know what to say about the “with practiced ease” comment. It is what it is, I guess. As for the other points, I write every day and can produce 3,000-8,000 words daily. I’m also very particular about my novel structure - I try to keep chapters below 2,500 words and my novels around 80,000 words. That’s just how I work.
I’d like to think these reviews haven’t negatively impacted my growth as an author, but I’ll never really know. I’ll never know how many people checked out my book on Amazon, read those reviews, and said “Oh, AI slop? Forget that.”
I know artists in the same boat. Take my covers, for instance - my wife hand paints all of them. Every single one. Yet I still got a negative review because the cover was “obviously AI.” I know many creators who’ve had their work criticized as “possibly AI.” Even when you show documented proof of every step in the creation process, people still attack you.
Finally, I do know people who use AI to touch up their RP poses. They do it for many reasons, most of which come down to insecurity and simply wanting to tell better stories. They’re not mustache-twirling villains looking to ruin the MUSH world; they’re real people trying their best to engage meaningfully with others. The poses still come from a human with emotions, motivations, and a genuine desire to connect, regardless of what tools they used to craft them.
Why don’t they tell everyone they’re doing it? Looking at some of the responses in this very thread gives me a pretty clear answer—they justifiably fear being attacked, called names, having their motivations questioned, and being treated as morally bankrupt or creatively empty. There’s a world of difference between encouraging AI-free spaces (which is completely valid) and demonizing the people who use these tools as if they’re somehow less human or less worthy of respect in creative spaces.
I guess my only point is, the AI witch hunt can be just as damaging to people as AI itself. I think it’s perfectly valid to question AI and all things related to it, but we cross a line when we attack people for using it.
And this constant looking for AI in everything—even when it isn’t there—creates a toxic environment where genuine human creativity gets dismissed, artistic choices get questioned, and people’s enjoyment of creative spaces gets diminished. We’re reaching a point where people are so busy hunting for AI that they’re no longer engaging with the actual content or the humans behind it.
-
@Raistlin said in AI Megathread:
First, I’m an author by trade. I make money and pay bills by writing and selling novels. It’s not my sole source of income yet, but I’m hopeful it will be one day. Sadly, the “anti-AI” witch hunt often impacts authors like myself who don’t even use such tools.
That’s a bummer. Blame AI and the people using it to churn out slop.
The poses still come from a human with emotions, motivations, and a genuine desire to connect, regardless of what tools they used to craft them.
They don’t, though? If someone’s using AI to write their poses they’re not coming from a person, they’re coming from a bullshit machine, and anyone trying to pass it off as anything else is showing such absolute disrespect for their scene partner that I can’t even put it into words. Why would I bother to read something they couldn’t be bothered to write?
-
@Raistlin said in AI Megathread:
they’re real people trying their best to engage meaningfully with others. The poses still come from a human with emotions, motivations, and a genuine desire to connect, regardless of what tools they used to craft them.
No, that’s not genuine. That is literally disingenuous, particularly when it is undisclosed to other participants.
Engaging meaningfully requires thought, creation, intentionality, and vulnerability, not throwing half an idea into the word blender to come out with whatever sounds slick in the algorithm today.
As @Flitcraft said, if they can’t spend the time and effort to write something, why should I spend time and effort interacting with it?
-
This response perfectly illustrates my points. The focus is squarely on continuing the witch hunt rather than considering how it impacts real creators or understanding the actual humans you’re supposedly trying to engage with on a game. There’s no empathy for false accusations or collateral damage to innocent writers.
I’m not saying anyone has to RP with anyone else for any reason. There are a host of reasons not to RP with someone, including using AI for any part of the RP process. I know people who won’t RP on Ares games that focus on async RP. That’s fine. Everyone is entitled to their own boundaries.
My point is how people address those boundaries. There’s a world of difference between “I prefer not to RP with people who use AI tools” and attacking others as “disrespectful” or their writing as “bullshit.” Some are simply using AI as an excuse to engage aggressively with other people. That’s what I feel is wrong.
Thankfully this forum allows us to unfollow/unwatch threads. I’ll be doing that here because there clearly is no actual intent to have a meaningful discussion—just shaming, name calling, and attacking. I have zero interest in participating in that kind of environment.
-
@Raistlin To be fair, I think it started with the intent to have meaningful discussion. But it just devolved into hyperbolic rhetoric with all the name calling and shaming after a while. Sometimes people who “feel passionately” about a subject justify less-than-great actions with their emotions.
Maybe that’s most threads that go on long enough. But I do think it started with good intent.