AI Megathread
-
Yeah, that absolutely reads as LLM slop to me. BOooo
-
Yeah, I would personally never call myself a coder, just an extreme hobbyist (also, my knowledge of code starts and ends at Evennia/Python, with a very minor knowledge of TinyMUX/MUSH functions from 13 years ago thanks to Cobalt doing a code class back then). The setting and story are where a game really lives and breathes for me. I admittedly have a million ideas for games, but those ideas often come with very specific stories, locations, characters, etc., which I will generally feel the need to write out myself, for better or worse. I personally don’t think this is an abnormal perspective for MUSH developers, especially the ones who are driving the setting, or the story if something like a metaplot exists. Obviously we can’t all make an Arx or write a mountain of setting lore like Empire or some of the WoD projects. However, even if you’re coming in with a completely barebones new game, aren’t you at least interested in seeing what happens based on how your players develop the world? That probably requires some investment in the creation process.
I’m reminded of a comment I read, or maybe one highlighted by one of the writing YouTubers I watch, where someone said that LLMs allow writers to bypass working out the prose to get to the plot. That defeats the purpose of writing, in my opinion. I have no physical-artist bone in my body, so it could be that I’m reacting the same way my artist friends do when they see AI art.
-
Though this non-MUSH-related thing also annoys me about AI and the techbro insistence that it must be in absolutely every product:

I typo’d “b and”. No one calls a watch band a “watch B&”. This is a very Hello Fellow Kids reaction because, if I had to guess, it’s assuming I’m using some sort of slang.
-
@Tez said in AI Megathread:
I don’t personally care as much about code, probably bc I’m not a coder, but the content is the heart and soul of it for me, and it sucks to see.
FWIW I’m a professional software engineer and I am 100% fine with people using AI to help write small amounts of MU* code, like CSS tweaks, as long as they’re comfortable with the fact that it might be weird and flaky code that’s extra hard to troubleshoot.
I would absolutely not build an entire codebase from an AI vibe-codey foundation, but that’s not at all an ethical stance, just a “wow holy shit that’s going to be impossible to maintain, please just use Ares or Evennia or something and read some guides” stance. Maybe in a few years it’ll be a 100% technically viable option and turn back into an ethical question, but by then I’ll probably be automated out of a job anyway, whee.
AI slop in the content of a game will make me sadface, though.
Btw, I’ve absolutely had people ask me professionally if I used ChatGPT to write something, and it made me so deeply offended that now I deliberately swing my writing the other way and refuse to overly polish it. You’re just gonna get HELLA ADVERBS and run-on sentences and informal grammar from me, so there. Embrace my squishy human foibles.
-
@Clarion said in AI Megathread:
Maybe in a few years it’ll be a 100% technically viable option and turn back into an ethical question
The reliability of AI when it comes to cutting code (and to a lesser extent, just having accurate information) is coming more and more into question because the data the AI is training itself on is getting shittier and shittier.
Note this loop (there’s a toy sketch of it in code after the list):
1. Vibe-coded app hits GitHub with issues
2. AI learns from the vibe-coded app
3. Issues are seen as standard practice and implemented
4. More issues arise because AI code isn’t perfect
5. Go to step 1
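Here’s that loop as a toy Python simulation (purely illustrative: the defect rates are made-up numbers, not measurements of any real model):

```python
import random

def next_generation(defect_rate, fresh_defect_rate=0.05, n_snippets=10_000):
    """One 'generation' of output: a snippet is defective if it copies a
    defect from its training data or introduces a fresh one of its own."""
    defects = 0
    for _ in range(n_snippets):
        copied = random.random() < defect_rate        # learned from slop
        fresh = random.random() < fresh_defect_rate   # AI code isn't perfect
        if copied or fresh:
            defects += 1
    return defects / n_snippets

rate = 0.05  # defect rate of the original, mostly-human corpus
for gen in range(1, 9):
    rate = next_generation(rate)  # the next model trains on this output
    print(f"generation {gen}: ~{rate:.0%} of output defective")
```

The rate only ever goes up, because nothing in the loop pushes it back down. That’s the whole problem.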
I’ve been watching this continual enshittification take place as my company is forced to use AI (someone very successfully marketed to my intelligence-challenged CEO), and I’m getting more and more PRs across my desk that are full of slop. The decrease in the human element and the consistent marketing of “AI is gonna do it for you, don’t even worry about it” are causing entropic damage to the AI’s ability to actually create something worth a damn.
Six months ago, it could spit out a CloudFormation template that was mostly passable, with a couple of fixes, and now it doesn’t even understand a WAF rule statement. It used to be possible to use ChatGPT for boilerplate Bash, but now it can’t even do that.
Can’t even use Google anymore, because the first five pages of results are AI articles that tell me less than nothing. Like, search engines give me results that are actively detrimental to what I’m trying to do.
For someone who keeps getting told AI is going to make my job easier, boy is it making it a lot harder.
I genuinely hope this bubble bursts with the force of a nuke, because at some point in the near future an AI will introduce a genuinely serious problem that requires human resolution, and there will be no humans around who have the knowledge to fix it for them.
tl;dr if you let dumb AI learn from dumb AI, AI gets dumber.
-
@Hobbie said in AI Megathread:
tl;dr if you let dumb AI learn from dumb AI, AI gets dumber.
So now I should put my poses through all the LLMs, and eventually they’ll break!
-
Some poor med student attempting to use GPT to write a paper for their ophthalmology class:
“The patient’s orbs glistened with lacrimal fluid like morning dew…”
-
@Hobbie said in AI Megathread:
I genuinely hope this bubble bursts with the force of a nuke, because at some point in the near future an AI will introduce a genuinely serious problem that requires human resolution, and there will be no humans around who have the knowledge to fix it for them.
tl;dr if you let dumb AI learn from dumb AI, AI gets dumber.
I cannot “THIS!!!” this hard enough.
My company is currently on a kick of “We’re gonna teach the product managers to code their own products! And the UXers! And the scrum leads! Everyone’s gonna vibe code and AI code and we’re going to release new features so fucking fast and it’s gonna be AWESOME!”
Meanwhile, I’m just sitting here watching this with a look of vague horror on my face because 1) my company works in one of the most heavily regulated industries in the country, and 2) it’s large enough that a single fuck-up can and has resulted in articles in national news publications, because everyone likes to watch a giant stumble. So I keep looking at this “model of the future” thinking that we’re basically going to turn our code into a nightmare of non-functionality as hundreds of people get their sticky little fingers into it, while only, like, half of them have any idea how it works. Meanwhile, the other half just shoves whatever the newest ‘agentic coding tool’ says into production, because that’s what the computer told them and the computer must be right.
We’re going to get slapped with the sort of regulatory fine that could pay for 20 developers for the next five years, and then everyone’s going to stand there looking surprised.

I’m pretty sure we’re all just living in the plot of Wall-E now and I hate it here.
-
@Aria said in AI Megathread:
I’m just sitting here watching this with a look of vague horror on my face
Oh dear heavens, all the sympathy. That sounds like my worst nightmare.
I worked in FDA-regulated software for a while. I’m sure @Aria knows this well, but for non-software folks: in safety-critical, regulated industries, it is a well-known fact (learned through bitter experience, horrific recalls, and lost lives) that it is utterly impossible to test software thoroughly enough to ensure it’s safe once it’s already been built. There are just too many edge cases and permutations. Safety/quality has to be baked in through sound design practices.
AI has no concept of this. You can ask it to “write secure code” or whatever, but it fundamentally doesn’t know how to do that. I sincerely hope it does not take a rash of Therac-25-level disasters to teach the software industry that lesson again.
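To put a number on the “too many edge cases and permutations” point, here’s a back-of-envelope sketch in Python (a hypothetical function and a wildly generous test harness, just for scale):

```python
# Exhaustively testing even a tiny interface is hopeless.
# Hypothetical function with just five 32-bit integer parameters:
combinations = (2**32) ** 5        # every distinct input tuple
tests_per_second = 1_000_000_000   # a wildly generous harness
seconds_per_year = 60 * 60 * 24 * 365

years = combinations / tests_per_second / seconds_per_year
print(f"{combinations:.2e} input combinations")
print(f"~{years:.2e} years to test them all")
```

That’s roughly 10^48 combinations and 10^31 years, for one function with clean inputs. Real systems add state, timing, and hardware to the mix, which is why safety has to be designed in rather than tested in.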
-
@Faraday said in AI Megathread:
@Aria said in AI Megathread:
I’m just sitting here watching this with a look of vague horror on my face
AI has no concept of this. You can ask it to “write secure code” or whatever, but it fundamentally doesn’t know how to do that. I sincerely hope it does not take a rash of Therac-25-level disasters to teach the software industry that lesson again.
Yeeep. I’m not in medical research anymore, but I used to work in the office that monitored clinical trials for a massive university hospital system, including training clinical research coordinators on how to maintain documentation to standard.
You do not mess around with people’s lives, livelihoods, and life savings. If you break those, there’s really no coming back. I don’t understand why we have to keep learning this, but I guess some tech bro billionaire and all his investors who can’t actually follow along with what he’s saying need the money to upgrade their yachts.
-
I work in fintech. We are involved with some big cranky banks. The current AI push driven by our CEO is several GDPR breaches waiting to happen and my security guy is sitting there pulling his… actually he has no hair so I suppose he’s pulling his beard out! I’m right there with him, both on the frustration and lack of hair.
Devs, infra, solution design et al., we don’t get paid to write lines of code; we get paid to write the right lines of code. That’s why we have PRs, and reviewing them is where all the productivity maybe-gained is being absolutely lost.
I’m not even on the dev teams, I’m in infra, and even I’m copping it from product people trying to push code to my repos now. UGH.
-
@Hobbie
LOLOLOL. I also work at a fintech, in compliance, and the direction on AI is incredibly schizophrenic. They’re like WE LOVE AI THE CEO WANTS MORE AI… but also don’t put ANY of your work into public AI, and probably not even the proprietary ChatGPT rip-off we have internally, because you’re probably gonna violate personally identifiable information agreements. I just don’t use it and am doing all right. There have been some nice internal robotics enhancements that’ve come out of this whole mess, but none of them have given me any confidence this stuff will take my job anytime soon.
-
@Hobbie said in AI Megathread:
I work in fintech. We are involved with some big cranky banks. The current AI push driven by our CEO is several GDPR breaches waiting to happen and my security guy is sitting there pulling his… actually he has no hair so I suppose he’s pulling his beard out! I’m right there with him, both on the frustration and lack of hair.
Devs, infra, solution design et al., we don’t get paid to write lines of code; we get paid to write the right lines of code. That’s why we have PRs, and reviewing them is where all the productivity maybe-gained is being absolutely lost.
I’m not even on the dev teams, I’m in infra, and even I’m copping it from product people trying to push code to my repos now. UGH.
But don’t worry, the solution will be to train an “AI” to accept and deny the right PRs, so that it’ll eventually get it right, and then you can work on more important things and let “AI” fill in the rest.
… Twenty years later, when maybe something marketed as “AI” learns enough to write proper code instead of parroting it. (And after “a few” lawsuits and payouts related to GDPR and other data leaks, company implosions all over the world, etc. etc.)
But I’m not going to hold my breath on this.
-
@dvoraen And somehow someone still needs to know COBOL.
-
@Pavel Which GenAI will almost certainly never be able to do, because there isn’t enough COBOL stuff out there for it to steal for training data.
-
@Faraday All COBOL knowledge is held exclusively by two men, both called Steve. They’re not allowed to travel at the same time, to avoid the risk of all worldly knowledge of COBOL being lost in the same incident.