AI Megathread
-
@Clarion said in AI Megathread:
> Maybe in a few years it’ll be a 100% technically viable option and turn back into an ethical question
The reliability of AI when it comes to cutting code (and, to a lesser extent, just having accurate information) is coming more and more into question, because the data the AI is training on is getting shittier and shittier.
Note this loop (there’s a toy simulation of it just after the list):
1. Vibe-coded app hits GitHub with issues
2. AI learns from the vibe-coded app
3. The issues are treated as standard practice and reproduced
4. More issues arise because AI code isn’t perfect
5. Go to step 1
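To make step 5 concrete, here’s a toy simulation of the loop. Every number in it is invented; it’s just the shape of the feedback, assuming the training corpus becomes an ever-larger blend of human code and the model’s own output.

```python
def simulate_collapse(generations=10, human_quality=0.95,
                      ai_error_rate=0.10, ai_share_growth=0.15):
    """Toy model of the loop above. All parameters are made up:
    human_quality   - fraction of purely human code that's correct
    ai_error_rate   - bugs the model adds on top of whatever it learned
    ai_share_growth - how fast AI slop floods the training corpus
    """
    quality = human_quality   # fraction of the corpus that's correct
    ai_share = 0.0            # fraction of the corpus that's AI-written
    for gen in range(1, generations + 1):
        # Steps 2-4: the model copies the corpus's defect rate,
        # then adds its own errors on top.
        model_quality = quality * (1 - ai_error_rate)
        # Step 1 (again): the slop lands on GitHub as training data.
        ai_share = min(1.0, ai_share + ai_share_growth)
        quality = (1 - ai_share) * human_quality + ai_share * model_quality
        print(f"gen {gen}: corpus {quality:.1%} correct "
              f"({ai_share:.0%} AI-written)")

simulate_collapse()
```

Run it and the correctness number only ever goes down. That’s the whole post in a dozen lines.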
I’ve been watching this continual enshittification take place as my company is forced to use AI (someone marketed very successfully to my intelligence-challenged CEO), and I’m getting more and more PRs across my desk that are full of slop. The decline of the human element and the constant marketing of “AI is gonna do it for you, don’t even worry about it” are causing entropic damage to the AI’s ability to actually create something worth a damn.
Six months ago, it could spit out a CloudFormation template that was mostly passable with a couple of fixes; now it doesn’t even understand a WAF rule statement (sketched below for anyone who hasn’t touched WAF). It used to be possible to use ChatGPT for boilerplate Bash code, but now it can’t even do that.
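For reference, a “rule statement” is not exotic. Here’s a minimal sketch of one, expressed through boto3 so it stays in Python; the same Statement block sits under Rules in a CloudFormation AWS::WAFv2::WebACL. All the names and numbers are made up for illustration.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="example-acl",             # hypothetical ACL name
    Scope="REGIONAL",               # REGIONAL for ALB/API Gateway, or CLOUDFRONT
    DefaultAction={"Allow": {}},    # allow by default, block via rules
    Rules=[
        {
            "Name": "rate-limit-per-ip",   # hypothetical rule name
            "Priority": 0,
            # This is the rule statement: rate-limit each source IP
            # to 1000 requests per 5-minute window.
            "Statement": {
                "RateBasedStatement": {
                    "Limit": 1000,
                    "AggregateKeyType": "IP",
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit-per-ip",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "example-acl",
    },
)
```

That’s the level of boilerplate the models used to get mostly right and now fumble.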
Can’t even use Google anymore, because the first five pages of results are AI articles that tell me less than nothing. Like, search engines give me results that are actively detrimental to what I’m trying to do.
I keep getting told AI is going to make my job easier; boy, is it making it a lot harder.
I genuinely hope this bubble bursts with the force of a nuke, because at some point in the near future an AI is going to introduce a genuinely serious problem that requires human resolution, and there won’t be any humans around with the knowledge to fix it.
tl;dr if you let dumb AI learn from dumb AI, AI gets dumber.