AI Megathread
-
@Rinel said in AI Megathread:
This is the problem with AI apologia; it overlooks fundamental errors that would not be acceptable from a human artist.
I mean, for goodness sakes, the Pikachu has a third ear. And even on the upgraded image, there are no handles and it’s still missing a tail.
Yeah, I think that’s exactly the kind of thing that a lot of folks miss about Gen-AI stuff. Is it neat? Sure, if we set aside all the moral/ethical implications of how it got that way, it’s impressive that you can even get that sort of image in the first place. But what it generates is so often wrong in very basic ways.
At its core, it doesn’t understand who/what Pikachu is. It doesn’t know what lightsabers are, how they work, or what their qualities are. It’s just putting traits from existing images into a blender to make something. Nevermind whether that’s actually what you asked for, or wanted.
And it’s the same fundamental problem with people trying to use GenAI for “research”. It has no context for whether the information it’s generating is accurate, nor does it care. It can’t even say where the information came from.
-
I have done art for a long time, and I see AI generation as a very interesting thing. While prompting and handling the technical details around generating an AI image is certainly not trivial if you really dive into it and want your own style to come through, I think the process is less like being an artist and more like being a very picky commissioner - you are commissioning an artwork from the AI, requesting multiple revisions to get it right.
For my own art, I’m experimenting with generating sketches this way - quickly generating a scene from different angles or playing with directions of lighting - basically a way to quickly explore concepts that I then use as a reference when painting the piece myself. In this sense, LLMs are artists’ tools. People forget that doing digital art at all was seen as ‘cheating’ not too long ago, too.
You can certainly argue about legality or ethics when it comes to building a data set. There are OSS systems that have taken more care to only use actually allowed works in the training. But I think there are some misconceptions about what is actually happening in an LLM and how similar its work process actually is to that of a human.
A decade ago, the new hot thing for digital artists was “Alchemy”. This was a little program that let you randomly throw shapes and structures onto the canvas. You’d then look at those random shapes and let them jolt your imagination - maybe that line there could be an arm? Is that the shape of a dragon’s head? And so on. It’s like finding shapes in the clouds, and then fleshing them out into a full image. David Revoy showcased the process nicely at the time.
The interesting thing with AI generation is that it’s doing the same. It’s just that the AI starts from random noise (instead of random shapes big enough for a human eye to see). It then goes about ‘removing the noise’ from the image that is surely hidden in that noise. If you tell it what you are looking for (prompting), it will try to find that in the noise. As it ‘de-noisifies’ the image, the final result emerges. The process is eerily similar to me getting inspired by looking at random shapes. It’s ironic that ‘creativity’ is one of the things AIs would catch up on first.
Does the AI understand what it is producing? Not in a sentient kind of way, but sort of. It doesn’t have all of those training images stored anywhere - that’s the whole point - it only knows the concept of how an arm appears, and looks for that in the noise. Now, the relationships between these and the holistic concept of a 3D object are hard to train - this is why you get arms in strange places, too many fingers, etc. The AI is only as good as its data set.
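For anyone curious what that prompt-guided denoising looks like in practice, here is a minimal sketch using the open-source diffusers library; the model name, prompt and settings are illustrative assumptions rather than anything used for the images in this thread:

```python
# Minimal, illustrative text-to-image sketch with the open-source `diffusers`
# library. Assumes a local GPU and a downloaded Stable Diffusion checkpoint;
# the model name and parameters below are assumptions, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any local SD checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline starts from pure random noise and iteratively "removes" it,
# steering every denoising step toward whatever the prompt describes.
image = pipe(
    prompt="a small yellow electric mouse holding a glowing sword, studio lighting",
    num_inference_steps=30,  # number of denoising passes
    guidance_scale=7.5,      # how strongly the prompt steers the denoising
).images[0]

image.save("denoised_result.png")
```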
We humans have the advantage of billions of years of evolution to understand the world we live in, as well as decades of 24/7 training of our own neural nets ever since we were born. But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic lightsabers, facial expressions and proper lighting.
I’m sure some will always dislike AI art because of what it is; but that will quickly become a subjective (if legit) opinion; it will not take long before there are no messed-up fingers, errant ears or stiff composition making an image feel ‘AI-generated’.
As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity. Should I want to, I could train an LLM today with my 400+ pieces of artwork and have it generate images in my style. I don’t, because I enjoy painting myself rather than commissioning an AI to do it for me.
TLDR: AIs have more human-like creativity than we’d like to give them credit for. Very soon we will have nothing objective to complain about when it comes to the art-technical prowess of AI-generated art.
-
@Griatch said in AI Megathread:
But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic lightsabers, facial expressions and proper lighting.
I think this is premised on certain beliefs about the scaling of LLM outputs with their datasets. These things struggle terribly with analogy. A human can be presented with an image of a mermaid and one of a centaur and then be told “draw a half-human/half-lion like what you just saw,” and they can do that. LLMs can’t. It’s fundamentally not how they operate.
(I know you could accomplish the same thing with an LLM by refining the input to be something like “human from the waist up, lion from the waist down” or some other, more explicit method, but that doesn’t change my underlying point. LLMs are hugely limited in their capacity to adapt on the fly.)
@Griatch said in AI Megathread:
As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity.
Barring an economic revolution that is long-coming and never here, the result of this particular utopia is the collapse of widespread art as the practice reverts to only those privileged enough to spend large amounts of time on hobbies that they can’t use to help make a living. It’s difficult to fully describe how horrific this scenario is, but it’s the death of dreams and creativity for literal millions of people.
It would legitimately be better to destroy /all/ LLMs and prohibit their existence than to pay that cost.
-
@Griatch said in AI Megathread:
But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic lightsabers, facial expressions and proper lighting.
The writing models are extremely unlikely to advance in this same kind of way because they lack context. They are word calculators, stringing words together without really understanding what those words mean because there is no actual intelligence behind the engines. From my understanding, the art versions work in similar ways and are therefore unlikely to make the same leaps, but admittedly I haven’t studied them as much.
-
@Testament said in AI Megathread:
However, the pessimistic nihilist in me would look at forums like here, MSB, or r/MUDs, see the kind of toxicity that is entirely bred within them, and it makes me consider, “Okay, but what if we just remove the human element from it?”
then they would not exist. a forum with chatGPT posts would just be pages and pages of spam no human ever bothered to look at. theft on a cosmic scale, justified by nothing
-
@Rinel said in AI Megathread:
@Griatch said in AI Megathread:
But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic lightsabers, facial expressions and proper lighting.
I think this is premised on certain beliefs about the scaling of LLM outputs with their datasets. These things struggle terribly with analogy. A human can be presented with an image of a mermaid and one of a centaur and then be told “draw a half-human/half-lion like what you just saw,” and they can do that. LLMs can’t. It’s fundamentally not how they operate.
Fair enough, I guess we’ll see in a year or two. I’m not particularly advocating for this to happen; I just expect it to be inevitable. The cat’s out of the bag; the technological advancement will not stop.
(I know you could accomplish the same thing with an LLM by refining the input to be something like “human from the waist up, lion from the waist down” or some other, more explicit method, but that doesn’t change my underlying point. LLMs are hugely limited in their capacity to adapt on the fly.)
Yes, they are machines basically solving matrix math. Thing is, one can argue that so are we. It’s a matter of scale and helper methods.
@Griatch said in AI Megathread:
As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity.
Barring an economic revolution that is long-coming and never here, the result of this particular utopia is the collapse of widespread art as the practice reverts to only those privileged enough to spend large amounts of time on hobbies that they can’t use to help make a living. It’s difficult to fully describe how horrific this scenario is, but it’s the death of dreams and creativity for literal millions of people.
I agree: the rise of AI will change society. Many jobs will change or be lost. I expect my day job (software development) to be fundamentally changed in just a year or two. Not because I advocate for it necessarily; I just think that’s the way it’ll go. You may wish to put the genie back in the bottle, but there’s no practical way that will ever happen - the advantages of AI integration are so great that someone else will just leverage it in your stead.
-
@Rinel Lol, OK. So obviously we’re into extreme bad faith territory here.
That took me maybe 10 minutes to do. And most of that was the usual multitasking of change settings -> look at something else while images generate -> look at results, change settings, repeat. The 36-image grid took ~5 minutes, so I left the computer. That’s the point. You want to bang on ‘oh my god, it added an ear, it didn’t have a tail, it doesn’t understaaaaand’. I fixed the ear instantly (again, no Photoshop - I just put a blob over the ear and told it ‘crown instead, plz’). I could obviously add a fucking tail. I’m not going to do more because it’s pretty clear I could give you the Picasso of Pikachu vs. Darth Maulard and you’d complain about a single pixel.
Even though your goalpost was ‘MS Paint doodle.’ Let’s see your doodle so we can critique it.
There’s a lot of casual dismissal of the tech here, which is fine I guess, you’re entitled to your opinions. They won’t change the people who are going to be (or already are) out of jobs for this stuff. It’s not going to completely erase humans (that’s a straw man no one is suggesting), but when a human using these tools is as productive as a dozen without it, then you don’t have to pay 11 humans. And that’s tangentially what you saw in the D&D case: someone who found it useful, due to their workload and time constraints, to use the tool to finish a project for a deadline. They got caught, but the lesson won’t be ‘don’t use AI’ it will be ‘let’s make sure we use better looking AI.’
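(For what it’s worth, the ‘blob over the ear, tell it crown instead’ step described above is ordinary inpainting. A rough sketch of how that might look with the open-source diffusers library follows; the file names and model choice are purely illustrative, not what was actually used:)

```python
# Illustrative inpainting sketch with the open-source `diffusers` library.
# File names and the model checkpoint are hypothetical placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

original = Image.open("generated_image.png").convert("RGB")
# White pixels in the mask mark the region to repaint (the "blob").
mask = Image.open("ear_blob_mask.png").convert("RGB")

result = pipe(
    prompt="a small golden crown",  # what should appear in the masked area
    image=original,
    mask_image=mask,
    num_inference_steps=30,
).images[0]

result.save("fixed_image.png")
```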
-
@Faraday said in AI Megathread:
@Griatch said in AI Megathread:
But the LLMs are advancing very quickly now, and I think it’s unrealistic to think they will remain as comparatively primitive as they are now. In a year or two, you will be able to get that Pikachu image exactly like you want it, with realistic lightsabers, facial expressions and proper lighting.
The writing models are extremely unlikely to advance in this same kind of way because they lack context. They are word calculators, stringing words together without really understanding what those words mean because there is no actual intelligence behind the engines. From my understanding, the art versions work in similar ways and are therefore unlikely to make the same leaps, but admittedly I haven’t studied them as much.
It’s indeed a limit for text generation, not so much for image generation as far as I understand. That said, I believe what will happen is that multiple agents will work together instead, each a specialist in its field, holding its own context. This is how our brain works (if you squint a bit) and apparently how GPT-4 is designed. Note that a research paper on million-token contexts is already out, and it has had several follow-ups since. Considering how fast new research on LLMs is coming out, it would not surprise me if we see a few more breakthroughs sooner rather than later.
-
@Rinel this is not me snarking but asking a legitimate question. If you believe this:
@Rinel said in AI Megathread:
Barring an economic revolution that is long-coming and never here, the result of this particular utopia is the collapse of widespread art as the practice reverts to only those privileged enough to spend large amounts of time on hobbies that they can’t use to help make a living. It’s difficult to fully describe how horrific this scenario is, but it’s the death of dreams and creativity for literal millions of people.
It would legitimately be better to destroy /all/ LLMs and prohibit their existence than to pay that cost.
then why do you do this:
@Rinel said in AI Megathread:
I’ve been using Midjourney for quite some time now,
-
@Griatch said in AI Megathread:
As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity.
are you aware that artists for whom this is NOT a hobby are suffering from this thing you’re excited to “embrace”
When photography displaced illustrators there was a new human art form that supported human creativity and jobs. When digital art allowed quick work in a new medium it was still human artists at work.
AI removes the human and removes the employment and does so by unethical sourcing of human effort. To say that’s no different than painting in photoshop is naive at best and disingenuous at worst.
Very cool that AI is making your comp sketches and light studies for you now. It’s still a problem. Embracing it should still be questioned until and unless it has ethical restraints.
-
I’m sorry, I just snorted water down the wrong pipe cuz of this.
@bored said in AI Megathread:
It’s not going to completely erase humans (that’s a straw man no one is suggesting)
@imstillhere said in AI Megathread:
AI removes the human and removes the employment and does so by unethical sourcing of human effort. To say that’s no different than painting in photoshop is naive at best and disingenuous at worst.
No offense to anyone, just found it hella funny.
-
@hellfrog said in AI Megathread:
@Testament said in AI Megathread:
However, the pessimistic nihilist in me would look at forums like here, MSB, or r/MUDs, see the kind of toxicity that is entirely bred within them, and it makes me consider, “Okay, but what if we just remove the human element from it?”
then they would not exist. a forum with chatGPT posts would just be pages and pages of spam no human ever bothered to look at. theft on a cosmic scale, justified by nothing
I meant that more in the context of the example of humanity we see in forums like this place. I was sardonically questioning whether removing the human element from MUSHes and replacing it with AI would be an improvement, even if you, the person, weren’t aware that the change had been made, once AI gets to that kind of point. While yes, it would technically be a single-player text game, would it be better?
That is until AIs learn to shitpost. Actually, I think they already do.
-
@Testament said in AI Megathread:
@hellfrog said in AI Megathread:
@Testament said in AI Megathread:
However, the pessimistic nihilist in me would look at forums like here, MSB, or r/MUDs, see the kind of toxicity that is entirely bred within them, and it makes me consider, “Okay, but what if we just remove the human element from it?”
then they would not exist. a forum with chatGPT posts would just be pages and pages of spam no human ever bothered to look at. theft on a cosmic scale, justified by nothing
I meant that more in the context of the example of humanity we see in forums like this place. I was sardonically questioning whether removing the human element from MUSHes and replacing it with AI would be an improvement, even if you, the person, weren’t aware that the change had been made, once AI gets to that kind of point. While yes, it would technically be a single-player text game, would it be better?
That is until AIs learn to shitpost. Actually, I think they already do.
What would be interesting is, once AI is advanced enough, replacing human voices on a forum one by one with AIs taught to mimic them, until there are no humans left, and seeing if they can continue posting ad infinitum at each other.
-
@Coin ad infinitum ad hominem would be a great tagline.
-
I can’t wait for people to be outraged because AI is taking away the role of soldiers on the battlefield because wars are being fought with AI-piloted drones.
“You’re taking away the chance for people to be able to pay for college!!!”
Haha.
Capitalism.
-
@sao said in AI Megathread:
@Coin ad infinitum ad hominem would be a great tagline.
Would it be more accurate flipped? Or do I just not know enough Latin?
-
@Coin You know, I’d probably never need tv again. Just arguments created by AIs.
-
That is until AIs learn to shitpost. Actually, I think they already do.
Heh. This is making me mention Microsoft’s Tay again.
-
@SpaceKhomeini Wasn’t that the one where a bunch of 4chan trolls turned the AI racist?
-
@imstillhere said in AI Megathread:
@Griatch said in AI Megathread:
As for me, I find it’s best to embrace it; Digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than necessity.
are you aware that artists for whom this is NOT a hobby are suffering from this thing you’re excited to “embrace”
When photography displaced illustrators there was a new human art form that supported human creativity and jobs. When digital art allowed quick work in a new medium it was still human artists at work.
Yes, I expect this will dramatically change the art industry. I can see why people are legitimately concerned. The same is true for a lot of white-collar jobs (for once, the blue-collar workers may be the safest). While I don’t work as a professional artist, I expect my own job in IT to fundamentally change or even go away too, as programmers eventually become baby-sitters of AI programmers rather than writing the code ourselves. Since I think this is inevitable, I’m trying to learn as much as I can about it already.
AI removes the human and removes the employment and does so by unethical sourcing of human effort.
There are definitely discussions to be had about the ethical sourcing of the training data; OSS models (which are what I use, since I run these things locally) are already trying to shift toward more ethically sourced, freely available data sets (though yes, there are still issues there, considering the size of the corpus). You can in fact look into those data sets if you want - they are publicly searchable. Companies with proprietary solutions (Midjourney is particularly bad here) will hopefully be forced to follow suit by lawsuits and regulation, eventually. That said, even an AI trained entirely on public-domain images would still change the industry, so it doesn’t change the fundamental fact of the matter: LLM processing is here to stay.
To say that’s no different than painting in photoshop is naive at best and disingenuous at worst.
AI image generation is only one aspect of LLMs. On its own it is certainly not the same as painting in Photoshop, and I never suggested as much. But I do expect Photoshop to gain AI support that speeds up your painting process in the future - for example, you sketch out a face and the AI cleans it up for you, that kind of thing (not that I use Photoshop; I’m an OSS guy). But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and carve out a new role for themselves; it will become hard for companies not using AI to compete.
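(As an aside: that ‘sketch a face and let the AI clean it up’ workflow already exists in the OSS tooling as image-to-image generation. A minimal sketch with the diffusers library could look like the following; the file names, model and strength value are illustrative assumptions:)

```python
# Illustrative image-to-image sketch with the open-source `diffusers` library:
# a rough drawing is used as the starting point and partially re-denoised
# toward the prompt. File names and settings are hypothetical.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

rough_sketch = Image.open("face_sketch.png").convert("RGB")

cleaned = pipe(
    prompt="clean portrait sketch of a face, confident line art",
    image=rough_sketch,
    strength=0.5,            # lower values stay closer to the original sketch
    num_inference_steps=30,
).images[0]

cleaned.save("face_cleaned.png")
```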