AI Megathread
-
@shit-piss-love well yes, but that’s not a very useful distinction when an entire career path is staring down the barrel of a gun - because even though there will still be work for those who can incorporate and use AI to their advantage, there will be much less available work than there is now.
-
@Griatch said in AI Megathread:
Yes, I expect this will dramatically change the art industry. I can see why people are legitimately concerned. Same is true for a lot of white-collar jobs (for once, the blue-collar workers may be safest).
I mean, exactly? It’s not like AI is doing anything new in terms of the exploitation and replacement of humanity within labor fields.
I’m not saying it’s ethical, I’m just saying that once art becomes labor, once it is bought and sold and entire industries are built on the art created, it becomes subject to the same rules, regulations, and pitfalls that all labor is subject to under capitalism. There’s no way to avoid that except for bringing down capitalism. The idea that ethical regulations will do anything to stop this is a pipe dream. At best, they will slow it down.
AI image generation is only one aspect of LLMs. On its own it is certainly not the same as painting in Photoshop, and I never suggested as much. But I do expect Photoshop to have AI support to speed up your painting process in the future - for example, you sketch out a face and the AI cleans it up for you, that kind of thing (not that I use Photoshop, I’m an OSS guy). But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.
Photoshop already integrated some AI stuff into its latest, IIRC. You can have photoshop essentially “finish” or “expand” art so that it fills out what’s missing past the edges of a picture. Lol. It’s wild.
-
@shit-piss-love said in AI Megathread:
@Griatch said in AI Megathread:
But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.
This is what I expected, but the first people who made me start considering the positive effects of AI on the field of professional art were my friends who are professional artists. Most of them seem really happy with adding it to their toolkit. Concept art in hours that would previously have taken them days, or, in the same amount of time, significantly more iterations resulting in what they feel is a better final product. The takeaway I’ve got from conversations with them is that it will come down to the quality of the studio whether they use AI tools to level up their art departments or attempt to replace them. But as one said, “An AI isn’t going to take my job. It will be a professional peer who knows how to use AI better than me.”
That’s encouraging to hear! If your friends feel they are on top of the coming changes, all the more power to them.
-
@Coin said in AI Megathread:
Photoshop already integrated some AI stuff into its latest, IIRC. You can have photoshop essentially “finish” or “expand” art so that it fills out what’s missing past the edges of a picture. Lol. It’s wild.
This is called outpainting in Stable Diffusion (vs. inpainting, which is replacing an element inside the image, as I did with Pikachu’s 3rd ear->crown). Photoshop has its own ‘Generative Fill’, which is an outpainting process built on its own in-house model. It also has plugins for SD and DALL-E integration. These automate passing selections back and forth to the AI software (which is sensitive to image dimensions; the plugins help with this).
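Mechanically, the “blob” in inpainting is just a binary mask: pixels inside it get regenerated, pixels outside are kept from the original. Here is a toy, library-free sketch of that convention - the function names and the tiny 4x4 “images” are invented for illustration, and real tools operate on actual image tensors rather than nested lists:

```python
# Toy sketch of how an inpainting mask works (illustrative only).
# Convention here: 1 = "regenerate this pixel", 0 = "keep the original".

def make_mask(width, height, box):
    """Build a binary mask with a rectangular editable region.

    box is (x0, y0, x1, y1), exclusive on the upper bounds.
    """
    x0, y0, x1, y1 = box
    return [[1 if (x0 <= x < x1 and y0 <= y < y1) else 0
             for x in range(width)]
            for y in range(height)]

def composite(original, generated, mask):
    """Keep original pixels where mask is 0, take generated pixels where it is 1."""
    return [[generated[y][x] if mask[y][x] else original[y][x]
             for x in range(len(mask[0]))]
            for y in range(len(mask))]

# "Blob over the ear": only the 2x2 region from (1,1) to (3,3) gets replaced.
original  = [[10] * 4 for _ in range(4)]   # pretend this is the source image
generated = [[99] * 4 for _ in range(4)]   # pretend this is the AI's output
mask = make_mask(4, 4, (1, 1, 3, 3))
result = composite(original, generated, mask)

print(result[0][0])  # untouched corner -> 10
print(result[1][1])  # inside the blob  -> 99
```

The plugins mentioned above essentially shuttle exactly this kind of selection-as-mask back and forth between the editor and the model.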
100%, this stuff is going to become a standard part of a professional Photoshop workflow, if it already isn’t. Especially with how fast it can do some tasks that used to be either time consuming or inaccurate under prior algorithmic automation.
-
@bear_necessities said in AI Megathread:
@Rinel this is not me snarking but asking a legitimate question. If you believe this:
@Rinel said in AI Megathread:
Barring an economic revolution that is long-coming and never here, the result of this particular utopia is the collapse of widespread art as the practice reverts to only those privileged enough to spend large amounts of time on hobbies that they can’t use to help make a living. It’s difficult to fully describe how horrific this scenario is, but it’s the death of dreams and creativity for literal millions of people.
It would legitimately be better to destroy /all/ LLMs and prohibit their existence than to pay that cost.
then why do you do this:
@Rinel said in AI Megathread:
I’ve been using Midjourney for quite some time now,
Hypocrisy, probably. The technology interests me. I do my best to offset what I justify as relatively minor harm (using it solely for small-scale placeholder art and out of curiosity) by doing my best to commission more actual art. But it’s probably still hypocrisy.
@bored said in AI Megathread:
Even though your goalpost was ‘MS paint doodle.’ Let’s see your doodle so we can critique it.
“Pikachu, wearing a crown and royal cape, in a lightsaber duel on the moon with an angry duck”
ETA:
@bored said in AI Megathread:
You want to bang on ‘oh my god, it added an ear, it didn’t have a tail, it doesn’t understaaaaand’. I fixed the ear instantly (again, no photoshop - I just put a blob over the ear and told it ‘crown instead, plz’). I could obviously add a fucking tail. I’m not going to do more because it’s pretty clear I could give you the Picasso of Pikachu vs. Darth Maulard and you’d complain about a single pixel.
You failed to generate an image that conformed to the extremely basic specifications that I, a person who, as you can clearly see, literally cannot draw, managed to accomplish with ease. This is the essence of the AI apologist. You’re actually mocking me for pointing out that the image you generated is missing a tail and has a third ear, as though that would be remotely acceptable if it were coming from a human.
LLMs cannot do what humans do. The only people who think they can are lowering their standards.
ETA2, EDITING BOOGALOO:
@Griatch said in AI Megathread:
Yes, I expect this will dramatically change the art industry.
Not just the art industry, but the artistic community writ large. Lots of people can only do art because of independent commissions.
-
@Rinel Where’s its nose? In fact, it doesn’t even have a head, it’s just one big lump. How is this Pikachu in any way other than a distant, vague, secondhand understanding of the concept? Percentage-wise, it’s less Pikachu than the AI one.
(And to be clear: No, I cannot draw better. But that isn’t the argument here. I’m just unclear why your failures are less significant than a missing tail.)
But really, I don’t know how to engage with this and I’m going to stop here. You’re just… confidently wrong, here. The AI can make the thing. You said the MS paint doodle would be recognizable as the subject and AI couldn’t produce it at all. But it did. So you switch to critiquing single small elements, even when the subjects are clear, and even when your drawing fails to have 100% of the elements either. If you look through the grid, all the concepts you’re talking about are there. I’m not your employee so I’m not interested in getting you a ‘good enough’ picture, but those concepts are there. AI may not ‘understand’, but the tokens pikachu, jedi, lightsaber, moon, Earth, crown, cape, when passed through CLIP, correspond to vectors in the latent space. That is its equivalent of understanding.
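To make that “tokens correspond to vectors” point concrete, here’s a toy, library-free sketch. The 3-D vectors below are invented for this example; real CLIP embeddings are learned and have hundreds of dimensions. The point is only that “meaning” for the model is proximity in a vector space:

```python
import math

# Toy illustration of "tokens -> vectors in a latent space".
# These 3-D embeddings are made up for the example; CLIP's are learned.
EMBED = {
    "pikachu":    (0.9, 0.1, 0.0),
    "mouse":      (0.8, 0.2, 0.1),
    "lightsaber": (0.0, 0.9, 0.3),
    "moon":       (0.1, 0.2, 0.9),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# In this toy space, "pikachu" sits much closer to "mouse" than to "moon".
# Nearness in the space is the model's stand-in for understanding.
assert cosine(EMBED["pikachu"], EMBED["mouse"]) > cosine(EMBED["pikachu"], EMBED["moon"])
```

Whether that counts as “understanding” is exactly the philosophical disagreement here, but it is the mechanism.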
Final question: if there were a cheap IP-stealing Chinese mobile game that wanted Jedi Pikachu artwork, which of the two would they pick? Pretty obviously not yours, right?
-
@imstillhere said in AI Megathread:
are you aware that artists for whom this is NOT a hobby are suffering from this thing you’re excited to “embrace”
“Which Pikachu is better” arguments aside, this is the heart of it for me.
Thousands upon thousands of humans are going to be out of jobs because of a tool that was literally built by stealing their work, and the overwhelming response of the majority of other humans ranges from “meh, who cares - it’s cool, let’s embrace it” to “it’s too late to regulate anything, let’s just let the tech companies run amok.”
That’s really disheartening.
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster. “Oh well, data can be shared easily now; screw the musicians.” Or if YouTube had just let everyone upload every movie they owned, free for anyone to watch. “Oh well, movies can be shared easily now; screw the filmmakers.” Or if every social media outlet had just thrown up their hands and said, “Oh well, nothing to stop people from posting whatever they like; why even bother trying to moderate.”
We can’t un-invent things, but we can use them responsibly.
-
By the way, here’s an example of a smaller AI model trained using only public-domain images. The tech will keep moving even if regulators pull the brakes.
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster. “Oh well, data can be shared easily now; screw the musicians.” Or if YouTube had just let everyone upload every movie they owned, free for anyone to watch. “Oh well, movies can be shared easily now; screw the filmmakers.”
You can already generate your own AI music. In a year you’ll be able to generate your own movies from a prompt, so that industry is also in for an upheaval …
-
@bored said in AI Megathread:
it doesn’t even have a head, it’s just one big lump.
new pokemon fans
you disgust me
behold the true form of the electric mouse, to which i pay homage
@Faraday said in AI Megathread:
“Which Pikachu is better” arguments aside
how dare
@Faraday said in AI Megathread:
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster.
More seriously, LLMs are far, far worse than Napster, which hurt recording companies way more than it hurt actual musicians. I’m not taking a stance on the ethics of pirating, but there’s a difference between people copying things that others have made and people outright displacing human creators.
-
@Rinel said in AI Megathread:
@Faraday said in AI Megathread:
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster.
More seriously, LLMs are far, far worse than Napster, which hurt recording companies way more than it hurt actual musicians. I’m not taking a stance on the ethics of pirating, but there’s a difference between people copying things that others have made and people outright displacing human creators.
So, if I understand you right, it’s not unethical training sourcing that is the issue for you (as it seems to be for Faraday), but the societal implications of the tech itself?
That’s a valid view. But while we can regulate and fix ethics of training sets, we won’t realistically stop AI being used and possibly upending a lot of people’s jobs in the same way as was done by countless new technologies in the past.
I’m not saying I want people to lose their jobs, I’m just saying this is something we need to learn and adapt to rather than hope that the genie can be put back in its bottle.
-
@Griatch said in AI Megathread:
That’s a valid view. But while we can regulate and fix ethics of training sets, we won’t realistically stop AI being used and possibly upending a lot of people’s jobs in the same way as was done by countless new technologies in the past.
What makes most of these models powerful is the breadth of their training data. Yes, you can make an open-source model, but by definition it won’t work as well. It can’t make Pikachu fighting a duck with a lightsaber, because both Pikachu and lightsabers are copyrighted/trademarked properties.
Also, given how poorly people understand intellectual property law, I really question whether these models are truly being trained only on things in the public domain. There’s such a widespread mentality of “well it’s on the internet and doesn’t have a copyright tag attached so it must be free right?” Plus people re-uploading copyrighted stuff to sharing sites with a different license. Maybe they are - I haven’t dug into it - but color me doubtful.
I’m not suggesting that we can - or should - stop all uses of a new technology. I’m just suggesting that the current hype wave that’s overselling what the tech can actually do, coupled with the unethical nature of the training sets, is creating a perfect storm of badness.
-
@Griatch said in AI Megathread:
@Rinel said in AI Megathread:
@Faraday said in AI Megathread:
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster.
More seriously, LLMs are far, far worse than Napster, which hurt recording companies way more than it hurt actual musicians. I’m not taking a stance on the ethics of pirating, but there’s a difference between people copying things that others have made and people outright displacing human creators.
So, if I understand you right, it’s not unethical training sourcing that is the issue for you (as it seems to be for Faraday), but the societal implications of the tech itself?
Both are issues for me, though if you pressed me I’d say I’m more worried about the effects. If it didn’t have an economic effect, it would be a lot more like pirating media to me.
I very strongly support the implementation of strict regulations on how the models are trained, with requirements that all training data be listed and freely discoverable by the public.
I’m not saying I want people to lose their jobs, I’m just saying this is something we need to learn and adapt to rather than hope that the genie can be put back in its bottle.
One of the reasons I use MJ is to understand what’s going on, so I get what you mean, but we can still shackle the genie for a while.
-
@Faraday You talk as if it’s a clear-cut thing that these models are based on “theft”. Legally speaking, I don’t think this is really established yet - it’s a new type of technology and copyright law has not caught up.
If you (the human) were to study Pikachu (as presented in publicly available, but copyrighted, images) and learn in detail how he looks, you would not be breaching copyright. Not until you actually took that knowledge and made fan-art of him would you be in breach of copyright (yes, fan-art breaches copyright; it’s just that it’s usually beneficial to the brand, and most copyright holders seldom enforce their copyright unless you try to compete or make money off it).
In the same way, an AI may know how Pikachu looks, but one could argue that this knowledge does not in itself infringe on copyright - it just knows how Pikachu looks, after all, similarly to you memorizing his looks by just looking.
One could of course say that this knowledge inherently makes it easier for users of the AI to breach copyright. If you were to commission Pikachu from a human artist, both you and the artist could be on the hook for copyright infringement.
So would that put both the AI (i.e. the company behind the AI) and the commissioning human in legal trouble the moment they write that Pikachu prompt? It’s interesting that a US federal court has ruled that AI-generated art cannot be copyrighted in itself. So this at least establishes that the AI does not itself have a personhood that can claim copyright (which makes sense).
Now, I personally agree with the sentiment that it doesn’t feel good to have my works included in training sets without my knowledge (yes, I’ve found at least 5 of my images in the training data). But my feelings (or the feelings of other artists) don’t in themselves make this illegal or an act of thievery. That’s up to the legal machinery to decide, and I think it’s not at all clear-cut.
-
@Rinel said in AI Megathread:
I very strongly support the implementation of strict regulations on how the models are trained, with requirements that all training data be listed and freely discoverable by the public.
Proprietary models like Midjourney’s and OpenAI’s don’t release any of this stuff, alas. But if you stick to OSS models, like Stable Diffusion, you can freely search their training data here (they also use other public sources). There are tens of thousands of models for various purposes and active research on Hugging Face alone; they tend to be based on publicly available training data sets.
-
@Griatch said in AI Megathread:
You talk as if it’s a clear-cut thing that these models are based on “theft”. Legally speaking, I don’t think this is really established yet - it’s a new type of technology and copyright law has not caught up.
I do, yes. Obviously the courts have not weighed in yet on the specific lawsuits at play, but that doesn’t prevent people from drawing their conclusions based on available evidence and knowledge of the laws.
I have seen with my own eyes these tools generate images and text that are very clearly copyright-infringing.
Arguing that they are somehow absolved of all responsibility because of how the users use the tools is like arguing that a pirate website or Napster bears no responsibility for being a repository of pirated material because it’s the users who are uploading and downloading the actual files. That has historically not worked out too well for the app makers. It’s the reason YouTube errs on the side of copyright claims - they don’t want to get drawn into that battle.
I also don’t personally find any weight to the argument that AI is ‘just learning like humans learn’. That’s like arguing that NFL teams should be allowed to use Mark Rober’s kicking robot in the Super Bowl because “it kicks just like a human does”.
-
Just came across this latest insanity and felt obliged to share.
As of today, there are about half a dozen books being sold on Amazon, with my name on them, that I did not write or publish. Some huckster generated them using AI. This promises to be a serious problem for the book publishing world.
A brief update: After going back a few times with Amazon on this issue, I was notified the books would not be removed based on the information I provided. Since I do not own copyright in these AI works and since my name is not trademarked, I’m not sure what can be done.
It did eventually get sorted out, but only because this particular author had lawyers to advocate for them with Amazon.
-
@Faraday said in AI Megathread:
I also don’t personally find any weight to the argument that AI is ‘just learning like humans learn’.
It’s demonstrably false, as I put forward in the mermaid argument earlier. You can show a human a mermaid and tell them to make one who is half octopus instead of half fish. You can’t do that with LLMs. You have to phrase the input differently when trying to generate novel ideas, because LLMs /cannot learn/. They aren’t sapient. They aren’t even sentient. The fact that you can use certain tools to end up with an approximate result with an LLM doesn’t mean the AI is learning.
-
I disagree that the law hasn’t caught up. The law of transformative versus derivative work is directly applicable to the theory behind the training data and its use. What hasn’t caught up is legislation; it’s already illegal under existing common-law standards, just difficult to enforce because it’s decided case by case, and in practice a lot of it comes down to who can afford a fancy IP lawyer and who believes a shifty agreement is lawful just because it was signed.
I got into an argument about this just the other day on wyrdhold but the innocent bystanders were screaming and crying about the crossfire so I had to stop.
The element of human creativity to create a new thing is already the basis of the legal distinction between transformative (new art) and derivative (copied art) work.
-
@sao said in AI Megathread:
The element of human creativity to create a new thing is already the basis of the legal distinction between transformative (new art) and derivative (copied art) work.
Very true. It also staggers me just how many folks cry “but it’s transformative!” like that’s a defense. Transformative art is by default copyright infringement. Fair use is an exception that requires specific criteria. Transformation alone is not enough.
That’s why people still need permission to make a movie from a book, or a video game from a movie, or to record a cover song, even though all of these things are “transformative”. (YT’s rules for covers using Content ID make things murky, but still give the rights holder the control to block it, because it’s copyright infringement.)
In other AI news - grocery store app generates deadly “recipes”.
https://www.theguardian.com/world/2023/aug/10/pak-n-save-savey-meal-bot-ai-app-malfunction-recipes
Other instances have involved everything from the dangerous (undercooked meat) to the nonsensical.
Hopefully people will eventually learn that LLMs cannot be trusted for accurate information.
-
@Faraday said in AI Megathread:
Fair use is an exception that requires specific criteria. Transformation alone is not enough.
And determining what is fair use is an absolute fucking mess. I’ve had tons of people get mad at me when I say that fanfic and fanart are generally not fair use, because they’ve been told that if you aren’t selling it then it’s fine. It’s not fine just because you aren’t selling it!
Don’t get me wrong, I support fanart and fanfic and even write fanfic, but I’m well aware that I’m operating in a grey area of the law. I just don’t care about the law when it comes to that sort of thing, because the law is overly restrictive.
As a total aside to this largely tangential post, one of the funnier things to emerge out of this common misconception is the extreme taboo people have on selling fanfic, while fanartists routinely sell their work.