AI Megathread
-
Had a very long talk about this today with my SO, who is an artist and has some very thoughtful opinions on it as a whole. One thing she pointed out to me came from a conversation she had with someone who works at Shutterstock, while she was at a Figma conference.
Shutterstock has its own generative AI solely for its members, and if you opt in to allowing your images to be used in their generative AI, you’re paid royalties for it.
I thought that was interesting, so I figured I’d share.
https://www.shutterstock.com/blog/shutterstock-building-ethical-ai
-
@Vulgar-Boy said in AI Megathread:
Honestly, I’m kinda surprised to see people getting upset at these edge cases while no one seems to care too much that there are entire games now where people are using AI to make their character images. Is it because when someone uses AI to do writing there is an attempt at deception, whereas no one is pretending their AI hottie was painted by them?
Honestly, kinda, for me. It is VERY easy for me to tell at a glance if something is a Midjourney generated PB or if it isn’t. Those images are obvious and it doesn’t feel like anybody’s trying to hide where they came from. And not gonna lie, I would probably opt out of a game that insisted I use MJ to generate my character image. How it’s different than using a PB of a public image of an actor/model I can’t quantify, I’d just feel uncomfortable if I didn’t have the choice.
I do feel like it takes more effort to suss out through just reading/using AI sensing tools if thematic or roster character text is generated via AI or not. And I would feel somewhat ‘trapped’ and a little gross if a game I thought was fully written and GM’d by a human ended up being a whole-cloth AI creation.
-
@Testament said in AI Megathread:
If AI-generated art gets sued into oblivion and we all lose access to Midjourney, I don’t feel like much will change for mushes. It’ll just mean another AI-based program will take its place, or people will just go back to using the same 10 pictures of Jason Momoa. I’m not sure that’s any better, but we’ve been doing it longer.
When it comes to AI PBs I’m gonna flex my age hard here and point out that I remember a time when “PBs didn’t exist.”
In some ways, I miss the lack of focus on them, but that ship fucking sailed long ago so I will just sulk in this world full of a billion and one Tom Hardys that are magically 6’4" and Eva Greens that are 5’2" and wonder why this shit was even necessary. But this is all just another area where this hobby sort of passed me by.
As it stands, I pretty much hate prompt-driven “AI” as a stand-in for anything but there’s a pile of post-pandemic GPUs that won’t sell themselves, so this silicon and capital has to go somewhere, right?
-
@Third-Eye said in AI Megathread:
How it’s different than using a PB of a public image of an actor/model I can’t quantify, I’d just feel uncomfortable if I didn’t have the choice.
At the risk of sounding hypocritical after my rant about copyright - for me, there are a few very important differences.
-
Who’s affected - a regular person artist vs Hollywood actors/studios.
-
How they’re affected - Actor/studio is not losing money by someone using a screencap or promo image; regular artists ARE losing money from tools like MJ. Maybe not from you, but the money to develop/host that tool comes from somewhere, and that somewhere is harming artists.
-
Context - MU PBs are really just “dream casting”, which is a pretty widely accepted Internet Thing. IMHO it’s more in the realm of fanfic than outright theft.
Does any of that make using PBs “okay”? That’s for individuals to decide, and honestly I’ve swung from “PBs all the way!” to “Meh” a bit myself. But I think those factors do make it different.
-
-
My stance on MJ is that if you use it, you should make an effort to commission more artists in order to offset the harm you’re contributing to. The ability to pump out “good enough” images is a huge problem for artists, and it’s going to make art the exclusive domain of the wealthy again.
Thankfully, for now, Midjourney et al. are fucking terrible unless you want a human in an extremely boring pose, but given the rapidity of their development I doubt it will stay that way.
Anyway, yeah, I’m ok with placeholder art for characters via MJ (though I think @Faraday’s absolutist stance against it is more coherent and principled), but if you’re playing that character for more than a month or two it’s time to start looking for commissions, imho.
-
So as not to pollute the other thread, with regard to MUs becoming heavily reliant on AI:
The official RPGs are doing it, so why shouldn’t you?! Hasbro has announced AI DMs as an upcoming feature for D&D Beyond, and they were just caught using AI-generated art in their newest (upcoming/just-released) book, with some pretty blatantly poor-looking results.
That’s to say nothing of the major film industry strikes right now being triggered by studios wanting to replace writers with AI and digitally catalogue extras so they can deepfake them for eternity.
Thrash and struggle as we may, but this is the world we live in now.
-
@bored said in AI Megathread:
Thrash and struggle as we may, but this is the world we live in now.
Only if we accept it.
The actors and writers in Hollywood are striking because of it.
Lots of folks in the writing community are boycotting AI-generated covers or ChatGPT-generated text, and there’s backlash against those who use them.
Lawsuits are striking back against the copyright infringement.
Again, to be clear, I’m not against the underlying technology, only the unethical use of it.
If a specific author or artist wants to train a model on their own stuff and then use it to generate more stuff like their own? More power to them. (It won’t work as well, because the real horsepower comes from the sheer volume of trained work, but that’s a separate issue.)
If MJ were only trained on a database of work from artists who had opted in and were paid royalties for every image generated? (like @Testament mentioned for Shutterstock - 123rf and Adobe have similar systems) That’s fine too (assuming the royalty arrangements are decent - look to Spotify for the dangers there).
These tools are products, and consumers have an influence in whether those products are commercially successful.
-
@Faraday There will be a legal shake-out, for sure. I’m not optimistic that starving artists (a group not traditionally known for their ability to afford expensive lobbyists) are going to win, though. While Midjourney is a bit of a black box (given their for-profit model), the idea of figuring out valid royalties for Stable Diffusion’s training data is getting into counting grains of sand on the beach territory. Given the open-source nature and proliferation of descendant models… what can you do? The only answer is to ban the technology completely outside of… approved, licensed models, which would almost certainly just be MORE of a corporate coup as ownership of those big data sets will become prized.
I brought this one up because I think the Hasbro stuff is an interesting case of it being mainstreamed and placed into the professional space (and it’s RPGs, exactly what we discuss here). Notably, the book has artist credits! These are a bunch of regular WotC contributors, and the art is in familiar styles (although the artist whose work it most resembles is not in the credits). Something got arranged here. Some people got paid. But maybe not all the people. WotC presumably owns the rights to every bit of D&D and (vastly more) MTG art. That is a big enough set to train on. They’re probably not paying most of those people.
There will be laws for all this stuff eventually, but the idea that the creative industries come out on top seems very slim. Copywriting is already essentially being annihilated as a profession. I don’t see how it goes any other way.
-
@bored said in AI Megathread:
I’m not optimistic that starving artists (a group not traditionally known for their ability to afford expensive lobbyists) are going to win, though.
Not directly, no, but it’s not only starving artists fighting. Big-name companies like Getty Images are leading the charge, and they could get protections in place that impact the little guys too.
@bored said in AI Megathread:
the idea of figuring out valid royalties for Stable Diffusion’s training data is getting into counting grains of sand on the beach territory.
Sure, but that’s kind of the point. If a lawsuit decided that this was all copyright infringement and they were required to pay royalties to artists whose work was used in training, they’d basically have no choice but to throw away the existing models and start with more ethical ones. Would the old ones still exist in rogue form on the internet? Sure, but they would cease being in the mainstream.
Book piracy and music/movie torrent sites didn’t vanish after the fall of Napster, but nobody tried to make it their business model.
-
I agree that large rightsholders pushing is the only way those legal changes happen. I am extremely dubious that this somehow yields an ‘ethical’ outcome. Are we fans of the DMCA now?
If generative AI becomes subject to proof of ownership of your training data, then it will just become de facto the exclusive tool of large tech and media companies that already own that data (or who can encourage you to sign away your rights to it as a part of their services, or just purchase it). Subsequently, everyone who wants to use it will be paying for their services. Google and Meta and Disney win, not John StockModel or Jane FanArtist. At best, they’ll be given the option to sign over their work forever, for very small payouts.
And it’s fundamentally worse than any of the analogy cases because once you have your model, you genuinely never need those models or artists again. That’s why the tech is worth it. It’s pointless if you owe royalties. Even if it wins a court case, Getty will lose (or realistically, sell the data while it’s still worth anything): Google scanned the planet, it can generate new training data if it has to. The actors strike will settle, they’ll give up their Bond villain position as a negotiating point and just generate novel fake humans instead of buying someone’s face. The tech is so transformative that anyone who thinks they can fight it is just going to get run over.
-
@bored said in AI Megathread:
The actors strike will settle, they’ll give up their Bond villain position as a negotiating point and just generate novel fake humans instead of buying someone’s face
What will they use to generate those fake humans?
ETA:
@bored said in AI Megathread:
Google scanned the planet, it can generate new training data if it has to
This is way more legally fraught than you’d think, especially given European privacy rights.
-
@Rinel said in AI Megathread:
@bored said in AI Megathread:
The actors strike will settle, they’ll give up their Bond villain position as a negotiating point and just generate novel fake humans instead of buying someone’s face
What will they use to generate those fake humans?
There’s already plenty of those on TikTok.
-
@Testament said in AI Megathread:
@Rinel said in AI Megathread:
@bored said in AI Megathread:
The actors strike will settle, they’ll give up their Bond villain position as a negotiating point and just generate novel fake humans instead of buying someone’s face
What will they use to generate those fake humans?
There’s already plenty of those on TikTok.
In that case, the actors are safe.
-
@Faraday said in AI Megathread:
@bored said in AI Megathread:
Thrash and struggle as we may, but this is the world we live in now.
Only if we accept it.
The actors and writers in Hollywood are striking because of it.
Lots of folks in the writing community are boycotting AI-generated covers or ChatGPT-generated text, and there’s backlash against those who use them.
Lawsuits are striking back against the copyright infringement.
Again, to be clear, I’m not against the underlying technology, only the unethical use of it.
If a specific author or artist wants to train a model on their own stuff and then use it to generate more stuff like their own? More power to them. (It won’t work as well, because the real horsepower comes from the sheer volume of trained work, but that’s a separate issue.)
If MJ were only trained on a database of work from artists who had opted in and were paid royalties for every image generated? (like @Testament mentioned for Shutterstock - 123rf and Adobe have similar systems) That’s fine too (assuming the royalty arrangements are decent - look to Spotify for the dangers there).
These tools are products, and consumers have an influence in whether those products are commercially successful.
Right now at work I’m having a horrendous time dealing with upper management who are just uncritical (and frankly, experienced enough to know better) about this sort of thing.
“ChatGPT told <Team Member x> to do thing Y with Python! It even wrote them the script!”
Me: Uhh, do they have any idea how this works? Or how to even read the script?
Dead silence.
I’m not letting this go on in my org without a fight.
-
@bored said in AI Megathread:
@Faraday There will be a legal shake-out, for sure. I’m not optimistic that starving artists (a group not traditionally known for their ability to afford expensive lobbyists) are going to win, though. While Midjourney is a bit of a black box (given their for-profit model), the idea of figuring out valid royalties for Stable Diffusion’s training data is getting into counting grains of sand on the beach territory. Given the open-source nature and proliferation of descendant models… what can you do? The only answer is to ban the technology completely outside of… approved, licensed models, which would almost certainly just be MORE of a corporate coup as ownership of those big data sets will become prized.
I brought this one up because I think the Hasbro stuff is an interesting case of it being mainstreamed and placed into the professional space (and it’s RPGs, exactly what we discuss here). Notably, the book has artist credits! These are a bunch of regular WotC contributors, and the art is in familiar styles (although the artist whose work it most resembles is not in the credits). Something got arranged here. Some people got paid. But maybe not all the people. WotC presumably owns the rights to every bit of D&D and (vastly more) MTG art. That is a big enough set to train on. They’re probably not paying most of those people.
There will be laws for all this stuff eventually, but the idea that the creative industries come out on top seems very slim. Copywriting is already essentially being annihilated as a profession. I don’t see how it goes any other way.
Y’all ever think all of this shit is just incompatible with capitalism?
Or rather, ever think that capitalism is just incompatible with life at this point?
I’ll cop to oversimplifying things and thumping Marx and Marcuse here, but IMO the quiet part hasn’t been said loud enough in all of this mess.
-
@bored said in AI Megathread:
Are we fans of the DMCA now?
There’s certainly room to improve copyright laws, but am I a fan of the broad principles behind it? Of protecting creators’ work? Absolutely.
Creators are going nowhere because large language models and stable diffusion type graphic models are dumb.
I don’t just mean they’re dumb on principle, I mean they’re dumb logically and artistically. They’re parrots who don’t understand the world they’re parroting. They don’t understand jack about the human experience.
They can never write a news article about an event that just happened - not until some human writes it first so they can copy it. They can’t write a biography about a person who hasn’t been born yet, or generate an image of Ford’s new automobile, or write late-night jokes about the day’s events - not until a human has done something for them to copy. They will not - and can not - ever generate something truly original, inspired, or new.
Now to be fair, a lot of human creations are unoriginal and uninspired too, but we at least have the potential to do better. Generative AI doesn’t.
And this isn’t just a bug that they can fix in version 2.0. It’s baked into the very core of how they work.
JMS had a great essay on this.
I also liked Adam Conover’s video essay.
I don’t mean to downplay the harm that generative AI tools can cause to creatives. But this tech is nowhere near as transformative as the tech hype would like us to believe.
-
@SpaceKhomeini said in AI Megathread:
Y’all ever think all of this shit is just incompatible with capitalism?
The unrestrained drive for ever increasing profit will result in reform, revolution, or societal collapse. So yes.
You could probably keep capitalism going indefinitely even with shit like this if you implemented a strong safety program, but the plutocrats are too stupid even to do that. So!
-
I think we’re just going to see writing and graphical art go the same way as physical art that can be mass-produced. I can get an imitation porcelain vase mass produced out of a factory for $10 or I can get a handmade artisan-crafted porcelain vase for $1000. They may seem alike but a critical eye can spot the difference and some people may care about that.
-
@SpaceKhomeini said in AI Megathread:
@Faraday said in AI Megathread:
@bored said in AI Megathread:
Thrash and struggle as we may, but this is the world we live in now.
Only if we accept it.
The actors and writers in Hollywood are striking because of it.
Lots of folks in the writing community are boycotting AI-generated covers or ChatGPT-generated text, and there’s backlash against those who use them.
Lawsuits are striking back against the copyright infringement.
Again, to be clear, I’m not against the underlying technology, only the unethical use of it.
If a specific author or artist wants to train a model on their own stuff and then use it to generate more stuff like their own? More power to them. (It won’t work as well, because the real horsepower comes from the sheer volume of trained work, but that’s a separate issue.)
If MJ were only trained on a database of work from artists who had opted in and were paid royalties for every image generated? (like @Testament mentioned for Shutterstock - 123rf and Adobe have similar systems) That’s fine too (assuming the royalty arrangements are decent - look to Spotify for the dangers there).
These tools are products, and consumers have an influence in whether those products are commercially successful.
Right now at work I’m having a horrendous time dealing with upper management who are just uncritical (and frankly, experienced enough to know better) about this sort of thing.
“ChatGPT told <Team Member x> to do thing Y with Python! It even wrote them the script!”
Me: Uhh, do they have any idea how this works? Or how to even read the script?
Dead silence.
I’m not letting this go on in my org without a fight.
Please tell me you’re documenting this, because this is a security breach (or a lawsuit) waiting to happen.
-
@dvoraen said in AI Megathread:
@SpaceKhomeini said in AI Megathread:
@Faraday said in AI Megathread:
@bored said in AI Megathread:
Thrash and struggle as we may, but this is the world we live in now.
Only if we accept it.
The actors and writers in Hollywood are striking because of it.
Lots of folks in the writing community are boycotting AI-generated covers or ChatGPT-generated text, and there’s backlash against those who use them.
Lawsuits are striking back against the copyright infringement.
Again, to be clear, I’m not against the underlying technology, only the unethical use of it.
If a specific author or artist wants to train a model on their own stuff and then use it to generate more stuff like their own? More power to them. (It won’t work as well, because the real horsepower comes from the sheer volume of trained work, but that’s a separate issue.)
If MJ were only trained on a database of work from artists who had opted in and were paid royalties for every image generated? (like @Testament mentioned for Shutterstock - 123rf and Adobe have similar systems) That’s fine too (assuming the royalty arrangements are decent - look to Spotify for the dangers there).
These tools are products, and consumers have an influence in whether those products are commercially successful.
Right now at work I’m having a horrendous time dealing with upper management who are just uncritical (and frankly, experienced enough to know better) about this sort of thing.
“ChatGPT told <Team Member x> to do thing Y with Python! It even wrote them the script!”
Me: Uhh, do they have any idea how this works? Or how to even read the script?
Dead silence.
I’m not letting this go on in my org without a fight.
Please tell me you’re documenting this, because this is a security breach (or a lawsuit) waiting to happen.
At work we were recently talking about AI hallucination exploits. Essentially, AI-generated code sometimes imports libraries that don’t exist. Attackers can register packages under those hallucinated names on public repositories, full of exploits, and wait for someone to install them. Hilarity and bottom-line-affecting events ensue.
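One possible mitigation is simply refusing to install anything an AI suggests until a human has vetted it. As a minimal sketch (the `VETTED_PACKAGES` allowlist and the `fast_json_utils` package name here are hypothetical, purely for illustration), you can statically pull the import names out of a generated script and flag anything your org hasn’t already approved:

```python
import ast

# Hypothetical allowlist: packages your org has actually vetted.
VETTED_PACKAGES = {"requests", "numpy", "pandas"}

def audit_imports(source: str) -> list[str]:
    """Return top-level module names imported by `source` that are not
    on the vetted allowlist -- candidates for hallucinated (and possibly
    attacker-registered) package names."""
    tree = ast.parse(source)
    suspicious = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            top = name.split(".")[0]  # "pkg.sub" -> "pkg"
            if top not in VETTED_PACKAGES:
                suspicious.append(top)
    return suspicious

# "fast_json_utils" is a plausible-sounding but made-up package name.
generated = "import requests\nimport fast_json_utils\n"
print(audit_imports(generated))  # -> ['fast_json_utils']
```

This obviously doesn’t catch everything (dynamic imports, typosquats of vetted names), but it turns “paste and run whatever ChatGPT said” into a reviewable step.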