AI Generated Images: Tech, Ethics, Impacts on the Art Community
Regulating away technologies works all the time.
We did it with asbestos, and did it with a bunch of others.
You're operating on an absurd double standard for what constitutes a real plan in this thread.
For the economic benefit of a tiny subset of the population? Citation needed. Cause the RIAA tried real hard to stop digitized music. Hollywood tried real hard to stop video recording. I've looked for and failed to find a time when a technology ceased to be used without being replaced by something better.
It's also just a wee bit inconsistent that you believe labor saving technology can be legislated away in a capitalistic society but contend legislation improving welfare is impossible.
All I can do is operate on specific examples, but I can find no end to anatomical AI mistakes.
Just to be clear, you’re telling me that feeding it nothing but incorrect renderings of humans makes no discernible difference compared with feeding it actual pictures?
Nuclear Power?
But that's the thing about the art sector: if the robots can't create inspirational art, then that section of the market will persevere. There's an enduring human need for that kind of thing, there will always be a place in our society for those that inspire us.
The same can't be said for engineers and the like. If you feed it all the standards and it can lay out all your plumbing, electrical, and steel, then that's a whole office reduced to an [absurdly expensive] Autodesk AI, ChatGPT on contracts and PR, and a lone human with a stamp who occasionally gets to design some 1%er's bespoke guest house.
The economic ripples from a shift like that are what I find deeply concerning.
It’s not one subset. It’s multiple sectors, including lawyers.
We haven't totally regulated away tobacco, a substance that only causes harm. Tesla is selling cars on the promise of self driving and killing people. Regulation isn't going to stop an intangible and unquantifiable harm to artists. We'd probably still be using asbestos if they had been better at lobbying.
I'm telling you that taking the incorrect output of a training model and feeding it back into the same training model will make no discernible difference in the output, yes. Because the errors are already baked in, as it were.
Where you would see a difference is if you, for example, developed a more sophisticated model and, on one hand, gave it the original good pictures of people, and on the other, gave it the janky output of the old model. Since the model was more sophisticated, you would expect it to make better use of the original images and produce better work, while feeding bad data would give bad output (but probably no worse than the original janky output).
But you're not really ever going to be able to take bad output and get worse output unless you're specifically trying to do so, but why would you be trying to do so?
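The "errors are already baked in" point can be sketched with a toy example (everything below is hypothetical and only illustrates the argument, not how any real image model works): if a model's mistake is a fixed, systematic distortion, then re-feeding the model its own output just reproduces the same distortion.

```python
def toy_model(finger_counts):
    """Hypothetical 'model' that reproduces its training data but
    always renders fingers in pairs -- a fixed, systematic error."""
    return [2 * round(n / 2) for n in finger_counts]

real_hands = [5, 5, 5, 5]     # correct anatomy in the training set
gen1 = toy_model(real_hands)  # the systematic error appears
gen2 = toy_model(gen1)        # "retrained" on its own bad output

# The error is baked in: feeding the bad output back through
# produces the same bad output, not a worse one.
print(gen1, gen2, gen1 == gen2)
```

The distortion here is idempotent (applying it twice equals applying it once), which is the crux of the claim that bad output fed back in doesn't compound into worse output.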
Does that make sense?
Edit: like, I'm not trying to bag on you or anything, I'm legit trying to correct what I think is a misconception on your part. I can stop if you want.
France, South Korea, and China seem to be doing fine on the Nuclear Power front. The US isn't the entirety of the world.
Have you looked at the incidence of smoking compared to age? The youths don’t smoke. They vape, but it doesn’t carry the same image smokers did when I was a kid.
We have basically regulated it to being a dying industry by limiting its advertising and forcing its proprietors to advertise instead just how bad it is for you.
It still exists because it’s highly addictive and just pulling it from the market would cause some ugly displays and throw a bunch of people into withdrawal.
I'm unsure if AI is replacing lawyers. Isn't it just that one "Fight my ticket!" guy? That's probably not a service replacing actual lawyers, because you generally don't spend hundreds of dollars hiring a lawyer to get out of a traffic ticket.
And there was the guy trying to get an AI to argue in front of SCOTUS and getting laughed at and summarily dismissed, iirc.
Yeah, but I don't work in those countries, so I only give a shit about the tech used here, where the RIAA and Hollywood fought and failed.
But I'm sure the worker protections are strong in South Korea and China.
France is France
And still, “it has a 90% pass rate on the LSAT” is exactly how they pitched the next version of ChatGPT to the media.
Those things just mean that the lawyers were smart enough to say “get the fuck out” as soon as someone tried to actually use it directly in the courtroom. Let’s see how they feel about replacing interns writing shit up for them.
For the same reasons everything always goes tits up in capitalism
Is it cheaper?
This is incorrect. Hands require special programming. It’s not that “billions of images of hands” wasn’t enough.
The reason that hands were hard to draw is fundamental to how correlation engines work. You see they work by setting portions of an image next to other portions of an image and guessing what should be next.
A finger has a 50% probability of being bounded by two other fingers. As a result, an AI program that is drawing will see fingers and start producing a random number of fingers, stopping with probability 1/2 after making at least two. (Because every finger is bounded on one side by another finger.)
Adding more images of hands does not fix this because it does not modify the underlying structure of the image. It’s not adding new information.
You have to add a bounding example such that the space that the model examines when looking at a hand is harder than the space that a model examines or repeats when looking at other parts of the images.
Edit: like there is like a 0% chance that stable diffusion does not have the entirety of constructive anatomy in its model. They’ve got good images of hands. That isn’t the problem.
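The finger-counting claim above can be simulated as a geometric stopping process. This is a toy sketch of the post's argument only (real diffusion models don't literally work this way), but it shows why such a process rarely lands on exactly five fingers:

```python
import random

def draw_fingers(rng, p_stop=0.5):
    """Toy version of the claim: start with two fingers, then keep
    appending fingers, stopping with probability p_stop each time."""
    n = 2                         # every finger is bounded on one side
    while rng.random() > p_stop:  # continue with probability 1/2
        n += 1
    return n

rng = random.Random(0)
counts = [draw_fingers(rng) for _ in range(100_000)]
avg = sum(counts) / len(counts)
five = counts.count(5) / len(counts)

# Under this process the average hand has about 3 fingers, and
# exactly five fingers comes up only about 6% of the time.
print(round(avg, 2), round(five, 3))
```

That mismatch between the process's typical output and real anatomy is the post's point: more pictures of hands don't change the stopping behavior itself.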
https://arstechnica.com/information-technology/2023/02/us-copyright-office-withdraws-copyright-for-ai-generated-comic-artwork/
https://fingfx.thomsonreuters.com/gfx/legaldocs/klpygnkyrpg/AI COPYRIGHT decision.pdf
That's pretty clear. They did manage to get copyright for the text and ordering of images because both were handled manually.
No. It would actually be more difficult and expensive to fuck it up. Because you have to be specifically trying to weight the output towards something bad.
This is like "Dr. Evil coming up with a plan to make everybody think people are deformed by using AI to subtly manipulate perceptions unless you pay him ONE MEEEELLION DOLLARS" territory.
There's another benefit for games. Let's take the game Path of Exile as an example. They release new content every few months, and that content usually has voice work for it. Now, let's say you want the main character (one of a few classes, with different voices) to have a new voice line as part of the new content. Well, to do that, you'd need to grab the people who did the original voice work, and stick each of them in a sound booth. That's a problem, because it might be years since their initial voice lines, assuming they're still in the business, and if even one of them can't do this, you have a problem. Being able to take the existing voice lines, use AI to create new lines, and stick those in the game has an obvious logistical benefit, because you don't need to get a specific person to do a specific thing. The only difference between ethical AI and unethical AI here is whether or not the original performer agreed to this and is appropriately compensated for the new lines, and there's no good reason not to do this properly; voice acting is just the cost of doing business.
Anyway, on an industry level, I don't see this as an apocalypse, but it does cut the low-end out of the market, for both art and voicework. Being able to voice Townsperson #4, or drawing background characters in a TV show, is how artists get their foot in the door of the industry. It's something you can point to and show that you know what you're doing and can be trusted with higher-profile work. But if you can just generate a random voice for Townsperson #4, and have that be "good enough", that's an entry-level job lost. Big budget media will still hire people for big roles, both for star power, and because they can give a better performance, but the smaller roles that would usually go to beginning actors/artists are threatened. And industries aren't historically great at noticing that they've shut off the supply of entry-level jobs, let alone responding to that in a useful way.
Re the bolded part: your line includes the reason not to do it properly. If they can get the lines and pay nothing, bonus!
Well, that's where industry standards, contracts, and the like come in. Likeness rights cover voices, and while existing contracts may not cover AI generated lines properly, it's now a Thing That Exists and can be negotiated over.