Author Topic: Announcing Adobe Firefly A new family of creative generative AI models  (Read 24743 times)


Justanotherphotographer

« Reply #75 on: April 01, 2023, 06:25 »
+11

Yep, "everything under the sun".

Obviously you don't understand why Getty is suing (and thereby protecting their business and OUR work) if you assume their lawsuits are frivolous!

Getty has a problem with AI companies using their database and OUR work without permission.

Besides, the example and the white paper I showed you are further proof that you are wrong: when the training set is limited, AI may very well end up plagiarizing copyrighted work. So it doesn't generate only ideas and concepts; it also generates derivative work.

You are correct. The people disagreeing tend to be the ones who stand to make money out of the new technology.

If the AI has a dataset of one, the output will be (near) identical. With two it will strongly resemble the two images, and so on. Once you get to millions it is a lot harder to spot, but the principle is still very much the same. Infringing 1,000,000 people's copyright isn't better than infringing one person's. It is a lot worse, because it destroys an industry and many jobs. If I steal one person's images we may not even end up competing for the same customers. If I come up with an AI engine to steal everyone's, I have destroyed artists' ability to make a living.

Anthropomorphising the app is silly. It isn't learning in the same way as a human. It's taking and processing images via programming. Saying they don't use them directly is a bizarre concept when we are talking about images being loaded into any type of algorithm or app, resulting in new images coming out the other end.

The law is very clear. The Berne Convention says the following must be recognised as an exclusive right of authorisation: the right to make adaptations and arrangements of the work. Reading the exceptions makes the spirit of the law even clearer. The theme is that exceptions are only allowed if they don't conflict with a normal exploitation of the work and don't prejudice the legitimate interests of the author. AI very clearly does both.

Just to add, don't read this as me being optimistic as to the outcome of any legal claims. The law is there to serve the powerful, and all the big tech companies are behind the tech. It's unstoppable (not because it is currently legal or right, though). I expect the law to be changed to allow it for these reasons.

ETA: missed this bit. The author also has the exclusive "right to use the work as a basis for an audiovisual work". Not sure how you can argue that the images ingested haven't been used as the "basis" for the AI's output, regardless of how much/little, directly/indirectly they are processed. No image inputs, no image outputs.
« Last Edit: April 01, 2023, 06:52 by Justanotherphotographer »


Justanotherphotographer

« Reply #76 on: April 01, 2023, 06:29 »
+7
...
Getty has a problem with AI companies using their database and OUR work without permission.
...

To add one more thing, as this is an Adobe thread: Adobe helping themselves to work we uploaded for completely different reasons is even worse IMHO (especially as they market it as a more "ethical" AI because it only uses "their" images).
« Last Edit: April 01, 2023, 08:49 by Justanotherphotographer »

« Reply #77 on: April 01, 2023, 12:29 »
+1

Because following this logic all works created by people are also "derivative", because an artist creates them after being inspired by a mix of many kinds of "copyrighted" things he has seen before! Nothing is ever born in a vacuum.

Wrong.

It's a long-standing legal principle that copyright does not protect ideas, concepts, systems, or methods of doing something, while differentiating all of this from derivative work.

Obviously you are not a lawyer and you didn't do your homework.

But that's the whole point! AI work is not derivative in any rational sense, and no one has yet given an example that shows it is.

« Reply #78 on: April 01, 2023, 12:34 »
+1
..
Besides, the example and the white paper I showed you is another proof that you are wrong: when the training set is limited, AI may very well end up plagiarizing copyrighted work. So it doesn't generate only ideas and concepts, but also derivative work.
Another straw-man argument - the discussion was about massive AI training sets. Of course giving it a limited dataset can produce what may be derivative work - but even submitting single images often produces unrecognizable results.

« Reply #79 on: April 01, 2023, 12:37 »
+1
...
Anthropomorphising the app is silly. It isnt learning in the same way as a human. Its taking and processing images via programming. Saying they dont use them directly is a bizarre concept when we are talking about images being loaded into any type of algorithm or app resulting in new images coming out the other end.
...

MOST successful AI doesn't learn like a human but produces vastly superior results - champion Go & chess AI don't even know the rules of the game! It's called emergent behavior.

« Reply #80 on: April 01, 2023, 13:17 »
0
of course giving it a limited dataset can produce what may be derivative work

Exactly!

Anny1234

« Reply #81 on: April 01, 2023, 17:09 »
0
of course giving it a limited dataset can produce what may be derivative work

Exactly!

Let's take the worst-case, non-existent scenario and judge everything based on it.
Let's ban all kitchen knives because very occasionally someone gets killed with them.

The talk was about legitimate companies that allow commercial use because they are sure such cases are too rare (and their goal is to bring them to zero eventually - it's been a few months!), not some garage AI.

And even in that overblown, unimaginable case with one input of data, it may not be creating a derivative: if you have to draw a tree, there is no way you can draw it in any shape other than a tree. And if you've only ever seen one tree, well, what would you expect? But it doesn't copy-paste or collage anything - that is not how it works at all.
« Last Edit: April 01, 2023, 17:34 by Anny1234 »

« Reply #82 on: April 01, 2023, 17:42 »
0

Because following this logic all works created by people are also "derivative", because an artist creates them after being inspired by a mix of many kinds of "copyrighted" things he has seen before! Nothing is ever born in a vacuum.

Wrong.

It's a long-standing legal principle that copyright does not protect ideas, concepts, systems, or methods of doing something, while differentiating all of this from derivative work.

Obviously you are not a lawyer and you didn't do your homework.

It is hilarious how you seem to be stalking my every message to say it is wrong without reading it :D

Where did I say that copyright protects ideas?

Even the sentence you quoted starts with the words "following this logic (which is flawed, as I described above)"... etc., meaning that if it were true it would be absurd - which is exactly what you have repeated after me, while saying that it is wrong :D

Sorry, I cannot even follow your reply, because you reply on something I wasn't even talking about :D

If I need to repeat it especially for you in a simple sentence: AI doesn't create derivative work, just as an artist doesn't create derivative work, because memorising, learning and being inspired by something to create something new is not the same as copy-pasting.

What are you talking about I don't know :)

You are clearly delusional. Derivative work is literally an accepted technique and has its own copyright law. I won't embarrass you any more, other than to state that if it was nonsense, as you suggest, SS wouldn't pay people for the use of their images in the learning sets because it is 'only storing' our images as reference - and yet SS is paying contributors. Getty wouldn't have a legal case going - and yet they do. And Adobe wouldn't be exploring a compensation model - and yet they are. Evidence versus a random ranter on a forum: I know which I believe... but please, continue in your delusion, just do so quietly please.

« Reply #83 on: April 01, 2023, 18:14 »
0

Derivative work is clearly what an AI is producing, in which case compensation isn't a nice thing to do, it is a legal requirement. Adobe have not paid for any such license to use the work. They don't own the copyright - we do.
..

Not so clear - it's been shown many times that the actual generative AI does not use any images directly but creates an entirely new image from the training set.

Whether anyone has the right to use images scraped from the web is a separate, more theoretical issue, since any payment to authors for the training would be minuscule - a tiny fraction spread across 100s of millions of images.
At the same time, writers & journalists aren't complaining about scraping, despite the trillions of words examined.

Don't know why some artists continue to propagate misleading info, especially since there's really no upside.
Web scraping creates the dataset, but the results of those GPT bots do not violate anyone's copyright - again, because the AI generates completely new text without using the training data.

Well, you are conflating two issues there.

Firstly, the data sets it uses are an unknown quantity, so you and others are guessing. Assuming millions is nonsense, because it just doesn't need to - not when it could use 5. It doesn't work like that. As others have stated and shown with examples, given a limited data set the image will be similar, or - because it isn't programmed to be innovative - it can directly duplicate it. I've seen many examples of A.I.-generated images and they have created a clearly derived work. Another issue is that asking it to create a photo of a dog in a costume should generate a fairly generic photo. But users don't do that. They'll ask for a specific breed, in a specific costume, in a specific environment, which limits the data sets it can reference. Similarly, ask it to create a dragonfly on some grasses and it will reference only a few images, not all.
Your second example, of ChatGPT-4 producing unique text, is not quite correct. This is new law in the making. Plagiarism has a long-standing legal framework, and ChatGPT-4 can create text to parameters ("write an essay about World War 2 as if I had written it"), producing a worryingly me-like essay. Again, it has raced ahead and is being abused, but less well publicised is that apps have already been created to detect A.I.-created text, because of the potential for plagiarism. Schools are trialling A.I. detection software as we speak - using A.I. to find content produced by A.I.

It is a very new frontier across the spectrum of usage, but shortly the law will catch up, as it always does. And companies that played fast and loose will find themselves in a very uncomfortable position of compensation and fines. Which is why SS are getting ahead of the curve by at least making a show of being wholesome. Random payments for images used that they do not disclose, and payments that are extremely variable, don't help with transparency, but they are acknowledging that compensation must be paid.

Adobe will have been aware of this, and I suspect they have already stated "we are working on a model of compensation" because the A.I. output isn't as far from its source material as they were led to believe it would be, so they are playing catch-up with compensation. Just a theory, of course.

« Reply #84 on: April 01, 2023, 18:33 »
0
of course giving it a limited dataset can produce what may be derivative work

Exactly!

Let's take the worst-case, non-existent scenario and judge everything based on it.
Let's ban all kitchen knives because very occasionally someone gets killed with them.

The talk was about legitimate companies that allow commercial use because they are sure such cases are too rare (and their goal is to bring them to zero eventually - it's been a few months!), not some garage AI.

And even in that overblown, unimaginable case with one input of data, it may not be creating a derivative: if you have to draw a tree, there is no way you can draw it in any shape other than a tree. And if you've only ever seen one tree, well, what would you expect? But it doesn't copy-paste or collage anything - that is not how it works at all.

No, it's not a non-existent scenario.

This scenario is very much valid for those contributors who create really unique content. If their unique content is allowed into the AI training set, then the AI output will simply be plagiarism.

No problem with sliced tomatoes, hamburgers, or business people shaking hands, if that's your domain.

On the other hand, if you don't just do business people shaking hands, sliced tomatoes or hamburgers, but truly unique content, then it's in your best interest to opt out of any AI training scheme (even if they pay you 1 peanut), to avoid shooting yourself in the foot.

And it's in the interest of all microstock companies to prevent this unique content from being used for AI training, or else they may face plagiarism lawsuits.

« Last Edit: April 01, 2023, 18:36 by Zero Talent »

Anny1234

« Reply #85 on: April 01, 2023, 18:43 »
+1
Researchers only extracted 94 direct matches and 109 perceptual near-matches out of 350,000 high-probability-of-memorization images they tested (a set of known duplicates in the 160 million-image dataset used to train Stable Diffusion), resulting in a roughly 0.03 percent memorization rate in this particular scenario.

Also, the researchers note that the "memorization" they've discovered is approximate since the AI model cannot produce identical byte-for-byte copies of the training images. By definition, Stable Diffusion cannot memorize large amounts of data because the size of the 160 million-image training dataset is many orders of magnitude larger than the 2GB Stable Diffusion AI model. That means any memorization that exists in the model is small, rare, and very difficult to accidentally extract.

None of this is written by me :D

@Lowls Go and write your insulting replies under this article: https://arstechnica.com/information-technology/2023/02/researchers-extract-training-images-from-stable-diffusion-but-its-difficult/amp/

Or go straight to the group of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich who released this paper; they are very interested in your opinion.

(Since I'm ignoring you from now on, I won't see your little insulting replies anyway. :) )
Thanks to the forum for this feature.
« Last Edit: April 01, 2023, 18:57 by Anny1234 »

Anny1234

« Reply #86 on: April 01, 2023, 19:07 »
0
When training an image synthesis model, researchers feed millions of existing images into the model from a dataset, typically obtained from the public web. The model then compresses knowledge of each image into a series of statistical weights, which form the neural network. This compressed knowledge is stored in a lower-dimensional representation called "latent space." Sampling from this latent space allows the model to generate new images with similar properties to those in the training data set.

This is how AI works. (not written by me either :D)

Surprisingly nothing about derivatives and copy-pasting... Probably because they don't read this thread well enough to follow all the expert advice properly :)
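For what it's worth, the two numeric claims in the article quoted above are easy to sanity-check with back-of-the-envelope arithmetic. A quick sketch, using only the figures from the Ars Technica article (the ~2 GB model size, the 160 million training images, and the 94 direct matches out of 350,000 tested):

```python
# Rough arithmetic behind the quoted claims. All numbers come from the
# Ars Technica article cited above; this is an illustration, not new data.

model_size_bytes = 2 * 10**9     # ~2 GB Stable Diffusion model
training_images = 160 * 10**6    # ~160 million training images

# Average "budget" per training image if the model were a storage device:
bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.1f} bytes per image")  # 12.5 bytes - far too little to store a photo

# Memorization rate for the direct matches reported in the study:
direct_matches = 94
tested_images = 350_000
rate_percent = direct_matches / tested_images * 100
print(f"{rate_percent:.2f}% direct-match rate")  # 0.03%
```

Twelve and a half bytes per image is the point behind "many orders of magnitude larger": the model cannot be a compressed archive of its inputs, because there simply isn't room.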

Justanotherphotographer

« Reply #87 on: April 02, 2023, 02:34 »
+4
Sorry this has got heated/ personal. There's no need for it.

Back on topic.

Or maybe straight to a group of AI researchers...

Possibly not the best people to trust on the topic, as they have literally the most invested in it being legit and above board. Even then, the paper produces examples of the AI copying images almost exactly - in fact, a lot more exactly than if I just saved the exact same image in a lossier file format.

Surprisingly nothing about derivatives and copy-pasting...

I mean, is there nothing about it? Strip out the buzzwords and leading language and it could mean almost anything. Here's a possible translation:

When training an image synthesis model, researchers feed millions of existing images into the model from a dataset, typically obtained from the public web.

When programming an AI app employees copy millions of our illustrations/ photos into the app, typically taken from our online portfolios where we posted them for completely unrelated reasons.

The model then compresses knowledge of each image into a series of statistical weights, which form the neural network.

The app then saves the images in a more compressed format and indexes them for easy retrieval when recompiling derivatives.

This compressed knowledge is stored in a lower-dimensional representation called "latent space."
These compressed copies of the images are stored in a database.

Sampling from this latent space allows the model to generate new images with similar properties to those in the training data set.


Copying elements from the images in this format allows the app to create derivative images using the properties of the original images compressed and indexed in its database.


OR:
When programming an AI app, employees copy millions of our illustrations/photos into the app, typically taken from our online portfolios where we posted them for completely unrelated reasons. The app then saves the images in a more compressed format and indexes them for easy retrieval when recompiling derivatives. These compressed copies of the images are stored in a database. Copying elements from the images in this format allows the app to create derivative images using the properties of the original images compressed and indexed in its database.

I am sure you can imagine writing a description of saving a jpeg as a tif, then zipping the file, and making it sound like magic forming a new image, if you used enough tech-bro buzzwords in a similar way.
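The disagreement in the two "translations" above can be made concrete with a deliberately toy sketch (one-dimensional "images", a hypothetical model whose only learned "weights" are two statistics - nothing like a real diffusion model, just an illustration of the principle both sides are arguing over): the model stores no copy of any input, yet with a tiny training set its outputs inevitably land right next to the inputs.

```python
import random
import statistics

# Toy "generative model": the only learned parameters ("weights") are the
# mean and standard deviation of the training data. No input is stored.
def train(data):
    return statistics.mean(data), statistics.stdev(data)

def generate(params, rng):
    mu, sigma = params
    return rng.gauss(mu, sigma)

rng = random.Random(42)

# Large, varied "dataset": outputs are spread out, rarely near any one input.
big = [rng.uniform(0, 100) for _ in range(10_000)]
# Tiny dataset: everything the model "knows" comes from two inputs,
# so its outputs must closely resemble them.
tiny = [50.0, 52.0]

sample_big = generate(train(big), rng)
sample_tiny = generate(train(tiny), rng)

print(sample_big, sample_tiny)  # the tiny-set sample hugs the 50-52 neighbourhood
```

The same mechanism produces both behaviours: with millions of inputs the statistics blur together, with a handful they all but reproduce the originals - which is exactly the "limited dataset" point argued earlier in the thread.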


« Last Edit: April 02, 2023, 02:39 by Justanotherphotographer »

« Reply #88 on: April 02, 2023, 02:40 »
+5
Just look at the production of some AI advocates.
Look at the images they made before AI, and the images produced with AI that they proudly show off as their integral creation. Easy to check, since they expose a link to their portfolio or their website. Or go on Adobe Stock, click on the latest AI stuff, and see the portfolio and what kind of production the person did before.
This simple fact is enough to understand what is going on. We can understand the temptation and the intoxicating feeling of having suddenly become great artists.
And some have (or will have) no qualms (and no state of mind) about flooding microstock sites with "their" art (of clicking a "generate" button).

...and they even complain that "Midjourney can legally sublicense any assets produced using it"... it's a competition of the unscrupulous!!!
What they don't realize is that they are only useful for this transition period, to select and check the images; they will be useless and discarded afterwards. The image generation will be autonomous, or directly driven by the final client. And this in a very short time - see you tomorrow!!! End of story.
« Last Edit: April 02, 2023, 03:49 by DiscreetDuck »

Anny1234

« Reply #89 on: April 02, 2023, 03:09 »
+1
What they don't realize is that they are only useful for this transition period, to select and check the images; they will be useless and discarded afterwards. The image generation will be autonomous, or directly driven by the final client. And this in a very short time - see you tomorrow!!! End of story.

Exactly! :) (though I haven't seen anyone on this forum who doesn't realise it).

It is senseless to submit images to stock sites anymore (AI or not - maybe only Editorial), as they themselves will cease to exist eventually, except Adobe, as they have their own unique product. In my humble opinion (note for those who don't recognise it).

Some people here have predicted that it may happen within a year or even less.
« Last Edit: April 02, 2023, 06:13 by Anny1234 »

« Reply #90 on: April 02, 2023, 03:46 »
0
We must anticipate the inabilities of AI, thanks to HI...
« Last Edit: April 02, 2023, 06:48 by DiscreetDuck »

Just_to_inform_people2

« Reply #91 on: April 05, 2023, 14:32 »
+3
Notice how even here, whenever Mat announces something new, he will always just reply to questions and comments strictly related to the technical aspect of a feature and ignore every single question and comment about users' actual concerns or the morality of what Adobe is doing.

I can't recall Adobe ever having sought out conversation with contributors about anything before we were presented with the end result - free galleries, currency changes, our images being used to train AI that will make our images worthless. There has never been any kind of "conversation" prior to announcing the final decision, and the "conversations" that took place after that were always one-sided, with Adobe pretty much ignoring our concerns.

Very true. And if you accuse Mat of corporate speak then he is offended :)

I do think they listen in via Mat. I actually think that Adobe didn't want to prolong the bonus program, but because here, and probably at other places as well, we made a fuss about it, they caved in the end - not to lose face - and they surely will have weighed the costs against the potential reputation damage.

But it annoys me too that when you ask something other than a technical question you never hear an answer. Concerns are maybe listened to, but certainly not replied to.

And since they are a company (maximum profit seeking), it annoys me that they sometimes pretend to be an NGO that supports artists at all times, while in reality they are not. But I guess a lot of people fall for that kind of talk. At least here, I see many of them. But you can also see them when you join some of their live Behance meetings. Some actually seem like groupies and completely believe in Adobe's fantasy tale :)

And remember, there is no them without us - but not exactly in the way they mean it. It's more that if we don't buy their products or deliver content to them, then there is no them. That part is definitely true.

Yup, I'm so glad somebody said that. I see it too, and I often feel like a traitor if I dare to say something against Adobe. Because it looks like many people think they really care for us.

You see.

Adobe (via Mat) started this thread, but all the difficult questions went unanswered. Even after some time, answers will not come.

But they are here for you, don't forget :)

« Reply #92 on: May 11, 2023, 04:19 »
+2
Adobe just takes your images, feeds a machine with them and generates a lot of money.

Did you contributors get even a penny for such a usage?

Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #93 on: May 11, 2023, 11:58 »
0
Adobe just takes your images, feeds a machine with them and generates a lot of money.

Did you contributors get even a penny for such a usage?

Not sure I understand what you wrote, but yes, we got paid, and yes, it was pretty much pennies. I don't know how much we got per image, how many of mine were used, or which images. I may have missed that detail?

« Reply #94 on: May 11, 2023, 15:36 »
+2
Adobe just takes your images, feeds a machine with them and generates a lot of money.

Did you contributors get even a penny for such a usage?

No - there's a statement that they'll work out a compensation model and share the details when Firefly exits beta (it's still in beta now)

https://helpx.adobe.com/stock/contributor/help/firefly-faq-for-adobe-stock-contributors.html

« Reply #95 on: May 12, 2023, 07:42 »
0
No - there's a statement that they'll work out a compensation model and share the details when Firefly exits beta (it's still in beta now)

https://helpx.adobe.com/stock/contributor/help/firefly-faq-for-adobe-stock-contributors.html

Aight, and thanks. I was just curious about it.

At least Adobe plans some sort of compensation, unlike the rest of the industry, which is just stealing images to train their machines.

I myself have currently stopped uploading stuff to agencies until it's clearer that my rights are respected (and my images will be safe), or I get paid at least an extended licence per image.
« Last Edit: May 12, 2023, 07:44 by Thomas Vogel »

« Reply #96 on: May 12, 2023, 19:15 »
+2
What do you mean by "unlike the rest of the industry which is just stealing images to train their machines"?

Contributors at Adobe have no choice.
Ah yes, one can delete one's portfolio - ok, really?

"If your content is used to train an AI model, it may not be possible to make the AI forget any learnings from your item."



« Reply #97 on: May 13, 2023, 11:44 »
+3
« Last Edit: May 13, 2023, 14:20 by Jo Ann Snover »

« Reply #98 on: May 23, 2023, 13:08 »
+2
This is only available in the beta version of Photoshop (and I have no idea how one gets that):

https://techcrunch.com/2023/05/23/adobe-brings-fireflys-generative-ai-to-photoshop/

"Photoshop is getting an infusion of generative AI today with the addition of a number of Firefly-based features that will allow users to extend images beyond their borders with Firefly-generated backgrounds, use generative AI to add objects to images and use a new generative fill feature to remove objects with far more precision than the previously available content-aware fill."

https://www.theverge.com/2023/5/23/23734027/adobe-photoshop-generative-fill-ai-image-generator-firefly

According to Gizmodo, "Adobe's Firefly Image Generator Is Going to Make Photoshop Much Easier to Use. Soon, even your grandparents could be Photoshop experts."

https://gizmodo.com/adobe-firefly-ai-image-generator-photoshop-generative-f-1850462988

https://arstechnica.com/information-technology/2023/05/adobe-photoshops-new-generative-fill-ai-tool-lets-you-manipulate-photos-with-text/

https://www.businessinsider.com/sc/adobe-photoshop-unlocks-a-new-era-of-generative-creativity-with-firefly

https://9to5mac.com/2023/05/23/photoshop-first-adobe-app-with-generative-ai/

https://www.digitalcameraworld.com/news/adobe-integrates-fireflys-generative-ai-with-photoshop

Still nothing more about the compensation model for the Adobe Stock contributors without whom none of this would be in beta...
« Last Edit: May 23, 2023, 15:54 by Jo Ann Snover »

« Reply #99 on: May 24, 2023, 08:29 »
+2
This is only available in the beta version of Photoshop (and I have no idea how one gets that):

It's under "Beta Apps" in the Creative Cloud desktop app. Looks like it runs in parallel with "normal" PS.


 
