Author Topic: Working together to lead the way with AI  (Read 8894 times)


« Reply #50 on: November 03, 2022, 10:14 »
+1


what would a fair deal look like? what compensation would be appropriate for an artist who contributed 1 (or 100+) images to a training set of millions?

Who can guess, but from experience I would expect you'll need a microscope to see it.


« Reply #51 on: November 03, 2022, 18:16 »
+3
Some here come across as lawyers, passionate advocates, staunch defenders of the AI programs that generate images. I really don't understand that kind of behavior from those guys!

Justanotherphotographer

« Reply #52 on: November 04, 2022, 06:04 »
+2

what would a fair deal look like? what compensation would be appropriate for an artist who contributed 1 (or 100+) images to a training set of millions?

Honestly, I don't know. I do know that artists' work (images and keywords) shouldn't be used to train AI without consent and compensation.

« Reply #53 on: November 04, 2022, 06:58 »
0
Not terrifying at all ...

https://youtu.be/LWtlQZCcp8A

« Reply #54 on: November 15, 2022, 04:37 »
+1
Here we go again

100 people's photos of a hand are used to generate an AI photo of the perfect hand.

Customer pays $10.00 👌
Shutterstock takes $6.00
Each contributor gets $0.04.

Now prove you were one of the 100.
Prove your hand photo was used.
Find your photo particles in the customer's hand composite.

Now tell me you trust SS to let you know your photo was used and pay you.
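The back-of-the-envelope split above works out like this (all numbers are the hypothetical ones from this post, not real Shutterstock terms):

```python
# Hypothetical split from the example above: one $10 sale, a 60% agency
# cut, and 100 contributors whose hand photos trained the model.
price = 10.00
agency_share = price * 0.60               # $6.00 to the agency
contributor_pool = price - agency_share   # $4.00 left over
per_contributor = contributor_pool / 100  # split among the 100 photographers

print(f"each contributor: ${per_contributor:.2f}")  # each contributor: $0.04
```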

« Reply #55 on: November 15, 2022, 04:51 »
+4
Find your photo particles in the customer's hand composite.
Now tell me you trust SS to let you know your photo was used and pay you.

It doesn't work that way.
Images are used to train the AI to recreate an image of a hand.
There isn't a single pixel of your photo in the new AI-generated one.
You should be paid for training the AI, not for contributing pixels of your image.
« Last Edit: November 15, 2022, 05:30 by derby »

« Reply #56 on: November 15, 2022, 05:01 »
0
what would a fair deal look like? what compensation would be appropriate for an artist who contributed 1 (or 100+) images to a training set of millions?

That's an interesting question.
The agency will probably pay a small fee per image, but to be fair the right approach would be new license terms granting the right to use the image for "teaching".

Let's say... you're giving away not only your image but also the knowledge needed to create it, and forever; like a teacher in a school.

We know that AI can create an infinite number of new images based on that knowledge.
It doesn't matter whether a given image sells at any particular moment, because every image created, even one rejected by the buyer, will populate the agency's database and remain available forever.

For this reason I think fair compensation should be close to the price of an extended license for every single image used. That would cover all future sales.
Of course, this will never happen  ;D

Justanotherphotographer

« Reply #57 on: November 15, 2022, 06:03 »
+2
Find your photo particles in the customer's hand composite.
Now tell me you trust SS to let you know your photo was used and pay you.

It doesn't work that way.
Images are used to train the AI to recreate an image of a hand.
There isn't a single pixel of your photo in the new AI-generated one.
You should be paid for training the AI, not for contributing pixels of your image.

I struggle with that framing. A pixel is not a thing that is physically picked up from one place and dropped in another; it's just a set of values for relative location and color. That is true whenever you copy an image. I honestly think the "it doesn't use any of the original pixels" framing is irrelevant, as that is always the case when transferring images digitally.

One of the ways AI is trained is, for example, by blurring a photo in a way that involves some randomisation, then doing its best to recreate the original image (which is never exactly the same, as some randomisation has occurred in the blur). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects.

So it is trying its best to copy the subset of images. Even if it had only one image to go on, the result wouldn't be identical, as it is making its best guess.
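A crude numerical sketch of that "randomise, then recover" intuition (a toy illustration only, assuming nothing about any real diffusion model's internals): the best-guess recovery lands close to the original but never reproduces it exactly.

```python
import random

random.seed(0)  # make the randomisation repeatable

original = [0.2, 0.8, 0.5, 0.9]  # a made-up four-pixel "image"

def corrupt(img, strength=0.5):
    """Add random noise, like the randomised blur described above."""
    return [p + random.uniform(-strength, strength) for p in img]

# "Best guess" recovery: average many independently corrupted copies.
samples = [corrupt(original) for _ in range(10000)]
recovered = [sum(vals) / len(vals) for vals in zip(*samples)]

# The guess lands near the original, but the noise is never fully undone.
print([round(p, 2) for p in recovered])
```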

At which level of randomisation in the disassembly/reassembly of images do we draw the line? There will be people out there making better and worse AI engines. What about the times when a programmer takes shortcuts and small chunks of the original images are reassembled in exactly the same layout of pixels? Is any level of similarity fine as long as the company labels it as AI and some disassembly and reassembly is involved (even if the app is reassembling the exact same layout of pixels)?

IMHO the relevant part is that the AI uses the source IP and keywords to create the engine and the resulting images, regardless of how the images are copied.
« Last Edit: November 15, 2022, 06:49 by Justanotherphotographer »

« Reply #58 on: November 15, 2022, 06:57 »
0
One of the ways AI is trained is, for example, by blurring a photo in a way that involves some randomisation, then doing its best to recreate the original image (which is never exactly the same, as some randomisation has occurred in the blur). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects.

So it is trying its best to copy the subset of images. Even if it had only one image to go on, the result wouldn't be identical, as it is making its best guess.

I'm not an expert, but I've read a bit about how machine learning works, and it's slightly different from what you describe (if I've understood you correctly; sorry, I'm not a native English speaker).

The idea is that the AI, following your example, learns what a nice depth of field is and how to produce one.
Once it knows that, it can reproduce it in any image, so it isn't exactly producing a new image based on an original one.
You can ask for a nice depth of field for any subject, not only the subjects that appeared in the training images. So it isn't a question of pixel randomisation giving you a different image from an original; the point is that the AI can now blur an image to produce a nice DOF for almost any subject you ask for.
It isn't trying to make a "copy" with some differences. It's more like trying to reproduce an effect.

This is what I understood
« Last Edit: November 15, 2022, 07:02 by derby »

Justanotherphotographer

« Reply #59 on: November 15, 2022, 07:06 »
+1
One of the ways AI is trained is, for example, by blurring a photo in a way that involves some randomisation, then doing its best to recreate the original image (which is never exactly the same, as some randomisation has occurred in the blur). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects.

So it is trying its best to copy the subset of images. Even if it had only one image to go on, the result wouldn't be identical, as it is making its best guess.

I'm not an expert, but I've read a bit about how machine learning works, and it's slightly different from what you describe (if I've understood you correctly; sorry, I'm not a native English speaker).

The idea is that the AI, following your example, learns what a nice depth of field is and how to produce one.
Once it knows that, it can reproduce it in any image, so it isn't exactly producing a new image based on an original one.
You can ask for a nice depth of field for any subject, not only the subjects that appeared in the training images. So it isn't a question of pixel randomisation giving you a different image from an original; the point is that the AI can now blur an image to produce a nice DOF for almost any subject you ask for.
It isn't trying to make a "copy" with some differences. It's more like trying to reproduce an effect.

This is what I understood
There are a few different methods/models, apparently. They all sound quite different from each other, but the formula is always: people's IP ---> jiggery-pokery (skirting copyright) ---> cash in the pocket of a tech bro who did a fraction of the work it took to produce and keyword the millions of images.

« Reply #60 on: November 15, 2022, 07:33 »
0


I struggle with that framing. A pixel is not a thing that is physically picked up from one place and dropped in another; it's just a set of values for relative location and color. That is true whenever you copy an image. I honestly think the "it doesn't use any of the original pixels" framing is irrelevant, as that is always the case when transferring images digitally.

One of the ways AI is trained is, for example, by blurring a photo in a way that involves some randomisation, then doing its best to recreate the original image (which is never exactly the same, as some randomisation has occurred in the blur). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects. ...

that's not how ML works - the AI creates new info from each piece of training data; none of the original pixels are preserved. Instead a condensed matrix is prepared; then, based on tags, those matrices are used to create an entirely new image. So the only question that remains is how the owners of the millions of training images might be paid for the training. They have no claim to the new images created.
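A toy sketch of that "condensed matrix" claim (purely illustrative; the representations a real model learns are nothing this simple): several training "images" are condensed into one small matrix of averages, and the generated image is built from that summary alone, matching none of the inputs pixel-for-pixel.

```python
# Three tiny 2x2 training "images" (just grids of made-up numbers).
training_images = [
    [[10, 20], [30, 40]],
    [[14, 16], [26, 46]],
    [[ 9, 21], [34, 37]],
]

# Condense: one averaged value per position across the whole set.
condensed = [
    [sum(img[r][c] for img in training_images) / len(training_images)
     for c in range(2)]
    for r in range(2)
]

# "Generate" from the condensed matrix alone - no original pixels kept.
new_image = [[round(v) for v in row] for row in condensed]

print(new_image)                     # [[11, 19], [30, 41]]
print(new_image in training_images)  # False: matches no training image
```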

Justanotherphotographer

« Reply #61 on: November 15, 2022, 09:37 »
+2

that's not how ML works - the AI creates new info from each piece of training data; none of the original pixels are preserved. Instead a condensed matrix is prepared; then, based on tags, those matrices are used to create an entirely new image. So the only question that remains is how the owners of the millions of training images might be paid for the training. They have no claim to the new images created.

Yes, I get it. It's the same sort of reasoning as "no one's making the decision, it's up to the algorithm".

I just find the assertions about whether pixels are retained redundant. The app learns where to place, and how to color, new pixels based on the pixels in the original images. The new info is learnt from the input info. The reductio ad absurdum that makes the point: I can use an image to write a table containing only figures (no pixels) recording the color and location of each pixel. I could then take that table and generate a completely new image (new info) identical to the original, i.e. without reusing any of the original pixels. I could also create an algorithm that shifts the colors or locations of those pixels for the new image. How complex would that process have to be before it became acceptable?
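That reductio can be written out literally in a few lines (a deliberately trivial illustration, with made-up pixel values): the table contains only figures, yet the image generated from it is identical to the original.

```python
# A tiny "image": pixel location -> RGB color, all made-up values.
image = {(0, 0): (255, 0, 0), (0, 1): (0, 255, 0),
         (1, 0): (0, 0, 255), (1, 1): (255, 255, 255)}

# Step 1: reduce the image to a table of pure figures - no pixels, just
# numbers recording each pixel's location and color.
table = [(x, y, r, g, b) for (x, y), (r, g, b) in image.items()]

# Step 2: generate a completely new image from the table alone.
regenerated = {(x, y): (r, g, b) for (x, y, r, g, b) in table}

print(regenerated == image)  # True: "new info", identical work
```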

Take the example of the images of business people featuring the near-perfectly copied DT watermark the AI was outputting. Imagine that DT licensed your icon to use as a watermark only on their site. The AI would be perfectly reproducing your copyrighted material; it would be (by your definition) new info, but it would also be identical to your copyrighted work.

I am not sure which part of what I said isn't how it works. I tried to make it clear that the AI is outputting what you call new info.
« Last Edit: November 15, 2022, 09:39 by Justanotherphotographer »

« Reply #62 on: November 15, 2022, 10:04 »
0
Take the example of the images of business people featuring the near-perfectly copied DT watermark the AI was outputting. Imagine that DT licensed your icon to use as a watermark only on their site. The AI would be perfectly reproducing your copyrighted material; it would be (by your definition) new info, but it would also be identical to your copyrighted work.

I am not sure which part of what I said isn't how it works. I tried to make it clear that the AI is outputting what you call new info.

From what I can understand, the point is that you're always referring to an existing image; the AI doesn't need a "reference" image.

Let's try an example.
If I ask the AI to give me an image described as:
"Section of planet Earth, American continent, view from the Moon with a defocused background of starry sky in dark space"

What the AI needs to know to create the image is:
1 - what planet Earth is
2 - what the American continent is
3 - what a starry sky in space is
4 - what "defocused" is

Where the AI gets the first three points is easy: these are clear, common knowledge, with millions of images to teach it.

But what is "defocused"?
How can the AI understand the concept of "defocused" and apply it to the requested image?
The AI has been trained on thousands of defocused images with hundreds of different depths of field and effects, and it now decides how to apply that to the "starry space in the background".

Does this mean the result comes from existing images? Of course it does, but not in the sense that some similar image served as a reference for the new one.

Maybe the AI learned depth of field from
"cup of coffee on the table"
"macro close-up of a flower"
and so on...
But it doesn't need a defocused starry dark space as a reference.

So, did you contribute to this science-fiction image with your coffee cup and flower close-up?
Probably yes.
Is there even a minimal link between planet Earth from the Moon and a coffee cup on a table? Of course not, not in the sense you're talking about.

If I understand correctly  ;D
because it's not so easy and it's not so clear  ;D
« Last Edit: November 15, 2022, 10:10 by derby »

Uncle Pete

« Reply #63 on: November 15, 2022, 16:03 »
0
https://blog.adobe.com/en/publish/2020/02/27/copyrights-in-the-era-of-ai#:~:text=In%20many%20cases%2C%20the%20data%20required%20for%20AI,process%20of%20training%20an%20AI%20model%20constitute%20infringement%3F

"The Japanese government, for example, recently updated its copyright laws to include exemptions of the use of copyrighted works for machine learning. Other countries, including China, Australia, Singapore, Thailand, are looking at making similar changes. Additionally, the European Union recently adopted limited text and data mining exceptions as part of its Copyright Directive and continues to explore further refinements."

As far as the legal side, "Generally, accessing copyrighted works for use in training algorithms does not reduce the economic value of the work in any measurable way. And, if a tool powered by the algorithm is used to create something totally different, the value of the copyrighted material remains similarly unchanged."

From reading this, I'd have to ask myself: did my specific original work lose value because of the AI training that created a new and different image?

The copyright statute sets forth four factors for courts to consider in determining whether a particular unauthorized use qualifies as fair use:

    The purpose and character of the use, including whether you've made a new transformative work, and whether your use is commercial.
    The nature of the original work, such as whether it is more factual than fictional.
    How much of the original work was used.
    Whether the new use affects the potential market for the original work.


https://graphicartistsguild.org/fair-use-or-infringement/#:~:text=The%20copyright%20statute%20sets%20forth%20four%20factors%20for,transformative%20work%2C%20and%20whether%20your%20use%20is%20commercial.



 
