

Author Topic: Working together to lead the way with AI  (Read 4001 times)


« on: October 25, 2022, 07:13 »
+1
We're excited to announce that we are partnering with OpenAI to bring to the Shutterstock marketplace the tools and experiences that will enable our customers to instantly generate and download images based on the keywords they enter.

As we step into this emerging space, we are going to do it in the best way we know how: with an approach that both compensates our contributor community and protects our customers.

In this spirit, we will not accept content generated by AI to be directly uploaded and sold by contributors in our marketplace because its authorship cannot be attributed to an individual person consistent with the original copyright ownership required to license rights. Please see our latest guidelines here. When the work of many contributed to the creation of a single piece of AI-generated content, we want to ensure that the many are protected and compensated, not just the individual that generated the content.

In the spirit of compensating our contributor community, we are excited to announce an additional form of earnings for our contributors. Given the collective nature of generative content, we developed a revenue share compensation model where contributors whose content was involved in training the model will receive a share of the earnings from datasets and downloads of ALL AI-generated content produced on our platform.

We see generative AI as an exciting new opportunity, an opportunity that we're committed to sharing with our contributor community. For more information, please see our FAQ on the subject, which will be updated regularly.


« Reply #1 on: October 25, 2022, 07:17 »
+15
Not sure how much more excitement I can take.

« Reply #2 on: October 25, 2022, 07:18 »
+5
I think it isn't a terrible model considering. I just wish I could trust SS to give us anything resembling a fair cut.

« Reply #3 on: October 25, 2022, 07:37 »
+5
Yahoo!   Let the millionths of a penny start rolling in....

« Reply #4 on: October 25, 2022, 07:37 »
+4
It is SS, so I expect: your photo of the happy young man on a mountain + my sky background combined into a new image = 33% of a shared 0.01 USD commission for each.  ;D

« Reply #5 on: October 25, 2022, 07:40 »
+4
It is SS, so I expect: your photo of the happy young man on a mountain + my sky background combined into a new image = 33% of a shared 0.01 USD commission for each.  ;D

More like:  the AI looked at 300 million images to learn how images and keywords work and you had 5 images in there that were relevant.  You do the math.

« Reply #6 on: October 25, 2022, 07:41 »
+8
I'd strongly advise any contributors not to participate in providing content for datasets.

« Reply #7 on: October 25, 2022, 07:44 »
+2
When SS say something new is 'exciting', you just know it probably isn't.

« Reply #8 on: October 25, 2022, 07:49 »
+4
Well, this ought to drive away the rest of their contributors.

« Reply #9 on: October 25, 2022, 08:01 »
0
I'd strongly advise any contributors not to participate in providing content for datasets.
How will that help?  Looks like they already fed current images into the dataset, so I'm not sure how much smarter the AI can get.  Opting out of payment will only be a good idea for everyone else, assuming it narrows the payment pool.

« Reply #10 on: October 25, 2022, 08:11 »
0
This will only work for exclusive contributors. Shutterstock doesn't have exclusive contributors, or am I wrong?

« Reply #11 on: October 25, 2022, 08:29 »
0
This will work only for exclusive contributors.
Why? DALL-E may be able to use any dataset fed into it. If it's the SS dataset, then exclusivity is not relevant.

« Reply #12 on: October 25, 2022, 09:40 »
+3
I'd strongly advise any contributors not to participate in providing content for datasets.
How will that help?  Looks like they already fed current images into the dataset, so I'm not sure how much smarter the AI can get.  Opting out of payment will only be a good idea for everyone else, assuming it narrows the payment pool.

How will that help? I assume Shutterstock will let this be an opt-in program; if not, they definitely should. The more quality content they feed into these datasets, the closer we all are to going out of work. It's clear that it will lead to derivative work!

Look for example at this article: https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/.

wds

« Reply #13 on: October 25, 2022, 09:40 »
+1
My fear is: if an image is opted in to "train" the AI, what exactly does that mean? Will I see AI-generated images with AI people that happen to look a lot like a model in one of my images? Or image concepts that are reused in AI-generated images?

« Reply #14 on: October 25, 2022, 09:48 »
+3
I'd strongly advise any contributors not to participate in providing content for datasets.

I agree, but is this even an option?

« Reply #15 on: October 25, 2022, 09:55 »
+5
I'd strongly advise any contributors not to participate in providing content for datasets.

I agree, but is this even an option?

It has to be. I've contacted Shutterstock to get clarification on this.

« Reply #16 on: October 25, 2022, 10:10 »
+4
So now buyers will start generating images instead of buying them, and since there is a revenue share, start expecting more incoming pennies as real downloads decrease.


« Reply #17 on: October 25, 2022, 10:32 »
+5
Re Opt-out:

If you look at the FAQ they published (https://support.submit.shutterstock.com/s/article/Shutterstock-ai-and-Computer-Vision-Contributor-FAQ?language=en_US), it says at the bottom of the page:

"Can I opt out of having my content included in future datasets?

Yes, in the coming months we will be adding an option in the contributor account settings that will allow you to opt out of having your content included in future datasets. "

Which also means they have already sold these "datasets" (i.e. content including metadata) in the past, without any agreement from the affected contributors.

They also say "Shutterstock maintains an internal database of all assets used in all datasets that have been created since the launch of this product, so we can compensate our contributors accordingly.", but - obviously - contributors will not be notified if their "assets" have been included.

Typical Shutterstock move.
I would expect the compensation to be a (very low) token amount that doesn't add up to much...

So glad that I have deactivated my "assets" after their commission cut. Although, who says they didn't include deactivated images?

Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #18 on: October 25, 2022, 10:41 »
0
Quote
Shutterstock: Working together to lead the way with AI

Were excited to announce that we are partnering with OpenAI to bring the tools and experiences to the Shutterstock marketplace that will enable our customers to instantly generate and download images based on the keywords they enter.


I predicted this was coming earlier in this thread (bold part).

So yes, AI is on track for making us redundant unless legal/copyright prevents it from happening.

Just because it was relevant and from a different thread. We knew that some agency would adopt this and create their own dataset, from their own images, because that way they control the distribution, and have the rights, because artists will be paid a share.

 


« Reply #19 on: October 25, 2022, 12:06 »
+8
I'm starting to submit images with irrelevant keywords  8)

« Reply #20 on: October 25, 2022, 12:57 »
0
Quote
By partnering with OpenAI for the training AI model for their content generation tool, we are ensuring that the new technology coming to our platform was created in an ethical and responsible way, which compensates the contributing artists whose original content was used in developing this tool.
.../
Given the availability of various AI content generation models in the marketplace, we are unable to verify the model source for most AI-generated content and therefore are unable to ensure all artists who were involved in the generation of each piece of content are compensated.
......
will directly compensate Shutterstock contributors if their IP was used in the development of AI-generative models,
https://support.submit.shutterstock.com/s/article/Shutterstock-ai-and-Computer-Vision-Contributor-FAQ

Shutterstock needs to be more transparent:
  • In ML, the original images in the training dataset are not used for creating new content, and the trained app doesn't retain the original creator info. How can they identify each artist involved without knowing which images were used from the millions in the dataset? Most likely ALL images used in training generate 'income' for artists?
  • How do they identify the IP of each image? And why IP rather than artist ID, since "Shutterstock maintains an internal database of all assets used in all datasets that have been created since the launch of this product, so we can compensate our contributors accordingly"?
  • What about users who don't have a dedicated IP but have a shared IP? The internet knows your IP address because it's assigned to your device and is required to browse the internet. Your IP address changes, though, every time you connect to a different Wi-Fi network or router.

« Reply #21 on: October 25, 2022, 16:13 »
+2
If your images are part of the dataset used, SS will pay a % of the overall fee OpenAI paid to train DALL-E, based on the number of images you have in the dataset, at your royalty rate. I see them doing a simple calculation split evenly between all files in the dataset. They're not going to be able to say which files were used in which images, or to what % they matter in the dataset. If we define:

F (fee paid for dataset)
T (total number of images in dataset)
N (number of your images in dataset)
R (your royalty rate)

(F / T) * N * R = how much you get paid.

These numbers are all hypothetical, and may be all wrong but....

Let's assume that 300M images were used and the fee they paid is $1M. Then SS would have $1M / 300M = $0.00333 per image used to train.

If you have 10 images that were used, you'll end up with 10 × $0.00333 = $0.0333, times your royalty rate. So between $0.005 and $0.0133 per 10 images in the dataset.

Then as images are generated, everyone with images in that dataset gets their cut the same way every 6 months.

$10M in revenue for SS ends up being $0.05 to $0.13 per 10 images you have in the dataset.
If you have 10,000 images in the dataset you'd end up with 1,000 times this cut, of course: $50 to $133 per 10,000 images in the dataset.

Plenty of money to last you another 6 months!  :o

I have no idea how many images are in the dataset, so if you think 300M is too many and want to use 100M instead, just multiply the above numbers by 3. Please check my math, but however you slice it, the contributors whose IP has been used will get the shaft here. SS will bank the fat cash.
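For anyone who wants to plug in their own numbers, the even-split model above is easy to sketch. This is strictly a back-of-envelope calculation: the $1M fee, the 300M-image dataset, and the 15%-40% royalty tiers are all this post's assumptions, not published figures.

```python
def dataset_payout(fee, total_images, my_images, royalty_rate):
    """Even-split model: (F / T) * N * R.

    The fee is divided equally across every image in the dataset,
    and you keep your royalty rate on your images' share.
    """
    return (fee / total_images) * my_images * royalty_rate

# Hypothetical numbers from the post: $1M fee, 300M images, 10 of them yours.
low = dataset_payout(1_000_000, 300_000_000, 10, 0.15)   # lowest royalty tier
high = dataset_payout(1_000_000, 300_000_000, 10, 0.40)  # highest royalty tier
print(f"${low:.4f} to ${high:.4f}")  # roughly $0.0050 to $0.0133
```

The model is linear, so the post's $10M-revenue, 10,000-image case is just 10 × 1,000 = 10,000 times larger: roughly $50 to $133.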

« Reply #22 on: October 25, 2022, 20:41 »
+8
So OpenAI (DALL-E 2) is now charging approx. $0.13 per query (4 images). Say SS negotiated 50% of that fee from OpenAI (maybe free in exchange for contributor content data): what is the realistic commission paid to contributors? A fraction of a cent?

It's a joke. Everyone should opt out unless you want to wake up with sub-dollar monthly sales in your account!
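A rough sanity check on that "fraction of a cent" guess. Every figure here is speculation from this thread, not a published term: the per-query price, the 50% split, the 300M dataset size carried over from an earlier reply, and the 1,000-image portfolio.

```python
# Every number here is a guess from the thread, not a published term.
per_query = 0.13               # assumed DALL-E price for one query (4 images)
agency_share = 0.50            # assumed cut SS might get from OpenAI
dataset_size = 300_000_000     # assumed number of images in the training set
my_images = 1_000              # a hypothetical mid-sized portfolio

pool = per_query * agency_share            # $0.065 per query to distribute
mine = pool * my_images / dataset_size     # even split across the dataset
print(f"${mine:.9f} per customer query")   # on the order of $0.0000002
```

Even a million generated queries would pay this hypothetical contributor well under a dollar, which is the poster's point.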

« Reply #23 on: October 26, 2022, 01:45 »
+2
I don't like the idea of having to "opt out". I've contacted SS to ask why we are automatically opted in without our consent, not that I'm expecting them to listen. I guess they are relying on contributors forgetting or just not bothering.

Hopefully we'll get some kind of notification when the opt out option is available.

« Reply #24 on: October 26, 2022, 05:36 »
+6
When will these old farts ever learn? So blinded by the theory of AI are they that they have no idea how this will bite them in the arse. They should do their research.

Amazon used a powerful AI to hire the best and brightest. It filtered thousands of applications and was doing really well. Until one day the higher-ups looked at their workforce and struggled to find a female.

What had happened was that the AI had become a misogynist. In fact it made the decision to deliberately seek out any female application and avoid processing it. It learnt rather interesting and efficient ways to do this. It would look at applications and search for the applicant's school. If it was an all-female school, it rejected them immediately. If that didn't work, it searched their social media.

And now we are going to enjoy SS's budget version. Which will do what? AIs generally become more feral with each iteration.

I would imagine it will completely destroy SS by using data sets that are completely false and will skew everything.

« Reply #25 on: October 26, 2022, 05:46 »
+4
This also means the other agencies will follow this trend. Also, images are the property of whoever clicked them; are AI-generated images the property of the company? If so, then I'm guessing they'll be flooding their ports all over.

« Reply #26 on: October 26, 2022, 05:53 »
+6
If you read the FAQ, they have 2 ways of doing the AI business:
1/ Selling datasets for training AI (no commercial or public use).
2/ Generating content for customers.

Ad 1/ Is there any compensation for contributors for selling datasets? Will there be an opt-out option?
And finally, if a client wants to use a non-commercially generated image commercially, I guess SS will allow it for a good amount of money, but we will be compensated only as for one standard generated image.

Ad 2/ I think this is a brilliant idea. The problem with AI-generated content is the copyright of the images in the datasets. But they found a solution: they will pay us "compensation". So the copyright is OK now. A win-win situation for SS and OpenAI.

They can also generate billions of images for the Shutterstock database and pay us a one-time compensation. Any image generated and selected by a client can be directly added to the SS database. Clients will train the AI and replace reviewers. Tags will be generated from our keywords and the client's description.

Can the future AI be trained on AI-generated content? I don't know.

Also we can be sure that our images will be almost invisible on the SS website. $0.10 per image is too much now.

Anyway, they need us now; they need to pay us some compensation to get the license/copyright. But they will replace us very soon.

« Last Edit: October 26, 2022, 05:56 by cosus »


« Reply #27 on: October 26, 2022, 07:06 »
+1
For some customers it could be a wonder feature at the beginning, but I think it's a little pointless to sell it this way, since anyone can produce tons of it instantly, even with all the free images on the web. So no one will actually need AI images from SSTK or any other microstock site once a customer realizes they can do it very easily themselves.

The way I see it, AI-generated images are a very small part of what you can produce with AI and real image data. The actual value is in real images with real data.

For example, you want to determine the velocity of a hurricane or the size of an ocean wave by video/image. Another example: you want to mocap a real animal's movement or a dance style. The list of possibilities for using AI with real data is endless.

The actual problem with microstock sites today is that most content lacks the features AI requires for good machine-learning video (not artistic; more documentary, long shots with no movement). This means more opportunities for us to start doing this kind of shot of everything. Instead of doing close-ups, pans and tilts, we should think ahead and capture for this kind of business too. Anyway, I recommend choosing a niche and trying to present your database and sell it directly to research companies. From my experience it's much more profitable!



« Reply #28 on: October 26, 2022, 07:09 »
+11
A few things people need to realize:

1. the Genie is out of the bottle and won't go back in

2. SS or any other agency is not your friend

3. they will go the cheapest route to get rid of us and use our content legally to train the AI

4. they already used our content in "datasets"; even if you opt out, they're gonna use the already established datasets. The opt-out is just a throwaway token to keep the backlash lower.

5. do not get your hopes up that the "payout" for training our new AI overlord will be decent. Both companies profit from the deal: OpenAI gets to train the AI and SS gets to sell generated images. The money exchanged will therefore be minimal, a throwaway token to keep us a bit calmer and make it easy for the lawyers.

6. we probably agreed to the training by some clause in the terms of service; you can be sure they covered their legal ass before backstabbing us

7. the AI will learn by itself once user interactions are plentiful. Meaning it will determine trends and good images by what is sold and what is clicked on.

8. The bias argument already came up a few months ago and OpenAI addressed it by filtering in a bunch of black and asian looking "humans"

9. get strapped in because it is not going away, we maybe have a year left. Adjust now.

Farewell and godspeed.

« Reply #29 on: October 26, 2022, 07:40 »
+1
Our value is only in the copyright of our images. Big customers need to be sure that any generated image is properly licensed. So OpenAI needs SS, because SS controls licensed content. Nothing more. They don't need any new images from us. As long as we give them a license to generate new content, they are happy.

We know that most contributors will do nothing. Many are from poor areas, happy to get every $; many don't understand English well enough to read the message. Many are from Russia or Ukraine and limited in their ability to act. So even if we opt out (too late), they will replace us. Soon.

It seems that they have already sold datasets with our content. Most likely we agreed to that somewhere; we can't opt out, and we will not even be paid for it. Maybe they can do it because it's not intended for commercial use? So they make a profit from it. We don't. Nice.


« Reply #30 on: October 26, 2022, 08:07 »
+1
  • What about users who don't have a dedicated IP but have a shared IP? The internet knows your IP address because it's assigned to your device and is required to browse the internet. Your IP address changes, though, every time you connect to a different Wi-Fi network or router.

This has nothing to do with IP addresses. Here IP means intellectual property, basically "content with author rights".

« Reply #31 on: October 26, 2022, 10:09 »
+2
Wow, they somehow managed to lowball Freepik. All at our expense of course

« Reply #32 on: October 26, 2022, 12:37 »
0
...

Amazon used a powerful AI to hire the best and brightest. It filtered thousands of applications and was doing really well. Until one day the higher-ups looked at their workforce and struggled to find a female.

What had happened was that the AI had become a misogynist. In fact it made the decision to deliberately seek out any female application and avoid processing it. It learnt rather interesting and efficient ways to do this. It would look at applications and search for the applicant's school. If it was an all-female school, it rejected them immediately. If that didn't work, it searched their social media.

you didn't mention this was still in the research phase when Amazon discovered the bias & shut the program down before it went live
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
 
Quote
And now we are going to enjoy SS's budget version. Which will do what? AIs generally become more feral with each iteration.

I would imagine it will completely destroy SS by using data sets that are completely false and will skew everything.
Yes, your imagination! Why would they be completely false? There are many examples of successful ML, and the current AI generators are far from being either 'feral' (whatever that means!) or 'false'. In fact, each iteration should make the AI stronger; what's your reason for saying more training will make the AI worse?

« Reply #33 on: October 26, 2022, 13:16 »
+1
If datasets are made up of our data, and data can only be kept for a period that is deemed reasonable, then surely each refresh of the datasets would require the previous ones to be deleted? If that's the case, then that's maybe why payment is every 6 months. If we opt out, then the next dataset will not be based on our images, and the AI system can only use the latest dataset?

I feel there are a lot of questions SS need to answer and make clear to their contributors; otherwise they run the risk of the biggest contributors taking some sort of class action against them. Left unchecked, this will eventually put everyone out of work, including the agencies.

« Reply #34 on: October 26, 2022, 14:04 »
+6
I might be an outlier here, but how many customers will actually use AI to generate content for their projects, instead of downloading ready-made pictures?

I have been playing around with nightcafe creator for over a year and it is very difficult to create a usable image.

Customers pay agencies not just for the content, but for the time saved finding the right image.

The revenue stream from AI will probably be similar to the tiny income we get from the Getty/Pinterest deal. Our images get pinned and we get minuscule amounts. But we also get a backlink to our images.

If SS added backlinks to the content used for the creation, I could see this as another Pinterest-style revenue stream. It might even bring eyes to the ports.

Will they allow us as creators to now use the SS/AI engine to create AI content that then gets added to ports?

AI is here to stay, the question is how can it be integrated in an intelligent way?

For me it is more like an additional asset class.

But maybe  I am wrong and this is the end of everything.

« Reply #35 on: October 26, 2022, 17:30 »
0

Will they allow us as creators to now use the SS/AI engine to create AI content that then gets added to ports?


No - we're not there yet.

« Reply #36 on: October 26, 2022, 22:17 »
+2
A lot of groups are developing AI text-to-image software. Many are Open source not-for-profit endeavors. But in the future many for-profit companies will emerge. And Adobe and Autodesk and other big companies will buy them or license their software in commercial products.

I read that one of the AI text-to-image companies (I think it was Stable Diffusion) spent hundreds of thousands of dollars to train their AI.

If you have your images on several sites (SS, AdobeStock, Vectorstock, and so on) and if a lot of companies pay those sites to train their AIs, and IF those sites pay you a decent fee each time, it could amount to a significant income for you. Because AI text-to-image is going to be BIG.

But the big IF is if SS etc pay you more than a tiny percent of what they get. Probably they will not, as other people posting have said.

The really bad news is that AI will make such great images that it will eventually put microstockers out of business. I have seen amazing stuff made by AI. I don't make stock images any more, but I buy them, and I am now using AI-made images in my publications; it's amazing really.


« Reply #37 on: October 27, 2022, 02:42 »
0


you didnt mention this was still in the research phase when amazon discovered the bias & shut the program down before it went live
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
 
Quote
And now we are going to enjoy SS's budget version. Which will do what? AIs generally become more feral with each iteration.

I would imagine it will completely destroy SS by using data sets that are completely false and will skew everything.
yes, your imagination!  why would they be completely false? there are many examples of successful ML. and the current AI generators are far from being either 'feral' (whatever that means!) or 'false'.  in fact, each iteration should make the AI stronger - what's your reason for saying more training will make the AI worse?
feral
/ˈfɛr(ə)l,ˈfɪərəl/
adjective
(especially of an animal) in a wild state, especially after escape from captivity or domestication.
"a feral cat"

Perhaps you should have spent your valuable time looking in a dictionary instead of scraping the net to prove me wrong. I didn't forget to mention it was in the testing phase, because it wasn't in a testing phase and had been running for some time, according to the expert who was wheeled out to testify about fbook and others' extensive use of AI for anything. It certainly wasn't explained that way by the expert, but you have a link to an msm article so it must be true.

The data sets will be completely false because, if you had a brain in your skull and stopped being a know-it-all and failing, you would realise that when you search SS for images, increasingly what you get is nothing like what you asked for.

If I ask for a particular insect I get lots of that insect. I also get whatever crap someone decided was that insect. Be it dustbin lid or pencil sharpener. Because of poor identification or poor keywording or whatever it may be. There are millions of photos in the database that are not what they are titled as or even close. Then we can get on to interpretive images. A man sitting on the floor with his back against the wall head down knees up. This could be depressed. Tired. Relaxed. Meditating. Sad. Mourning. It could be anything and will be keyworded as such. And of course it will be. It has been keyworded to fit as many circumstances as possible to sell as widely as possible. You as a seller must imagine everything a buyer wants and show them what that looks like.

What does a palm tree mean. On a beach. With a thunder cloud in an otherwise blue sky. It means whatever attributes were given it by its seller. And that's your data set.

The end aim here is a buyer goes to SS and types in
Labrador dog, coloured pink and singing karaoke. Because they want that on some t-shirts. And this "AI company" will scour the images and create this wonder for them.

And they'll get a mongrel that's punk, from Karachi. Because those users thought that's what a Labrador looked like. And the other user called it punk, not pink, because that's what hot pink means to them, so they titled it punk. And karaoke wasn't in the third person's spelling autocorrect, so it changed it to Karachi, and they didn't notice when they uploaded the 40,000 images they stole from 40 profiles in the last week.

Give me a break. If you need that explaining to you, several things are implied:

1. You're a pedant
2. You're an AI
3. You are the owner of the wonder company
4. You 'love' A.I. in a deep and natural way

Whatever, you're done.

« Reply #38 on: October 27, 2022, 02:48 »
+6
I will just leave this beauty here to remind everyone of the current quality of AI generated images....  ;)


« Reply #39 on: October 27, 2022, 07:34 »
+1

AI is here to stay, the question is how can it be integrated in an intelligent way?


First step for SS (and others) could be mixing on-the-fly AI-generated content, based on the search string, in with the real content in the search results.
Customers can choose between AI or real photos/illustrations. If they can see the difference at all.
Data coming out of those "experiments" is very useful for further training of the AI.

Next step might even be a customer selecting 10 images as a baseline for unique AI generated content.

I read a lot of comments from people claiming that most AI generated images are far from perfect.
And I think they're right. For now. But we might be underestimating how fast technology advances.
And we can easily turn it around too: have a look at the average stock library and you'll notice a lot of junk and far from perfect images too.




Uncle Pete

« Reply #40 on: October 27, 2022, 11:28 »
+1
I will just leave this beauty here to remind everyone of the current quality of AI generated images....  ;)



Thanks, and I'd agree: they aren't very good quality or large enough, and they often miss the target or produce horribly distorted images. I don't feel doomed or threatened quite yet.

Meanwhile agencies are working with AI and using our images to create new content, which some day might be Good Enough. Most of it isn't right now.

SVH

« Reply #41 on: October 27, 2022, 12:22 »
+1
Better still.

Why are agencies refusing contributors AI generated images?

Because they want to create them on their own and not share the pie with us.

But in the end, a customer will create their own content and have no need for an agency at all. So, last straws.

If AI can come of age and generate content on the fly that can really substitute for the work we contribute, it's done. Except for editorial, obviously.

« Reply #42 on: October 27, 2022, 14:10 »
+1
Why are agencies refusing contributors AI generated images?

Because they want to create them on their own and not share the pie with us.
...

I disagree w/ SS's new policy -- they had been accepting almost all my DALL-E images but rejected the latest batch as:
Non-Licensable Content: We cannot accept this submission into our commercial or editorial collection, or we are no longer accepting this type of content.

Most other agencies reject because they're afraid of copyrights on the training set,

but SS avoids that problem because they are training only w/ images in their library & paying artists based on sales (ignoring the fact that actual payments will be near 0).
« Last Edit: October 30, 2022, 13:13 by cascoly »

« Reply #43 on: October 30, 2022, 07:24 »
+2
6. we probably agreed to the training by some clause in the terms of service, you can be sure they covered their legal ass before backstabbing us
Actually, no. I read the TOS carefully and did not find anything about this. They don't need our consent for their own ad usage, that's all. But this is not an ad; this is a profitable business.
So the AI is already trained; our work was already used. Against our will and without any compensation.

« Reply #44 on: October 30, 2022, 11:39 »
0
I bet they also try to retain rights to the AI images and add the ones actually purchased to the collection. Even replacing us in searches with images generated from ours  :(

« Reply #45 on: October 30, 2022, 13:03 »
0
... dupe
« Last Edit: October 30, 2022, 13:08 by cascoly »

« Reply #46 on: October 30, 2022, 13:08 »
0
6. we probably agreed to the training by some clause in the terms of service, you can be sure they covered their legal ass before backstabbing us
Actually, no. I read the TOS carefully and did not find anything about this. They don't need our consent for their own ad usage, that's all. But this is not an ad; this is a profitable business.
So the AI is already trained; our work was already used. Against our will and without any compensation.


« Reply #47 on: October 30, 2022, 13:12 »
0
I bet they also try to retain rights to the AI images and add the ones actually purchased to the collection. Even replacing us in searches with images generated from ours  :(

no need to try - those are the terms of the AI generator. it's the same as if their employees created illustrations or photos the old fashioned way. shift happens.


« Reply #48 on: October 31, 2022, 09:07 »
+3
I bet they also try to retain rights to the AI images and add the ones actually purchased to the collection. Even replacing us in searches with images generated from ours  :(

no need to try - those are the terms of the AI generator. it's the same as if their employees created illustrations or photos the old fashioned way. shift happens.
I do hope this gets tested in court. Preferably in the EU where artists are more likely to get a fair deal.

« Reply #49 on: November 03, 2022, 10:01 »
+1
I bet they also try to retain rights to the AI images and add the ones actually purchased to the collection. Even replacing us in searches with images generated from ours  :(

no need to try - those are the terms of the AI generator. it's the same as if their employees created illustrations or photos the old fashioned way. shift happens.
I do hope this gets tested in court. Preferably in the EU where artists are more likely to get a fair deal.

what would a fair deal look like? what compensation would be appropriate for an artist who contributed 1 (or 100+) images to a training set of millions?


« Reply #50 on: November 03, 2022, 10:14 »
+1


what would a fair deal look like? what compensation would be appropriate for an artist who contributed 1 (or 100+) images to a training set of millions?

Who can guess, but from experience I'd expect you'll need a microscope to see it.

« Reply #51 on: November 03, 2022, 18:16 »
+3
Some people here sound like lawyers, passionate fans, staunch defenders of the AI programs that generate images. I really don't understand that kind of behavior from those guys!

« Reply #52 on: November 04, 2022, 06:04 »
+2

what would a fair deal look like? what compensation would be appropriate for an artist who contributed 1 (or 100+) images to a training set of millions?

Honestly, I don't know. I do know that artists' work (images and keywords) shouldn't be used to train AI without consent and compensation.

« Reply #53 on: November 04, 2022, 06:58 »
0
Not terrifying at all ...

https://youtu.be/LWtlQZCcp8A

« Reply #54 on: November 15, 2022, 04:37 »
+1
Here we go again

100 people's photos of a hand are used to generate an AI photo of the perfect hand.

Customer pays $10.00 👌
Shutterstock takes $6.00
Contributors get $0.04 (4 cents) each.

Now prove you were one of the 100.
Prove your hand photo was used.
Find your photo particles in the customer's hand composite.

Now tell me you trust SS to let you know your photo was used and pay you.
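The split the post above describes can be written out explicitly. All the numbers here are the poster's hypothetical figures, not Shutterstock's actual pricing or royalty rates:

```python
# Hypothetical numbers from the post above -- illustrative only,
# not Shutterstock's actual pricing or royalty rates.
price = 10.00            # what the customer pays
agency_cut = 6.00        # what the agency keeps
contributors = 100       # photos in the hypothetical training subset

pool = price - agency_cut       # $4.00 left for the contributor pool
share = pool / contributors     # split evenly across all 100
print(f"each contributor gets ${share:.2f}")   # prints: each contributor gets $0.04
```

Even with an even split and a generous pool, the per-image payout is four cents, which is the point the poster is making.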

« Reply #55 on: November 15, 2022, 04:51 »
+4
Find your photo particles in the customer's hand composite.
Now tell me you trust SS to let you know your photo was used and pay you.

It doesn't work that way.
Images are used to train the AI to recreate an image of a hand.
There is not a single pixel of your photo in the new AI-generated one.
You should be paid for training the AI, not for contributing pixels of your image.
« Last Edit: November 15, 2022, 05:30 by derby »

« Reply #56 on: November 15, 2022, 05:01 »
0
what would a fair deal look like? what compensation would be appropriate for an artist who contributed 1 (or 100+) images to a training set of millions?

That's an interesting question.
The agency will probably pay a small fee for quantity, but to be fair the correct way to pay would be a new license term that grants the right to use the image for "teaching".

Let's say... you're giving away not only your image but your knowledge of how to create that image, and forever; like a teacher in a school.

We know that the AI could create an infinite number of new images based on this knowledge.
It doesn't matter whether any single image sells at a given moment, because every image created, even one refused by the buyer, will populate the agency's database and remain available there forever.

For this reason I think a fair compensation would be near the price of an extended license for every single image used. That would cover every future sale.
Of course, this will never happen  ;D


« Reply #57 on: November 15, 2022, 06:03 »
+2
Find your photo particles in the customer's hand composite.
Now tell me you trust SS to let you know your photo was used and pay you.

It doesn't work that way.
Images are used to train the AI to recreate an image of a hand.
There is not a single pixel of your photo in the new AI-generated one.
You should be paid for training the AI, not for contributing pixels of your image.

I struggle with that framing. A pixel is not a thing that is physically picked up from one place and dropped in another. It's just a set of values for relative location and color, and that is true whenever you copy an image. I honestly think the "it doesn't use any of the original pixels" framing is irrelevant, as that is always the case when transferring images digitally.

One of the ways this kind of AI is trained is, for example, by degrading a photo with random noise and then doing its best to recreate the original image (which is never exactly the same, since some randomisation has occurred). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects.

So it is trying its best to copy the subset of images. Even if it had only one image to go on, the result wouldn't be identical, as it is making its best guess.

At which level of randomisation in the disassembly/reassembly of images do we draw the line? There will be people out there making better and worse AI engines. What about the times when a programmer takes shortcuts and small chunks of the original images are reassembled in exactly the same layout of pixels? Is any level of similarity fine as long as the company labels it as AI and some disassembly and reassembly is involved, even if the app reassembles the exact same layout of pixels?

IMHO the relevant part is that the AI is using the source IP and keywords to create the engine and the resulting images, regardless of how the images are copied.
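The "corrupt the image, then learn to undo it" idea described above can be caricatured in a few lines. This is a deliberately toy sketch, not a real diffusion model; it only illustrates why a reconstruction trained this way is a best guess rather than a pixel copy:

```python
import random

# Toy sketch of the training idea described above: corrupt an image with
# random noise, then (in a real system) train a model to undo the damage.
def add_noise(pixels, amount, rng):
    """Corrupt each pixel value with random Gaussian noise."""
    return [p + rng.gauss(0, amount) for p in pixels]

rng = random.Random(0)
original = [0.2, 0.5, 0.9]            # a tiny 3-pixel "image"
noisy = add_noise(original, 0.3, rng)

# A model would be trained so that denoise(noisy) lands close to `original`;
# because the corruption is random, the output is never pixel-identical.
```

Whether that process counts as "copying" is exactly the line-drawing question raised in the post.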
« Last Edit: November 15, 2022, 06:49 by Justanotherphotographer »

« Reply #58 on: November 15, 2022, 06:57 »
0
One of the ways AI is trained is, for example, by blurring a photo in a way that involves some randomisation then doing its best to recreate the original image (which is never exactly the same as some randomisation has occurred in the blur). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects.

So it is trying its best to copy the subset of images. Even if it had one image to go on, the result wouldnt be identical as it is making its best guess.

I'm not an expert, but I've read a bit about how machine learning works, and it's slightly different from what you describe (if I understand your words correctly; sorry, I'm not a native English speaker).

The idea is that the AI, following your example, learns what a nice depth of field is and how to produce one.
Once it knows that, it can reproduce it in any image, so it isn't exactly producing a new image based on an original one.
You can ask for a nice depth of field for any subject, not only the subjects that were in the training images. So it's not a question of pixel randomisation giving you a different image from an original; the point is that the AI can now blur an image to produce a nice DOF for almost any subject you ask for.
It's not trying to make a "copy" with some differences. It's more like trying to reproduce an effect.

That is what I understood.
« Last Edit: November 15, 2022, 07:02 by derby »

« Reply #59 on: November 15, 2022, 07:06 »
+1
One of the ways this kind of AI is trained is, for example, by degrading a photo with random noise and then doing its best to recreate the original image (which is never exactly the same, since some randomisation has occurred). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects.

So it is trying its best to copy the subset of images. Even if it had only one image to go on, the result wouldn't be identical, as it is making its best guess.

I'm not an expert, but I've read a bit about how machine learning works, and it's slightly different from what you describe (if I understand your words correctly; sorry, I'm not a native English speaker).

The idea is that the AI, following your example, learns what a nice depth of field is and how to produce one.
Once it knows that, it can reproduce it in any image, so it isn't exactly producing a new image based on an original one.
You can ask for a nice depth of field for any subject, not only the subjects that were in the training images. So it's not a question of pixel randomisation giving you a different image from an original; the point is that the AI can now blur an image to produce a nice DOF for almost any subject you ask for.
It's not trying to make a "copy" with some differences. It's more like trying to reproduce an effect.

That is what I understood.
There are a few different methods/models, apparently. They all sound quite different from each other, but the formula is always: people's IP ---> jiggery-pokery (skirting copyright) ---> cash in the pocket of a tech bro who did a fraction of the work it took to produce and keyword the millions of images.

« Reply #60 on: November 15, 2022, 07:33 »
0


I struggle with that framing. A pixel is not a thing that is physically picked up from one place and dropped in another. It's just a set of values for relative location and color, and that is true whenever you copy an image. I honestly think the "it doesn't use any of the original pixels" framing is irrelevant, as that is always the case when transferring images digitally.

One of the ways this kind of AI is trained is, for example, by degrading a photo with random noise and then doing its best to recreate the original image (which is never exactly the same, since some randomisation has occurred). It does this for lots of images with the same keywords and looks for the points of similarity that make up the defining characteristics of the objects. ....

that's not how ML works - the AI creates new info from each training image; none of the original pixels are preserved. Instead a condensed matrix is prepared, and then, based on tags, those matrices are used to create an entirely new image. So the only question remaining is how the owners of the million training images might be paid for the training; they have no claim to the new images created.

« Reply #61 on: November 15, 2022, 09:37 »
+2

that's not how ML works - the AI creates new info from each training image; none of the original pixels are preserved. Instead a condensed matrix is prepared, and then, based on tags, those matrices are used to create an entirely new image. So the only question remaining is how the owners of the million training images might be paid for the training; they have no claim to the new images created.

Yes, I get it. It's the same sort of reasoning as "no one's making the decision, it's up to the algorithm".

I just find the assertions about whether pixels are retained redundant. The app learns where to place and how to color new pixels based on the pixels in the original images; the new info is learnt from the inputted info. The reductio ad absurdum that makes the point: I can use an image to write a table containing only figures (no pixels) recording the color and location of each pixel. I could then take that table and generate a completely new image (new info) identical to the original, i.e. without retaining any of the original pixels. I could also create an algorithm to shift the colors or locations of those pixels for the new image. How complex would that process have to be before it becomes acceptable?
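That reductio is easy to demonstrate concretely. The sketch below re-encodes a tiny made-up "image" as a plain table of numbers, then generates a completely new, yet identical, image from the table alone:

```python
# Sketch of the reductio above: a tiny "image" (coordinates -> RGB values),
# re-encoded as a plain table of figures, then rebuilt from scratch.
image = {(0, 0): (255, 0, 0), (0, 1): (0, 255, 0),
         (1, 0): (0, 0, 255), (1, 1): (255, 255, 255)}

# The "table with only figures": coordinates and color values as numbers.
table = [(x, y, r, g, b) for (x, y), (r, g, b) in image.items()]

# Generate a completely new image from the table alone -- no "original
# pixels" are reused, only the numbers describing them.
rebuilt = {(x, y): (r, g, b) for (x, y, r, g, b) in table}

print(rebuilt == image)   # prints: True -- "new info", identical result
```

The intermediate representation contains no pixels at all, yet the output is indistinguishable from the original, which is exactly why "no original pixels are retained" does not settle the copying question by itself.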

Take the example of the images of business people featuring the near-perfectly copied DT watermark the AI was outputting. Imagine that DT licensed your icon to use as a watermark only on their site. The AI would be perfectly reproducing your copyrighted material; it would be (by your definition) new info, but it is also identical to your copyrighted work.

I am not sure which part of what I said isn't how it works. I tried to make it clear that the AI is outputting what you call new info.
« Last Edit: November 15, 2022, 09:39 by Justanotherphotographer »

« Reply #62 on: November 15, 2022, 10:04 »
0
Take the example of the images of business people featuring the near-perfectly copied DT watermark the AI was outputting. Imagine that DT licensed your icon to use as a watermark only on their site. The AI would be perfectly reproducing your copyrighted material; it would be (by your definition) new info, but it is also identical to your copyrighted work.

I am not sure which part of what I said isn't how it works. I tried to make it clear that the AI is outputting what you call new info.

From what I can understand, the point is that you're always referring to an existing image; the AI doesn't need a "reference" image.

Let's try an example.
If I ask the AI to give me an image described as:
"Section of planet Earth, American continent, viewed from the Moon with a defocused background of starry sky in dark space"

What the AI needs to know to create the image is:
1 - what planet Earth is
2 - what the American continent is
3 - what a starry sky in space is
4 - what "defocused" means

Where the AI gets the first three points is easy: these are clear, common concepts with millions of images to learn from.

But what is "defocused"?
How can the AI understand the concept of "defocused" and apply it to the requested image?
The AI has been trained on thousands of defocused images with hundreds of different depths of field and effects, and it now decides to apply that to the "starry space in the background".

Does that mean it comes from existing images? Of course yes, but not in the sense that some similar image was used as a reference for the new one.

Maybe the AI learned depth of field from
"cup of coffee on the table"
"macro close-up of a flower"
and so on.
But it doesn't need a defocused starry dark space as a reference.

So, did you contribute to this science-fiction image with your coffee cup and flower close-up?
Probably yes.
Is there any direct link between planet Earth seen from the Moon and a coffee cup on a table? Of course not, not in the sense you're talking about.

If I understand it correctly  ;D
because it's not so easy and it's not so clear  ;D
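The distinction being drawn above, that the model learns an effect once and can then apply it to subjects it never saw, can be sketched with a stand-in operation. The box blur below is purely illustrative; a real model learns something far richer, but the reuse pattern is the same:

```python
# Sketch of the point above: once "defocus" exists as a reusable operation,
# it can be applied to any subject, including ones never seen in training.
# This naive box blur is a stand-in for whatever effect the model learned.
def defocus(pixels, radius=1):
    """Box blur over a 1-D list of pixel values."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

coffee_cup = [0.1, 0.9, 0.1]            # the kind of subject it trained on
starry_sky = [0.0, 1.0, 0.0, 1.0, 0.0]  # a subject it never saw

# The same learned operation applies to either subject unchanged.
blurred = defocus(starry_sky)
```

No "coffee cup" pixels appear in the starry-sky output; only the operation itself was carried over, which is the sense in which the training images contributed.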
« Last Edit: November 15, 2022, 10:10 by derby »

Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #63 on: November 15, 2022, 16:03 »
0
https://blog.adobe.com/en/publish/2020/02/27/copyrights-in-the-era-of-ai#:~:text=In%20many%20cases%2C%20the%20data%20required%20for%20AI,process%20of%20training%20an%20AI%20model%20constitute%20infringement%3F

"The Japanese government, for example, recently updated its copyright laws to include exemptions of the use of copyrighted works for machine learning. Other countries, including China, Australia, Singapore, Thailand, are looking at making similar changes. Additionally, the European Union recently adopted limited text and data mining exceptions as part of its Copyright Directive and continues to explore further refinements."

As far as the legal side, "Generally, accessing copyrighted works for use in training algorithms does not reduce the economic value of the work in any measurable way. And, if a tool powered by the algorithm is used to create something totally different, the value of the copyrighted material remains similarly unchanged."

From reading this, I'd have to ask myself: did my specific original work lose value because of the AI training that created a new and different image?

The copyright statute sets forth four factors for courts to consider in determining whether a particular unauthorized use qualifies as fair use:

    The purpose and character of the use, including whether you've made a new "transformative" work, and whether your use is commercial.
    The nature of the original work, such as whether it is more factual than fictional.
    How much of the original work was used.
    Whether the new use affects the potential market for the original work.


https://graphicartistsguild.org/fair-use-or-infringement/#:~:text=The%20copyright%20statute%20sets%20forth%20four%20factors%20for,transformative%20work%2C%20and%20whether%20your%20use%20is%20commercial.



 
