Author Topic: Since "ai tools" get perpetual recurring revenue, contributors should too  (Read 6710 times)


« on: September 18, 2023, 08:26 »
+8
For "ai training" - obviously companies are basically trying to make more $$$, by essentially stealing other's people work to do so.

A new system should be set up (and contributors should be very vocal about this - that means YOU, the person reading this) in which:

a) Obviously opt-in/opt-out - including RETROACTIVE opting in/opting out. (YES, it IS possible. It might be "inconvenient" for a company to do, and/or require recoding certain algorithms - but it is VERY easy to do: basically, if someone "opted out", the company would simply "re-train" their entire dataset MINUS the individuals who chose to opt out. It has been done before and can be done again.) It could be done in batch (say 1x/day for any new opt-in/opt-out requests; see the sketch after point (c) below). (This ALSO includes "ai generation" tools like midjourney/dall-e/etc.)

b) For data that is trained on - it IS in fact EASY to "tag" datasets to attribute specific data to individuals. In other words - if someone "creates an AI" image that references your work AT ALL - you CAN actually be compensated for that. So if 1000 artists' data is used to compose an image - each individual artist CAN in fact be attributed and compensated with fractional payments. It DOES require some programming/re-doing of current algorithms - but it is DEFINITELY 100% doable (despite what any company "claims". They may not want to do it - but it is in fact very easy and possible to do. It is simply a matter of doing it).

IN OTHER WORDS - let's say 1000 artists' data is used to "compose" an "AI" image. Each artist could get fractional income from that asset.
It may seem "tiny" at first (which it would be) - but obviously, with the millions of images being created daily, that quickly adds up.

Contributors could then be attributed and compensated for their work fairly. AND it would be 100% up to the contributor WHEN/IF they choose to have their data used - AND it is possible to do it retroactively as well.

And then obviously contributors would be able to share in the benefit of perpetual recurring income - which, after all, is one of the big reasons various companies are stealing people's data and "repackaging" it in an "AI" tool: because they want "perpetual recurring income" for basically doing nothing. Contributors should benefit from this as well, and again - at ANY point in time - be able to choose to opt out/opt in, as well as CHOOSE WHICH ASSETS can be trained on/etc.

c) The dishonest tactic some companies have employed (i.e., "oh, we took your data <ahem>, but um, yeah, here's a payout and now we'll 'let' you opt out") does not take them "off the hook" for their actions. They are still fully responsible for their actions, as well as fully responsible for compensating contributors fairly. The above CAN and SHOULD be done. (And again, this includes companies like midjourney/dalle/etc, which haven't even compensated contributors yet. It's funny when the people running those companies talk about "pesky little things like watermarks"... hmm, why would there EVER be a watermark? So strange!)
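To make point (a) concrete, here is a minimal sketch in Python. It assumes every training image is stored alongside its contributor's ID; all names (build_training_set, opted_out_ids, etc.) are hypothetical illustrations, not any company's actual code.

# Minimal sketch of a retroactive opt-out, assuming each training image
# carries the ID of the contributor who created it.

def build_training_set(all_images, opted_out_ids):
    """Keep only the images whose contributors have NOT opted out."""
    return [img for img in all_images if img["contributor_id"] not in opted_out_ids]

all_images = [
    {"file": "chair_001.jpg", "contributor_id": "ID001"},
    {"file": "dog_042.jpg",   "contributor_id": "ID002"},
    {"file": "hills_007.jpg", "contributor_id": "ID003"},
]

opted_out_ids = {"ID002"}  # ID002 chose to opt out today

training_set = build_training_set(all_images, opted_out_ids)
# training_set now holds only ID001's and ID003's images; the next
# scheduled re-train uses this filtered set, so the opt-out takes
# effect the moment the model is rebuilt.

The point being: the filtering step itself is trivial. The only real cost is re-running the training afterwards, which is a compute problem, not a programming one.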

Just an FYI of what is possible. Get vocal about it, and make it happen.
« Last Edit: September 18, 2023, 08:44 by SuperPhoto »


« Reply #1 on: September 18, 2023, 13:33 »
+2
once again, you neither understand how these generators work, nor the massive programming involved. have you worked on such huge projects?

if everyone can opt out at any time, the training would have to be continuous, and there's no indication the original images used would still be available - where are those billions of images going to be found, and how would they be able to identify your work?

but again, you don't understand how these work - once trained, there is NO way to trace back to the original training set.

« Reply #2 on: September 18, 2023, 16:50 »
+7
once again, you neither understand how these generators work, nor the massive programming involved. have you worked on such huge projects?

if everyone can opt out at any time, the training would have to be continuous, and there's no indication the original images used would still be available - where are those billions of images going to be found, and how would they be able to identify your work?

but again, you don't understand how these work - once trained, there is NO way to trace back to the original training set.

Once again, you are mistaken, and it sounds like you have limited programming knowledge. Is that the case? I actually do know what I am talking about. And yes, I have actually worked on big dataset projects, so I know EXACTLY what I am talking about. It is NOT a "massive programming" undertaking. The "massiveness" is simply in processing the data - but that is why there are now HUGE HUGE HUGE server farms, which make it a pretty simple task. (I.e., you've heard of google, right? They regularly refresh their "search engine database" - and they actually archive a LOT more than just computer models of images.)

If a company is using an "out of the box" solution (aka the lazy man's way of doing things) - then yes, without ANY changes whatsoever, as far as I know, the "out of the box" solutions don't have that pre-installed. HOWEVER... if you hire a couple of programmers and tag the data - then yes, you CAN do that.

Answering your questions:
1. The dataset opt-out/opt-in would happen at a set interval, with all requests processed in batch. (I.e., say you had 1000 people that "opted out" - it would not happen at once - it would be "batched": say at 8pm every day, when the scraping/modelling was done, anyone who was "opted in" would get processed, and those who "opted out" would not. A rough sketch follows after point 2 below.)
As for the interval - it depends on the server farm the company is using & its speed (not "really" an issue long term, but it is short term, because they are currently "learning" - and whether they are stealing 5 million images or 5 billion makes a difference). So for now, the refresh of the background modelling/dataset could be, say, 1x every 3 days.

2. And again - you are mistaken - you actually CAN "tag" the data so it stays associated with the processed data. YES - it does (most likely) require revising the current algorithm - and YES - many programmers are lazy (or in some ways incompetent), so you actually need GOOD programmers (full stack would probably be best, because they can usually see the "bigger" picture) - but you CAN do it.
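A rough sketch of that nightly batch (Python again; names like pending_requests are hypothetical - this just shows the batching logic, not any vendor's system):

# Drain the day's opt-in/opt-out requests in one pass, update the
# registry, and let the scheduled re-train pick up the result.

def apply_requests(opted_out_ids, pending_requests):
    for contributor_id, action in pending_requests:
        if action == "opt_out":
            opted_out_ids.add(contributor_id)
        elif action == "opt_in":
            opted_out_ids.discard(contributor_id)
    pending_requests.clear()  # queue is empty until tomorrow's batch
    return opted_out_ids

opted_out = {"ID004"}
pending = [("ID002", "opt_out"), ("ID004", "opt_in")]

opted_out = apply_requests(opted_out, pending)
# opted_out is now {"ID002"}; the 8pm (or every-3-days) refresh simply
# re-trains against everyone not in this set.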

Let's take a hypothetical example (SUPER simplistic, but it illustrates the point).
User A has pictures of chairs & lamps, i.e., "ID001".
User B has pictures of dogs, i.e., "ID002".
User C has landscapes, i.e., "ID003".
User D has landscapes, i.e., "ID004".

When the data was initially "modelled" and the computer model for a "landscape" was built, those tags would be associated with it (i.e., with what is essentially a computer model/representation of the "idea" of a "landscape"). Likewise for dogs/chairs/etc.

Then, say a prompt is "dog sitting in a chair".
User C/D's input was not used, so it is not relevant. However, A & B had their "computer model" accessed to "compose" that representation - so they would be "credited" with having composed it.

Then say it was "dog in landscape".
Now B/C/D were used, so they would likewise be credited.

Say each image (for simplicity) was $1 to compose. And say it was a 50-50 split between the "ai tool/engine" & the contributors.

In the first scenario, A&B get 25 cents each. In the second, B/C/D get 16.7 cents each.

THAT is how it would work.
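Here is a minimal sketch of that payout arithmetic (Python; it assumes the generation step can already report which contributors' "models" were touched - the tagging described in point 2 above. Names and numbers come from the hypothetical, not a real system):

# Split the contributors' half of the sale price evenly among every
# contributor whose tagged "model" was used to compose the image.

def payouts(price, tool_share, contributors_used):
    pool = price * (1 - tool_share)        # the contributors' half
    per_artist = pool / len(contributors_used)
    return {cid: round(per_artist, 3) for cid in contributors_used}

# "dog sitting in a chair" -> A (ID001) and B (ID002) were used
print(payouts(1.00, 0.50, ["ID001", "ID002"]))
# {'ID001': 0.25, 'ID002': 0.25}

# "dog in landscape" -> B, C and D were used
print(payouts(1.00, 0.50, ["ID002", "ID003", "ID004"]))
# {'ID002': 0.167, 'ID003': 0.167, 'ID004': 0.167}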

It's possible the companies already designed their systems to be able to do that (for other reasons, e.g., logging, or revising algorithms/tweaking "representations" of chairs (or "hands", "faces", etc)). If not - the algorithm CAN be redesigned.

It is simply a matter of doing it, and it would require some thinking.

It is not "massive programming". The "massiveness" is simply in processing the data - but again, that is an almost insignificant point because of the HUGE MASSIVE server farms, and because of how much "cheaper" it is becoming to process massive amounts of data.

It IS in fact a relatively simple thing to do. It is simply a matter of DOING it.

« Reply #3 on: September 19, 2023, 03:56 »
+8
I totally agree

Our years of work were used without our authorization to train the "system" that now leaves us without a job.

We should receive a percentage of the royalties produced by AI images (no matter how little money it is).

« Reply #4 on: September 23, 2023, 08:29 »
+2
I totally agree

Our years of work were used without our authorization to train the "system" that now leaves us without a job.

We should receive a percentage of the royalties produced by AI images (no matter how little money it is).

Yes, it is quite easy to do. Just the current algorithms (for taking other people's images and creating data models) need to be revised, in order to properly attribute the people whose images are used for image generation. Without other people's hard work, these "ai" tools (which are NOT "ai" - it is quite annoying how that term is misused) would not exist.

So yes, contributors should be properly compensated in a perpetual recurring revenue model, the same way the agencies want a lifetime of income for doing the initial 'work'.

« Reply #5 on: September 23, 2023, 10:43 »
+3
Last quarter SS earned 80 million from selling stock and an additional 17 million (21%) from licensing stock content for ai projects.

And they keep bragging about how this is their most important money project.

Has anyone seen a 20% increase in income from ai licensing?

What about all the other agencies? istock/envato/alamy have created bria, for "ethical licensing". How much will they pay artists?

I think they have all announced that they want to do that, but I only see envato confirming that they will pay their usual 50%.

Or maybe I missed that?

The agencies will keep making new data deals all the time, which means we should all be making more money.

20% extra for data licensing would be appreciated, wouldn't it?
« Last Edit: September 23, 2023, 12:11 by cobalt »

« Reply #6 on: September 25, 2023, 16:22 »
0
Since we have no bargaining power, what should happen in an ideal world is irrelevant. We simply get a one-time payment for the use of our copyrighted works to train AI, but all profits from the AI then go to them. Legally, it's probably legal. We weren't asked if we agreed with the use or with the amount of the one-time payment, but that's obviously not a legal issue. We're just screwed, and it's only a matter of time before we get kicked out and fully replaced. And this also applies to those who generate AI images - it makes no difference in principle; it's just a temporary fill-in for the demand for AI images.

« Reply #7 on: September 25, 2023, 17:36 »
+1
Since we have no bargaining power, what should happen in an ideal world is irrelevant. We simply get a one-time payment for the use of our copyrighted works to train AI, but all profits from the AI then go to them. Legally, it's probably legal. We weren't asked if we agreed with the use or with the amount of the one-time payment, but that's obviously not a legal issue. We're just screwed, and it's only a matter of time before we get kicked out and fully replaced. And this also applies to those who generate AI images - it makes no difference in principle; it's just a temporary fill-in for the demand for AI images.

Actually, you have an incredible amount of power, but you need to use it. Everything from initially contacting the agencies politely, to contacting/discussing with a lawyer, to class action, etc., etc. "They" would like you to think you have "no power", but the contrary is true. You just need to realize that. If you have the attitude that you've already lost - then you have - so you need to realize that and start taking action NOW.

« Reply #8 on: September 25, 2023, 17:37 »
0
Last quarter SS earned 80 million from selling stock and an additional 17 million (21%) from licensing stock content for ai projects.

And they keep bragging about how this is their most important money project.

Has anyone seen a 20% increase in income from ai licensing?

What about all the other agencies? istock/envato/alamy have created bria, for "ethical licensing". How much will they pay artists?

I think they have all announced that they want to do that, but I only see envato confirming that they will pay their usual 50%.

Or maybe I missed that?

The agencies will keep making new data deals all the time, which means we should all be making more money.

20% extra for data licensing would be appreciated, wouldn't it?

Yes, totally agree - and it is totally doable.

« Reply #9 on: October 02, 2023, 11:53 »
+5
once again, you neither understand how these generators work, nor the massive programming involved. have you worked on such huge projects?

if everyone can opt out at any time, the training would have to be continuous, and there's no indication the original images used would still be available - where are those billions of images going to be found, and how would they be able to identify your work?

but again, you don't understand how these work - once trained, there is NO way to trace back to the original training set.

What AIs do, they do from our work. From my point of view it is a much more sophisticated form of copyright infringement, but in essence it is the same.

Our work (copyrighted material) was used to produce something new, and that "something new" is sold without giving us royalties.

The semantics of "training" an AI and the definition of "training a machine", which must be expressed in thousands of words, do not change the essence of what is being done.

« Reply #10 on: October 02, 2023, 20:16 »
+2
once again, you neither understand how these generators work, nor the massive programming involved. have you worked on such huge projects?

if everyone can opt out at any time, the training would have to be continuous, and there's no indication the original images used would still be available - where are those billions of images going to be found, and how would they be able to identify your work?

but again, you don't understand how these work - once trained, there is NO way to trace back to the original training set.

What AIs do, they do from our work. From my point of view it is a much more sophisticated form of copyright infringement, but in essence it is the same.

Our work (copyrighted material) was used to produce something new, and that "something new" is sold without giving us royalties.

The semantics of "training" an AI and the definition of "training a machine", which must be expressed in thousands of words, do not change the essence of what is being done.


You are correct.

It is basically theft, no matter how "they" try to justify it.

Especially when some try to "appear" noble by simply "stealing" first, THEN providing an opt-out, THEN saying "oh soz, since we paid you, well... we already gave the stuff away to another company, SORREEEE!!! hee hee. um, but you can opt out now if you like! tee hee tsk tsk!". Totally shady, dishonest tactic.

ANYWAYS...

FACT is...

1. It IS indeed possible to RETROACTIVELY PAY EVERY SINGLE CONTRIBUTOR whose images were stolen. The algorithm is quite simple - basically:
a) Most have extensive server logs/cached documents. They would simply "re-scrape" their cached documents, find the author, then reach out to the author to properly attribute them & compensate them for the theft.
b) For those that "accidentally lose" those cached files - simply re-scrape the original data set.

For the actual "diffusion model"/etc...
c) It would require some re-coding - but it IS possible programmatically to attribute EVERY SINGLE SOURCE image to its author.
d) When an image is generated (INCLUDING RETROACTIVE CALCULATIONS) - it IS actually programmatically possible to figure out "which" authors' images were used to create the "composite" image.
e) It IS also possible to do massive micropayment calculations on a PERPETUAL AND RECURRING BASIS to PROPERLY COMPENSATE for IMAGES (AND VIDEOS)... BOTH retroactively, for the stolen images the "ai tools" were made from, AND going forward... AND to have an opt-in/opt-out procedure that is updated DAILY if an author does not like the terms...

Now, it is simply a matter of doing it.
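As a very rough sketch of steps (a)/(b) (Python; the log entries and URLs are made up for illustration - the model-side attribution in (c)-(e) would then build on tags like these):

# Rebuild an author -> ingested-files index from cached scrape logs,
# assuming the company kept (or can re-scrape) records pairing each
# file with its source URL and author.

cached_scrape_log = [
    {"file": "img_0001.jpg", "source_url": "https://example.com/a/1", "author": "ID001"},
    {"file": "img_0002.jpg", "source_url": "https://example.com/b/7", "author": "ID002"},
    {"file": "img_0003.jpg", "source_url": "https://example.com/a/2", "author": "ID001"},
]

def build_attribution_index(log):
    index = {}
    for entry in log:
        index.setdefault(entry["author"], []).append(entry["file"])
    return index

index = build_attribution_index(cached_scrape_log)
# {'ID001': ['img_0001.jpg', 'img_0003.jpg'], 'ID002': ['img_0002.jpg']}
# From here each author can be contacted, compensated retroactively,
# and their ID carried forward as a tag into any re-training run.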


 
