Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - SuperPhoto

Pages: 1 ... 5 6 7 8 9 [10] 11 12 13 14 15 ... 33
226
Guess I will comment.

Now for clarification - I am basing my comments on what people have said here. If I am mistaken in my understanding of the situation, please do clarify.
That being said - with my current understanding of what is being offered - $30-$80 for a set of 500 images (if indeed that is the amount; someone said $0.06-$0.16/image?) is a pittance on several levels...

a) Unless you are looking for garbage shots (i.e., 500 auto-taken pictures of a foot with toenails), it's a waste of time for any professional photographer to take those kinds of shots. Yes - you will probably find people in, say, India/Ukraine/etc. where $3 USD/hr is a great wage... or someone who is hungry and for whom $50 decides whether or not they can pay rent that month... but it is still a pittance...
b) Since the images are essentially being used to try and put that person OUT of business (via an "ai" tool to try and eliminate real images even more)... even more of a pittance...
c) People "may" or "may not" be compensated for their work? So it could be a complete waste of time?
d) 3600 seconds in an hour. 3600/500 = 7.2 seconds per image. No post-processing/keywording/etc. - not sure if that is expected. But just taking 500 images, period - unless you are looking for garbage shots - is not a lot of time to take good shots.
More likely, taking 500 (good) shots is about 5-8 hours at least... so the effective rate is roughly $10/hour (minimum wage is higher in many countries), and on top of that there is no guarantee it would even be used? (Kind of like showing up to work and the boss says 'eh, if I FEEL like it I'll pay you after - but do the work first and we'll see')... a very crappy (and a bit insulting) offer.
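For what it's worth, the back-of-envelope numbers (using the $30-$80 range quoted in this thread and my own 5-8 hour shooting estimate, so rough figures only):

```python
# Back-of-envelope numbers from this thread: $30-$80 per 500-image
# set, and 5-8 hours of real shooting time (my estimate above).
SET_SIZE = 500
for payout in (30, 80):
    print(f"${payout}/set -> ${payout / SET_SIZE:.2f} per image")
for payout, hours in ((30, 8), (80, 5)):
    print(f"${payout} over {hours}h -> ${payout / hours:.2f}/hour")
```

Even at the very top end ($80 in 5 hours) it's about $16/hour, and at the bottom end under $4/hour - before any guarantee of being paid at all.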

Anyways - is my understanding of what is/was being offered accurate? If so - then feedback-wise, no professional photographer who values their work would go for this.


227
Adobe Stock / Re: RAW will soon need a different GPU
« on: September 28, 2023, 08:46 »
Perhaps reach out to Adobe? Or mat here - and see if he can pass the feedback onto the devs?

228
iStockPhoto.com / Re: Getty Images announces AI Generator
« on: September 26, 2023, 12:03 »
Lol, no - they may not have been "outright" stolen - instead, they were most likely stolen via a bait-and-switch tactic ("licensing" via Shutterstock or other similar agencies): take the work, pay a paltry sum to contributors, and pretend to be nice and virtuous...

Unless Getty developed the images in-house - or EXPLICITLY asked contributors PRIOR to "scraping" their database - I'd say the images were indeed "stolen" - it's just playing with words to make it sound nice.

229
While I agree that the submission requirements do say it shouldn't have identifiable locations/people/etc. - so in this case, I would say this should not have gone through according to the current specs...

However, making rejections based on something that could be construed as "potentially offensive" - while yes, of course the event is tragic, and definitely very tragic for anyone personally affected - it would not be right to reject something because someone could potentially be offended. Otherwise, where do you stop? Slippery slope. Some people might not like pictures of cows (i.e., vegans, East Indians, etc.). Some people might not like pictures of churches, or conversely mosques (i.e., they "feel offended" by a particular religion), etc. Some might not like political parties, political stances, etc. Some people might not like 'black' people, others might not like 'white' people, others might not like 'asian' people, etc., etc. So 'feeling offended' is not a good reason to reject. (Total aside - the last 3 years should have been an eye-opener for many in terms of what really happened then, as well as what really happened in WWII/etc. - people are deliberately having their emotions/thoughts manipulated - but that's an entirely different topic.)

230
Last quarter SS earned 80 million from selling stock and an additional 17 million (21%) from licensing stock content for ai projects.

And they kept bragging how this is their most important money project.

Has anyone seen an increase of 20% of income from ai licensing?

What about all the other agencies? istock/envato/alamy have created bria, for "ethical licensing". How much will they pay artists?

I think they have all announced that they want to do that, but I only see envato confirming that they will pay their usual 50%.

Or maybe I missed that?

The agencies will keep making new data deals all the time, which means we should all be making more money.

20% extra for data licensing would be appreciated, wouldn't it?

Yes, totally agree - and it is totally doable.

231
Since we have no bargaining power, what should be in an ideal world is irrelevant. We simply get a one-time payment for using our copyrighted works to train AI, but all profits from AI then go to them. Legally, it's probably legal. We weren't asked if we agreed with use or the amount of the one-time payment, but that's obviously not a legal issue. We're just screwed and it's only a matter of time before we get kicked out and fully replaced. And this also applies to those who generate AI images, it makes no difference in principle, it's just a temporary fill in for the demand for AI images.

Actually, you have an incredible amount of power, but you need to use it. Everything from initially contacting the agencies politely, to contacting/discussing with a lawyer, to class action, etc., etc. "They" would like you to think you have "no power", but the contrary is true. You just need to realize that. If you have the attitude that you've lost - then you already have - so realize that and start taking action NOW.

232
There were only ever two tools, as far as I know, that used actual data on what buyers searched for when buying. They have now both been retired.

All the rest are much of a muchness (based on what other people have used) so whatever you find easier. SS's one is fine.

What were the two tools?

233
iStockPhoto.com / Re: New watermark
« on: September 24, 2023, 08:27 »
Why not contact iStock and show them how easy it is to replace - perhaps they will come up with a better watermark that can't simply be replaced with generative fill, and/or cropping it?

234
I totally agree

Our years of work were used without our authorization to train the "system" that now leaves us without a job.

We should receive a percentage of the royalties produced by AI images (no matter how little money it was).

Yes, it is quite easy to do. The current algorithms for taking other people's images and creating data models just need to be revised, in order to properly attribute the people whose images are used for image generation. Without other people's hard work, these "ai" tools (which are NOT "ai"; it is quite annoying how that term is misused) would not exist.

So yes, contributors should be properly compensated in a perpetual recurring revenue model, the same way the agencies want a lifetime of income for doing the initial 'work'.

235
Interesting, thanks for sharing... now - over what period were those best sellers? I noticed you mentioned one of your pics was from 2014... does that mean these are your 'all time' best sellers since you've been doing stock?

236
I'm curious - for people that are seeing increases in sales, what kinds of videos are you making?

237
once again, you neither understand how these generators work, nor the massive programming involved. have you worked on such huge projects?

if everyone can opt out at any time the training would have to be continuous, and there's no indication the original images used would still be available - where are those billions going to be found and how would they be able to identify your work?

but again, you don't understand how these work; once trained, there is NO way to trace back to the original training set.

Once again, you are mistaken, and it sounds like you have limited programming knowledge. Is that the case? I actually do know what I am talking about. And yes, I have actually worked on big-dataset projects, so I know EXACTLY what I am talking about. It is NOT a "massive programming" undertaking. The "massiveness" is simply processing the data - but that is why there are now HUGE HUGE HUGE server farms, which make that a pretty simple task. (I.e., you've heard of Google, right? They regularly refresh their "search engine database" - and actually archive a LOT more than simple computer models of images.)

If a company is using an "out of the box solution" (aka a lazy man's way of doing things) - then yes, without ANY changes whatsoever, as far as I know - the 'out of the box' solutions don't have that pre-installed. HOWEVER... if you hire a couple programmers and tag the data - then yes, you CAN do that.

Answering your questions:
1. The dataset opt-out/opt-in would be processed at a set interval, in batch. (I.e., say you had 1000 people that 'opted out' - it would not happen at once - it would be 'batched'; say at 8pm every day, when the scraping/modelling was done, anyone who was 'opted in' would get processed, and those who 'opted out' would not.)
2. In terms of the interval - it depends on the server farm the company is using & the speed (not "really" an issue long term, but short term it matters because they are currently 'learning' - and whether they are scraping 5 million images or 5 billion makes a difference). So you could (for now) run the process say 1x every 3 days (to refresh the background modelling/dataset).

3. And again - you are mistaken - you actually CAN "tag" the data to be associated with the processed data. YES - it does (most likely) require revising the current algorithm - and YES - many programmers are lazy (or in some ways incompetent), so you actually need GOOD programmers (full-stack would probably be best, because they can usually see the 'bigger' picture) - but you CAN do it.

Let's take a hypothetical example (SUPER simplistic, but illustrates the point).
User A has pictures of chairs & lamps, i.e., "ID001".
User B has pictures of dogs, i.e., "ID002"
User C has landscapes, i.e., "ID003".
User D has landscapes, i.e., "ID004".

When initially 'modelling' the data, when the computer model for a "landscape" was put in, tags would be associated with it (essentially a computer model/representation of the 'idea' of a 'landscape'). Likewise for dogs/chairs/etc.

Then, say a prompt is "dog sitting in a chair".
User C/D's input was not used, so not relevant. However, A & B had the "computer model" accessed to "compose" that representation - so they would be "credited" with having composed that.

Then say it was "dog in landscape".
Now B/C/D were used, so would likewise be credited.

Say each image (for simplicity) was $1 to compose. And say it was a 50-50 split between the "ai tool/engine" & contributor.

In the first scenario, A&B get 25 cents each. In the second, B/C/D get 16.7 cents each.

THAT is how it would work.
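To make the arithmetic concrete, here's a quick sketch in Python. The concept tags, contributor IDs, $1 price and 50-50 split are just the toy numbers from my example above - a real system would store these tags inside the model's datapoints, not a plain dict:

```python
# Toy attribution ledger (hypothetical IDs from the example above):
# each concept tag maps to the contributors whose images trained it.
concept_sources = {
    "chair": ["ID001"],
    "lamp": ["ID001"],
    "dog": ["ID002"],
    "landscape": ["ID003", "ID004"],
}

def credit(prompt_concepts, price=1.00, contributor_share=0.5):
    """Split the contributor share of one generation equally among
    every contributor whose data backed the concepts used."""
    used = sorted({cid for c in prompt_concepts for cid in concept_sources[c]})
    per_head = price * contributor_share / len(used)
    return {cid: round(per_head, 4) for cid in used}

print(credit(["dog", "chair"]))      # A & B share $0.50 -> $0.25 each
print(credit(["dog", "landscape"]))  # B, C, D share $0.50 -> ~$0.1667 each
```

Obviously a real implementation tags datapoints during training and logs which ones each generation touched - but the payout logic itself is this simple.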

It's possible the companies already designed their systems to be able to do that (for other reasons, i.e., logging, revising algorithms, tweaking "representations" of chairs (or "hands", "faces", etc.)). If not, the algorithm CAN be redesigned.

It is simply a matter of doing it, & would require some thinking.

It is not "massive programming". The "massiveness" is simply processing the data - but again, almost an insignificant point because of the HUGE MASSIVE server farms - and how much "cheaper" it is becoming to process massive amounts of data.

It IS in fact a relatively simple thing to do. It is simply a matter of DOING it.

238
For "ai training" - obviously companies are basically trying to make more $$$, by essentially stealing other people's work to do so.

A new system should be set up (and contributors should be very vocal about this - that means YOU, the person reading this) in which:

a) Obviously opt-in/opt-out - including RETROACTIVE opting in/out. (YES, it IS possible. It might be "inconvenient" for a company to do so and/or to recode certain algorithms - but it is VERY easy to do - basically, if someone "opted out", the company would simply "re-train" their entire dataset MINUS the individuals who chose to opt out. It has been done before, and can be done again.) It could be done in batch (say 1x/day for any new opt-in/opt-out requests). (This ALSO includes "ai generation" tools like Midjourney/DALL-E/etc.)

b) For data that is trained on - it IS in fact EASY to "tag" datasets to attribute specific data to individuals. In other words - if someone "creates an AI image" that references your work AT ALL - you CAN actually be compensated for that. So if 1000 artists' data is used to compose an image - each individual artist CAN in fact be attributed and compensated with fractional payments. It DOES require some programming/re-doing of current algorithms - but it is DEFINITELY 100% doable (despite what any company 'claims'. They may not want to do it - but it is, in fact, very easy and possible to do. It is simply a matter of doing it).

IN OTHER WORDS - let's say 1000 artists' data is used to "compose" an "AI" image. Each artist could get fractional income from the asset that was produced.
It may seem "tiny" at first (which it would be) - but obviously, with the millions of images being created daily, that quickly adds up.

Contributors could then be attributed/compensated for their work fairly. AND - it would be 100% up to the contributor WHEN/IF they choose to have their data used - AND - it is possible to do it retroactively as well.

And then obviously contributors would be able to share in the benefit of perpetual recurring income - which, after all - is one of the big reasons various companies are stealing people's data and 'repackaging it' in an "AI" tool - because they want "perpetual recurring income" for basically doing nothing. Contributors should benefit from this as well, and again - at ANY point in time - be able to choose to opt-out/opt-in, as well as CHOOSE WHICH ASSETS can be trained/etc.

c) The dishonest tactic some companies have employed (i.e., saying "oh, we took your data <ahem>, but um, yeah, here's a payout and now we'll 'let' you opt out") does not take them 'off the hook' for their actions. They are still fully responsible for their actions, as well as fully responsible for compensating contributors fairly. The above CAN and SHOULD be done. (And again, this includes companies like Midjourney/DALL-E/etc. which haven't even compensated contributors yet. It's funny when the people running those companies talk about 'pesky little things like watermarks'... hmm, why would there EVER be a watermark? So strange!)

Just an FYI of what is possible. Get vocal about it, and make it happen.
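To illustrate point (a): a minimal sketch of the nightly batch opt-in/opt-out job. All asset IDs and names here are made up; a real catalog would live in a database, not a dict:

```python
# Hypothetical nightly batch job: rebuild the training manifest from
# the latest opt-in flags, so the next re-train only ever sees
# opted-in work (all IDs/names here are made up).
catalog = {
    "img_001": {"owner": "alice", "opted_in": True},
    "img_002": {"owner": "bob", "opted_in": False},   # opted out today
    "img_003": {"owner": "carol", "opted_in": True},
}

def nightly_manifest(catalog):
    """Asset IDs eligible for the next training run."""
    return sorted(aid for aid, meta in catalog.items() if meta["opted_in"])

print(nightly_manifest(catalog))  # ['img_001', 'img_003']
```

The re-train then runs against the manifest only - which is exactly why retroactive opt-out is a batch job, not a "massive programming" problem.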

239
Yes, like other people said - if I need something in a series/different poses/etc.

240
Quote
I see your point but don't think it is technically possible to tie output to specific learning material which went into the model. So in fact you would need to broadly distribute money to creators indiscriminate of the quality and usefulness of their work. So if 100,000,000 images went into the training the compensation would need to be split between all of them - and MidJourney doesn't even know them because they scraped the internet.

From a programming standpoint, it actually is very easy/very possible... Whether or not they do it is something entirely different, but certainly possible. Here's how you'd do it (just one way).

a) Most companies (Midjourney included) keep EXTENSIVE server logs/etc. They also keep "snapshots" of their databases (I believe they did this when going from v4 to v5, because some artists were quite vocal and some actually - if I recall what I read correctly - got their material removed from the database).
b) If you wanted to pay artists whose works were taken - you'd simply "re-scrape" the content and extract contact info. There's a good chance they have archives of the data they scraped - so they'd just have to process their archived data (no need for re-scraping the net).
c) Of course, maybe not "everyone" would have contact info - but enough would that one could contact them.

In terms of ongoing perpetual compensation for using their assets -
d) Basically, it would require some tweaking of the current neural-net algorithm, such that when they create "datapoints" for images, each one includes an identifier for whose content it was. (Chances are they ALREADY have that - they just don't make it publicly known.) But it is relatively easy to do.
e) When an image is created from say 1000 datapoints (just using easy numbers here) - each tagged artist gets a microfraction of compensation (say $0.00001). It may not seem like much - but when millions of images are generated daily, it adds up (i.e., if someone's image was used 1 million times in a day, that is $10).
f) You then compensate them.

Existing/future "ai" systems (not true "ai", just a popular term nowadays for things people have been doing for 40+ years) can incorporate this type of "tagging" for image creation, in order to properly compensate artists whose works were used.

g) Opting out is also quite easy. You'd just tag certain assets and not include them in the image creation.

It may require a bit of tweaking of existing neural-net structures, but it is EXTREMELY feasible providing someone just DOES it - ESPECIALLY with ALL the MILLIONS and MILLIONS in revenue being generated, most likely on a daily basis.
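Sanity-checking the arithmetic in (e), using the same made-up numbers ($0.00001 per datapoint use, 1,000,000 uses in a day):

```python
# Micropayment arithmetic from point (e): a made-up $0.00001 per
# datapoint use, and a hypothetical 1,000,000 uses in one day.
rate_per_use = 0.00001
uses_per_day = 1_000_000
daily = rate_per_use * uses_per_day
print(f"${daily:.2f}/day -> ${daily * 365:,.2f}/year")
```

So even at a fractional-cent rate, a frequently-referenced contributor's payout is real money at scale.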

241
Adobe Stock / Re: Distant cars in commercial image?
« on: September 13, 2023, 15:29 »
I have an aerial image of a town shot with a wide angle lens. And in this photograph, there are several parked cars but they are very distant and impossible to read the license plate numbers. It is possible that a car expert may be able to identify some of the brands or models of the cars without zooming in though some people may struggle to do this. Would such an image be suitable for commercial usage in Adobe Stock?

There is also a brand name visible on a supermarket in the photo but it is barely visible due to the distance. I'm planning to clone the brand name out but that is probably overkill. You would be able to identify the brand name if you zoomed in to the photo.

I know in the past, stock agencies would normally accept photographs of cities etc for commercial usage if it was a wide shot (showing a city skyline etc and everything is distant.) However, these current warnings about logos etc on the AS submit page has made me extra cautious and I don't want to take any chances.

I could also add that there are some old historical B&W photographs displayed on the wall of the supermarket that are possibly in the public domain. Though once again, they are extremely distant and hard to identify as photographs. Though a local would probably be able to recognise them as photographs.

My experience is, 'generally speaking', if they just look like "generic cars" - then usually it is fine for commercial use. If, however, you could easily identify the brand (say, a row of Lamborghinis), or easily identify license plates, street names, business establishments, etc. - then yes, that would become editorial footage.

242
Some reasonable suggestions, but I really do not think any microstock agency has any real interest in creating a really "honest" or "fair" or "ethical" compensation system for AI training. They are just throwing these words around because they hope it will be a selling point for their AI product to customers, that's all.  :(

I think they would in the future. While the "short-term" gains may be high - if contributors "exit" the system (i.e., it becomes too costly to create new works) - then the "ai images" become "stagnant" - i.e., they have an "ai" look/feel to them, and don't incorporate new elements.

The current way of doing it is very short sighted.

243
you're off by several orders of magnitude and show you haven't really studied how ai training works - there are millions++ of datapoints (not '100') and the identity of the images used in training is lost in the creation of the dataset, as there is no longer a simple correspondence between the initial pixels (24MP/image at minimum) and the resulting dataset. so when an ai image is generated there's no way to trace back

don't know your programming bkgd, but the solution you present is enormously complicated

For simplicity - I phrased it that way, and used easy to understand numbers. I very much do know what I'm talking about, and yes - it is possible, actually quite simple to do.

Now, it is true many programmers are lazy - and use existing algorithms instead of "thinking" - but it is definitely very easy/doable. I'm not talking about corresponding "pixel images" - I'm talking about the "model/representation". It is very easy, and very possible, to "trace back" the source images - or more specifically the "source data" - that were used in composing an image.

You'd have to re-code a few things, and to do things retroactively - run it on previous data (compositions/queries) - but almost every single company (Midjourney, for example) tracks EXTENSIVELY with MASSIVE web logs/stats/etc. So it is very easy to process payments not only retroactively, but also going forward.

It is just a matter of doing it.

244
Quote

And just a suspicion that agencies want AI images marked, so they won't be used to train AI.

lol - that is "exactly" one of the reasons why they are marked as such... not the only reason, but one of them :)

246
This is how you create an ethical/responsible "ai" system that COMPENSATES contributors as long as their images are used...

------------

Basically -
(a) Opt-in/opt-out system. Contributors CHOOSE whether they want to participate. Works are added/removed from the training set, depending on the setting.
(b) For a 'fair' system (where you'd most likely get contributors WANTING to participate) - contributors benefit for EVERY SINGLE "AI" IMAGE generated.

How do you do that? It's quite simple, really.
- When the neural networks are set up - the ID # of the images is recorded for the data inputted - i.e., a "data point".
- When a customer "generates" an "AI" image - it "pulls" from sometimes tens, hundreds or thousands of "data points" to create that image. All the ID#'s of the images used in composing that "AI image" are recorded.
- Each contributor - image ID - is given a fractional portion of that generated sale. Which, obviously adds up the more images created.

Doing it this way is certainly much more ethical, AND equitable/fair - and most likely you'd have people WANTING to make images when they know they will be compensated for, not with a tool that is designed to "replace" them.

It also CAN (and SHOULD be) done retroactively - and is very easy to do so.

Going forward it is also very easy to do so.

So for example, let's say:

a) A customer pays $50 for an "AI" tool, and makes 500 images. So each "image" is worth $0.10.
b) Let's say one of those random images "used" 100 contributor files to do so in their neural network.
c) Using the current arrangement (33%), payment would work out as follows. (As an aside, the % should be upped significantly for contributors, because once the tool is in place, adobe doesn't really have to do much 'maintenance'. The 'work' is image creation. I'd suggest a 90% contributor/10% adobe split, or at least 80% contributor/20% adobe. But that's a different topic.)

But for now - using the 33% idea... $0.067 to adobe, $0.033 to the "contributor pool" for the image created.
100 'images' used to create the "ai" image, so $0.033/100 = $0.00033/contributor.

Obviously, for a single image that is not much - BUT - it also obviously quickly adds up, as 1000's of images are created with the "AI" tools.

Certainly much fairer, and equitable.
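The same example, worked in code. The $50 tool price, 500 generations, 33% share and 100 source files are all assumptions from this post, not actual adobe figures:

```python
# The worked example above: $50 tool subscription, 500 generations,
# 33% contributor share, 100 source files per generated image
# (all assumptions from the post, not actual adobe figures).
tool_price = 50.00
images_generated = 500
contributor_share = 0.33
sources_per_image = 100

per_image = tool_price / images_generated      # $0.10 per generation
pool = per_image * contributor_share           # $0.033 to the contributor pool
per_contributor = pool / sources_per_image     # $0.00033 each
print(f"${per_image:.2f} -> pool ${pool:.3f} -> ${per_contributor:.5f} per contributor")
```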

And OPT-IN/OPT-OUT would be respected. If a contributor chose to "opt out" - then their data points would ALSO BE REMOVED from the dataset for future "AI" image generation. "Opting in" is likewise very easy - it simply re-adds the datapoints to the training set for "ai" image generation.

Programmatically, this is VERY EASY to do - although it requires a bit of work to set up. And doing it this way makes it more likely that contributors will WANT to participate, as opposed to getting very upset/annoyed because their work was simply "taken" from them.

THAT is much more along the lines of "responsible AI", with creators in mind at the "center". Not the "pay you once, we benefit forever from your works" model, which creates resentment and actually discourages future image creation (which, long term, will make a useless "AI" tool, as it quickly becomes outdated).

It also makes Adobe a HUGE amount of money going forward, with nice consistent revenue for very little effort or work, and happy contributors that benefit too.

247
Hi Matt,

Thank-you for the FYI. While it is nice to compensate authors, and certainly appreciated -

(a) Doing it the way other companies did (i.e., "take first, ask permission later") does not feel right, and is not right.
(b) A more honest/ethical/equitable approach would be doing fractional payments (as opposed to a lump sum) for EVERY image generated with the tool going forward. I realize that is not what other companies are doing - BUT - it would certainly be a much fairer system. AND... adobe could be a leader here... If you wouldn't mind passing that on, that would be appreciated.


248
Really don't like the way these companies basically say:

"Oh yah, we stole, erm, 'trained' our AI tool on your images... Here's what we figure it's worth, now don't bother us".

It's theft. Plain & simple. It doesn't matter how "they" justify it - if you did not explicitly give permission, it's theft. Some "justify" it by saying "well, we have rights in our license agreement" - eh, no. BECAUSE of the way they've "approached" it, they KNOW it is wrong. (I.e., why would you have to 'hide' what you are doing, and after the fact say 'oh, here's some random money, now don't bother us', if you didn't feel it was wrong? Because they KNOW - or rather, pretty strongly suspect - that if they said to contributors "Hey guys! We want to make a tool that will make US more money in the long term, and you less - so we basically want to rip off your images to do that, but we'll pay you a couple bucks so you don't feel bad, how does that sound?", MOST contributors would MOST LIKELY say, em, no.)

Obviously DALL-E/Midjourney/etc. were the first to simply STEAL images... and one of the "pesky" little "problems" they have is getting rid of "watermarks"... hmm... how EVER could watermarks have GOTTEN there? OH, the mystery!

(While I have used the tools - and I do admit they are 'cool' - I think the approach to creating them was wrong, and they should compensate artists for the hard work of creating the source images - PLUS future compensation for every single image generated based on those. Programmatically, it IS VERY EASY to set up such a system. RETROACTIVELY, it is ALSO POSSIBLE. More work - but definitely doable.)

Then Shutterstock basically ripped things off, then said 'oh haha, yah, here's some money, SOOOOOOOreee! we already ripped it off, so you can't get it back, but here's what we randomly decided to pay you!'...

Sad to see what I would have considered better companies now following suit.

BY THE WAY...

CONTRARY to what these "AI" companies say (i.e., Midjourney/DALL-E/etc.) - it actually IS possible to "backwards compensate"/retroactively pay contributors for the images they took, IF they chose to do so...
(a) They "tracked" which images they fed into their training set.
(b) People who generated images used certain 'neural nodes' to create that image.
(c) They keep EXTENSIVE track of EVERY SINGLE THING created with the software.

So it IS possible to write an algorithm to do super-micropayments (like fractional cents) for EVERY SINGLE IMAGE created; THEN it IS possible to find those contributors (i.e., those whose images had the 'pesky little watermarks') and compensate them; and then it IS POSSIBLE to PAY OUT for EVERY SINGLE IMAGE going forward based on those BASE IMAGES...

It may be a little bit of work - but just an FYI - it IS possible. Contrary to what "they" might say. You'd just have to write a computer algorithm to do so.

So going forward - for EVERY single "AI" image created - you could be compensated fractional cents for 'neural node' inputs to create an image (i.e., $0.0001, because components of your image were used in making a new composite) - which - with the hundreds of thousands (more likely millions) being created every day... would quickly add up. AND - give you a nice future consistent revenue stream.



249
General - Stock Video / Re: Motion Elements uploading problem
« on: September 03, 2023, 11:35 »
Thanks. everything seems sorted now.

I sent them a message yesterday lunch time and had a few e-mail exchanges to answer queries their engineers had and they had the problem fixed by 5 pm. Excellent contributor support.

What e-mail did you use? I've tried sending e-mails to what I think is the support address, but never yet gotten a response... Thanks!
PS - have you made any sales with them? I haven't yet - but I do sell higher-priced clips. Wondering if I should try changing the price point. Thanks!

250
Where does it say they are shutting down? Or are they just being 'sold off'?
