Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - SuperPhoto

526
DepositPhotos / Re: How is DP for sales? (photos & vids)
« on: October 24, 2023, 14:25 »
Thanks... but - what is considered a "good" amount of sales?

Say one had 1000 images, & 1000 videos, relatively all unique/saleable (but no people shots/model releases/etc - so content other than people shots).

Would you... estimate making $10/month? $100/month? $500/month? $1000/month? Just curious what kind of average return there is, and what people consider "a lot" or "a little" in terms of revenue... thanks!

When I started back in 2014 or whenever, I could make a couple hundred a month. Now I make a couple of... dollars. Maybe.

Ah, okay - thanks - that definitely helps.

Yes, was just trying to decide if it was worth it... I.e., a couple of years ago I joined 123rf - and it was time-consuming uploading/processing stuff, etc. - and in the couple of years there, I think 'maybe' I got $100? Thanks. Was wondering how DP fared on that scale...

527
DepositPhotos / Re: How is DP for sales? (photos & vids)
« on: October 24, 2023, 11:26 »
Thanks... but - what is considered a "good" amount of sales?

Say one had 1000 images, & 1000 videos, relatively all unique/saleable (but no people shots/model releases/etc - so content other than people shots).

Would you... estimate making $10/month? $100/month? $500/month? $1000/month? Just curious what kind of average return there is, and what people consider "a lot" or "a little" in terms of revenue... thanks!

528
DepositPhotos / How is DP for sales? (photos & vids)
« on: October 24, 2023, 09:22 »
I realize it is a bit of a subjective question (obviously depends on content/etc) - but if you have a deposit photos port, could you share rough figures, and whether you focus on videos, photos, or both?

I'm considering joining them (haven't yet) - just wondering whether there is some good potential to get additional sales from there, or whether it would just be a time consuming task for a few dollars, etc...

And how would it compare, say, to your port on AS/SS/etc.? (Just for reference - is DP a relatively 'big' agency/customer base?)

Thanks!

529
Dealing w/google sometimes can be a bit of a pain - but since you have a 2nd channel - I'd contact them via the help section (upper right corner) and provide as much detail as possible.

They can actually see things on the backend (i.e., who logged into your account, where they logged in from, type of computer system, etc, etc) - and if it matches (i.e., both 1st/2nd account are obviously yours) - they may be able to assist. But provide as much detail/proof/etc as you can that both channels are yours.

Good luck.

530
No one called anyone a N**i though. It would be more accurate to coin a term for an "SJW" or "Woke" or "Neo-Marxist" Law. Far more likely someone's gonna get called that nowadays.

Anyway, I have nothing against Pete being an SJW or Neo-Marxist with politics of aggrievement for his racial group. Or that he is woke to the systemic policies of repression of the white man by the new world order. I make no judgement as to the validity of his aggrievement. I can't speak to the suffering he has suffered under the heel of whoever (though I can't remember anyone bringing up his race on this forum other than him? That could be a lapse in my memory?). But does it have to be forced down everyone's throat? Can't we live our lives without these woke SJW people and their agendas?

I also don't care if someone else believes in chem trails, or that covid and climate change are a lie. Whatever - it's just that there's an off-topic section especially for this stuff.

Ah, so you are just trolling? Okay, well - if you want to get back on topic, do so. You are the one who derailed it. Was it fun?

Get back on topic then.

Or... are you just demonstrating the usage of ChatGPT for producing a response, and showing how useless it can be/the gibberish it can produce ? :) If so, well xribtoks - there's a demonstration for you!

(Haha, just kidding photographer guy - if it actually was a legit response - who knows - you could have used chatgpt for that response - but, if you were just trolling lol, okay - get back on topic then).

531
PPS,

I should add - what do you think the words are used to describe in creating an image? I.e., "pear growing in a tree in a field".
"pear,grow[ing],tree,field"

Those are tags. The "machine learning" already DOES associate with tags.
It extracts that information (i.e., "keywords") associated with the "image" - and then associates that with the model.

So it is SUPER easy to simply add "contributor-id" (which can be the name/URL/etc or an actual number that contains all that information). And then SUPER easy to associate WHICH contributors file(s) were used in creating a "composite" image (i.e., an "ai" generated image).

SUPER SUPER EASY. Just a matter of doing it, then fairly compensating contributors with the SAME RECURRING PERPETUAL INCOME REVENUE model that the agencies so desperately and greedily want for themselves, while trying to convince contributors that anything else is "fair" (which of course, it's not). Sharing the recurring revenue model, with opt-in/opt-out features so at ANY time the contributor can opt out if they don't like the terms - with assets going forward NOT referencing the input items - is fair.
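As a minimal sketch of the "attach a contributor-id to the extracted keywords" idea described above (field names, the stopword list, and the IDs are all hypothetical, purely for illustration):

```python
# Minimal sketch: carry a contributor ID alongside the keyword metadata
# extracted from each training image (field names are hypothetical).

def extract_training_record(image_id, caption, contributor_id):
    """Turn a caption like 'pear growing in a tree in a field' into
    tagged training metadata that keeps the source attribution."""
    stopwords = {"in", "a", "the", "on", "of"}
    tags = [w.strip(",.").lower() for w in caption.split()
            if w.lower() not in stopwords]
    return {
        "image_id": image_id,
        "tags": tags,
        "contributor_id": contributor_id,  # carried through training
    }

record = extract_training_record("img-001",
                                 "pear growing in a tree in a field",
                                 "contrib-A")
print(record["tags"])            # ['pear', 'growing', 'tree', 'field']
print(record["contributor_id"])  # 'contrib-A'
```

The point is only that the attribution travels with the tags; a real pipeline would obviously do stemming, deduplication, etc.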

532

Besides the fact that there is no way to trace which images were used - and worse, even if your 'easy to do' way of marking were possible, for most images there is no way to identify who the artist is (maybe close to 0% can be identified) among the billions of images used - many have no names associated, and those that do lack verification and an address to pay to. How would your revised training know who (and how) to make payments?


We're talking about Adobe generative fill. Adobe knows very well where to find the artists whose photos were used and how to pay them, as they used images from their own database.
Yes, it's about AS specifically, but I was responding to the comments that were not limited to AS.

Again, how does AS know whose images were used for each creation, since ML eliminates any way to track that, even if identifiers were attached initially? Each image is translated into thousands of datapoints, and millions/billions of operations are performed to generate each new image.

Okay - you aren't quite thinking correctly here.

It requires revising the machine learning algorithm to incorporate identifiers, and then attributing those identifiers to the output.

Keeping things simple.

Lets say you have 3 contributors, named "|A|" and "|B|" and "|C|".
|A| has images of an apple, and a pear
|B| has an image of an apple
|C| has an image of a pear, and an orange

Let's say the "machine" ("AI") version of an apple is "ML-APPLE", and of a pear is "ML-PEAR".

The "AI" (ML/machine learning algorithm) creates a "representation" of what it believes an "apple" to be by scraping |A| + |B|'s image.
It then does the same for a pear, by scraping |A| + |C|'s image.

In its internal representation, it would look like:

[ML-APPLE]:{|A|,|B|} (simply meaning what I stated above - the "ai" version of an apple references |A| + |B|'s images)
[ML-PEAR]:{|A|,|C|} (simply meaning the "ai" version of a pear references |A| + |C|'s images)
[ML-ORANGE]:{|C|} (simply meaning the "ai" version of an orange references |C|'s image)

Let's say you then have a customer that generates images. Let's say they pay $1/image (for simplicity), and it's a 50-50 share between the agency (the "ML" image) and the source contributors.

They decide to make a picture of a "pear". Since "ML-PEAR" references |A| + |C|'s images, |A| + |C| would be compensated for that image generation.
I.e., $1 = $0.50 to the agency, $0.50 to contributors. Since two contributors (A + C) made up the "pear" model, the $0.50 contributor share would be split between them ($0.25 each).

Now let's say you made an image of an orange.
Since |C| was the only source referenced, |C| would get full credit for this image. (I.e., $1 => $0.50 to the agency, $0.50 to |C|.)

That is a super basic illustration of what I am talking about.

Of course, the pseudocode above is an extremely simplistic concept - it is simply designed to illustrate how it would be done at the most basic level, and some of the requirements for revising the algorithm to attribute source images.

Of course, actual code would be much more sophisticated, and one could then decide whether to attach weights to "how much" of the model was used (i.e., was it a "tiny" pear in the image, or a "big" pear, and should they be compensated accordingly?), as well as how "much" of the "pear" was attributed to a specific contributor. (I.e., if |C| had, say, 50 images of pears, and |A| only 1 image of a pear, used in the model/representation of what a 'pear' was - should |C| get 50x the 'credit' for the pear image?) Of course, that is a little more in depth, and this example is simply for illustration purposes.

Fact is - it IS super easy to properly attribute source images, AND - it is ALSO super easy to properly CREDIT source images - on a PERPETUAL RECURRING BASIS.

It is simply a matter of taking the time to revise the algorithm to do so.
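The |A|/|B|/|C| illustration above can be written out as a short sketch (Python used purely for illustration; the concept-to-contributor mapping, the $1 price, and the 50-50 split are the hypothetical figures from the example, not any agency's actual model):

```python
# Sketch of the attribution model described above: each "ML concept"
# keeps the set of contributor IDs whose images trained it, and each
# generated image splits revenue 50/50 between agency and contributors.

model = {
    "ML-APPLE":  {"A", "B"},
    "ML-PEAR":   {"A", "C"},
    "ML-ORANGE": {"C"},
}

def settle_generation(concept, price=1.00, agency_share=0.5):
    """Split one generation's contributor pool equally among the
    contributors tagged in the concept's training set."""
    contributors = model[concept]
    pool = price * (1 - agency_share)          # contributor half
    per_contributor = pool / len(contributors)
    return {c: round(per_contributor, 4) for c in sorted(contributors)}

print(settle_generation("ML-PEAR"))    # {'A': 0.25, 'C': 0.25}
print(settle_generation("ML-ORANGE"))  # {'C': 0.5}
```

This matches the worked example: a $1 "pear" pays the agency $0.50 and splits the other $0.50 between |A| and |C|, while an "orange" pays |C| the whole contributor share.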

533

Yes, it's about AS specifically, but I was responding to the comments that were not limited to AS.

Again, how does AS know whose images were used for each creation, since ML eliminates any way to track that, even if identifiers were attached initially? Each image is translated into thousands of datapoints, and millions/billions of operations are performed to generate each new image.

If you are using an out-of-the box "ML" solution, without ANY kind of revision whatsoever that had no built in tracking/etc, then yes, you would be correct.

But if you REVISE the algorithm (let's say a simple "reinforcement" model) and assign weights with "ids" of the original sources to the original inputs, it becomes very easy to "track" sources. It does require revising the generic algorithms taught in most computer science textbooks.
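As a rough sketch of that weighting idea (the contributor ids and image counts are invented for illustration, echoing the "50 pears vs. 1 pear" example earlier in the thread; this is not any agency's actual pipeline):

```python
# Sketch: per-concept counters of source contributor ids let credit be
# weighted by how many of each contributor's images fed the concept.

from collections import Counter

# Hypothetical training ledger: C contributed 50 pear images, A one.
training_sources = {
    "ML-PEAR": Counter({"C": 50, "A": 1}),
}

def weighted_split(concept, contributor_pool=0.50):
    """Split the contributor pool for one $1 generation in proportion
    to each contributor's share of the concept's training images."""
    counts = training_sources[concept]
    total = sum(counts.values())
    return {cid: round(contributor_pool * n / total, 4)
            for cid, n in counts.items()}

print(weighted_split("ML-PEAR"))  # {'C': 0.4902, 'A': 0.0098}
```

So instead of an equal split, C (50 of the 51 training pears) receives roughly 50x A's share of the $0.50 contributor pool.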

534
you know NOTHING about my computer experience.

You don't address the biggest problem - how do you identify who made the image, and how do you contact and pay them? That information isn't available in most cases.

And now you've moved the goalposts, saying funds should be distributed to everyone, not based on where their images were used. You know little/nothing about ML, it seems, and that is relevant, as the major fallacy in your proposal assumes you can track where an image is used.

You are correct, I don't know your computer experience, which is why I was asking. Likewise, you know absolutely nothing about my experience, and are quite arrogant and accusatory in your statements - which would lead me to believe you know very little, if anything, related to computer science/computer engineering/programming/etc. - and especially actual "AI" versus "ML" algorithms. Fact is, I actually do know what I am talking about. Have you EVER dealt with large datasets, scraped data, created actual databases from scratch, written image algorithms - ANY of that? Have you ever even created your own "AI" algorithm? If you had, you'd probably know that what I am talking about is very feasible, and simply a matter of doing it. It does require "work", and it does seem agencies are trying to figure out how to cut out artists while appearing noble (in many ways from a purely greedy standpoint) - when, in fact, they should be held fully accountable for any theft of artist assets, and compensate accordingly, in the same perpetual/recurring revenue model they so desperately desire.

Answering your questions:

a) For compensation - it depends on where/how the data/images/etc. were scraped. It would make most sense to simply begin with the major agencies whose data was scraped (i.e., DT/SS/AS/P5/etc). Super easy to figure out who to issue payments to. For other agencies, it would be a matter of writing more sophisticated algorithms...

And - as a "tongue in cheek" statement - since "everyone" seems to think actual thinking "AI" exists (most people don't actually understand what actual "ai" is, and believe it is "thinking", versus the sophisticated theft & pattern re-arrangement being called "ai") - if "ai" actually existed, it would be super simple: just ask the "ai" to figure out who to issue payments to.

b) it is SUPER easy to track/source image usage. If you don't realize this, it seems you've never scraped data before?


535
Honestly the forum is getting full of it. Hard to keep track of who's a "they're among us" guy, who's a Q guy, who's a chem trail guy, a white replacement theory guy, a "the neo-Marxists are out to get us" guy, a far-right ethnonationalist or libertarian, an incel/misogynist, on and on.

Guess it's the destiny of all online forums eventually. The people who are put off by the madness slowly just back out of the room. It just isn't a pleasant place to hang out.

Haha, does that make you the 'believes 100% everything on t.v. and totally obedient to what the newspapers say' guy? :) BTW - if you are finding a lot of people are trying to get through to you... then it might be worth using your own thinking to look at the merit of it... There is most certainly a difference between a "conspiracy theory" and just plain old "conspiracy"... (and the label 'conspiracy theory' is a tactic to make someone short-circuit their thinking and automatically 'dismiss' something without looking at its merit). But I do understand that with years of schooling - being taught to be 'obedient to authority', and that 'authority is right without question' - it can be hard to break out of that thinking to see things as they really are... Good luck.

"    Historical Exploitation: Throughout history, colonial powers, often led by European males, have oppressed and exploited indigenous populations in various parts of the world. This exploitation included forced labor, land theft, and cultural suppression.

    Slavery: The transatlantic slave trade, largely driven by European powers, resulted in the forced enslavement of millions of Africans. This brutal system of oppression led to centuries of suffering and continued racial disparities.

    Racial Discrimination: Discriminatory practices, including segregation and institutional racism, have marginalized racial and ethnic minorities in many societies. This discrimination has resulted in disparities in education, employment, housing, and criminal justice, among other areas.

    Stereotyping and Prejudice: Minorities often face stereotypes and prejudice perpetuated by the dominant culture, which can lead to bias in various aspects of life, including employment, education, and social interactions.

    Cultural Appropriation: The appropriation of cultural elements from minority groups by those in power can lead to the erasure of cultural identity and perpetuate harmful stereotypes.

    Unequal Access to Resources: Systemic disparities in access to resources, including economic opportunities, healthcare, and quality education, can disproportionately affect minority communities.

    Violence and Hate Crimes: Hate crimes and racially motivated violence can target minority individuals and communities, causing physical harm and psychological trauma.

    Disproportionate Incarceration: Racial and ethnic minorities, particularly in the United States, are often overrepresented in the criminal justice system, facing harsher sentences and unfair treatment.

    Political Disenfranchisement: Minority groups have historically faced barriers to political participation, including voting restrictions and gerrymandering, which can limit their ability to have a voice in decision-making."

Wow those evil WASPs?

Back on topic, it would be nice if AI could look at an image and say, here are keywords, for us. On the other hand, isn't that what keyword suggestions already do on Adobe, SS and DT, for example?

Or is there something in "please generate good keywords for an image of sliced vegetables", where we then use the results to add to our already thoughtful keywords?

Hey, not bad? I see some I could use.

    Sliced vegetables
    Fresh produce
    Culinary preparation
    Food preparation
    Chopped veggies
    Colorful ingredients
    Healthy cooking
    Salad ingredients
    Kitchen ingredients
    Food diversity
    Vibrant colors
    Nutrient-rich
    Cooking ingredients
    Culinary art
    Ingredient diversity
    Sliced carrots
    Chopped bell peppers
    Vegetable medley
    Knife skills
    Nutritious meal prep

I'm happy the Chat GPT 3.5 supports "Ingredient Diversity" for vegetables.  ;)

Hi Pete! :)

Not sure if this was a reply to me, or the other fellow? Anyway, curious - what was your reply in reference to (or what were you trying to say)? But digressing for a second (then getting back to "ai", which isn't true "ai", simply sophisticated theft & pattern re-arrangement)... re: "wasps" - it's important to remember the distinction of "who" is actually doing that kind of thing... i.e., have you (or anyone you know) "personally" appropriated someone's land, personally engaged in the "slave trade", personally engaged in 'discriminatory' behaviour, personally done cultural appropriation, etc., etc.?

Chances are no. I'd say you, probably like most people, are generally a good person and do your best to do the right thing. HOWEVER... I would agree that there is a small group of psychopaths (not just "white", but black/asian/indian/jewish/christian/Buddhist/etc.) - as George Carlin said, "it's a big club & you ain't in it!" - who LOVE to manipulate, deceive, steal, engage in dishonest tactics, etc. Generally speaking, they also have control over armies, and have trained them to be 'order-followers' - not to think, just to obey - and then use those armies to appropriate things like land and oil/resources, deliberately manipulate and encourage "racial tension" (to try and encourage fighting among the 'taxslaves' so they don't see who is pulling the strings), etc. Tell-a-vision does encourage certain stereotypes; sloth and laziness are encouraged (easier to control someone who is a welfare/social assistance recipient than someone who is his 'own man'), etc. There is a very real "attack" on "white" people - lumping generally very good individuals in with the actual psychopaths who are perpetrating these evil acts upon pretty much "everyone"...

It's a small group of psychopaths. The same psychopaths doing the massive money printing to redirect money and cause "inflation" (a.k.a. invisible theft, where you don't even have to go to someone's home - you just make everything more expensive), the same psychopaths trying to buy up all the land & businesses for cents on the dollar by first bringing them to the brink of bankruptcy through economic manipulation, then "miraculously" restoring the businesses once they've acquired them, etc. And then those same psychopaths write the history books (and do their best to censor anything to the contrary) to lay the blame on anyone but themselves...

ANYWays... totally different topic :)

Getting back to "AI"...

Yes, trying to use it for keywords generates the following kinds of things.

a) Redundant keywords
b) Irrelevant keywords
c) Useless keywords
d) etc, etc.

It is a lot of "work" to try and get it to provide something that would be consistently useful for sentences provided. And on the off chance you do find something that is useful - it appears to be part of the algorithm to "randomize" it - so... eventually while the first few results may have been useful, it ends up becoming useless garbage for subsequent results...


536
Thanks, interesting interview.

I'm curious - how do you easily replace background "skies"? Does he just mask it and cut/paste? Wouldn't there be noticeable pixellation, or how do you compensate for that?

Thanks!

537
Yes, I have found it somewhat useful for very specific use cases; in some ways I suppose it is a slightly better version of Google (in terms of 'natural language parsing', i.e., inputting text sentences)... you still need to think through whether the answer is accurate & useful to your specific case...

...
...But now, several years later, we can see the results: the computers are a big help, a big tool, but they have not eliminated the editor's job at all.

538
I've never used it for keywording, but it has been a great sparring partner for me in learning Illustrator scripting, which I have used to make algorithmic-based stock vectors.

I have found for basic scripting it can be good... but if you try to get anything a little more sophisticated (i.e., say, sophisticated patterns, etc.) it does a really poor job.

539
how did this topic turn into conspiracy theories!!!?
PS Climate change is real

lol, well, it's true - the climate does change every day. Sometimes it's warm, sometimes it's cold. Certainly not what the t.v. would lead you to believe though - that YOU - because you breathed out and drove your car a mile today - are responsible for 100's of acres of clear-cutting of Brazilian forests by corporations and the private gas-guzzling jets, and therefore need to sell your car, buy a bike, and put on a VR headset and stay at home to work remotely... :P

540
Honestly the forum is getting full of it. Hard to keep track of who's a "they're among us" guy, who's a Q guy, who's a chem trail guy, a white replacement theory guy, a "the neo-Marxists are out to get us" guy, a far-right ethnonationalist or libertarian, an incel/misogynist, on and on.

Guess it's the destiny of all online forums eventually. The people who are put off by the madness slowly just back out of the room. It just isn't a pleasant place to hang out.

Haha, does that make you the 'believes 100% everything on t.v. and totally obedient to what the newspapers say' guy? :) BTW - if you are finding a lot of people are trying to get through to you... then it might be worth using your own thinking to look at the merit of it... There is most certainly a difference between a "conspiracy theory" and just plain old "conspiracy"... (and the label 'conspiracy theory' is a tactic to make someone short-circuit their thinking and automatically 'dismiss' something without looking at its merit). But I do understand that with years of schooling - being taught to be 'obedient to authority', and that 'authority is right without question' - it can be hard to break out of that thinking to see things as they really are... Good luck.

541
I have used it, and overall it's very difficult to get good, consistent results - such that I've found doing it the "old school" way is much better. I now use it from time to time just to "complement" my original workflow (but even then, I'm debating its usefulness because of the extra time required to sort through crap).
...
As of yet, I personally have not figured out an efficient way of getting consistent, useable information.

Thank you for your detailed response! This was largely my experience too. But I was still interested in whether anybody had managed to make any use of it.

By the way, did you use free version (GPT 3.5) or Pro (GPT-4)?

Used the free version (3.5).

Was considering trying/using the paid version, but haven't yet. So that was my experience with the "free" version.

542
What query did you use to get "good" keywords, and how many were you able to get at a time?

I've found generally speaking the results have been inconsistent, and limited. How were you able to get good results?

It's easier to ask the whole community than to actually use a tool called search first, isn't it?

Have you tried using your own tip? Did you find in search any results relevant to microstocks?

I'm asking about a personal experience, relevant to microstocks. Not "how to use ChatGPT for photography".

Probably I was not clear, since I was trying to make a joke and respond to what you are asking at the same time. But I can guarantee that the joke is not on you but on the situation: it involves AI and the type of responses that ChatGPT sometimes gives if you ask it to play a role.

This time I'm going to try to be more direct: Yes, ChatGPT actually creates good keywords. My advice/tip is: use it!

However, I did try asking ChatGPT your question, without role-playing.
Here is the result:

"As of my last knowledge update in September 2021, ChatGPT, like other AI models, was not typically used in the context of microstock photography or similar industries. Microstock refers to the sale of stock photos, illustrations, and other digital media through online platforms, often for a lower cost than traditional stock photography.

However, AI and machine learning technologies have been applied in various ways in the field of photography. This might include image recognition, automated tagging, and content recommendation systems. These technologies can help streamline the process of managing and searching for stock images, but it's typically not the AI model like ChatGPT that is directly involved in these tasks.

If there have been any developments or specific applications of ChatGPT or similar AI models in the microstock industry since my last update, I would not have that information. I recommend checking with the latest industry news or consulting with professionals in the field for the most up-to-date information on AI usage in microstock and related sectors."

543
I have used it, and overall it's very difficult to get good, consistent results - such that I've found doing it the "old school" way is much better. I now use it from time to time just to "complement" my original workflow (but even then, I'm debating its usefulness because of the extra time required to sort through crap).

It basically:

a) Gives you lots of irrelevant crap, no matter how much you tweak it.

b) On the off chance you get relevant/useful items (it happens) - it gets "bored" and then will change the output to give you useless, irrelevant crap

c) Sometimes you get "useable" stuff - but you really have to verify it (because it then likes to get bored and give you useless, irrelevant crap or just outright garbage, probably to see if you were paying attention)...

d) It has been trained to be a "social justice warrior" - so on anything that is slightly not in line with the "approved mainstream t.v./govt narrative" on pretty much ANY topic, it acts like a very condescending ____...

I.e., for fun, type in something like "climate change is actually manmade, designed to push a carbon tax to tax people more, to try to take their wealth and restrict their movement" (tons of patents to this effect, plus actual news articles saying 'yep, we're spraying the sky with chemicals "for your safety & protection"'), or question the "covid" narrative (i.e., that the shots actually do cause cancer/infertility/etc., which was by design, not accident; that the "masks" actually did have poisons on them to "make ppl sick" & fearful, to try and manipulate them into getting shots/etc.) - and ChatGPT almost has a panic attack and looks like it will blow a circuit... it goes "NO NO NO NO! It is 100% true, all the garbage spewed on t.v.! How DARE you question that! OH NO NO NO NO!"...

If you ask it for words that one might use when searching for, let's say, a "black" person (to get, say, 'african american, african, caribbean, etc.'), it automatically goes into "panic" mode, trying to "scold" you on "racial sensitivities" and how "dare" you be so "racially insensitive" (yet it is totally 100% happy to give you derogatory terms to describe "white" people)... If you want to have some REAL "fun" - ask it ANYTHING about a politician, in pretty much any country, who is not a current media darling (i.e., who the television says is ABSOLUTELY wonderful), try to say the opposite, and see how ChatGPT responds, lol...

e) The "dataset" you have access to is not the same dataset the "owners" have. You have access to just a small subset (and in fact, I'd say people are being used to "train" it to be more useful to the owners). So lots of "gaps" in "information" it has.

It will, every now and then, provide somewhat useful content.

So you then have to decide, is it worth your time to sort through the useless crap, to find actual, useable content?

For VERY "basic" tasks, you may find it useful. And I mean SUPER basic.

That being said - it is possible someone has figured out a better way (and good way) of getting actual useable content from it. And I'm sure some people have.

As of yet, I personally have not figured out an efficient way of getting consistent, useable information.



544
Let's please not derail this topic to whether it is possible to identify which assets have been used to create a particular variation of the generative fill, as it is not pertinent to this discussion - discuss that topic elsewhere please.

Let's assume it isn't possible. Revenue share could be done as a 65/35% split, with 35% of the revenue going into a "contributors' fund", and then paid out in proportion to the number of assets each contributor has provided for training. This number is known, as we already received a one-time payment for the initial Firefly training.

So let's say the whole model has been trained on 50 million assets, and contributor A has provided 5000 assets to that training database. His "exposure" would be 5000/50 million = 0.01%

If the total revenue of "fast lane" credits is $2 million per month, 35% (for the contributor fund) is $700,000. 0.01% of that is $70. Contributor A would get $70 that month.

This is feasible, realistic and, in my opinion, fair.

Why is this not being incorporated and why aren't we voicing our concerns? Do you really not care?
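The fund-split arithmetic in the quoted proposal can be sketched directly (figures taken from the example above; the function name and `fund_share` parameter are invented for illustration):

```python
# Sketch of the proposed contributor-fund payout: a contributor's monthly
# share is their fraction of the training set times 35% of monthly revenue.

def monthly_payout(contributor_assets, total_training_assets,
                   monthly_revenue, fund_share=0.35):
    """Return one contributor's monthly payout under the 65/35 proposal."""
    exposure = contributor_assets / total_training_assets  # e.g. 0.01%
    fund = monthly_revenue * fund_share                    # contributor fund
    return exposure * fund

# Contributor A: 5,000 of 50 million training assets, $2M monthly revenue.
payout = monthly_payout(5_000, 50_000_000, 2_000_000)
print(f"${payout:.2f}")  # $70.00
```

This reproduces the quoted numbers: 5,000 / 50,000,000 = 0.01% exposure, applied to a $700,000 fund, gives $70.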

Actually - it's not derailing - it's very relevant and pertinent to the discussion, simply because the agencies are acting on the premise that they have never considered that as an option (plus, they would like people to think it wasn't an option) - simply because "they" in general want the lion's share of the $$$$. Most - if they could - would probably try to get rid of non-computer-generated assets out of pure greed.

Fact is - the "tools" being built are built on the hard work of others - and - going forward - perpetually - contributors whose assets are referenced should be compensated fairly. The "tagging" model (pseudo-code of course for simplicity), accomplishes this.

In terms of a one-time payment - your model, while good in principle, is too simplistic, and does not compensate contributors based on usage. I.e., "one" asset could be referenced millions of times but only get a "1 asset" payment, while another contributor (i.e., the people who spam 50,000 pencils) would get "50,000" asset payments. As well, usage would not necessarily be fairly compensated (i.e., evergreen content versus, say, editorial/time-specific content (i.e., "the virus") over the last couple of years).

The perpetual income based on tagging assets used for "ai" image generation is very fair, reasonable, and doable. It is simply a matter of reprogramming the current algorithm to tag assets, and when an "ai image" is generated - compensating every contributor whose assets were used as part of that computer model.


545
....
b) When an "AI" generates an image - it does reference, essentially, computer models. However, it is entirely possible to say that for, say, a "car", 55 contributors were "tagged" in that computer representation of a "car".
c) When the asset is generated, contributors get micropayments. (I.e., say the "ai" image was "worth" $0.10 to the company, and the revenue split was 50-50: the 55 contributors would each get $0.05 / 55 ≈ $0.0009. May not seem like a "lot" - but with the millions of images generated daily, it quickly adds up. So if each image was like that - 55 contributors for different 'models' - and say 100,000 images were generated referencing their input in a month, then $0.0009 × 100,000 ≈ $90)...

Besides the fact that there is no way to trace which images were used - and worse, even if your 'easy to do' way of marking were possible, for most images (maybe close to 0% are identifiable) there is no way to tell who the artist is among the billions of images used - many have no names associated, and those that do lack verification and an address to pay to. How would your revised training know who (and how) to make payments?

But you've stacked the odds in your favor - you claim there would be millions of images generated using only 50 originals, when many thousands of images would be more likely (a conservative estimate), so your estimate of payment due is off by several orders of magnitude - and there are certainly not millions of images generated daily.

You obviously don't have much of a computer background, if any. Do you have ANY programming experience whatsoever, let alone with large datasets? Yes, it is VERY possible, and VERY doable.

The illustration/example is designed to keep it simple, so you can understand. Obviously the programming would be a little more sophisticated than that. But it is VERY, VERY easy to do - it is simply a matter of DOING it.

546
Totally 100% agree contributors should get PERPETUAL RECURRING revenue - EVERY SINGLE TIME an asset is "created" that is referenced.

It is 100% possible to do with the "AI" model - contrary to what anyone says. They might not "want" to do it (because the idea is "greed" and trying to "take it all") - but it is extremely feasible.

See my post here:
https://www.microstockgroup.com/general-stock-discussion/since-'ai-tools'-get-perpetual-recurring-revenue-contributors-should-too/

In essence - the current algorithm (for stealing people's content) would be revised slightly, basically:
a) Every time an asset is stolen, er, "trained" - the contributor whose asset was trained is "tagged" with an ID.
b) When an AI tool generates an image, it essentially references computer models. However, it is totally easy to record that, for say a "car", 55 contributors were "tagged" in that computer representation of a "car".
c) When the asset is generated, contributors get micropayments (i.e., say the "ai" image was "worth" $0.10 to the company and the revenue split was 50-50, so the 55 contributors each get $0.05 / 55 ≈ $0.0009. May not seem like a "lot" - but with the millions of images generated daily, it quickly adds up. (So say each image was like that, with 55 contributors tagged across different 'models', and say 100,000 images were generated referencing their input in a month, then 0.0009 * 100,000 ≈ $90)
d) The contributor then gets that ~$90 for their REGULARLY RE-USED contribution. If that same amount continued every month, then the contributor gets that same payment EVERY SINGLE MONTH, for the rest of their life, as their asset is referenced.

VERY EASY TO DO. REQUIRES RE-PROGRAMMING the current algorithm. PUSH for that. It is simply a matter of DOING it.
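For illustration, steps a) through d) above might look something like this in Python - a minimal sketch only, assuming hypothetical contributor IDs, a $0.10 value per generated image, and a 50-50 split:

```python
# Minimal sketch of the tagging + micropayment model (steps a-d above).
# All contributor IDs, prices, and splits are hypothetical.
def split_revenue(tagged_contributors, revenue_per_image, contributor_share=0.5):
    """Split the contributor share of one generated image's revenue
    equally among every contributor tagged in the model that produced it."""
    pool = revenue_per_image * contributor_share
    per_contributor = pool / len(tagged_contributors)
    return {who: per_contributor for who in tagged_contributors}

# a) + b) 55 contributors were tagged in the "car" model during training.
car_model_tags = [f"contributor_{i}" for i in range(55)]

# c) Each generated image is worth $0.10, split 50-50 with the agency.
per_image = split_revenue(car_model_tags, 0.10)

# d) 100,000 generations in a month -> a recurring monthly payout.
images_this_month = 100_000
monthly = {who: amount * images_this_month for who, amount in per_image.items()}
```

Each tagged contributor ends up with $0.05 / 55 ≈ $0.0009 per image, or roughly $91 for the month - and the same again every month the model keeps being used.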

547
Mat, you may have mentioned this elsewhere, but I must have missed it.
If I use Generate in photoshop to remove an item from my photograph, do I have to check the AI box?
Thanks

Generally, no. You would not need to identify a photograph that you used Firefly to remove an item from. Please review the full policy here for clarity: https://helpx.adobe.com/stock/contributor/help/generative-ai-content.html

Thanks for the question,

Mat Hayward

Mat - do you have a transcript of the video? If not, could you please have one put together? I'd be interested in quickly reading it to see what points are of interest. Thanks.

548
I suspect it just saves metadata in the file (especially if it is a PNG file). Curious - if you save it as a bitmap, then resave as JPG/PNG, does the tool still detect "ai" content?
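That suspicion is easy to test crudely. Below is a naive Python sketch - not a real C2PA parser (the CAI publishes proper open-source tooling for that) - which just scans a file's raw bytes for marker strings that commonly appear when Content Credentials are embedded. A bitmap round-trip would strip metadata blocks like these, though detection could in principle also rely on invisible watermarks that survive re-saving:

```python
# Naive, illustrative check for embedded Content Credentials metadata.
# A real check should use a proper C2PA parser; this only looks for
# byte markers that commonly appear when a manifest is embedded.
def has_c2pa_markers(path):
    with open(path, "rb") as f:
        data = f.read()
    markers = (b"c2pa", b"jumb", b"contentauth")
    return any(marker in data for marker in markers)
```

On a file round-tripped through a raw bitmap and re-encoded, a byte-marker check like this would come back empty, since the metadata segments are discarded along the way.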

Mat, you may have mentioned this elsewhere, but I must have missed it.
If I use Generate in photoshop to remove an item from my photograph, do I have to check the AI box?
Thanks

I think this is Mat's most recent statement on Adobe Stock's rules for what counts as genAI

https://www.microstockgroup.com/fotolia-com/announcing-adobe-firefly-a-new-family-of-creative-generative-ai-models/msg592314/#msg592314

If you submit to any other agency that forbids AI uploads, you should be aware that using generative fill for anything will automatically tag the JPEG as having been modified with an AI tool - see the report CAI's Verify tool made on a test file of mine. It may get the image rejected everywhere else but Adobe Stock.

I used generative fill to remove a thumb from the sky area, saved and then made a JPEG. You don't get a choice about CAI tagging if a generative AI feature is used

549
General Stock Discussion / Re: Stock Footage YouTube Video
« on: October 21, 2023, 10:24 »
What's up y'all!  I recently made a video touching on some of my experience creating stock footage since 2015.  I've been focusing more on YouTube recently, but this is my first video about stock footage.  I plan on covering more topics within this genre... so I'd really appreciate it if you guys could take a look and let me know what you think.  Thanks!

https://youtu.be/oe_aIt1qul4

Interesting - what kind of return have you seen on shots where you include yourself versus just generic landscape shots? I've never really included myself in any of the shots.

550
You are missing a HUGE thing.

There are the terms "you" think buyers will use to find an image, and then there are the terms buyers "actually" use to find an image.

Based on the description you provided of your image, it sounds potentially highly relevant. I.e., a buyer might search for something like "animal background" - aka "no animals" - because they want to put something on it ("come to the animal fair! explore more animals!", etc.). If they are adding it in, (a) it is probably automated, and (b) in this case, it actually sounds potentially highly relevant.
