MicrostockGroup Sponsors
This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.
Messages - SuperPhoto
Pages: 1 ... 18 19 20 21 22 [23] 24 25 26 27 28 ... 47
551
« on: October 12, 2023, 16:14 »
Topaz actually DOES inject metadata into the files. They seem to do it randomly, so some files might have it and others might not - a bit of a dishonest tactic. You need to select your files (under Windows), right-click → Properties, then remove the "Title" field (where they usually put a compression-method string tied to Topaz), and I think the "Program name" field as well (two different fields).
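As a hedged illustration of doing the same thing programmatically instead of via the Windows Properties dialog: the "Title" and "Program name" fields live in a JPEG's APP1 (EXIF/XMP) segments, so a minimal sketch can simply drop those segments. This is a toy parser for illustration only - a dedicated tool like exiftool is far more robust.

```python
def strip_app1(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with its APP1 (EXIF/XMP) segments removed."""
    out = bytearray(jpeg_bytes[:2])          # keep SOI marker (FF D8)
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                   # SOS: copy the rest verbatim
            out += jpeg_bytes[i:]
            return bytes(out)
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        if marker != 0xE1:                   # drop APP1, keep everything else
            out += segment
        i += 2 + length
    return bytes(out)
```

Note the re-saved file loses any useful EXIF too (camera model, etc.), since the whole APP1 block goes.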
552
« on: October 06, 2023, 15:18 »
-------
553
« on: October 05, 2023, 10:19 »
As I have already said and repeated, if the existence of AIs depended on me, I would never have allowed them.
I am not in favor of AI, I never have been and never will be; don't confuse the fact that I try to do a job I've been assigned well with my opinion on AI.
Just to be clear, I am 100% against AI.
I just try to adapt to the situation. I understand that the genie is now out of the bottle and no one can do anything about it anymore, so it's better to try to do what needs to be done.
It is better to try to create content that AI cannot do, or to try to exploit this new technology to your advantage; continuing to fight something that cannot be stopped is useless.
"Keep your friends close, and your enemies even closer," or "if you can't beat them, team up with them."
Regarding the photos of bananas: they are not only photos of bananas but also photos of environments, such as dining rooms or kitchens and much more, and I am sure I will sell many of these photos, even if AI can create them too.
@Her Ugliness: microstock is my main source of income.
Okay, so now I think "how will this affect my means of living in 5, 10 or 20 years?" And so what? What do I gain by thinking about this?
There is nothing left to do - can you understand that or not? It's over, it's gone!
Continuing to think about how to stop something that cannot be stopped is useless; the sooner you understand that, the sooner you can start working to tackle the problem and take advantage of the situation as much as possible.
Even Darwin, back in the 1800s, understood that only those who adapt survive!
a) Your actions speak MUCH louder than words. In fact, SO loud, no one can hear what you are saying.
You - by your actions - are actually 100% for "ai": chasing the few bread crumbs that might be tossed your way - so you can eat off the floor - after the entity you are turning into a master finishes his bread at the big table. It's a very short-sighted/myopic view of things - instant gratification - you want things "NOW". You don't realize that by participating you are using a shovel to dig your own hole to jump into...
If NO one "participated", there would be no "AI" to "create", because it wouldn't have anything to "create" (in reality, "steal") from...
Re: "darwin" - lol - "he" is not who you think he is/was... (including his "theory" of evolution, lol, extremely funny)... the extremely coordinated "schooling" system was designed to teach you to think a certain way, and not to question other things.
b) Now - since you seem super intent on getting a few paltry crumbs thrown your way - you should at least request recurring crumbs - perpetual income - instead of a one-time thing.
You're really funny!
But are you really still at this? You still can't accept that times have changed?
Look, I don't care what you think, and it doesn't concern me, but if I can give you a suggestion: it's better that you start thinking differently, because your way of reasoning can't bring you anything good.
Then again, I don't understand all this concern about AI.
Would it have been better if they didn't exist? Yes, in my opinion, yes.
But in any case they do not represent the end of microstock; they will certainly have a certain impact, but it will certainly not be the end of microstock.
Yesterday I was reading Steven's blog, BackyardSilver, and I read an interesting article called "Artificial Intelligence the end for Stock Photographers?" Read it, and maybe you will start to see the situation from a better perspective, because there is nothing worse than losing hope, especially in a job like microstock - but in life in general I think it's like that.
You do care what I think, because you took the time to reply - which, thank you. And yes, my reasoning does bring good. I never said it (microstock) would be "the end", because no - it is not. However, yes, I agree that, based on your actions, you don't yet understand the concern about "AI". (Which, again, isn't real "ai" - it's not actually 'creating' anything nor 'reasoning'; it's simply sophisticated theft and pattern re-arrangement.) An analogy would be a thief breaking into someone's home and stealing a bunch of items, and on the way out dropping some money (let's say $300 USD). As the thief runs away, you pick it up and pocket it, patting yourself on the back while admiring the banana you are eating - proud that you pocketed some extra cash from the spoils of the theft - because, as you reason, "well, it happened anyway, so might as well take the money and run". It hurts the person the thief stole from. It hurts the people in the neighbourhood, because they are now a bit fearful/uncertain. Sure, you benefit in the short term, but at what cost? Things do change, and these "AI" tools are available now. However, you - like many others - actually DO have a say in the direction they take. You can either choose good, or not good. You can just munch on your banana, happy that you have a full belly and a pocket full of cash, and "not care" because YOUR belly is full of bananas - until the thief hits your home, and then you start crying, and you run out of bananas, and wonder what you are going to eat the next day because you pooped out all your bananas from the previous day.
OR - you can do good, and have a say in shaping the direction of the "AI" tools: say no, you don't want the short-sighted view of things, but rather to be compensated fairly for your work (which helps others be compensated fairly for theirs) - with one example being RECURRING, perpetual payments. Then you can eat ALL the bananas you want, for the rest of your life, because now you - and others - are fairly compensated. You, like others, do have a say in the direction "ai" tools take.
554
« on: October 05, 2023, 08:13 »
We are committing $200 million over the next 3 years to AI and Creator royalties 
Hi Danny - I appreciate that you are participating in this forum. Since the "AI" tools are designed to create perpetual income for the corporation (and most are approaching it from a greedy standpoint, aka they want it "all")... why not compensate contributors in the same recurring, perpetual-income format? I.e., micropayments for ANY/ALL contributor images used in the creation of an "ai" image? Much fairer, and a better long-term strategy than the myopic view some corporations are currently taking.
555
« on: October 05, 2023, 08:01 »
As I have already said and repeated, if the existence of AIs depended on me, I would never have allowed them.
I am not in favor of AI, I never have been and never will be; don't confuse the fact that I try to do a job I've been assigned well with my opinion on AI.
Just to be clear, I am 100% against AI.
I just try to adapt to the situation. I understand that the genie is now out of the bottle and no one can do anything about it anymore, so it's better to try to do what needs to be done.
It is better to try to create content that AI cannot do, or to try to exploit this new technology to your advantage; continuing to fight something that cannot be stopped is useless.
"Keep your friends close, and your enemies even closer," or "if you can't beat them, team up with them."
Regarding the photos of bananas: they are not only photos of bananas but also photos of environments, such as dining rooms or kitchens and much more, and I am sure I will sell many of these photos, even if AI can create them too.
@Her Ugliness: microstock is my main source of income.
Okay, so now I think "how will this affect my means of living in 5, 10 or 20 years?" And so what? What do I gain by thinking about this?
There is nothing left to do - can you understand that or not? It's over, it's gone!
Continuing to think about how to stop something that cannot be stopped is useless; the sooner you understand that, the sooner you can start working to tackle the problem and take advantage of the situation as much as possible.
Even Darwin, back in the 1800s, understood that only those who adapt survive!
a) Your actions speak MUCH louder than words. In fact, SO loud that no one can hear what you are saying. You - by your actions - are actually 100% for "ai": chasing the few bread crumbs that might be tossed your way - so you can eat off the floor - after the entity you are turning into a master finishes his bread at the big table. It's a very short-sighted/myopic view of things - instant gratification - you want things "NOW". You don't realize that by participating you are using a shovel to dig your own hole to jump into... If NO one "participated", there would be no "AI" to "create", because it wouldn't have anything to "create" (in reality, "steal") from... Re: "darwin" - lol - "he" is not who you think he is/was... (including his "theory" of evolution, lol, extremely funny)... the extremely coordinated "schooling" system was designed to teach you to think a certain way, and not to question other things.
b) Now - since you seem super intent on getting a few paltry crumbs thrown your way - you should at least request recurring crumbs - perpetual income - instead of a one-time thing.
556
« on: October 02, 2023, 20:16 »
Once again, you neither understand how these generators work, nor the massive programming involved. Have you worked on such huge projects?
If everyone can opt out at any time, the training would have to be continuous, and there's no indication the original images used would still be available - where are those billions going to be found, and how would they be able to identify your work?
But again, you don't understand how these work; once trained, there is NO way to trace back to the original training set.
What AIs do, they do from our work. From my point of view it is a much more sophisticated form of copyright infringement, but in essence it is the same.
Our work (copyrighted material) was used to produce something new, and that "something new" is sold without giving us royalties.
The semantics of "training" an AI and the definition of "training a machine", which must be expressed in thousands of words, do not change the essence of what is being done.
You are correct. It is basically theft, no matter how "they" try to justify it. Especially when some try to "appear" noble by simply "stealing" first, THEN providing an opt-out, THEN saying "oh soz, since we paid you, well... we already gave the stuff away to another company, SORREEEE!!! hee hee. um, but you can opt out now if you like! tee hee tsk tsk!". Totally shady, dishonest tactic. ANYWAYS... the FACT is, it IS indeed possible to RETROACTIVELY PAY EVERY SINGLE CONTRIBUTOR whose image was stolen. The algorithm is quite simple, basically:
a) Most have extensive server logs/cached documents. They would simply 're-scrape' their cached documents, find the author, then reach out to the author to properly attribute them and compensate for the theft.
b) For those that "accidentally lose" those cached files - simply re-scrape the original data set.
For the actual 'diffusion model' etc.:
c) It would require some re-coding, but it IS possible programmatically to attribute EVERY SINGLE SOURCE image to its author.
d) When an image is generated (INCLUDING RETROACTIVE CALCULATIONS), it IS actually programmatically possible to figure out "which" authors' images were used to create the "composite" image.
e) It IS also possible to do massive micropayment calculations on a PERPETUAL AND RECURRING BASIS to PROPERLY COMPENSATE for IMAGES (AND VIDEOS) - BOTH retroactively for the stolen images used to make the "ai tools", and going forward - AND to have an opt-in/opt-out procedure that is updated DAILY if an author does not like the terms.
Now it is simply a matter of doing it.
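Steps (a)-(b) above can be sketched as a toy tally of what would be owed per author from archived scrape records. The record layout, names, and rate are hypothetical, invented purely for illustration - no company's actual logs look like this:

```python
from collections import Counter

def owed_per_author(cached_records, rate_per_image):
    """Walk archived scrape records and tally retroactive
    compensation owed to each author (toy model of steps a-b)."""
    counts = Counter(rec["author"] for rec in cached_records)
    return {author: n * rate_per_image for author, n in counts.items()}

records = [  # hypothetical cached scrape log
    {"url": "http://example.com/img1.jpg", "author": "alice"},
    {"url": "http://example.com/img2.jpg", "author": "bob"},
    {"url": "http://example.com/img3.jpg", "author": "alice"},
]
print(owed_per_author(records, rate_per_image=0.02))
```

The hard part in practice would be mapping a scraped URL back to a reachable author, not the tally itself.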
557
« on: October 02, 2023, 09:39 »
Feedback-wise: it is taking advantage of people, when they "may or may not get paid", to have them design a product that tries to permanently put them out of business so they never get paid again. No.
I'm curious - please enlighten us with your wisdom. What should Adobe do, in your opinion? Should they no longer accept AI content? Should they not integrate Firefly into the software? Should they maybe start a crusade against AI?
But it's obvious that they can't do that - it seems obvious to me!
Then, sorry, but I don't understand. If I put a cake on the table and you eat it, it means you like it; if you don't eat it, you don't like it. Or just because I put the cake on the table, am I accused of exploiting you?
If you don't like this cake, just don't eat it!
It's quite simple - you don't participate in it. Realize it is designed to try and permanently put you out of business, and don't participate.
558
« on: October 01, 2023, 08:14 »
Feedback-wise: it is taking advantage of people, when they "may or may not get paid", to have them design a product that tries to permanently put them out of business so they never get paid again. No.
559
« on: September 28, 2023, 14:11 »
I agree with most (though not all) of what you say. By the way, after Mat's comments I think that yes, they also need garbage shots; the only important thing is that the requested subject is clearly visible, nothing else. No lighting setup, no denoising, nothing.
c) People "may" or "may not" be compensated for their work? So it could be a complete waste of time?
Well, but this is true for any image you send to microstock agencies
Because this is being done on spec, it is basically an insult to expect people to do a SET of 500-1000 images "for free" with only the "hope" of getting paid. Again, I guess they are basing this model on the 99designs thing (where people work for free hoping to get paid) - so I guess you get what you pay for: probably a lot of crap, and they might get lucky with some good stuff, but overall mostly garbage...
560
« on: September 28, 2023, 08:51 »
Guess I will comment.
Now, for clarification - I am basing my comments on what people have said here. If I am mistaken in my understanding of the situation, please do clarify. That being said, with my current understanding of what is being offered, $30-$80 for a set of 500 images (if indeed that is the amount - someone said $0.06-$0.16/image?) is a pittance on several levels...
a) Unless you are looking for garbage shots (i.e., 500 figures of a foot with toenails just on auto-take), it's a waste of time for any professional photographer to take those kinds of shots. Yes, you will probably find people in, say, India/Ukraine/etc. where $3 USD/hr is a great wage, or someone who is hungry and for whom $50 decides whether or not they can pay rent that month - but it is still a pittance.
b) Since the images are essentially being used to try to put that person OUT of business (via an "ai" tool meant to eliminate real images even more) - even more of a pittance.
c) People "may" or "may not" be compensated for their work? So it could be a complete waste of time?
d) There are 3600 seconds in an hour. 3600/500 = 7.2 seconds per image. No post-processing/keywording/etc. - not sure if that is expected. But just taking 500 images, period - unless you are looking for garbage shots - leaves very little time to take good shots. More likely, taking 500 (good) shots is about 5-8 hours at least, so the effective rate is around $10/hour (minimum wage is higher in most countries), and on top of that there is no guarantee it would even be used. (Kind of like showing up to work and the boss says 'eh, if I FEEL like it I'll pay you after - but do the work first and we'll see'.) A very crappy (and a bit insulting) offer.
Anyways - is my understanding of what is/was being offered accurate? If so - then feedback wise, no professional photographer that values their work would go for this.
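The effective-rate arithmetic in (d) above can be checked in a couple of lines (the pay, image count, and seconds-per-shot figures are the hypothetical numbers from the post):

```python
def effective_hourly_rate(total_pay, num_images, secs_per_image):
    """Pay divided by the hours it actually takes to shoot the set."""
    hours = num_images * secs_per_image / 3600
    return total_pay / hours

# 500 shots at 7.2 s each is the only way to finish the set in one hour
print(3600 / 500)                          # 7.2 seconds per image
# 500 *good* shots at ~36 s each is about 5 hours; $50 for the set:
print(effective_hourly_rate(50, 500, 36))  # 10.0 dollars/hour
```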
561
« on: September 28, 2023, 08:46 »
Perhaps reach out to Adobe? Or mat here - and see if he can pass the feedback onto the devs?
562
« on: September 26, 2023, 12:03 »
Lol, no - they may not have been "outright" stolen - instead, they were most likely stolen via a bait-and-switch tactic ("licensing" via "shutterstock" or other similar agencies), to then steal and pay a paltry sum to contributors, to pretend they are nice and virtuous...
Unless Getty developed the images in-house - or EXPLICITLY asked contributors PRIOR to "scraping" their database - I'd say the images were indeed "stolen"; it's just playing with words to make it sound nice.
563
« on: September 26, 2023, 11:56 »
While I agree that the submission requirements do say it shouldn't have identifiable locations/people/etc. - so in this case I would say this should not have gone through according to the current specs...
However, inventing a rejection reason based on something that could be construed as potentially "offending" someone - while yes, of course it is tragic, and definitely very tragic for anyone personally affected - it would not be right to reject something because someone could potentially be offended. Otherwise, where do you stop? Slippery slope. Some people might not like pictures of cows (e.g., vegans, East Indians, etc.). Some people might not like pictures of churches, or conversely mosques (i.e., they "feel offended" by a particular religion). Some might not like political parties, political stances, etc. Some people might not like 'black' people, others might not like 'white' people, others might not like 'asian' people, and so on. So 'feeling offended' is not a good reason to reject. (Total aside - the last 3 years should have been an eye-opener for many in terms of what really happened then, as well as what really happened in WWII/etc. - people are deliberately having their emotions/thoughts manipulated - but that's an entirely different topic.)
564
« on: September 25, 2023, 17:37 »
Last quarter SS earned 80 million from selling stock and an additional 17 million (21%) from licensing stock content for ai projects.
And they kept bragging how this is their most important money project.
Has anyone seen an increase of 20% of income from ai licensing?
What about all the other agencies? istock/envato/alamy have created bria, for "ethical licensing". How much will they pay artists?
I think they have all announced that they want to do that, but I only see envato confirming that they will pay their usual 50%.
Or maybe I missed that?
The agencies will keep making new data deals all the time, which means we should all be making more money.
20% extra for data licensing would be appreciated, wouldn't it?
Yes, totally agree - and it is totally doable.
565
« on: September 25, 2023, 17:36 »
Since we have no bargaining power, what should be in an ideal world is irrelevant. We simply get a one-time payment for using our copyrighted works to train AI, but all profits from AI then go to them. Legally, it's probably legal. We weren't asked if we agreed with use or the amount of the one-time payment, but that's obviously not a legal issue. We're just screwed and it's only a matter of time before we get kicked out and fully replaced. And this also applies to those who generate AI images, it makes no difference in principle, it's just a temporary fill in for the demand for AI images.
Actually, you have an incredible amount of power, but you need to use it - everything from initially contacting the agencies politely, to discussing with a lawyer, to class action, etc. "They" would like you to think you have "no power", but the contrary is true. You just need to realize that. If you have an attitude that you've lost, then you already have - so realize that and start taking action NOW.
566
« on: September 24, 2023, 08:29 »
There were only ever two tools, as far as I know, that used actual data on what buyers searched for when buying. They have now both been retired.
All the rest are much of a muchness (based on what other people have used) so whatever you find easier. SS's one is fine.
What were the two tools?
567
« on: September 24, 2023, 08:27 »
Why not contact iStock and show them how easy it is to replace? Perhaps they will come up with a better watermark that can't simply be removed with generative fill and/or cropping.
568
« on: September 23, 2023, 08:29 »
I totally agree
Our years of work was used without our authorization to train the "system" that now leaves us without a job.
We should receive a percentage of the royalties produced by AI images (no matter how little money it was).
Yes, it is quite easy to do. The current algorithms for taking other people's images and creating data models just need to be revised, in order to properly attribute the people whose images are used for image generation. Without other people's hard work, these "ai" tools (which are NOT "ai" - it is quite annoying how that term is misused) would not exist. So yes, contributors should be properly compensated in a perpetual, recurring revenue model - the same way the agencies want a lifetime of income from doing the initial 'work'.
569
« on: September 19, 2023, 19:47 »
Interesting, thanks for sharing... now - over what period were those best sellers? I noticed you mentioned one of your pics was from 2014... does that mean these are your 'all time' best sellers since you've been doing stock?
570
« on: September 19, 2023, 19:42 »
I'm curious - for people that are seeing increases in sales, what kinds of videos are you making?
571
« on: September 18, 2023, 16:50 »
Once again, you neither understand how these generators work, nor the massive programming involved. Have you worked on such huge projects?
If everyone can opt out at any time, the training would have to be continuous, and there's no indication the original images used would still be available - where are those billions going to be found, and how would they be able to identify your work?
But again, you don't understand how these work; once trained, there is NO way to trace back to the original training set.
Once again, you are mistaken, and it sounds like you have limited programming knowledge. Is that the case? I actually do know what I am talking about, and yes, I have worked on big-dataset projects, so I know EXACTLY what I am talking about. It is NOT a "massive programming" undertaking. The "massiveness" is simply processing the data - but that is why there are now HUGE server farms, which make that a pretty simple task. (You've heard of Google, right? They regularly refresh their "search engine database" - and archive a LOT more than computer models of images.) If a company is using an "out of the box" solution (the lazy man's way of doing things), then without ANY changes, as far as I know, the out-of-the-box solutions don't have this pre-installed. HOWEVER, if you hire a couple of programmers and tag the data, then yes, you CAN do it. Answering your questions:
1. The dataset opt-out/opt-in would run at a set interval, processed in batch. (Say you had 1000 people who 'opted out' - it would not happen at once; it would be 'batched', e.g., at 8pm every day when the scraping/modelling is done, anyone 'opted in' gets processed and those 'opted out' do not.) In terms of the interval, it depends on the company's server farm and its speed (not "really" an issue long term, but it matters short term because the models are currently 'learning', and whether they are stealing 5 million images or 5 billion makes a difference). So for now, the refresh of the background modelling/dataset could run, say, once every 3 days.
2. And again, you are mistaken - you actually CAN "tag" the data so it stays associated with the processed data.
YES, it does (most likely) require revising the current algorithm, and YES, many programmers are lazy (or in some ways incompetent), so you need GOOD programmers (full-stack would probably be best, because they usually see the 'bigger' picture) - but you CAN do it. Let's take a hypothetical example (SUPER simplistic, but it illustrates the point). User A has pictures of chairs & lamps ("ID001"). User B has pictures of dogs ("ID002"). User C has landscapes ("ID003"). User D has landscapes ("ID004"). When initially 'modelling' the data, tags would be associated with what is essentially the computer model/representation of the 'idea' of a 'landscape'; likewise for dogs, chairs, etc. Then say a prompt is "dog sitting in a chair". User C's and D's input was not used, so it is not relevant. However, A's and B's "computer models" were accessed to "compose" that representation, so they would be "credited" with having composed it. Then say the prompt is "dog in landscape": now B/C/D were used, so they would likewise be credited. Say each image (for simplicity) costs $1 to compose, with a 50-50 split between the "ai tool/engine" and the contributors. In the first scenario, A & B get 25 cents each. In the second, B/C/D get about 16.7 cents each. THAT is how it would work. It's possible the companies already designed their systems to do this (for other reasons, e.g., logging, revising algorithms, tweaking "representations" of chairs, "hands", "faces", etc.). If not, the algorithm CAN be redesigned. It is simply a matter of doing it, and it would require some thinking, but it is not "massive programming". The "massiveness" is simply processing the data - again, almost an insignificant point given the MASSIVE server farms and how much cheaper it is becoming to process huge amounts of data. It IS in fact a relatively simple thing to do. It is simply a matter of DOING it.
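The worked example above can be sketched in a few lines of Python - a hedged illustration of the proposed royalty split, not any vendor's actual implementation; the $1 fee and 50-50 split are the hypothetical numbers from the post:

```python
def split_royalty(fee, contributor_share, contributors):
    """Split the contributor share of a per-image fee evenly among
    the contributors whose tagged models were used for the image."""
    pool = fee * contributor_share
    per_head = pool / len(contributors)
    return {c: per_head for c in contributors}

# "dog sitting in a chair": models from users A and B were used
print(split_royalty(1.00, 0.5, ["A", "B"]))       # 0.25 each
# "dog in landscape": models from users B, C and D were used
print(split_royalty(1.00, 0.5, ["B", "C", "D"]))  # ~0.1667 each
```

A real system would weight contributions rather than splitting evenly, but the bookkeeping shape is the same.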
572
« on: September 18, 2023, 08:26 »
For "ai training" - obviously companies are basically trying to make more $$$ by essentially stealing other people's work to do so.
A new system should be set up (and contributors should be very vocal about this - that means YOU, the person reading this) in which:
a) Obviously opt-in/opt-out - including RETROACTIVE opting in/out. (YES, it IS possible. It might be "inconvenient" for a company to do so and/or to recode certain algorithms, but it is VERY easy to do: basically, if someone "opted out", the company would simply "re-train" their entire dataset MINUS the individuals who chose to opt out. It has been done before; it can be done again.) This could be done in batch (say once a day for any new opt-in/opt-out requests). (This ALSO includes "ai generation" tools like Midjourney/DALL-E/etc.)
b) For data that is trained on, it IS in fact EASY to "tag" datasets to attribute specific data to individuals. In other words, if someone "creates an AI image" that references your work AT ALL, you CAN actually be compensated for that. So if 1000 artists' "data" is used to compose an image, each individual artist CAN in fact be attributed and compensated with fractional payments. It DOES require some programming/re-doing of current algorithms, but it is DEFINITELY 100% doable (despite what any company 'claims' - they may not want to do it, but it is in fact very easy and possible; it is simply a matter of doing it).
IN OTHER WORDS - let's say 1000 artists' data is used to "compose" an "AI" image. Each artist could get fractional income from the asset that was produced. It may seem "tiny" at first (which it would be), but with the millions of images being created daily, that quickly adds up.
Contributors could - and should - be attributed and compensated for their work fairly. AND it would be 100% up to the contributor WHEN/IF they choose to have their data used - AND it is possible to do this retroactively as well.
And then contributors would share in the benefit of perpetual, recurring income - which, after all, is one of the big reasons various companies are stealing people's data and 'repackaging' it in an "AI" tool: they want "perpetual recurring income" for basically doing nothing. Contributors should benefit from this as well, and again, at ANY point in time, be able to opt out/opt in, as well as CHOOSE WHICH ASSETS can be trained on.
c) The dishonest tactic some companies have employed (i.e., saying "oh, we took your data <ahem>, but um, yeah, here's a payout and now we'll 'let' you opt out") does not take them off the hook for their actions. They are still fully responsible for their actions, as well as for compensating contributors fairly. The above CAN and SHOULD be done. (And again, this includes companies like Midjourney/DALL-E/etc., which haven't even compensated contributors yet. It's funny when the people running those companies talk about 'pesky little things like watermarks'... hmm, why would there EVER be a watermark? So strange!)
Just an FYI of what is possible. Get vocal about it, and make it happen.
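The batched opt-out retraining described in (a) can be illustrated with a toy filter - purely hypothetical names and record structures, not any company's actual pipeline:

```python
def nightly_training_set(assets, opted_out):
    """Filter the training set before each scheduled re-train,
    dropping every asset whose author has opted out."""
    return [a for a in assets if a["author"] not in opted_out]

assets = [
    {"id": 1, "author": "alice"},
    {"id": 2, "author": "bob"},
    {"id": 3, "author": "alice"},
]
# bob opts out before the nightly batch run
print(nightly_training_set(assets, opted_out={"bob"}))
```

The expensive part is the re-train itself, not the filter - which is exactly the post's point that batching the requests (daily, or every few days) keeps it tractable.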
573
« on: September 14, 2023, 17:00 »
Yes, like other people said - if I need something in a series/different poses/etc.
574
« on: September 14, 2023, 16:55 »
I see your point, but I don't think it is technically possible to tie output to the specific learning material which went into the model. So in fact you would need to broadly distribute money to creators, indiscriminate of the quality and usefulness of their work. So if 100,000,000 images went into the training, the compensation would need to be split between all of them - and MidJourney doesn't even know who they are, because they scraped the internet.
From a programming standpoint, it actually is very easy/very possible. Whether or not they do it is something entirely different, but it is certainly possible. Here's how you'd do it (just one way):
a) Most companies (Midjourney included) keep EXTENSIVE server logs. They also keep "snapshots" of their databases. (I think they did this going from v4 to v5, because some artists were quite vocal and some actually - if I recall what I read correctly - got their material removed from the database.)
b) If you wanted to pay artists whose works were taken, you'd simply "re-scrape" the content and extract contact info. There's a good chance they have archives of the data they scraped, so they'd just have to process their archived data (no need to re-scrape the net).
c) Of course, maybe not "everyone" would have contact info, but enough would that one could contact them.
In terms of ongoing, perpetual compensation for using their assets:
d) Basically, it would require some tweaking of the current neural-net algorithm, such that when they create "datapoints" for images, each one includes an identifier for whose content it was. (Chances are they ALREADY have that - they just don't make it publicly known.) But it is relatively easy to do.
e) When an image is created from, say, 1000 datapoints (just using easy numbers here), each tagged artist gets a microfraction of compensation (say $0.00001). It may not seem like much, but when millions of images are generated daily, it adds up (e.g., if someone's image was used 1 million times in a day, that is $10).
f) You then compensate them. Existing and future "ai" systems (not true "ai" - just a popular term nowadays for things people have been doing for 40+ years) can incorporate this type of "tagging" for image creation, in order to properly compensate the artists whose works were used.
g) Opting out is also quite easy: you'd just tag certain assets and not include them in image creation.
It may require a bit of tweaking of existing neural-net structures, but it is EXTREMELY feasible, provided someone just DOES it - ESPECIALLY with the MILLIONS and MILLIONS in revenue being generated, most likely on a daily basis.
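The micropayment arithmetic in (e) can be sketched as follows; the $0.00001 per-use rate is the hypothetical figure from the post, and this is a toy accumulator, not a real billing system:

```python
def daily_payout(uses_per_artist, rate_per_use=0.00001):
    """Accumulate per-artist micropayments for one day's generations."""
    return {artist: uses * rate_per_use
            for artist, uses in uses_per_artist.items()}

# an artist whose tagged work was referenced 1,000,000 times in a day
payouts = daily_payout({"artist_42": 1_000_000})
print(round(payouts["artist_42"], 2))  # 10.0
```

A production system would use integer micro-units or `decimal.Decimal` rather than floats to avoid rounding drift across billions of events.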
575
« on: September 13, 2023, 15:29 »
I have an aerial image of a town shot with a wide angle lens. And in this photograph, there are several parked cars but they are very distant and impossible to read the license plate numbers. It is possible that a car expert may be able to identify some of the brands or models of the cars without zooming in though some people may struggle to do this. Would such an image be suitable for commercial usage in Adobe Stock?
There is also a brand name visible on a supermarket in the photo but it is barely visible due to the distance. I'm planning to clone the brand name out but that is probably overkill. You would be able to identify the brand name if you zoomed in to the photo.
I know in the past, stock agencies would normally accept photographs of cities etc for commercial usage if it was a wide shot (showing a city skyline etc and everything is distant.) However, these current warnings about logos etc on the AS submit page has made me extra cautious and I don't want to take any chances.
I could also add that there are some old historical B&W photographs displayed on the wall of the supermarket that are possibly in the public domain. Though once again, they are extremely distant and hard to identify as photographs. Though a local would probably be able to recognise them as photographs.
My experience is, 'generally speaking', that if they just look like "generic cars" then usually it is fine for commercial use. If, however, you could easily identify the brand (say, a row of Lamborghinis), or easily identify license plates, street names, business establishments, etc., then yes, it would become editorial footage.