

Author Topic: The end for original content creators. Adobe officially allows use of img2img  (Read 2188 times)


« Reply #25 on: May 09, 2024, 14:26 »
+1
Adobe's reaction is disgusting. It doesn't bode well if this is how they treat (imo very obvious) copyright infringement...or at the very least a violation of their own AI policy.

I'm interested in what @MatHayward has to say about this. But I doubt he can or will respond to threads like this.


« Reply #26 on: May 09, 2024, 15:07 »
+2
You really don't have to be particularly clever or knowledgeable to realise that your portfolio has been copied here. This is not about a single image.
The answer from Adobe is, to put it kindly, a cheek.

If the current Adobe rules do not allow for any consequences here, then Adobe should adapt the rules to the current reality - which, incidentally, was promoted by Adobe in particular.

I can fully understand your frustration.

« Reply #27 on: May 09, 2024, 16:33 »
+2
What will happen in this case is, in my opinion, very simple.

Sooner or later it will come to light that there are near-identical images, and Adobe clearly won't allow this. Then they will verify who the real creator is, starting simply by checking the upload dates, I think; and in this case it is even more obvious, because the thief's content is AI-generated.

I really don't think Adobe Stock can allow a library full of duplicates, so the problem will be solved anyway; it's just a matter of time.

There are duplicates, thieves, people with 10 accounts, or whatever you want to call them. They are still there because, unfortunately, the procedures for checks and any actions take a long time. Time is unfortunately necessary: blocking an account can take a month or two, simply because those procedures take time.

« Reply #28 on: May 10, 2024, 01:50 »
+1
All libraries are full of duplicates. And duplicates of duplicates of duplicates.

AI just makes copying a lot easier and faster.

One thing we can do is not use the actual prompt as the title or description.

It won't stop img2img copying, but at least it makes it a little less easy.

But even with ordinary cameras, massive copying happens every day.


« Reply #30 on: May 10, 2024, 14:36 »
+2
I appreciate the spirited discussion. Please know that we are regularly reviewing our policies and we are looking into this.

Thank you,

Mat Hayward

Mat, is this all you can say about my case?

So we must wait until you are once again reviewing your policies. Then you will add to the rules: DO NOT USE OTHER PEOPLE'S IMAGES FOR img2img. And what will that change? If it can't be proven, the rule won't make any difference.

Can you confirm that Adobe now allows using other people's images as img2img prompts? And that these two authors will continue to sell works that I believe were generated unfairly?

The only thing Adobe has suggested is that I prove the images were used as prompts. But that is impossible to do. How can I defend myself in this case? Adobe doesn't want to help me with this, but it could: Adobe could have asked the author for evidence of how they generated their AI images.

Maybe it's time to start discussing this issue openly. Not just making podcasts about how great it is to generate AI cartoon characters with seven fingers, but also podcasts, interviews, and surveys about the problem of AI images on Adobe Stock?

« Last Edit: May 10, 2024, 14:55 by Neo-Leo »

« Reply #31 on: May 10, 2024, 15:03 »
+2
I appreciate the spirited discussion. Please know that we are regularly reviewing our policies and we are looking into this.

Thank you,

Mat Hayward

It's funny - ALL the "AI" systems (which are not actually "ai") are based off of massive theft. The "problem" midjourney, stable diffusion, dall-e, etc + "research" institutions have all had is how to remove "watermarks" (i.e., theft-deterrent devices). The "AI" is simply sophisticated theft + pattern re-arrangement. (And lol - as I was just "testing" some images now based on the above - midjourney actually did generate an image with a watermark, funny!)

a) If you were to remove all content where someone based a prompt (or a photographic concept with original/non-ai photography) off of someone else's work - you'd probably need to remove 95% of the portfolios.
b) I'd say probably also 95% of accounts have at least one image based off of others' prompts/images/ideas/etc, if not more.

HOWEVER:

a) I do agree, from what was posted above, that these images were designed to look "as similar as possible" to the original portfolio. I tested some of the same images/prompts in midjourney, and while the results were similar in concept, they were distinct enough that you would not think they were the same artist. The sample above looks almost identical - such that yes, I'd say it was processed through something like img2img.

b) Generally speaking, those that tend to be pretty 'bold' in their theft (and I don't just mean this example, I mean cases where entire portfolios are simply 'downloaded' from unlimited-download sites and then re-uploaded under new names) tend to be from East Indian countries, or to have East Indian-sounding names. It's an accurate stereotype - because that is just how they "do business" there. A simple solution would be to either not approve East Indian accounts/East Indian-sounding account names - or - have a bit of a 'probation' period for new accounts from there to make sure they don't simply steal. (All you have to do is watch some of the "get rich quick" videos they make and you can see that is what they 'advise' doing.) Of course, East Indians aren't the only ones; you do get some Malaysian, Filipino, and Afghan accounts, as well as (a smaller percentage) some Ukrainian, Italian, Russian, etc. doing exactly the same thing.

c) As I've said before - the "real" theft is from companies like midjourney (+ 'chatgpt', etc, etc) whose (paid) business model is based off of theft, then disguising the theft and passing it off as original content. While the 'algorithms' to disguise the content may be novel - 'populating' those models is not. A big push should be made to hold THEM accountable - and it is actually very simple to do.

Quite simply, this is what you would do:
i) Since the data was scraped - it is quite simple to revise the scraping algorithm (if it wasn't already) to find out which authors the data was stolen from.
ii) Since sites like midjourney keep track of EVERYTHING (i.e., no generated image is actually 'deleted') - it is EXTREMELY easy to see which 'inputs' were used to create that image. AKA - see which "models" were used to create the composite image, and then see which original author images those composite models were created from.
iii) Micropayments should be attributed to all the authors (i.e., if say 100 distinct authors' works were used to create an image of an orange, then those 100 authors would each get a % of what midjourney was paid for the generation; while each share is of course just a fraction of a cent, those fractions quickly add up when millions of images are generated on a regular basis).
iv) Those micropayments would then be issued to the authors whose works were stolen.
v) Going forward, ALL so-called "AI" companies - not just midjourney of course, they just happen to be one of the most popular; there are about another 20 such "startups" - lol - interestingly enough, many of them from 'y-combinator' funding, so it's easy to see who is behind the theft - should make REGULAR, PERPETUAL micropayments (i.e., monthly) for any derivative works that were generated, since of course THEY expect to have perpetual income in the future (none of the stupid 'one-time' payment crap).
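The attribution idea in steps i) through v) can be sketched in a few lines. To be clear, this is only an illustrative model of the poster's proposal, not anything midjourney or Adobe actually implements; the fee amount, author IDs, and platform share below are all hypothetical.

```python
def attribute_micropayments(generation_fee, source_authors, platform_share=0.0):
    """Split the fee collected for one generated image evenly across the
    distinct authors whose works fed the model (hypothetical scheme)."""
    authors = set(source_authors)
    if not authors:
        return {}
    # Pool left over after the platform takes its (hypothetical) cut.
    pool = generation_fee * (1.0 - platform_share)
    per_author = pool / len(authors)
    return {author: per_author for author in authors}

# Example: 100 distinct authors behind one $0.04 generation.
payouts = attribute_micropayments(0.04, [f"author_{i}" for i in range(100)])
# Each author receives a fraction of a cent; across millions of monthly
# generations those fractions would accumulate into perpetual income.
```

An even split is the simplest choice; a real scheme would more likely weight each author by how strongly their works influenced the output, which is a much harder attribution problem.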

THAT is a real solution - and THAT is what should be pushed for. Then authors are fairly compensated for the hundreds of thousands of images that were created based off of their original works.

Going forward, original content creators should also have the ability to 'opt out' and later opt back in, as well as to specify what % revenue share they would be willing to accept for their assets to be opted in - such that models are reconstructed with or without their data. Given the MASSIVE "data centers" already created and being created, this is ALSO EXTREMELY EASY to do.
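That opt-out/opt-in idea, with each creator naming the revenue share they require, could be modeled as simply as the sketch below. Everything here is invented for illustration - the record fields, percentages, and author IDs are assumptions, not any platform's actual consent system.

```python
from dataclasses import dataclass

@dataclass
class AuthorConsent:
    author_id: str
    opted_in: bool
    asked_share_pct: float  # revenue share the author requires, e.g. 0.5 (%)

def usable_for_training(consent: AuthorConsent, offered_share_pct: float) -> bool:
    """An asset enters the next model rebuild only if its author opted in
    and the platform's offered revenue share meets the author's ask."""
    return consent.opted_in and offered_share_pct >= consent.asked_share_pct

# Rebuilding the model then simply filters the corpus by consent:
corpus = [AuthorConsent("a1", True, 0.5), AuthorConsent("a2", False, 0.1)]
training_set = [c.author_id for c in corpus if usable_for_training(c, 0.5)]
# training_set contains only "a1": a2 never opted in.
```

The point of keeping the check this simple is that re-running it over the whole corpus before each model rebuild is cheap, so opting out (or back in) takes effect at the next rebuild without any per-image negotiation.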

The companies may not "want" to do that - but that is what they should do - and what people should be pushing for. Push for THAT - get it, and then that will resolve a lot of issues and concerns.
« Last Edit: May 10, 2024, 15:08 by SuperPhoto »

« Reply #32 on: May 10, 2024, 18:02 »
+2
An opt in or at least opt out should be the norm for use of our images in AI/to train AI. 

« Reply #33 on: May 10, 2024, 18:18 »
+1
An opt in or at least opt out should be the norm for use of our images in AI/to train AI.

True. However - how those "ai" tools were developed was just theft. They just 'took' it from the sites (it's called 'scraping') without asking. Then, they worked hard on figuring out how to get rid of watermarks.

« Reply #34 on: May 11, 2024, 05:10 »
0
I appreciate the spirited discussion. Please know that we are regularly reviewing our policies and we are looking into this.

Thank you,

Mat Hayward

Mat, is this all you can say about my case?

So we must wait until you are once again reviewing your policies. Then you will add to the rules: DO NOT USE OTHER PEOPLE'S IMAGES FOR img2img. And what will that change? If it can't be proven, the rule won't make any difference.

Can you confirm that Adobe now allows using other people's images as img2img prompts? And that these two authors will continue to sell works that I believe were generated unfairly?

The only thing Adobe has suggested is that I prove the images were used as prompts. But that is impossible to do. How can I defend myself in this case? Adobe doesn't want to help me with this, but it could: Adobe could have asked the author for evidence of how they generated their AI images.

Maybe it's time to start discussing this issue openly. Not just making podcasts about how great it is to generate AI cartoon characters with seven fingers, but also podcasts, interviews, and surveys about the problem of AI images on Adobe Stock?

I understand your frustration, but you were just told that the policies will be reviewed and that they are investigating the problem, so I think perhaps you should be more than satisfied?

The problem will be solved, and most likely you won't be asked for any proof of anything.

Of course there are problems, and there will be many more, but what matters is the will to solve them.

Let's not forget that until last year AI content was not yet accepted, so this is a new path for Adobe too, and with time everything will certainly be regulated in a better way.

The important thing is the will to do it, and Adobe clearly has that will.


« Reply #35 on: May 11, 2024, 05:56 »
0
I understand your frustration,...

Talking about frustration seems quite contemptuous to me. This is about notorious incompetence on Adobe's part: an inability to master the tools they develop, and total injustice for the people impacted by their catastrophic management of these technologies.
And we could also talk about the customer side: the collection is becoming a rather questionable mix.

« Reply #36 on: May 11, 2024, 07:31 »
+2
Earlier today someone posted on Discord a bad AI image that they had come across (bad in terms of both the image itself and the metadata) and Diego Gomez from Adobe responded that the image has already been deleted but if anyone finds others like this "please send us the details through the Contact Support link https://contributor.stock.adobe.com/en/contact, so we can review it." It's good to know that this avenue is open for reporting to Adobe.

« Reply #37 on: May 12, 2024, 16:19 »
0
Adobe Is The Worst!!! I had one single image where I used an element of another image that was clear of copyright, and they deleted my whole folder, including all of my video!!! I had a handful of images and would have been happy to delete them all, but no - no response, just evil... FACT

« Reply #38 on: May 12, 2024, 18:41 »
0
Adobe Is The Worst!!! I had one single image where I used an element of another image that was clear of copyright, and they deleted my whole folder, including all of my video!!! I had a handful of images and would have been happy to delete them all, but no - no response, just evil... FACT
So it's possible for Adobe to take swift action? Why wasn't that done in Neo-Leo's case then, when it was done in yours? Seems subjective. Do you know who at Adobe decides whether it is infringement or not?

« Reply #39 on: May 12, 2024, 19:01 »
+2
An opt in or at least opt out should be the norm for use of our images in AI/to train AI.
How many companies would you expect to opt out? All of them? That will never happen. Copyright watermarks are easy to fix with Adobe Photoshop. And then how would numerous AI companies police whether it's a personal image or someone else's image being used as an img2img reference? (The majority of people use AI to create selfies in different styles.)
That's not the right direction for requesting stronger infringement policies.

Adobe should train more knowledgeable customer support staff who will not just look at individual images and say "it's 50% different, with no clear parts of copyrighted images", but who can look at a whole portfolio and determine whether it's an intentional infringement.

Other agencies might not go the extra mile in dealing with infringement, but Adobe definitely should: Adobe serves the creative community, and reputation matters.
« Last Edit: May 12, 2024, 19:04 by Mifornia »


 

