Author Topic: Adobe Stock needs a visible label on genAI images, like Editorial and Premium


« on: July 26, 2023, 15:48 »
+12
Adobe Stock has written rules saying that genAI images should not claim to depict a real place:

"Dont: Describe AI-generated content as depicting real people or places."

There are already many thousands of photo-like images in the collection supposedly showing real towns or landmarks. It's really unhelpful to customers who do the default search (which includes genAI images) to have no clue from the results that what purports to be Memphis, Fresno, London, the Eiffel Tower, Yellowstone, Austin, TX, etc. isn't really those places.

The existing model of overlays on Editorial and Premium images (lower left of the thumbnail) would work well, IMO, and would alert buyers who don't even realize there is now AI content at Adobe Stock. They could then exclude genAI images from searches where it matters that the place they're searching for is depicted as it actually exists.
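If Adobe exposed the same thing through its Search API, a buyer's tooling could strip genAI out of results with a single filter parameter. A rough Python sketch of the idea - note that the "gentech" filter name and value are my guess at how such a flag might look, so check the current API reference before relying on any of this:

Code:
import requests

# Adobe Stock Search API endpoint (see Adobe's developer documentation).
SEARCH_URL = "https://stock.adobe.io/Rest/Media/1/Search/Files"

def search_excluding_genai(query: str, api_key: str) -> list[dict]:
    # NOTE: the "gentech" filter name/value below is my assumption from
    # the website's URL parameters - verify against the current API docs.
    headers = {
        "x-api-key": api_key,             # key from the Adobe developer console
        "x-product": "MySearchTool/1.0",  # any product identifier string
    }
    params = {
        "search_parameters[words]": query,
        "search_parameters[filters][gentech]": "false",  # assumed: exclude genAI
        "search_parameters[limit]": 20,
    }
    resp = requests.get(SEARCH_URL, headers=headers, params=params)
    resp.raise_for_status()
    return resp.json().get("files", [])

# Example: results = search_excluding_genai("cliffs of moher", "YOUR_API_KEY")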

I did an example for a search for "cliffs of moher", which has a lot of recent AI uploads that could not be used if you were doing tourist promotion for that area of Ireland.



I started thinking about this when I saw an AI image labeled as Windansea Beach in California, and it clearly wasn't. I've been there.

Then I realized the description looked familiar and looked at one of my images of that area. It was copied verbatim by the AI uploader. The same thing had happened a few months back with a very different image of mine. Here are the pairs of images - it's not hard to guess which is the real one and which is AI :)



I have no skin in this game - my images will continue to sell as long as the photo-realistic AI images of specific places are so useless - but from a buyer's perspective, if you want Tower Bridge in Sacramento, the genAI versions are 100% useless and need to be clearly marked so the unwary buyer doesn't make an a$$ of themselves with Adobe Stock's help.

Although I do have some skin in the game - I don't want buyers walking away from Adobe Stock because they no longer feel safe licensing images there.
« Last Edit: July 26, 2023, 18:07 by Jo Ann Snover »


« Reply #1 on: July 27, 2023, 01:11 »
+4
All AI material needs an extra watermark or caption.

« Reply #2 on: July 27, 2023, 03:14 »
0
Quote
"Dont: Describe AI-generated content as depicting real people or places."

Oh yes, so true. At Adobe Stock, when I search for a specific place shown in some of my images, I see dozens of AI images that have simply copied my entire descriptions and keywords.
We already have to fight against stolen images because agencies don't do their job, and now we can do nothing about AI images that steal our "concept", descriptions, and keywords.
Without a union, we are slowly eaten by the sharks.

« Reply #3 on: July 27, 2023, 03:29 »
0
Stealing descriptions and keywords is not new. Especially contributors who are not native English speakers do that all the time.

But our long and good descriptions are now being used as prompts to copy our files.

I am now experimenting with much shorter descriptions so as not to make it that easy.

Overall I also want to do a lot more editorial.

« Reply #4 on: July 27, 2023, 07:47 »
+4
As a recent example (yesterday) of how much difficulty a stock image/video customer can get into when they use content from the wrong places...

https://www.theguardian.com/uk-news/2023/jul/26/yorkshire-water-ad-ridiculed-over-clips-of-herefordshire-and-russian-bar

This was stock video, not AI images, but the goof could just as easily have occurred with genAI Adobe Stock images of "Yorkshire".

"The advert for Yorkshire Water made what appeared to be Yorkshire look wonderful: beautiful, sweeping countryside and smiling, friendly local people, some in a car and others enjoying their downtime in a pub.

But the countryside was not the Yorkshire Dales but the Malvern Hills. The car was left-hand drive and in Ukraine. The chances of getting a pint of Landlord from the pub would seem remote, given it was a bar called Eskimos located a couple of thousand miles away in a Russian ski resort near the Black Sea."


At Adobe Stock, the unsuspecting buyer could license supposed scenic views of the Yorkshire Dales, drone aerials of Whitby Harbour or Leeds, Sheffield town hall, a footbridge over the river Aire, Clifford's Tower in York, and many others. None of these are real, and they would likely inspire the same mockery the Yorkshire Water ad did.

The problem with real places isn't just a wonky Big Ben or the Eiffel Tower moving around Paris; it's all sorts of smaller cities and landscapes all over Europe and the US that are labeled as real but aren't. Fake drone and aerial footage surprised me - with Google Maps satellite view it's so easy to see how wrong these genAI creations are.

Here's just one example of a part of Devon, UK - the Salcombe-Kingsbridge estuary. A search with the "Relevance" sort shows a number of real pictures, but the second item in the list is a genAI effort that is wrong in just about every respect.

Real Salcombe estuary

https://stock.adobe.com/images/aerial-view-of-salcombe-and-kingsbridge-estuary-from-a-drone-south-hams-devon-england/585530226
https://stock.adobe.com/images/aerial-vista-of-salcombe-and-the-kingsbridge-estuary-south-hams-devon-england/484221969
https://stock.adobe.com/images/salcombe-devon/130326993

genAI's imaginary Salcombe

https://stock.adobe.com/images/drone-footage-of-the-kingsbridge-and-salcombe-estuaries-in-devon-england-s-south-hams-generative-ai/580334780

These not-real-places images need to be labeled so the buyer doesn't find themselves in the mess Yorkshire Water did.

« Reply #5 on: July 27, 2023, 10:52 »
+2
I'm wondering if vast quantities of keyword and description spamming are coming home to roost, to an extent.

Playing with Firefly to generate actual, real underwater fish and animals, I get the wrong thing pretty much 100% of the time.
If AI has learnt from the atrocious keyword spamming, it'll be funny at least.

« Reply #6 on: July 28, 2023, 04:43 »
+1
Quote
These not-real-places images need to be labeled so the buyer doesn't find themselves in the mess Yorkshire Water did.

Well said. It's ridiculous how these uploaders are allowed to use specific locations and landmarks to describe their fake images. It's very misleading; they could have said 'inspired by', 'based on', or 'similar to' if they really wanted to mention a specific location.

I agree it should be clearly labeled.

« Reply #7 on: July 31, 2023, 13:20 »
+4
Another example of why (IMO) it is critical to watermark genAI images - content that claims to be a specific species of bird or animal, in some cases with wording in the title saying "Taken with a professional camera and lens", which is 100% false.

I'm not an expert, but I did a Google search for images of the species named (and there are lots of examples of this in Adobe Stock's genAI section), and the genAI content looks nothing like the actual photos. In some cases there's also the digits problem - eagles with 5 talons (it's 3 front-facing and one rear-facing per foot; I looked it up).

Here's a very small set of examples to show what I mean. A buyer would do the default search, which includes genAI images, and potentially not realize the critters are fake. In time, who'd shop for stock images at a place where you have no idea what you're getting? Not every customer can become an expert in everything to know exactly how things should look - that's the agency's job in screening contributor content, especially with photo-realistic genAI.

« Reply #8 on: July 31, 2023, 13:37 »
0
The problem is not new with stock photos. We had various scandals where companies or even cities advertised or put out articles about local events but illustrated them with stock photos from Russia or elsewhere on the globe.

Unless it is editorial, people always have to be careful.

« Reply #9 on: July 31, 2023, 23:29 »
+4
Ethically there is no way the default search for images should contain AI.

It should be a setting a user has to explicitly switch on, acknowledging a disclaimer about accuracy the first time.
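As a sketch of that logic (all the names here are hypothetical, just to illustrate the idea): genAI results stay out of searches until the user both flips the setting and acknowledges the disclaimer once.

Code:
from dataclasses import dataclass

@dataclass
class SearchPrefs:
    # Both default to off: genAI results are opt-in, never opt-out.
    include_genai: bool = False
    genai_disclaimer_acknowledged: bool = False

def enable_genai_results(prefs: SearchPrefs, accepted_disclaimer: bool) -> None:
    # Turn on genAI results only after a one-time accuracy disclaimer.
    if not prefs.genai_disclaimer_acknowledged:
        if not accepted_disclaimer:
            raise PermissionError("genAI accuracy disclaimer not acknowledged")
        prefs.genai_disclaimer_acknowledged = True
    prefs.include_genai = True

def filter_results(results: list[dict], prefs: SearchPrefs) -> list[dict]:
    # Drop genAI items unless the user explicitly opted in.
    if prefs.include_genai:
        return results
    return [r for r in results if not r.get("is_genai", False)]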

At the current rate they're just polluting their data set and absolutely killing genuine images.

« Reply #10 on: August 01, 2023, 01:33 »
0
Then I realized the description looked familiar and looked at one of my images of that area. It was copied verbatim by the AI uploader. The same thing had happened a few months back with a very different image of mine.

The same thing happened to me in the past on DT. Another contributor had copied a large portion of my description exactly, word for word. And yes, I suspected this was someone whose first language was not English. Regardless, I wasn't happy.

And yes, something should be done about this mess with AI images and the potential issues buyers face. I agree that AI images should be specifically labelled as such and shouldn't appear in general searches.

« Reply #11 on: August 01, 2023, 07:35 »
+3
The problem is not new with stock photos. ...Unless it is editorial, people always have to be careful.

It's true that bad keywords are completely ignored by agencies and that inaccurate descriptions have resulted in trouble for buyers. More often, though, it's careless staff not paying attention to descriptions and choosing content based solely on looks. At its very worst, it was at least a photo of somewhere on planet Earth, even if the keywords said Bahamas, Aruba, Guadeloupe, Jamaica, St Thomas...

Comparing problems with photographs to the dumpster fire of photo-realistic (ish) genAI is like comparing a gentle stream to a raging flash flood - the difference in degree is so great it's really a difference in kind.

In six months the genAI collection has ballooned and is full of things that aren't what they claim to be.

« Reply #12 on: August 13, 2023, 15:33 »
+3
Back in May, Google announced a feature "coming soon" that would provide more information about images in searches, including whether an image was AI generated.

https://techcrunch.com/2023/05/10/google-introduces-new-features-to-help-identify-ai-images-in-search-and-elsewhere/

The example shown halfway down is of a Midjourney image whose "about" text said "Image self-labeled as AI generated". The article noted:

"Google says several publishers are already on board to adopt this feature, including Midjourney, Shutterstock and others."

It didn't say Adobe Stock, but based on a search I did this afternoon, (a) it needs to include all the stock agencies, and (b) the feature is needed now and isn't there yet (Google didn't say when it would ship, but that article was 3 months ago).
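The self-labeling Google describes rides on metadata: the IPTC standard marks AI-generated media by setting DigitalSourceType to "trainedAlgorithmicMedia". Here's a crude Python sketch of checking a downloaded file for that label - a real tool should parse the XMP packet properly (e.g., with exiftool), and of course the label only helps if the generator wrote it and nobody stripped it:

Code:
# Crude check for the IPTC self-label that Google's feature relies on.
# AI-generated media should carry DigitalSourceType =
# "trainedAlgorithmicMedia" in its embedded XMP metadata.

AI_LABEL = b"trainedAlgorithmicMedia"

def is_self_labeled_genai(path: str) -> bool:
    # Scans the raw bytes for the XMP value; a real tool should parse
    # the XMP packet properly (e.g. with exiftool).
    with open(path, "rb") as f:
        return AI_LABEL in f.read()

# Example: prints False if the generator never wrote the label
# or someone stripped the metadata.
# print(is_self_labeled_genai("download.jpg"))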

I saw a new genAI image supposedly of "Colorful morning scene of Sardinia, Italy, Europe. Fantastic sunrise on Capo San Marco Lighthouse on Del Sinis peninsula"



I did a Google search in another window to see how close the AI image came to the real thing (even though it also went on my list of genAI images claiming to be of real places, which Adobe says not to do). I was horrified to see the image results page included genAI images from Adobe Stock and Pixta as well as photographs of the real thing (for the moment, Wikipedia and the photos on Google Maps will have to be the reference).

There is nothing that identifies these images as AI generated, and there must be - from Google or Adobe Stock or both. I redid the search in an incognito window to be sure I was getting clean results. See below.



I think Adobe Stock should enforce its rule against describing genAI content as real places or people. I also believe that Google search results urgently need to mark AI images - they realize the need, but AI generation is moving faster than they are.

Searches will be next to useless if the pretend content is indistinguishable from the real.


« Reply #13 on: August 14, 2023, 02:23 »
+3

Searches will be next to useless if the pretend content is indistinguishable from the real

It's impossible to do, as there is absolutely no automated way to distinguish AI images from real photos. Last month I read an article about how OpenAI discontinued their AI text detection tool because it was only able to detect AI-generated text with a reliability of around 40% - which made the tool completely useless. With just 40% reliability, you would have a better chance of detecting AI content if you just guessed. I think it is the same for AI images.
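To put numbers on that: on a balanced test set, a coin flip labels 50% of items correctly, so a detector that is right only 40% of the time really is worse than guessing. A toy Python illustration (the detector here is hypothetical):

Code:
import random

random.seed(0)
n = 10_000

# Balanced test set: half AI-generated (True), half real (False).
truth = [i < n // 2 for i in range(n)]
random.shuffle(truth)

# A hypothetical detector that labels correctly only 40% of the time,
# versus a pure coin flip.
detector = [t if random.random() < 0.40 else not t for t in truth]
coin_flip = [random.random() < 0.5 for _ in truth]

def accuracy(pred):
    return sum(p == t for p, t in zip(pred, truth)) / n

print(f"40%-reliable detector: {accuracy(detector):.1%}")  # ~40%
print(f"random guessing:       {accuracy(coin_flip):.1%}")  # ~50%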

Unless people voluntarily label all their AI content - everywhere on the internet - as AI content, we will never be able to reliably distinguish real photos from AI images in the future.

There should have been safeguards in place from the start. For example, a rule requiring every company that offers AI content creation - text, images, music, videos - to maintain a database of ALL content created with their tools. Then any content on the internet could have been checked against these databases, similar to how Google's reverse image search works, and if it came up in a database you would know for sure it was AI generated. But that ship has sailed: the internet is already full of millions, probably billions, of pieces of unlabeled AI-generated content. And of course every country in the world would have to enforce these rules, which would have been a struggle on its own - though considering how easy it has become to create deepfake videos and how much damage you can do with them, it should have been in the interest of every country's government as well.
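For what it's worth, that registry could have been as simple as providers logging a perceptual fingerprint of everything they generate, which anyone could then check a downloaded image against - much like reverse image search. A hypothetical sketch using the imagehash library for the fingerprint (the registry itself and both functions are invented for illustration):

Code:
# pip install imagehash pillow
from PIL import Image
import imagehash

# Hypothetical registry: in reality a shared database that every
# generator would write to at creation time.
registry: set[imagehash.ImageHash] = set()

def register_generated(path: str) -> None:
    # Called by the AI provider the moment an image is generated.
    registry.add(imagehash.phash(Image.open(path)))

def looks_ai_generated(path: str, max_distance: int = 4) -> bool:
    # Perceptual hashes tolerate resizing/recompression, unlike
    # exact checksums, so near-duplicates still match.
    h = imagehash.phash(Image.open(path))
    return any(h - known <= max_distance for known in registry)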

Oh well, it's all too late now. No one really has a good plan for how to distinguish AI content from real content in the future, and this will cause a lot of trouble. You will not be able to believe any news backed up with videos, sound, or photos anymore. Video, sound, and photo evidence in court cases will become useless, and you can pretty much put any words into any politician's mouth now. Last week, videos of a presenter from a big German national television news service circulated on the internet, in which he was telling people about some "great money earning scheme" on said news channel - of course an AI-generated scam, but it shows the tip of the iceberg of what we can expect in the future.
« Last Edit: August 14, 2023, 02:30 by Her Ugliness »

« Reply #14 on: August 14, 2023, 19:32 »
0
You will not be able to believe any news backed up with videos, sound, or photos anymore. Video, sound, and photo evidence in court cases will become useless

This isn't really a new issue. Similar sorts of image deception have been going on for a very long time now - especially since digital photography and image-manipulation software like Photoshop. Quite a few reputable newspapers and magazines have been guilty of using such software to lift certain elements from one image and add them to another to make a news story more dramatic or appealing. This is done with sports photography, and I recall one example from a natural disaster where a father holding a child on his shoulders was pasted onto a photo of a bush fire. National Geographic admitted to shifting the position of one of the pyramids in a photograph taken in Egypt. This sort of stuff has been going on for many years.

And discussions about the legitimacy of using photographs as evidence in court cases have been going on since at least the 1990s (with the advent of digital image manipulation). Also, the internet has been filled with all sorts of heavily manipulated images for a very long time (blending fact and fiction).

And before digital image manipulation became a thing, people were creating fakery in their photographs by more traditional means. Deceiving people with photographs is certainly nothing new. Remember that old story about the young girls who supposedly photographed fairies in their garden in the early 1900s? Many people were fooled.

https://en.wikipedia.org/wiki/Cottingley_Fairies

« Reply #15 on: August 15, 2023, 01:00 »
0
You will not be able to believe any news backed up with videos, sound, or photos anymore. Video, sound, and photo evidence in court cases will become useless

This isn't really a new issue.

The issue itself is not new; the scale of it certainly is.
If you wanted to photoshop Donald Trump being chased by a group of policemen in the past, that would have taken you hours of work and editing experience. Now it takes 3 seconds, and everyone can do it.
« Last Edit: August 15, 2023, 01:02 by Her Ugliness »

« Reply #16 on: August 15, 2023, 16:59 »
0
Quote
This isn't really a new issue. Similar sorts of image deception have been going on for a very long time now...

Besides the fact that the National Geographic pyramids case is FORTY years old, you're listing isolated incidents - the threat from AI-generated imagery is orders of magnitude greater. Plus, most of those earlier instances were quickly identified as hoaxes; with AI that won't be as easy.

« Reply #17 on: August 15, 2023, 19:07 »
0
Plus, most of those earlier instances were quickly identified as hoaxes; with AI that won't be as easy.

Regarding the earlier cases of image deception, I would say it depends on the skills of the person doing the Photoshop job. Yes, the internet is full of crude and badly done examples, but some are better than others - the big-name publications generally feature better-quality fakery.

As for me, I'm just going to worry about my own port and images, regardless of how out of hand AI gets. Obviously, there's nothing I can do about the threat of AI or the inability of some agencies to correctly label or categorise it. I'll just continue to make the best images I can (the old-fashioned way) and keep being creative.


 
