


Author Topic: Artificial Intelligence killing the whole industry  (Read 11819 times)


« Reply #125 on: September 30, 2022, 19:36 »
+4
again, unsupported assumptions tending towards conspiracy theories- that's not how machine learning operates,

Then please enlighten us how it does work.

and your claim DALL-E copies other images is specifically denied by open-ai.  you may choose not to believe them, but that doesn't justify your claiming to know how the image is created.

I am not claiming the AI copies any specific image. I claim that the AI creates images based on a variety of existing images, and I claim that the AI has to store what it learned somewhere. We store information in our brains; computers typically use databases. Without a database or some other data storage the AI can learn nothing. So the question remains: what exactly is stored in this data storage, what does the AI actually learn? Clearly it does not learn what a naval battle is. It only learns what images with that description or keyword look like, and it has to store this information somewhere. So it has to store, for hundreds of thousands of keywords, what the images with those keywords look like. How else would it be able to create images when the user enters those keywords?

You seem to think that this is some kind of magic. I believe it is technology.
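For what it's worth, the "where is it stored" question has a concrete answer in machine learning: a trained model stores a fixed-size set of numeric weights, not the training images themselves. A toy sketch in plain Python (this is an illustration only, not DALL-E's actual architecture; the 4-weight "model" and the fake training data are invented for the example):

```python
# Toy illustration: a "model" here is just a fixed-size weight vector
# updated by gradient steps. The training images are never stored;
# only the weights change, however many images pass through training.
import random

def train(images, steps=1000, lr=0.1):
    # each "image" is a list of 4 pixel values; the model is 4 weights
    w = [0.0] * 4
    for _ in range(steps):
        img, label = random.choice(images)
        pred = sum(wi * xi for wi, xi in zip(w, img))
        err = pred - label
        # gradient step on squared error: w_i -= lr * err * x_i
        w = [wi - lr * err * xi for wi, xi in zip(w, img)]
    return w

# 1,000 training "images"...
data = [([random.random() for _ in range(4)], 1.0) for _ in range(1000)]
w = train(data)
# ...but the learned state is still just 4 numbers:
print(len(w))  # prints 4
```

The point of the sketch: the model's storage does not grow with the number of images it sees, which is why "it must keep the images in a database" doesn't follow, even though it clearly learned *from* them.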


« Reply #126 on: September 30, 2022, 19:49 »
+2
If I had to make a 19th-century naval battle, the first thing I'd do is google it and look at all the pictures to see how they were composed: mood, structure, etc. Let's be honest, who wouldn't?

Sure, but you would be able to abstract from the specific images and choose a setting and specifics for the battle yourself. For example, you would choose which countries' ships battle each other, say Turkish ships against Russian ships, indicating this by the ships' flags. You would probably choose a specific battle, like the Battle of Sinop during the Crimean War. You would show some fighting: cannons firing, smoke, damage to the ships. And you would be able to abstract from the style of the images you looked at, so your image probably would not look like an oil painting.

All of this is missing from the image Cascoly generated with the AI.

« Reply #127 on: September 30, 2022, 22:43 »
0
again, unsupported assumptions tending towards conspiracy theories- that's not how machine learning operates,

Then please enlighten us how it does work.

and your claim DALL-E copies other images is specifically denied by open-ai.  you may choose not to believe them, but that doesn't justify your claiming to know how the image is created.

I am not claiming the AI copies any specific image. I claim that the AI creates images based on a variety of existing images, and I claim that the AI has to store what it learned somewhere. We store information in our brains; computers typically use databases. Without a database or some other data storage the AI can learn nothing. So the question remains: what exactly is stored in this data storage, what does the AI actually learn? Clearly it does not learn what a naval battle is. It only learns what images with that description or keyword look like, and it has to store this information somewhere. So it has to store, for hundreds of thousands of keywords, what the images with those keywords look like. How else would it be able to create images when the user enters those keywords?

You seem to think that this is some kind of magic. I believe it is technology.

That is how I understand it, too.
I read a little about DALL-E and Midjourney today. I thought it was software that you buy, but no, it's a subscription-based site. It sounds like it would be fun to play around with. I just don't think they have the copyright issues ironed out yet, because it is learning from and storing images found on the internet using keywords.

Tomorrow I will go look at some actual uploaded images (on Adobe?) to see the quality. As a person involved in creative work my whole career, I find this very intriguing. As mentioned in one of the posts above, I see it as a tool to generate creative ideas more than a way of actually creating images to submit to microstock, at this point anyway. I'm sure the process will be refined and image quality will improve in the future. Exciting stuff; I wish I weren't at the tail end of my career.

Here's the Midjourney article I read, which talks about the copyright conundrum.
https://expertphotography.com/midjourney/

And this about DALL-E: "This public debut comes without answers to some key questions. It's not clear if AI-generated art is fair use or stolen, for instance; Getty Images and similar services have banned the material out of concern it might violate copyright. While this expansion will be welcome, it might test some legal limits."
https://www.engadget.com/dall-e-ai-image-generator-beta-no-waitlist-173746483.html

Interesting that Elon Musk is involved in OpenAI and DALL-E... I had seen him talk about AI in tweets, but didn't connect the dots.

« Reply #128 on: October 01, 2022, 07:26 »
+1
just because you don't trust a source is no justification to libel them with your unsupported claims, especially when those opinions are presented as 'facts'

and you continue to pose the false narrative that these AI are copying images to create new ones. Even those agencies banning AI do not make that unsupportable claim. instead, they are concerned about the training of AI which is an entirely different issue

I am not making any additional claims at all (let alone libellous ones). Again, you are just arguing semantics. Look at the basic facts: their product ingests a load of work that they don't own the copyright to and outputs images based (in some way) on those images and keywords.

Business model:

OUR WORK (taken without permission or compensation) ----->PROCESSING (call it whatever makes you feel good)----->THEIR PRODUCT (and lots of cash for them)




OUR WORK----->PROCESSING----->THEIR PRODUCT
« Last Edit: October 01, 2022, 07:32 by Justanotherphotographer »

« Reply #129 on: October 01, 2022, 09:30 »
+6
I just found this on one forum. This AI has totally ripped off microstock content.





« Reply #130 on: October 01, 2022, 09:43 »
+1
I just found this on one forum. This AI has totally ripped off microstock content.




The AI was trained with watermarked images so often that it thinks the watermark belongs in the image. It thinks the watermark is part of a businessman, just like his tie or glasses.
This is an issue that has been discussed before and even addressed by members of the DALL-E team. It doesn't mean the image itself was "ripped off", just that the AI was trained badly. It could easily have been avoided if the people working for/owning DALL-E and other AI image generators had, well, I don't know... bothered to actually PAY for the images they fed their AI? Can't say I don't feel a little bit gleeful that it's causing hiccups in the image-creation process.
« Last Edit: October 01, 2022, 14:50 by Her Ugliness »
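The watermark effect described above is, at heart, co-occurrence statistics. A minimal sketch with invented numbers (plain Python; the 95/5 split is hypothetical): if nearly all training images tagged "businessman" carry a watermark, a model that learns the statistics of that tag will treat the watermark as a feature of the concept, like the tie or the glasses:

```python
# Toy sketch of why watermarks get reproduced: when a feature appears
# in almost every training example for a tag, the learned statistics
# of that tag include the feature itself.
from collections import Counter

training_set = (
    [{"tag": "businessman", "has_watermark": True}] * 95 +
    [{"tag": "businessman", "has_watermark": False}] * 5
)

counts = Counter(img["has_watermark"] for img in training_set)
p_watermark = counts[True] / len(training_set)
print(p_watermark)  # prints 0.95
# To a model trained on this set, "businessman" images are watermarked
# 95% of the time, so generated ones will tend to be as well.
```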

« Reply #131 on: October 01, 2022, 10:10 »
+4
I just found this on one forum. This AI has totally ripped off microstock content.


No, no, it's not copying it. It just "learnt" what the watermark looks like and reproduced the exact same watermark from its memory of the essence of its watermark-ness. Not the same thing at all (sarcasm, of course).

Again, I get what happened here: it trained on watermarked images, so it reproduces the watermark. What it does is give the game away beautifully, until they clean it up. It is using images and keywords. Just because it forms a model from the information it ingests and spews out a conglomerated version from this idea doesn't make any difference legally, IMHO.
« Last Edit: October 01, 2022, 13:24 by Justanotherphotographer »

« Reply #132 on: October 01, 2022, 13:23 »
0
.

« Reply #133 on: October 01, 2022, 14:45 »
+1
again, unsupported assumptions tending towards conspiracy theories- that's not how machine learning operates,

Then please enlighten us how it does work.

and your claim DALL-E copies other images is specifically denied by open-ai.  you may choose not to believe them, but that doesn't justify your claiming to know how the image is created.


I am not claiming the AI copies any specific image. I claim that the AI creates images based on a variety of existing images, and I claim that the AI has to store what it learned somewhere. We store information in our brains; computers typically use databases. Without a database or some other data storage the AI can learn nothing. So the question remains: what exactly is stored in this data storage, what does the AI actually learn? Clearly it does not learn what a naval battle is. It only learns what images with that description or keyword look like, and it has to store this information somewhere. So it has to store, for hundreds of thousands of keywords, what the images with those keywords look like. How else would it be able to create images when the user enters those keywords?

You seem to think that this is some kind of magic. I believe it is technology.
Speaking of a database is quite reductive. Have you never heard of neural networks or deep learning? The images are not "stolen" but used to form concepts, much as you do when looking around, stimulating your brain through your eyes. Even if you are not a scientist, there is plenty of information online to start getting a better idea of the machine-learning process.
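One back-of-envelope check on the "it must store the images" idea is to compare the size of a model's weights with the size of its training set. The figures below are rough public estimates (roughly 3.5 billion parameters for DALL-E 2, roughly 650 million training images at about 100 kB each), not official numbers:

```python
# Rough arithmetic, not official figures: the weights of an image model
# are orders of magnitude smaller than the data it was trained on, so
# they cannot contain copies of the images; at best they encode
# statistical regularities of the whole set.
params = 3.5e9 * 2      # ~7 GB of weights (2 bytes per parameter)
images = 650e6 * 100e3  # ~65 TB of training images

ratio = images / params
print(round(ratio))  # prints 9286
# i.e. the training data is roughly 9,000 times larger than the model
```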

« Reply #134 on: October 01, 2022, 16:34 »
+5
Many here celebrate this, but perhaps it is the beginning of the end of an extra income like microstock. Some may not agree with what I will say, but using images from microstock or from the web makes these AI image-creation companies cheaters who are taking advantage of something that is not theirs. It is not as deep a thought as many have written here. In conclusion, it is a misappropriation of images.

« Reply #135 on: October 01, 2022, 16:54 »
+2
again, unsupported assumptions tending towards conspiracy theories- that's not how machine learning operates,

Then please enlighten us how it does work.

and your claim DALL-E copies other images is specifically denied by open-ai.  you may choose not to believe them, but that doesn't justify your claiming to know how the image is created.


I am not claiming the AI copies any specific image. I claim that the AI creates images based on a variety of existing images, and I claim that the AI has to store what it learned somewhere. We store information in our brains; computers typically use databases. Without a database or some other data storage the AI can learn nothing. So the question remains: what exactly is stored in this data storage, what does the AI actually learn? Clearly it does not learn what a naval battle is. It only learns what images with that description or keyword look like, and it has to store this information somewhere. So it has to store, for hundreds of thousands of keywords, what the images with those keywords look like. How else would it be able to create images when the user enters those keywords?

You seem to think that this is some kind of magic. I believe it is technology.
Speaking of a database is quite reductive. Have you never heard of neural networks or deep learning? The images are not "stolen" but used to form concepts, much as you do when looking around, stimulating your brain through your eyes. Even if you are not a scientist, there is plenty of information online to start getting a better idea of the machine-learning process.
The AI should learn what an image is from public-domain images! Not from images that are not public domain!

« Reply #136 on: October 01, 2022, 17:27 »
+1
I hope the majority of free websites about AI for images are like this.

« Reply #137 on: October 02, 2022, 00:47 »
0
A.I. can't use images from microstock because they are watermarked; it would have to use Google or PD images. Of course, Google would find out that they're using said images and would probably buy them out.

« Reply #138 on: October 02, 2022, 01:12 »
+1
At the very least, the watermarks show that the images they used to learn from were stolen.

« Reply #139 on: October 02, 2022, 01:26 »
+4
A.I. can't use images from microstock because they are watermarked

And yet they clearly did.

« Reply #140 on: October 02, 2022, 06:11 »
+1
A.I. can't use images from microstock because they are watermarked; it would have to use Google or PD images. Of course, Google would find out that they're using said images and would probably buy them out.


Look a couple of posts above yours: obviously they did. Apparently this was happening a lot more earlier in development (the visible watermarks, I mean); the engines have since been trained to filter out more of this noise. The founders of some of these companies say they did. The descriptions and keywords, in a predictable format, are a gold mine for these companies.

I have tried DALL-E and have had results with what look like blurry signatures in the corner.
« Last Edit: October 02, 2022, 13:06 by Justanotherphotographer »

« Reply #141 on: October 02, 2022, 13:38 »
0
again, unsupported assumptions tending towards conspiracy theories- that's not how machine learning operates,

Then please enlighten us how it does work.
...

i & others have supplied multiple links about ML on the 2 threads about AI -- or just google it


« Reply #142 on: October 02, 2022, 18:47 »
+2
I wouldn't be surprised if at some point a company like, say, SS got paid by one of these places for access to the images and keywords. Did we see any of that money? No. Was it allowed somewhere in SS's multiply-rewritten, vague terms? Probably. Is it "right"? Probably not. Is there anything we can do about it? Probably not.

It would be interesting to see what the results would be for people who have used their own business name as a keyword, if they used that as a starting input.

« Reply #143 on: October 03, 2022, 00:30 »
+3
I wouldn't be surprised if at some point a company like say SS got paid by one of these places for access to the images and keywords.

If this had been the case, then the AI would not have learned to generate SS watermarks on its images, because if SS had given them access to the images, there would not have been a watermark on them.


« Reply #144 on: October 04, 2022, 02:49 »
0
Here comes Google's text-to-3D generation.



https://dreamfusion3d.github.io/

« Reply #145 on: October 04, 2022, 06:49 »
+1
What's interesting experimenting with this technology is what it's good at and what it's bad at.

I think it'll eventually become really good at lifestyle stuff: a beautiful woman sitting in a private jet, eating a bowl of salad, etc. So those types of photographers are screwed.

Imagined images and illustrations - those artists are largely screwed.

Generic landscapes - misty forests, dramatic mountain ranges, green farmland. Those photographers are screwed.

What it's really bad at is named locations. It doesn't seem to be able to produce decent realistic images of Big Ben or Mt Everest or the Taj Mahal because those images require a single viewpoint and you can't combine images without it looking very odd.

« Reply #146 on: October 04, 2022, 16:47 »
0

...
What it's really bad at is named locations. It doesn't seem to be able to produce decent realistic images of Big Ben or Mt Everest or the Taj Mahal because those images require a single viewpoint and you can't combine images without it looking very odd.

these folk are likely screwed as well - just not as quickly as your other examples.  when i asked for images of crowds and the yeni camii near the golden horn, the results showed several different angles

only images got the minarets correct, but it's just a matter of time before that's improved

3 of the 4 'sherpas on everest' pictures had a reasonable image of everest with a recognizable west ridge & summit pyramid

« Reply #147 on: October 19, 2022, 03:47 »
0
If AI can make any picture you want, why would anyone need agencies? You just buy the software and add any picture you want to your article. So it's not just the contributors losing here; it's also the agencies. They will be redundant, like us.
AI can't do everything. It will struggle to generate images of places that don't have many photos. It can't do any editorial, where the image can't be manipulated. I'll do everything that AI can't be used for, or that would be too much hard work for AI.

« Reply #148 on: October 19, 2022, 03:50 »
+3
A.I. can't use images from microstock because they are watermarked it would have to use Google or PD images. Of course, google would find out that they're using said images and would probably buy them out.
One of the images I made with AI clearly had the Alamy watermark on it. The AI adjusted it, but it was still obvious where the image came from.

« Reply #149 on: October 19, 2022, 11:04 »
0

...
What it's really bad at is named locations. It doesn't seem to be able to produce decent realistic images of Big Ben or Mt Everest or the Taj Mahal because those images require a single viewpoint and you can't combine images without it looking very odd.

these folk are likely screwed as well - just not as quickly as your other examples.  when i asked for images of crowds and the yeni camii near the golden horn, the results showed several different angles

only images got the minarets correct, but it's just a matter of time before that's improved

3 of the 4 'sherpas on everest' pictures had a reasonable image of everest with a recognizable west ridge & summit pyramid

Probably not the best example to use... for fiction it may be OK, depending on the audience, but it's certainly no good for anyone in the world of mountaineering.

You can't use an image that is merely similar to a mountain when you need the image to highlight a specific route up it. A lot of the books I have for different mountains in the Alps and the Andes were bought as reference guides so we knew what to expect before we got to the climb. There's no point having something that looks similar; it could be very dangerous once you're 4,000-8,000 metres up.

The climbing community as a whole would pick up on it in a flash; they are very protective of tradition. Just try to bolt a route up a mountain in the Lake District and they would hunt you down. Any images that weren't accurate would be ridiculed by the community. Even in the Lake District or the Scottish Highlands they need to be accurate. Editors don't even like it when an image is captioned slightly wrong, as the image could be used incorrectly, which could lead to climbers and hikers getting into difficulty on the mountain.

As a photo on the wall, a real photo of Everest and its climbers is like: wow, what an amazing achievement and place. You think of the effort it took to get there, the dangers, the blood, sweat and tears of each of those captured in the image... there is an emotional connection. An AI rendition is just an empty vessel in comparison. No value at all, because it's not real. But I guess that's just the way I see it, from a climber's perspective.

Maybe in the future that's what we'll see: people hunting out real imagery over AI because they don't want something that is fake. They want something that is real, something that can connect them to the earth we live in... something that does exist and was seen by a fellow human.


 
