
Author Topic: Dall e 2 will make us all redundant?  (Read 8683 times)


Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #25 on: June 29, 2022, 12:49 »
0

I searched for a DALL-E 2 thread before posting this, but the search didn't show anything. Weird; I'd have just voiced my opinions in the original thread. Maybe I needed to add the -?

The only point for my post was:


I did last time we had this same discussion?
I can't recall having this discussion with anyone before?

I wasn't being critical of you bringing it up again, or of Firn; just that, yes, we have done this dance before.

I find the updated discussion interesting, and more opinions have been added. Looking back at this, I see more that I missed last round.  :) I followed some more links and saw more of what the system can do. I still see it as best for simple concepts and illustrations. That doesn't mean it won't get even better and more realistic.

I wonder how it will be used and marketed? I mean, a per-image online subscription? Or maybe software for sale, which would surely open a Pandora's box of new images. Will I be able to dream up a new image, have it made, and add it to my images for license?

I'm convinced along the same lines as many others that there will always be a need for real photographers, and some situations can't be covered by software.


« Reply #26 on: June 30, 2022, 01:19 »
+1



In ten years, perhaps, volume will be so easy to create digitally that a lot of individuals can have a personal microstock store with millions of images instead of thousands. But for news or documentary material, I think things will stay quiet, with no big changes.

Which is why I am trying to do more journalism. Until all the air is filled with drones reporting in realtime, we can always walk around and document the world around us.

Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #27 on: July 01, 2022, 21:12 »
+1



In ten years, perhaps, volume will be so easy to create digitally that a lot of individuals can have a personal microstock store with millions of images instead of thousands. But for news or documentary material, I think things will stay quiet, with no big changes.

Which is why I am trying to do more journalism. Until all the air is filled with drones reporting in realtime, we can always walk around and document the world around us.

That has always been one of my interests: documenting and journalism. Not as creative as "art", but it's a challenge I can enjoy.

« Reply #28 on: July 02, 2022, 01:55 »
+1
I wish I had done it earlier. Before Covid I enjoyed going to protests and other outdoor local events. And apparently you can document it all for agencies.

Even walking around my hometown there are so many tourist attractions nobody has ever covered. They all just do the cathedral.

It is also refreshing not to have to worry about logos.


« Reply #29 on: August 26, 2022, 02:01 »
+4
Today I had the chance to try out DALL-E, and I started out by describing some of my bestsellers to see what my AI-generated "competition" would be.

I can now safely say that it is how I thought: we are really, really still FAR away from having to worry about AI-generated photos replacing us.

Tricking AI and then saying it's flawed by giving the input poor definitions, isn't really proving anything.

But maybe this does: one of the descriptions I gave was "Two French Bulldogs with one wearing full body snowman costume and one wearing full body christrmas tree costume next to gift boxes" (I tried to describe this image: https://www.shutterstock.com/image-photo/dogs-christmas-costumes-two-french-bulldogs-1850738611 ).
In none of the results was either dog wearing a Christmas tree or snowman costume; DALL-E gave me a Christmas elf and a Santa costume instead. In one of the results the French Bulldog was replaced with a creepy-looking plush dog.

Another example I tried was "French Bulldog wearing full body devil costume with pitchfork" (I tried to describe this image: https://www.shutterstock.com/image-photo/french-bulldog-dog-red-halloween-devil-1822964279 ).
None of the results gave me a full-body costume. The dogs were wearing devil horns, with a pitchfork floating weirdly in the air in front of them.

But the worst of all is that in all results DALL-E gave me dogs that looked like zombies, with parts of their faces missing (like the eyes), eyes melting off their faces, or strangely twisted legs. The results weren't just bad; they were scary.
Also, the individual elements of the images often were not put together well. Look at the strangely pasted pirate hat in the example below. It doesn't look realistic at all.
Even the simple instruction "French Bulldog on white background" doesn't produce the desired results. In the first round of results I got dogs on white blankets with lots of folds in the fabric, even though that's not what I asked for. In the second round I suddenly got French Bulldogs on a white background like I asked, but in one of four results the dog's head was not in the picture, and the other three had melting zombie eyes again. In all results the dog was strangely placed in the frame, with body parts cut off.
At this point I had seen enough nightmare material and tried something harmless: "Leaf of Monstera Deliciosa Variegata plant". No result showed me a variegated version of the plant like I asked for. The next search, for "Leaf of Philodendron Verrucosum plant", showed me two results of random Philodendron leaves not belonging to a Verrucosum, one Monstera and one Epipremnum leaf. The AI obviously hasn't learned a thing about botany.

So, after having tried this out for myself, I feel pretty assured in my original statement: we are not there yet. I don't even know where all the great examples used to advertise this came from; I couldn't produce a single usable result. In this state, the product shouldn't even have been released for beta testing.

Scary zombie dogs:
« Last Edit: August 26, 2022, 04:22 by Firn »

« Reply #30 on: August 26, 2022, 05:20 »
+3
I have been amazed at some of the results, though some faces are very poor. It needs fairly detailed instructions to work; the more detail the better. I would think that Adobe will need to up their game quickly. This software is still in its infancy and can only get better. I think graphic designers are more likely to be affected than photographers, but presumably they will adapt to DALL-E 2 and incorporate it in their designs.

Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #31 on: August 26, 2022, 23:04 »
+3
Scary zombie dogs:

That's being kind. They are grotesque.

PaulieWalnuts

  • We Have Exciting News For You
« Reply #32 on: August 27, 2022, 14:39 »
+1
Today I had the chance to try out DALL-E, and I started out by describing some of my bestsellers to see what my AI-generated "competition" would be.

I can now safely say that it is how I thought: we are really, really still FAR away from having to worry about AI-generated photos replacing us.

With today's exponential technology advances, "far" is probably five, maybe ten, years at most before the technology is ready. I hope it doesn't start to become mainstream for at least another 15 years.

« Reply #33 on: August 27, 2022, 18:26 »
+2
DALL-E 1 was introduced in 2021 and DALL-E 2 in 2022; in only one year it got much better than version 1, so real competition from these AI systems may come much faster.

Brasilnut

  • Author Brutally Honest Guide to Microstock & Blog

« Reply #34 on: August 30, 2022, 09:49 »
+3
Hi all,

I've been given access to try out this neat piece of software and published my thoughts.

https://brutallyhonestmicrostock.com/2022/08/30/dall-e-2-glimpse-into-the-future-of-artificial-intelligence-image-creation/

Also, here's me riding a bike with my camera, as created by DALL-E 2.

« Reply #35 on: August 30, 2022, 13:33 »
+2
Thanks for putting together a post about your experiences.

Leaving aside the usability of the resulting images, it seems that the issue of copyright in the end result will have many of the same tangles as in the music business where samples, even brief ones, have resulted in litigation. Agencies do not want to spend money on lawyers, so avoiding legal risk will, IMO, be a key factor in any rules they set for contributors.


« Reply #36 on: August 30, 2022, 14:16 »
+2
AI can definitely replace a fair chunk of today's microstock market.
It would be naive to think otherwise. There are plenty of examples where AI could work just as well as a run-of-the-mill stock image.

Article or post about wine? AI generates the perfect glass of wine in a cozy setting. Rustic wooden table, wood stove in the slightly defocused background.
Article or post about depression? AI generates a sad, depressed and, more importantly, anonymous face.
Article or post about traveling the Grand Canyon? AI does the trick. Perfect sunset over the Grand Canyon with happy birds in the sky and a lovely hipster couple holding hands in the near distance. 

Agencies are sitting on massive data, with a massive range of topics covered.
And they are selling it to AI developers who build systems that can generate the perfect image based on popular, already existing content.

The question is: how much will those tech companies charge to use their AI to generate stock images?
And will it be cheaper than what stock agencies currently (or in the future) charge?
Continuously developing and maintaining a proper AI is not cheap.
Neither is the infrastructure that can process thousands of requests per hour, instantly.
And sure, they will want to make as much money as they possibly can, too.
For a customer, buying a standard, good-enough stock image or illustration from the agencies might still be the cheapest option for the months and few years to come.

And sure, AI can't do everything.
Video? That sounds a lot more difficult.
Editorial content? Documentary, real people. I wonder what images of gritty, edgy demonstrations about topic x in city y will look like. That can get quite messy, right?
New trends, creativity: how fast can AI pick up new trends and build on an ongoing stream of creativity?

But in the long run, for sure, it will have a very severe impact on the market of microstock.
We have already beaten that market into a coma with the enormous amount of content we fed it; AI will only feed more content into the same market.

One last thing that comes to mind: how good will it really be at high volumes while keeping some sense of authenticity?
At the end of the day, won't the consumers of those media get frustrated and feel disconnected by the overflow of AI-generated content?

« Reply #37 on: September 01, 2022, 20:20 »
0
I wouldn't call this harmless, and I certainly didn't get any zombies. And if someone thinks he can learn the tricks of writing prompts in a day, he is far from it: you have to spend some credits to train yourself.

Also, keep in mind this was my first and only try on this topic, and that I didn't play with descriptions like "full body" and similar ones to avoid body parts being cut off, etc. Your prompts should be accurate: if you don't want zombies, use something like "cute"; if you want a front or side view, just write it.

It's already here, it's pretty useful even though it's still in beta, and it will only get better.
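None of this code is in the thread, but the advice above (state the medium, the viewpoint, and any style words explicitly instead of leaving them to the model) can be sketched as a tiny prompt-builder. The function and its parameters are my own illustration, not anything DALL-E requires:

```python
def build_prompt(subject, medium="photo", view=None, style=None):
    """Assemble a text-to-image prompt from explicit parts, so the
    medium, viewpoint and style are all stated rather than implied."""
    parts = ["A {} of {}".format(medium, subject)]
    if view:
        parts.append("{} view".format(view))
    if style:
        parts.append(style)
    return ", ".join(parts)

prompt = build_prompt(
    "a cute French Bulldog wearing a full-body snowman costume",
    medium="studio photo",
    view="front",
    style="sharp focus, natural lighting",
)
print(prompt)
```

The point is only that every attribute you care about appears somewhere in the text; the model cannot infer an unstated medium or viewpoint.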









« Reply #38 on: September 02, 2022, 00:45 »
+1
Also, keep in mind this was my first and only try on this topic ,

Which is exactly why you shouldn't draw any conclusions. You had one try and got 4 non-zombies. I made several tries and got zombies 75% of the time. So obviously I have the bigger "control group" to draw conclusions from. From my experience the dogs' faces always seem to get worse the more details you add to the description. Just a close-up of a French Bulldog face produces almost perfect results. A full French Bulldog on a one-colored background sometimes produces minor errors. But add more dogs, add items to the surroundings, add accessories, etc., and it gets worse. The more things DALL-E has to add to the picture, the more problems it seems to have with the details, in this case especially the dogs' eyes and noses.
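A side note that isn't in the thread: the "control group" disagreement is really about sample size, and a standard Wilson score interval (the helper below is my own sketch) makes concrete how little four generations pin down an error rate compared with twenty:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for an observed proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 3 "zombie" results out of 4 generations vs. 15 out of 20:
few = wilson_interval(3, 4)     # very wide -- 4 trials say little
many = wilson_interval(15, 20)  # noticeably narrower
print(few, many)
```

With only four trials the plausible error rate covers a large part of the 0-1 range, which is why neither side's handful of anecdotes settles much.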


 If you don't want zombies, use something like cute


There is something seriously wrong with an AI where I have to add the attribute "cute" just to avoid getting a dog with a melting zombie face. But okay, here we go:
Instead of my former "Two French Bulldogs with one wearing full body snowman costume and one wearing full body christrmas tree costume next to gift boxes" I tried "Two cute French Bulldogs with one wearing full body snowman costume and one wearing full body christrmas tree costume next to gift boxes".

Apart from the fact that, again, none of the results gave me a dog in a snowman or Christmas tree costume, this is what DALL-E considers "cute". Thanks for more nightmare material. Cheers!
« Last Edit: September 02, 2022, 01:18 by Firn »

« Reply #39 on: September 02, 2022, 01:25 »
0
Also, keep in mind this was my first and only try on this topic ,

Which is exactly why you shouldn't draw any conclusions. You had one try and got 4 non-zombies. I made several tries and got zombies 75% of the time. So obviously I have the bigger "control group" to draw conclusions from.


 If you don't want zombies, use something like cute

First and only try on your Bulldog. ;D

I've spent about 5,000 credits so far on various styles, so I'm pretty sure those 20k generations put me a bit above your control group, and that I know what I'm talking about.

And that's why I didn't get zombies this time, like I did when I was starting out. I did get cut-off ears, though, and I would have to waste a few attempts to try and bypass that minor thing.

Out of 100 generations I'm getting maybe 20-30 errors once I've spent a few credits on attempts and found the right description. Usually five legs on animals, strange paws, mismatched eyes and so on. For starters, that's not bad at all.

And photography is by far its weakest spot.

« Reply #40 on: September 02, 2022, 01:34 »
0

 If you don't want zombies, use something like cute

First and only try on your Bulldog. ;D


I can only repeat the exact same answers I wrote above:
If you only tried Bulldogs once, you don't have much to draw conclusions from.
If I use "cute", I get zombies.

And since you apparently reply before reading, just repeating your original statement, and this conversation is going in circles, this is pointless and I am out of here.

« Reply #41 on: September 02, 2022, 01:40 »
0
You need to study harder  ;D


« Reply #42 on: September 02, 2022, 01:48 »
+2

 If you don't want zombies, use something like cute

First and only try on your Bulldog. ;D


I can only repeat the exact same answers I wrote above:
If you only tried Bulldogs once, you don't have much to draw conclusions from.
If I use "cute", I get zombies.

And since you apparently reply before reading, just repeating your original statement, and this conversation is going in circles, this is pointless and I am out of here.

Not angry  ;D

Yes, I do, because I tried the elephants and the horses and got the same thing at first, just like your example. So I started researching, reading other people's experiences, looking for other people's prompts, etc.

Here is a little hint to get you going: for starters, how can it even know from your prompt that you want a photograph? You didn't even mention what medium it should generate. And that's just a drop in the ocean.

If you cannot get a result from something you just started with, you might want to consider that you don't know how to use the thing yet, and that this is the main part of your problem.  ;)

 
« Last Edit: September 02, 2022, 01:53 by Lizard »

Brasilnut

  • Author Brutally Honest Guide to Microstock & Blog

« Reply #43 on: September 02, 2022, 05:11 »
+2
Wow, it's like looking in the mirror :D


« Reply #44 on: September 02, 2022, 05:29 »
0
I really don't know what to think about the future; probably, in the long term, big changes will affect every image production process...

By the way, even NOW, just getting started, you can get results that are not perfect but REALLY impressive.





« Reply #45 on: September 02, 2022, 06:34 »
0
Sure, DALL-E can create a slice of fish or a horse on a beach. Give it something more complex and creative and it fails. It's like my failed attempts to recreate one of my bestsellers: it can't understand the instruction "French Bulldog wearing a full body snowman costume" in the context of the rest of the description. If I just write "French Bulldog dog wearing full body snowman costume" I get decent results, at least for the costume (but 50% zombie dog faces again). If I use the very same sentence but describe another dog in a costume standing next to it, with some gift boxes added, DALL-E suddenly can't remember what a snowman is. Sometimes it can't even remember what a dog is.  :o
« Last Edit: September 02, 2022, 06:44 by Firn »

« Reply #46 on: September 02, 2022, 06:48 »
0
You need to study harder  ;D



And once again you have replied to me without bothering to read what I wrote. Simple images like the one you posted mostly come out correct. The more details I add, the more the dogs get zombified. But I already wrote that.
Try it for yourself: describe this image to DALL-E and look at your results.
https://www.shutterstock.com/image-photo/dogs-christmas-costumes-two-french-bulldogs-1850738611

I dare everyone who seems so impressed with DALL-E to do the same.

« Last Edit: September 02, 2022, 07:00 by Firn »

« Reply #47 on: September 02, 2022, 13:53 »
+2

I have no clue about all this AI, but to me it looks like the results are generated (stolen) from existing keyworded work, without consent, compensation, or regard for copyright.

No way CGI can have an idea of depth of field, light, etc.




« Reply #48 on: September 02, 2022, 18:30 »
+1

I have no clue about all this AI, but to me it looks like the results are generated (stolen) from existing keyworded work, without consent, compensation, or regard for copyright.

No way CGI can have an idea of depth of field, light, etc.

Have you watched any movies or TV lately? CGI easily does lighting, etc.

What's new here is creating an image de novo from just a description, not from any existing image; each picture starts as a random mix of pixels.

Machine learning trains on images, but what's produced isn't derivative.

« Reply #49 on: September 02, 2022, 18:55 »
+1
Just started today; here are the results of my first session.

long view of climbers near the summit of a Himalayan peak



mountain biker riding through an alpine meadow with mountains in the background



rock climber silhouette on steep rock face




19th century naval battle 




misfires:
hms victory ship-of-the-line at naval battle of trafalgar --> showed only the ship, docked

roman legion attacking a city wall --> 2 soldiers, but too close


 

