


Author Topic: Dall e 2 will make us all redundant?  (Read 8686 times)


SVH

« Reply #75 on: October 06, 2022, 12:28 »
+2
..
...You yourself mentioned the concept of human-machine people but never mentioned the fact that after the singularity the next purely logical evolution doesn't bode well for humanity. Remove art, expression, individuality and perhaps most importantly trust from the equation and you are hastening the process.

the next steps will see AI for stock buyers, then they'll replace graphic designers.  AI will read & post to social media and decide what their humans (a la 'Mr Peabody's boy Sherman') 'want' to buy

i mentioned earlier a thoughtful take:
https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

What I find interesting is what people will do to earn money once AI is encouraged to replace all the jobs. Yes, I know new jobs will be created, but nowhere near as many. AI is writing books, news articles and the like, and that's before things like automated cars etc. become the norm. Thousands, millions of jobs gone. If a large % of the population is no longer earning money (or as much money), who is going to buy and use the services? Given the amount of price cutting in all areas in order to get an edge, a reduction in sales is the last thing that is needed.

I know this is somewhat "me me me" but I'm glad I'm a lot closer to retirement than most around here. The future will certainly be a lot different and I feel there is a rush from some quarters to embrace the end of human involvement in the creative process. It's not just a tool, it's creating a system that removes the person from the creative process. The end result will be a client and the software. It won't take many people to manage the software for a world wide audience and the client once they get used to it won't need someone to type in the words for them.

By the time they realise that part of the creative process was the actual thinking of the ideas, millions of people will have left the industry and they'll be left with a piece of software that finds it hard to think outside the box... because it doesn't think. It scrapes data (visual and words) and tries to learn from it. What happens when there is no one left to create fresh data for it to learn from?!? Will it just end up learning from images it has created itself, including the errors it makes, since it can't think or understand what the real item should look like... other than from the scraped data? Will it be able to come up with new trends?!? Invent different styles? So far, the images it produces seem very similar in style. So much so that I tend to be able to spot them very quickly online. How long before people get bored? Hopefully I'll be sitting with my feet up in front of the fire enjoying a single malt (retired) by that time.
Well said!


« Reply #76 on: October 06, 2022, 17:30 »
0
incorporating copyrighted elements, parts of someone else's artwork is inevitable

AI doesn't incorporate anything.
AI learns what an object is and how to recreate it (or human faces, animals... everything)

Of course there are legal problems because the images used to train are copyrighted; but there is nothing that will be "incorporated" into new images

It's quite a new scheme, and it cannot be managed with "classic" discussion; it's a completely new issue to solve.

Interestingly, it obviously copies quite a bit, as they were also including watermarks with the images they produce. Of course, the programmers will write a bit of code to remove them in the future, but it's obvious it's basing images on real content.

again, no - it doesn't need to copy anything - a matrix analysis creates completely different information. the watermarks aren't stored per se - instead the ML thinks watermarks are part of the object.  using a larger training set would eliminate some of that problem.

of course, they shouldn't be using watermarked images for training in the first place

« Reply #77 on: October 06, 2022, 17:42 »
+1
..
...You yourself mentioned the concept of human-machine people but never mentioned the fact that after the singularity the next purely logical evolution doesn't bode well for humanity. Remove art, expression, individuality and perhaps most importantly trust from the equation and you are hastening the process.

the next steps will see AI for stock buyers, then they'll replace graphic designers.  AI will read & post to social media and decide what their humans (a la 'Mr Peabody's boy Sherman') 'want' to buy

i mentioned earlier a thoughtful take:
https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

What I find interesting is what people will do to earn money once AI is encouraged to replace all the jobs. Yes, I know new jobs will be created, but nowhere near as many. AI is writing books, news articles and the like, and that's before things like automated cars etc. become the norm. Thousands, millions of jobs gone. If a large % of the population is no longer earning money (or as much money), who is going to buy and use the services? Given the amount of price cutting in all areas in order to get an edge, a reduction in sales is the last thing that is needed...

it looks bleak only if we continue the current robber-baron capitalist paradigm, with obscene inequality of income and huge corporate profits with few taxes

it's a political problem - not economic or technological.  a paradigm shift would see a much more progressive tax system, a balancing of income ranges, etc. - people-oriented, not corporation-oriented.  this would provide a basic livable income for all. people would have the choice to accept that and pursue non-profit areas that couldn't otherwise provide an adequate income. this is already seen with techies who achieve their monetary goals & retire early to work for non-profit foundations et al. or they could continue to follow professions that haven't been overtaken by AI (yet)

a shift from Hobbesian dynamics would allow more folk to have the options of the super-rich.

« Reply #78 on: October 06, 2022, 17:46 »
0



Interestingly, it obviously copies quite a bit as they were also including watermarks with the images they produce.


Might risk sounding like a broken record, but: The AIs sometimes generate images that have something resembling microstock agency watermarks, because they have been trained with so many watermarked (unlicensed!) images that they wrongly learned that the watermark was part of whatever they were supposed to generate. When an AI generates a watermark, it "thinks" it belongs in the picture like a suit to a businessman or the sun to a picture of a sunny sky. It's an issue of wrong learning, not an issue of copying. It recreates the watermark, just like it recreates the sun or a suit. It cannot understand that the watermark is not part of whatever it is supposed to depict. If an AI were capable of thinking/realizing that what it creates in images corresponds to something that exists in the offline world, it would otherwise have to believe that people walk around with floating watermarks in front of them.

I start to think that many people do not really understand what an AI is. Artificial intelligence. It's not a computer program that copies & pastes stuff. It is a program that has learning abilities. It gets input and it learns from it. Give it the wrong input and it will learn to create wrong results.
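The "wrong learning, not copying" point above can be shown with a toy sketch. This is not how DALL-E actually works (real models are far more complex); it is a minimal stand-in where the "model" learns only per-pixel averages from its training set. Because every hypothetical training image carries the same watermark, the learned statistics reproduce the watermark, even though no individual image is stored or copied:

```python
import random

random.seed(0)

# Toy "training set": 500 tiny images (flat lists of 16 pixels) with random
# content, each stamped with the same fixed "watermark" in the first 4 pixels.
WATERMARK = [1.0, 1.0, 1.0, 1.0] + [0.0] * 12

def make_image():
    return [random.random() * 0.5 + w for w in WATERMARK]

train = [make_image() for _ in range(500)]

# "Training": the model keeps only summary statistics (per-pixel means),
# not any of the images themselves.
learned = [sum(img[i] for img in train) / len(train) for i in range(16)]

# The learned statistics reproduce the watermark region strongly...
assert min(learned[:4]) > 1.0
# ...while matching no single training image exactly.
assert learned not in train
```

The watermark shows up in the output purely because it was statistically present in every input, which is the "wrong learning" failure mode described above.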

exactly - it's much easier to make absurd claims rather than actually doing a bit of research to see how these AI actually work

« Reply #79 on: October 07, 2022, 02:34 »
+6
I am sorry, but it's you who aren't listening. You can't seem to grasp that a person can understand how AI works and still think it is unacceptable to use other people's copyright-protected work to make your product. Be that via training, learning or straight-up copy-pasting.

The watermark has been used as an example as it perfectly illustrates that learning can perfectly reproduce parts of another image in a way identical to simply copying and pasting the image. The watermark looks the same because the program has assessed that a businessman always has this feature and that it looks the same from any angle. It has come to this conclusion because all/most of the images it is pulling from had this watermark.

It gives the game away because it neatly demonstrates what's going on. The same process is happening with all images it ingests. Just because it has more inputs for most features, producing results less like any one individual image, doesn't change the principle. However you try to cut it, they have appropriated someone else's intellectual property to produce their own commercial product (the AI and resulting images).

« Reply #80 on: October 07, 2022, 10:34 »
+2
I am sorry, but it's you who aren't listening. You can't seem to grasp that a person can understand how AI works and still think it is unacceptable to use other people's copyright-protected work to make your product. Be that via training, learning or straight-up copy-pasting.

The watermark has been used as an example as it perfectly illustrates that learning can perfectly reproduce parts of another image in a way identical to simply copying and pasting the image. The watermark looks the same because the program has assessed that a businessman always has this feature and that it looks the same from any angle. It has come to this conclusion because all/most of the images it is pulling from had this watermark.

It gives the game away because it neatly demonstrates what's going on. The same process is happening with all images it ingests. Just because it has more inputs for most features, producing results less like any one individual image, doesn't change the principle. However you try to cut it, they have appropriated someone else's intellectual property to produce their own commercial product (the AI and resulting images).

You're fighting a losing battle with this one, I'm afraid. If you were to take (learnt) parts of X number of songs to combine and form a new one, you'd have to pay the copyright holders of the original content. You are profiting on the back of someone else's copyrighted material, and they were caught with their pants down when the images started reproducing watermarks... which confirms they used copyrighted material to develop their system.

The machine can't look at one picture of a tree and then draw a representation of it like a human can; it needs many hundreds of examples with matching keyword data to link the word "Tree" to the image, and then takes small samples of those images to form a new one.

The MTB cyclist was another good example: it doesn't know the wheel is an element on its own, so it included a sample of background from an image that didn't match the rest of the background it created. It's like using Content-Aware Fill in Photoshop: sometimes it samples (grabs) a section from the wrong part of the image to fill in the gap, and it stands out like a sore thumb. The AI is grabbing multiple bits from thousands of images to create a new one. I'd imagine that's why the perspective at times looks off in the images it produces, as the samples taken don't all have the same perspective and they have a wobbly look to them. If it had truly learnt how to draw a bike or skyscraper, the perspective would be consistent through the image rather than the Pablo Picasso look where the angles don't quite add up, and it would not add a random bit of background to a wheel.
« Last Edit: October 07, 2022, 10:38 by HalfFull »

Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #81 on: October 07, 2022, 12:30 »
0
Using the wheel example: the machine looks at 10,000 images of wheels and learns how to draw a wheel (or what it concludes is a wheel). It is learning what visually makes a wheel, not copying previous photos and drawings of wheels. That's the difference.
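The "learns what makes a wheel" idea can be sketched in miniature. This is a hypothetical stand-in, not DALL-E's actual architecture, and all the numbers are invented: the point is that "training" compresses thousands of examples into a few learned parameters, and "generating" draws from those parameters rather than looking up any stored example.

```python
import random

random.seed(1)

# Hypothetical training data: 10,000 noisy "wheel" observations,
# each a (radius, spoke_count) pair.
train = [(30 + random.gauss(0, 2), round(32 + random.gauss(0, 1)))
         for _ in range(10_000)]

# "Training" compresses all 10,000 examples into just two learned parameters.
learned_radius = sum(r for r, _ in train) / len(train)
learned_spokes = sum(s for _, s in train) / len(train)
model = (learned_radius, learned_spokes)

# "Generating" a new wheel samples around the learned concept; no training
# example is looked up or copied (none are kept in the model at all).
def generate():
    return (model[0] + random.gauss(0, 1), round(model[1]))

new_wheel = generate()
assert abs(new_wheel[0] - 30) < 10  # plausible wheel, near the learned concept
```

The model here is two numbers, regardless of whether it saw 10,000 or 10 million examples, which is why its output resembles the training set statistically without containing any element of it.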

If it was using actual elements from photos of wheels, then there would be a possible problem. But there's no way out: how do you pay 500 million people, or pay for unattributed images, and how much? How do you know if one of my images was used and where it came from? OK, simple enough: as payment, if someone makes a claim, you get a $10 credit on your Dall-e account for the viewing of the image. You have been paid!

But if it's only using rights-free images, or paid-per-view images where the agency sold them the right to view, then there's no protection.

Topic: Dall e 2 will make us all redundant?

We aren't redundant already?

« Reply #82 on: October 07, 2022, 13:41 »
0
...
The MTB cyclist was another good example: it doesn't know the wheel is an element on its own, so it included a sample of background from an image that didn't match the rest of the background it created. It's like using Content-Aware Fill in Photoshop: sometimes it samples (grabs) a section from the wrong part of the image to fill in the gap, and it stands out like a sore thumb. The AI is grabbing multiple bits from thousands of images to create a new one. I'd imagine that's why the perspective at times looks off in the images it produces, as the samples taken don't all have the same perspective and they have a wobbly look to them. If it had truly learnt how to draw a bike or skyscraper, the perspective would be consistent through the image rather than the Pablo Picasso look where the angles don't quite add up, and it would not add a random bit of background to a wheel.

yes - and if you look at the rider there are many weird pieces - pasture or mountains on his back (the mountain can actually be mistaken for a daypack); the face is random bits

at present, to make the DALL-E bugs into features, I REQUEST a Picasso effect

new raw set of MTB  (DALL-E has an annoying tendency to crop - and doesn't understand 'no cropping' etc. in the input phrase)

« Last Edit: October 07, 2022, 13:47 by cascoly »

« Reply #83 on: October 11, 2022, 15:18 »
+3
Repent! the end is near!

had 1st sales of DALL-E art from AS & SS today - the dimes are pouring in!

« Reply #84 on: October 12, 2022, 00:13 »
0
DALL-E has an annoying tendency to crop - and doesn't understand 'no cropping' etc. in the input phrase
Yes, I noticed that too, and sometimes the crops are really ridiculous, like cutting off the whole head of a person. I tried all kinds of phrases to avoid this, like "no crop, not cropped, person fully visible in picture, head not cut off". I have not been able to figure out any kind of instruction that DALL-E understands.
But, to be honest, even though I was searching for images for work and could have used some good results, I was still glad to see such flaws. The more flaws I find with DALL-E's performance, the less I am worried that I will be completely replaceable as a photographer, at least in the near future.
« Last Edit: October 12, 2022, 06:06 by Her Ugliness »

« Reply #85 on: October 12, 2022, 05:48 »
0
cascoly

Did you submit those DALL-E 2 images as photos or illustrations?

Thanks

« Reply #86 on: October 12, 2022, 12:26 »
0
DALL-E has an annoying tendency to crop - and doesn't understand 'no cropping' etc. in the input phrase
Yes, I noticed that too, and sometimes the crops are really ridiculous, like cutting off the whole head of a person. I tried all kinds of phrases to avoid this, like "no crop, not cropped, person fully visible in picture, head not cut off". I have not been able to figure out any kind of instruction that DALL-E understands.
But, to be honest, even though I was searching for images for work and could have used some good results, I was still glad to see such flaws. The more flaws I find with DALL-E's performance, the less I am worried that I will be completely replaceable as a photographer, at least in the near future.

yes, i've tried many such phrases and also emailed DALL-E with no reply.

and yes, i see little competition in the near future

« Reply #87 on: October 12, 2022, 12:33 »
+3
cascoly

Did you submit those DALL-E 2 images as photos or illustrations?

Thanks

as illustrations, and that's what i asked for - i don't think the present version is ready for prime time. with illustrations there's much more tolerance (especially on SS) for what would be perceived as noise if submitted as a photo

« Reply #88 on: October 12, 2022, 15:50 »
0
cascoly

Thanks


« Reply #89 on: October 13, 2022, 01:22 »
+3

« Reply #90 on: October 13, 2022, 16:22 »
0
And this is how it's going to be rolled out to the masses:

https://techcrunch.com/2022/10/12/microsoft-brings-dall-e-2-to-the-masses-with-designer-and-image-creator/
I think the end is more near than i supposed! Well i'm selling all my photography gear, as i said before, in order to learn new skills that AI can't get . I'm in the searching for that activities where AI is very weak, i would hear some suggestion from you, maybe it would work for some people here too.

« Reply #91 on: October 13, 2022, 18:04 »
+1
And this is how it's going to be rolled out to the masses:

https://techcrunch.com/2022/10/12/microsoft-brings-dall-e-2-to-the-masses-with-designer-and-image-creator/

the approach is aimed at lower-end uses, for which DALL-E is already good - not replacing microstock (yet)

Seeking to bring OpenAI's tech to an even wider audience, Microsoft is launching Designer, a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. Designer, whose announcement leaked repeatedly this spring and summer, leverages user-created content and DALL-E 2 to ideate designs, with drop-downs and text boxes for further customization and personalization.

Within Designer, users can choose from various templates to get started on specific, defined-dimension designs for platforms like Instagram, LinkedIn, Facebook ads and Instagram Stories. Prebuilt templates are available from the web, as are shapes, photos, icons and headings that can be added to projects.


Interesting to see how deep-pocketed MS deals with the copyright training issue

« Reply #92 on: October 13, 2022, 21:53 »
+2
First killing photographers and illustrators with free images and AI, then designers with Canva, Designer and who knows what else. I can't believe how difficult it's becoming to make a living as a creative these days.

« Reply #93 on: October 14, 2022, 06:43 »
+1
Quote from: Vincent van Gogh
I can't believe how difficult it's becoming to make a living as creative these days.

tupungato

  • Europe
« Reply #94 on: October 14, 2022, 08:25 »
+3
incorporating copyrighted elements, parts of someone else's artwork is inevitable

AI doesn't incorporate anything.
AI learn what is and how to recreate any object (or human faces, animals... everything)

Of course there are legal problems because images used to train are copyrighted; but there is nothing that will be "incorporated" in new images

It's quite new scheme, and it cannot be managed with "classic" discussion, it's completely new issue to solve.

It doesn't incorporate elements per se. But you know how there are microstockers specializing in certain things: they have 30k photos of gold bars, or 10k vectors of starry sky. Inevitably there is a subject where half of all images online come from one successful artist. Inevitably AI will have learned from those images. Inevitably AI will "create" an artwork for something very specific like "angry baby sloth" and it will be 90% inspired by the Angry Baby Sloth webcomic. The AI customer will be oblivious to the existence of the Angry Baby Sloth webcomic, but the copyright will be infringed.

Unless AI is trained on hand picked training images.

Uncle Pete

  • Great Place by a Great Lake - My Home Port
« Reply #95 on: October 15, 2022, 11:02 »
0
OK I went and joined and had some fun. I've used all my free credits already.

Personal conclusion: it doesn't do very well at turning descriptions into useful images, but it does some interesting things and can be fun for wild imaginary scenes. The final images lack realism much of the time and have distortions and flaws. It does better at creating something that can be converted into an illustration kind of project.

I tried uploading my own images, only a few, and for some of them it gave back the same image except for some warped bees and small changes. That was with my input; with other images I might have gotten better results and variations.

I think it crops too tight many times and it does have issues with wheels and circles.

It's fun and I'm impressed. Triple cheeseburger, with lettuce, tomato, onion, pickles.



« Reply #96 on: October 15, 2022, 14:58 »
0

« Reply #97 on: October 15, 2022, 15:56 »
+1
OK I went and joined and had some fun. I've used all my free credits already.

Personal conclusion: it doesn't do very well at turning descriptions into useful images, but it does some interesting things and can be fun for wild imaginary scenes. The final images lack realism much of the time and have distortions and flaws. It does better at creating something that can be converted into an illustration kind of project.

I tried uploading my own images, only a few, and for some of them it gave back the same image except for some warped bees and small changes. That was with my input; with other images I might have gotten better results and variations.

I think it crops too tight many times and it does have issues with wheels and circles.

It's fun and I'm impressed. Triple cheeseburger, with lettuce, tomato, onion, pickles.



Pete, I did the same thing today. I signed up and spent my 50 credits. My enthusiasm is very limited. Pretty much all the results were crap quality and looked artificial, much like your burger.

For photography I don't see any serious competition at this point, for illustration it might be different - here I had some interesting results.

« Reply #98 on: October 16, 2022, 03:28 »
+1
Midjourney has the best results by far at the moment IMHO. Dall E 2 still looks pants, but look how far it has come in a few months. Give it another year and it could be flawless (if it doesn't hit some kind of ceiling).

Two interesting things. First, Midjourney can produce amazing results, but they all have the same Midjourney "feel". Will this lead to stagnation, with just a few styles depending on the engine used? Second, what happens when no one can afford to make a living at photography and Dall E only has social media posts to pull from? Does all "photography" (AI generated) end up with the same style, devoid of any personality?

Also weird how quickly looking at Midjourney results is sapping my appreciation for the artwork. There's something about feeling a connection to the artist. The feeling of awe at the craft and emotion an artist puts into the work is instantly sapped when you know an AI has output it. Even if the work looks identical on a surface level. Weird. Can't imagine ever being interested in going to an exhibition of AI work, for example.

« Reply #99 on: October 16, 2022, 03:37 »
+1
what happens when no one can afford to make a living at photography and Dall E only has social media post to pull from?

DALL E doesn't "pull" images from anywhere. It has been trained with existing images. Even if no one can afford to make a living from photography anymore, DALL E will not "unlearn" what it has learned.


 

