

After all, enterprise clients soon realized that the output of most AI systems was too unreliable and too frequently incorrect to be counted on for jobs that demand accuracy. But creative work was another story.
I think that the current crop of systems is often good enough for, say, a header illustration in a journal, but there's also a lot that they just can't reasonably do well. Maintaining character cohesion across multiple images and across different perspectives, for example: try doing a graphic novel with diffusion models trained on 2D images, and it just doesn't work. The whole system would need to have a 3D model of the world, be able to do computer vision to get from 2D images to 3D, and reason about 3D structure rather than 2D appearance. That's something that humans, with a much deeper understanding of the world, find far easier.
Diffusion models have their own strong points where they're a lot better than humans, like easily mimicking an artist's style. I expect that as people bang away on things, it'll become increasingly visible what the low-hanging fruit is and what is far harder.
I haven’t blocked anyone on this account, but it’s new.
On my last one, I think I blocked three users. I believe all of them were basically trying to flood a community so that it was unreadable (one, IIRC, was just posting the same large Simpsons or Futurama image repeatedly throughout a thread to try to stop people from talking).