This question has been rolling around in my mind for a while, and there are a few parts to it. I’ll need to step through how I got to these questions.
I have used AI as a tool in my own art pieces before. For example, I took a painting I had made more than a decade ago and used a locally hosted AI to enhance it. The content of the final image is still my original concept, just enhanced with additional details and extended into a 32:9 ultrawide wallpaper for my monitor.
I then ran that enhanced image through my local AI again (a different workflow) to generate a depth map and a normal map. I also separated the foreground, midground, and background.
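In case anyone wants to try something similar, here’s a minimal sketch of that depth/normal step using an off-the-shelf Hugging Face depth model. The model choice and file names here are placeholders for illustration, not my exact workflow:

```python
# Minimal sketch: monocular depth estimation, then a rough normal map
# derived from the depth gradients. Model and file names are placeholders.
import numpy as np
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

image = Image.open("painting_ultrawide.png")  # hypothetical input file
depth = np.array(depth_estimator(image)["depth"], dtype=np.float32)

# Normal map from depth gradients: x/y slopes plus a constant z component,
# normalized, then packed into the usual 0-255 RGB encoding.
dz_dy, dz_dx = np.gradient(depth)
normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

Image.fromarray(((normals * 0.5 + 0.5) * 255).astype(np.uint8)).save("normal_map.png")
Image.fromarray((depth / depth.max() * 255).astype(np.uint8)).save("depth_map.png")
```

Wallpaper Engine can then use maps like these for parallax and lighting effects.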
Then I took all of that and loaded it into Wallpaper Engine (if you don’t know what that is, it’s an application for creating animated wallpapers). I compiled the images and proceeded to manually animate, track, and script everything to bring the whole thing to life. The end product is something I really enjoy, and I even published it on the Wallpaper Engine Steam Workshop for others to enjoy as well.
However, with all the AI slop being generated endlessly, and the stigma that AI carries in the art community as a whole, the following questions came to mind:
- Is the piece that I painted, then reworked with AI, then manually reworked further, still my art?
- One step further: I didn’t build any of the tools used to make the original painting. I didn’t create the programming or scripting languages, and I didn’t fabricate the PCBs or chipsets in the computer that runs all of those tools. The list goes on and on; so many of the things I use were not created by me, and it wouldn’t be possible or feasible to credit every single person involved. So, is any artwork that I make actually mine? Or does it belong to the innumerable giants on whose shoulders we all stand?
- Those questions led me to the main question of this post. Say a human grew up seeing only AI slop and, as such, can only reference that AI slop when they create. If that human makes something with their own hands, is the piece they create still art? Is it even a piece they can claim they made?
I’m curious to see what thoughts people have on this.


I think you’re mostly right about how AI works, but some of the conclusions go a bit further than what the mechanics alone really show.
Yes, AI is an algorithm and it’s statistical. It learns patterns and maps inputs to outputs. I don’t really disagree with that part. Where I start to disagree is the idea that this automatically means the output can’t be novel or meaningful. A human brain is also a physical system processing information according to rules. Saying AI is “just an algorithm” only really works as a dismissal if humans aren’t doing something similar, which I’m not convinced is true.
The Excel average comparison also feels a little off to me. Averaging collapses information; generative models don’t really do that. They explore and recombine patterns across a large possibility space, which feels a lot closer to how people learn and create than to how a spreadsheet works. It’s true you could replicate an AI with enough paper and time, but the same applies to any finite physical system, including a human brain. That feels like a point about computability rather than about creativity or authorship.

One point I do agree with: how AI is used matters a lot. If someone is mostly prompting and picking outputs, that’s closer to curation than creation. But that isn’t really unique to AI; we’ve had similar debates over photography, sampling, filters, and procedural art. Art has never been just about manual effort anyway; it’s more about intent and judgment.
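To make the difference concrete, here’s a toy numpy sketch (purely illustrative, not a claim about any real model): the “Excel average” of a two-cluster dataset lands at a point that resembles no actual example, while sampling from even a crude hand-built mixture reproduces the dataset’s variety.

```python
# Toy contrast: averaging collapses a bimodal dataset to a single point,
# while sampling from a (crude, hand-built) mixture keeps its variety.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-5, 1, 500), rng.normal(5, 1, 500)])

print(data.mean())  # ~0.0: the average sits where no example actually is

def sample(n):
    """Pick one of the two modes, then sample within it."""
    modes = rng.choice([-5.0, 5.0], size=n)
    return rng.normal(modes, 1.0)

print(sample(5))  # varied outputs near -5 or +5, like the data itself
```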
So I think where we aren’t lining up is less about what AI is and, as some others have noted here, more about where we draw the line for authorship and responsibility in how it’s actually used. I do appreciate your perspective on it, and it’s definitely a very grey philosophical question to discuss.
“Explore and recombine” aren’t really the words I would use to describe generative AI. Remember that it is a deterministic algorithm, so it can’t really “explore.” I think it would be more accurate to say that it interpolates patterns from its training data.
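To be concrete about “deterministic”: the sampling step in these models is pseudorandom, which is deterministic once the seed is fixed. A toy next-token sampler makes the point (the probabilities are made up for illustration):

```python
# Toy next-token sampler; the probabilities are invented for illustration.
import numpy as np

def generate(probs, n, seed):
    rng = np.random.default_rng(seed)
    return [int(rng.choice(len(probs), p=probs)) for _ in range(n)]

p = [0.5, 0.3, 0.2]
print(generate(p, 5, seed=42))  # some sequence of token ids
print(generate(p, 5, seed=42))  # the exact same sequence: no "exploration"
print(generate(p, 5, seed=7))   # a different seed gives a different path
```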
As for comparison to humans, you bring up an interesting point, but one that I think is somewhat oversimplified. It is true that human brains are physical systems, but just because it is physical does not mean that it is deterministic. No computer is able to come even close to modeling a mouse brain, let alone a human brain.
And sure, you could make the argument that you could strip out all extraneous neurons from a human brain to make it deterministic. Remove all the unpredictable elements: memory neurons, mirror neurons, emotional neurons. In that case, sure - you’d probably get something similar to AI. But I think the vast majority of people would then agree that this clump of neurons is no longer a human.
A human uses their entire lived experience to weigh a response. A human pulls from their childhood experience of being scared of monsters in order to make horror. An AI does not do this. It creates horror by interpolating between existing horror art to estimate what horror could be. You are not seeing an AI’s fear - you are seeing other people’s fears, reflected and filtered through the algorithm.
More importantly, a human brain is plastic, meaning it can learn and change. If a human is told they are wrong, they will correct themselves next time. This is not what happens with an AI. The only way an AI can “learn” is by adding to its training data and then retraining the algorithm. It’s not really “learning”; it’s more accurate to say that you’re deleting the old model and creating a new one trained on more data. Applied to humans, it would be as if you grew an entirely new brain every single time you learned something new. Sounds inefficient? That’s because it is. Why do you think AI is using up so much electricity and resources? Prompting and generating doesn’t consume much; it’s the training and retraining that eats up so many resources.
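For a rough sense of scale, here’s a back-of-envelope sketch using the common heuristics from the scaling-laws literature of roughly 6·N·D FLOPs to train a model and 2·N FLOPs per generated token. The parameter and token counts are made-up round numbers, not any specific system:

```python
# Back-of-envelope only: ~6*N*D FLOPs to train, ~2*N FLOPs per token.
# N and D are hypothetical round numbers, not a real model.
N = 70e9                # parameters
D = 2e12                # training tokens
tokens_per_reply = 500  # one generated response

train_flops = 6 * N * D
reply_flops = 2 * N * tokens_per_reply

print(f"training run: {train_flops:.1e} FLOPs")
print(f"one reply:    {reply_flops:.1e} FLOPs")
print(f"one training run ~ {train_flops / reply_flops:.0e} replies")
```

By this crude math, a single training run costs on the order of ten billion generated replies, which is why the training side dominates.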
To summarize: AI is a tool. It’s a pretty smart tool, but it’s a tool. It has some properties analogous to human brains, but lacks the properties that would make it truly similar. It is in techbros’ best interests to hype up the similarities and hide the dissimilarities, because hype drives up the stock prices. That’s not to say AI is completely useless. Just as you said in your comment, I think it can be used to help make art, in a similar way that cameras have been used to help make art.
But in the end, when you cede the decision-making to the AI (that is, when you rely on AI for too much of your workflow), my belief is that the product is no longer yours. How can you claim a generated art piece as yours if you didn’t choose to paint a little easter egg in the background? If you didn’t decide to use the color purple for this object? If you didn’t accidentally paint the lips slightly skewed? Even supposing an AI were completely human-like, the art would still not be yours, because at that point you’re basically just commissioning an artist, and you definitely don’t own art that you’ve commissioned.
To be clear, this is my stance on other tools as well, not just AI.