What Algorithms Can’t Curate: The Enduring Power of Human Selection
There are already too many options. The problem is no longer how to increase production; it's deciding what to keep. Generative AI turned out to be costlier than anticipated: it created a new problem it cannot solve, the curation of what has meaning.
Human selection is the irreplaceable core of creative value in the age of infinite generation. The more AI floods the world with output, the more curation becomes a premium, scarce, and deeply human act because choosing what stays, what’s shown, and what means something requires judgment that machines fundamentally cannot replicate.
1. More Output, Less Novelty
Eric Zhou and Dokyun Lee analyzed over 4 million artworks on a major digital art platform and published their findings in PNAS Nexus in 2024. They wanted to know what happened when artists adopted text-to-image AI tools. The results were dramatic: adopters saw a 25% increase in creative output and a 50% boost in engagement (favourites per view). But that is not a win.
While peak content novelty increased—meaning the best, most unusual work got even better—average visual novelty declined over time. The typical AI-assisted artwork became more predictable, more stylistically flat. The tools expanded the pool of ideas, but they also pulled creators toward a kind of aesthetic convergence.
Zhou and Lee’s interpretation is key: AI is excellent at ideation. It can generate effective starting points, variations, and raw material. But it cannot perform filtering. It doesn’t know which of a thousand generated images is the one that resonates, the one that carries meaning, the one worth showing. That remains a human skill. And in their data, the artists who thrived were the ones who learned to navigate, select, and refine AI outputs, not the ones who simply published everything the model gave them.
This created a productivity paradox. AI makes it easier to make more, but harder to make better. The bottleneck has shifted from production to curation.
2. The Authorship Effect: Why Labels Change Everything
Lucas Bellaiche and his team at Duke University conducted an experiment, published in Cognitive Research: Principles and Implications in 2023. They showed people identical images, telling half the participants that each image was made by a human and the other half that it was made by AI.
When labelled “human-made,” the images scored significantly higher on liking, beauty, profundity, and worth. Same image. Different story.
Two factors explained most of the gap: narrativity (the sense that there’s a story, a human experience, behind the work) and perceived effort (the belief that someone struggled, iterated, and cared). When people thought a human made it, they inferred intention, emotion, and a journey. When they thought an algorithm had made it, those inferences disappeared.
This finding was replicated by Federico Magni and colleagues in the Journal of Business and Psychology, who ran four experiments with over 2,000 participants. They found that evaluators consistently rated AI-attributed work as less creative—not because the output was objectively worse, but because they perceived less effort behind it. Effort, it turns out, is a proxy for meaning. And meaning is what we’re really judging.
Here’s why this matters for curation: A curator’s job is not just to pick good work. It’s to tell the story of why it’s good. To surface the effort, the context, the human stakes. Algorithms can rank by engagement or similarity, but they can’t explain why something matters. They can’t build the narrative about why the piece resonates.
3. The Cultural Limits of Algorithmic Taste
But what happens when curation itself is automated?
Jianyu Yan and colleagues surveyed 30 cultural institutions—museums, galleries, digital platforms—and published their findings in the Journal of Education, Humanities and Social Sciences in 2025. The concerns were consistent: algorithmic curation introduces bias, cultural homogenization, and a loss of interpretive agency.
And here’s the problem: Algorithms are trained on datasets, and datasets reflect the past. They encode what has already been valued, already been seen, already been clicked. They’re backward-looking by design. A recommendation engine can tell you what resembles what people have liked before. It cannot tell you what’s new in a way that challenges or expands taste. It cannot recognize work that’s culturally specific, locally meaningful, or ahead of its time.
Emotional resonance depends on human-centered cues like the viewer’s own lived experience. An algorithm can optimize for visual similarity or predicted engagement, but it can’t account for the embodied knowledge that a human curator brings to the table.
Adam Reynolds and Emiliano Ricciardi, writing in Psychology of Aesthetics, Creativity, and the Arts (2024), found that subjective emotional responses explain aesthetic appeal better than formal perceptual features do. In simpler terms, what makes a work land is not its pixel-level properties but the human experience it triggers.
Lauri Nummenmaa's 2023 research in Cognition & Emotion, mapping art-evoked feelings across thousands of artworks, found that emotional responses carry distinct bodily "fingerprints" — and that these responses are especially activated when human figures and human stories are salient.
Generative models struggle with culturally anchored complex narratives—the kind of work that draws on specific histories, subcultures, or marginalized perspectives. Without intentional human oversight, AI curation risks flattening the cultural landscape, amplifying dominant aesthetics and erasing the edges.
This is not a hypothetical risk. It’s already happening. And it’s why human curators, the people who understand context, who can read between the lines, who know what’s at stake, are more essential than ever.
4. What Curation Actually Is
Curation is not just “picking favorites.”
Curation is:
Filtering — deciding what’s worth attention in a sea of noise.
Contextualizing — explaining why something matters, where it fits, and what it’s in conversation with.
Sequencing — arranging work so it builds meaning, tells a story, creates an experience.
Stewardship — protecting cultural memory, elevating underrepresented voices, making long-term bets on what will endure.
None of these is an algorithmic task. They require cultural literacy and values. A curator makes choices that reflect what they believe is important, not just what’s popular or predicted to perform well.
The research backs this up. Zhou and Lee’s data show that the artists who succeeded with AI were the ones who developed strong selection skills—the ability to sift through hundreds of generated images and identify the handful worth refining. The ones who failed were the ones who treated AI as a vending machine: input prompt, output art, publish. That’s not curation. That’s automation. And the market can tell the difference.
Curation is now creative authorship. In a world where anyone can generate a thousand images in an afternoon, the creative act is no longer making the image; it’s deciding what it means and who needs to see it.
5. The Manifesto: Choose Like It Matters
We are entering an era of infinite mediocrity. Not because AI is bad, but because it’s too good at producing the average, the expected, the statistically likely. It has already flooded every social platform, every feed, every inbox with competent, polished, utterly forgettable work.
The people whose work will be seen, remembered, and paid for are the ones who can curate. Who can look at a hundred options and choose the one that’s true. Who can say no to the fast and easy yes. Who can build a body of work that has a coherent point of view, a through-line, a reason to exist beyond “the algorithm made it.”
Algorithms optimize for engagement. Humans optimize for meaning. And meaning is what people will pay for.
Human curation is now the irreplaceable core of creative value. The more AI generates, the more your ability to choose—thoughtfully, intentionally, with conviction—becomes your competitive advantage.
Why This Matters For Your Creative Practice
Treat AI as a collaborator in ideation, not a replacement for judgment. Use it to generate options, explore variations, and break through your creative blocks. But never outsource the decision of what’s worth keeping. That’s your job, and it’s where your value lives.
Develop your selection skills as deliberately as you develop your production skills. Practice saying no. Practice challenging. Build a personal canon of work you admire and articulate why it’s good. Constantly train your eye, your ear, your taste. Curation is a muscle.
Tell the story of your choices. When you share work, explain why it matters. What you were trying to do. What you learned. What you rejected along the way. Context is what separates a portfolio from a feed. Narrative is what makes people care, and often it carries as much weight as the work itself.
Resist the pressure to publish everything. Volume is not a strategy. The artists in Zhou and Lee’s study who succeeded weren’t the ones who posted the most—they were the ones who posted the best. Quality is a curatorial act.
Invest in cultural literacy and contextual knowledge. The more knowledge you accumulate about history, about other disciplines, about the world in general, the better your curatorial judgment will be. Algorithms can’t read the room. You can.
Build systems that preserve your agency. If you’re using AI tools, choose ones that let you steer, iterate, and override. Avoid black-box systems that make decisions for you. Your creative process should amplify your judgment, not replace it.
References
Bellaiche, L., Shahi, R., Turpin, M. H., Ragnhildstveit, A., Sprockett, S., Barr, N., Christensen, A., & Seli, P. (2023). Humans versus AI: Whether and why we prefer human-created compared to AI-created artwork. Cognitive Research: Principles and Implications, 8(1), Article 42. https://doi.org/10.1186/s41235-023-00499-6
Magni, F., Park, J., & Chao, M. M. (2023). Humans as creativity gatekeepers: Are we biased against AI creativity? Journal of Business and Psychology, 39(3), 643–656. https://doi.org/10.1007/s10869-023-09910-x
Nummenmaa, L., & Hari, R. (2023). Bodily feelings and aesthetic experience of art. Cognition & Emotion, 37(3), 515–528. https://doi.org/10.1080/02699931.2023.2183180
Reynolds, A. P. F., & Ricciardi, E. (2024). Subjective emotional instances surpass formal perceptual features in shaping the aesthetic appeal of artworks. Psychology of Aesthetics, Creativity, and the Arts. Advance online publication. https://doi.org/10.1037/aca0000720
Yan, J., Wang, Y., & Liu, W. (2025). The application and challenges of artificial intelligence in contemporary art curation. Journal of Education, Humanities and Social Sciences, 60, 205–211. https://doi.org/10.54097/xnpkv747
Zhou, E., & Lee, D. (2024). Generative artificial intelligence, human creativity, and art. PNAS Nexus, 3(3), Article pgae052. https://doi.org/10.1093/pnasnexus/pgae052