Photography in the Age of AI (2025): Lessons From 25 Weeks of Experimentation
For the past 25 weeks I’ve been running a simple experiment. Every Wednesday, I’d take one of my photographs, ask AI to describe it, and then feed that description back into the machine to see what image it would generate in return. No tweaking, no retries. One human image in, one synthetic image out.
The idea was straightforward: measure how “close” AI could get to the way a photographer sees. Could a model not only recognize what was in front of the lens, but also capture the feeling of being there?
Week after week, I shared the results in Closer. Sometimes the AI came surprisingly close. Other times it missed the point entirely. And in between those two extremes, I found a lot of lessons about technology, creativity, and what it means to make art in the age of AI.
This essay is my attempt to wrap up those 25 weeks: what worked, what failed, what I’ve learned, and why I think the human role in photography is more important than ever.
TL;DR
- I ran a 25-week experiment: one human photo → AI description → AI image.
- AI descriptions were precise, even poetic.
- AI-generated images were polished but often sterile or “too perfect.”
- AI is useful for utilitarian images, but not for art.
- Imperfection, process, and human connection remain irreplaceable.
Key Takeaways
- AI is precise but not profound — strong at recognition, weak at meaning.
- Stock, illustrative, and transactional imagery are already replaceable by AI.
- Human photography thrives on imperfection, context, and embodied process.
- Prompting style matters: structured for precision, conversational for creativity.
- In the AI era, people don’t just want to like the art — they want to like the artist.
Why did I start Closer?
I launched Closer to test whether AI could replicate the way a photographer sees. By repeating one strict experiment each week, I hoped to create a clear baseline: a snapshot of what image generation can and cannot do today.

When I began, the rules were simple. Each week I would take one of my own photographs, made in the world with a real camera, and give it to AI for analysis. The AI would describe what it saw, and then, using its own words, generate a new image.
No retries. No prompt tricks. Just one pass through the system.
This constraint was deliberate. Anyone can coax a model into better results with endless tinkering, but that only hides its natural state. What I wanted was a record of how these tools perform “out of the box,” because that is how most people will experience them.
In a way, Closer became a time capsule. If I had started this three years earlier the results would have been laughably bad, a reminder of how far the technology has come. But in 2025 we are at an in-between moment: AI is undeniably powerful, yet still far from replacing what it means to make a photograph.
Over 25 weeks, this repetition gave me a dataset of its own. It shows not only how the technology behaves, but also what it consistently misses.
What changed in AI image generation in 2025 — and what didn’t?
AI descriptions became remarkably accurate and sometimes even poetic, but the generated images often felt sterile or strangely rearranged. GPT-5 improved speed and perception, but not quality. In short: the tools got better at understanding, not at creating.


If you didn't catch it in the hero image, this example shows how AI consistently smooths away imperfections that make images feel real.
When I started Closer, I expected to see a gradual narrowing of the gap between human and AI. That didn’t really happen. The biggest leaps in image generation came before this project began. If I had run these tests in 2022, the contrast would have been dramatic. In 2025, the improvements are incremental and easy to miss.
Descriptions were where AI shined. In the first ten weeks, the text outputs impressed me over and over again. They were vivid, detailed, even poetic. Sometimes they noticed elements I had overlooked myself. But when those same descriptions were turned into images, the results consistently lost something. AI rearranged the composition, invented details that weren’t there, or smoothed the imperfections into a polished unreality.
Just after the halfway point, the pattern was clear. As I wrote around Week 19, the project had hit a plateau. GPT-5, which launched during this period, made interpretation faster and more consistent, but the generated images still looked like… AI. They were competent but soulless.
The progress that would have wowed us already happened before Closer began. What I captured was not a breakthrough moment but a baseline — a record of AI’s strengths and limits at this stage.
Descriptions vs. Generations: strengths and weaknesses
| Aspect | AI Descriptions | AI-Generated Images |
|---|---|---|
| Accuracy | Often precise, sometimes more cautious than humans | Frequently rearranges, invents, or omits elements |
| Detail | Rich, poetic, occasionally insightful | Lacks nuance, flattens textures |
| Emotion | Can mimic emotional language | Struggles to translate into visual feeling |
| Imperfection | Notes flaws, decay, context | Smooths away irregularities |
| Overall impact | Helps imagine the scene | Looks competent but feels sterile |
What changed with GPT-5?
GPT-5 improved speed and visual perception. It interprets input images more quickly and consistently, but it did not noticeably improve the quality of generated images.
Can AI replace photographers?
AI can take over functional photography — stock images, diagrams, quick concept visuals — but it cannot replace artistic work that depends on presence, imperfection, and story. The gap isn’t about technical quality; it’s about purpose.
Over 25 weeks of testing, one conclusion kept resurfacing: some images are purely functional, while others are meant to carry human experience. AI is more than capable of creating the functional kind. It struggles with the rest.
Where AI already wins
AI excels when images are purely transactional:
- Stock photography
- Instructional diagrams
- Decorative fillers
- Quick concept visuals for proposals
In these cases, the goal is clarity, not connection. If the image is just there to illustrate, AI can do it in seconds — and often at “good enough” quality.
Take the example of Rob Hogenboom at Community Hub OostWest. He wanted to show how a container unit in their space might be converted into a place for creative courses. Normally, he would have left the proposal with text and a simple photo of the current state. That would have been fine. Commissioning an artist wasn't realistic (for budget, planning, or scope reasons), and more importantly, it wasn't necessary.
But with AI, he could combine a photo of the container with a few inspiration images. In minutes, he had a visual sketch of what the transformation might look like. It wasn’t art, but it made the plan more tangible and easier to communicate. Without AI, that image simply wouldn’t have existed. With AI, the proposal was clearer and stronger.




Rob's process: starting from a real image, adding a few inspiration shots for reference, and ending with the final output.
Where AI falls short
The problem arises when images are expected to hold more than information. Photography at its best carries the weight of presence: the walk to the location, the feel of the weather, the decision of where to stand and when to press the shutter. These are not incidental details; they are the experience itself.
In my own project, this gap was obvious. AI could describe and replicate what was in front of the lens, but the generated images stripped away the imperfections that made them human. What was left looked polished, even portfolio-ready at times, but lifeless.
As I wrote in Week 24: "I am not trying to show you a picture of a barn with some spools in front of it. I am trying to show you what walking the Grebbeliniepad feels like." That layer of context, of being there, cannot be synthesized.
The dividing line
So, can AI replace photographers? Only in the spaces where photography was never about art to begin with. The utilitarian side of the spectrum is already changing. But for work that depends on presence, imperfection, and story, the human role is not just intact; it is indispensable.
Why do AI images look “too perfect”?
AI images often look too perfect because the AI models smooth away imperfections, polish surfaces, and avoid the messy details of real life. Human photography, by contrast, thrives on cracks, decay, and atmosphere — the things that make a scene feel real.
One of the clearest patterns across 25 weeks was how consistently AI stripped away the imperfections that give photographs their spark. Weathered textures became smooth. Shadows flattened. The small flaws that carry history, like chipped paint, worn stone, or a faded sign, all disappeared.






This pursuit of perfection raises questions about what we value in art. As the photographer and video essayist Developing Tank has argued in a recent YouTube video, part of photography’s future may lie in leaning harder into tangibility and process. “Much like many other photographers, I love film. It’s fun. It brings back the feeling that you’re actually creating something tangible… There’s a possibility that many art forms will lead into these more tangible experiences as a way of lending legitimacy to art due to how it’s created and as some way of having a truly human experience.”
People still gather to watch live music, glassblowing, or street painters at work, because the act of creation itself carries meaning. In the same way, projects rooted in physical process — like Takuma Nakahira’s 1971 Circulation, Date, Place, Events — remind us that art is not just about outcomes, but about presence and craft.
Typing a few words into a prompt and claiming authorship is like ordering at a Michelin-star restaurant and insisting you cooked the meal. — Developing Tank
That distinction matters. My own work, especially long walking projects, is inseparable from process. The feel of the air, the rhythm of moving through space, the decision to stop and frame a subject — these cannot be synthesized. AI can generate a plausible copy, but it cannot simulate being there. And in stripping away imperfections, it also strips away humanity.
Does prompting style matter?
Prompting style matters because it changes how AI interprets your request. Structured prompts tend to produce more precise outputs, while conversational prompts make the process feel more collaborative and often more creative.
Over the 25 weeks of Closer, I tried both approaches. When I gave the AI tightly structured instructions — checklists, headers, bullet points — the descriptions became longer and more detailed. Sometimes that improved the accuracy, but it also exposed the mechanical nature of the process. The results were correct, yet they lacked spark.
Conversational prompts felt different. When I wrote as if I was talking to another person, the AI mirrored that conversational style. It followed my thought process, picked up on hints, and occasionally surprised me. The images were not always better, but the interpretations it offered were more interesting.
The choice comes down to intent. If you need accuracy, structure helps. If you want exploration, conversation works better. Both have their place, but for a creative experiment like Closer, treating AI as a sparring partner made more sense than treating it as a compiler.
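To make the contrast concrete, here is a hypothetical pair of prompts for the same photograph. Neither is a prompt I actually used in Closer; they simply sketch the two styles:

```text
Structured:
  Task: Describe this photograph.
  Output: bullet points.
  Cover: subject, composition, lighting, color palette, mood.
  Length: max 150 words.

Conversational:
  I took this photo on a foggy morning walk along the dike.
  What stands out to you about it, and what feeling do you
  think it carries?
```

The structured version reliably returns a complete checklist; the conversational one invites interpretation, which is where the surprises tend to come from.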


What should creatives do with AI?
AI should handle the chores. Creatives should handle the art. Use it for speed, sketches, and placeholders, but not for meaning or story. Treat AI as a sparring partner, never as a substitute for human vision.
Across this project, the pattern became clear: if your goal is simply to illustrate, AI will do the job. It can produce images that are functional, transactional, and “good enough.” That covers stock, quick mockups, moodboards, and decorative visuals.
But art has a different purpose. Art is about presence. It is about the process of being somewhere, noticing something, and deciding to frame it. That is not a step AI can replace. As I wrote earlier, I am not trying to show you “a barn with some spools in front of it,” I am trying to show you what walking the Grebbeliniepad feels like. That difference in intent is where the human role is safe.
Photographer and writer Roman Fox recently put it bluntly in a blog post on his website: “With the rise of AI, people don’t just want to like the art, they want to like the artist.”
That search for authenticity is also why projects like Adobe's Content Credentials initiative, or the idea of publishing a "proof of built", matter. Content Credentials aim to attach trustworthy metadata to digital files, making it possible to verify where and how an image was created.


A proof of built serves a similar function. It is an added layer of context — GPS data, editing history, or workflow documentation — that shows the physical process behind an image. In my opening essay, I described this as a way for photography to “prove its authenticity” in the AI era. In an age where models can fabricate convincing visuals in seconds, a proof of built helps underline that a photograph is the result of being there, doing the work, and making the choices that only a human can make.
So what should creatives do? Use AI as a sparring partner. Let it help with the functional layers: sketches, placeholders, and drafts. But reserve your own time, skill, and attention for the work that requires meaning, story, and humanity. That is where your value lies, and where no model can step in for you.
The future of photography in the age of AI
AI is faster and more precise in description than ever, but its generated images still feel sterile. For functional uses it is powerful; for art that depends on presence and imperfection, the human role remains essential.
So where do we go from here? When I began Closer, I framed the experiment around a simple question: could AI ever replicate the emotion and observation that go into a human photograph, and would the gap between us get smaller over time? Twenty-five weeks later, the answer is clearer than I expected. The gap hasn’t closed. AI is powerful at description, useful for utilitarian tasks, and fascinating as a creative partner. But when it comes to making images that carry lived experience, it still falls short.
This is not necessarily a failure. What Closer captured was a baseline. If the real breakthroughs in AI image generation happened before 2023, then what I’ve documented is a snapshot of the present — a record of what AI can do “out of the box” today. That has value too, both as a time capsule and as a point of comparison for the future. It is unfortunate that I didn’t capture the same baseline in 2021 or 2022, when the technology was changing more dramatically. But if significant advances arrive in the years ahead, Closer can be reignited. At least then we will have something solid to measure those changes against.


It’s also possible that today’s AI aesthetic, which so often feels too smooth or painterly, will eventually develop its own nostalgia. Just as film grain, VHS fuzz, and early digital cameras all became desirable looks long after they were technically obsolete, we might one day look back at these first AI outputs with affection. For now, they remain unconvincing, but history shows our tastes are never fixed.
For me, the larger lesson is that AI doesn’t diminish the need for human creativity. It highlights it. By stripping away imperfections, it shows us what we value in them. By making “good enough” functional images, it forces us to ask what makes an image more than functional. And by pushing us to reflect on our own practice, it makes the role of the artist more visible, not less.
The road ahead will be shaped by both humans and machines. We can embrace AI when it helps us move faster or sketch ideas. But we must hold onto the parts of our craft that can’t be replicated: being there, noticing, waiting, telling stories. Because in the end, the question isn’t whether AI can replace photography. The question is why we make photographs at all — and that answer remains entirely human.
FAQ: AI and Photography in 2025
Can AI replace photographers?
AI already handles some photography tasks, especially stock, diagrams, and other transactional work. But photography is more than image-making. It’s process, presence, and story. Those elements can’t be automated, which means artistic and documentary photography remain uniquely human.
Why do AI images look too perfect?
AI models are trained to prioritize smoothness and clarity, which erases the small flaws that signal reality. Imperfections like scratches, chipped paint, or uneven light give images depth and authenticity. AI’s tendency to “over-polish” reveals the gap between simulation and lived experience.
When is an AI image “good enough”?
AI is useful in contexts where connection isn’t required — proposals, moodboards, concept sketches, or decorative visuals. In these cases, speed and clarity matter more than authorship. For art or communication that depends on trust and story, human-made images are still essential.
Should I prompt like a programmer or like a person?
Both styles work, but for different reasons. Structured prompts help the AI focus on accuracy and detail. Conversational prompts bring out more unexpected ideas and mirror a collaborative dialogue. Many creatives mix both: structure for precision, conversation for exploration.
What changed with GPT-5?
GPT-5 made image interpretation faster and more consistent. It can “read” input photos more easily, which speeds up the process. But output quality has not leapt forward. The shift is more about efficiency than artistry — better pipelines, not better pictures.
How can artists future-proof their work?
By making the process visible. This can mean publishing a proof of built (context like GPS data, editing steps, or workflow traces) or using initiatives like Content Credentials to certify authorship. Beyond tools, the key is to lean into imperfection, tangibility, and personal story — things AI cannot reproduce.