Photography as Art – A Brief History

A brief history of photography as art, from its beginnings in the mid-1800s to the impact of computational photography and artificial intelligence.

Back in the mid-1800s, photography had just been born. New advances came in rapid order. All who saw the amazing photographs were struck by their realism. Painters who made their livings from portraits saw their businesses shrink virtually overnight, replaced by the camera.

It was this realism that separated the photograph from the other visual arts. The generally accepted idea was that photographs could never be art. And yet some photographers took exception to that. They contended that photographs could be art, and they borrowed techniques from painters to prove it. In England, Henry Peach Robinson published his ground-breaking book, Pictorial Effect in Photography: Being Hints on Composition and Chiaroscuro for Photographers, in 1869; it not only made a powerful and effective case for photography as an art form but also provided guidelines on how to accomplish that end. Pictorialism became a movement that swept European photographers and, in the United States, had Alfred Stieglitz as its most influential proponent and promoter. Even Ansel Adams had his Pictorialist period.

While Pictorialism was many things, one of its defining qualities was a soft focus that reproduced the effect generally seen with paint on canvas. Composite images made from more than one negative were also very common and well accepted, even demanded. But the main intent was to create photographs that went beyond realism and elicited an emotional response in the viewer.

Photography continued to evolve as an art form along with Modernism and the social and intellectual movements it spawned. Group f/64, founded by Willard Van Dyke and Ansel Adams in the San Francisco area, rejected the notion that fine art photography must have a soft focus. Adams, upon meeting with Stieglitz in New York, convinced him that sharply focused photographs of the natural world were also art.

Not only that, but Adams also developed the Zone System, which brought a high degree of precision and accuracy and gave photographers great control through the interpretive decisions they made at each step of the photographic process. One can get insights into those numerous decisions from Adams’ insightful book Examples: The Making of 40 Photographs. Each decision shaped the path to the final print and what the photographer visualized it would convey. The photographer was in complete control of the entire process.

Eventually Kodak and other film companies took over the developing and printing parts of the process. This removed the investment individuals would otherwise have to make, not only in the needed equipment, chemicals and time, but also in the knowledge and experience required to make the decisions that led to the final print. On the one hand, photographers lost control of a good part of the process; on the other, the masses gained access to this wonderful new technology and the benefits it provided. They simply delivered the undeveloped film to the lab or the neighborhood photo store and a few days later received the developed negatives and prints.

Exposure was hit and miss for the general population. Professionals and serious enthusiasts used light meters to calculate their exposures. But the general public just guessed, although in many cases their cameras didn’t give them any control over exposure anyway. It was only a matter of time, however, before camera manufacturers would incorporate light meters into the camera itself. The first one I ever saw had a gauge with a needle on top of the body that moved back and forth as the aperture and/or shutter speed was adjusted. The idea was to line up the needle with the mark in the middle. The decisions once made with a handheld light meter were essentially built into the camera.

But again, progress being what it is, camera manufacturers took the next step, not only including a light meter in the camera body but also creating lenses whose apertures could be controlled by the body. In this way the camera could determine the exposure for the photographer and relieve them of the task of deciding what aperture and shutter speed to use. The only hitch was that you wouldn’t know whether the camera had made the correct decision until the processed film came back from the lab.

A similar thing happened when autofocus came along. Before autofocus, SLRs had microprism or split-image focusing screens, the latter being the more accurate. The photographer chose the object to focus on and adjusted the lens’s focus ring until that object was no longer split through the center. In other words, the photographer had to decide what to focus on. With autofocus, that decision was given to the camera.

As technology advanced, even sophisticated cameras became easier to use because they made more and more of the complex decisions for the photographer. And experienced photographers understood that the camera could sometimes make sub-optimal decisions and that they had the ability to make adjustments or override the camera’s decision-making altogether.

Then digital cameras came along and, as they inevitably grew in quality, functionality and stature, began to replace film.  The casual photographer no longer needed to send film to the lab to see their pictures. With JPEG files they could see them instantly on their computer screens.  And if they chose, they could send their JPEGs to a lab to get prints. 

The serious photographer now had a new ‘undeveloped negative’ in the RAW file. These photographers developed their own ‘negatives’, at first with Adobe Camera Raw and eventually with Lightroom or other comparable RAW conversion apps. This opened up ‘darkroom’ techniques that were undreamed of, or at best extremely difficult to achieve, in the chemical darkroom. Many large-format photographers still preferred film but scanned their transparencies so they could process them in Photoshop.

RAW files are, by nature, uninspiring and require processing to unlock their potential. JPEG files, on the other hand, are processed by the camera. The photographer has some control over this, specifying how to handle contrast, saturation, sharpness and other variables. But from that point on, the camera makes all the decisions about how the image will be enhanced.

With RAW files, the decisions are back in the hands of the photographer. Apart from the decisions the camera may make with regard to exposure and focus (which can be overridden or disabled), the photographer is back in control of the whole process from the moment of inspiration to the final product, especially if they do their own printing. Even those who have their images printed by labs have the option of letting the lab enhance their images or printing them as they are.

With the state of technology today, artists can choose to retain total control over the artistic process, making the many interpretive decisions that go into creating a work of art. And some advances, especially in the digital darkroom, have made enhancements that were very difficult in the days of film far easier. At the same time, the ability of novices to create images of high technical quality has exploded. But the next technological development dwarfs everything that has come before.

Enter Computational Photography.  If that sounds to you like a computer is making practically all of the decisions that go into making an image, well, you’re very close to the truth.  And it’s already here – predominantly in our smart phones.

Smart phones can do HDR. Traditionally, that involves taking multiple images at different exposures to capture both the brightest and the darkest parts of the scene. These exposures are then blended together, taking the best parts of each. But smart phones can capture the exposures almost instantaneously by streaming a burst of frames from the sensor, something that isn’t even possible with DSLRs.
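For the technically curious, here is a minimal sketch of the traditional multi-exposure blending step, written in Python with OpenCV’s exposure-fusion tools. The file names are placeholders, and a phone’s real HDR pipeline (burst capture, alignment and learned tone mapping) is far more elaborate than this.

```python
# Minimal exposure-fusion sketch (traditional HDR-style blending).
# The three bracketed shots and their file names are hypothetical.
import cv2
import numpy as np

# Load under-, normal- and over-exposed frames of the same scene.
exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Align the frames first; hand-held brackets rarely line up perfectly.
cv2.createAlignMTB().process(exposures, exposures)

# Mertens fusion keeps the best-exposed, most saturated, most contrasty
# pixels from each frame -- no separate tone-mapping step required.
fused = cv2.createMergeMertens().process(exposures)

# The result is floating point in [0, 1]; scale back to 8-bit for saving.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```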

In-camera panoramas are another example of computational photography. The camera decides when to capture the next image as it is panned across the scene and how to merge it with the previous ones.
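As a rough illustration of the merging half of that process, here is a sketch using OpenCV’s built-in stitcher in Python. The file names are placeholders, and the other half of the job, deciding exactly when to grab each frame as the phone is panned, is not modeled here.

```python
# Minimal panorama-stitching sketch using OpenCV's built-in Stitcher.
# File names are placeholders for overlapping frames taken while panning.
import cv2

frames = [cv2.imread(name) for name in ("pan1.jpg", "pan2.jpg", "pan3.jpg")]

# The stitcher finds matching features in the overlaps, estimates how the
# camera rotated between frames, warps them onto a common surface and blends.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed; the frames may not overlap enough.")
```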

A more sophisticated application of computational photography is the Portrait mode now appearing in smart phones. The ideal portrait has the subject in focus and the background rendered as a pleasing out-of-focus blur, or bokeh. But due to design constraints in smart phone cameras, the image from the tiny lens has a virtually unlimited depth of field, so everything is in focus, subject and background alike. The processing that separates the subject from the background and blurs the background is highly sophisticated. Different phone makers use their own algorithms, but they all rely on machine learning (a.k.a. artificial intelligence).
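To make the idea concrete, here is a heavily simplified Python sketch of the final compositing step, assuming a subject mask has already been produced by some learned segmentation model (the image and mask file names are placeholders). Real Portrait modes go much further, estimating depth and varying the blur with distance.

```python
# Simplified "portrait mode" compositing: blur the background, keep the subject.
# Assumes subject_mask.png (white = subject) came from a segmentation model.
import cv2
import numpy as np

photo = cv2.imread("portrait.jpg")
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)

# Soften the mask edge so the subject doesn't look cut out with scissors.
mask = cv2.GaussianBlur(mask, (21, 21), 0).astype(np.float32) / 255.0
mask = mask[..., np.newaxis]  # shape the mask for per-channel blending

# Fake the shallow depth of field with a heavy blur on the whole frame,
# then composite: subject pixels from the original, background from the blur.
blurred = cv2.GaussianBlur(photo, (51, 51), 0)
result = (mask * photo + (1.0 - mask) * blurred).astype(np.uint8)

cv2.imwrite("portrait_bokeh.jpg", result)
```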

TechCrunch has a wonderful article that lays out both the current state of photography and the role computational photography will play in its future. They make this argument:

“The future of photography is computational, not optical…. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved onto a new, non-physical ‘part’ which nonetheless has a very important effect on the quality and even possibility of the images we take.”

And Apple, Google and Samsung are pouring their research dollars not into improving the optical qualities of their cameras but into adding startling new functionality to their phones, all built on computational photography. Will this technology make it beyond smart phones? Probably. And the cameras most likely to incorporate it are mirrorless models.

So that’s the trajectory smart phones are currently on and quite possibly one that mirrorless cameras will soon follow.  More and more decisions are being made for the photographer.  But what about post processing? Is computational photography entering that realm as well?

Pixelmator Photo is already doing just that on iPads. It uses AI to enhance the color, saturation and tonality of images, to remove unwanted elements, and even to ‘improve’ the composition with its crop tool. With apps such as this, the interpretive decisions the photographer makes in the digital darkroom can be turned over to artificial intelligence.

Is HDR more expressive when blended with a tool such as Photomatix Pro? Without a doubt, because of the control one has. Is the bokeh effect in the background of a portrait better when the right lens settings are used on a camera with a full-frame sensor? Again, without a doubt, at least for now. However, with the sure-to-expand availability of computational photography in all aspects of the craft, from camera to digital darkroom, images created with this technology will become more and more prevalent. And they will be good.

Machine learning is achieved by ‘training’ the computer on millions of high-quality images. From this training come models that implement what the computer has learned. It’s very powerful. But the process raises a few questions. Who determines what a ‘high-quality’ image is? Is there a risk, then, that this will produce a more uniform look in images where artificial intelligence is involved? Will individual expression be sacrificed to convenience?

I don’t mean to imply that there will no longer be photographers who develop a strong personal style and a unique voice, who have profound insights and are able to express them effectively in their work. They are still there. However, with the growing availability of “smarter” tools that gain their smarts through machine learning, more and more people are cranking out high-impact images. The true artists among us, whose images take us beyond that initial impact and make us want to pause, explore and relate to what the artist has to share, are in a shrinking minority, and their influence runs the risk of being diluted.

Already, many decisions are being made for us. Our cameras make exposure decisions. They also decide what to focus on. Third-party add-ons and plug-ins can make adjustment decisions for us in Lightroom and Photoshop. And now AI, with its foundation in machine learning, promises to make even more decisions for us in both areas.

What is at stake is the artistic control that comes from the countless interpretive decisions made throughout the entire creative process. Simplifying the technical aspects of photography is desirable because it frees us to stay in a creative state of mind. But turning over the creative aspects of photography, the activities that come together to fulfill the photographer’s intent, to machine learning and the computations it gives birth to runs the risk of short-circuiting what the early pioneers of photography worked so hard to achieve: to show that photography is a serious art form.

AI will continue to gain traction. It will appear in every aspect of the photographic experience (well, maybe not in our camera bags, but imagine an AI-enhanced tripod [come on, I’m kidding]). More and more people will be able to create high-impact images with a “Wow, that’s cool!” factor. Serious photographers will adjust their workflows to incorporate AI as a starting point, following it up with their own interpretive decisions as they make works of personal expression. But the already crowded arena of technically perfect, high-impact images will get even more crowded.

Are we losing something? To the extent that creating a high-impact image gets easier and easier, then perhaps. When images merely meet a standard defined by AI and a homogeneity of look results, then much is lost. But when AI is used as a tool for individual expression, just like Lightroom and Photoshop, the Zone System, burning and dodging in the darkroom, or composite images made from multiple negatives, then the creative visions of artists are served. Each artist can make the personal choice of how many interpretive decisions to allocate to AI and how many to keep. But in the end, what matters is images that effectively convey the artist’s intent.

It’s still true that how an image gets made is not important, just as the debates over Canon vs. Nikon, film vs. digital, and prime vs. zoom lenses are irrelevant. What is important is what the photograph says.

Let us seek out those photographers who have inspiring, provocative, thoughtful, moving, enriching things to say through their art.


Join me on one of my exciting photography workshops.  Click here for more information.


Author: doinlight

Ralph Nordstrom is an award-winning fine art landscape photographer and educator. He lives in Southern California and leads photography workshops throughout the Western United States.

