“Real” Images And AI-Generated Ones
by William Lulow
I learned how to do “real” photography back in the 1970s, having already mastered darkroom tools, developing, and printing to a high level of quality. When I turned to digital imaging around 2001 or so, I began learning about digital workflow and how it affected my daily work. When I was shooting film, the emphasis was on perfect black-and-white prints and exacting color chromes meant for advertising and reproduction. We shot with cameras of all sizes, from 2 1/4″ medium-format film all the way up to 11×14″ large-format sheet film. In those days, many catalog sheets had to be photographed “to size.” There was no Photoshop or InDesign.
When we were doing portraits and other advertising shots, they had to be of the highest quality so that they would print well. The emphasis was always on creating true images both in color and monochrome. Any altering of the images was always to clean up blemishes here and there and to have all images render true color as sharply as possible.
Over the years, I learned a lot about how to control not only lighting but all the processes that went with image creation, including retouching. When I wanted to make a large print, for example, I often turned my enlarger around so that it projected the image onto the floor or the wall of the darkroom. Most color images were handled by professional color labs that specialized in developing Kodak E-6 color film. Kodachrome slides were still mostly shipped to Kodak labs (in the NYC metro area, the best known was in Fair Lawn, New Jersey). Kodachrome was a slow, ISO 25 color film that yielded transparencies, but since the film had very little latitude for exposures that weren’t precise, processing was limited to what the lab could do. Kodachrome emulsions were fairly consistent and rarely needed testing. Ektachrome, on the other hand, was much more malleable and could be processed in various ways, depending on the emulsion and how the film was exposed.
These days, image processing is much faster and much simpler, given the various digital applications you can run right on your own computer. But there are quite a few things you must learn before you become a true expert at making top-notch images. One of the things that has replaced emulsion control using camera filtration is the PICTURE STYLE menu on many high-end digital cameras today. There are about seven different style categories with roughly five settings each, which makes for thirty-five different settings you can use to control your images. They include saturation, color, sharpness, exposure, and contrast, and that doesn’t even take white balance and normal exposure technique into account. The LANDSCAPE settings can treat overall color balance, and the colors themselves, quite differently from the PORTRAIT settings. Or you can set all the controls to NEUTRAL and do all the editing in post-production. So you have to be careful about which settings you use for the subject matter you are shooting. Here’s what the PICTURE STYLE menu looks like:

So here is a LANDSCAPE image shot with the appropriate PICTURE STYLE on my Canon 90D DSLR:

The colors seem to be rendered correctly; the water and the foliage look as they should in an image like this.
I set my PICTURE STYLE to PORTRAIT whenever I have headshots or portraits to make. The saturation is usually set a bit higher than normal to render skin tones a bit more lifelike, but that’s the only change I make in-camera. The reason is that you don’t want to have to adjust each image manually in post-production:

Here, I’m looking for skin tones to look natural, and the clothing as well. These are the two types of images I shoot most, so I have my cameras and software set up to render all the tones correctly.
I write about all this to give the reader a sense of what photographers were trained to do in the film age and how they have adjusted to the digital age as well. Now the rage seems to be AI, or “artificial intelligence.” Programs have been developed that can generate not only images but text as well, with only basic human input. I have seen many AI-generated images, and they all have a kind of “unreal” quality to them, mostly because they are just that. You can often tell an AI-generated portrait, for example, because if you look in the person’s eyes, you can see that the expression is not genuine, just as you can tell when an image has been heavily retouched or had elements inserted. Many retouchers don’t pay enough attention to shadows or small details of angle, so a picture that seems normal at first glance reveals its “doctored” parts when you look closer. Some portraits are generated from multiple images of a person merged together and then retouched, and it’s hard to get a “real” expression and glimmer in the eyes from many different images assembled artificially. As a portrait photographer with over 40 years’ experience, I can tell when there is a “real” interaction between photographer and subject. It’s in the eyes.
So, as a classically trained photographer, I am a bit averse to using AI methods at this point, and I don’t think there is a good substitute for human contact, at least where making portraits is concerned.