In photography, “workflow” refers to the complete process from taking a photo to producing the final print or digital copy. This includes lighting and setup beforehand, camera settings, and post-processing. Since my lighting comes from the sun, and setup is basically “go somewhere and look for things to photograph”, I really use the term to mean post-processing.
Okay, that’s a bit of an exaggeration. Finding something to photograph usually involves a bit of planning. And actually taking the photograph involves both compositional decisions (is this the right viewpoint? should I zoom in or out? can I wait for that other thing to move away?) and lighting decisions (avoiding the sun both directly behind me, which leaves no shadows in the image for texture, and in front of me, which causes flare). But there’s quite a bit of luck both in finding and composing a subject and in getting good lighting.
Some photographers, and I’m tempted to call them “real” landscape photographers, plan things out with great attention to time of day/year for lighting angles, weather, and season. I won’t say I don’t do any of that, because there are places I’ve returned to in particular seasons when I thought I could retry an image that hadn’t worked before. But by and large I tend to take photographs with very little forethought, just seeing something, making a few quick choices, and snapping the shutter. For every image that makes it online, a dozen or two were taken but passed over because they were badly executed or simply not “interesting enough”. I am getting better at it. Practice does indeed help. But I rarely set out to photograph something or some place specific. Even if I plan a destination, what I end up finding photogenic can be very different from what I thought I was going to photograph. Everything up to the shutter click is thus more about being in the moment than about purposeful action. After the shutter clicks is when I get organized.
But to post-process, you first need to start with a photograph, and there are things I do when taking one to set myself up for success in post. My camera uses image-stabilized lenses, so I can get away with hand-holding at relatively slow shutter speeds without blurring if I’m careful, going as low as 1/15 of a second, but I usually try for 1/30 or faster. That is, if I’m photographing static subjects with a wide zoom. If I’m zoomed in or if there’s motion (blowing tree branches, people walking, distant vehicles) I need to bump up the shutter speed (to 1/60, 1/125, 1/250 or more if there are fast-moving objects), even if it means compromising elsewhere. Normally I’ll use 1/250 for any significant motion, but sometimes 1/125 can work. Examining the image at 100% on the camera’s screen after taking it is a good idea if there’s any risk of blurring, and there are times I’ll do that and re-take the shot if I’m in doubt.
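Those rules of thumb amount to a small decision table. A sketch in Python (the cutoffs are just my reading of the paragraph above, for an image-stabilized lens; they’re not anything the camera computes):

```python
def min_shutter_speed(zoomed_in=False, motion="none"):
    """Minimum hand-held shutter speed (in seconds) following the rules
    of thumb above. Illustrative only; motion is "none", "some", or "fast"."""
    if motion == "fast":      # fast-moving vehicles and the like
        return 1 / 250
    if motion == "some":      # blowing branches, people walking
        return 1 / 125        # 1/250 is the usual choice, but 1/125 can work
    if zoomed_in:             # a longer focal length magnifies camera shake
        return 1 / 60
    return 1 / 30             # static subject, wide zoom (1/15 with care)
```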
The ISO setting is also important. ISO 100 would give the best results, but is rarely practical except in direct sunlight if I want good depth of field. More normally I’ll use ISO 200 if things are brightly lit, and go up to ISO 400 or even 800 if it’s overcast, or I’m under tree-cover, or if I need more range. Many of my photos end up being taken at ISO 800. On my old camera, ISO 800 wasn’t substantially more noisy than ISO 200, but ISO 1600 was, so I almost never went beyond 800. On the new camera, ISO 1600 is acceptable and I’ve used 3200 a few times. Depth of field is the last part, and while many things go into this, the most important is f-stop. I try to use f/11, to capture both close foreground and “infinity” as well as to keep the lens where it’s at its sharpest. I will sometimes back off to f/8 or even f/7.1 if I need more light and depth of field isn’t as critical. Going beyond f/11 will cause some loss of detail due to diffraction, so I avoid that, but I’ve used f/16 and even f/22 a few times.
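These shutter/ISO/f-stop trade-offs are all stop arithmetic: each doubling of ISO, halving of shutter time, or full f-stop step is one EV. A sketch using the standard exposure-value formula (textbook math, nothing specific to my camera):

```python
from math import log2

def exposure_value(f_number, shutter_s, iso=100):
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100).
    Settings with the same value produce the same image brightness
    for a given scene."""
    return log2(f_number**2 / shutter_s) - log2(iso / 100)

# Trading one stop of shutter speed for one stop of ISO is a wash:
# exposure_value(8, 1/125, 200) equals exposure_value(8, 1/250, 400)
```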
Finally, I shoot RAW and nearly always in manual exposure mode, so I don’t care what the camera is doing to create a JPEG. I set auto white balance and ignore exposure compensation (which only matters if you use a preset or one of the priority modes). I have the camera set to display a histogram after taking the picture, and I’ll re-shoot with different settings if that shows crushed blacks or blown highlights (histogram bunched up at either end of the range); otherwise I tend to pay attention to what the exposure meter is telling me. I generally let the camera auto-focus, but using only the center focus point, so I can select the right place, lock focus and frame the shot. With that, I usually have a good image to start my post-processing on.
Since taking up HDR in 2013, my practice is usually to take one photo in manual to verify that my reading of the light meter is giving me a correct interpretation of the light balance (this can also serve as a reference, although I don’t use it that way as much as I should), selecting an appropriate ISO. Then I change the camera to a preset that’s set up for a five-exposure manual bracket, and select the exposure based on metering the brightest and darkest locations, and trying to pick an exposure close to the one from the first photo that will cover both. If I don’t like the look of the result (i.e., it wasn’t as balanced as it should have been), I’ll adjust exposure and/or ISO and re-shoot.
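A manual bracket just spaces shutter speeds in even EV steps around the chosen center exposure. A sketch (the step size here is an assumption for illustration; the post doesn’t say what spacing I actually use):

```python
def bracket_shutter_speeds(center_s, frames=5, step_ev=1.0):
    """Shutter speeds (in seconds) for a manual exposure bracket centered
    on center_s, listed darkest to brightest. A longer shutter time
    admits more light, so +EV frames use longer times."""
    half = frames // 2
    return [center_s * 2 ** (step_ev * i) for i in range(-half, half + 1)]

# bracket_shutter_speeds(1/60) -> [1/240, 1/120, 1/60, 1/30, 1/15]
```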
For landscapes, I’ll generally try to focus about 1/3 to 1/2 of the way into the scene, rather than at infinity. This helps keep foreground elements more in-focus, although sometimes at the expense of very distant ones. Although if there’s a critical element that needs to be in focus I’ll choose that or something close to it as my focal point. I use auto-focus, but with only the center sensor point enabled, and use the press-halfway-to-lock focus feature of the shutter button.
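The focus-partway-in heuristic approximates focusing at the hyperfocal distance, which the standard formula computes directly. A sketch (the 0.019 mm circle of confusion is my assumption for an APS-C sensor; the post doesn’t name the camera’s sensor size):

```python
def hyperfocal_m(focal_mm, f_number, coc_mm=0.019):
    """Hyperfocal distance in meters: H = f^2 / (N * c) + f.
    Focusing at H keeps everything from H/2 out to infinity
    acceptably sharp."""
    return (focal_mm**2 / (f_number * coc_mm) + focal_mm) / 1000

# e.g. a 24 mm lens at f/11 -> roughly 2.8 m
```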
There’s some controversy surrounding post-processing. Some people, even some knowledgeable photographers, seem to feel that what the camera captures is “truth” and any modification of that is an untruth. Frankly, that’s utter nonsense. Cameras have never captured anything other than an approximation of reality. They do a better job of it today than in the tintype era, but what a lens and sensor records is not what the eye sees. Sometimes it’s not even a very good approximation. And that recording itself depends on arbitrary settings like “white balance” and “exposure compensation”, usually set by the camera, which means that they’re often very wrong. It’s certainly possible to deliberately lie with a camera in composition or post-processing, and perhaps easier in the current world of digital editing than it was when film and enlargers were used. But the camera lies all on its own, and you need a human to set it straight. That’s an important aspect of post-processing.
My philosophy on post-processing for landscape photography is simple: the final photograph should reflect what I saw when I took it. Within that constraint I’m free to make the maximum use of what the camera recorded, not obligated to use the exact image produced by default. That means I won’t replace a cloudy sky with a nice blue one, or add or remove objects from the image (I’ll make an exception for things like power lines under very rare circumstances, but if I do that, I’ll always say so). Usually all I’m doing is adjusting color (white balance and vibrancy), brightness (exposure and contrast) and “sharpness” (edge detail enhancement via targeted contrast and similar techniques). I’ll also apply corrections for distortion of both color (chromatic aberration) and perspective (fixing tilt, removing “keystoning” of tall structures). And I’ll re-compose images by cropping things off the edges. And when I make a High Dynamic Range (HDR) image from three or more photographs, either I or the HDR software is going to do all of that, and a lot of the work I do there is undoing excesses of the HDR software. I think the resulting images are “true”, though there are some who would argue the point. Now to the details of what I normally do.
RAW photographs, or rather the default JPEGs created from the RAW sensor data, tend to look bland. This is because JPEGs created directly have in-camera processing applied to the image to punch up the color, contrast and sharpness, as well as setting the equivalent of levels to best use the captured 14-bit (on my camera) sensor data in 8-bit JPEG color values. All RAW images need post-processing to do the equivalent of what the camera would do, and there are a number of software packages available for that purpose. Most common are the manufacturer’s own software, Adobe’s Camera Raw (often coupled with Lightroom and/or Photoshop) and Apple’s Aperture. I used to use Aperture, importing directly into it from the camera via USB. Today I use Phase One’s Capture One Pro software, now that Apple has discontinued Aperture.
Once I have a photograph, I load it into Capture One and use that to convert the RAW file to an actual JPEG (or for HDR, a TIFF). I use the Adobe RGB color space while processing images, since this can hold more detail. When I make a final export to JPEG (which is 8-bit per channel color and normally compressed) for web use, this gets mapped to the smaller sRGB color space (typical of the range of colors computer monitors and most printers can support). For HDR I export all of the images to 16-bit/channel uncompressed TIFF in the Adobe RGB color-space to retain maximum information when working with my HDR software, and re-import the resulting TIFF to Capture One. Then that gets exported as the usual compressed sRGB JPEG.
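That Adobe RGB to sRGB mapping is a standard colorimetric conversion: decode the Adobe RGB gamma, pass through XYZ, then apply the sRGB transfer curve. A sketch using the published D65 matrices (this is the textbook math, not what Capture One literally runs internally):

```python
def adobe_rgb_to_srgb(r, g, b):
    """Convert one Adobe RGB (1998) pixel (components in 0..1) to sRGB,
    clipping out-of-gamut values."""
    # Decode the Adobe RGB gamma (nominally 2.2)
    lin = [c ** 2.2 for c in (r, g, b)]
    # Adobe RGB -> XYZ (D65 white point)
    m1 = [(0.5767309, 0.1855540, 0.1881852),
          (0.2973769, 0.6273491, 0.0752741),
          (0.0270343, 0.0706872, 0.9911085)]
    xyz = [sum(row[i] * lin[i] for i in range(3)) for row in m1]
    # XYZ -> linear sRGB (D65)
    m2 = [(3.2404542, -1.5371385, -0.4985314),
          (-0.9692660, 1.8760108, 0.0415560),
          (0.0556434, -0.2040259, 1.0572252)]
    lin_srgb = [sum(row[i] * xyz[i] for i in range(3)) for row in m2]
    # Clip to gamut, then apply the sRGB transfer curve
    def encode(c):
        c = min(max(c, 0.0), 1.0)
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return tuple(encode(c) for c in lin_srgb)
```

Neutral grays and white survive essentially unchanged, while a fully saturated Adobe RGB green lands outside the sRGB gamut and gets clipped, which is exactly the detail loss the wider working space postpones until final export.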
If I need to work in Photoshop, I use Photoshop Elements/Express, and as that is crippled I have to export to it in 8-bit TIFF with the sRGB color space. I mostly use this for panoramic photos, and otherwise avoid it. If I do use it, it will be the last processing step before creating a JPEG for use.
As part of loading images, I’ll add metadata describing the location and other useful defaults like the copyright and creative commons license info and the website URL. I only discard really badly flawed images. Something that’s just “meh” now might have some aspect I find interesting later, and disk is cheap. Either when I load the images or shortly afterward, I’ll review them and mark potential candidates using the Rating feature (two stars means “maybe”, five means “posted”). Later I’ll actually work on one or more of them, before deciding on a final one to post for that week’s entry.
My photo library is stored on a RAID array (currently a Drobo 5D, but I’ve used others) with copies of the “catalog” (file of images) on individual hard disks I can store separate from the main library. I don’t delete the copy from the camera’s memory card until the image is in at least one archive in addition to the main library, which is usually done within a day of taking the photo. That’s important for the long term. Disks will fail, often without warning, as can flash memory, and even a RAID array could be destroyed in a fire. I value my photos, both the current ones and older ones I’ve scanned from film and slides taken by myself, my parents, and others. Ensuring the long-term survival of an image is as much a part of workflow as correcting the color balance.
My first step after selecting an image for improvement is usually to determine if the white balance chosen by auto-white-balance and used for the initial preview of the RAW is correct. If I thought to put my gray reference in one photo at the same place before taking another (which I don’t do as often as I should) I can use the White Balance adjustment’s eyedropper to select that, and then copy the Temp and Tint settings to the image I’m working on. But even if I do that, I need to look at the image and ask “is that what I saw?”. Often it doesn’t end up looking quite right, and I’ll experiment with small changes in Temp (100 degrees plus or minus) to see if that shifts it in a better direction. If I don’t have a gray reference, I’ll look for a neutral gray in the image, but it’s rare that there is one. Most “gray” objects in nature have color tints, and using the eyedropper on them will just make things look terrible. I often end up just trying different values until I like the result. Once I like Temp, I may experiment with Tint, although that less often needs modification.
After white balance, I’ll make any tilt changes to make the image level, and crop the result if needed. I try to compose images in-camera (meaning before I press the shutter), and avoid both tilting and cropping in software, but sometimes it’s necessary. When doing HDR, I usually put off cropping and tilt adjustments until after re-importing the HDR image, just to avoid having to modify five photos identically.
Chromatic Aberration is the next adjustment. This adjustment pane isn’t part of the default set, so it needs to be added. But first I “Duplicate Version” and add the adjustment to the new one (I leave both in a “stack” so I can see that they’re associated, and not separate images). With two versions, I can switch back and forth for A/B comparisons, to see how the changes affect the image. The Chromatic Aberration adjustment isn’t always needed; the problem most commonly occurs when an object is backlit by the sun, although I’ve seen it on the edges of rocks that weren’t backlit. I use the loupe at 200% to examine parts of the image that may need this, and apply just enough to remove or minimize the colored halo, then I do an A/B check to make sure I haven’t messed up the image colors.
After that, if not doing HDR I’ll make exposure adjustments if needed (using a new version). I’m still learning this one, and often don’t use it at all since I try to get exposure right to start (I’ve read it can be useful even on properly-exposed images, but I’m still feeling my way around that). Following exposure comes either Enhance or Highlights & Shadows (the order can change, and I don’t always do both; each will be its own version if I do).
Enhance is useful mainly for making the subdued colors perk up a bit. A very small amount of Vibrancy (0.07 – 0.20) is about all I usually need. I’ll sometimes use the other settings, but that’s rare and only needed if the image has some kind of problem. Highlights & Shadows is more common, as I usually need to boost one or the other (rarely both). Again this is usually a small amount, 10 to 20 on those scales.
After that I may add a Curves adjustment (rather than using Levels), in yet another version. I’ll try the auto settings on these, which sometimes works but often makes a mess (the back-arrow on this setting gets used a lot, but it reverts all of the changes in this adjustment pane, not just the last one made, so it’s a bit drastic). Otherwise I’ll try the black and white eyedroppers, or just do it manually. Sometimes I’ll do Curves before Highlights & Shadows, or otherwise alter the order, and for really complex images I’ll bounce back and forth tweaking things until it looks right. On many images I don’t use Curves at all, and only make minor adjustments to highlights or shadows.
If I’m going to make changes in Photoshop, this is likely the point at which I’ll do it, since that’s going to compress me to 8-bit values and the sRGB color-space (I use Photoshop Express, which is limited), exporting to a PSD or TIFF file and re-importing the result before doing any sharpening.
For HDR, I skip most of the Capture One adjustments (other than Chromatic Aberration correction, which is best done on a RAW image) and export five uncompressed Adobe RGB 16-bit TIFF images. In my HDR software, which presently is Nik Software’s HDR Efex Pro 2, I’ll normally use one of its preset modes and then adjust as needed, but sometimes none of them are really right and I’ll simply work with adjustments. Here the key is first to avoid HDR artifacts, like unnatural colors or halos around areas of sharp contrast, and second to show as much detail as possible in shadows and bright areas without making the contrast look washed out. This works best if the center of the bracket I took was roughly in the center of the range of available light. When satisfied I export the results (again an uncompressed TIFF still in the Adobe RGB color space) and go back to Capture One (creating a library copy, as well as my base for final adjustments and exporting).
I’ve experimented with using UCT’s HDR Expose 3 rather than Efex Pro, but the current version still has severe ghosting problems with hand-held images, as it’s really intended for images taken from a tripod. So for now, I continue to use my original choice.
Finally, once I’m happy with the image, comes Sharpening. Capture One does this significantly differently from Aperture, and I’m still learning this aspect of it.
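Sharpening here means edge-contrast enhancement, essentially unsharp masking: subtract a blurred copy of the image from the original and add a fraction of the difference back, which exaggerates edges. A toy 1-D illustration (my own sketch, not Capture One’s algorithm, which works in 2-D and adds radius and threshold controls):

```python
def unsharp_1d(row, amount=0.5):
    """Toy unsharp mask on a 1-D row of brightness values."""
    n = len(row)
    # 3-tap box blur, clamping at the edges (real tools use a Gaussian)
    blurred = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
               for i in range(n)]
    # Add back a fraction of the detail the blur removed
    return [row[i] + amount * (row[i] - blurred[i]) for i in range(n)]

# Across a step edge like [0, 0, 0, 1, 1, 1] the output overshoots on the
# bright side and undershoots on the dark side, making the edge "pop".
```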
From what I’ve read, the appropriate value is going to be different for online or print use (and printers do their own sharpening, so the latter may even be printer-dependent). To date I’ve been focused on online use, so I do this to look good at 100% on screen. And once I think I have it, I’ll export a full-size JPEG at 80% compression (a.k.a., “quality”) level, and view it in “actual pixels” on the screen to make sure it’s good.
Many times not all of that is needed. Some images get essentially no processing at all. Most get at least a bit of adjustment and edge sharpening. On the larger images from my new camera, Sharpening doesn’t seem as important as it did on the old one, but it’s still useful and after waffling a bit, I’m now doing it on most images.
And with that, I have a JPEG file I can post online on this site. Exactly what I do differs for every image, and I’m still learning this aspect, so this is all likely to change as I go forward. But today, that’s how I do it.
As my workflow evolves I’ll make notes on individual posts about what I did new that time. The Workflow category is only used to mark posts that are about workflow (I’d originally planned to use it for ones that described changes to workflow, but that’s nearly all of them as I’m constantly doing things differently in minor ways, to see how they work).