Choosing a New Raw Processor

My photographic workflow has some kinks and splits.  The main one is between the handling of proofs, event photos, and snapshots, on the one hand, and the treatment given to final versions of important pictures (whether they’re art, restorations, portraits, or whatever).  Since I’m an imperfectly fossilized dinosaur, I think of the two as “machine prints” and “custom prints”, the two categories you could get from a pro lab in the 1970s.

Custom prints require the full power of Photoshop for me.  Since Adobe, in their death throes, has shot off their right foot (that being Photoshop), I’ll be continuing to use CS6 for the foreseeable future.  I can use the other fork of my workflow to produce 16-bit raw conversions as the input to CS6 (so I can continue to use it far beyond where ACR supports my camera bodies).

The “machine print” side has long run through Bibble Pro (and more recently Aftershot Pro, which is what Corel called it after they bought Bibble). This side works by making fairly quick adjustments by eye to groups of photos; often I’ll start with settings for an entire session, and then make additional adjustments to photos from different parts of the session, and then sometimes all the way down to individual series of shots. This is much faster than doing full custom printing on each shot!  But it’s also much better than just using the jpegs that come out of the camera. This is attempting to make “good” machine prints, like the video-analyzed prints from a pro lab, where a person looked at the print and actually maybe turned dials while watching a video screen.  For maybe a full second.

Since Bibble Labs sold themselves to Corel, and then Picturecode wouldn’t renew the agreement about integrating Noise Ninja, Aftershot Pro is no longer a great candidate for my raw processing, and I’ve been wondering where to go next. The obvious place to go was Adobe’s Lightroom—except that even before the “Creative Cloud” disaster I was unhappy with their upgrade policies and their policy of not supporting old versions with new cameras. While I’m not pissed enough, I think, to actually cut off my nose, I’d at least strongly prefer not to give Adobe my money if I can reasonably avoid it.

Having no other pressing business to entertain me, I decided to go through and make an attempt to evaluate what I saw as the interesting candidates for my new raw processor.

Product            Version Evaluated   Price (June 2013)                        Supported OSs
Aftershot Pro      1.1.0.30            $50                                      Windows
Bibble Pro         5.2.3               Not available                            Windows, Linux
Dark Table                             Free                                     Linux, OS X, Solaris
Photo Ninja        1.0.5               $130                                     Windows, OS X
Capture One Pro    7.1.2 build 67846   $300                                     Windows, OS X
LightZone          4.0                 Free (BSD license)                       Windows, OS X, Linux
Lightroom          5.0                 $150 (but frequently on sale for less)   Windows, OS X

This is far from an exhaustive list. In particular there are a number of free-software packages available, many of which don’t support Windows.

Capture One Pro is the “big gorilla” here, to my eye. It’s what supports most of the expensive medium-format digital cameras and backs, and it’s apparently what was nearly universally used in digital production environments (catalog shooters and such, who went digital very early because their high volumes justified the high price).

My evaluation methodology is going to be very casual. I’ve chosen a few pictures that I’m going to go through and process with each processor. I’ll no doubt acquire opinions along the way, which I will publish, and I’ll show the results and discuss what I see in them some. This is neither a deep nor an especially scientific analysis, and it is very me-centric.

I’ll be posting an article every few days on this for a while: first a series of articles, one per raw processor evaluated, and eventually the big conclusion article.  Hope this is all of use to somebody!

Cheap Framing at Large Sizes

Seven of eight prints framed and ready to hang

Framing is one of the most beneficial things you can learn to do yourself, if you’re an artist making flat visual art. Of course, if you’re a successful visual artist, framing is one of the first things you’ll want to give up!

Framing is important for artwork. It protects the work from damage in storage, handling, and display, and provides some protection against environmental dangers.


Digital Photo Archiving

This came up in comments on TOP, and I realized I’d written enough that I wanted to make an article of it and keep it here where I could refer to it easily.

Craig Norris referred to this article about digital bit-rot that he had suffered, and that got me thinking about whether I’m covered against that sort of problem. He says he’s getting a stream of email from people who have had similar problems. I’ve never seen anything like that in my own collection—but I’m doing quite a few things to cover myself against such situations.

Here are the things I’m doing to ensure the integrity of my digital photo archive:

  • ECC RAM especially in my file server. This memory (and associated software in the OS) can detect up to two bit errors in a word, and correct up to one bit error in a word.
  • No overclocking. I’m not taking risks with data integrity on this system.
  • Storing the images on a ZFS filesystem.  ZFS keeps data block checksums independent of the hardware error protection, so it can detect more errors than relying on the hardware alone.  (The data is also mirrored on two disks.  The ZFS checksums are larger than the hardware checksums, and so will detect more error cases; no checksum system will detect all possible changes to a block of data, though.)
  • Run weekly “scrubs”, where it reads all the blocks on the disks and verifies their checksums.  This means errors will be detected within a week, rather than waiting until the next time I look at an image.  This makes it more likely that I’ll have a valid backup somewhere.  (I have not yet detected any error on a scrub.) The early detection, and detection not depending on a human eye, are very valuable I think.

(I believe the BTRFS and NILFS filesystems for Linux also do block checksums.  ZFS is available in Linux and BSD ports, but none of these  are mainstream or considered production-ready in the Linux world (the original Solaris ZFS that I’m running is production-grade).  You could simulate block checksums with a fairly simple script and the md5sum utility, making a list of the MD5 checksums of all files in a directory and then checking it each week.)
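For illustration, such a script might look like the sketch below (the function names and the idea of running the verify half from a weekly cron job are my own placeholders; it assumes GNU coreutils’ md5sum):

```shell
# Sketch of a poor man's "scrub" using md5sum file checksums.
# Run record_checksums once, then verify_checksums weekly (e.g. from cron).

record_checksums() {    # $1 = photo directory, $2 = manifest file (absolute path)
    ( cd "$1" && find . -type f -exec md5sum {} + ) > "$2"
}

verify_checksums() {    # $1 = photo directory, $2 = manifest file (absolute path)
    # md5sum -c prints a FAILED line and exits nonzero on any mismatch.
    ( cd "$1" && md5sum -c --quiet "$2" )
}
```

Unlike ZFS, of course, this can only detect corruption, not repair it; you still need good backups to restore from.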

  • For many of the older directories, I’ve run PAR2 to create redundant bits and checksums of the files in the directory (I choose about 15% overhead).  This gives me yet another way to detect and possibly fix errors.  I should really go through and do more of this.
  • Multiple backups on optical and magnetic media, including off-site copies.
  • Using high-quality optical media for backups (Kodak Gold Ultima, MAM Gold archival).
  • I have a program for analyzing the state of optical disks, which can tell how much error correction is going on to make it readable.  This should give me early warning before a disk becomes unreadable.  I need to run this again on some of my older samples.
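For reference, the PAR2 step looks something like this with the par2 command-line tool (the filenames are placeholders; -r sets the redundancy percentage):

```
# Create recovery data with about 15% redundancy alongside the originals:
par2 create -r15 session.par2 *.NEF

# Later, check the files against the recovery data:
par2 verify session.par2

# If verify reports damage, attempt a repair from the redundant blocks:
par2 repair session.par2
```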

You’ll notice I can’t achieve these things with white-box hardware and mainstream commercial software.  And that ongoing work is needed.  And that I’m behind on a couple of aspects.

I won’t say my digital photos are perfectly protected; I know they’re not. But I do think that I’m less likely to lose a year of my digital photos than I am of my film photos. A flood or fire in my house would be quite likely to do all the film in, while my digital photos would be fine (due to off-site backups).  (So would the scans I’ve made of film photos.)

Furthermore, I realized recently that I’ve been storing my film in plastic tubs, nearly air-tight, without any silica gel in there. I’m working to fix this, but that kind of oversight can be serious in a more humid climate. (If I lived in a more humid climate, I might have had enough bad experiences in the past that I wouldn’t make that kind of mistake!)

Anyway—the real lesson here is “archiving is hard”. Archiving with a multi-century lifespan in mind is especially hard.

Film, especially B&W film, tolerates benign neglect much more gracefully than digital data—it degrades slowly, and can often be restored to near-perfect condition (with considerable effort) after decades in an attic or garage, say.

Most people storing film are not doing it terribly “archivally”, though. Almost nobody is using temperature-controlled cold storage.  Most people store negatives in the materials they came back from the lab in, which includes plastics of uncertain quality and paper that’s almost certainly acidic.

Digital archives are rather ‘brittle’—they tend to seem perfect for a while, and then suddenly shatter when the error correction mechanism reaches its limits. But through copying and physical separation of copies, they can survive disasters that would totally destroy a film archive.

A digital archive requires constant attention; but it can store stuff perfectly for as long as it gets that attention. My digital archive gets that attention from me, and is unlikely to outlast me by as much as 50 years (though quite possibly individual pictures will live on online for a long time, like the Heinlein photo).

Bash Booleans

I keep getting these slightly wrong, and finally got annoyed enough to write a program which produces a cheatsheet.

In [] conditional expressions, presence counts rather than value.
if [ 0 ]: true
if [ 1 ]: true
if [ ]: false
if [ "" ]: false
if [ 'abc' ]: true

In [[ ]] conditional expressions, the same rule applies: presence counts rather than value.
if [[ 0 ]]: true
if [[ 1 ]]: true
if [[ ]]: illegal
if [[ "" ]]: false
if [[ 'abc' ]]: true

In (()) arithmetic expressions, numeric value counts
if (( )): false
if (( 0 )): false
if (( 1 )): true
if (( 3 == 3 )): true
if (( 1 && 1 )): true
if (( 1 && 0 )): false
if (( 1 || 1 )): true
if (( 0 || 1 )): true
if (( 0 || 0 )): false
if (( aaa )): false
if (( '' )): false

In straight if statements, the command’s exit status is used, NOT other things.
But remember that exit status 0 is true and nonzero is false.
if true : true
if false: false
if 0: illegal (no program 0)
if 1: illegal (no program 1)
if true && true: true
if true && false: false
if false || true: true
if false || false: false
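For what it’s worth, the generator program is tiny; here’s a sketch of the idea (hypothetical, and I punt on the “illegal” cases, which simply report as false here since eval returns nonzero on a syntax error):

```shell
# Evaluate a conditional expression and report how "if" treats it.
# Syntax errors ("illegal" above) just come out as false in this sketch.
check() {
    if ( eval "$1" ) 2>/dev/null; then
        echo "if $1: true"
    else
        echo "if $1: false"
    fi
}

for expr in '[ 0 ]' '[ "" ]' '[[ 0 ]]' '[[ "" ]]' '(( 0 ))' '(( 1 ))' 'true' 'false'; do
    check "$expr"
done
```

The subshell around eval keeps a syntax error in one test expression from taking down the whole run.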

Basic Photographic Exposure

Having started photography before automation was the norm, I had to learn how to use a light meter and set exposure by brute force right up front.

I’ve always wondered what the process of transitioning from automatic to manual exposure was like (hordes of people make the transition, despite the widely-expressed belief in the 1980s that if you started out with an automatic camera you never really could; even at the time that sounded like dinosaur talk to me).

I’m a “how this works and how it developed” kind of learner.  People with drastically different learning styles may very well not find the following information useful; there may be too much of it, and it may not be focused specifically enough on what they need to know right now.  I can answer that kind of question too, but when writing explanations in general I tend towards excessive information.

There are three controls on the camera that affect exposure (ignoring flash).

The “ISO” is the sensitivity of the digital sensor (or film; same units are used).  (One of the great benefits of digital is that I can change ISO from shot to shot, without having to change film (which mostly comes in rolls, so changing in the middle of a roll means either wasting some, or doing a lot of finicky manipulations and marking and risking accidental multiple exposures)).  Bigger numbers mean more sensitivity meaning the picture will be brighter (and noisier), all other things being equal.

The “shutter speed” is the amount of time that the film/sensor is exposed to the light coming through the lens.  It’s measured in ordinary time units — 1/60 second, 1/1000 second, and so forth.  More time lets in more light, and makes the picture brighter, all other things being equal.  (Faster shutter speeds will “freeze” faster motion.  Slower shutter speeds will blur motion, including any camera shake.)

The “aperture” is an adjustable diaphragm buried in the middle of the lens that controls how much light is let through.  It’s measured in a weird unit — “f stops”.  (The f number is the ratio of the focal length of the lens to the diameter of the opening in the diaphragm, so it’s a pure dimensionless number.)  What it’s good for is that “f/8” on a 50mm lens puts exactly the same amount of light on the sensor that f/8 on a 400mm lens does.  This is much more convenient than having to figure exposure differently for each different focal length lens (especially today with zoom lenses being the norm).  Bigger numbers let through less light, making the picture darker (all other things being held constant).  Smaller numbers let through more light, making the picture lighter.

To photographers, the “f/stop” is the basic unit we think of exposure in.  And it doesn’t relate at all directly to the f-numbers used to measure aperture.

“One f/stop” means a doubling or halving of exposure.  We’ll say “I need to give this another two stops exposure”, meaning we have to give it 4 times as much exposure (“two stops” means doubling twice, meaning a factor of 4).  We can do this the obvious way with the shutter speed:  give it 4 times as much time (go from 1/400 sec. to 1/100 sec., for example).

With the ISO, the way it works is that a doubling of the number represents a one-stop increase (doubling) of sensitivity; so we could give the shot two stops more exposure by changing from ISO 100 to ISO 400.

With the aperture, to increase exposure by two stops we cut the number in half — going from f/8 to f/4, for example.  Yes, the numbers go the opposite direction from everything else; large f numbers give less exposure, small f numbers give more exposure.  (To change aperture by one f/stop, you change the number by a factor of the square root of 2, which is about 1.4; this gives rise to the classic series of f numbers that experienced photographers have burned into their medulla: 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32.) Changing from f/2.8 to f/4 is giving one stop less exposure, and changing from f/22 to f/11 is giving two stops more exposure.
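That classic series is just successive powers of the square root of 2; a quick sketch regenerates it (raw values shown, where lens markings conventionally round 5.66 to 5.6, 11.3 to 11, and 22.6 to 22):

```shell
# Each full stop multiplies the f-number by sqrt(2), about 1.414.
awk 'BEGIN {
    f = 1
    for (i = 1; i <= 10; i++) {
        f = f * sqrt(2)
        printf "f/%.1f\n", f
    }
}'
```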

Photographers talk about the aperture itself, not the number; we will all say that f/11 is a “smaller aperture” than f/8, and that f/2 is a “bigger” or “wider” aperture than f/5.6.  With a manual enough lens, you can see the actual aperture adjusting inside the lens as you turn the aperture ring, and see that what we call “smaller apertures” are indeed smaller; unfortunately for learning, most lenses don’t have aperture rings any more.  We will “stop down two stops” to go from f/8 to f/16, or “open up a stop” to go from f/5.6 to f/4.

I’m not AT ALL surprised that people coming to this for the first time find it somewhat confusing.  It’s one of those highly-evolved systems from the past with strange terminology.  It actually works very neatly in practice, but it’s hard to explain without producing gibbering insanity at least the first few times.

But it could be worse; we could be representing film/sensor sensitivity using the DIN system, which is logarithmic (a difference of 3 in the DIN number represents a one-stop change in sensitivity).  In fact it might be better if we were; the range of ISOs that we have to deal with in modern cameras goes from at least 50 to 25,600.

“Equivalent exposures” is a vital concept.  If you make equal but opposite changes in two of the factors affecting exposure, the net result will be the same (picture will be equally light/dark).  (With digital, in the useful ranges, this is basically true; with film there was something called “reciprocity failure” where at very long or very short shutter speeds this principle, called the reciprocity law, did not hold.)  (All three controls affect exposure AND have other effects; the other effects change as normal.  So we can deliberately adjust the shutter speed to get the degree of blurring/freezing we want, and compensate with the aperture and/or ISO to get the same exposure.)

So, given a starting exposure of ISO 100 1/100 sec. f/8, all of the following are equivalent to it (and to each other):

ISO Shutter Aperture
100 1/100 f/8
100 1/200 f/5.6
100 1/400 f/4
100 1/800 f/2.8
100 1/1600 f/2
100 1/50 f/11
100 1/25 f/16
200 1/400 f/5.6
200 1/800 f/4
200 1/1600 f/2.8
200 1/200 f/8
200 1/100 f/11
200 1/50 f/16
400 1/800 f/5.6
400 1/1600 f/4
400 1/400 f/8
400 1/200 f/11
400 1/100 f/16

(Typed that manually, so I suppose the odds of a grotesque screw-up are pretty good.)
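One way to sanity-check rows like these is to total up each row’s exposure in stops, as log2(ISO) + log2(shutter seconds) minus twice log2(f-number), and confirm the rows agree. A sketch (the function name is mine; since nominal f-numbers are rounded, f/5.6 really being about 5.657, equivalent rows agree only to within roughly a tenth of a stop):

```shell
# Total exposure of an ISO / shutter-seconds / f-number triple, in stops.
stops() {
    awk -v iso="$1" -v t="$2" -v f="$3" 'BEGIN {
        printf "%.2f\n", log(iso)/log(2) + log(t)/log(2) - 2 * log(f)/log(2)
    }'
}

stops 100 0.01 8          # baseline row: ISO 100, 1/100 sec, f/8
stops 200 0.0025 5.6      # ISO 200, 1/400 sec, f/5.6
stops 400 0.00125 5.6     # ISO 400, 1/800 sec, f/5.6
```

All three calls should print (very nearly) the same number of stops.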

Now, a couple of useful rules of thumb:

The “sunny 16 rule”:  The right exposure for a subject in direct sun is roughly a shutter speed of 1/ISO at an aperture of f/16 (or any equivalent exposure).  So at ISO 400, 1/400 sec. at f/16 is about right.

This one is very rough, but for most people the slowest shutter speed at which they can hold the camera steady enough not to get visibly blurred pictures is 1/(35mm-equivalent lens focal length).  People differ a lot, but the principle that it scales with focal length is reliable; that’s just physics (or geometry) (it’s actually scaling with magnification).  (This doesn’t consider subject motion; the speed that keeps camera shake from being a problem may still not be fast enough to freeze a moving subject.)

So, what’s the right exposure?  That’s an artistic question, and hence totally a matter of opinion; not a technical question.  For many purposes, if it’s possible to get the entire brightness range of the scene recorded, that’s useful (or at least the entire brightness range excluding “specular highlights”, shiny bits that directly reflect the light source). When it isn’t, you need to decide which parts of the scene are necessary to record, and expose to do so.  If you need more range than your sensor can capture (increasingly rare), you can try taking several shots at different exposures and combining them into an “HDR” (high dynamic range) image.