You Don’t Know How Lucky You Are!

Well, okay, most of you probably really do. Anyway, I’m talking about just one little corner.

Anybody in love with film, all excited to be discovering this fascinating retro medium, or still working happily with your old favorites, might not want to read further. However, I do want to say explicitly that I wish you well, and hope you achieve what you want with your chosen medium.

So, now, here it is:


Any questions?

The film image is from March of 1994, and was shot on Kodak “Gold 200-2 5096” according to the edge marks. Looks like they were processed at Proex.  Given the date, I probably shot them with an Olympus OM-4T (I decided to switch to AF and got the Nikon N90 that fall). They were scanned on a Nikon Coolscan 5000 ED at full resolution, with no grain reduction, and then the curves were adjusted to make the picture look decent.

So, here’s the full film image (click through for somewhat bigger version):

The above crops are both “100%”, i.e. 1:1 pixels from my digital file (as are the full-size versions you get when you click through above).

Not horrible, certainly; zooming in to “100%” is what gives pixel-peeping a bad name, and is rarely good for anything except making relative comparisons.

Here’s a digital example, taken earlier this month with my D700. It was taken at ISO 200, just like the film. I made minor curves adjustments to make it look a bit better. The full-size version, sized down (click through for 900-pixel version):


And that is why I personally am not a fan of film, at least in comparison to digital.

You really don’t want to see the comparison at ISO 1600.  I scanned some Ektapress 1600 Professional (PPC) last night.  Scary boulder grain! Amazing electric blue sparklies through all the shadows!

Digital Photo Archiving

This came up in comments on TOP, and I realized I’d written enough that I wanted to make an article of it and keep it here where I could refer to it easily.

Craig Norris referred to this article about digital bit-rot that he had suffered, and that got me thinking about whether I’m covered against that sort of problem. He says he’s getting a stream of email from people who have had similar problems. I’ve never seen anything like that in my own collection—but I’m doing quite a few things to cover myself against such situations.

Here are the things I’m doing to ensure the integrity of my digital photo archive:

  • ECC RAM, especially in my file server. This memory (and associated software in the OS) can detect up to two bit errors in a word, and correct up to one bit error in a word.
  • No overclocking. I’m not taking risks with data integrity on this system.
  • Storing the images on a ZFS filesystem. ZFS keeps its own data-block checksums, independent of the hardware error protection, so it can detect errors that relying on the hardware alone would miss. (The data is also mirrored on two disks. The ZFS checksums are larger than the hardware checksums and so will detect more error cases; no checksum system will detect all possible changes to a block of data, though.)
  • Running weekly “scrubs”, which read every block on the disks and verify the checksums. This means an error will be detected within a week, rather than whenever I next happen to look at an image, which makes it more likely that I’ll still have a valid backup somewhere. (I have not yet detected any error on a scrub.) The early detection, and detection that doesn’t depend on a human eye, are very valuable, I think.
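The weekly scrub is just a scheduled job. A minimal crontab sketch of the idea (the pool name “tank” and the times here are illustrative, not a description of my actual setup):

```
# Start a ZFS scrub every Sunday at 3 AM; "zpool scrub" returns
# immediately and the scrub proceeds in the background.
0 3 * * 0  /usr/sbin/zpool scrub tank

# Report the pool status Monday morning; cron mails the output.
0 8 * * 1  /usr/sbin/zpool status tank
```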

(I believe the BTRFS and NILFS filesystems for Linux also do block checksums.  ZFS is available in Linux and BSD ports, but none of these  are mainstream or considered production-ready in the Linux world (the original Solaris ZFS that I’m running is production-grade).  You could simulate block checksums with a fairly simple script and the md5sum utility, making a list of the MD5 checksums of all files in a directory and then checking it each week.)
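That script-plus-md5sum approach can be sketched concretely. This is an illustrative sketch, not my actual script; the file names and the “manifest.md5” name are made up, and it assumes GNU coreutils md5sum:

```shell
# Sketch of the checksum-manifest idea: record an MD5 for every file
# in a directory once, then re-verify on a schedule (e.g. weekly).

dir=$(mktemp -d)
printf 'fake image data\n' > "$dir/img001.nef"
printf 'more image data\n' > "$dir/img002.nef"

# One-time step: list MD5s of all files (excluding the manifest itself).
( cd "$dir" && find . -type f ! -name manifest.md5 -exec md5sum {} + > manifest.md5 )

# Weekly step: "md5sum -c" re-hashes everything and exits non-zero
# if any file has changed or gone missing.
if ( cd "$dir" && md5sum -c --quiet manifest.md5 ); then
    result="all files OK"
else
    result="corruption detected"
fi
echo "$result"    # prints "all files OK"

# Simulate silent bit-rot and check again.
printf 'flipped bits\n' > "$dir/img001.nef"
if ( cd "$dir" && md5sum -c --quiet manifest.md5 ); then
    result2="all files OK"
else
    result2="corruption detected"
fi
echo "$result2"   # prints "corruption detected"

rm -rf "$dir"
```

It’s cruder than ZFS (it only detects, never repairs, and an error sneaking in before the manifest is built goes unnoticed), but it gets you the scheduled, eyes-free detection.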

  • For many of the older directories, I’ve run PAR2 to create redundant bits and checksums of the files in the directory (I choose about 15% overhead).  This gives me yet another way to detect and possibly fix errors.  I should really go through and do more of this.
  • Multiple backups on optical and magnetic media, including off-site copies.
  • Using high-quality optical media for backups (Kodak Gold Ultima, MAM Gold archival).
  • I have a program for analyzing the state of optical disks, which reports how much error correction is being applied to keep a disk readable. This should give me early warning before a disk becomes unreadable. I need to run this again on some of my older samples.

You’ll notice I can’t achieve these things with white-box hardware and mainstream commercial software.  And that ongoing work is needed.  And that I’m behind on a couple of aspects.

I won’t say my digital photos are perfectly protected; I know they’re not. But I do think that I’m less likely to lose a year of my digital photos than I am of my film photos. A flood or fire in my house would be quite likely to do all the film in, while my digital photos would be fine (due to off-site backups).  (So would the scans I’ve made of film photos.)

Furthermore, I realized recently that I’ve been storing my film in plastic tubs, nearly air-tight, without any silica gel in there. I’m working to fix this, but that kind of oversight can be serious in a more humid climate. (If I lived in a more humid climate, I might have had enough bad experiences in the past that I wouldn’t make that kind of mistake!)

Anyway—the real lesson here is “archiving is hard”. Archiving with a multi-century lifespan in mind is especially hard.

Film, especially B&W film, tolerates benign neglect much more gracefully than digital data—it degrades slowly, and can often be restored to near-perfect condition (with considerable effort) after decades in an attic or garage, say.

Most people storing film are not doing it terribly “archivally”, though. Almost nobody is using temperature-controlled cold storage.  Most people store negatives in the materials they came back from the lab in, which includes plastics of uncertain quality and paper that’s almost certainly acidic.

Digital archives are rather “brittle”—they tend to seem perfect for a while, and then suddenly shatter when the error correction mechanism reaches its limits. But through copying and physical separation of copies, they can survive disasters that would totally destroy a film archive.

A digital archive requires constant attention; but it can store stuff perfectly for as long as it gets that attention. My digital archive gets that attention from me, and is unlikely to outlast me by as much as 50 years (though quite possibly individual pictures will live on online for a long time, like the Heinlein photo).

Changes in Photography

Now that we’re very solidly in the digital era, but haven’t been here so long that it’s anything like settled down yet, I have a few observations to make on changes. Especially changes relating to beginning and/or young amateurs.

At the bottom level, it’s a huge win. Many children will get hand-me-down P&S digitals from their parents, and have computer access (or their own computers). With that setup, they can take infinite numbers of pictures for no cost. This is so much different (better) than it was for me; until I started doing my own darkroom work (which was a fairly big jump even then), the cost of film and processing put a really tight limit on how much I could shoot (ages 8-14, roughly). Furthermore, needing to stretch the life of a roll of film added to the unavoidable delay between shooting and seeing the results, meaning that the feedback loop of learning was rather loose. Fast feedback is very important, particularly for a child learning something this technically complicated. Finally, those P&S cameras are more capable and more controllable than my Pixie 127 ever was (ask me if I’m frustrated that my most interesting travel was all before I had a decent camera!).

Higher up, it’s more mixed, and more complicated. When I was shooting with my mother’s old Bolsey 35, I had limitations in focal length (one fixed lens), aperture (f/3.2, it says there), shutter speed (only up to 1/200), and close focus. But I could use any 35mm film available, and process it myself (especially the B&W), so I could push to EI 4000 if I needed to (not that I knew how, that far back). My images could display the grain structure, and something decently close to the contrast and definition, of the best 35mm equipment in the marketplace.

Today, a lot of things that used to be controlled by choice of film and darkroom processing have moved into the camera body. And no junior-high student (okay, very, very few) gets to actually play with a Nikon D700 these days. So in many ways the intermediate-level amateurs who are young (or otherwise on a severe budget) are more constrained from seriously pursuing some areas of photography today than I was back in 1969 when I got my first SLR. Of course, a Nikon D40, which they might well get as a hand-me-down or even afford from part-time jobs, is a very capable camera, much better in low light than what I could do with film in 1969. But it’s far below what a D3 can do in low light today; there’s a difference between professional equipment and amateur equipment today that there wasn’t in 1969. The differences between what you could do with a Pentax Spotmatic and a Nikon F then were much smaller than the differences between a Nikon D40 and a Nikon D3 today. The body wasn’t nearly as important back then as it is today.

Of course, once you get the digital camera, you still have “all you can shoot” for free; that’s still a huge win. And even cheap DSLRs produce better images under difficult conditions than one could do with film 40 years ago.

Digital Cameras and “Focal Length”

This is a very simple subject — complicated by people like me arguing about proper use of terminology, and people trying to explain differences that other people haven’t even noticed, and people pontificating on things they don’t adequately understand.
