Server Upgrade Chronicles I

The home fileserver is getting an upgrade. It’s reaching 90% full on the current disks, which is about as far as it’s wise to let it go if you’re going to upgrade (ZFS balances the load better when you add an additional vdev before the existing vdevs are too full).

Last night was step one (well, maybe step 30, counting all the consideration and research and ordering steps). I installed the memory upgrade, taking it from 1GB to 3GB (and filling all available memory banks), and then upgraded the swap space.

For what I’m afraid will be the last time in this series, I can report that it went completely smoothly.  I remembered how to open the case (on this one, you release the tabs holding the front on at the top, tilt the faceplate down, and that gives you access to the screws holding the side panels), I didn’t break anything while vacuuming out the dust, and I figured out which way the memory sticks went in (they went right in).

Background on This Server

I think this hardware dates to August 2006. It’s running Solaris using the ZFS filesystem (which is why it’s running Solaris).  It’s named “fsfs”, because it’s my redundant fileserver (all the data disks are mirrored).

Chenbro case

The hardware is a Chenbro case, actually a 4U rackmount with 8 hot-swap bays, but I’m using it as a tower. It’s very deep, since the bays in front don’t overlap the motherboard at all. The motherboard is an ASUS M2N-SLI Deluxe with an AMD dual-core processor. It has 6 SATA ports, so I’m not currently using two of the hot-swap bays.

Two of the bays are for the system disks (mirrored).  The other 4 constitute the data pool.  Currently this is two mirrored pairs of 400GB disks (two of them free from Sun, when they gave up on using anything that small in the Streaming lab).

While the processor is old, it’s marvelously adequate for what it’s called on to do, and the case was the expensive part. I don’t want to mess with it, and I certainly don’t want to buy another hot-swap unit at this level. So I’m upgrading this one.

Upgrade Plan

It’s getting a new 2.5″ hot-swap bay for boot drives (you can get a 4-drive 2.5″ hot-swap bay that mounts in a standard 5.25″ bay), new 2.5″ boot disks to go in it (freeing two of the main 3.5″ bays for data disks), and a new disk controller to give me 8 more ports.

The current plan of operations goes like this:

  1. Upgrade memory (completed)
  2. Update backups
  3. Update software
  4. Update backups
  5. Install 2.5″ hot-swap bay
  6. Install new 2.5″ boot drives
  7. Install new controller
  8. Install software (or transfer existing installation) to new drives
  9. Remove old boot drives
  10. Install a new pair of drives in the data pool
  11. Update backups

The new controller is to be the Supermicro UIO MegaRAID AOC-USAS-L8i. This needs weird expensive cable sets, but with the right ones, it’ll drive 8 SATA drives from something that looks like a SCSI controller to the computer (it’ll also drive SAS drives, handle port expanders, and generally do a lot of stuff, and Solaris supposedly has good drivers for it). This will be my first experience with SAS; we’ll see how exciting that is.

You’ll notice I’ll end up with 12 hot-swap bays (4 2.5″ and 8 3.5″). With 6 SATA ports on the motherboard, this means that I can split each mirrored pair across controllers, meaning that hardware failures and driver bugs can’t take out both sides of the mirror at once.  There are still plenty of other single points of failure, including power surges, higher level software bugs, user error, and so forth; that’s why we still need backups.

ZFS

ZFS is the “zettabyte filesystem”. It’s a combined volume manager and filesystem, engineered from scratch; it eliminates all sorts of annoyances and adds some big features. It was created by a team at Sun Microsystems (now a division of Oracle).

For me, the three big features are:

  • Pool expansion
  • Data checksum and scrub
  • Snapshots

This server holds my photo archive, including scans of very old stuff. While I may not look at it very often, I do care about it.  I have backups, and can restore if necessary, but a precondition to that is realizing it’s damaged. I could continue making backups and rotating them through storage and not notice a picture got corrupted for years, by which time the last backup that included it would be gone.

ZFS stores a checksum for every block. An operation called “scrubbing” goes through all the blocks and verifies them against their stored checksums. By doing this, I can discover early that something is damaged. If the damage is a disk error, the redundant copy on the mirror drive will automatically be used to fix the problem (if its checksum is correct). If there isn’t a valid copy of the data online, ZFS can at least tell me which file is bad, early enough that I should still have a valid backup.  This makes me feel much safer. I think it even makes my data genuinely somewhat safer.
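
To make the idea concrete, here’s a tiny Python sketch of the concept. It is not ZFS code and not anything I actually run; it just models storing a checksum per block, walking both sides of a mirror, repairing a bad copy from the good one, and complaining when neither copy verifies.

    import hashlib

    def checksum(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()

    def scrub(side_a, side_b, checksums):
        """Verify every block on both mirror sides against its stored checksum."""
        for i, expected in enumerate(checksums):
            a_ok = checksum(side_a[i]) == expected
            b_ok = checksum(side_b[i]) == expected
            if a_ok and not b_ok:
                side_b[i] = side_a[i]   # repair the bad copy from the good one
            elif b_ok and not a_ok:
                side_a[i] = side_b[i]
            elif not (a_ok or b_ok):
                print(f"block {i}: no valid copy online; restore from backup")

    blocks = [b"photo scan", b"more photo scan"]
    sums = [checksum(b) for b in blocks]
    side_a, side_b = list(blocks), list(blocks)
    side_b[1] = b"silent corruption"    # simulate bit rot on one side
    scrub(side_a, side_b, sums)
    assert side_b[1] == blocks[1]       # fixed from the other mirror copy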

Snapshots are great for keeping the last month or so of changes easily accessible online. Furthermore, I use snapshots on the backup disks too, so on only two backup drives I actually keep several years’ worth of change history.
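
For flavor, here’s a rough Python sketch of the kind of snapshot rotation I mean; the dataset name and retention period are made up, and the zfs commands are only printed, not run, so this is a sketch rather than my actual script.

    from datetime import date, timedelta

    DATASET = "tank/photos"   # hypothetical dataset name
    KEEP_DAYS = 35            # roughly "the last month or so"

    def snap_name(d: date) -> str:
        return f"{DATASET}@auto-{d.isoformat()}"

    today = date.today()
    cutoff = today - timedelta(days=KEEP_DAYS)

    # Take today's snapshot.
    print("zfs snapshot", snap_name(today))

    # Pretend this list came from `zfs list -t snapshot -o name`.
    existing = [snap_name(today - timedelta(days=n)) for n in (1, 10, 40, 400)]
    for snap in existing:
        snap_date = date.fromisoformat(snap.split("@auto-")[1])
        if snap_date < cutoff:
            print("zfs destroy", snap)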

Pool expansion lets me add more space without recopying everything. I’ve chosen mirrored pairs instead of RAID-5-style parity schemes (RAIDZ, in ZFS terms). Mirrors let me upgrade disks in smaller chunks: in addition to adding a vdev to the pool, I can replace the drives in an existing vdev with larger drives and have that space become available. I don’t expect to ever have more than 8 hot-swap slots (well, 8 for the big drives I’m using for data), so working in small units, pairs, gives me more flexibility.  I rather expect to stop at 6 data disks (3 pairs) and put in a seventh as a “hot spare” that ZFS can immediately copy data to if something happens to one of the live disks. With 4-disk RAIDZ vdevs I would get a higher proportion of the drive space available for use, but I’d have no slot for a hot spare, I’d have to replace disks in groups of 4, and I’d have less redundancy and hence more exposure to data loss (luckily I do keep backups).
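
The capacity tradeoff is easy to see with a toy calculation; the drive counts and the 1TB size below are illustrative, not my actual pool.

    # Usable space under mirrored pairs vs. 4-disk RAIDZ, assuming equal drives.
    def mirror_capacity(drives, size_tb=1.0):
        return (drives // 2) * size_tb          # each pair stores one drive's worth

    def raidz_capacity(drives, group=4, size_tb=1.0):
        return (drives // group) * (group - 1) * size_tb   # one parity drive per group

    for n in (4, 6, 8):
        print(f"{n} drives: mirrors {mirror_capacity(n):.0f} TB, "
              f"4-disk RAIDZ {raidz_capacity(n):.0f} TB usable")

Note the 6-drive case: the RAIDZ scheme leaves two drives doing nothing until two more arrive, which is exactly the kind of lumpiness I’m trying to avoid.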

Time Again for New Time

The old watch is rapidly approaching its freshness date.

The Old Watch

In particular, something I’ve never seen before—the lower two buttons have worn all the way through!  And the functions they invoke have stopped working.

Buttons worn through

So there is a new watch.

New Watch

Has anybody ever had a Casio Waveceptor watch actually pick up the radio time signal?  Outside of Colorado, say?  Just curious. The instructions make it clear that it doesn’t work very well, and thinking about wavelengths and antenna sizes makes that not terribly surprising.

I love modern marketing categories.  “WR50M” means water resistant to 50 meters.  Which, according to the manual, means that it’s safe in the rain and when washing the car.  Okay, and when swimming; but not for snorkeling and scuba diving; those require the 200M certification.

ETA: Twice now it’s managed to sync while I’m wearing it at work — in a steel-frame building.  It does have lots of glass, and I’m right by a window. It couldn’t sync when set on a windowsill at home as specified in the instructions, and it’s never synced overnight (underground).

Synchronized!

Need Photoshop “Difference” Plugin

I want to make an image layer that consists of all the parts of one layer that differ from another layer.  That is, for each pixel, compare layer 1 and layer 2.  If they differ, put the layer 2 pixel in that position in layer 3.

I haven’t found a way to do this with the image calculation stuff yet, though I have the germ of an idea about using it to create a mask of the differing pixels and then making the layer from that.

Has anybody already done this?

Why, you ask, do I want this?  Because I’m being silly / anal, basically. There are still a few tools that work most usefully by modifying an image layer. I’d like to be able to make a background copy layer, do some work with those tools, then automatically create a layer containing only the pixels I changed, and delete the background copy.  This makes it much easier to take a second (or fifteenth) pass at some bit of retouching without putting other work at risk, and is pleasingly consistent with the philosophy of lossless editing.

Because I’m using it on individual layers of a Photoshop image, a standalone utility (even if it understood Photoshop files) wouldn’t be very convenient; I need a plugin, or a Photoshop action.

ETA: The germ didn’t work, but another new idea did pretty well. The key point was using threshold on the differences. I take the difference in each channel separately, apply a threshold to each, combine the resulting masks, and then make a new layer via copy. The action doesn’t properly clean up after itself yet, and I haven’t tested it in the presence of existing alpha channels or other complexities. And I haven’t applied anything stronger than visual tests to its accuracy. But I’m already using it.
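
For anyone curious about the idea outside Photoshop, here’s a rough numpy equivalent (not the action itself; the filenames and the threshold value are placeholders): per-channel absolute differences, a threshold, the channel masks combined, and the changed pixels copied into a new layer that’s transparent everywhere else.

    import numpy as np
    from PIL import Image

    THRESHOLD = 8   # per-channel difference big enough to count as a real change

    before = np.asarray(Image.open("background_copy.png").convert("RGB"), dtype=np.int16)
    after  = np.asarray(Image.open("retouched.png").convert("RGB"), dtype=np.int16)

    diff = np.abs(after - before)             # differences, channel by channel
    mask = np.any(diff > THRESHOLD, axis=2)   # combine the per-channel masks

    layer = np.zeros((*mask.shape, 4), dtype=np.uint8)
    layer[..., :3] = after.astype(np.uint8)   # the layer-2 pixels...
    layer[..., 3] = np.where(mask, 255, 0)    # ...visible only where they changed

    Image.fromarray(layer, "RGBA").save("changes_only.png")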