Disk Prices

I happened to compute the price per byte for a disk drive I bought Tuesday.  Then it occurred to me to compute the price per byte of the first hard drive I ever bought.

Then it occurred to me to compute the ratio.

2,916,667

The price per byte of that first disk drive was very nearly three million times higher.

Three.  Million.

That’s a lot.

I like living in the future.
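For the curious, here’s the arithmetic as a quick sketch. The old-drive figures ($3500 for a 20MB ST225) are the ones discussed in the comments below; the new-drive figures are illustrative stand-ins, since the post doesn’t give the actual specs — a 1TB drive at $60 happens to reproduce the ratio.

    # Back-of-the-envelope price-per-byte comparison.
    # Old-drive figures come from the comments below; the new-drive
    # figures are hypothetical stand-ins chosen to reproduce the ratio.

    def price_per_byte(dollars, capacity_bytes):
        return dollars / capacity_bytes

    old = price_per_byte(3500, 20e6)   # 1985: ST225, 20 MB, ~$3500
    new = price_per_byte(60, 1e12)     # illustrative: 1 TB at $60

    print(f"then: ${old:.2e}/byte   now: ${new:.2e}/byte")
    print(f"ratio: {old / new:,.0f}")  # -> 2,916,667

The same arithmetic applied to the $849/80MB and $89/1TB figures in the first comment gives a ratio on the order of 120,000.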

7 thoughts on “Disk Prices”

  1. I’m curious what your first drive was, since my number isn’t nearly as impressive: 118,945. I can’t actually remember my first drive – it might have been an ST225 MFM 20MB unit – so I chose a representative one from the beginning of my career. When I started building PCs in 1987, I remember selling the ST4096 – MFM, 80MB, full-height – for $849. I compared it to an off-the-shelf, random-manufacturer USB unit I bought a few weeks ago: 1TB, SATA, for $89.00.

    Still, ~120,000x certainly is pretty impressive.

    Yes, living in the future is grand, but I often reminisce with my cohorts about those long-gone days. Things were much different back then, and I think those of us who grew up in that era better appreciate the advantages we have today. Computing these days is a commodity – virtually unlimited space and processing power – whereas, back then, everything was at a premium. As a result, we now see a lot more lazy coding and, dare I say it, lazy users with lazy expectations.

    Does this mean that I’m getting old? 🙂

  2. Mine was in fact an ST225, bought in the summer of 1985 by mail order (not Internet order!).

    I remember remembering the price as having been $3500. I tried to research prices, and found such a range (going up to $7000 in reputable sources) that I eventually gave up and just went with my memory. (“Remember remembering” refers to the fact that I’m now remembering what I was doing when I originally wrote the post.)

  3. Oh, and about getting old — I dunno, sometimes I think the world is just getting younger around me.

    We’re nowhere near infinite memory or disk yet — we’ve spent time worrying about both at work this very week. Processor power is closer, but even that can be stretched — we did some stretching at my previous job, where we were sending 80,000 simultaneous independent HD video streams out over IP (the Sun Streaming System).

    On the other hand, we casually design database table rows that exceed 80 characters!

    What’s expensive changes. Is it “lazy coding” to use a library that does somewhat more than you need, but is of high quality? Or is that just smart engineering? It makes your program bigger — but does it make it bigger in a way that’s a problem? And using solidly tested code is always good, and not spending extra time reinventing the wheel is good too.

  4. I apologize in advance for the novel, but, hey, since storage is virtually free these days… 🙂

    Sorry, when I mentioned “virtually unlimited space and processing power,” I was speaking from the consumer side. Aside from super-duper graphics (FPS games, uber-OS-UIs, and playing H.264-encoded 1080p streams full-screen), I haven’t seen a “legitimate” driver for the increases in processing power on the user side. Cool, so you have a 24-core 10GHz laptop with 1TB of RAM… you can recalc that Excel spreadsheet in attoseconds rather than femtoseconds! 🙂

    On the server side, yes, certainly we’re nowhere near “unlimited”, especially given the benefit of VMs. Give me a million-core processor and I’ll find a way to use it. I just built a small architecture for a cloud-computing side business with six physical machines that host 31 virtuals – 10 years ago it would have been 31 boxes. That’s huge progress.

    Looking back, it seems to me that the pressure source on the chipmakers has shifted: through the 90s and early 2000s, it was from the users, and now it’s from the service providers (again, except the case of graphics co-processors, the entire ballgame of which was started by the Amiga… but I digress). The whole idea of cloud computing and software as a service is really just starting to take off, and the larger it gets, the more horsepower we’re going to need in the back room. Back in the (very) beginning it was Big Iron in the datacenter and VT100s on the desktop, and I can see a future that looks much the same – massively-parallel, abstracted, virtual computing in the raised-floor environment and somewhat “thin” clients on users’ desktops… once we have sufficient client internet bandwidth, that is, but that’s a whole ‘nother topic.

    As to “lazy”, my premise wasn’t really meant to address libraries (as an example); I don’t get jollies from seeing someone reinvent the wheel. I was referring more to the sheer ingenuity that hardware limitations used to demand of developers.

    My favorite example is the game Starflight, which I first experienced on my Tandy 1000 in ’86. Two 360k floppies held the game software plus the data for 270 star systems containing 800 planets, each of which you could land on and explore in tremendous detail. Now, I’m actually fibbing there, because there’s no way 720k could contain all that data. The developers did something unusual: each planet’s fractal landscape was generated mathematically in-game from a key stored in the code. Pure genius.
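    A minimal sketch of the general trick – not Starflight’s actual algorithm, just the idea of regenerating a landscape deterministically from a small stored key instead of storing the landscape itself:

        import random

        # Store one small seed per planet; regenerate identical terrain
        # on demand instead of keeping terrain data on disk.
        def planet_terrain(seed, size=17):          # size must be 2**n + 1
            rng = random.Random(seed)               # deterministic per seed
            h = [0.0] * size
            h[0], h[-1] = rng.random(), rng.random()
            step, scale = size - 1, 0.5
            while step > 1:                         # midpoint displacement
                for i in range(0, size - 1, step):
                    mid = i + step // 2
                    h[mid] = (h[i] + h[i + step]) / 2 + rng.uniform(-scale, scale)
                step //= 2
                scale /= 2                          # finer detail, smaller bumps
            return h

        # 800 planets cost 800 seeds, not 800 terrain maps.
        assert planet_terrain(seed=42) == planet_terrain(seed=42)

    Same key, same planet, every visit – what you store is the seed, not the world.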

    My other great example is the Infocom text games – the Zorks et al. – built on the idea that “Graphics suck right now, so rather than trying to go that route at all, let’s write incredible, interactive text-based adventures that are tiny (storage-wise) and tight (code-wise).” The Infocom parser is insanely tight and cool.

    I have a stack of unopened modern games sitting next to my main desktop PC at home. Once every couple of years I spend six months playing all the games I’ve bought, and then I get bored with the entire idea and go back to reading. I spent a year on EverQuest back in 2001 or so, and a year on WoW a couple of years ago, but in the end they all pretty much look the same to me now – breathtaking 3-D graphics and interaction, huge worlds, and so on – just skinned differently. Nothing creates the “wow factor” that Starflight, the Infocom games, and the others of their ilk created in me Back In The Day, because, for the most part, the need for TRUE ingenuity is gone. It’s all just a race now for faster graphics, higher resolutions, and bigger screens – something to provide a short-term distraction for a “modern” audience looking for the latest technological bar-raiser.

    The same applies in the arcades. All through grade school, I spent $5 a week at the local movie theater arcade. Then, when the 3-D fighting games hit – Street Fighter and such – I totally lost interest. Same reason: very little ingenuity – rarely anything new and truly different born of the severe hardware limitations of the time. Nothing is more pleasing to me now than to see the huge resurgence of classic arcade games, thanks to emulation. One of my Bucket List items right now is to beat the Twin Galaxies MAME world record for Mario Bros, and I have a gutted stand-up arcade cabinet waiting in the garage for me to get the inspiration to turn it into a proper MAME unit.

    Okay, enough of my rant. Just one last thing that I think about anytime I start feeling old and need a quick pick-me-up:

    I’m coming up on 23 years working professionally in IT. For a long time, I was always the youngest – a trailblazer, a smartass, always looking for a new challenge. Then, as time went on and the “new generation” started coming into the business, I was no longer the youngest or smartest, and I found a new thrill – managing and mentoring the new guys. Still, I do love to hit the keys from time to time; once an engineer, always an engineer. A couple of years ago, I was managing a great engineering team at a large financial company, and on my team was a kid who’d just turned 21. He reminded me of me at that age – smart but arrogant, always needing to prove something. One day some unusual issue came up – I don’t remember what, exactly – but it turned out that I was able to dust off my engineering skills and solve the problem when the rest of the crew was stuck. Afterward, the kid piped up with something like, “Wow, the Old Man DOES know what he’s doing!” Without missing a beat, in front of everyone, I brought the house down with a reply I’d been dying to deliver my entire career – and for the first time, I could say it and mean it:

    “Kid, I’ve been doing this stuff since before you were born.”

    That makes it all worth it. 🙂

  5. Oh – one thing I just noticed… I think your WordPress time zone setting is off. It shows my last comment as posted at:

    March 11th, 2010 at 16:51 UTC

    Actually, it was 20:51 UTC (14:51 PST). 🙂

  6. Gaming has certainly driven a lot of home computer upgrades. Not mine; I haven’t played a computer game “all the way through” since Adventure (which I played through on a DECSYSTEM-20; I still have my hand-drawn maps, which are state transition tables). And some hack and nethack. But mostly I play computer games at the Freecell or Bubble Breaker level — to kill short periods of time, or when I’m too tired to go to bed.

    However, people have set up pretty heavy home systems for working with digital photos, for example, and video, and audio (most bands now record directly into their computers, until they reach the point of affording real studio time). The desktop still has SOME serious uses.

    I’ll be interested to see how cloud-based services work out. So far, the latency kills the user experience stone cold dead for “ordinary” things.

    Last fall I celebrated my 40th year in the business. I started programming an IBM 1401 in assembly language in 1969 (I was 15 at the time; I’d been programming for a bit over a year, and had taught myself Fortran and IBM 1620 assembly language).

  7. The server is in California, I’m in Minnesota, and the documentation on WordPress and the timezone plugin sucks. I’ll take another look at it. I don’t really want to display the timestamps in UTC anyway.
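    If it helps with the debugging, here’s a minimal sketch of doing the conversion explicitly rather than trusting the plugin (Python 3.9+; it assumes the stored timestamp really is UTC, and the zone names are just the obvious candidates):

        from datetime import datetime, timezone
        from zoneinfo import ZoneInfo  # standard library as of Python 3.9

        # Store timestamps in UTC; convert to a zone only for display.
        posted = datetime(2010, 3, 11, 20, 51, tzinfo=timezone.utc)

        for zone in ("America/Los_Angeles", "America/Chicago"):
            local = posted.astimezone(ZoneInfo(zone))
            print(zone, local.strftime("%H:%M %Z"))
        # America/Los_Angeles 12:51 PST
        # America/Chicago 14:51 CST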
