Practical Magic

There's this great scene in the book The Last Castle by Jack Vance. Or it might have been The Dragon Masters — I can't remember anymore. This ancient castle on some distant planet is under attack, and the folks who've lived there for generations are screwed because they've forgotten how all the technology works. The machines have worked fine for centuries, so the people in the castle have pretty much degenerated into a kind of ultra-elaborate, decadent aristocracy.

There are these big fire-cannons on the ramparts that they use to blast any enemies who happen to want to live in the castle too. They know how to work the cannons, but not how they actually work; in fact it would be considered a gauche breach of their refined social customs to mention something as irrelevant as the inner workings of the machinery.

At a critical juncture in the battle, an old Baron or some such tries to use one of the cannons to do some serious blasting, but it just belches out some smoke. He yells at this old technician/butler guy who maintains the machines and asks him why it's not working. The butler/technician guy assures him that he's taken the most exquisite care of the machine for decades, and his father before him, and so on. He says they've polished the cannon with nothing but the finest wines and the softest cloths, since time immemorial. The Baron asks him again why it's not working, but the butler says the internals aren't his responsibility; he can't be held accountable, and so on.

I really got a kick out of that scene. Polishing the cannon with expensive wine. Har. It's a good thing I didn't really bond with those characters, since they got slaughtered.

Non-Leaky Abstractions

I wrestle with the whole abstraction question, and I'm still not sure where a good engineer these days should draw the line — the one below which it's all just magic.

We all have that line somewhere; I encountered mine in school, back in my semiconductors course. We were (ostensibly) learning how semiconductors work, but it was perfectly clear to me that this was just magic, and I was never going to understand it — in part, because I didn't want to make what was clearly going to be a HUGE effort to get it. There was all this crap about K-spaces and lattices and who knows what else. Maybe they'd listed the prerequisites wrong, but I did not have the math foundations to understand what was going on.

It didn't help that my professor was 100% senile, either. Once a week he'd break into his senile rant about what an Amazing Device the Human Eye is, totally forgetting that he'd shared this insight with us every week since the quarter started. We'd all recite the story along with him, sometimes jumping ahead, which he senile-ly mistook for us being just as big fans of the Human Eye as an Amazing Instrument as he was. It was really damned hard to concentrate in that class, and at the end of the quarter, my Human Eye was Amazed to see I'd almost failed it, which signalled clearly that it was Time to Change Majors. I'd found that I love programming and detest hardware, so staying a Computer Engineering major probably wouldn't be too wise.

So I transferred into the UW's Computer Science department, which meant switching from Engineering to Arts & Sciences, which meant taking a bunch of extra foreign-language and froo-froo social anthropology classes, where I got to watch films of dolphins counting to five through their blow-holes. Which is about how smart I felt after that semiconductors class. I felt like I could have made it as a dolphin.

It took me an extra year, but I finished in CS instead of CE, which allowed me to conveniently pretend that computer hardware, starting somewhere down at the level of the doping barrier in a silicon semiconductor, is just pure magic. Parts of the hardware above that level are still a little murky, but I have a pretty good idea how computers can be made by assembling successively larger components together from smaller ones. Given time and a gun to my head, I could probably start with any set of smaller components and derive how to build higher-level abstractions: transistors into logic gates, logic gates into larger assemblies that do arithmetic and boolean computing, those into state machines and ALUs, and so on. I'd suck at it, but I get the general idea, so it's not really "magic".
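
If I had to sketch that derivation in code instead of silicon, it might look something like this toy Java full adder assembled out of little "gate" functions. This isn't remotely how real hardware gets built, obviously; it's just the composition idea, one abstraction snapped together from smaller ones:

    // Toy sketch: composing a one-bit full adder out of "logic gates",
    // the same way hardware stacks abstractions on top of abstractions.
    public class GateDemo {
        static boolean and(boolean a, boolean b) { return a && b; }
        static boolean or(boolean a, boolean b)  { return a || b; }
        static boolean xor(boolean a, boolean b) { return a ^ b; }

        // A full adder: adds two bits plus a carry-in, producing a sum bit
        // and a carry-out. Chain 32 of these and you have an ALU's adder.
        static boolean[] fullAdder(boolean a, boolean b, boolean carryIn) {
            boolean sum      = xor(xor(a, b), carryIn);
            boolean carryOut = or(and(a, b), and(xor(a, b), carryIn));
            return new boolean[] { sum, carryOut };
        }

        public static void main(String[] args) {
            boolean[] r = fullAdder(true, true, false);   // 1 + 1 + 0
            System.out.println("sum=" + r[0] + " carry=" + r[1]);  // sum=false carry=true
        }
    }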

It's magic when you have absolutely no frigging clue how it works, not the first clue. It just works. That's how it is for me and semiconductors.

But I'm OK with that. I'm OK with treating semiconductors as "atoms" (or axioms, anyway) in our little programming universe. Chips are like molecules, CPUs are compounds, and Von Neumann machines can be bricks and mortar. Well, I'm sure you could come up with a way better metaphor. Whatever.

The point is, the computers and networks and power supplies and so on are all amazingly complex under the hood, but usually I can just pretend they work a certain way, and not worry about how, any more than I have to worry about my Amazing Eye in order to pour myself another glass of fine wine, which I will doubtless use to polish my keyboard if I don't finish this blog entry soon.

When is Magic OK?

I don't know. I'd really LIKE to know. I'm constantly trying to find better abstractions for my everyday work as a programmer. You can't build large systems without having some fairly bulletproof abstractions to build on. If you're building an e-commerce system, for instance, you need a transactional data store, and you rely on the semantics of transactionality when you're coding; if you don't, you'll wind up with all this crap in your code that tries to fake transactions.
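
Here's roughly what I mean, as a minimal JDBC sketch in Java. The connection URL, credentials, and the orders/inventory tables are all made up for illustration; the point is that commit and rollback are the data store's problem, not something I fake by hand in my own code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Hypothetical checkout: either both statements take effect or neither does.
    public class CheckoutSketch {
        public static void placeOrder(String sku, int qty, long customerId) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/shop", "shop", "secret")) {
                conn.setAutoCommit(false);  // start a transaction
                try {
                    try (PreparedStatement debit = conn.prepareStatement(
                            "UPDATE inventory SET on_hand = on_hand - ? WHERE sku = ?")) {
                        debit.setInt(1, qty);
                        debit.setString(2, sku);
                        debit.executeUpdate();
                    }
                    try (PreparedStatement order = conn.prepareStatement(
                            "INSERT INTO orders (customer_id, sku, qty) VALUES (?, ?, ?)")) {
                        order.setLong(1, customerId);
                        order.setString(2, sku);
                        order.setInt(3, qty);
                        order.executeUpdate();
                    }
                    conn.commit();      // both updates become visible together
                } catch (SQLException e) {
                    conn.rollback();    // the store undoes the partial work for me
                    throw e;
                }
            }
        }
    }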

I'm comfortable relying on certain abstractions working for me — for instance, if I compile with optimizations turned on, I know the compiler won't discard correctness in favor of performance, or if it does do that, it'll be documented so I know I shouldn't use that flag. Compiler optimization can be treated like a black box of sorts: I turn it on and see if performance seems any better, but I don't expect to need to re-test all my loops to make sure the variables are all still being adjusted in the same order as before.

What I'm NOT sure about is where I'm allowed to forget how things work, or never know in the first place, and start getting by with incantations that seem to work.

For instance, I know that I can type 'ls' in a shell to get a directory listing. How much do I have to really know about how unix 'ls' works in order to be an effective developer? I can see two arguments — so cleanly divided in opinion, in fact, that I can picture which of my friends are marching directly into one camp or the other right now.

One argument is that you DO need to know a lot about how 'ls' works, because there are all sorts of situations in which it can fail, and you'll need to figure out how to fix it. It could simply not show up ("command not found", say), or it could stop working on certain of my directories, or it could crash every once in a while instead of producing a directory listing, or it could go on a rampage and start deleting files. I think I've seen all of these happen with 'ls' in my time, even the last one, when my linux filesystem was corrupted.

Another argument is that you DO NOT need to know much about how 'ls' works. Probably all you need to know is that it's a unix command-line tool, a binary that was compiled for your OS, probably written in C, that it lives in a standard place in the filesystem, and that standard binary locations are looked up in an environment variable called PATH. But you don't need to know how it traverses the filesystem inode entries, or whether it's a statically or dynamically linked version of 'ls', or whatever else there is to know about 'ls'. (I probably should have picked 'diff' as my example, not 'ls'. Oh well.)
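
That PATH lookup is about the only piece of 'ls' lore I'd put on the second camp's must-know list, and it fits in a few lines of Java. A rough sketch, not what any real shell does verbatim:

    import java.io.File;

    // Rough sketch of how a command like 'ls' gets found: walk the
    // directories listed in the PATH environment variable.
    public class Which {
        public static void main(String[] args) {
            String command = args.length > 0 ? args[0] : "ls";
            String path = System.getenv("PATH");
            if (path == null) path = "";
            for (String dir : path.split(File.pathSeparator)) {
                File candidate = new File(dir, command);
                if (candidate.isFile() && candidate.canExecute()) {
                    System.out.println(candidate.getAbsolutePath());
                    return;
                }
            }
            System.out.println(command + ": command not found");
        }
    }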

The second camp argues that software development is a community effort, and your community should have a Unix Guy Or Gal who knows all the ins and outs of how lses and diffs and their ilk work. A sort of tribal shaman who you go to, and desperately plead with: please help me! There's a bug in qsort! The shaman frowns at your screen, and pokes and prods things a bit, and announces that your pilfer grommit is no longer connected to your weasel pins, and that the easiest fix is to reinstall the OS.

A third, possibly even more abstract camp would argue that 'ls' is way too complicated, and that you should be developing on systems in which everything is managed by navigating menus and waiting for your Auto Updater to tell you that a new Service Pak is available for download. And for all I know, they might be right. I have no idea how Exchange works, for example, and yet I trust it with my email every day. When it's broken, I put in Stupid Remedy Tickets saying I haven't received email in 10 days, and I don't feel the least bit bad about that, since it's not my responsibility; I made all the right incantations, and spilled all the right wine, and by golly, I'm not getting email anymore.

The second and third camps want to use J2EE, and they recount with some pride how they've managed just fine for years without ever having had to learn a scripting language, or become proficient with some cryptic unix editor, or any of that arcane crapola. They didn't see any need, because they were off building the FrooSingletonManager service, and it's all finished now, while I'm still mucking around with my broken 'ls' command.

Moreover, many of them say they don't see any need to know how to implement an n*log(n) sorting algorithm, because the damn things are already provided in libraries for every platform in the universe, including programming languages that are deader than Latin or Sanskrit.
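
For the record, the thing they're declining to write is about this big. A plain merge sort in Java, purely for illustration, since Arrays.sort() is what anyone would (and should) actually use:

    import java.util.Arrays;

    // A textbook n*log(n) sort: split in half, sort each half, merge.
    public class MergeSort {
        public static int[] sort(int[] a) {
            if (a.length <= 1) return a;
            int mid = a.length / 2;
            int[] left  = sort(Arrays.copyOfRange(a, 0, mid));
            int[] right = sort(Arrays.copyOfRange(a, mid, a.length));
            return merge(left, right);
        }

        private static int[] merge(int[] left, int[] right) {
            int[] out = new int[left.length + right.length];
            int i = 0, j = 0, k = 0;
            while (i < left.length && j < right.length) {
                out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
            }
            while (i < left.length)  out[k++] = left[i++];
            while (j < right.length) out[k++] = right[j++];
            return out;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(sort(new int[] {5, 2, 9, 1, 5, 6})));
            // prints [1, 2, 5, 5, 6, 9]
        }
    }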

And you know, I wonder sometimes whether they're right.

I find myself straddling both camps. On the one hand, I think nobody in their right mind would try to write a gigantic system in C++ now that Java has become an obviously far superior way to build large systems. You can argue that in C++ you really have to know what you're doing, or that you love the smell of freshly-brewed pointer arithmetic in the early morning, or whatever the hell you like about C++. I'm not arguing that you LIKE it — I know you do. I liked assembly language when I was using it daily. I'm just saying that the game has moved on, and if you're using C++ to build large systems because you like C++, it's like competing in the Indy 500 with your 1980 5.8-liter Pontiac Firebird. It gets you around, and it's fun to tinker with, which is good 'cause it breaks down all the time, but you're not going to win any world championships with it.

On the OTHER hand (remember, we were just on the one back there), I think nobody in their right mind would write a huge system in Java without first knowing how the whole thing is implemented, under the covers, in C++ and (below that) assembly language. As of today, anyway, I don't believe the JVM is allowed to be on the "it's magic" side of the line. And as for J2EE, which provides so much abstraction that you can snap together an enterprise architecture with it after one or two introductory programming courses, it's WAY too much magic. I definitely think that (as of today) if your Magic Line stops at the J2EE framework APIs and the Java Programming Language, then you're a wuss.

The problem is, these two views that I hold are mutually inconsistent:

    • one says that using abstraction makes you MORE effective, because to build large, interesting things in any reasonable amount of time, you need to work with high-level abstractions and be able to assume that they work as advertised.

      • This part of me believes that someday we'll be programming in purely declarative languages, telling the computer what to do, and letting it figure out how.

    • one says that relying too much on abstraction makes you LESS effective, because you're unable to cope when your framework isn't working right, or when you're trying to do something with it that it wasn't designed for, or whatever.

      • This part of me believes that my other belief is on crack if it thinks it'll happen during my lifetime.

My Current Compromise: Know Thy Abstractions

The tentative compromise I've been working with is as follows. You can use an abstraction all you want, as long as:

    • you know more or less how it works; i.e. what other abstractions it's built on top of, and approximately how it was built.

    • you know where the abstraction leaks; i.e. what the gotchas and failure modes are, and what the most common incantations are to get it working again.

    • you need at least SOME level of tv-repairman ability for the things you use — you should be able to install any of the software you rely on, for instance. Or you should have a plausible backup plan handy if you need to throw this abstraction away and use another one.

    • you need to be able to reason about the performance characteristics of the abstraction, and understand how its performance and reliability degrade as you put pressure on it.

    • you know where to look, or who to ask, if it doesn't seem to be working.

Sound easy enough? If you think so, then you should be able to, say, talk about the data structure they used to implement a Set, the last time you used a Set. Think Java's TreeSet, or STL's Sets and MultiSets. They're implemented using the same underlying data structure, which seems to indicate that it's kind of important, so in my world view, you can use it, but only if you can describe more or less how it works, reason about its space and time performance, etc.
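
If it helps, here's the flavor of answer I'm fishing for, sketched with Java's TreeSet. Like STL's set and multiset, it's backed by a red-black tree, which is where the guarantees in the comments come from:

    import java.util.TreeSet;

    // TreeSet sits on a balanced binary search tree (a red-black tree),
    // so add/remove/contains are O(log n) and iteration is in sorted order.
    public class SetDemo {
        public static void main(String[] args) {
            TreeSet<String> words = new TreeSet<String>();
            words.add("magic");
            words.add("abstraction");
            words.add("cannon");
            words.add("magic");   // duplicate: ignored after an O(log n) lookup

            System.out.println(words);                     // [abstraction, cannon, magic]
            System.out.println(words.first());             // abstraction, the smallest element
            System.out.println(words.contains("cannon"));  // true, O(log n), never a full scan
        }
    }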

Am I right? I have no idea. I suspect that the Line of Acceptable Magic is slowly moving up, as certain abstractions become so bulletproof, ubiquitous, and well-supported that you really don't need to know how they work. Like toilets. I'm happy that they're magic.

As the Line of Acceptable Magic moves up each year, the people who understand abstractions below the line will kick and scream (I do it too), saying that back in the old days, your bits had to walk uphill both ways in the snow. These folks have been burned by the appearance of slick new magical forces, claiming to give you seven years of high availability if you'll just sign on the dotted line. Old-timers always advise you to hold on to your low-level knowledge, even if you apparently never use it.

On the other side, there are people who are pushing the Line of Acceptable Magic higher and higher, by building abstractions that take you further and further away from the bits marching like ants between computers on the network, or swirling around in the washing machine of your CPU. They say you can't write great literature if you're constantly worrying about typesetting and ink drying times. The J2EE people are shaking their heads at me for saying you should "only" be using the abstraction that plain Java and misc 3rd-party libraries provide. I'm sure they feel as frustrated with me as I do with people who are still writing big systems in C++, and they take out their frustration on me in debriefs, when I say their candidate doesn't raise the bar.

(And I take out my frustration on them when they say my candidates, who might quite literally be rocket scientists, fail their C++ questions. Really. This stuff happens every month, and the two sides are waging a sort of desperate war to figure out what a Hiring Bar is. As far as I know, it's pure magic.)

When is it OK to use magic? How do you know your machines won't fail during a most inopportune siege on your castle? How do you know your frameworks won't fail during an important time window for your business?

This is a really hard problem.

(Published Sep 05, 2004)