
Being the Averagest

Stevey's Drunken Blog Rants™

[I wrote this blog pretty late at night during one of my most miserable weeks in the past few years. Laid up with the flu, just got back from California visiting my brother in the hospital, heavily medicated, still coughing -- probably not the best time to write a new blog entry, in retrospect. I'll leave it in as an example of me at my absolute grumpiest. Normally I'm so cheerful! I promise my next blog attempt will have an Emacs macro in it, or something.]

Most programmers have only a vague notion of how competent they are at what they do for a living.

This sucks, since it implies that we're not pushing ourselves to be better. Competition spurs people to grow to their full potential. People compete for prestige, for money, survival, whatever. Any kind of competition motivates people to push harder, and when the competition is intense enough, it motivates people to push themselves as hard as they can.

But how do programmers compete? Generally, they just don't. Not in the way chess players or golfers compete, anyway. The reason? You can't compare programmers quantitatively, so you can't compute a score or a rank. Competitions and competitors have to be scored. Sure, you can set up scored programming competitions, but they're so tightly controlled that they don't resemble real-world software development anymore. Professional programmers basically just don't compete with each other.

Hence, you're probably not pushing yourself. Even if you're trying to improve your programming skills, you're probably just doing it in areas you're already comfortable in. And your improvements probably still aren't happening as fast as they would if you were competing to improve them.

Believe you me, people would looooove to be able to assign fine-grained scores to programmers. You can do this for some professions. Baseball stats are so accurate and comprehensive, for example, that you can compute a baseball player's trade value from actuarial tables. So it sure seems like you ought to be able to compute some kind of fine-grained numerical ranking for programmers.

But no such luck. Measuring programmers is a slippery problem. We programmers know when we're being measured, and we're smart enough to figure out the bugs or weaknesses in any measurement algorithm that HR departments can come up with.

It's not that we're trying to be devious. We just know intuitively that any measurement scheme that's truly accurate or realistic enough to measure our overall value will be too hard to instrument, compute, and/or understand. It's another Nonesuch Beast. So when a new programmer productivity measurement comes along, we already know it'll be silly, and we turn it into a game, and start competing at it -- specifically, by exploiting the weaknesses of the game rules. We're competitive by nature, but never more so than when we're being measured overtly.

Dilbert has made fun of this idea plenty of times, of course. My favorite is the one where the pointy-haired boss announces they're going to reward programmers based on lines of code, and Wally leaves, saying he's going to go write himself a new minivan. (Incidentally, shouldn't there be a rule that companies aren't allowed to do things that have been formally ridiculed in a Dilbert comic? We could use a rule like that.)

The Blub Threshold

Nobody has exactingly precise measurements of programmer productivity or quality, but that doesn't mean you can't tell a strong programmer from a weak one. Quick -- think of the best programmer you know. Do any names spring to mind? What's so good about those programmers? Any reason is fine: maybe they get really hard stuff done really fast, or they perform feats that awe their peers, or they seem to know everything about our systems, whatever. No need to think too hard about it. Let's just agree that some folks really stand out as being top programmers.

As soon as you conclude that some programmers are noticeably better than others, you've got yourself a primitive rating system. You only need two points (say, Bob and Sue) to make a rating scale with 5 levels: worse than Bob, roughly as good as Bob, better than Bob but worse than Sue, roughly as good as Sue, better than Sue.
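In code, the whole "system" is about this deep. Here's a toy sketch of my own (Java, purely for the sake of argument), with the pairwise comparison deliberately stubbed out, because that's exactly the part nobody knows how to compute:

    // Toy sketch of the two-reference-point rating scale described above.
    // compare() -- "is A noticeably better than B?" -- is deliberately left
    // unimplemented; measuring that is the whole problem.
    enum Bucket { WORSE_THAN_BOB, ABOUT_BOB, BETWEEN_BOB_AND_SUE, ABOUT_SUE, BETTER_THAN_SUE }

    class CrudeRanker {
        // Positive if a is noticeably better than b, negative if worse, zero if about equal.
        static int compare(String a, String b) {
            throw new UnsupportedOperationException("measuring programmers: left as an exercise");
        }

        static Bucket rate(String candidate) {
            int vsBob = compare(candidate, "Bob");
            if (vsBob < 0) return Bucket.WORSE_THAN_BOB;
            if (vsBob == 0) return Bucket.ABOUT_BOB;
            int vsSue = compare(candidate, "Sue");
            if (vsSue < 0) return Bucket.BETWEEN_BOB_AND_SUE;
            if (vsSue == 0) return Bucket.ABOUT_SUE;
            return Bucket.BETTER_THAN_SUE;
        }
    }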

Sadly, that's about the state of the art for SDE stack-ranking systems. Even in our high-tech industry, you still get away from the bear by running faster than your friend (i.e., not being ranked worse than Bob). But running faster than your friend doesn't imply you're running as fast as you can. It's a rating system that only produces bear-evasion tactics, not world-class competition.

The fuzzy Bob/Sue five-level ranking system really is approximately what most companies use, so we'll call it the B.S. system for short. Ahaw, ahaw, how droll I am. The B.S. system is more or less accurate, but it's incredibly imprecise. Well -- it's accurate if you subscribe to the belief that you can stack-rank SDEs on a one-dimensional scale called "goodness", or "worth", or some such.

It's not entirely evident whether this is a good model, but whenever the question arises, people cite the "lifeboat drill". They ask you: if you could only hire 5 people for your own startup company, which people would they be? The assumption is that you should be able to take any group of engineers and sort them on "goodness", or at least on whether you'd pick them for your lifeboat. It smacks vaguely of specious reasoning, since we're not exactly a 6-person lifeboat anymore. But nobody has felt like challenging it so far, so we let the argument (and the primitive stack-rank system, which is better than nothing) stand.

I suppose you could think of the popular one-dimensional B.S. rating scale as a projection of all your relevant skills onto the "goodness" axis. It's sort of like an "SDE fitness function".

So the B.S. five-level ranking system serves its HR purpose, just barely. The system is a direct by-product of the fact that you can't measure SDE output quantitatively; all you can really do is compare SDEs to each other qualitatively to produce a relative ordering.

How do you actually decide that one SDE is better than another? That's a really tough problem, and I'll punt on it for today. But regardless of your scoring criteria, the magnitude of the minimum detectable difference between two SDEs is interesting in its own right. With such a coarse-grained ranking system, you need a pretty big jump on the "goodness" scale to move from "about as good as Joe" to "better than Joe". I'll call this minimum difference the Blub Threshold, since I have a hunch it's approximately equal to the boost you'd get by moving from Blub to a higher-level language.

I don't know if the Blub Threshold is a 10%, or 2x, or 10x difference in programmer P & Q (productivity and quality, which are the outputs that people traditionally wish they could measure). I'd guess it's between 25% and 100%, depending on the terrain.

So you need to be up to twice as good as your neighbor in order for anyone to notice and put you in a different stack-rank bucket.

Wednesday's Your Day in the Bucket

Frankly, the 5-bucket system is useless for us programmers. Maybe it's serving valuable lifeboat-ish HR needs, but for those of us wanting to turn ourselves into better programmers, we need something better.

Back in school, you competed with other students based on GPA -- not just so you could lord it over your peers, although that's nice as far as it goes, but also to improve your chances of becoming gainfully employed. The competition with your peers is what drove a lot of your studying.

Everyone in school takes the competitive environment for granted. Students compete on test/homework grades and overall GPA, profs compete for tenure, departments compete with other schools for research grants, and so on.

That environment doesn't exist at most software and servware companies. Sure, the entire company is competing with other companies, rah rah rah. But there's very little individual competition, and what little there is of it is competition not to be the worst, rather than competition to be the best. We don't announce a Valedictorian Amazon SDE each year, so nobody knows if they're the best, or even how close they came. And certainly there's no incentive to compete for a nonexistent "best" title.

Why don't we compete in the workplace? Most students don't seem to mind the competition at school. Some students excel, some just pass, some fail. It affects your life just as much as your job does. Some professions DO compete directly -- sports and performing arts, of course, but also sales, operations, and manufacturing folks. They're numbers-driven, and they compete on those numbers.

Maybe we don't do it because we don't have the equivalent of a GPA for engineers. If we had such a number -- let's call it PNQ for "Productivity 'N' Quality" -- then SDEs would immediately begin competing to improve their PNQ. But it doesn't appear to be possible to measure or compute a fine-grained PNQ value for an SDE. I think you can measure someone's skills, but that's really more a measure of potential than performance.

So we don't have a competitive environment among our SDEs. I think this is true of most companies that employ SDEs.

Other than competition, the only force that really motivates SDEs to take deliberate steps to maximize their PNQ (which, measurable or not, we know can be improved) is peer pressure. This presupposes a general culture geared towards learning, mentoring, and career development. But we don't have that here either, and although we've started moving in that direction this year, we're pretty far from having a culture where our SDEs are all minding their PNQs. (Sorry, couldn't resist.)

Instead, most of us are simply focused on cranking out features and fixes as fast as we can. And our crank-rate isn't improving over time, because we're not taking time out to practice.

That Thing Other Professionals Do

Professional sports teams are squared away. The teams practice, and the team members also practice individually. They drill, they run laps, they work out. No competitive team just goes out and does nothing but play games. They'd stop improving, fall out of practice, stop winning, go out of business.

But it's not just team sports. Anywhere there's competition, there's practice. Musicians, golfers, skiers, chess players, martial artists -- everyone who pursues some difficult art or craft practices it. There was a time when practicing was considered cheating. England was pretty grumpy about Americans "cheating" at golf, around the turn of the last century, because the Yanks were going to driving ranges and practicing their swing. But once the practicing starts, it becomes a necessity if you want to stay competitive and remain in the game.

Even where there's not overt competition, professionals still study and practice. Doctors and pharmacists have to continue their education throughout their careers. Writers, painters, composers and other artists practice their tools and techniques, and they keep abreast of the trends in their profession.

Virtually all professionals start by attending school to receive education in the theory, history, and practice of their profession. Many do an internship, apprenticeship, or residency as part of their education. In some professions, the education is required, and some even make you pass a test (e.g. the Bar exam, the CPA exams, etc.) in order to be a member of that profession.

I have trouble thinking of any modestly difficult profession in which continuous study and practice aren't the norm. Fighter pilots train in simulators before getting into the latest jet. Actors and politicians practice their lines and their smiles. Opera troupes do mock performances before public appearances. Writers, poets, and artists attend workshops, and study the work of the Masters.

Everyone practices -- everyone, that is, except for us. We just grind stuff out, day in, day out. Are you as embarrassed about the state of our profession as I am?

There are exceptions, of course. Some programmers do study and practice and try to improve their skills and abilities, and keep up with the ever-changing state of the art. A lot of them seem to hang out on the very first Wiki.

Most programmers don't bother, though. After all, there's no competitive incentive, other than to make sure you run faster than your slowest peers. And there's no incentive from a general learning culture, at least not here. In fact, there's a serious obstacle to studying or practicing: if you aren't given time for it at work, you'll usually be too tired to do it at home. We do work pretty hard, after all.

But I think that by far the biggest reason that programmers don't try to improve is stated in the opening sentence of this essay. Programmers have no idea how good (or bad) they are at programming. In fact, we all think we're pretty darn good at it.

The Bob Paradox

Let's posit some hypothetical average programmer out there named Bob. He thinks he's about as good at programming as you can get. He knows there are folks who know some languages he isn't familiar with, or who type a lot faster than he does, or who seem to be real whizzes with networking or filesystems or whatever. I can tell you this: Bob's not thinking: "Those folks are better programmers than I am."

What Bob's thinking, if he stops to consider those people at all, is this: "They have their specialties, and I have mine. We're all about equally competent."

Bob knows he's better than he was when he started, and that he's better than a lot of newer programmers, on account of having more experience at it. He knows he might spot a potential error condition faster than a more junior programmer, and that he comes up with better designs. But that's all just part of being a bit more seasoned. He's known how to program computers for a long time, and anything he doesn't know, well, he'll just figure it out if he needs it.

Bob knows this guy Joe who's just amazing. Joe's like the best programmer Bob's ever known. I mean, Bob and his peers, they've all got their strengths and weaknesses, but Joe is really stellar, a programming genius. He's a natural at it. One of them whiz kids.

In Bob's view of the world, there are essentially three programmer skill levels: folks learning how to program, folks like Bob who know how to program, and the inevitable whizzes, but they're few and far between. There are always a few whizzes out there, the ones who used to be child geniuses or whatever. But most programmers are like Bob, and Bob knows how to get his work done.

Bob has no incentive whatsoever to try to improve his skills:

    1. He knows he's not as good as Joe. But Joe's great on account of his genes, not because he practiced or studied more than Bob did, back in school. Obviously Bob can't compete with people who were kid geniuses, and he shouldn't exert himself unduly on their account.

    2. Bob knows everything he needs to know about actual programming. He knows how to write code, and how to debug it if there are problems. He can eventually get stuff to work. As far as Bob is concerned, he's a programmer, and that's all there is to it.

    3. Bob realizes there are technologies, data structures, algorithms and techniques that he knows little to nothing about. But that's OK. He doesn't know them because he doesn't need them for his job. Whenever he does need to learn something new, he gets out a book and figures it out.

Bob's content. He goes to work and programs at a reasonable pace. At night he goes home and watches TV or sings Karaoke or plays video games or does just about anything other than programming.

If you were to ask Bob: "Hey Bob, how do you know you don't need to know anything about technology X? What makes you think it wouldn't help you do your job better?" Bob would reply that he's gotten by just fine for many years without knowing X, and he's been doing his job, so quid pro quo, ipso facto, ergo rectum, he obviously doesn't need to know X. He has no idea why you'd bother to ask such an inane question.

Bob the Everyman

Almost everyone thinks of their programming ability as being just fine, plenty good enough. They can get by, get the job done, do pretty much anything they'd need to do, given time and patience.

It's quite a nasty shock for many of our interview candidates when they find they're unable to do something as simple as reverse a linked list, or open and write to a text file. They're not shocked that they can't do it; they're shocked that we'd ask. Those are specialty skills, and not their specialty. They haven't been doing much "low level" stuff like that lately.
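For the record, and just as a sketch of my own (any language would do; Java here), this is roughly how little code the linked-list question actually takes:

    // A minimal sketch: iteratively reverse a singly linked list.
    class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    class ListUtil {
        static Node reverse(Node head) {
            Node prev = null;
            while (head != null) {
                Node next = head.next; // remember the rest of the list
                head.next = prev;      // point the current node backwards
                prev = head;           // step prev forward
                head = next;           // step head forward
            }
            return prev;               // the old tail is the new head
        }
    }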

Not all interview candidates are shocked when they can't do it, because many of them don't realize they've written something that could never work: broken code that's not even remotely close to a correct solution. These programmers are particularly cheerful, being so clueless that they don't even know they're clueless.

I was like that (well, like the first shocked group, anyway) for a very long time. Over a decade. In fact, I didn't realize until last year that I'd been caught up in the Bob Paradox.

See, I always thought I was a perfectly competent programmer: as good as you can get, basically. I was building cool stuff, doing seemingly complicated things, and I felt I knew a tremendous amount of lore about the art of programming. I had won or placed in programming competitions, could program in Java for weeks on end without referring to the API docs, and pretty much felt on top of things.

Every few years, I would read some critical book, or have some weighty flash of insight, and realize that I'd been operating all this time in what could only be termed "clueless mode", and that I hadn't really known what I was doing after all. Amusingly, I was always relieved that now I could consider myself to be a good programmer, since I now knew whatever it was I'd been missing before.

Last year it finally dawned on me, after 16 or 17 years of this, that I just might possibly still be clueless about something important that I really ought to know, something that would make me a much better programmer.

Like, duh.

So now I have no idea how far along the programmer-proficiency curve I am, but I can at least see that I'm nowhere near the high end; I'm not even to the halfway point. My ego still assures me I'm past the 25% mark, but realistically, I doubt it. I'm probably flush with the y-axis.

With this new perspective, I've been able to learn a great deal about programming, software engineering, and computer science in a big hurry. Why? Because I know that any time I see discussions about "weird stuff" that normally I'd just think is a specialty skill for lab-coat types, what I'm really seeing is a hole in my knowledge, a place where I'm clueless. So instead of ignoring it, I scope it out a bit, so I know where it falls in the big scheme of things, and put it in my to-study list.

My to-study list is pretty long now. Like, years long. And I'm pretty selective about what goes in it. I only want to learn things that will make me better at programming in general; I'm happy to skip things that I think really are specialty skills, such as (say) advanced 3D graphics techniques.

Which brings me back to my thesis, the opening line of this essay:

Most programmers have only a vague notion of how competent they are at what they do for a living.

Note: I used to have something kinda mean written here, but it was just my Nyquil kicking in, I think.

All About Bob

I'm responsible for (among other things) developer training at Amazon. It's a big problem to try to tackle. Ironically, though, the training isn't the hard part. The hard part is getting Bob to show up.

Bob is satisfied. I've outlined all the reasons Bob sees no need to try to improve, the biggest being that he doesn't even know it's a possibility. Recall that for Bob, there are only three levels: nonprogrammer, programmer, and anomalous super-whiz. And I haven't offered a single reason why anyone should work to become a better programmer.

That's partly because I think it's self-evident. Obviously there are smart companies out there (I won't name names, but maybe you can, um, Google for them) who would like to eat our lunch and take our milk money, too. Those companies may have internal competition, or a higher hiring bar, or lots of peer pressure and a self-improvement culture. We have no idea. But sooner or later a company will come along that has those things. And they'll punish us if we're still napping.

I'm taking it as axiomatic that a company of mostly high-PNQ engineers will run circles around a company of mostly medium-PNQ or low-PNQ engineers. If Bob won't grant me at least that much, then I'm giving up. But I think it's perfectly safe to assume that if a single high-PNQ engineer outperforms (by definition) a low-PNQ engineer, you can multiply both sides of the inequality by N and it'll still hold true.

I suppose I should mention that I talk to Bob all the time; there are lots of Bobs at Amazon. Far fewer, percentage-wise, than at just about every other company I've worked with or visited. We do pretty well here, actually. Most programmers worldwide (3/4, maybe?) are trapped in the Bob Paradox. After they finish school, they just stop learning new stuff, except when they just happen to notice they need it for the task at hand. Regardless of how much schooling they received, or how much more they could have taken, they now think they're done.

I think I should reiterate a distinction I made earlier: programmers usually are learning new things on the job, but their self-directed study is typically just reinforcing what they're already comfortable with. C++ programmers learn more C++, Java programmers learn more Java, Perl programmers learn more Perl. Linux users ignore Windows and vice-versa. UI developers don't learn tools or systems programming, and vice-versa. People stay in their comfort zones.

This isn't just true of programmers, of course. As a guitarist, I constantly have to fight the urge to practice the way all bad guitarists do worldwide; namely, to play the songs you already know from beginning to end, hoping that next time through you won't make the same mistakes. With that style, all you're doing is practicing your mistakes. The right way to practice a piece is a lot harder and far less instantly gratifying, but it's the only way to move up to the next level as a guitarist. (That took me a decade to figure out as well, even though people all around me had been trying to tell me. It took an embarrassing lesson with David Russell, a grandmaster guitarist, before I finally paid attention.)

People pushing within their comfort zone may feel like they're learning more, but it's really just more of the same. And it's highly unlikely to push them past the Blub Threshold and into a higher stack-rank bucket, or into a different job category or job level.

Nowadays my primary interest, as far as developer training is concerned, is in figuring out how to snatch someone out of the comfy vortex of the Bob Paradox. People usually don't want to leave. Learning new stuff is hard. They'll take the Blue pill, thanks very much.

Once you get someone to realize that the sum of their skills is about as effective as punch cards and paper-tape, compared to how effective they could be (or alternately, what the median PNQ is likely to be in 100 years), the rest is easy. You give 'em some books and set 'em loose. Programmers are a pretty smart bunch, you know, once the blinders are off.

Bob, Joe, and Sue are all fictional characters, and are not intended to resemble any particular real person or persons, at Amazon or elsewhere.

bob@blubsoft.com

(Published Oct 22, 2004)

Comments

Ever look at TopCoder?

I've never done it, but I've worked with two of the top 16 collegiate finalists before and they were pretty sharp. (I think they're working for google now.)

Posted by: Andrew W. at October 22, 2004 06:19 PM

Yeah, TopCoder's cool. I might try to find a way to bring it in house for an internal programming competition, maybe quarterly or twice a year. Could be fun. It's an amazingly well-written Java application, incidentally; one of the best I've seen.

Posted by: Steve Yegge at October 23, 2004 09:14 AM

Are you insinuating that working our asses off year after year for that 40 rating and a 1% raise isn't motivation enough to make us all rise to stardom??

Posted by: Anonymous at October 28, 2004 03:12 AM

Hi Anonymous,

That's a very interesting, complex question, and undeserving of a flip answer from me.

All I can say is: if you're working your ass off, stop right now. Amazon's not worth it. If you have a crappy manager who's making you work your ass off, fire your manager (by finding another group to work in). There are plenty of bad, insecure, incompetent, neurotic managers in every organization, including ours, and you don't need to keep them in business by working hard to make them look good. I don't know why it takes so long to root out and eliminate bad managers at Amazon, but that seems to be the way of things.

Once you find a group at Amazon where you're actually working because you enjoy what you're doing (which is typically determined more by your team members and your management than by the actual work), then you can come back to this post and start wondering whether you might not want to "work" (in a fun way) to make yourself more effective at doing what you love to do.

What many people find is that when they're in the right environment, doing something they believe in (and being recognized by their peers for it), they work harder than folks who are supposedly "working their asses off." But it doesn't feel like work anymore. When it does feel like work, something's going wrong, and you need to fix it.

If you can't find a suitable team in Amazon, well, there are lots of places that pay higher than we do. I know a few guys who've gone across the street to work for Wells Fargo or Washington Mutual, made 50% more than they do here, and they leave at 3:00pm to go play golf.

My god, life's too short to bust your ass for wage increases. Nobody gets rich off wages.

Posted by: Steve Yegge at October 30, 2004 12:30 AM