
Is Weak Typing Strong Enough?

Stevey's Drunken Blog Rants™

So... how big can dynamically-typed systems get? Do static type systems really matter? I'd love to know the answer to this.

We have some big systems at Amazon, and most of them seem to use strong static typing; at least the ones I know about do. Is that a requirement, or could we just as well have done them in Perl, Ruby, Lisp, Smalltalk?

I'm actually interested in the question in a broader sense than just for programming languages. For instance, the idea of strong typing applies just as much to relational data modeling as it does to programming languages. You can choose to model the living hell out of every possible entity, or you can throw together a quick-and-dirty schema of name/value pairs. Same goes for XML data modeling.
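
To make the data-modeling version of that trade-off concrete, here's a minimal sketch in Python (the entity and its fields are invented for illustration). The same record, modeled both ways:

    from dataclasses import dataclass

    # The "model everything" approach: each entity gets its own typed
    # structure (relationally, each would get its own table).
    @dataclass
    class Address:
        street: str
        city: str
        postal_code: str

    # The quick-and-dirty approach: one generic bag of name/value pairs.
    # Nothing is checked until the moment you try to use a field.
    address = {
        "street": "123 Main St",
        "city": "Seattle",
        "postal_code": "98109",
    }

The first version catches a misspelled field the moment you construct the object; the second accepts anything, and you find out later — which is precisely the trade being made.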

Pros of static typing

Here's my stab at a list of the main advantages of static typing:

    1. Static types provide constraints that can detect some type errors early, before the program runs (or at the time a row is being inserted/updated, or an XML doc parsed, etc.)

    2. Static typing offers more (or perhaps just easier) opportunities for performance enhancements.

      1. For instance, it's easier to create intelligent database indexes if you have a rich, thorough data model. And compilers can make better decisions when they have more precise information available about variable and expression types.

    3. With verbose type systems like those of C++ and Java, you can look at the code and see the static types of your variables, expressions, operators and functions.

      1. This advantage isn't as true in languages with type-inference like ML and Haskell; they evidently feel that requiring the type tags everywhere is a negative. But you can still specify type tags in cases where it clearly aids readability — something most dynamic languages don't allow.

    4. Static type annotations make it easier to do certain kinds of automated processing of your source code.

      1. This includes automated doc gen, syntax highlighting and indenting, dependency analysis, style checking, and other code-looking-at-code kinds of stuff. Static type tags simply provide more "purchase" for compiler-like tools: more distinct syntactic elements available for lex-only tools, and less guesswork needed for semantic analysis.

    5. People can look at an API or schema (as opposed to the implementation code or the database tables) and get a feel for its overall structure and usage patterns.

Am I overlooking any other advantages?

Cons of static typing

Here are the disadvantages of static types, as I see it:

    1. They artificially limit your expressiveness.

      1. In Java, for instance, the type system doesn't let you do operator overloading, multiple inheritance, mix-ins, reference parameters, or first-class functions. Any time the most natural design involves these things, you're stuck trying to mutate it into a design that fits Java's type system.

      2. You can come up with similar complaints about every static type system out there, from Ada to C++ to OCaml to, er, Z-language. About half of all design patterns out there (not just the GoF patterns) appear to be ways to take perfectly natural design ideas and twist them to fit into someone's static type system: recipes for pounding square pegs into round holes.

    2. They make development go more slowly.

      1. You spend time up front creating your static models (top-down design), and more time updating them as requirements change. Type annotations also take up real estate in your source code, so they're a little slower to enter and maintain. (This is only a serious problem in Java, which has no type-aliasing facilities.) And as I mentioned above, you also have to spend more time forcing your designs to fit the static type system.

    3. They take longer to learn.

      1. It's much easier to get started with a dynamically-typed programming language. Static type systems are rigidly picky, and you have to spend a bunch of time learning their way of modeling the world, plus additional syntax rules for specifying the static types.

      2. Also, static type errors (aka compiler errors) are especially hard for language learners to figure out, because the poor program hasn't had a chance to run yet. You can't even use printf-debugging to see what's going wrong. You're stuck rearranging your code randomly until it gets past the compiler.

      3. This means it's harder to learn C++ than C or Smalltalk. It's harder to learn OCaml than Lisp, and harder to learn the Nice language than Java. And Perl has a whole bunch of static complexity — labyrinthine rules about what you're allowed to say, and how, and when — which makes it harder to learn than Ruby or Python. I've never seen an example in which static typing made a language easier to learn.

    4. They can lead to a false sense of security.

      1. Static type systems reduce runtime errors and increase data integrity, so it's easy to be lulled into thinking that if your system passes compilation and runs, then it's basically bug-free. People who use languages with strong static type systems seem to do a lot less unit testing. Maybe it's my imagination, though.

    5. They can lead to sloppy documentation practices.

      1. It's easy to generate the javadoc for your system with practically no comments, and think that it's sufficient for figuring out how to use it. I see this all the time in SourceForge projects, and even the Sun JDK packages often do this. (For instance, Sun frequently fails to provide any javadoc comments whatsoever on groups of static final constants.)

    6. They rarely yield highly dynamic/reflective systems.

      1. Most likely for performance reasons, most statically typed languages throw away most or all of the compiler-generated metadata at runtime. As a result, the systems are usually very hard to modify (or even introspect on) at runtime. E.g. to add a new function to a module, or a method to a class, you usually have to recompile, shut down, and restart. (See the sketch after this list.)

      2. This doesn't just impact the development cycle; it can adversely affect the entire design, if you have a system that needs to be modified (or reflected on) while it runs. You wind up needing to build elaborate architectures to support dynamic facilities, and it's inescapably mixed in with your domain logic.
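
To make that last point concrete, here's what runtime modification looks like on the dynamic side — a minimal Python sketch, with the class and method names invented:

    class Contact:
        def __init__(self, name):
            self.name = name

    c = Contact("Pat")

    # Later, while the program is still running: bolt a new method onto
    # the class. Existing instances pick it up immediately -- no
    # recompile, no shutdown, no restart.
    def greeting(self):
        return "Hello, " + self.name

    Contact.greeting = greeting
    print(c.greeting())   # -> Hello, Pat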

Have I overlooked any other disadvantages of static type systems?

You can pretty much invert all the points above to obtain the pros and cons of dynamically-typed languages. Dynamic languages give you more expressive power and more design options; they're easier to learn; they make development go faster; and they tend to provide more run-time flexibility. And in general, you lose on early warnings of type errors (at least from a compiler), they're harder to optimize for performance, it's harder to do automated static analysis on them, and it can be harder to look at the code and figure out the type of a variable or expression.

Just as static languages tend eventually to bend and start adding dynamic features, dynamic languages often try to add in some sort of optional static type system (and/or static analysis tools), as they try to improve performance or increase early error detection. But it's usually pretty ugly, and it works best if optional static typing was designed into the language from the start.
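
For a feel of what bolted-on optional types look like, here's a sketch using Python's optional annotation syntax (a feature the language grew well after this piece was written). Both functions run identically; the difference is that a separate static checker such as mypy can flag type errors in the annotated one before it ever executes:

    # Untyped version: perfectly legal, nothing checked ahead of time.
    def total(prices):
        return sum(prices)

    # The same function with optional annotations layered on.
    def total_annotated(prices: list[float]) -> float:
        return sum(prices)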

What's the right approach?

The strong vs. weak typing issue really gets people worked up. The choice has a major impact on virtually all aspects of your project lifecycle, architecture, and development practices.

Which of the following positions would you say is most true at your company, assuming (for the moment) that you can only choose one of them:

    1. Above all, we need stability. We have enormous scale and massive business complexity. To create order out of inevitable chaos, we need rigorous modeling for both our code and our data. If we don't get the model and architecture mostly correct in the beginning, it will hurt us later, so we'd better invest a lot of effort in up-front design. We need hardened interfaces — which means static typing by definition, or users won't be able to see how to use the interfaces. We need to maximize performance, and this requires static types and meticulous data models. Our most important business advantages are the stability, reliability, predictability and performance of our systems and interfaces. Viva SOAP (or CORBA), UML and rigorous ERDs, DTDs or schemas for all XML, and {C++|Java|C#|OCaml|Haskell|Ada}.

    2. Above all, we need flexibility. Our business requirements are constantly changing in unpredictable ways, and rigid data models rarely anticipate these changes adequately. Small teams need to be able to deliver quickly on their own goals, yet simultaneously keep up with rapid changes in the rest of the business. Hence we should use flexible, expressive languages and data models, even if it increases the cost of achieving the performance we need. We can achieve sufficient reliability through a combination of rigorous unit testing and agile development practices. Our most important business advantage is our ability to deliver on new initiatives quickly. Viva XML/RPC and HTTP, mandatory agile programming, loose name/value pair modeling for both XML and relational data, and {Python|Ruby|Lisp|Smalltalk|Erlang}.

I'll (slightly) caricature the workflows of these two philosophies, to make the essential differences as clear as possible.

The strong-typing camp basically works as follows: start by designing for your current requirements. Make a spec, even if it's a very lightweight spec. Define your interfaces and data models up front. Assume you're going to suffer from massive loads, so write everything with a keen eye towards performance. Avoid using abstractions like garbage collection and regular expressions. [Note: even Java programmers generally work extra hard to avoid garbage collection, always talking about object pooling before they've even started coding.]

The first camp only resorts to dynamic typing when they're backed into a corner. For instance, under extreme duress, a team using CORBA might finally add an XML string parameter to each interface call, effectively giving them an "escape" from the rigid type system that they adopted in the first place.
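
In code, that escape hatch looks something like this — a hypothetical Python sketch (the function, its parameters, and the XML field are all invented), with a rigidly typed signature plus one opaque string for everything the type system never anticipated:

    import xml.etree.ElementTree as ET

    # Two "hardened" parameters, plus the escape hatch.
    def create_order(customer_id, sku, extras_xml):
        extras = ET.fromstring(extras_xml)
        # Anything the schema never anticipated rides along in the XML,
        # invisible to the compiler and the interface generators.
        initials = extras.findtext("initials")
        print(customer_id, sku, initials)

    create_order(42, "JERSEY-9", "<extras><initials>SY</initials></extras>")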

The second camp basically works as follows: start by making a prototype. Assume you can code it faster than a person could write a spec with the same level of detail, which means you can get customer feedback on it faster. Define whatever interfaces and data models make sense at the moment, but don't waste a bunch of time on it. Write everything with an eye towards doing the simplest thing that could possibly work. Assume you're going to suffer from massive requirements change, and write everything with a keen eye towards getting something working fast. Use abstractions (e.g. collections rather than buffers, or regexps rather than string compares), even when you know they may be overkill, because they buy you more flexibility, save you some typing, and generally have fewer bugs.

The second camp only resorts to performance optimizations and interface/schema lockdowns when backed into a corner. For example, under extreme duress, a team working in Perl might rewrite some heavily-used core modules in C and create XS bindings. And over time, as abstractions become standardized through usage, they gradually get locked down by wrapping them with schemas or finer-grained OO interfaces. [Even Perl programmers often bite the bullet and write OO interfaces for commonly-used abstractions.]
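
Here's a toy sketch of that lockdown pattern in Python (the field names and the Order wrapper are invented): start with a loose bag of name/value pairs, then wrap it in a stricter interface once the shape has settled through real usage:

    # Early on: orders are loose name/value bags, trivially extensible.
    order = {"sku": "B00005", "qty": 2, "gift_wrap": True}

    # Later, once usage has stabilized, lock the shape down behind a
    # small class: one blessed interface, one place to add validation.
    class Order:
        REQUIRED = ("sku", "qty")

        def __init__(self, fields):
            for key in self.REQUIRED:
                if key not in fields:
                    raise ValueError("missing required field: " + key)
            self._fields = dict(fields)

        def get(self, name, default=None):
            return self._fields.get(name, default)

    locked = Order(order)
    print(locked.get("gift_wrap"))   # -> True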

How do you suppose these strategies pan out over the long term?

Big Case Study

I watched the strong/weak battle play out (in various ways) in Amazon's Customer Service Applications group for years. I was initially aligned as follows:

    • I was in the "strong" camp for languages, personally favoring development in Java.

    • I liked the "weak" camp for protocols (e.g. favoring xml/rpc over SOAP) and XML modeling (favoring no DTD or schema at all).

    • I was in the "clueless" camp for relational modeling (favoring keeping my mouth shut and learning from the experts).

One thing I observed was that the folks who favored Perl always seemed to be able to get stuff done really, really fast, even compared to experienced Java folks. And they had their act together; it wasn't just crude hackery, as many Java programmers would like to believe. Their code was generally very well organized, and when it wasn't, they'd go in periodically and fix it. Sometimes they did quick, hacky scripts, and in fact the ability to do this proved to be mission-critical time and time again. But generally the Perl stuff worked just as well as the Java stuff. Whenever performance became an issue, they'd find clever ways to make it perform well enough.

CS Apps had a relational data model for customer contacts, but it had a gaping hole, by way of an untyped attribute system (name/value pairs, basically). The relational model didn't change very often, in part because that was back in the days of centralized control over the databases, so it was pretty hard to get schema changes through, even though we had several talented data modelers among the software engineers. But the schema also didn't change much because even if they'd let us make the changes, our requirements were changing so fast that we might never have been able to keep up. Those flexible contact attributes were a life-saver. Even the static-language camp had to agree.

Yes, we had some data integrity issues. Name/value pair models are somewhat more prone to wrong names (e.g. via typos), bad values, or bad dependencies, because you can't rely on the database to do constraint checking for you. If anything, this made us more careful. When we noticed data integrity problems, we'd write a backfill and fix them. Sometimes it was hard. But we realized that even with the strongly modeled tables, we still had occasional data integrity errors. You always need to be able to recover from errors, strong typing or no.
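
A backfill in a name/value world is just a one-off repair pass over the loose rows. A minimal Python sketch — the attribute names, the typo being repaired, and the in-memory "store" are all invented for illustration:

    # Pretend these rows came out of the untyped attribute table.
    attributes = [
        {"contact_id": 1, "name": "callback_phone", "value": "206-555-0100"},
        {"contact_id": 2, "name": "calback_phone", "value": "206-555-0101"},
    ]

    def backfill(rows, bad_name, good_name):
        """Repair rows whose attribute name was misspelled."""
        fixed = 0
        for row in rows:
            if row["name"] == bad_name:
                row["name"] = good_name
                fixed += 1
        return fixed

    print(backfill(attributes, "calback_phone", "callback_phone"))  # -> 1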

And yes, we had performance issues from time to time — with the Java stuff too, though, and also with the C++ code (CS Apps was a mixed bag because we had to interoperate with virtually every other system in the company). It doesn't matter what language you use — performance issues will always come up. You just have to find them and fix them.

So for years, we had Java and Perl development going on side by side. This was a decision made in the old days, purely for reasons of expedience. When we were deciding how to implement Arizona (our internal web-based application suite for CS), we had about a 50/50 split between Perl and Java programmers on the team.

During the initial development, the Perl use-cases got finished astonishingly fast. For a while, Arizona had more Perl than Java, because our Perl programmers started grabbing tasks assigned to the Java folks. Over time, the "climate" across the company pushed us to migrate everything towards Java. It was a pretty complicated situation not worth recounting here, but over a period of years, most of the Perl stuff in Arizona was gradually rewritten. (I hear that after I left, the climate changed again, with Java being phased out in favor of C++ and Perl, but for reasons mostly unrelated to the languages themselves.)

In any case, for several years I got to watch Perl and Java folks working side by side doing pretty much the same tasks. In some cases, they even had to implement the same logic in both languages. Yes, there were inefficiencies with our Perl-and-Java approach. However, it was the right decision at the time, and as a result, I was personally able to witness a more or less apples-to-apples, multi-year comparison of the strong-typing and weak-typing camps at work in the same production environment.

In a nutshell, I was pretty impressed. I was a die-hard Java guy at the time, and even then, I could see that the Perl code was far smaller and simpler than the Java code. It didn't feel "cleaner", since Perl itself is a bit challenged in that department, but it seemed modular enough. It had a well-defined architecture, and it got the job done, year in and year out.

Our Java code (to me) seemed far more complex, even though I could read Java more easily. I think Java programmers have a tendency to overengineer things, myself included. I suppose many Java folks would have thought of our Perl code base as grossly under-engineered. But if it was really under-engineered, it seems like it should have caused more problems than it did. In practice, most of the problems in the Perl code were interoperability issues with external services (or databases, in cases where there were no services yet.) Most service owners didn't include Perl bindings for their interfaces, so our Perl folks had to do a lot of head-scratching to find workarounds.

On the data-modeling side, the team taught me how flexible attribute systems can be created in relational schemas. DBAs and (especially) data modelers tend to hate them, but SDEs like them just fine, since they beat trying to do "real" O/R mapping, and you don't have to make schema changes to accommodate certain large classes of new requirements. I doubt we'd have been able to keep up without that flexible system in place.

Why I'm Weak

These days, I believe approach #2 (weak/latent typing with selective lockdowns) seems preferable to approach #1 (strong/static typing with selective loosening) for most problem domains at Amazon. Our business is always changing rapidly, so we're always doing major overhauls of our interfaces and data models. We always have more work than we know what to do with. And we have Moooooooooore's Law to lean on... well, except that it's stalled out for the past 3 years. But still....

The CS Apps example isn't the only basis for my opinion, although I do feel it's pretty compelling evidence. (How often do you really get to see large-scale, long-term, mostly apples-to-apples comparisons of two different languages with different type-system philosophies?)

Another big reason I prefer weak typing is that I've seen plenty of examples where a team using strong typing has thrown in the towel. I really have seen a team give up on "hardened" interfaces and add in an XML string parameter to a CORBA interface, and I thought it was absolutely the right decision. They were getting slain by CORBA's type system: every teeny-weeny little change for one customer would affect all their other customers as well. Their "tunnel" gave them some breathing room.

I've seen this kind of thing elsewhere as well. Heck, even Sun's JMX interface looks strongly typed, but it has a weakly typed query language embedded as a String parameter ("ObjectName"), one that's totally opaque to the compiler, interface generators, etc. So much for strong typing saving the day.

And once we did a site for a major sports franchise, and they wanted customers to be able to put their initials (or a custom number, or whatever) on jerseys and such. Our rigid model for orders (and the associated strongly-typed interfaces wrapping that model) had no provision for passing a custom initials field through to the backend, to be sent off as part of the drop-ship request. (Dunno if that technically counts as drop-ship, but you know...) I remember there being weeks of angst about how we were going to solve that problem.

The customer-initials thing was trivially tiny compared to the schema impact that Wireless (cell phones) had. Months and months of angst there. And Wireless seems to be turning out to be tiny compared to some of our more recent initiatives (from a data-modeling and interface-design perspective, at least.)

I could go on with other firsthand examples. Generally speaking, strong static typing has gotten in our way, time and again, and weak typing has never resulted in more "horribly bad" things happening than the equivalent strong-typing approaches. Horribly bad stuff just happens sometimes, no matter what approach you use.

Purely from my own personal-productivity standpoint, I've found that well-designed weakly-typed systems allow me to be much more productive (and usually much happier) than equally well-designed strongly-typed systems. Emacs is one example. Even now, when I'm still much better at Java than Lisp, I'd rather write an Emacs plug-in than an Eclipse plug-in any day of the week. It's an order of magnitude simpler. Well, once you learn Lisp, which was pretty hard. But in the long run, I'd rather spend the extra study time up front, and wind up being more productive forever.

And Emacs isn't especially well-designed, at least by modern standards. It could benefit from OO interfaces, namespaces, multithreading, less reliance on dynamic scoping, etc., all of which it would have if it were written in a more modern Lisp. But even in a dinosaur like Emacs, it's still an order of magnitude simpler to write plug-ins. If you focus strictly on the user-extensibility mechanisms of Emacs and Eclipse, you have yet another compelling, essentially apples-to-apples comparison of strong-typed vs. weak-typed language approaches, and Weak wins again.

And look at Java: even Java proponents would still rank Java's reflection, dynamic proxies, dynamic class loading, varargs, and other non-statically-checkable features as being critically important. Sure, they'll caution you about potential performance problems, and they'll tell you not to rely too much on dynamic features — but I doubt the Java community would be willing to throw all that stuff out. Those features are considered major advantages to using Java.

I used to be a big proponent of strong typing, but over time, my own experience has led me to feel it's not the right thing, at least not for the kinds of systems we build. If you're at a bank, sure. If your business rarely changes, you can afford to have a rigid, well-specified data model. If you're in some industry that has much harder constraints on performance, or safety, then sure. Use a strongly-typed system.

But after watching the new Hitchhiker's Guide movie this weekend, and seeing the hilarious caricature of British governmental bureaucracy in the Vogons, I thought: hey, I hate bureaucracy. And static type systems are basically just bureaucracy. I want them to get out of my way and let me get stuff done without needing to fill out a bunch of forms. If the price of getting static type-safety is working with a brain-dead compiler and a restrictive type system, well, then I can handle my own type errors, thanks very much.

Are the Forces of Weakness strong enough?

I still have some doubts. Do weakly-typed systems have inherently lower scalability? Do they tend to dissolve into vast typeless traps at a certain size, as the static camp would have you believe? Do the runtime type-error rates get out of hand, even with rigorous unit testing and software-engineering discipline?

And is the performance prohibitively expensive? For instance, do you know of any large, weakly-typed systems that had to be thrown out and rewritten in a statically-typed language (by the same team, so we know it wasn't just language preference) in order to meet performance goals? I'm specifically interested in n-box distributed systems that failed, not embedded systems or programs for end-user desktops.

I don't think it should be a problem, and in theory, I feel it should be possible to use Ruby or Lisp (or Smalltalk, Python, or any other dynamic language with powerful mechanisms for code modularity and object-oriented abstraction) to build a large, customer-facing, production service at Amazon. I'd like to see some evidence of that before actually trying, though.

At this juncture, I think enforced static typing (e.g. what you find in Java, C++, OCaml, Ada, etc.) is detrimental to progress and flexibility. I also think that a complete lack of support for it (e.g. what you find in Ruby and today's Python) is problematic for being able to selectively tighten up systems as their usage patterns become established. I think Lisp's solution, where you can add in static types as needed, is close to ideal.

But I'm still a bit timid about trying to write something really significant in Ruby (my weakly-typed language of choice), on account of its performance and its lack of native threading. I'm equally timid about trying Common Lisp, mostly because the package contributions on Cliki seem fairly paltry; the language doesn't appear to have enough momentum for me to commit to it. I have similar reservations about all the other viable options (e.g. Python, Erlang, Scheme, Lua).

And yet I'd prefer not to work in Java or C++ anymore.

This is a hard problem.

(Published on May 2, 2005)

Note: I've deliberately misused the terms "strong" and "weak" in this article, for two reasons. First, to help make the distinction that I'm talking about more than just programming languages; the issue applies to data and interface modeling as well. Second, for poetic effect.

I'm obviously talking about static versus dynamic typing in the programming-language contexts. I'm well aware that the situation is better described as a 2x2 grid consisting of the combinations of static/dynamic and strong/weak; e.g. see Chapter 1 of Benjamin Pierce's "Types and Programming Languages" textbook.

I only bring this up because of some almost unbelievable whining coming from a few Python folks who've chosen to overlook the largely pro-Python(/Smalltalk/etc.) viewpoint of my article here, because they objected to having their dynamic typing referred to as "weak". Python and poetry evidently don't mix well. —steve, 12/30/2005