I was asked by someone to put this old piece of writing of mine somewhere more permanent, so here it is. A few comments off the top:
- This isn’t new. I wrote it years ago.
- Yes, I’m a bit embarrassed at some of the wording, as most people are when they read content they wrote a long time ago. There are things that I’d say differently if I wrote something like this today. There is clumsy wording, and there are poor choices of examples. I’m not taking this opportunity to revise the writing; this is exactly as I wrote it all those years ago.
- If pressed for an answer, though, yes I still do believe in the central conclusions of the article. Namely:
- That “static typing” and “dynamic typing” are two concepts that are fundamentally unrelated to each other, and just happen to share a word.
- That “static types” are, at their core, a tool for writing and maintaining computer-checked proofs about code.
- That “dynamic types” are, at their core, there to make unit testing less tedious, and are a tool for finding bugs.
- That the two are related in the way I outline: that one establishes lower bounds on correctness of code, while the other establishes upper bounds, and that questions about their real world use should come down to the possibility and effectiveness of addressing certain kinds of bugs by either computer-checked proof, or testing.
With that said, here is the original article:
What To Know Before Debating Type Systems
I would be willing to place a bet that most computer programmers have, on multiple occasions, expressed an opinion about the desirability of certain kinds of type systems in programming languages. Contrary to popular conception, that’s a great thing! Programmers who care about their tools are the same programmers who care about their work, so I hope the debate rages on.
There are a few common misconceptions, though, that confuse these discussions. This article runs through those I’ve encountered that obscure the most important parts of the debate. My goal is to build on a shared understanding of some of the basic issues, and help people get to the interesting parts more quickly.
Classifying Type Systems
Type systems are commonly classified by several words, of which the most common are “static,” “dynamic,” “strong,” and “weak.” In this section, I address the more common kinds of classification. Some are useful, and some are not.
Strong and Weak Typing
Probably the most common way type systems are classified is “strong” or “weak.” This is unfortunate, since these words have nearly no meaning at all. It is, to a limited extent, possible to compare two languages with very similar type systems, and designate one as having the stronger of those two systems. Beyond that, the words mean nothing at all.
Therefore: I give the following general definitions for strong and weak typing, at least when used as absolutes:
- Strong typing: A type system that I like and feel comfortable with
- Weak typing: A type system that worries me, or makes me feel uncomfortable
What about when the phrase is used in a more limited sense? Then strong typing, depending on the speaker or author, may mean anything on the spectrum from “static” to “sound,” both of which are defined below.
Static and Dynamic Types
This is very nearly the only common classification of type systems that has real meaning. As a matter of fact, its significance is frequently under-estimated. I realize that may sound ridiculous; but this theme will recur throughout this article. Dynamic and static type systems are two completely different things, whose goals happen to partially overlap.
A static type system is a mechanism by which a compiler examines source code and assigns labels (called “types”) to pieces of the syntax, and then uses them to infer something about the program’s behavior. A dynamic type system is a mechanism by which a compiler generates code to keep track of the sort of data (coincidentally, also called its “type”) used by the program. The use of the same word “type” in each of these two systems is, of course, not really entirely coincidental; yet it is best understood as having a sort of weak historical significance. Great confusion results from trying to find a world view in which “type” really means the same thing in both systems. It doesn’t. The better way to approach the issue is to recognize that:
- Much of the time, programmers are trying to solve the same problem with static and dynamic types.
- Nevertheless, static types are not limited to problems solved by dynamic types.
- Nor are dynamic types limited to problems that can be solved with static types.
- At their core, these two techniques are not the same thing at all.
Observing the second of these four simple facts is a popular pastime in some circles. Consider this set of presentation notes, with a rather complicated “the type system found my infinite loop” comment. From a theoretical perspective, preventing infinite loops is in a very deep sense the most basic possible thing you can do with static types! The simply-typed lambda calculus, on which all other type systems are based, proves that programs terminate in a finite amount of time. Indeed, the more interesting question is how to usefully extend the type system to be able to describe programs that don’t terminate! Finding infinite loops, though, is not in the class of things most people associate with “types,” so it’s surprising. It is, indeed, provably impossible with dynamic types (that’s called the halting problem; you’ve probably heard of it!). But it’s nothing special for static types. Why? Because they are an entirely different thing from dynamic types.
The dichotomy between static and dynamic types is somewhat misleading. Most languages, even when they claim to be dynamically typed, have some static typing features. As far as I’m aware, all languages have some dynamic typing features. However, most languages can be characterized as choosing one or the other. Why? Because of the first of the four facts listed above: many of the problems solved by these features overlap, so building in strong versions of both provides little benefit, and significant cost.
There are many other ways to classify type systems. These are less common, but here are some of the more interesting ones:
- Sound types. A sound type system is one that provides some kind of guarantee. It is a well-defined concept relating to static type systems, and has proof techniques and all those bells and whistles. Many modern type systems are sound; but older languages like C often do not have sound type systems by design; their type systems are just designed to give warnings for common errors. The concept of a sound type system can be imperfectly generalized to dynamic type systems as well, but the exact definition there may vary with usage.
- Explicit/Implicit Types. When these terms are used, they refer to the extent to which a compiler will reason about the static types of parts of a program. All programming languages have some form of reasoning about types. Some have more than others. ML and Haskell have implicit types, in that no (or very few, depending on the language and extensions in use) type declarations are needed. Java and Ada have very explicit types, and one is constantly declaring the types of things. All of the above have (relatively, compared to C and C++, for example) strong static type systems.
- The Lambda Cube. Various distinctions between static type systems are summarized with an abstraction called the “lambda cube.” Its definition is beyond the scope of this article, but it basically looks at whether the system provides certain features: parametric types, dependent types, or type operators. Look here for more information.
- Structural/Nominal Types. This distinction is generally applied to static types with subtyping. Structural typing means a type is assumed whenever it is possible to validly assume it. For example, a record with fields called x, y, and z might be automatically considered a subtype of one with fields x and y. With nominal typing, there would be no such assumed relationship unless it were declared somewhere.
- Duck Typing. This is a word that’s become popular recently. It refers to the dynamic type analogue of structural typing. It means that rather than checking a tag to see whether a value has the correct general type to be used in some way, the runtime system merely checks that it supports all of the operations performed on it. Those operations may be implemented differently by different types.
This is but a small sample, but this section is too long already.
Fallacies About Static and Dynamic Types
Many programmers approach the question of whether they prefer static or dynamic types by comparing some languages they know that use both techniques. This is a reasonable approach to most questions of preference. The problem, in this case, is that most programmers have limited experience, and haven’t tried a lot of languages. For context, here, six or seven doesn’t count as “a lot.” On top of that, it requires more than a cursory glance to really see the benefit of these two very different styles of programming. Two interesting consequences of this are:
- Many programmers have used very poor statically typed languages.
- Many programmers have used dynamically typed languages very poorly.
This section, then, brings up some of the consequences of this limited experience: things many people assume about static or dynamic typing that just ain’t so.
Fallacy: Static types imply type declarations
The thing most obvious about the type systems of Java, C, C++, Pascal, and many other widely-used “industry” languages is not that they are statically typed, but that they are explicitly typed. In other words, they require lots of type declarations. (In the world of less explicitly typed languages, where these declarations are optional, they are often called “type annotations” instead. You may find me using that word.) This gets on a lot of people’s nerves, and programmers often turn away from statically typed languages for this reason.
This has nothing to do with static types. The first statically typed languages were explicitly typed by necessity. However, type inference algorithms – techniques for looking at source code with no type declarations at all, and deciding what the types of its variables are – have existed for many years now. The ML language, which uses it, is among the older languages around today. Haskell, which improves on it, is now about 15 years old. Even C# is now adopting the idea, which will raise a lot of eyebrows (and undoubtedly give rise to claims of its being “weakly typed” — see definition above). If one does not like type declarations, one is better off describing that accurately as not liking explicit types, rather than static types.
(This is not to say that type declarations are always bad; but in my experience, there are few situations in which I’ve wished to see them required. Type inference is generally a big win.)
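To make the contrast concrete, here is a small Haskell sketch (the function names are my own invention) with no type declarations at all. The compiler infers a fully general type for each definition, and tools like GHCi’s `:type` command will display them on demand:

```haskell
-- No type declarations anywhere; the compiler infers everything.

-- GHC infers: swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- GHC infers: average :: Fractional a => [a] -> a
-- (the Fractional constraint comes from the use of (/))
average xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = do
  print (swap (1 :: Int, "hello"))  -- ("hello",1)
  print (average [1, 2, 3])         -- 2.0
```

The declarations are optional, not absent: one can still annotate `swap :: (a, b) -> (b, a)` where it aids the reader, and the compiler will check the annotation rather than trust it.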
Fallacy: Dynamically typed languages are weakly typed
The statement made at the beginning of this thread was that many programmers have used dynamically typed languages poorly. In particular, a lot of programmers coming from C often treat dynamically typed languages in a manner similar to what made sense for C prior to ANSI function prototypes. Specifically, this means adding lots of comments, long variable names, and so forth to obsessively track the “type” information of variables and functions.
Doing this prevents a programmer from realizing the benefits of dynamic typing. It’s like buying a new car, but refusing to drive any faster than a bicycle. The car is horrible; you can’t get up the mountain trails, and it requires gasoline on top of everything else. Indeed, a car is a pretty lousy excuse for a bicycle! Similarly, dynamically typed languages are pretty lousy excuses for statically typed languages.
The trick is to compare dynamically typed languages when used in ways that fit in with their design and goals. Dynamically typed languages have all sorts of mechanisms to fail immediately and clearly if there is a runtime error, with diagnostics that show you exactly how it happened. If you program with the same level of paranoia appropriate to C – where a simple bug may cause a day of debugging – you will find that it’s tough, and you won’t actually be using your tools.
(As a side comment, and certainly a more controversial one, the converse is equally true; it doesn’t make sense to do the same kinds of exhaustive unit testing in Haskell as you’d do in Ruby or Smalltalk. It’s a waste of time. It’s interesting to note that the whole TDD movement comes from people who are working in dynamically typed languages… I’m not saying that unit testing is a bad idea with static types; only that it’s entirely appropriate to scale it back a little.)
Fallacy: Static types imply upfront design or waterfall methods
Some statically typed languages are also designed to enforce someone’s idea of a good development process. Specifically, they often require or encourage that you specify the whole interface to something in one place, and then go write the code. This can be annoying if one is writing code that evolves over time or trying out ideas. It sometimes means changing things in several different places in order to make one tweak. The worst form of this I’m aware of (though done mainly for pragmatic reasons rather than ideological ones) is C and C++ header files. Pascal has similar aims, and requires that all variables for a procedure or function be declared in one section at the top. Though few other languages enforce this separation in quite the same way or make it so hard to avoid, many do encourage it.
It is absolutely true that these language restrictions can get in the way of software development practices that are rapidly gaining acceptance, including agile methodologies. It’s also true that they have nothing to do with static typing. There is nothing in the core ideas of static type systems that has anything to do with separating interface from implementation, declaring all variables in advance, or any of these other organizational restrictions. They are sometimes carry-overs from times when it was considered normal for programmers to cater to the needs of their compilers. They are sometimes ideologically based decisions. They are not static types.
If one doesn’t want a language deciding how they should go about designing their code, it would be clearer to say so. Expressing this as a dislike for static typing confuses the issue.
This fallacy is often stated in different terms: “I like to do exploratory programming” is the popular phrase. The idea is that since everyone knows statically typed languages make you do everything up front, they aren’t as good for trying out some code and seeing what it’s like. Common tools for exploratory programming include the REPL (read-eval-print loop), which is basically an interpreter that accepts statements in the language a line at a time, evaluates them, and tells you the result. These tools are quite useful, and they exist for many languages, both statically and dynamically typed. They don’t exist (or at least are not widely used) for Java, C, or C++, which perpetuates the unfortunate myth that they only work in dynamically typed languages. There may be advantages for dynamic typing in exploratory programming (in fact, there certainly are some), but it’s up to someone to explain what they are, rather than just to imply the lack of appropriate tools or language organization.
Fallacy: Dynamically typed languages provide no way to find bugs
A common argument leveled at dynamically typed languages is that failures will occur for the customer, rather than the developer. The problem with this argument is that these failures very rarely occur in reality, so it’s not very convincing. Programs written in dynamically typed languages don’t have far higher defect rates than programs written in languages like C++ and Java.
One can debate the reasons for this, and there are good arguments to be had there. One reason is that the average skill level of programmers who know Ruby is higher than those who know Java, for example. One reason is that C++ and Java have relatively poor static type systems. Another reason, though, is testing. As mentioned in the aside above, the whole unit testing movement basically came out of dynamically typed languages. It has some definite disadvantages over the guarantees provided by static types, but it also has some advantages; static type systems can’t check nearly as many properties of code as testing can. Ignoring this fact when talking to someone who really knows Ruby will basically get you ignored in turn.
Fallacy: Static types imply longer code
This fallacy is closely associated with the one above about type declarations. Type declarations are the reason many people associate static types with a lot of code. However, there’s another side to this. Static types often allow one to write much more concise code!
This may seem like a surprising claim, but there’s a good reason. Types carry information, and that information can be used to resolve things later on and prevent programmers from needing to write duplicate code. This doesn’t show up often in simple examples, but a really excellent case is found in the Haskell standard library’s `Data.Map` module. This module implements a balanced binary search tree, and it contains a function whose type signature looks like this:

```haskell
lookup :: (Monad m, Ord k) => k -> Map k a -> m a
```

This is a magical function. It says that I can look something up in a `Map` and get back the result. Simple enough, but here’s the trick: what do I do if the result isn’t there? Common answers might include returning a special “nothing” value, or aborting the current computation and going to an error handler, or even terminating the whole program. The function above does any of the above! Here’s how I compare the result against a special nothing value:

```haskell
case (lookup bobBarker employees) of
    Nothing     -> hire bobBarker
    Just salary -> pay bobBarker salary
```

How does Haskell know that I want to choose the option of getting back `Nothing` when the value doesn’t exist, rather than raising some other kind of error? It’s because I wrote code afterward to compare the result against `Nothing`! If I had written code that didn’t immediately handle the problem but was called from somewhere that handled errors three levels up the stack, then `lookup` would have failed that way instead, and I’d be able to write seven or eight consecutive lookup statements and compute something with the results without having to check for `Nothing` all the time. This completely dodges the very serious “exception versus return value” debate in handling failures in many other languages. This debate has no answer. Return values are great if you want to check them now; exceptions are great if you want to handle them several levels up. This code simply goes along with whatever you write the code to do.
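A side note for readers trying this today: in current versions of the containers library, `Data.Map.lookup` returns `Maybe` directly rather than the generalized monadic type quoted in the article, but the chaining benefit survives. Here is a small sketch (the map contents and names are my own) where several consecutive lookups are written with no explicit `Nothing` checks; a failure anywhere makes the whole computation `Nothing`:

```haskell
import qualified Data.Map as Map

salaries :: Map.Map String Int
salaries = Map.fromList [("alice", 100), ("bob", 90), ("carol", 80)]

-- Three consecutive lookups with no explicit Nothing checks;
-- if any single lookup fails, the whole do-block is Nothing.
totalPay :: String -> String -> String -> Maybe Int
totalPay x y z = do
  a <- Map.lookup x salaries
  b <- Map.lookup y salaries
  c <- Map.lookup z salaries
  return (a + b + c)

main :: IO ()
main = do
  print (totalPay "alice" "bob" "carol")      -- Just 270
  print (totalPay "alice" "mallory" "carol")  -- Nothing
```

The `Maybe` monad’s bind operator does the `case` dispatch that would otherwise be written by hand at every step.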
The details of this example are specific to Haskell, but similar examples can be constructed in many statically typed languages. There is no evidence that code in ML or Haskell is any longer than equivalent code in Python or Ruby. This is a good thing to remember before stating, as if it were obviously true, that statically typed languages require more code. It’s not obvious, and I doubt if it’s true.
Benefits of Static Types
My experience is that the biggest problems in the static/dynamic typing debate occur in failing to understand the issues and potential of static types. The next two sections, then, are devoted to explaining this position in detail. This section works upward from the pragmatic perspective, while the next develops it into its full form.
There are a number of commonly cited advantages for static typing. I am going to list them in order from least to most significant. (This helps the general structure of working up to the important stuff.)
Performance is the gigantic red herring of all type system debates. The knowledge of the compiler in a statically typed language can be used in a number of ways, and improving performance is one of them. It’s one of the least important, though, and one of the least interesting.
For most computing environments, performance is the problem of two decades ago. Last decade’s problem was already different, and this decade’s problems are at least twenty years advanced beyond performance being the main driver of technology decisions. We have new problems, and performance is not the place to waste time.
(On the other hand, there are a few environments where performance still matters. Languages in use there are rarely dynamically typed, but I’m not interested enough in them to care much. If you do, maybe this is your corner of the type system debate.)
If, indeed, performance is irrelevant, what does one look to next? One answer is documentation. Documentation is an important aspect of software, and static typing can help.
Why? Because documentation isn’t just about comments. It’s about everything that helps people understand software. Static type systems build ideas that help explain a system and what it does. They capture information about the inputs and outputs of various functions and modules. This is exactly the set of information needed in documentation. Clearly, if all of this information is written in comments, there is a pretty good chance it will eventually become out of date. If this information is written in identifier names, it will be nearly impossible to fit it all in. It turns out that type information is a very nice place to keep this information.
That’s the boring view. As everyone knows, though, it’s better to have self-documenting code than code that needs a lot of comments (even if it has them!). Conveniently enough, most languages with interesting static type systems have type inference, which is directly analogous to self-documenting code. Information about the correct way to use a piece of code is extracted from the code itself (i.e., it’s self-documenting), but then verified and presented in a convenient format. It’s documentation that doesn’t need to be maintained or even written, but is available on demand even without reading the source code.
Tools and Analysis
Things get way more interesting than documentation, though. Documentation is writing for human beings, who are actually pretty good at understanding code anyway. It’s great that the static type system can help, but it doesn’t do anything fundamentally new.
Fundamentally new things happen when type systems help computer programs to understand code. Perhaps I need to explain myself here. After all, a wise man (Martin Fowler, IIRC) once said:

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

I don’t disagree with Martin Fowler, but we have different definitions of understand in mind. Getting a computer to follow code step by step is easy. Getting a computer to analyze it and answer more complex questions about it is a different thing entirely, and it is very hard.
We often want our development tools to understand code. This is a big deal. I’ll turn back to Martin Fowler, who points this out as well.
Ultimately, though, the justification for static typing has to come back to writing correct code. Correctness, of course, is just the program doing “what you want.”
This is a really tough problem; perhaps the toughest of all. The theory of computation has a result called Rice’s Theorem, which essentially says this: Given an arbitrary program written in a general purpose programming language, it is impossible to write a computer program that determines anything about the program’s output. If I’m teaching an intro to programming class and assign my students to write “hello world”, I can’t program a grader to determine if they did so or not. There will be some programs for which the answer is easy; if the program never makes any I/O calls, then the answer is no. If the program consists of a single print statement, it’s easy to check if the answer is yes. However, there will be some complicated programs for which my grader can never figure out the answer. (A minor but important technical detail: one can’t run the program and wait for it to finish, because the program might never finish!) This is true of any statement about programs, including some more interesting ones like “does this program ever finish?” or “does this program violate my security rules?”
Given that we can’t actually check the correctness of a program, there are two approaches that help us make approximations:
- Testing: establishes upper bounds on correctness
- Proof: establishes lower bounds on correctness
Of course, we care far more about lower bounds than upper bounds. The problem with proofs, though, is the same as the problem with documentation. Proving correctness is only somewhat insanely difficult when you have a static body of code to prove things about. When the code is being maintained by three programmers and changing seven times per day, maintaining the correctness proofs falls behind. Static typing here plays exactly the same role as it does with documentation. If (and this is a big if) you can get your proofs of correctness to follow a certain form that can be reproduced by machine, the computer itself can be the prover, and let you know if the change you just made breaks the proof of correctness. The “certain form” is called structural induction (over the syntax of the code), and the prover is called a type checker.
An important point here is that static typing does not preclude proving correctness in the traditional way, nor testing the program. It is a technique to handle those cases in which testing might be guaranteed to succeed so they don’t need testing; and similarly, to provide a basis from which the effort of manual proof can be saved for those truly challenging areas in which it is necessary.
Dynamic Typing Returns
Certainly dynamic typing has answers to this. Dynamically typed languages can sometimes perform rather well (see Dylan), sometimes have great tools (see Smalltalk), and I’m sure they occasionally have good documentation as well, though the hunt for an example is too much for me right now. These are not knock-down arguments for static typing, but they are worth being aware of.
The correctness case is particularly enlightening. Just as static types strengthened our proofs of correctness by making them easier and automatic, dynamic typing improves testing by making it easier and more effective. It simply makes the code fail more spectacularly. I find it amusing when novice programmers believe their main job is preventing programs from crashing. I imagine this spectacular failure argument wouldn’t be so appealing to such a programmer. More experienced programmers realize that correct code is great, code that crashes could use improvement, but incorrect code that doesn’t crash is a horrible nightmare.
It is through testing, then, that dynamically typed languages establish correctness. Recall that testing establishes only upper bounds on correctness. (Dijkstra said it best: “Program testing can be used to show the presence of bugs, but never to show their absence.”) The hope is that if one tries hard enough and still fails to show the presence of bugs, then their absence becomes more likely. If one can’t seem to prove any better upper bound, then perhaps the correctness really is 100%. Indeed, there is probably at least some correlation in that direction.
What is a Type?
This is as good a point as any to step back and ask the fundamental question: what is a type? I’ve already mentioned that I think there are two answers. One answer is for static types, and the other is for dynamic types. I am considering the question for static types.
It is dangerous to answer this question too quickly. It is dangerous because we risk excluding some things as types, and missing their “type” nature because we never look for it. Indeed, the definition of a type that I will eventually give is extremely broad.
Problems with Common Definitions
One common saying, quoted often in an attempt to reconcile static and dynamic typing, goes something like this: Statically typed languages assign types to variables, while dynamically typed languages assign types to values. Of course, this doesn’t actually define types, but it is already clearly and obviously wrong. One could fix it, to some extent, by saying “statically typed languages assign types to expressions, …” Even so, the implication that these types are fundamentally the same thing as the dynamic version is quite misleading.
What is a type, then? When a typical programmer is asked that question, they may have several answers. Perhaps a type is just a set of possible values. Perhaps it is a set of operations (a very structural-type-ish view, to be sure). There could be arguments in favor of each of these. One might make a list: integers, real numbers, dates, times, and strings, and so on. Ultimately, though, the problem is that these are all symptoms rather than definitions. Why is a type a set of values? It’s because one of the things we want to prove about our program is that it doesn’t calculate the square roots of a string. Why is a type a set of operations? It’s because one of the things we want to know is whether our program attempts to perform an impossible operation.
Let’s take a look at another thing we often want to know: does our web application stick data from the client into SQL queries without escaping special characters first? If this is what we want to know, then these become types. This article by Tom Moertel builds this on top of Haskell’s type system. So far, it looks like a valid definition of “type” is as follows: something we want to know.
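A minimal sketch of the idea (the types and names here are my own, not Tom Moertel’s actual code): wrap escaped text in a `newtype` whose only producer is the escaping function, so “was this string escaped?” becomes a question the type checker answers at compile time:

```haskell
-- A value of type EscapedSQL can only be produced by escapeSQL,
-- so unescaped client data can never reach the query builder.
newtype EscapedSQL = EscapedSQL String

-- Toy escaping rule: double any single quotes (illustration only;
-- real escaping is database-specific).
escapeSQL :: String -> EscapedSQL
escapeSQL = EscapedSQL . concatMap esc
  where esc '\'' = "''"
        esc c    = [c]

-- Accepts only escaped text; passing a raw String here is a
-- compile-time type error, not a runtime security bug.
buildQuery :: EscapedSQL -> String
buildQuery (EscapedSQL s) =
  "SELECT * FROM users WHERE name = '" ++ s ++ "'"

main :: IO ()
main = putStrLn (buildQuery (escapeSQL "O'Brien"))
```

The interesting part is what does not compile: `buildQuery userInput` is rejected before the program ever runs.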
A Type System
Clearly that’s not a satisfactory definition of a type. There are plenty of things we want to know that types can’t tell us. We want to know whether our program is correct, but I already said that types provide conservative lower bounds on correctness, and don’t eliminate the need for testing or manual proof. What makes a type a type, then? The other missing component is that a type is part of a type system.
Benjamin Pierce’s book Types and Programming Languages is far and away the best place to read up on the nitty-gritty details of static type systems, at least if you are academically inclined. I’ll quote his definition.
“A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.”

This is a complex definition, but the key ideas are as follows:
- syntactic method .. by classifying phrases: A type system is necessarily tied to the syntax of the language. It is a set of rules for working bottom up from small to large phrases of the language until you reach the result.
- proving the absence of certain program behaviors: This is the goal. There is no list of “certain” behaviors, though. The word just means that for any specific type system, there will be a list of things that it proves. What it proves is left wide open. (Later on in the text: “… each type system comes with a definition of the behaviors it aims to prevent.”)
- tractable: This just means that the type system finishes in a reasonable period of time. Without wanting to put words in anyone’s mouth, I think it’s safe to say most people would agree that it’s a mistake to include this in the definition of a type system. Some languages even have undecidable type systems. Nevertheless, it is certainly a common goal; one doesn’t expect the compiler to take two years to type-check a program, even if the program will run for two years.
The remainder of the definition is mainly unimportant. The “kinds of values they compute” is basically meaningless unless we know what kinds we might choose from, and the answer is any kind at all.
An example looks something like this. Given the expression 5 + 3, a type checker may look at 5 and infer that it’s an integer. It may look at 3 and infer that it’s an integer. It may then look at the + operator, and know that when + is applied to two integers, the result is an integer. Thus it has proven the absence of certain program behaviors (such as adding an integer to a string) by working up from the basic elements of program syntax.
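To make that bottom-up process concrete, here’s a minimal sketch of such a checker. This is an illustration only, in Python; the representation of expressions as nested tuples is my own invention, not any real type checker’s input format.

```python
# Expressions are modeled as nested tuples: ("+", left, right).
# Leaves are Python ints or strs, standing in for literals.

def check(expr):
    """Work bottom-up, returning the type of `expr` or raising TypeError."""
    if isinstance(expr, int):
        return "Integer"
    if isinstance(expr, str):
        return "String"
    op, left, right = expr
    if op == "+" and check(left) == "Integer" and check(right) == "Integer":
        return "Integer"
    raise TypeError("ill-typed expression")

# 5 + 3 checks as an Integer...
assert check(("+", 5, 3)) == "Integer"

# ...while adding an integer to a string is rejected before "running":
try:
    check(("+", 5, "three"))
    accepted = True
except TypeError:
    accepted = False
assert accepted is False
```

Note that the checker never computes 8; it only classifies the phrase, which is exactly the point.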
Examples of Unusual Type Systems
That was a pretty boring example, and one that plays right into a trap: thinking of “type” as meaning the same thing it does in a dynamic type system. Here are some more interesting problems being solved with static types.
- http://wiki.di.uminho.pt/twiki/pub/Personal/Xana/WebHome/report.pdf. Uses types to ensure that the correct kinds of data are retrieved from a relational database. Via the type system, the compiler ends up understanding how to work with concepts like functional dependencies and normal forms, and can statically prove levels of normalization.
- http://www.cs.bu.edu/~hwxi/academic/papers/pldi98.pdf. Uses an extension to ML’s type system to prove that arrays are never accessed out of bounds. This is an unusually hard problem to solve without making the languages that solve it unusable, but it’s a popular one to work on.
- http://www.cis.upenn.edu/~stevez/papers/LZ06a.pdf. This is great. This example uses Haskell’s type system to let someone define a security policy for a Haskell program, in Haskell, and then proves that the program properly implements that policy. If a programmer gets security wrong, the compiler will complain rather than opening up a potential security bug in the system.
- http://www.brics.dk/RS/01/16/BRICS-RS-01-16.pdf. Just in case you thought type systems only solved easy problems, this bit of Haskell gets the type system to prove two central theorems about the simply typed lambda calculus, a branch of computation theory!
The point of these examples is to demonstrate that type systems can solve all sorts of programming problems. For each of these type systems, concepts of types are created that represent the ideas needed to accomplish this particular task with the type system. Some problems solved by static type systems look nothing like the intuitive idea of a type. A buggy security check isn’t normally considered a type error, but only because not many people use languages with type systems that solve that problem.
To reiterate the point above, it’s important to understand how limiting it is to insist, as many people do, on applying the dynamic typing definition of a “type” to static typing as well. One would miss the chance to solve several of the real-world problems mentioned above.
The True Meaning of Type
So what is a type? The only true definition is this: a type is a label used by a type system to prove some property of the program’s behavior. If the type checker can assign types to the whole program, then it succeeds in its proof; otherwise it fails and points out why it failed. This is a definition, then, but it doesn’t tell us anything of fundamental importance. Some further exploration leads us to insight about the fundamental trade-offs involved in using a static type checker.
If you were looking at things the right way, your ears may have perked up a few sections back, when I said that Rice’s Theorem says we can’t determine anything about the output of a program. Static type systems prove properties of code, but it almost appears that Rice’s Theorem means we can’t prove anything of interest with a computer. If true, that would be an ironclad argument against static type systems. Of course, it’s not true. However, it is very nearly true. What Rice’s Theorem says is that we can’t determine anything. (Often the word “decide” is used; they mean the same thing here.) It doesn’t say we can’t prove anything. It’s an important distinction!
What this distinction means is that a static type system is a conservative estimate. If it accepts a program, then we know the program has the properties proven by that type checker. If it fails… then we don’t know anything. Possibly the program doesn’t have that property, or possibly the type checker just doesn’t know how to prove it. Furthermore, there is an ironclad mathematical proof that a type checker of any interest at all is always conservative. Building a type checker that doesn’t reject any correct programs isn’t just difficult; it’s impossible.
That, then, is the trade-off. We get assurance that the program is correct (in the properties checked by this type checker), but in turn we must reject some interesting programs. To continue the pattern, this is the diametric opposite of testing. With testing, we are assured that we’ll never fail a correct program. The trade-off is that for any program with an infinite number of possible inputs (in other words, any interesting program), a test suite may still accept programs that are not correct – even in just those properties that are tested.
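Both halves of that trade-off can be seen in miniature. The following sketch (Python; both example functions are invented for illustration) shows a correct program that a simple-minded checker would reject, and a broken program that a finite test suite accepts:

```python
# Side one: a perfectly correct program that a naive checker rejects.
# A checker insisting that `result` have exactly one type would refuse
# this, even though it never misbehaves at runtime.
def describe(n):
    result = "many" if n > 1 else n   # sometimes a str, sometimes an int
    return str(result)

assert describe(5) == "many"
assert describe(1) == "1"

# Side two: a broken program that a finite test suite accepts.
def buggy_abs(x):
    if x == -7:                 # the one input the tests never try
        return -7               # bug!
    return x if x >= 0 else -x

# The "test suite" passes...
for case in (0, 1, -1, 100, -100):
    assert buggy_abs(case) >= 0

# ...yet the very property being tested fails for an untested input:
assert buggy_abs(-7) < 0
```

The checker’s rejection of `describe` is the lower-bound failure mode; the test suite’s acceptance of `buggy_abs` is the upper-bound failure mode.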
Framing the Interesting Debate
That last paragraph summarizes the interesting part of the debate between static and dynamic typing. The battleground on which this is fought out is framed by eight questions, four for each side:
- For what interesting properties of programs can we build static type systems?
- How close can we bring those type systems to the unattainable ideal of never rejecting a correct program?
- How easy can it be made to program in a language with such a static type system?
- What is the cost associated with accidentally rejecting a correct computer program?
- For what interesting properties of programs can we build test suites via dynamic typing?
- How close can we bring those test suites to the unattainable ideal of never accepting a broken program?
- How easy can it be made to write and execute test suites for programs?
- What is the cost associated with accidentally accepting an incorrect computer program?
If you knew the answer to those eight questions, you could tell us all, once and for all, where and how we ought to use static and dynamic typing for our programming tasks.
There’s a pretty simple convention in place for handling requests for files, directories, and indexes on a web server. It works pretty well, and it is definitely worth understanding what happens and why, if you don’t already know.
This came up today in the context of a question about Snap, so I figured I should write about it. This topic isn’t particularly advanced, so it is likely to be nothing new if you already know HTTP fairly well. But I couldn’t find a reference to point to, so I’m writing it down.
Web servers have to be pretty flexible when it comes to handling requests for files. People might type any of the following URLs:
- http://example.com
- http://example.com/
- http://example.com/somedirectory
- http://example.com/somedirectory/
- http://example.com/somefile.html
- http://example.com/somefile.html/
So what should happen in response to each? And why?
This post covers the basics. Remember that we’re only talking about straightforward file serving here. Once you throw in custom code and server-side programming, things can change a lot. But a solid understanding of the basics is a stepping stone to understanding the things built on it… and all of HTTP is built on the metaphor of serving files from a web server. So here we go.
Case 1: http://example.com

This is the easy one, because it’s not up to the web server at all. If a user types the URL http://example.com, their web browser is required to send the request with a request URI of “/”, just as if they’d typed the trailing slash themselves. This is a matter of HTTP syntax; because request URIs are not quoted in HTTP, there’s no way to send an empty one.
Case 2: http://example.com/

In this case, as well, the request URI sent by the web browser will be “/”. So what is a web server going to do with that?
Most people, I think, know this answer. The server finds the root directory of content it expects to serve. (On many UNIX web servers, for example, this might be called /var/www, but server configuration is a whole different topic, and there are many reasons it might be something else.) Since the request was for the entire directory, it typically tries to find a file that represents an index of all the content in that directory.
Web servers can be configured to look for this content under many names, but a common one is “index.html”. A server will generally have a list of such index names, in order of preference: perhaps index.html, followed by index.htm, and so on. If one of these files exists, the server sends its contents as a response to the request.
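That lookup can be sketched in a few lines. This is just an illustration (Python); the `INDEX_NAMES` list here is a made-up example of what real servers expose as configuration.

```python
from pathlib import Path

# Order of preference; real servers make this list configurable.
INDEX_NAMES = ["index.html", "index.htm"]

def find_index(directory: Path):
    """Return the first index file present in `directory`, or None."""
    for name in INDEX_NAMES:
        candidate = directory / name
        if candidate.is_file():
            return candidate
    return None
```

If `find_index` returns None, the server falls back to the directory-listing-or-error behavior described below.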
Notice that typically, the client (that is, the web browser) never actually gets told that it is receiving a file named index.html. Instead, it just knows that it sent a request for the root directory (the request URI “/”) and got back some HTML. It does know that it’s HTML, because of the Content-type header in the response, but it doesn’t get sent the file name.
If no index.html or similar file exists, then the server has a choice about what to do next. Historically, web servers generated on the fly a list of all the files in the directory, formatted as HTML, and served that in response to the request for the directory. However, in more recent years, the obfuscation-based security movement has made that less common, so it’s more common to get back an error message letting you know that the server doesn’t want to tell you what’s in the directory. The error is often 403 (Forbidden), though it’s sometimes 404 (Not Found) as well.
Case 3: http://example.com/somedirectory

Here, the request is for “/somedirectory”, a directory inside of the top-level.
(Note that in practice, the decision of what to do should be made based on whether there’s really a directory and/or file with that name, not the presence or absence of an extension. I’ve included the extension above just to give a hint to the reader — you — that one request is for a directory while the other is for a file.)
Ideally, the server would like to find an index file of some sort to serve for this subdirectory, just as it did in the previous case. But we have a problem. If the server gives back some HTML in response to the request URI “/somedirectory”, the user’s web browser will think that’s a file! Remember, we don’t rely on extensions to decide what’s a file or a directory. Also, remember that the web server doesn’t ever tell the client the name of the file it’s actually sending; it just sends the content.
Now, maybe you think it doesn’t matter if the web browser thinks it’s asked for a file and is really getting a directory. But in fact it does, and the reason is relative URLs. Suppose you were writing the index.html file that belongs in somedirectory, and you wanted to refer to an image alongside the file. You’d probably drop that image inside of somedirectory as well. But if the web browser gets back this HTML and thinks that it requested a file called somedirectory instead, then it will not know to ask for “/somedirectory/image.png”. Instead, it will ask for “/image.png”, thinking the image is sitting in the directory alongside a file called “somedirectory”. This is not going to work.
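You can watch the browser’s resolution rules do exactly this with Python’s standard `urllib.parse.urljoin`, which implements the same RFC 3986 reference-resolution algorithm browsers use:

```python
from urllib.parse import urljoin

# Base WITHOUT a trailing slash: "somedirectory" is taken as a file
# name, so a relative reference resolves as its sibling:
assert urljoin("http://example.com/somedirectory", "image.png") \
    == "http://example.com/image.png"

# Base WITH a trailing slash: "somedirectory" is a directory, and the
# relative reference resolves inside it:
assert urljoin("http://example.com/somedirectory/", "image.png") \
    == "http://example.com/somedirectory/image.png"
```

One character of difference in the base URL, and the image request goes to a different place.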
For this reason, the web server does not immediately send back the index.html or similar file in somedirectory. Instead, it sends a redirect. This response (using response code 302) informs the browser to come back and ask again, but this time use “/somedirectory/” as its request URI. Note the slash at the end: that’s what tells the web browser that this is a directory, and not a file.
It’s now the browser’s job to re-request the content with the right URI; the server has sent its redirect, and is done.
Case 4: http://example.com/somedirectory/

In this case, the browser has sent a request for “/somedirectory/”. Perhaps it got here by following the redirect from the previous request, or perhaps the request URI was correct from the beginning. In either case the response is just like Case 2. The server will look for a file called something like “index.html”… but this time, it will look inside of somedirectory to find it. If it doesn’t find an appropriate index file, once again it may or may not try to generate one.
Case 5: http://example.com/somefile.html

Now the request is for “/somefile.html”, a file. This is the easy one. The server will find the file with that exact name, and send its contents back in the response. (Note that I’m assuming somefile.html is a file. It’s entirely possible to create a directory with that name, in which case you’d follow the instructions for case 3 instead, and send the redirect.)
Case 6: http://example.com/somefile.html/

Finally, suppose a user requests “/somefile.html/”. That is, the request is for a file, but there is a slash at the end of the path.
You might think we should just send the file; it’s pretty obvious what the user wanted, even if they’ve said it oddly. But there’s a problem with that. Just like in case 3, the web browser looks at that trailing slash and decides that somefile.html is a directory name. Then if it refers to images, style sheets, etc., the browser will try to request something like “/somefile.html/image.png”. That’s not going to accomplish anything useful. So it’s incorrect to serve a file with this URL.
We could send a redirect again, like we did for directories, but that’s less common in this case. We tolerate directories typed without a trailing slash because people do it a lot; on the other hand, hardly anyone habitually adds a trailing slash after a file name. So instead, we should just fail with a 404 (Not Found) response code.
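Putting all of the cases above together, the server-side decision might be sketched like this. This is only an illustration (Python); a real server also has to handle configuration, path normalization, and security checks such as rejecting “..” segments.

```python
from pathlib import Path

def resolve(request_uri: str, docroot: Path):
    """Decide how to answer a request, following the conventions above.

    Returns a pair: an action tag, and a path or redirect target."""
    target = docroot / request_uri.lstrip("/")
    if request_uri.endswith("/"):
        if target.is_dir():
            # Directory requested with its slash: go look for index.html.
            return ("serve-index", target)
        # Trailing slash on a file (or nothing at all): refuse.
        return ("404", None)
    if target.is_dir():
        # Directory requested without the slash: redirect to add it.
        return ("redirect", request_uri + "/")
    if target.is_file():
        # A plain file: send its contents.
        return ("serve-file", target)
    return ("404", None)
```

Note that the decision keys off what actually exists on disk, never off the presence of a file extension.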
Understanding the above conventions is important because when people design URLs for web applications, they should keep in mind how browsers expect pages to behave. Remember that the trailing slash decides how a web browser will resolve relative URLs. As such, it’s actually a significant part of the public interface of your application.
This gets gotten wrong quite often. Snap, for example (which by the way is my current favorite foundation for building web applications, so don’t think I have anything against it), got this wrong in its file serving code, and the conversation I mentioned above was with regard to making it work. Even more significantly, a large number of web frameworks provide request routing, but ignore the question of whether content should be served as a file or a directory. For dynamic content, either one works… but you do need to keep in mind which one you choose, and how your relative static resource paths resolve on that basis. Though you can work around this using all absolute paths or the HTML “base” tag, that throws away a good bit of how relative URIs are supposed to work.
So it’s good to understand these conventions, and understand how web browsers are interacting with your application.
In a conversation last night, a friend and I talked about the following thought experiment:
If you had a few million dollars and a few years to spend on hiring/recruiting and leading a strike force to make some big improvements in the infrastructure of your favorite programming language… what would you do?
Note that the changes have to be to infrastructure. We aren’t interested in language changes, which tend to get too much play already; but rather in the tools and fundamental libraries available to us. Here’s my list for Haskell.
Project #1: Fix Dependencies in GHC/Cabal
Managing dependencies is a HUGE issue in Haskell right now. Any substantial software project seems to require nearly as much time managing the various versions of different dependent libraries as writing code. And there’s one really big thing that can be done about it: don’t expose dependencies from a library unless they are really, truly needed.
Let’s look at an example. A gazillion different libraries use Parsec internally to parse various things from text. They use various different versions of Parsec. Even though there is no reason why two different versions of Parsec can’t be used in the same program, still Cabal’s dependency resolution will try very hard to avoid combining those two libraries together, just because of some inconsequential implementation details. Parsec isn’t the only such package, either: QuickCheck, for years, split the Haskell world because various different packages depended on different versions of QuickCheck for their internal testing. The situation with mtl and transformers is a little more complicated, since it’s possible in principle that libraries that depended on both actually needed the same versions; but poking around a bit reveals that for the most part, these libraries used mtl or transformers internally to build monads that could very well have been wrapped up in opaque newtype wrappers at the package boundary.
Basically, we have a lot of confusion between implementation dependencies, and exposed dependencies.
So this idea has two steps. Step #1: instead of just telling GHC about all of the packages your code depends on, you should be able to give it a list of hidden dependencies and a separate list of exposed dependencies. It should check that any type accessible via the exports of your package’s exposed modules comes from the list of exposed dependencies. Step #2: Cabal should get separate fields for exposed and hidden dependencies, and pass them along to GHC. And its constraint solver should be set up to never fail to install a package because of needing different versions of hidden dependencies.
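Purely as a sketch of what I mean — this is hypothetical syntax, not anything Cabal actually supports today — the split might look something like this in a .cabal file:

```
-- Hypothetical fields: today there is only build-depends.
library
  exposed-depends: base >= 4 && < 5,
                   containers
  hidden-depends:  parsec >= 3.1
                   -- used only internally; no Parsec type may appear
                   -- in this library's exported interface, so the
                   -- solver is free to mix Parsec versions across
                   -- packages.
```

The compiler check in Step #1 is what makes the `hidden-depends` promise trustworthy.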
I don’t know an easy way to measure this, but my strong suspicion is that this would save 80% of the time Haskell programmers currently spend on maintaining the dependencies of large Haskell projects. It would be a huge obstacle to real-world use of Haskell, removed in one fell swoop. It easily gets the #1 position on my list.
Project #2: Expand STM to External Systems
In my opinion, one of the most underestimated bits of potential in Haskell is software transactional memory. The STM library is, as Don Stewart so elegantly pointed out recently, done and working and quite usable. The problem is that it’s just about transactional memory.
I’d say that from my experience, a very dangerous source of software bugs comes from the fact that so much software is written from the perspective of someone standing on the outside, looking in to a transaction. Managing transactions properly can be very difficult, and many programmers just plain get it wrong. I shudder to think of the number of web applications we trust every day that probably have data loss bugs because of poor transaction handling. This is something that desperately needs a solution.
One solution is database stored procedures, and they are used liberally for this purpose. The ability to write code that sees the world from inside of the transaction is, I think, at the root of why it’s generally considered safer to write data-related code in stored procedures rather than in applications. But stored procedures are specific to a single data source. Most significant information stores provide some support for distributed transactions these days… but using them requires doing your data manipulation in application code, which means back outside the transaction. Very little work has been done on writing code at the application level that nevertheless lives inside of a (distributed) transaction. A big reason for that is that applications have always been effectful, and we’ve had no good way to take the side effects of the code in the application itself and manage even that central bit in a transaction-safe way.
Enter STM. Now we do have a nice, working, fully functional system for writing application level code that sees the world from inside of a transaction. STM was developed as a means of writing composable code that gets speedups from parallelism; I think that’s missing the bigger picture. Speedups from parallelism are nice, but what STM really gives us is a way to write composable code that’s organized as concurrent processes acting on data with safety offered by transactions. The next step is to expand those transactions. I want to write code that makes changes to both a database and my data structures in memory, and have that code run as a transaction, which either succeeds or fails as a whole.
There are challenges here, to be sure: STM’s crucial retry operation is unlikely to be supported by any kind of external transaction system, for example, and the transaction models of different external systems may be hard to bring together under a single interface (savepoints? nested transactions? isolation levels?). But even a simple lowest common denominator here would be a huge step forward.
I feel like Haskell has a huge opportunity here to be the first popular general-purpose language in which it’s possible to easily write and compose code that is really transaction-safe. It would be a true shame to miss the opportunity.
Project #3: Universal Serialization
This is cheating a little bit, because it’s almost asking for a language feature… but I can get away with it because it doesn’t actually require a change to the language: just exposing a somewhat magical new API from the compiler.
Basically, it’s possible to take a lot of types in Haskell, and turn them into some kind of serialized form from which they can be recovered later. This is what Data.Binary (from the binary package) and Data.Serialize (from the cereal package) both do. But it’s not possible for all types. First class functions, for example, cannot be serialized. It’s not just that no one has written the code yet; in Haskell, it simply can’t happen.
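The same limitation shows up in other languages’ serialization libraries, which may make the point more familiar. Python’s pickle, for example, records functions by name rather than by code, so an anonymous closure can’t round-trip:

```python
import pickle

# Plain data round-trips fine:
data = {"answer": 42, "items": [1, 2, 3]}
assert pickle.loads(pickle.dumps(data)) == data

# A function defined on the fly does not: pickle stores functions by
# name, and an anonymous closure has no stable name to look up later.
try:
    pickle.dumps(lambda x: x + 1)
    lambda_pickled = True
except Exception:
    lambda_pickled = False
assert lambda_pickled is False
```

In Haskell the situation is even more absolute: there is no library-level trick at all that can serialize an arbitrary function value.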
But in a magic library provided by a Haskell compiler, it definitely could happen.
You’d have to be willing to live with a few limitations: recompiling your code would most likely invalidate serialized first-class functions. The interpreter is an even harder challenge. Serialized values would need to embed checksums of various bits of the code, and I haven’t even begun to think through which checksums would be needed. But it should be possible. After all, a closure is just a function pointer (to a function which has some offset in the compiled binary code) together with an environment consisting of values of other types. Throw in some smart handling of cyclic data structures based on a lookup table, and you’ve got something.
Even a half-baked attempt to make higher-order programming with functions and closures compatible with the existing world of SQL databases and binary network protocols would be a huge benefit.
I’ve been working furiously this weekend on the mysnapsession package that I announced a few days ago. Version 0.3 is now released.
I’ve started an introduction to the package, with a getting started step by step process, links to online API documentation, etc.
A few highlights of this new release:
Client Side Sessions: I’ve provided a second implementation of sessions based entirely on cookies and Michael Snoyman’s clientsession package. This was in the cards from the beginning, but it’s not my favorite choice, so it was second in the list of implementations.
Module Reorganization: I moved some modules around, in acknowledgement that the dialogues programming model is not actually a Snap extension. Instead, it’s a library that builds on sessions.
Matching APIs with Ozataman’s Package: One of the bigger surprises was learning that ozataman is already working on a different sessions package for Snap. Because of substantial design differences, I will continue to develop my implementation anyway; but at least we can keep the exposed APIs similar. As such, you’ll notice that MonadSession looks very similar between the two. The differences are:
- Whereas ozataman defines the session type as a type alias at the top of the module, in my package it’s an associated type inside the MonadSession type class.
- Whereas ozataman defines setInSession, deleteFromSession, and getFromSession as members of MonadSession with default implementations, I define them as convenience functions following the MonadSession type class, and with somewhat more general types.
Aside from that, Snap.Extension.Session is basically the same between the two. The implementations differ considerably.
Just so everyone knows, I’m having trouble with my gmail account, which someone else seems to have told Google is theirs. Until it’s resolved, please email me with my first name @brindlewaye.com instead. Thanks.
Side Note: You can try the example application online at http://zelda.designacourse.com:8000/ if I happen to have it online. It’s a frivolous little game in which the web app tries to learn about animals. It’s all in a session with no persistent storage, though, so don’t spend too many hours teaching it about different animals.
I deliberately used a silly name, because this is an early attempt at adding sessions, and I don’t want to steal any important naming real estate that could be used later once some consensus builds about the right way to do this.
The motivation for releasing this package was the new extensions system in Snap 0.3. In a nutshell, extensions give Snap a nice common way to handle various added pieces of functionality that need their own internal state. Although the API is still experimental, it’s a very useful organizational tool.
The source code in the example package (linked above) is a good way to see what an application using mysnapsession might look like, but here’s the high-level overview.
Part 1: The MonadSession type class
Snap extensions all have a basic interface specified with a type class. For example, the built-in Heist extension comes with a type class called MonadHeist, and the example timer extension comes with MonadTimer. The fundamental type class for session support is called MonadSession. It’s a fairly simple type class, supporting two operations:
- getSessionObject: Retrieves an object representing this session
- putSessionObject: Replaces the object representing this session
You get to choose your own session object type, but note that this is a little different from the session API of every other programming language out there: there is one object representing the session. If you want name/value pairs, then your session object should probably be a Map from the Data.Map module.
Why didn’t I offer an interface that stores name/value pairs, a la every other web framework in existence? It would have been possible; in Haskell, that would just be a Map String Dynamic, using the Data.Dynamic module. The reason I didn’t do so is that I’m not convinced this is always the right way to go. I’ve done troubleshooting with a lot of other web applications in the past where strange behavior comes from leftover values in the session map, and I think it’s very possible to be more disciplined about the possible states of a session if you’re given the choice.
That’s not to say you can’t use a Map String Dynamic yourself: go ahead and do so. Write convenience functions for it. It’ll work fine. I just didn’t want to unnecessarily hard-wire the choice into the infrastructure.
Part 2: Memory-backed session implementation
The implementation I provide for sessions is memory-backed. This has both advantages and disadvantages.
- This is the simplest kind of sessions to implement: you don’t need a persistence layer.
- This is the only kind of sessions that can store values of arbitrary types. You can store first-class functions and such in the session, no problem.
- Because the session object exists in RAM on only one computer, it is fragile. If you restart the web application, it goes away. Unfortunately, it looks like this makes the development loader unusable; your sessions are wiped out on each request.
- If you need to load balance the application across multiple systems to scale it up, then you need some way to ensure that all requests from a session end up at the same node. This is called “sticky sessions”, and it’s supported, for example, in Apache’s mod_proxy. At some point, I should look more closely into how to set that up.
Aside from that, the memory-backed implementation works pretty much as you’d expect. It spawns a reaper thread that runs through the session list and discards old sessions after a user-configurable session timeout. Session keys are kept in cookies. (URL encoded session keys could be added as a feature pretty easily, but I haven’t done so.)
The MonadSession type class should be suitable for building sessions backed by databases or client-side cookies or other mechanisms as well. There would certainly be a good reason to do so. The memory-backed option was just a simple thing to do, which has some good use cases — including ones I’m hoping to use Snap for soon.
Part 3: Continuation-based programming
This is the bit of the API that’s demonstrated by the example application. It’s built on the previous two parts.
This part of the library has a long history. It originated as a post by Chris Eidhof to the HappStack mailing list. I cleaned up, simplified, and reorganized that code, and packaged it up as the happstack-dlg package. (I regret, in hindsight, using so bold a name; this is certainly not an official part of the HappStack project.) Among other changes, I removed Chris’s error handling stuff… a decision I stick by, and have continued through to the present day. Error handling can, after all, be built on top of the simpler core.
Now that I’m working with Snap, and with this new opportunity, I’ve ported the code and included it as the Snap.Extension.Dialogues module in this package. I did drop the scaffolding and formlets support that existed in the happstack-dlg module. The formlet stuff was dropped mainly because formlets seems to be getting deprecated, and digestive-functors-blaze is still on my future reading list. The “scaffolding”, on the other hand, was just too ugly for me. I’d like to redo it properly using Heist for a customizable look and feel in the future, but only after some thought.
That’s it. Enjoy!
Edit: The original article (below) was unclear that I’m mainly talking about rooting techniques, custom ROMs, and modifications. This isn’t about applications, which are certainly somewhat under control. Even in the ROM world, there’s some good stuff happening with, e.g., ROM Manager, and well-known brands like Cyanogen. But at the same time, most people who’ve rooted their phones or installed custom mods have been told at some point to go download something from mediafire.com or a link from xda-developers, and install it on their phones.
I have a strong suspicion that Google’s Android operating system is ripe for a huge catastrophe in the near future. This post is about where it will probably come from, why it will happen, and how we can all manage to avoid being a victim when it happens.
Android: The Open System
Despite all the false debates drummed up by people with financial incentives otherwise, everyone knows that Android is an open system. The source code to the operating system is publicly available, multiple independent teams maintain their own forks of that code, and a fair chunk of Android phone users are using custom ROMs. In the application space, Android apps tend to be free, are often written by individuals or small groups, and there are plenty of open source libraries and apps. It’s hard not to admit that’s an open software ecosystem in action.
Of course, that’s great. It means that a lot more options, and often a lot higher quality software, are available to Android users than if they had to rely entirely on a single organization to produce its masterpiece. It’s messy, too; certainly there is poor quality software released for Android, and there’s more sorting through the options than with a tightly controlled proprietary system; but that’s to be expected. Perhaps we can find better ways to sort through and find the best software out there, but that’s not the point of this article. Freedom is a good thing.
We have a problem, though.
A Tale of Two Communities
Back outside the smartphone/tablet space, in the 1990s long before such devices were around, there were two very different communities of people sharing code. On the one hand, we had the free software movement (later rebranded “open source” by Raymond). On the other hand, we had the Windows “freeware” community. The two groups both tended to share computer programs, mostly for free. But beyond that, they couldn’t have been more different.
The Free Software / Open Source Community:
- The running assumption is that everyone can read code.
- Everything is source code; indeed, sometimes pre-compiled versions are not available.
- Users take care to become acquainted with the technology involved.
- People take trust seriously. People know each other, and work together.
- Software is frequently downloaded from the official web site of that software project.
- Packaging and bundling are handled by well-known groups working transparently with documented processes.
The Windows Freeware Community:
- People who understand code are considered an anomaly.
- Compiled executables are shared by people who are paranoid about their source code.
- Users often download and install things they don’t understand.
- Anyone who can write code is worshiped, regardless of reputation or character.
- Software is often distributed by links in web forums, or file sharing web sites (with lots of popup ads).
- Random people repost software without mention of its source or author.
Predictably, the results were quite different. The open source community exploded, and still today produces high quality software and makes people’s lives better. By contrast, “freeware” was responsible for the widespread distribution of viruses, adware, and trojan horses, and likely single-handedly keeps the predatory “anti-virus” industry alive. Reasonable people never install “freeware” on their computers, and even people who want to repeat its mistakes know enough to avoid the word.
A Choice for Android
Which of these communities looks most like today’s Android world? It depends on where you look, but depressingly, a huge chunk of the Android community appears to be following the freeware path. Android is fairly new, but viruses targeting it have already been demonstrated, and as people rely more and more on smartphones for sensitive information like financial data, this threat is likely to grow.
What can we do? Mostly, pay attention. And for goodness sake, quit the worship of “devs”. Yes, some people are skilled at software development and make cool things. But they aren’t all good people. Indeed, when someone is rude and disrespectful, and changes your phone’s splash screen to a picture of someone peeing just for fun, the Android community is better off if that person stops writing software, or is ignored. Just because something is done for free doesn’t mean the community should settle for crap. We’ve sort of figured this out in terms of code quality, but we still need to learn the same lesson about a developer’s relationship with the rest of the community.
We should also start being distrustful of software that’s only available at some link from a web forum, or that was posted anonymously to a file sharing site. Someone with pride in their code can find a real official web site to host it from. It’s not as if there aren’t plenty of free options. Sure, there are good people who don’t do that only because the community norm points the other way; most software posted to XDA is not malware, and does what it claims to do. But it’s difficult to build trust when software is posted anonymously to mediafire.com and reached through a forum link that might disappear tomorrow.
We still have a choice, at this point, whether Android will be a vibrant new platform for open software, or the freeware blight of tomorrow. I hope we make the choice wisely.