January 17, 2011 / cdsmith

The Butterfly Effect in Cabal

This is an elaboration on a point from my previous article, and an ensuing reddit comment.

Twitter has a unique language all its own, and it would be nice to be able to understand and do something cool with all those twitter tags… you know, @cdsmithus, #haskell, etc.  The first step is to parse them from a block of text.  So I hypothetically might decide to write a very simple Haskell package called “twittertags” to do that.  Since I’d like my work to be extensible by others, I’ll export my work as parser combinators using parsec.  Then other people can use my code for the oh-so-difficult task of recognizing special punctuation as tags on Twitter.

Now, I’m well aware that while the current version of Parsec is 3.1, quite a few people are still using 2.1 because of performance concerns (or just lack of time to update their code).  Fortunately, if I import the compatibility module names, I can be careful and make my code work with any version of parsec you like.
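Just to make this hypothetical concrete, here is a rough sketch of what such a package might look like.  The module and function names are made up for illustration, but the trick in the import is real: parsec 3 ships Text.ParserCombinators.Parsec as a compatibility module, so code written against it builds with either major version.

module Text.TwitterTags (Tag(..), tag, mention, hashtag) where

-- This module exists in parsec 2.x, and parsec 3.x provides it as a
-- compatibility module, so the code below builds against either version.
import Text.ParserCombinators.Parsec

data Tag = Mention String | HashTag String
  deriving (Eq, Show)

-- Characters allowed inside a tag (a simplification, of course).
tagChar :: Parser Char
tagChar = alphaNum <|> char '_'

mention :: Parser Tag
mention = char '@' >> fmap Mention (many1 tagChar)

hashtag :: Parser Tag
hashtag = char '#' >> fmap HashTag (many1 tagChar)

tag :: Parser Tag
tag = mention <|> hashtag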

Of course, now that I’ve done the hard work of recognizing punctuation, people come along to use my library, building various Twitter and social media applications on top of this package of mine.  In particular, let’s consider two such packages called “twitclient” and “superblog”.  They both expose their own libraries, too, for extensibility.  They want to use my parser combinators in their own parsers, so they also depend on Parsec, but neither of these folks is going to test against two different Parsec versions!  The author of “twitclient” decides to go with parsec 2.1, because of a measurable performance hit with the newer version.  But the author of “superblog” is all for the latest versions, and depends on 3.1.  My package works with both, so there’s no problem… right?

Wrong.

The dependencies now look like this…

Figure: The Cabal Butterfly Effect

Cabal allows several different versions of the same package to be installed at once, so there’s no problem with having both parsec versions on the system.  But the question is how my own library should be built.  Should it be built against parsec 2.1?  Or against parsec 3.1?  The answer is both.  We need versions of twittertags built against both of the two dependencies, or else one of the two packages that depends on it will fail.  But Cabal currently can’t do this.

What it does is certainly… suboptimal.  When you build twitclient, it will recompile twittertags against parsec-2.1, which will break superblog.  If you then reinstall superblog to fix it, Cabal will recompile twittertags against parsec-3.1, and break twitclient… and so on, ad infinitum.  What’s worse, it will break these packages without even telling you that it is doing so!

That’s what I’m now dubbing the “Cabal Butterfly Effect”, and it’s one of the issues I brought up in the earlier post.

January 16, 2011 / cdsmith

Haskell’s Own DLL Hell

I touched on this issue in a more positive way in this recent entry.  But now I’m going to be more negative about it.  You see, here’s the thing: Significant Haskell software projects are struggling under the weight of the Haskell equivalent of “DLL hell”.

If you’re not familiar with the term, here’s a definition of “DLL hell”, a concept painfully familiar to many Windows system administrators:

“In computing, DLL hell is a colloquial term for the complications that arise when working with dynamic link libraries (DLLs) used with Microsoft Windows operating systems.”

The idea is that various applications on the computer share libraries.  These libraries have different versions, and different programs often need different versions.  The “hell” starts when some programs overwrite the libraries with other, incompatible, versions, or when one program somehow turns out to need both of two incompatible sets of libraries.

To be sure, Haskell is better equipped to handle the problem than historical Windows executables.  In old versions of Windows, programs often blindly copied the DLLs they needed into a system-wide location (Windows\System) without regard to any versioning at all.  However, in Haskell, we are dealing with the problem on a different order of magnitude.  Whereas a DLL on Windows is generally a pretty substantial project in its own right, many Haskell packages on Hackage consist of just a few lines of code (put in a positive light, they do one thing, and do it well!), and as a result, many other projects depend on dozens of different packages, either directly or indirectly.

This has consequences: my experience is that it’s currently near-impossible to embark on most nontrivial Haskell software projects without spending a substantial percentage of one’s time fighting dependency version issues.  This is especially true of “real world” sorts of projects: those that often involve pulling together a lot of different functionality, rather than solving specific, well-defined computational problems.

That’s where we are.  The question is what we do about it.  I’d like to propose some questions for discussion:

1. Is it a good idea to follow the Package Versioning Policy?  Should the PVP be modified?

This one is certainly going to be controversial.  For background, the PVP is a set of guidelines for things like specifying the dependencies of a Haskell package, and how to manage your own package’s version number.  The short version is that packages should bump at least the minor version number (the x in version 1.x) every time they make a potentially breaking change, such as whenever any entity is removed or renamed, or its type is changed.  Furthermore, the PVP suggests that dependencies should have upper bounds on their versions.  The goal here is that if you make a change that might break someone else’s package, you should create a new version, and their package will continue to build against the old version.
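For concreteness, a build-depends field written to follow the PVP looks roughly like this (the package names and version numbers are just illustrative):

build-depends: base       >= 4   && < 5,
               parsec     >= 3.1 && < 3.2,
               bytestring >= 0.9 && < 0.10

Each upper bound is a claim that the package has only been tested against that major version, and that anything newer might have removed or renamed something it uses.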

There are two possible effects of following the PVP by adding upper bounds on dependencies:

  1. Someone might try to install some package, and because of an upper bound, Cabal builds it against an older version of some library.  This causes the build to succeed, where it otherwise would have failed because you removed or renamed something.
  2. Someone might try to install some package, and because of an upper bound, Cabal fails to find the right combination of dependencies, and refuses to build it at all.

Just to be argumentative, I’ll mention that it’s pretty clear to me from personal experience that #2 happens a lot more often than #1.  Following the package versioning policy by specifying upper bounds is far more likely to prevent ‘cabal install’ from succeeding than to allow it to succeed.  Upper bounds, on balance, make it harder to get Haskell libraries installed successfully.  When (again, from personal experience) attempts to build any nontrivial Haskell application have a less than 30% chance of succeeding anyway, should we be all that worried about the theoretical chance that a build might fail?

That’s just one side of the story, though.  It’s true that an error due to failed dependencies makes it clearer what is going on than a random failure involving an unresolved symbol or type mismatch during a compile.  So this is an open question in my mind.

Perhaps the less contentious way to ask the question would be this: should Cabal be modified to give a warning instead of an error for upper bounds when they would prevent the package from building at all?  (And if so, perhaps there should also be a new “strong” upper bound, which indicates that someone actually knows the build fails.)

2. How close are we to the goal of getting GHC and Cabal to tell the difference between exported and non-exported dependencies?

It would be one thing if the problem here were actual incompatibilities in code.  If I’m using libraries that rely on different versions of the same package, and they both export things that rely on types or instances from that package, then I should expect the build to fail.  But a lot of the time (I’d guess a majority of the time!) that’s not the case.  One place this comes up a lot is with network‘s dependency on parsec.  But network doesn’t actually export any parsers; its use of parsec is an implementation detail.

Similar issues arise in many other situations.  Many library dependencies are a matter of implementation, not public interface.  Even where it’s not currently the case, if this is fixed, it will change the community’s best practice to include using a lot more newtype wrappers rather than re-exporting other packages’ types, or splitting packages if they provide substantial functionality that does not need the re-exports.
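Here’s a sketch (with made-up names) of the kind of library I mean: it uses parsec internally, but nothing in its public interface mentions a parsec type, so the parsec version it was built against is invisible to its users.

module Network.HostName (parseHostName) where

import Text.ParserCombinators.Parsec

-- No parsec type appears in the exported signature, so which version of
-- parsec this package was compiled against is an implementation detail.
parseHostName :: String -> Either String [String]
parseHostName = either (Left . show) Right . parse labels "hostname"
  where
    label  = many1 (alphaNum <|> char '-')
    labels = label `sepBy1` char '.'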

I mentioned earlier that Haskell deals with this problem on a whole different scale than other languages: part of the reason is this lack of distinction between implementation detail and exposed interfaces.

3. Can we stop cabal-install from breaking the package database?

A very special case of this problem happens in a particularly disturbing way.  It goes like this:

  • Package foo depends on bar-0.5 and baz-1.2.
  • Package flox depends on bar-0.5 and baz-1.1.
  • Package bar-0.5 depends on baz, but has no preference between versions 1.1 and 1.2.

The way this works now is as follows: When I cabal install foo, Cabal first builds baz-1.2, then bar-0.5, and then foo.  But if I later cabal install flox, then Cabal will build baz-1.1 (this is fine, since multiple versions of a package can exist at once), and then it will rebuild bar-0.5, linking against baz-1.1.  (These two builds do not coexist: because bar has the same version number, the new build overwrites the old one.)  At this point, foo is broken.  Running ghc-pkg check will report that it is broken because of a missing dependency.

I’m not sure what the right thing to do is here; I suppose that if bar-0.5 re-exports types from baz, then its “version” needs to include the version information from baz somewhere.  In any case, this is an extremely confusing, and extremely common, issue to run into, and it results in an inconsistent package database without so much as a warning.  Something really needs to be done.

4. What is the best way to deal with this in the interim?

Yackage is a great idea.  (Or any other way to maintain a local Hackage; I’m aware of discussion about whether it might be better to just use the new hackage server code eventually, and I don’t think the implementation particularly matters.)  Michael Snoyman’s goal was really more about maintaining collections of packages that he’s maintaining… but for a real-world software project that otherwise won’t build, it sounds like a great way to keep track of local modifications to other people’s packages.

Another idea that might work really well is to just be able to ask cabal-install to remember a modification to various packages’ build-depends fields persistently.  So instead of having to download, build, and manually install these packages just to change their package.cabal files, cabal-install would continue updating from Hackage, but reconcile your existing build-depends requests (“relax fizbuzz’s build-depends to build with foobar-0.5”) automatically.  Even better, make it easy to get a list of which of these local changes are still at odds with public packages, to report to the maintainer.

All of these are options for mitigating the problem.  But first, I think we need to realize that this is a serious problem.  I’m afraid there’s a bit of sampling bias here; I have to believe that these problems aren’t getting solved because many established Haskellers tend to work on projects with very narrowly tailored scope… and this is true because many people who want to work on more general (“real world” by my definition above) projects have often fled to languages that don’t make it a week-long job to get the dependencies for a project to all build at once.

January 9, 2011 / cdsmith

An Old Article I Wrote

I was asked by someone to put this old piece of writing of mine somewhere more permanent, so here it is.  A few comments off the top:

  • This isn’t new.  I wrote it years ago.
  • Yes, I’m a bit embarrassed at some of the wording, as most people are when they read content they wrote a long time ago.  There are things that I’d say differently if I wrote something like this today.  There is clumsy wording, and there are poor choices of examples.  I’m not taking this opportunity to revise the writing; this is exactly as I wrote it all those years ago.
  • If pressed for an answer, though, yes I still do believe in the central conclusions of the article.  Namely:
    • That “static typing” and “dynamic typing” are two concepts that are fundamentally unrelated to each other, and just happen to share a word.
    • That “static types” are, at their core, a tool for writing and maintaining computer-checked proofs about code.
    • That “dynamic types” are, at their core, there to make unit testing less tedious, and are a tool for finding bugs.
    • That the two are related in the way I outline: that one establishes lower bounds on correctness of code, while the other establishes upper bounds, and that questions about their real world use should come down to the possibility and effectiveness of addressing certain kinds of bugs by either computer-checked proof, or testing.

With that said, here is the original article:


What To Know Before Debating Type Systems

I would be willing to place a bet that most computer programmers have, on multiple occasions, expressed an opinion about the desirability of certain kinds of type systems in programming languages. Contrary to popular conception, that’s a great thing! Programmers who care about their tools are the same programmers who care about their work, so I hope the debate rages on.

There are a few common misconceptions, though, that confuse these discussions. This article runs through those I’ve encountered that obscure the most important parts of the debate. My goal is to build on a shared understanding of some of the basic issues, and help people get to the interesting parts more quickly.

Classifying Type Systems

Type systems are commonly classified by several words, of which the most common are “static,” “dynamic,” “strong,” and “weak.” In this section, I address the more common kinds of classification. Some are useful, and some are not.

Strong and Weak Typing

Probably the most common way type systems are classified is “strong” or “weak.” This is unfortunate, since these words have nearly no meaning at all. It is, to a limited extent, possible to compare two languages with very similar type systems, and designate one as having the stronger of those two systems. Beyond that, the words mean nothing at all.

Therefore: I give the following general definitions for strong and weak typing, at least when used as absolutes:

  • Strong typing: A type system that I like and feel comfortable with
  • Weak typing: A type system that worries me, or makes me feel uncomfortable

What about when the phrase is used in a more limited sense? Then strong typing, depending on the speaker or author, may mean anything on the spectrum from “static” to “sound,” both of which are defined below.

Static and Dynamic Types

This is very nearly the only common classification of type systems that has real meaning. As a matter of fact, its significance is frequently under-estimated. I realize that may sound ridiculous; but this theme will recur throughout this article. Dynamic and static type systems are two completely different things, whose goals happen to partially overlap.

A static type system is a mechanism by which a compiler examines source code and assigns labels (called “types”) to pieces of the syntax, and then uses them to infer something about the program’s behavior. A dynamic type system is a mechanism by which a compiler generates code to keep track of the sort of data (coincidentally, also called its “type”) used by the program. The use of the same word “type” in each of these two systems is, of course, not really entirely coincidental; yet it is best understood as having a sort of weak historical significance. Great confusion results from trying to find a world view in which “type” really means the same thing in both systems. It doesn’t. The better way to approach the issue is to recognize that:

  • Much of the time, programmers are trying to solve the same problem with static and dynamic types.
  • Nevertheless, static types are not limited to problems solved by dynamic types.
  • Nor are dynamic types limited to problems that can be solved with static types.
  • At their core, these two techniques are not the same thing at all.

Observing the second of these four simple facts is a popular pastime in some circles. Consider this set of presentation notes, with a rather complicated “the type system found my infinite loop” comment. From a theoretical perspective, preventing infinite loops is in a very deep sense the most basic possible thing you can do with static types! The simply-typed lambda calculus, on which all other type systems are based, proves that programs terminate in a finite amount of time. Indeed, the more interesting question is how to usefully extend the type system to be able to describe programs that don’t terminate! Finding infinite loops, though, is not in the class of things most people associate with “types,” so it’s surprising. It is, indeed, provably impossible with dynamic types (that’s called the halting problem; you’ve probably heard of it!). But it’s nothing special for static types. Why? Because they are an entirely different thing from dynamic types.

The dichotomy between static and dynamic types is somewhat misleading. Most languages, even when they claim to be dynamically typed, have some static typing features. As far as I’m aware, all languages have some dynamic typing features. However, most languages can be characterized as choosing one or the other. Why? Because of the first of the four facts listed above: many of the problems solved by these features overlap, so building in strong versions of both provides little benefit, and significant cost.

Other Distinctions

There are many other ways to classify type systems. These are less common, but here are some of the more interesting ones:

  • Sound types. A sound type system is one that provides some kind of guarantee. It is a well-defined concept relating to static type systems, and has proof techniques and all those bells and whistles. Many modern type systems are sound; but older languages like C often do not have sound type systems by design; their type systems are just designed to give warnings for common errors. The concept of a sound type system can be imperfectly generalized to dynamic type systems as well, but the exact definition there may vary with usage.
  • Explicit/Implicit Types. When these terms are used, they refer to the extent to which a compiler will reason about the static types of parts of a program. All programming languages have some form of reasoning about types. Some have more than others. ML and Haskell have implicit types, in that no (or very few, depending on the language and extensions in use) type declarations are needed. Java and Ada have very explicit types, and one is constantly declaring the types of things. All of the above have (relatively, compared to C and C++, for example) strong static type systems.
  • The Lambda Cube. Various distinctions between static type systems are summarized with an abstraction called the “lambda cube.” Its definition is beyond the scope of this article, but it basically looks at whether the system provides certain features: parametric types, dependent types, or type operators. Look here for more information.
  • Structural/Nominal Types. This distinction is generally applied to static types with subtyping. Structural typing means a type is assumed whenever it is possible to validly assume it. For example, a record with fields called x, y, and z might be automatically considered a subtype of one with fields x and y. With nominal typing, there would be no such assumed relationship unless it were declared somewhere.
  • Duck Typing. This is a word that’s become popular recently. It refers to the dynamic type analogue of structural typing. It means that rather than checking a tag to see whether a value has the correct general type to be used in some way, the runtime system merely checks that it supports all of the operations performed on it. Those operations may be implemented differently by different types.

This is but a small sample, but this section is too long already.

Fallacies About Static and Dynamic Types

Many programmers approach the question of whether they prefer static or dynamic types by comparing some languages they know that use both techniques. This is a reasonable approach to most questions of preference. The problem, in this case, is that most programmers have limited experience, and haven’t tried a lot of languages. For context, here, six or seven doesn’t count as “a lot.” On top of that, it requires more than a cursory glance to really see the benefit of these two very different styles of programming. Two interesting consequences of this are:

  • Many programmers have used very poor statically typed languages.
  • Many programmers have used dynamically typed languages very poorly.

This section, then, brings up some of the consequences of this limited experience: things many people assume about static or dynamic typing that just ain’t so.

Fallacy: Static types imply type declarations

The thing most obvious about the type systems of Java, C, C++, Pascal, and many other widely-used “industry” languages is not that they are statically typed, but that they are explicitly typed. In other words, they require lots of type declarations. (In the world of less explicitly typed languages, where these declarations are optional, they are often called “type annotations” instead. You may find me using that word.) This gets on a lot of people’s nerves, and programmers often turn away from statically typed languages for this reason.

This has nothing to do with static types. The first statically typed languages were explicitly typed by necessity. However, type inference algorithms – techniques for looking at source code with no type declarations at all, and deciding what the types of its variables are – have existed for many years now. The ML language, which uses it, is among the older languages around today. Haskell, which improves on it, is now about 15 years old. Even C# is now adopting the idea, which will raise a lot of eyebrows (and undoubtedly give rise to claims of its being “weakly typed” — see definition above). If one does not like type declarations, one is better off describing that accurately as not liking explicit types, rather than static types.

(This is not to say that type declarations are always bad; but in my experience, there are few situations in which I’ve wished to see them required. Type inference is generally a big win.)
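As a quick illustration (my own example, not from any particular library): the following definition contains no type declarations at all, yet it is fully statically typed; the signature in the comment is the one the compiler infers.

-- Inferred by the compiler: pairUp :: [a] -> [(a, a)]
pairUp (x:y:rest) = (x, y) : pairUp rest
pairUp _          = []

-- Misusing it is still rejected at compile time:
--   pairUp True    -- type error, caught statically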

Fallacy: Dynamically typed languages are weakly typed

The statement made at the beginning of this discussion was that many programmers have used dynamically typed languages poorly. In particular, a lot of programmers coming from C often treat dynamically typed languages in a manner similar to what made sense for C prior to ANSI function prototypes. Specifically, this means adding lots of comments, long variable names, and so forth to obsessively track the “type” information of variables and functions.

Doing this prevents a programmer from realizing the benefits of dynamic typing. It’s like buying a new car, but refusing to drive any faster than a bicycle. The car is horrible; you can’t get up the mountain trails, and it requires gasoline on top of everything else. Indeed, a car is a pretty lousy excuse for a bicycle! Similarly, dynamically typed languages are pretty lousy excuses for statically typed languages.

The trick is to compare dynamically typed languages when used in ways that fit in with their design and goals. Dynamically typed languages have all sorts of mechanisms to fail immediately and clearly if there is a runtime error, with diagnostics that show you exactly how it happened. If you program with the same level of paranoia appropriate to C – where a simple bug may cause a day of debugging – you will find that it’s tough, and you won’t be actually using your tools.

(As a side comment, and certainly a more controversial one, the converse is equally true; it doesn’t make sense to do the same kinds of exhaustive unit testing in Haskell as you’d do in Ruby or Smalltalk. It’s a waste of time. It’s interesting to note that the whole TDD movement comes from people who are working in dynamically typed languages… I’m not saying that unit testing is a bad idea with static types; only that it’s entirely appropriate to scale it back a little.)

Fallacy: Static types imply upfront design or waterfall methods

Some statically typed languages are also designed to enforce someone’s idea of a good development process. Specifically, they often require or encourage that you specify the whole interface to something in one place, and then go write the code. This can be annoying if one is writing code that evolves over time or trying out ideas. It sometimes means changing things in several different places in order to make one tweak. The worst form of this I’m aware of (though done mainly for pragmatic reasons rather than ideological ones) is C and C++ header files. Pascal has similar aims, and requires that all variables for a procedure or function be declared in one section at the top. Though few other languages enforce this separation in quite the same way or make it so hard to avoid, many do encourage it.

It is absolutely true that these language restrictions can get in the way of software development practices that are rapidly gaining acceptance, including agile methodologies. It’s also true that they have nothing to do with static typing. There is nothing in the core ideas of static type systems that has anything to do with separating interface from implementation, declaring all variables in advance, or any of these other organizational restrictions. They are sometimes carry-overs from times when it was considered normal for programmers to cater to the needs of their compilers. They are sometimes ideologically based decisions. They are not static types.

If one doesn’t want a language deciding how they should go about designing their code, it would be clearer to say so. Expressing this as a dislike for static typing confuses the issue.

This fallacy is often stated in different terms: “I like to do exploratory programming” is the popular phrase. The idea is that since everyone knows statically typed languages make you do everything up front, they aren’t as good for trying out some code and seeing what it’s like. Common tools for exploratory programming include the REPL (read-eval-print loop), which is basically an interpreter that accepts statements in the language a line at a time, evaluates them, and tells you the result. These tools are quite useful, and they exist for many languages, both statically and dynamically typed. They don’t exist (or at least are not widely used) for Java, C, or C++, which perpetuates the unfortunate myth that they only work in dynamically typed languages. There may be advantages for dynamic typing in exploratory programming (in fact, there certainly are some advantages, anyway), but it’s up to someone to explain what they are, rather than just to imply the lack of appropriate tools or language organization.

Fallacy: Dynamically typed languages provide no way to find bugs

A common argument leveled at dynamically typed languages is that failures will occur for the customer, rather than the developer. The problem with this argument is that it very rarely occurs in reality, so it’s not very convincing. Programs written in dynamically typed languages don’t have far higher defect rates than programs written in languages like C++ and Java.

One can debate the reasons for this, and there are good arguments to be had there. One reason is that the average skill level of programmers who know Ruby is higher than those who know Java, for example. One reason is that C++ and Java have relatively poor static type systems. Another reason, though, is testing. As mentioned in the aside above, the whole unit testing movement basically came out of dynamically typed languages. It has some definite disadvantages over the guarantees provided by static types, but it also has some advantages; static type systems can’t check nearly as many properties of code as testing can. Ignoring this fact when talking to someone who really knows Ruby will basically get you ignored in turn.

Fallacy: Static types imply longer code

This fallacy is closely associated with the one above about type declarations. Type declarations are the reason many people associate static types with a lot of code. However, there’s another side to this. Static types often allow one to write much more concise code!

This may seem like a surprising claim, but there’s a good reason. Types carry information, and that information can be used to resolve things later on and prevent programmers from needing to write duplicate code. This doesn’t show up often in simple examples, but a really excellent case is found in the Haskell standard library’s Data.Map module. This module implements a balanced binary search tree, and it contains a function whose type signature looks like this:

lookup :: (Monad m, Ord k) => k -> Map k a -> m a

This is a magical function. It says that I can look something up in a Map and get back the result. Simple enough, but here’s the trick: what do I do if the result isn’t there? Common answers might include returning a special “nothing” value, or aborting the current computation and going to an error handler, or even terminating the whole program. The function above does any of the above! Here’s how I compare the result against a special nothing value:

case (lookup bobBarker employees) of
    Nothing     -> hire bobBarker
    Just salary -> pay bobBarker salary

How does Haskell know that I want to choose the option of getting back Nothing when the value doesn’t exist, rather than raising some other kind of error? It’s because I wrote code afterward to compare the result against Nothing! If I had written code that didn’t immediately handle the problem but was called from somewhere that handled errors three levels up the stack, then lookup would have failed that way instead, and I’d be able to write seven or eight consecutive lookup statements and compute something with the results without having to check for Nothing all the time. This completely dodges the very serious “exception versus return value” debate in handling failures in many other languages. This debate has no answer. Return values are great if you want to check them now; exceptions are great if you want to handle them several levels up. This code simply goes along with whatever you write the code to do.
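Here is a small sketch of the kind of chaining I mean, using made-up names and relying on the Monad-polymorphic lookup signature quoted above:

import Data.Map (Map)
import qualified Data.Map as Map

totalPay :: Map String Integer -> Maybe Integer
totalPay salaries = do
    a <- Map.lookup "alice" salaries   -- no explicit Nothing checks here;
    b <- Map.lookup "bob"   salaries   -- a missing key fails the whole block
    return (a + b)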

The details of this example are specific to Haskell, but similar examples can be constructed in many statically typed languages. There is no evidence that code in ML or Haskell is any longer than equivalent code in Python or Ruby. This is a good thing to remember before stating, as if it were obviously true, that statically typed languages require more code. It’s not obvious, and I doubt if it’s true.

Benefits of Static Types

My experience is that the biggest problems in the static/dynamic typing debate occur in failing to understand the issues and potential of static types. The next two sections, then, are devoted to explaining this position in detail. This section works upward from the pragmatic perspective, while the next develops it into its full form.

There are a number of commonly cited advantages for static typing. I am going to list them in order from least to most significant. (This helps the general structure of working up to the important stuff.)

Performance

Performance is the gigantic red herring of all type system debates. The knowledge of the compiler in a statically typed language can be used in a number of ways, and improving performance is one of them. It’s one of the least important, though, and one of the least interesting.

For most computing environments, performance is the problem of two decades ago. Last decade’s problem was already different, and this decade’s problems are at least 20 years advanced beyond performance being the main driver of technology decisions. We have new problems, and performance is not the place to waste time.

(On the other hand, there are a few environments where performance still matters. Languages in use there are rarely dynamically typed, but I’m not interested enough in them to care much. If you do, maybe this is your corner of the type system debate.)

Documentation

If, indeed, performance is irrelevant, what does one look to next? One answer is documentation. Documentation is an important aspect of software, and static typing can help.

Why? Because documentation isn’t just about comments. It’s about everything that helps people understand software. Static type systems build ideas that help explain a system and what it does. They capture information about the inputs and outputs of various functions and modules. This is exactly the set of information needed in documentation. Clearly, if all of this information is written in comments, there is a pretty good chance it will eventually become out of date. If this information is written in identifier names, it will be nearly impossible to fit it all in. It turns out that type information is a very nice place to keep this information.

That’s the boring view. As everyone knows, though, it’s better to have self- documenting code than code that needs a lot of comments (even if it has them!). Conveniently enough, most languages with interesting static type systems have type inference, which is directly analogous to self-documenting code. Information about the correct way to use a piece of code is extracted from the code itself (i.e., it’s self-documenting), but then verified and presented in a convenient format. It’s documentation that doesn’t need to be maintained or even written, but is available on demand even without reading the source code.

Tools and Analysis

Things get way more interesting than documentation, though. Documentation is writing for human beings, who are actually pretty good at understanding code anyway. It’s great that the static type system can help, but it doesn’t do anything fundamentally new.

Fundamentally new things happen when type systems help computer programs to understand code. Perhaps I need to explain myself here. After all, a wise man (Martin Fowler, IIRC) once said:

“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

I don’t disagree with Martin Fowler, but we have different definitions of understand in mind. Getting a computer to follow code step by step is easy. Getting a computer to analyze it and answer more complex questions about it is a different thing entirely, and it is very hard.

We often want our development tools to understand code. This is a big deal. I’ll turn back to Martin Fowler, who points this out as well.

Correctness

Ultimately, though, the justification for static typing has to come back to writing correct code. Correctness, of course, is just the program doing “what you want.”

This is a really tough problem; perhaps the toughest of all. The theory of computation has a result called Rice’s Theorem, which essentially says this: Given an arbitrary program written in a general purpose programming language, it is impossible to write a computer program that determines anything about the program’s output. If I’m teaching an intro to programming class and assign my students to write “hello world”, I can’t program a grader to determine if they did so or not. There will be some programs for which the answer is easy; if the program never makes any I/O calls, then the answer is no. If the program consists of a single print statement, it’s easy to check if the answer is yes. However, there will be some complicated programs for which my grader can never figure out the answer. (A minor but important technical detail: one can’t run the program and wait for it to finish, because the program might never finish!) This is true of any statement about programs, including some more interesting ones like “does this program ever finish?” or “does this program violate my security rules?”

Given that we can’t actually check the correctness of a program, there are two approaches that help us make approximations:

  • Testing: establishes upper bounds on correctness
  • Proof: establishes lower bounds on correctness

Of course, we care far more about lower bounds than upper bounds. The problem with proofs, though, is the same as the problem with documentation. Proving correctness is only somewhat insanely difficult when you have a static body of code to prove things about. When the code is being maintained by three programmers and changing seven times per day, maintaining the correctness proofs falls behind. Static typing here plays exactly the same role as it does with documentation. If (and this is a big if) you can get your proofs of correctness to follow a certain form that can be reproduced by machine, the computer itself can be the prover, and let you know if the change you just made breaks the proof of correctness. The “certain form” is called structural induction (over the syntax of the code), and the prover is called a type checker.

An important point here is that static typing does not preclude proving correctness in the traditional way, nor testing the program. It is a technique to handle those cases in which testing might be guaranteed to succeed so they don’t need testing; and similarly, to provide a basis from which the effort of manual proof can be saved for those truly challenging areas in which it is necessary.

Dynamic Typing Returns

Certainly dynamic typing has answers to this. Dynamically typed languages can sometimes perform rather well (see Dylan), sometimes have great tools (see Smalltalk), and I’m sure they occasionally have good documentation as well, though the hunt for an example is too much for me right now. These are not knock-down arguments for static typing, but they are worth being aware of.

The correctness case is particularly enlightening. Just as static types strengthened our proofs of correctness by making them easier and automatic, dynamic typing improves testing by making it easier and more effective. It simply makes the code fail more spectacularly. I find it amusing when novice programmers believe their main job is preventing programs from crashing. I imagine this spectacular failure argument wouldn’t be so appealing to such a programmer. More experienced programmers realize that correct code is great, code that crashes could use improvement, but incorrect code that doesn’t crash is a horrible nightmare.

It is through testing, then, that dynamically typed languages establish correctness. Recall that testing establishes only upper bounds on correctness. (Dijkstra said it best: “Program testing can be used to show the presence of bugs, but never to show their absence.”) The hope is that if one tries hard enough and still fails to show the presence of bugs, then their absence becomes more likely. If one can’t seem to prove any better upper bound, then perhaps the correctness really is 100%. Indeed, there is probably some correlation in that direction.

What is a Type?

This is as good a point as any to step back and ask the fundamental question: what is a type? I’ve already mentioned that I think there are two answers. One answer is for static types, and the other is for dynamic types. I am considering the question for static types.

It is dangerous to answer this question too quickly. It is dangerous because we risk excluding some things as types, and missing their “type” nature because we never look for it. Indeed, the definition of a type that I will eventually give is extremely broad.

Problems with Common Definitions

One common saying, quoted often in an attempt to reconcile static and dynamic typing, goes something like this: Statically typed languages assign types to variables, while dynamically typed languages assign types to values. Of course, this doesn’t actually define types, but it is already clearly and obviously wrong. One could fix it, to some extent, by saying “statically typed languages assign types to expressions, …” Even so, the implication that these types are fundamentally the same thing as the dynamic version is quite misleading.

What is a type, then? When a typical programmer is asked that question, they may have several answers. Perhaps a type is just a set of possible values. Perhaps it is a set of operations (a very structural-type-ish view, to be sure). There could be arguments in favor of each of these. One might make a list: integers, real numbers, dates, times, and strings, and so on. Ultimately, though, the problem is that these are all symptoms rather than definitions. Why is a type a set of values? It’s because one of the things we want to prove about our program is that it doesn’t calculate the square roots of a string. Why is a type a set of operations? It’s because one of the things we want to know is whether our program attempts to perform an impossible operation.

Let’s take a look at another thing we often want to know: does our web application stick data from the client into SQL queries without escaping special characters first? If this is what we want to know, then these become types. This article by Tom Moertel builds this on top of Haskell’s type system. So far, it looks like a valid definition of “type” is as follows: something we want to know.
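Here is a tiny sketch of the idea (mine, not Moertel’s actual construction): give raw user input and escaped SQL text different types, and forgetting to escape becomes a compile-time type error rather than an injection bug.

newtype UserInput  = UserInput String
newtype EscapedSQL = EscapedSQL String

-- Deliberately naive escaping; the point is the types, not the escaping.
escape :: UserInput -> EscapedSQL
escape (UserInput s) = EscapedSQL (concatMap quote s)
  where
    quote '\'' = "''"
    quote c    = [c]

-- The only way to splice user input into a query is to go through escape.
queryByName :: UserInput -> String
queryByName input =
    let EscapedSQL name = escape input
    in  "SELECT * FROM users WHERE name = '" ++ name ++ "'"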

A Type System

Clearly that’s not a satisfactory definition of a type. There are plenty of things we want to know that types can’t tell us. We want to know whether our program is correct, but I already said that types provide conservative lower bounds on correctness, and don’t eliminate the need for testing or manual proof. What makes a type a type, then? The other missing component is that a type is part of a type system.

Benjamin Pierce’s book Types and Programming Languages is far and away the best place to read up on the nitty-gritty details of static type systems, at least if you are academically inclined. I’ll quote his definition.

A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute.

This is a complex definition, but the key ideas are as follows:

  • syntactic method .. by classifying phrases: A type system is necessarily tied to the syntax of the language. It is a set of rules for working bottom up from small to large phrases of the language until you reach the result.
  • proving the absence of certain program behaviors: This is the goal. There is no list of “certain” behaviors, though. The word just means that for any specific type system, there will be a list of things that it proves. What it proves is left wide open. (Later on in the text: “… each type system comes with a definition of the behaviors it aims to prevent.”)
  • tractable: This just means that the type system finishes in a reasonable period of time. Without wanting to put words in anyone’s mouth, I think it’s safe to say most people would agree that it’s a mistake to include this in the definition of a type system. Some languages even have undecidable type systems. Nevertheless, it is certainly a common goal; one doesn’t expect the compiler to take two years to type-check a program, even if the program will run for two years.

The remainder of the definition is mainly unimportant. The “kinds of values they compute” is basically meaningless unless we know what kinds we might choose from, and the answer is any kind at all.

An example looks something like this. Given the expression 5 + 3, a type checker may look at 5 and infer that it’s an integer. It may look at 3 and infer it’s an integer. It may then look at the + operator, and know that when + is applied to two integers, the result is an integer. Thus it’s proven the absence of program behaviors (such as adding an integer to a string) by working up from the basic elements of program syntax.

Examples of Unusual Type Systems

That was a pretty boring example, and one that plays right into a trap: thinking of “type” as meaning the same thing it does in a dynamic type system. Here are some more interesting problems being solved with static types.

  • http://wiki.di.uminho.pt/twiki/pub/Personal/Xana/WebHome/report.pdf. Uses types to ensure that the correct kinds of data are gotten out of a relational database. Via the type system, the compiler ends up understanding how to work with concepts like functional dependencies and normal forms, and can statically prove levels of normalization.
  • http://www.cs.bu.edu/~hwxi/academic/papers/pldi98.pdf. Uses an extension to ML’s type system to prove that arrays are never accessed out of bounds. This is an unusually hard problem to solve without making the languages that solve it unusable, but it’s a popular one to work on.
  • http://www.cis.upenn.edu/~stevez/papers/LZ06a.pdf. This is great. This example uses Haskell’s type system to let someone define a security policy for a Haskell program, in Haskell, and then proves that the program properly implements that security. If a programmer gets security wrong, the compiler will complain rather than opening up a potential security bug in the system.
  • http://www.brics.dk/RS/01/16/BRICS-RS-01-16.pdf. Just in case you thought type systems only solved easy problems, this bit of Haskell gets the type system to prove two central theorems about the simply typed lambda calculus, a branch of computation theory!

The point of these examples is to point out that type systems can solve all sorts of programming problems. For each of these type systems, concepts of types are created that represent the ideas needed to accomplish this particular task with the type system. Some problems solved by static type systems look nothing like the intuitive idea of a type. A buggy security check isn’t normally considered a type error, but only because not many people use languages with type systems that solve that problem.

To reiterate the point above, it’s important to understand how limiting it is to insist, as many people do, that the dynamic typing definition of a “type” is applied to static typing as well. One would miss the chance to solve several real-world problems mentioned above.

The True Meaning of Type

So what is a type? The only true definition is this: a type is a label used by a type system to prove some property of the program’s behavior. If the type checker can assign types to the whole program, then it succeeds in its proof; otherwise it fails and points out why it failed. This is a definition, then, but it doesn’t tell us anything of fundamental importance. Some further exploration leads us to insight about the fundamental trade-offs involved in using a static type checker.

If you were looking at things the right way, your ears may have perked up a few sections back, when I said that Rice’s Theorem says we can’t determine anything about the output of a program. Static type systems prove properties of code, but it almost appears that Rice’s Theorem means we can’t prove anything of interest with a computer. If true, that would be an ironclad argument against static type systems. Of course, it’s not true. However, it is very nearly true. What Rice’s Theorem says is that we can’t determine anything. (Often the word “decide” is used; they mean the same thing here.) It didn’t say we can’t prove anything. It’s an important distinction!

What this distinction means is that a static type system is a conservative estimate. If it accepts a program, then we know the program has the properties proven by that type checker. If it fails… then we don’t know anything. Possibly the program doesn’t have that property, or possibly the type checker just doesn’t know how to prove it. Furthermore, there is an ironclad mathematical proof that a type checker of any interest at all is always conservative. Building a type checker that doesn’t reject any correct programs isn’t just difficult; it’s impossible.

That, then, is the trade-off. We get assurance that the program is correct (in the properties checked by this type checker), but in turn we must reject some interesting programs. To continue the pattern, this is the diametric opposite of testing. With testing, we are assured that we’ll never fail a correct program. The trade-off is that for any program with an infinite number of possible inputs (in other words, any interesting program), a test suite may still accept programs that are not correct – even in just those properties that are tested.

Framing the Interesting Debate

That last paragraph summarizes the interesting part of the debate between static and dynamic typing. The battleground on which this is fought out is framed by eight questions, four for each side:

  1. For what interesting properties of programs can we build static type systems?
  2. How close can we bring those type systems to the unattainable ideal of never rejecting a correct program?
  3. How easy can it be made to program in a language with such a static type system?
  4. What is the cost associated with accidentally rejecting a correct computer program?
  5. For what interesting properties of programs can we build test suites via dynamic typing?
  6. How close can we bring those test suites to the unattainable ideal of never accepting a broken program?
  7. How easy can it be made to write and execute test suites for programs?
  8. What is the cost associated with accidentally accepting an incorrect computer program?

If you knew the answer to those eight questions, you could tell us all, once and for all, where and how we ought to use static and dynamic typing for our programming tasks.

January 9, 2011 / cdsmith

Request URI Handling in HTTP: A Primer

There’s a pretty simple convention in place for handling requests for files, directories, and indexes on a web server.  It works pretty well, and it is definitely worth understanding what happens and why, if you don’t already.

This came up today in the context of a question about Snap, so I figured I should write about it.  This topic isn’t particularly advanced, so it is likely to be nothing new if you already know HTTP fairly well.  But I couldn’t find a reference to point to, so I’m writing it down.

Web servers have to be pretty flexible when it comes to handling requests for files.  People might type any of the following URLs:

  1. http://example.com
  2. http://example.com/
  3. http://example.com/somedirectory
  4. http://example.com/somedirectory/
  5. http://example.com/somefile.html
  6. http://example.com/somefile.html/

So what should happen in response to each?  And why?

This post covers the basics.  Remember that we’re only talking about basic straight-forward file serving here.  Once you throw in custom code and server-side programming, things can change a lot.  But a solid understanding of the basics is a stepping stone to understanding the things built on it… and all of HTTP is built on the metaphor of serving files from a web server.  So here we go.

Case 1:

This is the easy one, because it’s not up to the web server at all.  If a user types the URL http://example.com, their web browser is required to send the request with a request URI of “/”, exactly as if they had typed the trailing slash themselves.  This is a matter of HTTP syntax; because request URIs are not quoted in HTTP, there’s no way to send an empty one.
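In other words, on the wire, a request for http://example.com and a request for http://example.com/ look exactly the same:

GET / HTTP/1.1
Host: example.com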

Case 2:

In this case, as well, the request URI sent by the web browser will be “/”.  So what is a web server going to do with that?

Most people, I think, know this answer.  The server finds the root directory of content it expects to serve.  (On many UNIX web servers, for example, this might be called /var/www, but server configuration is a whole different topic, and there are many reasons it might be something else.)  Since the request was for the entire directory, it typically tries to find a file that represents an index of all the content in that directory.

Web servers can be configured to look for this content under many names, but a common one is “index.html”.  A server will generally have a list of such index names, in order of preference: perhaps index.html, followed by index.htm, and so on.  If one of these files exists, the server sends its contents as a response to the request.

Notice that typically, the client (that is, the web browser) never actually gets told that it is receiving a file named index.html.  Instead, it just knows that it sent a request for the root directory (the request URI “/”) and got back some HTML.  It does know that it’s HTML, because of the Content-type header in the response, but it doesn’t get sent the file name.

If no index.html or similar file exists, then the server has a choice about what to do next.  Historically, web servers generated on the fly a list of all the files in the directory, formatted as HTML, and served that in response to the request for the directory.  However, in more recent years, the obfuscation-based security movement has made that less common, so it’s more common to get back an error message letting you know that the server doesn’t want to tell you what’s in the directory.  The error is often 403 (Forbidden), though it’s sometimes 404 (Not Found) as well.

Case 3:

Here, the request is for “/somedirectory”, a directory inside of the top-level.

(Note that in practice, the decision of what to do should be made based on whether there’s really a directory and/or file with that name, not the presence or absence of an extension.  I’ve included the extension above just to give a hint to the reader — you — that one request is for a directory while the other is for a file.)

Ideally, the server would like to find an index file of some sort to serve for this subdirectory, just as it did in the previous case.  But we have a problem.  If the server gives back some HTML in response to the request URI “/somedirectory”, the user’s web browser will think that’s a file!  Remember, we don’t rely on extensions to decide what’s a file or a directory.  Also, remember that the web server doesn’t ever tell the client the name of the file it’s actually sending; it just sends the content.

Now, maybe you think it doesn’t matter if the web browser thinks it’s asked for a file and is really getting a directory.  But in fact it does, and the reason is relative URLs.  Suppose you were writing the index.html file that belongs in somedirectory, and you wanted to refer to an image alongside the file.  You’d probably drop that image inside of somedirectory as well.  But if the web browser gets back this HTML and thinks that it requested a file called somedirectory instead, then it will not know to ask for “/somedirectory/image.png”.  Instead, it will ask for “/image.png”, thinking the image is sitting in the directory alongside a file called “somedirectory”.  This is not going to work.

For this reason, the web server does not immediately send back the index.html or similar file in somedirectory.  Instead, it sends a redirect.  This response (using response code 302) informs the browser to come back and ask again, but this time use “/somedirectory/” as its request URI.  Note the slash at the end: that’s what tells the web browser that this is a directory, and not a file.
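Trimmed to the essentials, the exchange looks something like this:

GET /somedirectory HTTP/1.1
Host: example.com

HTTP/1.1 302 Found
Location: http://example.com/somedirectory/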

It’s now the browser’s job to come back and re-request the content with the correct URI; the server has sent its redirect, and is done.

Case 4:

In this case, the browser has sent a request for “/somedirectory/”.  Perhaps it got here by following the redirect from the previous request, or perhaps the request URI was correct from the beginning.  In either case the response is just like Case 2.  The server will look for a file called something like “index.html”… but this time, it will look inside of somedirectory to find it.  If it doesn’t find an appropriate index file, once again it may or may not try to generate one.

Case 5:

Now the request is for “/somefile.html”, a file.  This is the easy one.  The server will find the file with that exact name, and send its contents back in the response.  (Note that I’m assuming somefile.html is a file.  It’s entirely possible to create a directory with that name, in which case you’d follow the instructions for case 3 instead, and send the redirect.)

Case 6:

Finally, suppose a user requests “/somefile.html/”.  That is, the request is for a file, but there is a slash at the end of the path.

You might think we should just send the file; it’s pretty obvious what the user wanted, even if they’ve said it oddly.  But there’s a problem with that.  Just like in case 3, the web browser looks at that trailing slash and decides that somefile.html is a directory name.  Then if it refers to images, style sheets, etc., the browser will try to request something like “/somefile.html/image.png”.  That’s not going to accomplish anything useful.  So it’s incorrect to serve a file with this URL.

We could send a redirect again, like we did for directories, but that’s less common in this case.  Directories get the redirect treatment because people very often type them without the trailing slash.  Hardly anyone, on the other hand, habitually adds a trailing slash after a file name.  So instead, we should just fail with a 404 (Not Found) response code.
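
Putting all six cases together, the decision procedure looks roughly like the following sketch.  This is not Snap’s actual code (the names route and Response are made up), and it ignores directory listing generation, URL escaping, and “..” handling.

    import System.Directory (doesDirectoryExist, doesFileExist)
    import Data.List (isSuffixOf)

    data Response = ServeFile FilePath   -- 200: send the file's contents
                  | Redirect String      -- 302: ask again with the new request URI
                  | NotFound             -- 404 (or 403, if you refuse to list)

    route :: FilePath -> String -> IO Response
    route docRoot uri = do
        let path     = docRoot ++ uri
            hasSlash = "/" `isSuffixOf` uri
        isDir  <- doesDirectoryExist path
        isFile <- doesFileExist path
        case () of
          _ | isDir && not hasSlash -> return (Redirect (uri ++ "/"))   -- case 3
            | isDir                 -> do                               -- cases 1, 2, 4
                hasIndex <- doesFileExist (path ++ "index.html")
                return (if hasIndex then ServeFile (path ++ "index.html")
                                    else NotFound)
            | isFile && hasSlash    -> return NotFound                  -- case 6
            | isFile                -> return (ServeFile path)          -- case 5
            | otherwise             -> return NotFound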

So What?

Understanding the above conventions is important because when people design URLs for web applications, they should keep in mind how browsers expect pages to behave.  Remember that the trailing slash decides how a web browser will resolve relative URLs.  As such, it’s actually a significant part of the public interface of your application.

This gets done wrong quite often.  Snap, for example (which, by the way, is my current favorite foundation for building web applications, so don’t think I have anything against it), got this wrong in its file serving code, and the conversation I mentioned above was about making it work.  Even more significantly, a large number of web frameworks provide request routing but ignore the question of whether content should be served as a file or as a directory.  For dynamic content, either one works… but you do need to keep in mind which one you chose, because it determines how relative URLs to your static resources will be resolved.  You can work around this with absolute paths everywhere or the HTML “base” tag, but that throws away a good bit of how relative URIs are supposed to work.

So it’s good to understand these conventions, and understand how web browsers are interacting with your application.

January 3, 2011 / cdsmith

Big Ideas for Haskell

In a conversation last night, a friend and I talked about the following thought experiment:

If you had a few million dollars and a few years to spend on hiring/recruiting and leading a strike force to make some big improvements in the infrastructure of your favorite programming language… what would you do?

Note that the changes have to be to infrastructure.  We aren’t interested in language changes, which tend to get too much play already; but rather in the tools and fundamental libraries available to us.  Here’s my list for Haskell.

Project #1: Fix Dependencies in GHC/Cabal

Managing dependencies is a HUGE issue in Haskell right now.  Any substantial software project seems to require nearly as much time managing the various versions of different dependent libraries as writing code.  And there’s one really big thing that can be done about it: don’t expose dependencies from a library unless they are really, truly needed.

Let’s look at an example.  A gazillion different libraries use Parsec internally to parse various things from text.  They use various different versions of Parsec.  Even though there is no reason why two different versions of Parsec can’t be used in the same program, still Cabal’s dependency resolution will try very hard to avoid combining those two libraries together, just because of some inconsequential implementation details.  Parsec isn’t the only such package, either: QuickCheck, for years, split the Haskell world because various different packages depended on different versions of QuickCheck for their internal testing.  The situation with mtl and transformers is a little more complicated, since it’s possible in principle that libraries that depended on both actually needed the same versions; but poking around a bit reveals that for the most part, these libraries used mtl or transformers internally to build monads that could very well have been wrapped up in opaque newtype wrappers at the package boundary.
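
To see what I mean by wrapping things up at the package boundary, here’s a sketch of the newtype trick; the module and every name in it are invented for illustration.  The library uses mtl’s StateT internally, but nothing in its exported interface mentions an mtl type, so the mtl version it builds against could be treated as a hidden dependency.

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    module Tagger (Tagger, runTagger, emitTag) where

    import Control.Monad.State (StateT, evalStateT, modify)

    -- The constructor is not exported, so StateT never leaks out of the package.
    newtype Tagger a = Tagger (StateT [String] IO a)
      deriving (Functor, Applicative, Monad)

    runTagger :: Tagger a -> IO a
    runTagger (Tagger m) = evalStateT m []

    emitTag :: String -> Tagger ()
    emitTag t = Tagger (modify (t:))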

Basically, we have a lot of confusion between implementation dependencies and exposed dependencies.

So this idea has two steps.  Step #1: instead of just telling GHC about all of the packages your code depends on, you should be able to give it a list of hidden dependencies and a separate list of exposed dependencies.  It should check that any type accessible via the exports of your package’s exposed modules comes from a package in the list of exposed dependencies.  Step #2: Cabal should get separate fields for exposed and hidden dependencies, and pass them along to GHC.  And its constraint solver should be set up to never fail to install a package because of needing different versions of hidden dependencies.
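
In a .cabal file, the split might look something like this.  To be clear, the exposed-depends and hidden-depends fields are hypothetical; the only field Cabal actually understands today is build-depends, and the module name is made up too.

    library
      exposed-modules:  Text.TagParser
      -- hypothetical: types from these packages may appear in the exposed API
      exposed-depends:  base >= 4 && < 5
      -- hypothetical: used internally only, so multiple versions are harmless
      hidden-depends:   parsec >= 2.1 && < 3.2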

I don’t know an easy way to measure this, but my strong suspicion is that this would save 80% of the time Haskell programmers currently spend on maintaining the dependencies of large Haskell projects.  It would be a huge obstacle to real-world use of Haskell, removed in one fell swoop.  It easily gets the #1 position on my list.

Project #2: Expand STM to External Systems

In my opinion, one of the most underestimated bits of potential in Haskell is software transactional memory.  The STM library is, as Don Stewart so elegantly pointed out recently, done and working and quite usable.  The problem is that it’s just about transactional memory.

In my experience, a very dangerous source of software bugs is that so much software is written from the perspective of someone standing on the outside, looking in at a transaction.  Managing transactions properly can be very difficult, and many programmers just plain get it wrong.  I shudder to think of the number of web applications we trust every day that probably have data loss bugs because of poor transaction handling.  This is something that desperately needs a solution.

One solution is database stored procedures, and they are used liberally for this purpose.  The ability to write code that sees the world from inside of the transaction is, I think, at the root of why it’s generally considered safer to write data-related code in stored procedures rather than in applications.  But stored procedures are specific to a single data source.  Most significant information stores provide some support for distributed transactions these days… but using them requires doing your data manipulation in application code, which puts you right back outside the transaction.  Very little work has been done on writing code at the application level that nevertheless lives inside of a (distributed) transaction.  A big reason for that is that applications have always been effectful, and we’ve had no good way to take the side effects of the code in the application itself and manage even that central bit in a transaction-safe way.

Enter STM.  Now we do have a nice, working, fully functional system for writing application level code that sees the world from inside of a transaction.  STM was developed as a means of writing composable code that gets speedups from parallelism; I think that’s missing the bigger picture.  Speedups from parallelism are nice, but what STM really gives us is a way to write composable code that’s organized as concurrent processes acting on data with safety offered by transactions.  The next step is to expand those transactions.  I want to write code that makes changes to both a database and my data structures in memory, and have that code run as a transaction, which either succeeds or fails as a whole.
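
To make the wish concrete, here’s the kind of interface I’m imagining.  Everything named here is hypothetical — ExternalTxn and atomicallyWith exist nowhere — and only the STM pieces (TVar, readTVar, writeTVar) are real today.

    import Control.Concurrent.STM (STM, TVar, readTVar, writeTVar)

    -- Hypothetical hooks an external store (say, a database connection with a
    -- pending transaction) would provide, roughly in two-phase-commit style.
    data ExternalTxn = ExternalTxn
      { prepare  :: IO Bool
      , commit   :: IO ()
      , rollback :: IO ()
      }

    -- The missing combinator: run the STM transaction and the external one so
    -- that both take effect, or neither does.
    atomicallyWith :: ExternalTxn -> STM a -> IO a
    atomicallyWith = error "wishful thinking: this is the piece that doesn't exist"

    -- Application code could then update in-memory state and the database in a
    -- single all-or-nothing step.
    recordSale :: ExternalTxn -> TVar Int -> IO ()
    recordSale dbTxn stockLevel =
      atomicallyWith dbTxn $ do
        n <- readTVar stockLevel
        writeTVar stockLevel (n - 1)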

There are challenges here, to be sure: STM’s crucial retry operation is unlikely to be supported by any kind of external transaction system, for example, and the transaction models of different external systems may be hard to bring together under a single interface (savepoints?  nested transactions? isolation levels?).  But even a simple lowest common denominator here would be a huge step forward.

I feel like Haskell has a huge opportunity here to be the first popular general-purpose language in which it’s possible to easily write and compose code that is really transaction-safe.  It would be a true shame to miss the opportunity.

Project #3: Universal Serialization

This is cheating a little bit, because it’s almost asking for a language feature… but I can get away with it because it doesn’t actually require a change to the language: just exposing a somewhat magical new API from the compiler.

Basically, it’s possible to take a lot of types in Haskell, and turn them into some kind of serialized form from which they can be recovered later.  This is what Data.Binary (from the binary package) and Data.Serialize (from the cereal package) both do.  But it’s not possible for all types.  First class functions, for example, cannot be serialized.  It’s not just that no one has written the code yet; in Haskell, it simply can’t happen.
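
For concreteness, here’s the sort of thing the existing libraries already handle, using the real Data.Binary API; the commented-out line is the part that has no answer today.

    import Data.Binary (encode, decode)

    main :: IO ()
    main = do
      let bytes = encode ([1, 2, 3] :: [Int], "hello")   -- fine: Binary instances exist
      print (decode bytes :: ([Int], String))
      -- No Binary instance exists (or can be written in ordinary Haskell) for this:
      -- print (encode (\x -> x + (1 :: Int)))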

But in a magic library provided by a Haskell compiler, it definitely could happen.

You’d have to be willing to live with a few limitations: recompiling your code would most likely invalidate serialized first-class functions.  The interpreter is an even harder challenge.  Serialized values would need to embed checksums of various bits of the code, and I haven’t even begun to think through which checksums would be needed.  But it should be possible.  After all, a closure is just a function pointer (to a function which has some offset in the compiled binary code) together with an environment consisting of values of other types.  Throw in some smart handling of cyclic data structures based on a lookup table, and you’ve got something.
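
If a compiler did expose such a magic library, the interface might look something like this.  Every name here is invented; neither function exists in GHC or anywhere else.

    import Data.ByteString.Lazy (ByteString)

    -- Would walk the heap representation of any value, closures included,
    -- recording code pointers as (offset in the binary, checksum of the binary).
    serializeAny :: a -> IO ByteString
    serializeAny = error "hypothetical compiler magic"

    -- Would rebuild the value, refusing if the checksums don't match the
    -- currently running executable.
    deserializeAny :: ByteString -> IO (Maybe a)
    deserializeAny = error "hypothetical compiler magic"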

Even a half-baked attempt to make higher-order programming with functions and closures compatible with the existing world of SQL databases and binary network protocols would be a huge benefit.

January 2, 2011 / cdsmith

Update to Snap sessions package

I’ve been working furiously this weekend on the mysnapsession package that I announced a few days ago.  Version 0.3 is now released.

I’ve started an introduction to the package, with a step-by-step getting started guide, links to online API documentation, etc.

https://patch-tag.com/r/cdsmith/mysnapsession/wiki/

A few highlights of this new release:

Client Side Sessions: I’ve provided a second implementation of sessions based entirely on cookies and Michael Snoyman’s clientsession package.  This was in the cards from the beginning, but it’s not my favorite choice, so it was second in the list of implementations.

Module Reorganization: I moved some modules around, in acknowledgement that the dialogues programming model is not actually a Snap extension.  Instead, it’s a library that builds on sessions.

Matching APIs with Ozataman’s Package: One of the bigger surprises was learning that ozataman is already working on a different sessions package for Snap.  Because of substantial design differences, I will continue to develop my implementation anyway; but at least we can keep the exposed APIs similar.  As such, you’ll notice that MonadSession looks very similar between the two.  The differences are:

  1. Whereas ozataman defines the session type as a type alias at the top of the module, in my package it’s an associated type inside the MonadSession type class.
  2. Whereas ozataman defines setInSession, deleteFromSession, and getFromSession as members of MonadSession with default implementations, I define them as convenience functions following the MonadSession type class, and with somewhat more general types.

Aside from that, Snap.Extension.Session is basically the same between the two.  The implementations differ considerably.
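
To make the first difference concrete, here’s roughly the shape of the two approaches.  Neither snippet is copied from either package; the member names are simplified stand-ins.

    {-# LANGUAGE TypeFamilies #-}
    import Data.ByteString (ByteString)
    import qualified Data.Map as Map

    -- Shape A: the session is a fixed type alias at the top of the module.
    type Session = Map.Map ByteString ByteString

    -- Shape B: the session is an associated type chosen per instance, so each
    -- back end can pick its own representation.
    class Monad m => MonadSession m where
      type SessionOf m
      getSession :: m (SessionOf m)
      setSession :: SessionOf m -> m ()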

January 2, 2011 / cdsmith

Email woes

Just so everyone knows, I’m having trouble with my gmail account, which someone else seems to have told Google is theirs.  Until it’s resolved, please email me with my first name @brindlewaye.com instead.  Thanks.
