Haskell’s Niche: Hard Problems
(This post isn’t really intended for experienced Haskell programmers, and it’s a tad philosophical in tone. You have been warned!)
A particularly trite phrase that reflexively comes up in programming language discussions is “use the right tool for the job.” The notion is that different programming languages are good at different things, and that a software developer should be prepared to make different choices about their programming language depending on the characteristics of the problem they are trying to solve.
In some cases, this is obviously true: many programming languages are designed to play a niche role in software projects. Other languages are more general purpose. Nevertheless, many people are inclined to look for “the job” that a given language does well, and any language community ought to have a well-considered answer. What is the “job” for which you most think of this language as the right tool?
This is my answer for Haskell: Haskell’s “job” is solving hard problems.
At first, this might seem evasive or dishonest, or like another way of just saying Haskell is great; but that is not what I mean. After all, a very small percentage of software is actually about solving hard problems. If the task at hand is to make sure that the user’s password has at least 6 characters, and at least one digit or punctuation symbol, then most of us could probably whip up an implementation in any of a dozen different programming languages in 15 minutes or less. The great majority of software development is like that. You know the task, you understand what’s supposed to happen, and you just need to write it.
But then there’s the hard stuff: you need to find the right heuristics to interpret some fuzzy data, or optimize a process, or search a space of possible solutions… but you don’t start out with a clear idea of what you’re doing, what the right answers will be, and/or how to go about the task. Those are the programming jobs where Haskell shines. I’ll give three reasons for that.
Reason 1: Haskell shines at domain specific languages.
The first thing you want to do when you approach a difficult problem is to find the right notation and language to discuss it. This step is absolutely crucial: entire fields of human knowledge have been held back by poor notation, and conversely have experienced something like a renaissance when better notation is introduced. In programming communities, we call this idea domain specific languages, and if you can embed a domain specific language into the programming language you’re using, things that looked very difficult can start to appear doable.
Haskell, of course, excels at this. If you look over a list of, say, the top 40 programming languages in use today, the three that have the most promise for domain specific languages would likely be Lisp, Haskell, and Ruby. (Side note: this isn’t meant to unfairly slight languages like Factor that also excel in this area but don’t meet the arbitrary popularity cutoff.) Ruby does an adequate job while remaining basically a traditional modern language — object oriented and imperative. Lisp is defined by this idea, and it dominates the language design, but sometimes at the cost of having a clean combinatorial approach to putting different notation together. Haskell sits in the middle, with a clean and quiet syntax, arbitrary operators, lazy evaluation, combinatorial programming, and advanced type system features that together let you build quite powerful and flexible embedded languages and notations and mix them cleanly.
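To make this concrete, here is a minimal sketch of an embedded DSL — a tiny validation language expressing the password policy from earlier in this post. All the names and operators here (`Rule`, `.&&.`, `.||.`, and so on) are invented for this example, not drawn from any real library:

```haskell
import Data.Char (isDigit, isPunctuation)

-- A "rule" is just a predicate on a string.
type Rule = String -> Bool

-- Primitive rules.
minLength :: Int -> Rule
minLength n s = length s >= n

anyChar :: (Char -> Bool) -> Rule
anyChar p = any p

-- Combinators: custom operators let composed rules read like notation.
(.&&.), (.||.) :: Rule -> Rule -> Rule
(r .&&. q) s = r s && q s
(r .||. q) s = r s || q s

-- The password policy, written in the embedded notation:
-- at least 6 characters, and at least one digit or punctuation mark.
passwordOk :: Rule
passwordOk = minLength 6 .&&. (anyChar isDigit .||. anyChar isPunctuation)
```

The point is not the particular rules but how cheaply the notation comes together: plain functions, a type synonym, and two operators are enough to make the policy read almost like its specification.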
Reason 2: Haskell shines at naming ideas.
The second crucial step to solving problems you didn’t already know how to solve is to start naming things. If you watch people tackle difficult tasks in many other areas of life, you’ll notice this common thread. You can’t talk about something until you have a name for it. Programming languages are also largely built around naming things; but a lot of mainstream languages are limited in terms of what kinds of ideas can be expressed in the language and assigned a name. One question you might ask of a language is how many things it lets you describe and name.
Haskell scores quite well on that count. Monoids, for example, are pervasive and used frequently in many programming languages; but only in Haskell are they named in the standard library. It’s common to hear “you can use monads in this language; but you can’t express the idea of a monad in the language itself.” Giving things names is a more universal fact of Haskell programming than in any other language I’m aware of. In this way, as well, programming in Haskell meshes very well with good practice in difficult problem-solving.
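As a small illustration of what it means to name the idea in the language itself: `Monoid` is an ordinary type class in the standard library, and you can declare your own instances of it. The `MaxInt` type below is invented for this sketch:

```haskell
import Data.Monoid (Sum(..))

-- The standard library names the concept: a Monoid is anything with
-- an associative combining operation (<>) and an identity (mempty).
-- Lists and numeric sums are instances out of the box.
combinedList :: [Int]
combinedList = [1, 2] <> [3]

total :: Sum Int
total = mconcat [Sum 1, Sum 2, Sum 3]

-- And because the idea has a name, we can declare new instances:
-- integers under `max`, with the smallest Int as identity.
newtype MaxInt = MaxInt Int deriving (Eq, Show)

instance Semigroup MaxInt where
  MaxInt a <> MaxInt b = MaxInt (max a b)

instance Monoid MaxInt where
  mempty = MaxInt minBound
```

Once an idea like this is named, every generic function written against `Monoid` — `mconcat`, `foldMap`, and the rest — works with the new instance for free.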
Reason 3: Haskell shines at making frequent fundamental changes.
Finally, a crucial aspect of difficult problem solving is that you’re frequently wrong. You pursue an idea for a long time, only to discover that you had something backwards in an important way, and need to make some pervasive changes throughout your work. Note that the maintenance programmer’s “surgical modification” style is a horrible idea here; the last thing you want, when you’re already working at the limits of your ability, is to wade through code whose structure arose out of your past major mistakes. Rather, what you need is a way to make deep structural changes to your code, and still end up with a fair amount of confidence that the result is at least reasonable, that you haven’t forgotten something big.
Unit testing won’t do the job; there are just too many false failures, since making such a large change tends to invalidate huge swaths of your unit tests anyway. You already know that they won’t work, because you deleted or changed the arguments to the pieces that you were testing. Indeed, while test-driven development works great for the vast majority of programming tasks that fall squarely in the “not difficult” category, it has a tendency to crystallize the code a bit too quickly here. You don’t want to be told about, and forced to fix, every place something changed; you want to know specifically where your changes are inconsistent with each other.
That, of course, is the job of a type system. Haskell has undoubtedly the most advanced type system of any popular language (let’s say “top 40” again) in use today. This gives you (if you actually use it, rather than avoiding it and shutting it up!) the ability to make a large change that will affect the structure of your code, and know for sure that you’ve hit all the places where that change makes a difference. Indeed, the type checker can direct your attention to what remains to be updated. We’re not even talking about errors that look reasonable but contain subtle mistakes; those will need to be caught with some combination of careful thought and testing. We’re talking about the kind of errors that would cry out at you if you reread that bit of code; but came into being because of a circuitous route of refactoring. To quote Benjamin Pierce, “Attempting to prove any nontrivial theorem about your program will expose lots of bugs: The particular choice of theorem makes little difference!” This is especially true when you’re dealing with rapidly changing code.
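A small sketch of what this looks like in practice (the `Expr` type here is invented for illustration): when a data type gains a new constructor, GHC’s exhaustiveness warnings point at every function that still needs updating.

```haskell
{-# OPTIONS_GHC -Wall #-}

-- A toy expression language, purely for illustration.
data Expr = Lit Int | Add Expr Expr | Neg Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Neg a)   = negate (eval a)

-- Suppose the design changes and we add a `Mul Expr Expr` constructor.
-- With -Wall enabled, the incomplete-pattern warning flags `eval` (and
-- every other function that matches on Expr) as missing the new case,
-- so the compiler walks us through the refactoring.
main :: IO ()
main = print (eval (Add (Lit 1) (Neg (Lit 2))))
```

This is the “make a deep structural change and let the type checker enumerate what’s left” workflow: delete or change the type first, then fix each error the compiler reports until it falls silent.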
That, then, is the answer I give for what Haskell is the “right tool” for. To be sure, there are a number of more specific answers, parallelism being a popular one. But these are in a sense misleading. Unlike, say, Erlang, Haskell was not designed for parallel programming; its usefulness for that task has arisen after the fact. Indeed, parallelism in a sense qualifies as one of the hard problems for which Haskell is well-suited. I also don’t mean to claim that Haskell is miserable at easy tasks; indeed, I use it routinely for the sort of thing many UNIX experts would pull out shell scripting and Perl for. But I am bold enough to say that those tasks are not where it shows to its best advantage.
That’s my slogan, then. Haskell is for solving hard problems.