June 6, 2011 / cdsmith

Mazes in Haskell, My Version

Earlier today, I read this article by Mihai Maruseac about generating mazes in Haskell.  It’s very interesting, but it turns out I’m a maze generation algorithm bigot.  As far as I’m concerned, there’s only one right way to generate mazes, and it’s this:

  1. Construct the maze with all possible walls (so each cell is isolated).
  2. Pick a random wall such that there’s no way to get from one side to the other.
  3. Tear down that wall.
  4. Repeat until the entire maze is navigable.

Sadly, this isn’t how Mihai decided to do it, so I was forced (forced, I tell you!) to spend some time writing my own maze generator.

Equivalence Relations

It turns out that the tricky step here is #2: how can you efficiently tell whether there’s a way to get from one side of a wall to the other?  A maze solver would do it, of course, but solving the maze once per wall isn’t the most scalable answer out there.  Instead, you want to keep track of the separate “rooms” in your maze as you go.

An efficient data structure for this task, called “union find”, has been known to the imperative world for decades: Tarjan showed back in the 70s that it runs in time that’s very nearly linear (with some additional factor that grows like the inverse of the Ackermann function… that is, ludicrously slowly).  Sadly, this seems to be one of those data structures that’s confined to the imperative world.  There’s not an obvious translation into the functional world.  I could just use an imperative implementation, in IO or ST… but that would be too easy.  Instead, I decided to find an implementation of an externally-pure, internally-stateful version of that algorithm.  I failed to find one, so I wrote one instead.  Here it is: persistent-equivalence.

I based the general technique on one by Conchon and Filliatre.  They used a persistent array under the hood, and threw in a bit of unsafe mutation to implement path compression.  Well, the path compression bit is easy: it’s just an atomicModifyIORef.  It’s perfectly safe, since no exposed function can ever give a different result depending on whether it’s been modified or not.  In fact, I’m not even certain it needs to be atomic, but I’ve played it safe for now.  The second major change made by Conchon and Filliatre is sadly less safe: they arranged to make changes to the persistent array (naively a DiffArray) to “reroot” it when old versions are accessed.  However, their code for doing this is clearly a minefield in the presence of multithreading… I was up for tackling the task, until I realized that STM is forbidden inside unsafePerformIO, and the interactions between various locks are mind-boggling…

Rather than enter the depths of thread-safety hell, or else potentially expose an API that claims it’s safe but really isn’t, I instead gave up.  DiffArray is good enough for us anyway, since we won’t be doing any backtracking.

Just as a side comment, this structure is a widely known example of an imperative structure that’s hard to translate into the functional world… but when you *do* translate at least its public interface, the result is rather beautiful.  I’ve never seen a specification of “union find” that I’d consider particularly enlightening… but when it’s converted to a functional interface, it’s immediately clear what you’re dealing with: equivalence relations.  Instead of talking about some operation names that were made up for this specific purpose, we’re looking at a very simple idea from mathematics.  The imperative viewpoint, though, obscured this by encouraging you to only speak in terms of the operations, and avoid ever talking about a specific equivalence relation you’ve got.  I’m much happier with the functional interface.
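Concretely, the functional interface ends up looking roughly like this (a sketch of the signatures as I use them below; see the persistent-equivalence package for the authoritative versions):

-- An equivalence relation over the values in an index range:
data Equivalence a

-- Initially, every element is in its own equivalence class:
emptyEquivalence :: Ix a => (a, a) -> Equivalence a

-- Are two elements currently in the same class?
equiv :: Ix a => Equivalence a -> a -> a -> Bool

-- Coarsen the relation so that two elements are considered equal:
equate :: Ix a => a -> a -> Equivalence a -> Equivalence a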

Generating the Maze

Now that I’ve got an implementation of equivalence relations, I’m well on my way to having a maze.  I declare a few data types for cells and walls:

-- Vertical walls are to the right of their cell (so the x component
-- must be less than width - 1), and horizontal walls are to the top
-- of their cell (so the y component must be less than height - 1).

type Cell = (Int, Int)
data Wall = H Cell | V Cell deriving (Eq, Show)

And I write the code to generate a maze, which works out in a nice recursive style.

process rooms []     = []
process rooms (H (x,y) : ws)
    | equiv rooms (x,y) (x,y+1) = H (x,y) : process rooms ws
    | otherwise                 = process (equate (x,y) (x,y+1) rooms) ws
process rooms (V (x,y) : ws)
    | equiv rooms (x,y) (x+1,y) = V (x,y) : process rooms ws
    | otherwise                 = process (equate (x,y) (x+1,y) rooms) ws
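(Given the interface sketched above, process has the type Equivalence Cell -> [Wall] -> [Wall]: it takes the current partition of cells into rooms and the remaining shuffled walls, and returns the walls that get kept.)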

genMaze :: RandomGen gen => Int -> Int -> gen -> [Wall]
genMaze w h gen = finalWalls
  where allWalls = [ H (x,y) | x <- [0 .. w-1], y <- [0 .. h-2] ]
                ++ [ V (x,y) | x <- [0 .. w-2], y <- [0 .. h-1] ]
        startRooms = emptyEquivalence ((0,0), (w-1, h-1))
        startWalls = shuffle' allWalls (length allWalls) gen
        finalWalls = process startRooms startWalls

To generate a maze, you make a list of all the walls in the chosen size, shuffle them (using the random-shuffle package from Hackage), build an initial empty equivalence relation between cells (that is, each cell is its own separate room), and start considering the walls one by one in the random order chosen.  For each wall, if its two sides are already connected (the cells are in the same equivalence class), you keep the wall.  Else, you knock it down, and proceed with a new equivalence relation considering those cells connected now.  This is exactly the algorithm given at the start.

Showing the Result

I turned to Gtk2Hs for a quick GUI displaying the resulting maze.  The code is not terribly interesting, so I’ll just drop it in for the sake of completeness.

First, we have Cairo code for drawing the maze (for the sake of simplicity, I’ve hard-coded each cell to 30×30 pixels):

drawMaze :: Int -> Int -> [Wall] -> Render ()
drawMaze w h walls = do
    rectangle 10 10 (30 * fromIntegral w) (30 * fromIntegral h)
    forM_ walls $ \wall -> case wall of
        H (x,y) -> do
            moveTo (10 + 30 * fromIntegral x) (40 + 30 * fromIntegral y)
            lineTo (40 + 30 * fromIntegral x) (40 + 30 * fromIntegral y)
        V (x,y) -> do
            moveTo (40 + 30 * fromIntegral x) (10 + 30 * fromIntegral y)
            lineTo (40 + 30 * fromIntegral x) (40 + 30 * fromIntegral y)
    stroke

Next we’ll build a window to display it:

displayMaze :: Int -> Int -> [Wall] -> IO ()
displayMaze w h walls = do
    initGUI
    wnd <- windowNew
    wnd `on` deleteEvent $ liftIO mainQuit >> return False
    set wnd [
        windowDefaultWidth  := 20 + 30 * w,
        windowDefaultHeight := 20 + 30 * h ]

    da <- drawingAreaNew
    containerAdd wnd da
    da `on` exposeEvent $ do
        dw <- eventWindow
        liftIO $ renderWithDrawable dw (drawMaze w h walls)
        return False

    widgetShowAll wnd
    mainGUI

Finally, we pull that all together…

-- Note: reading the dimensions with view patterns requires the
-- ViewPatterns extension; getArgs is from System.Environment, and
-- newStdGen is from System.Random.
main = do
    [read -> w, read -> h] <- getArgs
    gen <- newStdGen
    displayMaze w h (genMaze w h gen)
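Build it and pass the maze dimensions on the command line (assuming the file is saved as Maze.hs):

$ ghc --make Maze.hs
$ ./Maze 20 15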

There you are, mazes in Haskell, my way!

Examples

Just for the fun of it, I present a couple mazes:

April 26, 2011 / cdsmith

Composing state with functions and lenses

Here’s the scenario: you’re writing some stateful code.  Maybe it’s threaded state (a State monad), or maybe it’s just fixed shared state (a Reader monad).  So you’ve got a lot of types flying around like:

Reader X a

Reader X a -> Reader X b

State X a

State X a -> State X b

But then you take your stateful code, and try to compose it with someone else’s stateful code, and their state is different.  That is, they have:

Reader Y a

Reader Y a -> Reader Y b

State Y a

State Y a -> State Y b

Question: how do you get these pieces of code to work together?

Clearly you’ll need some kind of relationship between the types X and Y, or you have no hope.  But what kind of relationship do you need here?  We’ll consider each of the types in turn.

Case 1: Reader X a / Reader Y a

In this case, you’ve got a Reader X a, and a Reader Y a, and you want to combine them.  It turns out all you need here is a function from one to the other, and you can turn these into compatible types to compose them nicely.  The following is in the mtl package already.

withReader :: (p -> q) -> Reader q a -> Reader p a

That’s not surprising, actually.  After all, Reader x y is conceptually just a newtype wrapper around x -> y, so withReader is a fancy name for function composition!

withReader f r = reader (runReader r . f)

Note the contravariance there… you pass in a function p -> q, but what you get back is Reader q a -> Reader p a, in the opposite order.  That makes a lot of sense, though, if you think it through.  (Exercise for the reader: think it through.)
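For instance (with hypothetical types X and Y, where Y lives inside X), a minimal sketch:

import Control.Monad.Reader

data Y = Y { name :: String }
data X = X { yPart :: Y, count :: Int }

greeting :: Reader Y String
greeting = asks (("Hello, " ++) . name)

-- Lift the smaller computation to run against the bigger state;
-- the function goes X -> Y, the computation goes the other way:
greetingX :: Reader X String
greetingX = withReader yPart greeting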

Case 2: (Reader X a -> Reader X b) / (Reader Y a -> Reader Y b)

Another situation that comes up is that we’ve got a way of wrapping reader monads.  This happens particularly often if you’re building up values by making changes to other values.  For example, one of the two primitives from the MonadReader class, local, gives you precisely this kind of map between Reader monads.

The first thing we notice here is that a function from one state type to the other cannot possibly be good enough, because a conversion doesn’t even have any clear meaning on those types.  What turns out to work for us, though, is a lens.  A lens can be thought of as a getter/setter pair, and I’ll use the definition from the excellent fclabels package.  Here’s what you need to know:

data a :-> b
lens :: (a -> b) -> (b -> a -> a) -> (a :-> b)
getL :: (a :-> b) -> (a -> b)
setL :: (a :-> b) -> (b -> a -> a)

In other words, a :-> b (note the colon) is the type of lenses from a to b.  You construct them by providing a getter and a setter to the lens function, and you can extract the getter and setter from getL and setL.  They can also be composed like functions, and have identities (in other words, they form the arrows of a category).

With both getters and setters in mind, we can set out to compose the types from earlier.

wrapWithReader :: (x :-> y)
               -> (Reader y a -> Reader y b)
               -> (Reader x a -> Reader x b)
wrapWithReader l f r = reader (\x ->
    runReader (f (reader (\y -> runReader r (setL l y x)))) (getL l x))

This may look complex, but mostly the complexity is in constructing and deconstructing the Reader monad newtype.  The definition is straightforward aside from that: to turn the Reader x a into a corresponding Reader x b, you simply consider the Reader y a that results from fixing the non-y bits of the input, transform it, and then map it back.
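Reusing the hypothetical X and Y from above, usage looks like this: build the lens with fclabels’ lens function, then lift any Reader-wrapper through it.

yL :: X :-> Y
yL = lens yPart (\y x -> x { yPart = y })

wrapX :: (Reader Y a -> Reader Y b) -> (Reader X a -> Reader X b)
wrapX = wrapWithReader yL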

Case 3: State X a / State Y a

The third case is where we have a state monad rather than a reader monad.  Since changes of state in the state monad are almost interchangeable with a lens, it turns out a lens is what we need here, too.  We can implement this without too much trouble.

withState :: (x :-> y) -> State y a -> State x a
withState l s = state (\x -> let (a,y) = runState s (getL l x)
                             in  (a, setL l y x))

In other words, we pull the y out of the x, run the state computation with it, and then push the resulting y back into the x to get a modified x.  Works like a charm.
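Again with the hypothetical X, Y, and yL from earlier (note that mtl exports its own, unrelated withState, so in practice you’d hide that import):

rename :: State Y ()
rename = modify (\y -> y { name = "world" })

-- Run the small-state computation against the big state:
renameX :: State X ()
renameX = withState yL rename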

Case 4: (State X a -> State X b) / (State Y a -> State Y b)

The final case, and the most complicated one yet, arises if you have a function to modify state types, and need to change the type of the state.  Sadly, even a lens is not sufficient to assign a reasonable meaning to this conversion.  To make sense of such a transformation, you need to know something even stronger: we’ll do it where there is an isomorphism between the types X and Y.  Then the composition can be seen as transforming the functions by simply treating one as the other.

Fortunately, fclabels still has the types we need!

data x :<->: y
(:<->:) :: (x -> y) -> (y -> x) -> (x :<->: y)
fw :: (x :<->: y) -> (x -> y)
bw :: (x :<->: y) -> (y -> x)

An isomorphism is just a pair of inverse functions between the types, meaning they are essentially interchangeable.  Then it’s easy to build the state wrapper converter, which interchanges them:

wrapWithState :: (x :<->: y)
              -> (State y a -> State y b)
              -> (State x a -> State x b)
wrapWithState iso f = t (bw iso) (fw iso) . f . t (fw iso) (bw iso)
    where t to from s = state (second to . runState s . from)  -- second comes from Control.Arrow

And voila, composable state types!
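For example (hypothetical newtypes, just to show the shape of it):

newtype Celsius    = Celsius Double
newtype Fahrenheit = Fahrenheit Double

cf :: Celsius :<->: Fahrenheit
cf = (\(Celsius c) -> Fahrenheit (c * 9/5 + 32))
     :<->: (\(Fahrenheit f) -> Celsius ((f - 32) * 5/9))

-- Any wrapper on Fahrenheit-valued state becomes one on Celsius-valued state:
wrapC :: (State Fahrenheit a -> State Fahrenheit a)
      -> (State Celsius a -> State Celsius a)
wrapC = wrapWithState cf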

(Side note: It is possible to build something with the type given above for wrapWithState but using just a lens instead of an isomorphism.  Sadly, it doesn’t act as you’d expect for doing the composition.  Also, Gregory Collins pointed out to me that you can implement wrapWithState with the expected behavior and just a lens, if you give it a rank 2 type and require that the wrapper function be universally quantified on the result type.  Neither of these is quite what we’re looking for, though, and the isomorphism is needed to get something with the obvious meaning.)

(Second side note: I’ve done this with pure state and reader monads for simplicity; but it’s easy to generalize to StateT and ReaderT, if that’s what you want.)

April 15, 2011 / cdsmith

A Correspondence Involving Getters and Setters

One of the reasons that I love Haskell is that it leads you to fascinating thought experiments.  Here’s one of mine.  The conclusions aren’t particularly earth-shattering, but they are interesting.

One of the most common things to do in an imperative programming language is to build getters and setters for the properties of an object.  In Java, they may look like this:

public X getFoo();

public void setFoo(X val);

The obvious mapping from there into a purely functional approach gives you this, for a record type R and a field type F:

getFoo :: R -> F

setFoo :: F -> R -> R

The fact that we have two separate functions here is unpleasing to me, though.  Without being quite able to explain why, I’d really like to have just one type that completely describes the property “foo”.  A product type is definitely cheating… but this would definitely satisfy me, if it works:

foo :: forall t. (F -> (F,t)) -> (R -> (R,t))

I’m interested in this type for two reasons: first, because it’s fairly easy to embed both a getter and a setter together into such a type.  Suppose you give me the functions getFoo and setFoo.  Then I can certainly embed them into a foo, in such a way that they can be recovered.

foo g r = let (f,v) = g (getFoo r) in (setFoo f r, v)

getFoo' = snd . foo (\x -> (x,x))

setFoo' v = fst . foo (\x -> (v,()))

It’s a straightforward matter of substitution to see that getFoo' and setFoo' are identical to their original counterparts.  So one can construct a value of the form of foo given any getter and setter combination, and given any such value of the type of foo, one can extract a getter and a setter.  The second reason I care about that type, though, is that it has a natural meaning aside from just embedding a getter/setter pair.  Recall that the State monad with state type S and result type t is (up to the order of the pair) a newtype wrapper around S -> (S,t).  So foo can be seen as a state transformer: it takes a stateful computation with state type F, and changes it into a stateful computation with state type R.
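Spelled out with mtl’s State type, that reading looks like this (a sketch; note that mtl’s runState pairs the result first, where I wrote the state first above):

fooState :: State F t -> State R t
fooState s = state (\r -> let (v, f) = runState s (getFoo r)
                          in  (v, setFoo f r))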

Now, the rather more involved question is whether there exist state transformers (values of the type of foo) that do not arise in that way as the straightforward embedding of getter and setter functions.  In other words, could foo be something more than just the encoding of a getter and a setter into a function?

Alas, the answer is yes.  It would be nice if the product type of getters and setters were isomorphic to the type of state transformers, and that is very nearly true… but not quite.  To see how the reasoning works, first note that the type (a -> (b,c)) is isomorphic to (a -> b, a -> c).  (This is the type isomorphism version of distributing an exponent over a product.)  This lets us split up foo into two parts as follows:

foo1 :: forall t. (F -> F) -> (F -> t) -> R -> R

foo2 :: forall t. (F -> F) -> (F -> t) -> R -> t

We can simplify a little by arguing based on the universal quantification.  Note that foo1 is given a function of type (F -> t) as a parameter, but it cannot possibly make any use of it, since it knows nothing about the type t.  Furthermore, foo2 must produce a value of type t, and can do so only through its parameter of type (F -> t), which can only be used for that purpose.  So these turn out to be equivalent to the following simpler types:

modifyFoo :: (F -> F) -> R -> R

filteredGetFoo :: (F -> F) -> R -> F

I’ve named them suggestively, because I have a bit of intuition for what these things tend to mean.  Let’s now look at what happens to the getFoo and setFoo functions that we were able to define from the original foo:

setFoo v = modifyFoo (const v)

getFoo = filteredGetFoo id

This all looks as you might expect… but remember that the question is whether modifyFoo and filteredGetFoo are completely determined by the getter / setter pair arising in that way.  Clearly they are not.  In particular, note that you can iterate a constant function 1 or more times, and always get the same answer no matter the number of iterations; and the identity function similarly for zero or more times.  So some interesting logic can be built into modifyFoo or filteredGetFoo with respect to iterating the function passed as a parameter (a constant number of times, or maybe until some predicate holds, or perhaps something more complex), and though this would change the behavior of the modify and filteredGet operations for some inputs, it would have no effect on the get and set operations.
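A tiny example makes the point: here is a “non-standard” modify that iterates its argument twice.  Since const v . const v = const v, the setter derived from it is exactly the ordinary setFoo, and yet it disagrees with the standard modifyFoo on, say, (+1).

modifyTwice :: (F -> F) -> R -> R
modifyTwice f r = setFoo (f (f (getFoo r))) r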

Still, we’ve got something interesting here.  I wonder if there are interesting “non-standard” definitions of modify and filteredGet for some common record type.  If so, then they would lead to interesting transformations on values of the State monad, which don’t arise from get and set in the normal way.  Makes you wonder, doesn’t it?

March 13, 2011 / cdsmith

Haskell’s Niche: Hard Problems

(This post isn’t really intended for experienced Haskell programmers.  It has no source code, and is a tad philosophical in tone.  You have been warned!)

A particularly trite phrase that reflexively comes up in programming language discussions is “use the right tool for the job.”  The notion is that different programming languages are good at different things, and that a software developer should be prepared to make different choices about their programming language depending on the characteristics of the problem they are trying to solve.

In some cases, this is obviously true: many programming languages are designed to play a niche role in software projects.  Other languages are more general purpose.  Nevertheless, many people are inclined to look for “the job” that a given language does well, and any language community ought to have a well considered answer.  What is the “job” for which you most think of this language as the right tool?

This is my answer for Haskell: Haskell’s “job” is solving hard problems.

At first, this might seem evasive or dishonest, or like another way of just saying Haskell is great; but that is not what I mean.  After all, a very small percentage of software is actually about solving hard problems.  If the task at hand is to make sure that the user’s password has at least 6 characters, and at least one digit or punctuation symbol, then most of us could probably whip up an implementation in any of a dozen different programming languages in 15 minutes or less.  The great majority of software development is like that.  You know the task, you understand what’s supposed to happen, and you just need to write it.

But then there’s the hard stuff: you need to find the right heuristics to interpret some fuzzy data, or optimize a process, or search a space of possible solutions… but you don’t start out with a clear idea of what you’re doing, what the right answers will be, and/or how to go about the task.  Those are the programming jobs where Haskell shines.  I’ll give three reasons for that.

Reason 1: Haskell shines at domain specific languages.

The first thing you want to do when you approach a difficult problem is to find the right notation and language to discuss it.  This step is absolutely crucial: entire fields of human knowledge have been held back by poor notation, and conversely have experienced something like a renaissance when better notation is introduced.  In programming communities, we call this idea domain specific languages, and if you can embed a domain specific language into the programming language you’re using, things that looked very difficult can start to appear doable.

Haskell, of course, excels at this.  If you look over a list of, say, the top 40 programming languages in use today, the three that have the most promise for domain specific languages would likely be Lisp, Haskell, and Ruby.  (Side note: this isn’t meant to unfairly slight languages like Factor that also excel in this area but don’t meet the arbitrary popularity line.)  Ruby does an adequate job while remaining basically a traditional modern language — object oriented and imperative.  Lisp is defined by this idea, and it dominates the language design, but sometimes at the cost of having a clean combinatorial approach to putting different notation together.  Haskell sits in the middle, with a clean and quiet syntax, arbitrary operators, lazy evaluation, combinatorial programming, and advanced type system features that together let you build quite powerful and flexible embedded languages and notations, and mix them cleanly.

Reason 2: Haskell shines at naming ideas.

The second crucial step to solving problems you didn’t already know how to solve is to start naming things.  If you watch people tackle difficult tasks in many other areas of life, you’ll notice this common thread.  You can’t talk about something until you have a name for it.  Programming languages are also largely built around naming things; but a lot of mainstream languages are limited in terms of what kinds of ideas can be expressed in the language and assigned a name.  One question you might ask of a language is how many things it lets you describe and name.

Haskell scores quite well on that count.  Monoids, for example, are pervasive and used frequently in many programming languages; but only in Haskell are they named in the standard library.  It’s common to hear “you can use monads in this language; but you can’t express the idea of a monad in the language itself.”  Giving things names is a more universal fact of Haskell programming than in any other language I’m aware of.  In this way, as well, programming in Haskell meshes very well with good practice in difficult problem-solving.

Reason 3: Haskell shines at making frequent fundamental changes.

Finally, a crucial aspect of difficult problem solving is that you’re frequently wrong.  You pursue an idea for a long time, only to discover that you had something backwards in an important way, and need to make some pervasive changes throughout your work.  Note that the maintenance programmer’s “surgical modification” style is a horrible idea here; the last thing you want, when you’re already working at the limits of your ability, is to wade through code whose structure arose out of your past major mistakes.  Rather, what you need is a way to make deep structural changes to your code, and still end up with a fair amount of confidence that the result is at least reasonable, that you haven’t forgotten something big.

Unit testing won’t do the job; there are just too many false failures, since making such a large change tends to invalidate huge swaths of your unit tests anyway.  You already know that they won’t work, because you deleted or changed the arguments to the pieces that you were testing.  Indeed, while test-driven development works great for the vast majority of programming tasks that fall squarely in the “not difficult” category, it has a tendency to crystallize the code a bit quickly here.  You don’t want to be told about, and need to fix, every place something changed; you want to know specifically when you’ve made changes that are not consistent between themselves.

That, of course, is the job of a type system.  Haskell has undoubtedly the most advanced type system of any popular language (let’s say “top 40” again) in use today.  This gives you (if you actually use it, rather than avoiding it and shutting it up!) the ability to make a large change that will affect the structure of your code, and know for sure that you’ve hit all the places where that change makes a difference.  Indeed, the type checker can direct your attention to what remains to be updated.  We’re not even talking about errors that look reasonable but contain subtle mistakes; those will need to be caught with some combination of careful thought and testing.  We’re talking about the kind of errors that would cry out at you if you reread that bit of code, but came into being because of a circuitous route of refactoring.  To quote Benjamin Pierce, “Attempting to prove any nontrivial theorem about your program will expose lots of bugs: The particular choice of theorem makes little difference!”  This is especially true when you’re dealing with rapidly changing code.

That, then, is the answer I give for what Haskell is the “right tool” for.  To be sure, there are a number of more specific answers, parallelism being a popular one.  But these are in a sense misleading.  Unlike, say, Erlang, Haskell was not designed for parallel programming; its usefulness for that task has arisen after the fact.  Indeed, parallelism in a sense qualifies as one of the hard problems for which Haskell is well-suited.  I also don’t mean to claim that Haskell is miserable at easy tasks; indeed, I use it routinely for the sort of thing many UNIX experts would pull out shell scripting and Perl for.  But I am bold enough to say that those tasks are not where it shows to its best advantage.

That’s my slogan, then.  Haskell is for solving hard problems.

February 5, 2011 / cdsmith

HTML 5 in Haskell

I’ve just released the first version of the xmlhtml package, which is part of the Snap framework.  The purpose of the package is to be a common parser and renderer for both HTML 5 and XML.  I’m writing here to talk about what the package is, its goals and design choices, and so on.

Context: Heist and Hexpat

The Snap project includes a templating engine called Heist.  Since I didn’t write it, I can say that it is, in my opinion, the best model for real page template engines that exists.  If you’re generating very dynamic HTML using Haskell code, there are nice options like blaze; but if you want a template system, I think Heist is the clear choice.  It’s simple, easy to understand, and extremely powerful.

Under the hood, Heist works with hexpat, a wrapper around the native expat library for handling XML documents.  Unfortunately, HTML isn’t really XML, and it’s sometimes difficult to build pages that are valid XML and make use of a variety of client-side web techniques.  Some problems that arose:

  • JavaScript makes frequent use of reserved XML characters like ‘<’, ‘&’, etc.  HTML-based browsers accept these without complaint, terminating the JavaScript only when they find a closing script tag.
  • CSS is similar; special characters can occur in valid CSS, but are not okay in XML documents.
  • The converse problem exists as well; hexpat will escape special characters in text as entities, but web browsers, which don’t expect that, then won’t parse the code correctly.
  • Some tags like textarea and object need to have explicit close tags to be understood by many web browsers and be valid HTML 5.  Hexpat renders them as empty tags instead, with a slash in the start tag.
  • HTML allows certain end tags to be omitted; for example, end tags on list items, paragraphs, etc.  Hexpat is an XML parser, though, and insists on close tags.
  • Hexpat insists on a single root element, as is the custom for XML.  However, Heist templates are allowed to have many root elements.  The tricks to work around this in Heist had bad effects on other bits, such as DTD declarations.  Better to have a proper parser that can understand multiple roots.

There are dozens of other such incompatibilities, and they formed a constant source of annoyance for Heist users.

The Answer: xmlhtml

To address these outstanding issues in Heist, I built a new library for handling both XML and HTML 5, which is creatively named xmlhtml.  Since this is a huge design space, we narrowed down the intent of the library as follows:

  • The intended use is for working with documents under your own control.  This includes, for example, the templates in a web application.
  • We support 99% of both XML and HTML 5.  We leave out a few small things on both sides (most notably, processing instructions are silently ignored in XML).
  • We use the same types for both HTML and XML, so you can write code to work with both seamlessly.
  • We focus on keeping the types and the public API as simple and small as possible.

The first point is crucial to keeping the project manageable.  The latest draft HTML 5 specification contains over a hundred pages of intricate parsing rules designed to match the quirky behavior of 20 years’ worth of web browsers.  While it might be useful to implement these rules for writing web spiders, screen scrapers, and such, the result would be too complex for working with clean, controlled documents.

At the same time, it was important to us not to take compatibility and standards lightly.  While we don’t adhere to all of the standards all of the time, we differ in controlled ways that are important for the application we have in mind.

Simplicity was also a huge design goal.  There’s a tendency in Haskell for libraries to get more and more generic over time, to the point that actual honest-to-goodness types are few and far between!  One goal of xmlhtml was to just go ahead and decide the types for you.  So text is represented with the Text type from Bryan O’Sullivan’s text package, which is now part of the Haskell Platform.  Lists are lists, attributes and tags are also text… you don’t have to track down a hundred type variables to understand the types in the package.  If you want to convert to a different type, you can; but the xmlhtml Document type uses the types that it uses.

This fills a space that I think is pretty much unoccupied in Haskell so far.  For parsing and rendering valid XML, there’s hexpat.  For handling arbitrary and possibly invalid HTML, there is TagSoup.  But for handling your own documents using both HTML and XML features, and without requiring detective work to figure out how to use it, this is the way to go.

A Brief Introduction

It’s dead simple to make basic use of the package.  The module Text.XmlHtml exports parsing and rendering functions:

  • parseXML and parseHTML: These are the starting points for parsing documents.  The result is a Document value containing the detected character encoding, document type, and the content of the document.  (A minimal example follows this list.)
  • render: This is the renderer.  The result is a Builder object from the blaze-builder package.  You can use blaze-builder’s toByteString if you prefer it in that form, but keep in mind that the Builder type has advantages if you’re further concatenating the result into a larger bit of output.
  • Basic types and functions: The Text.XmlHtml module exports another 16 simple functions and a few types for manipulating document structure.  They are all pretty obvious and simple; you can check if a node is an element, text node, or comment, get its children, get and set its attributes, and so on.  You can get lists of child and descendant nodes.  All of the basic things you’d expect.
  • Cursor: In the module Text.XmlHtml.Cursor, there are functions for acting on document trees more imperatively, using a zipper.  The zipper type is Cursor, and there are a few dozen supporting functions for moving around and inspecting the nodes of the tree.
  • renderHtml: The Text.Blaze.Renderer.XmlHtml module contains a renderer from blaze into this package’s document type.  This is a sort of corner case, outside the expected usage, but occasionally helpful for some integration stuff if you do some of your HTML using Heist and other stuff with blaze.
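Pulling parseHTML and render together, a minimal round trip looks roughly like this (a sketch; exact signatures may shift between versions):

import qualified Data.ByteString as B
import Blaze.ByteString.Builder (toByteString)
import Text.XmlHtml

main :: IO ()
main = do
    bytes <- B.readFile "template.tpl"
    case parseHTML "template.tpl" bytes of
        Left err  -> putStrLn err
        Right doc -> B.putStr (toByteString (render doc))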

So that’s the new xmlhtml package.  It’s a very simple and nice way to play with document trees; and that’s pretty much it!

January 24, 2011 / cdsmith

My Dream GHCi Session, Take 2

About three or four years ago, I wrote a blog entry describing my dream GHCi session; basically a wish list of things I wish GHCi did for me.  Well, since today is official program announcement day for Google Summer of Code 2011, I’m revisiting this subject in hopes of roping some hapless college students into doing my bidding… er, inspiring others.

So here, revised and updated, is what I’d love to see next time I start GHCi.

$ ghci MyFile.hs
GHCi, version 9.9.02013010: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
[1 of 1] Compiling MyFile              ( MyFile.hs, interpreted )

MyFile.hs:13:5: Not in scope: `callDatabase'

MyFile.hs:112:16:
    No instance for (Fractional Integer)

Partially failed, modules loaded: MyFile
*MyFile> :t foo
Warning: foo depends on callDatabase, which was not found.
Warning: Inferred type may be too general.

foo :: forall a t b. (SQL a, SQLData b) => a -> t -> STM [b]

*MyFile> foo (Just 7) (DB $ connectPostgreSQL "dbname=fluffybunnies")

*** Exception: callDatabase: unresolved compile error

*MyFile> foo Nothing NoDB  -- doesn't force value for callDatabase

Just 42

*MyFile> :list 110-114

110:       return [a]
111:
112: bar x = (baz x) / 5
113:
114: superCoolFunction = unsafePerformIO $ do

*MyFile> :t baz
baz :: forall a. a -> Integer
*MyFile> let bar x = fromIntegral (baz x) / 5
*MyFile> let callDatabase = undefined :: (SQL a, DBDriver b) => b -> a -> STM TVar
*MyFile> :recomp MyFile
[1 of 1] Compiling MyFile              ( MyFile.hs, interpreted )
Ok, modules loaded: MyFile

*MyFile> :t foo

foo :: forall a t b. (SQL a, DBDriver t, SQLData b) => a -> t -> STM [b]

*MyFile> let data MyTree a = Leaf a | Branch (MyTree a) (MyTree a)
*MyFile> instance Monoid (MyTree a) where mappend = Branch
*MyFile> Leaf 6 `mappend` Leaf 7
Branch (Leaf 6) (Leaf 7)

*MyFile> :t 6              -- check type by itself
6 :: Num a => a

*MyFile> :t {{ 6 }} / 5  -- check type in context
6 :: Fractional a => a

*MyFile> superCoolFunction
... BOOM!

Notes:

  1. It’s nice that I had to remove some parts of my old list because they’ve since been implemented!  In particular, the :list command was added with the GHCi debugger; GHCi now does multiline blocks with :{ and :}, and one of my requests for a better error message seems to no longer be an issue.  This is nice to see!
  2. The first request is for GHCi to continue doing its best to load a module when something goes wrong.  Instead of punting entirely, it could still put those bits in scope that work correctly, and just report the errors on the bits that don’t work.  This should probably only work for the open module (the one with an asterisk by it in the prompt).
  3. Going further in this direction, the behavior of the first ‘:t foo’ suggests that GHCi may be able to report a guess (possibly incorrect) at piecing together the rest of the file in the event of errors.  Here, callDatabase is not in scope, so type checking and inference can’t possibly proceed correctly.  However, type checking and inference can still proceed speculatively, by assuming callDatabase :: forall a. a!  A warning is needed, of course, since the given type will likely be insufficiently specific once callDatabase is brought into scope.
  4. The next two lines record what happens to callDatabase (you know, that symbol that wasn’t in scope).  Basically, it’s treated as a bottom (i.e., undefined) value, so if it’s forced, then you get an exception.  If it’s not forced, you can still use foo.
  5. The :list command now exists for symbols or single lines.  I still think a line range is a reasonable idea.
  6. The next few lines show me interactively fixing the errors in MyFile.hs and using a made-up command “:recomp” to try to rebuild the currently loaded source file, except with currently defined symbols from the command line added to the top-level scope to replace those from the file where relevant.  With that done, I can now ask for the type of foo, and get the true type, which is less general than the guess from earlier.
  7. Next, I define a data type and an instance on the command line.  I’d like to be able to define classes as well.  These may not even need to be GHCi-specific enhancements to Haskell.  Local instance declarations, for example, are already suggested (with the restrictions that would be needed to preserve principal types and coherence) by Oleg in the now-famous functional pearl on implicit configurations.  Their use within GHCi is another potential motivating example.
  8. The last bit I threw in is something I’ve found myself wanting far more often than I’d expect: a way to ask GHCi for the type of an expression in context.  So while 6 has type Num a => a as expected, if taken in the given context, it turns out to have the more restricted type Fractional a => a, because of the upcoming division.  I chose {{ and }} as markers in the unverified belief that at least “{{” can never occur in legal Haskell code, so they clearly form a subexpression marker instead of actual code.

Most of these are bigger projects, perhaps, than GSoC gives time for… or at a minimum, would need an applicant already familiar with GHC.  But they sure would be convenient.

January 21, 2011 / cdsmith

A Recap About Cabal and Haskell Libraries

Wow, it turns out I’m not the only person having issues with managing dependencies in Haskell!  There has been a lot of discussion and sharing of ideas and information in a few different places, starting from my original article about “DLL hell” in Haskell three days ago.  I’m going to try to collect all the relevant points together, along with some added reading I’ve done, and organize them into a coherent summary.  I’m sure I’ll inadvertently leave some things out, and if so, I apologize.

Regarding “Blame”

First of all, I want to echo Johan Tibell, who mentioned on Reddit that this is, in many ways, a sign of good things.  Haskell isn’t running into this problem because something broke; it’s running into this problem because it’s being used for a remarkable variety of projects in a way that hasn’t happened before.  I also agree with John Millikin, Mike Burns, and others who pointed out that the problem isn’t uniquely Haskell’s.  Indeed, I think it’s fair to say that perhaps Haskell is one of the few environments where we’ve got a prayer at solving the problem.

The causes come down to two things, basically:

  1. Haskell encourages the writing of many small libraries.  This is in part because we do such a good job of managing the dependencies on all those libraries.  Can anyone imagine that we’d have a lot of the stuff we have now on Hackage if we didn’t have excellent tools for working with it?  If there’s any lesson to be learned here, it’s just that people will try to do more things until they hit the limits of the tools available to them!
  2. Haskell is now being used to write “integration” type software; packages that pull together a lot of very different functionality and build something out of the pieces.  I think this is a relatively new phenomenon, at least at the scale it’s happening.  And it puts a lot more stress on systems like Cabal.

So I’d be unhappy if someone walked away from my article thinking “Haskell sucks at dependency management; I should use something else.”  The thing is, something else probably sucks too, quite possibly a lot more!  In many cases, it sucks so much that you wouldn’t even attempt to do the things we routinely do in Haskell.  No one releases a Java library with only 50 lines of code – in part because it’s hard to do anything interesting in 50 lines of Java, but also because the effort of getting it installed and working with all your other libraries would be so great that it would swamp the effort of rewriting the functionality.

Regarding Defeatism

On the other side, several comments were made to the effect that it’s too much to hope that we can solve version dependency problems in Haskell.  The argument goes that there are lots of other languages that also have these same problems; they haven’t found answers, and we probably won’t either.  There’s mention that this is an “active research topic,” implying that it should be left to the researchers.

It’s good to inject some context into the discussion about the scope of the problem, but ultimately I think we should reject the idea that Haskell isn’t going to solve these problems, for three reasons:

  1. We’re almost there! Despite the admittedly negative tone in the article that started this all, I think I’ve actually enumerated most of the issues with the current system, and they all look solvable.  GHC and Cabal developers have put immense amounts of effort into working on these problems, and they have nearly gotten us to a place where Haskell dependencies just work in a way that’s not true in any other language.  (A key point here: Isolation-based answers help tremendously in practice, but still leave possibilities of problems.  In fact, any time something only works because of isolation, it’s also the case that someone trying to combine that code together into a bigger system is going to run into problems.  So any environment that widely recommends an isolation-based answer is clearly short of the “just works” point, and instead settling for “works so far, and we hope no one tries to do something bigger.”)
  2. These are Haskell’s problems to solve. While dependency management comes up in many different programming languages, the solutions in this case are Haskell solutions.  If you look at the central two issues I pointed out – internal dependencies, and the Cabal butterfly effect – what’s ultimately holding up a solution is work on the compiler to contribute things that relate to its understanding of the Haskell programming language and the interfaces between modules that the compiler builds.  If Haskell doesn’t solve these problems, then no one else is going to do it for us.
  3. Since when does the Haskell community shy away from hard problems? Sure, this is a difficult problem.  So are/were lazy evaluation in a general-purpose language, type classes, models for I/O in a pure functional framework, functional dependencies and type families, software transactional memory, vectorization of data parallel code, type inference for GADTs, and the construction of what is almost certainly the most advanced optimizing compiler in the world.  Haskell did (and/or is doing) all of those.

I don’t mean to dismiss those who pointed out this is a hard problem; it may take a while to solve it; so those who are trying to use Haskell in practice right now are well-advised to find temporary workarounds or partial answers, such as isolating the dependencies of projects.  At the same time, though, part of the unique spirit of Haskell has always been the willingness to live with our problems all the way up to (but not past) our tolerance point, and take the time to find the right answer instead of the expedient one.

That’s always been my interpretation of the Haskell motto to “avoid success at all costs” – what we’re really avoiding is the temptation to take short-cuts with duct-tape and glue, and in the process compromise the effort to find the right answers to questions about Haskell.  This isn’t the fault of the duct tape or the glue, which are useful short-term tools.  But when keeping up with the duct tape and glue gets in the way of making correct decisions, then a programming language starts that deadly race wherein we try to get as popular as we can before the language gets too crufty to use any more, and people jump to something else.

Isolating Environments

Another very informative part of the conversation relates to isolation of different build environments.  I’m not active in the Python or Ruby development communities, but several people (including John Millikin and Mike Burns) mentioned that they routinely solve these kinds of problems with sandbox or isolation tools.  These tools maintain local package databases that are just big enough to build a particular piece of software, thereby guaranteeing that installing some unrelated library for a separate project won’t break your libraries for this one.  Ruby’s tool for this purpose is rvm (Ruby Version Manager), while the Python community uses virtualenv.

It may come as a surprise to many people that Haskell has its own tool for isolated build environments, called cabal-dev.  John Millikin wrote an in-depth introduction on building and using it on Reddit.  Basically, it keeps a private package database for you in a subdirectory of your build directory.  The idea is that you can install a minimal amount of stuff in the system-wide or user-wide databases, and let cabal-dev download and build most dependencies on demand for your specific project and store them locally.  It’s not quite an rvm-level tool, in that it does not manage multiple versions of the compiler, but it sure helps with library version isolation.

As I mentioned above, I see isolation as a short-term good, but perhaps a premature casting off of a hair shirt.  If I can only build all my Haskell stuff because of isolated package databases, then this means there are some integration projects that I could not embark on because they would be too large for the isolated package databases.  So I’m of mixed minds about this; it’s no-doubt good that cabal-dev exists.  On the other hand, I’d hope it does not obscure the need for a right answer to package dependencies.

A few other notes related to isolation:

  • Another idea that was brought up again, and that’s come up in many other places recently, is installing and using a local copy of Hackage for use in a collection of related projects.  Michael Snoyman’s Yackage is a minimal example of this that looks like a piece of cake to install.  It’s also supposed to (eventually?  now?) be relatively easy to install hackage-server and customize the exact set of features you want.  I have yet to do any of this, but it certainly looks appealing, especially if you’re trying to maintain an ecosystem of connected software.
  • Something else that came up with respect to cabal-dev and rvm, for example, is that rvm also isolates the version of the compiler you’re using, as well.  It looks rather difficult, currently, to have multiple versions of the ghc compiler installed at the same time.  Indeed, this is part of what turned me off from doing GHC work some time ago; it looks like it’s more work to keep several GHC versions in working order at once than it is to actually modify the code.  It seems we’re a long way from ‘cabal install ghc-7.1’!
  • Finally, a sort of “partial isolation” that seems unambiguously to be a good thing was mentioned in the context of Ruby’s bundler by Darrin Thompson.  The comment was that when Ruby’s gem system resolves dependencies, it can be asked to write out the exact resolution to a file, and then other tools on other systems can bring exactly that same package dependency tree into being.  I think to date, the Haskell community has largely avoided struggling with deployment issues and integration systems, but it doesn’t seem difficult to get Cabal and cabal-install to pick up the idea of fixing a specific solution to version constraints, to ensure that packages used on deployed systems exactly match their development counterparts, even if a rebuild is needed (e.g., different processor architecture).  Of course, Ruby has a somewhat greater need for this, in that I can generally copy a compiled and statically linked binary in Haskell; but as the move to shared libraries continues, this may well become far more relevant.

Internal Dependencies

Nothing new here, except that there seems to be a general consensus that this needs to be one of the first changes made to the Haskell build infrastructure.  As Duncan has pointed out before, this involves a coordinated change involving both GHC and Cabal.

Something I find myself in agreement with is that perhaps the best approach going forward would be to fix this, and the next point (the “butterfly effect”), and then take stock again of where we are.  Fixing the internal dependencies issue would hopefully reduce the number of version constraints Cabal needs to deal with by an order of magnitude or so.  That might make many of the other issues people are facing go away, or reduce them to the point that they are solvable by people just talking to each other and politely requesting bumps in version upper bounds.  That seems sensible to me; there’s a legitimate hope that this fix would make everything else a matter of patching up the odd points here and there.

The Butterfly Effect

This was point number 3 in the “DLL Hell” article, but I’ve since written an expanded description of what is happening, and made up a name for it in hopes of making it easier to discuss.  The idea is that, while Cabal and GHC are fine with multiple versions of a package existing side by side, they are not okay with multiple packages having the same name and version, but different dependencies.  As a result, one can end up in a situation where installing a package breaks another package.  This is the only issue in the list that I consider to be a bug (albeit probably a known and understood one) in Cabal.

A number of people have brought up GHC’s ABI hash… that long sequence of hexadecimal that GHC appends to your package when you rebuild it.  I’ve spent some time doing a bit of reading into what GHC does here.  While it’s certainly related, unfortunately this still doesn’t actually solve the problem.  What it does do is help GHC to detect the problem.  The idea is that GHC hashes together all the information about the ABI – that is, all the information that needs to be known to correctly link against a package.  Then if some package gets recompiled and exposes a different ABI, the hash will differ, and GHC will notice that packages that depend on it are broken.

This raises the question of whether GHC could just keep around multiple copies of the same library, differing only in their ABI hash.  The answer, as Simon Marlow pointed out, is no.  Because the ABI hash is not included in the names of symbols in a library, trying to link two different packages with the same name and version but different ABI hashes would lead to problems later on.  So currently, the ABI hash is used to detect when a dependency is recompiled, but it cannot be used to keep several instances of the same package version.  The reason for not including the ABI hash in the symbol name seems to be related to avoiding unnecessarily recompiling things that don’t need to be recompiled.  That’s also a valid goal; so something a bit more complex would have to happen to get this sort of thing working.  Still, it doesn’t look undoable.

Several people mentioned the Linux package manager Nix as inspiration here.  It does look very much like what ought to be done.  Whether we would want deferred garbage collection or shared --user installed packages is an interesting question, but I think far less important than solving the immediate problems.

About the Package Versioning Policy (PVP)

One of the surprises for me was the response to my comments about the PVP (the Package Versioning Policy).  This comes back to different people having different kinds of projects, I suppose.  I personally have never witnessed a package that used to build breaking because of upgrades elsewhere and someone failing to put upper bounds on their package dependencies.  Don’t get me wrong; I’ve seen a few Cabal build failures in my time, but generally they’ve always been traced to some other problem; I’ve just never seen it be the case that the package would have been fine with an appropriate upper bound, but failed to compile because there wasn’t.  There’s always been something else involved; usually a compiler version difference and resulting changes in some package for which an older version doesn’t exist for the new compiler.

Apparently other people consider this a significant problem, though, and if others are having problems even today with the PVP not being followed tightly enough, then I certainly retract my suggestion that we consider weakening it.  Suggestions were made to have Hackage enforce the PVP, but my personal feelings always come back to what I think we ought to consider an axiom of Hackage: anything we do that makes it less likely people would upload their packages is a step backward, not forward.  The fact that practically all open-source Haskell code out there is contained in Hackage is an immense treasure of the Haskell community, and we’d be fools to jeopardize that in any way.  Having Hackage enforce the PVP means requiring that a package be buildable in the Hackage environment before accepting it, and that seems like a non-starter.

One question I’d like to keep on the table is the possibility of telling Cabal to distinguish between strong bounds (“I know this won’t work”) and weak bounds (“I haven’t tested this, possibly because it doesn’t exist yet to test”).  Perhaps new version bound operators (>>) and (<<) could be introduced to capture the strong bounds.  Cabal could then be told on a case-by-case basis to override the weak version bounds on other people’s packages.  Then one might imagine an interaction something like:

$ cabal install foo
Configuring foo...
Cannot resolve dependencies for foo (... for some reason involving bar's upper bound on baz ...)
$ cabal relax-depends bar 'baz < 0.5'
$ cabal install foo
(successful build)
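In a .cabal file, the proposed strong and weak bounds might read like this (purely hypothetical syntax, of course):

build-depends:
  base >= 4 && < 5,       -- weak: just untested outside this range
  bar >= 0.4 && << 0.6    -- strong: known to break with bar 0.6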

It’s also worth mentioning here Michael Snoyman’s packdeps package and its web front end.  These are tools for alerting package authors when they are excluding other libraries because of upper bounds on their packages.  This can help reduce the problem of keeping package dependencies up to date.

Interactions With OS Packaging

Finally, there were a few comments about using the operating system package manager instead of trying to “reinvent the wheel” with Cabal.  All things considered, this doesn’t look like a reasonable idea.  The amount of interconnection between Cabal and GHC, mentioned in several places earlier, is good proof that package management is somewhat intimately connected to the build tools of the language we’re doing it in.  Add to this the fact that there are nearly 3000 different packages on Hackage (never mind all the different versions, where some old versions are still needed!), and the fact that several of them are updated several times per day.  Packaging libraries for distribution with the operating system is a completely different model.

However, this brings up the question of what to do about OS-packaged Haskell libraries.  Personally, what I do is just let the operating system manage my global package database, and manage my own user package database.  Then if the operating system packaged libraries are updated, I may even just have to blow away and reinstall all of my user packages, but it’s infrequent enough to not be an issue.  Maybe there’s something theoretically better we could do here, but I don’t see it as a serious issue.
