Planet Haskell

September 03, 2015

Douglas M. Auclair (geophf)

1Liners August 2015

  • August 20th, 2015: Okay this: \(a,b) -> foo a b c d e Somehow curry-i-tize the above expression (make a and b go away!) Is this Applicative?
    • JP @japesinator uncurry $ flip flip e . flip flip d . flip flip c . foo
    • Conor McBride @pigworker (|foo fst snd (|c|) (|d|) (|e|)|)
  • August 19th, 2015: points-free define unintify: unintify :: (Int, Int) -> (Float, Float) where unintify (a,b) = (fromIntegral a, fromIntegral b)
  • August 19th, 2015: points-free define timeser: timeser :: (Float, Float) -> (Float, Float) -> (Float, Float) where timeser (a,b) (c,d) = (a*c, b*d)
  • August 18th, 2015: foo :: (Float, Float) -> (Float, Float) -> Int -> (Float, Float) points-free if: foo (a,b) (c,d) e = ((c-a)/e, (d-b)/e) Arrows? Bimaps?
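For the record, here are point-free candidates for two of these puzzles. These are my own sketches, not answers taken from the thread:

```haskell
import Control.Arrow ((***))
import Control.Monad (join)

-- unintify, point-free: apply fromIntegral to both components.
-- join (***) f is the "both sides" trick: join g x = g x x.
unintify :: (Int, Int) -> (Float, Float)
unintify = join (***) fromIntegral

-- timeser, point-free: componentwise multiplication.
-- ((*) *** (*)) turns (a,b) into the pair of sections ((a*), (b*));
-- uncurry (***) then combines those sections into one function
-- that is applied to the second pair.
timeser :: (Float, Float) -> (Float, Float) -> (Float, Float)
timeser = uncurry (***) . ((*) *** (*))

main :: IO ()
main = do
  print (unintify (2, 3))       -- (2.0,3.0)
  print (timeser (2, 3) (4, 5)) -- (8.0,15.0)
```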

by geophf at September 03, 2015 11:21 PM

1Liners July 2015

  • July 29th, 2015: ... on a roll: Point-free-itize
    foo :: (a -> b, a -> b) -> (a, a) -> (b, b)
    foo (f,g) (x,y) = (f x, g y)
    • \[ c^3 \] @das_kube uncurry (***)
  • July 29th, 2015: I can't believe this wasn't a #1Liner already. Point-free-itize dup:
    dup :: a -> (a,a)
    dup x = (x,x)
    • Antonio Nikishaev @lelff join (,)
    • \[ c^3 \] @das_kube id &&& id
  • July 23rd, 2015: define pairsies so that, e.g.: pairsies [1,2,3] = {{1, 2}, {1, 3}, {2, 3}} pairsies :: [a] -> Set (Set a)
    • pairsies list = concat (list =>> (head &&& tail >>> sequence))
  • July 23rd, 2015: define both :: (a -> b) -> (a,a) -> (b,b)
    • Chris Copeland @chrisncopeland point-freer: both = uncurry . on (,)
    • Brian McKenna @puffnfresh both = join bimap
  • July 23rd, 2015: point-free-itize: gen :: Monad m => (m a, m b) -> m (a, b)
    • Bob Ippolito @etrepum gen = uncurry (liftM2 (,))
  • July 17th, 2015: You may have seen this before, but here we go. point-free-itize swap:
    swap :: (a,b) -> (b,a)
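One standard answer (my sketch; a library version lives in Data.Tuple):

```haskell
-- Point-free swap: uncurry feeds the pair's components to the
-- flipped pair constructor.
swap :: (a, b) -> (b, a)
swap = uncurry (flip (,))

main :: IO ()
main = print (swap (1 :: Int, "x"))  -- ("x",1)
```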

by geophf at September 03, 2015 11:18 PM

1Liners Pre-July 2015

  • Point-free define: foo :: (Ord a, Ord b) => [([a], [b])] -> (Set a, Set b)
    • Андреев Кирилл @nonaem00 foo = (Set.fromList . concat *** Set.fromList . concat) . unzip
  • point-free-itize computeTotalWithTax :: Num b => ((a, b), b) -> b computeTotalWithTax ((a, b), c) = b + c
  • point-free-itize foo (k,v) m = Map.insert k b m with obvs types for k, v, and m.
  • point-free-itize: shower :: forall a. forall b. Show a => [b -> a] -> b -> [a] shower fns thing = map (app . flip (,) thing) fns
  • row :: String -> (Item, (USD, Measure)) given csv :: String -> [String] and line is = "apple,$1.99 Lb" hint: words "a b" = ["a","b"] ... all types mentioned above are in today's @1HaskellADay problem at
  • For Read a, point-free-itize: f a list = read a:list (f is used in a foldr-expression)
    • Or you could just do: map read
  • point-free-itize f such that: f a b c = a + b + c
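A classic answer for the last one (my sketch, not from the thread): compose (+) with itself one level down.

```haskell
-- f a b c = a + b + c, point-free. (+) a gives the section (a +);
-- composing (+) after it yields \b -> ((a + b) +),
-- i.e. \b c -> a + b + c.
f :: Num a => a -> a -> a -> a
f = ((+) .) . (+)

main :: IO ()
main = print (f (1 :: Int) 2 3)  -- 6
```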

by geophf at September 03, 2015 11:13 PM

Wolfgang Jeltsch

Constrained monads

There are Haskell types that have an associated monad structure, but cannot be made instances of the Monad class. The reason is typically that the return or the bind operation of such a type m has a constraint on the type parameter of m. As a result, all the nice library support for monads is unusable for such types. This problem is called the constrained-monad problem.

In my article The Constraint kind, I described a solution to this problem, which involved changing the Monad class. In this article, I present a solution that works with the standard Monad class. This solution has been developed by Neil Sculthorpe, Jan Bracker, George Giorgidze, and Andy Gill. It is described in their paper The Constrained-Monad Problem and implemented in the constrained-normal package.

This article is a write-up of a Theory Lunch talk I gave quite some time ago. As usual, the source of this article is a literate Haskell file, which you can download, load into GHCi, and play with.


We have to enable a couple of language extensions:

{-# LANGUAGE ConstraintKinds,
             GADTs,
             Rank2Types #-}

Furthermore, we need to import some modules:

import Data.Set     hiding (fold, map)
import Data.Natural hiding (fold)

These imports require the packages containers and natural-numbers to be installed.

The set monad

The Set type has an associated monad structure, consisting of a return and a bind operation:

returnSet :: a -> Set a
returnSet = singleton

bindSet :: Ord b => Set a -> (a -> Set b) -> Set b
bindSet sa g = unions (map g (toList sa))

We cannot make Set an instance of Monad though, since bindSet has an Ord constraint on the element type of the result set, which is caused by the use of unions.

For a solution, let us first look at how monadic computations on sets would be expressed if Set were an instance of Monad. A monadic expression would be built from non-monadic expressions and applications of return and (>>=). For every such expression, there would be a normal form of the shape

s1 >>= \ x1 -> s2 >>= \ x2 -> ... -> sn >>= \ xn -> return r

where the si would be non-monadic expressions of type Set. The existence of a normal form would follow from the monad laws.

We define a type UniSet of such normal forms:

data UniSet a where

    ReturnSet  :: a -> UniSet a

    AtmBindSet :: Set a -> (a -> UniSet b) -> UniSet b

We can make UniSet an instance of Monad where the monad operations build expressions and normalize them on the fly:

instance Monad UniSet where

    return a = ReturnSet a

    ReturnSet a     >>= f = f a
    AtmBindSet sa h >>= f = AtmBindSet sa h' where

        h' a = h a >>= f

Note that these monad operations are analogous to operations on lists, with return corresponding to singleton construction and (>>=) corresponding to concatenation. Normalization happens in (>>=) by applying the left-identity and the associativity law for monads.

We can use UniSet as an alternative set type, representing a set by a normal form that evaluates to this set. This way, we get a set type that is an instance of Monad. For this to be sane, we have to hide the data constructors of UniSet, so that different normal forms that evaluate to the same set cannot be distinguished.

Now we need functions that convert between Set and UniSet. Conversion from Set to UniSet is simple:

toUniSet :: Set a -> UniSet a
toUniSet sa = AtmBindSet sa ReturnSet

Conversion from UniSet to Set is expression evaluation:

fromUniSet :: Ord a => UniSet a -> Set a
fromUniSet (ReturnSet a)     = returnSet a
fromUniSet (AtmBindSet sa h) = bindSet sa g where

    g a = fromUniSet (h a)

The type of fromUniSet constrains the element type to be an instance of Ord. This single constraint is enough to make all invocations of bindSet throughout the conversion legal. The reason is our use of normal forms. Since normal forms are “right-leaning”, all applications of (>>=) in them have the same result type as the whole expression.
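To see the set monad in action, here is a self-contained sketch that restates the definitions above together with a small demo. The Functor and Applicative instances are my additions, required by current GHC but not part of the original article:

```haskell
{-# LANGUAGE GADTs #-}

import qualified Data.Set as Set
import Data.Set (Set)

-- Normal forms of monadic set computations, as in the text.
data UniSet a where
  ReturnSet  :: a -> UniSet a
  AtmBindSet :: Set a -> (a -> UniSet b) -> UniSet b

instance Functor UniSet where
  fmap f m = m >>= (ReturnSet . f)

instance Applicative UniSet where
  pure = ReturnSet
  mf <*> mx = mf >>= \f -> fmap f mx

instance Monad UniSet where
  ReturnSet a     >>= f = f a
  AtmBindSet sa h >>= f = AtmBindSet sa (\a -> h a >>= f)

toUniSet :: Set a -> UniSet a
toUniSet sa = AtmBindSet sa ReturnSet

fromUniSet :: Ord a => UniSet a -> Set a
fromUniSet (ReturnSet a)     = Set.singleton a
fromUniSet (AtmBindSet sa h) =
  Set.unions (map (fromUniSet . h) (Set.toList sa))

-- All pairwise sums; the Ord constraint is needed only at the end.
sums :: Set Int
sums = fromUniSet $ do
  x <- toUniSet (Set.fromList [1, 2])
  y <- toUniSet (Set.fromList [10, 20])
  return (x + y)

main :: IO ()
main = print (Set.toList sums)  -- [11,12,21,22]
```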

The multiset monad

Let us now look at a different monad, the multiset monad.

We represent a multiset as a function that maps each value of the element type to its multiplicity in the multiset, with a multiplicity of zero denoting absence of this value:

newtype MSet a = MSet { mult :: a -> Natural }

Now we define the return operation:

returnMSet :: Eq a => a -> MSet a
returnMSet a = MSet ma where

    ma b | a == b    = 1
         | otherwise = 0

For defining the bind operation, we need to define a class Finite of finite types whose sole method enumerates all the values of the respective type:

class Finite a where

    values :: [a]

The implementation of the bind operation is as follows:

bindMSet :: Finite a => MSet a -> (a -> MSet b) -> MSet b
bindMSet msa g = MSet mb where

    mb b = sum [mult msa a * mult (g a) b | a <- values]

Note that the multiset monad differs from the set monad in its use of constraints. The set monad imposes a constraint on the result element type of bind, while the multiset monad imposes a constraint on the first argument element type of bind and another constraint on the result element type of return.
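A quick sanity check of these multiset operations. In this sketch, Natural is replaced by Integer to avoid the natural-numbers dependency, and the Finite instance for Bool is my own addition for demonstration:

```haskell
-- Multisets as multiplicity functions, as in the text.
newtype MSet a = MSet { mult :: a -> Integer }

class Finite a where
  values :: [a]

instance Finite Bool where
  values = [False, True]

returnMSet :: Eq a => a -> MSet a
returnMSet a = MSet (\b -> if a == b then 1 else 0)

bindMSet :: Finite a => MSet a -> (a -> MSet b) -> MSet b
bindMSet msa g = MSet mb where
  mb b = sum [mult msa a * mult (g a) b | a <- values]

-- the multiset {False, False, True}
m :: MSet Bool
m = MSet (\b -> if b then 1 else 2)

-- negating every element gives {True, True, False}
m' :: MSet Bool
m' = bindMSet m (returnMSet . not)

main :: IO ()
main = print (mult m' True, mult m' False)  -- (2,1)
```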

Like in the case of sets, we define a type of monadic normal forms:

data UniMSet a where

    ReturnMSet  :: a -> UniMSet a

    AtmBindMSet :: Finite a =>
                   MSet a -> (a -> UniMSet b) -> UniMSet b

The key difference from UniSet is that UniMSet involves the constraint of the bind operation, so that normal forms must respect this constraint. Without this restriction, it would not be possible to evaluate normal forms later.

The Monad instance declaration for UniMSet is analogous to the one for UniSet:

instance Monad UniMSet where

    return a = ReturnMSet a

    ReturnMSet a      >>= f = f a
    AtmBindMSet msa h >>= f = AtmBindMSet msa h' where

        h' a = h a >>= f

Now we define conversion from MSet to UniMSet:

toUniMSet :: Finite a => MSet a -> UniMSet a
toUniMSet msa = AtmBindMSet msa ReturnMSet

Note that we need to constrain the element type in order to fulfill the constraint incorporated into the UniMSet type.

Finally, we define conversion from UniMSet to MSet:

fromUniMSet :: Eq a => UniMSet a -> MSet a
fromUniMSet (ReturnMSet a)      = returnMSet a
fromUniMSet (AtmBindMSet msa h) = bindMSet msa g where

    g a = fromUniMSet (h a)

Here we need to impose an Eq constraint on the element type. Note that this single constraint is enough to make all invocations of returnMSet throughout the conversion legal. The reason is again our use of normal forms.

A generic solution

The solutions to the constrained-monad problem for sets and multisets are very similar. It is certainly not good if we have to write almost the same code for every new constrained monad that we want to make accessible via the Monad class. Therefore, we define a generic type that covers all such monads:

data UniMonad c t a where

    Return  :: a -> UniMonad c t a

    AtmBind :: c a =>
               t a -> (a -> UniMonad c t b) -> UniMonad c t b

The parameter t of UniMonad is the underlying data type, like Set or MSet, and the parameter c is the constraint that has to be imposed on the type parameter of the first argument of the bind operation.

For every c and t, we make UniMonad c t an instance of Monad:

instance Monad (UniMonad c t) where

    return a = Return a

    Return a     >>= f = f a
    AtmBind ta h >>= f = AtmBind ta h' where

        h' a = h a >>= f

We define a function lift that converts from the underlying data type to UniMonad and thus generalizes toUniSet and toUniMSet:

lift :: c a => t a -> UniMonad c t a
lift ta = AtmBind ta Return

Evaluation of normal forms is just folding with the return and bind operations of the underlying data type. Therefore, we implement a fold operator for UniMonad:

fold :: (a -> r)
     -> (forall a . c a => t a -> (a -> r) -> r)
     -> UniMonad c t a
     -> r
fold return _       (Return a)     = return a
fold return atmBind (AtmBind ta h) = atmBind ta g where

    g a = fold return atmBind (h a)

Note that fold does not need to deal with constraints at all: neither with constraints on the result type parameter of return (like Eq in the case of MSet) nor with constraints on the result type parameter of bind (like Ord in the case of Set). This is because fold works with any result type r.

Now let us implement Monad-compatible sets and multisets based on UniMonad.

In the case of sets, we face the problem that UniMonad takes a constraint for the type parameter of the first bind argument, but bindSet does not have such a constraint. To solve this issue, we introduce a type class Unconstrained of which every type is an instance:

class Unconstrained a

instance Unconstrained a

The implementation of Monad-compatible sets is now straightforward:

type UniMonadSet = UniMonad Unconstrained Set

toUniMonadSet :: Set a -> UniMonadSet a
toUniMonadSet = lift

fromUniMonadSet :: Ord a => UniMonadSet a -> Set a
fromUniMonadSet = fold returnSet bindSet

The implementation of Monad-compatible multisets does not need any utility definitions, but can be given right away:

type UniMonadMSet = UniMonad Finite MSet

toUniMonadMSet :: Finite a => MSet a -> UniMonadMSet a
toUniMonadMSet = lift

fromUniMonadMSet :: Eq a => UniMonadMSet a -> MSet a
fromUniMonadMSet = fold returnMSet bindMSet
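Putting the generic pieces together, here is a runnable sketch. As before, the Functor and Applicative boilerplate is my addition for current GHC, and FlexibleInstances is needed for the catch-all Unconstrained instance:

```haskell
{-# LANGUAGE GADTs, ConstraintKinds, RankNTypes, FlexibleInstances #-}

import qualified Data.Set as Set
import Data.Set (Set)

-- Generic normal forms, parameterized by the bind constraint c
-- and the underlying type t.
data UniMonad c t a where
  Return  :: a -> UniMonad c t a
  AtmBind :: c a => t a -> (a -> UniMonad c t b) -> UniMonad c t b

instance Functor (UniMonad c t) where
  fmap f m = m >>= (Return . f)

instance Applicative (UniMonad c t) where
  pure = Return
  mf <*> mx = mf >>= \f -> fmap f mx

instance Monad (UniMonad c t) where
  Return a     >>= f = f a
  AtmBind ta h >>= f = AtmBind ta (\a -> h a >>= f)

class Unconstrained a
instance Unconstrained a

lift :: c a => t a -> UniMonad c t a
lift ta = AtmBind ta Return

fold :: (a -> r)
     -> (forall b . c b => t b -> (b -> r) -> r)
     -> UniMonad c t a
     -> r
fold ret _       (Return a)     = ret a
fold ret atmBind (AtmBind ta h) =
  atmBind ta (\a -> fold ret atmBind (h a))

type UniMonadSet = UniMonad Unconstrained Set

fromUniMonadSet :: Ord a => UniMonadSet a -> Set a
fromUniMonadSet =
  fold Set.singleton
       (\sa g -> Set.unions (map g (Set.toList sa)))

demo :: Set Int
demo = fromUniMonadSet $ do
  x <- lift (Set.fromList [1, 2, 3])
  return (x * 10)

main :: IO ()
main = print (Set.toList demo)  -- [10,20,30]
```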

Tagged: Andy Gill, constrained-normal (Haskell package), Constraint (kind), containers (Haskell package), functional programming, GADT, George Giorgidze, GHC, Haskell, Institute of Cybernetics, Jan Bracker, literate programming, monad, natural-numbers (Haskell package), Neil Sculthorpe, normal form, talk, Theory Lunch

by Wolfgang Jeltsch at September 03, 2015 08:06 PM

Felipe Almeida Lessa

Using Caps Lock as Menu/Apps keys on Emacs

I’m a user of ergoemacs-mode, a mode that changes most key bindings so that they put less strain on your hands.  For example, it uses Alt instead of Ctrl most of the time, which is easier to press: you use your curled thumb instead of a karate chop.  Also, many commands are activated by first pressing the Menu/Apps key (the key near the Right Ctrl which usually opens the context menu).  For example, pressing Menu then T allows you to switch buffers.

However, the keyboard on my new notebook doesn’t have a dedicated Menu key.  Instead, one needs to press Fn+Right Ctrl, which is of course extremely painful.

Menu key hidden on the Right Ctrl.

I’ve found a workaround, though.  A very hackish workaround.

The ergoemacs-mode FAQ suggests using Caps Lock as a Menu/Apps key for Mac users.  Using xmodmap it’s trivial to make Caps Lock a Menu key:

$ xmodmap -e "keycode 66 = Menu"

However, using xmodmap properly with Gnome is nigh impossible.  It’s recommended to use xkb instead, but xkb doesn’t support mapping Caps Lock to the Menu key out of the box (at least not yet).  At this point, having wandered through many documentation pages, I decided to try using some of the xkb options that already exist.

At first I tried setting Caps Lock as the Hyper key.  However, by default the Hyper key gets the same modifier code as the Super key (which is usually the key with the Windows logo).  There’s a straightforward way of separating them, but I couldn’t find a way to make it play nice with Gnome.  And even if I could, it’s not clear to me if I could use the Hyper key as a substitute for the Menu key on emacs.

When I was ready to admit defeat, I set the Caps Lock behavior to “Caps Lock is disabled” in preparation for trying a hack using xmodmap.  Much to my surprise, I accidentally discovered that Emacs then began treating the disabled Caps Lock key as <VoidSymbol>! The gears started turning in my head, and I added the following line to my ~/.emacs file:

(define-key key-translation-map (kbd "<VoidSymbol>") (kbd "<menu>"))

Surprisingly, it worked!  Now pressing Caps Lock then T will switch buffers, for example.  As a bonus, pressing Caps Lock accidentally while on another application won’t do anything.

It’s not clear to me how fragile this hack really is.  I’ll update this blog post if I ever find some drawback to it.  But right now it seems to work quite nicely.

by Felipe Lessa at September 03, 2015 03:28 PM

September 02, 2015

Roman Cheplyaka

MonadFix example: compiling regular expressions

{-# LANGUAGE RecursiveDo, BangPatterns #-}
import Control.Applicative
import Data.Function (fix)
import Data.IntMap as IntMap
import Control.Monad.Fix (mfix)
import Control.Monad.Trans.State
import Control.Monad.Trans.Class (lift)
import Text.Read (readMaybe)

MonadFix is an odd beast; many Haskell programmers will never use it in their careers. Indeed, one needs MonadFix very rarely; and for that reason, non-contrived cases where it is needed are quite interesting to consider.

In this article, I’ll introduce MonadFix and show how it can be handy for compiling the Kleene closure (also known as star or repetition) of regular expressions.

What is MonadFix?

If you hear about MonadFix for the first time, you might think that it is needed to define recursive monadic actions, just like ordinary fix is used to define recursive functions. That would be a mistake. In fact, fix is just as applicable to monadic actions as it is to functions:

guessNumber m = fix $ \repeat -> do
  putStrLn "Enter a guess"
  n <- readMaybe <$> getLine
  if n == Just m
    then putStrLn "You guessed it!"
    else do
      putStrLn "You guessed wrong; try again"
      repeat

So, what is mfix for? First, recall that in Haskell, one can create recursive definitions not just for functions (which makes sense in other, non-lazy languages) or monadic actions, but for ordinary data structures as well. This is known as cyclic (or circular, or corecursive) definitions; and the technique itself is sometimes referred to as tying the knot.

The classic example of a cyclic definition is the (lazy, infinite) list of Fibonacci numbers:

fib = 0 : 1 : zipWith (+) fib (tail fib)
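The definition refers to itself, yet taking any finite prefix terminates:

```haskell
-- The cyclic Fibonacci list: each later element is forced only
-- after the two earlier ones it depends on are available.
fib :: [Integer]
fib = 0 : 1 : zipWith (+) fib (tail fib)

main :: IO ()
main = print (take 10 fib)  -- [0,1,1,2,3,5,8,13,21,34]
```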

Cyclic definitions are themselves rare in day-to-day Haskell programming; but occasionally, the right hand side will be not a pure value, but a monadic computation that needs to be run in order to obtain the value.

Consider this (contrived) example, where we start the sequence with an arbitrary number entered by the user:

fibIO1 = do
  putStrLn "Enter the start number"
  start <- read <$> getLine
  return $ start : 1 : zipWith (+) fibIO1 (tail fibIO1)

This doesn’t typecheck, because fibIO1 is not a list; it’s an IO action that produces a list.

But if we try to run the computation, it doesn’t make much sense either:

fibIO2 = do
  putStrLn "Enter the start number"
  start <- read <$> getLine
  fib <- fibIO2
  return $ start : 1 : zipWith (+) fib (tail fib)

This version, fibIO2, will ask you to enter the start number ad infinitum and never get around to evaluating anything.

Of course, the simplest thing to do would be to move IO out of the recursive equation; that’s why I said the example was contrived. But MonadFix gives another solution:

fibIO3 = mfix $ \fib -> do
  putStrLn "Enter the start number"
  start <- read <$> getLine
  return $ start : 1 : zipWith (+) fib (tail fib)

Or, using the do-rec syntax:

fibIO4 = do
  rec
    fib <- do
      putStrLn "Enter the start number"
      start <- read <$> getLine
      return $ start : 1 : zipWith (+) fib (tail fib)
  return fib

Compiling regular expressions

As promised, I am going to show you an example usage of MonadFix that solved a problem other than “how could I use MonadFix?”. This came up in my work on regex-applicative.

For a simplified presentation, let’s consider this type of regular expressions:

data RE
  = Sym Char  -- symbol
  | Seq RE RE -- sequence
  | Alt RE RE -- alternative
  | Rep RE    -- repetition

Our goal is to compile a regular expression into a corresponding NFA. The states will be represented by integer numbers. State 0 corresponds to successful completion; and each Sym inside a regex will have a unique positive state in which we are expecting the corresponding character.

type NFAState = Int

The NFA will be represented by a map

type NFA = IntMap (Char, [NFAState])

where each state is mapped to the characters expected at that state and the list of states where we go in case we get the expected character.

To compile a regular expression, we’ll take as an argument the list of states to proceed to when the regular expression as a whole succeeds (otherwise we’d have to compile each subexpression separately and then glue NFAs together). This is essentially the continuation-passing style; only instead of functions, our continuations are NFA states.

During the compilation, we’ll use a stack of two State monads: one to assign sequential state numbers to Syms; the other to keep track of the currently constructed NFA.

-- Returns the list of start states and the transition table
compile :: RE -> ([NFAState], NFA)
compile re = runState (evalStateT (go re [0]) 0) IntMap.empty

-- go accepts exit states, returns entry states
go :: RE -> [NFAState] -> StateT NFAState (State NFA) [NFAState]
go re exitStates =
  case re of
    Sym c -> do
      !freshState <- gets (+1); put freshState
      lift $ modify' (IntMap.insert freshState (c, exitStates))
      return [freshState]
    Alt r1 r2 -> (++) <$> go r1 exitStates <*> go r2 exitStates
    Seq r1 r2 -> go r1 =<< go r2 exitStates

This was easy so far: alternatives share their exit states and their entry states are combined; and consecutive subexpressions are chained. But how do we compile Rep? The exit states of the repeated subexpression should become its own entry states; but we don’t know the entry states until we compile it!

And this is precisely where MonadFix (or recursive do) comes in:

    Rep r -> do
      rec
        let allEntryStates = ownEntryStates ++ exitStates
        ownEntryStates <- go r allEntryStates
      return allEntryStates

Why does this circular definition work? If we unwrap the State types, we’ll see that the go function actually computes a triple of non-strict fields:

  1. The last used state number
  2. The list of entry states
  3. The NFA map

The elements of the triple may depend on each other as long as there are no actual loops during evaluation. One can check that the fields can indeed be evaluated linearly in the order in which they are listed above:

  1. The used state numbers at each step depend only on the regular expression itself, so they can be computed without knowing the other two fields.
  2. The list of entry states relies only on the state number information; it doesn’t need to know anything about the NFA transitions.
  3. The NFA table needs to know the entry and exit states; but that is fine, we can go ahead and compute that information without creating any reverse data dependencies.
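Collecting the fragments into one runnable module makes the trick concrete. This assembly is mine; note that the Rep case needs the rec keyword from RecursiveDo, without which ownEntryStates would be out of scope. Compiling a* yields a single state that loops back to itself:

```haskell
{-# LANGUAGE RecursiveDo, BangPatterns #-}

import qualified Data.IntMap as IntMap
import Data.IntMap (IntMap)
import Control.Monad.Trans.State
import Control.Monad.Trans.Class (lift)

data RE
  = Sym Char  -- symbol
  | Seq RE RE -- sequence
  | Alt RE RE -- alternative
  | Rep RE    -- repetition

type NFAState = Int
type NFA = IntMap (Char, [NFAState])

-- Returns the list of start states and the transition table.
compile :: RE -> ([NFAState], NFA)
compile re = runState (evalStateT (go re [0]) 0) IntMap.empty

-- go accepts exit states, returns entry states.
go :: RE -> [NFAState] -> StateT NFAState (State NFA) [NFAState]
go re exitStates =
  case re of
    Sym c -> do
      !freshState <- gets (+1); put freshState
      lift $ modify' (IntMap.insert freshState (c, exitStates))
      return [freshState]
    Alt r1 r2 -> (++) <$> go r1 exitStates <*> go r2 exitStates
    Seq r1 r2 -> go r1 =<< go r2 exitStates
    Rep r -> do
      rec
        let allEntryStates = ownEntryStates ++ exitStates
        ownEntryStates <- go r allEntryStates
      return allEntryStates

main :: IO ()
main = print (compile (Rep (Sym 'a')))
-- state 1 on 'a' goes back to itself or to the accepting state 0
```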

Further reading

An ASM Monad – a similar example from a different domain.

Oliver Charles’s 24 Days of GHC Extensions: Recursive Do.

Levent Erkok’s thesis which contains all you need to know about MonadFix, including several other examples.

Todd Wilson points out that Douglas McIlroy describes a similar regular expression compilation technique in his 2004 JFP Functional Pearl Enumerating the strings of regular languages. Like this article, McIlroy’s paper uses a circular definition when compiling the Kleene closure. But the circular definition is not monadic there: instead of using the State monad, McIlroy passes the state around by hand.

September 02, 2015 08:00 PM

Wolfgang Jeltsch

MIU in Haskell

In the Theory Lunch of the last week, James Chapman talked about the MU puzzle from Douglas Hofstadter’s book Gödel, Escher, Bach. This puzzle is about a string rewriting system. James presented a Haskell program that computes derivations of strings. Inspired by this, I wrote my own implementation, with the goal of improving efficiency. This blog post presents this implementation. As usual, it is available as a literate Haskell file, which you can load into GHCi.

The puzzle

Let me first describe the MU puzzle briefly. The puzzle deals with strings that may contain the characters M, I, and U. We can derive new strings from old ones using the following rewriting system:

    xI    →  xIU
    Mx    →  Mxx
    xIIIy →  xUy
    xUUy  →  xy

The question is whether it is possible to turn the string MI into the string MU using these rules.

You may want to try to solve this puzzle yourself, or you may want to look up the solution on the Wikipedia page.

The code

The code is not only concerned with deriving MU from MI, but with derivations as such.


We import Data.List:

import Data.List

Basic things

We define the type Sym of symbols and the type Str of symbol strings:

data Sym = M | I | U deriving Eq

type Str = [Sym]

instance Show Sym where

    show M = "M"
    show I = "I"
    show U = "U"

    showList str = (concatMap show str ++)

Next, we define the type Rule of rules as well as the list rules that contains all rules:

data Rule = R1 | R2 | R3 | R4 deriving Show

rules :: [Rule]
rules = [R1,R2,R3,R4]

Rule application

We first introduce a helper function that takes a string and returns the list of all splits of this string. Here, a split of a string str is a pair of strings str1 and str2 such that str1 ++ str2 == str. A straightforward implementation of splitting is as follows:

splits' :: Str -> [(Str,Str)]
splits' str = zip (inits str) (tails str)

The problem with this implementation is that walking through the result list takes quadratic time, even if the elements of the list are left unevaluated. The following implementation solves this problem:

splits :: Str -> [(Str,Str)]
splits str = zip (map (flip take str) [0 ..]) (tails str)
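A quick check that both versions agree (using a generic element type instead of Str, so the sketch runs standalone):

```haskell
import Data.List (inits, tails)

-- Version from the text whose result list is quadratic to walk.
splits' :: [a] -> [([a], [a])]
splits' str = zip (inits str) (tails str)

-- Version whose spine can be walked in linear time.
splits :: [a] -> [([a], [a])]
splits str = zip (map (`take` str) [0 ..]) (tails str)

main :: IO ()
main = do
  print (splits "IU")                 -- [("","IU"),("I","U"),("IU","")]
  print (splits' "IU" == splits "IU") -- True
```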

Next, we define a helper function replace. An expression replace old new str yields the list of all strings that can be constructed by replacing the string old inside str by new.

replace :: Str -> Str -> Str -> [Str]
replace old new str = [front ++ new ++ rear |
                          (front,rest) <- splits str,
                          old `isPrefixOf` rest,
                          let rear = drop (length old) rest]

We are now ready to implement the function apply, which performs rule application. This function takes a rule and a string and produces all strings that can be derived from the given string using the given rule exactly once.

apply :: Rule -> Str -> [Str]
apply R1 str        | last str == I = [str ++ [U]]
apply R2 (M : tail)                 = [M : tail ++ tail]
apply R3 str                        = replace [I,I,I] [U] str
apply R4 str                        = replace [U,U]   []  str
apply _  _                          = []
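Here is a self-contained check of apply. I added a null guard to R1 so that last is never applied to an empty string; everything else mirrors the definitions above:

```haskell
import Data.List (inits, tails, isPrefixOf)

data Sym = M | I | U deriving (Eq, Show)
type Str = [Sym]
data Rule = R1 | R2 | R3 | R4 deriving Show

splits :: Str -> [(Str, Str)]
splits str = zip (inits str) (tails str)

replace :: Str -> Str -> Str -> [Str]
replace old new str =
  [ front ++ new ++ drop (length old) rest
  | (front, rest) <- splits str
  , old `isPrefixOf` rest ]

apply :: Rule -> Str -> [Str]
apply R1 str | not (null str), last str == I = [str ++ [U]]
apply R2 (M : tl)                            = [M : tl ++ tl]
apply R3 str                                 = replace [I, I, I] [U] str
apply R4 str                                 = replace [U, U] [] str
apply _  _                                   = []

main :: IO ()
main = do
  print (apply R2 [M, I])       -- [[M,I,I]]
  print (apply R3 [M, I, I, I]) -- [[M,U]]
```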

Derivation trees

Now we want to build derivation trees. A derivation tree for a string str has the following properties:

  • The root is labeled with str.
  • The subtrees of the root are the derivation trees for the strings that can be generated from str by a single rule application.
  • The edges from the root to its subtrees are marked with the respective rules that are applied.

We first define types for representing derivation trees:

data DTree = DTree Str [DSub]

data DSub  = DSub Rule DTree

Now we define the function dTree that turns a string into its derivation tree:

dTree :: Str -> DTree
dTree str = DTree str [DSub rule subtree |
                          rule <- rules,
                          subStr <- apply rule str,
                          let subtree = dTree subStr]


A derivation is a sequence of strings with rules between them such that each rule takes the string before it to the string after it. We define types for representing derivations:

data Deriv = Deriv [DStep] Str

data DStep = DStep Str Rule

instance Show Deriv where

    show (Deriv steps goal) = "        "           ++
                              concatMap show steps ++
                              show goal            ++
                              "\n"

    showList derivs
        = (concatMap ((++ "\n") . show) derivs ++)

instance Show DStep where

    show (DStep origin rule) = show origin ++
                               "\n-> ("    ++
                               show rule   ++
                               ") "

Now we implement a function derivs that converts a derivation tree into the list of all derivations that start with the tree’s root label. The function derivs traverses the tree in breadth-first order.

derivs :: DTree -> [Deriv]
derivs tree = worker [([],tree)] where

    worker :: [([DStep],DTree)] -> [Deriv]
    worker tasks = rootDerivs tasks        ++
                   worker (subtasks tasks)

    rootDerivs :: [([DStep],DTree)] -> [Deriv]
    rootDerivs tasks = [Deriv (reverse revSteps) root |
                           (revSteps,DTree root _) <- tasks]

    subtasks :: [([DStep],DTree)] -> [([DStep],DTree)]
    subtasks tasks = [(DStep root rule : revSteps,subtree) |
                         (revSteps,DTree root subs) <- tasks,
                         DSub rule subtree          <- subs]

Finally, we implement the function derivations which takes two strings and returns the list of those derivations that turn the first string into the second:

derivations :: Str -> Str -> [Deriv]
derivations start end
    = [deriv | deriv@(Deriv _ goal) <- derivs (dTree start),
               goal == end]

You may want to enter

derivations [M,I] [M,U,I]

at the GHCi prompt to see the derivations function in action. You can also enter

derivations [M,I] [M,U]

to get an idea about the solution to the MU puzzle.

Tagged: Douglas Hofstadter, functional programming, Gödel, Escher, Bach (book), Haskell, Institute of Cybernetics, James Chapman, literate programming, MU puzzle, string rewriting, talk, Theory Lunch

by Wolfgang Jeltsch at September 02, 2015 02:05 PM

MIU in Curry

More than two years ago, my colleague Denis Firsov and I gave a series of three Theory Lunch talks about the MIU string rewriting system from Douglas Hofstadter’s MU puzzle. The first talk was about a Haskell implementation of MIU, the second talk was an introduction to the functional logic programming language Curry, and the third talk was about a Curry implementation of MIU. The blog articles MIU in Haskell and A taste of Curry are write-ups of the first two talks. However, a write-up of the third talk has never seen the light of day so far. This article changes that.

As usual, this article is written using literate programming. The article source is a literate Curry file, which you can load into KiCS2 to play with the code.

I want to thank all the people from the Curry mailing list who have helped me improving the code in this article.


We import the module SearchTree:

import SearchTree

Basic things

We define the type Sym of symbols and the type Str of symbol strings:

data Sym = M | I | U

showSym :: Sym -> String
showSym M = "M"
showSym I = "I"
showSym U = "U"

type Str = [Sym]

showStr :: Str -> String
showStr str = concatMap showSym str

Next, we define the type Rule of rules:

data Rule = R1 | R2 | R3 | R4

showRule :: Rule -> String
showRule R1 = "R1"
showRule R2 = "R2"
showRule R3 = "R3"
showRule R4 = "R4"

So far, the Curry code is basically the same as the Haskell code. However, this is going to change below.

Rule application

Rule application becomes a lot simpler in Curry. In fact, we can code the rewriting rules almost directly to get a rule application function:

applyRule :: Rule -> Str -> Str
applyRule R1 (init ++ [I])              = init ++ [I, U]
applyRule R2 ([M] ++ tail)              = [M] ++ tail ++ tail
applyRule R3 (pre ++ [I, I, I] ++ post) = pre ++ [U] ++ post
applyRule R4 (pre ++ [U, U] ++ post)    = pre ++ post

Note that we do not return a list of derivable strings, as we did in the Haskell solution. Instead, we use the fact that functions in Curry are nondeterministic.

Furthermore, we do not need the helper functions splits and replace that we used in the Haskell implementation. Instead, we use the ++-operator in conjunction with functional patterns to achieve the same functionality.

Now we implement a utility function applyRules for repeated rule application. Our implementation uses a trick similar to the one in the famous Haskell definition of the Fibonacci sequence:

applyRules :: [Rule] -> Str -> [Str]
applyRules rules str = tail strs where

    strs = str : zipWith applyRule rules strs

The Haskell implementation does not need the applyRules function, but it needs a lot of code about derivation trees instead. In the Curry solution, derivation trees are implicit, thanks to nondeterminism.


A derivation is a sequence of strings with rules between them such that each rule takes the string before it to the string after it. We define types for representing derivations:

data Deriv = Deriv [DStep] Str

data DStep = DStep Str Rule

showDeriv :: Deriv -> String
showDeriv (Deriv steps goal) = "        "                ++
                               concatMap showDStep steps ++
                               showStr goal

showDerivs :: [Deriv] -> String
showDerivs derivs = concatMap ((++ "\n") . showDeriv) derivs

showDStep :: DStep -> String
showDStep (DStep origin rule) = showStr origin ++
                                "\n-> ("       ++
                                showRule rule  ++
                                ") "

Now we implement a function derivation that takes two strings and returns the derivations that turn the first string into the second:

derivation :: Str -> Str -> Deriv
derivation start end
    | start : applyRules rules start =:= init ++ [end]
        = Deriv (zipWith DStep init rules) end where

    rules :: [Rule]
    rules free

    init :: [Str]
    init free

Finally, we define a function printDerivations that explicitly invokes a breadth-first search to compute and ultimately print derivations:

printDerivations :: Str -> Str -> IO ()
printDerivations start end = do
    searchTree <- getSearchTree (derivation start end)
    putStr $ showDerivs (allValuesBFS searchTree)

You may want to enter

printDerivations [M, I] [M, I, U]

at the KiCS2 prompt to see the derivation function in action.

Tagged: breadth-first search, Curry, Denis Firsov, Douglas Hofstadter, functional logic programming, functional pattern, functional programming, Haskell, Institute of Cybernetics, KiCS2, literate programming, logic programming, MU puzzle, string rewriting, talk, Theory Lunch

by Wolfgang Jeltsch at September 02, 2015 02:00 PM

Jasper Van der Jeugt

Erasing "expected" messages in Parsec


Parsec is an industrial-strength parser library. I think one of its main advantages is that it allows you to generate really good error messages. However, this sometimes requires some non-obvious tricks. In this blogpost, I describe one of those. On the way, we illustrate how one can split up a Parsec parser into a lexer and an actual parser.

This blogpost assumes a little familiarity with Parsec or parser combinator libraries. There are tons of Parsec tutorials out there, such as this one.

TL;DR: Using <?> "" allows you to erase error messages, which in some cases can actually improve them.

This blogpost is written in literate Haskell so you should be able to just load it up in GHCi and play around with it (you can find the raw .lhs file here).

A simple expression parser

> {-# LANGUAGE FlexibleContexts #-}
> import Control.Monad (void)
> import Text.Parsec

As an example, let’s build a simple Polish notation parser to parse expressions like:

+ 2 (+ 1 4)

We can model the expressions we want to parse like this:

> data Expr
>     = Lit Int
>     | Add Expr Expr
>     deriving (Show)

Our parser is pretty straightforward – there are three cases: literals, additions, and expressions enclosed by parentheses.

> expr :: Stream s m Char => ParsecT s u m Expr
> expr = (<?> "expression") $
>     (Lit <$> natural)               <|>
>     (plus >> Add <$> expr <*> expr) <|>
>     (lparen *> expr <* rparen)

This uses the auxiliary parsers natural, plus, lparen and rparen. These are so-called token parsers. It is a common design pattern to split up a parser into a lexer (in this case, we call the collection of token parsers the lexer) and the actual parser [1].

The idea behind that is that the lexer takes care of fiddling with whitespace, comments, and produces tokens – atomic symbols of the language such as 123, +, (, and ). The parser can then focus on the actual logic: parsing expressions. It doesn’t need to care about details such as whitespace.
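
For contrast, in the traditional phase-separated approach mentioned in the footnote, the lexer would be a standalone function producing a token stream. This post does not use a separate Token type; the following is only a sketch of that alternative (names are mine):

```haskell
import Data.Char (isDigit, isSpace)

data Token = TNat Int | TPlus | TLParen | TRParen
    deriving (Eq, Show)

-- A standalone lexing phase: turn the input into a token stream,
-- skipping whitespace and '#' line comments. Partial by design:
-- an unexpected character is an error in this sketch.
lexer :: String -> [Token]
lexer []         = []
lexer ('#':rest) = lexer (dropWhile (/= '\n') rest)
lexer ('+':rest) = TPlus   : lexer rest
lexer ('(':rest) = TLParen : lexer rest
lexer (')':rest) = TRParen : lexer rest
lexer s@(c:rest)
    | isSpace c  = lexer rest
    | isDigit c  = let (ds, rest') = span isDigit s
                   in TNat (read ds) : lexer rest'
    | otherwise  = error ("lexer: unexpected " ++ show c)
```

A parser would then consume [Token] instead of String. Writing both phases in Parsec at once, as below, avoids maintaining this intermediate type.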


Now, onto the token parsers. These are typically placed in another module. First, let’s build some tools for dealing with whitespace and comments. Parsec already provides a parser to consume white space (spaces), so let’s add one for a comment:

> -- | Consume a comment (from a '#' character to the end of the line) and
> -- return nothing.
> comment :: Stream s m Char => ParsecT s u m ()
> comment = (<?> "comment") $ char '#' >> void (manyTill anyChar endOfLine)

Using comment and spaces, we can build a parser that skips both:

> whitespace :: Stream s m Char => ParsecT s u m ()
> whitespace = do
>     spaces
>     optional $ comment >> whitespace

Now, let’s define a token parser a bit more clearly: a token parser is a parser which consumes an atomic symbol followed by an arbitrary amount of whitespace.

This way, we can just use token parsers after one another in the parser and it is clear that the whitespace in between two tokens is consumed by the first token parser. Then, we only have to remember to strip whitespace from the beginning of the file when we write the top-level parser, like:

whitespace *> expr

Before we define our token parsers, let’s add a quick combinator which facilitates it:

> lexeme :: Stream s m Char => ParsecT s u m a -> ParsecT s u m a
> lexeme p = p <* whitespace

We can use lexeme to define some simple tokens:

> plus, lparen, rparen :: Stream s m Char => ParsecT s u m Char
> plus   = lexeme $ char '+'
> lparen = lexeme $ char '('
> rparen = lexeme $ char ')'

Followed by a slightly more complicated token:

> -- | Parse one or more digits as a decimal integer
> natural :: Stream s m Char => ParsecT s u m Int
> natural = (<?> "number") $ lexeme $ do
>     x  <- try digit
>     xs <- many digit
>     return $ read (x : xs)

That’s it! Now we have our parser. If we parse the following expression:

+ (+ 1 2)
  (+ 3
     # Four is a really cool number
     4)

We get:

Add (Add (Lit 1) (Lit 2)) (Add (Lit 3) (Lit 4))

Looking good!

Erasing “expected” error messages

At last, we arrive at the point of this blogpost. Let’s try to parse the following expression:

+ (+ 1 2)
  (- 2 3)

We get the following error:

unexpected "-"
expecting white space, comment or expression

The error message is correct but a bit verbose. Sure, there could be a comment or whitespace at that position, but the user is probably aware of that. The real issue is that the parser is expecting an expression.

In the Parsec documentation, there is no reference to how one can manipulate this message. However, when we take a closer look at the Parsec source code, it turns out that there is a way: using <?> with the empty string "".

I think treating the empty string as a special case is a bit un-Haskelly – <?> would be more self-documenting if it took a Maybe String as its second argument – but it is what it is.

<?> "" is a bit confusing to read – it is not immediately clear what it does so let’s turn it into a named combinator for clarity:

> eraseExpected :: ParsecT s u m a -> ParsecT s u m a
> eraseExpected = (<?> "")

We can rewrite whitespace using this combinator.

> whitespace' :: Stream s m Char => ParsecT s u m ()
> whitespace' = do
>     skipMany $ eraseExpected space
>     optional $ eraseExpected comment >> whitespace'

Notice that we had to inline the definition of spaces before erasing the error message. This is because <?> only sets the error message if the parser fails without consuming any input. This means that:

eraseExpected spaces

Would not erase the error message if at least one space character is consumed. Hence, we use skipMany $ eraseExpected space.

If we fix lexeme to use the new whitespace', we get a much nicer error message (in the spirit of less is more):

unexpected "-"
expecting expression

Thanks to Alex Sayers for proofreading.

  1. Traditionally, the lexer and parser are actually split into separate phases, where the lexer produces a Token datatype stream from the input String. Parsec, however, also allows you to write both at the same time, which is what we do in this blogpost. Both approaches have advantages and disadvantages.

by Jasper Van der Jeugt at September 02, 2015 12:00 AM

September 01, 2015


Darcs News #111

News and discussions

  1. The next Darcs Sprint will take place in Paris on September 18-20th. Please add yourself to the wiki page if you're going!
  2. Darcs 2.10.1 has been released (bugfixes, dependency versions bump):

Issues resolved (19)

issue2102 Guillaume Hoffmann
issue2307 Daniil Frumin
issue2308 Ben Franksen
issue2327 Alain91
issue2420 Ben Franksen
issue2421 Guillaume Hoffmann
issue2423 Alain91
issue2433 Guillaume Hoffmann
issue2438 Guillaume Hoffmann
issue2444 Ben Franksen
issue2446 Guillaume Hoffmann
issue2447 Ben Franksen
issue2448 Gian Piero Carrubba
issue2449 Ganesh Sittampalam
issue2451 Ben Franksen
issue2457 Ben Franksen
issue2461 Ben Franksen
issue2461 Ben Franksen
issue2463 Joachim Breitner

Patches applied (145)

See darcs wiki entry for details.

by guillaume ( at September 01, 2015 10:26 PM


Implicit Blacklisting for Cabal

I've been thinking about all the Haskell PVP discussion that's been going on lately. It should be no secret by now that I am a PVP proponent. I'm not here to debate the PVP in this post, so for this discussion let's assume that the PVP is a good thing and should be adopted by all packages published on Hackage. More specifically, let's assume this to mean that every package should specify upper bounds on all dependencies, and that most of the time these bounds will be of the form "< a.b".

Recently there has been discussion about problems encountered when packages that have not been using upper bounds change and start using them. The recent issue with the HTTP package is a good example of this. Roughly speaking the problem is that if foo-1.2 does not provide upper bounds on its dependency bar, the constraint solver is perpetually "poisoned" because foo-1.2 will always be a candidate even long after bar has become incompatible with foo-1.2. If later foo-3.9 specifies a bound of bar < 0.5, then when bar-0.5 comes out the solver will try to build with foo-1.2 even though it is hopelessly old. This will result in build errors since bar has long since changed its API.

This is a difficult problem. There are several immediately obvious approaches to solving the problem.

  1. Remove the offending old versions (the ones missing upper bounds) from Hackage.
  2. Leave them on Hackage, but mark them as deprecated/blacklisted so they will not be chosen by the solver.
  3. Go back and retroactively add upper bounds to the offending versions.
  4. Start a new empty Hackage server that requires packages to specify upper bounds on all dependencies.
  5. Start a new Hackage mirror that infers upper bounds based on package upload dates.

All of these approaches have problems. The first three are problematic because they mess with build reproducibility. The fourth approach fragments the community and in the very best case would take a lot of time and effort before gaining adoption. The fifth approach has problems because correct upper bounds cannot always be inferred by upload dates.

I would like to propose a solution I call implicit blacklisting. The basic idea is that for each set of versions with the prefix a.b.c Cabal will only consider a single one: the last one. This effectively means that all the lower versions with the prefix a.b.c will be implicitly blacklisted. This approach should also allow maintainers to modify this behavior by specifying more granular version bounds.

In our previous example, suppose there were a number of 0.4 versions of the bar package, with being the last one. In this case, if foo specified a bound of bar < 0.5, the solver would only consider and would not be considered. This would allow us to completely hide a lack of version bounds by making a new patch release that only bumps the d number. If that release had problems, we could address them with more patch releases.

Now imagine that for some crazy reason foo worked with, but broke it somehow. Note that if bar is following the PVP, that should not be the case. But there are some well-known cases where the PVP can miss things and there is always the possibility of human error. In this case, foo should specify a bound of bar < In this case, the solver should respect that bound and only consider But would still be ignored as before.

Implicit blacklisting has the advantage that we don't need any special support for explicitly marking versions as deprecated/blacklisted. Another advantage is that it does not cause any problems for people who had locked their code down to using specific versions. If foo specified an exact version of bar ==, then that will continue to be chosen. Implicit blacklisting also allows us to leave everything in hackage untouched and fix issues incrementally as they arise with the minimum amount of work. In the above issue with HTTP-4000.0.7, we could trivially address it by downloading that version, adding version bounds, and uploading it as HTTP-4000.0.7.1.
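
The selection rule is easy to state in code. Here is a toy sketch (the list-of-ints Version representation and the function name are mine, for illustration only): group the candidate versions by their a.b.c prefix and keep only the newest in each group.

```haskell
import Data.Function (on)
import Data.List (groupBy, sort)

type Version = [Int]

-- Keep only the latest version for each a.b.c prefix; every lower
-- version sharing that prefix is implicitly blacklisted.
implicitBlacklist :: [Version] -> [Version]
implicitBlacklist = map maximum . groupBy ((==) `on` take 3) . sort
```

For example, given [[0,4,0,1],[0,4,0,3],[0,5,0,0],[0,4,0,2]], only [[0,4,0,3],[0,5,0,0]] remain as candidates; the two older 0.4.0.x releases are hidden.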

All in all, I think this implicit blacklisting idea has a number of desirable properties and very few downsides. It fixes the problem using nothing but our existing infrastructure: version numbers. It doesn’t require us to add new concepts like blacklisted/deprecated flags, out-of-band “revision” markers to denote packages modified after the fact, etc. But since this is a complicated problem I may very well have missed something, so I'd like to hear what the community thinks about this idea.

by mightybyte ( at September 01, 2015 05:45 PM

The Problem with Curation

Recently I received a question from a user asking about "cabal hell" when installing one of my packages. The scenario in question worked fine for us, but for some reason it wasn't working for the user. When users report problems like this they usually do not provide enough information for us to solve it. So then we begin the sometimes arduous back and forth process of gathering the information we need to diagnose the problem and suggest a workaround or implement a fix.

In this particular case luck was on our side and the user's second message just happened to include the key piece of information. The problem in this case was that they were using stackage instead of the normal hackage build that people usually use. Using stackage locks down your dependency bounds to a single version. The user reporting the problem was trying to add additional dependencies to his project and those dependencies required different versions. Stackage was taking away degrees of freedom from the dependency solver (demoting it from the driver seat to the passenger seat). Fortunately in this case the fix was simple: stop freezing down versions with stackage. As soon as the user did that it worked fine.

This highlights the core problem with package curation: it is based on a closed-world assumption. I think that this makes it not a viable answer to the general question of how to solve the package dependency problem. The world that many users will encounter is not closed. People are constantly creating new packages. Curation resources are finite and trying to keep up with the world is a losing battle. Also, even if we had infinite curation resources and zero delay between the creation of a package and its inclusion in the curated repository, that would still not be good enough. There are many people working with code that is not public and therefore cannot be curated. We need a more general solution to the problem that doesn't require a curator.

by mightybyte ( at September 01, 2015 05:45 PM

Using Cabal With Large Projects

In the last post we talked about basic cabal usage. That all works fine as long as you're working on a single project and all your dependencies are in hackage. When Cabal is aware of everything that you want to build, it's actually pretty good at dependency resolution. But if you have several packages that depend on each other and you're working on development versions of these packages that have not yet been released to hackage, then life becomes more difficult. In this post I'll describe my workflow for handling the development of multiple local packages. I make no claim that this is the best way to do it. But it works pretty well for me, and hopefully others will find this information helpful.

Consider a situation where package B depends on package A and both of them depend on bytestring. Package A has wide version bounds for its bytestring dependency while package B has narrower bounds. Because you're working on improving both packages you can't just do "cabal install" in package B's directory because the correct version of package A isn't on hackage. But if you install package A first, Cabal might choose a version of bytestring that won't work with package B. It's a frustrating situation because eventually you'll have to end up worrying about dependencies issues that Cabal should be handling for you.

The best solution I've found to the above problem is cabal-meta. It lets you specify a sources.txt file in your project root directory with paths to other projects that you want included in the package's build environment. For example, I maintain the snap package, which depends on several other packages that are part of the Snap Framework. Here's what my sources.txt file looks like for the snap package:


My development versions of the other four packages reside in the parent directory on my local machine. When I build the snap package with cabal-meta install, cabal-meta tells Cabal to look in these directories in addition to whatever is in hackage. If you do this initially for the top-level package, it will correctly take into consideration all your local packages when resolving dependencies. Once you have all the dependencies installed, you can go back to using Cabal and ghci to build and test your packages. In my experience this takes most of the pain out of building large-scale Haskell applications.

Another tool that is frequently recommended for handling this large-scale package development problem is cabal-dev. cabal-dev allows you to sandbox builds so that differing build configurations of libraries can coexist without causing problems like they do with plain Cabal. It also has a mechanism for handling this local package problem above. I personally tend to avoid cabal-dev because in my experience it hasn't played nicely with ghci. It tries to solve the problem by giving you the cabal-dev ghci command to execute ghci using the sandboxed environment, but I found that it made my ghci workflow difficult, so I prefer using cabal-meta which doesn't have these problems.

I should note that cabal-dev does solve another problem that cabal-meta does not. There may be cases where two different packages may be completely unable to coexist in the same Cabal "sandbox" if their set of dependencies are not compatible. In that case, you'll need cabal-dev's sandboxes instead of the single user-level package repository used by Cabal. I am usually only working on one major project at a time, so this problem has never been an issue for me. My understanding is that people are currently working on adding this kind of local sandboxing to Cabal/cabal-install. Hopefully this will fix my complaints about ghci integration and should make cabal-dev unnecessary.

There are definitely things that need to be done to improve the cabal tool chain. But in my experience working on several different large Haskell projects both open and proprietary I have found that the current state of Cabal combined with cabal-meta (and maybe cabal-dev) does a reasonable job at handling large project development within a very fast moving ecosystem.

by mightybyte ( at September 01, 2015 05:45 PM

Why Cabal Has Problems

Haskell's package system, henceforth just "Cabal" for simplicity, has gotten some harsh press in the tech world recently. I want to emphasize a few points that I think are important to keep in mind in the discussion.

First, this is a hard problem. There's a reason the term "DLL hell" existed long before Cabal. I can't think of any package management system I've used that didn't generate quite a bit of frustration at some point.

Second, the Haskell ecosystem is also moving very quickly. There's the ongoing iteratees/conduits/pipes debate of how to do IO in an efficient and scalable way. Lenses have recently seen major advances in the state of the art. There is tons of web framework activity. I could go on and on. So while Hackage may not be the largest database of reusable code, the larger ones like CPAN that have been around for a long time are probably not moving as fast (in terms of advances in core libraries).

Third, I think Haskell has a unique ability to facilitate code reuse even for relatively small amounts of code. The web framework scene demonstrates this fairly well. As I've said before, even though there are three main competing frameworks, libraries in each of the frameworks can be mixed and matched easily. For example, web-routes-happstack provides convenience code for gluing together the web-routes package with happstack. It is 82 lines of code. web-routes-wai does the same thing for wai with 81 lines of code. The same thing could be done for Snap with a similar amount of code.

The languages with larger package repositories like Ruby and Python might also have small glue packages like this, but they don't have the powerful strong type system. This means that when a Cabal build fails because of dependency issues, you're catching an interaction much earlier than you would have caught it in the other languages. This is what I'm getting at when I say "unique ability to facilitate code reuse".

When you add Haskell's use of cross-module compiler optimizations to all these previous points, I think it makes a compelling case that the Haskell community is at or near the frontier of what has been done before even though we may be a ways away in terms of raw number of packages and developers. Thus, it should not be surprising that there are problems. When you're at the edge of the explored space, there's going to be some stumbling around in the dark and you might go down some dead end paths. But that's not a sign that there's something wrong with the community.

Note: The first published version of this article made some incorrect claims based on incorrect information about the number of Haskell packages compared to the number of packages in other languages. I've removed the incorrect numbers and adjusted my point.

by mightybyte ( at September 01, 2015 05:45 PM

Why version bounds cannot be inferred retroactively (using dates)

In past debates about Haskell's Package Versioning Policy (PVP), some have suggested that package developers don't need to put upper bounds on their version constraints because those bounds can be inferred by looking at what versions were available on the date the package was uploaded. This strategy cannot work in practice, and here's why.

Imagine someone creates a small new package called foo. It's a simple package, say something along the lines of the formattable package that I recently released. One of the dependencies for foo is errors, a popular package supplying frequently used error handling infrastructure. The developer happens to already have errors-1.4.7 installed on their system, so this new package gets built against that version. The author uploads it to hackage on August 16, 2015 with no upper bounds on its dependencies. Let's for simplicity imagine that errors is the only dependency, so the .cabal file looks like this:

name: foo
build-depends: errors

If we come back through at some point in the future and try to infer upper bounds by date, we'll see that on August 16, the most recent version of errors was 2.0.0. Here's an abbreviated illustration of the picture we can see from release dates:

If we look only at release dates, and assume that packages were building against the most recent version, we will try to build foo with errors-2.0.0. But that is incorrect! Building foo with errors-2.0.0 will fail because errors had a major breaking change in that version. Bottom line: dates are irrelevant--all that matters is what dependency versions the author happened to be building against! You cannot assume that package authors will always be building against the most recent versions of their dependencies. This is especially true if our developer was using the Haskell Platform or LTS Haskell because those package collections lag the bleeding edge even more. So this scenario is not at all unlikely.

It is also possible for packages to be maintaining multiple major versions simultaneously. Consider large projects like the linux kernel. Developers routinely do maintenance releases on 4.1 and 4.0 even though 4.2 is the latest version. This means that version numbers are not always monotonically increasing as a function of time.

I should also mention another point on the meaning of version bounds. When a package specifies version bounds like this...

name: foo
build-depends:
  errors >= 1.4 && < 1.5

This is not saying "my package will not work with errors-1.5 and above". It is actually saying, "I warrant that my package does work with those versions of errors (provided errors complies with the PVP)". So the idea that "< 1.5" is a "preemptive upper bound" is wrong. The package author is not preempting anything. Bounds are simply information. The upper and lower bounds are important things that developers need to tell you about their packages to improve the overall health of the ecosystem. Build tools are free to do whatever they want with that information. Indeed, cabal-install has a flag --allow-newer that lets you ignore those upper bounds and step outside the version ranges that the package authors have verified to work.

In summary, the important point here is that you cannot use dates to infer version bounds. You cannot assume that package authors will always be building against the most recent versions of their dependencies. The only reliable thing to do is for the package maintainer to tell you explicitly what versions the package is expected to work with. And that means lower and upper bounds.

by mightybyte ( at September 01, 2015 05:45 PM

Douglas M. Auclair (geophf)

August 2015 1HaskellADay Problems and Solutions

August 2015

  • August 31st, 2015: What do 3,000 circles look like? We answer this question in today's #haskell problem Ah! Of course! 3,000 circles (unscaled, with numeric indices) look like a mess! Of course! 
  • August 28th, 2015: For today's #haskell problem: you said you wuz #BigData but you wuz only playin'! View and scale 'some' data today. Playahz gunna play ... with ... wait: lenses? WAT?
  • August 27th, 2015: Today's #haskell problem inspired from twitter: prove the soundness of ME + YOU = FOREVER Today's #haskell solution is a simpl(istic)e and specific arithmetic (dis)prover ME+YOU /= FOREVER It ain't happenin'
  • August 26th, 2015: You've heard of The Darkness? Well, today's #haskell problem is all about the Brightness Bright eyes! burnin' like fire! 
  • August 25th, 2015: Well, color me surprised! Today's #haskell problem asks to color by Num(bers) And we find out how colors can be numbers, or numbers (Integers) can be colors ... either way
  • August 24th, 2015: You thought I would say 'Purple' (as in Rain) for today's #haskell problem, but I was only playin' #PSA Circles are NOT jerkles ... because ... I don't even know what 'jerkles' ARE!
  • August 21st, 2015: So, ooh! PRITTY COLOURS YESTERDAY! BUT WHAT DO THEY MEAN? Today's #haskell problem we cluster data DO IT TO IT!
  • August 20th, 2015: For today's #haskell problem, now that we have yesterday solved, let's COLOUR the dots! Okay, very hack-y but, indeed: colour-y! (and the index colours need work, too ...)
  • August 19th, 2015: Let's look at some cells in a bounding box, shall we? for today's #haskell problem Share you results here on twitter! Ooh! I see blue dots! K3wl! 
  • August 18th, 2015: In #hadoop you can store a lot of data...but then you have to interpret that stored data for today's #haskell program Today, the SCA/Society for Creative Anachronisms solved the problem. No: SCA/Score Card Analysis ... my bad!
  • August 17th, 2015: For Today's #haskell problem we learn that bb does NOT mean 'Big Brother' (1984). What DOES it mean, then? Tune in! We learn that @geophf cannot come up with interesting title names for so early in the morning!
  • August 14th, 2015: We find out in today's #haskell problem that if 'kinda-prime' numbers had a taste, they would be 'yummy.'
  • August 13th, 2015: We generalize to divbyx-rule by using Singapore Maths-laaaaah for today's #Haskell problem "divby7 is too easy now-laaaah!" ... but there are interesting results for tomorrow's problem
  • August 12th, 2015: Is divby3 a fixpoint? We address this question in today's #haskell problem "There, fixed divby3 for ya!" you crow, in on the 'fix'-joke *groan *fixpoint-humour
  • August 11th, 2015: Today, I ask you to step up your composable-game, #haskell-tweeps! Today's div-by #haskell problem So we ♫ 'head for the mountains!' ♫ for our composable solution of divby10 and divby30 but leave an open question ...
  • August 10th, 2015: Neat little paper on divisibility rules ( leads to today's #Haskell problem divide by 3 rule! A number is divisible by three if the sum of its digits are. PROVED!
  • August 7th, 2015: For today's #haskell problem we relook yesterday's with Data.Monoid and fold(r) ... for fun(r) We're using code from the future (or the bonus answer, anyway) to answer today's #haskell problem
  • August 6th, 2015: For today's #haskell problem, @elizabethfoss provides us the opportunity to do ... MATHS! TALLY HO! Today's solution has Haskell talking with a LISP! (geddit? ;)
  • August 5th, 2015: Today's #Haskell problem shows us that 'anagramatic' is a word now, by way of @argumatronic We learned that #thuglife and #GANGSTA are a bifunctor, but not anagrams with @argumatronic 
  • August 4th, 2015: We actually write a PROGRAM for today's #haskell problem that DOES STUFF! WOW! #curbmyenthusiasm #no Today we learnt to talk like a pirate ... BACKWARDS! ARGGGH! ... no ... wait: !HGGGRA Yeah, that's it.
  • August 3rd, 2015: For today's #haskell problem, we design a Hadoop database ... ya know, without all that bothersome MapReduce stuff ;)

by geophf ( at September 01, 2015 04:13 PM

FP Complete

stack: more binary package sharing

This blog post describes a new feature in stack. Until now, multiple projects using the same snapshot could share the binary builds of packages. However, two separate snapshots could not share the binary builds of their packages, even if they were substantially identical. That's now changing.

tl;dr: stack will now be able to install new snapshots much more quickly, with less disk space usage, than previously.

This has been a known shortcoming since stack was first released. It's not coincidental that this support is being added not long after a similar project completed for Cabal. Ryan Trinkle (Vishal's mentor on the project) described the work to me a few months back, and I decided to wait to see the outcome of the project before working on the feature in stack.

The improvements to Cabal here are superb, and I'm thrilled to see them happening. However, after reviewing and discussing with a few stack developers and users, I decided to implement a different approach that doesn't take advantage of the new Cabal changes. The reasons are:

  • As Herbert very aptly pointed out on Reddit:

    Since Stack sandboxes everything maximum sharing between LTS versions can easily be implemented going back to GHC 7.0 without this new multi-instance support.

    This multi-instance support is needed if you want to accomplish the same thing without isolated sandboxes in a single package db.

  • There are some usability concerns around a single massive database with all packages in it. Specifically, there are potential problems around getting GHC to choose a coherent set of packages when using something like ghci or runghc. Hopefully some concept of views will be added (as Duncan described in the original proposal), but the implications still need to be worked out.

  • stack users are impatient (and I mean that in the best way possible). Why wait for a feature when we could have it now? While the Cabal Google Summer of Code project is complete, the changes are not yet merged to master, much less released. stack would need to wait until those changes are readily available to end users before relying on them.

stack's implementation

I came up with some complicated approaches to the problem, but ultimately a comment from Aaron Wolf rang true:

check the version differences and just copy compiled binaries from previous LTS for unchanged items

It turns out that this is really easy. The implementation ends up having two components:

  1. Whenever a snapshot package is built, write a precompiled cache file containing the filepaths of the library's .conf file (from inside the package database) and all of the executables installed.
  2. Before building a snapshot package, check for a precompiled cache file. If the file exists, copy over the executables and register the .conf file into the new snapshots database.

That precompiled cache file's path looks something like this:


This encodes the GHC version, Cabal version, package name, and package version. The last bit is a hash of all of the configuration information, including flags, GHC options, and dependencies; putting that hash in the filepath ensures that when we look up a precompiled package, we're getting something that matches what we'd be building ourselves now.
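
As a rough sketch of the idea (hypothetical code; stack's actual hashing scheme and path layout differ), the cache path can be thought of as a pure function of the build configuration:

```haskell
import Data.Char (ord)
import Data.List (foldl', intercalate)
import Numeric (showHex)

-- Illustrative only: stack's real implementation differs.  The point is
-- that the path encodes GHC version, Cabal version, package name/version,
-- and a hash of the full build configuration (flags, GHC options, deps),
-- so a lookup only ever matches an identically-configured build.
configHash :: [String] -> String
configHash parts = showHex (foldl' step (5381 :: Integer) (intercalate ";" parts)) ""
  where
    step h c = (h * 33 + toInteger (ord c)) `mod` (2 ^ 63)  -- djb2-style toy hash

cachePath :: String -> String -> String -> String -> [String] -> FilePath
cachePath ghcVer cabalVer pkg ver config = intercalate "/"
  [ "precompiled", "ghc-" ++ ghcVer, "Cabal-" ++ cabalVer
  , pkg ++ "-" ++ ver, configHash config ]

main :: IO ()
main = putStrLn (cachePath "7.10.2" "1.22.4.0" "text" "1.2.1.3" ["-O2"])
```

Two builds with the same configuration hash to the same path; any change to flags, options, or dependencies produces a different path.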

The reason we can get away with this approach in stack is because of the invariants of a snapshot, namely: each snapshot has precisely one version of a package available, and therefore we have no need to deal with the new multi-instance installations GHC 7.10 supports. This also means no concern around views: a snapshot database is by its very nature a view.


The benefits of this change:

  • Decreased compile times
  • Decreased disk space usage


The one downside:

  • You can't reliably delete a single snapshot, as there can be files shared between different snapshots. Deleting a single snapshot was never an officially supported feature previously, but if you knew what you were doing, you could do it safely.

After discussing with others, this trade-off seems acceptable: the overall decrease in disk space usage means that the desire to delete a single snapshot will be reduced. When real disk space reclaiming needs to happen, the recommended approach will be to wipe all snapshots and start over, which (1) will be an infrequent occurrence, and (2) due to the faster compile times, will be less burdensome.

September 01, 2015 06:00 AM

Gabriel Gonzalez

State of the Haskell ecosystem - August 2015

Note: This went out as an RFC draft a few weeks ago, which is now a live wiki. See the Conclusions section at the end for more details.

In this post I will describe the current state of the Haskell ecosystem to the best of my knowledge and its suitability for various programming domains and tasks. The purpose of this post is to discuss both the good and the bad by advertising where Haskell shines while highlighting where I believe there is room for improvement.

This post is grouped into two sections: the first section covers Haskell's suitability for particular programming application domains (e.g. servers, games, or data science) and the second section covers Haskell's suitability for common general-purpose programming needs (such as testing, IDEs, or concurrency).

The topics are roughly sorted from greatest strengths to greatest weaknesses. Each programming area will also be summarized by a single rating of either:

  • Best in class: the best experience in any language
  • Mature: suitable for most programmers
  • Immature: only acceptable for early-adopters
  • Bad: pretty unusable

The more positive the rating, the more I will support it with success stories in the wild; the more negative the rating, the more I will offer constructive advice for how to improve things.

Disclaimer #1: I obviously don't know everything about the Haskell ecosystem, so whenever I am unsure I will make a ballpark guess and clearly state my uncertainty in order to solicit opinions from others who have more experience. I keep tabs on the Haskell ecosystem pretty well, but even this post is stretching my knowledge. If you believe any of my ratings are incorrect, I am more than happy to accept corrections (both upwards and downwards).

Disclaimer #2: There are some "Educational resource" sections below which are remarkably devoid of books, since I am not as familiar with textbook-related resources. If you have suggestions for textbooks to add, please let me know.

Disclaimer #3: I am very obviously a Haskell fanboy if you haven't guessed from the name of my blog and I am also an author of several libraries mentioned below, so I'm highly biased. I've made a sincere effort to honestly appraise the language, but please challenge my ratings if you believe that my bias is blinding me! I've also clearly marked Haskell sales pitches as "Propaganda" in my external link sections. :)

Table of Contents

Application Domains


Compilers

Rating: Best in class

Haskell is an amazing language for writing your own compiler. If you are writing a compiler in another language you should genuinely consider switching.

Haskell originated in academia, and most languages of academic origin (such as the ML family of languages) excel at compiler-related tasks for obvious reasons. As a result the language has a rich ecosystem of libraries dedicated to compiler-related tasks, such as parsing, pretty-printing, unification, bound variables, syntax tree manipulations, and optimization.

Anybody who has ever written a compiler knows how difficult they are to implement because by necessity they manipulate very weakly typed data structures (trees and maps of strings and integers). Consequently, there is a huge margin for error in everything a compiler does, from type-checking to optimization, to code generation. Haskell knocks this out of the park, though, with a really powerful type system with many extensions that can eliminate large classes of errors at compile time.

I also believe that there are many excellent educational resources for compiler writers, both papers and books. I'm not the best person to summarize all the educational resources available, but the ones that I have read have been very high quality.

Finally, there are a large number of parsers and pretty-printers for other languages which you can use to write compilers to or from these languages.

Notable libraries:

Some compilers written in Haskell:

Educational resources:

Server-side programming

Rating: Mature

Haskell's second biggest strength is the back-end, both for web applications and services. The main features that the language brings to the table are:

  • Server stability
  • Performance
  • Ease of concurrent programming
  • Excellent support for web standards

The strong type system and polished runtime greatly improve server stability and simplify maintenance. This is the greatest differentiator of Haskell from other backend languages, because it significantly reduces the total-cost-of-ownership. You should expect that you can maintain Haskell-based services with significantly fewer programmers than other languages, even when compared to other statically typed languages.

However, the greatest threat to server stability is space leaks. The most common solution that I know of is to use ekg (a process monitor) to examine a server's memory stability before deploying to production. The second most common solution is to learn to detect and prevent space leaks with experience, which is not as hard as people think.

Haskell's performance is excellent and currently comparable to Java. Both languages give roughly the same performance in beginner or expert hands, although for different reasons.

Where Haskell shines in usability is the runtime support for the following three features:

  • lightweight threads (which differentiate Haskell from the JVM)
  • software transactional memory (which differentiates Haskell from Go)
  • garbage collection (which differentiates Haskell from Rust)

Many languages support two of the above three features, but Haskell is the only one that I know of that supports all three.

If you have never tried out Haskell's software transactional memory you should really, really, really give it a try, since it eliminates a large number of concurrency logic bugs. STM is far and away the most underestimated feature of the Haskell runtime.
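
To give a flavor of STM (a minimal sketch of my own using the stm package, not code from any particular service), here is an atomic transfer between two shared counters; `check` makes the transaction block and retry until funds suffice:

```haskell
import Control.Concurrent.STM

-- Transfer between two accounts atomically.  If the source balance is
-- insufficient, the transaction retries (blocks) until another thread
-- changes one of the TVars it read.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- retry until enough funds
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (60,40)
```

The whole `transfer` either commits as a unit or not at all; no locks are visible to the programmer.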

Notable libraries:

  • warp / wai - the low-level server and API that all server libraries share, with the exception of snap
  • scotty - A beginner-friendly server framework analogous to Ruby's Sinatra
  • spock - Lighter than the "enterprise" frameworks, but more featureful than scotty (type-safe routing, sessions, conn pooling, csrf protection, authentication, etc)
  • yesod / yesod-* / snap / snap-* / happstack-server / happstack-* - "Enterprise" server frameworks with all the bells and whistles
  • servant / servant-* - This server framework might blow your mind
  • authenticate / authenticate-* - Shared authentication libraries
  • ekg / ekg-* - Haskell service monitoring
  • stm - Software-transactional memory

Some web sites and services powered by Haskell:


Educational resources:

Scripting / Command-line applications

Rating: Mature

Haskell's biggest advantage as a scripting language is that Haskell is the most widely adopted language that supports global type inference. Many languages support local type inference (such as Rust, Go, Java, C#), which means that function argument types and interfaces must be declared but everything else can be inferred. In Haskell, you can omit everything: all types and interfaces are completely inferred by the compiler (with some caveats, but they are minor).

Global type inference gives Haskell the feel of a scripting language while still providing static assurances of safety. Script type safety matters in particular for enterprise environments where glue scripts running with elevated privileges are one of the weakest points in these software architectures.

The second benefit of Haskell's type safety is ease of script maintenance. Many scripts grow out of control as they accrete arcane requirements and once they begin to exceed 1000 LOC they become difficult to maintain in a dynamically typed language. People rarely budget sufficient time to create a sufficiently extensive test suite that exercises every code path for each and every one of their scripts. Having a strong type system is like getting a large number of auto-generated tests for free that exercise all script code paths. Moreover, the type system is more resilient to refactoring than a test suite.

However, the main reason I mark Haskell as mature is that the language is also usable even for simple one-off disposable scripts. These Haskell scripts are comparable in size and simplicity to their equivalent Bash or Python scripts. This lets you easily start small and finish big.

Haskell has one advantage over many dynamic scripting languages, which is that Haskell can be compiled into a native and statically linked binary for distribution to others.

Haskell's scripting libraries are feature complete and provide all the niceties that you would expect from scripting in Python or Ruby, including features such as:

  • rich suite of Unix-like utilities
  • advanced sub-process management
  • POSIX support
  • light-weight idioms for exception safety and automatic resource disposal
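
A disposable script in this style might look like the following (a hypothetical example using only the directory package that ships with GHC, runnable directly with runghc):

```haskell
#!/usr/bin/env runghc
-- A tiny throwaway script: list the .hs files in the current
-- directory, sorted.  No build step needed; run it with runghc.
import Data.List (isSuffixOf, sort)
import System.Directory (getDirectoryContents)

main :: IO ()
main = do
  entries <- getDirectoryContents "."
  mapM_ putStrLn (sort (filter (".hs" `isSuffixOf`) entries))
```

The same file also compiles with ghc into a standalone binary, which is the "start small, finish big" path described above.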

Notable libraries:

Some command-line tools written in Haskell:

Educational resources:

Numerical programming

Rating: Immature? (Uncertain)

Haskell's numerical programming story is not ready, but steadily improving.

My main experience in this area was from a few years ago doing numerical programming for bioinformatics that involved a lot of vector and matrix manipulation and my rating is largely colored by that experience.

The biggest issues that the ecosystem faces are:

  • Really clunky matrix library APIs
  • Fickle rewrite-rule-based optimizations

When the optimizations work they are amazing and produce code competitive with C. However, small changes to your code can cause the optimizations to suddenly not trigger and then performance drops off a cliff.

There is one Haskell library that avoids this problem entirely which I believe holds a lot of promise: accelerate generates LLVM and CUDA code at runtime and does not rely on Haskell's optimizer for code generation, which side-steps the problem. accelerate has a large set of supported algorithms that you can find by just checking the library's reverse dependencies:

However, I don't have enough experience with accelerate or enough familiarity with numerical programming success stories in Haskell to vouch for this just yet. If somebody has more experience than I do in this regard and can provide evidence that the ecosystem is mature then I might consider revising my rating upward.

Notable libraries:


Educational Resources:

Front-end web programming

Rating: Immature

This boils down to Haskell's ability to compile to Javascript. ghcjs is the front-runner, but for a while setting up ghcjs was non-trivial. However, ghcjs appears to be very close to having a polished setup story now that ghc-7.10.2 is out (Source).

One of the distinctive features of ghcjs compared to other competing Haskell-to-Javascript compilers is that a huge number of Haskell libraries work out of the box with ghcjs because it supports most Haskell primitive operations.

I would also like to mention that there are two Haskell-like languages that you should also try out for front-end programming: elm and purescript. These are both used in production today and have equally active maintainers and communities of their own.

Areas for improvement:

  • There needs to be a clear story for smooth integration with existing Javascript projects
  • There need to be many more educational resources targeted at non-experts explaining how to translate existing front-end programming idioms to Haskell
  • There need to be several well-maintained and polished Haskell libraries for front-end programming

Notable Haskell-to-Javascript compilers:

Notable libraries:

  • reflex-dom - Functional reactive programming library for DOM manipulation

Distributed programming

Rating: Immature

This is sort of a broad area since I'm using this topic to refer to both distributed computation (for analytics) and distributed service architectures. However, in both regards Haskell is lagging behind its peers.

The JVM, Go, and Erlang have much better support for this sort of thing, particularly in terms of libraries.

There has been a lot of work in replicating Erlang-like functionality in Haskell through the Cloud Haskell project, not just in creating the low-level primitives for code distribution / networking / transport, but also in assembling a Haskell analog of Erlang's OTP. I'm not that familiar with how far progress is in this area, but people who love Erlang should check out Cloud Haskell.

Areas for improvement:

  • We need more analytics libraries. Haskell has no analog of scalding or spark. The most we have is just a Haskell wrapper around hadoop
  • We need a polished consensus library (i.e. a high quality Raft implementation in Haskell)

Notable libraries:

Standalone GUI applications

Rating: Immature

Haskell really lags behind the C# and F# ecosystem in this area.

My experience on this is based on several private GUI projects I wrote several years back. Things may have improved since then so if you think my assessment is too negative just let me know.

All Haskell GUI libraries are wrappers around toolkits written in other languages (such as GTK+ or Qt). The last time I checked the gtk bindings were the most comprehensive, best maintained, and had the best documentation.

However, the Haskell bindings to GTK+ have a strongly imperative feel to them. The way you do everything is to communicate between callbacks by mutating IORefs. Also, you can't take extensive advantage of Haskell's awesome threading features because the GTK+ runtime is picky about what needs to happen on certain threads. I haven't really seen a Haskell library that takes this imperative GTK+ interface and wraps it in a more idiomatic Haskell API.

My impression is that most Haskell programmers interested in applications programming have collectively decided to concentrate their efforts on improving Haskell web applications instead of standalone GUI applications. Honestly, that's probably the right decision in the long run.

Another post that goes into more detail about this topic is this post written by Keera Studios:

Areas for improvement:

  • A GUI toolkit binding that is maintained, comprehensive, and easy to use
  • Polished GUI interface builders

Notable libraries:

  • gtk / glib / cairo / pango - The GTK+ suite of libraries
  • wx - wxWidgets bindings
  • X11 - X11 bindings
  • threepenny-gui - Framework for local apps that use the web browser as the interface
  • hsqml - A Haskell binding for Qt Quick, a cross-platform framework for creating graphical user interfaces.
  • fltkhs - A Haskell binding to FLTK. Easy install/use, cross-platform, self-contained executables.

Some example applications:

Educational resources:

Machine learning

Rating: Immature? (Uncertain)

This area has been pioneered almost single-handedly by one person: Mike Izbicki. He maintains the HLearn suite of libraries for machine learning in Haskell.

I have essentially no experience in this area, so I can't really rate it that well. However, I'm pretty certain that I would not rate it mature because I'm not aware of any company successfully using machine learning in Haskell.

For the same reason, I can't really offer constructive advice for areas for improvement.

If you would like to learn more about this area the best place to begin is the Github page for the HLearn project:

Notable libraries:

  • HLearn-*

Data science

Rating: Immature

Haskell really lags behind Python and R in this area. Haskell is somewhat usable for data science, but probably not ready for expert use under deadline pressure.

I'll primarily compare Haskell to Python since that's the data science ecosystem that I'm more familiar with. Specifically, I'll compare to the scipy suite of libraries:

The Haskell analog of NumPy is the hmatrix library, which provides Haskell bindings to BLAS and LAPACK. hmatrix's main limitation is that the API is a bit clunky, but all the tools are there.

Haskell's charting story is okay. Probably my main criticism of most charting APIs is that their APIs tend to be large, the types are a bit complex, and they have a very large number of dependencies.

Fortunately, Haskell does integrate into IPython so you can use Haskell within an IPython shell or an online notebook. For example, there is an online "IHaskell" notebook that you can use right now located here:

If you want to learn more about how to setup your own IHaskell notebook, visit this project:

The closest thing to Python's pandas is the frames library. I haven't used it that much personally so I won't comment on it much other than to link to some tutorials in the Educational Resources section.

I'm not aware of a Haskell analog to SciPy (the library) or sympy. If you know of an equivalent Haskell library then let me know.

One Haskell library that deserves honorable mention here is the diagrams library which lets you produce complex data visualizations very easily if you want something a little bit fancier than a chart. Check out the diagrams project if you have time:

Areas for improvement:

  • Smooth user experience and integration across all of these libraries
  • Simple types and APIs. The data science programmers I know dislike overly complex or verbose APIs
  • Beautiful data visualizations with very little investment

Notable libraries:

Game programming

Rating: Immature? / Bad?

Haskell has SDL and OpenGL bindings, which are actually quite good, but that's about it. You're on your own from that point onward. There is not a rich ecosystem of higher-level libraries built on top of those bindings. There is some work in this area, but I'm not aware of anything production quality.

There is also one really fundamental issue with the language: garbage collection, which runs the risk of introducing perceptible pauses in gameplay if your heap grows too large.

For this reason I don't see Haskell ever being used for AAA game programming. I suppose you could use Haskell for simpler games that don't require keeping a lot of resources in memory.

Haskell could maybe be used for the scripting layer of a game or to power the backend for an online game, but for rendering or updating an extremely large graph of objects you should probably stick to another language.

The company that has been doing the most to push the envelope for game programming in Haskell is Keera Studios, so if this is an area that interests you then you should follow their blog:

Areas for improvement:

  • Improve the garbage collector and benchmark performance with large heap sizes
  • Provide higher-level game engines
  • Improve distribution of Haskell games on proprietary game platforms

Notable libraries:

Systems / embedded programming

Rating: Bad / Immature (?) (See description)

Since systems programming is an abused word, I will clarify that I mean programs where speed, memory layout, and latency really matter.

Haskell fares really poorly in this area because:

  • The language is garbage collected, so there are no latency guarantees
  • Executable sizes are large
  • Memory usage is difficult to constrain (thanks to space leaks)
  • Haskell has a large and unavoidable runtime, which means you cannot easily embed Haskell within larger programs
  • You can't easily predict what machine code your Haskell code will compile to

Typically people approach this problem from the opposite direction: they write the low-level parts in C or Rust and then write Haskell bindings to the low-level code.

It's worth noting that there is an alternative approach which is Haskell DSLs that are strongly typed that generate low-level code at runtime. This is the approach championed by the company Galois.

Notable libraries:

  • atom / ivory - DSL for generating embedded programs
  • copilot - Stream DSL that generates C code
  • improve - High-assurance DSL for embedded code that generates C and Ada

Educational resources:

Mobile apps

Rating: Immature? / Bad? (Uncertain)

This greatly lags behind using the language that is natively supported by the mobile platform (i.e. Java for Android or Objective-C / Swift for iOS).

I don't know a whole lot about this area, but I'm definitely sure it is far from mature. All I can do is link to the resources I know of for Android and iPhone development using Haskell.

I also can't really suggest improvements because I'm pretty out of touch with this branch of the Haskell ecosystem.

Educational resources:

ARM processor support

Rating: Immature / Early adopter

On hobbyist boards like the Raspberry Pi it's possible to compile Haskell code with GHC. However, some libraries have problems on the ARM platform, ghci only works on newer compilers, and the newer compilers are flaky.

If Haskell code builds, it runs with respectable performance on these machines.

Raspbian (Raspberry Pi, Pi 2, others):

  • current version: ghc 7.4, cabal-install 1.14
  • ghci doesn't work

Debian Jessie (Raspberry Pi 2):

  • current version: ghc 7.6
  • you can install the current ghc 7.10.2 binary and ghci starts; however, it fails to build cabal with an 'illegal instruction' error

Arch (Raspberry Pi 2):

  • current version: ghc 7.8.2, but llvm is 3.6, which is too new
  • downgrade packages for llvm are not officially available
  • with an llvm downgrade to 3.4, ghc and ghci work, but there are problems compiling yesod and scotty: compiler crashes, segfaults, etc.

Arch (Banana Pi):

  • similar to Raspberry Pi 2: ghc is 7.8.2 and works with the llvm downgrade
  • I have had success compiling a yesod project on this platform

Common Programming Needs


Maintenance

Rating: Best in class

Haskell is unbelievably awesome for maintaining large projects. There's nothing that I can say that will fully convey how nice it is to modify existing Haskell code. You can only appreciate this through experience.

When I say that Haskell is easy to maintain, I mean that you can easily approach a large Haskell code base written by somebody else and make sweeping architectural changes to the project without breaking the code.

You'll often hear people say: "if it compiles, it works". I think that is a bit of an exaggeration, but a more accurate statement is: "if you refactor and it compiles, it works". This lets you move fast without breaking things.

Most statically typed languages are easy to maintain, but Haskell is on its own level for the following reasons:

  • Strong types
  • Global type inference
  • Type classes
  • Laziness

The latter two features are what differentiate Haskell from other statically typed languages.

If you've ever maintained code in other languages you know that usually your test suite breaks the moment you make large changes to your code base and you have to spend a significant amount of effort keeping your test suite up to date with your changes. However, Haskell has a very powerful type system that lets you transform tests into invariants that are enforced by the types so that you can statically eliminate entire classes of errors at compile time. These types are much more flexible than tests when modifying code and types require much less upkeep as you make large changes.

The Haskell community and ecosystem use the type system heavily to "test" their applications, more so than other programming language communities. That's not to say that Haskell programmers don't write tests (they do), but rather they prefer types over tests when they have the option.

Global type inference means that you don't have to update types and interfaces as you change the code. Whenever I do a large refactor the first thing I do is delete all type signatures and let the compiler infer the types and interfaces for me as I go. When I'm done refactoring I just insert back the type signatures that the compiler infers as machine-checked documentation.

Type classes also assist refactoring because the compiler automatically infers type class constraints (analogous to interfaces in other languages) so that you don't need to explicitly annotate interfaces. This is a huge time saver.

Laziness deserves special mention because many outsiders do not appreciate how laziness simplifies maintenance. Many languages require tight coupling between producers and consumers of data structures in order to avoid wasteful evaluation, but laziness avoids this problem by only evaluating data structures on demand. This means that if your refactoring process changes the order in which data structures are consumed or even stops referencing them altogether you don't need to reorder or delete those data structures. They will just sit around patiently waiting until they are actually needed, if ever, before they are evaluated.
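
A tiny illustration of this decoupling (a toy example of mine): the producer below is an infinite list, yet the consumer forces only the prefix it actually needs, so neither side has to coordinate with the other.

```haskell
-- The producer is infinite; laziness means only the demanded prefix is
-- ever evaluated, so producer and consumer need no coordination.
naturals :: [Integer]
naturals = [0 ..]

main :: IO ()
main = print (take 5 (filter even naturals))  -- [0,2,4,6,8]
```

If a refactor stops consuming `naturals` entirely, nothing needs to change on the producer side; the list is simply never evaluated.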

Single-machine Concurrency

Rating: Best in class

I give Haskell a "Best in class" rating because Haskell's concurrency runtime performs as well or better than mainstream languages and is significantly easier to use due to the runtime support for software-transactional memory.

The best explanation of Haskell's threading module is the documentation in Control.Concurrent:

Concurrency is "lightweight", which means that both thread creation and context switching overheads are extremely low. Scheduling of Haskell threads is done internally in the Haskell runtime system, and doesn't make use of any operating system-supplied thread packages.

The best way to explain the performance of Haskell's threaded runtime is to give hard numbers:

  • The Haskell thread scheduler can easily handle millions of threads
  • Each thread requires 1 KB of memory, so the hard limitation to thread count is memory (1 GB per million threads).
  • Haskell channel overhead for the standard library (using TQueue) is on the order of one microsecond per message and degrades linearly with increasing contention
  • Haskell channel overhead using the unagi-chan library is on the order of 100 nanoseconds (even under contention)
  • Haskell's MVar (a low-level concurrency communication primitive) requires 10-20 ns to add or remove values (roughly on par with acquiring or releasing a lock in other languages)
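
A quick demonstration of how cheap these threads are (a toy sketch of mine; the numbers above are the post's, not measured by this code): spawn ten thousand green threads, each sending one message over a channel.

```haskell
import Control.Concurrent
import Control.Monad (forM_, replicateM)

-- Spawning tens of thousands of lightweight threads is routine: each
-- thread writes its index to a shared channel and the main thread sums
-- the results.
main :: IO ()
main = do
  chan <- newChan
  let n = 10000 :: Int
  forM_ [1 .. n] $ \i -> forkIO (writeChan chan i)
  results <- replicateM n (readChan chan)
  print (sum results)  -- 50005000
```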

Haskell also provides software-transactional memory, which allows programmers to build composable and atomic memory transactions. You can compose transactions together in multiple ways to build larger transactions:

  • You can sequence two transactions to build a larger atomic transaction
  • You can combine two transactions using alternation, falling back on the second transaction if the first one fails
  • Transactions can retry, rolling back their state and sleeping until one of their dependencies changes in order to avoid wasteful polling
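
The alternation combinator is `orElse`; a minimal sketch (my own example, assuming the stm package): try to read from one queue and, if that transaction would retry, fall back to a second queue, all within a single atomic transaction.

```haskell
import Control.Concurrent.STM

-- Alternation: if reading q1 retries (it is empty), fall back to q2.
-- The combined action is still one atomic transaction.
takeEither :: TQueue a -> TQueue a -> STM a
takeEither q1 q2 = readTQueue q1 `orElse` readTQueue q2

main :: IO ()
main = do
  q1 <- newTQueueIO :: IO (TQueue String)
  q2 <- newTQueueIO
  atomically (writeTQueue q2 "fallback")
  r <- atomically (takeEither q1 q2)
  putStrLn r  -- fallback
```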

A few other languages provide software-transactional memory, but Haskell's implementation has two main advantages over other implementations:

  • The type system enforces that transactions only permit reversible memory modifications. This guarantees at compile time that all transactions can be safely rolled back.
  • Haskell's STM runtime takes advantage of enforced purity to improve the efficiency of transactions, retries, and alternation.

Notable libraries:

  • stm - Software transactional memory
  • unagi-chan - High performance channels
  • async - Futures library

Educational resources:

Types / Type-driven development

Rating: Best in class

Haskell definitely does not have the most advanced type system (not even close if you count research languages) but out of all languages that are actually used in production Haskell is probably at the top. Idris is probably the closest thing to a type system more powerful than Haskell's that has a realistic chance of production use in the foreseeable future.

The killer features of Haskell's type system are:

  • Type classes
  • Global type and type class inference
  • Light-weight type syntax

Haskell's type system really does not get in your way at all. You (almost) never need to annotate the type of anything. As a result, the language feels light-weight to use like a dynamic language, but you get all the assurances of a static language.

Many people are familiar with languages that support "local" type inference (like Rust, Java, C#), where you have to explicitly type function arguments but then the compiler can infer the types of local variables. Haskell, on the other hand, provides "global" type inference, meaning that the types and interfaces of all function arguments are inferred, too. Type signatures are optional (with some minor caveats) and are primarily for the benefit of the programmer.
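
For example (a toy snippet of mine), nothing below carries a type signature, yet GHC infers both the types and the type class constraints:

```haskell
-- No signatures anywhere: GHC infers types and class constraints alike.
pairUp x y = (x, y)           -- inferred: pairUp :: a -> b -> (a, b)

total xs = sum (map fst xs)   -- inferred: total :: Num a => [(a, b)] -> a

main :: IO ()
main = print (total [(1 :: Int, "a"), (2, "b")])  -- 3
```

Asking ghci for `:type total` shows the inferred signature, which you can paste back in as machine-checked documentation.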

This really benefits projects where you need to prototype quickly but refactor painlessly when you realize you are on the wrong track. You can leave out all type signatures while prototyping but the types are still there even if you don't see them. Then when you dramatically change course those strong and silent types step in and keep large refactors painless.

Some Haskell programmers use a "type-driven development" programming style, analogous to "test-driven development":

  • they specify desired behavior as a type signature which initially fails to type-check (analogous to adding a test which starts out "red")
  • they create a quick and dirty solution that satisfies the type-checker (analogous to turning the test "green")
  • they improve on their initial solution while still satisfying the type-checker (analogous to a "red/green refactor")

"Type-driven development" supplements "test-driven development" and has different tradeoffs:

  • The biggest disadvantage of types is that they cannot test as many things as full-blown tests can, especially because Haskell is not dependently typed
  • The biggest advantage of types is that they can prove the complete absence of programming errors for all possible cases, whereas tests cannot examine every possibility
  • Type-checking is much faster than running tests
  • Type error messages are informative: they explain what went wrong and never get stale
  • Type-checking never hangs and never gives flaky results

Haskell also provides the "Typed Holes" extension, which lets you add an underscore (i.e. "_") anywhere in the code whenever you don't know what expression belongs there. The compiler will then tell you the expected type of the hole and suggest terms in scope with related types that you can use to fill the hole.
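A small illustration (my own, with a hypothetical `pairUp` function): the hole version deliberately fails to compile, so it is shown in a comment, followed by the filled-in version.

```haskell
-- With a typed hole, we would write:
--
--   pairUp :: [a] -> [b] -> [(a, b)]
--   pairUp xs ys = _ xs ys
--
-- GHC then reports the hole's expected type ([a] -> [b] -> [(a, b)]), and
-- recent GHCs also suggest in-scope candidates such as `zip`, which fills it:
pairUp :: [a] -> [b] -> [(a, b)]
pairUp xs ys = zip xs ys

main :: IO ()
main = print (pairUp [1, 2, 3 :: Int] "ab")  -- [(1,'a'),(2,'b')]
```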

Educational resources:


Domain-specific languages (DSLs)

Rating: Mature

Haskell rocks at DSL-building. While not as flexible as a Lisp language, I would venture that Haskell is the most flexible of the non-Lisp languages. You can overload a large amount of built-in syntax for your custom DSL.

The most popular example of overloaded syntax is do notation, which you can overload to work with any type that implements the Monad interface. This syntactic sugar for Monads in turn led to an overabundance of Monad tutorials.

However, there are lesser known but equally important things that you can overload, such as:

  • numeric and string literals
  • if/then/else expressions
  • list comprehensions
  • numeric operators
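For example, string literals can be overloaded via the OverloadedStrings extension and the IsString class (the `Sql` type here is a hypothetical DSL type of my own, not from the original post):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.String (IsString (..))

-- A toy DSL type: with OverloadedStrings, string literals become Sql values.
newtype Sql = Sql String
  deriving (Eq, Show)

instance IsString Sql where
  fromString = Sql

-- A bare literal is already a Sql value; no wrapping required at use sites.
query :: Sql
query = "SELECT * FROM users"

main :: IO ()
main = print query
```

Numeric literals work the same way through the Num class, which is how DSLs for vectors, symbolic math, and so on reuse ordinary literal syntax.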

Educational resources:


Testing

Rating: Mature

There are a few places where Haskell is the clear leader among all languages:

  • property-based testing
  • mocking / dependency injection

Haskell's QuickCheck is the gold standard which all other property-based testing libraries are measured against. The reason QuickCheck works so smoothly in Haskell is due to Haskell's type class system and purity. The type class system simplifies automatic generation of random data from the input type of the property test. Purity means that any failing test result can be automatically minimized by rerunning the check on smaller and smaller inputs until QuickCheck identifies the corner case that triggers the failure.
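The generate/check/minimize idea can be sketched in miniature using only base (this is a hand-rolled toy, not QuickCheck's actual API): enumerate small inputs, check the property on each, and report the first (hence smallest) counterexample.

```haskell
-- Property under test: reversing twice is the identity.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- A deliberately false property, to show counterexample reporting.
prop_reverseId :: [Int] -> Bool
prop_reverseId xs = reverse xs == xs

-- "Generator": all lists up to length n drawn from a small alphabet,
-- enumerated shortest-first so the first failure is already minimal.
smallLists :: Int -> [[Int]]
smallLists n = concatMap listsOf [0 .. n]
  where
    listsOf 0 = [[]]
    listsOf k = [x : xs | x <- [0, 1, 2], xs <- listsOf (k - 1)]

-- Nothing if the property held everywhere, else the smallest counterexample.
check :: ([Int] -> Bool) -> Maybe [Int]
check prop = case filter (not . prop) (smallLists 4) of
  []      -> Nothing
  (c : _) -> Just c

main :: IO ()
main = do
  print (check prop_reverseTwice)  -- Nothing
  print (check prop_reverseId)     -- Just [0,1]
```

QuickCheck does the same thing with random generation and real shrinking, driven by the property's input type via the Arbitrary type class.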

Mocking is another area where Haskell shines because you can overload almost all built-in syntax, including:

  • do notation
  • if statements
  • numeric literals
  • string literals

Haskell programmers overload this syntax (particularly do notation) to write code that looks like it is doing real work:

example = do
    str <- readLine
    putLine str

... and the code will actually evaluate to a pure syntax tree that you can use to mock in external inputs and outputs:

example = ReadLine (\str -> PutLine str (Pure ()))
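A minimal self-contained version of this technique looks like the following (type and constructor names are illustrative; the free library generalizes this pattern): the program is a pure syntax tree, and separate interpreters run it against the real console or against mocked input.

```haskell
-- The program as a pure syntax tree.
data Teletype r
  = ReadLine (String -> Teletype r)
  | PutLine String (Teletype r)
  | Pure r

example :: Teletype ()
example = ReadLine (\str -> PutLine str (Pure ()))

-- Real interpreter: actually performs IO.
runIO :: Teletype r -> IO r
runIO (ReadLine k)  = getLine >>= runIO . k
runIO (PutLine s k) = putStrLn s >> runIO k
runIO (Pure r)      = return r

-- Mock interpreter: feeds canned input and collects output, with no IO.
runMock :: [String] -> Teletype r -> ([String], r)
runMock inputs (ReadLine k) = case inputs of
  (i : is) -> runMock is (k i)
  []       -> runMock [] (k "")
runMock inputs (PutLine s k) =
  let (out, r) = runMock inputs k in (s : out, r)
runMock _ (Pure r) = ([], r)

main :: IO ()
main = print (runMock ["hello"] example)  -- (["hello"],())
```

The tests exercise the mock interpreter: the same `example` value can be run for real or inspected purely.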

Haskell also supports most testing functionality that you expect from other languages, including:

  • standard package interfaces for testing
  • unit testing libraries
  • test result summaries and visualization

Notable libraries:

  • QuickCheck - property-based testing
  • doctest - tests embedded directly within documentation
  • free - Haskell's abstract version of "dependency injection"
  • hspec - Testing library analogous to Ruby's RSpec
  • HUnit - Testing library analogous to Java's JUnit
  • tasty - Combination unit / regression / property testing library

Educational resources:

Data structures and algorithms

Rating: Mature

Haskell primarily uses persistent data structures, meaning that when you "update" a persistent data structure you just create a new data structure and you can keep the old one around (thus the name: persistent). Because Haskell data structures are immutable, you don't actually create a deep copy of the data structure when updating; the new structure reuses as much of the original data structure as possible.
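For example, with Data.Map from the containers package (a boot library that ships with GHC), inserting into a map returns a new map while the old one remains fully usable:

```haskell
import qualified Data.Map as Map

m0 :: Map.Map String Int
m0 = Map.fromList [("a", 1), ("b", 2)]

-- "Updating" returns a new map; m0 is untouched and shares most of its
-- structure with m1 internally.
m1 :: Map.Map String Int
m1 = Map.insert "c" 3 m0

main :: IO ()
main = do
  print (Map.size m0)       -- 2
  print (Map.size m1)       -- 3
  print (Map.lookup "c" m0) -- Nothing
```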

The Notable libraries section contains links to Haskell collections libraries that are heavily tuned. You should realistically expect these libraries to compete with tuned Java code. However, you should not expect Haskell to match expertly tuned C++ code.

The selection of algorithms is not as broad as in Java or C++ but it is still pretty good and diverse enough to cover the majority of use cases.

Notable libraries:


Benchmarking

Rating: Mature

This boils down exclusively to the criterion library, which was done so well that nobody bothered to write a competing library. Notable criterion features include:

  • Detailed statistical analysis of timing data
  • Beautiful graph output: (Example)
  • High-resolution analysis (accurate down to nanoseconds)
  • Customizable HTML/CSV/JSON output
  • Garbage collection insensitivity

Notable libraries:

Educational resources:


Unicode support

Rating: Mature

Haskell's Unicode support is excellent. Just use the text and text-icu libraries, which provide a high-performance, space-efficient, and easy-to-use API for Unicode-aware text operations.

Note that there is one big catch: the default String type in Haskell is inefficient. You should use Text instead whenever possible.

Notable libraries:

Parsing / Pretty-printing

Rating: Mature

Haskell is amazing at parsing. Recursive descent parser combinators are far-and-away the most popular parsing paradigm within the Haskell ecosystem, so much so that people use them even in place of regular expressions. I strongly recommend reading the "Monadic Parsing in Haskell" functional pearl linked below if you want to get a feel for why parser combinators are so dominant in the Haskell landscape.

If you're not sure what library to pick, I generally recommend the parsec library as a default well-rounded choice because it strikes a decent balance between ease-of-use, performance, good error messages, and small dependencies (since it ships with GHC).

attoparsec deserves special mention as an extremely fast backtracking parsing library. The speed and simplicity of this library will blow you away. The main deficiency of attoparsec is its poor error messages.

The pretty-printing front is also excellent. Academic researchers just really love writing pretty-printing libraries in Haskell for some reason.
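To show why combinators feel so natural, here is a miniature parser combinator library in the style of the "Monadic Parsing in Haskell" pearl (a toy sketch of mine, not the parsec API): small parsers compose with ordinary Applicative operators into bigger ones.

```haskell
import Control.Applicative (Alternative (..))
import Data.Char (isDigit)

-- A parser consumes a prefix of the input and returns possible results.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

instance Functor Parser where
  fmap f (Parser p) = Parser (\s -> [(f a, rest) | (a, rest) <- p s])

instance Applicative Parser where
  pure a = Parser (\s -> [(a, s)])
  Parser pf <*> Parser pa =
    Parser (\s -> [(f a, s2) | (f, s1) <- pf s, (a, s2) <- pa s1])

instance Alternative Parser where
  empty = Parser (const [])
  Parser p <|> Parser q = Parser (\s -> case p s of [] -> q s; r -> r)

satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser step
  where
    step (c : cs) | ok c = [(c, cs)]
    step _               = []

char :: Char -> Parser Char
char c = satisfy (== c)

number :: Parser Int
number = read <$> some (satisfy isDigit)

-- A parser for "x,y" coordinate pairs, composed from the pieces above.
pair :: Parser (Int, Int)
pair = (,) <$> number <* char ',' <*> number

main :: IO ()
main = print (runParser pair "12,34")  -- [((12,34),"")]
```

Real libraries add error reporting, streaming input, and performance work on top of essentially this shape.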

Notable libraries:

  • parsec - best overall "value"
  • attoparsec - Extremely fast backtracking parser
  • trifecta - Best error messages (clang-style)
  • alex / happy - Like lex / yacc but with Haskell integration
  • Earley - Earley parsing embedded within the Haskell language
  • ansi-wl-pprint - Pretty-printing library
  • text-format - High-performance string formatting

Educational resources:


Stream programming

Rating: Mature

Haskell's streaming ecosystem is mature. Probably the biggest issue is that there are too many good choices (and a lot of ecosystem fragmentation as a result), but each of the streaming libraries listed below has a sufficiently rich ecosystem including common streaming tasks like:

  • Network transmissions
  • Compression
  • External process pipes
  • High-performance streaming aggregation
  • Concurrent streams
  • Incremental parsing
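The common shape behind these libraries can be sketched with a toy stream type (the real APIs in conduit/pipes/io-streams differ substantially; this only illustrates incremental, demand-driven processing):

```haskell
-- A toy stream: a possibly infinite sequence of yielded values.
data Stream a = Yield a (Stream a) | Done

-- Producer: an infinite stream of naturals; only demanded elements exist.
naturals :: Stream Int
naturals = go 0
  where go n = Yield n (go (n + 1))

-- Transformer: keep only elements satisfying a predicate.
filterS :: (a -> Bool) -> Stream a -> Stream a
filterS p (Yield a rest)
  | p a       = Yield a (filterS p rest)
  | otherwise = filterS p rest
filterS _ Done = Done

-- Consumer: take the first n elements into a list, stopping the stream.
takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0     = []
takeS _ Done           = []
takeS n (Yield a rest) = a : takeS (n - 1) rest

main :: IO ()
main = print (takeS 5 (filterS even naturals))  -- [0,2,4,6,8]
```

The production libraries extend this idea with effects (so stages can do IO), resource finalization, and leftovers.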

Notable libraries:

  • conduit / io-streams / pipes - Stream programming libraries (Full disclosure: I authored pipes and wrote the official io-streams tutorial)
  • machines - Networked stream transducers library

Educational resources:

Serialization / Deserialization

Rating: Mature

Haskell's serialization libraries are reasonably efficient and very easy to use. You can easily automatically derive serializers/deserializers for user-defined data types and it's very easy to encode/decode values.

Haskell's serialization does not suffer from any of the gotchas that object-oriented languages deal with (particularly Java/Scala). Haskell data types don't have associated methods or state to deal with so serialization/deserialization is straightforward and obvious. That's also why you can automatically derive correct serializers/deserializers.

Serialization performance is pretty good. You should expect to serialize data at a rate between 100 Mb/s and 1 Gb/s with careful tuning. Serialization performance still has about 3x-5x room for improvement by multiple independent estimates. See the "Faster binary serialization" link below for details of the ongoing work to improve the serialization speed of existing libraries.
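The "derive it, don't write it" point can be shown with only the standard library (a stand-in sketch: real libraries like binary, cereal, and aeson derive compact binary or JSON codecs the same way; Show/Read is merely the simplest derivable round-trip):

```haskell
-- The encoder/decoder is derived from the data type, not hand-written.
data Person = Person
  { name :: String
  , age  :: Int
  } deriving (Eq, Show, Read)

encode :: Person -> String
encode = show

decode :: String -> Maybe Person
decode s = case reads s of
  [(p, "")] -> Just p
  _         -> Nothing

main :: IO ()
main = do
  let alice = Person { name = "Alice", age = 30 }
  print (decode (encode alice) == Just alice)  -- True
```

Because a Haskell data type carries no hidden methods or state, the derived round-trip is correct by construction.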

Notable libraries:

Educational resources:

Support for file formats

Rating: Mature

Haskell supports all the common domain-independent serialization formats (i.e. XML/JSON/YAML/CSV). For more exotic formats Haskell won't be as good as, say, Python (which is notorious for supporting a huge number of file formats) but it's so easy to write your own quick and dirty parser in Haskell that this is not much of an issue.
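The "quick and dirty parser" claim in practice: a naive CSV splitter takes only a few lines of base (no quoting or escaping; the real library is cassava):

```haskell
-- Split a string on a separator character.
splitOn :: Char -> String -> [String]
splitOn sep s = case break (== sep) s of
  (field, [])       -> [field]
  (field, _ : rest) -> field : splitOn sep rest

-- A deliberately naive CSV parser: rows are lines, fields are comma-split.
parseCsv :: String -> [[String]]
parseCsv = map (splitOn ',') . lines

main :: IO ()
main = print (parseCsv "a,b,c\n1,2,3")  -- [["a","b","c"],["1","2","3"]]
```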

Notable libraries:

  • aeson - JSON encoding/decoding
  • cassava - CSV encoding/decoding
  • yaml - YAML encoding/decoding
  • xml - XML encoding/decoding

Package management

Rating: Mature

If you had asked me a few months back I would have rated Haskell immature in this area. This rating is based entirely on the recent release of the stack package tool by FPComplete which greatly simplifies package installation and dependency management. This tool was created in response to a broad survey of existing Haskell users and potential users where cabal-install was identified as the single greatest issue for professional Haskell development.

The stack tool is not just good by Haskell standards but excellent even compared to other language package managers. Key features include:

  • Excellent project isolation (including compiler isolation)
  • Global caching of shared dependencies to avoid wasteful rebuilds
  • Easily add local repositories or remote Github repositories as dependencies

stack is also powered by Stackage, which is a very large Hackage mono-build that ensures that a large subset of Hackage builds correctly against each other and automatically notifies package authors to fix or update libraries when they break the mono-build. Periodically this package set is frozen as a Stackage LTS release which you can supply to the stack tool in order to select dependencies that are guaranteed to build correctly with each other. Also, if all your projects use the same or similar LTS releases they will benefit heavily from the shared global cache.
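A minimal stack.yaml ties these pieces together (the field names are stack's actual configuration keys; the snapshot version here is illustrative):

```yaml
# Pin an LTS snapshot so all dependencies come from a Stackage set that is
# known to build together.
resolver: lts-3.1

# Local packages in this project:
packages:
- '.'

# Dependencies outside the snapshot, pinned explicitly:
extra-deps: []
```

Projects sharing the same resolver also share the global build cache mentioned above.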

Educational resources:



Logging

Rating: Mature

Haskell has decent logging support. That's pretty much all there is to say.

  • fast-logger - High-performance multicore logging system
  • hslogger - Logging library modeled after Python's logging module
  • monad-logger - add logging with line numbers to your monad stack. Uses fast-logger under the hood.


Education

Rating: Immature

The primary reasons for the "Immature" rating are two big deficiencies in Haskell learning materials:

  • Intermediate-level books
  • Beginner-level material targeted at people with no previous programming experience

Other than that the remaining learning resources are okay. If the above holes were filled then I would give a "Mature" rating.

The most important advice I can give to Haskell beginners is to learn by doing. I observe that many Haskell beginners dwell too long trying to learn by reading instead of trying to build something useful to hone their understanding.

Educational resources:


Debugging

Rating: Immature

The main Haskell debugging features are:

  • Memory and performance profiling
  • Stack traces
  • Source-located errors, using the assert function
  • Breakpoints, single-stepping, and tracing within the GHCi REPL
  • Informal printf-style tracing using Debug.Trace
  • ThreadScope
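Of the features above, Debug.Trace is the easiest to demonstrate: `trace` prints its message (to stderr) when the surrounding expression is evaluated and then returns its second argument unchanged, so it can be sprinkled into pure code.

```haskell
import Debug.Trace (trace)

-- Informal printf-style tracing inside a pure function.
factorial :: Int -> Int
factorial 0 = 1
factorial n = trace ("factorial " ++ show n) (n * factorial (n - 1))

main :: IO ()
main = print (factorial 4)  -- prints the trace messages, then 24
```

Because tracing piggybacks on lazy evaluation, messages appear in demand order, which can itself be informative (or surprising).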

The two reasons I still mark debugging "Immature" are:

  • GHC's stack traces require profiling to be enabled
  • There is only one IDE that I know of (leksah) that integrates support for breakpoints and single-stepping and leksah still needs more polish

ghc-7.10 also added preliminary support for DWARF symbols, which enables gdb-based debugging and perf-based profiling, but there is still more work that needs to be done. See the following page for more details:

Educational resources:

Cross-platform support

Rating: Immature

I give Haskell an "Immature" rating primarily due to poor user experience on Windows:

  • Most Haskell tutorials assume a Unix-like system
  • Several Windows-specific GHC bugs
  • Poor IDE support (Most Windows programmers don't use a command-line editor)

This is partly a chicken-and-egg problem. Haskell has many Windows-specific issues because it has such a small pool of Windows developers to contribute fixes. Most Haskell developers are advised to use another operating system or a virtual machine to avoid these pain points, which exacerbates the problem.

The situation is not horrible, though. I know because I do half of my Haskell programming on Windows in order to familiarize myself with the pain points of the Windows ecosystem. Most of the issues affect beginners and can be worked around by more experienced developers. I wouldn't say any individual issue is an outright dealbreaker; it's more like a thousand papercuts which turn people off of the language.

If you're a Haskell developer using Windows, I highly recommend the following installs to get started quickly and with as few issues as possible:

  • Git for Windows - A Unix-like command-line environment bundled with git that you can use to follow along with tutorials
  • MinGHC - Use this for project-independent Haskell experimentation
  • Stack - Use this for project development

Additionally, learn to use the command line a little bit until Haskell IDE support improves. Plus, it's a useful skill in general as you become a more experienced programmer.

For Mac, the recommended installation is:

  • Haskell for Mac OS X - A self-contained relocatable GHC build for project-independent Haskell experimentation
  • Stack - Use this for project development

For other operating systems, use your package manager of choice to install ghc and stack.

Educational resources:

Databases and data stores

Rating: Immature

This is not one of my areas of expertise, but what I do know is that Haskell has bindings to most of the open source databases and datastores such as MySQL, Postgres, SQLite, Cassandra, Redis, DynamoDB and MongoDB. However, I haven't really evaluated the quality of these bindings other than the postgresql-simple library, which is the only one I've personally used and was decent as far as I could tell.

The "Immature" ranking is based on the recommendation of Stephen Diehl who notes:

"Raw bindings are mature, but the higher-level ORM tooling is a lot less mature than its Java, Scala, and Python counterparts." (Source)

However, Haskell appears to be deficient in bindings to commercial databases like Microsoft SQL server and Oracle. So whether or not Haskell is right for you probably depends heavily on whether there are bindings to the specific data store you use.

Notable libraries:

Hot code loading

Rating: Immature

Haskell does provide support for hot code loading, although nothing in the same ballpark as in languages like Clojure.

There are two main approaches to hot code loading:

  • Compiling and linking object code at runtime (i.e. the plugins or hint libraries)
  • Recompiling the entire program and then reinitializing the program with the program's saved state (i.e. the dyre or halive libraries)

You might wonder how Cloud Haskell sends code over the wire and my understanding is that it doesn't. Any function you wish to send over the wire is instead compiled ahead of time on both sides and stored in a shared symbol table which each side references when encoding or decoding the function.

Haskell does not let you edit a live program like Clojure does so Haskell will probably never be "Best in class" short of somebody releasing a completely new Haskell compiler built from the ground up to support this feature. The existing Haskell tools for hot code swapping seem as good as they are reasonably going to get, but I'm waiting for commercial success stories of their use before rating this "Mature".

The halive library has the best hot code swapping demo by far.

Notable libraries:

  • plugins / hint - Runtime compilation and linking
  • dyre / halive - Program reinitialization with saved state

IDE support

Rating: Immature

I am not the best person to review this area since I do not use an IDE myself. I'm basing this "Immature" rating purely on what I have heard from others.

The impression I get is that the biggest pain point is that Haskell IDEs, IDE plugins, and low-level IDE tools keep breaking with every new GHC release.

Most of the Haskell early adopters have been vi/vim or emacs users so those editors have gotten the most love. Support for more traditional IDEs has improved recently with Haskell plugins for IntelliJ and Eclipse and also the Haskell-native leksah IDE.

FPComplete has also released a web IDE for Haskell programming that is worth checking out; it is reasonably polished but cannot be used offline.

Notable tools:

  • hoogle - Type-based function search
  • hlint - Code linter
  • ghc-mod - editor agnostic tool that powers many IDE-like features
  • ghcid - lightweight background type-checker that triggers on code changes
  • haskell-mode - Umbrella project for Haskell emacs support
  • structured-haskell-mode - structural editing based on Haskell syntax for emacs
  • codex - Tags file generator for cabal project dependencies.
  • hdevtools - Persistent GHC-powered background server for development tools
  • ghc-imported-from - editor agnostic tool that finds Haddock documentation page for a symbol

IDE plugins:

  • IntelliJ (the official plugin or Haskforce)
  • Eclipse (the EclipseFP plugin)
  • Atom (the IDE-Haskell plugin)


Educational resources:


I originally hosted this post as a draft on Github in order to solicit review from people more knowledgeable than myself. In the process it turned into a collaboratively edited wiki which you can find here:

I will continue to accept pull requests and issues to make sure that it stays up to date and once or twice a year I will post announcements if there have been any major changes or improvements in the Haskell ecosystem.

The main changes since the draft initially went out were:

  • The "Type system" section was upgraded to "Best in class" (originally ranked "Mature")
  • The "Concurrency" section was renamed to "Single-machine concurrency" and upgraded to "Best in class" (originally ranked "Mature")
  • The "Database" section was downgraded to "Immature" (originally ranked "Mature")
  • New sections were added for "Debugging", "Education", and "Hot code loading"


  • Aaron Levin
  • Alois Cochard
  • Ben Kovach
  • Benno Fünfstück
  • Carlo Hamalainen
  • Chris Allen
  • Curtis Gagliardi
  • Deech
  • David Howlett
  • David Johnson
  • Edward Cho
  • Greg Weber
  • Gregor Uhlenheuer
  • Juan Pedro Villa Isaza
  • Kazu Yamamoto
  • Kirill Zaborsky
  • Liam O'Connor-Davis
  • Luke Randall
  • Marcio Klepacz
  • Mitchell Rosen
  • Nicolas Kaiser
  • Oliver Charles
  • Pierre Radermecker
  • Rodrigo B. de Oliveira
  • Stephen Diehl
  • Tim Docker
  • Tran Ma
  • Yuriy Syrovetskiy
  • @bburdette
  • @co-dan
  • @ExternalReality
  • @GetContented
  • @psibi

by Gabriel Gonzalez at September 01, 2015 03:07 AM

August 31, 2015

Alessandro Vermeulen

FinagleCon


FinagleCon was held at TwitterHQ in San Francisco. It is refreshing to see a nice working atmosphere with free food and drinks. Now for the contents.

Twitter’s RPC framework, Finagle, has been in production since August 2010 and has over 140 contributors. In addition to Twitter, it has been adopted by many large companies such as SoundCloud. Initially written in Java with FP constructs (monads, maps, etc.) all over, it was soon after rewritten in Scala.

Finagle is based on three core concepts: Simplicity, Composability, and Separation of Concerns. These concepts are shown through three primitive building blocks: Future, Service, and Filter.

  • Futures provide an easy interface to create asynchronous computation and to model sequential or asynchronous data-flows.
  • Services are functions that return futures, used to abstract away, possibly remote, service calls.
  • Filters are essentially decorators and are meant to contain modular blocks of re-usable, non-business logic. Example usages are LoggingFilter and RetryingFilter.

The use of Futures makes it easy to test asynchronous computations. Services and filters can both be created separately, each containing their specialized logic. This modularity makes it easy to test and reason about them separately. Services and filters compose easily, just as functions do, which makes it convenient to test chains. Services and filters are meant to separate behaviour from domain logic.
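Finagle is a Scala library, but the three primitives can be sketched as types in this post's Haskell notation (an illustrative analogy of mine, with IO standing in for Finagle's Future) to show why they compose so well:

```haskell
import Data.Char (toUpper)

-- A Service turns a request into an (asynchronous) response.
type Service req rep = req -> IO rep

-- A Filter wraps a Service with reusable, non-business logic.
type Filter req rep = Service req rep -> Service req rep

-- A toy service and a logging filter, composed like ordinary functions.
echoService :: Service String String
echoService = return . map toUpper

loggingFilter :: Filter String String
loggingFilter service request = do
  putStrLn ("request: " ++ request)
  response <- service request
  putStrLn ("response: " ++ response)
  return response

main :: IO ()
main = do
  response <- loggingFilter echoService "hello"
  print response  -- "HELLO"
```

Because a Filter is just a function from Service to Service, stacking filters is ordinary function composition.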

As amazing as Finagle is, there are some things one should be aware of. To create a really resilient application with Finagle one has to be an expert in its internals. Many configuration parameters influence each other, e.g. queue size and time-outs. With a properly tuned setup Finagle is properly fast and resilient (the defaults are good as well, mind you). As most data centres are heterogeneous in their setup, faster machines are added to the pool, and other conditions change, one has to pay continuous attention to tuning in order to maintain optimal performance.

Some general advice: watch out for traffic amplification due to retries, and keep your timeouts low so that retrying is useful, but not so low that you introduce spurious timeouts.

For extra points, keep hammering your application until it breaks, find out why it breaks, fix it, and repeat.

The future

In addition to this heads up we were also given a nice insight in the upcoming things for Finagle.

In order to make more informed decisions, we will get a new Failure type which contains more information instead of 'just' a Throwable. In this new Failure, an added field indicates whether it is safe to retry.

There are several issues with the current way of fine-tuning Finagle: as mentioned, you need to be an expert to use all the configuration parameters properly. Beyond this, the configuration is static and doesn't take into account changing environments and behaviour of downstream services. Because the tuning of the parameters is tightly coupled with the implementation of Finagle, it is also hard to change the implementation significantly without significant re-tuning.

In order to battle the last two points, Finagle will introduce Service Level Objectives (SLOs). An SLO is a higher-level goal that Finagle should strive to reach instead of low-level hardcoded parameters. What these SLOs will be exactly is not yet known.

The community

The Finagle team will synchronize the internal Finagle repository with the Github repository every Monday. They will strive to publish a snapshot version of the change as well.

For someone looking to write his own protocol to connect to his service, finagle-serial is a nice project to start with. It is small enough to grasp within a day but big enough to be non-trivial.

It was found that the ParGCCardsPerStrideChunk garbage collection option, available from 7u40, can halve GC times on large heaps. It is recommended to try this parameter. Tuning seems to be hard to do and is generally done by copying a ‘known good set’ of parameters.

Scrooge is a good utility to use for Thrift and Scala as it is aware of Scala features such as Traits and Objects and can generate relevant transformations for them.

When you want to connect to multiple data-centres from a single data-centre one can use LatencyCompensation to include latency times.

August 31, 2015 05:57 PM

Wolfgang Jeltsch

Hyperreal numbers on Estonian TV

On 13 February, I talked about hyperreal numbers in the Theory Lunch. I have not yet managed to write a blog article about this, but my notes on the whiteboard have already been featured on Estonian TV.

The background is that the head of the Software Department of the Institute of Cybernetics, Ahto Kalja, recently received the Order of the White Star, 4th class from the President of Estonia. On this account, Estonian TV conducted an interview with him, during which they recorded also parts of my notes that were still present on the whiteboard in our coffee room.

You can watch the video online. The relevant part, which is about e-government, is from 18:14 to 21:18. I enjoyed it very much hearing Ahto Kalja’s colleague Arvo Ott talking about electronic tax returns and seeing some formula about limits immediately afterwards. :-) At 20:38, there is also some Haskell-like pseudocode.

Tagged: Ahto Kalja, Arvo Ott, e-government, Eesti Televisioon, Haskell, hyperreal number, Institute of Cybernetics, Order of the White Star, talk, Theory Lunch

by Wolfgang Jeltsch at August 31, 2015 05:29 PM

A taste of Curry

Curry is a programming language that integrates functional and logic programming. Last week, Denis Firsov and I had a look at Curry, and Thursday, I gave an introductory talk about Curry in the Theory Lunch. This blog post is mostly a write-up of my talk.

Like Haskell, Curry has support for literate programming. So I wrote this blog post as a literate Curry file, which is available for download. If you want to try out the code, you have to install the Curry system KiCS2. The code uses the functional patterns language extension, which is only supported by KiCS2, as far as I know.

Functional programming

The functional fragment of Curry is very similar to Haskell. The only fundamental difference is that Curry does not support type classes.

Let us do some functional programming in Curry. First, we define a type whose values denote me and some of my relatives.

data Person = Paul
            | Joachim
            | Rita
            | Wolfgang
            | Veronika
            | Johanna
            | Jonathan
            | Jaromir

Now we define a function that yields the father of a given person if this father is covered by the Person type.

father :: Person -> Person
father Joachim  = Paul
father Rita     = Joachim
father Wolfgang = Joachim
father Veronika = Joachim
father Johanna  = Wolfgang
father Jonathan = Wolfgang
father Jaromir  = Wolfgang

Based on father, we define a function for computing grandfathers. To keep things simple, we only consider fathers of fathers to be grandfathers, not fathers of mothers.

grandfather :: Person -> Person
grandfather = father . father

Combining functional and logic programming

Logic programming languages like Prolog are able to search for variable assignments that make a given proposition true. Curry, on the other hand, can search for variable assignments that make a certain expression defined.

For example, we can search for all persons that have a grandfather according to the above data. We just enter

grandfather person where person free

at the KiCS2 prompt. KiCS2 then outputs all assignments to the person variable for which grandfather person is defined. For each of these assignments, it additionally prints the result of the expression grandfather person.


Functions in Curry can actually be non-deterministic, that is, they can return multiple results. For example, we can define a function element that returns any element of a given list. To achieve this, we use overlapping patterns in our function definition. If several equations of a function definition match a particular function application, Curry takes all of them, not only the first one, as Haskell does.

element :: [el] -> el
element (el : _)   = el
element (_  : els) = element els

Now we can enter

element "Hello!"

at the KiCS2 prompt, and the system outputs six different results.

Logic programming

We have already seen how to combine functional and logic programming with Curry. Now we want to do pure logic programming. This means that we only want to search for variable assignments, but are not interested in expression results. If you are not interested in results, you typically use a result type with only a single value. Curry provides the type Success with the single value success for doing logic programming.

Let us write some example code about routes between countries. We first introduce a type of some European and American countries.

data Country = Canada
             | Estonia
             | Germany
             | Latvia
             | Lithuania
             | Mexico
             | Poland
             | Russia
             | USA

Now we want to define a relation called borders that tells us which country borders which other country. We implement this relation as a function of type

Country -> Country -> Success

that has the trivial result success if the first country borders the second one, and has no result otherwise.

Note that this approach of implementing a relation is different from what we do in functional programming. In functional programming, we use Bool as the result type and signal falsity by the result False. In Curry, however, we signal falsity by the absence of a result.

Our borders relation only relates countries with those neighbouring countries whose names come later in alphabetical order. We will soon compute the symmetric closure of borders to also get the opposite relationships.

borders :: Country -> Country -> Success
Canada    `borders` USA       = success
Estonia   `borders` Latvia    = success
Estonia   `borders` Russia    = success
Germany   `borders` Poland    = success
Latvia    `borders` Lithuania = success
Latvia    `borders` Russia    = success
Lithuania `borders` Poland    = success
Mexico    `borders` USA       = success

Now we want to define a relation isConnected that tells whether two countries can be reached from each other via a land route. Clearly, isConnected is the equivalence relation that is generated by borders. In Prolog, we would write clauses that directly express this relationship between borders and isConnected. In Curry, on the other hand, we can write a function that generates an equivalence relation from any given relation and therefore does not only work with borders.

We first define a type alias Relation for the sake of convenience.

type Relation val = val -> val -> Success

Now we define what reflexive, symmetric, and transitive closures are.

reflClosure :: Relation val -> Relation val
reflClosure rel val1 val2 = rel val1 val2
reflClosure rel val  val  = success

symClosure :: Relation val -> Relation val
symClosure rel val1 val2 = rel val1 val2
symClosure rel val2 val1 = rel val1 val2

transClosure :: Relation val -> Relation val
transClosure rel val1 val2 = rel val1 val2
transClosure rel val1 val3 = rel val1 val2 &
                             transClosure rel val2 val3

    where val2 free

The operator & used in the definition of transClosure has type

Success -> Success -> Success

and denotes conjunction.

We define the function for generating equivalence relations as a composition of the above closure operators. Note that it is crucial that the transitive closure operator is applied after the symmetric closure operator, since the symmetric closure of a transitive relation is not necessarily transitive.

equivalence :: Relation val -> Relation val
equivalence = reflClosure . transClosure . symClosure

The implementation of isConnected is now trivial.

isConnected :: Country -> Country -> Success
isConnected = equivalence borders

Now we let KiCS2 compute which countries I can reach from Estonia without a ship or plane. We do so by entering

Estonia `isConnected` country where country free

at the prompt.

We can also implement a nondeterministic function that turns a country into the countries connected to it. For this, we use a guard that is of type Success. Such a guard succeeds if it has a result at all, which can only be success, of course.

connected :: Country -> Country
connected country1
    | country1 `isConnected` country2 = country2

    where country2 free

Equational constraints

Curry has a predefined operator

(=:=) :: val -> val -> Success

that stands for equality.

We can use this operator, for example, to define a nondeterministic function that yields the grandchildren of a given person. Again, we keep things simple by only considering relationships that solely go via fathers.

grandchild :: Person -> Person
grandchild person
    | grandfather grandkid =:= person = grandkid

    where grandkid free

Note that grandchild is the inverse of grandfather.
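
In plain Haskell we would have to make both the partiality and the search explicit. The following sketch uses hypothetical person names and a Maybe-valued father function (all assumptions of mine, not from the original post), enumerating a bounded domain where Curry would use a free variable:

```haskell
data Person = Alice | Bob | Carol | Dave
  deriving (Eq, Show, Enum, Bounded)

-- Hypothetical father relation, again only via fathers.
father :: Person -> Maybe Person
father Carol = Just Bob
father Bob   = Just Alice
father _     = Nothing

grandfather :: Person -> Maybe Person
grandfather p = father p >>= father

-- Invert grandfather by searching the whole (finite) domain.
grandchildren :: Person -> [Person]
grandchildren person =
  [g | g <- [minBound .. maxBound], grandfather g == Just person]
```

Here `grandchildren Alice` yields `[Carol]`, mirroring the Curry definition's nondeterministic results.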

Functional patterns

Functional patterns are a language extension that allows us to use ordinary functions in patterns, not just data constructors. Functional patterns are implemented by KiCS2.

Let us look at an example again. We want to define a function split that nondeterministically splits a list into two parts.1 Without functional patterns, we can implement splitting as follows.

split' :: [el] -> ([el],[el])
split' list | front ++ rear =:= list = (front,rear)

    where front, rear free

With functional patterns, we can implement splitting in a much simpler way.

split :: [el] -> ([el],[el])
split (front ++ rear) = (front,rear)

As a second example, let us define a function sublist that yields the sublists of a given list.

sublist :: [el] -> [el]
sublist (_ ++ sub ++ _) = sub
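
Without functional patterns, plain Haskell can model the same nondeterminism with a list of results. This is a sketch of mine, not part of the Curry post:

```haskell
import Data.List (inits, tails)

-- All ways to split a list into a front and a rear part.
splits :: [a] -> [([a], [a])]
splits xs = zip (inits xs) (tails xs)

-- All contiguous sublists: split off a front, then split off a rear.
sublists :: [a] -> [[a]]
sublists xs = [sub | (_, rest) <- splits xs, (sub, _) <- splits rest]
```

For example, `splits [1,2]` yields `[([],[1,2]),([1],[2]),([1,2],[])]`, one pair per result of the Curry function split.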

Inverting functions

In the grandchild example, we showed how we can define the inverse of a particular function. We can go further and implement a generic function inversion operator.

inverse :: (val -> val') -> (val' -> val)
inverse fun val' | fun val =:= val' = val where val free

With this operator, we could also implement grandchild as inverse grandfather.
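
A rough Haskell counterpart (my own sketch, not from the post) has to be given the domain to search explicitly, whereas Curry's free variable searches it for us:

```haskell
-- Invert a function over an explicitly supplied (finite or lazily
-- enumerated) domain, returning all preimages of val'.
inverseOn :: Eq b => [a] -> (a -> b) -> b -> [a]
inverseOn domain fun val' = [val | val <- domain, fun val == val']
```

For example, `inverseOn [0..10] (*2) 6` yields `[3]`.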

Inverting functions can make our lives a lot easier. Consider the example of parsing. A parser takes a string and returns a syntax tree. Writing a parser directly is a non-trivial task. However, generating a string from a syntax tree is just a simple functional programming exercise. So we can implement a parser in a simple way by writing a converter from syntax trees to strings and inverting it.

We show this for the language of all arithmetic expressions that can be built from addition, multiplication, and integer constants. We first define types for representing abstract syntax trees. These types resemble a grammar that takes precedence into account.

type Expr = Sum

data Sum     = Sum Product [Product]
data Product = Product Atom [Atom]
data Atom    = Num Int | Para Sum

Now we implement the conversion from abstract syntax trees to strings.

toString :: Expr -> String
toString = sumToString

sumToString :: Sum -> String
sumToString (Sum product products)
    = productToString product                           ++
      concatMap ((" + " ++) . productToString) products

productToString :: Product -> String
productToString (Product atom atoms)
    = atomToString atom                           ++
      concatMap ((" * " ++) . atomToString) atoms

atomToString :: Atom -> String
atomToString (Num num)  = show num
atomToString (Para sum) = "(" ++ sumToString sum ++ ")"

Implementing the parser is now extremely simple.

parse :: String -> Expr
parse = inverse toString

KiCS2 uses a depth-first search strategy by default. However, our parser implementation does not work with depth-first search. So we switch to breadth-first search by entering

:set bfs

at the KiCS2 prompt. Now we can try out the parser by entering

parse "2 * (3 + 4)" .
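
As a plain-Haskell sanity check (reusing the types and conversion functions from above), here is the syntax tree that parse should return for this input; feeding it back through toString reproduces the original string:

```haskell
type Expr = Sum

data Sum     = Sum Product [Product]
data Product = Product Atom [Atom]
data Atom    = Num Int | Para Sum

toString :: Expr -> String
toString = sumToString

sumToString :: Sum -> String
sumToString (Sum p ps) =
  productToString p ++ concatMap ((" + " ++) . productToString) ps

productToString :: Product -> String
productToString (Product a as) =
  atomToString a ++ concatMap ((" * " ++) . atomToString) as

atomToString :: Atom -> String
atomToString (Num n)  = show n
atomToString (Para s) = "(" ++ sumToString s ++ ")"

-- The expected parse of "2 * (3 + 4)".
expr :: Expr
expr = Sum (Product (Num 2)
                    [Para (Sum (Product (Num 3) [])
                               [Product (Num 4) []])])
           []
```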

  1. Note that our split function is not the same as the split function in Curry’s List module.

Tagged: breadth-first search, Curry, Denis Firsov, depth-first search, functional logic programming, functional pattern, functional programming, Institute of Cybernetics, KiCS2, literate programming, logic programming, parsing, Prolog, talk, Theory Lunch, type class

by Wolfgang Jeltsch at August 31, 2015 05:28 PM

FP Complete

New in-depth guide to stack

The stack build tool is a cross-platform program for developing Haskell projects. It is aimed at Haskellers both new and experienced. I recently put together an in-depth guide to using stack for Haskell development.

The official home for this document is in the stack repository. Below is the full text of the guide at the time of writing this blog post. If you have corrections or ideas for improvements, please send edits to the Github repository.

stack is a cross-platform program for developing Haskell projects. This guide is intended to step a new stack user through all of the typical stack workflows. This guide will not teach you Haskell, but will also not be looking at much code. This guide will not presume prior experience with the Haskell packaging system or other build tools.

What is stack?

stack is a modern build tool for Haskell code. It handles the management of your toolchain (including GHC, the Glasgow Haskell Compiler, and, for Windows users, MSYS), building and registering libraries, building build tool dependencies, and much more. While stack can use existing tools on your system, stack has the capability to be your one-stop shop for all Haskell tooling you need. This guide will follow that approach.

What makes stack special? Its primary design point is reproducible builds. The goal is that if you run stack build today, you'll get the same result running stack build tomorrow. There are some exceptions to that rule (changes in your operating system configuration, for example), but overall it follows this design philosophy closely.

stack has also been designed from the ground up to be user friendly, with an intuitive, discoverable command line interface. For many users, simply downloading stack and reading stack --help will be enough to get up and running. This guide is intended to provide a gradual learning process for users who prefer that learning style.

Finally, stack is isolated: it will not make changes outside of specific stack directories (described below). Do not be worried if you see comments like "Installing GHC": stack will not tamper with your system packages at all. Additionally, stack packages will not interfere with packages installed by other build tools like cabal.

NOTE In this guide, I'll be running commands on a Linux system (Ubuntu 14.04, 64-bit) and sharing output from there. Output on other systems, or with different versions of stack, will be slightly different. But all commands work in a cross-platform way, unless explicitly stated otherwise.


There's a wiki page dedicated to downloading stack which has the most up-to-date information for a variety of operating systems, including multiple Linux flavors. Instead of repeating that content here, please go check out that page and come back here when you can successfully run stack --version. The rest of this session will demonstrate the installation procedure on a vanilla Ubuntu 14.04 machine.

# Starting with a *really* bare machine
michael@d30748af6d3d:~$ sudo apt-get install wget
# Demonstrate that stack really isn't available
michael@d30748af6d3d:~$ stack
-bash: stack: command not found
# Get the signing key for the package repo
michael@d30748af6d3d:~$ wget -q -O- | sudo apt-key add -
michael@d30748af6d3d:~$ echo 'deb stable main'|sudo tee /etc/apt/sources.list.d/fpco.list
deb stable main
michael@d30748af6d3d:~$ sudo apt-get update && sudo apt-get install stack -y
# downloading...
michael@d30748af6d3d:~$ stack --version
Version, Git revision 908b04205e6f436d4a5f420b1c6c646ed2b804d7

That's it, stack is now up and running, and you're good to go. In addition, it's a good idea, though not required, to set your PATH environment variable to include $HOME/.local/bin:

michael@d30748af6d3d:~$ echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc

Hello World

Now that we've got stack, it's time to put it to work. We'll start off with the stack new command to create a new project. We'll call our project helloworld, and we'll use the new-template project template:

michael@d30748af6d3d:~$ stack new helloworld new-template

You'll see a lot of output since this is your first stack command, and there's quite a bit of initial setup it needs to do, such as downloading the list of packages available upstream. Here's an example of what you may see, though your exact results may vary. Over the course of this guide a lot of the content will begin to make more sense:

Downloading template "new-template" to create project "helloworld" in helloworld/ ...
Using the following authorship configuration:
author-name: Example Author Name
Copy these to /home/michael/.stack/stack.yaml and edit to use different values.
Writing default config file to: /home/michael/helloworld/stack.yaml
Basing on cabal files:
- /home/michael/helloworld/helloworld.cabal

Downloaded lts-3.2 build plan.
Caching build plan
Fetched package index.
Populated index cache.
Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/helloworld/stack.yaml

Great, we now have a project in the helloworld directory. Let's go in there and have some fun, using the most important stack command: build.

michael@d30748af6d3d:~$ cd helloworld/
michael@d30748af6d3d:~/helloworld$ stack build
No GHC found, expected version 7.10.2 (x86_64) (based on resolver setting in /home/michael/helloworld/stack.yaml).
Try running stack setup

That was a bit anticlimactic. The problem is that stack needs GHC in order to build your project, but we don't have one on our system yet. Instead of automatically assuming you want it to download and install GHC for you, stack asks you to do this as a separate command: setup. Our message here lets us know that stack setup will need to install GHC version 7.10.2. Let's try that out:

michael@d30748af6d3d:~/helloworld$ stack setup
Downloaded ghc-7.10.2.
Installed GHC.
stack will use a locally installed GHC
For more information on paths, see 'stack path' and 'stack exec env'
To use this GHC and packages outside of a project, consider using:
stack ghc, stack ghci, stack runghc, or stack exec

It doesn't come through in the output here, but you'll get intermediate download percentage statistics while the download is occurring. This command may take some time, depending on download speeds.

NOTE: GHC gets installed to a stack-specific directory, so calling ghc on the command line won't work. See the stack exec, stack ghc, and stack runghc commands below for more information.

But now that we've got GHC available, stack can build our project:

michael@d30748af6d3d:~/helloworld$ stack build
helloworld- configure
Configuring helloworld-
helloworld- build
Preprocessing library helloworld-
[1 of 1] Compiling Lib              ( src/Lib.hs, .stack-work/dist/x86_64-linux/Cabal- )
In-place registering helloworld-
Preprocessing executable 'helloworld-exe' for helloworld-
[1 of 1] Compiling Main             ( app/Main.hs, .stack-work/dist/x86_64-linux/Cabal- )
Linking .stack-work/dist/x86_64-linux/Cabal- ...
helloworld- install
Installing library in
Installing executable(s) in
Registering helloworld-

If you look closely at the output, you can see that it built both a library called "helloworld" and an executable called "helloworld-exe". We'll explain in the next section where this information is defined. For now, though, let's just run our executable (which just outputs the string "someFunc"):

michael@d30748af6d3d:~/helloworld$ stack exec helloworld-exe
someFunc

And finally, like all good software, helloworld actually has a test suite. Let's run it with stack test:

michael@d30748af6d3d:~/helloworld$ stack test
NOTE: the test command is functionally equivalent to 'build --test'
helloworld- configure (test)
Configuring helloworld-
helloworld- build (test)
Preprocessing library helloworld-
In-place registering helloworld-
Preprocessing test suite 'helloworld-test' for helloworld-
[1 of 1] Compiling Main             ( test/Spec.hs, .stack-work/dist/x86_64-linux/Cabal- )
Linking .stack-work/dist/x86_64-linux/Cabal- ...
helloworld- test (suite: helloworld-test)
Test suite not yet implemented

Reading the output, you'll see that stack first builds the test suite and then automatically runs it for us. For both the build and test command, already built components are not built again. You can see this by running stack build and stack test a second time:

michael@d30748af6d3d:~/helloworld$ stack build
michael@d30748af6d3d:~/helloworld$ stack test
NOTE: the test command is functionally equivalent to 'build --test'
helloworld- test (suite: helloworld-test)
Test suite not yet implemented

In the next three subsections, we'll dissect a few details of this helloworld example.

Files in helloworld

Before moving on with understanding stack a bit better, let's understand our project just a bit better.

michael@d30748af6d3d:~/helloworld$ find * -type f

The app/Main.hs, src/Lib.hs, and test/Spec.hs files are all Haskell source files that compose the actual functionality of our project, and we won't dwell on them too much. Similarly, the LICENSE file has no impact on the build, but is there for informational/legal purposes only. That leaves Setup.hs, helloworld.cabal, and stack.yaml.

The Setup.hs file is a component of the Cabal build system which stack uses. It's technically not needed by stack, but it is still considered good practice in the Haskell world to include it. The file we're using is straight boilerplate:

import Distribution.Simple
main = defaultMain

Next, let's look at our stack.yaml file, which gives our project-level settings:

flags: {}
packages:
- '.'
extra-deps: []
resolver: lts-3.2

If you're familiar with YAML, you'll see that the flags and extra-deps keys have empty values. We'll see more interesting usages for these fields later. Let's focus on the other two fields. packages tells stack which local packages to build. In our simple example, we have just a single package in our project, located in the same directory, so '.' suffices. However, stack has powerful support for multi-package projects, which we'll elaborate on as this guide progresses.

The final field is resolver. This tells stack how to build your package: which GHC version to use, versions of package dependencies, and so on. Our value here says to use LTS Haskell version 3.2, which implies GHC 7.10.2 (which is why stack setup installs that version of GHC). There are a number of values you can use for resolver, which we'll talk about below.

The final file of import is helloworld.cabal. stack is built on top of the Cabal build system. In Cabal, we have individual packages, each of which contains a single .cabal file. The .cabal file can define 1 or more components: a library, executables, test suites, and benchmarks. It also specifies additional information such as library dependencies, default language pragmas, and so on.

In this guide, we'll discuss the bare minimum necessary to understand how to modify a .cabal file. The definitive reference on the .cabal file format is available on

The setup command

As we saw above, the setup command installed GHC for us. Just for kicks, let's run setup a second time:

michael@d30748af6d3d:~/helloworld$ stack setup
stack will use a locally installed GHC
For more information on paths, see 'stack path' and 'stack exec env'
To use this GHC and packages outside of a project, consider using:
stack ghc, stack ghci, stack runghc, or stack exec

Thankfully, the command is smart enough to know not to perform an installation twice. setup will take advantage of either the first GHC it finds on your PATH, or a locally installed version. As the command output above indicates, you can use stack path for quite a bit of path information (which we'll play with more later). For now, we'll just look at where GHC is installed:

michael@d30748af6d3d:~/helloworld$ stack exec which ghc

As you can see from that path, the installation is placed such that it will not interfere with any other GHC installation, either system-wide, or even different GHC versions installed by stack.

The build command

The build command is the heart and soul of stack. It is the engine that powers building your code, testing it, getting dependencies, and more. Quite a bit of the remainder of this guide will cover fun things you can do with build to get more advanced behavior, such as building tests and Haddocks at the same time, or automatically rebuilding whenever files change.

But on a philosophical note: running the build command twice with the same options and arguments should generally be a no-op (besides things like rerunning test suites), and should in general produce a reproducible result between different runs.

OK, enough talking about this simple example. Let's start making it a bit more complicated!

Adding dependencies

Let's say we decide to modify our helloworld source a bit to use a new library, perhaps the ubiquitous text package. For example:

{-# LANGUAGE OverloadedStrings #-}
module Lib
    ( someFunc
    ) where

import qualified Data.Text.IO as T

someFunc :: IO ()
someFunc = T.putStrLn "someFunc"

When we try to build this, things don't go as expected:

michael@d30748af6d3d:~/helloworld$ stack build
helloworld- unregistering (local file changes)
helloworld- configure
Configuring helloworld-
helloworld- build
Preprocessing library helloworld-

    Could not find module `Data.Text.IO'
    Use -v to see a list of the files searched for.

--  While building package helloworld- using:
      /home/michael/.stack/programs/x86_64-linux/ghc-7.10.2/bin/runhaskell -package=Cabal- -clear-package-db -global-package-db -package-db=/home/michael/.stack/snapshots/x86_64-linux/lts-3.2/7.10.2/pkgdb/ /tmp/stack5846/Setup.hs --builddir=.stack-work/dist/x86_64-linux/Cabal- build exe:helloworld-exe --ghc-options -hpcdir .stack-work/dist/x86_64-linux/Cabal- -ddump-hi -ddump-to-file
    Process exited with code: ExitFailure 1

Notice that it says "Could not find module." This means that the package containing the module in question is not available. In order to tell stack that you want to use text, you need to add it to your .cabal file. This can be done in your build-depends section, and looks like this:

  hs-source-dirs:      src
  exposed-modules:     Lib
  build-depends:       base >= 4.7 && < 5
                       -- This next line is the new one
                     , text
  default-language:    Haskell2010

Now if we rerun stack build, we get a very different result:

michael@d30748af6d3d:~/helloworld$ stack build
text- download
text- configure
text- build
text- install
helloworld- configure
Configuring helloworld-
helloworld- build
Preprocessing library helloworld-
[1 of 1] Compiling Lib              ( src/Lib.hs, .stack-work/dist/x86_64-linux/Cabal- )
In-place registering helloworld-
Preprocessing executable 'helloworld-exe' for helloworld-
[1 of 1] Compiling Main             ( app/Main.hs, .stack-work/dist/x86_64-linux/Cabal- ) [Lib changed]
Linking .stack-work/dist/x86_64-linux/Cabal- ...
helloworld- install
Installing library in
Installing executable(s) in
Registering helloworld-
Completed all 2 actions.

What this output means is: the text package was downloaded, configured, built, and locally installed. Once that was done, we moved on to building our local package (helloworld). Notice that at no point do you need to ask stack to build dependencies for you: it does so automatically.


Let's try a more off-the-beaten-track package: the joke acme-missiles package. Our source code is simple:

module Lib
    ( someFunc
    ) where

import Acme.Missiles

someFunc :: IO ()
someFunc = launchMissiles

As expected, stack build will fail because the module is not available. But if we add acme-missiles to the .cabal file, we get a new error message:

michael@d30748af6d3d:~/helloworld$ stack build
While constructing the BuildPlan the following exceptions were encountered:

--  While attempting to add dependency,
    Could not find package acme-missiles in known packages

--  Failure when adding dependencies:
      acme-missiles: needed (-any), latest is 0.3, but not present in build plan
    needed for package: helloworld-

Recommended action: try adding the following to your extra-deps in /home/michael/helloworld/stack.yaml
- acme-missiles-0.3

You may also want to try the 'stack solver' command

Notice that it says acme-missiles is "not present in build plan." This is the next major topic to understand when using stack.

Curated package sets

Remember up above when stack new selected the lts-3.2 resolver for us? That's what's defining our build plan, and available packages. When we tried using the text package, it just worked, because it was part of the lts-3.2 package set. acme-missiles, on the other hand, is not part of that package set, and therefore building failed.

The first thing you're probably wondering is: how do I fix this? To do so, we'll use another one of the fields in stack.yaml, extra-deps, which is used to define extra dependencies not present in your resolver. With that change, our stack.yaml looks like:

flags: {}
packages:
- '.'
extra-deps:
- acme-missiles-0.3 # Here it is
resolver: lts-3.2

And as expected, stack build succeeds.

With that out of the way, let's dig a little bit more into these package sets, also known as snapshots. We mentioned lts-3.2, and you can get quite a bit of information about it at

  • The appropriate resolver value (resolver: lts-3.2, as we used above)
  • The GHC version used
  • A full list of all packages available in this snapshot
  • The ability to perform a Hoogle search on the packages in this snapshot
  • A list of all modules in a snapshot, which can be useful when trying to determine which package to add to your .cabal file

You can also see a list of all available snapshots. You'll notice two flavors: LTS (standing for Long Term Support) and Nightly. You can read more about them on the LTS Haskell Github page. If you're not sure what to go with, start with LTS Haskell. That's what stack will lean towards by default as well.

Resolvers and changing your compiler version

Now that we know a bit more about package sets, let's try putting that knowledge to work. Instead of lts-3.2, let's change our stack.yaml file to use nightly-2015-08-26. Rerunning stack build will produce:

michael@d30748af6d3d:~/helloworld$ stack build
Downloaded nightly-2015-08-26 build plan.
Caching build plan
stm-2.4.4: configure
stm-2.4.4: build
stm-2.4.4: install
acme-missiles-0.3: configure
acme-missiles-0.3: build
acme-missiles-0.3: install
helloworld- configure
Configuring helloworld-
helloworld- build
Preprocessing library helloworld-
In-place registering helloworld-
Preprocessing executable 'helloworld-exe' for helloworld-
Linking .stack-work/dist/x86_64-linux/Cabal- ...
helloworld- install
Installing library in
Installing executable(s) in
Registering helloworld-
Completed all 3 actions.

We can also change resolvers on the command line, which can be useful in a Continuous Integration (CI) setting, like on Travis. For example:

michael@d30748af6d3d:~/helloworld$ stack --resolver lts-3.1 build
Downloaded lts-3.1 build plan.
Caching build plan
stm-2.4.4: configure
# Rest is the same, no point copying it

When passed on the command line, you also get some additional "short-cut" versions of resolvers: --resolver nightly will use the newest Nightly resolver available, --resolver lts will use the newest LTS, and --resolver lts-2 will use the newest LTS in the 2.X series. The reason these are only available on the command line and not in your stack.yaml file is that using them:

  1. Will slow your build down, since stack needs to download information on the latest available LTS each time it builds
  2. Produces unreliable results, since a build run today may proceed differently tomorrow because of changes outside of your control.

Changing GHC versions

Finally, let's try using an older LTS snapshot. We'll use the newest 2.X snapshot:

michael@d30748af6d3d:~/helloworld$ stack --resolver lts-2 build
Selected resolver: lts-2.22
Downloaded lts-2.22 build plan.
Caching build plan
No GHC found, expected version 7.8.4 (x86_64) (based on resolver setting in /home/michael/helloworld/stack.yaml). Try running stack setup

This fails, because GHC 7.8.4 (which lts-2.22 uses) is not available on our system. The first lesson is: when you want to change your GHC version, modify the resolver value. Now the question is: how do we get the right GHC version? One answer is to use stack setup like we did above, this time with the --resolver lts-2 option. However, there's another way worth mentioning: the --install-ghc flag.

michael@d30748af6d3d:~/helloworld$ stack --resolver lts-2 --install-ghc build
Selected resolver: lts-2.22
Downloaded ghc-7.8.4.
Installed GHC.
stm-2.4.4: configure
# Mostly same as before, nothing interesting to see

What's nice about --install-ghc is that:

  1. You don't need to have an extra step in your build script
  2. It only requires downloading the information on latest snapshots once

As mentioned above, the default behavior of stack is to not install new versions of GHC automatically, to avoid surprising users with large downloads/installs. This flag simply changes that default behavior.

Other resolver values

We've mentioned nightly-YYYY-MM-DD and lts-X.Y values for the resolver. There are actually other options available, and the list will grow over time. At the time of writing:

  • ghc-X.Y.Z, for requiring a specific GHC version but no additional packages
  • Experimental GHCJS support
  • Experimental custom snapshot support

The most up-to-date information can always be found on the stack.yaml wiki page.

Existing projects

Alright, enough playing around with simple projects. Let's take an open source package and try to build it. We'll be ambitious and use yackage, a local package server using Yesod. To get the code, we'll use the stack unpack command:

michael@d30748af6d3d:~$ stack unpack yackage-0.8.0
yackage-0.8.0: download
Unpacked yackage-0.8.0 to /home/michael/yackage-0.8.0/
michael@d30748af6d3d:~$ cd yackage-0.8.0/

This new directory does not have a stack.yaml file, so we need to make one first. We could do it by hand, but let's be lazy instead with the stack init command:

michael@d30748af6d3d:~/yackage-0.8.0$ stack init
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/yackage-0.8.0/stack.yaml
michael@d30748af6d3d:~/yackage-0.8.0$ cat stack.yaml
flags:
  yackage:
    upload: true
packages:
- '.'
extra-deps: []
resolver: lts-3.2

stack init does quite a few things for you behind the scenes:

  • Creates a list of snapshots that would be good candidates. The basic algorithm here is: prefer snapshots you've already built some packages for (to increase sharing of binary package databases, as we'll discuss later), prefer recent snapshots, and prefer LTS. These preferences can be tweaked with command line flags, see stack init --help.
  • Finds all of the .cabal files in your current directory and subdirectories (unless you use --ignore-subdirs) and determines the packages and versions they require
  • Finds a combination of snapshot and package flags that allows everything to compile

Assuming it finds a match, it will write your stack.yaml file, and everything will be good. Given that LTS Haskell and Stackage Nightly have ~1400 of the most common Haskell packages, this will often be enough. However, let's simulate a failure by adding acme-missiles to our build-depends and re-initing:

michael@d30748af6d3d:~/yackage-0.8.0$ stack init --force
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Checking against build plan lts-3.2

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any

Checking against build plan lts-3.1

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any

Checking against build plan nightly-2015-08-26

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any

Checking against build plan lts-2.22

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any

    warp version found
    - yackage requires >=3.1

There was no snapshot found that matched the package bounds in your .cabal files.
Please choose one of the following commands to get started.

    stack init --resolver lts-3.2
    stack init --resolver lts-3.1
    stack init --resolver nightly-2015-08-26
    stack init --resolver lts-2.22

You'll then need to add some extra-deps. See:

You can also try falling back to a dependency solver with:

    stack init --solver

stack has tested four different snapshots, and in every case discovered that acme-missiles is not available. Also, when testing lts-2.22, it found that the warp version provided was too old for yackage. The question is: what do we do next?

The recommended approach is: pick a resolver, and fix the problem. Again, following the advice mentioned above, default to LTS if you don't have a preference. In this case, the newest LTS listed is lts-3.2. Let's pick that. stack has told us the correct command to do this. We'll just remove our old stack.yaml first and then run it:

michael@d30748af6d3d:~/yackage-0.8.0$ rm stack.yaml
michael@d30748af6d3d:~/yackage-0.8.0$ stack init --resolver lts-3.2
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Checking against build plan lts-3.2

* Build plan did not match your requirements:
    acme-missiles not found
    - yackage requires -any

Selected resolver: lts-3.2
Wrote project config to: /home/michael/yackage-0.8.0/stack.yaml

As you may guess, stack build will now fail due to the missing acme-missiles. Toward the end of the error message, it says the familiar:

Recommended action: try adding the following to your extra-deps in /home/michael/yackage-0.8.0/stack.yaml
- acme-missiles-0.3

If you're following along at home, try making the necessary stack.yaml modification to get things building.

Alternative solution: dependency solving

There's another solution to the problem you may consider. At the very end of the previous error message, it said:

You may also want to try the 'stack solver' command

This approach uses a full blown dependency solver to look at all upstream package versions available and compare them to your snapshot selection and version ranges in your .cabal file. In order to use this feature, you'll need the cabal executable available. Let's build that with:

michael@d30748af6d3d:~/yackage-0.8.0$ stack build cabal-install
random-1.1: download
mtl-2.2.1: download
network- download
old-locale- download
random-1.1: configure
random-1.1: build
# ...
cabal-install- download
cabal-install- configure
cabal-install- build
cabal-install- install
Completed all 10 actions.

Now we can use stack solver:

michael@d30748af6d3d:~/yackage-0.8.0$ stack solver
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
- acme-missiles-0.3

And if we're exceptionally lazy, we can ask stack to modify our stack.yaml file for us:

michael@d30748af6d3d:~/yackage-0.8.0$ stack solver --modify-stack-yaml
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
- acme-missiles-0.3
Updated /home/michael/yackage-0.8.0/stack.yaml

With that change, stack build will now run.

NOTE: You should probably back up your stack.yaml before doing this, such as committing to Git/Mercurial/Darcs.

There's one final approach to mention: skipping the snapshot entirely and just using dependency solving. You can do this with the --solver flag to init. This is not a commonly used workflow with stack, as you end up with a large number of extra-deps, and no guarantee that the packages will compile together. For those interested, however, the option is available. You need to make sure you have both the ghc and cabal commands on your PATH. An easy way to do this is to use the stack exec command:

michael@d30748af6d3d:~/yackage-0.8.0$ stack exec --no-ghc-package-path -- stack init --solver --force
Writing default config file to: /home/michael/yackage-0.8.0/stack.yaml
Basing on cabal files:
- /home/michael/yackage-0.8.0/yackage.cabal

Asking cabal to calculate a build plan, please wait
Selected resolver: ghc-7.10
Wrote project config to: /home/michael/yackage-0.8.0/stack.yaml

The --no-ghc-package-path flag is described below, and is only needed due to a bug in the currently released stack. That bug is fixed in 0.1.4 and forward.

Different databases

Time to take a short break from hands-on examples and discuss a little architecture. stack has the concept of multiple databases. A database consists of a GHC package database (which contains the compiled version of a library), executables, and a few other things as well. Just to give you an idea:

michael@d30748af6d3d:~/helloworld$ ls .stack-work/install/x86_64-linux/lts-3.2/7.10.2/
bin  doc  flag-cache  lib  pkgdb

Databases in stack are layered. For example, the database listing I just gave is what we call a local database. This is layered on top of a snapshot database, which contains the libraries and executables specified in the snapshot itself. Finally, GHC itself ships with a number of libraries and executables, which forms the global database. Just to give a quick idea of this, we can look at the output of the ghc-pkg list command in our helloworld project:


Notice that acme-missiles ends up in the local database. Anything which is not installed from a snapshot ends up in the local database. This includes: your own code, extra-deps, and in some cases even snapshot packages, if you modify them in some way. The reason we have this structure is that:

  • it lets multiple projects reuse the same binary builds of many snapshot packages,
  • but doesn't allow different projects to "contaminate" each other by putting non-standard content into the shared snapshot database

Typically, the process by which a snapshot package is marked as modified is referred to as "promoting to an extra-dep," meaning we treat it just like a package in the extra-deps section. This happens for a variety of reasons, including:

  • changing the version of the snapshot package
  • changing build flags
  • one of the packages that the package depends on has been promoted to an extra-dep

And as you probably guessed: there are multiple snapshot databases available, e.g.:

michael@d30748af6d3d:~/helloworld$ ls ~/.stack/snapshots/x86_64-linux/
lts-2.22  lts-3.1  lts-3.2  nightly-2015-08-26

These databases don't get layered on top of each other, but are each used separately.

In reality, you'll rarely, if ever, interact directly with these databases, but it's good to have a basic understanding of how they work so you can understand why rebuilding may occur at different points.

The build synonyms

Let me show you a subset of the stack --help output:

build    Build the project(s) in this directory/configuration
install  Shortcut for 'build --copy-bins'
test     Shortcut for 'build --test'
bench    Shortcut for 'build --bench'
haddock  Shortcut for 'build --haddock'

It's important to note that four of these commands are just synonyms for the build command. They are provided for convenience for common cases (e.g., stack test instead of stack build --test) and so that commonly expected commands just work.

What's so special about these commands being synonyms? It allows us to make much more composable command lines. For example, we can have a command that builds executables, generates Haddock documentation (Haskell API-level docs), and builds and runs your test suites, with:

stack build --haddock --test

You can even get more inventive as you learn about other flags. For example, take the following:

stack build --pedantic --haddock --test --exec "echo Yay, it succeeded" --file-watch

This will:

  • turn on all warnings and errors
  • build your library and executables
  • generate Haddocks
  • build and run your test suite
  • run the command echo Yay, it succeeded when that completes
  • after building, watch for changes in the files used to build the project, and kick off a new build when done

install and copy-bins

It's worth calling out the behavior of the install command and --copy-bins option, since this has confused a number of users, especially when compared to behavior of other tools (e.g., cabal-install). The install command does precisely one thing in addition to the build command: it copies any generated executables to the local bin path. You may recognize the default value for that path:

michael@d30748af6d3d:~/helloworld$ stack path --local-bin-path

That's why the download page recommends adding that directory to your PATH environment variable. This feature is convenient, because you can then simply run executable-name in your shell instead of having to run stack exec executable-name from inside your project directory.
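The recommended PATH change can be sketched in shell. The directory below is the usual default local bin path; the authoritative value for your machine comes from stack path --local-bin-path:

```shell
# Add stack's local bin path (default shown; verify with `stack path --local-bin-path`)
export PATH="$HOME/.local/bin:$PATH"

# Confirm the directory is now on PATH
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "on PATH" ;;
  *)                      echo "missing" ;;
esac
```

Putting the export line in your shell's startup file (e.g. ~/.bashrc) makes the change permanent.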

Since it's such a point of confusion, let me list a number of things stack does not do specially for the install command:

  • stack will always build any necessary dependencies for your code. The install command is not necessary to trigger this behavior. If you just want to build a project, run stack build.
  • stack will not track which files it's copied to your local bin path, nor provide a way to automatically delete them. There are many great tools out there for managing installation of binaries, and stack does not attempt to replace those.
  • stack will not necessarily be creating a relocatable executable. If your executable hard-codes paths, copying the executable will not change those hard-coded paths. At the time of writing, there's no way to change those kinds of paths with stack, but see issue #848 about --prefix for future plans.

That's really all there is to the install command: for the simplicity of what it does, it occupies a much larger mental space than is warranted.

Targets, locals, and extra-deps

We haven't discussed this too much yet, but in addition to having a number of synonyms, and taking a number of options on the command line, the build command also takes many arguments. These are parsed in different ways, and can be used to achieve a high level of flexibility in telling stack exactly what you want to build.

We're not going to cover the full generality of these arguments here; there's a Wiki page covering the full build command syntax. Instead, let me point out a few different types of arguments:

  • You can specify a package name, e.g. stack build vector. This will attempt to build the vector package, whether it's a local package, in your extra-deps, in your snapshot, or just available upstream. If it's just available upstream but not included in your locals, extra-deps, or snapshot, the newest version is automatically promoted to an extra-dep.
  • You can also give a package identifier, which is a package name plus version, e.g. stack build yesod-bin-1.4.14. This is almost identical to specifying a package name, except it will (1) choose the given version instead of latest, and (2) error out if the given version conflicts with the version of a local package.
  • The most flexibility comes from specifying individual components, e.g. stack build helloworld:test:helloworld-test says "build the test suite component named helloworld-test from the helloworld package." In addition to this long form, you can also shorten it by skipping what type of component it is, e.g. stack build helloworld:helloworld-test, or even skip the package name entirely, e.g. stack build :helloworld-test.
  • Finally, you can specify individual directories to build, which will trigger building of any local packages included in those directories or subdirectories.

When you give no specific arguments on the command line (e.g., stack build), it's the same as specifying the names of all of your local packages. If you just want to build the package for the directory you're currently in, you can use stack build ..

Components, --test, and --bench

Here's one final important yet subtle point. Consider our helloworld package, which has a library component, an executable helloworld-exe, and a test suite helloworld-test. When you run stack build helloworld, how does it know which ones to build? By default, it will build the library (if any) and all of the executables, but ignore the test suites and benchmarks.

This is where the --test and --bench flags come into play. If you use them, those components will also be included. So stack build --test helloworld will end up including the helloworld-test component as well.

You can bypass this implicit adding of components by being much more explicit, and stating the components directly. For example, the following will not build the helloworld-exe executable:

michael@d30748af6d3d:~/helloworld$ stack clean
michael@d30748af6d3d:~/helloworld$ stack build :helloworld-test
helloworld- configure (test)
Configuring helloworld-
helloworld- build (test)
Preprocessing library helloworld-
[1 of 1] Compiling Lib              ( src/Lib.hs, .stack-work/dist/x86_64-linux/Cabal- )
In-place registering helloworld-
Preprocessing test suite 'helloworld-test' for helloworld-
[1 of 1] Compiling Main             ( test/Spec.hs, .stack-work/dist/x86_64-linux/Cabal- )
Linking .stack-work/dist/x86_64-linux/Cabal- ...
helloworld- test (suite: helloworld-test)
Test suite not yet implemented

We first cleaned our project to clear old results so we know exactly what stack is trying to do. Notice that it builds the helloworld-test test suite, and the helloworld library (since it's used by the test suite), but it does not build the helloworld-exe executable.

And now the final point: the last line shows that our command also runs the test suite it just built. This may surprise some people who would expect tests to only be run when using stack test, but this design decision is what allows the stack build command to be as composable as it is (as described previously). The same rule applies to benchmarks. To spell it out completely:

  • The --test and --bench flags simply state which components of a package should be built, if no explicit set of components is given
  • The default behavior for any test suite or benchmark component which has been built is to also run it

You can use the --no-run-tests and --no-run-benchmarks (from stack- and on) flags to disable running of these components. You can also use --no-rerun-tests to prevent running a test suite which has already passed and has not changed.

NOTE: stack doesn't build or run test suites and benchmarks for non-local packages. This is done so that running a command like stack test doesn't need to run 200 test suites!

Multi-package projects

Until now, everything we've done with stack has used a single-package project. However, stack's power truly shines when you're working on multi-package projects. All the functionality you'd expect to work just does: dependencies between packages are detected and respected, dependencies of all packages are built as one cohesive whole, and if anything fails to build, the build command exits appropriately.

Let's demonstrate this with the wai-app-static and yackage packages:

michael@d30748af6d3d:~$ mkdir multi
michael@d30748af6d3d:~$ cd multi/
michael@d30748af6d3d:~/multi$ stack unpack wai-app-static-3.1.1 yackage-0.8.0
wai-app-static-3.1.1: download
Unpacked wai-app-static-3.1.1 to /home/michael/multi/wai-app-static-3.1.1/
Unpacked yackage-0.8.0 to /home/michael/multi/yackage-0.8.0/
michael@d30748af6d3d:~/multi$ stack init
Writing default config file to: /home/michael/multi/stack.yaml
Basing on cabal files:
- /home/michael/multi/yackage-0.8.0/yackage.cabal
- /home/michael/multi/wai-app-static-3.1.1/wai-app-static.cabal

Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/multi/stack.yaml
michael@d30748af6d3d:~/multi$ stack build --haddock --test
# Goes off to build a whole bunch of packages

If you look at the stack.yaml, you'll see exactly what you'd expect:

flags:
  yackage:
    upload: true
  wai-app-static:
    print: false
packages:
- yackage-0.8.0/
- wai-app-static-3.1.1/
extra-deps: []
resolver: lts-3.2

Notice that multiple directories are listed in the packages key.

In addition to local directories, you can also refer to packages available in a Git repository or in a tarball over HTTP/HTTPS. This can be useful for using a modified version of a dependency that hasn't yet been released upstream. This is a slightly more advanced usage that we won't go into detail with here, but it's covered in the stack.yaml wiki page.

Flags and GHC options

There are two common ways you may wish to alter how a package will install: with Cabal flags and GHC options. In the stack.yaml file above, you can see that stack init has detected that, for the yackage package, the upload flag can be set to true, and for wai-app-static, the print flag to false. (The reason it's chosen those values is that they're the default flag values, and their dependencies are compatible with the snapshot we're using.)

In order to change this, we can use the command line --flag option:

stack build --flag yackage:-upload

This means: when compiling the yackage package, turn off the upload flag (thus the -). Unlike other tools, stack is explicit about which package's flag you want to change. It does this for two reasons:

  1. There's no global meaning for Cabal flags, and therefore two packages can use the same flag name for completely different things.
  2. By following this approach, we can avoid unnecessarily recompiling snapshot packages that happen to use a flag that we're using.

You can also change flag values on the command line for extra-dep and snapshot packages. If you do this, that package will automatically be promoted to an extra-dep, since the build plan is different from what the snapshot definition would entail.
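If you'll be setting the same flag on every build, it can be pinned in your stack.yaml instead of being repeated on the command line. A minimal sketch, using the yackage example above (consult the stack.yaml wiki page for the authoritative syntax):

```yaml
# Persistent equivalent of `stack build --flag yackage:-upload`:
# a per-package map of Cabal flag names to boolean values
flags:
  yackage:
    upload: false
```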

GHC options

GHC options follow a similar logic, with a few nuances to adjust for common use cases. Let's consider:

stack build --ghc-options="-Wall -Werror"

This will set the -Wall -Werror options for all local targets. The important thing to note here is that it will not affect extra-dep and snapshot packages at all. This is by design, once again, to get reproducible and fast builds.

(By the way: the above GHC options have a special convenience flag: --pedantic.)

There's one extra nuance about command line GHC options. Since they only apply to local targets, if you change your local targets, they will no longer apply to other packages. Let's play around with an example from the wai repository, which includes the wai and warp packages, the latter depending on the former. If we run:

stack build --ghc-options=-O0 wai

It will build all of the dependencies of wai, and then build wai with all optimizations disabled. Now let's add in warp as well:

stack build --ghc-options=-O0 wai warp

This builds the additional dependencies for warp, and then builds warp with optimizations disabled. Importantly: it does not rebuild wai, since wai's configuration has not been altered. Now the surprising case:

michael@d30748af6d3d:~/wai$ stack build --ghc-options=-O0 warp
wai- unregistering (flags changed from ["--ghc-options","-O0"] to [])
warp-3.1.3-a91c7c3108f63376877cb3cd5dbe8a7a: unregistering (missing dependencies: wai)
wai- configure

You may expect this to be a no-op: neither wai nor warp has changed. However, stack will instead recompile wai with optimizations enabled again, and then rebuild warp (with optimizations disabled) against this newly built wai. The reason: reproducible builds. If we'd never built wai or warp before, trying to build warp would necessitate building all of its dependencies, and it would do so with default GHC options (optimizations enabled). This dependency would include wai. So when we run:

stack build --ghc-options=-O0 warp

We want its behavior to be unaffected by any previous build steps we took. While this specific corner case does catch people by surprise, the overall goal of reproducible builds is, in the stack maintainers' views, worth the confusion.

Final point: if you have GHC options that you'll be regularly passing to your packages, you can add them to your stack.yaml file (starting with stack- See the wiki page section on ghc-options for more information.
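As a sketch of what such a stack.yaml entry can look like (treat the exact key syntax as something to confirm on the wiki page for your stack version):

```yaml
# Apply these GHC options to local packages on every build,
# instead of passing --ghc-options on each command line
ghc-options:
  "*": -Wall
```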


NOTE: That's it, the heavy content of this guide is done! Everything from here on out is simple explanations of commands. Congratulations!

The stack path command

Generally, you don't need to worry about where stack stores various files. But some people like to know this stuff. That's when the stack path command is useful.

michael@d30748af6d3d:~/wai$ stack path
global-stack-root: /home/michael/.stack
project-root: /home/michael/wai
config-location: /home/michael/wai/stack.yaml
bin-path: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4/bin:/home/michael/.stack/programs/x86_64-linux/ghc-7.8.4/bin:/home/michael/.stack/programs/x86_64-linux/ghc-7.10.2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ghc-paths: /home/michael/.stack/programs/x86_64-linux
local-bin-path: /home/michael/.local/bin
snapshot-pkg-db: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4/pkgdb
local-pkg-db: /home/michael/wai/.stack-work/install/x86_64-linux/lts-2.17/7.8.4/pkgdb
snapshot-install-root: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4
local-install-root: /home/michael/wai/.stack-work/install/x86_64-linux/lts-2.17/7.8.4
snapshot-doc-root: /home/michael/.stack/snapshots/x86_64-linux/lts-2.17/7.8.4/doc
local-doc-root: /home/michael/wai/.stack-work/install/x86_64-linux/lts-2.17/7.8.4/doc
dist-dir: .stack-work/dist/x86_64-linux/Cabal-

In addition, this command accepts command line arguments to state which of these keys you're interested in, which can be convenient for scripting. As a simple example, let's find out which versions of GHC are installed locally:

michael@d30748af6d3d:~/wai$ ls $(stack path --ghc-paths)/*.installed

(Yes, that command requires a *nix shell, and likely won't run on Windows.)

While we're talking about paths, it's worth explaining how to wipe a stack installation completely. It involves deleting just three things:

  1. The stack executable itself
  2. The stack root, e.g. $HOME/.stack on non-Windows systems. See stack path --global-stack-root
    • On Windows, you will also need to delete the directory reported by stack path --ghc-paths
  3. Any local .stack-work directories inside a project


stack exec

We've already used stack exec multiple times in this guide. As you've likely already guessed, it allows you to run executables, but with a slightly modified environment. In particular: it looks for executables on stack's bin paths, and sets a few additional environment variables (like GHC_PACKAGE_PATH, which tells GHC which package databases to use). If you want to see exactly what the modified environment looks like, try:

stack exec env

The only trick is how to distinguish flags to be passed to stack versus those for the underlying program. Thanks to the optparse-applicative library, stack follows the Unix convention of -- to separate these, e.g.:

michael@d30748af6d3d:~$ stack exec --package stm -- echo I installed the stm package via --package stm
Run from outside a project, using implicit global config
Using latest snapshot resolver: lts-3.2
Writing global (non-project-specific) config file to: /home/michael/.stack/global/stack.yaml
Note: You can change the snapshot via the resolver field there.
I installed the stm package via --package stm

Flags worth mentioning:

  • --package foo can be used to force a package to be installed before running the given command
  • --no-ghc-package-path can be used to stop the GHC_PACKAGE_PATH environment variable from being set. Some tools- notably cabal-install- do not behave well with that variable set

ghci (the repl)

GHCi is the interactive GHC environment, a.k.a. the REPL. You can access it with:

stack exec ghci

However, this doesn't do anything particularly intelligent, such as loading up locally written modules. For that reason, the stack ghci command is available.

NOTE: At the time of writing, stack ghci was still an experimental feature, so I'm not going to devote a lot more time to it. Future readers: feel free to expand this!


ghc/runghc and the script interpreter

You'll sometimes want to just compile (or run) a single Haskell source file, instead of creating an entire Cabal package for it. You can use stack exec ghc or stack exec runghc for that. As simple helpers, we also provide the stack ghc and stack runghc commands for these common cases.

stack also offers a very useful feature for running files: a script interpreter. For too long have Haskellers felt shackled to bash or Python because it's just too hard to create reusable source-only Haskell scripts. stack attempts to solve that. An example will be easiest to understand:

michael@d30748af6d3d:~$ cat turtle.hs
#!/usr/bin/env stack
-- stack --resolver lts-3.2 --install-ghc runghc --package turtle
{-# LANGUAGE OverloadedStrings #-}
import Turtle
main = echo "Hello World!"
michael@d30748af6d3d:~$ chmod +x turtle.hs
michael@d30748af6d3d:~$ ./turtle.hs
Run from outside a project, using implicit global config
Using resolver: lts-3.2 specified on command line
hashable- configure
# installs some more dependencies
Completed all 22 actions.
Hello World!
michael@d30748af6d3d:~$ ./turtle.hs
Run from outside a project, using implicit global config
Using resolver: lts-3.2 specified on command line
Hello World!

If you're on Windows: you can run stack turtle.hs instead of ./turtle.hs.

The first line is the usual "shebang" to use stack as a script interpreter. The second line, which is required, provides additional options to stack (due to the common limitation of the "shebang" line only allowing a single argument). In this case, the options tell stack to use the lts-3.2 resolver, automatically install GHC if it is not already installed, and ensure the turtle package is available.

The first run can take a while, since it has to download GHC and build dependencies. But subsequent runs are able to reuse everything already built, and are therefore quite fast.

Finding project configs, and the implicit global

Whenever you run something with stack, it needs a stack.yaml project file. The algorithm stack uses to find this is:

  1. Check for a --stack-yaml option on the command line
  2. Check for a STACK_YAML environment variable
  3. Check the current directory and all ancestor directories for a stack.yaml file

The first two provide a convenient method for using an alternate configuration. For example: stack build --stack-yaml stack-7.8.yaml can be used by your CI system to check your code against GHC 7.8. Setting the STACK_YAML environment variable can be convenient if you're going to be running commands like stack ghc in other directories, but you want to use the configuration you defined in a specific project.
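Step 3's ancestor-directory search can be illustrated with a small shell sketch. This mimics the behavior for explanation purposes; it is not stack's actual implementation:

```shell
# Walk upward from a starting directory until a stack.yaml is found
find_config() {
  dir=$1
  while [ -n "$dir" ] && [ "$dir" != "/" ]; do
    [ -f "$dir/stack.yaml" ] && { echo "$dir/stack.yaml"; return 0; }
    dir=$(dirname "$dir")
  done
  return 1
}

# Demo in a throwaway directory tree: the config two levels up is found
tmp=$(mktemp -d)
mkdir -p "$tmp/proj/src/deep"
touch "$tmp/proj/stack.yaml"
find_config "$tmp/proj/src/deep"
```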

If stack does not find a stack.yaml in any of the three specified locations, the implicit global logic kicks in. You've probably noticed that phrase a few times in the output from commands above. Implicit global is essentially a hack to allow stack to be useful in a non-project setting. When no implicit global config file exists, stack creates one for you with the latest LTS snapshot as the resolver. This allows you to do things like:

  • compile individual files easily with stack ghc
  • build executables you'd want to use without starting a project, e.g. stack install pandoc

Keep in mind that there's nothing magical about this implicit global configuration. It has no impact on projects at all, and every package you install with it is put into isolated databases just like everywhere else. The only magic is that it's the catch-all project whenever you're running stack somewhere else.

stack.yaml vs .cabal files

Now that we've covered a lot of stack use cases, this quick summary of stack.yaml vs .cabal files will hopefully make a lot of sense, and be a good reminder for future uses of stack:

  • A project can have multiple packages. Each project has a stack.yaml. Each package has a .cabal file
  • The .cabal file specifies which packages are dependencies. The stack.yaml file specifies which packages are available to be used
  • .cabal specifies the components, modules, and build flags provided by a package
  • stack.yaml can override the flag settings for individual packages
  • stack.yaml specifies which packages to include
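As a minimal sketch of that division of labor (package names hypothetical), a two-package project's stack.yaml stays small because each package's .cabal file carries its own component and dependency details:

```yaml
# stack.yaml: project-level choices only
resolver: lts-3.2   # which snapshot every package builds against
packages:           # which packages make up the project
- core/             # core/core.cabal declares that package's modules and dependencies
- web/              # web/web.cabal likewise
extra-deps: []      # packages needed beyond the snapshot
```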

Comparison to other tools

stack is not the only tool around for building Haskell code. stack came into existence due to limitations with some of the existing tools. If you're unaffected by those limitations and are happily building Haskell code, you may not need stack. If you're suffering from some of the common problems in other tools, give stack a try instead.

If you're a new user who has no experience with other tools, you should start with stack. The defaults match modern best practices in Haskell development, and there are fewer corner cases you need to be aware of. You can develop Haskell code with other tools, but you probably want to spend your time writing code, not convincing a tool to do what you want.

Before jumping into the differences, let me clarify an important similarity:

  • Same package format. stack, cabal-install, and presumably all other tools share the same underlying Cabal package format, consisting of a .cabal file, modules, etc. This is a Good Thing: we can share the same set of upstream libraries, and collaboratively work on the same project with stack, cabal-install, and NixOS. In that sense, we're sharing the same ecosystem.

Now the differences:

  • Curation vs dependency solving as a default. stack defaults to curation (Stackage snapshots: LTS Haskell, Nightly, etc), while cabal-install defaults to dependency solving. This is just a default: as described above, stack can use dependency solving if desired, and cabal-install can use curation. However, most users will stick to the defaults. The stack team firmly believes that the majority of users want to simply ignore dependency resolution nightmares and get a valid build plan from day 1, which is why we've made curation the default behavior.
  • Reproducible. stack goes to great lengths to ensure that stack build today does the same thing tomorrow. cabal-install does not: build plans can be affected by the presence of preinstalled packages, and running cabal update can cause a previously successful build to fail. With stack, changing the build plan is always an explicit decision.
  • Automatically building dependencies. In cabal-install, you need to use cabal install to trigger dependency building. This is somewhat necessary due to the previous point, since building dependencies can in some cases break existing installed packages. So for example, in stack, stack test does the same job as cabal install --run-tests, though the latter additionally performs an installation that you may not want. The closest command equivalent is cabal install --enable-tests --only-dependencies && cabal configure --enable-tests && cabal build && cabal test (newer versions of cabal-install may make this command shorter).
  • Isolated by default. This has actually been a pain point for new stack users. In cabal-install, the default behavior is a non-isolated build, meaning that working on two projects can cause the user package database to become corrupted. The cabal solution to this is sandboxes. stack, however, provides isolation by default via its databases. In other words: when you use stack, there's no need for sandboxes; everything is (essentially) sandboxed by default.

More resources

There are lots of resources available for learning more about stack:

Fun features

This is just a quick collection of fun and useful features stack supports.


Templates

We started off using the new command to create a project. stack provides multiple templates to start a new project from:

michael@d30748af6d3d:~$ stack templates
michael@d30748af6d3d:~$ stack new my-yesod-project yesod-simple
Downloading template "yesod-simple" to create project "my-yesod-project" in my-yesod-project/ ...
Using the following authorship configuration:
author-name: Example Author Name
Copy these to /home/michael/.stack/stack.yaml and edit to use different values.
Writing default config file to: /home/michael/my-yesod-project/stack.yaml
Basing on cabal files:
- /home/michael/my-yesod-project/my-yesod-project.cabal

Checking against build plan lts-3.2
Selected resolver: lts-3.2
Wrote project config to: /home/michael/my-yesod-project/stack.yaml

To add more templates, see the stack-templates repository.


IDE integration

stack has a work-in-progress suite of editor integrations, to do things like getting type information in Emacs. For more information, see stack-ide.

Visualizing dependencies

If you'd like to get some insight into the dependency tree of your packages, you can use the stack dot command and Graphviz. More information is available on the wiki.

Travis with caching

Many people use Travis CI to test out a project for every Git push. We have a Wiki page devoted to Travis. However, for most people, the following example will be sufficient to get started:

sudo: false
language: c

addons:
  apt:
    packages:
    - libgmp-dev

before_install:
# stack
- mkdir -p ~/.local/bin
- export PATH=$HOME/.local/bin:$PATH
- travis_retry curl -L | gunzip > ~/.local/bin/stack
- chmod a+x ~/.local/bin/stack

script:
- stack --no-terminal setup
- stack --no-terminal build
- stack --no-terminal test

cache:
  directories:
  - $HOME/.stack

Not only will this build and test your project, but it will cache your snapshot built packages, meaning that subsequent builds will be much faster.

Two notes for future improvement:

  • Once Travis whitelists the stack .deb files, we'll be able to simply include stack in the addons section, and automatically use the newest version of stack, avoiding that complicated before_install section
  • Starting with stack-, there are improvements to the test command, so that the entire script section can be stack --no-terminal --install-ghc test

If you're wondering: the reason we need --no-terminal is because stack does some fancy sticky display on smart terminals to give nicer status and progress messages, and the terminal detection is broken on Travis.

Shell autocompletion

Love being able to tab-complete commands? You're not alone. If you're on bash, just run the following (or add it to .bashrc):

eval "$(stack --bash-completion-script "$(which stack)")"

For more information and other shells, see the Shell autocompletion wiki page.


Docker

stack provides two built-in Docker integrations. Firstly, you can build your code inside a Docker image, which means:

  • even more reproducibility to your builds, since you and the rest of your team will always have the same system libraries
  • the Docker images ship with entire precompiled snapshots. That means you have a large initial download, but much faster builds

For more information, see the Docker wiki page.

The other integration is that stack can generate Docker images for you containing your built executables. This feature is great for automating deployments from CI. This feature is not yet very well documented, but the basics are to add a section like the following to stack.yaml:

    base: "fpco/ubuntu-with-libgmp:14.04"
      man/: /usr/local/share/man/
      - stack

and then run stack image container.

Power user commands

The following commands are a little more powerful, and therefore won't be needed by all users. Here's a quick rundown:

  • stack update will download the most recent set of packages from your package indices (e.g. Hackage). Generally, stack runs this for you automatically when necessary, but it can be useful to do this manually sometimes (e.g., before running stack solver, to guarantee you have the most recent upstream packages available).
  • stack unpack is a command we've already used quite a bit for examples, but most users won't use it regularly. It does what you'd expect: downloads a tarball and unpacks it.
  • stack sdist generates a tarball, suitable for uploading to Hackage, containing your package code
  • stack upload uploads an sdist to Hackage. In the future, it will also perform automatic GPG signing of your packages for additional security, when configured.
  • stack upgrade will build a new version of stack from source. --git is a convenient way to get the most recent version from master for those testing and living on the bleeding edge.
  • stack setup --upgrade-cabal can install a newer version of the Cabal library, used for performing actual builds. You shouldn't generally do this, since new Cabal versions may introduce incompatibilities with package sets, but it can be useful if you're trying to test a specific bugfix.
  • stack list-dependencies lists all of the packages and versions used for a project

August 31, 2015 06:00 AM

Yesod Web Framework

Getting Rating A from the SSL Server Test

If you are using Warp TLS version 3.1.1 or earlier and the tls library version 1.3.1 or earlier, try the SSL Server Test provided by QUALYS SSL LABS. I'm sure that your server will get an F rating and you will be disappointed. Here is a list of failed items:

Secure Renegotiation Not supported ACTION NEEDED
Secure Client-Initiated Renegotiation Supported DoS DANGER
Insecure Client-Initiated Renegotiation Supported INSECURE
Downgrade attack prevention No, TLS_FALLBACK_SCSV not supported
Forward Secrecy With some browsers
Session resumption (caching) No (IDs assigned but not accepted)

Is the quality of the tls library low? The answer is NO. The code is really readable. But unfortunately, some small features were missing. This article describes how Aaron Friel and I added such features to get an A rating.

If you are not interested in technical details, just upgrade Warp TLS to version 3.1.2 and the tls library to version 1.3.2. Your server will automatically get an A rating, or a T rating in the case of a self-signed certificate.

Secure Renegotiation

Secure Renegotiation Not supported ACTION NEEDED

The original TLS 1.2 renegotiation defined in RFC 5246 is now considered insecure because it is vulnerable to man-in-the-middle attacks. RFC 5746 defines the "renegotiation_info" extension to authenticate both sides. The tls library implemented this but the result was "Not supported". Why?

The SSL Server Test uses TLS_EMPTY_RENEGOTIATION_INFO_SCSV, an alternative method defined in RFC 5746, to check this item. So, I modified the tls library to be aware of this virtual cipher suite:

Secure Renegotiation Supported

Client-Initiated Renegotiation

Secure Client-Initiated Renegotiation Supported DoS DANGER
Insecure Client-Initiated Renegotiation Supported INSECURE

A typical scenario of renegotiation is as follows: a user is browsing some pages over TLS. Then the user clicks a page which requires the client certificate in TLS. In this case, the server sends the TLS HelloRequest to start renegotiation so that the client can send the client certificate through the renegotiation phase.

A client can also initiate renegotiation by sending the TLS ClientHello. But neither secure renegotiation (RFC 5746) nor insecure renegotiation (RFC 5246) should be allowed from the client side, because of DoS attacks.

I added a new parameter supportedClientInitiatedRenegotiation to the Supported data type, whose default value is False. This modification results in:
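For applications that configure the tls library directly, keeping this default is just a record update. A minimal sketch (assuming the Data.Default `def` value for Supported, as used elsewhere in the tls API; this spells out the default rather than the library's own code):

```haskell
import Data.Default.Class (def)
import Network.TLS (Supported(..))

-- Keep the safe default: refuse renegotiation initiated by the client.
supported :: Supported
supported = def { supportedClientInitiatedRenegotiation = False }
```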

Secure Client-Initiated Renegotiation No
Insecure Client-Initiated Renegotiation No

Downgrade attack prevention

Downgrade attack prevention No, TLS_FALLBACK_SCSV not supported

A downgrade attack is a technique to force a client and a server to use a lower TLS version even if higher TLS versions are available. Some clients fall back to a lower TLS version if the negotiation of a higher TLS version fails. An active attacker can cause network congestion or the like to make the negotiation fail.

To prevent this, RFC 7507 defines the Fallback Signaling Cipher Suite Value, TLS_FALLBACK_SCSV. A client includes this virtual cipher suite in the cipher suite proposal when falling back. If the corresponding server finds TLS_FALLBACK_SCSV and higher TLS versions are supported, the server can reject the negotiation to prevent the downgrade attack.

I implemented this feature and the evaluation results in:

Downgrade attack prevention Yes, TLS_FALLBACK_SCSV supported

For your information, you can test your server with the following commands:

% openssl s_client -connect <ipaddr>:<port> -tls1
% openssl s_client -connect <ipaddr>:<port> -tls1 -fallback_scsv

Forward Secrecy

Forward Secrecy With some browsers

Forward Secrecy can be achieved with ephemeral Diffie Hellman (DHE) or ephemeral elliptic curve Diffie Hellman (ECDHE). Warp TLS version 3.1.1 sets supportedCiphers to:

[ TLSExtra.cipher_ECDHE_RSA_AES128GCM_SHA256
, TLSExtra.cipher_DHE_RSA_AES128GCM_SHA256
, TLSExtra.cipher_DHE_RSA_AES256_SHA256
, TLSExtra.cipher_DHE_RSA_AES128_SHA256
, TLSExtra.cipher_DHE_RSA_AES256_SHA1
, TLSExtra.cipher_DHE_RSA_AES128_SHA1
, TLSExtra.cipher_DHE_DSS_AES128_SHA1
, TLSExtra.cipher_DHE_DSS_AES256_SHA1
, TLSExtra.cipher_AES128_SHA1
, TLSExtra.cipher_AES256_SHA1
]

This is evaluated as "With some browsers". SSL Labs: Deploying Forward Secrecy suggests that TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA and TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA are missing. Aaron Friel added the two cipher suites to the tls library and also added them in Warp TLS:

[ TLSExtra.cipher_ECDHE_RSA_AES128GCM_SHA256
, TLSExtra.cipher_ECDHE_RSA_AES128CBC_SHA256 -- here
, TLSExtra.cipher_ECDHE_RSA_AES128CBC_SHA  -- here
, TLSExtra.cipher_DHE_RSA_AES128GCM_SHA256
, TLSExtra.cipher_DHE_RSA_AES256_SHA256
, TLSExtra.cipher_DHE_RSA_AES128_SHA256
, TLSExtra.cipher_DHE_RSA_AES256_SHA1
, TLSExtra.cipher_DHE_RSA_AES128_SHA1
, TLSExtra.cipher_DHE_DSS_AES128_SHA1
, TLSExtra.cipher_DHE_DSS_AES256_SHA1
, TLSExtra.cipher_AES128_SHA1
, TLSExtra.cipher_AES256_SHA1
]

This configuration is evaluated as "With modern browsers".

Forward Secrecy With modern browsers

Note that the article also suggests TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA. I know that adding this cipher suite results in "Yes (with most browsers)". But we don't want to support 3DES.

Session resumption

Session resumption (caching) No (IDs assigned but not accepted)

Session resumption is a mechanism to reduce the overhead of key exchange. An exchanged key is associated with a session ID and stored in both the client and the server side. The next time the client sends the TLS ClientHello message, the client can specify the session ID previously used. So, the client and the server are able to reuse the exchanged key.
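The caching idea above can be sketched in a few lines. This is purely an illustration of the mechanism, not the tls library's API: both sides keep a map from session ID to the exchanged master key, and a ClientHello carrying a known ID lets them skip the key exchange.

```haskell
-- Toy model of session resumption caching (illustrative only).
import qualified Data.Map as Map

type SessionID = Int
type MasterKey = String
type Cache     = Map.Map SessionID MasterKey

-- After a full handshake, both sides store the exchanged key.
store :: SessionID -> MasterKey -> Cache -> Cache
store = Map.insert

-- On a later ClientHello with a session ID, try to reuse the key.
resume :: SessionID -> Cache -> Maybe MasterKey
resume = Map.lookup

main :: IO ()
main = do
  let cache = store 42 "master-key" Map.empty
  print (resume 42 cache)  -- Just "master-key": skip key exchange
  print (resume 7  cache)  -- Nothing: fall back to a full handshake
```

The stateful nature of this cache is exactly the cost mentioned below; RFC 5077 tickets avoid it by having the client store the (encrypted) state instead.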

The tls library supports this mechanism. That's why the result says "IDs assigned". Since Warp TLS does not make use of SessionManager, it also says "but not accepted".

I'm not planning to implement this simple session resumption in Warp TLS since the server would need to have states of exchanged keys. Rather, I would like to implement the stateless TLS session resumption defined in RFC 5077.


I would like to thank Kazuho Oku for giving useful information about the secure renegotiation.

August 31, 2015 12:45 AM

August 30, 2015

wren gayle romano

(Re)meeting folks at ICFP

In the spirit of Brent's post, I figure I'll make a public announcement that I'm in Vancouver all week attending HOPE, ICFP, and the Haskell Symposium. I love reconnecting with old friends, as well as meeting new folks. Even if I've met you at past ICFPs, feel free to re-introduce yourself as ...things have changed over the past few years. I know I've been pretty quiet of late on Haskell cafe as well as here, but word on the street is folks still recognize my name around places. So if you want to meet up, leave a comment with when/where to find you, or just look for the tall gal with the blue streak in her hair.

Unlike Brent, and unlike in years past, I might should note that I am looking to "advance my career". As of this fall, I am officially on the market for research and professorship positions. So if you're interested in having a linguistically-minded constructive mathematician help work on your problems, come say hi. For those not attending ICFP, you can check out my professional site; I need to fix my script for generating the publications page, but you can find a brief background/research statement there along with the other usuals like CV, recent projects, and classes taught.


August 30, 2015 06:36 AM

August 29, 2015

Heinrich Apfelmus
FRP — API redesign for reactive-banana 1.0

After having released version 0.9 of my reactive-banana library, I now want to discuss the significant API changes that I have planned for the next release of the library, version number 1.0. These changes will not be backward compatible.

Since its early iterations (version 0.2), the goal of reactive-banana has been to provide an efficient push-based implementation of functional reactive programming (FRP) that uses (a variation of) the continuous-time semantics as pioneered by Conal Elliott and Paul Hudak. Don’t worry, this will stay that way. The planned API changes may be radical, but they are not meant to change the direction of the library.

I intend to make two major changes:

  1. The API for dynamic event switching will be changed to use a monadic approach, and will become more similar to that of the sodium FRP library. Feedback that I have received indicates that the current approach using phantom types is just too unwieldy.

  2. The type Event a will be changed to only allow a single event occurrence per moment, rather than multiple simultaneous occurrences. In other words, the types in the module Reactive.Banana.Experimental.Calm will become the new default.

These changes are not entirely cast in stone yet, they are still open for discussion. If you have an opinion on these matters, please do not hesitate to write a comment here, send me an email or to join the discussion on github on the monadic API!

The new API is not without precedent: I have already implemented a similar design in my threepenny-gui library. It works pretty well there and nobody complained, so I have good reason to believe that everything will be fine.

Still, for completeness, I want to summarize the rationale for these changes in the following sections.

Dynamic Event Switching

One major impediment for early implementations of FRP was the problem of so-called time leaks. The key insight to solving this problem was to realize that the problem is inherent to the FRP API itself and can only be solved by restricting certain types. The first solution with first-class events (i.e. not arrowized FRP) that I know of is from an article by Gergely Patai [pdf].

In particular, the essential insight is that any FRP API which includes the functions

accumB  :: a -> Event (a -> a) -> Behavior a
switchB :: Behavior a -> Event (Behavior a) -> Behavior a

with exactly these types is always leaky. The first combinator accumulates a value similar to scanl, whereas the second combinator switches between different behaviors – that’s why it’s called “dynamic event switching”. A more detailed explanation of the switchB combinator can be found in a previous blog post.
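As a toy illustration of these types (my own sketch, not the library's implementation), model Event as a list of timestamped occurrences and Behavior as a function of time; accumB then really is a scanl-like fold over history:

```haskell
-- Toy denotational model: Event = timestamped occurrences,
-- Behavior = function of time. Names mirror the API, not the internals.
type Time = Double

newtype Event a    = Event [(Time, a)]
newtype Behavior a = Behavior (Time -> a)

-- accumB folds all occurrences up to the sampling time, like scanl:
accumB :: a -> Event (a -> a) -> Behavior a
accumB z (Event occs) = Behavior $ \t ->
  foldl (flip ($)) z [ f | (tf, f) <- occs, tf <= t ]

main :: IO ()
main = do
  let Behavior b = accumB (0 :: Int) (Event [(1, (+10)), (2, (*2))])
  print (b 0.5, b 1.5, b 2.5)  -- (0,10,20)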

One solution to the problem is to put the result of accumB into a monad which indicates that the result of the accumulation depends on the "starting time" of the event. The combinators now have the types

accumB  :: a -> Event (a -> a) -> Moment (Behavior a)
switchB :: Behavior a -> Event (Behavior a) -> Behavior a

This was the aforementioned proposal by Gergely and has been implemented for some time in the sodium FRP library.

A second solution, which was inspired by an article by Wolfgang Jeltsch [pdf], is to introduce a phantom type to keep track of the starting time. This idea can be expanded to be as expressive as the monadic approach. The combinators become

accumB  :: a -> Event t (a -> a) -> Behavior t a
switchB :: Behavior t a
        -> Event t (forall s. Moment s (Behavior s a))
        -> Behavior t a

Note that the accumB combinator keeps its simple, non-monadic form, but the type of switchB now uses an impredicative type. Moreover, there is a new type Moment t a, which tags a value of type a with a time t. This is the approach that I had chosen to implement in reactive-banana.

There is also a more recent proposal by Atze van der Ploeg and Koen Claessen [pdf], which dissects the accumB function into other, more primitive combinators and attributes the time leak to one of the parts. But it essentially ends up with a monadic API as well, i.e. the first of the two mentioned alternatives for restricting the API.

When implementing reactive-banana, I intentionally decided to try out the second alternative, simply in order to explore a region of the design space that sodium did not. With the feedback that people have sent me over the years, I feel that now is a good time to assess whether this region is worth staying in or whether it’s better to leave.

The main disadvantage of the phantom type approach is that it relies not just on rank-n types, but also on impredicative polymorphism, for which GHC has only poor support. To make it work, we need to wrap the quantified type in a new data type, like this

newtype AnyMoment f a = AnyMoment (forall t. Moment t (f t a))

Note that we also have to parametrize over a type constructor f, so that we are able to write the type of switchB as

switchB :: forall t a.
    Behavior t a
    -> Event t (AnyMoment Behavior a)
    -> Behavior t a

Unfortunately, wrapping and unwrapping the AnyMoment constructor and getting the "forall"s right can be fairly tricky, rather tedious, outright confusing, or all three at once. As Oliver Charles puts it in an email to me:

Right now you’re required to provide an AnyMoment, which in turn means you have to trim, and then you need a FrameworksMoment, and then an execute, and then you’ve forgotten what you were donig! :-)

Another disadvantage is that the phantom type t "taints" every abstraction that a library user may want to build on top of Event and Behavior. For instance, imagine a GUI widget where some aspects are modeled by a Behavior. Then, the type of the widget will have to include a phantom parameter t that indicates the time at which the widget was created. Ugh.

On the other hand, the main advantage of the phantom type approach is that the accumB combinator can keep its simple non-monadic type. Library users who don’t care much about higher-order combinators like switchB are not required to learn about the Moment monad. This may be especially useful for beginners.

However, in my experience, when using FRP, even though the first-order API can carry you quite far, at some point you will invariably end up in a situation where the expressivity of dynamic event switching is absolutely necessary. For instance, this happens when you want to manage a dynamic collection of widgets, as demonstrated by the BarTab.hs example for the reactive-banana-wx library. The initial advantage for beginners evaporates quickly when faced with managing impredicative polymorphism.

In the end, to fully explore the potential of FRP, I think it is important to make dynamic event switching as painless as possible. That’s why I think that switching to the monadic approach is a good idea.

Simultaneous event occurences

The second change is probably less controversial, but also breaks backward compatibility.

The API includes a combinator for merging two event streams,

union :: Event a -> Event a -> Event a

If we think of Event as a list of values with timestamps, Event a = [(Time,a)], this combinator works like this:

union ((timex,x):xs) ((timey,y):ys)
    | timex <  timey = (timex,x) : union xs ((timey,y):ys)
    | timex >  timey = (timey,y) : union ((timex,x):xs) ys
    | timex == timey = ??

But what happens if the two streams have event occurrences that happen at the same time?

Before answering this question, one might try to argue that simultaneous event occurrences are very unlikely. This is true for external events like mouse movement or key presses, but not true at all for “internal” events, i.e. events derived from other events. For instance, the event e and the event fmap (+1) e certainly have simultaneous occurrences.

In fact, reasoning about the order in which simultaneous occurrences of “internal” events should be processed is one of the key difficulties of programming graphical user interfaces. In response to a timer event, should one first draw the interface and then update the internal state, or should one do it the other way round? The order in which state is updated can be very important, and the goal of FRP should be to highlight this difficulty whenever necessary.

In the old semantics (reactive-banana versions 0.2 to 0.9), using union to merge two event streams with simultaneous occurrences would result in an event stream where some occurrences may happen at the same time. They are still ordered, but carry the same timestamp. In other words, for a stream of events

e :: Event a
e = [(t1,a1), (t2,a2), …]

it was possible that some timestamps coincide, for example t1 == t2. The occurrences are still ordered from left to right, though.

In the new semantics, all event occurrences are required to have different timestamps. In order to ensure this, the union combinator will be removed entirely and replaced by a combinator

unionWith :: (a -> a -> a) -> Event a -> Event a -> Event a
unionWith f ((timex,x):xs) ((timey,y):ys)
    | timex <  timey = (timex,x)     : unionWith f xs ((timey,y):ys)
    | timex >  timey = (timey,y)     : unionWith f ((timex,x):xs) ys
    | timex == timey = (timex,f x y) : unionWith f xs ys

where the first argument gives an explicit prescription for how simultaneous events are to be merged.

The main advantage of the new semantics is that it simplifies the API. For instance, with the old semantics, we also needed two combinators

collect :: Event a   -> Event [a]
spill   :: Event [a] -> Event a

to collect simultaneous occurrences within an event stream. This is no longer necessary with the new semantics.

Another example is the following: Imagine that we have an input event e :: Event Int whose values are numbers, and we want to create an event that sums all the numbers. In the old semantics with multiple simultaneous events, the event and behavior defined as

bsum :: Behavior Int
esum :: Event Int

esum = accumE  0 ((+) <$> e)
bsum = stepper 0 esum

are different from those defined by

bsum = accumB 0 ((+) <$> e)
esum = (+) <$> bsum <@ e

The reason is that accumE will take into account simultaneous occurrences, but the behavior bsum will not change until after the current moment in time. With the new semantics, both snippets are equal, and accumE can be expressed in terms of accumB.

The main disadvantage of the new semantics is that the programmer has to think more explicitly about the issue of simultaneity when merging event streams. But I have argued above that this is actually a good thing.

In the end, I think that removing simultaneous occurrences in a single event stream and emphasizing the unionWith combinator is a good idea. If required, the programmer can always use an explicit list type Event [a] to handle these situations.

(It just occurred to me that maybe a type class instance

instance Monoid a => Monoid (Event a)

could give us the best of both worlds.)
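On the toy list model, that instance is easy to sketch. This is my own illustration, assuming Event a = [(Time, a)]; the Semigroup instance is included because modern GHC requires it as a superclass of Monoid:

```haskell
type Time = Double

newtype Event a = Event [(Time, a)] deriving (Eq, Show)

never :: Event a
never = Event []

-- Merge two streams, combining simultaneous occurrences with f.
unionWith :: (a -> a -> a) -> Event a -> Event a -> Event a
unionWith f (Event xs0) (Event ys0) = Event (go xs0 ys0)
  where
    go xs [] = xs
    go [] ys = ys
    go ((tx, x):xs) ((ty, y):ys)
      | tx < ty   = (tx, x)     : go xs ((ty, y):ys)
      | tx > ty   = (ty, y)     : go ((tx, x):xs) ys
      | otherwise = (tx, f x y) : go xs ys

-- The suggested instance: simultaneous occurrences merge monoidally.
instance Semigroup a => Semigroup (Event a) where
  (<>) = unionWith (<>)

instance Monoid a => Monoid (Event a) where
  mempty = never

main :: IO ()
main = print (Event [(1, "a"), (2, "b")] <> Event [(2, "c")])
-- Event [(1.0,"a"),(2.0,"bc")]
```

With a list payload such as Event [a], this mappend behaves just like the old collect-style union, which is the sense in which it could give the best of both worlds.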

This summarizes my rationale for these major and backward incompatible API changes. As always, I appreciate your comments!

August 29, 2015 06:23 PM

Brent Yorgey

Meeting new people at ICFP

This afternoon I’ll be getting on a plane to Vancouver for ICFP. I’m looking forward to seeing many friends, of course, but I also enjoy meeting new people—whether or not they are “famous”, whether or not I think they can “advance my career”. So I’ll just throw this out there: if you will be in Vancouver this week and would like to meet me, just leave a comment and I will make a point of trying to find you to chat! I’ll be attending the Haskell Implementor’s Workshop, the Ally Skills Tutorial, ICFP itself, the Haskell Symposium, and FARM, but there’s also plenty of time to chat in the hallway or over a meal.

by Brent at August 29, 2015 02:49 PM

Edward Z. Yang

Help us beta test “no-reinstall Cabal”

Over this summer, Vishal Agrawal has been working on a GSoC project to move Cabal to a more Nix-like package management system. More simply, he is working to make it so that you'll never get one of these errors from cabal-install again:

Resolving dependencies...
In order, the following would be installed:
directory- (reinstall) changes: time-1.4.2 -> 1.5
process- (reinstall)
extra-1.0 (new package)
cabal: The following packages are likely to be broken by the reinstalls:

However, these patches change a nontrivial number of moving parts in Cabal and cabal-install, so it would be very helpful to have willing guinea pigs to help us iron out some bugs before we merge it into Cabal HEAD. As your prize, you'll get to run "no-reinstall Cabal": Cabal should never tell you it can't install a package because some reinstalls would be necessary.

Here's how you can help:

  1. Make sure you're running GHC 7.10. Earlier versions of GHC have a hard limitation that doesn't allow you to reinstall a package multiple times against different dependencies. (Actually, it would also be useful if you test with older versions like GHC 7.8, mostly to make sure we haven't introduced any regressions there.)
  2. git clone (I've added some extra corrective patches on top of Vishal's version in the course of my testing) and git checkout cabal-no-pks.
  3. In the Cabal and cabal-install directories, run cabal install.
  4. Try building things without a sandbox and see what happens! (When I test, I've tried installing multiple versions of Yesod at the same time.)

It is NOT necessary to clear your package database before testing. If you completely break your Haskell installation (unlikely, but could happen), you can do the old trick of clearing out your .ghc and .cabal directories (don't forget to save your .cabal/config file) and rebootstrapping with an old cabal-install.

Please report problems here, or to this PR in the Cabal tracker. Or chat with me in person next week at ICFP. :)

by Edward Z. Yang at August 29, 2015 04:31 AM

August 28, 2015

JP Moresmau

A Reddit clone (very basic) using Local Storage and no server side storage

Some weeks ago there was a bit of a tussle over at Reddit, with subreddits going private in protest, talk of censorship, etc. This was interesting to see, from a distance. It got me thinking about trying to replicate Reddit, a site where people can share stuff and have discussions, but without having the server control all the data. So I've developed a very basic Reddit clone, where you can post links and stories and comment on them. You can also upvote stories and comments, and downvote what you upvoted (cancel your upvote). But there's a catch: the site has no database. Things are kept in memory, and on the user's machine, via HTML5 LocalStorage. That's all!

Every time you upload or upvote something, it gets saved to your LocalStorage for the site. Once something gets downvoted to zero, it disappears. When you go to the site, whatever is in your LocalStorage gets uploaded and upvoted again. So stories can come and go as users connect and disconnect, and only the most popular stories will always be visible on the site (since at least one connected user needs to have uploaded or upvoted a story for it to be visible).

Of course, there is still a server that could decide to censor stories or modify text, but at least you can always check that what you have on YOUR machine is the data you wanted. You can always copy that data elsewhere easily for safekeeping (browser developer tools let you inspect your LocalStorage content).

All in all, this was probably only an excuse for me to play with Javascript and Java (I did the server side in Java, since it was both easy to build and deploy) and Heroku. I've deployed the app at  and the source code can be found at Any feedback welcome!

by JP Moresmau ( at August 28, 2015 03:41 PM

Yesod Web Framework

Some thoughts on documentation

I write and maintain a lot of documentation, both open source and commercial. Quite a bit of the documentation I maintain is intended to be collaborative documentation. Over the past few years, through my own observations and insights from others (yes, this blog post is basically a rip-off of a smaller comment by Greg), I've come up with a theory on collaborative documentation, and I'm interested in feedback.

tl;dr: people don't seem to trust Wiki content, nor explore it. They're also more nervous about editing Wiki content. Files imply: this is officially part of the project, and people feel comfortable sending a PR

When talking about documentation, there are three groups to consider: the maintainers, the contributors, and the readers. The most obvious medium for collaborative documentation is a Wiki. Let's see how each group sees a Wiki:

  • Maintainers believe they're saying "feel free to make any changes you want, the Wiki is owned by the community." By doing that, they are generally hoping to greatly increase collaboration.

  • Contributors, however, seem to be intimidated by a Wiki. Most contributors do not feel completely confident in their ability to add correct content, adhere to standards, fit into the right outline, etc. So paradoxically, by making the medium as open as possible, the Wiki discourages contribution.*

  • Readers of documentation greatly appreciate well structured content, and want to be able to trust the accuracy of the content they're reading. Wikis do not inspire this confidence. Despite my previous comments about contributors, readers are (logically) concerned that Wiki content may have been written by someone uninformed, or may have fallen out of date.

By contrast, let's take a different model for documentation: Markdown files in a Github repository:

  • Maintainers have it easy: they maintain documentation together with their code. The documentation can be forked and merged just like the code itself.

  • Contributors, at least in my experience, seem to love this. I've gotten dozens (maybe even hundreds) of people sending minor to major pull requests to documentation I maintain on open source projects this way. Examples range from the simplest (inline API documentation) to the theoretically most complex (the content for the Yesod book). Since our target audience is developers, and developers already know how to send pull requests, this just feels natural.

  • Readers trust content of the repository itself much more. It's more official, because it means someone with commit access to the project agreed that this should belong here.

This discussion came up for me again when I started thinking about writing a guide for the Haskell tool stack. I got halfway through writing this blog post two weeks ago, and decided to finish it when discussing with other stack maintainers why I decided to make this a file instead of another Wiki page. Their responses were good confirmation to this theory:

Emanuel Borsboom:

Ok, that makes a lot of sense to me. We might want to consider moving reference material to files (for example, the stack.yaml documentation). Another nice thing about that is that it means the docs follow the versions (so no more confusion about whether the stack.yaml page is for current master vs. latest release).

Jason Boyer:

I can confirm this sentiment is buried in me somewhere, I've definitely felt this way (as a user/developer contributing little bits). On a more technical note, the workflow with editing the wiki doesn't offer up space for review - it is a done deal, there is no PR.

While Wikis still have their place (at the very least when collaborating with non-technical people), I'm quite happy with the file-as-a-collaborative-document workflow (which, I again admit, Greg introduced me to). My intended behavior moving forward is:

  • Keep documentation in the same repo as the project
  • Be liberal about who has commit access to repos

* I've seen a similar behavior with code itself: while many people (myself included in the past) are scared to give too many people commit access to a repository, my experience (following some advice from Edward Kmett) with giving access more often rather than less has never led to bad maintainer decisions. Very few people are actually malicious, and most will be cautious about breaking a project they love. (Thought experiment: how would you act if you were suddenly given commit access to a major open source project (GHC/Linux/etc)? I'm guessing you wouldn't go through making serious modifications without asking for your work to be reviewed.)

August 28, 2015 01:00 PM

Tom Schrijvers

Position: Functional Programming Technology Transfer

I am looking for a scientific collaborator / candidate PhD student in the context of project vLambda. This technology transfer project supports the Flemish software industry in the adoption of Functional Programming techniques in mainstream languages like Java and C-sharp. The project is subsidised by the Flemish agency for Innovation through Science and Technology (IWT) and proceeds in collaboration with our industrial partners.

You can find the details of the position here.

In case you are interested and happen to be at ICFP, drop me a line.

by Tom Schrijvers ( at August 28, 2015 06:08 AM

Douglas M. Auclair (geophf)

Yeah, but how do I do that?

So, my article on FP IRL has garnered some interest, and I have been asked, 'Yeah, but how do I get started into FP? How can I start using this stuff at my job?'

So, here's my answer. Here's what I do and how I do it.

So, it depends on how and where you want to start this adventure, yes? The beauty of today is that there are so many resources freely available to let you work on them. The problem is that you're already good at what you do, so it'll be hard to move away from what you know into a domain where it should be easy but is actually really, really different, and that difference can be frustrating: caveat coder.

Also, there are effete programmers out there that tell you how you should not code.

"Oh, Prolog's not pure and it's not functional. You can't do that."

I don't listen to what I can't do. When somebody says to me: "you can't do that," it really means they are saying: "I can't do that." And I'm not interested in listening to the whining of losers. What I'm interested in is delivering results by coding well. If you want to do that, I want to do that with you. If you want to tell me what I can't do, the door is over there, don't let it hit you on your way out of my life.

Sorry. Not sorry.


I host @1HaskellADay where I post a problem that you can solve in any language you want, but I post the problem, and the solution, in Haskell, every day, Monday through Friday. You can learn FP one day at a time that way, be it Haskell, Scala, Idris, whatever you'd like. You write a solution in Haskell, I retweet your solution so everybody can see you're a Haskell coder.

So. That.

Also, there are contests on-line, some with money prizes (kaggle, right?), that you can take on in language X. You may or may not win, but you'll surely learn what you can do easily, and what comes hard in your language of choice.

The way I learn a language is I don't. Not in the traditional sense, that is, of learning a language's syntax and semantics. If I don't have a 'why' then the 'how' of a language is uninteresting to me.

So I make a 'why' to learn a language, then I learn it.

What I do is I have a real-world problem, and solve it in that language. That's how I learn a language, and yes, so I code wrongly, for a long time, but then I start to code better and better in that language, until I'm an expert in that language.

Reading anything and everything on the fundamentals of that language, as I encounter them, helps me a lot, too.

So, as you can see, I'm learning the 'languages' Neo4J and AWS right now (yes, I know, they aren't languages; thanks). Lots of fun. I'm doing stuff obviously wrong, but the solutions I provide are ones they need at work, and I'm the only one stepping up to the plate and swinging hard and fast enough to keep them funding these adventures in this tech.

Get that. They are paying me at work to learn stuff that I'm having a blast doing. Why?

Maybe it's because when the VP says, 'Hey, I got a problem here for ya,' I come running?

Here's something I do not do.

I DO NOT ASK: 'can I code in X?' because the answer is always: 'No.'

What I do, is code in X and then hand them a result that so wows them, they feed me the next impossible problem to solve, and I get to set the terms. It's like instead of 'doing my job,' I instead take ownership of the company and its problems, looking for the best solution for the company as its owner. And, like an owner, I say what I do and how I do it, because I know what's best for the company in these new waters we're exploring together in partnership.

Try it that way. Don't say 'we should do X' because that's what (in management's eyes) whiny little techies say. No, don't say anything. Just code it in X, deliver the solution, that you demo to the VP and then to the whole company, and get people coming up to you saying, 'Wow. Just wow. How the hell did you do that?'

No kidding: it takes a hell of a lot of courage to be a water-walker. It has for me, anyway, because the risk is there: that you'll fail. Fail hard. Because I have failed hard. But I choose that risk over drowning, doing what they tell me and how they tell me to do it, because I'm just employee number 89030 and my interview was this: "Do you know Java?" "Yes, I know Java." And, yay, I'm a Java programmer, just like everybody else, doing what everybody else does. Yay, so yay. So boring.

I've failed twice in my 25 years in this field, and it wasn't for lack of trying. Twice.

Do you know how many times I have succeeded? I don't. I've lost count. I've saved three teens' lives and that was just in one month. Put a price on that, and that was because I stepped up to the plate and tried, when nobody else would or could. And my other successes, too, and the beauty of my successes is that the whole team won, we all got to keep working on really neat stuff that mattered and got company and customer attention.

And, bonus, "Hey, I've got a business and I need your help."

Three times so far.

Taking the risk leads to success. Success breeds success.

It starts with you, not asking, but just taking that risk, trying, a few times or right off the bat, and succeeding.

And then there's that success, and the feeling you get from knowing you've done something, really done something.

They can't take that away from you.


by geophf ( at August 28, 2015 01:10 AM

August 27, 2015


FRP — Release of reactive-banana version 0.9

I am very pleased to announce the release of version 0.9 of my reactive-banana library on hackage. The API is essentially the same as before, but the implementation has been improved considerably: Dynamically switched events are now garbage collected!

This means that the library finally features all the ingredients that I consider necessary for a mature implementation of FRP:

  • Continuous time semantics (sampling rate independence, deterministic union, …)
  • Recursion
  • Push-driven performance
  • Dynamic event switching without time leaks
  • Dynamic event switching with garbage collection

The banana is ripe! In celebration, I am going to drink a strawberry smoothie and toast Oliver Charles for his invaluable bug reports and Samuel Gélineau for tracking down a nasty bug in detail. Cheers!

While the library internals are now in a state that I consider very solid, the library is still not quite done yet. When introducing the API for dynamic event switching in version 0.7, I had the choice between two very different regions of the design space: an approach using a monad and an approach using phantom types. I had chosen the latter approach, mostly because the sodium FRP library had chosen to explore the former region in the design space, so we could cover more of the design space together this way. But over the years, people have sent me questions and comments, and it is apparent that the phantom type approach is too unwieldy for practical use. For the next version of reactive-banana, version number 1.0, I plan to radically change the API and switch to the monadic approach. While we’re at it, I also intend to remove simultaneous occurrences in a single event. I will discuss these upcoming API changes more thoroughly in a subsequent blog post.

August 27, 2015 04:54 PM

Russell O'Connor

Clearing Up “Clearing Up Mysteries”

I am a big fan of E. T. Jaynes. His book Probability Theory: The Logic of Science is the only book on statistics that I ever felt I could understand. Therefore, when he appears to rail against the conclusions of Bell’s theorem in his paper “Clearing up Mysteries—The Original Goal”, I take him seriously. He suggests that perhaps there could be a time-dependent hidden variable theory that could yield the outcomes that quantum mechanics predicts.

However, after reading Richard D. Gill’s paper, “Time, Finite Statistics, and Bell’s Fifth Position”, it is very clear that there can be nothing like a classical explanation that yields quantum predictions, time-dependent or otherwise. In this paper Gill reintroduces Steve Gull’s computer network, where a pair of classical computers is tasked with recreating the probabilities predicted in a Bell-CHSH delayed-choice experiment. The catch is that the challenger gets to choose the stream of bits sent to each of the two spatially separated computers in the network. These bits represent the free choice an experimenter running a Bell-CHSH experiment has in choosing which polarization measurements to make. No matter what the classical computers do, no matter how much time-dependent fiddling you want to do, they can never produce correlations that will violate the Bell-CHSH inequality in the long run. This is Gull’s “You can’t program two independently running computers to emulate the EPR experiment” theorem.

Gill presents a nice analogy with playing roulette in the casino. Because of the rules of roulette, no computer algorithm can implement a strategy that will beat the house in roulette in the long run. Gill goes on to quantify exactly how long the long run is, in order to place a wager against other people who claim they can recreate the probabilities predicted by quantum mechanics using a classical local hidden variable theory. Using the theory of supermartingales, one can bound the likelihood of seeing the Bell-CHSH inequality violated by chance by any classical algorithm, in the same way that one can bound the likelihood of long winning streaks in roulette games.

I liked the casino analogy so much that I decided to rephrase Gull’s computer network as a coin-guessing casino game I call Bell’s Casino. We can prove that any classical strategy, time-dependent or otherwise, simply cannot beat the house at that particular game in the long run. Yet there is a strategy where the players employ entangled qubits and beat the house on average. This implies there cannot be any classical phenomenon that yields quantum outcomes. Even if one proposes some classical oscillating (time-dependent) hidden variable vibrating at such a high rate that we could never practically measure it, this theory still could not yield quantum probabilities, because such a theory implies we could simulate it with Gull’s computer network. Even if our computer simulation was impractically slow, we could still, in principle, deploy it against Bell’s Casino to beat their coin game. But no such computer algorithm exists, in exactly the same way that there is no computer algorithm that will beat a casino at a fair game of roulette. The fact that we can beat the casino by using qubits clearly proves that qubits and quantum physics are something truly different.
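To make the numbers concrete, here is a small sketch (mine, not taken from Gill's paper) of the CHSH quantity, using the quantum-mechanical correlation E(a, b) = −cos(a − b) for the spin singlet state. Every local hidden variable strategy satisfies |S| ≤ 2, while the textbook-optimal quantum angles reach 2√2 ≈ 2.83:

```haskell
-- Quantum correlation between measurements at analyzer angles a and b
-- for the singlet state: E(a, b) = -cos(a - b).
corr :: Double -> Double -> Double
corr a b = negate (cos (a - b))

-- The CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
-- Classical (local hidden variable) strategies obey |S| <= 2.
chsh :: Double -> Double -> Double -> Double -> Double
chsh a a' b b' = corr a b - corr a b' + corr a' b + corr a' b'

-- The standard optimal angle choice gives |S| = 2 * sqrt 2,
-- violating the classical bound.
quantumS :: Double
quantumS = abs (chsh 0 (pi / 2) (pi / 4) (3 * pi / 4))
```

The gap between 2 and 2√2 is exactly the margin the quantum players exploit against the house.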

You may have heard the saying that “correlation does not imply causation”. The idea is that if outcomes A and B are correlated, then either A causes B, or B causes A, or there is some other C that causes both A and B. However, in quantum physics there is a fourth possibility: we can have correlation without causation.

In light of Gull and Gill’s iron clad argument, I went back to reread Jaynes’s “Clearing up Mysteries”. I wanted to understand how Jaynes could have been so mistaken. After rereading it I realized that I had misunderstood what he was trying to say about Bell’s theorem. Jaynes just wanted to say two things.

Firstly, Jaynes wanted to say that Bell’s theorem does not necessarily imply action at a distance. This is not actually a controversial statement. The many-worlds interpretation is a local, non-realist (in the sense that experiments do not have unique definite outcomes) interpretation of quantum mechanics. This interpretation does not invoke any action at a distance and is perfectly compatible with Bell’s theorem. Jaynes spends some time noting that correlation does not imply causation in an attempt to clarify this point although he never talks about the many-worlds interpretation.

Secondly, Jaynes wanted to say that Bell’s theorem does not imply that quantum mechanics is the best possible physical theory that explains quantum outcomes. Here his argument is half-right and half-wrong. He spends some time suggesting that maybe there is a time-dependent hidden variable theory that could give more refined predictions than quantum theory does. However, the suggestion that any classical theory, time-dependent or otherwise, could underlie quantum mechanics is refuted by Bell’s theorem, and this is clearly illustrated by Gull’s computer network or by Bell’s Casino. Jaynes learned about Gull’s computer network argument at the same conference where he presented “Clearing Up Mysteries”. His writing suggests that he was surprised by the argument, but he did not want to rush to draw any conclusions from it without time to gain a deeper understanding of it. Nevertheless, Jaynes’s larger point was still correct: Bell’s theorem does not imply that there is no non-classical refinement of quantum mechanics that might yield more informative predictions than quantum mechanics does, and Jaynes was worried that people would not look for such a refinement.

Jaynes spent a lot of effort trying to separate epistemology, where probability theory rules how we reason in the face of imperfect knowledge, from ontology, which describes what happens in reality if we had perfect information. Jaynes thought that quantum mechanics was mixing these two branches together into one theory and worried that if people were mistaking quantum mechanics for an ontological theory then they would never seek a more refined theory.

While Bell’s theorem does not rule out that there may be a non-classical hidden variable theory, Colbeck and Renner’s paper “No extension of quantum theory can have improved predictive power” all but eliminates that possibility by proving that there is no quantum hidden variable theory. This can be seen as a strengthening of Bell’s theorem, and they even address some of the same concerns that Jaynes had about Bell’s theorem.

To quote Bell [2], locality is the requirement that “…the result of a measurement on one system [is] unaffected by operations on a distant system with which it has interacted in the past…” Indeed, our non-signalling conditions reflect this requirement and, in our language, the statement that PXYZ|ABC is non-signalling is equivalent to a statement that the model is local (see also the discussion in [28]). (We remind the reader that we do not assume the non-signalling conditions, but instead derive them from the free choice assumption.) In spite of the above quote, Bell’s formal definition of locality is slightly more restrictive than these non-signalling conditions. Bell considers extending the theory using hidden variables, here denoted by the variable Z. He requires PXY|ABZ = PX|AZ × PY|BZ (see e.g. [13]), which corresponds to assuming not only PX|ABZ = PX|AZ and PY|ABZ = PY|BZ (the non-signalling constraints, also called parameter-independence in this context), but also PX|ABYZ = PX|ABZ and PY|ABXZ = PY|ABZ (also called outcome-independence). These additional constraints do not follow from our assumptions and are not used in this work.

The probabilistic assumptions are weaker in Colbeck and Renner’s work than in Bell’s theorem, because they want to exclude quantum hidden variable theories in addition to classical hidden variable theories. Today, if one wants to advance a local hidden variable theory, it would have to be a theory that is even weirder than quantum mechanics, if such a thing is even logically possible. It seems that quantum mechanics’s wave function is an ontological description after all.

I wonder what Jaynes would have thought about this result. I suspect he would still be looking for an exotic hidden variable theory. He seemed so convinced that probability theory was solely in the realm of epistemology and not ontology that he would not accept any probabilistic ontology at all.

I think Jaynes was wrong when he suggested that quantum mechanics was necessarily mixing up epistemology and ontology. I believe the many-worlds interpretation is trying to make that distinction clear. In this interpretation the wave-function and Schrödinger’s equation are ontology, but the Born rule that relates the norm-squared amplitude to probability ought to be epistemological. However, there does remain an important mystery here: Why do the observers within the many-worlds observe quantum probabilities that satisfy the Born rule? I like to imagine Jaynes could solve this problem if he were still around. I imagine he would say something like, “Due to phase invariance of the wave-function … something something … transformation group … something something … thus the distribution must be in accordance with the Born rule.” After all, Jaynes did manage to use transformation groups to solve the Bertrand paradox, a problem widely regarded as being unsolvable due to being underspecified.

August 27, 2015 02:03 AM

August 26, 2015

Functional Jobs

Full Stack Haskell Software Engineer at Linkqlo Inc (Full-time)

Company Introduction

Linkqlo is a Palo Alto-based technology startup that is building a pioneering mobile community to connect people with better fitting clothes. We’re solving an industry-wide pain point for both consumers and fashion brands in retail shopping, sizing and fitting, just like Paypal took on the online payment challenge in 1999. Our app is available for download now in App Store. The next few months will see us focus on building up our iOS app with more features and functionalities to accelerate user acquisition. You’ll be an early member of an exciting pre-Series A startup in a fast-changing interesting space - fashion/apparel/retail, that is on a mission to redefine the way people discover and shop for clothes. We’re looking for talented, self-motivated people who are also passionate about our mission and excited about the challenges ahead.

We are looking for a committed full-time Full-Stack Haskell Software Engineer to integrate our iOS front-end client with the server, develop new product features, build and maintain robust and highly available database, design and develop HTML5 web site, and ensure high-performance and responsiveness of the entire system.


  • Single-digit employee badge number
  • Downtown Palo Alto location one block away from Caltrain Station
  • Dynamic and fun-loving startup culture
  • Attractive package in salary, stocks and benefits

About You

  • You love Haskell and want to build everything with Haskell
  • You have incredible coding skills and can turn ideas into extremely fast and reliable code
  • You are comfortable in working in an early-stage startup environment to quickly expand your skill set with increasingly substantial responsibilities
  • You buy into our vision and are interested in using the product for your own benefit
  • You are capable of designing and coding the whole web project yourself, or of supervising others to perform the tasks by overseeing the whole process from start to finish
  • You aspire to steer web projects in the right direction utilizing the best practices and latest advancements in the technology
  • You have excellent written and verbal English
  • You have legal status to work in U.S.


  • Development of server side logic to integrate user-facing elements developed by front-end developers
  • Management of hosting environments AWS EC2/S3 and Docker, including database administration and scaling an application to support load changes
  • Building reusable code and libraries for future use
  • Creating database schemas that represent and support business processes
  • Optimization of the application for maximum speed and scalability
  • Implementation of security and data protection
  • Designing and implementing data storage solutions
  • Implementing automated testing platforms and unit tests
  • Integration of multiple data sources and databases into one system


  • Expert in Haskell and experienced in one or more other web programming languages of Python/PHP/Ruby/Node.js/Java
  • Proficiency in data migration, transformation, backup and scripting
  • Fluency in popular web application frameworks (back-end is a must, front-end is a plus)
  • CS bachelor’s degree or an equivalent background in software engineering
  • Minimum three years of industrial experience in consumer web technology companies
  • Good understanding of front-end technologies and platforms, such as JavaScript, HTML5, and CSS3
  • Understanding accessibility and security compliance
  • Knowledge in user authentication and authorization between multiple systems, servers, and environments
  • Understanding differences between multiple delivery platforms such as mobile vs desktop, and optimizing output to match the specific platform
  • Mastery of code versioning tools, including Git (GitLab/GitHub)
  • Understanding of “session management” in a distributed server environment

Get information on how to apply for this position.

August 26, 2015 10:35 PM

Danny Gratzer

Type is not in Type

Posted on August 26, 2015
Tags: jonprl, types, haskell

I was reading a recent proposal to merge types and kinds in Haskell to start the transition to dependently typed Haskell. One thing that caught my eye as I was reading it was that this proposal adds * :: * to the type system. This is of some significance because it means that once this is fully realized, Haskell will be inconsistent (as a logic) in a new way! Of course, this isn’t a huge deal since Haskell is already woefully inconsistent with

  • unsafePerformIO
  • Recursive bindings
  • Recursive types
  • Exceptions

So it’s not like we’ll be entering new territory here. All that it means is that there’s a new way to inhabit every type in Haskell. If you were using Haskell as a proof assistant you were already in for a rude awakening I’m afraid :)

This is an issue of significance though for languages like Idris or Agda where such a thing would actually render proofs useless. Famously, Martin-Löf’s original type theory did have Type : Type (or * :: * in Haskell spelling) and Girard managed to derive a contradiction (Girard’s paradox). I’ve always been told that the particulars of this construction are a little bit complicated but to remember that Type : Type is bad.

In this post I’d like to prove that Type : Type is a contradiction in JonPRL. This is a little interesting because in most proof assistants this would work in two steps

  1. Hack the compiler to add the rule Type : Type
  2. Construct a contradiction and check it with the modified compiler

OK, to be fair, in something like Agda you could use the compiler hacking they’ve already done and just pass the {-# OPTIONS --type-in-type #-} pragma. The spirit of the development is the same though.

In JonPRL, I’m just going to prove this as a regular implication. We have a proposition which internalizes membership and I’ll demonstrate not(member(U{i}; U{i})) is provable (U{i} is how we say Type in JonPRL). It’s the same logic as we had before.

Background on JonPRL

Before we can really get to the proof we want to talk about, we should go through some of the more advanced features of JonPRL we need to use.

JonPRL is a little different from most proof assistants. For example, we can define a type of all closed terms in our language, whose equality is purely computational. This type is base. To prove that =(a; b; base) holds you have to prove ceq(a; b), the finest-grained equality in JonPRL. Two terms are ceq if they

  1. Both diverge, or
  2. Run to the same outermost form and have ceq components

What’s particularly exciting is that you can substitute any term for any other term ceq to it, no matter at what type it’s being used and under what hypotheses. In fact, the reduce tactic (which performs beta reductions) can conceptually be thought of as substituting a bunch of terms for their weak-head-normal forms which are ceq to the original terms. The relevant literature behind this is found in Doug Howe’s “Equality in a Lazy Computation System”. There’s more in JonPRL in this regard, we also have the asymmetric version of ceq (called approx) but we won’t need it today.

Next, let’s talk about the image type. This is a type constructor with the following formation rule:

 H ⊢ A : U{i}        H ⊢ f : base
      H ⊢ image(A; f) : U{i}

So here A is a type and f is anything. Things are going to be equal in image(A; f) if we can prove that they’re of the form f w and f w' where w = w' ∈ A. So image gives us the codomain (range) of a function. What’s pretty crazy about this is that it’s not just the range of some function A → B; we don’t really need a whole new type for that. It’s the range of literally any closed term we can apply. We can take the range of the Y combinator over pi types. We can take the range of lam(x. ⊥) over unit, anything we want!

This construct lets us define some really incredible things as a user of JonPRL. For example, the “squash” of a type is supposed to be a type which is occupied by <> (and only <>) if and only if there was an occupant of the original type. You can define these in HoTT with higher inductive types. Or, you can define these in this type theory as

    Operator squash : (0).
    [squash(A)] =def= [image(A; lam(x. <>))]

x ∈ squash(A) if and only if we can construct an a so that a ∈ A and lam(x. <>) a ~ x. Clearly x must be <> and we can construct such an a if and only if A is nonempty.
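For intuition only, here is a naive set-theoretic reading (an analogy of mine, not JonPRL's actual semantics): pretend a finite "type" is a list of canonical inhabitants, so that image(A; f) is just the set of values f takes on A, and squash falls out as the image under a constant function.

```haskell
import Data.List (nub)

-- Toy reading: a "type" is a finite list of inhabitants, and
-- image(A; f) is the set of values f produces on A.
image :: Eq b => [a] -> (a -> b) -> [b]
image as f = nub (map f as)

-- squash(A) = image(A; lam(x. <>)): inhabited by () exactly when A is.
squash :: [a] -> [()]
squash as = image as (const ())
```

The real type theory tracks equality proofs rather than enumerating inhabitants, but the membership condition is the same shape: something is in the image exactly when it is `f` applied to a member of `A`.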

We can also define the set-union of two types. Something is supposed to be in the set union if and only if it’s in one or the other. To define such a thing with an image type we have

    Operator union : (0).
    [union(A; B)] =def= [image((x : unit + unit) * decide(x; _.A; _.B); lam(x.snd(x)))]

This one is a bit more complicated. The domain of things we’re applying our function to this time is

    (x : unit + unit) * decide(x; _.A; _.B)

This is a dependent pair, sometimes called a Σ type. The first component is a boolean; if it is true the second component is of type A, and otherwise it’s of type B. So for every term of type A or B, there’s a term of this Σ type. In fact, we can recover that original term of type A or B by just grabbing the second component of the term! We don’t have to worry about the type of such an operation because we’re not creating something with a function type, just something in base.

Unions let us define an absolutely critical admissible rule in our system. JonPRL has this propositional reflection of the equality judgment and membership, but in Martin-Löf’s type theory, membership is non-negatable. By this I mean that if we have some a such that a = a ∈ A doesn’t hold, we won’t be able to prove =(a; a; A) -> void. You see, in order to prove such a thing we first have to prove that =(a; a; A) -> void is a type, which means proving that =(a; a; A) is a type.

In order to prove that =(a; b; A) is a proposition we have to prove =(a; a; A), =(b; b; A), and =(A; A; U{i}). The process of proving these will actually also show that the corresponding judgments, a ∈ A, b ∈ A, and A ∈ U{i} hold.

However, in the case that a and b are the same term this is just the same as proving =(a; b; A)! So =(a; a; A) is a proposition only if it’s true. However, we can add a rule that says that =(a; b; A) is a proposition if a = a ∈ (A ∪ base), and similarly for b! This fixes our negatability issue because we can just prove =(a; a; base), something that may be true even if a is not equal to itself in A. Before, having a function take a member(...) was useless: member(a; A) is just thin sugar for =(a; a; A), so member(a; A) is a proposition if and only if a = a ∈ A holds; in other words, it’s a proposition if and only if it’s true! With this new rule, we can prove member(a; A) is a proposition if A ∈ U{i} and a ∈ base, a much weaker set of conditions that almost always holds. We can apply this special rule in JonPRL with eq-eq-base instead of just eq-cd like the rest of our equality rules.

The Main Result

Now let’s actually begin proving Russell’s paradox. To start with some notation.

    Infix 20 "∈" := member.
    Infix 40 "~" := ceq.
    Infix 60 "∪" := bunion.
    Prefix 40 "¬" := not.

This let’s us say a ∈ b instead of member(a; b). JonPRL recently grew this ability to add transparent notation to terms, it makes our theorems a lot prettier.

Next we define the central term to our proof:

    Operator Russell : ().
    [Russell] =def= [{x : U{i} | ¬ (x ∈ x)}]

Here we’ve defined Russell as shorthand for a subset type, in particular a subset of U{i} (the universe of types). x ∈ Russell if x ∈ U{i} and ¬ (x ∈ x). Now normally we won’t be able to prove that this is a type (specifically x ∈ x is going to be a problem), but in our case we’ll have some help from an assumption that U{i} ∈ U{i}.

Now we begin to define a small set of tactics that we’ll want. These tactics are really where the fiddly bits of using JonPRL’s tactic system come into play. If you’re just reading this for the intuition as to why Type ∈ Type is bad just skip this. You’ll still understand the construction even if you don’t understand these bits of the proof.

First we have a tactic which finds an occurrence of H : A + B in the context and eliminates it. This gives us two goals, one with an A and one with a B. To do this we use match, which gives us something like match goal with in Coq.

    Tactic break-plus {
      @{ [H : _ + _ |- _] => elim <H>; thin <H> }
    }.

Note the syntax [H : ... |- ...] to match on a sequent. In particular here we just have _ + _ and _. Next we have a tactic bunion-eq-right. It’s to help us work with bunions (unions). Basically it turns =(M; N; bunion(A; B)) into

    =(lam(x.snd(x)) <<>, M>; lam(x.snd(x)) <<>, N>; bunion(A; B))

This is actually helpful because it turns out that once we unfold bunion we have to prove that M and N are in an image type, remember that bunion is just a thin layer of sugar on top of image types. In order to prove something is in the image type it needs to be of the form f a where f in our case is lam(x. snd(x)).

This is done with

    Tactic bunion-eq-right {
      @{ [|- =(M; N; L ∪ R)] =>
           csubst [M ~ lam(x. snd(x)) <inr(<>), M>] [h.=(h;_;_)];
           aux { unfold <snd>; reduce; auto };
           csubst [N ~ lam(x. snd(x)) <inr(<>), N>] [h.=(_;h;_)];
           aux { unfold <snd>; reduce; auto }
       }
    }.

The key here is csubst. It takes a ceq as its first argument and a “targeting”. It then tries to replace each occurrence of the left side of the equality with the right. To find each occurrence the targeting maps a variable to each occurrence. We’re allowed to use wildcards in the targeting as well. It also relegates actually proving the equality into a new subgoal. It’s easy enough to prove so we demonstrate it with aux {unfold <snd>; reduce; auto}.

We only need to apply this tactic after eq-eq-base, this applies that rule I mentioned earlier about proving equalities to be well-formed in a much more liberal environment. Therefore we wrap those two tactics into one more convenient package.

    Tactic eq-base-tac {
      @{ [|- =(=(M; N; A); =(M'; N'; A'); _)] =>
           eq-eq-base; auto;
           bunion-eq-right; unfold <bunion>
       }
    }.

There is one last tactic in this series, this one to prove that member(X; X) ∈ U{i'} is well-formed (a type). It starts by unfolding member into =(=(X; X; X); =(X; X; X); U{i}) and then applying the new tactic. Then we do other things. These things aren’t pretty. I suggest we just ignore them.

    Tactic impredicativity-wf-tac {
      unfold <member>; eq-base-tac;
      eq-cd; ?{@{[|- =(_; _; base)] => auto}};
      eq-cd @i'; ?{break-plus}; reduce; auto
    }.

Finally we have a tactic to prove that having both not(P) and P in the context proves void. This is another nice application of match:

    Tactic contradiction {
      unfold <not implies>;
      @{ [H : P -> void, H' : P |- void] =>
           elim <H> [H'];
           unfold <member>
       }
    }.

We start by unfolding not and implies. This gives us P -> void and P. From there, we just apply one to the other giving us a void as we wanted.

We’re now ready to prove our theorem. We start with

    Theorem type-not-in-type : [¬ (U{i} ∈ U{i})] {

We now have the main subgoal

Remaining subgoals:

[main] ⊢ not(member(U{i}; U{i}))

We can start by unfolding not and implies. Remember that not isn’t a built-in thing; it’s just sugar. By unfolding it we get the more primitive form, something we can actually apply the intro tactic to.

      unfold <not implies>; intro

Once unfolded, we’d get a goal along the lines of member(U{i}; U{i}) -> void. We immediately apply intro to this though. Now we have two subgoals; one is the result of applying intro, namely a hypothesis x : member(U{i}; U{i}) and a goal void. The second subgoal is the “well-formedness” obligation.

We have to prove that member(U{i}; U{i}) is a type in order to apply the intro tactic. This is a crucial difference between Coq-like systems and these proof-refinement logics. The process of demonstrating that what you’re proving is a proposition is intermingled with actually constructing the proof. It means you get to apply all the normal mathematical tools you have for proving things to be true in order to prove that they’re types. This gives us a lot of flexibility, but at the cost of sometimes-annoying subgoals. They’re annotated with [aux] (as opposed to [main]). This means we can target them all at once using the aux tactic.

To summarize that whole paragraph as JonPRL would say it, our proof state is

1. x : member(U{i}; U{i})
⊢ void

[aux] ⊢ member(member(U{i}; U{i}); U{i'})

Let’s get rid of that auxiliary subgoal using that impredictivity-wf-tac, this subgoal is in fact exactly what it was made for.

      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };

This picks off that [aux] goal leaving us with just

1. x : member(U{i}; U{i})
⊢ void

Now we need to prove some lemmas. They state that Russell is actually a type. This is possible to do here and only here because we’ll need to actually use x in the process of proving it. It’s a very nice example of what explicitly proving well-formedness can give you! After all, the process of demonstrating that Russell is a type is nontrivial and only possible in this hypothetical context; rather than just hoping that JonPRL is clever enough to figure that out for itself, we get to demonstrate it locally.

We’re going to use the assert tactic to get these lemmas. This lets us state a term, prove it as a subgoal and use it as a hypothesis in the main goal. If you’re logically minded, it’s cut.

      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };

      assert [Russell ∈ U{i}] <russell-wf>;

The thing in angle brackets is the name it will get in our hypothetical context for the main goal. This leaves us with two subgoals. The aux one being the assertion and the main one being allowed to assume it.

1. x : member(U{i}; U{i})
⊢ member(Russell; U{i})

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void

We can prove this by basically working our way towards using impredicativity-wf-tac. We’ll use aux again to target the aux subgoal. We’ll start by unfolding everything and applying eq-cd.

      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;

Remember that Russell is {x : U{i} | ¬ (x ∈ x)}

We just applied eq-cd to a subset type (Russell), so we get two subgoals: one says that U{i} is a type, the other says that if x ∈ U{i} then ¬ (x ∈ x) is also a type. In essence this just says that a subset type is a type if both of its components are types. The former goal is quite straightforward, so we apply auto to take care of it. Now we have one new subgoal to handle

1. x : =(U{i}; U{i}; U{i})
2. x' : U{i}
⊢ =(not(member(x'; x')); not(member(x'; x')); U{i})

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void

The second subgoal is just the rest of the proof, and the first subgoal is what we want to handle. It says that if we have a type x, then not(member(x; x)) is a type (albeit in ugly notation). To prove this we have to unfold not. So we’ll do this and apply eq-cd again.

      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;

Remember that not(P) desugars to P -> void. Applying eq-cd is going to give us two subgoals, P is a type and void is a type. However, member(void; U{i}) is pretty easy to prove, so we apply auto again which takes care of one of our two new goals. Now we just have

1. x : =(U{i}; U{i}; U{i})
2. x' : U{i}
⊢ =(member(x'; x'); member(x'; x'); U{i})

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void

Now we’re getting to the root of the issue. We’re trying to prove that member(x'; x') is a type. This is happily handled by impredicativity-wf-tac which will use our assumption that U{i} ∈ U{i} because it’s smart like that.

      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;

Now we just have that main goal with the assumption russell-wf added.

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
⊢ void

Now we have a similar well-formedness goal to assert and prove. We want to prove that member(Russell; Russell) is a type. This is easier, though; we can prove it directly using impredicativity-wf-tac.

      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { impredicativity-wf-tac; cum @i; auto };

That cum @i is a quirk of impredicativity-wf-tac. It basically means that instead of proving =(...; ...; U{i'}) we can prove =(...; ...; U{i}) since U{i} is a universe below U{i'} and all universes are cumulative.

Our goal is now

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
⊢ void

Ok, now that we have all these well-formedness lemmas the actual reasoning can start. Our proof sketch is basically as follows

  1. Prove that Russell ∈ Russell is false. This is because if Russell were in Russell then, by the definition of Russell, it isn’t in Russell.
  2. Since not(Russell ∈ Russell) holds, Russell ∈ Russell holds.
  3. Hilarity ensues.
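As an aside for Haskellers: the core of this argument can be sketched in plain Haskell, where an unrestricted recursive type plays the role of the inconsistent U{i} ∈ U{i}. This is only an analogy to the JonPRL proof (the names below are made up for illustration), but it shows the same self-application trick inhabiting the empty type:

```haskell
-- An empty type, standing in for JonPRL's void.
data Void

-- A negatively recursive type: a Russell "is" a refutation of itself.
newtype Russell = MkRussell (Russell -> Void)

-- Step 1 of the sketch: if Russell is "in" Russell, contradiction.
notInSelf :: Russell -> Void
notInSelf r@(MkRussell f) = f r

-- Step 2: but then Russell is "in" Russell, giving an inhabitant of Void.
paradox :: Void
paradox = notInSelf (MkRussell notInSelf)

main :: IO ()
main = putStrLn "typechecks"  -- we never force `paradox`
```

The difference is that Haskell happily accepts this because its type system isn’t meant to be a consistent logic: evaluating paradox simply diverges. That is exactly why JonPRL must rule out U{i} ∈ U{i} instead.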

Here’s the first assertion:

      unfold <not implies>; intro;
      aux { impredicativity-wf-tac };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux {
        unfold <member Russell>; eq-cd; auto;
        unfold <not implies>; eq-cd; auto;

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { impredicativity-wf-tac; cum @i; auto };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;

Here are our subgoals:

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
⊢ not(member(Russell; Russell))

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

We want to prove that first one. To start, let’s unfold that not and move member(Russell; Russell) to the hypothesis and use it to prove void. We do this with intro.

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};

Notice that the well-formedness goal that intro generated is handled by our assumption! After all, it’s just member(Russell; Russell) ∈ U{i}, we already proved it. Now our subgoals look like this

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. x' : member(Russell; Russell)
⊢ void

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Here’s our clever plan

  1. Since Russell ∈ Russell, there’s an X : Russell so that ceq(Russell; X) holds
  2. Since X : Russell, we can unfold Russell to say that X : {x : U{i} | ¬ (x ∈ x)}
  3. We can apply the elimination principle for subset types to X and derive that ¬ (X ∈ X)
  4. Rewriting by ceq(Russell; X) gives ¬ (Russell ∈ Russell)
  5. Now we have a contradiction

Let’s start explaining this to JonPRL by introducing that X (here called R). We’ll assert an R : Russell such that R ~ Russell. We do this using dependent pairs (here written (x : A) * B(x)).

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux {
          intro [Russell] @i; auto

We’ve proven this by intro. To prove a dependent pair we provide an explicit witness for the first component: to prove (x : A) * B(x) we say intro [Foo], and we then have the goals Foo ∈ A and B(Foo). Since subgoals are fully independent of each other, we have to give the witness for the first component upfront. It’s a little awkward; Jon’s working on it :).

In this case we use intro [Russell]. After this we have to prove that this witness has type Russell and then prove the second component holds. Happily, auto takes care of both of these obligations so intro [Russell] @i; auto handles it all.

Now we promptly eliminate this pair. It gives us two new facts, that R : Russell and R ~ Russell hold.

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux {
          intro [Russell] @i; auto

        elim <R-with-prop>; thin <R-with-prop>

This leaves our goal as

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. x' : member(Russell; Russell)
5. s : Russell
6. t : ceq(s; Russell)
⊢ void

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Now let’s invert on the hypothesis that s : Russell; we want to use it to conclude that ¬ (s ∈ s) holds since that will give us ¬ (R ∈ R).

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux {
          intro [Russell] @i; auto

        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;

Now that we’ve unfolded all of those Russells our goal is a little bit harder to read; remember to mentally read {x : U{i} | not(member(x; x))} as Russell.

1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
⊢ void

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Now we use #7 to derive that not(member(Russell; Russell)) holds.

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux {
          intro [Russell] @i; auto

        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;

        assert [¬ member(Russell; Russell)];
        aux {
          unfold <Russell>;

This leaves us with 3 subgoals, the first one being the assertion.

1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
⊢ not(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}))

1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
9. H : not(member(Russell; Russell))
⊢ void

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Now to prove this, what we need to do is substitute the unfolded Russell for x''; from there it’s immediate by assumption. We perform the substitution with chyp-subst. This takes a direction in which to substitute, the hypothesis to use, and a target telling us where to apply the substitution.

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux {
          intro [Russell] @i; auto

        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;

        assert [¬ member(Russell; Russell)];
        aux {
          unfold <Russell>;
          chyp-subst ← #8 [h. ¬ (h ∈ h)];

This leaves us with a much more tractable goal.

1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
⊢ not(member(x''; x''))

1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
9. H : not(member(Russell; Russell))
⊢ void

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

We’d like to just apply assumption but it’s not immediately applicable due to some technical details (basically we can only apply an assumption in a proof-irrelevant context, but we have to unfold Russell and introduce it to demonstrate that it’s irrelevant). So just read what’s left as a (very) convoluted assumption.

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux {
          intro [Russell] @i; auto

        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;

        assert [¬ member(Russell; Russell)];
        aux {
          unfold <Russell>;
          chyp-subst ← #8 [h. ¬ (h ∈ h)];
          unfold <not implies>;
          intro; aux { impredicativity-wf-tac };

Now we’re almost through this assertion, our subgoals look like this (pay attention to 9 and 4)

1. x : member(U{i}; U{i})
2. russell-wf : member({x:U{i} | not(member(x; x))}; U{i})
3. russell-in-russell-wf : member(member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))}); U{i})
4. x' : member({x:U{i} | not(member(x; x))}; {x:U{i} | not(member(x; x))})
5. s : {x:U{i} | not(member(x; x))}
6. x'' : U{i}
7. [t'] : not(member(x''; x''))
8. t : ceq(x''; {x:U{i} | not(member(x; x))})
9. H : not(member(Russell; Russell))
⊢ void

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Once we unfold that Russell we have an immediate contradiction so unfold <Russell>; contradiction solves it.

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux {
        unfold <not implies>;
        intro @i; aux {assumption};
        assert [(R : Russell) * R ~ Russell] <R-with-prop>;
        aux {
          intro [Russell] @i; auto

        elim <R-with-prop>; thin <R-with-prop>;
        unfold <Russell>; elim #5;

        assert [¬ member(Russell; Russell)];
        aux {
          unfold <Russell>;
          chyp-subst ← #8 [h. ¬ (h ∈ h)];
          unfold <not implies>;
          intro; aux { impredicativity-wf-tac };

        unfold <Russell>; contradiction

This takes care of this subgoal, so now we’re back on the main goal. This time though we have an extra hypothesis which will provide the leverage we need to prove our next assertion.

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ void

Now we’re going to claim that Russell is in fact a member of Russell. This will follow from the fact that we’ve proved already that Russell isn’t in Russell (yeah, it seems pretty paradoxical already).

      unfold <not implies>; intro;
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux { ... };

      assert [Russell ∈ Russell];

Giving us

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
⊢ member(Russell; Russell)

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
5. H : member(Russell; Russell)
⊢ void

Proving this is pretty straightforward: we only have to demonstrate that not(Russell ∈ Russell) and Russell ∈ U{i} hold, both of which we have as assumptions. The rest of the proof is just more well-formedness goals.

First we unfold everything and apply eq-cd. This gives us 3 subgoals; the first two are Russell ∈ U{i} and ¬ (Russell ∈ Russell). Since we have these as assumptions we’ll use main {assumption}, which targets both of these goals and proves them immediately. By using main we avoid applying this to the well-formedness goal, which in this case isn’t provable by an assumption.

      unfold <not implies>; intro
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux { ... };

      assert [Russell ∈ Russell];
      aux {
        unfold <member Russell>; eq-cd;
        unfold <member>;

        main { assumption };

This just leaves us with one awful well-formedness goal requiring us to prove that not(=(x; x; x)) is a type if x is a type. We actually proved something similar back when we proved that Russell was well-formed. The proof is the same as then: just unfold, eq-cd and impredicativity-wf-tac. We use ?{!{auto}} to apply auto only in subgoals it immediately proves. Here ?{} says “run this or do nothing” and !{} says “run this; if it solves the goal, stop; if it does anything else, fail”. This is not an interesting portion of the proof, so don’t burn too many cycles trying to figure it out.

      unfold <not implies>; intro
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux { ... };

      assert [Russell ∈ Russell] <russell-in-russell>;
      aux {
        unfold <member Russell>; eq-cd;
        unfold <member>;

        main { assumption };
        unfold <not implies>; eq-cd; ?{!{auto}};
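The behavior of those two combinators can be made concrete with a toy model of tactics in Haskell — purely hypothetical, not how JonPRL implements them — where a tactic maps a goal to Nothing (failure) or a list of remaining subgoals:

```haskell
import Control.Applicative ((<|>))

-- A toy model: a tactic either fails or returns the remaining subgoals.
type Goal = String
type Tac  = Goal -> Maybe [Goal]

-- ?{t}: run t, and if it fails, succeed without changing the goal.
try :: Tac -> Tac
try t g = t g <|> Just [g]

-- !{t}: run t; succeed only if t completely solves the goal, else fail.
complete :: Tac -> Tac
complete t g = case t g of
  Just [] -> Just []
  _       -> Nothing

-- A stand-in for auto: solves "trivial", makes progress on "pair",
-- fails on anything else.
auto :: Tac
auto "trivial" = Just []
auto "pair"    = Just ["left", "right"]
auto _         = Nothing

main :: IO ()
main = do
  print (try (complete auto) "trivial")  -- solved outright
  print (try (complete auto) "pair")     -- progress isn't enough: !{} fails, ?{} keeps the goal
  print (try (complete auto) "hard")     -- auto fails, ?{} keeps the goal
```

So ?{!{auto}} either discharges a subgoal entirely or leaves it untouched — it never leaves auto’s half-finished state behind.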

Now we just have the final subgoal to prove. We’re actually in a position to do so now.

1. x : member(U{i}; U{i})
2. russell-wf : member(Russell; U{i})
3. russell-in-russell-wf : member(member(Russell; Russell); U{i})
4. russell-not-in-russell : not(member(Russell; Russell))
5. russell-in-russell : member(Russell; Russell)
⊢ void

Now that we’ve shown P and not(P) hold at the same time all we need to do is apply contradiction and we’re done.

    Theorem type-not-in-type [¬ (U{i} ∈ U{i})] {
      unfold <not implies>; intro
      aux { ... };

      assert [Russell ∈ U{i}] <russell-wf>;
      aux { ... };

      assert [(Russell ∈ Russell) ∈ U{i}] <russell-in-russell-wf>;
      aux { ... };

      assert [¬ (Russell ∈ Russell)] <not-russell-in-russell>;
      aux { ... };

      assert [Russell ∈ Russell] <russell-in-russell>;
      aux { ... };
      contradiction
    }


And there you have it, a complete proof of Russell’s paradox fully formalized in JonPRL! We actually proved a slightly stronger result than just that the type of types cannot be in itself: we proved that at any point in the hierarchy of universes (the first of which is Type/*/whatever), if you tie it off, you’ll get a contradiction.

Wrap Up

I hope you found this proof interesting. Even if you’re not at all interested in JonPRL, it’s nice to see that allowing one to have U{i} ∈ U{i} or * :: * gives you the ability to have a type like Russell and with it, inhabit void. I also find it especially pleasing that we can prove something like this in JonPRL; it’s growing up so fast.

Thanks to Jon for greatly improving the original proof we had.


August 26, 2015 12:00 AM

August 25, 2015

The GHC Team

GHC Weekly News - 2015/08/06

GHC Weekly News - 6 Aug 2015

Hello *,

Here is a rather belated Weekly News which I found sitting nearly done on my work-queue. I hope this will make for a good read despite its age. The next edition of the Weekly News will be posted soon.

Warnings for missed specialization opportunities

Simon Peyton Jones recently introduced (commit a4261549) a warning in master to alert users when the compiler was unable to specialize an imported binding despite it being marked as INLINABLE. This change was motivated by #10720, where the reporter observed poor runtime performance despite taking care to ensure his binding could be inlined. Up until now, ensuring that the compiler's optimizations met the user's expectations would require a careful look at the produced Core. With this change the user is notified of exactly where the compiler had to stop specializing, along with a helpful hint on where to add an INLINABLE pragma.

Ticky-Ticky profiling

Recently I have been looking into breathing life back into GHC's ticky-ticky profiling mechanism. When enabled, ticky-ticky maintains low-level counters of various runtime-system events. These include closure entries, updates, and allocations. While ticky doesn't provide nearly the detail that the cost-center profiler allows, it is invisible to the Core-to-Core optimization passes and has minimal runtime overhead (manifested as a bit more memory traffic due to counter updates). For this reason, the ticky-ticky profiler can be a useful tool for those working on the Core simplifier.

Sadly, ticky-ticky has fallen into quite a state of disrepair in recent years as the runtime system and native code generator have evolved. As the beginning of an effort to resuscitate the ticky-ticky profiler I've started putting together a list of the counters currently implemented and whether they can be expected to do something useful. Evaluating the functionality of these counters is non-trivial, however, so this will be an on-going effort.

One of our goals is to eventually do a systematic comparison of the heap allocation numbers produced by the ticky-ticky profiler and the cost-center profiler. While this will help validate some of the more coarse-grained counters exposed by ticky, most of them will need a more thorough read-through of the runtime system to verify.

integer-gmp Performance

Since the 7.10.2 release much of my effort has been devoted to characterizing the performance of various benchmarks over various GHC versions. This is part of an effort to find places where we have regressed in the past few versions. One product of this effort is a complete comparison of results from our nofib benchmark suite ranging from 7.4.2 to 7.10.1.

The good news is there are essentially no disastrous regressions. Moreover, on the mean, runtimes are over 10% faster than they were in 7.4.2. There are, however, a few cases which have regressed. The runtime of the integer test, for instance, has increased by 7%. Looking at the trend across versions, it becomes apparent that the regression began with 7.10.1.

One of the improvements that was introduced with 7.10 was a rewrite of the integer-gmp library, which this benchmark tests heavily. To isolate this potential cause, I recompiled GHC 7.10.1 with the old integer-gmp-0.5. Comparing 7.10.1 with the two integer-gmp versions reveals a 4% increase in allocations.

While we can't necessarily attribute all of the runtime increase to these allocations, they are something that should be addressed if possible. Herbert Valerio Riedel, the author of the integer-gmp rewrite, believes that the cause may be due to the tendency for the rewrite to initially allocate a conservatively-sized backing ByteArray# for results. This leads to increased allocations due to the reallocations that are later required to accommodate larger-than-expected results.

While being more liberal in the initial allocation sizes would solve the reallocation issue, this approach may substantially increase working-set sizes and heap fragmentation for integer-heavy workloads. For this reason, Herbert will be looking into exploiting a feature of our heap allocator. Heap allocations in GHC occur by bumping a pointer into an allocation block. Not only is this a very efficient means of allocating, it potentially allows one to efficiently grow an existing allocation. In this case, if we allocate a buffer and soon after realize that our request was too small we can simply bump the heap pointer by the size deficit, so long as no other allocations have occurred since our initial allocation. We can do this since we know that the memory after the heap pointer is available; we merely need to ensure that the current block we are allocating into is large enough.
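As a rough sketch of the idea (a toy model with made-up names, not GHC's actual allocator), bump allocation and grow-in-place look like this:

```haskell
-- A toy model of bump-pointer allocation.  The heap is just a
-- high-water mark; an allocation can be grown in place only if
-- nothing else has been allocated since it.
data Heap = Heap { hp :: Int, lastAlloc :: Int } deriving Show

-- Allocate n words: return the start address and bump the pointer.
alloc :: Int -> Heap -> (Int, Heap)
alloc n (Heap p _) = (p, Heap (p + n) p)

-- Grow the allocation at `start` by d words.  Succeeds only when it
-- is still the most recent allocation, i.e. the space past the heap
-- pointer is still free.
grow :: Int -> Int -> Heap -> Maybe Heap
grow start d h@(Heap p la)
  | start == la = Just h { hp = p + d }
  | otherwise   = Nothing

main :: IO ()
main = do
  let h0       = Heap 0 (-1)
      (a, h1)  = alloc 4 h0
      Just h2  = grow a 2 h1    -- nothing allocated since: ok
      (_b, h3) = alloc 8 h2
  print (hp h2)                 -- the first allocation now spans 6 words
  print (grow a 2 h3)           -- a later allocation intervened: refused
```

The real allocator must additionally check that the current block is large enough, but the invariant is the same: the memory just past the heap pointer is known to be free.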

Simon Marlow and Herbert will be investigating this possibility in the coming weeks.

D924: mapM_ and traverse_

As discussed in the most recent Weekly News, one issue on our plate at the moment is Phab:D924, which attempted to patch up two remaining facets of the Applicative-Monad Proposal,

  1. Remove the override of mapM for the [] Traversable instance
  2. Rewrite mapM_ in terms of traverse_

While (1) seems like an obvious cleanup, (2) is a bit tricky. As noted last time, traverse_ appears to give rise to non-linear behavior in this context.

akio has contributed an insightful analysis shedding light on the cause of this behavior. Given that the quadratic behavior is intrinsic to the Applicative formulation, we'll be sending this matter back to the Core Libraries Committee to inform their future design decisions.

That is all for this week!


~ Ben

by bgamari at August 25, 2015 09:14 PM

Dimitri Sabadie

Contravariance and luminance to add safety to uniforms

It’s been a few days since I last posted about luminance. I’m on holidays, so I can’t be as involved in the development of the graphics framework as I usually am on a daily basis. Although I’ve been producing less in the past few days, I’ve been actively thinking about something very important: uniforms.

What people usually do

Uniforms are a way to pass data to shaders. I won’t talk about uniform blocks nor uniform buffers – I’ll make a dedicated post for that purpose. The common OpenGL uniform flow is the following:

  1. you ask OpenGL to retrieve the location of a GLSL uniform through the function glGetUniformLocation, or you can use an explicit location if you want to handle the semantics on your own ;
  2. you use that location, the identifier of your shader program and send the actual values with the proper glProgramUniform.

You typically don’t retrieve the location each time you need to send values to the GPU – you only retrieve them once, while initializing.

The first step toward making uniforms more elegant and safer is a typeclass providing a shared interface. Instead of using several functions for each type of uniform – glProgramUniform1i for Int32, glProgramUniform1f for Float and so on – we can just provide a single function that dispatches to the right OpenGL call for the type:

class Uniform a where
  sendUniform :: GLuint -> GLint -> a -> IO ()

instance Uniform Int32 where
  sendUniform = glProgramUniform1i

instance Uniform Float where
  sendUniform = glProgramUniform1f

-- and so on…

That’s the first step, and I think everyone should do that. However, that way of doing things has several drawbacks:

  • it still relies on side-effects; that is, we can call sendUniform pretty much everywhere ;
  • imagine we have a shader program that requires several uniforms to be passed each time we draw something; what happens if we forget to call sendUniform? If we haven’t sent the uniform yet, we might get undefined behavior. If we already have, we will override all future draws with that value, which is very wrong… ;
  • with that way of representing uniforms, we have a very imperative interface; we can have a more composable and pure approach than that, hence enabling us to gain in power and flexibility.

What luminance used to do

In my luminance package, I used to represent uniforms as values.

newtype U a = U { runU :: a -> IO () }

We can then alter the Uniform typeclass to make it simpler:

class Uniform a where
  toU :: GLuint -> GLint -> U a

instance Uniform Int32 where
  toU prog l = U $ glProgramUniform1i prog l

instance Uniform Float where
  toU prog l = U $ glProgramUniform1f prog l

We also have a pure interface now. I used to provide another type, Uniformed, to be able to send uniforms without exposing IO, and an operator to accumulate uniforms settings, (@=):

newtype Uniformed a = Uniformed { runUniformed :: IO a } deriving (Applicative,Functor,Monad)

(@=) :: U a -> a -> Uniformed ()
U f @= a = Uniformed $ f a

Pretty simple.
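To see this interface in action, here is a small self-contained sketch: the types are redefined locally, the “uniforms” record into IORefs instead of calling into OpenGL, and andThen stands in for the sequencing the real Applicative/Monad instances of Uniformed would provide:

```haskell
import Data.IORef

-- Local redefinitions of the post's types, so this runs on its own.
newtype U a = U { runU :: a -> IO () }

newtype Uniformed a = Uniformed { runUniformed :: IO a }

(@=) :: U a -> a -> Uniformed ()
U f @= a = Uniformed (f a)

-- Stand-in for (*>) on the real Uniformed Applicative instance.
andThen :: Uniformed () -> Uniformed () -> Uniformed ()
andThen (Uniformed a) (Uniformed b) = Uniformed (a >> b)

main :: IO ()
main = do
  scale  <- newIORef (0 :: Float)
  offset <- newIORef (0 :: Float)
  -- these would come from Uniform.toU in the real library
  let uScale  = U (writeIORef scale)
      uOffset = U (writeIORef offset)
  runUniformed ((uScale @= 2) `andThen` (uOffset @= 0.5))
  readIORef scale  >>= print
  readIORef offset >>= print
```

Nothing happens until runUniformed is called, which is exactly the point: the settings accumulate as a value first.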

The new uniform interface

The problem with that is that we still have the completion problem and the side-effects, because we just wrap them without adding anything special – Uniformed is isomorphic to IO. We have no way to create a type and ensure that all uniforms have been sent down to the GPU…

Contravariance to save us!

If you’re an advanced Haskell programmer, you might have noticed something very interesting about our U type. It’s contravariant in its argument. What’s cool about that is that we could then create new uniform types – new U – by contramapping over those types! That means we can enrich the scope of the hardcoded Uniform instances, because the only way we have to get a U is to use Uniform.toU. With contravariance, we can – in theory – extend those types to all types.

Sounds handy, eh? First things first: the contravariant functor. A contravariant functor is a functor that flips the direction of the morphism:

class Contravariant f where
contramap :: (a -> b) -> f b -> f a
(>$) :: b -> f b -> f a

contramap is the contravariant version of fmap and (>$) is the contravariant version of (<$). If you're not used to contravariance or if it's the first time you've seen such a type signature, it might seem confusing or even magic. Well, that's where the mathematical magic comes in! But you'll see just below that there's no magic and no trick in the implementation.

Because U is contravariant in its argument, we can define a Contravariant instance:

instance Contravariant U where
contramap f u = U $ runU u . f

As you can see, nothing tricky here. We just apply the (a -> b) function on the input of the resulting U a so that we can pass it to u, and we just runU the whole thing.

A few friends of mine – not Haskellers though – told me things like "That's just theory bullshit, no one needs to know what a contravariant thingy stuff is!". Well, here's an example:

data Color = Color {
colorName :: String
, colorValue :: (Float,Float,Float,Float)
}

Even though we have an instance of Uniform for (Float,Float,Float,Float), there will never be an instance of Uniform for Color, so we can’t have a U Color… Or can we?

uColor = contramap colorValue float4U

The type of uColor is… U Color! That works because contravariance enabled us to adapt the Color structure so that we end up on (Float,Float,Float,Float). The contravariance property is then a very great ally in such situations!
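The whole trick condenses into a self-contained sketch – here float4U, which in the real package comes from the hardcoded Uniform instance, is mocked with an IORef so the snippet runs without any OpenGL context:

```haskell
import Data.Functor.Contravariant (Contravariant (..))
import Data.IORef

newtype U a = U { runU :: a -> IO () }

instance Contravariant U where
  contramap f u = U (runU u . f)

data Color = Color
  { colorName  :: String
  , colorValue :: (Float, Float, Float, Float)
  }

main :: IO ()
main = do
  sent <- newIORef (0, 0, 0, 0)
  -- float4U mocks the real uniform: it just records what was sent
  let float4U = U (writeIORef sent)
      uColor  = contramap colorValue float4U
  runU uColor (Color "red" (1, 0, 0, 1))
  readIORef sent >>= print   -- prints (1.0,0.0,0.0,1.0)
```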

More contravariance

We can dig even deeper! Something cool would be to do the same thing, but for several fields. Imagine a mouse:

data Mouse = Mouse {
mouseX :: Float
, mouseY :: Float
}

We’d like to find a cool way to have U Mouse, so that we can send the mouse cursor to shaders. We’d like to contramap over mouseX and mouseY. A bit like with Functor + Applicative:

getMouseX :: IO Float
getMouseY :: IO Float

getMouse :: IO Mouse
getMouse = Mouse <$> getMouseX <*> getMouseY

We could have the same thing for contravariance… And guess what. That exists, and that’s called divisible contravariant functors! A Divisible contravariant functor is the exact contravariant version of Applicative!

class (Contravariant f) => Divisible f where
divide :: (a -> (b,c)) -> f b -> f c -> f a
conquer :: f a

divide is the contravariant version of (<*>) and conquer is the contravariant version of pure. You know that pure’s type is a -> f a, which is isomorphic to (() -> a) -> f a. Take the contravariant version of (() -> a) -> f a, you end up with (a -> ()) -> f a. (a -> ()) is isomorphic to (), so we can simplify the whole thing to f a. Here you have conquer. Thank you to Edward Kmett for helping me understand that!

Let’s see how we can implement Divisible for U!

instance Divisible U where
divide f p q = U $ \a -> do
let (b,c) = f a
runU p b
runU q c
conquer = U . const $ pure ()

And now let’s use it to get a U Mouse!

let uMouse = divide (\(Mouse mx my) -> (mx,my)) mouseXU mouseYU

And here we have uMouse :: U Mouse! As you can see, if you have several uniforms – one for each field of the type – you can divide your type and map every field to its uniform by applying divide several times.
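For more than two fields the nesting pattern looks like this. The Camera type below is a hypothetical example; note also that the real Divisible class lives in the contravariant package (Data.Functor.Contravariant.Divisible) with a Contravariant superclass, trimmed here to keep the snippet self-contained:

```haskell
import Data.IORef

newtype U a = U { runU :: a -> IO () }

-- the Divisible class as defined in the post, minus the
-- Contravariant superclass, so the sketch stands alone
class Divisible f where
  divide  :: (a -> (b, c)) -> f b -> f c -> f a
  conquer :: f a

instance Divisible U where
  divide f p q = U $ \a -> let (b, c) = f a in runU p b >> runU q c
  conquer = U (const (pure ()))

-- a hypothetical three-field record: divide twice, nesting the pairs
data Camera = Camera { camX, camY, camZoom :: Float }

uCamera :: U Float -> U Float -> U Float -> U Camera
uCamera ux uy uzoom =
  divide (\(Camera x y z) -> (x, (y, z))) ux (divide id uy uzoom)

main :: IO ()
main = do
  sent <- newIORef []
  -- mock uniforms that record what was sent, instead of calling OpenGL
  let record name = U $ \v -> modifyIORef sent (++ [(name, v :: Float)])
  runU (uCamera (record "x") (record "y") (record "zoom")) (Camera 3 4 1.5)
  readIORef sent >>= print   -- prints [("x",3.0),("y",4.0),("zoom",1.5)]
```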

The current implementation is almost the one shown here. There’s also a Decidable instance, but I won’t talk about that for now.

The cool thing about that is that I can drop the Uniformed monadic type and rely only on U. Thanks to the Divisible typeclass, we get completion, and we can no longer override future uniforms!

I hope you’ve learnt something cool and useful through this. Keep in mind that category abstractions are powerful and are useful in some contexts.

Keep hacking around, keep being curious. A Haskeller never stops learning! And that's what's so cool about Haskell! Keep the vibe, and see you in another luminance post soon!

by Dimitri Sabadie ( at August 25, 2015 03:02 PM


Hackage Security Beta Release

Well-Typed and the Industrial Haskell Group are very happy to announce the beta release of the hackage-security library, along with its integration in hackage-server and cabal-install. The new security features of hackage are now deployed on the central server and there is a beta release of cabal available. You can install it through

cabal install \ \

This will install a cabal-secure-beta binary which you can use alongside your normal installation of cabal.

For a more detailed discussion of the rationale behind this project, see the announcement of the alpha release or the initial announcement of this project. We will also be giving a talk about the details at the upcoming Haskell Implementors Workshop. In the remainder of this blog post we will describe what's available right now.

What’s in it for you?

Increased security

The Hackage server now does index signing. This means that if an attacker sits between you and Hackage and tries to feed you different packages than you think you are installing, cabal will notice this and throw a security exception. Index signing provides no (or very limited) security against compromise of the central server itself, but allows clients to verify that what they are getting is indeed what is on the central server.

 (Untrusted) mirrors

A very important corollary of the previous point is that we can now have untrusted mirrors. Anyone can offer to mirror hackage and we can gratefully accept these offers without having to trust those mirror operators. Whether we are downloading from the mirror or from the primary server, the new security features make it possible to verify that what we are downloading is what is on the primary server.

In practice this means we can have mirrors at all, and we can use them fully automatically with no client-side configuration required. This should give a huge boost to the reliability of Hackage; even AWS goes down from time to time, but properly decentralised mirrors should mean there's always a recent snapshot available.

On the client side, the very first time cabal updates from the primary server it also finds out what mirrors are available. On subsequent updates it will automatically make use of any of those mirrors: if it encounters a problem with one it will try another. Updates to the list of mirrors are also fully automatic.

For operating a mirror, we have extended the hackage-mirror client (currently bundled in the hackage-server package) so that it can be used to mirror a Hackage repository to a simple set of local files which can then be served by an ordinary HTTP server.

We already have one mirror available in time for the beta. The OSU Open Source Lab have very kindly agreed to host a Hackage mirror for the benefit of the Haskell community. This mirror is now live at, but we didn’t need to tell you that: (the beta release of) cabal will notice this automatically without any configuration on the part of the user thanks to

Getting a mirror up and running is very easy, so if you would like to host a public Hackage mirror, then please do get in touch; during the beta period get in touch with us, or later on get in touch with the Hackage admins.

Incremental updates

Hackage provides a 00-index.tar.gz resource which is a tarball containing the .cabal files for all packages available on Hackage. It is this file that cabal downloads when you call cabal update, and that it uses during dependency resolution.

However, this file is quite large, which is why cabal update can take a few seconds to complete. In fact, at nearly 10MB, the index is now considerably larger than almost all package source tarballs.

As part of the security work we have had to extend this index with extra security metadata, making the file even larger. So we have also taken the opportunity to dramatically reduce download sizes by allowing clients to update this file incrementally. The index tarball is now extended in an append-only way. This means that once cabal has downloaded the tarball once, on subsequent updates it can just download the little bit it doesn’t yet have. To avoid making existing clients download the new larger index file each time, the 00-index.tar.gz is kept as it always was and repositories supporting the new features additionally provide a 01-index.tar.gz. In future we could additionally provide a .tar.xz variant and thereby keep the first-time update size to a minimum.

The append-only nature of the index has additional benefits; in effect, the index becomes a log of Hackage's history. This log can be used for various purposes; for example, we can track how install plans for packages change over time. As another example, Herbert Valerio Riedel has been working on a "package-index wayback" feature for Cabal. This uses the index to recreate a past view of the package index for recovering now bit-rotted install plans that were known to work in the past.

There are currently a few known issues that make cabal update slower than it needs to be, even though it’s doing an incremental update. This will be addressed before the official release.

Host your own private repository

It has always been possible to host your own Hackage repository, either for private packages or as a mirror of the public collection, but it has not always been convenient.

There is the “smart” server in the form of the hackage-server, which while relatively easy to build and run, isn’t as simple as just a bunch of files. There has also always been the option of a “dumb” server, in the form of a bunch of files in the right format hosted by an ordinary HTTP server. While the format is quite simple (reusing the standard tar format), there have not been convenient tools for creating or managing these file based repositories.

As part of the security work we have made a simple command line tool to create and manage file based Hackage repositories, including all the necessary security metadata. This tool has been released as hackage-repo-tool on Hackage.

So whether you want a private mirror of the public packages, or a repository for your own private packages, or both, we hope these new tools will make that much more convenient. Currently documentation on how to use these tools is still somewhat lacking; this is something we will address after this beta release. Getting started is not difficult; there are some brief instructions in the reddit discussion, and feel free to talk to us on #hackage on IRC or contact us directly at if you need help.

What’s next?

As mentioned, we would like to invite you to install cabal-secure-beta and start testing it; just use it as you would cabal right now, and report any problems you may find on the hackage-security issue tracker. Additionally, if you would like to host a public mirror for Hackage, please contact us.

This release is primarily intended as an in-the-wild test of the infrastructure; there are still several details to be dealt with before we call this an official release.

The most important of these is proper key management. Much like, say, HTTPS, the chain of trust starts at a set of root keys. We have asked the committee to act as the root of trust and the committee has agreed in principle. The committee members will hold a number of the root keys themselves and the committee may also invite other organisations and individuals within the community to hold root keys. There are some policy details that remain to be reviewed and agreed. For example, we need to decide how many root keys to issue, what threshold number of keys should be required to re-sign the root info, and agree policies for storing the root keys to keep them safe (for instance, mandate an air gap where the root key is never on a machine that is connected to the Internet). We will use the opportunity of ICFP (and the HIW talk) in a couple of weeks' time to present more details and get feedback.

If you would like to help with development, please take a look at the issue list and get in touch!

by edsko, duncan at August 25, 2015 02:49 PM

August 24, 2015

FP Complete

stack and GHC on Windows

I've spent some time over the past few weeks working on problems stack users have run into on Windows, and I'd like to share the outcome. To summarize, here are the major problems I've seen encountered:

  1. When linking a project with a large number of libraries, GHC hits the 32k command length limit of Windows, causing linking to fail with a mysterious "gcc: command not found."
  2. On Windows, paths (at least by default) are limited to 260 characters. This can cause problems quickly when using either stack or cabal sandboxes, which have dist directory structures including GHC versions, Cabal versions, and sometimes a bit more metadata.
  3. Most users do not have a Unicode codepage (e.g., 65001 UTF-8) by default, so some characters cannot be produced by GHC. This affects both error/warning output on stdout/stderr, and dump files (e.g., -ddump-to-file -ddump-hi), which stack uses for detecting unlisted modules and Template Haskell files. Currently, GHC simply crashes when this occurs. This can affect non-Windows systems as well.

The result of this so far has been four GHC patches, and one recommended workaround - hopefully we can do better on that too.

Thanks to all those who have helped me get these patches in place, especially Ben Gamari, Reid Barton, Tamar Christina and Austin Seipp. If you're eager and want to test out the changes already, you can try out my GHC 7.10 branch.

Always produce UTF8-encoded dump files

This patch has already been merged and backported to GHC 7.10. The idea is simple: GHC expects input files to always be UTF-8 encoded, so it should generate UTF-8 encoded dump files too. Upshot: environment variables and codepage settings can no longer affect the format of these dump files, making it more reliable for tooling to parse and use these files.

Transliterate unknown characters

This patch is similarly both merged and backported. Currently, if GHC tries to print a warning that includes non-Latin characters, and the LANG variable/Windows codepage doesn't support it, you end up with a crash about the commitBuffer. This change is pretty simple: take the character encoding used by stdout and stderr, and switch on transliteration, which replaces unknown characters with a question mark (?).
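The same transliteration behaviour is reachable from ordinary Haskell code through GHC's encoding names: appending //TRANSLIT to an encoding name makes the encoder substitute ? for anything the target encoding cannot represent, instead of raising an exception. A small sketch, forcing an ASCII handle so the effect shows regardless of your locale:

```haskell
import System.IO

main :: IO ()
main = do
  -- "//TRANSLIT" tells GHC's encoder to replace unencodable
  -- characters with '?' instead of raising an exception
  enc <- mkTextEncoding "ASCII//TRANSLIT"
  hSetEncoding stdout enc
  putStrLn "na\239ve"   -- the U+00EF comes out as "na?ve"
```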

Respect a GHC_CHARENC environment variable

The motivation here is that, when capturing the output of GHC, tooling like stack (and presumably cabal as well) would like to receive it in a consistent format. GHC currently has no means of setting the character encoding reliably across OSes: Windows uses the codepage, which is a quasi-global setting, whereas non-Windows uses the LANG environment variable. And even changing LANG may not be what we want; for example, setting that to C.UTF-8 would enable smart quotes, which we don't necessarily want to do.

This new variable can be used to force GHC to use a specific character encoding, regardless of other settings. I chose to do this as an environment variable instead of a command line option, so that it would be easier to have this setting trickle through multiple layers of tools (e.g., stack calling the Cabal library calling GHC).

Note: This patch has not yet been merged, and is probably due for some discussion around naming.

Use a response file for command line arguments

Response files allow us to pass compiler and linker arguments via an external file instead of the command line, avoiding the 32k limit on Windows. The response file patch does just this. This patch is still being reviewed, but I'm hopeful that it will make it in for GHC 7.10.3, to help alleviate the pain points a number of Windows users are having. I'd also like to ask people reading this who are affected by this issue to test out the patches I've made; instructions are available on the stack issue tracker.
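To make the idea concrete, here is a sketch of what producing such a file involves. The convention below – one argument per line, backslash-escaping whitespace, quotes, and backslashes – is the one gcc's @file expansion understands; the exact rules used by the GHC patch may differ:

```haskell
-- Build a response file: one argument per line, escaping the
-- characters that gcc's @file expansion treats specially.
escapeArg :: String -> String
escapeArg = concatMap esc
  where
    esc c
      | c `elem` " \t\\\"'" = ['\\', c]
      | otherwise           = [c]

responseFile :: [String] -> String
responseFile = unlines . map escapeArg

main :: IO ()
main = do
  writeFile "args.rsp" (responseFile ["-o", "my app.exe", "Main.o"])
  -- the linker would then be invoked as:  gcc @args.rsp
  putStr =<< readFile "args.rsp"
```

Since the arguments live in a file, the command line itself stays tiny no matter how many libraries are being linked, which sidesteps the 32k limit entirely.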

Workaround: shorter paths

For the issue of long path names, I don't have a patch available yet, nor am I certain that I can make one. Windows in principle supports tacking \\?\ onto the beginning of an absolute path to unlock much larger path limits. However, I can't get this to be respected by GHC yet (I still need to investigate).

A workaround is to move your project directory to the root of the filesystem, and to set your STACK_ROOT environment variable similarly to your root (e.g., set STACK_ROOT=c:\stack_root). This should keep you under the limit for most cases.

August 24, 2015 09:30 PM

August 22, 2015

Joachim Breitner

Quickest path to a local apt repository

As I’m writing this, DebConf 15 is coming to an end. I spend most of my time improving the situation of the Haskell Packages in Debian, by improving the tooling and upgrading our packages to match Stackage 3.0 and build against GHC 7.10. But that is mostly of special interest (see this mail for a partial summary), so I’d like to use this post to advertise a very small and simple package I just uploaded to Debian:

During one of the discussions here I noticed that it is rather tricky to make a locally built package available to apt-get. The latest version of apt in unstable allows one to install a Debian package simply by running apt-get install on it, but in some cases, e.g. when you want a convenient way to list all packages that you have made available for local use, this is insufficient.

So the usual approach is to create a local apt repository with your packages. Which is non-trivial: you can use dpkg-scanpackages, apt-ftparchive or reprepro. You need to create the directories, run the commands, add the repository to your local sources. You need to worry about signing it or setting the right options to make apt-get accept it without signing.

It is precisely this work that my new package local-apt-repository automates for you: Once it is installed, you simply drop the .deb file into /srv/local-apt-repository/ and after the next apt-get update the package can be installed like any other package from the archive.

I chose to use the advanced features that systemd provides – namely activation upon path changes – so it works best with systemd as the init system.

If you want to contribute, or test it before it passes the NEW queue, check out the git repository.

by Joachim Breitner ( at August 22, 2015 01:48 PM

August 21, 2015

Mark Jason Dominus

A message to the aliens, part 6/23 (chemistry)

Earlier articles: Introduction Common features Page 1 (numerals) Page 2 (arithmetic) Page 3 (exponents) Page 4 (algebra) Page 5 (geometry)

This is page 6 of the Cosmic Call message. An explanation follows.

The 10 digits again:











Page 6 discusses fundamental particles of matter, the structure of the hydrogen and helium atoms, and defines glyphs for the most important chemical elements.

Depicted at top left is the hydrogen atom, with a proton in the center and an electron circulating around the outside. This diagram is equated to the glyph for hydrogen.

The diagram for helium is similar but has two electrons, and its nucleus has two protons and also two neutrons.




The illustrations may puzzle the aliens, depending on how they think of atoms. (Feynman once said that this idea of atoms as little solar systems, with the electrons traveling around the nucleus like planets, was a hundred years old and out of date.) But the accompanying mass and charge data should help clear things up. The first formula says

the mass of the proton is 1836 times the mass of the electron, and that 1836, independent of the units used and believed to be a universal and fundamental constant, ought to be a dead giveaway about what is being discussed here.

If you want to communicate fundamental constants, you have a bit of a problem. You can't tell the aliens that the speed of light is furlongs per fortnight without first explaining furlongs and fortnights (as is actually done on a later page). But the proton-electron mass ratio is dimensionless; it's 1836 in every system of units. (Although the value is actually known to be 1836.15267; I don't know why a more accurate value wasn't given.)

This is the first use of subscripts in the document. It also takes care of introducing the symbol for mass. The following formula does the same for charge : .

The next two formulas, accompanying the illustration of the helium atom, describe the mass (1.00138 protons) and charge (zero) of the neutron. I wonder why the authors went for the number 1.00138 here instead of writing the neutron-electron mass ratio of 1838 for consistency with the previous ratio. I also worry that this won't be enough for the aliens to be sure about the meaning of . The 1836 is as clear as anything can be, but the 0 and -1 of the corresponding charge ratios could in principle be a lot of other things. Will the context be enough to make clear what is being discussed? I suppose it has to; charge, unlike mass, comes in discrete units and there is nothing like the 1836.

The second half of the page reiterates the symbols for hydrogen and helium and defines symbols for eight other chemical elements. Some of these appear in organic compounds that will be discussed later; others are important constituents of the Earth. It also introduces the symbol for “union” or “and”: . For example, sodium is described as having 11 protons and 12 neutrons.











Most of these new glyphs are not especially mnemonic, except for hydrogen—and aluminium, which is spectacular.

The blog is going on hiatus until early September. When it returns, the next article will discuss page 7, shown at right. It has three errors. Can you find them? (Click to enlarge.)

by Mark Dominus ( at August 21, 2015 01:33 PM

August 19, 2015

Richard Eisenberg

Planned change to GHC: merging types and kinds

I’m proud to announce that I’m nearing completion on a major patch to GHC, merging types and kinds. This patch has been in development since late 2012 (!), with many interruptions in the meantime. But I really do think it will make it for 7.12, due out early 2016. This post is meant to generate discussion in the community about the proposed changes and to get feedback about any user-facing aspects which might be of interest.


The real motivation for writing this is that it’s a key step toward dependent types, as described in the paper laying out the theory that underlies this patch. But other motivation is close to hand as well. This patch fixes GHC bug #7961, which concerns promotion of GADTs – after this patch is merged, all types can be used as kinds, because kinds are the same as types! This patch also contributes toward the solution of the problems outlined in the wiki page for the concurrent upgrade to Typeable, itself part of the Distributed Haskell plan.


Below are some fun examples that compile with my patch. As usual, this page is a literate Haskell file, and these examples really do compile! (I haven’t yet implemented checking for the proposed extension StarInStar, which this will require in the end.)

> {-# LANGUAGE DataKinds, PolyKinds, GADTs, TypeOperators, TypeFamilies #-}
> {-# OPTIONS_GHC -fwarn-unticked-promoted-constructors #-}
> -- a Proxy type with an explicit kind
> data Proxy k (a :: k) = P
> prox :: Proxy * Bool
> prox = P
> prox2 :: Proxy Bool 'True
> prox2 = P
> -- implicit kinds still work
> data A
> data B :: A -> *
> data C :: B a -> *
> data D :: C b -> *
> data E :: D c -> *
> -- note that E :: forall (a :: A) (b :: B a) (c :: C b). D c -> *
> -- a kind-indexed GADT
> data TypeRep (a :: k) where
>   TInt   :: TypeRep Int
>   TMaybe :: TypeRep Maybe
>   TApp   :: TypeRep a -> TypeRep b -> TypeRep (a b)
> zero :: TypeRep a -> a
> zero TInt            = 0
> zero (TApp TMaybe _) = Nothing
> data Nat = Zero | Succ Nat
> type family a + b where
>   'Zero     + b = b
>   ('Succ a) + b = 'Succ (a + b)
> data Vec :: * -> Nat -> * where
>   Nil  :: Vec a 'Zero
>   (:>) :: a -> Vec a n -> Vec a ('Succ n)
> infixr 5 :>
> -- promoted GADT, and using + as a "kind family":
> type family (x :: Vec a n) ++ (y :: Vec a m) :: Vec a (n + m) where
>   'Nil      ++ y = y
>   (h ':> t) ++ y = h ':> (t ++ y)
> -- datatype that mentions *
> data U = Star *
>        | Bool Bool
> -- kind synonym
> type Monadish = * -> *
> class MonadTrans (t :: Monadish -> Monadish) where
>   lift :: Monad m => m a -> t m a
> data Free :: Monadish where
>   Return :: a -> Free a
>   Bind   :: Free a -> (a -> Free b) -> Free b
> -- yes, * really does have type *.
> type Star = (* :: (* :: (* :: *)))


More details are in the wiki page for this redesign. As stated above, I’d love your feedback on all of this!

by Richard Eisenberg at August 19, 2015 02:51 PM

Mark Jason Dominus

A message to the aliens, part 5/23 (geometry)

Earlier articles: Introduction Common features Page 1 (numerals) Page 2 (arithmetic) Page 3 (exponents) Page 4 (algebra)

This is page 5 of the Cosmic Call message. An explanation follows.

The 10 digits again:











Page 5 discusses two basic notions of geometry. The top half concerns circles and introduces . There is a large circle with its radius labeled :

The outer diameter is then which is .

The perimeter is twice times the radius , and the area is times the square of the radius. What is ? It's of course, as the next line explains, giving , which gives enough digits at the front to make clear what is being communicated. The trailing digits are from around the 51 billionth place and communicate part of the state of our knowledge of . I almost wish the authors had included a sequence of fifteen random digits at this point, just to keep the aliens wondering.

The bottom half of the page is about the Pythagorean theorem. Here there's a rather strange feature. Instead of using the three variables from the previous page, , the authors changed the second one and used instead. This new glyph does not appear anywhere else. A mistake, or did they do it on purpose?

In any case, the Pythagorean formula is repeated twice, once with exponents and once without, as both and . I think they threw this in just in case the exponentiation on the previous pages wasn't sufficiently clear. I don't know why the authors chose to use an isosceles right triangle; why not a 3–4–5 or some other scalene triangle, for maximum generality? (What if the aliens think we think the Pythagorean theorem applies only to isosceles triangles?) But perhaps they were worried about accurately representing any funny angles on their pixel grid. I wanted to see if it would fit, and it does. You have to make the diagram smaller, but I think it's still clear:

(I made it smaller than it needed to be and then didn't want to redo it.)

I hope this section will be sufficiently unmistakable that the aliens will see past the oddities.

The next article will discuss page 6, shown at right. (Click to enlarge.) Try to figure it out before then.

by Mark Dominus ( at August 19, 2015 02:28 PM

August 18, 2015

Functional Jobs

Haskell Engineer at Wagon (Full-time)

We’re a team of functional programmers writing apps and services in Haskell (and Javascript). Yes, it’s true: Haskell is our main backend language. We also use functional programming practices across our stack.

Wagon is a great place to do your best work. We love to teach and learn functional programming; our team is humble, hard working, and fun. We speak at the Bay Area Haskell Meetup, contribute to open source, and have weekly lunches with interesting people from the community.

Work on challenging engineering problems at Wagon. How to integrate Haskell with modern client- and server-side technologies, like Electron and Docker? How to deploy and manage distributed systems built with Haskell? Which pieces of our infrastructure should we open-source?

Learn more about our stack, how we combine Haskell, React, and Electron, and what it’s like working at a Haskell-powered startup.


  • love of functional programming
  • personal project or production experience using Haskell, OCaml, Clojure, or Scala
  • passionate (but practical) about software architecture
  • interested in data processing, scaling, and performance challenges
  • experience with databases (optional)


  • write Haskell for client- and server-side applications
  • integrate Haskell with modern tools like Docker, AWS, and Electron
  • architect Wagon to work with analytic databases like Redshift, BigQuery, Spark, etc
  • build systems and abstractions for streaming data processing and numeric computing
  • work with libraries like Conduit, Warp, and Aeson
  • use testing frameworks like QuickCheck and HSpec
  • develop deployment and monitoring tools for distributed Haskell systems

Get information on how to apply for this position.

August 18, 2015 09:38 PM

Thiago Negri

Dunning-Kruger effect on effort estimates

This post has two parts. The first is an experiment with a poll. The second is the actual content with my thoughts.

The experiment and the poll comes first as I don't want to infect you with my idea before you answer the questions. If you are in the mood of reading a short story and answering a couple of questions, keep reading. In case you are only concerned with my ideas, you may skip the first part.

I won't give any discussion about the subject. I'm just throwing my ideas to the internet, be warned.

Part 1. The experiment

You have to estimate the effort needed to complete a particular task of software development. You may use any tool you'd like to do it, but you will only get as much information as I will tell you now. You will use all the technologies that you already know, so you won't have any learning curve overhead and you will not encounter any technical difficulty when doing the task.

Our customer is bothered by missing his co-workers' birthdates. He wants to know all co-workers that are celebrating a birthday or just celebrated one, so he can send a "happy birthday" message first thing in the morning, when he has just turned on his computer. To avoid sending duplicated messages, he doesn't want to see the same person on the list on multiple days.

Your current software system already has all the workers of the company with their birthdates and relationships, so you can figure out pretty easily who the co-workers of the user are and when everyone's birthdate is.

Now, stop reading further, take your time and estimate the effort of this task by answering the following poll.

[Poll: Estimate your effort]

Okay, now I'll give you more information about it and ask for your estimate again.

Some religions do not celebrate birthdates and some people get really mad when receiving a "happy birthday" message. To avoid this, you also need to check whether the user wants to make their birthdate public.

By the way, the customer's company closes for the weekend, so you need to take into account that on Monday you will need to show birthdates that happened over the weekend, not only those of the current day.

This also applies to holidays. Holidays are a bit harder, as they depend on the city of the employee, and different cities may have different holidays.

Oh, and don't forget to take into account that the user may have missed a day, so he needs to see everyone he would have seen on the day he missed work.

Now, take your time and estimate again.

[Poll: Estimate your effort – II]

Part 2. The Dunning-Kruger effect on estimates

I don't know if the little story above tricked you or not, but that same story tricked me in real life. :)

The Dunning-Kruger effect is stated at Wikipedia as:

"[...] a cognitive bias wherein relatively unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to accurately evaluate their own ability level. Conversely, highly skilled individuals may underestimate their relative competence, erroneously assuming that tasks that are easy for them are also easy for others."

I'm seeing that this effect makes the task of estimating effort inherently inaccurate, as it always pulls toward a bad outcome. If you know little about the task, you will overestimate your knowledge and consequently underestimate the effort. If you know a lot, you will underestimate your knowledge and consequently overestimate the effort.

I guess one way to minimize this problem is to remove knowledge up to the point where you are left with only the essentials needed to complete the task. Sort of what Taleb calls "via negativa" in his book Antifragile.

What do you think? Does this make any sense to you?

by Thiago Negri ( at August 18, 2015 01:23 AM

August 17, 2015

Mark Jason Dominus

A message to the aliens, part 4/23 (algebra)

Earlier articles: Introduction Common features Page 1 (numerals) Page 2 (arithmetic) Page 3 (exponents)

This is page 4 of the Cosmic Call message. An explanation follows.

Reminder: page 1 explained the ten digits:

[glyphs for the digits 0–9]
And the equal sign . Page 2 explained the four basic arithmetic operations and some associated notions:

[glyphs for the arithmetic operations and associated notions, including the ellipsis (…)]
This page, headed with the glyph for “mathematics” , describes the solution of simple algebraic equations and defines glyphs for three variables, which we may as well call and :

[glyphs for the three variables]
Each equation is introduced by the locution which means “solve for ”. This somewhat peculiar “solve” glyph will not appear again until page 23.

For example the second equation is :

Solve for :

The solution, 6, is given over on the right:

After the fourth line, the equations to be solved change from simple numerical equations in one variable to more abstract algebraic relations between three variables. For example, if

Solve for :



The next-to-last line uses a decimal fraction in the exponent, : . On the previous page, the rational fraction was used. Had the same style been followed, it would have looked like this: .

Finally, the last line defines and then, instead of an algebraic solution, gives a graph of the resulting relation, with axes labeled. The scale on the axes is not the same; the -coordinate increases from 0 to 20 pixels, but the -coordinate increases from 0 to 8000 pixels because . If the axes were drawn to the same scale, the curve would go up by 8,000 pixels. Notice that the curve does not peek above the -axis until around or so. The authors could have stated that this was the graph of , but chose not to.

I also wonder what the aliens will make of the arrows on the axes. I think the authors want to show that our coordinates increase going up and to the left, but this seems like a strange and opaque way to do that. A better choice would have been to use a function with an asymmetric graph, such as .

(After I wrote that I learned that similar concerns were voiced about the use of a directional arrow in the Pioneer plaque.)

(Wikipedia says: “An article in Scientific American criticized the use of an arrow because arrows are an artifact of hunter-gatherer societies like those on Earth; finders with a different cultural heritage may find the arrow symbol meaningless.”)

The next article will discuss page 5, shown at right. (Click to enlarge.) Try to figure it out before then.

by Mark Dominus ( at August 17, 2015 01:43 PM

Brandon Simmons

Announcing: Hashabler 1.0. Now even more hashy with SipHash

I’ve just released version 1.0 of a haskell library for principled, cross-platform & extensible hashing of types. It is available on hackage, and can be installed with:

cabal install hashabler

(see my initial announcement post which has some motivation and pretty pictures)

You can see the CHANGELOG, but the main change is an implementation of SipHash. It's about as fast as our implementation of FNV-1a for bytestrings of length fifty, and slightly faster when you get to length 1000 or so, so you should use it unless you want a hash with a simple implementation.
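
As background, FNV-1a itself is a tiny algorithm, which is why "simple implementation" is a genuine selling point. Here is a minimal sketch of the 32-bit variant over ASCII strings (the textbook algorithm, not hashabler's actual code):

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word32)

-- 32-bit FNV-1a: xor in each byte, then multiply by the FNV prime.
-- Word32 arithmetic wraps modulo 2^32, as the algorithm requires.
fnv1a32 :: String -> Word32
fnv1a32 = foldl step 2166136261                               -- FNV offset basis
  where
    step h ch = (h `xor` fromIntegral (ord ch)) * 16777619    -- FNV prime
```

For example, `fnv1a32 ""` is the offset basis 2166136261, and `fnv1a32 "a"` gives the standard test vector 0xE40C292C.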

If you’re implementing a new hashing algorithm or hash-based data structure, please consider using hashabler instead of hashable.

August 17, 2015 02:15 AM

August 16, 2015

Russell O'Connor

Bell’s Casino Problem

A new casino has opened up in town named “Bell’s Casino”. They are offering a coin game. The game works as follows.

The house will commit two coins on the table, oriented heads or tails each, and keep them covered. The player calls what the faces of each of the coins are, either HH, HT, TH, or TT. The casino reveals the coins and if the player is correct, they win $1, and otherwise they lose $1.

Problem 1.
Prove that there is no strategy that can beat the casino.
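
As a sanity check of Problem 1 (my own sketch, restricted to deterministic play): whichever call the player fixes, the house can have committed a losing placement, and even against a uniformly random house the expected payoff is −1/2.

```haskell
-- Each placement / call is a pair of Bools (True = heads).
outcomes :: [(Bool, Bool)]
outcomes = [ (a, b) | a <- [False, True], b <- [False, True] ]

-- Payoff of a fixed call against a fixed placement.
payoff :: (Bool, Bool) -> (Bool, Bool) -> Int
payoff call placement = if call == placement then 1 else -1

-- Against a house that committed the worst placement for the call,
-- every call loses outright.
worstCase :: Int
worstCase = maximum [ minimum [ payoff c p | p <- outcomes ] | c <- outcomes ]

-- Even against a uniformly random house, the best expectation is
-- (1 - 3) / 4 = -1/2 dollars per game.
bestExpected :: Rational
bestExpected = maximum [ sum [ fromIntegral (payoff c p) / 4 | p <- outcomes ] | c <- outcomes ]
```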

After opening, the customers stop coming by to play this boring game, so to boost attendance the casino modifies the game as follows.

The house will commit two coins on the table, oriented heads or tails each, and keep them covered. The player calls what the faces of each of the two coins are, either HH, HT, TH, or TT. The casino reveals one coin, of the player's choice. After seeing the revealed coin, the player can elect to back out of the game and neither win nor lose, or keep going and see the second coin. If the player's call is correct, they win $1, and otherwise they lose $1.

Problem 2.
Prove that there is no strategy that can beat the casino.

Even with the new, more fair, game, attendance at the casino starts dropping off again. The casino decides to offer a couples game.

The house will commit two coins on two tables, oriented heads or tails each, and keep them covered. The couple, together, calls what the faces of each of the two coins are, either HH, HT, TH, or TT. Then each player in the couple gets to see one coin. Collectively they get to decide whether they are going to back out of the game or not by the following method. After seeing their revealed coin, each player will raise either a black flag or a red flag. If the players raise different-coloured flags, the game ends and no one wins or loses. If both players raise the same colour flag, the game keeps going. If the couple's original call was right, they win $1, and otherwise, they lose $1. To ensure that the couple cannot cheat, the two tables are placed far enough apart that each player's decision on which flag to raise is space-like separated. Specifically, the tables are placed 179 875 475 km apart and each player has 1 minute to decide which flag to raise, otherwise a black flag will be raised on their behalf (or, more realistically, the tables are placed 400 m apart and each player has 100 nanoseconds to decide which flag to raise).
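
The distances are chosen so that light cannot carry one player's choice to the other table before both flags are up; a quick check of the arithmetic (my own, not part of the original puzzle):

```haskell
-- speed of light in m/s
c :: Double
c = 299792458

-- light travel time between the far tables: ~600 s (10 light-minutes),
-- comfortably more than the 1-minute decision window
farTables :: Double
farTables = 179875475 * 1000 / c

-- light travel time between the 400 m tables: ~1334 ns, more than 100 ns
nearTables :: Double
nearTables = 400 / c
```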

Problem 3.
Prove that there is no strategy for the couple that can beat the casino.
Problem 4.
Devise a physical procedure that a couple can follow to beat the casino on average at this last game without cheating.

The casino cannot figure out how they keep losing money on this game and, soon, Bell’s Casino goes bankrupt.

August 16, 2015 06:56 PM

Dimitri Sabadie

Never forget your git stashes again!

It’s been a while I’m experiencing issues with git stash. If you don’t know that command yet, git stash is used to move all the changes living in your staging area into a special place: the stash.

The stash is a temporary area working like a stack. You can push changes onto it via git stash or git stash save; you can pop changes from top with git stash pop. You can also apply a very specific part of the stack with git stash apply <stash id>. Finally you can get the list of all the stashes with git stash list.

We often use the git stash command to stash changes in order to make the working directory clear again so that we can apply a patch, pull some changes, change branch, and so on. For those purposes, the stash is pretty great.

However, I often forget about my stashes – I know I’m not the only one. Sometimes, I stash something and go to cook something or just go out, and when I’m back again, I might have forgotten about what I had stashed, especially if it was a very small change.

My current prompt for my shell, zsh, is in two parts. I set the PS1 environment variable to set the regular prompt, and the RPROMPT environment variable to set a reversed prompt, starting from the right of the terminal. My reversed prompt just performs a git command to check whether we're actually in a git project, and gets the current branch. Simple, but nice.

I came to the realization that I could use the exact same idea to know whether I have stashed changes so that I never forget them! Here's a screenshot to explain that:

As you can see, my prompt now shows me how many stashed changes there are around!

The code

I share the code I wrote with you. Feel free to use it, modify it and share it as well!

# …

function gitPrompt() {
  # git current branch
  currentBranch=`git rev-parse --abbrev-ref HEAD 2> /dev/null`
  if (($? == 0)); then
    echo -n "%F{green}$currentBranch%f"
  fi

  # git stash
  stashNb=`git stash list 2> /dev/null | wc -l`
  if [ "$stashNb" != "0" ]; then
    echo -n " %F{blue}($stashNb)%f"
  fi

  echo ''
}

PS1="%F{red}%n%F{cyan}@%F{magenta}%M %F{cyan}%~ %F{yellow}%% %f"

# …

Have fun!

by Dimitri Sabadie ( at August 16, 2015 06:10 PM

Mark Jason Dominus

Math.SE report 2015-07

My overall SE posting volume was down this month, and not only did I post relatively few interesting items, I've already written a whole article about the most interesting one. So this will be a short report.

  • I already wrote up Building a box from smaller boxes on the blog here. But maybe I have a couple of extra remarks. First, the other guy's proposed solution is awful. It's long and complicated, which would be forgivable if it had answered the question, but it doesn't. And the key point is “blah blah blah therefore code a solver which visits all configurations of the search space”. Well heck, if this post had just been one sentence that ended with “code a solver which visits all configurations of the search space” I would not have any complaints about that.

    As an undergraduate I once gave a talk on this topic. One of my examples was the problem of packing 31 dominoes into a chessboard from which two squares have been deleted. There is a simple combinatorial argument why this is impossible if the two deleted squares are the same color, say if they are opposite corners: each domino must cover one square of each color. But if you don't take time to think about the combinatorial argument you could waste a lot of time on computer search learning that there is no solution in that case, and completely miss the deeper understanding that it brings you. So this has been on my mind for a long time.

  • I wrote a few posts this month where I thought I gave good hints. In How to scale an unit vector in such way that where is a scalar I think I did a good job identifying the original author's confusion; he was conflating his original unit vector and the scaled one, leading him to write . This is sure to lead to confusion. So I led him to the point of writing and let him take it from there. The other proposed solution is much more rote and mechanical. (“Divide this by that…”)

    In Find numbers so that the OP got stuck partway through and I specifically addressed the stuckness; other people solved the problem from the beginning. I think that's the way to go, if the original proposal was never going to work, especially if you stop and say why it was never going to work, but this time OP's original suggestion was perfectly good and she just didn't know how to get to the next step. By the way, the notation here means the number .

    In Help finding the limit of this series it would have been really easy to say “use the formula” or to analyze the series de novo, but I think I almost hit the nail on the head here: it's just like , which I bet OP already knows, except a little different. But I pointed out the wrong difference: I observed that the first sequence is one-fourth the second one (which it is) but it would have been simpler to observe that it's just the second one without the . I had to review it just now to give the simpler explanation, but I sure wish I'd thought of it at the time. Nobody else pointed it out either. Best of all would have been to mention both methods. If you can notice both of them you can solve the problem without the advance knowledge of the value of , because you have and then solve for .

    In Visualization of Rhombus made of Radii and Chords it seemed that OP just needed to see a diagram (“I really really don't see how two circles can form a rhombus?”), so I drew one.
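
Footnote to the dominoes example in the first item: the colouring argument can be checked mechanically. Deleting two same-coloured corner squares leaves 30 squares of one colour and 32 of the other, while 31 dominoes would need 31 of each (my own sketch):

```haskell
-- Colour a chessboard square by coordinate parity.
colour :: (Int, Int) -> Bool
colour (r, c) = even (r + c)

-- The 8x8 board with two opposite corners (same colour) deleted.
remaining :: [(Int, Int)]
remaining = [ (r, c) | r <- [0..7], c <- [0..7]
                     , (r, c) `notElem` [(0,0), (7,7)] ]

-- 31 dominoes would each cover one square of each colour,
-- so a tiling would need 31 of each; we have 30 and 32.
colourCounts :: (Int, Int)
colourCounts = ( length (filter colour remaining)
               , length (filter (not . colour) remaining) )
```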

by Mark Dominus ( at August 16, 2015 04:38 PM

Ken T Takusagawa

[clomduww] Foldable with metadata

The Foldable instances of Array and Map in Haskell do not provide access to the index or key respectively. It is possible to provide such access, but doing so requires defining Foldable differently, making it a multiparameter type class and explicitly specifying an intermediate type that packages up the element and metadata, e.g., index or key.

GHC 7.10.1, array-, base-, containers-

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, ScopedTypeVariables #-}
module FoldableWithKey where {
import Data.Array.IArray;
import qualified Data.Map as Map;

-- similar to Foldable, except the intermediate type can be different from the element type.
class FoldableWithKey collection intermediate where {
foldWithKey :: (intermediate -> b -> b) -> b -> collection -> b;
};

-- unclear why OVERLAPPABLE is needed here, as Map is clearly not an IArray
instance {-# OVERLAPPABLE #-} (IArray a e, Ix i) => FoldableWithKey (a i e) (i,e) where {
foldWithKey f z = foldr f z . assocs;
};

instance FoldableWithKey (Map.Map k a) (k,a) where {
foldWithKey f = Map.foldWithKey $ \xk xa xb -> f (xk,xa) xb;
};

-- Overlapping Instance
-- Allows foldWithKey to be a drop-in replacement for foldr.
instance {-# OVERLAPPABLE #-} (Foldable t) => FoldableWithKey (t a) a where {
foldWithKey = foldr;
};

test1 :: [Int] -> Int;
test1 = foldWithKey (+) 0;

test2 :: Map.Map String Int -> Int;
test2 = foldWithKey (+) 0;

test3 :: Map.Map String Int -> (String,Int);
test3 = foldWithKey (\(s,i) (sold,iold) -> (s ++ sold, i + iold)) ("",0);

test4 :: Map.Map String Int -> Int;
-- explicit type signature weirdly needed on s
test4 = foldWithKey (\(s :: String, i) iold -> length s + i + iold) 0;

test5 :: Array Int Double -> Double;
-- explicit type signature weirdly needed on i
test5 = foldWithKey (\(i :: Int, d) dold -> d + dold + fromIntegral i) 0;
}

by Ken ( at August 16, 2015 05:29 AM

August 15, 2015

Neil Mitchell

Testing is never enough

Summary: Testing shows the presence, not the absence of bugs.

Recently, someone suggested to me that, thanks to test suites, things like changing compiler version or versions of library dependencies were "no big deal". If dependency changes still result in a passing test suite, then they have caused no harm. I disagree, and fortunately for me, Dijkstra explains it far more eloquently than I ever could:

Testing shows the presence, not the absence of bugs. Dijkstra (1969)

While a test suite can give you confidence in changes you make, it does not provide guarantees. Below are just a few reasons why.

The test suite does not cover all the code

For any reasonably sized code base (> 100 lines), covering all the lines of code is difficult. There are a number of factors that mean a test suite is unlikely to provide 100% coverage:

  • Producing tests is a resource intensive activity, and most projects do not have the necessary manpower to test everything.
  • Sometimes there is no good way to test simple sugar functions - the definition is a specification of what the function should do.
  • Testing corner cases is difficult. As the corners get more obscure, the difficulty increases.
  • Testing error conditions is even harder. Some error conditions have code to deal with them, but are believed to be unreachable.

The test suite does not cover all the ways through the code

Assuming the test suite really does cover every line of the code, making it cover every path through the code is almost certainly computationally infeasible. Consider a program taking a handful of boolean options. While it might be feasible to test each individual option in the true and false states, testing every state in conjunction with every other state requires an exponential amount of time. For programs with loops, testing every number of loop iterations is likely to be highly time consuming.

There is plenty of code you can't see

Even if you cover every line of source code, the compiler may still thwart your valiant efforts. Optimising compilers like to inline code (make copies of it) and specialise code (freeze in some details that would otherwise be dynamic). After such transformations, the compiler might spot undefined behaviour (something almost all C/C++ programs contain) and make modifications that break your code. You might have tested all the source code, but you have not tested all the code generated by the compiler. If you are writing in C/C++, and undefined behaviour and optimisation doesn't scare you a lot, you should read this LLVM article series.

Functions have huge inputs

Testing functions typically involves supplying their input and inspecting their output. Usually the input space is too large to enumerate - which is likely to be the case even if your function takes in an integer. As soon as your function takes a string or array, enumeration is definitely infeasible. Often you can pick cases at which the code is likely to go wrong (0, 1, -1, maxBound) - but maybe it only fails for Carmichael numbers. Random testing can help, and is always advisable, but the effort to deploy random testing is typically quite a bit higher than input/output samples, and it is no panacea.

Functions are not functions

Testing functions usually assumes they really are functions, which depend only on their input. In pure functional languages that is mostly true, but in C/C++ it is less common. For example, functions that have an internal cache might behave differently under parallelism, especially if their cache is not managed properly. Functions may rely on global variables, so they might perform correctly until some seemingly unrelated operation is performed. Even Haskell programs are likely to depend on global state such as the FPU flags, which may be changed unexpectedly by other code.

In my experience, the non-functional nature of functions is one of the biggest practical difficulties, and is also a common place where dependency changes cause frustration. Buggy code can work successfully for years until an improved memory allocator allows a race condition to be hit.

Performance testing is hard

Even if your code gives the correct results, it may take too long or use too much memory. Alas, testing for resource usage is difficult. Resource numbers, especially runtime, are often highly variable between runs - more so if tests are run on shared hardware or make use of parallelism. Every dependency change is likely to have some impact on resource usage, perhaps as dependencies themselves chose to trade time for memory. Spotting erroneous variations often requires a human to make a judgement call.

What is the solution?

Tests help, and are valuable, and you should aim to test as much as you can. But for any reasonably sized program, your tests will never be complete, and the program will always contain unknown bugs. Most likely someone using your code will stumble across one of these bugs. In this case, it's often possible (and indeed, highly desirable) to add a new test case specifically designed to spot this error. Bugs have a habit of recurring, and a bug that happens twice is just embarrassing.

Thinking back to dependency versions, there is often strength in numbers. If all your users are on the same version of all the dependencies, then any bug that is very common is likely to be found by at least one user and fixed for all.

Thinking more generally, it is clear that many of these issues are somewhat ameliorated by pure functional programming. I consider testability and robustness to be one of the great strengths of Haskell.

by Neil Mitchell ( at August 15, 2015 09:40 PM

August 14, 2015

Mark Jason Dominus

A message to the aliens, part 3/23 (exponentiation)

Earlier articles: Introduction Common features Page 1 (numerals) Page 2 (arithmetic)

This is page 3 of the Cosmic Call message. An explanation follows.

Reminder: page 1 explained the ten digits:

[glyphs for the digits 0–9]
And the equal sign . Page 2 explained the four basic arithmetic operations and some associated notions:

[glyphs for the arithmetic operations and associated notions, including the ellipsis (…)]
This page, headed with the glyph for “mathematics” , explains notations for exponentiation and scientific notation. (This notation was first used on page 1 in the Mersenne prime .)

Exponentiation could be represented by an operator, but instead the authors have chosen to represent it by a superscripted position on the page, as is done in conventional mathematical notation. This saves space.

The top section of the page has small examples of exponentiation, including for example :

There is a section that follows with powers of 10: and more interestingly :

This is a lead-in to the next section, which expresses various quantities in scientific notation, which will recur frequently later on. For example, can be written as :

Finally, there is an offhand remark about the approximate value of the square root of 2:

The next article will discuss page 4, shown at right. (Click to enlarge.) Try to figure it out before then.

by Mark Dominus ( at August 14, 2015 05:49 PM


Parametricity Tutorial (Part 2): Type constructors and type classes

This is part 2 of a two-part series on parametricity.

In part 1 we covered the basics: constant types, functions and polymorphism (over types of kind *). In this post we will deal with more advanced material: type constructors, type classes, polymorphism over type constructors and type constructor classes.

Type constructors (types of kind * -> *)

Before considering the general case, let’s think about lists. Given a :: A ⇔ A', two lists xs :: [A] and ys :: [A'] are related iff their elements are related by a; that is,

[] ℛ([a]) []


     (x:xs') ℛ([a]) (y:ys')
iff  x ℛ(a) y  and  xs' ℛ([a]) ys'

For the special case that a is a function a⃯ :: A -> A', this amounts to saying that map a⃯ xs ≡ ys.
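
A concrete check of this functional case (my own example, with show playing the role of the function a⃯):

```haskell
-- xs :: [Int] and ys :: [String] are related by the lifted function show
xs :: [Int]
xs = [1, 2, 3]

ys :: [String]
ys = ["1", "2", "3"]

-- the functional case of ℛ([a]): map a⃯ xs ≡ ys
relatedByShow :: Bool
relatedByShow = map show xs == ys
```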

You can imagine a similar relation F a exists for any type constructor F. However, we will not give a general treatment of algebraic data types in this blog post. Doing this would require giving instances for products and sums (which is fine), but also for (least) fixed points, and that would take us much too far afield.

Thankfully, we will not need to be quite so precise. Instead, we will only require the following characterization:

Characterization: Functors.

Let F be a functor. Then for all relations a :: A ⇔ A', b :: B ⇔ B' and functions f :: A -> B and g :: A' -> B', such that f ℛ(a -> b) g:

forall xs :: F A, xs' :: F A'.
  if    xs ℛ(F a) xs'
  then  F f xs ℛ(F b) F g xs'
where we overload F to also mean the “map” function associated with F. (Provided that the Functor type class instance for F is correct, F f should be the same as fmap f.)

(If we had the precise rules for algebraic data types we would be able to prove this characterization for any specific functor F.)

Intuitively, think about xs and xs' as two containers of the same shape with elements related by a, and suppose we have a pair of functions f and g which map a-related arguments to b-related results. Then the characterization states that if we apply function f to the elements of xs and g to the elements of xs', we must end up with two containers of the same shape with elements related by b.

For the special case that a and b are functions (and F is a functor), the mapping relations characterization simply says that

    if xs ℛ(F a⃯) xs' then F f xs ℛ(F b⃯) F g xs'
-- simplify
    if F a⃯ xs ≡ xs' then F f xs ℛ(F b⃯) F g xs'
-- simplify
    F b⃯ (F f xs) ≡ F g (F a⃯ xs)
-- functoriality
    F (b⃯ . f) xs ≡ F (g . a⃯) xs

which follows immediately from the premise that b⃯ . f ≡ g . a⃯ (which in turn is a consequence of f ℛ(a⃯ -> b⃯) g), so the mapping relations characterization is trivially satisfied (provided that the mapping of relations corresponds to the functor map in the case for functions).

Technical note. When we use parametricity results, we often say something like: “specializing this result to functions rather than relations…”. It is important to realize however that if F is not a functor, then F a may not be a functional relation even if a is.

For example, let a⃯ :: A -> A', and take F(a) = a -> a. Then

     f ℛ(F a⃯) g
-- expand definition
iff  f ℛ(a⃯ -> a⃯) g
-- rule for functions
iff  forall x :: A, x' :: A'.
       if x ℛ(a⃯) x' then f x ℛ(a⃯) g x'
-- simplify (a⃯ is a function)
iff  forall x :: A.
       a⃯ (f x) ≡ g (a⃯ x)

Taking a :: Int -> Int ; a x = 0, this would relate two functions f, g :: Int -> Int whenever 0 ≡ g 0; it is clear that this is not a functional relation between f and g.

Given a function a⃯ :: A -> A', F a⃯ is a function F A -> F A' when F is a functor, or a function F A' -> F A if F is a contravariant functor. We will not consider contravariant functors further in this blog post, but there is an analogous Contravariant Functor Characterization that we can use for proofs involving contravariant functors.

 Example: ∀ab. (a -> b) -> [a] -> [b]

This is the type of Haskell’s map function for lists of course; the type of map doesn’t fully specify what it should do, but the elements of the result list can only be obtained from applying the function to elements of the input list. Parametricity tells us that

     f ℛ(∀ab. (a -> b) -> [a] -> [b]) f
-- apply rule for polymorphism, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       f@A,B ℛ((a -> b) -> [a] -> [b]) f@A',B'
-- apply rule for functions, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
     forall g :: A -> B, g' :: A' -> B', xs :: [A], xs' :: [A'].
       if g ℛ(a -> b) g', xs ℛ([a]) xs' then f g xs ℛ([b]) f g' xs'

Specializing to functions a⃯ :: A -> A' and b⃯ :: B -> B', we get

     forall A, A', B, B', a⃯ :: A -> A', b⃯ :: B -> B'.
     forall g :: A -> B, g' :: A' -> B', xs :: [A], xs' :: [A'].
       if g ℛ(a⃯ -> b⃯) g', xs ℛ([a⃯]) xs' then f g xs ℛ([b⃯]) f g' xs'
-- simplify
iff  forall A, A', B, B', a⃯ :: A -> A', b⃯ :: B -> B'.
     forall g :: A -> B, g' :: A' -> B'.
       if b⃯ . g ≡ g' . a⃯ then map b⃯ . f g ≡ f g' . map a⃯
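
A concrete instance of the specialized free theorem, taking f to be map itself (my own check, with hypothetical names aFn, bFn, g'):

```haskell
gFn :: Int -> Int
gFn = (+ 1)

aFn, bFn :: Int -> String
aFn = show
bFn = show

-- chosen so that the premise bFn . gFn ≡ g' . aFn holds
g' :: String -> String
g' s = show (read s + 1 :: Int)

-- the free theorem then promises: map bFn . map gFn ≡ map g' . map aFn
lhs, rhs :: [String]
lhs = (map bFn . map gFn) [1, 2, 3]
rhs = (map g' . map aFn) [1, 2, 3]
```

Both sides evaluate to ["2","3","4"].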

As an aside, Functor instances should satisfy two laws:

  • map id ≡ id
  • map f . map g ≡ map (f . g)

It turns out that the second property follows from the first by parametricity; see The free theorem for fmap.

Example: ∀a. F a -> G a

Consider a function f :: ∀a. F a -> G a, polymorphic in a but between fixed (constant) type constructors F and G; for example, a function of type ∀a. Maybe a -> [a] fits this pattern. What can we tell about f?

     f ℛ(∀a. F a -> G a) f
iff  forall A, A', a :: A ⇔ A'.
       f@A ℛ(F a -> G a) f@A'
iff  forall A, A', a :: A ⇔ A', x :: F A, x' :: F A'.
       if x ℛ(F a) x' then f x ℛ(G a) f x'

For the special case where we pick a function a⃯ :: A -> A' for a, this is equivalent to

forall A, A', a⃯ :: A -> A'.
  G a⃯ . f == f . F a⃯

For the categorically inclined, this means that polymorphic functions must be natural transformations.
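
For instance, maybeToList :: ∀a. Maybe a -> [a] has exactly this shape, and its naturality can be checked on an example (my own):

```haskell
import Data.Maybe (maybeToList)

-- naturality: map h . maybeToList ≡ maybeToList . fmap h, for any h
lhs, rhs :: [Int]
lhs = map (+ 1) (maybeToList (Just 41))
rhs = maybeToList (fmap (+ 1) (Just 41))
```

Both sides evaluate to [42].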

Type classes

Now that we’ve covered the basics, it’s time to consider some more advanced language features. We will first consider qualified types, such as ∀a. Eq a => a -> a -> a.

The rule for a qualified type is

     f ℛ(∀a. C a => t) f'
iff  forall A, A', a :: A ⇔ A'
     such that A, A' instances of C and a respects C.
       f@A ℛ(t) f'@A'

What does it mean for a relation a :: A ⇔ A' to respect a type class C? Every type class introduces a new constraint on relations defined by the members of the type class. Let’s consider an example; Haskell’s equality type class is defined by

class Eq a where
  (==) :: a -> a -> Bool

(Let’s ignore (/=) for simplicity’s sake.). Then a relation a respects Eq, written Eq(a), iff all class members are related to themselves. For the specific case of Eq this means that

     (==) ℛ(a -> a -> Bool) (==)
-- rule for functions, twice
iff  forall x :: A, x' :: A', y :: A, y' :: A'.
       if x ℛ(a) x', y ℛ(a) y' then x == y ℛ(Bool) x' == y'
-- Bool is a constant type, simplify
iff  forall x :: A, x' :: A', y :: A, y' :: A'.
       if x ℛ(a) x', y ℛ(a) y' then x == y ≡ x' == y'

For the special case where we pick a function a⃯ :: A -> A', the function respects Eq iff

forall x :: A, y :: A.
  x == y ≡ a⃯ x == a⃯ y

I.e., the function maps (==)-equal arguments to (==)-equal results.

Syntactic convention.

In the following we will write

forall A, A', a :: A ⇔ A'
such that A, A' instances of C and a respects C.

more concisely as

forall C(A), C(A'), C(a) :: A ⇔ A'.

Example: ∀a. Eq a => a -> a -> a

We already considered the free theorem for functions f :: ∀ a. a -> a -> a:

g (f x y) = f (g x) (g y)

Is this free theorem still valid for ∀a. Eq a => a -> a -> a? No, it’s not. Consider giving this (admittedly somewhat dubious) definition of natural numbers which considers all “invalid” natural numbers to be equal:

newtype Nat = Nat Int
  deriving (Show)

instance Eq Nat where
  Nat n == Nat n' | n < 0, n' < 0 = True
                  | otherwise     = n == n'

If we define

f :: forall a. Eq a => a -> a -> a
f x y = if x == y then y else x

g :: Nat -> Nat
g (Nat n) = Nat (n + 1)

then for x ≡ Nat (-1) and y ≡ Nat (-2) we have that g (f x y) ≡ Nat (-1) but f (g x) (g y) ≡ Nat 0. Dubious or not, free theorems don’t assume anything about the particular implementation of type classes. The free theorem for ∀a. Eq a => a -> a -> a however only applies to functions g which respect Eq; and this definition of g does not.

 Example: ∀ab. (Show a, Show b) => a -> b -> String

We promised to look at this type when we considered higher rank types above. If you go through the process, you will find that the free theorem for functions f of this type is

f x y = f (g x) (h y)

for any Show-respecting functions g and h. What does it mean for a function to respect Show? Intuitively it means that the function can change the value of its argument but not its string representation:

show (g x) = show x

Type constructor classes

Type constructor classes are classes over types of kind * -> *; a typical example is

class Functor f where
  fmap :: ∀ab. (a -> b) -> f a -> f b

The final rule we will discuss is the rule for universal quantification over a qualified type constructor (universal quantification over a type constructor without a qualifier is rarely useful, so we don’t discuss it separately):

     g ℛ(∀f. C f => t) g'
iff  forall C(F), C(F'), C(f) :: F ⇔ F'.
       g@F ℛ(t) g'@F'

If F and F' are type constructors rather than types (functions on types), f :: F ⇔ F' is a relational action rather than a relation: that is, it is a function on relations. As before, C(f) means that this function must respect the type class C, in much the same way as for type classes. Let’s consider what this means for the example of Functor:

     fmap ℛ(∀ab. (a -> b) -> f a -> f b) fmap
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       fmap@A,B ℛ((a -> b) -> f a -> f b) fmap@A',B'
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
     forall g :: A -> B, g' :: A' -> B', x :: F A, x' :: F' A'.
       if    g ℛ(a -> b) g', x ℛ(f a) x'
       then  fmap g x ℛ(f b) fmap g' x'

Example: ∀f. Functor f => f Int -> f Int

Intuitively, a function g :: ∀f. Functor f => f Int -> f Int can take advantage of the Int type, but only by applying fmap; for example, when we apply g to a list, the order of the list should not matter. Let’s derive the free theorem for functions of this type:

     g ℛ(∀f. Functor f => f Int -> f Int) g
iff  forall Functor(F), Functor(F'), Functor(f) :: F ⇔ F'.
       g@F ℛ(f Int -> f Int) g@F'
iff  forall Functor(F), Functor(F'), Functor(f) :: F ⇔ F'.
     forall x :: F Int, x' :: F' Int.
       if x ℛ(f Int) x' then g x ℛ(f Int) g x'

As before, we can specialize this to higher order functions, which are special cases of relational actions. Let’s use the notation f⃯ :: F -> F' (with F and F' type constructors) to mean f⃯ :: ∀ab. (a -> b) -> (F a -> F' b). Then we can specialize the free theorem to

iff  forall Functor(F), Functor(F'), Functor(f⃯) :: F -> F'.
     forall x :: F Int, x' :: F' Int.
       if x ℛ(f⃯ Int) x' then g x ℛ(f⃯ Int) g x'
-- `f⃯` is a function; recall that `Int` as a relation is the identity:
iff  forall Functor(F), Functor(F'), Functor(f⃯) :: F -> F'.
       f⃯ id . g ≡ g . f⃯ id

for any Functor-respecting f⃯.

Example continued: further specializing the free theorem

The free theorem we saw in the previous section has a very useful special case, which we will derive now. Recall that in order to prove that a higher order function f⃯ respects Functor we have to prove that

if g ℛ(a -> b) g', x ℛ(f⃯ a) x' then fmap g x ℛ(f⃯ b) fmap g' x'

As in the higher rank example, this is a proof obligation (as opposed to the application of a free theorem), so that we really have to consider relations a :: A ⇔ A' and b :: B ⇔ B' here; it’s not sufficient to consider functions only.

We can however derive a special case of the free theorem which is easier to use. Take some arbitrary polymorphic function k :: ∀a. F a -> F' a, and define the relational action f :: F ⇔ F' by

f(a) = k ⚬ F(a)

where we use k also as a relation. Then

     x ℛ(f a) x'
iff  ∃i. x ℛ(k) i and i ℛ(F(a)) x'
-- k is a function
iff  k x ℛ(F(a)) x'
-- by the Functor Characterization
iff  F g (k x) ℛ(F b) F g' x'
-- naturality
iff  k (F g x) ℛ(F b) F g' x'
-- use k as a relation again
iff  F g x ℛ(k) k (F g x) ℛ(F b) F g' x'
-- pick k (F g x) as the intermediate
then F g x ℛ(f b) F g' x'
-- if we assume that fmap is the "real" functor instance
iff  fmap g x ℛ(f b) fmap g' x'

In the previous section we derived that the free theorem for g :: ∀f. Functor f => f Int -> f Int was

forall Functor(F), Functor(F'), Functor(f⃯) :: F -> F'.
  f⃯ id . g ≡ g . f⃯ id

for any higher order function which respects Functor. The f we defined above is a higher order function provided that a is a function, and we just proved that it respects Functor. The identity relation is certainly a function, so we can specialize the free theorem to

k . g ≡ g . k

for any polymorphic function k (no restrictions on k). As a special case, this means that we must have

reverse . g ≡ g . reverse

formalizing the earlier intuition that when we apply such a function to a list, the order of the list cannot matter.
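We can check this specialization concretely for one particular choice of g (a sanity check, not a proof; the names below are ours):

```haskell
-- A particular function of type forall f. Functor f => f Int -> f Int
g :: Functor f => f Int -> f Int
g = fmap (* 2)

-- The free theorem specializes to reverse . g ≡ g . reverse at f ~ []
lhs, rhs :: [Int]
lhs = (reverse . g) [1, 2, 3]
rhs = (g . reverse) [1, 2, 3]
```

Both sides evaluate to [6,4,2]: since g can only use fmap, it cannot observe or disturb the order of the list.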

Example: ∀f. Functor f => (B -> f B) -> f A

As our last example, we will consider higher-order functions of type g :: ∀f. Functor f => (B -> f B) -> f A. The free theorem for such functions is

     g ℛ(∀f. Functor f => (B -> f B) -> f A) g
iff  forall Functor(F), Functor(F'), Functor(f) :: F ⇔ F'.
       g@F ℛ((B -> f B) -> f A) g@F'
iff  forall Functor(F), Functor(F'), Functor(f) :: F ⇔ F'.
     forall l :: B -> F B, l' :: B -> F' B.
       if l ℛ(B -> f B) l' then g l ℛ(f A) g l'

Specializing to higher order functions f⃯ :: ∀ab. (a -> b) -> F a -> F' b (rather than a relational action f), we get

     forall Functor(F), Functor(F'), Functor(f⃯) :: F -> F'.
     forall l :: B -> F B, l' :: B -> F' B.
       if l ℛ(B -> f⃯ B) l' then g l ℛ(f⃯ A) g l'
iff  forall Functor(F), Functor(F'), Functor(f⃯) :: F -> F'.
     forall l :: B -> F B, l' :: B -> F' B.
       if f⃯ id . l ≡ l' . id then f⃯ id (g l) ≡ g l'
-- simplify
iff  forall Functor(F), Functor(F'), Functor(f⃯) :: F -> F'.
     forall l :: B -> F B.
       f⃯ id (g l) ≡ g (f⃯ id . l)

for any Functor-respecting f⃯; we can now apply the same reasoning as we did in the previous section, and give the following free theorem instead:

k (g l) ≡ g (k . l)

for any polymorphic function (that is, natural transformation) k :: ∀a. F a -> F' a and function l :: B -> F B. This property is essential when proving that the above representation of a lens is isomorphic to a pair of a setter and a getter; see Functor is to Lens as Applicative is to Biplate, Section 4, for details.
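As a concrete instance (our example, not from the post): take B = Int, A = (Int, Char), F = [], F' = Maybe, and k = listToMaybe, which is a natural transformation from [] to Maybe:

```haskell
import Data.Maybe (listToMaybe)

-- A function of the shape forall f. Functor f => (B -> f B) -> f A,
-- here with B = Int and A = (Int, Char):
g :: Functor f => (Int -> f Int) -> f (Int, Char)
g l = fmap (\i -> (i, 'x')) (l 0)

-- A natural transformation k :: forall a. F a -> F' a
k :: [a] -> Maybe a
k = listToMaybe

-- An arbitrary l :: B -> F B
l :: Int -> [Int]
l i = [i, i + 1]
```

The free theorem guarantees k (g l) ≡ g (k . l); here both sides are Just (0, 'x').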


Conclusions

Parametricity allows us to formally derive what we can conclude about a function by only looking at its type. We’ve covered a lot of material in this tutorial, but there is a lot more out there still. If you want to know more, here are some additional references.


Thanks to Auke Booij on #haskell for his helpful feedback on both parts of this blog post.

by edsko at August 14, 2015 11:57 AM

Danny Gratzer

Solving Recursive Equations

Posted on August 14, 2015
Tags: types

I wanted to write about something related to all the stuff I’ve been reading for research lately. I decided to talk about a super cool trick in a field called domain theory. It’s a method of generating a solution to a large class of recursive equations.

In order to go through this idea we’ve got some background to cover. I wanted to make this post readable even if you haven’t read too much domain theory (you do need to know what a functor/colimit is, but nothing crazier than that). We’ll start with a whirlwind tutorial of the math behind domain theory. From there we’ll transform the problem of finding a solution to an equation into something categorically tractable. Finally, I’ll walk through the construction of a solution.

I decided not to show an example of applying this technique to model a language because that would warrant its own post; hopefully I’ll write about that soon :)

Basic Domain Theory

The basic idea with domain theory comes from a simple problem. Suppose we want to model the lambda calculus. We want a collection of mathematical objects D so that we can treat each element of D as a function D -> D and each function D -> D as an element of D. To see why this is natural, remember that we want to turn each program E into some d ∈ D. If E = λ x. E' then we need to turn the function e ↦ [e/x]E' into an element. This means D → D needs to be embeddable in D. On the other hand, we might have E = E' E'', in which case we need to turn E' into a function D → D so that we can apply it. This means we need to be able to embed D into D → D.

After this we can turn a lambda calculus program into a specific element of D and reason about its properties using the ambient mathematical tools for D. This is semantics, understanding programs by studying their meaning in some mathematical structure. In our specific case that structure is D with the isomorphism D ≅ D → D. However, there’s an issue! We know that D can’t just be a set because then there cannot be such an isomorphism! In the case where D ≅ N, then D → D ≅ R and there’s a nice proof by diagonalization that such an isomorphism cannot exist.

So what can we do? We know there are only countably many programs, but we’re trying to state that there exists an isomorphism between our programs (countable) and functions on them (uncountable). Well the issue is that we don’t really mean all functions on D, just the ones we can model as lambda terms. For example, the function which maps all divergent programs to 1 and all terminating ones to 0 need not be considered because there’s no lambda term for it! How do we consider “computable” functions though? It’s not obvious since we define computable functions using the lambda calculus, what we’re trying to model here. Let’s set aside this question for a moment.

Another question is how do we handle this program: (λ x. x x) (λ x. x x)? It doesn’t have a value after all! It doesn’t behave like a normal mathematical function because applying it to something doesn’t give us back a new term, it just runs forever! To handle this we do something really clever. We stop considering just a collection of terms and instead look at terms with an ordering relation ⊑! The idea is that ⊑ represents definedness. A program which runs to a value is more defined than a program which just loops forever. Similarly, if two functions behave the same on all inputs except for 0, where one loops, we can say the one that terminates is more defined than the other. What we’ll do is define ⊑ abstractly and then model programs as elements of sets with such a relation defined upon them. In order to build up this theory we need a few definitions

A partially ordered set (poset) is a set A and a binary relation ⊑ where

  1. a ⊑ a
  2. a ⊑ b and b ⊑ c implies a ⊑ c
  3. a ⊑ b and b ⊑ a implies a = b

We often just denote the pair <A, ⊑> as A when the ordering is clear. With a poset A, of particular interest are chains in it. A chain is a collection of elements aᵢ so that aᵢ ⊑ aⱼ if i ≤ j. For example, in the partial order of natural numbers with ≤, a chain is just a run of ascending numbers. Another fundamental concept is called a least upper bound (lub). A lub of a subset P ⊆ A is an element x ∈ A so that y ∈ P implies y ⊑ x, and if z ∈ A is any other upper bound of P, then x ⊑ z. So a least upper bound is just the smallest thing bigger than the subset. This isn’t always guaranteed to exist; for example, in our poset of natural numbers N, the subset N has no upper bounds at all! When such a lub does exist, we denote it with ⊔P. Some partial orders have an interesting property: all chains in them have least upper bounds. We call such posets complete partial orders, or cpos.

For example while N isn’t a cpo, ω (the natural numbers + an element greater than all of them) is! As a quick puzzle, can you show that all finite partial orders are in fact CPOs?

We can define a number of basic constructions on cpos. The most common is the “lifting” operation which takes a cpo D and returns D⊥, a cpo with a least element ⊥. A cpo with such a least element is called “pointed” and I’ll write that as cppo (complete pointed partial order). As another common example, given two cppos D and E, we can construct D ⊗ E. An element of this cppo is either ⊥ or <l, r> where l ∈ D - {⊥} and r ∈ E - {⊥}. This is called the smash product because it “smashes” the ⊥s out of the components. Similarly, there’s the smash sum D ⊕ E.
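These constructions can be sketched as Haskell datatypes (a loose model only: Haskell types are already lifted, and the names Lift, Bottom, and smash are ours):

```haskell
-- Lifting: adjoin a new least element ⊥
data Lift a = Bottom | Lift a
  deriving (Eq, Show)

-- Smash product: the pair exists only when both components are non-bottom;
-- any ⊥ component "smashes" the whole pair to ⊥
smash :: Lift a -> Lift b -> Lift (a, b)
smash (Lift a) (Lift b) = Lift (a, b)
smash _        _        = Bottom
```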

The next question is the classic algebraic question to ask about a structure: what are the interesting functions on it? We’ll in particular be interested in functions which preserve the ⊑ relation and the taking of lub’s on chains. For this we have two more definitions:

  1. A function is monotone if x ⊑ y implies f(x) ⊑ f(y)
  2. A function is continuous if it is monotone and for all chains C, ⊔ f(C) = f(⊔ C).

Notably, the collection of cppos and continuous functions form a category! This is because clearly x ↦ x is continuous and the composition of two continuous functions is continuous. This category is called Cpo. It’s here that we’re going to do most of our interesting constructions.

Finally, we have to discuss one important construction on Cpo: D → E. This is the set of continuous functions from D to E. The ordering on this is pointwise, meaning that f ⊑ g if for all x ∈ D, f(x) ⊑ g(x). When E is pointed this is a cppo where ⊥ is x ↦ ⊥ and all the lubs are determined pointwise.

This gives us most of the mathematics we need to do the constructions we’re going to want, to demonstrate something cool here’s a fun theorem which turns out to be incredibly useful: Any continuous function f : D → D on a cppo D has a least fixed point.

To construct this least fixed point we need to find an x so that x = f(x). To do this, note first that ⊥ ⊑ f(⊥) by definition of ⊥, and by the monotonicity of f, x ⊑ y implies f(x) ⊑ f(y). This means that the collection of elements fⁱ(⊥) forms a chain, with the ith element being the ith iteration of f! Since D is a cppo, this chain has a least upper bound: ⊔ fⁱ(⊥). Moreover, f(⊔ fⁱ(⊥)) = ⊔ f(fⁱ(⊥)) by the continuity of f, but ⊔ fⁱ(⊥) = ⊥ ⊔ (⊔ f(fⁱ(⊥))) = ⊔ f(fⁱ(⊥)), so this is a fixed point! The proof that it’s the least fixed point is elided because typesetting in markdown is a bit of a bother.
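This construction is exactly what Haskell’s fix from Data.Function computes: the chain ⊥ ⊑ f(⊥) ⊑ f(f(⊥)) ⊑ … corresponds to unrolling a recursive definition one step at a time:

```haskell
-- Least fixed point via self-reference (same as Data.Function.fix)
fix :: (a -> a) -> a
fix f = let x = f x in x

-- Factorial as the least fixed point of a non-recursive functional
fact :: Integer -> Integer
fact = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```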

So there you have it, very, very basic domain theory. I can now answer the question we weren’t sure about before; the slogan is “computable functions are continuous functions”.

Solving Recursive Equations in Cpo

So now we can get to the result showing domain theory incredibly useful. Remember our problem before? We wanted to find a collection D so that

D ≅ D → D

However it wasn’t clear how to do this due to size issues. In Cpo however, we can absolutely solve this. This huge result was due to Dana Scott. First, we make a small transformation to the problem that’s very common in these scenarios. Instead of trying to solve this equation (something we don’t have very many tools for) we’re going to instead look for the fixpoint of this functor

F(X) = X → X

The idea here is that we’re going to prove that all well-behaved endofunctors on Cpo have fixpoints. By using this viewpoint we get all the powerful tools we normally have for reasoning about functors in category theory. However, there’s a problem: the above isn’t a functor! It has both positive and negative occurrences of X so it’s neither a co- nor a contravariant functor. To handle this we apply another clever trick. Let’s not look at endofunctors, but rather functors Cpoᵒᵖ × Cpo → Cpo (I believe this should be attributed to Freyd). This is a binary functor which is covariant in the second argument and contravariant in the first. We’ll use the first argument everywhere there’s a negative occurrence of X and the second for every positive occurrence. Take note: we need things to be contravariant in the first argument because we’re using that first argument negatively: if we didn’t do that we wouldn’t have a functor.

Now we have

F(X⁻, X⁺) = X⁻ → X⁺

This is functorial. We can also always recover the original map simply by diagonalizing: F(X) = F(X, X). We’ll now look for an object D so that F(D, D) ≅ D. Not quite a fixed point, but still equivalent to the equation we were looking at earlier.
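In Haskell this mixed-variance action is what is usually packaged as dimap on a profunctor; a sketch (names ours):

```haskell
-- F(X⁻, X⁺) = X⁻ -> X⁺ as a binary type constructor
newtype F x y = F { runF :: x -> y }

-- The action on morphisms: contravariant in x, covariant in y
dimapF :: (x' -> x) -> (y -> y') -> F x y -> F x' y'
dimapF pre post (F f) = F (post . f . pre)
```

Diagonalizing, F x x recovers the original mapping X ↦ (X -> X).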

Furthermore, we need one last critical property, we want F to be locally continuous. This means that the maps on morphisms determined by F should be continuous so F(⊔ P, g) = ⊔ F(P, g) and vice-versa (here P is a set of functions). Note that such morphisms have an ordering because they belong to the pointwise ordered cppo we talked about earlier.

We have one final thing to set up before this proof: what if there are multiple non-isomorphic solutions for F? We want a further coherence condition that’s going to provide us with 2 things

  1. An ability to uniquely determine a solution
  2. A powerful proof technique that isolates us from the particulars of the construction

What we want is called minimal invariance. Suppose we have a D and an i : D ≅ F(D, D). This is the minimal invariant solution if and only if the least fixed point of f(e) = i⁻ ∘ F(e, e) ∘ i is id. In other words, we want it to be the case that

d = ⊔ₓ fˣ(⊥)(d) (d ∈ D)

I mentally picture this as saying that the isomorphism is set up so that for any particular d we choose, if we apply i, fmap over it, apply i again, repeat and repeat, eventually this process will halt and we’ll run out of things to fmap over. It’s a sort of a statement that each d ∈ D is “finite” in a very, very handwavy sense. Don’t worry if that didn’t make much sense, it’s helpful to me but it’s just my intuition. This property has some interesting effects though: it means that if we find such a D then (D, D) is going to be both the initial algebra and final coalgebra of F.

Without further ado, let’s prove that every locally continuous functor F has a minimal invariant. We start by defining the following

D₀ = {⊥}
Dᵢ  = F(Dᵢ₋₁, Dᵢ₋₁)

This gives us a chain of cppos that gradually get larger. How do we show that they’re getting larger? By defining a section from Dᵢ to Dⱼ where j = i + 1. A section is a function f which is paired with a (unique) function f⁰ so that f⁰f = id and ff⁰ ⊑ id. In other words, f embeds its domain into the codomain and f⁰ tells us how to get it back out. Putting something in and taking it out is a round trip. Since the codomain may be bigger, though, taking something out and putting it back in only approximates a round trip. Our sections are defined thusly

s₀ = x ↦ ⊥         r₀ = x ↦ ⊥
sᵢ  = F(rᵢ₋₁, sᵢ₋₁)   rᵢ = F(sᵢ₋₁, rᵢ₋₁)

It would be very instructive to work out that these definitions actually are sections and retractions. Since typesetting these subscripts is a little rough, if it’s clear from context I’ll just write r and s. Now that we’ve got this increasing chain, we define an interesting object

 D = {x ∈ Πᵢ Dᵢ | x.(i-1) = r(x.i)}

In other words, D is the collection of infinite tuples. Each component is from one of the Dᵢs above and they cohere with each other, so using s and r to step along the chain takes you from one component to the next. Next we define a way to go from a single Dᵢ to a D: upᵢ : Dᵢ → D where

upᵢ(x).j =  x    if i = j
         | rᵈ(x) if i - j = d > 0
         | sᵈ(x) if j - i = d > 0

Interestingly, note that πᵢ ∘ upᵢ = id (easy proof) and that upᵢ ∘ πᵢ ⊑ id (slightly harder proof). This means that we’ve got more sections lying around: every Dᵢ can be fed into D. Consider the following diagram

    s      s      s
D0 ——> D1 ——> D2 ——> ...

I claim that D is the colimit to this diagram where the collection of arrows mapping into it are given with upᵢ. Seeing this is a colimit follows from the fact that πᵢ ∘ upᵢ is just id. Specifically, suppose we have some object C and a family of morphisms cᵢ : Dᵢ → C which commute properly with s. We need to find a unique morphism h so that cᵢ = h ∘ upᵢ. Define h as ⊔ᵢ cᵢπᵢ. Then

h ∘ upᵢ = ⊔ⱼ cⱼ ∘ πⱼ ∘ upᵢ = (⊔_{j<i} cⱼ ∘ rⁱ⁻ʲ) ⊔ cᵢ ⊔ (⊔_{j>i} cⱼ ∘ sʲ⁻ⁱ)

For j > i the commuting condition cⱼ₊₁ ∘ s = cⱼ gives cⱼ ∘ sʲ⁻ⁱ = cᵢ, and for j < i we have cⱼ ∘ rⁱ⁻ʲ = cᵢ ∘ sⁱ⁻ʲ ∘ rⁱ⁻ʲ ⊑ cᵢ since s ∘ r ⊑ id, so that whole massive term just evaluates to cᵢ as required. So we have a colimit. Notice that if we apply F to each Dᵢ in the diagram we end up with a new diagram.

    s      s      s
D1 ——> D2 ——> D3 ——> ...

D is still the colimit (all we’ve done is shift the diagram over by one) but by identical reasoning to D being a colimit, so is F(D, D). This means we have a unique isomorphism i : D ≅ F(D, D). The fact that i is the minimal invariant follows from the properties we get from the fact that i comes from a colimit.

With this construction we can construct our model of the lambda calculus simply by finding the minimal invariant of the locally continuous functor F(D⁻, D⁺) = D⁻ → D⁺ (it’s worth proving it’s locally continuous). Our denotation is defined as [e]ρ ∈ D where e is a lambda term and ρ is a map of the free variables of e to other elements of D. This is inductively defined as

[λx. e]ρ = i⁻(d ↦ [e]ρ[x ↦ d])
[e e']ρ = i([e]ρ)([e']ρ)
[x]ρ = ρ(x)

Notice here that for the two main constructions we just use i and i⁻ to fold and unfold the denotations to treat them as functions. We could go on to prove that this denotation is sound and complete but that’s something for another post.
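The three clauses transcribe almost literally into Haskell, with the newtype constructor playing i⁻ and the projection playing i (a sketch; since pure λ-terms have no observable base values, the only thing we can observe here is termination):

```haskell
-- D ≅ D -> D: the wrapper D is i⁻, the projection unD is i
newtype D = D { unD :: D -> D }

data Term = Var String | Lam String Term | App Term Term

type Env = [(String, D)]

-- [e]ρ, following the three equations above
denote :: Term -> Env -> D
denote (Lam x e)  rho = D (\d -> denote e ((x, d) : rho))
denote (App e e') rho = unD (denote e rho) (denote e' rho)
denote (Var x)    rho = maybe (error ("free variable " ++ x)) id (lookup x rho)
```

Evaluating denote ((λx.x) (λx.x)) [] terminates, while the denotation of (λx. x x)(λx. x x) is the ⊥ of D, i.e. a non-terminating computation.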

Wrap Up

That’s the main result I wanted to demonstrate. With this single proof we can actually model a very large class of programming languages into Cpo. Hopefully I’ll get around to showing how we can pull a similar trick with a relational structure on Cpo in order to prove full abstraction. This is nicely explained in Andrew Pitts’s “Relational Properties of Domains”.

If you’re interested in domain theory I learned from Gunter’s “Semantics of Programming Languages” book and recommend it.


August 14, 2015 12:00 AM

Learn Type Theory

Posted on August 14, 2015
Tags: types

I’ve been trying to write a blog post to this effect for a while now, hopefully this one will stick. I intend for this to be a bit more open-ended than most of my other posts, if you’re interested in seeing the updated version look here. Pull requests/issues are more than welcome on the repository. I hope you learn something from this.

Lots of people seem curious about type theory but it’s not at all clear how to go from no math background to understanding “Homotopical Patch Theory” or whatever the latest cool paper is. In this repository I’ve gathered links to some of the resources I’ve personally found helpful.

Reading Advice

I strongly urge you to start by reading one or more of the textbooks immediately below. They give a nice self-contained introduction and a foundation for understanding the papers that follow. Don’t get hung up on any particular thing, it’s always easier to skim the first time and read closely on a second pass.

The Resources


  • Practical Foundations of Programming Languages (PFPL)

    I reference this more than any other book. It’s a very wide-ranging survey of programming languages that assumes very little background knowledge. A lot of people prefer the next book I mention, but I think PFPL does a better job explaining the foundations it works from and then covers more topics I find interesting.

  • Types and Programming Languages (TAPL)

    Another very widely used introductory book (the one I learned with). It’s good to read in conjunction with PFPL as they emphasize things differently. Notably, this includes descriptions of type inference which PFPL lacks and TAPL lacks most of PFPL’s descriptions of concurrency/interesting imperative languages. Like PFPL this is very accessible and well written.

  • Online supplements
  • Dead-tree copy

  • Advanced Topics in Types and Programming Languages (ATTAPL)

Don’t feel the urge to read this all at once. It’s a bunch of fully independent but excellent chapters on a bunch of different topics. Read what looks interesting, save what doesn’t. It’s good to have in case you ever need to learn more about one of the subjects in a pinch.

Proof Assistants

One of the fun parts of taking in an interest in type theory is that you get all sorts of fun new programming languages to play with. Some major proof assistants are

Type Theory

  • The Works of Per Martin-Löf

Per Martin-Löf has contributed a ton to the current state of dependent type theory. So much so that it’s impossible to escape his influence. His papers on Martin-Löf Type Theory (he called it Intuitionistic Type Theory) are seminal.

If you’re confused by the papers above, read the book in the next entry and try again. The book doesn’t give you as good a feel for the various flavors of MLTT (which spun off into different areas of research) but is easier to follow.

It’s good to read the original papers and hear things from the horse’s mouth, but Martin-Löf is much smarter than us and it’s nice to read other people’s explanations of his material. A group of people at Chalmers have elaborated it into a book.

John Reynolds’s works are similarly impressive and always a pleasure to read.

While most dependent type theories (like the ones found in Coq, Agda, Idris..) are based on Martin-Löf’s later intensional type theories, computational type theory is different. It’s a direct descendant of his extensional type theory that has been heavily developed and forms the basis of NuPRL nowadays. The resources below describe the various parts of how CTT works.

A new exciting branch of type theory. This exploits the connection between homotopy theory and type theory by treating types as spaces. It’s the subject of a lot of active research but has some really nice introductory resources even now.

Proof Theory

  • Frank Pfenning’s Lecture Notes

    Over the years, Frank Pfenning has accumulated lecture notes that are nothing short of heroic. They’re wonderful to read and almost as good as being in one of his lectures.

Category Theory

Learning category theory is necessary to understand some parts of type theory. If you decide to study categorical semantics, realizability, or domain theory eventually you’ll have to buckle down and learn at least a little. It’s actually really cool math so no harm done!

  • Category Theory for Computer Scientists

This is the absolute smallest introduction to category theory you can find that’s still useful for a computer scientist. It’s very light on what it demands for prior knowledge of pure math but doesn’t go into too much depth.

One of the better introductory books to category theory in my opinion. It’s notable in assuming relatively little mathematical background and for covering quite a lot of ground in a readable way.

Another valuable piece of reading are these lecture notes. They cover a lot of the same areas as “Category Theory” so they can help to reinforce what you learned there as well giving you some of the author’s perspective on how to think about these things.

Other Goodness

  • Gunter’s “Semantics of Programming Languages”

While I’m not as big a fan of some of the earlier chapters, the math presented in this book is absolutely top-notch and gives a good understanding of how some cool fields (like domain theory) work.

The Oregon Programming Languages Summer School is a 2-week-long bootcamp on PLs held annually at the University of Oregon. It’s a wonderful event to attend, but if you can’t make it they record all their lectures anyway! They’re taught by a variety of lecturers, but they’re all world-class researchers.


August 14, 2015 12:00 AM

August 12, 2015

Dominic Steinitz

Stochastic Integration


Suppose we wish to model a process described by a differential equation and initial condition

\displaystyle   \begin{aligned}  \dot{x}(t) &= a(x, t) \\  x(0) &= a_0  \end{aligned}

But we wish to do this in the presence of noise. It’s not clear how to do this, but maybe we can model the process discretely, add noise and somehow take limits.

Let \pi = \{0 = t_0 \leq t_1 \leq \ldots \leq t_n = t\} be a partition of [0, t]; then we can discretise the above, allow the state to be random and add in some noise, which we model as samples of Brownian motion at the selected times multiplied by b so that we can vary the amount of noise depending on the state. We change the notation from x to X(\omega) to indicate that the variable is now random over some probability space.

\displaystyle   \begin{aligned}  {X}(t_{i+1}, \omega) - {X}(t_i, \omega)  &= a({X}(t_i, \omega))(t_{i+1} - t_i) +                                              b({X}(t_i, \omega))(W(t_{i+1}, \omega) - W(t_i, \omega)) \\  X(t_0, \omega) &= A_{0}(\omega)  \end{aligned}

We can suppress explicit mention of \omega and use subscripts to avoid clutter.

\displaystyle   \begin{aligned}  {X}_{t_{i+1}} - {X}_{t_i}  &= a({X}_{t_i})(t_{i+1} - t_i) +                                b({X}_{t_i})(W_{t_{i+1}} - W_{t_i}) \\  X(t_0) &= A_{0}(\omega)  \end{aligned}
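This recursion is the Euler–Maruyama discretisation; a minimal Haskell sketch (names ours), on a uniform grid with the Brownian increments W_{t_{i+1}} - W_{t_i} supplied as pre-sampled inputs:

```haskell
-- One discretised path X_{t_0}, X_{t_1}, ..., given Brownian increments
eulerMaruyama :: (Double -> Double)  -- drift a
              -> (Double -> Double)  -- diffusion b
              -> Double              -- initial state A_0
              -> Double              -- uniform step t_{i+1} - t_i
              -> [Double]            -- increments W_{t_{i+1}} - W_{t_i}
              -> [Double]
eulerMaruyama a b x0 dt = scanl step x0
  where step x dW = x + a x * dt + b x * dW
```

With zero drift and unit diffusion the path simply accumulates the Brownian increments.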

We can make this depend continuously on time specifying that

\displaystyle   X_t = X_{t_i} \quad \mathrm{for} \, t \in (t_i, t_{i+1}]

and then telescoping to obtain

\displaystyle   \begin{aligned}  {X}_{t} &= X_{t_0} + \sum_{i=0}^{k-1} a({X}_{t_i})(t_{i+1} - t_i) +                       \sum_{i=0}^{k-1} b({X}_{t_i})(W_{t_{i+1}} - W_{t_i})                       \quad \mathrm{for} \, t \in (t_k, t_{k+1}]  \end{aligned}

In the limit, the second term on the right looks like an ordinary integral with respect to time, albeit with a stochastic integrand, but what are we to make of the third term? We know that Brownian motion is nowhere differentiable, so it would seem the task is impossible. However, let us see what progress we can make with so-called simple processes.

Simple Processes


\displaystyle   X(t,\omega) = \sum_{i=0}^{k-1} B_i(\omega)\mathbb{I}_{(t_i, t_{i+1}]}(t)

where B_i is {\cal{F}}(t_i)-measurable. We call such a process simple. We can then define

\displaystyle   \int_0^\infty X_s \mathrm{d}W_s \triangleq \sum_{i=0}^{k-1} B_i{(W_{t_{i+1}} - W_{t_i})}

So if we can produce a sequence of simple processes, X_n that converge in some norm to X then we can define

\displaystyle   \int_0^\infty X(s)\mathrm{d}W(s) \triangleq \lim_{n \to \infty}\int_0^\infty X_n(s)\mathrm{d}W(s)

Of course we need to put some conditions on the particular class of stochastic processes for which this is possible and check that the limit exists and is unique.

We consider {\cal{L}}^2(\mu \times \mathbb{P}), the space of square integrable functions with respect to the product measure \mu \otimes \mathbb{P}, where \mu is Lebesgue measure on {\mathbb{R}^+} and \mathbb{P} is some given probability measure. We further restrict ourselves to progressively measurable functions. More explicitly, we consider the latter class of stochastic processes such that

\displaystyle   \mathbb{E}\int_0^\infty X^2_s\,\mathrm{d}s < \infty

Less Simple Processes

Bounded, Almost Surely Continuous and Progressively Adapted

Let X be a bounded, almost surely continuous and progressively measurable process which is (almost surely) 0 for t > T for some positive constant T. Define

\displaystyle   X_n(t, \omega) \triangleq X\bigg(T\frac{i}{n}, \omega\bigg) \quad \mathrm{for} \quad T\frac{i}{n} \leq t < T\frac{i + 1}{n}

These processes are clearly progressively measurable, and by bounded convergence (X is bounded by hypothesis and \{X_n\}_{n=0,\ldots} is uniformly bounded by the same bound)

\displaystyle   \lim_{n \to \infty}\|X - X_n\|_2 = 0

Bounded and Progressively Measurable

Let X be a bounded and progressively measurable process which is (almost surely) 0 for t > T for some positive constant T. Define

\displaystyle   X_n(t, \omega) \triangleq \frac{1}{1/n}\int_{t-1/n}^t X(s, \omega) \,\mathrm{d}s

Then X_n(t, \omega) is bounded, continuous and progressively measurable, and it is well known that X_n(t, \omega) \rightarrow X(t, \omega) as n \rightarrow \infty. Again by bounded convergence

\displaystyle   \lim_{n \to \infty}\|X - X_n\|_2 = 0

Progressively Measurable

Firstly, let X be a progressively measurable process which is (almost surely) 0 for t > T for some positive constant T. Define X_n(t, \omega) = X(t, \omega) \land n. Then X_n is bounded and by dominated convergence

\displaystyle   \lim_{n \to \infty}\|X - X_n\|_2 = 0

Finally let X be a progressively measurable process. Define

\displaystyle   X_n(t, \omega) \triangleq  \begin{cases}  X(t, \omega) & \text{if } t \leq n \\  0            & \text{if } \mathrm{otherwise}  \end{cases}


\displaystyle   \lim_{n \to \infty}\|X - X_n\|_2 = 0

The Itô Isometry

Let X be a simple process such that

\displaystyle   \mathbb{E}\int_0^\infty X^2_s\,\mathrm{d}s < \infty


\displaystyle   \mathbb{E}\bigg(\int_0^\infty X_s\,\mathrm{d}W_s\bigg)^2 =  \mathbb{E}\bigg(\sum_{i=0}^{k-1} B_i{(W_{t_{i+1}} - W_{t_{i}})}\bigg)^2 =  \sum_{i=0}^{k-1} \mathbb{E}(B_i)^2({t_{i+1}} - {t_{i}}) =  \mathbb{E}\int_0^\infty X^2_s\,\mathrm{d}s
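The middle equality holds because the cross terms vanish: for i < j, the factors B_i, B_j and W_{t_{i+1}} - W_{t_i} are all {\cal{F}}(t_j)-measurable while the increment W_{t_{j+1}} - W_{t_j} is independent of {\cal{F}}(t_j) with zero mean, so

\displaystyle   \mathbb{E}\big[B_i B_j (W_{t_{i+1}} - W_{t_i})(W_{t_{j+1}} - W_{t_j})\big] =  \mathbb{E}\big[B_i B_j (W_{t_{i+1}} - W_{t_i})\big]\,\mathbb{E}\big[W_{t_{j+1}} - W_{t_j}\big] = 0

and the diagonal terms use the independence of B_i from the increment together with \mathbb{E}\big[(W_{t_{i+1}} - W_{t_i})^2\big] = t_{i+1} - t_i.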

Now suppose that \{H_n\}_{n \in \mathbb{N}} is a Cauchy sequence of progressively measurable simple functions in {\cal{L}}^2(\mu \times \mathbb{P}) then since the difference of two simple processes is again a simple process we can apply the Itô Isometry to deduce that

\displaystyle   \lim_{m,n \to \infty}\mathbb{E}\bigg(\int_0^\infty (H_n(s) - H_m(s))\,\mathrm{d}W(s)\bigg)^2 =  \lim_{m,n \to \infty}\mathbb{E}\int_0^\infty (H_n(s) - H_m(s))^2\,\mathrm{d}s =  0

In other words, \int_0^\infty H_n(s)\,\mathrm{d}W(s) is also Cauchy in {\cal{L}}^2(\mathbb{P}) and since this is complete, we can conclude that

\displaystyle   \int_0^\infty X(s)\,\mathrm{d}W(s) \triangleq \lim_{n \to \infty}\int_0^\infty X_n(s)\,\mathrm{d}W(s)

exists (in {\cal{L}}^2(\mathbb{P})). Uniqueness, that is, independence of the choice of approximating sequence, follows from the triangle inequality and the Itô isometry.
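
Indeed, if \{X_n\} and \{Y_n\} are two sequences of simple processes converging to X in {\cal{L}}^2(\mu \times \mathbb{P}), then by the isometry and the triangle inequality

\displaystyle   \bigg\|\int_0^\infty X_n\,\mathrm{d}W - \int_0^\infty Y_n\,\mathrm{d}W\bigg\|_{{\cal{L}}^2(\mathbb{P})} =  \|X_n - Y_n\|_{{\cal{L}}^2(\mu \times \mathbb{P})} \leq  \|X_n - X\| + \|X - Y_n\| \rightarrow 0

so the two limits coincide.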


  1. We defer proving the definition also makes sense almost surely to another blog post.

  2. This approach seems fairly standard; see, for example, Handel (2007) and Mörters et al. (2010).

  3. Rogers and Williams (2000) takes a more general approach.

  4. Protter (2004) takes a different approach by defining stochastic processes which are good integrators, a more abstract motivation than the one we give here.

  5. The requirement of progressive measurability can be relaxed.


Handel, Ramon von. 2007. “Stochastic Calculus, Filtering, and Stochastic Control (Lecture Notes).”

Mörters, P, Y Peres, O Schramm, and W Werner. 2010. Brownian motion. Cambridge Series on Statistical and Probabilistic Mathematics. Cambridge University Press.

Protter, P.E. 2004. Stochastic Integration and Differential Equations: Version 2.1. Applications of Mathematics. Springer.

Rogers, L.C.G., and D. Williams. 2000. Diffusions, Markov Processes and Martingales: Volume 2, Itô Calculus. Cambridge Mathematical Library. Cambridge University Press.

by Dominic Steinitz at August 12, 2015 07:14 AM

August 11, 2015

Brent Yorgey

Catsters guide is complete!

About a year and a half ago I announced that I had started creating a guide to the excellent series of category theory YouTube videos by the Catsters (aka Eugenia Cheng and Simon Willerton). I am happy to report that as of today, the guide is finally complete!

As far as possible, I have tried to arrange the order so that each video only depends on concepts from earlier ones. (If you have any suggestions for improving the ordering, I would love to hear them!) Along with each video you can also find my cryptic notes; I make no guarantee that they will be useful to anyone (even me!), but hopefully they will at least give you an idea of what is in each video.

If and when they post any new videos (pretty please?) I will try to keep it updated.

by Brent at August 11, 2015 09:03 PM

Dimitri Sabadie

Luminance – what was that alignment stuff already?

Yesterday, I released a new article about how I implement vertex arrays in luminance. In that article, I told you that the memory was packed with alignment set to 1.

Well, I’ve changed my mind. Some people pointed out that for most GPUs the right thing to do is to align on 32 bits, that is, 4 bytes. So the alignment should be 4 bytes, not 1.

There might be an issue with that: if you store a structure whose attributes have sizes that are not multiples of 4 bytes, you’ll likely need to add padding.

However, I just reviewed my code, and found this:

instance (GPU a,KnownNat n,Storable a) => Vertex (V n a) where
instance (Vertex a,Vertex b) => Vertex (a :. b) where

Those are the only instances of Vertex. That means you can only use V and (:.) to build up vertices. Look at the V instance: you’ll find a GPU typeclass constraint. Let’s look at its definition and instances:

class GPU a where
  glType :: Proxy a -> GLenum

instance GPU Float where
  glType _ = GL_FLOAT

instance GPU Int32 where
  glType _ = GL_INT

instance GPU Word32 where
  glType _ = GL_UNSIGNED_INT

Woah. How did I forget that?! Let me spell it out: we can only have 32-bit vertex components! So the memory inside vertex buffers will always be aligned on 4 bytes. No need to worry about padding then!

The first implication is that you won’t be able to use Word16, for instance. You’ll need to stick to the three types that have a GPU instance.

Note: that doesn’t prevent us from adding Double later on, because a Double is a 64-bit type, which is a multiple of 4 bytes!
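Here is a minimal, self-contained sketch of what such a Double instance could look like. This is an assumption on my part, not the actual luminance code: the GLenum type and the GL constants are inlined (with their standard OpenGL values) so the snippet stands alone.

```haskell
import Data.Int (Int32)
import Data.Proxy (Proxy (..))
import Data.Word (Word32)

-- Stand-ins for the type and constants that luminance gets from its
-- GL binding; the hex values are the standard OpenGL enum values.
type GLenum = Word32

glFLOAT, glINT, glUNSIGNED_INT, glDOUBLE :: GLenum
glFLOAT        = 0x1406
glINT          = 0x1404
glUNSIGNED_INT = 0x1405
glDOUBLE       = 0x140A

class GPU a where
  glType :: Proxy a -> GLenum

instance GPU Float  where glType _ = glFLOAT
instance GPU Int32  where glType _ = glINT
instance GPU Word32 where glType _ = glUNSIGNED_INT

-- A Double is 64 bits, a multiple of 4 bytes, so adding this instance
-- keeps every vertex component 4-byte aligned.
instance GPU Double where glType _ = glDOUBLE
```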

That’s all I have for today. I’m working on something very exciting linked to render batching. I’ll talk about that when it’s cooked. ;)

Keep the vibe; keep building awesome things, and as always, thank you for reading me!

by Dimitri Sabadie at August 11, 2015 01:56 PM

August 10, 2015

Neil Mitchell

Upcoming talk to the Cambridge UK Meetup, Thursday 13 Aug (Shake 'n' Bake)

I'll be talking at the Cambridge NonDysFunctional Programmers Meetup this coming Thursday (13 Aug 2015). Doors open at 7:00pm with talk 7:30-8:30pm, followed by beer/food. I'll be talking about Shake 'n' Bake. The abstract is:

Shake is a Haskell build system, an alternative to Make, but with more powerful and accurate dependencies. I'll cover how to build things with Shake, and why I laugh at non-Monadic build systems (which covers most things that aren't Shake). Shake is an industrial quality library, with a website at
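The monadic point can be seen in miniature. The following toy model is my own illustration and has nothing to do with Shake's actual internals: a build computation that can inspect a fetched value before deciding what to fetch next needs (>>=), whereas an applicative interface would force all dependencies to be declared up front.

```haskell
import qualified Data.Map as M

-- A toy build computation: given a store of file contents, it produces
-- a value and records which files it fetched (its dependencies).
newtype Build a =
  Build { runBuild :: M.Map FilePath String -> (a, [FilePath]) }

instance Functor Build where
  fmap f (Build m) = Build $ \s -> let (a, ds) = m s in (f a, ds)

instance Applicative Build where
  pure x = Build $ \_ -> (x, [])
  Build mf <*> Build ma = Build $ \s ->
    let (f, ds1) = mf s
        (a, ds2) = ma s
    in (f a, ds1 ++ ds2)

instance Monad Build where
  Build m >>= k = Build $ \s ->
    let (a, ds1) = m s
        (b, ds2) = runBuild (k a) s
    in (b, ds1 ++ ds2)

-- Fetch a file's contents, recording it as a dependency.
fetch :: FilePath -> Build String
fetch f = Build $ \s -> (M.findWithDefault "" f s, [f])

-- A dynamic dependency: which file we need next depends on the
-- *contents* of "config", something only (>>=) can express.
rule :: Build String
rule = do
  target <- fetch "config"
  fetch target
```

Running rule against a store where "config" contains "a.txt" fetches "config" and then "a.txt". With an applicative-only interface the dependency list would have to be fixed before any contents were seen, which is exactly the restriction a monadic build system lifts.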

Bake is a Haskell continuous integration system, an alternative to Travis/Jenkins, but designed for large semi-trusted teams. Bake guarantees that all code arriving in your master branch passes all tests on all platforms, while using as few resources as possible, allowing you to have hours of tests, hundreds of commits a day and a few lonely test servers. Bake is held together with duct tape.

I look forward to seeing people there.

by Neil Mitchell at August 10, 2015 08:13 PM