Planet Haskell

April 26, 2019

Mark Jason Dominus


What is the shed in “watershed”? Is it a garden shed? No.

I guessed that it meant a piece of land that sheds water into some stream or river. Wrong!

The Big Dictionary says that this shed is:

The parting made in the hair by combing along the top of the head.

This meaning of “shed” fell out of use after the end of the 17th century.

by Mark Dominus ( at April 26, 2019 04:34 AM

Best fanfic this month

This week I learned that there are no fewer than seven fanfics on AO3 that concern the Complaint letter to Ea-Nasir, a 3750-year-old Babylonian cuneiform tablet from a merchant angry at the poor-quality copper ingots he was sold. Truly, we live in an age of marvels.

I've said here before that I don't usually find written material funny, with very rare exceptions. But this story, Pay me Baby, Treat me Right, was a rare exception. I found it completely sidesplitting.

(Caution: sexual content.)

by Mark Dominus ( at April 26, 2019 04:29 AM

Hell is the absence of God

This is definitely the worst thing I learned this month. It's way worse than that picture of Elvis meeting Nixon.

Two photos: in the first, six tall black men, members of the Harlem Globetrotters (including Meadowlark Lemon), stand in what appears to be a hotel ballroom; in front of them is Henry Kissinger, smiling and holding a basketball with the Globetrotters’ signatures. In the second, Henry Kissinger alone smiles and holds up a Harlem Globetrotters game jersey with the number ‘1’.

Nobel Laureate and noted war criminal Henry Kissinger is also an honorary member of the Harlem Globetrotters.

As Maciej Cegłowski said, “And yet the cruel earth refuses to open and swallow up everyone involved.”

by Mark Dominus ( at April 26, 2019 04:01 AM

Sometimes it matters how you get there

Katara was given the homework exercise of rationalizing the denominator of $$\frac1{\sqrt2+\sqrt3+\sqrt5}$$ which she found troublesome. You evidently need to start by multiplying the numerator and denominator by $-\sqrt2 + \sqrt 3 + \sqrt 5$, obtaining $$ \frac1{(\sqrt2+\sqrt3+\sqrt5)}\cdot \frac{-\sqrt2 + \sqrt 3 + \sqrt 5}{-\sqrt2 + \sqrt 3 + \sqrt 5} = \frac{-\sqrt2 + \sqrt 3 + \sqrt 5}{(-2 +3 + 5 + 2\sqrt{15})} = \frac{-\sqrt2 + \sqrt 3 + \sqrt 5}{6 + 2\sqrt{15}} $$ and then you go from there, multiplying the top and bottom by $6 - 2\sqrt{15}$. It is a mess.

But when I did it, it was much quicker. Instead of using $-\sqrt2 + \sqrt 3 + \sqrt 5$, I went with $\sqrt2 + \sqrt 3 - \sqrt 5$, not for any reason, but just at random. This got me: $$ \frac1{\sqrt2+\sqrt3+\sqrt5}\cdot \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{\sqrt2 + \sqrt 3 - \sqrt 5} = \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{(2 +3 - 5 + 2\sqrt{6})} = \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{2\sqrt{6}} $$

with the rational part $2 + 3 - 5 = 0$ vanishing in the denominator. Then the next step is quite easy; just get rid of the $\sqrt6$: $$ \frac{\sqrt2 + \sqrt 3 - \sqrt 5}{2\sqrt{6}}\cdot \frac{\sqrt6}{\sqrt6} = \frac{\sqrt{12}+\sqrt{18}-\sqrt{30}}{12} $$

which is correct.

I wish I could take credit for this, but it was pure dumb luck.

by Mark Dominus ( at April 26, 2019 03:26 AM

Men who are the husbands of someone important

It's often pointed out that women, even famous and accomplished women, are often described in newspaper stories as being someone's wife, but that the reverse rarely occurs. The only really well-known exception I could think of was Pierre Curie, who was a famous, prizewinning scientist (1903 Nobel Laureate, yo), but is often identified as having been the husband of Marie Skłodowska Curie (also 1903 Nobel Laureate).

But last week brought another example to my attention. There were a great many news articles reporting that Salma Hayek's husband had pledged money to help rebuild Notre Dame cathedral. His name is François-Henri Pinault, and he is a billionaire. And the husband of Salma Hayek.

For example:

“Billionaire Francois Pinault and his son, Francois-Henri Pinault, who is married to actress Salma Hayek, said…”

Notre Dame fire – Salma Hayek’s French billionaire husband Francois-Henri Pinault pledges £86million

Salma Hayek’s Billionaire Husband Pledges More Than $110 Million to Rebuild Paris' Notre Dame Cathedral


by Mark Dominus ( at April 26, 2019 03:14 AM

April 24, 2019


Brent Yorgey

Competitive Programming in Haskell: Basic Setup

I am the coach of my school’s competitive programming team and enjoy solving problems on Open Kattis. Since Kattis accepts submissions in a wide variety of languages (including Haskell, OCaml, Rust, Common Lisp, and even Prolog), I often enjoy submitting solutions in Haskell. Of the 946 problems I have solved on Kattis¹, I used Haskell for 607 of them (I used Java for the rest, except for one in C++).

After solving so many problems in Haskell, by now I’ve figured out some patterns that work well, identified some common pitfalls, developed some nice little libraries, and so forth. I thought it would be fun to write a series of blog posts sharing my experience for the benefit of others—and because I expect I will also learn things from the ensuing discussion!

The Basics: I/O

As a basic running example I’ll use the same example problem that Kattis uses in its help section, namely, A Different Problem. In this problem, we are told that the input will consist of a number of pairs of integers between 0 and 10^15, one pair per line, and we should output the absolute value of the difference between each pair. The given example is that if the input looks like this:

10 12
71293781758123 72784
1 12345677654321

then our program should produce output that looks like this:

2
71293781685339
12345677654320

Kattis problems are always set up this way, with input of a specified format provided on standard input, and output to be written to standard output. To do this in Haskell, one might think we will need to use things like getLine and putStrLn to read the input and write the output. But wait! There is a much better way. Haskell’s standard Prelude has a function

interact :: (String -> String) -> IO ()

It takes a pure String -> String function, and creates an IO action which reads from standard input, feeds the input to the function, and then writes the function’s output to standard output. It uses lazy IO, so the reading and writing can be interleaved with computation of the function—a bit controversial and dangerous in general, but absolutely perfect for our use case! Every single Kattis problem I have ever solved begins with

main = interact $ ...

(or the equivalent for ByteString, more on that in a future post) and that is the only bit of IO in the entire program. Yay!

From Input to Output

So now we need to write a pure function which transforms the input into the output. Of course, in true Haskell fashion, we will do this by constructing a chained pipeline of functions to do the job incrementally. The general plan of attack (for any Kattis problem) is as follows:

  1. First, parse the input, that is, transform the raw String input into some more semantically meaningful representation—typically using a combination of functions like lines, words, read, map, and so on (or more sophisticated tools—see a later post).
  2. Next, solve the problem, turning a semantically meaningful representation of the input into a semantically meaningful representation of the output.
  3. Finally, format the output using things like show, unwords, unlines, and so on.

Idiomatic Haskell uses the composition operator (.) to combine functions. However, when solving competitive programming problems, I much prefer to use the reverse composition operator, (>>>) from Control.Arrow (that is, (>>>) = flip (.)). The reason is that since I often end up constructing long function pipelines, I want to be able to think about the process of transforming input to output and type from left to right at the same time; having to add functions from right to left would be tedious.
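
As a toy illustration (my own example, not from the post), here is the same pipeline written with both operators; the two definitions behave identically, but the second reads in the same order the data flows:

```haskell
import Control.Arrow ((>>>))
import Data.Char (toUpper)

-- Right-to-left composition with (.):
shout :: String -> String
shout = unwords . map (map toUpper) . words

-- The same pipeline, left to right, with (>>>):
shout' :: String -> String
shout' = words >>> map (map toUpper) >>> unwords
```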

A Full Solution

So here’s my solution to A Different Problem:

main = interact $
  lines >>> map (words >>> map read >>> solve >>> show) >>> unlines

solve :: [Integer] -> Integer
solve [a,b] = abs (a - b)

A few notes:

  • Since each line is to be processed independently, notice how I put the processing of each line inside a single call to map.
  • We could probably inline the solve function in this case, but I prefer to split it out explicitly in order to specify its type, which both prevents problems with read/show ambiguity and also serves as a sanity check on the parsing and formatting code.
  • The machines on which our solution will run definitely have 64-bit architectures, so we could technically get away with using Int instead of Integer (maxBound :: Int64 is a bit more than 9×10^18, plenty big enough for inputs up to 10^15), but there would be no benefit to doing so. If we use Integer we don’t even have to consider potential problems with overflow.

And one last thing: I said we were going to parse the input into a “semantically meaningful representation”, but I lied a teensy bit: the problem says we are going to get a pair of integers but I wrote my solve function as though it takes a list of integers. And even worse, my solve function is partial! Why did I do that?

The fact is that I almost never use actual Haskell tuples in my solutions, because they are too awkward and inconvenient. Representing homogeneous tuples as Haskell lists of a certain known length allows us to read and process “tuples” using standard functions like words and map, to combine them using zipWith, and so on. And since we get to assume that the input always precisely follows the specification—which will never change—this is one of the few situations where, in my opinion, we are fully justified in writing partial functions like this if it makes the code easier to write. So I always represent homogeneous tuples as lists and just pattern match on lists of the appropriate (known) length. (If I need heterogeneous tuples, on the other hand, I create an appropriate data type.)
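
As a sketch of this style (addRows is my own toy example, not a Kattis problem): read two lines of integers as list-represented "tuples" and combine them pointwise with zipWith, pattern matching directly on the two-element list of rows:

```haskell
-- Parse two lines of integers and add them pointwise. The partial
-- pattern [xs, ys] is justified because we assume the input format
-- is exactly as specified.
addRows :: String -> String
addRows input =
  let [xs, ys] = map (map read . words) (lines input) :: [[Integer]]
  in unwords (map show (zipWith (+) xs ys))
```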

Of course I’ve only scratched the surface here—I’ll have a lot more to say in future posts—but this should be enough to get you started! I’ll leave you with a few very easy problems, which can each be done with just a few lines of Haskell:

Of course you can also try solving any of the other problems (as of this writing, over 2400 of them!) on Kattis as well.

  1. Did I mention that I really enjoy solving competitive programming problems?↩

by Brent at April 24, 2019 03:49 PM

FP Complete

Stackage changes and Stack 2

We’re ramping up for the Stack 2 release, which contains a number of changes. (If you want more information, check out the changelog.) I’m not going to be covering all of those changes in this blog post. Instead, today, I want to talk about what the release process will look like, both for Stack itself and Stackage.

by Michael Snoyman ( at April 24, 2019 09:00 AM

April 22, 2019

Monday Morning Haskell

Serializing Mazes!


Last week we improved our game so that we could solve additional random mazes after the first. This week, we'll step away from the randomness and look at how we can serialize our mazes. This will allow us to have a consistent and repeatable game. It will also enable us to save the game state later.

We'll be using the Megaparsec library as part of this article. If you aren't familiar with that (or parsing in Haskell more generally), check out our Parsing Series!

A Serialized Representation

The serialized representation of our maze doesn't need to be human readable. We aren't trying to create an ASCII art style representation. That said, it would be nice if it bore some semblance to the actual layout. There are a couple properties we'll aim for.

First, it would be good to have one character represent one cell in our maze. This dramatically simplifies any logic we'll use for serializing back and forth. Second, we should lay out the cell characters in a way that matches the maze's appearance. So for instance, the top left cell should be the first character in the first row of our string. Then, each row should appear on a separate line. This will make it easier to avoid silly errors when coming up with test cases.

So how can we serialize a single cell? We could observe that for each cell, we have sixteen possible states. There are 4 sides, and each side is either a wall or it is open. This suggests a hexadecimal representation.

Let's think of the four directions as being 4 bits, where if there is a wall, the bit is set to 1, and if it is open, the bit is set to 0. We'll order the bits as up-right-down-left, as we have in a couple other areas of our code. So we have the following example configurations:

  1. An open cell with no walls around it is 0.
  2. A totally surrounded cell is 1111 = F.
  3. A cell with walls on its top and bottom would be 1010 = A.
  4. A cell with walls on its left and right would be 0101 = 5.

With that in mind, we can create a small 5x5 test maze with the following representation:


And this ought to look like so:


This serialization pattern lends itself to a couple helper functions we'll use later. The first, charToBoundsSet, will take a character and give us four booleans. These represent the presence of a wall in each direction. First, we convert the character to the hex integer. Then we use patterns about hex numbers and where the bits lie. For instance, the first bit is only set if the number is at least 8. The last bit is only set for odd numbers. This gives us the following:

charToBoundsSet :: Char -> (Bool, Bool, Bool, Bool)
charToBoundsSet c =
  ( num > 7
  , num `mod` 8 > 3
  , num `mod` 4 > 1
  , num `mod` 2 > 0
  )
  where
    num = digitToInt c -- from Data.Char

Then, we also want to go backwards. We want to take a CellBoundaries item and convert it to the proper character. We'll look at each direction. If it's an AdjacentCell, it contributes nothing to the final Int value. But otherwise, it contributes the hex digit value for its place. We add these up and convert to a char with intToDigit:

cellToChar :: CellBoundaries -> Char
cellToChar bounds =
  let top = case upBoundary bounds of
        (AdjacentCell _) -> 0
        _ -> 8
      right = case rightBoundary bounds of
        (AdjacentCell _) -> 0
        _ -> 4
      down = case downBoundary bounds of
        (AdjacentCell _) -> 0
        _ -> 2
      left = case leftBoundary bounds of
        (AdjacentCell _) -> 0
        _ -> 1
  in toUpper $ intToDigit (top + right + down + left)

We'll use both of these functions in the next couple parts.
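
As a quick standalone sanity check of the hex scheme (decode and encode here are my own stand-ins for charToBoundsSet and cellToChar, without the CellBoundaries type), every hex digit should survive a decode/encode round trip:

```haskell
import Data.Char (digitToInt, intToDigit, toUpper)

-- Decode a hex digit into (up, right, down, left) wall flags.
decode :: Char -> (Bool, Bool, Bool, Bool)
decode c = (n > 7, n `mod` 8 > 3, n `mod` 4 > 1, n `mod` 2 > 0)
  where n = digitToInt c

-- Encode the four flags back into an uppercase hex digit.
encode :: (Bool, Bool, Bool, Bool) -> Char
encode (u, r, d, l) = toUpper (intToDigit (bit 8 u + bit 4 r + bit 2 d + bit 1 l))
  where bit v b = if b then v else 0
```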

Serializing a Maze

Let's move on now to determining how we can take a maze and represent it as Text. For this part, let's first apply a type synonym on our maze type:

type Maze = Map.Map Location CellBoundaries

dumpMaze :: Maze -> Text
dumpMaze = ...

First, let's imagine we have a single row worth of locations. We can convert that row to a string easily using our helper function from above:

dumpMaze = …
    rowToString :: [(Location, CellBoundaries)] -> String
    rowToString = map (cellToChar . snd)

Now we'd like to take our maze map and group it into the different rows. The groupBy function seems appropriate. It groups elements of a list based on some predicate. We'd like to take a predicate that checks if the rows of two elements match. Then we'll apply that against the toList representation of our map:

rowsMatch :: (Location, CellBoundaries) -> (Location, CellBoundaries) -> Bool
rowsMatch ((_, y1), _) ((_, y2), _) = y1 == y2

We have a problem though because groupBy only works when the elements are next to each other in the list. The Map.toList function will give us a column-major ordering. We can fix this by first creating a transposed version of our map:

dumpMaze maze = …
    transposedMap :: Maze
    transposedMap = Map.mapKeys (\(x, y) -> (y, x)) maze

Now we can go ahead and group our cells by row:

dumpMaze maze = …
    transposedMap = …

    cellsByRow :: [[(Location, CellBoundaries)]]
    cellsByRow = groupBy (\((r1, _), _) ((r2, _), _) -> r1 == r2) 
                   (Map.toList transposedMap)
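
Here is a small self-contained check of this transpose-then-group trick (a toy 2×2 map of my own; every cell is just 'a'). Swapping the keys with mapKeys makes Map.toList emit entries row by row, so groupBy on the first key component yields one list per row:

```haskell
import Data.List (groupBy)
import qualified Data.Map as Map

-- Group a toy map's entries into rows after swapping (x, y) to (y, x).
rowGroups :: [[((Int, Int), Char)]]
rowGroups = groupBy (\((r1, _), _) ((r2, _), _) -> r1 == r2)
              (Map.toList (Map.mapKeys (\(x, y) -> (y, x)) m))
  where
    m :: Map.Map (Int, Int) Char
    m = Map.fromList [ ((x, y), 'a') | x <- [0 .. 1], y <- [0 .. 1] ]
```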

And now we can complete our serialization function! We get the string for each row, and combine them with unlines and then pack into a Text.

dumpMaze maze = pack $ (unlines . reverse) (rowToString <$> cellsByRow)
    transposedMap = …

    cellsByRow = …

    rowToString = ...

As a last trick, note that we reverse the order of the rows. This way the top row appears first, rather than the row corresponding to y = 0.

Parsing a Maze

Now that we can dump our maze into a string, we also want to be able to go backwards. We should be able to take a properly formatted string and turn it into our Maze type. We'll do this using the Megaparsec library, as we discussed in part 4 of our series on parsing in Haskell. So we'll create a function in the Parsec monad that will take the dimensions of the maze as an input:

import qualified Text.Megaparsec as M

mazeParser :: (Int, Int) -> M.Parsec Void Text Maze
mazeParser (numRows, numColumns) = ...

We want to parse the input into a format that will match each character up with its location in the (x,y) coordinate space of the grid. This means parsing one row at a time, and passing in a counter argument. To make the counter match with the desired row, we'll use a descending list comprehension like so:

mazeParser (numRows, numColumns) = do
  rows <- forM [(numRows - 1), (numRows - 2)..0] $ \i -> do

For each row, we'll parse the individual characters using M.hexDigit and match them up with a column index:

mazeParser (numRows, numColumns) = do
  rows <- forM [0..(numRows - 1)] $ \i -> do
    (columns :: [(Int, Char)]) <-
      forM [0..(numColumns - 1)] $ \j -> do
        c <- M.hexDigitChar
        return (j, c)

We conclude the parsing of a row by reading the newline character. Then we make the indices match the coordinates in discrete (x,y) space. Remember, the "column" should be the first item in our location.

mazeParser (numRows, numColumns) = do
  (rows :: [[(Location, Char)]]) <-
    forM [0..(numRows - 1)] $ \i -> do
      columns <- forM [0..(numColumns - 1)] $ \j -> do
        c <- M.hexDigitChar
        return (j, c)
      return $ map (\(col, char) -> ((col, i), char)) columns

Now we'll need a function to convert one of these Location, Char pairs into CellBoundaries. For the most part, we just want to apply our charToBoundsSet function and get the boolean values. Remember these tell us if walls are present or not:

mazeParser (numRows, numColumns) = do
  rows <- …
    cellSpecToBounds :: (Location, Char) -> (Location, CellBoundaries)
    cellSpecToBounds (loc@(x, y), c) =
      let (topIsWall, rightIsWall, bottomIsWall, leftIsWall) = 
            charToBoundsSet c

Now it's a matter of working case by case in each direction. We just need a little logic to determine, in the True case, whether it should be a Wall or a WorldBoundary. Here's the implementation:

cellSpecToBounds :: (Location, Char) -> (Location, CellBoundaries)
cellSpecToBounds (loc@(x, y), c) =
  let (topIsWall, rightIsWall, bottomIsWall, leftIsWall) = 
         charToBoundsSet c
      topCell = if topIsWall
        then if y + 1 == numRows
          then WorldBoundary
          else Wall
        else (AdjacentCell (x, y + 1))
      rightCell = if rightIsWall
        then if x + 1 == numColumns
          then WorldBoundary
          else Wall
        else (AdjacentCell (x + 1, y))
      bottomCell = if bottomIsWall
        then if y == 0
          then WorldBoundary
          else Wall
        else (AdjacentCell (x, y - 1))
      leftCell = if leftIsWall
        then if x == 0
          then WorldBoundary
          else Wall
        else (AdjacentCell (x - 1, y))
  in  (loc, CellBoundaries topCell rightCell bottomCell leftCell)

And now we can complete our parsing function by applying this helper over all our rows!

mazeParser (numRows, numColumns) = do
  (rows :: [[(Location, Char)]]) <-
    forM [0..(numRows - 1)] $ \i -> do
      columns <- forM [0..(numColumns - 1)] $ \j -> do
        c <- M.hexDigitChar
        return (j, c)
      return $ map (\(col, char) -> ((col, i), char)) columns
  return $ Map.fromList (cellSpecToBounds <$> (concat rows))
    cellSpecToBounds = ...


This wraps up our latest part on serializing maze definitions. The next couple parts will still be more code-focused. We'll look at ways to improve our data structures and an alternate way of generating random mazes. But after those, we'll get back to adding some new game features, such as wandering enemies and combat!

To learn more about serialization, you should read our series on parsing. You can also download our Production Checklist for more ideas!

by James Bowen at April 22, 2019 02:30 PM

Magnus Therning

Comonadic builders, minor addition

When reading about Comonadic builders the other day I reacted to this comment:

The comonad package has the Traced newtype wrapper around the function (->). The Comonad instance for this newtype gives us the desired behaviour. However, dealing with the newtype wrapping and unwrapping makes our code noisy and truly harder to understand, so let’s use the Comonad instance for the arrow (->) itself

So, just for fun I thought I'd work out the “noisy and truly harder” bits.

To begin with I needed two language extensions and two imports

After that I could copy quite a bit of stuff directly from the other post

  • Settings definition
  • The Semigroup instance for Settings
  • The Monoid instance for Settings
  • Project definition

After this everything had only minor changes. First off the ProjectBuilder type had to be changed to

With that done, the types of all the functions can actually be left as they are, but of course the definitions have to be modified. However, it turned out that the necessary modifications were rather smaller than I had expected. First out is buildProject, which I decided to call buildProjectW to make it possible to keep the original code and the new code in the same file without causing name clashes:

The only difference is the addition of traced . to wrap it up in the newtype, the rest is copied straight from the original article.

The two simple project combinator functions, which I call hasLibraryBW and gitHubBW, needed a bit of tweaking. In the original version the combinators take a builder, which is an ordinary function, so it can just be called. Now, however, the function is wrapped in a newtype, so a bit of unwrapping is necessary:

Once again it’s rather small differences from the code in the article.

The final combinator, which I call travisBW, actually needed no changes at all. I only rewrote it using a when clause, because I prefer that style over let:

Finally, to show that this implementation hasn’t really changed the behaviour
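
(The post's code snippets did not survive syndication here, so the following is a rough self-contained sketch of the shape of the Traced-based builder, with my own simplified Settings and Project types, and with Traced and (=>>) inlined in place of the comonad package's Control.Comonad.Traced.)

```haskell
-- A minimal inlined Traced comonad, standing in for Control.Comonad.Traced.
newtype Traced m a = Traced { runTraced :: m -> a }

traced :: (m -> a) -> Traced m a
traced = Traced

extract :: Monoid m => Traced m a -> a
extract (Traced f) = f mempty

(=>>) :: Monoid m => Traced m a -> (Traced m a -> b) -> Traced m b
Traced g =>> f = Traced (\m -> f (traced (\m' -> g (m <> m'))))

-- Hypothetical simplified Settings; the original post's fields differ.
data Settings = Settings { hasLibrary :: Bool, gitHub :: Bool }

instance Semigroup Settings where
  a <> b = Settings (hasLibrary a || hasLibrary b) (gitHub a || gitHub b)

instance Monoid Settings where
  mempty = Settings False False

newtype Project = Project { projectSettings :: Settings }

type ProjectBuilder = Traced Settings Project

-- The only change from the plain-function version: wrap with traced.
buildProjectW :: ProjectBuilder
buildProjectW = traced Project

-- Combinators unwrap the newtype with runTraced, as the prose describes.
hasLibraryBW :: ProjectBuilder -> Project
hasLibraryBW builder = runTraced builder (Settings True False)

gitHubBW :: ProjectBuilder -> Project
gitHubBW builder = runTraced builder (Settings False True)
```

With these definitions, extract (buildProjectW =>> hasLibraryBW =>> gitHubBW) yields a project with both settings enabled, matching the behaviour of the unwrapped version.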

April 22, 2019 12:00 AM

April 18, 2019


Ken T Takusagawa

[nzfpggsk] Type annotations on operators

In Haskell, one can assert the type of an identifier in an expression by attaching a type annotation with "::".  For infix operators, one can attach such a type annotation by rewriting it in function notation: surround the operator with parentheses and move it from infix to prefix position:

three :: Int;
three = ((+)::Int -> Int -> Int) 1 2;

It seems impossible to attach a type annotation to an operator while keeping it in infix notation.  This is a bit problematic because a common use of an infix operator is to use many of them in series, e.g., 1 + 2 + 3 + 4, but it is awkward to rewrite a long series of uses of an infix operator into prefix notation.

ten = (+) ((+) ((+) 1 2) 3) 4

Tangentially, for addition, we could do foldl' (+) 0 [1, 2, 3, 4], but fold won't work with operators like (.), ($), or (>>=) which take operands of many different types within a series expression.  Previously: syntactic fold and fold (.) via Data.Dynamic.
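
To illustrate with my own example: a fold handles a homogeneous series fine, while a heterogeneous chain of (.) cannot even be collected into a list to fold over:

```haskell
import Data.List (foldl')

-- Folding a series works when every operand has the same type:
ten :: Int
ten = foldl' (+) 0 [1, 2, 3, 4]

-- A chain like  show . (+ 1) . length  mixes types
-- ([Char] -> Int, Int -> Int, Int -> String), so its stages cannot
-- be put into one homogeneous list and folded:
shownLength :: [Char] -> String
shownLength = show . (+ 1) . length
```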

The motivation was, if one is using a polymorphic function or operator, it may be pedagogically helpful to attach a type annotation at the point of use so the reader knows what "version" of a polymorphic function is being invoked.  The annotation gives the concrete types at the point of use, rather than the polymorphic type with type variables that library documentation will give you.

A nice (hypothetical) IDE could do type inference to figure out the type at the point of use, then display it as a popup.

by Ken ( at April 18, 2019 03:37 AM

Holden Karau

Powering Tensorflow with Big Data @ CERN Computing Seminar

Thanks for joining me on 2019-04-03 for Powering Tensorflow with Big Data. I'll update this post with the slides soon. Comment below to join in the discussion :). Talk feedback is appreciated at

by Holden Karau ( at April 18, 2019 01:58 AM

April 17, 2019

Donnacha Oisín Kidney

Probability Monads in Cubical Agda

Posted on April 17, 2019
Tags: Agda, Probability

Cubical Agda has just come out, and I’ve been playing around with it for a bit. There’s a bunch of info out there on the theory of cubical types, and Homotopy Type Theory more generally (cubical type theory is kind of like an “implementation” of Homotopy Type Theory), but I wanted to make a post demonstrating cubical Agda in practice, and one of its cool uses from a programming perspective.

So What is Cubical Agda?

I don’t really know! Cubical type theory is quite complex (even for a type theory), and I’m not nearly qualified to properly explain it. In lieu of a proper first-principles explanation, then, I’ll try and give a few examples of how it differs from normal Agda, before moving on to the main example of this post.

{-# OPTIONS --cubical #-}

open import ProbabilityModule.Semirings

module ProbabilityModule.Monad {s} (rng : Semiring s) where

open import Cubical.Core.Everything
open import Cubical.Relation.Everything
open import Cubical.Foundations.Prelude hiding (_≡⟨_⟩_) renaming (_∙_ to _;_)
open import Cubical.HITs.SetTruncation
open import ProbabilityModule.Utils
One of the big annoyances in standard Agda is that we can’t prove the following:
extensionality : ∀ {f g : A → B}
           → (∀ x → f x ≡ g x)
           → f ≡ g
It’s emblematic of a wider problem in Agda: we can’t say “two things are equal if they always behave the same”. Infinite types (like streams), for instance, are often only equal via bisimulation: we can’t translate this into normal equality in standard Agda. Cubical type theory, though, has a different notion of “equality”, which allows a wide variety of things (including bisimulations and extensional proofs) to be translated into a proper equality
extensionality = funExt
One such thing we can promote to a “proper equality” is an isomorphism. In the cubical repo this is used to prove things about binary numbers: by proving that there’s an isomorphism between the Peano numbers and binary numbers, they can lift any properties of the Peano numbers to the binary numbers.

So those are two useful examples, but the most interesting use I’ve seen so far is the following:

Higher Inductive Types

Higher Inductive Types are an extension of normal inductive types, like the list:
module NormalList where
 data List {a} (A : Set a) : Set a where
   [] : List A
   _∷_ : A → List A → List A

They allow us to add new equations to a type, as well as constructors. To demonstrate what this means, as well as why you’d want it, I’m going to talk about free objects.

Very informally, a free object on some algebra is the minimal type which satisfies the laws of the algebra. Lists, for instance, are the free monoid. They satisfy all of the monoid laws ($\bullet$ is ++ and $\epsilon$ is []):

$$(x \bullet y) \bullet z = x \bullet (y \bullet z)$$ $$x \bullet \epsilon = x$$ $$\epsilon \bullet x = x$$

But nothing else. That means they don’t satisfy any extra laws (like, for example, commutativity), and they don’t have any extra structure they don’t need.

How did we get to the definition of lists from the monoid laws, though? It doesn’t look anything like them. It would be nice if there was some systematic way to construct the corresponding free object given the laws of an algebra. Unfortunately, in normal Agda, this isn’t possible. Consider, for instance, if we added the commutativity law to the algebra: $$x \bullet y = y \bullet x$$ Not only is it not obvious how we’d write the corresponding free object, it’s actually not possible in normal Agda!

This kind of problem comes up a lot: we have a type, and we want it to obey just one more equation, but there is no inductive type which does so. Higher Inductive Types solve the problem in quite a straightforward way. So we want lists to satisfy another equation? Well, just add it to the definition!

module OddList where
  data List {a} (A : Set a) : Set a where
    [] : List A
    _∷_ : A → List A → List A
    comm : ∀ xs ys → xs ++ ys ≡ ys ++ xs

  postulate _++_ : List A → List A → List A
Now, when we write a function that processes lists, Agda will check that the function behaves the same on xs ++ ys and ys ++ xs. As an example, here’s how you might define the free monoid as a HIT:
data FreeMonoid {a} (A : Set a) : Set a where
  [_] : A → FreeMonoid A
  _∙_ : FreeMonoid A → FreeMonoid A → FreeMonoid A
  ε : FreeMonoid A
  ∙ε : ∀ x → x ∙ ε ≡ x
  ε∙ : ∀ x → ε ∙ x ≡ x
  assoc : ∀ x y z → (x ∙ y) ∙ z ≡ x ∙ (y ∙ z)

It’s quite a satisfying definition, and very easy to see how we got to it from the monoid laws.

Now, when we write functions, we have to prove that those functions themselves also obey the monoid laws. For instance, here’s how we would take the length:
module Length where
  open import ProbabilityModule.Semirings.Nat
  open Semiring +-*-�

  length : FreeMonoid A → ℕ
  length [ x ] = 1
  length (xs ∙ ys) = length xs + length ys
  length ε = 0
  length (∙ε xs i) = +0 (length xs) i
  length (ε∙ xs i) = 0+ (length xs) i
  length (assoc xs ys zs i) = +-assoc (length xs) (length ys) (length zs) i

The first three clauses are the actual function: they deal with the three normal constructors of the type. The next three clauses prove that those previous clauses obey the equalities defined on the type.

With the preliminary stuff out of the way, let’s get on to the type I wanted to talk about:


First things first, let’s remember the classic definition of the probability monad:
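
(The code blocks from the original post are missing in this syndicated copy; as a sketch, the classic probability monad in Haskell, with my own names such as Prob, looks something like the following, together with two list representations of the same fair coin.)

```haskell
import Data.Ratio ((%))

-- A distribution is a list of outcomes paired with probabilities.
newtype Prob a = Prob { runProb :: [(a, Rational)] }

instance Functor Prob where
  fmap f (Prob xs) = Prob [ (f x, p) | (x, p) <- xs ]

instance Applicative Prob where
  pure x = Prob [ (x, 1) ]
  Prob fs <*> Prob xs = Prob [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Prob where
  Prob xs >>= k = Prob [ (y, p * q) | (x, p) <- xs, (y, q) <- runProb (k x) ]

-- The same distribution, two different representations:
coin1, coin2 :: Prob Bool
coin1 = Prob [ (True, 1 % 2), (False, 1 % 2) ]
coin2 = Prob [ (False, 1 % 2), (True, 1 % 4), (True, 1 % 4) ]
```

Here coin1 and coin2 denote the same distribution even though the lists differ, which is exactly the redundancy the dup and com equations below quotient away.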

Definitionally speaking, this doesn’t really represent what we’re talking about. For instance, the following two things express the same distribution, but have different representations:

So it’s the perfect candidate for an extra equality clause like we had above.

Second, in an effort to generalise, we won’t deal specifically with Rational, and instead we’ll use any semiring. After all of that, we get the following definition:

open Semiring rng

module Initial where
 infixr 5 _&_∷_
 data � (A : Set a) : Set (a ⊔ s) where
   []  : � A
   _&_∷_ : (p : R) → (x : A) → � A → � A
   dup : ∀ p q x xs → p & x ∷ q & x ∷ xs ≡ p + q & x ∷ xs
   com : ∀ p x q y xs → p & x ∷ q & y ∷ xs ≡ q & y ∷ p & x ∷ xs
   del : ∀ x xs → 0# & x ∷ xs ≡ xs

The three extra conditions are pretty sensible: the first removes duplicates, the second makes things commutative, and the third removes impossible events.

Let’s get to writing some functions, then:

 ∫ : (A → R) → � A → R
 ∫ f [] = 0#
 ∫ f (p & x ∷ xs) = p * f x + ∫ f xs
 ∫ f (dup p q x xs i) = begin[ i ]
   p * f x + (q * f x + ∫ f xs) ≡˘⟨ +-assoc (p * f x) (q * f x) (∫ f xs) ⟩
   (p * f x + q * f x) + ∫ f xs ≡˘⟨ cong (_+ ∫ f xs) (⟨+⟩* p q (f x))  ⟩
   (p + q) * f x + ∫ f xs ∎
 ∫ f (com p x q y xs i) = begin[ i ]
   p * f x + (q * f y + ∫ f xs) ≡˘⟨ +-assoc (p * f x) (q * f y) (∫ f xs) ⟩
   p * f x + q * f y + ∫ f xs   ≡⟨ cong (_+ ∫ f xs) (+-comm (p * f x) (q * f y)) ⟩
   q * f y + p * f x + ∫ f xs   ≡⟨ +-assoc (q * f y) (p * f x) (∫ f xs) ⟩
   q * f y + (p * f x + ∫ f xs) ∎
 ∫ f (del x xs i) = begin[ i ]
   0# * f x + ∫ f xs ≡⟨ cong (_+ ∫ f xs) (0* (f x)) ⟩
   0# + ∫ f xs       ≡⟨ 0+ (∫ f xs) ⟩
   ∫ f xs ∎

This is much more involved than the free monoid function, but the principle is the same: we first write the actual function (on the first three lines), and then we show that the function doesn’t care about the “rewrite rules” we have in the next three clauses.

Before going any further, we will have to amend the definition a little. The problem is that if we tried to prove something about any function on our 𝒫 type, we’d have to prove equalities between equalities as well. I’m sure that this is possible, but it’s very annoying, so I’m going to use a technique I saw in this repository. We add another rule to our type, stating that all equalities on the type are themselves equal. The new definition looks like this:

infixr 5 _&_∷_
data 𝒫 (A : Set a) : Set (a ⊔ s) where
  []  : 𝒫 A
  _&_∷_ : (p : R) → (x : A) → 𝒫 A → 𝒫 A
  dup : ∀ p q x xs → p & x ∷ q & x ∷ xs ≡ p + q & x ∷ xs
  com : ∀ p x q y xs → p & x ∷ q & y ∷ xs ≡ q & y ∷ p & x ∷ xs
  del : ∀ x xs → 0# & x ∷ xs ≡ xs
  trunc : isSet (𝒫 A)


Unfortunately, after adding that case we have to deal with it explicitly in every pattern-match on 𝒫. We can get around that by writing an eliminator for the type which deals with it itself. Eliminators are often irritating to work with, though: we give up the nice pattern-matching syntax we get when we program directly. It’s a bit like having to rely on Church encodings everywhere.

However, we can get back some pattern-like syntax if we use copatterns. Here’s an example of what I mean, for folds on lists:

module ListElim where
 open NormalList
 open import ProbabilityModule.Semirings.Nat
 open Semiring +-*-ℕ renaming (_+_ to _ℕ+_)

 record [_↦_] (A : Set a) (B : Set b) : Set (a ⊔ b) where
   field
     [_][] : B
     [_]_∷_ : A → B → B
   [_]↓ : List A → B
   [ [] ]↓ = [_][]
   [ x ∷ xs ]↓ = [_]_∷_ x [ xs ]↓
 open [_↦_]
 sum-alg : [ ℕ ↦ ℕ ]
 [ sum-alg ][] = 0
 [ sum-alg ] x ∷ xs = x ℕ+ xs
 sum : List ℕ → ℕ
 sum = [ sum-alg ]↓

For the probability monad, there’s an eliminator for the whole thing, an eliminator for propositional proofs, and a normal eliminator for folding. Their definitions are quite long, but mechanical.

Eliminator Definitions
record ⟅_⇒_⟆ {a ℓ} (A : Set a) (P : 𝒫 A → Set ℓ) : Set (a ⊔ ℓ ⊔ s) where
  constructor elim
  field
    ⟅_⟆-set : ∀ {xs} → isSet (P xs)
    ⟅_⟆[] : P []
    ⟅_⟆_&_∷_ : ∀ p x xs → P xs → P (p & x ∷ xs)
  private z = ⟅_⟆[]; f = ⟅_⟆_&_∷_
  field
    ⟅_⟆-dup : (∀ p q x xs pxs → PathP (λ i → P (dup p q x xs i))
              (f p x (q & x ∷ xs) (f q x xs pxs)) (f (p + q) x xs pxs))
    ⟅_⟆-com : (∀ p x q y xs pxs → PathP (λ i → P (com p x q y xs i))
              (f p x (q & y ∷ xs) (f q y xs pxs)) (f q y (p & x ∷ xs) (f p x xs pxs)))
    ⟅_⟆-del : (∀ x xs pxs → PathP (λ i → P (del x xs i))
              (f 0# x xs pxs) pxs)
  ⟅_⟆⇓ : (xs : 𝒫 A) → P xs
  ⟅ [] ⟆⇓ = z
  ⟅ p & x ∷ xs ⟆⇓ = f p x xs ⟅ xs ⟆⇓
  ⟅ dup p q x xs i ⟆⇓ = ⟅_⟆-dup p q x xs ⟅ xs ⟆⇓ i
  ⟅ com p x q y xs i ⟆⇓ = ⟅_⟆-com p x q y xs ⟅ xs ⟆⇓ i
  ⟅ del x xs i ⟆⇓ = ⟅_⟆-del x xs ⟅ xs ⟆⇓ i
  ⟅ trunc xs ys p q i j ⟆⇓ =
    elimSquash₀ (λ xs → ⟅_⟆-set {xs}) (trunc xs ys p q) ⟅ xs ⟆⇓ ⟅ ys ⟆⇓ (cong ⟅_⟆⇓ p) (cong ⟅_⟆⇓ q) i j

open ⟅_⇒_⟆ public
elim-syntax : ∀ {a ℓ} → (A : Set a) → (𝒫 A → Set ℓ) → Set (a ⊔ ℓ ⊔ s)
elim-syntax = ⟅_⇒_⟆

syntax elim-syntax A (λ xs → Pxs) = [ xs ∈𝒫 A ⇒ Pxs ]

record ⟦_⇒_⟧ {a ℓ} (A : Set a) (P : 𝒫 A → Set ℓ) : Set (a ⊔ ℓ ⊔ s) where
  constructor elim-prop
  field
    ⟦_⟧-prop : ∀ {xs} → isProp (P xs)
    ⟦_⟧[] : P []
    ⟦_⟧_&_∷_⟨_⟩ : ∀ p x xs → P xs → P (p & x ∷ xs)
  private z = ⟦_⟧[]; f = ⟦_⟧_&_∷_⟨_⟩
  ⟦_⟧⇑ = elim
          (isProp→isSet ⟦_⟧-prop)
          z f
          (λ p q x xs pxs →
            toPathP (⟦_⟧-prop (transp (λ i → P (dup p q x xs i)) i0
            (f p x (q & x ∷ xs) (f q x xs pxs))) (f (p + q) x xs pxs)))
          (λ p x q y xs pxs → toPathP (⟦_⟧-prop (transp (λ i → P (com p x q y xs i)) i0
            (f p x (q & y ∷ xs) (f q y xs pxs))) (f q y (p & x ∷ xs) (f p x xs pxs))))
          (λ x xs pxs → toPathP (⟦_⟧-prop (transp (λ i → P (del x xs i)) i0 (f 0# x xs pxs)) pxs))
  ⟦_⟧⇓ = ⟅ ⟦_⟧⇑ ⟆⇓

open ⟦_⇒_⟧ public
elim-prop-syntax : ∀ {a ℓ} → (A : Set a) → (𝒫 A → Set ℓ) → Set (a ⊔ ℓ ⊔ s)
elim-prop-syntax = ⟦_⇒_⟧

syntax elim-prop-syntax A (λ xs → Pxs) = ⟦ xs ∈𝒫 A ⇒ Pxs ⟧

record [_↦_] {a b} (A : Set a) (B : Set b) : Set (a ⊔ b ⊔ s) where
  constructor rec
  field
    [_]-set  : isSet B
    [_]_&_∷_ : R → A → B → B
    [_][]    : B
  private f = [_]_&_∷_; z = [_][]
  field
    [_]-dup : ∀ p q x xs → f p x (f q x xs) ≡ f (p + q) x xs
    [_]-com : ∀ p x q y xs → f p x (f q y xs) ≡ f q y (f p x xs)
    [_]-del : ∀ x xs → f 0# x xs ≡ xs
  [_]⇑ = elim
            [_]-set
            z
            (λ p x _ xs → f p x xs)
            (λ p q x xs → [_]-dup p q x)
            (λ p x q y xs → [_]-com p x q y)
            (λ x xs → [_]-del x)
  [_]↓ = ⟅ [_]⇑ ⟆⇓
open [_↦_] public

Here’s one in action, to define map:

map : (A → B) → 𝒫 A → 𝒫 B
map = λ f → [ map′ f ]↓
  module Map where
  map′ : (A → B) → [ A ↦ 𝒫 B ]
  [ map′ f ] p & x ∷ xs = p & f x ∷ xs
  [ map′ f ][] = []
  [ map′ f ]-set = trunc
  [ map′ f ]-dup p q x xs = dup p q (f x) xs
  [ map′ f ]-com p x q y xs = com p (f x) q (f y) xs
  [ map′ f ]-del x xs = del (f x) xs

And here’s how we’d define union, and then prove that it’s associative:

infixr 5 _∪_
_∪_ : 𝒫 A → 𝒫 A → 𝒫 A
_∪_ = λ xs ys → [ union ys ]↓ xs
  module Union where
  union : 𝒫 A → [ A ↦ 𝒫 A ]
  [ union ys ]-set = trunc
  [ union ys ] p & x ∷ xs = p & x ∷ xs
  [ union ys ][] = ys
  [ union ys ]-dup = dup
  [ union ys ]-com = com
  [ union ys ]-del = del

∪-assoc : (xs ys zs : 𝒫 A) → xs ∪ (ys ∪ zs) ≡ (xs ∪ ys) ∪ zs
∪-assoc = λ xs ys zs → ⟦ ∪-assoc′ ys zs ⟧⇓ xs
  module UAssoc where
  ∪-assoc′ : ∀ ys zs → ⟦ xs ∈𝒫 A ⇒ xs ∪ (ys ∪ zs) ≡ (xs ∪ ys) ∪ zs ⟧
  ⟦ ∪-assoc′ ys zs ⟧-prop = trunc _ _
  ⟦ ∪-assoc′ ys zs ⟧[] = refl
  ⟦ ∪-assoc′ ys zs ⟧ p & x ∷ xs ⟨ P ⟩ = cong (p & x ∷_) P

There’s a lot more stuff here that I won’t bore you with.

Boring Stuff
infixl 7 _⋊_
_⋊_ : R → 𝒫 A → 𝒫 A
_⋊_ = λ p → [ p ⋊′ ]↓
  module Cond where
  _⋊′ : R → [ A ↦ 𝒫 A ]
  [ p ⋊′ ]-set = trunc
  [ p ⋊′ ][] = []
  [ p ⋊′ ] q & x ∷ xs = p * q & x ∷ xs
  [ p ⋊′ ]-com q x r y xs = com (p * q) x (p * r) y xs
  [ p ⋊′ ]-dup q r x xs =
    p * q & x ∷ p * r & x ∷ xs ≡⟨ dup (p * q) (p * r) x xs ⟩
    p * q + p * r & x ∷ xs     ≡˘⟨ cong (_& x ∷ xs) (*⟨+⟩ p q r) ⟩
    p * (q + r) & x ∷ xs       ∎
  [ p ⋊′ ]-del x xs =
    p * 0# & x ∷ xs ≡⟨ cong (_& x ∷ xs) (*0 p) ⟩
    0# & x ∷ xs     ≡⟨ del x xs ⟩
    xs              ∎

∫ : (A → R) → 𝒫 A → R
∫ = λ f → [ ∫′ f ]↓
  module Expect where
  ∫′ : (A → R) → [ A ↦ R ]
  [ ∫′ f ]-set = sIsSet
  [ ∫′ f ] p & x ∷ xs = p * f x + xs
  [ ∫′ f ][] = 0#
  [ ∫′ f ]-dup p q x xs =
    p * f x + (q * f x + xs) ≡˘⟨ +-assoc (p * f x) (q * f x) xs ⟩
    (p * f x + q * f x) + xs ≡˘⟨ cong (_+ xs) (⟨+⟩* p q (f x)) ⟩
    (p + q) * f x + xs ∎
  [ ∫′ f ]-com p x q y xs =
    p * f x + (q * f y + xs) ≡˘⟨ +-assoc (p * f x) (q * f y) (xs) ⟩
    p * f x + q * f y + xs   ≡⟨ cong (_+ xs) (+-comm (p * f x) (q * f y)) ⟩
    q * f y + p * f x + xs   ≡⟨ +-assoc (q * f y) (p * f x) (xs) ⟩
    q * f y + (p * f x + xs) ∎
  [ ∫′ f ]-del x xs =
    0# * f x + xs ≡⟨ cong (_+ xs) (0* (f x)) ⟩
    0# + xs       ≡⟨ 0+ (xs) ⟩
    xs            ∎

syntax ∫ (λ x → e) = ∫ e 𝑑 x

pure : A → 𝒫 A
pure x = 1# & x ∷ []

∪-cons : ∀ p (x : A) xs ys → xs ∪ p & x ∷ ys ≡ p & x ∷ xs ∪ ys
∪-cons = λ p x xs ys → ⟦ ∪-cons′ p x ys ⟧⇓ xs
  module UCons where
  ∪-cons′ : ∀ p x ys → ⟦ xs ∈𝒫 A ⇒ xs ∪ p & x ∷ ys ≡ p & x ∷ xs ∪ ys ⟧
  ⟦ ∪-cons′ p x ys ⟧-prop = trunc _ _
  ⟦ ∪-cons′ p x ys ⟧[] = refl
  ⟦ ∪-cons′ p x ys ⟧ r & y ∷ xs ⟨ P ⟩ = cong (r & y ∷_) P ; com r y p x (xs ∪ ys)

⋊-distribʳ : ∀ p q → (xs : 𝒫 A) → p ⋊ xs ∪ q ⋊ xs ≡ (p + q) ⋊ xs
⋊-distribʳ = λ p q → ⟦ ⋊-distribʳ′ p q ⟧⇓
  module JDistrib where
  ⋊-distribʳ′ : ∀ p q → ⟦ xs ∈𝒫 A ⇒ p ⋊ xs ∪ q ⋊ xs ≡ (p + q) ⋊ xs ⟧
  ⟦ ⋊-distribʳ′ p q ⟧-prop = trunc _ _
  ⟦ ⋊-distribʳ′ p q ⟧[] = refl
  ⟦ ⋊-distribʳ′ p q ⟧ r & x ∷ xs ⟨ P ⟩ =
    p ⋊ (r & x ∷ xs) ∪ q ⋊ (r & x ∷ xs)   ≡⟨ ∪-cons (q * r) x (p ⋊ (r & x ∷ xs)) (q ⋊ xs)  ⟩
    q * r & x ∷ p ⋊ (r & x ∷ xs) ∪ q ⋊ xs ≡⟨ cong (_∪ q ⋊ xs) (dup (q * r) (p * r) x (p ⋊ xs)) ⟩
    q * r + p * r & x ∷ p ⋊ xs ∪ q ⋊ xs   ≡˘⟨ cong (_& x ∷ (p ⋊ xs ∪ q ⋊ xs)) (⟨+⟩* q p r) ⟩
    (q + p) * r & x ∷ p ⋊ xs ∪ q ⋊ xs     ≡⟨ cong ((q + p) * r & x ∷_) P ⟩
    (q + p) * r & x ∷ (p + q) ⋊ xs        ≡⟨ cong (λ pq → pq * r & x ∷ (p + q) ⋊ xs) (+-comm q p) ⟩
    (p + q) * r & x ∷ (p + q) ⋊ xs        ≡⟨⟩
    _⋊_ (p + q) (r & x ∷ xs) ∎

⋊-distribˡ : ∀ p → (xs ys : 𝒫 A) → p ⋊ xs ∪ p ⋊ ys ≡ p ⋊ (xs ∪ ys)
⋊-distribˡ = λ p xs ys → ⟦ ⋊-distribˡ′ p ys ⟧⇓ xs
  module JDistribL where
  ⋊-distribˡ′ : ∀ p ys → ⟦ xs ∈𝒫 A ⇒ p ⋊ xs ∪ p ⋊ ys ≡ p ⋊ (xs ∪ ys) ⟧
  ⟦ ⋊-distribˡ′ p ys ⟧-prop = trunc _ _
  ⟦ ⋊-distribˡ′ p ys ⟧[] = refl
  ⟦ ⋊-distribˡ′ p ys ⟧ q & x ∷ xs ⟨ P ⟩ =
    p ⋊ (q & x ∷ xs) ∪ p ⋊ ys ≡⟨⟩
    p * q & x ∷ p ⋊ xs ∪ p ⋊ ys ≡⟨ cong (p * q & x ∷_) P ⟩
    p * q & x ∷ p ⋊ (xs ∪ ys) ≡⟨⟩
    p ⋊ ((q & x ∷ xs) ∪ ys) ∎

∪-idʳ : (xs : 𝒫 A) → xs ∪ [] ≡ xs
∪-idʳ = ⟦ ∪-idʳ′ ⟧⇓
  module UIdR where
  ∪-idʳ′ : ⟦ xs ∈𝒫 A ⇒ xs ∪ [] ≡ xs ⟧
  ⟦ ∪-idʳ′ ⟧-prop = trunc _ _
  ⟦ ∪-idʳ′ ⟧[] = refl
  ⟦ ∪-idʳ′ ⟧ p & x ∷ xs ⟨ P ⟩ = cong (p & x ∷_) P

∪-comm : (xs ys : 𝒫 A) → xs ∪ ys ≡ ys ∪ xs
∪-comm = λ xs ys → ⟦ ∪-comm′ ys ⟧⇓ xs
  module UComm where
  ∪-comm′ : ∀ ys → ⟦ xs ∈𝒫 A ⇒ xs ∪ ys ≡ ys ∪ xs ⟧
  ⟦ ∪-comm′ ys ⟧-prop = trunc _ _
  ⟦ ∪-comm′ ys ⟧[] = sym (∪-idʳ ys)
  ⟦ ∪-comm′ ys ⟧ p & x ∷ xs ⟨ P ⟩ = cong (p & x ∷_) P ; sym (∪-cons p x ys xs)

0⋊ : (xs : 𝒫 A) → 0# ⋊ xs ≡ []
0⋊ = ⟦ 0⋊′ ⟧⇓
  module ZeroJ where
  0⋊′ : ⟦ xs ∈𝒫 A ⇒ 0# ⋊ xs ≡ [] ⟧
  ⟦ 0⋊′ ⟧-prop = trunc _ _
  ⟦ 0⋊′ ⟧[] = refl
  ⟦ 0⋊′ ⟧ p & x ∷ xs ⟨ P ⟩ =
    0# ⋊ (p & x ∷ xs)    ≡⟨⟩
    0# * p & x ∷ 0# ⋊ xs ≡⟨ cong (_& x ∷ 0# ⋊ xs) (0* p) ⟩
    0# & x ∷ 0# ⋊ xs     ≡⟨ del x (0# ⋊ xs) ⟩
    0# ⋊ xs              ≡⟨ P ⟩
    [] ∎

However, I can demonstrate the monadic bind:

_>>=_ : 𝒫 A → (A → 𝒫 B) → 𝒫 B
xs >>= f = [ f =<< ]↓ xs
  module Bind where
  _=<< : (A → 𝒫 B) → [ A ↦ 𝒫 B ]
  [ f =<< ] p & x ∷ xs = p ⋊ (f x) ∪ xs
  [ f =<< ][] = []
  [ f =<< ]-set = trunc
  [ f =<< ]-del x xs = cong (_∪ xs) (0⋊ (f x))
  [ f =<< ]-dup p q x xs =
    p ⋊ (f x) ∪ q ⋊ (f x) ∪ xs   ≡⟨ ∪-assoc (p ⋊ f x) (q ⋊ f x) xs ⟩
    (p ⋊ (f x) ∪ q ⋊ (f x)) ∪ xs ≡⟨ cong (_∪ xs) (⋊-distribʳ p q (f x) ) ⟩
    _⋊_ (p + q) (f x) ∪ xs ∎
  [ f =<< ]-com p x q y xs =
    p ⋊ (f x) ∪ q ⋊ (f y) ∪ xs   ≡⟨ ∪-assoc (p ⋊ f x) (q ⋊ f y) xs ⟩
    (p ⋊ (f x) ∪ q ⋊ (f y)) ∪ xs ≡⟨ cong (_∪ xs) (∪-comm (p ⋊ f x) (q ⋊ f y)) ⟩
    (q ⋊ (f y) ∪ p ⋊ (f x)) ∪ xs ≡˘⟨ ∪-assoc (q ⋊ f y) (p ⋊ f x) xs ⟩
    q ⋊ (f y) ∪ p ⋊ (f x) ∪ xs ∎

And we can prove the monad laws, also:

Proofs of Monad Laws

1⋊ : (xs : 𝒫 A) → 1# ⋊ xs ≡ xs
1⋊ = ⟦ 1⋊′ ⟧⇓
  module OneJoin where
  1⋊′ : ⟦ xs ∈𝒫 A ⇒ 1# ⋊ xs ≡ xs ⟧
  ⟦ 1⋊′ ⟧-prop = trunc _ _
  ⟦ 1⋊′ ⟧[] = refl
  ⟦ 1⋊′ ⟧ p & x ∷ xs ⟨ P ⟩ =
    1# ⋊ (p & x ∷ xs) ≡⟨⟩
    1# * p & x ∷ 1# ⋊ xs ≡⟨ cong (_& x ∷ 1# ⋊ xs) (1* p) ⟩
    p & x ∷ 1# ⋊ xs ≡⟨ cong (p & x ∷_) P ⟩
    p & x ∷ xs ∎

>>=-distrib : (xs ys : 𝒫 A) (g : A → 𝒫 B) → (xs ∪ ys) >>= g ≡ (xs >>= g) ∪ (ys >>= g)
>>=-distrib = λ xs ys g → ⟦ >>=-distrib′ ys g ⟧⇓ xs
  module BindDistrib where
  >>=-distrib′ : (ys : 𝒫 A) (g : A → 𝒫 B) → ⟦ xs ∈𝒫 A ⇒ ((xs ∪ ys) >>= g) ≡ (xs >>= g) ∪ (ys >>= g) ⟧
  ⟦ >>=-distrib′ ys g ⟧-prop = trunc _ _
  ⟦ >>=-distrib′ ys g ⟧[] = refl
  ⟦ >>=-distrib′ ys g ⟧ p & x ∷ xs ⟨ P ⟩ =
    (((p & x ∷ xs) ∪ ys) >>= g) ≡⟨⟩
    (p & x ∷ xs ∪ ys) >>= g ≡⟨⟩
    p ⋊ g x ∪ ((xs ∪ ys) >>= g) ≡⟨ cong (p ⋊ g x ∪_) P ⟩
    p ⋊ g x ∪ ((xs >>= g) ∪ (ys >>= g)) ≡⟨ ∪-assoc (p ⋊ g x) (xs >>= g) (ys >>= g) ⟩
    (p ⋊ g x ∪ (xs >>= g)) ∪ (ys >>= g) ≡⟨⟩
    ((p & x ∷ xs) >>= g) ∪ (ys >>= g) ∎

*-assoc-⋊ : ∀ p q (xs : 𝒫 A) → (p * q) ⋊ xs ≡ p ⋊ (q ⋊ xs)
*-assoc-⋊ = λ p q → ⟦ *-assoc-⋊′ p q ⟧⇓
  module MAssocJ where
  *-assoc-⋊′ : ∀ p q → ⟦ xs ∈𝒫 A ⇒ (p * q) ⋊ xs ≡ p ⋊ (q ⋊ xs) ⟧
  ⟦ *-assoc-⋊′ p q ⟧-prop = trunc _ _
  ⟦ *-assoc-⋊′ p q ⟧[] = refl
  ⟦ *-assoc-⋊′ p q ⟧ r & x ∷ xs ⟨ P ⟩ =
    p * q ⋊ (r & x ∷ xs) ≡⟨⟩
    p * q * r & x ∷ (p * q ⋊ xs) ≡⟨ cong (_& x ∷ (p * q ⋊ xs)) (*-assoc p q r) ⟩
    p * (q * r) & x ∷ (p * q ⋊ xs) ≡⟨ cong (p * (q * r) & x ∷_) P ⟩
    p * (q * r) & x ∷ (p ⋊ (q ⋊ xs)) ≡⟨⟩
    p ⋊ (q ⋊ (r & x ∷ xs)) ∎

⋊-assoc->>= : ∀ p (xs : 𝒫 A) (f : A → 𝒫 B) → (p ⋊ xs) >>= f ≡ p ⋊ (xs >>= f)
⋊-assoc->>= = λ p xs f → ⟦ ⋊-assoc->>=′ p f ⟧⇓ xs
  module JDistribB where
  ⋊-assoc->>=′ : ∀ p (f : A → 𝒫 B) → ⟦ xs ∈𝒫 A ⇒ (p ⋊ xs) >>= f ≡ p ⋊ (xs >>= f) ⟧
  ⟦ ⋊-assoc->>=′ p f ⟧-prop = trunc _ _
  ⟦ ⋊-assoc->>=′ p f ⟧[] = refl
  ⟦ ⋊-assoc->>=′ p f ⟧ q & x ∷ xs ⟨ P ⟩ =
    (p ⋊ (q & x ∷ xs)) >>= f ≡⟨⟩
    (p * q & x ∷ p ⋊ xs) >>= f ≡⟨⟩
    ((p * q) ⋊ f x) ∪ ((p ⋊ xs) >>= f) ≡⟨ cong (((p * q) ⋊ f x) ∪_) P ⟩
    ((p * q) ⋊ f x) ∪ (p ⋊ (xs >>= f)) ≡⟨ cong (_∪ (p ⋊ (xs >>= f))) (*-assoc-⋊ p q (f x)) ⟩
    (p ⋊ (q ⋊ f x)) ∪ (p ⋊ (xs >>= f)) ≡⟨ ⋊-distribˡ p (q ⋊ f x) (xs >>= f) ⟩
    p ⋊ ((q & x ∷ xs) >>= f) ∎

>>=-idˡ : (x : A) → (f : A → 𝒫 B)
      → (pure x >>= f) ≡ f x
>>=-idˡ x f =
  pure x >>= f ≡⟨⟩
  (1# & x ∷ []) >>= f ≡⟨⟩
  1# ⋊ f x ∪ [] >>= f ≡⟨⟩
  1# ⋊ f x ∪ [] ≡⟨ ∪-idʳ (1# ⋊ f x) ⟩
  1# ⋊ f x ≡⟨ 1⋊ (f x) ⟩
  f x ∎

>>=-idʳ : (xs : 𝒫 A) → xs >>= pure ≡ xs
>>=-idʳ = ⟦ >>=-idʳ′ ⟧⇓
  module Law1 where
  >>=-idʳ′ : ⟦ xs ∈𝒫 A ⇒ xs >>= pure ≡ xs ⟧
  ⟦ >>=-idʳ′ ⟧-prop = trunc _ _
  ⟦ >>=-idʳ′ ⟧[] = refl
  ⟦ >>=-idʳ′ ⟧ p & x ∷ xs ⟨ P ⟩ =
    ((p & x ∷ xs) >>= pure) ≡⟨⟩
    p ⋊ (pure x) ∪ (xs >>= pure) ≡⟨⟩
    p ⋊ (1# & x ∷ []) ∪ (xs >>= pure) ≡⟨⟩
    p * 1# & x ∷ [] ∪ (xs >>= pure) ≡⟨⟩
    p * 1# & x ∷ (xs >>= pure) ≡⟨ cong (_& x ∷ (xs >>= pure)) (*1 p) ⟩
    p & x ∷ xs >>= pure ≡⟨ cong (p & x ∷_) P ⟩
    p & x ∷ xs ∎

>>=-assoc : (xs : 𝒫 A) → (f : A → 𝒫 B) → (g : B → 𝒫 C)
      → ((xs >>= f) >>= g) ≡ xs >>= (λ x → f x >>= g)
>>=-assoc = λ xs f g → ⟦ >>=-assoc′ f g ⟧⇓ xs
  module Law3 where
  >>=-assoc′ : (f : A → 𝒫 B) → (g : B → 𝒫 C) → ⟦ xs ∈𝒫 A ⇒ ((xs >>= f) >>= g) ≡ xs >>= (λ x → f x >>= g) ⟧
  ⟦ >>=-assoc′ f g ⟧-prop = trunc _ _
  ⟦ >>=-assoc′ f g ⟧[] = refl
  ⟦ >>=-assoc′ f g ⟧ p & x ∷ xs ⟨ P ⟩ =
    (((p & x ∷ xs) >>= f) >>= g) ≡⟨⟩
    ((p ⋊ f x ∪ (xs >>= f)) >>= g) ≡⟨ >>=-distrib (p ⋊ f x) (xs >>= f) g ⟩
    ((p ⋊ f x) >>= g) ∪ ((xs >>= f) >>= g) ≡⟨ cong ((p ⋊ f x) >>= g ∪_) P ⟩
    ((p ⋊ f x) >>= g) ∪ (xs >>= (λ y → f y >>= g)) ≡⟨ cong (_∪ (xs >>= (λ y → f y >>= g))) (⋊-assoc->>= p (f x) g) ⟩
    p ⋊ (f x >>= g) ∪ (xs >>= (λ y → f y >>= g)) ≡⟨⟩
    ((p & x ∷ xs) >>= (λ y → f y >>= g)) ∎


I’ve really enjoyed working with cubical Agda so far, and the proofs above were a pleasure to write. I think I can use the above definition to get a workable differential privacy monad, also.

Anyway, all the code is available here.

by Donnacha Oisín Kidney at April 17, 2019 12:00 AM

April 16, 2019

Neil Mitchell

Code Statistics and Measuring Contributions

Summary: The only way to understand a code base is to ask someone who works on it.

This weekend a relative asked me how we can tell who wrote the code behind the black hole image, and was interested in the stats available on GitHub. There are lots of available stats, but almost every stat can be misleading in some circumstances. The only people who have the context to interpret the stats are those who work on the project, hence my default approach to assessing a project is to ask someone who works on it, with the understanding that they may look at relevant stats on GitHub or similar. In this post let's go through some of the reasons that a simplistic interpretation of the stats is often wrong.

These remarks all apply whether you're trying to assign credit for a photo, trying to do performance reviews for an employee or trying to manage a software project.

What to measure

There are broadly two ways to measure activity on the code in a repo. The first is additions/deletions of lines of code, where a modified line of code is usually measured as an addition and deletion. The second is number of commits or pull requests, which measures how many batches of changes are made. The problem with the latter is that different people have different styles - some produce big batches, some tiny batches - a factor of 10 between different developers is not unusual. There are also policy reasons that commits may be misleading - some projects squash multiple commits down to one when merging. The number of lines of code gives a better measure of what has changed, but it's merely better, not good - the rest of this post assumes people are focusing on number of lines of code changed.
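Both measures are easy to pull from a repository; as a sketch ("Jane Dev" is a placeholder author name):

```shell
# Commits per author — sensitive to each developer's batching style:
git shortlog -s -n HEAD

# Lines added/deleted per author — a rough volume measure:
git log --author="Jane Dev" --pretty=tformat: --numstat |
  awk '{ add += $1; del += $2 } END { printf "+%d -%d\n", add, del }'
```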

All code is equal

Treating number of lines changed as the contribution assumes that every line is equally hard - but that's far from the case. At a previous company I worked on code that ranged from the internals of a compiler, to intricate C++ libraries, to Haskell GUIs. I estimate that I could produce 100x the volume of Haskell GUI code compared to C++ library code. Other colleagues worked only on the compiler, or only on GUIs - vastly changing how much code they produced per hour.

Similarly, each line of code is not equally important. Last week I wrote a few hundred lines of code. Of those, nearly all were done on Monday, and the remainder of the week involved a single line that is ferociously difficult with lots of obscure side conditions (libraries and link order...). That one line is super valuable, but simplistic measuring suggests I napped all Tuesday and Wednesday.

Code is attributed properly

Developers typically have user handles or email addresses that are used for code contributions. I currently have at least two handles, and in the past when we did stats on a $WORK project there were 6 different email addresses that I claimed ownership of. As a consequence, my work shows up under lots of different names, and counting it can be difficult. The longer a project runs, the more chance of developers changing identity.
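Git's `.mailmap` file exists to consolidate exactly this kind of identity sprawl; a sketch (all names and addresses invented):

```
# .mailmap — fold old emails and handles into one canonical identity
Jane Dev <jane@example.com> <jdev@old-employer.example>
Jane Dev <jane@example.com> janedev99 <janedev99@users.noreply.github.com>
```

With this file at the repository root, `git shortlog -s -n -e` reports all three identities as a single author.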

The person who changed code did the work

A big part of software engineering is making old code obsolete. I was recently involved in deleting many thousands of lines that were no longer necessary. With a small team, we created a new project, implemented it, converted 90% of the uses over to the new code, and then stopped. Separately, someone else did the last 10% of the conversion, and then was able to delete a huge number of lines of code. There was definitely work in deleting the final bit of code, but the "labour" involved in that final deletion was mostly carried out months ago by others.

Similarly, when copying a new project in (often called vendoring) there is a big commit to add a lot of new code that was originally written by others, but which gets attributed to a named individual.

All code is in one place

Often projects will incorporate library code. For example, the official contribution of Niklas Broberg to HLint is 8 lines. However, he's called out explicitly in the README as someone whose contributions were essential to the project. In this case, because he wrote a library called haskell-src-exts without which HLint could not exist, and then continued to improve it for the benefit of HLint for many years.

Furthermore, projects like HLint rely on a compiler, libraries, operating system, and even a version control system. Usually these get overlooked when giving credit since they are relatively old and shared between many projects - but they are an essential part of getting things to work.

More code is better

The only purpose of code is to do a thing - whatever that thing might be. In all other ways, code is a liability - it has to be read, tested, compiled, etc. Given the choice between 10 lines or 1,000,000 lines of code, I would always prefer 10 lines if they did the same thing. A smarter programmer who can do more with fewer lines of code is better. The famous quote attributed to Bill Gates is still true many decades later:

Measuring programming progress by lines of code is like measuring aircraft building progress by weight.

Code is the essence

Measuring code suggests that code is the thing that matters. The code certainly does matter, but the code is just a representation of an underlying algorithm. The code follows a high-level design. Often much more significant contributions are made by picking the right design, algorithm, approach etc.

Code is all that matters

In a large project there is code, but the code doesn't exist in a vacuum. There are other code-adjacent tasks to be performed - discussions, mentoring, teaching, reporting issues, buying computers etc. Many of these are never reflected in the code, yet if omitted, the code wouldn't happen, or would happen slower.

by Neil Mitchell ( at April 16, 2019 09:24 PM

Functional Jobs

Scala developer for NetLogo at The Center for Connected Learning (Full-time)

The CCL Lab at Northwestern University is looking for a full-time Scala/Java Software Developer to work on the NetLogo desktop application, a computational modeling environment widely-used in both education and research.

This Software Developer position is based at Northwestern University's Center for Connected Learning and Computer-Based Modeling (CCL), working in a small collaborative development team in a university research group that also includes professors, postdocs, graduate students, and undergraduates, supporting the needs of multiple research projects. A major focus of the role is on the development of NetLogo, an open-source modeling environment for both education and scientific research. CCL grants also involve development work on other associated projects for NetLogo, including multi-user simulation and other tools for research and educational NSF grants involving building NetLogo-based science curricula for schools.

NetLogo is a programming language and an agent-based modeling environment. The NetLogo language is a dialect of Logo/Lisp specialized for building agent-based simulations of natural and social phenomena. NetLogo has hundreds of thousands of users ranging from grade school students to advanced researchers. A collaborative extension of NetLogo, called HubNet, enables groups of participants to run multi-user simulation activities in classrooms and distributed participatory simulations in social science research. NetLogo also features an expansive API that members of the NetLogo community use to extend the language to integrate with software like GIS databases, Python, R, and Mathematica, and to interface with hardware devices like Arduino boards and the Microsoft Kinect.

Application information:

This is a full-time, on-site position, not remote. The Northwestern campus is in Evanston, Illinois on the Lake Michigan shore, adjacent to Chicago and easily reachable by public transportation.

Specific Responsibilities:

  • Independently implements NetLogo features and bug fixes in Scala/Java.
  • Collaborates with the NetLogo development team and principal research investigators in planning and designing features for NetLogo and other related projects.
  • Interacts with commercial and academic partners to help determine design and functional requirements for NetLogo and other related projects.
  • Interacts with user community including responding to bug reports, questions, and suggestions, and interacting with open-source contributors.
  • Mentors undergraduate student workers and guides Google Summer of Code participants on contributions to the NetLogo codebase; assists graduate students with issues encountered during their work.
  • Contributes to the design of web-based applications, including CCL websites and NetLogo Web, and implements features for interoperability between desktop and web platforms.
  • Performs data collection, organization, and summarization for projects; assists with organizing team meetings and maintaining team communication.
  • Performs other duties as required or assigned.

Minimum Qualifications:

  • A bachelor's degree in computer science or a closely related field or the equivalent combination of education, training and experience from which comparable skills and abilities may be acquired.
  • Demonstrated experience and enthusiasm for writing clean, modular, well-tested code.

Preferred Qualifications:

  • Experience with working effectively as part of a small software development team, including close collaboration, distributed version control, and automated testing;
  • Experience with at least one JVM language, Scala strongly preferred.
  • Experience developing GUI applications, especially Java Swing-based applications.
  • Experience with programming language design and implementation, functional programming (especially Haskell or Lisp), and compilers.
  • Interest in and experience with computer-based modeling and simulation, especially agent-based simulation.
  • Interest in and experience with distributed, multiplayer, networked systems like HubNet.
  • Experience working on research projects in an academic environment.
  • Experience with open-source software development and supporting the growth of an open-source community.
  • Experience with Linux/Unix system administration.
  • Experience with building web-based applications, both server-side and client-side components, particularly with HTML5, JavaScript, and/or CoffeeScript.
  • Interest in education and an understanding of secondary school math and science content.

As per Northwestern University policy, this position requires a criminal background check. Successful applicants will need to submit to a criminal background check prior to employment.

Northwestern University is an Equal Opportunity, Affirmative Action Employer of all protected classes including veterans and individuals with disabilities.

Get information on how to apply for this position.

April 16, 2019 02:48 PM

April 15, 2019

Monday Morning Haskell

Declaring Victory! (And Starting Again!)


In last week's article, we used a neat little algorithm to generate random mazes for our game. This was cool, but nothing happens yet when we "finish" the maze! We'll change that this week. We'll allow the game to continue re-generating new mazes when we're finished! You can find all the code for this part on the part-2 branch on the Github repository for this project!

If you're a beginner to Haskell, hopefully this series is helping you learn simple ways to do cool things! If you're a little overwhelmed, try reading our Liftoff Series first!


Our objectives for this part are pretty simple. We want to make it so that when we reach the "end" location, we get a "victory" message and can restart the game by pressing a key. We'll get a new maze when we do this. There are a few components to this:

  1. Reaching the end should change a component of our World.
  2. When that component changes, we should display a message instead of the maze.
  3. Pressing "Enter" with the game in this state should start the game again with a new maze.

Sounds pretty simple! Let's get going!

Game Result

We'll start by adding a new type to represent the current "result" of our game. We'll add this piece of state to our World. As an extra little piece, we'll add a random generator to our state. We'll need this when we re-make the maze:

data GameResult = GameInProgress | GameWon
  deriving (Show, Eq)

data World = World
  { playerLocation :: Location
  , startLocation :: Location
  , endLocation :: Location
  , worldBoundaries :: Maze
  , worldResult :: GameResult
  , worldRandomGenerator :: StdGen
  }

Our generation step needs a couple small tweaks. The function itself should now return its final generator as an extra result:

generateRandomMaze :: StdGen -> (Int, Int) -> (Maze, StdGen)
generateRandomMaze gen (numRows, numColumns) =
  (currentBoundaries finalState, randomGen finalState)
  where
    finalState = execState dfsSearch initialState

Then in our main function, we incorporate the new generator and game result into our World:

main = do
  gen <- getStdGen
  let (maze, gen') = generateRandomMaze gen (25, 25)
    (World (0, 0) (0, 0) (24, 24) maze GameInProgress gen')

Now let's fix our updating function so that it changes the game result if we hit the final location! We'll add a guard here to check for this condition and update accordingly:

updateFunc :: Float -> World -> World
updateFunc _ w
  | playerLocation w == endLocation w = w { worldResult = GameWon }
  | otherwise = w

We could do this in the eventHandler but it seems more idiomatic to let the update function handle it. If we use the event handler, we'll never see our token enter the final square. The game will jump straight to the victory screen. That would be a little odd. Here there's at least a tiny gap.

Displaying Victory!

Now our game will update properly. But we have to respond to this change by changing what the display looks like! This is a quick fix. We'll add a similar guard to our drawingFunc:

drawingFunc :: (Float, Float) -> Float -> World -> Picture
drawingFunc (xOffset, yOffset) cellSize world
  | worldResult world == GameWon =
      Translate (-275) 0 $ Scale 0.12 0.25
        (Text "Congratulations! You've won! \
              \Press enter to restart with a new maze!")
  | otherwise = ...

Note that Text here is the Gloss Picture constructor, not Data.Text. We also scale and translate it a bit to make the text appear on the screen. This is all we need to get the victory screen to appear on completion!


Restarting the Game

The last step is that we have to follow through on our process to restart the game if they hit enter! This involves changing our inputHandler to give us a brand new World. As with our other functions, we'll add a guard to handle the GameWon case:

inputHandler :: Event -> World -> World
inputHandler event w
  | worldResult w == GameWon = …
  | otherwise = case event of

We'll want to make a new case section that accounts for the user pressing the "Enter" key. All this section needs to do is call generateRandomMaze and re-initialize the world!

inputHandler event w
  | worldResult w == GameWon = case event of
      (EventKey (SpecialKey KeyEnter) Down _ _) ->
        let (newMaze, gen') = generateRandomMaze 
              (worldRandomGenerator w) (25, 25)
        in  World (0, 0) (0, 0) (24, 24) newMaze GameInProgress gen'
      _ -> w

And with that, we're done! We can restart the game and navigate random mazes to our heart's content!


The ability to restart the game is great! But if we want to make our game re-playable instead of random, we'll need some way of storing mazes. In the next part, we'll look at some code for dumping a maze to an output format. We'll also need a way to re-load from this stored representation. This will ultimately allow us to make a true game with saving and loading state.

In preparation for that, you can read our series on Parsing. You'll especially want to acquaint yourself with the Megaparsec library. We go over this in Part 4 of the series!

by James Bowen at April 15, 2019 02:30 PM

April 14, 2019

Chris Smith 2

New in CodeWorld: Share a Folder as a Gallery

This is just a quick note that in CodeWorld, you can now share a folder as a gallery. If you do so, the contents of that folder will be available as a gallery link that lets others browse through your code.

Step 1: Sign in to CodeWorld

Although you can use CodeWorld for one-off tasks without logging in, this one will require logging in. So log in to your favorite dialect of CodeWorld: the educational dialect if you're interested in using it for math education, or the Haskell dialect if you want an idiomatic Haskell environment. You'll need a Google account to log in.

Step 2: Create a shared folder.

To use this feature, you’ll need a shared folder, so let’s create one. Once you log in, you get the project browser sidebar, and at the top of that bar is a button labeled “New Folder”.

The project browser controls in CodeWorld

After you’ve typed the name of your new folder, you should notice a button on the bottom toolbar to share your folder. You can click it now, but the folder that you share will be empty. You could put things in it later, of course, but let’s do it in the other order. So hold off on sharing for a bit.

Alternative, if you’ve already got a folder of projects to share, skip ahead and you can share that instead.

Step 3: Create projects in the folder to share

At this point, your newly created folder should be highlighted in yellow, meaning it's open. As long as a folder is open, any project you save will be saved into that folder. If you closed the folder, though, you might need to click the folder again to open it.

Step 4: Share the folder

Now that you’ve got projects in your folder, you can share the folder. Previously, sharing a folder gave you a link that you could share with others to copy your folder into their account. You can still do that, but you now also have the option to share to a gallery where they can try your code without making a copy.

To share a folder, you need to select the entire folder. You can do this by clicking on it. If you have opened a project inside the folder, though, go ahead and save your work, and click the folder name itself — twice! (Once to close the folder, since it was already open, and again to select it.) Once the folder (and not a project inside of it) is selected, you should see the “Share Folder” button on the bottom toolbar. Click it!

By default, you get a link to share the folder for others to copy. To share as a gallery instead, click Share as Gallery, the blue button at the bottom.

The share dialog for a gallery

Step 5: Send your link to others

Now that you’ve got a link, you can send it to others. They’ll see a gallery that lets them click through your projects and try them out.

The CodeWorld gallery interface

Remember that sharing a folder as a gallery is still sharing the folder of code. Although it puts the running programs front and center, someone you share with can still access the code, either by clicking the link on the right of the gallery screen, or by copying the folder-share token from the URL and importing your shared folder into their own account. If you don’t want to share your source code, then you’ll still have to share one project at a time, by clicking “Share without Code” on the project share dialog.

by Chris Smith at April 14, 2019 04:29 AM

April 12, 2019

FP Complete

Christoph Breitkopf

Parsing ISO 8601 periods and version bound woes

I'm just trying to parse some JSON data. It contains durations encoded as ISO 8601 periods. For example:

{ "activity": "Running", "duration": "PT30M" }

(More fields omitted for the sake of brevity)

Specifically, my data type is a time duration, not a calendar period. So, NominalDiffTime seems the correct type:

data LogEntry = LogEntry Text NominalDiffTime

Trying to parse this with Aeson, I note that it will not accept ISO 8601 encodings for NominalDiffTime, so I search a bit and find CalendarDiffTime in Data.Time.LocalTime. Not ideal, since I really don't want calendar periods, but this is a prototype. So my next attempt is:

data LogEntry = LogEntry Text CalendarDiffTime

To my surprise, that does not compile - ghc is unable to find CalendarDiffTime. I fumble around for a few minutes and then notice that it was added quite recently to the time library. So, I add a version bound to my dependencies, saying I want time >= 1.9.

Now stack complains that it can't find that version, and I should add it as extra-deps. I do, and predictably, other packages have an upper bound on time, preventing me from using the current version.

Solution: implement my own type for durations including FromJSON and ToJSON instances. Too bad. Something like this happens often enough that I feel really productive in Haskell only when writing my own code. Using libraries often feels less productive because of version issues like this or missing or incomplete docs.
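For what it's worth, a hand-rolled duration parser can be quite small. Here is a sketch (the name parseDuration is ours, not from any library) that handles only the "PT…" hours/minutes/seconds form of ISO 8601; a real replacement type would also wire this into FromJSON and ToJSON instances:

```haskell
import Data.Char (isDigit)

-- Parse the time portion of an ISO 8601 duration, e.g. "PT30M" or
-- "PT1H20M5S", into a total number of seconds. Fractional seconds and
-- date components like "P1D" are deliberately not handled.
parseDuration :: String -> Maybe Integer
parseDuration ('P':'T':rest) = go rest 0
  where
    go :: String -> Integer -> Maybe Integer
    go [] acc = Just acc
    go s acc = case span isDigit s of
      ([], _)            -> Nothing  -- a unit letter with no digits
      (digits, 'H':more) -> go more (acc + read digits * 3600)
      (digits, 'M':more) -> go more (acc + read digits * 60)
      (digits, 'S':more) -> go more (acc + read digits)
      _                  -> Nothing  -- unknown unit letter
parseDuration _ = Nothing

main :: IO ()
main = print (parseDuration "PT30M")  -- Just 1800
```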

by bokesan ( at April 12, 2019 11:18 AM

April 10, 2019

Brandon Simmons

Sandy Maguire

Announcing Polysemy

Just kidding. Turns out I couldn't put it down.

I want to announce my new library polysemy --- it's higher-order, no-boilerplate, zero-cost free monads.

I've been working nonstop on this for a few months now, so the commentary for what the heck is going on will come after a little vacation.

Stay tuned.

April 10, 2019 05:33 PM

Tweag I/O

The Sneakernet:
Towards A Much Faster Internet

Matthias Meschede

After an unprecedented effort of scientists all over the world, the Event Horizon Telescope team has released the first image ever taken of a black hole.

The image is generated by correlating petabytes of data, which are shipped from the telescopes to the data centers on hard disks via airplane. It turns out that this is way faster than using those old-school cables. Once more, innovation has been brought to us by our modern philosophers, this time staring into a lens pointed at the dark sky, last time staring into a lens pointed at tiny objects in a cave - the physicists.

As a new-school enterprise, Tweag immediately realized the potential of shipping hard disks and did a quick exploratory study of the future possibilities of this technology, called the Sneakernet: Could we give new life to the homing pigeon industry? How about transportation means that are optimized to carry incredible amounts of weight? How about transportation means that are designed to be as fast as a bullet?

Consider a 4 terabyte (TB), 60 gram, 100 × 70 × 7 mm³ SSD disk. To get good transfer rates, we need to ship as many of these disks as fast as possible from one place to another. It's all about shipping capacity and speed.

For instance, a human can carry about 30 kg and walk at around 5 km/h. 30 kg corresponds to 500 of those 4 TB hard disks. This means that a human can carry around 2,000 TB of data!

In case that number doesn't speak to you: it corresponds to 2,000,000 GB. Doesn't speak to you either? Well, it's 2 PB. Still not happy? Let's go back to old-school technology then: 2 PB is a sizeable chunk of the entire internet traffic of one month in the year 1999. Just think back and remember the endless ISDN and modem transfers of mp3s and images, then think about what you can squeeze into a 4 TB hard disk and you'll believe me.

The second factor that determines the transfer rate of a human carrier is speed. After all, we have to bring our 2 PB of data from A to B and we want to do this in as little time as possible. The good old philosophical allegory time = distance / speed comes to help. Let's say I walk 10 m to bring 2 PB of data to my colleague's office. Not counting the time that it will take her to prepare and clean her disks, this would take about 7 seconds, in line with Adam Ries (Germans know why).

The Sneakernet transfer rate equation is thus:

time_from_A_to_B = distance / speed
storage_capacity = load_capacity / disk_weight x disk_storage_space
rate = storage_capacity / time_from_A_to_B
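As a sanity check on those numbers, here is a tiny Haskell sketch (ours, not from the article) of the storage-capacity half of these equations, plugging in the 60 g, 4 TB disk and the Boeing 747 LCF's 200,000 kg load:

```haskell
-- The storage-capacity part of the Sneakernet equations, in integer
-- arithmetic: disks = load / disk_weight, capacity = disks * disk_size.
diskWeightG, diskTB :: Integer
diskWeightG = 60  -- grams per SSD
diskTB      = 4   -- terabytes per SSD

-- Number of disks a carrier with the given load capacity (kg) can haul.
numDisks :: Integer -> Integer
numDisks loadKg = loadKg * 1000 `div` diskWeightG

-- Total storage hauled, in TB.
storageTB :: Integer -> Integer
storageTB loadKg = numDisks loadKg * diskTB

main :: IO ()
main = do
  print (numDisks 200000)   -- 3333333 disks on a Boeing 747 LCF
  print (storageTB 200000)  -- 13333332 TB, i.e. roughly 13 EB
```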

A human is neither optimized for speed nor for load capacity (are we actually optimized for something?). Planes and ships fare much better in this sense. It turns out that a plane (Boeing 747 LCF with 200,000kg load) can carry the semi-satanic number of 3,333,333 hard disks that store 13 exabytes of data. That used to be the entire internet traffic of one month in the year 2009! Remember: Google, Youtube and all of these things came up and you need a plane to handle this kind of data. The plane is also fast enough to deliver this data internationally with reasonable delay. However there might be some package loss with certain providers.

Now brace yourself. There are ships called mammoth ultra large crude carriers (ULCCs) that can carry 550,000 Deadweight Tons (DWTs). They can deliver a shipload of 9,166,666,666 hard disks, containing 36,666 EB in total. A single shipload can fit the entire internet traffic of 2018 25 times! As you know, the internet has barely existed that long, and traffic rates were much lower for most of its history. This brings us to the even bigger conclusion that a single shipload of SSD hard disks can store the entire traffic that has ever been exchanged in the history of the internet.

Wow, that's impressive. Here is a little graph that shows how the Sneakernet compares to more traditional technology:

What about the time needed for loading and unloading data from the disks? And the food that the pigeons need to fly such large distances? What is the best sneaker brand for transferring data by hand?

We are sure that these questions can be answered and that the Sneakernet is the future. Besides the packet loss problem, which can become... problematic. And the fact that this many hard disks don't actually exist. Details.

April 10, 2019 12:00 AM

April 08, 2019

Monday Morning Haskell

Generating More Difficult Mazes!


In the last part of this series, we established the fundamental structures for our maze game. But our "maze" was still rather bland. It didn't have any interior walls, so getting to the goal point was trivial. In this next part, we'll look at an algorithm for random maze generation. This will let us create some more interesting challenges. In upcoming parts of this series, we'll explore several more related topics. We'll see how to serialize our maze definition. We'll refactor some of our data structures. And we'll also take a look at another random generation algorithm.

If you've never programmed in Haskell before, you should download our Beginners Checklist! It will help you learn the basics of the language so that the concepts in this series will make more sense. The State monad will also see a bit of action in this part. So if you're not comfortable with monads yet, you should read our series on them!

Getting Started

We represent a maze with the type Map.Map Location CellBoundaries. For a refresher, a Location is an Int tuple. And the CellBoundaries type determines what borders a particular cell in each direction:

type Location = (Int, Int)

data BoundaryType = Wall | WorldBoundary | AdjacentCell Location

data CellBoundaries = CellBoundaries
  { upBoundary :: BoundaryType
  , rightBoundary :: BoundaryType
  , downBoundary :: BoundaryType
  , leftBoundary :: BoundaryType
  }

An important note is that a Location refers to a position in discrete (x, y) space. That is, the first index is the column (starting from 0) and the second index is the row. Don't confuse row-major and column-major ordering! (I did exactly that the first time I implemented this solution.)
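As a tiny standalone illustration of that (column, row) convention (our own sketch, not code from the game), moving "up" increments the second component and moving "right" increments the first:

```haskell
-- Locations are (column, row) pairs, both indexed from 0.
type Location = (Int, Int)

up, right :: Location -> Location
up    (x, y) = (x, y + 1)  -- same column, one row up
right (x, y) = (x + 1, y)  -- one column over, same row

main :: IO ()
main = print (up (right (0, 0)))  -- (1,1)
```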

To generate our maze, we'll want two inputs. The first will be a random number generator. This will help randomize our algorithm so we can keep generating new, fresh mazes. The second will be the desired size of our grid.

import System.Random (StdGen, randomR)

generateRandomMaze
  :: StdGen
  -> (Int, Int)
  -> Map.Map Location CellBoundaries
generateRandomMaze gen (numRows, numColumns) = ...

A Simple Randomization Algorithm

This week, we're going to use a relatively simple algorithm for generating our maze. We'll start by assuming everything is a wall, and we've selected some starting position. We'll use the following depth-first-search pattern:

  1. Consider all cells around us
  2. If there are any we haven't visited yet, choose one of them as the next cell.
  3. "Break down" the wall between these cells, and put that new cell onto the top of our search stack, marking it as visited.
  4. If we have visited all other cells around us, pop this current location from the stack
  5. As long as there is another cell on the stack, choose it as the current location and continue searching from step 1.

There are several pieces of state we have to maintain throughout the process. So the State monad is an excellent candidate for this problem! Let's make a SearchState type for all these:

data SearchState = SearchState
  { randomGenerator :: StdGen
  , locationStack :: [Location]
  , currentBoundaries :: Map.Map Location CellBoundaries
  , visitedCells :: Set.Set Location
  }

dfsSearch :: State SearchState ()
dfsSearch = ...

Each time we make a random selection, we'll use the randomR function that returns the appropriate value as well as a new generator. Then we'll use a normal list for our search stack since we can push and pop from the top with ease. Next, we'll track the current state of the maze (it starts as all walls and we'll gradually break those down). Finally, there's the set of all cells we've already visited.
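The generator-threading pattern is worth seeing in isolation. In this small standalone example (ours, not from the game code), each randomR call consumes one generator and yields the next one to use:

```haskell
import System.Random (StdGen, mkStdGen, randomR)

-- Roll a six-sided die twice, threading the generator by hand, just
-- as the search code threads the StdGen inside its SearchState.
rollTwice :: StdGen -> (Int, Int)
rollTwice g0 =
  let (a, g1)  = randomR (1, 6) g0  -- use g0 once...
      (b, _g2) = randomR (1, 6) g1  -- ...then g1; never reuse g0
  in  (a, b)

main :: IO ()
main = print (rollTwice (mkStdGen 2019))
```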

Starting Our Search!

To start our search process, we'll pull all our information out of the state monad, and examine the stack. If it's empty, we're done and can return! Otherwise, we'll want to consider the top location:

dfsSearch = do
  (SearchState gen locs bounds visited) <- get
  case locs of
    [] -> return ()
    (currentLoc : rest) -> do

Finding New Search Candidates

Given a particular location, we need to find the potential neighbors. We want to satisfy two conditions:

  1. It shouldn't be in our visited set.
  2. The boundary to this location should be a Wall

Then we'll want to use these properties to determine a list of candidates. Each candidate will contain 4 items:

  1. The next location
  2. The bounds we would use for the new location
  3. The previous location
  4. The new bounds for the previous location.

This seems like a lot, but it'll make more sense as we fill out our algorithm. With that in mind, here's the structure of our findCandidates function:

findCandidates
  :: Location -- Current location
  -> Map.Map Location CellBoundaries -- Current maze state
  -> Set.Set Location -- Visited Cells
  -> [(Location, CellBoundaries, Location, CellBoundaries)]
findCandidates currentLocation bounds visited = ...

Filling in this function consists of following the same process for each of the four directions from our starting point. First we check if the adjacent cell in that direction is valid. Then we create the candidate, containing the locations and their new boundaries. Since the location could be invalid, the result is a Maybe. Here's what we do for the "up" direction:

findCandidates currentLocation@(x, y) bounds visited =
  let currentLocBounds = fromJust $
        Map.lookup currentLocation bounds
      upLoc = (x, y + 1)
      maybeUpCandidate = case
        (upBoundary currentLocBounds, Set.member upLoc visited) of
        (Wall, False) -> Just
          ( upLoc
          , (fromJust $ Map.lookup upLoc bounds)
              { downBoundary = AdjacentCell currentLocation }
          , currentLocation
          , currentLocBounds { upBoundary = AdjacentCell upLoc }
          )
        _ -> Nothing
We replace the existing Wall elements with AdjacentCell elements in our maze map. This may seem like it's doing a lot of unnecessary work in calculating bounds that we'll never use. But remember that Haskell is lazy! Any candidate that isn't chosen by our random algorithm won't be fully evaluated. We repeat this process for each direction and then use catMaybes on them all:

findCandidates currentLocation@(x, y) bounds visited =
  let currentLocBounds = fromJust $ Map.lookup currentLocation bounds
      upLoc = (x, y + 1)
      maybeUpCandidate = …
      rightLoc = (x + 1, y)
      maybeRightCandidate = …
      downLoc = (x, y - 1)
      maybeDownCandidate = …
      leftLoc = (x - 1, y)
      maybeLeftCandidate = …
  in  catMaybes [maybeUpCandidate, maybeRightCandidate, … ]
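That laziness claim is easy to check in a standalone snippet (ours, not from the game). One candidate's payload below is an error thunk standing in for an expensive bounds computation; since nothing ever forces it, the program runs fine:

```haskell
-- Two "candidates": the payload of the first would crash if evaluated.
candidates :: [(Int, Int)]
candidates = [(1, error "never forced"), (2, 20)]

main :: IO ()
main =
  -- We inspect only the first component of each pair, so the error
  -- thunk in the unforced payload is never evaluated.
  print (map fst candidates)  -- [1,2]
```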

Choosing A Candidate

Our search function is starting to come together now. Here's what we've got so far. If we don't have any candidates, we'll reset our search state by popping the current location off our stack. Then we can continue the search by making another call to dfsSearch.

dfsSearch = do
  (SearchState gen locs bounds visited) <- get
  case locs of
    [] -> return ()
    (currentLoc : rest) -> do
      let candidateLocs = findCandidates currentLoc bounds visited
      if null candidateLocs
        then put (SearchState gen rest bounds visited) >> dfsSearch
        else ...

But assuming we have a non-empty list of candidates, we'll need to choose one. This function will update most of our state elements, so we'll put it in the State monad as well:

chooseCandidate
  :: [(Location, CellBoundaries, Location, CellBoundaries)]
  -> State SearchState ()
chooseCandidate candidates = do
  (SearchState gen currentLocs boundsMap visited) <- get

First, we'll need to select a random index into this list (which assumes it is non-empty):

chooseCandidate candidates = do
  (SearchState gen currentLocs boundsMap visited) <- get
  let (randomIndex, newGen) = randomR (0, (length candidates) - 1) gen
      (chosenLocation, newChosenBounds, prevLocation, newPrevBounds) =
        candidates !! randomIndex

Since we did the hard work of creating the new bounds objects up above, the rest is straightforward. We'll create our new state with different components.

We get a new random generator from the randomR call. Then we can push the new location onto our search stack. Next, we update the bounds map with the new locations. Last, we add the new location to our visited set:

chooseCandidate candidates = do
  (SearchState gen currentLocs boundsMap visited) <- get
  let (randomIndex, newGen) = randomR (0, (length candidates) - 1) gen
      (chosenLocation, newChosenBounds, prevLocation, newPrevBounds) =
        candidates !! randomIndex
      newBounds = Map.insert prevLocation newPrevBounds
        (Map.insert chosenLocation newChosenBounds boundsMap)
      newVisited = Set.insert chosenLocation visited
      newSearchStack = chosenLocation : currentLocs
  put (SearchState newGen newSearchStack newBounds newVisited)

Then to wrap up our DFS, we'll call this function at the very end. Remember to make the recursive call to dfsSearch!

dfsSearch = do
  (SearchState gen locs bounds visited) <- get
  case locs of
    [] -> return ()
    (currentLoc : rest) -> do
      let candidateLocs = findCandidates currentLoc bounds visited
      if null candidateLocs
        then put (SearchState gen rest bounds visited) >> dfsSearch
        else (chooseCandidate candidateLocs) >> dfsSearch

As a last step in our process, we need to incorporate our search function. At the most basic level, we'll want to execute our DFS state function and extract the boundaries from it:

generateRandomMaze :: StdGen -> (Int, Int) -> Map.Map Location CellBoundaries
generateRandomMaze gen (numRows, numColumns) =
  currentBoundaries (execState dfsSearch initialState)
  where
    initialState :: SearchState
    initialState = ...

But we need to build our initial state. We'll start our search from a random location. Our initial stack and visited set will contain this location. Notice that with each random call, we use a new generator.

generateRandomMaze gen (numRows, numColumns) =
  currentBoundaries (execState dfsSearch initialState)
  where
    (startX, g1) = randomR (0, numColumns - 1) gen
    (startY, g2) = randomR (0, numRows - 1) g1

    initialState :: SearchState
    initialState = SearchState
      g2
      [(startX, startY)]
      … -- TODO Bounds
      (Set.fromList [(startX, startY)])

The last thing we need is our initial bounds set. For this, I'm going to tease the next part of the series. We'll write a function to parse a maze from a string representation (and reverse the process). Our encoding will represent a "surrounded" cell with the character 'F'. So we can represent a completely blocked maze like so:

generateRandomMaze gen (numRows, numColumns) = …
  where
    fullString :: Text
    fullString = pack . unlines $
      take numRows $ repeat (take numColumns (repeat 'F'))

Finally, we'll apply the mazeParser function in Megaparsec style. You'll have to wait a couple weeks to see how to implement that! It will give us the appropriate cell boundaries.

generateRandomMaze gen (numRows, numColumns) =
  currentBoundaries (execState dfsSearch initialState)
  where
    (startX, g1) = randomR (0, numColumns - 1) gen
    (startY, g2) = randomR (0, numRows - 1) g1

    initialState :: SearchState
    initialState = SearchState
      g2
      [(startX, startY)]
      initialBounds
      (Set.fromList [(startX, startY)])

    initialBounds :: Map.Map Location CellBoundaries
    initialBounds = case Megaparsec.runParser
      (mazeParser (numRows, numColumns)) "" fullString of
        Right bounds -> bounds
        _ -> error "Couldn't parse maze for some reason!"

    fullString :: Text
    fullString = ...

You can also look at our Github repo for the full code. You'll want the part-2 branch to see the details of how everything works!


Generating random mazes is cool. But it would be nice if we could actually finish the maze we're running and do another one! Next week, we'll make some modifications to the game state so that when we finish with one maze, we'll have the option to try another random one!

If you're just getting started with Haskell, we have some great resources to get you going! Download our Beginners Checklist and read our Liftoff Series!

by James Bowen at April 08, 2019 02:30 PM

April 07, 2019

Neil Mitchell

Code Review: Approve with Suggestions

Summary: Code review is not a yes/no decision - mostly I say yes with suggestions.

As I wrote previously, I didn't use to be a fan of code review for $WORK code, but now I am. After I review some code there are three responses I might give:

  • "Request changes" - this code has some fatal flaw. Maybe I saw a race condition. Maybe there's insufficient testing. Maybe it's just a bad idea. Please fix it or convince me I'm wrong and I'll review again.
  • "Approved" - this code is great. Let's merge it. If the CI has already passed, I'll probably merge it now myself.
  • "Approved with suggestions" - this code is fine, I'm happy for it to be merged, but I thought of a few ways to make it better.

I think I use "Approved with suggestions" about 80% of the time. To use this status I think the code is correct, readable, and will have no negative effects - I'm happy for it to be merged. At the same time, I can think of a few ways to improve it - e.g. using some utility function, simplifying things a bit, making the documentation clearer. If the original author disagrees with me, I'm not going to bother arguing. I'm happy for these commits to be in a follow up PR, or pushed on top of this PR, whatever suits them.

What's different about approved with suggestions, at least for trusted individuals (e.g. colleagues), is that if they make these tweaks I have no real interest in rereviewing. I'm happy for my approval to remain sticky and for them to seek out rereview only if they think they need it. Importantly, this guideline is consistent with my reasons for reviewing. After the first review, if the code doesn't change meaningfully, a rereview offers none of the benefits that make me want to review in the first place.

Since the programming language DAML that I work on is now open source I can point at a concrete example using a pull request to our GHC fork. Here Shayne added a function:

qualifyDesugar :: (String -> OccName) -> String -> RdrName
qualifyDesugar occName =
  (mkRdrQual $ mkModuleName "DA.Internal.Desugar") . occName

It matches the stated intention, but it seems to do a bit too much - it turns a String into an OccName using a supplied function, when it could have just taken the OccName directly. Simpler, better, more maintainable. So I suggested:

qualifyDesugar :: OccName -> RdrName
qualifyDesugar = mkRdrQual $ mkModuleName "DA.Internal.Desugar"

A nice improvement. However, Shayne is responsible enough to make such simple tweaks without requiring another review. A typical case of Approve with suggestions.

by Neil Mitchell ( at April 07, 2019 09:07 PM

Shayne Fletcher

Announcing ghc-lib 0.20190404

Announcing ghc-lib 0.20190404

On behalf of Digital Asset I am excited to share with you the latest release of ghc-lib.

As described in detail in the ghc-lib README, the ghc-lib project lets you use the GHC API for parsing and analyzing Haskell code without tying you to a specific GHC version.

What's new

The GHC source code in this release is updated to GHC HEAD as of April the 4th, 2019. Accordingly, the mini-hlint example in the ghc-lib repo was adjusted to accommodate GHC API changes to the ParseResult datatype and parser error handling.

By far the biggest change though is this: the ghc-lib project now provides two packages, ghc-lib-parser and ghc-lib. The packages are released on Hackage, and can be installed as usual, e.g. cabal install ghc-lib.

Some projects don't require the ability to compile Haskell to GHC's Core language. If lexical analysis alone is sufficient for your project's needs, then the ghc-lib-parser package alone will suffice. The build time for ghc-lib-parser is roughly half of the combined build times of ghc-lib-parser and ghc-lib, so in this case, switching to the new release will decrease your project's build time. Note that if your project does require compiling Haskell to Core, then it will now depend on both the ghc-lib-parser and ghc-lib packages.

The ghc-lib package "re-exports" the modules of the ghc-lib-parser package. So, if you depend upon the ghc-lib package, you'll get the ghc-lib-parser modules "for free". Sadly though, at this time, package import syntax (and we do recommend using package import syntax for these packages) doesn't quite work like you'd expect: if you write import "ghc-lib" DynFlags, for example, it will fail because DynFlags is in fact in the ghc-lib-parser package. In this case, you'd write import "ghc-lib-parser" DynFlags and all would be well. The mini-compile example in the ghc-lib repo demonstrates mixing modules from both packages.

Digital Asset makes extensive use of the ghc-lib packages in the DAML smart contract language compiler and hopes you continue to benefit from this project too!

by Shayne Fletcher ( at April 07, 2019 04:13 PM

April 06, 2019

Shayne Fletcher

Adding a GHC Language Extension

Adding a GHC Language Extension

This note summarizes the essential mechanics of adding a new language extension to GHC. The example code will illustrate adding a Foo extension.

Implementing the extension

The first step is to add a Foo constructor to the Extension type in libraries/ghc-boot-th/GHC/LanguageExtensions/Type.hs.

data Extension
  = Cpp
  | OverlappingInstances
  | ...
  | Foo

The next job is to extend xFlagsDeps in compiler/main/DynFlags.hs.

xFlagsDeps = [
    flagSpec "AllowAmbiguousTypes" LangExt.AllowAmbiguousTypes,
    ...
    flagSpec "Foo" LangExt.Foo
  ]

That's all it takes. With these two changes, it is now possible to enable Foo in Haskell source files by writing {-# LANGUAGE Foo #-} or from a compile command by passing the argument -XFoo.

Testing for the extension


In compiler/parser/Lexer.x, locate data ExtBits and add a constructor for Foo.

data ExtBits
  = FfiBit
  | ...
  | FooBit
Next, extend the where clause of function mkParserFlags' with a case for Foo.
langExtBits =
      FfiBit `xoptBit` LangExt.ForeignFunctionInterface
  .|. InterruptibleFfiBit `xoptBit` LangExt.InterruptibleFFI
  ...
  .|. FooBit `xoptBit` LangExt.Foo

The function xtest is then the basic building block for testing if Foo is enabled. For example, this specific function tests a bitmap for the on/off status of the Foo bit.
fooEnabled :: ExtsBitMap -> Bool
fooEnabled = xtest FooBit
In practice, a test for a language extension in the lexer is called from a function computing a lexer action. Suppose foo is such a function, and the action it computes depends somehow on whether the Foo language extension is in effect. Putting it all together, schematically it will have the following form.
foo :: (FastString -> Token) -> Action
foo con span buf len = do
  exts <- getExts
  if FooBit `xtest` exts
    then ... -- the Foo-enabled behavior
    else ... -- the default behavior


This utility computes a monadic expression testing for the on/off state of a bit in a parser state monad.

extension :: (ExtsBitmap -> Bool) -> P Bool
extension p = P $ \s -> POk s (p $! (pExtsBitmap . options) s)
An expression of this kind can be evaluated in the semantic action of a parse rule in compiler/parser/Parser.y. Here's an example of how one might be used.
foo :: { () }
  : 'foo' {}
  | {- empty -} {% do
      foo_required <- extension fooEnabled
      when foo_required $ do
        loc <- fileSrcSpan
        parseErrorSDoc loc $ text "Missing foo" }

Renaming, type-checking and de-sugaring

All of renaming, type-checking and desugaring occur in the context of TcRnIf _ _ monads. The function xoptM :: Extension -> TcRnIf gbl lcl Bool is provided for extension testing in such contexts. Here's a schematic of how such a test might be used in a renaming function.

import GHC.LanguageExtensions

updateFoos :: [AvailInfo] -> RnM (TcGblEnv, TcLclEnv)
updateFoos info = do
  (globals, locals) <- getEnvs
  opt_Foo <- xoptM Foo
  if not opt_Foo
    then return (globals, locals)
    else do
      let globals' = ...
          locals' = ...
      return (globals', locals')

by Shayne Fletcher ( at April 06, 2019 04:10 PM

Bush fixing Travis and GitLab

Bush fixing Travis and CI

Ever had one of those days?

You are not alone!

This Saturday 9th March 2019, the GHC devs are going to announce that git:// has been decommissioned. The new official upstream GHC will be

Sadly (for us) this broke ghc-lib CI's Travis linux configuration.

What does our CI do? The ghc-lib CI script pulls down the latest GHC sources and builds and tests them as a ghc-lib. The details of the problem are that Travis gives you a broken Ubuntu where cloning the official URL fails with a TLS “handshake error”. More generally, any Travis job that tries to git clone over the https protocol from a GitLab remote will fail the same way.

This .travis.yml shows a workaround. The idea is to spin up a container before install that doesn’t have this problem and clone from there. The essential bits are:

services:
  - docker

before_install:
  # [Why we git clone on linux here]
  # At this time, `git clone`
  # from within `CI.hs` does not work on linux. This appears to be a
  # known Travis/ubuntu SSL verification issue. We've tried many less
  # drastic workarounds. This grand hack is the only way we've found so
  # far that can be made to work.
  - |
    if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
      docker pull alpine/git
      docker run -ti --rm -v ${HOME}:/root -v $(pwd):/git \
        alpine/git clone /git/ghc --recursive
    fi

Note, MacOS docker services aren't supported, but that's OK! The TLS handshake problem doesn't show up in that configuration.

Update: It turns out that while this issue exists in Ubuntu 14.04, which Travis uses by default, it is "fixed" in Ubuntu 16.04. So by writing dist: xenial in your .travis.yml file, the above workaround can be avoided.

by Shayne Fletcher ( at April 06, 2019 04:10 PM

April 05, 2019

Neil Mitchell

Full-time Haskell jobs in Zürich/New York, at Digital Asset

UPDATE: All positions have been filled.

Summary: We're hiring 3 Haskell programmers and a few other roles too.

I am currently working at Digital Asset, working on our DAML programming language. We're seeking 3 additional Haskell programmers to join, 2 in New York and 1 in Zurich (remote work is not currently an option). There are also a ton of other jobs on our website, including Formal Methods and nix + some Haskell Build Engineering (also available at a more junior level).

What we do

We have built DAML, the Digital Asset Modelling Language, which is the centerpiece of our distributed ledger technology. DAML is a contract language that consists of a strongly-typed purely functional core extended with domain specific constructs to express the flow of rights and obligations underlying today's multi-party business processes. Application Developers using DAML and our distributed ledger technology are supported by the DAML SDK. It provides a type-safe integration of DAML with existing technology like Java, Scala, XML and SQL, and contains DAML Studio, which provides a modern IDE experience to develop, test, and analyse DAML programs.

Working on the Language Engineering team with Digital Asset involves partnering with people around the world (we have centers in New York, Zurich and Sydney), working with exciting new technology, where many of the answers haven't yet been figured out, producing solutions for clients, such as replacing the settlement and clearing platform of the Australian Stock Exchange (ASX), and making sure the end result has the quality required for robust usage. It's challenging work, but the impact could be huge.

What we require

We're looking for the best functional programmers out there, with a strong bias towards Haskell. If not Haskell, then Scala is useful, as other teams in the company write Scala. However, we've hired ML/F# programmers too, with good results. In particular we want:
  • Experienced functional programmer. Either some open-source libraries (Hackage/GitHub) or commercial experience.
  • Writes good, clean, effective code.
  • Existing compiler development experience is useful, if it's with GHC then even better.
We do not require any existing blockchain/DLT/finance knowledge.

How to apply

To apply, email neil.mitchell AT with a copy of your CV. If you have any questions, email me.
The best way to assess technical ability is to look at code people have written. If you have any packages on Hackage or things on GitHub, please point me at the best projects. If your best code is not publicly available, please describe the Haskell projects you've been involved in.

by Neil Mitchell ( at April 05, 2019 08:07 PM

April 04, 2019

Dominic Steinitz

Holden Karau

April 03, 2019

Tweag I/O

Securely storing secrets in Terraform
with terraform-provider-secret

Jonas Chevalier

I want to present a Terraform plugin for securely managing secrets, written for Digital Asset, who kindly allowed me to open source it. The general idea of this plugin is to protect secrets by making use of Terraform's state.

Terraform maintains the state of the world at the moment of a deployment in state files. In the case of multi-seat deployments, where several people work with the same infrastructure, these files are often synced into a remote storage like S3 or GCS. Every time Terraform is invoked, the state is fetched and maintained in memory. When the run is finished, the new state is pushed back into the remote store.

State files might contain sensitive information such as RDS initial passwords, TLS certificates or Kubernetes certificates from a google_container_cluster. For this reason, it's good practice to enable encryption at rest and restrict access to the remote store only to the people responsible for deployments. Therefore, when the remote store is appropriately configured, secrets in the Terraform state are relatively safe.

But how do these secrets get to the Terraform state? The simplest option is to keep secrets in the same repository as the Terraform code, and to pass them as environment variables. However, this is not a good idea, since anyone with access to the code would have access to the infrastructure. A better option would be to store secrets in a dedicated tool like HashiCorp Vault that allows fine-grained access control, auditing and dynamic secrets. However, managing a Vault instance takes time and operational knowledge that the team might not have.

This is where the terraform-provider-secret plugin comes in. The idea is to introduce a value-holding resource whose role is to store secrets. The resource is declared in code, and the terraform import command is used to import the secrets into the store.

This plugin brings many benefits:

  • No secrets stored in code. This means that we can share the infrastructure code with all the employees, which is nicely in line with the self-service principle of DevOps.
  • No secrets stored on the developer's laptop. If it gets stolen, it's just a matter of resetting their GCS or S3 bucket access.
  • Secrets are encrypted at rest in the remote state, if configured properly.
  • Auditable access. Each access to the GCS or S3 bucket can be logged for out-of-band breach correlation.
  • No need to send secrets over Slack.
  • Easy secret rotation.


The plugin depends on Go being installed on your system. Make sure to have $GOPATH/bin in your $PATH and run go get -u

If you are fortunate enough to be using Nix, the terraform-full package already includes the plugin.

Usage example

Before using this plugin, make sure to properly secure the remote state. More instructions on how to do so are available here.

To add a new secret, first declare a secret_resource resource in the Terraform code. It has no configuration options. In this example we want to store the DataDog API key:

resource "secret_resource" "datadog_api_key" {}

To actually import the secret, run:

terraform import secret_resource.datadog_api_key TOKEN

where TOKEN is the value of the secret token.

Now the secret is imported into the remote state, and everybody with access to it can read the value. The resource can also be accessed in the Terraform code as secret_resource.datadog_api_key.value.

locals {
  datadog_api_key = "${secret_resource.datadog_api_key.value}"
}

That's it!
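For example, the imported value can be wired straight into a provider block. This is a hypothetical sketch, assuming the DataDog provider is configured in the same codebase:

```hcl
# Hypothetical wiring: pass the imported secret to the DataDog provider.
# secret_resource.datadog_api_key is the resource declared above.
provider "datadog" {
  api_key = "${secret_resource.datadog_api_key.value}"
}
```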

Secret rotation

In case the secret gets leaked or an employee leaves the company, rotating the secret is also quite simple.

terraform state rm secret_resource.datadog_api_key
terraform import secret_resource.datadog_api_key NEW_TOKEN

Note: This operation is unfortunately not atomic.

Misc gotchas

To import multi-line secrets, make sure to escape them properly:

terraform import secret_resource.my_cert "$(cat my_cert.pem)"

If the secret contains binary data, use base64 to store the information:

terraform import secret_resource.gce_service_account_key "$(base64 service_account.key)"

and use the base64decode() interpolation function in the Terraform code to get back the binary value:

locals {
  gce_service_account_key = "${base64decode(secret_resource.gce_service_account_key.value)}"
}

These values are marked as "sensitive", which means that they won't turn up in the Terraform logs by default. However, the "sensitive" attribute doesn't propagate through references, so some precaution is required when referring to secrets.


Terraform is not designed as a security tool, unlike, for example, HashiCorp Vault. But depending on your security requirements, using it in conjunction with this plugin is strictly better than having secrets recorded in code. All the secrets are now stored in a single place, which makes them easier to handle and audit.

I hope that this little plugin will prove to be useful in your toolbox.


Thank you to Digital Asset for sponsoring this work and releasing it under an open source license for the benefit of everyone.

April 03, 2019 12:00 AM

April 01, 2019

Monday Morning Haskell

Building a Bigger World


Last week we looked at some of the basic components of the Gloss library. We made simple animations and simulations, as well as a very simple "game" taking player input. This week, we're going to start making a more complex game!

Our game will involve navigating a maze, from start to finish. In fact, this week, we're not even going to make it very "mazy". We're just going to set up an open grid to navigate around with our player. But over the course of these next few weeks, we'll add more and more features, like enemies and hazards. At some point, we'll have so many features that we'll need a more organized scheme to keep track of everything. At that point, we'll discuss game architecture. You can take a look at the code for this game on our Github repository. For this part, you'll want to look at the part-1 branch.

Game programming is only one of the many interesting ways we can use Haskell. Take a look at our Production Checklist for some more ideas!

Making Our World

As we explored in the last part, the World type is central to how we define our game. It is a parameter to all the important functions we'll write. Before we define our World though, let's define a couple helper types. These will clarify many of our other functions.

-- Defined in Graphics.Gloss
-- Refers to (x, y) within the drawable coordinate system
type Point = (Float, Float)

-- Refers to discrete (x, y) within our game grid.
type Location = (Int, Int)

data GameResult = InProgress | PlayerWin | PlayerLoss

Let's start our World type now with a few simple elements. We'll imagine the game board as a grid with a fixed size, with the tiles having coordinates like (0,0) in the bottom left. We'll want a start location and an ending location for the maze. We'll also want to track the player's current location as well as the current "result" of the game:

data World = World
  { playerLocation :: Location
  , startLocation :: Location
  , endLocation :: Location
  , gameResult :: GameResult
  }

Now we need to represent the "maze". In other words, we want to be able to track where the "walls" are in our grid. We'll make a data type to represent the boundaries of any particular cell. Then we'll add to our World a mapping from each location in our grid to its boundaries:

data BoundaryType = WorldBoundary | Wall | AdjacentCell Location

data CellBoundaries = CellBoundaries
  { upBoundary :: BoundaryType
  , rightBoundary :: BoundaryType
  , downBoundary :: BoundaryType
  , leftBoundary :: BoundaryType
  }

data World = World
  { …
  , worldBoundaries :: Map Location CellBoundaries
  }

Populating Our World

Next week we'll look into how we can generate interesting mazes. But for now, our grid will only have "walls" on the outside, not in the middle. To start, we'll define a function that takes the number of rows and columns in our grid and a particular location. It will return the "boundaries" of the cell at that location. Each boundary tells us if there is a wall in one direction, or if we are clear to move to a different cell. All we need to check is if we're about to exceed the boundary in that direction.

simpleBoundaries :: (Int, Int) -> Location -> CellBoundaries
simpleBoundaries (numColumns, numRows) (x, y) = CellBoundaries
  (if y + 1 < numRows
    then AdjacentCell (x, y+1)
    else WorldBoundary)
  (if x + 1 < numColumns
    then AdjacentCell (x+1, y)
    else WorldBoundary)
  (if y > 0 then AdjacentCell (x, y-1) else WorldBoundary)
  (if x > 0 then AdjacentCell (x-1, y) else WorldBoundary)
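As a quick sanity check, we can poke at this function in isolation. The snippet repeats the definitions above so that it runs on its own:

```haskell
-- Self-contained check of simpleBoundaries (types repeated from above).
type Location = (Int, Int)

data BoundaryType = WorldBoundary | Wall | AdjacentCell Location
  deriving (Eq, Show)

data CellBoundaries = CellBoundaries
  { upBoundary :: BoundaryType
  , rightBoundary :: BoundaryType
  , downBoundary :: BoundaryType
  , leftBoundary :: BoundaryType
  } deriving (Eq, Show)

simpleBoundaries :: (Int, Int) -> Location -> CellBoundaries
simpleBoundaries (numColumns, numRows) (x, y) = CellBoundaries
  (if y + 1 < numRows then AdjacentCell (x, y + 1) else WorldBoundary)
  (if x + 1 < numColumns then AdjacentCell (x + 1, y) else WorldBoundary)
  (if y > 0 then AdjacentCell (x, y - 1) else WorldBoundary)
  (if x > 0 then AdjacentCell (x - 1, y) else WorldBoundary)

main :: IO ()
main = do
  -- The bottom-left corner of a 3x3 grid has walls below and to the left.
  print (simpleBoundaries (3, 3) (0, 0))
  -- An interior cell has neighbors on all four sides.
  print (simpleBoundaries (3, 3) (1, 1))
```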

Our main function now will loop through all the different cells in our grid and make a map out of them:

boundariesMap :: (Int, Int) -> Map.Map Location CellBoundaries
boundariesMap (numColumns, numRows) = Map.fromList
  (buildBounds <$> (range ((0,0), (numColumns - 1, numRows - 1))))
  where
    buildBounds :: Location -> (Location, CellBoundaries)
    buildBounds loc =
      (loc, simpleBoundaries (numColumns, numRows) loc)

Now we have all the tools we need to populate our initial world:

main = play
  (World (0, 0) (0, 0) (24, 24) InProgress (boundariesMap (25, 25)))
  drawingFunc ...
  inputHandler …
  updateFunc ...

Drawing Our World

Now we need to draw our world. We'll begin by passing a couple new parameters to our drawing function. We'll need offset values that will tell us the Point in our geometric coordinate system for the Location (0,0). We'll also take a floating point value for the cell size. Then we will also, of course, take the World as a parameter:

drawingFunc :: (Float, Float) -> Float -> World -> Picture
drawingFunc (xOffset, yOffset) cellSize world = …

Before we do anything else, let's define a type called CellCoordinates. This will contain the Points for the center and four corners of a cell in our grid.

data CellCoordinates = CellCoordinates
  { cellCenter :: Point
  , cellTopLeft :: Point
  , cellTopRight :: Point
  , cellBottomLeft :: Point
  , cellBottomRight :: Point
  }

Next, let's define a conversion function from a Location to one of the coordinate objects. This will take the offsets, cell size, and the desired location.

locationToCoords ::
  (Float, Float) -> Float -> Location -> CellCoordinates
locationToCoords (xOffset, yOffset) cellSize (x, y) = CellCoordinates
  (centerX, centerY) -- Center
  (centerX - halfCell, centerY + halfCell) -- Top Left
  (centerX + halfCell, centerY + halfCell) -- Top Right
  (centerX - halfCell, centerY - halfCell) -- Bottom Left
  (centerX + halfCell, centerY - halfCell) -- Bottom Right
  where
    (centerX, centerY) =
      ( xOffset + (fromIntegral x) * cellSize
      , yOffset + (fromIntegral y) * cellSize)
    halfCell = cellSize / 2.0
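To see the arithmetic in action, here is the function again as a standalone snippet; the offsets and cell size passed to it are made-up example values:

```haskell
type Location = (Int, Int)
type Point = (Float, Float)

data CellCoordinates = CellCoordinates
  { cellCenter :: Point
  , cellTopLeft :: Point
  , cellTopRight :: Point
  , cellBottomLeft :: Point
  , cellBottomRight :: Point
  } deriving Show

locationToCoords :: (Float, Float) -> Float -> Location -> CellCoordinates
locationToCoords (xOffset, yOffset) cellSize (x, y) = CellCoordinates
  (centerX, centerY)
  (centerX - halfCell, centerY + halfCell)
  (centerX + halfCell, centerY + halfCell)
  (centerX - halfCell, centerY - halfCell)
  (centerX + halfCell, centerY - halfCell)
  where
    (centerX, centerY) =
      ( xOffset + fromIntegral x * cellSize
      , yOffset + fromIntegral y * cellSize )
    halfCell = cellSize / 2.0

main :: IO ()
main =
  -- With offsets (0, 0) and cell size 20, cell (1, 2) is centered at
  -- (20, 40), with corners a half-cell (10 units) away in each direction.
  print (locationToCoords (0, 0) 20 (1, 2))
```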

Now we can go ahead and make the first few simple pictures in our game. We'll have colored polygons for the start and end locations, and a circle for the player token. The player marker is easiest:

drawingFunc (xOffset, yOffset) cellSize world =
  Pictures [startPic, endPic, playerMarker]
  where
    conversion = locationToCoords (xOffset, yOffset) cellSize
    (px, py) = cellCenter (conversion (playerLocation world))
    playerMarker = translate px py (Circle 10)
    startPic = …
    endPic = ...

We find its coordinates through our conversion, and then translate a circle. For our start and end points, we'll want to do something similar, except we want the corners, not the center. We'll use the corners as the points in our polygons and draw these polygons in appropriate colors.

drawingFunc (xOffset, yOffset) cellSize world =
  Pictures [startPic, endPic, playerMarker]
  where
    conversion = locationToCoords (xOffset, yOffset) cellSize
    startCoords = conversion (startLocation world)
    endCoords = conversion (endLocation world)
    startPic = Color blue (Polygon
      [ cellTopLeft startCoords
      , cellTopRight startCoords
      , cellBottomRight startCoords
      , cellBottomLeft startCoords
      ])
    endPic = Color green (Polygon
      [ cellTopLeft endCoords
      , cellTopRight endCoords
      , cellBottomRight endCoords
      , cellBottomLeft endCoords
      ])

Now we need to draw the wall lines. So we'll have to loop through the wall grid, drawing the relevant lines for each individual cell.

drawingFunc (xOffset, yOffset) cellSize world = Pictures
  [mapGrid, startPic, endPic, playerMarker]
  where
    mapGrid = Pictures $ concatMap makeWallPictures
      (Map.toList (worldBoundaries world))

    makeWallPictures :: (Location, CellBoundaries) -> [Picture]
    makeWallPictures ((x, y), CellBoundaries up right down left) = ...

When drawing the lines for an individual cell, we'll use thin lines when there is no wall. We can make these with the Line constructor and the two corner points. But we want a separate color and thickness to distinguish an impassable wall. In this second case, we'll want two extra points that are offset so we can draw a polygon. Here's a helper function we can use:

drawingFunc (xOffset, yOffset) cellSize world = ...
  where
    drawEdge :: (Point, Point, Point, Point) ->
                 BoundaryType -> Picture
    drawEdge (p1, p2, _, _) (AdjacentCell _) = Line [p1, p2]
    drawEdge (p1, p2, p3, p4) _ =
      Color blue (Polygon [p1, p2, p3, p4])

Now to apply this function, we'll need to do a little math to dig all the individual coordinates out of this cell.

drawingFunc (xOffset, yOffset) cellSize world =
  Pictures [mapGrid, startPic, endPic, playerMarker]
  where
    makeWallPictures :: (Location, CellBoundaries) -> [Picture]
    makeWallPictures ((x,y), CellBoundaries up right down left) =
      let coords = conversion (x,y)
          tl@(tlx, tly) = cellTopLeft coords
          tr@(trx, try) = cellTopRight coords
          bl@(blx, bly) = cellBottomLeft coords
          br@(brx, bry) = cellBottomRight coords
      in  [ drawEdge (tr, tl, (tlx, tly - 2), (trx, try - 2)) up
          , drawEdge (br, tr, (trx-2, try), (brx-2, bry)) right
          , drawEdge (bl, br, (brx, bry+2), (blx, bly+2)) down
          , drawEdge (tl, bl, (blx+2, bly), (tlx+2, tly)) left
          ]

But that's all we need! Now our drawing function is complete!

Player Input

The last thing we need is our input function. This is going to look a lot like it did last week. We'll only be looking at the arrow keys. And we'll be updating the player's coordinates if the move they entered is valid. To start, let's figure out how we get the bounds for the player's current cell (we'll assume the location is in our map).

inputHandler :: Event -> World -> World
inputHandler event w = case event of
  (EventKey (SpecialKey KeyUp) Down _ _) -> ...
  (EventKey (SpecialKey KeyDown) Down _ _) -> ...
  (EventKey (SpecialKey KeyRight) Down _ _) -> ...
  (EventKey (SpecialKey KeyLeft) Down _ _) -> ...
  _ -> w
  where
    cellBounds = fromJust $ Map.lookup (playerLocation w) (worldBoundaries w)

Now we'll define a function that will take an access function to the CellBoundaries. It will determine what our "next" location is.

inputHandler :: Event -> World -> World
inputHandler event w = case event of
  ...
  where
    nextLocation :: (CellBoundaries -> BoundaryType) -> Location
    nextLocation boundaryFunc = case boundaryFunc cellBounds of
      (AdjacentCell cell) -> cell
      _ -> playerLocation w

Finally, we pass the proper access function for the bounds with each direction, and we're done!

inputHandler :: Event -> World -> World
inputHandler event w = case event of
  (EventKey (SpecialKey KeyUp) Down _ _) ->
    w { playerLocation = nextLocation upBoundary }
  (EventKey (SpecialKey KeyDown) Down _ _) ->
    w { playerLocation = nextLocation downBoundary }
  (EventKey (SpecialKey KeyRight) Down _ _) ->
    w { playerLocation = nextLocation rightBoundary }
  (EventKey (SpecialKey KeyLeft) Down _ _) ->
    w { playerLocation = nextLocation leftBoundary }
  _ -> w

Tidying Up

Now we can put everything together in our main function with a little bit of glue.

main :: IO ()
main = play
  (World (0, 0) (0, 0) (24, 24) InProgress (boundariesMap (25, 25)))
  (drawingFunc (globalXOffset, globalYOffset) globalCellSize)
  inputHandler
  updateFunc

updateFunc :: Float -> World -> World
updateFunc _ = id

Note that for now, we don't have much of an "update" function. Our world doesn't change over time. Yet! We'll see in the coming weeks what other features we can add that will make use of this.
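For a taste of where this could go, one hypothetical extension is to have the update step check the win condition. This is just a sketch against the World type above, not part of this week's code:

```haskell
-- Hypothetical sketch: flip the game result once the player reaches
-- the end of the maze. (World and GameResult are the types defined
-- earlier in the post.)
updateFunc :: Float -> World -> World
updateFunc _ w
  | playerLocation w == endLocation w = w { gameResult = PlayerWin }
  | otherwise                         = w
```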


So we've finished stage 1 of our simple game! You can explore the part-1 branch on our Github repository to look at the code if you want! Come back next week and we'll explore how we can actually create a true maze, instead of an open grid. This will involve some interesting algorithmic challenges!

For some more ideas of advanced Haskell libraries, check out our Production Checklist. You can also read our Web Skills Series for a more in-depth tutorial on some of those ideas.

by James Bowen at April 01, 2019 02:30 PM

Michael Snoyman

Gym Etiquette Test

Miriam (aka @LambdaMom) and I are writing this blog post as a public service announcement, to address a true scourge in modern society. All too often, gymgoers do not know basic gym etiquette:

In response to this pandemic, we move that all gyms institute, immediately, a required exam for gym membership. There shall be no “grandfathering” of existing members; all existing members must submit to the test. Anyone scoring below a 90% should be forbidden from entering the gym until they have taken a “basic human decency in the gym” course.

Gyms: for your convenience, we have provided a set of nineteen questions below. This is a multiple choice exam, where each statement can be answered with:

  • Required- you must behave this way in the gym
  • Recommended- this is good behavior
  • Frowned upon- try to avoid doing this
  • Forbidden- you will be ejected immediately from the gym

  1. I’m done chewing this piece of gum, let me put it on this plate Forbidden
  2. I’m done using the bench, I’m going to put all of my weights away Required
  3. Oh look, the weights are all jumbled up, let me spend 2 minutes and sort them Recommended
  4. That person is attractive, I’m gonna snap a photo Forbidden
  5. I just finished a set of tricep extensions, let me lift my shirt and check out how my abs look Frowned upon
  6. Curling in the squat rack Frowned upon, unless someone’s waiting to use the squat rack, then your life is in your own hands
  7. Telling someone lifting heavy weights to put them down more quietly Frowned upon
  8. Yelling like a moron for no reason Frowned upon (we’ll also accept forbidden)
  9. Running backwards on the treadmill Recommended, assuming you take a video and send to me to laugh at
  10. Doing anything that defies gravity with the cables Same as above, unless someone’s waiting to use them
  11. Walking closely to, or talking to, someone mid-set Forbidden
  12. Putting a towel down on the bench/machines Recommended (some say required)
  13. Form checking strangers Frowned upon
  14. Cleaning up after you spill your protein shake Required
  15. Farting while squatting Frowned upon
  16. Laughing at the person who farts while squatting Frowned upon, but perhaps inevitable
  17. Spitting in the gym Forbidden
  18. Cheering someone hitting a PR Recommended
  19. Putting on any form of hand lotion before touching any gym equipment Forbidden

April 01, 2019 05:32 AM

Daniel Mlot (duplode)

Idempotent Applicatives, Parametricity, and a Puzzle

Some applicative functors are idempotent, in the sense that repeating an effect is the same as having it just once. An example and a counterexample are Maybe and IO, respectively (contrast Just 3 *> Just 3 with print 3 *> print 3). More precisely, idempotency means that f <$> u <*> u = (\x -> f x x) <$> u. Given the informal description I began with, though, one might wonder whether the simpler property u *> u = u, which seems to capture the intuition about repeated effects, is equivalent to the usual idempotency property. In this post, I will tell how I went about exploring this conjecture, as well as a few things I learnt about parametricity along the way.

Before I begin, a few remarks about this notion of idempotency. The earliest mention of it that I know of is in Combining Monads, a paper by King and Wadler 1. There, idempotent monads are presented alongside the more widely known concept of commutative monads (f <$> u <*> v = flip f <$> v <*> u). Both properties generalise straightforwardly to applicative functors, which has the neat side-effect of allowing myself to skirt the ambiguity of the phrase “idempotent monad” (in category theory, that usually means a monad that, in Haskell parlance, has a join that is an isomorphism – a meaning that mostly doesn’t show up in Haskell). Lastly, I knew of the conjecture about idempotency amounting to u *> u = u through a Stack Overflow comment by David Feuer, and so I thank him for inspiring this post.


Given we are looking into a general claim about applicatives, our first port of call are the applicative laws. Since the laws written in terms of (<*>) can be rather clunky to wield, I will switch to the monoidal presentation of Applicative:
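In that presentation, an applicative is a functor equipped with a unit effect and a way of pairing effects. A sketch, using folklore names, and with fzip deliberately uncurried:

```haskell
class Functor f => Monoidal f where
  unit :: f ()
  fzip :: (f a, f b) -> f (a, b)

-- Relative to the standard class:
--   unit        = pure ()
--   fzip (u, v) = (,) <$> u <*> v
```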

Note I am using an uncurried version of fzip, as I feel it makes what follows slightly easier to explain. I will also introduce a couple teensy little combinators so that the required tuple shuffling becomes easier on the eye:
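The combinator we will actually lean on below is dup; I'll keep any reassociation helpers implicit:

```haskell
-- Duplicate a value across both components of a pair.
dup :: a -> (a, a)
dup x = (x, x)
```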

The converse definitions of pure and (<*>) in terms of unit and fzip would be:

Using that vocabulary, the applicative laws become:

As for the idempotency property, it can be expressed as:
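With dup x = (x, x), the property reads:

```haskell
-- Idempotency, monoidal style: zipping an effect with itself
-- merely duplicates its results.
fzip (u, u) = dup <$> u
```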

(*>) and its sibling (<*) become:
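In the same vocabulary, sequencing becomes projection after zipping:

```haskell
u *> v = snd <$> fzip (u, v)
u <* v = fst <$> fzip (u, v)
```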

(Proofs of the claims just above can be found at the appendix at the end of this post.)

Finally, the conjecture amounts to:

That fzip (u, u) = dup <$> u implies snd <$> fzip (u, u) = u is immediate, as snd . dup = id. Our goal, then, is getting fzip (u, u) = dup <$> u out of snd <$> fzip (u, u) = u.

Drawing relations

How might we get from snd <$> fzip (u, u) = u to fzip (u, u) = dup <$> u? It appears we have to take a fact about the second components of the pairs in fzip (u, u) (note that mapping snd discards the first components) and squeeze something about the first components out of it (namely, that they are equal to the second components everywhere). At first glance, it doesn’t appear the applicative laws connect the components in any obviously exploitable way. The one glimmer of hope lies in how, in the associativity law…

… whatever values originally belong to v must show up as second components of pairs on the left hand side, and as first components on the right hand side. While that, on its own, is too vague to be actionable, there is a seemingly innocuous observation we can make use of: snd <$> fzip (u, u) = u tells us we can turn fzip (u, u) into u using fmap (informally, we can say that they have the same shape), and that they can be related using snd while making use of a free theorem 2. For our current purposes, that means we can borrow the types involved in the left side of the associativity law and use them to draw the following diagram…

(F (a, b), F c) --------> F ((a, b), c)
     |      |                  |     |
     |      |                  |     |
  snd|      |id             snd|     |id 
     |      |                  |     |
     v      v                  v     v
(F   b, F   c ) --------> F (  b,    c)

… such that we get the same result by following either path from the top left corner to the bottom right one. Omitting the occurrences of id, we can state that through this equation:
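Writing first for lifting a function over the first component of a pair, the equation is:

```haskell
-- The relevant instance of fzip's free theorem: mapping snd into
-- the first slot commutes with fzip.
first snd <$> fzip (u, v) = fzip (snd <$> u, v)
```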

In words, it doesn’t matter whether we use snd after or before using fzip. fmap and first, left implicit in the diagram, are used to lift snd across the applicative layer and the pairs, respectively. This is just one specific instance of the free theorem; instead of snd and id, we could have any functions – or, more generally, any relations 3– between the involved types. Free theorems tell us about relations being preserved; in this case, snd sets up a relation on the left side of the diagram, and fzip preserves it.

We can get back to our problem by slipping in suitable concrete values in the equation. For an arbitrary u :: F A, we have…

… and, thanks to our snd <$> fzip (u, u) = u premise:

Now, why should we restrict ourselves to the left side of the associativity law? We can get a very similar diagram to work with from the right side:

(F a, F (b, c)) --------> F (a, (b, c))
   |      |                  |    |
   |      |                  |    |
 id|      |snd             id|    |snd
   |      |                  |    |
   v      v                  v    v
(F a, F   c   ) --------> F (a,   c   )

Or, as an equation:

Proceeding just like before, we get:

Since fzip (u, fzip (u, u)) ~ fzip (fzip (u, u), u) (associativity), we can shuffle that into:

The equations we squeezed out of the diagrams…

… can be combined into:

This kind of looks like idempotency, except for the extra occurrence of u tagging along for the ride. We might have a go at getting rid of it by sketching a diagram of a slightly different nature, which shows how the relations play out across the specific values that appear in the equation above:

          (u, u) :: (F   a   , F a) -----------> F (  a   , a)
                         |       |                    |     |
                         |       |                    |     |
                        R|      S|               {dup}| {id}|
                         |       |                    |     |
                         |       |      fzip          |     |
(fzip (u, u), u) :: (F (a, a), F a) -----------> F ((a, a), a)

dup can be used to relate fzip (u, u) and fzip (fzip (u, u), u) on the right of the diagram. That this diagram involves specific values leads to a subtle yet crucial difference from the previous ones: the relation on the right side is not necessarily the function dup, but some relation that happens to agree with dup for the specific values we happen to be using here (that is what I have attempted to suggest by adding the curly brackets as ad hoc notation and dropping the arrow tips from the vertical connectors). This is important because, given how fzip preserves relations, we might be tempted to work backwards and identify R on the left side with dup, giving us a proof – dup <$> u = fzip (u, u) would be an immediate consequence. We can’t do that, though, as R only must agree with dup for those values which show up in a relevant way on the right side. More explicitly, consider some element x :: a of u. If x shows up as a first component anywhere in fzip (u, u), then the corresponding element of fzip (u, u) must have its first and second components equal to each other (because dup agrees with R on x, and R in turn relates u and fzip (u, u)), and to x (since snd <$> fzip (u, u) = u). If that held for all elements of u, we would have fzip (u, u) = dup <$> u. However if x doesn’t show up as a first component in fzip (u, u), there are no guarantees (as the right side offers no evidence on what x is related to through R), and so we don’t have grounds for the ultimate claim.

Close, but no cigar.

Something twisted

While those parametricity tricks gave us no proof, we did learn something interesting: the conjecture holds as long as all elements from u show up as first components in fzip (u, u). That sounds like a decent lead for a counterexample, so let’s switch course and look for one instead. To begin with, here is an inoffensive length-two vector/homogeneous pair type:

Here is its Applicative instance, specified in terms of the monoidal presentation:

Good, like any other applicative with a single shape, is idempotent – as there is just one shape, it can’t help but be preserved 4. That means we need a second constructor:

unit can remain the same…

… which means the Good-and-Good case must remain the same: the identity effect has to be idempotent 5:

The twist comes in the Evil-and-Evil case: we repeat our pick of a first element of the vector, and thus discard one of the first elements. (We can’t do the same with the second element, as we want snd <$> fzip (u, u) = u to hold.)

The Evil-and-Good case is determined by the right identity law…

… while associativity forces our hand in the Good-and-Evil case (consider what would happen in a Good-Evil-Evil chain 6):

Evil spreads, leaving a trail of repeated picks of first elements to the left of its rightmost occurrence in an applicative chain.

Getting an actual Applicative instance from those definitions is easy: fill unit with something, and take away the commas from fzip:

And there it is:
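The following self-contained sketch pieces the counterexample together from the case analysis in the appendix; Twisted is an assumed name for the type, whose Good and Evil constructors, and every fzip case, are the appendix's:

```haskell
-- A length-two vector with two shapes. Good zips componentwise;
-- Evil repeats its pick of the first element.
data Twisted a = Good a a | Evil a a
  deriving (Eq, Show)

instance Functor Twisted where
  fmap f (Good x1 x2) = Good (f x1) (f x2)
  fmap f (Evil x1 x2) = Evil (f x1) (f x2)

-- fzip, monoidal style; each case matches the appendix.
fzip :: (Twisted a, Twisted b) -> Twisted (a, b)
fzip (Good x1 x2, Good y1 y2) = Good (x1, y1) (x2, y2)
fzip (Evil x1 x2, Evil y1 y2) = Evil (x1, y1) (x1, y2)
fzip (Evil x1 x2, Good y1 y2) = Evil (x1, y1) (x2, y2)
fzip (Good x1 x2, Evil y1 y2) = Evil (x1, y1) (x1, y2)

-- The curried Applicative obtained by "taking away the commas":
instance Applicative Twisted where
  pure x  = Good x x
  u <*> v = (\(f, x) -> f x) <$> fzip (u, v)

dup :: a -> (a, a)
dup x = (x, x)

main :: IO ()
main = do
  let u = Evil 1 2 :: Twisted Int
  print (snd <$> fzip (u, u))  -- Evil 1 2: u *> u = u holds...
  print (fzip (u, u))          -- Evil (1,1) (1,2)
  print (dup <$> u)            -- Evil (1,1) (2,2): ...yet idempotency fails
```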

The conjecture is thus refuted. While parametricity isn’t truly necessary to bring out this counterexample, I am far from sure I would have thought of it without having explored it under the light of parametricity. On another note, it is rather interesting that there are biased applicatives like Twisted. I wonder whether less contrived cases can be found out there in the wild.


Below are some derivations that might distract from the main thrust of the post.

Alternative presentation of the idempotency property

One direction of the equivalency between the two formulations of the idempotency property follows from a straightforward substitution…

… while the other one calls for a small dose of parametricity:

Alternative definitions of (<*) and (*>)

Starting from…

… we can switch to the monoidal presentation:

It follows from parametricity that…

… which amount to…

… or simply:

Lawfulness of Twisted as an applicative functor

Right identity:

Left identity:


fzip (fzip (u, v), w) ~ fzip (u, fzip (v, w))
-- Good/Good/Good case: holds.
fzip (fzip (Good x1 x2, Good y1 y2), Good z1, z2) -- LHS
fzip (Good (x1, y1) (x2, y2), Good z1, z2)
Good ((x1, y1), z1) ((x2, y2), z2)
fzip (Good x1 x2, fzip (Good y1 y2, Good z1 z2)) -- RHS
fzip (Good x1 x2, Good (y1, z1) (y2, z2))
Good (x1, (y1, z1)) (x2, (y2, z2)) -- LHS ~ RHS
-- Evil/Evil/Evil case:
fzip (fzip (Evil x1 x2, Evil y1 y2), Evil z1, z2) -- LHS
fzip (Evil (x1, y1) (x1, y2), Evil z1, z2)
Evil ((x1, y1), z1) ((x1, y1), z2)
fzip (Evil x1 x2, fzip (Evil y1 y2, Evil z1 z2)) -- RHS
fzip (Evil x1 x2, Evil (y1, z1) (y1, z2))
Evil (x1, (y1, z1)) (x1, (y1, z2)) -- LHS ~ RHS
-- Good/Evil/Evil case:
fzip (fzip (Good x1 x2, Evil y1 y2), Evil z1, z2) -- LHS
fzip (Good (x1, y1) (x2, y2), Evil z1, z2)
Evil ((x1, y1), z1) ((x1, y1), z2)
fzip (Good x1 x2, fzip (Evil y1 y2, Evil z1 z2)) -- RHS
fzip (Good x1 x2, Evil (y1, z1) (y1, z2))
Evil (x1, (y1, z1)) (x1, (y1, z2)) -- LHS ~ RHS
-- Evil/Good/Evil case:
fzip (fzip (Evil x1 x2, Good y1 y2), Evil z1, z2) -- LHS
fzip (Evil (x1, y1) (x2, y2), Evil z1, z2)
Evil ((x1, y1), z1) ((x1, y1), z2)
fzip (Evil x1 x2, fzip (Good y1 y2, Evil z1 z2)) -- RHS
fzip (Evil x1 x2, Evil (y1, z1) (y1, z2))
Evil (x1, (y1, z1)) (x1, (y1, z2)) -- LHS ~ RHS
-- Evil/Evil/Good case:
fzip (fzip (Evil x1 x2, Evil y1 y2), Good z1 z2) -- LHS
fzip (Evil (x1, y1) (x1, y2), Good z1 z2)
Evil ((x1, y1), z1) ((x1, y2), z2)
fzip (Evil x1 x2, fzip (Evil y1 y2, Good z1 z2)) -- RHS
fzip (Evil x1 x2, Evil (y1, z1) (y2, z2))
Evil (x1, (y1, z1)) (x1, (y2, z2)) -- LHS ~ RHS
-- Evil/Good/Good case:
fzip (fzip (Evil x1 x2, Good y1 y2), Good z1 z2) -- LHS
fzip (Evil (x1, y1) (x2, y2), Good z1 z2)
Evil ((x1, y1), z1) ((x2, y2), z2)
fzip (Evil x1 x2, fzip (Good y1 y2, Good z1 z2)) -- RHS
fzip (Evil x1 x2, Good (y1, z1) (y2, z2))
Evil (x1, (y1, z1)) (x2, (y2, z2)) -- LHS ~ RHS
-- Good/Evil/Good case:
fzip (fzip (Good x1 x2, Evil y1 y2), Good z1 z2) -- LHS
fzip (Evil (x1, y1) (x1, y2), Good z1 z2)
Evil ((x1, y1), z1) ((x1, y2), z2)
fzip (Good x1 x2, fzip (Evil y1 y2, Good z1 z2)) -- RHS
fzip (Good x1 x2, Evil (y1, z1) (y2, z2))
Evil (x1, (y1, z1)) (x1, (y2, z2)) -- LHS ~ RHS
-- Good/Good/Evil case:
fzip (fzip (Good x1 x2, Good y1 y2), Evil z1 z2) -- LHS
fzip (Good (x1, y1) (x2, y2), Evil z1 z2)
Evil ((x1, y1), z1) ((x1, y1), z2)
fzip (Good x1 x2, fzip (Good y1 y2, Evil z1 z2)) -- RHS
fzip (Good x1 x2, Evil (y1, z1) (y1, z2))
Evil (x1, (y1, z1)) (x1, (y1, z2)) -- LHS ~ RHS

  1. One of Philip Wadler’s papers about monads.

  2. For a gentle initial illustration of the underlying theme of parametricity, see What Does fmap Preserve?. For a more thorough introduction, see Parametricity: Money for Nothing and Theorems for Free, by Bartosz Milewski.

  3. A relation is a set of pairs; or, if you will, of associations between values. As an arbitrary example, we can have a less-than relation on integers which includes all pairs of integers (x, y) such that x < y. In particular, functions are relations: seen as a relation, a function f includes all pairs (x, f x), there being exactly one pair for each possible value of the first component.

  4. One way to prove that is by using parametricity in tandem with the identity laws, analogously to how we used associativity in the previous section, while exploiting how there being only one shape means any applicative value can be related to unit.

  5. See the previous note about relating things to unit.

  6. The appendix includes a proof of the lawfulness of Twisted.

by Daniel Mlot at April 01, 2019 05:30 AM

Michael Snoyman

Typing Resistance

I assure you, this is a post about programming, it’ll just take a few paragraphs to get there.

There’s a biological mechanism known as resistance, and it plays out in many different systems. For example, as you habitually drink more alcohol, you gain a tolerance, which prevents you from getting drunk as easily. This can be called alcohol resistance. When you habitually run high levels of insulin, your body becomes less sensitive to insulin, making you insulin resistant. When you go into a loud room, your ears adjust down the sound, making you noise resistant. And when you change enough dirty diapers, you become smell resistant.

Resistance applies not just at the biological level. Consider a car crash. The first time your car is in a crash, it is impacted in a major way. But after repeated crashes, the damage goes down, as your car attains crash resistance. The first time a baseball goes through a window, it shatters. But further impact between the shards and balls causes less damage. This is impact resistance.

Resistance isn’t a term we often use in the programming world. That’s a mistake I intend to rectify today.

Imagine you’re working on a Python application. Python is a memory managed language, so you have never seen a segfault. One day, you’re testing your code, and poof, a wild segfault appears! You will respond to it far more strongly than a C programmer, who has built up a healthy memory bug resistance over the years. And as a result, you may begin to develop some of that alcohol resistance I mentioned above.

But I’m not here to talk about Python, because no one uses Python in the real world. Instead, let’s discuss Haskell.

In Haskell, we can use strong typing techniques to great effect, such as:

  • Avoiding 404 errors in a web application with type-safe URLs
  • Ensuring proper character encoding handling by differentiating textual and binary data
  • Making Software Transactional Memory safe by quarantining effects

The problem is that we use it everywhere. Like any system, this overuse of typing builds up a resistance. When someone is insulin sensitive, and you give them a small dose of insulin, they respond quickly. When they are insulin resistant, that same dose produces almost no impact. The same is true with typing.

Every single expression and subexpression in Haskell is typed. Sure, there’s type inference and the programmer doesn’t have to spell it out. But it’s still there. The effects can still be felt. You cannot escape the reality that, when you hit compile, an Abstract Syntax Tree annotated with types is being generated. One of the intermediate languages used by GHC—Core—type annotates everything!

The types are there, just like insulin. And just like insulin, whether we realize it or not, our bodies and minds are slowly building up resistance to it.

With insulin resistance, the body produces ever increasing amounts of insulin to overcome the resistance, till the levels are so high that they cause damage. So, too, with types: Haskellers end up using more and more types in their code until they buckle under the strain. Their minds break down, their tools break, runtime errors slip in, performance suffers.

Types, like insulin, are not evil. But they need to be moderated.

There is no future for a language like Haskell. It dictates overuse—nay, abuse—of types. It is inhumane and immoral to subject our computers to such torture. We need to immediately reduce our dependence on types, and then—when type sensitivity is reestablished—carefully and moderately add them back in.

Python took this too far, by fully eliminating types. It’s literally impossible in Python to even get a type error, even at runtime. That’s absurd, and is just as bad as a body with no insulin production at all. However, Python heads in the right direction. We need a blend of Python’s complete lack of a type system, and Haskell’s ability to use types in the right location.

Therefore, I’m announcing today the creation of a new language: Paskell. It fortunately is an unused name, and sounds nothing like any other programming language ever before created.

And for those who want to learn more, you can watch this video where I describe in more detail how I truly feel about types.

April 01, 2019 03:00 AM

March 30, 2019

Matt Parsons

Extending the Persistent QuasiQuoter

Haskell’s persistent database library is convenient and flexible. The recommended way to define your database entities is the QuasiQuoter syntax, and a complete module that defines some typical entities looks like this:

-- src/Models.hs
{-# LANGUAGE DeriveGeneric              #-}
{-# LANGUAGE EmptyDataDecls             #-}
{-# LANGUAGE FlexibleContexts           #-}
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE GADTs                      #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}
{-# LANGUAGE OverloadedStrings          #-}
{-# LANGUAGE QuasiQuotes                #-}
{-# LANGUAGE RecordWildCards            #-}
{-# LANGUAGE TemplateHaskell            #-}
{-# LANGUAGE TypeFamilies               #-}

module Models where

import           Database.Persist.TH  
import           Data.Text            (Text)

share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|

User json
    name    Text
    email   Text
    age     Int
    deriving Show Eq
|]

The QuasiQuoter does a ton of stuff for you. In this post, we’re going to learn how to make it work for you!

Sharing is Caring

Let’s look at the share function:

share :: [[EntityDef] -> Q [Dec]] -> [EntityDef] -> Q [Dec]
share fs x = fmap mconcat $ mapM ($ x) fs

It takes a list of functions [EntityDef] -> Q [Dec] and then runs all of them over the [EntityDef] that is provided, and finally joins all the [Dec] together. So, if we want to make the QQ work for us, we need to write a function with that type and add it to our list.

Let’s start with a problem: one of the instances that are generated for the User table is PersistEntity. PersistEntity has an associated data type, called EntityField. It’s a sum type which contains all of the fields for the User type, and it’s a GADT that tells you what the type of the field is.

If we were to write that part of the PersistEntity instance by hand, it would look like this:

instance PersistEntity User where
    data EntityField User fieldType where
        UserName  :: EntityField User Text
        UserEmail :: EntityField User Text
        UserAge   :: EntityField User Int

There are a lot of functions that use the EntityField type when doing querying.

This type has no instances defined for it! And you may want to do something interesting with these field types that hasn’t been considered. Let’s say we need to have Show instances for our record fields. We can derive them using the StandaloneDeriving language extension, so this works:

{-# LANGUAGE DeriveGeneric              #-}
{-# LANGUAGE EmptyDataDecls             #-}
{-# LANGUAGE FlexibleContexts           #-}
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE GADTs                      #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}
{-# LANGUAGE OverloadedStrings          #-}
{-# LANGUAGE QuasiQuotes                #-}
{-# LANGUAGE RecordWildCards            #-}
{-# LANGUAGE TemplateHaskell            #-}
{-# LANGUAGE TypeFamilies               #-}

{-# LANGUAGE StandaloneDeriving         #-}

module Models where

import           Database.Persist.TH  
import           Data.Text            (Text)

share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persistLowerCase|

User json
    name    Text
    email   Text
    age     Int
    deriving Show Eq
|]

deriving instance Show (EntityField User field)

The last line is our StandaloneDeriving instance. This works! However, it’s a bit annoying to write out for every single record in a larger schema. Let’s write a function that will do this for us automatically.

Template Rascal

Let’s first review the type of the function we pass to share:

[EntityDef] -> Q [Dec]

The input type is a list of the entity definitions. This type (EntityDef) comes from the persistent package, and has a ton of information about the entities.

The type Q comes from the template-haskell package, as does Dec.

This blog post isn’t going to elaborate too much on TemplateHaskell - if you’d like a beginner-friendly tutorial, see Template Haskell Is Not Scary.

We will begin by creating the skeleton for the function:

deriveShowFields :: [EntityDef] -> Q [Dec]
deriveShowFields entities = 

We know we’re going to iterate over all of them, so let’s use forM - Q has a Monad instance.

deriveShowFields :: [EntityDef] -> Q [Dec]
deriveShowFields entities =
    forM entities $ \entity ->
        undefined

We need to replace undefined with an expression of type Q Dec. We could attempt to construct the Dec value directly using data constructors. However, it will be a bit more straightforward to use a QuasiQuoter.

deriveShowFields :: [EntityDef] -> Q [Dec]
deriveShowFields entities =
    forM entities $ \entity -> do
        let name = undefined
        [d|deriving instance Show (EntityField $(name) field)|]

This fails with a type error. The [d| ... |] quasiquoter returns a value of type Q [Dec]. That means that forM entities ... will return Q [[Dec]]. So we just need to flatten it:

deriveShowFields :: [EntityDef] -> Q [Dec]
deriveShowFields entities =
    fmap join . forM entities $ \entity -> do
        let name = undefined
        [d|deriving instance Show (EntityField $(name) field)|]

Alright, now we need to get a name that fits in that splice. What’s the type of that splice? I’m going to throw a () in there and see what GHC complains about.

deriveShowFields :: [EntityDef] -> Q [Dec]
deriveShowFields entities =
    fmap join . forM entities $ \entity -> do
        let name = ()
        [d|deriving instance Show (EntityField $(name) field)|]

This gives us an error:

• Couldn't match type ‘()’ with ‘Q Type’
          Expected type: TypeQ
            Actual type: ()
        • In the expression: name
          In a stmt of a 'do' block:
            [d| deriving instance Show (EntityField $(name) x) |]
            pending(rn) [<splice, name>]
          In the expression:
            do let name = ()
               [d| deriving instance Show (EntityField $(name) x) |]
               pending(rn) [<splice, name>]
    ... |   [d|deriving instance Show (EntityField $(name) x)|]
        |                                            ^^^^

Cool! We need something of type Q Type. Type, like Dec, comes from the template-haskell package.

So, we have an entity :: EntityDef, and we need a name :: Q Type. The name is the name of the entity. If we look at the fields of EntityDef again, we’ll see that the first field is entityHaskell :: HaskellName. That is promising. We can use another PersistEntity class function, entityDef :: (Monad m) => m rec -> EntityDef, to summon an EntityDef in GHCi and see what we get.

>>> entityHaskell $ entityDef (Nothing :: Maybe User)
HaskellName {unHaskellName = "User"}

What’s inside a HaskellName? Let’s find out!

>>> :info HaskellName
newtype HaskellName
  = HaskellName {unHaskellName :: Data.Text.Internal.Text}
  	-- Defined in ‘persistent-2.8.2:Database.Persist.Types.Base’

So, we have a Text representation of the Haskell record name. And we know we need a Type that refers to this name. If we look at the data constructors for Type, we’ll notice that ConT appears to be what we want.

So now we need a Name to give to ConT. What is a Name? The linked docs say that it’s an abstract type for the Haskell value names. They also give us a way of creating one: mkName :: String -> Name.
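As a quick sanity check, the two pieces compose like this (a standalone snippet of my own, not from the original post):

```haskell
import Language.Haskell.TH (Name, Type (ConT), mkName, nameBase)

-- The Name for our entity, built from a plain String.
userName :: Name
userName = mkName "User"

-- The Type node referring to that name: exactly what the splice needs.
userType :: Type
userType = ConT userName
```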

The last building block is Data.Text.unpack :: Text -> String. Now, let’s plug our legos together:

deriveShowFields :: [EntityDef] -> Q [Dec]
deriveShowFields entities =
    fmap join . forM entities $ \entity -> do
        let name =
                pure . ConT . mkName . Text.unpack
                . unHaskellName . entityHaskell
                $ entity
        [d|deriving instance Show (EntityField $(name) field)|]

Bingo! Let’s pass this to share in our model file. Note that we need to import it from somewhere else due to Template Haskell staging restrictions.

share
    [ deriveShowFields
    , mkPersist sqlSettings
    , mkMigrate "migrateAll"
    ] [persistLowerCase|

User json
    name    Text
    email   Text
    age     Int
    deriving Show Eq
|]

And let’s try it in GHCi:

>>> show UserEmail
"UserEmail"
>>> show UserName
"UserName"

Nice. We’ve hooked into persistent’s QuasiQuoter and provided our own functionality.

March 30, 2019 12:00 AM

March 28, 2019

Philip Wadler

Pinker's Thirteen Rules for Better Writing

Steven Pinker tweets thirteen rules for better writing. Spotted via Boing Boing.
  1. Reverse-engineer what you read. If it feels like good writing, what makes it good? If it’s awful, why?
  2. Prose is a window onto the world. Let your readers see what you are seeing by using visual, concrete language.
  3. Don’t go meta. Minimize concepts about concepts, like “approach, assumption, concept, condition, context, framework, issue, level, model, perspective, process, range, role, strategy, tendency,” and “variable.”
  4. Let verbs be verbs. “Appear,” not “make an appearance.”
  5. Beware of the Curse of Knowledge: when you know something, it’s hard to imagine what it’s like not to know it. Minimize acronyms & technical terms. Use “for example” liberally. Show a draft around, & prepare to learn that what’s obvious to you may not be obvious to anyone else.
  6. Omit needless words (Will Strunk was right about this).
  7. Avoid clichés like the plague (thanks, William Safire).
  8. Old information at the beginning of the sentence, new information at the end.
  9. Save the heaviest for last: a complex phrase should go at the end of the sentence.
  10. Prose must cohere: readers must know how each sentence is related to the preceding one. If it’s not obvious, use “that is, for example, in general, on the other hand, nevertheless, as a result, because, nonetheless,” or “despite.”
  11. Revise several times with the single goal of improving the prose.
  12. Read it aloud.
  13. Find the best word, which is not always the fanciest word. Consult a dictionary with usage notes, and a thesaurus.
After absorbing these, read his The Sense of Style.

by Philip Wadler ( at March 28, 2019 02:11 PM

Tweag I/O

Introducing lorri,
your project's nix-env

Graham Christensen, Philip Patsch

Today we're excited to announce lorri, a new tool for NixOS, Linux, and macOS that makes developing with Nix even nicer:

When people try lorri, we often hear that it is more magical than they expected.

What is lorri?

lorri is a nix-shell replacement for project development. lorri is based around fast direnv integration for robust CLI and editor integration.

The project is about experimenting with and improving the developer's experience with Nix. A particular focus is managing your project's external dependencies, editor integration, and quick feedback.

How is lorri different from a nix-shell?

Nix's shells are a fantastic tool for environment management, but have some pain points.

Let's look at three ways lorri changes the experience.

Channel updates are no big deal

Do you use import <nixpkgs> {}? Update your channels and ... Oops! Your Nix shell is stale! Opening a new nix-shell means downloading all new dependencies before you're able to get back to work.

Not with lorri. When your channel updates, lorri watch automatically begins re-evaluating your project's dependencies in the background, outside of your work shell. If you enter the shell before the evaluation completes, the last completed evaluation is loaded instead. When the new one is ready, your environment updates automatically.

nix-collect-garbage before a flight? Sure.

Nix shells are not protected from garbage collection. This is good for one-off shells (nix-shell -p fortune --run fortune) but not so nice for your large project which integrates dozens of Nixpkgs dependencies. Having your gigabytes of project tooling disappear on a low-bandwidth connection is a pain most Nix users know.

lorri captures development dependencies and creates garbage collection roots automatically. Switching to lorri means you are always ready to get to work on your projects.

Editor integration is a snap

A big hassle for Nix users is editor integration and tools like language servers.

Adding direnv to the tool stack makes this better: direnv is like Nix's secret weapon for editor integration. Many editors support direnv, direnv supports Nix, and boom: many editors support Nix!

Out of the box, however, direnv's Nix integration is slow -- continuously, unnecessarily evaluating Nix expressions. In some editors the expressions are re-evaluated on every file switch. This is painful, especially if the evaluation takes even half a second!

lorri's direnv integration doesn't ever call Nix. Instead, the integration always selects the last cached evaluation, ensuring a lightning-fast editor and shell experience.

When entering a project directory your tools just appear, ready to use.

Using lorri

lorri is used in two parts:

  • direnv integration sets up your shell and editor integration with project-specific dependencies
  • lorri watch monitors your project and automatically regenerates your environment

A typical workflow for us is to spawn a terminal running lorri watch, minimize it, and open a second terminal for the work shell. When not editing dependencies, we sometimes even skip the lorri watch and just use the cached evaluation.

lorri was built by Tweag for Target, and we are so excited to introduce it to the public. lorri is beta, open source (Apache-2.0) and a tutorial is available at Target/lorri. Give it a try, and let us know what you think =)!

March 28, 2019 12:00 AM

March 27, 2019

Joachim Breitner

March 26, 2019

FP Complete

How We Run an Open Source Based Business

The global economy is changing to promote sharing, and free software is an important part of that. How do we run a business when important products are abundant and even free?

by Aaron Contorer ( at March 26, 2019 05:13 PM

March 25, 2019

Monday Morning Haskell

Making a Glossy Game! (Part 1)


I've always had a little bit of an urge to try out game development. It wasn't something I associated with Haskell in the past. But recently, I started learning a bit about game architectural patterns. I stumbled on some ideas that seemed "Haskell-esque". I learned about the Entity-Component-System model, which suits typeclasses better than object-oriented design.

So I've decided to do a few articles on writing a basic game in Haskell. We'll delve more into these architectural ideas later in the series. But to start, we have to learn a few building blocks! The first couple weeks will focus on the basics of the Gloss library. This library has some simple tools for creating 2D graphics that we can use to make a game. Frequent readers of this blog will note a lot of commonalities between Gloss and the Codeworld library we studied a while back. In this first part, we'll learn some basic combinators.

If you're looking for some more practical usages of Haskell, we have some tools for you! Download our Production Checklist to learn many interesting libraries you can use! You can also read our Haskell Web Skills series to go a bit more in depth!

A Basic Gloss Tutorial

To get started with the Gloss library, let's draw a simple picture using the display function. All this does is make a full-screen window with a circle in the middle.

-- Always imported
import Graphics.Gloss

main :: IO ()
main = display FullScreen white (Circle 80)

All the arguments here are pretty straightforward. The program opens a full screen window and displays a circle against a white background. We can make the window smaller by using InWindow instead of FullScreen for the Display type. This takes a window "name", as well as dimensions for the size and offset of the window.

windowDisplay :: Display
windowDisplay = InWindow "Window" (200, 200) (10, 10)

main :: IO ()
main = display windowDisplay white (Circle 80)

The primary argument here is this last one, a Picture using the Circle constructor. We can draw many different things, including circles, lines, boxes, text, and so on. The Picture type also allows for translation, rotation, and aggregation of other pictures.


We can take our drawing to the next level by using the animate function. Instead of only drawing a static picture, we'll take the animation time as an input to a function. Here's how we can provide an animation of a growing circle:

main = animate windowDisplay white animationFunc

animationFunc :: Float -> Picture
animationFunc time = Circle (2 * time)


The next stage of our program's development is to add a model. This allows us to add state to our animation so that it is no longer merely a function of the time. For our next example, we'll make a pendulum. We'll keep two pieces of information in our model. These are the current angle ("theta") and the derivative of that angle ("dtheta"). The simulate function takes more arguments than animate. Here's the skeleton of how we use it. We'll go over the new arguments one-by-one.

type Model = (Float, Float)

main = simulate windowDisplay white simulationRate initialModel drawingFunc updateFunc

simulationRate :: Int
simulationRate = 20

initialModel :: Model
initialModel = (0, 0)

drawingFunc :: Model -> Picture
drawingFunc (theta, dtheta) = ...

updateFunc :: ViewPort -> Float -> Model -> Model
updateFunc _ dt (theta, dtheta) = ...

The first extra argument (simulationRate) tells us how many model steps per second. Then we have our initial model. Then there's a function taking the model and telling us how to draw the picture. We'll fill this in to draw a line at the appropriate angle.

drawingFunc :: Model -> Picture
drawingFunc (theta, dtheta) = Line [(0, 0), (50 * cos theta, 50 * sin theta)]

Finally, we have an updating function. This takes the view-port, which we won't use. It also takes the amount of time for this simulation step (dt). Then it takes a current model. It uses these to determine the new model. We can fill this in with a little bit of trigonometry. Then we'll have a working pendulum simulation!

updateFunc :: ViewPort -> Float -> Model -> Model
updateFunc _ dt (theta, dtheta) = (theta + dt * dtheta, dtheta - dt * (cos theta))

Playing a Game

The final element we need to make a playable game is to accept user input. The play function provides us what we need here. It looks like the simulate function except for an extra function for handling input. We're going to make a game where the user can move a circle around with the arrow keys. We'll add an extra mechanic where the circle keeps trying to move back towards the center. Here's the skeleton:

type World = (Float, Float)

main :: IO ()
main = play windowDisplay white 20 (0, 0) drawingFunc inputHandler updateFunc

drawingFunc :: World -> Picture
drawingFunc (x, y) = ...

inputHandler :: Event -> World -> World
inputHandler event (x, y) = ...

updateFunc :: Float -> World -> World
updateFunc dt (x, y) = ...

Our World will represent the current location of our circle. The drawing function will draw a simple circle, translated by this amount.

drawingFunc :: World -> Picture
drawingFunc (x, y) = translate x y (Circle 20)

Now for our input handler, we only care about a few inputs. We'll read the up/down/left/right arrows, and adjust the coordinates:

inputHandler :: Event -> World -> World
inputHandler (EventKey (SpecialKey KeyUp) Down _ _) (x, y) = (x, y + 10)
inputHandler (EventKey (SpecialKey KeyDown) Down _ _) (x, y) = (x, y - 10)
inputHandler (EventKey (SpecialKey KeyRight) Down _ _) (x, y) = (x + 10, y)
inputHandler (EventKey (SpecialKey KeyLeft) Down _ _) (x, y) = (x - 10, y)
inputHandler _ w = w

Finally, let's write our "update" function. This will keep trying to move the circle's coordinates towards the center of the frame:

updateFunc :: Float -> World -> World
updateFunc _ (x, y) = (towardCenter x, towardCenter y)
  where
    towardCenter :: Float -> Float
    towardCenter c = if abs c < 0.25
      then 0
      else if c > 0
        then c - 0.25
        else c + 0.25

And that's it, we have our miniature game!


Hopefully this article gave you a good, quick overview on the basics of the Gloss library. Next week, we'll start making a more complicated game with a more interesting model!

We have other resources for the more adventurous Haskellers out there! Download our Production Checklist and read our Haskell Web Skills Series!

by James Bowen at March 25, 2019 02:30 PM

Lysxia's blog

Higher-rank types in Standard Haskell

How to write functions parameterized by polymorphic functions, without using language extensions?

by Lysxia at March 25, 2019 12:00 AM

March 24, 2019

Philip Wadler

Two weeks of teaching in Ethiopia

I just returned from teaching a class of twenty-two women in Addis Ababa, courtesy of IOHK. Rounding off eight weeks of Haskell, taught by Lars Brünjes and Polina Vinogradova, I spent two weeks teaching IOHK's smart contract languages Plutus and Marlowe. It was an amazing experience. I thank IOHK and my students for the opportunity, and I look forward to repeating it someday.

Here is IOHK's announcement prior to the course, and an analysis from an outsider.

Thank you to my students for introducing me to injera!

With Wanda, Lars, and Polina in traditional dress at the graduation ceremony.

All the students in traditional dress for graduation. They wear it well!

by Philip Wadler ( at March 24, 2019 06:58 PM

Root causes

Trevor Sumner delivers an incisive analysis of the root causes of the Ethiopian airlines crash. Many call it a software failure, but he looks at a trail of issues: economic problem, airframe problem, aerodynamic problem, systems engineering problem, sensor problem, maintenance practices problem, pilot training problem, pilot expertise problem, and back to economic problem.

(Thanks to Robin Sloan for highlighting Sumner's post in his weekly newsletter, Year of the Meteor.)

by Philip Wadler ( at March 24, 2019 06:12 PM

Donnacha Oisín Kidney

Permutations By Sorting

Posted on March 24, 2019
Tags: Haskell, Agda

A naive—and wrong—way to shuffle a list is to assign each element in the list a random number, and then sort it. It might not be immediately obvious why: Kiselyov (2002) has a good explanation as to the problem. One way to think about it is like this: choosing $n$ random numbers each in the range $[0,n)$ has $n^n$ possible outcomes, whereas there are $n!$ permutations. Since these don’t necessarily divide evenly into each other, you’re going to have some bias.
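The counting argument is easy to check concretely; here is a quick sketch (my own, not from the post) for $n = 3$:

```haskell
-- For n = 3 there are 3^3 = 27 equally likely digit strings but only
-- 3! = 6 permutations. Since 27 is not a multiple of 6, some permutation
-- must come up more often than another under this scheme.
outcomes :: Int
outcomes = 3 ^ 3

perms :: Int
perms = product [1 .. 3]

biased :: Bool
biased = outcomes `mod` perms /= 0
```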

Factorial Numbers

The first part of the fix is to figure out a way to get some random data that has only $n!$ possible values. The trick here will be to mimic the structure of a factorial itself: taking $n = 5$, the previous technique would have yielded:

$$5 \times 5 \times 5 \times 5 \times 5 = 5^5$$

possible values. But we want:

$$5 \times 4 \times 3 \times 2 \times 1 = 5!$$

The solution is simple, then! Simply decrement the range by one for each position in the output list. In Haskell:
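The code for this step did not survive in this copy of the post; the following sketch conveys the idea, using a toy linear congruential generator in place of a real randomness source (the generator and its constants are illustrative, not the post's):

```haskell
-- A toy LCG step (illustrative constants); a real implementation would
-- draw from a proper randomness source.
lcg :: Int -> Int
lcg s = (s * 1103515245 + 12345) `mod` 2147483648

-- One digit per position, with the range shrinking by one each time:
-- bounds n, n-1, ..., 1, for n * (n-1) * ... * 1 = n! possible outputs.
factorialDigits :: Int -> Int -> [Int]
factorialDigits seed n = go seed [n, n - 1 .. 1]
  where
    go _ []       = []
    go s (b : bs) = let s' = lcg s in (s' `mod` b) : go s' bs
```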

As an aside, what we’ve done here is construct a list of digits in the factorial number system.


Unfortunately, while we’ve figured out a way to get properly distributed random data, we can’t yet sort it to shuffle our list. If we look at the 6 factorial numbers generated for $n = 5$, we can see the problem:


Different values in the list will produce the same sort: 100 and 200, for instance.

Lehmer Codes

We need a way to map the numbers above to particular permutations: that’s precisely the problem solved by Lehmer codes. For the numbers 110, we can think of each digit as the relative position to put that item from the string into. Some Haskell code might make it clear:
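The decoding function itself is missing from this copy of the post; here is one way it can look (my reconstruction, whose exact convention may differ from the original), inserting each element at the relative position its digit names:

```haskell
-- Insert x at position n of a list (appending if n runs past the end).
insertAt :: Int -> a -> [a] -> [a]
insertAt _ x []       = [x]
insertAt 0 x ys       = x : ys
insertAt n x (y : ys) = y : insertAt (n - 1) x ys

-- Decode a Lehmer code: fold from the right, inserting each element at the
-- relative position given by its digit.
decodeLehmer :: [Int] -> [a] -> [a]
decodeLehmer code xs = foldr (\(k, x) acc -> insertAt k x acc) [] (zip code xs)
```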

And we can step through its execution:

Dualities of Sorts

Notice the similarity of the function above to a standard insertion sort:
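The insertion sort being compared against did not survive extraction; a standard version (my reconstruction) looks like this:

```haskell
-- Insert x into an already sorted list, keeping it sorted.
insertSorted :: Ord a => a -> [a] -> [a]
insertSorted x [] = [x]
insertSorted x (y : ys)
  | x <= y    = x : y : ys
  | otherwise = y : insertSorted x ys

-- Insertion sort is a fold of insertions: the same shape as the Lehmer
-- decoding, with only the "where to put it" decision changed.
insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insertSorted []
```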

The “comparison” is a little strange—we have to take into account relative position—but the shape is almost identical. Once I spot something like that, my first thought is to see if the relationship extends to a better $\mathcal{O}(n \log n)$ sort, but there’s something else I’d like to look at first.

“A Duality of Sorts” (Hinze, Magalhães, and Wu 2013) is a paper based on the interesting symmetry between insertion sort and selection sort (there’s also a video of Graham Hutton explaining the idea; Haran 2016).

With that paper in mind, can we rewrite shuffle as a selection-based algorithm? We can indeed!
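The selection-based version is also missing from this copy; here is a sketch of the idea (again my reconstruction, not necessarily the post's code): instead of inserting into the output, each digit selects an element from what remains of the input.

```haskell
-- Remove and return the element at index n (assumes n is in range, as the
-- digits of a valid code always are).
pick :: Int -> [a] -> (a, [a])
pick _ []       = error "pick: empty list"
pick 0 (x : xs) = (x, xs)
pick n (x : xs) = let (y, ys) = pick (n - 1) xs in (y, x : ys)

-- Selection-style decoding: each digit selects from the remaining input.
selectPerm :: [Int] -> [a] -> [a]
selectPerm []       _  = []
selectPerm (k : ks) xs = let (y, ys) = pick k xs in y : selectPerm ks ys
```

Note that the insertion- and selection-style decodings map the same code to different (but equally valid) permutations; they are dual conventions.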

While the symmetry is pleasing, the paper details how to make the relationship explicit, using the same function for both selection and insertion sort:

Improved Efficiency

So now we have to upgrade our sorts: in the paper, merge sort is the more efficient sort chosen, similarly to what I chose previously.

However, I feel like merge sort is an upgrade of insertion sort, not selection sort. Indeed, if you do the “split” step of merge sort badly, i.e. by splitting very unevenly, merge sort in fact becomes insertion sort!

So there’s a missing bit of this table:

Insertion Selection
<semantics>𝒪(n2)<annotation encoding="application/x-tex">\mathcal{O}(n^2)</annotation></semantics> Insertion sort Selection sort
<semantics>𝒪(nlogn)<annotation encoding="application/x-tex">\mathcal{O}(n \log n)</annotation></semantics> Merge sort ???

I think it’s clear that quicksort is the algorithm that fits in there: again, done badly it degrades to selection sort (if you intentionally pick the pivot to be the worst element possible, i.e. the smallest element).
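For reference, the textbook quicksort in question: if the pivot chosen at every step happens to be the smallest element, one partition is always empty, which is exactly the degeneration to selection sort described above.

```haskell
import Data.List (partition)

-- Standard quicksort: the head is the pivot; elements are partitioned
-- into those smaller than it and the rest.
quickSort :: Ord a => [a] -> [a]
quickSort [] = []
quickSort (x:xs) =
  let (smaller, larger) = partition (< x) xs
  in quickSort smaller ++ [x] ++ quickSort larger
```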

There are more symmetries: merge sort splits the lists using their structure, and merges them using the ordering of the elements. Quicksort is the opposite, merging by concatenation, but splitting using order. Finally, in merge sort adjacent elements are in the correct order after the recursive call, but the two sides of the split are not. Again, quicksort is precisely the opposite: adjacent elements have not been compared (before the recursive call), but the two sides of the split are correctly ordered.

Anyway, I haven’t yet formalised this duality (and I don’t know if I can), but we can use it to produce a quicksort-based shuffle algorithm:

That’s all for this post! The algorithms can all be translated into Agda or Idris: I’m currently working on a way to represent permutations that isn’t <semantics>𝒪(n2)<annotation encoding="application/x-tex">\mathcal{O}(n^2)</annotation></semantics> using them. If I figure out a way to properly dualise quicksort and merge sort I’ll do a small write-up as well (I’m currently working my way through Hinze et al. 2012 for ideas). Finally, I’d like to explore some other sorting algorithms as permutation algorithms: sorting networks seem especially related to “permutations by swapping”.


Haran, Brady. 2016. “Sorting Secret.”

Hinze, Ralf, Daniel W.H. James, Thomas Harper, Nicolas Wu, and José Pedro Magalhães. 2012. “Sorting with bialgebras and distributive laws.” In Proceedings of the 8th ACM SIGPLAN workshop on Generic programming - WGP ’12, 69. Copenhagen, Denmark: ACM Press. doi:10.1145/2364394.2364405.

Hinze, Ralf, José Pedro Magalhães, and Nicolas Wu. 2013. “A Duality of Sorts.” In The Beauty of Functional Code: Essays Dedicated to Rinus Plasmeijer on the Occasion of His 61st Birthday, ed by. Peter Achten and Pieter Koopman, 151–167. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-40355-2_11.

Kiselyov, Oleg. 2002. “Provably perfect random shuffling and its pure functional implementations.”

by Donnacha Oisín Kidney at March 24, 2019 12:00 AM

March 22, 2019

Matt Parsons

Return a Function to Avoid Effects

To help write robust, reliable, and easy-to-test software, I always recommend purifying your code of effects. There are a bunch of tricks and techniques to accomplish this sort of thing, and I’m going to share one of my favorites.

I have implemented a pure data pipeline that imports records from one database and puts them in another database with a slightly different schema. Rather than implement all of the logic for saving each entity individually, I’ve created a function migrate that is abstract. The heart of this pipeline is a set of type classes:

{-# language FunctionalDependencies, ScopedTypeVariables, TypeApplications, AllowAmbiguousTypes #-}

class LoadData i where
    load :: IO [i]

class ConvertData i o | i -> o where
    convert :: i -> o

class SaveData o where
    save :: [o] -> IO ()

migrate
    :: forall i o
     . (LoadData i, ConvertData i o, SaveData o)
    => IO ()
migrate = do
    old <- load @i
    save (map convert old)

All of the business logic is in the ConvertData class. The LoadData and SaveData classes do the typically boring things.

A new requirement came in: we are going to import a new record, and we must generate new UUIDs for it.

data Old = Old Int Text

data New = New UUID Int Text

These UUIDs must be randomly generated. The logical place to generate the UUID is in the ConvertData part of the pipeline. However, this would require adding IO to the method signature, which would make testing and verifying this code more difficult.

Instead, we are going to create a new type:

newtype NeedsUUID = NeedsUUID 
    { giveUUID :: UUID -> New
    }

Now, our conversion function instances will look like this:

instance LoadData Old where
    load = loadOldData

instance ConvertData Old NeedsUUID where
    convert (Old i t) = 
        NeedsUUID (\uuid -> New uuid i t)

We have abstracted the UUID generation. Our ConvertData type class remains pure, and we’ve pushed the implementation detail of UUID generation out.

Now, we implement the SaveData type class, which already had IO.

instance SaveData NeedsUUID where
    save needsUUIDs = do
        values <- 
            forM needsUUIDs $ \needsUUID -> do
                uuid <- freshUUID
                return (giveUUID needsUUID uuid)
        saveNewValues values

Effects on the Edge

We want to keep effects isolated to the edges of our programs as much as possible. This allows most of our code to remain pure and easy to test and examine. I’ve written about similar topics in my posts Type Safety Back and Forth and Invert Your Mocks!.

March 22, 2019 12:00 AM

March 21, 2019

Functional Jobs

Software Engineer - Haskell at Capital Match Platform (Full-time)

WHO WE ARE Capital Match is an innovative fintech business offering a range of financial solutions to corporate and individual clients, investors, and borrowers. Capital Match is the largest marketplace invoice financing platform in Southeast Asia, based in Singapore, having funded more than US$80 million in Singapore and Hong Kong over the past 3 years. Capital Match is backed by multiple VCs and institutional investors, including Dymon Asia Capital, an alternative asset manager with $6bn AUM, and Eduardo Saverin's B Capital Group.

THE ROLE We are looking for experienced developers to lead our tech growth in the fintech space, expand into surrounding countries, develop new products and integrations. You will be involved in all aspects of the creation, growth and operations of a secure web-based platform. Experience in front-to-back feature development, distributed deployment and automation in the cloud, build and test automation, is highly desirable.

OUR PLATFORM Our Platform is primarily developed in Haskell with a ClojureScript frontend. We use Docker containers, Nix and standard cloud infrastructure systems to manage our production rollouts. We make heavy use of modern functional programming practices, such as property-based testing and type-driven programming, to improve the quality of our code. We use many popular Haskell libraries such as Lens, QuickCheck, Servant, Scotty, Aeson, Attoparsec and Pandoc. We use continuous testing, integration and deployment wherever possible.


  • 5 years (or more) of software engineering experience.
  • Knowledge and passion for Haskell.
  • Strong foundation in data structures, algorithms, and software design.
  • Experience in developing both server and web applications.
  • Experience with modern software engineering practices such as TDD, CI, Emergent Design, Refactoring, Peer Review and Continuous Improvement.
  • Familiarity with Linux systems, command-line environments and cloud-based deployments.
  • A background in fintech and especially the lending space would be an advantage but is not essential.


  • Salary of SGD 5,000-10,000 / month (depending on experience and qualifications).
  • Attractive equity options for outstanding candidates.
  • Foreigners who relocate from most countries do not need to pay income tax in their home country, and for the proposed remuneration, income tax in Singapore is below 10%.
  • Singapore is a great place to live, a vibrant city rich with diverse cultures, a very strong financial sector and a central location in Southeast Asia.
  • Visa sponsorship can be provided to the right candidate.


Send your CV and github link to careers [at] capital-match [dot] com

Get information on how to apply for this position.

March 21, 2019 04:49 AM

Donnacha Oisín Kidney

Lazy Binary Numbers

Posted on March 21, 2019
Tags: Agda, Haskell

Number Representations

When working with numbers in Agda, we usually use the following definition:



In Haskell it’s less common, for obvious reasons:
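The definition in question is the standard unary (Peano) encoding; in Haskell it reads:

```haskell
-- Unary natural numbers: a number is either zero or a successor.
data Nat = Z | S Nat

-- Interpreting a Nat costs one step per unit of its value.
toInt :: Nat -> Int
toInt Z     = 0
toInt (S n) = 1 + toInt n
```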

Operation Complexity
<semantics>n+m<annotation encoding="application/x-tex">n + m</annotation></semantics> <semantics>𝒪(n)<annotation encoding="application/x-tex">\mathcal{O}(n)</annotation></semantics>
<semantics>n×m<annotation encoding="application/x-tex">n \times m</annotation></semantics> <semantics>𝒪(nm)<annotation encoding="application/x-tex">\mathcal{O}(nm)</annotation></semantics>

Why use them at all, then? Well, in Agda, we need them so we can prove things about the natural numbers. Machine-level integers are fast, but they’re opaque: their implementation isn’t written in Agda, and therefore it’s not available for the compiler to reason about.

In Haskell, they occasionally find uses due to their laziness. This can help in Agda as well. By lazy here I mean that operations on them don’t have to inspect the full structure before giving some output.

In Haskell, as we can see, this lets us run computations without scrutinising some arguments. Agda benefits similarly: here it lets the compiler see more “obvious” facts that it may have missed otherwise.

It’s not completely lazy, though. In particular, it tends to be left-biased:

Like Boolean short-circuiting operators, operations on Peano numbers will usually have to scrutinise the left-hand-side argument quite a bit before giving an output.

So, Peano numbers are good because:

  1. We can prove things about them.
  2. They’re lazy.

In this post, I’m going to look at some other number representations that maintain these two desirable properties, while improving on the efficiency somewhat.


The first option for an improved representation is binary numbers. We can represent binary numbers as a list of bits:

As we’re using these to represent natural numbers, we’ll need to define a way to convert between them:
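A sketch of both the list-of-bits representation (least significant bit first) and its semantics as a natural number; the names here are mine:

```haskell
-- A binary number is a list of bits, least significant first.
data Bit = O | I deriving (Eq, Show)

type Binary = [Bit]

-- Semantics: the head contributes its bit value, the tail is worth twice as much.
toNat :: Binary -> Int
toNat = foldr (\b n -> bitVal b + 2 * n) 0
  where
    bitVal O = 0
    bitVal I = 1
```

Note that `toNat [I,O,I]` and `toNat [I,O,I,O]` denote the same number: this is exactly the redundancy problem discussed next.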

And here we run into our first problem: redundancy. There are multiple ways to represent the same number according to the semantics defined above. We can actually prove this in Agda:

In English: “There are two binary numbers which are not the same, but which do evaluate to the same natural number”. (This proof was actually automatically filled in for me after writing the signature.)

This represents a huge problem for proofs. It means that even simple things like <semantics>x×0=0<annotation encoding="application/x-tex">x \times 0 = 0</annotation></semantics> aren’t true, depending on how multiplication is implemented. On to our next option:


Instead of looking at the bits directly, let’s think about a binary number as a list of chunks of 0s, each followed by a 1. In this way, we simply can’t have trailing zeroes, because the definition implies that every number other than 0 ends in 1.
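One way to sketch that encoding (type and function names are mine, not the post's): each entry counts the zero bits before the next 1 bit, least significant first. Zero is the empty list, 1 is `[0]`, and 6 (that is, `011` LSB-first) is `[1,0]`.

```haskell
-- Gap encoding: each Int is the number of 0 bits before the next 1 bit.
type Gaps = [Int]

-- Semantics: w tracks the place value of the next bit position.
gapsToNat :: Gaps -> Int
gapsToNat = go 1
  where
    go _ []       = 0
    go w (g : gs) = let w' = w * 2 ^ g in w' + go (w' * 2) gs
```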

This guarantees a unique representation. As in the representation above, it has much improved time complexities for the familiar operations:

Operation Complexity
<semantics>n+m<annotation encoding="application/x-tex">n + m</annotation></semantics> <semantics>𝒪(log2n)<annotation encoding="application/x-tex">\mathcal{O}(\log_2 n)</annotation></semantics>
<semantics>n×m<annotation encoding="application/x-tex">n \times m</annotation></semantics> <semantics>𝒪(log2(n+m))<annotation encoding="application/x-tex">\mathcal{O}(\log_2 (n + m))</annotation></semantics>

Encoding the zeroes as gaps also makes multiplication much faster in certain cases: multiplying by a high power of 2 is a constant-time operation, for instance.

It does have one disadvantage, and it’s to do with the increment function:
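A sketch of increment on the gap encoding (the post's real code differs): a run of leading zero-gaps is a run of 1 bits, which the carry flips to zeroes before landing in the next position.

```haskell
-- inc on the gap encoding: leading 0-gaps (i.e. 1 bits) are absorbed by
-- the carry, which then lands one position further along.
inc :: [Int] -> [Int]
inc gs = case span (== 0) gs of
  (ones, [])       -> [length ones]
  (ones, g : rest) -> length ones : g - 1 : rest
```

The worst case scans the whole run of leading ones, which is where the non-constant bound below comes from.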

With all of their problems, Peano numbers performed this operation in constant time. The above implementation is only amortised constant-time, though, with a worst case of <semantics>𝒪(log2n)<annotation encoding="application/x-tex">\mathcal{O}(\log_2 n)</annotation></semantics> (same as the list-of-bits version). There are a number of ways to remedy this, the most famous being:

Skew Binary

This encoding has three digits: 0, 1, and 2. To guarantee a unique representation, we add the condition that there can be at most one 2 in the number, which must be the first non-zero digit if it’s present.

To represent this we’ll encode “gaps”, as before, with the condition that if the second gap is 0 it actually represents a 2 digit in the preceding position. That weirdness out of the way, we are rewarded with an inc implementation which is clearly <semantics>𝒪(1)<annotation encoding="application/x-tex">\mathcal{O}(1)</annotation></semantics>.

Unfortunately, though, we’ve lost the other efficiencies! Addition and multiplication have no easy or direct encoding in this system, so we have to convert back and forth between this and regular binary to perform them.


The key problem with incrementing in the normal binary system is that it can cascade: when we hit a long string of 1s, all the 1s become 0 followed by a single 1. We can turn this problem to our advantage if we use a representation which encodes both 1s and 0s as strings of gaps. We’ll have to use a couple more tricks to ensure a unique representation, but all in all this is what we have (switching to just Agda now):

Perfect! Increments are obviously <semantics>𝒪(1)<annotation encoding="application/x-tex">\mathcal{O}(1)</annotation></semantics>, and we’ve guaranteed a unique representation.

I’ve been working on this type for a couple of days, and you can see my code here. So far, I’ve done the following:

Defined inc, addition, and multiplication

These were a little tricky to get right (addition is particularly hairy), but they’re all there, and maximally lazy.

Proved Homomorphism

For each one of the functions, you want them to correspond precisely to the equivalent functions on Peano numbers. Proving that fact amounts to filling in definitions for the following:

Proved Bijection

As we went to so much trouble, it’s important to prove that these numbers form a one-to-one correspondence with the Peano numbers. As well as that, once done, we can use it to prove facts about the homomorphic functions above, by reusing any proofs about the same functions on Peano numbers. For instance, here is a proof of commutativity of addition:


So now that we have our nice number representation, what can we do with it? One use is as a general-purpose number type in Agda: it represents a good balance between speed and “proofiness”, and Coq uses a similar type in its standard library.

There are other, more unusual uses of such a type, though.

Data Structures

It’s a well-known technique to build a data structure out of some number representation (Hinze 1998): in fact, all of the representations above are explored in Okasaki (1999, chap. 9.2).

Logic Programming

Logic programming languages like Prolog let us write programs in a backwards kind of way. We say what the output looks like, and the unifier will figure out the set of inputs that generates it.

In Haskell, we have a very rough approximation of a similar system: the list monad.
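The post's example isn't reproduced here, but the kind of code in question looks like this (a hedged guess at the missing snippet): "solving for" Pythagorean triples with the list monad.

```haskell
-- Enumerate Pythagorean triples up to a bound with the list monad.
triples :: Int -> [(Int, Int, Int)]
triples n = do
  x <- [1 .. n]
  y <- [x .. n]
  z <- [y .. n]
  if x * x + y * y == z * z then pure (x, y, z) else []
```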

There are tons of inefficiencies in the above code: for us, though, we can look at one: the number representation. In the equation:

<semantics>x2+y2=z2<annotation encoding="application/x-tex">x^2 + y^2 = z^2</annotation></semantics>

If we know that <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> and <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics> are both odd, then <semantics>z<annotation encoding="application/x-tex">z</annotation></semantics> must be even. If the calculation of the equation is expensive, this is precisely the kind of shortcut we’d want to take advantage of. Luckily, our binary numbers do just that: it is enough to scrutinise just the first bits of <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> and <semantics>y<annotation encoding="application/x-tex">y</annotation></semantics> in order to determine the first bit of the output.

After seeing that example, you may be thinking that lazy evaluation is a perfect fit for logic programming. You’re not alone! Curry (Hanus (ed.) 2016) is a lazy, functional logic programming language, with a similar syntax to Haskell. It also uses lazy binary numbers to optimise testing.

Lazy Predicates

In order for queries to be performed efficiently on binary numbers, we will also need a way to describe lazy predicates on them. A lot of these predicates are more easily expressible on the list-of-bits representation above, so we’ll be working with that representation for this bit. Not to worry, though: we can convert from the segmented representation into the list-of-bits, and we can prove that the conversion is injective:

Here’s the curious problem: since our binary numbers are expressed least-significant-bit-first, we have to go to the end before knowing which is bigger. Luckily, we can use one of my favourite Haskell tricks, involving the ordering monoid:
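A sketch of the trick (the post's actual function may differ): recurse first, so the more significant bits — and, via the short-circuiting of the `Ordering` semigroup, the list lengths — take precedence, and the low-bit comparisons are discarded lazily.

```haskell
-- Compare LSB-first bit lists. The recursive result (comparing the more
-- significant bits, and implicitly the lengths) is combined on the LEFT
-- of (<>), so it wins whenever it is not EQ.
cmpBits :: [Int] -> [Int] -> Ordering
cmpBits []       []       = EQ
cmpBits []       _        = LT
cmpBits _        []       = GT
cmpBits (a : as) (b : bs) = cmpBits as bs <> compare a b
```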

Thanks to laziness, this function first compares the length of the lists, and then does a lexicographical comparison in reverse only if the lengths are the same. This is exactly what we want for our numbers.

Future Posts

That’s all I have for now, but I’m interested to formalise the laziness of these numbers in Agda. Usually that’s done with coinduction: I would also like to see the relationship with exact real arithmetic.

I wonder if it can be combined with O’Connor (2016) to get some efficient proof search algorithms, or with Escardo (2014) to get more efficient exhaustive search.


Escardo, Martin. 2014. “Seemingly impossible constructive proofs | Mathematics and Computation.” Mathematics and Computation.

Hanus (ed.), M. 2016. Curry: An Integrated Functional Logic Language (Vers. 0.9.0). Available at

Hinze, Ralf. 1998. Numerical Representations as Higher-Order Nested Datatypes. Institut für Informatik III, Universität Bonn.

O’Connor, Liam. 2016. “Applications of Applicative Proof Search.� In Proceedings of the 1st International Workshop on Type-Driven Development, 43–55. TyDe 2016. New York, NY, USA: ACM. doi:10.1145/2976022.2976030.

Okasaki, Chris. 1999. Purely Functional Data Structures. Cambridge University Press.

by Donnacha Oisín Kidney at March 21, 2019 12:00 AM

March 20, 2019

Tweag I/O

Capability is about free monads.
It's a bird… It's a plane… It's a free monad!

Arnaud Spiwack

The subject of free monads is resurfacing of late, as it does from time to time. What prompted this write-up is that Sandy Maguire followed up his post with a discussion about an impredicative encoding (aka final encoding) of free monads:

newtype Freer f a = Freer (forall m. Monad m => (forall t. f t -> m t) -> m a)

That is: given a monad m, and an interpretation of my operations (f) in that monad, I can build an m a.

As it happens, capabilities-as-type-classes, as embodied by the capability library, are essentially the same thing. Let me explain.

As far as I know, the subject of impredicative encoding of free monads was first tackled, like many good things, by Russell O'Connor, who calls them van Laarhoven free monads. His blog post is a fairly mathy read. But the key bit is this:

-- (ops m) is required to be isomorphic to (Π n. i_n -> m j_n)
newtype VLMonad ops a = VLMonad { runVLMonad :: forall m. Monad m => ops m -> m a }

Where Π n. i_n -> m j_n is mathematician's way of saying forall n. i n -> m (j n). Up to a very small difference (rename i to f and pick the identity for j), this is indeed the same type. O'Connor then proves that this type is isomorphic to the usual presentation of free monads.

data Free f a = Return a | Free (f (Free f a))

Less is more

The comment, in Russell O'Connor's snippet, is crucial in the proof. Without it you can't establish the isomorphism between VLMonad and the traditional Free monad.

That's because not all effects can be represented in free monads. The prime example is exception handlers. You can make a function

handle :: Free MyEffect a -> Free MyEffect a -> Free MyEffect a

But it would have the property that (handle s f) >>= k = handle (s >>= k) (f >>= k). That is: exceptions raised after exiting the handler would still be caught by the handler. It is not a useless function, but it is not an exception handler. This phenomenon is a property of the free monad construction. In the impredicative encoding, it can be thought of as a consequence of the fact that in forall a. f a -> m a, f cannot refer to m.

So, while being isomorphic to Free is a nice theoretical property, Russell O'Connor's phrasing presents us with an opportunity: if we simply drop the comment restricting the form of ops, we get a less constrained free monad type which supports more effects. Therefore, we won't pay much attention to it at all in the rest of the post.

Towards capability

Functions in VLMonad will look, as functions in a monad do, like

somefunction :: A -> VLMonad Ops B

Where Ops represent the possible effects. For instance, if you need a state effect, you would define Ops as

data Ops m = Ops
  { put :: Int -> m ()
  , get :: m Int
  }

But, after all, VLMonad is simply a newtype: we could very well inline its definition.

somefunction :: A -> forall m. Monad m => Ops m -> m B

The ordering of types and argument is not too idiomatic, though. So let's rearrange them a little:

somefunction :: Monad m => Ops m -> A -> m B

If this may look like a familiar style of structuring effect, it is. See for instance the style advertised by Éric Torreborre in a recent blog post. It's not really so much an alternative to free monads as a different presentation of free monads (albeit slightly more liberal, if, as per previous section, we ignore the restriction from O'Connor's comment).

Personally, I find it rather tiresome to explicitly carry around the capabilities (the Ops thing) at every function call. I'd rather keep my function arguments for the program logic, and leave all the plumbing to the monad. Therefore, I turn Ops into a type class, and move it “left of the fat arrow”: really in Haskell A -> B and A => B mean the same thing, they only differ in whose responsibility it is to pass the arguments around.

somefunction :: (Monad m, Ops m) => A -> m B

The definition of Ops for a state effect would, then, become

class Ops m where
  put :: Int -> m ()
  get :: m Int
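A minimal sketch of what a concrete carrier for such a capability class might look like (the class is repeated so the snippet stands alone, and the carrier type is my invention; in practice the capability library derives instances like this via deriving-via):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.Trans.Reader (ReaderT (..))
import Data.IORef

-- The capability: a state effect over Int.
class Ops m where
  put :: Int -> m ()
  get :: m Int

-- A hypothetical carrier: a reader of an IORef over IO.
newtype App a = App (ReaderT (IORef Int) IO a)
  deriving (Functor, Applicative, Monad)

instance Ops App where
  put n = App (ReaderT (\ref -> writeIORef ref n))
  get   = App (ReaderT readIORef)

runApp :: App a -> IORef Int -> IO a
runApp (App (ReaderT f)) = f
```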

This is precisely the style of programming supported by the capability library (see also our announcement blog post).

Closing thoughts

Free monads, capabilities-as-records and capabilities-as-type-classes, are, essentially, three different flavours of the same thing (with free monads technically being the more restrictive of the three, as it can't have exception handling effects).

Choosing between the three is, ultimately, a matter of taste. I really like capabilities-as-type-classes because it pushes the boilerplate outside of the domain logic.

At the end of the day, what really matters is the core idea shared by these three approaches: capabilities should be expressed in terms of the domain logic, not in terms of implementation details.

March 20, 2019 12:00 AM

March 19, 2019

Holden Karau

Matt Parsons

Sum Types In SQL

Algebraic datatypes are a powerful feature of functional programming languages. By combining the expressive power of “and” and “or,” we can solve all kinds of problems cleanly and elegantly. SQL databases represent product types – “and” – extremely well - a SQL table can correspond easily and directly to a product type where each field in the product type can fit in a single column.

On the other hand, SQL databases have trouble with sum types – “or”. Most SQL databases support simple enumerations easily, but they lack the ability to talk about real sum types with fields. We can encode sum types in SQL in a few different ways, each of which has upsides and downsides.

For each of these examples, we will be encoding the following Haskell datatype:

data Animal
    = Cat Name Age FavoriteFood
    | Dog Name OwnerId
    | Bird Name Song

type Name = Text
type Age = Int
type FavoriteFood = Text
type OwnerId = Int
type Song = Text

We’re going to ignore that this datatype could be normalized (though I will describe datatype normalization and show what I mean at the end of the post).

The SQL examples in this blog post will use PostgreSQL. The Haskell query code will use the persistent syntax for entities and queries.

Shared Primary Keys

The first technique is the one that I have used with the most success. It has a good set of tradeoffs with respect to SQL normalization and Haskell types.

First, we are going to create a table for all animals and a type for the constructors:

CREATE TYPE animal_constr AS ENUM ('cat', 'bird', 'dog');

CREATE TABLE animal (
  id    SERIAL          NOT NULL,
  type  animal_constr   NOT NULL,

  PRIMARY KEY (id, type)
);

The type field on the table will distinguish the different constructors and tables we’ll use.

Let’s create the cat table:

CREATE TABLE cat (
    id      INTEGER PRIMARY KEY,
    type    animal_constr NOT NULL
        DEFAULT 'cat'
        CHECK (type = 'cat'),
    name            TEXT    NOT NULL,
    age             INTEGER NOT NULL,
    favorite_food   TEXT    NOT NULL,
    FOREIGN KEY (id, type)
        REFERENCES animal (id, type)
);

The contents of the type column for the cat table are completely constrained – we cannot insert any record into the table that does not have a value of type 'cat'. Fortunately, this is automated - we can omit specifying it, and it’ll be filled in automatically from the DEFAULT 'cat' clause.

The other SQL tables will have a similar format. The type field will be constrained to the constructor.

If we add new constructors to the Haskell datatype, we can use the SQL command:

ALTER TYPE animal_constr ADD VALUE 'alligator';

Then we can create an additional table 'alligator', and the constraints all work out exactly like we want.

This representation has a downside. It is possible to insert entries into the animal table which have no corresponding entry in any other table. Indeed, this is necessary - there must first be an entry in the animal table before we can insert the corresponding cat.

Let’s check out how to query this from Haskell. We are going to use the following persistent definition:

Animal
    type         AnimalConstr

Cat
    type         AnimalConstr
    name         Text
    age          Int
    favoriteFood Text

This corresponds exactly with the SQL description above, though it will generate migrations that are different. I am going to elide the PersistField definitions, as they are irrelevant, but AnimalConstr is defined as:

data AnimalConstr = Cat | Dog | Bird
    deriving (Eq, Show, Read)

Assuming we have defined the other animal tables, we’ll use this SQL query:

SELECT ??, ??, ??, ??
FROM animal
LEFT JOIN cat
    ON =
LEFT JOIN dog
    ON =
LEFT JOIN bird
    ON =
WHERE IS NOT NULL
   OR IS NOT NULL
   OR IS NOT NULL

We need the WHERE clause to ensure that we do not select animal records that do not have corresponding records in the subtables.

The Haskell code to load these records out of the database looks like this:

selectAnimalsDb
    :: SqlPersistM
        [ ( Entity Animal
          , Maybe (Entity Cat)
          , Maybe (Entity Dog)
          , Maybe (Entity Bird)
          )
        ]
selectAnimalsDb = rawSql theAboveSqlQuery []

We have a post-condition on the return value of this query that is guaranteed by the database schema, but not present in the types. One of the Maybe values will be a Just constructor, and the Just constructor will be determined by the AnimalConstr value on the Entity Animal value. This allows us to unsafely extract the Just value based on the constructor.

We would write our conversion function like so:

toDomainType
    :: Entity Animal
    -> Maybe (Entity Cat)
    -> Maybe (Entity Dog)
    -> Maybe (Entity Bird)
    -> Domain.Animal
toDomainType (Entity _ animal) mcat mdog mbird =
    case (animalType animal, mcat, mdog, mbird) of
        (Cat, Just cat, _, _) ->
            toCat cat
        (Dog, _, Just dog, _) ->
            toDog dog
        (Bird, _, _, Just bird) ->
            toBird bird
        _ ->
            error "Impossible due to database constraints"
  where
    toDog  :: Entity Dog  -> Domain.Animal
    toCat  :: Entity Cat  -> Domain.Animal
    toBird :: Entity Bird -> Domain.Animal

This approach is safe, easy to extend, and conforms to good relational database design. It is possible to make the Haskell-side even safer, by adding some fancier type-level tricks to the SQL conversion layer. However, that differs based on the database library you are using, and I would like to keep this post generalizable to other libraries. A followup post (or library?) might provide insight into this.

The persistent Approach

The persistent library has an approach to encoding sum-type entities. The documentation describes this feature. Given that I wrote the documentation, I’m going to copy it in here:

The schema in the test is reproduced here:

share [mkPersist persistSettings, mkMigrate "sumTypeMigrate"] [persistLowerCase|
Bicycle
    brand T.Text
Car
    make T.Text
    model T.Text
Vehicle
    +bicycle BicycleId
    +car CarId
|]

Let’s check out the definition of the Haskell type Vehicle. Using ghci, we can query for :info Vehicle:

>>> :i Vehicle
type Vehicle = VehicleGeneric SqlBackend
        -- Defined at .../Projects/persistent/persistent-test/src/SumTypeTest.hs:26:1

>>> :i VehicleGeneric
type role VehicleGeneric nominal
data VehicleGeneric backend
  = VehicleBicycleSum (Key (BicycleGeneric backend))
  | VehicleCarSum (Key (CarGeneric backend))
        -- Defined at .../persistent/persistent-test/src/SumTypeTest.hs:26:1
-- lots of instances follow...

A VehicleGeneric has two constructors:

  • VehicleBicycleSum with a Key (BicycleGeneric backend) field
  • VehicleCarSum with a Key (CarGeneric backend) field

The Bicycle and Car are typical persistent entities.

This generates the following SQL migrations (formatted for readability):

CREATE TABLE "bicycle" (
    "id"        INTEGER PRIMARY KEY,
    "brand"     VARCHAR NOT NULL
);

CREATE TABLE "car" (
    "id"        INTEGER PRIMARY KEY,
    "make"      VARCHAR NOT NULL,
    "model"     VARCHAR NOT NULL
);

CREATE TABLE "vehicle"(
    "id"        INTEGER PRIMARY KEY,
    "bicycle"   INTEGER NULL REFERENCES "bicycle",
    "car"       INTEGER NULL REFERENCES "car"
);

The vehicle table contains a nullable foreign key reference to both the bicycle and the car tables.

A SQL query that grabs all the vehicles from the database looks like this (note the ?? is for the persistent raw SQL query functions):

SELECT ??, ??, ??
FROM vehicle
LEFT JOIN car
    ON =
LEFT JOIN bicycle
    ON vehicle.bicycle =

If we use the above query with rawSql, we’d get the following result:

getVehicles
    :: SqlPersistM
        [ ( Entity Vehicle
          , Maybe (Entity Bicycle)
          , Maybe (Entity Car)
          )
        ]

This result has some post-conditions that are not guaranteed by the types or the schema. The constructor for Entity Vehicle is going to determine which of the other members of the tuple is Nothing. We can convert this to a friendlier domain model like this:

data Vehicle'
    = Car' Text Text
    | Bike Text

check = do
    result <- getVehicles
    pure (map (\(v, mbike, mcar) -> convert v mbike mcar) result)

convert
    :: Entity Vehicle
    -> Maybe (Entity Bicycle)
    -> Maybe (Entity Car)
    -> Vehicle'
convert (Entity _ (VehicleBicycleSum _)) (Just (Entity _ (Bicycle brand))) _ =
    Bike brand
convert (Entity _ (VehicleCarSum _)) _ (Just (Entity _ (Car make model))) =
    Car' make model
convert _ _ _ =
    error "The database preconditions have been violated!"

The SQL table that is automatically generated from the entities does not guarantee that exactly one ID column is present. We can resolve this by adding a CHECK constraint:

CHECK (bicycle IS NOT NULL OR car IS NOT NULL)

We need to add another check to ensure that at most one of the columns is present:

CHECK
    ( (bicycle IS NOT NULL AND car     IS NULL)
   OR (car     IS NOT NULL AND bicycle IS NULL)
    )

As compared to the “Shared Primary Keys” approach, this inverts the foreign key relationship. This means that we would need to insert a Cat or Dog into the database before we can insert an Animal into the database. Where the previous method would allow “orphan” animal records with no corresponding cat, dog, etc, this method allows for orphan cats and dogs without corresponding animal records.
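
To make the ordering concrete, here is a minimal sketch in Python with SQLite, reusing the vehicle tables from earlier (the brand value and the use of SQLite are illustrative, not from the original post): the referenced row must be inserted first, and the CHECK rejects rows that set both references or neither.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE bicycle (id INTEGER PRIMARY KEY, brand VARCHAR NOT NULL)")
conn.execute("CREATE TABLE car (id INTEGER PRIMARY KEY, make VARCHAR NOT NULL, model VARCHAR NOT NULL)")
conn.execute("""
    CREATE TABLE vehicle (
        id      INTEGER PRIMARY KEY,
        bicycle INTEGER NULL REFERENCES bicycle,
        car     INTEGER NULL REFERENCES car,
        CHECK ( (bicycle IS NOT NULL AND car IS NULL)
             OR (car IS NOT NULL AND bicycle IS NULL) )
    )""")

# The referenced row has to exist before the vehicle row that points at it.
conn.execute("INSERT INTO bicycle (id, brand) VALUES (1, 'Brompton')")
conn.execute("INSERT INTO vehicle (bicycle, car) VALUES (1, NULL)")

# A row that sets neither reference (or both) violates the CHECK.
try:
    conn.execute("INSERT INTO vehicle (bicycle, car) VALUES (NULL, NULL)")
    raise AssertionError("expected the CHECK to reject this row")
except sqlite3.IntegrityError:
    pass
```

The same ordering discipline applies regardless of which library generates the schema.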

Adding constructors is easy - you add a new column to the animal table, and adjust the CHECK constraints so that only one can be present.

This requires less work with adding custom ENUM types, requires fewer and less complicated foreign keys, and has less “dynamic” behavior (linking the id and type field at runtime vs statically known relationship).

Nullable Columns

This approach dispenses with multiple tables and represents the sum-type in a single table. It is an awkward encoding, and it is not considered good database design. However, it does avoid many JOINs, and is slightly more straightforward.

We will start by creating a table for animal and denormalize the other tables into it, with NULLable columns.

CREATE TABLE animal (
    id      INTEGER PRIMARY KEY,

    -- Common fields:
    type    animal_constr NOT NULL,

    -- Cat fields:
    cat_name    TEXT,
    cat_age     INTEGER,
    cat_food    TEXT,

    -- Dog fields:
    dog_name        TEXT,
    dog_owner_id    INTEGER REFERENCES owner,

    -- Bird fields:
    bird_name       TEXT,
    bird_song       TEXT
);


Now, when we parse fields out of the database, we will look at the type field, and assume that the relevant fields are not null. We can make this safe by using a CHECK constraint to ensure that the corresponding fields are not null, and that the irrelevant fields are null.

CHECK
  ( ( type = 'cat'
    AND cat_name     IS NOT NULL
    AND cat_age      IS NOT NULL
    AND cat_food     IS NOT NULL
    AND dog_name     IS NULL
    AND dog_owner_id IS NULL
    AND bird_name    IS NULL
    AND bird_song    IS NULL
    ) OR
    ( type = 'dog'
    AND dog_name     IS NOT NULL
    AND dog_owner_id IS NOT NULL
    AND cat_name     IS NULL
    AND cat_age      IS NULL
    AND cat_food     IS NULL
    AND bird_name    IS NULL
    AND bird_song    IS NULL
    ) OR
    ( type = 'bird'
    AND bird_name    IS NOT NULL
    AND bird_song    IS NOT NULL
    AND dog_name     IS NULL
    AND dog_owner_id IS NULL
    AND cat_name     IS NULL
    AND cat_age      IS NULL
    AND cat_food     IS NULL
    )
  )
This CHECK constraint is getting pretty gnarly, and it’s only going to get worse if we add additional constructors and fields. This kind of boilerplate can be automated away. I suspect that a complex CHECK constraint like this might become computationally intensive, as well, though I have no idea what the performance characteristics of it are.

This approach is explicitly denormalized, and your DBA friends may scoff at you for implementing it. However, it has many upsides, as well. It is simple and easy to query, and aside from the safety CHECK constraints to guarantee data integrity, it is relatively low boilerplate.

If you want to provide this schema for convenience, you might consider using one of the previous two choices and exposing this as a VIEW on the underlying data. The query to provide the same schema from the “Shared Primary Keys” approach is here:

CREATE VIEW animal_a (
    id      INTEGER,
    type    animal_constr NOT NULL,

    cat_name    TEXT,
    cat_age     INTEGER,
    cat_food    TEXT,

    dog_name        TEXT,
    dog_owner_id    INTEGER REFERENCES owner,
    bird_name       TEXT,
    bird_song       TEXT
) AS 
SELECT
    animal.id, animal.type,
    cat.name as cat_name, cat.age as cat_age, cat.food as cat_food,
    dog.name as dog_name, dog.owner_id as dog_owner_id,
    bird.name as bird_name, bird.song as bird_song
FROM animal
LEFT JOIN cat  ON animal.id = cat.id
LEFT JOIN dog  ON animal.id = dog.id
LEFT JOIN bird ON animal.id = bird.id

The view from the “Persistent” approach is slightly different, because the IDs for the cats/dogs/birds differ from the animal ID. If you needed to provide absolute backwards compatibility here, then you could provide the view with redundant cat_id etc columns.

CREATE VIEW animal_b (
    id      INTEGER,
    type    animal_constr NOT NULL,

    cat_id      INTEGER,
    cat_name    TEXT,
    cat_age     INTEGER,
    cat_food    TEXT,

    dog_id          INTEGER,
    dog_name        TEXT,
    dog_owner_id    INTEGER REFERENCES owner,

    bird_id         INTEGER,
    bird_name       TEXT,
    bird_song       TEXT
) AS
SELECT
    animal.id, animal.type,
    cat.id as cat_id, cat.name as cat_name, cat.age as cat_age, cat.food as cat_food,
    dog.id as dog_id, dog.name as dog_name, dog.owner_id as dog_owner_id,
    bird.id as bird_id, bird.name as bird_name, bird.song as bird_song
FROM animal
LEFT JOIN cat  ON animal.cat_id = cat.id
LEFT JOIN dog  ON animal.dog_id = dog.id
LEFT JOIN bird ON animal.bird_id = bird.id

Since this option is expressible as a VIEW on the other two options, I’d suggest doing that if you need to provide this schema.

Datatype Normalization

Above, I mentioned that the Animal datatype we’re using is not normalized. Haskell datatypes can be normalized in much the same way that SQL relations can be. The Wikipedia article on database normalization is a good place to learn about this, and the process is essentially the same.

First, we notice that the Name field is repeated in each constructor. We are going to factor that field out.

data AnimalDetails
    = Cat Age FavoriteFood
    | Dog OwnerId
    | Bird Song

data Animal 
    = Animal Name AnimalDetails

All of the repetition has been factored out.


@cliffordheath commented that these three strategies have well-known names in the database world - absorption, separation, and partition.

March 19, 2019 12:00 AM

March 18, 2019

Neil Mitchell

GHC Rebuild Times - Shake profiling

Summary: GHC rebuild times are slow. Using the Shake profiler and Hadrian we can find out why.

I just checked out GHC, and using Hadrian (the Shake-based build system), built GHC on Windows using:

hadrian\build.stack.bat -j --flavour=quickest --integer-simple --configure --profile

Namely use stack to install dependencies and build Hadrian itself, then compile as quick as I can get it, on all CPUs (8 on my machine), run configure for me and produce a profile report.html output. After compiling Hadrian, 40m54s later I had a built GHC. That's not a quick process! Why did it take so long? If I bought a better machine would it go faster? How might we optimise GHC? These questions and more can be answered with Shake profiling.

Shake has had profiling for years, but in the recently-released Shake 0.17.7 I've overhauled it. The profile report is generated as a web page, and in the new version the generated output is smaller (2x smaller), loads faster (100x or more) and is more intuitive (not really a numeric thing). In the rest of this post I'll pepper in some screenshots from the Shake profiler with thoughts about what they could mean. I've also uploaded the profile so you can play around with it:

Hadrian Profile

Summary Page

The first page you see when opening the report is the summary.

This page gives us some basic stats. There was 1 run of the build system. It ran 3,645 traced actions (e.g. command line calls or other expensive actions) and there were 15,809 rules run (where a rule is something with dependency information - somewhere between one third and two thirds of those are likely to be files in typical build systems).

Turning to performance, the entire build, on one CPU, would take 2h26m. The build on my 8 CPU machine took 40m45s, with on average 3.58 commands going at once (quite a bit less than the 8 I wanted). The critical path is about 37m14s, which is the lower bound with infinite CPUs - so buying a machine with more CPUs won't really help (faster CPUs and a faster disk probably would though).

OK, so I'm now unhappy that GHC doesn't execute enough in parallel. So let's see what it does manage to do in parallel by switching to the Command Plot.

Command Plot

We now see a graph of what was executing at each point during the build. We see spikes in a hideous light blue for GHC, showing that when GHC gets going, it can get near to the 8 CPUs we requested. However, we see lots of periods of time with only 1 task executing. In most cases these are either sh at the start (which I happen to know is configure), or cabal-configure (which is more obviously configure). However, there are also Haskell blips where we get down to 1 CPU. I'm now increasingly convinced that the biggest problem Hadrian has (performance wise) is lack of parallelism. To confirm that, let's switch to the Parallelizability tab.


Parallelizability

This next tab predicts how long it will take to build Hadrian at various different thread counts. The green line is if there were no dependencies, the blue line is with the dependencies we have, and the yellow line is the difference. As we can see, at 8 CPUs the difference is about 16m - I could have had my GHC a whole 16m faster if we could parallelise GHC more. At the same time, it looks like the sweet spot for compiling GHC is currently around 6 CPUs - more doesn't make a huge amount of difference. How sad. To investigate let's jump to the Rules tab.


Rules

Now we've moved on from pretty graphs to tables of rules. The most interesting columns for performance work are Time (how long something took), ETime (how long it took if you only pay for the fraction of the computer you are using) and WTime (how long you were the only thing running for). The first is most relevant if you want to take less CPU, the second two if you aren't hitting the parallelism you are hoping for. Since we aren't hitting the parallelism, we can sort by WTime.

For WTime, if we eliminated that step, the total build would improve by that amount of time. Looking at the first two entries, which are the initial configure and then configure of the base library, we see a total of 8m38s. If we could get rid of configure, or speed it up, or interleave it with other operations, we could save almost 10 minutes off a GHC build. Using the search bar we can figure out how much configure costs us in total.

Now we have used the search bar to filter to only rules that run the command cabal-configure or sh, and we've named them all in the group configure (so it sums them up for us). We see we spend 15m18s configuring, and would go about 10m faster if we didn't - so it's expensive, and serialises the build a lot. I hate configure.

Slow Stage0 Compilation

Ignoring configure, the next slow things are building the stage0 compiler, so let's focus in on that.

While we can use the search bar for entering JavaScript expressions, we can equally just enter substrings. Let's delve into /compiler/ and sort by Time. We quickly see a bunch of stage0 and stage1 compiles, with HsInstances.o and DynFlags.o right at the top. Those files take a really long time to compile, and end up serialising the build quite a bit. However, it's a bit odd that we see stage0, but a lot less of stage1. Let's compare the two stages:

Now we're using a regular expression to pull out the .o compiles in compiler, and group them by their stage. We can see that both involve 1,527 compiles, but that stage0 takes 43m, while stage1 is under 18m. Why? Either we're using different sets of flags (e.g. compiling stage0 with higher optimisations or warnings), or GHC HEAD (the output of stage0 which we use to compile stage1) is significantly faster than GHC 8.6.3 (which I used to compile stage0). I've no idea, but tracking down the difference could save at least 7 minutes on the wall clock time of building GHC.


Compiling GHC is slow, but the biggest problem is it doesn't parallelise well. Using Shake profiling we've found that configure and stage0 are the culprits. There's lots to be done, and hopefully a Summer of Code project too.

by Neil Mitchell ( at March 18, 2019 11:50 PM

Edward Z. Yang

Microsoft Surface Book 2

Long time readers of mine may be aware that I used a ThinkPad X61T for the past decade. After the hinge broke on my second instance of the machine, I decided it was finally time to get a new laptop. And I had my eye on one particular model, after Simon Peyton Jones showed me his new laptop at the last Haskell Implementor's Workshop: the Microsoft Surface Book 2. It fits my primary requirement for a laptop: it's a laptop convertible into tablet mode with a digitizer pen. The pen is not Wacom branded but it has an eraser end and can magnetically attach to the laptop (no enclosure for the pen, but I think that for modern hardware that constraint is unsatisfiable.) Furthermore, there is a Linux enthusiast community around the device, which made me feel that it would be more likely I could get Linux to work. So a few weeks ago, I took the plunge, and laid down three grand for my own copy. It has worked out well, but in the classic Linux style, not without a little bit of elbow grease.

A quick review

The good:

  1. I've managed to get all of the "important" functionality to work. That's Xournal with XInput pen and hibernate (though with some caveats.)
  2. Linux support for other random features has pleasantly surprised me: I managed to get a working CUDA install and drivers (for PyTorch development), ability to boot my Linux partition bare metal as well as from a VM in Windows and I can even detach the screen while booted into Linux.
  3. The keyboard is nice; not as good as a classic Thinkpad keyboard, but it has real function keys (unlike the Macbook Pro I use at work.)
  4. Two standard USB ports as well as a USB-C port means I don't need dongles for most usage (unlike my Macbook Pro, which only has USB-C ports.)

The bad:

  1. (Updated on March 19, 2019) Suspend is really slow. Although jakeday's notes suggest that suspend is not working, something is working, in the sense that if I close my laptop lid, the laptop goes into a low power state of some sort. But it takes quite a long time to suspend, an even longer time to restart, and you still have to click past the bootloader (which makes me seriously wonder if we are actually suspending).
  2. The laptop un-hibernates itself sometimes when I put it in my backpack. My current hypothesis is that the power button is getting pushed (unlike most laptops, the power button is unprotected on the top of the screen). Probably some fucking around with my ACPI settings might help but I haven't looked closely into it yet.
  3. It's a high DPI screen. There's nothing wrong with this per se (and you basically can't buy a non-high DPI laptop these days), but any software that doesn't understand how to do high DPI (VMWare and Xournal, I'm looking at you) looks bad. The support of Ubuntu Unity for high DPI has gotten much better since the last time I've attempted anything like it, however; if I stick to the terminal and browser, things look reasonable.
  4. The function key is hardwired to toggle fn-lock. This is somewhat irritating because you have to remember which setting it's on to decide if you should hold it to get the other toggle. I'm also feeling the loss of dedicated page-up/page-down keys.
  5. Apparently, the NVIDIA GPU downthrottles itself due to thermal sensor shenanigans (something something the fan is on the motherboard and not the GPU so the driver thinks the fan is broken and throttles? Mumble.)
  6. The speakers are... OK. Not great, just OK.
  7. It's too bad Microsoft opted for some custom charger for the Surface Book 2.

Linux setup

I did a stock install of the latest Ubuntu LTS (18.04) dual boot with Windows (1TB hard drive helps!), and then installed jakeday's custom Linux kernel and drivers. Some notes about the process:

  • I spent a while scratching my head as to why I couldn't install Linux dual-boot. Some Googling suggested that the problem was that Windows hadn't really shutdown; it had just hibernated (for quick startup). I didn't manage to disable this, so I just resized the Windows partition from inside Windows and then installed Linux on that partition.

  • Don't forget to allocate a dedicated swap partition for Linux; you won't be able to hibernate without it.

  • The Surface Book 2 has secure boot enabled. You must follow the signing instructions in jakeday's repository to get signed kernels.

  • One consequence of generating signed kernels is that if you have both the unsigned and signed kernels installed, update-initramfs -u will update the initrd for your unsigned kernel, meaning that you won't see your changes unless you copy the initrd over! This flummoxed me a lot about the next step...

  • If you want to use the NVIDIA drivers for your shiny NVIDIA GPU, you need to blacklist nouveau. There are plenty of instructions on the internet but I can personally vouch for remingtonlang's instructions. Make sure you are updating the correct initrd; see my bullet point above. Once this was fixed, a standard invocation of the CUDA installer got me a working nvidia-smi. Note that I manually signed the NVIDIA modules using the instructions here since I already had generated a private key, and it seemed silly to generate another one because NVIDIA's installer asked me to.

  • Once you install the NVIDIA drivers, you have to be careful about the opposite problem: Xorg deciding it wants to do all its rendering on the NVIDIA card! The usual symptom when this occurs is that your mouse input to Linux is very laggy. If you have working nvidia-smi, you can also tell because Xorg will be a running process on your GPU. In any case, this is bad: you do NOT want to use the dGPU for plain old desktop rendering; you want the integrated one. I found that uncommenting the sample Intel config in /etc/X11/xorg.conf.d fixes the problem:

    Section "Device"
        Identifier  "Intel Graphics"
        Driver      "intel"
    EndSection

    But this doesn't play too nicely with VMWare; more on this below.

  • Sound did not work (it was too soft, or the right speaker wasn't working) until I upgraded to Linux 5.0.1.

  • After enabling XInput on my fork of Xournal, it did not start working until I restarted Xournal. Eraser worked right out of the box.

  • Don't forget to make a swap partition (Ubuntu default installer didn't prompt me to make one, probably because I was installing as dual-boot); otherwise, hibernate will not work.

  • Sometimes, when waking up from hibernate, networking doesn't work. Mercifully, this can be fixed by manually reloading the WiFi kernel module: modprobe mwifiex_pcie and systemctl restart NetworkManager.service. More discussion on this issue.

  • Sometimes, when waking up from hibernate/suspend, I get a big thermometer icon. When I reboot again it goes away but I have lost my hibernate/suspend. Perplexing! I don't know why this happens.

Boot via VM

The sad truth of life is that the Windows tablet experience is much better than the Linux experience--to the point where many would just install Windows and then boot Linux from a virtual machine (or Windows Subsystem for Linux). This was a non-starter for me: a bare metal boot of Linux was necessary to get the best pen input experience. However, why not also make it possible to boot the Linux partition from VMWare running on Windows? This setup is explicitly supported by VMWare, but it took a few days of fiddling to get it to actually work.

  • First, you need VMWare Workstation Pro to actually configure a VM that accesses raw disk (although the resulting VM image can be run from the free VMWare Player). You can sign up for the thirty-day trial to get it configured, and then use Player from then on, if you like. VMWare will offer the raw disk as an option when setting up disk; pick that and select the Linux partitions on your machine.
  • The primary challenge of setting up this system is that a standard install of Linux on the Surface Book 2 doesn't have a traditional Linux boot partition; instead, it has an EFI partition. Most notably, this partition is permanently mounted by Windows on boot up, so you can't remount it for your VM. Your regular partition doesn't have a bootloader, which is why when you turn on your VM, you get kicked into network boot via PXE. The workaround I ended up applying is to make a new, fake disk (vmdk-backed) and install the boot partition onto that (you don't actually need any of the kernels or initrds, since they live on your root filesystem; only /boot/efi is mounted from the EFI partition). Of course, you have to actually setup this boot partition; the way I did it was to chroot into my partition on a rescue CD and then run grub-install /dev/sda1. In the course of fiddling, I also accidentally ran update-grub which blew away my Windows boot option, but re-running this command when booted into Linux bare-metal fixed the problem (because the real /boot/efi will mount and thus Grub will find the Windows boot option.)
  • Some documentation about dual-boot is specific to VMWare Fusion. This is OS X specific, so not relevant to the Microsoft Surface Book 2.
  • Get yourself a bootable Linux CD (I personally use SystemRescueCd) to help debug problems in the installation process.
  • Make sure all of your /etc/fstab entries correspond to real disks, or your Ubuntu startup process will spend a while waiting for a disk that is never going to show up. I had this problem with the /boot/efi mount, because the mount was UUID based; I "fixed" it by changing the mount to be LABEL based and labeling my vmdk accordingly (I suppose it might also have been possible to change the UUID of my vmdk, but I couldn't find any reasonable instructions for doing so on Windows). Note that the volume doesn't actually have to successfully mount (mine doesn't, because I forgot to format it vfat); it just has to exist so systemd doesn't wait to see if it connects at some later point in time.
  • I don't really understand how Unity decides to provide scaling options, but although it offers magnification on a bare metal boot, those options are not available when run under a VM. I get something tolerably sized (with only slight blurriness) by setting the resolution to 1680 x 1050; play around a bit with it. I have "Stretch Mode" enabled in VMWare.
  • Whether or not you can log into your account depends on your X11 configuration; so if you're like me and uncommented the Intel configuration, I found this bricks my login (and you can unbrick it by commenting out again.) How do I make both work? Don't ask me; I'm still figuring it out.

Window manager

I haven't gotten around to setting up xmonad; this is in no small part due to the fact that Unity appears to support a very rudimentary form of tiling: Windows-left and Windows-right will move windows to fill the left/right half of the display, while Windows-up will fullscreen a window. I might still try to get xmonad set up on 18.04, but for now it is nice not having to fight with trayer to get the standard icons.

What's next

My two top priorities for improving the state of Linux on the Surface Book 2:

  1. Rewrite Xournal with support for hDPI (how hard could it be lol)
  2. Figure out how to make suspend/hibernate work more reliably

Otherwise, I am very happy with this new laptop. One thing in particular is how much faster my mail client (still sup) runs; previously, scanning for new mail would be a crawl, but on this laptop they stream in like a flash. Just goes to show how much an upgrade going from a 1.6GHz processor to a 4.2GHz processor is :3

by Edward Z. Yang at March 18, 2019 01:59 AM

March 16, 2019

Magnus Therning

TIL: prompt matters to org-mode

A workmate just embellished some shell code blocks I’d put in a shared org-mode file with :session arguments. When I tried to run the blocks with sessions my emacs just froze up though. I found a post on the emacs StackExchange that offered a possible cause for it: the prompt.

I’m using bash-it so my prompt is rather far from the default.

After inspecting the session buffer I simply added the following to my ~/.bashrc

if [[ ${TERM} == "dumb" ]]; then
    export BASH_IT_THEME='standard'
else
    export BASH_IT_THEME='simple'
fi

and now I can finally run shell code blocks in sessions.

March 16, 2019 12:00 AM

March 14, 2019

Brent Yorgey

Idea for a physics-based rolling ball puzzle game

For quite a while I’ve had this idea for a cool game, and had vague intentions to learn some game/physics framework well enough to make it, but I’ve finally admitted that this is never going to happen. Instead I’ll just describe my idea here in the hope that someone else will either make it, or tell me that it already exists!

This is a 2D physics-based puzzle/obstacle game where you control a ball (aka circle). The twist that distinguishes it from similar games I’ve seen is that you have only two ways to control the ball:

  • Pushing the left or right arrow keys changes the ball’s angular velocity, that is, its rate of spin. If the ball is sitting on a surface, this will cause it to roll due to friction, but if the ball is in the air, it will just change its spin rate without changing its trajectory at all.

  • The down arrow key increases the ball’s velocity in the downwards direction. If the ball is sitting on a surface this will cause it to bounce upwards a bit. If the ball is in the air you can cause it to either bounce higher, by adding to its downward velocity while it is already falling, or you can dampen a bounce by pushing the down arrow while the ball is travelling upwards.

Those are the key mechanics. My intuition is that controlling the ball would be challenging but doable, and there would be lots of opportunities for interesting obstacles to navigate. For example, to get out of a deep pit you have to keep bouncing higher and then once you’re high enough, you impart a bit of spin so the next time you bounce you travel sideways over the lip of the pit. Or there could be a ledge so that you have to bounce once or twice while travelling towards it to get high enough to clear it, but then immediately control your subsequent bounce so you don’t bounce too high and hit some sort of hazard on the ceiling. And so on.

Finally, of course there could be various power-ups (e.g. to make the ball faster, or sticky, or to alter gravity in various ways). Various puzzles might be based on figuring out which power-ups to use or how to use them to overcome various obstacles.

So, does this game already exist? Or does someone want to make it? (Preferably in Haskell? =)

by Brent at March 14, 2019 10:19 AM

Donnacha Oisín Kidney

More Agda Tips

Posted on March 14, 2019
Tags: Agda

Literate Agda

For including Agda code in LaTeX files, Agda’s built-in literate programming support is a great tool. It typesets code well, and ensures that it typechecks, which can help avoid typos.

Embedding Agda Code in LaTeX

I write the LaTeX document in one file, and the Agda code in another .lagda file. Using the catchfilebetweentags LaTeX package, I can then embed snippets of the Agda code into the LaTeX document. For instance, in a file named Lists.lagda I can have the following:
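
The example block didn’t survive extraction here. With catchfilebetweentags, Lists.lagda would contain something along these lines (the tag name and the function body are assumptions; only the head : List A → Maybe A signature appears in the surrounding text):

```latex
%<*head>
\begin{code}
head : List A → Maybe A
head []       = nothing
head (x ∷ xs) = just x
\end{code}
%</head>
```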

Then, after compiling the Agda file with agda --latex --output-dir=. Lists.lagda, I can embed the snippet head : List A → Maybe A into the TeX file like so:
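
The embedding command is likewise missing; with catchfilebetweentags it would look something like this (assuming the snippet was tagged head in the .lagda file):

```latex
% in the preamble:
\usepackage{catchfilebetweentags}

% wherever the snippet should appear:
\ExecuteMetaData[Lists.tex]{head}
```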

Dealing with Unicode

Most Agda source code will be Unicode-heavy, which doesn’t work well in LaTeX. There are a few different ways to deal with this: you could use XeTeX, which handles Unicode better, for instance. I found it easier to use the ucs package, and write a declaration for each Unicode character as I came across it. For the character above, for instance, you can write:
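
The declaration itself was lost from this copy; for the → character in the snippet above, a ucs declaration would look like:

```latex
\DeclareUnicodeCharacter{8594}{\ensuremath{\rightarrow}}
```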

Live Reloading

For plain LaTeX code, I use Spacemacs and Skim to get live reloading. When I save the LaTeX source code, the Skim window refreshes and jumps to the point my editing cursor is at. I use elisp code from this blog post.

For Agda code, live reloading gets a little trickier. If I edit an Agda source file, the LaTeX won’t automatically recompile it. However, based on this stack exchange answer, you can put the following .latexmkrc file in the same directory as your .lagda files and your .tex file:
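
The .latexmkrc contents are missing here. Following the linked Stack Exchange approach, it presumably registered a custom dependency along these lines (a sketch, not the author's exact file):

```perl
# .latexmkrc: teach latexmk to regenerate Foo.tex from Foo.lagda
add_cus_dep('lagda', 'tex', 0, 'lagda2tex');
sub lagda2tex {
    my ($base) = @_;
    return system('agda', '--latex', '--output-dir=.', "$base.lagda");
}
```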

This will recompile the literate Agda files whenever they’re changed. Unfortunately, it doesn’t automate it the first time you do it: it needs to see the .tex files to see the dependency. You can fix this yourself, by running agda --latex --output-dir=. when you add a new .lagda file (just once, after that the automation will take over), or you can use a script like the following:
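
The script was also lost in extraction; a plausible sketch (the file name compile-lagda.sh is an assumption) that compiles any .lagda file lacking a corresponding .tex:

```sh
#!/bin/sh
# compile-lagda.sh: compile every literate Agda file with no matching .tex yet.
for lagda in *.lagda; do
    [ -e "$lagda" ] || continue            # handles the case of no .lagda files
    tex="${lagda%.lagda}.tex"
    if [ ! -f "$tex" ]; then
        agda --latex --output-dir=. "$lagda"
    fi
done
```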

This will compile any .lagda file it finds that doesn’t have a corresponding .tex file (so it won’t slow things down). Then call that script on the first line of your .latexmkrc, like so:
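
The call was dropped as well; assuming the helper script is named compile-lagda.sh, the first line of the .latexmkrc would be something like:

```perl
system('./compile-lagda.sh');
```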

Flags for Debugging

There are a number of undocumented flags you can pass to Agda which are absolutely invaluable when it comes to debugging. One of them can tell you more about termination checking (term), another reports on type checking (tc), another for profiling (profile), and so on. Set the verbosity level (agda -v 100) to get more or less info.

Type Checking Order

Agda does type checking from left to right. This isn’t always desired: as an example, if we want to annotate a value with its type, we can use the following function:
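
The function didn’t survive extraction; the usual idiom is an explicitly typed identity, something like this (the name the is an assumption):

```agda
the : (A : Set) → A → A
the _ x = x

-- usage: the ℕ 3   (the type comes first, then the value)
```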

Coming from Haskell, though, this is the wrong way around. We usually prefer to write something like 3 :: Int. We can’t write that as a simple function in Agda, though, so we instead use a syntax declaration:
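
The declaration is missing here; a syntax declaration that flips the argument order would read roughly like this (the operator glyph is an assumption):

```agda
syntax the A x = x ∶ A

-- now one can write 3 ∶ ℕ, with the value first
```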

Changing the order of type checking can also speed up typechecking in some cases. There’s more information about syntax declarations in Agda’s documentation.

by Donnacha Oisín Kidney at March 14, 2019 12:00 AM

March 12, 2019

FP Complete

Enhancing File Durability in Your Programs - FP Complete


At FP Complete, we strive to build systems that endure the direst of situations. An unexpected shutdown (like a kernel panic, or unplugging the power cord) in a machine should not affect the durability of confirmed writes in programs we develop.

by Roman Gonzalez ( at March 12, 2019 03:15 PM

Sandy Maguire

Freer, yet Too Costly Higher-order Effects

I'm throwing in the towel. For now at least.

As of today I have free, higher-order effects working. Unfortunately, they are not fast. I don't think this is a fundamental limitation, merely that whatever code I've written isn't amenable to GHC's optimization process.

I've been hammering on this for about 50 hours now. It's been driving me slowly crazy, and I promised myself I'd stop if I hadn't solved it by now. That being said, before putting this project to rest, I wanted to do a quick write-up detailing what I've learned, how everything fits together, and where I'm hoping someone will pick up the ball. Here's the repository.

Higher Order Effects

In the freer-simple model, effects are first-order---meaning they are unable to embed Eff computations inside of them. This is occasionally annoying, primarily when trying to write bracket-like effects.

You can sort of work around the problem by encoding your scoped computation as an interpretation of an effect, but this often comes at the cost of fixing the interpretations of the other effects you're dealing with.

This fundamental limitation comes from the fact that freer-simple effects have kind * -> *. There's nowhere in here to stick the Eff stack you're working in. You can kind of hack it in, but it never plays nicely.

The solution is given in the paper Effect Handlers in Scope, and implemented in the fused-effects package. The idea is to parameterize each of your effects with a monad---ie. they have kind (* -> *) -> * -> *. This parameter gets instantiated at the entire Eff stack, as seen by the type of send :: Member e r => e (Eff r) a -> Eff r a

While it's an obvious insight, actually getting everything to play nicely is tricky. The primary issue is how do you push interpretations through these additional monadic contexts? For example, let's consider a bracket-esque effect:

data Bracket m x where
  Bracket :: m a          -- ^ allocate
          -> (a -> m ())  -- ^ deallocate
          -> (a -> m x)   -- ^ use
          -> Bracket m x

Assume we want to push a State effect through these ms. What are the correct semantics for how the state is accumulated? Should any state changed in the deallocate block count outside of the bracket? What should happen in the use case if an exception is thrown and the rest of the block is short-circuited?

Not only are we concerned with the semantics, but also the actual mechanism of propagating this state throughout the computation.
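These semantic questions are exactly the ones mtl users answer by picking a transformer order. A self-contained illustration (plain mtl, no effect library involved) showing that the same program gives two different answers depending on the stacking:

```haskell
import Control.Monad.Except
import Control.Monad.State

-- One program, two stackings of the same two effects -- and two
-- different answers to "does state written before a throw survive
-- the handler?"
prog :: (MonadState Int m, MonadError String m) => m ()
prog = (put 1 >> throwError "boom") `catchError` \_ -> pure ()

-- Except over State: the put lands in the base monad and survives.
survives :: (Either String (), Int)
survives = runState (runExceptT prog) 0   -- (Right (), 1)

-- State over Except: the throw rolls the state back to what it was
-- when catchError was entered.
discarded :: Either String ((), Int)
discarded = runStateT prog 0              -- Right ((), 0)
```

A higher-order effect system has to make this choice explicit somewhere; weave, below, is where it gets made.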

Effect Handlers in Scope introduces weave in a typeclass, which is responsible for this state-propagation behavior. Statefulness for an effect is encoded as some arbitrarily chosen functor f, and weave describes how the effect should move that state through the computation. Behold:

class Effect e where
  weave
      :: Functor f
      => f ()
      -> (forall x. f (m x) -> n (f x))
      -> e m a
      -> e n (n (f a))

The f () parameter is the current "state of the world", and the rank-2 thing is this distribution law. You can intuit the m parameter being an effect stack with all of the effects present, and the n parameter as being the same effect stack---but with the top of it taken off. To clarify, we could monomorphize it thusly:

weave
    :: Functor f
    => f ()
    -> (forall x. f (Eff (g ': r) x) -> Eff r (f x))
    -> e (Eff (g ': r)) a
    -> e (Eff r) (Eff r (f a))

This janky return type: e (Eff r) (Eff r (f a)) comes from the fact that Effect Handlers in Scope describes a traditional "free" (as opposed to freer) approach to a free monad. The last parameter of an effect is actually a continuation for the next piece of the computation. By mangling it, we're ensuring the caller (which will be library code) respects the evaluation semantics.

weave implicitly defines the evaluation semantics of an effect---it pins how state propagates through them, which in turn defines which pieces of the effect are observable to the outside.

Freeing the Higher Orders

Warning: This next section describes a janky-ass solution to the problem. It's clearly a hack and clearly not the right answer. But maybe by writing out what I did, someone with a keen eye can point out where I went wrong.

So this is all well and good. It works, but requires a lot of boilerplate. As presented in the paper, a new effect requires:

  • A Functor instance
  • An MFunctor instance (providing hoist :: forall x. (f x -> g x) -> e f a -> e g a)
  • An Effect instance as above, and an additional method not described here

If you're following in the fused-effects tradition, for each interpretation you additionally need a new Carrier type, with its own Functor, Applicative and Monad instances, and then another typeclass tying the effect to its carrier.

fused-effects improves the O(n²) MTL instance problem to O(n)---albeit with a big constant factor :(
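For reference, the O(n²) problem in miniature (the MonadTeletype class and its stub Identity instance are made up for illustration; the pattern is the standard mtl one):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT)
import Control.Monad.Trans.State (StateT, evalStateT)
import Data.Functor.Identity (Identity, runIdentity)

-- Every effect class needs a lifting instance for every transformer,
-- and every new transformer needs one for every existing class.
class Monad m => MonadTeletype m where
  readLine :: m String

instance MonadTeletype m => MonadTeletype (StateT s m) where
  readLine = lift readLine

instance MonadTeletype m => MonadTeletype (ReaderT r m) where
  readLine = lift readLine
-- ...and so on, once per transformer, for every new effect class.

-- A stub base instance so a stack can actually run.
instance MonadTeletype Identity where
  readLine = pure "stub"
```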

This is a huge amount of work! I've said it before and I'll say it again: ain't nobody got time for that. If it feels like too much work, people aren't going to do it. A solution that depends on humans not being lazy isn't one that's going to take off.

So wouldn't it be nice if we could just get all of this effect stuff for free?

Here's where I admittedly went a little off the rails. The first step towards getting a freer Functor-less Monad instance for Eff is to define it in terms of its final encoding. I made the obvious changes to the definition from last time without thinking too much about it:

newtype Freer f a = Freer
  { runFreer
        :: forall m
         . Monad m
        => (forall x. f (Freer f) x -> m x)
        -> m a
  }
I have no idea if this is right, but at least it gives a Monad instance for free. One limitation you'll notice is that the continuation in runFreer is a natural transformation, and thus it's unable to change its return type.

That means interpretations like runError :: Eff (Error e ': r) a -> Eff r (Either e a) are surprisingly difficult to implement. More on this later---I just wanted to point out this flaw.
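For what it's worth, the part that does come for free can be sketched in isolation. The following is a toy reconstruction: the Freer newtype as above, its three instances, and a single-effect send standing in for the real Member-based one.

```haskell
{-# LANGUAGE GADTs, KindSignatures, RankNTypes #-}

import Control.Monad.State (State, modify, runState)
import Data.Kind (Type)

-- The final encoding: a Freer computation is anything that can run
-- in any monad m, given a handler for its effects.
newtype Freer (f :: (Type -> Type) -> Type -> Type) a = Freer
  { runFreer
        :: forall m
         . Monad m
        => (forall x. f (Freer f) x -> m x)
        -> m a
  }

-- All three instances fall out of the encoding mechanically.
instance Functor (Freer f) where
  fmap g (Freer m) = Freer $ \k -> fmap g (m k)

instance Applicative (Freer f) where
  pure a = Freer $ \_ -> pure a
  Freer mf <*> Freer ma = Freer $ \k -> mf k <*> ma k

instance Monad (Freer f) where
  Freer m >>= g = Freer $ \k -> m k >>= \a -> runFreer (g a) k

-- A toy effect (its monad parameter is unused) and a toy send.
data Say (m :: Type -> Type) x where
  Say :: String -> Say m ()

send :: f (Freer f) a -> Freer f a
send e = Freer $ \k -> k e

hello :: Freer Say ()
hello = send (Say "hello") >> send (Say "world")

-- Interpreting by instantiating runFreer's m at State [String].
runSay :: Freer Say a -> (a, [String])
runSay p = runState (runFreer p phi) []
  where
    phi :: Say (Freer Say) x -> State [String] x
    phi (Say s) = modify (++ [s])
```

Note that phi really is a natural transformation: it cannot change the result type, which is precisely the limitation described above.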

From here I followed Oleg and implemented Eff as a type synonym, making sure to tie the knot and instantiate the m parameter correctly:

type Eff r = Freer (Union r (Eff r))

But how can we get a free implementation for weave?

It's this mental thing I came up with, which is sort of like Coyoneda but for weave:

data Yo e m a where
  Yo :: (Monad n, Functor f)
     => e m a
     -> f ()
     -> (forall x. f (m x) -> n (f x))
     -> (f a -> b)
     -> Yo e n b

In retrospect, I would not spend any more time on this approach---I'd just make people give an instance of weave for higher-order effects, and machinery to derive it automatically for first-order ones.

But then how can we get an MFunctor instance for free? You can't just derive it generically---lots of higher-order effects want existentials, and thus can't have Generic instances.

This Yo thing mirrors the definition of weave pretty closely. The idea is that it can accumulate arbitrarily many weaves into a single Yo, and then dispatch them all simultaneously.

Some interesting points to note are that the state functor f is existentialized, and that there is this final f a -> b parameter to make it play nicely with the Union (more on this in a second.) We can implement weave now by replacing the existing state functor with a Compose of the new one and the old one.

weave
    :: (Monad m, Monad n, Functor f)
    => f ()
    -> (forall x. f (m x) -> n (f x))
    -> Union r m a
    -> Union r n (f a)
weave s' distrib (Union w (Yo e s nt f)) =
  Union w $
    Yo e (Compose $ s <$ s')
         (fmap Compose . distrib . fmap nt . getCompose)
         (fmap f . getCompose)

We can also use Yo to get a free MFunctor instance:

instance MFunctor (Yo e) where
  hoist f (Yo e s nt z) = Yo e s (f . nt) z

OK, all of this works I guess. But what's with this weird f a -> b thing in Yo that I mentioned earlier? Well recall the type of runFreer, when instantiated at Union:

runFreer
    :: forall m
     . Monad m
    => (forall x. Union r (Eff r) x -> m x)
    -> m a

The only way we can produce an m a is via this rank-2 thing, which is a natural transformation from Union r (Eff r) to m. In other words, it's not allowed to change the type. We can't just stuff the f into the result and return an m (f a) instead---this thing doesn't form a Monad! Fuck!

All of this comes to a head when we ask ourselves how to actually get the state out of such a contraption. For example, when we call runState we want the resulting state at the end of the day!

The trick is the same one I used in the last post---we're able to instantiate the m inside runFreer at whatever we like, so we just choose StateT s (Eff r) and then run that thing afterwards. Again, this is very clearly a hack.

Because weave is given freely, interpretations must eventually actually decide what that thing should look like. Some combinators can help; for example, this is the interface I came up with for implementing runBracket:

runBracket
    :: Member (Lift IO) r
    => (Eff r ~> IO)
    -> Eff (Bracket ': r) a
    -> Eff r a
runBracket finish = deep $ \start continue -> \case
  Bracket alloc dealloc use -> sendM $
    bracket
      (finish $ start alloc)
      (finish . continue dealloc)
      (finish . continue use)

The deep combinator gives you start and continue (which are the cleverly disguised results of weave), and asks you to give a natural transformation from your effect into Eff r.

The actual implementation of deep isn't going to win any awards in understandability, or in inline-ability:

deep
    :: (forall m f y
           . Functor f
          => (forall x. m x -> Eff r (f x))
          -> (forall i o. (i -> m o) -> f i -> Eff r (f o))
          -> e m y
          -> Eff r (f y)
       )
    -> Eff (e ': r) a
    -> Eff r a
deep transform (Freer m) = m $ \u ->
  case decomp u of
    Left x -> liftEff $ hoist (deep transform) x
    Right (Yo eff state nt f) -> fmap f $
      transform
        (deep transform . nt . (<$ state))
        (\ff -> deep transform . nt . fmap ff)
        eff

Notice that whoever implements transform needs to give an equivalent of an implementation of weave anyway. Except that instead of only writing it once per effect, they need to write it per interpretation.

We can also give an implementation for runState in terms of a StateT s:

runState :: forall s r a. s -> Eff (State s ': r) a -> Eff r (s, a)
runState s = flip runStateT s . go
  where
    go :: forall x. Eff (State s ': r) x -> StateT s (Eff r) x
    go (Freer m) = m $ \u ->
      case decomp u of
        -- W T F
        Left x -> StateT $ \s' ->
          liftEff . weave (s', ())
                          (uncurry (flip runStateT))
                  $ hoist go x
        Right (Yo Get state nt f) -> fmap f $ do
          s' <- get
          go $ nt $ pure s' <$ state
        Right (Yo (Put s') state nt f) -> fmap f $ do
          put s'
          go $ nt $ pure () <$ state

This is also completely insane. Notice the aptly-named section W T F, where for no reason I can discern other than satisfying the typechecker, we convert from a StateT s to a (s, ()) and back again. But why?? Because this is what weave wants---and we need to satisfy weave because it's the only way to change the type of a Union---and we need to do that in order to reinterpret everything as a StateT s.

There is a similarly WTF implementation for runError. But worse is the combinator I wrote that generalizes this pattern for running an effect in terms of an underlying monad transformer:

shundle
   :: forall a f t e r
    . ( MonadTrans t
      , forall m. Monad m => Monad (t m)
      , Functor f
      )
   => (forall x. Eff r (f x) -> t (Eff r) x)
   -> (forall x. t (Eff r) x -> Eff r (f x))
   -> (forall x. f (t (Eff r) x) -> Eff r (f x))
   -> f ()
   -> (forall m tk y
          . Functor tk
         => (forall x. f () -> tk (m x) -> Eff r (f (tk x)))
         -> tk ()
         -> e m y
         -> t (Eff r) (tk y)
      )
   -> Eff (e ': r) a
   -> Eff r (f a)
shundle intro finish dist tk zonk = finish . go
  where
    go :: forall x. Eff (e ': r) x -> t (Eff r) x
    go (Freer m) = m $ \u ->
      case decomp u of
        Left x -> intro . liftEff . weave tk dist $ hoist go x
        Right (Yo e sf nt f) -> fmap f $
          zonk (\r -> shundle intro finish dist r zonk . nt) sf e

Don't ask why it's called shundle or why it has a variable called zonk. It was funny to me at the time and I needed whatever smiles I could get to continue making progress. Believe it or not, this abomination does indeed generalize runState:

statefully
    :: (forall m. e m ~> StateT s (Eff r))
    -> s
    -> Eff (e ': r) a
    -> Eff r (s, a)
statefully f s =
  shundle
    (StateT . const)
    (flip runStateT s)
    (uncurry $ flip runStateT)
    (s, ()) $ \_ tk -> fmap (<$ tk) . f

runState :: s -> Eff (State s ': r) a -> Eff r (s, a)
runState = statefully $ \case
  Get    -> get
  Put s' -> put s'

runState is actually quite reasonable! At this point I was willing to concede "worse is better"---that most of the time people only really care about StateT-esque state in their effects. And really the only thing they've been missing is bracket. And we now have bracket, and an easy way of doing StateT-esque state.

So I was going to hand it in to one library or another---if not for full marks, then at least for the participation trophy.

But then I ran my benchmark, and saw that it performs 10x slower than freer-simple. Even for effect stacks that don't need to weave.



And I guess this is where I leave it. The machinery is clearly wrong, but amazingly it actually does what it says on the tin. Unfortunately it does it so slowly that I think the people who complain about the performance of free monads might actually have a point this time.

I've got my heart crossed that someone will swoop in here and say "here's what you've done wrong" and it will be a minor change and then everything will optimize away and we will make merry and the freer monad revolution will be complete.

I'm very tired.

Here's the repository again.

What a Real Solution Would Look Like

I think I have enough familiarity with the problem at this point to know what a solution would look like---even if I can't find it myself:

  • runFreer could produce an f a if you asked for it
  • Weave would be a typeclass that effects authors would need to implement. But they could derive it for free if the m parameter was unused. This would be the only instance necessary to implement by hand.
  • The library would provide a handleRelayS-esque interface for defining interpreters.
  • By the time the user's code ran for the interpretation, every monadic value in their effect would be bindable without any further effort---à la cata.
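To make the second bullet concrete, here is one guess at what such a user-facing class and its mechanical first-order instance might look like. Everything here is hypothetical---the name weave', the Ask effect, and the exact signature are illustrations, not any existing library's API:

```haskell
{-# LANGUAGE GADTs, KindSignatures, RankNTypes #-}

import Data.Kind (Type)

-- Hypothetical user-facing class: one instance per effect, written
-- by the effect's author.
class Weave (e :: (Type -> Type) -> Type -> Type) where
  weave'
      :: (Functor f, Monad m, Monad n)
      => f ()
      -> (forall x. f (m x) -> n (f x))
      -> e m a
      -> e n (f a)

-- A first-order effect: the monad parameter is phantom.
data Ask r (m :: Type -> Type) x where
  Ask :: (r -> x) -> Ask r m x

-- For first-order effects the instance is mechanical: there are no
-- embedded sub-computations, so the state just wraps the result.
-- This is the shape a deriving mechanism could generate for free.
instance Weave (Ask r) where
  weave' s _ (Ask k) = Ask (\r -> k r <$ s)
```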

I'd also be happy if an existing solution (probably fused-effects) were given the tools to satisfy the above list. In particular, the handleRelayS thing.

March 12, 2019 12:16 AM

March 11, 2019

Holden Karau

PyData Hong Kong - Making the Big Data ecosystem work together with Python: Apache Arrow, Spark, Flink, Beam, and Dask @ PyData Hong Kong

Thanks for joining me on 2019-02-19 at PyData Hong Kong 2019 in Hong Kong for PyData Hong Kong - Making the Big Data ecosystem work together with Python: Apache Arrow, Spark, Flink, Beam, and Dask. I'll update this post with the slides soon. Comment below to join in the discussion :). Talk feedback is appreciated at

by Holden Karau ( at March 11, 2019 07:12 AM