Planet Haskell

September 28, 2020

Monday Morning Haskell

Rustlings Part 2

This week we continue with another Rustlings video tutorial! We'll tackle some more advanced concepts like move semantics, traits, and generics! Next week, we'll start considering how we might build a similar program to teach beginners about Haskell!

by James Bowen at September 28, 2020 02:30 PM


Being lazy without getting bloated

This is an announcement and explanation of the nothunks Haskell package that Edsko has been developing in the context of our work on Cardano for IOHK. It was originally published on the IOHK blog and is republished here with permission. There is also a video available of the presentation Edsko gave at MuniHac 2020.

Haskell is a lazy language. The importance of laziness has been widely discussed elsewhere: Why Functional Programming Matters is one of the classic papers on the topic, and A History of Haskell: Being Lazy with Class discusses it at length as well. For the purposes of this blog we will take it for granted that laziness is something we want. But laziness comes at a cost, and one of the disadvantages is that laziness can lead to memory leaks that are sometimes difficult to find. In this post we introduce a new library called nothunks aimed at discovering a large class of such leaks early, and helping to debug them. This library was developed for our work on the Cardano blockchain, but we believe it will be widely applicable in other projects too.

A motivating example

Consider the tiny application below, which processes incoming characters and reports how many characters there are in total, in addition to some per-character statistics:

import Data.List (foldl')
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show)

type Stats = Int

update :: AppState -> Char -> AppState
update st c = st {
      total = total st + 1
    , indiv = Map.alter (Just . aux) c (indiv st)
    }
  where
    aux :: Maybe Stats -> Stats
    aux Nothing  = 1
    aux (Just n) = n + 1

initAppState :: AppState
initAppState = AppState {
      total = 0
    , indiv = Map.empty
    }

main :: IO ()
main = interact $ show . foldl' update initAppState

In this version of the code, the per-character statistics are simply how often we have seen each character. If we feed this code ‘aabbb’, it will tell us that it saw 5 characters, 2 of which were the letter ‘a’ and 3 of which were ‘b’:

$ echo -n aabbb | cabal run example1
AppState {
    total = 5
  , indiv = fromList [('a',2),('b',3)]
  }

Moreover, if we feed the application a ton of data and construct a memory profile,

$ dd if=/dev/zero bs=1M count=10 | cabal run --enable-profiling example1 -- +RTS -hy

we see that the application runs in constant space:

So far so good. But now suppose we make an innocuous-looking change. Suppose, in addition to reporting how often every character occurs, we also want to know the offset of the last time that the character occurs in the file:

type Stats = (Int, Int)

update :: AppState -> Char -> AppState
update st c = -- .. as before
  where
    aux :: Maybe Stats -> Stats
    aux Nothing       = (1     , total st)
    aux (Just (n, _)) = (n + 1 , total st)

The application works as expected:

$ echo -n aabbb | cabal run example2
AppState {
    total = 5
  , indiv = fromList [('a',(2,1)),('b',(3,4))]
  }

and so the change is accepted in GitHub’s PR code review and gets merged. However, although the code still works, it is now a lot slower.

$ time (dd if=/dev/zero bs=1M count=100 | cabal run example1)
real    0m2,312s

$ time (dd if=/dev/zero bs=1M count=100 | cabal run example2)
real    0m15,692s

We have a slowdown of almost an order of magnitude, although we are barely doing more work. Clearly, something has gone wrong, and indeed, we have introduced a memory leak:

Unfortunately, tracing a profile like this to the actual problem in the code can be very difficult indeed. What’s worse, although our change introduced a regression, the application still worked fine and so the test suite probably wouldn’t have failed. Such memory leaks tend to be discovered only when they get so bad in production that things start to break (for example, servers running out of memory), at which point you have an emergency on your hands.

In the remainder of this post we will describe how nothunks can help both with spotting such problems much earlier, and debugging them.

Instrumenting the code

Let’s first see what usage of nothunks looks like in our example. We modify our code and derive a new class instance for our AppState:

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show, Generic, NoThunks)

The NoThunks class is defined in the nothunks library, as we will see in detail later. Additionally, we will replace foldl' with a new function:

repeatedly :: forall a b. (NoThunks b, HasCallStack)
           => (b -> a -> b) -> (b -> [a] -> b)
repeatedly f = ...

We will see how to define repeatedly later, but, for now, think of it as “foldl' with some magic sprinkled on top”. If we run the code again, the application will throw an exception almost immediately:

$ dd if=/dev/zero bs=1M count=100 | cabal run example3
example3: Unexpected thunk with context ["Int","(,)","Map","AppState"]
CallStack (from HasCallStack):
  error, called at shared/Util.hs:22:38 in Util
  repeatedly, called at app3/Main.hs:38:26 in main:Main

The essence of the nothunks library is that we can check if a particular value contains any thunks we weren’t expecting, and this is what repeatedly is using to make sure we’re not inadvertently introducing any thunks in the AppState; it’s this check that is failing and causing the exception. We get a HasCallStack backtrace telling us where we introduced that thunk, and – even more importantly – the exception gives us a helpful clue about where the thunk was:

["Int","(,)","Map","AppState"]

This context tells us that we have an AppState containing a Map containing tuples, all of which were in weak head normal form (not thunks), but the tuple contained an Int which was not in weak head normal form: a thunk.

From a context like this it is obvious what went wrong: although we are using a strict map, we have instantiated the map at a lazy pair type, and so although the map is forcing the pairs, it’s not forcing the elements of those pairs. Moreover, we get an exception the moment we introduce the thunk, which means that we can catch such regressions in our test suite. We can even construct minimal counter-examples that result in thunks, as we will see later.
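To make the fix concrete, here is a self-contained sketch of one way to repair this particular leak (the article doesn't show the fixed code, so this is illustrative): force both components of the pair with bang patterns before they go into the map.

```haskell
import           Data.List       (foldl')
import qualified Data.Map.Strict as Map

type Stats = (Int, Int)

data AppState = AppState
  { total :: !Int
  , indiv :: !(Map.Map Char Stats)
  } deriving (Show)

initAppState :: AppState
initAppState = AppState 0 Map.empty

-- Fixed update: the bang patterns force both components of the pair
-- before the pair is constructed, so the strict map never stores a
-- pair with thunks inside it.
update :: AppState -> Char -> AppState
update st c = st
    { total = total st + 1
    , indiv = Map.alter (Just . aux) c (indiv st)
    }
  where
    aux :: Maybe Stats -> Stats
    aux Nothing       = let !off = total st              in (1, off)
    aux (Just (n, _)) = let !n' = n + 1; !off = total st in (n', off)
```

Folding this version over 'aabbb' produces the same answer as before, but the pairs stored in the map now contain evaluated Ints, so no thunk chain can build up.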

Using nothunks

Before we look at how the library works, let’s first see how it’s used. In the previous section we were using a magical function repeatedly, but didn’t see how we could define it. Let’s now look at this function:

repeatedly :: forall a b. (NoThunks b, HasCallStack)
           => (b -> a -> b) -> (b -> [a] -> b)
repeatedly f = go
  where
    go :: b -> [a] -> b
    go !b []     = b
    go !b (a:as) =
        let !b' = f b a
        in case unsafeNoThunks b' of
             Nothing    -> go b' as
             Just thunk -> error . concat $ [
                 "Unexpected thunk with context "
               , show (thunkContext thunk)
               ]

The only difference between repeatedly and foldl' is the call to unsafeNoThunks, which is the function that checks if a given value contains any unexpected thunks. The function is marked as “unsafe” because whether or not a value is a thunk is not normally observable in Haskell; making it observable breaks equational reasoning, and so this should only be used for debugging or in assertions. Each time repeatedly applies the provided function f to update the accumulator, it verifies that the resulting value doesn’t contain any unexpected thunks; if it does, it errors out (in real code such a check would only be enabled in test suites and not in production).

One point worth emphasizing is that repeatedly reduces the value to weak head normal form (WHNF) before calling unsafeNoThunks. This is, of course, what makes a strict fold-left strict, and so repeatedly must do this to be a good substitute for foldl'. However, it is important to realize that if repeatedly did not do that, the call to unsafeNoThunks would trivially and immediately report a thunk; after all, we have just created the f b a thunk! Generally speaking, it is not useful to call unsafeNoThunks (or its IO cousin noThunks) on values that aren’t already in WHNF.
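The WHNF point can be illustrated with base alone (this demo is not from the article and doesn't use nothunks): evaluate reduces a value to its outermost constructor only, leaving the components as possibly-bottom thunks.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A pair in WHNF can still hide arbitrary thunks in its components:
-- forcing the pair itself succeeds, but forcing its second component
-- hits the bottom hiding inside it.
whnfDemo :: IO (Bool, Bool)
whnfDemo = do
    let pair = (1 + 1 :: Int, undefined :: Int)
    _  <- evaluate pair  -- fine: only the (,) constructor is forced
    r1 <- try (evaluate (fst pair)) :: IO (Either SomeException Int)
    r2 <- try (evaluate (snd pair)) :: IO (Either SomeException Int)
    return (isRight r1, isRight r2)
  where
    isRight (Right _) = True
    isRight _         = False
```

Running whnfDemo yields (True, False): the pair and its first component force fine, while forcing the second component throws.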

In general, long-lived application state should never contain any unexpected thunks, and so we can apply the same kind of pattern in other scenarios. For example, suppose we have a server that is a thin IO layer on top of a mostly pure code base, storing the application state in an IORef. Here, too, we might want to make sure that that IORef never points to a value containing unexpected thunks:

newtype StrictIORef a = StrictIORef (IORef a)

readIORef :: StrictIORef a -> IO a
readIORef (StrictIORef v) = Lazy.readIORef v

writeIORef :: (NoThunks a, HasCallStack)
           => StrictIORef a -> a -> IO ()
writeIORef (StrictIORef v) !x = do
    check x
    Lazy.writeIORef v x

check :: (NoThunks a, HasCallStack) => a -> IO ()
check x = do
    mThunk <- noThunks [] x
    case mThunk of
      Nothing -> return ()
      Just thunk ->
        throw $ ThunkException
                  (thunkContext thunk)

Since check already lives in IO, it can use noThunks directly, instead of using the unsafe pure wrapper; but otherwise this code follows a very similar pattern: the moment we might introduce a thunk, we instead throw an exception. One could imagine doing a very similar thing for, say, StateT, checking for thunks in put:

newtype StrictStateT s m a = StrictStateT (StateT s m a)
  deriving (Functor, Applicative, Monad)

instance (Monad m, NoThunks s)
      => MonadState s (StrictStateT s m) where
  get    = StrictStateT $ get
  put !s = StrictStateT $
      case unsafeNoThunks s of
        Nothing -> put s
        Just thunk -> error . concat $ [
            "Unexpected thunk with context "
          , show (thunkContext thunk)
          ]
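As an aside (not something the article or the nothunks library proposes): if you only want to prevent thunks rather than detect and report them, deep-forcing on write with deepseq is a common alternative. It trades away exactly what nothunks provides, namely the call stack and context pinpointing where a thunk was created, and the ability to permit selected thunks.

```haskell
import Control.DeepSeq   (NFData, force)
import Control.Exception (evaluate)
import Data.IORef

-- Deep-force the value before storing it, so the IORef can never
-- retain a thunk. Unlike the nothunks check, this silently does the
-- extra work instead of failing, so the leak's origin stays invisible.
writeIORefForced :: NFData a => IORef a -> a -> IO ()
writeIORefForced ref x = do
    x' <- evaluate (force x)
    writeIORef ref x'
```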

Minimal counter-examples

In some applications, there can be complicated interactions between the input to the program and the thunks it may or may not create. We will study this through a somewhat convoluted but, hopefully, easy-to-understand example. Suppose we have a server that is processing two types of events, A and B:

data Event = A | B
  deriving (Show)

type State = (Int, Int)

initState :: State
initState = (0, 0)

update :: Event -> State -> State
update A (a, b)    = let !a' = a + 1 in (a', b)
update B (a, b)
  | a < 1 || b < 1 = let !b' = b + 1 in (a, b')
  | otherwise      = let  b' = b + 2 in (a, b')

The server’s internal state consists of two counters, a and b. Each time we see an A event, we just increment the first counter. When we see a B event, however, we increment b by 1 if either a or b hasn’t reached 1 yet, and by 2 otherwise. Unfortunately, the code contains a bug: in one of these cases, part of the server’s state is not forced and we introduce a thunk. (Disclaimer: the code snippets in this blog post are not intended to be good examples of coding, but to make it obvious where memory leaks are introduced. Typically, memory leaks should be avoided by using appropriate data types, not by modifying code.)
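To see exactly which event sequences trigger the bug, here is a small hypothetical helper (not part of the server code) that replays events against the same branching logic and reports whether any step reaches the thunk-creating branch:

```haskell
data Event = A | B
  deriving (Show, Eq)

-- Replay events from the initial state (0, 0) and report whether any
-- B event falls through to the 'otherwise' branch of update, which is
-- the one that builds the lazy 'b + 2' thunk.
hitsLazyBranch :: [Event] -> Bool
hitsLazyBranch = go (0 :: Int, 0 :: Int)
  where
    go _      []       = False
    go (a, b) (A : es) = go (a + 1, b) es
    go (a, b) (B : es)
      | a < 1 || b < 1 = go (a, b + 1) es
      | otherwise      = True
```

Only sequences that contain an A and a B, in either order, followed by another B return True, which matches the minimal counter-example discussed in the next paragraph.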

A minimal counter-example that will demonstrate the bug would therefore involve two events A and B, in any order, followed by another B event. Since we get an exception the moment we introduce a thunk, we can then use a framework such as quickcheck-state-machine to find bugs like this and construct such minimal counter-examples.

Here’s how we might set up our test. Explaining how quickcheck-state-machine (QSM) works is well outside the scope of this blog post; if you’re interested, a good starting point might be An in-depth look at quickcheck-state-machine. For this post, it is enough to know that in QSM we are comparing a real implementation against some kind of model, firing off “commands” against both, and then checking that the responses match. Here, both the server and the model will use the update function, but the “real” implementation will use the StrictIORef type we introduced above, and the mock implementation will just use the pure code, with no thunks check. Thus, when we compare the real implementation against the model, the responses will diverge whenever the real implementation throws an exception (caused by a thunk):

data T

type instance MockState   T = State
type instance RealMonad   T = IO
type instance RealHandles T = '[]

data instance Cmd T f hs where
  Cmd :: Event -> Cmd T f '[]

data instance Resp T f hs where
  -- We record any exceptions that occurred
  Resp :: Maybe String -> Resp T f '[]

deriving instance Eq   (Resp T f hs)
deriving instance Show (Resp T f hs)
deriving instance Show (Cmd  T f hs)

instance NTraversable (Resp T) where
  nctraverse _ _ (Resp ok) = pure (Resp ok)

instance NTraversable (Cmd T) where
  nctraverse _ _ (Cmd e) = pure (Cmd e)

sm :: StrictIORef State -> StateMachineTest T
sm state = StateMachineTest {
      runMock    = \(Cmd e) mock ->
        (Resp Nothing, update e mock)
    , runReal    = \(Cmd e) -> do
        real <- readIORef state
        ex   <- try $ writeIORef state (update e real)
        return $ Resp (checkOK ex)
    , initMock   = initState
    , newHandles = \_ -> Nil
    , generator  = \_ -> Just $
        elements [At (Cmd A), At (Cmd B)]
    , shrinker   = \_ _ -> []
    , cleanup    = \_ -> writeIORef state initState
    }
  where
    checkOK :: Either SomeException () -> Maybe String
    checkOK (Left err) = Just (show err)
    checkOK (Right ()) = Nothing

(This uses the new Lockstep machinery in QSM that we introduced in the Munihac 2019 hackathon.)

If we run this test, we get the minimal counter-example we expect, along with the HasCallStack backtrace and the context telling us precisely that we have a thunk inside a lazy pair:

*** Failed! Falsified (after 6 tests and 2 shrinks):
  { unCommands =
      [ Command At { unAt = Cmd B } At { unAt = Resp Nothing } []
      , Command At { unAt = Cmd A } At { unAt = Resp Nothing } []
      , Command At { unAt = Cmd B } At { unAt = Resp Nothing } []
      ]
  }


Resp (Just "Thunk exception in context [Int,(,)]
    called at shared/StrictIORef.hs:26:5 in StrictIORef
    writeIORef, called at app5/Main.hs:71:37 in Main")
:/= Resp Nothing

The combination of a minimal counter-example, a clear context, and the backtrace, makes finding most such memory leaks almost trivial.

Under the hood

The core of the nothunks library is the NoThunks class:

-- | Check a value for unexpected thunks
class NoThunks a where
  noThunks   :: [String] -> a -> IO (Maybe ThunkInfo)
  wNoThunks  :: [String] -> a -> IO (Maybe ThunkInfo)
  showTypeOf :: Proxy a -> String

data ThunkInfo = ThunkInfo {
      thunkContext :: Context
    }
  deriving (Show)

type Context = [String]

All of the NoThunks class methods have defaults, so instances can be, and very often are, entirely empty, or – equivalently – derived using DeriveAnyClass.

The noThunks function is the main entry point for application code, and we have already seen it in use. Instances of NoThunks, however, almost never need to redefine noThunks and can use the default implementation, which we will take a look at shortly. Conversely, wNoThunks is almost never useful for application code but it’s where most of the datatype-specific logic lives, and is used by the default implementation of noThunks; we will see a number of examples of it below. Finally, showTypeOf is used to construct a string representation of a type when constructing the thunk contexts; it has a default in terms of Generic.


Suppose we are checking if a pair contains any thunks. We should first check if the pair itself is a thunk, before we pattern match on it. After all, pattern matching on the pair would force it, and so if it had been a thunk, we wouldn’t be able to see this any more. Therefore, noThunks first checks if a value itself is a thunk, and if it isn’t, it calls wNoThunks; the w stands for WHNF: wNoThunks is allowed to assume (has as precondition) that its argument is not itself a thunk and so can be pattern-matched on.

noThunks :: [String] -> a -> IO (Maybe ThunkInfo)
noThunks ctxt x = do
    isThunk <- checkIsThunk x
    if isThunk
      then return $ Just ThunkInfo { thunkContext = ctxt' }
      else wNoThunks ctxt' x
  where
    ctxt' :: [String]
    ctxt' = showTypeOf (Proxy @a) : ctxt

Note that when wNoThunks is called, the (string representation of) type a has already been added to the context.


Most of the datatype-specific work happens in wNoThunks; after all, we can now pattern match. Let’s start with a simple example, a manual instance for a type of strict pairs:

data StrictPair a b = StrictPair !a !b

instance (NoThunks a, NoThunks b)
      => NoThunks (StrictPair a b) where
  showTypeOf _ = "StrictPair"
  wNoThunks ctxt (StrictPair x y) = allNoThunks [
        noThunks ctxt x
      , noThunks ctxt y
      ]

Because we have verified that the pair itself is in WHNF, we can just extract both components, and recursively call noThunks on both of them. Function allNoThunks is a helper defined in the library that runs a bunch of thunk checks, stopping at the first one that reports a thunk.
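The library's actual implementation may differ, but the described behaviour of allNoThunks amounts to a short-circuiting combinator like this (firstJustM is a made-up name for the sketch; allNoThunks specializes it to IO (Maybe ThunkInfo)):

```haskell
-- Run checks left to right, returning the first Just result and
-- skipping the remaining checks; Nothing if every check passes.
firstJustM :: Monad m => [m (Maybe a)] -> m (Maybe a)
firstJustM []       = return Nothing
firstJustM (c : cs) = do
    r <- c
    case r of
      Just _  -> return r
      Nothing -> firstJustM cs
```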

Occasionally we do want to allow for selected thunks. For example, suppose we have a set of integers with a cached total field, but we only want to compute that total if it’s actually used:

data IntSet = IntSet {
      toSet :: !(Set Int)

      -- | Total
      -- Intentionally /not/ strict:
      -- Computed when needed (and then cached)
    , total :: Int
    }
  deriving (Generic)

Since total must be allowed to be a thunk, we skip it in wNoThunks:

instance NoThunks IntSet where
  wNoThunks ctxt (IntSet xs _total) = noThunks ctxt xs

Such constructions should probably only be used sparingly; if the various operations on the set are not carefully defined, the set might hold on to all kinds of data through that total thunk. Code like that needs careful thought and careful review.

Generic instance

If no implementation is given for wNoThunks, it uses a default based on GHC generics. This means that for types that implement Generic, deriving a NoThunks instance is often as easy as in the AppState example above, simply saying:

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show, Generic, NoThunks)

Many instances in the library itself are also defined using the generic instance; for example, the instance for (default, lazy) pairs is just:

instance (NoThunks a, NoThunks b) => NoThunks (a, b)

Deriving-via wrappers

Sometimes, we don’t want the default behavior implemented by the generic instance, but defining an instance by hand can be cumbersome. The library therefore provides a few newtype wrappers that can be used to conveniently derive custom instances. We will discuss three such wrappers here; the library comes with a few more.

Only check for WHNF

If all you want to do is check if a value is in weak head normal form (i.e., check that it is not a thunk itself, although it could contain thunks), you can use OnlyCheckWhnf. For example, the library defines the instance for Bool as:

deriving via OnlyCheckWhnf Bool
         instance NoThunks Bool

For Bool, this is sufficient: when a boolean is in weak head normal form, it won’t contain any thunks. The library also uses this for functions:

deriving via OnlyCheckWhnfNamed "->" (a -> b)
         instance NoThunks (a -> b)

(Here, the Named version allows you to explicitly define the string representation of the type to be included in the thunk contexts.) Using OnlyCheckWhnf for functions means that any values in the function closure will not be checked for thunks. This is intentional and a subtle design decision; we will come back to this in the section on permissible thunks below.

Skipping some fields

For types such as IntSet where most fields should be checked for thunks, but some fields should be skipped, we can use AllowThunksIn:

deriving via AllowThunksIn '["total"] IntSet
         instance NoThunks IntSet

This can be handy for large record types, where giving the instance by hand is cumbersome and, moreover, can easily get out of sync when changes to the type (for example, a new field) are not reflected in the definition of wNoThunks.

Inspecting the heap directly

Instead of going through the class system and the NoThunks instances, we can also inspect the GHC heap directly. The library makes this available through the InspectHeap newtype, which has an instance:

instance Typeable a => NoThunks (InspectHeap a) where

Note that this does not depend on a NoThunks instance for a. We can use this like any other deriving-via wrappers, for example:

deriving via InspectHeap TimeOfDay
         instance NoThunks TimeOfDay

The advantage of such an instance is that we do not require instances for any nested types; for example, although TimeOfDay has a field of type Pico, we don’t need a NoThunks instance for it.

The disadvantage is that we lose all compositionality. If there are any types nested inside for which we want to allow for thunks, we have no way of overriding the behaviour of the no-thunks check for those types. Since we are inspecting the heap directly, and the runtime system does not record any type information, any NoThunks instances for those nested types are irrelevant, and the check will report every thunk it finds. Moreover, when it does find a thunk, it cannot report a useful context, because – again – we have no type information. If noThunks finds a thunk deeply nested inside some T (whose NoThunks instance was derived using InspectHeap), it will merely report "...", "T" as the context (plus perhaps any context leading to T itself).

Permissible thunks

Some data types inherently depend on the presence of thunks. For example, the Seq type defined in Data.Sequence internally uses a finger tree. Finger trees are a specialized data type introduced by Ralf Hinze and Ross Paterson; for our purposes, all you need to know is that finger trees make essential use of thunks in their spines to achieve their asymptotic complexity bounds. This means that the NoThunks instance for Seq must allow for thunks in the spine of the data type, although it should still verify that there are no thunks in any of the elements in the sequence. This is easy enough to do; the instance in the library is:

instance NoThunks a => NoThunks (Seq a) where
  showTypeOf _   = "Seq"
  wNoThunks ctxt = noThunksInValues ctxt . toList

Here, noThunksInValues is a helper function that checks a list of values for thunks, without checking the list itself.

However, the existence of types such as Seq means that the non-compositionality of InspectHeap can be a big problem. It is also the reason that for functions we merely check if the function is in weak head normal form. Although the function could have thunks in its closure, we don’t know what their types are. We could check the function closure for thunks (using InspectHeap), but if we did, and that closure contained, say, a Seq among its values, we might incorrectly report an unexpected thunk. Because it is more problematic if the test reports a bug when there is none than when an actual bug is not reported, the library opts to check only functions for WHNF. If in your application you store functions, and it is important that these functions are checked for thunks, then you can define a custom newtype around a -> b with a NoThunks instance defined using InspectHeap (but only if you are sure that your functions don’t refer to types that must be allowed to have thunks).

Comparison with the heap/stack limit size method

In 2016, Neil Mitchell gave a very nice talk at HaskellX, where he presented a method for finding memory leaks (he has also written a blog post on the topic). The essence of the method is to run your test suite with much reduced stack and heap limits, so that if there is a memory leak in your code, you will notice it before it hits production. He then advocates the use of the -xc runtime flag to get a stack trace when such a “stack limit exhausted” exception is thrown.

The technique advocated in this post has a number of advantages. We get an exception the moment a thunk is created, so the stack trace we get is often much more useful. Together with the context reported by noThunks, finding the problem is usually trivial. Interpreting the stack reported by -xc can be more difficult, because this exception is thrown when the limit is exhausted, which may or may not be related to the code that introduced the leak in the first place. Moreover, since the problem only becomes known when the limit is exhausted, minimal counter-examples are out of the question. It can also be difficult to pick a suitable value for the limit; how much memory does the test suite actually need, and what would constitute a leak? Finally, -xc requires your program to be compiled with profiling enabled, which means you’re debugging something different to what you’d run in production, which is occasionally problematic.
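For concreteness, the limit-based method boils down to invocations along these lines (the test-suite name is a placeholder and the limit values are illustrative; -K caps the stack, -M caps the heap, and -xc requires a profiling build):

```shell
# Run the tests with tight stack/heap limits so leaks fail fast,
# and print a cost-centre stack when an exception is thrown.
cabal run --enable-profiling my-test-suite -- +RTS -K1K -M100M -xc
```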

Having said all that, the nothunks method does not replace the heap/stack limit method, but complements it. The nothunks approach is primarily useful for finding space leaks in pieces of data where it’s clear that we don’t want any thunk build-up, typically long-lived application state. It is less useful for finding more ‘local’ space leaks, such as a function accumulator not being updated strictly. For finding such leaks, setting stack/heap limits is still a useful technique.

Conclusions

Long-lived application data should, typically, not have any thunk build-up. The nothunks library can verify this through the noThunks and unsafeNoThunks function calls, which check if the supplied argument contains any unexpected thunks. These checks can then be used in assertions to check that no thunks are created. This means that if we do introduce a thunk by mistake, we get an immediate test failure, along with a callstack to the place where the thunk was created as well as a context providing a helpful hint on where the thunk is. Together with a testing framework, this makes memory leaks much easier to debug and avoid. Indeed, they have mostly been a thing of the past in our work on Cardano since we started using this approach.

by edsko at September 28, 2020 12:00 AM

September 27, 2020

Joachim Breitner

Learn Haskell on CodeWorld writing Sokoban

Two years ago, I held the CIS194 minicourse on Haskell at the University of Pennsylvania. In that installment of the course, I changed the first four weeks to teach the basics of Haskell using the online Haskell environment CodeWorld, and led the students towards implementing the game Sokoban.

As is customary for CIS194, I put my lecture notes and exercises online, and they have been used as a learning resource by people from all over the world. But since I left the University of Pennsylvania, I lost the ability to update the text, and as the CodeWorld API has evolved, some of the examples and exercises no longer work.

Some recent complaints about this, in bug reports against CodeWorld and in unrealistically flattering tweets (“Shame, this was the best Haskell course ever!!!”), motivated me to extract that material and turn it into an updated stand-alone tutorial that I can host myself.

So if you feel like learning Haskell without worrying about local installation, and while creating a reasonably fun game, head over to and get started! Improvements can now also be contributed at

Credits go to Brent Yorgey, Richard Eisenberg and Noam Zilberstein, who held the previous installments of the course, and Chris Smith for creating the CodeWorld environment.

by Joachim Breitner ( at September 27, 2020 07:20 PM

Chris Penner

Generalizing 'jq' and Traversal Systems using optics and standard monads

Hi folks! Today I'll be chatting about Traversal Systems like jq and XPath; we're going to discover which properties make them useful, then see how we can replicate their most useful behaviours in Haskell using (almost entirely) pre-existing standard Haskell tools! Let's go!

What's a Traversal System?

First off I'll admit that "Traversal System" is a name I just came up with, you probably won't find anything if you search for it (unless this post really catches on 😉).

A Traversal System allows you to dive deeply into a piece of data, letting you fetch, query, and edit the structure as you go while maintaining references to other pieces of the structure to influence your work. The goal of most Traversal Systems is to make this as painless and concise as possible. It turns out that this sort of thing is incredibly useful for manipulating JSON, querying HTML and CSS, working with CSVs, or even just handling standard Haskell records and data types.

Some good examples of existing Traversal Systems which you may have heard of include the brilliant jq utility for manipulating and querying JSON, the XPath language for querying XML, and the meander data manipulation system in Clojure. Although each of these systems may appear drastically different at a glance, they all accomplish many of the same goals of manipulating and querying data in a concise way.

The similarities between these systems intrigued me! They seem so similar, yet they share very little in the way of structure, syntax, and prior art. They re-invent the wheel for each new data type! Ideally we could recognize the useful behaviours in each system and build a generalized system which works for any data type.

This post is an attempt to do exactly that; we'll take a look at a few things that these systems do well, then we'll re-build them in Haskell using standard tooling, all the while abstracting over the type of data!

Optics as a basis for a traversal system

For any of those who know me it should be no surprise that my first thought was to look at optics (i.e. Lenses and Traversals). In general I find that optics solve a lot of my problems, but in this case they are particularly appropriate! Optics inherently deal with the idea of diving deep into data and querying or updating data in a structured and compositional fashion.

In addition, optics also allow abstracting over the data type they work on. There are pre-existing libraries of optics for working with JSON via lens-aeson and for html via taggy-lens. I've written optics libraries for working with CSVs and even Regular Expressions, so I can say confidently that they're a brilliantly adaptable tool for data manipulation.

It also happens that optics are well-principled and mathematically sound, so they're a good tool for studying the properties that a system like this may have.

However, optics themselves don't provide everything we need! Optics can be rather obtuse (in fact I wrote a whole book to help teach them), and they lack clarity and ease of use when it comes to building larger expressions. It's also pretty tough to work on one part of a data structure while referencing data in another part of the same structure. My hope is to address some of these shortcomings in this post.

In this particular post I'm mostly interested in explaining a framework for traversal systems in Haskell; we'll be using many standard mtl monad transformers alongside a lot of combinators from the lens library. You won't need to understand any of these intimately to get the gist of what's going on, but I won't be explaining them in depth here, so you may need to look elsewhere if you're lacking a bit of context.

Establishing the Problem

I'll be demoing a few examples as we go along so let's set up some data. I'll be working in both jq and Haskell to make comparisons between them, so we'll set up the same data in both JSON and Haskell.

Here's a funny lil' company as a JSON object:

{ "staff": [
      { "id": "1"
      , "name": "bob"
      , "pets": [
            { "name": "Rocky"
            , "type": "cat"
            }
          , { "name": "Bullwinkle"
            , "type": "dog"
            }
        ]
      }
    , { "id": "2"
      , "name": "sally"
      , "pets": [
            { "name": "Inigo"
            , "type": "cat"
            }
        ]
      }
    ]
, "salaries": {
      "1": 12,
      "2": 15
    }
}

And here's the same data in its Haskell representation, complete with generated optics for each record field.

data Company = Company { _staff :: [Employee]
                       , _salaries :: M.Map Int Int
                       } deriving Show
data Pet = Pet { _petName :: String
               , _petType :: String
               } deriving Show
data Employee = Employee { _employeeId :: Int
                         , _employeeName :: String
                         , _employeePets :: [Pet]
                         } deriving Show

makeLenses ''Company
makeLenses ''Pet
makeLenses ''Employee

company :: Company
company = Company [ Employee 1 "bob" [Pet "Rocky" "cat", Pet "Bullwinkle" "dog"] 
                  , Employee 2 "sally" [Pet "Inigo" "cat"]
                  ] (M.fromList [ (1, 12)
                                , (2, 15)
                                ])


Let's dive into a few example queries to test the waters! First an easy one: let's write a query to find all the cats owned by any of our employees.

Here's how it looks in jq:

$ cat company.json | jq '.staff[].pets[] | select(.type == "cat")'
{
  "name": "Rocky",
  "type": "cat"
}
{
  "name": "Inigo",
  "type": "cat"
}

We look in the staff key, then enumerate that list, then for each staff member we enumerate their pets! Lastly we filter out anything that's not a cat.

We can recognize a few hallmarks of a Traversal System here. jq allows us to "dive" down deeper into our structure by providing a path to where we want to be. It also allows us to enumerate many possibilities using the [] operator, which will forward each value to the rest of the pipeline one after the other. Lastly it allows us to filter our results using select.

And in Haskell using optics it looks like this:

>>> toListOf (staff . folded . employeePets . folded . filteredBy (petType . only "cat")) company
[ Pet {_petName = "Rocky", _petType = "cat"}
, Pet {_petName = "Inigo", _petType = "cat"}
]
Here we use "toListOf" along with an optic which "folds" over each staff member, then folds over each of their pets, again filtering for "only" cats.

At a glance the two are extremely similar!

They each allow the enumeration of multiple values, in jq using [] and in optics using folded.

Both implement some form of filtering, jq using select and our optics with filteredBy.

Great! So far we've had no trouble keeping up! We're already starting to see a lot of similarities between the two, and our solutions using optics are easily generalizable to any data type.

Let's move on to a more complex example.

Keeping references

This time we're going to print out each pet and their owner!

First, here's the jq:

$ cat join.json | jq '
  .staff[]
  | .name as $personName
  | .pets[]
  | "\(.name) belongs to \($personName)"
'
"Rocky belongs to bob"
"Bullwinkle belongs to bob"
"Inigo belongs to sally"

Here we see a new feature in jq which is the ability to maintain references to a part of the structure for later while we continue to dig deeper into the structure. We're grabbing the name of each employee as we enumerate them and saving it into $personName so we can refer to this later on. Then we enumerate each of the pets and use string interpolation to describe who owns each pet.

If we try to stick with optics on their own, well, it's possible, but unfortunately this is where it all starts to break down. Look at this absolute mess:

owners :: [String]
owners = 
  company ^.. 
    (staff . folded . reindexed _employeeName selfIndex <. employeePets . folded . petName) 
    . withIndex 
    . to (\(eName, pName) -> pName <> " belongs to " <> eName)

>>> owners
[ "Rocky belongs to bob"
, "Bullwinkle belongs to bob"
, "Inigo belongs to sally"
]

You can bet that nobody is calling that "easy to read". Heck, I wrote a book on optics and it still took me a few tries to figure out where the brackets needed to go!

Optics are great for handling a single stream of values, but they're much worse at more complex expressions, especially those which require a reference to values that occur earlier in the chain. Let's see how we can address those shortcomings as we build our Traversal System in Haskell.

Just for the jq aficionados in the audience I'll show off this alternate version which uses a little bit of magic that jq does for you.

 $ cat company.json | jq '.staff[] | "\(.pets[].name) belongs to \(.name)"'
"Rocky belongs to bob"
"Bullwinkle belongs to bob"
"Inigo belongs to sally"

Depending on your experience this may be less magical and more confusing 😬. Since the final expression contains an enumeration (i.e. \(.pets[].name)), jq will expand the final term once for each value in the enumeration. This is really cool, but unfortunately a bit "less principled" and tough to understand in my opinion.

Regardless, the behaviour is the same, and we haven't replicated it in Haskell satisfactorily yet, let's see what we can do about that!

Monads to the rescue (again...)

In Haskell we love our embedded DSLs; if you give a Haskeller a problem to solve, you can bet that 9 times out of 10 they'll solve it with a custom monad and a DSL 😂. Well, I'm sorry to tell you that I'm no different!

We'll be using a monad to address the readability problem of the last optics solution, but the question is... which monad?

Since all we're doing at the moment is querying data, we can make use of the esteemed Reader Monad to provide a context for our query.

Here's what that last query looks like when we use the Reader monad with the relatively lesser known magnify combinator:

owners' :: Reader Company [String]
owners' = do
    magnify (staff . folded) $ do
        personName <- view employeeName
        magnify (employeePets . folded) $ do
            animalName <- view petName
            return [animalName <> " belongs to " <> personName]

>>> runReader owners' company
[ "Rocky belongs to bob"
, "Bullwinkle belongs to bob"
, "Inigo belongs to sally"
]

I won't explain how the Reader monad itself works here, so if you're a bit shaky on that you'll probably want to familiarize yourself with that first.

As for magnify, it's a combinator from the lens library which takes an optic and an action as arguments. It uses the optic to focus a subset of the Reader's environment, then runs the action within a Reader with that data subset as its focus. It's just that easy!

One more thing! magnify can accept a Fold which focuses multiple elements; in this case it will run the action once for each focus, then combine all the results using their Monoid instance. Here, we wrapped our result in a list before returning it, so magnify will go ahead and automatically concatenate all the results together for us. Pretty nifty that we can get so much functionality out of magnify without writing any code ourselves!
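One way to build an intuition for this behaviour (a rough model of my own, not lens's actual, more general implementation) is to think of magnify-with-a-Fold as foldMapOf over the Reader environment:

```haskell
import Control.Lens (Fold, foldMapOf)
import Control.Monad.Reader (Reader, reader, runReader)

-- A simplified sketch of 'magnify' specialized to Reader and a Fold:
-- run the action once per focus of the Fold, then combine the results
-- monoidally (list results simply get concatenated).
magnifySketch :: Monoid r => Fold s a -> Reader a r -> Reader s r
magnifySketch fld act = reader (foldMapOf fld (runReader act))
```

The real magnify works over more monads than just Reader, but this captures the "run once per focus, then mconcat" behaviour we rely on above.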

We can see that rewriting the problem in this style has made it considerably easier to read. It allows us to "pause" as we use optics to descend and poke around a bit at any given spot. Since it's a monad and we're using do-notation, we can easily bind any intermediate results into names to be referenced later on; the names will correctly reference the value from the current iteration! It's also nice that we have a clear indication of the scope of all our bindings by looking at the indentation of each nested do-notation block.

Depending on your personal style, you could write this expression using the (->) monad directly, or even omit the indentation entirely; though I don't personally recommend that. In case you're curious, here's the way that I DON'T RECOMMEND writing this:

owners'' :: Company -> [String]
owners'' = do
  magnify (staff . folded) $ do
  eName <- view employeeName
  magnify (employeePets . folded) $ do
  pName <- view petName
  return [pName <> " belongs to " <> eName]

Updating deeply nested values

Okay! On to the next step! Let's say that according to our company policy we want to give a $5 raise to anyone who owns a dog! Hey, I don't make the rules here 🤷‍♂️. Notice that this time we're running an update, not just a query!

Here's one of a few different ways we could express this in jq

cat company.json | jq '
[.staff[] | select(.pets[].type == "dog") | .id] as $peopleWithDogs
| .salaries[$peopleWithDogs[]] += 5
'

{
  "staff": [
    {
      "id": "1",
      "name": "bob",
      "pets": [
        {
          "name": "Rocky",
          "type": "cat"
        },
        {
          "name": "Bullwinkle",
          "type": "dog"
        }
      ]
    },
    {
      "id": "2",
      "name": "sally",
      "pets": [
        {
          "name": "Inigo",
          "type": "cat"
        }
      ]
    }
  ],
  "salaries": {
    "1": 17,
    "2": 15
  }
}

We first scan the staff to see who's worthy of a promotion, then we iterate over each of their ids and bump up their salary, and sure enough it works!

I'll admit that it took me a few tries to get this right in jq; if you're not careful you'll enumerate in a way that means jq can't keep track of your references and you'll be unable to edit the correct piece of the original object. For example, here's my first attempt to do this sort of thing:

$ cat company.json | jq '
. as $company
| .staff[]
| select(.pets[].type == "dog").id
| $company.salaries[.] += 5
'

jq: error (at <stdin>:28): Invalid path expression near attempt to access element "salaries" of {"staff":[{"id":"1","name"...

In this case it looks like jq can't edit something we've stored as a variable; a bit surprising, but fair enough I suppose.

This sort of task is tricky because it involves enumeration over one area, storing those results, then enumerating AND updating in another! It's definitely possible in jq, but some of the magic that jq performs makes it a bit tough to know what will work and what won't at a glance.

Now for the Haskell version:

salaryBump :: State Company ()
salaryBump = do
    ids <- gets $ toListOf 
            ( staff 
            . traversed 
            . filteredBy (employeePets . traversed . petType . only "dog") 
            . employeeId
            )
    for_ ids $ \id' ->
        salaries . ix id' += 5

>>> execState salaryBump company
Company { _staff = [ Employee { _employeeId = 1
                              , _employeeName = "bob"
                              , _employeePets = [ Pet { _petName = "Rocky"
                                                      , _petType = "cat"
                                                      }
                                                , Pet { _petName = "Bullwinkle"
                                                      , _petType = "dog"
                                                      }
                                                ]
                              }
                   , Employee { _employeeId = 2
                              , _employeeName = "sally"
                              , _employeePets = [ Pet { _petName = "Inigo"
                                                      , _petType = "cat"
                                                      }
                                                ]
                              }
                   ]
        , _salaries = fromList [ (1, 17)
                               , (2, 15)
                               ]
        }

You'll notice that now that we need to update a value rather than just query one, I've switched from the Reader monad to the State monad, which allows us to keep track of our Company in a way that imitates mutable state.

First we lean on optics to collect all the ids of people who have dogs. Then, once we've got those ids we can iterate over our ids and perform an update action using each of them. The lens library includes a lot of nifty combinators for working with optics inside the State monad; here we're using += to "statefully" update the salary at a given id. for_ from Data.Foldable correctly sequences each of our operations and applies the updates one after the other.
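If the stateful lens operators feel a bit magical, they desugar into plain modify calls; here's a rough sketch (the name addAt is mine, not from lens) of what += amounts to:

```haskell
import Control.Lens (ASetter', over)
import Control.Monad.State (MonadState, modify)

-- Roughly what 'l += n' does: modify the state by applying (+ n)
-- at every target of the setter. 'salaries . ix id' += 5' is then
-- just 'addAt (salaries . ix id') 5'.
addAt :: (MonadState s m, Num a) => ASetter' s a -> a -> m ()
addAt l n = modify (over l (+ n))
```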

When we're working inside State instead of Reader we need to use zoom instead of magnify; here's a rewrite of the last example which uses zoom in a trivial way; but zoom allows us to also edit values after we've zoomed in!

salaryBump :: State Company ()
salaryBump = do
    ids <- zoom ( staff 
                . traversed 
                . filteredBy (employeePets . traversed . petType . only "dog")
                ) $ do
              uses employeeId (:[])
    for_ ids $ \id' ->
        salaries . ix id' += 5

Next Steps

So hopefully by now I've convinced you that we can faithfully re-create the core behaviours of a language like jq in Haskell in a data-agnostic way! By swapping out your optics you can use this same technique on JSON, CSVs, HTML, or anything you can dream up. It leverages standard Haskell tools, so it composes well with Haskell libraries, and you maintain the full power of the Haskell language so you can easily write your own combinators to expand your vocabulary.

The question that remains is, where can we go from here? The answer, of course, is that we can add more monads!

Although we have filtered and filteredBy from lens to do filtering of our enumerations and traversals using optics, it would be nice to have the same power when we're inside a do-notation block! Haskell already has a stock-standard combinator for this called guard. It will "fail" in whichever monad you're working with. To work, it requires the monad to have an instance of Alternative; which unfortunately for us State does NOT have; so we'll need to look for an alternative way to get an Alternative instance 😂
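As a quick refresher on guard (a toy example of my own, using the list monad, whose Alternative instance everyone already has), a failed guard prunes just the branch it runs in:

```haskell
import Control.Monad (guard)

-- In the list monad, 'guard' failing yields 'empty' ([]) for that
-- branch only; the other branches of the enumeration survive.
evens :: [Int] -> [Int]
evens xs = do
  x <- xs
  guard (even x)
  pure x

-- evens [1..6] == [2,4,6]
```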

The MaybeT monad transformer exists specifically to add failure to other monad types, so let's integrate that! The tricky bit here is that we want to fail only a single branch of our computation, not the whole thing! So we'll need to "catch" any failed branches before they merge back into the main computation.

Let's write a small wrapper around zoom to get the behaviour we want.

infixr 0 %>
(%>) :: Traversal' s e -> MaybeT (State e) a -> MaybeT (State s) [a]
l %> m = do
    zoom l $ do
        -- Catch and embed the current branch so we don't fail the whole program
        a <- lift $ runMaybeT m
        return (maybe [] (:[]) a)

This defines a handy new combinator for our traversal DSL which allows us to zoom just like we did before, but the addition of MaybeT allows us to easily use guard to prune branches!

We make sure to run and re-lift the results of our action rather than embedding them directly; otherwise a single failed guard would fail the entire remaining computation, which we certainly don't want! Since each individual branch may fail, and since we've usually been collecting our results as lists anyway, I went ahead and embedded our results in a list as part of the combinator (the maybe [] (:[]) above is just maybeToList from Data.Maybe); it should make everything a bit easier to use!

Let's try it out! I'll rewrite the previous example, but we'll use guard instead of filteredBy this time.

salaryBump'' :: MaybeT (State Company) ()
salaryBump'' = do
    ids <- staff . traversed %> do
            isDog <- employeePets . traversed %> do
                       uses petType (== "dog")
            guard (or isDog)
            use employeeId
    for_ ids $ \id' ->
        salaries . ix id' += 5

>>> flip execState company . runMaybeT $ salaryBump''
Company
    { _staff    =
          [ Employee { _employeeId   = 1
                     , _employeeName = "bob"
                     , _employeePets =
                           [ Pet { _petName = "Rocky"
                                 , _petType = "cat"
                                 }
                           , Pet { _petName = "Bullwinkle"
                                 , _petType = "dog"
                                 }
                           ]
                     }
          , Employee { _employeeId   = 2
                     , _employeeName = "sally"
                     , _employeePets = [ Pet { _petName = "Inigo"
                                             , _petType = "cat"
                                             }
                                       ]
                     }
          ]
    , _salaries = fromList [(1, 17), (2, 15)]
    }

I wrote it out in "long form"; the expressiveness of our system means there are a few different ways to write the same thing; which probably isn't a good thing, but you can find the way that you like to work and standardize on that!

It turns out that if you want even more power you can replace MaybeT with a "List transformer done right" like LogicT or list-t. This will allow you to actually expand the number of branches within a zoom, not just filter them! It leads to a lot of power! I'll leave it as an exercise for the reader to experiment with, see if you can rewrite %> to use one of these list transformers instead!

Hopefully that helps to show how a few optics along with a few monads can allow you to replicate the power of something like jq and even add more capabilities, all by leveraging composable tools that already exist and while maintaining the full power of Haskell!

There are truly endless types of additional combinators you could add to make your code look how you want, but I'll leave that up to you. You can even use ReaderT or StateT as a base monad to make the whole stack into a transformer so you can add any other Monadic behaviour you want to your DSL (e.g. IO).

Is it really data agnostic?

Just to show that everything we've built so far works on any data type you like (so long as you can write optics for it); we'll rewrite our Haskell code to accept a JSON Aeson.Value object instead!

You'll find it's a bit longer than the jq version, but keep in mind that it's fully typesafe!

salaryBumpJSON :: MaybeT (State Value) ()
salaryBumpJSON = do
    ids <- key "staff" . values %> do
        isDog <- key "pets" . values %> do
                        pType <- use (key "type" . _String)
                        return $ pType == "dog"
        guard (or isDog)
        use (key "id" . _String)
    for_ ids $ \id' ->
        key "salaries" . key id' . _Integer += 5

As you can see it's pretty much the same! We just have to specify the type of JSON we expect to find in each location (e.g. _String, _Integer), but otherwise it's very similar!

For the record I'm not suggesting that you go and replace all of your CLI usages of jq with Haskell, but I hope that this exploration can help future programmers avoid "re-inventing" the wheel and give them a more mathematically structured approach when building their traversal systems; or maybe they'll just build those systems in Haskell instead 😉

I'm excited to see what sort of cool tricks, combinators, and interactions with other monads you all find!


Just to show that you can do "real" work with this abstraction here are a few more examples using this technique with different data types. These examples will still be a bit tongue in cheek, but hopefully show that you really can accomplish actual tasks with this abstraction across a wide range of data types.

First up, here's a transformation over a Kubernetes manifest describing the pods available in a given namespace. You can see an example of roughly what the data looks like here.

This transformation takes a map of docker image names to port numbers and goes through the manifest and sets each container to use the correct ports. It also tags each pod with all of the images from its containers, and finally returns a map of container names to docker image types! It's pretty cool how this abstraction lets us mutate data while also returning information.

{-# LANGUAGE OverloadedStrings #-}
module K8s where

import Data.Aeson hiding ((.=))
import Data.Aeson.Lens
import Control.Lens
import Control.Monad.State
import qualified Data.Map as M
import qualified Data.Text as T
import Data.Foldable

-- Load in your k8s JSON here however you like
k8sJSON :: Value
k8sJSON = undefined

transformation :: M.Map T.Text Int -> State Value (M.Map T.Text T.Text)
transformation ports = do
    zoom (key "items" . values) $ do
        containerImages <- zoom (key "spec" . key "containers" . values) $ do
            containerName <- use (key "name" . _String)
            imageName <- use (key "image" . _String . to (T.takeWhile (/= ':')))
            zoom (key "ports" . values) $ do
                let hostPort = M.findWithDefault 8080 imageName ports
                key "hostPort" . _Integral .= hostPort
                key "containerPort" . _Integral .= hostPort + 1000
            return $ M.singleton containerName imageName
        zoom (key "metadata" . key "labels") $ do
          for_ containerImages $ \imageName ->
              _Object . at imageName ?= "true"
        return containerImages

imagePorts :: M.Map T.Text Int
imagePorts = M.fromList [ ("redis", 6379)
                        , ("my-app", 80)
                        , ("postgres", 5432)
                        ]

result :: (M.Map T.Text T.Text, Value)
result = runState (transformation imagePorts) k8sJSON

Next up, let's work with some HTML! The following transformation uses taggy-lens to interact with HTML (or any XML you happen to have lying around).

This transformation will find all direct parents of <img> tags and will set the alt tags on those images to be all the text inside the parent node.

After that, it will find all <a> tags and wrap them in a <strong> tag while also returning a list of all href attributes so we can see all the links we have in the document!

{-# LANGUAGE OverloadedStrings #-}
module HTML where

import qualified Data.Text.Lazy as TL
import qualified Data.Text as T
import qualified Data.Text.Lazy.IO as TL
import Control.Monad.State
import Text.Taggy.Lens
import Control.Lens hiding (elements)

transformation :: State TL.Text [T.Text]
transformation = do
    -- Select all tags which have an "img" as a direct child
    zoom (html . elements . deep (filteredBy (elements . named (only "img")))) $ do
        -- Get the current node's text contents
        altText <- use contents
        -- Set the text contents as the "alt" tag for all img children
        elements . named (only "img") . attr "alt" ?= altText

    -- Transform all "a" tags recursively
    (html . elements . transformM . named (only "a")) 
      -- Wrap them in a <strong> tag while also returning their href value
      %%= \tag -> (tag ^.. attr "href" . _Just, Element "strong" mempty [NodeElement tag])

Lastly let's see a CSV example! I'll be using my lens-csv library for the optics.

This simple example iterates through all the rows in a csv and uses an overly simplistic formula to recompute their ages based on their birth year.

{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE LambdaCase #-}
module CSV where

import Control.Lens
import Data.Csv.Lens
import qualified Data.ByteString.Lazy as BL
import Control.Monad.State

recomputeAges :: State BL.ByteString ()
recomputeAges = do
    zoom (namedCsv . rows) $ do
        preuse (column @Int "birthYear") >>= \case
            Nothing -> return ()
            Just birthYear -> do
                column @Int "age" .= 2020 - birthYear

Hopefully these last few examples help convince you that this really is an adaptable solution even though they're still a bit silly.

Hopefully you learned something! Did you know I'm currently writing a book? It's all about Lenses and Optics! It takes you all the way from beginner to optics-wizard and it's currently in early access! Consider supporting it, and more posts like this one, by pledging on my Patreon page! It takes quite a bit of work to put these things together; if I managed to teach you something or even just entertain you for a minute or two, maybe send a few bucks my way for a coffee? Cheers!

Become a Patron!

September 27, 2020 12:00 AM

September 23, 2020

Mark Jason Dominus

The mystery of the malformed command-line flags

Today a user came to tell me that their command

  greenlight submit branch-name --require-review-by skordokott

failed, saying:

    ** unexpected extra argument 'branch-name' to 'submit' command

This is surprising. The command looks correct. The branch name is required. The --require-review-by option can be supplied any number of times (including none) and each must have a value provided. Here it is given once and the provided value appears to be skordokott.

The greenlight command is a crappy shell script that pre-validates the arguments before sending them over the network to the real server. I guessed that the crappy shell script parser wanted the branch name last, even though the server itself would have been happy to take the arguments in either order. I suggested that the user try:

  greenlight submit --require-review-by skordokott branch-name 

But it still didn't work:

    ** unexpected extra argument '--require-review-by' to 'submit' command

I dug in to the script and discovered the problem, which was not actually a programming error. The crappy shell script was behaving correctly!

I had written up release notes for the --require-review-by feature. The user had clipboard-copied the option string out of the release notes and pasted it into the shell. So why didn't it work?

In an earlier draft of the release notes, when they were displayed as an HTML page, there would be bad line breaks:

blah blah blah be sure to use the -
-require-review-by option…


blah blah blah the new --
require-review-by feature is…

No problem, I can fix it! I just changed the pair of hyphens (-, U+002D) at the beginning of --require-review-by to Unicode nonbreaking hyphens (‑, U+2011). Bad line breaks begone!

But then this hapless user clipboard-copied the option string out of the release notes, including its U+2011 characters. The parser in the script was (correctly) looking for U+002D characters, and didn't recognize --require-review-by as an option flag.

One lesson learned: people will copy-paste stuff out of documentation, and I should be prepared for that.

There are several places to address this. I made the error message more transparent; formerly it would complain only about the first argument, which was confusing because it was the one argument that wasn't superfluous. Now it will say something like

    ** extra branch name '--require-review-by' in 'submit' command
    ** extra branch name 'skordokott' in 'submit' command

which is more descriptive of what it actually doesn't like.

I could change the nonbreaking hyphens in the release notes back to regular hyphens and just accept the bad line breaks. But I don't want to. Typography is important.

One idea I'm toying with is to have the shell script silently replace all nonbreaking hyphens with regular ones before any further processing. It's a hack, but it seems like it might be a harmless one.
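A minimal sketch of that hack in shell (the function name is mine; GNU sed's \xHH escapes are assumed, and U+2011 is the UTF-8 byte sequence e2 80 91):

```shell
# Map U+2011 (non-breaking hyphen) back to ASCII hyphen-minus
# before the argument parser ever sees it.
normalize_hyphens() {
  printf '%s' "$1" | sed 's/\xe2\x80\x91/-/g'
}

normalize_hyphens '‑‑require-review-by'   # prints --require-review-by
```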

So many weird things can go wrong. This computer stuff is really complicated. I don't know how anyone gets anything done.

[ Addendum: A reader suggests that I could have fixed the line breaks with CSS. But the release notes were being presented as a Slack “Post”, which is essentially a WYSIWYG editor for creating shared documents. It presents the document in a canned HTML style, and as far as I know there's no way to change the CSS it uses. Similarly, there's no way to insert raw HTML elements, so no way to change the style per-element. ]

by Mark Dominus ( at September 23, 2020 06:18 PM

Tweag I/O

Announcing Lagoon

We are happy to announce the open source release of Lagoon. Jointly developed with Pfizer, Lagoon is a tool for centralizing semi-structured datasets like CSV or JSON files into a data “lagoon”, where your data can be easily versioned, queried, or passed along to other ETL pipelines.

If you’ve ever worked to extract meaning from collections of disparate CSV or JSON files, you’ll know that one of the most tedious and labor-intensive steps in this process is mapping their structure and contents into a common storage location so that they can be joined and queried easily. We wrote Lagoon to do this part of the job for you.

The primary component of Lagoon is its server which is layered on top of a PostgreSQL database. Lagoon automatically generates database schemas for your datasets, allowing you to directly ingest them into the Lagoon store without having to manually configure tables. Data is queryable via a REST API, client libraries, or directly in PostgreSQL via automatically generated SQL views.
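For example, once a dataset has been ingested, its generated view can be queried like any other table in PostgreSQL. A sketch (the schema and view names here are assumptions, modeled on the <name>_v<version> naming that Lagoon reports at ingest time):

```sql
-- Illustrative only: "demo" and the dataset/view names are not from
-- a real deployment; substitute the names your ingest output reports.
SELECT *
FROM demo."storm_details_2019_v1"
LIMIT 5;
```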

While other tools like Apache Drill also support querying CSV and JSON files, they typically require the user to manually specify types for data stored in text-based formats. Lagoon’s type inference simplifies the process of querying these datasets, and since data is ingested into a centralized relational database, it’s easier to integrate that data with traditional ETL tools. Lagoon also supports dataset-level versioning, enabling you to store and query multiple versions of your datasets as you wrangle your data.


Let’s try it out! As a simple example, we can ingest and query a few sample datasets from the NOAA Storm Events Database, which contains a record of major storm events in the United States over the past century. Comprising a large set of CSV files with several different schemas, the Storm Events Database provides a good case study for Lagoon’s data ingestion capabilities. It will also allow us to try out Lagoon’s query interface to incorporate that data into a data analysis workflow in Python.

In this example, we will:

  1. Start up a Lagoon server and database backend using Docker and Docker Compose.
  2. Ingest a few example files into our new lagoon using the lagoon-client Docker image.
  3. Query and plot some data from our newly ingested storm datasets using PyLagoon, Lagoon’s Python client library.

1. Create a new lagoon

We can create a local lagoon-server instance using the Docker Compose file that is included in the GitHub repository. This file also specifies a container for the lagoon-server’s PostgreSQL backend.

$ git clone
$ cd lagoon/docker
$ docker-compose up

2. Ingest example datasets

Now that the lagoon-server instance is running, we can ingest some example datasets. One easy way to ingest data is via the lagoon-client Docker image.

Let’s take a look at the storms from 2019. The CSV files in this example can be downloaded from the NOAA storm events file server.

As a first example, we can ingest the storm details dataset:

# Note: this command assumes you've downloaded the csv files to your working directory
$ docker run --network="host" --volume "$PWD/StormEvents_details-ftp_v1.0_d2019_c20200716.csv:/StormEvents_details-ftp_v1.0_d2019_c20200716.csv" \
    tweag/lagoon-client --port 1234 --host localhost ingest --name "storm_details_2019" /StormEvents_details-ftp_v1.0_d2019_c20200716.csv

This is a long command, so let’s take a closer look at what we are actually doing. In the first line we are specifying options for the Docker container. We specify that: 1) we want the lagoon-client container to be able to communicate with the lagoon-server we started earlier, which is running on the host network, and 2) we want to mount the input CSV file into the container to make it visible to the Lagoon client.

In the second line, we invoke the lagoon command line client’s ingest command on the storm details CSV file, specifying that the lagoon-server is listening to port 1234 on our local host. We also use the --name flag to give the new dataset a human-readable identifier which can be used when querying it later.

The output from the ingest command describes the schema that was generated for the newly ingested dataset:

storm_details_2019 (version 1)
URL         (local)
description storm_details_2019
tags        (no tags)
created     2020-08-21 08:48:03.583204579 UTC
added by    unauthenticated-user
deprecated  False
schema      demo
table       t1 (with view storm_details_2019_v1)
typed       typed1 (with view storm_details_2019_v1_typed)
row count   67506
        Type	Name

Something to keep in mind is that type inference has its limits and will sometimes result in an inconvenient or unexpected type. For example, the generated BEGIN_YEARMONTH column contains values like “201907” representing July 2019. This value was stored as an INTEGER which makes it harder to use to construct a DATE value than if it was stored as text. For this reason, Lagoon also always generates an “untyped” SQL view which can be used to access raw values (storm_details_2019_v1 in the example above).
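As a concrete illustration (a hypothetical helper of my own, not part of Lagoon or PyLagoon), recovering a proper date from that integer encoding takes an extra step:

```python
from datetime import date

def yearmonth_to_date(ym: int) -> date:
    """Split an integer like 201907 (BEGIN_YEARMONTH style) into a date."""
    return date(ym // 100, ym % 100, 1)

print(yearmonth_to_date(201907))  # 2019-07-01
```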

You can disable type inference entirely using the ingest command’s --no-type-inference flag. Disabling type inference will stop Lagoon from generating typed views and make Lagoon queries return data in a raw format (text for delimited text sources and JSON strings for JSON sources).

We can run a similar ingest command to ingest the 2019 storm fatalities dataset:

$ docker run --network="host" --volume "$PWD/StormEvents_fatalities-ftp_v1.0_d2019_c20200716.csv:/StormEvents_fatalities-ftp_v1.0_d2019_c20200716.csv" \
    tweag/lagoon-client --port 1234 --host localhost ingest --name "storm_fatalities_2019" /StormEvents_fatalities-ftp_v1.0_d2019_c20200716.csv

3. Query the lagoon

With our data ingested, we can start querying data using PyLagoon and analyze it using some standard data science tools in Python. The first step is to initialize the client.

from PyLagoon import LagoonConfig, Lagoon

# The configuration argument was truncated here; a plausible completion,
# assuming the client configuration lives in a YAML file:
lagoon = Lagoon(config=LagoonConfig.load(yaml_file="my_config.yaml"))

We can access our datasets using the names we provided when we ingested them (the ingest command’s --name argument). You can also query all available data sources by omitting the name argument. It is also possible to query a subset of them by tag using the tag argument, which I won’t be covering in this post.

details_source = lagoon.sources(name="storm_details_2019")[0]
fatalities_source = lagoon.sources(name="storm_fatalities_2019")[0]

Each source contains a description of its corresponding dataset, but no actual data has been downloaded yet. To load data into a pandas DataFrame, we can use our lagoon object’s download_query() or download_source() methods.

The storm_details_2019 dataset includes data for over 67,000 storm events. While it is possible to load this entire dataset into a pandas DataFrame on most workstations, with larger datasets we would quickly saturate our workstation’s available memory. One of the advantages to using Lagoon is that we can limit the data we load into memory to only include the data we are interested in by specifying a SQL query. This helps to minimize client resource consumption and allows us to analyze larger datasets than would be possible with pandas alone.

Let’s use this query functionality to examine the storms that happened in the state of Texas in 2019.

from PyLagoon import PGMeta

# Lagoon uses SQLAlchemy for formatting SQL queries:
# To construct queries using SQLAlchemy, we need to generate a description of our database schema
meta = PGMeta([details_source, fatalities_source])

# Schemas for our two datasets:
storms = meta[details_source]
fatalities = meta[fatalities_source]

# Note: we can also use the PyLagoon.build_sql_query() function to preview or spot-check our query
query = meta.query(storms).filter(storms.STATE.like("%TEXAS%"))

df = lagoon.download_query(query=query, sources=[details_source])

With our query results loaded, we can start working with our dataset. For example, we can map the storms along with their types.


import pandas as pd
import plotly.express as px

# It looks like some bad values in the lat/lon columns forced them
# to be stored as strings. We can still cast them to floats (ignoring
# errors) using pandas:
df["BEGIN_LAT"] = pd.to_numeric(df["BEGIN_LAT"], errors="coerce")
df["BEGIN_LON"] = pd.to_numeric(df["BEGIN_LON"], errors="coerce")

# The original mapping call was truncated; a plausible reconstruction using
# plotly express, with the lat/lon column names from the dataset:
px.scatter_geo(
    df, lat="BEGIN_LAT", lon="BEGIN_LON", color="EVENT_TYPE",
    title="NOAA Storm Events (2019)",
).show()

px.histogram(df, x="EVENT_TYPE", width=1000, height=600).show()

(figures: map of 2019 Texas storm events and a histogram of event types)

It looks like Texas has a lot of hail storms!

We can also perform more complex queries. For example, we can join our two datasets to see the type of location where the most storm-related fatalities occurred in 2019.

query = (
    meta.query(
        storms.EVENT_ID, storms.BEGIN_LAT, storms.BEGIN_LON, fatalities.FATALITY_LOCATION
    )
    .join(fatalities, storms.EVENT_ID == fatalities.EVENT_ID)
)

df_joined = lagoon.download_query(query=query, sources=[details_source, fatalities_source])

fig = px.histogram(
    df_joined,
    x="FATALITY_LOCATION",
    title="Locations of storm-related fatalities (2019)",
)
fig.show()


With just a few quick commands we were able to ingest new datasets into our lagoon and start analyzing them, all without having to worry about generating database schemas.

Next steps

To get started with Lagoon, check out the documentation on GitHub. The Lagoon server and command line client are available as Docker images on DockerHub, and all components are also packaged using Nix in the GitHub repository.

Thanks for reading, and we hope that Lagoon is able to help streamline your data analysis workflows.

September 23, 2020 12:00 AM

September 22, 2020

Neil Mitchell

Don't use Ghcide anymore (directly)

Summary: I recommend people use the Haskell Language Server IDE.

Just over a year ago, I recommended people looking for a Haskell IDE experience to give Ghcide a try. A few months later the Haskell IDE Engine and Ghcide teams agreed to work together on Haskell Language Server - using Ghcide as a library as the core, with the plugins/installer experience from the Haskell IDE Engine (by that stage we were already both using the same Haskell setup and LSP libraries). At that time Alan Zimmerman said to me:

"We will have succeeded in joining forces when you (Neil) start recommending people use Haskell Language Server."

I'm delighted to say that time has come. For the last few months I've been both using and recommending Haskell Language Server for all Haskell IDE users. Moreover, for VS Code users, I recommend simply installing the Haskell extension which downloads the right version automatically. The experience of Haskell Language Server is better than either the Haskell IDE Engine or Ghcide individually, and is improving rapidly. The teams have merged seamlessly, and can now be regarded as a single team, producing one IDE experience.

There's still lots of work to be done. And for those people developing the IDE, Ghcide remains an important part of the puzzle - but it's now a developer-orientated piece rather than a user-orientated piece. Users should follow the README at Haskell Language Server and report bugs against Haskell Language Server.

by Neil Mitchell at September 22, 2020 09:16 AM

September 21, 2020

Monday Morning Haskell

Rustlings Video Blog!

We're doing something very new this week. Instead of doing a code writeup, I've actually made a video! In keeping with the last couple months of content, this first one is still Rust related. We'll walk through the Rustlings tool, which is an interactive program that teaches you the basics of the Rust language! Soon, we'll start exploring how we might do this in Haskell!

You can also watch this video on our YouTube Channel! Subscribe there or sign up for our mailing list!

by James Bowen at September 21, 2020 02:30 PM

FP Complete

Rust: Of course it compiles, right?

I recently joined Matt Moore on LambdaShow. We spent some time discussing Rust, and one point I made was that, in my experience with Rust, ergonomics go something like this:

  • Beginner: oh cool, that worked, no problem
  • Advanced beginner: wait... why exactly did that work 99 other times? Why is it failing this time? I'm so confused!
  • Intermediate/advanced: oh, now I understand things really well, that's convenient

That may seem a bit abstract. Fortunately for me, an example of that popped up almost immediately after the post went live. This is my sheepish blog post explaining how I fairly solidly misunderstood something about the borrow checker. Hopefully it will help others.

Two weeks back, I wrote an offhand tweet with a bit of a code puzzle:

I thought this was a slightly tricky case of ownership, and hoped it would help push people to a more solid understanding of the topic. Soon after, I got a reply that gave the solution I had expected:

But then the twist: a question that made me doubt my own sanity.

This led me to filing a bogus bug report with the Rust team. Fortunately for me, Jonas Schievink had mercy and quickly pointed me to the documentation on temporary lifetime extension, which explains the whole situation.

If you've read this much, and everything made perfect sense, congratulations! You probably don't need to bother reading the rest of the post. But if anything is unclear, keep reading. I'll try to make this as clear as possible.

And if the explanation below still doesn't make sense, may I recommend FP Complete's Rust Crash Course eBook to brush up on ownership?

Borrow rules

Arguably the key feature of Rust is its borrow checker. One of the core rules of the borrow checker is that you cannot access data that is mutably referenced elsewhere. Or said more directly: you can either immutably borrow data multiple times, or mutably borrow it once, but not both at the same time. Usually, we let the borrow checker enforce this rule. And it enforces that rule at compile time.

However, there are some situations where a statically checked rule like that is too restrictive. In such cases, the Rust standard library provides cells, which let you move this borrow checking from compile time (via static analysis) to runtime (via dynamic counters). This is known as interior mutability. And a common type for this is a RefCell.

With a RefCell, the checking occurs at runtime. Let's demonstrate how that works. First, consider this program that fails to compile:

fn main() {
    let mut age: u32 = 30;

    let age_ref: &u32 = &age;

    let age_mut_ref: &mut u32 = &mut age;
    *age_mut_ref += 1;

    println!("Happy birthday, you're {} years old!", age_ref);
}

We try to take both an immutable reference and a mutable reference to the value age simultaneously. This doesn't work out too well:

error[E0502]: cannot borrow `age` as mutable because it is also borrowed as immutable
 --> src\
4 |     let age_ref: &u32 = &age;
  |                         ---- immutable borrow occurs here
5 |
6 |     let age_mut_ref: &mut u32 = &mut age;
  |                                 ^^^^^^^^ mutable borrow occurs here
9 |     println!("Happy birthday, you're {} years old!", age_ref);
  |                                                      ------- immutable borrow later used here

The right thing to do is to fix this code. But let's do the wrong thing! Instead of trying to fix it correctly, we're going to use RefCell to replace our compile time checks (which prevent the code from building) with runtime checks (which allow the code to build, and then fail at runtime). Let's check that out:

use std::cell::{Ref, RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    let age_ref: Ref<u32> = age.borrow();

    let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
    *age_mut_ref += 1;

    println!("Happy birthday, you're {} years old!", age_ref);
}

It's instructive to compare this code with the previous code. It looks remarkably similar! We've replaced &u32 with Ref<u32>, &mut u32 with RefMut<u32>, and &age and &mut age with age.borrow() and age.borrow_mut(), respectively. You may be wondering: what are those Ref and RefMut things? Hold that thought.

This code surprisingly compiles. And here's the runtime output (using Rust Nightly, which gives a slightly nicer error message):

thread 'main' panicked at 'already borrowed: BorrowMutError', src\

That looks a lot like the error message we saw above from the compiler. That's no accident: these are the same error showing up in two different ways.

Ref and RefMut

Our code panics when it calls age.borrow_mut(). Something seems to know that the age_ref variable exists. And in fact, that's basically true. When we called age.borrow(), a counter on the RefCell was incremented. As long as age_ref stays alive, that counter will remain active. When age_ref goes out of scope, the Ref<u32> will be dropped, and the drop will cause the counter to be decremented. The same logic applies to the age_mut_ref. Let's make two modifications to our code. First, there's no need to call age.borrow() before age.borrow_mut(). Let's slightly rearrange the code:

use std::cell::{Ref, RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
    *age_mut_ref += 1;

    let age_ref: Ref<u32> = age.borrow();
    println!("Happy birthday, you're {} years old!", age_ref);
}

This compiles, but still gives a runtime error. However, it's a slightly different one:

thread 'main' panicked at 'already mutably borrowed: BorrowError', src\

Now the problem is that, when we try to call age.borrow(), the age_mut_ref is still active. Fortunately, we can fix that by manually dropping it before the age.borrow() call:

use std::cell::{Ref, RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
    *age_mut_ref += 1;
    std::mem::drop(age_mut_ref); // release the mutable borrow

    let age_ref: Ref<u32> = age.borrow();
    println!("Happy birthday, you're {} years old!", age_ref);
}

And finally, our program not only compiles, but runs successfully! Now I know that I'm 31 years old! (Or at least I wish I still was.)

We have another mechanism for forcing the value to drop: an inner block. If we create a block within the main function, it will have its own scope, and the age_mut_ref will automatically be dropped, no need for std::mem::drop. That looks like this:

use std::cell::{Ref, RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    {
        let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
        *age_mut_ref += 1;
    }

    let age_ref: Ref<u32> = age.borrow();
    println!("Happy birthday, you're {} years old!", age_ref);
}

Once again, this compiles and runs. Looking back, we can hopefully now understand why Ref and RefMut are necessary. If .borrow() and .borrow_mut() simply returned actual references (immutable or mutable), there would be no struct with a Drop impl to ensure that the internal counters in RefCell were decremented when they go out of scope. So the world now makes sense.
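The bookkeeping described here is not specific to Rust. As a toy illustration of the same counter-plus-scope-exit idea (this is my own Python sketch with hypothetical names, not Rust's actual implementation), context managers can play the role of the Drop impl:

```python
from contextlib import contextmanager

class ToyRefCell:
    """Toy model of RefCell-style runtime borrow counting."""

    def __init__(self, value):
        self.value = value
        self.readers = 0      # active shared borrows
        self.writing = False  # active mutable borrow

    @contextmanager
    def borrow(self):
        if self.writing:
            raise RuntimeError("already mutably borrowed: BorrowError")
        self.readers += 1
        try:
            yield self.value
        finally:
            self.readers -= 1  # the "Drop": decrement when the scope ends

    @contextmanager
    def borrow_mut(self):
        if self.writing or self.readers:
            raise RuntimeError("already borrowed: BorrowMutError")
        self.writing = True
        try:
            yield self          # mutate through .value
        finally:
            self.writing = False

age = ToyRefCell(30)
with age.borrow_mut() as cell:  # ok: no other borrows active
    cell.value += 1
with age.borrow() as years:     # ok: the mutable borrow was "dropped"
    print(f"Happy birthday, you're {years} years old!")  # prints 31
```

Requesting a shared borrow while the mutable one is still live raises at runtime, mirroring the panics shown above.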

No reference without a Ref

Here's something cool: you can borrow a normal reference (e.g. &u32) from a Ref (e.g. Ref<u32>). Check this out:

use std::cell::{Ref, RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    {
        let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
        *age_mut_ref += 1;
    }

    let age_ref: Ref<u32> = age.borrow();
    let age_reference: &u32 = &age_ref;
    println!("Happy birthday, you're {} years old!", age_reference);
}

age_ref is a Ref<u32>, but age_reference is a &u32. This is a compile-time-checked reference. We're now saying that the lifetime of age_reference cannot outlive the lifetime of age_ref. As it stands, that's true, and everything compiles and runs correctly. But we can break that really easily using either std::mem::drop:

use std::cell::{Ref, RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    {
        let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
        *age_mut_ref += 1;
    }

    let age_ref: Ref<u32> = age.borrow();
    let age_reference: &u32 = &age_ref;
    std::mem::drop(age_ref); // drop the Ref while age_reference still borrows from it
    println!("Happy birthday, you're {} years old!", age_reference);
}

Or by using inner blocks:

use std::cell::{Ref, RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    {
        let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
        *age_mut_ref += 1;
    }

    let age_reference: &u32 = {
        let age_ref: Ref<u32> = age.borrow();
        &age_ref
    };
    println!("Happy birthday, you're {} years old!", age_reference);
}

The latter results in the error message:

error[E0597]: `age_ref` does not live long enough
  --> src\
10 |     let age_reference: &u32 = {
   |         ------------- borrow later stored here
11 |         let age_ref: Ref<u32> = age.borrow();
12 |         &age_ref
   |         ^^^^^^^^ borrowed value does not live long enough
13 |     };
   |     - `age_ref` dropped here while still borrowed

This makes sense hopefully: age_reference is borrowing from age_ref, and therefore cannot outlive it.

The false fail

Alright, our inner block currently looks like this, and refuses to compile:

let age_reference: &u32 = {
    let age_ref: Ref<u32> = age.borrow();
    &age_ref
};

age_ref is really a useless temporary variable inside that block. I assign a value to it, and then immediately borrow from that variable and never use it again. It should have no impact on our program to combine that into a single line within a block, right? Wrong. Check out this program:

use std::cell::{RefMut, RefCell};
fn main() {
    let age: RefCell<u32> = RefCell::new(30);

    {
        let mut age_mut_ref: RefMut<u32> = age.borrow_mut();
        *age_mut_ref += 1;
    }

    let age_reference: &u32 = { &age.borrow() };
    println!("Happy birthday, you're {} years old!", age_reference);
}

This looks almost identical to the code above. But this code compiles and runs successfully. What gives?!? It turns out, creating our temporary variable wasn't quite as meaningless as we thought. That's thanks to something called temporary lifetime extension. Let me start with a caveat from the docs themselves:

Note: The exact rules for temporary lifetime extension are subject to change. This is describing the current behavior only.

With that out of the way, let's quote once more from the docs:

The temporary scopes for expressions in let statements are sometimes extended to the scope of the block containing the let statement. This is done when the usual temporary scope would be too small, based on certain syntactic rules.

OK, I'm all done quoting. The documentation there is pretty good at explaining things. For our case above, let's look at the code in question:

let age_reference: &u32 = { &age.borrow() };

age.borrow() creates a value of type Ref<u32>. What variable holds that value? Trick question: there isn't one. This value is temporary. We use temporary values in programming all the time. In (1 + 2) + 5, the expression 1 + 2 generates a temporary 3, which is then added to 5 and thrown away. Normally these temporaries aren't terribly interesting.

But in the context of lifetimes and borrow checkers, they are. Taken at its most literal, { &age.borrow() } should behave as follows:

  • Create a new block
  • Call age.borrow() to get a Ref<u32>
  • That Ref<u32> is owned by the block around this expression
  • Borrow a reference to that Ref<u32>
  • Try to return that reference as the result of the block
  • Realize that reference refers to a value that was dropped with the block, and therefore lifetime rules are violated

But this kind of thing would pop up all the time! Consider the incredibly simple examples from the docs that I promised not to quote from anymore (borrowing code snippets is different, OK?):

let x = &mut 0;
// Usually a temporary would be dropped by now, but the temporary for `0` lives
// to the end of the block.
println!("{}", x);

It turns out that strictly following lexical scoping rules for lifetimes wouldn't be ergonomic. So there's a special case to make it feel right.


Firstly, I hope this was a good example of my comment about ergonomics. I never would have thought about let x = &mut 0 as a beginner: yeah, sure, I can borrow a reference to a number. Cool. Then, with a bit more experience, it suddenly seems shocking: what's the lifetime of 0? And finally, with just a bit more experience (and the kind help of Rust issue tracker maintainers), it makes sense again.

Secondly, I hope this semi-deep dive into how RefCell moves borrow rule checking to runtime helps elucidate some things. In my opinion, this was one of the harder concepts to grok in my Rust learning journey.

Thirdly, I hope seeing the temporary lifetime extension rules helps clarify why some things work that you thought wouldn't. I know I've been in the middle of writing something before, been surprised the borrow checker didn't punch me in the face, and then happily went on my way instead of questioning why everything went better than expected.

The tweets I started this off with discuss a more advanced version than I covered in the rest of the post. I'd recommend going back to the top and making sure the code and explanations all make sense.

Want to learn more about Rust? Check out FP Complete's Rust Crash Course, or read about our training courses. Also, you may be interested in these related posts:

September 21, 2020 12:00 AM

September 20, 2020

Ken T Takusagawa

[bvpqwmoh] Making Roman numerals worse

Roman numerals are terrible, but we explore making them even worse.

Extend additive notation the obvious way: iii=3, iiii=4, iiiii=5, iiiiii=6, vv=10, vvv=15.

Extend subtractive notation in the following (terrible) way.  We describe it around the character D (500), but the full system is its generalization to all Roman numeral characters.

First, consider uD, where u is a Roman numeral string composed of one or more of the characters I V X L C (i.e., anything less than D).  Interpret u (recursively) as a number.  Then, uD is 500-value(u).

ix = 9.  iix = 8.  vix = 10 - 6 = 4.

Incidentally, iiiiiv becomes one way to express zero.  So is vvx.  We can also create negative numbers: vvix = -(5 + 5 + 1) + 10 = -1.

Next, consider uDvDwDx.  u v w x are each strings composed of characters less than D.  Evaluate as follows: value(uDvDwDx) = (500-value(u)) + (500-value(uv)) + (500-value(uvw)) + value(x).  uv and uvw are string concatenations, so subtractive prefixes can get used multiple times.  It feels a bit like earlier strings, e.g., u, distribute over later strings v and w.

idid = id + iid = 499 + 498 = 997.  diid = 500 + 498 = 998.  iidd = iid + iid = 498 + 498 = 996.  idvd = id + ivd = 499 + 496 = 995.  vdidxd = vd + vid + vixd = 495 + (500-value(vi)) + (500-value(vix)) = 495 + 494 + 496 = 1485.

xixixixixixixixixixix = 10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 + 0 = 55.

Subtractive prefixes get used multiple times through concatenation only through one level of characters:

iv = 4.  ivx = 10-value(iv) = 6, not anything bizarre like value(ix)-value(iv).  ixvx = (10-value(i)) + (10-value(iv)) = 9 + 6 = 15.

This system is backward compatible with existing Roman numerals.  It assigns values to strings which were previously invalid, and no previously valid strings change their value.  Every string of Roman numeral characters now has a unique value.  There are many ways to express any given value.
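As a quick illustration of the evaluation rule, here is my own executable sketch of it in Python (not the code the post links to): find the largest character present, treat its occurrences as separators, and apply the cumulatively concatenated subtractive prefixes.

```python
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def worse_roman(s):
    """Evaluate a 'worse' Roman numeral by the generalized subtractive rule."""
    s = s.upper()
    if not s:
        return 0
    biggest = max(s, key=VALUES.get)    # the 'D' in the pattern uDvDwDx
    *prefixes, rest = s.split(biggest)  # u, v, w, ..., then the trailing x
    total, acc = 0, ""
    for u in prefixes:
        acc += u                        # prefixes concatenate: u, uv, uvw, ...
        total += VALUES[biggest] - worse_roman(acc)
    return total + worse_roman(rest)

print(worse_roman("idid"))  # 997, matching id + iid = 499 + 498
print(worse_roman("vvix"))  # -1
```

Ordinary numerals still evaluate normally (worse_roman("MCMXCIX") gives 1999), matching the backward-compatibility claim above.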

Here is Haskell source code for evaluating these "worse" Roman numerals.  We also provide routines for finding the shortest representation of a given Arabic number.  The shortest representation is found by breadth-first search.  At the end of this post, we give the shortest representation of all numbers from -100 to 100.  Some numbers have multiple possible shortest representations: we give them in a decreasing aesthetic order described in the compareroman function.

Future work: assume the minus sign is available for negation.  For what negative numbers (if any) does the shortest representation not use it?  Are there any positive numbers whose shortest representation does use it?  (Both of these seem unlikely.)  Another possibility: invent a character for zero, perhaps N for nihil or nullus.  Consider expressing negative numbers with subtractive prefixes in front of N.

Future work: instead of the standard set of Roman numeral characters and their values, consider some different set of values for characters: powers of 2, Fibonacci numbers, square numbers.  Does anything interesting happen?  Perhaps interesting things happen when seeking the shortest representation of numbers.

We define a sequence "worstcase" that exemplifies the "worst case" of our worse Roman numeral system.  Each line below has the next largest Roman numeral character interleaved between the characters of the previous line, and one more at the end.

1 I
4 IV

With Roman numeral characters beyond M, the sequence would continue

1304801 -42453397 2336054109 -221579717657 36896704797401 -10904184517859485 5768308016008033877 -5503513512222683409697

Getting the last 4 values required memoization in order not to run out of memory.  We used Data.MemoTrie.memoFix from the MemoTrie package, version 0.6.9.  It had higher performance than Data.Function.Memoize in the memoize package, version 0.8.1.  We did not have the patience to compute the next number in the sequence.


1 I
2 II
4 IV
5 V
6 VI
9 IX
10 X
11 XI
12 XII
15 XV
16 XVI
18 IXX
19 XIX
20 XX
21 XXI
25 XXV
35 XVL
39 XIL
40 XL
44 VIL
45 VL
48 IIL
49 IL
50 L
51 LI
52 LII
55 LV
56 LVI
60 LX
61 LXI
65 LXV
70 LXX
85 XVC
89 XIC
90 XC
94 VIC
95 VC
99 IC
100 C

-100 DDCM

by Unknown at September 20, 2020 04:30 AM

September 17, 2020


MuniHac 2020

We enjoyed attending virtual MuniHac and would like to thank our co-organisers TNG for all the work they put into planning, hosting, and running the event. In particular, we’ve been very impressed with their Virtual Office software that was used during the event.

There were three tracks of talks during this two-day online event, and if you missed it, all the talks are available both on the MuniHac website and on YouTube.

Several of us contributed talks at MuniHac, including:

Being lazy without being bloated

Edsko de Vries


Laziness is one of Haskell’s most distinctive features. It is one of the two features of functional programming that “Why Functional Programming Matters” identifies as key to modularity, but it is also one of the most frequently cited features of Haskell that programmers would perhaps like to change. One reason for this ambivalence is that laziness can give rise to space leaks, which can sometimes be fiendishly difficult to debug. In this talk we will present a new library called nothunks which can be used to test for the absence of unexpected thunks in long-lived data; when an unexpected thunk is found, a “stack trace” is returned identifying precisely where the thunk is (“the second coordinate of a pair in a map in a list in type T”). In combination with QuickCheck, this can be used to test that an API does not create any thunks when it shouldn’t and thunks that are created are easily identified and fixed. Whilst it doesn’t of course fix all space leaks, it can help avoid a significant proportion of space leaks due to excessive laziness.

Contravariant logging: How to add logging without getting grumpy

Duncan Coutts


Logging usually makes me grumpy. It tends to clutter code and adds unnecessary dependencies. It’s just not beautiful.

I want to share the good news that there is an approach to logging that does not make me grumpy, and I have used it in a large project where it has worked out well. It is a relatively new approach based on contravariant functors that avoids cluttering the code, has a simple general interface that minimises concrete dependencies, and still allows a choice of logging backend.

This talk will cover the problems with logging libraries, how contravariant logging improves things and how to apply contravariant logging in your project, with your choice of logging backend.

Addendum: Duncan forgot in his talk to mention the existing libraries that use the core idea. Duncan says:

My goal in the talk was to promote the contravariant logging idea as being a good idea. As I said in the talk I am certainly not trying to take credit for the idea. I should however have credited the people who did come up with the idea and mentioned the existing libraries that are based on this idea.

So to give credit where credit is due, to the best of my knowledge the idea was rendered into code several times in the last few years:

I think it is great that this idea is being picked up and included in these libraries. As I argue in the talk however, we maximise the ability to use the pattern in different applications by minimising the assumptions. For example both di-core and co-log-core include additional “opinionated” functionality. I would like to see us get to the stage where there’s a package on Hackage that has just the minimal contravariant logging interface. That could be some future version of one of the existing library packages, or a core library shared between them.

Liquid Haskell (workshop)

Andres Löh


Liquid Haskell is an extension to Haskell that adds refinement types to the language, which are then checked via an external theorem prover such as z3. With refinement types, one can express many interesting properties of programs that are normally out of reach of Haskell’s type system or only achievable via quite substantial encoding efforts and advanced type system constructs. On the other hand, the overhead for checking refinement types is often rather small, because the external solver is quite powerful.

Liquid Haskell used to be an external, standalone executable, but is now available as a GHC plugin, making it much more convenient to use.

In this tutorial, we’ll discuss how refinement types work, give many examples of their use and learn how to work with Liquid Haskell productively.

Well-Typed Services

If you want to find more about what Well-Typed can offer, please check out our Services page, or just send us an email.

by christine, andres, duncan, edsko at September 17, 2020 12:00 AM

Jasper Van der Jeugt

Lazy Sort: Counting Comparisons


{-# LANGUAGE BangPatterns #-}
module Main where

import Data.IORef (IORef)
import qualified Data.Map as Map
import qualified Data.IORef as IORef
import Control.Monad (replicateM, forM_, unless, forM)
import Data.List (sort, intercalate, foldl')
import System.Random (randomIO)
import System.IO.Unsafe (unsafePerformIO)

Haskell’s laziness allows you to do many cool things. I’ve talked about searching an infinite graph before. Another commonly mentioned example is finding the smallest N items in a list.

Because programmers are lazy as well, this is often defined as:

smallestN_lazy :: Ord a => Int -> [a] -> [a]
smallestN_lazy n = take n . sort

This happens regardless of the language of choice if we’re confident that the list will not be too large. It’s more important to be correct than it is to be fast.

However, in strict languages we’re really sorting the entire list before taking the first N items. We can implement this in Haskell by forcing the length of the sorted list.

smallestN_strict :: Ord a => Int -> [a] -> [a]
smallestN_strict n l0 = let l1 = sort l0 in length l1 `seq` take n l1

If you’re at least somewhat familiar with the concept of laziness, you may intuitively realize that the lazy version of smallestN is much better since it’ll only sort as far as it needs.

But how much better does it actually do, with Haskell’s default sort?

A better algorithm?

For the sake of the comparison, we can introduce a third algorithm, which does a slightly smarter thing by keeping a heap of the smallest elements it has seen so far. This code is far more complex than smallestN_lazy, so if it performs better, we should still ask ourselves if the additional complexity is worth it.

smallestN_smart :: Ord a => Int -> [a] -> [a]
smallestN_smart maxSize list = do
    (item, n) <- Map.toList heap
    replicate n item
  where
    -- A heap is a map of the item to how many times it occurs in
    -- the heap, like a frequency counter.
    heap = foldl' (\acc x -> insert x acc) Map.empty list
    insert x heap0
        | Map.size heap0 < maxSize = Map.insertWith (+) x 1 heap0
        | otherwise = case Map.maxViewWithKey heap0 of
            Nothing -> Map.insertWith (+) x 1 heap0
            Just ((y, yn), _) -> case compare x y of
                EQ -> heap0
                GT -> heap0
                LT ->
                    let heap1 = Map.insertWith (+) x 1 heap0 in
                    if yn > 1
                        then Map.insert y (yn - 1) heap1
                        else Map.delete y heap1

So, we get to the main trick I wanted to talk about: how do we benchmark this, and can we add unit tests to confirm these benchmark results in CI? Benchmark execution times are very fickle. Instruction counting is awesome but perhaps a little overkill.

Instead, we can just count the number of comparisons.

Counting comparisons

We can use a new type that holds a value and a number of ticks. We can increase the number of ticks, and also read the ticks that have occurred.

data Ticks a = Ticks {ref :: !(IORef Int), unTicks :: !a}

mkTicks :: a -> IO (Ticks a)
mkTicks x = Ticks <$> IORef.newIORef 0 <*> pure x

tick :: Ticks a -> IO ()
tick t = IORef.atomicModifyIORef' (ref t) $ \i -> (i + 1, ())

ticks :: Ticks a -> IO Int
ticks = IORef.readIORef . ref

smallestN has an Ord constraint, so if we want to count the number of comparisons we’ll want to do that for both == and compare.

instance Eq a => Eq (Ticks a) where
    (==) = tick2 (==)

instance Ord a => Ord (Ticks a) where
    compare = tick2 compare

The actual ticking code goes in tick2, which applies a binary operation and increases the counters of both arguments. We need unsafePerformIO for that but it’s fine since this lives only in our testing code and not our actual smallestN implementation.

tick2 :: (a -> a -> b) -> Ticks a -> Ticks a -> b
tick2 f t1 t2 = unsafePerformIO $ do
    tick t1
    tick t2
    pure $ f (unTicks t1) (unTicks t2)
{-# NOINLINE tick2 #-}


Let’s add some benchmarking that prints an ad-hoc CSV:

main :: IO ()
main = do
    let listSize = 100000
        impls = [smallestN_strict, smallestN_lazy, smallestN_smart]
    forM_ [50, 100 .. 2000] $ \sampleSize -> do
        l <- replicateM listSize randomIO :: IO [Int]
        (nticks, results) <- fmap unzip $ forM impls $ \f -> do
            l1 <- traverse mkTicks l
            let !r1 = sum . map unTicks $ f sampleSize l1
            t1 <- sum <$> traverse ticks l1
            pure (t1, r1)
        unless (equal results) . fail $
            "Different results: " ++ show results
        putStrLn . intercalate "," . map show $ sampleSize : nticks

Plug that CSV into a spreadsheet and we get this graph. What conclusions can we draw?

Clearly, both the lazy version as well as the “smart” version are able to avoid a large number of comparisons. Let’s remove the strict version so we can zoom in.

What does this mean?

  • If the sampleSize is small, the heap implementation does fewer comparisons. This makes sense: even if we treat sort as a black box, and don’t look at its implementation, we can assume that it is not optimally lazy; so it will always sort “a bit too much”.

  • As sampleSize gets bigger, the insertion into the bigger and bigger heap starts to matter more and more and eventually the naive lazy implementation is faster!

  • Laziness is awesome and take N . sort is absolutely the first implementation you should write, even if you replace it with a more efficient version later.

  • Counting the number of calls like this is very easy to do in a test suite, and it doesn’t pollute the application code: we can patch the counting in through a typeclass (Ord in this case).
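That last point can be made concrete with a self-contained sketch: a simplified, non-atomic variant of the Ticks machinery, with hypothetical names, that counts comparisons made by sort through the Ord instance alone:

```haskell
import Data.IORef (IORef, modifyIORef', newIORef, readIORef)
import Data.List (sort)
import System.IO.Unsafe (unsafePerformIO)

-- A value paired with a mutable comparison counter.
data Counted a = Counted (IORef Int) a

-- Tick both counters, then delegate to the underlying operation.
tick2 :: (a -> a -> b) -> Counted a -> Counted a -> b
tick2 f (Counted r1 x) (Counted r2 y) = unsafePerformIO $ do
    modifyIORef' r1 (+ 1)
    modifyIORef' r2 (+ 1)
    pure (f x y)
{-# NOINLINE tick2 #-}

instance Eq a => Eq (Counted a) where
    (==) = tick2 (==)

instance Ord a => Ord (Counted a) where
    compare = tick2 compare

-- Run a list function, force its result, and report the total ticks.
countComparisons :: ([Counted Int] -> [Counted Int]) -> [Int] -> IO Int
countComparisons f xs = do
    counted <- mapM (\x -> (`Counted` x) <$> newIORef 0) xs
    let forced = sum [x | Counted _ x <- f counted]
    forced `seq` pure ()
    sum <$> mapM (\(Counted r _) -> readIORef r) counted

main :: IO ()
main = do
    full <- countComparisons sort [10, 9 .. 1]
    lazy <- countComparisons (take 2 . sort) [10, 9 .. 1]
    -- The lazy prefix can never demand more comparisons than a full sort.
    print (lazy <= full)
```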

Can we say something about the complexity?

  • The complexity of smallestN_smart is basically inserting into a heap listSize times. This gives us O(listSize * log(sampleSize)).

    That is of course the worst case complexity, which only occurs in the special case where we need to insert into the heap at each step. That’s only true when the list is sorted, so for a random list the average complexity will be a lot better.

  • The complexity of smallestN_lazy is far harder to reason about. Intuitively, and with the information that Data.List.sort is a merge sort, I came to something like O(listSize * max(sampleSize, log(listSize))). I’m not sure if this is correct, and the case with a random list seems to be faster.

    I would be very interested in knowing the actual complexity of the lazy version, so if you have any insights, be sure to let me know!

    Update: Edward Kmett corrected me: the complexity of smallestN_lazy is actually O(listSize * min(sampleSize, listSize)), with O(listSize * min(sampleSize, log(listSize))) in expectation for a random list.


Helper function: check if all elements in a list are equal.

equal :: Eq a => [a] -> Bool
equal (x : y : zs) = x == y && equal (y : zs)
equal _            = True

by Jasper Van der Jeugt at September 17, 2020 12:00 AM

September 16, 2020

Tweag I/O

Implicit Dependencies in Build Systems

In making a build system for your software, you codified the dependencies between its parts. But, did you account for implicit software dependencies, like system libraries and compiler toolchains?

Implicit dependencies give rise to the biggest and most common problem with software builds: the lack of hermeticity. Without hermetic builds, reproducibility and cacheability are lost.

This post motivates the desire for reproducibility and cacheability, and explains how we achieve hermetic, reproducible, highly cacheable builds by taking control of implicit dependencies.


Consider a developer newly approaching a code repository. After cloning the repo, the developer must install a long list of “build requirements” and plod through multiple steps of “setup”, only to find that, yes indeed, the build fails. Yet, it worked just fine for their colleague! The developer, typically not expert in build tooling, must debug the mysterious failure not of their making. This is bad for morale and for productivity.

This happens because the build is not reproducible.

One very common reason for the failure is that the compiler toolchain on the developer’s system is different from that of the colleague. This happens even with build systems that use sophisticated build software, like Bazel. Bazel implicitly uses whatever system libraries and compilers are currently installed in the developer’s environment.

A common workaround is to provide developers with a Docker image equipped with a certain compiler toolchain and system libraries, and then to mandate that the Bazel build occurs in that context.

That solution has a number of drawbacks. First, if the developer is using macOS, the virtualized build context runs substantially slower. Second, the Bazel build cache, developer secrets, and the source code remain outside of the image and this adds complexity to the Docker invocation. Third, the Docker image must be rebuilt and redistributed as dependencies change and that’s extra maintenance. Fourth, and this is the biggest issue, Docker image builds are themselves not reproducible - they nearly always rely on some external state that does not remain constant across build invocations, and that means the build can fail for reasons unrelated to the developer’s code.

A better solution is to use Nix to supply the compiler toolchain and system library dependencies. Nix is a software package management system somewhat like Debian’s APT or macOS’s Homebrew. Nix goes much farther to help developers control their environments. It is unsurpassed when it comes to reproducible builds of software packages.

Nix facilitates use of the Nixpkgs package set, one of the largest and most frequently updated package sets available. It provides build instructions that work on both Linux and macOS, and developers can easily pin any software package at an exact version.

Learn more about using Nix with Bazel, here.


Not only should builds be reproducible, but they should also be fast. Fast builds are achieved by caching intermediate build results. Cache entries are keyed based on the precise dependencies as well as the build instructions that produce the entries. Builds will only benefit from a (shared, distributed) cache when they have matching dependencies. Otherwise, cache keys (which depend on the precise dependencies) will be different, and there will be cache misses. This means that the developer will have to rebuild targets locally. These unnecessary local rebuilds slow development.

The solution is to make the implicit dependencies into explicit ones, again using Nix, making sure to configure and use a shared Nix cache.

Learn more about configuring a shared Bazel cache, here.


It is important to eliminate implicit dependencies in your build system in order to retain build reproducibility and cacheability. Identify Nix packages that can replace the implicit dependencies of your Bazel build and use rules_nixpkgs to declare them as explicit dependencies. That will yield a fast, correct, hermetic build.

September 16, 2020 12:00 AM

FP Complete

Where Rust fits in your organization

Rust is a relatively new and promising language that offers improvements in software in terms of safety and speed. We'll cover if adopting Rust into your organization makes sense and where you would want to add it to an existing software stack.

Advantages of Rust


Rust was originally created by Mozilla in order to replace C++ in the Firefox browser with a safer alternative. C++ is not a memory safe language, and for Mozilla memory safety issues were the main culprit for numerous bugs and security vulnerabilities in the Firefox browser.

To replace it Mozilla needed a language that would not require a runtime or a garbage collector. No language existed at that time which reasonably met those requirements, so instead Mozilla worked to implement their own language. Out of that endeavor sprung Rust.

Adoption and use beyond Mozilla

Since its creation the language has gained widespread adoption and use far beyond Mozilla and the Firefox browser. This is not surprising, as the language is generally considered to be superbly well designed, adopting many of the programming language advances made in the last 20 years. Add to that, it's incredibly fast: on the same level as idiomatic C and C++ code.

Language Design

Another reason for its popularity and growing use is that Rust doesn't re-implement bug-causing language design choices.

With Rust, errors induced by missing null checking and poor error handling, as well as other classes of coding errors, are ruled out by the design of the language and the strong type checks by the Rust compiler.

For example instead of allowing for things to be null or nil, Rust has enum types. Using these a Rust programmer can handle failure cases in a reasonable and safe way with useful enum types like Option and Result.

Compare this to a language like Go, which doesn't provide this and instead retains the null pointer. Doing so essentially creates a dangerous escape hatch out of the type system that infects every type in the language. As a result, a Go programmer can easily forget to check for null and overlook cases where a null value could be returned.

So if you have a Python 2 code base and you're trying to decide whether to re-implement it in Go, use Rust instead!

Rust in the wild

Rust Adoption Success Stories

In 2020 Rust was once again (for 5 years running!) the most loved programming language according to the Stack Overflow developer survey.

Just because software developers love a language, though, doesn't mean adopting it will be a success for your organization.

Some of the best success stories for companies that have adopted Rust come from those that isolated some small but critical piece of their software and re-implemented it in Rust.

In a large organization, Rust is extremely useful in a scenario like this where a small but rate limiting piece of the software stack can be re-written in Rust. This gives the organization the benefits of adopting Rust in terms of performant, fast software but without requiring them to adopt the language across the board. And because Rust doesn't bring its own competing runtime and garbage collector, it fits this role phenomenally well.

Large Companies that count themselves as Rustaceans

Large companies like Microsoft now expound on Rust being the future of safe software development and have adopted using it. Other companies like Amazon have chosen Rust more and more for new critical pieces of cloud infrastructure software.

Apple, Google, Facebook, Cloudflare, and Dropbox (to name a few) also all now count themselves as Rust adopters.

Cost and Tradeoffs of Rust

Fighting the Rust Compiler

One of the key reasons to use Rust is to limit (or completely eliminate) entire classes of runtime bugs and errors. The drawback is that with Rust's strong type system and compile time checks, you will end up seeing a fair bit more compile time errors with your code. Some developers find this unnerving and become frustrated. This is especially true if they're used to less safe languages (like Javascript or C++) that ignore certain categories of programming mistakes at compile time and leave them as surprises when the software is run.

Some organizations are okay with that trade-off and the associated cost of discovering errors in production. In these scenarios, it may be the case that the code being written is not terribly critical, and shipping buggy code to production is tolerable (to a certain degree).

Development Time

Rust also brings with it a certain cost in terms of the time it takes to iterate on and develop. This is something associated with all compiled languages and it's not exclusive to Rust, but it's worth considering. Rust might not be a good fit if your organization's projects consist of relatively simple codebases where the added compile time is not worth it.

Is Rust Right for Your Organization?

Rust is well suited to situations where having performant, resource efficient code makes a huge difference for the larger overall product. If your organization could benefit from isolating critical pieces of its software stack that meet this description, then you should consider adopting and using Rust. The unique qualities of Rust mean that you don't need to adopt Rust across your entire organization to see a meaningful difference.

In addition to that, Rust is seeing major adoption outside its original target use case as a systems language. More and more it's being used for web servers, web dev via Web Assembly, game development, and general purpose programming uses. Rust has become a full-stack language with a huge range of supported use cases.

If you'd like to know more about Rust and how adopting it could make a difference in your organization, then please reach out to FP Complete! If you have a Rust project you want to get started, or if you would like Rust training for your team, FP Complete can help.

September 16, 2020 12:00 AM

September 15, 2020

ERDI Gergo

A "very typed" container for representing microcode

I've been thinking a bit about describing microcode lately. My motivation was the Intel 8080-compatible CPU I've been building for my upcoming Clash book. As with everything else for that book, the challenge is not in getting it to work — rather, it is in writing the code as close as possible to the way you would want to explain it to another person.

So in the context of a microprocessor as simple as the Intel 8080 and using synchronous RAM, I think of the microcode as a sequence of steps, where each step consists of an internal CPU state transition, and a memory read or write request. For example, the machine instruction 0x34 (mnemonic INR M) increments by one the byte pointed to by the register pair HL. In my core, the micro-architecture has an 8-bit value- and a 16-bit address-register; the latter can be used for memory addressing. To use something else for addressing, you need to load it into the address buffer first. So the steps to implement INR M are:

  1. Get value of HL register pair into the address buffer
  2. Indirect read via the address buffer into the value buffer
  3. Replace value buffer's contents with its increment
  4. Update the status register (flags like "was the latest value zero")
  5. Indirect write

However, memory access happens on the transition between cycles, so the final write will not be its own step; rather, it happens as the postamble of step 4. Similarly, the correct address will have to be put on the address pins in the preamble of step 2 for the load to work out:

  1. Get HL into address buffer
  2. Set address to address buffer's contents for reading
  3. Store read value from data-in into value buffer
  4. Increment value buffer
  5. Update status register
  6. Set address to address buffer's contents for writing

What makes this tricky is that on one hand, we want to describe preambles as part of their respective step, but of course for the implementation it is too late to process them when we get to that step. So I decided to write out the microcode as a sequence of triplets, corresponding to the preamble, the state transition, and the postamble, and then transform it into a format where preambles are attached to the previous step:

[ (Nothing,       Get2 rHL,          Nothing)
, (Just Indirect, ReadMem,           Nothing)
, (Nothing,       ALU ADD Const0x01, Nothing)
, (Nothing,       UpdateFlags,       Just Indirect)
]

Here, Indirect addressing means setting the address pins from the address buffer (as opposed to, e.g. the program counter); if it is in the postamble (i.e. write) position, it also means the write-request pin should be asserted.

So this is what the microcode developer writes, but then we can transform it into a format that consists of a state transition paired with the addressing:

[ (Get2 rHL,          Just (Left Indirect))
, (ReadMem,           Nothing)
, (ALU ADD Const0x01, Nothing)
, (UpdateFlags,       Just (Right Indirect))
]

So we're done, without having done anything interesting enough to warrant a blog post.

Or are we?

Disallowing memory addressing conflicts

Note that in the format we can actually execute, the addressing at each step is either a Left read address, or a Right write address (or Nothing at all). But what if we had two subsequent micro-steps, where the first one has a write request in its postamble, and the second one has a read request in its preamble? We are describing a CPU more than 40 years old, it is to be connected to single-port RAM, so we can't do read and write at the same time. This constraint is correctly captured by the Maybe (Either Read Write) type of memory requests in the normalized form, but it is not enforced by our naïve [(Maybe Read, Transition, Maybe Write)] type for what the microcode developer writes.

So this is what I set out to solve: to give an API for writing microcode that has self-contained steps including the read addressing, but still statically disallows conflicting writes and reads from subsequent steps. We start by going full Richard Eisenberg and lifting the memory addressing directives to the type level using singletons. While we're at it, let's also turn on Haskell 98 mode:

{-# LANGUAGE DataKinds, PolyKinds, ConstraintKinds, GADTs, FlexibleContexts #-}
{-# LANGUAGE TypeOperators, TypeFamilies, TypeApplications, ScopedTypeVariables #-}
{-# LANGUAGE StandaloneDeriving, DeriveFunctor #-}

data Step (pre :: Maybe a) (post :: Maybe b) t where
    Step :: Sing pre -> t -> Sing post -> Step pre post t
deriving instance Functor (Step pre post)

The plan, then, is to do enough type-level magic to only allow neighbouring Steps if at most one of the first post- and the second preamble is a type-level Just index.

The operations we want to support on microcode fragments are cons-ing a new Step and appending fragments. For the first one, we need to check that the postamble of the new Step is compatible with the first preamble of the existing fragment; for the latter, we need the same check between the last postamble of the first fragment and the first preamble of the second fragment. First, let's codify what "compatible" means here:

type family Combine (post :: Maybe b) (pre :: Maybe a) :: Maybe (Either a b) where
    Combine Nothing Nothing = Nothing
    Combine (Just post) Nothing = Just (Right post)
    Combine Nothing (Just pre) = Just (Left pre)

Importantly, there is no clause for Combine (Just post) (Just pre).
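For intuition, here is the same decision table as an ordinary term-level function (a hypothetical helper, not part of the post's code); the fourth, conflicting case is exactly the one the type family leaves undefined:

```haskell
-- Merge a step's write postamble with the next step's read preamble.
-- Mirrors the Combine type family clause by clause.
combineTerm :: Maybe write -> Maybe read -> Maybe (Either read write)
combineTerm Nothing  Nothing  = Nothing
combineTerm (Just w) Nothing  = Just (Right w)
combineTerm Nothing  (Just r) = Just (Left r)
combineTerm (Just _) (Just _) = error "conflicting read and write"
```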

Getting dizzy with the thin air of the type level? Let's leave ourselves a thread leading back to the term level:

combine
    :: forall a b (post :: Maybe b) (pre :: Maybe a).
       (SingKind a, SingKind b, SingI (Combine post pre))
    => Sing post -> Sing pre -> Demote (KindOf (Combine post pre))
combine _ _ = demote @(Combine post pre)

(This post is not sponsored by Singpost BTW).

Cons- and append-able fragments

For the actual fragments, we can store them internally almost in the normalized format, i.e. as a term-level list of (Maybe a, [(t, Maybe (Either a b))]). Almost, but not quite, because the first a and the last b need to appear in the index, to be able to give a restricted type to cons and append. So instead of storing them in the list proper, we will store them as separate singletons:

data Ends a b
    = Empty
    | NonEmpty (Maybe a) (Maybe b)

data Amble (ends :: Ends a b) t where
    End :: Amble Empty t
    More
        :: forall (a0 :: Maybe a) (bn :: Maybe b) n t. ()
        => Sing a0
        -> [(t, Demote (Maybe (Either a b)))]
        -> t
        -> Sing bn
        -> Amble (NonEmpty a0 bn) t
deriving instance Functor (Amble ends)      

Note that we need a special Empty index value for End instead of just NonEmpty Nothing Nothing, because starting with an empty Amble, the first cons needs to change both the front-end preamble and the back-end postamble, whereas later cons operators should only change the front-end.

type family Cons (b1 :: Maybe b) (ends :: Ends a b) where
    Cons b1 Empty = b1
    Cons b1 (NonEmpty a1 bn) = bn

We can now try writing the term-level cons. The first cons is easy, because there is no existing front-end to check compatibility with:

cons
    :: forall (a0 :: Maybe a) b1 (ends :: Ends a b) t. ()
    => Step a0 b1 t -> Amble ends t -> Amble (NonEmpty a0 (Cons b1 ends)) t
cons (Step a0 x b1) End = More a0 [] x b1
cons (Step a0 x b1) (More a1 xs xn bn) = More a0 ((x, _):xs) xn bn

We get into trouble when trying to fill in the hole in the cons to a non-empty Amble. And we should be, because nowhere in the type of cons so far have we ensured that b1 is compatible with the front-end of ends. We will have to use another type family for that, to pattern-match on Empty and NonEmpty ends:

type family CanCons (b1 :: Maybe b) (ends :: Ends a b) :: Constraint where
    CanCons b1 Empty = ()
    CanCons (b1 :: Maybe b) (NonEmpty a1 bn :: Ends a b) =
        (SingKind a, SingKind b, SingI (Combine b1 a1))

Unsurprisingly, the constraints needed to be able to cons are exactly what we need to fill the hole with the term-level value of combine b1 a1:

cons
    :: forall (a0 :: Maybe a) b1 (ends :: Ends a b) t. (CanCons b1 ends)
    => Step a0 b1 t -> Amble ends t -> Amble (NonEmpty a0 (Cons b1 ends)) t
cons (Step a0 x b1) End = More a0 [] x b1
cons (Step a0 x b1) (More a1 xs xn bn) = More a0 ((x, combine b1 a1):xs) xn bn

Now we are cooking with gas: we can re-use this idea to implement append by ensuring we CanCons the first fragment's back-end onto the second fragment:

type family CanAppend (ends1 :: Ends a b) (ends2 :: Ends a b) :: Constraint where
    CanAppend Empty ends2 = ()
    CanAppend (NonEmpty a1 bn) ends2 = CanCons bn ends2

type family Append (ends1 :: Ends a b) (ends2 :: Ends a b) where
    Append Empty ends2 = ends2
    Append ends1 Empty = ends1
    Append (NonEmpty a0 bn) (NonEmpty an bm) = NonEmpty a0 bm

append :: (CanAppend ends1 ends2) => Amble ends1 t -> Amble ends2 t -> Amble (Append ends1 ends2) t
append End ys = ys
append (More a0 xs xn bn) End = More a0 xs xn bn
append (More a0 xs xn bn) (More an ys ym bm) = More a0 (xs ++ [(xn, combine bn an)] ++ ys) ym bm

We finish off the implementation by writing the translation into the normalized format. Since the More constructor already contains almost-normalized form, we just need to take care to snoc the final element onto the result:

stepsOf
    :: forall (ends :: Ends a b) t. (SingKind a, SingKind b)
    => Amble ends t
    -> (Maybe (Demote a), [(t, Maybe (Demote (Either a b)))])
stepsOf End = (Nothing, [])
stepsOf (More a0 xs xn bn) = (fromSing a0, xs ++ [(xn, Right <$> fromSing bn)])

Putting a bow on it

What we have so far works, but there are a couple of straightforward improvements that would be a shame not to implement.

Nicer way to take steps

As written, you would have to use Step like this:

Step (sing @Nothing) UpdateFlags (sing @(Just Indirect))      

All this singing noise would be more annoying than the Eurovision Song Contest, so I wanted to avoid it. The idea is to turn those Sing-typed arguments into just type-level arguments; then do some horrible RankNTypes magic to keep the parameter order. Prenex? What is that?

{-# LANGUAGE RankNTypes #-}
step :: forall pre. (SingI pre) => forall t. t -> forall post. (SingI post) => Step pre post t
step x = Step sing x sing

So now we will be able to write code like step @Nothing UpdateFlags @(Just Indirect) and get a Step type inferred that has the preamble and the postamble appearing in the indices.

Custom type error message

Suppose we make a mistake in our microcode, and accidentally want to write after one step and read before the next (using >:> for infix cons):

step @Nothing         (Get2 rHL)                       @(Just IncrPC) >:>
step @(Just Indirect) ReadMem                          @Nothing >:>
step @Nothing         (Compute Const01 ADD KeepC SetA) @Nothing >:>
step @Nothing         UpdateFlags                      @(Just Indirect) >:>

This is rejected by the type checker, of course; however, the error message is not as informative as it could be, as it faults the missing SingI instance for a stuck type family application:

• No instance for (SingI (Combine ('Just 'IncrPC) ('Just 'Indirect)))
arising from a use of ‘>:>’

With GHC's custom type errors feature, we can add a fourth clause to our Combine type family. Unfortunately, this requires turning on UndecidableInstances for now:

{-# LANGUAGE UndecidableInstances #-}      
import GHC.TypeLits

type Conflict post pre =
    Text "Conflict between postamble" :$$: Text "  " :<>: ShowType post :$$:
    Text "and next preamble" :$$: Text "  " :<>: ShowType pre

type family Combine (post :: Maybe b) (pre :: Maybe a) :: Maybe (Either a b) where
    Combine Nothing Nothing = Nothing
    Combine (Just post) Nothing = Just (Right post)
    Combine Nothing (Just pre) = Just (Left pre)
    Combine (Just post) (Just pre) = TypeError (Conflict post pre)

With this, the error message changes to:

• Conflict between postamble 'IncrPC and next preamble 'Indirect

Much nicer!

Tracking fragment length

The final difference between what we have described here and the code I use for real is that in the real version, Amble also tracks its length in an index. This is needed because the CPU core is used not just for emulation, but also FPGA synthesis; and in real hardware, we can't just store lists of unbounded size in the microcode ROM. So instead, microcode is described as a length-indexed Amble n ends t, and then normalized into a Vec n instead of a list. Each instruction can be at most 10 steps long; everything is then ultimately normalized into a uniformly typed Vec 10 by padding it with "go start fetching next instruction" micro-ops.

The full implementation

Find the full code on GitHub, next to the rest of the Intel 8080 core.

September 15, 2020 06:32 PM

September 14, 2020

Monday Morning Haskell

Rust Web Series Complete!


We're taking a quick breather this week from new content for an announcement. Our recently concluded Rust Web series now has a permanent spot on the advanced page of our website. You can take a look at the series page here! Here's a quick summary of the series:

  1. Part 1: Postgres - In the first part, we learn about a basic library to enable integration with a Postgresql Database.
  2. Part 2: Diesel - Next up, we get a little more formal with our database mechanics. We use the Diesel library to provide a schema for our database application.
  3. Part 3: Rocket - In part 3, we take the next step and start making a web server! We'll learn the basics of the Rocket server library!
  4. Part 4: CRUD Server - What do we do once we have a database and server library? Combine them of course! In this part, we'll make a CRUD server that can access our database elements using Diesel and Rocket.
  5. Part 5: Authentication - If your server will actually serve real users, you'll need authentication at some point. We'll see the different mechanisms we can use with Rocket for securing our endpoints.
  6. Part 6: Front-end Templating - If you're serving a full front-end web app, you'll need some way to customize the HTML. In the last part of the series, we'll see how Rocket makes this easy!

The best part is that you can find all the code for the series on our Github Repo! So be sure to take a look there. And if you're still new to Rust, you can also get your feet wet first with our Beginners Series.

In other exciting news, we'll be trying a completely new kind of content in the next couple weeks. I've written a bit in the past about using different IDEs like Atom and IntelliJ to write Haskell. I'd like to revisit these ideas to give a clearer idea of how to make our lives easier when writing code. But instead of writing articles, I'll be making a few videos to showcase how these work! I hope that a visual display of the IDEs will help make the content more clear.

by James Bowen at September 14, 2020 02:30 PM

FP Complete

Avoiding duplicating strings in Rust

Based on actual events.

Let's say you've got a blog. The blog has a bunch of posts. Each post has a title and a set of tags. The metadata for these posts is all contained in TOML files in a single directory. (If you use Zola, you're pretty close to that.) And now you need to generate a CSV file showing a matrix of blog posts and their tags. Seems like a great job for Rust!

In this post, we're going to:

  • Explore how we'd solve this (fairly simple) problem
  • Investigate how Rust's types tell us a lot about memory usage
  • Play with some nice and not-so-nice ways to optimize our program


Program behavior

We've got a bunch of TOML files sitting in the posts directory. Here are some example files:

# devops-for-developers.toml
title = "DevOps for (Skeptical) Developers"
tags = ["dev", "devops"]

# rust-devops.toml
title = "Rust with DevOps"
tags = ["devops", "rust"]

We want to create a CSV file that looks like this:

DevOps for (Skeptical) Developers,true,true,false,false
Rust with DevOps,false,true,true,false
Serverless Rust using WASM and Cloudflare,false,true,true,false
Streaming UTF-8 in Haskell and Rust,false,false,true,true

To make this happen, we need to:

  • Iterate through the files in the posts directory
  • Load and parse each TOML file
  • Collect a set of all tags present in all posts
  • Collect the parsed post information
  • Create a CSV file from that information

Not too bad, right?


You should make sure you've installed the Rust tools. Then you can create a new empty project with cargo new tagcsv.

Later on, we're going to play with some unstable language features, so let's opt into a nightly version of the compiler. To do this, create a rust-toolchain file containing:


Then add the following dependencies to your Cargo.toml file:

csv = "1.1.3"
serde = "1.0.115"
serde_derive = "1.0.115"
toml = "0.5.6"

OK, now we can finally work on some code!

First version

We're going to use the toml crate to parse our metadata files. toml is built on top of serde, and we can conveniently use serde_derive to automatically derive a Deserialize implementation for a struct that represents that metadata. So we'll start off our program with:

use serde_derive::Deserialize;
use std::collections::HashSet;

#[derive(Deserialize)]
struct Post {
    title: String,
    tags: HashSet<String>,
}

Next, we'll define our main function to load the data:

fn main() -> Result<(), std::io::Error> {
    // Collect all tags across all of the posts
    let mut all_tags: HashSet<String> = HashSet::new();
    // And collect the individual posts
    let mut posts: Vec<Post> = Vec::new();

    // Read in the files in the posts directory
    let dir = std::fs::read_dir("posts")?;
    for entry in dir {
        // Error handling
        let entry = entry?;
        // Read the file contents as a String
        let contents = std::fs::read_to_string(entry.path())?;
        // Parse the contents with the toml crate
        let post: Post = toml::from_str(&contents)?;
        // Add all of the tags to the all_tags set
        for tag in &post.tags {
            all_tags.insert(tag.clone());
        }
        // Update the Vec of posts
        posts.push(post);
    }

    // Generate the CSV output
    gen_csv(&all_tags, &posts)?;

    Ok(())
}

And finally, let's define our gen_csv function to take the set of tags and the Vec of posts and generate the output file:

fn gen_csv(all_tags: &HashSet<String>, posts: &[Post]) -> Result<(), std::io::Error> {
    // Open the file for output
    let mut writer = csv::Writer::from_path("tag-matrix.csv")?;

    // Generate the header, with the word "Title" and then all of the tags
    let mut header = vec!["Title"];
    for tag in all_tags.iter() {
        header.push(tag.as_str());
    }
    writer.write_record(header)?;

    // Print out a separate row for each post
    for post in posts {
        // Create a record with the post title...
        let mut record = vec![post.title.as_str()];
        for tag in all_tags {
            // and then a true or false for each tag name
            let field = if post.tags.contains(tag) {
                "true"
            } else {
                "false"
            };
            record.push(field);
        }
        writer.write_record(record)?;
    }

    writer.flush()?;
    Ok(())
}
Side note: it would be slightly nicer to alphabetize the set of tags, which you can do by collecting all of the tags into a Vec and then sorting it. I had that previously, but removed it in the code above to reduce incidental noise to the example. If you feel like having fun, try adding that back.
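If you want to try that yourself, here's a minimal sketch of one way to do it; the `sorted_tags` helper is my own name for illustration, not something from the post:

```rust
use std::collections::HashSet;

// Collect the tags into a Vec and sort it, so the CSV columns come out
// in alphabetical order.
fn sorted_tags(all_tags: &HashSet<String>) -> Vec<&str> {
    let mut tags: Vec<&str> = all_tags.iter().map(|s| s.as_str()).collect();
    tags.sort();
    tags
}

fn main() {
    let mut all_tags = HashSet::new();
    all_tags.insert("rust".to_string());
    all_tags.insert("devops".to_string());
    assert_eq!(sorted_tags(&all_tags), vec!["devops", "rust"]);
}
```

The header loop and the per-post loop would then both iterate over this Vec instead of the set directly, so they agree on the column order.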

Anyway, this program works exactly as we want, and produces a CSV file. Perfect, right?

Let the types guide you

I love type-driven programming. I love the idea that looking at the types tells you a lot about the behavior of your program. And in Rust, the types can often tell you about the memory usage of your program. I want to focus on two lines, and then prove a point with a third. Consider:

tags: HashSet<String>,


let mut all_tags: HashSet<String> = HashSet::new();

Firstly, I love the fact that the types tell us so much about expected behavior. The tags are a set: the order is unimportant, and there are no duplicates. That makes sense. We don't want to list "devops" twice in our set of all tags. And there's nothing inherently "first" or "second" about "dev" vs "rust". And we know that tags are arbitrary pieces of textual data. Awesome.

But what I really like here is that it tells us about memory usage. Each post has its own copy of each tag. So does the all_tags set. How do I know this? Easy: because that's exactly what String means. There's no possibility of data sharing, at all. If we have 200 posts tagged "dev", we will have 201 copies of the string "dev" in memory (200 for the posts, one for the all_tags set).
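You can see that lack of sharing directly in a tiny standalone experiment (my own sketch, not from the post): cloning a String copies its bytes into a fresh heap allocation:

```rust
fn main() {
    let a = String::from("dev");
    let b = a.clone();
    assert_eq!(a, b); // equal contents...
    assert_ne!(a.as_ptr(), b.as_ptr()); // ...but two separate heap buffers
}
```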

And now that we've seen it in the types, we can see evidence of it in the implementation too:

all_tags.insert(tag.clone());

That .clone() bothered me when I first wrote it. And that's what got me to look at the types, which bothered me further.

In reality, this is nothing to worry about. Even with 1,000 posts, averaging 5 tags, with each tag averaging 20 bytes, this will only take up an extra 100,000 bytes of memory. So optimizing this away is not a good use of our time. We're much better off doing something else.

But I wanted to have fun. And if you're reading this post, I think you want to continue this journey too. Onwards!


This isn't the first solution I tried. But it's the first one that worked easily. So we'll start here.

The first thing we have to change is our types. As long as we have HashSet<String>, we know for a fact that we'll have extra copies of the data. This seems like a nice use case for Rc. Rc uses reference counting to let multiple values share ownership of another value. Sounds like exactly what we want!
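As a quick sanity check of that claim, here's a tiny self-contained demonstration of the sharing behavior: cloning an Rc bumps a reference count, while both handles share one underlying String:

```rust
use std::rc::Rc;

fn main() {
    let original = Rc::new(String::from("dev"));
    let shared = Rc::clone(&original); // cheap: increments a counter, copies no bytes

    assert!(Rc::ptr_eq(&original, &shared)); // same allocation
    assert_eq!(Rc::strong_count(&original), 2); // two owners
}
```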

My approach here is to use compiler-error-driven development, and I encourage you to play along with your own copy of the code. First, let's use Rc:

use std::rc::Rc;

Next, let's change our definition of Post to use an Rc<String> instead of String:

struct Post {
    title: String,
    tags: HashSet<Rc<String>>,
}

The compiler doesn't like this very much. We can't derive Deserialize for an Rc<String>. So instead, let's make a RawPost struct for the deserializing, and then dedicate Post for holding the data with Rc<String>. In other words:

#[derive(Deserialize)]
struct RawPost {
    title: String,
    tags: HashSet<String>,
}

struct Post {
    title: String,
    tags: HashSet<Rc<String>>,
}

And then, when parsing the toml, we'll parse into a RawPost type:

let post: RawPost = toml::from_str(&contents)?;

If you're following along, you'll only have one error message at this point about posts.push(post); having a mismatch between Post and RawPost. But before we address that, let's make one more type change above. I want to make all_tags contain Rc<String>.

let mut all_tags: HashSet<Rc<String>> = HashSet::new();

OK, now we've got some nice error messages about mismatches between Rc<String> and String. This is where we have to be careful. The easiest thing to do would be to simply wrap our Strings in an Rc and end up with lots of copies of String. Let's implement the next bit incorrectly first to see what I'm talking about.

At this point in our code rewrite, we've got a RawPost, and we need to:

  • Add its tags to all_tags
  • Create a new Post value based on the RawPost
  • Add the Post to the posts Vec

Here's the simple and wasteful implementation:

let raw_post: RawPost = toml::from_str(&contents)?;

let mut post_tags: HashSet<Rc<String>> = HashSet::new();

for tag in raw_post.tags {
    let tag = Rc::new(tag);
    all_tags.insert(tag.clone());
    post_tags.insert(tag);
}

let post = Post {
    title: raw_post.title,
    tags: post_tags,
};
posts.push(post);

The problem here is that we always keep the original String from the RawPost. If that tag is already present in the all_tags set, we don't end up using the same copy.

There's an unstable method on HashSets that helps us out here. get_or_insert will try to insert a value into a HashSet. If the value is already present, it will drop the new value and return a reference to the original value. If the value isn't present, the value is added to the HashSet and we get a reference back to it. Changing our code to use that is pretty easy:

for tag in raw_post.tags {
    let tag = Rc::new(tag);
    let tag = all_tags.get_or_insert(tag);
    post_tags.insert(tag.clone());
}

We still end up with a .clone() call, but now it's a clone of an Rc, which is a cheap integer increment. No additional memory allocation required! Since this method is unstable, we also have to enable the feature by adding this at the top of your source file:

#![feature(hash_set_entry)]

And only one more change required. The signature for gen_csv is expecting a &HashSet<String>. If you change that to &HashSet<Rc<String>>, the code will compile and run correctly. Yay!
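As an aside: if you'd rather stay on stable Rust, you can approximate get_or_insert with a get followed by an insert. This `get_or_insert_rc` helper is my own stand-in for illustration, not part of the standard library:

```rust
use std::collections::HashSet;
use std::rc::Rc;

// Reuse the Rc already in the set if an equal tag exists; otherwise
// insert the new one. Either way, return a shared handle.
fn get_or_insert_rc(set: &mut HashSet<Rc<String>>, tag: Rc<String>) -> Rc<String> {
    if let Some(existing) = set.get(&tag) {
        return Rc::clone(existing);
    }
    set.insert(Rc::clone(&tag));
    tag
}

fn main() {
    let mut all_tags = HashSet::new();
    let a = get_or_insert_rc(&mut all_tags, Rc::new("dev".to_string()));
    let b = get_or_insert_rc(&mut all_tags, Rc::new("dev".to_string()));
    assert!(Rc::ptr_eq(&a, &b)); // both callers share one allocation
    assert_eq!(all_tags.len(), 1); // only one "dev" in the set
}
```

Note that unlike the nightly method, this returns an owned Rc rather than a reference, which is what lets it get by with a short-lived borrow of the set.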

In case you got lost with all of the edits above, here's the current version of


I already told you that the original HashSet<String> version of the code is likely Good Enough™ for most cases. I'll tell you that, if you're really bothered by that overhead, the HashSet<Rc<String>> version is almost certainly the right call. So we should probably just stop here and end the blog post on a nice, safe note.

But let's be bold and crazy. I don't actually like this version of the code that much, for two reasons:

  1. The Rc feels dirty here. Rc is great for weird lifetime situations with values. But in our case, we know that the all_tags set, which owns all of the tags, will always outlive the usage of the tags inside the Posts. So reference counting feels like an unnecessary overhead that obscures the situation.
  2. As demonstrated before, it's all too easy to mess up with the Rc<String> version. You can accidentally bypass all of the memory saving benefits by using a new String instead of cloning a reference to an existing one.

What I'd really like to do is to have all_tags be a HashSet<String> and own the tags themselves. And then, inside Post, I'd like to keep references to those tags. Unfortunately, this doesn't quite work. Can you foresee why? If not, don't worry, I didn't see it until the borrow checker told me how wrong I was a few times. Let's experience that joy together. And we'll do it with compiler-driven development again.

The first thing I'm going to do is remove the use std::rc::Rc; statement. That leads to our first error: Rc isn't in scope for Post. We want to keep a &str in this struct. But we have to be explicit about lifetimes when holding references in structs. So our code ends up as:

struct Post<'a> {
    title: String,
    tags: HashSet<&'a str>,
}

The next error is about the definition of all_tags in main. That's easy enough: just take out the Rc:

let mut all_tags: HashSet<String> = HashSet::new();

This is easy! Similarly, post_tags is defined as a HashSet<Rc<String>>. In this case, we want to hold &strs instead, so:

let mut post_tags: HashSet<&str> = HashSet::new();

We no longer need to use Rc::new in the for loop, or clone the Rc. So our loop simplifies down to:

for tag in raw_post.tags {
    let tag = all_tags.get_or_insert(tag);
    post_tags.insert(tag);
}

And (misleadingly), we just have one error message left: the signature for gen_csv still uses a Rc. We'll get rid of that with the new signature:

fn gen_csv(all_tags: &HashSet<String>, posts: &[Post]) -> Result<(), std::io::Error> {

And we get an (IMO confusing) error message about &str and &String not quite lining up:

error[E0277]: the trait bound `&str: std::borrow::Borrow<std::string::String>` is not satisfied
  --> src\
67 |             let field = if post.tags.contains(tag) {
   |                                      ^^^^^^^^ the trait `std::borrow::Borrow<std::string::String>` is not implemented for `&str`

But this can be solved by explicitly asking for a &str via the as_str method:

let field = if post.tags.contains(tag.as_str()) {

And you might think we're done. But this is where the "misleading" idea comes into play.

The borrow checker wins

If you've been following along, you should now see an error message on your screen that looks something like:

error[E0499]: cannot borrow `all_tags` as mutable more than once at a time
  --> src\
35 |             let tag = all_tags.get_or_insert(tag);
   |                       ^^^^^^^^ mutable borrow starts here in previous iteration of loop

error[E0502]: cannot borrow `all_tags` as immutable because it is also borrowed as mutable
  --> src\
35 |             let tag = all_tags.get_or_insert(tag);
   |                       -------- mutable borrow occurs here
46 |     gen_csv(&all_tags, &posts)?;
   |             ^^^^^^^^^  ------ mutable borrow later used here
   |             |
   |             immutable borrow occurs here

I was convinced that the borrow checker was being overly cautious here. Why would a mutable borrow of all_tags to insert a tag into the set conflict with an immutable borrow of the tags inside the set? (If you already see my error, feel free to laugh at my naivete.) I could follow why I'd violated borrow check rules. Specifically: you can't have a mutable reference and any other reference live at the same time. But I didn't see how this was actually stopping my code from segfaulting.

After a bit more thinking, it clicked. I realized that I had an invariant in my head which did not appear anywhere in my types. And therefore, the borrow checker was fully justified in saying my code was unsafe. What I realized is that I had implicitly been assuming that my mutations of the all_tags set would never delete any existing values in the set. I can look at my code and see that that's the case. However, the borrow checker doesn't play those kinds of games. It deals with types and facts. And in fact, my code was not provably correct.

So now is really time to quit, and accept the Rcs, or even just the Strings and wasted memory. We're all done. Please don't keep reading.

Time to get unsafe

OK, I lied. We're going to take one last step here. I'm not going to tell you this is a good idea. I'm not going to tell you this code is generally safe. I am going to tell you that it works in my testing, and that I refuse to commit it to the master branch of the project I'm working on.

We've got two issues:

  • We have an unstated invariant that we never delete tags from our all_tags HashSet
  • We need a mutable reference to the HashSet to insert, and that prevents taking immutable references for our tags

Let's fix this. We're going to define a new struct, called an AppendSet, which only provides the ability to insert new tags, not delete old ones.

struct AppendSet<T> {
    inner: HashSet<T>,
}

We're going to provide three methods:

  • A static method new, boring
  • A get_or_insert that behaves just like HashSet's, but only needs an immutable reference, not a mutable one
  • An inner method that returns a reference to the internal HashSet so we can reuse its Iterator interface

The first and last are really easy. get_or_insert is a bit more involved, let's just stub it out for now.

impl<T> AppendSet<T> {
    fn new() -> Self {
        AppendSet { inner: HashSet::new() }
    }

    fn get_or_insert(&self, t: T) -> &T
    where
        T: Eq + std::hash::Hash,
    {
        unimplemented!()
    }

    fn inner(&self) -> &HashSet<T> {
        &self.inner
    }
}

Next, we'll redefine all_tags as:

let all_tags: AppendSet<String> = AppendSet::new();

Note that we no longer have the mut keyword here. We never need to mutate this thing... sort of. We'll interact with it via get_or_insert, which at least claims it doesn't mutate. The only other change we have to make is in the call to gen_csv, where we want to use the inner() method:

gen_csv(all_tags.inner(), &posts)?;

And perhaps surprisingly, our code now compiles. There's only one thing left to do: implement that get_or_insert method. And this is where the dirtiness happens.

fn get_or_insert(&self, t: T) -> &T
where
    T: Eq + std::hash::Hash,
{
    let const_ptr = self as *const Self;
    let mut_ptr = const_ptr as *mut Self;
    let this = unsafe { &mut *mut_ptr };
    this.inner.get_or_insert(t)
}

That's right, unsafe baby!

This code absolutely works. I'm also fairly certain it won't generally work. We are very likely violating invariants of HashSet's interface. As one simple example, we now have the ability to change the contents of a HashSet while there is an active iterator looping through it. I haven't investigated the internals of HashSet, but I wouldn't be surprised at all to find out this breaks some invariants.

NOTE To address one of these concerns: what if we modified the inner method on AppendSet to consume the self and return a HashSet? That would definitely help us avoid accidentally violating invariants. But it also won't compile. The AppendSet itself is immutably borrowed by the Post values, and therefore we cannot move it.

So does this code work? It seems to. Will AppendSet generally work for similar problems? I have no idea. Will this code continue to work with future versions of the standard library with changes to HashSet's implementation? I have no idea. In other words: don't use this code. But it sure was fun to write.


That was certainly a fun excursion. It's a bit disappointing to end up at the ideal solution requiring unsafe to work. But the Rc version is a really nice middle ground. And even the "bad" version isn't so bad.

A theoretically better answer would be to use a data structure specifically designed for this use case. I didn't do any investigation to see if such things existed already. If you have any advice, please let me know!

Check out other FP Complete Rust information.

September 14, 2020 12:00 AM

September 13, 2020

Mark Jason Dominus

Weasel words in headlines

The front page today has this headline:

Screenshot of part of web page.  The main headline is “‘So Skeptical’: As Election Nears, Iowa Senator Under Pressure For COVID-19 Remarks”.  There is a longer subheadline underneath, which I discussed below.

It contains this annoying phrase:

The race for Joni Ernst's seat could help determine control of the Senate.

Someone has really committed to hedging.

I would have said that the race would certainly help determine control of the Senate, or that it could determine control of the Senate. The statement as written makes an extremely weak claim.

The article itself doesn't include this phrase. This is why reporters hate headline-writers.


by Mark Dominus at September 13, 2020 02:53 PM

Oleg Grenrus

A design for paths in Cabal

Posted on 2020-09-13 by Oleg Grenrus engineering

While a big part of Cabal is about interpreting your-package.cabal files, an important part of both it and cabal-install is file paths. After all, cabal-install is a build tool.

Currently (as of Cabal-3.4) the type used for all filepath needs is the infamous

type FilePath = String

One can say that all paths in the codebase are dynamically typed. It is very hard to say whether paths are absolute or relative, and if relative to what.

A solution would be to use path or paths library.

I like paths better, because it is set up to talk about relative paths to arbitrary roots, not only absolute paths.

Still, neither is good enough. Why do I say so? Because Cabal and cabal-install have to deal with three kinds of paths.

  1. Abstract paths
  2. Paths on the host system
  3. Paths on the target system

It is that simple, but path is very concretely the second kind, and paths is somewhere in between first and second, but doesn't let you differentiate them.

Abstract paths

Abstract paths are the ones written in your-package.cabal file. For example hs-source-dirs: src/. It is not a Unix path. It is not a Windows path. It is in fact something which should be interpretable as either, and also as a path inside a tarball archive. In fact, it currently has to be a common denominator of them, which means that backslashes \, i.e. Windows filepaths, aren't portable, though I suspect they work if you build on Windows.

Just thinking about types uncovers a possible bug.

If we had a

-- | An abstract path.
data APath root = ...

Then we could enforce format, for example prohibiting some (i.e. all known) special characters.

Note: abstract paths are relative. There might be some abstract root, for example PackageRoot, but its interpretation still depends on context.

The representation of APath is not important. It, however, should be some kind of text.

Paths on the host system

These are the concrete paths on your disk.

-- | A path on host (build) system
data HPath root = ...

The HPath can have different roots as well, for example CWD (for current working directory), HomeDir or Absolute. Maybe even talking about HPath PackageRoot is meaningful. My gut feeling says that we should rather provide an operation to resolve an APath PackageRoot into an HPath Absolute, given an HPath Absolute of the package root.

Also directory operations, i.e. IO operations, like listDirectory are only meaningful for HPaths. These are concrete paths.

HPaths have to be represented in the system's native way. It can still be FilePath in the first iteration, but e.g. absolute paths on Windows may start with \\?\ and use backslashes as directory separators (cf. APath, which will probably look like a POSIX path everywhere).

Paths on the target system

The third kind of paths are paths on the target system. While cross-compilation support in Cabal barely exists, having our own type for paths on the target system should help it improve.

One example is the YourPackage_Paths module. Currently it contains hardcoded paths to e.g. the data-files directory of the installed package, i.e. somewhere on the target system.

While having hardcoded absolute paths in YourPackage_Paths is a bad idea nowadays, and data-files discovery should instead be based on some relative (relocatable, using abstract APaths maybe?) scheme, having a

-- | A path on the target (run) system
data TPath root = ...

will at least show where we use (absolute) target system paths. Ideally we won't have them anywhere, if that is possible. But identifying where we have them now will help to get rid of them.

Another example is running (custom) ./Setup or tests or benchmarks. I hope that we can engineer the code in a way that executables built for the target system won't be callable directly, but will need to use a runner wrapper (which we have, but I don't know much about it). Even in the host = target (common) system case, the wrapper would just be trivial.

Note: whether TPath is a Windows path or a POSIX path will depend on run-time information, so the conversion functions will need that bit of information. You won't be able to purely convert :: APath -> TPath; we will need to pass an extra context.

Here again, better types should help guide the design process.


These are my current thoughts about how the paths will look in some future version of Cabal. Instead of one FilePath (or Path) there will be three: APath, HPath and TPath1.

As I write this down, it seems so obvious that this is how paths have to be classified. Has anyone done something like this before? Please tell me, so I can learn from your experiences.

  1. Names are subject to change, maybe SymPath (for symbolic), HostPath and TargetPath.↩︎

September 13, 2020 12:00 AM

September 11, 2020

Douglas M. Auclair (geophf)

September 2020 Haskell Problems and Solutions

by geophf at September 11, 2020 05:49 PM

Mark Jason Dominus

Historical diffusion of words for “eggplant”

In reply to my recent article about the history of words for “eggplant”, a reader, Lydia, sent me this incredible map they had made that depicts the history and the diffusion of the terms:

A map of the world, with arrows depicting the sequential adoption of different terms for eggplant, as the words mutated from language to language.  For details, see the previous post.  The map is an oval-shaped projection.  The ocean parts of the map are a dark eggplant-purple color, and an eggplant stem has been added at the eastern edge, in the Pacific Ocean.

Lydia kindly gave me permission to share their map with you. You can see the early Dravidian term vaḻutanaṅṅa in India, and then the arrows show it travelling westward across Persia and Arabia, from there to East Africa and Europe, and from there to the rest of the world, eventually making its way back to India as brinjal before setting out again on yet more voyages.

Thank you very much, Lydia! And Happy Diada Nacional de Catalunya, everyone!

by Mark Dominus at September 11, 2020 02:40 PM

A maxim for conference speakers

The only thing worse than re-writing your talk the night before is writing your talk the night before.

by Mark Dominus at September 11, 2020 02:31 PM

Michael Snoyman

Homeschool on PowerPoint

I’ve been really disappointed in the lack of computer literacy in my children’s education. I could bemoan this, but there’s no point. Instead, Miriam and I have been making a concerted effort to try and teach the kids computer literacy ourselves. When the entire country (and basically entire world) went into Coronavirus lockdown back in March, we started our own curriculum at home that included things like:

  • Todo list management
  • Email management
  • Typing practice

We encouraged the kids to play games on the computer instead of their tablets. (Minecraft was a big hit here.) And we started teaching them Rust. These things all worked, but the kids don’t really enjoy this stuff. We wanted to find something more fun and engaging.

With the new school year, the kids have some days home for remote learning with larger gaps in their schedules. So we decided to try something new, and it seems to be a success.

PowerPoint presentations

The kids are really into Minecraft right now. Earlier this week, before they went to school, we had about 30 minutes free. I brought out my computer, sat the kids down, and started asking them questions about Minecraft. As they answered, I typed it into bullets on slides. Then I showed them how to apply design ideas to make it more colorful. Then we recorded a voiceover, exported a video, and were able to upload to YouTube (unlisted of course).

This whole project took 20 minutes. It ended with the kids being on YouTube. This was enough to get them interested and hooked. Later this week, when two of the kids were home from school, I gave them a slower introduction to how PowerPoint works, and then gave them a task of making their own presentations on whatever topics they wanted. And then they happily worked on it for an hour.

From a learning standpoint, what we’d achieved was excitement about a topic and reduced resistance, while honing multiple technical and communicative skills:

  • More typing practice
  • How to organize a narrative
  • Basics of word processing (bullets, headings, etc)
  • More familiarity in general with using a computer (in place of a tablet)

These may sound modest, but the advantage of having the kids motivated to try this out makes it worth it.

Why not LibreOffice/Google Slides/reveal.js?

When I said PowerPoint above, I meant the actual desktop version of Microsoft PowerPoint. This feels a bit funny. When I was in middle school, PowerPoint was the cool technology. Then I didn’t touch PowerPoint for about 20 years, and have used things like reveal.js since.

I initially thought that we should keep the kids more agnostic on which tools they use, and not marry them immediately to one platform. Miriam and I discussed this in more depth, and realized at this point it’s more important for us to get them productive as quickly as possible, with a single tool, to keep their motivation high.

Recently at work, I’ve been on the Microsoft Office suite quite a bit. We made a move to Microsoft 365, and I’ve been making my own presentations and documents in PowerPoint and Word again. My familiarity, the maturity of the tools, the really nice features (like built in recording and design ideas), and the general adoption makes me think we made the right decision in focusing on this toolchain.

I still hope to make the kids more computer literate going forward, and hope that they don’t end up dependent on just one vendor’s tools. But I don’t want the perfect to be the enemy of the good, and I’d rather they be competent with 1 technology than 0.

Worth it?

If you’re looking for a new way to engage your kids with computer literacy, I would definitely recommend trying this out. I wouldn’t change anything of how I rolled this out. Summarized below, here’s my formula:

  • First presentation: you’re at the keyboard
    • Choose a topic you know your kids are excited about
    • Ask them questions about it (they love talking about this topic, right?)
    • Put the notes into a note taking app (e.g. OneNote/Keep), a document (e.g. Word), or just a piece of paper
    • Show them how you convert those notes into a PowerPoint deck
    • Ask them for input on choosing a theme, design ideas, inserting graphics, etc
    • Let them record a voiceover for each slide. Don’t worry if they just read out each slide verbatim
    • Export to a video and let them see that they made a cool presentation!
  • Second presentation: more instructive
    • Choose a simple topic you know a lot about
    • Put together the notes with the kids, but do it more slowly, and explain how to structure thoughts cohesively
    • Convert to slides, but do it much more slowly, and explain all the details (different slide formats, how to indent tabs, etc)
    • Do the same thing with theme, design ideas, etc
  • Third presentation: they’re in control
    • Help them choose a topic they care about
    • Tell them to use a piece of paper to jot down all their ideas (they can use an app if they really want)
    • Help them structure these notes into a narrative
    • Let them convert that into slides. They’re at the computer this time, but be nearby to answer questions
    • Do not inhibit their creativity here. Let them use every gaudy color scheme, crazy transition, obnoxious audio clip, etc.

I hope this is helpful for others. If you have success (or failure) with this with your kids, please let us know, we’re really interested in how other people are approaching computer literacy for their kids!

September 11, 2020 10:15 AM

September 10, 2020

Edward Z. Yang

Let’s talk about the PyTorch dispatcher

If this is your first time reading about PyTorch internals, you might want to check out my PyTorch internals post first. In this post, I want to talk about one particular part of PyTorch's internals: the dispatcher. At first glance, the dispatcher is just a glorified if statement: based on some information about the tensor inputs, decide what piece of code should be called. So why should we care about the dispatcher?

Well, in PyTorch, a lot of things go into making an operator work. There is the kernel that does the actual work, of course; but then there is support for reverse mode automatic differentiation, e.g., the bits that make loss.backward() work. Oh, and if you run your code under torch.jit.trace, you can get a trace of all the operations that were run. Did I mention that if you run these operations on the inside of a vmap call, the batching behavior for the operators is different? There are so many different ways to interpret PyTorch operators differently, and if we tried to handle all of them inside a single function named add, our implementation code would quickly devolve into an unmaintainable mess. The dispatcher is not just an if statement: it is a really important abstraction for how we structure our code internally in PyTorch... and it has to do so without degrading the performance of PyTorch (too much, anyway).

At the end of this post, our goal will be to understand how all the different parts of this picture fit together. This post will proceed in three parts.

First, we'll talk about the dispatcher itself. What is the dispatcher, how does it decide what kernel to call? Second, we'll talk about the operator registration API, which is the interface by which we register kernels into the dispatcher. Finally, we'll talk about boxing and unboxing, which are a cross-cutting feature in the dispatcher that let you write code once, and then have it work on all kernels.

What is the dispatcher?

OK, so what is the dispatcher? For every operator, the dispatcher maintains a table of function pointers which provide implementations for each dispatch key, which corresponds roughly to one of the cross-cutting concerns in PyTorch. In the diagram above, you can see there are dispatch entries in this table for backends (CPU, CUDA, XLA) as well as higher-level concepts like autograd and tracing. The dispatcher's job is to compute a dispatch key, based on the input tensors and some other stuff (more on this shortly), and then do an indirect jump to the function pointed to by the table.

Those of you who are familiar with C++ may observe that this table of function pointers is very similar to virtual tables in C++. In C++, virtual methods on objects are implemented by associating every object with a pointer to a virtual table that contains implementations for each virtual method on the object in question. In PyTorch, we essentially reimplemented virtual tables, but with some differences:

  • Dispatch tables are allocated per operator, whereas vtables are allocated per class. This means that we can extend the set of supported operators simply by allocating a new dispatch table, in contrast to regular objects where you can extend from a class, but you can't easily add virtual methods. Unlike normal object oriented systems, in PyTorch most of the extensibility lies in defining new operators (rather than new subclasses), so this tradeoff makes sense. Dispatch keys are not openly extensible, and we generally expect extensions who want to allocate themselves a new dispatch key to submit a patch to PyTorch core to add their dispatch key.
  • More on this in the next slide, but the computation of our dispatch key considers all arguments to the operator (multiple dispatch) as well as thread-local state (TLS). This is different from virtual tables, where only the first object (this) matters.
  • Finally, the dispatcher supports boxing and unboxing as part of the calling convention for operators. More on this in the last part of the talk!

Fun historical note: we used to use virtual methods to implement dynamic dispatch, and reimplemented them when we realized we needed more juice than virtual tables could give us.

So how exactly do we compute the dispatch key which we use to index into the dispatch table? The basic abstraction we use for computing what dispatch key to use is a dispatch key set, which is a bitset over dispatch keys. The general concept is that we union together dispatch key sets from various sources (and in some case mask out some dispatch keys), giving us a final dispatch key set. Then, we pick the first dispatch key in the set (dispatch keys are implicitly ordered by some priority) and that is where we should dispatch to. What are these sources?

  • Each tensor input contributes a dispatch key set of all dispatch keys that were on the tensor (intuitively, these dispatch keys will be things like CPU, telling us that the tensor in question is a CPU tensor and should be handled by the CPU handler on the dispatch table)
  • We also have a local include set, which is used for "modal" functionality, such as tracing, which isn't associated with any tensors, but instead is some sort of thread local mode that a user can turn on and off within some scope.
  • Finally, we have a global set, which are dispatch keys that are always considered. (Since the time this slide was written, Autograd has moved off the global set and onto tensor. However, the high level structure of the system hasn't changed).

There is also a local exclude set, which is used to exclude dispatch keys from dispatch. A common pattern is for some handler to handle a dispatch key, and then mask itself off via the local exclude set, so we don't try reprocessing this dispatch key later.

Let's walk through the evolution of dispatch key through some examples.

(Warning: This description is out-of-date for PyTorch master. Instead of Autograd being in global, it is instead on the Tensor. Everything else proceeds as before.)

The most canonical example of the dispatch machinery in operation is how it handles autograd. Read the diagram from the top to the bottom. At the very top, Autograd is in the global set, and the local exclude set is empty. When we do dispatch, we find autograd is the highest priority key (it's higher priority than CPU), and we dispatch to the autograd handler for the operator. Inside the autograd handler, we do some autograd stuff, but more importantly, we create the RAII guard AutoNonVariableTypeMode, which adds Autograd to the local exclude set, preventing autograd from being handled for all of the operations inside of this operator. When we redispatch, we now skip the autograd key (as it is excluded) and dispatch to the next dispatch key, CPU in this example. As local TLS is maintained for the rest of the call tree, all other subsequent dispatches also bypass autograd. Finally, in the end, we return from our function, and the RAII guard removes Autograd from the local exclude set so subsequent operator calls once again trigger autograd handlers.

Another similar example is tracing, which is similar to autograd where when we enter the tracing handler, we disable tracing for nested calls with ExcludeDispatchKeyGuard. However, it differs from autograd in how tracing is initially triggered: tracing is toggled by a dispatch key that is added to the local include set when you turn on tracing (with IncludeDispatchKeyGuard), as opposed to the global dispatch key from Autograd (Update: now a dispatch key on tensors).

One final example is the BackendSelect key, which operates a little differently from normal keys. The problem backend select solves is that sometimes, the default dispatch key set calculation algorithm doesn't know how to work out what the correct dispatch key should be. One notable case of this is factory functions, which don't have any Tensor arguments (and so, naively, would not dispatch to anything). BackendSelect is in the global dispatch key set, but is only registered for a few operators (for the rest, it is a fallthrough key). The BackendSelect handler inspects the arguments and decides what the final dispatch key should be, and then does a direct dispatch to that key, bypassing dispatch key calculation.

The slide summarizes some of the most common sequences of handlers that get processed when dispatching some operation in PyTorch. Most of the time, it's autograd, and then the backend (with a backend select in-between if you are a factory function). For XLA, there is also an XLAPreAutograd key (Update: This key is now simply AutogradXLA) which can be used to override the behavior of the Autograd key. And of course, if you turn on every feature in PyTorch all at once, you can end up stopping at a lot of handlers. Notice that the order in which these handlers are processed matters, since handlers aren't necessarily commutative.

Operator registration

So we talked a lot about how we decide what function pointers in the dispatch table to call, but how do these pointers get in the dispatch table in the first place? This is via the operator registration API. If you have never seen this API before, you should take a look at the Dispatcher in C++ tutorial, which describes how the API works at a very high level. In this section, we'll dive into more detail about how exactly the registration API maps to the dispatch table. Below, you can see the three main ways of interacting with the operator registration API: you define schemas for operators and then register implementations at dispatch keys; finally, there is a fallback method which you can use to define a handler for all operators at some dispatch key.

To visualize the impact of these registration operators, let us imagine that the dispatch tables for all operators collectively form a grid, like this:

On one axis, we have each operator supported in PyTorch. On the other axis, we have each dispatch key we support in our system. The act of operator registration involves filling in cells with implementations under these two axes.

When we register a kernel for a single operator at a specific dispatch key, we fill in a single cell (blue below):

When you register a kernel as a "catch-all" kernel for all dispatch keys in an operator, you fill in an entire row for the operator with one kernel (red below). By the way, if this seems like a strange thing to want to do, it is! And we're working to remove this capability in favor of more specific fills for a subset of keys.

When you register a kernel as a fallback for a single dispatch key, you fill in the column for that dispatch key (green below).

There's a precedence to these registrations: exact kernel registrations have the highest precedence, and catch-all kernels take precedence over fallbacks.

Boxing and unboxing

I want to spend the last part of this post talking about the boxing and unboxing facilities in our dispatcher, which turn out to be pretty important for enabling backend fallback. When you are a programming language designer, there is a classic tradeoff you have to make in deciding whether or not you want to use a boxed or unboxed representation for data:

A boxed or homogenous representation is a data representation where every type of object in your system has the same layout. Typically, this means you have some representation that has a header describing what the object in question is, and then some regular payload after it. Homogenous representations are easy to work with in code: because you can always assume that data has some regular layout, you can write functions that work polymorphically over any type of data (think of a function in Java that takes in an arbitrary Object, for example). Most garbage-collected languages have some boxed representation for heap objects, because the garbage collector needs to be able to work over any type of heap object.

In contrast, an unboxed or heterogenous representation allows objects to have a different layout depending on the data in question. This is more efficient than a homogenous representation, as each object can tailor its internal representation to exactly what is needed for the task at hand. However, the downside is we can no longer easily write a single function that works polymorphically over many types of objects. In C++, this problem is worked around using templates: if you need a function to work on multiple types, the C++ compiler will literally create a new copy of the function specialized to each type it is used with.

C++ defaults to a heterogenous layout, but we have implemented homogenous layout in PyTorch by way of the IValue struct (short for interpreter value), which implements a boxed representation that we can use in our interpreter. An IValue is a two word structure consisting of a payload word (usually a pointer, but it could also be an integer or float directly packed into the field) and a tag word which tells us what kind of value the IValue is.

This means we have two calling conventions for functions in PyTorch: the usual, C++, unboxed convention, and a boxed convention using IValues on a stack. Calls (from end users) can come from unboxed API (direct C++ call) or boxed API (from the JIT interpreter); similarly, kernels can be implemented as direct C++ functions (unboxed convention), or can be implemented as a boxed fallback (which by necessity is boxed, as they are polymorphic over all operators).

If I call from boxed API to a boxed fallback, it's easy to see how to plug the two components together...

...but how do I get from the unboxed API to the boxed fallback?

We need some sort of adapter to take the unboxed inputs and turn them into IValues so that they can be passed via the boxed calling convention. This is done via a boxing adapter, which is automatically generated using C++ templates working off of the unboxed C++ types in the outward facing API.

There is also an inverse problem, which is what to do if we have inputs from a boxed API and need to call into an unboxed kernel. Similarly, we have an unboxing adapter, which performs this translation. Unlike the boxing adapter, this adapter is applied to the kernel itself, since C++ templates only work at sites where the unboxed type is statically available (at the boxed API site, these types are not known, so you literally cannot implement this). Note that we always keep the unboxed API around, so that if a user calls in from the unboxed API, we can fastpath straight to the unboxed kernel.

So here is what boxing and unboxing look like overall:

Boxing and unboxing are a key feature in the implementation of boxed fallback: without them, we could not let people write single kernels which would work everywhere (and indeed, in the past, people would write code generators to generate repetitive kernels for every function). With template-based boxing and unboxing, you can write a single boxed kernel and have it work for all operators, even ones defined externally to the library.


So that's PyTorch's dispatcher in a nutshell! The dispatcher is still being continuously worked on; for example, Ailing Zhang recently landed a rework of how autograd dispatch keys are handled, which means that we actually no longer have a single Autograd key but have split autograd keys for AutogradCPU/AutogradCUDA/... We're generally interested in improving the user experience for people who register kernels to the dispatcher. Let us know if you have any questions or comments!

by Edward Z. Yang at September 10, 2020 06:29 PM

Sandy Maguire

Algebra-Driven Design

After almost a year of work, I’m thrilled to announce the completion of my new book, Algebra-Driven Design. It’s the culmination of two rewrites, and comes with a beautiful foreword written by John Hughes, the inventor of QuickCheck.

In the book, we take a fundamentally different approach to the software design process, focusing on deriving libraries from equations, algebraic manipulation and well-studied mathematical objects. The resulting code is guaranteed to be free of abstraction leaks, and in many cases, actually writes itself.

If that sounds like the sort of software you’d like to write, I’d highly encourage you to give it a read.

Algebra-Driven Design

September 10, 2020 04:55 PM

Tweag I/O

Towards a content-addressed model for Nix

This is my first post about content-addressability in Nix — a long-awaited feature that is hopefully coming soon! In this post I will show you how this feature will improve the Nix infrastructure. I’ll come back in another post to explain the technical challenges of adding content-addressability to Nix.

Nix has a wonderful model for handling packages. Because each derivation is stored under (aka addressed by) a unique name, multiple versions of the same library can coexist on the same system without issues: each version of the library has a distinct name, as far as Nix is concerned.

What’s more, if openssl is upgraded in Nixpkgs, Nix knows that all the packages that depend on openssl (i.e., almost everything) must be rebuilt, if only so that they point at the name of the new openssl version. This way, a Nix installation will never feature a package built for one version of openssl, but dynamically linked against another: as a user, it means that you will never have an undefined symbol error. Hurray!

The input-addressed store

How does Nix achieve this feat? The idea is that the name of a package is derived from all of its inputs (that is, the complete list of dependencies, as well as the package description). So if you change the git tag from which openssl is fetched, the name changes; and if the name of openssl changes, then the name of any package which has openssl in its dependencies changes too.

However this can be very pessimistic: even changes that aren’t semantically meaningful can imply mass rebuilding and downloading. As a slightly extreme example, this merge-request on Nixpkgs makes a tiny change to the way openssl is built. It doesn’t actually change openssl, yet requires rebuilding an insane amount of packages. Because, as far as Nix is concerned, all these packages have different names, hence are different packages. In reality, though, they weren’t.

Nevertheless, the cost of the rebuild has to be borne by the Nix infrastructure: Hydra builds all packages to populate the cache, and all the newly built packages must be stored. It costs both time and money (in CPU power and storage space).

Unnecessary rebuilds?

Most distributions, by default, don’t rebuild packages when their dependencies change, and have a (more-or-less automated) process to detect changes that require rebuilding reverse dependencies. For example, Debian tries to detect ABI changes automatically and Fedora has a more manual process. But Nix doesn’t.

The issue is that the notion of a “breaking change” is a very fuzzy one. Should we follow Debian and consider that only ABI changes are breaking? This criterion only applies for shared libraries, and as the Debian policy acknowledges, only for “well-behaved” programs. So if we follow this criterion, there’s still need for manual curation, which is precisely what Nix tries to avoid.

The content-addressed model

Quite happily, there is a criterion to avoid many useless rebuilds without sacrificing correctness: detecting when changes in a package (or one of its dependencies) yield the exact same output. That might seem like an edge case, but the openssl example above (and many others) shows that there’s a practical application to it. As another example, go depends on perl for its tests, so an upgrade of perl requires rebuilding all the Go packages in Nixpkgs, although it most likely doesn’t change the output of the go derivation.

But, for Nix to recognise that a package is not a new package, the new, unchanged, openssl or go packages must have the same name as the old version. Therefore, the name of a package must not be derived from its inputs; instead, it should be derived from the content of the compiled package. This is called content addressing.

Content addressing is how you can be sure that when you and a colleague at the other side of the world type git checkout 7cc16bb8cd38ff5806e40b32978ae64d54023ce0 you actually have the exact same content in your tree. Git commits are content addressed, therefore the name 7cc16bb8cd38ff5806e40b32978ae64d54023ce0 refers to that exact tree.

Yet another example of content-addressed storage is IPFS. In IPFS, files can be stored on any number of computers, and even moved from computer to computer. The content-derived name is used as a way to give an intrinsic name to a file, regardless of where it is stored.

In fact, even the particular use case that we are discussing here - avoiding recompilation when a rebuilt dependency hasn’t changed - can be found in various build systems such as Bazel. In build systems, such recompilation avoidance is sometimes known as the early cutoff optimization (see the build systems a la carte paper, for example).

So all we need to do is to move the Nix store from an input-addressed model to a content-addressed model, as used by many tools already, and we will be able to save a lot of storage space and CPU usage, by rebuilding many fewer packages. Nixpkgs contributors will see their CI time improved. It could also allow serving a binary cache over IPFS.

Well, like many things with computers, this is actually way harder than it sounds (which explains why this hasn’t already been done despite being discussed nearly 15 years ago in the original paper), but we now believe that there’s a way forward… more on that in a later post.


A content-addressed store for Nix would help reduce the insane load that Hydra has to sustain. While content-addressing is a common technique both in distributed systems and build systems (Nix is both!), getting to the point where it was feasible to integrate content-addressing in Nix has been a long journey.

In a future post, I’ll explain why it was so hard, and how we finally managed to propose a viable design for a content-addressed Nix.

September 10, 2020 12:00 AM

September 09, 2020

Alson Kemp

Rust build/install: permission denied

For the benefit of humanity… I just got:

error: failed to run custom build command for log v0.4.11

Caused by:
could not execute process /tmp/cargo-installGs2WwM/release/build/log-a97e745b31d3670c/build-script-build (never executed)

Caused by:
Permission denied (os error 13)

This was caused by noexec on my /tmp/ partition. Update /etc/fstab and run mount -a -o remount.

by alson at September 09, 2020 08:40 PM

FP Complete

Using Rust for DevOps tooling

A beginner's guide to writing your DevOps tools in Rust.


In this blog post we'll cover some basic DevOps use cases for Rust and why you would want to use it. As part of this, we'll also cover a few common libraries you will likely use in a Rust-based DevOps tool for AWS.

If you're already familiar with writing DevOps tools in other languages, this post will explain why you should try Rust.

We'll cover why Rust is a particularly good choice of language to write your DevOps tooling and critical cloud infrastructure software in. And we'll also walk through a small demo DevOps tool written in Rust. This project will be geared towards helping someone new to the language ecosystem get familiar with the Rust project structure.

If you're brand new to Rust, and are interested in learning the language, you may want to start off with our Rust Crash Course eBook.

What Makes the Rust Language Unique

Rust is a systems programming language focused on three goals: safety, speed, and concurrency. It maintains these goals without having a garbage collector, making it a useful language for a number of use cases other languages aren’t good at: embedding in other languages, programs with specific space and time requirements, and writing low-level code, like device drivers and operating systems.

The Rust Book (first edition)

Rust was initially created by Mozilla and has since gained widespread adoption and support. As the quote from the Rust book alludes to, it was designed to fill the same space that C++ or C would (in that it doesn’t have a garbage collector or a runtime). But Rust also incorporates zero-cost abstractions and many concepts that you would expect in a higher level language (like Go or Haskell). For that, and many other reasons, Rust's uses have expanded well beyond that original space as a low-level safe systems language.

Rust's ownership system is extremely useful in efforts to write correct and resource efficient code. Ownership is one of the killer features of the Rust language and helps programmers catch classes of resource errors at compile time that other languages miss or ignore.

Rust is an extremely performant and efficient language, comparable to the speeds you see with idiomatic everyday C or C++. And since there isn’t a garbage collector in Rust, it’s a lot easier to get predictable deterministic performance.

Rust and DevOps

What makes Rust unique also makes it very useful for areas ranging from robotics to rocketry, but are those qualities relevant for DevOps? Do we care if we have efficient executables or fine-grained control over resources, or is Rust a bit overkill for what we typically need in DevOps?

Yes and no

Rust is clearly useful for situations where performance is crucial and actions need to occur in a deterministic and consistent way. That obviously translates to low-level places where previously C and C++ were the only game in town. In those situations, before Rust, people simply had to accept the inherent risk and additional development costs of working on a large code base in those languages. Rust now allows us to operate in those areas but without the risk that C and C++ can add.

But with DevOps and infrastructure programming we aren't constrained by those requirements. For DevOps we've been able to choose from languages like Go, Python, or Haskell because we're not strictly limited by the use case to languages without garbage collectors. Since we can reach for other languages you might argue that using Rust is a bit overkill, but let's go over a few points to counter this.

Why you would want to write your DevOps tools in Rust

  • Small executables relative to other options like Go or Java
  • Easy to port across different OS targets
  • Efficient with resources (which helps cut down on your AWS bill)
  • One of the fastest languages (even when compared to C)
  • Zero cost abstractions - Rust is a low-level performant language which also gives us the benefits of a high-level language with its generics and abstractions.

To elaborate on some of these points a bit further:

OS targets and Cross Compiling Rust for different architectures

For DevOps it's also worth mentioning the (relative) ease with which you can port your Rust code across different architectures and different OS's.

Using the official Rust toolchain installer rustup, it's easy to get the standard library for your target platform. Rust supports a great number of platforms with different tiers of support. The docs for the rustup tool have a section covering how you can access pre-compiled artifacts for various architectures. To install the target platform for an architecture (other than the host platform, which is installed by default) you simply need to run rustup target add:

$ rustup target add x86_64-pc-windows-msvc 
info: downloading component 'rust-std' for 'x86_64-pc-windows-msvc'
info: installing component 'rust-std' for 'x86_64-pc-windows-msvc'

Cross compilation is already built into the Rust compiler by default. Once the x86_64-pc-windows-msvc target is installed you can build for Windows with the cargo build tool using the --target flag:

cargo build --target=x86_64-pc-windows-msvc

(the default target is always the host architecture)

If one of your dependencies links to a native (i.e. non-Rust) library, you will need to make sure that those cross compile as well. Doing rustup target add only installs the Rust standard library for that target. However, for the other tools that are often needed when cross-compiling, there is the handy cross tool. This is essentially a wrapper around cargo which does all cross compilation in Docker images that have all the necessary bits (linkers) and pieces installed.

Small Executables

A key unique feature of Rust is that it doesn't need a runtime or a garbage collector. Compare this to languages like Python or Haskell: with Rust, the absence of runtime dependencies (as with Python) or required system libraries (as with Haskell) is a huge advantage for portability.

For practical purposes, as far as DevOps is concerned, this portability means that Rust executables are much easier to deploy than scripts. With Rust, compared to Python or Bash, we don't need to set up the environment for our code ahead of time. This frees us up from having to worry if the runtime dependencies for the language are set up.

In addition to that, with Rust you're able to produce 100% static executables for Linux using the MUSL libc (and by default Rust will statically link all Rust code). This means that you can deploy your Rust DevOps tool's binaries across your Linux servers without having to worry if the correct libc or other libraries were installed beforehand.

Creating static executables for Rust is simple. As we discussed before, when discussing different OS targets, it's easy with Rust to switch the target you're building against. To compile static executables for the Linux MUSL target all you need to do is add the musl target with:

$ rustup target add x86_64-unknown-linux-musl

Then you can use this new target to build your Rust project as a fully static executable with:

$ cargo build --target x86_64-unknown-linux-musl

As a result of not having a runtime or a garbage collector, Rust executables can be extremely small. For example, there is a common DevOps tool called CredStash that was originally written in Python but has since been ported to Go (GCredStash) and now Rust (RuCredStash).

Comparing the executable sizes of the Rust versus Go implementations of CredStash, the Rust executable is nearly a quarter of the size of the Go variant.

| Implementation | Executable Size |
| --- | --- |
| Rust CredStash (RuCredStash, Linux amd64) | 3.3 MB |
| Go CredStash (GCredStash, Linux amd64, v0.3.5) | 11.7 MB |

Project links:

This is by no means a perfect comparison, and 8 MB may not seem like a lot, but consider the advantage of automatically having executables that are a quarter of the size you would typically expect.

This cuts down on the size that your Docker images, AWS AMIs, or Azure VM images need to be - and that helps speed up the time it takes to spin up new deployments.

With a tool of this size, the benefit of having an executable that is 75% smaller than it would otherwise be is not immediately apparent. On this scale the difference, 8 MB, is still quite cheap. But with larger tools (or collections of tools and Rust-based software) the benefits add up and the difference begins to be a practical and worthwhile consideration.

The Rust implementation was also not strictly written with the resulting size of the executable in mind. So if executable size was even more important of a factor other changes could be made - but that's beyond the scope of this post.

Rust is fast

Rust is very fast, even for common idiomatic everyday Rust code. And not only that: it's arguably easier to work with than C and C++, and easier to catch errors in your code.

For the Fortunes benchmark (which exercises the ORM, database connectivity, dynamic-size collections, sorting, server-side templates, XSS countermeasures, and character encoding) Rust is second and third, only lagging behind the first place C++ based framework by 4 percent.

In the benchmark for database access for a single query Rust is first and second:

And in a composite of all the benchmarks Rust based frameworks are second and third place.

Of course, language and framework benchmarks are not real life; however, this is still a fair comparison of the languages as they relate to each other (within the context and the focus of the benchmark).


Why would you not want to write your DevOps tools in Rust?

For medium to large projects, it’s important to have a type system and compile time checks like those in Rust versus what you would find in something like Python or Bash. The latter languages let you get away with things far more readily. This makes development much "faster" in one sense.

Certain situations, especially those involving small project codebases, would benefit more from using an interpreted language. In these cases, being able to quickly change pieces of the code without needing to re-compile and re-deploy the project outweighs the benefits (in terms of safety, execution speed, and portability) that languages like Rust bring.

Working with and iterating on a Rust codebase in those circumstances, with frequent but small codebase changes, would be needlessly time-consuming. If you have a small codebase with few or no runtime dependencies, then it wouldn't be worth it to use Rust.

Demo DevOps Project for AWS

We'll briefly cover some of the libraries typically used for an AWS focused DevOps tool in a walk-through of a small demo Rust project here. This aims to provide a small example that uses some of the libraries you'll likely want if you’re writing a CLI based DevOps tool in Rust. Specifically for this example we'll show a tool that does some basic operations against AWS S3 (creating new buckets, adding files to buckets, listing the contents of buckets).

Project structure

For AWS integration we're going to utilize the Rusoto library. Specifically for our modest demo Rust DevOps tools we're going to pull in the rusoto_core and the rusoto_s3 crates (in Rust a crate is akin to a library or package).

We're also going to use the structopt crate for our CLI options. This is a handy, batteries included CLI library that makes it easy to create a CLI interface around a Rust struct.

The tool operates by matching the CLI option and arguments the user passes in with a match expression.

We can then use this to match on that part of the CLI option struct we've defined and call the appropriate functions for that option.

match opt {
    Opt::Create { bucket: bucket_name } => {
        println!("Attempting to create a bucket called: {}", bucket_name);
        let demo = S3Demo::new(bucket_name);
        create_demo_bucket(&demo); // hand off to the standalone helper
    }
    // ... the remaining subcommands are matched and handled similarly
}

This matches on the Create variant of the Opt enum.

We then use S3Demo::new(bucket_name) to create a new S3Client which we can use in the standalone create_demo_bucket function that we've defined which will create a new S3 bucket.
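The dispatch shape can be sketched in a self-contained way with a plain enum standing in for the structopt-derived Opt (the names and messages here are illustrative, not the demo project's actual code):

```rust
// A plain enum standing in for the structopt-derived Opt (illustrative).
enum Opt {
    Create { bucket: String },
    List { bucket: String },
}

// Dispatch on the parsed CLI option, as the tool does in its main.
fn dispatch(opt: Opt) -> String {
    match opt {
        Opt::Create { bucket } => format!("creating bucket {}", bucket),
        Opt::List { bucket } => format!("listing bucket {}", bucket),
    }
}

fn main() {
    assert_eq!(
        dispatch(Opt::Create { bucket: "demo".into() }),
        "creating bucket demo"
    );
    assert_eq!(
        dispatch(Opt::List { bucket: "demo".into() }),
        "listing bucket demo"
    );
    println!("ok");
}
```

In the real tool each match arm calls an S3 helper instead of returning a string, but the control flow is the same.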

The tool is fairly simple with most of the code located in src/

Building the Rust project

Before you build the code in this project, you will need to install Rust. Please follow the official install instructions here.

The default build tool for Rust is called Cargo. It's worth getting familiar with the docs for Cargo but here's a quick overview for building the project.

To build the project run the following from the root of the git repo:

cargo build

You can then use cargo run to run the code or execute the code directly with ./target/debug/rust-aws-devops:

$ ./target/debug/rust-aws-devops 

Running tool
RustAWSDevops 0.1.0
Mike McGirr <>

USAGE:
    rust-aws-devops <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    add-object       Add the specified file to the bucket
    create           Create a new bucket with the given name
    delete           Try to delete the bucket with the given name
    delete-object    Remove the specified object from the bucket
    help             Prints this message or the help of the given subcommand(s)
    list             Try to find the bucket with the given name and list its objects

Which will output the nice CLI help output automatically created for us by structopt.

If you're ready to build a release version (with optimizations turned on, which will make compilation take slightly longer), run the following:

cargo build --release


As this small demo showed, it's not difficult to get started using Rust to write DevOps tools. And even then, we didn't need to trade ease of development for fast, performant code.

Hopefully the next time you're writing a new piece of DevOps software, anything from a simple CLI tool for a specific DevOps operation or you're writing the next Kubernetes, you'll consider reaching for Rust. And if you have further questions about Rust, or need help implementing your Rust project, please feel free to reach out to FP Complete for Rust engineering and training!

Want to learn more Rust? Check out our Rust Crash Course eBook. And for more information, check out our Rust homepage.

September 09, 2020 12:00 AM

September 08, 2020

Douglas M. Auclair (geophf)

September 2020 Haskell 1-liners

  • 2020-09-08: given

    removeInfreqs :: Set String -> Ontology -> Ontology
    removeInfreqs infrequentWords ont = (\wordcounts -> foldl (flip ri') wordcounts infrequentWords) ont

    where Ontology is a map-of-maps.

    1. remove flip to get the same functional result.
    2. curry away ont from the function removeInfreqs
    3. curry away wordcounts from the map-lambda function.
    4. curry away infrequentWords from the function removeInfreqs

      n.b.: This curry may not be as straightforward as the other curries.

  • 2020-09-01: Given all of the above, and now that you've curried the above lambda to [SPOILER]:

    \key -> const (not (Set.member key stoppers))

    Curry away key from this new lambda.

by geophf ( at September 08, 2020 06:14 PM


Working with Hasura to improve GHC tooling

We’re glad to announce that we will be working with Hasura on improvements to GHC tooling over the coming months. We are looking forward to this work and would like to thank Hasura for this investment in the Haskell community. More details are available on the Hasura blog:


I’m excited to announce an engineering collaboration and partnership with the great folks at Well Typed to be working with them on improving open-source tooling in the Haskell ecosystem.


Most recently, we’ve been investigating memory fragmentation in GHC Haskell and found limitations in the profiling tools available, so we’re planning to support ongoing development efforts on ghc-debug and help improve GHC’s DWARF support for tracking running code back to source file locations. This will unlock many possibilities for easier profiling of production Haskell code.

We’re excited to be working with David Eichmann, Ben Gamari and the team at Well Typed over the coming months. And special thanks of course to Adam Gundry for helping this collaboration come together!

Well-Typed maintains and actively contributes to GHC thanks to support from a number of companies who are interested in improving the Haskell ecosystem for everyone. If your company might be willing to help fund our work on GHC or other core Haskell libraries, please drop us an email.

by adam, ben, davide at September 08, 2020 12:00 AM

September 07, 2020

Monday Morning Haskell

Unit Tests and Benchmarks in Rust


For a couple months now, we've focused on some specific libraries you can use in Rust for web development. But we shouldn't lose sight of some other core language skills and mechanics. Whenever you write code, you should be able to show first that it works, and second that it works efficiently. If you're going to build a larger Rust app, you should also know a bit about unit testing and benchmarking. This week, we'll take a couple simple sorting algorithms as our examples to learn these skills.

As always, you can take a look at the code for this article on our Github Repo for the series. You can find this week's code specifically in! For a more basic introduction to Rust, be sure to check out our Rust Beginners Series!

Insertion Sort

We'll start out this article by implementing insertion sort. This is one of the simpler sorting algorithms, which is rather inefficient. We'll perform this sort "in place". This means our function won't return a value. Rather, we'll pass a mutable reference to our vector so we can manipulate its items. To help out, we'll also define a swap function to change two elements around that same reference:

pub fn swap(numbers: &mut Vec<i32>, i: usize, j: usize) {
    let temp = numbers[i];
    numbers[i] = numbers[j];
    numbers[j] = temp;
}

pub fn insertion_sorter(numbers: &mut Vec<i32>) {
    // ...
}

At its core, insertion sort is a pretty simple algorithm. We maintain the invariant that the "left" part of the array is always sorted. (At the start, with only 1 element, this is clearly true). Then we loop through the array and "absorb" the next element into our sorted part. To absorb the element, we'll loop backwards through our sorted portion. Each time we find a larger element, we switch their places. When we finally encounter a smaller element, we know the left side is once again sorted.

pub fn insertion_sorter(numbers: &mut Vec<i32>) {
    for i in 1..numbers.len() {
        let mut j = i;
        while j > 0 && numbers[j-1] > numbers[j] {
            swap(numbers, j, j - 1);
            j = j - 1;
        }
    }
}


Our algorithm is simple enough. But how do we know it works? The obvious answer is to write some unit tests for it. Rust is actually a bit different from Haskell and most other languages in the canonical approach to unit tests. Most of the time, you'll make a separate test directory. But Rust encourages you to write unit tests in the same file as the function definition. We do this by having a section at the bottom of our file specifically for tests. We delineate a test function with the test macro:

#[test]
fn test_insertion_sort() {
    ...
}

To keep things simple, we'll define a random vector of 100 integers and pass it to our function. We'll use assert to verify that each number is smaller than the next one after it.

#[test]
fn test_insertion_sort() {
    let mut numbers: Vec<i32> = random_vector(100);
    insertion_sorter(&mut numbers);
    for i in 0..(numbers.len() - 1) {
        assert!(numbers[i] <= numbers[i + 1]);
    }
}
When we run the cargo test command, Cargo will automatically detect that we have a test suite in this file and run it.

running 1 test...
test sorter::test_insertion_sort ... ok


So we know our code works, but how quickly does it work? When you want to check the performance of your code, you need to establish benchmarks. These are like test suites except that they're meant to give out the average time it takes to perform a task.

Just as we had a test macro for making test suites, we can use the bench macro for benchmarks. Each of these takes a mutable Bencher object as an argument. To record some code, we'll call iter on that object and pass a closure that will run our function.

#[bench]
fn bench_insertion_sort_100_ints(b: &mut Bencher) {
    b.iter(|| {
        let mut numbers: Vec<i32> = random_vector(100);
        insertion_sorter(&mut numbers)
    });
}

We can then run the benchmark with cargo bench.

running 2 tests
test sorter::test_insertion_sort ... ignored
test sorter::bench_insertion_sort_100_ints   ... bench:       6,537 ns/iter (+/- 1,541)

So on average, it took about 6,500 nanoseconds (6.5 microseconds) to sort 100 numbers. On its own, this number doesn't tell us much. But we can get a clearer idea of the runtime of our algorithm by looking at benchmarks of different sizes. Suppose we make lists of 1000 and 10000:

#[bench]
fn bench_insertion_sort_1000_ints(b: &mut Bencher) {
    b.iter(|| {
        let mut numbers: Vec<i32> = random_vector(1000);
        insertion_sorter(&mut numbers)
    });
}

#[bench]
fn bench_insertion_sort_10000_ints(b: &mut Bencher) {
    b.iter(|| {
        let mut numbers: Vec<i32> = random_vector(10000);
        insertion_sorter(&mut numbers)
    });
}

Now when we run the benchmark, we can compare the results of these different runs:

running 4 tests
test sorter::test_insertion_sort ... ignored
test sorter::bench_insertion_sort_10000_ints ... bench:  65,716,130 ns/iter (+/- 11,193,188)
test sorter::bench_insertion_sort_1000_ints  ... bench:     612,373 ns/iter (+/- 124,732)
test sorter::bench_insertion_sort_100_ints   ... bench:      12,032 ns/iter (+/- 904)

We see that when we increase the problem size by a factor of 10, we increase the runtime by a factor of nearly 100! This confirms for us that our simple insertion sort has an asymptotic runtime of O(n^2), which is not very good.

Quick Sort

There are many ways to sort more efficiently! Let's try our hand at quicksort. For this algorithm, we first "partition" our array. We'll choose a pivot value, and then move all the numbers smaller than the pivot to the left of the array, and all the greater numbers to the right. The upshot is that we know our pivot element is now in the correct final spot!

Here's what the partition algorithm looks like. It works on a specific sub-segment of our vector, indicated by start and end. We initially move the pivot element to the back, and then loop through the other elements of the array. The i index tracks where our pivot will end up. Each time we encounter a smaller number, we increment it. At the very end we swap our pivot element back into its place, and return its final index.

pub fn partition(
  numbers: &mut Vec<i32>,
  start: usize,
  end: usize,
  partition: usize)
  -> usize {
    let pivot_element = numbers[partition];
    swap(numbers, partition, end - 1);
    let mut i = start;
    for j in start..(end - 1) {
        if numbers[j] < pivot_element {
            swap(numbers, i, j);
            i = i + 1;
        }
    }
    swap(numbers, i, end - 1);
    i
}

So to finish sorting, we'll set up a recursive helper that, again, functions on a sub-segment of the array. We'll choose a random element and partition by it:

pub fn quick_sorter_helper(
  numbers: &mut Vec<i32>, start: usize, end: usize) {
    if start >= end {
        return;
    }

    let mut rng = thread_rng();
    let initial_partition = rng.gen_range(start, end);
    let partition_index =
          partition(numbers, start, end, initial_partition);
    // ...
}

Now that we've partitioned, all that's left to do is recursively sort each side of the partition! Our main API function will call this helper with the full size of the array.

pub fn quick_sorter_helper(
  numbers: &mut Vec<i32>, start: usize, end: usize) {
    if start >= end {
        return;
    }

    let mut rng = thread_rng();
    let initial_partition = rng.gen_range(start, end);
    let partition_index =
          partition(numbers, start, end, initial_partition);
    quick_sorter_helper(numbers, start, partition_index);
    quick_sorter_helper(numbers, partition_index + 1, end);
}

pub fn quick_sorter(numbers: &mut Vec<i32>) {
    quick_sorter_helper(numbers, 0, numbers.len());
}

Now that we've got this function, let's add tests and benchmarks for it:

#[test]
fn test_quick_sort() {
    let mut numbers: Vec<i32> = random_vector(100);
    quick_sorter(&mut numbers);
    for i in 0..(numbers.len() - 1) {
        assert!(numbers[i] <= numbers[i + 1]);
    }
}

#[bench]
fn bench_quick_sort_100_ints(b: &mut Bencher) {
    b.iter(|| {
        let mut numbers: Vec<i32> = random_vector(100);
        quick_sorter(&mut numbers)
    });
}

// Same kind of benchmarks for 1000, 10000, 100000

Then we can run our benchmarks and see our results:

running 9 tests
test sorter::test_insertion_sort ... ignored
test sorter::test_quick_sort ... ignored
test sorter::bench_insertion_sort_10000_ints ... bench:  65,130,880 ns/iter (+/- 49,548,187)
test sorter::bench_insertion_sort_1000_ints  ... bench:     312,300 ns/iter (+/- 243,337)
test sorter::bench_insertion_sort_100_ints   ... bench:       6,159 ns/iter (+/- 4,139)
test sorter::bench_quick_sort_100000_ints    ... bench:  14,292,660 ns/iter (+/- 5,815,870)
test sorter::bench_quick_sort_10000_ints     ... bench:   1,263,985 ns/iter (+/- 622,788)
test sorter::bench_quick_sort_1000_ints      ... bench:     105,443 ns/iter (+/- 65,812)
test sorter::bench_quick_sort_100_ints       ... bench:       9,259 ns/iter (+/- 3,882)

Quicksort does much better on the larger values, as expected! Each time we grow the input by a factor of 10, the runtime goes up by a factor of only a little more than 10. From benchmarks alone it's difficult to confirm that the true runtime is O(n log n), but we can clearly see that we're much closer to linear time!
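As a quick sanity check on that claim, we can divide successive benchmark times. The figures below are copied from the quicksort run above; this snippet is ours, not from the original post:

```rust
fn main() {
    // ns/iter figures copied from the quicksort benchmark output above
    let times: [(u64, f64); 4] = [
        (100, 9_259.0),
        (1_000, 105_443.0),
        (10_000, 1_263_985.0),
        (100_000, 14_292_660.0),
    ];
    for pair in times.windows(2) {
        let (n0, t0) = pair[0];
        let (n1, t1) = pair[1];
        // for O(n log n), a 10x bigger input should cost a bit more than 10x
        println!("n: {} -> {}, time grew {:.1}x", n0, n1, t1 / t0);
    }
}
```

This prints growth factors of about 11.4x, 12.0x, and 11.3x: a bit above 10, but nowhere near the ~100x blowup we saw for insertion sort.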


That's all for this intermediate series on Rust! Next week, we'll summarize the skills we learned over the course of these couple months in Rust. Then we'll look ahead to our next series of topics, including some totally new kinds of content!

Don't forget! If you've never programmed in Rust before, our Rust Video Tutorial provides an in-depth introduction to the basics!

by James Bowen at September 07, 2020 02:30 PM


ICFP 2020 & MSFP 2020

We had a great time at both ICFP (plus co-located events) and MSFP this year. Many thanks to the organisers for managing to create compelling online versions of these events in challenging circumstances. We’ve seen many great talks, and had lots of interesting discussions.

Ben was the program chair of the Haskell Implementors’ Workshop (HIW) this year, and several others of us had contributions at these conferences which we are summarising below. We include links to slides and videos (as far as they are available yet). We will add the missing video links once they become available.

Using STM for modular concurrency

Duncan Coutts, invited talk at Haskell Symposium


Software Transactional Memory (STM) has been available within Haskell for around fifteen years, yet it remains a somewhat under-appreciated feature. This talk aims to redress that by sharing the experiences from a recent successful industrial project that relies extensively and fundamentally on STM. There are good articles, book chapters and blogs on STM at the micro level: looking at the details of the primitives and how to use them to build bigger abstractions. This talk will try to focus on STM at the macro level: larger scale design patterns, how it fits into a system as a whole, and testing techniques.

The focus of this experience report is the application of STM in the context of highly concurrent systems with many modular concurrent components, and the use of STM to help structure the communication and interaction of these components. This contrasts, for example, with a database pattern in which many threads execute transactions on one bundle of shared state.

Staged Sums of Products

Andres Löh with Matthew Pickering and Nicolas Wu, paper presentation at Haskell Symposium


Generic programming libraries have historically traded efficiency in return for convenience, and the generics-sop library is no exception. It offers a simple, uniform, representation of all datatypes precisely as a sum of products, making it easy to write generic functions. We show how to finally make generics-sop fast through the use of staging with Typed Template Haskell.

A Low-Latency Garbage Collector for GHC (Demo)

Ben Gamari with Laura Dietz, demo at Haskell Symposium


GHC 8.10.1 offers a new latency-oriented garbage collector to complement the existing throughput-oriented copying collector. This demonstration discusses the pros and cons of the latency-optimized GC design, briefly discusses the technical trade-offs made by the design, and describes the sorts of application for which the collector is suitable. We include a brief quantitative evaluation on a typical large-heap server workload.

Liquid Haskell as a GHC Plugin

Alfredo Di Napoli with Ranjit Jhala, Andres Löh, Niki Vazou, talk at HIW

Slides · Blog post

Liquid Haskell is a system that extends GHC with refinement types. Constraints arising from the refinement types are sent to an external automatic theorem prover such as z3. By employing such additional checks, one can express more interesting properties about Haskell programs statically.

Up until now, Liquid Haskell has been a separate executable that uses the GHC API, but would run on Haskell files individually and just say “SAFE” or “UNSAFE”. If “SAFE”, one could then proceed to compile a program normally.

In the recent months, we have rewritten Liquid Haskell to now be a GHC plugin. The main advantages of this approach are: First, there is just a single invocation necessary per Haskell source file, so the workflow becomes easier. Second, we can integrate with GHC and Cabal to support libraries and packages properly. When checking source files, Liquid Haskell requires information about the constraints already established for dependent libraries. Previously, these had to be hand-distributed for selected modules with Liquid Haskell itself. Now, they become part of normal GHC interface files and can be distributed for arbitrary user packages via Hackage.

In this talk, we present the Liquid Haskell plugin workflow and why we think it is superior to the old approach. We also discuss the implementation of the plugin: it is interesting because it does not neatly fit into the plugin categories currently provided. Morally, Liquid Haskell typechecks the code, but in order to generate constraints to feed to the prover, it must access (unoptimised!) core code. We explain the final design, and some of the iterations we needed to get there.

GHC Devops Update

Ben Gamari, HIW


As a part of the general GHC status update by Simon Peyton Jones, Ben ended up giving an update on GHC Devops.

Adding Backtraces to Exceptions

David Eichmann, lightning talk at HIW


David Eichmann spoke about an on-going effort to introduce backtrace information into GHC’s exception mechanism, fixing a long-standing pain-point for production users. In his remarks he briefly motivated the problem, described GHC’s existing backtrace collection mechanisms, and described the proposed approach for introducing backtraces into the GHC’s exception types.

Interested readers are invited to comment on the the associated GHC Proposal.

A Vision of Compartmentalized Concurrency in Haskell

Ben Gamari, lightning talk at HIW


Ben described a hypothetical design for improving the scalability of Glasgow Haskell by introducing a notion of multiple distinct heaps (known as domains) in a single Haskell process. This mechanism would enable improved locality on large NUMA machines (by keeping domains on particular NUMA nodes) while improving garbage collector performance (due to reduced synchronization) and exploiting the low communication cost of a shared-memory environment. Ben explained some of the implementation challenges of this design and encouraged contributors interested in undertaking implementation of this idea to get in touch.

Shattered Lens

Oleg Grenrus, extended abstract presentation at MSFP

Video · Slides · Extended abstract

A very well-behaved lens from a structure type S to a value type V is usually specified using two functions, a getter f : S \rightarrow V and a setter g : S \rightarrow V \rightarrow S . These functions are required to satisfy three laws involving equalities.
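For reference, the three laws in question, stated here in their standard form (they are not spelled out in the abstract itself), are:

```latex
\begin{aligned}
f\,(g\,s\,v)     &= v        &&\text{(PutGet: you get back what you put in)}\\
g\,s\,(f\,s)     &= s        &&\text{(GetPut: putting back the get changes nothing)}\\
g\,(g\,s\,v)\,v' &= g\,s\,v' &&\text{(PutPut: the last put wins)}
\end{aligned}
```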

This formulation is problematic if we want to talk about the equality of lenses, for example to prove the associativity of lens composition. When the theory we work in doesn’t have unique identity proofs (UIP), we run into coherence problems, trying to show that equality proofs are equal.

I found a good formulation of prisms, which we say to be “decidable embeddings”. Yet, I failed to find as good a description of lenses, where good means a merely propositional, uniquely determining description. However, there are some ideas on how to think about lenses, and on what to look at next.

More about Well-Typed

If you want to find more about what we offer at Well-Typed, please check out our Services page, or just send us an email.

by christine, andres, duncan, ben, alfredo, davide, oleg at September 07, 2020 12:00 AM

September 05, 2020

Dan Piponi (sigfpe)

Some pointers to things not in this blog

Some pointers to things not in this blog

One reason I haven't blogged much recently is that my tolerance for Blogger has reached its limit and I've been too lazy to build my own platform supporting mathematics and code. (For example, I can't get previewing on blogger to work today so I'm just publishing this and hope the reformatting is acceptable.) But that doesn't mean I haven't posted stuff publicly. So here are some thematically related links to things I've written on github and colab.

Continuations, effects and runners

How to slice your code into continuations

Handling Effects with Jax

Are these runners?
Just a little snippet of code to illustrate how Python's coroutines can be used to support composable runners. See
(The answer is yes.)

Parallel audio

FWIW I think Colab might be my favourite place to share stuff publicly if it supported environments other than Python.

by sigfpe ( at September 05, 2020 07:56 PM

September 04, 2020

Oleg Grenrus

(Approximate) integer square root

Posted on 2020-09-04 by Oleg Grenrus

Quoting Wikipedia article: In number theory, the integer square root (intSqrt) of a positive integer n is the positive integer m which is the greatest integer less than or equal to the square root of n ,

\mathsf{intSqrt}\, n = \left\lfloor \sqrt{n} \right\rfloor

How to compute it in Haskell? The Wikipedia article mentions Newton’s method, but doesn’t discuss how to make the initial guess.

In base-4.8 (GHC-7.10) we got the countLeadingZeros function, which can be used to get a good initial guess.

Recall that finite machine integers look like

n = 0b0......01.....
      ^^^^^^^^       -- @countLeadingZeros n@ bits
              ^^^^^^ -- @b = finiteBitSize n - countLeadingZeros n@ bits 

We have an efficient way to get the “significant bits” count b , which can be used to approximate the number:

2^{b-1} \le n < 2^{b}, \qquad n > 0

It is also easy to approximate the square root of numbers like 2^b :

\sqrt{2^b} = 2^{\frac{b}{2}} \approx 2^{\left\lfloor \frac{b}{2} \right\rfloor}

We can use this approximation as the initial guess, and write simple implementation of intSqrt:

module IntSqrt where

import Data.Bits

intSqrt :: Int -> Int
intSqrt 0 = 0
intSqrt 1 = 1
intSqrt n = case compare n 0 of
    LT -> 0           -- whatever :)
    EQ -> 0
    GT -> iter guess  -- only single iteration
  where
    iter :: Int -> Int
    iter 0 = 0
    iter x = shiftR (x + n `div` x) 1 -- shifting is dividing

    guess :: Int
    guess = shiftL 1 (shiftR (finiteBitSize n - countLeadingZeros n) 1)

Note, I do only single iteration1. Is it enough? My need is to calculate square roots of small numbers. We can test quite a large range exhaustively. Lets define a correctness predicate:

correct :: Int -> Int -> Bool
correct n x = sq x <= n && n < sq (x + 1) where sq y = y * y

Out of hundred numbers

correct100 = length
    [ (n,x) | n <- [ 0..99 ], let x = intSqrt n, correct n x ]

the computed intSqrt is correct for 89! Which are the incorrect ones?

incorrect100 =
    [ (8,3)
    , (24,5)
    , (32,6), (33,6), (34,6), (35,6)
    , (48,7)
    , (80,9)
    , (96,10), (97,10), (98,10), (99,10)
    ]

The numbers which are close to a perfect square ( 8 + 1 = 3^2 , 24 + 1 = 5^2 , …) are overestimated.

If we take a bigger range, say 0…99999, then with a single iteration 23860 numbers are correct; with two iterations, 96659 are.
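To make it easy to play with the iteration count, here is a direct port of the same trick (bit-length initial guess plus Newton steps) to Rust, using u32::leading_zeros. This port is ours, not from the post:

```rust
// Port of the post's intSqrt: initial guess 2^(b/2) from the bit length b,
// then a configurable number of Newton iterations. Small inputs only
// (the correctness check below would overflow for huge n).
fn int_sqrt(n: u64, iters: u32) -> u64 {
    if n < 2 {
        return n;
    }
    let b = 64 - n.leading_zeros(); // number of significant bits
    let mut x = 1u64 << (b / 2); // roughly sqrt(n)
    for _ in 0..iters {
        x = (x + n / x) >> 1; // one Newton step; shifting is dividing
    }
    x
}

// sq x <= n && n < sq (x + 1), same as the Haskell `correct`
fn correct(n: u64, x: u64) -> bool {
    x * x <= n && n < (x + 1) * (x + 1)
}

fn main() {
    println!("intSqrt 8 with 1 iteration: {}", int_sqrt(8, 1)); // 3, an overestimate
    println!("intSqrt 8 with 2 iterations: {}", int_sqrt(8, 2)); // 2, correct
    assert!(correct(8, int_sqrt(8, 2)));
}
```

With one iteration 8 comes out as 3, exactly the overestimate listed above; a second iteration brings it down to the correct 2.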

For my use case (mangling the size of QuickCheck generators) this is good enough; small deviations are perfectly acceptable. Bit fiddling FTW!

  1. Like the infamous fast inverse square root algorithm, which also uses only a single iteration, because the initial guess is very good.↩︎

September 04, 2020 12:00 AM

September 02, 2020

FP Complete

HTTP status codes with async Rust

This blog post is a direct follow up on my previous blog post on different levels of async in Rust. You may want to check that one out before diving in here.

Alright, so now we know that we can make our programs asynchronous by using non-blocking I/O calls. But last time we only saw examples that remained completely sequential, defeating the whole purpose of async. Let's change that with something more sophisticated.

A few months ago I needed to ensure that all of the URLs for a domain name resolved to either a real web page (200 status code) or redirected to somewhere else with a real web page. To make that happen, I needed a program that would:

  • Read all of the URLs in a text file, one URL per line
  • Produce a CSV file containing the URL and its status code

To make this simple, we're going to take a lot of shortcuts like:

  • Hard-coding the input file path for the URLs
  • Printing out the CSV output to standard output
  • Using a simple println! for generating CSV output instead of using a library
  • Allow any errors to crash the entire program
    • In fact, as you'll see later, we're really treating this as a requirement: if any HTTP requests have an error, the program must terminate with an error code so we know something went wrong

For the curious: the original version of this was a really short Haskell program that had these properties. For fun a few weeks back, I rewrote it in two ways in Rust, which ultimately led to this pair of blog posts.

Fully blocking

Like last time, I recommend following along with my code. I'll kick this off with cargo new httpstatus. And then to avoid further futzing with our Cargo.toml, let's add our dependencies preemptively:

[dependencies]
tokio = { version = "0.2.22", features = ["full"] }
reqwest = { version = "0.10.8", features = ["blocking"] }
async-channel = "1.4.1"
is_type = "0.2.1"

That features = ["blocking"] should hopefully grab your attention. The reqwest library provides an optional, fully blocking API. That seems like a great place to get started. Here's a nice, simple program that does what we need:

// To use .lines() before, just like last time
use std::io::BufRead;

// We'll return _some_ kind of an error
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open the file for input
    let file = std::fs::File::open("urls.txt")?;
    // Make a buffered version so we can read lines
    let buffile = std::io::BufReader::new(file);

    // CSV header
    println!("URL,Status");

    // Create a client so we can make requests
    let client = reqwest::blocking::Client::new();

    for line in buffile.lines() {
        // Error handling on reading the lines in the file
        let line = line?;
        // Make a request and send it, getting a response
        let resp = client.get(&line).send()?;
        // Print the status code
        println!("{},{}", line, resp.status().as_u16());
    }

    Ok(())
}

Thanks to Rust's ? syntax, error handling is pretty easy here. In fact, there are basically no gotchas here. reqwest makes this code really easy to write!

Once you put a urls.txt file together, such as the following:

You'll hopefully get output such as:


The logic above is pretty easy to follow, and hopefully the inline comments explain anything confusing. With that idea in mind, let's up our game a bit.

Ditching the blocking API

Let's first move away from the blocking API in reqwest, but still keep all of the sequential nature of the program. This involves four relatively minor changes to the code, all spelled out below:

use std::io::BufRead;

// First change: add the Tokio runtime
#[tokio::main]
// Second: turn this into an async function
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = std::fs::File::open("urls.txt")?;
    let buffile = std::io::BufReader::new(file);

    println!("URL,Status");

    // Third change: Now we make an async Client
    let client = reqwest::Client::new();

    for line in buffile.lines() {
        let line = line?;

        // Fourth change: We need to .await after send()
        let resp = client.get(&line).send().await?;

        println!("{},{}", line, resp.status().as_u16());
    }

    Ok(())
}

The program is still fully sequential: we fully send a request, then get the response, before we move onto the next URL. But we're at least ready to start playing with different async approaches.

Where blocking is fine

If you remember from last time, we had a bit of a philosophical discussion on the nature of blocking, and concluded that ultimately some blocking is OK in a program. In order both to simplify what we do here and to provide some real-world recommendations, let's list all of the blocking I/O we're doing:

  • Opening the file urls.txt
  • Reading lines from that file
  • Outputting to stdout with println!
  • Implicitly closing the file descriptor

Note that, even though we're sequentially running our HTTP requests right now, those are in fact using non-blocking I/O. Therefore, I haven't included anything related to HTTP in the list above. We'll start dealing with the sequential nature next.

Returning to the four blocking I/O calls above, I'm going to make a bold statement: don't bother making them non-blocking. It's not actually terribly difficult to do the file I/O using tokio (we saw how last time). But we get virtually no benefit from doing so. The latency of local disk access, especially for a file as small as urls.txt is likely to be, and especially in contrast to a bunch of HTTP requests, is minuscule.

Feel free to disagree with me, or to take on making those calls non-blocking as an exercise. But I'm going to focus instead on higher value targets.

Concurrent requests

The real problem here is that we have sequential HTTP requests going on. Instead, we would much prefer to make our requests concurrently. If we assume there are 100 URLs, and each request takes 1 second (hopefully an overestimation), a sequential algorithm can at best finish in 100 seconds. However, a concurrent algorithm could in theory finish all 100 requests in just 1 second. In reality that's pretty unlikely to happen, but it is completely reasonable to expect a significant speedup factor, depending on network conditions, number of hosts you're connecting to, and other similar factors.

So how exactly do we do concurrency with tokio? The most basic answer is the tokio::spawn function. This spawns a new task in the tokio runtime. This is similar in principle to spawning a new system thread. But instead, running and scheduling is managed by the runtime instead of the operating system. Let's take a first stab at spawning each HTTP request into its own task:

tokio::spawn(async move {
    let resp = client.get(&line).send().await?;

    println!("{},{}", line, resp.status().as_u16());
});

That looks nice, but we have a problem:

error[E0277]: the `?` operator can only be used in an async block that returns `Result` or `Option` (or another type that implements `std::ops::Try`)
  --> src\
15 |           tokio::spawn(async move {
   |  _________________________________-
16 | |             let resp = client.get(&line).send().await?;
   | |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot use the `?` operator in an async block that returns `()`
17 | |
18 | |             println!("{},{}", line, resp.status().as_u16());
19 | |         });
   | |_________- this function should return `Result` or `Option` to accept `?`

Our task doesn't return a Result, and therefore has no way to complain about errors. This is actually indicating a far more serious issue, which we'll get to later. But for now, let's just pretend errors won't happen, and cheat a bit with .unwrap():

let resp = client.get(&line).send().await.unwrap();

This also fails, now with an ownership issue:

error[E0382]: use of moved value: `client`
  --> src\
10 |       let client = reqwest::Client::new();
   |           ------ move occurs because `client` has type `reqwest::async_impl::client::Client`, which does not implement the `Copy` trait

This one is easier to address. The Client is being shared by multiple tasks. But each task needs to make its own clone of the Client. If you read the docs, you'll see that this is recommended behavior:

The Client holds a connection pool internally, so it is advised that you create one and reuse it.

You do not have to wrap the Client in an Rc or Arc to reuse it, because it already uses an Arc internally.
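That internal Arc is why per-task clones are cheap. The same pattern can be seen with plain threads and a hand-rolled stand-in for Client (the types here are hypothetical, not from reqwest):

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for reqwest::Client: cloning it is cheap
// because only the internal Arc's reference count is bumped.
#[derive(Clone)]
struct Client {
    pool: Arc<String>, // pretend this is a connection pool
}

fn main() {
    let client = Client { pool: Arc::new("shared pool".to_string()) };
    let mut handles = Vec::new();
    for line in vec!["url1", "url2"] {
        // each task clones the handle before the `move` closure takes ownership
        let client = client.clone();
        handles.push(thread::spawn(move || format!("{} via {}", line, client.pool)));
    }
    let results: Vec<String> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(results, vec!["url1 via shared pool", "url2 via shared pool"]);
    println!("{:?}", results);
}
```

Every task ends up with its own handle, but they all point at the one shared pool, which is exactly the situation with reqwest's Client.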

Once we add this line before our tokio::spawn, our code will compile:

let client = client.clone();

Unfortunately, things fail pretty spectacularly at runtime:

thread 'thread 'tokio-runtime-workerthread 'tokio-runtime-worker' panicked at '' panicked at 'tokio-runtime-workercalled `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: "", source: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Interrupted, error: JoinError::Cancelled })) }called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: "", source: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Interrupted, error: JoinError::Cancelled })) }' panicked at '', ', called `Result::unwrap()` on an `Err` value: reqwest::Error { kind: Request, url: "", source: hyper::Error(Connect, ConnectError("dns error", Custom { kind: Interrupted, error: JoinError::Cancelled })) }src\main.rssrc\', ::src\main.rs1717:::241724

That's a big error message, but the important bit for us is a bunch of JoinError::Cancelled stuff all over the place.

Wait for me!

Let's talk through what's happening in our program:

  1. Initiate the Tokio runtime
  2. Create a Client
  3. Open the file, start reading line by line
  4. For each line:
    • Spawn a new task
    • That task starts making non-blocking I/O calls
    • Those tasks go to sleep, to be rescheduled when data is ready
    • When all is said and done, print out the CSV lines
  5. Reach the end of the main function, which triggers the runtime to shut down

The problem is that we reach (5) long before we finish (4). When this happens, all in-flight I/O will be cancelled, which leads to the error messages we saw above. Instead, we need to ensure we wait for each task to complete before we exit. The easiest way to do this is to call .await on the result of the tokio::spawn call. (Those results, by the way, are called JoinHandles.) However, doing so immediately will completely defeat the purpose of our concurrent work, since we will once again be sequential!
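The collect-the-handles-then-wait shape can be sketched with plain OS threads from the standard library (the `spawn_and_join` helper is hypothetical, written only to mirror what we need to do with tokio's JoinHandles):

```rust
use std::thread;

// Spawn one unit of work per input and wait for all of them,
// mirroring the JoinHandle bookkeeping we need with tokio::spawn.
pub fn spawn_and_join(n: i32) -> Vec<i32> {
    let mut handles = Vec::new();
    for i in 0..n {
        // thread::spawn, like tokio::spawn, returns a JoinHandle
        handles.push(thread::spawn(move || i * 2));
    }
    // Joining every handle guarantees no work is abandoned at exit
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let results = spawn_and_join(4);
    assert_eq!(results, vec![0, 2, 4, 6]);
    println!("{:?}", results);
}
```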

Instead, we want to spawn all of the tasks, and then wait for them all to complete. One easy way to achieve this is to put all of the JoinHandles into a Vec. Let's look at the code. And since we've made a bunch of changes since our last complete code dump, I'll show you the full current status of our source file:

use std::io::BufRead;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = std::fs::File::open("urls.txt")?;
    let buffile = std::io::BufReader::new(file);

    let client = reqwest::Client::new();

    let mut handles = Vec::new();

    for line in buffile.lines() {
        let line = line?;

        let client = client.clone();
        let handle = tokio::spawn(async move {
            let resp = client.get(&line).send().await.unwrap();

            println!("{},{}", line, resp.status().as_u16());
        });

        handles.push(handle);
    }

    for handle in handles {
        handle.await?;
    }

    Ok(())
}
And finally we have a concurrent program! This is actually pretty good, but it has two flaws we'd like to fix:

  1. It doesn't properly handle errors, instead just using .unwrap(). I mentioned this above, and said our usage of .unwrap() was indicating a "far more serious issue." That issue was the fact that the result values from spawning subthreads are never noticed by the main thread, which is really the core issue causing the cancellation we discussed above. It's always nice when type-driven error messages indicate a runtime bug in our code!
  2. There's no limitation on the number of concurrent tasks we'll spawn. Ideally, we'd rather have a job queue approach, with a dedicated number of worker tasks. This will let our program behave better as we increase the number of URLs in our input file.

NOTE It would be possible in the program above to skip the spawns and collect a Vec of Futures, then await on those. However, that would once again end up sequential in nature. Spawning allows all of those Futures to run concurrently, and be polled by the tokio runtime itself. It would also be possible to use join_all to poll all of the Futures, but it has some performance issues. So best to stick with tokio::spawn.

Let's address the simpler one first: proper error handling.

Error handling

The basic concept of error handling is that we want the errors from the spawned tasks to be detected in the main tasks, and then cause the application to exit. One way to handle that is to return the Err values from the spawned tasks directly, and then pick them up with the JoinHandle that spawn returns. This sounds nice, but naively implemented will result in checking the error responses one at a time. Instead, we'd rather fail early, by detecting that (for example) the 57th request failed and immediately terminating the application.

You could build some kind of "tell me which JoinHandle finishes first" mechanism, but it's not the way I initially implemented it, and some quick Googling indicated you'd have to be careful about which library functions you use. Instead, we'll try a different approach using an mpsc (multi-producer, single-consumer) channel.

Here's the basic idea. Let's pretend there are 100 URLs in the file. We'll spawn 100 tasks. Each of those tasks will write a single value onto the mpsc channel: a Result<(), Error>. Then, in the main task, we'll read 100 values off of the channel. If any of them are Err, we exit the program immediately. Otherwise, if we read off 100 Ok values, we exit successfully.

Before we read the file, we don't know how many lines will be in it. So we're going to use an unbounded channel. This isn't generally recommended practice, but it ties in closely with my second complaint above: we're spawning a separate task for each line in the file instead of doing something more intelligent like a job queue. In other words, if we can safely spawn N tasks, we can safely have an unbounded channel of size N.
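The counting protocol itself can be sketched with the standard library's own mpsc channel and threads. Everything here is hypothetical illustration: the even/odd "work" stands in for HTTP requests, and `String` stands in for the real error type:

```rust
use std::sync::mpsc;
use std::thread;

// Each worker sends exactly one Result down the channel; the main
// thread reads exactly `count` messages and stops at the first Err.
pub fn run_workers(inputs: Vec<u32>) -> Result<(), String> {
    let (tx, rx) = mpsc::channel();
    let count = inputs.len();
    for i in inputs {
        let tx = tx.clone();
        thread::spawn(move || {
            // Pretend work: even inputs succeed, odd inputs fail
            let msg = if i % 2 == 0 {
                Ok(())
            } else {
                Err(format!("odd input: {}", i))
            };
            let _ = tx.send(msg); // ignore send errors, as in the tokio code
        });
    }
    drop(tx); // drop the original sender so only the workers hold one
    for _ in 0..count {
        match rx.recv() {
            Ok(Ok(())) => (),
            Ok(Err(e)) => return Err(e), // fail fast on the first error
            Err(_) => return Err("channel closed early".to_string()),
        }
    }
    Ok(())
}

fn main() {
    assert_eq!(run_workers(vec![2, 4, 6]), Ok(()));
    assert!(run_workers(vec![2, 3, 4]).is_err());
    println!("worker protocol behaves as expected");
}
```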

Alright, let's see the code in question!

use std::io::BufRead;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = std::fs::File::open("urls.txt")?;
    let buffile = std::io::BufReader::new(file);

    let client = reqwest::Client::new();

    // Create the channel. tx will be the sending side (each spawned task),
    // and rx will be the receiving side (the main task after spawning).
    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();

    // Keep track of how many lines are in the file, and therefore
    // how many tasks we spawned
    let mut count = 0;

    for line in buffile.lines() {
        let line = line?;

        let client = client.clone();
        // Each spawned task gets its own copy of tx
        let tx = tx.clone();
        tokio::spawn(async move {
            // Use a map to say: if the request went through
            // successfully, then print it. Otherwise:
            // keep the error
            let msg = client.get(&line).send().await.map(|resp| {
                println!("{},{}", line, resp.status().as_u16());
            });

            // And send the message to the channel. We ignore errors here.
            // An error during sending would mean that the receiving side
            // is already closed, which would indicate either programmer
            // error, or that our application is shutting down because
            // another task generated an error.
            let _ = tx.send(msg);
        });

        // Increase the count of spawned tasks
        count += 1;
    }

    // Drop the sending side, so that we get a None when
    // calling rx.recv() one final time. This allows us to
    // test some extra assertions below
    std::mem::drop(tx);

    let mut i = 0;
    loop {
        match rx.recv().await {
            // All senders are gone, which must mean that
            // we're at the end of our loop
            None => {
                assert_eq!(i, count);
                break Ok(());
            }
            // Something finished successfully, make sure
            // that we haven't reached the final item yet
            Some(Ok(())) => {
                assert!(i < count);
            }
            // Oops, an error! Time to exit!
            Some(Err(e)) => {
                assert!(i < count);
                return Err(From::from(e));
            }
        }
        i += 1;
    }
}

With this in place, we now have a proper concurrent program that does error handling correctly. Nifty! Before we hit the job queue, let's clean this up a bit.


The previous code works well. It allows us to spawn multiple worker tasks, and then wait for all of them to complete, handling errors when they occur. Let's generalize this! We're doing this now since it will make the final step in this blog post much easier.

We'll put all of the code for this in a separate module of our project. The code will be mostly the same as what we had before, except we'll have a nice struct to hold onto our data, and we'll be more explicit about the error type. Put this code into src/

use is_type::Is; // fun trick, we'll look at it below
use std::future::Future;
use tokio::sync::mpsc;

/// Spawn and then run workers to completion, handling errors
pub struct Workers<E> {
    count: usize,
    tx: mpsc::UnboundedSender<Result<(), E>>,
    rx: mpsc::UnboundedReceiver<Result<(), E>>,
}

impl<E: Send + 'static> Workers<E> {
    /// Create a new Workers value
    pub fn new() -> Self {
        let (tx, rx) = mpsc::unbounded_channel();
        Workers { count: 0, tx, rx }
    }

    /// Spawn a new task to run inside this Workers
    pub fn spawn<T>(&mut self, task: T)
    where
        // Make sure we can run the task
        T: Future + Send + 'static,
        // And a weird trick: make sure that the output
        // from the task is Result<(), E>
        // Equality constraints would make this much nicer
        // See:
        T::Output: Is<Type = Result<(), E>>,
    {
        // Get a new copy of the send side
        let tx = self.tx.clone();
        // Spawn a new task
        tokio::spawn(async move {
            // Run the provided task and get its result
            let res = task.await;
            // Send the result to the channel
            // This should never fail, so we panic if something goes wrong
            match tx.send(res.into_val()) {
                Ok(()) => (),
                // could use .unwrap, but that would require Debug constraint
                Err(_) => panic!("Impossible happened! tx.send failed"),
            }
        });
        // One more worker to wait for
        self.count += 1;
    }

    /// Finish running all of the workers, exiting when the first one errors or all of them complete
    pub async fn run(mut self) -> Result<(), E> {
        // Make sure we don't wait for ourself here
        std::mem::drop(self.tx);
        // How many workers have completed?
        let mut i = 0;

        loop {
            match self.rx.recv().await {
                None => {
                    assert_eq!(i, self.count);
                    break Ok(());
                }
                Some(Ok(())) => {
                    assert!(i < self.count);
                }
                Some(Err(e)) => {
                    assert!(i < self.count);
                    return Err(e);
                }
            }
            i += 1;
        }
    }
}

Now in src/, we get to focus on just our business logic... and error handling. Have a look at the new contents:

// Indicate that we have another module
mod workers;

use std::io::BufRead;

/// Create a new error type to handle the two ways errors can happen.
// Debug is needed so main can report the error on exit.
#[derive(Debug)]
enum AppError {
    // (variant names here are one reasonable choice; only the
    // From impls below matter for the ? syntax)
    IO(std::io::Error),
    Reqwest(reqwest::Error),
}

// And now implement some boilerplate From impls to support ? syntax
impl From<std::io::Error> for AppError {
    fn from(e: std::io::Error) -> Self {
        AppError::IO(e)
    }
}

impl From<reqwest::Error> for AppError {
    fn from(e: reqwest::Error) -> Self {
        AppError::Reqwest(e)
    }
}

#[tokio::main]
async fn main() -> Result<(), AppError> {
    let file = std::fs::File::open("urls.txt")?;
    let buffile = std::io::BufReader::new(file);

    let client = reqwest::Client::new();
    let mut workers = workers::Workers::new();

    for line in buffile.lines() {
        let line = line?;
        let client = client.clone();
        // Use workers.spawn, and no longer worry about results
        // ? works just fine inside!
        workers.spawn(async move {
            let resp = client.get(&line).send().await?;
            println!("{},{}", line, resp.status().as_u16());
            Ok(())
        });
    }

    // Wait for the workers to complete
    workers.run().await
}

There's more noise around error handling, but overall the code is easier to understand. Now that we have that out of the way, we're finally ready to tackle the last piece of this...

Job queue

Let's review again at a high level how we do error handling with workers. We set up a channel to allow each worker task to send its results to a single receiver, the main task. We used mpsc, or "multi-producer single-consumer." That matches up with what we just described, right?

OK, a job queue is kind of similar. We want to have a single task that reads lines from the file and feeds them into a channel. Then, we want multiple workers to read values from the channel. This is "single-producer multi-consumer." Unfortunately, tokio doesn't provide such a channel out of the box. After I asked on Twitter, I was recommended to use async-channel, which provides a "multi-producer multi-consumer." That works for us!
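Before wiring in async-channel, the multi-consumer idea can be sketched with the standard library by sharing one receiver behind a mutex. The `process_jobs` helper and its doubling "work" are hypothetical, purely to show the shape:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// std's mpsc Receiver can't be cloned, but Arc<Mutex<Receiver>> gives
// a poor man's multi-consumer queue: each worker locks the receiver
// just long enough to pull one job.
pub fn process_jobs(jobs: Vec<u32>, workers: usize) -> u32 {
    let (tx, rx) = mpsc::channel();
    for job in jobs {
        tx.send(job).unwrap();
    }
    drop(tx); // close the channel: recv() errors once it's drained
    let rx = Arc::new(Mutex::new(rx));
    let mut handles = Vec::new();
    for _ in 0..workers {
        let rx = Arc::clone(&rx);
        handles.push(thread::spawn(move || {
            let mut sum = 0;
            loop {
                // Take one job at a time; a recv error means "queue closed"
                let job = match rx.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => break,
                };
                sum += job * 2;
            }
            sum
        }));
    }
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // 1+2+3+4 doubled = 20, regardless of how the work is divided
    assert_eq!(process_jobs(vec![1, 2, 3, 4], 3), 20);
    println!("total = {}", process_jobs(vec![1, 2, 3, 4], 3));
}
```

A dedicated mpmc channel like async-channel avoids the mutex and plays nicely with async tasks, which is why we reach for it below.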

Thanks to our work before with the Workers struct refactor, this is now pretty easy. Let's have a look at the modified main function:

#[tokio::main]
async fn main() -> Result<(), AppError> {
    let file = std::fs::File::open("urls.txt")?;
    let buffile = std::io::BufReader::new(file);

    // Feel free to change this to any number (> 0) you want
    // At a value of 4, this could comfortably fit in OS threads
    // But tasks are certainly up to the challenge, and will scale
    // up more nicely for large numbers and more complex applications
    const WORKERS: usize = 4;
    let client = reqwest::Client::new();
    let mut workers = workers::Workers::new();
    // Buffers double the size of the number of workers are common
    let (tx, rx) = async_channel::bounded(WORKERS * 2);

    // Spawn the task to fill up the queue
    workers.spawn(async move {
        for line in buffile.lines() {
            let line = line?;
            // If sending fails, the workers have all exited early
            // (after reporting an error), so stop feeding the queue.
            if tx.send(line).await.is_err() {
                break;
            }
        }
        Ok(())
    });

    // Spawn off the individual workers
    for _ in 0..WORKERS {
        let client = client.clone();
        let rx = rx.clone();
        workers.spawn(async move {
            loop {
                match rx.recv().await {
                    // uses Err to represent a closed channel due to tx being dropped
                    Err(_) => break Ok(()),
                    Ok(line) => {
                        let resp = client.get(&line).send().await?;
                        println!("{},{}", line, resp.status().as_u16());
                    }
                }
            }
        });
    }

    // Wait for the workers to complete
    workers.run().await
}

And just like that, we have a concurrent job queue! It's everything we could have wanted!


I'll admit, when I wrote the post last week, I didn't think I'd be going this deep into the topic. But once I started playing with solutions, I decided I wanted to implement a full job queue for this.

I hope you found this topic interesting! If you want more Rust content, please hit me up on Twitter. Also, feel free to check out some of our other Rust content:

September 02, 2020 12:00 AM

September 01, 2020

Douglas M. Auclair (geophf)

August 2020 1HaskellADay Problems and Solutions

by geophf at September 01, 2020 08:22 PM

February 2019 Haskell 1-liners


  • February 18th, 2019:
    Define ext :: (Maybe a, b) -> Maybe (a,b)
    e.g.: ext (Just 5, "Hi") = Just (5, "Hi")
    • Al͜l ̸͑ha͂͟il̶! @TechnoEmpress \o/ 
    • cλementd @clementd `fmap swap. sequenceA . swap` :-)
    • Raveline @Raveline bisequence . second pure
    • Alexey Radkov @sheshanaag uncurry (flip $ fmap . flip (,))
    • a fool @fresheyeball ext = \case (Just x, y) -> Just (x, y); _ -> Nothing

by geophf at September 01, 2020 06:17 PM

Philip Wadler

English universities are in peril because of 10 years of calamitous reform


Stefan Collini writes in the Guardian.

Then there is the rather less obvious contradiction between consumerism and education. Our higher education system is at present structurally consumerist. Even now, it is not widely understood how revolutionary were the changes introduced in 2010-12 by the coalition government in England and Wales (Scotland wisely followed another course). It wasn’t simply a “rise in fees”. It was a redefinition of universities in terms of a market model. The Office for Students is explicitly a “consumer watchdog”. Consumers are defined by their wants; in exchange for payment they are “entitled” to get what they ask for. ...

Universities are, by a long way, the main centres of research and scholarship in our societies; they curate the greater part of our intellectual and cultural inheritance; they provide by far the best source of disinterested expertise; they select and prepare those who will be the scholars and scientists of the future, and so on. Countries all over the world have found that you cannot fulfil these functions by distributing students and academics across all institutions either uniformly or randomly. Some element of selection and concentration is needed, and that brings with it some element of hierarchy, however unofficial. Explicit differentiation of function among higher education institutions might well be preferable to any pretence that they are all doing the same thing and doing it equally well.

by Philip Wadler at September 01, 2020 06:07 PM

Don Stewart (dons)

Bootstrapping a community via hackathons

I recently gave an interview to Jasper Van der Jeugt as part of the Haskell Zurich Meetup, on the history of hackathons in the Haskell community, and how we intentionally tried to bootstrap and grow an open source tooling and infra team for Haskell, via hackathons, in the 2005-2010 period.

Prior to the launch of cabal and hackage the Haskell development experience was “choose a compiler” and “use fptools” as the core library. There were very few 3rd party libraries (< 20 ?) , only a barebones package system and no centralized distribution of packages.

It was really clear by 2005 that we needed to invest in tooling: build system, package management and package distribution. But without corporate funding for infrastructure, who would do the work? We needed to bootstrap an open source package infrastructure team. Enter the hackathons.

In 2007 we met in Oxford to hack for 3 days to launch Hackage, and make it possible to upload and share packages for Haskell. To do this we wanted to link the build system (cabal) to the package management, upload and download (hackage), leading to the modern world of packages for Haskell, which rapidly accelerated into 10s of thousands of libraries.

The first Haskell infrastructure hackathon team that launched Hackage back in 2007.

Looking back this was a pivotal moment: after Hackage , the open source community rapidly became the primary producer of new Haskell code. Corporate sponsorship of the community increased and a wave of corporate adoption was enabled due to the package distribution system. A research community became a (much larger) open source and then commercial software engineering community. And the key steps were Hackage and Cabal, and some polished core libraries that worked together.

You can see the lessons learned echoed in systems like the Rust cargo and crate system, now. Good languages become sustainable when they become viable open source communities around packages.

You can listen to the interview here:

by Don Stewart at September 01, 2020 12:13 PM

August 31, 2020

Douglas M. Auclair (geophf)

August 2020 1HaskellADay 1Liners

  • 2020-08-31:

    #Haskell #BetterWithCurry #1Liner For:

    >>> :t Map.filterWithKey
    Map.filterWithKey :: (k -> a -> Bool) -> Map k a -> Map k a

    we have this filtering function: \key _val -> not (Set.member key stoppers)

    _val is unused. Curry it away.

  • 2020-08-28: 
    rmFront :: Set Char -> String -> String
    rmFront weirds str = dropWhile (flip Set.member weirds) str
    Simple currying questions: can this function-implementation be simplified with currying? Can it be simplified ... MORE? Answers: yes, and yes. Show your implementation.

  • 2020-08-26: We have this: \info -> importBook info >>= return . (info,) There are way too many info-references. What's a better way to write this expression?
    • Five solutions from @noaheasterly:
      • runKleisli (id &&& Kleisli importBook)
      • liftA2 (liftA2 (,)) return importBook
      • liftA2 (fmap . (,)) id importBook
      • traverse importBook . join (,)
      • traverse importBook . (id &&& id)

by geophf at August 31, 2020 09:22 PM

Auke Booij

Property testing property testers

Show the Haskell imports
import Numeric.Natural
import Data.Maybe

Property testers such as QuickCheck are used to search for inputs that invalidate a given property. We focus on properties that are implemented in Haskell as a function a -> Bool. In the context of QuickCheck, the type a is assumed to be an instance of the Arbitrary type class, allowing us to randomly generate values of a. Because of its random nature, QuickCheck might not find an offending element of a even if one exists.

type Property a = a -> Bool

If the property tester managed to invalidate the property, it returns the offending element of a. Otherwise, it simply succeeds.

data Result counterexample
  = Success
  | Failure counterexample

instance Show (Result counterexample) where
  show Success = "Success"
  show (Failure _) = "Failure"

type Tester a = Property a -> Result a

It is quite well-known in the functional programming community that there exist infinite exhaustively testable types. That is, in finite time we can exhaustively test whether a given property holds for all input values of type a, even, in some cases, when a is infinite.

A prototypical infinite exhaustively testable type, and the one we’ll focus on in this blog post, is the Cantor space of functions from the natural numbers to the Booleans.

type Cantor = Natural -> Bool

Expanding the above definition, exhaustive testability of Cantor space means that we can write a testing process of type Tester Cantor = Property Cantor -> Result Cantor that, for any property on Cantor space, succeeds when the property holds for all values of type Cantor, and returns a counterexample when there is a value of type Cantor for which it fails.

(#) :: Bool -> Cantor -> Cantor
b # f = \i -> if i == 0 then b else f (i-1)

-- Try to find a probe that invalidates the property.
find :: Property Cantor -> Cantor
find prop = h # find (\a -> prop (h # a))
  where h = prop (False # find (\a -> prop (False # a)))

-- Exhaustively test a property on Cantor space
exhaustiveCheck :: Tester Cantor
exhaustiveCheck prop =
  if prop probe
    -- The property holds on all inputs
    then Success
    -- The property fails; 'probe' is a counterexample
    else Failure probe
  where probe = find prop

The above fact has seen little application outside of papers and blog posts. This text, being a blog post, is no different. The raison d’être of this post is merely to, once again, observe this surprising fact, and to do something fun with it, as appears to be done every couple of years.

So what might we do with an exhaustive testing process Property Cantor -> Result Cantor? What property of Cantor space might we wish to exhaustively test? Such a property might test input-output behavior of a function that takes elements of Cantor space as input (though not necessarily returning Booleans). However, when we are thinking of QuickCheck-able programs, we normally think of types that are isomorphic to the natural numbers, such as strings and binary trees. “Oh, but that type is isomorphic to Cantor space!” is not the one-liner it could be among Haskellers.

Here’s the key observation of this blog post: Cantor = Property Natural.

So here’s a proposal for program we might want to phrase properties about: the QuickCheck-style property tester itself. After all, we may see a property tester for natural numbers as a map Cantor -> Result Natural, that takes a property of natural numbers (e.g. whether taking the square of a natural always results in an even number) and tells us whether it managed to find any inputs for which the property returns False (in this case, one might hope so).

Here are some sample implementations of property testers for us to play around with, which have some intentional shortcomings.

-- A property tester that doesn't check the property against any
-- inputs and simply always succeeds
testNothing :: Tester Natural
-- A property tester that only tests the input 0
testZero :: Tester Natural
-- A property tester that tests the property on inputs 0 through 100
test100 :: Tester Natural
-- A property tester that tests the property on inputs 100 through 200
testNext100 :: Tester Natural
-- A property tester that reports an incorrect witness of failure
testWrongWitness :: Tester Natural
-- A property tester that sometimes reports an incorrect witness:
-- only when the property holds for [0..100] and 100000,
-- but fails at 12345 and 26535
testWrongWitnessSubtle :: Tester Natural
Show the implementations of these testers
testNothing _ = Success

testZero prop =
  if prop 0
    then Success
    else Failure 0

test100 prop = case tests of
    [] -> Success
    i:_ -> Failure i
  where
    tests = mapMaybe (\i -> if prop i then Nothing else Just i) [0..100]

testNext100 prop = case tests of
    [] -> Success
    i:_ -> Failure i
  where
    tests = mapMaybe (\i -> if prop i then Nothing else Just i) [100..200]

testWrongWitness prop =
  if prop 12345
    then Success
    else Failure 9876

testWrongWitnessSubtle prop = case test100 prop of
  Success ->
    if prop 100000
      then case (prop 26535, prop 12345) of
        (True, True) -> Success
        (False, True) -> Failure 26535
        (True, False) -> Failure 12345
        (False, False) -> Failure 9876 -- Wrong witness!
      else Failure 100000
  Failure i -> Failure i

So what property can we phrase for a given property tester to satisfy? We can require that the property tester always tests the property at input 0. More precisely, if the property passes testing, then the property itself needs to hold at 0. We would expect testNothing to fail this test, but testZero and test100 to satisfy it.

prop_alwaysTestZero :: Tester Natural -> Property (Property Natural)
prop_alwaysTestZero tester prop =
  case tester prop of
    Success -> prop 0
    Failure _ -> True

Note that we can’t directly check whether the property tester actually evaluated the property at 0: we only observe input-output behavior of the property tester. So in the case of a test failure, we can’t say anything about the required behavior of the property tester, since it might still fail because the property is false at another input. However, this is not reducing the power of our test at all, since we’re exhaustively testing over all properties of the naturals.

Just to drive this point home: for a given property tester tester, the property prop_alwaysTestZero tester isn’t just QuickCheck-able, in the sense that we can generate 100 inputs and see whether the property holds for those 100 inputs. It is exhaustively checkable, despite it stating a property of an infinite space. If the exhaustive check succeeds, we know that prop_alwaysTestZero tester holds for all properties on the naturals.

At this point, it is important to realize that, in the context of property testing, properties must in fact be decidable, rather than being arbitrary formulae in first-order logic (say). This restricts what we can test. In particular, we can’t test the claim “if the property always holds, then the property tester always succeeds”: because it is not decidable whether the input property always holds, so this claim itself is not a decidable property. But, since if the property tester fails, it also outputs a witness of this, we can test the contrapositive. So here is an exhaustively testable property expressing that if a property fails the property tester, then the witness provided by the property tester is actually an input for which the property is False. We would expect testNothing, testZero and test100 to satisfy it, but testWrongWitness and testWrongWitnessSubtle to fail it.

prop_failureActuallyFails :: Tester Natural -> Property (Property Natural)
prop_failureActuallyFails tester prop =
  case tester prop of
    Success -> True
    -- The property tester found a counterexample, so that better be one
    Failure n -> not (prop n)

This, I think, is a key property of property testers that is currently untested for QuickCheck. To be clear, I would expect QuickCheck to satisfy it, but strictly speaking it’s essential behavior that ought to be in the test suite.

QuickCheck attempts to shrink counterexamples: once it has randomly encountered a counterexample, it tries to find another which is intended to be smaller or simpler, in a sense specified by the implementation of shrink as part of the Arbitrary type class. We might therefore expect that if a tester finds a counterexample, it picks a minimal one, in the sense that applying shrink to it does not yield any counterexamples that are smaller. For the natural numbers, this means that when we obtain a Failure i, then any natural smaller than i should not be a counterexample.

prop_failureMinimal :: Tester Natural -> Property (Property Natural)
prop_failureMinimal tester prop =
  case tester prop of
    Success -> True
    Failure i -> all prop (init [0..i])

Here’s how we can run the exhaustive testing:

main :: IO ()
main = do
  putStr "Testing whether testNothing tests 0 (expecting Failure): "
  print $ exhaustiveCheck (prop_alwaysTestZero testNothing)
  putStr "Testing whether testZero tests 0 (expecting Success): "
  print $ exhaustiveCheck (prop_alwaysTestZero testZero)
  putStr "Testing whether test100 tests 0 (expecting Success): "
  print $ exhaustiveCheck (prop_alwaysTestZero test100)
  putStr "Testing whether test100 yields minimal failure witnesses (expecting Success): "
  print $ exhaustiveCheck (prop_failureMinimal test100)
  putStr "Testing whether testNext100 yields minimal failure witnesses (expecting Failure): "
  print $ exhaustiveCheck (prop_failureMinimal testNext100)
  putStr "Testing whether test100 gives valid failure witnesses (expecting Success): "
  print $ exhaustiveCheck (prop_failureActuallyFails test100)
  putStr "Testing whether testWrongWitness gives valid failure witnesses (expecting Failure): "
  print $ exhaustiveCheck (prop_failureActuallyFails testWrongWitness)
  putStr "Testing whether testWrongWitnessSubtle gives valid failure witnesses (expecting Failure): "
  print $ exhaustiveCheck (prop_failureActuallyFails testWrongWitnessSubtle)

I think the last test case best demonstrates the power of this approach. Normal Arbitrary-based property testing has no chance to catch the subtle bug in the implementation of the property tester testWrongWitnessSubtle. It would take deep inspection of the code of testWrongWitnessSubtle to write a counterexample manually. But this exhaustive technique finds a counterexample within 0.3 seconds on my machine, without any static analysis.

So why is this just a blog post rather than a pull request?

In reality, QuickCheck uses the IO monad to print additional information to the terminal, and to generate randomness, so that in this instance the true type of quickCheck would be closer to (Natural -> Bool) -> IO (Result Natural). But we may imagine a pure variant of QuickCheck whose random number generator can be pre-seeded, and which computes a result purely without writing anything to the terminal. I’m not sure the above idea is adequate motivation to refactor QuickCheck in such a fundamental way. But, if anybody wants to work on this, do let me know.

Good luck to the next person writing a blog post about infinite searchable types!

by Auke at August 31, 2020 06:52 PM

Monday Morning Haskell

Cleaning our Rust with Monadic Functions


A couple weeks ago we explored how to add authentication to a Rocket Rust server. This involved writing a from_request function that was very messy. You can see the original version of that function as an appendix at the bottom. But this week, we're going to try to improve that function! We'll explore functions like map and and_then in Rust. These can help us write cleaner code using similar ideas to functors and monads in Haskell.

For more details on this code, take a look at our Github Repo! For a simpler introduction to Rust, take a look at our Rust Beginners Series!

Closures and Mapping

First, let's talk a bit about Rust's equivalent to fmap and functors. Suppose we have a simple option wrapper and a "doubling" function:

fn double(x: f64) -> f64 {
  2.0 * x
}

fn main() -> () {
  let x: Option<f64> = Some(5.0);
}

We'd like to pass our x value to the double function, but it's wrapped in the Option type. A logical thing to do would be to return None if the input is None, and otherwise apply the function and re-wrap in Some. In Haskell, we describe this behavior with the Functor class. Rust's approach has some similarities and some differences.
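The "propagate None, otherwise apply and re-wrap" behavior can be hand-rolled with a match. This sketch (map_double is a made-up helper name, not from the article) shows exactly what map saves us from writing:

```rust
fn double(x: f64) -> f64 {
    2.0 * x
}

// Hand-rolled version of mapping over Option: propagate None,
// otherwise apply the function and re-wrap the result in Some.
fn map_double(x: Option<f64>) -> Option<f64> {
    match x {
        None => None,
        Some(v) => Some(double(v)),
    }
}

fn main() {
    println!("{:?}", map_double(Some(5.0))); // Some(10.0)
    println!("{:?}", map_double(None)); // None
}
```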

Instead of Functor, Rust has the trait Iterator. An iterator can produce any number of items of its wrapped type, and map is one of the functions we can call on iterators. Option isn't literally an iterator, but it provides a map method of its own with the same shape. As in Haskell, we provide a function that transforms the underlying items. Here's how we can apply our simple example with an Option:

fn main() -> () {
  let x: Option<f64> = Some(5.0);
  let y: Option<f64> =;
}

One notable difference from Haskell is that map is a member function of the iterator type. In Haskell of course, there's no such thing as member functions, so fmap exists on its own.

In Haskell, we can use lambda expressions as arguments to higher order functions. In Rust, it's the same, but they're referred to as closures instead. The syntax is rather different as well. We capture the particular parameters within bars, and then provide a brace-delimited code-block. Here's a simple example:

fn main() -> () {
  let x: Option<f64> = Some(5.0);
  let y: Option<f64> =|x| {2.0 * x});
}

Type annotations are also possible (and sometimes necessary) when specifying the closure. Unlike Haskell, we provide these on the same line as the definition:

fn main() -> () {
  let x: Option<f64> = Some(5.0);
  let y: Option<f64> =|x: f64| -> f64 {2.0 * x});
}

And Then…

Now using map is all well and good, but our authentication example involved using the result of one effectful call in the next effect. As most Haskellers can tell you, this is a job for monads and not merely functors. We can capture some of the same effects of monads with the and_then function in Rust. This works a lot like the bind operator (>>=) in Haskell. It also takes an input function. And this function takes a pure input but produces an effectful output.

Here's how we apply it with Option. We start with a safe_square_root function that produces None when its input is negative. Then we can take our original Option and use and_then to apply the square root function.

fn safe_square_root(x: f64) -> Option<f64> {
  if x < 0.0 {
    None
  } else {
    Some(x.sqrt())
  }
}

fn main() -> () {
  let x: Option<f64> = Some(5.0);
  let y: Option<f64> = x.and_then(safe_square_root);
}
Converting to Outcomes

Now let's switch gears to our authentication example. Our final result type wasn't Option, though some intermediate results used it. In the end, we wanted an Outcome. So to help us on our way, let's write a simple function to convert our options into outcomes. We'll have to provide the extra information of what the failure result should be. This is the status_error parameter.

fn option_to_outcome<R>(
  result: Option<R>,
  status_error: (Status, LoginError))
  -> Outcome<R, LoginError> {
    match result {
        Some(r) => Outcome::Success(r),
        None => Outcome::Failure(status_error)
    }
}
Now let's start our refactoring process. To begin, let's examine the retrieval of our username and password from the headers. We'll make a separate function for this. This should return an Outcome, where the success value is a tuple of two strings. We'll start by defining our failure outcome, a tuple of a status and our LoginError.

fn read_auth_from_headers(headers: &HeaderMap)
  -> Outcome<(String, String), LoginError> {
    let fail = (Status::BadRequest, LoginError::InvalidData);
    ...
}

We'll first retrieve the username out of the headers. Recall that this operation returns an Option. So we can convert it to an Outcome using our function. We can then use and_then with a closure taking the unwrapped username.

fn read_auth_from_headers(headers: &HeaderMap)
  -> Outcome<(String, String), LoginError> {
    let fail = (Status::BadRequest, LoginError::InvalidData);
    option_to_outcome(headers.get_one("username"), fail.clone())
        .and_then(|u| -> Outcome<(String, String), LoginError> {
            ...
        })
}

We can then do the same thing with the password field. When we've successfully unwrapped both fields, we can return our final Success outcome.

fn read_auth_from_headers(headers: &HeaderMap)
  -> Outcome<(String, String), LoginError> {
    let fail = (Status::BadRequest, LoginError::InvalidData);
    option_to_outcome(headers.get_one("username"), fail.clone())
        .and_then(|u| {
            option_to_outcome(
              headers.get_one("password"), fail.clone())
                .and_then(|p| {
                    Outcome::Success(
                      (String::from(u), String::from(p)))
                })
        })
}


Armed with this function we can start re-tooling our from_request function. We'll start by gathering the header results and invoking and_then. This unwraps the username and password:

impl<'a, 'r> FromRequest<'a, 'r> for AuthenticatedUser {
    type Error = LoginError;
    fn from_request(request: &'a Request<'r>)
      -> Outcome<AuthenticatedUser, LoginError> {
        let headers_result = read_auth_from_headers(request.headers());
        headers_result.and_then(|(u, p)| {
            ...
        })
    }
}

Now for the next step, we'll make a couple database calls. Both of our normal functions return Option values. So for each, we'll create a failure Outcome and invoke option_to_outcome. We'll follow this up with a call to and_then. First we get the user based on the username. Then we find their AuthInfo using the ID.

impl<'a, 'r> FromRequest<'a, 'r> for AuthenticatedUser {
    type Error = LoginError;
    fn from_request(request: &'a Request<'r>)
      -> Outcome<AuthenticatedUser, LoginError> {
        let headers_result = read_auth_from_headers(request.headers());
        headers_result.and_then(|(u, p)| {
            let conn_str = local_conn_string();
            let maybe_user =
                  fetch_user_by_email(&conn_str, &String::from(u));
            let fail1 =
                 (Status::NotFound, LoginError::UsernameDoesNotExist);
            option_to_outcome(maybe_user, fail1)
                .and_then(|user: UserEntity| {
                    let fail2 = (Status::MovedPermanently,
                        LoginError::WrongPassword);
                    option_to_outcome(fetch_auth_info_by_user_id(
                        &conn_str,, fail2)
                        .and_then(|auth_info: AuthInfoEntity| {
                            ...
                        })
                })
        })
    }
}

This gives us unwrapped authentication info. We can use this to compare the hash of the original password and return our final Outcome!

impl<'a, 'r> FromRequest<'a, 'r> for AuthenticatedUser {
    type Error = LoginError;
    fn from_request(request: &'a Request<'r>)
      -> Outcome<AuthenticatedUser, LoginError> {
        let headers_result = read_auth_from_headers(request.headers());
        headers_result.and_then(|(u, p)| {
            let conn_str = local_conn_string();
            let maybe_user =
                  fetch_user_by_email(&conn_str, &String::from(u));
            let fail1 =
                 (Status::NotFound, LoginError::UsernameDoesNotExist);
            option_to_outcome(maybe_user, fail1)
                .and_then(|user: UserEntity| {
                    let fail2 = (Status::MovedPermanently,
                        LoginError::WrongPassword);
                    option_to_outcome(fetch_auth_info_by_user_id(
                        &conn_str,, fail2)
                        .and_then(|auth_info: AuthInfoEntity| {
                            let hash = hash_password(&String::from(p));
                            if hash == auth_info.password_hash {
                                Outcome::Success(AuthenticatedUser{
                                    user_id: auth_info.user_id})
                            } else {
                                Outcome::Failure((Status::Forbidden,
                                    LoginError::WrongPassword))
                            }
                        })
                })
        })
    }
}

Is this new solution that much better than our original? Well, it avoids the "triangle of death" pattern in our code. But it's not necessarily much shorter. Perhaps it's a little cleaner on the whole, though. Ultimately these code choices are up to you! Next time, we'll wrap up our current exploration of Rust by seeing how to profile our code in Rust.

This series has covered some more advanced topics in Rust. For a more in-depth introduction, check out our Rust Video Tutorial!

Appendix: Original Function

impl<'a, 'r> FromRequest<'a, 'r> for AuthenticatedUser {
    type Error = LoginError;
    fn from_request(request: &'a Request<'r>) -> Outcome<AuthenticatedUser, LoginError> {
        let username = request.headers().get_one("username");
        let password = request.headers().get_one("password");
        match (username, password) {
            (Some(u), Some(p)) => {
                let conn_str = local_conn_string();
                let maybe_user = fetch_user_by_email(&conn_str, &String::from(u));
                match maybe_user {
                    Some(user) => {
                        let maybe_auth_info = fetch_auth_info_by_user_id(&conn_str,;
                        match maybe_auth_info {
                            Some(auth_info) => {
                                let hash = hash_password(&String::from(p));
                                if hash == auth_info.password_hash {
                                    Outcome::Success(AuthenticatedUser{user_id: 1})
                                } else {
                                    Outcome::Failure((Status::Forbidden, LoginError::WrongPassword))
                                }
                            }
                            None => {
                                Outcome::Failure((Status::MovedPermanently, LoginError::WrongPassword))
                            }
                        }
                    }
                    None => Outcome::Failure((Status::NotFound, LoginError::UsernameDoesNotExist))
                }
            }
            _ => Outcome::Failure((Status::BadRequest, LoginError::InvalidData))
        }
    }
}

by James Bowen at August 31, 2020 02:30 PM

Neil Mitchell

Interviewing while biased

Interviewing usually involves some level of subjectivity. I once struggled to decide about a candidate, and after some period of reflection, the only cause I can see is that I was biased against the candidate. That wasn't a happy realisation, but even so, it's one I think worth sharing.

Over my years, I've interviewed hundreds of candidates for software engineering jobs (I reckon somewhere in the 500-1000 mark). I've interviewed for many companies, for teams I was managing, for teams I worked in, and for other teams at the same company. In most places, I've been free to set the majority of the interview. I have a standard pattern, with a standard technical question, to which I have heard a lot of answers. The quality of the answers falls into one of three categories:

  • About 40% give excellent, quick, effortless answers. These candidates pass the technical portion.
  • About 50% are confused and make nearly no progress even with lots of hints. These candidates fail.
  • About 10% struggle a bit but get to the answer.

Candidates in the final bucket are by far the hardest to make a decision on. Not answering a question effortlessly doesn't mean you aren't a good candidate - it might mean it's not something you are used to, you got interview nerves or a million other factors that go into someone's performance. It makes the process far more subjective.

Many years ago, I interviewed one candidate over the phone. It was their first interview with the company, so I had to decide whether we should take the step of transporting them to the office for an in-person interview, which has some level of cost associated with it. Arranging an in-person interview would also mean holding a job open for them, which would mean pausing further recruitment. The candidate had a fairly strong accent, but a perfect grasp of English. Their performance fell squarely into the final bucket.

For all candidates, I make a decision, and write down a paragraph or so explaining how they performed. My initial decision was to not go any further in interviewing the candidate. But after writing down the paragraph, I found it hard to justify my decision. I'd written other paragraphs that weren't too dissimilar, but had a decision to continue onwards. I wondered about changing my decision, but felt rather hesitant - I had a sneaking suspicion that this candidate "just wouldn't work out". Had I spotted something subtle I had forgotten to write down? Had their answers about their motivation given me a subconscious red-flag? I didn't know, but for the first time I can remember, decided to wait on sending my internal interview report overnight.

One day later, I still had a feeling of unease. But still didn't have anything to pin it on. In the absence of a reason to reject them, I decided the only fair thing to do was get them onsite for further interviews. Their onsite interviews went fine, I went on to hire them, they worked for me for over a year, and were a model employee. If I saw red-flags, they were false-flags, but more likely, I saw nothing.

However, I still wonder what caused me to decide "no" initially. Unfortunately, the only thing I can hypothesise is that their accent was the cause. I had previously worked alongside someone with a similar accent, who turned out to be thoroughly incompetent. I seem to have projected some aspects of that behaviour onto an entirely unrelated candidate. That's a pretty depressing realisation to make.

To try and reduce the chance of this situation repeating, I now write down the interview description first, and then the decision last. I also remember this story, and how my biases nearly caused me to screw up someone's career.

by Neil Mitchell ( at August 31, 2020 12:26 PM

Stackage Blog

LTS 16 uses ghc-8.8.4 as of LTS 16.12

LTS 16.12, the latest update to LTS 16, includes an upgrade from ghc-8.8.3 to ghc-8.8.4. Windows users are encouraged to upgrade immediately, as this ghc upgrade contains an important bugfix to process creation on Windows.

See the ghc-8.8.4 release announcement for details.

August 31, 2020 12:01 PM

August 30, 2020

Oleg Grenrus

ANN: cabal-fmt-0.1.4 - --no-cabal-file flag and fragments

Posted on 2020-08-30 by Oleg Grenrus packages

I spent this Sunday writing two small patches to cabal-fmt.

--no-cabal-file flag

cabal-fmt reasonably assumes that the file it is formatting is a cabal package definition file. So it parses it as such. That is needed to correctly pretty print the fields, as some syntax, for example leading comma requires somewhat recent cabal-version: 2.2 (see Package Description Format Specification History for details).

However, there are other files using the same markup format, for example cabal.project files or cabal.haskell-ci configuration files used by the haskell-ci tool. Wouldn't it be nice if cabal-fmt could format these as well? In cabal-fmt-0.1.4 you can pass the -n or --no-cabal-file flag to prevent cabal-fmt from parsing these files as cabal package files.

The downside is that the latest known cabal specification will be used. That shouldn't break cabal.haskell-ci files, but it might break cabal.project files if you are not careful. (Their parsing code is somewhat antique).

An example of reformatting the cabal.project of this blog:

--- a/cabal.project
+++ b/cabal.project
@@ -1,9 +1,7 @@
 index-state:   2020-05-10T17:53:22Z
 with-compiler: ghc-8.6.5
-packages:
-    "."
-    pkg/gists-runnable.cabal
+packages:
+  "."
+  pkg/gists-runnable.cabal
 
-constraints:
-  hakyll +previewServer
+constraints:   hakyll +previewServer

So satisfying.


Fragments

Another addition is fragments. They are best illustrated by an example. Imagine you have a multi-package project, and you use haskell-ci to generate your .travis.yml. Each .cabal package file must have the same


tested-with: GHC ==8.4.4 || ==8.6.5 || ==8.8.3 || ==8.10.1


Then you find out that GHC 8.8.4 and GHC-8.10.2 were recently released, and you want to update your CI configuration. Editing multiple files, with the same change. Busy work.

With cabal-fmt-0.1.4 you can create a fragment file, let's call it tested-with.fragment:

tested-with: GHC ==8.4.4 || ==8.6.5 || ==8.8.4 || ==8.10.2

And then edit your package files with a cabal-fmt pragma (the fragment is probably in the root directory of the project, but .cabal files are inside a directory per package):


+-- cabal-fmt: fragment ../tested-with.fragment
 tested-with: GHC ==8.4.4 || ==8.6.5 || ==8.8.3 || ==8.10.1


Then, the next time you run

cabal-fmt --inplace */*.cabal

you'll see the diff


 -- cabal-fmt: fragment ../tested-with.fragment
-tested-with: GHC ==8.4.4 || ==8.6.5 || ==8.8.3 || ==8.10.1
+tested-with: GHC ==8.4.4 || ==8.6.5 || ==8.8.4 || ==8.10.2


for all libraries. Handy!

Some design comments:

  • Fragment is only a single field or a single section (e.g. common stanzas). Never multiple single fields. (Easier to implement, least surprising behavior: pragma is attached to a single field or section).
  • Field name or section header in the .cabal file and the fragment have to match. (To avoid mistakes).
  • Substitution is not recursive. (Guaranteed termination).
  • Other pragmas in fragments are not executed. Neither are comments in fragments preserved. (Not sure whether that would be valuable).

Finally, you can use cabal-fmt --no-cabal-file to format fragment files too, even though they are reformatted when spliced.


cabal-fmt-0.1.4 is a small release. I made --no-cabal-file to scratch my own itch, and fragments partly to highlight that not every feature has to exist in Cabal itself: some are a fine fit for preprocessors. I do think that fragments could be very useful in bigger projects. Let me know!

August 30, 2020 12:00 AM

August 28, 2020

Mark Jason Dominus

Zucchinis and Eggplants

This morning Katara asked me why we call these vegetables “zucchini” and “eggplant” but the British call them “courgette” and “aubergine”.

I have only partial answers, and the more I look, the more complicated they get.


The zucchini is a kind of squash, which means that in Europe it is a post-Columbian import from the Americas.

“Squash” itself is from Narragansett, and is not related to the verb “to squash”. So I speculate that what happened here was:

  • American colonists had some name for the zucchini, perhaps derived from Narragansett or another Algonquian language, or perhaps just “green squash” or “little gourd” or something like that. A squash is not exactly a gourd, but it's not exactly not a gourd either, and the Europeans seem to have accepted it as a gourd (see below).

  • When the vegetable arrived in France, the French named it courgette, which means “little gourd”. (Courge = “gourd”.) Then the Brits borrowed “courgette” from the French.

  • Sometime much later, the Americans changed the name to “zucchini”, which also means “little gourd”, this time in Italian. (Zucca = “gourd”.)

The Big Dictionary has citations for “zucchini” only back to 1929, and “courgette” to 1931. What was this vegetable called before that? Why did the Americans start calling it “zucchini” instead of whatever they called it before, and why “zucchini” and not “courgette”? If it was brought in by Italian immigrants, one might expect the word to have appeared earlier; the mass immigration of Italians into the U.S. was over by 1920.

Following up on this thought, I found a mention of it in Cuniberti, J. Lovejoy., Herndon, J. B. (1918). Practical Italian recipes for American kitchens, p. 18: “Zucchini are a kind of small squash for sale in groceries and markets of the Italian neighborhoods of our large cities.” Note that Cuniberti explains what a zucchini is, rather than saying something like “the zucchini is sometimes known as a green summer squash” or whatever, which suggests that she thinks it will not already be familiar to the readers. It looks as though the story is: Colonial Europeans in North America stopped eating the zucchini at some point, and forgot about it, until it was re-introduced in the early 20th century by Italian immigrants.

When did the French start calling it courgette? When did the Italians start calling it zucchini? Is the Italian term a calque of the French, or vice versa? Or neither? And since courge (and gourd) are evidently descended from Latin cucurbita, where did the Italians get zucca?

So many mysteries.


Here I was able to get better answers. Unlike squash, the eggplant is native to Eurasia and has been cultivated in western Asia for thousands of years.

The puzzling name “eggplant” is because the fruit, in some varieties, is round, white, and egg-sized.

[Image: closeup of an eggplant with several of its round, white, egg-sized fruits that do indeed look just like eggs]

The term “eggplant” was then adopted for other varieties of the same plant where the fruit is entirely un-egglike.

“Eggplant” in English goes back only to 1767. What was it called before that? Here the OED was more help. It gives this quotation, from 1785:

When this [sc. its fruit] is white, it has the name of Egg-Plant.

I inferred that the preceding text described it under a better-known name, so, thanks to the Wonders of the Internet, I looked up the original source:

Melongena or Mad Apple is also of this genus [solanum]; it is cultivated as a curiosity for the largeness and shape of its fruit; and when this is white, it has the name of Egg Plant; and indeed it then perfectly resembles a hen's egg in size, shape, and colour.

(Jean-Jacques Rousseau, Letters on the Elements of Botany, tr. Thos. Martyn 1785. Page 202. (Wikipedia))

The most common term I've found that was used before “egg-plant” itself is “mad apple”. The OED has cites from the late 1500s that also refer to it as a “rage apple”, which is a calque of French pomme de rage. I don't know how long it was called that in French. I also found “Malum Insanam” in the 1736 Lexicon technicum of John Harris, entry “Bacciferous Plants”.

Melongena was used as a scientific genus name around 1700 and later adopted by Linnaeus in 1753. I can't find any sign that it was used in English colloquial, non-scientific writing. Its etymology is a whirlwind trip across the globe. Here's what the OED says about it:

  • The neo-Latin scientific term is from medieval Latin melongena

  • Latin melongena is from medieval Greek μελιντζάνα (/melintzána/), a variant of Byzantine Greek ματιζάνιον (/matizánion/) probably inspired by the common Greek prefix μελανο- (/melano-/) “dark-colored”. (Akin to “melanin” for example.)

  • Greek ματιζάνιον is from Arabic bāḏinjān (بَاذِنْجَان). (The -ιον suffix is a diminutive.)

  • Arabic bāḏinjān is from Persian bādingān (بادنگان)

  • Persian bādingān is from Sanskrit and Pali vātiṅgaṇa (भण्टाकी)

  • Sanskrit vātiṅgaṇa is from Dravidian (for example, Malayalam is vaḻutana (വഴുതന); the OED says “compare… Tamil vaṟutuṇai”, which I could not verify.)


Okay, now how do we get to “aubergine”? The list above includes Arabic bāḏinjān, and this, like many Arabic words was borrowed into Spanish, as berengena or alberingena. (The “al-” prefix is Arabic for “the” and is attached to many such borrowings, for example “alcohol” and “alcove”.)

From alberingena it's a short step to French aubergine. The OED entry for aubergine doesn't mention this. It claims that aubergine is from “Spanish alberchigo, alverchiga, ‘an apricocke’”. I think it's clear that the OED blew it here, and I think this must be the first time I've ever been confident enough to say that. Even the OED itself supports me on this: the note at the entry for brinjal says: “cognate with the Spanish alberengena is the French aubergine”. Okay then. (Brinjal, of course, is a contraction of berengena, via Portuguese bringella.)

Sanskrit vātiṅgaṇa is also the ultimate source of modern Hindi baingan, as in baingan bharta.

(Wasn't there a classical Latin word for eggplant? If so, what was it? Didn't the Romans eat eggplant? How do you conquer the world without any eggplants?)

[ Addendum: My search for antedatings of “zucchini” turned up some surprises. For example, I found what seemed to be many mentions in an 1896 history of Sicily. These turned out not to be about zucchini at all, but rather the computer's pathetic attempts at recognizing the word Σικελίαν. ]

[ Addendum 20200831: Another surprise: Google Books and Hathi Trust report that “zucchini” appears in the 1905 Collier Modern Eclectic Dictionary of the English Language, but it's an incredible OCR failure for the word “acclamation”. ]

[ Addendum 20200911: A reader, Lydia, sent me a beautiful map showing the evolution of the many words for ‘eggplant’. Check it out. ]

by Mark Dominus ( at August 28, 2020 08:08 PM


Implementing a GHC Plugin for Liquid Haskell

TL;DR Starting from version, LiquidHaskell is available as a GHC Plugin. Paying special attention to support existing IDE tooling led to some non-conventional design choices.

LiquidHaskell is a refinement type checker for Haskell that empowers programmers to express contracts about their programs which are checked at compile time, and it has been a wonderful tool for the community in addition to GHC, when the latter is not enough. LiquidHaskell has been historically available only as a standalone executable which accepted one Haskell file at a time, making the process of integrating it into an existing code base more tricky than it should have been.

In this blog post we’ll explore how we turned LiquidHaskell into a GHC Plugin, and all the challenges we had to overcome along the way.


This post is intended as an extended cut of my HIW talk, and it nicely complements Ranjit’s more practical blog post. We won’t cover here what LiquidHaskell (LH for brevity) can bring to the table on top of GHC’s type-level programming, as there are plenty of resources out there to get you excited: we recommend starting from the book for a practical motivation on why LiquidHaskell even needs to exist. The official website also covers a lot of material.

What we will do instead is to briefly recap what LH looks like, as well as present its internal architecture. After that, we will take a look at the old version of LH in order to understand its shortcomings, and how the plugin helped overcome those. Last but not least we will explore the iterations of the various designs of our plugin.

A Liquid Haskell “hello world”

If you have never seen what LiquidHaskell looks like in practice, let's assume we want to check (i.e. refine) a Haskell module, say A.hs. The content of A.hs is not important, but as an example let's suppose it to be:

module A where

import B
import C

{-@ safeDiv :: Int -> {y:Int | y /= 0 } -> Int @-}
safeDiv :: Int -> Int -> Int
safeDiv = div

The first line of the code block, which looks like a Haskell comment with additional @s, is a Liquid Haskell annotation. All Liquid Haskell instructions are written in special Haskell comments like this. This particular one assigns a refined type to safeDiv, and it's essentially enforcing that the second Int we pass to safeDiv should not be 0. The {y: Int | y /= 0} also shows quite succinctly the essence of a refinement type, i.e. a type with a logical predicate attached to it. Now, if we attempt a division by 0, for example by calling safeDiv 3 0, LH would reject our program with a big red UNSAFE printed on screen, while if all is well a nice green SAFE would be shown.

Users can provide such annotations in a number of ways; they can annotate directly the Haskell source files, like in the example, or they can put all their refinements into a separate .spec file. Doing both at the same time (for the same module) is also supported, as LH will ensure that all these refinements will be merged into a single data structure, as we will see below.

LiquidHaskell’s architecture

Before deep-diving into any other section, let’s take a brief detour to explain how LH internally works, as it will help clarify some concepts later on. As usual, a picture is worth 1000 words:

Let’s break down this picture bit by bit. The first thing that LH does is to parse the annotations from the input file (A.hs) and from the companion .spec file (if any, thus the dashed lines in the picture), converting everything into a richer data structure known as a BareSpec. Think of the BareSpec as your input program, that has yet to be compiled: by this analogy, a BareSpec might still be ill-defined as it’s not yet being checked and verified by LH. Once a BareSpec gets verified by LH it becomes a LiftedSpec; the intuition behind the name is that the relevant Haskell types have been “lifted” into their refined counterparts.

Since A.hs imports some other modules, it has a number of dependencies that LH needs to track. The details are not important at this stage, as this dependency-tracking differs between the executable and the GHC Plugin, so let’s treat this process as a black box. The end result is a HashSet of LiftedSpec, where the use of LiftedSpec indicates that dependencies will have been verified (and lifted) by LH first.

The last piece of the puzzle is the list of Core bindings for A.hs. Core is GHC’s internal representation, with an AST that is smaller and easier to work with than the full source Haskell AST. LH desugars the program into Core, transforms it into A-normal form, then traverses it to generate a set of refinement constraints that must be solved to “lift” a BareSpec into a LiftedSpec. The key takeaway is that LH needs access to the Core bindings of the input program in order to work.

Finally, once we have everything we need, LH will apply a final transformation step to the input {dependencies, coreBinds, bareSpec}, producing two things as a result:

  1. a LiftedSpec, which can now be serialised on disk and used as a dependency for other Haskell modules; and
  2. a TargetSpec, which is the data structure LH uses to verify the soundness of A.hs.

After creating the TargetSpec LH has to solve what is, effectively, a constraints problem: it generates a set of Horn Clauses from the TargetSpec and feeds them into an SMT solver.1 If a solution is found, our program is SAFE, otherwise it is UNSAFE.

The old status quo

Historically LH has been available only as a standalone executable, which had to be invoked on one or more target Haskell files in order to do its job. While this didn’t stop users from using it for real world applications, it somewhat limited its adoption, despite packages such as liquidhaskell-cabal attempting to bridge the gap.

Compiling Haskell code

Now that we know how the kernel of LH works, understanding the executable and its limitations is not hard; in a nutshell, the executable used the GHC API to accomplish all sorts of tasks: from dependency analysis down to parsing, typechecking and desugaring the input module. Astute readers might argue that this is replicating what GHC normally does, and they would be correct! One of the main shortcomings of the old executable was indeed that there was very little or no integration into GHC’s compilation pipeline.

Writing and distributing specifications

As we have seen already, LH checks each and every refinement it can find while parsing an input Haskell module, which means that the more refinements we have, the more useful LH will be at spotting bugs and mistakes. However, this also means that somehow we need to have a way to write and distribute specifications for existing Haskell types, functions and even packages, as users would not like to be obliged to define refinements for basic things like + or natural numbers.

The LiquidHaskell HQ stoically solved this problem by shipping a “hardcoded” prelude as part of the executable, that contained specifications for some types in base and for some preeminent Haskell packages like containers, bytestring etc. While this worked, it wasn't very flexible, as it was a “closed” environment: new specifications couldn't be added without releasing a new version of LH, not to mention tracking breaking changes to the relevant upstream packages. To say it differently, the hardcoded prelude made a best effort to be compatible with such Haskell packages, but it couldn't guarantee that newer releases of, say, bytestring, wouldn't break the specifications making LH compatible with bytestring-x.x.x.x but not bytestring-x.x.y.y.

IDE support

With LH available only as an executable, there was little hope of integrating it with things like ghcide, ghcid or even ghci. IDE support was added in an ad-hoc fashion by providing the executable with a --json flag that could be used to output the result of the refinement checking (including errors) in a JSON format. One could then write wrappers around it to turn such JSON into formats editors or LSP servers could understand. A notable mention goes to Alan Zimmerman, who added a haskell-ide-engine plugin for LH using the limited --json interface, nevertheless creating a very pleasant IDE experience.

The new way

Starting from version, LH is available as a GHC Plugin, which means that integrating it into your project is as easy as adding an extra -fplugin=LiquidHaskell to the list of ghc-options in your .cabal file, and adding liquidhaskell and liquid-base to its dependencies. In the rest of this post we will discuss the design of this plugin, as it ended up being outside the conventional boundaries of what it can be defined a “standard” plugin, for reasons which will become obvious later on.
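As a sketch, a minimal library stanza could look like the following (the library name MyLib is hypothetical; liquid-base is used in place of base, since it re-exports it):

```cabal
library
  exposed-modules:  MyLib
  hs-source-dirs:   src
  default-language: Haskell2010
  -- liquid-base re-exports base together with its refinements
  build-depends:    liquid-base
                  , liquidhaskell
  -- run Liquid Haskell as part of GHC's compilation pipeline
  ghc-options:      -fplugin=LiquidHaskell
```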

A GHC Plugin quick recap

Before discussing the plugin’s design, let’s quickly recap how a GHC Plugin works, starting, as always, from a picture:

At its core, a Plugin is simply a Haskell data record where record fields expose customisable actions that can modify most of GHC’s pipeline stages. Let’s take for example parsedResultAction:

parsedResultAction :: [CommandLineOption] -> ModSummary -> HsParsedModule -> Hsc HsParsedModule

The documentation states that it will “Modify the module when it is parsed. This is called by HscMain when the parsing is successful.”. This means that we can write a simple Haskell function that takes as input a HsParsedModule and it has to produce a new HsParsedModule, effectively allowing us to modify how the GHC pipeline behaves. Similarly, typecheckResultAction is called after typechecking has ended, while installCoreToDos allows us to add extra optimizations and transformations on the Core representation of the program. This is for example how inspection-testing performs its magic, by grabbing the Core bindings at this stage.
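To make this concrete, here is a minimal do-nothing plugin sketched against the GHC 8.10 API (the module name MyPlugin is made up; plugin is the value GHC looks up when given -fplugin=MyPlugin):

```haskell
module MyPlugin (plugin) where

import GhcPlugins

plugin :: Plugin
plugin = defaultPlugin
  { -- runs right after parsing; this sketch just passes the module through
    parsedResultAction = \_cmdOpts _modSummary parsed -> pure parsed
    -- declare ourselves pure so we don't force needless recompilation
  , pluginRecompile = purePlugin
  }
```

A real plugin would inspect or rewrite the HsParsedModule before returning it; this stub merely shows where such logic hooks in.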

The general design idea

There is a fil rouge in all the different iterations of the plugin, namely the idea that a LiftedSpec should be serialised into the interface file of the associated Haskell module. Doing so is possible by virtue of the Annotations API that the ghc library exposes, and has a number of practical consequences. First of all, it strengthens the intuition that a LiftedSpec is the primary output of the refinement process and, as such, is worth caching and storing somewhere. Secondly, it allows the plugin to deserialise it back when resolving the dependencies of the module GHC is currently compiling, without resorting to hidden system folders. Most importantly, it paves the way to distributing specifications, as now the plugin can fetch a LiftedSpec for a module which doesn’t belong to the HomePackageTable for that package. This is delegated to the SpecFinder module, which I have wittily named after the Finder module in the GHC API.

First attempt: proper pipeline split

Considering all the above, a natural way of designing the plugin could be the following:

  1. Parse any LH specification during the parse stage, using the parsedResultAction hook to produce a BareSpec;
  2. Perform any name resolution needed by LH in the typecheckResultAction, and store the result somewhere;
  3. Add an extra Core pass in the installCoreToDos that grabs the [CoreBind], performs the ANF transformation and finally calls the LH API to check the result.

This was the first design I explored when working on the first prototype. Due to the fact that there is no good story for passing plugin state between phases, I had to resort to either serialise something as an Annotation in order to retrieve it back in a different stage, or more pragmatically use an IORef, typically wrapping something like a Map Module MyState.

Challenge: Unboxed types

The first iteration of the plugin worked, until I tried to refine a program which looked a bit like this:

module Challenge1 where

{-@ data BBRef = BBRef { size :: {v: Int | v >= 0 } } @-}
data BBRef = BBRef {
      size :: {-# UNPACK #-} !Int
      -- ^ The amount of memory allocated.
    }

-- ... do something with BBRef

One aspect of LH we didn’t talk about is the ability to refine data structures, like in the example. Unfortunately, this made me discover a hard requirement I didn’t anticipate: LH needs access to the unoptimised core bindings.

What does this mean, exactly? It means that LH’s output can’t depend on the optimisation level of the user’s program. In other terms, a plugin would inherit the same optimisation level the user requested when kicking off a GHC build, but LH strictly requires it to be -O0. More specifically, if we take size as an example, LH requires the relevant Int to be boxed, because it needs to “match it” with the Int in the refinement, but UNPACK changes all of that, effectively producing a type mismatch and causing LH to fail with a ghastly error.

Final design: duplicate work, reduce the plugin surface

Back to the drawing board, we decided to bite the bullet and accept the fact that we simply can’t control the optimisation level; not in the practical sense, as the GHC API allows you to modify the level in the global DynFlags, but in a semantic one: it would have been wrong to pull the rug from under users’ feet that way. Imagine trying to compile an executable meant for production just to discover with horror that a plugin silently changed the optimisation level to be “debug”. Since LH requires the program to be compiled with -O0, but the user might well be compiling with -O1 or higher, we will inevitably need to duplicate some stages of the compilation pipeline.

Initially I tried to solve this problem by “manually” calling typecheckModule and desugarModule using the GHC API inside the installCoreToDos action run after desugaring. Unfortunately it turns out to be necessary to replicate even the parsing stage, with a call to parseModule.2 Since the pipeline split wasn’t enforced so strictly anymore, we ended up moving all the plugin logic into typecheckResultAction,3 with the added bonus that no intermediate state-passing was needed anymore. This is, to this day, the design we settled for.

This has the unfortunate consequence that we are now doing most of the work twice, but I hope this could be fixed in future versions of GHC, for example by allowing plugins to run even earlier in the pipeline or by having the unoptimised core bindings cached somewhere.

While the duplication of work is admittedly somewhat unsatisfactory, it is still no worse than running the standalone liquid executable in addition to GHC itself. With this design, we effectively recovered feature parity with the old binary. As an added bonus, it meant that ghci worked out of the box. Supporting ghcid required only a bit more effort, namely emitting warnings and errors in the same format used by GHC, and making sure that LH strictly abided by this format everywhere in the codebase. Finally, after patching a few more bugs (discussed in the appendix), even ghcide worked as expected:4

Liquid Haskell running inside ghcide

What about the ecosystem?

We haven’t yet discussed the package ecosystem for LH, or how users can contribute specifications to existing packages. One of the goals for the project was specifically to empower users to extend LH’s ecosystem using familiar infrastructure (e.g. Hackage, Stackage, etc).

We have seen how serialising a LiftedSpec into an interface file solved the technical problem, but it left us with a social problem: what to do about “preeminent” Haskell packages we would like to refine? By “preeminent” I mean all those packages which are really in the top percentile in terms of “most used”: think base, vector, containers and so on. Typically their development speed is not very fast-paced as these are part of the backbone of the Haskell ecosystem, and getting new changes in is quite hard, or requires careful thinking.

While adding LH annotations directly into these packages wouldn’t break anything, it’s surely something which needs to be motivated and, ultimately, maintained, so it might not be easy to make it happen over a short period of time. This is why we opted for a different approach: instead of trying to patch these packages, we released “mirror” packages: they trivially re-export all the modules from the mirrored package while also adding all the necessary refinements.

At the moment of writing, we offer drop-in replacements for a handful of packages:

Users willing to write specifications for their existing packages can either adopt this mirrored-package approach or integrate LH specs directly into their Haskell code, and both approaches have pros & cons. In particular, integrating LH specs directly into an existing package means a tighter coupling between the Haskell and LH code, so the package author will be responsible for making sure that the LH specs stay compatible with multiple versions of Liquid Haskell as they are released. That is, package authors adopting this approach have to ensure that their packages work with new versions of GHC as well as new versions of LH (although the latter is typically released in lockstep with GHC).

For a more “hands-on” introduction, check out Ranjit’s blog post, specifically the “Shipping Specifications with Packages” section.

Managing and tracking versions

Once again, the astute readers might be wondering how to track changes to the mirrored packages. While getting the recipe right is tricky, we propose a simple PVP scheme, in the form of:

A.B.C.D.X.Y

A.B.C.D tracks the upstream package directly, while X.Y allows for LH-related bug fixes and breaking changes.

As a practical example, let’s imagine some users wishing to add specifications to an existing package acme-. Those users would release a liquid-acme- adding whichever specifications they like. In case acme- is released, they would also release liquid-acme-, and in case they discover a bug in one of the existing specifications for acme- then a liquid-acme- could be released. The penultimate digit is reserved for changes to the LiquidHaskell language. This way, if the refinements for acme now rely on a shiny new LH feature, our users would release something like liquid-acme- to reflect that.

While not perfect, this scheme should work for the majority of cases.


Needless to say, this approach of mirroring packages only to add specifications is not perfect.

First of all, keeping track of all the different versions of all these packages and following their release schedules adds inevitable overhead. The LH codebase provides some scripts to mirror modules fairly easily, but some human intervention is needed in case of API changes. Furthermore, now a package wishing to use LH would have to depend on liquid-base but not base, which might not be an option for some projects, or not something projects would like to commit to immediately, just to try out LH. On the bright side, now it is possible to distribute specs independently of the underlying package, without having to burden the primary maintainer.

Ultimately this would have been a very nice use case for Backpack, but that would require substantial work, not to mention that tools like stack don’t support Backpack yet.


Working on this GHC Plugin was interesting and challenging for me, and I hope that with the plugin Liquid Haskell will get the adoption it deserves.

It is tricky to reconcile plugins that modify the GHC pipeline with tools that act on the frontend (like ghcide), mostly due to the low-level nature of the GHC API. Not having access to a more “pure” way of obtaining the unoptimised core bindings complicated the plugin design; I hope that in the future we will be able to mitigate this, which would also make the whole plugin more efficient by avoiding expensive operations like parsing or typechecking each module again. Another thing I would love to see in future GHC API releases is a good story for passing state between different plugin actions: in the end I didn’t need any state, because I was forced to write everything into a single plugin action, but during the “First attempt” this was a pain I had to deal with. From a social perspective it will be fascinating to see how the choices we made on the mirror packages will work out for the community.


I would like to thank Ranjit Jhala and Niki Vazou for all their support, from answering all our endless questions down to hand-holding us through the Liquid Haskell internals, speeding up our work several orders of magnitude.

Thanks also to UCSD and to the NSF for funding this work under grant 1917854: “FMitF: Track II: Refinement Types in the Haskell Ecosystem”.

Appendix: supporting ghcide

Having our plugin called by ghcide was not enough, unfortunately, as the output reported by ghcide was not correct. We tracked this down not to ghcide itself, but to GHC issue #18070, which GHC HQ kindly patched and released as part of 8.10.2.

Armed with a patched GHC, I finally tried ghcide again, but surprisingly enough, diagnostics were emitted (at least in nvim) in a delayed fashion, typically only when the file was saved. This is a suboptimal experience: after all, what makes ghcide so compelling is the ability to get diagnostics almost in real time, while editing the buffer. Another debugging session revealed a bug in ghcide, which deserves an honorable mention in this appendix.

In order to understand the problem, we have to briefly explain what a ModSummary is. A ModSummary is a key component of the GHC API, described by the documentation as a “single node in a ModuleGraph”. This type is used pervasively by the GHC API, and is an input argument to the parseModule function, which we ended up using in the final design to parse each module.

A ModSummary has a number of fields, but we’ll focus only on two of them:

data ModSummary
   = ModSummary {
        ms_hspp_file :: FilePath
      , ms_hspp_buf  :: Maybe StringBuffer
      -- ... many other fields
      }

The former is the path on the user’s filesystem of the associated Haskell module, whereas the latter is the preprocessed source. What’s even more important is that parseModule looks first at ms_hspp_buf to see if it has a “cached” preprocessed source to parse; otherwise it parses the module from scratch, reading it from the ms_hspp_file path.

The essence of the bug was that ghcide was constructing a ModSummary out of thin air, but it wasn’t correctly setting the ms_hspp_buf with the StringBuffer associated with the current content of the file, which meant that our plugin was able to “see” changes only when the file was saved and the new version of a module was available on disk, but not any sooner.

Luckily for us the fix was easy, and the ghcide devs kindly accepted our PR. After shaving this fairly hairy yak, things worked as expected.

  1. Therefore, LH depends on an external theorem prover such as Z3.↩︎

  2. In particular, a typecheckModule accepts a ModSummary as input, which contains the cached DynFlags, so it’s possible to temporarily create a copy of the global DynFlags with optimisations turned off. Unfortunately, this wasn’t enough for the UNPACK case, because apparently UNPACK gets “resolved” during parsing, which means that by type-checking time it was already too late to switch off optimisations that way.↩︎

  3. Why did we choose typecheckResultAction, rather than, say, parsedResultAction? This is crucial for supporting integration with ghcide. Note that ghcide uses the GHC API as well, but crucially it runs only the typechecking phase, not earlier phases. To put it differently, the GHC API would still call any registered GHC Plugin at that point, but only its typecheckResultAction hook, which meant the parsing hook was completely bypassed.↩︎

  4. The screenshot above used ghcide-0.2.0 with my patch cherry-picked on top, and compiled against ghc-8.10.2, so unfortunately YMMV.↩︎

by alfredo at August 28, 2020 12:00 AM

Oleg Grenrus

Fixed points of indexed functors

Posted on 2020-08-28 by Oleg Grenrus


I was lately thinking about fixed points, more or less.

A new version of data-fix was released recently, and also corresponding version of recursion-schemes.

Also I wrote a Fix-ing regular expressions post, about adding fixed points to regular expression.

This post is another exploration: fixed points of indexed functors. This is not a novel idea at all, but I’m positively surprised it works out quite nicely in modern GHC Haskell. I define an IxFix type and illustrate it with three examples.

Note: The HFix in multirec package is the same as IxFix in this post. I always forget about the existence of multirec.

In the following, the "modern GHC Haskell" is quite conservative, only eight extensions:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}

And this literate Haskell script is warning free, with -Wall

{-# OPTIONS_GHC -Wall #-}

On this trip

module IxFix where

we need a handful of imports

-- Type should be added to Prelude
import Data.Kind (Type)

-- Few newtypes
import Data.Functor.Identity (Identity (..))
import Data.Functor.Compose (Compose (..))
import Data.Functor.Const (Const (..))

-- dependently typed programming!
import Data.Fin      (Fin)
import Data.Type.Nat
import Data.Vec.Lazy (Vec (..))

-- magic
import Data.Coerce (coerce)

Before we go further, let me remind you about ordinary fixed points, as defined in data-fix package.

newtype Fix f = Fix { unFix :: f (Fix f) }

foldFix :: Functor f => (f a -> a) -> Fix f -> a
foldFix f = go where go = f . fmap go . unFix

Using Fix we can define recursive types using non-recursive base functors, e.g. for a list we’d have

data ListF a rec = NilF | ConsF a rec

We then use foldFix (or cata and other recursion schemes in recursion-schemes) to decouple "how we recurse" and "what we do at each step". I won’t try to convince you why this separation of concerns might be useful.
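As a quick sanity check that ListF really does give back lists, here is a self-contained sketch using foldFix to sum a list; the helper names (fromList, sumFix) are mine, not part of data-fix:

```haskell
newtype Fix f = Fix { unFix :: f (Fix f) }

foldFix :: Functor f => (f a -> a) -> Fix f -> a
foldFix f = go where go = f . fmap go . unFix

data ListF a rec = NilF | ConsF a rec

instance Functor (ListF a) where
    fmap _ NilF        = NilF
    fmap f (ConsF x r) = ConsF x (f r)

-- build a Fix-list from an ordinary list
fromList :: [a] -> Fix (ListF a)
fromList = foldr (\x r -> Fix (ConsF x r)) (Fix NilF)

-- "what to do at each step": an algebra summing the elements
sumFix :: Num a => Fix (ListF a) -> a
sumFix = foldFix alg where
    alg NilF        = 0
    alg (ConsF x s) = x + s
```

For example, sumFix (fromList [1..10]) evaluates to 55.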

Instead I continue directly to the topic: defining indexed fixed points. Why do we need them? Because Fix is not powerful enough to allow working with Vec or polymorphically recursive types.

-- hello dependently typed world.
data Vec (n :: Nat) (a :: Type) where
    VNil  :: Vec 'Z a
    (:::) :: a -> Vec n a -> Vec ('S n) a

Fixed points of indexed functors

Before talking about fixed points, we need to figure out what are indexed functors. Recall a normal functor is a thing of kind Type -> Type:

class Functor f where
    fmap :: (a -> b) -> (f a -> f b)

The indexed version is the one with Type replaced by k -> Type, for some index kind k (it is still a functor, but in a different category). We want morphisms to work for all indices, and preserve them. Thus we define a commonly used type alias1

-- natural, or parametric, transformation
type f ~> g = forall (j :: k). f j -> g j

Using it we can define a Functor variant2: it looks almost the same.

class IxFunctor (f :: (k -> Type) -> (k -> Type)) where
    ixmap :: (a ~> b) -> (f a ~> f b)

With IxFunctor in our toolbox, we can define an IxFix, note how the definition is again almost the same as for unindexed Fix and foldFix:

newtype IxFix f i = IxFix { unIxFix :: f (IxFix f) i }

foldIxFix :: IxFunctor f => (f g ~> g) -> IxFix f ~> g
foldIxFix alg = alg . ixmap (foldIxFix alg) . unIxFix

Does this work? I hope that following examples will convince that IxFix is usable (at least in theory).

Example: length indexed lists, Vec

The go-to example of recursion schemes is folding a list; the go-to example of dependent types is the length-indexed list, often called Vec. I combine these traditions by defining Vec as an indexed fixed point:

data VecF (a :: Type) rec (n :: Nat) where
    NilF  ::               VecF a rec 'Z
    ConsF :: a -> rec n -> VecF a rec ('S n)

VecF is an IxFunctor:

instance IxFunctor (VecF a) where
    ixmap _ NilF         = NilF
    ixmap f (ConsF x xs) = ConsF x (f xs)

And we can define Vec as fixed point of VecF, with constructors:

type Vec' a n = IxFix (VecF a) n

nil :: Vec' a 'Z
nil = IxFix NilF

cons :: a -> Vec' a n -> Vec' a ('S n)
cons x xs = IxFix (ConsF x xs)

Can we actually use it? Of course! Let’s define concatenation3 Vec' a n -> Vec' a m -> Vec' a (Plus n m). We cannot use foldIxFix directly, as Plus n m is not the same index as n, so we need to define an auxiliary newtype to plumb the indices. Another way to think about these kinds of newtypes is that they work around the lack of type-level anonymous functions in today’s Haskell.

newtype Appended m a n =
    Append { getAppended :: Vec' a m -> Vec' a (Plus n m) }
append :: forall a n m. Vec' a n -> Vec' a m -> Vec' a (Plus n m)
append xs ys = getAppended (foldIxFix alg xs) ys where
    alg :: VecF a (Appended m a) j -> Appended m a j
    alg NilF          = Append id
    alg (ConsF x rec) = Append $ \zs -> cons x (getAppended rec zs)
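To convince ourselves that append computes the right thing, here is the above condensed into a self-contained sketch (Nat, Plus and a length-forgetting toList are redefined locally, since the post imports them from the fin and vec packages):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}

import Data.Kind (Type)

-- local stand-ins for Data.Type.Nat from the fin package
data Nat = Z | S Nat

type family Plus (n :: Nat) (m :: Nat) :: Nat where
    Plus 'Z     m = m
    Plus ('S n) m = 'S (Plus n m)

type f ~> g = forall j. f j -> g j

class IxFunctor (f :: (k -> Type) -> (k -> Type)) where
    ixmap :: (a ~> b) -> (f a ~> f b)

newtype IxFix f i = IxFix { unIxFix :: f (IxFix f) i }

foldIxFix :: IxFunctor f => (f g ~> g) -> IxFix f ~> g
foldIxFix alg = alg . ixmap (foldIxFix alg) . unIxFix

data VecF (a :: Type) (rec :: Nat -> Type) (n :: Nat) where
    NilF  ::               VecF a rec 'Z
    ConsF :: a -> rec n -> VecF a rec ('S n)

instance IxFunctor (VecF a) where
    ixmap _ NilF         = NilF
    ixmap f (ConsF x xs) = ConsF x (f xs)

type Vec' a n = IxFix (VecF a) n

nil :: Vec' a 'Z
nil = IxFix NilF

cons :: a -> Vec' a n -> Vec' a ('S n)
cons x xs = IxFix (ConsF x xs)

newtype Appended m a n =
    Append { getAppended :: Vec' a m -> Vec' a (Plus n m) }

append :: forall a n m. Vec' a n -> Vec' a m -> Vec' a (Plus n m)
append xs ys = getAppended (foldIxFix alg xs) ys where
    alg :: VecF a (Appended m a) j -> Appended m a j
    alg NilF          = Append id
    alg (ConsF x rec) = Append $ \zs -> cons x (getAppended rec zs)

-- forget the length index so we can inspect the result
toList :: Vec' a n -> [a]
toList (IxFix NilF)         = []
toList (IxFix (ConsF x xs)) = x : toList xs
```

Then toList (append (cons 1 (cons 2 nil)) (cons 3 nil)) gives [1,2,3], with the result length 'S ('S ('S 'Z)) computed by the Plus family.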

We can also define a refold function, which doesn’t mention IxFix at all.

ixrefold :: IxFunctor f => (f b ~> b) -> (a ~> f a) -> a ~> b
ixrefold f g = f . ixmap (ixrefold f g) . g

And then, using ixrefold we can define concatenation for Vec from vec package, which isn’t defined using IxFix. Here we need auxiliary newtypes as well.

newtype Swapped f a b =
    Swap { getSwapped :: f b a }
newtype Appended2 m a n =
    Append2 { getAppended2  :: Vec m a -> Vec (Plus n m) a }

append2 :: forall a n m. Vec n a -> Vec m a -> Vec (Plus n m) a
append2 xs ys = getAppended2 (ixrefold f g (Swap xs)) ys where
    -- same as alg in 'append'
    f :: VecF a (Appended2 m a) j -> Appended2 m a j
    f NilF          = Append2 id
    f (ConsF z rec) = Append2 $ \zs -> z :::  (getAppended2 rec zs)

    -- 'project'
    g :: Swapped Vec a j -> VecF a (Swapped Vec a) j
    g (Swap VNil)       = NilF
    g (Swap (z ::: zs)) = ConsF z (Swap zs)

You may note that one can implement append by induction over the length; that’s how vec implements it. Theoretically this is not quite right, and the IxFix formulation highlights why:

append3 :: forall a n m. SNatI n
        => Vec' a n -> Vec' a m -> Vec' a (Plus n m)
append3 xs ys = getAppended3 (induction caseZ caseS) xs where
    caseZ :: Appended3 m a 'Z
    caseZ = Append3 (\_ -> ys)

    caseS :: Appended3 m a p -> Appended3 m a ('S p)
    caseS rec = Append3 $ \(IxFix (ConsF z zs)) ->
        cons z (getAppended3 rec zs)

-- Note: this is different than Appended!
newtype Appended3 m a n =
    Append3 { getAppended3 :: Vec' a n -> Vec' a (Plus n m) }

Here we pattern match on IxFix value. If we want to treat it as least fixed point, the only valid elimination is to use foldIxFix!

However, the induction over length is the right approach if Vec is defined as a data or type family:

type family VecFam (a :: Type) (n :: Nat) :: Type where
    VecFam a 'Z     = ()
    VecFam a ('S n) = (a, VecFam a n)

Whether you want to have a data family, a type family, or a GADT depends on the application (even in Agda or Coq). The family variant doesn’t intrinsically know its length, which is sometimes a blessing, sometimes a curse. For what it’s worth, the vec package provides both variants, with almost the same module interface.

Example: Polymorphically recursive type

The IxFix can also be used to define polymorphically recursive types like

data Nested a = a :<: (Nested [a]) | Epsilon
infixr 5 :<:

nested :: Nested Int
nested = 1 :<: [2,3,4] :<: [[5,6],[7],[8,9]] :<: Epsilon

A length function defined over this datatype will be polymorphically recursive, as the type of the argument changes from Nested a to Nested [a] in the recursive call:

-- >>> nestedLength nested
-- 3
nestedLength :: Nested a -> Int
nestedLength Epsilon    = 0
nestedLength (_ :<: xs) = 1 + nestedLength xs

We cannot represent Nested as Fix of some functor, and we cannot use recursion-schemes either. However, we can redefine Nested as an indexed fixed point.

An important observation is that we (often or always?) use polymorphic recursion as a solution to the lack of indexed types. My favorite example is de Bruijn indices for well-scoped terms. Compare

data Expr1 a
    = Var1 a
    | App1 (Expr1 a) (Expr1 a)
    | Abs1 (Expr1 (Maybe a))


data Expr2 a n
    = Free2 a                -- split free and bound variables
    | Bound2 (Fin n)
    | App2 (Expr2 a n) (Expr2 a n)
    | Abs2 (Expr2 a ('S n))  -- extend bound context by one

Which one is simpler is a really good discussion, but for another time.

In the Nested example the single type argument is also used for two purposes: the type of a base element (Int) and the container type (which starts as Identity and grows with an extra list layer at each step).

One approach is to just use a Nat index and have a type family4

type family Container (n :: Nat) :: Type -> Type where
    Container 'Z     = Identity
    Container ('S n) = Compose [] (Container n)


data NestedF a rec f
    = f a :<<: rec (Compose [] f)
    | EpsilonF

instance IxFunctor (NestedF a) where
    ixmap _ EpsilonF    = EpsilonF
    ixmap f (x :<<: xs) = x :<<: f xs

We can convert from Nested a to IxFix (NestedF a) Identity and back. We use coerce to help with newtype plumbing.

convert :: Nested a -> IxFix (NestedF a) Identity
convert = aux . coerce where
    aux :: Nested (f a) -> IxFix (NestedF a) f
    aux Epsilon    = IxFix EpsilonF
    aux (x :<: xs) = IxFix (x :<<: aux (coerce xs))

-- back left as an exercise

And then we can write nestedLength as a fold.

-- >>> nestedLength2 (convert nested)
-- 3
nestedLength2 :: IxFix (NestedF a) f -> Int
nestedLength2 = getConst . foldIxFix alg where
    alg :: NestedF a (Const Int) ~> Const Int
    alg EpsilonF         = Const 0
    alg (_ :<<: Const n) = Const (n + 1)

Non-example: ListF

In the introduction I mentioned an ordinary list, which is a fixed point

\begin{aligned} \mathsf{List} \coloneqq \lambda (A : \mathsf{Type}).\; \mu (r : \mathsf{Type}).\; 1 + A \times r\end{aligned}

where I use \mu (r : X).\, F\, r notation to represent least fixed points: \mu (r : X). F\, r \cong F\, (\mu (r : X). F\, r) . Note that we first introduce a type parameter A with \lambda , and then make a fixed point with \mu .

We can define an ordinary list using IxFix, by taking a fixed point of a Type -> Type thing, i.e. first \mu , and then \lambda .

\begin{aligned} \mathsf{List}_1 \coloneqq \mu (r : \mathsf{Type} \to \mathsf{Type}).\; \lambda (A : \mathsf{Type}).\; 1 + A \times r\,A\end{aligned}

data ListF1 rec a = NilF1 | ConsF1 a (rec a)
type List1 = IxFix ListF1

fromList1 :: [a] -> List1 a
fromList1 []     = IxFix NilF1
fromList1 (x:xs) = IxFix (ConsF1 x (fromList1 xs))

Compare to Agda code:

-- parameter
data List (A : Set) : Set where
    nil  : List A
    cons : A -> List A -> List A

-- index
data List : Set -> Set where
    nil :  (A : Set) -> List A
    cons : (A : Set) -> A -> List A -> List A

These types are subtly different. See

This gives a hint as to why Agda people define Vec (A : Set) : Nat -> Set, i.e. with the length as the last parameter: because you have to do it that way. And Haskellers (usually) define it as Vec (n :: Nat) (a :: Type), because then Vec can be given Functor etc. instances. In other words, the machinery in both languages forces an order of type arguments.

Finally, we can write parametric version of List using IxFix too. We just use a dummy, boring index.

\begin{aligned} \mathsf{List}_2 \coloneqq \lambda (A : \mathsf{Type}).\; \mu (r : 1 \to \mathsf{Type}).\; \lambda (x : 1).\; 1 + A \times r\,x\end{aligned}

data ListF2 a rec (unused :: ()) = NilF2 | ConsF2 a (rec unused)
type List2 a = IxFix (ListF2 a) '()

fromList2 :: [a] -> List2 a
fromList2 []     = IxFix NilF2
fromList2 (x:xs) = IxFix (ConsF2 x (fromList2 xs))

IxFix is more general than Fix, but if you don’t need an extra power, maybe you shouldn’t use it.

Do we need something even more powerful than IxFix? I don’t think so. If we need more (dependent) indices, we can pack them all into a single index by tupling (or \sum -mming) them.


We have seen IxFix, the fixed point of an indexed functor. I honestly do not think that you should start looking through your code base for places to use it. I suspect it is more useful as a thinking and experimentation tool. It is an interesting gadget.

  1. I’m sorry that tilde ~> and dash -> arrows look so similar.↩︎

  2. Note that FFunctor in (which is defined with different names in other packages as well) is of different kind. IxFunctor in is again different. Sorry for proliferation of various functors. And for confusing terminology. Dominic Orchard et al uses terms graded (k -> Type, this post) and parameterised (k -> k -> Type, indexed-package) in There is no monad-name for hkd-package variant, as that cannot be made into monad-like thing.↩︎

  3. You may wonder why the function name is append, while the operation is concatenation. This is similar to having plus for addition.↩︎

  4. Here one starts to wish that GHC had unsaturated type families, so we wouldn’t need to use newtypes...↩︎

August 28, 2020 12:00 AM

August 27, 2020

Philip Wadler

Six questions that predict success

 Carol Dweck, a Stanford psychologist famous for her research on growth mindset, is coauthor of a new study.

To test the theory, the researchers asked student participants six questions and asked them to rate themselves on a 1 (never) to 5 (always) scale.

  • When you are stuck on something, how often do you ask yourself, "What are things I can do to help myself?"
  • Whenever you feel like you are not making progress, how often do you ask yourself, "Is there a better way of doing this?"
  • Whenever you feel frustrated with something, how often do you ask yourself, "How can I do this better?"
  • In moments when you feel challenged, how often do you ask yourself, "What are things I can do to make myself better at this?"
  • When you are struggling with something, how often do you ask yourself, "What can I do to help myself?"
  • Whenever something feels difficult, how often do you ask yourself, "What can I do to get better at this?"

What happened? Higher scores predicted higher grades.

And in subsequent studies, higher scores predicted greater success in a professional challenge and in a health and fitness goal.

by Philip Wadler at August 27, 2020 01:17 PM

Edward Z. Yang

Dynamic scoping is an effect, implicit parameters are a coeffect

For the longest time, I thought implicit parameters and dynamic scoping were basically the same thing, since they can both be used to solve similar problems (e.g., the so-called "configuration problem", where you need to plumb some configuration deep into a nested body of function definitions without passing it explicitly). But implicit parameters have a reputation of being something you shouldn't use (use reflection instead), whereas dynamic scoping via the reader monad is a useful and well understood construct (except for the bit where you have to monadify everything). Why the difference?
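As a concrete reference point, here is the reader-monad flavour of dynamic scoping applied to the configuration problem (a sketch; Config and the function names are made up, and Reader comes from the transformers package):

```haskell
import Control.Monad.Trans.Reader (Reader, ask, local, runReader)

newtype Config = Config { verbosity :: Int }

-- deep inside a nested computation, consult the ambient configuration
describe :: Reader Config String
describe = do
  v <- fmap verbosity ask
  pure (if v > 0 then "chatty" else "quiet")

-- "dynamically rebind" the configuration for a sub-computation
quietly :: Reader Config a -> Reader Config a
quietly = local (\c -> c { verbosity = 0 })
```

Here runReader describe (Config 2) yields "chatty", while runReader (quietly describe) (Config 2) yields "quiet": local plays the role of a dynamic binding whose extent is explicit in the monadic structure.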

Oleg points out that implicit parameters are not really dynamic scoping, and gives an example where Lisp and Haskell disagree. And you don't even want the Lisp behavior in Haskell: if you think about the operational notion of dynamic scoping (walk up the stack until you find a binding site of the dynamic variable), it's not very compatible with laziness, since a thunk (which accesses a dynamic variable) will be forced at some unpredictable point in program execution. You really don't want to have to reason about where exactly a thunk will be executed to know how its dynamic variables will be bound, that way lies madness. But somehow, in a strict language, no one has trouble figuring out what should happen with dynamic scoping (well, mostly--more on this shortly).
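The flavour of example Oleg discusses can be sketched in a few lines (assuming the monomorphism restriction is on, as it is by default):

```haskell
{-# LANGUAGE ImplicitParams #-}

-- g is monomorphic, so its ?x is resolved at the definition site (?x = 1);
-- an operational "walk up the stack" dynamic variable would instead see
-- ?x = 2 when g is finally forced
example :: Int
example =
  let ?x = (1 :: Int) in
  let g = ?x in
  let ?x = (2 :: Int) in
  g
```

With the monomorphism restriction, example is 1; a Lisp-style dynamically scoped variable would give 2 here.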

It turns out that the research community has figured out the difference is that implicit parameters are a coeffect. I believe this was first observed in Coeffects: Unified static analysis of context-dependence (a more modern presentation is in Coeffects: A calculus of context-dependent computation; and a more Haskelly presentation can be found in Embedding effect systems in Haskell). Although, Tomas was commenting on my blog in 2012 about similar ideas, so this probably had been in the works for a while. The key point is that for some coeffects (namely, implicit parameters), call-by-name reduction preserves types and coeffects, and so implicit parameters do not blow up in your face in the same way dynamic scoping (an effect) would. These necessarily behave differently! Type classes are coeffects too, and this is why modern use of implicit parameters in Haskell explicitly acknowledges this (e.g., in the reflection package).

At this year's ICFP, I was pointed at an interesting technical report about implicit values and functions in Koka, a new twist on the dynamic scoping. I found myself wondering if Haskell implicit parameters could learn a thing or two from this work. Implicit values make the good choice of defining implicit values globally at the top level, so that they can participate in normal module namespacing, as opposed to an un-namespaced bag of dynamically scoped names (this is also an improvement that reflection makes over implicit parameters). But actually, it seems to me that implicit functions are taking a page from implicit parameters!

The big innovation of implicit functions is that they resolve all dynamic references in the function (not just lexically, but for all further dynamic calls) to the lexical scope (the dynamic scope at the time the function was defined), producing a function that has no dependence on implicit values (that is, no effect saying that the implicit value must be defined at the time the function is called). This is exactly what an implicit-parameter let ?x = ... binding would have done, in effect directly filling in the dictionary for the implicit function at the definition site, rather than waiting. Very contextual! (Of course, Koka implements this using algebraic effects, and gets to the right semantics with a very simple translation anyway.) The result is not exactly dynamic scoping, but as the TR says, it leads to better abstraction.

It is difficult to see how implicit values/functions could make their way back into Haskell, at least without some sequencing construct (e.g., a monad) lurking around. Though implicit functions behave much like implicit parameters, the rest of the dynamic scoping (including the binding of the implicit function itself) is just good old effectful (not coeffectful) dynamic scope. And you can't just do that in Haskell, without breaking type preservation under beta-reduction and eta-expansion. Haskell has no choice but to go all the way, and once you get beyond the obvious problems of implicit parameters (which reflection fixes), things seem to mostly work out.

by Edward Z. Yang at August 27, 2020 05:51 AM

August 26, 2020

Oskar Wickström

Introducing Quickstrom: High-confidence browser testing

In the last post I shared the results from testing TodoMVC implementations using WebCheck. The project has since been renamed Quickstrom (thank you Tom) and is now released as open source.

What is Quickstrom?

Quickstrom is a new autonomous testing tool for the web. It can find problems in any type of web application that renders to the DOM. Quickstrom automatically explores your application and presents minimal failing examples. Focus your effort on understanding and specifying your system, and Quickstrom can test it for you.

Past and future

I started writing Quickstrom on April 2, 2020, about a week after our first child was born. Somehow that code compiled, and evolved into a capable testing tool. I’m now happy and excited to share it with everyone!

In the future, when Quickstrom is more robust and has a greater mileage, I might build a commercial product on top of it. This is one of the reasons I’ve chosen an AGPL-2.0 license for the code, and why contributors must sign a CLA before pull requests can be merged. The idea is to keep the CLI test runner AGPL forever, but I might need a license exception if I build a closed-source SaaS product later on.

Learning more

Interested in Quickstrom? Start by checking out any of these resources:

And keep an eye out for updates by signing up for the newsletter, or by following me on Twitter. Documentation should be significantly improved soon.


If you have any comments or questions, please reply to any of the following threads:

August 26, 2020 10:00 PM

August 22, 2020

Philip Wadler

Five stages of accepting constructive mathematics


From a psychological point of view, learning constructive mathematics is agonizing, for it requires one to first unlearn certain deeply ingrained intuitions and habits acquired during classical mathematical training. In her book On Death and Dying psychologist Elisabeth Kubler-Ross identified five stages through which people reach acceptance of life’s traumatizing events: denial, anger, bargaining, depression, and acceptance. We shall follow her path.

Five stages of accepting constructive mathematics is a nifty introduction to constructive mathematics by Andrej Bauer. Mentioned by Martin Escardo on the Agda mailing list. 

by Philip Wadler at August 22, 2020 08:43 AM

Donnacha Oisín Kidney

Some More List Algorithms

Posted on August 22, 2020
Tags: Haskell

It’s been a while since I last wrote a post (I’ve been busy with my Master’s thesis, which is nearly done), so I thought I would quickly throw out some fun snippets of Haskell I had reason to write over the past couple of weeks.

Zipping With Folds

For some reason, until recently I had been under the impression that it was impossible to fuse zips efficiently. In other words, I thought that zip was like tail, in that if it was implemented using only foldr it would result in an asymptotic slowdown (tail is normally O(1); implemented as a fold it’s O(n)).

Well, it seems like this is not the case. The old zip-folding code I had looks to me now to be the correct complexity: it’s related to How To Zip Folds, by Oleg Kiselyov (although I’m using a different version of the function which can be found on the mailing list). The relevant code is as follows:

newtype Zip a b =
  Zip { runZip :: a -> (Zip a b -> b) -> b }

zip :: [a] -> [b] -> [(a,b)]
zip xs ys = foldr xf xb xs (Zip (foldr yf yb ys))
  where
    xf x xk yk = runZip yk x xk
    xb _ = []
    yf y yk x xk = (x,y) : xk (Zip yk)
    yb _ _ = []
There are apparently reasons why the Prelude’s zip isn’t allowed to fuse both of its arguments: I don’t fully understand them, however. (In particular, the linked page says that the fused zip would have different strictness behaviour, but the version above seems to function properly.)
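As a quick sanity check that the folded zip behaves like the ordinary one, the definitions above can be run as a self-contained program:

```haskell
import Prelude hiding (zip)

newtype Zip a b =
  Zip { runZip :: a -> (Zip a b -> b) -> b }

-- zip where BOTH arguments are consumed by foldr
zip :: [a] -> [b] -> [(a,b)]
zip xs ys = foldr xf xb xs (Zip (foldr yf yb ys))
  where
    xf x xk yk = runZip yk x xk
    xb _ = []
    yf y yk x xk = (x,y) : xk (Zip yk)
    yb _ _ = []

main :: IO ()
main = print (zip [1,2,3 :: Int] "ab")  -- stops at the shorter list
```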

This version of zip leads to some more fun solutions to folding puzzles, like this one:

Write a function that is equivalent to:

zipFromEnd xs ys = reverse (zip (reverse xs) (reverse ys))

Without creating any intermediate lists.

The desired function is interesting in that, instead of lining up lists according to their first elements, it aligns them according to the ends.

>>> zipFromEnd [1,2,3] "abc"
[(1,'a'),(2,'b'),(3,'c')]

>>> zipFromEnd [1,2,3] "abcd"
[(1,'b'),(2,'c'),(3,'d')]

>>> zipFromEnd [1,2,3,4] "abc"
[(2,'a'),(3,'b'),(4,'c')]

The solution here is just to use foldl, and we get the following:

zipFromEnd :: [a] -> [b] -> [(a,b)]
zipFromEnd xs ys = foldl xf xb xs (Zip (foldl yf yb ys)) []
  where
    xf xk x yk = runZip yk x xk
    xb _ zs = zs
    yf yk y x xk zs = xk (Zip yk) ((x,y) : zs)
    yb _ _ zs = zs

Another function which is a little interesting is the “zip longest” function:

zipLongest :: (a -> a -> a) -> [a] -> [a] -> [a]
zipLongest c xs ys = foldr xf xb xs (Zip (foldr yf yb ys))
  where
    xf x xk yk = runZip yk (Just x) xk
    xb zs = runZip zs Nothing xb
    yf y yk Nothing  xk =     y : xk (Zip yk)
    yf y yk (Just x) xk = c x y : xk (Zip yk)
    yb Nothing  _  = []
    yb (Just x) zs = x : zs (Zip yb)

Finally, all of these functions rely on the Zip type, which is not strictly positive. This means that we can’t use it in Agda, and it’s tricky to reason about: I wonder what it is about functions for deforestation that tends to lead to non-strictly-positive datatypes.

Lexicographic Permutations

The next puzzle I was interested in was finding the next lexicographic permutation of some string. In other words, given some string <semantics>s<annotation encoding="application/x-tex">s</annotation></semantics>, you need to find another string <semantics>t<annotation encoding="application/x-tex">t</annotation></semantics> that is a permutation of <semantics>s<annotation encoding="application/x-tex">s</annotation></semantics> such that <semantics>s<t<annotation encoding="application/x-tex">s < t</annotation></semantics>, and that there is no string <semantics>u<annotation encoding="application/x-tex">u</annotation></semantics> that is a permutation of <semantics>s<annotation encoding="application/x-tex">s</annotation></semantics> and <semantics>s<u<t<annotation encoding="application/x-tex">s < u < t</annotation></semantics>. The Wikipedia article on the topic is excellent (and clear), but again the algorithm is described in extremely imperative terms:

  1. Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
  2. Find the largest index l greater than k such that a[k] < a[l].
  3. Swap the value of a[k] with that of a[l].
  4. Reverse the sequence from a[k + 1] up to and including the final element a[n].

The challenge here is to write this algorithm without doing any indexing: indexing is expensive on Haskell lists, and regardless it is cleaner to express it without.
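Before looking at an index-free version, it helps to have a naive specification to check against (my addition, not from the post): sort all permutations and take the first one strictly greater than the input.

```haskell
import Data.List (permutations, sort)

-- O(n!) reference: the smallest permutation strictly greater than s, if any.
nextLexPermSpec :: Ord a => [a] -> Maybe [a]
nextLexPermSpec s =
  case dropWhile (<= s) (sort (permutations s)) of
    []    -> Nothing
    (t:_) -> Just t
```

For example, nextLexPermSpec "abc" is Just "acb" and nextLexPermSpec "cba" is Nothing; an efficient implementation should agree with this on small inputs.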

I managed to work out the following:

import Data.Maybe (fromMaybe)

nextLexPerm :: Ord a => [a] -> Maybe [a]
nextLexPerm []     = Nothing
nextLexPerm (x:xs) = go1 x xs
  where
    go1 _ []     = Nothing
    go1 i (j:xs) = maybe (go2 i j [] xs) (Just . (i:)) (go1 j xs)

    go2 i j xs ys
      | j <= i    = Nothing
      | otherwise = Just (fromMaybe (j : foldl (flip (:)) (i:xs) ys) (go3 i (j:xs) ys))

    go3 _ _  []     = Nothing
    go3 i xs (j:ys) = go2 i j xs ys

Circular Sorting

This comes from the Rosetta Code problem Circle Sort. This is a strange little sorting algorithm, where basically you compare elements on opposite sides of an array, swapping them as needed. The example given is the following:

6 7 8 9 2 5 3 4 1

First we compare (and swap) 6 and 1, and then 7 and 4, and so on, until we reach the middle. At this point we split the array in two and perform the procedure on each half. After doing this once it is not the case that the array is definitely sorted: you may have to repeat the procedure several (but finitely many) times, until no swaps are performed.

I have absolutely no idea what the practical application for such an odd algorithm would be, but it seemed like an interesting challenge to try to implement it in a functional style (i.e. without indices or mutation).

The first thing we have to do is fold the list in half, so we pair up the right items. We’ve actually seen an algorithm to do this before: it’s often called the “tortoise and the hare�, and our previous use was to check if a list was a palindrome. Here’s how we implement it:

halve :: [a] -> [(a,a)]
halve xs = snd (go xs xs)
  where
    go (y:ys) (_:_:zs) = f y (go ys zs)
    go (_:ys) [_]      = (ys,[])
    go ys     []       = (ys,[])
    f x (y:ys,zs) = (ys, (x,y) : zs)

>>> halve [6,7,8,9,2,5,3,4,1]
[(6,1),(7,4),(8,3),(9,5)]

Notice that the 2 in the very middle of the list is missing from the output: I’ll describe how to handle that element later on. In the above piece of code, that 2 actually gets bound to the underscore (in (_:ys)) in the second clause of go.

Next we need to do the actual swapping: this is actually pretty straightforward, if we think of the algorithm functionally, rather than imperatively. Instead of swapping things in place, we are building up both halves of the new list, so the “swap� operation should simply decide which list each item goes into.

halve :: Ord a => [a] -> ([a],[a])
halve xs = tl (go xs xs)
  where
    tl (_,lte,gt) = (lte,gt)
    go (y:ys) (_:_:zs) = swap y (go ys zs)
    go (_:ys) [_]      = (ys,[],[])
    go ys     []       = (ys,[],[])
    swap x (y:ys,lte,gt)
      | x <= y    = (ys, x : lte, y : gt)
      | otherwise = (ys, y : lte, x : gt)

At this point we can also see what to do with the middle item: we’ll put it in the higher or lower list, depending on a comparison with the element it’s next to.

halve :: Ord a => [a] -> ([a],[a])
halve xs = tl (go xs xs)
  where
    tl (_,lte,gt) = (lte,gt)
    go (y:ys) (_:_:zs) = swap y (go ys zs)
    go ys     []       = (ys,[],[])
    go (y:ys) [_]      = (ys,[y | e],[y | not e])
      where e = y <= head ys
    swap x (y:ys,lte,gt)
      | x <= y    = (ys, x : lte, y : gt)
      | otherwise = (ys, y : lte, x : gt)

Next, we can use this as a helper function in the overall recursive function.

circleSort :: Ord a => [a] -> [a]
circleSort [] = []
circleSort [x] = [x]
circleSort xs =
  let (lte,gt) = halve xs
  in circleSort lte ++ circleSort (reverse gt)

This function isn’t correct (yet). As we mentioned already, we need to run the circle sort procedure multiple times until no swaps occur. We can add in the tracking of swaps like so:

circleSort :: Ord a => [a] -> [a]
circleSort xs = if swapped then circleSort ks else ks
  where
    (swapped,ks) = go xs
    go []  = (False, [])
    go [x] = (False, [x])
    go xs  =
      let (s,_,lte,gt) = halve xs xs
          (sl,lte') = go lte
          (sg,gt' ) = go (reverse gt)
      in (s || sl || sg, lte' ++ gt')
    halve (y:ys) (_:_:zs) = swap y (halve ys zs)
    halve ys     []       = (False,ys,[],[])
    halve (y:ys) [_]      = (False,ys,[y | e],[y | not e])
      where e = y <= head ys
    swap x (s,y:ys,lte,gt)
      | x <= y    = (s   ,ys, x : lte, y : gt)
      | otherwise = (True,ys, y : lte, x : gt)

So at this point we actually have a working implementation of the function, which avoids indices as intended. It has some problems still, though. First, we call ++, when we could be using difference lists. Here’s the solution to that:

circleSort :: Ord a => [a] -> [a]
circleSort xs = if swapped then circleSort ks else ks
  where
    (swapped,ks) = go xs []
    go []  zs = (False, zs)
    go [x] zs = (False, x:zs)
    go xs  zs =
      let (s,_,lte,gt) = halve xs xs
          (sl,lte') = go lte gt'
          (sg,gt' ) = go (reverse gt) zs
      in (s || sl || sg, lte')
    halve (y:ys) (_:_:zs) = swap y (halve ys zs)
    halve ys     []       = (False,ys,[],[])
    halve (y:ys) [_]      = (False,ys,[y | e],[y | not e])
      where e = y <= head ys
    swap x (s,y:ys,lte,gt)
      | x <= y    = (s   ,ys, x : lte, y : gt)
      | otherwise = (True,ys, y : lte, x : gt)

Next we can actually rewrite the go function to allow for a certain amount of tail recursion (kind of):

circleSort :: Ord a => [a] -> [a]
circleSort xs = if swapped then circleSort ks else ks
  where
    (swapped,ks) = go xs (False,[])
    go []  (s,ks) = (s,ks)
    go [x] (s,ks) = (s,x:ks)
    go xs  (s,ks) =
      let (s',_,ls,rs) = halve s xs xs
      in go ls (go (reverse rs) (s',ks))
    halve s (y:ys) (_:_:zs) = swap y (halve s ys zs)
    halve s ys     []       = (s,ys,[],[])
    halve s (y:ys) [_]      = (s,ys,[y | e],[y | not e])
      where e = y <= head ys
    swap x (s,y:ys,ls,rs)
      | x <= y    = (   s,ys,x:ls,y:rs)
      | otherwise = (True,ys,y:ls,x:rs)

Next, we call reverse: but we can avoid the reverse by passing a parameter which tells us which direction we’re walking down the list. Since the swapping logic is symmetric, we’re able to just invert some of the functions. It is a little tricky, though:

import Data.Bool (bool)

circleSort :: Ord a => [a] -> [a]
circleSort xs = if swapped then circleSort ks else ks
  where
    (swapped,ks) = go False xs (False,[])
    go d []  (s,ks) = (s,ks)
    go d [x] (s,ks) = (s,x:ks)
    go d xs  (s,ks) =
      let (s',_,ls,rs) = halve d s xs xs
      in go False ls (go True rs (s',ks))
    halve d s (y:ys) (_:_:zs) = swap d y (halve d s ys zs)
    halve d s ys     []       = (s,ys,[],[])
    halve d s (y:ys) [_]      = (s,ys,[y | e],[y | not e])
      where e = y <= head ys
    swap d x (s,y:ys,ls,rs)
      | bool (<=) (<) d x y = (    d || s,ys,x:ls,y:rs)
      | otherwise           = (not d || s,ys,y:ls,x:rs)

So there it is! The one-pass, purely functional implementation of circle sort. Very possibly the most useless piece of code I’ve ever written.
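As a final sanity check (not in the original post), the last version can be packaged as a self-contained program, with the one import it needs, and compared against Data.List.sort:

```haskell
import Data.Bool (bool)
import Data.List (sort)

-- The final circle sort from above, made self-contained.
circleSort :: Ord a => [a] -> [a]
circleSort xs = if swapped then circleSort ks else ks
  where
    (swapped,ks) = go False xs (False,[])
    go _ []  (s,ks) = (s,ks)
    go _ [x] (s,ks) = (s,x:ks)
    go d xs  (s,ks) =
      let (s',_,ls,rs) = halve d s xs xs
      in go False ls (go True rs (s',ks))
    halve d s (y:ys) (_:_:zs) = swap d y (halve d s ys zs)
    halve _ s ys     []       = (s,ys,[],[])
    halve _ s (y:ys) [_]      = (s,ys,[y | e],[y | not e])
      where e = y <= head ys
    swap d x (s,y:ys,ls,rs)
      | bool (<=) (<) d x y = (    d || s,ys,x:ls,y:rs)
      | otherwise           = (not d || s,ys,y:ls,x:rs)

main :: IO ()
main = print (circleSort [6,7,8,9,2,5,3,4,1])  -- the example array from above
```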

by Donnacha Oisín Kidney at August 22, 2020 12:00 AM

August 20, 2020

Tweag I/O

How Nix grew a marketing team

Recently I witnessed the moment when a potential Nix user reached eureka. The moment where everything regarding Nix made sense. My friend, now a Nix user, screamed with joy: “We need to Nix–ify everything!”

Moments like these reinforce my belief that Nix is a solution from — and for — the future. A solution that could reach many more people, only if learning about Nix didn’t demand investing as much time and effort as it does now.

I think that Nix has the perfect foundation for becoming a success but that it still needs better marketing. Many others agree with me, and that’s why we formed the Nix marketing team.

I would like to convince you that indeed, marketing is the way to go and that it is worth it. Therefore, in this post I will share my thoughts on what kind of success we aim for, and which marketing efforts we are currently pursuing. The marketing team is already giving its first results, and with your input, we can go further.

What does success look like?

At the time of writing this post, I have been using Nix for 10 years. I organized one and attended most of the Nix conferences since then, and talked to many people in the community. All of this does not give me the authority to say what success for Nix looks like, but it does give me a great insight into what we — the Nix community — can agree on.

Success for Nix would mean that the next time you encounter a project on GitHub, it already contains a default.nix for you to start developing. Success would mean that the next time you run a server in the cloud, NixOS is offered to you. Or, even more ambitiously, it would mean other communities recognising Nix as a de facto standard that improves the industry as a whole.

To some, this success statement may seem very obvious. However, it is important to say it out loud and often, so we can keep focus, and keep working on the parts of Nix that will contribute the most to this success.

The importance of marketing

Before we delve into what Nix still lacks, I would like to say that we — engineers and developers — should be aware of our bias against marketing. This bias becomes clear when we think about what we think are the defining aspects for a project’s success. We tend to believe that code is everything, and that good code leads to good results. But what if I tell you that good marketing constitutes more than 50% of the success of a project? Would you be upset? We have to overcome this bias, since it prevents us from seeing the big picture.

Putting aside those Sunday afternoons when I code for the pure joy of stretching my mind, most of the time I simply want to solve a problem. The joy of seeing others realize that their problem is not a problem anymore is one of the best feelings I have experienced as a developer. This is what drives me. Not the act of coding itself, but the act of solving the problem. Coding is then only part of the solution. Others need to know about the existence of your code, understand how it can solve their problem, and furthermore they need to know how to use it.

That is why marketing, and, more generally, non-technical work, is at least as important as technical work. Documentation, writing blog posts, creating content for the website, release announcements, conference talks, conference booths, forums, chat channels, email lists, demo videos, use cases, swag, search engine optimisation, social media presence, engaging with the community… These are all crucial parts of any successful project.

Nix needs better marketing, from a better website to better documentation, along with all the ingredients mentioned above. If we want Nix to grow as a project we need to improve our marketing game, since this is the area of work that is historically receiving the least amount of attention. And we are starting to work on it. In the middle of March 2020, a bunch of us got together and announced the creation of the Nix marketing team. Since then we meet roughly every two weeks to discuss and work on non-technical challenges that the Nix project is facing.

But before the Nix marketing team could start doing any actual work we had to answer an important question:

What is Nix?

I want to argue that the Nix community is still missing an answer to an apparently very simple question: What is Nix?.

The reason what is Nix? is a harder question than it may at first appear is that any complete answer has to tell us what and who Nix is for. Knowing the audience and primary use cases is a precondition to improving the website, documentation, or even Nix itself.

This is what the Nix marketing team discussed first. We identified the following audiences and primary use cases:

  1. Development environments (audience: developers)
  2. Deploying to the cloud (audience: system administrators)

It doesn’t mean other use cases are not important — they are. We are just using the primary use cases as a gateway drug into the rest of the Nix ecosystem. In this way, new users will not be overwhelmed with all the existing options and will have a clear idea where to start.

Some reasons for selecting the two use cases are:

  • Both use cases are relatively polished solutions. Clearly, there is still much to be improved, but currently these are the two use cases with the best user experience in the Nix ecosystem.
  • One use case is a natural continuation of another. First, you develop and then you can use the same tools to package and deploy.
  • Market size for both use cases is huge, which means there is a big potential.

A differentiating factor — why somebody would choose Nix over others — is Nix’s ability to provide reproducible results. The promise of reproducibility is the aspect that already attracts the majority of Nix’s user base. From this, we came up with a slogan for Nix:

Reproducible builds and deploys

With the basic question answered we started working.

What has been done so far? How can I help?

So far, the Marketing team focused on improving the website:

  1. Moved the website to Netlify. The important part is not switching to Netlify, but separating the website from the Nix infrastructure. This removes the fear of a website update bringing down parts of Nix infrastructure.
  2. Simplified navigation. If you remember, the navigation was different for each project that was listed on the website. We removed the project differentiation and unified navigation. This shows the Nix ecosystem as a unified story rather than a collection of projects. One story is easier to follow than five.
  3. Created a new learn page. Discoverability of documentation was a huge problem. Links to popular topics in manuals are now more visible. Some work on entry level tutorials has also started. Good and beginner friendly learning resources are what is going to create the next generation of Nix users.
  4. Created new team pages. We collected information about different official and less official teams working on Nix. The work here is not done, but it shows that many teams don’t have clear responsibilities. It shows how decisions are made and invites new Nix users to become more involved with the project.
  5. Improved landing page. Instead of telling the user what Nix is, they will experience it from the start. The landing page is filled with examples that will convince visitors to give Nix a try.

The work of the marketing team has just started, and there is still a lot to be done. We are working hard on redesigning the website and improving the messaging. The roadmap will tell you more about what to expect next.

If you wish to help, come and say hi to #nixos-marketing on


Marketing, and non-technical work, is all too often an afterthought for developers. I really wish it weren’t the case. Having clearly defined problems, audience and strategy should be as important to us as having clean and tested code. This is important for Nix. This is important for any project that aims to succeed.

August 20, 2020 12:00 AM

August 17, 2020

Michael Snoyman

Stackage for Rust?

Every once in a while, a friend will share a comment from social media discussing in broad terms Stackage and Rust. As the original founder of the Stackage project, it’s of course fun to see other languages discussing Stackage. And as a Rustacean myself, it’s great to see it in the context of Rust.

This topic has popped up enough times now, and I’ve thought about it off-and-on for long enough now, that I thought a quick blog post on the topic would make sense.

Before diving in, I want to make something very clear. I know exactly what drove me to create Stackage for Haskell (and I’ll describe it below). And I’ve come to some conclusions about Stackage for Rust, which I’ll also describe below. But I want to acknowledge that while I do write Rust, and I have Rust code running in production, it’s nowhere near the level of Haskell code I have in production. I am well aware of the fact that my opinions on Rust may not match up with others in the Rust community.

I would be more than happy to engage with any Rustaceans who are interested in discussing this topic more, especially if you think I’m wrong in what I say below. Not only do I love the Rust language, but I find these kinds of package coordination topics fascinating, and would love to interact with others on them.

What is Stackage?

Stackage’s two word slogan is “Stable Hackage.” Hackage is the de facto standard repository for open source Haskell libraries and tools. It would easily be described as Haskell’s version of Crates. Stackage is a project which produces versioned snapshots of packages, together with underlying compiler versions, which are guaranteed to meet a basic buildability standard. That essentially comes down to:

  • On a Linux system, using a Docker container we’ve configured with a number of helper system libraries, the package compiles
  • Unless told otherwise, the docs build successfully, the tests build and run successfully, and the benchmarks build successfully

There are plenty of holes in this. Packages may not build on Windows. There may be semantic bugs that aren’t picked up by the test suite. It’s far from saying “we guarantee this package is perfect.” It’s saying “we’ve defined an objective standard of quality and are testing for it.” I’m a strong believer that having clearly defined specifications like that is a Very Good Thing.

There are plenty of other details about how Stackage operates. We produce nightly snapshots that always try to grab the latest versions of packages, unless specifically overridden. We produce Long Term Support (LTS) versions that avoid new major versions of packages during a series. We manually add and remove packages from the “skipped tests” based on whether the tests will run correctly in our environment. But most of those kinds of details can be ignored for our discussion today. To summarize, if you want to understand this discussion, think of it this way:

Stackage defines snapshots where open source libraries and tools are tested to build together and pass (most of) their test suites

Why I created Stackage

It’s vital to understand this part to understand my position on a Stackage for Rust. Stackage has an interesting history going through a few previous projects, in particular Haskell Platform and Yesod Platform. Ignoring those for the moment, and pretending that Stackage emerged fully formed as it is today, let’s look at the situation before Stackage.

The Haskell community has always embraced breakage more than other communities. We rely heavily on strong types to flag major changes. The idea of a build time failure with a clear compiler error message is much less intimidating than a silent behavior change or a type error that only appears at runtime. In our pursuit of elegance, we will typically break APIs left, right, and center. I, more than most, have been historically guilty of that attitude. (I’m now of a very different opinion though.)

This attitude goes up the tree to GHC (the de facto standard Haskell compiler), which regularly breaks large swaths of the open source code base with new versions. This kind of activity typically leads to a large cascade effect of releases of userland packages, which ultimately leads to a lot of churn in the ecosystem.

At the time I created Stackage, my primary focus was the Yesod Web Framework. I had made a decision to design Yesod around a large collection of smaller libraries to allow for easier modularity. There are certainly arguments to be made for and against this decision, but for now, the important point is: Yesod relied on 100-150 packages. And I ran into many cases where end users simply could not build Yesod.

At the time, the only real Haskell build tool was cabal. It featured a dependency solver, and respected version bounds for dependencies. In principle, this would mean that cabal should solve the problem for Yesod just fine. In practice, it didn’t, for two important reasons:

  1. At the time, there were some fundamental flaws in cabal’s dependency solver. Even when a valid build plan was available, it would on a regular basis take more than an hour to run without finding it.
  2. Many packages did not include any information on dependency versions in their metadata files, instead simply depending on any version of a dependency.

For those familiar with it: point (2) is definitely the infamous PVP debates, which I’ve been involved with a lot. Again, I’m not discussing the trade-offs of the PVP, simply stating historical facts that led to the creation of Stackage.

So, as someone who believed (and still believes) Yesod is a good thing, Haskell is a good thing, and we should make it as easy as possible to get started with Haskell, solving these “dependency hell” issues was worthwhile. Stackage did this for me, and I believe a large percentage of the Haskell community.

Later, we also created the Stack build tool as an alternative to cabal, which defaulted to this snapshot-based approach instead of dependency solving. But that topic is also outside the scope of this blog post, so I won’t be addressing it further.

Advantages of Stackage

There are some great advantages to Stackage, and it’s worth getting them out there right now:

  • Virtually no dependency hell problems. If you specify a Stackage snapshot and only use packages within that snapshot, you’re all but guaranteed that your dependencies will all build correctly, no futzing required. Depending on how well maintained the libraries and their metadata are in your language of choice, this may either be a minor convenience, or a massive life changing impact.
  • It’s possible to create documentation sites providing a coherent set of docs for a single package, where links in between packages always end up with compatible versions of packages.
  • For training purposes (which I do quite a bit of), it’s an amazing thing to say “we’re using LTS version 15.3” and know everyone is using the same compiler version and library versions.
  • Similarly, for bug reports, it’s great to say “please provide a repo and specify your Stackage snapshot.”
  • It’s possible to write Haskell scripts that depend on extra library dependencies and have a good guarantee that they’ll continue to work identically far into the future.

That sounds great, but let’s not forget the downsides.

Disadvantages of Stackage

Not everything is perfect in Stackage land. Here are some of the downsides:

  • It’s a maintenance burden, like anything else. I think we’ve done a decent job of automating it, and with the wonderful team of Stackage Curators the burden is spread across multiple people. But like any project, it does require time, investment, and upkeep.
  • Stackage doesn’t nicely handle the case of using a package outside of the snapshot.
  • It introduces some level of “community wide mutex” where you need to either wait for dependent libraries to update to support newer dependencies, or drop those libraries. It’s rarely a big fight these days, since we try to stick to some kind of defined timetables, but it can be frustrating for people, and it has sometimes led to arguments.
    • This brings up one of my guiding principles: avoid these kinds of shared resources whenever possible

OK, so now we understand the pluses and minuses of Stackage. And in the context of Haskell, I believe it was at the time absolutely the right thing to do. And today, I would not want to stop using Stackage in Haskell at all.

But the story is different in Rust. Let me step through some of the motivators in Haskell and how they differ in Rust. Again, keep in mind that my experience in Haskell far outstrips my experience in Rust, and I very much welcome others to weigh in differently than me.

Cargo/SemVer vs PVP

What’s wrong with this line from a Haskell Cabal file?

build-depends: base, bytestring, text

Unless you’re a hard core Haskeller, probably nothing obvious jumps out at you. Now look at this snippet of a perfectly valid Cargo.toml file and tell me what’s wrong:

random = ""
tokio = ""

Even a non-Rustacean will probably come up with the question “what’s with the empty strings?” Both of these snippets mean “take whatever version of these libraries you feel like.” But the syntax for Haskell encourages this. The syntax for Rust discourages it. Instead, the following Cargo.toml snippet looks far more natural:

random = "0.12.2"
tokio = "0.2.22"

When I first started writing Haskell, I had no idea about the “correct” way to write dependency bounds. Many other people didn’t either. Ultimately, years into my Haskell experience, I found out that the recommended approach was the PVP, which is essentially Haskell’s version of SemVer. (To be clear: the PVP predates SemVer; this was not a case of Not Invented Here syndrome.)

Unfortunately, the PVP is quite a complicated beast to get right. I’ve heard from multiple Haskellers, with a wide range of experience, that they don’t know how to properly version their packages. To this day, the most common response I’m likely to give on a Haskell pull request is “please change the version number, this doesn’t comply with the PVP.” As a result, we have a situation where the easy, obvious thing to do is to include no version information at all, and the “correct” thing to do is really, really hard to get right.
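To make the PVP’s difficulty concrete, here is a tiny illustration (plain Python, not any real cabal tooling; the function name is mine) of the bound you would derive for a dependency. Under the PVP, the “major version” is the first *two* components of a version number, so depending on version 2.23.0 yields the range >=2.23.0 && <2.24:

```python
def pvp_bounds(version: str) -> str:
    """Derive PVP-style bounds for a dependency at the given version.

    Under the PVP the major version is the first TWO components
    (A.B in A.B.C), unlike SemVer where it is only the first.
    """
    parts = [int(p) for p in version.split(".")]
    # Bump the second component to form the exclusive upper bound.
    upper = f"{parts[0]}.{parts[1] + 1}"
    return f">={version} && <{upper}"

print(pvp_bounds("2.23.0"))    # >=2.23.0 && <2.24
print(pvp_bounds("0.10.8.2"))  # >=0.10.8.2 && <0.11
```

Even this sketch glosses over the hard parts: knowing when a change to *your* package requires a major, minor, or patch bump is where people actually get tripped up.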

By contrast, in Rust, the simple thing to do is mostly correct: just stick the current version of the package in the double quotes. The format of the file basically begs you to do it. It feels weird not to do it. And even though it’s not perfect, it’s pretty darn good.
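For comparison, here is a simplified sketch (plain Python, not Cargo’s actual implementation) of Cargo’s default “caret” interpretation of a bare version like "0.12.2": a candidate is compatible if it is at least the requested version and does not change the left-most non-zero component:

```python
def caret_matches(requirement: str, candidate: str) -> bool:
    """Simplified model of Cargo's default (caret) version matching:
    compatible = candidate >= requirement, and the left-most non-zero
    component of the requirement is unchanged."""
    req = [int(p) for p in requirement.split(".")]
    cand = [int(p) for p in candidate.split(".")]
    if cand < req:
        return False
    # Find the left-most non-zero component of the requirement;
    # everything up to and including it must match exactly.
    for i, part in enumerate(req):
        if part != 0:
            return cand[:i + 1] == req[:i + 1]
    return True

caret_matches("0.12.2", "0.12.9")  # True: 0.12.x patch bump is fine
caret_matches("0.12.2", "0.13.0")  # False: in 0.x, a minor bump is breaking
caret_matches("1.2.3", "1.9.0")    # True: same major version
```

So sticking the current version in the quotes gives you a sensible compatibility range for free, which is exactly why the simple thing to do in Cargo is mostly correct.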

So first point against a Stackage for Rust: Cargo’s format encourages proper dependency information to be maintained.

Dependency solver works

This is where my larger Haskell experience may be skewing my vision. As I mentioned, I’ve had huge problems historically with cabal’s dependency solver. It has improved massively since then. But between the dependency solver itself having historic bugs, and the lack of good dependency information across Hackage (yes, in large part because of people like me and the presence of Stackage), I don’t like to rely on dependency solving at all anymore.

By contrast, in Rust, I’ve rarely run into problems with the dependency solver. I have gotten build plans that have failed to build. I have had to manually modify Cargo.toml files to specify different versions. I have been bitten by libraries accidentally breaking compatibility within a major version. But relative to my pains with Haskell, it’s small potatoes.

So next point: Dependency solving empirically works pretty well in Rust

Culture of compatibility

In contrast to Haskell, Rust has much more of a culture around keeping backwards and forwards compatibility. The compiler team tries very hard to avoid breaking existing code. And when they do, they try to make it possible to rewrite the broken code in a way that’s compatible with both the old and new version of the compiler. I’ve seen a lot of the same attitude from Rust userland packages.

This may sound surprising to Rustaceans, who probably view the Rust library ecosystem as a fast-changing landscape. I’m just speaking in relative terms to Haskell.

As a result, the level of churn in Rust isn’t nearly as high as in Haskell, and therefore even with imperfect dependency information a dependency solving approach is likely to work out pretty well.

Less library churn means code builds more reliably

Lock files

A minor point, but Cargo creates a Cargo.lock file that pins the exact versions of your dependencies. It’s up to you whether you check that file into your code repo, but it’s trivially easy to do so. Paired together with rust-toolchain files, you can get pretty close to reproducible build plans.
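To make the reproducibility claim concrete, here is the rough shape of a Cargo.lock entry (a sketch; the package and checksum values below are placeholders, not taken from any real project):

```toml
# Cargo.lock is generated by Cargo; each resolved dependency gets an
# exact version plus a checksum of the crate archive.
[[package]]
name = "rand"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "<sha256 of the crate archive>"
```

A rust-toolchain file is even simpler: its entire contents are a single line such as `1.45.2`, pinning the compiler version for everyone building the repository.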

I can’t speak to the latest versions of cabal, but historically this has not been the default mode of operation for cabal. And I don’t want to get into the details of Hackage revisions and timestamp-based development, but suffice it to say I have no real expectations that you can fully rely on cabal freeze files. Stack does support fully reproducible build plans and lock files, but that’s essentially via the same snapshot mechanism as Stackage itself.

In other words: In the presence of proper lock files, you regain reproducible builds without snapshots


I hope this post frames my thoughts around Stackage, Haskell, and Rust pretty well. My basic conclusion is:

  • Snapshots are a good thing in general
  • Snapshots are less vital when you get other things right
  • Rust has gotten a lot of things right that Haskell didn’t
  • As a result, the marginal benefit of snapshots in Rust is nowhere near the marginal benefit in Haskell

To restate: my calculations here are biased by having much more experience with Haskell than Rust codebases. If I was running the same number of lines of Rust code as Haskell, or teaching the same number of courses, or maintaining as many open source projects, I may not come to this conclusion. But based on what I see right now, the benefits of a snapshot system in Rust do not outweigh the costs of setting up and maintaining one.

All that said: I’m still very interested to hear others’ takes on this, both if you agree with my conclusion, and even more if you don’t. Please feel free to ping me on social media discussions, the Disqus comments below, or my personal email (michael at snoyman dot com).

August 17, 2020 06:15 AM

August 16, 2020

Alson Kemp

Cast Iron Skillets: meditations on seasoning

TL;DR: Do your worst and hack-and-slash at your cast iron skillet or carefully season it and exhibit it to your friends. Doesn’t matter; it’ll still be a cast iron skillet. Also, don’t kill seagulls and seals.

Not sure anyone will read this but I don’t particularly care: it’s a bit of light in the darkness of Covid.

I do everything with my cast iron pans. Tomato, wine, whatever. In contrast, many cast iron aficionados advocate loads of rules about how to use cast iron:

  • Don’t wash it with soap!
  • Don’t scrub it!
  • Don’t cook acidic foods in it!
  • In fact, don’t look at it! That’s quite enough!

Sure, follow those if you take an impractical view of very practical cast iron cookware. These remind me of so many little facts which are exploded into hard and fast rules, which are then passionately advocated without regard to the rule’s utility… Cast iron seasoning zealotry…

What’s the absolute worst thing you can do to a cast iron skillet? Remove the seasoning. Maybe let it rust a bit? Guess what? You still have a frickin’ cast iron skillet. These things get picked up from junk yards, left outside in the rain, whatever. How are people cooking on their great-grandmas’ cast iron skillets if these things are so delicate? Settlers carried these things on pack mules up yonder pass and all that. It’s fancy pig iron but it’s basically pig iron.

Seasoning is certainly important for cooking but most folks seem to think seasoning is important for showing. Seasoning is easily reapplied and can be applied however well and however often you desire. I don’t desire to do either much, so I don’t: oops, I forgot to clean out the skillet last night. Guess what? I still have a great frickin’ cast iron skillet; gave it a rinse but that wasn’t enough; scrubbed the hell out of it and removed the seasoning… Guess what? You still have a great frickin’ cast iron skillet; left it in the garage for five years? Guess what? You still have a great frickin’ cast iron skillet.

If the skillet looks a little sad, I scrub it clean of any sticky bits (and probably a bunch of seasoning), rub some vegetable oil (blah, blah, flax seed oil) around on it with a paper towel and throw it in the oven for an hour (but I forget and it sits in a cold oven overnight…) or leave it on the stove with a low gas flame for 30 minutes. I dunno. It’s pretty non-stick and I don’t show it to people expecting them to gaze jealously at my skillet’s perfect seasoning.

The alternative to a bit of fiddling with cast iron is regular non-stick pans but the non-stick coating never seems to last that long and I want to use metal utensils to hack at my food. Then I have to throw away the no-longer-non-stick non-stick pan and I’m sure someone will tell me a seagull or seal will eat it and die. That seems like a bad thing so I just keep banging on my cast iron skillet and then re-seasoning it. With just 5 minutes per month you, too, could save a seagull or seal. Don’t be a bad person who kills seagulls and seals. Also, don’t be a cast iron seasoning missionary (yes, I see the irony).


by alson at August 16, 2020 04:37 PM

August 15, 2020

Stackage Blog

Switching nightlies to GHC 8.10.2 and a workaround for Windows

The nightly builds are now using GHC 8.10.2 which is known to be broken on Windows. This bug should be fixed in GHC 8.10.3.

To help alleviate some of the problems related to this we wanted to mention a workaround you can use until GHC 8.10.3 comes out.

Javier Neira (@jneira):

A direct workaround would be to change in X:\path\to\ghc-8.10.2\lib\settings:

("Merge objects command", " C:/GitLabRunner/builds/2WeHDSFP/0/ghc/ghc/inplace/mingw/bin/ld.exe")

to:

("Merge objects command", "X:/path/to/ghc-8.10.2/mingw/bin/ld.exe")


Thanks to @mpilgrem for linking the workaround and suggesting we write this blog post!

August 15, 2020 12:01 PM

August 13, 2020

Philip Wadler

Positive Obsession and Furor Scribendi


I've just completed reading a trio of books by Octavia Butler: The Parable of the Sower, The Parable of the Talents, and Bloodchild and Other Stories. The latter contains two essays, Positive Obsession and Furor Scribendi, which I heartily recommend. Replace writer by researcher and everything remains true.

Persistence is essential to any writer--the persistence to finish your work, to keep writing in spite of rejection, to keep reading, studying, submitting work for sale. But stubbornness, the refusal to change unproductive behavior or to revise unsalable work can be lethal to your writing hopes.

Octavia Butler, "Furor Scribendi"

by Philip Wadler at August 13, 2020 04:00 PM

August 12, 2020

Tweag I/O

Developing Python with Poetry & Poetry2nix: Reproducible flexible Python environments

Most Python projects are in fact polyglot. Indeed, many popular libraries on PyPi are Python wrappers around C code. This applies particularly to popular scientific computing packages, such as scipy and numpy. Normally, this is the terrain where Nix shines, but its support for Python projects has often been labor-intensive, requiring lots of manual fiddling and fine-tuning. One of the reasons for this is that most Python package management tools do not give enough static information about the project, not offering the determinism needed by Nix.

Thanks to Poetry, this is a problem of the past — its rich lock file offers more than enough information to get Nix running, with minimal manual intervention. In this post, I will show how to use Poetry, together with Poetry2nix, to easily manage Python projects with Nix. I will show how to package a simple Python application both using the existing support for Python in Nixpkgs, and then using Poetry2nix. This will both show why Poetry2nix is more convenient, and serve as a short tutorial covering its features.

Our application

We are going to package a simple application, a Flask server with two endpoints: one returning a static string “Hello World” and another returning a resized image. This application was chosen because:

  1. It can fit into a single file for the purposes of this post.
  2. Image resizing using Pillow requires the use of native libraries, which is something of a strength of Nix.

The code for it is in the imgapp/__init__.py file:

from flask import send_file
from flask import Flask
from io import BytesIO
from PIL import Image
import requests

app = Flask(__name__)

# The original source image URL was lost in formatting; any JPEG URL works.
IMAGE_URL = "https://example.com/image.jpg"
IMAGE_SIZE = (300, 300)


@app.route('/')
def hello():
    return "Hello World!"


@app.route('/image')
def image():
    r = requests.get(IMAGE_URL)
    if not r.status_code == 200:
        raise ValueError(f"Response code was '{r.status_code}'")

    img_io = BytesIO()

    img = Image.open(BytesIO(r.content))
    img.thumbnail(IMAGE_SIZE)
    img.save(img_io, 'JPEG', quality=70)
    img_io.seek(0)

    return send_file(img_io, mimetype='image/jpeg')


def main():
    # Run the Flask development server.
    app.run()


if __name__ == '__main__':
    main()
The status quo for packaging Python with Nix

There are two standard techniques for integrating Python projects with Nix.

Nix only

The first technique uses only Nix for package management, and is described in the Python section of the Nix manual. While it works and may look very appealing on the surface, it uses Nix for all package management needs, which comes with some drawbacks:

  1. We are essentially tied to whatever package version Nixpkgs provides for any given dependency. This can be worked around with overrides, but those can cause version incompatibilities. This happens often in complex Python projects, such as data science ones, which tend to be very sensitive to version changes.
  2. We are tied to using packages already in Nixpkgs. While Nixpkgs has many Python packages already packaged up (around 3000 right now) there are many packages missing — PyPi, the Python Package Index has more than 200000 packages. This can of course be worked around with overlays and manual packaging, but this quickly becomes a daunting task.
  3. In a team setting, every team member wanting to add packages needs to buy in to Nix and at least have some experience using and understanding Nix.

All these factors lead us to a conclusion: we need to embrace Python tooling so we can efficiently work with the entire Python ecosystem.

Pip and Pypi2Nix

The second standard method tries to overcome the faults above by using a hybrid approach of Python tooling together with Nix code generation. Instead of writing dependencies manually in Nix, they are extracted from the requirements.txt file that users of Pip and Virtualenv are very used to. That is, from a requirements.txt file containing the necessary dependencies:

requests
pillow
flask


we can use pypi2nix to package our application in a more automatic fashion than before:

nix-shell -p pypi2nix --run "pypi2nix -r requirements.txt"

However, Pip is not a dependency manager and therefore the requirements.txt file is not explicit enough — it lacks both exact versions for libraries, and system dependencies. Therefore, the command above will not produce a working Nix expression. In order to make pypi2nix work correctly, one has to manually find all dependencies incurred by the use of Pillow:

nix-shell -p pypi2nix --run "pypi2nix -V 3.8 -E pkgconfig -E freetype -E libjpeg -E openjpeg -E zlib -E libtiff -E libwebp -E tcl -E lcms2 -E xorg.libxcb -r requirements.txt"

This will generate a large Nix expression, that will indeed work as expected. Further use of Pypi2nix is left to the reader, but we can already draw some conclusions about this approach:

  1. Code generation results in huge Nix expressions that can be hard to debug and understand. These expressions will typically be checked into a project repository, and can get out of sync with actual dependencies.
  2. It’s very high friction, especially around native dependencies.

Having many large Python projects, I wasn’t satisfied with the status quo around Python package management. So I looked into what could be done to make the situation better, and which tools could be more appropriate for our use-case. A potential candidate was Pipenv, however its dependency solver and lock file format were difficult to work with. In particular, Pipenv’s detection of “local” vs “non-local” dependencies did not work properly inside the Nix shell and gave us the wrong dependency graph. Eventually, I found Poetry and it looked very promising.

Poetry and Poetry2nix

The Poetry package manager is a relatively recent addition to the Python ecosystem but it is gaining popularity very quickly. Poetry features a nice CLI with good UX and deterministic builds through lock files.

Poetry uses pip under the hood and, for this reason, inherited some of its shortcomings and lock file design. I managed to land a few patches in Poetry before the 1.0 release to improve the lock file format, and now it is fit for use in Nix builds. The result was Poetry2nix, whose key design goals were:

  1. Dead simple API.
  2. Work with the entire Python ecosystem using regular Python tooling.
  3. Python developers should not have to be Nix experts, and vice versa.
  4. Being an expert should allow you to “drop down” into the lower levels of the build and customise it.

Poetry2nix is not a code generation tool — it is implemented in pure Nix. This fixes many of the problems outlined in the previous paragraphs, since there is a single point of truth for dependencies and their versions.

But what about our native dependencies from before? How does Poetry2nix know about those? Indeed, Poetry2nix comes with an extensive set of overrides built-in for a lot of common packages, including Pillow. Users are encouraged to contribute overrides upstream for popular packages, so everyone can have a better user experience.

Now, let’s see how Poetry2nix works in practice.

Developing with Poetry

Let’s start with only our application file above (imgapp/__init__.py) and a shell.nix:

{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.poetry
  ];
}
Poetry comes with some nice helpers to create a project, so we run:

$ poetry init

And then we’ll add our dependencies:

$ poetry add requests pillow flask

We now have two files in the folder:

  • The first one is pyproject.toml, which not only specifies our dependencies but also replaces setup.py.
  • The second is poetry.lock which contains our entire pinned Python dependency graph.
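For a sense of why poetry.lock gives Nix enough determinism to work with, here is the rough shape of a lock entry (a sketch; the version-constraint string and hash values are abridged placeholders):

```toml
[[package]]
name = "flask"
version = "1.1.2"
description = "A simple framework for building complex web applications."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*"

[metadata.files]
flask = [
    {file = "Flask-1.1.2-py2.py3-none-any.whl", hash = "sha256:<...>"},
]
```

Every dependency is pinned to an exact version with content hashes for its artifacts — exactly the static information Nix needs.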

For Nix to know which scripts to install in the bin/ output directory, we also need to add a scripts section to pyproject.toml:

[tool.poetry]
name = "imgapp"
version = "0.1.0"
description = ""
authors = ["adisbladis <>"]

[tool.poetry.dependencies]
python = "^3.7"
requests = "^2.23.0"
pillow = "^7.1.2"
flask = "^1.1.2"

[tool.poetry.scripts]
imgapp = 'imgapp:main'

[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"

Packaging with Poetry2nix

Since Poetry2nix is not a code generation tool but implemented entirely in Nix, this step is trivial. Create a default.nix containing:

{ pkgs ? import <nixpkgs> {} }:
pkgs.poetry2nix.mkPoetryApplication {
  projectDir = ./.;
}
We can now invoke nix-build to build our package defined in default.nix. Poetry2nix will automatically infer package names, dependencies, meta attributes and more from the Poetry metadata.

Manipulating overrides

Many overrides for system dependencies are already upstream, but what if some are lacking? These overrides can be manipulated and extended manually:

poetry2nix.mkPoetryApplication {
    projectDir = ./.;
    overrides = poetry2nix.overrides.withDefaults (self: super: {
      foo = super.foo.overridePythonAttrs (oldAttrs: {
        # e.g. add missing native buildInputs here
      });
    });
}

By embracing both modern Python package management tooling and the Nix language, we can achieve best-in-class user experience for Python developers and Nix developers alike.

There are ongoing efforts to make Poetry2nix and other Nix Python tooling work better with data science packages like numpy and scipy. I believe that Nix may soon rival Conda on Linux and MacOS for data science.

Python + Nix has a bright future ahead of it!

August 12, 2020 12:00 AM

August 10, 2020

Michael Snoyman

Book review: Loserthink

I don’t typically write book reviews. But I felt like doing one in this case, since the topic hit close to home for me (more on that soon). I’ll start off with: I really enjoyed this book, it helped me understand some things I didn’t before, it was a humorous read, and I think it will have an ongoing impact for me. Highly recommended.

Loserthink by Scott Adams

Loserthink is a book by Scott Adams, with the interesting subtitle “How untrained brains are ruining America.” The book definitely references many American concepts, and refers to US politics quite a bit (I’ll get to that below). For me, those are useful illustrations, but not at all the core of the book.

The first half of the book covers a number of different disciplines in the world, such as engineering, psychology, and economics. This first part is what “hit close to home” for me, and encouraged me to write this book review. In these chapters, Adams walks through how the specialized thought processes in these disciplines encourage helpful ways of thinking about problems.

This topic has been on my mind a lot for the past six months or so. Earlier this year, I gave a talk for LambdaConf titled Economic Argument for Functional Programming. In that talk, I established my credentials (former actuary, now programmer, lots of experience with economics and statistics courses), and then proceeded to explain some economics concepts for making engineering and business decisions. I wasn’t certain how the talk would go over, given that I’ve never known normal people to sign up for an hour long stuffy economics lecture. But people seem to have liked the exposure to a different way of thinking. (Or at least people are really polite to me, or I’m experiencing strong selection bias of only those who like the talk reach out to me.)

I realized quickly when reading Loserthink that the first number of chapters all follow a similar kind of pattern as my talk: present a discipline’s way of thinking, and show how it would approach real-world problems in a better way than “Loserthink” would. For each of these, I had one of three different kinds of responses:

  • Wow, I never thought of it that way before, I’m glad I learned something new
  • I’ve intuitively been thinking about it this way, but now I have a clear method to follow, and to use for helping others
  • I know this solidly, and already use these techniques regularly, and I’m surprised to see how big the gap is from the way most people see things

This last one was strongest in the engineering and economics chapters. I anticipate readers of this blog would be somewhat horrified to see how non-engineers think through problems. I also anticipate many readers here to be shocked at their own “mental prisons” (as Adams puts it) for topics like psychology.

I’ve read a number of Adams’s previous works, including How to fail at almost everything and still win big and Win Bigly. In fact, some of my comments on persuasion in the aforementioned LambdaConf talk were inspired by Adams’s “persuasion filter” concept and Robert Cialdini’s books Influence and Pre-suasion (a book title many people think I’m regularly misspelling and mispronouncing).

Before reading these books, I probably had a typical engineer’s world view on things: I’m a smart, rational person, well grounded in facts and logic, and I follow a straightline path from data to decisions. I’m now firmly a believer that I am nothing of the sort, and this world view shift is deeply pervasive. If people express interest, I may write up some follow up reviews on those other books.

Continuing with Loserthink, the book moves on to point out some real-world examples. As I mentioned above, Adams does have a heavy focus on US politics. This is a double-edged sword, and potentially my only critique of the book. While highly illustrative to those both familiar with US politics and open minded enough to consider both sides of an issue, it may be offputting to others. Adams points out “Loserthink” fairly evenly on both sides of the aisle, and in my opinion all accurately. Without getting into details, I can say that there were arguments he made that made me realize shortcomings in some of my own political views.

Finally, Adams provides guidance on how to escape mental prisons. In many ways, this is a summary of the rest of the book. Which is good, because even given the relative brevity of the book (just a few hundred pages), it packs a punch.

Conclusion: If you’re interested in expanding how you think things through, check out Loserthink.

August 10, 2020 04:30 AM