Planet Haskell

October 21, 2021

Gabriel Gonzalez

Co-Applicative programming style


This post showcases an upcoming addition to the contravariant package that permits programming in a “co-Applicative” (Divisible) style that greatly resembles Applicative style.

This post assumes that you are already familiar with programming in an Applicative style; if you are not, I recommend first reading an introduction to Applicative programming.

Example

The easiest way to motivate this is through a concrete example:

{-# LANGUAGE NamedFieldPuns #-}

import Data.Functor.Contravariant (Predicate(..), (>$<))
import Data.Functor.Contravariant.Divisible (Divisible, divided)

nonNegative :: Predicate Double
nonNegative = Predicate (0 <=)

data Point = Point { x :: Double, y :: Double, z :: Double }

nonNegativeOctant :: Predicate Point
nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative
  where
    adapt Point{ x, y, z } = (x, (y, z))

-- | This operator will be available in the next `contravariant` release
(>*<) :: Divisible f => f a -> f b -> f (a, b)
(>*<) = divided

infixr 5 >*<

This code takes a nonNegative Predicate on Doubles, which returns True if its input is non-negative, and then uses co-Applicative (Divisible) style to build a nonNegativeOctant Predicate on Points, which returns True if all three coordinates of a Point are non-negative.
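
We can check the combined predicate in GHCi (a usage sketch, assuming the definitions above are in scope):

>>> getPredicate nonNegativeOctant (Point 1 2 3)
True
>>> getPredicate nonNegativeOctant (Point 1 (-2) 3)
False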

The key part to zoom in on is the nonNegativeOctant Predicate, whose implementation superficially resembles the Applicative style that we know and love:

nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative

The difference is that instead of the <$> and <*> operators we use >$< and >*<, which are their dual operators1. For example, you can probably see the resemblance to the following code that uses Applicative style:

readDouble :: IO Double
readDouble = readLn

readPoint :: IO Point
readPoint = Point <$> readDouble <*> readDouble <*> readDouble

Types

I’ll walk through the types involved to help explain how this style works.

First, we will take this expression:

nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative

… and explicitly parenthesize the expression instead of relying on operator precedence and associativity:

nonNegativeOctant = adapt >$< (nonNegative >*< (nonNegative >*< nonNegative))

So the smallest sub-expression is this one:

nonNegative >*< nonNegative

… and given that the type of nonNegative is:

nonNegative :: Predicate Double

… and the type of the (>*<) operator is:

(>*<) :: Divisible f => f a -> f b -> f (a, b)

… then we can specialize the f in that type to Predicate (since Predicate implements the Divisible class):

(>*<) :: Predicate a -> Predicate b -> Predicate (a, b)

… and further specialize a and b to Double:

(>*<) :: Predicate Double -> Predicate Double -> Predicate (Double, Double)

… and from that we can conclude that the type of our subexpression is:

nonNegative >*< nonNegative
    :: Predicate (Double, Double)

In other words, nonNegative >*< nonNegative is a Predicate whose input is a pair of Doubles.
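
Operationally, such a paired Predicate tests both components and combines the results with (&&). Roughly, the Divisible instance for Predicate in the contravariant package reads:

instance Divisible Predicate where
  divide f (Predicate g) (Predicate h) =
    Predicate $ \a -> case f a of (b, c) -> g b && h c

  conquer = Predicate (const True)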

We can then repeat the process to infer the type of this larger subexpression:

nonNegative >*< (nonNegative >*< nonNegative)
    :: Predicate (Double, (Double, Double))

In other words, now the input is a nested tuple of three Doubles.

However, we want to work with Points rather than nested tuples, so we pre-process the input using >$<:

adapt >$< (nonNegative >*< (nonNegative >*< nonNegative))
  where
    adapt :: Point -> (Double, (Double, Double))
    adapt Point{ x, y, z } = (x, (y, z))

… and this works because the type of >$< is:

(>$<) :: Contravariant f => (a -> b) -> f b -> f a

… and if we specialize f to Predicate, we get:

(>$<) :: (a -> b) -> Predicate b -> Predicate a

… and we can further specialize a and b to:

(>$<)
    :: (Point -> (Double, (Double, Double)))
    -> Predicate (Double, (Double, Double))
    -> Predicate Point

… which implies that our final type is:

nonNegativeOctant :: Predicate Point
nonNegativeOctant = adapt >$< (nonNegative >*< (nonNegative >*< nonNegative))
  where
    adapt Point{ x, y, z } = (x, (y, z))

Duals

We can better understand the relationship between the two sets of operators by studying their types:

-- | These two operators are dual to one another:
(<$>) :: Functor f => (a -> b) -> f a -> f b
(>$<) :: Contravariant f => (a -> b) -> f b -> f a

-- | These two operators are similar in spirit, but they are not really dual:
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(>*<) :: Divisible f => f a -> f b -> f (a, b)

Okay, so (>*<) is not exactly the dual operator of (<*>). (>*<) is actually dual to liftA2 (,)2:

(>*<)      :: Divisible   f => f a -> f b -> f (a, b)
liftA2 (,) :: Applicative f => f a -> f b -> f (a, b)
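
To make the analogy concrete, liftA2 (,) pairs up the results of two Applicative actions; for example, reusing readDouble from above (a usage sketch):

readTwoDoubles :: IO (Double, Double)
readTwoDoubles = liftA2 (,) readDouble readDouble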

In fact, if we were to hypothetically redefine (<*>) to be liftA2 (,) then we could write Applicative code that is even more symmetric to the Divisible code (albeit less ergonomic):

import Control.Applicative (liftA2)
import Prelude hiding ((<*>))

(<*>) = liftA2 (,)

infixr 5 <*>

readDouble :: IO Double
readDouble = readLn

readPoint :: IO Point
readPoint = adapt <$> readDouble <*> readDouble <*> readDouble
  where
    adapt (x, (y, z)) = Point{ x, y, z }

-- Compare to:
nonNegativeOctant :: Predicate Point
nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative
  where
    adapt Point{ x, y, z } = (x, (y, z))

It would be nice if we could create a (>*<) operator that was dual to the real (<*>) operator, but I could not figure out a good way to do this.

If you didn’t follow all of that, the main thing you should take away from this going into the next section is:

  • the Contravariant class is the dual of the Functor class
  • the Divisible class is the dual of the Applicative class

Syntactic sugar

GHC supports the ApplicativeDo extension, which lets you use do notation as syntactic sugar for Applicative operators. For example, we could have written our readPoint function like this:

{-# LANGUAGE ApplicativeDo #-}

readPoint :: IO Point
readPoint = do
    x <- readDouble
    y <- readDouble
    z <- readDouble
    return Point{ x, y, z }

… which behaves in the exact same way. Actually, we didn’t even need the ApplicativeDo extension because IO has a Monad instance and anything that has a Monad instance supports do notation without any extensions.

However, the ApplicativeDo language extension does change how the do notation is desugared. Without the extension the above readPoint function would desugar to:

readPoint =
    readDouble >>= \x ->
    readDouble >>= \y ->
    readDouble >>= \z ->
    return Point{ x, y, z }

… but with the ApplicativeDo extension the function instead desugars to only use Applicative operations instead of Monad operations:

-- I don't know the exact desugaring logic, but I imagine it's similar to this:
readPoint = adapt <$> readDouble <*> readDouble <*> readDouble
  where
    adapt x y z = Point{ x, y, z }

So could there be such a thing as “DivisibleDo” which would introduce syntactic sugar for Divisible operations?

I think there could be such an extension, and there are several ways you could design the user experience.

One approach would be to permit code like this:

{-# LANGUAGE DivisibleFrom #-}

nonNegativeOctant :: Predicate Point
nonNegativeOctant =
    from Point{ x, y, z }
        x -> nonNegative
        y -> nonNegative
        z -> nonNegative

… which would desugar to the original code that we wrote:

nonNegativeOctant = adapt >$< nonNegative >*< nonNegative >*< nonNegative
  where
    adapt Point{ x, y, z } = (x, (y, z))

Another approach could be to make the syntax look exactly like do notation, except that information flows in reverse:

{-# LANGUAGE DivisibleDo #-}

nonNegativeOctant :: Predicate Point
nonNegativeOctant = do
    x <- nonNegative
    y <- nonNegative
    z <- nonNegative
    return Point{ x, y, z } -- `return` here would actually be a special keyword

I assume that most people will prefer the from notation, so I’ll stick to that for now.

If we were to implement the former DivisibleFrom notation then the Divisible laws stated using from notation would become:

-- Left identity
from x
    x -> m
    x -> conquer

= m

-- Right identity
from x
    x -> conquer
    x -> m

= m

-- Associativity
from (x, y, z)
    (x, y) -> from (x, y)
        x -> m
        y -> n
    z -> o

= from (x, y, z)
    x -> m
    (y, z) -> from (y, z)
        y -> n
        z -> o

= from (x, y, z)
    x -> m
    y -> n
    z -> o

This explanation of how DivisibleFrom would work is really hand-wavy, but if people were genuinely interested in such a language feature I might take a stab at making the semantics of DivisibleFrom sufficiently precise.

History

The original motivation for the (>*<) operator and Divisible style was to support compositional RecordEncoders for the dhall package.

Dhall’s Haskell API defines a RecordEncoder type which specifies how to convert a Haskell record to a Dhall syntax tree, and we wanted to be able to use the Divisible operators to combine simpler RecordEncoders into larger RecordEncoders, like this:

data Project = Project
{ name :: Text
, description :: Text
, stars :: Natural
}

injectProject :: Encoder Project
injectProject =
    recordEncoder
        ( adapt >$< encodeFieldWith "name" inject
                >*< encodeFieldWith "description" inject
                >*< encodeFieldWith "stars" inject
        )
  where
    adapt Project{..} = (name, (description, stars))

The above example illustrates how one can assemble three smaller RecordEncoders (each of the encodeFieldWith functions) into a RecordEncoder for the Project record by using the Divisible operators.

If we had a DivisibleFrom notation, then we could have instead written:

injectProject =
    recordEncoder from Project{..}
        name        -> encodeFieldWith "name" inject
        description -> encodeFieldWith "description" inject
        stars       -> encodeFieldWith "stars" inject

If you’d like to view the original discussion that led to this idea you can check out the original pull request.

Conclusion

I upstreamed this (>*<) operator into the contravariant package, which means that you’ll be able to use the trick outlined in this post after the next contravariant release.

Until then, you can define your own (>*<) operator inline within your own project, which is what dhall did while waiting for the operator to be upstreamed.


  1. Alright, they’re not categorically dual in a rigorous sense, but I couldn’t come up with a better term to describe their relationship to the original operators.↩︎

  2. I feel like liftA2 (,) should have already been added to Control.Applicative by now since I believe it’s a pretty fundamental operation from a theoretical standpoint.↩︎

by Gabriella Gonzalez (noreply@blogger.com) at October 21, 2021 03:26 PM

Sandy Maguire

Proving Commutativity of Polysemy Interpreters

To conclude this series of posts on polysemy-check, today we’re going to talk about how to ensure your effects are sane. That is, we want to prove that correct interpreters compose into correct programs. If you’ve followed along with the series, you won’t be surprised to note that polysemy-check can test this right out of the box.

But first, what does it mean to talk about the correctness of composed interpreters? This idea comes from Yang and Wu’s Reasoning about effect interaction by fusion. The idea is that for a given program, changing the order of two subsequent actions from different effects should not change the program. Too abstract? Well, suppose I have two effects:

foo :: Member Foo r => Sem r ()
bar :: Member Bar r => Sem r ()

Then, the composition of interpreters for Foo and Bar is correct if and only if1 the following two programs are equivalent:

forall m1 m2.
  m1 >> foo >> bar >> m2
=
  m1 >> bar >> foo >> m2

That is, since foo and bar are actions from different effects, they should have no influence on one another. This sounds like an obvious property; effects correspond to individual units of functionality, and so they should be completely independent of one another. At least — that’s how we humans think about things. Nothing actually forces this to be the case, and extremely hard-to-find bugs will occur if this property doesn’t hold, because it breaks a mental abstraction barrier.

It’s hard to come up with good examples of this property being broken in the wild, so instead we can simulate it with a different broken abstraction. Let’s imagine we’re porting a legacy codebase to polysemy, and the old code hauled around a giant stateful god object:

data TheWorld = TheWorld
  { counter :: Int
  , lots    :: Int
  , more'   :: Bool
  , stuff   :: [String]
  }

To quickly get everything ported, we replaced the original StateT TheWorld IO application monad with a Member (State TheWorld) r constraint. But we know better than to do that for the long haul, and instead are starting to carve out effects. We introduce Counter:

data Counter m a where
  Increment :: Counter m ()
  GetCount :: Counter m Int

makeSem ''Counter
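
Here makeSem generates the usual smart constructors for the effect, roughly:

increment :: Member Counter r => Sem r ()
getCount  :: Member Counter r => Sem r Int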

We then give it an interpretation into our god object:

runCounterBuggy
    :: Member (State TheWorld) r
    => Sem (Counter ': r) a
    -> Sem r a
runCounterBuggy = interpret $ \case
  Increment ->
    modify $ \world -> world
                         { counter = counter world + 1
                         }
  GetCount ->
    gets counter

On its own, this interpretation is fine. The problem occurs when we use runCounterBuggy to handle Counter effects that coexist in application code that uses the State TheWorld effect. Indeed, polysemy-check tells us what goes wrong:

quickCheck $
  prepropCommutative @'[State TheWorld] @'[Counter] $
    pure . runState defaultTheWorld . runCounterBuggy

we see:

Failed.

Effects are not commutative!

k1  = Get
e1 = Put (TheWorld 0 0 False [])
e2 = Increment
k2  = Pure ()

(k1 >> e1 >> e2 >> k2) /= (k1 >> e2 >> e1 >> k2)
(TheWorld 1 0 False [],()) /= (TheWorld 0 0 False [],())

Of course, these effects are not commutative under the given interpreter, because changing State TheWorld will overwrite the Counter state! That’s not to say that this sequence of actions actually exists anywhere in your codebase, but it’s a trap waiting to happen. Better to take defensive action and make sure nobody can ever even accidentally trip this bug!

The bug is fixed by using a different data store for Counter than TheWorld. Maybe like this:

runCounter
    :: Sem (Counter ': r) a
    -> Sem r a
runCounter = (evalState 0) . reinterpret @_ @(State Int) $ \case
  Increment -> modify (+ 1)
  GetCount -> get

Contrary to the old handler, runCounter now introduces its own anonymous State Int effect (via reinterpret), and then immediately eliminates it. This ensures the state is invisible to all other effects, with absolutely no opportunity to modify it. In general, this evalState . reinterpret pattern is a very good one for implementing pure effects.
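
For example, a small pure program interpreted with this handler behaves as expected (a usage sketch):

countTwice :: Member Counter r => Sem r Int
countTwice = do
  increment
  increment
  getCount

-- >>> run (runCounter countTwice)
-- 2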

Of course, a really complete solution here would also remove the counter field from TheWorld.

Behind the scenes, prepropCommutative is doing exactly what you’d expect — synthesizing monadic preludes and postludes, and then randomly pulling effects from each set of rows and ensuring everything commutes.

At first blush, using prepropCommutative to test all of your effects feels like an \(O(n^2)\) sort of deal. But take heart, it really isn’t! Let’s say our application code requires Members (e1 : e2 : e3 : es) r, and our eventual composed interpreter is runEverything :: Sem ([e] ++ es ++ [e3, e2, e1] ++ impl) a -> IO (f a). Here, we only need \(O(es)\) calls to prepropCommutative:

  • prepropCommutative @'[e2] @'[e1] runEverything
  • prepropCommutative @'[e3] @'[e2, e1] runEverything
  • prepropCommutative @'[e] @'(es ++ [e2, e1]) runEverything

The trick here is that we can think of the composition of interpreters as an interpreter of composed effects. Once you’ve proven an effect commutes with a particular row, you can then add that effect into the row and prove a different effect commutes with the whole thing. Induction is pretty cool!

As of today there is no machinery in polysemy-check to automatically generate this linear number of checks, but it seems like a good thing to include in the library, and you can expect it in the next release.

To sum up these last few posts, polysemy-check is an extremely useful and versatile tool for proving correctness about your polysemy programs. It can be used to state the semantics of your effects (and check that their interpreters adhere to them). It can show the equivalence of interpreters — such as the ones you use for testing, and those you use in production. And now we’ve seen how to use it to ensure that the composition of our interpreters maintains its correctness.

Happy testing!


  1. Well, there is a second condition regarding distributivity that is required for correctness. The paper goes into it, but polysemy-check doesn’t yet implement it.↩︎

October 21, 2021 12:53 AM

Tweag I/O

Type-checking plugins, Part I: Why write a type-checking plugin?

Type-checking plugins for GHC are a powerful tool which allows users to inject domain-specific knowledge into GHC’s type-checker. In this series of posts, we will explore why you might want to write your own plugin, and how to do so.

  • I: Why write a type-checking plugin?
  • II: GHC’s constraint solver
  • III: Writing a type-checking plugin

In this first blog post of the series, I’ll be outlining a few examples that showcase some limitations in GHC’s instance resolution and type family reduction mechanisms. With a type-checker plugin, we are no longer restricted by these limitations, and can decide ourselves how to solve constraints and reduce type families.

Instance resolution

Recall how GHC goes about instance resolution, as per the user’s guide. A typeclass instance has two components:

instance ctxt => Cls arg_1 ... arg_n

To the left of => is the instance context, and to the right the instance head. To solve a constraint like Cls x_1 ... x_n, GHC goes through all the class instance declarations for Cls, trying to match the arguments x_1, ..., x_n against the arguments arg_1, ..., arg_n appearing in the instance head. Once it finds such an instance (which should be unique, unless one is using overlapping instances), GHC commits to it, picking up the context as a Wanted constraint (we will cover Wanted and Given constraints in depth in Part II: § Constraint solving).

However, one might be interested in using a different method to resolve instances. Let’s look at two simple examples.

State machines

Suppose we want to implement a state machine: each state corresponds to a type, and we can transition values between types according to certain rules. This can be implemented as a typeclass: arrows in the state diagram are typeclass instances.

type Transition :: Type -> Type -> Constraint
class Transition a b where { transition :: a -> b }

We could start by defining some basic instances:

instance Transition A B where {..}
instance Transition B C where {..}
instance Transition C D where {..}
instance Transition A E where {..}
instance Transition B F where {..}
instance Transition E F where {..}

Instance graph for Transition typeclass.

Then, we might want the compiler to use composition to solve other instances. For example, when trying to solve Transition A D, we notice that there is a unique path

A -> B -> C -> D

which would allow us to compose the transition functions to obtain a transition function of type A -> D. On the other hand, if we want a transition from A to F, we notice that there are two different paths, namely A -> B -> F, and A -> E -> F. We could either reject this, or perhaps choose one of the two paths arbitrarily.

This goes beyond GHC’s instance resolution mechanisms, but we could implement a graph reachability test in a type-checking plugin to solve general Transition X Y instances.
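
With such a plugin in scope, a use site could then look as follows (a sketch; the constraint is assumed to be discharged by the plugin rather than by an explicit instance):

-- Solved via the unique path A -> B -> C -> D:
transitionAD :: A -> D
transitionAD = transition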

Constraint disjunction

The central feature of typeclasses is that they are open: one can define a typeclass and keep adding instances to it. This is in contrast to something like a datatype

data ABC
  = A
  | B Int
  | C Float Bool

which is closed: users can’t add new constructors to ABC in their own modules.

This property of typeclasses comes with a fundamental limitation: we can’t know in advance whether a typeclass constraint is satisfied. A typeclass constraint that was insoluble at some point might become solvable in another context (e.g. a user could define a Show (Bool -> Int) instance).

As a result, GHC does not offer any mechanism for determining whether a constraint is satisfied, as this could result in incoherent behaviour.1 This is unfortunate, as this can be quite useful: for example, one might want to implement an arithmetic computation differently for integral vs floating-point numbers, to ensure numerical stability:

stableAlgorithm = select @(Floating a) @(Num a) fp_algo int_algo
  where
    fp_algo  :: Floating a => [a] -> a -- floating-point algorithm
    int_algo ::      Num a => [a] -> a -- integral algorithm

Here, the select function dispatches on whether the first constraint (in this case Floating a) is satisfied; when it is, it uses the first (visible) argument; otherwise, the second. This behaviour can be implemented in a type-checker plugin (see for instance my if-instance plugin): when attempting to solve a constraint disjunction ct1 || ct2, we can simply look up whether ct1 is currently satisfiable, disregarding the fact that new instances might be defined later (which can lead to incoherence as mentioned above). The satisfiability of ct1 at the point of solving the disjunction ct1 || ct2 will then determine which implementation is selected.
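
For reference, a signature for select that is consistent with this usage might look as follows (hypothetical; the actual API exposed by if-instance may differ):

select
  :: forall c d r. (c || d)  -- constraint disjunction provided by the plugin
  => (c => r)                -- used when c is satisfiable at the use site
  -> (d => r)                -- used otherwise
  -> r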

Stuck type families

Consider the type family

type (+) :: Nat -> Nat -> Nat
type family a + b where
  Zero   + b = b
  Succ a + b = Succ (a + b)
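
For concreteness, the definitions here assume a unary, promoted Nat datatype (an assumption of this presentation, not GHC's built-in type-level naturals):

data Nat = Zero | Succ Nat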

GHC can only reduce a + b when it knows what a is: is it Zero or is it Succ i for some i? This causes a problem when we don’t have concrete natural numbers, but still want to reason about (+):

infixr 5 :<
type Vec :: Nat -> Type -> Type
data Vec n a where
  Nil  :: Vec Zero a
  (:<) :: a -> Vec n a -> Vec (Succ n) a

-- | Interweave the elements of two vectors.
weave :: Vec n a -> Vec n a -> Vec (n + n) a
weave Nil       Nil       = Nil
weave (a :< as) (b :< bs) = a :< b :< weave as bs

To typecheck weave, we need to know that

Succ n + Succ n ~ Succ (Succ (n + n))

Using the second type family equation on the LHS, this reduces to:

Succ (n + Succ n) ~ Succ (Succ (n + n))

Peeling off Succ:

n + Succ n ~ Succ (n + n)

Now we can’t make any more progress: we don’t know which equation of (+) to use, as the first argument is a bare type variable n. We say the type family application is stuck; it doesn’t reduce.

By contrast, in a type-checking plugin, we can rewrite type-family applications involving variables, and thus implement a solver for natural number arithmetic.

Type family arguments

Suppose we are interested in solving a theory of row types, e.g. to implement a framework of extensible records.

A row is an unordered association map, field name ⇝ type, e.g.

myRow = ( "intField" :: Int, "boolField" :: Bool, "anotherInt" :: Int )

Crucially, order doesn’t matter in a row. To communicate this fact to the type-checker, we would want to be able to prove a fact such as:

Insert k v (Insert l w r) ~ Insert l w (Insert k v r)

when k and l are distinct field names that don’t appear in the row r. However, we can’t write a type family equation of the sort:

type Insert :: Symbol -> Type -> Row -> Row
type family Insert k v r where
  Insert k v (Insert l w r) = ...

GHC rejects this with:

• Illegal type synonym family application ‘Insert l w r’ in instance:
    Insert k v (Insert l w r)

GHC doesn’t allow type families to appear inside the LHS of type family equations. Doing so would risk non-confluence: the result of type-family reduction might depend on the order in which we rewrite arguments. For example:

type F :: Type -> Type
type family F a where
  F (F a) = a       -- F[0]
  F Bool  = Float   -- F[1]
  F a     = Maybe a -- F[2]

Given a type such as F (F Bool), we can proceed in two ways:

  1. Reduce the outer type family application first, using the first equation (written F[0]). This yields the reduction F (F Bool) ~~> Bool.
  2. Reduce the argument first, using the second equation, F[1], and following up with F[2]: F (F Bool) ~~> F Float ~~> Maybe Float.

We obtained different results depending on the order in which reductions were performed. To avoid this problem, GHC simply disallows type family applications from appearing on the LHS of type family equations.

In a type-checking plugin, we can inspect the arguments of a type family, and use that information in deciding how to reduce the type family application.

Performance of type-family reduction

The current implementation of type families in GHC suffers from one significant problem: they can be painfully slow.

This is because, when GHC reduces a type family application, it also creates a proof that keeps track of which type family equations were used. Such proofs can be large, in particular when using recursive type families. Returning to the example of natural number addition:

type (+) :: Nat -> Nat -> Nat
type family a + b where
  Zero   + b = b
  Succ a + b = Succ (a + b)

The proof that 5 + 0 reduces to 5, in the coercion language that will be explained in Part II: § Constraint solving, is as follows:

+[1] <Succ (Succ (Succ (Succ Zero)))> <Zero>
; (Succ (+[1] <Succ (Succ (Succ Zero))> <Zero>
        ; (Succ (+[1] <Succ (Succ Zero)> <Zero>
                ; (Succ (+[1] <Succ Zero> <Zero>
                        ; (Succ (+[1] <Zero> <Zero>
                                ; (Succ (+[0] <Zero>))))))))))

Here +[0] refers to the first type-family equation of +, and +[1] to the second. The difficulty is that, in this proof language, we store the types of the arguments. For example, in the first reduction step, in which we reduce Succ (Succ (Succ (Succ (Succ Zero)))) + Zero to Succ (Succ (Succ (Succ (Succ Zero))) + Zero), the proof records the two arguments to +, namely Succ (Succ (Succ (Succ Zero))) and Zero. As a result, the size of the proof that n + 0 reduces to n is quadratic in n.

This can be a problem, for example, in libraries that implement large sum or product types using type families and type-level lists (like many anonymous record libraries do), causing slow compile-times for even moderately-sized records.

In a type-checking plugin, we can instead perform type-family reduction in a single step, returning a single proof term which omits the intermediate steps. In this way, type-checking plugins allow us to sidestep many of the performance issues that surround type families.

Practical examples

  • Solving arithmetic expressions of natural numbers with Christiaan Baaij’s ghc-typelits-natnormalise,
  • units of measure and dimensional analysis with Adam Gundry’s uom-plugin,
  • regular expressions with Oleg Grenrus’s kleene-type,
  • row types with Divesh Otwani and Richard Eisenberg’s thoralf,
  • intrinsically typed System F with a solver for a lambda calculus of explicit substitutions, bundled with the ghc-tcplugin-api library.

Conclusion

We’ve seen that type-checking plugins can be useful in many different circumstances:

  • custom constraint-solving logic,
  • added flexibility for type-family reduction,
  • performance considerations.

The next question is then, hopefully: how does one actually write a type-checking plugin? Because type-checking plugins operate directly on constraints, it’s important to be somewhat familiar with GHC’s constraint solver, and how type-checking plugins interact with it.

This will be the topic of Part II of this series, before we dive into the practical aspects of actually writing and debugging a type-checking plugin in Part III.


  1. In this context, coherence is the property that the runtime behaviour of programs does not depend on the specific way in which they are typechecked.

October 21, 2021 12:00 AM

October 20, 2021

Well-Typed.Com

Induction without core-size blow-up, a.k.a. Large records: anonymous edition

An important factor affecting compilation speed and memory requirements is the size of the core code generated by ghc from Haskell source modules. Thus, if compilation time is an issue, this is something we should be conscious of and optimize for. In part 1 of this blog post we took an in-depth look at why certain Haskell constructs lead to quadratic blow-up in the generated ghc core code, and how the large-records library avoids these problems. Indeed, the large-records library provides support for records, including support for generic programming, with a guarantee that the generated core is never more than O(n) in the number of record fields.

The approach described there does, however, not directly apply to the case of anonymous records. This is the topic we will tackle in this part 2. Unfortunately, it seems that for anonymous records the best we can hope for is O(n log n), and even to achieve that we need to explore some dark corners of ghc. We have not attempted to write a new anonymous records library, but instead consider the problems in isolation; on the other hand, the solutions we propose should be applicable in other contexts as well. Apart from section Putting it all together, the rest of this blog post can be understood without having read part 1.

This work was done on behalf of Juspay; we will discuss the context in a bit more detail in the conclusions.

Recap: The problem of type arguments

Consider this definition of heterogeneous lists, which might perhaps form the (oversimplified) basis for an anonymous records library:

data HList xs where
  Nil  :: HList '[]
  (:*) :: x -> HList xs -> HList (x ': xs)

If we plot the size of the core code generated by ghc for a Haskell module containing only a single HList value

newtype T (i :: Nat) = MkT Word

type ExampleFields = '[T 00, T 01, .., T 99]

exampleValue :: HList ExampleFields
exampleValue = MkT 00 :* MkT 01 :* .. :* MkT 99 :* Nil

we get an unpleasant surprise:

The source of this quadratic behaviour is type annotations. The translation of exampleValue in ghc core is shown below (throughout this blog post I will show “pseudo-core”, using standard Haskell syntax):

exampleValue :: HList ExampleFields
exampleValue =
      (:*) @(T 00) @'[T 01, T 02, .., T 99] (MkT 00)
    $ (:*) @(T 01) @'[      T 02, .., T 99] (MkT 01)
      ..
    $ (:*) @(T 99) @'[                    ] (MkT 99)
    $ Nil

Every application of the (:*) constructor records the names and types of the remaining fields; clearly, this list is O(n) in the size of the record, and since there are also O(n) applications of (:*), this term is O(n²) in the number of elements in the HList.

We covered this in detail in part 1. In the remainder of this part 2 we will not be concerned with values of HList (exampleValue), but only with the type-level indices (ExampleFields).

Instance induction considered harmful

A perhaps surprising manifestation of the problem of quadratic type annotations arises for certain type class dictionaries. Consider this empty class, indexed by a type-level list, along with the customary two instances, for the empty list and an inductive instance for non-empty lists:

class EmptyClass (xs :: [Type])

instance EmptyClass '[]
instance EmptyClass xs => EmptyClass (x ': xs)

requireEmptyClass :: EmptyClass xs => Proxy xs -> ()
requireEmptyClass _ = ()

Let’s plot the size of a module containing only a single usage of this function:

requiresInstance :: ()
requiresInstance = requireEmptyClass (Proxy @ExampleFields)

Again, we plot the size of the module against the number of record fields (number of entries in ExampleFields):

The size of this module is practically identical to the size of the module containing exampleValue, because at the core level they actually look very similar. The translation of requiresInstance looks something like this:

d00  :: EmptyClass '[T 00, T 01, .., T 99]
d01  :: EmptyClass '[      T 01, .., T 99]
..
d99  :: EmptyClass '[                T 99]
dNil :: EmptyClass '[                    ]

d00  = fCons @'[T 01, T 02, .., T 99] @(T 00) d01
d01  = fCons @'[      T 02, .., T 99] @(T 01) d02
..
d99  = fCons @'[                    ] @(T 99) dNil
dNil = fNil

requiresInstance :: ()
requiresInstance = requireEmptyClass @ExampleFields d00 (Proxy @ExampleFields)

The two EmptyClass instances we defined above turn into two functions fCons and fNil, with types

fNil  :: EmptyClass '[]
fCons :: EmptyClass xs -> EmptyClass (x ': xs)

which are used to construct dictionaries. These look very similar to the Nil and (:*) constructor of HList, and indeed the problem is the same: we have O(n) calls to fCons, and each of those records all the remaining fields, which is itself O(n) in the number of record fields. Hence, we again have core that is O(n²).

Note that this is true even though the class is empty! It therefore really does not matter what we put inside the class: any induction of this shape will immediately result in quadratic blow-up. This was already implicit in part 1 of this blog post, but it wasn’t as explicit as it perhaps should have been—at least, I personally did not have it as clearly in focus as I do now.

It is true that if both the module defining the instances and the module using the instance are compiled with optimisation enabled, that might all be optimised away eventually, but it’s not that easy to know precisely when we can depend on this. Moreover, if our goal is to improve compilation time (and therefore the development cycle), we anyway typically do not compile with optimizations enabled.

Towards a solution

For a specific concrete record we can avoid the problem by simply not using induction. Indeed, if we add

instance {-# OVERLAPPING #-} EmptyClass ExampleFields

to our module, the size of the compiled module drops from 25,462 AST nodes to a mere 15 (that’s 15 full stop, not 15k!). The large-records library takes advantage of this: rather than using induction to define instances, it generates a single instance for each record type, which has constraints for every field in the record. The result is code that is O(n) in the size of the record.

The only reason large-records can do this, however, is that every record is explicitly declared. When we are dealing with anonymous records such an explicit declaration does not exist, and we have to use induction of some form. An obvious idea suggests itself: why don’t we try halving the list at every step, rather than reducing it by one, thereby reducing the code from O(n²) to O(n log n)?

Let’s try. To make the halving explicit1, we define a binary Tree type2, which we will use promoted to the type-level:

data Tree a =
    Zero
  | One a
  | Two a a
  | Branch a (Tree a) (Tree a)

EmptyClass is now defined over trees instead of lists:

class EmptyClass (xs :: Tree Type) where

instance EmptyClass 'Zero
instance EmptyClass ('One x)
instance EmptyClass ('Two x1 x2)
instance (EmptyClass l, EmptyClass r) => EmptyClass ('Branch x l r)

We could also change HList to be parameterized over a tree instead of a list of fields. Ideally this tree representation should be an internal implementation detail, however, and so we instead define a translation from lists to balanced trees:

-- Evens [0, 1, .. 9] == [0, 2, 4, 6, 8]
type family Evens (xs :: [Type]) :: [Type] where
  Evens '[]            = '[]
  Evens '[x]           = '[x]
  Evens (x ': _ ': xs) = x ': Evens xs

-- Odds [0, 1, .. 9] == [1, 3, 5, 7, 9]
type family Odds (xs :: [Type]) :: [Type] where
  Odds '[]       = '[]
  Odds (_ ': xs) = Evens xs

type family ToTree (xs :: [Type]) :: Tree Type where
  ToTree '[]       = 'Zero
  ToTree '[x]      = 'One x
  ToTree '[x1, x2] = 'Two x1 x2
  ToTree (x ': xs) = 'Branch x (ToTree (Evens xs)) (ToTree (Odds xs))
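
As a quick sanity check, working these equations through by hand for a seven-element list yields a balanced tree:

  ToTree '[T 0, T 1, T 2, T 3, T 4, T 5, T 6]
~ 'Branch (T 0)
    ('Branch (T 1) ('One (T 3)) ('One (T 5)))
    ('Branch (T 2) ('One (T 4)) ('One (T 6)))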

We then redefine requireEmptyClass to do the translation:

requireEmptyClass :: EmptyClass (ToTree xs) => Proxy xs -> ()
requireEmptyClass _ = ()

The use sites (requiresInstance) do not change at all. Let’s measure again:

Total failure. Not only did we not improve the situation, it got significantly worse.

What went wrong?

To understand, let’s look at the case for when we have 10 record fields. The generated code looks about as clean as one might hope for:

dEx :: EmptyClass (ToTree ExampleFields)
dEx = dTree `cast` <..>

dTree :: EmptyClass (
             'Branch
               (T 0)
               ('Branch (T 1) ('Two (T 3) (T 7)) ('Two (T 5) (T 9)))
               ('Branch (T 2) ('Two (T 4) (T 8)) ('One (T 6)))
           )
dTree =
    fBranch
      @('Branch (T 1) ('Two (T 3) (T 7)) ('Two (T 5) (T 9)))
      @('Branch (T 2) ('Two (T 4) (T 8)) ('One (T 6)))
      @(T 0)
      ( fBranch
           @('Two (T 3) (T 7))
           @('Two (T 5) (T 9))
           @(T 1)
           (fTwo @(T 3) @(T 7))
           (fTwo @(T 5) @(T 9))
      )
      ( -- .. right branch similar
      )

requiresInstance :: ()
requiresInstance = requireEmptyClass @ExampleFields dEx (Proxy @ExampleFields)

Each recursive call to the dictionary construction function fBranch is now indeed happening at half the elements, as intended.

The need for a cast

The problem is in the first line:

dEx = dTree `cast` <..>

Let’s first understand why we have a cast here at all:

  1. The type of the function that we are calling is

    requireEmptyClass :: forall xs. EmptyClass (ToTree xs) => Proxy xs -> ()

    We must pick a value for xs: we pick ExampleFields.

  2. In order to be able to call the function, we must produce evidence (that is, a dictionary) for EmptyClass (ToTree ExampleFields). We therefore have to evaluate ToTree ... to ('Branch ...); as we do that, we construct a proof π that ToTree ... is indeed equal to the result (Branch ...) that we computed.

  3. We now have evidence of EmptyClass ('Branch ...), but we need evidence of EmptyClass (ToTree ...). This is where the cast comes in: we can coerce one to the other given a proof that they are equal; since π proves that

    ToTree ... ~ 'Branch ...

    we can construct a proof that

    EmptyClass (ToTree ...) ~ EmptyClass ('Branch ...)

    by appealing to congruence (if x ~ y then T x ~ T y).

The part in angle brackets <..> that I elided above is precisely this proof.

Proofs

Let’s take a closer look at what that proof looks like. I’ll show a simplified form3, which I hope will be a bit easier to read and in which the problem is easier to spot:

  ToTree[3] (T 0) '[T 1, T 2, T 3, T 4, T 5, T 6, T 7, T 8, T 9]
; Branch (T 0)
    ( Evens[2] (T 1) (T 2) '[T 3, T 4, T 5, T 6, T 7, T 8, T 9]
    ; Evens[2] (T 3) (T 4) '[T 5, T 6, T 7, T 8, T 9]
    ; Evens[2] (T 5) (T 6) '[T 7, T 8, T 9]
    ; Evens[2] (T 7) (T 8) '[T 9]
    ; Evens[1] (T 9)
    ; ToTree[3] (T 1) '[T 3, T 5, T 7, T 9]
    ; Branch (T 1)
        ( Evens[2] (T 3) (T 5) '[T 7, T 9]
        ; Evens[2] (T 7) (T 9) '[]
        ; Evens[0]
        ; ToTree[2] (T 3) (T 7)
        )
        ( Odds[1] (T 3) '[T 5, T 7, T 9]
        ; Evens[2] (T 5) (T 7) '[T 9]
        ; Evens[1] (T 9)
        ; ToTree[2] (T 5) (T 9)
        )
    )
    ( Odds[1] (T 1) '[T 2, T 3, T 4, T 5, T 6, T 7, T 8, T 9]
    ; .. -- omitted (similar to the left branch)
    )

The “axioms” in this proof – ToTree[3], Evens[2], etc. – refer to specific cases of our type family definitions. Essentially the proof is giving us a detailed trace of the evaluation of the type families. For example, the proof starts with

ToTree[3] (T 0) '[T 1, T 2, T 3, T 4, T 5, T 6, T 7, T 8, T 9]

which refers to the fourth line of the ToTree definition

ToTree (x ': xs) = 'Branch x (ToTree (Evens xs)) (ToTree (Odds xs))

recording the fact that

  ToTree '[T 0, .., T 9]
~ 'Branch (T 0) (ToTree (Evens '[T 1, .., T 9])) (ToTree (Odds '[T 1, .., T 9]))

The proof then splits into two, giving a proof for the left subtree and a proof for the right subtree, and here’s where we see the start of the problem. The next fact that the proof establishes is that

Evens '[T 1, .., T 9] ~ '[T 1, T 3, T 5, T 7, T 9]

Unfortunately, the proof to establish this contains a line for every evaluation step:

  Evens[2] (T 1) (T 2) '[T 3, T 4, T 5, T 6, T 7, T 8, T 9]
; Evens[2] (T 3) (T 4) '[T 5, T 6, T 7, T 8, T 9]
; Evens[2] (T 5) (T 6) '[T 7, T 8, T 9]
; Evens[2] (T 7) (T 8) '[T 9]
; Evens[1] (T 9)

We have O(n) such steps, and each step itself has O(n) size since it records the remaining list. The coercion therefore has size O(n²) at every branch in the tree, leading to an overall coercion also4 of size O(n²).

Six or half a dozen

So far we defined

requireEmptyClass :: EmptyClass (ToTree xs) => Proxy xs -> ()
requireEmptyClass _ = ()

and are measuring the size of the core generated for a module containing

requiresInstance :: ()
requiresInstance = requireEmptyClass (Proxy @ExampleFields)

for some list ExampleFields. Suppose we do one tiny refactoring, and make the caller apply ToTree instead; that is, requireEmptyClass becomes

requireEmptyClass :: EmptyClass t => Proxy t -> ()
requireEmptyClass _ = ()

and we now call ToTree at the use site instead:

requiresInstance :: ()
requiresInstance = requireEmptyClass (Proxy @(ToTree ExampleFields))

Let’s measure again:

The quadratic blow-up disappeared! What happened? Just to add to the confusion, let’s consider one other small refactoring, leaving the definition of requireEmptyClass the same but changing the use site to

requiresInstance = requireEmptyClass @(ToTree ExampleFields) Proxy

Measuring one more time:

Back to blow-up! What’s going on?

Roles

We will see in the next section that the difference is due to roles. Before we look at the details, however, let’s first remind ourselves what roles are5. When we define

newtype Age = MkAge Int

then Age and Int are representationally equal but not nominally equal: if we have a function that wants an Int but we pass it an Age, the type checker will complain. Representational equality is mostly a run-time concern (when do two values have the same representation on the runtime heap?), whereas nominal equality is mostly a compile-time concern. Nominal equality implies representational equality, of course, but not vice versa.

Nominal equality is a very strict equality. In the absence of type families, types are only nominally equal to themselves; only type families introduce additional axioms. For example, if we define

type family F (a :: Type) :: Type where
  F Int = Bool
  F Age = Char

then F Int and Bool are considered to be nominally equal.

When we check whether two types T a and T b are nominally equal, we must simply check that a and b are nominally equal. To check whether T a and T b are representationally equal is, however, more subtle [Safe zero-cost coercions for Haskell, Fig 3]. There are three cases to consider, illustrated by the following examples:

data T1 x = T1
data T2 x = T2 x
data T3 x = T3 (F x)

Then

  • T1 a and T1 b are representationally equal no matter the values of a and b: we say that x has a phantom role.
  • T2 a and T2 b are representationally equal if a and b are: x has a representational role.
  • T3 a and T3 b are representationally equal if a and b are nominally equal: x has a nominal role (T3 Int and T3 Age are certainly not representationally equal, even though Int and Age are!).

This propagates up; for example, in

data ShowType a = ShowType {
      showType :: Proxy a -> String
    }

the role of a is phantom, because a has a phantom role in Proxy. The roles of type arguments are automatically inferred, and usually the only interaction programmers have with roles is through

coerce :: Coercible a b => a -> b

where Coercible is a thin layer around representational equality. Roles thus remain invisible most of the time—but not today.

Proxy versus type argument

Back to the problem at hand. To understand what is going on, we have to be very precise. The function we are calling is

requireEmptyClass :: EmptyClass xs => Proxy xs -> ()
requireEmptyClass _ = ()

The question we must carefully consider is: what are we instantiating xs to? In the version with the quadratic blowup, where the use-site is

requiresInstance = requireEmptyClass @(ToTree ExampleFields) Proxy

we are being very explicit about the choice of xs: we are instantiating it to ToTree ExampleFields. In fact, we are more explicit than we might realize: we are instantiating it to the unevaluated type ToTree ExampleFields, not to ('Branch ...). Same thing, you might object, and you’d be right—but also wrong. Since we instantiate xs to the unevaluated type but then build a dictionary for the evaluated type, we must cast that dictionary; this is precisely the picture we painted in section What went wrong? above.

The more surprising case then is when the use-site is

requiresInstance = requireEmptyClass (Proxy @(ToTree ExampleFields))

In this case, we’re leaving ghc to figure out what to instantiate xs to. It will therefore instantiate it with some fresh variable α. When it then discovers that we also have a Proxy α argument, it must unify α with ToTree ExampleFields. Crucially, when it does so, it instantiates α to the evaluated form of ToTree ExampleFields (i.e., Branch ...), not the unevaluated form. In other words, we’re effectively calling

requiresInstance = requireEmptyClass @(Branch _ _ _) (Proxy @(ToTree ExampleFields))

Therefore, we need and build evidence for EmptyClass (Branch ...), and there is no need for a cast on the dictionary.

However, the need for a cast has not disappeared: if we instantiate xs to (Branch ...), but provide a proxy for (ToTree ...), we need to cast the proxy instead—so why the reduction in core size? Let’s take a look at the equality proof given to the cast:

   Univ(phantom phantom <Tree *>
-- (1)    (2)      (3)     (4)
     :: ToTree ExampleFields         -- (5)
      , 'Branch  (T 0)               -- (6)
           ('Branch (T 1)
               ('Two (T 3) (T 7))
               ('Two (T 5) (T 9)))
           ('Branch (T 2)
               ('Two (T 4) (T 8))
               ('One (T 6)))
)

This is it! The entire coercion that was causing us trouble has been replaced basically by a single-constructor “trust me” universal coercion. This is the only coercion that establishes “phantom equality”6. Let’s take a closer look at the coercion:

  1. Univ indicates that this is a universal coercion.
  2. The first occurrence of phantom is the role of the equality that we’re establishing.
  3. The second occurrence of phantom is the provenance of the coercion: what justifies the use of a universal coercion? Here this is phantom again (we can use a universal coercion when establishing phantom equality), but this is not always the case; one example is unsafeCoerce, which can be used to construct a universal coercion between any two types at all7.
  4. When we establish the equality of two types τ1 :: κ1 and τ2 :: κ2, we also need to establish that the two kinds κ1 and κ2 are equal. In our example, both types trivially have the same kind, and so we can just use <Tree *>: reflexivity for kind Tree *.

Finally, (5) and (6) are the two types that we are proving to be equal to each other.

This then explains the big difference between these two definitions:

requiresInstance = requireEmptyClass (Proxy @(ToTree ExampleFields))
requiresInstance = requireEmptyClass @(ToTree ExampleFields) Proxy

In the former, we are instantiating xs to (Branch ...) and we need to cast the proxy, which is basically free due to Proxy’s phantom type argument; in the latter, we are instantiating xs to (ToTree ...), and we need to cast the dictionary, which requires the full equality proof.

Incoherence

If a solution that relies on the precise nature of unification, instantiating type variables to evaluated types rather than unevaluated types, makes you a bit uneasy—I would agree. Moreover, even if we did accept that as unpleasant but necessary, we still haven’t really solved the problem. The issue is that

requireEmptyClass :: EmptyClass xs => Proxy xs -> ()
requireEmptyClass _ = ()

is a function we cannot abstract over: the moment we define something like

abstracted :: forall xs. EmptyClass (ToTree xs) => Proxy xs -> ()
abstracted _ = requireEmptyClass (Proxy @(ToTree xs))

in an attempt to make that ToTree call a responsibility of a library instead of use sites (after all, we wanted the tree representation to be an implementation detail), we are back to the original problem.

Let’s think a little harder about these roles. We finished the previous section with a remark that “(..) we need to cast the dictionary, which requires the full equality proof”. But why is that? When we discussed roles above, we saw that the a type parameter in

data ShowType a = ShowType {
      showType :: Proxy a -> String
    }

has a phantom role; yet, when we define a type class

class ShowType a where
  showType :: Proxy a -> String

(which, after all, is much the same thing), a is assigned a nominal role, not a phantom role. The reason for this is that ghc insists on coherence (see Safe zero-cost coercions for Haskell, section 3.2, Preserving class coherence). Coherence simply means that there is only ever a single instance of a class for a specific type; it’s part of how ghc prevents ambiguity during instance resolution. We can override the role of the ShowType dictionary and declare it to be phantom

type role ShowType phantom

but we lose coherence:

data ShowTypeDict a where
  ShowTypeDict :: ShowType a => ShowTypeDict a

showWithDict :: Proxy a -> ShowTypeDict a -> String
showWithDict p ShowTypeDict = showType p

instance ShowType Int where showType _ = "Int"

showTypeInt :: ShowTypeDict Int
showTypeInt = ShowTypeDict

oops :: String
oops = showWithDict (Proxy @Bool) (coerce showTypeInt)

This means that the role annotation for ShowType requires the IncoherentInstances language pragma (there is currently no class-level pragma).

Solving the problem

Despite the problem with roles and potential incoherence discussed in the previous section, role annotations on classes cannot make instance resolution ambiguous or result in runtime type errors or segfaults. We do have to be cautious with the use of coerce, but we can shield the user from this through careful module exports. Indeed, we have used role annotations on type classes to our advantage before.

Specifically, we can redefine our EmptyClass as an internal (not-exported) class as follows:

class EmptyClass' (xs :: Tree Type) -- instances as before
type role EmptyClass' phantom

then define a wrapper class that does the translation from lists to trees:

class    EmptyClass' (ToTree xs) => EmptyClass (xs :: [Type])
instance EmptyClass' (ToTree xs) => EmptyClass (xs :: [Type])

requireEmptyClass :: EmptyClass xs => Proxy xs -> ()
requireEmptyClass _ = ()
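
A use site then looks exactly as before, mentioning only the list-indexed class:

requiresInstance :: ()
requiresInstance = requireEmptyClass (Proxy @ExampleFields)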

Now the translation to a tree has become an implementation detail that users do not need to be aware of, whilst still avoiding (super)quadratic blow-up in core:

Constraint families

There is one final piece to the puzzle. Suppose we define a type family mapping types to constraints:

type family CF (a :: Type) :: Constraint

In the next section we will see a non-contrived example of this; for now, let’s just introduce a function that depends on CF a for no particular reason at all:

withCF :: CF a => Proxy a -> ()
withCF _ = ()

Finally, we provide an instance for HList:

type instance CF (HList xs) = EmptyClass (ToTree xs)

Now let’s measure the size of the core generated for a module containing a single call

satisfyCF :: ()
satisfyCF = withCF (Proxy @(HList ExampleFields))

Sigh.

Shallow thinking

In section Proxy versus type argument above, we saw that when we call

requireEmptyClass @(Branch _ _ _) (Proxy @(ToTree ExampleFields))

we construct a proof π :: Proxy (ToTree ...) ~ Proxy (Branch ...), which then gets simplified. We didn’t spell out in detail how that simplification happens though, so let’s do that now.

As mentioned above, the type checker always works with nominal equality. This means that the proof constructed by the type checker is actually π :: Proxy (ToTree ...) ~N Proxy (Branch ...). Whenever we cast something in core, however, we want a proof of representational equality. The coercion language has an explicit constructor for this8, so we could construct the proof

sub π :: Proxy (ToTree ...) ~R Proxy (Branch ...)

However, the function that constructs this proof first takes a look: if it is constructing an equality proof T π' (i.e., an appeal to congruence for type T, applied to some proof π'), where T has an argument with a phantom role, it replaces the proof (π') with a universal coercion instead. This also happens when we construct a proof that EmptyClass (ToTree ...) ~R EmptyClass (Branch ...) (like in section Solving the problem), provided that the argument to EmptyClass is declared to have a phantom role.

Why doesn’t that happen here? When we call function withCF we need to construct evidence for CF (HList ExampleFields). Just like before, this means we must first evaluate this to EmptyClass (Branch ..), resulting in a proof of nominal equality π :: CF .. ~N EmptyClass (Branch ..). We then need to change this into a proof of representational equality to use as an argument to cast. However, where before π looked like Proxy π' or EmptyClass π', we now have a proof that looks like

  D:R:CFHList[0] <'[T 0, T 1, .., T 9]>_N
; EmptyClass π'

which first proves that CF (HList ..) ~ EmptyClass (ToTree ..), and only then proves that EmptyClass (ToTree ..) ~ EmptyClass (Branch ..). The function that attempts the proof simplification only looks at the top-level of the proof, and therefore does not notice an opportunity for simplification here and simply defaults to using sub π.

The function does not traverse the entire proof because doing so at every point during compilation could be prohibitively expensive. Instead, it does cheap checks only, leaving the rest to be cleaned up by coercion optimization. Coercion optimization is part of the “very simple optimizer”, which runs even with -O0 (unless explicitly disabled with -fno-opt-coercion). In this particular example, coercion optimization will replace the proof (π') by a universal coercion, but it will only do so later in the compilation pipeline; better to reduce the size of the core sooner rather than later. It’s also not entirely clear if it will always be able to do so, especially also because the coercion optimiser can make things significantly worse in some cases, and so it may be scaled back in the future. I think it’s preferable not to depend on it.

Avoiding deep normalization

As we saw, when we call

withCF (Proxy @(HList ExampleFields))

the type checker evaluates CF (HList ExampleFields) all the way to EmptyClass (Branch ...): although there is a ghc ticket proposing to change this, at present whenever ghc evaluates a type, it evaluates it all the way. For our use case this is frustrating: if ghc were to rewrite CF (HList ..) to EmptyClass (ToTree ..) (with a tiny proof: just one rule application), we would then be back where we were in section Solving the problem, and we’d avoid the quadratic blow-up. Can we make ghc stop evaluating earlier? Yes, sort of. If instead of

type instance CF (HList xs) = EmptyClass (ToTree xs)

we say

class    EmptyClass (ToTree xs) => CF_HList xs
instance EmptyClass (ToTree xs) => CF_HList xs

type instance CF (HList xs) = CF_HList xs

then CF (HList xs) simply gets rewritten (in one step) to CF_HList xs, which contains no further type family applications. Now the type checker needs to check that CF_HList ExampleFields is satisfied, which will match the above instance, and hence it must check that EmptyClass (ToTree ExampleFields) is satisfied. Now, however, we really are back in the situation from section Solving the problem, and the size of our core looks good again:

Putting it all together: Generic instance for HList

Let’s put theory into practice and give a Generic instance for HList, of course avoiding quadratic blow-up in the process. We will use the Generic class from the large-records library, discussed at length in part 1.

The first problem we have to solve is that when giving a Generic instance for some type a, we need to choose a constraint

Constraints a :: (Type -> Constraint) -> Constraint

such that when Constraints a c is satisfied, we can get a type class dictionary for c for every field in the record:

class Generic a where
  -- | @Constraints a c@ means "all fields of @a@ satisfy @c@"
  type Constraints a (c :: Type -> Constraint) :: Constraint

  -- | Construct vector of dictionaries, one for each field of the record
  dict :: Constraints a c => Proxy c -> Rep (Dict c) a

  -- .. other class members ..

Recall that Rep (Dict c) a is a vector containing a Dict c x for every field of type x in the record. Since we are avoiding heterogeneous data structures (due to the large type annotations), we effectively need to write a function

hlistDict :: Constraints a c => Proxy a -> [Dict c Any]

Let’s tackle this in quite a general way, so that we can reuse what we develop here for the next problem as well. We have a type-level list of types of the elements of the HList, which we can translate to a type-level tree of types. We then need to reflect this type-level tree to a term-level tree with values of some type f Any (in this example, f ~ Dict c). We will delegate reflection of the values in the tree to a separate class:

class ReflectOne (f :: Type -> Type) (x :: Type) where
  reflectOne :: Proxy x -> f Any

class ReflectTree (f :: Type -> Type) (xs :: Tree Type) where
  reflectTree :: Proxy xs -> Tree (f Any)

type role ReflectTree nominal phantom -- critical!

Finally, we can flatten that tree into a list, in such a way that it reconstructs the order of the list (i.e., it’s a term-level inverse to the ToTree type family):

treeToList :: Tree a -> [a]
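
A possible implementation is sketched below. The Tree type restated here mirrors the one defined earlier in the post (see footnote 2), and the interleaving undoes the even/odd split performed by ToTree; both are my reconstruction rather than code from the post.

data Tree a = Zero | One a | Two a a | Branch a (Tree a) (Tree a)

-- Term-level inverse of ToTree: the head of the list sits at the root,
-- the left subtree holds the elements at even positions of the tail and
-- the right subtree those at odd positions, so we interleave them back.
treeToList :: Tree a -> [a]
treeToList Zero           = []
treeToList (One x)        = [x]
treeToList (Two x y)      = [x, y]
treeToList (Branch x l r) = x : interleave (treeToList l) (treeToList r)
  where
    interleave :: [b] -> [b] -> [b]
    interleave []     ys = ys
    interleave (z:zs) ys = z : interleave ys zs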

The instances for ReflectTree are easy. For example, here is the instance for One:

instance ReflectOne f x => ReflectTree f ('One x) where
  reflectTree _ = One (reflectOne (Proxy @x))
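
For illustration, the remaining instances might look roughly as follows; this is my own sketch, not code from the post (the actual definitions live in the large-records test suite):

instance ReflectTree f 'Zero where
  reflectTree _ = Zero

instance (ReflectOne f x, ReflectOne f y) => ReflectTree f ('Two x y) where
  reflectTree _ = Two (reflectOne (Proxy @x)) (reflectOne (Proxy @y))

instance (ReflectOne f x, ReflectTree f l, ReflectTree f r)
      => ReflectTree f ('Branch x l r) where
  reflectTree _ = Branch (reflectOne (Proxy @x))
                         (reflectTree (Proxy @l))
                         (reflectTree (Proxy @r))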

The actual instances for ReflectTree follow the same structure as this sketch (the full source code can be found in the large-records test suite). It remains only to define a ReflectOne instance for our current specific use case:

instance c a => ReflectOne (Dict c) (a :: Type) where
  reflectOne _ = unsafeCoerce (Dict :: Dict c a)

With this definition, a constraint ReflectTree (Dict c) (ToTree xs) for a specific list of xs will result in a c x constraint for every x in xs, as expected. We are now ready to give a partial implementation of the Generic class:

hlistDict :: forall c (xs :: [Type]).
     ReflectTree (Dict c) (ToTree xs)
  => Proxy xs -> [Dict c Any]
hlistDict _ = treeToList $ reflectTree (Proxy @(ToTree xs))

class    ReflectTree (Dict c) (ToTree xs) => Constraints_HList xs c
instance ReflectTree (Dict c) (ToTree xs) => Constraints_HList xs c

instance Generic (HList xs) where
  type Constraints (HList xs) = Constraints_HList xs
  dict _ = Rep.unsafeFromListAny $ hlistDict (Proxy @xs)

The other problem that we need to solve is that we need to construct field metadata for every field in the record. Our (over)simplified “anonymous record” representation does not have field names, so we need to make them up from the type names. Assuming some type family

type family ShowType (a :: Type) :: Symbol

we can construct this metadata in much the same way that we constructed the dictionaries:

instance KnownSymbol (ShowType a) => ReflectOne FieldMetadata (a :: Type) where
  reflectOne _ = FieldMetadata (Proxy @(ShowType a)) FieldLazy

instance ReflectTree FieldMetadata (ToTree xs) => Generic (HList xs) where
  metadata _ = Metadata {
        recordName          = "Record"
      , recordConstructor   = "MkRecord"
      , recordSize          = length fields
      , recordFieldMetadata = Rep.unsafeFromListAny fields
      }
    where
      fields :: [FieldMetadata Any]
      fields = treeToList $ reflectTree (Proxy @(ToTree xs))

The graph below compares the size of the generated core between a straightforward integration with generics-sop and two integrations with large-records, one in which the ReflectTree parameter has its inferred nominal role, and one with the phantom role override:

Both the generics-sop integration and the nominal large-records integration are O(n²). The constant factors are worse for generics-sop, however, because it suffers both from the problems described in part 1 of this blog and from the problems described in this part 2. The nominal large-records integration only suffers from the problems described in part 2, which are avoided by the O(n log n) phantom integration.

TL;DR: Advice

We considered a lot of deeply technical detail in this blog post, but I think it can be condensed into two simple rules. To reduce the size of the generated core code:

  1. Use instance induction only with balanced data structures.
  2. Use expensive type families only in phantom contexts.

where “phantom context” is shorthand for “as an argument to a datatype, where that argument has a phantom role”. The table below summarizes the examples we considered in this blog post.

Don’t: instance C xs => instance C (x ': xs)
Do:    instance (C l, C r) => instance C ('Branch l r)

Don’t: foo @(F xs) Proxy
Do:    foo (Proxy @(F xs))

Don’t: type role Cls nominal
Do:    type role Cls phantom
       (use with caution: requires IncoherentInstances)

Don’t: type instance F a = Expensive a
Do:    type instance F a = ClassAlias a
       (for F a :: Constraint)

Conclusions

This work was done in the context of trying to improve the compilation time of Juspay’s code base. When we first started analysing why Juspay was suffering from such long compilation times, we realized that a large part of the problem was due to the large core size generated by ghc when using large records, which is quadratic in the number of record fields. We therefore developed the large-records library, which offers support for records and guarantees core code that is linear in the size of the record. A first integration attempt showed that this improved compilation time by roughly 30%, although this can probably be tweaked a little further.

As explained in MonadFix’s blog post Transpiling a large PureScript codebase into Haskell, part 2: Records are trouble, however, some of the records in the code base are anonymous records, for which the large-records library is not (directly) applicable; instead, MonadFix uses their own library jrec.

An analysis of the integration with large-records revealed, however, that these large anonymous records cause problems similar to those the named records did. Amongst the 13 most expensive modules (in terms of compilation time), 5 modules suffered from huge coercions, with less extreme examples elsewhere in the codebase. In one particularly bad example, a single function contained coercion terms totalling nearly 2,000,000 AST nodes! (The other problem that stood out was due to large enumerations, not something I’ve thought much about yet.)

We have certainly not resolved all sources of quadratic core code for anonymous records in this blog post, nor have we attempted to integrate these ideas in jrec. However, the generic instance for anonymous records (using generics-sop generics) was particularly troublesome, and the ideas outlined above should at least solve that problem.

In general the problem of avoiding generating superlinear core is difficult and multifaceted:

Fortunately, ghc gives us just enough leeway that if we are very careful we can avoid the problem. That’s not a solution, obviously, but at least there are temporary work-arounds we can use.

Postscript: Pre-evaluating type families

If the evaluation of type families at compile time leads to large proofs, one for every step in the evaluation, perhaps we can improve matters by pre-evaluating these type families. For example, we could define ToTree as

type family ToTree (xs :: [Type]) :: Tree Type where
  ToTree '[]                       = 'Zero
  ToTree '[x0]                     = 'One x0
  ToTree '[x0, x1]                 = 'Two x0 x1
  ToTree '[x0, x1, x2]             = 'Branch x0 ('One x1)    ('One x2)
  ToTree '[x0, x1, x2, x3]         = 'Branch x0 ('Two x1 x3) ('One x2)
  ToTree '[x0, x1, x2, x3, x4]     = 'Branch x0 ('Two x1 x3) ('Two x2 x4)
  ToTree '[x0, x1, x2, x3, x4, x5] = 'Branch x0 ('Branch x1 ('One x3) ('One x5)) ('Two x2 x4)
  ...

perhaps (or perhaps not) with a final case that defaults to regular evaluation. Now ToTree xs can evaluate in a single step for any of the pre-computed cases. Somewhat ironically, in this case we’re better off without phantom contexts:

The universal coercion is now larger than the regular proof, because it records the full right-hand side of the type family:

Univ(phantom phantom <Tree *>
  :: 'Branch
        (T 0)
        ('Branch (T 1) ('Two (T 3) (T 7)) ('Two (T 5) (T 9)))
        ('Branch (T 2) ('Two (T 4) (T 8)) ('One (T 6)))
   , ToTree '[T 0, T 1, T 2, T 3, T 4, T 5, T 6, T 7, T 8, T 9]))

The regular proof instead only records the rule that we’re applying:

D:R:ToTree[10] <T 0> <T 1> <T 2> <T 3> <T 4> <T 5> <T 6> <T 7> <T 8> <T 9>

Hence, the universal coercion is O(n log n), whereas the regular proof is O(n). Incidentally, this is also the reason why ghc doesn’t just always use universal coercions; in some cases the universal coercion can be significantly larger than the regular proof (the Rep type family of GHC.Generics being a typical example). The difference between O(n) and O(n log n) is of course significantly less important than the difference between O(n log n) and O(n²), especially since we still have other O(n log n) factors anyway, so if we can precompute type families like this, perhaps we can be less careful about roles.

However, this is only a viable option for certain type families. A full-blown anonymous records library will probably need to do quite a bit of type-level computation on the indices of the record. For example, the classical extensible records theory by Gaster and Jones depends crucially on a predicate lacks that checks that a particular field is not already present in a record. We might define this as

type family Lacks (x :: k) (xs :: [k]) :: Bool where
  Lacks x '[]       = 'True
  Lacks x (x ': xs) = 'False
  Lacks x (y ': xs) = Lacks x xs

It’s not clear to me how to pre-compute this type family in such a way that the right-hand side can evaluate in O(1) steps, without the type family needing O(n²) cases. We might attempt

type family Lacks (x :: k) (xs :: [k]) :: Bool where
  -- 0
  Lacks x '[] = 'True

  -- 1
  Lacks x '[x] = 'False
  Lacks x '[y] = 'True

  -- 2
  Lacks x '[x  , y2] = 'False
  Lacks x '[y1 , x ] = 'False
  Lacks x '[y1 , y2] = 'True

  -- 3
  Lacks x '[x,  y2 , y3] = 'False
  Lacks x '[y1, x  , y3] = 'False
  Lacks x '[y1, y2 , x ] = 'False
  Lacks x '[y1, y2 , y3] = 'True

  -- etc

but even if we only supported records with up to 200 fields, this would require over 20,000 cases. It’s tempting to try something like

type family Lacks (x :: k) (xs :: [k]) :: Bool where
  Lacks x []           = True
  Lacks x [y0]         = (x != y0)
  Lacks x [y0, y1]     = (x != y0) && (x != y1)
  Lacks x [y0, y1, y2] = (x != y0) && (x != y1) && (x != y2)
  ..

but that is not a solution (even supposing we had such an != operator): the right-hand side still has O(n) operations, and so this would still result in proofs of O(n) size, just like the non-pre-evaluated Lacks definition. Type families such as Sort would be more difficult still. Thus, although precomputation can in certain cases help avoid large proofs, and it’s a useful technique to have in the arsenal, I don’t think it’s a general solution.

Footnotes

  1. The introduction of the Tree datatype is not strictly necessary. We could instead work exclusively with lists, and use Evens/Odds directly to split the list in two at each step. The introduction of Tree, however, makes this structure more obvious, and as a bonus leads to smaller core, although the difference is a constant factor only.↩︎

  2. This Tree data type is somewhat non-standard: we have values both at the leaves and in the branches, and we have leaves with zero, one or two values. Having values in both branches and leaves reduces the number of tree nodes by one half (leading to smaller core), which is an important enough improvement to warrant the minor additional complexity. The Two constructor only reduces the size of the tree by roughly a further 4%, so it is not really worth it in general, but it’s useful for the sake of this blog post, as it keeps examples of trees a bit more manageable.↩︎

  3. Specifically, I made the following simplifications to the actual coercion proof:
    1. Drop the use of Sym, replacing Sym (a ; .. ; c) by (Sym c ; .. ; Sym a) and Sym c by simply c for atomic axioms c.
    2. Appeal to transitivity to replace a ; (b ; c) by a ; b ; c
    3. Drop all explicit use of congruence, replacing T c by simply c, whenever T has a single argument.
    4. As a final step, reverse the order of the proof; in this case at least, ghc seemed to reason backwards from the result of the type family application, but it’s more intuitive to reason forwards instead.↩︎

  4. See Master Theorem, case 3: Work to split/recombine a problem dominates subproblems.↩︎

  5. Roles were first described in Generative Type Abstraction and Type-level Computation. Phantom roles were introduced in Safe zero-cost coercions for Haskell.↩︎

  6. Safe zero-cost coercions for Haskell does not talk about this in great detail, but does mention it. See Fig 5, “Formation rules for coercions”, rule Co_Phantom, as well as the (extremely brief) section 4.2.2, “Phantom equality relates all types”. The paper does not use the terminology “universal coercion”.↩︎

  7. Confusingly, the pretty-printer uses a special syntax for a universal unsafe coercion, using UnsafeCo instead of Univ.↩︎

  8. Safe zero-cost coercions for Haskell, section 4.2.1: Nominal equality implies representational equality.↩︎

by edsko, adam at October 20, 2021 12:00 AM

Lysxia's blog

Initial and final encodings of free monads

Posted on October 20, 2021

Free monads are often introduced as an algebraic data type, an initial encoding:

data Free f a = Pure a | Free (f (Free f a))

Thanks to that, the term “free monads” tends to be confused with that encoding, even though “free monads” originally refers to a representation-independent idea. Dually, there is a final encoding of free monads:

type Free' f a = (forall m. MonadFree f m => m a)

where MonadFree is the following class:

class Monad m => MonadFree f m where
  free :: f (m a) -> m a

The two types Free and Free' are isomorphic. An explanation a posteriori is that free monads are unique up to isomorphism. In this post, we will prove that they are isomorphic more directly,1 in Coq.

In other words, there are two functions:

fromFree' :: Free' f a -> Free f a
toFree' :: Free f a -> Free' f a

such that, for all u :: Free f a,

fromFree' (toFree' u) = u  -- easy

and for all u :: Free' f a,

toFree' (fromFree' u) = u  -- hard

(Also, these functions are monad morphisms.)
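
For concreteness, here is one way the two directions could be written in Haskell. This is a sketch of my own (it is not code from the post): it assumes Functor f, adds that constraint to the signatures above, and fills in the evident instances for Free f.

{-# LANGUAGE RankNTypes, MultiParamTypeClasses, FlexibleInstances #-}

import Control.Monad (ap)

-- Free, Free' and MonadFree as defined above.

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free ff) = Free (fmap (fmap g) ff)

instance Functor f => Applicative (Free f) where
  pure  = Pure
  (<*>) = ap

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free ff >>= k = Free (fmap (>>= k) ff)

instance Functor f => MonadFree f (Free f) where
  free = Free

-- Interpret the initial encoding into any other MonadFree instance.
foldFree :: (Functor f, MonadFree f m) => Free f a -> m a
foldFree (Pure a)  = return a
foldFree (Free ff) = free (fmap foldFree ff)

toFree' :: Functor f => Free f a -> Free' f a
toFree' u = foldFree u

fromFree' :: Functor f => Free' f a -> Free f a
fromFree' u = u  -- just monomorphize the polymorphic value at m ~ Free f

The rest of the post works with the Coq counterparts of these definitions instead.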

The second equation is hard to prove because it relies on a subtle fact about polymorphism. If you have a polymorphic function forall m ..., it can only interact with m via operations provided as parameters—in the MonadFree dictionary. The equation breaks down if you can perform some kind of case analysis on types, such as isinstanceof in certain languages. This idea is subtle because of the question it raises: how do you turn this negative property "does not use isinstanceof" into a positive, useful fact about the functions of a language?

Parametricity is the name given to such properties. You can get a good intuition for it with some practice. For example, most people can convince themselves that forall a. a -> a is only inhabited by the identity function. But formalizing it so you can validate your intuition is a more mysterious art.

Proof sketch

First, unfolding some definitions, the equation we want to prove will simplify to the following:

foldFree (u @(Free f)) = u @m

where u :: forall m. MonadFree f m => m a is specialized at Free f on the left, at an arbitrary m on the right, and foldFree :: Free f a -> m a is a certain function we do not need to look into for now.

The main idea is that those different specializations of u are related by a parametricity theorem (aka. free theorem).

For all monads m1, m2 that are instances of MonadFree f, and for any relation r between m1 and m2, if r satisfies $CERTAIN_CONDITIONS, then r relates u @m1 and u @m2.

In this case, we will let r relate u1 :: Free f a and u2 :: m a when:

foldFree u1 = u2

As it turns out, r will satisfy $CERTAIN_CONDITIONS, so that the parametricity theorem above applies. This yields exactly the desired conclusion:

foldFree (u @(Free f)) = u @m

It is going to be a gnarly exposition of definitions before we can even get to the proof, and the only reason I can think of to stick around is morbid curiosity. But I had the proof and I wanted to do something with it.2

Formalization in Coq

Imports and setting options
From Coq Require Import Morphisms.

Set Implicit Arguments.
Set Contextual Implicit.

Initial free monads

Right off the bat, the first hurdle is that we cannot actually write the initial Free in Coq. To guarantee that all functions terminate and to prevent logical inconsistencies, Coq imposes restrictions on which recursive types can be defined. Indeed, Free could be used to construct an infinite loop by instantiating it with a contravariant functor f. The following snippet shows how we can inhabit the empty type Void, using only non-recursive definitions, so it’s fair to put the blame on Free:

newtype Cofun b a = Cofun (a -> b)

omicron :: Free (Cofun Void) Void -> Void
omicron (Pure y) = y
omicron (Free (Cofun z)) = z (Free (Cofun z))

omega :: Void
omega = omicron (Free (Cofun omicron))

To bypass that issue, we can tweak the definition of Free into what you might know as the freer monad, or the operational monad. The key difference is that the recursive occurrence of Free f a is no longer under an abstract f, but a concrete (->) instead.

Inductive Free (f : Type -> Type) (a : Type) : Type :=
| Pure : a -> Free f a
| Bind : forall e, f e -> (e -> Free f a) -> Free f a
.

Digression on containers

With that definition, it is no longer necessary for f to be a functor—it’s even undesirable because of size issues. Instead, f should rather be thought of as a type of “shapes”, containing “positions” of type e, and that induces a functor by assigning values to those positions (via the function e -> Free f a here); such an f is also known as a “container”.

For example, the Maybe functor consists of two “shapes”: Nothing, with no positions (indexed by Void), and Just, with one position (indexed by ()). Those shapes are defined by the following GADT, the Maybe container:

data PreMaybe _ where
  Nothing_ :: PreMaybe Void
  Just_ :: PreMaybe ()

A container extends into a functor, using a construction that some call Coyoneda:

data Maybe' a where
  MkMaybe' :: forall a e. PreMaybe e -> (e -> a) -> Maybe' a

data Coyoneda f a where
  Coyoneda :: forall f a e. f e -> (e -> a) -> Coyoneda f a

Freer f a (where Freer is called Free here in Coq) coincides with Free (Coyoneda f) a (for the original definition of Free at the top). If f is already a functor, then it is observationally equivalent to Coyoneda f.
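
When f is a functor, that observational equivalence is witnessed by a pair of conversions, roughly as follows (my own sketch, not code from the post):

-- Embed a functor into its Coyoneda presentation by deferring the fmap.
toCoyoneda :: f a -> Coyoneda f a
toCoyoneda fa = Coyoneda fa id

-- Run the deferred fmap; this is the only place the Functor instance is needed.
fromCoyoneda :: Functor f => Coyoneda f a -> f a
fromCoyoneda (Coyoneda fe k) = fmap k fe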

Monad and MonadFree

The Monad class hides no surprises. For simplicity we skip the Functor and Applicative classes. Like in C, return is a keyword in Coq, so we have to settle for another name.

Class Monad (m : Type -> Type) : Type :=
  { pure : forall {a}, a -> m a
  ; bind : forall {a b}, m a -> (a -> m b) -> m b
  }.
(* The braces after `forall` make the arguments implicit. *)

Our MonadFree class below is different than in Haskell because of the switch from functors to containers (see previous section). In the original MonadFree, the method free takes an argument of type f (m a), where the idea is to “interpret” the outer layer f, and “carry on” with a continuation m a. Containers encode that outer layer without the continuation.3

Class MonadFree {f m : Type -> Type} `{Monad m} : Type :=
  { free : forall {x}, f x -> m x }.

(* Some more implicit arguments nonsense. *)
Arguments MonadFree f m {_}.

Here comes the final encoding of free monads. The resemblance to the Haskell code above should be apparent in spite of some funny syntax.

Definition Free' (f : Type -> Type) (a : Type) : Type :=
  forall m `(MonadFree f m), m a.

Type classes in Coq are simply types with some extra type inference rules to infer dictionaries. Thus, the definition of Free' actually desugars to a function type forall m, Monad m -> MonadFree f m -> m a. A value u : Free' f a is a function whose arguments are a type constructor m, followed by two dictionaries of the Monad and MonadFree classes. We specialize u to a monad m by writing u m _ _, applying u to the type constructor m and two holes (underscores) for the dictionaries, whose contents will be inferred via type class resolution. See for example fromFree' below.

While we’re at it, we can define the instances of Monad and MonadFree for the initial encoding Free.

Fixpoint bindFree {f a b} (u : Free f a) (k : a -> Free f b) : Free f b :=
  match u with
  | Pure a => k a
  | Bind e h => Bind e (fun x => bindFree (h x) k)
  end.

Instance Monad_Free f : Monad (Free f) :=
  {| pure := @Pure f
  ;  bind := @bindFree f
  |}.

Instance MonadFree_Free f : MonadFree f (Free f) :=
  {| free A e := Bind e (fun a => Pure a)
  |}.

Interpretation of free monads

To show that those monads are equivalent, we must exhibit a mapping going both ways.

The easy direction is from the final Free' to the initial Free: with the above instances of Monad and MonadFree, just monomorphize the polymorph.

Definition fromFree' {f a} : Free' f a -> Free f a :=
  fun u => u (Free f) _ _.

The other direction is obtained via a fold of Free f, which allows us to interpret it in any instance of MonadFree f: replace Bind with bind, interpret the first operand with free, and recurse in the second operand.

Fixpoint foldFree {f m a} `{MonadFree f m} (u : Free f a) : m a :=
  match u with
  | Pure a => pure a
  | Bind e k => bind (free e) (fun x => foldFree (k x))
  end.

Definition toFree' {f a} : Free f a -> Free' f a :=
  fun u M _ _ => foldFree u.

Equality

In everyday mathematics, equality is a self-evident notion that we take for granted. But if you want to minimize your logical foundations, you do not need equality as a primitive. Equations are just equivalences, where the equivalence relation is kept implicit.

Who even decides what the rules for reasoning about equality are anyway? You decide, by picking the underlying equivalence relation.4

Here is a class for equality. It is similar to Eq in Haskell, but it is propositional (a -> a -> Prop) rather than boolean (a -> a -> Bool), meaning that equality doesn’t have to be decidable.

Class PropEq (a : Type) : Type :=
  propeq : a -> a -> Prop.

Notation "_ = _" was already used in scope type_scope. [notation-overridden,parsing]

For example, for inductive types, a common equivalence can be defined as another inductive type which equates constructors and their fields recursively. Here it is for Free:

Inductive eq_Free f a : PropEq (Free f a) :=
| eq_Free_Pure x : eq_Free (Pure x) (Pure x)
| eq_Free_Bind p (e : f p) k1 k2
  : (forall x, eq_Free (k1 x) (k2 x)) ->
    eq_Free (Bind e k1) (Bind e k2)
.

(* Register it as an instance of PropEq *)
Existing Instance eq_Free.

Having defined equality for Free, we can state and prove one half of the isomorphism between Free and Free'.

f: Type -> Type
a: Type
u: Free f a

fromFree' (toFree' u) = u

The proof is straightforward by induction, case analysis (which is performed as part of induction), and simplification.

f: Type -> Type
a: Type
u: Free f a

fromFree' (toFree' u) = u
f: Type -> Type
a: Type
a0: a

fromFree' (toFree' (Pure a0)) = Pure a0
f: Type -> Type
a, e: Type
f0: f e
f1: e -> Free f a
H: forall e : e, fromFree' (toFree' (f1 e)) = f1 e
fromFree' (toFree' (Bind f0 f1)) = Bind f0 f1
f: Type -> Type
a: Type
a0: a

Pure a0 = Pure a0
f: Type -> Type
a, e: Type
f0: f e
f1: e -> Free f a
H: forall e : e, fromFree' (toFree' (f1 e)) = f1 e
Bind f0 (fun x : e => foldFree (f1 x)) = Bind f0 f1
all: constructor; auto. Qed.

Equality on final encodings, naive attempts

To state the other half of the isomorphism (toFree' (fromFree' u) = u), it is less obvious what the right equivalence relation on Free' should be. When are two polymorphic values u1, u2 : forall m `(MonadFree f m), m a equal? A fair starting point is that all of their specializations must be equal. “Equality” requires an instance of PropEq, which must be introduced as an extra parameter.

(* u1 and u2 are "equal" when all of their specializations
   (u1 m _ _) and (u2 m _ _) are equal. *)
Definition eq_Free'_very_naive f a (u1 u2 : Free' f a) : Prop :=
  forall m `(MonadFree f m) `(forall x, PropEq (m x)),
    u1 m _ _ = u2 m _ _.

That definition is flagrantly inadequate: so far, a PropEq instance can be any relation, including the empty relation (which never holds), and the Monad instance (as a superclass of MonadFree) might be unlawful. In our desired theorem, toFree' (fromFree' u) = u, the two sides use a priori different combinations of bind and pure, so we expect to rely on laws to be able to rewrite one side into the other.

In programming, we aren’t used to proving that implementations satisfy their laws, so there is always the possibility that a Monad instance is unlawful. In math, the laws are in the definitions; if something doesn’t satisfy the monad laws, it’s not a monad. Let’s irk some mathematicians and say that a lawful monad is a monad that satisfies the monad laws. Thus we will have one Monad class for the operations only, and one LawfulMonad class for the laws they should satisfy. Separating code and proofs that way helps to organize things. Code is often much simpler than the proofs about it, since the latter necessarily involve dependent types.

Class LawfulMonad {m} `{Monad m} `{forall a, PropEq (m a)} : Prop :=
  { Equivalence_LawfulMonad :> forall a, Equivalence (propeq (a := m a))
  ; propeq_bind : forall a b (u u' : m a) (k k' : a -> m b),
      u = u' -> (forall x, k x = k' x) -> bind u k = bind u' k'
  ; bind_pure : forall a (u : m a),
      bind u (pure (a := a)) = u
  ; pure_bind : forall a b (x : a) (k : a -> m b),
      bind (pure x) k = k x
  ; bind_bind : forall a b c (u : m a) (k : a -> m b) (h : b -> m c),
      bind (bind u k) h = bind u (fun x => bind (k x) h)
  }.

The three monad laws should be familiar (bind_pure, pure_bind, bind_bind). In those equations, “=” denotes a particular equivalence relation, which is now a parameter/superclass of the class. Once you give up on equality as a primitive notion, algebraic structures must now carry their own equivalence relations. The requirement that it is an equivalence relation also becomes an explicit law (Equivalence_LawfulMonad), and we expect that operations (in this case, bind) preserve the equivalence (propeq_bind). Practically speaking, that last fact allows us to rewrite subexpressions locally, otherwise we could only apply the monad laws at the root of an expression.

A less naive equivalence on Free' is thus to restrict the quantification to lawful instances:

Definition eq_Free'_naive f a (u1 u2 : Free' f a) : Prop :=
  forall m `(MonadFree f m) `(forall x, PropEq (m x)) `(!LawfulMonad (m := m)),
    u1 m _ _ = u2 m _ _.

That is a quite reasonable definition of equivalence for Free'. In other circumstances, it could have been useful. Unfortunately, it is too strong here: we cannot prove the equation toFree' (fromFree' u) = u with that interpretation of =. Or at least I couldn’t figure out a solution. We will need more assumptions to be able to apply the parametricity theorem of the type Free'. To get there, we must formalize Reynolds’ relational interpretation of types.

Types as relations

The core technical idea in Reynolds’ take on parametricity is to interpret a type t as a relation Rt : t -> t -> Prop. Then, the parametricity theorem is that all terms x : t are related to themselves by Rt (Rt x x is true). If t is a polymorphic type, that theorem connects different specializations of a same term x : t, and that allows us to formalize arguments that rely on “parametricity” as a vague idea.

For example, if t = (forall a, a -> a), then Rt is the following relation, which says that two functions f and f' are related if for any relation Ra (on any types), f and f' send related inputs (Ra x x') to related outputs (Ra (f a x) (f' a' x')).

Rt f f' =
  forall a a' (Ra : a -> a' -> Prop),
  forall x x', Ra x x' -> Ra (f a x) (f' a' x')

If we set Ra x x' to mean “x equals an arbitrary constant z0” (ignoring x', i.e., treating Ra as a unary relation), the above relation Rt amounts to saying that f z0 = z0, from which we deduce that f must be the identity function.

The fact that Rt is a relation is not particularly meaningful to the parametricity theorem, where terms are simply related to themselves, but it is a feature of the construction of Rt: the relation for a composite type t1 -> t2 combines the relations for the components t1 and t2, and we could not get the same result with only unary predicates throughout.5 More formally, we define a relation R[t] by induction on t, between the types t and t', where t' is the result of renaming all variables x to x' in t (including binders). The two most interesting cases are:

  • t starts with a quantifier t = forall a, _, for a type variable a. Then the relation R[forall a, _] between the polymorphic f and f' takes two arbitrary types a and a' to specialize f and f' with, and a relation Ra : a -> a' -> Prop, and relates f a and f' a' (recursively), using Ra whenever recursion reaches a variable a.

  • t is an arrow t1 -> t2, then R[t1 -> t2] relates functions that send related inputs to related outputs.

In summary:

R[forall a, t](f, f') = forall a a' Ra, R[t](f a)(f' a')
R[a](f, f')           = Ra(f, f')
                        -- Ra should be in scope when a is in scope.
R[t1 -> t2](f, f')    = forall x x', R[t1](x, x') -> R[t2](f x, f' x')

That explanation was completely unhygienic, but refer to Reynolds’ paper or Wadler’s Theorems for free! for more formal details.

For sums (Either/sum) and products ((,)/prod), two values are related if they start with the same constructor, and their fields are related (recursively). This can be deduced from the rules above applied to the Church encodings of sums and products.
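
Spelled out in the same informal notation as above (my own rendering of the standard rules):

R[(t1, t2)]((x, y), (x', y'))      = R[t1](x, x') and R[t2](y, y')
R[Either t1 t2](Left x,  Left x')  = R[t1](x, x')
R[Either t1 t2](Right y, Right y') = R[t2](y, y')
R[Either t1 t2](_, _)              = False  -- values built from different constructors are unrelated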

Type constructors as relation transformers

While types t : Type are associated to relations Rt : t -> t -> Prop, type constructors m : Type -> Type are associated to relation transformers (functions on relations) Rm : forall a a', (a -> a' -> Prop) -> (m a -> m a' -> Prop). It is usually clear what’s what from the context, so we will often refer to “relation transformers” as just “relations”.

For example, the initial Free f a type gets interpreted to the relation RFree Rf Ra defined as follows. Two values u1 : Free f1 a1 and u2 : Free f2 a2 are related by RFree if either:

  • u1 = Pure x1, u2 = Pure x2, and x1 and x2 are related (by Ra); or
  • u1 = Bind e1 k1, u2 = Bind e2 k2, e1 and e2 are related, and k1 and k2 are related (recursively).

We thus have one rule for each constructor (Pure and Bind) in which we relate each field (Ra x1 x2 in RFree_Pure; Rf _ _ _ y1 y2 and RFree Rf Ra (k1 x1) (k2 x2) in RFree_Bind). Let us also remark that the existential type e in Bind becomes an existential relation Re in RFree_Bind.

Inductive RFree {f₁ f₂ : Type -> Type}
    (Rf : forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> f₁ a₁ -> f₂ a₂ -> Prop)
    {a₁ a₂ : Type} (Ra : a₁ -> a₂ -> Prop) : Free f₁ a₁ -> Free f₂ a₂ -> Prop :=
  | RFree_Pure : forall (x₁ : a₁) (x₂ : a₂),
      Ra x₁ x₂ -> RFree Rf Ra (Pure x₁) (Pure x₂)
  | RFree_Bind : forall (e₁ e₂ : Type) (Re : e₁ -> e₂ -> Prop) (y₁ : f₁ e₁) (y₂ : f₂ e₂),
      Rf e₁ e₂ Re y₁ y₂ ->
      forall (k₁ : e₁ -> Free f₁ a₁) (k₂ : e₂ -> Free f₂ a₂),
      (forall (x₁ : e₁) (x₂ : e₂),
        Re x₁ x₂ -> RFree Rf Ra (k₁ x₁) (k₂ x₂)) ->
      RFree Rf Ra (Bind y₁ k₁) (Bind y₂ k₂).

Inductive relations such as RFree, indexed by types with existential quantifications such as Free, are a little terrible to work with out-of-the-box—especially if you’re allergic to UIP. Little “inversion lemmas” like the following make them a bit nicer by reexpressing those relations in terms of some standard building blocks which leave less of a mess when decomposed.

f₁, f₂: Type -> Type
Rf: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> f₁ a₁ -> f₂ a₂ -> Prop
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
u₁: Free f₁ a₁
u₂: Free f₂ a₂

RFree Rf Ra u₁ u₂ -> match u₁ with | Pure a₁ => match u₂ with | Pure a₂ => Ra a₁ a₂ | Bind _ _ => False end | @Bind _ _ e y₁ k₁ => match u₂ with | Pure _ => False | @Bind _ _ e0 y₂ k₂ => exists Re : e -> e0 -> Prop, Rf e e0 Re y₁ y₂ /\ (forall (x₁ : e) (x₂ : e0), Re x₁ x₂ -> RFree Rf Ra (k₁ x₁) (k₂ x₂)) end end
f₁, f₂: Type -> Type
Rf: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> f₁ a₁ -> f₂ a₂ -> Prop
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
u₁: Free f₁ a₁
u₂: Free f₂ a₂

RFree Rf Ra u₁ u₂ -> match u₁ with | Pure a₁ => match u₂ with | Pure a₂ => Ra a₁ a₂ | Bind _ _ => False end | @Bind _ _ e y₁ k₁ => match u₂ with | Pure _ => False | @Bind _ _ e0 y₂ k₂ => exists Re : e -> e0 -> Prop, Rf e e0 Re y₁ y₂ /\ (forall (x₁ : e) (x₂ : e0), Re x₁ x₂ -> RFree Rf Ra (k₁ x₁) (k₂ x₂)) end end
intros []; eauto. Qed.

Type classes, which are (record) types, also get interpreted in the same way. Since Monad is parameterized by a type constructor m, the relation RMonad between Monad instances is parameterized by a relation between two type constructors m1 and m2. Two instances of Monad, i.e., two values of type Monad m for some m, are related if their respective fields, i.e., pure and bind, are related. pure and bind are functions, so two instances are related when they send related inputs to related outputs.

Record RMonad (m₁ m₂ : Type -> Type)
    (Rm : forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> m₁ a₁ -> m₂ a₂ -> Prop)
    `{Monad m₁} `{Monad m₂} : Prop :=
  { RMonad_pure : forall (t₁ t₂ : Type) (Rt : t₁ -> t₂ -> Prop) (x₁ : t₁) (x₂ : t₂),
      Rt x₁ x₂ -> Rm t₁ t₂ Rt (pure x₁) (pure x₂)
  ; RMonad_bind : forall (t₁ t₂ : Type) (Rt : t₁ -> t₂ -> Prop) 
      (u₁ u₂ : Type) (Ru : u₁ -> u₂ -> Prop) (x₁ : m₁ t₁) (x₂ : m₂ t₂),
      Rm t₁ t₂ Rt x₁ x₂ ->
      forall (k₁ : t₁ -> m₁ u₁) (k₂ : t₂ -> m₂ u₂),
      (forall (x₁ : t₁) (x₂ : t₂),
         Rt x₁ x₂ -> Rm u₁ u₂ Ru (k₁ x₁) (k₂ x₂)) ->
      Rm u₁ u₂ Ru (bind x₁ k₁) (bind x₂ k₂)
  }.

MonadFree also gets translated to a relation RMonadFree. Related inputs, related outputs.

Record RMonadFree (f₁ f₂ : Type -> Type)
    (Rf : forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> f₁ a₁ -> f₂ a₂ -> Prop)
    (m₁ m₂ : Type -> Type)
    (Rm : forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> m₁ a₁ -> m₂ a₂ -> Prop)
    `{MonadFree f₁ m₁} `{MonadFree f₂ m₂} : Prop :=
  { RMonadFree_free : forall (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (x₁ : f₁ a₁) (x₂ : f₂ a₂),
      Rf a₁ a₂ Ra x₁ x₂ -> Rm a₁ a₂ Ra (free x₁) (free x₂)
  }.

Note that RMonad and RMonadFree are “relation transformer transformers”, since they take relation transformers as arguments, to produce a relation between class dictionaries.

We can now finally translate the final Free' to a relation. Two values u1 : Free' f1 a1 and u2 : Free' f2 a2 are related if, for any two monads m1 and m2, with a relation transformer Rm, whose Monad and MonadFree instances are related by RMonad and RMonadFree, Rm relates u1 m1 _ _ and u2 m2 _ _.

Definition RFree' {f₁ f₂} Rf {a₁ a₂} Ra (u₁ : Free' f₁ a₁) (u₂ : Free' f₂ a₂) : Prop :=
  forall m₁ m₂ `(MonadFree f₁ m₁) `(MonadFree f₂ m₂) Rm
    (pm : RMonad Rm) (pf : RMonadFree Rf Rm),
    Rm _ _ Ra (u₁ m₁ _ _) (u₂ m₂ _ _).

The above translation of types into relations can be automated by a tool such as paramcoq. However paramcoq currently constructs relations in Type instead of Prop, which got me stuck in universe inconsistencies. That’s why I’m declaring Prop relations the manual way here.

The parametricity theorem says that any u : Free' f a is related to itself by RFree' (for some canonical relations on f and a). It is a theorem about the language Coq which we can’t prove within Coq. Rather than postulate it, we will simply add the required RFree' _ _ u u assumption to our proposition (from_to below). Given a concrete u, it should be straightforward to prove that assumption case-by-case in order to apply that proposition.

These “relation transformers” are a bit of a mouthful to spell out, and they’re usually guessable from the type constructor (f or m), so they deserve a class that is a higher-order counterpart to PropEq (like Eq1 is to Eq in Haskell).

Class PropEq1 (m : Type -> Type) : Type :=
  propeq1 : forall a₁ a₂, (a₁ -> a₂ -> Prop) -> m a₁ -> m a₂ -> Prop.

Given a PropEq1 m instance, we can apply it to the relation eq to get a plain relation which seems a decent enough default for PropEq (m a).

Instance PropEq_PropEq1 {m} `{PropEq1 m} {a} : PropEq (m a) := propeq1 eq.

Really lawful monads

We previously defined a “lawful monad” as a monad with an equivalence relation (PropEq (m a)). To use parametricity, we will also need a monad m to provide a relation transformer (PropEq1 m), which subsumes PropEq with the instance just above.6 This extra structure comes with additional laws, extending our idea of monads to “really lawful monads”.

Class Trans_PropEq1 {m} `{PropEq1 m} : Prop :=
  trans_propeq1 : forall a₁ a₂ (r : a₁ -> a₂ -> Prop) x₁ x₁' x₂ x₂',
    x₁ = x₁' -> propeq1 r x₁' x₂ -> x₂ = x₂' -> propeq1 r x₁ x₂'.

Class ReallyLawfulMonad m `{Monad m} `{PropEq1 m} : Prop :=
  { LawfulMonad_RLM :> LawfulMonad (m := m)
  ; Trans_PropEq1_RLM :> Trans_PropEq1 (m := m)
  ; RMonad_RLM : RMonad (propeq1 (m := m))
  }.

Class ReallyLawfulMonadFree f `{PropEq1 f} m `{MonadFree f m} `{PropEq1 m} : Prop :=
  { ReallyLawfulMonad_RLMF :> ReallyLawfulMonad (m := m)
  ; RMonadFree_RLMF : RMonadFree (propeq1 (m := f)) (propeq1 (m := m))
  }.

We inherit the LawfulMonad laws from before. The relations RMonad and RMonadFree, defined earlier, must relate m’s instances of Monad and MonadFree, for the artificial reason that that’s roughly what RFree' will require. We also add a generalized transitivity law, which allows us to rewrite either side of a heterogeneous relation propeq1 r using the homogeneous one = (which denotes propeq1 eq).

It’s worth noting that there is some redundancy here, that could be avoided with a bit of refactoring. That generalized transitivity law Trans_PropEq1 implies transitivity of =, which is part of the claim that = is an equivalence relation in LawfulMonad. And the bind component of RMonad implies propeq_bind in LawfulMonad, so these RMonad and RMonadFree laws can also be seen as generalizations of congruence laws to heterogeneous relations, making them somewhat less artificial than they may seem at first.

Restricting the definition of equality on the final free monad Free' to quantify only over really lawful monads yields the right notion of equality for our purposes, which is to prove the from_to theorem below, validating the isomorphism between Free and Free'.

Instance eq_Free' f `(PropEq1 f) a : PropEq (Free' f a) :=
  fun u₁ u₂ =>
    forall m `(MonadFree f m) `(PropEq1 m) `(!ReallyLawfulMonadFree (m := m)),
      u₁ m _ _ = u₂ m _ _.

Quickly, let’s get the following lemma out of the way, which says that foldFree commutes with bind. We’re really saying that foldFree is a monad morphism but no time to say it properly. The proof of the next lemma will need this, but it’s also nice to look at this on its own.

f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b: Type
u: Free f a
k: a -> Free f b

foldFree (bindFree u k) = bind (foldFree u) (fun x : a => foldFree (k x))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b: Type
u: Free f a
k: a -> Free f b

foldFree (bindFree u k) = bind (foldFree u) (fun x : a => foldFree (k x))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b: Type
a0: a
k: a -> Free f b

foldFree (k a0) = bind (pure a0) (fun x : a => foldFree (k x))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b, e: Type
f0: f e
f1: e -> Free f a
k: a -> Free f b
H2: forall e : e, foldFree (bindFree (f1 e) k) = bind (foldFree (f1 e)) (fun x : a => foldFree (k x))
bind (free f0) (fun x : e => foldFree (bindFree (f1 x) k)) = bind (bind (free f0) (fun x : e => foldFree (f1 x))) (fun x : a => foldFree (k x))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b: Type
a0: a
k: a -> Free f b

foldFree (k a0) = bind (pure a0) (fun x : a => foldFree (k x))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b: Type
a0: a
k: a -> Free f b

bind (pure a0) (fun x : a => foldFree (k x)) = foldFree (k a0)
apply pure_bind with (k0 := fun x => foldFree (k x)).
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b, e: Type
f0: f e
f1: e -> Free f a
k: a -> Free f b
H2: forall e : e, foldFree (bindFree (f1 e) k) = bind (foldFree (f1 e)) (fun x : a => foldFree (k x))

bind (free f0) (fun x : e => foldFree (bindFree (f1 x) k)) = bind (bind (free f0) (fun x : e => foldFree (f1 x))) (fun x : a => foldFree (k x))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b, e: Type
f0: f e
f1: e -> Free f a
k: a -> Free f b
H2: forall e : e, foldFree (bindFree (f1 e) k) = bind (foldFree (f1 e)) (fun x : a => foldFree (k x))

bind (free f0) (fun x : e => foldFree (bindFree (f1 x) k)) = bind (free f0) (fun x : e => bind ((fun x0 : e => foldFree (f1 x0)) x) (fun x0 : a => foldFree (k x0)))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b, e: Type
f0: f e
f1: e -> Free f a
k: a -> Free f b
H2: forall e : e, foldFree (bindFree (f1 e) k) = bind (foldFree (f1 e)) (fun x : a => foldFree (k x))

free f0 = free f0
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b, e: Type
f0: f e
f1: e -> Free f a
k: a -> Free f b
H2: forall e : e, foldFree (bindFree (f1 e) k) = bind (foldFree (f1 e)) (fun x : a => foldFree (k x))
forall x : e, foldFree (bindFree (f1 x) k) = bind (foldFree (f1 x)) (fun x0 : a => foldFree (k x0))
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b, e: Type
f0: f e
f1: e -> Free f a
k: a -> Free f b
H2: forall e : e, foldFree (bindFree (f1 e) k) = bind (foldFree (f1 e)) (fun x : a => foldFree (k x))

free f0 = free f0
reflexivity.
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: forall a : Type, PropEq (m a)
LawfulMonad0: LawfulMonad
a, b, e: Type
f0: f e
f1: e -> Free f a
k: a -> Free f b
H2: forall e : e, foldFree (bindFree (f1 e) k) = bind (foldFree (f1 e)) (fun x : a => foldFree (k x))

forall x : e, foldFree (bindFree (f1 x) k) = bind (foldFree (f1 x)) (fun x0 : a => foldFree (k x0))
auto. Qed.

Finally the proof

Our goal is to prove an equation in terms of eq_Free', which gives us a really lawful monad as an assumption. We open a section to set up the same context as that and to break down the proof into more digestible pieces.

Section ISOPROOF.

Context {f m} `{MonadFree f m} `{PropEq1 m} `{!ReallyLawfulMonad (m := m)}.

As outlined earlier, parametricity will yield an assumption RFree' _ _ u u, and we will specialize it with a relation R which relates u1 : Free f a and u2 : m a when foldFree u1 = u2. However, RFree' actually expects a relation transformer rather than a relation, so we instead define R to relate u1 : Free f a1 and u2 : m a2 when propeq1 Ra (foldFree u1) u2, where Ra is a relation given between a1 and a2.

Let R := (fun a₁ a₂ (Ra : a₁ -> a₂ -> Prop) u₁ u₂ => propeq1 Ra (foldFree u₁) u₂).

The following two lemmas are the “$CERTAIN_CONDITIONS” mentioned earlier, that R must satisfy, i.e., we prove that R, via RMonad (resp. RMonadFree), relates the Monad (resp. MonadFree) instances for Free f and m.

f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop

RMonad R
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop

RMonad R
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
x₁: t₁
x₂: t₂
H2: Rt x₁ x₂

R Rt (pure x₁) (pure x₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)
R Ru (bind x₁ k₁) (bind x₂ k₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
x₁: t₁
x₂: t₂
H2: Rt x₁ x₂

R Rt (pure x₁) (pure x₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
x₁: t₁
x₂: t₂
H2: Rt x₁ x₂

R Rt (Pure x₁) (pure x₂)
apply RMonad_RLM; auto.
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)

R Ru (bind x₁ k₁) (bind x₂ k₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)

propeq1 Ru (foldFree (bind x₁ k₁)) (bind x₂ k₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)

foldFree (bind x₁ k₁) = ?x₁'
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)
propeq1 Ru ?x₁' ?x₂
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)
?x₂ = bind x₂ k₂
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)

foldFree (bind x₁ k₁) = ?x₁'
apply foldFree_bindFree.
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)

propeq1 Ru (bind (foldFree x₁) (fun x : t₁ => foldFree (k₁ x))) ?x₂
eapply RMonad_RLM; eauto.
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
t₁, t₂: Type
Rt: t₁ -> t₂ -> Prop
u₁, u₂: Type
Ru: u₁ -> u₂ -> Prop
x₁: Free f t₁
x₂: m t₂
H2: R Rt x₁ x₂
k₁: t₁ -> Free f u₁
k₂: t₂ -> m u₂
H3: forall (x₁ : t₁) (x₂ : t₂), Rt x₁ x₂ -> R Ru (k₁ x₁) (k₂ x₂)

bind x₂ k₂ = bind x₂ k₂
reflexivity. Qed. Context (Rf : PropEq1 f). Context (RMonadFree_m : RMonadFree propeq1 propeq1).
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1

RMonadFree Rf R
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1

RMonadFree Rf R
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

R Ra (free x₁) (free x₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

R Ra (free x₁) (free x₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

propeq1 Ra (foldFree (free x₁)) (free x₂)
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

foldFree (free x₁) = ?x₁'
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂
propeq1 Ra ?x₁' ?x₂
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂
?x₂ = free x₂
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

foldFree (free x₁) = ?x₁'
apply bind_pure.
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

propeq1 Ra (free x₁) ?x₂
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

propeq1 Ra x₁ ?Goal0
eassumption.
f, m: Type -> Type
H: Monad m
H0: MonadFree f m
H1: PropEq1 m
ReallyLawfulMonad0: ReallyLawfulMonad
R:= fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂: forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> Free f a₁ -> m a₂ -> Prop
Rf: PropEq1 f
RMonadFree_m: RMonadFree propeq1 propeq1
a₁, a₂: Type
Ra: a₁ -> a₂ -> Prop
x₁: f a₁
x₂: f a₂
H2: Rf Ra x₁ x₂

free x₂ = free x₂
reflexivity. Qed. End ISOPROOF.

Here comes the conclusion, which completes our claim that toFree'/fromFree' is an isomorphism (we proved the other half to_from on the way here). This equation is under an assumption which parametricity promises to fulfill, but we will have to step out of the system if we want it right now.

f: Type -> Type
Rf: PropEq1 f
a: Type
u: Free' f a

RFree' Rf eq u u -> toFree' (fromFree' u) = u

In the proof, we get the assumption H : RFree' Rf eq u u, which we apply to the above lemmas, RMonad_foldFree and RMonadFree_foldFree, using the specialize tactic. That yields exactly our desired goal.

In context

f: Type -> Type
Rf: PropEq1 f
a: Type
u: Free' f a
H: RFree' Rf eq u u
m: Type -> Type
H0: Monad m
H1: MonadFree f m
H2: PropEq1 m
ReallyLawfulMonadFree0: ReallyLawfulMonadFree

the goal unfolds from

toFree' (fromFree' u) H1 = u m H0 H1

to

foldFree (u (Free f) Monad_Free MonadFree_Free) = u m H0 H1

Unfolding RFree' in H exposes the parametricity statement

H: forall (m₁ m₂ : Type -> Type) (H : Monad m₁) (H0 : MonadFree f m₁) (H1 : Monad m₂) (H2 : MonadFree f m₂) (Rm : forall a₁ a₂ : Type, (a₁ -> a₂ -> Prop) -> m₁ a₁ -> m₂ a₂ -> Prop), RMonad Rm -> RMonadFree Rf Rm -> Rm a a eq (u m₁ H H0) (u m₂ H1 H2)

and specializing it to RMonad_foldFree and RMonadFree_foldFree leaves

H: (fun (a₁ a₂ : Type) (Ra : a₁ -> a₂ -> Prop) (u₁ : Free f a₁) (u₂ : m a₂) => propeq1 Ra (foldFree u₁) u₂) a a eq (u (Free f) Monad_Free MonadFree_Free) (u m H0 H1)

We finish with

apply H. Qed.

Conclusion

If you managed to hang on so far, treat yourself to some chocolate.

To formalize a parametricity argument in Coq, I had to move the goalposts quite a bit throughout the experiment:

  • Choose a more termination-friendly encoding of recursive types.
  • Relativize equality.
  • Mess around with heterogeneous relations without crashing into UIP.
  • Reinvent the definition of a monad, again.
  • Come to terms with the externality of parametricity.

It could be interesting to see a “really lawful monad” spelled out fully.

Another similar but simpler exercise is to prove the equivalence between initial and final encodings of lists. It probably wouldn’t involve “relation transformers” as much. There are also at least two different variants: is your final encoding “foldr”- or “fold”-based (the latter mentions monoids, the former doesn’t)?

I hope that machinery can be simplified eventually, but given the technical sophistication that is currently necessary, prudence is advised when navigating around claims made “by parametricity”.



  1. Answering Iceland_Jack’s question on Twitter.↩︎

  2. Also an excuse to integrate Alectryon in my blog.↩︎

  3. That idea is also present in Kiselyov and Ishii’s paper.↩︎

  4. Those who do know Coq will wonder, what about eq (“intensional equality”)? It is a fine default relation for first-order data (nat, pairs, sums, lists, ASTs without HOAS). But it is too strong for computations (functions and coinductive types) and proofs (of Props). Then a common approach is to introduce extensionality axioms, postulating that “extensional equality implies intensional equality”. But you might as well just stop right after proving whatever extensional equality you wanted.↩︎

  5. Well, if you tried you would end up with the unary variant of the parametricity theorem, but it’s much weaker than the binary version shown here. n-ary versions are also possible and even more general, but you have to look hard to find legitimate uses.↩︎

  6. To be honest, that decision was a little arbitrary. But I’m not sure making things more complicated by keeping EqProp1 and EqProp separate buys us very much.↩︎

by Lysxia at October 20, 2021 12:00 AM

October 19, 2021

Chris Smith 2

You’re invited to the October virtual Haskell CoHack

Hi everyone,

This Saturday, I’m once again hosting a virtual Haskell CoHack. In the past, we’ve had a great time with groups here working on various projects, whether it’s learning Haskell, hacking on GHC, writing documentation, making progress on personal projects, or just hanging out to chat with like-minded folk. You should consider coming if you would be excited to meet fellow Haskell programmers and work or learn together with them.

There are details, including times, on the meetup page: https://www.meetup.com/NY-Haskell/events/280998863

by Chris Smith at October 19, 2021 01:59 AM

October 18, 2021

Haskell Foundation blog

Into the Future

by Andrew Boardman

In the previous post I talked about what the Haskell Foundation has been up to for the first seven months, now I want to discuss where we’re heading.

The Haskell Foundation could undertake a vast array of initiatives, so restricting the scope of what we will use our resources on has been a major part of our recent efforts.

Mission

Amplify Haskell’s impact on humanity.

We have selected this mission statement to focus our strategy. It is general enough to make sure we can support the community in the ways we need to, but gives us something to measure proposed strategies against.

Strategic Focus: 2022

Increase the productivity of junior, professional Haskell developers.

For our first major strategy we looked to generate positive feedback loops. Not only do we want to amplify Haskell’s impact on humanity, but we want to improve our community’s ability to make changes that do the same.


This focus translates into a bias for accepting Haskell Foundation Tech Proposals (HFTPs) that relate to tooling enhancements that make junior, professional Haskellers more productive. We will also create HFTPs as we find compelling use cases.

That does not mean that HFTPs need to be specific to junior, professional Haskellers. We chose this focus because we believe it will generate a broad array of additional improvements that benefit the entire community, while also ensuring that we get a specific set of improvements fully finished. Therefore, as we review proposals, we will prioritize those whose impact on junior, professional Haskellers is clear, but having a broader impact will be a plus.

Emily Pillmore and the Haskell Foundation Technical Track (HFTT) are responsible both for evaluating proposals from the community and for incubating ones where appropriate.

Tooling

I just read the phrase “algorithmic attention arms race” by Mark Xu Neyer, and it sums up the state of our world in a really elegant way. The best way to break out of the shackles of consuming culture is to make, and the most leveraged way to make is to produce better tools, to make making faster, easier, and more efficient.


If you caught my Haskell Love talk, you know that I feel passionately about developer tools, the developer experience, and the impact that has on our ability to make an impact in the larger world. To recap the ideas I presented there:

  • Polished, functional, professional tooling allows developers to work at a higher abstraction level; the details they would otherwise have to juggle in their brains can be trivially accessible in their IDE.
  • A tricky part of learning Haskell is understanding how the concepts fit together, and how they translate into the Haskell run time. An IDE that simulates the runtime and shows developers how their code translates into a running system not only helps programmers be more productive, it helps them learn the language better and faster.
  • Making the language (and runtime) easier and faster to learn, understand, and debug, addresses some of the top reasons why Haskell’s reputation for production work can be rough.
  • A truly interactive, simulating IDE experience also takes care of an issue for bigger projects: the edit -> compile -> run -> test loop needs to be instantaneous from the developer’s perspective for maximum productivity. We must not allow our tools to interrupt developer flow.

The Haskell language deserves an integrated development environment that takes advantage of its pure, lazy, strongly typed, functional programming fundamentals and provides an experience that delivers an order of magnitude improvement in productivity.

Junior, Professional Haskellers

We believe the sweet spot for focusing on tooling improvements is the workflow of professional Haskellers who are at the beginner to intermediate level.

  • Hiring is both a strength and a weakness of our community. The self-selection bias of learning and sticking with Haskell gives a rich talent pool, but smaller than many languages.
  • Senior Haskellers are rare, and those with experience translating business needs into production quality code rarer still.
  • Managers therefore have their effectiveness gated on their ability to get quality work out of the beginner to intermediate engineers on their team, including their ability to hire them.
  • Shortening the time for Haskellers to become seniors and leads both fills the existing talent gap, as well as makes our community as a whole much stronger.

Next Steps

The exciting new phase of Haskell Foundation operations is to identify, in our various task forces and committees, work that we can do to support our strategy, what resources to allocate to them, and get to work!

If you have an idea for a project that fits, are working on one already, or want to get involved, you can email the HF and we will direct you to the appropriate person or task force!

Train!

by Haskell Foundation at October 18, 2021 09:33 PM

Haskell Foundation September Seven Month Update Extravaganza

by Andrew Boardman

Seven Months!

It is hard to believe Emily Pillmore and I have been running the Haskell Foundation for seven months already. Similar to parenting, it feels like no time has elapsed, but at the same time it went very slowly.

We want to improve and become more effective, so in this monthly update let’s dive into what we’ve done over the last seven months, what we’ve learned, and where we want to go.

Fundraising

Our first challenge was raising funds so we could continue to have a Haskell Foundation, and it certainly took a while before the first check came in under our watch.

Last year, prior to the selection of the Board or the Executive Team, GitHub was the first check to clear; they had stepped up to continue funding GHC work that had previously been supported by Microsoft Research. IOHK came a month later with huge support for the Foundation, followed soon by Well-Typed, Mercury, Flipstone, Tweag, and Obsidian Systems. A month after, in January 2021, EMQ joined that illustrious group of early sponsors.

Fundraising has a long lag time between initial contact and checks clearing, so our next sponsors joined us in June, with Digital Asset joining us at the Monad level, and ExFreight at Applicative. This broke the drought, and we added TripShot in July, HERP in August (both as Functors), and CarbonCloud in September at Applicative.

Welcome to CarbonCloud as our newest Sponsor!

We talked to at least 37 companies at different stages of using Haskell, got tons of wonderful feedback and insight into what they’re doing, what is working, what is not. We converted those conversations into five new sponsors totaling $140,000, have another company that is in the final stage, and two more that are figuring out payment logistics.

Additionally, we were given an in-kind donation by MLabs: 40 hours / month of the amazing Koz Ross’s time to dedicate towards HF projects.

Lessons Learned

Fundraising is an ongoing process, and my primary focus as Executive Director. We are always looking for more companies to talk to and more opportunities to find funding for the Foundation and our initiatives. We are also now talking to existing sponsors about renewals and, where reasonable, increasing their contributions.

We cannot take our foot off the gas and relax; our resources are a fundamental limit to what we can accomplish.

Technical Track

Much of the work we need to do is fundamentally technical in nature, and we have largely been successful. We had some ideas of what we wanted to accomplish and did some deep dives into Backpack and the Windows platform for Haskell early on.

UTF-8 Text

Andrew Lelechenko (aka Bodigrim) had a very focused proposal in mind: switch the internal Text representation from UTF-16 to UTF-8. This had been attempted before, but bikeshedding and arguing had stalled it out.

Bodigrim created his official proposal in May, disabled implicit fusion in Text in June (he found serious performance issues in basic cases while working on the changes), had a PR up for review in August, and merged the PR in September. Amazing and very well received work!


The next steps are PRs to the GHC codebase, and following the changes to downstream dependencies to ensure smooth updates.

Minimal Windows Installer

For a while Haskell support for Windows was a bit… rough. GHC put a lot of effort into fixing that up, but there was (and still is, but less so) spotty support by the tooling surrounding the compiler.

Julian Ospald stepped up to add proper Windows support for ghcup, and after a marathon of work and collaboration with Tamar Christina, Ben Gamari, and others, got it up and running!

  • If you want a maintained system GHC, Tamar’s Chocolatey package takes care of the complexity (but requires Chocolatey).
  • If you prefer to manage your own installation, and want a “system GHC” experience in Windows, you can now use ghcup.
  • For ease of use, managing multiple GHC installations seamlessly, or if you normally use Stack for your projects, Stack takes care of the complexity behind its CLI.

Lessons Learned

We attempted to use the momentum of this project to consolidate the ecosystem on a single Haskell installer, and unfortunately that did not go well. We did learn a lot from the experience, and made changes to how we go about selecting projects, getting community feedback, and how we assist with project management.

Haskell Foundation Tech Proposal Process

Emily Pillmore created a proposal process proposal, with a template to help guide members in the community on what needs to be thought out, decided, and written when proposing Haskell Foundation involvement in technical work.

It makes sure that we’ve thought through many of the issues that gave us problems: making sure the right people are notified, looking at prior art, determining motivation and deliverables, and what resources are needed to enable success.

Haskell Foundation Technical Task Force Elections

The HFTT is a volunteer group that evaluates the proposals. Emily posted a call for applications to join, received excellent results, and selected the new members. Participation in Haskell Foundation volunteer groups by a wide range of community members is crucial for making sure different points of view and perspectives are involved in the decision making, and we encourage everyone to get involved.

Extended Dependency Generation GHC Proposal

There are other proposal processes in our community as well, the most famous being the GHC Proposal Process. HF Board member Hécate Choutri found this gem, particularly given our love and support of the HLS project, and asked the HFTT to rally support for it. We absolutely agreed, and have requested that the GHC team prioritize it (within reason, given how slammed they are with getting releases out the door).

What does that mean? The Haskell Foundation provides support for GHC development (and we’d love to provide more, please donate and sponsor!), so we get some say in how the work is prioritized. We generally leave it to that team and the Steering Committee’s best judgement, but occasionally when we see a priority to help the ecosystem, we let them know what we’d like moved up in the queue.

Dashboard Proposal

Haskell is a very fast language, but occasionally a problem sneaks through CI and testing and ends up in production code. Emily worked with Ben Gamari to draft a proposal to take infrastructure the GHC team had in place, provide better UX, and extend it beyond GHC itself to cover important libraries that are a dependency of a large percentage of Haskell projects.

We want to consistently measure the performance of GHC itself, core libraries, fundamental dependencies, and eventually more. This will allow us to find GHC changes that affect library and application performance, address regressions closer to when the change is made, as well as lock in performance improvements. If you have expertise in DevOps, data visualization, and performance analysis, you can make a big difference here!

Cabal

A hole in our ecosystem had been consistent, inclusive maintainership of the Cabal projects. Emily stepped up to fill this need, did substantial work to update the code base to modern practices and styles, and found people to help maintain the project. This has led to new releases, new maintainers, and plans for future releases and features.

Core Libraries Committee

Similarly, the CLC had become operationally defunct, and once again Emily stepped in. She created a new way of working process, largely similar to the HFTP and the new process for Cabal, found which existing members were still active and able to perform their duties, and held an election process to find new members. Clear expectations have been set with a focus on communication and consistency.

Documentation

The fabulous Hécate Choutri does a fantastic job organizing volunteers around radically improving the documentation in our community. They have rallied efforts around the Haskell.org wiki, improvements in Haddocks, and Haskell School, a community sourced learning resource from the ground up. It has eight contributors, 76 commits (as of this writing), and three languages in development.

If you are passionate about bringing more people into our community faster, join in!

Performance Book

Gil Mizrahi’s Haskell Performance Tuning Book proposal has been submitted and is getting feedback. A critical gap in the knowledge of intermediate level Haskellers is the ability to deeply understand the performance of Haskell applications, how to address issues, how to design performant systems, and how to use the variety of tools available to debug issues.

This proposal offers a solution: a community sourced and maintained online book with the cumulative knowledge of Haskell performance experts, so that set of skills can be widely available and accessible. No matter what your level of expertise, you can help make this a reality!

Matchmaker

Matchmaker is a project to address the issue of volunteers who have time and expertise but don’t know what projects need the help they can provide, and the maintainers who need that help and can’t find the volunteers. If you’re looking for a way to help out, here is a project that would help you help others in your exact situation!

Community

Haskell Interlude Podcast

Niki Vazou, Joachim Breitner, Andres Löh, Alejandro Serrano, and Wouter Swierstra proposed a long form podcast to interview guests about Haskell related topics. Originally they released a teaser episode, introducing the hosts, followed by Episode 1, an interview with Emily Pillmore. Episode 2 is out today, featuring Lennart Augustsson!

Code of Conduct

We started with the Guidelines for Respectful Communication as a foundation for how we wish interactions within the community to be conducted. We have determined that the GRC is necessary but not complete, and are working with the Rust Foundation to standardize on a Code of Conduct that would ideally be shared between the two communities.

This is meant to augment and enhance the GRC, not replace it. If you would like to be involved in making our community a friendlier, more inclusive place, please join our Slack and let us know in the #community-track channel.

Affiliations

Affiliation at this time involves adopting the GRC, but we’re also discussing HF endorsement of open source projects depending on the level of support the maintainers are willing to sign up for.

As an example, we could have multiple tiers:

* Core
* Security fixes turned around within 24 or 48 hours
* Stability guarantees, support for the last N releases of GHC
* Maintainership line of succession
* Code of Conduct / GRC adoption

Each level would include all the requirements below it, and would give people choosing which libraries and tools to adopt better information about the state of that project and whether they feel comfortable using it in production given their project’s needs.

Current Affiliated Projects and Teams

Haskell IDE Team
GHC Steering Committee
Clash Lang
Haskell Weekly
Core Libraries Committee
Haskell Love
Zurihac
Haskell.org Committee
IHP (Integrated Haskell Platform)
Stack
Stackage

Our Foundation Task Force

Chris Smith and Matthias Toepp have created a new task force aiming to help the community feel ownership of the Haskell Foundation. It is Our Foundation, not theirs (or mine). Their first initiatives are increasing the number of individuals donating to the Foundation, and a grant program to steer Foundation funds to the community. Please consider applying to be part of the task force!

Haskell Teachers’ Forum

A very early stage idea to bring together educators teaching Haskell at all levels. We want to share materials, best practices, and ideas. Send me an email if you’re interested in being involved!

State of Haskell Survey

Taylor Fausak has been the steward of the survey since 2017, and we’re discussing how to use HF resources to understand our community better. We would love more and better data about the state of our community, both so we can measure progress, as well as make more informed decisions.

Zurihac

On June 19th Emily gave a Haskell Foundation status update talk at Zurihac.

Haskell Love

My Haskell Love talk was on September 10th, where I talked about how our tooling is the linchpin for unlocking the potential of Haskell to solve real world problems and be the future of software engineering.

Office Hours

On the first Monday of each month, we host Haskell Foundation Office Hours at 16:00 UTC (9am US west coast) on Andrew’s Twitch channel. We have run two so far; the next one will be on October 4th. It is a ton of fun and we talk about a variety of topics, so please bring your questions and feedback!

Misc.

There are operational details that need to be taken care of, which was particularly true when we had just started. Benefits, payroll, legal forms, accounting, and so on. Big thanks to the HF Board Treasurer, Ryan Trinkle, for helping us navigate all of this and making sure we get paid, the HF gets paid, and we have all of the legalities settled.

Updates

  • 2021–10–05: Added Stack and Stackage to the list of Affiliated projects.

by Haskell Foundation at October 18, 2021 09:32 PM

Brent Yorgey

Competitive programming in Haskell: BFS, part 2 (alternative APIs)

In my last post, I showed how we can solve Modulo Solitaire (and hopefully other BFS problems as well) using a certain API for BFS, which returns two functions: one, level :: v -> Maybe Int, gives the level of each vertex (i.e. the length of a shortest path to it), and the other, parent :: v -> Maybe v, gives the parent of each vertex in the BFS forest. Before showing an implementation, I wanted to talk a bit more about this API and why I chose it.

In particular, Andrey Mokhov left a comment on my previous post with some alternative APIs:

bfsForest :: Ord a => [a] -> AdjacencyMap a -> Forest a
bfs :: Ord a => [a] -> AdjacencyMap a -> [[a]]

Of course, as Andrey notes, AdjacencyMap is actually a reified graph data structure, which we don’t want here, but that’s not essential; presumably the AdjacencyMap arguments in Andrey’s functions could easily be replaced by an implicit graph description instead. (Note that an API requiring an implicit representation is strictly more powerful, since if you have an explicit representation you can always just pass in a function which does lookups into your explicit representation.) However, Andrey raises a good point. Both these APIs return information which is not immediately available from my API.

  • bfsForest returns an actual forest we can traverse, giving the children of each node. My API only returns a parent function which gives the parent of each node. These contain equivalent information, however, and we can convert back and forth efficiently (where by “efficiently” in this context I mean “in O(n \lg n) time or better”) as long as we have a list of all vertices. To convert from a Forest to a parent function, just traverse the forest and remember all the parent-child pairs we see, building e.g. a Map that can be used for lookup. To convert back, first iterate over the list of all vertices, find the parent of each, and build an inverse mapping from parents to sets of children. If we want to proceed to building an actual Forest data structure, we can unfold one via repeated lookups into our child mapping. (A rough sketch of these conversions appears after this list.)

    However, I would argue that in typical applications, having the parent function is more useful than having a Forest. For example, the parent function allows us to efficiently answer common, classic queries such as “Is vertex v reachable from vertex s?” and “What is a shortest path from s to v?” Answering these questions with a Forest would require traversing the entire Forest to look for the target vertex v.

  • bfs returns a list of levels: that is, the first list is the starting vertices, the next list is all vertices one step away from any starting vertex, the next list is all vertices two steps away, and so on. Again, given a list of all vertices, we can recover a list of levels from the level function: just traverse the list of all vertices, looking up the level of each and adding it to an appropriate mapping from levels to sets of vertices. Converting in the other direction is easy as well.

    A level list lets us efficiently answer queries such as “how many vertices are exactly 5 steps away from s?”, whereas with the level function we can efficiently answer queries such as “What is the length of a shortest path from s to v?” In practice, the latter form of query seems more common.
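
To make those conversions concrete, here is a rough sketch of my own (not from the post) of turning a Data.Tree Forest into a parent-lookup function, and of grouping vertices by level given the level function and a list of all vertices:

import           Data.Tree (Forest, Tree (..))
import qualified Data.Map as M

-- Turn a BFS forest into a parent-lookup function by recording every
-- parent/child edge we see while traversing the forest.
forestToParent :: Ord v => Forest v -> (v -> Maybe v)
forestToParent f = \v -> M.lookup v childToParent
  where
    childToParent = M.fromList [ (c, p) | t <- f, (p, c) <- edges t ]
    edges (Node p cs) = [ (p, rootLabel c) | c <- cs ] ++ concatMap edges cs

-- Group vertices by their BFS level; vertices the search never reached
-- (level = Nothing) are simply left out.
levelSets :: Ord v => (v -> Maybe Int) -> [v] -> M.Map Int [v]
levelSets lvl vs = M.fromListWith (++) [ (l, [v]) | v <- vs, Just l <- [lvl v] ]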

In the final version of this BFS API, I will probably include some functions to recover forests and level sets as described above. Some benchmarking will be needed to see whether it’s more efficient to recover them after the fact or to actually keep track of them along the way.

by Brent at October 18, 2021 07:48 PM

Monday Morning Haskell

Using IO without the IO Monad!


(This post is also available as a YouTube video!)

In last week's article, I explained what effects really are in the context of Haskell and why Haskell's structures for dealing with effects are really cool and distinguish it from other programming languages.

Essentially, Haskell's type system allows us to set apart areas of our code that might require a certain effect from those that don't. A function within a particular monad can typically use a certain effect. Otherwise, it can't. And we can validate this at compile time.

But there seems to be a problem with this. So many of Haskell's effects fall under the umbrella of the IO monad, whether that's printing to the terminal, reading from the file system, using threads and concurrency, connecting over the network, or even creating a new random number generator.

putStrLn :: String -> IO ()
readFile :: FilePath -> IO String
readMVar :: MVar a -> IO a
httpJSON :: (MonadIO m, FromJSON a) => Request -> m (Response a)
getStdGen :: MonadIO m => m StdGen

Now I'm not going to tell you "oh just re-write your program so you don't need as much IO." These activities are essential to many programs. And often, they have to be spread throughout your code.

But the IO monad is essentially limitless in its abilities. If your whole program uses the IO monad, you essentially don't have any of the guarantees that we'd like to have about limiting side effects. If you need any kind of IO, it seems like you have to allow all sorts of IO.

But this doesn't have to be the case. In this article we're going to demonstrate how we can get limited IO effects within our function. That is, we'll write our type signature to allow a couple specific IO actions, without opening the door to all kinds of craziness. Let's see how this works.

An Example Game

Throughout this article we're going to be using this Nim game example I made. You can see all the code in Game.hs.

Our starting point for this article is the instances branch.

The ending point is the monad-class branch.

You can take a look at this pull request to see all the changes we're going to make in this article!

This program is a simple command line game where players are adding numbers to a sum and want to be the one to get to exactly 100. But there are some restrictions. You can't add more than 10, or add a negative number, or add too much to put it over 100. So if we try to do that we get some of these helpful error messages. And then when someone wins, we see who that is.

Our Monad

Now there's not a whole lot of code to this game. There are just a handful of functions, and they mostly live in this GameMonad we created. The "Game Monad" keeps track of the game state (a tuple of the current player and current sum value) using the State monad. Then it also uses the IO monad below that, which we need to receive user input and print all those messages we were seeing.

newtype GameMonad a = GameMonad
  { gameAction :: StateT (Player, Int) IO a
  } deriving (Functor, Applicative, Monad)

We have a couple of instances, MonadState and MonadIO, for our GameMonad to make our code a bit simpler.

instance MonadIO GameMonad where
  liftIO action = GameMonad (lift action)

instance MonadState (Player, Int) GameMonad where
  get = GameMonad get
  put = GameMonad . put

Now the drawback here, as we talked about before, is that all these GameMonad functions can do arbitrary IO. We just do liftIO and suddenly we can go ahead and read a random file if we want.

playGame :: GameMonad Player
playGame = do
  promptPlayer
  input <- readInput
  validateResult <- validateMove input
  case validateResult of
    Nothing -> playGame
    Just i -> do
      -- Nothing to stop this!
      readResult <- liftIO $ readFile "input.txt"
      ...

Making Our Own Class

But we can change this with just a few lines of code. We'll start by creating our own typeclass. This class will be called MonadTerminal. It will have two functions for interacting with the terminal. First, logMessage, that will take a string and return nothing. And then getInputLine, that will return a string.

class MonadTerminal m where
  logMessage :: String -> m ()
  getInputLine :: m String

How do we use this class? Well we have to make a concrete instance for it. So let's make an instance for our GameMonad. This will just use liftIO and run normal IO actions like putStrLn and getLine.

instance MonadTerminal GameMonad where
  logMessage = liftIO . putStrLn
  getInputLine = liftIO getLine

Constraining Functions

At this point, we can get rid of the old logMessage function, since the typeclass uses that name now. Next, let's think about the readInput expression.

readInput :: GameMonad String
readInput = liftIO getLine

It uses liftIO and getLine right now. But this is exactly the same definition we used in our MonadTerminal instance. So let's just replace this with the getInputLine class function.

readInput :: GameMonad String
readInput = getInputLine

Now let's observe that this function no longer needs to be in the GameMonad! We can instead use any monad m that satisfies the MonadTerminal constraint. Since the GameMonad does this already, there's no effect on our code!

readInput :: (MonadTerminal m) => m String
readInput = getInputLine

Now we can do the same thing with the other two functions. They call logMessage and readInput, so they require MonadTerminal. And they call get and put on the game state, so they need the MonadState constraint. But after doing that, we can remove GameMonad from the type signatures.

validateMove :: (MonadTerminal m, MonadState (Player, Int) m) => String -> m (Maybe Int)
...

promptPlayer :: (MonadTerminal m, MonadState (Player, Int) m) => m ()
...

And now these functions can no longer use arbitrary IO! They're still using the true IO effects we wrote above, but since MonadIO and GameMonad aren't in the type signature, we can't just call liftIO and do a file read.

Of course, the GameMonad itself still has IO on its monad stack. That's the only way we can make a concrete implementation for our MonadTerminal class that actually does IO!

But the actual functions in our game don't necessarily use the GameMonad anymore! They can use any monad that satisfies these two classes. And it's technically possible to write instances of these classes that don't use IO. So the functions can't use arbitrary IO functionality! This has a few different implications, but it especially gives us more confidence in the limitations of what these functions do, which as a reminder, is considered a good thing in Haskell! And it also allows us to test them more easily.
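
As an illustration, here is a rough sketch of my own (not something from the article) of a pure MonadTerminal instance backed by State: input lines come from a script and log messages are collected in a list, so functions constrained only by MonadTerminal can be exercised without any real IO:

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.State

-- Scripted terminal: the state is (remaining input lines, logged messages).
newtype TestMonad a = TestMonad
  { runTestMonad :: State ([String], [String]) a
  } deriving (Functor, Applicative, Monad)

instance MonadTerminal TestMonad where
  logMessage msg = TestMonad $
    modify (\(inputs, logs) -> (inputs, logs ++ [msg]))
  getInputLine = TestMonad $ do
    (inputs, logs) <- get
    case inputs of
      []         -> pure ""                 -- scripted input exhausted
      (i : rest) -> put (rest, logs) >> pure i

With this, running readInput should be a matter of evaluating runState (runTestMonad readInput) (["42"], []), which would return ("42", ([], [])).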

Conclusion: Effectful Haskell

Hopefully you think at least that this is a cool idea. But maybe you're thinking "Woah, this is totally game changing!" If you want to learn more about Haskell's effect structures, I have an offer for you!

If you head to this page you'll learn about our Effectful Haskell course. This course will give you hands-on experience working with the ideas from this video on a small but multi-functional application. The course starts with learning the different layers of Haskell's effect structures, and it ends with launching this application on the internet.

It's really cool, and if you've read this long, I think you'll enjoy it, so take a look! As a bonus, if you subscribe to Monday Morning Haskell, you can get a code for 20% off on this or any of our courses!

by James Bowen at October 18, 2021 02:30 PM

October 16, 2021

Mark Jason Dominus

Who is the namesake of the old Hungarian name for the month of June?

[ Previously ]

One oddity about the old-style Hungarian month names that I do not have time to investigate is that the old name of June was Szent Iván hava, Saint Ivan's month, or possibly Saint John's month. “John” in Hungarian is not Iván; it is Ján or János, at least at present. Would a Hungarian understand Szent Iván as a recognizably foreign name? I wonder which saint this actually is.

There is a Saint Ivan of Rila, but he is Bulgarian, so that could be a coincidence. Hungarian Wikipedia strangely does not seem to have an article about the most likely candidate, St. John the Evangelist, so I could not check if that John is known there as Ján or Iván or something else.

It does have an article about John the Baptist, who in Hungarian called Keresztelő János, not Iván. But that article links to the page about St. John the Baptist's Eve, which is titled Szent Iván éjszakája.

Further complicating the whole matter of Szent Iván hava is the issue that there have always been many Slavs in Hungary. The original birth record page is marked “KULA, HUNGARY”, which if correct puts it in what is now called Kula, Serbia — Hungary used to be bigger than it is now. Still, why would the name of the month be in Slavic and not Magyar?

Do any of my Gentle Readers understand what is going on here?

[ Addendum: The English Wikipedia page for the Bulgarian Saint Ivan of Rila gives the Bulgarian-language version of his name. It's not written as Иван (“Ivan”) but as Йоан (“John”). The Bulgarian Wikipedia article about him is titled Иван Рилски (“Ivan Rilski”) but in the first line and the header of the infobox, his name is given instead as Йоан. I do not understand the degree to which John and Ivan are or are not interchangeable in this context. ]

[ Addendum 20211018: Michael Lugo points out that it is unlikely to be Ivan of Rila, whose feast day is in October. M. Lugo is right. This also argues strongly against the namesake of Szent Iván hava being John the Evangelist, as his feast is in December. The feast of John the Baptist is in June, so the Szent Iván of Szent Iván hava is probably John the Baptist, same as in Szent Iván éjszakája. I would still like to know why it is Szent Iván and not Szent János though. ]

by Mark Dominus (mjd@plover.com) at October 16, 2021 11:01 PM

Sandy Maguire

Proving Equivalence of Polysemy Interpreters

Let’s talk more about polysemy-check. Last week we looked at how to do property-testing for a polysemy effects’ laws. Today, we’ll investigate how to show that two interpretations are equivalent.

To continue with last week’s example, let’s say we have an effect that corresponds to having a Stack that we can push and pop:

data Stack s m a where
  Push      :: s -> Stack s m ()
  Pop       :: Stack s m (Maybe s)
  RemoveAll :: Stack s m ()
  Size      :: Stack s m Int

deriving instance Show s => Show (Stack s m a)
deriveGenericK ''Stack

makeSem ''Stack

Since we’d like to prove the equivalence of two interpretations, we’ll need to first write two interpretations. But, to illustrate, we’re going to simulate multiple interpreters via a single interpretation, parameterized by which bugs should be present in it.

For purposes of brevity, we’ll write a single interpretation of Stack s in terms of State [s], and then interpret that in two different ways. In essence, what we’re really testing here is the equivalence of two State interpretations, but it’s good enough for an example.

We’ll start with the bugs:

data Bug
  = PushTwice
  | DontRemove
  deriving stock (Eq, Ord, Show, Enum, Bounded)

instance Arbitrary Bug where
  arbitrary = elements [minBound..maxBound]

hasBug :: [Bug] -> Bug -> Bool
hasBug = flip elem

The PushTwice bug, as you might expect, dispatches a Push command so that it pushes twice onto the stack. The DontRemove bug causes RemoveAll to be a no-op. Armed with our bugs, we can write a little interpreter for Stack that translates Stack s commands into State [s] commands, and then immediately runs the State effect:

runStack
    :: [Bug]
    -> Sem (Stack s ': r) a
    -> Sem r ([s], a)
runStack bugs =
  (runState [] .) $ reinterpret $ \case
    Push s -> do
      modify (s :)
      when (hasBug bugs PushTwice) $
        modify (s :)

    Pop -> do
      r <- gets listToMaybe
      modify (drop 1)
      pure r

    RemoveAll ->
      unless (hasBug bugs DontRemove) $
        put []

    Size ->
      gets length
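
As a quick sanity check, here is a small usage sketch of my own (not from the post), assuming the definitions above plus polysemy’s run are in scope, showing how the PushTwice bug changes observable results:

-- Without bugs: push 1, push 2, pop returns the 2 and leaves [1] behind.
wellBehaved :: ([Int], Maybe Int)
wellBehaved = run . runStack [] $ push 1 >> push (2 :: Int) >> pop
-- ==> ([1], Just 2)

-- With PushTwice: a single push lands twice, so pop still leaves one copy.
buggy :: ([Int], Maybe Int)
buggy = run . runStack [PushTwice] $ push (1 :: Int) >> pop
-- ==> ([1], Just 1)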

For our efforts we are rewarded: runStack gives rise to four interpreters for the price of one, one for each subset of bugs. With these interpreters out of the way, it’s time to ask whether or not they are equivalent. Which is to say, do they get the same answer for every possible program? Enter prepropEquivalent:

prepropEquivalent
    :: forall effs r1 r2 f
     . ( forall a. Show a => Show (f a)
       , forall a. Eq a => Eq (f a)
       )
    => ( Inject effs r1
       , Inject effs r2
       , Arbitrary (Sem effs Int)
       )
    => (forall a. Sem r1 a -> IO (f a))
    -> (forall a. Sem r2 a -> IO (f a))
    -> Property

All of the functions in polysemy-check have fun type signatures like this one. But despite the preponderance of foralls, it’s not as terrible as you might think. The first ten lines here are just constraints. There are only two arguments to prepropEquivalent, and they are the two interpreters you’d like to test.

This type is crazy, and it will be beneficial to understand it. There are four type variables, three of which are effect rows. We can distinguish between them:

  • effs: The effect(s) you’re interested in testing. In our case, our interpreter handles Stack s, so we let effs ~ Stack s.
  • r1: The effects handled by interpreter 1. Imagine we had an interpreter for Stack s that ran it via IO instead. In that case, r1 ~ '[State s, Embed IO].
  • r2: The effects handled by interpreter 2.

The relationships that must hold between effs, r1 and r2 are effs ⊂ r1 and effs ⊂ r2. When running prepropEquivalent, you must type-apply effs, because Haskell isn’t smart enough to figure it out for itself.

The other type variable to prepropEquivalent is f, which allows us to capture the “resulting state” of an interpreter. In runStack :: [Bug] -> Sem (Stack s ': r) a -> Sem r ([s], a), you’ll notice we transform a program returning a into one returning ([s], a), and thus f ~ (,) [s]. If your interpreter doesn’t produce any resulting state, feel free to let f ~ Identity.

We’re finally ready to test our interpreters! For any equivalence relationship, we should expect something to be equivalent to itself. And this is true regardless of which bugs we enable:

prop_reflexive :: Property
prop_reflexive = do
  bugs <- arbitrary
  pure $
    prepropEquivalent @'[Stack Int]
      (pure . run . runStack bugs)  -- pure is getting us into IO
      (pure . run . runStack bugs)

So what’s happening here? Internally, prepropEquivalent is generating random programs of type Sem '[Stack Int] Int, and lifting that into Sem r1 Int and Sem r2 Int, and then running both interpreters and ensuring the result is the same for every program. Note that this means any fundamental non-determinism in your interpretation will break the test! Make sure to use appropriate interpreters for things like clocks and random values!

To strengthen our belief in prepropEquivalent, we can also check that runStack is not equivalent to itself if different bugs are enabled:

prop_bugsNotEquivalent :: Property
prop_bugsNotEquivalent =
  expectFailure $
    prepropEquivalent @'[Stack Int]
      (pure . run . runStack [PushTwice])
      (pure . run . runStack [])

Running this test will give us output like:

+++ OK, failed as expected. Falsified (after 3 tests):
([0,0],1) /= ([0],1)

The counterexample here isn’t particularly helpful (I haven’t yet figured out how to show the generated program that fails), but you can get a hint here by noticing that the stack (the [0,0]) is twice as big in the first result as in the second.

Importantly, by specifying @'[Stack Int] when calling prepropEquivalent, we are guaranteed that the generated program will only use actions from Stack Int, so it’s not too hard to track down. This is another win for polysemy in my book — that we can isolate bugs with this level of granularity, even if we can’t yet perfectly point to them.

All of today’s code (and more!) is available as a test in polysemy-check, if you’d like to play around with it. But that’s all for now. Next week we’ll investigate how to use polysemy-check to ensure that the composition of your effects themselves is meaningful. Until then!

October 16, 2021 12:06 PM

October 15, 2021

Mark Jason Dominus

Calendars change too

I had a conversation with a co-worker about the origin of his name, in which he showed me the original Hungarian birth record of one of his ancestors. (The sample below does not include the line with that record.)

The top third of a page from a 19th-century birth register.  The page is ruled into six labeled columns, and under this three entries are written in cursive, in Hungarian.  A notation, in the upper left, is written in print capitals “KULA, HUNGARY”. The scan is high-resolution, but smudgy and speckled.

I had a fun time digging through this to figure out what it said. As you see, the scan quality is not good. The person writing the records has good handwriting, but not perfect, and not all the letters are easy to make out. (The penmanship of the second and third lines is noticeably better than the first.) Hungarian has letters ö and ő, which are different, but hard to distinguish when written longhand. The printed text is in a small font and is somewhat smudged. For example, what does this say?

A little square box that reads something like “A' fel adott kéreszt- ségnek”

Is the first letter in the second line an ‘f’ or a long ‘s’? Is “A’” an abbreviation, and if so what for? Is that a diacritical mark over the ‘e’ in the third line, or just a smudge? Except for the last, I don't know, but kereztségnek is something about baptism, maybe a dative form or something, so that column is baptism dates. This resolves one of the puzzles, which is why there are two numbers in the two leftmost columns: one is the birth date and one is the baptism date, and sometimes the baptism was done on a different day. For example, in the third line the child Mátyás (“Matthew”) was born on the 5th, and baptized on the 6th.

But the 6th of what? The box says “1845 / something” and presumably the something is the name of the month.

A little square box with handwritten cursive Hungarian that reads something like “1845 Bójkelo kó 2 2”

But I couldn't quite make it out (Bójkeló kó maybe?) and Google did not find anything to match my several tries. No problem, I can go the other direction: just pull up a list of the names of the months in Hungarian and see which one matches.

That didn't work. The names of the months in Hungarian are pretty much the same as in English (január, február, etc.) and there is nothing like Bójkeló kó. I was stuck.

But then I had a brainwave and asked Google for “old hungarian month names”. Paydirt! In former times, the month of February was called böjt elő hava, (“the month before fast”; hava is “month”) which here is abbreviated to Böjt elő ha’.

So that's what I learned: sometime between 1845 and now, the Hungarians changed the names of the months.

This page at fromhungarywithlove says that these month names were used from the 16th century until “the first third of the 20th century”.

[ Addendum 20211016: A further puzzle: The old name for June was “St. Iván's month”. Who was St. Iván? ]

by Mark Dominus (mjd@plover.com) at October 15, 2021 02:33 PM

October 14, 2021

Brent Yorgey

Competitive programming in Haskell: BFS, part 1

In a previous post, I challenged you to solve Modulo Solitaire. In this problem, we are given a starting number s_0 and are trying to reach 0 in as few moves as possible. At each move, we may pick one of up to 10 different rules (a_i,b_i) that say we can transform s into (a_i s + b_i) \bmod m.

In one sense, this is a straightforward search problem. Conceptually, the numbers 0 through m-1 form the vertices of a graph, with a directed edge from s to t whenever there is some allowed (a_i, b_i) such that t = (a_i s + b_i) \bmod m; we want to do a breadth first search in this graph to find the length of a shortest path from s_0 to 0. However, m can be up to 10^6 and there can be up to 10 rules, giving a total of up to 10^7 edges. In the case that 0 is unreachable, we may have to explore every single edge. So we are going to need a pretty fast implementation; we’ll come back to that later.

Haskell actually has a nice advantage here. This is exactly the kind of problem in which we want to represent the graph implicitly. There is no reason to actually reify the graph in memory as a data structure; it would only waste memory and time. Instead, we can specify the graph implicitly using a function that gives the neighbors of each vertex, which means BFS itself will be a higher-order function. Higher-order functions are very awkward to represent in a language like Java or C++, so when I solve problems like this in Java, I tend to just write the whole BFS from scratch every single time, and I doubt I’m the only one. However, in Haskell we can easily make an abstract interface to BFS which takes a function as input specifying an implicit graph, allowing us to nicely separate out the graph search logic from the task of specifying the graph itself.

What would be my ideal API for BFS in Haskell? I think it might look something like this (but I’m happy to hear suggestions as to how it could be made more useful or general):

data BFSResult v =
  BFSR { level :: v -> Maybe Int, parent :: v -> Maybe v }

bfs ::
  (Ord v, Hashable v) =>
  [v] ->                      -- Starting vertices
  (v -> [v]) ->               -- Neighbors
  (v -> Bool) ->              -- Goal predicate
  BFSResult v

bfs takes a list of vertices to search from (which could be a singleton if there is a single specific starting vertex), a function specifying the out-neighbors of each vertex, and a predicate specifying which vertices are “goal” vertices (so we can stop early if we reach one), and returns a BFSResult record, which tells us the level at which each vertex was encountered, if at all (i.e. how many steps were required to reach it), and the parent of each vertex in the search. If we just want to know whether a vertex was reachable at all, we can see if level returns Just; if we want to know the shortest path to a vertex, we can just iterate parent. Vertices must be Ord and Hashable to facilitate storing them in data structures.
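
For instance, here is a small sketch of my own (not from the post) of how a caller could recover an explicit shortest path to a vertex by iterating parent:

-- Walk parent links back from the target to a start vertex, then reverse,
-- giving a shortest path if the target was reached at all.
pathTo :: BFSResult v -> v -> Maybe [v]
pathTo res v =
  case level res v of
    Nothing -> Nothing
    Just _  -> Just (reverse (go v))
  where
    go u = u : maybe [] go (parent res u)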

Using this API, the solution is pretty short.

main = C.interact $ runScanner tc >>> solve >>> format

data Move = Move { a :: !Int, b :: !Int } deriving (Eq, Show)
data TC = TC { m :: Int, s0 :: Int, moves :: [Move] } deriving (Eq, Show)

tc :: Scanner TC
tc = do
  m <- int
  n <- int
  TC m <$> int <*> n >< (Move <$> int <*> int)

format :: Maybe Int -> ByteString
format = maybe "-1" showB

solve :: TC -> Maybe Int
solve TC{..} = level res 0
  where
    res = bfs [s0] (\v -> map (step v) moves) (==0)
    step v (Move a b) = (a*v + b) `mod` m

We run a BFS from s_0, stopping when we reach 0, and then look up the level of 0 to see the minimum number of steps needed to reach it.

In part 2, I’ll talk about how to implement this API. There are many viable implementation strategies, but the trick is getting it to run fast enough.

by Brent at October 14, 2021 04:09 PM

Gabriel Gonzalez

Advice for aspiring bloggers


I’m writing this post to summarize blogging advice that I’ve shared with multiple people interested in blogging. My advice (and this post) won’t be very coherent, but I hope people will still find this useful.

Also, this advice is targeted towards blogging and not necessarily writing in general. For example, I have 10 years of experience blogging, but less experience with other forms of writing, such as writing books or academic publications.

Motivation

Motivation is everything when it comes to blogging. I believe you should focus on motivation before working on improving anything else about your writing. In particular, if you always force yourself to set aside time to write then (in my opinion) you’re needlessly making things hard on yourself.

Motivation can be found or cultivated. Many new writers start off by finding motivation; inspiration strikes and they feel compelled to share what they learned with others. However, long-term consistent writers learn how to cultivate motivation so that their writing process doesn’t become “feast or famine”.

There is no one-size-fits-all approach to cultivating motivation, because not everybody shares the same motivation for writing. However, the first step is always reflecting upon what motivates you to write, which could be:

  • sharing exciting new things you learn
  • making money
  • evangelizing a new technology or innovation
  • launching or switching to a new career
  • changing the way people think
  • improving your own understanding by teaching others
  • settling a debate or score
  • sorting out your own thoughts

The above list is not comprehensive, and people can blog for more than one reason. For example, I find that I’m most motivated to blog when I have just finished teaching someone something new or arguing with someone. When I conclude these conversations I feel highly inspired to write.

Once you clue in to what motivates you, use that knowledge to cultivate your motivation. For example, if teaching people inspires me then I’ll put myself in positions where I have more opportunities to mentor others, such as becoming an engineering manager, volunteering for Google Summer of Code, or mentoring friends earlier in their careers. Similarly, if arguing with people inspires me then I could hang out on social media with an axe to grind (although I don’t do that as much these days for obvious reasons…).

When inspiration strikes

That doesn’t mean that you should never write when you’re not motivated. I still sometimes write when it doesn’t strike my fancy. Why? Because inspiration doesn’t always strike at a convenient time.

For example, sometimes I will get “hot” to write something in the middle of my workday (such as right after a 1-on-1 conversation) and I have to put a pin in it until I have more free time later.

One of the hardest things about writing is that inspiration doesn’t always strike at convenient times. There are a few ways to deal with this, all of which are valid:

  • Write anyway, despite the inconvenience

    Sometimes writing entails reneging on your obligations and writing anyway. This can happen when you just know the idea has to come out one way or another and it won’t necessarily happen on a convenient schedule.

  • Write later

    Some topics will always inspire you every time you revisit them, so even if your excitement wears off it will come back the next time you revisit the subject.

    For example, sometimes I will start to write about something that I’m not excited about at the moment but I remember I was excited about it before. Then as I start to write everything comes flooding back and I recapture my original excitement.

  • Abandon the idea

    Sometimes you just have to completely give up on writing something.

    I’ve thrown away a lot of writing ideas that I was really attached to because I knew I would never have the time. It happens, it’s sad when it happens, but it’s a harsh reality of life.

    Sometimes “abandon the idea” can become “write later” if I happen to revisit the subject years later at a more opportune time, but I generally try to abandon ideas completely, otherwise they will keep distracting me and do more harm than good.

I personally have done all of the above in roughly equal measure. There is no right answer to which approach is correct and I treat it as a judgment call.

Quantity over quality

One common pattern I see is that new bloggers tend to “over-produce” some of their initial blog posts, especially for ideas they are exceptionally attached to. This is not necessarily a bad thing, but I usually advise against it. You don’t want to put all of your eggs in one basket and you should focus on writing more frequent and less ambitious posts rather than a few overly ambitious posts, especially when starting out.

One reason why is that people tend to be poor judges of their own work, in my experience. Not only do you not know when inspiration will strike, but you will also not know when inspiration has truly struck. There will be some times when you think something you produce is your masterpiece, your magnum opus, and other people are like “meh”. There will be other times when you put out something that feels half-baked or like a shitpost and other people will tell you that it changed their life.

That’s not to say that you shouldn’t focus on quality at all. Quite the opposite: the quality of your writing will improve more quickly if you write more often instead of polishing a few posts to death. You’ll get more frequent feedback from a wider audience if you keep putting your work out there regularly.

Great writing is learning how to build empathy for the reader and you can’t do that if you’re not regularly interacting with your audience. The more they read your work and provide feedback the better your intuition will get for what your audience needs to hear and how your writing will resonate with them. As time goes on your favorite posts will become more likely to succeed, but there will always remain a substantial element of luck to the process.

Constraints

Writing is hard, even for experienced writers like me, because writing is so underconstrained.

Programming is so much easier than writing for me because I get:

  • Tooling support

    … such as an IDE, syntax highlighting or type-checker

  • Fast feedback loop

    For many application domains I can run my code to see if it works or not

  • Clearer demonstration of value

    I can see firsthand that my program actually does what I created it to do

Writing, on the other hand, is orders of magnitude more freeform and nebulous than code. There are so many ways to say or present the exact same idea, because you can vary things like:

  • Choice of words

  • Conceptual approach

  • Sentence / paragraph structure

  • Scope

  • Diagrams / figures

  • Examples

    Oh, don’t get me started on examples. I can spend hours or even days mulling over which example to use that is just right. A LOT of my posts in my drafts have run aground on the choice of example.

There also isn’t a best way to present an idea. One way of explaining things will resonate with some people better than others.

On top of that the feedback loop is sloooooow. Soliciting reviews from others can take days. Or you can publish blind and hope that your own editing process and intuition is good enough.

The way I cope is to add artificial constraints to my writing, especially when first learning to write. I came up with a very opinionated way of structuring everything and saying everything so that I could focus more on what I wanted to say instead of how to say it.

The constraints I created for myself touched upon many of the above freeform aspects of writing. Here are some examples:

  • Choice of words

    I would use a very limited vocabulary for common writing tasks. In fact, I still do in some ways. For example, I still use “For example,” when introducing an example, a writing habit which still lingers to this day.

  • Sentence / paragraph structure

    The Science of Scientific Writing is an excellent resource for how to improve writing structure in order to aid reader comprehension.

  • Diagrams / figures

    I created ASCII diagrams for all of my technical writing. It was extremely low-tech, but it got the job done.

  • Examples

    I had to have three examples. Not two. Not four. Three is the magic number.

In particular, one book stood out as exceptionally helpful in this regard:

The above book provides several useful rules of thumb for writing that new writers can use as constraints to help better focus their writing. You might notice that this post touches only very lightly on the technical aspects of authoring and editing writing, and that’s because all of my advice would boil down to: “go read that book”.

As time went on and I got more comfortable I began to deviate from these rules I had created for myself and then I could more easily find my own “voice” and writing style. However, having those guardrails in place made a big difference to me early on to keep my writing on track.

Stamina

Sometimes you need to write something over an extended period of time, long after you are motivated to do so. Perhaps this is because you are obligated to do so, such as writing a blog post for work.

My trick to sustaining interest in posts like these is to always begin each writing session by editing what I’ve written so far. This often puts me back in the same frame of mind that I had when I first wrote the post and gives me the momentum I need to continue writing.

Editing

Do not underestimate the power of editing your writing! Editing can easily transform a mediocre post into a great post.

However, it’s hard to edit the post after you’re done writing. By that point you’re typically eager to publish to get it off your plate, but you should still take time to edit what you’ve written. My rule of thumb is to sleep on a post at least once and edit in the morning before I publish, but if I have extra stamina then I keep editing each day until I feel like there’s nothing left to edit.

Conclusion

I’d like to conclude this post by acknowledging the blog that inspired me to start blogging:

That blog got me excited about the intersection of mathematics and programming and I’ve been blogging ever since trying to convey the same sense of wonder I got from reading about that.

by Gabriella Gonzalez (noreply@blogger.com) at October 14, 2021 04:01 PM

Well-Typed.Com

Remote Interactive Course on Type-level Programming with GHC

We are offering our “Type-level programming with GHC” course again this autumn. This course is now available to book online on a first come, first served basis. If you want to book a ticket but they have sold out, please sign up to the waiting list, in case one becomes available.

Training course details

This course will be a mixture of lectures, discussions and live coding delivered via Google Meet. The maximum course size is deliberately kept small (up to 10 participants) so that it is still possible to ask and discuss individual questions. The course will be led by Andres Löh, who has more than two decades of Haskell experience and has taught many courses to varied audiences.

Type-level Programming with GHC

An overview of Haskell language extensions designed for type-level programming / how to express more properties of your programs statically

8-10th November 2021, 1930-2230 GMT (3 sessions, each 3 hours)

Other Well-Typed training courses

If you are interested in the format, but not the topic or cannot make the time, feel free to drop us a line with requests for courses on other topics or at other times. We can also do courses remotely or on-site for your company, on the topics you are most interested in and individually tailored to your needs. Check out more detailed information on our training services or just contact us.

by christine, andres at October 14, 2021 12:00 AM

October 13, 2021

Well-Typed.Com

GHC activities report: August-September 2021

This is the eighth edition of our GHC activities report where we describe the work on GHC and related projects that we are doing at Well-Typed. The current edition covers roughly the months of August and September 2021.

You can find the previous editions collected under the ghc-activities-report tag.

A bit of background: One aspect of our work at Well-Typed is to support GHC and the Haskell core infrastructure. Several companies, including IOHK, Facebook, and GitHub via the Haskell Foundation, are providing us with funding to do this work. We are also working with Hasura on better debugging tools. We are very grateful on behalf of the whole Haskell community for the support these companies provide.

If you are interested in also contributing funding to ensure we can continue or even scale up this kind of work, please get in touch.

Of course, GHC is a large community effort, and Well-Typed’s contributions are just a small part of this. This report does not aim to give an exhaustive picture of all GHC work that is ongoing, and there are many fantastic features currently being worked on that are omitted here simply because none of us are currently involved in them in any way. Furthermore, the aspects we do mention are still the work of many people. In many cases, we have just been helping with the last few steps of integration. We are immensely grateful to everyone contributing to GHC. Please keep doing so (or start)!

Team

Currently, Ben Gamari, Andreas Klebinger, Matthew Pickering and Zubin Duggal are working primarily on GHC-related tasks. Sam Derbyshire has just joined the team at the start of October.

Many others within Well-Typed, including Adam Gundry, Alfredo Di Napoli, Alp Mestanogullari, Douglas Wilson and Oleg Grenrus, are contributing to GHC more occasionally.

Haskell Implementors’ Workshop

A few members of our team presented various facets of their work at the Haskell Implementors’ Workshop in late August. More discussion of these presentations can be found in a previous HIW recap post on this blog.

Release management

  • Ben has been handling backports and release planning for the 9.2.1 and 9.0.2 releases.

  • Zubin fixed some bugs with LLVM version detection in the HEAD and 8.10.5 releases (#19973, #19828, #19959).

  • The bindists produced by hadrian now have a very similar structure to the ones produced by make (Matt, !6345, !6349).

Compiler error messages

  • Alfredo continued working on the conversion of GHC diagnostic messages from plain structured documents to richer Haskell types. After porting some errors in the driver code (!6249) he turned his attention to the modules in GHC’s typechecker (!6414), and he’s currently converting GHC’s typeclass-derivation code to use the new diagnostic infrastructure (!6561).

Frontend

  • Matt has been fixing a number of bugs discovered after recent driver refactoring. Hopefully everything works as before in the 9.4 release! (!6508, !6507, !6412)

  • Matt attempted to implement the splice imports proposal but the specification didn’t correctly enforce level invariants. The latest idea is to introduce a complementary “quote” import to distinguish imports allowed to be used in quotations.

Haddock and documentation

  • Zubin has been finishing up the long-pending hi Haddock work, which should allow Haddock to generate documentation using only GHC interface (.hi) files (!6224). This greatly simplifies Haddock’s implementation, and allows it to skip parsing, renaming and type-checking files if the appropriate information already exists in the interface files, speeding it up greatly in such cases. This also reduces Haddock’s peak memory consumption. Identifiers in Haddock comments will also be renamed by GHC itself, and the results are also serialized into .hi files for tooling to make use of. A number of Haddock bugs were fixed along the way (#20034, haddock #30, haddock #665, haddock #921).

Profiling and debugging

  • Andreas continued looking into using the machine stack register for the Haskell stack (see the earlier blog post). While we have a branch that uses the machine stack register, there are issues with perf not unwinding properly, as well as issues related to LLVM compatibility. For these reasons we will likely stop looking into this for the time being.

Compiler performance

  • Matt has been investigating several compiler performance regressions when compiling common packages (#19478, #19471).

  • Andreas landed a few performance improvements in !6609.

  • Andreas improved further on tag inference in !5614. As it stands it improves compiler performance while also improving runtime for most programs slightly.

  • During investigation of some corners of the tag inference work, Andreas found some edge cases in GHC’s current code generation (#20334, #20333, #20332).

  • Adam has been working on a new approach to improving compilation performance for programs with significant use of type families. The key idea is to introduce a more compact representation of coercions in Core, which will occupy less space and be faster to traverse during optimisation (#8095, !6476).

Runtime performance

  • Matt diagnosed and found an interesting recent regression in the text package benchmarks due to code generation using 8-bit instructions which causes partial register stalling (#20405).

Compiler correctness

  • Ben added build system support for multi-target native toolchains (e.g. clang), allowing GHC to be used robustly on platforms which may run code for multiple platforms (#20162).

  • Ben fixed numerous linking issues affecting musl-based platforms, enabling static linkage on Alpine.

  • Ben fixed a number of build system bugs pertaining to libffi linkage affecting Darwin.

  • Ben fixed a Hadrian bug causing binary distribution installation to use platform parameters from the build environment instead of the installation environment (#20253).

Runtime system

  • Ben found and fixed a few tricky write barrier issues in the non-moving garbage collector (#20399).

CI and infrastructure

  • At the request of Richard, Matt has reduced the default verbosity levels of the Hadrian build output (!6584, !6545).

  • Ben refactored GHC’s CI infrastructure on Darwin platforms, eliminating several sources of fragility in the process.

by ben, matthew, andreask, zubin, alfredo, adam at October 13, 2021 12:00 AM

Tweag I/O

Denotational Homomorphic Testing

Almost a million years ago, I was dealing with some sinister bugs inside the data structures in linear-base and to stop the nightmares I decided to just test the specification of the data structures themselves. I ended up using something that I’ve been calling denotational homomorphic testing. In this post, I’ll walk through how I ended up with this and why this is legitimately helpful.

Our Toy Example

Let’s consider a toy example which corresponds to one of the data structures I was working with; see PR #263. How shall we test a specification of a simple Set implementation shown below?

-- Constructors / Modifiers
empty :: Set a
insert :: Keyed a => a -> Set a -> Set a
delete :: Keyed a => a -> Set a -> Set a
intersect :: Keyed a => Set a -> Set a -> Set a
union :: Keyed a => Set a -> Set a -> Set a

-- Accessors
member :: Keyed a => a -> Set a -> Bool
size :: Keyed a => Set a -> Int

Specification via Axioms

My first idea was to create a specification for my data structures via axioms, and then property test those axioms.

I actually got this idea from my introductory computer science course. Our professor chose to introduce us to data structures by understanding axioms that functionally specify their behavior. Together, these axioms specified “simplification” rules that would define the value of the accessors on an arbitrary instance of that data structure. Think of it as defining rules to evaluate any well typed expression whose head is an accessor. The idea being that if you define the interface for an arbitrary use of the data structure by an external program, then you’ve defined what you want (functionally) from the data structure1.

In this system, we need at least one axiom for each accessor applied to each constructor or modifier. If we had m accessors and n constructors or modifiers, we would need at least m × n axioms. For instance, for the accessor member, we would need at least 5 axioms, one for each of the 5 constructors or modifiers, and these are some of the axioms we would need:

-- For all x, y != x,
member empty x == False
member (insert x s) x == True
member (insert x s) y == member s y

Intuitively, the axioms provide a way to evaluate any well typed expression starting with member to either True or False.

With this specification by axioms, I thought I could just property test each axiom in hedgehog and provided my samples were large enough and random enough, we’d have a high degree of confidence in each axiom, and hence in the specification and everyone would live happily ever after. It was a nice dream while it lasted.
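
To make that concrete, here is a minimal sketch of what property testing one of the member axioms in hedgehog could have looked like. The Int key type, the random-key generator, and the assumption that Int has a Keyed instance are illustrative choices of mine, not code from the original work:

import Hedgehog (Property, forAll, property, (===))
import qualified Hedgehog.Gen as Gen
import qualified Hedgehog.Range as Range

-- Axiom under test: member (insert x s) x == True
prop_memberInsert :: Property
prop_memberInsert = property $ do
  x  <- forAll (Gen.int (Range.linear 0 100))
  xs <- forAll (Gen.list (Range.linear 0 50) (Gen.int (Range.linear 0 100)))
  let s = foldr insert empty xs   -- build a random Set from random keys
  member x (insert x s) === True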

It turns out that there are just too many axioms to test. For instance, consider the axioms for size on intersection.

size (intersect s1 s2) = ...

You would effectively need axioms that perform the intersection on two terms s1 and s2. A few simple axioms would include the following.

size (intersect empty s2) = 0
-- For x ∉ s1
size (intersect (insert x s1) s2) = boolToInt (member x s2) + size (intersect s1 s2)

So, if we can’t come up with a specification that’s easy to specify, much less easy to test, what are we to do?

Specification via Denotational Semantics

I looked at a test written by my colleague Utku Demir and it sparked an idea. Utku wrote some tests, actually for arrays, that translated operations on arrays to analogous operations on lists:

f <$> fromList xs == fromList (f <$> xs)
fromList :: [a] -> Array a

I asked myself whether we could test sets by testing whether they map onto lists in a sensible way. I didn’t understand this at the time, but I was really testing a semantic function that was itself a specification for Set.

Denotational Semantics

Concretely, consider the following equations.

-- toList :: Set a -> [a]
-- toList (1) does not produce repeats and (2) is injective

-- Semantic Function
--------------------

toList empty == []
toList (insert x s) == listInsert x (toList s)
  -- listInsert x l = nub (x : l)
toList (delete x s) == listDelete x (toList s)
  -- listDelete x = filter (/= x)
toList (union s1 s2) == listUnion (toList s1) (toList s2)
  -- listUnion a b = nub (a ++ b)
toList (intersect s1 s2) == listIntersect (toList s1) (toList s2)
  -- listIntersect a b = filter (`elem` a) b

-- Accessor Axioms
------------------

member x s == elem x (toList s)
size s == length (toList s)

These effectively model a set as a list with no repeats. This model is a specification for Set. In other words, we can say our Set implementation is correct if and only if it corresponds to this model of lists without repeats.

Consequences of Denotational Semantics

Why is this so, you ask me? Well, I’ll tell you. The first five equations cover each constructor or modifier and hence fully characterise toList. We say that toList is a semantic function. Any expression using Set either is a Set or is composed of operations that use the accessors. Thus, any expression using Sets can be converted to one using lists.

Moreover, since toList is injective, we can go back and forth between expressions in the list model and expressions using Sets. For instance, we can prove that for any sets s1 and s2, insert x (union s1 s2) = union (insert x s1) s2. First we have:

toList (insert x (union s1 s2)) =                 -- Function inverses
listInsert x (toList (union s1 s2)) =             -- Semantic function
listInsert x (nub ((toList s1) ++ (toList s2))) = -- Semantic function
nub (listInsert x (toList s1) ++ (toList s2)) =   -- List properties
nub (toList (insert x s1) ++ (toList s2)) =       -- Semantic function
toList (union (insert x s1) s2)                   -- Semantic function

Then by injectivity of toList we deduce, as expected:

insert x (union s1 s2) =
union (insert x s1) s2

This actually means that our denotational specification subsumes the axiomatic specification that we were reaching for at the beginning of the post.

Now, there’s an interesting remark to make here. Our axiomatic system had axioms that all began with an accessor. The accessors’ denotations only have toList on the right-hand side, removing the need for injectivity of the semantic function. For instance, member x (insert x s) = elem x (toList (insert x s)), which simplifies to elem x (nub (x : toList s)), which is clearly True. The injectivity of the semantic function is only needed for equalities between sets like the one in the proof above.

Denotational Homomorphic Testing

In summary, we now have an equivalent way to give a specification of Set in terms of laws that we can property test. Namely, we could property test all the laws that define the semantic function, like toList empty = [], and the accessor axioms, like member x s = elem x (toList s), and this would test that our implementation of Set met our specification. And to boot, unlike before, we’d only write one property for each constructor, accessor or modifier. If we had m accessors and n constructors and modifiers, we’d need to test about m + n laws.
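
As a sketch of what that could look like, here are two such properties written in the same hedgehog style as before, one for the insert law of the semantic function and one for the size accessor axiom. The Int key and generator are again my own assumptions, and for an implementation where toList makes no ordering guarantee, the first comparison would have to be done up to permutation:

import Data.List (nub)

-- Semantic function law: toList (insert x s) == listInsert x (toList s)
prop_toListInsert :: Property
prop_toListInsert = property $ do
  x  <- forAll (Gen.int (Range.linear 0 100))
  xs <- forAll (Gen.list (Range.linear 0 50) (Gen.int (Range.linear 0 100)))
  let s = foldr insert empty xs
  toList (insert x s) === nub (x : toList s)

-- Accessor axiom: size s == length (toList s)
prop_size :: Property
prop_size = property $ do
  xs <- forAll (Gen.list (Range.linear 0 50) (Gen.int (Range.linear 0 100)))
  let s = foldr insert empty xs
  size s === length (toList s)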

Stated abstractly, we have the following technique. Provide a specification (à la denotational semantics) of a data structure D by modeling it with another, simpler data structure T. The model consists of a semantic function sem :: D -> T and axioms that relate constructors on D to constructors of T and accessors on D to accessors on T, e.g. acc someD = acc' (sem someD). Now, test the specification by property testing each of the sem axioms. These axioms look like homomorphisms in that operations on Ds are recursively translated to related operations on Ts, which preserves some semblance of structure (more on this below). If our property testing goes well, we have a strong way to test the implementation of D.

It’s critical that I note that providing and testing a specification for data structures is not at all a novel problem. It has been well studied in a variety of contexts. Two of the most relevant approaches to this technique are in Conal Elliott’s paper Denotational design with type class morphisms and the textbook Algebra Driven Design.

In Elliott’s paper, he outlines a design principle: provide denotational semantics to data structures such that the semantic function is a homomorphism over any type classes shared by the data structure at hand and the one mapped to. Applying this to our example, this would be like creating an IsSet type class, having an instance for Ord a => IsSet [a] and Ord a => IsSet (Set a), and verifying that if we changed each equation in the former to use toList, it would be the corresponding equation in the latter. So, for example

class IsSet s where
  member :: a -> s a -> Bool
  -- ... etc

-- Actual homomorphism property:
--
-- >   member x (toList s) = member x s
--

where the property to test is actually a homomorphism.

This has the same underlying structure as our approach, and the type class laws that hold on the list instance of IsSet prove the laws hold on the Set instance, analogous to how facts about lists prove facts about Sets.

In Algebra Driven Design, the approach is similar. First generate axioms on a reference implementation RefSet a (which might be backed by lists) via quickspec and test those. Then, property test the homomorphism laws from Set to RefSet.

Concluding Thoughts

To conclude, it seems to me that denotational homomorphic testing is a pretty cool way to think about and test data structures. It gives us a way to abstract a complex data structure by making a precise correspondence to a simpler one. Facts we know about the simpler one prove facts about the complex one, and yet, the correspondence is itself the specification. Hence proving the correspondence would be a rigorous formal verification. Falling a few inches below that, we can property test the correspondence because it’s in the form of a small number of laws and this in turn gives us a high degree of confidence in our implementation.


  1. Think about the Batman line: “It’s not who I am underneath, but what I do that defines me”.

October 13, 2021 12:00 AM

October 11, 2021

Monday Morning Haskell

Why Haskell?

(This post is also available as a YouTube video!)

When I tell other programmers I do a lot of programming in Haskell, a common question is "Why?" What is so good about Haskell that it's worth learning a language that is so different from the vast majority of software? And there are a few different things I usually think of, but the biggest one that sticks out for me is the way Haskell structures effects. I think these structures have really helped me change the way I think about programming, and knowing these ideas has made me a more effective developer, even in other languages.

Defining Effects

Now you might be wondering, what exactly is an effect? Well to describe effects, let's first think about a "pure" function. A pure function has no inputs besides the explicit parameters, and the only way it impacts our program's behavior is through the value it returns.

// A simple, pure, function
public int addWith5(int x, int y) {
  int result = x + y + 5;
  return result;
}

We can define an effect as, well, anything outside of that paradigm. This can be as simple as an implicit mutable input to the function like a global variable.

// Global mutable variable as an "implicit" input
global int z;

public int addWith5(int x, int y) {
  int result = x + y + 5 + z; // < z isn't a function parameter!
  return result;
}

Or it can be something more complicated like writing something to the file system, or making an HTTP request to an API.

// More complicated effects (pseudo-code)
public int addWith5(int x, int y) {
  int result = x + y + 5;
  WriteFile("result.txt", result);
  API.post(makeRequest(result));
  return result;
}

Once our function does these kinds of operations, its behavior is significantly less predictable, and that can cause a lot of bugs.

Now a common misconception about Haskell is that it does not allow side effects. And this isn't correct. What is true about Haskell is that if a function has side effects, these must be part of its type signature, usually in the form of a monad, which describes the full computational context of the function.

A function in the State monad can update a shared global value in some way.

updateValue :: Int -> State Int Int
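
Here is a minimal sketch of what a function with this signature could look like (the specific behavior is made up purely for illustration):

import Control.Monad.State (State, get, put)

-- Swap in a new value for the shared Int and return the old one.
-- The type is the documentation: this function may read and write the Int
-- state, but it cannot touch files, the network, or anything else.
updateValue :: Int -> State Int Int
updateValue new = do
  old <- get
  put new
  return old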

A function in the IO monad can write to the file system or even make a network call.

logAndSendRequest :: Req -> IO Result

Doing this type-level documentation helps avoid bugs and provides guarantees about parts of our program at compile time, and this can be a real lifesaver.

Re-thinking Code

In the last few years I've been writing about Haskell in my free time while using C++ and Python in my day job. This has given me a bigger appreciation for the lessons I learned from Haskell's effect structures, and I've seen that my code in other languages is much better because I understand those lessons.

New Course: Effectful Haskell!

And this is why I'm excited to introduce my newest course on the Monday Morning Haskell Academy. This one is called Effectful Haskell, and I think it might be the most important course I've made so far, because it really zeroes in on this idea of effects. For me, this is the main idea that separates Haskell from other languages. But at the same time, it can also teach you to be a better programmer in these other languages.

This course is designed to give you hands-on experience with some of the different tools and paradigms Haskell has for structuring effects. It includes video lectures, screencasts, and in-depth coding exercises that culminate with you launching a small but multi-functional web server.

If you've dabbled a bit in Haskell and you understand the core ideas, but you want to see what the language is really capable of, I highly recommend you try out this course. You can head to the course sales page to see an overview of the course as well as the FAQ. I'll mention a couple special items.

First, there is a 30-day refund guarantee if you decide you don't like the course.

And second, if you subscribe (or are already subscribed) to the Monday Morning Haskell newsletter, you'll get a 20% discount code for this and our other courses! So I hope you'll take a look and try it out.

by James Bowen at October 11, 2021 02:30 PM

October 10, 2021

Mark Jason Dominus

More words change meanings

“Salient” seems to have lost its original meaning, and people mostly use it as if it were synonymous with “relevant” or “pertinent”. This is unfortunate. It's from Latin salīre, which is to jump, and it originally meant something that jumps out at you. In a document, the salient point isn't necessarily the one that is most important, most crucial, or most worth consideration; it's the one that jumps out.

It is useful to have a word specifically for something that jumps out, but people no longer understand it that way.

Cognates of salīre include “assail” and “assault”, “salmon” (the jumping fish), and the mysterious “somersault”.

by Mark Dominus (mjd@plover.com) at October 10, 2021 06:07 PM

Words change meanings

This Imgur gallery has a long text post about a kid who saw the movie Labyrinth in London and met David Bowie after. The salient part was:

He seemed surprised I would want to know, and he told me the whole thing, all out of order, and I eked the details out of him.

This is a use of “eke” that I haven't seen before. Originally “eke” meant an increase, or a small addition, and it was also used in the sense of “also”. For example, from the prologue to the Wife of Bath's tale:

I hadde the bettre leyser for to pleye, And for to se, and eek for to be seye

(“I had more opportunity to play, and to see, and also to be seen.”)

Or also, “a nickname” started out as “an ekename”, an also-name.

From this we get the phrase “to eke out a living”, which means that you don't have quite enough resources, but by some sort of side hustle you are able to increase them to enough to live on.

But it seems to me that from there the meaning changed a little, so that while “eke out a living” continued to mean to increase one's income to make up a full living, it also began to connote increasing one's income bit by bit, in many small increments. This is the sense in which it appears to be used in the original quotation:

He seemed surprised I would want to know, and he told me the whole thing, all out of order, and I eked the details out of him.

Addenda

Searching for something in a corpus of Middle English can be very frustrating. I searched and searched the University of Michigan Corpus of Middle English Prose and Verse looking for the Chaucer quotation, and couldn't find it, because it has “to se” and “to be seye”, but I searched for “to see” and “to seye”; it has “eek” and I had been searching for “eke”. Ouch.

In the Chaucer, “leyser” is “leisure”, but a nearly-dead sense that we now see only in “complete the task at your leisure”.

by Mark Dominus (mjd@plover.com) at October 10, 2021 05:55 PM

October 09, 2021

Sandy Maguire

Testing Polysemy With polysemy-check

Last week we covered how to port an existing codebase to polysemy. The “why you might want to do this” was left implicit, but to be more explicit about things, it’s because littering your codebase with IO makes things highly-coupled and hard to test. By forcing yourself to think about effects, you are forced to pull concerns apart, and use the type-system to document what’s going on. But more importantly for today, it gives us a layer of indirection inside of which we can insert testing machinery.

To take an extreme example from the codebase I’m currently working on, compare a function with its original (non-polysemized) type:

api :: Opts -> ServerT API App

which looks very simple, and gives the false impression that api is fairly uninteresting. However, there is an amazing amount of IO hiding inside of App, which becomes significantly more evident when we give this type explicit dependency constraints:

api ::
  Members
    '[ AReqIDStore,
       AssIDStore,
       BindCookieStore,
       BrigAccess,
       DefaultSsoCode,
       Error SparError,
       GalleyAccess,
       IdP,
       Input Opts,
       Logger (Msg -> Msg),
       Logger String,
       Now,
       Random,
       Reporter,
       SAML2,
       SAMLUserStore,
       SamlProtocolSettings,
       ScimExternalIdStore,
       ScimTokenStore,
       ScimUserTimesStore
     ]
    r =>
  Opts ->
  ServerT API (Sem r)

Wow! Not so innocent-looking now, is it? Each Member constraint here is a unit of functionality that was previously smuggled in via IO. Not only have we made them more visible, but we’ve now exposed a big chunk of testable surface-area. You see, each one of these members provides an abstract interface, which we can implement in any way we’d like.

Because IO is so hard to test, the idea of polysemy is that we can give several interpretations for our program — one that is pure, lovely, functional, and, importantly, very easy to test. Another interpretation is one that runs fast in IO. The trick then is to decompose the problem of testing into two steps:

  1. show that the program is correct under the model interpreter
  2. show that the model interpreter is equivalent to the real interpreter

This sounds great in principle, but as far as I know, it’s never been actually done in practice. My suspicion is that people using polysemy in the wild don’t get further than step 1 (which is OK — a good chunk of the value in effect systems is in the decomposition itself.) Doing all of the work to show equivalence of your interpreters is a significant amount of work, and until now, there have been no tools to help.

Introducing polysemy-check: a new library for proving all the things you’d want to prove about a polysemy codebase. polysemy-check comes with a few tools for synthesizing QuickCheck properties, plus machinery for getting Arbitrary instances for effects for free.

Using polysemy-check

To get started, you’re going to need to give two instances for every effect in your system-under-test. Let’s assume we have a stack effect:

data Stack s m a where
  Push :: s -> Stack s m ()
  Pop :: Stack s m (Maybe s)
  RemoveAll :: Stack s m ()
  Size :: Stack s m Int

makeSem ''Stack

The instances we need are given by:

deriving instance (Show s, Show a) => Show (Stack s m a)
deriveGenericK ''Stack

where deriveGenericK is TemplateHaskell from kind-generics (but is re-exported by polysemy-check). kind-generics is GHC.Generics on steroids: it’s capable of deriving generic code for GADTs.

The first thing that probably comes to mind when you consider QuickCheck is “checking for laws.” For example, we should expect that push s followed by pop should be equal to pure (Just s). Laws of this sort give meaning to effects, and act as sanity checks on their interpreters.

Properties for laws can be created via prepropLaw:

prepropLaw
    :: forall effs r a f
     . ( (forall z. Eq z => Eq (f z))
       , (forall z. Show z => Show (f z))
       )
    => ( Eq a
       , Show a
       , ArbitraryEff effs r
       )
    => Gen (Sem r a, Sem r a)
    -> (forall z. Sem r (a, z) -> IO (f (a, z)))
    -> Property

Sorry for the atrocious type. If you’re looking for Boring Haskell, you’d best look elsewhere.

The first argument here is a QuickCheck generator which produces two programs that should be equivalent. The second argument is the interpreter for Sem under which the programs must be equivalent, or will fail the resulting Property. Thus, we can write the push/pop law above as:

law_pushPop
    :: forall s r f effs res
     . (
         -- The type that our generator returns
         res ~ (Maybe s)

         -- The effects we want to be able to synthesize for contextualized
         -- testing
       , effs ~ '[Stack s]

         -- Misc constraints you don't need to care about
       , Arbitrary s
       , Eq s
       , Show s
       , ArbitraryEff effs r
       , Members effs r
       , (forall z. Eq z => Eq (f z))
       , (forall z. Show z => Show (f z))
       )
    => (forall a. Sem r (res, a) -> IO (f (res, a)))
    -> Property
law_pushPop = prepropLaw @effs $ do
  s <- arbitrary
  pure ( push s >> pop
       , pure (Just s)
       )

Sorry. Writing gnarly constraints is the cost not needing to write gnarly code. If you know how to make this better, please open a PR!

There’s something worth paying attention to in law_pushPop — namely the type of the interpreter (forall a. Sem r (Maybe s, a) -> IO (f (Maybe s, a))). What is this forall a thing doing, and where does it come from? As written, our generator would merely check the equivalence of the exact two given programs, but this is an insufficient test. We’d instead like to prove the equivalence of the push/pop law under all circumstances.

Behind the scenes, prepropLaw is synthesizing a monadic action to run before our given law, as well as some actions to run after it. These actions are randomly pulled from the effects inside the effs ~ '[Stack s] row (and so here, they will only be random Stack actions.) The a here is actually the result of these “contextual” actions. Complicated, but you really only need to get it right once, and can copy-paste it forevermore.

Now we can specialize law_pushPop (plus any other laws we might have written) for a would-be interpreter of Stack s. Any interpreter that passes all the properties is therefore proven to respect the desired semantics of Stack s.
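
For instance, a pure interpreter that we might want to validate against these laws could look like the following sketch. The name runStackAsState and the list-of-s representation are my own assumptions, not part of polysemy-check:

{-# LANGUAGE DataKinds, LambdaCase, TypeOperators #-}

import Polysemy
import Polysemy.State

-- Model the stack as a plain list, so the Stack laws can be checked without IO.
runStackAsState :: Sem (Stack s ': r) a -> Sem r ([s], a)
runStackAsState = runState [] . reinterpret (\case
  Push s    -> modify (s :)
  Pop       -> get >>= \case
    []       -> pure Nothing
    (x : xs) -> put xs >> pure (Just x)
  RemoveAll -> put []
  Size      -> gets length)

The Stack behaviour itself stays pure here; running the remaining effects down to IO, as the interpreter argument of law_pushPop requires, is a separate step.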

Wrapping Up

polysemy-check can do lots more, but this post is overwhelming already. So next week we’ll discuss how to prove the equivalence of interpreters, and how to ensure your effects are sane with respect to one another.

October 09, 2021 02:23 PM

October 08, 2021

Mark Jason Dominus

Diminishing resources in the Korean Language

Hangul, the Korean alphabet, was originally introduced in the year 1443. At that time it had 28 letters, four of which have since fallen out of use. If the trend continues, the Korean alphabet will be completely used up by the year 7889, preceded by an awful period in which all the words will look like

앙 앙앙앙 앙앙 앙 앙앙앙앙 앙

and eventually

ㅏㅏㅏㅏㅏㅏㅏㅏㅏㅏ!

by Mark Dominus (mjd@plover.com) at October 08, 2021 04:02 PM

October 06, 2021

Gabriel Gonzalez

The "return a command" trick

return-command

This post illustrates a trick that I’ve taught a few times to minimize the “change surface” of a Haskell program. By “change surface” I mean the number of places Haskell code needs to be updated when adding a new feature.

The motivation

I’ll motivate the trick through the following example code for a simple REPL:

import Control.Applicative ((<|>))
import Data.Void (Void)
import Text.Megaparsec (Parsec)

import qualified Data.Char as Char
import qualified System.IO as IO
import qualified Text.Megaparsec as Megaparsec
import qualified Text.Megaparsec.Char as Megaparsec

type Parser = Parsec Void String

data Command = Print String | Save FilePath String

parsePrint :: Parser Command
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return (Print string)

parseSave :: Parser Command
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return (Save file string)

parseCommand :: Parser Command
parseCommand = parsePrint <|> parseSave

main :: IO ()
main = do
    putStr "> "

    eof <- IO.isEOF

    if eof
        then do
            putStrLn ""

        else do
            text <- getLine

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right command -> do
                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string

            main

This REPL supports two commands: print and save:

> print Hello, world!
Hello, world!
> save number.txt 42

print echoes back whatever string you supply and save writes the given string to a file.

Now suppose that we wanted to add a new load command to read and display the contents of a file. We would need to change our code in four places.

First, we would need to change the Command type to add a new Load constructor:

data Command = Print String | Save FilePath String | Load FilePath

Second, we would need to add a new parser to parse the load command:

parseLoad :: Parser Command
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return (Load file)

Third, we would need to add this new parser to parseCommand:

parseCommand :: Parser Command
parseCommand = parsePrint <|> parseSave <|> parseLoad

Fourth, we would need to add logic for handling our new Load constructor in our main loop:

                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string

                        Load file -> do
                            string <- readFile file

                            putStrLn string

I’m not a fan of this sort of program structure because the logic for how to handle each command isn’t all in one place. However, we can make a small change to our program structure that will not only simplify the code but also consolidate the logic for each command.

The trick

We can restructure our code by changing the type of all of our parsers from this:

parsePrint :: Parser Command

parseSave :: Parser Command

parseLoad :: Parser Command

parseCommand :: Parser Command

… to this:

parsePrint :: Parser (IO ())

parseSave :: Parser (IO ())

parseLoad :: Parser (IO ())

parseCommand :: Parser (IO ())

In other words, our parsers now return an actual command (i.e. IO ()) instead of returning a Command data structure that still needs to be interpreted.

This entails the following changes to the implementation of our three command parsers:

{-# LANGUAGE BlockArguments #-}

parsePrint :: Parser (IO ())
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        putStrLn string

parseSave :: Parser (IO ())
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        writeFile file string

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file

        putStrLn string

Now that each parser returns an IO () action, we no longer need the Command type, so we can delete the following datatype definition:

data Command = Print String | Save FilePath String | Load FilePath

Finally, our main loop gets much simpler, because we no longer need to specify how to handle each command. That means that instead of handling each Command constructor:

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right command -> do
                    case command of
                        Print string -> do
                            putStrLn string

                        Save file string -> do
                            writeFile file string

                        Load file -> do
                            string <- readFile file

                            putStrLn string

… we just run whatever IO () command was parsed, like this:

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right io -> do
                    io

Now we only need to make two changes to the code any time we add a new command. For example, all of the logic for the load command is right here:

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file

        putStrLn string

… and here:

parseCommand :: Parser (IO ())
parseCommand = parsePrint <|> parseSave <|> parseLoad
-- ↑

… and that’s it. We no longer need to change our REPL loop or add a new constructor to our Command datatype (because there is no Command datatype any longer).

What’s neat about this trick is that the IO () command we return has direct access to variables extracted by the corresponding Parser. For example:

parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    -- The `file` variable that we parse here …
    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        -- … can be referenced by the corresponding `IO` action here
        string <- readFile file

        putStrLn string

There’s no need to pack our variables into a data structure and then unpack them again later on when we need to use them. This technique promotes tight and “vertically integrated” code where all of the logic is in one place.

Final encodings

This trick is a special case of a more general trick known as a “final encoding” and the following post does a good job of explaining what “initial encoding” and “final encoding” mean:

To briefly explain initial and final encodings in my own words:

  • An “initial encoding” is one where you preserve as much information as possible in a data structure

    This keeps your options as open as possible since you haven’t specified what to do with the data yet

  • A “final encoding” is one where you encode information by how you intend to use it

    This tends to simplify your program if you know in advance how the information will be used

The initial example from this post was an initial encoding because each Parser returned a Command type which preserved as much information as possible without specifying what to do with it. The final example from this post was a final encoding because we encoded our commands by directly specifying what we planned to do with them.

Conclusion

This trick is not limited to returning IO actions from Parsers. For example, the following post illustrates a similar trick in the context of implementing configuration “wizards”:

… where a wizard has type IO (IO ()) (a command that returns another command).
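
To give a flavor of that shape, here is a tiny made-up wizard: an IO action that gathers input now and returns another IO action to run later:

-- A toy "wizard": ask the question up front, return the command to run later.
greetingWizard :: IO (IO ())
greetingWizard = do
    putStrLn "What is your name?"
    name <- getLine
    return (putStrLn ("Hello, " ++ name ++ "!"))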

More generally, you will naturally rediscover this trick if you stick to the principle of “keep each component’s logic all in one place”. In the above example the “components” were REPL commands, but this consolidation principle is useful for any sort of plugin-like system.

Appendix

Here is the complete code for the final version of the running example if you would like to test it out yourself:

{-# LANGUAGE BlockArguments #-}

import Control.Applicative ((<|>))
import Data.Void (Void)
import Text.Megaparsec (Parsec)

import qualified Data.Char as Char
import qualified System.IO as IO
import qualified Text.Megaparsec as Megaparsec
import qualified Text.Megaparsec.Char as Megaparsec

type Parser = Parsec Void String

parsePrint :: Parser (IO ())
parsePrint = do
    Megaparsec.string "print"

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        putStrLn string

parseSave :: Parser (IO ())
parseSave = do
    Megaparsec.string "save"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    Megaparsec.space1

    string <- Megaparsec.takeRest

    return do
        writeFile file string

parseLoad :: Parser (IO ())
parseLoad = do
    Megaparsec.string "load"

    Megaparsec.space1

    file <- Megaparsec.takeWhile1P Nothing (not . Char.isSpace)

    return do
        string <- readFile file

        putStrLn string

parseCommand :: Parser (IO ())
parseCommand = parsePrint <|> parseSave <|> parseLoad

main :: IO ()
main = do
    putStr "> "

    eof <- IO.isEOF

    if eof
        then do
            putStrLn ""

        else do
            text <- getLine

            case Megaparsec.parse parseCommand "(input)" text of
                Left e -> do
                    putStr (Megaparsec.errorBundlePretty e)

                Right io -> do
                    io

            main

by Gabriella Gonzalez (noreply@blogger.com) at October 06, 2021 06:25 PM

Ken T Takusagawa

[atmultsv] cube root of a complex number

given a complex number in rectangular (Cartesian) form x + y*i, we first consider computing its square root (one of its square roots) in rectangular form p + q*i such that p and q only contain radicals of real values.

here is a Haskell code that accomplishes this, implementing "How to find the square root of a complex number" by Stanley Rabinowitz:

{-# LANGUAGE ScopedTypeVariables #-}  -- needed for the inner annotations on `real` below

complexsqrt :: forall real. (Eq real, Num real, Ord real, Floating real) => (real,real) -> (real,real);
complexsqrt (x,y) =
  if y == 0 then
    if x < 0 then (0, sqrt(negate x))
    else (sqrt(x), 0)
  else let {
    hypotenuse :: real;
    hypotenuse = sqrt(x*x + y*y);
    p :: real;
    p = sqrt((x+hypotenuse)/2);
    imaginarypart :: real;
    imaginarypart = sqrt((hypotenuse - x)/2);
  } in if y < 0
    then (p, negate imaginarypart)
    else (p, imaginarypart);

the above code is only to demonstrate that it is theoretically possible to do this with radicals and conditionals.  practically, the code has issues with loss of numerical precision and potential overflow in intermediate results.

note that if p + q*i is a square root, then -p - q*i is also a square root.  these coincide at 0: zero (and only zero) has exactly one square root.  the above code (I think) gives the answer whose complex argument t is -pi/2 < t <= pi/2 , the right half of the complex plane, and preferring the positive imaginary axis.

is there a similar algorithm to compute the cube root of a complex number?  i strongly suspect the answer is "no", because casus irreducibilis.

let the polar form be r * (cos(t) + i*sin(t)).  then, the cube root by De Moivre's theorem is r^(1/3) * (cos(t/3) + i*sin(t/3)).  we do not know what cos(t/3) and sin(t/3) are, but we are given cos(t) and sin(t), and we have the triple angle identities:

sin(t) = 3*sin(t/3) - 4*(sin(t/3))^3
cos(t) = 4*(cos(t/3))^3 - 3*cos(t/3)

we could solve the triple angle identities for cos(t/3) and sin(t/3) using the cubic equation, eliminating the trigonometric functions from the De Moivre solution.  however, the cubic equation typically requires computing the cube roots of complex numbers, casus irreducibilis, so we are (probably) back where we started.

if so, casus irreducibilis is stranger than it might appear at first sight.  suppose we have a calculator that can do arithmetic on real numbers, including computing arbitrary roots of positive real numbers.  (note that simple arithmetic + - * / on complex numbers can easily be expressed in terms of real arithmetic on their rectangular components.)  this machinery is insufficient for computing the rectangular components of the cube root of (most) complex numbers.  it seems there is a partition within algebraic numbers: those which can be computed with a finite number of arithmetic operations on reals starting with rationals, and those which cannot.  the rectangular components of the cube root of a complex number are in the latter set.  (the roots of the general quintic equation and higher are also famously in the latter set.)

the components are probably not transcendental numbers.  one can write (p + q*i)^3 = (x + y*i), then separate the real and imaginary components:

x = p^3 - 3*p*q^2
y = 3*p^2*q - q^3

I don't know if algebraic numbers include numbers which are the roots of a system of polynomials: the definition of algebraic states roots of a single polynomial.  but I strongly suspect roots of systems are also algebraic, much like how roots of single polynomials with algebraic-number (not necessarily rational) coefficients are algebraic.

what (subjectively) is the simplest additional computational machinery needed to compute the rectangular components of the cube root of a complex number, perhaps to arbitrary precision?

of course, cos, sin, and atan2 suffice, going via polar form, but those functions are (subjectively) not very simple.

the cube root of a positive real number can be iteratively computed using bisection.  but bisection does not work on the complex plane: there is too much space.  can a root be confined to a rectangle which is iteratively shrunk?

the Newton-Raphson method applied to complex numbers works, but it has the ugly issue of selecting the initial point.  Newton's method famously behaves very weirdly for complex numbers (yielding fractals) depending on the initial point.  (who first applied Newton's method to complex numbers?  I used to think it was Raphson, but I seem mistaken.  it's not obvious that Newton's method should even work for complex numbers.)
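
for concreteness, here is a minimal sketch of that iteration in Haskell.  the starting guess z0 must be nonzero, and choosing it well is exactly the difficulty discussed above:

import Data.Complex (Complex)

-- Newton's method for f(z) = z^3 - w, i.e. z' = (2*z + w/z^2) / 3,
-- iterated a fixed number of times from the guess z0 (which must be nonzero).
cubeRootNewton :: Complex Double -> Complex Double -> Complex Double
cubeRootNewton w z0 = iterate step z0 !! 40
  where step z = (2 * z + w / (z * z)) / 3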

perhaps use polynomial approximations of trigonometric and inverse trigonometric functions only good enough to get a good initial guess for Newton's method.  perhaps we win because the 3 cube roots of a number on the complex plane are widely spaced, and the polynomial approximations of sine and cosine only need to be good enough between -pi and pi, the range of atan2.  actually, we only need -pi/3 to pi/3.

by Unknown (noreply@blogger.com) at October 06, 2021 04:30 AM

October 02, 2021

Sandy Maguire

Porting to Polysemy

Many years ago, when I first started using free monads in anger, I was tasked with porting our giant codebase to something that used an effect system. While it was a noble goal, my efforts slowly imploded upon their own weight. I didn’t know how to go about doing such a dramatic refactoring on a live codebase, and unwisely tried to do the whole thing in a single PR. A month later, as you might expect, it became overwhelmingly obvious that we were never going to merge the thing, and it died there.

Several years older (and wiser), I’ve recently been contracted to port another codebase to Polysemy. Today we hit our first big milestone, and the experience has gone swimmingly. I wanted to spend some time today discussing how to actually go about Polysemizing a codebase. It’s not too onerous if you proceed cautiously. The trick is to do several passes over the codebase, each time introducing a few more effects, but at no point ever actually changing any code paths.

Getting Your Foot in the Door

The first step is to introduce Polysemy into your codebase. Your program is almost certainly structured around a main application monad, and that’s the right place to start. As a first step, we will swap out IO for Sem. For example, if your main monad were:

newtype App a = App
  { unApp :: ReaderT Env (ExceptT AppError IO) a
  }

we will change it to:

newtype App r a = App
  { unApp :: Member (Final IO) r => ReaderT Env (ExceptT AppError (Sem r)) a
  }

This change exposes the effect row (the r type variable,) and asserts that we always have a Final IO member in that row. Exposing r means we can gradually introduce Member constraints in application code as we begin teasing apart effects, and Final IO gives us a way to implement MonadIO for App. Let’s start with that:

instance MonadIO (App r) where
  liftIO a = App $ lift $ lift $ embedFinal a

Due to some quirks of how Haskell deals with impredicativity, this function can’t be written point-free.

This change of App to App r isn’t the end-goal; it’s just enough that we can get Polysemy into the project without it being a huge change. In the medium term, our goal is to eliminate the App newtype altogether, leaving a bare Sem in its place. But one step at a time.

You’ll need to rewrite any instances on App that you were previously newtype deriving. This sucks, but the answer is always just to lift. You might find that some instances used to be derived via IO, and thus now cannot be implemented via lift. In these cases, don’t be afraid to give an orphan instance for Sem r; orphans are bad, but we’ll be cleaning this all up very soon.

Take some time to get everything compiling. It’s a lot of drudgery, but all you need to do is to add the r type variable to every type signature in your codebase that mentions App.

You will also need an introduction function, to lift Polysemy actions into App:

liftSem :: Sem r a -> App r a
liftSem a = App $ lift $ lift a

as well as an elimination function which will evolve as you add effects. At some point in your (existing) program, you will need to actually run App down to IO. It probably looks something like this:

runApp :: Env -> App a -> IO (Either AppError a)
runApp env = runExceptT . flip runReaderT env . unApp

instead we are going to create the canonical interpretation down to IO:

type CanonicalEffects =
  '[ Final IO
   ]

canonicalAppToIO :: Env -> App CanonicalEffects a -> IO (Either AppError a)
canonicalAppToIO env
  = runFinal
  . runExceptT
  . flip runReaderT env
  . unApp

As we pull effects out of the program, we will add them to CanonicalEffects, and their interpreters to canonicalAppToIO. But for now, this function is very boring.

Once everything is up and compiling, all of the old tests should still pass. We haven’t changed anything, just installed some new machinery. But importantly, all of code paths are still exactly the same. Remember, this is a refactoring task! The goal is to do lots of little refactors, each one pulling out some effect machinery, but not changing any code paths. The entire porting project should be a series of no-op PRs that slowly carve your codebase into one with explicitly described effects.

First Effects

Your medium term goal is to eliminate the Final IO constraint inside of App, which exists only to provide a MonadIO instance. So, our real goal is to systematically eliminate raw IO from App.

The usual culprits here are database access, HTTP requests, and logging. If your team has been disciplined, database access and HTTP requests should already be relatively isolated from the rest of the codebase. Isolated here means “database calls are in their own functions,” rather than being inlined directly in the application code whenever it wants to talk to the database. If your database accesses are not isolated, take some time to uninline them before continuing.

Our next step is to identify CRUD groups on the database. We generously interpret the “read” in CRUD to be any queries that exist against the logical datastructure that you’re serializing in the database. These CRUD groups might be organized by table, but they don’t necessarily need to be; by table is good enough for now if it corresponds to how the queries exist today.

For each CRUD group, we want to make a new Polysemy effect, and thread it through the application, replacing each direct call to the database with a call to the effect action. Finish working on each effect before starting on the next; each group makes for a good PR.

For example, maybe we’ve identified the following database accesses for table users:

insertUser       :: MonadDB m => UserName -> User -> m ()
lookupUser       :: MonadDB m => UserName -> m (Maybe User)
getUsersByRegion :: MonadDB m => Region -> m [User]
setUserLapsed    :: MonadDB m => UserName -> m ()
unsetUserLapsed  :: MonadDB m => UserName -> m ()
purgeUser        :: MonadDB m => UserName -> m ()

This CRUD group corresponds to an effect:

module App.Sem.UserStore where

data UserStore m a where
  Insert      :: UserName -> User -> UserStore m ()
  Lookup      :: UserName -> UserStore m (Maybe User)
  GetByRegion :: Region -> UserStore m [User]
  SetLapsed   :: UserName -> UserStore m ()
  UnsetLapsed :: UserName -> UserStore m ()
  Purge       :: UserName -> UserStore m ()

makeSem ''UserStore

We can now replace all calls across the codebase to insertUser a b with liftSem $ UserStore.insert a b. Doing so will require you to propagate a Member UserStore r constraint throughout the callstack. I really like this process. It’s a bit annoying to push constraints upwards, but it really gives you a good sense for the hidden complexity in your program. As it turns out, MonadIO is hiding a metric ton of spaghetti code!
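
For example, a call site might go from implicit IO to an explicit constraint along these lines (deactivateUser is a hypothetical helper, shown only to illustrate the shape of the change):

import qualified App.Sem.UserStore as UserStore

-- Before: deactivateUser :: UserName -> App ()    (the database access was hidden in IO)
-- After: the UserStore dependency is visible in the type.
deactivateUser :: Member UserStore r => UserName -> App r ()
deactivateUser name = do
  liftSem (UserStore.setLapsed name)
  liftSem (UserStore.purge name)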

All of this replacing and constraint propagating has given you dependency injection. But remember, at this step we’d like all of our changes to be no-ops, so we still need to inject the old codepath. For this we will make an interpreter of the UserStore effect:

module App.Sem.UserStore.IO where

import qualified TheDatabase as DB
import App.Sem.UserStore

userStoreToDB
    :: forall m r a
     . (Member (Embed m) r, MonadDB m)
    => Sem (UserStore ': r) a
    -> Sem r a
userStoreToDB = interpret $ embed @m . \case
  Insert un u    -> DB.insertUser un u
  Lookup un      -> DB.lookupUser un
  GetByRegion r  -> DB.getUsersByRegion r
  SetLapsed un   -> DB.setUserLapsed un
  UnsetLapsed un -> DB.unsetUserLapsed un
  Purge un       -> DB.purgeUser un

Make sure to add UserStore (and its dependency, Embed DB) to the head of CanonicalEffects:

type CanonicalEffects =
  '[ UserStore
   , Embed DB  -- dependency of UserStore
   , Embed IO  -- dependency of Embed DB
   , Final IO
   ]

and then we can update the canonical interpreter:

canonicalAppToIO :: Env -> App CanonicalEffects a -> IO (Either AppError a)
canonicalAppToIO env
  = runFinal
  . embedToFinal
  . runEmbedded @DB @IO (however you run the DB in IO)
  . userStoreToDB @DB
  . runExceptT
  . flip runReaderT env
  . unApp

The general principle here is that you add the new effect somewhere near the top of the CanonicalEffects stack, making sure that any effects your intended interpreter requires appear lower in the stack. Then add the new interpreter to canonicalAppToIO, in the same order (though presented “backwards”, since function application reads right to left). Make sure to add interpreters for the depended-upon effects too!

As you pull more and more effects out, you’ll find that often you’ll already have the depended-upon effects in CanonicalEffects. This is a good thing — we will probably have several effects that can all be interpreted via Embed DB.

The benefit here is that we have now separated our application code from the particular choice of database implementation. While we want to use userStoreToDB in production, it might make less sense to use in local testing environments, where we don’t want to spin up a database. Instead, we could just write a little interpreter that emulates the UserStore interface purely in memory! Once you’ve fully exorcised IO from your codebase, this approach gets extremely powerful.
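For instance, here is a minimal sketch of such a pure interpreter, holding users in a Map inside a State effect instead of a real database. The Map representation, the assumed Ord instance on UserName, and the decision to ignore regions and the lapsed flag are all simplifications of my own, purely for illustration:

import           Data.Map (Map)
import qualified Data.Map as Map
import           Polysemy
import           Polysemy.State

import           App.Sem.UserStore

userStoreToState
    :: Member (State (Map UserName User)) r
    => Sem (UserStore ': r) a
    -> Sem r a
userStoreToState = interpret $ \case
  Insert un u    -> modify (Map.insert un u)
  Lookup un      -> gets (Map.lookup un)
  GetByRegion _  -> gets Map.elems  -- region filtering elided in this sketch
  SetLapsed _    -> pure ()         -- lapsed flag not modelled here
  UnsetLapsed _  -> pure ()
  Purge un       -> modify (Map.delete un)

Running a test then just means composing this with runState or evalState (and whatever other pure interpreters the test needs) instead of the database-backed stack.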

Choosing Effects

Carving out your effects is probably the hardest thing to do here. What’s difficult is that you need to forget your instincts! Things that would make a good MTL-style typeclass are often terrible choices for effects.

Why’s that? There’s this extremely common pattern in the Haskell ecosystem for libraries that want to expose themselves to arbitrary applications’ monad stacks. To continue with the MonadDB example, it’s likely something like:

class (MonadIO m, MonadThrow m) => MonadDB m where
  liftDB :: DB a -> m a

While this works fine for a single underlying implementation, it’s an awful effect for the same reason: there’s only one interpretation! Any meaningful interpreter for MonadDB is equivalent to writing your own implementation of the database! It’s the same reason we don’t like IO: IO is so big that every possible interpretation of it would necessarily need to be able to talk to the file system, to the network, to threads, and everything else that we can do in IO.

Instead, when you’re looking for effects to pull out, you need to forget entirely about the implementation, and just look at the abstract interface. Don’t use an HTTP effect to talk to a REST API — it’s too big, and would require you to implement an entire HTTP protocol. Instead, just define an effect that talks to exactly the pieces of the API that you need to talk to. Forget that it’s REST entirely! That’s an implementation detail, and implementation details are the domain of the interpreter, not the effect.
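As a sketch of what “just the pieces you need” might look like, here is a small effect for talking to a hypothetical billing service. Every type and action name below is invented for illustration; the point is that nothing about HTTP or REST appears in the effect itself:

-- All of these types are stand-ins invented for this sketch.
newtype InvoiceId    = InvoiceId Int
newtype CustomerId   = CustomerId Int
newtype Cents        = Cents Int
newtype Invoice      = Invoice { invoiceTotal :: Cents }
data    ChargeResult = Charged | Declined

data Billing m a where
  GetInvoice     :: InvoiceId  -> Billing m Invoice
  ChargeCustomer :: CustomerId -> Cents -> Billing m ChargeResult

makeSem ''Billing

A production interpreter can implement these two actions in terms of the REST API, while a test interpreter can return canned values; neither leaks HTTP details into the application code.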

Furthermore, if you’re just using the standard Polysemy effects, pick the smallest effect that you can get away with. You’ll probably reach for Reader more often than you should. You don’t need to use Reader unless you need local — otherwise, prefer Input.
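For example, static configuration that is only ever read (never locally overridden) fits Input nicely. A minimal sketch, with a made-up Config type:

import Polysemy
import Polysemy.Input

newtype Config = Config { baseUrl :: String }

describeTarget :: Member (Input Config) r => Sem r String
describeTarget = do
  config <- input
  pure ("talking to " ++ baseUrl config)

-- At the edge of the program, the value is supplied exactly once, e.g.:
--   runInputConst (Config "https://example.invalid") describeTarget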

Summing Up

That’s all I have for today, but I have a few more posts in mind for this series: one on how to actually go about testing all of this stuff, and another on how to continue refactoring your new Polysemy codebase now that all of the IO has been removed.

October 02, 2021 10:46 PM

October 01, 2021

Brent Yorgey

Swarm: a lot can happen in a week

It’s been about a week since I put out an announcement and call for collaboration on a new game, Swarm. Since then, the response has been fantastic: lots of people have tried it out, a few have even streamed themselves playing it on Twitch, and there has been lots of development activity.

There’s still a long, long way to go before the game comes anywhere close to the vision for it, but we’ve made great progress! Some notable new features added since the initial announcement include:

  • New scan, upload, and install commands
  • Semicolons are no longer required between consecutive defs
  • Basic help panel, and panel shortcut keys
  • Dramatically reduced CPU usage when idle
  • An overhaul of parsing and pretty-printing of constants (makes adding new constants easier, and an important prerequisite for saving definitions and games)
  • Better handling of water (you can make curry now)!

A couple more exciting things in progress that should land very soon:

  • ASCII art recipes

  • Basic editor integration via LSP, so you can write Swarm programs in your favorite editor with automatically highlighted syntax and type errors.

And of course there are many other exciting things planned or in the works. Come join us!

by Brent at October 01, 2021 07:31 PM

September 30, 2021

Tweag I/O

A higher-order integrator for Hamiltonian Monte Carlo

Hamiltonian Monte Carlo (HMC) is an MCMC sampling algorithm which proposes new states based on the simulation of Hamilton’s equations of motion. One of the main ingredients of this algorithm is its integration scheme — how the equations are discretized and their solutions found. The standard choice for this is the classical leapfrog. In this blog post we present our empirical investigation of $U_7$ — an algorithm that is computationally more expensive, but also more accurate. This trade-off is not a simple one (after all, more precision comes at the cost of fewer tries), but our explorations show that this is a promising algorithm, which can outperform leapfrog in some cases.

We don’t assume prior knowledge of integration methods, but in case you are new to HMC or MCMC methods, a good starting point is our blog post series on the subject.

Introduction

Broadly speaking, the idea of HMC is that, given a previous state $x$ of our Markov chain, we draw a random momentum $v$ from a normal distribution and simulate the behaviour of a fictive particle with starting point $(x, v)$. This deterministic behaviour is simulated for some fixed time $t$. The final state $(x^\star, v^\star)$ of the particle after the time $t$ will then serve as the new proposal state of a Metropolis-Hastings algorithm.

The motion of the fictive particle is governed by the Hamiltonian $H(x,v) = K(v) + E(x)$, the sum of the kinetic ($K$) and potential ($E$) energies. The coordinates $(x, v)$ then solve Hamilton’s equations of motion:

$$\frac{\mathrm{d}x}{\mathrm{d}t} = \frac{\partial H}{\partial v}, \qquad \frac{\mathrm{d}v}{\mathrm{d}t} = -\frac{\partial H}{\partial x}.$$

By introducing $z = (x, v)$ and defining an operator $D_H$, the equations of motion can be written compactly as

$$\dot{z} = \begin{pmatrix} \dot x \\ \dot v \end{pmatrix} = \begin{pmatrix} \frac{\partial H}{\partial v} \\ -\frac{\partial H}{\partial x} \end{pmatrix} = D_H z.$$

The operator $D_H$ is a differential operator that uses first derivatives. It describes the change of any observable quantity with respect to the time evolution of a Hamiltonian system. The equations of motion are then formally solved by

$$z(t) = \exp(t D_H)\, z(0) = \exp\big(t (D_K + D_E)\big)\, z(0).$$

Here, $D_K$ and $D_E$ respectively describe the change of $z$ that is due to the kinetic and the potential energy. The full operator $\exp(t D_H)$ then describes the time evolution of the system — it maps $z(0)$ to $z(t)$. The solution of this equation depends crucially on the potential energy, which in the context of HMC relates to the probability distribution of interest via $E(x) = -\log p(x)$. The density function $p(x)$ is, in general, non-trivial and the Hamiltonian equations of motion can therefore not be solved analytically.

A general recipe for symplectic integration: splitting methods

We thus have to resort to numerical integration to get at least an approximate solution. As discussed in a footnote of Tweag’s HMC blog post, we can’t just use any integration scheme in HMC: we should make sure it obeys symplecticity. This is a crucial property of Hamilton’s equations of motion and means that they preserve the volume in $(x, v)$ space, ensuring that probabilities are propagated correctly. A very general way of deriving symplectic integrators of arbitrary order is given by splitting methods, as follows.

In 1995, Suzuki proposed a way to approximate expressions such as the formal solution of Hamilton’s equations, yielding in our case

$$\exp\big(t(D_K + D_E)\big) = \prod_{i=1}^{k/2} \exp(c_i t D_K)\, \exp(d_i t D_E) + \mathcal{O}(t^{k+1}),$$

where $\sum_{i=1}^{k} c_i = \sum_{i=1}^{k} d_i = 1$. You can think of this formula as a generalization of the identity $e^{m+n} = e^m \cdot e^n$ to operators. The error term is a result of the fact that operators generally do not commute.

The factors $\exp(c_i t D_K)$ correspond to an update of the position $x$, while the $\exp(d_i t D_E)$ correspond to an update of the momentum $v$.1

Now that we know how to come up with an approximation of the solution of the equations of motion, let’s look at a first example of such an approximation algorithm.

The Leapfrog

The Leapfrog algorithm is the standard integrator used in HMC. The intuition behind it is that we alternate between updating the position coordinate $x$ and the momentum variable $v$, but half a time step apart:

(source: Steve McMillan, Drexel University)

This behaviour has given the Leapfrog algorithm its name. More precisely, the updates look like the following,

$$x_{i+1} = x_i + v_{i+1/2}\,\Delta t, \qquad v_{i+3/2} = v_{i+1/2} + \left(-\frac{\partial}{\partial x} E(x_{i+1})\right)\Delta t.$$

As you might have noticed, you need to perform half a step for the momentum at the beginning and at the end. So, in terms of Suzuki’s factorization, the Leapfrog looks like this:

$$\text{Leapfrog} = U_3\, U_3 \cdots U_3,$$

where

$$U_3 = \exp\left(\tfrac{1}{2}\Delta t\, D_E\right)\exp(\Delta t\, D_K)\exp\left(\tfrac{1}{2}\Delta t\, D_E\right).$$

The coefficients are $c_1 = 0,\ c_2 = 1,\ d_1 = d_2 = \tfrac{1}{2}$.
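As a quick sanity check, these coefficients satisfy Suzuki’s normalization conditions from above:

$$c_1 + c_2 = 0 + 1 = 1, \qquad d_1 + d_2 = \tfrac{1}{2} + \tfrac{1}{2} = 1.$$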

If we further divide our time $t$ into $t = \text{time step} \cdot \text{trajectory length}$ and apply the Suzuki approximation $U_3$ trajectory-length-many times, we can implement this in Python as follows:

def integrate(x, v):

    # Opening half step for the momentum; the force is minus the
    # gradient of the potential energy.
    v += 1 / 2 * time_step * -gradient_pot_energy(x)

    for i in range(trajectory_length - 1):
        # Full position step followed by a full momentum step.
        x += time_step * v
        v += time_step * -gradient_pot_energy(x)

    # Closing full position step and half momentum step.
    x += time_step * v
    v += 1 / 2 * time_step * -gradient_pot_energy(x)

    return x, v

An important concept when talking about the accuracy of integration schemes is the order of an integrator: if $(x^\star, v^\star)$ is the exact solution after time $t$ and $(x_t, v_t)$ an approximation, then we say that the approximation is of $n$th order and write $\mathcal{O}(t^n)$ if $\Vert (x^\star, v^\star) - (x_t, v_t) \Vert \leq C \cdot t^n$ and $C$ is independent of $t$.

One can verify that $U_3$ is exact to first order in $\Delta t$.2 Furthermore, because of symmetry, $U_3$ needs to be of even order.3 Thus $U_3$ cannot be only a first-order approximation — it needs to be correct up to $\Delta t^2$. In this sense, $U_3$ is a second-order approximation, and so is the Leapfrog.

Now, you might wonder: why look further since we have found a method yielding a reasonably exact approximation? After all, we can always diminish the error by shortening the time step and increasing the trajectory length!

Well, one answer is that there might be a more efficient way to approximate the equations of motion!

The $U_7$

A seven-factor approximation, $U_7$, which can be molded into a five-factor form and which has fourth-order accuracy (that is, it is more precise than leapfrog), was first considered by Chin (1997). Its novelty lies in the usage of the second-order derivative of the potential energy. This comes along with a few more updates of $x$ and $v$ per step. A rediscovery by Chau et al. builds on Suzuki’s method discussed above, but is focused on quantum mechanical applications. We now sketch Chin’s more accessible way of deriving the $U_7$.

When we want to apply $e^A \cdot e^B \cdot e^C = e^{A+B+C}$ to operators, we must take into account that they do not commute — this identity therefore does not hold in the general case. However, we can use a series expansion, which, like a Taylor expansion, in our case involves higher order derivatives. Cutting off the expansion leaves us with an additional error, which is of order $\mathcal{O}(\Delta t^5)$. Consequently, the $U_7$ remains exact up to the fourth order.
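For reference (this expansion is not spelled out in the post itself), the series in question is the Baker–Campbell–Hausdorff formula, whose leading terms show how the non-commutativity enters through nested commutators:

$$e^{A}\, e^{B} = \exp\!\left(A + B + \tfrac{1}{2}[A,B] + \tfrac{1}{12}\big([A,[A,B]] + [B,[B,A]]\big) + \cdots\right).$$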

Whichever way the $U_7$ is derived, the newly formed term involves the second-order derivative, and the final $U_7$ factorization is given by

$$U_7 = \exp\left(\tfrac{1}{6}\Delta t\, D_E\right)\exp\left(\tfrac{1}{2}\Delta t\, D_K\right)\exp\left(\tfrac{2}{3}\Delta t\, D_{\tilde V}\right)\exp\left(\tfrac{1}{2}\Delta t\, D_K\right)\exp\left(\tfrac{1}{6}\Delta t\, D_E\right),$$

where $D_{\tilde V}$ is a differential operator reflecting the influence of a modified potential energy $V + \tfrac{1}{48}[\Delta t \nabla V]^2$, and thus effectively involves a second-order derivative.

A Python implementation of the algorithm described above would look like this:

def integrate(x, v):

    # Opening 1/6 momentum step.
    v += 1 / 6 * time_step * gradient_pot_energy(x)

    for i in range(trajectory_length - 1):
        # Half position step, momentum step using the Hessian-corrected
        # ("modified") potential, then another half position step.
        x += 1 / 2 * v * time_step
        v += (2 / 3 * time_step * (gradient_pot_energy(x)
            + time_step ** 2 / 24
            * np.matmul(hessian_log_prog(x), gradient_pot_energy(x))))
        x += 1 / 2 * v * time_step
        # The closing 1/6 step of this factor and the opening 1/6 step
        # of the next factor are combined into a single 1/3 step.
        v += 1 / 3 * time_step * gradient_pot_energy(x)

    # Final U7 factor, written out with its own closing 1/6 step.
    x += 1 / 2 * v * time_step
    v += (2 / 3 * time_step * (gradient_pot_energy(x)
        + time_step ** 2 / 24
        * np.matmul(hessian_log_prog(x), gradient_pot_energy(x))))
    x += 1 / 2 * v * time_step
    v += 1 / 6 * time_step * gradient_pot_energy(x)

    return x, v

Bear in mind that the higher accuracy achieved with $U_7$ comes with a non-negligible additional computational cost, namely evaluating the gradient twice instead of once and additionally evaluating the matrix of second derivatives.

Benchmarking leapfrog and $U_7$-based HMC

In this paper, Jun Hao Hue et al. benchmark the performance of the leapfrog and $U_7$ on various classical and quantum systems, but are not concerned with their use in HMC.

To compare the performance of the leapfrog and $U_7$ integration schemes in the context of HMC, we plug the above implementations into HMC and sample from two different probability distributions.

The first example is a 100-dimensional standard normal distribution. Because of the high symmetry of this distribution, we must be careful not to compare apples and oranges: if we integrate for different total times, the trajectory might double back and we would waste computational effort — avoiding this is the goal of a widely popular HMC variant called NUTS. We thus fix the total integration time (given by number of integration steps × time step) to ten time units and run HMC for different combinations of time step and number of integration steps. If we can use a larger time step, we have to perform fewer integration steps, which means fewer costly gradient and Hessian evaluations.
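In other words, with a fixed budget of $\Delta t \cdot L = 10$, the time step and the number of steps trade off directly (the concrete pairs below are illustrative, not the exact grid used in the experiments):

$$\Delta t = 0.5 \Rightarrow L = 20, \qquad \Delta t = 1 \Rightarrow L = 10, \qquad \Delta t = 2 \Rightarrow L = 5.$$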

(Figure: acceptance rate as a function of the time step for the 100-dimensional normal distribution, leapfrog vs. $U_7$.)

We find indeed that the acceptance rate for $U_7$ stays almost constant at almost one for a wide range of time steps, while the HMC implementation based on the leapfrog integration scheme shows rapidly diminishing acceptance rates. We currently cannot explain the local maximum in the leapfrog acceptance rate around a step size of $1.5$, but we suspect it has to do with the high symmetry of the normal distribution — perhaps the leapfrog is, for that step size / trajectory length combination, performing an additional U-turn that makes it double back towards more likely states. In any case, this confirms that we have implemented the $U_7$ integrator correctly and makes us even more excited to test it on a “real” system!

As a more practical application, we consider a simple, coarse-grained polymer model which could represent a biomolecule, like a protein or DNA. In Bayesian biomolecular structure determination, one seeks to infer the coordinates of atoms or coarse-grained modelling units (monomers) of such a polymer model from data obtained from biophysical experiments. This results in an intractable posterior distribution and MCMC methods such as HMC are used to sample from it.

In our case, we consider a polymer of $N = 30$ spherical particles with fictive springs between neighbouring particles. We also include a term that makes sure particles do not overlap much. Furthermore, we assume that we have measured pairwise distances for two particle pairs and that these measurements are drawn from a log-normal distribution. See this article for details on a very similar model. This setup results in a posterior distribution over $N \times 3 = 90$ parameters, from which we sample using HMC with either the leapfrog or the $U_7$ integrator. Drawing 10000 samples with a trajectory length of ten steps and varying time steps, we find the following results for the effective sample size (ESS) of the log-posterior probability and the acceptance rate:

(Figure: ESS of the log-posterior probability and acceptance rate as functions of the time step for the polymer model, leapfrog vs. $U_7$.)

We find that, just as for the 100-dimensional normal distribution, the $U_7$ HMC shows significantly increased acceptance rates compared to the leapfrog HMC. The calculation of the ESS shows that for the two smallest time steps tested, the estimated number of independent samples is much higher for the $U_7$-based HMC than for the standard implementation. It is important to note that in our experiments, if the acceptance rate gets very low, the ESS is likely vastly overestimated. We omit these erroneous data points; we suspect that for the third time step, too, the ESS obtained with standard HMC is smaller than shown.

Analysis of benchmark results

What does this mean with respect to absolute performance? Remember that while the $U_7$ yields better acceptance rates and higher ESS, it also requires more computing time: the computationally most expensive part of both integrators is evaluating the first and second derivatives of the log-probability. In both cases this requires the evaluation of all pairwise Euclidean distances between monomers, and these distance evaluations dominate the computational cost. We thus assume that the computational cost for evaluating the gradient and the second derivative is identical and equal to the cost of evaluating all pairwise distances. Note that we neglect all additional, implementation-specific overhead.

Under these coarse assumptions, we can thus estimate the computational effort for a $U_7$ iteration to be approximately twice the effort of a leapfrog iteration. Given that, based on the ESS estimation, we can achieve up to approximately seven times the number of independent samples with $U_7$, we conclude that the $U_7$ integrator indeed is a very promising way to boost HMC performance.
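As a rough back-of-the-envelope calculation, combining the factor-of-two cost estimate with the up-to-sevenfold ESS gain quoted above gives:

$$\frac{\text{ESS per unit cost with } U_7}{\text{ESS per unit cost with leapfrog}} \approx \frac{7}{2} = 3.5.$$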

Conclusion

I hope that you have gained a deeper understanding of the numerics behind Hamiltonian mechanics and that this blog post stimulated you to think more about alternative integration schemes for HMC. Symplectic integration is still an active field of research, and its applications outside physics in particular are just being explored; we surely can expect more cool results on different integration schemes in the future!

In a nutshell, we have seen that HMC does not necessarily rely on the leapfrog integrator and may even be better off with higher-order integration schemes such as $U_7$.


  1. See this Wikipedia section beginning from equation (6) for further details.

  2. Just use the definition of the order, plug in the definitions for $(x^\star, v^\star)$, $(x_t, v_t)$ and the series definition of the exponential function. Then, when multiplying the series, it is sufficient to consider only the summands that multiply up to a $t$-order of one, and you should be able to find a $C$ such that $\Vert(x^\star, v^\star) - (x_t, v_t)\Vert \leq C \cdot t$. Bear in mind that operators in general do not commute.

  3. For symmetric approximations ($U(t)U(-t) = 1$), the error terms cannot be of even order since then, intuitively speaking, the error would point in the same direction, because $t^{2n} = (-t)^{2n}$. $U(t)$ is the time evolution operator and since we only consider time-independent systems, $U(t)$ is symmetric in time, leaving no error behind when the product $U(t)U(-t)$ is applied.

September 30, 2021 12:00 AM

September 29, 2021

Gabriel Gonzalez

Fall-from-Grace: A ready-to-fork functional programming language

grace

I’m publishing a repository containing a programming language implementation named Fall-from-Grace (or “Grace” for short). The goal of this language is to improve the quality of domain-specific languages by providing a useful starting point for implementers to fork and build upon.

You can visit the project repository if you want to cut to the chase. The README already provides most of the information that you will need to begin using the language. This announcement post focuses more upon the history and motivation behind this project. Specifically, I imagine some people will want to understand how this work relates to my work on Dhall.

TL;DR: I created a functional language that is a superset of JSON, sort of like Jsonnet but with types and bidirectional type inference.

The motivation

The original motivation for this project was very different than the goal I finally settled upon. My original goal was to build a type checker with type inference for Nix, since this is something that a lot of people wanted (myself included). However, I bit off more than I could chew because Nix permits all sorts of stuff that is difficult to type-check (like computed record fields or the callPackages idiom in Nixpkgs).

I lost steam on my Nix type-checker (for now) but then I realized that I had still built a fairly sophisticated interpreter implementation along the way that other people might find useful. In particular, this was my first new interpreter project in years since I created Dhall, and I had learned quite a few lessons from Dhall about interpreter best practices.

“So”, I thought, “why not publish what I had created so far?”

Well, I can think of one very good reason why not: I want to continue to support and improve upon Dhall because people are using Dhall in production and I have neither the time nor the inclination to build yet another ecosystem around a new programming language. But somebody else might!

So I took the project in a totally different direction: publish an instructive implementation of a programming language that others could fork and build upon.

Unlike Dhall, this new language would not be encumbered by the need to support a language standard or multiple independent implementations, so I could go nuts with adding features that I omitted from Dhall. Also, this language would not be published in any way so that I could keep the implementation clear, opinionated, and maintainable. In other words, this language would be like an “executable tutorial”.

The starting point

I designed the initial feature set of this new language based on feedback I had received from Dhall’s skeptics. The rough language these people had in mind went something like this:

  • The language had to have bidirectional type inference

    The most common criticism I received about Dhall was about Dhall’s limited type inference.

  • The language had to have JSON-compatible syntax

    One of the things that had prevented people from using Dhall was that the migration story was not as smooth as, say, CUE (which is a superset of YAML) or Jsonnet (which is a superset of JSON), because Dhall was not a superset of any existing file format.

    Also, many people indicated that JSON syntax would be easier for beginners to pick up, since they would likely be comfortable with JSON if they had prior experience working with JavaScript or Python.

  • The language had to have JSON-compatible types

    JSON permits all sorts of silliness (like [ 1, [] ]), and people wanted a type system that can cope with that stuff, while still getting most of the benefits of working with types. Basically, they wanted something sort of like TypeScript’s type system.

  • They don’t want to run arbitrary code

    TypeScript already checks off most of the above points, but these people were looking for alternatives because they didn’t want to permit users to run arbitrary JavaScript. In other words, they don’t want a language that is “Pac-Man complete” and instead want something more limited as a clean slate for building their own domain-specific languages.

What I actually built

I created the Fall-from-Grace language, which is my best attempt to approximate the Dhall alternative that most people were looking for.

Fall-from-Grace has:

  • JSON-compatible syntax

  • Bidirectional type-inference and type-checking

  • A JSON-compatible type system

  • Dhall-style filepath and URL imports

  • Fast interpretation

  • Open records and open unions

    a.k.a. row polymorphism and polymorphic variants

  • Universal quantification and existential quantification

    a.k.a. “generics” and “typed holes”

One way to think of Grace is like “Dhall but with better type inference and JSON syntax”. For example, here is the Grace equivalent of the tutorial example from dhall-lang.org:

# ./users.ffg
let makeUser = \user ->
      let home       = "/home/" + user
      let privateKey = home + "/.ssh/id_ed25519"
      let publicKey  = privateKey + ".pub"

      in  { home: home, privateKey: privateKey, publicKey: publicKey }

in  [ makeUser "bill"
    , makeUser "jane"
    ]

… which produces this result:

$ grace interpret ./users.ffg
[ { "home": "/home/bill"
, "privateKey": "/home/bill/.ssh/id_ed25519"
, "publicKey": "/home/bill/.ssh/id_ed25519.pub"
}
, { "home": "/home/jane"
, "privateKey": "/home/jane/.ssh/id_ed25519"
, "publicKey": "/home/jane/.ssh/id_ed25519.pub"
}
]

Just like Dhall, Grace supports let expressions and anonymous functions for simplifying repetitive expressions. However, there are two differences here compared to Dhall:

  • Grace uses JSON-like syntax for records (i.e. : instead of = to separate key-value pairs)

    Because Grace is a superset of JSON you don’t need a separate grace-to-json tool like Dhall. The result of interpreting Grace code is already valid JSON.

  • Grace has better type inference and doesn’t require annotating the type of the user function argument

    The interpreter can work backwards to infer that the user function argument must have type Text based on how user is used.

Another way to think of Grace is as “Jsonnet + types + type inference”. Grace and Jsonnet are both programmable configuration languages and they’re both supersets of JSON, but Grace has a type system whereas Jsonnet does not.

The following example Grace code illustrates this by fetching all post titles from the Haskell subreddit:

# ./reddit-haskell.ffg
let input
      = https://www.reddit.com/r/haskell.json
      : { data: { children: List { data: { title: Text, ? }, ? }, ? }, ? }

in  List/map (\child -> child.data.title) input.data.children

… which at the time of this writing produces the following result:

$ grace interpret ./reddit-haskell.ffg
[ "Monthly Hask Anything (September 2021)"
, "Big problems at the timezone database"
, "How can Haskell programmers tolerate Space Leaks?"
, "async graph traversal in haskell"
, "[ANN] Call For Volunteers: Join The New \"Our Foundation Task Force\""
, "Math lesson to be learned here?"
, "In search a functional job"
, "Integer coordinate from String"
, "New to Haskell"
, "Variable not in scope error"
, "HF Technical Track Elections - Announcements"
, "Learning Haskell by building a static blog generator"
, "Haskell Foundation Board meeting minutes 2021-09-23"
, "Haskell extension 1.7.0 VS Code crashing"
, "[question] Nix/Obelisk with cabal packages intended for hackage"
, "George Wilson - Cultivating an Engineering Dialect (YOW! Lambda Jam 2021)"
, "A new programming game, Swarm by Brent Yorgey"
, "Scheme in 48 hours, First chapter issue"
, "Recursively delete JSON keys"
, "Issue 282 :: Haskell Weekly newsletter"
, "Haskell"
, "Yaml parsing with preserved line numbers"
, "I would like some input on my code if anybody have time. I recently discovered that i can use variables in Haskell (thought one could not use them for some reason). Would just like some input on how i have done it."
, "Diehl's comments on Haskell numbers confuse..."
, "Please explain this syntax"
, "Can Haskell automatically fuse folds?"
]

Here we can see:

  • Dhall-style URL imports

    We can import JSON (or any Grace code) from a URL just by pasting the URL into our code

  • Any JSON expression (like haskell.json) is also a valid Grace expression that we can transform

  • We can give a partial type signature with holes (i.e. ?) specifying which parts we care about and which parts to ignore

Grace’s type system is even more sophisticated than that example lets on. For example, if you ask grace for the type of this anonymous function from the example:

\child -> child.data.title

… then the interpreter will infer the most general type possible without any assistance from the programmer:

forall (a : Type) .
forall (b : Fields) .
forall (c : Fields) .
{ data: { title: a, b }, c } -> a

This is an example of row-polymorphism (what Grace calls “open records”).

Grace also supports polymorphic variants (what Grace calls “open unions”), so you can wrap values of different types in constructors without having to declare any union type in advance:

[ GitHub
    { repository: "https://github.com/Gabriel439/Haskell-Turtle-Library.git"
    , revision: "ae5edf227b515b34c1cb6c89d9c58ea0eece12d5"
    }
, Local { path: "~/proj/optparse-applicative" }
, Local { path: "~/proj/discrimination" }
, Hackage { package: "lens", version: "4.15.4" }
, GitHub
    { repository: "https://github.com/haskell/text.git"
    , revision: "ccbfabedea1cf5b38ff19f37549feaf01225e537"
    }
, Local { path: "~/proj/servant-swagger" }
, Hackage { package: "aeson", version: "1.2.3.0" }
]

… and the interpreter automatically infers the most general type for that, too:

forall (a : Alternatives) .
  List
    < GitHub: { repository: Text, revision: Text }
    | Local: { path: Text }
    | Hackage: { package: Text, version: Text }
    | a
    >

Open records + open unions + type inference mean that the language does not require any data declarations to process input. The interpreter infers the shape of the data from how the data is used.

Conclusion

If you want to learn more about Grace then you should check out the README, which goes into more detail about the language features, and also includes a brief tutorial.

I also want to conclude by briefly mentioning some other secondary motivations for this project:

  • I want to cement Haskell’s status as the language of choice for implementing interpreters

    I gave a longer talk on this subject where I argue that Haskell can go mainstream by cornering the “market” for interpreted languages.

  • I hope to promote certain Haskell best practices for implementing interpreters

    Grace provides a model project for implementing an interpreted language in Haskell, including project organization, choice of dependencies, and choice of algorithms. Even if people choose to not fork Grace I hope that this project will provide some useful opinionated decisions to help get them going.

  • I mentor several people learning programming language theory and I wanted instructive example code to reference when teaching them

    One of the hardest parts about teaching programming language theory is that it’s hard to explain how to combine language features from more than one paper. Grace provides the complete picture by showing how to mix together multiple advanced features.

  • I wanted to prototype a few language features for Dhall’s language standard

    For example, I wanted to see how realistic it would be to standardize bidirectional type-checking for Dhall. I may write a follow-up post on what I learned regarding that.

  • I also wanted to prototype a few implementation techniques for the Haskell implementation of Dhall

    The Haskell implementation of Dhall is complicated enough that it’s hard to test drive some implementation improvements I had in mind, but Grace was small enough that I could learn and iterate on some ideas more quickly.

I don’t plan on building an ecosystem around Grace, although anybody who is interested in doing so can freely fork the language. Now that Grace is complete I plan on using what I learned from Grace to continue improving Dhall.

I also don’t plan on publishing Grace as a package to Hackage nor do I plan to provide an API for customizing the language behavior (other than forking the language). I view the project as an educational resource and not a package that people should depend on directly, because I don’t commit to supporting this project in production.

I do believe Grace can be forked to create a “Dhall 2.0”, but if that happens such an effort will need to be led by somebody else other than me. The main reason why is that I’d rather preserve Grace as a teaching tool and I don’t have the time to productionize Grace (productionizing Dhall was already a big lift for me).

However, I still might fork Grace in the future for other reasons (including a second attempt at implementing a type-checker for Nix).

by Gabriella Gonzalez (noreply@blogger.com) at September 29, 2021 02:27 PM

Michael Snoyman

Babies and OSS maintenance

I'm very happy to let everyone know: after some fun times with preterm labor, Miriam and I welcomed two new babies into the world this past Saturday. Between the hospital trips and the Jewish holiday season (we just finished the final holiday of this season, Shmini Atzeret), I didn't have a chance to let everyone know. But I finally got around to an announcement tweet this morning, which has some pictures:

Before the babies were born, I had thought about putting out a tweet about open source maintenance, but never got around to it. So in the few hours I have now before heading off to pick up the kids from the hospital, I thought I'd write up something brief.

My OSS maintenance

I try to generally stay responsive on my projects, but odds are pretty good that I'm going to fall short of the mark in the next few weeks/months. It's true that I have a bit of experience with managing kids and OSS, but twins are a new curveball I've never encountered before. The main points I want to get across are:

  1. Apologies in advance for delays, or worse, me completely missing some conversations
  2. Now's a great time to volunteer as a comaintainer on any of the packages I maintain

How I maintain packages

I've never actually codified my current stance on OSS project maintenance, so it's worth getting out there. I'll include Haskell-specific concepts in here, though most of what I say would apply to non-Haskell projects too.

  1. Stability of projects is a huge feature in its own right. It's only with great hesitation that I'll include breaking changes in packages these days. Going along with that, when it comes to new and experimental features, I'll usually request they be put in separate packages when possible to avoid version dependency churn on established packages.

  2. My favorite way of including changes is pull requests. I used to be much more trigger-happy to implement new features and write bug fixes myself. But simply the constraints of time, plus the desire to include more people in project maintenance, has changed this. If you've opened an issue on a project of mine in the past few years, you probably got the response "PR welcome."

  3. When reviewing a PR, I'll ask for it to include:

    • The code change itself, together with inline documentation
    • A version number bump in the .cabal or package.yaml file
    • An entry in the ChangeLog
    • A @since Haddock comment for any newly exported identifiers (see the small example after this list)
  4. Once a PR is ready to merge, I want to see all of CI pass, and then follow up with both merging and releasing to Hackage. If I don't immediately release to Hackage, I'm going to forget to do so. And there's generally no good reason to hold off on such releases. Also, each release should include a Git tag for that release.

  5. There are a few time saving mechanisms I've taken on over the years as well. These include:

    • I won't spend time on Hackage revisions. I'm not starting a fight here, I'm stating a fact. When people ask me to make revisions on Hackage to fix the dependency solver, my response will almost always be: I'm happy to give you Hackage maintainer rights to do so yourself, but I won't do it.
    • GHC has overly aggressive releases which cause a lot of breakage. I won't guarantee compatibility with more than three GHC major versions going forward (e.g. GHC 8.6, 8.8, and 8.10). Even maintaining the CI matrix for more than that is a major burden. I realize dropping support for 1.5 year old compilers is aggressive, and I'd prefer not to do that. But it's a simple cost/benefit trade-off here.
    • I maintain a lot of my projects in "mega-repos", or more appropriately named these days monorepos. I use the tools mega-sdist to help maintain releases of these packages.
    • When it comes to dependency bounds in my .cabal/package.yaml files, I will include lower bounds on my packages in two cases:
      • I know for a fact that I don't support an old version of a package.
      • To exclude versions of GHC (via a base-library bound) that my CI scripts don't cover.
    • I almost never include upper bounds on my dependencies, since it causes more work than it saves. (Yes, I know that this is a topic of contention, but these are my personal rules.) If a new release of a dependency breaks my package, by far the best way to fix that is to release a new version with support for both the old and new version. This has the least ecosystem impact of all alternatives I've tried.
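To make the review checklist above concrete, here is a small, hypothetical example of what a newly exported identifier might look like in a PR (the module and function are invented for illustration; the ChangeLog would gain a matching entry and the package version a minor bump):

module Data.Text.Extra (chunksOf') where

import           Data.Text (Text)
import qualified Data.Text as T

-- | Split a 'Text' into chunks of at most the given length,
-- e.g. @chunksOf' 2 "abcde" == ["ab", "cd", "e"]@.
--
-- @since 0.2.3.0
chunksOf' :: Int -> Text -> [Text]
chunksOf' n t
  | n <= 0 || T.null t = []
  | otherwise          = T.take n t : chunksOf' n (T.drop n t)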

I'm sure I've missed some things, and if I think of them I'll try to update this post in the future. But hopefully this gives some clear guidelines for people considering getting involved in comaintaining.

Preferred CI system

My OSS projects have used Travis, AppVeyor, and Azure CI in the past for CI systems. At this point, I consider GitHub Actions the easiest and most integrated experience, and wherever possible I've been moving my CI systems over to it. I no longer maintain dual Cabal/Stack build systems, since it's simply too much of a burden to maintain. A nice example of a Stack-based GitHub Actions script is available in Conduit at https://github.com/snoyberg/conduit/blob/master/.github/workflows/tests.yml.

I haven't moved all of my projects over to GitHub Actions. But if you're interested in picking up co-maintenance on any projects, switching over the CI system would be a great first step in making them more maintainable.

Which packages do I maintain?

The easiest place to get a comprehensive list of Haskell packages I maintain is the Hackage user page:

https://hackage.haskell.org/user/MichaelSnoyman

Note that there are many packages in that list that are deprecated, so just because a package exists there doesn't mean it's in need of a comaintainer.

Also, going along with this, it's worth recalling a point above. I'm trying for the most part to keep my packages stable, which means few breaking changes, and reduced feature expansion. As a result, many of the packages above require little in the way of maintenance, outside of dealing with breakages coming from upstream packages or, more likely, GHC itself.

Special call out: Stack

Stack is by far the largest project I maintain. I don't maintain it alone, but none of the maintainers are spending a significant amount of time on it. Frankly, for all of my needs, it's checking the boxes right now. Most of my pain points come from changes coming upstream from GHC or Cabal causing breakage, or introducing new features (like multiple libraries per package) that require large overhauls to how Stack operates.

I haven't said this before, but I'll say it now: I'm not interested in investing any time in staying on that treadmill, or introducing new features to Stack. I had hoped that with the Haskell Foundation launch this year, we would have an affiliation process for projects that allowed better interproject communication and reduced the maintenance burden. That never came into existence, and so now I feel pretty comfortable in saying: outside of making sure Stack continues to work for my primary use cases, I'm not going to be investing my own time much going forward.

People are free to take those statements as they will. I'm not sure how large an overlap there is between Stack users and people looking to use the latest GHC and Cabal features that it doesn't support. I know that it becomes a blocker for the Stackage team sometimes. And I know regressions in Stack with newer GHC/Cabal versions (like overly aggressive recompilation, or broken deregister support for private libraries) causes the Stackage team pain and suffering. I'd love to see these addressed. But I'm not going to do it myself.

In other words, this is a more serious call to action than I've made previously. If people want to see changes and improvements to Stack, you need to get involved personally. I'll continue maintaining the project in its current state, together with other maintainers currently doing so. But in the past few months, as I've been thinking about where I wanted to spend my limited time after the babies were born, I decided that it was time to call out Stack as needing motivated maintainers, or to see it sit in a feature complete, non-evolving state. I'm quite happy with either outcome.

How to volunteer as comaintainer

In many ways I timed this blog post terribly. I should have put it out two months ago, when I still had time to deal with incoming messages. But I guess better late than never!

If you're interested in becoming a comaintainer on a package, the best way to go about is to open a GitHub issue stating:

  • That you want to be a comaintainer
  • Which packages you want to comaintain (especially necessary for monorepo projects)
  • Your Hackage username

I'm going to try prioritizing responses to these kinds of requests over other OSS duties. Once you've got access to the repos, I'd make one more request: please update the README to include yourself as a comaintainer of the package.

Thank you!

I'm already getting lots of congratulations messages on Twitter, and I won't have a chance to respond to everyone. So I'll put down here: thank you! Miriam and I are both very touched to be part of an online community that we can share our simchas with.

September 29, 2021 12:00 AM

September 28, 2021

Magnus Therning

Using lens to set a value based on another

I started writing a small tool for work that consumes YAML files and combines the data into a single YAML file. To be specific it consumes YAML files containing snippets of service specification for Docker Compose and it produces a YAML file for use with docker-compose. Besides being useful to me, I thought it'd also be a good way to get some experience with lens.

The first transformation I wanted to write was one that puts in the correct image name. So, only slightly simplified, it is transforming

panda:
    x-image: panda
goat:
    x-image: goat
tapir:
    image: incorrent
    x-image: tapir

into

panda:
    image: panda:latest
    x-image: panda
goat:
    image: goat:latest
    x-image: goat
tapir:
    image: tapir:latest
    x-image: tapir

That is, it creates a new key/value pair in each object based on the value of x-image in the same object.

First approach

The first approach I came up with was to traverse the sub-objects and apply a function that adds the image key.

setImage :: Value -> Value
setImage y = y & members %~ setImg
  where
    setImg o =
        o
            & _Object . at "image"
            ?~ String (o ^. key "x-image" . _String <> ":latest")

It did make me wonder if this kind of problem, setting a value based on another value, isn't so common that there's a nicer solution to it. Perhaps coded up in a combinator that isn't mentioned in Optics By Example (or maybe I've forgotten it was mentioned). That led me to ask around a bit, which leads to approach two.

Second approach

Arguably there isn't much difference, it's still traversing the sub-objects and applying a function. The function makes use of view being run in a monad and ASetter being defined with Identity (a monad).

setImage' :: Value -> Value
setImage' y =
    y
        & members . _Object
        %~ (set (at "image") . (_Just . _String %~ (<> ":latest")) =<< view (at "x-image"))

I haven't made up my mind on whether I like this better than the first. It's disappointingly similar to the first one.

Third approach

Then I thought it might be nice to split the fetching of x-image values from the addition of image key/value pairs. By extracting with an index it's possible to keep track of which sub-object each x-image value comes from. The two steps can then be combined using foldl.

setImage'' :: Value -> Value
setImage'' y = foldl setOne y vals
  where
    vals = y ^@.. members <. key "x-image" . _String
    setOne y' (objKey, value) =
        y'
            & key objKey . _Object . at "image"
            ?~ String (value <> ":latest")

I'm not convinced though. I guess I'm still holding out for a brilliant combinator that fits my problem perfectly.

Please point me to "the perfect solution" if you have one, or if you just have some general tips on optics that would make my code clearer, or shorter, or more elegant, or maybe just more lens-y.

September 28, 2021 11:21 AM

September 25, 2021

Chris Smith 2

September Virtual CoHack Recap

The September chapter of the Virtual Haskell CoHack is now past. Here’s how it went:

  • We had 12 people in attendance overall, ranging from experienced Haskell programmers to the novice and intermediate level. We had a great time getting to know each other and hearing each other’s Haskell stories.
  • One group worked on getting started with the implementation of the lambda cases GHC proposal. They made significant progress on lexing and parsing work to implement the feature.
  • Another group worked on explainable-predicates, implementing a number of minor feature requests and culminating in the implementation of qADT, a powerful Template Haskell combinator for convenient matching of algebraic data types with predicates for the fields. A new version has been released.
  • Yet another group worked on documentation for Data.Foldable, focusing on making the documentation more precise, accessible, and helpful with examples and clear language.

We’ve already got the next Virtual Haskell CoHack scheduled, so go ahead and sign up if you’d like to join the fun next time around.

by Chris Smith at September 25, 2021 11:18 PM

September 24, 2021

Brent Yorgey

Swarm: preview and call for collaboration

For about a month now I have been working on building a game1, tentatively titled Swarm. It’s nowhere near finished, but it has at least reached a point where I’m not embarrassed to show it off. I would love to hear feedback, and I would especially love to have others contribute! Read on for more details.

Swarm is a 2D tile-based resource gathering game, but with a twist: the only way you can interact with the world is by building and programming robots. And there’s another twist: the kinds of commands your robots can execute, and the kinds of programming language features they can interpret, depends on what devices they have installed; and you can create new devices only by gathering resources. So you start out with only very basic capabilities and have to bootstrap your way into more sophisticated forms of exploration and resource collection.

I guess you could say it’s kind of like a cross between Minecraft, Factorio, and Karel the Robot, but with a much cooler programming language (lambda calculus + polymorphism + recursion + exceptions + a command monad for first-class imperative programs + a bunch of other stuff).

The game is far from complete, and especially needs a lot more depth in terms of the kinds of devices and levels of abstraction you can build. But for me at least, it has already crossed the line into something that is actually somewhat fun to play.

If it sounds interesting to you, give it a spin! Take a look at the README and the tutorial. If you’re interested in contributing to development, check out the CONTRIBUTING file and the GitHub issue tracker, which I have populated with a plethora of tasks of varying difficulty. This could be a great project to contribute to especially if you’re relatively new to Haskell; I try to keep everything well-organized and well-commented, and am happy to help guide new contributors.


  1. Can you tell I am on sabbatical?

by Brent at September 24, 2021 02:51 AM

September 23, 2021

Tweag I/O

Functional data pipelines with funflow2

As the data science and machine learning fields have grown over the past decade, so has the number of data pipelining frameworks. What started out largely as a set of tools for extract-transform-load (ETL) processes has expanded into a diverse ecosystem of libraries, all of which aim to provide data teams with the ability to move and process their data. Apache Airflow, Beam, Luigi, Azkaban — the list goes on and on.

As users of several of the above frameworks, Tweag has a special interest in data pipelines. While working with them, we have observed a few common shortcomings which make writing and debugging data pipelines more complex than it needs to be. These shortcomings include limited composability of pipeline components, as well as minimal support for static analysis of pipelines. This second issue can be especially annoying when executing pipelines in a machine learning context, where pipelines are often defined as a Directed Acyclic Graph (DAG) of long-running tasks (e.g. training a model). In these situations, early identification of a pipeline doomed to fail due to problems like configuration errors can spare a great deal of wasted compute time and resources.

Additionally, while many data pipeline frameworks provide some support for choosing the execution environment of the pipeline (e.g. running locally vs. on a Spark or Kubernetes cluster), not all provide control over the execution logic of the tasks themselves. This is a common limitation of workflow-oriented frameworks, like Apache Airflow, that couple a task’s definition to its execution logic by combining them in a single class. This lack of modularity makes it difficult to do things like write integration tests for pipeline components.

Enter: funflow2

funflow2 is the successor of funflow, a Haskell library for writing workflows in a functional style. funflow2 makes use of kernmantle to offer several advantages over the original funflow, including greater extensibility, additional type-level controls, and a more feature-rich interpretation layer for analyzing pipelines before execution.

An ICFP Haskell Symposium 2020 paper provides a deep dive into kernmantle’s design and features.

funflow2 aims to address the limitations we have observed in other data pipeline frameworks while providing a simple, high-level API to express pipelines in a functional style. Let’s take a closer look.

Composability

In funflow2, a pipeline is represented by a Flow data type. Flows have the nice property of being composable, which allows pipeline subcomponents to be recombined with other components to create new pipelines.

To illustrate this point, we will compare two simple Apache Airflow and funflow2 pipelines, each with three sequential tasks that execute a function. In Airflow, this can be written as follows.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

dag = DAG(
   "example1",
   schedule_interval=None,
   default_args={"start_date": datetime(2021, 10, 1)},
)

gen_data = PythonOperator(
   python_callable=lambda: 1,
   op_args=None,
   task_id="task-gen-data",
   dag=dag,
)

add_one = PythonOperator(
   python_callable=lambda x: x + 1,
   op_args=[gen_data.output],
   task_id="task-add-one",
   dag=dag,
)

print_result = PythonOperator(
   python_callable=lambda x: print(x),
   op_args=[add_one.output],
   task_id="task-print-result",
   dag=dag,
)

In Airflow it’s simple to define a DAG of tasks, but it’s tricky to re-use it among pipelines. Airflow provides support for a SubDagOperator to trigger other DAGs, but we still end up with two completely separate DAG objects, forcing us to manage them individually.

With funflow2, reusability is much simpler. Since Flows are designed to be composable with Arrows, we can just write:

{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GADTs #-}

import Funflow (pureFlow, ioFlow)
import Control.Arrow ((>>>))

-- Note: the >>> operator is used to chain together two Flows sequentially
genData = pureFlow (const 1)
addOne = pureFlow (\x -> x + 1)
printResult = ioFlow (\x -> print x)
example1 = genData >>> addOne >>> printResult

anotherFlow = example1 >>> pureFlow (\_ -> "anotherFlow")

It’s as simple as that!
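
As a usage note, and assuming runFlow has the shape suggested by the runFlow flow () example later in this post (roughly Flow input output -> input -> IO output), the composed pipeline could presumably be executed along these lines:

import Funflow (runFlow)   -- assumed export, matching the later example

main :: IO ()
main = do
  result <- runFlow anotherFlow ()
  print result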

Identifying errors early

One of the most frustrating things that can happen when executing a pipeline of long-running tasks is to discover that one of the tasks towards the end of the pipeline was misconfigured - leading to wasted developer time and compute resources. Our original funflow package sought to alleviate this pain point by emphasizing resumability. Resumability was achieved through caching results as pipeline execution proceeded. funflow2 supports resumability using the same caching approach as the original funflow.

The potential time and resource savings provided by caching are limited, however, in at least two ways. First, since pipeline tasks are structured as atomic units, a failed pipeline cannot recover the work that was successfully completed within a failing task before the failure occurred. Perhaps even more importantly, with resumability a failed pipeline may still cause lost efficiency due to mental context switching. For instance, maybe you start a pipeline run, switch to another project (perhaps over multiple days), and then find that your pipeline has failed. You might find yourself asking the question, “what was I trying to achieve in the task that failed?” Earlier discovery of errors is less wasteful with mental resources and less stressful for a pipeline user.

As a pipeline author it is useful to identify misconfigurations early in the development process, either through compilation errors or through early failures at run-time. While many pipelining frameworks divide a program’s lifecycle into compile-time and run-time, kernmantle (and thus funflow2) distinguishes three phases: compile-time, config-time and run-time. In the intermediate config-time phase, the pipeline is interpreted and configured, which allows for static analysis. As we’ll see below, funflow2 makes use of both the compile-time and config-time phases to ensure early failure.

Type errors, detectable at compilation time

There are a number of ways in which a pipeline author can accidentally misconfigure a pipeline. One example of misconfiguration is when there’s a mismatch between a task’s input type and the type of argument passed by an upstream task. This kind of issue can plague pipelines built with a library written in a dynamically typed language like Python (e.g. Airflow). It’s a less common issue, though, for pipelines written in statically typed languages like Java or Scala (e.g. scio). Since funflow2 pipelines are written in Haskell, the language’s static type checking allows us to catch issues like this at compile time.

For example, the following pipeline will fail to compile since flow3 attempts to pass the string output of flow1 as an input to flow2, which expects an integer.

-- Note: `const` is a built-in function which ignores its argument
-- and always returns the specified value
flow1 :: Flow () String
flow1 = pureFlow (const "42")

flow2 :: Flow Integer Integer
flow2 = pureFlow (+1)
flow3 = flow1 >>> flow2
Couldn't match type ‘Integer’ with ‘String’

Value errors, detectable at config time

A more subtle and less easily detectable kind of misconfiguration occurs when a task is configured in a way that passes compile-time type checks but is guaranteed to be invalid at runtime. Funflow2 is built on top of kernmantle, which provides us with a convenient layer for extracting static information from tasks in a pre-execution phase called config-time, after the pipeline has been loaded but before any task has run. This layer can be used for early detection of errors in a pipeline and ensuring early failure if something is awry. For example, let’s try running a Flow which attempts to run a Docker container with an image tag that does not exist. This pipeline can be compiled without complaint but is doomed to fail at run-time.

{-# LANGUAGE OverloadedStrings #-}

import Funflow (dockerFlow)
import Funflow.Tasks.Docker (DockerTaskConfig (..), DockerTaskInput (..))


failingFlow = dockerFlow $
  DockerTaskConfig {
    image = "alpine:this-tag-does-not-exist",
    command = "echo",
    args = ["hello"]
  }

-- Flows created with dockerFlow expect a DockerTaskInput,
-- and we can just create an empty value for the sake of this
-- example.
emptyInput = DockerTaskInput {inputBindings = [], argsVals = mempty}


flow = ioFlow (\_ -> print "start of flow")
  >>> pureFlow (const emptyInput)          -- Constructs dockerFlow input
  >>> failingFlow
  >>> ioFlow (\_ -> print "end of flow")

Attempting to run this pipeline gives us the following error.

runFlow flow ()
Found docker images, pulling...
Pulling docker image: alpine:this-tag-does-not-exist
ImagePullError "Request failed with status 404: \"Not Found\" …"

Note that “start of flow” is never printed; the first task in the pipeline, which would print that, never actually executes. This is because funflow2’s default pipeline runner extracts the identifiers of all images required to run a pipeline at config-time and attempts to pull them before starting the actual pipeline run.

For another example of this fail-fast behavior when a pipeline is misconfigured, have a look at the configuration tutorial.

Aside: workflows and build systems

The connection between build systems and the kinds of task-based workflows / pipelines mentioned here is no secret and is a topic we have mentioned in previous blog posts. Despite serving different domains, workflow tools like funflow2 and build tools like Bazel or mill all seek to provide users with the ability to execute a graph of tasks in a repeatable way. Here we briefly consider a couple of key parallels between build systems and funflow2.

Early cutoff is a common property of build systems and refers to a build system’s ability to halt once it has determined that no dependency of a downstream task has changed since the most recent build1. While funflow2 is not a build system, it can be used to mimic one and even exhibits early cutoff, owing to its caching strategy using content-addressable storage. If the hash determined by a task and its inputs has not changed since a previous pipeline run, the work need not be repeated.
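
As a rough standalone illustration of that idea (this is just a sketch, not the cas-store API), a content-addressed cache keyed by a hash of the task and its inputs skips any work whose key it has already seen:

import Data.IORef
import qualified Data.Map as Map

type Key = String  -- stand-in for a real content hash

-- Run an action only if its key (derived from the task and its inputs)
-- has not been seen before; otherwise reuse the stored result.
cachedRun :: IORef (Map.Map Key String) -> Key -> IO String -> IO String
cachedRun cacheRef key action = do
  cache <- readIORef cacheRef
  case Map.lookup key cache of
    Just out -> pure out  -- early cutoff: nothing upstream changed
    Nothing -> do
      out <- action
      modifyIORef' cacheRef (Map.insert key out)
      pure out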

Another property which distinguishes build systems is whether or not they support dynamic dependencies, i.e. dependency relationships that may themselves vary with task output(s). This property depends on the build system’s approach to modeling graphs of tasks, and in a functional paradigm is determined by whether or not tasks are modeled using monadic effects1. funflow2 uses arrows and not monads for composing its task graph and therefore falls within the camp of tools which, like Bazel, do not support dynamic dependencies. The use of a static task graph is the key to pre-execution dependency collection and is what allows funflow2 to support the execution-free interpretation of tasks to detect invalid configuration. If tasks were linked in a monadic fashion, this would be impossible.
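
A generic illustration of the distinction, using plain Kleisli arrows rather than funflow2’s Flow type: with Arrow-style composition the whole graph is fixed before anything runs, whereas with monadic composition the next step can depend on a value only available at run time.

import Control.Arrow (Kleisli (..), (>>>))

-- The shape of this pipeline is known statically: two steps, in order.
staticPipeline :: Kleisli IO () Int
staticPipeline = Kleisli (\_ -> pure 1) >>> Kleisli (\x -> pure (x + 1))

-- Here the second step depends on a value produced at run time, so the
-- task graph cannot be known without executing the first step.
dynamicPipeline :: IO Int
dynamicPipeline = do
  x <- pure (1 :: Int)
  if x > 0 then pure (x + 1) else pure 0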

Modularity

In addition to limited composability and fail-late behavior, tight coupling between a task and its execution logic is another common weakness among existing task-based data pipelining frameworks: the data defining a task is inseparably bound to the specific logic for how the task is executed. In a closely coupled context, altering task execution logic requires entire redefinition of the task itself, e.g. by subclassing. This confounds granular testing and nudges pipeline validation toward an end-to-end style. The resulting coarse resolution view of a test failure complicates diagnosis and remedy of a bug.

In funflow2, an interpreter defines task execution logic and is separate from the task definition itself. Effectively, an interpreter transforms a task, which is treated as data, into an executable action. Separating concerns this way allows development of custom interpreters for integration testing, project-specific execution requirements, and more. Using interpreters, you can do things like flip between test and deployment execution contexts while working with the same pipeline definition. Furthermore, this modularity allows for greater static analysis, yielding enhanced fail-fast behavior.

While an example implementation is out of scope for this post, an example of custom tasks and interpreters is available in the extensibility tutorial on the funflow2 project website.

Similarities with funflow

funflow2 uses a completely new backend (kernmantle) and therefore has major API differences from the original funflow. Nonetheless, many of the useful features of funflow are available in funflow2:

Arrow syntax

The most immediately evident similarity between the original funflow and funflow2 is syntactic. Since funflow2 still models flows using arrows, Arrow syntax can still be used to define and compose flows.
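
For instance, a hedged sketch of the three-step pipeline from the composability example, rewritten with arrow (proc) notation (assuming the Arrows extension and the genData, addOne, and printResult definitions from above):

{-# LANGUAGE Arrows #-}

-- Same pipeline as example1, expressed with arrow notation.
example1Proc = proc () -> do
  x <- genData -< ()
  y <- addOne -< x
  printResult -< y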

Caching

Like its predecessor, the funflow2 project uses the cas-store package which provides caching functionality using content-addressable storage.

Compatibility

While funflow2’s API differs significantly from funflow, it provides a module with aliases for some of the core functions from funflow such as step and stepIO to help simplify migration. Many of the core examples from funflow have also been ported to funflow2, such as the C compilation tutorial or the custom make example discussed in an earlier blog post.

Next steps

Interested in trying out some hands-on examples? We’ve prepared a set of tutorials on the funflow2 website. Alternatively, each tutorial notebook can be run locally using the provided Nix shell. You may also initialize a funflow2 project using our cookiecutter template.

funflow2 is still in its early stages, and so far most of the development on it has focused on building the Flow API, along with a few tasks and interpreters. One area in which other data pipeline frameworks excel is in providing a high level of interoperability through a wide range of predefined tasks for common operations, including running database queries and uploading data to cloud storage. We would welcome external contributions to provide these and other common task types, so please open an issue or pull request if this interests you.

Thanks for reading, and stay tuned for future updates on funflow2!


  1. Refer to the paper “Build Systems a la Carte” for much more discussion of minimality, early cutoff, and other properties of build systems and pipelines.

September 23, 2021 12:00 AM

Gil Mizrahi

A new project-oriented Haskell book

This is an announcement and a 'request for comments' for a new project-oriented, online, and free, Haskell book: Learn Haskell by building a blog generator (or LHBG for short).

This book is yet another attempt of mine to try and help make Haskell a bit more approachable. I think Haskell is a great language and I've been enjoying using it for most of my projects for several years now.

I wanted to create a short book aimed at complete beginners to the language which focuses on the practice of writing Haskell programs, including idioms, techniques, and general software development, rather than the concepts and language features - for which there are many resources available.

This book is still pretty much in an 'alpha' kind of stage, meaning that it probably needs some feedback-driven editing, but I think it is ready to be shared. I hope this book will be useful to some people trying to learn Haskell and write programs with it.

Do let me know if you have any comments or feedback on it. Especially if you are a new Haskeller getting started with the language!

by Gil at September 23, 2021 12:00 AM

September 22, 2021

Brent Yorgey

Competitive programming in Haskell: Codeforces Educational Round 114

Yesterday morning I competed in Educational Round 114 on codeforces.com, using only Haskell. It is somewhat annoying since it does not support as many Haskell libraries as Open Kattis (e.g. no unordered-containers, split, or vector); but on the other hand, a lot of really top competitive programmers are active there, and I enjoy occasionally participating in a timed contest like this when I am able.

WARNING: here be spoilers! Stop reading now if you’d like to try solving the contest problems yourself. (However, Codeforces has an editorial with explanations and solutions already posted, so I’m not giving anything away that isn’t already public.) I’m going to post my (unedited) code for each problem, but without all the imports and LANGUAGE extensions and whatnot; hopefully that stuff should be easy to infer.

Problem A – Regular Bracket Sequences

In this problem, we are given a number n and asked to produce any n distinct balanced bracket sequences of length 2n. I immediately just coded up a simple recursive function to generate all possible bracket sequences of length 2n, and then called take n on it. Thanks to laziness this works great. I missed that there is an even simpler solution: just generate the list ()()()()..., (())()()..., ((()))()..., i.e. where the kth bracket sequence starts with k nested pairs of brackets followed by n-k singleton pairs. However, I solved it in only four minutes anyway so it didn’t really matter!

readB = C.unpack >>> read

main = C.interact $
  C.lines >>> drop 1 >>> concatMap (readB >>> solve) >>> C.unlines

bracketSeqs 0 = [""]
bracketSeqs n =
  [ "(" ++ s1 ++ ")" ++ s2
  | k <- [0 .. n-1]
  , s1 <- bracketSeqs k
  , s2 <- bracketSeqs (n - k - 1)
  ]

solve n = map C.pack . take n $ bracketSeqs n
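
For comparison, the simpler construction described above (which I did not use) could be sketched roughly like this:

-- The k-th sequence is k nested pairs followed by n-k singleton pairs.
simpleBracketSeqs :: Int -> [String]
simpleBracketSeqs n =
  [ replicate k '(' ++ replicate k ')' ++ concat (replicate (n - k) "()")
  | k <- [1 .. n]
  ]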

Problem B – Combinatorics Homework

In this problem, we are given numbers a, b, c, and m, and asked whether it is possible to create a string of a A’s, b B’s, and c C’s, such that there are exactly m adjacent pairs of equal letters. This problem requires doing a little bit of combinatorial analysis to come up with a simple Boolean expression in terms of a, b, c, and m; there’s not much to say about it from a Haskell point of view. You can refer to the editorial posted on Codeforces if you want to understand the solution.

readB = C.unpack >>> read

main = C.interact $
  C.lines >>> drop 1 >>> map (C.words >>> map readB >>> solve >>> bool "NO" "YES") >>> C.unlines

solve :: [Int] -> Bool
solve [a,b,c,m] = a + b + c - m >= 3 && m >= z - (x+y) - 1
  where
    [x,y,z] = sort [a,b,c]

Problem C – Slay the Dragon

This problem was super annoying and I still haven’t solved it. The idea is that you have a bunch of “heroes”, each with a numeric strength, and there is a dragon described by two numbers: its attack level and its defense level. You have to pick one hero to fight the dragon, whose strength must be greater than or equal to the dragon’s defense; all the rest of the heroes will stay behind to defend your castle, and their combined strength must be greater than the dragon’s attack. This might not be possible, of course, so you can first spend money to level up any of your heroes, at a rate of one coin per strength point; the task is to find the minimum amount of money you must spend.

The problem hinges on doing some case analysis. It took me a good while to come up with something that I think is correct. I spent too long trying to solve it just by thinking hard; I really should have tried formal program derivation much earlier. It’s easy to write down a formal specification of the correct answer which involves looping over every hero and taking a minimum, and this can be manipulated into a form that doesn’t need to do any looping.

In the end it comes down to (for example) finding the hero with the smallest strength greater than or equal to the dragon’s defense, and the hero with the largest strength less than or equal to it (though one of these may not exist). The intended way to solve the problem is to sort the heroes by strength and use binary search; instead, I put all the heroes in an IntSet and used the lookupGE and lookupLE functions.

However, besides my floundering around getting the case analysis wrong at first, I got tripped up by two other things: first, it turns out that on the Codeforces judging hardware, Int is only 32 bits, which is not big enough for this problem! I know this because my code was failing on the third test case, and when I changed it to use Int64 instead of Int (which means I also had to switch to Data.Set instead of Data.IntSet), it failed on the sixth test case instead. The other problem is that my code was too slow: in fact, it timed out on the sixth test case rather than getting it wrong per se. I guess Data.Set and Int64 just have too much overhead.

Anyway, here is my code, which I think is correct, but is too slow.

data TC = TC { heroes :: ![Int64], dragons :: ![Dragon] }
data Dragon = Dragon { defense :: !Int64, attack :: !Int64 }

main = C.interact $
  runScanner tc >>> solve >>> map (show >>> C.pack) >>> C.unlines

tc :: Scanner TC
tc = do
  hs <- numberOf int64
  ds <- numberOf (Dragon <$> int64 <*> int64)
  return $ TC hs ds

solve :: TC -> [Int64]
solve (TC hs ds) = map fight ds
  where
    heroSet = S.fromList hs
    total = foldl' (+) 0 hs
    fight (Dragon df atk) = minimum $
      [ max 0 (atk - (total - hero)) | Just hero <- [mheroGE] ]
      ++
      [ df - hero + max 0 (atk - (total - hero)) | Just hero <- [mheroLE]]
      where
        mheroGE = S.lookupGE df heroSet
        mheroLE = S.lookupLE df heroSet

I’d like to come back to this later. Using something like vector to sort and then do binary search on the heroes would probably be faster, but vector is not supported on Codeforces. I’ll probably end up manually implementing binary search on top of something like Data.Array.Unboxed. Doing a binary search on an array also means we can get away with doing only a single search, since the two heroes we are looking for must be right next to each other in the array.

Edited to add: I tried creating an unboxed array and implementing my own binary search over it; however, my solution is still too slow. At this point I think the problem is the sorting. Instead of calling sort on the list of heroes, we probably need to implement our own quicksort or something like that over a mutable array. That doesn’t really sound like much fun so I’m probably going to forget about it for now.
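
For reference, the kind of hand-rolled binary search over a sorted unboxed array mentioned above could be sketched like this (a rough sketch, not my actual submission):

import Data.Array.Unboxed (UArray, bounds, (!))
import Data.Int (Int64)

-- Index of the smallest element >= target (or one past the end if none);
-- assumes the array is sorted in ascending order.
lowerBound :: UArray Int Int64 -> Int64 -> Int
lowerBound arr target = go lo (hi + 1)
  where
    (lo, hi) = bounds arr
    go l r
      | l >= r            = l
      | arr ! m >= target = go l m
      | otherwise         = go (m + 1) r
      where
        m = (l + r) `div` 2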

Problem D – The Strongest Build

In this problem, we consider a set of k-tuples, where the value for each slot in a tuple is chosen from among a list of possible values unique to that slot (the values for a slot are given to us in sorted order). For example, perhaps the first slot has the possible values 1, 2, 3, the second slot has possible values 5, 8, and the third slot has possible values 4, 7, 16. In this case there would be 3 \times 2 \times 3 possible tuples, ranging from (1,5,4) up to (3,8,16). We are also given a list of forbidden tuples, and then asked to find a non-forbidden tuple with the largest possible sum.

If the list of slot options is represented as a list of lists, with the first list representing the choices for the first slot, and so on, then we could use sequence to turn this into the list of all possible tuples. Hence, a naive solution could look like this:

solve :: Set [Int] -> [[Int]] -> [Int]
solve forbidden =
  head . filter (`S.notMember` forbidden) . sortOn (Down . sum) . sequence

Of course, this is much too slow. The problem is that although k (the size of the tuples) is limited to at most 10, there can be up to 2 \cdot 10^5 choices for each slot (the choices themselves can be up to 10^8). The list of all possible tuples could thus be truly enormous; in theory, there could be up to (2 \cdot 10^5)^{10} \approx 10^{53} of them, and generating and then sorting them all is out of the question.

We can think of the tuples as forming a lattice, where the children of a tuple t are all the tuples obtained by downgrading exactly one slot of t to the next smaller choice. Then the intended solution is to realize that the largest non-forbidden tuple must either be the top element of the lattice (the tuple with the maximum possible value for every slot), OR a child of one of the forbidden tuples (it is easy to see this by contradiction—any tuple which is not the child of a forbidden tuple has at least one parent which has a greater total value). So we can just iterate over all the forbidden tuples (there are at most 10^5), generate all possible children (at most 10) for each one, and take the maximum.
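
A rough sketch of that intended approach (not the code I actually wrote, and with no attention paid to the efficiency of the slot lookups) might look like:

import Data.List (maximumBy)
import Data.Ord (comparing)
import qualified Data.Set as S

-- slots: the sorted choices for each slot; forbidden: the banned tuples.
-- The best non-forbidden tuple is either the all-maximum tuple or a
-- one-slot downgrade of some forbidden tuple.
intendedSolve :: [[Int]] -> S.Set [Int] -> [Int]
intendedSolve slots forbidden =
  maximumBy (comparing sum) . filter (`S.notMember` forbidden) $ candidates
  where
    top = map maximum slots
    candidates = top : concatMap children (S.toList forbidden)
    -- all tuples obtained by downgrading exactly one slot to the next
    -- smaller choice (skipping slots already at their minimum)
    children t =
      [ before ++ [c] ++ after
      | (i, v) <- zip [0 ..] t
      , let (before, _ : after) = splitAt i t
      , c <- take 1 [x | x <- reverse (slots !! i), x < v]
      ]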

However, that’s not how I solved it! I started thinking from the naive solution above, and wondered whether there is a way to do sortOn (Down . sum) . sequence more efficiently, by interleaving the sorting and the generation. If it can be done lazily enough, then we could just search through the beginning of the generated ordered list of tuples for the first non-forbidden one, without having to actually generate the entire list. Indeed, this reminded me very much of Richard Bird’s implementation of the Sieve of Eratosthenes (see p. 11 of that PDF). The basic idea is to make a function which takes a list of choices for a slot, and a (recursively generated) list of tuples sorted by decreasing sum, and combines each choice with every tuple, merging the results so they are still sorted. However, the key is that when combining the best possible choice for the slot with the largest tuple in the list, we can just immediately return the resulting tuple as the first (best) tuple in the output list, without needing to involve it in any merging operation. This affords just enough laziness to get the whole thing off the ground. I’m not going to explain it in more detail than that; you can study the code below if you like.

I’m quite pleased that this worked, though it’s definitely an instance of me making things more complicated than necessary.


data TC = TC { slots :: [[Choice]], banned :: [[Int]] }

tc = do
  n <- int
  TC <$> (n >< (zipWith Choice [1 ..] <$> numberOf int)) <*> numberOf (n >< int)

main = C.interact $
  runScanner tc >>> solve >>> map (show >>> C.pack) >>> C.unwords

solve :: TC -> [Int]
solve TC{..} = choices . fromJust $ find ((`S.notMember` bannedSet) . choices) bs
  where
    bannedSet = S.fromList banned
    revSlots = map reverse slots
    bs = builds revSlots

data Choice = Choice { index :: !Int, value :: !Int }

data Build = Build { strength :: !Int, choices :: [Int] }
  deriving (Eq, Show, Ord)

singletonBuild :: Choice -> Build
singletonBuild (Choice i v) = Build v [i]

mkBuild xs = Build (sum xs) xs

-- Pre: all input lists are sorted descending.
-- All possible builds, sorted in descending order of strength.
builds :: [[Choice]] -> [Build]
builds []     = []
builds (i:is) = chooseFrom i (builds is)

chooseFrom :: [Choice] -> [Build] -> [Build]
chooseFrom [] _  = []
chooseFrom xs [] = map singletonBuild xs
chooseFrom (x:xs) (b:bs) = addToBuild x b : mergeBuilds (map (addToBuild x) bs) (chooseFrom xs (b:bs))

addToBuild :: Choice -> Build -> Build
addToBuild (Choice i v) (Build s xs) = Build (v+s) (i:xs)

mergeBuilds xs [] = xs
mergeBuilds [] ys = ys
mergeBuilds (x:xs) (y:ys) = case compare (strength x) (strength y) of
  GT -> x : mergeBuilds xs (y:ys)
  _  -> y : mergeBuilds (x:xs) ys

Problems E and F

I didn’t even get to these problems during the contest; I spent too long fighting with problem C and implementing my overly complicated solution to problem D. I might attempt to solve them in Haskell too; if I do, I’ll write about them in another blog post!

by Brent at September 22, 2021 11:05 AM

September 21, 2021

Abhinav Sarkar

Implementing Co, a Small Interpreted Language With Coroutines #2: The Interpreter

In the previous post, we wrote the parser for Co, the small interpreted language we are building in this series of posts. The previous post was all about the syntax of Co. In this post we dive into the semantics of Co, and write an interpreter for its basic features.

This post was originally published on abhinavsarkar.net.

This is the second post in a series of posts:

  1. Implementing Co #1: The Parser
  2. Implementing Co #2: The Interpreter
  3. Implementing Co #3: Continuations, Coroutines, and Channels

Previously, on …

Here’s a quick recap. The basic features of Co that we are aiming to implement in this post are:

  • Dynamic and strong typing.
  • Null, boolean, string and integer literals, and values.
  • Addition and subtraction arithmetic operations.
  • String concatenation operation.
  • Equality and inequality checks on booleans, strings and numbers.
  • Less-than and greater-than comparison operations on numbers.
  • Variable declarations, usage and assignments.
  • if and while statements.
  • Function declarations and calls, with support for recursion.
  • First class functions.
  • Mutable closures.

Note that some parts of code snippets in this post have been faded away. These are the parts which add support for coroutines and channels. You can safely ignore these parts for now. We’ll go over them in the next post.

We represent the Co Abstract Syntax Tree (AST) as a pair of Haskell Algebraic Data Types (ADTs), one for Expressions:

data Expr
  = LNull
  | LBool Bool
  | LStr String
  | LNum Integer
  | Variable Identifier
  | Binary BinOp Expr Expr
  | Call Identifier [Expr]
  | Receive Expr
  deriving (Show, Eq)

type Identifier = String

data BinOp = Plus | Minus | Div | Equals | NotEquals | LessThan | GreaterThan
  deriving (Show, Eq)

And another for Statements:

data Stmt
  = ExprStmt Expr
  | VarStmt Identifier Expr
  | AssignStmt Identifier Expr
  | IfStmt Expr [Stmt]
  | WhileStmt Expr [Stmt]
  | FunctionStmt Identifier [Identifier] [Stmt]
  | ReturnStmt (Maybe Expr)
  | YieldStmt
  | SpawnStmt Expr
  | SendStmt Expr Identifier
  deriving (Show, Eq)

type Program = [Stmt]

Also, program is the parser for Co programs. To parse code, run the program parser with the runParser function like this:

> runParser program "var x = 1 + s;"
Right [VarStmt "x" (Binary Plus (LNum 1) (Variable "s"))]

Now, off to the new stuff.

Running a Program

There are many ways to run a program. If the program is written in Machine Code, you can run it directly on the matching CPU. But machine code is too low-level, and writing programs in it is very tedious and error-prone. Thus, programmers prefer to write code in high-level programming languages, and turn it into machine code to be able to run it@1. Here’s where different ways of running code come in:

  • We can run the high-level code through a Compiler to turn it into machine code to be able to run it directly. Example: compiling C++ using GCC.
  • We can run the code through a compiler which turns it into a relatively lower-level programming language code, and then run that lower-level program through another compiler to turn it into machine code. Example: compiling Haskell into LLVM IR using GHC, which can then be run through the LLVM toolchain to generate machine code.
  • We can run the code through a Transpiler (also called Source-to-source compiler) to turn it into code in a programming language that is of similar level, and then run the resultant code with that language’s toolchain. Example: transpiling Purescript into JavaScript, and running it with node.js.
  • We can compile the source code to Bytecode and run the bytecode on a Virtual Machine. Example: Java virtual machine running Java bytecode compiled from Clojure source code by the Clojure compiler.
  • We can parse the code to an AST, and immediately execute the AST using an AST Interpreter. Example: PHP version 3, Bash. 1
  • We can also mix-and-match parts of the above options to create hybrids, like Just-in-time compilation to machine code within a virtual machine.

Many ways to run a program

For running Co programs, we will implement an AST-walking interpreter. The interpreter implemented in this post will support only the basic features of Co. In the next post, we’ll extend it to support coroutines and channels.

The complete code for the interpreter is here. You can load it in GHCi using stack (by running stack co-interpreter.hs), and follow along while reading this article.

Runtime Values

An AST-walking interpreter takes an AST as its input, and recursively walks down the AST nodes, from top to bottom. While doing this, it evaluates expressions to runtime values, and executes the statements to do their effects.

The runtime values are things that can be passed around in the code during the program run time. Often called “first-class”, these values can be assigned to variables, passed as function arguments, and returned from functions. If Co were to support data structures like lists and maps, these values could be stored in them as well. The Value ADT below represents these values:

data Value
  = Null
  | Boolean Bool
  | Str String
  | Num Integer
  | Function Identifier [Identifier] [Stmt] Env
  | BuiltinFunction Identifier Int ([Expr] -> Interpreter Value)
  | Chan Channel

Other than the usual values like null, booleans, strings, and numbers, we also have functions as first-class runtime values in Co. We have a constructor Function for the functions that programmers define in their Co code, and another constructor BuiltinFunction for built-in functions like print2.

We also write instances to show and check equality for these values:

instance Show Value where
  show = \case
    Null -> "null"
    Boolean b -> show b
    Str s -> s
    Num n -> show n
    Function name _ _ _ -> "function " <> name
    BuiltinFunction name _ _ -> "function " <> name
    Chan Channel {} -> "Channel"

instance Eq Value where
  Null == Null = True
  Boolean b1 == Boolean b2 = b1 == b2
  Str s1 == Str s2 = s1 == s2
  Num n1 == Num n2 = n1 == n2
  _ == _ = False

Note that only null, booleans, strings and numbers can be checked for equality in Co. Also, only values of the same type can be equal. A string can never be equal to a number3.

So, how do we go about turning the expressions to values, and executing statements? Before learning that, we must take a detour into some theory of programming languages.

Readers familiar with the concepts of environments, scopes, closures and early returns can skip the next sections, and jump directly to the implementation.

Environment Model of Evaluation

Let’s say we have this little Co program to run:

var a = 2;
function twice(x) { return x + x; }
print(twice(a));

We need to evaluate twice(a) to a value to print it. One way to do that is to substitute variables for their values, quite literally. twice is a variable whose value is a function. And a is another variable, with value 2. We can do repeated substitution to arrive at a resultant value like this:

print(twice(a));
=> print(twice(2));
=> print(2 + 2);
=> print(4);

This is called the Substitution model of evaluation@5. This works for the example we have above, and for a large set of programs4. However, it breaks down as soon as we add mutability to the mix:

var a = 2;
function incA() {
  var b = a + 1;
  return b;
}
print(incA());
a = 3;
print(incA());

Running this with the Co interpreter results in the output:

3
4

We can’t use the substitution model here because we can’t consider variables like a to be substitutable with single values anymore. Now, we must think of them more as places in which the values are stored. Also, the stored values may change over the lifetime of the program execution. We call this place where the variable values are stored, the Environment, and this understanding of program execution is called the Environment Model of Evaluation@7.

Value of a variable may change over time

A pair of a variable’s name and its value at any particular time is called a Binding. An Environment is a collection of zero-or-more bindings. To fully understand environments, first we have to learn about scopes.

Scopes

Let’s consider the twice function again:

function twice(x) { return x + x; }
print(twice(1));
print(twice(2));

Calling twice with different arguments prints different results. The function seems to forget the value of its parameter x after each call. This may feel very natural to programmers, but how does it really work? The answer is Scopes.

A scope is a region of the program lifetime during which a variable name-to-value binding is in effect. When the program execution enters a scope, the variables in that scope become defined and available to the executing code5. When the program execution exits the scope, the variables become undefined and inaccessible (also known as going out of scope).

Lexical scoping is a specific style of scoping where the structure of the program itself shows where a scope begins and ends@9. Like most modern languages, Co is lexically scoped. A function in Co starts a new scope which extends over the entire function body, and the scope ends when the function ends6. Functions are the only way of creating new scopes in Co7.

That’s how repeated invocation of functions don’t remember the values of their parameters across the calls. Every time a new call is started, a new scope is created with the parameter names bound to the value of the arguments of the call. And when the call returns, this new scope is destroyed.

Scopes can be enclosed within other scopes. In Co, this can be done by defining a function inside the body of another function. All programs have at least one scope, which is the program’s top-level scope, often called the global scope.

Scopes are intimately related to the environment. In fact, the structure of the environment is how scopes are implemented@12.

Scopes are implemented by the environment

An environment can be thought of as a stack of frames, with one frame per enclosed scope@13. A frame contains zero-or-more bindings. The bindings in enclosed scopes (frames higher in the environment stack) hide the bindings (called shadowing) in enclosing scopes (frames lower in the environment stack). Program’s global scope is represented by the lowermost frame in the stack.

The above diagram shows the frames of the two calls to the twice function. The scope of the twice function is enclosed in the global scope. To find the value of a variable inside the function, the interpreter first looks into the topmost frame that represents the scope of the twice function. If the binding is not found, then the interpreter goes down the stack of frames, and looks into the frame for the global scope.

What happens when a function body tries to access variables not defined in the function’s scope? We get Closures.

Closures

If a function body refers to variables not defined in the function’s scope, such variables are called Free Variables@14. In lexically scoped languages, the value of a free variable is determined from the scope in which the function is defined. A function along with the references to all its free variables, is called a Closure8.

Closures are prevalent in programming languages with first-class functions. Co—with its support for first-class functions—also supports closures. Closures in Co are mutable, meaning the values of the free variables of a function can change over time, and the changes are reflected in the behavior of the function9.

We already saw an example of closures earlier:

var a = 2;
function incA() {
  var b = a + 1;
  return b;
}
print(incA());
a = 3;
print(incA());

This is how the frames exist over time for the two invocations of the incA function:

a is a free variable of the function incA

Here, a is a free variable of the function incA. Its value is not present in the scope of incA, but is obtained from the global scope. When its value in the global scope changes later, the value returned by incA changes as well. In other words, incA and a together form a closure.

The following example demonstrates a closure with a mutable free variable and enclosed scopes:

function makeCounter(name) {
  var count = 0;
  function inc() {
    count = count + 1;
    print(name + " = " + count);
  }
  return inc;
}

var countA = makeCounter("a");
var countB = makeCounter("b");

countA();
countA();
countB();
countA();

Here, both name and count are free variables referred in the function inc. While name is only read, count is changed in the body of inc.

Running the above code prints:

a = 1
a = 2
b = 1
a = 3

Note that the two functions countA and countB refer to two different instances of the count variable, and are not affected by each other. In other words, countA and countB are two different closures for the same function inc.

Now for one last thing to learn about before we jump to the implementation: early returns.

Early Returns

Statement oriented programming languages often allow returning from a function before the entire function is done executing. This is called an Early return. We saw an example of this in the fibonacci function in the previous post:

function fib(n) {
  if (n < 2) {
    return n;
  }
  return fib(n - 2)
    + fib(n - 1);
}

In the above code, when the input n is less than 2, the code returns early from the function on line 3.

Expression oriented programming languages, like Haskell, have no early returns. Every function is an expression in Haskell, and has to be evaluated entirely10 to get back a value. Since our AST-walking interpreter itself is written in Haskell, we need to figure out how to support early returns in the Co code being interpreted. The interpreter should be able to stop evaluating at an AST node representing a return statement, and jump to the node representing the function’s caller.

One way to implement this is Exceptions. Exceptions let us abort the execution of code at any point, and resume from some other point lower in the function call stack. Although Haskell supports exceptions as we know them from languages like Java and Python, it also supports exceptions as values using the Error monad. That’s what we will leverage for implementing early returns in our interpreter.
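
Here is a tiny standalone sketch, separate from the interpreter code below, of modelling an early return as a value thrown in the Except monad:

import Control.Monad (when)
import Control.Monad.Except (Except, runExcept, throwError)

-- The body of a "function" runs in Except; a return statement is
-- modelled by throwing the return value.
body :: Int -> Except Int ()
body n = do
  when (n < 2) $ throwError n      -- early return, like "return n;"
  throwError (n * 10)              -- the normal return at the end

callFunction :: Int -> Int
callFunction n = either id (const 0) (runExcept (body n))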

Finally, we are ready to start implementing the interpreter.

The Interpreter

The interpreter is implemented as a Haskell newtype over a stack of monads using the monad transformers and typeclasses from the mtl library:

newtype Interpreter a = Interpreter
  { runInterpreter ::
      ExceptT Exception
        (ContT
            (Either Exception ())
            (StateT InterpreterState IO))
        a
  }
  deriving
    ( Functor,
      Applicative,
      Monad,
      MonadIO,
      MonadBase IO,
      MonadState InterpreterState,
      MonadError Exception,
      MonadCont
    )

From bottom to top, the monad stack is comprised of:

  1. the IO monad to be able to print to the console,
  2. the State monad transformer to track the state of the interpreter, and
  3. the Except monad transformer to propagate exceptions while interpreting the code.

We model the environment as a Map of variable names to IORefs of values:

type Env = Map.Map Identifier (IORef Value)

The immutable nature of Map and the mutable nature of IORef allow us to correctly model scopes, frames and closures in Co, as we see in the later sections of this post.

The interpreter state contains the environment used for interpretation. The state changes as variables come into and go out of scope.

type Queue a = IORef (Seq.Seq a)

data InterpreterState = InterpreterState
  { isEnv :: Env,
    isCoroutines :: Queue (Coroutine ())
  }

initInterpreterState :: IO InterpreterState
initInterpreterState =
  InterpreterState <$> builtinEnv <*> newIORef Seq.empty

builtinEnv :: IO Env
builtinEnv = do
  printFn <- newIORef $ BuiltinFunction "print" 1 executePrint
  newChannelFn <- newIORef $
    BuiltinFunction "newChannel" 0 $ fmap Chan . const newChannel
  return $ Map.fromList [
      ("print", printFn)
    , ("newChannel", newChannelFn)
    ]

The initial interpreter state contains the built-in environment with bindings for the built-in functions like print. In particular, print is implemented by the executePrint function, which we see in a later section. Note that the arity of built-in functions is also encapsulated in them.

When trying to interpret wrong code like 1 + true, the interpreter throws runtime errors. We roll these errors along with early returns into an ADT for exceptions:

data Exception
  = Return Value
  | RuntimeError String
  | CoroutineQueueEmpty

That’s it for defining the types for the interpreter. Next, we implement the functions to interpret Co programs, starting with functions to work with environments.

Manipulating Environments

In Co, variables must be initialized when being defined. Additionally, only the already defined variables can be referenced or assigned.

To define a new variable, we create a new IORef with the variable’s value, insert it in the current environment map with the variable name as the key, and replace the interpreter state with the new environment map.

defineVar :: Identifier -> Value -> Interpreter ()
defineVar name value = do
  env <- State.gets isEnv
  env' <- defineVarEnv name value env
  setEnv env'

defineVarEnv :: Identifier -> Value -> Env -> Interpreter Env
defineVarEnv name value env = do
  valueRef <- newIORef value
  return $ Map.insert name valueRef env

setEnv :: Env -> Interpreter ()
setEnv env = State.modify' $ \is -> is {isEnv = env}

We extract two helper functions defineVarEnv and setEnv that we reuse in later sections.

To lookup and assign a variable, we get the current environment, lookup the IORef in the map by the variable’s name, and then read the IORef for lookup, or write the new value to it for assignment.

lookupVar :: Identifier -> Interpreter Value
lookupVar name =
  State.gets isEnv >>= findValueRef name >>= readIORef

assignVar :: Identifier -> Value -> Interpreter ()
assignVar name value =
  State.gets isEnv >>= findValueRef name >>= flip writeIORef value

We use the helper function findValueRef to lookup a variable name in the environment map. It throws a runtime error if the variable is not already defined.

findValueRef :: Identifier -> Env -> Interpreter (IORef Value)
findValueRef name env =
  case Map.lookup name env of
    Just ref -> return ref
    Nothing -> throw $ "Unknown variable: " <> name

throw :: String -> Interpreter a
throw = throwError . RuntimeError

These functions are enough for us to implement the evaluation of expressions and execution of statements.

Evaluating Expressions

Co expressions are represented by the Expr ADT. The evaluate function below shows how they are evaluated to runtime values.

evaluate :: Expr -> Interpreter Value
evaluate = \case
  LNull -> pure Null
  LBool bool -> pure $ Boolean bool
  LStr str -> pure $ Str str
  LNum num -> pure $ Num num
  Variable v -> lookupVar v
  binary@Binary {} -> evaluateBinaryOp binary
  call@Call {} -> evaluateFuncCall call
  Receive expr ->
    evaluate expr >>= \case
      Chan channel -> channelReceive channel
      val -> throw $ "Cannot receive from a non-channel: " <> show val

Literals null, booleans, strings, and numbers evaluate to themselves. Variables are looked up from the environment using the lookupVar function we wrote earlier. Binary operations and function call expressions are handled by helper functions explained below.

evaluateBinaryOp :: Expr -> Interpreter Value
evaluateBinaryOp ~(Binary op leftE rightE) = do
  left <- evaluate leftE
  right <- evaluate rightE
  let errMsg msg = msg <> ": " <> show left <> " and " <> show right
  case (op, left, right) of
    (Plus, Num n1, Num n2) -> pure $ Num $ n1 + n2
    (Plus, Str s1, Str s2) -> pure $ Str $ s1 <> s2
    (Plus, Str s1, _) -> pure $ Str $ s1 <> show right
    (Plus, _, Str s2) -> pure $ Str $ show left <> s2
    (Plus, _, _) -> throw $ errMsg "Cannot add or append"

    (Minus, Num n1, Num n2) -> pure $ Num $ n1 - n2
    (Minus, _, _) -> throw $ errMsg "Cannot subtract non-numbers"

    (Div, Num n1, Num n2) -> pure $ Num $ n1 `div` n2
    (Div, _, _) -> throw $ errMsg "Cannot divide non-numbers"

    (LessThan, Num n1, Num n2) -> pure $ Boolean $ n1 < n2
    (LessThan, _, _) -> throw $ errMsg "Cannot compare non-numbers"
    (GreaterThan, Num n1, Num n2) -> pure $ Boolean $ n1 > n2
    (GreaterThan, _, _) -> throw $ errMsg "Cannot compare non-numbers"

    (Equals, _, _) -> pure $ Boolean $ left == right
    (NotEquals, _, _) -> pure $ Boolean $ left /= right

To evaluate a binary operation, first we recursively evaluate its left and right operands by calling evaluate on them. Then, depending on the operation and types of the operands, we do different things.

  • The + operation can be used to either add two numbers, or to concat two operands when one or both of them are strings. In all other cases, it throws runtime errors.
  • The -, /, >, and < operations can be invoked only on numbers. Other cases throw runtime errors.
  • The == and != operations run corresponding Haskell operations on their operands.

That’s all for evaluating binary operations. Next, let’s look at how to execute statements. We come back to evaluating function calls after that.

Executing Statements

Co statements are represented by the Stmt ADT. The execute function below uses a case expression to execute the various types of statements in different ways:

execute :: Stmt -> Interpreter ()
execute = \case
  ExprStmt expr -> void $ evaluate expr
  VarStmt name expr -> evaluate expr >>= defineVar name
  AssignStmt name expr -> evaluate expr >>= assignVar name
  IfStmt expr body -> do
    cond <- evaluate expr
    when (isTruthy cond) $
      traverse_ execute body
  while@(WhileStmt expr body) -> do
    cond <- evaluate expr
    when (isTruthy cond) $ do
      traverse_ execute body
      execute while
  ReturnStmt mExpr -> do
    mRet <- traverse evaluate mExpr
    throwError . Return . fromMaybe Null $ mRet
  FunctionStmt name params body -> do
    env <- State.gets isEnv
    defineVar name $ Function name params body env
  YieldStmt -> yield
  SpawnStmt expr -> spawn (void $ evaluate expr)
  SendStmt expr chan -> do
    val <- evaluate expr
    evaluate (Variable chan) >>= \case
      Chan channel -> channelSend val channel
      val' -> throw $ "Cannot send to a non-channel: " <> show val'
  where
    isTruthy = \case
      Null -> False
      Boolean b -> b
      _ -> True

Expressions in expression statements are evaluated by calling evaluate on them, and the resultant values are discarded.

For variable definition and assignment statements, first we evaluate the value expressions, and then define or assign variables with the given variable names and the resultant values.

For if statements, first we evaluate their conditions, and if conditions yield truthy11 values, we recursively execute the statement bodies. while statements are executed in a similar fashion, except we recursively execute the while statements again after executing their bodies.

For return statements, we evaluate their optional return value expressions, and then throw the resultant values as exceptions wrapped with the Return constructor.

Execution of function statements is more interesting. The first thing we do is capture the current environment from the interpreter state. Then we define a new variable12 with the function’s name and a runtime function value that encapsulates the function’s name, parameter names, and body statements, as well as the captured environment. This is how closures record the values of functions’ free variables from their definition contexts.

In the next section, we see how the captured environments and returns as exceptions are used to evaluate function calls.

Evaluating Function Calls

The capability of defining and calling functions is the cornerstone of abstraction in programming languages. In Co, functions are first-class, support recursion13, and are also the means of implementing scopes and closures. Hence, this section is the most important and involved one.

We start by trying to find a function by looking up the function name in the environment:

evaluateFuncCall :: Expr -> Interpreter Value
evaluateFuncCall ~(Call funcName argEs) =
  lookupVar funcName >>= \case
    BuiltinFunction _ arity func -> do
      checkArgCount funcName argEs arity
      func argEs
    func@Function {} -> evaluateFuncCall' func argEs
    val -> throw $ "Cannot call a non-function: " <> show val

checkArgCount :: Identifier -> [Expr] -> Int -> Interpreter ()
checkArgCount funcName argEs arity =
  when (length argEs /= arity) $
    throw $ funcName <> " call expected " <> show arity
            <> " argument(s) but received " <> show (length argEs)

executePrint :: [Expr] -> Interpreter Value
executePrint argEs =
  evaluate (head argEs) >>= liftIO . print >> return Null

If no value is found, or if the found value is not a function, we throw a runtime error.

If we find a built-in function, we check that the count of arguments is the same as the arity of the function by invoking checkArgCount, failing which we throw a runtime error. Then, we invoke the corresponding implementation function. For print, it is the executePrint function, in which we evaluate the argument and print it using Haskell’s print function.

If we find a user-defined function, we evaluate the function call with the helper function evaluateFuncCall'. But before diving into it, let’s take a look at how the world looks from inside a function.

function makeGreeter(greeting) {
  function greeter(name) {
    var say = greeting + " " + name;
    print(say);
  }
  return greeter;
}

var hello = makeGreeter("hello");
var namaste = makeGreeter("namaste");

hello("Arthur");
namaste("Ford");

In the above Co code, the function greeter has a free variable greeting, a bound parameter name, and a local variable say. Upon executing the code with the interpreter, we get the following output:

hello Arthur
namaste Ford

The output makes sense when we understand the variables hello and namaste are closures over the function greeter. The environment seen from inside greeter when it is being executed is a mix of the scope (and hence, the environment) it is defined in, and the scope it is called in.

Function environment is a mix of its caller and definition environments

More specifically, the free variables come from the definition scope, and the parameters come from the caller scope. Local variables can be derived from any combinations of free variables and parameters. With this understanding, let’s see how we evaluate function calls:

evaluateFuncCall' :: Value -> [Expr] -> Interpreter Value
evaluateFuncCall'
    ~func@(Function funcName params body funcDefEnv) argEs = do
  checkArgCount funcName argEs (length params)
  funcCallEnv <- State.gets isEnv
  setupFuncEnv
  retVal <- executeBody funcCallEnv
  setEnv funcCallEnv
  return retVal
  where
    setupFuncEnv = do
      args <- traverse evaluate argEs
      funcDefEnv' <- defineVarEnv funcName func funcDefEnv
      setEnv funcDefEnv'
      for_ (zip params args) $ uncurry defineVar

    executeBody funcCallEnv =
      (traverse_ execute body >> return Null) `catchError` \case
        Return val -> return val
        err -> setEnv funcCallEnv >> throwError err

Let’s go over the above code, step by step:

  1. evaluateFuncCall' is called with the function to evaluate. We get access to the function’s name, its parameter names, body statements, and the environment it is defined in. We also get the argument expressions for the function call. (Line 2–3)
  2. First, we check that the number of arguments matches the number of function parameters by invoking checkArgCount, throwing a runtime error if it does not. (Line 4)
  3. Then, we capture the current environment from the interpreter state. This is the function’s caller’s environment. (Line 5)
  4. Next, we set up the environment in which the function will be executed (line 6); a hedged sketch of the environment helpers used here appears after this list. In setupFuncEnv:
    1. We evaluate the argument expressions in the current (caller’s) environment14. (Line 12)
    2. We bind the callee function itself to its name in its own environment. This lets the function call itself recursively. (Line 13)
    3. We set the current environment in the interpreter state to the function’s environment. (Line 14)
    4. We bind the argument values to their parameter names in the function’s environment. This lets the function body access the arguments it is called with. (Line 15)
  5. With the function environment set up, we execute the function body in executeBody (line 7):
    1. We execute each statement in the body, and return null in case there was no explicit return in the function. (Line 18)
    2. If the body contains a return statement, or if its execution throws a runtime error, we handle the exception in the catchError case statement.
      1. For return, we pass along the return value. (Line 19)
      2. For a runtime error, first we set the current environment back to the caller’s environment that we captured in step 3, and then we throw the error. The error is eventually handled in the interpret function described in the next section. (Line 20)
    3. We capture the value returned from executing the body. (Line 7)
  6. We set the current environment back to the caller’s environment that we captured in step 3. (Line 8)
  7. We return the captured return value from evaluateFuncCall'. The function call is complete now. (Line 9)
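The environment helpers setEnv, defineVarEnv, and defineVar used above are defined earlier in the post. The following is only a hedged reconstruction of what they might look like, assuming Env is a Map from Identifier to IORef Value and the interpreter state record has an isEnv field:

-- A hedged reconstruction, not the post's actual definitions, assuming
-- type Env = Map Identifier (IORef Value) and a state record with an isEnv field.

-- Replace the interpreter's current environment.
setEnv :: Env -> Interpreter ()
setEnv env = State.modify' $ \is -> is { isEnv = env }

-- Bind a name to a fresh IORef holding the value, in the given environment.
defineVarEnv :: Identifier -> Value -> Env -> Interpreter Env
defineVarEnv name val env = do
  ref <- liftIO $ newIORef val
  return $ Map.insert name ref env

-- Bind a name in the current environment.
defineVar :: Identifier -> Value -> Interpreter ()
defineVar name val = do
  env <- State.gets isEnv
  defineVarEnv name val env >>= setEnv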

Curious readers may wonder: why do we need to use the State monad, immutable Maps, and IORefs together, when all of them do a similar job of storing and mutating variables? Because together they let us implement function calls, scopes, and closures, as described below:

  1. The State monad lets us swap out the current environment for a function’s definition environment when a function call is made, and restore the calling environment after the call completes.
  2. Immutable maps are perfect for implementing scopes. Adding a variable to an immutable map returns a modified map without changing the original map. This lets us shadow variables defined in outer scopes when entering inner scopes, and easily restore the shadowed variables by restoring the original map after the inner scope ends. There is no need for a stack of mutable maps, which is how environments are generally implemented in interpreters that do not use immutable maps.
  3. Lastly, using IORefs as the values of the immutable maps lets us implement mutable closures. All closures of the same function share references to the same IORefs, so variable mutations made through one closure are visible to all the others. Had we used just immutable maps, changes to variable values would not propagate between closures. The standalone sketch after this list illustrates points 2 and 3.
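The following standalone sketch (hypothetical names, not interpreter code) illustrates the idea: environments that share an IORef see each other’s mutations, while inserting a binding into one copy of the map leaves the other copy untouched.

-- A standalone illustration, not interpreter code: two maps that share an
-- IORef see each other's mutations, but insertions into one map (e.g.
-- shadowing in an inner scope) do not affect the other.
import Data.IORef (newIORef, readIORef, writeIORef)
import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  let outer = Map.fromList [("counter", ref)]
      inner = Map.insert "local" ref outer     -- inner scope; outer is unchanged
  writeIORef ref 42                            -- mutate through the shared IORef
  readIORef (outer Map.! "counter") >>= print  -- 42: visible via both maps
  readIORef (inner Map.! "local") >>= print    -- 42: same IORef, shared mutation
  print (Map.member "local" outer)             -- False: the new binding stayed in `inner`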

So that’s how function calls—the most crucial part of the interpreter—work. That completes the guts of our interpreter for the basic features of Co. In the next and last section, we put everything together.

Interpreting a Program

We are down to the last step. To run a program, we interpret the AST returned by the parser written in the previous post.

interpret :: Program -> IO (Either String ())
interpret program = do
  state <- initInterpreterState
  retVal <- flip evalStateT state
    . flip runContT return
    . runExceptT
    . runInterpreter
    $ (traverse_ execute program >> awaitTermination)
  case retVal of
    Left (RuntimeError err) -> return $ Left err
    Left (Return _) -> return $ Left "Cannot return from outside a function"
    Left CoroutineQueueEmpty -> return $ Right ()
    Right _ -> return $ Right ()

We run the list of statements in the program by calling the execute function on each of them. Then we unwrap the monad transformer stack, layer by layer, to get the return value. Finally, we pattern match on the return value to catch errors, and we are done.
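For reference, the order in which interpret unwraps the layers implies a monad transformer stack roughly like the one below. This is a reconstruction inferred from the snippet above, not necessarily the exact definition given earlier in the post:

-- A reconstruction inferred from how interpret unwraps the layers; the actual
-- definitions earlier in the post may differ in details.
newtype Interpreter a = Interpreter
  { runInterpreter
      :: ExceptT Exception
           (ContT (Either Exception ()) (StateT InterpreterState IO))
           a
  }

-- Inferred from the case analysis on retVal above.
data Exception
  = RuntimeError String
  | Return Value
  | CoroutineQueueEmpty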

We package the parser and the interpreter together to create the runFile function that takes a file path, reads and parses the file, and then interprets the AST:

runFile :: FilePath -> IO ()
runFile file = do
  code <- readFile file
  case runParser program code of
    Left err -> hPutStrLn stderr err
    Right program -> interpret program >>= \case
      Left err -> hPutStrLn stderr $ "ERROR: " <> err
      _ -> return ()

Finally, we can run the interpreter on the Co files:

> runFile "fib.co"
0
1
1
2
3
5
0
1
1
2
3
5

That’s all for now. We implemented the interpreter for the basic features of Co, and learned how function calls, scopes, and closures work. In the next part, we’ll extend our interpreter to add support for coroutines and channels in Co.

The full code for the interpreter can be seen here. You can discuss this post on lobsters, r/haskell, discourse, twitter or in the comments below.

Abelson, Harold, Gerald Jay Sussman, with Julie Sussman. “Lexical Addressing.” In Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press/McGraw-Hill, 1996. https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-35.html#%_sec_5.5.6.
———. “Metalinguistic Abstraction.” In Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press/McGraw-Hill, 1996. https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-25.html#%_chap_4.
———. “Normal Order and Applicative Order.” In Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press/McGraw-Hill, 1996. https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-27.html#%_sec_4.2.1.
———. “Procedures as Black-Box Abstractions.” In Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press/McGraw-Hill, 1996. https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-10.html#%_sec_1.1.8.
———. “The Costs of Introducing Assignment.” In Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press/McGraw-Hill, 1996. https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-20.html#%_sec_3.1.3.
———. “The Environment Model of Evaluation.” In Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press/McGraw-Hill, 1996. https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-21.html#%_sec_3.2.
———. “The Substitution Model for Procedure Application.” In Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press/McGraw-Hill, 1996. https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-10.html#%_sec_1.1.5.

  1. It’s hard to find examples of real-world programming languages that are run with AST interpreters. This is because AST interpreters are too slow for real-world usage. However, they are the easiest to understand and implement, and hence are widely used in teaching programming language theory.↩︎

  2. Since they are first-class, user-defined and built-in functions can be assigned to variables, and passed as arguments to other functions. Thus, Co supports higher-order functions as well.↩︎

  3. This is called Strong typing in programming language parlance. JavaScript, on the other hand, is a weakly typed language. In JavaScript, 1 == '1' evaluates to true, whereas in Co, it evaluates to false.↩︎

  4. The property of being able to substitute expressions for their corresponding values without changing the meaning of the program is called Referential transparency. Pure functions—like twice here—that do not have any side-effects are referentially transparent.↩︎

  5. I’m being a little hand-wavy here because most programmers have at least an intuitive understanding of scopes. Read the literature for accurate details.↩︎

  6. This is in contrast to Dynamic scoping, where a variable’s scope is essentially global and is determined by the function’s execution context instead of its definition context, as in lexical scoping.↩︎

  7. Blocks are another widely used structure that supports lexical scoping. Co doesn’t have blocks in the interest of simplicity of implementation.↩︎

  8. The function is said to close over its free variables. Hence, the name Closure.↩︎

  9. Some programming languages like Java support a limited version of closures, which require the values of a function’s free variables to not change over time.↩︎

  10. Well, not entirely, because Haskell is a lazily evaluated language.↩︎

  11. In Co, only null and false evaluate to false. All other values evaluate to true. This is implemented by the isTruthy function.↩︎

  12. Functions are just variables in Co. That is to say, function definitions and variable definitions share the same namespace. This is how it works in many programming languages like JavaScript and Python. But some languages like Common Lisp have separate namespaces for functions and variables.↩︎

  13. Co does not support mutual recursion though. This is because a function in Co only sees the bindings done before its own definition. This can be fixed either by adding a special syntax for mutually recursive functions, or by hoisting all the bindings in a scope to the top of the scope, as JavaScript does.↩︎

  14. Evaluating function arguments before the function body is called the Strict evaluation strategy. Most modern programming languages work this way, for example Java, Python, JavaScript, and Ruby. This is in contrast to Non-strict evaluation in programming languages like Haskell, where the arguments to functions are evaluated only when their values are needed in the function bodies.↩︎

If you liked this post, please leave a comment.

by Abhinav Sarkar (abhinav@abhinavsarkar.net) at September 21, 2021 12:00 AM

September 20, 2021

FP Complete

Combining Axum, Hyper, Tonic, and Tower for hybrid web/gRPC apps: Part 4

This is the fourth and final post in a series on combining web and gRPC services into a single service using Tower, Hyper, Axum, and Tonic. The full four parts are:

  1. Overview of Tower
  2. Understanding Hyper, and first experiences with Axum
  3. Demonstration of Tonic for a gRPC client/server
  4. Today's post: How to combine Axum and Tonic services into a single service

Single port, two protocols

That heading is a lie. Both an Axum web application and a gRPC server speak the same protocol: HTTP/2. It may be fairer to say they speak different dialects of it. But importantly, it's trivially easy to look at a request and determine whether it wants to talk to the gRPC server or not. gRPC requests will all include the header Content-Type: application/grpc. So our final step today is to write something that can accept both a gRPC Service and a normal Service, and return one unified service. Let's do it! For