Planet Haskell

May 26, 2015

Edward Kmett

Categories of Structures in Haskell

In the last couple posts I've used some 'free' constructions, and not remarked too much on how they arise. In this post, I'd like to explore them more. This is going to be something of a departure from the previous posts, though, since I'm not going to worry about thinking precisely about bottom/domains. This is more an exercise in applying some category theory to Haskell, "fast and loose".

(Advance note: for some complete, loadable code to look at, see this file.)

First, it'll help to talk about how some categories can work in Haskell. For any kind k made of * and (->), [0] we can define a category of type constructors. Objects of the category will be first-class [1] types of that kind, and arrows will be defined by the following type family:

 
newtype Transformer f g = Transform { ($$) :: forall i. f i ~> g i }
 
type family (~>) :: k -> k -> * where
  (~>) = (->)
  (~>) = Transformer
 
type a <-> b = (a -> b, b -> a)
type a <~> b = (a ~> b, b ~> a)
 

So, for a base case, * has monomorphic functions as arrows, and categories for higher kinds have polymorphic functions that saturate the constructor:

 
  Int ~> Char = Int -> Char
  Maybe ~> [] = forall a. Maybe a -> [a]
  Either ~> (,) = forall a b. Either a b -> (a, b)
  StateT ~> ReaderT = forall s m a. StateT s m a -> ReaderT s m a
 

We can of course define identity and composition for these, and it will be handy to do so:

 
class Morph (p :: k -> k -> *) where
  id :: p a a
  (.) :: p b c -> p a b -> p a c
 
instance Morph (->) where
  id x = x
  (g . f) x = g (f x)
 
instance Morph ((~>) :: k -> k -> *)
      => Morph (Transformer :: (i -> k) -> (i -> k) -> *) where
  id = Transform id
  Transform f . Transform g = Transform $ f . g
 

These categories can be looked upon as the most basic substrates in Haskell. For instance, every type of kind * -> * is an object of the relevant category, even if it's a GADT or has other structure that prevents it from being nicely functorial.

The category for * is of course just the normal category of types and functions we usually call Hask, and it is fairly analogous to the category of sets. One common activity in category theory is to study categories of sets equipped with extra structure, and it turns out we can do this in Haskell, as well. And it even makes some sense to study categories of structures over any of these type categories.

When we equip our types with structure, we often use type classes, so that's how I'll do things here. Classes have a special status socially, in that we expect people to define only instances that adhere to certain equational rules. This takes the place of equations that we are not able to state in the Haskell type system, because it doesn't have dependent types. So using classes allows us to define more structures than we normally could, if only by convention.

So, if we have a kind k, then a corresponding structure will be σ :: k -> Constraint. We can then define the category (k,σ) as having objects t :: k such that there is an instance σ t. Arrows are then taken to be f :: t ~> u such that f "respects" the operations of σ.

As a simple example, we have:

 
  k = *
  σ = Monoid :: * -> Constraint
 
  Sum Integer, Product Integer, [Integer] :: (*, Monoid)
 
  f :: (Monoid m, Monoid n) => m -> n
    if f mempty = mempty
       f (m <> n) = f m <> f n
 

This is just the category of monoids in Haskell.
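
For a concrete arrow in this category, here is a small sketch of mine (not from the post): list length is a monoid homomorphism from [a] to Sum Int.

 
import Data.Monoid (Sum(..))
 
-- len mempty = mempty, and len (xs <> ys) = len xs <> len ys,
-- so len is an arrow in (*, Monoid)
len :: [a] -> Sum Int
len = Sum . length
 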

As a side note, we will sometimes want to quantify over these "categories of structures". There isn't really a good way to package together a kind and a structure such that they work as a unit, but we can just add a constraint to the quantification. So, to quantify over all Monoids, we'll use 'forall m. Monoid m => ...'.

Now, once we have these categories of structures, there is an obvious forgetful functor back into the unadorned category. We can then look for free and cofree functors as adjoints to this. More symbolically:

 
  Forget σ :: (k,σ) -> k
  Free   σ :: k -> (k,σ)
  Cofree σ :: k -> (k,σ)
 
  Free σ ⊣ Forget σ ⊣ Cofree σ
 

However, what would be nicer (for some purposes) than having to look for these is being able to construct them all systematically, without having to think much about the structure σ.

Category theory gives a hint at this, too, in the form of Kan extensions. In category terms they look like:

  p : C -> C'
  f : C -> D
  Ran p f : C' -> D
  Lan p f : C' -> D

  Ran p f c' = end (c : C). Hom_C'(c', p c) ⇒ f c
  Lan p f c' = coend (c : C). Hom_C'(p c, c') ⊗ f c

where ⇒ is a "power" and ⊗ is a "copower", which are like being able to take exponentials and products by sets (or whatever the objects of the hom category are), instead of other objects within the category. Ends and coends are like universal and existential quantifiers (as are limits and colimits, but ends and coends involve mixed variance).
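
Specialized to ordinary functors (p, f :: * -> *), these formulas transcribe almost directly into Haskell; the following sketch mirrors the Ran and Lan types of the kan-extensions package, with the universally quantified function playing the role of the end and power, and the existential pair playing the role of the coend and copower:

 
newtype Ran p f a = Ran { runRan :: forall b. (a -> p b) -> f b }
 
data Lan p f a = forall b. Lan (p b -> a) (f b)
 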

Some handy theorems relate Kan extensions and adjoint functors:

  if L ⊣ R
  then L = Ran R Id and R = Lan L Id

  if Ran R Id exists and is absolute
  then Ran R Id ⊣ R

  if Lan L Id exists and is absolute
  then L ⊣ Lan L Id

  Kan P F is absolute iff forall G. (G . Kan P F) ~= Kan P (G . F)

It turns out we can write down Kan extensions fairly generally in Haskell. Our restricted case is:

 
  p = Forget σ :: (k,σ) -> k
  f = Id :: (k,σ) -> (k,σ)
 
  Free   σ = Ran (Forget σ) Id :: k -> (k,σ)
  Cofree σ = Lan (Forget σ) Id :: k -> (k,σ)
 
  g :: (k,σ) -> j
  g . Free   σ = Ran (Forget σ) g
  g . Cofree σ = Lan (Forget σ) g
 

As long as the final category is like one of our type constructor categories, ends are universal quantifiers, powers are function types, coends are existential quantifiers and copowers are product spaces. This only breaks down for our purposes when g is contravariant, in which case they are flipped. For higher kinds, these constructions occur point-wise. So, we can break things down into four general cases, each with a case for each arity:

 
newtype Ran0 σ p (f :: k -> *) a =
  Ran0 { ran0 :: forall r. σ r => (a ~> p r) -> f r }
 
newtype Ran1 σ p (f :: k -> j -> *) a b =
  Ran1 { ran1 :: forall r. σ r => (a ~> p r) -> f r b }
 
-- ...
 
data RanOp0 σ p (f :: k -> *) a =
  forall e. σ e => RanOp0 (a ~> p e) (f e)
 
-- ...
 
data Lan0 σ p (f :: k -> *) a =
  forall e. σ e => Lan0 (p e ~> a) (f e)
 
data Lan1 σ p (f :: k -> j -> *) a b =
  forall e. σ e => Lan1 (p e ~> a) (f e b)
 
-- ...
 
data LanOp0 σ p (f :: k -> *) a =
  LanOp0 { lan0 :: forall r. σ r => (p r ~> a) -> f r }
 
-- ...
 

The more specific proposed (co)free definitions are:

 
type family Free   :: (k -> Constraint) -> k -> k
type family Cofree :: (k -> Constraint) -> k -> k
 
newtype Free0 σ a = Free0 { gratis0 :: forall r. σ r => (a ~> r) -> r }
type instance Free = Free0
 
newtype Free1 σ f a = Free1 { gratis1 :: forall g. σ g => (f ~> g) -> g a }
type instance Free = Free1
 
-- ...
 
data Cofree0 σ a = forall e. σ e => Cofree0 (e ~> a) e
type instance Cofree = Cofree0
 
data Cofree1 σ f a = forall g. σ g => Cofree1 (g ~> f) (g a)
type instance Cofree = Cofree1
 
-- ...
 

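As a sanity check (a sketch of mine, not in the linked file): at kind *, with σ = Monoid, Free0 Monoid a is the familiar free monoid, isomorphic to [a]. The (~>) arrows reduce to plain functions at this kind.

 
toList :: Free0 Monoid a -> [a]
toList (Free0 e) = e (: [])
 
fromList :: [a] -> Free0 Monoid a
fromList xs = Free0 $ \k -> foldMap k xs
 
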
We can define some handy classes and instances for working with these types, several of which generalize existing Haskell concepts:

 
class Covariant (f :: i -> j) where
  comap :: (a ~> b) -> (f a ~> f b)
 
class Contravariant f where
  contramap :: (b ~> a) -> (f a ~> f b)
 
class Covariant m => Monad (m :: i -> i) where
  pure :: a ~> m a
  join :: m (m a) ~> m a
 
class Covariant w => Comonad (w :: i -> i) where
  extract :: w a ~> a
  split :: w a ~> w (w a)
 
class Couniversal σ f | f -> σ where
  couniversal :: σ r => (a ~> r) -> (f a ~> r)
 
class Universal σ f | f -> σ where
  universal :: σ e => (e ~> a) -> (e ~> f a)
 
instance Covariant (Free0 σ) where
  comap f (Free0 e) = Free0 (e . (.f))
 
instance Monad (Free0 σ) where
  pure x = Free0 $ \k -> k x
  join (Free0 e) = Free0 $ \k -> e $ \(Free0 e) -> e k
 
instance Couniversal σ (Free0 σ) where
  couniversal h (Free0 e) = e h
 
-- ...
 

The only unfamiliar classes here should be (Co)Universal. They are for witnessing the adjunctions that make Free σ the initial σ structure and Cofree σ the final σ structure in the relevant way. Only one direction is given, since the opposite is very easy to construct with the (co)monad structure.
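
For example, for Free0 the missing direction of couniversal falls out of the pure defined above (a sketch; the name embed is mine):

 
embed :: σ r => (Free0 σ a -> r) -> (a -> r)
embed h = h . pure
 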

Free σ is a monad and couniversal, Cofree σ is a comonad and universal.

We can now try to convince ourselves that Free σ and Cofree σ are absolute. Here are some examples:

 
free0Absolute0 :: forall g σ a. (Covariant g, σ (Free σ a))
               => g (Free0 σ a) <-> Ran σ Forget g a
free0Absolute0 = (l, r)
 where
 l :: g (Free σ a) -> Ran σ Forget g a
 l g = Ran0 $ \k -> comap (couniversal $ remember0 . k) g
 
 r :: Ran σ Forget g a -> g (Free σ a)
 r (Ran0 e) = e $ Forget0 . pure
 
free0Absolute1 :: forall (g :: * -> * -> *) σ a x. (Covariant g, σ (Free σ a))
               => g (Free0 σ a) x <-> Ran σ Forget g a x
free0Absolute1 = (l, r)
 where
 l :: g (Free σ a) x -> Ran σ Forget g a x
 l g = Ran1 $ \k -> comap (couniversal $ remember0 . k) $$ g
 
 r :: Ran σ Forget g a x -> g (Free σ a) x
 r (Ran1 e) = e $ Forget0 . pure
 
free0Absolute0Op :: forall g σ a. (Contravariant g, σ (Free σ a))
                 => g (Free0 σ a) <-> RanOp σ Forget g a
free0Absolute0Op = (l, r)
 where
 l :: g (Free σ a) -> RanOp σ Forget g a
 l = RanOp0 $ Forget0 . pure
 
 r :: RanOp σ Forget g a -> g (Free σ a)
 r (RanOp0 h g) = contramap (couniversal $ remember0 . h) g
 
-- ...
 

As can be seen, the definitions share a lot of structure. I'm quite confident that with the right building blocks these could be defined once for each of the four types of Kan extensions, with types like:

 
freeAbsolute
  :: forall g σ a. (Covariant g, σ (Free σ a))
  => g (Free σ a) <~> Ran σ Forget g a
 
cofreeAbsolute
  :: forall g σ a. (Covariant g, σ (Cofree σ a))
  => g (Cofree σ a) <~> Lan σ Forget g a
 
freeAbsoluteOp
  :: forall g σ a. (Contravariant g, σ (Free σ a))
  => g (Free σ a) <~> RanOp σ Forget g a
 
cofreeAbsoluteOp
  :: forall g σ a. (Contravariant g, σ (Cofree σ a))
  => g (Cofree σ a) <~> LanOp σ Forget g a
 

However, it seems quite difficult to structure things in a way such that GHC will accept the definitions. I've successfully written freeAbsolute using some axioms, but turning those axioms into class definitions and the like seems impossible.

Anyhow, the punchline is that we can prove absoluteness using only the premise that there is a valid σ instance for Free σ and Cofree σ. This tends to be quite easy; we just borrow the structure of the type we are quantifying over. This means that in all these cases, we are justified in saying that Free σ ⊣ Forget σ ⊣ Cofree σ, and we have a very generic presentation of (co)free structures in Haskell. So let's look at some.
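
As an illustration of borrowing structure (a sketch of mine), the premise for σ = Monoid is discharged by lifting the monoid operations of the quantified result type r pointwise:

 
instance Monoid (Free0 Monoid a) where
  mempty = Free0 $ \_ -> mempty
  mappend (Free0 e) (Free0 e') = Free0 $ \k -> mappend (e k) (e' k)
 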

We've already seen Free Monoid, and last time we talked about Free Applicative, and its relation to traversals. But, Applicative is to traversal as Functor is to lens, so it may be interesting to consider constructions on that. Both Free Functor and Cofree Functor make Functors:

 
instance Functor (Free1 Functor f) where
  fmap f (Free1 e) = Free1 $ fmap f . e
 
instance Functor (Cofree1 Functor f) where
  fmap f (Cofree1 h e) = Cofree1 h (fmap f e)
 

And of course, they are (co)monads, covariant functors and (co)universal among Functors. But, it happens that I know some other types with these properties:

 
data CoYo f a = forall e. CoYo (e -> a) (f e)
 
instance Covariant CoYo where
  comap f = Transform $ \(CoYo h e) -> CoYo h (f $$ e)
 
instance Monad CoYo where
  pure = Transform $ CoYo id
  join = Transform $ \(CoYo h (CoYo h' e)) -> CoYo (h . h') e
 
instance Functor (CoYo f) where
  fmap f (CoYo h e) = CoYo (f . h) e
 
instance Couniversal Functor CoYo where
  couniversal tr = Transform $ \(CoYo h e) -> fmap h (tr $$ e)
 
newtype Yo f a = Yo { oy :: forall r. (a -> r) -> f r }
 
instance Covariant Yo where
  comap f = Transform $ \(Yo e) -> Yo $ (f $$) . e
 
instance Comonad Yo where
  extract = Transform $ \(Yo e) -> e id
  split = Transform $ \(Yo e) -> Yo $ \k -> Yo $ \k' -> e $ k' . k
 
instance Functor (Yo f) where
  fmap f (Yo e) = Yo $ \k -> e (k . f)
 
instance Universal Functor Yo where
  universal tr = Transform $ \e -> Yo $ \k -> tr $$ fmap k e
 

These are the types involved in the (co-)Yoneda lemma. CoYo is a monad, couniversal among functors, and CoYo f is a Functor. Yo is a comonad, universal among functors, and Yo f is always a Functor. So, are these equivalent types?

 
coyoIso :: CoYo <~> Free Functor
coyoIso = (Transform $ couniversal pure, Transform $ couniversal pure)
 
yoIso :: Yo <~> Cofree Functor
yoIso = (Transform $ universal extract, Transform $ universal extract)
 

Indeed they are. And similar identities hold for the contravariant versions of these constructions.

I don't have much of a use for this last example. I suppose to be perfectly precise, I should point out that these uses of (Co)Yo are not actually part of the (co-)Yoneda lemma. They are two different constructions. The (co-)Yoneda lemma can be given in terms of Kan extensions as:

 
yoneda :: Ran Id f <~> f
 
coyoneda :: Lan Id f <~> f
 

But, the use of (Co)Yo to make Functors out of things that aren't necessarily Functors is properly thought of in other terms. In short, we have some kind of category of Haskell types with only identity arrows---it is discrete. Then any type constructor, even a non-functorial one, is certainly a functor from said category (call it Haskrete) into the normal one (Hask). And there is an inclusion functor from Haskrete into Hask:

             F
 Haskrete -----> Hask
      |        /|
      |       /
      |      /
Incl  |     /
      |    /  Ran/Lan Incl F
      |   /
      |  /
      v /
    Hask

So, (Co)Free Functor can also be thought of in terms of these Kan extensions involving the discrete category.

To see more fleshed-out, loadable versions of the code in this post, see this file. I may also try a similar Agda development at a later date, as it may admit the more general absoluteness constructions more easily.

[0]: The reason for restricting ourselves to kinds involving only * and (->) is that they work much more simply than data kinds. Haskell values can't depend on type-level entities without using type classes. For *, this is natural, but for something like Bool -> *, it is more natural for transformations to be able to inspect the booleans, and so should be something more like forall b. InspectBool b => f b -> g b.

[1]: First-class types are what you get by removing type families and synonyms from consideration. The reason for doing so is that these can't be used properly as parameters and the like, except in cases where they reduce to some other type that is first-class. For example, if we define:

 
type I a = a
 

even though GHC will report I :: * -> *, it is not legal to write Transformer I I.

by Dan Doel at May 26, 2015 05:53 AM

May 23, 2015

Well-Typed.Com

Parametricity Tutorial (Part 1)

A powerful feature of Haskell’s type system is that we can deduce properties of functions by looking only at their type. For example, a function of type

f :: ∀a. a -> a

can only be the identity function: since it must return something of type a, for any type a, the only thing it can do is return the argument of type a that it was given (or crash). Similarly, a function of type

f :: ∀a. a -> a -> a

can only do one of two things: either return the first argument, or return the second. This kind of reasoning is becoming more and more important with the increasing use of types such as this definition of a “lens”:

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

Since a lens is just a function of a particular type, the only thing we can conclude about such a function is whatever we can deduce from its type.

To reason about the properties of functions based only on their types we make use of the theory of parametricity, which tells us how to derive a so-called “free theorem” from a type. This blog post is a tutorial on how to do this; it won’t explain why the theory works, only how it works. A Haskell practitioner’s guide to parametricity, if you will.

This is a two-part blog post. In part 1 (this post) we will cover the basics: constant types, functions and polymorphism (over types of kind *).

In part 2 (to be published shortly) we will deal with more advanced material: type constructors (types of kind * -> *), type classes, polymorphism over type constructors and type constructor classes.

The Basics

The main theorem of parametricity is the following:

if f :: t then f ℛ(t) f

When t is a closed type, ℛ(t) is a relation between two terms of type t (we shall see later that the type of ℛ is actually slightly more general). In words, parametricity states that any term f of type t is related to itself by ℛ(t). Don’t worry if this all looks incredibly abstract! We shall see lots and lots of examples.

Constant types (types of kind *)

For any constant type C, the relation ℛ(C) is the identity relation. In other words,

     x ℛ(C) x'
iff  x ≡ x'

(We will use ≡ throughout to mean mathematical equality, to distinguish it from Haskell’s equality function (==).)

Let’s see an example. Suppose that x :: Int. Then parametricity tells us that

     x ℛ(Int) x
iff  x ≡ x

I.e., it tells us that x is equal to itself. Not very insightful! Intuitively, this makes sense: if all we know about x is that it is an integer, we cannot tell anything about its value.

TOOLING. Many of the examples in this blog post (though sadly not all) can also be auto-derived by one of two tools. On the #haskell IRC channel we can ask lambdabot to derive free theorems for any types not involving type classes or type constructor classes. If you ask

@free x :: Int

lambdabot will reply with

x = x

(I recommend starting a private conversation with lambdabot so you avoid spamming the whole #haskell channel.)

Alternatively, you can also try the online free theorem generator. This free theorem generator is a bit more precise than lambdabot (which takes some shortcuts sometimes), and supports type classes, but cannot work with type constructors (lambdabot can work with unknown type constructors but not with quantification over type constructors, unfortunately).

Functions

For functions we map related arguments to related results:

     f ℛ(A -> B) f'
iff  forall x, x'.
       if x ℛ(A) x' then f x ℛ(B) f' x'

(The types of x and x' here depend on what precisely A is; see The type of ℛ, below.)

Example. Suppose f :: Int -> Bool. By parametricity

     f ℛ(Int -> Bool) f
iff  forall x :: Int, x' :: Int.
       if x ℛ(Int) x' then f x ℛ(Bool) f x'
-- both Int and Bool are constant types
iff  forall x :: Int, x' :: Int.
       if x ≡ x' then f x ≡ f x'
-- simplify
iff  f ≡ f

Again, not very informative. Parametricity doesn’t tell us anything about functions between constant types. Time to look at something more interesting!

Polymorphism (over types of kind *)

The definition for polymorphic values is

     f ℛ(∀a. t) f'
iff  forall A, A', a :: A ⇔ A'.
       f@A ℛ(t) f'@A'           -- where 'a' can occur free in t

That is, whenever we pick two types A and A', and some relation a between A and A', the function f@A obtained by instantiating the type variable by A must be related to the function f'@A' obtained by instantiating the type variable by A'. In what follows we will write explicit type instantiation like this only if the type is not clear from the context; specifically, we will omit it when we supply arguments to the function.

The type of ℛ.

∀ab. a -> b -> a is an example of a closed type: all type variables are bound by a universal quantifier. An open type is a type with free type variables such as ∀b. a -> b -> a or even a -> b -> a. (Note that this distinction is harder to see in Haskell where universal quantifiers are often implicit. We will not follow that convention in this article.)

We said in the introduction that if t is a closed type, ℛ(t) relates two terms of type t. As we saw, in order to be able to give a meaning to open types we need a mapping from any free variable a to a relation a :: A ⇔ A'. In this article we somewhat informally maintain this mapping simply by using the same name for the type variable and the relation.

Given two relations a :: A ⇔ A' and b :: B ⇔ B', ℛ(a -> b -> a) relates terms of type A -> B -> A with terms of type A' -> B' -> A'. It is important to realize that ℛ can therefore relate terms of different types. (For a more precise treatment, see my Coq formalization of a part of this blog post.)

The interpretation of ℛ for a free type variable a is defined in terms of the corresponding relation:

     x ℛ(a) x'     -- the type variable
iff  (x, x') ∈ a    -- the relation

Example: ∀a. a -> a

Let’s consider a number of examples, starting with an f :: ∀a. a -> a:

     f ℛ(∀a. a -> a) f
-- parametricity
iff  forall A, A', a :: A ⇔ A'.
       f@A ℛ(a -> a) f@A'
-- definition for function types
iff  forall A, A', a :: A ⇔ A', x :: A, x' :: A'.
       if x ℛ(a) x' then f x ℛ(a) f x'

It might not be immediately evident from the last line what this actually allows us to conclude about f, so let’s look at this a little closer. A function g is a special kind of relation, relating any argument x to g x; since the property holds for any kind of relation a : A ⇔ A', it must also hold for a function a⃯ :: A -> A':

     forall x, x'.
       if x ℛ(a⃯) x' then f x ℛ(a⃯) f x'
-- x ℛ(a⃯) x' iff a⃯ x ≡ x'
iff  forall x :: A, x' :: A'.
       if a⃯ x ≡ x' then a⃯ (f x) ≡ f x'
-- simplify
iff  forall x :: A.
       a⃯ (f x) ≡ f (a⃯ x)

We can apply this result to show that any f :: ∀a. a -> a must be the identity function: picking a⃯ = const x, we get const x (f x) ≡ f (const x x), i.e. x ≡ f x, as required.

NOTE. We are doing fast and loose reasoning in this tutorial and will completely ignore any totality issues. See Automatically Generating Counterexamples to Naive Free Theorems, or the associated web interface, for some insights about what wrong conclusions we can draw by ignoring totality.

Example: ∀a. a -> a -> a

Intuitively, there are only two things a function f :: ∀a. a -> a -> a can do: it can either return its first argument, or it can return its second argument. What does parametricity tell us? Let’s see:

     f ℛ(∀a. a -> a -> a) f
iff  forall A, A', a :: A ⇔ A'.
       f@A ℛ(a -> a -> a) f@A'
-- applying the rule for functions twice
iff  forall A, A', a :: A ⇔ A', x :: A, x' :: A', y :: A, y' :: A'.
       if x ℛ(a) x', y ℛ(a) y' then f x y ℛ(a) f x' y'

Let’s again specialize the last line to pick a function a⃯ :: A -> A' for the relation a:

     forall x :: A, x' :: A', y :: A, y' :: A'.
       if x ℛ(a⃯) x', y ℛ(a⃯) y' then f x y ℛ(a⃯) f x' y'
-- a⃯ is a function
iff  forall x :: A, x' :: A', y :: A, y' :: A'.
       if a⃯ x ≡ x' and a⃯ y ≡ y' then a⃯ (f x y) ≡ f x' y'
-- simplify
iff  forall x :: A, y :: A.
       a⃯ (f x y) ≡ f (a⃯ x) (a⃯ y)

So parametricity allows us to “push in” or “distribute” a⃯ over f. The fact that f must either return its first or its second argument follows from parametricity, but not in a completely obvious way; see the reddit thread How to use free theorems.
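
To make this concrete, here is a small runnable sketch of mine (not from the tutorial) checking the law for one particular f, with show standing in for a⃯:

f :: a -> a -> a
f x _ = x

-- the free theorem instance: show (f x y) ≡ f (show x) (show y)
main :: IO ()
main = print $ show (f (1 :: Int) 2) == f (show (1 :: Int)) (show (2 :: Int))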

Example: ∀ab. a -> b

Other than undefined (which we are ignoring), there can be no function f :: ∀ab. a -> b. Let’s suppose that one did exist; what does parametricity tell us?

     f ℛ(∀ab. a -> b) f
-- applying the rule for universal quantification, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       f@A,B ℛ(a -> b) f@A',B'
-- applying the rule for functions
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B', x :: A, x' :: A'.
       if x ℛ(a) x' then f x ℛ(b) f x'

Picking two functions a⃯ :: A -> A' and b⃯ :: B -> B' for a and b, we get

b⃯ . f ≡ f . a⃯

It’s not too difficult to derive a contradiction from this (remember that you can pick any two functions a⃯ and b⃯ between any types of your choosing). For instance, fixing a⃯ = id and then picking two different constant functions for b⃯ forces f x to equal two distinct values at once. Hence, such a function cannot exist.

Example: ∀ab. (a -> b) -> a -> b

The only thing a function of this type can do is apply the supplied function to the supplied argument (alternatively, if you prefer, this must be the identity function). Let’s spell this example out in a bit of detail because it is our first example of a higher order function.

     f ℛ(∀ab. (a -> b) -> a -> b) f
-- apply rule for polymorphism, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       f@A,B ℛ((a -> b) -> a -> b) f@A',B'
-- apply rule for functions, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
     forall g :: A -> B, g' :: A' -> B', x :: A, x' :: A'.
       if g ℛ(a -> b) g' and x ℛ(a) x' then f g x ℛ(b) f g' x'

Let’s expand what that premise g ℛ(a -> b) g' means:

     g ℛ(a -> b) g'
iff  forall y :: A, y' :: A'.
       if y ℛ(a) y' then g y ℛ(b) g' y'

For the special case that we pick functions a⃯ :: A -> A' and b⃯ :: B -> B' for a and b, that premise collapses to

     forall y :: A, y' :: A'.
       if y ℛ(a⃯) y' then g y ℛ(b⃯) g' y'
-- a⃯ and b⃯ are functions
iff  forall y :: A, y' :: A'.
       if a⃯ y ≡ y' then b⃯ (g y) ≡ g' y'
-- simplify
iff  forall y :: A.
       b⃯ (g y) ≡ g' (a⃯ y)
-- simplify (extensionality)
iff  b⃯ . g ≡ g' . a⃯

So that the free theorem for f :: ∀ab. (a -> b) -> a -> b becomes

if b⃯ . g ≡ g' . a⃯ then b⃯ . f g ≡ f g' . a⃯

Instantiating the theorem at g := id and g' := g, with a⃯ = id and b⃯ = g (verify that the premise holds), we get g . f id ≡ f g; since f id ≡ id by our first example (∀a. a -> a), indeed g ≡ f g, as expected.

Useful shortcut. This pattern is worth remembering:

      g ℛ(a⃯ -> b⃯) g'
iff   b⃯ . g ≡ g' . a⃯
whenever a⃯ and b⃯ are function(al relation)s.

Example: ∀ab. (∀c. c -> String) -> a -> b -> String

A function f :: ∀ab. (∀c. c -> String) -> a -> b -> String is not only higher order, but has a rank-2 type: it insists that the function it is given is itself polymorphic. This makes it possible to write, for example

f g x y = g x ++ g y

Note that since x and y have different types, it is important that g is polymorphic. What is the free theorem for f?

     f ℛ(∀ab. (∀c. c -> String) -> a -> b -> String) f
-- apply rule for polymorphism, twice
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
       f ℛ((∀c. c -> String) -> a -> b -> String) f
-- apply rule for functions three times, and simplify ℛ(String) to (≡)
iff  forall A, A', B, B', a :: A ⇔ A', b :: B ⇔ B'.
     forall g :: ∀c. c -> String, g' :: ∀c. c -> String.
     forall x :: A, x' :: A', y :: B, y' :: B'.
       if
         g ℛ(∀c. c -> String) g', x ℛ(a) x', y ℛ(b) y'
       then
         f g x y ≡ f g' x' y'

Specializing this theorem to functions a⃯ :: A -> A' and b⃯ :: B -> B' we get

forall A, A', B, B', a⃯ :: A -> A', b⃯ :: B -> B'.
forall g :: ∀c. c -> String, g' :: ∀c. c -> String.
forall x :: A, y :: B.
  if
    g ℛ(∀c. c -> String) g'
  then
    f g x y ≡ f g' (a⃯ x) (b⃯ y)

But that is somewhat surprising, because it seems to say that the values of x and y cannot matter at all! What is going on? Expanding the first premise:

     g ℛ(∀c. c -> String) g'
iff  forall C, C', c :: C ⇔ C'.
       g ℛ(c -> String) g'
iff  forall C, C', c :: C ⇔ C', z :: C, z' :: C'.
       if z ℛ(c) z' then g z ≡ g' z'

Let’s stop for a moment to ponder what this requirement for g and g' really says: given any relation c, and any elements z and z' that are related by c—in other words, for any z and z' at all—we must have that g z and g' z' give us equal results. This means that g and g' must be constant functions, and the same constant function at that. As a consequence, for any function f of the above type, f g must itself be constant in x and y. In part two we will see a more useful variation which uses the Show type class.

Incidentally, note that this quantification over an arbitrary relation c is a premise to the free theorem, not a conclusion; hence we cannot simply choose to consider only functions c.

TOOLING. Indeed, if you enter

(forall c. c -> String) -> a -> b -> String

in the online free theorem generator you will see that it first gives the free theorem using relations only, and then says it will reduce all “permissible” relation variables to functions; in this example, that is all relations except for c. lambdabot doesn’t make this distinction and always reduces relations to functions, which is not correct.

To be continued

In part 2 (to be published shortly) we will cover type constructors, type classes and type constructor classes. Meanwhile, here are some links to papers on the subject if you want to read more.

by edsko at May 23, 2015 02:26 PM

wren gayle romano

New Website

I have a new academic/professional website: http://cl.indiana.edu/~wren/!

There are still a few unfinished areas (e.g., my publications page), but hopefully I'll be finished with them shortly. The new site is built with Hakyll instead of my old Google-Summer-of-Code static website generator. I'm still learning how to implement my old workflow in Hakyll, and if I get the chance I'll write a few posts on how I've set things up. Reading through other folks' posts on how they use Hakyll has been very helpful, and I'd like to give back to the community. I've already issued a pull request for adding two new combinators for defining template fields.

In the meantime, if you notice any broken links from my old blog posts, please let me know.




May 23, 2015 03:14 AM

May 22, 2015

FP Complete

What do Haskellers want? Over a thousand tell us

The Commercial Haskell SIG members want to help people adopt Haskell. What would help? Data beats speculation, so FP Complete recently emailed surveys to over 16000 people interested in Haskell. The questions were aimed at identifying needs rather than celebrating past successes, and at helping applied users rather than researchers.

Over 1240 people sent detailed replies, spending over 250 person-hours to provide their answers.

This rich data set includes extensive information on Haskell user needs. We are open-sourcing the entire anonymized data set, downloadable by clicking here [.zip]. There are numeric ratings and extensive textual comments. Feel free to analyze the data -- or just read the summaries -- and share your most helpful and actionable conclusions with the community. We will too.

First Results

Although we have completed only basic analysis, here are some of our first findings -- those so clear that they show up on even the most basic examination of the aggregate data.

  • Satisfaction with the language, the compiler, and the community are high.
  • Among non-students, 58% would recommend Haskell for a project at their workplace, but only 26% actually use it at work -- partly due to colleagues unfamiliar with Haskell who see it as requiring skills that are hard to obtain, or who need to see more success stories. Would improvement to colleagues' perceptions make a difference in the team's choice of Haskell for a project? 33% of respondents rated this "crucial" and another 26% said it would be "important", while only 16% said it would be a "slight help" or no help.
  • Package management with cabal is the single worst aspect of using Haskell. Asked if improvements to package management would make a difference to their future choice of Haskell for a project, 38% said it would be "crucial" and a further 29% said it would be "important". Comments connected cabal with words like hell, pain, awful, sucks, frustrating, and hideous. Only this topic showed such grave dissatisfaction.
  • Documentation improvements are a very high priority. For example, users need more concrete tutorials and templates showing them exactly what kinds of problems Haskell is good at solving, and exactly how to implement such programs completely. 65% of respondents said improvements to documentation and learning resources would be crucial or important, and a further 23% said they would be helpful. However, comments did not begin to approach the level of concern seen with cabal.
  • Skills are a priority. Users need to see that people with Haskell skills are readily available, and that Haskell skills are quite feasible to learn. A majority of respondents said an improvement in availability of skilled personnel would make an important or crucial difference to them, and many also expressed concern about their or colleagues' abilities to learn the needed concepts and skills.

We have started deeper statistical analysis of the data, and we hope that some readers of this post will perform -- and share -- even better analyses than we can. New issues may become clearer by clustering or segmenting users, or through other statistical techniques. Also, we may find more clarity about needs through deeper study of textual responses. Follow-up studies are also a possibility.

We propose that the community, given this large and detailed data set, should set some of its priorities in a data-driven manner focused on user-expressed needs. This effort should be complementary to the ongoing research on issues of historic Haskell strength such as language design and runtime architecture.

Areas for Further Work

We request that useful findings or insights derived from the open-sourced data set be shared with the community, including attribution of the source of the data and the methods of analysis used.

The data collected strongly invites clustering or segmentation, so as to identify needs of different sub-populations. FP Complete has already begun one such study.

The data collected includes extensive textual remarks which should be studied by knowledgeable people for insights. Automated text analysis methods may also be applicable.

Cost-benefit analysis seems worthwhile: based on identified needs, what improvements would help the most people, to the greatest degree, at the least cost and/or the least delay? A method to match volunteer contributors with identified high-payoff projects also seems worthwhile.

It would be useful to merge the data from versions 0.1 and 0.2 with version 1.0 of the survey, since 1.0 includes only 71% of the total answers received. Differences between the questions and scales make this a nontrivial, but still realistic, goal.

If important new hypotheses require testing, or if further detail is needed, we intend to conduct follow-up research at some future date among users who volunteered their email addresses for follow-up questions.

A future repeat survey could determine which improvement efforts are successful.

Methodology Notes

This was not a survey of the general public, but of a population motivated to provide feedback on Haskell. Invitees included 16165 non-opted-out email addresses gathered from FP Complete's website, in randomized order. Due to privacy considerations this list will not be published, but FP Complete was able to use it to contact these users since the survey was directly related to their demonstrated interest in Haskell. The high quality of the list is reflected in the extremely high response rate (7.7%), the low bounce rate (1.9%), and the low unsubscribe rate (also 1.9%).

Surveys were conducted using SurveyGizmo.com, with an email inviting each participant to click a link to a four-page Web-based survey. Survey form 0.1 invitations went to 1999 users of whom 190 completed the survey. Survey form 0.2, incorporating some edits, went to 2000 users of whom 170 completed the survey. Survey form 1.0, incorporating further edits, went to 12166 users of whom 894 completed the survey.

Form 0.2 incorporated edits to eliminate questions yielding little information about how to help users, either because satisfaction was very high (the language itself, the compiler, the community) or because two questions were redundant. Also, new questions inspired by textual responses to form 0.1 were included.

Form 1.0 incorporated further such edits. Also, the rating scale was changed to ask about helping the user's (and team's) future choice of Haskell rather than current usefulness/difficulty. The ratings questions were displayed under the heading "Would improvements help you and your group to choose Haskell for your future work?"

Responses were processed anonymously, but users were given the option to fill in their email address if they would accept follow-up questions, and the option to name their company/organization. Users were informed that the survey results, without these fields, would be shared with the community.

Acknowledgments

We are grateful to the many, many people who spent their valuable time and expertise completing and returning their survey forms. Thanks to Dr. Tristan Webb and Ms. Noelle McCool-Smiley, both of FP Complete, for their material help in formulating and conducting the survey. Thanks to FP Complete's corporate customers for providing the revenues that allow us to fund this and other community projects. Thanks to the Commercial Haskell SIG for providing the motivation behind this project. Thanks to the many volunteers who've spent absolutely huge amounts of time and expertise making Haskell as good as it is today, and who continue to make improvements like those requested by the survey participants. Thanks to the companies that allow some of their staff to spend company time making such contributions to the common good. Special thanks to the late Professor Paul Hudak; may we all strive to live up to his example.

May 22, 2015 08:00 PM

The new Stackage Server

tl;dr Please check out beta.stackage.org

I made the first commit to the Stackage Server code base a little over a year ago. The goal was to provide a place to host package sets which both limited the number of packages from Hackage available, and modified packages where necessary. This server was to be populated by regular Stackage builds, targeted at multiple GHC versions, and consisted of both inclusive and exclusive sets. It also allowed interested individuals to create their own package sets.

If any of those details seem surprising today, they should. A lot has happened for the Stackage project in the past year, making details of what was initially planned irrelevant, and making other things (like hosting of package documentation) vital. We now have LTS Haskell. Instead of running with multiple GHC versions, we have Stackage Nightly which is targeted at a single GHC major version. To accommodate goals for GPS Haskell (which unfortunately never materialized), Stackage no longer makes corrections to upstream packages.

I could go into lots more detail on what is different in project requirements. Instead, I'll just summarize: I've been working on a simplified version of the Stackage Server codebase to address our goals better, more easily ensure high availability, and make the codebase easier to maintain. We also used this opportunity to test out a new hosting system our DevOps team put together. The result is running on beta.stackage.org, and will replace the official stackage.org after a bit more testing (which I hope readers will help with).

The code

All of this code lives on the simpler branch of the stackage-server code base, and much to my joy, resulted in quite a bit less code. In fact, there's just about a 2000 line reduction. The rest of this post will get into how that happened.

No more custom package sets

One of the features I mentioned above was custom package sets. This fell out automatically from the initial way Stackage Server was written, so it was natural to let others create package sets of their own. However, since release, only one person actually used that feature. I discussed with him, and he agreed with the decision to deprecate and then remove that functionality.

So why get rid of it now? Two powerful reasons:

  • We already host a public mirror of all packages on S3. Since we no longer patch upstream packages, it's best if tooling is able to just refer to that high-reliability service.
  • We now have Git repositories for all of LTS Haskell and Stackage Nightly. Making these the sources of package sets means we don't have two (possibly conflicting) sources of data. That brings me to the second point.

Upload code is gone

We had some complicated logic to allow users to upload package sets. It started off simple, but over time we added Haddock hosting and other metadata features, making the code more complex. Actually, it ended up having two parallel code paths for this. So instead, we now just upload information on the package sets to the Git repositories, and leave it up to a separate process (described below) to clone these repositories and make the data available to the server.

Haddocks on S3

After generating a snapshot, the Haddocks used to be tarred and compressed, and then uploaded as a compressed bundle to S3. Then, Stackage Server would receive a request for files, unpack them, and serve them. This presented some problems:

  • Users would have to wait for a first request to succeed during the unpacking
  • With enough snapshots being generated, we would eventually run out of disk space and need to clear our temp directory
  • Since we run our cluster in a high availability mode with multiple horizontally-scaled machines, one machine may have finished unpacking when another didn't, resulting in unstyled content (see issue #82).

Instead, we now just upload the files to S3 and redirect there from stackage-server (though we'll likely switch to reverse proxying to allow for nicer SSL urls). In fact, you can easily view these docs, at URLs such as http://haddock.stackage.org/lts-2.9/ or https://s3.amazonaws.com/haddock.stackage.org/nightly-2015-05-21/index.html.

These Haddocks are publicly available, and linkable from projects beyond Stackage Server. Each set of Haddocks is guaranteed to have consistent internal links to other compatible packages. And while some documentation doesn't generate due to known package bugs, the generation is otherwise reliable.

I've already offered access to these docs to Duncan for usage on Hackage, and hope that will improve the experience for users there.

Metadata SQLite database

Previously, information on snapshots was stored in a PostgreSQL database that was maintained by Stackage Server. This database also had package metadata, like author, homepage, and description. Now, we have a completely different process:

  • The all-cabal-metadata from the Commercial Haskell Special Interest Group provides an easily cloneable Git repo with package metadata, which is automatically updated by Travis.
  • We run a cron job on the stackage-build server that updates the lts-haskell, stackage-nightly, and all-cabal-metadata repos and generates a SQLite database from them with all of the data that Stackage Server needs. You can look at the Stackage.Database module for some ideas of what this consists of. That database gets uploaded to Amazon S3, and is actually publicly available if you want to poke at it.
  • The live server downloads a new version of this file on a regular basis.

I've considered spinning off the Stackage.Download code into its own repository so that others can take advantage of this functionality in different contexts if desired. Let me know if you're interested.

At this point, the PostgreSQL database is just used for non-critical functionality, such as social features (tags and likes).

Slightly nicer URLs

When referring to a snapshot, there are "official" short names (slugs), of the form lts-2.9 and nightly-2015-05-22. The URLs on the new server now reflect this perfectly, e.g.: https://beta.stackage.org/nightly-2015-05-22. We originally used hashes of the snapshot content for the original URLs, but that was fixed a while ago. Now that we only have to support these official snapshots, we can always (and exclusively) use these short names.

As a convenience, if you visit the following URLs, you get automatic redirects:

  • /nightly redirects to the most recent nightly
  • /lts to the latest LTS
  • /lts-X to the latest LTS in the X.* major version (e.g., today, /lts-2 redirects to /lts-2.9)

This also works for URLs under that hierarchy. For example, consider https://beta.stackage.org/lts/cabal.config, which is an easy way to get set up with LTS in your project (by running wget https://beta.stackage.org/lts/cabal.config).

ECS-based hosting

While not a new feature of the server itself, the hosting cluster we're running this on is brand new. Amazon recently released EC2 Container Service, which is a service for running Docker containers. Since we're going to be using this for the new School of Haskell, it's nice to be giving it a serious usage now. We also make extensive use of Docker for customer projects, both for builds and hosting, so it's a natural extension for us.

This ECS cluster uses standard Amazon services like Elastic Load Balancer (ELB) and auto-scaling to provide for high availability in the case of machine failure. And while we have a lot of confidence in our ability to keep Stackage Server up and running regularly, it's nice that our most important user-facing content is provided by these external services: the S3 package mirror, the Haddocks and snapshot database hosted on S3, and the Git repositories for the package sets.

This provides for a pleasant experience in both browsing the website and using Stackage in your build system.

A special thanks to Jason Boyer for providing this new hosting cluster, which the whole FP Complete team is looking forward to putting through its paces.

May 22, 2015 07:30 AM

May 21, 2015

Neil Mitchell

Handling Control-C in Haskell

Summary: The development version of ghcid seemed to have some problems with terminating when Control-C was hit, so I investigated and learnt some things.

Given a long-running/interactive console program (e.g. ghcid), when the user hits Control-C/Ctrl-C the program should abort. In this post I'll describe how that works in Haskell, how it can fail, and what asynchronous exceptions have to do with it.

What happens when the user hits Ctrl-C?

When the user hits Ctrl-C, GHC raises an async exception of type UserInterrupt on the main thread. This happens because GHC installs an interrupt handler which raises that exception, sending it to the main thread with throwTo. If you install your own interrupt handler you won't see this behaviour and will have to handle Ctrl-C yourself.
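
For reference, here is a minimal POSIX-only sketch (using the unix package; not from the original post) that recreates the default behaviour with a hand-installed handler:

import Control.Concurrent (myThreadId, threadDelay)
import Control.Exception (AsyncException(UserInterrupt), throwTo)
import Control.Monad (forever)
import System.Posix.Signals (Handler(Catch), installHandler, sigINT)

main :: IO ()
main = do
    tid <- myThreadId
    -- deliver UserInterrupt to the main thread, as GHC's handler does
    _ <- installHandler sigINT (Catch $ throwTo tid UserInterrupt) Nothing
    forever $ threadDelay 1000000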

There are reports that if the user hits Ctrl-C twice the runtime will abort the program. In my tests, that seems to be a feature of the shell rather than GHC itself: in the Windows Command Prompt no amount of Ctrl-C stops an errant program, while in Cygwin a single Ctrl-C works.

What happens when the main thread receives UserInterrupt?

There are a few options:

  • If you are not masked and there is no exception handler, the thread will abort, which causes the whole program to finish. This behaviour is the desirable outcome if the user hits Ctrl-C.
  • If you are running inside an exception handler (e.g. catch or try) which is capable of catching UserInterrupt then the UserInterrupt exception will be returned. The program can then take whatever action it wishes, including rethrowing UserInterrupt or exiting the program (a small sketch follows this list).
  • If you are running with exceptions masked, then the exception will be delayed until you stop being masked. The most common way of running while masked is if the code is the second argument to finally or one of the first two arguments to bracket. Since Ctrl-C will be delayed while the program is masked, you should only do quick things while masked.
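
Here is a minimal sketch of that second case: catching UserInterrupt, reacting, and rethrowing so the program still aborts (the work action is a hypothetical stand-in):

import Control.Exception

main :: IO ()
main = work `catch` \e -> case e of
    UserInterrupt -> putStrLn "interrupted, shutting down" >> throwIO e
    _ -> throwIO e
  where
    work = putStrLn "working..." -- stands in for the real program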

How might I lose UserInterrupt?

The easiest way to "lose" a UserInterrupt is to catch it and not rethrow it. Taking a real example from ghcid, I sometimes want to check if two paths refer to the same file, and to make that check more robust I call canonicalizePath first. This function raises errors in some circumstances (e.g. the directory containing the file does not exist), but is inconsistent about error conditions between OS's, and doesn't document its exceptions, so the safest thing is to write:

canonicalizePathSafe :: FilePath -> IO FilePath
canonicalizePathSafe x = canonicalizePath x `catch`
    \(_ :: SomeException) -> return x

If there is any exception, just return the original path. Unfortunately, the catch will also catch and discard UserInterrupt. If the user hits Ctrl-C while canonicalizePath is running the program won't abort. The problem is that UserInterrupt is not thrown in response to the code inside the catch, so ignoring UserInterrupt is the wrong thing to do.

What is an async exception?

In Haskell there are two distinct ways to throw exceptions, synchronously and asynchronously.

  • Synchronous exceptions are raised on the calling thread, using functions such as throw and error. The point at which a synchronous exception is raised is explicit and can be relied upon.
  • Asynchronous exceptions are raised by a different thread, using throwTo and a different thread id. The exact point at which the exception occurs can vary.

How is the type AsyncException related?

In Haskell, there is a type called AsyncException, containing four exceptions - each special in their own way:

  • StackOverflow - the current thread has exceeded its stack limit.
  • HeapOverflow - never actually raised.
  • ThreadKilled - raised by calling killThread on this thread. Used when a programmer wants to kill a thread.
  • UserInterrupt - the one we've been talking about so far, raised on the main thread by the user hitting Ctrl-C.

While these have a type AsyncException, that's only a hint as to their intended purpose. You can throw any exception either synchronously or asynchronously. In our particular case of canonicalizePathSafe, if canonicalizePath causes a StackOverflow, we are probably happy to take the fallback case, but likely the stack was already close to the limit and the overflow will occur again soon. If the programmer calls killThread that thread should terminate, but in ghcid we know this thread won't be killed.

How can I avoid catching async exceptions?

There are several ways to avoid catching async exceptions. Firstly, since we expect canonicalizePath to complete quickly, we can just mask all async exceptions:

canonicalizePathSafe x = mask_ $
    canonicalizePath x `catch` \(_ :: SomeException) -> return x

We are now guaranteed that catch will not receive an async exception. Unfortunately, if canonicalizePath takes a long time, we might delay Ctrl-C unnecessarily.

Alternatively, we can catch only non-async exceptions:

canonicalizePathSafe x = catchJust
    (\e -> if async e then Nothing else Just e)
    (canonicalizePath x)
    (\_ -> return x)

async e = isJust (fromException e :: Maybe AsyncException)

We use catchJust to only catch exceptions which aren't of type AsyncException, so UserInterrupt will not be caught. Of course, this actually avoids catching exceptions of type AsyncException, which is only related to async exceptions by a partial convention not enforced by the type system.

Finally, we can catch only the relevant exceptions:

canonicalizePathSafe x = canonicalizePath x `catch`
    \(_ :: IOException) -> return x

Unfortunately, I don't know what the relevant exceptions are - on Windows canonicalizePath never seems to throw an exception. However, IOException seems like a reasonable guess.

How to robustly deal with UserInterrupt?

I've showed how to make canonicalizePathSafe not interfere with UserInterrupt, but now I need to audit every piece of code (including library functions I use) that runs on the main thread to ensure it doesn't catch UserInterrupt. That is fragile. A simpler alternative is to push all computation off the main thread:

import Control.Concurrent.Extra
import Control.Exception.Extra

ctrlC :: IO () -> IO ()
ctrlC act = do
    bar <- newBarrier
    forkFinally act $ signalBarrier bar
    either throwIO return =<< waitBarrier bar

main :: IO ()
main = ctrlC $ ... as before ...

We are using the Barrier type from my previous blog post, which is available from the extra package. We create a Barrier, run the main action on a forked thread, then marshal completion/exceptions back to the main thread. Since the main thread has no catch operations and only a few (audited) functions on it, we can be sure that Ctrl-C will quickly abort the program.

Using version 1.1.1 of the extra package we can simplify the code to ctrlC = join . onceFork.

What about cleanup?

Now we've pushed most actions off the main thread, any finally sections are on other threads, and will be skipped if the user hits Ctrl-C. Typically this isn't a problem, as program shutdown automatically cleans all non-persistent resources. As an example, ghcid spawns a copy of ghci, but on shutdown the pipes are closed and the ghci process exits on its own. If we do want robust cleanup of resources such as temporary files we would need to run the cleanup from the main thread, likely using finally.
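
For example, cleanup can be attached on the main thread around the ctrlC wrapper from above (a sketch; realMain and cleanup are hypothetical stand-ins):

main :: IO ()
main = ctrlC realMain `finally` cleanup
  where
    realMain = putStrLn "the real work"           -- hypothetical stand-in
    cleanup  = putStrLn "removing temporary files" -- hypothetical stand-in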

Should async exceptions be treated differently?

At the moment, Haskell defines many exceptions, any of which can be thrown either synchronously or asynchronously, but then hints that some are probably async exceptions. That's not a very Haskell-like thing to do. Perhaps there should be a catch which ignores exceptions thrown asynchronously? Perhaps the sync and async exceptions should be of different types? It seems unfortunate that functions have to care about async exceptions as much as they do.

Combining mask and StackOverflow

As a curiosity, I tried to combine a function that stack overflows (using -O0) and mask. Specifically:

main = mask_ $ print $ foldl (+) 0 [1..1000000]

I then ran that with +RTS -K1k. That prints out the value computed by the foldl three times (seemingly just a buffering issue), then fails with a StackOverflow exception. If I remove the mask, it just fails with StackOverflow. It seems that by disabling StackOverflow I'm allowed to increase my stack size arbitrarily. Changing print to appendFile causes the file to be created but not written to, so it seems there are oddities about combining these features.

Disclaimer

I'm certainly not an expert on async exceptions, so corrections welcome. All the above assumes compiling with -threaded, but most applies without -threaded.

by Neil Mitchell ([email protected]) at May 21, 2015 08:19 PM

May 20, 2015

Functional Jobs

Clojure/Erlang backend engineer/architect at Zoomo (Full-time)

You will:

  1. Write backend services that power our listings, internal dashboards and platform. Our services are built on Erlang, Python and Clojure.
  2. Write clean, fast and maintainable code using functional paradigms.
  3. Research on latest trends in software and adapt them for business needs.
  4. Contribute to open source libraries and write educational blogs.
  5. Take care of health of production setup and ensure all systems are up and running at all times.

What are we looking for ?

  1. Strong fundamentals of computer programming.
  2. Familiarity with any functional programming language.
  3. Have contributed to open source projects.
  4. Have shown a real interest in learning programming.
  5. Ability to learn new languages and frameworks.
  6. In-depth knowledge of libraries and programming languages used on a day to day basis.
  7. In-depth knowledge of rest services, http protocol and messaging services.
  8. Comfortable with event driven programming and multi-threaded programming.

Good to have

  1. Familiarity with angularjs/react.
  2. Familiarity with clojure/lisp/haskell/erlang/sml/racket/scala.

Perks :

  1. Freedom to experiment and evangelize latest technologies.
  2. Best in industry salary and significant equity.
  3. Leadership position and fast track career growth.

What are we doing?

We are a peer-to-peer marketplace for used cars. We accelerate p2p car transactions. We want to offer the average Indian, a credible, easy alternative to buying an expensive new car. As you read this, we are coding to make used car buying as systematic as ordering a phone off flipkart.

Where can I see what you have built thus far?

Download zoomo for android. On apple/ desktop check us out at www.gozoomo.com.

Why used cars?

Used cars is a $10B market, and will grow tenfold to $100B in the 7-10 year timeframe.

Buying a used car makes immense economic sense for a value-for-money economy like ours. A car is a depreciating asset, often losing ~30% of its value in the first year itself. Why buy a mid-segment hatchback for 6L when you can get an almost-new premium hatchback/sedan at 4L?

What is it that we are fixing?

Buying a used car is horribly difficult today. Used car buying is one experience still untouched by advances in interfaces, devices, networks and technology in general. As per our studies, more than 70% of users who start searching for a used car drop off to buy an expensive, new car.

We are changing all that. We are making people happy by saving them big bucks. By finding them cars they fall in love with.

So how are you solving this/ are different/ are confident that this is going to be big?

Meet us in person and we will spill the beans :) You will get great coffee and 2-way conveyance.

Get information on how to apply for this position.

May 20, 2015 10:40 AM

FP Complete

Call C functions from Haskell without bindings

Because Haskell is a language of choice for many problem domains, and for scales ranging from one-off scripts to full scale web services, we are fortunate to by now have over 8,000 open source packages (and a few commercial ones besides) available to build from. But in practice, Haskell programming in the real world involves interacting with myriad legacy systems and libraries. This is partly because the industry is far older than the comparatively recent strength of our community, but also because quality new high-performance libraries are created every day in languages other than Haskell, be they intensive numerical codes or frameworks for shuffling bits across machines. Today we are releasing inline-c, a package for writing mixed C/Haskell source code that seamlessly invokes native and foreign functions in the same module. No FFI required.

The joys of programming with foreign code

Imagine that you just found a C library that you wish to use for your project. The standard workflow is to:

  1. check Hackage if a package with a set of bindings for that library exists,
  2. if one does, program against that, or
  3. if it doesn't, write your own bindings package, using Haskell's FFI.

Writing and maintaining bindings for large C libraries is hard work. The libraries are constantly updated upstream, so the bindings you find are invariably out-of-date: they provide only partial coverage of the library's API, sometimes don't compile cleanly against the latest upstream version of the C library, or need convoluted and error-prone conditional compilation directives to support multiple versions of the API in the package. Which is a shame, because typically you only need to perform a very specific task using some C library, using only a minute proportion of its API. It can be frustrating for a bindings package to fail to install, only because the binding for some function that you'll never use doesn't match up with the header files of the library version you happen to have installed on your system.

This is especially true for large libraries that expose sets of related but orthogonal and independently useful functions, such as GTK+, OpenCV, or numerical libraries such as the GNU Scientific Library (GSL), NAG, and IMSL. inline-c lets you call functions from these libraries using the full power of C's syntax, directly from client code, without the need for monolithic bindings packages. High-level bindings (or "wrappers") may still be useful to wrap low-level details into an idiomatic Haskell interface, but inline-c enables rapid prototyping and iterative development of code that uses some of the C library directly today, leaving for later the task of abstracting calls into a high-level, type-safe wrapper as needed. In short, inline-c lets you "pay as you go" when programming foreign code.

We first developed inline-c for use with numerical libraries, in particular the popular and very high quality commercial NAG library, for tasks including ODE solving, function optimization, and interpolation. If getting seamless access to the gold standard of fast and reliable numerical routines is what you need, then you will be interested in our companion package to work specifically with NAG, inline-c-nag.

A taste of inline-c

What follows is just a teaser of what can be done with inline-c. Please refer to the Haddock documentation and the README for more details on how to use the showcased features.

Let's say we want to use C's variadic printf function and its convenient string formats. inline-c lets you write this function call inline, without any need for a binding to the foreign function:

{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TemplateHaskell #-}

import qualified Language.C.Inline as C

C.include "<stdio.h>"
C.include "<math.h>"

main :: IO ()
main = do
   x <- [C.exp| int{ printf("Some number: %.2f\n", cos(0.5)) } |]
   putStrLn $ show x ++ " characters printed."

Importing Language.C.Inline brings into scope the Template Haskell function include to include C headers (<stdio.h> and <math.h>), and the exp quasiquoter for embedding expressions in C syntax in Haskell code. Notice how inline-c has no trouble even with C functions that have advanced calling conventions, such as variadic functions. This is a crucial point: we have the full power of C available at our fingertips, not just whatever can be shoe-horned through the FFI.

We can capture Haskell variables to be used in the C expression, such as when computing x below:

mycos :: CDouble -> IO CDouble
mycos x = [C.exp| double{ cos($(double x)) } |]

The anti-quotation $(double x) indicates that we want to capture the variable x from the Haskell environment, and that we want it to have type double in C (inline-c will check at compile time that this is a sensible type ascription).

We can also splice in a block of C statements, and explicitly return the result:

C.include "<stdio.h>"

-- | @readAndSum n@ reads @n@ numbers from standard input and returns
-- their sum.
readAndSum :: CInt -> IO CInt
readAndSum n = do
  x <- [C.block| int {
      int i, sum = 0, tmp;
      for (i = 0; i < $(int n); i++) {
        scanf("%d ", &tmp);
        sum += tmp;
      }
      return sum;
    } |]
  print x
  return x

Finally, the library provides facilities to easily use Haskell data in C. For example, we can easily use Haskell ByteStrings in C:

{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE QuasiQuotes #-}
import qualified Data.ByteString as BS
import           Data.Monoid ((<>))
import           Foreign.C.Types
import qualified Language.C.Inline as C

C.context (C.baseCtx <> C.bsCtx)

-- | Count the number of set bits in a 'BS.ByteString'.
countSetBits :: BS.ByteString -> IO CInt
countSetBits bs = [C.block|
    int {
      int i, bits = 0;
      for (i = 0; i < $bs-len:bs; i++) {
        unsigned char ch = $bs-ptr:bs[i];
        bits += (ch * 01001001001ULL & 042104210421ULL) % 017;
      }
      return bits;
    }
  |]

In this example, we use the bs-len and bs-ptr anti-quoters to get the length and pointer for a Haskell ByteString. inline-c has a modular design: these anti-quoters are completely optional and can be included on-demand. The C.context invocation adds the extra ByteStrings anti-quoters to the base set. Similar facilities are present to easily use Haskell Vectors as well as for invoking Haskell closures from C code.
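As a sketch of the Vector facilities just mentioned (following the pattern from inline-c's documentation; vecCtx supplies the vec-len and vec-ptr anti-quoters used below):

{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TemplateHaskell #-}
import qualified Data.Vector.Storable.Mutable as VM
import           Data.Monoid ((<>))
import           Foreign.C.Types
import qualified Language.C.Inline as C

C.context (C.baseCtx <> C.vecCtx)

-- | Sum the elements of a mutable storable vector using a C loop.
sumVec :: VM.IOVector CDouble -> IO CDouble
sumVec vec = [C.block| double {
    int i;
    double sum = 0;
    for (i = 0; i < $vec-len:vec; i++) {
      sum += $vec-ptr:(double *vec)[i];
    }
    return sum;
  } |]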

Larger examples

We have included various examples in the inline-c and inline-c-nag repositories. Currently they're geared toward scientific and numerical computing, but we would welcome contributions using inline-c in other fields.

For instance, gsl-ode.hs is a great example of combining the strengths of C and the strengths of Haskell to good effect: we use a function from C's GNU Scientific Library for solving ordinary differential equations (ODE) to solve a Lorenz system, and then take advantage of the very nice Chart-diagrams Haskell library to display its x and z coordinates:

[Figure: plot of the Lorenz system's x and z coordinates]

In this example, the vec-ptr anti-quoter is used to get a pointer out of a mutable vector:

$vec-ptr:(double *fMut)

where fMut is a variable of type Data.Vector.Storable.Mutable.IOVector CDouble. Moreover, the fun anti-quoter is used to get a function pointer from a Haskell function:

$fun:(int (* funIO) (double t, const double y[], double dydt[], void * params))

where funIO is a Haskell function of type

CDouble -> Ptr CDouble -> Ptr CDouble -> Ptr () -> IO CInt

Note that all these anti-quoters (apart from the ones where only one type is allowed, like vec-len or bs-ptr) force the user to specify the target C type. The alternative would have been to write the Haskell type. Either way some type ascription is unfortunately required, due to a limitation of Template Haskell. We chose C type annotations because this way the user can state explicitly, and understand precisely, the target type of any marshalling.

To be precise, the annotations are needed because it is not possible to get the type of locally defined variables in Template Haskell.

How it works under the hood

inline-c generates a piece of C code for most of the Template Haskell functions and quasi-quoters that it exports. So when you write

[C.exp| double{ cos($(double x)) } |]

a C function gets generated:

double some_name(double x) {
    return cos(x);
}

This function is then bound to in Haskell through an automatically generated FFI import declaration and invoked passing the right argument -- the x variable from the Haskell environment. The types specified in C are automatically translated to the corresponding Haskell types, to generate the correct type signatures.
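For instance, the import generated for the cos example would look roughly like this (some_name is a stand-in; the real symbol name is generated):

foreign import ccall "some_name"
  c_some_name :: CDouble -> IO CDouble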

Custom anti quoters, such as vec-ptr and vec-len, handle the C and Haskell types independently. For example, when writing

[C.block| double {
    int i;
    double res;
    for (i = 0; i < $vec-len:xs; i++) {
      res += $vec-ptr:(double *xs)[i];
    }
    return res;
  } |]

we'll get a function of type

double some_name(int xs_len, double *xs_ptr)

and on the Haskell side the variable xs will be used in conjunction with some code getting its length and the underlying pointer, both to be passed as arguments.

Building programs that use inline-c

The C code that inline-c generates is stored in a file named like the Haskell source file, but with a .c extension.

When using cabal, it is enough to list the generated C sources, along with any options for the C code:

executable foo
  main-is:             Main.hs
  other-modules:       Foo, Bar
  hs-source-dirs:      src
  -- Here the corresponding C sources must be listed for every module
  -- that uses C code.  In this example, Main.hs and Bar.hs do, but
  -- Foo.hs does not.
  c-sources:           src/Main.c, src/Bar.c
  -- These flags will be passed to the C compiler
  cc-options:          -Wall -O2
  -- Libraries to link the code with (note: no -l prefix here).
  extra-libraries:     m
  ...

Note that currently cabal repl is not fully supported, because the C code is not compiled and linked appropriately. However, cabal repl fails only at the end, when trying to load the compiled C code, which means that we can still use it to type check our package while developing.

If we were to compile the above manually we could do

$ ghc -c Main.hs
$ cc -c Main.c -o Main_c.o
$ ghc Foo.hs
$ ghc Bar.hs
$ cc -c Bar.c -o Bar_c.o
$ ghc Main.o Foo.o Bar.o Main_c.o Bar_c.o -lm -o Main

Extending inline-c

As mentioned previously, inline-c can be extended by defining custom anti-quoters. Moreover, we can also tell inline-c about more C types beyond the primitive ones.

Both operations are done via the Context data type. Specifically, the Context contains a TypesTable, mapping C type specifiers to Haskell types; and a Map of AntiQuoters. A baseCtx is provided specifying mappings from all the base C types to Haskell types (int to CInt, double to CDouble, and so on). Contexts can be composed using their Monoid instance.

For example, the vecCtx contains two anti-quoters, vec-len and vec-ptr. When using inline-c with external libraries we often define a context dedicated to said library, defining a TypesTable converting common types found in the library to their Haskell counterparts. For example, inline-c-nag defines a context containing information regarding the types commonly used in the NAG scientific library.

See the Language.C.Inline.Context module documentation for more.

C++ support

Our original use case for inline-c was always C oriented. However, thanks to extensible contexts, it should be possible to build C++ support on top of inline-c, as we dabbled with in inline-c-cpp. In this way, one can mix C++ code into Haskell source files, while reusing the infrastructure that inline-c provides for invoking foreign functions. Since inline-c generates C wrapper functions for all inline expressions, one gets a function with bona fide C linkage to wrap a C++ call, for free. Dealing with C++ templates, passing C++ objects in and out and conveniently manipulating them from Haskell are the next challenges. If C++ support is what you need, feel free to contribute to this ongoing effort!

Wrapping up

We meant inline-c as a simple, modular alternative to monolithic binding libraries, borrowing the core concept of FFI-less programming of foreign code from the H project and language-c-inline. But this is just the first cut! We are releasing the library to the community early in hopes that it will accelerate the Haskell community's embrace of quality foreign libraries where they exist, as an alternative to expending considerable resources reinventing such libraries for little benefit. Numerical programming, machine learning, computer vision, GUI programming and data analysis come to mind as obvious areas where we want to leverage existing quality code. In fact, FP Complete is using inline-c today to enable quick access to all of NAG, a roughly 1.6K function strong library, for a large compute-intensive codebase. We hope to see many more use cases in the future.

May 20, 2015 10:00 AM

May 19, 2015

Cartesian Closed Comic

Gabriel Gonzalez

Morte: an intermediate language for super-optimizing functional programs

The Haskell language provides the following guarantee (with caveats): if two programs are equal according to equational reasoning then they will behave the same. On the other hand, Haskell does not guarantee that equal programs will generate identical performance. Consequently, Haskell library writers must employ rewrite rules to ensure that their abstractions do not interfere with performance.

Now suppose there were a hypothetical language with a stronger guarantee: if two programs are equal then they generate identical executables. Such a language would be immune to abstraction: no matter how many layers of indirection you might add, the binary size and runtime performance would be unaffected.

Here I will introduce such an intermediate language named Morte that obeys this stronger guarantee. I have not yet implemented a back-end code generator for Morte, but I wanted to pause to share what I have completed so far because Morte uses several tricks from computer science that I believe deserve more attention.

Morte is nothing more than a bare-bones implementation of the calculus of constructions, which is a specific type of lambda calculus. The only novelty is how I intend to use this lambda calculus: as a super-optimizer.

Normalization

The typed lambda calculus possesses a useful property: every term in the lambda calculus has a unique normal form if you beta-reduce everything. If you're new to lambda calculus, normalizing an expression equates to indiscriminately inlining every function call.

What if we built a programming language whose intermediate language was lambda calculus? What if optimization was just normalization of lambda terms (i.e. indiscriminate inlining)? If so, then we could abstract freely, knowing that while compile times might increase, our final executable would never change.

Recursion

Normally you would not want to inline everything because infinitely recursive functions would become infinitely large expressions. Fortunately, we can often translate recursive code to non-recursive code!

I'll demonstrate this trick first in Haskell and then in Morte. Let's begin from the following recursive List type along with a recursive map function over lists:

import Prelude hiding (map, foldr)

data List a = Cons a (List a) | Nil

example :: List Int
example = Cons 1 (Cons 2 (Cons 3 Nil))

map :: (a -> b) -> List a -> List b
map f Nil = Nil
map f (Cons a l) = Cons (f a) (map f l)

-- Argument order intentionally switched
foldr :: List a -> (a -> x -> x) -> x -> x
foldr Nil c n = n
foldr (Cons a l) c n = c a (foldr l c n)

result :: Int
result = foldr (map (+1) example) (+) 0

-- result = 9

Now imagine that we disable all recursion in Haskell: no more recursive types and no more recursive functions. Now we must reject the above program because:

  • the List data type definition recursively refers to itself

  • the map and foldr functions recursively refer to themselves

Can we still encode lists in a non-recursive dialect of Haskell?

Yes, we can!

-- This is a valid Haskell program

{-# LANGUAGE RankNTypes #-}

import Prelude hiding (map, foldr)

type List a = forall x . (a -> x -> x) -> x -> x

example :: List Int
example = \cons nil -> cons 1 (cons 2 (cons 3 nil))

map :: (a -> b) -> List a -> List b
map f l = \cons nil -> l (\a x -> cons (f a) x) nil

foldr :: List a -> (a -> x -> x) -> x -> x
foldr l = l

result :: Int
result = foldr (map (+ 1) example) (+) 0

-- result = 9

Carefully note that:

  • List is no longer defined recursively in terms of itself

  • map and foldr are no longer defined recursively in terms of themselves

Yet, we somehow managed to build a list, map a function over the list, and fold the list, all without ever using recursion! We do this by encoding the list as a fold, which is why foldr became the identity function.

This trick works for more than just lists. You can take any recursive data type and mechanically transform the type into a fold and transform functions on the type into functions on folds. If you want to learn more about this trick, the specific name for it is "Boehm-Berarducci encoding". If you are curious, this in turn is equivalent to an even more general concept from category theory known as "F-algebras", which let you encode inductive things in a non-inductive way.
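For instance, here is the same trick applied to a binary tree type (my own sketch, not part of the original List example):

{-# LANGUAGE RankNTypes #-}

-- The recursive type:  data Tree a = Leaf | Node (Tree a) a (Tree a)
-- ... becomes its own fold:
type Tree a = forall x . x -> (x -> a -> x -> x) -> x

leaf :: Tree a
leaf = \l _ -> l

node :: Tree a -> a -> Tree a -> Tree a
node tl a tr = \l n -> n (tl l n) a (tr l n)

-- As with lists, folding is just the identity function:
foldTree :: Tree a -> x -> (x -> a -> x -> x) -> x
foldTree t = t

-- foldTree (node leaf 1 (node leaf 2 leaf)) 0 (\l a r -> l + a + r) == 3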

Non-recursive code greatly simplifies equational reasoning. For example, we can easily prove that we can optimize map id l to l:

map id l

-- Inline: map f l = \cons nil -> l (\a x -> cons (f a) x) nil
= \cons nil -> l (\a x -> cons (id a) x) nil

-- Inline: id x = x
= \cons nil -> l (\a x -> cons a x) nil

-- Eta-reduce
= \cons nil -> l cons nil

-- Eta-reduce
= l

Note that we did not need to use induction to prove this optimization because map is no longer recursive. The optimization became downright trivial, so trivial that we can automate it!

Morte optimizes programs using this same simple scheme:

  • Beta-reduce everything (equivalent to inlining)
  • Eta-reduce everything

To illustrate this, I will desugar our high-level Haskell code to the calculus of constructions. This desugaring process is currently manual (and tedious), but I plan to automate this, too, by providing a front-end high-level language similar to Haskell that compiles to Morte:

-- mapid.mt

( \(List : * -> *)
-> \( map
: forall (a : *)
-> forall (b : *)
-> (a -> b) -> List a -> List b
)
-> \(id : forall (a : *) -> a -> a)

-> \(a : *) -> map a a (id a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- map
( \(a : *)
-> \(b : *)
-> \(f : a -> b)
-> \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
-> \(x : *)
-> \(Cons : b -> x -> x)
-> \(Nil: x)
-> l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- id
(\(a : *) -> \(va : a) -> va)

This line of code is the "business end" of the program:

\(a : *) -> map a a (id a)

The extra 'a' business is because in any polymorphic lambda calculus you explicitly accept polymorphic types as arguments and specialize functions by applying them to types. Higher-level functional languages like Haskell or ML use type inference to automatically infer and supply type arguments when possible.

We can compile this program using the morte executable, which accepts a Morte program on stdin, outputs the program's type to stderr, and outputs the optimized program on stdout:

$ morte < mapid.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a
→ x → x) → x → x

λ(a : *) → λ(l : ∀(x : *) → (a → x → x) → x → x) → l

The first line is the type, which is a desugared form of:

forall a . List a -> List a

The second line is the program, which is the identity function on lists. Morte optimized away the map completely, the same way we did by hand.

Morte optimized away the rest of the code, too. Dead-code elimination is just an emergent property of Morte's simple optimization scheme.

Equality

We could double-check our answer by asking Morte to optimize the identity function on lists:

-- idlist.mt

( \(List : * -> *)
-> \(id : forall (a : *) -> a -> a)

-> \(a : *) -> id (List a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- id
(\(a : *) -> \(va : a) -> va)

Sure enough, Morte outputs an alpha-equivalent result (meaning the same up to variable renaming):

$ ~/.cabal/bin/morte < idlist.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a
→ x → x) → x → x

λ(a : *) → λ(va : ∀(x : *) → (a → x → x) → x → x) → va

We can even use the morte library to mechanically check if two Morte expressions are alpha-, beta-, and eta-equivalent. We can parse our two Morte files into Morte's Expr type and then use the Eq instance for Expr to test for equivalence:

$ ghci
Prelude> import qualified Data.Text.Lazy.IO as Text
Prelude Text> txt1 <- Text.readFile "mapid.mt"
Prelude Text> txt2 <- Text.readFile "idlist.mt"
Prelude Text> import Morte.Parser (exprFromText)
Prelude Text Morte.Parser> let e1 = exprFromText txt1
Prelude Text Morte.Parser> let e2 = exprFromText txt2
Prelude Text Morte.Parser> import Control.Applicative (liftA2)
Prelude Text Morte.Parser Control.Applicative> liftA2 (==) e1 e2
Right True
$ -- `Right` means both expressions parsed successfully
$ -- `True` means they are alpha-, beta-, and eta-equivalent

We can use this to mechanically verify that two Morte programs optimize to the same result.

Compile-time computation

Morte can compute as much (or as little) at compile time as you want. The more information you encode directly within lambda calculus, the more compile-time computation Morte will perform for you. For example, if we translate our Haskell List code entirely to lambda calculus, then Morte will statically compute the result at compile time.

-- nine.mt

( \(Nat : *)
-> \(zero : Nat)
-> \(one : Nat)
-> \((+) : Nat -> Nat -> Nat)
-> \((*) : Nat -> Nat -> Nat)
-> \(List : * -> *)
-> \(Cons : forall (a : *) -> a -> List a -> List a)
-> \(Nil : forall (a : *) -> List a)
-> \( map
: forall (a : *) -> forall (b : *)
-> (a -> b) -> List a -> List b
)
-> \( foldr
: forall (a : *)
-> List a
-> forall (r : *)
-> (a -> r -> r) -> r -> r
)
-> ( \(two : Nat)
-> \(three : Nat)
-> ( \(example : List Nat)

-> foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero
)

-- example
(Cons Nat one (Cons Nat two (Cons Nat three (Nil Nat))))
)

-- two
((+) one one)

-- three
((+) one ((+) one one))
)

-- Nat
( forall (a : *)
-> (a -> a)
-> a
-> a
)

-- zero
( \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> Zero
)

-- one
( \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> Succ Zero
)

-- (+)
( \(m : forall (a : *) -> (a -> a) -> a -> a)
-> \(n : forall (a : *) -> (a -> a) -> a -> a)
-> \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> m a Succ (n a Succ Zero)
)

-- (*)
( \(m : forall (a : *) -> (a -> a) -> a -> a)
-> \(n : forall (a : *) -> (a -> a) -> a -> a)
-> \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> m a (n a Succ) Zero
)

-- List
( \(a : *)
-> forall (x : *)
-> (a -> x -> x) -- Cons
-> x -- Nil
-> x
)

-- Cons
( \(a : *)
-> \(va : a)
-> \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
-> \(x : *)
-> \(Cons : a -> x -> x)
-> \(Nil : x)
-> Cons va (vas x Cons Nil)
)

-- Nil
( \(a : *)
-> \(x : *)
-> \(Cons : a -> x -> x)
-> \(Nil : x)
-> Nil
)

-- map
( \(a : *)
-> \(b : *)
-> \(f : a -> b)
-> \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
-> \(x : *)
-> \(Cons : b -> x -> x)
-> \(Nil: x)
-> l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- foldr
( \(a : *)
-> \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
-> vas
)

The relevant line is:

foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero

If you remove the type-applications to Nat, this parallels our original Haskell example. We can then evaluate this expression at compile time:

$ morte < nine.mt
∀(a : *) → (a → a) → a → a

λ(a : *) → λ(Succ : a → a) → λ(Zero : a) → Succ (Succ (Succ
(Succ (Succ (Succ (Succ (Succ (Succ Zero))))))))

Morte reduces our program to a Church-encoded nine.
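For comparison, here is that Church-encoded nine written as Haskell (a sketch using RankNTypes; not from the original program):

{-# LANGUAGE RankNTypes #-}

type Nat = forall a . (a -> a) -> a -> a

nine :: Nat
nine s z = s (s (s (s (s (s (s (s (s z))))))))

-- Convert back to a machine integer to check the result:
toInt :: Nat -> Int
toInt n = n (+ 1) 0   -- toInt nine == 9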

Run-time computation

Morte does not force you to compute everything using lambda calculus at compile time. Suppose that we wanted to use machine arithmetic at run-time instead. We can do this by parametrizing our program on:

  • the Int type,
  • operations on Ints, and
  • any integer literals we use

We accept these "foreign imports" as ordinary arguments to our program:

-- foreign.mt

-- Foreign imports
\(Int : *) -- Foreign type
-> \((+) : Int -> Int -> Int) -- Foreign function
-> \((*) : Int -> Int -> Int) -- Foreign function
-> \(lit@0 : Int) -- Literal "1" -- Foreign data
-> \(lit@1 : Int) -- Literal "2" -- Foreign data
-> \(lit@2 : Int) -- Literal "3" -- Foreign data
-> \(lit@3 : Int) -- Literal "1" -- Foreign data
-> \(lit@4 : Int) -- Literal "0" -- Foreign data

-- The rest is compile-time lambda calculus
-> ( \(List : * -> *)
-> \(Cons : forall (a : *) -> a -> List a -> List a)
-> \(Nil : forall (a : *) -> List a)
-> \( map
: forall (a : *)
-> forall (b : *)
-> (a -> b) -> List a -> List b
)
-> \( foldr
: forall (a : *)
-> List a
-> forall (r : *)
-> (a -> r -> r) -> r -> r
)
-> ( \(example : List Int)

-> foldr Int (map Int Int ((+) lit@3) example) Int (+) lit@4
)

-- example
(Cons Int lit@0 (Cons Int lit@1 (Cons Int lit@2 (Nil Int))))
)

-- List
( \(a : *)
-> forall (x : *)
-> (a -> x -> x) -- Cons
-> x -- Nil
-> x
)

-- Cons
( \(a : *)
-> \(va : a)
-> \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
-> \(x : *)
-> \(Cons : a -> x -> x)
-> \(Nil : x)
-> Cons va (vas x Cons Nil)
)

-- Nil
( \(a : *)
-> \(x : *)
-> \(Cons : a -> x -> x)
-> \(Nil : x)
-> Nil
)

-- map
( \(a : *)
-> \(b : *)
-> \(f : a -> b)
-> \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
-> \(x : *)
-> \(Cons : b -> x -> x)
-> \(Nil: x)
-> l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- foldr
( \(a : *)
-> \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
-> vas
)

We can use Morte to optimize the above program and Morte will reduce the program to nothing but foreign types, operations, and values:

$ morte < foreign.mt
∀(Int : *) → (Int → Int → Int) → (Int → Int → Int) → Int →
Int → Int → Int → Int → Int

λ(Int : *) → λ((+) : Int → Int → Int) → λ((*) : Int → Int →
Int) → λ(lit : Int) → λ(lit@1 : Int) → λ(lit@2 : Int) →
λ(lit@3 : Int) → λ(lit@4 : Int) → (+) ((+) lit@3 lit) ((+)
((+) lit@3 lit@1) ((+) ((+) lit@3 lit@2) lit@4))

If you study that closely, Morte adds lit@3 (the "1" literal) to each literal of the list and then adds them up. We can then pass this foreign syntax tree to our machine arithmetic backend to transform those foreign operations to efficient operations.

Morte lets you choose how much information you want to encode within lambda calculus. The more information you encode in lambda calculus the more Morte can optimize your program, but the slower your compile times will get, so it's a tradeoff.

Corecursion

Corecursion is the dual of recursion. Where recursion works on finite data types, corecursion works on potentially infinite data types. An example would be the following infinite Stream in Haskell:

data Stream a = Cons a (Stream a)

numbers :: Stream Int
numbers = go 0
  where
    go n = Cons n (go (n + 1))

-- numbers = Cons 0 (Cons 1 (Cons 2 (...

map :: (a -> b) -> Stream a -> Stream b
map f (Cons a l) = Cons (f a) (map f l)

example :: Stream Int
example = map (+ 1) numbers

-- example = Cons 1 (Cons 2 (Cons 3 (...

Again, pretend that we disable any function from referencing itself so that the above code becomes invalid. This time we cannot reuse the same trick from previous sections, because we cannot encode numbers as a fold without the definition referencing itself. Try it if you don't believe me.

However, we can still encode corecursive things in a non-corecursive way. This time, we encode our Stream type as an unfold instead of a fold:

-- This is also valid Haskell code

{-# LANGUAGE ExistentialQuantification #-}

data Stream a = forall s . MkStream
    { seed :: s
    , step :: s -> (a, s)
    }

numbers :: Stream Int
numbers = MkStream 0 (\n -> (n, n + 1))

map :: (a -> b) -> Stream a -> Stream b
map f (MkStream s0 k) = MkStream s0 k'
  where
    k' s = (f a, s')
      where (a, s') = k s

In other words, we store an initial seed of some type s and a step function of type s -> (a, s) that emits one element of our Stream. The type of our seed s can be anything and in our numbers example, the type of the internal state is Int. Another stream could use a completely different internal state of type (), like this:

-- ones = Cons 1 ones

ones :: Stream Int
ones = MkStream () (\_ -> (1, ()))

The general name for this trick is an "F-coalgebra" encoding of a corecursive type.
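To check that this encoding behaves like the original Stream, we can observe a finite prefix of it by stepping the seed through the step function in ordinary (recursive) Haskell (my own helper, deliberately outside the non-recursive dialect):

takeS :: Int -> Stream a -> [a]
takeS n (MkStream s k)
    | n <= 0    = []
    | otherwise = case k s of
        (a, s') -> a : takeS (n - 1) (MkStream s' k)

-- takeS 3 numbers             == [0, 1, 2]
-- takeS 3 (map (+ 1) numbers) == [1, 2, 3]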

Once we encode our infinite stream non-recursively, we can safely optimize the stream by inlining and eta reduction:

map id l

-- l = MkStream s0 k
= map id (MkStream s0 k)

-- Inline definition of `map`
= MkStream s0 k'
  where
    k' = \s -> (id a, s')
      where
        (a, s') = k s

-- Inline definition of `id`
= MkStream s0 k'
  where
    k' = \s -> (a, s')
      where
        (a, s') = k s

-- Inline: (a, s') = k s
= MkStream s0 k'
  where
    k' = \s -> k s

-- Eta reduce
= MkStream s0 k'
  where
    k' = k

-- Inline: k' = k
= MkStream s0 k

-- l = MkStream s0 k
= l

Now let's encode Stream and map in Morte and compile the following four expressions:

map id

id

map f . map g

map (f . g)

Save the following Morte file to stream.mt and then uncomment the expression you want to test:

(   \(id : forall (a : *) -> a -> a)
-> \( (.)
: forall (a : *)
-> forall (b : *)
-> forall (c : *)
-> (b -> c)
-> (a -> b)
-> (a -> c)
)
-> \(Pair : * -> * -> *)
-> \(P : forall (a : *) -> forall (b : *) -> a -> b -> Pair a b)
-> \( first
: forall (a : *)
-> forall (b : *)
-> forall (c : *)
-> (a -> b)
-> Pair a c
-> Pair b c
)

-> ( \(Stream : * -> *)
-> \( map
: forall (a : *)
-> forall (b : *)
-> (a -> b)
-> Stream a
-> Stream b
)

-- example@1 = example@2
-> ( \(example@1 : forall (a : *) -> Stream a -> Stream a)
-> \(example@2 : forall (a : *) -> Stream a -> Stream a)

-- example@3 = example@4
-> \( example@3
: forall (a : *)
-> forall (b : *)
-> forall (c : *)
-> (b -> c)
-> (a -> b)
-> Stream a
-> Stream c
)

-> \( example@4
: forall (a : *)
-> forall (b : *)
-> forall (c : *)
-> (b -> c)
-> (a -> b)
-> Stream a
-> Stream c
)

-- Uncomment the example you want to test
-> example@1
-- -> example@2
-- -> example@3
-- -> example@4
)

-- example@1
(\(a : *) -> map a a (id a))

-- example@2
(\(a : *) -> id (Stream a))

-- example@3
( \(a : *)
-> \(b : *)
-> \(c : *)
-> \(f : b -> c)
-> \(g : a -> b)
-> map a c ((.) a b c f g)
)

-- example@4
( \(a : *)
-> \(b : *)
-> \(c : *)
-> \(f : b -> c)
-> \(g : a -> b)
-> (.) (Stream a) (Stream b) (Stream c) (map b c f) (map a b g)
)
)

-- Stream
( \(a : *)
-> forall (x : *)
-> (forall (s : *) -> s -> (s -> Pair a s) -> x)
-> x
)

-- map
( \(a : *)
-> \(b : *)
-> \(f : a -> b)
-> \( st
: forall (x : *)
-> (forall (s : *) -> s -> (s -> Pair a s) -> x)
-> x
)
-> \(x : *)
-> \(S : forall (s : *) -> s -> (s -> Pair b s) -> x)
-> st
x
( \(s : *)
-> \(seed : s)
-> \(step : s -> Pair a s)
-> S
s
seed
(\(seed@1 : s) -> first a b s f (step seed@1))
)
)
)

-- id
(\(a : *) -> \(va : a) -> va)

-- (.)
( \(a : *)
-> \(b : *)
-> \(c : *)
-> \(f : b -> c)
-> \(g : a -> b)
-> \(va : a)
-> f (g va)
)

-- Pair
(\(a : *) -> \(b : *) -> forall (x : *) -> (a -> b -> x) -> x)

-- P
( \(a : *)
-> \(b : *)
-> \(va : a)
-> \(vb : b)
-> \(x : *)
-> \(P : a -> b -> x)
-> P va vb
)

-- first
( \(a : *)
-> \(b : *)
-> \(c : *)
-> \(f : a -> b)
-> \(p : forall (x : *) -> (a -> c -> x) -> x)
-> \(x : *)
-> \(Pair : b -> c -> x)
-> p x (\(va : a) -> \(vc : c) -> Pair (f va) vc)
)

Both example@1 and example@2 will generate alpha-equivalent code:

$ morte < example1.mt
∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a →
s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x
: *) → (a → s → x) → x) → x) → x

λ(a : *) → λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) →
(a → s → x) → x) → x) → x) → st

$ morte < example2.mt
∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a →
s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x
: *) → (a → s → x) → x) → x) → x

λ(a : *) → λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) →
(a → s → x) → x) → x) → x) → va

Similarly, example@3 and example@4 will generate alpha-equivalent code:

$ morte < example3.mt
∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x :
*) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) →
x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x)
→ x) → x) → x

λ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b)
→ λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s
→ x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) → s → (s → ∀
(x : *) → (c → s → x) → x) → x) → st x (λ(s : *) → λ(seed :
s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(
seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x
(λ(va : a) → Pair (f (g va)))))

$ morte < example4.mt
∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x :
*) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) →
x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x)
→ x) → x) → x

λ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b)
→ λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s
→ x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) → s → (s → ∀
(x : *) → (c → s → x) → x) → x) → va x (λ(s : *) → λ(seed :
s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(
seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x
(λ(va : a) → Pair (f (g va))))

We inadvertently proved stream fusion for free, but we're still not done yet! Everything we learned about recursive and corecursive sequences can be applied to model recursive and corecursive effects!

Effects

I will conclude this post by showing how to model both recursive and corecursive programs that have side effects. The recursive program will echo ninety-nine lines from stdin to stdout. The equivalent Haskell program is in the comment header:

-- recursive.mt

-- The Haskell code we will translate to Morte:
--
-- import Prelude hiding (
-- (+), (*), IO, putStrLn, getLine, (>>=), (>>), return )
--
-- -- Simple prelude
--
-- data Nat = Succ Nat | Zero
--
-- zero :: Nat
-- zero = Zero
--
-- one :: Nat
-- one = Succ Zero
--
-- (+) :: Nat -> Nat -> Nat
-- Zero + n = n
-- Succ m + n = m + Succ n
--
-- (*) :: Nat -> Nat -> Nat
-- Zero * n = Zero
-- Succ m * n = n + (m * n)
--
-- foldNat :: Nat -> (a -> a) -> a -> a
-- foldNat Zero f x = x
-- foldNat (Succ m) f x = f (foldNat m f x)
--
-- data IO r
-- = PutStrLn String (IO r)
-- | GetLine (String -> IO r)
-- | Return r
--
-- putStrLn :: String -> IO U
-- putStrLn str = PutStrLn str (Return Unit)
--
-- getLine :: IO String
-- getLine = GetLine Return
--
-- return :: a -> IO a
-- return = Return
--
-- (>>=) :: IO a -> (a -> IO b) -> IO b
-- PutStrLn str io >>= f = PutStrLn str (io >>= f)
-- GetLine k >>= f = GetLine (\str -> k str >>= f)
-- Return r >>= f = f r
--
-- -- Derived functions
--
-- (>>) :: IO U -> IO U -> IO U
-- m >> n = m >>= \_ -> n
--
-- two :: Nat
-- two = one + one
--
-- three :: Nat
-- three = one + one + one
--
-- four :: Nat
-- four = one + one + one + one
--
-- five :: Nat
-- five = one + one + one + one + one
--
-- six :: Nat
-- six = one + one + one + one + one + one
--
-- seven :: Nat
-- seven = one + one + one + one + one + one + one
--
-- eight :: Nat
-- eight = one + one + one + one + one + one + one + one
--
-- nine :: Nat
-- nine = one + one + one + one + one + one + one + one + one
--
-- ten :: Nat
-- ten = one + one + one + one + one + one + one + one + one + one
--
-- replicateM_ :: Nat -> IO U -> IO U
-- replicateM_ n io = foldNat n (io >>) (return Unit)
--
-- ninetynine :: Nat
-- ninetynine = nine * ten + nine
--
-- main_ :: IO U
-- main_ = replicateM_ ninetynine (getLine >>= putStrLn)

-- "Free" variables
( \(String : * )
-> \(U : *)
-> \(Unit : U)

-- Simple prelude
-> ( \(Nat : *)
-> \(zero : Nat)
-> \(one : Nat)
-> \((+) : Nat -> Nat -> Nat)
-> \((*) : Nat -> Nat -> Nat)
-> \(foldNat : Nat -> forall (a : *) -> (a -> a) -> a -> a)
-> \(IO : * -> *)
-> \(return : forall (a : *) -> a -> IO a)
-> \((>>=)
: forall (a : *)
-> forall (b : *)
-> IO a
-> (a -> IO b)
-> IO b
)
-> \(putStrLn : String -> IO U)
-> \(getLine : IO String)

-- Derived functions
-> ( \((>>) : IO U -> IO U -> IO U)
-> \(two : Nat)
-> \(three : Nat)
-> \(four : Nat)
-> \(five : Nat)
-> \(six : Nat)
-> \(seven : Nat)
-> \(eight : Nat)
-> \(nine : Nat)
-> \(ten : Nat)
-> ( \(replicateM_ : Nat -> IO U -> IO U)
-> \(ninetynine : Nat)

-> replicateM_ ninetynine ((>>=) String U getLine putStrLn)
)

-- replicateM_
( \(n : Nat)
-> \(io : IO U)
-> foldNat n (IO U) ((>>) io) (return U Unit)
)

-- ninetynine
((+) ((*) nine ten) nine)
)

-- (>>)
( \(m : IO U)
-> \(n : IO U)
-> (>>=) U U m (\(_ : U) -> n)
)

-- two
((+) one one)

-- three
((+) one ((+) one one))

-- four
((+) one ((+) one ((+) one one)))

-- five
((+) one ((+) one ((+) one ((+) one one))))

-- six
((+) one ((+) one ((+) one ((+) one ((+) one one)))))

-- seven
((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))

-- eight
((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))))
-- nine
((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))))

-- ten
((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))))))
)

-- Nat
( forall (a : *)
-> (a -> a)
-> a
-> a
)

-- zero
( \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> Zero
)

-- one
( \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> Succ Zero
)

-- (+)
( \(m : forall (a : *) -> (a -> a) -> a -> a)
-> \(n : forall (a : *) -> (a -> a) -> a -> a)
-> \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> m a Succ (n a Succ Zero)
)

-- (*)
( \(m : forall (a : *) -> (a -> a) -> a -> a)
-> \(n : forall (a : *) -> (a -> a) -> a -> a)
-> \(a : *)
-> \(Succ : a -> a)
-> \(Zero : a)
-> m a (n a Succ) Zero
)

-- foldNat
( \(n : forall (a : *) -> (a -> a) -> a -> a)
-> n
)

-- IO
( \(r : *)
-> forall (x : *)
-> (String -> x -> x)
-> ((String -> x) -> x)
-> (r -> x)
-> x
)

-- return
( \(a : *)
-> \(va : a)
-> \(x : *)
-> \(PutStrLn : String -> x -> x)
-> \(GetLine : (String -> x) -> x)
-> \(Return : a -> x)
-> Return va
)

-- (>>=)
( \(a : *)
-> \(b : *)
-> \(m : forall (x : *)
-> (String -> x -> x)
-> ((String -> x) -> x)
-> (a -> x)
-> x
)
-> \(f : a
-> forall (x : *)
-> (String -> x -> x)
-> ((String -> x) -> x)
-> (b -> x)
-> x
)
-> \(x : *)
-> \(PutStrLn : String -> x -> x)
-> \(GetLine : (String -> x) -> x)
-> \(Return : b -> x)
-> m x PutStrLn GetLine (\(va : a) -> f va x PutStrLn GetLine Return)
)

-- putStrLn
( \(str : String)
-> \(x : *)
-> \(PutStrLn : String -> x -> x )
-> \(GetLine : (String -> x) -> x)
-> \(Return : U -> x)
-> PutStrLn str (Return Unit)
)

-- getLine
( \(x : *)
-> \(PutStrLn : String -> x -> x )
-> \(GetLine : (String -> x) -> x)
-> \(Return : String -> x)
-> GetLine Return
)
)

This program will compile to a completely unrolled read-write loop, as most recursive programs will:

$ morte < recursive.mt
∀(String : *) → ∀(U : *) → U → ∀(x : *) → (String → x → x) →
((String → x) → x) → (U → x) → x

λ(String : *) → λ(U : *) → λ(Unit : U) → λ(x : *) → λ(PutStr
Ln : String → x → x) → λ(GetLine : (String → x) → x) → λ(Ret
urn : U → x) → GetLine (λ(va : String) → PutStrLn va (GetLin
e (λ(va@1 : String) → PutStrLn va@1 (GetLine (λ(va@2 : Strin
g) → PutStrLn va@2 (GetLine (λ(va@3 : String) → PutStrLn ...
<snip>
... GetLine (λ(va@92 : String) → PutStrLn va@92 (GetLine (λ(
va@93 : String) → PutStrLn va@93 (GetLine (λ(va@94 : String)
→ PutStrLn va@94 (GetLine (λ(va@95 : String) → PutStrLn va@
95 (GetLine (λ(va@96 : String) → PutStrLn va@96 (GetLine (λ(
va@97 : String) → PutStrLn va@97 (GetLine (λ(va@98 : String)
→ PutStrLn va@98 (Return Unit))))))))))))))))))))))))))))))
))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
))))))))))))))))))))))))))))))))))))))))))))))))

In contrast, if we encode the effects corecursively we can express a program that echoes indefinitely from stdin to stdout:

-- corecursive.mt

-- data IOF r s
-- = PutStrLn String s
-- | GetLine (String -> s)
-- | Return r
--
-- data IO r = forall s . MkIO s (s -> IOF r s)
--
-- main = MkIO
-- Nothing
-- (maybe (\str -> PutStrLn str Nothing) (GetLine Just))

( \(String : *)
-> ( \(Maybe : * -> *)
-> \(Just : forall (a : *) -> a -> Maybe a)
-> \(Nothing : forall (a : *) -> Maybe a)
-> \( maybe
: forall (a : *)
-> Maybe a
-> forall (x : *)
-> (a -> x)
-> x
-> x
)
-> \(IOF : * -> * -> *)
-> \( PutStrLn
: forall (r : *)
-> forall (s : *)
-> String
-> s
-> IOF r s
)
-> \( GetLine
: forall (r : *)
-> forall (s : *)
-> (String -> s)
-> IOF r s
)
-> \( Return
: forall (r : *)
-> forall (s : *)
-> r
-> IOF r s
)
-> ( \(IO : * -> *)
-> \( MkIO
: forall (r : *)
-> forall (s : *)
-> s
-> (s -> IOF r s)
-> IO r
)
-> ( \(main : forall (r : *) -> IO r)
-> main
)

-- main
( \(r : *)
-> MkIO
r
(Maybe String)
(Nothing String)
( \(m : Maybe String)
-> maybe
String
m
(IOF r (Maybe String))
(\(str : String) ->
PutStrLn
r
(Maybe String)
str
(Nothing String)
)
(GetLine r (Maybe String) (Just String))
)
)
)

-- IO
( \(r : *)
-> forall (x : *)
-> (forall (s : *) -> s -> (s -> IOF r s) -> x)
-> x
)

-- MkIO
( \(r : *)
-> \(s : *)
-> \(seed : s)
-> \(step : s -> IOF r s)
-> \(x : *)
-> \(k : forall (s : *) -> s -> (s -> IOF r s) -> x)
-> k s seed step
)
)

-- Maybe
(\(a : *) -> forall (x : *) -> (a -> x) -> x -> x)

-- Just
( \(a : *)
-> \(va : a)
-> \(x : *)
-> \(Just : a -> x)
-> \(Nothing : x)
-> Just va
)

-- Nothing
( \(a : *)
-> \(x : *)
-> \(Just : a -> x)
-> \(Nothing : x)
-> Nothing
)

-- maybe
( \(a : *)
-> \(m : forall (x : *) -> (a -> x) -> x-> x)
-> m
)

-- IOF
( \(r : *)
-> \(s : *)
-> forall (x : *)
-> (String -> s -> x)
-> ((String -> s) -> x)
-> (r -> x)
-> x
)

-- PutStrLn
( \(r : *)
-> \(s : *)
-> \(str : String)
-> \(vs : s)
-> \(x : *)
-> \(PutStrLn : String -> s -> x)
-> \(GetLine : (String -> s) -> x)
-> \(Return : r -> x)
-> PutStrLn str vs
)

-- GetLine
( \(r : *)
-> \(s : *)
-> \(k : String -> s)
-> \(x : *)
-> \(PutStrLn : String -> s -> x)
-> \(GetLine : (String -> s) -> x)
-> \(Return : r -> x)
-> GetLine k
)

-- Return
( \(r : *)
-> \(s : *)
-> \(vr : r)
-> \(x : *)
-> \(PutStrLn : String -> s -> x)
-> \(GetLine : (String -> s) -> x)
-> \(Return : r -> x)
-> Return vr
)

)

This compiles to a state machine that we can unfold one step at a time:

$ morte < corecursive.mt
∀(String : *) → ∀(r : *) → ∀(x : *) → (∀(s : *) → s → (s → ∀
(x : *) → (String → s → x) → ((String → s) → x) → (r → x) →
x) → x) → x

λ(String : *) → λ(r : *) → λ(x : *) → λ(k : ∀(s : *) → s → (
s → ∀(x : *) → (String → s → x) → ((String → s) → x) → (r →
x) → x) → x) → k (∀(x : *) → (String → x) → x → x) (λ(x : *)
→ λ(Just : String → x) → λ(Nothing : x) → Nothing) (λ(m : ∀
(x : *) → (String → x) → x → x) → m (∀(x : *) → (String → (∀
(x : *) → (String → x) → x → x) → x) → ((String → ∀(x : *) →
(String → x) → x → x) → x) → (r → x) → x) (λ(str : String)
→ λ(x : *) → λ(PutStrLn : String → (∀(x : *) → (String → x)
→ x → x) → x) → λ(GetLine : (String → ∀(x : *) → (String → x
) → x → x) → x) → λ(Return : r → x) → PutStrLn str (λ(x : *)
→ λ(Just : String → x) → λ(Nothing : x) → Nothing)) (λ(x :
*) → λ(PutStrLn : String → (∀(x : *) → (String → x) → x → x)
→ x) → λ(GetLine : (String → ∀(x : *) → (String → x) → x →
x) → x) → λ(Return : r → x) → GetLine (λ(va : String) → λ(x
: *) → λ(Just : String → x) → λ(Nothing : x) → Just va))

I don't expect you to understand that output, other than to know that we can translate it to any backend that provides functions and primitive read/write operations.

Conclusion

If you would like to use Morte, you can find the library on both Github and Hackage. I also provide a Morte tutorial that you can use to learn more about the library.

Morte is dependently typed in theory, but in practice I have not exercised this feature so I don't understand the implications of this. If this turns out to be a mistake then I will downgrade Morte to System Fw, which has higher-kinds and polymorphism, but no dependent types.

Additionally, Morte might be usable to transmit code in a secure and typed way in a distributed environment, or to share code between diverse functional languages by providing a common intermediate language. However, both of those scenarios require additional work, such as establishing a shared set of foreign primitives and creating Morte encoders/decoders for each target language.

Also, there are additional optimizations which Morte might implement in the future. For example, Morte could use free theorems (equalities you deduce from the types) to simplify some code fragments even further, but Morte currently does not do this.

My next goals are:

  • Add a back-end to compile Morte to LLVM
  • Add a front-end to desugar a medium-level Haskell-like language to Morte

Once those steps are complete then Morte will be a usable intermediate language for writing super-optimizable programs.

Also, if you're wondering, the name Morte is a tribute to a talking skull from the game Planescape: Torment, since the Morte library is a "bare-bones" calculus of constructions.


by Gabriel Gonzalez ([email protected]) at May 19, 2015 02:06 PM

FP Complete

PSA: GHC 7.10, cabal, and Windows

Since we've received multiple bug reports on this, and there are many people suffering from it reporting on the cabal issue, Neil and I decided a more public announcement was warranted.

There is an as-yet undiagnosed bug in cabal which causes some packages to fail to install. Packages known to be affected are blaze-builder-enumerator, data-default-instances-old-locale, vector-binary-instances, and data-default-instances-containers. The output looks something like:

Resolving dependencies...
Configuring data-default-instances-old-locale-0.0.1...
Building data-default-instances-old-locale-0.0.1...
Failed to install data-default-instances-old-locale-0.0.1
Build log ( C:\Users\gl67\AppData\Roaming\cabal\logs\data-default-instances-old-locale-0.0.1.log ):
Building data-default-instances-old-locale-0.0.1...
Preprocessing library data-default-instances-old-locale-0.0.1...
[1 of 1] Compiling Data.Default.Instances.OldLocale ( Data\Default\Instances\OldLocale.hs, dist\build\Data\Default\Instances\OldLocale.o )
C:\Users\gl67\repos\MinGHC\7.10.1\ghc-7.10.1\mingw\bin\ar.exe: dist\build\libHSdata-default-instances-old-locale-0.0.1-6jcjjaR25tK4x3nJhHHjFM.a-11336\libHSdata-default-instances-old-locale-0.0.1-6jcjjaR25tK4x3nJhHHjFM.a: No such file or directory
cabal.exe: Error: some packages failed to install:
data-default-instances-old-locale-0.0.1 failed during the building phase. The
exception was:
ExitFailure 1

There are two workarounds I know of at this time:

  • You can manually unpack and install the package, which seems to work, e.g.:

    cabal unpack data-default-instances-old-locale-0.0.1
    cabal install .\data-default-instances-old-locale-0.0.1
  • Drop down to GHC 7.8.4 until the cabal bug is fixed

For normal users, you can stop reading here. If you're interested in more details and may be able to help fix it, here's a summary of the research I've done so far:

As far as I can tell, this is a bug in cabal-install, not the Cabal library. Despite reports to the contrary, the parallelization level (the -j option) does not seem to have any impact. The only thing that seems to affect the behavior is whether cabal-install unpacks and installs in one step, or does it in two steps. That's why unpacking and then installing works around the bug.

I've stared at cabal logs on this quite a bit, but don't see a rhyme or reason to what's happening here. The bug is easily reproducible, so hopefully someone with more cabal expertise will be able to look at this soon, as this bug has high severity and has been affecting Windows users for almost two months.

May 19, 2015 08:20 AM

Danny Gratzer

Compiling a Lazy Language in 1,000 words

Posted on May 19, 2015

I’m a fan of articles like this one which set out to explain a really complicated subject in 600 words or less. I wanted to write one with a similar goal for compiling a language like Haskell. To help with this I’ve broken down what most compilers for a lazy language do into 5 different phases and spent 200 words explaining how they work. This isn’t really intended to be a tutorial on how to implement a compiler, I just want to make it less magical.

I assume that you know how a lazy functional language looks (this isn’t a tutorial on Haskell) and a little about how your machine works since I make a few references to how some lower level details are compiled. These will make more sense if you know such things, but they’re not necessary.

And the word-count-clock starts… now.

Parsing

Our interactions with compilers usually involve treating them as a huge function from string to string. We give them a string (our program) and they give us back a string (the compiled code). However, on the inside the compiler does all sorts of things to that string we gave it, and most of those operations are inconvenient to perform on a raw string. In the first part of the compiler, we convert the string into an abstract syntax tree. This is a data structure in the compiler which represents the string, but in

  1. A more abstract way, it doesn’t have details such as whitespace or comments
  2. A more convenient way, it lets the compiler perform the operations it wants efficiently

The process of going String -> AST is called “parsing”. It has a lot of (kinda stuffy IMO) theory behind it. This is the only part of the compiler where the syntax actually matters and is usually the smallest part of the compiler.
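As a tiny sketch (my own toy example, not from any real compiler), the AST for a minimal lambda calculus might look like:

data Expr
  = Var String        -- x
  | Lam String Expr   -- \x -> e
  | App Expr Expr     -- f x
  deriving Show

-- Parsing the string "\\x -> f x" would produce:
--   Lam "x" (App (Var "f") (Var "x"))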


Type Checking

Now that we’ve constructed an abstract syntax tree we want to make sure that the program “makes sense”. Here “make sense” just means that the program’s types are correct. The process for checking that a program type checks involves following a bunch of rules of the form “A has type T if B has type T1 and C has type…”. All of these rules together constitute the type system for our language. As an example, in Haskell f a has the type T2 if f has the type T1 -> T2 and a has the type T1.

There’s a small wrinkle in this story though: most languages require some type inference. This makes things 10x harder because we have to figure the types of everything as we go! Type inference isn’t even possible in a lot of languages and some clever contortions are often needed to be inferrable.

However, once we’ve done all of this the program is correct enough to compile. Past type checking, if the compiler raises an error it’s a compiler bug.


Optimizations/Simplifications

Now that we’re free of the constraints of having to report errors to the user things really get fun in the compiler. Now we start simplifying the language by converting a language feature into a mess of other, simpler language features. Sometimes we convert several features into specific instances of one more general feature. For example, we might convert our big fancy pattern language into a simpler one by elaborating each case into a bunch of nested cases.

Each time we remove a feature we end up with a slightly different language. This progression of languages in the compiler is called the "intermediate languages" (ILs). Each of these ILs has its own AST as well! In a good compiler we'll have a lot of ILs, as it makes the compiler much more maintainable.

An important part of choosing an IL is making it amenable to various optimizations. When the compiler is working with each IL it applies a set of optimizations to the program. For example (each is sketched in code after this list):

  1. Constant folding, converting 1 + 1 to 2 during compile time
  2. Inlining, copy-pasting the body of smaller functions where they’re called
  3. Fusion, turning multiple passes over a data structure into a single one
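Sketched concretely (all three rewrites are illustrative):

-- 1. Constant folding: evaluate constants at compile time
--      x = 1 + 1            ==>   x = 2
-- 2. Inlining: copy-paste small function bodies at call sites
--      double y = y + y
--      double 3             ==>   3 + 3
-- 3. Fusion: collapse two list traversals into one
--      map f (map g xs)     ==>   map (f . g) xs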


Spineless, Tagless, and Generally Wimpy IL

At some point in the compiler, we have to deal with the fact that we're compiling a lazy language. One nice way is to use a spineless tagless graph machine (STG machine).

How an STG machine works is a little complicated but here’s the gist

  • An expression becomes a closure/thunk, a bundling of the code to compute the expression and the data it needs. These closures may depend on several arguments being supplied
  • We have a stack for arguments and another for continuations. A continuation is some code which takes the value returned from an expression and does something with it, like pattern match on it
  • To evaluate an expression we push the arguments it needs onto the stack and “enter” the corresponding closure, running the code in it
  • When the expression has evaluated itself it will pop the next continuation off the stack and give it the resulting value

During this portion of the compiler, we'd transform our last IL into a C-like language which actually works in terms of pushing, popping, and entering closures.

The key idea here that makes laziness work is that a closure defers work! It’s not a value, it’s a recipe for how to compute a value when we need it. Also note, all calls are tail calls since function calls are just a special case of entering a closure.

Another really beautiful idea in the STG machine is that closures evaluate themselves. This means closures present a uniform interface no matter what; all the details are hidden in that bundled-up code. (I'm totally out of words to say this, but screw it, it's really cool.)
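Here is a sketch of that idea in plain Haskell (an illustration only, not GHC's actual machinery): a thunk bundles up deferred work, and "entering" it runs that work once and overwrites itself with the answer.

import Data.IORef

-- A self-updating thunk: entering it either runs the bundled code
-- (the first time) or returns the already-computed value.
newtype Thunk a = Thunk (IORef (IO a))

delay :: IO a -> IO (Thunk a)
delay act = do
  ref <- newIORef (error "unreachable")
  writeIORef ref $ do
    a <- act                    -- run the deferred work once
    writeIORef ref (return a)   -- overwrite ourselves with the result
    return a
  return (Thunk ref)

enter :: Thunk a -> IO a
enter (Thunk ref) = readIORef ref >>= \run -> run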


Code Generation

Finally, after converting to the STG machine, we're ready to output the target code. This bit is very dependent on what exactly we're targeting.

If we’re targeting assembly, we have a few things to do. First, we have to switch from using variables to registers. This process is called register allocation and we basically slot each variable into an available register. If we run out, we store variables in memory and load it in as we need it.

In addition to register allocation, we have to compile those C-like language constructs to assembly. This means converting procedures into a label and some instructions, pattern matches into something like a jump table and so on. This is also where we’d apply low-level, bit-twiddling optimizations.


Conclusion

Okay, clock off.

Hopefully that was helpful even if you don’t care that much about lazy languages (most of these ideas apply in any compiler). In particular, I hope that you now believe me when I say that lazy languages aren’t magical. In fact, the worry of how to implement laziness only really came up in one section of the compiler!

Now I have a question for you dear reader, what should I elaborate on? With summer ahead, I’ll have some free time soon. Is there anything else that you would like to see written about? (Just not parsing please)


May 19, 2015 12:00 AM

Jasper Van der Jeugt

Can we write a Monoidal Either?

> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> import           Control.Applicative (Applicative (..))
> import           Data.Monoid         (Monoid, (<>))
> newtype Expr = Expr String
> instance Show Expr where show (Expr x) = x
> type Type = String
> type TypeScope = [Type]
> type Scope = String
> type Program = String
> expr1, expr2 :: Expr
> expr1 = Expr "False == 3"
> expr2 = Expr "[1, 'a']"
> program1 :: Program
> program1 = undefined

Introduction

For the last month or so, I have been working as a contractor for Luminal. I am helping them implement Fugue, and more specifically Ludwig – a compiler for a statically typed declarative configuration language. This is one of the most interesting projects I have worked on so far – writing a compiler is really fun. While implementing some parts of this compiler, I came across an interesting problem.

In particular, a typeclass instance seemed to adhere to both the Monad and Applicative laws, but with differing behaviour – which felt a bit fishy. I started a discussion on Twitter to understand it better, and these are my thoughts on the matter.

The problem

Suppose we’re writing a typechecker. We have to do a number of things:

  • Parse the program to get a list of user-defined types.
  • Typecheck each expression.
  • A bunch of other things, but for the purpose of this post we’re going to leave it at that.

Now, any of these steps could fail, and we’d like to log the reason for failure. Clearly this is a case for something like Either! Let’s define ourselves an appropriate datatype.

> data Check e a
>     = Failed e
>     | Ok     a
>     deriving (Eq, Show)

We can write a straightforward Functor instance:

> instance Functor (Check e) where
>     fmap _ (Failed e) = Failed e
>     fmap f (Ok     x) = Ok (f x)

The Monad instance is also very obvious:

> instance Monad (Check e) where
>     return x = Ok x
> 
>     Failed e >>= _ = Failed e
>     Ok     x >>= f = f x

However, the Applicative instance is not that obvious – we seem to have a choice.

But first, let’s take a step back and stub out our compiler a bit more, so that we have some more context. Imagine we have the following types in our compiler:

data Type = ...
data Expr = ...
data Program = ...
type TypeScope = [Type]

And our code looks like this:

> findDefinedTypes1 :: Program -> Check String TypeScope
> findDefinedTypes1 _ = Ok []  -- Assume we can't define types for now.
> typeCheck1 :: TypeScope -> Expr -> Check String Type
> typeCheck1 _ e = Failed $ "Could not typecheck: " ++ show e
> compiler1 :: Check String ()
> compiler1 = do
>     scope <- findDefinedTypes1 program1
>     typeCheck1 scope expr1  -- False == 3
>     typeCheck1 scope expr2  -- [1, 'a']
>     return ()

On executing compiler1, we get the following error:

*Main> compiler1
Failed "Could not typecheck: False == 3"

Which is correct, but using a compiler entirely written in this fashion would be annoying. Check, like Either, short-circuits on the first error it encounters. This means we would compile our program, fix one error, compile, fix the next error, compile, and so on.

It would be much nicer if users were to see multiple error messages at once.

Of course, this is not always possible. On one hand, if findDefinedTypes1 throws an error, we cannot possibly call typeCheck1, since we do not have a TypeScope.

On the other hand, if findDefinedTypes1 succeeds, shouldn’t it be possible to collect error messages from both typeCheck1 scope expr1 and typeCheck1 scope expr2?

It turns out this is possible, precisely because the second call to typeCheck1 does not depend on the result of the first call – so we can execute them in parallel, if you will. And that is precisely the difference in expressive power between Monad and Applicative: Monadic >>= provides access to previously computed results, whereas Applicative <*> does not. Let’s (ab?)use this to our advantage.
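The type signatures make this difference visible: the function passed to >>= receives the result of the previous computation, while with <*> both computations are fixed up front:

(>>=) :: Monad m       => m a -> (a -> m b) -> m b
(<*>) :: Applicative f => f (a -> b) -> f a -> f b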

The solution?

Cleverly, we put together the following instance:

> instance Monoid e => Applicative (Check e) where
>     pure x = Ok x
> 
>     Ok     f  <*> Ok     x  = Ok (f x)
>     Ok     _  <*> Failed e  = Failed e
>     Failed e  <*> Ok     _  = Failed e
>     Failed e1 <*> Failed e2 = Failed (e1 <> e2)

Using this instance we can effectively collect error messages. We need to change our code a bit to support a collection of error messages, so let’s use [String] instead of String since a list is a Monoid.

> findDefinedTypes2 :: Program -> Check [String] TypeScope
> findDefinedTypes2 _ = Ok []  -- Assume we can't define types for now.
> typeCheck2 :: TypeScope -> Expr -> Check [String] Type
> typeCheck2 _ e = Failed ["Could not typecheck: " ++ show e]
> compiler2 :: Check [String] ()
> compiler2 = do
>     scope <- findDefinedTypes2 program1
>     typeCheck2 scope expr1 *> typeCheck2 scope expr2
>     return ()

Note that *> is the Applicative equivalent of the Monadic >>.
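(Its default definition in base is roughly the following – and since it goes through <*>, a *> of two Failed values combines both error messages:)

(*>) :: Applicative f => f a -> f b -> f b
a1 *> a2 = (id <$ a1) <*> a2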

Now, every error is represented by a list of error messages (typically a singleton such as in typeCheck2), and the Applicative <*> combines error messages. If we execute compiler2, we get:

*Main> compiler2
Failed ["Could not typecheck: False == 3",
        "Could not typecheck: [1, 'a']"]

Success! But is that all there is to it?

The problem with the solution

The problem is that we have created a situation where <*> is not equal to ap [1]. After researching this for a while, it seems that <*> = ap is not a verbatim rule. However, most arguments suggest it should be the case – even the name.

This is important for refactoring, for example. Quite a few Haskell programmers (including myself) would refactor:

do b <- bar
   q <- qux
   return (Foo b q)

Into:

Foo <$> bar <*> qux

Without putting too much thought in it, just assuming it does the same thing.

In our case, they are clearly similar, but not equal – we would get only one error instead of collecting error messages. One could argue that this is close enough, but use that argument too frequently and you might just end up with something like PHP.

The problem becomes more clear in the following fragment:

checkForCyclicImports modules >>
compileAll modules

Which has completely different behaviour from this fragment:

checkForCyclicImports modules *>
compileAll modules

The latter will get stuck in some sort of infinite recursion, while the former will not. This is not a subtle difference anymore. While the problem is easy to spot here (>> vs. *>), this is not always the case:

forEveryImport_ :: Monad m => Module -> (Import -> m ()) -> m ()

Ever since AMP, it is impossible to tell whether this will do a forM_- or a for_-like traversal without looking at the implementation – which makes mistakes easy.

The solution to the problem with the solution

As we discussed in the previous section, it should be possible for a programmer to tell exactly how a Monad or Applicative will behave, without having to dig into implementations. Having a structure where <*> and ap behave slightly differently makes this hard.

When a Haskell programmer wants to make a clear distinction between two similar types, the first thing that comes to mind is probably newtypes. This problem is no different.

Let’s introduce a newtype for error-collecting Applicative. Since the Functor instance is exactly the same, we might as well generate it using GeneralizedNewtypeDeriving.

> newtype MonoidCheck e a = MonoidCheck {unMonoidCheck :: Check e a}
>     deriving (Functor, Show)

Now, we provide our Applicative instance for MonoidCheck:

> instance Monoid e => Applicative (MonoidCheck e) where
>     pure x = MonoidCheck (Ok x)
> 
>     MonoidCheck l <*> MonoidCheck r = MonoidCheck $ case (l, r) of
>         (Ok     f , Ok     x ) -> Ok (f x)
>         (Ok     _ , Failed e ) -> Failed e
>         (Failed e , Ok     _ ) -> Failed e
>         (Failed e1, Failed e2) -> Failed (e1 <> e2)

Finally, we avoid writing a Monad instance for MonoidCheck. This approach makes the code cleaner:

  • This ensures that when people use MonoidCheck, they are forced to use the Applicative combinators, and they cannot accidentally reduce the number of error messages.

  • For other programmers reading the code, it is very clear whether we are dealing with short-circuiting behaviour or collecting multiple error messages: it is explicit in the types.

Usage and conversion

Our fragment now becomes:

> findDefinedTypes3 :: Program -> Check [String] TypeScope
> findDefinedTypes3 _ = Ok []  -- Assume we can't define types for now.
> typeCheck3 :: TypeScope -> Expr -> MonoidCheck [String] Type
> typeCheck3 _ e = MonoidCheck $ Failed ["Could not typecheck: " ++ show e]
> compiler3 :: Check [String] ()
> compiler3 = do
>     scope <- findDefinedTypes3 program1
>     unMonoidCheck $ typeCheck3 scope expr1 *> typeCheck3 scope expr2
>     return ()

We can see that while it is not more concise, it is definitely clearer: we can see exactly which functions will collect error messages. Furthermore, if we now try to write:

typeCheck3 scope expr1 >> typeCheck3 scope expr2

We will get a type error, since MonoidCheck has no Monad instance – nudging us towards *> instead.

Explicitly, we now convert between Check and MonoidCheck by simply calling MonoidCheck and unMonoidCheck. We can do this inside other transformers if necessary, using e.g. mapReaderT.
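For example, a small sketch (the surrounding transformer stack is hypothetical):

import Control.Monad.Trans.Reader (ReaderT, mapReaderT)

-- Switch a Reader-wrapped computation over to the
-- error-collecting Applicative, and back again.
collecting :: ReaderT env (Check e) a -> ReaderT env (MonoidCheck e) a
collecting = mapReaderT MonoidCheck

shortCircuiting :: ReaderT env (MonoidCheck e) a -> ReaderT env (Check e) a
shortCircuiting = mapReaderT unMonoidCheck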

Data.Either.Validation

The MonoidCheck discussed in this blogpost is available as Data.Either.Validation on hackage. The main difference is that instead of using a newtype, the package authors provide a full-blown datatype.

> data Validation e a
>     = Failure e
>     | Success a

And two straightforward conversion functions:

> validationToEither :: Validation e a -> Either e a
> validationToEither (Failure e) = Left e
> validationToEither (Success x) = Right x
> eitherToValidation :: Either e a -> Validation e a
> eitherToValidation (Left e)  = Failure e
> eitherToValidation (Right x) = Success x

This makes constructing values a bit easier:

Failure ["Can't go mucking with a 'void*'"]

Instead of:

MonoidCheck $ Failed ["Can't go mucking with a 'void*'"]

At this point, it shouldn’t surprise you that Validation intentionally does not provide a Monad instance.
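Its Applicative instance, though, mirrors the one we wrote for MonoidCheck – roughly the following sketch (the package’s actual instance may differ in the exact class constraint):

instance Functor (Validation e) where
    fmap _ (Failure e) = Failure e
    fmap f (Success x) = Success (f x)

instance Monoid e => Applicative (Validation e) where
    pure = Success
    Success f  <*> Success x  = Success (f x)
    Success _  <*> Failure e  = Failure e
    Failure e  <*> Success _  = Failure e
    Failure e1 <*> Failure e2 = Failure (e1 <> e2)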

Conclusion

This, of course, is all my opinion – there doesn’t seem to be any definite consensus on whether or not ap should be the same as <*>, since differing behaviour occurs in prominent libraries. While the Monad and Applicative laws are relatively well known, there is no canonical law saying that ap = <*>.

Update: there actually is a canonical law that ap should be <*>, and it was right under my nose in the Monad documentation since AMP. Before that, it was mentioned in the Applicative documentation. Thanks to quchen for pointing that out to me!

A key point here is that the AMP actually related the two typeclasses. Before that, arguing that the two classes were in a way “unrelated” was still a (dubious) option, but that is no longer the case.

Furthermore, considering this as a law might reveal opportunities for optimisation [2].

Lastly, I am definitely a fan of implementing these differing behaviours using different types and then converting between them: the fact that types explicitly tell me about the behaviour of code is one of the reasons I like Haskell.

Thanks to Alex Sayers for proofreading and suggestions.


  1. ap is the Monadic sibling of <*> (which explains why <*> is commonly pronounced ap). It can be implemented on top of >>=/return:

    > ap :: Monad m => m (a -> b) -> m a -> m b
    > ap mf mx = do
    >     f <- mf
    >     x <- mx
    >     return (f x)
  2. Take this with a grain of salt – Currently, GHC does not use any of the Monad laws to perform any optimisation. However, some Monad instances use them in RULES pragmas.

May 19, 2015 12:00 AM

Christopher Done

How Haskellers are seen and see themselves

How Haskellers are seen

The type system and separated IO are an awkward, restricting space suit:

Spending most of their time gazing longingly at the next abstraction to yoink from mathematics:

Looking at anything outside the Haskell language and the type system:

Using unsafePerformIO:

How Haskellers see themselves

No, it’s not a space suit. It’s Iron Man’s suit!

The suit enables him to do impressive feats with confidence and safety:

Look at the immense freedom and power enabled by wearing the suit:

Reality

May 19, 2015 12:00 AM

May 18, 2015

Philip Wadler

Royal Navy whistleblower says Trident is "a disaster waiting to happen"

A Royal Navy weapons expert who served on HMS Victorious from January to April this year has released via WikiLeaks an eighteen-page report claiming Trident is "a disaster waiting to happen".

McNeilly's report on WikiLeaks.

Original report in The Sunday Herald.

McNeilly's report alleges 30 safety and security flaws on Trident submarines, based at Faslane on the Clyde. They include failures in testing whether missiles could be safely launched, burning toilet rolls starting a fire in a missile compartment, and security passes and bags going unchecked.

He also reports alarms being muted because they went off so often, missile safety procedures being ignored and top secret information left unguarded.

The independent nuclear submarine expert, John Large, concluded McNeilly was credible, though he may have misunderstood some of the things he saw.

Large said: "Even if he is right about the disorganisation, lack of morale, and sheer foolhardiness of the personnel around him - and the unreliability of the engineered systems - it is likely that the Trident system as a whole will tolerate the misdemeanours, as it's designed to do."

(Regarding the quote from Large, I'm less sanguine. Ignoring alarms is a standard prelude to disaster. See Normal Accidents.)

Second report in The National.

“We are so close to a nuclear disaster it is shocking, and yet everybody is accepting the risk to the public,” he warned. “It’s just a matter of time before we’re infiltrated by a psychopath or a terrorist.”

Coverage in CommonSpace.


by Philip Wadler ([email protected]) at May 18, 2015 10:14 PM

Gabriel Gonzalez

The internet of code


In this post I will introduce a proof-of-concept implementation for distributing typed code over the internet where the unit of compilation is individual expressions.

The core language

To motivate this post, consider this Haskell code:

data Bool = True | False

and :: Bool -> Bool -> Bool
and b1 b2 = if b1 then b2 else False

or :: Bool -> Bool -> Bool
or b1 b2 = if b1 then True else b2

data Even = Zero | SuccE Odd

data Odd = SuccO Even

four :: Even
four = SuccE (SuccO (SuccE (SuccO Zero)))

doubleEven :: Even -> Even
doubleEven (SuccE o) = SuccE (SuccO (doubleOdd o))
doubleEven Zero = Zero

doubleOdd :: Odd -> Even
doubleOdd (SuccO e) = SuccE (SuccO (doubleEven e))

I will encode each one of the above types, terms, and constructors as separate, closed, non-recursive expressions in the calculus of constructions. You can think of the calculus of constructions as a typed assembly language for functional programs which we will use to distribute program fragments over the internet. You can learn more about the calculus of constructions and other pure type systems by reading this clear paper by Simon Peyton Jones: "Henk: a typed intermediate language".

For example, here is how you encode the True constructor in the calculus of constructions:

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Note that the entire expression is the True constructor, not just the right-hand side:

             This is the True constructor
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
^^^^
Not this

I just chose the variable names so that you can tell at a glance what constructor you are looking at from the right-hand side of the expression.
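For comparison, the False constructor is identical except that it returns the other bound variable:

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False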

Similarly, here is how you encode the type Bool in the calculus of constructions:

               This is the `Bool` type
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool
^^^^
Not this

Again, the entire expression is the Bool type, but I chose the variable names so that you can tell which type you are looking at from the right-hand side.

You can learn more about the full set of rules for translating data types to System F (a subset of the calculus of constructions) by reading the paper: "Automatic synthesis of typed Λ-programs on term algebras". Also, I will soon release a compiler named annah that automates this translation algorithm, and I used this compiler to translate the above Haskell code to the equivalent expressions in the calculus of constructions.

Distribution

We can distribute these expressions by hosting each expression as text source code on the internet. For example, I encoded all of the above types, terms and constructors in the calculus of constructions and hosted them using a static web server. You can browse these expressions by visiting sigil.place/post/0/.

Click on one of the expressions in the directory listing to see how they are encoded in the calculus of constructions. For example, if you click the link to four you will find an ordinary text file whose contents look like this (formatted for clarity):

  λ(Even : *)                           -- This entire
→ λ(Odd : *)                            -- expression is
→ λ(Zero : Even)                        -- the number `four`
→ λ(SuccE : ∀(pred : Odd) → Even)       --
→ λ(SuccO : ∀(pred : Even) → Odd)       --
→ SuccE (SuccO (SuccE (SuccO Zero)))    -- Not just this last line

Each one of these expressions gets a unique URL, and we can embed any expression in our code by referencing the appropriate URL.

Remote imports

We can use the morte compiler to download, parse, and super-optimize programs written in the calculus of constructions. The morte compiler reads in a program from standard input, outputs the program's type to standard error, then super-optimizes the program and outputs the optimized program to standard output.

For example, we can compute and True False at compile time by just replacing and, True, and False by their appropriate URLs:

$ cabal install 'morte >= 1.2'
$ morte
#http://sigil.place/post/0/and
#http://sigil.place/post/0/True
#http://sigil.place/post/0/False

When we hit <Ctrl-D> to signal the end of standard input, morte will compile the program:

$ morte
#http://sigil.place/post/0/and
#http://sigil.place/post/0/True
#http://sigil.place/post/0/False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

The program's type is Bool and morte optimizes away the program to False at compile time. Both the type (Bool) and the value (False) are encoded in the calculus of constructions.

Here we are using morte as a compile-time calculator, mainly because Morte does not yet compile to a backend language. When I release a backend language I will go into more detail about how to embed expressions to evaluate at runtime instead of compile time.

Local imports

We can shorten this example further because morte also lets you import expressions from local files using the same hashtag syntax. For example, we can create local files that wrap remote URLs like this:

$ echo "#http://sigil.place/post/0/Bool"  > Bool
$ echo "#http://sigil.place/post/0/True" > True
$ echo "#http://sigil.place/post/0/False" > False
$ echo "#http://sigil.place/post/0/or" > or

We can then use these local files as convenient short-hand aliases for remote imports:

$ morte
#or #True #False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

We can use imports anywhere in our program, even as types! For example, in the calculus of constructions you encode if as the identity function on #Bools:

λ(b : #Bool ) → b  # Note: Imports must end with whitespace

We can then save our implementation of if to a file named if, except using the ASCII symbols \ and -> instead of λ and →:

$ echo "\(b : #Bool ) -> b" > if

Now we can define our own and function in terms of if. Remember that the Haskell definition of and is:

and b1 b2 = if b1 then b2 else False

Our definition won't be much different:

$ echo "\(b1 : #Bool ) -> \(b2 : #Bool ) -> #if b1 #Bool b2 #False" > and

Let's confirm that our new and function works:

$ echo "#and #True #False" | morte
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

We can also ask morte to resolve all imports for our and function and optimize the result:

$ morte < and
∀(b1 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ ∀(b2 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(b1 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ λ(b2 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ b1 (∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
b2
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

We can then compare our version with the and expression hosted online, which is identical:

$ curl sigil.place/post/0/and
λ(b1 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ λ(b2 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ b1 (∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
b2
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

Reduction

When we write an expression like this:

#or #True #False

The compiler resolves all imports transitively until all that is left is an expression in the calculus of constructions, like this one:

-- or
( λ(b1 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ b1 (∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)
)
-- True
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True )
-- False
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

Then the compiler reduces this expression using β-reduction and ε-reduction. We can safely reduce these expressions at compile time because these reductions always terminate in the calculus of constructions, which is a total and non-recursive language.

For example, the above expression reduces to:

  ( λ(b1 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ b1 (∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)
)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True )
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

-- β-reduce
= (λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True )
(∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

-- β-reduce
= ( λ(True : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ λ(False : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ True
)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

-- β-reduce
= ( λ(False : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True
)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

-- β-reduce
= λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True

Linking

The and we defined is "dynamically linked", meaning that the file we saved has not yet resolved all imports:

$ cat and
\(b1 : #Bool ) -> \(b2 : #Bool ) -> #if b1 #Bool b2 #False

The morte compiler will resolve these imports every time we import this expression within a program. To be precise, each import is resolved once per program and then cached and reused for subsequent duplicate imports. That means that the compiler only imports #Bool once for the above program and not three times. Also, we can transparently cache these expressions just like any other web resource by providing the appropriate Cache-Control HTTP header. For example, my static web server sets max-age to a day so that expressions can be cached for up to one day.
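For instance, a response header along these lines (one day = 86400 seconds) is all it takes:

Cache-Control: max-age=86400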

If our imported expressions change then our program will reflect those changes, which may or may not be desirable. For the above program dynamic linking is undesirable because if we change the file #False to point to sigil.place/post/0/True then we would break the behavior of the and function.

Alternatively, we can "statically link" the and function by resolving all imports using the morte compiler. For example, I statically linked my remote and expression because the behavior should never change:

$ curl sigil.place/post/0/and
λ(b1 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ λ(b2 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ b1 (∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
b2
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False)

In other scenarios you might want to dynamically link expressions if you want to automatically pull in upgrades from trusted upstream sources. This is the same rationale behind service-oriented architectures which optimize for transparent system-wide updates, except that instead of updating a service we update an expression.

Partial application

We can store partially applied functions in files, too. For example, we could store and True in a statically linked file named example using morte:

$ echo "#and #True" | morte > example
∀(b2 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool)
→ ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

The type still goes to standard error, but the partially applied function goes to the example file. We can use the partially applied function just by referencing our new file:

$ morte
#example #False -- Same as: #and #True #False
<Ctrl-D>
∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → False

We can even view example and see that it's still just an ordinary text source file:

$ cat example
λ(b2 : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) → b2

We can also see that morte was clever and optimized #and #True to the identity function on #Bools.

If we wanted to share our example code with our friends, we'd just host the file using any static web server. I like to use Haskell's warp server (from the wai-app-static package) for static hosting, but even something like python -m SimpleHTTPServer would work just as well:

$ cabal install wai-app-static
$ warp
Serving directory /tmp/code on port 3000 with ["index.html","index.htm"] index files.

Then we could provide our friends with a URL pointing to the example file and they could embed our code within their program by pasting in our URL.

Types

The calculus of constructions is typed, so if you make a mistake, you'll know immediately:

$ morte
#True #True
<Ctrl-D>
Expression:
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)
(λ(Bool : *) → λ(True : Bool) → λ(False : Bool) → True)

Error: Function applied to argument of the wrong type

Expected type: *
Argument type: ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool

Types are what differentiate morte from piping curl into sh. You can use the type system to whitelist the set of permissible values to import.

For example, in this code, there are only two values of #x that will type-check (up to α-conversion):

(λ(b : ∀(Bool : *) → ∀(True : Bool) → ∀(False : Bool) → Bool) → b) #x

Therefore, we can safely import a remote value knowing that the type-checker will reject attempts to inject arbitrary code.

When building a program with effects, we can similarly refine the set of permissible actions using the types. I introduced one such example in my previous post on morte, where the recursive.mt program restricts the effects to reading and printing lines of text and nothing else. You could then import a remote expression of type:

  ∀(String : *)
→ ∀(U : *)
→ ∀(Unit : U)
→ ∀(IO : *)
→ ∀(GetLine : String → IO → IO)
→ ∀(PutStrLn : (String → IO) → IO)
→ ∀(Return : U → IO)
→ IO

... which is the type of an effect syntax tree built from GetLine/PutStrLn/Return constructors. The type-checker will then enforce that the imported syntax tree cannot contain any other constructors and therefore cannot be interpreted to produce any other effects.

Recursive data types

You can encode recursive data types and functions in the calculus of constructions. This is all the more amazing when you realize that the calculus of constructions does not permit recursion! morte's import system forbids recursion as well; if you try to recurse using imports you will get an error:

$ echo "#foo" > bar
$ echo "#bar" > foo
$ morte < foo
morte:
⤷ #bar
⤷ #foo
Cyclic import: #bar

Joe Armstrong once proposed that the core language for an internet of code would require built-in support for recursion (via letrec or something similar), but that's actually not true! The paper "Automatic synthesis of typed Λ-programs on term algebras" spells out how to encode recursive data types in the non-recursive System F language. What's amazing is that the algorithm works even for mutually recursive data types like Even and Odd from our original Haskell example.
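To see the same trick in Haskell terms, here is a sketch (using RankNTypes) of the non-recursive encoding of Even – note that no recursion appears in the definition, mirroring the hosted four above:

{-# LANGUAGE RankNTypes #-}

-- `Even` is its own fold: a consumer supplies the three constructors.
newtype Even = Even
  (forall even odd.
       even              -- Zero
    -> (odd  -> even)    -- SuccE
    -> (even -> odd)     -- SuccO
    -> even)

four :: Even
four = Even (\zero succE succO -> succE (succO (succE (succO zero))))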

You don't have to take my word for it! You can verify for yourself that the Even and Odd types and the Zero/SuccE/SuccO constructors that I hosted online are not recursive.

Let's create local aliases for the constructors so we can build our own Even or Odd values:

$ echo "#http://sigil.place/post/0/Zero"  > Zero
$ echo "#http://sigil.place/post/0/SuccE" > SuccE
$ echo "#http://sigil.place/post/0/SuccO" > SuccO

We can then assemble the number four using these constructors:

$ morte
#SuccE (#SuccO (#SuccE (#SuccO #Zero )))
<Ctrl-D>
∀(Even : *)
→ ∀(Odd : *)
→ ∀(Zero : Even)
→ ∀(SuccE : ∀(pred : Odd) → Even)
→ ∀(SuccO : ∀(pred : Even) → Odd)
→ Even

λ(Even : *)
→ λ(Odd : *)
→ λ(Zero : Even)
→ λ(SuccE : ∀(pred : Odd) → Even)
→ λ(SuccO : ∀(pred : Even) → Odd)
→ SuccE (SuccO (SuccE (SuccO Zero)))

The result is identical to the four that I hosted:

$ curl sigil.place/post/0/four
λ(Even : *)
→ λ(Odd : *)
→ λ(Zero : Even)
→ λ(SuccE : ∀(pred : Odd) → Even)
→ λ(SuccO : ∀(pred : Even) → Odd)
→ SuccE (SuccO (SuccE (SuccO Zero)))

We can even encode functions over mutually recursive types like doubleEven and doubleOdd – you can verify that the ones I wrote are not recursive – and then we can test that they work by doubling the number four:

$ morte
#http://sigil.place/post/0/doubleEven
#http://sigil.place/post/0/four
<Ctrl-D>
∀(Even : *)
→ ∀(Odd : *)
→ ∀(Zero : Even)
→ ∀(SuccE : ∀(pred : Odd) → Even)
→ ∀(SuccO : ∀(pred : Even) → Odd)
→ Even

λ(Even : *)
→ λ(Odd : *)
→ λ(Zero : Even)
→ λ(SuccE : ∀(pred : Odd) → Even)
→ λ(SuccO : ∀(pred : Even) → Odd)
→ SuccE (SuccO (SuccE (SuccO (SuccE (SuccO (SuccE (SuccO Zero)))))))

We get back the Even number eight encoded in the calculus of constructions.

Stack traces

morte will provide a "stack trace" if there is a type error or parse error:

$ echo "\(a : *) ->" > foo  # This is a malformed program
$ echo "#foo" > bar
$ echo "#bar" > baz
$ echo "#baz" > qux
$ morte < qux
morte:
⤷ #qux
⤷ #baz
⤷ #bar
⤷ #foo

Line: 2
Column: 1

Parsing: EOF

Error: Parsing failed

You can learn more about how morte's import system works by reading the newly added "Imports" section of the morte tutorial.

Comparison to other software architectures

morte's code distribution system most closely resembles the distribution model of Javascript, meaning that code can be downloaded from any URL and is compiled or interpreted client-side. The most important difference between the two is the granularity of imports and the import mechanism.

In morte the unit of distribution is individual types, terms, and constructors and you can inject a remote expression anywhere in the syntax tree by referencing its URL. This is why we can do crazy things like use a URL for a type:

λ(b : #http://sigil.place/post/0/Bool ) → ...

The second difference is that morte is designed from the ground up to be typed and highly optimizable (analogous to asm.js, a restricted subset of Javascript designed for ease of optimization).

The third difference is that morte lets you precisely delimit what remote code can do using the type system, unlike Javascript.

Future directions

This is just one piece of the puzzle in a long-term project of mine to build a typed and distributed intermediate language that we can use to share code across language boundaries. I want to give people the freedom to program in the language of their choice while still interoperating freely with other languages. In other words, I'm trying to build a pandoc for programming languages.

However, this project is still not really usable, even in anger. There are several missing features to go, some of which will be provided by my upcoming annah library:

Requirement #1: There needs to be a way to convert between restricted subsets of existing programming languages and the calculus of constructions

annah currently provides logic to encode medium-level language abstractions to and from the calculus of constructions. In fact, that's how I converted the Haskell example at the beginning of this post into the calculus of constructions. For example, I used annah to derive how to encode the SuccE constructor in the calculus of constructions:

$ annah compile
type Even
data Zero
data SuccE (pred : Odd)

type Odd
data SuccO (pred : Even)

in SuccE
<Ctrl-D>

... and annah correctly deduced the type and value in the calculus of constructions:

  ∀(pred : ∀(Even : *)
         → ∀(Odd : *)
         → ∀(Zero : Even)
         → ∀(SuccE : ∀(pred : Odd) → Even)
         → ∀(SuccO : ∀(pred : Even) → Odd)
         → Odd )
→ ∀(Even : *)
→ ∀(Odd : *)
→ ∀(Zero : Even)
→ ∀(SuccE : ∀(pred : Odd) → Even)
→ ∀(SuccO : ∀(pred : Even) → Odd)
→ Even

  λ(pred : ∀(Even : *)
         → ∀(Odd : *)
         → ∀(Zero : Even)
         → ∀(SuccE : ∀(pred : Odd) → Even)
         → ∀(SuccO : ∀(pred : Even) → Odd)
         → Odd )
→ λ(Even : *)
→ λ(Odd : *)
→ λ(Zero : Even)
→ λ(SuccE : ∀(pred : Odd) → Even)
→ λ(SuccO : ∀(pred : Even) → Odd)
→ SuccE (pred Even Odd Zero SuccE SuccO)

Among other things, annah automates the algorithm from "Automatic synthesis of typed Λ-programs on term algebras", which is known as "Böhm-Berarducci encoding".

Requirement #2: There must be an efficient way to transmit bytes, text, and numbers alongside code.

My plan is to transmit this information out-of-band as a separate file rather than embedding the data directly within the code, and annah will provide a systematic convention for distributing data and referencing that data within source code.

Requirement #3: There needs to be a standard library of types, data structures, functions, and side effects that all target languages must support.

In other words, there needs to be some sort of thrift for code so that languages can maximize code sharing.

Requirement #4: There must be better tooling for mass installation and hosting of expressions.

For example, I'd like to be able to alias all imports within a remote directory to local files with a single command.

Requirement #5: I need to figure out a way to mesh type inference with an expression-level distribution system.

As far as I can tell this is still an open research problem and this is most likely going to be the greatest obstacle to making this usable in practice.

Resources

If you would like to learn more about Morte or contribute, then check out the following resources:

by Gabriel Gonzalez ([email protected]) at May 18, 2015 05:38 PM

Well-Typed.Com

Recent Hackage improvements

You may or may not have noticed, but over the last few months we’ve had a number of incremental improvements to the Hackage site, with patches contributed by numerous people.

I’m very pleased that we’ve had contributions from so many people recently. Apart from one patch that took a long time to merge we’ve generally been reasonably good at getting patches merged. Currently there’s just 1 outstanding pull request on hackage-server’s github site.

I gave a talk a couple months ago at the London Haskell User Group about getting started with hacking on Cabal and Hackage. Unfortunately the video isn’t yet available. (I’ll try and chase that up and link it here later).

An idea we floated at that talk was to run a couple hackathons dedicated to these and related infrastructure projects. If you want to help organise or have a venue in London, please get in touch. If you can’t get to London, fear not as we’d also welcome people attending online. Of course there’s also the upcoming ZuriHac where I expect there will be plenty of infrastructure work going on.

If you do want to get involved, the github site is the place to start. Discussion of features happens partly in issues on github and the #hackage channel on IRC. So those are good places to get feedback if you decide to start working on a bug or feature.

Recent changes

Visible changes

  • Code to render README and changelog files as markdown has been merged, though this is not yet in its final form.

    The idea is that these days many packages have good README files but less good descriptions, partly because Haddock markup has never been very good for writing long prose descriptions while markdown is much better suited for that. So the goal is to display package READMEs in a useful form. Some people would like to be able to just write a README and not have to partially duplicate that in the .cabal file description, and we want to support that option.

    So while the code has been merged (which is good because it was big and had been suffering bitrot), there is still a question of exactly when to display the README. The initial patch would show it in place of the description whenever the README existed, while a later revision would show both the description and README. Unfortunately while this greatly improved things for some packages it made things greatly worse for others.

    We are now discussing when to show the README inline so that it benefits those packages that have good READMEs, without making other packages much worse. Suggestions include only showing the README when the description field is blank, and putting the README at the bottom of the package page like github does to cope better with very long READMEs, or using an expander widget. We very much welcome feedback on this question, and pull requests even more so.

    In the meantime, while that is being sorted out, as a temporary solution the package page just has a link to the README rendered as markdown. If nothing else this makes it easy to see what various packages’ READMEs would look like when included inline (including some rendering issues due to the variety of markdown dialects).

    Thanks to Christian Conkle and Matthew Pickering for the original patch, and to Matthew Pickering and Michael Snoyman for fixing the bitrot and pestering us to get it reviewed and merged. It’s all much appreciated.

  • Changelogs are now also rendered as markdown. This doesn’t always work because some changelogs are not markdown and look bad when rendered as such, so we also provide a plain text link. See for example the changelog for zlib-0.6.1.0.

    The changelog link has been moved to the list of properties, immediately after the list of all versions. We’d had feedback that people had been overlooking the changelog link where it was previously.

    Future directions here include displaying the changelog inline, or parts or with an expander.

  • The display of dependencies and version ranges has been improved. Since the early days of Hackage we used a disjunctive normal form presentation, which was never particularly visually appealing or easy to understand, and was rather verbose. We now present a much shorter summary of the dependencies and their version constraints. This display has been tweaked a few times to make it a more accurate summary.

    Thanks to Kristen Kozak for making several improvements and to Michael Snoyman for feedback.

  • License files are now linked from the package properties. So for example rather than just saying “OtherLicense”, it now links directly to the license file inside the package tarball. See for example iconv-0.2.

  • The modules list on the package page now only links to Haddock pages that actually exist. This solves the problem of libraries with “semi-public” modules that are technically exposed but where they have a haddock HIDDEN pragma. The haddock index is now also linked.

  • Browsing the content of the package tarball is much improved. We now show a full directory listing, rather than just one directory at a time, and the presentation is generally clearer. See for example the contents of zlib-0.6.1.0.tar.gz.

  • Improvements to the mime types of files served from the contents of package documentation or tarballs. We now declare more things to be UTF8 and text/plain, so they’re easier to browse online rather than triggering your browser to download the file.

    Thanks to Rabi Shanker and Adam Bergmark.

  • The recent upload page now links to another page (and corresponding RSS feed) which lists all revisions to .cabal files from authors and trustees editing .cabal metadata. These feeds now always contain at minimum 48 hours’ worth of changes.

    Thanks to Gershom for this.

  • A new audit log for Hackage administrators. All changes to user’s permissions (by adding and removing them from groups) are now logged, with an optional reason/explanation, and this is visible to the admin.

    Thanks to Gershom for contributing this feature and Herbert Valerio Riedel for agitating to get it included.

  • The presentation of edits/diffs between .cabal file revisions has been improved to identify which component the change was in (since sometimes similar changes have to be made to multiple components in a package). See for example the revisions to fay-0.18.0.4.

    Thanks to Rabi Shanker.

A few boring but important ones

  • Build fixes for newer GHC versions. People run hackage-server instances locally so it’s always useful to be able to build with the latest, even if the central community site is using an older stable GHC for its build.

  • Potential DOS fix. Older versions of the aeson package had a problem with parsing large numbers that would cause it to take huge amounts of memory. We had been stuck with an old aeson package for a while due to incompatible changes in the JSON output for derived instances.

    Thanks to Michael Snoyman for reporting that and Daniel Gröber for fixing it.

  • Updated to Cabal-1.22. This has a number of small effects, like support for new licenses, and compilers (HaskellSuite).

    One regression that we discovered is that Cabal-1.22 changed one of the distribution Q/A checks on the -Werror flags, which meant that Hackage started rejecting these, even when guarded by a manual “dev” flag.

    I have since sent in a patch for Cabal to allow -Werror and other development flags when guarded by a manual cabal configuration flag. This lets people use -Werror in development and in their CI systems, but without Hackage accumulating lots of packages that bitrot quickly as GHC introduces new warnings and breaks packages that use -Werror by default. This Cabal change has been cherry-picked into the community hackage-server instance for the time being and is now deployed.

    Thanks to the many people who pointed out that this had changed and was a problem, and to Herbert Valerio Riedel for identifying the source of the problem.

  • Related to the Cabal version update is a change in the way that AllRightsReserved packages are handled. Cabal used to complain about distributing AllRightsReserved packages, but that’s not appropriate at that level of the toolchain. Hackage has been updated to implement that check instead, but only for the community’s public instance. If you run your own local Hackage server then it will now accept AllRightsReserved packages.

Miscellaneous small changes

  • In the administrators’ control panel, old account requests and resets are now cleaned up.

  • Internally there is a new cron subsystem which will be useful for new features.

  • More resources are available in JSON format. Thanks to Herbert Valerio Riedel for pointing out places where this was missing.

  • Improved README instructions for the hackage-server itself. Contributed by Michael Snoyman.

by duncan at May 18, 2015 04:31 PM

Ken T Takusagawa

[kdrtkgdr] Polymorphic matrix inversion

Here is matrix inversion implemented in pure Haskell and a demonstration of its use with Complex Rational exact arithmetic:

A= [ 9 8*i 9*i 4 8*i 3 5*i 3*i ; 6*i 6 7*i 5 0 3*i 5*i 7 ; 4 4*i 4*i 8 1*i 3 1 9*i ; 4 6*i 7 5 4*i 7 4 4 ; 0 3 6 0 4 2*i 0 7 ; 7 2 0 4*i 3 3*i 1 4*i ; 4*i 3 8 1 2*i 3 1 3*i ; 3 3 4*i 3*i 6 3 7 1*i ]

Ainv= [ -651341+282669*i 70449+105643*i 470260+543736*i -90558-322394*i 940582+401976*i -1471210-32706*i 304731+202312*i 428573+497242*i ; 544367-80697*i -1154274+187794*i 1139302-87593*i -21921+1224087*i 1488234-1168263*i -1627965+1312344*i -1248400-535541*i 337176-362289*i ; -31970+991102*i 93690-139399*i 123282-341373*i -421343-757909*i 50198+495419*i 8593-654052*i -468577+527063*i 897227+455914*i ; 695254+159129*i -629809+1226054*i -2027820-1557744*i -161289+1062653*i -1299979-787107*i 915864+11456*i 1378121-78438*i 1043558-421873*i ; -925872-186803*i 1371020-146791*i -2291428-34310*i 2099076-57282*i -3405480+2662395*i 1616586-1020331*i 313380-265939*i -1386116-96576*i ; -1420678-1211999*i 1206717-844239*i 93772+1350941*i 1290846-32987*i -511+2133602*i 1126031+1379618*i -2659652-75501*i -2000675-207615*i ; 2039672+316590*i -813732+665441*i 417543-103784*i -2312041+106452*i 1746852-2305394*i -871774-685499*i 1641748-24893*i -261188+84637*i ; -23113-302279*i -610268-221894*i 1101428+322959*i -838351-211053*i -239412-540356*i 160747+259506*i 736020+689615*i -180808+391291*i ] / (-14799118+6333791*i)

Aesthetically striking is how simple input matrices become complicated when inverted: complexity seemingly appears out of nowhere.  The complexity arises from a convolution of several effects: numerators and denominators intermingle as fractions get added; real and imaginary parts intermingle as complex numbers get multiplied; matrix elements intermingle in row operations.

(Perhaps this mixing could be put to cryptographic use someday? Row operations do vaguely look like Salsa20. AES has a field inversion to define its S box.)

The code is kind of a mess, not really trying to achieve performance: we compute the LU decomposition to get the determinant, which is used just to provide the scalar factor in the final displayed result of the inverse to avoid fractions inside the matrix.  (It is neat that multiplying through by the determinant makes fractions disappear.)  We do a separately implemented Gauss-Jordan elimination on the original matrix to compute the inverse, then multiply by the determinant computed via LU.

We made modifications to two library packages.

We forked the base module Data.Complex to a new module Data.ComplexNum to provide a Complex type that does not require RealFloat (so only requires Num).  (Complex normally requires RealFloat because abs and signum of a Complex number requires sqrt.)  Avoiding RealFloat allowed us to form Complex Rational, i.e., Complex (Ratio Integer), a powerful data type.  We defined abs and signum as error.

Our starting point for matrix operations was Data.Matrix in the matrix package (by Daniel Díaz).  The original luDecomp function calls abs to select the largest pivot during LU decomposition.  The output type of abs is constrained in the Num class to be the same as the input type (abs :: a -> a), so Complex numbers weirdly provide a Complex output type for abs even though mathematically the absolute value of a complex number is real.  Unfortunately Complex numbers do not provide Ord, so we cannot select the "largest" absolute value as the pivot.  Therefore, we added a new entry point to LU decomposition, luDecompWithMag, to allow passing in a custom function to compute the magnitude.  Since we are avoiding square root, we provided magnitude-squared (magnitude2) as the pivot-choosing function.
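The pivot-magnitude function itself is tiny – presumably something like this (assuming the forked module keeps Data.Complex's (:+) constructor):

import Data.ComplexNum (Complex ((:+)))  -- the forked module described above

-- Squared magnitude avoids sqrt, so it stays within Num.
magnitude2 :: Num a => Complex a -> a
magnitude2 (a :+ b) = a*a + b*b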

We fixed some (but not all) space leaks in LU decomposition so processing matrices of size 100x100 no longer requires gigabytes of memory.

We added to Data.Matrix two different implementations of matrix inversion, both of which use a monadic imperative style for in-place modification of the augmented matrix with row operations: one (invertS) uses Control.Monad.Trans.State.Strict and the other (invert) uses MVector and Control.Monad.ST.  The latter is a little bit faster. Both implementations required care to avoid space leaks, which were tracked down using ghc heap profiling.

The modifications to Data.Matrix can be tracked in this github repository.

Runtime (in seconds) and maxresident memory usage (k) of inversion of random Complex Rational matrices, where each entry in the original matrix is a random single-digit pure real or pure imaginary number (as in the example above with size 8), of sizes 10 through 320: [ 10 0.01 2888 ; 20 0.26 4280 ; 30 1.40 5324 ; 40 4.84 7132 ; 50 12.32 10204 ; 60 27.05 14292 ; 70 52.67 19416 ; 80 94.36 26588 ; 90 157.18 33752 ; 100 250.12 44000 ; 110 373.71 56524 ; 120 546.32 69600 ; 130 768.92 85988 ; 140 1056.05 103632 ; 150 1419.47 124900 ; 160 1877.97 148696 ; 170 2448.16 176364 ; 180 3130.31 208104 ; 190 3963.96 238824 ; 200 4958.93 277504 ; 210 6138.61 318704 ; 220 7543.16 364544 ; 230 9252.55 413700 ; 240 11115.32 467196 ; 250 13217.16 524300 ; 260 15690.80 594956 ; 270 18520.73 658448 ; 280 21761.36 734460 ; 290 25509.77 812288 ; 300 30048.34 1022028 ; 310 34641.25 1218636 ; 320 39985.63 1419340 ]

Runtime and memory for Hilbert matrices (Rational) of sizes 10 through 510: [ 10 0.00 2736 ; 20 0.03 3676 ; 30 0.14 4164 ; 40 0.51 4608 ; 50 1.45 5804 ; 60 3.65 8196 ; 70 7.86 9224 ; 80 15.59 11096 ; 90 27.85 13444 ; 100 45.03 17320 ; 110 67.97 21408 ; 120 98.61 26876 ; 130 137.02 27132 ; 140 187.48 29664 ; 150 248.99 37792 ; 160 325.17 38816 ; 170 420.00 43920 ; 180 529.02 55796 ; 190 659.03 56304 ; 200 811.39 65520 ; 210 980.33 70912 ; 220 1202.42 72608 ; 230 1426.34 82852 ; 240 1696.02 89996 ; 250 2014.39 97192 ; 260 2300.64 109496 ; 270 2693.48 118692 ; 280 3124.52 130980 ; 290 3589.06 138164 ; 300 4158.67 210900 ; 310 4711.61 223188 ; 320 5323.13 243692 ; 330 6047.59 262104 ; 340 6789.74 241640 ; 350 7621.69 269268 ; 360 8433.56 338924 ; 370 9447.28 360428 ; 380 10496.28 358356 ; 390 11662.98 422872 ; 400 12935.35 375772 ; 410 14203.69 477164 ; 420 15655.88 420840 ; 430 17143.12 547820 ; 440 19003.82 517104 ; 450 20592.28 550880 ; 460 22471.48 574428 ; 470 24484.14 604120 ; 480 26672.65 662492 ; 490 28958.52 766936 ; 500 31302.45 646132 ; 510 33932.76 731120 ]

Slope of runtime on a log-log plot is about 4.3.

According to Wikipedia, the worst case run time (bit complexity) of Gaussian elimination is exponential, but the Bareiss algorithm can guarantee O(n^5): Bareiss, Erwin H. (1968), "Sylvester's Identity and multistep integer-preserving Gaussian elimination", Mathematics of Computation 22 (102): 565–578, doi:10.2307/2004533.  Future work: try the code presented here on the exponentially bad matrices described in the paper; implement the Bareiss algorithm.

Also someday, it may be a fun exercise to create a data type encoding algebraic expressions which is an instance of Num, then use the polymorphism of the matrix code presented here to invert symbolic matrices.

by Ken ([email protected]) at May 18, 2015 07:44 AM

May 17, 2015

Eric Kidd

Unscientific column store benchmarking in Rust

I've been fooling around with some natural language data from OPUS, the “open parallel corpus.” This contains many gigabytes of movie subtitles, UN documents and other text, much of it tagged by part-of-speech and aligned across multiple languages. In total, there's over 50 GB of data, compressed.

“50 GB, compressed” is an awkward quantity of data:

Let's look at various ways to tackle this.

Read more…

May 17, 2015 08:03 PM

May 16, 2015

Philip Wadler

Status Report 5


I am recovered. My bone marrow biopsy and my scan at the National Amyloidosis Centre show no problems, and my urologist has discharged me. Photo above shows me and Bob Harper (otherwise known as TSOPLRWOKE, The Society of Programming Language Researchers With One Kidney Each) at Asilomar for Snapl.

My thanks again to staff of the NHS. Everyone was uniformly friendly and professional, and the standard of care has been excellent. My thanks also to everyone who wished me well, and especially to the SIGPLAN EC, who passed a get-well card around the world for signing, as shown below. I am touched to have received so many good wishes.



by Philip Wadler ([email protected]) at May 16, 2015 04:00 PM

Status Report 4


It seemed as if no time had passed: the anaesthetist injected my spine, and next thing I knew I was waking in recovery. Keyhole surgery to remove my left kidney was completed on Tuesday 17 March, and I expect to leave the Western General on Saturday 21 March. Meanwhile, progress on diagnosing the amyloid spotted in my liver: I had a bone marrow biopsy on Thursday 19 March, and two days of testing at the National Amyloidosis Centre in London are to be scheduled. NHS has provided excellent care all around.

My room was well placed for watching the partial eclipse this morning. A nurse with a syringe helped me jury rig a crude pinhole camera (below), but it was too crude. Fortunately, there was exactly the right amount of cloud cover through which to view the crescent sun. My fellow patients and our nurses all gathered together, and for five minutes it was party time on the ward.

Update: I left the hospital as planned on Saturday 21 March. Thanks to Guido, Sam, Shabana, Stephen, and Jonathan for visits; to Marjorie for soup; to Sukkat Shalom council for a card and to Gillian for hand delivery; and to Maurice for taking me in while my family was away.

Related: Status report, Status report 2, A paean to the Western General, Status report 3.


by Philip Wadler ([email protected]) at May 16, 2015 02:35 PM

May 14, 2015

mightybyte

LTMT Part 3: The Monad Cookbook

Introduction

The previous two posts in my Less Traveled Monad Tutorial series have not had much in the way of directly practical content. In other words, if you only read those posts and nothing else about monads, you probably wouldn't be able to use monads in real code. This was intentional because I felt that the practical stuff (like do notation) had adequate treatment in other resources. In this post I'm still not going to talk about the details of do notation--you should definitely read about that elsewhere--but I am going to talk about some of the most common things I have seen beginners struggle with and give you cookbook-style patterns that you can use to solve these issues.

Problem: Getting at the pure value inside the monad

This is perhaps the most common problem for Haskell newcomers. It usually manifests itself as something like this:

main = do
    lineList <- lines $ readFile "myfile.txt"
    -- ... do something with lineList here

That code generates the following error from GHC:

    Couldn't match type `IO String' with `[Char]'
    Expected type: String
      Actual type: IO String
    In the return type of a call of `readFile'

Many newcomers seem puzzled by this error message, but it tells you EXACTLY what the problem is. The return type of readFile has type IO String, but the thing that is expected in that spot is a String. (Note: String is a synonym for [Char].) The problem is, this isn't very helpful. You could understand that error completely and still not know how to solve the problem. First, let's look at the types involved.

readFile :: FilePath -> IO String
lines :: String -> [String]

Both of these functions are defined in Prelude. These two type signatures show the problem very clearly. readFile returns an IO String, but the lines function is expecting a String as its first argument. IO String != String. Somehow we need to extract the String out of the IO in order to pass it to the lines function. This is exactly what do notation was designed to help you with.

Solution #1

main :: IO ()
main = do
    contents <- readFile "myfile.txt"
    let lineList = lines contents
    -- ... do something with lineList here

This solution demonstrates two things about do notation. First, the left arrow lets you pull things out of the monad. Second, if you're not pulling something out of a monad, use "let foo =". One metaphor that might help you remember this is to think of "IO String" as a computation in the IO monad that returns a String. A do block lets you run these computations and assign names to the resulting pure values.

Solution #2

We could also attack the problem a different way. Instead of pulling the result of readFile out of the monad, we can lift the lines function into the monad. The function we use to do that is called liftM.

liftM :: Monad m => (a -> b) -> m a -> m b
liftM :: Monad m => (a -> b) -> (m a -> m b)

The associativity of the -> operator is such that these two type signatures are equivalent. If you've ever heard Haskell people saying that all functions are single argument functions, this is what they are talking about. You can think of liftM as a function that takes one argument, a function (a -> b), and returns another function, a function (m a -> m b). When you think about it this way, you see that the liftM function converts a function of pure values into a function of monadic values. This is exactly what we were looking for.

main :: IO ()
main = do
    lineList <- liftM lines (readFile "myfile.txt")
    -- ... do something with lineList here

This is more concise than our previous solution, so in this simple example it is probably what we would use. But if we needed to use contents in more than one place, then the first solution would be better.

Problem: Making pure values monadic

Consider the following program:

import Control.Monad
import System.Environment
main :: IO ()
main = do
    args <- getArgs
    output <- case args of
                [] -> "cat: must specify some files"
                fs -> liftM concat (mapM readFile fs)
    putStrLn output

This program also has an error. GHC actually gives you three errors here because there's no way for it to know exactly what you meant. But the first error is the one we're interested in.

    Couldn't match type `[]' with `IO'
    Expected type: IO Char
      Actual type: [Char]
    In the expression: "cat: must specify some files"

Just like before, this error tells us exactly what's wrong. We're supposed to have an IO something, but we only have a String (remember, String is the same as [Char]). It's not convenient for us to get the pure result out of the readFile functions like we did before because of the structure of what we're trying to do. The two branches of the case statement must have the same type, so that means that we need to somehow convert our String into an IO String. This is exactly what the return function is for.

Solution: return

return :: Monad m => a -> m a

This type signature tells us that return takes a value of any type a as input and returns "m a". So all we have to do is use the return function.

import Control.Monad
import System.Environment
main :: IO ()
main = do
    args <- getArgs
    output <- case args of
                [] -> return "cat: must specify some files"
                fs -> liftM concat (mapM readFile fs)
    putStrLn output

The 'm' that the return function wraps its argument in is determined by the context. In this case, main is in the IO monad, so that's what return uses.
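
To see that context sensitivity in action, here is a small sketch of my own (these definitions are not from this post): the exact same expression, return 5, produces a value in whichever monad the type annotation asks for.

example1 :: Maybe Int
example1 = return 5    -- Just 5

example2 :: [Int]
example2 = return 5    -- [5]

example3 :: IO Int
example3 = return 5    -- an IO action whose result is 5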

Problem: Chaining multiple monadic operations

import System.Environment
main :: IO ()
main = do
    [from,to] <- getArgs
    writeFile to $ readFile from

As you probably guessed, this function also has an error. Hopefully you have an idea of what it might be. It's the same problem of needing a pure value when we actually have a monadic one. You could solve it like we did in solution #1 on the first problem (you might want to go ahead and give that a try before reading further). But this particular case has a pattern that makes a different solution work nicely. Unlike the first problem, you can't use liftM here.

Solution: bind

When we used liftM, we had a pure function lines :: String -> [String]. But here we have writeFile :: FilePath -> String -> IO (). We've already supplied the first argument, so what we actually have is writeFile to :: String -> IO (). And again, readFile returns IO String instead of the pure String that we need. To solve this we can use another function that you've probably heard about when people talk about monads...the bind function.

(=<<) :: Monad m => (a -> m b) -> m a -> m b
(=<<) :: Monad m => (a -> m b) -> (m a -> m b)

Notice how the pattern here is different from the first example. In that example we had (a -> b) and we needed to convert it to (m a -> m b). Here we have (a -> m b) and we need to convert it to (m a -> m b). In other words, we're only adding an 'm' onto the 'a', which is exactly the pattern we need here. Here are the two patterns next to each other to show the correspondence.

writeFile to :: String -> IO ()
                     a ->  m b

From this we see that "writeFile to" is the first argument to the =<< function. readFile from :: IO String fits perfectly as the second argument to =<<, and then the return value is the result of the writeFile. It all fits together like this:

import System.Environment
main :: IO ()
main = do
    [from,to] <- getArgs
    writeFile to =<< readFile from

Some might point out that this third problem is really the same as the first problem. That is true, but I think it's useful to see the varying patterns laid out in this cookbook style so you can figure out what you need to use when you encounter these patterns as you're writing code. Everything I've said here can be discovered by carefully studying the Control.Monad module. There are lots of other convenience functions there that make working with monads easier. In fact, I already used one of them: mapM.
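
To give you a taste, here is a hedged sketch of my own (the line-counting program is made up, not something from this post) combining a few of those helpers: when runs an action only if a pure condition holds, and forM_ runs an action for each element of a list, discarding the results.

import Control.Monad (forM_, liftM, when)
import System.Environment (getArgs)

main :: IO ()
main = do
    args <- getArgs
    -- when: only print the warning if the argument list is empty
    when (null args) $ putStrLn "warning: no files given"
    -- forM_: perform a monadic action for each file name
    forM_ args $ \f -> do
        n <- liftM (length . lines) (readFile f)
        putStrLn (f ++ ": " ++ show n ++ " lines")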

When you're first learning Haskell, I would recommend that you keep the documentation for Control.Monad close by at all times. Whenever you need to do something new involving monadic values, odds are good that there's a function in there to help you. I would not recommend spending 10 hours studying Control.Monad all at once. You'll probably be better off writing lots of code and referring to it whenever you think there should be an easier way to do what you want to do. Over time the patterns will sink in as you form new connections between different concepts in your brain.

It takes effort. Some people do pick these things up more quickly than others, but I don't know anyone who just read through Control.Monad and then immediately had a working knowledge of everything in there. The patterns you're grappling with here will almost certainly be foreign to you because no other mainstream language enforces this distinction between pure values and side-effecting values. But I think the payoff of being able to separate pure and impure code is well worth the effort.

by mightybyte ([email protected]) at May 14, 2015 05:52 PM

JP Moresmau

EclipseFP end of life (from me at least)

Hello, after a few years and several releases, I am now stopping the maintenance of EclipseFP and its companion Haskell packages (BuildWrapper, ghc-pkg-lib and scion-browser). If anybody wants to take over, I'll gladly give them all that's required to get started. Feel free to fork and continue!

Why am I stopping? Not for any one specific reason. Seeing that I had to adapt BuildWrapper to GHC 7.10 didn't exactly fill me with joy, but more generally I got tired of being the single maintainer for this project. I got a few pull requests over the years and some people have at some stage participated (thanks to you, you know who you are!), but not enough, and the biggest part of the work has always been on my shoulders. Let's say I got tired of getting an endless stream of issue reports and enhancement requests with nobody stepping up to actually address them.

Also, I don't think it makes sense on the Haskell side for me to keep on working on a GHC API wrapper like BuildWrapper. There are other alternatives, and with the release of ide-backend, backed by FP Complete, a real company staffed by competent people who seem to have more than 24 hours per day to hack on Haskell tools, it makes more sense to have consolidation there.

The goal of EclipseFP was to make it easy for Java developers and other Eclipse users to move to Haskell, and I think this has been a failure, mainly due to the inherent complexity of the setup (the Haskell stack plus the Java stack) and the technical challenges of integrating GHC and Cabal in a complex IDE like Eclipse. Of course we could have done better with the constraints we were operating under, but if more eyes had looked at the code and more hands had been on deck, we could have succeeded.

Personally I would now be interested in maybe getting the Atom editor to use ide-backend-client, or maybe work on a web based (but local) Haskell IDE. Some of my dabblings can be found at https://github.com/JPMoresmau/dbIDE. But I would much prefer to not work on my own, so if you have an open source project you think could do with my help, I'll be happy to hear about it!

I still think Haskell is a great language that would deserve a top-notch IDE, for newbies and experts alike, and I hope one day we'll get there.

For you EclipseFP users, you can of course keep using it as long as it works, but if no other maintainers step up, down the line you'll have to look for other options, as compatibility with the Haskell ecosystem will not be assured. Good luck!

Happy Haskell Hacking!

by JP Moresmau ([email protected]) at May 14, 2015 01:09 PM

Functional Jobs

OCaml server-side developer at Ahrefs Research (Full-time)

Who we are

Ahrefs Research is a San Francisco branch of Ahrefs Pte Ltd (Singapore), which runs an internet-scale bot that crawls the whole Web 24/7, storing huge volumes of information to be indexed and structured in a timely fashion. On top of that, Ahrefs is building analytical services for end-users.

Ahrefs Research develops a custom petabyte-scale distributed storage system to accommodate all that data coming in at high speed, focusing on performance, robustness and ease of use. The performance-critical low-level part is implemented in C++ on top of a distributed filesystem, while all the coordination logic and the communication layer, along with the API library exposed to developers, are in OCaml.

We are a small team and strongly believe in better technology leading to better solutions for real-world problems. We worship functional languages and static typing, extensively employ code generation and meta-programming, value code clarity and predictability, and constantly seek to automate repetitive tasks and eliminate boilerplate, guided by DRY and KISS. If there is any new technology that will make our life easier, no doubt we'll give it a try. We rely heavily on opensource code (as the only viable way to build a maintainable system) and contribute back, see e.g. https://github.com/ahrefs . It goes without saying that our team is all passionate and experienced OCaml programmers, ready to lend a hand or explain that intricate ocamlbuild rule.

Our motto is "first do it, then do it right, then do it better".

What we need

Ahrefs Research is looking for a backend developer with a deep understanding of operating systems and networks and a taste for simple and efficient architectural designs. Our backend is implemented mostly in OCaml with some C++, so proficiency in OCaml is very much appreciated; otherwise, a strong inclination to learn OCaml intensively in the short term will be required. Understanding of functional programming in general and/or experience with other FP languages (F#, Haskell, Scala, Scheme, etc.) will help a lot. Knowledge of C++ is a plus.

The candidate will have to deal with the following technologies on a daily basis:

  • networks & distributed systems
  • 4+ petabytes of live data
  • OCaml
  • C++
  • linux
  • git

The ideal candidate is expected to:

  • Independently deal with and investigate bugs, schedule tasks and dig code
  • Make well-argued technical choices and take responsibility for them
  • Understand the whole technology stack at all levels : from network and userspace code to OS internals and hardware
  • Handle full development cycle of a single component, i.e. formalize task, write code and tests, setup and support production (devops)
  • Approach problems with a practical mindset and suppress perfectionism when time is a priority

These requirements stem naturally from our approach to development with fast feedback cycle, highly-focused personal areas of responsibility and strong tendency to vertical component splitting.

What you get

We provide:

  • Competitive salary
  • Modern office in San Francisco SOMA (Embarcadero)
  • Informal and thriving atmosphere
  • First-class workplace equipment (hardware, tools)
  • No dress code

Get information on how to apply for this position.

May 14, 2015 08:07 AM

Yesod Web Framework

Deprecating system-filepath and system-fileio

I posted this information on Google+, but it's worth advertising this a bit wider. The tl;dr is: system-filepath and system-fileio are deprecated, please migrate to filepath and directory, respectively.

The backstory here is that system-filepath came into existence at a time when there were bugs in GHC's handling of character encodings in file paths. system-filepath fixed those bugs, and also provided some nice type safety to prevent accidentally treating a path as a String. However, the internal representation needed to make that work was pretty complicated, and resulted in some weird corner case bugs.

As of GHC 7.4, the original character encoding issues have been resolved. That left a few options: continue to maintain system-filepath for the additional type safety, or deprecate it. John Millikin, the author of the package, decided on the latter back in December. Since we were using it extensively at FP Complete via other libraries, we decided to take over maintenance. However, this week we decided that, in fact, John was right in the first place.

I've already migrated most of my libraries away from system-filepath (though doing so quickly was a mistake, sorry everyone). One nice benefit of all this is that there's no longer a need to convert between different FilePath representations all over the place. I still believe overall that type FilePath = String is a mistake and a distinct datatype would be better, but there's much to be said for consistency.

Some quick pointers for those looking to convert:

  • You can drop basically all usages of encodeString and decodeString
  • If you're using basic-prelude or classy-prelude, you should get some deprecation warnings around functions like fpToString
  • Most functions have a direct translation, e.g. createTree becomes createDirectoryIfMissing True (yes, the system-filepath and system-fileio names oftentimes feel nicer...); see the sketch below
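
To make that last translation concrete, here's a hedged before/after sketch (the directory name is made up): creating a directory tree with system-fileio's createTree versus directory's createDirectoryIfMissing.

-- Before, with system-filepath/system-fileio (roughly):
--   import Filesystem (createTree)
--   import Filesystem.Path.CurrentOS (decodeString)
--   main = createTree (decodeString "logs/archive")

-- After, with directory and filepath:
import System.Directory (createDirectoryIfMissing)
import System.FilePath ((</>))

main :: IO ()
main = createDirectoryIfMissing True ("logs" </> "archive")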

And for those looking for more type safety: all is not lost. Chris Done has been working on a new package aimed at providing additional type safety around absolute/relative paths and files/directories. It's not yet complete, but it is already seeing some interesting work and preventing bugs in some projects we've been working on (which will be announced soon).

May 14, 2015 04:00 AM

May 13, 2015

Daniil Frumin

Hoogle inside the sandbox

Introduction

This is my first post from the (hopefully fruitful!) series of blog posts as part of my Haskell SoC project. I will spend a great chunk of my summer hacking away on DarcsDen; in addition, I will document my hardships and successes here. You can follow my progress on my DarcsHub.

This particular post will be about my working environment.

The problem

Hoogle is an amazing tool that usually needs no introduction. Understandably, the online version at haskell.org indexes only so many packages. This means that if I want to use hoogle to search for functions and values in packages like darcs and darcsden, I will have to set up a local copy.

Cabal sandboxing is a relatively recent feature of the Cabal package manager, but I don't think it is reasonable these days to install from source (let alone develop) a Haskell package without using sandboxing.

The problem is that the aforementioned tools do not play well together out of the box, and some amount of magic is required. In this note I sketch the solution at which I eventually arrived after a couple of tries.

Using hoogle inside a Cabal sandbox

The presumed setup: a user is working on a package X using the cabal sandboxes. The source code is located in the directory X and the path to the cabal sandbox is X/.cabal-sandbox.

Step 1: Install hoogle inside the sandbox. This is simply a matter of running cabal install hoogle inside X. If you want to have a standard database alongside the database for your packages in development, now is the time to do .cabal-sandbox/bin/hoogle data.

Step 2: Generate haddocks for the packages Y and Z that you want to use with hoogle. In my case, I wanted to generate haddocks for darcs and darcsden. This is just a matter of running cabal haddock --hoogle in the correct directory.

Step 3: Convert haddocks to .hoo files. Run the following command in X/:

.cabal-sandbox/bin/hoogle convert /path/to/packageY/dist/doc/html/*/*.txt

You should see something like

Converting /path/to/packageY/dist/doc/html/Y/Y.txt
Converting Y... done

after which the file Y.hoo appears in /path/to/packageY/dist/doc/html/Y/

Step 4: Moving and combining databases. The hoogle database should be stored in .cabal-sandbox/share/*/hoogle-*/databases. Create such a directory, if it’s not present already. Then copy the ‘default’ database to that folder:

cp .cabal-sandbox/hoogle/databases/default.hoo .cabal-sandbox/share/*/hoogle-*/databases

Finally, you can combine your Y.hoo with the default database.

.cabal-sandbox/bin/hoogle combine /path/to/packageY/dist/doc/html/*/*.hoo .cabal-sandbox/share/*/hoogle-*/databases/default.hoo
mv default.hoo .cabal-sandbox/share/*/hoogle-*/databases/default.hoo

And you are done! You can test your installation:

$ .cabal-sandbox/bin/hoogle rOwner
DarcsDen.State.Repo rOwner :: Simple Lens (Repository bp) String

For additional usability, consider adding .cabal-sandbox/bin to your $PATH.


Tagged: cabal, darcs, haskell, hoogle

by Dan at May 13, 2015 09:54 PM

Mark Jason Dominus

Want to work with me on one of these projects?

I did a residency at the Recurse Center last month. I made a profile page on their web site, which asked me to list some projects I was interested in working on while there. Nobody took me up on any of the projects, but I'm still interested. So if you think any of these projects sounds interesting, drop me a note and maybe we can get something together.

They are listed roughly in order of their nearness to completion, with the most developed ideas first and the vaporware at the bottom. I am generally language-agnostic, except I refuse to work in C++.

Or if you don't want to work with me, feel free to swipe any of these ideas yourself. Share and enjoy.

Linogram

Linogram is a constraint-based diagram-drawing language that I think will be better than prior languages (like pic, Metapost, or, god forbid, raw postscript or SVG) and very different from WYSIWYG drawing programs like Inkscape or Omnigraffle. I described it in detail in chapter 9 of Higher-Order Perl and it's missing only one or two important features that I can't quite figure out how to do. It also needs an SVG output module, which I think should be pretty simple.

Most of the code for this already exists, in Perl.

I have discussed Linogram previously in this blog.

Orthogonal polygons

Each angle of an orthogonal polygon is either 90° or 270°. All 4-sided orthogonal polygons are rectangles. All 6-sided orthogonal polygons are similar-looking letter Ls. There are essentially only four different kinds of 8-sided orthogonal polygons. There are 8 kinds of 10-sided orthogonal polygons, and 29 kinds of 12-sided orthogonal polygons. I want to efficiently count the number of orthogonal polygons with N sides, and have the computer draw exemplars of each type.

I have a nice method for systematically generating descriptions of all simple orthogonal polygons, and although it doesn't scale to polygons with many sides I think I have an idea to fix that, making use of group-theoretic (mathematical) techniques. (These would not be hard for anyone to learn quickly; my ten-year-old daughter picked them right up. Teaching the computer would be somewhat trickier.) For making the pictures, I only have half the ideas I need, and I haven't done the programming yet.

The little code I have is written in Perl, but it would be no trouble to switch to a different language.

Simple Android app

I want to learn to build Android apps for my Android phone. I think a good first project would be a utility where you put in a sequence of letters, say FBS, and it displays all the words that contain those letters in order. (For FBS the list contains "afterburners", "chlorofluorocarbons", "fables", "fabricates", …, "surfboards".) I play this game often with my kid (the letters are supplied by license plates we pass) and we want a way to cheat when we are stumped.

My biggest problem with Android development in the past has been getting the immense Android SDK set up.

The project would need to be done in Java, because that is what Android uses.

gi

Git is great, but its user interface is awful. The command set is obscure and non-orthogonal. Error messages are confusing. gi is a thinnish layer that tries to present a more intuitive and uniform command set, with better error messages and clearer advice, without removing any of git's power.

There's no code written yet, and we could do it in any language. Perl or Python would be good choices. The programming is probably easy; the hard part of this project is (a) design and (b) user testing.

I have a bunch of design notes written up about this already.

Twingler

Twingler takes an example of an input data structure and an output data structure, and writes code in your favorite language for transforming the input into the output. Or maybe it takes some sort of simplified description of what is wanted and writes the code from that. The description would be declarative, not procedural. I'm really not at all sure what it should do or how it should work, but I have a lot of notes, and if we could make it happen a lot of people would love it.

No code is written; we could do this in your favorite language. Haskell maybe?

Bonus: Whatever your favorite language is, I bet it needs something like this.

Crapspad

I want a simple library that can render simple pixel graphics and detect and respond to mouse events. I want people to be able to learn to use it in ten minutes. It should be as easy as programming graphics on an Apple II and easier than a Commodore 64. It should not be a gigantic object-oriented windowing system with widgets and all that stuff. It should be possible to whip up a simple doodling program in Crapspad in 15 minutes.

I hope to get Perl bindings for this, because I want to use it from Perl programs, but we could design it to have a language-independent interface without too much trouble.

Git GUI

There are about 17 GUIs for Git and they all suck in exactly the same way: they essentially provide a menu for running all the same Git commands that you would run at the command line, obscuring what is going on without actually making Git any easier to use. Let's fix this.

For example, why can't you click on a branch and drag it elsewhere to rebase it, or shift-drag it to create a new branch and rebase that? Why can't you drag diff hunks from one commit to another?

I'm not saying this stuff would be easy, but it should be possible. Although I'm not convinced I really want to put in the amount of effort that would be required. Maybe we could just submit new features to someone else's already-written Git GUI? Or if they don't like our features, fork their project?

I have no code yet, and I don't even know what would be good to use.

by Mark Dominus ([email protected]) at May 13, 2015 05:53 PM

Functional Jobs

Senior Systems Engineer (Scala) at AdAgility (Full-time)

Who we are

AdAgility™ cross-promotes thousands of offers every day, driving incremental revenue and margin to clients while delighting our customers. The AdAgility Platform supports both first and third-party offer delivery, to help monetization experts, ecommerce managers, and partnership teams power cross-sell offers with minimal technical effort. A single line of code allows for the secure and efficient delivery of relevant offers, with 24/7 access to full-funnel analytics and powerful real-time offer administration.

As a key member of our engineering team, you will be at the forefront of building out the decisioning and analytics capabilities that deliver fantastic results for our clients.

Location

Our main street office is a 5 minute walk from the Waltham commuter rail station. There is plenty of (free) parking onsite and it is walking distance to a bevy of restaurants.

This position requires presence in the office on a weekly basis.

The Stack

Our goal is to maximize the amount of time we spend building and releasing great software. The technology choices we have made so far reflect this view.

  • Our data processing and decisioning systems are written in Scala
  • The dashboard is a Ruby on Rails application running against a Postgres cluster
  • All of our deployments are automated with Ansible
  • We run on AWS services such as EC2, Kinesis, Elasticache, S3, ELB and Route 53
  • Libraries we are using include Scalaz, Spray, Akka and Kamon
  • We leverage lightweight SaaS monitoring tools to know exactly what’s going on 24/7: StackDriver, Cronitor, Logentries, Rollbar, …

What you really need

  • Proficiency with two JVM languages (Scala, Clojure, Java, …)
  • Ability to work independently but wisely - if you are stuck on something, don’t wait a week to bounce ideas off a colleague
  • Experience in functional programming (Haskell, Erlang, Lisp, ML, Scala, etc) - open source or significant online coursework counts
  • Knowledge of the various forms of automated testing - this is not the wild west
  • Battle scars from building systems (this is not measured in years of experience)
  • A burning desire to build software that produces real value for your team

The cherry on top (nice to haves)

  • Experience with Ansible or similar deployment automation tools
  • You’ve dabbled with Angular, Node.js or Rails
  • A knack for front-end design
  • A good understanding of micro-services and when it makes sense to use them

What we value

  • Honesty
  • Dependability
  • A positive attitude
  • Having fun
  • Checking your ego at the door

Company-wide benefits

At AdAgility, we strive to create a fun, low-stress environment that is conducive to success. Some of our top benefits include:

  • 100% company paid health + dental insurance
  • Customize your work station. Standing desk - no problem!
  • Flexible hours
  • A weekly, company-wide work-from-home day
  • Regular team outings (recent ones include F1 racing, a Bruins game and laser tag)
  • Feeling tired in the afternoon? Take a cat nap on our sectional.

Get information on how to apply for this position.

May 13, 2015 04:14 PM


Bryan O'Sullivan

Sometimes, the old ways are the best

Over the past few months, the Sigma engineering team at Facebook has rolled out a major Haskell project: a rewrite of Sigma, an important weapon in our armory for fighting spam and malware.

Sigma has a mission-critical job, and it needs to scale: its growing workload currently sees it handling tens of millions of requests per minute.

The rewrite of Sigma in Haskell, using the Haxl library that Simon Marlow developed, has been a success. Throughput is higher than under its predecessor, and CPU usage is lower. Sweet!

Nevertheless, success brings with it surprises, and even though I haven’t worked on Sigma or Haxl, I’ve been implicated in one such surprise. To understand my accidental bit part in the show, let's begin by mentioning that Sigma uses JSON internally for various purposes. These days, the Haskell-powered Sigma uses aeson, the JSON library I wrote, to handle JSON data.

A few months ago, the Haxl rewrite of Sigma was going through an episode of crazytown, in which it would intermittently and unpredictably use huge amounts of CPU and memory. The culprit turned out to be JSON strings containing zillions of backslashes. (I have no idea why. If you’ve worked with large volumes of data for a long time, you won’t even bat an eyelash at the idea that a data store somewhere contains some really weird records.)

The team quickly mitigated the problem, and gave me a nudge that I might want to look into the problem. On Sunday evening, with a glass of red wine in hand, I finally dove in to see what was wrong.

Since the Sigma developers had figured out what was causing these time and space explosions, I immediately had a test case to work with, and the results were grim: decoding a mere megabyte of continuous backslashes took over a second, consumed over a gigabyte of memory, and killed concurrency by causing the runtime system to spend almost 90% of its time in the garbage collector. Yikes!

Whatever was going on? If you look at the old implementation of aeson’s unescape function, it seems quite efficient and innocuous. It’s reasonably tightly optimized low-level Haskell.

Trouble is, unescape uses an API (a bytestring builder) that is intended for streaming a result incrementally. Unfortunately the unescape function can’t hand any data back to its caller until it has processed an entire string.

The result is as you’d expect: we build a huge chain of thunks. In this case, the thunks will eventually write data efficiently into buffers. Alas, the thunks have nobody demanding the evaluation of their contents. This chain consumes a lot (a lot!) of memory and incurs a huge amount of GC overhead (long chains of thunks are expensive). Sadness ensues.

The “old ways” in the title refer to the fix: in place of a fancy streaming API, I simply allocate a single big buffer and blast the bytes straight into it.
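
To illustrate the flavor of that fix, here is a toy sketch of my own (emphatically not aeson's actual code, and assuming bytestring's Data.ByteString.Internal.unsafeCreateUptoN): it collapses each two-byte escape pair down to a single byte by writing straight into one preallocated buffer, rather than threading the output through a builder.

{-# LANGUAGE BangPatterns #-}
import qualified Data.ByteString as B
import qualified Data.ByteString.Internal as BI
import qualified Data.ByteString.Unsafe as BU
import Data.Word (Word8)
import Foreign.Ptr (Ptr)
import Foreign.Storable (pokeByteOff)

-- Collapse each backslash-escape pair "\x" to the single byte x.
-- The output can never be longer than the input, so one buffer of the
-- input's length suffices; unsafeCreateUptoN trims the result to the
-- number of bytes we report having written.
unescapeSketch :: B.ByteString -> B.ByteString
unescapeSketch bs = BI.unsafeCreateUptoN len (go 0 0)
  where
    len = B.length bs
    backslash = 92 :: Word8
    go :: Int -> Int -> Ptr Word8 -> IO Int
    go !i !o p
      | i >= len  = return o  -- done: report the final output length
      | BU.unsafeIndex bs i == backslash && i + 1 < len = do
          pokeByteOff p o (BU.unsafeIndex bs (i + 1))  -- keep escaped byte
          go (i + 2) (o + 1) p
      | otherwise = do
          pokeByteOff p o (BU.unsafeIndex bs i)        -- copy byte verbatim
          go (i + 1) (o + 1) p

Since every byte is written exactly once into memory that already exists, there is no chain of thunks to build up and nothing extra for the garbage collector to traverse.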

For that pathological string with almost a megabyte of consecutive backslashes, the new implementation is 27x faster and uses 42x less memory, all for the cost of perhaps an hour of Sunday evening hacking (including a little enabling work that incidentally illustrates just how easy it is to work with monad transformers). Not bad!

by Bryan O'Sullivan at May 13, 2015 04:13 PM

Brent Yorgey

Pan-Galactic Division in Haskell

Summary: given an injective function A \times N \hookrightarrow B \times N, it is possible to constructively “divide by N” to obtain an injection A \hookrightarrow B, as shown recently by Peter Doyle and Cecil Qiu and expounded by Richard Schwartz. Their algorithm is nontrivial to come up with—this had been a longstanding open question—but it’s not too difficult to explain. I exhibit some Haskell code implementing the algorithm, and show some examples.

Introduction: division by two

Suppose someone hands you the following:

  • A Haskell function f :: (A, Bool) -> (B, Bool), where A and B are abstract types (i.e. their constructors are not exported, and you have no other functions whose types mention A or B).

  • A promise that the function f is injective, that is, no two values of (A, Bool) map to the same (B, Bool) value. (Thus (B, Bool) must contain at least as many inhabitants as (A, Bool).)

  • A list as :: [A], with a promise that it contains every value of type A exactly once, at a finite position.

Can you explicitly produce an injective function f' :: A -> B? Moreover, your answer should not depend on the order of elements in as.

It really seems like this ought to be possible. After all, if (B, Bool) has at least as many inhabitants as (A, Bool), then surely B must have at least as many inhabitants as A. But it is not enough to reason merely that some injection must exist; we have to actually construct one. This, it turns out, is tricky. As a first attempt, we might try f' a = fst (f (a, True)). That is certainly a function of type A -> B, but there is no guarantee that it is injective. There could be a1, a2 :: A which both map to the same b, that is, one maps to (b, False) and the other to (b, True). The picture below illustrates such a situation: (a1, True) and (a2, True) both map to b2. So the function f may be injective overall, but we can’t say much about f restricted to a particular Bool value.

[Figure: the injection f, in which (a1, True) and (a2, True) both map into column b2]

The requirement that the answer not depend on the order of as also makes things difficult. (Over in math-land, depending on a particular ordering of the elements in as would amount to the well-ordering principle, which is equivalent to the axiom of choice, which in turn implies the law of excluded middle—and as we all know, every time someone uses the law of excluded middle, a puppy dies. …I feel like I’m in one of those DirecTV commercials. “Don’t let a puppy die. Ignore the order of elements in as.”) Anyway, making use of the order of values in as, we could do something like the following:

  • For each a :: A:
    • Look at the B values generated by f (a,True) and f (a,False). (Note that there might only be one distinct such B value).
    • If neither B value has been used so far, pick the one that corresponds to (a,True), and add the other one to a queue of available B values.
    • If one is used and one unused, pick the unused one.
    • If both are used, pick the next available B value from the queue.

It is not too hard (okay, I couldn’t be bothered) to show that this will always successfully result in a total function A -> B, which is injective by construction. (One has to show that there will always be an available B value in the queue when you need it.) The only problem is that the particular function we get depends on the order in which we iterate through the A values. The above example illustrates this as well: if the A values are listed in the order [a_1, a_2], then we first choose a_1 \mapsto b_2, and then a_2 \mapsto b_3. If they are listed in the other order, we end up with a_2 \mapsto b_2 and a_1 \mapsto b_1. Whichever value comes first “steals” b_2, and then the other one takes whatever is left. We’d like to avoid this sort of dependence on order. That is, we want a well-defined algorithm which will yield a total, injective function A -> B, which is canonical in the sense that the algorithm yields the same function given any permutation of as.

It is possible—you might enjoy puzzling over this a bit before reading on!

Division by N

The above example is a somewhat special case. More generally, let N = \{0, \dots, n-1\} denote a canonical finite set of size n, and let A and B be arbitrary sets. Then, given an injection f : A \times N \hookrightarrow B \times N, is it possible to effectively (that is, without excluded middle or the axiom of choice) compute an injection A \hookrightarrow B?

Translating down to the world of numbers representing set cardinalities—natural numbers if A and B are finite, or cardinal numbers in general—this just says that if an \leq bn then a \leq b. This statement about numbers is obviously true, so it would be nice if we could say something similar about sets, so that this fact about numbers and inequalities can be seen as just a “shadow” of a more general theorem about sets and injections.

As hinted in the introduction, the interesting part of this problem is really the word “effectively”. Using the Axiom of Choice/Law of Excluded Middle makes the problem a lot easier, but either fails to yield an actual function that we can compute with, instead merely guaranteeing the existence of such a function, or gives us a function that depends on a particular ordering of A.

Apparently this has been a longstanding open question, recently answered in the affirmative by Peter Doyle and Cecil Qiu in their paper Division By Four. It’s a really great paper: they give some fascinating historical context for the problem, and explain their algorithm (which is conceptually not all that difficult) using an intuitive analogy to a card game with certain rules. (It is not a “game” in the usual sense of having winners and losers, but really just an algorithm implemented with “players” and “cards”. In fact, you could get some friends together and actually perform this algorithm in parallel (if you have sufficiently nerdy friends).) Richard Schwartz’s companion article is also great fun and easy to follow (you should read it first).

A Game of Thrones Cards

Here’s a quick introduction to the way Doyle, Qiu, and Schwartz use a card game to formulate their algorithm. (Porting this framework to use “thrones” and “claimants” instead of “spots” and “cards” is left as an exercise to the reader.)

The finite set N is to be thought of as a set of suits. The set A will correspond to a set of players, and B to a set of ranks or values (for example, Ace, 2, 3, …) In that case B \times N corresponds to a deck of cards, each card having a rank and a suit; and we can think of A \times N in terms of each player having in front of them a number of “spots” or “slots”, each labelled by a suit. An injection A \times N \hookrightarrow B \times N is then a particular “deal” where one card has been dealt into each of the spots in front of the players. (There may be some cards left over in the deck, but the fact that the function is total means every spot has a card, and the fact that it is injective is encoded in the common-sense idea that a given card cannot be in two spots at once.) For example, the example function from before:

[Figure: the example injection f from before]

corresponds to the following deal:

[Figure: the corresponding deal; each column is one player's hand, rows are suit spots, spades highlighted in green]

Here each column corresponds to one player’s hand, and the rows correspond to suit spots (with the spade spots on top and the heart spots beneath). We have mapped \{b_1, b_2, b_3\} to the ranks A, 2, 3, and mapped T and F to Spades and Hearts respectively. The spades are also highlighted in green, since later we will want to pay particular attention to what is happening with them. You might want to take a moment to convince yourself that the deal above really does correspond to the example function from before.

A Haskell implementation

Of course, doing everything effectively means we are really talking about computation. Doyle and Qiu do talk a bit about computation, but it’s still pretty abstract, in the sort of way that mathematicians talk about computation, so I thought it would be interesting to actually implement the algorithm in Haskell.

The algorithm “works” for infinite sets, but only (as far as I understand) if you consider some notion of transfinite recursion. It still counts as “effective” in math-land, but over here in programming-land I’d like to stick to (finitely) terminating computations, so we will stick to finite sets A and B.

First, some extensions and imports. Nothing too controversial.

> {-# LANGUAGE DataKinds                  #-}
> {-# LANGUAGE GADTs                      #-}
> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> {-# LANGUAGE KindSignatures             #-}
> {-# LANGUAGE RankNTypes                 #-}
> {-# LANGUAGE ScopedTypeVariables        #-}
> {-# LANGUAGE StandaloneDeriving         #-}
> {-# LANGUAGE TypeOperators              #-}
> 
> module PanGalacticDivision where
> 
> import           Control.Arrow (second, (&&&), (***))
> import           Data.Char
> import           Data.List     (find, findIndex, transpose)
> import           Data.Maybe
> 
> import           Diagrams.Prelude hiding (universe, value)
> import           Diagrams.Backend.Rasterific.CmdLine
> import           Graphics.SVGFonts

We’ll need some standard machinery for type-level natural numbers. Probably all this stuff is in a library somewhere but I couldn’t be bothered to find out. Pointers welcome.

> -- Standard unary natural number type
> data Nat :: * where
>   Z :: Nat
>   Suc :: Nat -> Nat
> 
> type One = Suc Z
> type Two = Suc One
> type Three = Suc Two
> type Four = Suc Three
> type Six = Suc (Suc Four)
> type Eight = Suc (Suc Six)
> type Ten = Suc (Suc Eight)
> type Thirteen = Suc (Suc (Suc Ten))
> 
> -- Singleton Nat-indexed natural numbers, to connect value-level and
> -- type-level Nats
> data SNat :: Nat -> * where
>   SZ :: SNat Z
>   SS :: Natural n => SNat n -> SNat (Suc n)
> 
> -- A class for converting type-level nats to value-level ones
> class Natural n where
>   toSNat :: SNat n
> 
> instance Natural Z where
>   toSNat = SZ
> 
> instance Natural n => Natural (Suc n) where
>   toSNat = SS toSNat
> 
> -- A function for turning explicit nat evidence into implicit
> natty :: SNat n -> (Natural n => r) -> r
> natty SZ r     = r
> natty (SS n) r = natty n r
> 
> -- The usual canonical finite type.  Fin n has exactly n
> -- (non-bottom) values.
> data Fin :: Nat -> * where
>   FZ :: Fin (Suc n)
>   FS :: Fin n -> Fin (Suc n)
> 
> finToInt :: Fin n -> Int
> finToInt FZ     = 0
> finToInt (FS n) = 1 + finToInt n
> 
> deriving instance Eq (Fin n)

Finiteness

Next, a type class to represent finiteness. For our purposes, a type a is finite if we can explicitly list its elements. For convenience we throw in decidable equality as well, since we will usually need that in conjunction. Of course, we have to be careful: although we can get a list of elements for a finite type, we don’t want to depend on the ordering. We must ensure that the output of the algorithm is independent of the order of elements.1 This is in fact true, although somewhat nontrivial to prove formally; I mention some of the intuitive ideas behind the proof below.

While we are at it, we give Finite instances for Fin n and for products of finite types.

> class Eq a => Finite a where
>   universe :: [a]
> 
> instance Natural n => Finite (Fin n) where
>   universe = fins toSNat
> 
> fins :: SNat n -> [Fin n]
> fins SZ     = []
> fins (SS n) = FZ : map FS (fins n)
> 
> -- The product of two finite types is finite.
> instance (Finite a, Finite b) => Finite (a,b) where
>   universe = [(a,b) | a <- universe, b <- universe]

Division, inductively

Now we come to the division algorithm proper. The idea is that panGalacticPred turns an injection A \times N \hookrightarrow B \times N into an injection A \times (N-1) \hookrightarrow B \times (N-1), and then we use induction on N to repeatedly apply panGalacticPred until we get an injection A \times 1 \hookrightarrow B \times 1.

> panGalacticDivision
>   :: forall a b n. (Finite a, Eq b)
>   => SNat n -> ((a, Fin (Suc n)) -> (b, Fin (Suc n))) -> (a -> b)

In the base case, we are given an injection A \times 1 \hookrightarrow B \times 1, so we just pass a unit value in along with the A and project out the B.

> panGalacticDivision SZ f = \a -> fst (f (a, FZ))

In the inductive case, we call panGalacticPred and recurse.

> panGalacticDivision (SS n') f = panGalacticDivision n' (panGalacticPred n' f)

Pan-Galactic Predecessor

And now for the real meat of the algorithm, the panGalacticPred function. The idea is that we swap outputs around until the function has the property that every output of the form (b,0) corresponds to an input also of the form (a,0). That is, using the card game analogy, every spade in play should be in the leftmost spot (the spades spot) of some player’s hand (some spades can also be in the deck). Then simply dropping the leftmost card in everyone’s hand (and all the spades in the deck) yields a game with no spades. That is, we will have an injection A \times \{1, \dots, n-1\} \hookrightarrow B \times \{1, \dots, n-1\}. Taking predecessors everywhere (i.e. “hearts are the new spades”) yields the desired injection A \times (N-1) \hookrightarrow B \times (N-1).

We need a Finite constraint on a so that we can enumerate all possible inputs to the function, and an Eq constraint on b so that we can compare functions for extensional equality (we iterate until reaching a fixed point). Note that whether two functions are extensionally equal does not depend on the order in which we enumerate their inputs, validating (so far) my claim that nothing depends on the order of elements returned by universe.

> panGalacticPred
>   :: (Finite a, Eq b, Natural n)
>   => SNat n
>   -> ((a, Fin (Suc (Suc n))) -> (b, Fin (Suc (Suc n))))
>   -> ((a, Fin (Suc n)) -> (b, Fin (Suc n)))

We construct a function f' which is related to f by a series of swaps, and has the property that it only outputs FZ when given FZ as an input. So given (a,i) we can call f' on (a, FS i) which is guaranteed to give us something of the form (b, FS j). Thus it is safe to strip off the FS and return (b, j) (though the Haskell type checker most certainly does not know this, so we just have to tell it to trust us).

> panGalacticPred n f = \(a,i) -> second unFS (f' (a, FS i))
>   where
>     unFS :: Fin (Suc n) -> Fin n
>     unFS FZ = error "impossible!"
>     unFS (FS i) = i

To construct f' we iterate a certain transformation until reaching a fixed point. For finite sets A and B this is guaranteed to terminate, though it is certainly not obvious from the Haskell code. (Encoding this in Agda so that it is accepted by the termination checker would be a fun (?) exercise.)

One round of the algorithm consists of two phases called “shape up” and “ship out” (to be described shortly).

>     oneRound = natty n $ shipOut . shapeUp
> 
>     -- iterate 'oneRound' beginning with the original function...
>     fs = iterate oneRound f
>     -- ... and stop when we reach a fixed point.
>     f' = fst . head . dropWhile (uncurry (=/=)) $ zip fs (tail fs)
>     f1 =/= f2 = any (\x -> f1 x /= f2 x) universe

Encoding Card Games

Recall that a “card” is a pair of a value and a suit; we think of B as the set of values and N as the set of suits.

> type Card v s = (v, s)
> 
> value :: Card v s -> v
> value = fst
> 
> suit :: Card v s -> s
> suit = snd

Again, there are a number of players (one for each element of A), each of which has a “hand” of cards. A hand has a number of “spots” for cards, each one labelled by a different suit (which may not have any relation to the actual suit of the card in that position).

> type PlayerSpot p s = (p, s)
> type Hand v s = s -> Card v s

A “game” is an injective function from player spots to cards. Of course, the type system is not enforcing injectivity here.

> type Game p v s = PlayerSpot p s -> Card v s

Some utility functions. First, a function to project out the hand of a given player.

> hand :: p -> Game p v s -> Hand v s
> hand p g = \s -> g (p, s)

A function to swap two cards, yielding a bijection on cards.

> swap :: (Eq s, Eq v) => Card v s -> Card v s -> (Card v s -> Card v s)
> swap c1 c2 = f
>   where
>     f c
>       | c == c1   = c2
>       | c == c2   = c1
>       | otherwise = c

leftmost finds the leftmost card in a player’s hand which has a given suit.

> leftmost :: Finite s => s -> Hand v s -> Maybe s
> leftmost targetSuit h = find (\s -> suit (h s) == targetSuit) universe

Playing Rounds

playRound abstracts out a pattern that is used by both shapeUp and shipOut. The first argument is a function which, given a hand, produces a function on cards; that is, based on looking at a single hand, it decides how to swap some cards around.2 playRound then applies that function to every hand, and composes together all the resulting permutations.

Note that playRound has both Finite s and Finite p constraints, so we should think about whether the result depends on the order of elements returned by any call to universe—I claimed it does not. Finite s corresponds to suits/spots, which corresponds to N in the original problem formulation. N explicitly has a canonical ordering, so this is not a problem. The Finite p constraint, on the face of it, is more problematic. We will have to think carefully about each of the rounds implemented in terms of playRound and make sure they do not depend on the order of players. Put another way, it should be possible for all the players to take their turn simultaneously.

> playRound :: (Finite s, Finite p, Eq v) => (Hand v s -> Card v s -> Card v s) -> Game p v s -> Game p v s
> playRound withHand g = foldr (.) id swaps . g
>   where
>     swaps = map (withHand . flip hand g) players
>     players = universe

Shape Up and Ship Out

Finally, we can describe the “shape up” and “ship out” phases, beginning with “shape up”. A “bad” card is defined as one having the lowest suit; make sure every hand with any bad cards has one in the leftmost spot (by swapping the leftmost bad card with the card in the leftmost spot, if necessary).

> shapeUp :: (Finite s, Finite p, Eq v) => Game p v s -> Game p v s
> shapeUp = playRound shapeUp1
>   where
>     badSuit = head universe
>     shapeUp1 theHand =
>       case leftmost badSuit theHand of
>         Nothing      -> id
>         Just badSpot -> swap (theHand badSuit) (theHand badSpot)

And now for the “ship out” phase. Send any “bad” cards not in the leftmost spot somewhere else, by swapping with a replacement, namely, the card whose suit is the same as the suit of the spot, and whose value is the same as the value of the bad card in the leftmost spot. The point is that bad cards in the leftmost spot are OK, since we will eventually just ignore the leftmost spot. So we have to keep shipping out bad cards not in the leftmost spot until they all end up in the leftmost spot. For some intuition as to why this is guaranteed to terminate, consult Schwartz; note that columns tend to acquire more and more cards that have the same rank as a spade in the top spot (which never moves).

> shipOut :: (Finite s, Finite p, Eq v) => Game p v s -> Game p v s
> shipOut = playRound shipOutHand
>   where
>     badSuit = head universe
>     spots = universe
>     shipOutHand theHand = foldr (.) id swaps
>       where
>         swaps = map (shipOut1 . (theHand &&& id)) (drop 1 spots)
>         shipOut1 ((_,s), spot)
>           | s == badSuit = swap (theHand spot) (value (theHand badSuit), spot)
>           | otherwise    = id

And that’s it! Note that both shapeUp and shipOut are implemented by composing a bunch of swaps; in fact, in both cases, all the swaps commute, so the order in which they are composed does not matter. (For proof, see Schwartz.) Thus, the result is independent of the order of the players (i.e. the set A).

Enough code, let’s see an example! This example is taken directly from Doyle and Qiu’s paper, and the diagrams are being generated literally (literately?) by running the code in this blog post. Here’s the starting configuration:

[Figure: the starting deal from Doyle and Qiu's example, spades highlighted in green]

Again, the spades are all highlighted in green. Recall that our goal is to get them all to be in the first row, but we have to do it in a completely deterministic, canonical way. After shaping up, we have:

[Figure: the deal after shaping up]

Notice how the 6, K, 5, A, and 8 of spades have all been swapped to the top of their column. However, there are still spades which are not at the top of their column (in particular the 10, 9, and J) so we are not done yet.

Now, we ship out. For example, the 10 of spades is in the diamonds position in the column with the Ace of spades, so we swap it with the Ace of diamonds. Similarly, we swap the 9 of spades with the Queen of diamonds, and the Jack of spades with the 4 of hearts.

[Figure: the deal after shipping out]

Shaping up does nothing at this point so we ship out again, and then continue to alternate rounds.

[Figure: the final deal, after alternating rounds of shaping up and shipping out]

In the final deal above, all the spades are at the top of a column, so there is an injection from the set of all non-spade spots to the deck of cards with all spades removed. This example was, I suspect, carefully constructed so that none of the spades get swapped out into the undealt portion of the deck, and so that we end up with only spades in the top row. In general, we might end up with some non-spades also in the top row, but that’s not a problem. The point is that ignoring the top row gets rid of all the spades.

Anyway, I hope to write more about some “practical” examples and about what this has to do with combinatorial species, but this post is long enough already. Doyle and Qiu also describe a “short division” algorithm (the above is “long division”) that I hope to explore as well.

The rest of the code

For completeness, here’s the code I used to represent the example game above, and to render all the card diagrams (using diagrams 1.3).

> type Suit = Fin
> type Rank = Fin
> type Player = Fin
> 
> readRank :: SNat n -> Char -> Rank n
> readRank n c = fins n !! (fromJust $ findIndex (==c) "A23456789TJQK")
> 
> readSuit :: SNat n -> Char -> Suit n
> readSuit (SS _) 'S'                = FZ
> readSuit (SS (SS _)) 'H'           = FS FZ
> readSuit (SS (SS (SS _))) 'D'      = FS (FS FZ)
> readSuit (SS (SS (SS (SS _)))) 'C' = FS (FS (FS FZ))
> 
> readGame :: SNat a -> SNat b -> SNat n -> String -> Game (Player a) (Rank b) (Suit n)
> readGame a b n str = \(p, s) -> table !! finToInt p !! finToInt s
>   where
>     table = transpose . map (map readCard . words) . lines $ str
>     readCard [r,s] = (readRank b r, readSuit n s)
> 
> -- Example game from Doyle & Qiu
> exampleGameStr :: String
> exampleGameStr = unlines
>   [ "4D 6H QD 8D 9H QS 4C AD 6C 4S"
>   , "JH AH 9C 8H AS TC TD 5H QC JS"
>   , "KC 6S 4H 6D TS 9S JC KD 8S 8C"
>   , "5C 5D KS 5S TH JD AC QH 9D KH"
>   ]
> 
> exampleGame :: Game (Player Ten) (Rank Thirteen) (Suit Four)
> exampleGame = readGame toSNat toSNat toSNat exampleGameStr
> 
> suitSymbol :: Suit n -> String
> suitSymbol = (:[]) . ("♠♥♦♣"!!) . finToInt  -- Huzzah for Unicode
> 
> suitDia :: Suit n -> Diagram B
> suitDia = (suitDias!!) . finToInt
> 
> suitDias = map mkSuitDia (fins (toSNat :: SNat Four))
> mkSuitDia s = text' (suitSymbol s) # fc (suitColor s) # lw none
> 
> suitColor :: Suit n -> Colour Double
> suitColor n
>   | finToInt n `elem` [0,3] = black
>   | otherwise               = red
> 
> rankStr :: Rank n -> String
> rankStr n = rankStr' (finToInt n + 1)
>   where
>     rankStr' 1 = "A"
>     rankStr' i | i <= 10    = show i
>                | otherwise = ["JQK" !! (i - 11)]
> 
> text' t = stroke (textSVG' (TextOpts lin INSIDE_H KERN False 1 1) t)
> 
> renderCard :: (Rank b, Suit n) -> Diagram B
> renderCard (r, s) = mconcat
>   [ mirror label
>   , cardContent (finToInt r + 1)
>   , back
>   ]
>   where
>     cardWidth  = 2.25
>     cardHeight = 3.5
>     cardCorners = 0.1
>     mirror d = d <> d # rotateBy (1/2)
>     back  = roundedRect cardWidth cardHeight cardCorners # fc white
>           # lc (case s of { FZ -> green; _ -> black })
>     label = vsep 0.1 [text' (rankStr r), text' (suitSymbol s)]
>           # scale 0.6 # fc (suitColor s) # lw none
>           # translate ((-0.9) ^& 1.5)
>     cardContent n
>       | n <= 10   = pips n
>       | otherwise = face n # fc (suitColor s) # lw none
>                            # sized (mkWidth (cardWidth * 0.6))
>     pip = suitDia s # scale 1.1
>     pips 1 = pip # scale 2
>     pips 2 = mirror (pip # up 2)
>     pips 3 = pips 2 <> pip
>     pips 4 = mirror (pair pip # up 2)
>     pips 5 = pips 4 <> pip
>     pips 6 = mirror (pair pip # up 2) <> pair pip
>     pips 7 = pips 6 <> pip # up 1
>     pips 8 = pips 6 <> mirror (pip # up 1)
>     pips 9 = mirror (pair (pip # up (2/3) <> pip # up 2)) <> pip # up (case finToInt s of {1 -> -0.1; 3 -> 0; _ -> 0.1})
>     pips 10 = mirror (pair (pip # up (2/3) <> pip # up 2) <> pip # up (4/3))
>     pips _ = mempty
>     up n = translateY (0.5*n)
>     pair d = hsep 0.4 [d, d] # centerX
>     face 11 = squares # frame 0.1
>     face 12 = loopyStar
>     face 13 = burst # centerXY
>     squares
>       = strokeP (mirror (square 1 # translate (0.2 ^& 0.2)))
>       # fillRule EvenOdd
>     loopyStar
>       = regPoly 7 1
>       # star (StarSkip 3)
>       # pathVertices
>       # map (cubicSpline True)
>       # mconcat
>       # fillRule EvenOdd
>     burst
>       = [(1,5), (1,-5)] # map r2 # fromOffsets
>       # iterateN 13 (rotateBy (-1/13))
>       # mconcat # glueLine
>       # strokeLoop
> 
> renderGame :: (Natural n, Natural a) => Game (Player a) (Rank b) (Suit n) -> Diagram B
> renderGame g = hsep 0.5 $ map (\p -> renderHand p $ hand p g) universe
> 
> renderHand :: Natural n => Player a -> Hand (Rank b) (Suit n) -> Diagram B
> renderHand p h = vsep 0.2 $ map (renderCard . h) universe

  1. If we could program in Homotopy Type Theory, we could make this very formal by using the notion of cardinal-finiteness developed in my dissertation (see section 2.4).

  2. In practice this function on cards will always be a permutation, though the Haskell type system is not enforcing that at all. An early version of this code used the Iso type from lens, but it wasn’t really paying its way.


by Brent at May 13, 2015 01:07 AM

FP Complete

Distributing our packages without a sysadmin

At FP Complete, we're no strangers to running complex web services. But we know from experience that the simplest service to maintain is one someone else is managing for you. A few days ago I described how secure package distribution with stackage-update and stackage-install works, focusing on the client side tooling. Today's blog post is about how we use Amazon S3, Github, and Travis CI to host all of this with (almost) no servers of our own (that caveat is explained below).

Making executables available

We have two different Haskell tools involved in this hosting: hackage-mirror, to copy the raw packages from Hackage to S3, and all-cabal-hashes-tool, to populate the raw cabal files with hash/package size information. But we don't want to have to compile these executables every time we call them. Instead, we'd like to simply download and run a precompiled executable.

Like many other Github projects, these two utilize Travis CI to build and test the code every time a commit is pushed. But that's not all; using Travis's deployment capability, they also upload an executable to S3.

Figuring out the details of making this work is a bit tricky, so it's easiest to just look at the .travis.yml file. For the security conscious: the trick is that Travis allows us to encrypt data so that no one but Travis can decrypt it. Then, Travis can decrypt and upload it to S3 for us.

Result: a fully open, transparent process for executable building that can be reviewed by anyone in the community, without allowing private credentials to be leaked. Also, notice how none of our own servers needed to get involved.

Running the executables

We're going to leverage Travis yet again, and use it to run the executables it so politely generated for us. We'll use all-cabal-hashes as our demonstration, though all-cabal-packages works much the same way. We have an update.sh script which downloads and runs our executable, and then commits, signs, and pushes to Github. In order to sign and push, however, we need to have a GPG and SSH key, respectively.

Once again, Travis's encryption capabilities come into play. In the .travis.yml file, we decrypt a tar file containing the GPG and SSH key, put them in the correct location, and also configure Git. Then we call out to the update.sh script. One wrinkle here is that Travis only supports a single encrypted file per repo, so we have to tar the two keys together, a minor annoyance.

As before, we have processes running on completely open, auditable systems. Uploads are being made to providers we don't manage (either Amazon or Github). The only thing kept hidden are the secrets themselves (keys). And if the process ever fails, I get an immediate notification from Travis. So far, that's only happened when I was playing with the build or Hackage was unresponsive.

Running regularly

It wouldn't be very useful if these processes weren't run regularly. This is a perfect place for a cron job. Unfortunately, Travis doesn't yet support cron jobs, though they seem to be planning it for the future. In the meanwhile, we do have to run this on a server of our own. Fortunately, it's a simple job that just asks Travis to restart the last build it ran for each repository.

To simplify even further, I run the Travis command line client from inside a Docker container, so that the only host system dependency is Docker itself. The wrapper script is:

#!/bin/bash

set -e
set -x

docker run --rm -v /home/ubuntu/all-cabal-files-internal.sh:/run.sh:ro bilge/travis-cli /run.sh

The script that runs inside the Docker container is the following (token hidden to protect... well, me).

#!/bin/bash

set -ex

travis login --skip-version-check --org --github-token XXXXXXXXX

# Trigger the package mirroring first, since it's used by all-cabal-hashes
BUILD=$(travis branches --skip-version-check -r commercialhaskell/all-cabal-packages | grep "^hackage" | awk "{ print \$2 }")
BUILDNUM=${BUILD###}
echo BUILD=$BUILD
echo BUILDNUM=$BUILDNUM
travis restart --skip-version-check -r commercialhaskell/all-cabal-packages $BUILDNUM

BUILD=$(travis branches --skip-version-check -r commercialhaskell/all-cabal-files | grep "^hackage" | awk "{ print \$2 }")
BUILDNUM=${BUILD###}
echo BUILD=$BUILD
echo BUILDNUM=$BUILDNUM
travis restart --skip-version-check -r commercialhaskell/all-cabal-files $BUILDNUM

# Put in a bit of a delay to allow the all-cabal-packages job to finish. If
# not, no big deal, next job will pick up the change.
sleep 30

BUILD=$(travis branches --skip-version-check -r commercialhaskell/all-cabal-hashes | grep "^hackage" | awk "{ print \$2 }")
BUILDNUM=${BUILD###}
echo BUILD=$BUILD
echo BUILDNUM=$BUILDNUM
travis restart --skip-version-check -r commercialhaskell/all-cabal-hashes $BUILDNUM

Conclusion

Letting someone else deal with our file storage, file serving, executable building, and update process is a massive time saver. Now our sysadmins can stop dealing with these problems, and start solving more complicated ones. The fact that everyone can inspect, learn from, and understand what our services are doing is another advantage. I encourage others to try out these kinds of deployments whenever possible.

May 13, 2015 12:00 AM

May 11, 2015

The GHC Team

GHC Weekly News - 2015/05/11

Hi *,

It's been a few weeks since the last news bulletin - this is the result of mostly quietness on the part of the list and developers, and several days of sickness on the part of your editor. But now there are actually some things to write here!

The past few weeks, GHC HQ has been having some quiet meetings mostly about bugfixes for a 7.10.2 release - as well as noodling about compiler performance. Austin has begun compiling his preliminary notes on the wiki, under the CompilerPerformance page, where we'll be trying to keep track of the ongoing performance story. Hopefully, GHC 7.12.1 will boast somewhat better performance numbers.

There are a lot of users who are interested in this particular pain point, so please file tickets and CC yourself on bugs (like #10370), or feel free to help out!

7.10.2 status

There's been a bit of chatter on the lists about something on many people's minds: the release of GHC 7.10.2. Most prominently, Mark Lentczner popped in to ask when the next GHC release will happen - in particular, he'd like to make a Haskell Platform release in lockstep with it (see below for a link to Mark's email).

Until recently, the actual desire for 7.10.2 wasn't totally clear, and at this point, GHC HQ hasn't firmly committed to the 7.10.2 release date. But if milestone:7.10.2 is any indicator, we've already closed over three dozen bugs, several of them high priority - and they keep coming in. So it seems likely people will want these fixes in their hands relatively soon.

Just remember: if you need a fix for 7.10.2, or have a bug you need us to look at, please email the ghc-devs list, file a ticket, and get our attention! Just be sure to set the milestone to 7.10.2.

List chatter

  • Niklas Hambüchen announced that he's backported the recent lightweight stack-trace support in GHC HEAD to GHC 7.10 and GHC 7.8 - meaning that users of these stable releases can have informative call stack traces, even without profiling! FP Complete was interested in this feature, so they'd probably love to hear user input. https://mail.haskell.org/pipermail/ghc-devs/2015-April/008862.html
  • David Terei has written up a proposal on reconciling the existence of Roles with Safe Haskell, which caused us a lot of problems during the 7.8 release cycle. In particular, concerning the ability to break module abstractions and requiring programmers to safeguard abstractions through careful use of roles - and David's written a proposal to address that. https://mail.haskell.org/pipermail/ghc-devs/2015-April/008902.html
  • Mark Lentczner started a thread about the 7.10.2 release schedule - because this time, he wants to do a concurrent Haskell Platform release! The thread ended up with a good amount of discussion concerning if 7.10.2 is even needed - but at this rate, it looks like it will ship sometime soon. https://mail.haskell.org/pipermail/ghc-devs/2015-May/008904.html
  • Mateusz Kowalczyk posted to ghc-devs hoping to get some help with a tricky, long-standing issue: #4012, which concerns the determinism of GHC binaries. It turns out GHC isn't entirely deterministic when it calculates package IDs, meaning things get really bad when you mix prebuilt binary packages across systems. This in particular has become a real problem for the Nix package manager and users of Haskell applications. Mateusz asks if anyone would be willing to help look into it - and a lot of people would appreciate the help! https://mail.haskell.org/pipermail/ghc-devs/2015-May/008992.html

Noteworthy commits

Closed tickets

#10293, #10273, #10021, #10209, #10255, #10326, #9745, #10314, #8928, #8743, #10182, #10281, #10325, #10297, #10292, #10304, #10260, #9204, #10121, #10329, #9920, #10308, #10234, #10356, #10351, #10364, #9564, #10306, #10108, #9581, #10369, #9673, #10288, #10260, #10363, #10315, #10389, #9929, #10384, #10382, #10400, #10256, #10254, #10277, #10299, #10268, #10269, #10280, #10312, #10209, #10109, #10321, #10285, #9895, #10395, #10263, #10293, #10210, #10302, #10206, #9858, #10045, and #9840.

by thoughtpolice at May 11, 2015 02:49 PM

May 06, 2015

Gabriel Gonzalez

Haskell content spinner

Recently somebody posted a template for generating blog comment spam, so I thought: "What sillier way to show how elegant Haskell is than generating comment spam?!"

The first "stanza" of the template looks like this:

{I have|I've} been {surfing|browsing} online 
more than {three|3|2|4} hours today, yet
I never found any interesting article like yours.
{It's|It is}
pretty worth enough for me. {In my
opinion|Personally|In my view},
if all {webmasters|site owners|website owners|web
owners} and bloggers made good content as you
did, the {internet|net|web} will be {much more|a
lot more} useful than ever before.|
I {couldn't|could not} {resist|refrain from}
commenting.
{Very well|Perfectly|Well|Exceptionally well}
written!|
{I will|I'll} {right away|immediately} {take
hold of|grab|clutch|grasp|seize|snatch} your
{rss|rss feed} as I {can not|can't} {in
finding|find|to find} your {email|e-mail}
subscription {link|hyperlink} or
{newsletter|e-newsletter} service.
Do {you have|you've} any? {Please|Kindly}
{allow|permit|let} me
{realize|recognize|understand|recognise|know} {so
that|in order that} I {may
just|may|could} subscribe. Thanks.|
{It is|It's} {appropriate|perfect|the best}
time to
make some plans for the future and {it is|it's}
time to be happy.

Anything of the form {x|y|z} represents a choice between alternative text fragments x, y, and z. The above template has four large alternative comments to pick from, each with their own internal variations. The purpose of these alternatives is to evade simple spam detection algorithms, much like how some viruses evade the immune system by mutating antigens.

I wanted to write a Haskell program that selected a random template from one of the provided alternatives and came up with this:

{-# LANGUAGE OverloadedStrings #-}

import Control.Foldl (random) -- Requires `foldl-1.0.10` or higher
import Turtle

main = do
    x <- foldIO spam random
    print x

spam :: Shell Text
spam =  -- 1st major template
        ""
      * ("I have" + "I've")
      * " been "
      * ("surfing" + "browsing")
      * " online more than "
      * ("three" + "3" + "2" + "4")
      * " hours today, yet I never found any interesting article like yours. "
      * ("It's" + "It is")
      * " pretty worth enough for me. "
      * ("In my opinion" + "Personally" + "In my view")
      * ", if all "
      * ("webmasters" + "site owners" + "website owners" + "web owners")
      * " and bloggers made good content as you did, the "
      * ("internet" + "net" + "web")
      * " will be "
      * ("much more" + "a lot more")
      * " useful than ever before."

        -- 2nd major template
      + " I "
      * ("couldn't" + "could not")
      * " "
      * ("resist" + "refrain from")
      * " commenting. "
      * ("Very well" + "Perfectly" + "Well" + "Exceptionally well")
      * " written!"

        -- 3rd major template
      + " "
      * ("I will" + "I'll")
      * " "
      * ("right away" + "immediately")
      * " "
      * ("take hold of" + "grab" + "clutch" + "grasp" + "seize" + "snatch")
      * " your "
      * ("rss" + "rss feed")
      * " as I "
      * ("can not" + "can't")
      * " "
      * ("in finding" + "find" + "to find")
      * " your "
      * ("email" + "e-mail")
      * " subscription "
      * ("link" + "hyperlink")
      * " or "
      * ("newsletter" + "e-newsletter")
      * " service. Do "
      * ("you have" + "you've")
      * " any? "
      * ("Please" + "Kindly")
      * " "
      * ("allow" + "permit" + "let")
      * " me "
      * ("realize" + "recognize" + "understand" + "recognise" + "know")
      * " "
      * ("so that" + "in order that")
      * " I "
      * ("may just" + "may" + "could")
      * " subscribe. Thanks."

        -- 4th major template
      + " "
      * ("It is" + "It's")
      * " "
      * ("appropriate" + "perfect" + "the best")
      * " time to make some plans for the future and "
      * ("it is" + "it's")
      * " time to be happy."

Conceptually, all I did to embed the template in Haskell was to:

  • add a quote to the beginning of the template: "
  • replace all occurences of { with "*(" (including quotes)
  • replace all occurences of } with ")*" (including quotes)
  • replace all occurences of | with "+" (including quotes)
  • add a quote to the end of the template: "

In fact, I mechanically transformed the template to Haskell code using simple sed commands within vi and then just formatted the result to be more readable.
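For those who prefer Haskell to sed, the same mechanical transformation is easy to express directly. This is just my own illustration of the five rules above; templateToHaskell is a made-up name, not part of any library:

templateToHaskell :: String -> String
templateToHaskell s = "\"" ++ concatMap tr s ++ "\""
  where
    tr '{' = "\"*(\""  -- close the current string literal, open a group
    tr '}' = "\")*\""  -- close the group, reopen a string literal
    tr '|' = "\"+\""   -- alternatives become (+)
    tr c   = [c]

Running this over {I have|I've} been gives ""*("I have"+"I've")*" been", which is exactly the shape of the code above, give or take whitespace (the leading "" is harmless, since "" turns out to be the multiplicative identity, as we'll see below).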

Before explaining why this works, let's try our program out to verify that it works:

$ ghc -O2 spam.hs
$ ./spam
Just " I will right away grab your rss feed as I
can not find your email subscription hyperlink or
newsletter service. Do you've any? Please let me
recognise in order that I could subscribe.
Thanks."
$ ./spam
Just " I'll immediately seize your rss as I can not find
your email subscription link or e-newsletter service. Do
you have any? Please allow me realize in order that I may
subscribe. Thanks."

You might wonder: how does the above program work?

Types

Let's begin from the type of the top-level utility named foldIO:

foldIO
    :: Shell a       -- A stream of `a`s
    -> FoldM IO a b  -- A fold that reduces `a`s to a single `b`
    -> IO b          -- The result (a `b`)

foldIO connects a producer of as (i.e. a Shell) to a fold that consumes as and produces a single b (i.e. a FoldM). For now we will ignore how they are implemented. Instead we will play type tetris to see how we can connect things together.

The first argument we supply to foldIO is spam, whose type is:

spam :: Shell Text

Think of a Shell as a stream, and spam is a stream whose elements are Text values. Each element of this stream corresponds to one possible alternative for our template. For example, a template with exactly one alternative would be a stream with one element.

When we supply spam as the first argument to foldIO, the compiler infers that the first a in the type of foldIO must be Text

foldIO :: Shell a -> FoldM IO a b -> IO b
                ^
                |
                |
                |
spam   :: Shell Text

... therefore, the second a must also be Text:

foldIO :: Shell a -> FoldM IO a b -> IO b
                ^             ^
                |             |
                +-------------+
                |
spam   :: Shell Text

... so in this context foldIO has the more specialized type:

foldIO :: Shell Text -> FoldM IO Text b -> IO b

... and when we apply foldIO to spam we get the following narrower type:

foldIO spam :: FoldM IO Text b -> IO b

Now all we need to do is to provide a fold that can consume a stream of Text elements. We choose the random fold, which uses reservoir sampling to pick a random element from the stream. The type of random is:

random :: FoldM IO a (Maybe a)

In other words, given an input stream of as, this fold reduces the stream to a single Maybe a. The Maybe is either Nothing if the stream is empty or Just some random element from the stream if the stream is non-empty.
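As an aside, it's not hard to see how such a fold could be built. Here's a minimal sketch of a reservoir-sampling FoldM, using only the exported FoldM constructor; this is my own illustration, and the real random in the foldl library may be implemented differently:

import Control.Foldl (FoldM(..))
import System.Random (randomRIO)

-- Keep one candidate element at a time; after n elements, each of
-- them has had exactly a 1/n chance of ending up as the survivor.
randomSketch :: FoldM IO a (Maybe a)
randomSketch = FoldM step begin done
  where
    begin = return (Nothing, 0 :: Int)
    done  = return . fst
    step (kept, n) a = do
        let n' = n + 1
        i <- randomRIO (1, n')
        -- Replace the kept element with probability 1/n'.
        return (if i == 1 then Just a else kept, n')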

When we supply random as the second argument to foldIO, the compiler infers that the a in random must be Text:

foldIO spam :: FoldM IO Text        b -> IO b
                         |
                         |
                         |
                         v
random      :: FoldM IO  a (Maybe a)

... therefore the second a must also be Text:

foldIO spam :: FoldM IO Text        b -> IO b
                         |
                         +--------+
                         |        |
                         v        v
random      :: FoldM IO  a (Maybe a)

So the specialized type of random becomes:

foldIO spam :: FoldM IO Text        b     -> IO b

random :: FoldM IO Text (Maybe Text)

Now we can apply type inference in the opposite direction! The compiler infers that the b in the type of foldIO must be Maybe Text:

foldIO spam :: FoldM IO Text        b     -> IO b
                                    ^
                                    |
                                    |
                                    |
random      :: FoldM IO Text (Maybe Text)

... therefore the other b must also be Maybe Text:

foldIO spam :: FoldM IO Text        b     -> IO b
                                    ^           ^
                                    |           |
                                    +-----+-----+
                                          |
random      :: FoldM IO Text (Maybe Text)

... so we specialize foldIO's type even further to:

foldIO spam :: FoldM IO Text (Maybe Text) -> IO (Maybe Text)

... and when we apply that to random the type simplifies down to:

foldIO spam random :: IO (Maybe Text)

The end result is a subroutine that loops over the stream using reservoir sampling, selects a random element (or Nothing if the stream is empty), and then returns the result.

All that's left is to print the result:

main = do
    x <- foldIO spam random
    print x

So that explains the top half of the code, but what about the bottom half? What is up with the addition and multiplication of strings?

Overloading

The first trick is that the strings are actually not strings at all! Haskell lets you overload string literals using the OverloadedStrings extension so that they type-check as any type that implements the IsString type class. The Shell Text type is one such type. If you provide a string literal where the compiler expects a Shell Text then the compiler will instead build a 1-element stream containing just that string literal.

The second trick is that Haskell lets you overload numeric operations to work on any type that implements the Num type class. The Shell Text type implements this type class, so you can add and multiply streams of text elements.

The behavior of addition is stream concatenation. In our template, when we write:

"Very well" + "Perfectly" + "Well" + "Exceptionally well"

... we are really concatenating four 1-element streams into a combined 4-element stream representing the four alternatives.

The behavior of multiplication is to sequence two templates. Either template may be a stream of multiple alternatives, so when we sequence them we take the "cartesian product" of both streams: we concatenate every alternative from the first stream with every alternative from the second stream and return all possible combinations.

For example, when we write:

("couldn't " + "could not ") * ("resist" + "refrain from")

This reduces to four combinations:

  "couldn't resist"
+ "couldn't refrain from"
+ "could not resist"
+ "could not refrain from"

You can actually derive this using the rules of addition and multiplication:

("couldn't " + "could not ") * ("resist" + "refrain from")

-- Multiplication right-distributes over addition
"couldn't " * ("resist" + "refrain from")
+ "could not " * ("resist" + "refrain from")

-- Multiplication distributes again
"couldn't " * "resist"
+ "couldn't " * "refrain from"
+ "could not " * "resist"
+ "could not " * "refrain from"

-- Multiplying 1-element templates just sequences them
"couldn't resist"
+ "couldn't refrain from"
+ "could not resist"
+ "could not refrain from"

Notice how if we sequence two 1-element templates

"I have " * "been"

... it's identical to string concatenation:

"I have been"

And that's it! We build the template using arithmetic and then we fold the results using random to select one template at random. That's the complete program! Or is it?

Weighting

Actually there's one catch: our spam generator is very heavily biased towards the third major template. This is because the generator weights all alternatives equally, but each major template has a different number of alternatives:

  • 1st template: 2304 alternatives
  • 2nd template: 16 alternatives
  • 3rd template: 829440 alternatives
  • 4th template: 12 alternatives

As a result we're 360 times more likely to get the 3rd template than the next most common template (the 1st one). How can we weight each template to undo this bias?

The answer is simple: we can weight each template by using multiplication, scaling each template by the appropriate numeric factor.

In this case, the weights we will apply are:

  • 1st template: Increase frequency by 360x
  • 2nd template: Increase frequency by 51840x
  • 3rd template: Keep frequency the same (1x)
  • 4th template: Increase frequency by 69120x

Here's the implementation:

spam =  -- 1st major template
        360
      * ""
      * ("I have" + "I've")
      * " been "
      * ("surfing" + "browsing")
      * " online more than "
      * ("three" + "3" + "2" + "4")
      * " hours today, yet I never found any interesting article like yours. "
      * ("It's" + "It is")
      * " pretty worth enough for me. "
      * ("In my opinion" + "Personally" + "In my view")
      * ", if all "
      * ("webmasters" + "site owners" + "website owners" + "web owners")
      * " and bloggers made good content as you did, the "
      * ("internet" + "net" + "web")
      * " will be "
      * ("much more" + "a lot more")
      * " useful than ever before."

        -- 2nd major template
      + 51840
      * " I "
      * ("couldn't" + "could not")
      * " "
      * ("resist" + "refrain from")
      * " commenting. "
      * ("Very well" + "Perfectly" + "Well" + "Exceptionally well")
      * " written!"

        -- 3rd major template
      + 1
      * " "
      * ("I will" + "I'll")
      * " "
      * ("right away" + "immediately")
      * " "
      * ("take hold of" + "grab" + "clutch" + "grasp" + "seize" + "snatch")
      * " your "
      * ("rss" + "rss feed")
      * " as I "
      * ("can not" + "can't")
      * " "
      * ("in finding" + "find" + "to find")
      * " your "
      * ("email" + "e-mail")
      * " subscription "
      * ("link" + "hyperlink")
      * " or "
      * ("newsletter" + "e-newsletter")
      * " service. Do "
      * ("you have" + "you've")
      * " any? "
      * ("Please" + "Kindly")
      * " "
      * ("allow" + "permit" + "let")
      * " me "
      * ("realize" + "recognize" + "understand" + "recognise" + "know")
      * " "
      * ("so that" + "in order that")
      * " I "
      * ("may just" + "may" + "could")
      * " subscribe. Thanks."

        -- 4th major template
      + 69120
      * " "
      * ("It is" + "It's")
      * " "
      * ("appropriate" + "perfect" + "the best")
      * " time to make some plans for the future and "
      * ("it is" + "it's")
      * " time to be happy."

Now this produces a fairer distribution between the four major alternatives:

$ ./spam
Just "I have been surfing online more than three
hours today, yet I never found any interesting
article like yours. It's pretty worth enough for
me. In my view, if all web owners and bloggers
made good content as you did, the internet will
be a lot more useful than ever before."
$ ./spam
Just " It's the best time to make some plans for
the future and it's time to be happy."
$ ./spam
Just " I will right away clutch your rss feed as
I can't in finding your e-mail subscription link
or newsletter service. Do you have any? Kindly
let me understand so that I may just subscribe.
Thanks."
$ ./spam
Just " I could not refrain from commenting. Exceptionally well written!"

Remember how we said that Shell Text implements the Num type class in order to get addition and multiplication? Well, you can also use the same Num class to overload integer literals. Any time the compiler sees an integer literal where it expects a Shell Text it will replace that integer with a stream of empty strings whose length is the given integer.

For example, if you write the number 3, it's equivalent to:

-- Definition of 3
3 = 1 + 1 + 1

-- 1 = ""
3 = "" + "" + ""

So if you write:

3 * "some string"

... that expands out to:

("" + "" + "") * "some string"

... and multiplication distributes to give us:

("" * "some string") + ("" * "some string") + ("" * "some string")

... which reduces to three copies of "some string":

"some string" + "some string" + "some string"

This trick works even when multiplying a number by a template with multiple alternatives:

2 * ("I have" + "I've")

-- 2 = 1 + 1
= (1 + 1) * ("I have" + "I've")

-- 1 = ""
= ("" + "") * ("I have" + "I've")

-- Multiplication distributes
= ("" * "I have") + ("" * "I've") + ("" * "I have") + ("" * "I've")

-- Simplify
= "I have" + "I've" + "I have" + "I've"

Arithmetic

In fact, Shell Text obeys all sorts of arithmetic laws:

-- `0` is the identity of addition
0 + x = x
x + 0 = x

-- Addition is associative
(x + y) + z = x + (y + z)

-- `1` is the identity of multiplication
1 * x = x
x * 1 = x

-- Multiplication is associative
(x * y) * z = x * (y * z)

-- Multiplication right-distributes over addition
(x + y) * z = (x * z) + (y * z)

-- 1-element streams left-distribute over addition
"string" * (x + y) = ("string" * x) + ("string" * y)

I'm not sure what the mathematical name is for this sort of structure. I usually call this a "semiring", but that's technically not correct because in a semiring we expect addition to commute, but here it does not because Shell Text preserves the ordering of results. For the case of selecting a random element ordering does not matter, but there are other operations we can perform on these streams that are order-dependent.

In fact, the laws of arithmetic enforce that you weight all fine-grained alternatives equally, regardless of the frequencies of the top-level coarse-grained alternatives. If you weighted things based on the relative frequencies of top-level alternatives you would get very inconsistent behavior.

For example, if you tried to be more "fair" for outer alternatives, then addition stops being associative, meaning that these templates would no longer behave the same:

{x|{y|z}}
{{x|y}|z}
{x|y|z}

Conclusion

The Haskell template generator was so concise for two main reasons:

  • We embedded the template directly within Haskell, so we skipped having to parse the template
  • We reuse modular and highly generic components (like random, foldIO, (+), (*), and numeric literals) instead of writing our own custom code

Also, our program is easily modifiable. For example, if we want to collect all the templates, we just replace random with vector:

import Control.Foldl (vector)
import Data.Vector (Vector)

main = do
    x <- foldIO spam vector
    print (x :: Vector Text)

That efficiently builds a vector in place using mutation, storing all the results, and then purifies the final result.

However, these sorts of tricks come at a cost. Most of the awesome tricks like this are not part of the standard library and instead exist in libraries, making them significantly harder to discover. Worse, you're never really "done" learning the Haskell language. The library ecosystem is a really deep rabbit hole full of jewels and at some point you just have to pick a point to just stop digging through libraries and build something useful ...

... or useless, like a comment spam generator.

by Gabriel Gonzalez ([email protected]) at May 06, 2015 02:11 PM

Neil Mitchell

Announcing js-jquery Haskell Library

Summary: The library js-jquery makes it easy to get at the jQuery Javascript code from Haskell. I've just released a new version.

I've just released the Haskell library js-jquery 1.11.3, following the announcement of jQuery 1.11.3. This package bundles the minified jQuery code into a Haskell package, so it can be depended upon by Cabal packages. The version number matches the upstream jQuery version. It's easy to grab the jQuery code from Haskell using this library, as an example:

import qualified Language.Javascript.JQuery as JQuery

main = do
    putStrLn $ "jQuery version " ++ show JQuery.version ++ " source:"
    putStrLn =<< readFile =<< JQuery.file

There are two goals behind this library:

  • Make it easier for jQuery users to use and upgrade jQuery in Haskell packages. You can upgrade jQuery without huge diffs and use it without messing around with extra-source-files.
  • Make it easier for upstream packagers like Debian. The addition of a jQuery file into a Haskell package means you are mixing licenses and authors, and distributions like Debian also require the source (unminified) version of jQuery to be distributed alongside it. By having one package provide jQuery they only have to do that work once, and the package has been designed to meet their needs.

It's pretty easy to convert a package that has bundled jQuery to use this library instead.

The library only depends on the base library so it shouldn't cause any version hassles, although (as per all Cabal packages) you can't mix and match libraries with incompatible js-jquery version constraints in one project.

As a companion, there's also js-flot, which follows the same ideas for the Flot library.

by Neil Mitchell ([email protected]) at May 06, 2015 10:28 AM

May 05, 2015

Functional Jobs

Haskell Web Engineer at Front Row Education (Full-time)

Position

Haskell web engineer to join fast-growing education startup that changes how over a million young students learn math.

TL;DR - Why you should join Front Row

  • Our mission is important to us, and we want it to be important to you as well: hundreds of thousands of kids learn math using Front Row every month. Our early results show students improve twice as much while using Front Row than their peers who aren’t using the program.
  • You’ll be one of the first engineers on the team, which means you’ll have an immense impact on our company, product, and culture; you’ll have a ton of autonomy and responsibility; you’ll have equity to match the weight of this role. If you're looking for an opportunity to both grow and do meaningful work, surrounded and supported by like-minded professionals, this is THE place for you.
  • A lot of flexibility: while we all work towards the same goals, you’ll have a lot of autonomy in what you work on. You can work from home up to one day a week, and we have a very flexible unlimited vacation days policy
  • You’ll use the most effective tools in the industry: Haskell, Postgres, Ansible and more. Front Row is one of the very few organizations in the world that use Haskell in production for most of their systems and is an active member of the Haskell community, including the Commercial Haskell Special Interest Group. Read more about our experience on the FPComplete Blog.
  • In addition to doing good, we’re doing really well: in just over a year after launch, we are in more than 20% of all US elementary & middle schools.

The Business

Millions of teachers around the USA are struggling to help 30+ students in their class learn math because every student is in their own place. In a typical fourth grade classroom, there may be students learning to count, students learning to add, students learning to multiply, and students learning how exponents work - and one teacher somehow needs to address all these needs.

Front Row makes that impossible task possible, and as of today, more than a hundred thousand students use Front Row to receive personalized guidance in their learning. Thousands of teachers use Front Row every day to save hours of time and make sure their students are growing at the fastest rate achievable. Front Row active users have been growing over 25% a month for the past 6 months.

Front Row is successfully venture-funded and on the road to profitability.

The Role

As one of our very first engineers, you will be part of a team of developers who take pride in their craft and push each other to continuously get better. You will strive for pragmatism and 80/20 in your work. You will be using tools that make you most effective and give you the most leverage. By working really smart, you will produce more than the average developer ever will, but without the crazy hours.

We love generalists who can quickly bring themselves up to speed with any technology we’re using: you will have the chance to learn a lot, and fast too. You will receive continuous support and mentorship on your journey to achieving mastery. We do however expect you not to need to be hand-held and rely on others for your own growth. You will have full autonomy over your work.

You will work in an effective team that plans, executes and reflects together. Because we’re a small team, everything you create will go into production and be used by students, teachers, parents and school personnel. You will never do unimportant work: every contribution will make a clear and tangible impact on the company’s trajectory. Your personal success will be directly aligned with that of the company.

Most importantly, your work will have purpose: Front Row is a mission-driven company that takes pride in making a significant impact in the lives of hundreds of thousands of students.

Tools

  • Front Row is a polyglot combination of multiple web applications, mobile apps and asset generation tools.
  • Web front-ends are a custom version of Backbone.js + plugins.
  • The backend is a series of Haskell+Yesod-based applications talking to PostgreSQL and 3rd party services. Some Clojure here and there from older codebases on the way out.
  • All test, build and deployment automation relies on Ansible. AWS for hosting.
  • We have mobile apps for both iOS and Android
  • Work is continuously happening to simplify and minimize the codebases. We <3 getting rid of code :)

Must haves

  • You have functional programming experience (Haskell / Clojure / Scala / OCaml etc.)
  • Fast learner: you'll be drinking out of a firehose every single day for a very long time, you should be very comfortable with that
  • Extreme hustle: you’ll be solving a lot of problems you haven’t faced before without the resources and the support of a giant organization. You must thrive on getting creative in order to get things done.
  • US citizenship or permanent residency

Bonus Points

  • You have experience doing full-stack web development.
  • You have experience with RDBMS
  • You understand networking and have experience developing distributed systems
  • You have existing familiarity with a functional stack
  • You're comfortable with the Behavior-Driven Development style and Continuous Delivery
  • You have worked at a very small startup before: you thrive on having a lot of responsibility and little oversight
  • You have worked in small and effective Agile/XP teams before
  • You have delivered working software to large numbers of users before
  • You have done system and network administration and are comfortable working in the Linux environment
  • You have implemented deployment strategies for cloud infrastructure
  • You have experience scaling distributed systems and designing large scale web backends

Benefits

  • Competitive salary
  • Generous equity option grants
  • Medical, Dental, and Vision
  • Lunch is on us three times a week, and a half-day event every month (trip to Sonoma, BBQ, etc)
  • Equipment budget
  • Flexible work schedule
  • Flexible, untracked vacation day policy
  • Working from downtown SF, super accessible location right by Powell station

Front Row - our mission

It's an unfortunate reality that students from less affluent families perform worse in school than students from wealthier families. Part of this reason has to do with home environment and absentee parents, but much of it has to do with inferior resources and less experienced teachers. The worst part of this problem is that if a student falls behind in any grade, they will forever be behind in every grade.

That's the core problem Front Row solves - it doesn't let students fall behind. And if they fall behind, Front Row helps catch them up really quickly because Front Row arms teachers with the resources and knowledge to develop their students individually. Now, the probability of falling behind in any given grade is irrelevant, because it will never compound. The student who would have been the most at risk will instead be up to speed, and therefore far more motivated.

Get information on how to apply for this position.

May 05, 2015 11:30 PM

Theory Lunch (Institute of Cybernetics, Tallinn)

A concrete piece of evidence for incompleteness

On Thursday, the 25th of March 2015, Venanzio Capretta gave a Theory Lunch talk about Goodstein’s theorem. Later, on the 9th of April, Wolfgang Jeltsch talked about ordinal numbers, which are at the base of Goodstein’s proof. Here, I am writing down a small recollection of their arguments.

Given a base b \geq 2, consider the base-b writing of the nonnegative integer

n = b^m \cdot a_m + b^{m-1} \cdot a_{m-1} + \ldots + b \cdot a_1 + a_0

where each a_i is an integer between 0 and b-1. The Cantor base-b writing of n is obtained by iteratively applying the base-b writing to the exponents as well, until the only values appearing are integers between 0 and b. For example, for b = 2 and n = 49, we have

49 = 32 + 16 + 1 = 2^{2^2 + 1} + 2^{2^2} + 1

and also

49 = 27 + 9 \cdot 2 + 3 + 1 = 3^3 + 3^2 \cdot 2 + 3 + 1

Given a nonnegative integer n, consider the Goodstein sequence defined for i \geq 2 by putting x_2 = n, and by constructing x_{i+1} from x_i as follows (a small Haskell sketch of this procedure appears after the list):

  1. Take the Cantor base-i representation of x_i.
  2. Convert each i into i+1, getting a new number.
  3. If the value obtained at the previous point is positive, then subtract 1 from it.
    (This is called the woodworm’s trick.)
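For concreteness, here is one way to code this procedure up in Haskell. This is my own illustration, with made-up names, and not anything from the talks:

-- Hereditary base-b representation: Term c e rest stands for
-- c * b^e + rest, where the exponent e is itself hereditary.
data Hereditary = Zero | Term Integer Hereditary Hereditary

toHereditary :: Integer -> Integer -> Hereditary
toHereditary b n = go 0 n
  where
    go _ 0 = Zero
    go e m =
      let (q, r) = m `divMod` b
          rest   = go (e + 1) q
      in  if r == 0 then rest else Term r (toHereditary b e) rest

-- Evaluate a hereditary representation in a (possibly bumped) base.
fromHereditary :: Integer -> Hereditary -> Integer
fromHereditary _ Zero = 0
fromHereditary b (Term c e rest) =
    c * b ^ fromHereditary b e + fromHereditary b rest

-- The Goodstein sequence x_2, x_3, ...: rewrite x_i in base i, bump
-- the base to i+1, then subtract 1 (the woodworm's trick).
goodstein :: Integer -> [Integer]
goodstein n = go 2 n
  where
    go _ 0 = [0]
    go i x = x : go (i + 1) (fromHereditary (i + 1) (toHereditary i x) - 1)

For instance, goodstein 3 evaluates to [3,3,3,2,1,0], while goodstein 4 already takes more steps than could ever be printed.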

Goodstein’s theorem. Whatever the initial value x_2, the Goodstein sequence ultimately reaches the value 0 in finitely many steps.

Goodstein’s proof relies on the use of ordinal arithmetic. Recall the definition: an ordinal number is an equivalence class of well-ordered sets modulo order isomorphisms, i.e., order-preserving bijections. Observe that such an order isomorphism between well-ordered sets, if it exists, is unique: if (X, \leq_X) and (Y, \leq_Y) are well-ordered sets, and f, g : X \to Y are two distinct order isomorphisms, then either U = \{ x \in X \mid f(x) <_Y g(x) \} or V = \{ x \in X \mid g(x) <_Y f(x) \} has a minimum m, which cannot correspond to any element of Y.

An interval in a well-ordered set (X, \leq) is a subset of the form [0, y) = \{ x \in X \mid x < y \}.

Fact 1. Given any two well-ordered sets, either they are order-isomorphic, or one of them is order-isomorphic to an initial interval of the other.

In particular, every ordinal \alpha is order-isomorphic to the interval [0, \alpha).

All ordinal numbers can be obtained via von Neumann’s classification:

  • The zero ordinal is 0 = \emptyset, which is trivially well-ordered as it has no nonempty subsets.
  • A successor ordinal is an ordinal of the form \alpha + 1 = \alpha \sqcup \{\alpha\}, with every object in \alpha being smaller than \{\alpha\} in \alpha + 1.
    For instance, N + 1 can be seen as N \sqcup \{N\}.
  • A limit ordinal is a nonzero ordinal which is not a successor. Such ordinal must be the least upper bound of the collection of all the ordinals below it.
    For instance, the smallest transfinite ordinal \omega is the limit of the collection of the finite ordinals.

Observe that, with this convention, each ordinal is an element of every ordinal strictly greater than itself.

Fact 2. Every set of ordinal numbers is well-ordered with respect to the relation: \alpha < \beta if and only if \alpha \in \beta.

Operations between ordinal numbers are defined as follows, up to order isomorphisms (a small Haskell sketch of ordinal addition follows the list):

  • \alpha + \beta is a copy of \alpha followed by a copy of \beta, with every object in \alpha being strictly smaller than any object in \beta.
    If M and N are finite ordinals, then M+N has the intuitive meaning. On the other hand, 1 + \omega = \omega, as a copy of 1 followed by a copy of \omega is order-isomorphic to \omega: but \omega + 1 is strictly larger than \omega, as the latter is an initial interval of the former.
  • \alpha \cdot \beta is a stack of \beta copies of \alpha, with each object in each layer being strictly smaller than any object of any layer above.
    If M and N are finite ordinals, then M \cdot N has the intuitive meaning. On the other hand, 2 \cdot \omega is a stack of \omega copies of 2, which is order-isomorphic to \omega: but \omega \cdot 2 is a stack of 2 copies of \omega, which is order-isomorphic to \omega + \omega.
  • \alpha^\beta is 1 if \beta = 0, \alpha^\gamma \cdot \alpha if \beta is the successor of \gamma, and the least upper bound of the ordinals of the form \alpha^x with x < \beta if \beta is a limit ordinal.
    If M and N are finite ordinals, then M^N has the intuitive meaning. On the other hand, 2^\omega is the least upper bound of all the ordinals of the form 2^N where N is a finite ordinal, which is precisely \omega: but \omega^2 = \omega \cdot \omega.
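Ordinal addition below \varepsilon_0 can even be coded up directly. The following is my own small Haskell sketch (made-up names, not anything from the talks), using Cantor normal form; addition alone is enough to observe that 1 + \omega = \omega while \omega + 1 > \omega:

-- Cantor normal form: Ordinal [(e1,c1),(e2,c2),...] stands for
-- omega^e1 * c1 + omega^e2 * c2 + ... with e1 > e2 > ... and ci > 0.
-- The derived Ord is correct because exponents are kept decreasing.
newtype Ordinal = Ordinal [(Ordinal, Integer)]
  deriving (Eq, Ord, Show)

zero, one, omega :: Ordinal
zero  = Ordinal []
one   = Ordinal [(zero, 1)]
omega = Ordinal [(one, 1)]

-- Addition absorbs the terms of the left operand that lie below the
-- leading exponent of the right operand; hence 1 + omega = omega.
plus :: Ordinal -> Ordinal -> Ordinal
plus x (Ordinal []) = x
plus (Ordinal xs) (Ordinal ((e, c) : rest)) =
  case span (\(e', _) -> e' > e) xs of
    (hi, (e', c') : _) | e' == e -> Ordinal (hi ++ (e, c' + c) : rest)
    (hi, _)                      -> Ordinal (hi ++ (e, c) : rest)

With this, plus one omega == omega is True, while plus omega one is strictly greater than omega.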

Proof of Goodstein’s theorem: To each integer value x_i we associate an ordinal number y_i by replacing each i (which, let’s not forget, is the base x_i is written in) with \omega. For example, if x_2 = 49 = 2^{2^2 + 1} + 2^{2^2} + 1, then

y_2 = \omega^{\omega^\omega + 1} + \omega^{\omega^\omega} + 1

and x_3 = 3^{3^3 + 1} + 3^{3^3} (which, incidentally, equals 30,502,389,939,948) so that

y_3 = \omega^{\omega^\omega + 1} + \omega^{\omega^\omega}

We notice that, in our example, x_3 > x_2, but y_3 < y_2: why is it so? Is this just a coincidence, or is there a rule behind it?

At each step i \geq 2 where x_i > 0, consider the writing x_i = i^m \cdot a_m + \ldots + i \cdot a_1 + a_0. Three cases are possible:

  1. m = 0.
    Then y_i = a_0, x_{i+1} = a_0 - 1 as a_0 < i, and y_{i+1} = a_0 - 1 < y_i.
  2. m > 0 and a_0 > 0.
    Then y_i = \alpha + a_0 for a transfinite ordinal \alpha, and y_{i+1} = \alpha + (a_0 - 1) < y_i.
  3. m > 0 and a_0 = 0.
    Then x_i = i^m \cdot a_m + \ldots + i^p \cdot a_p for some p > 0, and x_{i+1} = (i+1)^m \cdot a_m + \ldots + (i+1)^p \cdot a_p - 1 is a number whose pth digit in base i+1 is zero: correspondingly, the rightmost term in y_i will be replaced by a smaller ordinal in y_{i+1}.

It is then clear that the sequence y_i is strictly decreasing. But the collection of all ordinals not larger than y_2 is a well-ordered set, and every nonincreasing sequence in a well-ordered set is ultimately constant: hence, there must be a value i such that y_i = y_{i+1}. But the only way it can be so is when y_i = 0: in turn, the only option for y_i to be zero, is that x_i is zero as well. This proves the theorem. \Box

So why is it that Goodstein’s theorem is not provable in first-order Peano arithmetic? The intuitive reason is that the exponentiations can be arbitrarily many, which requires having available all the ordinals up to

\varepsilon_0 = \underbrace{\omega^{\omega^{\omega^{\cdot^{\cdot^{\cdot}}}}}}_{\omega \text{ times}} = \sup_{n < \omega} \underbrace{\omega^{\omega^{\cdot^{\cdot^{\cdot^{\omega}}}}}}_{n \text{ times}}:

this, however, is impossible if induction only allows finitely many steps, as is the case for first-order Peano arithmetic. A full discussion of a counterexample would greatly exceed the scope of this post.


by Silvio Capobianco at May 05, 2015 04:23 PM

Danny Gratzer

A Proof of Church Rosser in Twelf

Posted on May 5, 2015
Tags: twelf, types

An important property in any term rewriting system, a system of rules for saying one term can be rewritten into another, is called confluence. In a term rewriting system more than one rule may apply at a time, confluence states that it doesn’t matter in what order we apply these rules. In other words, there’s some sort of diamond property in our system

                 Starting Term
                    /     \
                   /       \
          Rule 1  /         \ Rule 2
                 /           \
                /             \
               B               C
                \              /
         A bunch \     of     / rules later
                  \          /
                   \        /
                    \      /
                 Same end point

In words (and not a crappy ascii picture)

  1. Suppose we have some term A
  2. The system lets us rewrite A to B
  3. The system lets us rewrite A to C

Then two things hold

  1. The system lets us rewrite B to D in some number of rewrites
  2. The system lets us rewrite C to D with a different series of rewrites

In the specific case of lambda calculus, confluence is referred to as the “Church-Rosser Theorem”. This theorem has several important corollaries, including that the normal form of any lambda term is unique. To see this, remember that a normal form is always “at the bottom” of diamonds like the one we drew above. This means that if some term had multiple steps to take, they all must converge before one of them reaches a normal form. If any of them did hit a normal form first, they couldn’t complete the diamond.

Proving Church-Rosser

In this post I’d like to go over a proof of the Church Rosser theorem in Twelf, everyone’s favorite mechanized metalogic. To follow along if you don’t know Twelf, perhaps some shameless self linking will help.

We need to start by actually defining lambda calculus. In keeping with Twelf style, we laugh at those restricted by the bounds of inductive types and use higher order abstract syntax to get binding for free.

    term : type.
    ap   :  term -> term  -> term.
    lam  : (term -> term) -> term.

We have two constructors: ap, which applies one term to another, and the more interesting lam, which embeds the LF function space, term -> term, into term. This actually makes sense because term isn’t an inductive type, just a type family with a few members. There’s no underlying induction principle with which we can derive contradictions. To be perfectly honest I’m not sure how the proof of soundness of something like Twelf’s %total mechanism proceeds. If a reader is feeling curious, I believe this is the appropriate paper to read.

With this, we can write something like λx. x x as lam [x] ap x x.

Now on to evaluation. We want to talk about things as a term rewriting system, so we opt for a small step evaluation approach.

    step     : term -> term -> type.
    step/b   : step (ap (lam F) A) (F A).
    step/ap1 : step (ap F A) (ap F' A)
                <- step F F'.
    step/ap2 : step (ap F A) (ap F A')
                <- step A A'.
    step/lam : step (lam [x] M x) (lam [x] M' x)
                <- ({x} step (M x) (M' x)).

    step* : term -> term -> type.
    step*/z : step* A A.
    step*/s : step* A C
               <- step A B
               <- step* B C.

We start with the 4 sorts of steps you can make in this system. 3 of them are merely “if you can step somewhere else, you can pull the rewrite out”; I’ve heard these referred to as compatibility rules. This is what ap1, ap2 and lam do, lam being the most interesting since it deals with going under a binder. Finally, the main rule is step/b, which defines beta reduction. Note that HOAS gives us the substitution for free, as LF application.

Finally, step* is for a series of steps. We either have no steps, or a step followed by another series of steps. Now we want to prove a couple theorems about our system. These are mostly the lifting of the “compatibility rules” up to working on step*s. The first is the lifting of ap1.

     step*/left : step* F F' -> step* (ap F A) (ap F' A) -> type.
     %mode +{F : term} +{F' : term} +{A : term} +{In : step* F F'}
     -{Out : step* (ap F A) (ap F' A)} (step*/left In Out).

     - : step*/left step*/z step*/z.
     - : step*/left (step*/s S* S) (step*/s S'* (step/ap1 S))
          <- step*/left S* S'*.

     %worlds (lam-block) (step*/left _ _).
     %total (T) (step*/left T _).

Note, the mode specification I’m using is a little peculiar. It needs to be this verbose because otherwise A mode-errors. Type inference is peculiar.

The theorem says that if F steps to F' in several steps, for all A, ap F A steps to ap F' A in many steps. The actual proof is quite boring, we just recurse and apply step/ap1 until everything type checks.

Note that the world specification for step*/left is a little strange. We use the block lam-block because later one of our theorems needs this. The block is just

%block lam-block : block {x : term}.

We need to annotate this on all our theorems because Twelf’s world subsumption checker isn’t convinced that lam-block can subsume the empty worlds we check some of our theorems in. Ah well.

Similarly to step*/left there is step*/right. The proof is 1 character off so I won’t duplicate it.

    step*/right : step* A A' -> step* (ap F A) (ap F A') -> type.

Finally, we have step/lam, the lifting of the compatibility rule for lambdas. This one is a little more fun since it actually works by pattern matching on functions.

     step*/lam : ({x} step* (F x) (F' x))
                  -> step* (lam F) (lam F')
                  -> type.
     %mode step*/lam +A -B.

     - : step*/lam ([x] step*/z) step*/z.
     - : step*/lam ([x] step*/s (S* x) (S x))
          (step*/s S'* (step/lam S))
          <- step*/lam S* S'*.

     %worlds (lam-block) (step*/lam _ _).
     %total (T) (step*/lam T _).

What’s fun here is that we’re inducting on a dependent function. So the first case matches [x] step*/z and the second [x] step*/s (S* x) (S x). Other than that we just use step/lam to lift up S and recurse to lift up S* in the second case.

We need one final (more complicated) lemma about substitution. It states that if A steps to A', then F A steps to F A' in many steps for all F. This proceeds by induction on the structure of F. First off, here’s the formal statement in Twelf

This is the lemma that actually needs the world with lam-blocks

    subst : {F} step A A' -> step* (F A) (F A') -> type.
    %mode subst +A +B -C.

Now the actual proof. The first two cases are for constant functions and the identity function

    - : subst ([x] A) S step*/z.
    - : subst ([x] x) S (step*/s step*/z S).

In the case of the constant functions the results of F A and F A' are the same so we don’t need to step at all. In the case of the identity function we just step with the step from A to A'.

In the next case, we deal with nested lambdas.

     - : subst ([x] lam ([y] F y x)) S S'*
          <- ({y} subst (F y) S (S* y))
          <- step*/lam S* S'*.

Here we recurse, but we carefully do this under a pi type. The reason for doing this is because we’re recursing on the open body of the inner lambda. This has a free variable and we need a pi type in order to actually apply F to something to get at the body. Otherwise this just uses step*/lam to lift the step across the body to the step across lambdas.

Finally, application.

     - : subst ([x] ap (F x) (A x)) S S*
          <- subst F S F*
          <- subst A S A*
          <- step*/left F* S1*
          <- step*/right A* S2*
          <- join S1* S2* S*.

This looks complicated, but isn’t so bad. We first recurse, and then use various compatibility lemmas to actually plumb the results of the recursive calls to the right parts of the final term. Since there are two individual pieces of stepping, one for the argument and one for the function, we use join to slap them together.

With this, we’ve got all our lemmas

    %worlds (lam-block) (subst _ _ _).
    %total (T) (subst T _ _).

The Main Theorem

Now that we have all the pieces in place, we’re ready to state and prove confluence. Here’s our statement in Twelf

    confluent : step A B -> step A C -> step* B D -> step* C D -> type.
    %mode confluent +A +B -C -D.

Unfortunately, there’s a bit of a combinatorial explosion with this. There are approximately 3 * 3 + 1 = 10 cases for this theorem. And thanks to the lemmas we’ve proven, they’re all boring.

First we have the cases where step A B is a step/ap1.

     - : confluent (step/ap1 S1) (step/ap1 S2) S1'* S2'*
          <- confluent S1 S2 S1* S2*
          <- step*/left S1* S1'*
          <- step*/left S2* S2'*.
     - : confluent (step/ap1 S1) (step/ap2 S2)
          (step*/s step*/z (step/ap2 S2))
          (step*/s step*/z (step/ap1 S1)).
     - : confluent (step/ap1 (step/lam F) : step (ap _ A) _) step/b
          (step*/s step*/z step/b) (step*/s step*/z (F A)).

In the first case, we have two ap1s. We recurse on the smaller S1 and S2 and then immediately use one of our lemmas to lift the results of the recursive call, which step the function part of the ap we’re looking at, to work across the whole ap term. In the second case, we’re stepping the function in one and the argument in the other. In order to bring these to a common term we just apply the first step to the resulting term of the second step and vice versa. This means that we’re doing something like this

                 F A
                /   \
           S1  /     \ S2
              /       \
            F' A     F  A'
              \       /
           S2  \     /  S1
                \   /
                F' A'

This clearly commutes so this case goes through. For the final case, we’re applying a lambda to some term so we can beta reduce. On one side we step the body of the lambda some how, and on the other we immediately substitute. Now we do something clever. What is a proof that lam A steps to lam B? It’s a proof that for any x, A x steps to B x. In fact, it’s just a function from x to such a step A x to B x. So we have that lying around in F. So to step from the beta-reduced term G A to G' A all we do is apply F to A! The other direction is just beta-reducing ap (lam G') A to the desired G' A.

In the next set of cases we deal with ap2!

     - : confluent (step/ap2 S1) (step/ap2 S2) S1'* S2'*
          <- confluent S1 S2 S1* S2*
          <- step*/right S1* S1'*
          <- step*/right S2* S2'*.
     - : confluent (step/ap2 S1) (step/ap1 S2)
          (step*/s step*/z (step/ap1 S2))
          (step*/s step*/z (step/ap2 S1)).
     - : confluent (step/ap2 S) (step/b : step (ap (lam F) _) _)
          (step*/s step*/z step/b) S1*
          <- subst F S S1*.

The first two cases are almost identical to what we’ve seen before. The key difference here is in the third case. This is again where we’re stepping something on one side and beta-reducing on the other. We can’t use the nice free stepping provided by F here since we’re stepping the argument, not the function. For this we appeal to subst, which lets us step F A to F A' using S1* exactly as required. The other direction is trivial just like it was in the ap1 case, we just have to step ap (lam F) A' to F A' which is done with beta reduction.

I’m not going to detail the cases to do with step/b as the first argument because they’re just mirrors of the cases we’ve looked at before. That only leaves us with one more case, the case for step/lam.

     - : confluent (step/lam F1) (step/lam F2) F1'* F2'*
          <- ({x} confluent (F1 x) (F2 x) (F1* x) (F2* x))
          <- step*/lam F1* F1'*
          <- step*/lam F2* F2'*.

This is just like all the other “diagonal” cases, like confluent (ap1 S1) (ap1 S2) .... We first recurse (this time using a pi to unbind the body of the lambda) and then use compatibility rules in order to get something we can give back from confluent. And with this, we can actually prove that lambda calculus is confluent.

    %worlds (lam-block) (confluent _ _ _ _).
    %total (T) (confluent T _ _ _).

Wrap Up

We went through a fairly significant proof here, but the end results were interesting at least. One nice thing this proof illustrates is how well HOAS lets us encode these proofs. It’s a very Twelf-y approach to use lambdas to represent bindings. All in all, it’s a fun proof.


May 05, 2015 12:00 AM

May 03, 2015

Keegan McAllister

Modeling garbage collectors with Alloy: part 1

Formal methods for software verification are usually seen as a high-cost tool that you would only use on the most critical systems, and only after extensive informal verification. The Alloy project aims to be something completely different: a lightweight tool you can use at any stage of everyday software development. With just a few lines of code, you can build a simple model to explore design issues and corner cases, even before you've started writing the implementation. You can gradually make the model more detailed as your requirements and implementation get more complex. After a system is deployed, you can keep the model around to evaluate future changes at low cost.

Sounds great, doesn't it? I have only a tiny bit of prior experience with Alloy and I wanted to try it out on something more substantial. In this article we'll build a simple model of a garbage collector, visualize its behavior, and fix some problems. This is a warm-up for exploring more complex GC algorithms, which will be the subject of future articles.

I won't describe the Alloy syntax in full detail, but you should be able to follow along if you have some background in programming and logic. See also the Alloy documentation and especially the book Software Abstractions: Logic, Language, and Analysis by Daniel Jackson, which is a very practical and accessible introduction to Alloy. It's a highly recommended read for any software developer.

You can download Alloy as a self-contained Java executable, which can do analysis and visualization and includes an editor for Alloy code.

The model

We will start like so:

open util/ordering [State]

sig Object { }
one sig Root extends Object { }

sig State {
pointers: Object -> set Object,
collected: set Object,
}

The garbage-collected heap consists of Objects, each of which can point to any number of other Objects (including itself). There is a distinguished object Root which represents everything that's accessible without going through the heap, such as global variables and the function call stack. We also track which objects have already been garbage-collected. In a real implementation these would be candidates for re-use; in our model they stick around so that we can detect use-after-free.

The open statement invokes a library module to provide a total ordering on States, which we will interpret as the progression of time. More on this later.

Relations

In the code that follows, it may look like Alloy has lots of different data types, overloading operators with total abandon. In fact, all these behaviors arise from an exceptionally simple data model:

Every value is a relation; that is, a set of tuples of the same non-zero length.

When each tuple has length 1, we can view the relation as a set. When each tuple has length 2, we can view it as a binary relation and possibly as a function. And a singleton set is viewed as a single atom or tuple.

Since everything in Alloy is a relation, each operator has a single definition in terms of relations. For example, the operators . and [] are syntax for a flavor of relational join. If you think of the underlying relations as a database, then Alloy's clever syntax amounts to an object-relational mapping that is at once very simple and very powerful. Depending on context, these joins can look like field access, function calls, or data structure lookups, but they are all described by the same underlying framework.
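
To make the join concrete, consider a tiny hypothetical instance (my own illustration, not part of the model):

pointers = {(S1, O1, O2), (S2, O2, O3)}
s = {(S1)}
s.pointers = {(O1, O2)}

The join matches the last column of s against the first column of pointers and drops both matched columns, leaving exactly the heap as it exists in state S1.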

The elements of the tuples in a relation are atoms, which are indivisible and have no meaning individually. Their meaning comes entirely from the relations and properties we define. Ultimately, atoms all live in the same universe, but Alloy gives "warnings" when the type system implied by the sig declarations can prove that an expression is always the empty relation.

Here are the relations implied by our GC model, as tuple sets along with their types:

Object: {Object} = {O1, O2, ..., Om}
Root: {Root} = {Root}
State: {State} = {S1, S2, ..., Sn}

pointers: {(State, Object, Object)}
collected: {(State, Object)}

first: {State} = {S1}
last: {State} = {Sn}
next: {(State, State)} = {(S1, S2), (S2, S3), ..., (S(n-1), Sn)}

The last three relations come from the util/ordering library. Note that a sig implicitly creates some atoms.

Dynamics

The live objects are everything reachable from the root:

fun live(s: State): set Object {
Root.*(s.pointers)
}

*(s.pointers) constructs the reflexive, transitive closure of the binary relation s.pointers; that is, the set of objects reachable from each object.
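
For instance, if s.pointers = {(Root, O1), (O1, O2), (O3, O2)} (hypothetical atoms again), then s.live = {Root, O1, O2}; O3 points into the live heap but is itself unreachable, so it is garbage.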

Of course the GC is only part of a system; there's also the code that actually uses these objects, which in GC terminology is called the mutator. We can describe the action of each part as a predicate relating "before" and "after" states.

pred mutate(s, t: State) {
t.collected = s.collected
t.pointers != s.pointers
all a: Object - t.live |
t.pointers[a] = s.pointers[a]
}

pred gc(s, t: State) {
t.pointers = s.pointers
t.collected = s.collected + (Object - s.live)
some t.collected - s.collected
}

The mutator cannot collect garbage, but it can change the pointers of any live object. The GC doesn't touch the pointers, but it collects any dead object. In both cases we require that something changes in the heap.

It's time to state the overall facts of our model:

fact {
no first.collected
first.pointers = Root -> (Object - Root)
all s: State - last |
let t = s.next |
mutate[s, t] or gc[s, t]
}

This says that in the initial state, no object has been collected, and every object is in the root set except Root itself. This means we don't have to model allocation as well. Each state except the last must be followed by a mutator step or a GC step.

The syntax all x: e | P says that the property P must hold for every tuple x in e. Alloy supports a variety of quantifiers like this.

Interacting with Alloy

The development above looks nice and tidy — I hope — but in reality, it took a fair bit of messing around to get to this point. Alloy provides a highly interactive development experience. At any time, you can visualize your model as a collection of concrete examples. Let's do that now by adding these commands:

pred Show {}
run Show for 5

Now we select this predicate from the "Execute" menu, then click "Show". The visualizer provides many options to customise the display of each atom and relation. The config that I made for this project is "projected over State", which means you see a graph of the heap at one moment in time, with forward/back buttons to reach the other States.

After clicking around a bit, you may notice some oddities:

Diagram of a heap with an object pointing to the root

The root isn't a heap object; it represents all of the pointers that are reachable without accessing the heap. So it's meaningless for an object to point to the root. We can exclude these cases from the model easily enough:

fact {
all s: State | no s.pointers.Root
}

(This can also be done more concisely as part of the original sig.)

Now we're ready to check the essential safety property of a garbage collector:

assert no_dangling {
all s: State | no (s.collected & s.live)
}

check no_dangling for 5 Object, 10 State

And Alloy says:

Executing "Check no_dangling for 5 Object, 10 State"
...
8338 vars. 314 primary vars. 17198 clauses. 40ms.
Counterexample found. Assertion is invalid. 14ms.

Clicking "Counterexample" brings up the visualization:

Diagram of four states. A single heap object is unrooted, then collected, but then the root grows a new pointer to it!

Whoops, we forgot to say that only pointers to live objects can be stored! We can fix this by modifying the mutate predicate:

pred mutate(s, t: State) {
t.collected = s.collected
t.pointers != s.pointers
all a: Object - t.live |
t.pointers[a] = s.pointers[a]

// new requirement!
all a: t.live |
t.pointers[a] in s.live
}

With the result:

Executing "Check no_dangling for 5 Object, 10 State"
...
8617 vars. 314 primary vars. 18207 clauses. 57ms.
No counterexample found. Assertion may be valid. 343ms.

SAT solvers and bounded model checking

"May be" valid? Fortunately this has a specific meaning. We asked Alloy to look for counterexamples involving at most 5 objects and 10 time steps. This bounds the search for counterexamples, but it's still vastly more than we could ever check by exhaustive brute force search. (See where it says "8617 vars"? Try raising 2 to that power.) Rather, Alloy turns the bounded model into a Boolean formula, and feeds it to a SAT solver.

This all hinges on one of the weirdest things about computing in the 21st century. In complexity theory, SAT (along with many equivalents) is the prototypical "hardest problem" in NP. Why do we intentionally convert our problem into an instance of this "hardest problem"? I guess for me it illustrates a few things:

  • The huge gulf between worst-case complexity (the subject of classes like NP) and average or "typical" cases that we encounter in the real world. For more on this, check out Impagliazzo's "Five Worlds" paper.

  • The fact that real-world difficulty involves a coordination game. SAT solvers got so powerful because everyone agrees SAT is the problem to solve. Standard input formats and public competitions were a key part of the amazing progress over the past decade or two.

Of course SAT solvers aren't quite omnipotent, and Alloy can quickly get overwhelmed when you scale up the size of your model. Applicability to the real world depends on the small scope hypothesis:

If an assertion is invalid, it probably has a small counterexample.

Or equivalently:

Systems that fail on large instances almost always fail on small instances with similar properties.

This is far from a sure thing, but it already underlies a lot of approaches to software testing. With Alloy we have the certainty of proof within the size bounds, so we don't have to resort to massive scale to find rare bugs. It's difficult (but not impossible!) to imagine a GC algorithm that absolutely cannot fail on fewer than 6 nodes, but is buggy for larger heaps. Implementations will often fall over at some arbitrary resource limit, but algorithms and models are more abstract.

Conclusion

It's not surprising that our correctness property

all s: State | no (s.collected & s.live)

holds, since it's practically a restatement of the garbage collection "algorithm":

t.collected = s.collected + (Object - s.live)

Because reachability is built into Alloy, via transitive closure, the simplest model of a garbage collector does not really describe an implementation. In the next article we'll look at incremental garbage collection, which breaks the reachability search into small units and allows the mutator to run in-between. This is highly desirable for interactive or real-time apps; it also complicates the algorithm quite a bit. We'll use Alloy to uncover some of these complications.

In the meantime, you can play around with the simple GC model and ask Alloy to visualize any scenario you like. For example, we can look at runs where the final state includes at least 5 pointers, and at least one collected object:

pred Show {
#(last.pointers) >= 5
some last.collected
}

run Show for 5

Thanks for reading! You can find the code in a GitHub repository which I'll update if/when we get around to modeling more complex GCs.

by keegan ([email protected]) at May 03, 2015 09:21 PM

apfelmus

GUI - Release of the threepenny-gui library, version 0.6.0.1

I am pleased to announce the release of threepenny-gui version 0.6, a cheap and simple library to satisfy your immediate GUI needs in Haskell.

Want to write a small GUI thing but forgot to sacrifice to the giant rubber duck in the sky before trying to install wxHaskell or Gtk2Hs? Then this library is for you! Threepenny is easy to install because it uses the web browser as a display.

The library also has functional reactive programming (FRP) built-in, which makes it a lot easier to write GUI applications without getting caught in spaghetti code. For an introduction to FRP, see for example my slides from a tutorial I gave in 2012. (The API is slightly different in Reactive.Threepenny.)

In version 0.6, the communication with the web browser has been overhauled completely. On a technical level, Threepenny implements an HTTP server that sends JavaScript code to the web browser and receives JSON data back. However, this is not the right level of abstraction to look at the problem. What we really want is a foreign function interface for JavaScript, i.e. we want to be able to call arbitrary JavaScript functions from our Haskell code. As of this version, Threepenny implements just that: The module Foreign.JavaScript gives you the essential tools you need to interface with the JavaScript engine in a web browser, very similar to how the module Foreign and related modules from the base library give you the ability to call C code from Haskell. You can manipulate JavaScript objects, call JavaScript functions and export Haskell functions to be called from JavaScript.
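
For a flavour of the new interface, here is a minimal sketch of calling a JavaScript function from Haskell (my own example based on a reading of the API; the names startGUI, defaultConfig, runFunction and ffi are assumptions about this version):

import Graphics.UI.Threepenny.Core

-- Open a browser window and call into its JavaScript engine;
-- ffi splices the Haskell argument in for the %1 placeholder.
main :: IO ()
main = startGUI defaultConfig $ \_window ->
    runFunction $ ffi "alert(%1)" ("Hello from Haskell!" :: String)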

However, the foreign calls are still made over an HTTP connection (Threepenny does not compile Haskell code to JavaScript). This presents some challenges, which I have tried to solve with the following design choices:

  • Garbage collection. I don’t know of any FFI that has attempted to implement cross-runtime garbage collection. The main problem is cyclic references, which happen very often in a GUI setting, where an event handler references a widget, which in turn references the event handler. In Threepenny, I have opted to leave garbage collection entirely to the Haskell side, because garbage collectors in current JavaScript engines are vastly inferior to what GHC provides. The module Foreign.RemotePtr gives you the necessary tools to keep track of objects on the JavaScript (“remote”) side where necessary.

  • Foreign exports. Since the browser and the HTTP server run concurrently, there is no shared “instruction pointer” that keeps track of whether you are currently executing code on the Haskell side or the JavaScript side. I have chosen to handle this in the following way: Threepenny supports synchronous calls to JavaScript functions, but Haskell functions can only be called as “asynchronous event handlers” from the JavaScript side, i.e. the calls are queued and they don’t return results.

  • Latency, fault tolerance. Being a GUI library, Threepenny assumes that both the browser and the Haskell code run on localhost, so all network problems are ignored. This is definitely not the right way to implement a genuine web application, but of course, you can abuse it for writing quick and dirty GUI apps over your local network (see the Chat.hs example).

To see Threepenny in action, have a look at the following applications:

Daniel Austin’s FNIStash
Editor for Torchlight 2 inventories.
Chaddai’s CurveProject
Plotting curves for math teachers.

Get the library here:

Note that the API is still in flux and is likely to change radically in the future. You’ll have to convert frequently or develop against a fixed version.

May 03, 2015 03:33 PM

May 02, 2015

Roman Cheplyaka

Smarter validation

Today we’ll explore different ways of handling and reporting errors in Haskell. We shall start with the well-known Either monad, proceed to a somewhat less common Validation applicative, and then improve its efficiency and user experience.

The article contains several exercises that will hopefully help you better understand the issues that are being addressed here.

Running example

{-# LANGUAGE GeneralizedNewtypeDeriving, KindSignatures, DataKinds,
             ScopedTypeVariables, RankNTypes, DeriveFunctor #-}
import Text.Printf
import Text.Read
import Control.Monad
import Control.Applicative
import Control.Applicative.Lift (Lift)
import Control.Arrow (left)
import Data.Functor.Constant (Constant)
import Data.Monoid
import Data.Traversable (sequenceA)
import Data.List (intercalate, genericTake, genericLength)
import Data.Proxy
import System.Exit
import System.IO
import GHC.TypeLits

Our running example will consist of reading a list of integer numbers from a file, one number per line, and printing their sum.

Here’s the simplest way to do this in Haskell:

printSum1 :: FilePath -> IO ()
printSum1 path = print . sum . map read . lines =<< readFile path

This code works as expected for a well-formed file; however, if a line in the file can’t be parsed as a number, we’ll get the unhelpful

Prelude.read: no parse

Either monad

Let’s rewrite our function to be aware of possible errors.

parseNum
  :: Int -- line number (for error reporting)
  -> String -- line contents
  -> Either String Integer
     -- either parsed number or error message
parseNum ln str =
  case readMaybe str of
    Just num -> Right num
    Nothing -> Left $
      printf "Bad number on line %d: %s" ln str

-- Print a message and exit
die :: String -> IO ()
die msg = do
  hPutStrLn stderr msg
  exitFailure

printSum2 :: FilePath -> IO ()
printSum2 path =
  either die print .
  liftM sum .
  sequence . zipWith parseNum [1..] .
  lines =<< readFile path

Now, upon reading a line that is not a number, we’d see something like

Bad number on line 2: foo

This is a rather standard usage of the Either monad, so I won’t get into details here. I’ll just note that there are two ways in which this version is different from the first one:

  1. We call readMaybe instead of read and, upon detecting an error, construct a helpful error message. For this reason, we keep track of the line number.
  2. Instead of throwing a runtime exception right away (using the error function), we return a pure Either value, and then combine these Eithers together using the Monad instance of Either.

The two changes are independent; there’s no reason why we couldn’t use error and get the same helpful error message. The exceptions emulated by the Either monad have the same semantics here as the runtime exceptions. The benefit of the pure formulation is that the semantics of runtime exceptions is built-in; but the semantics of the pure data is programmable, and we will take advantage of this fact below.

Validation applicative

You get a thousand-line file with numbers from your accountant. He asks you to sum them up because his enterprise software mysteriously crashes when trying to read it.

You accept the challenge, knowing that your Haskell program won’t let you down. The program tells you

Bad number on line 378: 12o0

— I see! Someone put o instead of zero. Let me fix it.

You locate the line 378 in your editor and replace 12o0 with 1200. Then you save the file, exit the editor, and re-run the program.

Bad number on line 380: 11i3

— Come on! There’s another similar mistake just two lines below. Except now 1 got replaced by i. If you told me about both errors from the beginning, I could fix them faster!

Indeed, there’s no reason why our program couldn’t try to parse every line in the file and tell us about all the mistakes at once.

Except now we can’t use the standard Monad and Applicative instances of Either. We need the Validation applicative.

The Validation applicative combines two Either values in such a way that, if they are both Left, their left values are combined with a monoidal operation. (In fact, even a Semigroup would suffice.) This allows us to collect errors from different lines.

newtype Validation e a = Validation { getValidation :: Either e a }
  deriving Functor

instance Monoid e => Applicative (Validation e) where
  pure = Validation . Right
  Validation a <*> Validation b = Validation $
    case a of
      Right va -> fmap va b
      Left ea -> either (Left . mappend ea) (const $ Left ea) b

The following example demonstrates the difference between the standard Applicative instance and the Validation one:

> let e1 = Left "error1"; e2 = Left " error2"
> e1 *> e2
Left "error1"
> getValidation $ Validation e1 *> Validation e2
Left "error1 error2"

A clever implementation of the same applicative functor exists inside the transformers package. Ross Paterson observes that this functor can be constructed as

type Errors e = Lift (Constant e)

(see Control.Applicative.Lift).

Anyway, let’s use this to improve our summing program.

printSum3 :: FilePath -> IO ()
printSum3 path =
  either (die . intercalate "\n") print .
  liftM sum .
  getValidation . sequenceA .
  map (Validation . left (\e -> [e])) .
  zipWith parseNum [1..] .
  lines =<< readFile path

Now a single invocation of the program shows all the errors it can find:

Bad number on line 378: 12o0
Bad number on line 380: 11i3

Exercise. Could we use Writer [String] to collect error messages?

Exercise. When appending lists, there is a danger of incurring quadratic complexity. Does that happen in the above function? Could it happen in a different function that uses the Validation applicative based on the list monoid?

Smarter Validation applicative

Next day your accountant sends you another thousand-line file to sum up. This time your terminal gets flooded by error messages:

Bad number on line 1: 27297.
Bad number on line 2: 11986.
Bad number on line 3: 18938.
Bad number on line 4: 22820.
...

You already see the problem: every number ends with a dot. This is trivial to diagnose and fix, and there is absolutely no need to print a thousand error messages.

In fact, there are two different reasons to limit the number of reported errors:

  1. User experience: it is unlikely that the user will pay attention to more than, say, 10 messages at once. If we try to display too many errors on a web page, it may get slow and ugly.
  2. Efficiency: if we agree it’s only worth printing the first 10 errors, then, once we gather 10 errors, there is no point processing the data further.

Turns out, each of the two goals outlined above will need its own mechanism.

Bounded lists

We first develop a list-like datatype which stores only the first n elements and discards anything else that may get appended. This primarily addresses our first goal, user experience, although it will be handy for achieving the second goal too.

Although for validation purposes we may settle with the limit of 10, it’s nice to make this a generic, reusable type with a flexible limit. So we’ll make the limit a part of the type, taking advantage of the type-level number literals.

Exercise. Think of the alternatives to storing the limit in the type. What are their pros and cons?

On the value level, we will base the new type on difference lists, to avoid the quadratic complexity issue that I alluded to above.

data BoundedList (n :: Nat) a =
  BoundedList
    !Integer -- current length of the list
    (Endo [a])

Exercise. Why is it important to cache the current length instead of computing it from the difference list?

Once we’ve figured out the main ideas (encoding the limit in the type, using difference lists, caching the current length), the actual implementation is straightforward.

singleton :: KnownNat n => a -> BoundedList n a
singleton a = fromList [a]

toList :: BoundedList n a -> [a]
toList (BoundedList _ (Endo f)) = f []

fromList :: forall a n . KnownNat n => [a] -> BoundedList n a
fromList lst = BoundedList (min len limit) (Endo (genericTake limit lst ++))
  where
    limit = natVal (Proxy :: Proxy n)
    len = genericLength lst

instance KnownNat n => Monoid (BoundedList n a) where
  mempty = BoundedList 0 mempty
  mappend b1@(BoundedList l1 f1) (BoundedList l2 f2)
    | l1 >= limit = b1
    | l1 + l2 <= limit = BoundedList (l1 + l2) (f1 <> f2)
    | otherwise = BoundedList limit (f1 <> Endo (genericTake (limit - l1)) <> f2)
    where
      limit = natVal (Proxy :: Proxy n)

full :: forall a n . KnownNat n => BoundedList n a -> Bool
full (BoundedList l _) = l >= natVal (Proxy :: Proxy n)

null :: BoundedList n a -> Bool
null (BoundedList l _) = l <= 0
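
As a quick sanity check of the truncation behaviour, here’s a hypothetical GHCi session (assuming the definitions above are in scope):

> toList (fromList [1..20] :: BoundedList 10 Int)
[1,2,3,4,5,6,7,8,9,10]
> toList (fromList [1,2,3] <> fromList [4..15] :: BoundedList 5 Int)
[1,2,3,4,5]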

SmartValidation

Now we will build the smart validation applicative which stops doing work when it doesn’t make sense to collect errors further anymore. This is a balance between the Either applicative, which can only store a single error, and Validation, which collects all of them.

Implementing such an applicative functor is not as trivial as it may appear at first. In fact, before reading the code below, I recommend doing the following

Exercise. Try implementing a type and an applicative instance for it which adheres to the above specification.

Did you try it? Did you succeed? This is not a rhetorical question, I am actually interested, so let me know. Is your implementation the same as mine, or is it simpler, or more complicated?

Alright, here’s my implementation.

newtype SmartValidation (n :: Nat) e a = SmartValidation
  { getSmartValidation :: forall r .
      Either (BoundedList n e) (a -> r) -> Either (BoundedList n e) r }
  deriving Functor

instance KnownNat n => Applicative (SmartValidation n e) where
  pure x = SmartValidation $ \k -> k <*> Right x
  SmartValidation a <*> SmartValidation b = SmartValidation $ \k ->
    let k' = fmap (.) k in
    case a k' of
      Left errs | full errs -> Left errs
      r -> b r

And here are some functions to construct and analyze SmartValidation values.

-- Convert SmartValidation to Either
fatal :: SmartValidation n e a -> Either [e] a
fatal = left toList . ($ Right id) . getSmartValidation

-- Convert Either to SmartValidation
nonFatal :: KnownNat n => Either e a -> SmartValidation n e a
nonFatal a = SmartValidation $ (\k -> k <+> left singleton a)

-- like <*>, but mappends the errors
(<+>)
  :: Monoid e
  => Either e (a -> b)
  -> Either e a
  -> Either e b
a <+> b = case (a,b) of
  (Right va, Right vb) -> Right $ va vb
  (Left e,   Right _)  -> Left e
  (Right _,  Left e)   -> Left e
  (Left e1,  Left e2)  -> Left $ e1 <> e2

Exercise. Work out what fmap (.) k does in the definition of <*>.

Exercise. In the definition of <*>, should we check whether k is full before evaluating a k'?

Exercise. We developed two mechanisms — BoundedList and SmartValidation, which seem to do about the same thing on different levels. Would any one of these two mechanisms suffice to achieve both our goals, user experience and efficiency, when there are many errors being reported?

Exercise. If the SmartValidation applicative was based on ordinary lists instead of difference lists, would we be less or more likely to run into the quadratic complexity problem compared to simple Validation?
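
Finally, putting the pieces together, here’s a sketch of how the summing program could cap its report at 10 errors. This is my own untested example (printSum4 is a hypothetical name), assuming all the definitions above are in scope:

printSum4 :: FilePath -> IO ()
printSum4 path =
  either (die . intercalate "\n") print .
  liftM sum .
  fatal .
  (sequenceA :: [SmartValidation 10 String Integer]
             -> SmartValidation 10 String [Integer]) .
  map nonFatal .
  zipWith parseNum [1..] .
  lines =<< readFile path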

Conclusion

Although the Validation applicative is known among Haskellers, the need to limit the number of errors it produces is rarely (if ever) discussed. Implementing an applicative functor that limits the number of errors and avoids doing extra work is somewhat tricky. Thus, I am happy to share my solution and curious about how other people have dealt with this problem.

May 02, 2015 08:00 PM

Edward Z. Yang

Width-adaptive XMonad layout

My usual laptop setup is a wide monitor, plus my laptop screen as a secondary monitor. For a long time, I had two XMonad layouts: one full screen layout for my laptop monitor (I use big fonts to go easy on the eyes) and a two-column layout when I'm on the big screen.

But I had an irritating problem: if I switched a workspace from the small screen to the big screen, XMonad would still be using the full screen layout, and I would have to Alt-Tab my way into the two column layout. To add insult to injury, if I moved it back, I'd have to Alt-Tab once again.

After badgering the fine folks on #xmonad, I finally wrote an extension to automatically switch layout based on screen size! Here it is:

{-# LANGUAGE FlexibleInstances, MultiParamTypeClasses #-}

-----------------------------------------------------------------------------
-- |
-- Module      :  XMonad.Layout.PerScreen
-- Copyright   :  (c) Edward Z. Yang
-- License     :  BSD-style (see LICENSE)
--
-- Maintainer  :  <[email protected]>
-- Stability   :  unstable
-- Portability :  unportable
--
-- Configure layouts based on the width of your screen; use your
-- favorite multi-column layout for wide screens and a full-screen
-- layout for small ones.
-----------------------------------------------------------------------------

module XMonad.Layout.PerScreen
    ( -- * Usage
      -- $usage
      PerScreen,
      ifWider
    ) where

import XMonad
import qualified XMonad.StackSet as W

import Data.Maybe (fromMaybe)

-- $usage
-- You can use this module by importing it into your ~\/.xmonad\/xmonad.hs file:
--
-- > import XMonad.Layout.PerScreen
--
-- and modifying your layoutHook as follows (for example):
--
-- > layoutHook = ifWider 1280 (Tall 1 (3/100) (1/2) ||| Full) Full
--
-- Replace any of the layouts with any arbitrarily complicated layout.
-- ifWider can also be used inside other layout combinators.

ifWider :: (LayoutClass l1 a, LayoutClass l2 a)
               => Dimension   -- ^ target screen width
               -> (l1 a)      -- ^ layout to use when the screen is wide enough
               -> (l2 a)      -- ^ layout to use otherwise
               -> PerScreen l1 l2 a
ifWider w = PerScreen w False

data PerScreen l1 l2 a = PerScreen Dimension Bool (l1 a) (l2 a) deriving (Read, Show)

-- | Construct new PerScreen values with possibly modified layouts.
mkNewPerScreenT :: PerScreen l1 l2 a -> Maybe (l1 a) ->
                      PerScreen l1 l2 a
mkNewPerScreenT (PerScreen w _ lt lf) mlt' =
    (\lt' -> PerScreen w True lt' lf) $ fromMaybe lt mlt'

mkNewPerScreenF :: PerScreen l1 l2 a -> Maybe (l2 a) ->
                      PerScreen l1 l2 a
mkNewPerScreenF (PerScreen w _ lt lf) mlf' =
    (\lf' -> PerScreen w False lt lf') $ fromMaybe lf mlf'

instance (LayoutClass l1 a, LayoutClass l2 a, Show a) => LayoutClass (PerScreen l1 l2) a where
    runLayout (W.Workspace i p@(PerScreen w _ lt lf) ms) r
        | rect_width r > w    = do (wrs, mlt') <- runLayout (W.Workspace i lt ms) r
                                   return (wrs, Just $ mkNewPerScreenT p mlt')
        | otherwise           = do (wrs, mlt') <- runLayout (W.Workspace i lf ms) r
                                   return (wrs, Just $ mkNewPerScreenF p mlt')

    handleMessage (PerScreen w bool lt lf) m
        | bool      = handleMessage lt m >>= maybe (return Nothing) (\nt -> return . Just $ PerScreen w bool nt lf)
        | otherwise = handleMessage lf m >>= maybe (return Nothing) (\nf -> return . Just $ PerScreen w bool lt nf)

    description (PerScreen _ True  l1 _) = description l1
    description (PerScreen _ _     _ l2) = description l2

I'm going to submit it to xmonad-contrib, if I can figure out their darn patch submission process...

by Edward Z. Yang at May 02, 2015 04:36 AM

May 01, 2015

Danny Gratzer

Bracket Abstraction: The Smallest PL You've Ever Seen

Posted on May 1, 2015
Tags: types, haskell

It’s well known that lambda calculus is an extremely small, Turing Complete language. In fact, most programming languages over the last 5 years have grown some (typed and/or broken) embedding of lambda calculus with aptly named lambdas.

This is wonderful and everything, but lambda calculus is actually a little complicated. It’s centred around binding and substituting for variables; while this is elegant, it’s a little difficult to formalize mathematically. It’s natural to wonder whether we can avoid dealing with variables by building up all our lambda terms from a special privileged few.

These systems (sometimes called combinator calculi) are quite pleasant to model formally, but how do we know that our system is complete? In this post I’d like to go over translating any lambda calculus program into a particular combinator calculus, SK calculus.

What is SK Combinator Calculus?

SK combinator calculus is a language with exactly 3 types of expressions.

  1. We can apply one term to another, e e,
  2. We have one term s
  3. We have another term k

Besides the obvious ones, there are two main rules for this system:

  1. s a b c = (a c) (b c)
  2. k a b = a

And that’s it. What makes SK calculus so remarkable is how minimal it is. We now show that it’s Turing complete by translating lambda calculus into it.

Bracket Abstraction

First things first, let’s just define how to represent both SK calculus and lambda calculus in our Haskell program.

    data Lam = Var Int | Ap Lam Lam | Lam Lam
    data SK  = S | K | SKAp SK SK
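
Before translating anything, it may help to see the two reduction rules as code. Here’s a minimal one-step reducer (my own sketch, not part of the original post):

    -- Implements the two rules: s a b c = (a c) (b c) and k a b = a.
    -- Returns Nothing when neither rule applies at the root of the term.
    reduce :: SK -> Maybe SK
    reduce (SKAp (SKAp (SKAp S a) b) c) = Just (SKAp (SKAp a c) (SKAp b c))
    reduce (SKAp (SKAp K a) _)          = Just a
    reduce _                            = Nothing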

Now we begin by defining a translation from a simplified lambda calculus to SK calculus. This simplified calculus is just SK supplemented with variables. By defining this step, the actual transformation becomes remarkably crisp.

    data SKH = Var' Int | S' | K' | SKAp' SKH SKH

Note that SKH has variables, but no way to bind them. In order to remove a variable, we have bracket. bracket has the property that replacing Var 0 in a term, e, with a term, e', is the same as SKAp (bracket e) e'.

    -- Remove one variable
    bracket :: SKH -> SKH
    bracket (Var' 0) = SKAp' (SKAp' S' K') K'
    bracket (Var' i) = Var' (i - 1)
    bracket (SKAp' l r) = SKAp' (SKAp' S' (bracket l)) (bracket r)
    bracket x = x

If we’re at Var 0 we replace the variable with the term s k k. This has the property that (s k k) A = A. It’s traditional to abbreviate s k k as i (leading to the name SKI calculus) but i is strictly unnecessary as we can see.

If we’re at an application, we do something really clever. We have two terms which both have a free variable, so we bracket them and use S to supply the free variable to both of them! Remember that

s (bracket A) (bracket B) C = ((bracket A) C) ((bracket B) C)

which is exactly what we require by the specification of bracket.

Now that we have a way to remove free variables from an SKH term, we can close off a term with no free variables to give back a normal SK term.

    close :: SKH -> SK
    close (Var' _) = error "Not closed"
    close S' = S
    close K' = K
    close (SKAp' l r) = SKAp (close l) (close r)

Now our translator can be written nicely.

    l2h :: Lam -> SKH
    l2h (Var i) = Var' i
    l2h (Ap l r) = SKAp' (l2h l) (l2h r)
    l2h (Lam h) = bracket (l2h h)

    translate :: Lam -> SK
    translate = close . l2h

l2h is the main worker in this function. It works across SKH’s because it needs to deal with open terms during the translation. Every time we go under a binder, we call bracket afterwards, removing the free variable we just introduced.

This means that if we call l2h on a closed lambda term we get back a closed SKH term. This justifies using close after the toplevel call to l2h in translate which wraps up our conversion.
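
As a quick check of the whole pipeline, translating the identity function λx. x recovers the i combinator (a hypothetical GHCi session, assuming we add deriving Show to SK):

    > translate (Lam (Var 0))
    SKAp (SKAp S K) K

That is, s k k.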

For funsies I decided to translate the Y combinator and got back this mess

(s ((s ((s s) ((s k) k))) ((s ((s s) ((s ((s s) k)) k))) ((s ((s s) k)) k))))
((s ((s s) ((s k) k))) ((s ((s s) ((s ((s s) k)) k))) ((s ((s s) k)) k)))

Completely useless, but kinda fun to look at. More interestingly, the canonical nonterminating lambda term is λx. x x which gives back s i i, much more readable.

Wrap Up

Now that we’ve performed this translation we have a very nice proof of the Turing completeness of SK calculus. This has some nice upshots: folks who study things like realizability models of constructive logics use Partial Combinatory Algebras as a model of computation. These are essentially algebraic models of SK calculus.

If nothing else, it’s really quite crazy that such a small language is capable of simulating any computable function over numbers.


May 01, 2015 12:00 AM

April 30, 2015

Douglas M. Auclair (geophf)

April 2015 1HaskellADay Problems and Solutions

April 2015
  • April 30th, 2015: "SHOW ME DA MONAY!" http://lpaste.net/3352992723589136384 for today's #haskell problem 
    Simple? Sure! Solution? Yes. http://lpaste.net/7331259237240143872
  • April 29th, 2015: We take stock of the Stochastic Oscillator http://lpaste.net/8447434917217828864 for today's #haskell problem #trading We are so partially stoched for a partial solution for the Stochastic Oscillator http://lpaste.net/4307607333212520448 
  • April 28th, 2015: Today's #haskell puzzle as a ken-ken solver http://lpaste.net/6211501623257071616 a solution (beyond my ... ken) is defined at http://lpaste.net/929006498481176576
  • April 27th, 2015: Rainy days and Mondays do not stop the mail, nor today's #haskell problem! http://lpaste.net/6468251516921708544 The solution posted at http://lpaste.net/6973841984536444928 … shows us view-patterns and how to spell the word 'intercalate'.
  • April 24th, 2015: Bidirectionally (map) yours! for today's #haskell problem http://lpaste.net/1645129197724631040 A solution to this problem is posted at http://lpaste.net/540860373977268224 
  • April 23rd, 2015: Today's #haskell problem looks impossible! http://lpaste.net/6861042906254278656 So this looks like this is a job for ... KIM POSSIBLE! YAY! @sheshanaag offers a solution at http://lpaste.net/131309 .
  • April 22nd, 2015: "I need tea." #BritishProblems "I need clean data" #EveryonesPipeDream "Deletia" today's #haskell problem http://lpaste.net/2343021306984792064 Deletia solution? Solution deleted? Here ya go! http://lpaste.net/5973874852434542592
  • April 21st, 2015: In which we learn about Tag-categories, and then Levenshtein distances between them http://lpaste.net/2118427670256549888 for today's #haskell problem Okay, wait: is it a group of categories or a category of groups? me confused! A solution to today's #haskell at http://lpaste.net/8855539857825464320
  • April 20th, 2015: Today we can't see the forest for the trees, so let's change that http://lpaste.net/3949027037724803072 A solution to our first day in the tag-forest http://lpaste.net/4634897048192155648 ... make sure you're marking your trail with breadcrumbs!
  • April 17th, 2015: No. Wait. You wanted line breaks with that, too? Well, why didn't you say so in the first place? http://lpaste.net/8638783844922687488 Have some curry with a line-breaky solution at http://lpaste.net/8752969226978852864
  • April 16th, 2015: "more then." #okaythen Sry, not sry, but here's today's #haskell problem: http://lpaste.net/6680706931826360320 I can't even. lolz. rofl. lmao. whatevs. And a big-ole-blob-o-words is given as the solution http://lpaste.net/2810223588836114432 for today's #haskell problem. It ain't pretty, but... there it is
  • April 15th, 2015: Poseidon's trident or Andrew's Pitchfork analysis, if you prefer, for today's #haskell problem http://lpaste.net/5072355173985157120
  • April 14th, 2015: Refining the SMA-trend-ride http://lpaste.net/3856617311658049536 for today's #haskell problem. Trending and throttling doesn't ... quite get us there, but ... solution: http://lpaste.net/9223292936442085376
  • April 13th, 2015: In today's #haskell problem we learn zombies are comonadic, and like eating SMA-brains. http://lpaste.net/8924989388807471104 Yeah. That. Hold the zombies, please! (Or: when $40k net profit is not enough by half!) http://lpaste.net/955577567060951040
  • April 10th, 2015: Today's #haskell problem delivered with much GRAVITAS, boils down to: don't be a dumb@$$ when investing http://lpaste.net/5255378926062010368 #KeepinItReal The SMA-advisor is REALLY chatty, but how good is it? TBD, but here's a very simple advisor: http://lpaste.net/109712 Backtesting for this strategy is posted at http://lpaste.net/109687 (or: how a not so good buy/sell strategy give you not so good results!)
  • April 9th, 2015: A bit of analysis of historical stock data http://lpaste.net/6960188425236381696 for today's #haskell problem A solution to the SMA-analyses part is posted at http://lpaste.net/3427480809555099648 
  • April 8th, 2015: MOAR! MOAR! You clamor for MOAR real-world #haskell problems, and how can I say no? http://lpaste.net/5198207211930648576 Downloading stock screens Hint: get the screens from a web service; look at, e.g.: https://code.google.com/p/yahoo-finance-managed/wiki/YahooFinanceAPIs A 'foldrM'-solution to this problem is posted at http://lpaste.net/2729747257602605056
  • April 7th, 2015: Looking at a bit of real-world #haskell for today's stock (kinda-)screen-scraping problem at http://lpaste.net/5737110678548774912 Hint: perhaps you'd like to solve this problem using tagsoup? https://hackage.haskell.org/package/tagsoup *GASP* You mean ... it actually ... works? http://lpaste.net/1209131365107236864 A MonadWriter-y tagsoup-y Monoidial-MultiMap-y solution
  • April 6th, 2015: What do three men teaching all of high school make, beside today's #haskell problem? http://lpaste.net/667230964799242240 Tired men, of course! Thanks, George Boole! Three Men and a High School, SOLVED! http://lpaste.net/7942804585247145984
  • April 3rd, 2015: reverseR that list like a Viking! Rrrrr! for today's problem http://lpaste.net/8513906085948555264 … #haskell Totes cheated to get you the solution http://lpaste.net/1880031563417124864 used a library that I wrote, so, like, yeah, totes cheated! ;)
  • April 2nd, 2015: We're breaking new ground for today's #haskell problem: let's reverse lists... relationally. And tag-type some values http://lpaste.net/389291192849793024 After several fits and starts @geophf learns how to reverse a list... relationally http://lpaste.net/7875722904095162368 and can count to the nr 5, as well
  • April 1st, 2015: Take a drink of today's #haskell problem: love potion nr9 http://lpaste.net/435384893539614720 because, after all: all we need is love, la-di-dah-di-da! A solution can be found au shaque d'amour posted at http://lpaste.net/6859866252718899200

          by geophf ([email protected]) at April 30, 2015 05:21 PM

          Brandon Simmons

          Announcing hashabler: like hashable only more so

          I’ve just released the first version of a haskell library for principled, cross-platform & extensible hashing of types, which includes an implementation of the FNV-1a algorithm. It is available on hackage, and can be installed with:

          cabal install hashabler
          

          hashabler is a rewrite of the hashable library by Milan Straka and Johan Tibell, having the following goals:

          • Extensibility; it should be easy to implement a new hashing algorithm on any Hashable type, for instance if one needed more hash bits

          • Honest hashing of values, and principled hashing of algebraic data types (see e.g. #30)

          • Cross-platform consistent hash values, with a versioning guarantee. Where possible we ensure morally identical data hashes to identical values regardless of processor word size and endianness.

          • Make implementing identical hash routines in other languages as painless as possible. We provide an implementation of a simple hashing algorithm (FNV-1a) and make an effort to define Hashable instances in a way that is well-documented and sensible, so that e.g. one can (hopefully) easily implement a string hashing routine in JavaScript that will match the way we hash strings here (see the sketch just below this list).
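
          For reference, here is the textbook 32-bit FNV-1a over a list of bytes, as a minimal sketch; it illustrates the algorithm itself and is not necessarily hashabler’s exact internal code:

          import Data.Bits (xor)
          import Data.List (foldl')
          import Data.Word (Word32, Word8)

          -- 32-bit FNV-1a: start from the offset basis, then for each
          -- byte XOR it in and multiply by the FNV prime (Word32
          -- arithmetic wraps at 2^32).
          fnv1a32 :: [Word8] -> Word32
          fnv1a32 = foldl' step 2166136261
            where
              step h b = (h `xor` fromIntegral b) * 16777619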

          Motivation

          I started writing a fast concurrent bloom filter variant, but found none of the existing libraries fit my needs. In particular hashable was deficient in a number of ways:

          • The number of hash bits my data structure requires can vary based on user parameters, and possibly be more than the 64-bits supported by hashable

          • Users might like to serialize their bloomfilter and store it, pass it to other machines, or work with it in a different language, so we need

            • hash values that are consistent across platforms
            • some guarantee of consistency across library versions

          I was also very concerned about the general approach taken for algebraic types, which results in collisions, the use of “hashing” numeric values to themselves, dubious combining functions, etc. It wasn’t at all clear to me how to ensure my data structure wouldn’t be broken if I used hashable. See below for a very brief investigation into hash goodness of the two libraries.

          There isn’t interest in supporting my use case or addressing these issues in hashable (see e.g. #73, #30, and #74) and apparently hashable is working in practice for people, but maybe this new package will be useful for some other folks.

          Hash goodness of hashable and hashabler, briefly

          Hashing-based data structures assume some “goodness” of the underlying hash function, and may depend on the goodness of the hash function in ways that aren’t always clear or well-understood. “Goodness” also seems to be somewhat subjective, but can be expressed statistically in terms of bit-independence tests, and avalanche properties, etc.; various things that e.g. smhasher looks at.

          I thought for fun I’d visualize some distributions, as that’s easier for my puny brain to understand than statistics. We visualize 32-bit hashes by quantizing by 64x64 and mapping that to a pixel following a hilbert curve to maintain locality of hash values. Then when multiple hash values fall within the same 64x64 pixel, we darken the pixel, and finally mark it red if we can’t go any further to indicate clipping.

          It’s easy to cherry-pick inputs that will result in some bad behavior by hashable, but below I’ve tried to show some fairly realistic examples of strange or less-good distributions in hashable. I haven’t analysed these at all. Images are cropped ¼ size, but are representative of the whole 32-bit range.

          First, here’s a hash of all [Ordering] of size 10 (~59K distinct values):

          Hashabler:

          Hashable:

          Next here’s the hash of one million (Word8,Word8,Word8) (having a domain ~ 16 mil):

          Hashabler:

          Hashable:

          I saw no difference when hashing english words, which is good news as that’s probably a very common use-case.

          Please help

          If you could test the library on a big endian machine and let me know how it goes, that would be great. See here.

          You can also check out the TODOs scattered throughout the code and send pull requests. I may not be able to get to them until June, but will be very grateful!

          P.S. hire me

          I’m always open to interesting work or just hearing about how companies are using haskell. Feel free to send me an email at [email protected]

          April 30, 2015 03:03 PM

          Jan Stolarek

          Smarter conditionals with dependent types: a quick case study

          Find the type error in the following Haskell expression:

          if null xs then tail xs else xs

          You can’t, of course: this program is obviously nonsense unless you’re a typechecker. The trouble is that only certain computations make sense if the null xs test is True, whilst others make sense if it is False. However, as far as the type system is concerned, the type of the then branch is the type of the else branch is the type of the entire conditional. Statically, the test is irrelevant. Which is odd, because if the test really were irrelevant, we wouldn’t do it. Of course, tail [] doesn’t go wrong – well-typed programs don’t go wrong – so we’d better pick a different word for the way they do go.

          The above quote is an opening paragraph of Conor McBride’s “Epigram: Practical Programming with Dependent Types” paper. As always, Conor makes a good point – this test is completely irrelevant for the typechecker although it is very relevant at run time. Clearly the type system fails to accurately approximate runtime behaviour of our program. In this short post I will show how to fix this in Haskell using dependent types.

          The problem is that the types used in this short program carry no information about the manipulated data. This is true both for Bool returned by null xs, which contains no evidence of the result, as well as lists, that store no information about their length. As some of you probably realize the latter is easily fixed by using vectors, ie. length-indexed lists:

          data N = Z | S N  -- natural numbers
           
          data Vec a (n :: N) where
            Nil  :: Vec a Z
            Cons :: a -> Vec a n -> Vec a (S n)

          The type of vector encodes its length, which means that the type checker can now be aware whether it is dealing with an empty vector. Now let’s write null and tail functions that work on vectors:

          vecNull :: Vec a n -> Bool
          vecNull Nil        = True
          vecNull (Cons _ _) = False
           
          vecTail :: Vec a (S n) -> Vec a n
          vecTail (Cons _ tl) = tl

          vecNull is nothing surprising – it returns True for an empty vector and False for a non-empty one. But the tail function for vectors differs from its implementation for lists. tail from Haskell's standard prelude is not defined for an empty list, so calling tail [] results in an exception (that would be the case in Conor's example). But the type signature of vecTail requires that the input vector is non-empty. As a result we can rule out the Nil case. That also means that Conor's example will no longer typecheck [1]. But how can we write a correct version of this example, one that removes the first element of a vector only when it is non-empty? Here's an attempt:

          shorten :: Vec a n -> Vec a m
          shorten xs = case vecNull xs of
                         True  -> xs
                         False -> vecTail xs

          That however won't compile: now that we've written a type-safe tail function, the typechecker requires a proof that the vector passed to it as an argument is non-empty. The weak link in this code is the vecNull function. It tests whether a vector is empty but delivers no type-level proof of the result. In other words we need:

          vecNull' :: Vec a n -> IsNull n

          ie. a function whose result type carries information about the length of the list. This data type will have a runtime representation isomorphic to Bool, ie. it will be an enumeration with two constructors, and the type index will correspond to the length of a vector:

          data IsNull (n :: N) where
               Null    :: IsNull Z
               NotNull :: IsNull (S n)

          Null represents empty vectors, NotNull represents non-empty ones. We can now implement a version of vecNull that carries proof of the result at the type level:

          vecNull' :: Vec a n -> IsNull n
          vecNull' Nil        = Null
          vecNull' (Cons _ _) = NotNull

          The type signature of vecNull' says that the return type must have the same index as the input vector. Pattern matching on the Nil case provides the type checker with the information that the n index of Vec is Z. This means that the return value in this case must be Null – the NotNull constructor is indexed with S and that obviously does not match Z. Similarly in the Cons case the return value must be NotNull. However, replacing vecNull in the definition of shorten with our new vecNull' will again result in a type error. The problem comes from the type signature of shorten:

          shorten :: Vec a n -> Vec a m

          By indexing input and output vectors with different length indices – n and m – we tell the typechecker that these are completely unrelated. But that is not true! Knowing the input length n we know exactly what the result should be: if the input vector is empty the result vector is also empty; if the input vector is not empty it should be shortened by one. Since we need to express this at the type level we will use a type family:

          type family Pred (n :: N) :: N where
              Pred Z     = Z
              Pred (S n) = n

          (In a fully-fledged dependently-typed language we would write a normal function and then apply it at the type level.) Now we can finally write:

          shorten :: Vec a n -> Vec a (Pred n)
          shorten xs = case vecNull' xs of
                         Null    -> xs
                         NotNull -> vecTail xs

          This definition should not go wrong. Trying to swap the expressions in the branches will result in a type error.
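
          To see the new signature at work, here are two definitions that typecheck precisely because Pred reduces as expected (my own addition; the names are arbitrary):

          shortened :: Vec Char (S Z)
          shortened = shorten (Cons 'a' (Cons 'b' Nil))

          stillEmpty :: Vec Char Z
          stillEmpty = shorten Nil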

          1. Assuming we don’t abuse Haskell’s unsoundness as logic, eg. by using undefined.

          by Jan Stolarek at April 30, 2015 02:42 PM

          Diagrams

          Diagrams + Cairo + Gtk + Mouse picking, Reloaded

          by Brent Yorgey on April 30, 2015

          Tagged as: cairo, GTK, mouse, coordinates, transformation, features, 1.3.

          Diagrams + Cairo + Gtk + Mouse picking, reloaded

          A little over a year ago, Christopher Mears wrote a nice article on how to match up mouse clicks in a GTK window with parts of a diagram. The only downside was that to make it work, you had to explicitly construct the diagram in such a way that its coordinate system precisely matched the coordinates of the window you wanted to use, so that there was essentially no "translation" to do. This was unfortunate, since constructing a diagram in a particular global coordinate system is not a very "diagrams-y" sort of thing to do. However, the 1.3 release of diagrams includes a new feature that makes matching up mouse clicks and diagrams much easier and more idiomatic, and I thought it would be worth updating Chris's original example to work more idiomatically in diagrams 1.3. The complete code is listed at the end.

          First, here's how we construct the house. This is quite different from the way Chris did it; I have tried to make it more idiomatic by focusing on local relationships of constituent pieces, rather than putting everything at absolute global coordinates. We first create all the constituent pieces:

          > -- The diagram to be drawn, with features tagged by strings.
          > prettyHouse :: QDiagram Cairo V2 Double [String]
          > prettyHouse = house
          >   where
          >     roof    = triangle 1   # scaleToY 0.75 # centerY # fc blue
          >     door    = rect 0.2 0.4 # fc red
          >     handle  = circle 0.02  # fc black
          >     wall    = square 1     # fc yellow
          >     chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
          >             # closeTrail # strokeT # fc green
          >             # centerX
          >             # named "chimney"
          >     smoke = mconcat
          >       [ circle 0.05 # translate v
          >       | v <- [ zero, 0.05 ^& 0.15 ]
          >       ]
          >       # fc grey

          We then put the pieces together, labelling each by its name with the value function. Diagrams can be valuated by any monoid; when two diagrams are combined, the value at each point will be the mappend of the values of the two component diagrams. In this case, each point in the final diagram will accumulate a list of Strings corresponding to the pieces of the house which are under that point. Note how we make use of combinators like vcat and mconcat, alignments like alignB, snugL and snugR, and the use of a named subdiagram (the chimney) to position the components relative to each other. (You can click on any of the above function names to go to their documentation!)

          >     house = vcat
          >       [ mconcat
          >         [ roof    # snugR                   # value ["roof"]
          >         , chimney # snugL                   # value ["chimney"]
          >         ]
          >         # centerX
          >       , mconcat
          >         [ handle  # translate (0.05 ^& 0.2) # value ["handle"]
          >         , door    # alignB                  # value ["door"]
          >         , wall    # alignB                  # value ["wall"]
          >         ]
          >       ]
          >       # withName "chimney" (\chim ->
          >           atop (smoke # moveTo (location chim) # translateY 0.4
          >                       # value ["smoke"]
          >                )
          >         )
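
To make the query semantics concrete: a point lying inside the door is also covered by the wall behind it, so sampling there mappends the two lists. (Here doorPt is a hypothetical point of my own, not part of the program.)

    sample prettyHouse doorPt  -- ==> ["door", "wall"]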

          Now, when we render the diagram to a GTK window, we can get diagrams to give us an affine transformation that mediates between the diagram's local coordinates and the GTK window's coordinates. I'll just highlight a few pieces of the code; the complete listing can be found at the end of the post. We first create an IORef to hold the transformation:

          >   gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))

          We initialize it with the identity transformation. We use the renderDiaT function to get not only a rendering action but also the transformation from diagram to GTK coordinates; we save the inverse of the transformation in the IORef (since we will want to convert from GTK to diagram coordinates):

          >     let (dia2gtk, (_,r)) = renderDiaT Cairo
          >                              (CairoOptions "" (mkWidth 250) PNG False)
          >                              prettyHouse
          >
          >     -- store the inverse of the diagram -> window coordinate transformation
          >     -- for later use in interpreting mouse clicks
          >     writeIORef gtk2DiaRef (inv dia2gtk)

          (Note that if it is possible for the first motion notify event to happen before the expose event, then such mouse motions will be computed to correspond to the wrong part of the diagram, but who cares.) Now, when we receive a mouse click, we apply the stored transformation to convert to a point in diagram coordinates, and pass it to the sample function to extract a list of house components at that location.

          >     (x,y) <- eventCoordinates
          >
          >     -- transform the mouse click back into diagram coordinates.
          >     gtk2Dia <- liftIO $ readIORef gtk2DiaRef
          >     let pt' = transform gtk2Dia (p2 (x,y))
          >
          >     liftIO $ do
          >       putStrLn $ show (x,y) ++ ": "
          >                    ++ intercalate " " (sample prettyHouse pt')

          The final product ends up looking and behaving identically to the video that Chris made.

          Finally, here's the complete code. A lot of it is just boring standard GTK setup.

          > import           Control.Monad                   (void)
          > import           Control.Monad.IO.Class          (liftIO)
          > import           Data.IORef
          > import           Data.List                       (intercalate)
          > import           Diagrams.Backend.Cairo
          > import           Diagrams.Backend.Cairo.Internal
          > import           Diagrams.Prelude
          > import           Graphics.UI.Gtk
          >
          > main :: IO ()
          > main = do
          >   -- Ordinary Gtk setup.
          >   void initGUI
          >   w <- windowNew
          >   da <- drawingAreaNew
          >   w `containerAdd` da
          >   void $ w `on` deleteEvent $ liftIO mainQuit >> return True
          >
          >   -- Make an IORef to hold the transformation from window to diagram
          >   -- coordinates.
          >   gtk2DiaRef <- (newIORef mempty :: IO (IORef (T2 Double)))
          >
          >   -- Render the diagram on the drawing area and save the transformation.
          >   void $ da `on` exposeEvent $ liftIO $ do
          >     dw <- widgetGetDrawWindow da
          >
          >     -- renderDiaT returns both a rendering result as well as the
          >     -- transformation from diagram to output coordinates.
          >     let (dia2gtk, (_,r)) = renderDiaT Cairo
          >                              (CairoOptions "" (mkWidth 250) PNG False)
          >                              prettyHouse
          >
          >     -- store the inverse of the diagram -> window coordinate transformation
          >     -- for later use in interpreting mouse clicks
          >     writeIORef gtk2DiaRef (inv dia2gtk)
          >
          >     renderWithDrawable dw r
          >     return True
          >
          >   -- When the mouse moves, show the coordinates and the objects under
          >   -- the pointer.
          >   void $ da `on` motionNotifyEvent $ do
          >     (x,y) <- eventCoordinates
          >
          >     -- transform the mouse click back into diagram coordinates.
          >     gtk2Dia <- liftIO $ readIORef gtk2DiaRef
          >     let pt' = transform gtk2Dia (p2 (x,y))
          >
          >     liftIO $ do
          >       putStrLn $ show (x,y) ++ ": "
          >                    ++ intercalate " " (sample prettyHouse pt')
          >       return True
          >
          >   -- Run the Gtk main loop.
          >   da `widgetAddEvents` [PointerMotionMask]
          >   widgetShowAll w
          >   mainGUI
          >
          > -- The diagram to be drawn, with features tagged by strings.
          > prettyHouse :: QDiagram Cairo V2 Double [String]
          > prettyHouse = house
          >   where
          >     roof    = triangle 1   # scaleToY 0.75 # centerY # fc blue
          >     door    = rect 0.2 0.4 # fc red
          >     handle  = circle 0.02  # fc black
          >     wall    = square 1     # fc yellow
          >     chimney = fromOffsets [0 ^& 0.25, 0.1 ^& 0, 0 ^& (-0.4)]
          >             # closeTrail # strokeT # fc green
          >             # centerX
          >             # named "chimney"
          >     smoke = mconcat
          >       [ circle 0.05 # translate v
          >       | v <- [ zero, 0.05 ^& 0.15 ]
          >       ]
          >       # fc grey
          >     house = vcat
          >       [ mconcat
          >         [ roof    # snugR                  # value ["roof"]
          >         , chimney # snugL                  # value ["chimney"]
          >         ]
          >         # centerX
          >       , mconcat
          >         [ handle  # translate (0.05 ^& 0.2) # value ["handle"]
          >         , door    # alignB                  # value ["door"]
          >         , wall    # alignB                  # value ["wall"]
          >         ]
          >       ]
          >       # withName "chimney" (\chim ->
          >           atop (smoke # moveTo (location chim) # translateY 0.4
          >                       # value ["smoke"]
          >                )
          >         )

          by diagrams-discuss at April 30, 2015 12:00 AM

          Danny Gratzer

          Compiling With CPS

          Posted on April 30, 2015

Hello folks. It’s been a busy month, so I haven’t had much of a chance to write, but I think now’s a good time to talk about another compiler-related subject: continuation passing style conversion.

When you’re compiling a functional language (in a sane way), your compiler mostly consists of phases which run over the AST and simplify it. For example, in a language with pattern matching, it’s almost certainly the case that we can write something like

              case x of
                (1, 2) -> True
                (_, _) -> False

          Wonderfully concise code. However, it’s hard to compile nested patterns like that. In the compiler, we might simplify this to

              case x of
               (a, b) -> case a of
                           1 -> case b of
                                  2 -> True
                                  _ -> False
                           _ -> False

(Note to future me: write a pattern matching compiler.)

We’ve transformed our large nested pattern into a series of simpler, unnested patterns. The benefit is that these map more directly onto a series of conditionals (or jumps).

          Now one of the biggest decisions in any compiler is what to do with expressions. We want to get rid of complicated nested expressions because chances are our compilation target doesn’t support them. In my second to last post we transformed a functional language into something like SSA. In this post, we’re going to walk through a different intermediate representation: CPS.

          What is CPS

CPS is a restriction of how a functional language works. In CPS we don’t have nested expressions anymore. We instead have a series of lets which telescope out, each binding a “flat” expression. This is the process of “removing expressions” from our language. A compiler is probably targeting something with a much weaker notion of expressions (like assembly), and so we change our tree-like structure into something more linear.

Additionally, no functions return. Instead, they take a continuation, and when they’re about to return they pass their value to it instead. This means that conceptually, all functions are transformed from a -> b to (a, b -> void) -> void. Logically, this is actually a reasonable thing to do: it corresponds to mapping a proposition b to ¬ ¬ b. What’s cool here is that since each function call invokes a continuation instead of returning its result, we can imagine each function as transferring control over to some other part of the program instead of returning. This leads to a very slick and efficient way of implementing CPSed function calls, as we’ll see.

          This means we’d change something like

              fact n = if n == 0 then 1 else n * fact (n - 1)

          into

              fact n k =
                if n == 0
                then k 1
                else let n' = n - 1 in
                     fact n' (\r ->
                                    let r' = n * r in
                                    k r')

          To see what’s going on here we

          1. Added an extra argument to fact, its return continuation
          2. In the first branch, we pass the result to the continuation instead of returning it
          3. In the next branch, we lift the nested expression n - 1 into a flat let binding
          4. We add an extra argument to the recursive call, the continuation
5. In this continuation, we multiply the result of the recursive call by n (note that we do close over n here; this lambda is a real lambda)
          6. Finally, we pass the final result to the original continuation k.

          The only tree-style-nesting here comes from the top if expression, everything else is completely linear.

          Let’s formalize this process by converting Simply Typed Lambda Calculus (STLC) to CPS form.

          STLC to CPS

          First things first, we specify an AST for normal STLC.

              data Tp = Arr Tp Tp | Int deriving Show
          
              data Op = Plus | Minus | Times | Divide
          
    -- The Tp in App refers to the return type; it's helpful later
    data Exp a = App (Exp a) (Exp a) Tp
               | Lam Tp (Scope () Exp a)
                         | Num Int
                           -- No need for binding here since we have Minus
                         | IfZ (Exp a) (Exp a) (Exp a)
                         | Binop Op (Exp a) (Exp a)
                         | Var a

          We’ve supplemented our lambda calculus with natural numbers and some binary operations because it makes things a bit more fun. Additionally, we’re using bound to deal with bindings for lambdas. This means there’s a terribly boring monad instance lying around that I won’t bore you with.
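
(For completeness, that boilerplate typically looks something like the following sketch — my reconstruction, not the post’s actual code — assuming a derived Functor instance, ap from Control.Monad, and bound’s (>>>=) for substituting under the Scope.)

    instance Applicative Exp where
      pure  = Var
      (<*>) = ap  -- from Control.Monad

    instance Monad Exp where
      return = Var
      Var a       >>= f = f a
      App l r tp  >>= f = App (l >>= f) (r >>= f) tp
      Lam tp body >>= f = Lam tp (body >>>= f)  -- (>>>=) from Bound
      Num i       >>= _ = Num i
      IfZ i t e   >>= f = IfZ (i >>= f) (t >>= f) (e >>= f)
      Binop o l r >>= f = Binop o (l >>= f) (r >>= f)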

To convert to CPS, we first need to figure out how to convert our types. Since CPS functions never return, we want them to go to Void, the uninhabited type. However, since our language doesn’t allow Void outside of continuations, and doesn’t allow functions that don’t go to Void, let’s bundle them up into one new type Cont a, which is just a function from a to Void. This presents us with a problem, though: how do we turn an Arr a b into this style of function? It seems like our function should take two arguments, an a and a b -> Void, so that it can produce a Void of its own. However, this requires products, since currying isn’t possible under the restriction that all functions return Void! Therefore, we supplement our CPS language with pairs and projections for them.

          Now we can write the AST for CPS types and a conversion between Tp and it.

              data CTp = Cont CTp | CInt | CPair CTp CTp
          
              cpsTp :: Tp -> CTp
              cpsTp (Arr l r) = Cont $ CPair (cpsTp l) (Cont (cpsTp r))
              cpsTp Int = CInt

          The only interesting thing here is how we translate function types, but we talked about that above. Now for expressions.
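
For example, an Int -> Int function becomes a continuation that accepts an Int paired with an Int-consuming continuation:

    cpsTp (Arr Int Int)  ==  Cont (CPair CInt (Cont CInt))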

We want to define a new data type that encapsulates the restrictions of CPS. In order to do this we factor our data types into “flat expressions” and “CPS expressions”. Flat expressions are things like values and variables, while CPS expressions contain things like “jump to this continuation” or “branch on this flat expression”. Finally, there are let expressions for performing primitive operations and binding their results.

              data LetBinder a = OpBind Op (FlatExp a) (FlatExp a)
                               | ProjL a
                               | ProjR a
                               | Pair (FlatExp a) (FlatExp a)
          
              data FlatExp a = CNum Int | CVar a | CLam CTp a (CExp a)
          
              data CExp a = Let a (LetBinder a) (CExp a)
                          | CIf (FlatExp a) (CExp a) (CExp a)
                          | Jump (FlatExp a) (FlatExp a)
                          | Halt (FlatExp a)

Lets let us bind the results of a few “primitive operations” over values and variables to a fresh variable. This is where things like “incrementing a number” happen. Additionally, in order to create a pair or access its components we need to use a Let.

Notice that application is spelled Jump here, hinting that it really is just a jmp and doesn’t deal with the stack in any way. Since they’re all jumps, we cannot overflow the stack as we would with a normal calling convention. To seal off the chain of function calls we have Halt; it takes a FlatExp and returns it as the result of the program.

          Expressions here are also parameterized over variables but we can’t use bound with them (for reasons that deserve a blogpost-y rant :). Because of this we settle for just ensuring that each a is globally unique.

          So now instead of having a bunch of nested Exps, we have flat expressions which compute exactly one thing and linearize the tree of expressions into a series of flat ones with let binders. It’s still not quite “linear” since both lambdas and if branches let us have something tree-like.

          We can now define conversion to CPS with one major helper function

              cps :: (Eq a, Enum a)
                  => Exp a
                  -> (FlatExp a -> Gen a (CExp a))
                  -> Gen a (CExp a)

This takes an expression and a “continuation” and produces a CExp. We have some monad-gen stuff going on here because we need unique variables. The “continuation” is an actual Haskell function. So our function breaks an expression down to a FlatExp and then feeds it to the continuation.

              cps (Var a) c = c (CVar a)
              cps (Num i) c = c (CNum i)

The first two cases are easy: since variables and numbers are already flat expressions, they go straight into the continuation.

              cps (IfZ i t e) c = cps i $ \ic -> CIf ic <$> cps t c <*> cps e c

For IfZ we first recurse on i. Then, once we have a flattened computation representing i, we use CIf and recurse on the branches.

              cps (Binop op l r) c =
                cps l $ \fl ->
                cps r $ \fr ->
                gen >>= \out ->
                Let out (OpBind op fl fr) <$> c (CVar out)

          Like before, we use cps to recurse on the left and right sides of the expression. This gives us two flat expressions which we use with OpBind to compute the result and bind it to out. Now that we have a variable for the result we just toss it to the continuation.

              cps (Lam tp body) c = do
                [pairArg, newCont, newArg] <- replicateM 3 gen
                let body' = instantiate1 (Var newArg) body
                cbody <- cps body' (return . Jump (CVar newCont))
                c (CLam (cpsTp tp) pairArg
                   $ Let newArg  (ProjL pairArg)
                   $ Let newCont (ProjR pairArg)
                   $ cbody)

Converting a lambda is a little bit more work. It needs to take a pair, so a lot of the work is projecting out the left component (the argument) and the right component (the continuation). With those two things in hand, we recurse into the body using the continuation supplied as an argument. The actual code makes this process look a little out of order. Remember that we only use cbody once we’ve bound the projections to newArg and newCont respectively.

              cps (App l r tp) c = do
                arg <- gen
                cont <- CLam (cpsTp tp) arg <$> c (CVar arg)
                cps l $ \fl ->
                  cps r $ \fr ->
                  gen >>= \pair ->
                  return $ Let pair (Pair fr cont) (Jump fl (CVar pair))

For application we just create a lambda for the current continuation. We then evaluate the left and right sides of the application using recursive calls. Now that we have a function to jump to, we create a pair of the argument and the continuation and bind it to a name. From there, we just jump to fl, the function. Turning the continuation into a lambda is a little strange; it’s also why we needed the return-type annotation on App. The lambda uses the return type of the application and constructs a continuation that maps a to c a. Note that c a is a Haskell expression with the type Gen a (CExp a).

              convert :: Exp Int -> CExp Int
              convert = runGen . flip cps (return . Halt)
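
To see it end to end, converting the expression 1 + 2 yields something like the following (assuming monad-gen hands out fresh variables counting up from 0):

    convert (Binop Plus (Num 1) (Num 2))
      ==  Let 0 (OpBind Plus (CNum 1) (CNum 2)) (Halt (CVar 0))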

          With this, we’ve written a nice little compiler pass to convert expressions into their CPS forms. By doing this we’ve “eliminated expressions”. Everything is now flat and evaluation basically proceeds by evaluating one small computation and using the result to compute another and another.

There are still some things left to compile out before this is machine code, though:

          • Closures - these can be converted to explicitly pass records with closure conversion
          • Hoist lambdas out of nested scope - this gets rid of anonymous functions, something we don’t have in C or assembly
          • Make allocation explicit - Allocate a block of memory for a group of let statements and have them explicitly move the results of their computations to it
          • Register allocation - Cleverly choose whether to store some particular variable in a register or load it in as needed.

Once we’ve done these steps we’ve basically written a compiler. However, all of them are shaped by the fact that we’ve already compiled out expressions and (really) function calls with our conversion to CPS, which makes the process much, much simpler.

          Wrap Up

CPS conversion is a nice alternative to something like STG machines for lazy languages or SSA for imperative ones. As far as I’m aware, the main SML compiler (SML/NJ) compiles code in this way. As does Ur/Web, if I’m not crazy. Additionally, the course entitled “Higher Order, Typed Compilation” which is taught here at CMU uses CPS conversion to make compiling SML really quite pleasant.

In fact, someone (Andrew Appel?) once wrote a paper noting that SSA and CPS are actually the same. The key difference is that in SSA we merge multiple blocks together using the phi function. In CPS, we just let multiple source blocks jump to the same destination block (continuation). You can see this in our conversion of IfZ to CPS: instead of using phi to merge in the two branches, they both just use the continuation to jump to the remainder of the program. It makes things a little simpler because no block needs a special construct to merge the values flowing in.

          Finally, if you’re compiling a language like Scheme with call/cc, using CPS conversion makes the whole thing completely trivial. All you do is define call/cc at the CPS level as

          call/cc (f, c) = f ((λ (x, c') → c x), c)

So instead of using the continuation supplied to us in the expression we give to f, we use the one for the whole call/cc invocation! This causes us not to return into the body of f, but instead to carry on with the rest of the program as if f had returned whatever value x is. This is how my old Scheme compiler did things; I put off figuring out how to implement call/cc for a week before I realized it was a 10 minute job!

          Hope this was helpful!


          April 30, 2015 12:00 AM

          April 29, 2015

          Edward Kmett

          Domains, Sets, Traversals and Applicatives

          Last time I looked at free monoids, and noticed that in Haskell lists don't really cut it. This is a consequence of laziness and general recursion. To model a language with those properties, one needs to use domains and monotone, continuous maps, rather than sets and total functions (a call-by-value language with general recursion would use domains and strict maps instead).

          This time I'd like to talk about some other examples of this, and point out how doing so can (perhaps) resolve some disagreements that people have about the specific cases.

The first example is not one that I came up with: induction. It's sometimes said that Haskell does not have inductive types at all, or that we cannot reason about functions on its data types by induction. However, I think this is (technically) inaccurate. What's true is that we cannot simply pretend that our types are sets and use the induction principles for sets to reason about Haskell programs. Instead, one has to figure out what inductive domains would be, and what their proof principles are.

          Fortunately, there are some papers about doing this. The most recent (that I'm aware of) is Generic Fibrational Induction. I won't get too into the details, but it shows how one can talk about induction in a general setting, where one has a category that roughly corresponds to the type theory/programming language, and a second category of proofs that is 'indexed' by the first category's objects. Importantly, it is not required that the second category is somehow 'part of' the type theory being reasoned about, as is often the case with dependent types, although that is also a special case of their construction.

          One of the results of the paper is that this framework can be used to talk about induction principles for types that don't make sense as sets. Specifically:

           
          newtype Hyp = Hyp ((Hyp -> Int) -> Int)
           

          the type of "hyperfunctions". Instead of interpreting this type as a set, where it would effectively require a set that is isomorphic to the power set of its power set, they interpret it in the category of domains and strict functions mentioned earlier. They then construct the proof category in a similar way as one would for sets, except instead of talking about predicates as subsets, we talk about sub-domains instead. Once this is done, their framework gives a notion of induction for this type.

          This example is suitable for ML (and suchlike), due to the strict functions, and sort of breaks the idea that we can really get away with only thinking about sets, even there. Sets are good enough for some simple examples (like flat domains where we don't care about ⊥), but in general we have to generalize induction itself to apply to all types in the 'good' language.

          While I haven't worked out how the generic induction would work out for Haskell, I have little doubt that it would, because ML actually contains all of Haskell's data types (and vice versa). So the fact that the framework gives meaning to induction for ML implies that it does so for Haskell. If one wants to know what induction for Haskell's 'lazy naturals' looks like, they can study the ML analogue of:

           
          data LNat = Zero | Succ (() -> LNat)
           

          because function spaces lift their codomain, and make things 'lazy'.
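
For instance, under this encoding the number two, and the 'infinite' natural that the type happily admits, look like the following (these examples are mine, not the paper's):

two :: LNat
two = Succ (\_ -> Succ (\_ -> Zero))

omega :: LNat
omega = Succ (\_ -> omega)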

          ----

          The other example I'd like to talk about hearkens back to the previous article. I explained how foldMap is the proper fundamental method of the Foldable class, because it can be massaged to look like:

           
          foldMap :: Foldable f => f a -> FreeMonoid a
           

          and lists are not the free monoid, because they do not work properly for various infinite cases.

          I also mentioned that foldMap looks a lot like traverse:

           
          foldMap  :: (Foldable t   , Monoid m)      => (a -> m)   -> t a -> m
          traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
           

          And of course, we have Monoid m => Applicative (Const m), and the functions are expected to agree in this way when applicable.

          Now, people like to get in arguments about whether traversals are allowed to be infinite. I know Ed Kmett likes to argue that they can be, because he has lots of examples. But, not everyone agrees, and especially people who have papers proving things about traversals tend to side with the finite-only side. I've heard this includes one of the inventors of Traversable, Conor McBride.

          In my opinion, the above disagreement is just another example of a situation where we have a generic notion instantiated in two different ways, and intuition about one does not quite transfer to the other. If you are working in a language like Agda or Coq (for proving), you will be thinking about traversals in the context of sets and total functions. And there, traversals are finite. But in Haskell, there are infinitary cases to consider, and they should work out all right when thinking about domains instead of sets. But I should probably put forward some argument for this position (and even if I don't need to, it leads somewhere else interesting).

One example that people like to give about finitary traversals is that they can be done via lists. Given a finite traversal, we can traverse to get the elements (using Const [a]), traverse the list, then put them back where we got them by traversing again (using State [a]). Usually when you see this, though, there's some subtle cheating in relying on the list to be exactly the right length for the second traversal. It will be, because we got it from a traversal of the same structure, but I would expect proving that the function is actually total to be a lot of work. Thus, I'll use this as an excuse to do my own cheating later.
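
To make the list trick concrete, here is a minimal sketch (the names contents, refill and listTraverse are mine; the partial pop in refill is exactly where the cheating happens):

import Control.Applicative (Const(..))
import Control.Monad.State (evalState, state)

contents :: Traversable t => t a -> [a]
contents = getConst . traverse (\x -> Const [x])

refill :: Traversable t => t a -> [b] -> t b
refill t = evalState (traverse (const pop) t)
  where pop = state (\(x:xs) -> (x, xs))  -- partial: trusts the length

listTraverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
listTraverse f t = refill t <$> traverse f (contents t)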

          Now, the above uses lists, but why are we using lists when we're in Haskell? We know they're deficient in certain ways. It turns out that we can give a lot of the same relevant structure to the better free monoid type:

           
          newtype FM a = FM (forall m. Monoid m => (a -> m) -> m) deriving (Functor)
           
          instance Applicative FM where
            pure x = FM ($ x)
  FM ef <*> FM ex = FM $ \k -> ef $ \f -> ex $ \x -> k (f x)
           
          instance Monoid (FM a) where
            mempty = FM $ \_ -> mempty
            mappend (FM l) (FM r) = FM $ \k -> l k <> r k
           
          instance Foldable FM where
            foldMap f (FM e) = e f
           
          newtype Ap f b = Ap { unAp :: f b }
           
          instance (Applicative f, Monoid b) => Monoid (Ap f b) where
            mempty = Ap $ pure mempty
  mappend (Ap l) (Ap r) = Ap $ (<>) <$> l <*> r
           
          instance Traversable FM where
            traverse f (FM e) = unAp . e $ Ap . fmap pure . f
           

          So, free monoids are Monoids (of course), Foldable, and even Traversable. At least, we can define something with the right type that wouldn't bother anyone if it were written in a total language with the right features, but in Haskell it happens to allow various infinite things that people don't like.

          Now it's time to cheat. First, let's define a function that can take any Traversable to our free monoid:

           
          toFreeMonoid :: Traversable t => t a -> FM a
          toFreeMonoid f = FM $ \k -> getConst $ traverse (Const . k) f
           

          Now let's define a Monoid that's not a monoid:

           
          data Cheat a = Empty | Single a | Append (Cheat a) (Cheat a)
           
          instance Monoid (Cheat a) where
            mempty = Empty
            mappend = Append
           

You may recognize this as the data version of the free monoid from the previous article, where we get the real free monoid by taking a quotient. Using this, we can define an Applicative that's not valid:

           
          newtype Cheating b a =
            Cheating { prosper :: Cheat b -> a } deriving (Functor)
           
          instance Applicative (Cheating b) where
            pure x = Cheating $ \_ -> x
           
  Cheating f <*> Cheating x = Cheating $ \c -> case c of
              Append l r -> f l (x r)
           

          Given these building blocks, we can define a function to relabel a traversable using a free monoid:

           
          relabel :: Traversable t => t a -> FM b -> t b
relabel t (FM m) = prosper (traverse (const hope) t) (m Single)
           where
           hope = Cheating $ \c -> case c of
             Single x -> x
           

          And we can implement any traversal by taking a trip through the free monoid:

           
          slowTraverse
            :: (Applicative f, Traversable t) => (a -> f b) -> t a -> f (t b)
          slowTraverse f t = fmap (relabel t) . traverse f . toFreeMonoid $ t
           

          And since we got our free monoid via traversing, all the partiality I hid in the above won't blow up in practice, rather like the case with lists and finite traversals.

Arguably, this is worse cheating. It relies on the exact association structure to work out, rather than just the number of elements. The reason is that for infinitary cases, you cannot flatten things out, and there's really no way to detect when you have something infinitary. The finitary traversals have the luxury of being able to reassociate everything to a canonical form, while the infinite cases force us to not do any reassociating at all. So this might be somewhat unsatisfying.

          But, what if we didn't have to cheat at all? We can get the free monoid by tweaking foldMap, and it looks like traverse, so what happens if we do the same manipulation to the latter?

          It turns out that lens has a type for this purpose, a slight specialization of which is:

           
          newtype Bazaar a b t =
            Bazaar { runBazaar :: forall f. Applicative f => (a -> f b) -> f t }
           

          Using this type, we can reorder traverse to get:

           
          howBizarre :: Traversable t => t a -> Bazaar a b (t b)
          howBizarre t = Bazaar $ \k -> traverse k t
           

          But now, what do we do with this? And what even is it? [1]

          If we continue drawing on intuition from Foldable, we know that foldMap is related to the free monoid. Traversable has more indexing, and instead of Monoid uses Applicative. But the latter are actually related to the former; Applicatives are monoidal (closed) functors. And it turns out, Bazaar has to do with free Applicatives.

          If we want to construct free Applicatives, we can use our universal property encoding trick:

           
          newtype Free p f a =
            Free { gratis :: forall g. p g => (forall x. f x -> g x) -> g a }
           

          This is a higher-order version of the free p, where we parameterize over the constraint we want to use to represent structures. So Free Applicative f is the free Applicative over a type constructor f. I'll leave the instances as an exercise.
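
(One possible solution to the exercise, requiring FlexibleInstances; note how it simply delegates to the underlying Applicative, just as the Bazaar instances below do.)

instance Functor (Free Applicative f) where
  fmap h (Free e) = Free $ \k -> fmap h (e k)

instance Applicative (Free Applicative f) where
  pure x = Free $ \_ -> pure x
  Free ef <*> Free ex = Free $ \k -> ef k <*> ex k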

Since the free monoid is a monad, we'd expect Free p to be a monad, too. In this case, it is a McBride-style indexed monad, as seen in The Kleisli Arrows of Outrageous Fortune.

           
          type f ~> g = forall x. f x -> g x
           
          embed :: f ~> Free p f
          embed fx = Free $ \k -> k fx
           
          translate :: (f ~> g) -> Free p f ~> Free p g
          translate tr (Free e) = Free $ \k -> e (k . tr)
           
          collapse :: Free p (Free p f) ~> Free p f
          collapse (Free e) = Free $ \k -> e $ \(Free e') -> e' k
           

          That paper explains how these are related to Atkey style indexed monads:

           
          data At key i j where
            At :: key -> At key i i
           
          type Atkey m i j a = m (At a j) i
           
          ireturn :: IMonad m => a -> Atkey m i i a
          ireturn = ...
           
          ibind :: IMonad m => Atkey m i j a -> (a -> Atkey m j k b) -> Atkey m i k b
          ibind = ...
           

          It turns out, Bazaar is exactly the Atkey indexed monad derived from the Free Applicative indexed monad (with some arguments shuffled) [2]:

           
          hence :: Bazaar a b t -> Atkey (Free Applicative) t b a
          hence bz = Free $ \tr -> runBazaar bz $ tr . At
           
          forth :: Atkey (Free Applicative) t b a -> Bazaar a b t
          forth fa = Bazaar $ \g -> gratis fa $ \(At a) -> g a
           
          imap :: (a -> b) -> Bazaar a i j -> Bazaar b i j
          imap f (Bazaar e) = Bazaar $ \k -> e (k . f)
           
          ipure :: a -> Bazaar a i i
          ipure x = Bazaar ($ x)
           
          (>>>=) :: Bazaar a j i -> (a -> Bazaar b k j) -> Bazaar b k i
          Bazaar e >>>= f = Bazaar $ \k -> e $ \x -> runBazaar (f x) k
           
          (>==>) :: (s -> Bazaar i o t) -> (i -> Bazaar a b o) -> s -> Bazaar a b t
          (f >==> g) x = f x >>>= g
           

          As an aside, Bazaar is also an (Atkey) indexed comonad, and the one that characterizes traversals, similar to how indexed store characterizes lenses. A Lens s t a b is equivalent to a coalgebra s -> Store a b t. A traversal is a similar Bazaar coalgebra:

           
            s -> Bazaar a b t
              ~
            s -> forall f. Applicative f => (a -> f b) -> f t
              ~
            forall f. Applicative f => (a -> f b) -> s -> f t
           

          It so happens that Kleisli composition of the Atkey indexed monad above (>==>) is traversal composition.

          Anyhow, Bazaar also inherits Applicative structure from Free Applicative:

           
          instance Functor (Bazaar a b) where
            fmap f (Bazaar e) = Bazaar $ \k -> fmap f (e k)
           
          instance Applicative (Bazaar a b) where
            pure x = Bazaar $ \_ -> pure x
  Bazaar ef <*> Bazaar ex = Bazaar $ \k -> ef k <*> ex k
           

          This is actually analogous to the Monoid instance for the free monoid; we just delegate to the underlying structure.

          The more exciting thing is that we can fold and traverse over the first argument of Bazaar, just like we can with the free monoid:

           
          bfoldMap :: Monoid m => (a -> m) -> Bazaar a b t -> m
          bfoldMap f (Bazaar e) = getConst $ e (Const . f)
           
          newtype Comp g f a = Comp { getComp :: g (f a) } deriving (Functor)
           
          instance (Applicative f, Applicative g) => Applicative (Comp g f) where
            pure = Comp . pure . pure
  Comp f <*> Comp x = Comp $ liftA2 (<*>) f x
           
btraverse
  :: (Applicative f) => (a -> f a') -> Bazaar a b t -> f (Bazaar a' b t)
btraverse f (Bazaar e) = getComp $ e (Comp . fmap ipure . f)
           

This is again analogous to the free monoid code. Comp is the analogue of Ap, and we use ipure in btraverse. I mentioned that Bazaar is a comonad:

           
          extract :: Bazaar b b t -> t
          extract (Bazaar e) = runIdentity $ e Identity
           

          And now we are finally prepared to not cheat:

           
          honestTraverse
            :: (Applicative f, Traversable t) => (a -> f b) -> t a -> f (t b)
          honestTraverse f = fmap extract . btraverse f . howBizarre
           

So, we can traverse by first turning our Traversable into some structure that's kind of like the free monoid, except having to do with Applicative, traversing that, and then pulling a result back out. Bazaar retains the information that we're eventually building back the same type of structure, so we don't need any cheating.

          To pull this back around to domains, there's nothing about this code to object to if done in a total language. But, if we think about our free Applicative-ish structure, in Haskell, it will naturally allow infinitary expressions composed of the Applicative operations, just like the free monoid will allow infinitary monoid expressions. And this is okay, because some Applicatives can make sense of those, so throwing them away would make the type not free, in the same way that even finite lists are not the free monoid in Haskell. And this, I think, is compelling enough to say that infinite traversals are right for Haskell, just as they are wrong for Agda.

For those who wish to see executable code for all this, I've put files here and here. The latter also contains some extra goodies at the end that I may talk about in further installments.

          [1] Truth be told, I'm not exactly sure.

[2] It turns out, you can generalize Bazaar to have a correspondence for every choice of p:

           
          newtype Bizarre p a b t =
            Bizarre { bizarre :: forall f. p f => (a -> f b) -> f t }
           

          hence and forth above go through with the more general types. This can be seen here.

          by Dan Doel at April 29, 2015 07:37 AM

          April 28, 2015

          Dan Burton

          An informal explanation of stackage-sandbox

          Works on my machine, will it work on yours? Suppose there’s a Haskell project called stackage-cli that I’d like to share with you. It builds just fine on my machine, but will it build on yours? If we have different … Continue reading

          by Dan Burton at April 28, 2015 08:45 PM

          April 27, 2015

          Ian Ross

          C2HS Tutorial Ideas


          One of the things that C2HS is lacking is a good tutorial. So I’m going to write one (or try to, anyway).

          To make this as useful as possible, I’d like to base a large part of the tutorial on a realistic case study of producing Haskell bindings to a C library. My current plan is to break the tutorial into three parts: the basics, the case study and “everything else”, for C2HS features that don’t get covered in the first two parts. To make this even more useful, I’d like to base the case study on a C library that someone actually cares about and wants Haskell bindings for.

          The requirements for the case study C library are:

          1. There shouldn’t already be Haskell bindings for it – I don’t want to duplicate work.

          2. The C library should be “medium-sized”: big enough to be realistic, not so big that it takes forever to write bindings.

          3. The C library should be of medium complexity. By this, I mean that it should have a range of different kinds of C functions, structures and things that need to be made accessible from Haskell. It shouldn’t be completely trivial, and it should require a little thought to come up with good bindings. On the other hand, it shouldn’t be so unusual that the normal ways of using C2HS don’t work.

          4. Ideally it should be something that more than one person might want to use.

          5. It needs to be a library that’s available for Linux. I don’t have a Mac and I’m not that keen on doing something that’s Windows-only.

          Requirements #2 and #3 are kind of squishy, but it should be fairly clear what’s appropriate and what’s not: any C library for which you think development of Haskell bindings would make a good C2HS tutorial case study is fair game.

          If you have a library you think would be a good fit for this, drop me an email, leave a comment here or give me a shout on IRC (I’m usually on #haskell as iross or iross_ or something like that).


          April 27, 2015 03:46 PM

          Tom Schrijvers

          FLOPS 2016: Call for Papers

          FLOPS 2016: 13th International Symposium on Functional and Logic Programming
          March 3-6, 2016, Kochi, Japan

Call For Papers             http://www.info.kochi-tech.ac.jp/FLOPS2016/

          Writing down detailed computational steps is not the only way of
          programming. The alternative, being used increasingly in practice, is
          to start by writing down the desired properties of the result. The
          computational steps are then (semi-)automatically derived from these
          higher-level specifications. Examples of this declarative style
          include functional and logic programming, program transformation and
rewriting, and extracting programs from proofs of their correctness.

FLOPS aims to bring together practitioners, researchers and
implementors of declarative programming, to discuss mutually
          interesting results and common problems: theoretical advances, their
          implementations in language systems and tools, and applications of
          these systems in practice. The scope includes all aspects of the
          design, semantics, theory, applications, implementations, and teaching
          of declarative programming.  FLOPS specifically aims to
          promote cross-fertilization between theory and practice and among
          different styles of declarative programming.

          Scope

FLOPS solicits original papers in all areas of declarative
programming:
 * functional, logic, functional-logic programming, rewriting
             systems, formal methods and model checking, program transformations
             and program refinements, developing programs with the help of theorem
             provers or SAT/SMT solvers;
           * foundations, language design, implementation issues (compilation
             techniques, memory management, run-time systems), applications and
             case studies.

          FLOPS promotes cross-fertilization among different styles of
          declarative programming. Therefore, submissions must be written to be
          understandable by the wide audience of declarative programmers and
researchers. Submission of system descriptions and declarative pearls
is especially encouraged.

          Submissions should fall into one of the following categories:
           * Regular research papers: they should describe new results and will
             be judged on originality, correctness, and significance.
           * System descriptions: they should contain a link to a working
             system and will be judged on originality, usefulness, and design.
           * Declarative pearls: new and excellent declarative programs or
             theories with illustrative applications.
          System descriptions and declarative pearls must be explicitly marked
          as such in the title.

          Submissions must be unpublished and not submitted for publication
elsewhere. Work that already appeared in unpublished or informally
published workshop proceedings may be submitted. See also the ACM SIGPLAN
          Republication Policy.

          The proceedings will be published by Springer International Publishing
          in the Lecture Notes in Computer Science (LNCS) series, as a printed
          volume as well as online in the digital library SpringerLink.

          Post-proceedings: The authors of 4-7 best papers will be invited to
          submit the extended version of their FLOPS paper to a special issue of
          the journal Science of Computer Programming (SCP).


          Important dates

          Monday, September 14, 2015 (any time zone): Submission deadline
          Monday, November 16, 2015:                  Author notification
          March 3-6, 2016:                            FLOPS Symposium
          March 7-9, 2016:                            PPL Workshop


          Submission

          Submissions must be written in English and can be up to 15 pages long
          including references, though pearls are typically shorter. The
          formatting has to conform to Springer's guidelines.  Regular research
          papers should be supported by proofs and/or experimental results. In
          case of lack of space, this supporting information should be made
          accessible otherwise (e.g., a link to a Web page, or an appendix).

          Papers should be submitted electronically at
https://easychair.org/conferences/?conf=flops2016


          Program Committee

          Andreas Abel         Gothenburg University, Sweden
          Lindsay Errington    USA
          Makoto Hamana        Gunma University, Japan
          Michael Hanus        CAU Kiel, Germany
          Jacob Howe           City University London, UK
          Makoto Kanazawa      National Institute of Informatics, Japan
          Andy King            University of Kent, UK   (PC Co-Chair)
          Oleg Kiselyov        Tohoku University, Japan   (PC Co-Chair)
          Hsiang-Shang Ko      National Institute of Informatics, Japan
          Julia Lawall         Inria-Whisper, France
          Andres Löh           Well-Typed LLP, UK
          Anil Madhavapeddy    Cambridge University, UK
          Jeff Polakow         PivotCloud, USA
          Marc Pouzet          École normale supérieure, France
          Vítor Santos Costa   Universidade do Porto, Portugal
          Tom Schrijvers       KU Leuven, Belgium
          Zoltan Somogyi       Australia
          Alwen Tiu            Nanyang Technological University, Singapore
          Sam Tobin-Hochstadt  Indiana University, USA
          Hongwei Xi           Boston University, USA
          Neng-Fa Zhou         CUNY Brooklyn College and Graduate Center, USA


          Organizers

          Andy King            University of Kent, UK                  (PC Co-Chair)
          Oleg Kiselyov        Tohoku University, Japan                (PC Co-Chair)
          Yukiyoshi Kameyama   University of Tsukuba, Japan            (General Chair)
          Kiminori Matsuzaki   Kochi University of Technology, Japan   (Local Chair)

          flops2016 at logic.cs.tsukuba.ac dot jp

          by Tom Schrijvers ([email protected]) at April 27, 2015 11:14 AM

          Dominic Steinitz

          Rejection Sampling

          Introduction

Suppose you want to sample from the truncated normal distribution. One way to do this is to use rejection sampling, but if you do it naïvely you will run into performance problems. The excellent Devroye (1986), who references Marsaglia (1964), gives an efficient rejection sampling scheme using the Rayleigh distribution. The random-fu package uses the Exponential distribution.
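
For intuition, a Marsaglia-style tail sampler with an exponential proposal looks roughly like this; this is my own sketch (the name tailViaExp is mine), not random-fu's actual implementation. It proposes from an exponential with rate x shifted to start at x, and accepts with probability exp(-(y-x)²/2):

tailViaExp :: Double -> RVar Double
tailViaExp x = do
  u1 <- stdUniform
  u2 <- stdUniform
  let y = x - log u1 / x                 -- shifted exponential proposal
  if u2 <= exp (negate ((y - x)^2) / 2)
    then return y                        -- accept
    else tailViaExp x                    -- reject and retry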

          Performance

          > {-# OPTIONS_GHC -Wall                     #-}
          > {-# OPTIONS_GHC -fno-warn-name-shadowing  #-}
          > {-# OPTIONS_GHC -fno-warn-type-defaults   #-}
          > {-# OPTIONS_GHC -fno-warn-unused-do-bind  #-}
          > {-# OPTIONS_GHC -fno-warn-missing-methods #-}
          > {-# OPTIONS_GHC -fno-warn-orphans         #-}
          
          > {-# LANGUAGE FlexibleContexts             #-}
          
          > import Control.Monad
          > import Data.Random
          > import qualified Data.Random.Distribution.Normal as N
          
          > import Data.Random.Source.PureMT
          > import Control.Monad.State
          

          Here’s the naïve implementation.

          > naiveReject :: Double -> RVar Double
          > naiveReject x = doit
          >   where
          >     doit = do
          >       y <- N.stdNormal
          >       if y < x
          >         then doit
          >         else return y
          

          And here’s an implementation using random-fu.

          > expReject :: Double -> RVar Double
          > expReject x = N.normalTail x
          

          Let’s try running both of them

          > n :: Int
          > n = 10000000
          
          > lower :: Double
          > lower = 2.0
          
          > testExp :: [Double]
          > testExp = evalState (replicateM n $ sample (expReject lower)) (pureMT 3)
          
          > testNaive :: [Double]
          > testNaive = evalState (replicateM n $ sample (naiveReject lower)) (pureMT 3)
          
          > main :: IO ()
          > main = do
          >   print $ sum testExp
          >   print $ sum testNaive
          

          Let’s try building and running both the naïve and better tuned versions.

          ghc -O2 CompareRejects.hs
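
(The statistics below appear to come from the GHC runtime's summary output, so presumably each binary was run with the -s RTS flag; that invocation is an assumption on my part:)

./CompareRejects +RTS -s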

As we can see below, we get 59.98s and 4.28s, a compelling reason to use the tuned version. And the difference in performance will get worse the less of the tail we wish to sample from.

          Tuned

          2.3731610476911187e7
            11,934,195,432 bytes allocated in the heap
                 5,257,328 bytes copied during GC
                    44,312 bytes maximum residency (2 sample(s))
                    21,224 bytes maximum slop
                         1 MB total memory in use (0 MB lost due to fragmentation)
          
                                              Tot time (elapsed)  Avg pause  Max pause
            Gen  0     23145 colls,     0 par    0.09s    0.11s     0.0000s    0.0001s
            Gen  1         2 colls,     0 par    0.00s    0.00s     0.0001s    0.0002s
          
            INIT    time    0.00s  (  0.00s elapsed)
            MUT     time    4.19s  (  4.26s elapsed)
            GC      time    0.09s  (  0.11s elapsed)
            EXIT    time    0.00s  (  0.00s elapsed)
            Total   time    4.28s  (  4.37s elapsed)
          
            %GC     time       2.2%  (2.6% elapsed)
          
            Alloc rate    2,851,397,967 bytes per MUT second
          
            Productivity  97.8% of total user, 95.7% of total elapsed

          Naïve

          2.3732073159369867e7
           260,450,762,656 bytes allocated in the heap
               111,891,960 bytes copied during GC
                    85,536 bytes maximum residency (2 sample(s))
                    76,112 bytes maximum slop
                         1 MB total memory in use (0 MB lost due to fragmentation)
          
                                              Tot time (elapsed)  Avg pause  Max pause
            Gen  0     512768 colls,     0 par    1.86s    2.24s     0.0000s    0.0008s
            Gen  1         2 colls,     0 par    0.00s    0.00s     0.0001s    0.0002s
          
            INIT    time    0.00s  (  0.00s elapsed)
            MUT     time   58.12s  ( 58.99s elapsed)
            GC      time    1.86s  (  2.24s elapsed)
            EXIT    time    0.00s  (  0.00s elapsed)
            Total   time    59.98s  ( 61.23s elapsed)
          
            %GC     time       3.1%  (3.7% elapsed)
          
            Alloc rate    4,481,408,869 bytes per MUT second
          
            Productivity  96.9% of total user, 94.9% of total elapsed

          Bibliography

Devroye, L. 1986. Non-Uniform Random Variate Generation. Springer-Verlag. http://books.google.co.uk/books?id=mEw_AQAAIAAJ.

Marsaglia, G. 1964. “Generating a Variable from the Tail of the Normal Distribution.” Technometrics 6 (1): 101–2.


          by Dominic Steinitz at April 27, 2015 09:59 AM

          Ken T Takusagawa

          [imxscmnf] Function call syntax

          A brief survey of syntax used by different programming languages to express calling a function:

          Calling a one-argument function:
          f x
          (f x)
          f(x)

          Calling a two-argument uncurried function:
          f x,y
          f(x,y)
          (f x y)

          Calling a two-argument curried function:
          f x y
          ((f x) y)
          f(x)(y)

          Calling a zero-argument function:
          f
          (f)
          f()

          Intriguing but not implemented as far as I know is a Lisp-like language that prefers currying and partial application like Haskell.

          by Ken ([email protected]) at April 27, 2015 08:25 AM

          April 26, 2015

          Joachim Breitner

Fifth place in Codingame World Cup

Last evening, Codingame held a “Programming World Cup” titled “There is no Spoon”. The format is that within four hours, you get to write a program that solves a given task. Submissions are first rated by completeness (there are 13 test inputs that you can check your code against, and further hidden tests that will only be checked after submission) and then by time of submission. You can only submit your code once.

          What I like about Codingame is that they support a great number of programming languages in their Web-“IDE”, including Haskell. I had nothing better to do yesterday, so I joined. I was aiming for a good position in the Haskell-specific ranking.

After nearly two hours my code completed all the visible test cases and I submitted. I figured that this was a reasonable time to do so, as it was half-time and there were supposed to be two challenges. It turned out that the first, quite small task, which felt like a warm-up or qualification puzzle, was the first of those two, and that therefore I was done – and indeed the 5th fastest to complete a 100% solution! With less than 5 minutes’ difference to the 3rd, money-winning place – if I had known I had such a chance, I would have started on time...

          Having submitted the highest ranked Haskell code, I will get a T-Shirt. I also defended Haskell’s reputation as an efficient programming language, ranked third in the contest, after C++ (rank 1) and Java (rank 2), but before PHP (9), C# (10) and Python (11), listing only those that had a 100% solution.

The task, solving a Bridges puzzle, did not feel like a great fit for Haskell at first. I was juggling Data.Maps around where otherwise I’d simply attach attributes to objects, and a recursive function simulated nothing but a plain loop. But it paid off the moment I had to implement guessing parts of the solution, trying what happens and backtracking when it did not work: with all state in parameters and pure code, it was very simple to get a complete solution.

          My code is of course not very polished, and having the main loop live in the IO monad just to be able to print diagnostic commands is a bit ugly.

The next, Lord of the Rings-themed world cup will be on June 27th. Maybe we will see more than 18 Haskell entries then?

          by Joachim Breitner ([email protected]) at April 26, 2015 03:01 PM

          April 25, 2015

          Neil Mitchell

          Cleaning stale files with Shake

          Summary: Sometimes source files get deleted, and build products become stale. Using Shake, you can automatically delete them.

Imagine you have a build system that compiles Markdown files into HTML files for your blog. Sometimes you rename a Markdown file, which means the corresponding HTML will change name too. Typically, this will result in a stale HTML file being left behind: one that was previously produced by the build system but will never be updated again. You can remove that file by cleaning all outputs and running the build again, but with the Shake build system you can do better. You can ask for a list of all live files, and delete the build products not on that list.

          A basic Markdown to HTML converter

          Let's start with a simple website generator. For each Markdown file, with the extension .md, we generate an HTML file. We can write that as:

import Development.Shake
import Development.Shake.FilePath

main :: IO ()
main = shakeArgs shakeOptions $ do
    action $ do
        mds <- getDirectoryFiles "." ["//*.md"]
        need ["output" </> x -<.> "html" | x <- mds]

    "output//*.html" %> \out -> do
        let src = dropDirectory1 out -<.> "md"
        need [src]
        cmd "pandoc -s -o" [out, src]

    phony "clean" $ do
        removeFilesAfter "output" ["//*.html"]

          Nothing too interesting here. There are three parts:

          • Search for all .md files, and for each file foo/bar.md require output/foo/bar.html.
          • To generate an .html file, depend on the source file then run pandoc.
          • To clean everything, delete all .html files in output.

Using a new feature in Shake 0.15, we can save this script as Shakefile.hs and then:

          • shake will build all the HTML files.
          • shake -j0 will build all the files, using one thread for each processor on our system.
          • shake output/foo.html will build just that one HTML file.
          • shake clean will delete all the HTML files.

          Removing stale files

          Now let's imagine we've added a blog post using-pipes.md. Before publishing we decide to rename our post to using-conduit.md. If we've already run shake then there will be a stale file output/using-pipes.html. Since there is no source .md file, Shake will not attempt to rebuild the file, and it won't be automatically deleted. We can do shake clean to get rid of it, but that will also wipe all the other HTML files.

We can run shake --live=live.txt to produce a file live.txt listing all the live files - those that Shake knows about, and has built. If we run that after deleting using-pipes.md it will tell us that using-conduit.md and output/using-conduit.html are both "live". If we delete all files in output that are not mentioned as being live, that will clean away all our stale files.

          Using Shake 0.15.1 (released in the last hour) you can write:

import Development.Shake
import Development.Shake.FilePath
import Development.Shake.Util
import System.Directory.Extra
import Data.List
import System.IO

pruner :: [FilePath] -> IO ()
pruner live = do
    present <- listFilesRecursive "output"
    mapM_ removeFile $ map toStandard present \\ map toStandard live

main :: IO ()
main = shakeArgsPrune shakeOptions pruner $ do
    ... as before ...

Now when running shake --prune it will build all files, then delete all stale files, such as output/using-pipes.html. We are using the shakeArgsPrune function (just sugar over --live), which lets us pass a pruner function. This function gets called after the build completes with a list of all the live files. We use listFilesRecursive from the extra package to get a list of all files in output, then take the list difference (\\) to delete all the files which are present but not live. To deal with the / vs \ path separator issue on Windows, we apply toStandard to all files to ensure they match.

          A few words of warning:

• If you run shake output/foo.html --prune then it will only pass output/foo.html and foo.md as live files, since those are the only files that are live when you ask for just a subset of the targets to be built. Generally, you want to enable all sensible targets (typically no file arguments) when passing --prune.
• Sometimes a rule will generate something you care about, plus a few files you don't really bother tracking. As an example, building a GHC DLL on Windows generates a .dll and a .dll.a file. While the .dll.a file may not be known to Shake, you probably don't want it to get pruned. The pruning function may need a few special cases, like not deleting the .dll.a file if the .dll is live (see the sketch below).
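
One possible shape for such a pruner, building on the code above (a sketch only; the .dll.a rule is the hypothetical example from the list, not anything built into Shake):

import Development.Shake.FilePath
import System.Directory.Extra
import Data.List

-- Prune everything in output that is not live, except that a
-- .dll.a file survives whenever its matching .dll is live, even
-- though Shake itself never tracked the .dll.a.
pruner :: [FilePath] -> IO ()
pruner live = do
    present <- listFilesRecursive "output"
    let liveStd = map toStandard live
        keep f = f `elem` liveStd
              || (".dll.a" `isSuffixOf` f && dropExtension f `elem` liveStd)
    mapM_ removeFile [f | f <- map toStandard present, not (keep f)]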

          by Neil Mitchell ([email protected]) at April 25, 2015 02:21 PM