# Elevator pitch for Haskell short enough for an elevator ride

Greg Hale has written an "elevator pitch" for Haskell. While it is certainly a good piece of advocacy, it is quite long, and therefore not an elevator pitch. The idea of an elevator pitch is something you can deliver in the 30 seconds or so that you find yourself sharing an elevator with a potential investor.

I've been looking for an effective Haskell elevator pitch for some years now, but the only thing I was able to come up with was just that you can deliver software better, faster and cheaper because you need fewer lines of code. This just sounds like hype.

However I think I've now got something better. Here it is:

Conventional languages make the programmer construct both a control flow and a data flow for the program. There is no way to check that they are consistent, and any time they are inconsistent you get a bug. In Haskell the programmer just specifies the data flow; the control flow is up to the compiler. That simplifies the program, cutting down the work and completely preventing a big class of errors.
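To make that concrete (my example, not Greg's): in a definition like the one below you write down only how data flows from input to result, and the order of evaluation is entirely the compiler's business.

```haskell
-- Sum of the squares of the even numbers: pure data flow.
-- No loop counters, no mutation, no explicit control flow to get wrong.
sumEvenSquares :: [Int] -> Int
sumEvenSquares = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumEvenSquares [1 .. 10])
```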

# The complete correctness of sorting

A while ago I set out to prove the correctness of merge sort in Agda. Of course this has been done before. But most proofs you find are far from complete. All they prove is a lemma such as

is-sorted : ∀ (xs : List A) → IsSortedList (sort xs)


Maybe even restricted to lists of natural numbers. While it is nice that a sort function indeed produces a sorted output, that is only half of the story. Consider this function:

cheat-sort : List A → List A
cheat-sort _ = []


Clearly the empty list is sorted. So we are done. What is missing is the second half of correctness of sorting: that the output is a permutation of the input. You want something like:

sort : (xs : List A) → Sorted' xs

record Sorted' (xs : List A) : Set where
  field
    ys       : List A
    isSorted : IsSorted ys
    isPerm   : IsPermutation ys xs
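The point generalizes beyond Agda. In plain Haskell (a sketch with run-time Booleans instead of proofs; names are mine), sortedness alone lets the cheat through, and only the permutation check catches it:

```haskell
import Data.List (sort)

-- sortedness alone: every adjacent pair is in order
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

-- permutation check, using a trusted sort as the oracle
isPermutationOf :: Ord a => [a] -> [a] -> Bool
isPermutationOf xs ys = sort xs == sort ys

-- the cheat: always return the empty list
cheatSort :: [a] -> [a]
cheatSort _ = []

main :: IO ()
main = do
  print (isSorted (cheatSort [3,1,2]))                 -- passes sortedness
  print (cheatSort [3,1,2] `isPermutationOf` [3,1,2])  -- caught here
```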


While I was at it, I decided to add the third half of correctness: a bound on the runtime or computational complexity. In the end I was able to define:

insertion-sort : ∀ xs → (Sorted xs) in-time (length xs * length xs)
selection-sort : ∀ xs → (Sorted xs) in-time (length xs * length xs)
merge-sort : ∀ xs → (Sorted xs) in-time (length xs * ⌈log₂ length xs ⌉)


This was not as easy as I would have hoped. In this post I will not bore you with all the details; I'll just go over some of the highlights. The full code is on github.

## What it means to be sorted

There are roughly two ways to define sorted lists that I know of:

1. Parametrize the sorted list by a lower bound on the values it contains. For a cons cell the head should be at least this lower bound, and the tail should be bounded by the head. This requires the type to have a smallest element, but you can always adjoin -∞ with a new datatype.
2. Parametrize the sorted list by a list of all values in it. For a cons cell require that the head is smaller than all the values in the tail.

Since I already need to parametrize by all values in the list to show that the sorted list contains a permutation of them, I went with the second approach:

-- A proof that x is less than all values in xs
data _≤*_ (x : A) : List A → Set where
  []  : x ≤* []
  _∷_ : ∀ {y ys} → (x ≤ y) → x ≤* ys → x ≤* (y ∷ ys)

-- Proof that a list is sorted
data IsSorted : List A → Set where
  []  : IsSorted []
  _∷_ : ∀ {x xs} → x ≤* xs → IsSorted xs → IsSorted (x ∷ xs)


## What it means to be a permutation

To show that one list is a permutation of another I again used two data types. Suppose that we know that xs is a permutation of ys. Then when is x ∷ xs a permutation of some list xys? Well, we can permute xs to ys, and then insert x anywhere into ys. I used _◂_≡_ to denote this insertion:

-- x ◂ xs ≡ xys means that xys is equal to xs with x inserted somewhere
data _◂_≡_ (x : A) : List A → List A → Set a where
  here  : ∀ {xs}           → x ◂ xs ≡ (x ∷ xs)
  there : ∀ {y} {xs} {xys} → (p : x ◂ xs ≡ xys) → x ◂ (y ∷ xs) ≡ (y ∷ xys)

-- Proof that a list is a permutation of another one
data IsPermutation : List A → List A → Set a where
  []  : IsPermutation [] []
  _∷_ : ∀ {x xs ys xys}
      → (p : x ◂ ys ≡ xys)
      → (ps : IsPermutation xs ys)
      → IsPermutation (x ∷ xs) xys


Now the Sorted data type has three components: the sorted list, a proof that it is sorted, and a proof that it is a permutation of the input. These parts are either all [], or they are all _∷_. It turns out to be much nicer to combine the parts together:

-- Sorted permutations of a list
data Sorted : List A → Set where
  []   : Sorted []
  cons : ∀ x {xs xxs}
       → (p : x ◂ xs ≡ xxs) -- inserting x somewhere into xs gives xxs
       → (least : x ≤* xs)  -- x is the smallest element of the list
       → (rest : Sorted xs) -- and we have also sorted xs
       → Sorted xxs


Of course Sorted and Sorted' are equivalent.

As an aside, these are all the ingredients necessary for proving

sorted-unique : ∀ {xs} → (ys zs : Sorted xs)
              → sorted-to-List ys ≡ sorted-to-List zs


## A monad for keeping track of the runtime

To be able to reason about the runtime, as measured in the number of comparisons performed, I decided to use a monad. The type is simply

data _in-time_ (A : Set) (n : ℕ) : Set a where
  box : A → A in-time n


The constructor box is kept private; it can only be accessed through the standard monad operations,

return : ∀ {A n} → A → A in-time n

_>>=_ : ∀ {A B} {m n} → A in-time n → (A → B in-time m) → B in-time (n + m)
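A rough Haskell analogue of this indexed monad can be written with GHC's type-level naturals (the names are mine; in the Agda development box is private, and unbox is exposed here only for demonstration):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

import GHC.TypeLits (Nat, type (+))

-- 'InTime n a' plays the role of 'a in-time n': a value of type a,
-- computed using at most n comparisons.
newtype InTime (n :: Nat) a = Box a

ret :: a -> InTime n a
ret = Box

bind :: InTime n a -> (a -> InTime m b) -> InTime (n + m) b
bind (Box x) f = case f x of Box y -> Box y

-- a single comparison costs one step
cmp :: Ord a => a -> a -> InTime 1 Bool
cmp x y = Box (x <= y)

-- exposed here for demonstration only
unbox :: InTime n a -> a
unbox (Box x) = x

main :: IO ()
main = print (unbox (bind (cmp (1 :: Int) 2) ret :: InTime 2 Bool))
```

Note how the costs add up in the type: binding a 1-step comparison into the result yields InTime 2.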


Then the sorting functions will be parametrized by a function that, for some partial order, decides between x ≤ y and y ≤ x in one step, using the monad we defined above:

module Sorting
    {A : Set} {l} {_≤_ : Rel A l}
    (isPartialOrder : IsPartialOrder _≡_ _≤_)
    (_≤?_ : (x y : A) → (x ≤ y ⊎ y ≤ x) in-time 1)
    where
  ...


Note that I specify that _≤_ is a partial order, because the Agda standard library definition of a total order actually comes with a function

total : ∀ x y → (x ≤ y) ⊎ (y ≤ x)


which would defeat the whole purpose of _≤?_. In fact, the standard TotalOrders are decidable up to base equality, and if the base equality is propositional equality, then they are decidable. I.e.

total-decidable : ∀ {a r} {A : Set a} → (_≤_ : Rel A r)
                → IsTotalOrder _≡_ _≤_
                → IsDecTotalOrder _≡_ _≤_


See the source for the proof of this side theorem. It relies on a trick to show that total x y can only be different from total y x if x ≡ y. Which holds for propositional equality, but not in general.

## Logarithms

To be able to complete the specification of merge sort, we still need to add some missing functions on natural numbers. In particular, we need a logarithm. This logarithm turns out to be surprisingly tricky to define in Agda. Why? Because the usual definition uses non-structural recursion. In Haskell you would write

-- @log n@ calculates ⌊log₂ (n+1)⌋
log 0 = 0
log n = 1 + log (n `div` 2)


But Agda is not able to see that n `div` 2 (or in Agda notation, ⌊ n /2⌋) is smaller than n. There are two approaches to circumvent this problem:

1. Use a different algorithm: Convert n to a binary representation, and count the number of digits.
2. Use well-founded recursion, manually supplying a proof that ⌊ n /2⌋ < n.
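For comparison, the first approach is easy to sketch in Haskell, where termination is not checked (helper names are mine; in Agda one would first convert n to a binary number type and then count the digits by structural recursion on that type):

```haskell
-- ⌊log₂ (n+1)⌋ = number of binary digits of (n+1), minus one
binaryDigits :: Integer -> [Integer]   -- least significant digit first
binaryDigits 0 = []
binaryDigits n = n `mod` 2 : binaryDigits (n `div` 2)

log2 :: Integer -> Integer
log2 n = fromIntegral (length (binaryDigits (n + 1))) - 1

main :: IO ()
main = print (map log2 [0, 1, 3, 7, 8])
```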

I went with the second option, because I will also be using the same shape of recursion inside merge sort itself. The standard way to use well-founded recursion is through the function <-rec, which works a bit like fix in Haskell, except that you need to pass in a proof that the argument is smaller. The code would look like this:

log = <-rec log′
  where
  log′ self 0       = 0
  log′ self (suc n) = 1 + self ⌊ suc n /2⌋ ({-proof omitted-})


But this leads to a problem as soon as you want to prove a property of logarithms. For example, you would think that log (suc n) ≡ 1 + log ⌊ suc n /2⌋. But that is not definitionally true, since one <-rec is not like another. I found that the well-founded recursion library was in general a pain to work with, especially because it uses so many type synonyms. My solution was to use the slightly lower level accessibility relation. A value of type Acc _<′_ n allows you to do recursion with any m <′ n. Now I can use actual recursion:

log-acc : ∀ n → Acc _<′_ n → ℕ
log-acc 0       _          = 0
log-acc (suc n) (acc more) = 1 + log-acc ⌊ suc n /2⌋ (more _ {-proof omitted-})


And use the well-foundedness of ℕ to get an Acc for any number:

log : ℕ → ℕ
log n = log-acc n (<-well-founded n)

⌈log₂_⌉ : ℕ → ℕ
⌈log₂ n ⌉ = log (pred n)


There is still a snag when proving properties of log or log-acc, namely that you need to prove that (more n ...) ≡ <-well-founded n. But the accessibility relation doesn't actually matter for the computation, so I decided to just postulate

postulate acc-irrelevance : ∀ {n : ℕ} → {a b : Acc _<′_ n} → a ≡ b
-- this also follows from function extensionality


If anyone knows a better way to prove properties of functions defined with well-founded recursion, I am open to suggestions.

## Vectors versus lists

While working on the proofs I had to choose: do I use fixed-length Vecs or variable-length Lists? Both have their pros and cons.

On the one hand, the sorting functions with vectors look a bit nicer, because we can use n instead of length xs:

merge-sort : ∀ {n} (xs : Vec A n) → Sorted xs in-time (n * ⌈log₂ n ⌉)


Additionally, with lists we can only do recursion on the input list, with vectors we can do recursion on the length of the list. The former works fine for insertion sort, where in each step you do something with the head element of the list; but it fails for selection and merge sort.

On the other hand, with vectors you sometimes can't even state the property that one vector is equal to another. For the term xs ≡ ys ++ zs to be well-typed, xs must have the type Vec A (m + n).

I went back and forth a couple of times between vectors and lists. In the end I settled for using vectors only when needed, and specifying properties in terms of lists. For example the split function for merge sort has the type

splitHalf : ∀ {n} → (xs : Vec A n)
          → ∃₂ \(ys : Vec A ⌈ n /2⌉) (zs : Vec A ⌊ n /2⌋)
          → toList ys ++ toList zs ≡ toList xs


So instead of using Vec._++_, I use List._++_. In this style 'select' from selection sort looks like

select : ∀ {n} (xs : Vec A (suc n))
       → (∃₂ \y ys → (y ◂ toList ys ≡ toList xs) × (y ≤* toList ys)) in-time n


I.e. given a vector xs with n+1 elements, return a vector ys with n elements, such that inserting y into it gives us back xs. And this item y should be the smallest one.

## Extension: expected runtime

An extension of this post would be to look at randomized sorting algorithms. In particular, quick sort with a randomly chosen pivot has expected runtime O(n * log n). At first I thought that all that would be needed is a function

expected : ∀ {P}
         → (ns : List ℕ)             -- a list of numbers
         → All (\n → P in-time n) ns -- for each n we have P in-time n
         → P in-time ⌈mean ns ⌉      -- then the expected time is the mean of ns


But that is not quite right, since if we actually knew the runtimes ns we could just pick the fastest one. With the randomized quicksort you will end up in a situation where you have two or more computations to choose from, and you know that some are faster than the others, but you don't yet know which one. That sounds a bit classical. A second idea is to return the runtimes at a later time, something like

expected : ∀ {P} {long-time}
         → (xs : (List (∃ \n → P in-time n)) in-time long-time)
         → P in-time ⌈mean (map proj₁ xs) ⌉


But this is not quite right either, since after long-time steps have passed, computing P (i.e. a sort) can be done in 0 additional time. Rather, we need to decouple the proof about the runtime from the computation. This is not possible with the _in-time_ monad. We would need to get rid of the runtime from the type, and store it as a value instead.

I have tried redoing the proofs in this post with the monad

data Timed (A : Set) : Set a where
  _in-time_ : A → ℕ → Timed A

runtime : Timed A → ℕ


But I didn't succeed; I ended up with the baffling error message

runtime (big-lambda-term (unbox (x ≤? u)))
!=
runtime (big-lambda-term (unbox (x ≤? u)))


## Another extension: lower bound on runtime

So far I have proved that you can sort a list in time n * log n. It would also be interesting to look at the well known lower bound on the runtime of sorting, and prove a theorem such as

can't-sort-in-linear-time : ¬ ∃ \k → ∀ xs → Sorted xs in-time k * length xs


Unfortunately this statement is not actually true for all types. For finite types you actually can sort in linear time with counting sort. It also fails if we happen to have some decidable total order for that type lying around. But it might be possible to prove

can't-sort-in-linear-time
  : (no-fast-compare : ∀ x y → (x ≤ y ⊎ y ≤ x) in-time 0 → x ≡ y)
  → ¬ ∃ \k → ∀ xs → Sorted xs in-time k * length xs


But you have to be really careful with a term like no-fast-compare, because inside the runtime monad we do have values of type (x ≤ y ⊎ y ≤ x). And so you can derive (x ≡ y) in-time 1, and therefore also ⊥ in-time 1 for non-trivial types. Which certainly looks wrong to me.

I don't know a way around this problem, but it might be related to the same issue as expected runtime. I.e. the problem is that all information about the runtime is bundled together with the return value. The lower bound proof essentially asks to sort a 'random' list, and by a counting argument shows that at least a certain number of comparisons are needed to be able to produce all outputs.
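For reference, the counting argument itself is standard and independent of Agda: a deterministic comparison sort that makes at most $k$ comparisons has at most $2^k$ distinct execution paths, so it can produce at most $2^k$ distinct output permutations, while it must be able to produce all $n!$ of them. Hence

$$
2^k \ge n! \quad\Longrightarrow\quad k \;\ge\; \log_2 (n!) \;\ge\; \frac{n}{2}\log_2\frac{n}{2},
$$

which rules out $k = O(n)$ comparisons.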

# Upcoming talk

Next month I'll be giving a talk at the NLCS workshop, on the chiastic lambda-calculi I first presented at NASSLLI 2010 (slides[1]). After working out some of the metatheory for one of my quals, I gave more recent talks at our local PL Wonks and CLingDing seminars (slides). The NASSLLI talk was more about the linguistic motivations and the general idea, whereas the PLWonks/CLingDing talks were more about the formal properties of the calculus itself. For NLCS I hope to combine these threads a bit better, which has always been the challenge with this work.

NLCS is collocated with this year's LICS (and MFPS and CSF). I'll also be around for LICS itself, and in town for MFPS though probably not attending. So if you're around, feel free to stop by and chat.

[1] N.B., the NASSLLI syntax is a bit different than the newer version: square brackets were used instead of angle brackets (the latter were chosen because they typeset better in general); juxtaposition was just juxtaposition rather than being made explicit; and the left- vs right-chiastic distinction was called chi vs ksi (however, it turns out that ksi already has an important meaning in type theory).


# Incoming: Ruby bridge

I am working on a quick and dirty Ruby bridge library, that I hope will yield a huge performance gain with template interpolation in the language-puppet library. Right now, it is capable of:

• Initializing a Ruby interpreter from libruby
• Calling Ruby methods and functions
• Registering methods or functions that will be called from Ruby code
• Converting data between the two worlds (right now the most complex instance is the JSON one, which means that many complex Ruby types can’t be converted, but it is more than enough for passing data)
• Embedding native Haskell values that can be passed around in Ruby to the Haskell-provided external functions (I will use this for passing the Puppet catalog state around)

There are still a few things to do before releasing it:

• Making compilation a bit less dependent on the system. This will probably require quite a few flags in the cabal definition …
• Hunting for memory leaks. I am not sure how to do this with the GHC Runtime in the middle, and I do hope that ruby_finalize frees everything that is managed by the Ruby runtime. After all, restarting processes seems to be the only working garbage collection method for Ruby daemons …
• Writing stubs for the Puppet library methods that might be needed by templates. I would like to be able to support custom types and functions directly written in Ruby instead of Lua, but this will probably turn into a nightmare …
• Cleaning things up!

Here is a quick code preview:

<figure class="code"><figcaption>test.hs </figcaption>
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Foreign.Ruby.Bindings
import Data.Aeson
import Data.Attoparsec.Number

-- this is an external function that will be executed from the Ruby interpreter
-- the first parameter to the function is probably some reference to some top object
-- my knowledge of ruby is close to nonexistent, so I can't say for sure ...
extfunc :: RValue -> RValue -> IO RValue
extfunc _ v = do
    -- deserialize the Ruby value into some JSON Value
    onv <- fromRuby v :: IO (Maybe Value)
    -- and display it
    print onv
    -- now let's create a JSON object containing all kind of data types
    let nv = object [ ("bigint" , Number (I 16518656116889898998656112323135664684684))
                    , ("int"    , Number (I 12))
                    , ("double" , Number (D 0.123))
                    , ("null"   , "Null")
                    , ("string" , String "string")
                    , ("true"   , Bool True)
                    , ("false"  , Bool False)
                    , ("array"  , toJSON ([1,2,3,4,5] :: [Int]))
                    , ("object" , object [ ("k", String "v") ] )
                    ]
    -- turn it into Ruby values, and return this
    toRuby nv

-- this is the function that is called if everything was loaded properly
nextThings :: IO ()
nextThings = do
    -- turn the extfunc function into something that can be called by the Ruby interpreter
    myfunc <- mkRegistered2 extfunc
    -- and bind it to the global 'hsfunction' function
    rb_define_global_function "hsfunction" myfunc 1
    -- now call a method in the Ruby interpreter
    o <- safeMethodCall "MyClass" "testfunc" []
    case o of
        Right v -> (fromRuby v :: IO (Maybe Value)) >>= print
        Left r  -> putStrLn r

main :: IO ()
main = do
    -- initialize stuff
    ruby_init
    ruby_init_loadpath
    -- and load "test.rb"
    s <- rb_load_protect "test.rb" 0
    if s == 0
        then nextThings
        else showError >>= putStrLn
</figure>

And here is the Ruby program, which calls our external function:

<figure class="code"><figcaption>test.rb </figcaption>
class MyClass
  def self.testfunc
    hsfunction( [16588,
                 "qsqsd",
                 true,
                 { 'a' => 'b' },
                 :symbol,
                 0.432,
                 5611561561186918918918618789115616591891198189123165165889
                ] ).each do |k,v|
      puts "#{k} => #{v} [#{v.class}]"
    end
    12
  end
end
</figure>

And the output, showing that data is properly converted in both directions:

<figure class="code">
Just (Array (fromList [Number 16588,String "qsqsd",Bool True,Object fromList [("a",String "b")],String "symbol",Number 0.432,Number 5611561561186918918918618789115616591891198189123165165889]))
bigint => 16518656116889898998656112323135664684684 [Bignum]
int => 12 [Fixnum]
double => 0.123 [Float]
array => 12345 [Array]
true => true [TrueClass]
null => Null [String]
string => string [String]
object => kv [Hash]
false => false [FalseClass]
Just (Number 12)
</figure>

EDIT: added link to the code.

# Game idea: reverse bullet hell

I have come to realize that I have more ideas for programs than I'll ever have time to write. (This means they're not actually all that significant, on average — see all that's been said on ‘ideas vs. execution’.) But maybe I have the time to scribble a blog post about them, and that's stuff to blog about, if nothing else.

So, a video game idea I had today: reverse bullet-hell shooter.

A regular bullet-hell shooter is a game where you move in a 2D space dodging an immense number of mostly dumb instant-death projectiles launched in mostly predefined patterns, and trying to shoot back with dinkier, but better aimed, weapons. Instead, here you design the bullet pattern so as to trap and kill AI enemies doing the dodging.

The roles seem a bit similar to tower defense, but the space of strategies would be considerably more, ah, bumpy, since you're not doing a little bit of damage at a time and how it plays out depends strongly on the AI's choices.

That's probably the downfall of this idea: either the outcome is basically butterfly effect random due to enemy AI decisions and you mostly lose, or there are trivial ways to design undodgeable bullet patterns and you mostly win. I don't immediately see how to make the space of inputs and outcomes “smooth” enough.

# Anatomy of an MVar operation

Adam Belay (of Dune fame) was recently wondering why Haskell’s MVars are so slow. “Slow?” I thought, “aren’t Haskell’s MVars supposed to be really fast?” So I did some digging around how MVars worked, to see if I could explain.
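As a refresher before the deep dive, here is the user-visible behaviour in question: takeMVar on an empty MVar blocks the calling thread until some other thread fills it.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar

main :: IO ()
main = do
  mv <- newEmptyMVar
  _  <- forkIO $ do
          threadDelay 1000       -- make sure the main thread blocks first
          putMVar mv (42 :: Int)
  v  <- takeMVar mv              -- blocks until the put, then returns 42
  print v
```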

Let’s consider the operation of the function takeMVar in Control.Concurrent.MVar. This function is very simple, it unpacks MVar to get the underlying MVar# primitive value, and then calls the primop takeMVar#:

takeMVar :: MVar a -> IO a
takeMVar (MVar mvar#) = IO $ \ s# -> takeMVar# mvar# s#

Primops result in the invocation of stg_takeMVarzh in PrimOps.cmm, which is where the magic happens. For simplicity, we consider only the multithreaded case. The first step is to lock the closure:

("ptr" info) = ccall lockClosure(mvar "ptr");

Objects on the GHC heap have an info table header which indicates what kind of object they are, by pointing to the relevant info table for the object. These headers are also used for synchronization: since they are word-sized, they can be atomically swapped for other values. lockClosure is in fact a spin-lock on the info table header:

EXTERN_INLINE StgInfoTable *lockClosure(StgClosure *p)
{
    StgWord info;
    do {
        nat i = 0;
        do {
            info = xchg((P_)(void *)&p->header.info, (W_)&stg_WHITEHOLE_info);
            if (info != (W_)&stg_WHITEHOLE_info) return (StgInfoTable *)info;
        } while (++i < SPIN_COUNT);
        yieldThread();
    } while (1);
}

lockClosure is used for some other objects, namely thread state objects (stg_TSO_info, via lockTSO) and thread messages i.e. exceptions (stg_MSG_THROWTO_info, stg_MSG_NULL_info). The next step is to apply a GC write barrier on the MVar:

if (info == stg_MVAR_CLEAN_info) {
    ccall dirty_MVAR(BaseReg "ptr", mvar "ptr");
}

As I’ve written before, as the MVar is a mutable object, it can be mutated to point to objects in generation 0; thus, when a mutation happens, it has to be added to the root set via the mutable list. Since the mutable list is per capability, this boils down into a bunch of pointer modifications, and does not require any synchronization. Note that we will need to add the MVar to the mutable list even if we end up blocking on it, because the MVar is a retainer of the thread (TSO) which is blocked on it! (However, I suspect in some cases we can get away with not doing this.)

Next, we case split depending on whether the MVar is full or empty.
If the MVar is empty, we need to block the thread until the MVar is full:

/* If the MVar is empty, put ourselves on its blocking queue,
 * and wait until we're woken up.
 */
if (StgMVar_value(mvar) == stg_END_TSO_QUEUE_closure) {
    // We want to put the heap check down here in the slow path,
    // but be careful to unlock the closure before returning to
    // the RTS if the check fails.
    ALLOC_PRIM_WITH_CUSTOM_FAILURE
        (SIZEOF_StgMVarTSOQueue,
         unlockClosure(mvar, stg_MVAR_DIRTY_info);
         GC_PRIM_P(stg_takeMVarzh, mvar));

    q = Hp - SIZEOF_StgMVarTSOQueue + WDS(1);

    SET_HDR(q, stg_MVAR_TSO_QUEUE_info, CCS_SYSTEM);
    StgMVarTSOQueue_link(q) = END_TSO_QUEUE;
    StgMVarTSOQueue_tso(q)  = CurrentTSO;

    if (StgMVar_head(mvar) == stg_END_TSO_QUEUE_closure) {
        StgMVar_head(mvar) = q;
    } else {
        StgMVarTSOQueue_link(StgMVar_tail(mvar)) = q;
        ccall recordClosureMutated(MyCapability() "ptr", StgMVar_tail(mvar));
    }
    StgTSO__link(CurrentTSO)       = q;
    StgTSO_block_info(CurrentTSO)  = mvar;
    StgTSO_why_blocked(CurrentTSO) = BlockedOnMVar::I16;
    StgMVar_tail(mvar)             = q;

    jump stg_block_takemvar(mvar);
}

A useful thing to know when decoding C-- primop code is that StgTSO_block_info(...) and its kin are how we spell field access on objects. C-- doesn’t know anything about C struct layout, and so these “functions” are actually macros generated by utils/deriveConstants. Blocking a thread consists of three steps:

1. We have to add the thread to the blocked queue attached to the MVar (that’s why blocking on an MVar mutates the MVar!). This involves performing a heap allocation for the linked list node as well as mutating the tail of the old linked list.
2. We have to mark the thread as blocked (the StgTSO modifications).
3. We need to set up a stack frame for the thread so that when the thread wakes up, it performs the correct action (the invocation of stg_block_takemvar). This invocation is also responsible for unlocking the closure. While the machinery here is pretty intricate, it’s not really in scope for this blog post.
If the MVar is full, then we can go ahead and take the value from the MVar.

/* we got the value... */
val = StgMVar_value(mvar);

But that’s not all. If there are other blocked putMVars on the MVar (remember, when a thread attempts to put an MVar that is already full, it blocks until the MVar empties out), then we should immediately unblock one of these threads so that the MVar can always be left in a full state:

q = StgMVar_head(mvar);
loop:
    if (q == stg_END_TSO_QUEUE_closure) {
        /* No further putMVars, MVar is now empty */
        StgMVar_value(mvar) = stg_END_TSO_QUEUE_closure;
        unlockClosure(mvar, stg_MVAR_DIRTY_info);
        return (val);
    }
    if (StgHeader_info(q) == stg_IND_info ||
        StgHeader_info(q) == stg_MSG_NULL_info) {
        q = StgInd_indirectee(q);
        goto loop;
    }

There is one interesting thing about the code that checks for blocked threads, and that is the check for indirectees (stg_IND_info). Under what circumstances would a queue object be stubbed out with an indirection? As it turns out, this occurs when we delete an item from the linked list. This is quite nice, because on a singly-linked list, we don't have an easy way to delete items unless we also have a pointer to the previous item. With this scheme, we just overwrite the current item with an indirection, to be cleaned up next GC. (This, by the way, is why we can't just chain up the TSOs directly, without the extra linked list nodes. [1])

When we find some other threads, we immediately run them, so that the MVar never becomes empty:

// There are putMVar(s) waiting...
// wake up the first thread on the queue
tso = StgMVarTSOQueue_tso(q);
StgMVar_head(mvar) = StgMVarTSOQueue_link(q);
if (StgMVar_head(mvar) == stg_END_TSO_QUEUE_closure) {
    StgMVar_tail(mvar) = stg_END_TSO_QUEUE_closure;
}

ASSERT(StgTSO_why_blocked(tso) == BlockedOnMVar::I16); // note: I16 means this is a 16-bit integer
ASSERT(StgTSO_block_info(tso) == mvar);

// actually perform the putMVar for the thread that we just woke up
W_ stack;
stack = StgTSO_stackobj(tso);
PerformPut(stack, StgMVar_value(mvar));

There is one detail here: PerformPut doesn’t actually run the thread, it just looks at the thread’s stack to figure out what it was going to put. Once the MVar is put, we then wake up the thread, so it can go on its merry way:

// indicate that the MVar operation has now completed.
StgTSO__link(tso) = stg_END_TSO_QUEUE_closure;

// no need to mark the TSO dirty, we have only written END_TSO_QUEUE.

ccall tryWakeupThread(MyCapability() "ptr", tso);

unlockClosure(mvar, stg_MVAR_DIRTY_info);
return (val);

To sum up, when you takeMVar, you pay the costs of:

• one spinlock,
• on the order of several dozen memory operations (write barriers, queue twiddling), and
• when the MVar is empty, a (small) heap allocation and stack write.

Adam and I puzzled about this a bit, and then realized the reason why the number of cycles was so large: our numbers are for roundtrips, and even with such lightweight synchronization (and lack of syscalls), you still have to go through the scheduler when all is said and done, and that blows up the number of cycles.

[1] It wasn’t always this way, see:

commit f4692220c7cbdadaa633f50eb2b30b59edb30183
Author: Simon Marlow <marlowsd@gmail.com>
Date:   Thu Apr 1 09:16:05 2010 +0000

    Change the representation of the MVar blocked queue

    The list of threads blocked on an MVar is now represented as a list of
    separately allocated objects rather than being linked through the TSOs
    themselves.
    This lets us remove a TSO from the list in O(1) time rather than O(n)
    time, by marking the list object.  Removing this linear component fixes
    some pathalogical performance cases where many threads were blocked on
    an MVar and became unreachable simultaneously (nofib/smp/threads007),
    or when sending an asynchronous exception to a TSO in a long list of
    thread blocked on an MVar.

    MVar performance has actually improved by a few percent as a result of
    this change, slightly to my surprise.

    This is the final cleanup in the sequence, which let me remove the old
    way of waking up threads (unblockOne(), MSG_WAKEUP) in favour of the
    new way (tryWakeupThread and MSG_TRY_WAKEUP, which is idempotent).  It
    is now the case that only the Capability that owns a TSO may modify its
    state (well, almost), and this simplifies various things.  More of the
    RTS is based on message-passing between Capabilities now.

## May 19, 2013

### Alessandro Vermeulen

# Why you should switch to declarative programming

We are reaching the limits of what is feasible with imperative languages, and we should move to declarative languages. When applications written in imperative languages grow, the code becomes convoluted. Why?

Imperatively programmed applications contain statements such as if X do Y else do Z. As Y and Z contain invisible side effects, the correctness of the program relies on some implicit invariant. This invariant has to be maintained by the programmer or else the code will break. Thus each time a new feature is added to an application or a bug is fixed, the code for the application gets more complex, as keeping the invariant intact becomes harder. After a while the code becomes spaghetti code and bugs are introduced as the programmer fails to maintain the invariant. This is going to happen despite the best intentions of the programmer to keep things clean. Why is this? An imperative language is a type of language that tells the computer what to do and in which order.
However, most, if not all, applications are nothing but a translation of some business domain into a computer program. In order to get the imperative code the programmer has to translate the business model into a set of imperative instructions, the business logic. The imperative instructions bear little resemblance to the original description of the business model. When the business model changes, the imperative counterpart could change entirely, but what happens instead is that programmers make incremental updates to the code. This is done either because they do not see that a more drastic change is necessary or because they are under pressure to deliver results. Over time this leads to bugs and unmaintainable code. Summarized: bugs are introduced because there is a manual translation step between the business model and the program code.

Imagine a system for calculating whether a person should receive a certain allowance. To receive the allowance a person has to meet several criteria, such as age > 18 and income < 2400. We can denote this in the following way:

<notextile><figure class="code"><figcaption></figcaption>

def receives_allowance?(age, income)
  age > 18 && income < 2400
end

</figure></notextile>

There are several remarks to be made about this code. Adding a criterion such as marital status would involve adding a parameter to the function and changing the boolean expression. We could avoid having to change the parameters by choosing a single parameter of type Person that contains the information about a person. But what if we introduce time as an aspect in our criteria? We need to change the function again. And what if the age criterion changed? If the programmer erroneously codes something like age > 18 && age < 18 into the condition, we would only find the bug during testing, if we are lucky. Additionally, when our criteria become more complex we would like to extract criteria into their own functions. In short, it is easy to make mistakes this way.
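As an aside, the Person-typed variant suggested above might look like this in Haskell (a sketch; the record fields are my own names):

```haskell
-- a record bundling the inputs to the criteria, so adding a criterion
-- such as marital status only changes the record, not every call site
data Person = Person { age :: Int, income :: Int }

receivesAllowance :: Person -> Bool
receivesAllowance p = age p > 18 && income p < 2400

main :: IO ()
main = print (map receivesAllowance
                  [Person 30 2000, Person 17 2000, Person 30 3000])
```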
A solution to this is to avoid the translation process by using a declarative language. A declarative language is a language that describes what is to be computed but not how it should be computed.1 In essence: it omits control flow. By encoding, and thereby recording, the business model in a set of declarative statements, it becomes easier to spot irregularities in the business logic, as the business logic reads more like the description of the business model and all invariants are visible. In this manner the programmer no longer tells the computer how to perform a computation but rather what the computation should be. This makes the program easier to maintain.

However, we can take it to a higher level entirely. By just using a declarative language such as Haskell or Prolog, you are still using a general-purpose language and are thus lacking domain-specific checks. It would be advantageous to devise a Domain Specific Language (DSL) instead. This would be a language specifically geared towards your business domain, and it can be built easily in a language such as Haskell. Creating a DSL has a great benefit: because the business domain is written down in code, the responsibility of translating the business domain into a program shifts from the programmer to the interpreter of the DSL. This has two benefits. First, the translation is consolidated in one single point (the interpreter) and can be verified or even proven to be correct. It could be checked that no contradicting statements are present; in this sense we can compare the validation to a spell-checker or JSLint, but for our specific domain. Secondly, the programmer cannot make mistakes in the translation of the business logic to imperative code.

A simple DSL embedding the idea of the allowance could look like the text in the examples below. The interpreter / compiler is able to inspect the separate rules and check whether they would cause a tautology or contradiction. We will illustrate this with some examples.
Consider the text below; it is easy to read and it is clear what it means. Each line is a condition that has to hold, so we could read an “AND” at each line end. (Whether Income is monthly, annual or weekly is not taken into account here, but you get the picture.)

<notextile><figure class="code"><figcaption></figcaption>
A person is an adult when his Age is GREATER THAN OR EQUAL TO 18
A person is eligible for an allowance IF he is an adult
A person is eligible for an allowance IF his Income is NOT GREATER THAN €2400
</figure></notextile>

If we added a rule as shown below, we could get a warning or error from the interpreter telling us that we have two different conditions on a person’s age.

<notextile><figure class="code"><figcaption></figcaption>
A person is an adult when his Age is GREATER THAN OR EQUAL TO 18
A person is eligible for an allowance IF he is an adult
A person is eligible for an allowance IF his Income is NOT GREATER THAN €2400
A person is eligible for an allowance IF his Age is GREATER THAN 21
</figure></notextile>

And if we added a rule like this, it could warn us that no-one will ever get an allowance.

<notextile><figure class="code"><figcaption></figcaption>
A person is an adult when his Age is GREATER THAN OR EQUAL TO 18
A person is eligible for an allowance IF he is an adult
A person is eligible for an allowance IF his Income is NOT GREATER THAN €2400
A person is eligible for an allowance IF his Age is LESS THAN 12
</figure></notextile>

Not only would the interpreter be able to spot these kinds of errors, but it would also be easier for the writer of the rules to spot whether there is a mistake. The astute reader will have noticed that we have not included any control flow in our language, making it a declarative language.
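A toy version of such a contradiction check (my sketch in Python, not the post's actual DSL) can treat each rule as an interval constraint on one attribute and flag any attribute whose combined interval is empty:

```python
import math

def check_rules(rules):
    """rules: (attribute, low, high) triples meaning low <= value < high.
    Returns, per attribute, whether the combined constraints are satisfiable."""
    bounds = {}
    for attr, low, high in rules:
        lo, hi = bounds.get(attr, (-math.inf, math.inf))
        # Intersect this rule's interval with what we have so far.
        bounds[attr] = (max(lo, low), min(hi, high))
    return {attr: lo < hi for attr, (lo, hi) in bounds.items()}

# "Age >= 18" together with "Age < 12" can never both hold:
print(check_rules([("age", 18, math.inf), ("age", -math.inf, 12)]))
# {'age': False} -- the interpreter can warn that no-one gets an allowance
```

A real rule interpreter would of course handle more than numeric intervals, but this is the essence of the "no contradicting statements" check the post describes.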
Because the business logic is now represented by the DSL, it can be written by domain experts instead of the programmers of the application itself. The compiler can provide feedback when something is wrong in the DSL, and there is less chance of errors in the implementation of the business logic. Additionally, this frees the programmers to implement better translations of the DSL or to work on other projects, saving time and other resources.

To summarise, four reasons why you should switch to declarative languages:

1. Direct translation of the business model into business logic
2. Better readability of the business logic
3. Better scalability of the program in terms of functionality
4. Fewer bugs

And three reasons why you should use DSLs to boot:

1. Free up programmers to do important stuff
2. Let the domain experts handle the business logic and have it machine-checked!
3. It is just awesome

## May 16, 2013

### language-puppet

# language-puppet v0.4.0

I just released the latest language-puppet version. For the full list of changes, please take a look at the changelog. Here are the highlights.

### PuppetDB code reworked

The PuppetDB code and API have been completely overhauled. They are now more generic: the resource collection and puppet query functions now work the same. Additionally, a PuppetDB stub has been created for testing use.

### Better diagnostic facilities

As the main use of this library is to test stuff, the following features were added:

• Several error messages have been reworked so that they are more informative.
• A dumpvariables built-in function has been added. It just prints all known variables (and facts) to stdout, and can be quite handy.
• The “scope stack” description is stored with the resources. This turned out to be extremely useful when debugging resource name collisions or to find out where some resource is defined.

Here is an example: let’s say you do not remember which module installs the collectd package.
Just run this:

<figure class="code">
» puppetresources . default.domain 'Package[collectd]'
package { "collectd": #"./modules/collectd/manifests/base.pp" (line 4, column 9) ["::","site::baseconfig","collectd","collectd::client","collectd::base"]
    ensure  => "installed",
    require => [Class["collectd"], Class["collectd::base"], Class["collectd::client"], Class["site::baseconfig"]];
}
</figure>

You now know exactly where the package resource is declared, and the list of “scopes” that have been traversed in order to do so. Note that this information is also displayed when resource names collide.

### Easier to set up

This library doesn’t depend on a newish bytestring anymore, and should build with the package provided with a GHC compiler of the 7.6.x series. This is not yet done, but I will certainly soon publish a debian-style repository of the compiled puppetresources binary. I am interested in suggestions for an automated building system.

### Better testing

The testing API seems sufficient to write pretty strong tests, but would still benefit from a few more helper functions. The testing “daemon” has been reworked to use the new PuppetDB stub. This makes it possible to test complex interactions between hosts using the exported resources or PuppetDB query features.

### Work in progress

I will probably lensify the code until I get a decent understanding of it. I do not intend to work on Hiera emulation just yet, as I am probably the only user of this library for now and I do not use this feature. One area of improvement would be to embed the Ruby interpreter in the library. I am not sure how to do this, but as there are quite a few projects of lightweight interpreters sprouting from the earth, it might be possible in the near future. The only problem would be figuring out how to build a large C project with cabal.

### Some other considerations

I recently ported the code from random.c to Haskell (here). This has been quite tedious, and the result is quite hard to read.
This is an almost naive port of the code found in the Ruby interpreter, without the useless loop variables. For some reason, there are many loops like this:

<figure class="code">
i=1; j=0;
k = (N>key_length ? N : key_length);
for (; k; k--) {
    mt->state[i] = (mt->state[i] ^ ((mt->state[i-1] ^ (mt->state[i-1] >> 30)) * 1664525U))
        + init_key[j] + j; /* non linear */
    mt->state[i] &= 0xffffffffU; /* for WORDSIZE > 32 machines */
    i++; j++;
    if (i>=N) { mt->state[0] = mt->state[N-1]; i=1; }
    if (j>=key_length) j=0;
}
</figure>

As you can see, the value of k is never used in the loop. I am not sure why the author didn’t go for something like:

<figure class="code">
for(i=1;i
</figure>

Anyway, the Haskell code is pretty bad, and will certainly only work on 64-bit builds. I am not sure how I should have written it. I suppose staying in the ST monad would have led to nicer code, and I am open to suggestions.

### Philip Wadler

# From Session Types to Data Types: RA posts and PhD studentships

I've posted this elsewhere, but neglected to blog before now. We are recruiting for research associate positions in design and implementation of programming languages, and also may have PhD studentships available this year and next. The posts are on the project "From Data Types to Session Types: A Basis for Concurrency and Distribution", a programme grant funded by EPSRC for five years from 20 May 2013, joint with Simon Gay at the University of Glasgow and Nobuko Yoshida at Imperial College London. The RA post at Edinburgh is for an initial period of 24 months, with the possibility of extension, and is on the UE07 scale (£30,424 - £36,298). The deadline for applications for the RA post is Monday 20 May 2013; anyone interested in a PhD studentship should apply as soon as possible. Glasgow and Imperial are also recruiting. Please contact me if you are interested in either the RA post or a PhD studentship. Further description of the Edinburgh RA post is below.
Project Description

Just as data types describe the structure of data, session types describe the structure of communication between concurrent and distributed processes. Our project has a particular emphasis on putting theory into practice, by embedding session types in a range of programming languages and applying them to realistic case studies. The research programme is joint between the University of Edinburgh, the University of Glasgow, and Imperial College London, and includes collaboration with Amazon, Cognizant, Red Hat, VMware, and the Ocean Observatories Initiative.

Principal Duties

The successful candidate will join a team responsible for extending the functional web programming language Links with session types to support concurrency and distribution. We will test our techniques by providing a library to access Amazon Web Services (AWS) cloud computing infrastructure, and perform empirical experiments to assess how our language design impacts the performance of programmers. You should possess a PhD in a relevant area, or be nearing completion of same, or have comparable experience. You should have a track record of publication, or other evidence of ability to undertake research and communicate well. You should have a strong background in programming languages, including type systems, and strong programming and software engineering skills. It is desirable for candidates to also have one or more of the following: a combination of theoretical and practical skills; experience of web programming or cloud programming; knowledge of the theory or practice of concurrent and distributed systems; knowledge of linear logic; or training in empirical measurement of programming tasks. We seek applicants at an international level of excellence. The Laboratory for Foundations of Computer Science is internationally renowned, the School of Informatics at Edinburgh is among the strongest in the world, and Edinburgh is known as a cultural centre providing a high quality of life.
Further details of the RA post, including how to apply, are here.

### Bryn Keller

# Hot Towel SPA Is a Great Starter

A few months ago, John Papa released a Visual Studio template called Hot Towel SPA, which Scott Hanselman kindly pointed out to me. SPA, as all the hip kids will tell you, stands for Single Page Application. That is, the kind of application that you start by visiting a web page, and you stay on that same page for as long as you use the application, as opposed to most web applications, where you skip from page to page as you interact with the site. People have been doing this for a long time, of course, but the Hot Towel SPA starter really is a nice introduction to the style.

In a SPA, you really need to think of the browser as “the client,” a standalone entity that communicates with your server via (web) API calls. Once you get used to it, it’s really rather refreshing, and it allows you to take advantage of all the computing power on the client machine in a way that can be quite liberating.

Hot Towel uses a JavaScript application framework called Durandal to structure the client-side code. It divides the world up into services (JavaScript modules, basically), views, and view models. All of this is just for the JavaScript side of things, remember – you may also have views and view models and so on on the server side, but that’s a different thing – you’ll interact with those via AJAX calls, usually using JSON to encode the data. Hot Towel uses HTML for the views and lets Knockout do the view composition and data binding, which makes it a good source of examples for learning Knockout as well. The JavaScript code is nicely modular, in the style of require.js. If you’ve not seen this style, it’s worth checking out. Basically, you declare all the dependencies of your JavaScript module, and the framework asynchronously loads them as necessary and passes them to your module.
It’s great documentation, great for structuring the code so you don’t get circular dependencies, and makes for easier unit testing too, since you can easily supply alternative (mock) implementations of your module’s dependencies. On the server, the MVC code is well organized as well, and it’s straightforward to plug in your new Web API controllers and start coding. I’ve been playing with this starter for a while, working on a proof of concept for a new series of articles on my blog. I found Hot Towel to be a great starting point, and it opened my eyes to some interesting new techniques on the client side. Give Hot Towel a try for your next project; it’ll be fun.

### Philip Wadler

# Aleph Cloud

Aleph Cloud is looking to hire Haskell programmers.

### Snap Framework

# Announcing: Snap Framework v0.12

The Snap team is happy to announce the release of version 0.12 of the Snap Framework.

## New features

• Heist now has the ability to reload templates. Along with this, HeistConfig now stores template locations instead of templates. A template location is essentially an IO action returning templates. This allows you to have Heist get its templates from a database, over the network, etc. – anything that can be done from IO.
• The Heist snaplet now has generic functions that can work with either interpreted or compiled templates. Most applications will choose one of either interpreted or compiled templates and not need this new functionality. However, if you are writing a generic snaplet, then you probably want it to work no matter which mode the end application uses. All you need to do is import the Snap.Snaplet.Heist.Generic module. The Heist snaplet defaults to compiled mode. If you want to use interpreted mode, call the setInterpreted function in your application initializer.
• It is now possible to reload individual snaplets without reloading the whole site. The snaplet API now includes a function modifyMaster that you can use to write reload functions for individual snaplets.
The Heist snaplet now provides a reloader leveraging this functionality. We found this very useful in allowing us to rapidly iterate when making changes to markup in large applications that take a long time to initialize.

## Bugfixes / minor improvements

• Generalized parts of the compiled splice API to use RuntimeSplice n a instead of the less general n a. Since RuntimeSplice is a monad transformer, this change only requires you to add a call to lift in places where you have an n a.
• Fixed Heist’s runAttributesRaw function to do both types of attribute parsing.
• Fixed a Heist bug that caused XML templates to be rendered as HTML5.
• Improved the consistency of the auth snaplet API.
• Eliminated the inappropriate export of orphan instances.

## May 15, 2013

### Wolfgang Jeltsch

# ucs 2.2 released

I have released a new version of the LaTeX ucs package yesterday, which is version 2.2. It fixes the bug that is described in the TeX StackExchange question Conflict between ifxetex and ucs under pdflatex/xelatex – why?.

Tagged: CTAN, LaTeX, software release, StackExchange, TeX, ucs (LaTeX package), Unicode

## May 14, 2013

### Functional Jobs

# Test Engineer at Klarna (Full-time)

For us, QA should be part of the entire development process, from concept to execution. We believe in agile methodologies and believe that to be able to build large-scale, world-class systems we need to focus on the users, their needs, and how we can make sure that we deliver what they want. Now we are looking for Test Engineers who share our view on how to do QA in product development. As a Test Engineer, you will be part of creating a world-class agile software development organisation where QA is at the center. We are not there today, but that is of course just another challenge. As a Test Engineer you are part of an agile development team that consists of around 5-8 members. You work closely with the developers and will develop the ways you work with testing within your team.
The developers do unit tests, and you will mostly focus on white-box testing but also some black-box testing. The Test Engineer also creates tools and test frameworks when needed, to be used inside the team. We believe that you should automate what can be automated. This means that your mission is to implement test cases, automate tests and make sure we deliver in accordance with our high demands. Since we are forming our future test organisation, it is a great opportunity to help set up and develop our ways of testing. Make your ideas part of our overall plan.

Requirements:

• Degree in computer science or similar degree with a focus on programming
• 3+ years of experience in testing in medium- or large-scale software development
• Experience of agile software development (TDD / Scrum / Kanban)
• Broad skills in programming (preferably functional programming such as Erlang, Lisp, Scala, Scheme, F#, Haskell etc.)
• Great skills in automation and scripting (Perl, Bash, Python etc.)
• Broad experience in white-box testing frameworks such as xUnit

Preferred skills:

• ISTQB certificate (or ISEB, CSTE)
• Experience of BDD, Behaviour-Driven Development, or ATDD, Acceptance Test Driven Development
• Experience in different higher-level testing frameworks (Selenium, Cucumber etc.)

We offer you an international working environment filled with smart and ambitious colleagues. As an employee at one of Sweden’s fastest growing companies, you will play an important role in taking Klarna to the next level. You won’t work for Klarna, you’ll work with Klarna.

For questions please contact Philip Andrén at philip [dot] andren [at] klarna [dot] com or +46 (0) 76-526 00 70. We recommend you to apply as soon as possible; selection and interviews are held continuously.

Location: Stockholm, Sweden

Get information on how to apply for this position.

### Edward Z. Yang

# HotOS “Unconference” report: Verifying Systems

Ariel Rabkin has some code he'd like to verify, and at this year’s HotOS he appealed to participants of one “unconference” (informal breakout sessions to discuss various topics) to help him figure out what was really going on as far as formal verification went. He had three questions: "What can we verify? What is impossible to verify? How can we tell that the verification is correct?" They seemed like impossibly large questions, so we drilled in a little bit, and found that Ariel had the very understandable question, "Say I have some C++ code implementing a subtle network protocol, and I'd like to prove that the protocol doesn't deadlock; how do I do that?"

I wish the formal verification community had a good answer to a question like this, but unfortunately we don't. The largest verification projects include things like verified kernels, which are written in fully specified subsets of C; these assume the translation performed by the compiler is correct, formalize C in a theorem prover, and then verify there. This is the "principled approach". It's just not feasible to take C or C++ in its entirety and try to formalize it; it's too complicated and too ill-specified. The easiest thing to do is formalize a small fragment of your algorithm and then make a hand-wavy argument that your implementation is adequate.

Martin Abadi remarked that before you embark on a verification project, you have to figure out where you'll get the most bang for your buck. Most of the time, a formalization won't get you "full correctness"; the "electrons may be faulty", as the case may be. But even a flawed verification forces you to state your assumptions explicitly, which is a good thing.

We then circled around to the subject of, well, what can be verified. Until the 90s, the formal verification community limited itself to only complete and sound analyses—and failed.
The relaxation of this restriction led to a renaissance of formal verification work. We talked about who was using formal verification, and the usual suspects showed up: safety-critical software, cache coherence protocols (though one participant remarked that this was only a flash in the pan as far as applications go—he asserted that these companies are likely not going to use these methods any longer in the future), etc. Safety-critical software is also likely to use coprocessors (since hardware failure is a very real issue), but Gernot Heiser noted that these folks are trying to get away from physical separation: it is expensive in terms of cost, weight and energy. Luckily, the costs of verification, as he recounted, are within a factor of two of normal industrial assurance, and half the cost of military assurance (though he cautioned that this was for a specific project, and for a specific size of code). He also remarked that as far as changes to code requiring changes to the proofs go, the changes in the proofs seemed to be linear in the complexity (conceptual or implementation-wise) of the change, which is a good sign!

Well, supposing that you decide that you actually want to verify your software, how do you go about doing it? Unfortunately, it takes a completely different set of skills to build verified software versus normal software. Everyone agreed, "Yes, you need to hire a formal methods guy" if you're going to make any progress. But that's just not enough. The formal methods guy has to talk to the systems guy. Heiser recounted a very good experience hiring a formal methods person who was able to communicate with the other systems researchers working on the project; without this line of communication, he said, the project likely would have failed. And he mentioned another project, which had three times as much funding but didn't accomplish nearly as much as their team had. (Names not mentioned to protect the guilty.)
In the end, it seemed that we didn’t manage to give Ari a quite satisfactory answer. As one participant said, “You’ll probably learn the most by just sitting down and trying to formalize the thing you are interested in.” This is probably true, though I fear most will be scared off by the realization of how much work it actually takes to formalize software.

Hey guys, I’m liveblogging HotOS at my research Tumblr. The posts there are likely to be more fragmented than this, but if people are interested in any particular topics I can inflate them into full posts.

### Alson Kemp

# Go experiment: de-noising

CoffeeScript is a great example of how to de-noise a language like JavaScript. (Of course, I know people who consider braces to be a good thing, but lots of us consider them noise and prefer significant whitespace, so I’m speaking to those folks.) What would Go code look like with some of CoffeeScript’s de-noising? TL;DR: the answer is that de-noised Go would not look much different than normal Go…

As an experiment, I picked some rules from CoffeeScript and re-wrote the Mandelbrot example from The Computer Language Benchmarks Game. Note: this is someone else’s original Go code, so I can’t vouch for the quality of the Go code…

Here’s the original Go code:

/* targeting a q6600 system, one cpu worker per core */
const pool = 4

const ZERO float64 = 0
const LIMIT = 2.0
const ITER = 50   // Benchmark parameter
const SIZE = 16000

var rows []byte
var bytesPerRow int

// This func is responsible for rendering a row of pixels,
// and when complete writing it out to the file.
func renderRow(w, h, bytes int, workChan chan int, iter int, finishChan chan bool) {
	var Zr, Zi, Tr, Ti, Cr float64
	var x, i int
	for y := range workChan {
		offset := bytesPerRow * y
		Ci := (2*float64(y)/float64(h) - 1.0)
		for x = 0; x < w; x++ {
			Zr, Zi, Tr, Ti = ZERO, ZERO, ZERO, ZERO
			Cr = (2*float64(x)/float64(w) - 1.5)
			for i = 0; i < iter && Tr+Ti <= LIMIT*LIMIT; i++ {
				Zi = 2*Zr*Zi + Ci
				Zr = Tr - Ti + Cr
				Tr = Zr * Zr
				Ti = Zi * Zi
			}
			// Store the value in the array of ints
			if Tr+Ti <= LIMIT*LIMIT {
				rows[offset+x/8] |= (byte(1) << uint(7-(x%8)))
			}
		}
	}
	/* tell master I'm finished */
	finishChan <- true
}

My quick de-noising rules are:

• Eliminate var since it can be inferred.
• Use ‘:’ instead of const (a la Ruby’s symbols).
• Eliminate func in favour of ‘->’ and variables for functions.
• Replace braces {} with significant whitespace.
• Replace C-style comments with shell comments ‘#’.
• Try to leave other spacing alone so as not to fudge the line count.
• Replace simple loops with an “in” and range form.

The de-noised Go code:

# targeting a q6600 system, one cpu worker per core
:pool = 4

:ZERO float64 = 0   # These are constants
:LIMIT = 2.0
:ITER = 50          # Benchmark parameter
:SIZE = 16000

rows []byte
bytesPerRow int

# This func is responsible for rendering a row of pixels,
# and when complete writing it out to the file.
renderRow = (w, h, bytes int, workChan chan int, iter int, finishChan chan bool) ->
	Zr, Zi, Tr, Ti, Cr float64
	x, i int
	for y := range workChan
		offset := bytesPerRow * y
		Ci := (2*float64(y)/float64(h) - 1.0)
		for x in [0..w]
			Zr, Zi, Tr, Ti = ZERO, ZERO, ZERO, ZERO
			Cr = (2*float64(x)/float64(w) - 1.5)
			i = 0
			while i++ < iter && Tr+Ti <= LIMIT*LIMIT
				Zi = 2*Zr*Zi + Ci
				Zr = Tr - Ti + Cr
				Tr = Zr * Zr
				Ti = Zi * Zi
			# Store the value in the array of ints
			if Tr+Ti <= LIMIT*LIMIT
				rows[offset+x/8] |= (byte(1) << uint(7-(x%8)))
	# tell master I'm finished
	finishChan <- true

That seems to be a pretty small win in return for a syntax adjustment that does not produce significantly enhanced readability. Some bits are nice: I prefer the significant whitespace, but the braces just aren’t that obtrusive in Go; I do prefer the shell comment style, but it’s not a deal breaker; the simplified loop is nice, but not incredible; eliding “var” is okay, but harms readability given the need to declare the types of some variables; I do prefer the colon for constants. Whereas CoffeeScript can dramatically shorten and de-noise a JavaScript file, it looks as though Go is already pretty terse. Obviously, I didn’t deal with all of Go in this experiment, so I’ll look over more of it soon, but Go appears to be quite terse already given its design…

### Tom Moertel

# Tricks of the trade: Recursion to Iteration, Part 2: Eliminating Recursion with the Time-Traveling Secret Feature Trick

Posted on May 14, 2013

This is the second post in a series on converting recursive algorithms into iterative algorithms. If you haven’t read the previous post, you probably should. It introduces some terms and background that will be helpful. Last time, if you’ll recall, we discussed The Simple Method of converting recursive functions into iterative functions. The method, as its name implies, is straightforward and mostly mechanical.
The only potential trouble is that, for the method to work, you must convert all of the recursive calls in your function into tail calls. This task can be tricky. So we also discussed the Secret Feature trick for putting recursive calls into tail-call form. That trick works well for simple recursive calls, but when your calls aren’t so simple, you need a beefier version of the trick. That’s the subject of this post: the Time-Traveling Secret Feature trick. It’s like sending a T-800 back in time to terminate a function’s recursiveness before it can ever offer resistance in the present. Yeah. But we’ll have to work up to it. So stick with the early examples to prepare for the cybernetic brain augmentations to come. Enough talk! Let’s start with a practical example.

### Computing binomial coefficients

The binomial coefficient $\genfrac{}{}{0}{}{n}{k}$ for integers $n$ and $k$ gives the number of ways to choose $k$ items from a set of $n$ items. For this reason, it’s often pronounced “$n$ choose $k$.” It’s very common in combinatorics and statistics and often pops up in the analysis of algorithms. A whole chapter, in fact, is dedicated to them in Graham, Knuth, and Patashnik’s Concrete Mathematics. Math textbooks define the coefficient like this: $\genfrac{}{}{0}{}{n}{k}=\frac{n!}{k!\left(n-k\right)!},$ but that form causes all sorts of problems for computers. Fortunately, Concrete Mathematics helpfully points out a lifesaving absorption identity: $\genfrac{}{}{0}{}{n}{k}=\frac{n}{k}\genfrac{}{}{0}{}{n-1}{k-1},$ and we know what that is, don’t we? That’s a recursive function just waiting to happen! And that identity, along with the base case $\genfrac{}{}{0}{}{n}{0}=1$, gives us today’s running example:

def binomial(n, k):
    if k == 0:
        return 1
    return n * binomial(n - 1, k - 1) // k

Before going further, a cautionary point. Math math and computer math are not the same.
In math math, $\frac{nx}{k}=\frac{n}{k}x=\frac{x}{k}n$, but in computer math the equation does not necessarily hold because of finite precision or, in our case, because we’re dealing with integer division that throws away remainders. (By the way, in Python, // is the division operator that throws away remainders.) For this reason, I have been careful to use the form n * binomial(n - 1, k - 1) // k instead of the more literal translation (n // k) * binomial(n - 1, k - 1) which, if you try it, you’ll see often produces the wrong answer. Okay, our challenge is set before us. Ready to de-recursivify binomial?

### Once again, the Secret Feature trick

First, however, we must put the function’s recursive call into tail-call form. And you remember how to do that, right? With the Secret Feature trick, of course! As a refresher from last time, here’s the trick in a nutshell:

1. Find a recursive call that’s not a tail call.
2. Identify what work is being done between that call and its return statement.
3. Extend the function with a secret feature to do that work, as controlled by a new accumulator argument with a default value that causes it to do nothing.
4. Use the secret feature to eliminate the old work.
5. You’ve now got a tail call!
6. Repeat until all recursive calls are tail calls.

Let’s go through it, by the numbers.

1. Find a recursive call that’s not a tail call. Well, there’s only three lines of logic. We’re not talking rocket science here.

def binomial(n, k):
    if k == 0:
        return 1
    return n * binomial(n - 1, k - 1) // k
    #          ^^^^^^^^^^^^^^^^^^^^^^
    #          right here!

2. Identify what work is being done between that call and its return statement. In our mind’s eye, let’s lift the recursive call out of the return statement and replace it with the variable x.
x = binomial(n - 1, k - 1)
return n * x // k

Now we can visualize the additional work as a separate function operating on x: multiplying it on the left by one number and dividing it on the right by another:

def work(x, lmul, rdiv):
    return lmul * x // rdiv

For functions this simple, you can just hold them in your head and inline them into your code as needed. But for more-complicated functions, it really does help to break them out like this. For this example, I’m going to pretend that our work function is more complicated, just to show how to do it.

3. Extend the function with a secret feature to do that work, as controlled by a new accumulator argument – in this case a pair (lmul, rdiv) – with a default value that causes it to do nothing.

def work(x, lmul, rdiv):
    return lmul * x // rdiv

def binomial(n, k, lmul=1, rdiv=1):
    if k == 0:
        return work(1, lmul, rdiv)
    return work(n * binomial(n - 1, k - 1) // k, lmul, rdiv)

Note that I just mechanically converted all return {whatever} statements into return work({whatever}, lmul, rdiv).

4. Use the secret feature to eliminate the old work. Watch what happens to that final line.

def work(x, lmul, rdiv):
    return lmul * x // rdiv

def binomial(n, k, lmul=1, rdiv=1):
    if k == 0:
        return work(1, lmul, rdiv)
    return binomial(n - 1, k - 1, lmul * n, k * rdiv)

5. You’ve now got a tail call! Indeed, we do! All that’s left is to inline the sole remaining work call:

def binomial(n, k, lmul=1, rdiv=1):
    if k == 0:
        return lmul * 1 // rdiv
    return binomial(n - 1, k - 1, lmul * n, k * rdiv)

And to simplify away the needless multiplication by 1 in the return statement:

def binomial(n, k, lmul=1, rdiv=1):
    if k == 0:
        return lmul // rdiv
    return binomial(n - 1, k - 1, lmul * n, k * rdiv)

6. Repeat until all recursive calls are tail calls. There was only one, so we’re done! Yay! With all the recursive calls (just the one) converted into tail calls, we can easily convert the function into iterative form by applying the Simple Method.
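Before mechanizing the loop, it is worth convincing ourselves that the transformation preserved the function's meaning. A quick check of mine (not in the original post) compares the secret-feature version against Python's math.comb:

```python
import math

def binomial(n, k, lmul=1, rdiv=1):
    # final tail-call form with the (lmul, rdiv) secret feature
    if k == 0:
        return lmul // rdiv
    return binomial(n - 1, k - 1, lmul * n, k * rdiv)

# exhaustively compare against the standard library for small inputs
for n in range(12):
    for k in range(n + 1):
        assert binomial(n, k) == math.comb(n, k)

print(binomial(39, 9))  # 211915132
```

The division lmul // rdiv is exact here, since lmul accumulates the falling factorial n·(n−1)⋯(n−k+1) and rdiv accumulates k!.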
Here’s what we get after we load the function body into a one-shot loop and convert the tail call into an assignment-and-continue pair. def binomial_iter(n, k, lmul=1, rdiv=1): while True: if k == 0: return lmul // rdiv (n, k, lmul, rdiv) = (n - 1, k - 1, lmul * n, k * rdiv) continue break As a final touch, we can tidy up and in the process tuck the accumulators inside the function body to keep them completely secret: def binomial_iter(n, k): lmul = rdiv = 1 while k > 0: (n, k, lmul, rdiv) = (n - 1, k - 1, lmul * n, k * rdiv) return lmul // rdiv And now we have an iterative version of our original binomial function! ### A short, cautionary lesson This next part is subtle but important. To understand what’s going on, you first need to see it. So, once again, let’s use the Online Python Tutor’s excellent Python-runtime visualizer. Open the link below in a new tab if you can, and then read on. Visualize the execution of binomial(39, 9). Click the Forward button to advance through each step of the computation. When you get to step 22, where the recursive version has fully loaded the stack with its nested frames, click slowly. Watch the return value (in red) carefully as you advance. See how it gently climbs to the final answer of 211915132, never exceeding that value? Now continue stepping through to the iterative version. Watch the value of lmul as you advance. See how it grows rapidly, finally reaching a whopping 76899763100160? This difference matters. While both versions computed the correct answer, the original recursive version would do so without exceeding the capacity of 32-bit signed integers.1 The iterative version, however, needs a comparatively hoggish 47 bits to faithfully arrive at the correct answer. Python’s integers, lucky for us, grow as needed to hold their values, so we need not fear overflow in this case, but not all languages for which we might want to use our recursion-to-iteration techniques offer such protections. 
It’s something to keep in mind the next time you’re taking the recursion out of an algorithm in C: typedef int32_t Int; Int binomial(Int n, Int k) { if (k == 0) return 1; return n * binomial(n - 1, k - 1) / k; } Int binomial_iter(Int n, Int k) { Int lmul = 1, rdiv = 1; while (k > 0) { lmul *= n; rdiv *= k; n -= 1; k -= 1; } return lmul / rdiv; } int main(int argc, char* argv[]) { printf("binomial(39, 9) = %ld\n", (long) binomial(39, 9)); printf("binomial_iter(39, 9) = %ld\n", (long) binomial_iter(39, 9)); } /* Output with Int = int32_t: binomial(39, 9) = 211915132 binomial_iter(39, 9) = -4481 <-- oops! */ In any case, bigger integers are slower and consume more memory. In one important way, the original, recursive algorithm was better. We have lost something important. Now let’s get it back. ### The importance of respecting associativity and commutativity It turns out that our iterative version of binomial is the original’s evil twin. Its stack usage looks charming and all, but something else about it is off. Something hard to put our finger on. What is the difference? It all comes down to grouping and ordering decisions. Implicit grouping and ordering decisions that we overlooked during our code transformations. Consider how both versions compute $\genfrac{}{}{0}{}{5}{2}$. First, the recursive version. We start out with $n=5,k=2$ and then recursively expand the expression $n\genfrac{}{}{0}{}{n-1}{k-1}/k$ until we hit the base case of $k=0$. The series of expansions proceeds like so: $\genfrac{}{}{0}{}{5}{2}=5\genfrac{}{}{0}{}{4}{1}/2=5\cdot \left(4\genfrac{}{}{0}{}{3}{0}/1\right)/2=5\cdot \left(4\cdot \left(1\right)/1\right)/2=10.$ At every step (except for the base case) we perform a multiplication by $n$ and then a division by $k$. It’s that division by $k$ that keeps intermediate results from getting out of hand. Now lets look at the iterative version. It’s not so obvious what’s going on. (Score another point for recursion over iteration!) 
But we can puzzle it out. We start out with both accumulators at $1$ and then loop over the decreasing values of $k$, building up the accumulators until $k=0$, at which point we return the quotient of the accumulators. So the calculation is given by $\genfrac{}{}{0}{}{5}{2}=\frac{\mathit{\text{lmul}}}{\mathit{\text{rdiv}}}=\frac{\left(\left(1\right)\cdot 5\right)\cdot 4}{\left(\left(1\right)\cdot 2\right)\cdot 1}=10.$ Both the numerator and denominator grow and grow and grow until the final division. Not a good trend. So why did the Secret Feature trick work great for factorial in our previous article but fail us, albeit subtly, now? The answer is that in factorial the extra work being done between each recursive call and its return statement was multiplication and nothing more. And multiplication (of integers that don’t overflow) is commutative and associative. Meaning, grouping and ordering don’t matter. The lesson, then, is this: Think carefully about whether ordering and grouping matter before using the Secret Feature trick. If it matters, you have two options: One, you can modify your algorithm so that ordering and grouping don’t matter and then use the Secret Feature trick. (But this option is often intractable.) Two, you can use the Time-Traveling Secret Feature trick, which preserves ordering and grouping. And that trick is what we’ve been waiting for. It’s time. ### The Time-Traveling Secret Feature trick What’s about to happen next is so mind-bendingly awesome that you should probably get a drink to prepare yourself. It’s like combining a T-800 from The Terminator, with the dreams-within-dreams layering from Inception, with the momentary inversion of reality that occurs when the quantum field surrounding an inductive proof collapses and everything snaps into place. It’s. Really. Cool. Got your drink? Okay, let’s do this. 
Let’s go back to our original, recursive binomial function: def binomial(n, k): if k == 0: return 1 return n * binomial(n - 1, k - 1) // k Our goal is to create an equivalent iterative version that exactly preserves the properties of the original. Well, how the hell are we going to do that? Hold that thought for a sec. Let’s just stop for a moment and think about what that recursive function is doing. We call it to compute some answer ${x}_{t}$. But that answer depends on some earlier answer ${x}_{t-1}$ that the function doesn’t know, so it calls itself to get that answer. And so on. Until, finally, it finds one concrete answer ${x}_{0}$ that it actually knows and, from which, it can build every subsequent answer. So, in truth, our answer ${x}_{t}$ is just the final element in a whole timeline of needed earlier answers ${x}_{i}$: $\left({x}_{0},{x}_{1},\dots {x}_{t}\right)=\left({x}_{i}\mid i=0,1,\dots t\right).$ Well, so what? Why should we care? Because we’ve watched The Terminator about six hundred times! We know how to get rid of a problem in the present when we’ve seen its timeline: We send a Terminator into the past to rewrite that timeline so the problem never gets created in the first place. And our little recursive problem with binomial here? We’ve seen its timeline. That’s right. It’s about to get real. One more thing. Every single one of these steps preserves the original function’s behavior. You can run your unit tests after every step, and they should all pass. Let’s begin. 1. Send a T-800 terminator unit into the function’s timeline, back to the time of ${x}_{0}$. Oh, we don’t have a T-800 terminator unit? No problem. We’ll just invoke an induction hypothesis: Assume we’ve sent a T-800 terminator unit into the function’s timeline, back to the time of ${x}_{0}$. That was easy. What’s next? 2. Move the recursive call and surrounding work into a helper function. This might seem like overkill but prevents mistakes. Do you make mistakes? 
Then just make the helper function already. I’m calling it step for reasons that will shortly become obvious. def step(n, k): return n * binomial(n - 1, k - 1) // k def binomial(n, k): if k == 0: return 1 return step(n, k) 3. Partition the helper function into its 3 fundamental parts. They are: 1. the recursive call to get the previous answer ${x}_{i-1}$, 2. the work to compute the current answer ${x}_{i}$ from ${x}_{i-1}$, and 3. a return statement applied to ${x}_{i}$ alone: def step(n, k): previous_x = binomial(n - 1, k - 1) # part (1) x = n * previous_x // k # part (2) return x # part (3) def binomial(n, k): if k == 0: return 1 return step(n, k) 4. Secretly prepare to receive communications from the T-800 terminator unit. You see, once the T-800 gets its cybernetic hands on earlier values in the timeline, we can use its knowledge to eliminate the recursion. We just need some way to get that knowledge from the T-800, now stuck in the past, to us, stuck in the present. Fortunately, we’ve seen time-travel movies, so we know exactly how this problem is solved. We just use a dead drop! That’s a prearranged secret location where the T-800 will drop values in the past and where we will check for them in the future. So, per prior arrangement with the T-800, we’ll extend our helper function with a secret feature that checks the dead drop for a previous value ${x}_{i-1}$ and, if one’s there, uses it to – muahahahaha! – break the recursive call chain: def step(n, k, previous_x=None): # <- new stuff if previous_x is None: # < previous_x = binomial(n - 1, k - 1) # < x = n * previous_x // k return x def binomial(n, k): if k == 0: return 1 return step(n, k) Ordinarily, we would think of the helper as a function of ${n}_{i}$ and ${k}_{i}$ that computes ${x}_{i}$. That’s because the ${x}_{i-1}$ argument can safely be ignored. It has no effect unless the T-800 drops a value into it. 
But if the T-800 does drop a value into it, we can see it as a function representing the transformation $\left({n}_{i},{k}_{i},{x}_{i-1}\right)\to {x}_{i}$ And from that little kernel of power we do some seriously crazy stuff. For example: reverse the flow of time. Yeah. Read on. 5. Modify the helper’s return statement to also pass through its non-secret arguments. def step(n, k, previous_x=None): if previous_x is None: previous_x = binomial(n - 1, k - 1) x = n * previous_x // k return (n, k, x) # <-- here def binomial(n, k): if k == 0: return 1 return step(n, k)[2] # <-- note also Now our helper represents the transformation $\left({n}_{i},{k}_{i},{x}_{i-1}\right)\to \left({n}_{i},{k}_{i},{x}_{i}\right)$ Interesting. 6. Apply, to the non-secret arguments in the helper’s return statement, the opposite of the transformations applied to them for the recursive call. Since we passed $n-1,k-1$ for the recursive call, in the return statement we’ll pass $n+1,k+1$: def step(n, k, previous_x=None): if previous_x is None: previous_x = binomial(n - 1, k - 1) # <-- look here x = n * previous_x // k return (n + 1, k + 1, x) # <-- apply opposite here def binomial(n, k): if k == 0: return 1 return step(n, k)[2] Now our helper represents the transformation $\left({n}_{i},{k}_{i},{x}_{i-1}\right)\to \left({n}_{i+1},{k}_{i+1},{x}_{i}\right)$ And now we can reverse the flow of time. When we started out with our original recursive function, if we asked it for ${x}_{t}$, the function had to go back in the timeline to get ${x}_{t-1}$. And to get ${x}_{t-1}$, it had to go back even further to get ${x}_{t-2}$. And so on, chewing up stack every backward step of the way to ${x}_{0}$. It was heartbreaking, watching it work like that. But now, we can step the other way. If we have any (${n}_{i},{k}_{i},{x}_{i-1}\right)$ we can get ${x}_{i}$ straight away, no recursion required. In goes ${x}_{i-1}$, out comes ${x}_{i}$. We can go forward. 
Why, if we knew ${n}_{1}$, ${k}_{1}$, and ${x}_{0}$, we could compute the entire series ${x}_{0},{x}_{1},\dots {x}_{t}$ with zero recursion by starting at $i=0$ and working forward. So let’s get those values! 7. Determine the initial conditions at the start of the timeline. For this simple example, most of you can probably determine the initial conditions by inspection. But I’m going to go through the process anyway. You never know when you might need it. So: What’s the start of the timeline? It’s when the recursive binomial function calls itself so many times that it finally hits one of its base cases, which defines the first entry in the timeline, anchoring the timeline at time $i=0$. The base cases in this example are easy to find since we’ve already split out the step logic; all that’s left in binomial is the base-case logic. It’s easy to see that there is only one base case, and it’s when $k=0$: def step(n, k, previous_x=None): if previous_x is None: previous_x = binomial(n - 1, k - 1) x = n * previous_x // k return (n + 1, k + 1, x) def binomial(n, k): if k == 0: # <-- base case return 1 return step(n, k)[2] From this base case we know that at the very start of the timeline we must have $i=0$, $k=0$, and $x=1$. Thus we know that ${k}_{0}=0$ and ${x}_{0}=1$. And because the timeline ends at time $i=t$, and we know that $\left({n}_{t},{k}_{t}\right)=\left(n,k\right)$, the original arguments binomial was called with. Let’s visualize what we know about the timeline so far as a table: time $i$: $0$ $1$ $2$ $t$ ${n}_{i}$: $n$ ${k}_{i}$: $1$ $k$ ${x}_{i}$: $0$ ? The answer we ultimately seek is ${x}_{t}$, marked by a question mark in the timeline. Now to determine ${n}_{1}$ and ${k}_{1}$. To do that, we must answer this question: How do we go from $\left({n}_{i},{k}_{i}\right)$ to $\left({n}_{i+1},{k}_{i+1}\right)$? We already know the answer! Our step helper computes $\left({n}_{i},{k}_{i},{x}_{i-1}\right)\to \left({n}_{i+1},{k}_{i+1},{x}_{i}\right)$. 
So look at its logic: How does it transform its $n$ and $k$ arguments? It adds one to them. Thus, $\begin{array}{ccc}\hfill {n}_{i+1}& \hfill =\hfill & {n}_{i}+1,\hfill \\ \hfill {k}_{i+1}& \hfill =\hfill & {k}_{i}+1.\hfill \end{array}$ Therefore, ${k}_{1}={k}_{0}+1=0+1=1.$ Since ${k}_{i}$ is $i$ steps from ${k}_{0}=0$ and each step adds $+1$ we have ${k}_{i}={k}_{0}+i\left(+1\right)=i.$ And when $i=t$ we know that $t=i={k}_{i=t}={k}_{t}=k$. Therefore $t=k$. Finally, we can compute ${n}_{1}$, which is $t-1$ reverse steps from ${n}_{t}=n$, the only ${n}_{i}$ that we know so far: ${n}_{1}={n}_{t}-\left(t-1\right)\left(+1\right)=n-k+1.$ And now we have our initial conditions: $\left({n}_{1},{k}_{1},{x}_{0}\right)=\left(n-k+1,1,0\right).$ So now our knowledge of the timeline looks like this: time $i$: $0$ $1$ $2$ $t=k$ ${n}_{i}$: $n-k$ $n-k+1$ $n$ ${k}_{i}$: $0$ $1$ $k$ ${x}_{i}$: $1$ ? And with this knowledge, we can step forward through the timeline, from time $i=1$ to time $i=2$, and so on, until finally, at the last step when $i=t$ we will have determined ${x}_{t}$, our desired answer! Let’s do it! 8. In the main function, iterate the step helper $t$ times, starting from the initial conditions. Then return ${x}_{t}$. def step(n, k, previous_x=None): if previous_x is None: previous_x = binomial(n - 1, k - 1) x = n * previous_x // k return (n + 1, k + 1, x) def binomial(n, k): if k == 0: return 1 t = k # <- new (n, k, previous_x) = (n - k + 1, 1, 1) # for _i in xrange(1, t + 1): # (n, k, previous_x) = step(n, k, previous_x) # return previous_x # = x_t # Boom! That’s 100% iterative. But there’s more! 9. Remove the base cases from the original function. Since our iterations start from a base case, the base cases are already incorporated into our new, iterative logic. 
def step(n, k, previous_x=None): if previous_x is None: previous_x = binomial(n - 1, k - 1) x = n * previous_x // k return (n + 1, k + 1, x) def binomial(n, k): # if k == 0: # <- remove these lines # return 1 # t = k (n, k, previous_x) = (n - k + 1, 1, 1) for _i in xrange(1, t + 1): (n, k, previous_x) = step(n, k, previous_x) return previous_x # = x_t Boom! That’s 100% iterative. But there’s more! 10. Remove the secret feature from the step function. Since previous_x always gets a value now, make it a required part of the function and not an optional secret feature. def step(n, k, previous_x): # <- remove default value for previous_x # if previous_x is None: # <- remove these 2 lines # previous_x = binomial(n - 1, k - 1) # x = n * previous_x // k return (n + 1, k + 1, x) def binomial(n, k): t = k (n, k, previous_x) = (n - k + 1, 1, 1) for _i in xrange(1, t + 1): (n, k, previous_x) = step(n, k, previous_x) return previous_x # = x_t 11. Inline the step function. We’re on it! def binomial(n, k): t = k (n, k, previous_x) = (n - k + 1, 1, 1) for _i in xrange(1, t + 1): x = n * previous_x // k (n, k, previous_x) = (n + 1, k + 1, x) return previous_x # = x_t 12. Apply finishing touches. This example is pretty tight already, but we can substitute away the x variable. And, because ${k}_{i}=i$, we can replace the for loop’s induction variable _i with ${k}_{i}$ and correspondingly eliminate ${k}_{i}$ from the step logic. And, while we’re making stuff better, there’s an obvious optimization. One property of binomial coefficients is that $\genfrac{}{}{0}{}{n}{k}=\genfrac{}{}{0}{}{n}{n-k}$. One property of our code is that it runs for $t=k$ steps. So when $k>n-k$, we can reduce the number of steps by solving for $\genfrac{}{}{0}{}{n}{n-k}$ instead. Let’s add a couple of lines at the start of the logic to claim this optimization. And, of course, let’s add a docstring – we’re nice like that. The final result: def binomial(n, k): """Compute binomial coefficient C(n, k) = n! / (k! 
* (n-k)!).""" if k > n - k: k = n - k # 'pute C(n, n-k) instead if it's easier t = k (n, previous_x) = (n - k + 1, 1) for k in xrange(1, t + 1): (n, previous_x) = (n + 1, n * previous_x // k) return previous_x # = x_t Let’s review what we just did. It’s mind blowing: 1. We sent an imaginary T-800 terminator unit back into the function’s timeline. 2. Then we added a secret feature to our function that allowed the T-800 to send us values from the timeline’s past. 3. Then we used those values to break the recursive chain, reverse the flow of time, and compute the timeline forward and iteratively instead of backward and recursively. 4. The function being fully iterative then, we removed the secret feature, and the imaginary T-800 winked out of existence, because we had in effect written the need for them both out of history. 5. And now we have a fast, efficient, and property-preserving iterative version of our original recursive function, and it’s about as good as anything we could hope to conjure up from scratch. (See it in action – just one stack frame and easy on the ints, too.) 6. And – most importantly – we did it all using a mechanical, repeatable process! ### Thanks! Well, that’s it for this installment. I hope you enjoyed reading it as much as I did writing it. If you liked it (or didn’t), or if you found a mistake, or especially if you can think of any way to help make my explanations better, let me know. Just post a comment or fire me a tweet at @tmoertel. Until next time, keep your brain recursive and your Python code iterative. 1. In the visualization, you can’t actually see the largest integer produced by the recursive version’s computation. It’s produced between steps 32 and 33, when the return value from step 32 is multiplied by step 33’s $n=39$ to arrive at an integer of ${\mathrm{log}}_{2}\left(39\cdot 48903492\right)\approx 30.8$ bits, which is then immediately divided by $k=9$ to produce the final answer. 
## May 13, 2013 ### Brent Yorgey # Workshop on Functional Art, Music, Modeling and Design I’m helping organize a new workshop, FARM, to be held in Boston this September (right after ICFP). Many readers of this blog may have already seen the announcement, but I thought it worth saying a bit more about it here, in the spirit of trying to spread the word as widely as possible. The short of it is—it should be super interesting and a lot of fun. If you are at all interested in the intersection of functional programming and design, art, music—anything that has to do with using beautiful code to produce beautiful artifacts—you should consider submitting a paper, or planning to attend! Papers can be submitted in two categories, full papers novel research contribution) and “aesthetic applications” (which should describe some sort of beautiful way to produce something beautiful). The deadline for submissions is June 14. See the website for more details. ### Ken T Takusagawa # [zxmxkwqa] XMonad with GNOME on Ubuntu 12.10 and 13.04 This was on a freshly installed and dist-updated i386 Ubuntu 12.10 Quantal Quetzal . apt-get install --no-install-recommends xmonad libghc-xmonad-contrib-dev gnome-panel indicator-applet-complete indicator-multiload indicator-multiload is optional, but I like it. ~/.xmonad/xmonad.hs: import XMonad import XMonad.Config.Gnome main=xmonad gnomeConfig  Logout, the choose the "GNOME with Xmonad" login option available above and to the right of the password field. indicator-multiload fails to start on its own at first. Run it from the command line. It will start correctly, automatically, on future logins. These instructions also seem to work on Ubuntu 13.04 Raring Ringtail i386. ## May 12, 2013 ### Luke Palmer # Polyamory and Respect I have been in an open, polyamorous relationship with my partner Amanda for about a year and a half. 
The relationship began as open for somewhat coincidental reasons, but over its course, I have developed a respect for polyamory — an understanding of why it makes sense for me, and why, I suspect, I will want my future relationships to be open as well1. And it is not for the reasons that most people think. For the first time in the course of the relationship, I’m currently being intimate with someone else. However, I was supportive of polyamory before I had taken advantage of its freedoms, even though Amanda was seeing other people reasonably often. The question is: why? Why would I put myself in such a position? Why would I allow Amanda to sleep with other people while she is with me? The key lies in a word of that final question — “allow”. To me, a healthy relationship is founded on mutual respect. There are many relationships which are not, but I find the most fulfillment from a relationship which is a coming together of two whole people with respect for each other. Anything else, to me, is just a fling (maybe a long-term one). So, under the supposition that I respect my partner, what does it mean to “allow” something? More pointedly, what does it mean to “disallow” something? Both allowing and disallowing suppose that I have the power to make decisions for her. It supposes that I am informed enough, without even being present, to make the judgment call about whether her actions were right. In a traditional monogamous setting, I have a wholly un-nuanced view of the situation — if she has slept with someone else, she has made the wrong choice, and I, therefore, have been wronged, and I (with the assistance of social norms) am the one who has decided that. Let’s imagine a polyamorous situation to help get to the heart of this. Let’s say that she met a new partner, and asked me if it’s okay if she sleeps with them. I will not respond with yes or no. 
She has offered me the power (and responsibility) to decide the best course of action for her, and I feel it necessary not to accept it. In accordance with my values, I can’t accept that power for anyone but myself: it would be a disservice to us both. However, I don’t mean to say that there are never any emotions that come with it, or that if there are I have an obligation to bury them. Indeed, I often get jealous and feel hurt when she is with someone else. But as a partner, I want to understand. Why did what she did make sense to her? How did she perceive that would affect me? — knowing that I am considered in her decision-making process is important to me. I will communicate how it actually affected me. Perhaps I spent the night alone feeling shitty — it’s important for her to know that, to take that possibility into account next time she makes a decision, and it’s important for me to understand that I am still alive and that we still love each other. But the key is that, because of respect, I give her the benefit of the doubt that she made the best choice she had — I just want to understand her reasoning, and probably be reassured that she still cares — which she always has. There are certain “codes” that I see as being very powerful, as leading to a stronger and more aligned internal experience. One of these is honesty — I am committed to always being open & honest (in a more nuanced way than I have been in the past). This is not because honesty in itself is “right”, but because integrity (i.e. always doing what I feel is right) is a quality that is important to me, and I have found that honesty is a code that is easy to verify (i.e. it is easy for me to know if I am abiding by it), which leads to integrity. This is because if I do something which I feel is wrong, I learn that, because of my code of open honesty, I will need to tell someone that I felt what I did was wrong. 
And that pressure is huge — I can no longer keep it to myself, now I need to show others about my lack of integrity when it happens. This pressure very quickly causes me to start acting with integrity. In the same way, I see polyamory as a code which is easy to verify, which leads to respect as a consequence, and respect for my partner is something I value. Jealousy happens — when she talks to someone I can tell she thinks is attractive, when she stays out later than I expected her, when she tells me she has or had a crush on someone. But I know that we are in an open relationship — we have agreed that being attracted to others, even to the point of acting on it, is okay, and therefore my feeling of jealousy cannot be instantly transformed into a feeling of righteousness and being wronged. Hence, I have to consider the larger situation — I have to see where she is coming from, I have to understand her and her choices, I have to know her better. And in doing so I understand her values, her wishes, her way of being, her way of relating to others — and such a deep understanding leads me to respect her. I have not felt such a deep respect for anyone else I have ever been in a relationship with, and I think the openness of our relationship has been a major factor in that. Further, polyamory leads to more communication and strength in our relationship. Consider “cheating” in a monogamous relationship. Let’s say I am in a monogamous relationship with my partner and, in a flush of sexually-excited irrationality I slept with someone else. I still love my partner very much and want to be with her, and we have a good, mutually supportive relationship, but I just made a mistake. (The idea that I could sleep with someone else while still being in love with her may seem impossible to some; that idea is worth examining — consider these prompts: masturbation, past relationships, fantasizing.) The question is, do I share my mistake with her? 
If I do share, it’s very likely that the relationship will end by social contract — many consider cheating to be an unforgivable offense. I don’t want the relationship to end, because I still love her and want to be with her. If I don’t share, I turn one wrong into two, and eventually many — not only have I wronged her with my actions, I wrong her by lying once about it — and, as lies are, probably many more times to cover up the first one. So not sharing is incompatible with my respect for her, and sharing is incompatible with my love and desire to be with her. Would it not be easier for everyone if I felt free to share my mistake, if I were not in this terrible bind after making it? With the roles reversed, what would it say about how much I care if I were willing to put my partner in such a bind? Letting go of the moral attachment to fidelity allows this situation easily to be a conversation — she can tell me how it affected her, I can understand that and that may inform my desire not to be so reckless again. Perhaps the conversation will reveal something about our relationship dynamic that needs attention, or perhaps something that is secretly making us both unhappy (one of the possible causes of sleeping with someone else). In that sense we can make a plan to repair it, or possibly we will mutually agree it is in both of our best interests to end the relationship, allowing us to be friends afterward, feeling sadness for our loss but not hurt and anger, because we both know that it was the right decision. In the case that the relationship does not end, the conversation may have revealed a deep problem which we are now on the road to solving, strengthening the relationship and bringing us closer. And maybe it was no big deal, and we understand that as sexual beings sometimes we just need to feel attractive and get our rocks off, and the relationship has not been harmed. 
All of these are preferable to an abrupt end due to an objective wrong, in which one person feels deeply guilty and the other feels deeply wounded. There are things which I will only briefly mention: for example, it is freeing to know that a friendship/relationship with someone other than my partner can develop in whatever way seems natural, without worrying if every action has crossed the line. This freedom allows me to get closer to others in my life, even if their gender allows some sexual tension, which brings me more fulfillment and happiness. In my experience, even though I like this other woman a lot, it has not in the least diminished the love I feel for Amanda, and experiencing that helps me see that it is probably the same for her when she is with someone else. In fact, since she has asked me for more reassurance now, I am verbalizing why I love her more, thus reminding myself and strengthening my sense of love for her. Where does the idea that love is a finite resource come from? These are the reasons why polyamory makes sense to me as a way of conducting myself in relationship. It leads to more honest communication (and therefore more integrity), more mutual understanding and respect, and ultimately a stronger relationship. I see traditional monogamy as a way to defend yourself from scary thoughts of abandonment, but the cost is a dynamic in which it is possible to justify a sense of ownership over your partner, controlling them and taking away their free agency. Is that really worth it? 1One reason is that I get to have future relationships without first ending this wonderful one. ## May 11, 2013 ### Joachim Breitner # How to play Rock-Paper-Scissors online? There was an interesting question by ‘Fool’ recently on the StackExchange site for Boardgames: How do you play a game like Rock-Paper-Scissors with friends online? Or any other game where players have to simultaneously submit their moves (e.g. 
Diplomacy, or Rock-Paper-Scissors-Lizzard-Spock), which, as I just learned, are simultaneous action selection games. While there are websites dedicated to playing specific games, such as webdiplomacy.net, we could not find a generic one that you can use if you, for example, invent your own variants of a game. So I created one: At you-say-first.nomeata.de you can enter rooms and share the URL with your friends. On the one hand, you have a regular chat room there. But there is also the possibility to enter moves (whatever a move may be to you) and only when all players have done that and marked the move as final, it is shown to everyone. If you want to try it out: There is an integrated, not very fancy Rock-Paper-Scissors-playing bot. Just enter a room, join and say „I want to play!“ Note that this site can be used for more than just for games. Have you ever observed that persons would often want other to express their preference (e.g. where to dine) first to not reveal their own preference, so that they can (or pretend to) change their mind if they would contradict? In such situations simultaneous action selection can be a fairer method. A technical note: I created this web app using meteor (a JavaScript framework building on node.js and MongoDB that allows for reactive programming), and it is also hosted on meteor.com. I chose Meteor after someone mentioned Firebase to me, which looked very slick, but was not Free Software, so I looked for alternatives. I did not do any cross-browser-testing, and the UI design could be improved, so if you want to help out (or just complain), please use the GitHub code repository and issue tracker. ### Tom Moertel # Tricks of the trade: Recursion to Iteration, Part 1: The Simple Method, secret features, and accumulators Posted on May 11, 2013 Alternative title: I wish Python had tail-call elimination. 
Recursive programming is the cat’s meow because it maps so easily to proof by induction, making it easy to design algorithms and prove them correct. Getting stuff right is what programming is all about.

But recursion is poorly supported by many popular programming languages. Drop a large input into a recursive algorithm in Python, and you’ll probably hit the runtime’s recursion limit. Raise the limit, and you may run out of stack space and segfault. These are not happy outcomes.

Therefore, an important trick of the trade is knowing how to translate recursive algorithms into iterative algorithms. That way, you can design, prove, and initially code your algorithms in the almighty realm of recursion. Then, after you’ve got things just the way you want them, you can translate your algorithms into equivalent iterative forms through a series of mechanical steps. You can prove your cake and run it in Python, too.

This topic – turning recursion into iteration – is fascinating enough that I’m going to do a series of posts on it. Tail calls, trampolines, continuation-passing style – and more. Lots of good stuff. For today, though, let’s just look at one simple method and one supporting trick.

### The Simple Method

This translation method works on many simple recursive functions. When it works, it works well, and the results are lean and fast. I generally try it first and consider more complicated methods only when it fails. In a nutshell:

1. Study the function.
2. Convert all recursive calls into tail calls. (If you can’t, stop. Try another method.)
3. Introduce a one-shot loop around the function body.
4. Convert tail calls into continue statements.
5. Tidy up.

An important property of this method is that it’s incrementally correct – after every step you have a function that’s equivalent to the original. So if you have unit tests, you can run them after each and every step to make sure you didn’t make a mistake. Let’s see the method in action.
### Example: factorial

With a function this simple, we could probably go straight to the iterative version without using any techniques, just a little noggin power. But the point here is to develop a mechanical process that we can trust when our functions aren’t so simple or our noggins aren’t so powered. So we’re going to work on a really simple function so that we can focus on the process. Ready? Then let’s show these guys how cyber-commandos get it done! Mark IV style!

1. Study the original function.

```python
def factorial(n):
    if n < 2:
        return 1
    return n * factorial(n - 1)
```

Nothing scary here. Just one recursive call. We can do this!

2. Convert recursive calls into tail calls.

```python
def factorial1a(n, acc=1):
    if n < 2:
        return 1 * acc
    return factorial1a(n - 1, acc * n)
```

(If this step seemed confusing, see the Bonus Explainer at the end of the article for the “secret feature” trick behind the step.)

3. Introduce a one-shot loop around the function body. You want `while True: body; break`.

```python
def factorial1b(n, acc=1):
    while True:
        if n < 2:
            return 1 * acc
        return factorial1a(n - 1, acc * n)
        break
```

Yes, I know putting a break after a return is crazy, but do it anyway. Clean-up comes later. For now, we’re going by the numbers.

4. Replace all recursive tail calls `f(x=x1, y=y1, ...)` with `(x, y, ...) = (x1, y1, ...); continue`. Be sure to update all arguments in the assignment.

```python
def factorial1c(n, acc=1):
    while True:
        if n < 2:
            return 1 * acc
        (n, acc) = (n - 1, acc * n)
        continue
        break
```

For this step, I copy the original function’s argument list, parentheses and all, and paste it over the return keyword. Less chance to screw something up that way. It’s all about being mechanical.

5. Tidy the code and make it more idiomatic.

```python
def factorial1d(n, acc=1):
    while n > 1:
        (n, acc) = (n - 1, acc * n)
    return acc
```

Okay. This step is less about mechanics and more about style. Eliminate the cruft. Simplify. Make it sparkle.

6. Wait. That’s it. You’re finished!

### What have we gained?
We just did five steps of work to convert our original, recursive factorial function into the equivalent, iterative factorial1d function. If our programming language had supported tail-call elimination, we could have stopped at step two with factorial1a. But nooooooo… We had to press on, all the way through step five, because we’re using Python. It’s almost enough to make a cyber-commando punch a kitten.

No, the work wasn’t hard, but it was still work. So what did it buy us?

To see what it bought us, let’s look inside the Python run-time environment. We’ll use the Online Python Tutor’s visualizer to observe the build-up of stack frames as factorial, factorial1a, and factorial1d each compute the factorial of 5. This is very cool, so don’t miss the link: Visualize It! (ProTip: Open it in a new tab.)

Click the “Forward” button to step through the execution of the functions. Notice what happens in the Frames column. When factorial is computing the factorial of 5, five frames build up on the stack. Not a coincidence. Same thing for our tail-recursive factorial1a. (You’re right. It is tragic.) But not for our iterative wonder factorial1d. It uses just one stack frame, over and over, until it’s done.

That’s economy! That’s why we did the work. Economy. We converted O(n) stack use into O(1) stack use. When n could be large, that savings matters. It could be the difference between getting an answer and getting a segfault.

### Not-so-simple cases

Okay, so we tackled factorial. But that was an easy one. What if your function isn’t so easy? Then it’s time for more advanced methods. That’s our topic for next time. Until then, keep your brain recursive and your Python code iterative.

### Bonus Explainer: Using secret features for tail-call conversion

In step 2 of The Simple Method, we converted the recursive call in this code:

```python
def factorial(n):
    if n < 2:
        return 1
    return n * factorial(n - 1)  # <-- right here!
```
into the recursive tail call in this code:

```python
def factorial(n, acc=1):
    if n < 2:
        return 1 * acc
    return factorial(n - 1, acc * n)  # <-- oh, yeah!
```

This conversion is easy once you get the hang of it, but the first few times you see it, it seems like magic. So let’s take this one step by step.

First, the problem. We want to get rid of the `n *` in the following code:

```python
    return n * factorial(n - 1)
```

That `n *` stands between our recursive call to factorial and the return keyword. In other words, the code is actually equivalent to the following:

```python
    x = factorial(n - 1)
    result = n * x
    return result
```

That is, our code has to call the factorial function, await its result (x), and then do something with that result (multiply it by n) before it can return its result. It’s that pesky intermediate doing something we must get rid of. We want nothing but the recursive call to factorial in the return statement.

So how do we get rid of that multiplication? Here’s the trick. We extend our function with a multiplication feature and use it to do the multiplication for us. Shh! It’s a secret feature, though, just for us. Nobody else. In essence, every call to our extended function will not only compute a factorial but also (secretly) multiply that factorial by whatever extra value we give it. The variables that hold these extra values are often called “accumulators,” so I use the name acc here as a nod to tradition.

So here’s our function, freshly extended:

```python
def factorial(n, acc=1):
    if n < 2:
        return acc * 1
    return acc * n * factorial(n - 1)
```

See what I did to add the secret multiplication feature? Two things. First, I took the original function and added an additional argument acc, the multiplier. Note that it defaults to 1 so that it has no effect until we give it some other value (since $1\cdot x=x$). Gotta keep the secret secret, after all. Second, I changed every single return statement from `return {whatever}` to `return acc * {whatever}`.
Whenever our function would have returned x, it now returns acc * x. And that’s it. Our secret feature is done! And it’s trivial to prove correct. (In fact, we just did prove it correct! Re-read the second sentence.) Both changes were entirely mechanical, hard to screw up, and, together, default to doing nothing. These are the properties you want when adding secret features to your functions.

Okay. Now we have a function that computes the factorial of n and, secretly, multiplies it by acc. Now let’s return to that troublesome line, but in our newly extended function:

```python
    return acc * n * factorial(n - 1)
```

It computes the factorial of n - 1 and then multiplies it by acc * n. But wait! We don’t need to do that multiplication ourselves. Not anymore. Now we can ask our extended factorial function to do it for us, using the secret feature. So we can rewrite the troublesome line as

```python
    return factorial(n - 1, acc * n)
```

And that’s a tail call! So our tail-call version of the factorial function is this:

```python
def factorial(n, acc=1):
    if n < 2:
        return acc * 1
    return factorial(n - 1, acc * n)
```

And now that all our recursive calls are tail calls – there was only the one – this function is easy to convert into iterative form using The Simple Method described in the main article.

Let’s review the Secret Feature trick for making recursive calls into tail calls. By the numbers:

1. Find a recursive call that’s not a tail call.
2. Identify what work is being done between that call and its return statement.
3. Extend the function with a secret feature to do that work, as controlled by a new accumulator argument with a default value that causes it to do nothing.
4. Use the secret feature to eliminate the old work.
5. You’ve now got a tail call!
6. Repeat until all recursive calls are tail calls.

With a little practice, it becomes second nature. So…

### Exercise: Get your practice on!

Your mission is to get rid of the recursion in the following function. Feel like you can handle it?
Then just fork the exercise repo and do your stuff to exercise1.py.

```python
def find_val_or_next_smallest(bst, x):
    """Get the greatest value <= x in a binary search tree.

    Returns None if no such value can be found.
    """
    if bst is None:
        return None
    elif bst.val == x:
        return x
    elif bst.val > x:
        return find_val_or_next_smallest(bst.left, x)
    else:
        right_best = find_val_or_next_smallest(bst.right, x)
        if right_best is None:
            return bst.val
        return right_best
```

Have fun!

## May 09, 2013

### Mike Izbicki

# Markov Networks, Monoids, and Futurama

In this post, we’re going to look at how to manipulate multivariate distributions in Haskell’s HLearn library. There are many ways to represent multivariate distributions, but we’ll use a technique called Markov networks. These networks have the algebraic structure called a monoid (and group and vector space), and training them is a homomorphism. Despite the scary names, these mathematical structures make working with our distributions really easy and convenient—they give us online and parallel training algorithms “for free.” If you want to go into the details of how, you can check out my TFP13 submission, but in this post we’ll ignore those mathy details to focus on how to use the library in practice. We’ll use a running example of creating a distribution over characters in the show Futurama.

### Preliminaries: Creating the data types

As usual, this post is a literate Haskell file. To run this code, you’ll need to install the hlearn-distributions package. This package requires GHC version at least 7.6.

    bash> cabal install hlearn-distributions-1.0.0.1

Now for some code. We start with our language extensions and imports:

>{-# LANGUAGE DataKinds #-}
>{-# LANGUAGE TypeFamilies #-}
>{-# LANGUAGE TemplateHaskell #-}
>
>import HLearn.Algebra
>import HLearn.Models.Distributions

Next, we’ll create a data type to represent Futurama characters. There are a lot of characters, so we’ll need to keep things pretty organized.
The data type will have a record for everything we might want to know about a character. Each of these records will be one of the variables in our multivariate distribution, and all of our data points will have this type.

>data Character = Character
>    { _name    :: String
>    , _species :: String
>    , _job     :: Job
>    , _isGood  :: Maybe Bool
>    , _age     :: Double -- in years
>    , _height  :: Double -- in feet
>    , _weight  :: Double -- in pounds
>    }
>    deriving (Read,Show,Eq,Ord)
>
>data Job = Manager | Crew | Henchman | Other
>    deriving (Read,Show,Eq,Ord)

Now, in order for our library to be able to interpret the Character type, we call the template haskell function:

>makeTypeLenses ''Character

This function creates a bunch of data types and type classes for us. These “type lenses” give us a type-safe way to reference the different variables in our multivariate distribution. We’ll see how to use these type level lenses a bit later. There’s no need to understand what’s going on under the hood, but if you’re curious then check out the hackage documentation or source code.

### Training a distribution

Now, we’re ready to create a data set and start training. Here’s a list of the employees of Planet Express, provided by the resident bureaucrat Hermes Conrad. This list will be our first data set.

>planetExpress =
>    [ Character "Philip J. Fry"        "human" Crew    (Just True) 1026 5.8 195
>    , Character "Turanga Leela"        "alien" Crew    (Just True) 43   5.9 170
>    , Character "Professor Farnsworth" "human" Manager (Just True) 85   5.5 160
>    , Character "Hermes Conrad"        "human" Manager (Just True) 36   5.3 210
>    , Character "Amy Wong"             "human" Other   (Just True) 21   5.4 140
>    , Character "Zoidberg"             "alien" Other   (Just True) 212  5.8 225
>    , Character "Cubert Farnsworth"    "human" Other   (Just True) 8    4.3 135
>    ]

Let’s train a distribution from this data.
Here’s how we would train a distribution where every variable is independent of every other variable:

>dist1 = train planetExpress :: Multivariate Character
>  '[ Independent Categorical '[String,String,Job,Maybe Bool]
>   , Independent Normal      '[Double,Double,Double]
>   ]
>   Double

In the HLearn library, we always use the function train to train a model from data points. We specify which model to train in the type signature. As you can see, the Multivariate distribution takes three type parameters. The first parameter is the type of our data point, in this case Character. The second parameter describes the dependency structure of our distribution. We’ll go over the syntax for the dependency structure in a bit. For now, just notice that it’s a type-level list of distributions. Finally, the third parameter is the type we will use to store our probabilities.

What can we do with this distribution? One simple task is finding marginal distributions. The marginal distribution is the distribution of a certain variable ignoring all the other variables. For example, let’s say I want a distribution of the species that work at Planet Express. I can get this by:

>dist1a = getMargin TH_species dist1

Notice that we specified which variable we’re taking the marginal of by using the type level lens TH_species. This data constructor was automatically created for us by our template haskell function makeTypeLenses. Every one of the records in the data type has its own unique type lens. Its name is the name of the record, prefixed by TH. These lenses let us infer the types of our marginal distributions at compile time, rather than at run time. For example, the type of the marginal distribution of species is:

ghci> :t dist1a
dist1a :: Categorical String Double

That is, a categorical distribution whose data points are Strings and which stores probabilities as a Double.
Now, if I want a distribution of the weights of the employees, I can get that by:

>dist1b = getMargin TH_weight dist1

And the type of this distribution is:

ghci> :t dist1b
dist1b :: Normal Double

Now, I can easily plot these marginal distributions with the plotDistribution function:

ghci> plotDistribution (plotFile "dist1a") dist1a
ghci> plotDistribution (plotFile "dist1b") dist1b

But wait! I accidentally forgot to include Bender in the planetExpress data set! What can I do?

In a traditional statistics library, we would have to retrain our data from scratch. If we had billions of elements in our data set, this would be an expensive mistake. But in our HLearn library, we can take advantage of the model’s monoid structure. In particular, the compiler used this structure to automatically derive a function called add1dp for us. Let’s look at its type:

ghci> :t add1dp
add1dp :: HomTrainer model => model -> Datapoint model -> model

It’s pretty simple. The function takes a model and adds the data point associated with that model. It returns the model we would have gotten if the data point had been in our original data set. This is called online training. Again, because our distributions form monoids, the compiler derived an efficient and exact online training algorithm for us automatically. So let’s create a new distribution that considers Bender:

>bender = Character "Bender Rodriguez" "robot" Crew (Just True) 44 6.1 612
>dist1' = add1dp dist1 bender

And plot our new marginals:

ghci> plotDistribution (plotFile "dist1-withbender-species" $ PNG 250 250) $ getMargin TH_species dist1'
ghci> plotDistribution (plotFile "dist1-withbender-weight" $ PNG 250 250) $ getMargin TH_weight dist1'

Notice that our categorical marginal has clearly changed, but our normal marginal doesn’t seem to have changed at all. This is because the plotting routines automatically scale the distribution, and the normal distribution, when scaled, always looks the same.
We can double-check that we actually did change the weight distribution by comparing the means:

ghci> mean dist1b
176.42857142857142

ghci> mean $ getMargin TH_weight dist1'
230.875

Bender’s weight really changed the distribution after all!
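
The monoid view of training can be sketched in a few lines of Python (a deliberately language-neutral illustration of the idea, not HLearn's implementation; the names `train`, `merge`, and `add1dp` mirror the post's vocabulary but the "model" here is just a running count and total):

```python
# Sketch: a trained model is a (count, total) summary, and merging
# two trained summaries is associative with an identity -- a monoid.
# Online training then falls out as a single merge per data point.

def train(xs):
    """Batch-train a summary from a list of numbers."""
    return (len(xs), sum(xs))

def merge(a, b):
    """The monoid operation: combine two already-trained summaries."""
    return (a[0] + b[0], a[1] + b[1])

EMPTY = (0, 0)  # identity element: the model trained on no data

def add1dp(model, x):
    """Online training: adding one data point is just a merge."""
    return merge(model, train([x]))

def mean(model):
    count, total = model
    return total / count

# The Planet Express weights from the post:
weights = [195, 170, 160, 210, 140, 225, 135]
m = train(weights)
print(mean(m))               # -> 176.42857142857142
print(mean(add1dp(m, 612)))  # -> 230.875, after adding Bender
```

Because `merge(train(xs), train(ys)) == train(xs + ys)`, the same merge that powers `add1dp` also lets you train on chunks of data independently and combine the results.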

### Complicated dependence structures

That’s cool, but our original distribution isn’t very interesting.  What makes multivariate distributions interesting is when the variables affect each other.  This is true in our case, so we’d like to be able to model it.  For example, we’ve already seen that robots are much heavier than organic lifeforms, and are throwing off our statistics.  The HLearn library supports a small subset of Markov Networks for expressing these dependencies.

We represent Markov Networks as graphs with undirected edges.  Every attribute in our distribution is a node, and every dependence between attributes is an edge.  We can draw this graph with the plotNetwork command:

ghci> plotNetwork "dist1-network" dist1

As expected, there are no edges in our graph because everything is independent.  Let’s create a more interesting distribution and plot its Markov network.

>dist2 = train planetExpress :: Multivariate Character
>  '[ Ignore                  '[String]
>   , MultiCategorical        '[String]
>   , Independent Categorical '[Job,Maybe Bool]
>   , Independent Normal      '[Double,Double,Double]
>   ]
>   Double
ghci> plotNetwork "dist2-network" dist2

Okay, so what just happened?

The syntax for representing the dependence structure is a little confusing, so let’s go step by step.  We represent the dependence information in the graph as a list of types.  Each element in the list describes both the marginal distribution and the dependence structure for one or more records in our data type.  We must list these elements in the same order as the original data type.

Notice that we’ve made two changes to the list.  First, our list now starts with the type Ignore '[String].  This means that the first string in our data type—the name—will be ignored.  Notice that TH_name is no longer in the Markov Network.  This makes sense because we expect that a character’s name should not tell us too much about any of their other attributes.

Second, we’ve added a dependence.  The MultiCategorical distribution makes everything afterward in the list dependent on that item, but not the things before it.  This means that the exact types of dependencies it can specify are dependent on the order of the records in our data type.  Let’s see what happens if we change the location of the MultiCategorical:

>dist3 = train planetExpress :: Multivariate Character
>  '[ Ignore '[String]
>   , Independent Categorical '[String]
>   , MultiCategorical '[Job]
>   , Independent Categorical '[Maybe Bool]
>   , Independent Normal '[Double,Double,Double]
>   ]
>   Double
ghci> plotNetwork "dist3-network" dist3

As you can see, our species no longer have any relation to anything else.  Unfortunately, using this syntax, the order of list elements is important, and so the order we specify our data records is important.

Finally, we can substitute any valid univariate distribution for our Normal and Categorical distributions.  The HLearn library currently supports Binomial, Exponential, Geometric, LogNormal, and Poisson distributions.  These just don’t make much sense for modelling Futurama characters, so we’re not using them.

Now, we might be tempted to specify that every variable is fully dependent on every other variable.  In order to do this, we have to introduce the “Dependent” type.  Any valid multivariate distribution can follow Dependent, but only those records specified in the type-list will actually be dependent on each other.  For example:

>dist4 = train planetExpress :: Multivariate Character
>  '[ Ignore '[String]
>   , MultiCategorical '[String,Job,Maybe Bool]
>   , Dependent MultiNormal '[Double,Double,Double]
>   ]
>   Double
ghci> plotNetwork "dist4-network" dist4

Undoubtedly, this is in a sense always going to be the case—everything always has a slight influence on everything else.  Unfortunately, it is not easy in practice to model these fully dependent distributions: if n is the number of nodes in our graph and e is the number of edges in our network, the number of data points required to accurately train a distribution grows rapidly with e.  Thus, by specifying that two attributes are independent of each other, we can greatly reduce the amount of data we need to train an accurate distribution.
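
To make the cost concrete with a back-of-the-envelope count (a generic illustration with categorical variables, not HLearn's exact bookkeeping): a distribution over $k$ variables with $m$ possible values each has

```latex
\underbrace{m^k - 1}_{\text{fully dependent joint}}
\qquad\text{vs.}\qquad
\underbrace{k\,(m - 1)}_{\text{fully independent}}
```

free parameters. For $k = 5$ variables with $m = 10$ values each, that is $99{,}999$ parameters versus $45$, and every parameter needs data to estimate.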

I realize that this syntax is a little awkward.  I chose it because it was relatively easy to implement.  Future versions of the library should support a more intuitive syntax.  I also plan to use copulas to greatly expand the expressiveness of these distributions.  In the mean time, the best way to figure out the dependencies in a Markov Network are just to plot it and see visually.

Okay.  So what distribution makes the most sense for Futurama characters?  We’ll say that everything depends on both the characters’ species and job, and that their weight depends on their height.

>planetExpressDist = train planetExpress :: Multivariate Character
>  '[ Ignore '[String]
>   , MultiCategorical '[String,Job]
>   , Independent Categorical '[Maybe Bool]
>   , Independent Normal '[Double]
>   , Dependent MultiNormal '[Double,Double]
>   ]
>   Double
ghci> plotNetwork "planetExpress-network" planetExpressDist

We still don’t have enough data to train this network, so let’s create some more.  We start by creating a type synonym for our Markov network called FuturamaDist.  This is just for convenience, so we don’t have to retype the dependence structure many times.

>type FuturamaDist = Multivariate Character
>  '[ Ignore '[String]
>   , MultiCategorical '[String,Job]
>   , Independent Categorical '[Maybe Bool]
>   , Independent Normal '[Double]
>   , Dependent MultiNormal '[Double,Double]
>   ]
>   Double

Next, we train some more distributions of this type on some of the characters.  We’ll start with Mom Corporation and the brave Space Forces.

>momCorporation =
>   [ Character "Mom"                   "human" Manager  (Just False) 100 5.5 130
>   , Character "Walt"                  "human" Henchman (Just False) 22  6.1 170
>   , Character "Larry"                 "human" Henchman (Just False) 18  5.9 180
>   , Character "Igner"                 "human" Henchman (Just False) 15  5.8 175
>   ]
>momDist = train momCorporation :: FuturamaDist
>spaceForce =
>   [ Character "Zapp Brannigan"        "human" Manager  (Nothing)   45  6.0 230
>   , Character "Kif Kroker"            "alien" Crew     (Just True) 113 4.5 120
>   ]
>spaceDist = train spaceForce :: FuturamaDist

And now some more robots:

>robots =
>   [ bender
>   , Character "Calculon"              "robot" Other    (Nothing)    123  6.8 650
>   , Character "The Crushinator"       "robot" Other    (Nothing)    45   8.0 4500
>   , Character "Clamps"                "robot" Henchman (Just False) 134  5.8 330
>   , Character "DonBot"                "robot" Manager  (Just False) 178  5.8 520
>   , Character "Hedonismbot"           "robot" Other    (Just False) 69   4.3 1200
>   , Character "Preacherbot"           "robot" Manager  (Nothing)    45   5.8 350
>   , Character "Roberto"               "robot" Other    (Just False) 77   5.9 250
>   , Character "Robot Devil"           "robot" Other    (Just False) 895  6.0 280
>   , Character "Robot Santa"           "robot" Other    (Just False) 488  6.3 950
>   ]
>robotDist = train robots :: FuturamaDist

Now we’re going to take advantage of the monoid structure of our multivariate distributions to combine all of these distributions into one.

> futuramaDist = planetExpressDist <> momDist <> spaceDist <> robotDist

The resulting distribution is equivalent to having trained a distribution from scratch on all of the data points:

train (planetExpress++momCorporation++spaceForce++robots) :: FuturamaDist

We can take advantage of this property any time we use the train function to automatically parallelize our code.  The higher order function parallel will split the training task evenly over your available processors, then merge the partial models with the monoid operation.  This results in “theoretically perfect” parallel training of these models.

parallel train (planetExpress++momCorporation++spaceForce++robots) :: FuturamaDist

Again, this is only possible because the distributions have a monoid structure.
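
The property both tricks rely on is the homomorphism the post mentioned at the start; written out as a law:

```latex
\mathrm{train}(xs \mathbin{+\!\!+} ys) \;=\; \mathrm{train}(xs) \diamond \mathrm{train}(ys),
\qquad
\mathrm{train}([\,]) \;=\; \varepsilon,
```

where $\diamond$ is the monoid operation (<>) and $\varepsilon$ is the empty model. Associativity of $\diamond$ means the data set can be split into chunks in any way, each chunk trained independently, and the pieces merged in any grouping, always yielding the same model.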

Now, let’s ask some questions of our distribution.  If I pick a character at random, what’s the probability that they’re a good guy?  Let’s plot the marginal.

ghci> plotDistribution (plotFile "goodguy" $PNG 250 250)$ getMargin TH_isGood futuramaDist

But what if I only want to pick from those characters that are humans, or those characters that are robots?  Statisticians call this conditioning.  We can do that with the condition function:

ghci> plotDistribution (plotFile "goodguy-human" $PNG 250 250)$
getMargin TH_isGood $condition TH_species "human" futuramaDist ghci> plotDistribution (plotFile "goodguy-robot"$ PNG 250 250) $getMargin TH_isGood$ condition TH_species "robot" futuramaDist

On the left is the plot for humans, and on the right the plot for robots.  Apparently, original robot sin is much worse than that in humans!  If only they would listen to Preacherbot and repent of their wicked ways…

Now let’s ask: What’s the average age of an evil robot?

ghci> mean $ getMargin TH_age $
          condition TH_isGood (Just False) $ condition TH_species "robot" futuramaDist
273.0769230769231

Notice that conditioning a distribution is a commutative operation.  That means we can condition in any order and still get the exact same results.  Let’s try it:

ghci> mean $ getMargin TH_age $ condition TH_species "robot" $ condition TH_isGood (Just False) futuramaDist
273.0769230769231
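
This commutativity is just the standard identity for conditioning on two variables: for any fixed values $y_0$ and $z_0$,

```latex
p(x \mid y = y_0, z = z_0) \;=\; \frac{p(x,\, y_0,\, z_0)}{p(y_0,\, z_0)},
```

and the right-hand side is symmetric in $y_0$ and $z_0$, so it cannot depend on which variable was fixed first.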

There’s one last thing for us to consider.  What does our Markov network look like after conditioning?  Let’s find out!

plotNetwork "condition-species-isGood" $condition TH_species "robot"$ condition TH_isGood (Just False) futuramaDist

Notice that conditioning on these variables removed them from our Markov network.

Finally, there’s another process similar to conditioning called “marginalizing out.” This lets us ignore the effects of a single attribute without specifying what that attribute must be. When we marginalize out a variable in our Markov network, we get the same dependence structure as if we had conditioned on it.

plotNetwork "marginalizeOut-species-isGood" $marginalizeOut TH_species$ marginalizeOut TH_isGood futuramaDist

Effectively, what the marginalizeOut function does is “forget” the extra dependencies, whereas the condition function “applies” those dependencies.  In the end, the resulting Markov network has the same structure, but different values.
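
In standard probability notation, the two operations differ only in what happens to the removed variable $y$:

```latex
\text{condition: } p(x \mid y_0) = \frac{p(x,\, y_0)}{p(y_0)}
\qquad\qquad
\text{marginalize out: } p(x) = \sum_y p(x,\, y).
```

Either way, $y$ is gone afterwards, which is why both operations leave the same network structure over the remaining variables.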

Finally, at the start of the post, I mentioned that our multivariate distributions have group and vector space structure.  This gives us two more operations we can use: the inverse and scalar multiplication.  You can find more posts on how to take advantage of these structures here and here.

### Next time…

The best part of all of this is still coming.  Next, we’ll take a look at full-on Bayesian classification and why it forms a monoid.  Besides online and parallel trainers, this also gives us a fast cross-validation method.

There’ll also be posts about the monoid structure of Markov chains, the Free HomTrainer, and how this whole algebraic framework applies to NP-approximation algorithms as well.

Subscribe to the RSS feed to stay tuned.

# Benchmarking GHC HEAD with Criterion

So you’re developing GHC. You make some changes that affect the performance of compiled programs, but how do you check whether the performance has really improved? Well, if you’re making some general optimisations – a new Core-to-Core transformation perhaps – then you can use the NoFib benchmark suite, which is a commonly accepted method of measuring GHC performance. But what if you’re developing some very specific optimisations that are unlikely to be benchmarked by NoFib? What if you extended the compiler in a way that lets you write faster code that was previously impossible, and there is no way for NoFib to measure your improvements? Sounds like writing some criterion benchmarks would be a Good Thing.

There’s a problem though – installing criterion with GHC HEAD. Criterion has lots of dependencies, but you cannot install them automatically with cabal-install, because cabal-install usually doesn’t work with GHC HEAD (although the Cabal library is one of GHC’s boot libraries). On the other hand, installing dependencies manually is a pain. Besides, many libraries will not compile with GHC HEAD. So how to write criterion benchmarks for HEAD? I faced this problem some time ago and found a solution which, although not perfect, works fine for me.

In principle my idea is nothing fancy:

1. download all the required dependencies from Hackage and extract them in a single directory,
2. determine the order in which they need to be installed,
3. build each library with GHC HEAD, resolving build errors if necessary,
4. register each library with GHC HEAD (see Appendix below).

Doing these things for the first time was very tedious and took me about 2-3 hours. Determining package dependencies was probably the most time-consuming part. Resolving build errors wasn’t that bad, though there were a couple of difficulties. It turned out that many packages put an upper bound on the version of the base package, and removing this upper bound is the only change required to build such a package.

The key to my solution is that once you figure out in what order packages should be installed and remove the build errors, you can write a shell script that builds and installs packages automatically. This means that after installing GHC HEAD in a sandbox (see Appendix below) you can run the script to build and install all the packages. This will give you a fully working GHC installation in which you can write Criterion benchmarks for new features that you implemented in the compiler. Here’s what the script looks like (full version available here):

```bash
#!/bin/bash

PKGS="\
primitive-0.5.0.1 \
vector-0.10.0.1 \
dlist-0.5 \
vector-algorithms-0.5.4.2 \
..." # more packages in this list

if [[ $# -gt 1 ]]; then
    echo "Too many parameters"
    exit
elif [[ $# -eq 1 ]]; then
    if [[ $1 == "clean" ]]; then
        echo -n "Cleaning"
        for i in $PKGS
        do
            echo -n "."
            cd $i
            rm -rf dist
            rm -f Setup Setup.o Setup.hi
            cd ..
        done
        echo "done"
    else
        echo "Invalid parameter: $1"
        exit
    fi
else
    for i in $PKGS
    do
        echo "Installing package $i"
        cd $i
        ((if [[ -f Setup.lhs ]]; then ghc Setup.lhs; else ghc Setup.hs; fi) && \
            ./Setup configure --user --enable-shared \
            && ./Setup build && ./Setup install) \
            || exit
        cd ..
    done
fi
```

The script is nothing elaborate. Running it without any parameters will build and install all packages on the list. If you run it with the “clean” parameter, it will remove build artefacts from the package directories. If for some reason the script fails – e.g. one of the libraries fails to build – you can comment out the already installed libraries so that the script resumes from the point where it previously stopped.

# Summary

Using the approach described above I can finally write criterion benchmarks for GHC HEAD. There are a couple of considerations though:

- things are likely to break as HEAD gets updated. Be prepared to add new libraries as dependencies, change compilation parameters or fix new build errors,
- for some time now you need to pass the --enable-shared flag to cabal configure when building a shared library. This causes every library to be compiled twice. I don’t know if there’s anything one can do about that,
- you need to manually download new versions of libraries,
- fixing build errors manually may not be easy,
- rerunning the script when something fails may be tedious,
- changes in HEAD might cause performance problems in libraries you are using. If this goes unnoticed the benchmarking results might be invalid (I think this problem is hypothetical).
You can download my script and the source code for all the modified packages here. I’m not giving you any guarantee that it will work for you, since HEAD changes all the time. It’s also quite possible that you don’t need some of the libraries I’m using, for example Repa.

# Appendix: Sandboxing GHC

For the above method to work effectively you need a sandboxed installation of GHC. There are tools designed for sandboxing GHC (e.g. hsenv) but I use a method described here. It’s perfectly suited to my needs. I like to have full manual control when needed, but I also have this shell script to automate switching between sandboxes:

#!/bin/bash

SANDBOX_DIR="/path/to/ghc-sandbox/"
ACTIVE_SYMLINK="${SANDBOX_DIR}active"
STARTCOLOR="\e[32m"
ENDCOLOR="\e[0m"

active_link_name=$(readlink ${ACTIVE_SYMLINK})
active_name=$(basename ${active_link_name})

if [[ $# -lt 1 ]]; then
    for i in $(ls ${SANDBOX_DIR}); do
        if [[ $i != "active" ]]; then
            if [[ $i == $active_name ]]; then
                echo -e "* $STARTCOLOR$i$ENDCOLOR"
            else
                echo "  $i"
            fi
        fi
    done
    exit
fi

for i in $(ls ${SANDBOX_DIR}); do
    if [[ $i == $1 ]]; then
        cd $SANDBOX_DIR
        rm ${ACTIVE_SYMLINK}
        ln -s $1 ${ACTIVE_SYMLINK}
        exit
    fi
done

echo "Sandbox $1 not found"

It displays a list of sandboxes when run without any parameters (the active sandbox is displayed in green and marked with an asterisk) and switches the active sandbox when given a command-line parameter. Together with bash’s auto-completion feature, switching between different GHC versions is a matter of seconds.

### JP Moresmau

# Moving from Windows 8 to Ubuntu (I'm a Haskell on Windows developer no more!)

I bought myself a new laptop the other day, and while I'm happy with it, it came with Windows 8. This wasn't an issue per se, once I looked up on the internet how to find the "shut down" button... Sure, the gestures are a bit annoying when you don't have a touch screen, but I could live with that.
But then I started running into familiar issues again as I tried to set up all the Haskell libraries I needed for a project: libraries not working on Windows, MinGW/MSYS/Cygwin hell, etc... I just couldn't take it anymore. So I guess this is an admission of defeat: when you need work done quickly using a significant number of Haskell libraries, you need a Unix-based OS.

So I decided to dual boot the computer with Ubuntu. I don't really know why I chose Ubuntu; from looking around it seemed to be both stable and developer friendly. I was first pleased to see that Windows would give me tools to resize my main partition. I remember a time when you had to use third-party software to do that! I burned a DVD with the 12.10 image (I see 13.04 is out, I'll have to upgrade some day I suppose), and went with the Linux Secure install. It didn't recognize that I had a Windows install, so I configured my partitions and tried to understand what the messages about the boot loader meant. But when I restarted, I went straight to Windows as if Ubuntu didn't exist. So I restarted using the DVD, and launched the Boot Repair utility. At the end it told me that I had to change something in the UEFI options of the computer. I went in there and sure enough there was an entry for Ubuntu. Once I selected that, I could see the dual boot screen, giving me the choice between Ubuntu and Windows, and both work! Success!

After that, not much sweat to report. I quickly came to appreciate apt-get, as opposed to the various download-and-install procedures you get for Windows software. Sure, at the time I installed it, Eclipse was still 3.8 and not 4.2, but I'm not going to complain. Oh, and the library I needed that I couldn't get to work on Windows? Installed like a charm. I'm now an Ubuntu Haskell developer, and I don't regret it.

## May 08, 2013

### Dan Burton

# Two implementations of Seers

Last time, we implemented a bowling game scorer by using a Tardis.
If you aren’t yet familiar with the Tardis’s interface, then I recommend you check out the explanation on Hackage. (tl;dr it’s a State monad with get and put, … Continue reading

### Jasper Van der Jeugt

# OdHac

# Introduction

So, last Thursday I escaped Belgium for a bit and travelled to Odessa to attend OdHac, a Haskell Hackathon organised there by Roman and hosted by Provectus IT.

Odessa is a really sweet city. One thing I noticed is that while some of the buildings seem to be in a deplorable condition from the outside, they actually turn out to be very nice once you’re indoors! This was for example the case for the hostel I was staying in. The contrast is also seen elsewhere in the city, with brand new apartments next to ramshackle, desolated houses. This is, however, not the case for the very centre of the city – around the famous Potemkin Stairs. This neighbourhood is absolutely amazing.

# Hakyll at OdHac

At the Hackathon, I gave a small presentation about Hakyll on Friday, and some awesome people decided to help me improve Hakyll a bit this weekend. We were able to implement some exciting features.

## If-else conditionals in templates

We added some functionality to the Hakyll templates which allows you to check if a certain value is defined. We chose to use the same syntax as Pandoc, to further ease integration:

<h1>$title$</h1>
$if(author)$
  <em>by $author$</em>
$endif$

We also support if-else-endif. On the plane back, I also took the time to implement $foreach$ and $partial$.

## Teasers

Teasers allow you to write a short introduction for a blogpost, and make it easy to reproduce this introduction on another page together with a “Read more” link. We decided to implement the convention used in Wordpress, which means you define your teasers using the following simple format:

---
title: Some post
---

This is the introduction.

<!--more-->

The rest of the post...

## Pagination

We also started to work on a generic Pagination module.
It’s not finished yet, but it should be possible to clean this up and package it by the end of the week. I’m quite excited about this feature because lots of people have requested it in the past.

Additionally, we did some performance improvements and fixed a bug or two. All in all, a very productive and fun Hackathon! I hope to package up all these changes and push them to Hackage soon. Thanks for contributing to Hakyll, Alexey Smirnov, Anton Dubovik, Dmitriy Shamatrin, Ivan Veselov and Pavel Poukh! And, of course, thanks for organizing, Roman!

## May 07, 2013

### Douglas M. Auclair (geophf)

# Small survey of Category Theory introductions

Some time ago a friend asked for a good introductory work on Category theory. I never did answer his question to my satisfaction, as the stuff I picked up on the subject was here and there as I needed it, and I thought there was never any succinct introductory work. Well, I thought wrongly.

http://en.wikipedia.org/wiki/Categories_for_the_Working_Mathematician

The above link points to the seminal summa, available for you if you wish to pursue this delightful area of research into expressivity in mathematics. Also, of course, there's the working-quantum-physicist's introduction at:

http://math.ucr.edu/home/baez/categories.html

Having a working knowledge of quantum computation is not necessary, probably not even helpful, but a very nice introduction is Quantum Computation and Quantum Information by Nielsen and Chuang, if you wish to see the source from which I got to categories and Category Theory. Dr. Baez's work starts off lightly and playfully, but then gets pretty deep pretty quickly, as he goes into the Groupoid/Topoid theoretical application of Category Theory. That's to be understood, as quantists are always concerned about (super-)symmetries, and I, not so much, as I look for the more practical application of Categories in Monoids and the Relational Calculus, but there it is.
I do, of course, have more advanced works on this topic if you wish to research further, and there's always this blog, where I look at the logical implications of cat theory (heh: 'logical' 'implications' ... Math humor). There is, e.g., an introductory article on monads and their computational application at:

http://logicaltypes.blogspot.com/2011/09/monads-in-java.html

### Holden Karau

# Whats new in Spark this week #1

What's new in Spark will look at the activity in the Spark commit logs every week and attempt to summarize what new features and bug fixes have occurred. This is not intended to summarize everything, just the things that might be useful to application developers. Without further ado, let's get started:

That is all that I found interesting in skimming this week's commit logs; if I missed something important, feel free to let me know :)

## May 06, 2013

### Kevin Reid (kpreid)

# A Portal 2 co-op not-really-a-puzzle

### Ketil Malde

# Spring cleaning

The beautiful May weather is upon us, and it’s time for a spring cleaning, sorting through old drafts and dusting off old web pages. So, slightly too late to claim the spirit of “publish early, publish often”, a series of posts written over the last months and never quite finished are now out there. That is, here and here and here.

Also, while statically generated HTML is great from a security perspective (my WordPress instance got hacked), the lack of comment options is really very unsatisfactory. So, after coming across a post from Gwern, I checked out Disqus, and after a few moments, comments are now available. This was pretty simple, and I think the downside of not having control of the data is an acceptable price for not having to deal with spam and security issues. Feel free to go over the back catalog and add your voice where applicable.
# Illumina RNAseq and bias

After doing mostly genome sequences for some time, we recently looked at some transcript sequences, specifically RNASeq data from the Illumina sequencing platform. One of the first questions will be whether the data look okay, and one of the first things to check to answer it is the distribution of nucleotides, using the FastX toolkit. It looks like this (ripped off the ’net, from University of Exeter):

Does this look scary to you? Since the sequence fragments should be randomly distributed in the transcripts, there should be no (or at least very little) bias, but the distribution of the first twelve nucleotides is clearly skewed. We checked this for 454 RNASeq data as well, and yes, the same thing happens there. On the other hand, genome data does not have anything of the sort; there everything aligns nicely to the expected AT/GC ratio. After some searching around, we found the explanation in a paper.

The protocol used for RNA sequencing is to use reverse transcriptase to convert the RNA back into DNA, which can then be sequenced as usual. The RNA is single stranded, and the transcriptase, like polymerase, needs a small double-stranded segment which it then extends. This is achieved by mixing in random hexamers – small single-stranded DNA stretches six nucleotides long – which will then bind to the RNA at random places. As far as the theory goes, at least. But it turns out that priming by random hexamers doesn’t amount to randomly located transcripts, and there is a bias.

The main message of the paper is solid enough, and it seems clear that this bias is not caused by read errors, but rather by a location bias for the hexamers. The article makes a convincing argument, and I’m almost entirely, but not quite, satisfied with it. What still makes me uneasy about the whole thing is:

• The pattern stretches up to position 12, or perhaps 13.
Conceivably, this could be caused by two biased hexamers ligating after each other, but then the pattern should be the same in positions 7-12, and it isn’t really. (Pearson correlation between the first and second hexamer: 0.11.) Why would there be different biases here?
• Also, if there is a double hexamer effect, I’d be much happier if I observed a weak triple hexamer effect – but from position 13 onwards, it’s almost dead flat; no discernible bias.
• The same pattern appears for the second reads in paired-end sequencing. I can accept a hexamer bias in the beginning of the reads, but how could this possibly affect the end of the cloned DNA? Edit: I guess we use random hexamer primers for the next round of amplification (in the reverse direction), and then sort clones by length (on a gel or similar). Doh. Oh: if there is just a tiny bias, wouldn’t that be amplified with each round of PCR? Would that explain it?
• I can’t figure out the hexamers involved. The common hexamers are, unsurprisingly, repeats. You would think that such a distinct and consistent pattern would be caused by a few very common hexamers, but that doesn’t appear to be the case.

## A brief tour of the evidence

So I extracted the initial 12-mer from the reads into a file 28pre12.seq, as well as the following 12-mer into 28inf12.seq, to be used as a control. First, counting what should be the most common three-letter pattern:

% grep -c AT...A 28???12.seq
28inf12.seq:15649
28pre12.seq:14544

No dice – this is slightly more common in the infix sequences than in the prefix ones. Counting reads by nucleotide in position 7:

% for a in A C G T; do echo $a $(grep -c "^......$a" 28pre12.seq); done
A 16887
C 19624
G 15239
T 48250

Here, T is vastly overrepresented.
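This kind of per-position tally can also be sketched in Haskell; `countAt` below is my own hypothetical helper (not from the post), assuming one read per line:

```haskell
import qualified Data.Map.Strict as Map

-- Tally the nucleotide found at a given 0-based position across a
-- list of reads, skipping reads that are too short.  countAt 6
-- corresponds to position 7 in the 1-based numbering used above.
countAt :: Int -> [String] -> Map.Map Char Int
countAt pos = Map.fromListWith (+)
            . map (\r -> (r !! pos, 1))
            . filter ((> pos) . length)

main :: IO ()
main = print (Map.toList (countAt 6 ["CACGAATATCTT", "CAGCACTATATT", "TGAATATATACA"]))
```

Applied to the real data, `countAt 6 . lines <$> readFile "28pre12.seq"` should reproduce the same counts as the shell loop.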
The control:

% for a in A C G T; do echo $a $(grep -c "^......$a" 28inf12.seq); done
A 29518
C 20374
G 20894
T 29206

Looking at the 12-mers with a T in position 7 at the beginning of reads, and using the middle of reads as the control, we get:

% grep '^......T' 28pre12.seq | sort | uniq -c | sort -n | tail
 27 CACGAATATCTT
 27 CAGCACTATATT
 30 TGAATATATACA
 31 CAGGAATCAAAC
 31 CGCTGTTATCCC
 34 CAAAATTGTATA
 39 TTTTGCTACACA
 50 CGCTGATATACT
 57 CGAATATCTTTT
130 CTGAGATAGAAA

Or, as a position-specific frequency matrix (X marks the fixed T in position 7):

A  -  4  3  4  4  6  -  7  1  7  2  4
C  8  -  3  1  -  2  -  2  1  2  4  2
G  -  4  3  2  3  -  -  1  1  -  -  -
T  2  2  1  3  2  2  X  -  7  1  4  2
   *  *  *  *  *  *  *  *  *

And again, no such pattern in the infixes:

% grep '^......T' 28inf12.seq | sort | uniq -c | sort -n | tail
 23 TTATTCTTATCT
 24 ATCTCTTGAATC
 25 CGTCGATCTTAA
 26 AGATATTCGTGA
 27 GAGCTCTAAAAG
 28 AAGTATTACGCG
 28 ATACTTTTAGAG
 30 ACTTTTTTACTT
 34 TCTTATTTAAAA
 39 GTCTTATGTATC

So, there are a few over-represented strings at the start of the reads contributing to this, but the “worst” offender only has 130 occurrences, thus contributing 0.1% extra to the T in position 7.

## Conclusion?

There is a clear bias, but it seems to be pretty weakly connected to specific hexamers; instead, it looks like e.g. any prefix with a T in position seven is more common. To me, this looks more like a binding site motif than a skewed hexamer distribution. For instance, could there be a bias in the reverse transcriptase binding? Or could it be some other effect? I can’t help but notice that position seven is the first polymerase-extended position – is it somehow easier for the polymerase to start with a T? I don’t get it.

## Acknowledgments

Thanks to my colleagues Tomasz Furmanek and Susanne Balzer for looking into this with me.

## May 05, 2013

### Alson Kemp

# [Synthetic] Performance of the Go frontend for GCC

First, a note: this is a tiny synthetic bench.
It’s not intended to answer the question: is GCCGo a good compiler? It is intended to answer the question: as someone investigating Go, should I also investigate GCCGo?

While reading some announcements about the impending release of Go 1.1, I noticed that GCC was implementing a Go frontend. Interesting. So the benefits of the Go language coupled with the GCC toolchain? Sounds good. The benefits of the Go language combined with GCC’s decades of x86 optimization? Sounds great. So I grabbed GCCGo and built it. Instructions here: http://golang.org/doc/install/gccgo

Important bits:

• Definitely follow the instructions to build GCC in a separate directory from the source.
• My configuration was: /tmp/gccgo/configure --disable-multilib --enable-languages=c,c++,go

I used the Mandelbrot script from The Benchmarks Game at mandlebrot.go. Compiled using go and gccgo, respectively:

go build mandel.go

gccgo -v -lpthread -B /tmp/gccgo-build/gcc/ -B /tmp/gccgo-build/lto-plugin/ \
  -B /tmp/gccgo-build/x86_64-unknown-linux-gnu/libgo/ \
  -I /tmp/gccgo-build/x86_64-unknown-linux-gnu/libgo/ \
  -m64 -fgo-relative-import-path=_/home/me/apps/go/bin \
  -o ./mandel.gccgo ./mandel.go -O3

Since I didn’t install GCCGo, and after flailing at compiler options trying to get “go build” to find includes, libraries, etc., I gave up on the simple “go -compiler” syntax for gccgo. So the above gccgo command is the sausage-making version.

So the two files:

4,532,110 mandel.gccgo - Compiled in 0.3s
1,877,120 mandel.golang - Compiled in 0.5s

As a HackerNewser noted, stripping the executables could be good. Stripped:

1,605,472 mandel.gccgo
1,308,840 mandel.golang

Note: the stripped GCCGo executables don’t actually work, so take the “stripped” value with a grain of salt for the moment. Bug here.

GCCGo produced an *unstripped* executable 2.5x as large as Go produced. Stripped, the executables were similar in size, but the GCCGo executable didn’t work. So far the Go compiler is winning.
Performance [on a tiny, synthetic, CPU-bound, floating-point-math-dominated program]:

time ./mandel.golang 16000 > /dev/null
real 0m10.610s
user 0m41.091s
sys  0m0.068s

time ./mandel.gccgo 16000 > /dev/null
real 0m9.719s
user 0m37.758s
sys  0m0.064s

So GCCGo produces executables that are about 10% faster than Go does, but the executable is nearly 3x the size. I think I’ll stick with the Go compiler for now, especially since the tooling built into/around Go is very solid.

Additional notes from the HN discussion:

• GCC was 4.8.0. Go was 1.1rc1. Both AMD64.

### Dominic Steinitz

# Cabal Hacking at OdHac

This is a very informal blog of the Cabal hacking team at OdHac: Sasha Manzyuk, Bram Schuur and me (Dominic Steinitz). I really enjoyed pair programming with both of them, and I certainly wouldn’t have tracked down the bug I investigated without their help. Also a big thank you to Roman Cheplyaka for organising a great event.

## Cabal Bug Fixing I

### Ticket 823

The problem is indeed in the function Distribution.Simple.Utils.wrapText, which is used to wrap messages at 79 characters. No attempt is made to detect whether a space at which a line is wrapped is part of a file name. One can reproduce the problem more easily as follows:

$ cabal install http://hackage.haskell.org/packages/archive/groups/0.2.0.1/groups-0.2.0.1.tar.gz --prefix "/home/manzyuk/foo/bar/baz quux"
Downloading

http://hackage.haskell.org/packages/archive/groups/0.2.0.1/groups-0.2.0.1.tar.gz

Resolving dependencies...
In order, the following will be installed:
groups-0.2.0.1 (reinstall)
Warning: Note that reinstalls are always dangerous. Continuing anyway...
Configuring groups-0.2.0.1...
Building groups-0.2.0.1...
Preprocessing library groups-0.2.0.1...
[1 of 1] Compiling Data.Group ( src/Data/Group.hs, dist/build/Data/Group.o )
In-place registering groups-0.2.0.1...
Installing library in /home/manzyuk/foo/bar/baz
quux/lib/groups-0.2.0.1/ghc-7.4.1
Registering groups-0.2.0.1...
Installed groups-0.2.0.1

Note that the line starting with “Installing library in” is wrapped at the space that is part of a directory name.
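A minimal sketch of the failure mode (my own simplified reconstruction, not Cabal's actual implementation) makes the problem concrete: a greedy word wrapper sees only space-separated words, so a space inside a path is a legal break point like any other:

```haskell
-- A naive greedy word-wrapper in the spirit of
-- Distribution.Simple.Utils.wrapText (a simplified sketch, not the
-- real Cabal code).  Every space is treated as a break point, so a
-- space inside a file name is indistinguishable from a space
-- between words.
wrapText :: Int -> String -> [String]
wrapText width = map unwords . go . words
  where
    go []     = []
    go (w:ws) = fill [w] (length w) ws

    -- acc holds the current line's words in reverse order.
    fill acc _   []     = [reverse acc]
    fill acc len (w:ws)
      | len + 1 + length w <= width = fill (w:acc) (len + 1 + length w) ws
      | otherwise                   = reverse acc : go (w:ws)

main :: IO ()
main = mapM_ putStrLn $ wrapText 79
  "Installing library in /home/manzyuk/foo/bar/baz quux/lib/groups-0.2.0.1/ghc-7.4.1"
```

Running this breaks the message at the space inside "baz quux", reproducing the two wrapped lines in the output above.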

I suppose a proper way to fix it would be to use an algebraic data type rather than String to represent messages, something like Text.PrettyPrint.Doc, so that file names can be treated as unbreakable entities that shouldn’t be wrapped. This, however, seems to require quite a bit of re-engineering of error handling in Cabal. Should I go ahead and do it?

2013-05-04 Sat: Mikhail Glushenkov commented: “I think it’s a minor issue and your efforts are better spent elsewhere.”

### Ticket 760

Lines beginning with ‘--’ are comments. To enable library profiling one has to uncomment the line

-- library-profiling: False

and change the value of the option to True:

library-profiling: True

This is somewhat confusing: an option can be in three different states: True, False, and commented out.

### Ticket 774

We can augment the tokenizer so that lines beginning with ‘>’ are left intact:

tokeniseLine :: (LineNo, Indent, HasTabs, String) -> [Token]
tokeniseLine (n0, i, t, l) = case split n0 l of
    (Span _ l':ss) -> Line n0 i t l' : ss
    cs             -> cs
  where
    split _ ""          = []
    split n s@('>' : _) = [Line n i t s]
    split n s           = case span (\c -> c /= '}' && c /= '{') s of
      ("", '{' : s') ->             OpenBracket  n : split n s'
      (w , '{' : s') -> mkspan n w (OpenBracket  n : split n s')
      ("", '}' : s') ->             CloseBracket n : split n s'
      (w , '}' : s') -> mkspan n w (CloseBracket n : split n s')
      (w ,        _) -> mkspan n w []

    mkspan n s ss | null s'   = ss
                  | otherwise = Span n s' : ss
      where s' = trimTrailing (trimLeading s)

Is it a good / right solution? Am I overlooking anything?

2013-05-04 Sat: Roman has pointed out that Duncan Coutts announced he was rewriting the parser for .cabal files, so it probably doesn’t make sense to fix the old parser if it’s going to be replaced by the new one.

### Ticket 757

Can’t reproduce with Firefox 20.0 (Linux/x86_64).

### Ticket 735

I tried to reproduce this issue and I can’t. I tried to build NaturalSort-0.2.1 with GHC 7.4.1 and Cabal 1.16.03, and I would expect the build to fail due to this line in NaturalSort.cabal: Cabal-Version: >= 1.6 && < 1.9, but it does build, and the log file does contain some log info. No summary file has been produced though.

I also tried to build cal3d, which on my machine fails due to a missing C library, but again the log file contains the error message. The summary file is generated when I try to install the library from Hackage but is not generated when I manually download the source and try to build the library locally, which confirms the behavior observed in Issue #1189.

### Ticket 676

The culprit of the problem was the function simplifyCondTree, which traversed CondTree in post-order, putting global options after the options defined in the branches of conditionals. I’ve changed that to pre-order traversal.
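A toy model (a hypothetical `Tree` type of my own, not Cabal's actual CondTree) illustrates why the traversal order matters when later options override earlier ones, as GHC flags do:

```haskell
-- A toy stand-in for Cabal's CondTree: a node carries its own
-- options plus the subtrees introduced by conditionals.
data Tree a = Node a [Tree a]

-- Post-order flattening emits the branch options first, so the
-- global (root) options land *after* them and win; pre-order puts
-- the global options first, letting the branches override them.
postOrder, preOrder :: Tree [a] -> [a]
postOrder (Node xs ts) = concatMap postOrder ts ++ xs
preOrder  (Node xs ts) = xs ++ concatMap preOrder ts

-- Global -O2 with a conditional branch asking for -O0.
example :: Tree [String]
example = Node ["-O2"] [Node ["-O0"] []]

main :: IO ()
main = do
  print (postOrder example)  -- branch option first, global -O2 wins
  print (preOrder  example)  -- global first, the branch can override
```

With GHC-style "last flag wins" semantics, only the pre-order result lets the conditional branch take effect, which matches the fix described above.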

Sent a pull request. Also, added a test demonstrating what’s changed.

### Ticket 885

Duplicate of Issue 774.

2013-05-04 Sat: Closed by Johan Tibell.

### Ticket 883

I don’t think this is a cabal bug.

I tried to reproduce this behaviour: I downloaded monad-par-0.3.4.1 and mangled the file Control/Monad/Par/Scheds/Direct.hs so that its top looked as follows:

{-# LANGUAGE RankNTypes, NamedFieldPuns, BangPatterns,
ExistentialQuantification, CPP, ScopedTypeVariables,
TypeSynonymInstances, MultiParamTypeClasses,
GeneralizedNewtypeDeriving, PackageImports,
ParallelListComp
#-} # OPTIONS_GHC
{- -Wall -fno-warn-name-shadowing -fno-warn-unused-do-bind #-}

Running GHC directly on this file gives the following error message:

Control/Monad/Par/Scheds/Direct.hs:6:18: parse error on input `#'

Building through cabal build produces the following error message:

Control/Monad/Par/Scheds/Direct.hs:1:1:
File name does not match module name:
Saw: `Main'
Expected: `Control.Monad.Par.Scheds.Direct'

However, running GHC on Control/Monad/Par.hs produces an identical error message.

I’m guessing that something like this is happening:

• when GHC is run directly on Control/Monad/Par/Scheds/Direct.hs, it tries to parse a module declaration, encounters # OPTIONS_GHC, decides that the file doesn’t have a module header (in particular, defaults the module name to Main), tries to parse # OPTIONS_GHC and fails;
• when GHC is run on Control/Monad/Par.hs, it expects to find a module named Control.Monad.Par.Scheds.Direct in the file Control/Monad/Par/Scheds/Direct.hs, but when it tries to parse it, it encounters # OPTIONS_GHC, and decides that the file doesn’t have a module header, defaults the module name to Main, and fails with a different error.

This theory is supported by the following experiment: replace the pragmas and # OPTIONS_GHC above with some code, e.g., foo = 3. Then running GHC on Control/Monad/Par/Scheds/Direct.hs produces

Control/Monad/Par/Scheds/Direct.hs:12:1:
parse error on input `module'

and running GHC on Control/Monad/Par.hs produces

Control/Monad/Par/Scheds/Direct.hs:1:1:
File name does not match module name:
Saw: `Main'
Expected: `Control.Monad.Par.Scheds.Direct'

The summary of the above is that this is not a cabal bug (and probably not a bug at all), and this ticket should be closed.

2013-05-04 Sat: Closed by Johan Tibell.

### Ticket 417

Is this still an issue? I see a comment in Cabal/Distribution/PackageDescription/Parse.hs:

-- "build-depends" is a local field now.  To be backwards
-- compatible, we still allow it as a global field in old-style
-- package description files and translate it to a local field by
-- adding it to every non-empty section

which makes me think that what Duncan is suggesting:

or it should automatically propagate such global options into each buildable component.

has already been implemented.

2013-05-05 Sun: It seems I was wrong, investigating.

## Cabal Bug Fixing II

• Work on adding a --lowest-dependencies flag for cabal, which builds a package against its lowest dependencies. This is useful for package maintainers to see whether the lower bounds of a package are still valid. An initial version has been submitted as a pull request, but the actual workings are still being tweaked.
• A small bugfix making the --package-db flag accept relative paths.

## Cabal Bug Fixing III

I took a look at Ticket 1297.

My first step was to grab the latest sources and run the tests.

bash-3.2$ cabal test
Running 2 test suites...
Test suite package-tests: RUNNING...
Cabal test suite - testing cabal version 1.17.0
BuildTestSuiteDetailedV09: [Failed]
build failed!
expected: False
 but got: True
BuildDeps/InternalLibrary2: [Failed]
executable should have linked with the internal library
expected: "myLibFunc internal"
 but got: ""
BuildDeps/InternalLibrary3: [Failed]
executable should have linked with the internal library
expected: "myLibFunc internal"
 but got: ""
BuildDeps/InternalLibrary4: [Failed]
executable should have linked with the installed library
expected: "myLibFunc installed"
 but got: ""
PackageTests/CMain: [OK]

         Test Cases   Total
 Passed  20           20
 Failed  4            4
 Total   24           24
Test suite package-tests: FAIL
Test suite logged to: dist/test/Cabal-1.17.0-package-tests.log
Test suite unit-tests: RUNNING...
Test suite unit-tests: PASS
Test suite logged to: dist/test/Cabal-1.17.0-unit-tests.log
1 of 2 test suites (1 of 2 test cases) passed.

BuildTestSuiteDetailedV09 is a test which checks that Cabal testing is working correctly. It takes a small Cabal project with a test-suite stanza, builds it, and runs the tests in the test suite. This test passes on Linux, so it looked like the problem was in the way cabal was running the tests. This turned out to be a red herring. I pulled the test out of the Cabal test suite and ran it on its own (obvious in hindsight). So the problem was in Cabal, not in the way Cabal was being tested.

bash-3.2$ ../cabal/cabal-install/dist/build/cabal/cabal configure --enable-tests --package-db=../cabal/Cabal/dist/package.conf.inplace
Warning: The package list for 'hackage.haskell.org' is 38 days old.
Run 'cabal update' to get the latest list of available packages.
Warning: my.cabal: A package using section syntax must specify at least
'cabal-version: >= 1.2'.
Resolving dependencies...
Warning: my.cabal: A package using section syntax must specify at least
'cabal-version: >= 1.2'.
Configuring BuildTestSuiteDetailedV09-0.1...
bash-3.2$ ../cabal/cabal-install/dist/build/cabal/cabal build
Warning: my.cabal: A package using section syntax must specify at least
'cabal-version: >= 1.2'.
Building BuildTestSuiteDetailedV09-0.1...
Preprocessing library BuildTestSuiteDetailedV09-0.1...
[1 of 1] Compiling Dummy            ( Dummy.hs, dist/build/Dummy.o )
In-place registering BuildTestSuiteDetailedV09-0.1...
Preprocessing test suite 'dummy' for BuildTestSuiteDetailedV09-0.1...
[1 of 1] Compiling Dummy            ( Dummy.hs, dist/build/Dummy.o )
ar: dist/build/Dummy.o: No such file or directory

But why was Cabal complaining about a missing Dummy.o when it had clearly just created it? Sasha suggested using strace (http://en.wikipedia.org/wiki/Strace) but this didn’t exist on my Mac. But I did have dtrace (http://hints.macworld.com/article.php?story=20071031121823710 and http://chihungchan.blogspot.com/2009/05/which-process-deleted-my-file.html).

Dominics-MacBook-Pro:Dummy dom$ dtrace -qn 'syscall::unlink:entry {printf("PID=%d, CMD=%s, FILE=%s\n", pid, curpsinfo->pr_psargs, copyinstr(arg0));}'
dtrace: failed to initialize dtrace: DTrace requires additional privileges
Dominics-MacBook-Pro:Dummy dom$ sudo dtrace -qn 'syscall::unlink:entry {printf("PID=%d, CMD=%s, FILE=%s\n", pid, curpsinfo->pr_psargs, copyinstr(arg0));}'
Password:
PID=16137, CMD=Emacs, FILE=/Users/dom/Dropbox/Private/.#Everything.org
PID=16137, CMD=Emacs, FILE=/Users/dom/Dropbox/Private/#Everything.org#
PID=82566, CMD=ar, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T//ar.ugfvvt
PID=82567, CMD=libtool, FILE=dist/build/libHSBuildTestSuiteDetailedV09-0.1.a
PID=82557, CMD=cabal, FILE=dist/build/libHSBuildTestSuiteDetailedV09-0.1.a
PID=82557, CMD=cabal, FILE=dist/build/libHSBuildTestSuiteDetailedV09-0.1_p.a
PID=82557, CMD=cabal, FILE=dist/build/libHSBuildTestSuiteDetailedV09-0.1-ghc7.6.2.dylib
PID=82557, CMD=cabal, FILE=dist/build/HSBuildTestSuiteDetailedV09-0.1.o
PID=82560, CMD=ghc, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T/ghc82560_0/ghc82560_0.c
PID=82560, CMD=ghc, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T/ghc82560_0/ghc82560_0.s
PID=82573, CMD=i686-apple-darwi, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T//ccnsroxB.s
PID=82562, CMD=i686-apple-darwi, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T//ccjXqsIc.s
PID=82571, CMD=ghc, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T/ghc82571_0/ghc82571_0.c
PID=82571, CMD=ghc, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T/ghc82571_0/ghc82571_0.s
PID=82576, CMD=as, FILE=dist/build/Dummy.o
PID=82577, CMD=ar, FILE=/var/folders/3p/m593dprn5snbjz6sc45c77f80000gn/T//ar.mlwmuT
PID=82557, CMD=cabal, FILE=dist/build/libdummy.a
PID=82557, CMD=cabal, FILE=dist/build/libdummy_p.a
PID=82557, CMD=cabal, FILE=dist/build/libdummy-ghc7.6.2.dylib
PID=82557, CMD=cabal, FILE=dist/build/dummy.o

I noticed I had a Dummy.o and a dummy.o. Also, the name of the test-suite was dummy and the name of the module in the Cabal package was Dummy. I tried changing the name of the test-suite stanza to Dummy.
No luck, but when I tried changing the name of the test-suite stanza to something completely different, everything worked.

bash-3.2$ ../cabal/cabal-install/dist/build/cabal/cabal build
Warning: my.cabal: A package using section syntax must specify at least
'cabal-version: >= 1.2'.
Building BuildTestSuiteDetailedV09-0.1...
Preprocessing library BuildTestSuiteDetailedV09-0.1...
[1 of 1] Compiling Dummy            ( Dummy.hs, dist/build/Dummy.o )
In-place registering BuildTestSuiteDetailedV09-0.1...
Preprocessing test suite 'Urk' for BuildTestSuiteDetailedV09-0.1...
[1 of 1] Compiling Dummy            ( Dummy.hs, dist/build/Dummy.o )
In-place registering Urk-0.1...
[1 of 1] Compiling Main             ( dist/build/UrkStub/UrkStub-tmp/UrkStub.hs, dist/build/UrkStub/UrkStub-tmp/Main.o )
Linking dist/build/UrkStub/UrkStub ...

### Case Insensitive but Case Preserving

It turns out that most Macs are configured like this. You can check as shown below. The absence of Case-sensitive in the Name field means the file system is case insensitive.

bash-3.2$ diskutil info disk0s2
   Device Identifier:        disk0s2
   Device Node:              /dev/disk0s2
   Part of Whole:            disk0
   Device / Media Name:      Customer

   Volume Name:              Macintosh HD
   Escaped with Unicode:     Macintosh%FF%FE%20%00HD

   Mounted:                  Yes
   Mount Point:              /
   Escaped with Unicode:     /

   File System Personality:  Journaled HFS+
   Type (Bundle):            hfs
   Name (User Visible):      Mac OS Extended (Journaled)
   Journal:                  Journal size 40960 KB at offset 0x1238b000
   Owners:                   Enabled

   Partition Type:           Apple_HFS
   OS Can Be Installed:      Yes
   Media Type:               Generic
   Protocol:                 SATA
   SMART Status:             Verified
   Volume UUID:              86F67826-3F78-387D-B335-C695CADCFD86

   Total Size:               499.4 GB (499418034176 Bytes) (exactly 975425848 512-Byte-Blocks)
   Volume Free Space:        427.8 GB (427780079616 Bytes) (exactly 835507968 512-Byte-Blocks)
   Device Block Size:        512 Bytes

   Read-Only Media:          No
   Read-Only Volume:         No
   Ejectable:                No

   Whole:                    No
   Internal:                 Yes
   Solid State:              Yes

### Coda

Changing dummy to Dummy tickles the bug on Linux, so now we have an easily reproducible bug, although it is not clear how to fix it: Ticket 1311.

## May 03, 2013

### Tom Schrijvers

# PPDP 2013: Second Call for Papers

=====================================================================
                     2nd Call for papers
               15th International Symposium on
       Principles and Practice of Declarative Programming
                         PPDP 2013
     Special Issue of Science of Computer Programming (SCP)
            Madrid, Spain, September 16-18, 2013
              (co-located with LOPSTR 2013)
          http://users.ugent.be/~tschrijv/PPDP2013/
======================================================================

EXTENDED SUBMISSION DEADLINE: JUNE 13, 2013

PPDP 2013 is a forum that brings together researchers from the declarative programming communities, including those working in the logic, constraint and functional programming paradigms, but also embracing a variety of other paradigms such as visual programming, executable specification languages, database languages, and knowledge representation languages.
The goal is to stimulate research in the use of logical formalisms and methods for specifying, performing, and analysing computations, including mechanisms for mobility, modularity, concurrency, object-orientation, security, verification and static analysis. Papers related to the use of declarative paradigms and tools in industry and education are especially solicited.

Topics of interest include, but are not limited to:

* Functional programming
* Logic programming
* Answer-set programming
* Functional-logic programming
* Declarative visual languages
* Constraint Handling Rules
* Parallel implementation and concurrency
* Monads, type classes and dependent type systems
* Declarative domain-specific languages
* Termination, resource analysis and the verification of declarative programs
* Transformation and partial evaluation of declarative languages
* Language extensions for security and tabulation
* Probabilistic modelling in a declarative language and modelling reactivity
* Memory management and the implementation of declarative systems
* Practical experiences and industrial application

This year the conference will be co-located with the 23rd International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2013) and held in cooperation with ACM SIGPLAN. The conference will be held in Madrid, Spain. Previous symposia were held at Leuven (Belgium), Odense (Denmark), Hagenberg (Austria), Coimbra (Portugal), Valencia (Spain), Wroclaw (Poland), Venice (Italy), Lisboa (Portugal), Verona (Italy), Uppsala (Sweden), Pittsburgh (USA), Florence (Italy), Montreal (Canada), and Paris (France).

Papers must describe original work, be written and presented in English, and must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal, conference, or workshop with refereed proceedings.
Work that already appeared in unpublished or informally published workshop proceedings may be submitted (please contact the PC chair in case of questions). Proceedings will be published in the ACM International Conference Proceedings Series.

After the symposium, a selection of the best papers will be invited to extend their submissions in the light of the feedback solicited at the symposium. The papers are expected to include at least 30% extra material over and above the PPDP version. Then, after another round of reviewing, these revised papers will be published in a special issue of SCP with a target publication date by Elsevier of 2014.

Important Dates

Abstract submission:    June 10, 2013
Paper submission:       June 13, 2013
Notification:           July 18, 2013
Camera-ready:           August 4, 2013
Symposium:              September 16-18, 2013
Invites for SCP:        October 2, 2013
Submission of SCP:      December 11, 2013
Notification from SCP:  February 22, 2014
Camera-ready for SCP:   March 14, 2014

Authors should submit an electronic copy of the paper (written in English) in PDF. Each submission must include on its first page the paper title; authors and their affiliations; abstract; and three to four keywords. The keywords will be used to assist us in selecting appropriate reviewers for the paper. Papers should consist of no more than 12 pages, formatted following the ACM SIG proceedings template (option 1). The 12 page limit must include references but excludes well-marked appendices not intended for publication. Referees are not required to read the appendices, and thus papers should be intelligible without them.
Program Committee

Sergio Antoy, Portland State University, USA
Manuel Carro, IMDEA Software Institute, Spain
Iliano Cervesato, Carnegie Mellon University, Qatar
Agostino Dovier, Universita degli Studi di Udine, Italy
Maria Garcia de la Banda, Monash University, Australia
Ralf Hinze, University of Oxford, UK
Yukiyoshi Kameyama, University of Tsukuba, Japan
Oleg Kiselyov, USA
Yanhong Annie Liu, State University of New York at Stony Brook, USA
Stefan Monnier, Universite de Montreal, Canada
Alan Mycroft, University of Cambridge, UK
Bruno C. d. S. Oliveira, National University of Singapore, Singapore
Alberto Pettorossi, Universita di Roma Tor Vergata, Italy
Enrico Pontelli, New Mexico State University, USA
Kristoffer Rose, IBM Research, USA
Sukyoung Ryu, KAIST, South Korea
Vitor Santos Costa, University of Porto, Portugal
Torsten Schaub, University of Potsdam, Germany
Tom Schrijvers, Ghent University, Belgium
Martin Sulzmann, Hochschule Karlsruhe, Germany
Wouter Swierstra, Universiteit Utrecht, The Netherlands
Tarmo Uustalu, Institute of Cybernetics, Estonia
Janis Voigtlaender, University of Bonn, Germany
Meng Wang, Chalmers University of Technology, Sweden
Jan Wielemaker, Universiteit van Amsterdam, The Netherlands

Program Chair

Tom Schrijvers, Department of Applied Mathematics and Computer Science, Ghent University, 9000 Gent, Belgium

General Chair

Ricardo Pena, Facultad de Informatica, Universidad Complutense de Madrid, 28040 Madrid, Spain

### Disciple/DDC

# First working program using the Repa plugin

### Haskell Source

repa_process :: R.Stream k Int -> Int
repa_process s = R.fold (+) 0 s + R.fold (*) 1 s

### Actual well formed GHC Core code

.. or at least the parts that get printed with -dsuppress-all.
repa_process
repa_process
 = \ @ k_aq6 arg_s2Rf ->
   let { (# x1_s2R6, x1_acc_s2R5 #) ~ _ <- newByteArray# 8 realWorld# } in
   let { __DEFAULT ~ x2_s2R8 <- writeIntArray# x1_acc_s2R5 0 0 x1_s2R6 } in
   let { (# x3_s2Rd, x3_acc_s2Rc #) ~ _ <- newByteArray# 8 x2_s2R8 } in
   let { __DEFAULT ~ x4_s2RS <- writeIntArray# x3_acc_s2Rc 0 1 x3_s2Rd } in
   let { Stream ds1_s2Rs ds2_s2Rj ~ _ <- arg_s2Rf } in
   let { Vector rb_s2Rz _ rb2_s2Ry ~ _ <- ds2_s2Rj cast ... } in
   letrec {
     go_s2RP
      = \ ix_s2Rr world_s2Ru ->
        case >=# ix_s2Rr ds1_s2Rs of _ {
          False ->
           let { (# x7_s2RF, x0_s2RC #) ~ _ <- readIntArray# x1_acc_s2R5 0 world_s2Ru } in
           let { __DEFAULT ~ sat_s2SV <- +# rb_s2Rz ix_s2Rr } in
           let { __DEFAULT ~ wild3_s2RD <- indexIntArray# rb2_s2Ry sat_s2SV } in
           let { __DEFAULT ~ sat_s2SU <- +# x0_s2RC wild3_s2RD } in
           let { __DEFAULT ~ x8_s2RH <- writeIntArray# x1_acc_s2R5 0 sat_s2SU x7_s2RF } in
           let { (# x9_s2RN, x1_s2RL #) ~ _ <- readIntArray# x3_acc_s2Rc 0 x8_s2RH } in
           let { __DEFAULT ~ sat_s2ST <- *# x1_s2RL wild3_s2RD } in
           let { __DEFAULT ~ x10_s2RR <- writeIntArray# x3_acc_s2Rc 0 sat_s2ST x9_s2RN } in
           let { __DEFAULT ~ sat_s2SS <- +# ix_s2Rr 1 } in
           go_s2RP sat_s2SS x10_s2RR;
          True -> world_s2Ru }; } in
   let { __DEFAULT ~ x11_s2RU <- go_s2RP 0 x4_s2RS } in
   let { (# x12_s2RY, x1_s2S2 #) ~ _ <- readIntArray# x1_acc_s2R5 0 x11_s2RU } in
   let { (# _, x2_s2S3 #) ~ _ <- readIntArray# x3_acc_s2Rc 0 x12_s2RY } in
   let { __DEFAULT ~ sat_s2SW <- +# x1_s2S2 x2_s2S3 } in
   I# sat_s2SW

I'm using ByteArrays to fake imperative variables. The next job is to convert the arrays x1_acc_s2R5 and x3_acc_s2Rc into proper accumulation variables, and attach them to the go_s2RP loop.

### x86 assembly code

This is just for the inner loop.

LBB11_2:
        movq    (%rcx), %rdi
        addq    %rdi, 16(%rdx)
        imulq   16(%rsi), %rdi
        movq    %rdi, 16(%rsi)
        addq    $8, %rcx
        incq    %r14
        cmpq    %r14, %rax
        jg      LBB11_2
The extra memory operations are there because the LLVM optimiser doesn't know that the two ByteArrays I'm using to fake imperative variables don't alias. This should turn into optimal code once it uses pure accumulators.

# Representing Applicatives

In the previous two posts, we've built up a whole range of applicatives, out of Const, Identity, Reader, Compose, Product, Sum, and Fix (and some higher-order analogues). Sum has given us the most trouble, but in some sense has been the most powerful, letting us write things like possibly eventually terminating lists, or trees, or in fact any sort of structure with branching alternatives. In this post, I want to think a bit more about why it is that Sum is the trickiest of the bunch, and more generally, what we can say about when two applicative structures are the "same". In the process of doing so, we'll invent something a lot like Traversable en passant.

Let's do some counting exercises. Product Identity Identity holds exactly two things. It is therefore isomorphic to ((->) Bool), or if we prefer, ((->) (Either () ())). That is to say that a pair that holds two values of type a is the same as a function that takes a two-valued type and yields a value of type a. A product of more functors in turn is isomorphic to the reader of the sum of each of the datatypes that "represent" them. E.g. Product (Product Identity Identity) (Product (Const ()) Identity) is iso to ((->) (Either (Either () ()) ())), i.e. a data type with three possible inhabitants. In making this move we took Product to Either -- multiplication to sum. We can pull a similar trick with Compose. Compose (Product Identity Identity) (Product Identity Identity) goes to ((->) (Either () (), Either () ())). So again we took Product to a sum type, but now we took Compose to a pair -- a product type! The intuition is that composition multiplies the possibilities of spaces in each nested functor.
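The first of these isomorphisms can be spelled out concretely. Here is a small sketch using Product and Identity from base (the names toFn/fromFn are mine):

```haskell
import Data.Functor.Identity (Identity (..))
import Data.Functor.Product (Product (..))

-- Product Identity Identity holds exactly two values of type a,
-- so it is isomorphic to functions out of a two-valued type.
toFn :: Product Identity Identity a -> (Bool -> a)
toFn (Pair (Identity x) (Identity y)) = \b -> if b then y else x

fromFn :: (Bool -> a) -> Product Identity Identity a
fromFn f = Pair (Identity (f False)) (Identity (f True))
```

Round-tripping through either direction gives back what you started with, which is exactly what it means for the two types to represent the same two-slot structure.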

Hmm.. products go to sums, composition goes to multiplication, etc. This should remind us of something -- these rules are exactly the rules for working with exponentials. x^n * x^m = x^(n + m). (x^n)^m = x^(n*m). x^0 = 1, x^1 = x.

Seen from the right standpoint, this isn't surprising at all, but almost inevitable. The functors we're describing are known as "representable," a term which derives from category theory. (See appendix on representable functors below).

In Haskell-land, a "representable functor" is just any functor isomorphic to the reader functor ((->) a) for some appropriate a. Now if we think back to our algebraic representations of data types, we call the arrow type constructor an exponential. We can "count" a -> x as x^a, since e.g. there are 3^2 distinct functions that inhabit the type 2 -> 3. The intuition for this is that for each input we pick one of the possible results, so as the number of inputs goes up by one, the number of functions goes up by multiplying through by the set of possible results. 1 -> 3 = 3, 2 -> 3 = 3 * 3, (n + 1) -> 3 = 3 * (n -> 3).

Hence, if we "represent" our functors by exponentials, then we can work with them directly as exponentials as well, with all the usual rules. Edward Kmett has a library encoding representable functors in Haskell.

Meanwhile, Peter Hancock prefers to call such functors "Naperian" after John Napier, inventor of the logarithm (See also here). Why Naperian? Because if our functors are isomorphic to exponentials, then we can take their logs! And that brings us back to the initial discussion of type mathematics. We have some functor F, and claim that it is isomorphic to -^R for some concrete data type R. Well, this means that R is the logarithm of F. E.g. (R -> a, S -> a) =~ Either R S -> a, which is to say that if log F = R and log G =~ S, then log (F * G) = log F + log G. Similarly, for any other data type n, again with log F = R, we have n -> F a =~ n -> R -> a =~ (n * R) -> a, which is to say that log (F^n) =~ n * log F.

This gives us one intuition for why the sum functor is not generally representable -- it is very difficult to decompose log (F + G) into some simpler compound expression of logs.

So what functors are Representable? Anything that can be seen as a fixed shape with some index. Pairs, fixed-size vectors, fixed-size matrices, any nesting of fixed vectors and matricies. But also infinite structures of regular shape! However, not things whose shape can vary -- not lists, not sums. Trees of fixed depth or infinite binary trees therefore, but not trees of arbitrary depth or with ragged structure, etc.

Representable functors turn out to be extremely powerful tools. Once we know a functor is representable, we know exactly what its applicative instance must be, and that its applicative instance will be "zippy" -- i.e. acting pointwise across the structure. We also know that it has a monad instance! And, unfortunately, that this monad instance is typically fairly useless (in that it is also "zippy" -- i.e. the monad instance on a pair just acts on the two elements pointwise, without ever allowing anything in the first slot to affect anything in the second slot, etc.). But we know more than that. We know that a representable functor, by virtue of being a reader in disguise, cannot have effects that migrate outwards. So any two actions in a representable functor are commutative. And more than that, they are entirely independent.
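The "zippy" instance one obtains mechanically from a representation can be sketched as follows (V2 and toList2 are illustrative names of mine, not part of any library):

```haskell
-- V2 is representable by Bool: a two-slot structure stored as a function.
newtype V2 a = V2 (Bool -> a)

instance Functor V2 where
  fmap f (V2 g) = V2 (f . g)

-- The Applicative forced on us by representability acts pointwise.
instance Applicative V2 where
  pure x = V2 (const x)                  -- fill every slot with x
  V2 f <*> V2 x = V2 (\i -> f i (x i))   -- combine slot-by-slot

-- Observe the slots in order.
toList2 :: V2 a -> [a]
toList2 (V2 g) = [g False, g True]
```

Note how nothing in the first slot can ever influence the second slot: each index is handled entirely independently, which is the "zippy" behaviour described above.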

This means that all representable functors are "distributive"! Given any functor f, and any data type r, then we have


distributeReader :: Functor f => f (r -> a) -> (r -> f a)
distributeReader fra = \r -> fmap ($ r) fra

That is to say, given an arrow "inside" a functor, we can always pull the arrow out, and "distribute" application across the contents of the functor. A list of functions from Int -> Int becomes a single function from Int to a list of Int, etc.

More generally, since all representable functors are isomorphic to reader, given g representable and f any functor, we have: distribute :: (Functor f, Representable g) => f (g a) -> g (f a). This is pretty powerful sauce! And if f and g are both representable, then we get the transposition isomorphism, witnessed by flip!

That's just the beginning of the good stuff. If we take functions and "unrepresent" them back to functors (i.e. take their logs), then we can do things like move from ((->) Bool) to pairs, etc. Since we're in a pervasively lazy language, we've just created a library for memoization! This is because we've gone from a function to a data structure we can index into, representing each possible argument to this function as a "slot" in the structure. And the laziness pays off because we only need to evaluate the contents of each slot on demand (otherwise we'd have a precomputed lookup table rather than a dynamically-evaluated memo table).

And now suppose we take our representable functor in the form s -> a and pair it with an "index" into that function, in the form of a concrete s. Then we'd be able to step that s forward or backwards and navigate around our structure of as. And this is precisely the Store Comonad! And this in turn gives a characterization of the lens laws.

What this all gives us a tiny taste of, in fact, is the tremendous power of the Yoneda lemma, which, in Haskell, is all about going between values and functions, and in fact captures the important universality and uniqueness properties that make working with representable functors tractable. A further tiny taste of Yoneda comes from a nice blog post by Conal Elliott on memoization.
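The memoization trick can be sketched for functions on the naturals, "unrepresenting" ((->) Int) into a lazy list (memoNat is an illustrative helper of mine, not a library function):

```haskell
-- Memoize a function on non-negative Ints by tabulating it into a
-- lazy list; each slot is a thunk, forced at most once, on demand.
memoNat :: (Int -> a) -> (Int -> a)
memoNat f = (table !!)
  where table = map f [0 ..]

-- The classic payoff: naive Fibonacci becomes fast, because every
-- recursive call goes through the shared memo table.
fib :: Int -> Integer
fib = memoNat fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)
```

Because fib takes no arguments of its own, the table is a single shared structure, and laziness ensures we only ever compute the slots we actually index.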
### Extra Credit on Sum Functors

There in fact is a log identity on sums. It goes like this:

log (a + c) = log a + log (1 + c/a)

Do you have a useful computational interpretation of this? I've got the inklings of one, but not much else.

### Appendix: Notes on Representable Functors in Hask

The way to think about this is to take some arbitrary category C, and some category that's basically Set (in our case, Hask. In fact, in our case, C is Hask too, and we're just talking about endofunctors on Hask). Now, we take some functor F : C -> Set, and some A which is an object of C. The set of morphisms originating at A (denoted by Hom(A,-)) constitutes a functor called the "hom functor." For any object X in C, we can "plug it in" to Hom(A,-), to then get the set of all arrows from A to X. And for any morphism X -> Y in C, we can derive a morphism from Hom(A,X) to Hom(A,Y), by composition. This is equivalent, in Haskell-land, to using a function f :: x -> y to send g :: a -> x to a -> y by writing "functionAToY = f . g". So, for any A in C, we have a hom functor on C, which is C -> Set, where the elements of the resultant Set are homomorphisms in C. Now, we have this other arbitrary functor F, which is also C -> Set. If there is an isomorphism of functors between F and Hom(A,-), then we say F is "representable". A representable functor is thus one that can be worked with entirely as an appropriate hom-functor.

### Yesod Web Framework

# Yesod 1.2 released

The Yesod team is pleased to announce the release of Yesod 1.2. You can get it with:

cabal install yesod-platform yesod-bin

The yesod binary is now a separate package, which helps manage dependencies, but it does mean you need to remember to install two separate packages.

Yesod 1.1 was released in August. Shortly after, Michael started working for FP Complete. A lot has happened since then!
## Yesod ecosystem

## Yesod 1.2

### Representation system

Previously discussed in the post: a better representation system, cleaner internals, and the request local cache. Providing different representation types (JSON or HTML) used to be cumbersome at times, but now it is simple using selectRep:

getResource :: Handler TypedContent
getResource = do
    selectRep $ do
        provideRep $ [hamlet|<div>|]
        provideRep $ object ["result" .= "ok"]

### Request local type-based caching

See the previously mentioned blog post; in short, you create a newtype wrapper around some data and then you can cache it with the cached function.

### Subsite overhaul

Subsites are now just a transformer over a master site.

### Flexible routing

Routing dispatch is now more flexible.

### Better streaming API

The streaming API has been simplified.

### Asset pipeline

Yesod has always made it easy to combine dynamic CSS and Javascript, since before Rails started using the term asset pipeline. What was missing was the same ease for static assets. Combining static assets is very important for optimal performance because it reduces the total number of network requests. You can now easily combine CSS and Javascript with the combineScripts and combineStylesheets helpers. Here is the scaffolding change and you can also look at the haddock documentation.

### Better testing

yesod-test was completely overhauled, making it easier to use and providing cleaner integration with hspec. It is easy in Haskell to just lean on the type system for most things and skip testing, particularly for things that are hard to test with QuickCheck. But yesod-test (and wai-test) are there to prevent the bugs that the type system cannot.

### Even more

• More efficient session handling.
• yesod-auth email plugin now supports logging in via username in addition to email address.
• probably more stuff we forgot to mention

## Conclusion & more info

FPComplete's development of the School of Haskell has been great for the Haskell community, helping to keep spreading knowledge. It has also been running with the changes for 1.2 for quite a while, which should contribute to making 1.2 a more stable release.

The [high-level changelog](https://github.com/yesodweb/yesod/wiki/Changelog#yesod-12-not-yet-released) gives an overview of the changes, and a [detailed change list](https://github.com/yesodweb/yesod/wiki/Detailed-change-list#not-yet-released-yesod-12) is also available.

The book documentation for 1.2 has been started, but still needs more work to get fully up to date.

Most of the changes to upgrade your site to 1.2 should be fairly mechanical. I started a wiki page for the upgrade. If you have any issues, please note them there or on the mail list.

We hope you enjoy using Yesod 1.2!

# Seminar by Dave Thomas on "VMs Demystified – A Tour of the Engine Room"

Dave Thomas, who is widely known for his work on virtual machines and the YOW! Australia conference series, will be in Sydney this month. He kindly agreed to talk about VMs at CSE. Anybody with an interest in VM engineering and programming language implementations should mark the date in their calendar!

Time: 29 May 2013 (Wed), 11AM

Title: VMs Demystified – A Tour of the Engine Room

Speaker: David Thomas (Bedarra Research Labs and YOW! Developer Conference)

Also showing at: ScalaSyd, the previous evening (28 May, 18:30); details here

Abstract

Language virtual machines are an essential part of current and next generation platforms.  Yet many developers have no real idea of what is actually happening when their program is run on a VM or the hardware.  This leads to many false assumptions about speed and space performance.  In this talk you will see under the hood of language virtual machines and gain an understanding of what makes VMs tick as well as differences between the languages they support.

First we explain the essence of VM engineering, including object representations; stack versus register VMs; RISC versus CISC byte codes; static dispatch versus polymorphic inline caches; context management; interpretation versus dynamic translation/tracing versus compilation; garbage collection; and native types and code interfaces. We discuss benchmark speed and space performance versus real application performance.

Armed with the above knowledge we then engage in some of the entertaining educational VM debates. How can a JVM or PHP VM be faster than C++? When is the JVM or CLR better? How does the language, or the language library, impact the VM? Are strongly typed languages always faster than dynamic languages? How does hosting with CRuby compare to JRuby or Java? Should we just put the VM in hardware? How do functional language VMs differ from object VMs? How can thousands of processes in Erlang be efficient compared to using native OS threads?

Bio

Dave Thomas is an expert in dynamic languages and has decades of experience building and deploying language VMs for mobile, instrumentation, embedded command and control, and business applications on platforms from mainframes to micro devices. He is widely known and respected in the programming language community and this year will be presenting the keynote at the Commercial Users of Functional Programming (CUFP) conference. While CEO of OTI, now IBM OTI Labs, he oversaw IBM's Smalltalk and J9 family of Java enterprise and embedded JVMs, OSGi, as well as the initial releases of Eclipse. He led an IBM OTI research effort into universal virtual machines. After leaving IBM he worked on JVM support for dynamic languages and the use of V8 for embedded applications. For the past 6 years Dave has been working with high performance vector functional virtual machines, DSLs and, most recently, exploring special purpose HW VMs.

# Array fusion using the lowering transform

I'm getting back into blogging. This is current work in progress.

Repa 4 will include a GHC plugin that performs array fusion using a version of Richard Waters's series expressions system, extended to support the segmented operators we need for Data Parallel Haskell.

The plugin converts GHC core code to Disciple Core, performs the fusion transform, and then converts back to GHC core. We're using Disciple Core for the main transform because it has a simple (and working) external core format, and the core language AST along with most of the core-to-core transforms are parametric in the type used for names. This second feature is important because we use a version of Disciple Core where the main array combinators (map, fold, filter etc) are primitive operators rather than regular function bindings.

The fusion transform is almost (almost) working for some simple examples. Here is the compilation sequence:

Haskell Source

process :: Stream k Int -> Int
process s
= fold (+) 0 s + fold (*) 1 s

In this example we're performing two separate reductions over the same stream. None of the existing short-cut fusion approaches can handle this properly. Stream fusion in Data.Vector, Repa-style delayed array fusion, and build/foldr fusion will all read the stream elements from memory twice (if they are already manifest) or compute them twice (if they are delayed). We want to compute both reductions in a single pass.
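The single-pass behaviour we want from fusion can be sketched at the source level with a hand-written combinator (an illustration of the idea only, not the plugin's actual output; foldBoth is a name of mine):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Run two strict folds over the same input in a single traversal,
-- carrying both accumulators side by side, as the lowered loop does.
foldBoth :: (a -> x -> a) -> a -> (b -> x -> b) -> b -> [x] -> (a, b)
foldBoth f a0 g b0 = go a0 b0
  where
    go !a !b []       = (a, b)
    go !a !b (x : xs) = go (f a x) (g b x) xs

-- The example program, with lists standing in for streams:
-- each element is consumed exactly once.
process :: [Int] -> Int
process xs = s + p
  where (s, p) = foldBoth (+) 0 (*) 1 xs
```

The lowering transform effectively derives this combined loop automatically from the two separate fold# calls.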

Raw GHC core converted to DDC core

repa_process_r2 : [k_aq0 : Data].Stream_r0 k_aq0 Int_3J -> Int_3J
= \(k_c : *_34d).\(s_aqe : Stream_r0 k_c Int_3J).
$fNumInt_$c+_rjF
(fold_r34 [k_c] [Int_3J] [Int_3J] $fNumInt_$c+_rjF
(I#_6d 0i#) s_aqe)
(fold_r34 [k_c] [Int_3J] [Int_3J] $fNumInt_$c*_rjE
(I#_6d 1i#) s_aqe)

Detect array combinators from GHC core, and convert to DDC primops

repa_process_r2 : [k_aq0 : Rate].Stream# k_aq0 Int# -> Int#
= /\(k_c : Rate).
\(s_aqe : Stream# k_c Int#).
add# [Int#]
(fold# [k_c] [Int#] [Int#] (add# [Int#]) 0i# s_aqe)
(fold# [k_c] [Int#] [Int#] (mul# [Int#]) 1i# s_aqe)

Normalize and shift array combinators to top-level
All array combinators are used in their own binding.

repa_process_r2 : [k_aq0 : Rate].Stream# k_aq0 Int# -> Int#
= /\(k_c : Rate).
\(s_aqe : Stream# k_c Int#).
let x0 = add# [Int#] in
let x1 = fold# [k_c] [Int#] [Int#] x0 0i# s_aqe in
let x2 = mul# [Int#] in
let x3 = fold# [k_c] [Int#] [Int#] x2 1i# s_aqe in
add# [Int#] x1 x3

Inline and eta-expand worker functions
This puts the program in the correct form for the next phase.

repa_process_r2 : [k_aq0 : Rate].Stream# k_aq0 Int# -> Int#
= /\(k_c : Rate).
\(s_aqe : Stream# k_c Int#).
let x1
= fold# [k_c] [Int#] [Int#]
(\(x0 x1 : Int#). add# [Int#] x0 x1) 0i# s_aqe in
let x3
= fold# [k_c] [Int#] [Int#]
(\(x2 x3 : Int#). mul# [Int#] x2 x3) 1i# s_aqe in
add# [Int#] x1 x3

Do the lowering transform
This is the main pass that performs array fusion. Note that we've introduced a single loop# that computes both of the fold# results.

repa_process_r2 : [k_c : Rate].Stream# k_c Int# -> Int#
= /\(k_c : Rate).
\(s_aqe : Stream# k_c Int#).
let x1_acc : Ref# Int# = new# [Int#] 0i# in
let x3_acc : Ref# Int# = new# [Int#] 1i# in
let _ : Unit
= loop# (lengthOfRate# [k_c])
(\(x0 : Nat#).
let x1 : Int# = next# [Int#] [k_c] s_aqe x0 in
let x0 : Int# = read# [Int#] x1_acc in
let _  : Void#
= write# [Int#] x1_acc (add# [Int#] x0 x1) in
let x2 : Int# = read# [Int#] x3_acc in
let _  : Void#
= write# [Int#] x3_acc (mul# [Int#] x2 x1) in
()) in
let x1 : Int# = read# [Int#] x1_acc in
let x3 : Int# = read# [Int#] x3_acc in
add# [Int#] x1 x3

Assign imperative variable storage to arrays
We need to convert the code back to GHC core, but we don't want to use IORefs because they can't hold unboxed values (of types like Int#). Instead, we use some new arrays to hold these values.

repa_process_r2 : [k_c : Rate].Stream# k_c Int# -> Int#
= /\(k_c : Rate).
\(s_aqe : Stream# k_c Int#).
let x1_acc : Array# Int# = newArray# [Int#] 8# in
let _      : Void#       = writeArray# [Int#] x1_acc 0# 0i# in
let x3_acc : Array# Int# = newArray# [Int#] 8# in
let _      : Void#       = writeArray# [Int#] x3_acc 0# 1i# in
let _ : Unit
= loop# (lengthOfRate# [k_c])
(\(x0 : Nat#).
let x1 : Int# = next# [Int#] [k_c] s_aqe x0 in
let x0 : Int# = readArray# [Int#] x1_acc 0# in
let _ : Void#
= writeArray# [Int#] x1_acc 0#
(add# [Int#] x0 x1) in
let x2 : Int# = readArray# [Int#] x3_acc 0# in
let _ : Void#
= writeArray# [Int#] x3_acc 0#
(mul# [Int#] x2 x1) in
()) in
let x1 : Int# = readArray# [Int#] x1_acc 0# in
let x3 : Int# = readArray# [Int#] x3_acc 0# in
add# [Int#] x1 x3

Thread state token through effectful primops
The lowered code is naturally imperative, and GHC uses state threading to represent this.

repa_process_r2 : [k_c : Rate].Stream# k_c Int# -> World# -> Tuple2# World# Int#
= /\(k_c : Rate).
\(s_aqe : Stream# k_c Int#).\(x0 : World#).
caselet T2# (x1 : World#) (x1_acc : Array# Int#)
= newArray# [Int#] 8# x0 in
caselet T2# (x2 : World#) (_ : Void#)
= writeArray# [Int#] x1_acc 0# 0i# x1 in
caselet T2# (x3 : World#) (x3_acc : Array# Int#)
= newArray# [Int#] 8# x2 in
caselet T2# (x4 : World#) (_ : Void#)
= writeArray# [Int#] x3_acc 0# 1i# x3 in
caselet T2# (x11 : World#) (_ : Unit)
= loop# (lengthOfRate# [k_c])
(\(x0 : Nat#).\(x5 : World#).
caselet T2# (x6 : World#) (x1 : Int#)
= next# [Int#] [k_c] s_aqe x0 x5 in
caselet T2# (x7 : World#) (x0 : Int#)
= readArray# [Int#] x1_acc 0# x6 in
caselet T2# (x8 : World#) (_ : Void#)
= writeArray# [Int#] x1_acc 0# (add# [Int#] x0 x1) x7 in
caselet T2# (x9 : World#) (x2 : Int#)
= readArray# [Int#] x3_acc 0# x8 in
caselet T2# (x10 : World#) (_ : Void#)
= writeArray# [Int#] x3_acc 0# (mul# [Int#] x2 x1) x9 in
T2# x10 ()) x4 in
caselet T2# (x12 : World#) (x1 : Int#)
= readArray# [Int#] x1_acc 0# x11 in
caselet T2# (x13 : World#) (x3 : Int#)
= readArray# [Int#] x3_acc 0# x12 in
T2# x13 (add# [Int#] x1 x3)

Here, "caselet" is just sugar for a case expression with a single alternative.

Convert back to GHC core

repa_process_sTX
:: forall k_c.
Data.Array.Repa.Series.Stream k_c GHC.Types.Int
-> GHC.Prim.State# GHC.Prim.RealWorld
-> (# GHC.Prim.State# GHC.Prim.RealWorld, GHC.Prim.Int# #)
[LclId]
lowered_sTX =
\ (@ k_c)
(rate_sTY :: GHC.Prim.Int#)
(x_sTZ :: Data.Array.Repa.Series.Stream k_c GHC.Types.Int)
(x_sU0 :: GHC.Prim.State# GHC.Prim.RealWorld) ->
let { (# x_sU1, x_sU2 #) ~ scrut_sUv
<- newByteArray#_sU3 @ GHC.Prim.RealWorld 8 x_sU0
} in
let { __DEFAULT ~ x_sU8
<- writeIntArray#_sU4 @ GHC.Prim.RealWorld x_sU2 0 0 x_sU1
} in
let { (# x_sU5, x_sU6 #) ~ scrut_sUw
<- newByteArray#_sU7 @ GHC.Prim.RealWorld 8 x_sU8
} in
let { __DEFAULT ~ x_sUp
<- writeIntArray#_sU9 @ GHC.Prim.RealWorld x_sU6 0 1 x_sU5
} in
let { (# x_sUa, x_sUb #) ~ x_sUa
<- Main.primLoop
(Main.primLengthOfRate rate_sTY)
(\ (x_sUc :: GHC.Prim.Int#)
(x_sUd :: GHC.Prim.State# GHC.Prim.RealWorld) ->
let { (# x_sUe, x_sU1 #) ~ x_sUe
<- Main.primNext_Int @ k_c x_sTZ x_sUc x_sUd
} in
let { (# x_sUf, x_sUc #) ~ x_sUf
<- readIntArray#_sUg x_sU2 0 x_sUe
} in
let { __DEFAULT ~ x_sUl
<- writeIntArray#_sUh
@ GHC.Prim.RealWorld x_sU2 0 (+#_sUi x_sUc x_sU1) x_sUf
} in
let { (# x_sUj, x_sU8 #) ~ x_sUj
<- readIntArray#_sUk x_sU6 0 x_sUl
} in
let { __DEFAULT ~ x_sUo
<- writeIntArray#_sUm
@ GHC.Prim.RealWorld x_sU6 0 (*#_sUn x_sU8 x_sU1) x_sUj
} in
(# x_sUo, GHC.Tuple.() #))
x_sUp
} in
let { (# x_sUq, x_sU1 #) ~ x_sUq
<- readIntArray#_sUr x_sU2 0 x_sUa
} in
let { (# x_sUs, x_sU5 #) ~ x_sUs
<- readIntArray#_sUt x_sU6 0 x_sUq
} in
(# x_sUs, +#_sUu x_sU1 x_sU5 #)

This doesn't work yet because I've forgotten to pass the type arguments to the unboxed tuple constructor (#,#), and maybe other problems as well. I'll post again when I have an actual program running.

# Big fucking deal

Quoth Wikipedia:

Big data[1][2] is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The challenges include capture, curation, storage,[3] search, sharing, transfer, analysis,[4] and visualization. The trend to larger data sets is due to the additional information derivable from analysis of a single large set of related data, as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to “spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions.”

Now what if we get tired of the current hype cycle?

Big fucking deal[1][2] is a collection of deals so fucking large and complex that it becomes difficult to process using on-hand fuck giving tools or traditional shit giving techniques. The challenges include capture, curation, storage,[3] search, sharing, transfer, analysis,[4] and all kinds of who the fuck knows what else. The trend to larger fucking deals is due to the additional shit derivable from giving a fuck about a single large fucking pile of related shit, as compared to separate smaller piles with the same total amount of bullshit, allowing correlations to be found to “spot business shit, determine quality of whatever, prevent some nasty shit, link legal shit right the fuck together, combat fucking crime no I am not making this up it’s like fucking Batman, and determine real-time traffic shittiness.”

# darcs weekly news #103

### News and discussions

1. Darcs will be participating in this year's Google Summer of Code under the umbrella of Haskell.org! If you are interested please consult the ideas page and contact us:
2. Sebastian Fischer implemented darcs-history, a program to be used as a darcs posthook that tracks patch movements inside a repository:
3. Sebastian also suggested the possibility for darcs to easily split and merge patches that are depended upon:
4. Piyush P Kurur was also interested in some special kinds of deep amend-record:

### Issues resolved in the last week (1)

issue2274 Guillaume Hoffmann

### Patches applied in the last week (27)

See darcs wiki entry for details.

# Logistic Regression and Automated Differentiation

## Introduction

Having shown how to use automated differentiation to estimate parameters in the case of linear regression, let us now turn our attention to the problem of classification. For example, we might have some data about people’s social networking activity, such as the volume of their Twitter interactions and their number of Twitter followers, together with a label representing a human judgement about which of two individuals is the more influential. We would like to predict, for a new pair of individuals, the human judgement on who is more influential.

## Logistic Regression

We define the probability of getting a particular value of the binary label:

\displaystyle \begin{aligned} {\mathbb P}(y = 1 \mid \boldsymbol{x}; \boldsymbol{\theta}) &= h_{\boldsymbol{\theta}}(\boldsymbol{x}) \\ {\mathbb P}(y = 0 \mid \boldsymbol{x}; \boldsymbol{\theta}) &= 1 - h_{\boldsymbol{\theta}}(\boldsymbol{x}) \end{aligned}

where $\boldsymbol{x}$ and $\boldsymbol{\theta}$ are column vectors of size $m$

$\displaystyle h_{\boldsymbol{\theta}}(\boldsymbol{x}) = g(\boldsymbol{\theta}^T\boldsymbol{x})$

and $g$ is a function such as the logistic function $g(x) = 1 / (1 + e^{-x})$ or $\tanh$.

We can re-write this as:

$\displaystyle p(y \mid \boldsymbol{x} ; \boldsymbol{\theta}) = (h_{\boldsymbol{\theta}}(\boldsymbol{x}))^y(1 - h_{\boldsymbol{\theta}}(\boldsymbol{x}))^{1 - y}$

We wish to find the value of $\boldsymbol{\theta}$ that gives the maximum probability to the observations. We do this by maximising the likelihood. Assuming we have $n$ observations the likelihood is:

\displaystyle \begin{aligned} \mathcal{L}(\boldsymbol{\theta}) &= \prod_{i=1}^n p(y^{(i)} \mid {\boldsymbol{x}}^{(i)} ; \boldsymbol{\theta}) \\ &= \prod_{i=1}^n (h_{\boldsymbol{\theta}}(\boldsymbol{x}^{(i)}))^{y^{(i)}} (1 - h_{\boldsymbol{\theta}}(\boldsymbol{x}^{(i)}))^{1 - y^{(i)}} \end{aligned}

It is standard practice to maximise the log likelihood which will give the same maximum as log is monotonic.

\displaystyle \begin{aligned} \lambda(\boldsymbol{\theta}) &= \log \mathcal{L}(\boldsymbol{\theta}) \\ &= \sum_{i=1}^n {y^{(i)}}\log h_{\boldsymbol{\theta}}(\boldsymbol{x}^{(i)}) + (1 - y^{(i)})\log (1 - h_{\boldsymbol{\theta}}(\boldsymbol{x}^{(i)})) \end{aligned}

In order to maximize the log likelihood, we again use the method of steepest ascent: if $\boldsymbol{\theta}^i$ is a guess for the parameters of the model then we can improve the guess by stepping a small distance in the direction of greatest change.

$\displaystyle \boldsymbol{\theta}^{i+1} = \boldsymbol{\theta}^{i} + \gamma \nabla\lambda(\boldsymbol{\theta}^{i})$

$\gamma$ is some constant known in machine learning as the learning rate. It must be chosen to be large enough to ensure convergence within a reasonable number of steps but not so large that the algorithm fails to converge.

When the number of observations is high, evaluating the full log likelihood and its gradient can be costly; as a cheaper alternative we can use stochastic gradient ascent. Instead of taking the gradient with respect to all the observations at once, we take the gradient with respect to each observation in turn. Of course if our data set is small we may have to sweep through it several times to achieve convergence.
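To make the stochastic step concrete, here is a minimal sketch using plain lists and a hand-derived gradient rather than the Vector and ad machinery used later in the post; `gradOne` and `stepStochastic` are names invented for this illustration.

```haskell
-- One step of stochastic ascent for the logistic model, sketched with
-- plain lists.  The gradient of
--   y * log (logit z) + (1 - y) * log (1 - logit z),  z = theta . x,
-- with respect to theta works out to (y - logit z) * x.
logit :: Double -> Double
logit z = 1 / (1 + exp (negate z))

gradOne :: Double -> [Double] -> [Double] -> [Double]
gradOne y x theta = map ((y - logit z) *) x
  where z = sum (zipWith (*) theta x)

-- Update the parameters using a single observation (y, x).
stepStochastic :: Double -> (Double, [Double]) -> [Double] -> [Double]
stepStochastic gamma (y, x) theta =
  zipWith (+) theta (map (gamma *) (gradOne y x theta))

main :: IO ()
main = print (stepStochastic 0.1 (1, [1, 2]) [0, 0])
```

A full pass over the data set is then just a fold of `stepStochastic` over the observations.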

When the observations / training data are linearly separable, the magnitude of the parameters can grow without bound, as the (parametrized) logistic function then tends to the Heaviside / step function. Moreover, there can clearly be more than one separating hyperplane in this circumstance. To circumvent these infelicities, we instead maximize a penalized log likelihood function:

$\displaystyle \sum_{i=1}^n {y^{(i)}}\log h_{\boldsymbol{\theta}}(\boldsymbol{x}^{(i)}) + (1 - y^{(i)})\log (1 - h_{\boldsymbol{\theta}}(\boldsymbol{x}^{(i)})) - \frac{\delta}{2}\|\boldsymbol{\theta}\|^2$
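For reference, differentiating this penalized objective with respect to $\boldsymbol{\theta}$, assuming $g$ is the logistic function (so that $g' = g(1-g)$), gives the gradient that a steepest ascent step follows:

$\displaystyle \nabla\lambda(\boldsymbol{\theta}) = \sum_{i=1}^n \bigl(y^{(i)} - h_{\boldsymbol{\theta}}(\boldsymbol{x}^{(i)})\bigr)\,\boldsymbol{x}^{(i)} - \delta\,\boldsymbol{\theta}$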

See Bishop and Mitchell for further details.

## Implementation

Some pragmas to warn us about potentially dangerous situations.

> {-# OPTIONS_GHC -Wall                    #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults  #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind #-}
>
> {-# LANGUAGE TupleSections #-}
>
> module Logistic ( betas
>                 , main
>                 , a
>                 , b
>                 , nSamples
>                 ) where


Modules from the automatic differentiation library.

> import Numeric.AD
> import Numeric.AD.Types

> import qualified Data.Vector as V
> import Control.Monad
> import Control.Monad.State
> import Data.List
> import Text.Printf


Some modules from a random number generator library as we will want to generate some test data.

> import System.Random
> import Data.Random ()
> import Data.Random.Distribution.Beta
> import Data.RVar


Our model: the probability that $y$ has the label 1 given the observations $\boldsymbol{x}$.

> logit :: Floating a =>
>          a -> a
> logit x = 1 / (1 + exp (negate x))


For each observation, the log likelihood:

> logLikelihood :: Floating a => V.Vector a -> a -> V.Vector a -> a
> logLikelihood theta y x = y * log (logit z) +
>                           (1 - y) * log (1 - logit z)
>   where
>     z = V.sum $ V.zipWith (*) theta x


> totalLogLikelihood :: Floating a =>
>                       V.Vector a ->
>                       V.Vector a ->
>                       V.Vector (V.Vector a) ->
>                       a
> totalLogLikelihood theta y x = (a - delta * b) / l
>   where
>     l = fromIntegral $ V.length y
>     a = V.sum $ V.zipWith (logLikelihood theta) y x
>     b = (/2) $ V.sum $ V.map (^2) theta


As before we can implement steepest ascent using the grad function.

> delTotalLogLikelihood :: Floating a =>
>                          V.Vector a ->
>                          V.Vector (V.Vector a) ->
>                          V.Vector a ->
>                          V.Vector a
> delTotalLogLikelihood y x = grad f
>   where
>     f theta = totalLogLikelihood theta
>                                  (V.map auto y)
>                                  (V.map (V.map auto) x)
>
> stepOnce :: Double ->
>             V.Vector Double ->
>             V.Vector (V.Vector Double) ->
>             V.Vector Double ->
>             V.Vector Double
> stepOnce gamma y x theta =
>   V.zipWith (+) theta (V.map (* gamma) $ del theta)
>   where
>     del = delTotalLogLikelihood y x


Or, even easier, just use the library function gradientAscent!

> estimates :: (Floating a, Ord a) =>
>              V.Vector a ->
>              V.Vector (V.Vector a) ->
>              V.Vector a ->
>              [V.Vector a]
> estimates y x = gradientAscent $
>                 \theta -> totalLogLikelihood theta
>                                              (V.map auto y)
>                                              (V.map (V.map auto) x)


Let’s try it out. First we need to generate some data. Rather arbitrarily let us create some populations from the beta distribution.

> betas :: Int -> Double -> Double -> [Double]
> betas n a b =
>   fst $ runState (replicateM n (sampleRVar (beta a b))) (mkStdGen seed)
>     where
>       seed = 0


We can plot the populations we wish to distinguish by sampling.

> a, b :: Double
> a          = 15
> b          = 6
> nSamples :: Int
> nSamples   = 100000
>
> sample0, sample1 :: [Double]
> sample0 = betas nSamples a b
> sample1 = betas nSamples b a


Note that in this case we could come up with a classification rule by inspecting the histograms. Furthermore, the populations overlap which means we will inevitably mis-classify some observations.

> mixSamples :: [Double] -> [Double] -> [(Double, Double)]
> mixSamples xs ys = unfoldr g ((map (0,) xs), (map (1,) ys))
>   where
>     g ([], [])         = Nothing
>     g ([],  _)         = Nothing
>     g ( _, [])         = Nothing
>     g ((x:xs), (y:ys)) = Just (x, (y:ys, xs))


> createSample :: V.Vector (Double, Double)
> createSample = V.fromList $ take 100 $ mixSamples sample1 sample0


We create a model with one independent variable and thus two parameters.

> actualTheta :: V.Vector Double
> actualTheta = V.fromList [0.0, 1.0]


We initialise our algorithm with arbitrary values.

> initTheta :: V.Vector Double
> initTheta = V.replicate (V.length actualTheta) 0.1


Set the learning rate, the strength of the penalty term and the number of iterations.

> gamma :: Double
> gamma = 0.1
>
> delta :: Floating a => a
> delta = 1.0
>
> nIters :: Int
> nIters = 4000


Now we can run our example. For the constant parameter of our model (known in machine learning as the bias) we ensure that the corresponding “independent variable” is always set to $1.0$.

> vals :: V.Vector (Double, V.Vector Double)
> vals = V.map (\(y, x) -> (y, V.fromList [1.0, x])) $ createSample

> main :: IO ()
> main = do
>   let u = V.map fst vals
>       v = V.map snd vals
>       hs = iterate (stepOnce gamma u v) initTheta
>       xs = V.map snd vals
>       theta  = head $ drop nIters hs
>       theta' = head $ drop 100 $ estimates u v initTheta
>   printf "Hand crafted descent: theta_0 = %5.3f, theta_1 = %5.3f\n"
>          (theta V.! 0) (theta V.! 1)
>   printf "Library descent: theta_0 = %5.3f, theta_1 = %5.3f\n"
>          (theta' V.! 0) (theta' V.! 1)
>   let predProbs  = V.map (\x -> logit $ V.sum $ V.zipWith (*) theta x) xs
>       mismatches = V.filter (> 0.5) $
>                    V.map abs $
>                    V.zipWith (-) actuals preds
>         where
>           actuals = V.map fst vals
>           preds   = V.map (\x -> fromIntegral $ fromEnum (x > 0.5))
>                           predProbs
>   let lActuals, lMisMatches :: Double
>       lActuals    = fromIntegral $ V.length vals
>       lMisMatches = fromIntegral $ V.length mismatches
>   printf "%5.2f%% correct\n" $
>          100.0 * (lActuals - lMisMatches) / lActuals


And we get quite reasonable estimates:

    ghci> main
    Hand crafted descent: theta_0 = -2.071, theta_1 = 4.318
    Library descent: theta_0 = -2.070, theta_1 = 4.319
    97.00% correct


### Richard Eisenberg

# Coincident Overlap in Type Families

Haskell allows what I will call coincident overlap among type family instances. Coincident overlap occurs when two (or more) type family instances might be applicable to a given type family usage site, but they would evaluate to the same right-hand side. This post, inspired by Andy Adams-Moran’s comment on an earlier blog post, explores coincident overlap and how to extend it (or not!) to branched instances.

This post is a literate Haskell file (though there really isn’t that much code). Paste it into a .lhs file and load it into GHCi. Because the post uses branched instances, you will need the HEAD version of GHC. (Branched instances will be included in GHC 7.8, but not before.)

> {-# LANGUAGE TypeFamilies, DataKinds, GADTs, TypeOperators #-}

Our examples will be over Bools, and we need some way to get GHC to evaluate our type families. The easiest way is to use the following singleton GADT:

> data SBool :: Bool -> * where
>   SFalse :: SBool False
>   STrue  :: SBool True

## Conflict checking with type family instances

When compiling type family instances, GHC checks the instances for conflicts. To know if two instances conflict (i.e., could both match the same usage site), GHC unifies the two left-hand sides. For example, the following code is bad and is rejected:

    type family F x
    type instance F Int = Bool
    type instance F a   = Double

Compiling the above instances gives the following error message:

    Conflicting family instance declarations:
      F Int -- Defined at ...
      F a -- Defined at ...

This check is a good thing, because otherwise it would be possible to equate two incompatible types, such as Int and Bool.
## Coincident overlap among unbranched instances

Here is a nice example of how coincident overlap is useful:

> type family (x :: Bool) && (y :: Bool) :: Bool
> type instance False && a = False -- 1
> type instance True  && b = b     -- 2
> type instance c && False = False -- 3
> type instance d && True  = d     -- 4

Although the first two equations fully define the && operation, the last two instances allow GHC to reduce a use of && that could not otherwise be reducible. For example:

> and1 :: SBool a -> SBool (True && a)
> and1 x = x
>
> and2 :: SBool a -> SBool (a && True)
> and2 x = x

and1 uses the second instance of &&, but and2 requires the fourth instance. If we comment that instance out, and2 fails to compile, because GHC cannot figure out that a && True must be a for all values of a. For various good reasons, perhaps to be explored in another post, GHC does not do case analysis on types during type inference.

How does GHC know that overlap is coincident? During the conflict check, GHC looks for a substitution that unifies two potentially-conflicting instances. In our case, the fourth and first instances would conflict under the substitution {a |-> True, d |-> False}. However, after finding the unifying substitution, GHC checks the right-hand sides under that same substitution. If they are the same, then GHC considers the overlap to be coincident and allows the instance pair. In our case, GHC applies the substitution {a |-> True, d |-> False} to False and d, discovers that both become False, and so the instances are allowed.

## Coincident overlap within branched instances

When thinking about branched instances and coincident overlap, there are two possibilities to consider: coincident overlap within a branched instance, and coincident overlap among two separate branched instances. Let’s consider the first case here.
Imagine we define || analogously to &&, but using one branched instance:

> type family (x :: Bool) || (y :: Bool) :: Bool
> type instance where
>   False || a = a        -- 1
>   True  || b = True     -- 2
>   c || False = c        -- 3
>   d || True  = True     -- 4

Now, let’s consider simplifying the type e || False. The first two branches don’t match, but the third does. Now, following the rule for branched instance simplification (as stated in the Haskell wiki), we check to see if any previous branches might be applicable to e || False, for any possible instantiation of e. The first branch certainly might apply, and so e || False fails to simplify. This is surely counterintuitive, because the third branch matches e || False exactly! Just to prove this behavior, I tried running this code through GHC:

    bar :: SBool a -> SBool (a || False)
    bar x = x

Here is the error:

    Couldn't match type ‛a’ with ‛a || 'False’

At first blush, it seems I’ve missed something important here in the implementation — allowing coincident overlap within a branched instance. But there is a major problem with such overlap in this setting.

Let’s think about how coincident overlap would work here. After selecting the third branch to simplify e || False (with the substitution {c |-> e}), GHC checks to see if any previous branch could be applicable to e || False. The first branch, False || a, unifies with e || False, so it might be applicable later on. The unifying substitution is {a |-> False, e |-> False}. Now, if we wanted to check for coincident overlap, we would apply both substitutions ({c |-> e} and {a |-> False, e |-> False}) to the right-hand sides. In this case, we would see that both right-hand sides would become False, and it seems we should allow the simplification of e || False to e.

Let’s try a harder case. What if we want to simplify (G f) || False, for some type family G? The third branch matches, with the substitution {c |-> G f}. Now, we check earlier branches for applicability.
The first branch is potentially applicable, if G f simplifies to False. But now we can’t produce a substitution over type variables to witness the potential overlap, so we have nothing to apply when checking the right-hand sides. In this case, it wouldn’t be hard to imagine a substitution like {(G f) |-> False}, but that’s a slippery slope to slide down. What if f appears multiple times in the type, perhaps under different type family applications? How do we deal with this? There may well be an answer, but it would be subtle and likely quite fragile from the programmer’s point of view. So, we decided to ignore the possibility of coincident overlap within a branched instance. We were unable to come up with a compelling example of why anyone would want this feature, it seemed hard to get right, and we can just write || using separate instances, anyway.

## Coincident overlap between branched instances

Consider the following (contrived) example:

    type family F (x :: Bool) (y :: Bool) :: *
    type instance where
      F False True = Int
      F a     a    = Int
      F b     c    = Bool
    type instance F d True = Int

Is this set of instances allowable? Is that your final answer?

I believe that this set of instances wouldn’t produce any conflicts. Anything that matches F d True would have to match one of the first two branches of the branched instance, meaning that the right-hand sides coincide. However, it is difficult to reason about such cases, for human and GHC alike. So, for the sake of simplicity, we also forbid coincident overlap whenever even one instance is branched. This means that type instance F Int = Bool and type instance where F Int = Bool are very subtly different.

## Andy Adams-Moran’s example

This post was partially inspired by Andy Adams-Moran’s comment in which Andy poses this example, paraphrased slightly:

> data T a
> type family Equiv x y :: Bool
> type instance where
>   Equiv a a         = True       -- 1
>   Equiv (T b) (T c) = True       -- 2
>   Equiv (t d) (t e) = Equiv d e  -- 3
>   Equiv f g         = False      -- 4

Alas, this does not work as well as one would like.
The problem is that we cannot simplify, say, Equiv (T a) (T b) to True, because this matches the second branch but might later match the first branch. Simplifying this use would require coincident overlap checking within branched instances. We could move the first branch to the third position, and that would help, but not solve the problem. With that change, Equiv x x would not simplify until the head of x were known.

So, is this the “compelling example” we were looking for? Perhaps. Andy, why do you want this? Can your use case be achieved with other means? Do you (anyone out there) have a suggestion for how to deal with coincident overlap within branched instances in a simple, easy-to-explain manner?

## April 29, 2013

### Thomas M. DuBuisson

# The Jabberwolk has moved

I’ve moved my blog to tommd.github.io which currently includes my exploration of image processing as well as a run-down of commsec-keyexchange.

### wren ng thornton

# John C. Reynolds (1935–2013)

For those who haven't heard, John Reynolds (of separation logic fame) just passed on. I don't have anything particular to add to what's mentioned on LtU and Neel's blog, but I recommend reading them both.

### Brent Yorgey

# Monad transformers: a cautionary tale

When writing the code in my previous post, I wanted to have a monad which combined the ability to generate random numbers with the ability to fail. Naturally, I decided to use RandT Maybe. But when I tried to write a failing computation of this type, I got a type error:

    No instance for (MonadPlus (RandT StdGen Maybe))
      arising from a use of `mzero'

It seems that no one ever bothered to add a MonadPlus instance for RandT. Well, that’s easy to fix. Since RandT is just a newtype wrapper around StateT we can even derive a MonadPlus instance automatically using -XGeneralizedNewtypeDeriving. So I modified the MonadRandom package, and everything worked great.
…That is, everything worked great until I started to get some strange behavior—sometimes computations would hang when I expected them to complete quickly. I finally was able to boil it down to the following minimal example. foo succeeds or fails with equal probability; bar reruns foo until it succeeds.

    foo :: RandT StdGen Maybe ()
    foo = do
      r <- getRandomR (0,1)
      if r < 1/2 then return () else mzero

    bar :: RandT StdGen Maybe ()
    bar = foo `mplus` bar

Seems straightforward, right? bar should always succeed pretty much instantly, since there’s only a $1/2^n$ chance that it will have to call foo $n$ times. However, this is not what happens: some of the time bar returns instantly as expected, and some of the time it hangs in what seems like an infinite loop! What gives?

Have you figured it out yet? (If you like these sorts of puzzles you might want to stop and see if you can figure out what was going on.)

The problem is that the mplus operation for RandT StdGen Maybe runs both of its arguments with the same random seed! In other words, when a computation fails the generator state gets thrown away. And if we think about how monad transformers work this is actually not surprising. We have the following isomorphisms:

    RandT StdGen Maybe () == StateT StdGen Maybe ()
                          == StdGen -> Maybe ((), StdGen)

So when a computation fails you just get Nothing—in particular you don’t get to see what the new StdGen value would have been, so you can’t (say) pass it along to the second argument of mplus. The upshot is that bar succeeds if the first call to foo happens to succeed; otherwise it simply keeps calling foo with the exact same seed, and foo keeps failing every time. The general principle here is that “the effects of inner monad transformers take precedence over the effects of outer transformers”—in this case the failure effect of the inner Maybe takes precedence and causes the random generator state to be lost.
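The order-dependence is easy to demonstrate with only the transformers library and no randomness at all. In the sketch below, a simple counter stands in for the StdGen state, and `inner`, `outer`, and `bump` are names made up for this illustration:

```haskell
import Control.Applicative ((<|>))
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..), runMaybeT)
import Control.Monad.Trans.State

-- Failure inside:  s -> Maybe (a, s).
-- <|> restarts the second branch from the ORIGINAL state.
inner :: StateT Int Maybe Int
inner = (bump >> lift Nothing) <|> bump
  where bump = modify (+ 1) >> get

-- Failure outside:  s -> (Maybe a, s).
-- The state threaded through the failed branch survives.
outer :: MaybeT (State Int) Int
outer = (bump >> MaybeT (return Nothing)) <|> bump
  where bump = lift (modify (+ 1)) >> lift get

main :: IO ()
main = do
  print (runStateT inner 0)             -- Just (1,1): failed branch's tick lost
  print (evalState (runMaybeT outer) 0) -- Just 2: failed branch's tick kept
```

The counter ends at 1 in the first stack (the failed branch's increment is discarded, just as the StdGen is) but at 2 in the second.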
So what I really wanted was MaybeT (Rand StdGen), which—after adding a MonadRandom instance for MaybeT, now released as MonadRandom-0.1.9—works perfectly. The moral of the story: monad transformers aren’t (in general) commutative! Think carefully about what order you want. (I actually wrote about this once before; you’d think I would have learned my lesson.)

### Ken T Takusagawa

# [isjnkupe] Local types and imports

I have occasionally wanted this in Haskell:

    let { newtype T = ... } in ...

The compiler signals an error if any value of type T "escapes" the scope. Type synonym types are allowed to escape. Similarly, let { import ... }. The compiler signals an error if any imported type escapes the scope, unless it is also imported in the outer scope.

### Yesod Web Framework

# Persistent 1.2 is out, last call on Yesod 1.2

I'm happy to announce the release of Persistent 1.2, along with all related packages and backends. In other words, I'm announcing the release of:

• persistent-1.2.0
• persistent-mongoDB-1.2.0
• persistent-mysql-1.2.0
• persistent-postgresql-1.2.0
• persistent-sqlite-1.2.0
• persistent-template-1.2.0
• pool-conduit-0.1.2

You can see a detailed Persistent change list. I'm also announcing the last call for input for the Yesod 1.2 release. The Yesod team has decided to make this release on Thursday, May 2, unless there is new input from the community. If you want to play with it before release, please install from Github:

    cabal update
    git clone https://github.com/yesodweb/yesod
    cd yesod
    cabal-meta install

If you run into any problems, or have any questions, please raise them on the mailing list or the IRC channel. As a reminder, you can see the upcoming changes for Yesod 1.2 in the changelog and the detailed change list.

### Manuel M T Chakravarty

# Back to Tumblr

With the demise of Posterous (after it was swallowed by Twitter), my blog is back to Tumblr.
I have got plans for a more customised set up in the medium term, but currently I don’t have the time to set up blogging software. As the URL scheme of Posterous and Tumblr is different, this breaks all existing direct links to posts. Moreover, many of the old posts include a footer that points back to the defunct Posterous, and comments are also gone. I’m sorry for the mess.

## April 28, 2013

### Tim Docker

# Composible Value Editor Code

I’ve made the code for this library (previously described here) available via a repository on github:

https://github.com/timbod7/veditor

It’s still experimental, so I don’t intend to put it on hackage until I have (or someone else has) a dependency on it. The actual VE GADT in the source has an extra type parameter intended to let the generated UIs depend on context. Where this is not necessary, the ConstE type may be supplied. Hence, in the actual code the type VE ConstE a corresponds to VE a in the previous blog post.

### Antti-Juhani Kaijanaho (ibid)

# Looking for a copy of LLparser.hs

Does anybody have a copy of my LLParser.hs that once was available at http://antti-juhani.kaijanaho.fi/tmp/LLParser.hs? It looks like all my copies (except for an early version I used in one of my functional programming lectures) have gone the way of the dodo.

### LambdaCube

# Designing a custom kind system for rendering pipeline descriptions

We are in the process of creating the next iteration of LambdaCube, where we finally depart from the Haskell EDSL approach and turn the language into a proper DSL. The reasons behind this move were outlined in an earlier post. However, we still use the EDSL as a testing ground while designing the type system, since GHC comes with a rich set of features available for immediate use. The topic of today’s instalment is our recent experiment to make illegal types unrepresentable through a custom kind system. This is made possible by the fact that GHC recently introduced support for promoting datatypes to kinds.
Even though the relevant DataKinds extension is over a year old (although it’s been officially supported only since GHC 7.6), we couldn’t find any example of its use to model a real-life domain. Our first limited impression is that this is a direction worth pursuing.

It might sound surprising, but this idea was already brought up in the context of computer graphics in the Spark project. Tim Foley’s dissertation briefly discusses the type theory behind Spark (see Chapter 5 for details).

The basic idea is that we can introduce a separate kind called Frequency (Spark refers to this concept as RecordType), and constrain the Exp type constructor (@ in Spark) to take a type of this kind as its first argument. To define the new kind, all we need to do is write a plain data declaration, as opposed to the four empty data declarations we used to have:

    data Frequency = Obj | V | G | F

As we enable the DataKinds extension, this definition automatically creates a kind and four types of this kind. Now we can change the definition of Exp to take advantage of it:

    data Exp :: Frequency -> * -> * where ...

For the time being, we diverge from the Spark model. The difference is that the resulting type has kind * in our case, while in the context of Spark it would be labelled as RateQualifiedType. Unfortunately, Haskell doesn’t allow using non-* return kinds in data declarations, so we can’t test this idea with the current version. Since having a separate Exp universe is potentially useful, we might adopt the notion for the DSL proper.

We don’t have to stop here. There are a few more areas where we can sensibly constrain the possible types. For instance, primitive and fragment streams as well as framebuffers in LambdaCube have a type parameter that identifies the layer count, i.e. the number of framebuffer layers we can emit primitives to using geometry shaders. Instead of rolling a custom solution, now we can use type-level naturals with support for numeric literals.
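The promotion mechanism described above can be checked in a few lines of self-contained Haskell. The `Exp` below is a dummy wrapper standing in for LambdaCube's real expression type, and `Lit` and `vertexVal` are names invented for this sketch:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- DataKinds promotes Frequency to a kind and Obj, V, G, F to types
-- of that kind, so Exp can only ever be indexed by one of the four.
data Frequency = Obj | V | G | F

data Exp (freq :: Frequency) a where
  Lit :: a -> Exp freq a

-- Exp 'V Float kind-checks; something like Exp Int Float is now a
-- kind error rather than a silently accepted type.
vertexVal :: Exp 'V Float
vertexVal = Lit 1.5

main :: IO ()
main = case vertexVal of Lit x -> print x
```

The kind annotation on `freq` is what rules out the ill-formed applications; without DataKinds one would need four empty data declarations and a type class to express the same constraint.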
Among others, the definition of stream types and images reflects the new structure:

    data VertexStream prim t
    data PrimitiveStream prim (layerCount :: Nat) (stage :: Frequency) t
    data FragmentStream (layerCount :: Nat) t
    data Image (layerCount :: Nat) t where ...

Playing with kinds led to a little surprise when we looked into the texture subsystem. We had marker types called DIM1, DIM2, and DIM3, which were used for two purposes: to denote the dimension of primitive vectors and also the shape of equilateral textures. Both structures have distinct fourth options: 4-dimensional vectors and rectangle-shaped textures. While they are related – e.g. the texture shape implies the dimension of vectors used to address texels –, these are different concepts, and we consider it an abuse of the type system to let them share some cases. Now vector dimensions are represented as type-level naturals, and the TextureShape kind is used to classify the phantom types denoting the different options for texture shapes. It’s exactly like moving from an untyped language to a typed one.

But what did we achieve in the end? It looks like we could express the very same constraints with good old type classes. One crucial difference is that kinds defined as promoted data types are closed. Since LambdaCube tries to model a closed domain, we see this as an advantage. It also feels conceptually and notationally cleaner to express simple membership constraints with kinds than by shoving them in the context. However, the final decision about whether to use this approach has to wait until we have the DSL working.

## April 27, 2013

### Theory Lunch (Institute of Cybernetics, Tallinn)

# A taste of Curry

This Theory Lunch, I gave an introduction to the functional logic programming language Curry. You can find a write-up of my talk on my personal blog.

### Wolfgang Jeltsch

# A taste of Curry

Curry is a programming language that integrates functional and logic programming.
Last week, Denis Firsov and I had a look at Curry, and on Thursday I gave an introductory talk about Curry in the Theory Lunch. This blog post is mostly a write-up of my talk.

Like Haskell, Curry has support for literate programming. So I wrote this blog post as a literate Curry file, which is available for download. If you want to try out the code, you have to install the Curry system KiCS2. The code uses the functional patterns language extension, which is only supported by KiCS2, as far as I know.

## Functional programming

The functional fragment of Curry is very similar to Haskell. The only fundamental difference is that Curry does not support type classes.

Let us do some functional programming in Curry. First, we define a type whose values denote me and some of my relatives.

> data Person = Paul | Joachim | Rita | Wolfgang
>             | Veronika | Johanna | Jonathan | Jaromir

Now we define a function that yields the father of a given person if this father is covered by the Person type.

> father :: Person -> Person
> father Joachim  = Paul
> father Rita     = Joachim
> father Wolfgang = Joachim
> father Veronika = Joachim
> father Johanna  = Wolfgang
> father Jonathan = Wolfgang
> father Jaromir  = Wolfgang

Based on father, we define a function for computing grandfathers. To keep things simple, we only consider fathers of fathers to be grandfathers, not fathers of mothers.

> grandfather :: Person -> Person
> grandfather = father . father

## Combining functional and logic programming

Logic programming languages like Prolog are able to search for variable assignments that make a given proposition true. Curry, on the other hand, can search for variable assignments that make a certain expression defined.

For example, we can search for all persons that have a grandfather according to the above data. We just enter

    grandfather person where person free

at the KiCS2 prompt. KiCS2 then outputs all assignments to the person variable for which grandfather person is defined.
For each of these assignments, it additionally prints the result of the expression grandfather person.

## Nondeterminism

Functions in Curry can actually be non-deterministic, that is, they can return multiple results. For example, we can define a function element that returns any element of a given list. To achieve this, we use overlapping patterns in our function definition. If several equations of a function definition match a particular function application, Curry takes all of them, not only the first one, as Haskell does.

> element :: [el] -> el
> element (el : _)  = el
> element (_ : els) = element els

Now we can enter element "Hello!" at the KiCS2 prompt, and the system outputs six different results.

## Logic programming

We have already seen how to combine functional and logic programming with Curry. Now we want to do pure logic programming. This means that we only want to search for variable assignments, but are not interested in expression results. If you are not interested in results, you typically use a result type with only a single value. Curry provides the type Success with the single value success for doing logic programming.

Let us write some example code about routes between countries. We first introduce a type of some European and American countries.

> data Country = Canada | Estonia | Germany | Latvia | Lithuania
>              | Mexico | Poland | Russia | USA

Now we want to define a relation called borders that tells us which country borders which other country. We implement this relation as a function of type Country -> Country -> Success that has the trivial result success if the first country borders the second one, and has no result otherwise. Note that this approach of implementing a relation is different from what we do in functional programming. In functional programming, we use Bool as the result type and signal falsity by the result False. In Curry, however, we signal falsity by the absence of a result.
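For Haskell readers, Curry's absence-of-result semantics can be crudely emulated with the list monad: a relation returns [()] for success and [] for failure. The sketch below uses the border facts from this post, but the helper names (`reach`, `isConnected` as a Bool function) and the bounded enumeration replacing Curry's free variables are inventions of this illustration, not how KiCS2 actually evaluates:

```haskell
-- Success becomes [()]; the empty list models "no result".
data Country = Canada | Estonia | Germany | Latvia | Lithuania
             | Mexico | Poland | Russia | USA
  deriving (Eq, Show, Enum, Bounded)

type Success = [()]

success :: Success
success = [()]

borders :: Country -> Country -> Success
borders Canada    USA       = success
borders Estonia   Latvia    = success
borders Estonia   Russia    = success
borders Germany   Poland    = success
borders Latvia    Lithuania = success
borders Latvia    Russia    = success
borders Lithuania Poland    = success
borders Mexico    USA       = success
borders _         _         = []

symClosure :: (Country -> Country -> Success) -> Country -> Country -> Success
symClosure rel a b = take 1 (rel a b ++ rel b a)

-- Reachability with a depth bound in place of Curry's logic variable.
reach :: Int -> Country -> Country -> Success
reach 0 _ _ = []
reach n a c
  | a == c    = success
  | otherwise = take 1 [ () | b <- [minBound .. maxBound]
                            , _ <- symClosure borders a b
                            , _ <- reach (n - 1) b c ]

isConnected :: Country -> Country -> Bool
isConnected a b =
  not (null (reach (length [minBound .. maxBound :: Country]) a b))

main :: IO ()
main = print [ c | c <- [minBound .. maxBound], Estonia `isConnected` c ]
```

Running main lists the countries reachable from Estonia over land according to these facts; the American component (Canada, Mexico, USA) is absent.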
Our borders relation only relates countries with those neighbouring countries whose names come later in alphabetical order. We will soon compute the symmetric closure of borders to also get the opposite relationships.

borders :: Country -> Country -> Success
Canada    `borders` USA       = success
Estonia   `borders` Latvia    = success
Estonia   `borders` Russia    = success
Germany   `borders` Poland    = success
Latvia    `borders` Lithuania = success
Latvia    `borders` Russia    = success
Lithuania `borders` Poland    = success
Mexico    `borders` USA       = success

Now we want to define a relation isConnected that tells whether two countries can be reached from each other via a land route. Clearly, isConnected is the equivalence relation that is generated by borders. In Prolog, we would write clauses that directly express this relationship between borders and isConnected. In Curry, on the other hand, we can write a function that generates an equivalence relation from any given relation, and therefore does not only work with borders.

We first define a type alias Relation for the sake of convenience.

type Relation val = val -> val -> Success

Now we define what reflexive, symmetric, and transitive closures are.

reflClosure :: Relation val -> Relation val
reflClosure rel val1 val2 = rel val1 val2
reflClosure rel val  val  = success

symClosure :: Relation val -> Relation val
symClosure rel val1 val2 = rel val1 val2
symClosure rel val2 val1 = rel val1 val2

transClosure :: Relation val -> Relation val
transClosure rel val1 val2 = rel val1 val2
transClosure rel val1 val3 = rel val1 val2 & transClosure rel val2 val3
    where val2 free

The operator & used in the definition of transClosure has type Success -> Success -> Success and denotes conjunction.

We define the function for generating equivalence relations as a composition of the above closure operators.
Note that it is crucial that the transitive closure operator is applied after the symmetric closure operator, since the symmetric closure of a transitive relation is not necessarily transitive.

equivalence :: Relation val -> Relation val
equivalence = reflClosure . transClosure . symClosure

The implementation of isConnected is now trivial.

isConnected :: Country -> Country -> Success
isConnected = equivalence borders

Now we let KiCS2 compute which countries I can reach from Estonia without a ship or plane. We do so by entering

Estonia `isConnected` country where country free

at the prompt.

We can also implement a nondeterministic function that turns a country into the countries connected to it. For this, we use a guard that is of type Success. Such a guard succeeds if it has a result at all, which can only be success, of course.

connected :: Country -> Country
connected country1 | country1 `isConnected` country2 = country2
    where country2 free

## Equational constraints

Curry has a predefined operator

(=:=) :: val -> val -> Success

that stands for equality. We can use this operator, for example, to define a nondeterministic function that yields the grandchildren of a given person. Again, we keep things simple by only considering relationships that solely go via fathers.

grandchild :: Person -> Person
grandchild person | grandfather grandkid =:= person = grandkid
    where grandkid free

Note that grandchild is the inverse of grandfather.

## Functional patterns

Functional patterns are a language extension that allows us to use ordinary functions in patterns, not just data constructors. Functional patterns are implemented by KiCS2. Let us look at an example again. We want to define a function split that nondeterministically splits a list into two parts.1 Without functional patterns, we can implement splitting as follows.
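For comparison, a Haskell approximation of these Success-valued relations represents a relation extensionally as a list of edges, with an empty list of answers standing in for failure. The sketch below is my own illustration (all names are mine); it computes connectivity as reachability over the symmetric closure, with the reflexive part implicit in the starting point:

```haskell
-- A relation as an explicit list of edges; no answers means failure.
type Graph a = [(a, a)]

-- All values related to x (the analogue of "x `rel` y where y free").
related :: Eq a => Graph a -> a -> [a]
related g x = [y | (x', y) <- g, x' == x]

-- Symmetric closure: add every edge in the opposite direction.
symmetric :: Graph a -> Graph a
symmetric g = g ++ [(y, x) | (x, y) <- g]

-- Reflexive-transitive closure as reachability, with a visited list
-- to cut off the cycles that Curry's search strategy copes with.
reachable :: Eq a => Graph a -> a -> [a]
reachable g start = go [start] []
  where
    go []       visited = reverse visited
    go (x : xs) visited
      | x `elem` visited = go xs visited
      | otherwise        = go (xs ++ related g x) (x : visited)
```

With the borders data encoded as string pairs, reachable (symmetric bordersG) "Estonia" yields Estonia, Latvia, Russia, Lithuania, Poland, and Germany, mirroring the KiCS2 query for isConnected.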
split' :: [el] -> ([el],[el])
split' list | front ++ rear =:= list = (front,rear)
    where front, rear free

With functional patterns, we can implement splitting in a much simpler way.

split :: [el] -> ([el],[el])
split (front ++ rear) = (front,rear)

As a second example, let us define a function sublist that yields the sublists of a given list.

sublist :: [el] -> [el]
sublist (_ ++ sub ++ _) = sub

## Inverting functions

In the grandchild example, we showed how we can define the inverse of a particular function. We can go further and implement a generic function inversion operator.

inverse :: (val -> val') -> (val' -> val)
inverse fun val' | fun val =:= val' = val
    where val free

With this operator, we could also implement grandchild as inverse grandfather.

Inverting functions can make our lives a lot easier. Consider the example of parsing. A parser takes a string and returns a syntax tree. Writing a parser directly is a non-trivial task. However, generating a string from a syntax tree is just a simple functional programming exercise. So we can implement a parser in a simple way by writing a converter from syntax trees to strings and inverting it.

We show this for the language of all arithmetic expressions that can be built from addition, multiplication, and integer constants. We first define types for representing abstract syntax trees. These types resemble a grammar that takes precedence into account.

type Expr = Sum

data Sum     = Sum Product [Product]
data Product = Product Atom [Atom]
data Atom    = Num Int | Para Sum

Now we implement the conversion from abstract syntax trees to strings.

toString :: Expr -> String
toString = sumToString

sumToString :: Sum -> String
sumToString (Sum product products)
    = productToString product ++
      concatMap ((" + " ++) . productToString) products

productToString :: Product -> String
productToString (Product atom atoms)
    = atomToString atom ++
      concatMap ((" * " ++) . atomToString) atoms

atomToString :: Atom -> String
atomToString (Num num)  = show num
atomToString (Para sum) = "(" ++ sumToString sum ++ ")"

Implementing the parser is now extremely simple.

parse :: String -> Expr
parse = inverse toString

KiCS2 uses a depth-first search strategy by default. However, our parser implementation does not work with depth-first search. So we switch to breadth-first search by entering

:set bfs

at the KiCS2 prompt. Now we can try out the parser by entering parse "2 * (3 + 4)".

1. Note that our split function is not the same as the split function in Curry’s List module.

Tagged: breadth-first search, Curry, Denis Firsov, depth-first search, functional logic programming, functional pattern, functional programming, KiCS2, literate programming, logic programming, parsing, Prolog, Theory Lunch, TTÜ Küberneetika Instituut, type class

### Leon P Smith

# Announcing postgresql-simple 0.3.1

## Array types

Postgresql-simple has been progressing since my announcement of version 0.1 nearly a year ago. Since then there have been many changes by myself and contributors, some of which will break your code, with or without compilation errors. So this post will attempt to highlight some of the bigger changes.

Probably the most exciting recent change is the new support for PostgreSQL’s array types, largely due to work from Jason Dusek, Bas van Dijk, and myself.
Here are two examples of tables and queries that make use of this functionality:

CREATE TABLE assocs
  ( id   INT NOT NULL
  , link INT NOT NULL
  );

CREATE TABLE matrices
  ( id     INT NOT NULL
  , matrix FLOAT8[][]
  );

import qualified Data.Vector as V
import Database.PostgreSQL.Simple
import Data.Int (Int64)

getAssocs :: Connection -> IO [(Int, V.Vector Int)]
getAssocs conn =
    query_ conn "SELECT id, ARRAY_AGG(link) FROM assocs GROUP BY id"

insertMatrix :: Connection -> Int -> V.Vector (V.Vector Double) -> IO Int64
insertMatrix conn id matrix =
    execute conn "INSERT INTO matrices (id, matrix) VALUES (?,?)" (id, matrix)

## TypeInfo Changes

In order to properly support the FromField a => FromField (Vector a) instance, the TypeInfo system was overhauled. Previously, the only information it tracked was a mapping from type OIDs to type names, computed by first consulting a static table, then a per-connection cache, and finally querying the pg_type metatable. This was useful for writing FromField instances for PostgreSQL types that do not have a stable OID, such as those provided by an extension, like the period type from Temporal Postgres or the hstore type from the contributed modules bundled with PostgreSQL.

However, proper array support required more information, especially the type of the array’s elements. This information is now available in an easily extended data structure in the new Database.PostgreSQL.Simple.TypeInfo module. This was introduced in 0.3; 0.3.1 added support for range and composite types. However, there is not yet any FromField instance that makes use of this information or deals with these types.

## IO at Conversion Time

Version 0.3 also stopped pre-computing the type name of every column and storing these in a vector before converting the results, by allowing a restricted set of IO actions at conversion time.
This is a win because in the common case the type name is never consulted; for example, almost all of the out-of-box FromField instances examine the type OID alone. Also, since IO information no longer has to be computed before conversion takes place, it becomes practical to use IO information that would rarely be needed in normal circumstances, such as turning table OIDs into table names when errors are encountered. It’s possible to extend the IO actions available to FromField and FromRow instances by accessing the data constructor of the Conversion monad via the Database.PostgreSQL.Simple.Internal module.

This required changing the type of the FromField.typename operator, which will break your FromField instances that use it. It also required small changes to the FromField and FromRow interfaces, which have a chance of breaking some of your FromField and FromRow instances if they don’t strictly use the abstract Applicative and/or Monad interfaces. However, all of this breakage should be obvious; if your code compiles, it should work with the new interface.

## HStore support

Version 0.3.1 also introduced out-of-box support for hstore. The hstore type provides key-value maps from textual strings to textual strings. Conversions to/from lists and Data.Map are provided, while conversions from other Haskell types can easily be implemented via the HStoreBuilder interface (similar to my own json-builder package), and conversions to other Haskell types can easily be implemented via the conversion to lists.

CREATE TABLE hstore_example
  ( id  INT NOT NULL
  , map hstore
  );

insertHStore :: Connection -> Int -> [(Text,Text)] -> IO Int64
insertHStore conn id map =
    execute conn "INSERT INTO hstore_example (id,map) VALUES (?,?)"
            (id, HStoreList map)

retrieveHStore :: Connection -> Int -> IO (Maybe [(Text,Text)])
retrieveHStore conn id = do
    xs <- query conn "SELECT map FROM hstore_example WHERE id = ?" (Only id)
    case xs of
      []                        -> return Nothing
      (Only (HStoreList val):_) -> return (Just val)

## Better Error Messages

Jeff Chu and Leonid Onokhov have improved both error messages and error-handling options in the latest version. Thanks to Jeff, the ResultError exception now includes the column name and associated table OID (if any) of the column from which the value was taken. And Leonid has contributed a new Errors module that can be used to dissect SqlError values in greater detail.

## Better Time Conversions

And in older news, version 0.1.4 debuted brand new time parsers and printers for the ISO-8601 syntax flavor that PostgreSQL emits, included FromField instances for LocalTime, and introduced new datatypes for dealing with PostgreSQL’s time infinities. Among other things, the new parsers correctly handle timestamps with UTC offsets of a whole number of minutes, which means (for example) that postgresql-simple now works in India. Version 0.2 removed the conversion from timestamp (without time zone) to the UTCTime and ZonedTime types, due to the inherent ambiguity that conversion represents; LocalTime is now the preferred way of handling timestamps (without time zones).

## April 25, 2013

### Johan Tibell

# More haskell.org GSoC ideas

Here are another two haskell.org GSoC ideas.

### Faster ghc -c building

The main obstacle to implementing completely parallel builds in Cabal is that calling ghc --make on a bunch of modules is much faster than calling ghc -c on each module. Today, Cabal builds modules by passing them to ghc --make, but we'd like to create the module dependency graph ourselves and then call ghc -c on each module, in parallel. However, since ghc -c is so much slower than ghc --make, the user needs 2 (and sometimes even more) cores for parallel builds to pay off. If we could make ghc -c faster, we could write a parallel build system that actually gives good speed-ups on typical developer hardware.
The project would involve profiling ghc -c to figure out where the time is spent and then trying to improve performance. One possible source of inefficiency is reparsing all .hi files every time ghc -c is run. Preferably the student should be at least vaguely familiar with the GHC source code.

### Cabal dependency solver improvements

There's one shortcoming in the Cabal package dependency solver that is starting to bite more and more often, namely not treating the components in a .cabal file (i.e. libraries, executables, tests, and benchmarks) as separate entities for the purpose of dependency resolution. In practice this means that for many core libraries this fails:

cabal install --only-dependencies --enable-tests

but this succeeds:

cabal install <manually compiled list of test dependencies>
cabal install --only-dependencies

The reason is that the Library defined in the package is a dependency of the test framework (i.e. the test-framework package), creating a dependency cycle involving the library itself and the test framework. However, if the test dependency is expressed as:

library foo
  ...

test-suite my-tests
  -- No direct dependency on the library:
  hs-source-dirs: . tests

the dependency solver could find a solution, as the test suite no longer depends on the library, but it doesn't today. The project would involve fixing the solver to treat each component separately (i.e. as if it were a separate package) for the purpose of dependency resolution. For an example of this problem see the hashable package's .cabal file. In this case the dependency cycle involves hashable and Criterion.

## April 24, 2013

### Well-Typed.Com

# Haskell training in San Francisco and New York

We are offering Haskell courses in San Francisco and New York in early June. They are for professional developers who want to learn Haskell or improve their skills. There is a 2-day introductory course and a 2-day advanced course.
## Locations, dates and registration

Well-Typed are running these courses in partnership with FP Complete and Skills Matter.

### San Francisco Bay Area

• Introductory Course: June 4-5th, 2013
• Advanced Course: June 6-7th, 2013
• $1300 for each course, with 15% discount if you book both

Register via FP Complete.

### New York

• Introductory Course: June 10-11th, 2013
• Advanced Course: June 12-13th, 2013
• \$1495 for each course, with 15% discount before April 29th

Register via Skills Matter: introductory course, or advanced course.

# The carbon dioxide footprint of Debian's Haskell packages

By now, Debian ships quite a lot of Haskell packages (~600). Because of GHC's ABI volatility, whenever we upload a new version of a library, we have to rebuild all libraries that depend on it. In particular, if we upload a new version of the compiler itself, we have to rebuild all Haskell library packages. So we have to rebuild stuff a lot. Luckily, Debian has a decent autobuilding setup, so I just need to tell it what to rebuild, and the rest happens automatically (including figuring out the actual order in which to build things).

I was curious how heavily the Haskell packages use the buildd system compared to other packages, and also how long the builders are busy building Haskell packages. All the data is in a PostgreSQL database on buildd.debian.org, so with some Python and JavaScript code, I can visualize it. The first graph shows the number of all uploads by autobuilders on the amd64 architecture, with Haskell uploads specially marked, and the second graph does the same for the build time. You can select time ranges and get aggregate statistics for that time span.

During the last four days a complete rebuild was happening, due to the upload of GHC 7.6.3. During these 2 days and 18 hours, building 537 packages took 48 hours of build time and produced 15 kg of CO2. That is 94% of all uploads and 91% of the total build time. The numbers are lower for the whole of last year: 52% of uploads, 31% of build time, and 57 kg of CO2. (The CO2 numbers are very rough estimates.)
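A rough estimate of this kind can be reproduced with a back-of-the-envelope calculation. The power draw and grid carbon intensity below are my own illustrative assumptions, not figures from the post:

```haskell
-- Back-of-the-envelope CO2 estimate for the rebuild.
-- Assumed: a busy builder draws about 600 W on average, and the
-- electricity mix emits about 0.5 kg CO2 per kWh. Both numbers are
-- guesses for illustration only.
buildHours :: Double
buildHours = 48        -- total build time from the post

powerKW :: Double
powerKW = 0.6          -- assumed average draw of a busy builder

gridKgPerKWh :: Double
gridKgPerKWh = 0.5     -- assumed carbon intensity of the grid

co2Kg :: Double
co2Kg = buildHours * powerKW * gridKgPerKWh
```

With these assumptions the result is about 14.4 kg, i.e. on the order of the 15 kg quoted above; varying the assumed wattage and grid mix easily moves the estimate by a factor of two in either direction, which is why such numbers are only rough.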

Note that amd64 is a bit special, as most packages on this architecture are uploaded by the developers themselves, so no automatic builds happen for them. On the other architectures, every upload of an (arch:any) package is built, so the share of Haskell packages will be lower there. Unfortunately, at the moment the database does not provide me with a table across all architectures (and I was too lazy to make the visualization configurable so far).

# The Lambda Papers

I’m reading about functional programming as much as I can. Aside from staying up to date with the latest publications, I also spend some time reading the older papers. Among the classics are true pearls: John Backus’ “Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs” took hours to read but was an eye-opener for me (I think this was the first paper on FP I ever read), and Phil Wadler’s “Comprehending monads” and “The essence of functional programming” are a beautiful demonstration of the power and elegance of monads. But today I want to write about The Lambda Papers – a series of approximately 10 papers (mostly AI Lab Memos) written by Guy Steele and Gerald Sussman between 1975 and 1980. I originally heard about them while browsing Lambda The Ultimate (which, by the way, takes its name from these papers).

The Lambda Papers revolve around the programming language Scheme, a simple dialect of Lisp developed by Steele and Sussman at MIT in the seventies. So far I have read two of these papers: “Lambda: The ultimate imperative” and “Lambda: The ultimate declarative”. The first one discusses the implementation of programming constructs known from imperative languages – for example GOTOs, blocks, assignments, loops, and exceptions – in Scheme. The authors demonstrate that lambda expressions, that is, anonymous functions, suffice to implement all these imperative constructs. Moreover, they show that using lambdas we can easily implement different scoping strategies (dynamic vs. static) and calling conventions (call by name vs. call by value vs. call by need). “Lambda: The ultimate declarative” continues the discussion of GOTO expressions started in “Lambda: The ultimate imperative”. The main focus is placed on the relation between GOTOs, function calls, and operations that modify the environment (a function call is one of them; assignments are another). The paper provides some really deep insight into the mechanisms of compiling function calls. For example, the authors demonstrate that in compiled Scheme code it is not necessary to make any function calls: GOTOs (unconditional jumps) are all that is needed. There is also an interesting discussion on defining data types using closures, and on named variables vs. temporaries (unnamed variables generated by the compiler). The paper concludes with the thought that the expressive power of Lisp makes it a good candidate for an intermediate language used in compilers.
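The flavor of “Lambda: The ultimate imperative” can be hinted at in Haskell: an imperative while loop over mutable variables becomes an ordinary recursive function that threads an explicit state value, and the “jump back to the top of the loop” is just a tail call. This little encoding is my own illustration, not code from the papers:

```haskell
-- A while "statement" as a plain higher-order function: the loop body
-- transforms the state, and looping is named recursion (a tail call
-- standing in for a GOTO back to the loop head).
while :: (s -> Bool) -> (s -> s) -> s -> s
while cond body s
  | cond s    = while cond body (body s)  -- "jump" back to the test
  | otherwise = s                         -- fall through

-- Imperative pseudocode: i := 0; acc := 0; while (i < 10) { acc += i; i++ }
sumBelowTen :: Int
sumBelowTen = snd (while (\(i, _)   -> i < 10)
                         (\(i, acc) -> (i + 1, acc + i))
                         (0, 0))
```

Here the pair (i, acc) plays the role of the two mutable variables, and assignment is replaced by constructing the next state; this is essentially the state-threading view of imperative control flow that Steele and Sussman develop in Scheme.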

As I already mentioned, I consider these to be among the best papers I have read. I am truly impressed by the amount of knowledge squeezed into these 90 pages. Probably the most surprising thing is that, almost 40 years after their original publication, the Lambda Papers remain relevant. It is interesting to see how some ideas that are mentioned only briefly in these papers have since evolved into really important concepts. For example, in “Lambda: The ultimate declarative” Steele mentions the idea of annotating functions with information about their possible side effects, which is what Haskell’s type system does to separate pure and impure code. I have also found it interesting to learn a bit about the history of computer science and trace the origin of concepts like thunks and continuations.

That’s only the outline of the Lambda Papers. They provide a lot more insightful information and I strongly encourage anyone interested in programming languages to read these two publications. They are not too difficult – if you know Scheme you’ll be able to grasp them.