Planet Haskell

November 17, 2018

Mark Jason Dominus

How do you make a stella octangula?

Yesterday Katara asked me out of nowhere “When you make a stella octangula, do you build it up from an octahedron or a tetrahedron?” Stella octangula was her favorite polyhedron eight years ago.

“Uh,” I said. “Both?”

Then she had to make one to see what I meant. You can start with a regular octahedron:

[Image: a regular octahedron made of six steel ball bearings and twelve blue and green magnetic struts]

Then you erect spikes onto four of the octahedron's faces; this produces a regular tetrahedron:

[Image: a very similar octahedron, this time with an orange tripod attached to four of its eight faces, forming an orange tetrahedron with a blue and green octahedron embedded in it]

Then you erect spikes onto the other four of the octahedron's faces. Now you have a stella octangula.

[Image: the octahedron from before, but with red tripods attached to its other four faces, making eight tripods in all. The final result looks like interpenetrating red and orange tetrahedra, with the original octahedron in their intersection]

So yeah, both. Or instead of starting with a unit octahedron and erecting eight spikes of size 1, you can start with a unit tetrahedron and erect four spikes of size ½. It's both at once.

by Mark Dominus (mjd@plover.com) at November 17, 2018 06:33 AM

November 15, 2018

Holden Karau

Validating Big Data Jobs - Stopping Failures before Production (w/ Spark, BEAM, & friends!) @ Big Data Spain

Thanks for joining me on 2018-11-14 at Big Data Spain 2018 Madrid, Spain for Validating Big Data Jobs - Stopping Failures before Production (w/ Spark, BEAM, & friends!).

The slides are at http://bit.ly/2S1y3HJ.

Comment below to join in the discussion :). Talk feedback is appreciated at http://bit.ly/holdenTalkFeedback

by Holden Karau (noreply@blogger.com) at November 15, 2018 05:31 PM

Big Data w/Python on Kubernetes (PySpark on K8s) @ Big Data Spain

Thanks for joining me on 2018-11-15 for Big Data w/Python on Kubernetes (PySpark on K8s).

The slides are at http://bit.ly/2RWsxpA.

And some related links: recorded demos, setup livestream. Comment below to join in the discussion :). Talk feedback is appreciated at http://bit.ly/holdenTalkFeedback

by Holden Karau (noreply@blogger.com) at November 15, 2018 05:31 PM

November 14, 2018

Holden Karau

A Basic Introduction to PySpark Dataframes by exploring ASF Gender Diversity Data - workshop @ @pyconca

Thanks for joining us (@holdenkarau, @math_foo) on 2018-11-10 at @pyconca 2018 Toronto, ON, Canada for A Basic Introduction to PySpark Dataframes by exploring ASF Gender Diversity Data - workshop.

You can find the code for this talk at https://github.com/holdenk/diversity-analytics/. The slides are at http://bit.ly/2RNhG1g. There is a related video you might want to check out.

The video is at https://www.youtube.com/embed/lXrgN9-9jV8&t=12s. Comment below to join in the discussion :). Talk feedback is appreciated at http://bit.ly/holdenTalkFeedback

by Holden Karau (noreply@blogger.com) at November 14, 2018 10:53 AM

Michael Snoyman

Crates and more iterators - Rust Crash Course lesson 4 - exercise solutions

Below are the solutions to the exercises from the last Rust Crash Course lesson, “Crates, I/O, and more iterators.”

This post is part of a series based on teaching Rust at FP Complete. If you’re reading this post outside of the blog, you can find links to all posts in the series at the top of the introduction post. You can also subscribe to the RSS feed.

Exercise 1

You can find my complete solution as a Github Gist. If your solution looks a bit different than mine, don’t worry. Also, see if there’s anything interesting you can learn from my implementation, or some improvements you’d like to make to it.

Exercise 2

struct TheAnswer;

impl Iterator for TheAnswer {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        Some(42)
    }
}

Exercise 3

Let’s start with the simpler solution:

struct Fibs {
    x: u32,
    y: u32,
}

fn fibs() -> Fibs {
    Fibs {
        x: 0,
        y: 1,
    }
}

impl Iterator for Fibs {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        let orig_x = self.x;
        let orig_y = self.y;

        self.x = orig_y;
        self.y = orig_x + orig_y;

        Some(orig_x)
    }
}

fn main() {
    for i in fibs().take(10) {
        println!("{}", i);
    }
}

However, if you bump that take(10) to take(47), the end of your output will look like:

701408733
1134903170
thread 'main' panicked at 'attempt to add with overflow', foo.rs:21:18
note: Run with `RUST_BACKTRACE=1` for a backtrace.

One solution would be to bump to a u64, but that’s just delaying the problem. Instead, we can use Rust’s checked addition method:

fn next(&mut self) -> Option<u32> {
    let orig_x = self.x;
    let orig_y = self.y;

    match orig_x.checked_add(orig_y) {
        // overflow
        None => None,

        // no overflow
        Some(new_y) => {
            self.x = orig_y;
            self.y = new_y;

            Some(orig_x)
        }
    }
}

Now our stream will stop as soon as overflow occurs.

If you want to get really advanced here, you could actually output two more values. To do so, we need to assign to a dereferenced value and use an enum to track our state:
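For the next method below to compile, the Fibs type itself must be redefined as an enum that carries the iterator state; here's one possible sketch (the original solution gist may differ in details):

enum Fibs {
    Done,                 // nothing left to yield
    OneLeft(u32),         // one final value remains
    Running(u32, u32),    // the usual (x, y) state
}

fn fibs() -> Fibs {
    Fibs::Running(0, 1)
}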

fn next(&mut self) -> Option<u32> {
    use Fibs::*;
    match *self {
        Done => None,
        OneLeft(x) => {
            *self = Done;
            Some(x)
        }
        Running(orig_x, orig_y) => {
            *self = match orig_x.checked_add(orig_y) {
                // overflow
                None => OneLeft(orig_y),
                Some(new_y) => Running(orig_y, new_y),
            };

            Some(orig_x)
        }
    }
}

Exercise 4

impl<I> Iterator for Doubler<I>
    where
    I: Iterator,
    I::Item: std::ops::Add<Output=I::Item> + Copy,
{
    type Item = I::Item;
    fn next(&mut self) -> Option<Self::Item> {
        match self.iter.next() {
            None => None,
            Some(x) => Some(x + x),
        }
    }
}

Exercise 5

The fold method takes two parameters: the initial value, and a function for combining the running total with the next value. One approach using closures is:

fn main() {
    let res = (1..11).fold(0, |x, y| x + y);
    println!("{}", res);
}

Another approach is to directly refer to the addition function. Remember how there was a Mul trait for the * operator? There’s also an Add trait for addition:

fn main() {
    let res = (1..11).fold(0, std::ops::Add::add);
    println!("{}", res);
}

As for writing our own sum function: we’ll end up back in the situation where things are generic and we have to provide appropriate traits. We’ll follow a similar approach, using From and a u8:

fn sum<I>(iter: I) -> I::Item
    where
    I: Iterator,
    I::Item: std::ops::Add<Output=I::Item> + From<u8>,
{
    iter.fold(From::from(0u8), std::ops::Add::add)
}
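As a quick sanity check, here's a hypothetical main exercising this sum (u32 has the required Add and From<u8> implementations):

fn main() {
    // 1 + 2 + ... + 10 == 55
    println!("{}", sum(1u32..11));
}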

Rust at FP Complete | Introduction

November 14, 2018 04:00 AM

Mark Jason Dominus

Counting paths through polyhedra

A while back someone asked on math stack exchange how many paths there were of length $n$ from one vertex of a dodecahedron to the opposite vertex. The vertices are distance 5 apart, so for $n < 5$ the answer is zero, but the paths need not be simple, so the number grows rapidly with $n$; there are 58 million paths of length 19.

This is the kind of thing that the computer answers easily, so that's where I started, and unfortunately that's also where I finished, saying:

I'm still working out a combinatorial method of calculating the answer, and I may not be successful.

Another user reminded me of this and I took another whack at it. I couldn't remember what my idea had been last year, but my idea this time around was to think of the dodecahedron as the Cayley graph for a group, and then the paths are expressions that multiply out to a particular group element.

I started by looking at a tetrahedron instead of at a dodecahedron, to see how it would work out. Here's a tetrahedron.

Let's say we're counting paths from the center vertex to one of the others, say the one at the top. (Tetrahedra don't have opposite vertices, but that's not an important part of the problem.) A path is just a list of edges, and each edge is labeled with a letter a, b, or c. Since each vertex has exactly one edge with each label, every sequence of a's, b's, and c's represents a distinct path from the center to somewhere else, although not necessarily to the place we want to go. Which of these paths end at the top vertex?

The edge labeling I chose here lets us play a wonderful trick. First, since any edge has the same label at both ends, the path aa always ends at the same place it started, because the first a goes somewhere else and then the second a comes back again, and similarly bb and cc also go to the same place. So if we have a path that ends where we want, we can insert any number of pairs aa, bb, or cc and the new path will end at the same place.

But there's an even better trick available. For any starting point, and any letters x and y, the path xy always ends at the same place as yx. For example, if we start at the middle and follow edge a, then b, we end at the lower left; similarly if we follow edge b and then a we end at the same place, although by a different path.

Now suppose we want to find all the paths of length 7 from the middle to the top. Such a path is a sequence of a's, b's, and c's of length 7. Every such sequence specifies a different path out of the middle vertex, but how can we recognize which sequences end at the top vertex?

Since xy always goes to the same place as yx, the order of the seven letters doesn't matter. A complicated-seeming path like abacbcb must go to the same place as aabbbcc, the same path with the letters in alphabetical order. And since a pair like aa always goes back to where it came from, the path aabbbcc goes to the same place as b.

Since the paths we want are those that go to the same place as the trivial path c, we want paths that have an even number of a's and b's and an odd number of c's. Any path fitting that description will go to the same place as c, which is the top vertex. It's easy to enumerate such paths:

Prototypical path    How many?
ccccccc                      1
cccccaa                     21
cccccbb                     21
cccaaaa                     35
cccaabb                    210
cccbbbb                     35
caaaaaa                      7
caaaabb                    105
caabbbb                    105
cbbbbbb                      7
Total                      547

Here something like “cccbbbb” stands for all the paths that have three c's and four b's, in some order; there are $\binom{7}{3} = 35$ possible orders, so 35 paths of this type. If we wanted to consider paths of arbitrary length, we could use Burnside's lemma, but I consider the tetrahedron to have been sufficiently well solved by the observations above (we counted 547 paths by hand in under 60 seconds) and I don't want to belabor the point.

Okay! Easy-peasy!

Now let's try cubes:

Here we'll consider paths between two antipodal vertices in the upper right and the lower left, which I've colored much darker gray than the other six vertices.

The same magic happens as in the tetrahedron. No matter where we start, and no matter what letters x and y are, the path xy always gets us to the same place as yx. So again, if some complicated path gets us where we want to go, we can permute its components into any order and get a different path of the same length to the same place. For example, starting from the upper left, bcba, abcb, and abbc all go to the same place.

And again, because a pair like aa makes a trip along one edge and then back along the same edge, it never goes anywhere. So the three paths in the previous paragraph also go to the same place as ac and ca, and also aa bcba bb aa aa aa aa bb cc cc cc bb.

We want to count paths from one dark vertex to the other. Obviously abc is one such, and so too must be bac, cba, acb, and so forth. There are six paths of length 3.

To get paths of length 5, we must insert a pair of matching letters into one of the paths of length 3. Without loss of generality we can assume that we are inserting aa. There are 20 possible orders for aaabc, and three choices about which pair to insert, for a total of 60 paths.

To get paths of length 7, we must insert two pairs. If the two pairs are the same, there are $7!/5! = 42$ possible orders and 3 choices about which letters to insert, for a total of 126. If the two pairs are different, there are $7!/(3!\,3!) = 140$ possible orders and again 3 choices about which pairs to insert, for a total of 420, and a grand total of $126 + 420 = 546$ paths of length 7. Counting the paths of length 9 is almost as easy. For the general case, again we could use Burnside's lemma, or at this point we could look up the unusual sequence 6, 60, 546 in OEIS and find that the number of paths of odd length $n$ is already known to be $(3^n - 3)/4$.
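These counts are easy to double-check by machine: the number of walks of length $n$ between two vertices is the corresponding entry of the $n$-th power of the graph's adjacency matrix. A minimal Haskell sketch (the bit-flip vertex encoding is my own choice here, not anything from the original question):

    import Data.Bits (xor, popCount)

    -- vertices of the cube are 0..7; two vertices are adjacent
    -- exactly when their binary representations differ in one bit
    adj :: Int -> Int -> Integer
    adj a b = if popCount (a `xor` b) == 1 then 1 else 0

    -- walks of length n from vertex 0 to the antipodal vertex 7,
    -- computed by repeatedly applying the adjacency matrix
    walks :: Int -> Integer
    walks n = go n [1,0,0,0,0,0,0,0] !! 7
      where
        go 0 v = v
        go k v = go (k - 1) [ sum [ adj i j * (v !! j) | j <- [0..7] ] | i <- [0..7] ]

    main :: IO ()
    main = mapM_ (print . walks) [3, 5, 7]   -- prints 6, 60, 546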

So far this technique has worked undeservedly well. The original problem wanted to use it to study paths on a dodecahedron, where, unfortunately, the magic property doesn't hold. It is possible to label the edges of the dodecahedron so that every sequence of labels determines a unique path:

but there's nothing like xy = yx. Well, nothing exactly like it. xy = yx is equivalent to xyxy = 1, and here instead we have relations like xyxyz = 1, from walking around a pentagonal face. I'm not sure that helps. I will probably need another idea.

The method fails similarly for the octahedron — which is good, because I can use the octahedron as a test platform to try to figure out a new idea. On an octahedron we need to use four kinds of labels because each vertex has four edges emerging from it:

Here again we don't have xy = yx, but each triangular face does give a relation of the form xyz = 1. So it's possible that if I figure out a good way to enumerate paths on the octahedron I may be able to adapt the technique to the dodecahedron. But the octahedron will be much easier.

Viewed as groups, by the way, these path groups are all examples of Coxeter groups. I'm not sure this is actually a useful observation, but I've been wanting to learn about Coxeter groups for a long time and this might be a good enough excuse.

by Mark Dominus (mjd@plover.com) at November 14, 2018 02:47 AM

Sandy Maguire

Thinking with Types

After five months of work, I'm happy to announce that it's done.

Thinking with Types

November 14, 2018 12:17 AM

November 13, 2018

Mark Jason Dominus

A puzzle about representing numbers as a sum of 3-smooth numbers

I think this would be fun for a suitably-minded bright kid of maybe 12–15 years old.

Consider the following table of numbers of the form $2^a 3^b$:

1 3 9 27 81 243
2 6 18 54 162
4 12 36 108
8 24 72 216
16 48 144
32 96
64 192
128

Given a number $n$, is it possible to represent $n$ as a sum of entries from the table, with the following constraints:

  • No more than one entry from any column
  • An entry may only be used if it is in a strictly higher row than any entry to its left.

For example, one may not represent $23 = 2 + 12 + 9$, because the 12 is in a lower row than the 2 to its left.

1 3 9 27
2 6 18 54
4 12 36 108

But $23 = 8 + 6 + 9$ is acceptable, because 6 is higher than 8, and 9 is higher than 6.

1 3 9 27
2 6 18 54
4 12 36 108
8 24 72 216

Or, put another way: can we represent any number $n$ in the form $$n = \sum_i 2^{a_i}3^{b_i}$$ where the $a_i$ are strictly decreasing and the $b_i$ are strictly increasing?

Spoiler:

    maxpow3 1 = 1
    maxpow3 2 = 1
    maxpow3 n = 3 * maxpow3 (n `div` 3)

    rep :: Integer -> [Integer]
    rep 0 = []
    rep n = if even n
              then map (* 2) (rep (n `div` 2))
              else rep (n - mp3) ++ [mp3]
      where mp3 = maxpow3 n
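For example, rep 23 evaluates to [8,6,9], matching the valid representation of 23 shown above.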

Sadly, the representation is not unique. For example, $11 = 8 + 3$, and also $11 = 2 + 9$.

by Mark Dominus (mjd@plover.com) at November 13, 2018 03:03 PM

November 12, 2018

Michael Snoyman

Crates and more iterators - Rust Crash Course lesson 4

I’m getting bored with bouncy; this is our third week talking about this program. It’s time to finish this off! How do we do double buffering? It’s time to learn about external libraries in Rust, or crates.

This post is part of a series based on teaching Rust at FP Complete. If you’re reading this post outside of the blog, you can find links to all posts in the series at the top of the introduction post. You can also subscribe to the RSS feed.

We still haven’t fixed our double buffering problem; let’s finally do that. We’re also going to introduce some more error handling from the standard library. And then we’ll get to play a bit more with iterators as well.

Finding a crate

To do double buffering in the terminal, we’re going to want some kind of a curses library. We’re going to look for a crate on crates.io. On that page, search for “curses”. Looking through the download counts and the descriptions, pancurses looks nice, especially since it supports Unix and Windows. Let’s use it!

Side note: If you’re wondering, this is exactly the process I went through when writing up bouncy. I had no prior knowledge to tell me pancurses was going to be the right choice. Also, this program happens to be the first time I’ve ever used a curses library.

Starting a project

We’re going to create a new project for this using cargo new. We’re going to pull in the bouncy code from the end of week 2 (before the introduction of command line parsing), since as we’ll see later, we won’t need that parsing anymore.

$ cargo new bouncy4
$ cd bouncy4
$ curl https://gist.githubusercontent.com/snoyberg/5307d493750d7b48c1c5281961bc31d0/raw/8f467e87f69a197095bda096cbbb71d8d813b1d7/main.rs > src/main.rs
$ cargo run

The ball should be bouncing around your terminal, right? Great!

Adding the crate

The configuration for your project lives in the file Cargo.toml, known as the manifest. On my system, it looks like this:

[package]
name = "bouncy4"
version = "0.1.0"
authors = ["Michael Snoyman <michael@snoyman.com>"]

[dependencies]

It will look a bit different on your machine (unless your name is also Michael Snoyman). Notice that [dependencies] section at the end? That’s where we add information on external crates. If you go back to the pancurses page on crates.io, you’ll see:

pancurses = "0.16.0"

It may be different when you’re reading this. That’s telling us what we can add to the dependencies section to depend on the library. There are lots of details about how to specify dependencies, which we won’t get into here (and probably never will in the crash course, I’ve spent enough of my life discussing dependency management already :) ). You can read more in the Cargo book reference.

Anyway, go ahead and add pancurses = "0.16.0" to the end of your Cargo.toml and run cargo build. cargo should run off and do some updating, downloading, compiling, and end up with an executable that does exactly the same thing as it did before. Perfect, time to use it!

NOTE: It seems like, at the time of writing, pancurses does not build against the latest version of the ncurses package. If you get a build failure, you may need to also add ncurses = "= 5.94.0" to your dependencies to force Cargo to use an older, compatible version.

With the current version of Rust, you’ll also need to add the following to the top of src/main.rs:

extern crate pancurses;

This requirement is disappearing with Rust 2018. If you’d like to play around with that, you’ll need to switch to a nightly compiler and add edition = "2018" to your Cargo.toml. But we’re going to stick to the stable compiler and Rust 2015.
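For reference, the manifest for that experiment would look something like this (a sketch, reusing the package section from above):

[package]
name = "bouncy4"
version = "0.1.0"
authors = ["Michael Snoyman <michael@snoyman.com>"]
edition = "2018"   # requires a nightly compiler at the time of writing

[dependencies]
pancurses = "0.16.0"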

Library docs

On the crates.io page, there’s a link to the documentation for pancurses. Open that in a new tab. Also, reading down the crates page, we get a nice usage example. In main(), the first call is to initscr. Jump over to the API documentation and search for initscr, and you’ll end up at the initscr function.

Let’s try this out. Add the following line to the top of your main function and compile:

let window = pancurses::initscr();

Great! We’re going to use that returned Window struct to interact with pancurses. Go ahead and open its documentation and start browsing through.

Getting the frame size

We’re currently just assigning an arbitrary frame size. Ideally, we’d like to base it off of the actual terminal size. Window provides a method for this, get_max_yx.

First, a step for you, dear reader: modify the new method of Game to take a Frame as input. Then, let’s jump back to our main. We can capture the maximum y and x values with:

let (max_y, max_x) = window.get_max_yx();

And now let’s create a Frame value out of those.

let frame = Frame {
    width: max_x,
    height: max_y,
};

Then pass that value into Game::new(). This will only work if you already modified new to take an extra Frame argument; go back a few paragraphs if you missed that.

Challenge: Before you hit compile, can you figure out what error message we’re about to encounter?

When I run cargo build, I get the following error:

error[E0308]: mismatched types
   --> src/main.rs:109:16
    |
109 |         width: max_x,
    |                ^^^^^ expected u32, found i32

error[E0308]: mismatched types
   --> src/main.rs:110:17
    |
110 |         height: max_y,
    |                 ^^^^^ expected u32, found i32

If you remember, width and height are u32s, but pancurses gives us i32s for the maximum x and y values. How do we convert? One easy way to handle this is with as u32:

width: max_x as u32,
height: max_y as u32,

This is a totally unchecked cast. To demonstrate, try running this program:

fn main() {
    for i in -5i32..6i32 {
        println!("{} -> {}", i, i as u32)
    }
}

(Yes, there’s syntax in Rust for following a number with its type like that; it’s really nice.)

Which results in:

-5 -> 4294967291
-4 -> 4294967292
-3 -> 4294967293
-2 -> 4294967294
-1 -> 4294967295
0 -> 0
1 -> 1
2 -> 2
3 -> 3
4 -> 4
5 -> 5

We’re going to just accept this bad behavior for now. Don’t worry, it’ll get worse.

Carriage return and the border

If you run this program, you’ll get some really weird looking output. If you don’t believe me, go run it yourself now. I’ll wait.

The first problem is that our newlines aren’t working as expected. We previously used \n to create a newline. However, with curses enabled, this creates a line feed, moving the cursor down one row, without a carriage return, which would move the cursor back to the beginning of the line. Go ahead and replace the \n usages with \r\n.

That’s better, but there’s still a problem: the grid doesn’t fit in the terminal! That’s because we didn’t take the size of the border into account. Try subtracting 4 from the maximum x and y values and see if that fixes things. (Note: there’s a bit of a trick to where you put the - 4. Think about what value you’re trying to cast.)
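If you get stuck on that note, here’s one arrangement that works: do the subtraction while the values are still signed i32s, and only cast afterwards (a sketch of the Frame construction from before):

let frame = Frame {
    width: (max_x - 4) as u32,  // subtract first, then cast
    height: (max_y - 4) as u32,
};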

Double buffering

We still haven’t done double buffering, but we’re in a much better position to do so now! The trick is to replace our println! call in the main function’s loop with calls to window methods. If you want the challenge, go read through the docs and try to figure out how to make it work. One hint: you’ll need to revert the addition of the \r carriage returns we added above.

loop {
    window.clear(); // get rid of old content
    window.printw(game.to_string()); // write to the buffer
    window.refresh(); // update the screen
    game.step();
    std::thread::sleep(sleep_duration);
}

Hurrah, we have double buffering in place! We can finally be done with these bouncing balls.

Or can we?

Exercise 1

It’s time to get more comfortable with exploring API docs, and writing some Rust code with less training wheels. There are a few problems with our implementation:

  • We don’t need to generate an entire string for the whole grid. Instead, we can use some Window methods for moving the cursor around and adding characters. Use that instead.
  • Drawing the border as we have is harder than it should be. There’s a method on Window that will help significantly.
  • We have a number of assumptions about numbers, in particular that the maximum x and y are positive, and large enough to hold the ball’s starting position. Put in some sane limits: both values must be at least 10.
  • More advanced, but: pancurses has some built in support for input handling with timeouts. Instead of using std::thread::sleep, we can set a timeout on input handling and add that to our main loop. You can then also respond to two specific kinds of input:
    • Exit the program on a q press
    • Reset the game when the size of the window changes

More iterators!

Alright, I’m sufficiently bored with bouncing balls. Let’s talk about something far more interesting: streaming data. Personally I found it easiest to understand iterators by writing a few of them myself, so that’s where we’re going to start.

Let’s do some compiler-guided programming. We discussed previously that there’s an Iterator trait. So a fair bet is that we need to create a new data type and provide an implementation of the Iterator trait. Let’s start off with something simple: an iterator that produces no values at all.

struct Empty;

fn main() {
    for i in Empty {
        panic!("Wait, this shouldn't happen!");
    }
    println!("All done!");
}

panic!ing is a way to cause the current thread to exit due to an impossible situation. It’s kind of like runtime exceptions in other languages, except you can’t recover from them. They are only intended to be used for impossible situations.

OK, compile that and you’ll get a helpful error message:

error[E0277]: the trait bound `Empty: std::iter::Iterator` is not satisfied

Cool, let’s add an empty implementation (no pun intended):

impl Iterator for Empty {
}

More help from the compiler!

error[E0046]: not all trait items implemented, missing: `Item`, `next`
 --> foo.rs:3:1
  |
3 | impl Iterator for Empty {
  | ^^^^^^^^^^^^^^^^^^^^^^^ missing `Item`, `next` in implementation
  |
  = note: `Item` from trait: `type Item;`
  = note: `next` from trait: `fn(&mut Self) -> std::option::Option<<Self as std::iter::Iterator>::Item>`

So we need to provide two things: Item and next. next is a function, so we’ll get to that in a second. What about that type Item;? It’s what we call an associated type. It tells us what type of values will be produced by this iterator. Since we’re not producing anything, we can use any type here. I’ll use a u32:

type Item = u32;

Now we need to add in a next. Above, it gives the type of that function as:

fn(&mut Self) -> std::option::Option<<Self as std::iter::Iterator>::Item>

Let’s simplify this bit by bit. The type of the input is &mut Self. However, the correct syntax will be &mut self. Find that confusing? Remember that &mut self is a shortcut for self: &mut Self.

fn(&mut self) -> std::option::Option<<Self as std::iter::Iterator>::Item>

Next, we can remove the module qualifiers for Option and Iterator, since they’re already in our namespace:

fn(&mut self) -> Option<<Self as Iterator>::Item>

This Self as Iterator is interesting. It’s saying “take the current type, and look at its Iterator implementation.” The reason we care about specifying the implementation is because of what comes next: ::Item. What we’re really saying is “we want the Item associated type related to the Iterator implementation.” It’s possible that other traits will also have an associated type with the same name, so this is an unambiguous way to refer to it.
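Here’s a small illustrative example of that ambiguity (the trait and type names are invented for demonstration):

trait Container { type Item; }
trait Producer { type Item; }

struct Thing;
impl Container for Thing { type Item = u32; }
impl Producer for Thing { type Item = String; }

// Thing::Item alone would be ambiguous; the `as` syntax picks one
type A = <Thing as Container>::Item; // u32
type B = <Thing as Producer>::Item;  // String

fn main() {
    let a: A = 42;
    let b: B = String::from("hello");
    println!("{} {}", a, b);
}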

Anyway, let’s see if this type signature works. Include the name of the function and give it a dummy function body:

fn next(&mut self) -> Option<<Self as Iterator>::Item> {
    unimplemented!()
}

unimplemented!() is a macro that uses panic! under the surface, and is a convenient way to stub out implementations while under active development. If you compile this, it will succeed. Yay! Then it crashes at runtime due to the unimplemented!. Cool.

We can simplify the signature a bit by removing the as Iterator, which isn’t necessary:

fn next(&mut self) -> Option<Self::Item>

You can also replace the Self::Item directly with u32 if you want. The upside is, in this case, it’s shorter. The downside is that if you change the Item in the future, you’ll have to change it in two places. This is really a subjective point, your call.

Alright, let’s provide an implementation. We return an Option, which is an enum with two variants: None or Some. The former means “we don’t have anything,” the latter means “we have something.” Given that we’re implementing the empty iterator, returning None seems like the right move:

fn next(&mut self) -> Option<u32> {
    None
}

And just like that, we have our first Iterator implementation!

Exercise 2

Create an iterator that infinitely produces the number 42. Here’s a main function to test it out:

fn main() {
    // only take 10 to avoid looping forever
    for i in TheAnswer.take(10) {
        println!("The answer to life, the universe, and everything is {}", i);
    }
    println!("All done!");
}

Mutable state

The signature for next involves taking a mutable reference to self. Let’s use it! We’re going to create an iterator that counts from 1 to 10. (If you’re feeling brave, try to implement it yourself before reading my solution.)

struct OneToTen(u32);

fn one_to_ten() -> OneToTen {
    OneToTen(1)
}

impl Iterator for OneToTen {
    type Item = u32;

    fn next(&mut self) -> Option<u32> {
        if self.0 > 10 {
            None
        } else {
            let res = Some(self.0);
            self.0 += 1;
            res
        }
    }
}

fn main() {
    for i in one_to_ten() {
        println!("{}", i);
    }
}

Exercise 3: Create an iterator that produces the Fibonacci sequence. (Anyone who’s heard me bemoaning this exercise in the functional programming world should have a hearty chuckle right now.)

Iterator adapters

We can also write an iterator that will modify an existing iterator. Let’s write Doubler, which will double the values produced by a previous iterator. In order to make this work, we’re going to capture the underlying iterator inside our new data type, which will also necessitate parameterizing our Doubler data type on the contained iterator:

struct Doubler<I> {
    iter: I,
}

Let’s throw in a main function to show how this is used:

fn main() {
    let orig_iter = 1..11; // array indices start at 1
    let doubled_iter = Doubler {
        iter: orig_iter,
    };
    for i in doubled_iter {
        println!("{}", i);
    }
}

Great. If we compile this, we get an error about the missing Iterator implementation. Let’s try to write something for this:

impl Iterator for Doubler {
}

When we compile this, we get the error message:

error[E0107]: wrong number of type arguments: expected 1, found 0
 --> foo.rs:5:19
  |
5 | impl Iterator for Doubler {
  |                   ^^^^^^^ expected 1 type argument

OK, that makes sense. Doubler itself isn’t a type until we give it its parameters. Let’s do that:

impl Iterator for Doubler<I> {
}

We get two error messages. Feel free to look at the second if you like, but it’s not terribly helpful. (Recommendation: always look at the first error message first and try to solve that before moving on.) The first error message is:

error[E0412]: cannot find type `I` in this scope
 --> foo.rs:5:27
  |
5 | impl Iterator for Doubler<I> {
  |                           ^ not found in this scope

OK, what’s happening? When we provide an implementation, we need to state all of the type variables we want upfront. So let’s do this:

impl<I> Iterator for Doubler<I> {
}

That may look a bit redundant (it did to me at first), but eventually you’ll get to cases where things are more complicated and the two sets of angle brackets don’t look identical. (For Haskellers or PureScript users, this is kind of like requiring an explicit forall.)

Alright, now we’ve got something closer, and the compiler is upset that we haven’t given type Item and next. Let’s go ahead and return a u32 again:

type Item = u32;
fn next(&mut self) -> Option<u32> {
    unimplemented!()
}

This compiles and runs, and then crashes because of the unimplemented!. Great, progress! The trick here is we want to ask the underlying Iterator for its next value. We’ll do this with some explicit pattern matching (functional programmers: yes, there’s a map method on Option we could use here).

fn next(&mut self) -> Option<u32> {
    match self.iter.next() {
        None => None,
        Some(x) => Some(x * 2),
    }
}

Nice enough, but when we compile it, we get told:

error[E0599]: no method named `next` found for type `I` in the current scope
 --> foo.rs:8:25
  |
8 |         match self.iter.next() {
  |                         ^^^^
  |
  = help: items from traits can only be used if the trait is implemented and in scope
  = note: the following traits define an item `next`, perhaps you need to implement one of them:
          candidate #1: `std::iter::Iterator`
          candidate #2: `std::str::pattern::Searcher`

The compiler knows that we might mean the next method from Iterator. But it doesn’t use it. Why, you might ask? Because we never told the compiler that the implementation exists! We need to indicate that the I parameter must have an Iterator implementation.

impl<I: Iterator> Iterator for Doubler<I>

That’s some new syntax, but pretty straightforward: I must have an implementation of Iterator. Unfortunately, we’re not out of the woods quite yet:

error[E0369]: binary operation `*` cannot be applied to type `<I as std::iter::Iterator>::Item`
  --> foo.rs:10:29
   |
10 |             Some(x) => Some(x * 2),
   |                             ^^^^^
   |
   = note: an implementation of `std::ops::Mul` might be missing for `<I as std::iter::Iterator>::Item`

Let’s talk this through. I is some Iterator, we’ve already established that. And we know that the x value we use in x * 2 will be of whatever type the Item associated type for that I is. The problem is: we have no idea what it is, or whether or not it can be multiplied!

We’ve already said we’re going to produce u32s here, so can we just enforce that the Item is a u32? Yes!

impl<I: Iterator<Item=u32>> Iterator for Doubler<I>

Whew, our code works!

Alternative syntax: where

As our constraints get more complex, shoving them all into the parameters at the beginning of impl starts to feel crowded. You can alternatively use where for this:

impl<I> Iterator for Doubler<I>
    where
    I: Iterator<Item=u32>

There’s a subjective point at which people decide to make that transition. Personally, I prefer consistency over brevity, and almost always use where. Your mileage may vary. You should be aware of both syntaxes for reading other people’s code.

Not just u32

It’s a bit weird that we’re tied down to u32s. What if we change our main function to have:

let orig_iter = 1..11u64;

We’ll get a compiler error:

error[E0271]: type mismatch resolving `<std::ops::Range<u64> as std::iter::Iterator>::Item == u32`
  --> foo.rs:23:14
   |
23 |     for i in doubled_iter {
   |              ^^^^^^^^^^^^ expected u64, found u32
   |
   = note: expected type `u64`
              found type `u32`

It’s possible to relax this, but it starts to get more complicated. But let’s do it! We’ll need to remove all of the references to u32 in our implementation. Here’s a first stab at this:

impl<I> Iterator for Doubler<I>
    where
    I: Iterator
{
    type Item = ???;
    fn next(&mut self) -> Option<Self::Item> {
        match self.iter.next() {
            None => None,
            Some(x) => Some(x * 2),
        }
    }
}

I’ve replaced Option<u32> with Option<Self::Item>, and removed the <Item = u32> on the I: Iterator. But what should we use for type Item =? We want it to be the same type as the underlying iterator’s Item… so let’s just say that!

type Item = I::Item;

That works! We’re still not compiling though, because Rust doesn’t know that it can perform multiplication on I::Item. Fortunately, there’s a trait for things that can be multiplied called Mul. We can add in:

where
I: Iterator,
I::Item: std::ops::Mul,

New error message:

error[E0308]: mismatched types
  --> foo.rs:14:29
   |
14 |             Some(x) => Some(x * From::from(2u8)),
   |                             ^^^^^^^^^^^^^^^^^^^ expected std::iter::Iterator::Item, found std::ops::Mul::Output
   |
   = note: expected type `<I as std::iter::Iterator>::Item`
              found type `<<I as std::iter::Iterator>::Item as std::ops::Mul>::Output`

It turns out that Mul has an associated type for its output. This is useful for expressing more complicated relationships at the type level. For example, we can define types for Force, Mass, and Acceleration, and then define a Mul implementation that says Mass times Acceleration produces a Force.
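As a quick illustration, here’s that physics example spelled out (the newtype names are invented for this sketch):

use std::ops::Mul;

#[derive(Debug, Clone, Copy)]
struct Mass(f64);
#[derive(Debug, Clone, Copy)]
struct Acceleration(f64);
#[derive(Debug, Clone, Copy)]
struct Force(f64);

impl Mul<Acceleration> for Mass {
    // the associated Output type: Mass times Acceleration is a Force
    type Output = Force;
    fn mul(self, a: Acceleration) -> Force {
        Force(self.0 * a.0)
    }
}

fn main() {
    let f = Mass(2.0) * Acceleration(9.8);
    println!("{:?}", f); // Force(19.6)
}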

That’s a wonderful feature, but it’s getting in our way here. We want to say “the output should be the same as the item itself.” Fair enough:

impl<I> Iterator for Doubler<I>
    where
    I: Iterator,
    I::Item: std::ops::Mul<Output=I::Item>,

And now:

error[E0308]: mismatched types
  --> foo.rs:14:33
   |
14 |             Some(x) => Some(x * 2),
   |                                 ^ expected associated type, found integral variable
   |
   = note: expected type `<I as std::iter::Iterator>::Item`
              found type `{integer}`

Ugh. We have 2, which can be some integral type. But we don’t know that Item is an integral type! I’m not aware of a way to give a constraint that allows this code to work (if someone knows better, please let me know and I’ll update this text). One trick that does work is to upcast from a u8 using the From trait, which performs safe numeric conversions (which cannot over or underflow):

impl<I> Iterator for Doubler<I>
    where
    I: Iterator,
    I::Item: std::ops::Mul<Output=I::Item> + From<u8>,
{
    type Item = I::Item;
    fn next(&mut self) -> Option<Self::Item> {
        match self.iter.next() {
            None => None,
            Some(x) => Some(x * From::from(2u8)),
        }
    }
}

Whew, that’s finally over!

Exercise 4: An easier approach to the above is to use x + x instead of x * 2. Rewrite the iterator to do that. One hint: the compiler won’t know that it’s allowed to make copies of that type unless you tell it via the appropriately-named trait.

Recap

That was a lot! But hopefully it gets the idea across of how iterators work. We can write highly generic adapters that work with many kinds of input. You could apply Doubler to the range iterator as we did. You could apply it to the Empty we defined earlier. Or to dozens of other things.

You may notice that the types of these iterators seem to grow as you add more things to them. That’s absolutely true, and it’s also by design. By representing the full type statically, the compiler is able to see all of the actions that are being performed in a pipeline of iterator operations, and optimize things very well.

More idiomatic

The Doubler we wrote was not idiomatic at all. Let’s do it the real way:

fn main() {
    for i in (1..11).map(|x| x * 2) {
        println!("{}", i);
    }
}

The Iterator trait includes many helper methods, so you can chain up large sets of actions like that:

fn main() {
    for i in (1..11).skip(3).map(|x| x + 1).filter(|x| x % 2 == 0) {
        println!("{}", i);
    }
}

You could of course write something like this as a for loop in C/C++ (a hand-rolled Rust rendition appears after this list), but:

  • It would be harder to see the logic
  • It would be harder to extend in the future
  • It wouldn’t be faster: the Rust compiler will optimize case like this down to the same hand-rolled loop you may write
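To make the first point concrete, here’s roughly what that pipeline looks like hand-rolled (a sketch; the iterator version above is the idiomatic one):

fn main() {
    // equivalent of (1..11).skip(3).map(|x| x + 1).filter(|x| x % 2 == 0)
    let mut skipped = 0;
    let mut i = 1;
    while i < 11 {
        if skipped < 3 {
            skipped += 1;
        } else {
            let x = i + 1;
            if x % 2 == 0 {
                println!("{}", x);
            }
        }
        i += 1;
    }
}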

Collecting results

You can collect the results from an iterator in a vector:

fn main() {
    let my_vec: Vec<u32> = (1..11).collect();
    println!("{:?}", my_vec);
}

The type annotation is necessary here, since collect can work on many different datatypes.
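For example, the very same range can be collected into other containers just by changing the annotation (a small sketch):

use std::collections::HashSet;

fn main() {
    let v: Vec<u32> = (1..11).collect();
    let s: HashSet<u32> = (1..11).collect();
    // an iterator of &str can even be collected into a String
    let joined: String = vec!["foo", "bar"].into_iter().collect();
    println!("{:?} {:?} {}", v, s, joined);
}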

Exercise 5: Use the fold method to get the sum of the numbers from 1 to 10. Extra credit: write a helper sum function.

Next time

We’ve been dancing around closures for quite a while now, including in that last exercise. We now know enough to attack them head on. We’ll do so next time.

You’re now at a point where you can write some real Rust applications and start cutting your teeth. I’d recommend finding a few play tasks you want to experiment with. Exercism may be a good choice if you’re looking for some challenges.

Rust at FP Complete | Introduction

November 12, 2018 04:00 AM

November 09, 2018

Mark Jason Dominus

Why I never finish my Haskell programs (part 3 of ∞)

(Previously: [1] [2])

I'm doing more work on matrix functions. A matrix represents a relation, and I am representing a matrix as a [[Integer]]. Then matrix addition is simply liftA2 (liftA2 (+)). Except no, that's not right, and this is not a complaint, it's certainly my mistake. The overloading for liftA2 for lists does not do what I want, which is to apply the operation to each pair of corresponding elements. I want liftA2 (+) [1,2,3] [10,20,30] to be [11,22,33] but it is not. Instead liftA2 lifts an operation to apply to each possible pair of elements, producing [11,21,31,12,22,32,13,23,33]. And the twice-lifted version is similarly not what I want:

$$ \require{enclose} \begin{pmatrix}1&2\\3&4\end{pmatrix}\enclose{circle}{\oplus} \begin{pmatrix}10&20\\30&40\end{pmatrix}= \begin{pmatrix} 11 & 21 & 12 & 22 \\ 31 & 41 & 32 & 42 \\ 13 & 23 & 14 & 24 \\ 33 & 43 & 34 & 44 \end{pmatrix} $$

No problem, this is what ZipList is for. ZipLists are just regular lists that have a label on them that advises liftA2 to lift an operation to the element-by-element version I want instead of the each-one-by-every-other-one version that is the default. For instance

    liftA2 (+) (ZipList [1,2,3]) (ZipList [10,20,30])

gives ZipList [11,22,33], as desired. The getZipList function turns a ZipList back into a regular list.

But my matrices are nested lists, so I need to apply the ZipList marker twice, once to the outer list, and once to each of the inner lists, because I want the element-by-element behavior at both levels. That's easy enough:

    matrix :: [[a]] -> ZipList (ZipList a)
    matrix m = ZipList (fmap ZipList m)

(The fmap here is actually being specialized to map, but that's okay.)

Now

    (liftA2 . liftA2) (+) (matrix [[1,2],[3,4]]) (matrix [[10,20],[30, 40]])

does indeed produce the result I want, except that the type markers are still in there: instead of

    [[11,22],[33,44]]

I get

    ZipList [ ZipList [11, 22], ZipList [33, 44] ]

No problem, I'll just use getZipList to turn them back again:

    unmatrix :: ZipList (ZipList a) -> [[a]]
    unmatrix m = getZipList (fmap getZipList m)

And now matrix addition is finished:

    matrixplus :: [[a]] -> [[a]] -> [[a]]
    matrixplus m n = unmatrix $ (liftA2 . liftA2) (+) (matrix m) (matrix n)

This works perfectly.

But the matrix and unmatrix pair bugs me a little. This business of changing labels at both levels has happened twice already and I am likely to need it again. So I will turn the two functions into a single higher-order function by abstracting over ZipList. This turns this

    matrix m = ZipList (fmap ZipList m)

into this:

    twice zl m = zl (fmap zl m)

with the idea that I will now have matrix = twice ZipList and unmatrix = twice getZipList.

The first sign that something is going wrong is that twice does not have the type I wanted. It is:

    twice ::  Functor f             => (f a -> a)   -> f (f a) -> a

where I was hoping for something more like this:

    twice :: (Functor f, Functor g) => (f a -> g a) -> f (f a) -> g (g a)

which is not reasonable to expect: how can Haskell be expected to figure out I wanted two different functors in there when there is only one fmap? And indeed twice does not work; my desired matrix = twice ZipList does not even type-check:

    <interactive>:19:7: error:
        • Occurs check: cannot construct the infinite type: a ~ ZipList a
          Expected type: [ZipList a] -> ZipList a
            Actual type: [a] -> ZipList a
        • In the first argument of ‘twice’, namely ‘ZipList’
          In the expression: twice ZipList
          In an equation for ‘matrix’: matrix = twice ZipList
        • Relevant bindings include
            matrix :: [[ZipList a]] -> ZipList a (bound at <interactive>:20:5)

Telling GHC explicitly what type I want for twice doesn't work either, so I decide it's time to go to lunch. I take paper with me, and while I am eating my roast pork hoagie with sharp provolone and spinach (a popular local delicacy) I work out the results of the type unification algorithm on paper for both cases to see what goes wrong.

I get the same answers that Haskell got, but I can't see where the difference was coming from.

So now, instead of defining matrix operations, I am looking into the type unification algorithm and trying to figure out why twice doesn't work.
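(For what it's worth, the standard escape from this particular trap is a rank-2 type: the trouble is that the single zl argument must be used at two different types, which plain Hindley–Milner inference can't express. A sketch, assuming the RankNTypes extension:)

    {-# LANGUAGE RankNTypes #-}
    import Control.Applicative (ZipList(..))

    -- zl is polymorphic in x, so it can be applied at both levels
    twice :: Functor f => (forall x. f x -> g x) -> f (f a) -> g (g a)
    twice zl m = zl (fmap zl m)

    matrix :: [[a]] -> ZipList (ZipList a)
    matrix = twice ZipList

    unmatrix :: ZipList (ZipList a) -> [[a]]
    unmatrix = twice getZipList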

And that is yet another reason why I never finish my Haskell programs. (“What do you mean, λ-abstraction didn't work?”)

by Mark Dominus (mjd@plover.com) at November 09, 2018 06:22 PM

Holden Karau

PyConCanada After Lunch Keynote Sunday @ @pyconca

Thanks for joining me on 2018-11-08 for PyConCanada After Lunch Keynote Sunday. I'll update this post with the slides soon. Comment below to join in the discussion :). Talk feedback is appreciated at http://bit.ly/holdenTalkFeedback

by Holden Karau (noreply@blogger.com) at November 09, 2018 05:02 AM

Dealing With Contributor Overload @ Festival de Software Libre

Thanks for joining us (@holdenkarau, @griscz) on 2018-11-02 at Festival de Software Libre 2018 Puerto Vallarta, Jalisco, Mexico for Dealing With Contributor Overload. I'll update this post with the slides soon. Comment below to join in the discussion :). Talk feedback is appreciated at http://bit.ly/holdenTalkFeedback

by Holden Karau (noreply@blogger.com) at November 09, 2018 04:51 AM

November 08, 2018

Mark Jason Dominus

Haskell type checker complaint 184 of 698

I want to build an adjacency matrix for the vertices of a cube; this is a matrix that has m[a][b] = 1 exactly when vertices a and b share an edge. We can enumerate the vertices arbitrarily but a convenient way to do it is to assign them the numbers 0 through 7 and then say that vertices a and b are adjacent if, regarded as binary numerals, they differ in exactly one bit, so:

   import Data.Bits
   a `adj` b = if (elem (xor a b) [1, 2, 4]) then 1 else 0         

This compiles and GHC infers the type

   adj :: (Bits a, Num a, Num t) => a -> a -> t 

Fine.

[Illustration, in the style of the illustration from Stanislaw Lem's “The Cyberiad”, depicting a giant humanoid computer proudly displaying the problem “2 + 2 =” and its solution, “7”, on its front panel.]

Now I want to build the adjacency matrix, which is completely straightforward:

    cube = [ [a `adj` b | b <- [0 .. 7] ] | a <- [0 .. 7] ]  where
      a `adj` b = if (elem (xor a b) [1, 2, 4]) then 1 else 0

Ha ha, no it isn't; in Haskell nothing is straightforward. This produces 106 lines of type whining, followed by a failed compilation. Apparently this is because 0 and 7 are overloaded, and could mean some weird values in some freakish instance of Num, and then 0 .. 7 might generate an infinite list of 1-graded torsion rings or something.

To fix this I have to say explicitly what I mean by 0. “Oh, yeah, by the way, that there zero is intended to denote the integer zero, and not the 1-graded torsion ring with no elements.”

        cube = [ [a `adj` b | b <- [0 :: Integer .. 7] ] | a <- [0 .. 7] ]  where
          a `adj` b = if (elem (xor a b) [1, 2, 4]) then 1 else 0

Here's another way I could accomplish this:

        zero_i_really_mean_it = 0 :: Integer
        cube = [ [a `adj` b | b <- [zero_i_really_mean_it .. 7] ] | a <- [0 .. 7] ] where       
          a `adj` b = if (elem (xor a b) [1, 2, 4]) then 1 else 0

Or how about this?

        cube = [ [a `adj` b | b <- numbers_dammit [0 .. 7] ] | a <- [0 .. 7] ] where
          p `adj` q = if (elem (xor p q) [1, 2, 4]) then 1 else 0
          numbers_dammit = id :: [Integer] -> [Integer] 

I think there must be something really wrong with the language design here. I don't know exactly what it is, but I think someone must have made the wrong tradeoff at some point.

by Mark Dominus (mjd@plover.com) at November 08, 2018 08:40 PM

November 07, 2018

Michael Snoyman

Proposal: Stack Code of Conduct

Technical disagreement is not only an inevitable part of software development and community building. It is a necessary and healthy component thereof. Respectful conversations around technical trade-offs are vital to improving software. And these conversations need to be handled with proper care.

I have failed at this in the past, and I’ve apologized for having done so. I’ve strived to do better since. I have encouraged—and continue to encourage—my friends, colleagues, and fellow developers to do the same, and I believe it has been successful.

I think the other extreme—avoiding all discussions of disagreements—is actively harmful. And unfortunately, I see significant confusion in the general Haskell community around what is and is not an acceptable form of technical disagreement.

Outside of my position on the Core Libraries Committee, I do not hold any official position in the leadership of Haskell. I’ve written open source libraries and educational material. I’ve worked on and maintained build tooling. I run a team of Haskell engineers at FP Complete. None of those positions grant me any authority to demand compliance from individuals with any code of conduct.

That said, at many points in the past, I have reached out to individuals whom I’ve been told (or saw myself) were behaving in a non-ideal way. I’ve attempted to remedy those situations privately wherever possible, and have on rare occasions asked people to discontinue such behavior publicly.

I’ve tried to keep this process informal because, as mentioned, I’m not any official authority. The exception to this would be my leadership at FP Complete, but I have a strong policy of avoiding the usage of my position to force those reporting to me to behave a certain way. I do not believe it is appropriate for me to leverage what essentially amounts to someone’s livelihood to demand their compliance with my wishes.

There’s been a slow burn of public discussion over the years that has made me consider making a more formal process. But recently, I had a “straw that broke the camel’s back” moment. Someone I like and respect expressed opinions that I think are antithetical to healthy community growth. I won’t call anyone out or refer to those discussions, I’m just explaining why I’m doing this now.

My proposal

I will not in any way claim to have an authority position for the Haskell community. As a result, I’m not going to make any statement for the Haskell community. As the founder of the Stack project, I think I have more of a right (and responsibility) to speak for that project. So I’m going to refer here to the Haskell Stack community. I’m hesitant to do so given that it may be seen as divisive, but I believe the alternative—trying to speak for the entire Haskell community—is inappropriate. My intent is not to be divisive.

I also have no ability to enforce any kind of behavior on Stack users or contributors. Stack is open source: no matter what statements I make, anyone can use it. I could ask the rest of the maintainer team to block pull requests from specific people, but I believe that’s also overstepping my authority, and so I won’t be doing it.

Instead, I intend to publish two documents, following interaction with people interested in engaging with me on this process. Those two documents will be:

  • A code of conduct. I do not intend to write one myself, but instead use a standard, “off the shelf” CoC. I did this about a year ago for Yesod, and it worked out well.
  • A recommended communication guide. This is something I do intend to write myself, with feedback from others. I intend to write this as an open source project, on a Github repo, accepting issues and pull requests. I intend this to address specific concerns of how the communication I’ve been involved with has broken down.

I am not demanding anything of anyone here. I’m not expecting anyone in other communities (whether within Haskell or outside of it) to participate or change their behavior. And as stated, I’m not willing or able to demand changes from within the Stack community.

What I am hoping is that by clarifying these points:

  • People unsure of how to communicate in certain situations have some clarity
  • We have some better ideas of how to communicate on sensitive technical topics
  • If someone within the Stack community is seen as misbehaving, there’s a mechanism to politely and confidentially raise the issue, and hopefully have the situation improved

Next steps

I’ve created a Github repo. This repo is on my personal Github account, not commercialhaskell, fpco, or haskell. Again, this is my own personal set of content. I encourage others to participate if they feel invested in the topic. I’m considering having a sign-up sheet for the repo after we have a first version, so that people can state that:

  • They’ve reviewed a certain version of the repo
  • They agree with what it says
  • They will strive to follow its recommendations

What to avoid

We’ll get into far more details in that repo itself, but since I anticipate this blog post itself kicking off some discussion, I’m going to make some requests right now:

  • Avoid ad hominems. (Yes, I’ve made mistakes here in the past.) This applies both to someone’s personal history, and to any organizations they are members of, or who they are friendly with.
  • Avoid offensive language. This is actually more complicated than it seems; one of the ways I’ve upset people is my highly sarcastic-by-nature communication style. I’ve worked hard on tamping that down. I point this out because what’s offensive to one person may be normal to others. This is especially true in a global community.
  • We will inevitably have to address some old behavior to know what to include in our recommendations. This is not an invitation to air grievances, as tempting as that may be for some people. Keep the comments non-personal, speak in general terms, and try to avoid any grandstanding. This will be a tough line to walk. But a simple example:
    • Good: “I’ve seen people discredit some people’s opinion because of who their friends are, I believe we should address that in the guidelines.”
    • Bad: “Two months ago, in this Reddit thread, person X said Y about Z. This is horrible, and X needs to either apologize or be removed from the community. They need to be mentioned by name in the guidelines as an offender.”

This is my first time trying to have a discussion quite like this, and I’m guessing the first time many people involved will be participating in one. As one final note, I’d like to request people make an assumption of good will. Mistakes will be made, but we can try to minimize those mistakes, and move on from them when they occur.

November 07, 2018 05:48 AM

Iterators and Errors - Rust Crash Course lesson 3 - exercise solutions

Below are the solutions to the exercises from the last Rust Crash Course lesson, “Iterators and Errors.”

This post is part of a series based on teaching Rust at FP Complete. If you’re reading this post outside of the blog, you can find links to all posts in the series at the top of the introduction post. You can also subscribe to the RSS feed.

Exercise 1

The trick here is to “wrap” the Args data type with the Skip data type:

use std::env::{args, Args};
use std::iter::Skip;

fn main() {
    let args: Skip<Args> = args().skip(1);
    for arg in args {
        println!("{}", arg);
    }
}

You may have noticed that we didn’t need to mark args as mutable. That’s because we are moving the args value into the for loop, meaning that any mutation to it by the for loop cannot be seen in the main function.

Exercise 2

When we call parse in the context of that example, type inference tells us that the width and height results must be u32s, since they are used as fields in Frame. Rust is able to determine what implementation to use based on the needed return type. Cool!

Yet again, Haskellers are rolling their eyes and saying “old news.”
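To see the same mechanism in isolation, here's a tiny standalone example (the values are made up):

fn main() {
    // the annotation on the binding tells parse which impl to use
    let n: u32 = "42".parse().unwrap();

    // alternatively, the "turbofish" spells out the type at the call site
    let m = "42".parse::<u32>().unwrap();

    println!("{} {}", n, m);
}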

Exercise 3

Complete source file:

#[derive(Debug)]
struct Frame {
    width: u32,
    height: u32,
}

#[derive(Debug)]
enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

struct ParseArgs(std::env::Args);

impl ParseArgs {
    fn new() -> ParseArgs {
        ParseArgs(std::env::args())
    }


    fn require_arg(&mut self) -> Result<String, ParseError> {
        match self.0.next() {
            None => Err(ParseError::TooFewArgs),
            Some(s) => Ok(s),
        }
    }

    fn require_no_args(&mut self) -> Result<(), ParseError> {
        match self.0.next() {
            Some(_) => Err(ParseError::TooManyArgs),
            None => Ok(()),
        }
    }
}

fn parse_u32(s: String) -> Result<u32, ParseError> {
    match s.parse() {
        Err(_) => Err(ParseError::InvalidInteger(s)),
        Ok(x) => Ok(x),
    }
}

fn parse_args() -> Result<Frame, ParseError> {
    let mut args = ParseArgs::new();

    // skip the command name
    args.require_arg()?;

    let width_str = args.require_arg()?;
    let height_str = args.require_arg()?;
    args.require_no_args()?;
    let width = parse_u32(width_str)?;
    let height = parse_u32(height_str)?;

    Ok(Frame { width, height })
}

fn main() {
    println!("{:?}", parse_args());
}

Exercise 4

We want to ensure a minimum size for the width and height. First, let’s add two more variants to the ParseError enum:

WidthTooSmall(u32),
HeightTooSmall(u32),

Then add the following to the parse_args function, just before the Ok:

if width < 2 {
    return Err(ParseError::WidthTooSmall(width));
}
if height < 2 {
    return Err(ParseError::HeightTooSmall(height));
}

Exercise 5

This is another perfect time to pull out our trailing question mark!

fn main () -> Result<(), self::parse_args::ParseError> {
    let frame = parse_args::parse_args()?;
    let mut game = Game::new(frame);
    let sleep_duration = std::time::Duration::from_millis(33);
    loop {
        println!("{}", game);
        game.step();
        std::thread::sleep(sleep_duration);
    }
}

Rust at FP Complete | Introduction

November 07, 2018 04:00 AM

November 05, 2018

Michael Snoyman

Iterators and Errors - Rust Crash Course lesson 3

Last time, we finished off with a bouncy ball implementation with some downsides: lackluster error handling and ugly buffering. It also had another limitation: a static frame size. Today, we’re going to address all of these problems, starting with that last one: let’s get some command line arguments to control the frame size.

This post is part of a series based on teaching Rust at FP Complete. If you’re reading this post outside of the blog, you can find links to all posts in the series at the top of the introduction post. You can also subscribe to the RSS feed.

Like last time, I’m going to expect you, the reader, to be making changes to the source code along with me. Make sure to actually type in the code while reading!

Command line arguments

We’re going to modify our application as follows:

  • Accept two command line arguments: the width and the height
  • Both must be valid u32s
  • Too many or too few command line arguments will result in an error message

Sounds easy enough. In a real application, we would use a proper argument-handling library, like clap. But for now, we’re going lower level. Like we did for the sleep function, let’s start by searching the standard library docs for the word args. The first two entries both look relevant.

  • std::env::Args An iterator over the arguments of a process, yielding a String value for each argument.
  • std::env::args Returns the arguments which this program was started with (normally passed via the command line).

Now’s a good time to mention that, by strong convention:

  • Module names (like std and env) and function names (like args) are snake_cased
  • Types (like Args) are PascalCased
    • Exception: primitives like u32 and str are lower case

The std module has an env module. The env module has both an Args type and an args function. Why do we need both? Even more strangely, let’s look at the type signature for the args function:

pub fn args() -> Args

The args function returns a value of type Args. If Args was a type synonym for, say, a vector of Strings, this would make sense. But that’s not the case. And if you check out its docs, there aren’t any fields or methods exposed on Args, only trait implementations!

The extra datatype pattern

Maybe there’s a proper term for this in Rust, but I haven’t seen it myself yet. (If someone has, please let me know so I can use the proper term.) There’s a pervasive pattern in the Rust ecosystem, which in my experience starts with iterators and continues to more advanced topics like futures and async I/O.

  • We want to have composable interfaces
  • We also want high performance
  • Therefore, we define lots of helper data types that allow the compiler to perform some great optimizations
  • And we define traits as an interface to let these types compose nicely with each other

Sound abstract? Don’t worry, we’ll make that concrete in a bit. Here’s the practical outcome of all of this:

  • We end up programming quite a bit against traits, which provide common abstractions and lots of helper functions
  • We get a matching data type for many common functions
  • Oftentimes, our type signatures will end up being massive, representing all of the different compositions we performed (though the new-ish -> impl Iterator style helps with that significantly; see the announcement blog post for more details, and the sketch just after this list)
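
Here’s a small sketch of what that last point means in practice; the function is hypothetical, invented just to show the two styles:

// The fully-spelled-out type of this iterator pipeline can't even be
// written by hand, because the closure types have no names. `impl
// Iterator` lets us state just the interface we care about:
fn evens_doubled(v: Vec<u32>) -> impl Iterator<Item = u32> {
    v.into_iter().filter(|x| x % 2 == 0).map(|x| x * 2)
}

fn main() {
    for x in evens_doubled(vec![1, 2, 3, 4]) {
        println!("{}", x); // prints 4 and 8
    }
}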

Alright, with that out of the way, let’s get back to command line arguments!

CLI args via iterators

Let’s play around in an empty file before coming back to bouncy. (Either use cargo new and cargo run, or use rustc directly, your call.) If I click on the expand button next to the Iterator trait on the Args docs page, I see this function:

fn next(&mut self) -> Option<String>

Let’s play with that a bit:

use std::env::args;

fn main() {
    let mut args = args(); // Yes, that name shadowing works
    println!("{:?}", args.next());
    println!("{:?}", args.next());
    println!("{:?}", args.next());
    println!("{:?}", args.next());
}

Notice that we had to use let mut, since the next method will mutate the value. Now I’m going to run this with cargo run foo bar:

$ cargo run foo bar
   Compiling args v0.1.0 (/Users/michael/Desktop/tmp/args)
    Finished dev [unoptimized + debuginfo] target(s) in 1.60s
     Running `target/debug/args foo bar`
Some("target/debug/args")
Some("foo")
Some("bar")
None

Nice! It gives us the name of our executable, followed by the command line arguments, returning None when there’s nothing left. (For pedants out there: command line arguments aren’t technically required to have the command name as the first argument, it’s just a really strong convention most tools follow.)

Let’s play with this some more. Can you write a loop that prints out all of the command line arguments and then exits? Take a minute, and then I’ll provide some answers.

Alright, done? Cool, let’s see some examples! First, we’ll loop with return.

use std::env::args;

fn main() {
    let mut args = args();
    loop {
        match args.next() {
            None => return,
            Some(arg) => println!("{}", arg),
        }
    }
}

We also don’t need to use return here. Instead of returning from the function, we can just break out of the loop:

use std::env::args;

fn main() {
    let mut args = args();
    loop {
        match args.next() {
            None => break,
            Some(arg) => println!("{}", arg),
        }
    }
}

Or, if you want to save on some indentation, you can use if let.

use std::env::args;

fn main() {
    let mut args = args();
    loop {
        if let Some(arg) = args.next() {
            println!("{}", arg);
        } else {
            break;
            // return would work too, but break is nicer
            // here, as it is more narrowly scoped
        }
    }
}

You can also use while let. Try to guess what that would look like before checking the next example:

use std::env::args;

fn main() {
    let mut args = args();
    while let Some(arg) = args.next() {
        println!("{}", arg);
    }
}

Getting better! Alright, one final example:

use std::env::args;

fn main() {
    for arg in args() {
        println!("{}", arg);
    }
}

Whoa, what?!? Welcome to one of my favorite aspects of Rust. Iterators are a concept built into the language directly, via for loops. A for loop will automate the calling of next(). It also hides away the fact that there’s some mutable state at play, at least to some extent. This is a powerful concept, and allows a lot of code to end up with a more functional style, something I happen to be a big fan of.

Skipping

It’s all well and good that the first argument is the name of the executable. But we typically don’t care about that. Can we somehow skip it in our output? Well, here’s one approach:

use std::env::args;

fn main() {
    let mut args = args();
    let _ = args.next(); // drop it on the floor
    for arg in args {
        println!("{}", arg);
    }
}

That works, but it’s a bit clumsy, especially compared to our previous version that had no mutable variables. Maybe there’s some other way to skip things. Let’s search the standard library again. The first two results are std::iter::Skip and std::iter::Iterator::skip. The former is a data type, and the latter is a method on the Iterator trait. Since our Args type implements the Iterator trait, we can use it. Nice!

Side note Haskellers: skip is like drop in most Haskell libraries, like Data.List or vector. drop has a totally different meaning in Rust (dropping owned data), so skip is a better name in Rust.

Let’s look at some signatures from the docs above:

pub struct Skip<I> { /* fields omitted */ }
fn skip(self, n: usize) -> Skip<Self>

Hmm… deep breaths. Skip is a data type that is parameterized over some other data type, I. This is a common pattern in iterators: Skip wraps around an existing iterator and adds some new functionality to how it iterates. The skip method will consume an existing iterator, take the number of elements to skip, and return a new Skip<OrigDataType> value. How do I know it consumes the original iterator? The first parameter is self, not &self or &mut self.

That seemed like a lot of concepts. Fortunately, usage is pretty easy:

use std::env::args;

fn main() {
    for arg in args().skip(1) {
        println!("{}", arg);
    }
}

Nice!

Exercise 1 Type inference lets the program above work just fine without any type annotations. However, it’s a good idea to get used to the generated types, since you’ll see them all too often in error messages. Get the program below to compile by fixing the type signature. Try to do it without using the compiler at first, since the error messages will almost give the answer away.

use std::env::{args, Args};
use std::iter::Skip;

fn main() {
    let args: Args = args().skip(1);
    for arg in args {
        println!("{}", arg);
    }
}

This layering-of-datatypes approach, as mentioned above, is a real boon to performance. Iterator-heavy code will often compile down to an efficient loop, comparable with the best hand-rolled loop you could have written. However, iterator code is much higher level, more declarative, and easy to maintain and extend.
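
As a rough illustration of that claim (a sketch, not a benchmark), these two functions typically optimize down to comparable machine code:

// Iterator style: declarative, no mutable state in sight.
fn sum_squares_of_evens(v: &[u32]) -> u32 {
    v.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum()
}

// Hand-rolled style: roughly the loop the optimizer produces anyway.
fn sum_squares_of_evens_loop(v: &[u32]) -> u32 {
    let mut total = 0;
    for &x in v {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let v = [1, 2, 3, 4, 5, 6];
    assert_eq!(sum_squares_of_evens(&v), sum_squares_of_evens_loop(&v));
    println!("{}", sum_squares_of_evens(&v)); // 56
}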

There’s a lot more to iterators, but we’re going to stop there for the moment, since we still want to process our command line parameters, and we need to learn one more thing first.

Parsing integers

If you search the standard library for parse, you’ll find the str::parse method. The documentation does a good job of explaining things, I won’t repeat that here. Please go read that now.

OK, you’re back? Turbofish is a funny name, right?

Take a crack at writing a program that prints the result of parsing each command line argument as a u32, then check my version:

fn main() {
    for arg in std::env::args().skip(1) {
        println!("{:?}", arg.parse::<u32>());
    }
}

And let’s try running it:

$ cargo run one 2 three four 5 6 7
Err(ParseIntError { kind: InvalidDigit })
Ok(2)
Err(ParseIntError { kind: InvalidDigit })
Err(ParseIntError { kind: InvalidDigit })
Ok(5)
Ok(6)
Ok(7)

When the parse is successful, we get the Ok variant of the Result enum. When the parse fails, we get the Err variant, with a ParseIntError telling us what went wrong. (The type signature on parse itself uses some associated types to indicate this type, we’re not going to get into that right now.)

This is a common pattern in Rust. Rust has no runtime exceptions, so we track potential failure at the type level with actual values.

Side note You may think of panics as similar to runtime exceptions, and to some extent they are. However, you’re not able to properly recover from panics, making them different in practice from how runtime exceptions are used in other languages like Python.

Parse our command line

We’re finally ready to get started on our actual command line parsing! We’re going to be overly tedious in our implementation, especially with our data types. After we finish implementing this in a blank file, we’ll move the code into the bouncy implementation itself. First, let’s define a data type to hold a successful parse, which will contain the width and the height.

Challenge Will this be a struct or an enum? Can you try implementing this yourself first?

Since we want to hold onto multiple values, we’ll be using a struct. I’d like to use named fields, so we have:

struct Frame {
    width: u32,
    height: u32,
}

Next, let’s define an error type to represent all of the things that can go wrong during this parse. We have:

  • Too few arguments
  • Too many arguments
  • Invalid integer

Challenge Are we going to use a struct or an enum this time?

This time, we’ll use an enum, because we’ll only detect one of these problems (whichever we notice first). Aficionados of web forms and applicative parsing may scoff at this and say we should detect all errors, but we’re going to be lazy.

enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

Notice that the InvalidInteger variant takes a payload, the String it failed parsing. This is what makes enums in Rust so much more powerful than enumerations in most other languages.
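
To see why the payload matters, here’s a sketch of an error-reporting helper (hypothetical, not part of the exercise) that pattern matches on the enum above to recover the offending input:

fn describe(err: &ParseError) -> String {
    match err {
        ParseError::TooFewArgs => String::from("too few arguments"),
        ParseError::TooManyArgs => String::from("too many arguments"),
        // The payload travels with the variant, so we can tell the
        // user exactly which string failed to parse.
        ParseError::InvalidInteger(s) => format!("invalid integer: {:?}", s),
    }
}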

Challenge We’re going to write a parse_args helper function. Can you guess what its type signature will be?

Combining all of the knowledge we established above, here’s an implementation:

#[derive(Debug)]
struct Frame {
    width: u32,
    height: u32,
}

#[derive(Debug)]
enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

fn parse_args() -> Result<Frame, ParseError> {
    use self::ParseError::*; // bring variants into our namespace

    let mut args = std::env::args().skip(1);

    match args.next() {
        None => Err(TooFewArgs),
        Some(width_str) => {
            match args.next() {
                None => Err(TooFewArgs),
                Some(height_str) => {
                    match args.next() {
                        Some(_) => Err(TooManyArgs),
                        None => {
                            match width_str.parse() {
                                Err(_) => Err(InvalidInteger(width_str)),
                                Ok(width) => {
                                    match height_str.parse() {
                                        Err(_) => Err(InvalidInteger(height_str)),
                                        Ok(height) => Ok(Frame {
                                            width,
                                            height,
                                        }),
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}

fn main() {
    println!("{:?}", parse_args());
}

Holy nested blocks Batman, that is a lot of indentation! The pattern is pretty straightforward:

  • Pattern match
  • If we got something bad, stop with an Err
  • If we got something good, keep going

Haskellers at this point are screaming about do notation and monads. Ignore them. We’re in the land of Rust, we don’t take kindly to those things around here. (Someone please yell at me for that terrible pun.)

Exercise 2 Why didn’t we need to use the turbofish on the call to parse above?

What we want to do is return early from our function. You know what keyword can help with that? That’s right: return!

fn parse_args() -> Result<Frame, ParseError> {
    use self::ParseError::*;

    let mut args = std::env::args().skip(1);

    let width_str = match args.next() {
        None => return Err(TooFewArgs),
        Some(width_str) => width_str,
    };

    let height_str = match args.next() {
        None => return Err(TooFewArgs),
        Some(height_str) => height_str,
    };

    match args.next() {
        Some(_) => return Err(TooManyArgs),
        None => (),
    }

    let width = match width_str.parse() {
        Err(_) => return Err(InvalidInteger(width_str)),
        Ok(width) => width,
    };

    let height = match height_str.parse() {
        Err(_) => return Err(InvalidInteger(height_str)),
        Ok(height) => height,
    };

    Ok(Frame {
        width,
        height,
    })
}

Much nicer to look at! However, it’s still a bit repetitive, and littering those returns everywhere is subjectively not very nice. In fact, while typing this up, I accidentally left off a few of the returns and got to stare at some long error messages. (Try that for yourself.)

Question mark

Side note The trailing question mark we’re about to introduce used to be the try! macro in Rust. If you’re confused about the seeming overlap: it’s simply a transition to new syntax.

The pattern above is so common that Rust has built in syntax for it. If you put a question mark after an expression, it basically does the whole match/return-on-Err thing for you. It’s more powerful than we’ll demonstrate right now, but we’ll get to that extra power a bit later.
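
Here’s a tiny sketch of what the question mark is doing; the function is hypothetical, and the commented-out match is the rough expansion (simplified, since the real desugaring also converts the error type via the From trait, which is part of that extra power):

fn parse_and_double(s: &str) -> Result<u32, std::num::ParseIntError> {
    // With the question mark:
    let n = s.parse::<u32>()?;

    // Which behaves roughly like:
    // let n = match s.parse::<u32>() {
    //     Ok(val) => val,
    //     Err(e) => return Err(e),
    // };

    Ok(n * 2)
}

fn main() {
    println!("{:?}", parse_and_double("21"));   // Ok(42)
    println!("{:?}", parse_and_double("nope")); // Err(...)
}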

To start off, we’re going to define some helper functions to:

  • Require another argument
  • Require that there are no more arguments
  • Parse a u32

All of these need to return Result values, and we’ll use a ParseError for the error case in all of them. The first two functions need to take a mutable reference to our arguments. (As a side note, I’m going to stop using the skip method now, because if I do it will give away the solution to exercise 1.)

use std::env::Args;

fn require_arg(args: &mut Args) -> Result<String, ParseError> {
    match args.next() {
        None => Err(ParseError::TooFewArgs),
        Some(s) => Ok(s),
    }
}

fn require_no_args(args: &mut Args) -> Result<(), ParseError> {
    match args.next() {
        Some(_) => Err(ParseError::TooManyArgs),
        // I think this looks a little weird myself.
        // But we're wrapping up the unit value ()
        // with the Ok variant. You get used to it
        // after a while, I guess
        None => Ok(()),
    }
}

fn parse_u32(s: String) -> Result<u32, ParseError> {
    match s.parse() {
        Err(_) => Err(ParseError::InvalidInteger(s)),
        Ok(x) => Ok(x),
    }
}

Now that we have these helpers defined, our parse_args function is much easier to look at:

fn parse_args() -> Result<Frame, ParseError> {
    let mut args = std::env::args();

    // skip the command name
    let _command_name = require_arg(&mut args)?;

    let width_str = require_arg(&mut args)?;
    let height_str = require_arg(&mut args)?;
    require_no_args(&mut args)?;
    let width = parse_u32(width_str)?;
    let height = parse_u32(height_str)?;

    Ok(Frame { width, height })
}

Beautiful!

Forgotten question marks

What do you think happens if you forget the question mark on the let width_str line? If you do so:

  • width_str will contain a Result<String, ParseError> instead of a String
  • The call to parse_u32 will not type check
error[E0308]: mismatched types
  --> src/main.rs:50:27
   |
50 |     let width = parse_u32(width_str)?;
   |                           ^^^^^^^^^ expected struct `std::string::String`, found enum `std::result::Result`
   |
   = note: expected type `std::string::String`
              found type `std::result::Result<std::string::String, ParseError>`

That’s nice. But what will happen if we forget the question mark on the require_no_args call? We never use the output value there, so it will type check just fine. Now we have the age-old problem of C: we’re accidentally ignoring error codes!

Well, not so fast. Check out this wonderful warning from the compiler:

warning: unused `std::result::Result` which must be used
  --> src/main.rs:49:5
   |
49 |     require_no_args(&mut args);
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: #[warn(unused_must_use)] on by default
   = note: this `Result` may be an `Err` variant, which should be handled

That’s right: Rust will detect if you’ve ignored a potential failure. There is a hole in this in the current code sample:

let _command_name = require_arg(&mut args);

That doesn’t trigger the warning, since in let _name = blah;, the leading underscore says “I know what I’m doing, I don’t care about this value.” Instead, it’s better to write the code without the let:

require_arg(&mut args);

Now we get a warning, which can be solved with the added trailing question mark.

Exercise 3

It would be more convenient to use method call syntax. Let’s define a helper data type to make this possible. Fill in the implementation of the code below.

#[derive(Debug)]
struct Frame {
    width: u32,
    height: u32,
}

#[derive(Debug)]
enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

struct ParseArgs(std::env::Args);

impl ParseArgs {
    fn new() -> ParseArgs {
        unimplemented!()
    }


    fn require_arg(&mut self) -> Result<String, ParseError> {
        match self.0.next() {
        }
    }
}

fn parse_args() -> Result<Frame, ParseError> {
    let mut args = ParseArgs::new();

    // skip the command name
    args.require_arg()?;

    let width_str = args.require_arg()?;
    let height_str = args.require_arg()?;
    args.require_no_args()?;
    let width = parse_u32(width_str)?;
    let height = parse_u32(height_str)?;

    Ok(Frame { width, height })
}

fn main() {
    println!("{:?}", parse_args());
}

Updating bouncy

This next bit should be done as a Cargo project, not with rustc. Let’s start a new empty project:

$ cargo new bouncy-args --bin
$ cd bouncy-args

Next, let’s get the old code and place it in src/main.rs. You can copy-paste manually, or run:

$ curl https://gist.githubusercontent.com/snoyberg/5307d493750d7b48c1c5281961bc31d0/raw/8f467e87f69a197095bda096cbbb71d8d813b1d7/main.rs > src/main.rs

Run cargo run and make sure it works. You can use Ctrl-C to kill the program.

We already wrote fully usable argument parsing code above. Instead of putting it in the same source file, let’s put it in its own file. In order to do so, we’re going to have to play with modules in Rust.

For convenience, you can view the full source code as a Gist. We need to put this in src/parse_args.rs:

$ curl https://gist.githubusercontent.com/snoyberg/568899dc3ae6c82e54809efe283e4473/raw/2ee261684f81745b21e571360b1c5f5d77b78fce/parse_args.rs > src/parse_args.rs

If you run cargo build now, it won’t even look at parse_args.rs. Don’t believe me? Add some invalid content to the top of that file and run cargo build again. Nothing happens, right? We need to tell the compiler that we’ve got another module in our project. We do that by modifying src/main.rs. Add the following line to the top of your file:

mod parse_args;

If you put in that invalid line before, running cargo build should now result in an error message. Perfect! Go ahead and get rid of that invalid line and make sure everything compiles and runs. We won’t be accepting command line arguments yet, but we’re getting closer.

Use it!

We’re currently getting some dead code warnings, since we aren’t using anything from the new module:

warning: struct is never constructed: `Frame`
 --> src/parse_args.rs:2:1
  |
2 | struct Frame {
  | ^^^^^^^^^^^^
  |
  = note: #[warn(dead_code)] on by default

warning: enum is never used: `ParseError`
 --> src/parse_args.rs:8:1
  |
8 | enum ParseError {
  | ^^^^^^^^^^^^^^^

warning: function is never used: `parse_args`
  --> src/parse_args.rs:14:1
   |
14 | fn parse_args() -> Result<Frame, ParseError> {
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Let’s fix that. To start off, add the following to the top of your main function, just to prove that we can, in fact, use our new module:

println!("{:?}", parse_args::parse_args());
return; // don't start the game, our output will disappear

Also, add a pub in front of the items we want to access from the main.rs file, namely:

  • struct Frame
  • enum ParseError
  • fn parse_args

Running this gets us:

$ cargo run
   Compiling bouncy-args v0.1.0 (/Users/michael/Desktop/tmp/bouncy-args)
warning: unreachable statement
   --> src/main.rs:115:5
    |
115 |     let mut game = Game::new();
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: #[warn(unreachable_code)] on by default

warning: variable does not need to be mutable
   --> src/main.rs:115:9
    |
115 |     let mut game = Game::new();
    |         ----^^^^
    |         |
    |         help: remove this `mut`
    |
    = note: #[warn(unused_mut)] on by default

    Finished dev [unoptimized + debuginfo] target(s) in 0.67s
     Running `target/debug/bouncy-args`
Err(TooFewArgs)

It’s nice that we get an unreachable statement warning. It’s also a bit weird that game is no longer required to be mutable. Strange. But most importantly: our argument parsing is working!

Let’s try to use this. We’ll modify the Game::new() method to accept a Frame as input:

impl Game {
    fn new(frame: Frame) -> Game {
        let ball = Ball {
            x: 2,
            y: 4,
            vert_dir: VertDir::Up,
            horiz_dir: HorizDir::Left,
        };
        Game {frame, ball}
    }

    ...
}

And now we can rewrite our main function as:

fn main () {
    match parse_args::parse_args() {
        Err(e) => {
            // prints to stderr instead of stdout
            eprintln!("Error parsing args: {:?}", e);
        },
        Ok(frame) => {
            let mut game = Game::new(frame);
            let sleep_duration = std::time::Duration::from_millis(33);
            loop {
                println!("{}", game);
                game.step();
                std::thread::sleep(sleep_duration);
            }
        }
    }
}

Mismatched types

We’re good, right? Not quite:

error[E0308]: mismatched types
   --> src/main.rs:114:38
    |
114 |             let mut game = Game::new(frame);
    |                                      ^^^^^ expected struct `Frame`, found struct `parse_args::Frame`
    |
    = note: expected type `Frame`
               found type `parse_args::Frame`

We now have two different definitions of Frame: in our parse_args module, and in main.rs. Let’s fix that. First, delete the Frame declaration in main.rs. Then add the following after our mod parse_args; statement:

use self::parse_args::Frame;

self says we’re finding a module that’s a child of the current module.

Public and private

Now everything will work, right? Wrong again! cargo build will vomit a bunch of these errors:

error[E0616]: field `height` of struct `parse_args::Frame` is private
  --> src/main.rs:85:23
   |
85 |         for row in 0..self.frame.height {
   |

By default, identifiers are private in Rust. In order to expose them from one module to another, you need to add the pub keyword. For example:

pub width: u32,
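
If it helps to see the end state before doing it yourself, here’s roughly how the public-facing items in parse_args.rs might look (a sketch; ParseArgs and the helper functions can stay private, since main.rs never touches them):

#[derive(Debug)]
pub struct Frame {
    pub width: u32,
    pub height: u32,
}

#[derive(Debug)]
pub enum ParseError {
    TooFewArgs,
    TooManyArgs,
    InvalidInteger(String),
}

pub fn parse_args() -> Result<Frame, ParseError> {
    // ... body unchanged ...
}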

Go ahead and add pub as needed. Finally, if you run cargo run, you should see Error parsing args: TooFewArgs. And if you run cargo run 5 5, you should see a much smaller frame than before. Hurrah!

Exercise 4

What happens if you run cargo run 0 0? How about cargo run 1 1? Put in some better error handling in parse_args.

Exit code

Alright, one final irritation. Let’s provide some invalid arguments and inspect the exit code of the process:

$ cargo run 5
Error parsing args: TooFewArgs
$ echo $?
0

For those not familiar: a 0 exit code means everything went OK. That’s clearly not the case here! If we search the standard library, it seems std::process::exit can be used to address this. Go ahead and try using that to solve the problem here; one possible shape appears in the sketch below.
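
One possible shape for that approach (a sketch; the exit code 1 is an arbitrary nonzero choice):

fn main() {
    match parse_args::parse_args() {
        Err(e) => {
            eprintln!("Error parsing args: {:?}", e);
            // A nonzero exit code tells the shell something went wrong.
            std::process::exit(1);
        }
        Ok(frame) => {
            let mut game = Game::new(frame);
            let sleep_duration = std::time::Duration::from_millis(33);
            loop {
                println!("{}", game);
                game.step();
                std::thread::sleep(sleep_duration);
            }
        }
    }
}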

However, we’ve got one more option: we can return a Result straight from main!

fn main () -> Result<(), self::parse_args::ParseError> {
    match parse_args::parse_args() {
        Err(e) => {
            return Err(e);
        },
        Ok(frame) => {
            let mut game = Game::new(frame);
            let sleep_duration = std::time::Duration::from_millis(33);
            loop {
                println!("{}", game);
                game.step();
                std::thread::sleep(sleep_duration);
            }
        }
    }
}

Exercise 5 Can you do something to clean up the nesting a bit here?

Better error handling

The error handling problem we had in the last lesson involved the call to top_bottom. I’ve already included a solution to that in the download of the code provided. Guess what I changed since last time and then check the code to confirm that you’re right.

If you’re following very closely, you may be surprised that there aren’t more warnings about unused Result values coming from other calls to write!. As far as I can tell, this is in fact a bug in the Rust compiler.

Still, it would be good practice to fix up those calls to write!. Take a stab at doing so.

Next time

We still didn’t fix our double buffering problem, we’ll get to that next time. We’re also going to introduce some more error handling from the standard library. And maybe we’ll get to play a bit more with iterators as well.

Rust at FP Complete | Introduction

November 05, 2018 04:00 AM

November 04, 2018

Brent Yorgey

Hendrix teams at ACM ICPC

Today I took five students from Hendrix to Fort Smith to compete in the Mid-Central USA Regional of the ACM International Collegiate Programming Contest (ICPC). Both teams solved five out of ten problems, and finished second and fourth out of 18 teams at our site (23rd and 25th out of 121 teams in the whole region). I’m really proud of them!

A lot of fun was had by all. Thanks to Andrew Mackey and the rest of the folks at UAFS for hosting a great contest!

by Brent at November 04, 2018 04:53 AM

November 03, 2018

Matt Parsons

The Trouble with Typed Errors

You, like me, program in either Haskell, or Scala, or F#, or Elm, or PureScript, and you don’t like runtime errors. They’re awful and nasty! You have to debug them, and they’re not represented in the types. Instead, we like to use Either (or something isomorphic) to represent stuff that might fail:

data Either l r = Left l | Right r

Either has a Monad instance, so you can short-circuit an Either l r computation with an l value, or bind it to a function on the r value.

So, we take our unsafe, runtime failure functions:

head   :: [a] -> a
lookup :: k -> Map k v -> v
parse  :: String -> Integer

and we use informative error types to represent their possible failures:

data HeadError = ListWasEmpty

head :: [a] -> Either HeadError a

data LookupError = KeyWasNotPresent

lookup :: k -> Map k v -> Either LookupError v

data ParseError 
    = UnexpectedChar Char String
    | RanOutOfInput

parse :: String -> Either ParseError Integer

Except, we don’t really use types like HeadError or LookupError. There’s only one way that head or lookup could fail. So we just use Maybe instead. Maybe a is just like using Either () a – there’s only one possible Left () value, and there’s only one possible Nothing value. (If you’re unconvinced, write newtype Maybe a = Maybe (Either () a), derive all the relevant instances, and try and detect a difference between this Maybe and the stock one).
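
If you want to try that exercise, a minimal sketch of the newtype and its instances looks like this (I’ve renamed it Maybe' to avoid clashing with the Prelude):

newtype Maybe' a = Maybe' (Either () a)
    deriving (Eq, Show)

instance Functor Maybe' where
    fmap f (Maybe' e) = Maybe' (fmap f e)

instance Applicative Maybe' where
    pure = Maybe' . Right
    Maybe' (Left ()) <*> _ = Maybe' (Left ())
    Maybe' (Right f) <*> x = fmap f x

instance Monad Maybe' where
    Maybe' (Left ()) >>= _ = Maybe' (Left ())
    Maybe' (Right x) >>= f = f x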

But, Maybe isn’t great – we’ve lost information! Suppose we have some computation:

foo :: String -> Maybe Integer
foo str = do
    c <- head str
    r <- lookup str strMap
    eitherToMaybe (parse (c : r))

Now, we try it on some input, and it gives us Nothing back. Which step failed? We actually can’t know that! All we can know is that something failed.

So, let’s try using Either to get more information on what failed. Can we just write this?

foo :: String -> Either ??? Integer
foo str = do
    c <- head str
    r <- lookup str strMap
    parse (c : r)

Unfortunately, this gives us a type error. We can see why by looking at the type of >>=:

(>>=) :: (Monad m) => m a -> (a -> m b) -> m b

The type variable m must be an instance of Monad, and the type m must be exactly the same for the value on the left and the function on the right. Either LookupError and Either ParseError are not the same type, and so this does not type check.

Instead, we need some way of accumulating these possible errors. We’ll introduce a utility function mapLeft that helps us:

mapLeft :: (l -> l') -> Either l r -> Either l' r
mapLeft f (Left l) = Left (f l)
mapLeft _ (Right r) = Right r

Now, we can combine these error types:

foo :: String 
    -> Either 
        (Either HeadError (Either LookupError ParseError)) 
        Integer
foo str = do
    c <- mapLeft Left (head str)
    r <- mapLeft (Right . Left) (lookup str strMap)
    mapLeft (Right . Right) (parse (c : r))

There! Now we can know exactly how and why the computation failed. Unfortunately, that type is a bit of a monster. It’s verbose and all the mapLeft boilerplate is annoying.

At this point, most application developers will create an “application error” type, and they’ll just shove everything that can go wrong into it.

data AllErrorsEver
     = AllParseError ParseError
     | AllLookupError LookupError
     | AllHeadError HeadError
     | AllWhateverError WhateverError
     | FileNotFound FileNotFoundError
     | etc...

Now, this slightly cleans up the code:

foo :: String -> Either AllErrorsEver Integer
foo str = do
    c <- mapLeft AllHeadError (head str)
    r <- mapLeft AllLookupError (lookup str strMap)
    mapLeft AllParseError (parse (c : r))

However, there’s a pretty major problem with this code. foo is claiming that it can “throw” all kinds of errors – it’s being honest about parse errors, lookup errors, and head errors, but it’s also claiming that it will throw if files aren’t found, “whatever” happens, etc. There’s no way that a call to foo will result in FileNotFound, because foo can’t even do IO! It’s absurd. The type is too large! And I have written about keeping your types small and how wonderful it can be for getting rid of bugs.

Suppose we want to handle foo’s error. We call the function, and then write a case expression like good Haskellers:

case foo "hello world" of
    Right val -> 
        pure val
    Left err -> 
        case err of
            AllParseError parseError ->
                handleParseError parseError
            AllLookupError lookupErr ->
                handleLookupError
            AllHeadError headErr ->
                handleHeadError
            _ ->
                error "impossible?!?!?!"

Unfortunately, this code is brittle to refactoring! We’ve claimed to handle all errors, but we’re really not handling many of them. We currently “know” that these are the only errors that can happen, but there’s no compiler guarantee that this is the case. Someone might later modify foo to throw another error, and this case expression will break. Any case expression that evaluates any result from foo will need to be updated.

The error type is too big, and so we introduce the possibility of mishandling it. There’s another problem. Let’s suppose we know how to handle a case or two of the error, but we must pass the rest of the error cases upstream:

bar :: String -> Either AllErrorsEver Integer
bar str = 
    case foo str of
        Right val -> Right val
        Left err ->
            case err of
                AllParseError pe ->
                    Right (handleParseError pe)
                _ ->
                    Left err

We know that AllParseError has been handled by bar, because – just look at it! However, the compiler has no idea. Whenever we inspect the error content of bar, we must either a) “handle” an error case that has already been handled, perhaps dubiously, or b) ignore the error, and desperately hope that no underlying code ever ends up throwing the error.

Are we done with the problems with this approach? No! There’s no guarantee that I throw the right error!

head :: [a] -> Either AllErrorsEver a
head (x:xs) = Right x
head [] = Left (AllLookupError KeyWasNotPresent)

This code typechecks, but it’s wrong, because LookupError is only supposed to be thrown by lookup! It’s obvious in this case, but in larger functions and codebases, it won’t be so clear.

Monolithic error types are bad

So, having a monolithic error type has a ton of problems. I’m going to make a claim here:

All error types should have a single constructor

That is, no sum types for errors. How can we handle this?

Let’s maybe see if we can make Either any nicer to use. We’ll define a few helpers:

{-# LANGUAGE TypeOperators #-}

type (+) = Either
infixr 5 +

l :: l -> Either l r
l = Left

r :: r -> Either l r
r = Right

Now, let’s refactor that uglier Either code with these new helpers:

foo :: String 
    -> Either 
        (HeadError + LookupError + ParseError) 
        Integer
foo str = do
    c <- mapLeft l (head str)
    rest <- mapLeft (r . l) (lookup str strMap)
    mapLeft (r . r) (parse (c : rest))

Well, the syntax is nicer. We can case over the nested Either in the error branch to eliminate single error cases. It’s easier to ensure we don’t claim to throw errors we don’t – after all, GHC will correctly infer the type of foo, and if GHC infers a type variable for any +, then we can assume that we’re not using that error slot, and can delete it.

Unfortunately, there’s still the mapLeft boilerplate. And expressions which you’d really want to be equal, aren’t –

x :: Either (HeadError + LookupError) Int
y :: Either (LookupError + HeadError) Int

The values x and y are isomorphic, but we can’t use them in a do block because they’re not exactly equal. If we add errors, then we must revise all mapLeft code, as well as all case expressions that inspect the errors. Fortunately, these are entirely compiler-guided refactors, so the chance of messing them up is small. However, they contribute significant boilerplate, noise, and busywork to our program.

Boilerplate be gone!

Well, turns out, we can get rid of the order dependence and boilerplate with type classes! The most powerful approach is to use “classy prisms” from the lens package. Let’s translate our types from concrete values to prismatic ones:

-- Concrete:
head :: [a] -> Either HeadError a

-- Prismatic:
head :: AsHeadError err => [a] -> Either err a


-- Concrete:
lookup :: k -> Map k v -> Either LookupError v

-- Prismatic:
lookup 
    :: (AsLookupError err) 
    => k -> Map k v -> Either err v

Now, type class constraints don’t care about order – (Foo a, Bar a) => a and (Bar a, Foo a) => a are exactly the same thing as far as GHC is concerned. The AsXXX type classes will automatically provide the mapLeft stuff for us, so now our foo function looks a great bit cleaner:

foo :: (AsHeadError err, AsLookupError err, AsParseError err)
    => String -> Either err Integer
foo str = do
    c <- head str
    r <- lookup str strMap
    parse (c : r)

This appears to be a significant improvement over what we’ve had before! And, most of the boilerplate with the AsXXX classes is taken care of via Template Haskell:

makeClassyPrisms ''ParseError
-- this line generates the following:

class AsParseError a where
    _ParseError :: Prism' a ParseError
    _UnexpectedChar :: Prism' a (Char, String)
    _RanOutOfInput :: Prism' a ()

instance AsParseError ParseError where
    -- etc...

However, we do have to write our own boilerplate when we eventually want to concretely handle these types. We may end up writing a huge AppError that all of these errors get injected into.

There’s one major, fatal flaw with this approach. While it composes very nicely, it doesn’t decompose at all! There’s no way to catch a single case and ensure that it’s handled. The machinery that prisms give us doesn’t allow us to separate out a single constraint, so we can’t pattern match on a single error.

Once again, our types become ever larger, with all of the problems that entails.

Generics to the rescue!

What we really want is:

  • Order independence
  • No boilerplate
  • Easy composition
  • Easy decomposition

In PureScript or OCaml, you can use open variant types to do this flawlessly. Haskell doesn’t have open variants, and the attempts to mock them end up quite clumsy to use in practice.

I’m happy to say that the entire job is handled quite nicely with the amazing generic-lens package. I created a gist that demonstrates their usage, but the magic comes down to this simple fact: there’s an instance of the prismatic AsType class for Either, which allows you to “pluck” a constraint off. This satisfies all of the things I wanted in my list, and we can consider representing errors mostly solved.
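
To get a feel for the decomposition without pulling in generic-lens, here’s a hand-rolled sketch of the “plucking” idea (catchOne is my name, not the library’s):

-- Handle just the first error case of a nested Either, passing the
-- rest through. Note that the result type no longer mentions e at
-- all: the compiler now knows that case is impossible downstream.
catchOne
    :: Either (Either e rest) a
    -> (e -> Either rest a)
    -> Either rest a
catchOne (Right a) _ = Right a
catchOne (Left (Left e)) handler = handler e
catchOne (Left (Right rest)) _ = Left rest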

Mostly?

Well, ExceptT e IO a still imposes a significant runtime performance hit, and asynchronous exceptions aren’t considered here. A bifunctor IO type like newtype BIO err a = BIO (IO a) which carries the type class constraints of the errors it contains is promising, but I haven’t been able to write a satisfying interface to this yet.

I also haven’t used this technique in a large codebase yet, and I don’t know how it scales. And the technique does require you to be comfortable with lens, which is a fairly high bar for training new folks on. I’m sure that API improvements could be made to make this style more accessible and remove some of the lens knowledge prerequisites.

November 03, 2018 12:00 AM

Capability and Suitability

Gary Bernhardt has a fantastic talk on Capability vs Suitability, where he separates advances in software engineering into two buckets:

  • Capability: The ability to do new things!
  • Suitability: The ability to do things well.

Capability is progressive and daring, while suitability is conservative and boring. Capability wants to create entirely new things, while suitability wants to refine existing things.

This post is going to explore a metaphor with bicycles, specifically bike tires, while we think about capability and suitability. When you get a bike, you have so many options. Tire size is one of them. You can opt for a super narrow road tire – a mere 19mm in width! Or, on the other end of the scale, you can opt for a truly fat tire at around 5” in width. What’s the difference?

Narrower tires are less capable – there is less terrain you can cover on a narrow tire. However, they’re more suitable for the terrain they can cover – a 19mm tire will be significantly lighter and faster than a 5” tire. A good 19mm tire weighs around 200g, while a 5” tire might weigh 1,800g each. Lugging around an extra 7lbs of rubber takes a lot of energy! Additionally, all that rubber is going to have a lot of rolling resistance – it’ll be harder to push across the ground on smooth surfaces where the 19mm tire excels.

So, most cyclists don’t use fat tire bikes. But they also don’t use 19mm skinny tires. Most road cyclists have moved up to 25 or 28mm tires. While the 19mm tires work fantastically on a perfectly smooth surface, they start suffering when the road gets bumpy. All the bumps and rough surfaces call for a slightly more capable tire. The wider tires can run lower air pressure, which lets them float over bumps rather than being bumped up and down.

So, we have two competing forces in bike tires:

  • The speed and comfort on the terrain you ride most frequently
  • The speed and comfort on the worst terrain you encounter regularly

You want enough capability to handle the latter, while a tire that’s suitable for the former.

In computer programming, we tend to reach for the most capable thing we can get our hands on. Dynamically typed, impure, and Turing complete programming languages like Ruby, JavaScript, and Python are immensely popular. Statically typed languages are often seen as stifling, and pure languages even more so. There simply aren’t many languages that are Turing incomplete, that’s how little we like them!

Yet, these extra gains in capability are often unnecessary. There’s very little code that’s difficult to statically type with a reasonable type system. Impurity seems convenient, until you realize that you need to look at every single method call to see why the code that renders an HTML page is making an N+1 query and ruining performance. Indeed, even Turing completeness is overrated – a Turing incomplete language permits dramatically more optimizations and static analysis for bug prevention, and very few programs actually require Turing completeness.

In this sense, programmers are like cyclists that pick up the 5” tire fat bikes and then wonder why they’re moving so slow. They may ride in the snow or deep sand once or twice a year, and they stick with the 5” tire for that reason alone. Programmers that are willing to give up the capability they don’t need in order to purchase suitability they could use tend to go faster, as you might expect. Folks that learn Haskell and become sufficiently familiar with purely functional and statically typed programming tend to take those practices with them, even in impure or dynamically typed languages.

It is easier to understand what you did when you limit what you can do.

November 03, 2018 12:00 AM

November 02, 2018

Douglas M. Auclair (geophf)

October 2018 1HaskellADay Problems and Solutions

by geophf (noreply@blogger.com) at November 02, 2018 02:39 PM

October 31, 2018

Chris Smith 2

Hi Glaukon.

Hi Glaukon. There’s a bunch of stuff related to my teaching at https://drive.google.com/drive/folders/0B-qIu_nqxaMoYTQ4ODZjMjMtNWRiOC00OWZiLWI3MzMtOGIxODIyZDkxODBk?usp=sharing It’s in varying degrees of completion, but it includes an outline of units and lessons, descriptions of themes, worksheets and handouts, and some (slightly out-of-date) presentation slides, all of which have been used in past classes. There’s also a comic book that is mostly complete, that I sometimes give out as just a cool add-on for students in the class. Finally, I’ve just finished completely writing (at least a first draft of) the first unit of the online guide, which can be found at http://code.world when you first visit; or under the Guide button at the bottom of the page.

by Chris Smith at October 31, 2018 04:44 PM

October 29, 2018

Disciple/DDC

Sekiso's Functions

Sekiso said, "Pure functions exist beyond time and space. The results we achieve will be achieved always." Ummon held up his fan and replied "If you yourself do not exist beyond time and space, then how can you be so sure? Functions only have meaning when the hardware is running, and when it is not, there is space, but no time." The master overhearing replied, "Last Tuesday I typed "cabal install cabal-install" and nothing happened."

by Ben Lippmeier (noreply@blogger.com) at October 29, 2018 04:57 AM

October 24, 2018

Matt Parsons

TChan vs TQueue: What's the difference?

I always forget the difference between a TChan and a TQueue. They appear to have an almost identical API, so whenever I need a concurrent message thing, I spend some time working out what the difference is. I’ve done this a few times now, and it’s about time that I write it out so that I don’t need to keep reconstructing it.

Aside: Please don’t use TChan or TQueue. These types are unbounded, which means that you can run into unbounded memory use if your producer is faster than your consumer. Instead, use TBChan or TBQueue, which allow you to set a bound. I have run into issues with livelock with the standard stm and stm-chans types, and have found that the unagi-chan package has better performance in all cases, so I usually reach for that when I have a need for a high performance concurrent channel. Unfortunately, the unagi-chan variants don’t operate in STM, which can be a dealbreaker depending on your workflow.

tl;dr: Use a channel when you want all readers to receive each message. Use a queue when you want only one reader to receive each message.

The docs for a TChan are concise:

TChan is an abstract type representing an unbounded FIFO channel.

The docs for a TQueue are a bit more verbose:

A TQueue is like a TChan, with two important differences:

  • it has faster throughput than both TChan and Chan (although the costs are amortised, so the cost of individual operations can vary a lot).
  • it does not provide equivalents of the dupTChan and cloneTChan operations.

The implementation is based on the traditional purely-functional queue representation that uses two lists to obtain amortised $O(1)$ enqueue and dequeue operations.
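
That “two lists” trick is the classic purely functional queue; a minimal sketch of the idea, divorced from STM, looks like this:

-- Push onto the back list; pop from the front list; when the front
-- runs dry, reverse the back list into the front. Each element is
-- reversed at most once, giving amortised O(1) operations.
data Queue a = Queue [a] [a]

push :: a -> Queue a -> Queue a
push x (Queue front back) = Queue front (x : back)

pop :: Queue a -> Maybe (a, Queue a)
pop (Queue (x:front) back) = Just (x, Queue front back)
pop (Queue [] back) =
    case reverse back of
        []        -> Nothing
        (x:front) -> Just (x, Queue front [])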

So the docs say that TQueue is faster, but has fewer operations. Presumably, we should use a TQueue unless we need these operations. What do dupTChan and cloneTChan do? Let’s look at the Haddocks:

dupTChan :: TChan a -> STM (TChan a)

Duplicate a TChan: the duplicate channel begins empty, but data written to either channel from then on will be available from both. Hence this creates a kind of broadcast channel, where data written by anyone is seen by everyone else.

cloneTChan :: TChan a -> STM (TChan a)

Clone a TChan: similar to dupTChan, but the cloned channel starts with the same content available as the original channel.

So, what’s the point of these? Let’s write some code and see what happens.

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM
import Data.Foldable (for_)

test2 :: IO ()
test2 = do
    c <- newTChanIO
    forkIO $ do
        for_ [1..5] $ \n -> do
            i <- atomically $ readTChan c
            putStrLn $ "First thread received: " ++ show i ++ " on #: " ++ show n

    forkIO $ do
        for_ [6..10] $ \n -> do
            i <- atomically $ readTChan c
            putStrLn $ "Second thread received: " ++ show i ++ " on #: " ++ show n

    for_ [1..10] $ \i -> do
       threadDelay 10000
       atomically $ writeTChan c i

This creates a new TChan, then forks two threads. Each thread sits and waits on the TChan to have a value, and then it prints the value out. Finally we stuff the numbers 1 through 10 into the channel, with a slight delay.

This is the output:

Beginning test...
First thread received: 1 on #: 1
First thread received: 2 on #: 2
Second thread received: 3 on #: 6
First thread received: 4 on #: 3
Second thread received: 5 on #: 7
Second thread received: 6 on #: 8
First thread received: 7 on #: 4
Second thread received: 8 on #: 9
First thread received: 9 on #: 5
Second thread received: 10 on #: 10

Alright, so the two threads mostly just interleave their work: the values 1-10 are split between them, with each value received by exactly one thread. Let’s try using dupTChan and see what happens:

test3 :: IO ()
test3 = do
    c <- newTChanIO
    forkIO $ do
        for_ [1..5] $ \n -> do
            i <- atomically $ readTChan c
            putStrLn $ "First thread received: " ++ show i ++ " on #: " ++ show n

    forkIO $ do
        c' <- atomically $ dupTChan c
        for_ [6..10] $ \n -> do
            i <- atomically $ readTChan c'
            putStrLn $ "Second thread received: " ++ show i ++ " on #: " ++ show n

    for_ [1..10] $ \i -> do
       threadDelay 10000
       atomically $ writeTChan c i

This is basically the same code, but we’ve duplicated the TChan in the second thread. Here’s the new output:

Beginning test...
First thread received: 1 on #: 1
Second thread received: 1 on #: 6
Second thread received: 2 on #: 7
First thread received: 2 on #: 2
First thread received: 3 on #: 3
Second thread received: 3 on #: 8
First thread received: 4 on #: 4
Second thread received: 4 on #: 9
First thread received: 5 on #: 5
Second thread received: 5 on #: 10

Interesting! So the duplicated channel is able to receive a writeTChan, and both threads are able to see the values. So, a TChan with a dupTChan call is suitable for when you want all copies of the TChan to receive a value. A TQueue will only permit a value to be seen once, by a single thread.

There’s a variant of newTChan called newBroadcastTChan. How does it differ? The docs explain:

Create a write-only TChan. More precisely, readTChan will retry even after items have been written to the channel. The only way to read a broadcast channel is to duplicate it with dupTChan.

Consider a server that broadcasts messages to clients:

serve :: TChan Message -> Client -> IO loop
serve broadcastChan client = do
    myChan <- dupTChan broadcastChan
    forever $ do
        message <- readTChan myChan
        send client message

The problem with using newTChan to create the broadcast channel is that if it is only written to and never read, items will pile up in memory. By using newBroadcastTChan to create the broadcast channel, items can be garbage collected after clients have seen them.

The last paragraph is the important part. A standard TChan will accumulate messages until that copy of the TChan is read. A broadcast TChan will not accumulate messages on the write end of the channel. This points to an important performance concern:

  • If you dupTChan a channel, you must read from every duplicate of the TChan in order to avoid unbounded memory growth.
  • If you intend on having a write-only end, you must use newBroadcastTChan for any channel that you won’t read from.

TQueue avoids this problem as it cannot be duplicated.
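
For completeness, here’s what the first test might look like with a TQueue (a sketch; the observable behavior matches test2, since a TQueue also hands each value to exactly one reader):

testQueue :: IO ()
testQueue = do
    q <- newTQueueIO
    forkIO $ for_ [1..5] $ \n -> do
        i <- atomically $ readTQueue q
        putStrLn $ "First thread received: " ++ show i ++ " on #: " ++ show n
    forkIO $ for_ [6..10] $ \n -> do
        i <- atomically $ readTQueue q
        putStrLn $ "Second thread received: " ++ show i ++ " on #: " ++ show n
    for_ [1..10] $ \i -> do
        threadDelay 10000
        atomically $ writeTQueue q i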

October 24, 2018 12:00 AM

October 22, 2018

Philip Wadler

Co-Recursive Interview


Adam Gordon Bell interviewed me for his Co-Recursive podcast. Thanks, Adam!

by Philip Wadler (noreply@blogger.com) at October 22, 2018 07:38 PM

Functional Jobs

Haskell Developer Position at Karamaan Group (Full-time)

Karamaan Group is an investment firm based in Manhattan. We are looking for an outstanding software developer to develop tools for financial analysis and knowledge management. Since we are investors and not traders, our focus is not on building trading systems, but on business and data analysis. We strongly value people who take pride in their work and have a craftsman’s dedication to building bespoke products. Our ideal candidate is an experienced software developer with a penchant for abstraction and analysis, a solid sense of organization, and an ability to communicate and persuade. While we are looking for a candidate who demonstrates an intense focus on quality, she also needs a master engineer’s intuition for how to make trade offs that are a necessary part of day-to-day software development.

Candidates should have some formal training in a quantitative field and a keen interest in building robust and elegant software. This is a high-impact, high-visibility position where successful candidates will be entrusted with a lot of responsibility for products that have a direct effect on the P&L of the firm and influence on our workflow.

The ideal candidate will have production experience with Haskell or another functional language, relational databases and relational data modeling, Python, and Java. Additionally, as this is a startup environment, the ideal candidate will be comfortable filling many different technology roles, and contributing to the success of the firm wherever they can, not just in agreement with their job description.

Karamaan Group is an investment adviser and manager catering to family offices and institutional investors. We value innovative thinking, encourage an entrepreneurial atmosphere, and enjoy spirited discussions.

Get information on how to apply for this position.

October 22, 2018 03:46 PM

October 21, 2018

Neil Mitchell

Announcing Profiterole - GHC Profile Viewer

Summary: profiterole reformats GHC profile reports so they are easier to read.

Do you often work with GHC time profiling reports? Do you find them excessively long and hard to navigate? Profiterole reads standard GHC .prof files and generates both textual and HTML reports which are typically more than 10x smaller. As an example compare HLint profile input to HLint Profiterole output.

Usage

To run, first install (cabal update && cabal install profiterole), generate a GHC profile the normal way, then run:

profiterole myprogram.prof

Profiterole will generate myprogram.profiterole.txt and myprogram.profiterole.html - both contain the same information, but the HTML has hyperlinks. There are three columns of numbers:

  • TOT is the total time spent in any item under this code, what GHC calls inherited time.
  • INH is the total time spent in the items that Profiterole did not move out to the top level.
  • IND is the individual time, just like GHC profiles.

For large programs, using +RTS -P (instead of the common -p) will give more accurate results.

How it works

Profiterole aims to make the profile shorter by combining common subtrees and lifting them to the root - e.g. if you call parseFile from 7 places in the code, instead of having 7 pieces of parseFile profiling, Profiterole will give you one. With only 1 place containing parseFile, it's easier to optimise parseFile, and it's easier to read the code calling it without getting lost in the internals.

How to profile

Given profile data, different ways of looking at it reveal different insights, and the ones discovered by Profiterole have definitely had value. I tend to use:

  • I first use Profiteur to get an overall sense of where the time is going visually. Profiteur lets me orient myself, but it makes it a little difficult to drill into the details and repeat experiments.
  • I then use Profiterole to see if there were any repeated pieces of code Profiteur missed, and then dig into the details using Profiterole.
  • Only if I'm really going into the details do I go to the GHC .prof output - it's pretty rare.

by Neil Mitchell (noreply@blogger.com) at October 21, 2018 01:03 PM

October 20, 2018

Dan Piponi (sigfpe)

Running from the past


Preface

Functional programming encourages us to program without mutable state. Instead we compose functions that can be viewed as state transformers. It's a change of perspective that can have a big impact on how we reason about our code. But it's also a change of perspective that can be useful in mathematics and I'd like to give an example: a really beautiful technique that allows you to sample from the infinite limit of a probability distribution without needing an infinite number of operations. (Unless you're infinitely unlucky!)



Markov Chains

A Markov chain is a sequence of random states where each state is drawn from a random distribution that possibly depends on the previous state, but not on any earlier state. So it is a sequence $X_0, X_1, X_2, \ldots$ such that $P(X_{n+1} = x \mid X_0, X_1, \ldots, X_n) = P(X_{n+1} = x \mid X_n)$ for all $n$. A basic example might be a model of the weather in which each day is either sunny or rainy but where it's more likely to be rainy (or sunny) if the previous day was rainy (or sunny). (And to be technically correct: having information about the weather two or more days ago doesn't help us if we know yesterday's weather.)


Like imperative code, this description is stateful. The state at step $n+1$ depends on the state at step $n$. Probability is often easier to reason about when we work with independent identically drawn random variables and our $X_n$ aren't of this type. But we can eliminate the state from our description using the same method used by functional programmers.


Let's choose a Markov chain to play with. I'll pick one with 3 states called $A$, $B$ and $C$, and with transition probabilities given by $P(X_{n+1} = y \mid X_n = x) = T_{xy}$ where

$$T = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix}$$
Here's a diagram illustrating our states:

[Diagram: the three states $A$, $B$ and $C$, with arrows labelled by the transition probabilities above.]


Implementation

First some imports:



> {-# LANGUAGE LambdaCase #-}
> {-# LANGUAGE TypeApplications #-}



> import Data.Sequence(replicateA)
> import System.Random
> import Control.Monad.State
> import Control.Monad
> import Data.List
> import Data.Array



And now the type of our random variable:



> data ABC = A | B | C deriving (Eq, Show, Ord, Enum, Bounded)



We are now in a position to simulate our Markov chain. First we need some random numbers drawn uniformly from [0, 1]:



> uniform :: (RandomGen gen, MonadState gen m) => m Double
> uniform = state random



And now the code to take a single step in the Markov chain:



> step :: (RandomGen gen, MonadState gen m) => ABC -> m ABC
> step A = do
>     a <- uniform
>     if a < 0.5
>         then return A
>         else return B
> step B = do
>     a <- uniform
>     if a < 1/3.0
>         then return A
>         else if a < 2/3.0
>             then return B
>             else return C
> step C = do
>     a <- uniform
>     if a < 0.5
>         then return B
>         else return C



Notice how the step function generates a new state at random in a way that depends on the previous state. The m ABC in the type signature makes it clear that we are generating random states at each step.


We can simulate the effect of taking $n$ steps with a function like this:



> steps :: (RandomGen gen, MonadState gen m) => Int -> ABC -> m ABC
> steps 0 i = return i
> steps n i = do
>     i <- steps (n-1) i
>     step i



We can run, for example, for 3 steps, starting with $A$, with a line like so:



*Main> evalState (steps 3 A) gen
B



The starting state of our random number generator is given by gen.


Consider the distribution of states after taking $n$ steps. For Markov chains of this type, we know that as $n$ goes to infinity the distribution of the $n$th state approaches a limiting "stationary" distribution. There are frequently times when we want to sample from this final distribution. For a Markov chain as simple as this example, you can solve exactly to find the limiting distribution. But for real world problems this can be intractable. Instead, a popular solution is to pick a large $n$ and hope it's large enough. As $n$ gets larger the distribution gets closer to the limiting distribution. And that's the problem I want to solve here - sampling from the limit. It turns out that by thinking about random functions instead of random states we can actually sample from the limiting distribution exactly.



Some random functions


Here is a new version of our random step function:



> step' :: (RandomGen gen, MonadState gen m) => m (ABC -> ABC)
> step' = do
>     a <- uniform
>     return $ \case
>         A -> if a < 0.5 then A else B
>         B -> if a < 1/3.0
>                 then A
>                 else if a < 2/3.0 then B else C
>         C -> if a < 0.5 then B else C



In many ways it's similar to the previous one. But there's one very big difference: the type signature m (ABC -> ABC) tells us that it's returning a random function, not a random state. We can simulate the result of taking 10 steps, say, by drawing 10 random functions, composing them, and applying the result to our initial state:



> steps' :: (RandomGen gen, MonadState gen m) => Int -> m (ABC -> ABC)
> steps' n = do
>     fs <- replicateA n step'
>     return $ foldr (flip (.)) id fs



Notice the use of flip. We want to compose functions so that after drawing $f_1, f_2, \ldots, f_n$ we apply $f_n \circ \cdots \circ f_2 \circ f_1$, each time composing on the left by the newly drawn function. This means that for a fixed seed gen, each time you increase $n$ by 1 you get the next step in a single simulation: (BTW I used replicateA instead of replicateM to indicate that these are independent random draws. It may be well known that you can use Applicative instead of Monad to indicate independence but I haven't seen it written down.)



*Main> [f A | n <- [0..10], let f = evalState (steps' n) gen]
[A,A,A,B,C,B,A,B,A,B,C]



When I first implemented this I accidentally forgot the flip. So maybe you're wondering what effect removing the flip has? The effect is about as close to a miracle as I've seen in mathematics. It allows us to sample from the limiting distribution in a finite number of steps!


Here's the code:



> steps_from_past :: (RandomGen gen, MonadState gen m) => Int -> m (ABC -> ABC)
> steps_from_past n = do
>     fs <- replicateA n step'
>     return $ foldr (.) id fs



We end up building $f_1 \circ f_2 \circ \cdots \circ f_n$. This is still a composition of independent identically distributed functions and so it's still drawing from exactly the same distribution as steps'. Nonetheless, there is a difference: for a particular choice of seed, steps_from_past n no longer gives us a sequence of states from a Markov chain. Running with argument $n$ draws a random composition of $n$ functions. But if you increase $n$ by 1 you don't add a new step at the end. Instead you effectively restart the Markov chain with a new first step generated by a new random seed.


Try it and see:



*Main> [f A | n <- [0..10], let f = evalState (steps_from_past n) gen]
[A,A,A,A,A,A,A,A,A,A,A]



Maybe that's surprising. It seems to get stuck in one state. In fact, we can try applying the resulting function to all three states.



*Main> [fmap f [A, B, C] | n <- [0..10], let f = evalState (steps_from_past n) gen]
[[A,B,C],[A,A,B],[A,A,A],[A,A,A],[A,A,A],[A,A,A],[A,A,A],[A,A,A],[A,A,A],[A,A,A],[A,A,A]]



In other words, for large enough $n$ we get the constant function.


Think of it this way: If f isn't injective then it's possible that two states get collapsed to the same state. If you keep picking random f's it's inevitable that you will eventually collapse down to the point where all arguments get mapped to the same state. Once this happens, we'll get the same result no matter how large we take $n$. If we can detect this then we've found the limit of $f_1 \circ f_2 \circ \cdots \circ f_n$ as $n$ goes to infinity. But because we know composing forwards and composing backwards lead to draws from the same distribution, the limiting backward composition must actually be a draw from the same distribution as the limiting forward composition. That flip can't change what probability distribution we're drawing from - just the dependence on the seed. So the value the constant function takes is actually a draw from the limiting stationary distribution.


We can code this up:



> all_equal :: (Eq a) => [a] -> Bool
> all_equal [] = True
> all_equal [_] = True
> all_equal (a : as) = all (== a) as



> test_constant :: (Bounded a, Enum a, Eq a) => (a -> a) -> Bool
> test_constant f =
>     all_equal $ map f $ enumFromTo minBound maxBound



This technique is called coupling from the past. It's "coupling" because we've arranged that different starting points coalesce. And it's "from the past" because we're essentially answering the question of what the outcome of a simulation would be if we started infinitely far in the past.



> couple_from_past :: (RandomGen gen, MonadState gen m, Enum a, Bounded a, Eq a) =>
>     m (a -> a) -> (a -> a) -> m (a -> a)
> couple_from_past step f = do
>     if test_constant f
>         then return f
>         else do
>             f' <- step
>             couple_from_past step (f . f')



We can now sample from the limiting distribution a million times, say:



*Main> let samples = map ($ A) $ evalState (replicateA 1000000 (couple_from_past step' id)) gen



We can now count how often A appears:



*Main> fromIntegral (length $ filter (== A) samples)/1000000
0.285748



That's a pretty good approximation to $2/7 \approx 0.2857$, the exact answer that can be found by computing the eigenvector of the transition matrix corresponding to an eigenvalue of 1.



> gen = mkStdGen 669




Notes

The technique of coupling from the past first appeared in a paper by Propp and Wilson. The paper Iterated Random Functions by Persi Diaconis gave me a lot of insight into it. Note that the code above is absolutely not how you'd implement this for real. I wrote the code that way so that I could switch algorithm with the simple removal of a flip. In fact, with some clever tricks you can make this method work with state spaces so large that you couldn't possibly hope to enumerate all starting states to detect if convergence has occurred. Or even with uncountably large state spaces. But I'll let you read the Propp-Wilson paper to find out how.

by Dan Piponi (noreply@blogger.com) at October 20, 2018 11:22 PM

Brent Yorgey

What’s the Difference? video and slides

At ICFP in St. Louis I presented my paper with Kenny Foner, What’s the Difference? A Functional Pearl on Subtracting Bijections. Many people seemed to enjoy the talk and we got lots of great feedback afterwards. We will probably write up an extended version of the paper for JFP, to include extra stuff we could not fit in the paper and to incorporate things we learned at ICFP.

(Fun fact: this was actually my first ICFP talk—though not my first ICFP paper. I am glad I have a bunch of experience giving talks at this point; I can remember a time in my life when giving a talk on that stage would have been terrifying.)

Here is the video of the talk:

<iframe allowfullscreen="true" class="youtube-player" height="360" src="https://www.youtube.com/embed/hq9gAe-pmzM?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" type="text/html" width="640"></iframe>

You can download my slides here. This version of the slides includes speaking notes—it’s not exactly what I ended up saying, but it should give you a decent idea if you just want to read the slides without watching the video.

by Brent at October 20, 2018 08:38 PM

October 18, 2018

FP Complete

Is Rust functional?

In the past few months, and in particular in the past two weeks, I’ve gotten a number of people asking me the question: Is Rust a functional programming language? This makes sense: I’m a big advocate of functional programming, I work at a company with FP in its name, my primary programming language is Haskell, and yet I use and enjoy Rust. So is Rust consistent with everything else in my FP-centric world, or is it my dirty vice to get away from it all?

by Michael Snoyman (michael@fpcomplete.com) at October 18, 2018 03:02 AM

October 17, 2018

FP Complete

Development Workflows in Haskell

Our monthly webinar series is really starting to take off, and judging by the registration numbers we might be forced into allowing more than 500 people to register for our next webinar, which could be about Rust - don't hold us to that ;) For Roman's webinar, Development Workflows in Haskell, we had 434 registrants, which makes it our most popular webinar so far.

by Robert Bobbett (robert@fpcomplete.com) at October 17, 2018 08:04 PM

October 16, 2018

Neil Mitchell

Announcing Shake 0.17

I'm delighted to announce Shake 0.17. As always, the full changelog is on GitHub, but I'd like to highlight three areas that have seen most attention.

Error Messages

Error messages now support GHC's HasCallStack feature, giving code locations in error messages. As an example, let's define rules for both *.txt and overlap.*, then try and build overlap.txt. With Shake 0.17 we get the far more informative error:

Error when running Shake build system:
at Example.hs:50:46-55:
* Depends on: overlap.txt
* Raised the exception:
Build system error - key matches multiple rules:
Key type: FileQ
Key value: overlap.txt
Rules matched: 2
Rule 1: "overlap.*" %> at Example::21:94-106:
Rule 2: ".txt" %> at Example::24:94-106:
Modify your rules so only one can produce the above key

We can see where the dependency was introduced (line 50), where the rules were defined (lines 21 and 24), and what their patterns were.

The Database module

The new module Development.Shake.Database provides operations for working with open Shake databases - meaning you can now open the database, run some actions against it, and shut it. Unlike before, you can now run against an open database repeatedly, and query the resulting database for live or erroneous files. When combined with the new feature that setting shakeFiles to /dev/null results in no on-disk representation of Shake, you can create an in-memory database, execute it many times, then throw it away. These features aren't targeted at build systems, but allow reuse of Shake in other domains.

If you are using the Database module, and have a way to observe changes interactively, the deprioritize function may be of use, to cause Shake to build some unimportant rules last.

This work was supported by Digital Asset.

Changes to Builtin Rules

Most users shouldn't need to define their own types of rules, but for those who do, the biggest improvement is probably the better documentation in Development.Shake.Rule, complete with a worked example. At the same time, the builtin rules have changed slightly in incompatible ways - the docs explain the new state. These changes are intended to set the stage for Cloud Shake, following the pattern described in Build Systems a la Carte. I hope that a forthcoming release of Shake will provide an actual implementation of Cloud Shake.

by Neil Mitchell (noreply@blogger.com) at October 16, 2018 09:49 AM

October 15, 2018

Ken T Takusagawa

[jhomitso] Haskell pack int to bytes

The following Haskell code demonstrates packing a 32-bit unsigned integer (Word32) into a (lazy) ByteString with big-endian byte ordering, using the Put monad of Data.Binary in the binary package.  Essentially, runPut . putWord32be is the Word32 -> ByteString conversion function.

module Main(main) where {
import qualified Data.Binary.Put;
import Data.Word(Word32);
import qualified Data.ByteString.Lazy;

value :: Word32;
value = 65539; -- 3 + 2^16

main :: IO();
main = out_bytestring $ Data.Binary.Put.runPut $ Data.Binary.Put.putWord32be value;

{- expected output:
0 1 0 3
-}

out_bytestring :: Data.ByteString.Lazy.ByteString -> IO();
out_bytestring = putStrLn . unwords . map show . Data.ByteString.Lazy.unpack;
}

This code is analogous to Perl's pack function.  Here is the mapping from Perl's pack template characters to Haskell functions:

L = putWord32host
N = putWord32be (network, big-endian)
V = putWord32le (x86, vax, little-endian)

To convert from lazy to strict ByteString, do Strict.concat . Lazy.toChunks
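
For example, a minimal sketch of that conversion in the same braces style (the qualified import names are assumptions):

module ToStrict where {
import qualified Data.ByteString as Strict;
import qualified Data.ByteString.Lazy as Lazy;

to_strict :: Lazy.ByteString -> Strict.ByteString;
to_strict = Strict.concat . Lazy.toChunks;
}

(Recent versions of the bytestring library also provide Data.ByteString.Lazy.toStrict directly.)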

by Ken (noreply@blogger.com) at October 15, 2018 08:27 PM

Chris Smith 2

Fixpoints in Haskell

I gave a brief informal talk about fixpoints and their uses in functional programming at the Norcross Haskathon yesterday. I ended up promising to write it up as a blog post, as well. This is that post.

Just to set expectations from the beginning:

  • This is not research; it’s just exposition of things that are already widely known in programming language theory. If you eat denotational semantics for breakfast, nothing here will be new.
  • This is also not an applied post. You won’t learn (at least directly) how to build better software in Haskell. You might, though, learn a new way of thinking about what your existing Haskell code means.

If you’re still with me, let’s go!

What is a fixpoint?

If you have a function from any set to itself, then a fixpoint of that function is any input that maps to itself. On a standard graph, you can think of these as being the values that fall on the line y = x.

[Figure: fixpoints of a simple function (from Wikimedia Commons)]

Here are a few examples:

  • Consider the function f(x) = x². It has two fixpoints: 0 and 1. These are the two real numbers that don’t change when you square them. (Negative numbers become positive, so they can’t work. Numbers between 0 and 1 become smaller. Numbers larger than 1 get larger.)
  • Now consider the cosine function, with its argument in radians. You may have unknowingly calculated the fixpoint of this function when you were bored in high school math class. If you start with any number, and repeatedly press the cosine button on your calculator, you’ll notice that the answer converges to a number around 0.739. This is the only fixpoint of the cosine function.
  • Finally, consider g(x) = x + 1. This function has no fixpoints, since if g(x) = x, then subtracting x from both sides, you could conclude that 0 = 1.

In general, deciding whether a function has fixpoints or not can be difficult. So there are many different fixpoint theorems. For example, Brouwer’s fixpoint theorem says that any continuous function from a closed ball into itself in Euclidean space must have a fixpoint. Another, which will be important in a bit, is the Knaster-Tarski theorem, which says that any order-preserving function on a complete lattice has a complete lattice of fixpoints. But more on that in a bit.

Why care about fixpoints?

That’s what a fixpoint is, but why should we care about it? Are fixpoints more interesting than a children’s amusement with the cosine button of a calculator? In fact, they are.

A really beautiful sequence of the MIT text Structure and Interpretation of Computer Programs deals with progressively more abstract representations of algorithms for computing a square root. Heron of Alexandria proposed an algorithm as follows. To compute the square root of n: (a) start with a guess, a; (b) compare a with n / a; (c) if they are close enough to each other, then stop; otherwise, average the two and repeat with that as your guess.

For example, if we start by guessing that the square root of 10 is 5, then we go through these steps:

  1. a = 5, but n / a = 2. Those are not close, so our new guess is 7/2.
  2. a = 7/2 (3.5), but n / a = 20/7 (about 2.86). Those are not close, so our new guess is 89/28.
  3. a = 89/28 (about 3.18), but n / a = 280/89 (about 3.15). Those are not quite close enough, so our new guess is 15761/4984.
  4. a = 15761/4984 (about 3.16), and n / a = 49840/15761 (also about 3.16). Those are pretty close, so we can stop.

See what’s happening? Just like with the cosine button, we’re repeating a computation to make a guess better until it converges. The point where it converges is a fixpoint. But this time, the computation was chosen so that its fixpoint is the square root of n, giving a technique for calculating square roots.

To express this, we can write a Haskell implementation built around the notion of searching for a fixpoint.

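The embedded snippet is not reproduced here, so this is only a sketch consistent with the description (the names and the tolerance are assumptions):

fixedPoint :: (Double -> Double) -> Double -> Double
fixedPoint f x
  | abs (x - x') < 1e-10 = x'               -- close enough: treat x' as a fixpoint
  | otherwise            = fixedPoint f x'  -- otherwise keep iterating
  where x' = f x

-- Heron's method: the square root of n is the fixpoint of averaging x and n/x.
squareRoot :: Double -> Double
squareRoot n = fixedPoint (\x -> (x + n / x) / 2) 1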

You can play with this code using CodeWorld, and compute your own square roots.

This is sort of a funky way to compute a fixpoint, though.

  • First, it requires a starting value, which is a wart in the API. A function’s fixpoints don’t really depend on any particular starting value.
  • Second, it relies on the function to converge — to produce a value closer to a fixpoint in its result than its input. This is the reason the average is there at all, instead of just using n/x, which has the right fixpoint but doesn’t converge to that fixpoint.
  • Third, the function may not even have a fixpoint at all! Here, a fixpoint exists for Double because it has finite precision. But the function f(x) = (x + 10/x) / 2 has no fixpoint in the rational numbers, so there’s no chance this would also work with Rational.

These problems are hard to overcome in the current form, but it turns out that by creatively modifying some definitions, we can distill the essence of this into a more abstract combinator, which is Haskell’s fix.

Haskell’s fix combinator

In the Data.Function module of Haskell’s base library, there’s a surprising function.

fix :: (a -> a) -> a

This function computes a fixpoint of any Haskell function. It does it without any of the disadvantages of the version above, and even without any constraints on the types! But how can this be? First of all, even demonstrating the existence of fixpoints is so difficult that it requires heavy mathematical theorems. And even worse, some functions (like g(x) = x + 1, above) have no fixpoints at all!

The trick is that Haskell functions aren’t just ordinary functions on ordinary sets. Haskell functions can throw exceptions, and they can loop indefinitely. To reconcile this with the world of mathematical functions, we say that Haskell functions work not just with the set of ordinary values of its types, but the set of all partially defined values.

Here’s how that works. In math, the set of integers is {0, 1, -1, 2, -2, …}. That’s it! There is nothing else in that set. But in Haskell, we say that the type Integer includes one more value: ⊥. This special value is used to indicate that the function doesn’t produce a true number at all, instead either running forever in an infinite loop, or throwing an exception. In essence, ⊥ means “I don’t know.”

Once ⊥ is taken into account, g(x) = x + 1 does actually have a fixpoint, because g(⊥) = ⊥. That is, if you don’t know what number is passed in, you also don’t know what number comes back out. And this is, in fact, the fixpoint you get from fix. The expression fix g loops forever instead of producing a value, so we say its value is ⊥.

In fact, any time that f(⊥) = ⊥, fix f will produce ⊥. (This is unfortunate if we’re looking for actual numbers as fixpoints like we are above. But later, we’ll be able to recover our original notion of fixpoint in terms of this new one!) This might make you wonder why fix is useful at all! After all, if you don’t know the input to a function, can you ever know the output? Are there any functions for which ⊥ is not a fixpoint? Well, yes there are, and it’s all because of laziness. Consider a constant function, like h(x) = 42. Because it’s a non-strict language, Haskell will produce an output of 42 without even looking at the input. In other words, h(⊥) = 42. ⊥ is not a fixpoint. And, in fact, fix h is 42 for this function.

The definedness ordering

With functions on simple types like Double, these are the only two possibilities: if the function is constant, then fix will produce the constant value, and otherwise it will produce ⊥. (The word for these simple types is “flat”; either you know nothing about the value, or you know everything.) But the result is more interesting when the types have more structure. Given a linked list, for example, there are many possible values where different parts of the value are unknown:

  • ⊥ means that nothing about the list is known.
  • ⊥ : ⊥ means that the list is known to be non-empty, but neither the first element nor the rest of the list are known. This is more information than you have about ⊥ (which might be an empty list).
  • 3 : ⊥ means that the list starts with a 3, but you don’t know what (if anything) comes after that. This is strictly more information than ⊥ : ⊥.
  • ⊥ : [] (also known as [⊥]) means that the list is known to only contain one element, but it’s not known what that element is. This is again strictly more information than ⊥ : ⊥, but it’s incomparable to 3 : ⊥. They are both more info than just whether the list is empty, but neither is strictly more informative than the other. They provide different information.
  • 3 : [] (also known as [3]) is strictly more information than either of ⊥ : [] or 3 : ⊥.

This defines a partial order on Haskell values. It’s different from the order given by Ord, which is the usual meaningful order for that type. This order sorts the values by how much we know about them. So in this order, neither of 3 or 4 is less than the other (they are both completely defined, just different); but ⊥ is less than ⊥ : ⊥, which is in turn less than either 3 : ⊥ or ⊥ : [], and each of these is in turn less than 3 : [].

In fact, any Haskell type has a complete semilattice of values in this definedness order, which just means that given any nonempty collection of values, there is always some unique greatest lower bound, which is the most-defined possible value that’s still less defined than — but compatible with — all of them. For example, for lists of numbers, the greatest lower bound of the set {[3, 2], [3, 1, 5], [3, 4, 5, ⊥]} is 3 : ⊥ : ⊥, since all three lists are compatible with this value, but making it any more defined would have to exclude some of them.

An important attribute of Haskell functions is that they preserve this definedness order. In other words, if x is less than or equal to y in this order — meaning that x is compatible with y but possibly less defined — then f(x) is also less than or equal to f(y). If you think about it, this makes sense. If knowing a little bit about the input gives you some knowledge about the output, then learning more about the input might tell you more about the output, but it should never contradict or take away from what you already knew.

Now here’s where everything starts to come together. The fix combinator always produces the least fixpoint in this definedness ordering. This least fixpoint will be guaranteed to exist by the Knaster-Tarski theorem mentioned earlier, which says that any order-preserving function on a complete semilattice must also have a complete semilattice of fixpoints — and in particular, there must be a least one of them. Definedness is a complete semilattice, and all Haskell functions are order-preserving, and that’s good enough to guarantee that the least fixpoint exists.

Let’s look at another example. Define threeAnd list = 3 : list. A fixpoint here is a list that is not changed at all by prepending a 3 onto it. It can’t be ⊥, because 3 : ⊥ is definitely not the same as ⊥. The answer is an infinite list of 3s, so fix threeAnd gives you exactly that: an infinite list of 3s. We can check this with GHCi.

ghci> import Data.Function
ghci> fix (3 :)
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ^C
Interrupted

Fixpoints and recursion

The reason that fixpoints play such a dominant role in functional programming is that they are intricately related to recursion, and that relationship is an important bridge between the operational realm — understanding what the program does — and the denotational realm — understanding what the program means.

In the operational sense, Haskell’s fixpoints can be defined using recursion, like this:

fix f = x  where x = f x

This definition seems almost too cute to work, but it does! It just starts with an x, and then keeps replacing every x with f(x). After an infinite number of substitutions, we find that the least fixpoint of f is just f(f(f(f(f(f(f(f(…)))))))). In other words, any time we need to refer to the input of the function, we just substitute another function application instead.

In the opposite direction, though, general recursion can be defined in terms of the fixpoint combinator. Suppose you have a programming language with no direct recursion allowed, but you are given a fixpoint combinator. Then there’s a simple syntactic sugar for recovering general recursion. Just take your recursive definition, like x = ... x ..., and rewrite it with an extra parameter that takes the place of recursive uses: x1 x0 = ... x0 .... Notice that x1 is not defined recursively. But setting x = fix x1 satisfies the equation given for x.
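
As a quick illustration (a standard example, not from the post itself):

import Data.Function (fix)

-- factorial without direct recursion: the recursive call is abstracted
-- into the extra parameter "rec", and fix ties the knot.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

Here rec plays the role of the extra parameter x0 above, and fix recovers the recursive definition.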

In the untyped lambda calculus, fix is known as the Y combinator, and it’s a crucial step to showing that the untyped lambda calculus is computationally universal. That’s because lambda calculus doesn’t have general recursion as an a priori language feature. But Y combinators can be built within the untyped lambda calculus itself, so that general recursion can be simulated with fixpoints as described above.

Once you add types, things change a little bit. It’s impossible to define a Y combinator directly in most sound typed lambda calculi, because it would allow for nontermination, which typed lambda calculi usually try to avoid. When these calculi are used as the basis for general programming languages, the first step is usually to introduce some kind of general recursion (such as a recursive let) to allow for nonterminating programs to be written.

There’s still an important place for a fixpoint combinator, though! If we care only about what our programs do, then adding recursion might be sufficient. But recursion literally means defining things in terms of themselves, and that’s not safest thing to do if you would like words to keep having meanings that make sense. So what does a recursive definition even mean?

Fortunately, the Knaster-Tarski theorem guarantees that any such order-preserving function has a unique least fixpoint in the definedness order! So, at least for the pure fragment of Haskell, it’s okay to just define the meaning of a recursive definition to be the least fixpoint of the related non-recursive function obtained by pulling recursive uses into a parameter exactly as we did above. This ensures that any recursive definition can be given a solid meaning, even though on face value it’s just a circular definition.

(Incidentally, fixpoints are a nice way out of all kinds of self-referential paradoxes like this — what Hofstadter called “strange loops”. So next time you’re debating your science fiction geek friends about time travel and the integrity of the space-time continuum, remember that there’s no paradox in time travel as long as the universe is a fixpoint of the function obtained by pulling the back references out as parameters.)

Looping back, then, we could now redefine the fixpoint function from the introduction — using iterative refinement to close in on the right value of the square root — using the least fixpoint of a Haskell function.

A fun puzzle

Putting together all that we’ve talked about, here’s a question. Suppose you get frustrated with fixing your Haskell code, and type the following into GHCi:

ghci> import Data.Function
ghci> fix error

What happens next? Give it some thought before reading on.

To solve the puzzle, let’s first explore the types. The error function in Haskell has the type String -> a. It cannot reasonably have a fixpoint unless the domain and codomain are the same, so the type specializes to String -> String. Then fix error has the type String.

What is this magical string that fixes all errors? Well, it must be a fixpoint of the error function, but the result of the error function is always ⊥. That is its whole purpose in life! So fix error must be ⊥, too. Not so special after all?

Ah, but on the operational side, there are many kinds of ⊥. Some are just infinite loops, while others throw exceptions. If they throw exceptions, then there’s an exception value to think about. These are all invisible to the pure fragment of Haskell, but immensely important to what the program does! To understand what’s going on here, recall that fix error = error (error (error ...)), an infinite nesting of error functions. The result is quite interesting.

ghci> fix error
"*** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception: *** Exception:^C
Interrupted

Let’s just say it did not fix any errors. Instead, it threw an exception, with an associated exception message that itself throws an exception when printed, and that exception has another exploding error message, and so on as long as you allow it to continue!

by Chris Smith at October 15, 2018 04:23 AM

October 14, 2018

Magnus Therning

Picking up new rows in a DB

I’ve been playing around a little with postgresql-simple in combination with streaming for picking up inserted rows in a DB table. What I came up with was
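
(The embedded snippet is missing here; below is only a rough sketch of the general shape of such code, not the actual snippet — the table and column names are invented.)

{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent (threadDelay)
import Control.Monad.IO.Class (liftIO)
import Database.PostgreSQL.Simple
import Streaming (Of, Stream)
import qualified Streaming.Prelude as S

-- Poll for rows whose id is greater than the last one seen, yielding
-- each new row downstream, then sleep briefly and poll again.
newRows :: Connection -> Stream (Of (Int, String)) IO r
newRows conn = go 0
  where
    go lastId = do
        rows <- liftIO $ query conn
            "SELECT id, payload FROM items WHERE id > ? ORDER BY id"
            (Only lastId)
        S.each rows
        liftIO (threadDelay 1000000)  -- poll once per second
        go (if null rows then lastId else maximum (map fst rows))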

It seems almost too simple so I’m suspecting I missed something.

  • Am I missing something?
  • Should I be using streaming, or is pipes or conduit better (in case they are, in what way)?

October 14, 2018 12:00 AM

October 12, 2018

Gabriel Gonzalez

Detailed walkthrough for a beginner Haskell program


This post walks through the development of a small Haskell program for aligning equals symbols in a block of text. This walkthrough targets a beginning programmer by describing several steps and concepts in extra detail.

Note that this post describes how to author, compile, and run single-file Haskell programs for ease of experimentation and learning. For larger Haskell projects you will want to use cabal or stack to scaffold and run the project and to share your work with others. I mainly introduce things this way because this provides a lightweight way to get started and try the language.

Background

I obsess about visual cleanliness of the code I write because I optimize for ease of reading rather than ease of writing. One way I try to improve appearance is aligning equals signs. For example, the code might begin like this:

address = "192.168.0.44"
port = 22
hostname = "wind"

… and then I manually indent the equals signs to all align with each other:

address  = "192.168.0.44"
port     = 22
hostname = "wind"

My editor is vim, and you can install support for that using the Tabular plugin. However, I thought this would be an instructive example to implement from scratch to illustrate how to program in a functional style.

One neat feature of vim is that you can transform text inside your editor using any command line program. For example, I can select text using visual mode and then type:

:!some-command

… and vim will feed the selected text as standard input to the command-line program named some-command and replace the selected text with whatever the command emits to standard output.

This means that I only need to write a program that takes text to align on standard input and then emits the aligned text on standard output. I’ll call this program: align-equals.

Development environment

The command line is my “IDE”, so I will usually keep up to three terminal windows open:

  • One window edits text using vim
  • One window displays type errors using ghcid
  • One window tests the Haskell code I write in a REPL

I also use Nix, specifically a nix-shell, to provision my development tools. I prefer Nix for provisioning development tools because I don’t want to accumulate globally installed cruft on my system. Using Nix, I can provision any development tool or library transiently using nix-shell.

I will run all of the following examples inside of the following Nix shell (created for each new terminal):
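
(The embedded command didn't survive; presumably something along these lines, as a sketch:)

$ nix-shell \
    --packages 'haskellPackages.ghcWithPackages (pkgs: [ pkgs.text pkgs.safe ])' \
    haskellPackages.ghcid haskellPackages.hoogle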

That creates a transient shell that provides ghc, ghci, ghcid, and hoogle with the text and safe Haskell packages available. If I need to change the set of available Haskell packages I edit the command line and recreate the shell.

I turn on live type-checking by running:
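
(presumably something like:)

$ ghcid --command 'ghci align-equals.hs'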

… which will then automatically reload align-equals.hs every time the file changes and display any errors or warnings that the Haskell compiler has found.

In another terminal, I open the code I’m editing inside the ghci REPL so that I can interactively test the functions that I am writing:
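
(for example:)

$ ghci align-equals.hs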

Then in the third terminal, I actually edit the file:
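
(for example:)

$ vim align-equals.hs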

The program

First, let’s describe what we plan to do using English prose, before we translate that to code:

We want to find the longest prefix preceding an = sign and then add spaces to the end of all other prefixes to match the length of the longest prefix.

Prefix length

In order to do that, we first need a function that computes the number of characters preceding an = sign for a given line. This function should have the following type:
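
(reconstructed from the description that follows:)

prefixLength :: Text -> Int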

You can read that type signature as saying: “prefixLength is a function whose input is a value of type Text (i.e. a line of input) and whose output is an Int (the number of characters preceding the first = sign)”. We can even add comments to that effect:
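
(a reconstruction:)

prefixLength
    :: Text  -- ^ A line of input
    -> Int   -- ^ Number of characters preceding the first '=' sign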

I import Data.Text because I prefer not to use the default String type provided by Haskell’s Prelude, which is inefficient. The text package provides a high-performance alternative type named Text along with many useful utilities.

The implementation of the prefixLength function directly follows the description:
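
(a reconstruction consistent with the description:)

prefixLength :: Text -> Int
prefixLength line = Data.Text.length prefix
  where
    (prefix, suffix) = Data.Text.breakOn "=" line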

As the name suggests, the prefixLength is the length of the prefix. The only non-trivial part is just knowing how to discover the Data.Text.breakOn function created for this purpose.

I usually browse the online documentation for Haskell packages by doing a Google search for “hackage ${package name}” (i.e. “hackage text”). That’s how I found the breakOn function, in the Data.Text module of the text package on Hackage.

However, some people also prefer to use hoogle, which indexes and searches Haskell functions by name and by type. For example, if I’m looking for a function that splits a Text value into a prefix and suffix, I can run:
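
(for example:)

$ hoogle 'Text -> Text -> (Text, Text)'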

I can also verify that my function works as expected by testing my function in the long-running REPL I have open:
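
(a sketch of such a session:)

ghci> :set -XOverloadedStrings
ghci> prefixLength "address = \"192.168.0.44\""
8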

I have to enable the OverloadedStrings extension because I’m not using the default String type from the Haskell Prelude. This extension allows other packages to override how string literals behave to work with alternative implementations like Text.

One thing that’s neat about Haskell is how order-insensitive the language is. You can define things out of order and the compiler doesn’t mind. That’s why we can write our code as if it says: “prefixLength is the length of the prefix … and by the way you can compute the prefix and suffix by splitting the string using breakOn”.

This out-of-order coding style also jibes well with lazy evaluation. Haskell is a “lazy” language, meaning that the program can evaluate things out-of-order, too, or even not at all if they are never used. For example, our prefixLength function never actually uses the suffix, so the program will never actually bother to compute or allocate that value.

The more you program in Haskell the less you think about your program as a sequence of statements and the more you think about your program as a graph of dependent computations.

Indenting a line

Now we need a function that pads a prefix with spaces up to a desired number of characters:
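
(a reconstruction; the name adjustLine is an assumption:)

adjustLine :: Int -> Text -> Text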

… or with comments:
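
(again as a reconstruction:)

adjustLine
    :: Int   -- ^ Desired prefix length
    -> Text  -- ^ Input line
    -> Text  -- ^ Output line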

This function is slightly longer, but still pretty straightforward:
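
(a reconstruction that follows the five steps listed below:)

adjustLine :: Int -> Text -> Text
adjustLine desiredPrefixLength line = newLine
  where
    (prefix, suffix) = Data.Text.breakOn "=" line

    actualPrefixLength = Data.Text.length prefix

    additionalSpaces = desiredPrefixLength - actualPrefixLength

    spaces = Data.Text.replicate additionalSpaces " "

    newLine = Data.Text.concat [ prefix, spaces, suffix ]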

Note that we can reorder all the lines after the where with no change to the program’s behavior. However, to keep things readable we order them so that the reader can understand them from top to bottom:

  • Split the line into a prefix and suffix
  • Compute the actual length of the prefix
  • Compute the number of spaces to pad by subtracting the actual length from the desired length
  • Create the padding by repeating a space the specified number of times
  • Create a new line by inserting the padding in between the prefix and suffix

Note that this reads a lot like a function defined in an imperative language; the analogous Python code would not look much different.

In general, you can port functional programs to imperative programs in this way if the functional program sticks to simple types (i.e. strings, numbers, records, etc.). For these simple programs, the functional program is essentially an imperative program where you forbid reassigning a value (i.e. “mutation”), which is generally good practice anyway for ease of program comprehension.

We can confirm that our function works by saving the complete program so far:

… and then :reloading that program in our REPL:
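
(a sketch:)

ghci> :reload
ghci> adjustLine 10 "port = 22"
"port      = 22"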

Indenting multiple lines

Now we can define a function to indent multiple lines:
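
(a reconstruction; maximumMay comes from the safe package mentioned earlier:)

adjustText :: Text -> Text
adjustText text = Data.Text.unlines newLines
  where
    oldLines = Data.Text.lines text

    prefixLengths = map prefixLength oldLines

    newLines =
        case maximumMay prefixLengths of
            Nothing -> []
            Just desiredPrefixLength ->
                map (adjustLine desiredPrefixLength) oldLines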

This function takes advantage of two convenient utilities called lines and unlines.

Data.Text.lines splits a block of Text into a list of lines:
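
(for example:)

ghci> Data.Text.lines "foo = 1\nbar = 2"
["foo = 1","bar = 2"]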

… and Data.Text.unlines combines a list of lines back into a block of Text:
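
(for example:)

ghci> Data.Text.unlines ["foo = 1","bar = 2"]
"foo = 1\nbar = 2\n"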

You can use these two utilities to simplify line-oriented Text transformations in Haskell:

  • Split the block of Text into a list of lines
  • Process the list of lines into a new list of lines
  • Join the new list of lines back into a block of Text

The interesting part of our adjustText function is how we process the list of lines:
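
(the relevant fragment of the reconstruction above:)

prefixLengths = map prefixLength oldLines

newLines =
    case maximumMay prefixLengths of
        Nothing -> []
        Just desiredPrefixLength ->
            map (adjustLine desiredPrefixLength) oldLines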

You can read the above code as saying:

  • Apply (“map”) the prefixLength function over each element of our list of lines to get a list of prefix lengths
  • Find the maximum length
  • If there is no maximum length, return an empty list of lines
  • If there is a maximum length, pad each line using that length

You might wonder: “Why would there not be a maximum length?” Well, consider the case where we begin with 0 lines of input: what is the maximum value of an empty list? The maximumMay function doesn’t throw an exception or return a bogus sentinel value that might be confused for real data. Instead, the maximumMay function returns an optional result:
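
(its actual type, from the safe package:)

maximumMay :: Ord a => [a] -> Maybe a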

The a in the type of maximumMay can be any type that is comparable ( i.e. implements Ord), and in the context of our code that type is Int, so we can pretend that the maximumMay function actually has this more specific type:
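
maximumMay :: [Int] -> Maybe Int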

In other words, given a list of Ints, the maximumMay function will Maybe return an Int (i.e. it might return an Int or it might not). This means that the result will either be a Nothing (i.e. no result) or an Int value wrapped inside of a Just.

We consume the result of maximumMay by “pattern matching” on the result, like this:
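
case maximumMay prefixLengths of
    Nothing -> []
    Just desiredPrefixLength ->
        map (adjustLine desiredPrefixLength) oldLines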

The first branch covers the case where the list is empty. Here, the desiredPrefixLength is not in scope, so if we try to use that value we will get a type error. This provides a nice safeguard so that we can’t attempt to access a result that does not exist. In other languages this might be caught at runtime as a java.lang.NullPointerException or AttributeError: 'NoneType' object has no attribute 'x', but with pattern matching the type-checker can detect these sorts of bugs before we even attempt to run the program.

The second branch covers the case where the list is non-empty and does have a sensible maximum length. There we use that length to adjust each line.

The nice thing about pattern matching is that you can’t forget to handle these cases. If you try to use the result of maximumMay directly as an Int then you will get a type error. The maximumMay function wraps its own result in a Maybe to force downstream users to carefully consider how to handle the case where the list may be empty.

Tying it all together

So far all of our functions are “pure” functions, meaning that they are deterministic conversions from their inputs to their outputs that don’t mutate any variables and don’t have any side effects that we care about.

Carefully note the key phrase: “side effects that we care about”. These functions do technically have side effects, including:

  • Allocating memory/registers
  • Taking a non-zero amount of time to compute

Sometimes we do care about these types of side effects in certain contexts, like cryptography (where secure information can leak through these side channels) or embedded programming (where programs require careful attention to time/memory). However, for our simple utility these functions are effectively “pure”.

We can’t use our utility functions from the command line, though, unless we wrap them in a main function which a program can run, like this:
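
(a reconstruction:)

main :: IO ()
main = Data.Text.IO.interact adjustText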

The interact function converts any pure Text transformation into a runnable program that transforms standard input into standard output using the supplied transformation:
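
(its type, as exported by Data.Text.IO:)

interact :: (Text -> Text) -> IO ()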

This is an example of a “higher-order” function: a function whose input is another function. The input to the interact function is another function of type Text -> Text. Conveniently, our adjustText function has exactly the correct type to be supplied as an argument to the interact function:
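
(recalling the types:)

adjustText :: Text -> Text

interact adjustText :: IO ()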

… and then any value of type IO () can be assigned to main, which is what our program will run when invoked as a command-line executable.

If we save the following complete example to align-equals.hs:
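
(a reconstruction assembled from the pieces above; the original file may differ in details:)

import Data.Text (Text)
import Safe (maximumMay)

import qualified Data.Text
import qualified Data.Text.IO

prefixLength :: Text -> Int
prefixLength line = Data.Text.length prefix
  where
    (prefix, suffix) = Data.Text.breakOn "=" line

adjustLine :: Int -> Text -> Text
adjustLine desiredPrefixLength line = newLine
  where
    (prefix, suffix) = Data.Text.breakOn "=" line

    actualPrefixLength = Data.Text.length prefix

    additionalSpaces = desiredPrefixLength - actualPrefixLength

    spaces = Data.Text.replicate additionalSpaces " "

    newLine = Data.Text.concat [ prefix, spaces, suffix ]

adjustText :: Text -> Text
adjustText text = Data.Text.unlines newLines
  where
    oldLines = Data.Text.lines text

    prefixLengths = map prefixLength oldLines

    newLines =
        case maximumMay prefixLengths of
            Nothing -> []
            Just desiredPrefixLength ->
                map (adjustLine desiredPrefixLength) oldLines

main :: IO ()
main = Data.Text.IO.interact adjustText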

… then we can compile that using:
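
$ ghc -O2 align-equals.hs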

… and verify that the executable works using:
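
(for example:)

$ printf 'address = "192.168.0.44"\nport = 22\nhostname = "wind"\n' | ./align-equals
address  = "192.168.0.44"
port     = 22
hostname = "wind"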

That means that I can now use ./align-equals to transform my text buffer by selecting a text block like this:

address = "192.168.0.44"
port = 22
hostname = "wind"

… and running :!./align-equals to align the block:

address  = "192.168.0.44"
port     = 22
hostname = "wind"

… and now I don’t have to tediously align the code manually.

Conclusion

Hopefully this illustrates one way to learn the Haskell language in the course of building small but practical programs. The language introduces many cool features and concepts and this post only touches the tip of the iceberg.


by Gabriel Gonzalez (noreply@blogger.com) at October 12, 2018 02:24 PM

Matt Parsons

TChan vs TQueue: What's the difference?

I always forget the difference between a TChan and a TQueue. They appear to have an almost identical API, so whenever I need a concurrent message thing, I spend some time working out what the difference is. I’ve done this a few times now, and it’s about time that I write it out so that I don’t need to keep reconstructing it.

Aside: Please don’t use TChan or TQueue. These types are unbounded, which means that you can run into unbounded memory use if your producer is faster than your consumer. Instead, use TBChan or TBQueue, which allow you to set a bound. I have run into issues with livelock with the standard stm and stm-chans types, and have found that the unagi-chan package has better performance in all cases, so I usually reach for that when I have a need for a high performance concurrent channel. Unfortunately, the unagi-chan variants don’t operate in STM, which can be a dealbreaker depending on your workflow.
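
For illustration, a minimal bounded-queue sketch (mine, not from the original post):

import Control.Concurrent.STM

main :: IO ()
main = do
    -- writers retry in STM once 100 items are pending, applying backpressure
    q <- newTBQueueIO 100
    atomically $ writeTBQueue q (1 :: Int)
    x <- atomically $ readTBQueue q
    print x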

tl;dr: Use a channel when you want all readers to receive each message. Use a queue when you want only one reader to receive each message.

The docs for a TChan are concise:

TChan is an abstract type representing an unbounded FIFO channel.

The docs for a TQueue are a bit more verbose:

A TQueue is like a TChan, with two important differences:

  • it has faster throughput than both TChan and Chan (although the costs are amortised, so the cost of individual operations can vary a lot).
  • it does not provide equivalents of the dupTChan and cloneTChan operations.

The implementation is based on the traditional purely-functional queue representation that uses two lists to obtain amortised $O(1)$ enqueue and dequeue operations.

So the docs say that TQueue is faster, but has fewer operations. Presumably, we should use a TQueue unless we need these operations. What do dupTChan and cloneTChan do? Let’s look at the Haddocks:

dupTChan :: TChan a -> STM (TChan a)

Duplicate a TChan: the duplicate channel begins empty, but data written to either channel from then on will be available from both. Hence this creates a kind of broadcast channel, where data written by anyone is seen by everyone else.

cloneTChan :: TChan a -> STM (TChan a)

Clone a TChan: similar to dupTChan, but the cloned channel starts with the same content available as the original channel.

So, what’s the point of these? Let’s write some code and see what happens.

test2 :: IO ()
test2 = do
    c <- newTChanIO
    forkIO $ do
        for_ [1..5] $ \n -> do
            i <- atomically $ readTChan c
            putStrLn $ "First thread received: " ++ show i ++ "on #: " ++ show n

    forkIO $ do
        for_ [6..10] $ \n -> do
            i <- atomically $ readTChan c
            putStrLn $ "Second thread received: " ++ show i ++ "on #: " ++ show n

    for_ [1..10] $ \i -> do
       threadDelay 10000
       atomically $ writeTChan c i

This creates a new TChan, then forks two threads. Each thread sits and waits on the TChan to have a value, and then it prints the value out. Finally we stuff the numbers 1 through 10 into the channel, with a slight delay.

This is the output:

Beginning test...
First thread received: 1 on #: 1
First thread received: 2 on #: 2
Second thread received: 3 on #: 6
First thread received: 4 on #: 3
Second thread received: 5 on #: 7
Second thread received: 6 on #: 8
First thread received: 7 on #: 4
Second thread received: 8 on #: 9
First thread received: 9 on #: 5
Second thread received: 10 on #: 10

Alright, so the two threads mostly just interleave their work. The values 1 through 10 are printed out, each by exactly one of the two threads. Let’s try using dupTChan and see what happens:

test3 :: IO ()
test3 = do
    c <- newTChanIO
    forkIO $ do
        for_ [1..5] $ \n -> do
            i <- atomically $ readTChan c
            putStrLn $ "First thread received: " ++ show i ++ " on #: " ++ show n

    forkIO $ do
        c' <- atomically $ dupTChan c
        for_ [6..10] $ \n -> do
            i <- atomically $ readTChan c'
            putStrLn $ "Second thread received: " ++ show i ++ " on #: " ++ show n

    for_ [1..10] $ \i -> do
       threadDelay 10000
       atomically $ writeTChan c i

This is basically the same code, but we’ve duplicated the TChan in the second thread. Here’s the new output:

Beginning test...
First thread received: 1 on #: 1
Second thread received: 1 on #: 6
Second thread received: 2 on #: 7
First thread received: 2 on #: 2
First thread received: 3 on #: 3
Second thread received: 3 on #: 8
First thread received: 4 on #: 4
Second thread received: 4 on #: 9
First thread received: 5 on #: 5
Second thread received: 5 on #: 10

Interesting! So the duplicated channel is able to receive a writeTChan, and both threads are able to see the values. So, a TChan with a dupTChan call is suitable for when you want all copies of the TChan to receive a value. A TQueue will only permit a value to be seen once, by a single thread.

There’s a variant of newTChan called newBroadcastTChan. How does it differ? The docs explain:

Create a write-only TChan. More precisely, readTChan will retry even after items have been written to the channel. The only way to read a broadcast channel is to duplicate it with dupTChan.

Consider a server that broadcasts messages to clients:

serve :: TChan Message -> Client -> IO loop
serve broadcastChan client = do
    myChan <- dupTChan broadcastChan
    forever $ do
        message <- readTChan myChan
        send client message

The problem with using newTChan to create the broadcast channel is that if it is only written to and never read, items will pile up in memory. By using newBroadcastTChan to create the broadcast channel, items can be garbage collected after clients have seen them.

The last paragraph is the important part. A standard TChan will accumulate messages until that copy of the TChan is read. A broadcast TChan will not accumulate messages on the write end of the channel. This points to an important performance concern:

  • If you dupTChan a channel, you must read from every duplicate of the TChan in order to avoid a memory leak.
  • If you intend on having a write-only end, you must use newBroadcastTChan for any channel that you won’t read from.

TQueue avoids this problem as it cannot be duplicated.

October 12, 2018 12:00 AM

October 10, 2018

Sandy Maguire

Protobuffers Are Wrong

I've spent a good deal of my professional life arguing against using protobuffers. They're clearly written by amateurs, unbelievably ad-hoc, mired in gotchas, tricky to compile, and solve a problem that nobody but Google really has. If these problems of protobuffers remained quarantined in serialization abstractions, my complaints would end there. But unfortunately, the bad design of protobuffers is so pervasive that these problems manage to leak their way into your code as well.

Ad-Hoc and Built By Amateurs

Stop. Put away your email client that is half-way through writing me about how "Google is filled with the world's best engineers," and that "anything they build is, by definition, not built by amateurs." I don't want to hear it.

Let's just get this out of the way. Full disclosure: I used to work at Google. It was the first (but unfortunately, not the last) place I ever used protobuffers. All of the problems I want to talk about today exist inside of Google's codebase; it's not just a matter of "using protobuffers wrong" or some such nonsense like that.

By far, the biggest problem with protobuffers is their terrible type-system. Fans of Java should feel right at home with protobuffers, but unfortunately, literally nobody considers Java to have a well-designed type-system. The dynamic typing guys complain about it being too stifling, while the static typing guys like me complain about it being too stifling without giving you any of the things you actually want in a type-system. Lose lose.

The ad-hoc-ness and the built-by-amateurs-itude go hand-in-hand. So much of the protobuffer spec feels bolted on as an afterthought that it clearly was bolted on as an afterthought. Many of its restrictions will make you stop, scratch your head and ask "wat?" But these are just symptoms of the deeper answer, which is this:

Protobuffers were obviously built by amateurs because they offer bad solutions to widely-known and already-solved problems.

No Compositionality

Protobuffers offer several "features", but none of them seem to work with one another. For example, look at the list of orthogonal-yet-constrained typing features that I found by skimming the documentation.

  • oneof fields can't be repeated.
  • map<k,v> fields have dedicated syntax for their keys and values, but this isn't used for any other types.
  • Despite map fields being able to be parameterized, no user-defined types can be. This means you'll be stuck hand-rolling your own specializations of common data structures.
  • map fields cannot be repeated.
  • map keys can be strings, but can not be bytes. They also can't be enums, even though enums are considered to be equivalent to integers everywhere else in the protobuffer spec.
  • map values cannot be other maps.

This insane list of restrictions is the result of unprincipled design choices and bolting on features after the fact. For example, oneof fields can't be repeated because rather than resulting in a coproduct type, instead the code generator will give you a product of mutually-exclusive optional fields. Such a transformation is only valid for a singular field (and, as we'll see later, not even then.)
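To see the mismatch concretely, here is a sketch in Haskell (Foo and its fields are hypothetical, not generated code) of the product-of-mutually-exclusive-optionals you actually get, next to the coproduct you asked for:

-- what the code generator produces for a oneof, in spirit: illegal
-- states (both fields set, or neither set) are representable
data FooGenerated = FooGenerated
  { fooInt  :: Maybe Int
  , fooName :: Maybe String
  }

-- the coproduct it should have been: exactly one case, always
data Foo = FooInt Int | FooName String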

The restriction behind map fields being unable to be repeated is related, but shows off a different limitation of the type-system. Behind the scenes, a map<k,v> is desugared into something spiritually similar to repeated Pair<k,v>. And because repeated is a magical language keyword rather than a type in its own right, it doesn't compose with itself.

Your guess is as good as mine for why an enum can't be used as a map key.

What's so frustrating about all of this is that a little understanding of how modern type-systems work would be enough to drastically simplify the protobuffer spec and simultaneously remove all of the arbitrary restrictions.

The solution is as follows:

  • Make all fields in a message required. This makes messages product types.
  • Promote oneof fields to instead be standalone data types. These are coproduct types.
  • Give the ability to parameterize product and coproduct types by other types.

That's it! These three features are all you need in order to define any possible piece of data. With these simpler pieces, we can re-implement the rest of the protobuffer spec in terms of them.

For example, we can rebuild optional fields:

product Unit {
  // no fields
}

coproduct Optional<t> {
  t    value = 0;
  Unit unset = 1;
}

Building repeated fields is simple too:

coproduct List<t> {
  Unit empty = 0;
  Pair<t, List<t>> cons = 1;
}

Of course, the actual serialization logic is allowed to do something smarter than pushing linked-lists across the network---after all, implementations and semantics don't need to align one-to-one.
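For readers who want that stated in code: the three primitives are exactly what ML-family languages already provide. A sketch in Haskell (type names chosen to mirror the pseudo-syntax above):

data Unit       = Unit                      -- a product with no fields
data Pair a b   = Pair a b                  -- a parameterized product
data Optional t = Value t | Unset           -- a coproduct; i.e. Maybe t
data List t     = Empty | Cons t (List t)   -- a coproduct; i.e. [t]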

Questionable Choices

In the vein of Java, protobuffers make the distinction between scalar types and message types. Scalars correspond more-or-less to machine primitives---things like int32, bool and string. Messages, on the other hand, are everything else. All library- and user-defined types are messages.

The two varieties of types have completely different semantics, of course.

Fields with scalar types are always present. Even if you don't set them. Did I mention that (at least in proto3¹) all protobuffers can be zero-initialized with absolutely no data in them? Scalar fields get false-y values---uint32 is initialized to 0 for example, and string is initialized as "".

It's impossible to differentiate a field that was missing in a protobuffer from one that was assigned to the default value. Presumably this decision is in place in order to allow for an optimization of not needing to send default scalar values over the wire; the encoding guide makes no mention of this optimization actually being performed, though, so your guess is as good as mine.

As we'll see when we discuss protobuffers' claim to being god's gift to backwards- and forwards-compatible APIs, this inability to distinguish between unset and default values is a nightmare. Especially if indeed it's a design decision made in order to save one bit (set or not) per field.

Contrast this behavior against message types. While scalar fields are dumb, the behavior for message fields is outright insane. Internally, message fields are either there or they're not---but their behavior is crazy. Some pseudocode for their accessor is worth a thousand words. Pretend this is Java or something similar:

private Foo m_foo;

public Foo foo {
  // only if `foo` is used as an expression
  get {
    if (m_foo != null)
      return m_foo;
    else
      return new Foo();
  }

  // instead if `foo` is used as an lvalue
  mutable get {
    if (m_foo == null)
      m_foo = new Foo();
    return m_foo;
  }
}

The idea is that if the foo field is unset, you'll see a default-initialized copy whenever you ask for it, but won't actually modify its container. But if you modify foo, it will modify its parent as well! All of this just to avoid using a Maybe Foo type and the associated "headache" of needing to figure out what an unset value should mean.

This behavior is especially egregious, because it breaks a law! We'd expect the assignment msg.foo = msg.foo; to be a no-op. Instead the implementation will actually silently change msg to have a zero-initialized copy of foo if it previously didn't have one.

Unlike scalar fields, at least it's possible to detect if a message field is unset. Language bindings for protobuffers offer something along the lines of a generated bool has_foo() method. In the frequent case of copying a message field from one proto to another, iff it was present, you'll need to write the following code:

if (src.has_foo()) {
  dst.set_foo(src.foo());
}

Notice that, at least in statically-typed languages, this pattern cannot be abstracted due to the nominal relationship between the methods foo(), set_foo() and has_foo(). Because all of these functions are their own identifiers, we have no means of programmatically generating them, save for a preprocessor macro:

#define COPY_IFF_SET(src, dst, field) \
if (src.has_##field()) { \
  dst.set_##field(src.field()); \
}

(but preprocessor macros are verboten by the Google style guide.)

If instead all optional fields were implemented as Maybes, you'd get abstract-able, referentially transparent call-sites for free.
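As a sketch of what that buys (the getter/setter signatures here are hypothetical, not any real protobuffer API): with Maybe, copy-iff-set is one ordinary polymorphic function rather than a macro stamped out per field:

-- works for every optional field of every message type
copyIfSet :: (msg -> Maybe a)          -- field getter
          -> (Maybe a -> msg -> msg)   -- field setter
          -> msg -> msg -> msg
copyIfSet get set src dst =
  case get src of
    Just x  -> set (Just x) dst
    Nothing -> dst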

To change tack, let's talk about another questionable decision. While you can define oneof fields in protobuffers, their semantics are not of coproduct types! Rookie mistake my dudes! What you get instead is an optional field for each case of the oneof, and magic code in the setters that will just unset any other case if this one is set.

At first glance, this seems like it should be semantically equivalent to having a proper union type. But instead it is an accursed, unutterable source of bugs! When this behavior teams up with the law-breaking implementation of msg.foo = msg.foo;, it allows this benign-looking assignment to silently delete arbitrary amounts of data!

What this means at the end of the day is that oneof fields do not form law-abiding Prisms, nor do messages form law-abiding Lenses. Which is to say good luck trying to write bug-free, non-trivial manipulations of protobuffers. It is literally impossible to write generic, bug-free, polymorphic code over protobuffers.

That's not the sort of thing anybody likes to hear, let alone those of us who have grown to love parametric polymorphism---which gives us the exact opposite promise.

The Lie of Backwards- and Forwards-Compatibility

One of the frequently cited killer features of protobuffers is their "hassle-free ability to write backwards- and forwards-compatible APIs." This is the claim that has been pulled over your eyes to blind you from the truth.

What protobuffers are is permissive. They manage to not shit the bed when receiving messages from the past or from the future because they make absolutely no promises about what your data will look like. Everything is optional! But if you need it anyway, protobuffers will happily cook up and serve you something that typechecks, regardless of whether or not it's meaningful.

This means that protobuffers achieve their promised time-traveling compatibility guarantees by silently doing the wrong thing by default. Of course, the cautious programmer can (and should) write code that performs sanity checks on received protobuffers. But if at every use-site you need to write defensive checks ensuring your data is sane, maybe that just means your deserialization step was too permissive. All you've managed to do is decentralize sanity-checking logic from a well-defined boundary and push the responsibility of doing it throughout your entire codebase.

One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?

Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property. With the one exception of routing software, nothing wants to inspect only some bits of a message and then forward it on unchanged. The vast majority of programs that operate on protobuffers will decode one, transform it into another, and send it somewhere else. Alas, these transformations are bespoke and coded by hand. And hand-coded transformations from one protobuffer to another don't preserve unknown fields between the two, because carrying them across is literally meaningless.

This pervasive attitude towards protobuffers always being compatible rears its head in other ugly ways. Style guides for protobuffers actively advocate against DRY and suggest inlining definitions whenever possible. The reasoning behind this is that it allows you to evolve messages separately if these definitions diverge in the future. To emphasize that point, the suggestion is to fly in the face of 60 years' worth of good programming practice just in case maybe one day in the future you need to change something.

At the root of the problem is that Google conflates the meaning of data with its physical representation. When you're at Google scale, this sort of thing probably makes sense. After all, they have an internal tool that allows you to compare the finances behind programmer hours vs network utilization vs the cost to store x bytes vs all sorts of other things. Unlike most companies in the tech space, paying engineers is one of Google's smallest expenses. Financially it makes sense for them to waste programmers' time in order to shave off a few bytes.

Outside of the top five tech companies, none of us is within five orders of magnitude of being Google scale. Your startup cannot afford to waste engineer hours on shaving off bytes. But shaving off bytes and wasting programmers' time in the process is exactly what protobuffers are optimized for.

Let's face it. You are not Google scale and you never will be. Stop cargo-culting technology just because "Google uses it" and therefore "it's an industry best-practice."

Protobuffers Contaminate Codebases

If it were possible to restrict protobuffer usage to network-boundaries I wouldn't be nearly as hard on it as a technology. Unfortunately, while there are a few solutions in principle, none of them is good enough to actually be used in real software.

Protobuffers correspond to the data you want to send over the wire, which is often related but not identical to the actual data the application would like to work with. This puts us in the uncomfortable position of needing to choose between one of three bad alternatives:

  1. Maintain a separate type that describes the data you actually want, and ensure that the two evolve simultaneously.
  2. Pack rich data into the wire format for application use.
  3. Derive rich information every time you need it from a terse wire format.

Option 1 is clearly the "right" solution, but it's untenable with protobuffers. The language isn't powerful enough to encode types that can perform double-duty as both wire and application formats. Which means you'd need to write a completely separate datatype, evolve it synchronously with the protobuffer, and explicitly write serialization code between the two. Seeing as most people seem to use protobuffers in order to not write serialization code, this is obviously never going to happen.

Instead, code that uses protobuffers allows them to proliferate throughout the codebase. True story, my main project at Google was a compiler that took "programs" written in one variety of protobuffer, and spit out an equivalent "program" in another. Both the input and output formats were expressive enough that maintaining proper parallel C++ versions of them could never possibly work. As a result, my code was unable to take advantage of any of the rich techniques we've discovered for writing compilers, because protobuffer data (and resulting code-gen) is simply too rigid to do anything interesting.

The result is that a thing that could have been 50 lines of recursion schemes was instead 10,000 lines of ad-hoc buffer-shuffling. The code I wanted to write was literally impossible when constrained by having protobuffers in the mix.

While this is an anecdote, it's not in isolation. By virtue of their rigid code-generation, manifestations of protobuffers in languages are never idiomatic, nor can they be made to be---short of rewriting the code-generator.

But even then, you still have the problem of needing to embed a shitty type-system into the targeted language. Because most of protobuffers' features are ill-conceived, these unsavory properties leak into our codebases. It means we're forced to not only implement, but also use these bad ideas in any project which hopes to interface with protobuffers.

While it's easy to implement inane things out of a solid foundation, going the other direction is challenging at best and the dark path of Eldritch madness at worst.

In short, abandon all hope ye who introduce protobuffers into your projects.


  1. To this day, there's a raging debate inside Google itself about proto2 and whether fields should ever be marked as required. Manifestos with the titles "optional considered harmful" and "required considered harmful" have both been circulated. Good luck sorting that out.

October 10, 2018 03:06 PM

October 09, 2018

Philip Wadler

Developer Yoga

<iframe allowfullscreen="allowfullscreen" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/055aMWWWWUQ/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/055aMWWWWUQ?feature=player_embedded" width="320"></iframe>

My favourite is the first, Lambdasana. Spotted while at LambdUp in Prague.

by Philip Wadler (noreply@blogger.com) at October 09, 2018 05:36 PM

October 08, 2018

FP Complete

2018 Haskell Survey Results

The results are in!!

The powerful Haskell community includes researchers who advance the language, and users who apply the language to solve real-world problems. Recently 1100 people took the time to respond to FP Complete's Haskell User Survey. Here are the key results. See the full report by clicking here.

by Aaron Contorer (aaron@fpcomplete.com) at October 08, 2018 05:52 PM

October 06, 2018

Brent Yorgey

Counting inversions with monoidal sparks

Time for me to reveal the example I had in mind that led to the generalization in my previous post. Thanks for all the interesting comments: it seems like there are some interesting connections to be explored (e.g. to the algebra of graphs, formal group laws, …?)!

This is a literate Haskell post; download it and play along in ghci! (Note it requires GHC 8.6 since I couldn’t resist making use of DerivingVia…)

> {-# LANGUAGE DefaultSignatures          #-}
> {-# LANGUAGE FlexibleInstances          #-}
> {-# LANGUAGE MultiParamTypeClasses      #-}
> {-# LANGUAGE DerivingStrategies         #-}
> {-# LANGUAGE GeneralizedNewtypeDeriving #-}
> {-# LANGUAGE DerivingVia                #-}
> {-# LANGUAGE TypeApplications           #-}
> {-# LANGUAGE ScopedTypeVariables        #-}
> 
> import Data.Semigroup
> import Data.Coerce

Consider a sequence of integers \sigma = a_1, a_2, \dots, a_n. (Actually, integers is too specific; any linearly ordered domain will do.) An inversion is a pair of positions in \sigma which are “out of order”: that is, (i,j) such that i < j but a_i > a_j. So, for example, \sigma = [3,5,1,4,2] has six inversions, namely (3,1), (3,2), (5,1), (5,4), (5,2), (4,2). (Here I’ve written the elements that are out of order rather than their positions, which doesn’t matter much when the elements are all distinct; but it’s important to keep in mind that e.g. [2,2,1] has two inversions, not one, because each copy of 2 makes an inversion with the 1.) The total number of inversions of \sigma is denoted \mathrm{inv}(\sigma).

One way to think about the inversion count is as a measure of how far away the sequence is from being sorted. In particular, bubble sort will make precisely \mathrm{inv}(\sigma) adjacent swaps while sorting \sigma. The highest possible value of \mathrm{inv}(\sigma) is n(n-1)/2, when \sigma is sorted in reverse order.

The obvious brute-force algorithm to count inversions is to use two nested loops to enumerate all possible pairs of elements, and increment a counter each time we discover a pair which is out of order. This clearly takes O(n^2) time. Can it be done any faster?
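For reference, here is the brute force transcribed into Haskell (a sketch, not part of this post’s literate source):

import Data.List (tails)

inversionsNaive :: Ord a => [a] -> Int
inversionsNaive xs = length [ () | (y:ys) <- tails xs, z <- ys, y > z ]

For example, inversionsNaive [3,5,1,4,2] evaluates to 6, matching the count above.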

It turns out the (generally well-known) answer is yes, using a variant of mergesort. The trick is to generalize to counting inversions and sorting the sequence at the same time. First split the sequence in half, and recursively sort and count the inversions in each half. Any inversion in the original sequence must either be entirely contained in one of the two halves (these will be counted by the recursive calls), or have one endpoint in the left half and one in the right. One key observation at this point is that any inversion with one endpoint in each half will still be an inversion even after independently sorting the two halves. The other key observation is that we can merge the two sorted subsequences and count inversions between them in linear time. Use the usual two-finger algorithm for merging two sorted sequences; each time we take an element from the right subsequence, it’s because it is less than all the remaining elements in the left subsequence, but it was to the right of all of them, so we can add the length of the remaining left subsequence to the inversion count. Intuitively, it’s this ability to count a bunch of inversions in one step which allows this algorithm to be more efficient, since any algorithm which only ever increments an inversion counter is doomed to be O(n^2) no matter how cleverly it splits up the counting. In the end, the number of total inversions is the sum of the inversions counted recursively in the two sublists, plus any inversions between the two sublists.

Here’s some Haskell code implementing this sorted-merge-and-inversion-count. We have to be a bit careful because we don’t want to call length on the remaining sublist at every step (that would ruin the asymptotic performance!), so we precompute the length and pass along the length of the left subsequence as an extra parameter which we keep up-to-date as we recurse.

> -- Precondition: the input lists are sorted.
> -- Output the sorted merge of the two lists, and the number of pairs
> -- (a,b) such that a \in xs, b \in ys with a > b.
> mergeAndCount :: Ord a => [a] -> [a] -> ([a], Int)
> mergeAndCount xs ys = go xs (length xs) ys
>   -- precondition/invariant for go xs n ys:   n == length xs
>   where
>     go [] _ ys = (ys, 0)
>     go xs _ [] = (xs, 0)
>     go (x:xs) n (y:ys)
>       | x <= y    = let (m, i) = go xs (n-1) (y:ys) in (x:m, i)
>       | otherwise = let (m, i) = go (x:xs) n ys     in (y:m, i + n)
> 
> merge :: Ord a => [a] -> [a] -> [a]
> merge xs ys = fst (mergeAndCount xs ys)
> 
> inversionsBetween :: Ord a => [a] -> [a] -> Int
> inversionsBetween xs ys = snd (mergeAndCount xs ys)

Do you see how this is an instance of the sparky monoid construction in my previous post? A is the set of sorted lists with merge as the monoid operation; B is the natural numbers under addition. The spark operation takes two sorted lists and counts the number of inversions between them. So the monoid on pairs A \times B merges the lists, and adds the inversion counts together with the number of inversions between the two lists.

We have to verify that this satisfies the laws: let a be any sorted list, then we need

  • a \cdot \varepsilon_A = \varepsilon_B, that is, a `inversionsBetween` [] = 0. This is true since there are never any inversions between a and an empty list. Likewise for \varepsilon_A \cdot a = \varepsilon_B.

  • a `inversionsBetween` (a1 `merge` a2) == (a `inversionsBetween` a1) + (a `inversionsBetween` a2). This is also true since a1 `merge` a2 contains the same elements as a1 and a2: any inversion between a and a1 `merge` a2 will be an inversion between a and a1, or between a and a2, and vice versa. The same reasoning shows that (a1 `merge` a2) `inversionsBetween` a == (a1 `inversionsBetween` a) + (a2 `inversionsBetween` a).
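These laws are the sort of thing QuickCheck is good at spot-checking. A sketch (again not part of the literate source; it sorts arbitrary inputs to satisfy the precondition):

import Data.List (sort)
import Test.QuickCheck (quickCheck)

prop_identity :: [Int] -> Bool
prop_identity xs =
    (a `inversionsBetween` []) == 0 && ([] `inversionsBetween` a) == 0
  where a = sort xs

prop_distrib :: [Int] -> [Int] -> [Int] -> Bool
prop_distrib xs ys zs =
    (a `inversionsBetween` (b `merge` c))
      == (a `inversionsBetween` b) + (a `inversionsBetween` c)
  where (a, b, c) = (sort xs, sort ys, sort zs)

Both should pass quickCheck, by the argument above.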

Note that A and B are commutative monoids, but the spark operation isn’t commutative; in fact, any given pair of elements is an inversion between a1 and a2 precisely iff they are not an inversion between a2 and a1. Note also that A and B aren’t idempotent; for example merging a sorted list with itself produces not the same list, but a new list with two copies of each element.

So let’s see some more Haskell code to implement the entire algorithm in a nicely modular way. First, let’s encode sparky monoids in general. The Sparky class is for pairs of types with a spark operation. As we saw in the example above, sometimes it may be more efficient to compute a_1 \diamond a_2 and the spark a_1 \cdot a_2 at the same time, so we bake that possibility into the class.

> class Sparky a b where

The basic spark operation, with a default implementation that projects the result out of the prodSpark method.

>   (<.>) :: a -> a -> b
>   a1 <.> a2 = snd (prodSpark a1 a2)

prodSpark does the monoidal product and spark at the same time, with a default implementation that just does them separately.

>   prodSpark :: a -> a -> (a,b)
>   default prodSpark :: Semigroup a => a -> a -> (a,b)
>   prodSpark a1 a2 = (a1 <> a2, a1 <.> a2)

Finally we can specify that we have to implement one or the other of these methods.

>   {-# MINIMAL (<.>) | prodSpark #-}

Sparked a b is just a pair type, but with Semigroup and Monoid instances that implement the sparky product.

> data Sparked a b = S { getA :: a, getSpark :: b }
>   deriving Show
> 
> class Semigroup a => CommutativeSemigroup a
> class (Monoid a, CommutativeSemigroup a) => CommutativeMonoid a
> 
> instance (Semigroup a, CommutativeSemigroup b, Sparky a b) => Semigroup (Sparked a b) where
>   S a1 b1 <> S a2 b2 = S a' (b1 <> b2 <> b')
>     where (a', b') = prodSpark a1 a2
> 
> instance (Monoid a, CommutativeMonoid b, Sparky a b) => Monoid (Sparked a b) where
>   mempty = S mempty mempty

Now we can make instances for sorted lists under merge…

> newtype Sorted a = Sorted [a]
>   deriving Show
> 
> instance Ord a => Semigroup (Sorted a) where
>   Sorted xs <> Sorted ys = Sorted (merge xs ys)
> instance Ord a => Monoid (Sorted a) where
>   mempty = Sorted []
> 
> instance Ord a => CommutativeSemigroup (Sorted a)
> instance Ord a => CommutativeMonoid (Sorted a)

…and for inversion counts.

> newtype InvCount = InvCount Int
>   deriving newtype Num
>   deriving (Semigroup, Monoid) via Sum Int
> 
> instance CommutativeSemigroup InvCount
> instance CommutativeMonoid InvCount

Finally we make the Sparky (Sorted a) InvCount instance, which is just mergeAndCount (some conversion between newtypes is required, but we can get the compiler to do it automagically via coerce and a bit of explicit type application).

> instance Ord a => Sparky (Sorted a) InvCount where
>   prodSpark = coerce (mergeAndCount @a)

And here’s a function to turn a single a value into a sorted singleton list paired with an inversion count of zero, which will come in handy later.

> single :: a -> Sparked (Sorted a) InvCount
> single a = S (Sorted [a]) 0

Finally, we can make some generic infrastructure for doing monoidal folds. First, Parens a encodes lists of a which have been explicitly associated, i.e. fully parenthesized:

> data Parens a = Leaf a | Node (Parens a) (Parens a)
>   deriving Show

We can make a generic fold for Parens a values, which maps each Leaf into the result type b, and replaces each Node with a binary operation:

> foldParens :: (a -> b) -> (b -> b -> b) -> Parens a -> b
> foldParens lf _  (Leaf a)   = lf a
> foldParens lf nd (Node l r) = nd (foldParens lf nd l) (foldParens lf nd r)

Now for a function which splits a list in half recursively to produce a balanced parenthesization.

> balanced :: [a] -> Parens a
> balanced []  = error "List must be nonempty"
> balanced [a] = Leaf a
> balanced as  = Node (balanced as1) (balanced as2)
>   where (as1, as2) = splitAt (length as `div` 2) as

Finally, we can make a balanced variant of foldMap: instead of just mapping a function over a list and then reducing with mconcat, as foldMap does, it first creates a balanced parenthesization for the list and then reduces via the given monoid. This will always give the same result as foldMap due to associativity, but in some cases it may be more efficient.

> foldMapB :: Monoid m => (e -> m) -> [e] -> m
> foldMapB leaf = foldParens leaf (<>) . balanced

Let’s try it out!

λ> :set +s
λ> getSpark $ foldMap single [3000, 2999 .. 1 :: Int]
Sum {getSum = 4498500}
(34.94 secs, 3,469,354,896 bytes)
λ> getSpark $ foldMapB single [3000, 2999 .. 1 :: Int]
Sum {getSum = 4498500}
(0.09 secs, 20,016,936 bytes)

Empirically, it does seem that we are getting quadratic performance with normal foldMap, but O(n \log n) with foldMapB. We can verify that we are getting the correct inversion count in either case, since we know there should be n(n-1)/2 when the list is reversed, and sure enough, 3000 \cdot 2999 / 2 = 4498500.

by Brent at October 06, 2018 08:53 PM

October 05, 2018

Douglas M. Auclair (geophf)

August 2018 1HaskellADay problems and solutions

by geophf (noreply@blogger.com) at October 05, 2018 12:41 PM

October 04, 2018

Well-Typed.Com

Upcoming Haskell events: Haskell eXchange, Courses, MuniHac

Upcoming events

In the upcoming months, Well-Typed will be participating in a number of events that we would like to advertise here:

Well-Typed’s Guide to the Haskell Type System

London, 10 October 2018

On the day before the Haskell eXchange, Adam Gundry will teach a single-day compact course on Haskell type system extensions. Learn all about GADTs, data kinds, rank-n polymorphism, type families and more.

Registration is open.

Haskell eXchange 2018

London, 11–12 October 2018

The Haskell eXchange will return to London for the seventh time, with keynotes by Simon Peyton Jones, Stephanie Weirich, Niki Vazou and Simon Marlow, and more than 30 other talks on all topics Haskell. This year’s programme includes three talks by Well-Typed: Andres Löh on Deriving Via, Duncan Coutts on the Cardano Cryptocurrency, and David Eichmann on Geometry with Denotational Design.

Registration is open.

Well-Typed’s Guide to Performance and Optimisation

London, 15–16 October 2018

After the Haskell eXchange, Andres Löh will teach a two-day course on performance and optimisation. Learn about data structures, performance pitfalls, common optimisations that GHC applies, how to read GHC’s internal Core language, and more.

Registration is open.

MuniHac 2018

Unterföhring / Munich, 16–18 November 2018

After the successful MuniHac in 2016, we will have a second MuniHac in Munich this November! Join us for three days of conversations and hacking on Haskell libraries, applications and tools. Everyone is welcome, whether beginner or expert. There are some talks, including keynotes by Ben Gamari from Well-Typed, Matthew Pickering from the University of Bristol and Ryan Scott from Indiana University. Furthermore, there will be plenty of opportunity to learn and improve. You can work on your own project or join others in theirs.

Well-Typed is co-organising this event with TNG Technology Consulting. The event is free to attend, but the number of spaces is limited, so be sure to register early.

Registration is open.

by andres at October 04, 2018 12:48 PM

Functional Jobs

Software Engineer - Haskell at Capital Match (Full-time)

Who We Are Capital Match is the largest marketplace invoice financing platform in Southeast Asia based in Singapore. We are an innovative fintech business with a range of financial solutions to SMEs and investors. We have funded more than US$80 million in Singapore over the past 3 years. We are funded by multiple VCs and institutional investors including Dymon Asia Capital, an alternative asset manager with $6bn AUM and Eduardo Saverin's B Capital Group.

Role We are looking for experienced developers to lead our tech growth in the fintech space, expand into surrounding countries, develop new products and integrations. You will be involved in all aspects of the creation, growth and operations of a secure web-based platform. Experience in front-to-back feature development, distributed deployment and automation in the cloud, build and test automation, is highly desirable.

Our Platform Our platform is primarily developed in Haskell with a ClojureScript frontend. We use Docker containers, Nix and standard cloud infrastructure systems to manage our production rollouts. We make heavy use of modern functional programming practices, such as property-based testing and type-driven programming, to improve the quality of our code. We use many popular Haskell libraries such as Lens, QuickCheck, Servant, Scotty, Aeson, Attoparsec and Pandoc. We use continuous testing, integration and deployment wherever possible.

Job Requirements

  • Adequate software engineering experience
  • Software engineering experience with Haskell
  • Strong foundation in data structures, algorithms, and software design
  • Experience of developing both server and web applications

Additional Points

  • Experience with modern software engineering practices such as TDD, CI, Emergent Design, Refactoring, Peer Review and Continuous Improvement
  • Familiarity with Linux systems, command-line environments and cloud-based deployments
  • A background in fintech and especially the lending space would be an advantage, but is not essential

What We Offer

  • Monthly salary from SGD 5,000-10,000 depends on experience
  • Equity options for excellent candidate
  • Working visa will be provided

How to Apply Send your resume to careers [at] capital-match [dot] com

Get information on how to apply for this position.

October 04, 2018 02:07 AM

Tweag I/O

capability: the ReaderT pattern without boilerplate

Andreas Herrmann, Arnaud Spiwack

About a year ago, Michael Snoyman made a blog post about the ReaderT pattern. Ostensibly, it's a blog post about preferring to encode state as IORef-s when you can. But that's not how we read it. What we saw first and foremost is a story about using extensional type classes describing specifically the effects that functions use, but not how the effects are implemented (see the MonadBalance story in Michael's blog post). We call these extensional type classes capabilities. Here is an excellent blog post from Matt Parsons which takes this aspect to heart.

Capability is a library to replace the MTL with capabilities. In this post, we'll argue why capabilities are important, why you should use them, and tell you about what it took to design a library of capabilities with good ergonomics. It turns out that a brand new language extension that shipped with GHC 8.6, -XDerivingVia, has a crucial role in this story.

The difference with the MTL

How is using capabilities, like in Snoyman's and Parsons's blog posts any different from using the well-trodden MTL? Quite a bit! The MTL's classes, like MonadReader, MonadState and their close relatives, are intensional: they reflect how the monad has been constructed. A monad MyM is a MonadState because it is a stack of monad transformers, one of which is a StateT.

This is because of what Haskell instances mean: if I have an instance,

instance MonadState s m => MonadState s (ReaderT r m)

while it may look like we're saying that it suffices to have MonadState s m for ReaderT r m to be MonadState s, what we are really saying is that MonadState s (ReaderT r m) means that MonadState s m. It defines a computation, rather than a deduction. In particular, we are not permitted to add the following instance:

data St = ...
instance MonadState St (ReaderT (IORef St) IO)

You may want to work around this issue using {-# OVERLAPPING #-} instances. However, in doing so, you are acting against the semantics of instances, and heading for trouble. For an example of issues with overlapping instances, see the warning at the end of the Overlapping instances section of the GHC manual.

In contrast, what the ReaderT pattern is advertising is extensional classes, indicating what an individual function is allowed to do (hence the name "capability"), regardless of how the monad is implemented.

The problem

Irrespective of whether you are willing to twist the arm of the MTL with {-# OVERLAPPING #-} instances, when programming with capabilities you will run into two kinds of issues. The first is probably the lesser of the two, but has been a known pain point with the MTL for a while: it's that the MTL uses types (e.g. the state type) to discriminate layers. In other words: with the MTL, you can't have two states of the same type in your capabilities.

This, of course, is not a problem if you are writing your own type classes: you can simply use a different class for each piece of state (e.g. MonadBalance). This leads us to the second, more serious issue: lack of inference. With all its faults, the MTL gives a very comfortable environment: it defines plenty of generic instances, so that when we give a concrete monad stack, then all the instances are automatically computed for us. In contrast, with capabilities and the ReaderT pattern, we collect all the capabilities, and assemble a bespoke type to handle all of these instances. Haskell's instance resolution is simply not equipped for this.

The bottom line is: an insane amount of boilerplate. Custom type class definitions. A slew of instances at each main entry point (where a concrete type is defined for the monad).

Deriving Via

What we would really like is a way to use type class instances the other way around, compared to instance resolution. Instead of reading

instance MonadState s m => MonadState s (ReaderT r m)

as saying that MonadState s on ReaderT r m is defined in terms of MonadState s m, fixing the implementation, we would like to read it as: if I have an implementation of MonadState s m, then this is one possible implementation of MonadState on a ReaderT.

This is made possible by a new language extension available in the freshly released GHC 8.6: -XDerivingVia.

In short, -XDerivingVia is a generalisation of -XGeneralizedNewtypeDeriving that allows you not only to derive an instance for a newtype, but also from a newtype, or, and this is most relevant for us, from a combination of newtypes. For example:

{-# LANGUAGE DerivingVia #-}

import Data.Monoid (Sum (..))

newtype MyInt = MyInt Int
  deriving (Monoid, Semigroup) via Sum Int
  deriving Num via Int

In the above snippet we define MyInt which wraps an Int, and derive two instances for it. The Monoid instance is taken from Sum Int, and the Num instance is taken directly from Int. Note the via keyword in the deriving clauses. (In this example we could also have derived the Num instance using -XGeneralizedNewtypeDeriving.)

You will find a more complete introduction in Baldur Blondal's talk on the subject. If you want all the details, head to the proposal or the paper.

Enter capability

With the above piece of kit in hand, we can write the capability library. This library provides strategies that can be composed to derive capabilities using the -XDerivingVia language extension.

capability defines a set of standard, reusable capability type classes, such as HasReader, or HasState. Contrary to the MTL type classes these are parameterized by a name (aka tag), which makes it possible to refer to, say, multiple different states in the same computation, even if they correspond to the same type.

getAB :: (HasState "a" Int m, HasState "b" Int m) => m (Int, Int)
getAB = do
  a <- get @"a"  -- get state under tag "a"
  b <- get @"b"  -- get state under tag "b"
  pure (a, b)

The library then provides newtypes to derive instances of these capability type-classes in deriving via clauses, similar to Sum in the MyInt example above.

For example, given an MTL MonadReader instance, we can derive a HasReader capability as follows:

data AppData = ...

newtype AppM a = AppM (ReaderT AppData IO a)
  deriving (HasReader "appData" AppData) via
    MonadReader (ReaderT AppData IO)

We can also combine multiple newtypes to derive capability instances. Building on the above example, we can pick a field within AppData as follows:

data AppData = AppData { intRef :: IORef Int, ... }
  deriving Generic

newtype AppM a = AppM (ReaderT AppData IO a)
  deriving (HasReader "intRef" (IORef Int)) via
    Field "intRef" "appData" (MonadReader (ReaderT AppData IO))

The Field combinator takes two tags as arguments. The first specifies the field name, and also the new tag. The second specifies the old tag, which provides the record with the requested field. Note that the MonadReader newtype can provide an instance for any tag. The Field combinator uses generic-lens under the hood, which is why AppData needs to have a Generic instance.

A worked example: combining writers without guilt

Let's consider a complete example to demonstrate how you could use capability in your own projects. The code is available in the capability repository if you want to follow along.

In this example we will receive a text as input and want to count occurrences of words and letters in the text, ignoring white space. To that end we will use a writer monad. Recall, that writer has the method tell :: w -> m (), which will mappend the given w to the current tally, starting from mempty. This requires a Monoid instance on w.

We start with counting single letters and words.

-- | Count the occurrence of a single letter.
countLetter ::
  HasWriter "letterCount" (Occurrences Char) m
  => Char -> m ()
countLetter letter = tell @"letterCount" (oneOccurrence letter)

-- | Count the occurrence of a single word.
countWord ::
  HasWriter "wordCount" (Occurrences Text) m
  => Text -> m ()
countWord word = tell @"wordCount" (oneOccurrence word)

The type Occurrences k is a newtype around a Map from values k to their count. Its Monoid instance will add occurrences in the expected fashion.

newtype Occurrences k = Occurrences (Map k Int)

instance Ord k => Monoid (Occurrences k)
  -- See repository for instance implementation.

-- | A single occurrence of the given value.
oneOccurrence :: k -> Occurrences k
oneOccurrence k = Occurrences $ Map.singleton k 1
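The deferred instance is straightforward; a plausible sketch (assuming Data.Map is imported qualified as Map, as is conventional):

instance Ord k => Semigroup (Occurrences k) where
  Occurrences m1 <> Occurrences m2 = Occurrences (Map.unionWith (+) m1 m2)

instance Ord k => Monoid (Occurrences k) where
  mempty = Occurrences Map.empty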

Next, we combine countLetter and countWord to handle one word in the input text.

-- | Count the occurrence of a single word and all the letters in it.
countWordAndLetters ::
  ( HasWriter "letterCount" (Occurrences Char) m
  , HasWriter "wordCount" (Occurrences Text) m )
  => Text -> m ()
countWordAndLetters word = do
  countWord word
  mapM_ countLetter (Text.unpack word)

Finally, we can handle the full input text by first splitting it into its words and then applying the above function to each word.

-- | Count the occurrences of words and letters in a text,
-- excluding white space.
countWordsAndLettersInText ::
  ( HasWriter "letterCount" (Occurrences Char) m
  , HasWriter "wordCount" (Occurrences Text) m )
  => Text -> m ()
countWordsAndLettersInText text =
  mapM_ countWordAndLetters (Text.words text)

In a production setting we might prefer to stream the input, instead of holding the whole text in memory. For simplicity's sake we will omit this here.

With that we have written a program that demands two HasWriter capabilities.

Before we can execute this program we need to define a concrete implementation that provides these capabilities. This is where we make use of the deriving-via strategies that the library offers.

It is well-known that the writer monad provided by MTL has a space leak. In capability, we can derive a writer capability from a state capability instead, to avoid this issue. In fact, we don't even provide a way to derive a writer capability from a writer monad. Following the ReaderT pattern we derive the state capabilities from reader capabilities on IORefs.

First, we define the application context. A record holding two IORefs - one for each counter.

-- | Counter application context.
data CounterCtx = CounterCtx
  { letterCount :: IORef (Occurrences Char)
    -- ^ Counting letter occurrences.
  , wordCount :: IORef (Occurrences Text)
    -- ^ Counting word occurrences.
  } deriving Generic

Next, we define our application monad.

-- | Counter application monad.
newtype Counter a = Counter { runCounter :: CounterCtx -> IO a }
  deriving (Functor, Applicative, Monad) via (ReaderT CounterCtx IO)

Note that we use ReaderT in the deriving via clause as a strategy to derive the basic Functor, Applicative, and Monad instances.

Deriving the writer capabilities makes use of a large set of newtypes provided by the capability library. Each line after the via keyword corresponds to one newtype. Comments explain the purpose of the respective newtype. Read these from bottom to top.

  deriving (HasWriter "letterCount" (Occurrences Char)) via
    (WriterLog  -- Generate HasWriter using HasState of Monoid
    (ReaderIORef  -- Generate HasState from HasReader of IORef
    (Field "letterCount" "ctx"  -- Focus on the field letterCount
    (MonadReader  -- Generate HasReader using mtl MonadReader
    (ReaderT CounterCtx IO)))))  -- Use mtl ReaderT newtype

The "wordCount" writer is derived in the same way:

  deriving (HasWriter "wordCount" (Occurrences Text)) via
    WriterLog (ReaderIORef
    (Field "wordCount" "ctx" (MonadReader (ReaderT CounterCtx IO))))

The only thing left is to combine all these pieces into an executable program. We will take the text as an argument, and return an IO action that executes countWordsAndLettersInText using our Counter monad, and prints the resulting word and letter counts to standard output.

-- | Given a text count the occurrences of all words and letters in it,
-- excluding white space, and print the outcome to standard output.
wordAndLetterCount :: Text -> IO ()
wordAndLetterCount text = do

First, we setup the required IORefs and the counter context:

  lettersRef <- newIORef mempty
  wordsRef <- newIORef mempty
  let ctx = CounterCtx
        { letterCount = lettersRef
        , wordCount = wordsRef
        }

Then, we call countWordsAndLettersInText on the input text, and instantiate it using our Counter application monad:

  let counter :: Counter ()
      counter = countWordsAndLettersInText text

Finally, we run counter and print the results:

  runCounter counter ctx
  let printOccurrencesOf name ref = do
        putStrLn name
        Occurrences occurrences <- readIORef ref
        ifor_ occurrences $ \item num ->
          putStrLn $ show item ++ ": " ++ show num
  printOccurrencesOf "Letters" lettersRef
  printOccurrencesOf "Words" wordsRef

Executing this program in GHCi should produce the following output:

>>> wordAndLetterCount "ab ba"
Letters
'a': 2
'b': 2
Words
"ab": 1
"ba": 1

This concludes the example. We invite you to experiment with this library. It is still in an early stage and the API is subject to change. However, your feedback will help to evolve it in a better direction.

A word on free monads

Another solution to many of the same problems has been known for a while: free monads and extensible effects. As it happens, capability and free monads can be formally compared. In this paper, Mauro Jaskelioff and Russell O'Connor prove that free monads are a special case of capabilities (it's not phrased in these terms, of course, but that's what the paper amounts to).

So another way of looking at capability is that it is a library of extensible effects. It makes it possible to write effects which are not available to free monads: free monads can only model algebraic effects, while capabilities do not have such a restriction. For instance the HasCatch capability, giving a function the ability to catch errors, is not algebraic, hence not available to free monads.

However, the most important reason for us to develop capability is that we find this style of programming quite manageable and idiomatic, whereas free-monad programming quickly becomes unwieldy. This is an entirely subjective judgement of course, but we believe that it has slowed the adoption of extensible effects. Absolutely wonderful though they are! As a bonus, capabilities should be more efficient than free-monad-style programming because it doesn't rely on a tagged encoding.

October 04, 2018 12:00 AM

October 03, 2018

Christopher Allen

Wrapping up Haskell Book

I’d like to summarize some of the background to Haskell Programming from First Principles’ development and finalization.

A brief summary of events

Julie and I worked together on the book as equal partners until we started to disagree about its direction. Even though we had different ideas and work needs, I explicitly expressed my intention that we still finish the first edition of the Haskell Book together. The book was in limbo after that point. Work from both of us after that was sporadic and unfocused. There was disagreement on whether the book needed further editing to be finalized and printed. The emails we got about syntax errors, typos and other issues in the book seemed conclusive to me that it did need further editing. I suggested an editor for the book which Julie rejected. It became more and more difficult for Julie and I to communicate with each other about the book which created more tension.

In October 2017, Julie demanded that I enter into negotiations with her over the book’s completion and management. This is when the lawyers started talking. We tried working through different scenarios such as:

  • We would continue as equal partners, but there’s a structured agreement in place for the book’s completion and management.
  • I would sell Julie my rights in the book or vice versa.

There was a lot of back and forth and it took nearly a year of negotiations before we arrived at an agreement. Work on the book was halted because we could not agree and our working relationship had soured. The final agreement that we arrived at was based largely on the terms Julie demanded. It was supposed to be an amicable resolution to our problems. Julie got what she wanted and would benefit from the book’s final publication and the readers would finally get a finished product. The agreement included a confidentiality clause as well. To see Julie attack me online with spiteful insinuations and untruthful comments about me and what happened with the book after she voluntarily signed the agreement is upsetting.

What happens now

The book is going through its final edit. When that is done the final electronic version will be published and then the print version will be sorted out.

I know this site is a bit of a disaster zone, but if you like my writing or think you could learn something useful from me, please take a look at the Haskell book I've been writing. There's a free sample available too!

October 03, 2018 12:00 AM

October 02, 2018

Neil Mitchell

Full-time Haskell jobs in Zürich/New York, at Digital Asset

Summary: We're hiring 3 Haskell programmers and a few other roles too.

I am currently working at Digital Asset, working on our DAML programming language. We're seeking 3 additional Haskell programmers to join, 2 in New York and 1 in Zurich (remote work is not currently an option). There are also a ton of other jobs on our website, including Formal Methods and nix + some Haskell Build Engineering (also available at a more junior level).

What we do

We have built DAML, the Digital Asset Modelling Language, which is the centerpiece of our distributed ledger technology. DAML is a contract language that consists of a strongly-typed purely functional core extended with domain specific constructs to express the flow of rights and obligations underlying today's multi-party business processes. Application Developers using DAML and our distributed ledger technology are supported by the DAML SDK. It provides a type-safe integration of DAML with existing technology like Java, Scala, XML and SQL, and contains DAML Studio, which provides a modern IDE experience to develop, test, and analyse DAML programs.

Working on the Language Engineering team with Digital Asset involves partnering with people around the world (we have centers in New York, Zurich and Sydney), working with exciting new technology, where many of the answers haven't yet been figured out, producing solutions for clients, such as replacing the settlement and clearing platform of the Australian Stock Exchange (ASX), and making sure the end result has the quality required for robust usage. It's challenging work, but the impact could be huge.

What we require

We're looking for the best functional programmers out there, with a strong bias towards Haskell. If not Haskell, then Scala is useful, as other teams in the company write Scala. However, we've hired ML/F# programmers too, with good results. In particular we want:
  • Experienced functional programmer. Either some open-source libraries (Hackage/GitHub) or commercial experience.
  • Writes good, clean, effective code.
  • Existing compiler development experience is useful, if it's with GHC then even better.
We do not require any existing blockchain/DLT/finance knowledge.

How to apply

To apply, email neil.mitchell AT digitalasset.com with a copy of your CV. If you have any questions, email me.
The best way to assess technical ability is to look at code people have written. If you have any packages on Hackage or things on GitHub, please point me at the best projects. If your best code is not publicly available, please describe the Haskell projects you've been involved in.

by Neil Mitchell (noreply@blogger.com) at October 02, 2018 11:21 AM

Matt Parsons

Keep your types small...

… and your bugs smaller

In my post “Type Safety Back and Forth”, I discussed two different techniques for bringing type safety to programs that may fail. On the one hand, you can push the responsibility forward. This technique uses types like Either and Maybe to report a problem with the inputs to the function. Here are two example type signatures:

safeDivide
    :: Int
    -> Int
    -> Maybe Int

lookup
    :: Ord k
    => k
    -> Map k a
    -> Maybe a

If the second parameter to safeDivide is 0, then we return Nothing. Likewise, if the given k is not present in the Map, then we return Nothing.

On the other hand, you can push it back. Here are those functions, but with the safety pushed back:

safeDivide
    :: Int
    -> NonZero Int
    -> Int

lookupJustified
    :: Key ph k
    -> Map ph k a
    -> a

With safeDivide, we require the user pass in a NonZero Int – a type that guarantees that the underlying value is not 0. With lookupJustified, the ph type guarantees that the Key is present in the Map, so we can pull the resulting value out without requiring a Maybe. (Check out the tutorial for justified-containers, it is pretty awesome)

Expansion and Restriction

“Type Safety Back and Forth” uses the metaphor of “pushing” the responsibility in one of two directions:

  • forwards: the caller of the function is responsible for handling the possible error output
  • backwards: the caller of the function is required to provide correct inputs

However, this metaphor is a bit squishy. We can make it more precise by talking about the “cardinality” of a type – how many values it can contain. The type Bool can contain two values – True and False, so we say it has a cardinality of 2. The type Word8 can express the numbers from 0 to 255, so we say it has a cardinality of 256.

The type Maybe a has a cardinality of 1 + a. We get a “free” value Nothing :: Maybe a. For every value of type a, we can wrap it in Just.

The type Either e a has a cardinality of e + a. We can wrap all the values of type e in Left, and then we can wrap all the values of type a in Right.

The first technique – pushing forward – is “expanding the result type.” When we wrap our results in Maybe, Either, and similar types, we’re saying that we can’t handle all possible inputs, and so we must have extra outputs to safely deal with this.

Let’s consider the second technique. Specifically, here’s NonZero and NonEmpty, two common ways to implement it:

newtype NonZero a 
    = UnsafeNonZero 
    { unNonZero :: a 
    }

nonZero :: (Num a, Eq a) => a -> Maybe (NonZero a)
nonZero 0 = Nothing
nonZero i = Just (UnsafeNonZero i)

data NonEmpty a = a :| [a]

nonEmpty :: [a] -> Maybe (NonEmpty a)
nonEmpty []     = Nothing
nonEmpty (x:xs) = Just (x :| xs)
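As a usage sketch, putting the pushed-back safeDivide from above together with the smart constructor (unNonZero comes from the newtype):

safeDivide :: Int -> NonZero Int -> Int
safeDivide x d = x `div` unNonZero d

-- the caller proves non-zero-ness exactly once, at the boundary
example :: Maybe Int
example = safeDivide 10 <$> nonZero 2   -- Just 5; nonZero 0 yields Nothing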

What is the cardinality of these types?

NonZero a represents “the type of values a such that the value is not equal to 0.” NonEmpty a represents “the type of lists of a that are not empty.” In both of these cases, we start with some larger type and remove some potential values. So the type NonZero a has the cardinality a - 1, and the type NonEmpty a has the cardinality [a] - 1.

Interestingly enough, [a] has an infinite cardinality, so [a] - 1 seems somewhat strange – it is also infinite! Math tells us that these are even the same infinity. So it’s not the mere cardinality that helps – it is the specific value(s) that we have removed that makes this type safer for certain operations.

These are custom examples of refinement types. Another closely related idea is quotient types. The basic idea here is to restrict the size of our inputs. Slightly more formally,

  • Forwards: expand the range
  • Backwards: restrict the domain

Constraints Liberate

Runar Bjarnason has a wonderful talk titled Constraints Liberate, Liberties Constrain. The big idea of the talk, as I see it, is this:

When we restrict what we can do, it’s easier to understand what we can do.

I feel there is a deep connection between this idea and Rich Hickey’s talk Simple Made Easy. In both cases, we are focusing on simplicity – on cutting away the inessential and striving for more elegant ways to express our problems.

Pushing the safety forward – expanding the range – does not make things simpler. It provides us with more power, more options, and more possibilities. Pushing the safety backwards – restricting the domain – does make things simpler. We can use this technique to take away the power to get it wrong, the options that aren’t right, and the possibilities we don’t want.

Indeed, if we manage to restrict our types sufficiently, there may be only one implementation possible! The classic example is the identity function:

identity :: a -> a
identity a = a

This is the only implementation of this function that satisfies the type signature (ignoring undefined, of course). In fact, for any function with a sufficiently precise type signature, there is a way to automatically derive the function! Joachim Breitner’s justDoIt is a fascinating utility that can solve these implementations for you.
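
To see how tightly a precise signature can pin things down, here are two more signatures that (again ignoring undefined) each admit exactly one total implementation; nothing below is specific to justDoIt, it is just parametricity at work:

first :: (a, b) -> a
first (a, _) = a

compose :: (b -> c) -> (a -> b) -> a -> c
compose f g x = f (g x)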

With sufficiently fancy types, the computer can write even more code for you. The programming language Idris can write well-defined functions like zipWith and transpose for length-indexed lists nearly automatically!

Restrict the Range

I see this pattern and I am compelled to fill it in:

          Restrict   Expand
  Range              :(
  Domain   :D

So far I’ve talked about expanding the range and restricting the domain. Expanding the domain seems silly to do – we accept more possible values than we know what to do with. This is clearly not going to make it easier or simpler to implement our programs. However, there are many functions in Haskell’s standard library that have a domain that is too large. Consider:

take :: Int -> [a] -> [a]

Int, as a domain, is both too large and too small. It allows us to provide negative numbers: what does it even mean to take -3 elements from a list? As Int is a finite type, and [a] is infinite, we are restricted to only using this function with sufficiently small Ints. A closer fit would be take :: Natural -> [a] -> [a]. Natural allows any non-negative whole number, and perfectly expresses the reasonable domain. Expanding the domain isn’t desirable, as we might expect.
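
A minimal sketch of the Natural-domained version (takeNat is a made-up name, not something from base):

import Numeric.Natural (Natural)

-- The domain is restricted to non-negative counts: a caller simply
-- cannot ask to take -3 elements any more.
takeNat :: Natural -> [a] -> [a]
takeNat 0 _      = []
takeNat _ []     = []
takeNat n (x:xs) = x : takeNat (n - 1) xs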

base has functions with a range that is too large, as well. Let’s consider:

length :: [a] -> Int

This has many of the same problems as take – a list with too many elements will overflow the Int, and we won’t get the right answer. Additionally, we have a guarantee that we forget – a length for any container must be non-negative! We can more correctly express this type by restricting the output type:

length :: [a] -> Natural
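
base can already express this: Data.List.genericLength works at any Num result type, so a Natural-returning length is a one-liner (lengthNat is a made-up name):

import Data.List (genericLength)
import Numeric.Natural (Natural)

-- A length that cannot be negative and will not overflow the way a
-- fixed-size Int does.
lengthNat :: [a] -> Natural
lengthNat = genericLength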

A perfect fit

The more precisely our types describe our program, the fewer ways we have to go wrong. Ideally, we can provide a correct output for every input, and we use a type that tightly describes the properties of possible outputs.

October 02, 2018 12:00 AM

October 01, 2018

Brent Yorgey

Monoidal sparks

While at ICFP in St. Louis this past week, I discovered an interesting construction on monoids that no one I talked to (including Kenny Foner and Edward Kmett) seemed to have heard of or thought about before. In this post I’ll present the abstract construction itself, and in another post I’ll explain the particular context in which I first came up with it. (Normally I would put the examples first, before explaining the idea in general; but I only know of one example so far, and I’m curious to see if anyone will come up with more. I don’t want to bias people’s thinking too much with my one example!)

The bigger context here is thinking about different ways of putting a monoid structure on a product type A \times B, assuming A and B are themselves monoids. Previously I knew of two ways to do this. The obvious way is to just have the two sides operate in parallel, without interacting at all: (a_1,b_1) \diamond (a_2,b_2) = (a_1 \diamond a_2, b_1 \diamond b_2) and so on. Alternatively, if A acts on B, we can form the semidirect product.

But here is a third way. Suppose (A, \varepsilon_A, \diamond) is a monoid, and (B, \varepsilon_B, \oplus) is a commutative monoid. To connect them, we also suppose there is another binary operation - \cdot - : A \to A \to B, which I will pronounce “spark”. The way I like to think of it is that two values of type A, in addition to combining to produce another A via the monoid operation, also produce a “spark” of type B. That is, values of type B somehow capture information about the interaction between pairs of A values.

Sparking needs to interact sensibly with the monoid structures on A and B: in particular, fixing either argument must result in a monoid homomorphism from A to B. That is, for any choice of a \in A, we have a \cdot \varepsilon_A = \varepsilon_B (i.e. \varepsilon_A is boring and can never produce a nontrivial spark with anything else), and a \cdot (a_1 \diamond a_2) = (a \cdot a_1) \oplus (a \cdot a_2) (i.e. sparking commutes with the monoid operations). Similar laws should hold if we fix the second argument of - \cdot - instead of the first.

Given all of this, we can now define a monoid on A \times B as follows:

(a_1, b_1) \otimes (a_2, b_2) = (a_1 \diamond a_2, b_1 \oplus b_2 \oplus (a_1 \cdot a_2))

That is, we combine the A values normally, and then we combine the B values together with the “spark” of type B produced by the two A values.

Let’s see that this does indeed define a valid monoid:

  • The identity is (\varepsilon_A, \varepsilon_B), since

    (\varepsilon_A, \varepsilon_B) \otimes (a,b) = (\varepsilon_A  \diamond a, \varepsilon_B \oplus b \oplus (\varepsilon_A \cdot a)) =  (a, b \oplus \varepsilon_B) = (a,b)

    Notice how we used the identity laws for A (once) and B (twice), as well as the law that says \varepsilon_A \cdot a =  \varepsilon_B. The proof that (\varepsilon_A, \varepsilon_B) is a right identity for \otimes is similar.

  • To see that the combining operation is associative, we can reason as follows:

    \begin{array}{rcl} & & ((a_1,b_1) \otimes (a_2,b_2)) \otimes (a_3,b_3) \\[0.5em] & = & \qquad \text{\{expand definition of \begin{math}\otimes\end{math}\}} \\[0.5em] & & (a_1 \diamond a_2, b_1 \oplus b_2 \oplus (a_1 \cdot a_2)) \otimes (a_3,b_3) \\[0.5em] & = & \qquad \text{\{expand definition of \begin{math}\otimes\end{math} again\}} \\[0.5em] & & (a_1 \diamond a_2 \diamond a_3, b_1 \oplus b_2 \oplus (a_1 \cdot a_2) \oplus b_3 \oplus ((a_1 \diamond a_2) \cdot a_3)) \\[0.5em] & = & \qquad \text{\{\begin{math}- \cdot a_3\end{math} is a homomorphism, \begin{math}\oplus\end{math} is commutative\}} \\[0.5em] & & (a_1 \diamond a_2 \diamond a_3, b_1 \oplus b_2 \oplus b_3 \oplus (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3)) \end{array}

    and

    \begin{array}{rcl} & & (a_1,b_1) \otimes ((a_2,b_2) \otimes (a_3,b_3)) \\[0.5em] & = & \qquad \text{\{expand definition of \begin{math}\otimes\end{math}\}} \\[0.5em] & & (a_1,b_1) \otimes (a_2 \diamond a_3, b_2 \oplus b_3 \oplus (a_2 \cdot a_3)) \\[0.5em] & = & \qquad \text{\{expand definition of \begin{math}\otimes\end{math} again\}} \\[0.5em] & & (a_1 \diamond a_2 \diamond a_3, b_1 \oplus (b_2 \oplus b_3 \oplus (a_2 \cdot a_3)) \oplus (a_1 \cdot (a_2 \diamond a_3))) \\[0.5em] & = & \qquad \text{\{\begin{math}a_1 \cdot -\end{math} is a homomorphism, \begin{math}\oplus\end{math} is commutative\}} \\[0.5em] & & (a_1 \diamond a_2 \diamond a_3, b_1 \oplus b_2 \oplus b_3 \oplus (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3)) \end{array}

    In a formal proof one would have to also explicitly note uses of associativity of \diamond and \oplus but I didn’t want to be that pedantic here.

In addition, if A is a commutative monoid and the spark operation - \cdot - commutes, then the resulting monoid (A \times B, \otimes) will be commutative as well.
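
For the Haskell-inclined, here is one possible rendering of the construction; the class name Spark and the wrapper Sparked are my own names for this sketch, not anything standard, and the laws (commutativity of the second monoid, and the homomorphism conditions on the spark operation) are left as obligations on instances:

{-# LANGUAGE MultiParamTypeClasses #-}

-- b is assumed to be a commutative monoid, and spark must be a monoid
-- homomorphism in each argument separately.
class (Monoid a, Monoid b) => Spark a b where
  spark :: a -> a -> b

newtype Sparked a b = Sparked { getSparked :: (a, b) }

instance Spark a b => Semigroup (Sparked a b) where
  Sparked (a1, b1) <> Sparked (a2, b2) =
    Sparked (a1 <> a2, b1 <> b2 <> spark a1 a2)

instance Spark a b => Monoid (Sparked a b) where
  mempty = Sparked (mempty, mempty)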

The proof of associativity gives us a bit of insight into what is going on here. Notice that when reducing (a_1,b_1) \otimes (a_2,b_2) \otimes (a_3,b_3), we end up with all possible sparks between pairs of a’s, i.e. (a_1 \cdot a_2) \oplus (a_1 \cdot a_3) \oplus (a_2 \cdot a_3), and one can prove that this holds more generally. In particular, if we start with a list of A values:

[a_1, a_2, \dots, a_n],

then inject them all into A \times B by pairing them with \varepsilon_B:

[(a_1, \varepsilon_B), (a_2, \varepsilon_B), \dots, (a_n, \varepsilon_B)],

and finally fold this list with \otimes, the second element of the resulting pair is

\displaystyle \bigoplus_{i \neq j} (a_i \cdot a_j),

that is, the combination (via the monoid on B) of the sparks between all possible pairings of the a_i. Of course there are O(n^2) such pairings: the point is that whereas computing this via a straightforward fold over the list may well take O(n^2) time, by using a balanced fold (i.e. splitting the list in half recursively and then combining from the leaves up) it may be possible to compute it in O(n \log n) time instead (at least, this is the case for the example I have in mind!).
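
In the hypothetical Haskell encoding sketched earlier, this injection-and-fold is just a foldMap (a naive left-to-right fold; the balanced fold mentioned above would need an explicit divide-and-conquer instead):

-- Combine the sparks of all pairs of elements of a list.
allSparks :: Spark a b => [a] -> b
allSparks = snd . getSparked . foldMap (\a -> Sparked (a, mempty))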

Please leave a comment if you can think of any instances of this pattern, or if you have seen this pattern discussed elsewhere. In a future post I will write about the instance I had in mind when coming up with this generalized formulation.

by Brent at October 01, 2018 04:15 PM

Magnus Therning

Using a configuration in Scotty

At work we’re only now getting around to putting correlation IDs into use. We write most of our code in Clojure, but since I’d really like to use more Haskell at work I thought I’d dive into Scotty and see how to deal with logging, and especially how to get correlation IDs into the logs.

The types

For configuration I decided to use the reader monad inside ActionT from Scotty. Enter Chell:
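
A minimal sketch of what this can look like, built on Scotty's transformer variants (the exact definitions here are my assumptions, not the post's code):

import Control.Monad.Reader (ReaderT, runReaderT)
import Data.Text.Lazy (Text)
import qualified Web.Scotty.Trans as S

-- ScottyT/ActionT over a ReaderT carrying the configuration.
type ChellM cfg = S.ScottyT Text (ReaderT cfg IO)
type ChellActionM cfg = S.ActionT Text (ReaderT cfg IO)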

In order to run it I wrote a function corresponding to scotty:
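
A sketch of such a runner, assuming the types above (the port is hardcoded for brevity):

chell :: cfg -> ChellM cfg () -> IO ()
chell cfg = S.scottyT 3000 (\r -> runReaderT r cfg)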

Correlation ID

To deal with the correlation ID each incoming request should be checked for the HTTP header X-Correlation-Id and if present it should be used during logging. If no such header is present then a new correlation ID should be created. Since it’s per request it feels natural to create a WAI middleware for this.

The easiest way I could come up with was to push the correlation ID into the request’s headers before it’s passed on:
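
A sketch of such a middleware, assuming the uuid package for generating fresh IDs (the name correlationIdMiddleware is mine):

{-# LANGUAGE OverloadedStrings #-}
import Network.Wai (Middleware, requestHeaders)
import qualified Data.UUID as UUID
import qualified Data.UUID.V4 as UUID

correlationIdMiddleware :: Middleware
correlationIdMiddleware app req respond =
    case lookup "X-Correlation-Id" (requestHeaders req) of
        -- a correlation ID is already present: pass the request on untouched
        Just _  -> app req respond
        -- otherwise: generate a fresh one and push it into the headers
        Nothing -> do
            fresh <- UUID.nextRandom
            let hdrs = ("X-Correlation-Id", UUID.toASCIIBytes fresh)
                     : requestHeaders req
            app req{ requestHeaders = hdrs } respond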

It also turns out to be useful to have both a default correlation ID and a function for pulling it out of the headers:
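
Something along these lines (the names and the fallback value are my assumptions):

{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString (ByteString)
import Data.Maybe (fromMaybe)
import Network.Wai (Request, requestHeaders)

defaultCorrelationId :: ByteString
defaultCorrelationId = "no-correlation-id"

-- Pull the correlation ID out of the request, falling back to the
-- default when the middleware has not put one there.
getCorrelationId :: Request -> ByteString
getCorrelationId req =
    fromMaybe defaultCorrelationId $
        lookup "X-Correlation-Id" (requestHeaders req)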

Getting the correlation ID into the configuration

Since the correlation ID should be picked out of the request when handling every request, it's useful to have it in the configuration when running the ChellActionM actions. However, since the correlation ID isn't available when running the reader (the call to runReaderT in chell) something else is called for. When looking around I found local (and later I was pointed to the more general withReaderT) but it doesn't have a suitable type. After some help on Twitter I arrived at withConfig which allows me to run an action in a modified configuration:

Making it handy to use

Armed with this I can put together some functions to replace Scotty’s get, post, etc. With a configuration type like this:
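
For instance (the exact fields are guesses; the prose only implies that the configuration carries at least the correlation ID):

import Data.ByteString (ByteString)

data Config = Config
    { cfgCorrelationId :: ByteString  -- filled in per request
    , cfgServiceName   :: ByteString  -- hypothetical extra field
    }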

The modified get looks like this (Scotty’s original is S.get)

With this in place I can use the simpler ReaderT Config IO for inner functions that need to log.

October 01, 2018 12:00 AM

September 30, 2018

ERDI Gergo

Composable CPU descriptions in CλaSH, and wrap-up of RetroChallenge 2018/09

My last post ended with some sample CλaSH code illustrating the spaghetti-ness you can get yourself into if you try to describe a CPU directly as a function (CPUState, CPUIn) -> (CPUState, CPUOut). I promised some ideas for improving that code.

To start off gently, first of all we can give names to some common internal operations to help readability. Here's the code from the previous post rewritten with a small kit of functions:

intensionalCPU (s0, CPUIn{..}) = case phase s of
    WaitKeyPress reg ->
        let s' = case cpuInKeyEvent of
                Just (True, key) -> goto Fetch1 . setReg reg key $ s
                _ -> s
            out = def{ cpuOutMemAddr = pc s' }
        in (s', out)                   
    Fetch1 ->
        let s' = goto Exec s{ opHi = cpuInMem, pc = succ $ pc s }
            out = def{ cpuOutMemAddr = pc s' }
        in (s', out)
  where
    s | cpuInVBlank = s0{ timer = fromMaybe 0 . predIdx $ timer s0 }
      | otherwise = s0
    

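The kit itself isn't spelled out above; plausible definitions of the two helpers it uses would be the following (my reconstruction; the monadic versions used later would wrap these in modify):

goto :: Phase -> CPUState -> CPUState
goto ph s = s{ phase = ph }

setReg :: Reg -> Word8 -> CPUState -> CPUState
setReg reg val s = s{ registers = replace reg val (registers s) }
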
Using a State monad

To avoid the possibility of screwing up the threading of the internal state, we can use the State CPUState monad:

stateCPU :: CPUIn -> State CPUState CPUOut      
stateCPU CPUIn{..} = do
    when cpuInVBlank $
        modify $ \s -> s{ timer = fromMaybe 0 . predIdx $ timer s }
    gets phase >>= \case 
      WaitKeyPress reg -> do
        forM_ cpuInKeyEvent $ \(pressed, key) -> when pressed $ do
            setReg reg key
            goto Fetch1
        pc <- gets pc
        return def{ cpuOutMemAddr = pc }
      Fetch1 -> do
        modify $ \s -> s{ opHi = cpuInMem, pc = succ $ pc s }
        goto Exec
        pc <- gets pc
        return def{ cpuOutMemAddr = pc }
    

At this point, the state-accessing actions either come from the kit, because they are useful in many parts of our CPU description (like setReg), or they read or change individual fields of the CPUState record. This second group can benefit from using lenses:

stateCPU CPUIn{..} = do
    when cpuInVBlank $
        timer %= fromMaybe 0 . predIdx
    use phase >>= \case 
      WaitKeyPress reg -> do
        forM_ cpuInKeyEvent $ \(pressed, key) -> when pressed $ do
            setReg reg key
            goto Fetch1
        pc <- use pc
        return def{ cpuOutMemAddr = pc }
      Fetch1 -> do
        opHi .= cpuInMem
        pc %= succ
        goto Exec
        pc <- use pc
        return def{ cpuOutMemAddr = pc }
    

Problems with composability

While this code is much more readable than the original one, and allows describing the internal state changes piecewise, it still has a problem with composability. To illustrate this, let's add some more functionality by implementing some CPU instructions (simplified a bit from a real CHIP-8, which can only write to memory through a save-registers instruction, and even that is addressed by a dedicated pointer register; accessing the framebuffer is likewise done only via multiple-byte blitting).

stateCPU CPUIn{..} = do
    when cpuInVBlank $
        timer %= fromMaybe 0 . predIdx
    use phase >>= \case 
      WaitKeyPress reg -> do
        forM_ cpuInKeyEvent $ \(pressed, key) -> when pressed $ do
            setReg reg key
            goto Fetch1
        pc <- use pc    
        return def{ cpuOutMemAddr = pc }
      Idle -> do
        goto Fetch1
        pc <- use pc    
        return def{ cpuOutMemAddr = pc }
      Fetch1 -> do
        opHi .= cpuInMem
        pc %= succ
        goto Exec
        pc <- use pc    
        return def{ cpuOutMemAddr = pc }
      Exec -> do
        opLo .= cpuInMem
        pc %= succ
        goto Fetch1
        decode >>= \case
          WaitKey reg -> do
            goto $ WaitKeyPress reg
            pc <- use pc    
            return def{ cpuOutMemAddr = pc }
          LoadImm reg imm -> do
            setReg reg imm
            pc <- use pc    
            return def{ cpuOutMemAddr = pc }
          WriteMem addr reg -> do
            val <- getReg reg
            goto Idle -- flush write to RAM before starting next fetch
            return def{ cpuOutMemAddr = addr, cpuOutMemWrite = Just val }           
          SetPixel regX regY -> do
            x <- getReg regX
            y <- getReg regY
            pc <- use pc
            return def{ cpuOutMemAddr = pc, cpuOutFBAddr = (x, y), cpuOutFBWrite = Just True }          
          -- And a lot more opcodes here
    

Since there are going to be a lot of opcodes to support, it could seem like a good idea to refactor the Exec state to be a separate function, leaving only the implementation of the other control phases in the main function:

stateCPU cpuIn@CPUIn{..} = do
    when cpuInVBlank $
        timer %= fromMaybe 0 . predIdx
    use phase >>= \case 
      WaitKeyPress reg -> do
        forM_ cpuInKeyEvent $ \(pressed, key) -> when pressed $ do
            setReg reg key
            goto Fetch1
        pc <- use pc    
        return def{ cpuOutMemAddr = pc }
      Idle -> do
        goto Fetch1
        pc <- use pc    
        return def{ cpuOutMemAddr = pc }
      Fetch1 -> do
        opHi .= cpuInMem
        pc %= succ
        goto Exec
        pc <- use pc    
        return def{ cpuOutMemAddr = pc }
      Exec -> do
        opLo .= cpuInMem
        pc %= succ
        goto Fetch1
        decode >>= exec cpuIn
        
exec CPUIn{..} (WaitKey reg) = ... -- same as previous version
exec CPUIn{..} (LoadImm reg imm) = ... -- same as previous version
exec CPUIn{..} (WriteMem addr reg) = ...
exec CPUIn{..} (SetPixel regX regY) = ...
    

However, this becomes problematic if we want to set some parts of the CPUOut result in a generic way, outside exec. Suppose we implement sound support, which in CHIP-8 is done via another 60Hz timer:

stateCPU cpuIn@CPUIn{..} = do
    when cpuInVBlank $ do
        timer %= fromMaybe 0 . predIdx
        soundTimer %= fromMaybe 0 . predIdx
    soundOn <- uses soundTimer (/= 0)
    def <- return def{ cpuOutSound = soundOn }
    

This takes care of setting cpuOutSound in the branches for WaitKeyPress, Idle and Fetch1 (albeit in an ugly way, by shadowing def), but now this needs to be passed as an extra parameter to exec (or the code to set cpuOutSound needs to be duplicated).

Assembling the output piecewise

So what we want is a way to specify the resulting CPUOut such that

  1. Reasonable defaults (like reading the next instruction from RAM via cpuOutMemAddr = pc) are applied automatically, without requiring any code. Note that the defaults shouldn't need to be static; they can depend on the state.
  2. Various parts of the CPU implementation can contribute to the final CPUOut result that is visible from the outside.

This has led me to a design where the CPU is described by a type CPU i s o = RWS i (Endo o) s:

  • The State axis is, of course, the internal state, i.e. the CPU registers, as before.
  • The Reader axis provides easy access to the inputs (as of the current clock cycle) to (potentially deeply nested) functions, without having to pass anything around by hand.
  • The Writer axis collects an Endo CPUOut; in other words, a function CPUOut -> CPUOut with "later" functions applied after (and thus overriding) "earlier" functions. Thus, the final collected Endo CPUOut can be applied to a CPUOut computed as the default output for the final CPUState:
    runCPU :: (s -> o) -> CPU i s o () -> (i -> State s o)
    runCPU mkDef cpu inp = do
        s <- get
        let (s', f) = execRWS (unCPU cpu) inp s
        put s'
        def <- gets mkDef
        return $ appEndo f def    
            

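The CPU newtype itself could plausibly be spelled like this (my guess, not code from the post; in particular, the bare tell $ \out -> ... used below suggests a small helper that wraps the function in Endo):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.RWS
import Data.Monoid (Endo(..))

newtype CPU i s o a = CPU{ unCPU :: RWS i (Endo o) s a }
    deriving (Functor, Applicative, Monad,
              MonadReader i, MonadWriter (Endo o), MonadState s)

-- read the current cycle's inputs
input :: CPU i s o i
input = ask

-- what tell $ \out -> ... presumably goes through
output :: (o -> o) -> CPU i s o ()
output = tell . Endo
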
With this setup, we can rewrite our CPU implementation as:

cpu = do
    CPUIn{..} <- input
    when cpuInVBlank $ do
        timer %= fromMaybe 0 . predIdx
        soundTimer %= fromMaybe 0 . predIdx
    soundOn <- uses soundTimer (/= 0)
    when soundOn $ do
        tell $ \out -> out{ cpuOutSound = True }

    use phase >>= \case 
      WaitKeyPress reg -> do
        forM_ cpuInKeyEvent $ \(pressed, key) -> when pressed $ do
            setReg reg key
            goto Fetch1
      Idle -> do
        goto Fetch1
      Fetch1 -> do
        opHi .= cpuInMem
        pc %= succ
        goto Exec
      Exec -> do
        opLo .= cpuInMem
        pc %= succ
        goto Fetch1
        decode >>= exec

exec (WaitKey reg) = do
    goto $ WaitKeyPress reg
exec (LoadImm reg imm) = do
    setReg reg imm
exec (WriteMem addr reg) = do
    val <- getReg reg
    writeMem addr val
    goto Idle -- flush write to RAM before starting next fetch
exec (SetPixel regX regY) = do
    x <- getReg regX
    y <- getReg regY
    writeFB (x, y) True
    

Most branches don't need to do anything to get the correct final CPUOut; and just like we were able to name the State kit of setReg &c, we can define

writeMem addr val = tell $ \out -> out{ cpuOutMemAddr = addr, cpuOutMemWrite = Just val }
writeFB addr val = tell $ \out -> out{ cpuOutFBAddr = addr, cpuOutFBWrite = Just val }
    

The system described above is implemented here for the reusable library definitions and here for CHIP-8 specifically; the version that uses lenses is in the branch linked above, since I am not 100% convinced yet that the cognitive overhead of the lens library is worth it once enough kit is developed around state access.

Future directions

There are two questions about the above that I feel that haven't been resolved yet:

  • Is it a good idea to include ContT in the CPU monad stack? On one hand, it is known to be problematic to compose ContT with Reader/Writer; on the other hand, here we only need ask and tell, not local or pass. On the gripping hand, ContT might be useful, for example, to describe CPU sub-states that wait for some external output before proceeding normally, as an early exit from the CPU monad.
  • Can the CPUState and Phase types themselves be defined piecewise? Is there some unholy union of extensible effects, extensible records and extensible sum types that would allow us to, informally speaking, define the WaitKeyPress phase together with its handler, in the middle of exec where the WaitKey instruction is implemented?

RetroChallenge 2018/09 wrap-up

All in all, even though I couldn't get as far as I originally hoped, I am glad I at least managed to build a CHIP-8 computer in CλaSH that can boot up on real hardware and play original CHIP-8 games; that's pretty much what I originally set out to do. The RetroChallenge format of detailed blog posts concurrent with the actual development work being done is quite taxing on me, because I am slow to write these posts; overall, I have probably spent about half and half of my RetroChallenge time on coding/design vs. writing about it. But the fixed one-month timeframe and the great progress reports by the other participants have been great motivators; I would probably try something next time where I can expect fewer unknown unknowns.

There is still work to be done even on this CHIP-8 computer: I've found some games that don't work as expected when they try to use the built-in hexadecimal digit font; and there are of course the previously discussed performance problems inherent in addressing the frame buffer as single bits. Also, I would like to adapt it to use a real four-by-four keypad (by scanning the key rows from the FPGA) and a PCD8544 monochrome matrix LCD.

And then, of course, further exploration of the design space of CPU descriptions discussed above; and then moving on to a "real" CPU, probably by rewriting my Lava 6502 core. Also, I will need to get a new FPGA dev board with a chip that is supported by a contemporary toolchain, so that I won't get bogged down by problems like the Spartan-6 synthesis bug.

September 30, 2018 09:15 PM

September 29, 2018

Stackage Blog

Announcing stackage nightly snapshots with ghc-8.6.1

I am pleased to announce that Stackage nightly snapshots now use ghc-8.6.1. Stack users can simply use resolver: nightly-2018-09-29 in a project’s stack.yaml file in order to build and test with ghc-8.6.1.

As is usual when a new ghc is released, many packages were removed from the nightly snapshot due to version constraints and build failures. We expect many packages to return quickly to the build plan, and we hope that the existing snapshots will help maintainers get back in by making it easy to build and test their packages.

Please check to see if your favorite packages are in the latest nightly snapshot, and open up a PR or an issue on Stackage if you find anything missing which is known to work with ghc-8.6.1.

As ghc moves to its new 6-month release cycle, I’d like to encourage everyone to be proactive in bumping bounds and testing packages you rely on with the new ghc alphas and release candidates as they come out. Haskell has a remarkable ecosystem; let’s strive to make it even better!

September 29, 2018 12:01 AM

September 24, 2018

Ken T Takusagawa

[zvyswyrq] Fun with accents

Cès̋ hll m̀a̋k lw̏ r̋p̏e̊c̋tın̑g̀ ån̄ s̃tbhṅt öf liḡi, r̃ p̏r̈hıbıtiğ th fèé ĕx́ŕc̄iśë théf; õ ăbr̊idg̈ıthe̋ fr̊èdȏ f ȅćh, f thẽ p̊r̊; thig̈ht f thĕ p̈ēŏp̋l p̈e̋āc̄èảblỹ to̊ äbl, d tő p̈tıtiỏń thė Gŕn̉t f ä ŕde̋s̋ṡ f g̈r̋is̃.

Unicode combining diacritics can put an accent on any character.  We limit to characters without ascenders.  i and j are treated specially as explained below.  We also experimented with diacritical marks below (cedilla, ogonek), but it caused formatting problems with some fonts when there was both a mark above and below: the mark below was (probably correctly) off center, but it caused the mark above to incorrectly go off center also.  Therefore, we did underline via HTML, and only below letters without descenders.

We experimented with putting accents on dotless i and j, but formatting got messed up with some fonts: the accents were wider than the character so overlapped with accents on neighboring characters.  (This is only a problem in proportional fonts, not monospace.)  Therefore the only "accent" possible on i and j are the removal of the dot.  Possible problem: if a font has a "f i" ligature, it might be difficult to distinguish the ligature versus f followed by a dotless i: dotted= fi fj, dotless= fı fȷ

The general idea is that ascenders and descenders in some letters force space to be reserved for them for all letters, so fill empty space with an accent or underline to increase information density.  (Of course, it will become harder to read, as increased density often does.)  Encoding information as a pattern of accents on some "carrier" text is mixed-radix base conversion.

There was a choice between the curved breve/inverted breve pair or the angled circumflex/caron pair: probably don't use both because they look too similar.  We chose curvy because there were lots of angles already with the single and double acute and grave.  Tilde and macron look somewhat similar, but we used both.  We chose [ 0x300, 0x301, 0x306, 0x311, 0x303, 0x304, 0x30b, 0x30f, 0x308, 0x307, 0x30a, 0x309 ].

Although not relevant to the above example, some fonts have descenders for capital Q and J so should not be underlined.  QJQJOIOI.  Same using CSS instead of the HTML U tag (some browsers do CSS differently than <u>): QJQJOIOI.

Haskell source code to add random accents, and to enumerate the range of possible characters subject to the constraints we've chosen.  Editorially: Unicode is supposed to capture all the world's languages that actually exist.  It isn't supposed to be used to invent new characters not used in any real language, a capability we are abusing here.

a à á ă ȃ ã ā a̋ ȁ ä ȧ å ả a b b c c̀ ć c̆ c̑ c̃ c̄ c̋ c̏ c̈ ċ c̊ c̉ c d d e è é ĕ ȇ ẽ ē e̋ ȅ ë ė e̊ ẻ e f f g g̀ ǵ ğ g̑ g̃ ḡ g̋ g̏ g̈ ġ g̊ g̉ h h i ı i ı j ȷ k k l l m m̀ ḿ m̆ m̑ m̃ m̄ m̋ m̏ m̈ ṁ m̊ m̉ m n ǹ ń n̆ n̑ ñ n̄ n̋ n̏ n̈ ṅ n̊ n̉ n o ò ó ŏ ȏ õ ō ő ȍ ö ȯ o̊ ỏ o p p̀ ṕ p̆ p̑ p̃ p̄ p̋ p̏ p̈ ṗ p̊ p̉ q q̀ q́ q̆ q̑ q̃ q̄ q̋ q̏ q̈ q̇ q̊ q̉ r r̀ ŕ r̆ ȓ r̃ r̄ r̋ ȑ r̈ ṙ r̊ r̉ r s s̀ ś s̆ s̑ s̃ s̄ s̋ s̏ s̈ ṡ s̊ s̉ s t t u ù ú ŭ ȗ ũ ū ű ȕ ü u̇ ů ủ u v v̀ v́ v̆ v̑ ṽ v̄ v̋ v̏ v̈ v̇ v̊ v̉ v w ẁ ẃ w̆ w̑ w̃ w̄ w̋ w̏ ẅ ẇ ẘ w̉ w x x̀ x́ x̆ x̑ x̃ x̄ x̋ x̏ ẍ ẋ x̊ x̉ x y ỳ ý y̆ y̑ ỹ ȳ y̋ y̏ ÿ ẏ ẙ ỷ z z̀ ź z̆ z̑ z̃ z̄ z̋ z̏ z̈ ż z̊ z̉ z A A B B C C D D E E F F G G H H I I J K K L L M M N N O O P P Q R R S S T T U U V V W W X X Y Y Z Z

Total number of possibilities: 460.

More capital letters possible.

by Ken (noreply@blogger.com) at September 24, 2018 08:16 PM

September 23, 2018

ERDI Gergo

CPU modeling in CλaSH

My entry for RetroChallenge 2018/09 is building a CHIP-8 computer. Previously, I've talked in detail about the video signal generator and the keyboard interface; the only part still missing is the CPU.

The CHIP-8 instruction set

Since the CHIP-8 was originally designed to be an interpreted language run as a virtual machine, some of its instructions are quite high-level. For example, the framebuffer is modified via a dedicated blitting instruction; there is a built-in random number generator; and instructions to manipulate two 60 Hz timers. Other instructions are more in line with what one would expect to see in a CPU, and implement basic arithmetic such as addition or bitwise AND. There is also a generic escape hatch instruction but that doesn't really apply to hardware implementations.

The CPU has 16 general-purpose 8-bit registers V0…VF; register VF is also used to report flag results like overflow from arithmetic operations, or collision during blitting. Most instructions operate on these general registers. Since the available memory is roughly 4K, these 8-bit registers wouldn't be too useful as pointers. Instead, there is a 12-bit Index register that is used as the implicit address argument to memory-accessing instructions.

For flow control, the program counter needs 12 bits as well; the CHIP-8 is a von Neumann machine. Furthermore, it has CALL / RET instructions backed by a call-only stack (there is no argument passing or local variables).

Modeling the CPU's internal state

We can collect all of the registers described above into a single Haskell datatype. I have also added two 8-bit registers for the high and low byte of the current instruction, but in retrospect it would be enough to just store the high byte, since the low byte is coming from RAM exactly when we need to dispatch on it anyway. The extra phase register is to distinguish between execution phases such as fetching the first byte of the next instruction, or for instructions that are implemented in multiple clock cycles, like clearing the frame buffer (more on that below).

type Addr = Unsigned 12
type Reg = Index 16

data CPUState = CPUState
    { opHi, opLo :: Word8
    , pc, ptr :: Addr
    , registers :: Vec 16 Word8
    , stack :: Vec 24 Addr
    , sp :: Index 24
    , phase :: Phase
    , timer :: Word8
    , randomState :: Unsigned 9
    }
    

I implemented the random number generator as a 9-bit linear-feedback shift register, truncated to its lower 8 bits; this is because a maximal 8-bit LFSR wouldn't generate 0xFF.

-- Rotate right by one bit, then XOR the complement of the (pre-rotation)
-- low bit back in at bit 4, the tap position.
lfsr :: Unsigned 9 -> Unsigned 9
lfsr s = (s `rotateR` 1) `xor` b4
  where
    b = fromIntegral $ complement . lsb $ s
    b4 = b `shiftL` 4
    

Input and output "pins"

Similar to how a real chip has various pins to interface with other parts, our CPU description will also have multiple inputs and outputs. The input consists of the data lines read from main memory and the framebuffer; the events coming from the keypad, and the keypad state; and the 60 Hz VBlank signal from the video generator. This latter signal is used to implement the timer register's countdown. The keypad's signals are fed into the CPU both as events and statefully; I've decided to do it this way so that only the peripheral interface needs to be changed to accommodate devices that are naturally either parallel (like a keypad matrix scanner) or serial (like a computer keyboard on a PS/2 connector).

type Key = Index 16
type KeypadState = Vec 16 Bool

data CPUIn = CPUIn
    { cpuInMem :: Word8
    , cpuInFB :: Bit
    , cpuInKeys :: KeypadState
    , cpuInKeyEvent :: Maybe (Bool, Key)
    , cpuInVBlank :: Bool
    }      
    

The output is even less surprising: there's an address line and a data out (write) line for main memory and the video framebuffer.

type VidX = Unsigned 6
type VidY = Unsigned 5

data CPUOut = CPUOut
    { cpuOutMemAddr :: Addr
    , cpuOutMemWrite :: Maybe Word8
    , cpuOutFBAddr :: (VidX, VidY)
    , cpuOutFBWrite :: Maybe Bit
    }
    

So, what is a CPU?

As far as CλaSH is concerned, the CPU is extensionally a circuit converting input signals to output signals, just like any other component:

extensionalCPU :: Signal dom CPUIn -> Signal dom CPUOut
    

The internal CPU state is of no concern at this level. Internally, we can implement the above as a Mealy machine with a state transition function that describes behaviour in any given single cycle:

intensionalCPU :: (CPUState, CPUIn) -> (CPUState, CPUOut)

extensionalCPU = mealy intensionalCPU initialState
    

As far as a circuit is concerned, a clock cycle is a clock cycle is a clock cycle. If we want to do any kind of sequencing, for example to fetch two-byte instruction opcodes from the byte-indexed main memory in two steps, we need to know in intensionalCPU which step is next. This is why we have the phase field in CPUState, so we can read out what we need to do, and store what we want to do next. For example, in my current version the video framebuffer is bit-indexed (addressed by the 6-bit X and the 5-bit Y coordinate), and there is no DMA to take care of bulk writes; so to implement the instruction that clears the screen, we need to write low to all framebuffer addresses, one by one, from (0, 0) to (63, 31). This requires 2048 cycles, so we need to go through the Phase that clears (0, 0), to the one that clears (0, 1), all the way to (63, 31), before fetching the first byte of the next opcode to continue execution. Accordingly, one of the constructors of Phase stores the (x, y) coordinate of the next bit to clear, and we'll need to add some logic so that if phase = ClearFB (x, y), we emit (x, y) on the cpuOutFBAddr line and Just low on the cpuOutFBWrite line. Blitting proceeds similarly, with two sub-phases per phase: one to read the old value, and one to write back the new value (with the bitmap image xor'd to it).

data Phase
    = Init
    | Fetch1
    | Exec
    | StoreReg Reg
    | LoadReg Reg
    | ClearFB (VidX, VidY)
    | Draw DrawPhase (VidX, VidY) Nybble (Index 8)
    | WaitKeyPress Reg
    | WriteBCD Word8 (Index 3)      
    

So how should we write intensionalCPU? We could do it in direct style, i.e. something like

intensionalCPU (s0, CPUIn{..}) = case phase s of
    Fetch1 ->
        let s' = s{ opHi = cpuInMem, pc = succ $ pc s, phase = Exec }
            out = CPUOut{ cpuOutMemAddr = pc s', cpuOutMemWrite = Nothing
                        , cpuOutFBAddr = minBound, cpuOutFBWrite = Nothing
                        }
        in (s', out)
    WaitKeyPress reg ->
        let s' = case cpuInKeyEvent of
                Just (True, key) -> s{ registers = replace reg key (registers s), phase = Fetch1 }
                _ -> s
            out = CPUOut{ cpuOutMemAddr = pc s', cpuOutMemWrite = Nothing
                        , cpuOutFBAddr = minBound, cpuOutFBWrite = Nothing
                        }
        in (s', out)                   
    -- and lots of other cases as well, of course
  where
    s | cpuInVBlank = s0{ timer = fromMaybe 0 $ predIdx $ timer s0 }
      | otherwise = s0
    

If you think this is horrible and unreadable and unmaintainable, then yes! I agree! Which is why I've spent most of this RetroChallenge (when not fighting synthesizer crashes) thinking about nicer ways of writing this.

This post is getting long, let's end on this note here. Next time, I am going to explain how far I've gotten so far in this quest for nicely readable, composable descriptions of CPUs.

September 23, 2018 09:50 PM

Edward Z. Yang

HIW’18: Let’s Go Mainstream with Eta!

My name is Rahul Muttineni, CTO of TypeLead, working on building services around a language named Eta. To get started, I'll give an overview of how the project started, and where it is now.

It started as a HSOC project. It was called GHCVM; back then we had plans of making it both on JVM and CLR... we don't think about CLR anymore. I was mentored by Edward Kmett. We got pretty good response on this, so Jo and I decided to take the risk and work on this full time.

Big thanks to the GHC team, really good work. We've worked with the codebase for two years, and the more and more we work with it, we see how much awesome stuff there is. I've learned a lot by working with the code.

What is Eta? Eta is a fork of GHC. During the GSOC project, it started off as a Haskell program that used the GHC API. Midway in the program, I found that there were certain things that I wanted to do that I couldn't do, and I spent 3-4 days setting up a fork. I'll talk about what those limitations are. Like Haskell, it's a ... language, but the key difference is that it runs on the JVM. That brings its own set of challenges, primarily with respect to tail calls. The nice thing about Eta is that it runs on the JVM, and it can run a good chunk of projects just like that. lens... recently, in the last month, we got Yesod working... it's in good shape. The next really great part of Eta is the strongly typed FFI. That works really well with the subtyping in the JVM. A good chunk of the talk is about how we got that working. One of the main focuses of Eta is to be focused on industrial use. GHC is focused on industrial use, and research, both. There's a tension between the two... the nice thing we have for Eta is we don't have to face that tension; it's easy to make decisions on how to add new features, because the question is: will it help companies? If yes, we do it; otherwise we don't. (SPJ: That can be a hard question to answer!)

Haskell: Avoid success at all costs. We're not going to sacrifice core principles of language for benefit. Pursue success, at minimum cost. We want to make it successful as much as possible, but we want to make as little sacrifice as possible. That will be a little tricky...

What is Eta? What language features does it support? It started off as a fork of GHC 7.10.3. All extensions that work there, work with Eta as well. The only thing was TemplateHaskell and QuasiQuotes didn't work for a long time. We got it working 3-4 months ago. Biggest change is JavaFFI. GHC 7.10.3 is MINUS C FFI. We could have supported it: Java has JNI, but we tried to avoid it because we didn't want to make platform-specific bindings to all the libraries.

Joe backported a bunch of GHC 8 features: StrictData, ApplicativeDo, OverloadedLabels. Backpack was added recently. There's a very particular reason we had to do it: it has to do with the fact that we don't have green threads by default, and we wanted to give the user a choice of threaded runtime versus blocking runtime.

The compiler? It's a fork of GHC, so all the compiler passes are the same. We just chopped off everything after STG; e.g., C-- is gone. We generate bytecode from STG. We don't do any optimizations right now, and won't need to for some time. We don't have to because in JVM, it's JIT compiled, so we don't have to optimize as much since JVM will remove a lot of the code that's not used anyway. And the driver: GHC generates object files... we decided to use JAR files. They're just zip files that package up a bunch of class files that store Java bytecodes. We also added one more mode for Uberjars. These are JAR files that are packaged up into one giant package.

I'll talk a little bit about how we implemented the REPL and Template Haskell. It works through the external-interpreter architecture. In GHC that's called iserv: the process, what it does is handle running the code. So the compiler will still do the typechecking and everything, but once it's done with all that stuff, GHC will generate a specific bytecode set for interpreting Haskell efficiently. Because we already generate JVM bytecodes, we didn't need that custom bytecode set; we just compile with optimizations off; that gives us JVM bytecodes, then we send them to the external process, load them up, and execute them. Implementing the REPL was pretty easy once we got all this working together. The JVM has a mechanism called classloading, which is very flexible. You can download bytecodes from the network, get code at runtime. Once you load the class, it's statically compiled code, it's optimized the same, etc.

The build tool we use is Etlas. We didn't want to move too far off of GHC, we stuck with Cabal. At the point we started using it, we forked off of Cabal 2.0. Main difference is that it lets you manage Eta versions. Etlas is almost like Stack, but it's much much closer to Cabal. We took the nice features of Stack and added them to Cabal. The other thing is that it does patch management. What we've been finding as we add more features and backport is that Eta is not exactly GHC 7.10, nor is it GHC 8.0; it's a weird intermediate state, so certain packages won't compile without small changes, and we needed some system to apply those changes before we actually run the build. So we set up a GitHub repository that stores all the patch files. What etlas will do, it will get you the most recent set of patches. Then if you install a package, lens or something, it will download lens, apply the patch, and then it will build. Just recently, we were using base 4.8, and recently we upgraded to base 4.11. But we couldn't update to the new Generics infrastructure, because it slowed down compile times. So there were a bunch of packages that would check if they were GHC 8... and then use new generics. So we had to do a bunch of patching for that. But that's the kind of stuff we have to deal with.

The title of this talk is let's go mainstream with Eta. I want to take a moment and say, what does that mean? "The ideas, attitudes, or activities that are shared by most people and regarded as normal or conventional." So at what point does a programming language become considered normal or conventional? It has to be used by a big company, solve a big real world problem, and people have to believe it works. That's a very complicated question, multifaceted; one part of that answer is, it should make it easier to solve real world problems than the status quo. Take for example PHP. PHP came out when there was nothing better to program dynamic web applications. It had just the minimum features required to make it useful to build these. Now everyone here is asking the question: Haskell clearly solves a lot of problems better than the status quo. So why isn't it moving forward? That's a big question, I'm going to talk about how we're approaching it.

The strategy we're using internally is we put on a "Big Company Hat": we pretend we're a big company with a lot of employees, millions or billions of lines, and try to figure out what problems they'll face. Some problems are crazy long build times when trying to build huge software; making sure junior developers get up to speed... etc. That's a couple to get this conversation started.

After thinking about this a long time, we boiled it down to three basic principles, how we will develop Eta.

1. User Experience
2. Performance
3. Safety

User Experience is mainly, an emotional thing, how you feel when you use Eta technology, how you interact with it, what you feel when you get an error, psychologically. When something has good UX, we feel good. That's a very subjective thing, it can vary between different people, we have to figure out a way to standardize / make it universal. Something we forget as software and tool developers, the person developing the software is human. If they get errors persistently over time, they'll get frustrated. Machines will do what you tell them over and over again.

So what have we done in Eta on this front? We've done something very recently; it's not in master yet. Jo and I spent a week refactoring the error reporting part of the typechecker. It stores a list of strings; internally in GHC, there's a pretty-printed data type, and it stores a list of those. The problem is we can't do postprocessing on that. So, what Jo did was make a giant data type with three hundred data constructors, one for every error message in GHC. That refactor took a week. (SPJ: Only a week?!) How it is now, it's decoupled: instead of storing strings in the typechecking monad, you store a data type that holds the relevant data to print out that error message. And then at the final point, you can traverse the data type; based on the presence of other errors, you can decide what to do. Now it's pattern matching on certain error patterns and reporting them nicely. This is one example. We talked about simple errors: refactoring, adding an argument, changing the type, that's one of the most common errors you'll get working with Haskell. So we focused on those first. This shows an example of a type error... 'checker', it's an IO action.

GHC would tell you, couldn't match Int -> IO () with IO (). The problem is, for people who don't know how the typechecker works, they won't be able to understand what the typechecker is doing: going argument by argument. Because of the refactor we've done, it was easy to pattern match on this particular case, and say, hey, if the user forgot to put an argument, you can print out an error message of this form. You print that an argument is missing, and you highlight it. (SM: You might have been missing the first argument, in this case!) That's true. It's tricky; sometimes the suggestion you give might not be right. We don't tell people exactly what they did wrong, because we don't know. This is not a perfect thing, but we try to give the best suggestion that we can. And an important feature of this: most of how we decided this layout, we studied languages like Elm and PureScript, which have done good work in this area. PureScript and Elm both, what they do, for a certain type of error, where you're not sure what to do... e.g., our info is not complete, they can go to a particular link and see other things that could have happened. So we don't have to flood the user with every suggestion, we just have to show the user what probably is the cause for it. And if it's a tricky case, not what we posted, the link will have the case as well.

(BG: There's other information that might be relevant; expanding type synonyms, etc. Do you have this info?) We're still thinking about that. Probably we'll have extra flags and stuff. Eventually, we'll have a mode that prints out JSON for IDEs, then it's easier to parse on the IDE side. (BG: Incidentally, there's a ticket, a student working with Richard, trying to figure out something similar).

Another aspect of UX is we added the REPL. Tried to simplify the entry point, try to make it easy. You want types, kinds, and where to find out more information. This is a statically typed language: you always have to be thinking about types. So we :set +t: always print out the types when you print things. One more thing: one of the former Scala engineers, who has been learning Haskell, made a critique of one aspect of the REPL experience. f is a function of two arguments. In a second statement in the REPL, I applied it to 1. It looks for a Show instance for a -> a. He said: rather than "no show instance found", just say that this is a function, and you can't print it. That's a change we did. This was very easy for us to do.

Performance: it can mean many things. We're talking about a fast developer feedback loop. Compile time and develop time, reducing that feedback loop. Some work we've done in this direction is reproducible builds. As of now, we have bit-for-bit reproducibility in Eta. That amounted to... nikita already did lots of work on reproducibility; he made Haskell interface files reproducible, but the last mile of bit-for-bit is hard, there's many places. For our code generator, it was a lot simpler, we didn't have to do as much. It was 20 lines of code to make it deterministic. The main source of nondeterminism in GHC is the Unique data type, which changes between different runs depending on environment. What we did was add a counter. We used to print the uniques in the Java class name; that will make it nondeterministic. So we made a counter: the order in which the bindings make it to STG is the same.

GHCi is known to take up lots of memory, esp. with IDE. Simon Marlow has a bunch of fixes to that; we also backported those.

Another aspect of performance is the actual runtime performance. We're on the JVM, and that puts us at a huge disadvantage. We don't have control over many things. The runtime system... this is Java. It's OO, so the runtime system is implemented in Java. We set up a hierarchy for values that are defined in Eta. We have Closure, it's a class, parent class of all values, thunks, WHNF. The Closure class has two methods: evaluate, which evaluates to WHNF, and enter, which will actually enter... it's similar to the GHC runtime system. The initial version was modeled exactly after GHC, except for tail calls. The terminology is similar. It's primarily used when you do the body of a function. The main subclasses of Closure are Thunk and Value. Value will be the parent class of things like functions, partially applied functions, and data constructors. Thunk will be the superclass of things like CAFs, single entry thunks, and updatable thunks. CAFs don't have free variables, so there's a special case for that, and you create a blackholing entry every time, to avoid two threads evaluating the same thunk. UpdatableThunk pushes an update frame, and when it's finished evaluating, it will update the thunk to point to the newly computed value. And SingleEntryThunks are evaluated only once, so you can just evaluate them directly without pushing an update frame. This terminology is borrowed from GHC as well.

Values: DataCon, Function and PAPs. In the early days, and even now, every function call that was a tail call is just a method call. This is the only way to make it remotely efficient. (More on the stack soon.) Static tail recursive calls, singly recursive or mutually recursive, get compiled to loops. In most cases, they get a nice tight loop. In the mutual case, what will happen is, we collect all of the SCC, and we make one giant method that goes into a loop. Let's say you're in the even/odd example: when even calls odd, there's a variable called target, an integer. Even will be assigned 0, odd is assigned 1, so then you set 1 and restart. (BG: Do you always have unfoldings available for functions you compiled?) This is mutually recursive functions defined in the same module. (SPJ: They might have very different type arguments.) We concat all the arguments into one. The main problem with this approach is parsers generated with Happy and Alex: we hit limits. (BG: Crash?) Not stack blowup. The JVM has a method size limit, so you can only have 65000 bytecodes. That's Eta compiled with itself. That's the only thing that's preventing us from using Eta with Eta. But all you need to do is split the method into smaller chunks.

So how do we handle tail calls? That covers the case when we know it's tail recursive; but let's say you don't. Let's say you're using CPS. It's so common in Haskell, any fast parser uses CPS. In the early days, Aeson would just blow the stack, it was pretty bad. So, we explored trampolining by default, and it was just awful, it was slow, super slow. What we did is turn it off, and let the stack blow up. We found a better solution. The JVM has... the only way to unwind the stack is throwing an exception, or returning, and keep on returning until you return all the way down. It turns out, with exceptions, you can turn off the feature that captures the stack trace: that's the most expensive part of an exception. So we have a general exception. So this trampoline mechanism is optional. So, what we do, we have a function 'trampoline :: a -> a', a runtime primitive; what it does is activate a boolean in the context which says, I'm going to trampoline now, and it activates a codepath that bumps a counter, and once you reach a certain number, which is configurable, it will unwind the stack, and then continue where it needed to go. Our limit is 400, and then we unwind. It used to be in the 1000s, but with Happy and Alex, we needed a smaller number. (BG: Inside that context, how much does it cost? But observably, it's faster. A couple months ago, we got PureScript to work in Eta, and it wasn't bad by default?) (SPJ: So you could turn it on by default: all you're doing is counting.) The counting is how we know how big the stack is. In your main function, you could call trampolineIO, and trampoline your whole program. (SPJ: Maybe it's low overhead, and you can do it all the time.) If it's low, we will do it. (How do you resume? Once you raise the exception, what do you store?) The counter happens at the entry point, and it's guarded by the boolean. So, if the limit is exceeded, it will call another function that takes the context. So we store all the arguments in a context variable that gets passed to every Eta function. We stash all the arguments into a function that has the state; then when it unwinds, marked by this function, it will call that, with that function and those arguments.

As I mentioned, it's guarded by a boolean. The JVM has an optimization where, if it observes that a branch is essentially never taken, it won't even compile that branch into the native code. So if you don't use trampolining, it doesn't affect you at all; the code for the counter just won't be there.
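
A minimal sketch of opting in from user code, assuming the trampoline/trampolineIO primitives described above (the workload name here is hypothetical):

    -- trampolineIO is the Eta runtime primitive mentioned in the talk: it
    -- sets the boolean in the context, so function entries bump a counter
    -- and the stack is unwound (via a stack-trace-free exception) past the
    -- configured depth. `heavyCpsWork` is a hypothetical stand-in for
    -- CPS-heavy code such as a fast parser.
    main :: IO ()
    main = trampolineIO heavyCpsWork
      where
        heavyCpsWork :: IO ()
        heavyCpsWork = putStrLn "imagine an attoparsec-style parse here"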

One nice thing I like about Eta is that you actually get stack traces for exceptions. This is because, to get good performance for Eta, you have to implement most primitives on the JVM stack. Here's a sample stack: you have a scheduler loop, and you are evaluating some IO action. applyV/applyN are partial applications, then an IO action is executed. Another nice part: we've tried to keep names close to the original, so you can tell this function call happened in statistics.Regression, rnfAll. You'll also notice there are line numbers. This is not perfect, and we can definitely make it better later... GHC gives you a lot of debugging info at STG time, but because the JVM doesn't have much flexibility, we can only attach one line number to a piece of code, so we have to discard the rest of that info. This will get better; we'll stash the debug information in the classfile itself, and then access it to render a better stack trace. (BG: Is this Source Notes?) Yeah.

One part of this is nice, or not: if you're evaluating a long chain of thunks, you're going to blow the stack. This happily coincides with a technique for finding space leaks: Neil Mitchell wrote a blog post about how to detect space leaks by restricting the stack size and then figuring out which thunk was being evaluated. If you see a stack trace like this, with a huge chain of evaluates, you probably have a space leak.

How do I do interop? We made a thing called the Java monad. It's supposed to give you the experience of programming Java. The basic implementation is inspired by the IO monad: Object# c is "this", the object being threaded through. Because of this encoding, you get the Java experience: you can call dot on the Java object. It's almost like working inside Java. That Object# type constructor is what forced us to fork instead of using the GHC API: you can't declare primitive types through the API, and we had to introduce a new low-level representation. You declare wrapper types, wrapping, say, the Iterable interface in Java. We've stolen syntax that later became type applications, so we'll have to resolve that somehow: writing @java.lang.Iterable declares an Eta type that wraps a Java type.
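
In Eta, that declaration looks roughly like this (syntax from memory of the Eta docs, so treat the details as approximate):

    -- Approximate Eta syntax: declare an Eta wrapper type for a Java class
    -- using the @-syntax described above.
    data JIterable a = JIterable @java.lang.Iterable
      deriving Class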

You use the java function to run the Java monad. All of these have to be imported: newArrayList, newInteger, and so on, but we provide some combinators that let you call methods. It worked out nicely with the monad. Sample code doing the same thing as the equivalent Java just uses standard monadic combinators: for a fixed c, Java c is a Monad instance.
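
A sketch of what such code looks like; the exact combinator names (java, io, the <.> "dot" operator, toJava) are my recollection of the Eta API and may differ in detail:

    -- Build a Java ArrayList and print its size, all inside the Java monad.
    -- (Sketch: `java` runs a Java-monad action in IO, `io` lifts IO back in,
    -- `<.>` invokes a method on an object; names approximate.)
    main :: IO ()
    main = java $ do
      list <- newArrayList
      list <.> add (toJava (1 :: Int))
      list <.> add (toJava (2 :: Int))
      n <- list <.> size
      io (print (n :: Int))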

You can use Eta as a better Java, with referential transparency! Unlike Kotlin or Scala.

How do we handle subtyping? We define built-in type families. We have a typeclass named Extends. Any time you declare a function that takes a given class and any subtype of that class, instead of actual subtyping, we do it with constraints. Extends' takes the info from Inherits and figures it out. You can use the dot operator on anything that is a subclass of Iterator. We had to extend the typechecker just a little bit: a lot of the time the type gets stuck in the form Extends' (List JString) (List a) where a is unconstrained.
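
For example (names per the talk; exact signatures, kinds, and method imports are approximate), a function over any subtype of Iterator is written with the constraint rather than a subtype:

    -- Accept any Java subtype t of Iterator: subtyping is encoded as the
    -- Extends constraint instead of OO subtyping. (Sketch; hasNext/next
    -- stand for imported Java methods.)
    printAll :: Extends t (Iterator JString) => t -> Java c ()
    printAll it = do
      more <- it <.> hasNext
      if more
        then do
          s <- it <.> next
          io (putStrLn (fromJava s))
          printAll it
        else return ()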

Imports are tiresome, so we're setting up direct Java interop: we actually use Java reflection to get info from class files and generate imports. "import java java.lang.Math" works, but doesn't scale. The biggest priority for the rest of the year is Java interop, really good IDE support, documentation, and language extensions: UnboxedSums, TypeApplications, DerivingVia, QuantifiedConstraints. We have some new language extensions in mind, AnonymousRecords, RowTypePolymorphism... We'll see how that goes.

I was thinking about ways... we work on the same codebase, so how do we collaborate? We're interested in compile performance and support for unboxed sums. The worker/wrapper transformation has some glitch, and no one has gotten around to fixing it. At some point, maybe not any time soon, that and mutable fields; those are pretty important for us. (BG: Do unboxed sums get a lot of usage? Why unboxed sums? Does Eta code make a lot of use of them?) No. But a lot of people on the JVM are annoyed that Maybe is boxed all the time; with unboxed sums, you can represent Nothing as null. (SPJ: Or you can say, just box it, and you won't notice it. If it's fast enough all the time, focus on what's going to make a difference.)
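
For illustration, GHC's UnboxedSums syntax for a Maybe-like type that needs no box, which a JVM backend could plausibly map to null (the representation claim in the comment is the talk's point, not something the extension guarantees):

    {-# LANGUAGE UnboxedSums, UnboxedTuples #-}
    -- An unboxed Maybe Int: no heap allocation for the Nothing case.
    -- On the JVM, the Nothing alternative could be represented as null.
    fromMaybeInt :: Int -> (# (# #) | Int #) -> Int
    fromMaybeInt def (# _ | #) = def   -- "Nothing"
    fromMaybeInt _   (# | n #) = n     -- "Just n"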

Q: Did you consider using Graal (it's a new virtual machine that supports partial evaluation and partial escape analysis, good for functional languages)?

A: We have looked into it; it's not completely there yet, and we're not sure it's something we can invest time in. We're keeping up with it. (BG: But you lose the JVM!) That's what's preventing us from going there. Maybe if it gets integrated into a mainline VM we might look at it. (Mainline Java is planning to integrate Graal.)

Q: (SM) Are you keeping the fork up to date with master GHC?

A: One thing that is out of bounds for us, and has been for a long time, is all the dependent Haskell work. Everything else, we keep up with. If there are any nice bugfixes... (SM: So you're selectively backporting.)

Q: (BG) Have you considered unforking?

A: Not yet, no.

by Edward Z. Yang at September 23, 2018 03:02 PM

September 22, 2018

ERDI Gergo

Back in the game!

For most of this week, it seemed I will have to throw in the towel. As I mentioned in my previous entry last Saturday, I ran into what at first seemed like a CλaSH bug. However, further investigation showed that the error message was actually pointing at an internal bug in the Xilinx ISE synthesizer. The same generated VHDL didn't cause any problems when fed into the Yosys open source synthesizer, Altera Quartus, or the newer version of Xilinx Vivado. But the Papilio Pro I'm using is based on the Spartan 6 FPGA, which is not supported by the newer Xilinx tools, so I am stuck with ISE 14.7 from 2013. So the conclusion is, just like all other closed-source proprietary software from FPGA vendors, the Xilinx ISE is simply a piece of shit that falls over under its own weight on perfectly valid VHDL.

I was thinking of ordering a new FPGA board, but I only have until next Wednesday to finish this (I won't be able to put in any work on the last Retrochallenge weekend), so it's not even a given it would get here in time. Also, I'd like to do a bit more research on what board I should get -- on one hand, both Altera and Xilinx have nice, more modern dev boards with good IO ports for my retro-computing-oriented needs, but on the other hand, it feels a bit like throwing good money after bad, since these would still be programmed with proprietary shitty software, with no way forward when (not if!) they break.

Then there's Lattice's ICE40 line, which is fully supported by the open source toolchain IceStorm, but the largest ICE40 is still quite small compared to the Spartan 7 or the Cyclone V series; not to mention that even the nicest ICE40 board I could find doesn't have a USB connector on board, so you have to play around with an Arduino and plug jumper wires into this weird connector to get anything working. Also, while I'm ranting, of course the Lattice ICE40 open source toolchain is not from Lattice themselves; instead, its bitstream format had to be reverse-engineered by awesome free software hackers.

So anyway, I had a perfectly functioning board betrayed by its software toolchain. I tried some desperate ideas like generating Verilog instead of VHDL or getting rid of the unguarded block statements, but nothing made any difference. Then Thursday night I had an even wilder idea. If the Xilinx ISE is crashing because the generated VHDL is triggering some weird corner case in the synthesizer, then maybe using the same ISE version, but changing the target FPGA model, would get over the hurdle? And that's when I remembered I still have my first ever FPGA board: the Papilio One based on the Spartan 3E. Luckily, the Spartan 3-series is also supported by the 14 series ISE, so the same toolchain can serve both boards.

On Friday morning, I did the necessary changes to my code to target the Papilio One. The clock generator is different between the models, so I needed to replace that; the other difference was that the Spartan 3 doesn't seem to have wide enough blocks for 64-bit arithmetic. This shouldn't be a problem for the CHIP-8, but CλaSH generates code that converts everything into 64 bits. I initially overcame that by post-processing CλaSH's output with sed, but then I discovered that there is a flag -fclash-intwidth to set that properly.
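
For reference, a sketch of setting the width; the flag name is from the post, but passing it as a source pragma (rather than on the clash command line) and the module name are my assumptions:

    {-# OPTIONS_GHC -fclash-intwidth=32 #-}
    -- Assumption: the CλaSH flag given as a source pragma; it can also be
    -- passed when invoking the compiler. This makes Int/Word 32 bits wide
    -- in the generated HDL, which suits the Spartan 3's narrower
    -- arithmetic blocks.
    module CHIP8 where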

With these changes, I was able to get it through the Xilinx ISE's synthesizer, and all the way through the rest of the pipeline! As before, the code is on GitHub.

[Video demo: https://www.youtube.com/watch?v=eqpMFACw-B4]

And with this, I am where I was supposed to be a week ago at half-time. I probably won't have time to work on this project next weekend since we'll be travelling; this looks like a good time to take inventory of the project.

  • I am very happy with how the video and keyboard peripheral interfaces turned out; the CλaSH code is nice and clean.
  • I still need to write a blog post about the design I came up with for implementing the CPU. I'm convinced it should scale to more complex processors; but of course the proof of that pudding will be in implementing a real retro-CPU like a 6502 or a Z80.
  • The font ROM is not hooked up yet; I plan to finish that tomorrow.
  • The plan was to get it working first, and make it more performant later. I'm afraid I won't have time before the RetroChallenge finishes to implement improvements. The biggest deficiency here is that the framebuffer is accessed one bit at a time, so clearing the screen (a single opcode in CHIP-8!) takes 64⨯32 = 2048 cycles!
  • The signal-less high-level simulation makes debugging the CPU implementation very convenient and fast. Being able to run the CHIP-8 machine in real-time, with interactive graphics and input, is immensely satisfying.
  • I prefer the code I was able to write in CλaSH, when compared to Kansas Lava. Of course, I don't know how much of that is simply because this is the second time I was implementing these circuits.
  • There's a lot to be improved in CλaSH itself. Significant improvements to compilation time are already coming, which will be welcome given that even this tiny CHIP-8 circuit is now taking about 5 minutes to compile; this issue is exacerbated by the fact that compilation cannot use multiple cores. The other problem is that CλaSH generates very inefficient HDL, putting big pressure on the vendor-specific synthesis tools. As we've seen, this pressure can be too big for crappy software like the old version of the Xilinx ISE I am forced to use with my FPGA.
  • Writing these posts takes a lot of time! I love reading the posts of other RetroChallenge entrants, but there is no way I could have written more frequent updates. I wouldn't have had time to do the actual work then!

September 22, 2018 07:10 PM

September 20, 2018

FP Complete

Deploying Postgres based Yesod web application to Kubernetes using Helm

Yesod is one of the most popular web frameworks in the Haskell land. This post will explore creating a sample Postgres-based Yesod web application and then deploying it to a Kubernetes cluster. We will create a Helm chart for doing our Kubernetes release. Note that the entire source lives here:
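
As a generic taste of the kind of app being deployed, here is the standard minimal Yesod example (not the post's actual code, which would add a Postgres-backed persistence layer on top):

    {-# LANGUAGE OverloadedStrings #-}
    {-# LANGUAGE QuasiQuotes       #-}
    {-# LANGUAGE TemplateHaskell   #-}
    {-# LANGUAGE TypeFamilies      #-}
    import Yesod

    data App = App

    mkYesod "App" [parseRoutes|
    / HomeR GET
    |]

    instance Yesod App

    -- One route; the real application would query Postgres here.
    getHomeR :: Handler Html
    getHomeR = defaultLayout [whamlet|<h1>Hello from Kubernetes!|]

    main :: IO ()
    main = warp 3000 App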

by Sibi Prabakaran at September 20, 2018 10:45 PM

September 18, 2018

Chris Smith 2

CodeWorld Update — September 17, 2018

It’s been a couple months, so here’s some of the latest news about CodeWorld.

First of all, if you’re reading this and will be at ICFP, please consider coming to dinner on Monday night, Sep 24, to talk with other interested people about Haskell (and Haskell-like languages) in K-12 education. Please RSVP here as soon as you can.

Platform

There have been a number of changes made to the CodeWorld platform lately.

Classes

CodeWorld is being taught in quite a few places now!

  • In Louisiana, a pilot program is continuing to teach computational thinking with CodeWorld, organized by a group at LSU. This program reaches 350 students, including many schools and teachers who were trained over the summer.
  • In a separate effort in Louisiana, middle school students have seen a brief introduction to CodeWorld as part of a survey course that also includes p5.js, Arduino, and web programming.
  • In California, Brooklyn Cook is teaching CodeWorld to a new class, keeping it alive there even after my moving to New York.
  • Together with a coworker and a local teacher, I’m leading a class at The Clinton School in New York. This is CodeWorld’s first entry into the New York area. Schools here start quite a bit later in the year, so we’ve just had our first class today.

Curriculum

I’ve been working on updating the Guide to match our current best understanding of how to teach CodeWorld. Every time I do this, though, it gets quite a lot longer. My plan is to try to keep up with my own class this school year, so I’ll have a good chunk of the updates done by the end of the year. I’ve also swapped the Guide to use markdeep, a powerful variant of markdown that makes for a surprisingly complete publishing platform. I’ve extended markdeep locally with some features like collapsible notes that can be expanded for more information. Just seeing the results makes me excited to be writing the guide again!

Fernando Alegre at Louisiana State University is also building a separate curriculum focused on 9th grade, and with different elements and API entry points involved.

by Chris Smith at September 18, 2018 01:04 AM