Planet Haskell

April 17, 2015

Roman Cheplyaka

Safe concurrent MySQL access in Haskell

mysql, Bryan O’Sullivan’s low-level Haskell bindings to the libmysqlclient C library, powers a few popular high-level MySQL libraries, including mysql-simple, persistent-mysql, snaplet-mysql-simple, and groundhog-mysql.

Most users do not suspect that mysql, as it stands, is unsafe to use concurrently.

This article describes the issues and their solutions.

Issue 1: unsafe foreign calls

As of version 0.1.1.8, mysql marks many of its FFI imports as unsafe. This is a common trick to make such calls faster. The problem is that unsafe calls block a capability (that is, an OS thread that can execute Haskell code) for their entire duration. This is bad for two reasons:

  1. Fewer threads executing Haskell code may result in less multicore utilization and degraded overall performance.
  2. If all capabilities get blocked executing related MySQL statements, they may deadlock.

Here’s a demonstration of such a deadlock:

{-# LANGUAGE OverloadedStrings #-}
import Database.MySQL.Simple
import Control.Concurrent
import Control.Concurrent.STM
import Control.Applicative
import Control.Monad
import Control.Exception

main = do
  tv <- atomically $ newTVar 0
  withConn $ \conn -> do
    mapM_ (execute_ conn)
      [ "drop table if exists test"
      , "create table test (x int)"
      , "insert into test values (0)"
      ]
    
  forM_ [1..2] $ \n -> forkIO $ withConn $ \conn -> (do
    execute_ conn "begin"
    putStrLn $ show n ++ " updating"
    execute_ conn "update test set x = 42"
    putStrLn $ show n ++ " waiting"
    threadDelay (10^6)
    execute_ conn "commit"
    putStrLn $ show n ++ " committed"
    ) `finally`
    (atomically $ modifyTVar tv (+1))

  atomically $ check =<< (>=2) <$> readTVar tv
  where
    withConn = bracket (connect defaultConnectInfo) close

If you run this with stock mysql-0.1.1.8, one capability (i.e. without +RTS -Nx), and either threaded or non-threaded runtime, you’ll see:

1 updating
1 waiting
2 updating
1 committed
test: ConnectionError {
  errFunction = "query",
  errNumber = 1205,
  errMessage = "Lock wait timeout exceeded; try restarting transaction"}

Here’s what’s going on:

  1. Both threads are trying to update the same row inside their transactions;
  2. MySQL lets the first update pass but blocks the second one until the first transaction commits (or rolls back);
  3. The first transaction never gets a chance to commit, because it has no OS threads (capabilities) to execute on. The only capability is blocked waiting for the second UPDATE to finish.

The solution is to patch mysql to mark its FFI calls as safe (and use the threaded runtime). Here’s what would happen:

  1. To compensate for the blocked OS thread executing the second UPDATE, the GHC runtime moves the capability to another thread (either fresh or drawn from a pool);
  2. The first transaction finishes on this unblocked capability;
  3. MySQL then allows the second UPDATE to go through, and the second transaction finishes as well.
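
To make the change concrete, here is roughly what the patch amounts to (an illustrative sketch; the exact import list and types in the mysql source may differ):

{-# LANGUAGE EmptyDataDecls #-}
import Foreign.Ptr (Ptr)
import Foreign.C.String (CString)
import Foreign.C.Types (CInt, CULong)

data MYSQL  -- opaque handle to a connection

-- Before: an unsafe call occupies the capability for the whole C call.
-- foreign import ccall unsafe "mysql.h mysql_real_query"
--   mysql_real_query :: Ptr MYSQL -> CString -> CULong -> IO CInt

-- After: a safe call releases the capability, so other Haskell threads
-- can run (on the threaded runtime) while the C call blocks.
foreign import ccall safe "mysql.h mysql_real_query"
  mysql_real_query :: Ptr MYSQL -> CString -> CULong -> IO CInt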

Issue 2: uninitialized thread-local state in libmysqlclient

To quote the docs:

When you call mysql_init(), MySQL creates a thread-specific variable for the thread that is used by the debug library (among other things). If you call a MySQL function before the thread has called mysql_init(), the thread does not have the necessary thread-specific variables in place and you are likely to end up with a core dump sooner or later.

Here’s the definition of the thread-local state data structure, taken from mariadb-10.0.17:

struct st_my_thread_var
{
  int thr_errno;
  mysql_cond_t suspend;
  mysql_mutex_t mutex;
  mysql_mutex_t * volatile current_mutex;
  mysql_cond_t * volatile current_cond;
  pthread_t pthread_self;
  my_thread_id id;
  int volatile abort;
  my_bool init;
  struct st_my_thread_var *next,**prev;
  void *keycache_link;
  uint  lock_type; /* used by conditional release the queue */
  void  *stack_ends_here;
  safe_mutex_t *mutex_in_use;
#ifndef DBUG_OFF
  void *dbug;
  char name[THREAD_NAME_SIZE+1];
#endif
};

This data structure is used by both server and client code, although it seems like most of these fields are used by the server, not the client (with the exception of the dbug thing), which would explain why Haskellers have gotten away with not playing by the rules so far. However:

  1. I am not an expert, and I spent just about 20 minutes grepping the codebase. Am I sure that there’s no code path in the client that accesses this? No.
  2. Am I going to ignore the above warning and bet the stability of my production system on MySQL/MariaDB devs never making use of this thread-local state? Hell no!

What should we do to obey the rules?

First, make the threads that work with MySQL bound, i.e. launch them with forkOS instead of forkIO. Otherwise, even if an OS thread is initialized, the Haskell thread may later be scheduled on a different, uninitialized OS thread.

If you create a connection in a thread, use it, and dispose of it, then using a bound thread should be enough. This is because mysql’s connect calls mysql_init, which in turn calls mysql_thread_init.

However, if you are using a thread pool or otherwise sharing a connection between threads, then connect may occur on a different OS thread than a subsequent use. Under this scenario, every thread needs to call mysql_thread_init prior to other MySQL calls.
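
As a sketch of that discipline: wrap each shared-connection action so that whatever OS thread it lands on is initialized first. Here initThread is the binding to mysql_thread_init exposed by the fork described in the Example section below, and repeated calls on an already-initialized thread are assumed to be harmless.

import qualified Database.MySQL.Base as MySQL
import Control.Concurrent (runInBoundThread)

-- Run a MySQL action on a bound thread, making sure the underlying
-- OS thread has its MySQL thread-local state initialized first.
withMySQLThread :: IO a -> IO a
withMySQLThread act = runInBoundThread (MySQL.initThread >> act)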

Issue 3: non-thread-safe calls

The mysql_library_init function needs to be called prior to any other MySQL calls. It only needs to be called once per process, although it is harmless to call it more than once.

It is called implicitly by mysql_init (which is in turn called by connect). However, this function is documented as not thread-safe. If you connect from two threads simultaneously, bad or unexpected things can happen.

Also, if you are calling mysql_thread_init as described above, it should be called after mysql_library_init.

This is why it is a good idea to call mysql_library_init in the very beginning, before you spawn any threads.
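
Putting Issues 2 and 3 together, the required ordering looks like this (a minimal sketch, again using the initLibrary and initThread names from the fork described below):

{-# LANGUAGE OverloadedStrings #-}
import qualified Database.MySQL.Base as MySQL
import Database.MySQL.Simple
import Control.Concurrent (forkOS, newEmptyMVar, putMVar, takeMVar)
import Control.Exception (bracket, finally)

main :: IO ()
main = do
  MySQL.initLibrary           -- 1. once per process, before any threads
  done <- newEmptyMVar
  _ <- forkOS $               -- 2. a bound thread, as per Issue 2
    (do MySQL.initThread      -- 3. once per OS thread, before other calls
        bracket (connect defaultConnectInfo) close $ \conn -> do
          rows <- query_ conn "select 1"
          print (rows :: [Only Int])
    ) `finally` putMVar done ()
  takeMVar done               -- wait for the worker before exiting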

Using a connection concurrently

This is not specific to the Haskell bindings, just something to be aware of:

You should not use the same MySQL connection simultaneously from different threads.

First, the docs explicitly warn you about that:

Multiple threads cannot send a query to the MySQL server at the same time on the same connection

(there are some details on this in case you are interested)

Second, the MySQL wire protocol is not designed to multiplex several communication «threads» onto the same TCP connection (unlike, say, AMQP), and trying to do so will probably confuse both the server and the client.

Example

Here is, to the best of my knowledge, a correct example of concurrently accessing a MySQL database. The example accepts requests at http://localhost:8000/key and looks up that key in a MySQL table.

It needs to be compiled against my fork of mysql, which has the following changes compared to 0.1.1.8:

  • Unsafe calls are marked as safe (the patch is due to Matthias Hörmann);
  • mysql_library_init and mysql_thread_init are exposed under the names initLibrary and initThread.

(How do you use a fork that is not on Hackage? For example, through a Stackage snapshot.)

{-# LANGUAGE OverloadedStrings, RankNTypes #-}
import Network.Wai
import qualified Network.Wai.Handler.Warp as Warp
import Network.HTTP.Types
import qualified Database.MySQL.Base as MySQL
import Database.MySQL.Simple
import Control.Exception (bracket)
import Control.Monad (void)
import Control.Concurrent (forkOS)
import qualified Data.Text.Lazy.Encoding as LT
import Data.Pool (createPool, destroyAllResources, withResource)
import Data.Monoid (mempty)
import GHC.IO (unsafeUnmask)

main = do
  MySQL.initLibrary
  bracket mkPool destroyAllResources $ \pool ->
    Warp.runSettings (Warp.setPort 8000 . Warp.setFork forkOSWithUnmask $ Warp.defaultSettings) $
      \req resp -> do
        MySQL.initThread
        withResource pool $ \conn ->
          case pathInfo req of
            [key] -> do
              rs <- query conn "SELECT `desc` FROM `test` WHERE `key` = ?"
                (Only key)
              case rs of
                Only result : _ -> resp $
                  responseLBS
                    ok200
                    [(hContentType, "text/plain")]
                    (LT.encodeUtf8 result)
                _ -> resp e404
            _ -> resp e404

  where
    mkPool = createPool (connect defaultConnectInfo) close 1 60 10
    e404 = responseLBS notFound404 [] mempty
    forkOSWithUnmask :: ((forall a . IO a -> IO a) -> IO ()) -> IO ()
    forkOSWithUnmask io = void $ forkOS (io unsafeUnmask)

The forkOSWithUnmask business is only an artifact of the way warp spawns threads; normally a simple forkOS would do. On the other hand, this example shows that in the real world you sometimes need to make an extra effort to get bound threads. Even warp got this feature only recently.

Note that this isn’t the most efficient implementation, since it essentially uses OS threads instead of lightweight Haskell threads to serve requests.

On destructors

The *_init functions allocate memory, so there are complementary functions, mysql_thread_end and mysql_library_end, which free that memory.

However, you probably do not want to call them. Here’s why.

Most multithreaded Haskell programs have a small number of OS threads managed by the GHC runtime. These threads are also long-lived. Trying to free the resources associated with those threads won’t gain much, and not doing so won’t do any harm.

Furthermore, suppose that you still want to free the resources. When should you do so?

Naively calling mysql_thread_end after serving a request would be wrong. It is only the lightweight Haskell thread that is finishing. The OS thread executing the Haskell thread may be executing other Haskell threads at the same time. If you suddenly destroy MySQL’s thread-local state, the effect on other Haskell threads would be the same as if you didn’t call mysql_thread_init in the first place.

And calling mysql_library_end without mysql_thread_end makes MySQL upset when it sees that not all threads have ended.

References

  1. GitHub issue bos/mysql#11: Address concurrency
  2. Leon P Smith: Concurrency And Foreign Functions In The Glasgow Haskell Compiler
  3. Edward Z. Yang: Safety first: FFI and threading
  4. Simon Marlow, Simon Peyton Jones, Wolfgang Thaller: Extending the Haskell Foreign Function Interface with Concurrency
  5. MySQL 5.6 Reference Manual: Writing C API Threaded Client Programs

April 17, 2015 08:00 PM

Yesod Web Framework

Announcing stackage-update

I just released a simple tool to Hackage called stackage-update. Instead of repeating myself, below is a copy-paste of the README.md from the Github repository.


This package provides an executable, stackage-update, which provides the same functionality as cabal update (it updates your local package index). However, instead of downloading the entire package index as a compressed tarball over insecure HTTP, it uses git to incrementally update your package list, and downloads over secure HTTPS.

It has minimal Haskell library dependencies (all dependencies are shipped with GHC itself) and only requires that the git executable be available on the PATH. It builds on top of the all-cabal-files repository.
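
The core idea is small enough to sketch in Haskell (hypothetical code, not the actual stackage-update source):

import System.Directory (doesDirectoryExist)
import System.Process (callProcess)

-- Keep a clone of all-cabal-files and pull it incrementally over HTTPS.
updateIndex :: FilePath -> IO ()
updateIndex dir = do
  cloned <- doesDirectoryExist dir
  if cloned
    then callProcess "git" ["-C", dir, "pull", "--ff-only"]
    else callProcess "git"
           [ "clone"
           , "https://github.com/commercialhaskell/all-cabal-files.git"
           , dir ]
  -- the real tool then rebuilds the local package index from the checkout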

Advantages

Versus standard cabal update, using stackage-update gives the following advantages:

  • Only downloads the deltas from the last time you updated your index, thereby requiring significantly less bandwidth
  • Downloads over a secure HTTPS connection instead of an insecure HTTP connection
    • Note that the all-cabal-files repo is also updated from Hackage over a secure HTTPS connection

Usage

Install from Hackage as usual with:

cabal update
cabal install stackage-update

From then on, simply run stackage-update instead of cabal update.

Why stackage?

You may be wondering why this tool is called stackage-update, when in fact the functionality is useful outside of the Stackage project itself. The reason is that the naming allows it to play nicely with the other Stackage command line tooling. Concretely, that means that if you have stackage-cli installed, stackage-update works as a plugin. However, you can certainly use stackage-update on its own without any other tooling or dependencies on the Stackage project.

Future enhancements

  • If desired, add support for GPG signature checking when cloning/pulling from the all-cabal-files repo

April 17, 2015 04:00 AM

April 16, 2015

Functional Jobs

Mid/Senior Software Development Engineer at Lookingglass Cyber Solutions (Full-time)

Lookingglass is the world leader in cyber threat intelligence management. We collect and process all-source intelligence, connecting organizations to valuable information through our cyber threat intelligence monitoring and management platform.

Our solutions allow customers to continuously monitor threats far and near, such as the presence of botnets, hosts associated with cybercriminal networks, unexpected route changes and the loss of network resiliency.

We are seeking qualified Software Development Engineers to join our team!

Required Skills & Experience:

  • US Citizen or Green Card Holder
  • Location: MD/VA based
  • Bachelor’s or Master’s degree in computer science, engineering, information systems or mathematics
  • 3-5 yrs experience with the full development life-cycle, with shipping products
  • 2+ yrs experience with Functional and OO languages – have worked in functional paradigms with immutable data models
  • 3+ yrs building distributed system products including messaging, NoSQL, RPC / RMI mechanisms – including building Service Oriented Architectures
  • Proficiency with data structure and algorithm analysis
  • Experienced working in a TDD Environment

Nice to Have:

  • Product development experience in network security, content security or cyber threat intelligence
  • Experience with concurrency models
  • Experience with key-value distributed databases
  • Experience deploying production software in Haskell, OCAML, Clojure, or Lisp
  • Comfortable writing one or more of the following Javascript, GoLang, Ruby

At Lookingglass, we believe our employees are our greatest assets. We offer competitive salaries, a full benefits package, with available medical, dental, vision, and disability insurance, a 401k retirement package, and stock options. We offer generous PTO, a well supplied kitchen, and regular team activities. Most importantly, we offer the opportunity to build a world-class product with a team of talented engineers.

Get information on how to apply for this position.

April 16, 2015 09:46 PM

Senior/Principal Software Development Engineer at Lookingglass Cyber Solutions (Full-time)

Are you an experienced software engineer in security, networking, cloud and big data? Are you interested in cyber security or improving the security of the Internet? Do you push yourself to be creative and innovative and expect the same of others?

At Lookingglass, we are driven and passionate about what we do. We believe that teams, not individuals, deliver great products. We inspire each other and our customers every day with technology that improves the security of the Internet and of our customers. Behind our success is a team that thrives on collaboration and creativity, delivering meaningful impact to our customers.

We are currently seeking qualified Senior/Principal Software Development Engineers to join our team!

Required Skills & Experience:

  • US Citizen or Green Card Holder
  • Location: MD/VA or CA based
  • Bachelor’s or Master’s degree in computer science, engineering, information systems or mathematics
  • Strong technical leader with proven experience leading project technologies and mentoring junior development team members
  • 7-10 yrs experience with the full development life-cycle, with shipping products
  • 4-8 yrs experience with Functional and OO languages – have worked in functional paradigms with immutable data models
  • 5+ yrs building distributed system products including messaging, NoSQL, RPC / RMI mechanisms, and Service Oriented Architectures
  • Proficiency with data structure and algorithm analysis, and concurrency programming
  • Experienced working in a TDD Environment
  • Comfortable with aggressive refactoring
  • Architectural and design skills to map a solution across hardware, software, and business layers in a distributed architecture

Nice to Have:

  • Product development experience in network security, content security or cyber threat intelligence
  • Experience with CSP concurrency models
  • Experience with key-value distributed databases
  • Experience deploying production software in Haskell, OCAML, Clojure, or Lisp
  • Polyglot programmer with production experience in imperative, declarative, OO, functional, strongly/weakly typed, static/dynamic, interpreted/compiled languages

Lookingglass believes our employees are our greatest assets. We offer competitive salaries, a full benefits package, with available medical, dental, vision, and disability insurance, a 401k retirement package, and stock options. We offer generous PTO, a well supplied kitchen, and regular team activities. Most importantly, we offer the opportunity to build a world-class product with a team of talented engineers.

Get information on how to apply for this position.

April 16, 2015 09:46 PM

Philip Wadler

Blame and Coercion: Together Again for the First Time


Blame and Coercion: Together Again for the First Time
Jeremy Siek, Peter Thiemann, Philip Wadler.
PLDI, June 2015.

C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop three calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are: λB, based on the blame calculus of Wadler and Findler (2009); λC, inspired by the coercion calculus of Henglein (1994); λS, inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006) and the threesome calculus of Siek and Wadler (2010). While λB is little changed from previous work, λC and λS are new. Together, λB, λC, and λS provide a coherent foundation for design, implementation, and optimisation of gradual types.

We define translations from λB to λC and from λC to λS. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for, that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: λC has a particularly simple definition, and the subtle definition of blame safety for λB is justified by the simple definition of blame safety for λC. Our calculus λS is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from λC to λS to validate the challenging part of full abstraction between λB and λC; and, second, using full abstraction from λB to λS to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work.

by Philip Wadler ([email protected]) at April 16, 2015 03:33 PM

A complement to blame



A complement to blame
Philip Wadler.
SNAPL, May 2015.

Contracts, gradual typing, and hybrid typing all permit less-precisely typed and more-precisely typed code to interact. Blame calculus encompasses these, and guarantees blame safety: blame for type errors always lays with less-precisely typed code. This paper serves as a complement to the literature on blame calculus: it elaborates on motivation, comments on the reception of the work, critiques some work for not properly attending to blame, and looks forward to applications. No knowledge of contracts, gradual typing, hybrid typing, or blame calculus is assumed.

by Philip Wadler ([email protected]) at April 16, 2015 03:32 PM

Well-Typed.Com

Improving Hackage security

The members of the Industrial Haskell Group are funding work to improve the security of packages downloaded from Hackage. The IHG members agreed to fund this task some time ago and my colleague Austin and I have been working on the design and implementation.

In this post I will explain the problem and the design of our solution.

TL;DR

We’re implementing a system to significantly improve Hackage security. It’s based on a sensible design (The Update Framework) by proper crypto experts. The basic system is fully automatic and covers all packages on Hackage. A proposed extension would give further security improvements for individual packages at the cost of a modest effort from package authors.

It will also allow the secure use of untrusted public Hackage mirrors, which is the simplest route to better Hackage reliability. As a bonus we’re including incremental index downloads to reduce cabal update wait times. And it’s all fully backwards compatible.

Goals

The general aim is to make a significant incremental improvement to Hackage security, given limited resources to spend on the project. So that means covering as many users, packages and likely threats as we can.

We have had to think carefully about the implementation effort and what improvements will give the biggest security benefits. Our general approach has been to go for the biggest improvements first, but to leave the door open to further tightening later.

Crypto-humility and initial thoughts

Both Austin and I have some experience with cryptography and computer security. Austin used to work for a computer security company, and many years ago I worked at an anti-virus company on the problem of securing the software update process. So we know the general options. Importantly, we also know enough to know that we don’t know everything. Hopefully the combination will be enough to avoid security disasters like the recent Docker image download vulnerability.

The basic options are:

  • end-to-end verification by having package authors sign their packages; and/or
  • having the Hackage server sign the packages (or rather just the index)

Our initial thoughts were that having the server sign the index gives a bigger bang for the buck than a scheme with author signing. This is because with the server doing the signing we can cover all packages and not require any knowledge or effort by either the package authors or the developers downloading the packages. With author signing we fear we would not be able to cover the majority of packages, at least not in the near term. Without covering all the packages we cannot have untrusted Hackage mirrors. Our rough estimate was also that index signing would be less effort to implement. Of course we’re well aware that index signing provides shallower security than the end-to-end verification with author signing.

So while we started with a general idea of the kind of solutions that would fit with Hackage and our budget, we looked around for designs and analysis from better experts than ourselves.

Austin pointed out some prior art called The Update Framework (abbreviated TUF). TUF is an architecture for securing software update systems, though there is also a reference implementation in Python. It has been designed by a number of academics, based on a few published papers and practical experience with the Tor Project’s update system. There is ongoing work to use TUF to secure Python and Ruby’s package distribution systems (both of which are fairly similar to Hackage).

What is good about TUF is that it considers the problem as a whole, with a wide range of threats. It has what security researchers call a “threat model”, which is needed to be able to discuss with clarity what kinds of threats a system can protect against. As an architecture it is also flexible enough to cover a fairly wide range of update systems: one can use subsets and still be clear about what protections one has.

The elements of TUF are still based on the basic ideas of end-to-end signing and index signing, but it is put together in a comprehensive way. There are several attacks that TUF considers and protects against that we did not consider with our initial thoughts on index signing.

Indeed even if one goes for a system based only on end-to-end author-based signing (e.g. individual GPG-signed packages) then there are still several possible attacks. In retrospect this is clear, but it was a bit of a surprise, since packages individually GPG-signed by their authors are often considered the gold standard.

The Update Framework

TUF is designed to protect against adversaries that can interfere with network traffic between you and the repository server, for example someone operating a wifi network you’re connected to, or in control of a proxy or operating a mirror of the repository. To a degree it is also designed to protect against adversaries that can compromise a repository server itself.

In a full implementation of the architecture, TUF subsumes both server-based index signing and end-to-end verification with author signing.

It relies on a number of different cryptographic keys with different roles. There are one or more root keys and a few subordinate keys for particular purposes. It does not rely on an external PKI, like the public certificate authority system that SSL/TLS uses.

The key idea with having multiple keys with different roles is that one can design a system in such a way that compromising one key is not enough to break the whole system. This relies on different keys living in different places so that compromising one doesn’t automatically imply compromising the others. Some keys have to live on public facing servers and so are at greatest risk, while others can be kept offline or on private servers and so are much harder to compromise.

Attacks

TUF provides protection against a number of attacks. If you want to imagine a concrete example, consider that the attacker runs or has compromised a mirror of Hackage that you use, and then think about each of these attacks:

  • Supplying modified packages to end users.

  • Rollback attacks where an attacker gets a client to install an older version of a package than a version the client previously knew about. Consider for example a case where the older package might have known security vulnerabilities.

  • Freeze attacks where the attacker prevents the entire set of packages from being updated (e.g. by always responding with an old snapshot).

  • Mix and match attacks where the attacker supplies combinations of packages or package metadata that never existed in the upstream repository.

  • Extraneous dependencies attacks where the attacker causes clients to install dependencies that were not the intended dependencies. For example, with Hackage we might sign individual .tar.gz files but the .cabal metadata is kept separately and the attacker could control that.

  • Wrong author attacks where an otherwise legitimate author supplies a package that they are not authorised to supply.

  • Slow retrieval attacks where the attacker causes the download to be so slow that updates effectively never complete, and without the user noticing. Of course a malicious mirror can always prevent downloads, but it shouldn’t go unnoticed.

  • Endless data attacks where the attacker responds to requests with huge amounts of download data that causes problems for the client. For example a client should always securely know the size of a package before downloading it.

Interestingly, a naive approach based only on authors signing packages solves only the first of the attacks above. All the others are still possible, and several of them can lead directly to compromised client systems, while others can force you to use old possibly vulnerable software, or just DOS your system.

Roles for keys

TUF defines a number of roles for keys:

  • Root role key (or keys). The root keys are used to sign the other keys. This is the ultimate root of the trust chain. These keys would be kept very securely, preferably password-protected and offline.

  • Snapshot role key. This is used to sign the package index. This protects the package metadata and prevents mix and match and extraneous dependency attacks.

  • Timestamp role key. This is used to ensure clients only accept an up-to-date package index, e.g. at most a day old. This prevents freeze or rollback attacks.

  • Target role key. This is used for signing individual packages. It is not used directly to sign packages but to sign delegation information. There would be per-author or per-package target keys. The top-level target key is used to say which target key is allowed to be used for which package. This is what gives end-to-end author signing, and the signed delegation information protects against wrong author attacks.

  • Mirror role key (optional). This isn’t actually needed for mirroring; it’s just for securely distributing a list of official mirrors.

Client verification process

In a full implementation, including delegated target keys, the client download and verification process goes like this:

  • The client downloads a very small timestamp file signed by the timestamp key.
  • It verifies the timestamp file signature (including checking that the timestamp key is signed by the root key(s) and is not yet expired).
  • The timestamp file includes a hash and size of the current package index.

At this point we know (with cryptographic evidence) whether the package index has changed and what size and hash to expect when we download it. We also have a bound on the freshness of this information because the timestamp signature includes a short (e.g. day or two) expiry time.

  • If the timestamp file indicates that the package index has changed then the client downloads the package index.
  • The client verifies the hash of the package index matches the expected one from the timestamp file.
  • The package index is also signed by the snapshot key and so the client verifies this signature too.
  • The package index contains hashes and sizes of all the packages.

At this point we have downloaded the package index successfully and know (again with cryptographic evidence) that it has not been tampered with since being created on the server and so we can trust all the metadata in the index.

  • The client may choose to download an individual package.
  • The client verifies that the hash of the package matches the expected one from the package index.
  • The package index also contains a signature for the package signed by one of the delegated target keys.
  • It also contains a set of target key delegation information, signed by the root target key, listing which delegated target keys are allowed to sign which packages.
  • The client verifies the key delegation information.
  • The client verifies the given target key is allowed to sign for the given package (by looking it up in the key delegation information).
  • The client verifies the package signature with the given target key.

At this point we have downloaded an individual package and we know (with cryptographic evidence) that the package tarball has not been modified since it was created on the author’s system, and that this author is authorised to provide this named package.
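
To make the shape of the first stage concrete, here is a minimal Haskell sketch of the timestamp and index checks. The types and helper signatures are hypothetical, for illustration only; they are not the API of any real implementation.

{-# LANGUAGE RecordWildCards #-}
import Control.Monad (unless)
import Data.ByteString (ByteString)

data Timestamp = Timestamp
  { tsIndexHash :: ByteString   -- expected hash of the package index
  , tsIndexSize :: Int          -- expected size, bounding the download
  }

fetchVerifiedIndex
  :: (String -> Int -> IO ByteString)        -- download with a size cap
  -> (ByteString -> Either String Timestamp) -- check timestamp signature,
                                             -- key chain and expiry
  -> (ByteString -> ByteString)              -- cryptographic hash
  -> IO ByteString
fetchVerifiedIndex download verifyTimestamp hash = do
  tsRaw <- download "timestamp.json" 4096  -- tiny file, fixed cap is fine
  Timestamp{..} <- either fail return (verifyTimestamp tsRaw)
  idx <- download "00-index.tar" tsIndexSize
  unless (hash idx == tsIndexHash) $
    fail "package index does not match the signed timestamp"
  return idx  -- snapshot signature and per-package checks come next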

Discussion

The different keys provide assurance of slightly different (but somewhat overlapping) things. If we only care about man-in-the-middle attacks then we only need the timestamp key and signature, since it includes the hash of the index which includes hashes of the packages.

The addition of the snapshot key to sign the index allows the snapshot key to be kept on a different system from the public-facing index server (think of an organisation like Red Hat with an internal system only available to people uploading packages). This timestamp/snapshot separation would allow the public-facing index server to be compromised with the damage being limited to freeze attacks (and the duration of this is bounded).

The addition of the target key (and delegated target keys) gives additional protection in the case that the index server is compromised. When target keys are being used, an attacker that has taken control of the index server can perform freeze attacks, rollback and mix and match attacks, but cannot change the contents of individual packages, nor provide packages that they are not authorised to provide.

Again, it is important to emphasise that all the keys need to be kept in separate places for them to have any security benefit.

Note that slow download and endless data attacks can be prevented because except for the initial tiny timestamp file, the client knows (securely) the expected download size and so carefully written download code can at least notice slow download and endless data attacks.

The Update Framework for Hackage

TUF is an architecture (albeit with a reference implementation) designed to be flexible enough to cover many use cases (distros, individual apps etc) and so we are free to adapt it for our particular use case.

For the case of Hackage as currently designed, the timestamp/snapshot key distinction does not add anything, as both keys would need to be on the same system. This is because the Hackage server is public facing both for download and upload. There would need to be a separate more secure upload server for the timestamp/snapshot distinction to add value. So for the Hackage implementation we will merge the timestamp and snapshot keys into a single package index signing key.

We also need to adapt the format and layout of the various bits of TUF metadata to fit the existing Hackage index format. For the most part this is fairly straightforward. The timestamp file can be provided as a separate file over HTTP, since it needs to be small and downloaded before the package index. All the other files that TUF specifies can be included inside the Hackage index file. This is straightforward because the Hackage index is simply a tarball and is designed to be extensible with extra file entries.

Our development plan had us start with the package index signing but keeping in mind the target keys feature so that it can be added later without any compatibility problems. We have enough funding to complete the package index signing and we hope that with this clear design roadmap we can secure either further funding or community volunteer effort.

Implementing target keys later would involve client tools (like cabal) generating keys and an out-of-band mechanism to submit them to be signed by the holder of the central target key (in practice the Hackage admins). For updating who is allowed to upload which package, a semi-manual process will be needed for the central target key holders to check and authorise re-signing the key delegation information. Tools like cabal would also need to be extended to sign tarballs and upload the signatures. So there would be some extra work for package authors but it could be kept fairly minimal. The main cost would be that changing the set of maintainers for a package on Hackage would become an asynchronous process involving a Hackage admin re-signing something.

As a final point, Hackage allows package authors and Hackage trustees to edit .cabal file metadata after release. So when we add target keys then these parts of the index should be protected with the target keys, not just the main package index key.

Comparing with other approaches

HTTPS

HTTPS is often mentioned as a way to improve Hackage security. As the TUF papers make clear, relying on the public PKI for security leaves a lot to be desired, given all the well-documented problems with trusting all the public certificate authorities in the world. In addition, relying on HTTPS does not allow for untrusted mirrors, whereas TUF’s index signing approach does allow that. Finally, on a practical note HTTPS is unfortunately complicated to support sanely on all platforms (OpenSSL is horrible, and distributing it for platforms that don’t come with it is painful). By contrast, TUF only relies on a crypto hash and signature verification algorithm. There are many choices for these including good choices that are small enough to embed existing audited C code directly into a Haskell library or tool. We plan to use ed25519 and in particular the C implementation extracted from NaCl.

GnuPG

Another approach that has been discussed and had some development effort is one based on author signing with GnuPG by Nikita Karetnikov. It was designed to fit with the existing Hackage index format with signatures being included in the package index tarball. The primary difficulty with a GnuPG-based system is in establishing the web-of-trust to a sufficient degree that signature verification can be done fully automatically without supervision from the user.

More GnuPG

Another recent GnuPG-based proposal comes from Chris Done. This proposal is under active discussion so in the spirit of constructive criticism it’s worth comparing it with TUF.

The first general point is that TUF has been designed by academic experts in the subject, based both on research and existing real-world systems. Our crypto-humility should cover not just crypto algorithms but extend to whole system designs. As I mentioned at the beginning of this post, though my initial thoughts on package index signing looked quite similar to the root+snapshot key part of the TUF system, there are many extra details that TUF covers that I would not have considered. With these in mind we can already spot underspecified areas in the proposal. For example one part of the spec says “Downloads the index from the web server or loads from cache if not modified”, but this leaves open the question of how we know securely if the index has been modified or not. TUF addresses this with the signed timestamp file. This kind of detail is not something I would have considered either, but TUF identifies the problem (that relying on HTTP cache control headers would be vulnerable to MITM) and has a solution. Generally in a system designed by crypto amateurs like us we would expect to have such holes. So we can do better by picking an existing expert design. TUF seems like a reasonable choice since it is designed to be somewhat flexible to fit different situations, and it’s being considered for both Python and Ruby’s package systems.

More specifically, the GnuPG-based proposal relies on a web of trust in which individual end users have to decide who they trust and configure their tools appropriately. A more recent version of the proposal includes signed mappings from packages to signers, which are basically statements of “I trust so and so to upload package foo”, but you still need to trust the author of that statement.

The system seems to be designed primarily for enterprise users who are using a carefully selected set of packages where a level of manual supervision to decide which authors to trust is justified. By contrast TUF’s target key delegation system is designed to be fully automatic for end users.

With TUF’s target key delegation, the chain of trust is very clear and simple: the root key signs the target key, which signs delegated keys and delegation information, and the delegated keys sign individual packages.

I also think it’s unfortunate that the signatures are not integrated as part of the Hackage index as in Nikita’s system (though it is in keeping with the general 3rd party design). On the positive side, the system has the flexibility to have 3rd parties make claims of trustworthiness, such as “I have manually audited this code” or “My CI system says this builds ok without obviously attempting to attack the system”. That extra complexity adds to the amount of manual supervision needed and seems to have been left out of the latest version of the proposal.

Finally because the proposal has nothing corresponding to the TUF snapshot key signature then all the corresponding attacks are possible, including rollback, freeze and mix-and-match attacks.

A central authority

A suggestion that has come up in recent discussion is that there should be a central authority for who is allowed to upload which package, and that this information should be transparent. The criticism is that while Hackage is currently that central authority, it does not generate any verifiable evidence.

With TUF the target key delegation information is precisely that cryptographic evidence. The holders of the target key are that central authority, and the signed delegation information is the cryptographic evidence of who is allowed to upload what; that information is distributed to all clients.

Incremental updates

There has been a design for incremental updates of the Hackage index floating around for some years. With the TUF approach of putting more signatures into the index the pain of slow Hackage index downloads would become critical so our design incorporates that existing design for incremental updates to the index. This means most index updates will be 10s of kilobytes rather than multiple megabytes. This is actually even better than rsync (and likely better than git). The trick is that the tar format was originally designed to be append only (for tape drives) and so if the server simply updates the index in an append only way then the clients only need to download the tail (with appropriate checks and fallback to a full update). Effectively the index becomes an append only transaction log of all the package metadata changes. This is also fully backwards compatible.
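
Here is a sketch of the client side of that trick, using an HTTP Range request via http-client (a hypothetical helper; the real client must also verify the hash of the resulting index and fall back to a full download on mismatch):

{-# LANGUAGE OverloadedStrings #-}
import Network.HTTP.Client
import qualified Data.ByteString.Char8 as BS8
import qualified Data.ByteString.Lazy as BL

-- Fetch only the bytes of the index past what we already have locally;
-- the caller appends them to its local copy and re-verifies the hash.
fetchIndexTail :: String -> Integer -> IO BL.ByteString
fetchIndexTail url haveBytes = do
  manager <- newManager defaultManagerSettings
  req0 <- parseRequest url
  let range = BS8.pack ("bytes=" ++ show haveBytes ++ "-")
      req = req0 { requestHeaders = ("Range", range) : requestHeaders req0 }
  responseBody <$> httpLbs req manager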

Conclusion

We think implementing TUF for Hackage will be a major improvement to security. The initial implementation without target keys will be completely transparent to both end users and authors, and then the later addition of per-author target keys will further improve security by guarding against certain attacks even if the central server is compromised.

We accept that although we are programming experts, we are not crypto experts. We are hopeful that by using TUF we are reusing an existing expert crypto design and will not fall into the traps that typically bedevil our fellow crypto amateurs.

by duncan at April 16, 2015 09:17 AM

April 15, 2015

Joachim Breitner

Talk and Article on Monads for Reverse Engineering

In a recent project of mine, a tool to analyze and create files for the Ravensburger Tiptoi pen, I used two interesting Monads with good results:

  • A parser monad that, while parsing, remembers what part of the file were used for what and provides, for example, an annotated hex dump.
  • A binary writer monad that allows you to reference and write out offsets to positions in the file that are only determined “later” in the monad, using MonadFix; a toy version is sketched below.
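
Here is a toy version of that second monad (a sketch, not the actual tttool code): a tiny binary writer that tracks the current offset, whose MonadFix instance lets an mdo block mention an offset that is only determined further down.

{-# LANGUAGE RecursiveDo #-}
import Control.Monad.Fix (MonadFix (..))
import Data.Word (Word8)

-- runAt takes the current offset and yields a result, the new offset,
-- and the bytes written.
newtype BinWriter a = BinWriter { runAt :: Int -> (a, Int, [Word8]) }

instance Functor BinWriter where
  fmap f w = BinWriter $ \o ->
    let (a, o', bs) = runAt w o in (f a, o', bs)

instance Applicative BinWriter where
  pure a = BinWriter $ \o -> (a, o, [])
  wf <*> wa = BinWriter $ \o ->
    let (f, o1, b1) = runAt wf o
        (a, o2, b2) = runAt wa o1
    in (f a, o2, b1 ++ b2)

instance Monad BinWriter where
  w >>= k = BinWriter $ \o ->
    let (a, o1, b1) = runAt w o
        (b, o2, b2) = runAt (k a) o1
    in (b, o2, b1 ++ b2)

-- The key instance: feed the final result back in, lazily.
instance MonadFix BinWriter where
  mfix f = BinWriter $ \o -> let r@(a, _, _) = runAt (f a) o in r

tell :: [Word8] -> BinWriter ()
tell bs = BinWriter $ \o -> ((), o + length bs, bs)

offset :: BinWriter Int
offset = BinWriter $ \o -> (o, o, [])

-- Write a one-byte pointer to a payload that only appears later in the
-- output; example evaluates to [2,255,1,2,3].
example :: [Word8]
example = let (_, _, bs) = runAt prog 0 in bs
  where
    prog = mdo
      tell [fromIntegral payloadPos] -- offset known only "later"
      tell [255]                     -- some unrelated byte
      payloadPos <- offset
      tell [1, 2, 3]                 -- the payload itself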

As that’s quite neat, I wrote a blog post for the German blog funktionale-programmierung.de about it, and also gave a talk at the Karlsruhe functional programmers group. If you know some German, enjoy; if not, wait until I have a reason to give the talk in English. (As a matter of fact, I did give the talk in English once, but only spontaneously, so the text is in German only so far.)

by Joachim Breitner ([email protected]) at April 15, 2015 11:32 PM

Noam Lewis

Identity Crisis

Compared to other tools adding static types to JavaScript, Infernu’s main strengths are full type inference and strong type safety. Here are a few examples.

Identity Function

Here is the simplest possible function:

function id(x) {
  return x;
}

TypeScript

TypeScript‘s compiler tsc can generate a .d.ts from that:

declare function id(x: any): any;

That’s no good! We can pass any type we want to our function, but the return type is not tied to the argument type – it is treated as anything, basically untyped. So we can do this:

var n = 'hi'; n = id(5);

And TypeScript will output the following .d.ts:

declare var n: string;

That seems wrong: n is assigned a number via id(5). But wait – there is a way to turn off inference of any types (with --noImplicitAny). If we try that on our identity function, we get:

id.ts(1,13): error TS7006: Parameter 'x' implicitly has an 'any' type.

Explicit Generics

Oops. Ok, but TypeScript has generics! Let’s try that: the TypeScript handbook gives exactly the example we need – we just write the type out explicitly, like so:

function identity<T>(arg: T): T {
    return arg;
}

Great! We got what we needed, but without type inference.

Flow

Facebook’s Flow has a type system that’s (slightly?) different from TypeScript’s, and apparently works differently. Let’s try it. We can use the flow suggest command to get suggested typing (I’m using version 0.7). Here’s what we get for a single file containing only the identity function above: nothing. It doesn’t suggest any type. Ok, let’s try using our id in a way that makes no sense, to induce an error (after all, type checkers are used to find errors). Here’s bad_id.js:

/* @flow */
function id(x) { return x;}
var n = 'hi'; n = id(5);
var z = n; // added so we can see what flow says the type of n is here.

(Note: The /* @flow */ header is used to tell flow that it should look at this file.)
Run flow suggest bad_id.js and you get a diff-style output. I’ve ‘applied’ it to make it easier to read – here’s what flow suggests:

function id(x: number) : number{ return x;}
var n: string | number = 'hi'; n = id(5);
var z: number = n;

Interesting! We managed to get something without resorting to explicit type annotations. But we didn’t get an error!

First, id was inferred to take and return number, apparently because that’s the only way we’ve used it. It would be interesting to see what happens when we use id several times with different types – we’ll try that soon.

Second, n was given a union type string | number, because it takes on both types during its lifetime. It may be a matter of taste, but I would rather not have the type checker deduce implicit union types in this case (n = 'hi'; n = 5;) – instead I would expect that to be an error.

The unique (and impressive) part is that flow is able to tell that z is only ever going to have number values, and so it’s safe to assign it that type. That’s probably a result of the flow analysis they do.

Now let’s try calling id several times, with different types:

/* @flow */
function id(x) { return x;}
id(5); id('hi');

Flow suggests:

function id(x: string | number) : string | number{ return x;}

Uh oh – does this mean the argument and result types are no longer tied to each other? If I pass in a number, will the compiler check that I use the result only as a number (and not as a string)? Let’s try using it: for var n = id(5), flow suggests:

var n: string | number = id(5);

Despite n only ever being assigned a number, it now has type string | number. So apparently, union types propagate implicitly, infecting everything on the way.

Explicit Generics

Fortunately, flow too has generics, and again straight out of the manual we find:

/* @flow */
function foo<X>(x: X): X { return x; }

Great! We got what we needed, but without type inference.

Infernu

Let’s get down to business. Infernu says:

//       id : a.(b -> b)
function id(x) { return x; }

Cool! Without any help from us, Infernu figured out the most generic type. Take a type b, return the same type b. The magic of polymo…Wait, what’s that a. thing?

Well, JavaScript has this nice keyword called this which is dynamically scoped, meaning that this is bound to different things depending on how your function is invoked and not on how it’s defined. For example:

var obj = { hat: { type: 'top' }, getHatType: function() { return this.hat.type; } };
obj.getHatType(); // ok.
var f = obj.getHatType;
f(); // oops! TypeError: Cannot read property 'type' of undefined

Nasty stuff. Every JavaScript programmer should know this.

Fortunately, Infernu is here to save you. It infers not only what arguments and return types a function has, but also what this must be for the function call to work, and verifies that you use it correctly.

Infernu type signatures for functions have the following format (subject to change without notice, bla bla bla):

this.(arguments -> result)

So for our var f = obj.getHatType example, Infernu says:

//  f : {hat: {type: d, ..f}, ..e}.(() -> d)
var f = obj.getHatType;

Decomposing that type signature, we read that this is expected to be an object containing at least a property called ‘hat’ which is an object with at least a property called ‘type’ of some type d. The function takes no arguments (hence the empty ()) and returns the same type d that was taken from the hat.type property of this. (The ellipsis stuff ..f and ..e is due to row-type polymorphism, which will be elaborated upon in a future blog post.)

Back to our identity function, we examine the signature again: a.(b -> b) – the type of this is given an unconstrained type parameter a – so Infernu is telling us explicitly that this is allowed to be anything for our identity function. Yippy!

Summary

We saw that both TypeScript and Flow (and Google Closure too, which I haven’t shown) support generics that can express the identity function properly. They also offer weak forms of type inference that sometimes yield weakly-typed results. Infernu, on the other hand, will infer generic types automatically, and prefers to fail over giving weak typings.

There are many known discussions about subtyping (inheritance)-based type systems, represented here by TypeScript and Flow, vs. parametric polymorphism, represented here by Infernu. There are known pros and cons to both sides, but one important result is that type inference is just easier when there is no subtyping involved.

Infernu is designed to take advantage of that.


by sinelaw at April 15, 2015 10:10 PM

FP Complete

The future of School of Haskell and FP Haskell Center

Last month, we announced the open sourcing of ide-backend, the engine underlying both School of Haskell and FP Haskell Center. In this blog post, I'm going to lay out FP Complete's plans for the future of both of those services, and our overall commercial Haskell tooling.

tl;dr Open sourcing School of Haskell, releasing brand new open source Stackage-based tooling, and merging the features of FP Haskell Center into the other two offerings.

School of Haskell

School of Haskell has served as a foundational service for interactive Haskell training, written in a collaborative way. In our efforts to help increase adoption of Haskell, we've decided that the best next step for the evolution of School of Haskell is to fully open source it. Concretely, our plans are:

  • Open up a new code base for School of Haskell, under the auspices of the Commercial Haskell Special Interest Group
  • Host the site on its own dedicated domain name, http://www.schoolofhaskell.com/
  • Set up redirects as necessary
  • Make the content of School of Haskell much more open
    • Make the terms of use more open, likely moving towards a Creative Commons license
    • Make it easier to interact with the raw content of the school, possibly by storing the content in a Git repository that is publicly cloneable
    • Encourage collaborators on the code base to get new features in place (and see the FP Haskell Center discussion below for more information on this from us)

To be clear: FP Complete fully intends to continue maintaining both the code base and the live servers. This move is intended to allow others to contribute their own, wonderful new ideas to the project.

Stackage-based Tooling

FP Complete started the Stackage project about three years ago to address issues around library compatibility and collaborative development on teams, and it has been a resounding success. It's already the basis for School of Haskell and FP Haskell Center, and with Stackage Server is fully available to all users as an easy way to get sane package sets. We started LTS Haskell at the beginning of this year, which has only increased that success further.

Today’s Stackage project answers a lot of the demand for better tooling, especially in the commercial realm. However, it doesn’t answer everything. Companies we work with need solutions for everything from CI integration to editor plugins to deployment. FP Haskell Center answers some of these demands, but not all of them. Over the past year, we’ve been working with our customers to develop a commercial grade set of tools that solve these problems more holistically, and we are happy with the result.

Our new tooling is integrated tightly with Stackage, provides improvements to some flaws in currently available tooling, and adds in functionality not yet present at all in the open source world. For example:

  • Distribution of binary package databases to your whole team, avoiding recompile time and ensuring consistent environments
  • Reliable rebuilding of all local packages with a single command
  • Built in documentation serving for your entire package set
  • Powerful code exploration features

We now feel comfortable enough with the quality of the tooling to take it to the next step. So over the next few weeks, we will begin the process of open sourcing all of this tooling to the Haskell community. As we release components, we will be describing their functionality, explaining expected use cases, and documenting current shortcomings. But the short answer is: if you're working on developing Haskell applications, this tooling is likely to make your life significantly better.

FP Haskell Center

This leaves the question: what of FP Haskell Center? We initially received a lot of requests from companies looking for a web based IDE. However, over the past two years, we have seen that, in reality, people were looking to solve two different use cases:

  • For learners: an easy way to get started with learning Haskell
  • For application writers: a reliable set of tools for developing, building, and shipping software

After careful consideration, we believe that the two offerings mentioned above, School of Haskell and Stackage-based tooling, are the best way forward, and that continuing to push FP Haskell Center as a development platform is not a good path. Instead, our goals are to take the best features that FP Haskell Center has to offer, such as interactive type information, and make them available in both School of Haskell and our commercial tooling.

At the end of the day, we'll be offering two very complementary products: a hosted learning site for quickly getting up to speed with Haskell, and commercial grade tooling for writing software.

Timeline

The first concrete step on this path is going to be our Stackage-based tooling work. We're hoping to get a first public version out the door in short order. The following step is open sourcing School of Haskell. Once that is fully done, we'll start the discussions around FP Haskell Center. It is still a service that many people use, and we have no intention of pulling the rug out from under people. Our timeline will likely be to:

  • leave FP Haskell Center running for quite a while
  • when School of Haskell and Stackage-based tooling become solid enough offerings to replace it, officially deprecate it
  • some time after that, stop offering FP Haskell Center as a separate service

We very much welcome community input on these plans. Let's kick off a discussion on Reddit.

April 15, 2015 05:00 AM

Brent Yorgey

Polynomial Functors Constrained by Regular Expressions

I’ve now finished revising the paper that Dan Piponi and I had accepted to MPC 2015; you can find a PDF here:

Polynomial Functors Constrained by Regular Expressions

Here’s the 2-minute version: certain operations or restrictions on functors can be described by regular expressions, where the elements of the alphabet correspond to type arguments. The idea is to restrict to only those structures for which an inorder traversal yields a sequence of types matching the regular expression. For example, (aa)^* gives you even-size things; a^*ha^* gives you the derivative (the structure has a bunch of values of type a, a single hole of type h, and then more values of type a); and b^*ha^* gives you the dissection.
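
To get a feel for what such constraints mean, here are hand-written Haskell versions of two of them (a sketch; the paper derives descriptions like these mechanically):

-- (aa)^* : a list that can only hold an even number of elements
data EvenList a = EvenNil | EvenCons a a (EvenList a)

-- a^* h a^* : the derivative of a list; some values of type a, one
-- hole of type h, then more values of type a
data ListDeriv a h = ListDeriv [a] h [a]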

(Figure: a dissected tree.)

The punchline is that we show how to use the machinery of semirings, finite automata, and some basic matrix algebra to automatically derive an algebraic description of any functor constrained by any regular expression. This gives a nice unified way to view differentiation and dissection; we also draw some connections to the theory of divided differences.

I’m still open to discussion, suggestions, typo fixes, etc., though at this point they won’t make it into the proceedings. There’s certainly a lot more that could be said or ways this could be extended further.


by Brent at April 15, 2015 01:22 AM

Mark Jason Dominus

I'm old

This week I introduced myself to Recurse Center, where I will be in residence later this month, and mentioned:

I have worked as a professional programmer for a long time so I sometimes know strange historical stuff because I lived through it.

Ms. Nikki Bee said she wanted to hear more. Once I got started I had trouble stopping.


I got interested in programming from watching my mom do it. I first programmed before video terminals were common. I still remember the smell of the greasy paper and the terminal's lubricating oil. When you typed control-G, the ASCII BEL character, a little metal hammer hit an actual metal bell that went "ding!".

I remember when there was a dedicated computer just for word processing; that's all it did. I remember when hard disks were the size of washing machines. I remember when you could buy magnetic cores on Canal Street, not far from where Recurse Center is now. Computer memory is still sometimes called “core”, and on Unix your program still dumps a core file if it segfaults. I've worked with programmers who were debugging core dumps printed on greenbar paper, although I've never had to do it myself.

I frequented dialup BBSes before there was an Internet. I remember when the domain name system was rolled out. Until then email addresses looked like yuri@kremvax, with no dots; you didn't need dots because each mail host had a unique name. I read the GNU Manifesto in its original publication in Dr. Dobb's. I remember the day the Morris Worm hit.

I complained to Laurence Canter after he and his wife perpetrated the first large scale commercial spamming of the Internet. He replied:

People in your group are interested. Why do you wish to deprive them of what they consider to be important information??

which is the same excuse used by every spammer since.

I know the secret history of the Java compiler, why Java 5.0 had generics even though Sun didn't want them, and why they couldn't get rid of them. I remember when the inventors of LiveScript changed its name to JavaScript in a craven attempt to borrow some of Java's buzz.

I once worked with Ted Nelson.

I remember when Sun decided they would start charging extra to ship C compilers with their hardware, and how the whole Internet got together to fund an improved version of the GNU C compiler that would be free and much better than the old Sun compiler ever was.

I remember when NCSA had a web page, updated daily, called “What's New on the World Wide Web”. I think I was the first person to have a guest book page on the Web. I remember the great land rush of 1996 when every company woke up at the same time and realized it needed a web site.

I remember when if you were going to speak at a conference, you would mail a paper copy of your slides to the conference people a month before so they could print it into books to hand out to the attendees. Then you would photocopy the slides onto plastic sheets so you could display them on the projector when you got there. God help you if you spilled the stack of plastic right before the talk.

tl;dr i've been around a while.

However, I have never programmed in COBOL.


(I'm not actually very old, but I got started very young.)

by Mark Dominus ([email protected]) at April 15, 2015 12:00 AM

April 14, 2015

The GHC Team

GHC Weekly News - 2015/04/14

Hi *,

It's been a few weeks since the last news bulletin - your editor apologizes for that. It's actually been a relatively slow few weeks here too, and busyness outside of GHC has taken up some of my attention. Despite that, GHC 7.10.1 was released, a new HP alpha is out, and things are moving along smoothly. Now that the release is done, things are quietly moving along in HEAD - with people committing code with reckless abandon, of course.

This week, GHC HQ met up, but it's been very light since the 7.10.1 release. It currently looks like nothing is pushing us to do a 7.10.2 release for at least a few more weeks - but see below.

  • We puzzled a bit about the release status of 7.10.2, and thought: it's only holding up people who are depending on it. So, who's depending on it, and what do they need fixed? See below for more.
  • We also talked a bit about performance - it seems the compiler has been getting much slower over time since the 7.8.x release series, and it's time to rein it in. Austin will be spending his week investigating a bit of this and its causes.

7.10.2 Status

So, you may be wondering when the 7.10.2 release is. The trick is it happens when you tell us it should happen!

So far, taking a look at milestone:7.10.2, we've fixed about half the bugs currently marked for it. We're probably going to punt some of those - and we're not sure all the ones that are there should stay.

So this is a call: If you need something to be fixed during 7.10.2, please file a ticket, set the milestone, and alert us. The sooner the better, because it'll inform us as to when we should release. Emailing [email protected] is also a sure-fire way to get our attention.

And remember: you can always find out the latest about the next release at the Status page (in this case, for 7.10.2) - https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.10.2

Call for help: DocBook to AsciiDoc

The GHC team needs some help. A few months ago, we put out a poll to convert our DocBook-based manual to AsciiDoc.

The poll had a mostly lukewarm reception, but the idea is that the conversion will A) make life easier for people who frequently modify the users guide, and B) make life easier for people who add things irregularly, by lowering the barrier to entry.

It looks like we still want to do this - but alas, many of us don't have time!

So, we're asking the public: is anyone willing to step up and help here? For example, it may be possible to get a long way with just pandoc, but we need someone to finish the job - and in return, we'll help along the way!

List chatter

  • A GHC user, Dave, asked the list about some questions with Cross Compilation, as he's attempting to get GHC to work natively inside the Open Embedded build environment. Unfortunately, things haven't been going well so far, and any input from enterprising hackers is appreciated: https://mail.haskell.org/pipermail/ghc-devs/2015-April/008774.html
  • Dan Aloni has started a discussion about improving GHC's error messages, spurred by a popular blog post he wrote and posted on Reddit about some Happy/GHC improvements he's made. This is a difficult area (error messages in general are hard) to work on, so thanks to Dan for helping! https://mail.haskell.org/pipermail/ghc-devs/2015-April/008778.html
  • Tamar Christina started a thread about replacing ghc-split, an old Perl script inside GHC, but he wanted to know: what do we do about a regex replacement? Mikhail Glushenkov spoke up about a similar decision the LLVM developers used: to use the OpenBSD regex implementation. https://mail.haskell.org/pipermail/ghc-devs/2015-April/008785.html

Noteworthy commits

Closed tickets

#10222, #10219, #8057, #10226, #10220, #9723, #10230, #10208, #10236, #10213, #10231, #10240, #10243, #10237, #10224, #8811, #10197, #10252, #9958, #10253, #8248, #10207, #10214, #9964, #10194, #10251, #10188, #10257, #10247, #9160, #10259, #9965, #10265, #10264, #10286, #10282, #10290, #10291, #10300, #9929, #8276, #10218, #10148, #10232, #10274, #10275, #10195, and #10233.

by thoughtpolice at April 14, 2015 05:44 PM

Dimitri Sabadie

Generalized swap

Generalized swap

I pushed a pull request to Edward Kmett’s either package to implement two functions some people were complaining they couldn’t find: flipEither :: Either e a -> Either a e and flipEitherT :: EitherT e m a -> EitherT a m e.

When implementing the functions, I wondered: “Hey, flipping stuff is a pretty common operation. Don’t we have an abstraction for that yet?” I haven’t found one.

Meet Swap

I decided to make a little typeclass to see what it’d be.

import Data.Map (Map, fromList, toList)

class Swap s where
  swap :: s a b -> s b a

instance Swap (,) where
  swap (a, b) = (b, a)

instance Swap Either where
  swap = flipEither

-- let’s go wild and foolish (note: as written, this instance needs an
-- Ord constraint on the new key type to compile, so it doesn’t quite
-- fit the class; take it as a thought experiment)
instance Swap Map where
  swap = fromList . fmap swap . toList
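
For instance, in GHCi (my examples, assuming the instances above are in scope):

>>> swap (1, "one")
("one",1)
>>> swap (Right 42 :: Either String Int)
Left 42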

If you think that’s handy, I’ll write a little package with default instances to make it live.

Happy hacking folks!

by Dimitri Sabadie ([email protected]) at April 14, 2015 02:31 PM

Philip Wadler

FP Complete

Announcing: stackage-view

As part of our announcement of open sourcing ide-backend, there was a video demonstrating a tool called “fpview” (that was our internal name for this tool).

[Video: https://www.youtube.com/embed/FI3u8uqZ2Q4]

Like many of our tools, after some internal development, we're now releasing it as open source: stackage-view.

It looks like this:

[Screenshot of stackage-view]

The README documents the features, but here's a quick rundown:

  • Go to definition: You can double-click any identifier to jump straight to the definition.
  • Type information of any expression or pattern binding. Just click any identifier to see its type. Alternatively, you can select an expression to see the type of that. Hitting the <-> expansion button will expand the selection to a parent node.

The first purpose behind the tool was to serve teams of people who may not all be using the same tooling (Haskellers are like cats: they prefer to do their own thing!). Some users may be on Emacs, Vim, Sublime, EclipseFP, Leksah, etc., whereas this read-only tool gives everyone a way to inspect a codebase, with type information and jump-to-definition close at hand.

The second purpose was to serve as a demonstration and acid test for our combination GHCJS and React library ghcjs-react. It makes for a nice simple-but-non-trivial demonstration of the library. There is real-world state handling going on, interacting with third-party APIs (aside from React itself, the Ace editor).

It also serves as a demonstration of using ide-backend programmatically. If you look in the source of stackage-view you'll see it's quite straightforward to use. To build it, you will need the Git version of ide-backend found here. Currently it cannot be uploaded to Hackage due to an issue with the -dynamic flag.

April 14, 2015 11:20 AM

Noam Lewis

The wl-pprint package maintainer…

…is now me.

I’ve talked to the previous maintainer, who is no longer interested in the package. A small change was required to support the new ghc (7.10), so I released a new version. Since the new version only benefits people who are on a new ghc, I also added the source-repository field to the cabal file, despite it breaking cabal versions older than 1.6.

According to hackage, wl-pprint was authored by Daan Leijen. The new github repo is up at https://github.com/sinelaw/wl-pprint.
I’ve taken the liberty of forking based on someone else’s patch for GHC 7.10 – thank you Alex Legg!

Incidentally, on hackage, there are several packages with the substring ‘wl-pprint’ in their names, some are newer and better maintained and have more features than others. This situation is rather confusing. wl-pprint itself is being used as a dependency by at least one package I need (language-ecmascript).

It would be nice if the community converged on a single Wadler-pretty-printer package. (Probably not wl-pprint – maybe ansi-wl-pprint?)


by sinelaw at April 14, 2015 07:52 AM

Mads Lindstrøm

Producer Consumer example using GHCJS and NixOS

For a time I have wanted to try GHCJS, but it was rumored to be hard to install. However, as I recently installed NixOS on my desktop computer, and it has packages for GHCJS, I decided I would give GHCJS a shot. Bas van Dijk had made a mailing list post outlining how he uses GHCJS with NixOS. However, being new to NixOS and the Nix package manager language, I had a hard time understanding Bas van Dijk’s post. But with time and a lot of errors, I got a working setup. In this post I will describe what I did.

If you want to follow this howto, you probably already have NixOS installed. If not, you can find a good guide in the Nix Manual. If you want to install NixOS in a virtual machine, this guide will help you.

Our example will depend on the unstable Nix package repository. Therefore do:

> mkdir ProducerConsumer
> cd ProducerConsumer
> git clone https://github.com/NixOS/nixpkgs.git

Unfortunately, I could not get the code to work with the newest version of Nix unstable. This is not necessarily surprising, as unstable is a moving and not always perfectly working target – hence the name. But here the Nix way comes to the rescue, as one can just roll the Nix Git repository back to when it did work:

> cd nixpkgs
> git reset --hard 1901f3fe77d24c0eef00f73f73c176fae3bcb44e
> cd ..

So with Nix you can easily follow the bleeding edge without getting trapped in a non-working unstable branch. I would not know how to do this easily with my former Linux distribution, Debian.

We will start creating the client:

> mkdir client
> mkdir client/src

We need a client/default.nix file describing how to build this client:

{ pkgs ? (import <nixpkgs> {})
, hp ? pkgs.haskellPackages_ghcjs  # use ghcjs packages instead of ghc packages
}:

hp.cabal.mkDerivation (self: {
  pname = "ProducerConsumerClient";
  version = "1.0.0";
  src = ./.;
  isLibrary = false;
  isExecutable = true;
  buildDepends = [ hp.ghcjsDom hp.random hp.stm ];
  buildTools = [ hp.cabalInstall ];
})

This is a fairly standard default.nix for Haskell projects, except that we are using GHCJS instead of GHC. If you're not familiar with Nix expressions, a good guide can be found here.

We also need a Cabal file client/ProducerConsumerClient.cabal:

name:                ProducerConsumerClient
version:             1.0.0
author:              Mads Lindstrøm
build-type:          Simple
cabal-version:       >=1.10

executable producer-consumer-client
  main-is:             Main.hs
  build-depends:       base >=4.7 && <4.8,
                       ghcjs-dom >= 0.1.1.3,
                       random >= 1.0.1.3,
                       stm >= 2.4.2
  hs-source-dirs:      src
  default-language:    Haskell2010

Finally we need to actually program. We create a small example with a producer and consumer of integers. And a showNumbersVar function, which presents the numbers to the user. We only have one source file client/src/Main.hs:

module Main (
    main
) where

import GHCJS.DOM
import GHCJS.DOM.Document
import GHCJS.DOM.HTMLElement

import System.Random (randomRIO)
import Control.Concurrent.STM (TVar, retry, atomically, modifyTVar, readTVar, newTVar)
import Control.Concurrent (threadDelay, forkIO)

main :: IO ()
main = do
  numbersVar <- atomically $ newTVar [1, 2, 3]
  forkIO (producer numbersVar)
  forkIO (consumer numbersVar)
  showNumbersVar [] numbersVar

showNumbersVar :: [Int] -> TVar [Int] -> IO ()
showNumbersVar lastNumbers numbersVar = do
  currentNumbers <- atomically (do numbers <- readTVar numbersVar
                                   if lastNumbers == numbers then retry else return numbers
                               )
  Just doc <- currentDocument
  Just body   <- documentGetBody doc
  htmlElementSetInnerHTML body ("<h1>" ++ unlines (map (\x -> show x ++ "<br>") currentNumbers) ++ "</h1>")
  showNumbersVar currentNumbers numbersVar

producer :: TVar [Int] -> IO ()
producer numbersVar = do
  sleepMillies 500 2000
  newNumber <- randomRIO (0, 100)
  atomically (modifyTVar numbersVar (newNumber:))
  producer numbersVar

consumer :: TVar [Int] -> IO ()
consumer numbersVar = do
  sleepMillies 500 2000
  atomically (modifyTVar numbersVar (drop 1))
  consumer numbersVar

sleepMillies :: Int -> Int -> IO()
sleepMillies minMs maxMs = randomRIO (minMs*1000, maxMs*1000) >>= threadDelay

This is ordinary Haskell and the code should not have many surprises for the experienced Haskell programmer. It is very nice that we can use Software Transactional Memory (STM) to handle the integer list. STM is likely to be especially helpful in a user interface application, where there is necessarily a lot of concurrency.

We can build the client now:

> nix-build -I . client

If successful you should get a link called result, which points to the ProducerConsumerClient in the Nix store. Try:

> ls -l result/bin/producer-consumer-client.jsexe/

Where you should see some files including javascript and html files.

Next, the server part. The server needs access to the client. We can achieve this by creating a Nix expression pointing to both client and server. Create packages.nix:

{ pkgs ? import <nixpkgs> {} }:

rec {
    client = import ./client { };
    server = import ./server { inherit client; };
}

The server will be a simple Snap application, which just serves the JavaScript files created by ProducerConsumerClient.

We need a server directory:

> mkdir server
> mkdir server/src

And server/default.nix:

{ pkgs ? (import <nixpkgs> {})
, hp ? pkgs.haskellPackages_ghc784
, client
}:

hp.cabal.mkDerivation (self: {
  pname = "ProducerConsumerServer";
  version = "1.0.0";
  src = ./.;
  enableSplitObjs = false;
  buildTools = [ hp.cabalInstall ];
  isExecutable = true;
  isLibrary = false;
  buildDepends = [
      hp.MonadCatchIOTransformers hp.mtl hp.snapCore hp.snapServer hp.split hp.systemFilepath
      client
  ];
  extraLibs = [ ];

  preConfigure = ''
    rm -rf dist
  '';
  
  postInstall = ''
    # This is probably not completely kosher, but it works.
    cp -r $client/bin/producer-consumer-client.jsexe $out/javascript
  '';

  inherit client;
  })

And server/ProducerConsumerServer.cabal:

Name:                ProducerConsumerServer
Version:             1.0
Author:              Author
Category:            Web
Build-type:          Simple
Cabal-version:       >=1.2

Executable producer-consumer-server
  hs-source-dirs: src
  main-is: Main.hs

  Build-depends:
    base                      >= 4     && < 5,
    bytestring                >= 0.9.1 && < 0.11,
    MonadCatchIO-transformers >= 0.2.1 && < 0.4,
    mtl                       >= 2     && < 3,
    snap-core                 >= 0.9   && < 0.10,
    snap-server               >= 0.9   && < 0.10,
    split                     >= 0.2.2,
    system-filepath           >= 0.4.13,
    filepath                  >= 1.3.0.2

  ghc-options: -threaded -Wall -fwarn-tabs -funbox-strict-fields -O2
               -fno-warn-unused-do-bind

And server/src/Main.hs:

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Prelude hiding (head, id, div)
import qualified Prelude

import Snap.Core (Snap, dir, modifyResponse, addHeader)
import Snap.Util.FileServe (serveDirectory)
import Snap.Http.Server (quickHttpServe)

import System.Environment (getEnvironment, getEnv, getExecutablePath)
import System.FilePath
import Data.List.Split (splitOn)
import Data.List (isInfixOf)

main :: IO ()
main = do
   exePath <- getExecutablePath
   let baseDir = takeDirectory exePath ++ "/../javascript/"
   quickHttpServe $ site baseDir

getClientDir :: IO String
getClientDir = do
   getEnvironment >>= mapM_ print
   nativeBuildInputs <- getEnv "propagatedNativeBuildInputs"
   return $ Prelude.head $ filter (isInfixOf "my-test-app") $ splitOn " " nativeBuildInputs

site :: String -> Snap ()
site clientDir =
   do Snap.Core.dir "client" (serveDirectory clientDir)
      let header key value = modifyResponse (addHeader key value)
      header "Cache-Control" "no-cache, no-store, must-revalidate"
      header "Pragma" "no-cache"
      header "Expires" "0"

Now we can compile the client, using packages.nix, and the server:

> nix-build -I . packages.nix -A client
> nix-build -I . packages.nix -A server

Now it is time to run the application:

> result/bin/producer-consumer-server

and point your browser to http://localhost:8000/client. You should see the numbers one, two, and three. After about a second you should see the number list changing, as the producer and consumer change it.


by Mads Lindstrøm at April 14, 2015 07:16 AM

April 08, 2015

FP Complete

Announcing: monad-unlift

I've released the first version of monad-unlift. From its README:

Typeclasses for unlifting monad transformers and stacks, and concrete implementations of common transformers implementing these typeclasses.

This package is a companion to monad-control, providing simplified functions for a common subset of transformers and functionality. I've copied in the content of the README below to give some examples and explanations.

I'd like to thank Erik Hesselink and Hiromi Ishii for providing some brilliant solutions to make this package more useful, based on the constraints package.

Synopsis

import Control.Concurrent.Async
import Control.Monad.Trans.Unlift
import Control.Monad.Trans.RWS.Ref
import Control.Monad.IO.Class
import Data.Mutable

-- Some arbitrary data type for the MonadReader
data SomeEnv = SomeEnv Int

myFunc :: RWSRefT
    -- The WriterT piece is contained by an IORef
    IORef
    -- For efficiency, we store the state in a primitive
    -- reference
    (PRef RealWorld)
    SomeEnv   -- Reader
    [String]  -- Writer
    Int       -- State
    IO
    (String, String)
myFunc = do
    -- Get the unlift function. Due to weaknesses in ImpredicativeTypes, we
    -- need to use a newtype wrapper. You can also use askRunBase.
    --
    -- If you want to just unwrap one transformer layer, use
    -- askUnlift/askRun/Unlift.
    UnliftBase run <- askUnliftBase

    -- Note that we can use unlift to turn our transformer actions into IO
    -- actions. Unlike the standard RWST, actions from separate threads are
    -- both retained due to mutable references.
    --
    -- In real life: you shouldn't rely on this working, as RWST is not thread
    -- safe. This example is provided as a good demonstration of the type level
    -- functionality.
    liftIO $ concurrently (run foo) (run bar)
  where
    foo = do
        tell ["starting foo"]
        modify (+ 1)
        tell ["leaving foo"]
        return "foo is done"
    bar = do
        tell ["starting bar"]
        SomeEnv e <- ask
        modify (+ e)
        tell ["leaving bar"]
        return "bar is done"

main :: IO ()
main = do
    ((w, x), y, z) <- runRWSRefT myFunc (SomeEnv 5) 6
    print w -- foo is done
    print x -- bar is done
    print y -- 12 = 6 + 5 + 1
    print z -- starting and leaving statements, order ambiguous

Overview

A common pattern is to have some kind of a monad transformer, and want to pass an action into a function that requires actions in a base monad. That sounds a bit abstract, so let's give a concrete example:

-- From async
concurrently :: IO a -> IO b -> IO (a, b)

func1 :: ReaderT Foo IO String
func2 :: ReaderT Foo IO Double

doBoth :: ReaderT Foo IO (String, Double)
doBoth = _

Doing this manually is possible, but a bit tedious:

doBoth :: ReaderT Foo IO (String, Double)
doBoth = ReaderT $ \foo -> concurrently
    (runReaderT func1 foo)
    (runReaderT func2 foo)

This also doesn't generalize at all; you'll be stuck writing concurrently variants for every monad transformer stack. Fortunately, the monad-control package generalizes this to a large number of transformer stacks. Let's implement our generalized concurrently:

concurrentlyG :: MonadBaseControl IO m
              => m a -> m b -> m (StM m a, StM m b)
concurrentlyG f g = liftBaseWith $ \run ->
    concurrently (run f) (run g)

Notice how, in the signature for concurrentlyG, we no longer return (a, b), but (StM m a, StM m b). This is because there may be additional monadic context for each thread of execution, and we have no way of merging these together in general. Some examples of context are:

  • With WriterT, it's the values that you called tell on
  • With EitherT, the returned value may not exist at all

In addition to this difficulty, many people find the types in monad-control difficult to navigate, due to their extreme generality (which is in fact the power of that package!).

There is a subset of these transformer stacks that are in fact monad morphisms. Simply stated, these are transformer stacks that are isomorphic to ReaderT. For these monads, there is no context in the returned value. Therefore, there's no need to combine returned states or deal with possibly missing values.

This concept is represented by the monad-unlift package, which provides a pair of typeclasses for these kinds of transformer stacks. Before we dive in, let's see how we solve our concurrentlyG problem with it:

concurrentlyG :: MonadBaseUnlift IO m
              => m a -> m b -> m (a, b)
concurrentlyG f g = do
    UnliftBase run <- askUnliftBase
    liftBase $ concurrently (run f) (run g)

Notice how we get (a, b) in the return type as desired. There's no need to unwrap values or deal with context.

MonadTransUnlift

MonadTransUnlift is a class for any monad transformer which is isomorphic to ReaderT, in the sense that the environment can be captured and applied later. Some interesting cases in this space are:

  • IdentityT and things isomorphic to it; in this case, you can think of the environment as being ()
  • Transformers which contain a mutable reference in their environment. This allows them to behave like stateful transformers (e.g., StateT or WriterT), but still obey the monad morphism laws. (See below for more details.)

Due to weaknesses in GHC's ImpredicativeTypes, we have a helper datatype to allow for getting polymorphic unlift functions, appropriately named Unlift. For many common cases, you can get away with using askRun instead, e.g.:

bar :: ReaderT Foo IO ()

baz :: ReaderT Foo IO ()
baz = do
    run <- askRun
    liftIO $ void $ forkIO $ run bar

Using Unlift, this would instead be:

    Unlift run <- askUnlift
    liftIO $ void $ forkIO $ run bar

or equivalently:

    u <- askUnlift
    liftIO $ void $ forkIO $ unlift u bar

MonadBaseUnlift

MonadBaseUnlift extends this concept to entire transformer stacks. This is typically the typeclass that people end up using. You can think of these two typeclasses in exactly the same way as MonadTrans and MonadIO, or more precisely MonadTrans and MonadBase.

For the same ImpredicativeTypes reason, there's a helper type UnliftBase. Everything we just discussed should transfer directly to MonadBaseUnlift, so learning something new isn't necessary. For example, you can rewrite the last snippet as:

    u <- askUnliftBase
    liftIO $ void $ forkIO $ unliftBase u bar

Reference transformers

When playing transformer stack games with a transformer like StateT, it's common to accidentally discard state modifications. Additionally, in the case of runtime exceptions, it's usually impossible to retain the state. (Similar statements apply to WriterT and RWST, both in strict and lazy variants.)

Another approach is to use a ReaderT and hold onto a mutable reference. This is problematic since there's no built in support for operations like get, put, or tell. What we want is to have a MonadState and/or MonadWriter instance.
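
Here's a rough sketch of that idea (mine, not the package's actual implementation): keep the state in an IORef carried by a ReaderT, and implement get/put against it.

import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Trans.Reader (ReaderT, ask)
import Data.IORef (IORef, readIORef, writeIORef)

-- The transformer itself is just a ReaderT, so it is a monad morphism
-- and unlifting is trivial; state written before a runtime exception
-- survives, since it lives in the IORef rather than in the monad.
type StateRefT s m = ReaderT (IORef s) m

getRef :: MonadIO m => StateRefT s m s
getRef = ask >>= liftIO . readIORef

putRef :: MonadIO m => s -> StateRefT s m ()
putRef s = ask >>= \ref -> liftIO (writeIORef ref s)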

To address this case, this package includes variants of those transformers that use mutable references. These references are generic, via the mutable-containers package, which allows you to have highly efficient references like PRef instead of always using boxed references like IORef.

Note that, for generality, the reference transformers take type parameters indicating which mutable reference type to use. Some examples you may use are:

  • IORef for boxed references in IO
  • STRef s for boxed references in ST
  • PRef RealWorld for an unboxed reference in IO

See the synopsis for a complete example.

conduit

The transPipe function in conduit has caused confusion in the past due to its requirement that provided functions obey monad morphism laws. This package makes a good companion to conduit to simplify that function's usage.

Other notable instances

Both the HandlerT transformer from yesod-core and LoggingT/NoLoggingT are valid monad morphisms. HandlerT is in fact my first example of using the "environment holding a mutable reference" technique to overcome exceptions destroying state.

{-# LANGUAGE FlexibleContexts  #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TemplateHaskell   #-}
import Control.Concurrent.Async
import Control.Monad.IO.Class
import Control.Monad.Logger
import Control.Monad.Trans.Unlift

main :: IO ()
main = runStdoutLoggingT foo

foo :: (MonadLogger m, MonadBaseUnlift IO m, MonadIO m) => m ()
foo = do
    run <- askRunBase
    a <- liftIO $ async $ run $ $logDebug "Hello World!"
    liftIO $ wait a

April 08, 2015 11:00 AM

Dimitri Sabadie

Volumetric Light Shafts

Volumetric light scattering

On the last weekend, I was at the Revision demoparty (Easterparty). It’s a meeting of friends and great people doing graphics, music, code and that kind of fun and pretty stuff. I didn’t release anything, but I’ve been working on a release for a while now, and I was lacking something in my engine: light shafts.

I know a few ways to do that. They almost all fall in one of the two following classes:

  1. screen-space-based;
  2. raymarching-based.

Both techniques produce interesting images. However, the screen-space-based method produces bad results when you don’t look directly at the light – it may even produce nothing if the light is off screen – and the raymarching-based one is a stepped process that might generate artifacts and can be slow.

The idea I had is very simple. I haven’t tested it yet, but it’s planned for very soon. I’ll post images with benchmarks as soon as I have something on screen. I’m not sure it’s an unknown way to do it. I just haven’t found anything describing that yet. If you have, please leave a link in the comments below! :)

Volume extraction

The idea is the following. You need a depthmap. If you’re used to shadow mapping you already have a depthmap for your light. In order to simplify this, we’ll use the depthmap used for shadow mapping, but I guess it’s totally okay to use another depthmap – we’ll see that it could be even handier.

Each texel in that depthmap holds the distance – in a specific coordinate system – from the corresponding point in world space to the position of the light. If you have the projection and the view matrices of the light, it’s easy to deproject the depthmap. What would we get if we deprojected all the depthmap texels into world space? We’d get exactly the lit surfaces.

For a spot light, you can imagine the deprojected version of the depthmap as a cone of light. The “disk” of the cone deforms to take the shape of the lit environment. That’s the first part of the algorithm.

We now have a point cloud. What happens if, for each deprojected texel – i.e. point – we draw a line to the position of the light? We get an actual field of lines representing… photon paths! How amazing is that?! Furthermore, because of the depthmap sampling, if you look at the light and an object sits between you and the light, the photon paths won’t go through the obstructing object! Like in the following image:

Of course, because we use raw lines, the render might be a bit blocky at first. If you know the laser trick [1] – i.e. quad-based lasers – you can apply it to our lines as well, in order to get better results. The first improvement is to disable the depth test and enable additive blending.
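
In Haskell, with the OpenGL package (an assumption on my side; the post doesn’t name specific bindings), that setup might look like:

import Graphics.Rendering.OpenGL

-- Disable the depth test and switch to additive blending before
-- drawing the shaft lines.
setupShaftBlending :: IO ()
setupShaftBlending = do
  depthFunc $= Nothing
  blend     $= Enabled
  blendFunc $= (One, One)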

Algorithm

In the first place, you need to generate the depthmap of the light. Then, you need to extract the volumetric shaft, and you’ll need a vertex buffer for that. Allocate w*h elements, where w and h are the depthmap’s width and height. Yeah, a point per texel.

Create a shader and make a render with glDrawArrays(GL_POINTS, 0, w*h) with an attributeless vertex array object. Don’t forget to bind the depthmap for use in the shader.
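
With the same assumed OpenGL bindings as above, the attributeless draw could be sketched as:

import Graphics.Rendering.OpenGL

-- vao is an empty (attributeless) vertex array object created
-- beforehand; w and h are the depthmap's dimensions.
drawShaftPoints :: VertexArrayObject -> GLsizei -> GLsizei -> IO ()
drawShaftPoints vao w h = do
  bindVertexArrayObject $= Just vao
  drawArrays Points 0 (w * h)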

In your vertex shader, use gl_VertexID to sample from the depthmap. Transform the resulting texel into world space. You can use something like the following deprojection formula:

// Assumptions: w and h are the depthmap's dimensions, passed in as
// uniforms; iProjView is the inverse of the light's projection * view
// matrix. The original snippet used gl_FragCoord and an undefined vv;
// since this runs in the vertex shader, we derive both from gl_VertexID.
vec3 deproject() {
  ivec2 texel = ivec2(gl_VertexID % w, gl_VertexID / w);
  float depth = 2. * texelFetch(depthmap, texel, 0).r - 1.;
  vec2 vv = 2. * (vec2(texel) + .5) / vec2(w, h) - 1.; // texel center in NDC
  vec4 position = iProjView * vec4(vv, depth, 1.);
  position.xyz /= position.w;
  return position.xyz;
}

Pass that to the next stage, the geometry shader. There, build whatever kind of new primitive you want. In the first place, I’ll go for a simple line connected to the light’s position. In a further implementation, I’ll go for laser-like base shapes, like star-crossed quads.

In the fragment shader, put whatever color you want the shaft to have. You could use interpolation to reduce lighting wherever you want to create nice effects.

Don’t forget to use additive blending, as we do for lasers.

Considerations

I see two major problems. The first one is the overly bright look the shaft will have if you don’t blend correctly. Play with alpha: reduce it when the fragment is near the light’s position and increase it as you move away, because you’ll blend way more photon paths near the light’s position than far from it.

The second issue is the resolution of the extracted volume. For a 512x512 depthmap, you’ll get around 262k points, and thus 262k lines – over half a million vertices. That’s a lot for such an object. And that’s only for a simple spot light. An omnidirectional light would require six times more lines. What happens if we don’t use lines but star-crossed quads, and we want several shafts? You see the problem.

A solution could be to sample from a higher mipmap level, so that you don’t use the full resolution of the shadow map. That would result in less visually appealing shafts, but I’m pretty sure they’d still look good. You could also add a blur shader to smooth out the whole thing.

Conclusion

I’ll try to implement this as soon as possible, because I think my idea is pretty interesting compared to raymarching, which is expensive, and better than the screen-space method, because the shaft will still be visible if the light goes off screen.


  1. I’ll write an article about it if you don’t know it – leave a comment to ask

by Dimitri Sabadie ([email protected]) at April 08, 2015 10:01 AM

April 07, 2015

Noam Lewis

Type guards are not inferable

Type guards, or typecase expressions, allow refining a binding’s type and are commonly used in dynamic languages such as JavaScript. Consider this example (adapted from Simple Unification-based Type Inference for GADTs):

f x y = typecase x of
            Int -> x + y
            other -> 0

The function can have either of these two types (among others):

a -> Int -> Int

And

a -> a -> Int

Neither is more general, so there is no principal type.

Unless the user tells us something (in a type annotation) we have no way of deciding which type should be inferred.

So for now, Infernu will probably not have support for type guards like Flow and TypeScript do (both of which don’t have full type inference). Another possibility is to allow type guards in trivial cases only.


by sinelaw at April 07, 2015 10:00 PM

Gabriel Gonzalez

Mathematical APIs

You can use mathematical APIs to smooth the onboarding process for new users of your library by reusing their existing intuition for algebraic operations.

Let's use an example from my turtle library for shell scripting, which uses the Shell Text type to represent a stream of lines. You can forward such a stream to the console using stdout:

stdout :: Shell Text -> IO ()

Thanks to Haskell's OverloadedStrings extension, we can use a string literal to encode a 1-element stream that emits just that one string literal:

>>> :set -XOverloadedStrings
>>> import Turtle
>>> stdout "Hello, world!"
Hello, world!
>>>

Now let's drop a thin mathematical API on top of these streams so that we can treat streams like ordinary numbers. These examples will only work with turtle-1.1.0 and later.

If we add streams, we concatenate their elements in order. Here we concatenate three 1-element streams to generate a combined 3-element stream:

>>> stdout ("Line 1" + "Line 2" + "Line 3")
Line 1
Line 2
Line 3
>>>

0 will be the empty stream:

>>> stdout 0
>>> -- This prints nothing

1 is a stream that emits a single empty string:

>>> stdout 1

>>>

So what is 2? Well, it stands to reason that 2 must be 1 + 1:

>>> stdout 2


>>> stdout (1 + 1) -- Same thing


>>>

Similarly, 3 must be 1 + 1 + 1:

>>> stdout 3



>>>

... and so forth.

If we multiply two 1-element streams, we generate a new 1-element stream that concatenates their strings.

>>> stdout ("123" * "456")
123456
>>>
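
To make the algebra concrete, here is a rough list-based model of these operations (my sketch; turtle's actual instance is defined on its Shell type, not on lists):

{-# LANGUAGE OverloadedStrings #-}
import Data.Monoid ((<>))
import Data.String (IsString (..))
import Data.Text (Text)

newtype Stream = Stream [Text] deriving Show

instance IsString Stream where
  fromString s = Stream [fromString s]  -- a 1-element stream

instance Num Stream where
  Stream xs + Stream ys = Stream (xs ++ ys)                   -- concatenation
  Stream xs * Stream ys = Stream [x <> y | x <- xs, y <- ys]  -- all pairings
  fromInteger n = Stream (replicate (fromInteger n) "")       -- n empty lines
  abs    = id
  signum = id
  negate = error "streams have no negation"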

Now what do you think will happen if I do this?

>>> let x = "Line 1" + "Line 2"  -- A stream with two elements
>>> let y = "Line A" + "Line B" -- A stream with two elements
>>> stdout (x * ", " * y) -- What will this print?

Well, we can reason about this using the same rules of addition and multiplication we learned in school.

First we substitute x and y with their definitions:

x * ", " * y

= ("Line 1" + "Line 2") * ", " * ("Line A" + "Line B")

Then we expand out the multiplication to get four terms:

= "Line 1" * ", " * "Line A"
+ "Line 1" * ", " * "Line B"
+ "Line 2" * ", " * "Line A"
+ "Line 2" * ", " * "Line B"

Then we use the rule that multiplying 1-element streams just concatenates their strings:

= "Line 1, Line A"
+ "Line 1, Line B"
+ "Line 2, Line A"
+ "Line 2, Line B"

... and there's our final result! We expect our stream to have four elements, one for each pairing of an element from the left stream with an element from the right stream.

Let's verify that:

>>> stdout (x * ", " * y)
Line 1, Line A
Line 1, Line B
Line 2, Line A
Line 2, Line B

We can even leverage our algebraic intuition when working with impure streams:

>>> stdout (stdin * ("!" + "?"))
Test<Enter>
Test!
Test?
ABC<Enter>
ABC!
ABC?
...

This is why I love mathematical APIs: they are concise, general, and expressive.

People judge programming languages using mindshare, but the mindshare of mathematics dwarfs all programming languages combined. By specifying programs mathematically we can unlock a large and latent pool of programming talent.

by Gabriel Gonzalez ([email protected]) at April 07, 2015 03:17 AM

April 06, 2015

Robert Harper

Old Neglected Theorems Are Still Theorems

I have very recently been thinking about the question of partiality vs totality in programming languages, a perennial topic in PL’s that every generation thinks it discovers for itself.  And this got me to remembering an old theorem that, it seems, hardly anyone knows ever existed in the first place.  What I like about the theorem is that it says something specific and technically accurate about the sizes of programs in total languages compared to those in partial languages.  The theorem provides some context for discussion that does not just amount to opinion or attitude (and attitude always seems to abound when this topic arises).

The advantage of a total programming language such as Goedel’s T is that it ensures, by type checking, that every program terminates, and that every function is total. There is simply no way to have a well-typed program that goes into an infinite loop. This may seem appealing, until one considers that the upper bound on the time to termination can be quite large, so large that some terminating programs might just as well diverge as far as we humans are concerned. But never mind that, let us grant that it is a virtue of  T that it precludes divergence.

Why, then, bother with a language such as PCF that does not rule out divergence? After all, infinite loops are invariably bugs, so why not rule them out by type checking? (Don’t be fooled by glib arguments about useful programs, such as operating systems, that “run forever”. After all, infinite streams are programmable in the language M of inductive and coinductive types in which all functions terminate. Computing infinitely does not mean running forever, it just means “for as long as one wishes, without bound.”)  The notion does seem appealing until one actually tries to write a program in a language such as T.

Consider computing the greatest common divisor (GCD) of two natural numbers. This can be easily programmed in PCF by solving the following equations using general recursion:

\begin{array}{rcl}    \textit{gcd}(m,0) & = & m \\    \textit{gcd}(0,m) & = & m \\    \textit{gcd}(m,n) & = & \textit{gcd}(m-n,n) \quad \text{if}\ m>n \\    \textit{gcd}(m,n) & = & \textit{gcd}(m,n-m) \quad \text{if}\ m<n    \end{array}

The type of \textit{gcd} defined in this manner has partial function type (\mathbb{N}\times \mathbb{N})\rightharpoonup \mathbb{N}, which suggests that it may not terminate for some inputs. But we may prove by induction on the sum of the pair of arguments that it is, in fact, a total function.
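
For concreteness, here is a direct Haskell transcription (my sketch; Haskell, like PCF, is partial in the relevant sense, so the compiler accepts the definition without any termination evidence):

-- Assumes non-negative inputs.
gcd' :: Integer -> Integer -> Integer
gcd' m 0 = m
gcd' 0 n = n
gcd' m n
  | m > n     = gcd' (m - n) n
  | otherwise = gcd' m (n - m)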

Now consider programming this function in T. It is, in fact, programmable using only primitive recursion, but the code to do it is rather painful (try it!). One way to see the problem is that in T the only form of looping is one that reduces a natural number by one on each recursive call; it is not (directly) possible to make a recursive call on a smaller number other than the immediate predecessor. In fact one may code up more general patterns of terminating recursion using only primitive recursion as a primitive, but if you examine the details, you will see that doing so comes at a significant price in performance and program complexity. Program complexity can be mitigated by building libraries that codify standard patterns of reasoning whose cost of development should be amortized over all programs, not just one in particular. But there is still the problem of performance. Indeed, the encoding of more general forms of recursion into primitive recursion means that, deep within the encoding, there must be a “timer” that “goes down by ones” to ensure that the program terminates. The result will be that programs written with such libraries will not be nearly as fast as they ought to be.  (It is actually quite fun to derive “course of values” recursion from primitive recursion, and then to observe with horror what is actually going on, computationally, when using this derived notion.)
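
To see the “timer” concretely, here is a sketch (mine, in Haskell rather than T) of the same function forced into a primitive-recursion-like shape, with an explicit fuel argument that goes down by ones:

-- m + n steps always suffice, since each recursive call strictly
-- decreases m + n; the fuel-exhausted clause is never reached for
-- non-negative inputs, but the encoding forces us to write it anyway.
gcdT :: Integer -> Integer -> Integer
gcdT m n = go (m + n) m n
  where
    go _ m' 0  = m'
    go _ 0  n' = n'
    go 0 m' _  = m'
    go fuel m' n'
      | m' > n'   = go (fuel - 1) (m' - n') n'
      | otherwise = go (fuel - 1) m' (n' - m')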

But, one may argue, T is simply not a serious language. A more serious total programming language would admit sophisticated patterns of control without performance penalty. Indeed, one could easily envision representing the natural numbers in binary, rather than unary, and allowing recursive calls to be made by halving to achieve logarithmic complexity. This is surely possible, as are numerous other such techniques. Could we not then have a practical language that rules out divergence?

We can, but at a cost.  One limitation of total programming languages is that they are not universal: you cannot write an interpreter for T within T (see Chapter 9 of PFPL for a proof).  More importantly, this limitation extends to any total language whatever.  If this limitation does not seem important, then consider the Blum Size Theorem (BST) (from 1967), which places a very different limitation on total languages.  Fix any total language, L, that permits writing functions on the natural numbers. Pick any blowup factor, say 2^{2^n}, or however expansive you wish to be.  The BST states that there is a total function on the natural numbers that is programmable in L, but whose shortest program in L is larger by the given blowup factor than its shortest program in PCF!

The underlying idea of the proof is that in a total language the proof of termination of a program must be baked into the code itself, whereas in a partial language the termination proof is an external verification condition left to the programmer. Roughly speaking, there are, and always will be, programs whose termination proof is rather complicated to express, if you fix in advance the means by which it may be proved total. (In T it was primitive recursion, but one can be more ambitious, yet still get caught by the BST.)  But if you leave room for ingenuity, then programs can be short, precisely because they do not have to embed the proof of their termination in their own running code.

There are ways around the BST, of course, and I am not saying otherwise.  For example, the BST merely guarantees the existence of a bad case, so one can always argue that such a case will never arise in practice.  Could be, but I did mention the GCD in T problem for a reason: there are natural problems that are difficult to express in a language such as T.  By fixing the possible termination arguments in advance, one is tempting fate, for there are many problems, such as the Collatz Conjecture, for which the termination proof of a very simple piece of code has been an open problem for decades, and has resisted at least some serious attempts on it.  One could argue that such a function is of no practical use.  I agree, but I point out the example not to say that it is useful, but to say that it is likely that its eventual termination proof will be quite nasty, and that this will have to be reflected in the program itself if you are limited to a T-like language (rendering it, once again, useless).  For another example, there is no inherent reason why termination need be assured by means similar to that used in T.  We got around this issue in NuPRL by separating the code from the proof, using a type theory based on a partial programming language, not a total one.  The proof of termination is still required for typing in the core theory (but not in the theory with “bar types” for embracing partiality).  But it’s not baked into the code itself, affecting its run-time; it is “off to the side”, large though it may be.

Updates: word smithing, fixed bad link, corrected gcd, removed erroneous parenthetical reference to Coq, fixed LaTeX problems.


Filed under: Programming, Research, Teaching Tagged: functional programming, primitive recursion, programming languages, type theory, verification

by Robert Harper at April 06, 2015 11:11 PM

The Power of Negative Thinking

Exception tracking is a well-known tar baby of type system design.  After all, if expressions can have two sorts of result, why shouldn’t the type say something about them both?  Languages such as CLU, FX, and Java, to name three, provide “throws” or “raises” clauses to the types of procedures that specify an upper bound on the exceptions that can occur when they are called.  It all seems natural and easy, but somehow the idea never really works very well.  One culprit is any form of higher-order programming, which is inherent in object-oriented and functional languages alike.  To handle the indirection requires more complex concepts, such as effect polymorphism, to make things work reasonably well.  Or untracked exceptions are used to avoid the need to track them.  Somehow such an appealing idea seems rather stickier to realize than one might expect.  But why?

A piece of the puzzle was put into place by Xavier Leroy and François Pessaux in their paper on tracking uncaught exceptions. Their idea was to use type-based methods to track uncaught exceptions, but to move the clever typing techniques required out of the programming language itself and into a separate analysis tool.  They make effective use of the powerful concept of row polymorphism introduced by Didier Rémy for typing records and variants in various dialects of Caml.  Moving exception tracking out of the language and into a verification tool is the decisive move, because it liberates the analyzer from any constraints that may be essential at the language level.

But why track uncaught exceptions?  That is, why track uncaught exceptions, rather than caught exceptions?  From a purely methodological viewpoint it seems more important to know that a certain code fragment cannot raise certain exceptions (such as the \texttt{match} exception in ML, which arises when a value matches no pattern in a case analysis).  In a closed world in which all of the possible exceptions are known, tracking positive information about which exceptions might be raised amounts to the same thing as tracking which exceptions cannot be raised: simply subtract the raised set from the entire set.  As long as the raised set is an upper bound on the exceptions that might be raised, the difference is a lower bound on the set of exceptions that cannot be raised.  Such conservative approximations are necessary because a non-trivial behavioral property of a program is always undecidable, and hence requires proof.  In practice this means that stronger invariants must be maintained than just the exception information so that one may prove, for example, that the values passed to a pattern match are limited to those that actually do satisfy some clause of an inexhaustive match.

How realistic is the closed world assumption?  For it to hold seems to require a whole-program analysis, and is therefore non-modular, a risky premise in today’s world.  Even on a whole-program basis exceptions must be static in the sense that, even if they are scoped, they may in principle be declared globally, after suitable renaming to avoid collisions.  The global declarations collectively determine the whole “world” from which positive exception tracking information may be subtracted to obtain negative exception information.  But in languages that admit multiple instantiations of modules, such as ML functors, static exceptions are not sufficient (each instance should introduce a distinct exception).  Instead, static exceptions must be replaced by dynamic exceptions that are allocated at initialization time, or even run-time, to ensure that no collisions can occur among the instances.  At that point we have an open world of exceptions, one in which there are exceptions that may be raised, but which cannot be named in any form of type that seeks to provide an upper bound on the possible uncaught exceptions that may arise.

For example consider the ML expression

let
  exception X
in
  raise X
end

If one were to use positive exception tracking, what would one say about the expression as a whole?  It can, in fact it does, raise the exception \texttt{X}, yet this fact is unspeakable outside of the scope of the declaration.   If a tracker does not account for this fact, it is unsound in the sense that the uncaught exceptions no longer provide an upper bound on what may be raised.  One maneuver, used in Java, for example, is to admit a class of untracked exceptions about which no static information is maintained.  This is useful, because it allows one to track those exceptions that can be tracked (by the Java type system) and to not track those that cannot.

In an open world (which includes Java, because exceptions are a form of object) positive exception tracking becomes infeasible because there is no way to name the exceptions that might be tracked.  In the above example the exception \textsf{X} is actually a bound variable bound to a reference to an exception constructor.  The name of the bound variable ought not matter, so it is not even clear what the exception raised should be called.  (It is amusing to see the messages generated by various ML compilers when reporting uncaught exceptions.  The information they provide is helpful, certainly, but is usually, strictly speaking, meaningless, involving identifiers that are not in scope.)

The methodological considerations mentioned earlier suggest a way around this difficulty.  Rather than attempt to track those exceptions that might be raised, instead track the exceptions that cannot be raised.  In the above example there is nothing to say about \texttt{X} not being raised, because it is being raised, so we’re off the hook there.  The “dual” example

let
  exception X
in
  2+2
end

illustrates the power of negative thinking.  The body of the \textsf{let} does not raise the exception bound to \textsf{X}, and this may be recorded in a type that makes sense within the scope of \textsf{X}.  The crucial point is that when exiting its scope it is sound to drop mention of this information in a type for the entire expression.  Information is lost, but the analysis is sound.  In contrast there is no way to drop positive information without losing soundness, as the first example shows.

One way to think about the situation is in terms of type refinements, which express properties of the behavior of expressions of a type.  To see this most clearly it is useful to separate the exception mechanism into two parts, the control part and the data part.  The control aspect is essentially just a formulation of error-passing style, in which every expression has either a normal return of a specified type, or an exceptional return of the type associated to all exceptions.  (Nick Benton and Andrew Kennedy nicely formulated this view of exceptions as an extension of the concept of a monad.)
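
A small sketch of that error-passing view (mine, not from the post): each computation either returns normally or returns a classified value, and sequencing short-circuits on the exceptional case.

-- cls stands in for the type of classified values (clsfd in PFPL).
data Exc cls a = Normal a | Exceptional cls

instance Functor (Exc cls) where
  fmap f (Normal a)      = Normal (f a)
  fmap _ (Exceptional e) = Exceptional e

instance Applicative (Exc cls) where
  pure = Normal
  Normal f      <*> x = fmap f x
  Exceptional e <*> _ = Exceptional e

instance Monad (Exc cls) where
  Normal a      >>= k = k a
  Exceptional e >>= _ = Exceptional e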

The data aspect is, for dynamic exceptions, the type of dynamically classified values, which is written \textsf{clsfd} in PFPL.  Think of it as an open-ended sum in which one can dynamically generate new classifiers (aka summands, injections, constructors, exceptions, channels, …) that carry a value of a specified type.  According to this view the exception \textsf{X} is bound to a dynamically-generated classifier carrying a value of unit type.  (Classifier allocation is a storage effect, so that the data aspect necessarily involves effects, whereas the control aspect may, and, for reasons of parallelism, be taken as pure.)  Exception constructors are used to make values of type \textsf{clsfd}, which are passed to handlers that can deconstruct those values by pattern matching.

Type refinements come into play as a means of tracking the class of a classified value.  For the purposes of exception tracking, the crucial refinements of the type \textsf{clsfd} are the positive refinement, \textsf{a}\cdot, and the negative refinement, \overline{\textsf{a}\cdot}, which specify that a classified value is, or is not, of class \textsf{a}.  Positive exception tracking reduces to maintaining invariants expressed by a disjunction of positive refinements; negative exception tracking reduces to maintaining invariants expressed by a conjunction of negative refinements.  Revisiting the logic of exception tracking, the key is that the entailment

\overline{\mathsf{a}_1\cdot}\wedge \cdots\wedge \overline{\mathsf{a}_n\cdot} \leq \overline{\mathsf{a}_1\cdot} \wedge \cdots \wedge \overline{\mathsf{a}_{n-1}\cdot}

is valid, whereas the “entailment”

\mathsf{a}_1\cdot \vee \cdots \vee \mathsf{a}_n\cdot \leq \mathsf{a}_1\cdot\vee \cdots \vee \mathsf{a}_{n-1}\cdot

is not.  Thus, in the negative setting we may get ourselves out of the scope of an exception by weakening the refinement, an illustration of the power of negative thinking.


Filed under: Programming, Research Tagged: dynamic classification, execeptions, open vs closed world, type refinements

by Robert Harper at April 06, 2015 11:01 PM

Web site sml-family.org Up and Running!

After far too long, and far too many obstacles to be overcome, Dave MacQueen, Lars Bergstrom, and I have finally prepared an open-source site for the entire family of languages derived from Standard ML.  The defining characteristic of Standard ML has always been that it has a rigorous definition, so that it is always clear what is a valid program and how it should behave.  And indeed we have seven different working compilers, all of which are compatible with each other, with the exception of some corner cases arising from known mistakes in the definition.  Moreover, there are several active projects developing new variations on the language, and it would be good to maintain the principle that such extensions be precisely defined.

To this end the sources of the 1990 and 1997 versions of the definition are on the web site, with the permission of MIT Press, as is the type-theoretic definition formulated by Stone and H., which was subsequently used as the basis for a complete machine-checked proof of type safety for the entire language done by Crary, Lee, and H.  It is to be hoped that the errors in the definition (many are known; we provide links to the extensive lists provided by Kahrs and Rossberg in separate investigations) may now be corrected.  Anyone is free to propose an alteration to be merged into the main branch, which is called “SML, The Living Language” and also known as “Successor ML”.  One may think of this as a kind of “third edition” of the definition, but one that is in continual revision by the community.  Computer languages, like natural languages, belong to us all collectively, and we all contribute to their evolution.

Everyone is encouraged to create forks for experimental designs or new languages that enrich, extend, or significantly alter the semantics of the language.  The main branch will be for generally accepted corrections, modifications, and extensions, but it is to be expected that completely separate lines of development will also emerge.

The web site, sml-family.org is up and running, and will be announced in various likely places very soon.

Update: We have heard that some people get a “parked page” error from GoDaddy when accessing sml-family.org.  It appears to be a DNS propagation problem.

Update: The DNS problems have been resolved, and I believe that the web site is stably available now as linked above.

Update: Word smithing for clarity.


Filed under: Research Tagged: functional programming, sml

by Robert Harper at April 06, 2015 10:50 PM

Bellman on “Dynamic Programming”

Everyone who has studied algorithms has wondered “why the hell is Bellman’s memorization technique called dynamic programming?”.  I recently learned the answer from my colleague, Guy Blelloch, who dug up the explanation from Richard Bellman himself:

“I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes.

“An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities” (p. 159).

As with algorithms, so too with dynamic languages?

Update: why is it called “memoization” and not “memorization”?

Update: rewrite of the commentary.


Filed under: Research, Teaching Tagged: dynamic and static typing

by Robert Harper at April 06, 2015 10:41 PM

Eric Kidd

Migrating from Heroku (and Linode) to Docker on AWS

I've long been a huge fan of Heroku. They've made it super easy to deploy and scale web applications without getting bogged down in server administration. Also, their free tier has been very generous, which made Heroku a perfect place to run weekend projects. (And my clients have happily paid plenty of money to Heroku over the years, so nobody's been losing out.)

Heroku's costs and limitations

Lately, the costs of using Heroku for weekend projects have been creeping upwards:

Read more…

April 06, 2015 12:38 AM

April 04, 2015

Douglas M. Auclair (geophf)

March 2015 1HaskellADay Problems and Solutions


March 2015
  • March 31st, 2015: Today's #haskell exercise has us looking for a really big pandigital prime http://lpaste.net/6128183741660528640 ... like: REALLY big. Maybe.
  • March 30th, 2015: A little math-problem to ease us into the week, suggested by @jamestanton http://lpaste.net/845208190432837632 3 consecutive integers that are co-composed
  • March 27th, 2015: Today's #haskell problem is unification of multiple free variables http://lpaste.net/370356889654919168 We find this to be 'CoSimple.' 34 lines (and a new n-to-1 data mapping-type) defining not a 'CoSimple' unifier but now an 'UnSimple' one. http://lpaste.net/8114086012700852224 Ugh!
  • March 26th, 2015: Unification is interesting! http://www.cs.uu.nl/research/techreps/repo/CS-2008/2008-027.pdf (Functional Unification paper) Let's look at the unification-problem for today's #haskell problem http://lpaste.net/1572781401336446976 We define simple unification (of up to one free logic variable) in a module called unification.simple, oddly enough. http://lpaste.net/5064654031335456768
  • March 25th, 2015: In Old Norse, words end with 'R' (not all). For today's #Haskell problem, relations end with 'R' http://lpaste.net/8067966125595426816 Where I learn to speak Old Norse with a slightly better accent: unifying lists with headR and tailR http://lpaste.net/4854317134819360768 with unifyLists
  • March 24th, 2015: For today's #haskell problem we learn that 'kayso' is a word, and we edge a bit more toward pure relational calculus http://lpaste.net/5312260085655797760 A solution to this relational-calculus problem that is defined over Data/Control logic modules: http://lpaste.net/5718036917765799936
  • March 23rd, 2015: WHO KNEW a chance meeting with @webyrd at the POPL06 would lead to today's #haskell problem? http://lpaste.net/7445712320312901632 μBikini I mean: μKanren And the solution implies that monadic-list states are logic programming? http://lpaste.net/8325001449901654016 Perhaps.
  • March 19th, 2015: We now learn signal spreads ... like ... margarine! for today's #haskell problem http://lpaste.net/4389894538622140416
  • March 18th, 2015: For today's #haskell problem we learn that Jupiter's moon Europa is made from Froyo, and custard! Mmmm! http://lpaste.net/127272
  • March 17th, 2015: No quaternions were harmed in today's #haskell π-problem IN SPACE! http://lpaste.net/7134990338598371328
  • March 16th, 2015: In space, no one can hear you scream "@NASA" (nor anything else, for that matter). Today's #haskell problem http://lpaste.net/7238728062383161344
  • March 14th, 2015: Happy π day! A plain-text version of these NASA π-in-space puzzles is available at http://lpaste.net/7533371868384854016
  • March 13th, 2015: Okay, ladies and gentlemen, today, in honor of tomorrow being π day, let's take a #coercive #logic break and π it up! http://lpaste.net/7075392225642807296
  • March 12th, 2015: I wonder if the suitor's surname is Quine http://lpaste.net/5006337360527360000 for today's #haskell problem #coercive #logic
  • March 10th, 2015: For today's #haskell problem, we wonder if Balikaya is Aharmanite or Mazdaysian http://lpaste.net/6972955679380209664 #scheherazade #coercive #logic
  • March 9th, 2015: In which Iskandar is asked Kamar's Children's ages in Scheherazade's #haskell metapuzzle http://lpaste.net/8990755663210610688 #coercive #logic 
  • March 6th, 2015: Champernowne's constant for today's #haskell problem http://lpaste.net/6911025218794291200 Please let me know if you can NOT access this problem: I've marked it private as previous ones are being modified in place. DON'T DO THAT!  In which we show @geophf triumphs with brütish-forcisms for today's #haskell solution http://lpaste.net/2944338052937416704
  • March 5th, 2015: For today's #haskell problem, we ponder why they aren't called LEFT-triangles. Is it a plot? http://lpaste.net/121797 Leftist triangles are subject to the (IO) State (monad) ... geddit? #sigh never mind anyway, solution: http://lpaste.net/122143 Or, put another way: in which we see @geophf can not have a function type that includes -> Int -> ... AND, we really need MapReduce here!
  • March 4th, 2015: Truncatable primes (11 in all) are asking to be solved in today's projecteuler.com #haskell problem http://lpaste.net/121528 This @1HaskellADay problem turned into a _TWO_day solution! Worth it? Why, yes, after redesigning _TWO_ libraries! http://lpaste.net/122503
  • March 3rd, 2015: There's more than a 1-in-a-million chance there are 'sum' double palindromic numbers in today's #haskell problem http://lpaste.net/121470 The solution was not too hard, M. Euler! http://lpaste.net/121471
  • March 2nd, 2015: What-what! abcde * fghij is equal to ... well, something, it appears. http://lpaste.net/121421 and the prob1, prob2 winners are ... http://lpaste.net/121466 A solution to today's #haskell problem.

by geophf ([email protected]) at April 04, 2015 07:42 PM

Philip Wadler

Toolkits for the Mind

An article on programming languages aimed at a general readership, from MIT Technology Review. Spotted by De Lesley Hutchins.
When the Japanese computer scientist Yukihiro Matsumoto decided to create Ruby, a programming language that has helped build Twitter, Hulu, and much of the modern Web, he was chasing an idea from a 1966 science fiction novel called Babel-17 by Samuel R. Delany. At the book’s heart is an invented language of the same name that upgrades the minds of all those who speak it. “Babel-17 is such an exact analytical language, it almost assures you technical mastery of any situation you look at,” the protagonist says at one point. With Ruby, Matsumoto wanted the same thing: to reprogram and improve the way programmers think. ...

When I meet Yaron Minsky, Jane Street’s head of technology, he’s sitting at a desk with a working Enigma machine beside him, one of only a few dozen of the World War II code devices left in the world. I would think it the clear winner of the contest for Coolest Secret Weapon in the Room if it weren’t for the way he keeps talking about an obscure programming language called OCaml. Minsky, a computer science PhD, convinced his employer 10 years ago to rewrite the company’s entire trading system in OCaml.

by Philip Wadler ([email protected]) at April 04, 2015 03:15 PM

April 03, 2015

Ken T Takusagawa

[cssohavu] Enabling arbitrary static verification

As the complexity of programs increases, static verification features of a programming language become relatively more important, compared to, say, code conciseness or code execution speed (at least, constant factors of code execution speed).  It becomes relatively more important for a machine verifier to be able to "understand" the program and verify correctness properties about it.  That computer verifier might become the only entity that understands the entirety of a large program written over a long period of time by many authors.

Rich type systems such as Haskell's are an example of static verification.  For example, it is useful to be able to annotate which code is stateful and what state it is modifying. Haskell monads accomplish such annotation, and the type checker checks that the annotations are correct.
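
A tiny sketch of what such an annotation buys: the type of the following function, using the standard mtl State monad, declares both that the code is stateful and that the only state it touches is an Int; the type checker will reject any attempt to modify state of another type here.

import Control.Monad.State (State, get, put)

-- The type says: stateful code whose only state is an Int counter.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n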

However, Haskell types are not the be-all-end-all of static verification.  In more complicated code, it is easy to want to express and verify properties which cannot be expressed in its type system.

Design a language whose static verification features are user definable and may be arbitrarily complex, keeping pace with the complexity of the program.

(We will probably eventually want to express static verification of the static verification, and so on.  The difficulty of creating a correct program inherently grows faster than linearly with the size of the executable part of the program.  Perhaps this will prevent the technological singularity.)

Language features such as polymorphic types or other type extensions would not be hardcoded into the compiler but imported as one of many possible static verification libraries that a program can invoke in the static verification phase of its compilation.  Theorem provers will also likely be useful as libraries.

Lisp syntax looks nice, with its arbitrary extensibility and ability to treat code as data, which is what the static verification code will be doing.

by Ken ([email protected]) at April 03, 2015 08:12 PM

April 02, 2015

wren gayle romano

An introduction to recursive types

The past couple weeks I've been teaching about recursive types and their encodings in B522. Here's a short annotated bibliography for followup reading:

  • For a basic intro to recursive types, and for the set-theoretic metatheory: see section IV, chapters 20 and 21.
    • Benjamin C. Pierce (2002) Types and Programming Languages. MIT Press.
  • The proof of logical inconsistency and non-termination is "well-known". For every type τ we can define a fixed-point combinator and use that to exhibit an inhabitant of the type (see the Haskell sketch after this list):
    • fixτ = λf:(τ→τ). let e = λx:(μα.α→τ). f ((unroll x) x) in e (roll(μα.α→τ) e)
    • ⊥τ = fixτ (λx:τ. x)
  • A category-theoretic proof that having fixed-points causes inconsistency
  • The proof of Turing-completeness is "well-known". Here's a translation from the untyped λ-calculus to STLC with fixed-points:
    • (x)* = x
    • (λx. e)* = roll(μα.α→α) (λx:(μα.α→α). e*)
    • (f e)* = unroll (f*) (e*)
  • Knaster–Tarski (1955): For any monotone function, f, (a) the least fixed-point of f is the intersection of all f-closed sets, and (b) the greatest fixed-point of f is the union of all f-consistent sets.
  • For a quick introduction to category theory, a good place to start is:
    • Benjamin C. Pierce (1991) Basic Category Theory for Computer Scientists. MIT Press.
  • For a more thorough introduction to category theory, consider:
  • The Boehm–Berarducci encoding
  • Church/Boehm–Berarducci encodings are not canonical
    • [citation needed]
  • Surjective pairing cannot be encoded in STLC (i.e., the implicational fragment of intuitionistic propositional logic): see p.155
    • Morten H. Sørensen and Paweł Urzyczyn (2006) Lectures on the Curry–Howard isomorphism. Studies in Logic and the Foundations of Mathematics, v.149.
  • However, adding it is a conservative extension
  • Compiling data types with Scott encodings
  • For more on the difference between Scott and Mogensen–Scott encodings:
  • Parigot encodings
    • M. Parigot (1988) Programming with proofs: A second-order type theory. ESOP, LNCS 300, pp.145–159. Springer.
  • For more on catamorphisms, anamorphisms, paramorphisms, and apomorphisms
  • build/foldr list fusion
    • Andy Gill, John Launchbury, and Simon Peyton Jones (1993) A short cut to deforestation. In Proc. Functional Programming Languages and Computer Architecture, pp.223–232.
    • Many more links at the bottom of this page
  • For another general introduction along the lines of what we covered in class
  • "Iterators" vs "recursors" in Heyting arithmetic and Gödel's System T: see ch.10:
    • Morten H. Sørensen and Paweł Urzyczyn (2006) Lectures on the Curry–Howard isomorphism. Studies in Logic and the Foundations of Mathematics, v.149.
  • There are a great many more papers by Tarmo Uustalu, Varmo Vene, Ralf Hinze, and Jeremy Gibbons on all this stuff; just google for it.
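
As promised above, here is a direct Haskell transcription of the fixed-point combinator (a sketch: Rec a plays the role of μα.α→τ, with Roll and unroll witnessing the two directions of the isomorphism):

-- Rec a is the recursive type μα. α → a.
newtype Rec a = Roll { unroll :: Rec a -> a }

-- fix' f = e (roll e), where e = λx. f ((unroll x) x).
fix' :: (a -> a) -> a
fix' f = e (Roll e)
  where e x = f (unroll x x)

-- An inhabitant of every type a (it diverges, of course).
bottom :: a
bottom = fix' id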



April 02, 2015 09:37 PM

Ian Ross

Non-diffusive atmospheric flow #14: Markov matrix calculations

This is going to be the last substantive post of this series (which is probably as much of a relief to you as it is to me…). In this article, we’re going to look at phase space partitioning for our dimension-reduced Z_{500} PCA data and we’re going to calculate Markov transition matrices for our partitions to try to pick out consistent non-diffusive transitions in atmospheric flow regimes.

Phase space partitioning

We need to divide the phase space we’re working in (the unit sphere parameterised by θ and ϕ) into a partition of equal sized components, to which we’ll assign each data point. We’ll produce partitions by dividing the unit sphere into bands in the θ direction, then splitting those bands in the ϕ direction as required. The following figures show the four partitions we’re going to use here1:

In each case, the “compartments” of the partition are each of the same area on the unit sphere. For Partitions 1 and 2, we find the angle α of the boundary of the “polar” components by solving the equation

\int_0^{\alpha} \sin \theta \, d\theta \int_0^{2\pi} d\phi = \frac{4\pi}{C},

where C is the number of components in the partition. Evaluating the integrals gives 2\pi (1 - \cos \alpha) = 4\pi/C, i.e. \cos \alpha = 1 - 2/C. For partition 1, with C=4, this gives α₁ = π/3 and for partition 2, with C=6, α₂ = cos⁻¹(2/3).

Assigning points in our time series on the unit sphere to partitions is then done by this code (as usual, the code is in a Gist):

-- Partition component: theta range, phi range.
data Component = C { thmin :: Double, thmax :: Double
                   , phmin :: Double, phmax :: Double
                   } deriving Show

-- A partition is a list of components that cover the unit sphere.
type Partition = [Component]

-- Angle for 1-4-1 partition.
th4 :: Double
th4 = acos $ 2.0 / 3.0

-- Partitions.
partitions :: [Partition]
partitions = [ [ C 0        (pi/3)   0  (2*pi)
               , C (pi/3)   (2*pi/3) 0  pi
               , C (pi/3)   (2*pi/3) pi (2*pi)
               , C (2*pi/3) pi       0  (2*pi) ]
             , [ C 0        th4      0        (2*pi)
               , C th4      (pi-th4) 0        (pi/2)
               , C th4      (pi-th4) (pi/2)   pi
               , C th4      (pi-th4) pi       (3*pi/2)
               , C th4      (pi-th4) (3*pi/2) (2*pi)
               , C (pi-th4) pi       0        (2*pi) ]
             , [ C 0      (pi/2) 0  pi
               , C 0      (pi/2) pi (2*pi)
               , C (pi/2) pi     0  pi
               , C (pi/2) pi     pi (2*pi) ]
             , [ C 0      (pi/2) (pi/4)   (5*pi/4)
               , C 0      (pi/2) (5*pi/4) (pi/4)
               , C (pi/2) pi     (pi/4)   (5*pi/4)
               , C (pi/2) pi     (5*pi/4) (pi/4) ] ]

npartitions :: Int
npartitions = length partitions

-- Convert list of (theta, phi) coordinates to partition component
-- numbers for a given partition.
convert :: Partition -> [(Double, Double)] -> [Int]
convert part pts = map (convOne part) pts
  where convOne comps (th, ph) = 1 + length (takeWhile not $ map isin comps)
          where isin (C thmin thmax ph1 ph2) =
                  if ph1 < ph2
                  then th >= thmin && th < thmax && ph >= ph1 && ph < ph2
                  else th >= thmin && th < thmax && (ph >= ph1 || ph < ph2)

The only thing we need to be careful about is dealing with partitions that extend across the zero of <semantics>ϕ<annotation encoding="application/x-tex">\phi</annotation></semantics>.

What we’re doing here is really another kind of dimensionality reduction. We’ve gone from our original spatial maps of Z_{500} to a continuous reduced dimensionality representation via PCA, truncation of the PCA basis and projection to the unit sphere, and we’re now reducing further to a discrete representation – each Z_{500} map in our original time series data is represented by a single integer label giving the partition component in which it lies.

We can now use this discrete data to construct empirical Markov transition matrices.

Markov matrix calculations

Once we’ve generated the partition time series described in the previous section, calculating the empirical Markov transition matrices is fairly straightforward. We need to be careful to avoid counting transitions from the end of one winter to the beginning of the next, but apart from that little wrinkle, it’s just a matter of counting how many times there’s a transition from partition component j to partition component i, which we call T_{ij}. We also need to make sure that we consider the same number, N_k, of points from each of the partition components. The listing below shows the function we use to do this – the function takes as arguments the size of the partition and the time series of partition components as a vector, and returns the transition count matrix T and N_k, the number of points in each partition used to calculate the transitions:

-- Count transitions for an n-component partition from the time series
-- pm of component labels, returning the transition count matrix and N_k.
transMatrix :: Int -> SV.Vector Int -> (M, Int)
transMatrix n pm = (accum (konst 0.0 (n, n)) (+) $ zip steps (repeat 1.0), ns)
  where -- (to, from) index pairs, skipping transitions that cross a
        -- winter boundary (winters here are consecutive 21-point blocks).
        allSteps = [((pm SV.! (i + 1)) - 1, (pm SV.! i) - 1) |
                    i <- [0..SV.length pm - 2], (i + 1) `mod` 21 /= 0]
        -- Group transitions by destination component and use the same
        -- number from each group.
        steps0 = map (\k -> filter (\(i, j) -> i == k) allSteps) [0..n-1]
        ns = minimum $ map length steps0
        steps = concat $ map (take ns) steps0

Once we have T, the Markov matrix is calculated as M = N_k^{-1} T and the symmetric and asymmetric components of M are calculated in the obvious way:

splitMarkovMatrix :: M -> (M, M)
splitMarkovMatrix mm = (a, s)
  where s = scale 0.5 $ mm + tr mm
        a = scale 0.5 $ mm - tr mm
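
The normalisation step M = N_k^{-1} T mentioned above is then a one-liner (a sketch, using the t and ns values returned by transMatrix):

-- Empirical Markov matrix: scale the transition counts by the number
-- of points taken from each partition component.
markovMatrix :: M -> Int -> M
markovMatrix t ns = scale (1.0 / fromIntegral ns) t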

We can then calculate the M^A + |M^A| matrix that recovers the non-diffusive part of the system dynamics. One thing we need to consider is the statistical significance of the resulting components in the M^A + |M^A| matrix: these components need to be sufficiently large compared to the “natural” variation due to the diffusive dynamics in the system for us to consider them not to have occurred by chance. The statistical significance calculations aren’t complicated, but I’ll just present the results here without going into the details (you can either just figure out what’s going on directly from the code or you can read about it in Crommelin (2004)).
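
In code, recovering M^A + |M^A| from the asymmetric component is equally direct (a sketch; cmap is hmatrix's elementwise map):

-- Non-diffusive part of the dynamics: M^A + |M^A|, with the absolute
-- value taken elementwise.
nonDiffusive :: M -> M
nonDiffusive ma = ma + cmap abs ma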

Let’s look at the results for the four partitions we showed earlier. In each case, we’ll show the T Markov transition count matrix and the M^A + |M^A| “non-diffusive dynamics” matrix. We’ll annotate the entries in this matrix to show their statistical significance: \underline{\underline{\mathbf{>95\%}}}, \underline{\mathbf{95\%-90\%}}, \mathbf{90\%-85\%}, \underline{85\%-80\%}, 80\%-75\%, \mathit{<75\%}.

For partition #1, we find:

\mathbf{T} = \begin{pmatrix}
145 &  67 &  63 &  34 \\
 77 & 110 &  62 &  60 \\
 62 &  32 & 125 &  90 \\
 24 &  73 &  70 & 142
\end{pmatrix}
\qquad
\mathbf{M}^A + |\mathbf{M}^A| = \frac{1}{100} \begin{pmatrix}
0 & 0 & \mathit{0.3} & \mathit{3.2} \\
\mathit{3.2} & 0 & \underline{\underline{\mathbf{9.7}}} & 0 \\
0 & 0 & 0 & 6.5 \\
0 & \mathit{4.2} & 0 & 0
\end{pmatrix}

For partition #2:

\mathbf{T} = \begin{pmatrix}
77 & 22 & 30 & 37 & 14 & 14 \\
27 & 66 & 23 &  7 & 42 & 29 \\
26 & 21 & 77 & 33 &  9 & 28 \\
20 & 10 & 30 & 66 & 29 & 39 \\
33 & 19 & 10 & 29 & 65 & 38 \\
 7 & 32 & 24 & 24 & 37 & 70
\end{pmatrix}

\mathbf{M}^A + |\mathbf{M}^A| = \frac{1}{100} \begin{pmatrix}
0 & 0 & \mathit{2.1} & \mathbf{8.8} & 0 & \mathit{3.6} \\
\mathit{2.6} & 0 & \mathit{1.0} & 0 & \underline{\underline{\mathbf{11.9}}} & 0 \\
0 & 0 & 0 & \mathit{1.5} & 0 & \mathit{2.1} \\
0 & \mathit{1.5} & 0 & 0 & 0 & \underline{7.7} \\
\underline{\mathbf{9.8}} & 0 & \mathit{0.5} & 0 & 0 & \mathit{0.5} \\
0 & \mathit{1.5} & 0 & 0 & 0 & 0
\end{pmatrix}

For partition #3:

\mathbf{T} = \begin{pmatrix}
159 &  71 &  63 &  26 \\
 67 & 142 &  33 &  77 \\
 59 &  46 & 133 &  81 \\
 27 &  64 &  78 & 150
\end{pmatrix}
\qquad
\mathbf{M}^A + |\mathbf{M}^A| = \frac{1}{100} \begin{pmatrix}
0 & \mathit{1.3} & \mathit{1.3} & 0 \\
0 & 0 & 0 & \mathit{4.1} \\
0 & \mathit{4.1} & 0 & \mathit{0.9} \\
\mathit{0.3} & 0 & 0 & 0
\end{pmatrix}

And for partition #4:

\mathbf{T} = \begin{pmatrix}
160 &  53 &  68 &  27 \\
 75 & 135 &  36 &  62 \\
 56 &  43 & 133 &  76 \\
 19 &  70 &  50 & 169
\end{pmatrix}
\qquad
\mathbf{M}^A + |\mathbf{M}^A| = \frac{1}{100} \begin{pmatrix}
0 & 0 & \mathit{3.9} & \mathit{2.6} \\
\mathbf{7.1} & 0 & 0 & 0 \\
0 & \mathit{2.3} & 0 & \underline{\mathbf{8.4}} \\
0 & \mathit{2.6} & 0 & 0
\end{pmatrix}

So, what conclusions can we draw from these results? First, the results we get here are rather different from those in Crommelin’s paper. This isn’t all that surprising – as we’ve followed along with the analysis in the paper, our results have become more and more different, mostly because the later parts of the analysis are more dependent on smaller details in the data, and we’re using a longer time series of data than Crommelin did. The plots below represent the contents of the M^A + |M^A| matrices for each partition in a graphical form that makes it easier to see what’s going on. In these figures, the thickness and darkness of the arrows show the statistical significance of the transitions.

We’re only going to be able to draw relatively weak conclusions from these results. Let’s take a look at the apparent dynamics for partitions #1 and #2 shown above. In both cases, there is a highly significant flow from the right hand side of the plot to the left, presumably mostly representing transitions from the higher probability density regions on the right (around θ = π/2, ϕ = 7π/4) to those on the left (around θ = 3π/4, ϕ = 3π/8). In addition, there are less significant flows from the upper hemisphere of the unit sphere to the lower, more significant for partition #2 than for #1, with the flow apparently preferentially going via partition component number 4 for partition #2. Looking back at the “all data” spherical PDF with some labelled bumps, we see that the flow from the right hand side of the PDF to the left is probably something like a transition from bump 4 (more like a blocking pattern) to bump 2 (more like a normal flow).

I’ll freely admit that this isn’t terribly convincing, and for partitions #3 and #4, the situation is even less clear.

For me, one of the lessons to take away from this is that even though we started with quite a lot of data (daily Z_{500} maps for 66 years), the progressive steps of dimensionality reduction that we’ve used to try to elucidate what’s going on in the data result in less and less data on which we do the later steps of our analysis, making it more and more difficult to get statistically significant (or even superficially convincing) results. It’s certainly not the case that the results in Crommelin (2004) are just a statistical accident – there really is observed persistence of atmospheric flow patterns and pretty clear evidence that there are consistent transitions between different flow regimes. It’s just that those might be quite hard to see via this kind of analysis. Why the results that we see here are less consistent than those in Crommelin’s analysis is hard to determine. Perhaps it’s just because we have more data and there was more variability in climate in the additional later part of the Z_{500} time series. Or I might have made a mistake somewhere along the way!

It’s difficult to tell, but if I was doing this analysis “for real”, rather than just as an exercise to play with data analysis in Haskell, I’d probably do two additional things:

  1. Use a truncated version of the data set to attempt to replicate the results from Crommelin (2004) as closely as possible. This would give better confidence that I’ve not made a mistake.

  2. Randomly generate partitions of the unit sphere for calculating the Markov transition matrices and use some sort of bootstrapping to get a better idea of how robust the “significant” transitions really are. (Generating random partitions of the sphere would be kind of interesting – I’d probably sample a bunch of random points uniformly on the sphere, then use some kind of spring-based relaxation to spread the points out and use the Voronoi polygons around the relaxed points as the components of the partition.)

However, I think that that’s quite enough about atmospheric flow regimes for now…


  1. I was originally planning to do some more work to demonstrate the independence of the results we’re going to get to the choice of partition in a more sophisticated way, but my notes are up to about 80 pages already, so I think these simpler fixed partition Markov matrix calculations will be the last thing I do on this!


April 02, 2015 10:48 AM

FP Complete

Announcing: LTS (Long Term Support) Haskell 2

The Stackage team is proud to announce the release of LTS Haskell 2. To quote the package page:

LTS Haskell: Version your Ecosystem

LTS (Long Term Support) Haskell is a curated set of packages which includes non-breaking point releases. It is a companion to Stackage Nightly: whereas Stackage Nightly releases include potentially breaking changes with each new release, LTS Haskell maintains major version stability for a longer period of time.

As usual, to start using LTS Haskell, you typically need to run the command wget https://www.stackage.org/lts/cabal.config in your package directory. More detailed instructions are available on the LTS Haskell 2 page itself.

This release is significant in that it is the first major version bump we've performed on LTS Haskell. I'm also happy to note that, despite some earlier concerns, both primitive 0.6 and blaze-builder 0.4 made it in, thanks to last minute patches by Emanuel Borsboom, Simon Meier, Edward Kmett, and Gregory Collins.

I'm also happy to announce that, in the three months since LTS 1 was released, there has been a significant surge in involvement from the community. For comparison:

Measurement          LTS 1.0  LTS 2.0
Core packages             29       29
Non-core packages        833     1030
Total packages           862     1059
Unique maintainers        67       96

I'm excited to see the community embrace this project so fully, and look forward to the trend continuing.

The road to 3.0

The current plan is to target the LTS 3.0 release some time around August, depending on when the Hackage ecosystem fully updates to GHC 7.10. The goal is to make sure that 3.0 is switched over to GHC 7.10.

In addition, Daniel Bergey sent an email which resulted in some questions from me about how we should plan and communicate around LTS major bumps. To summarize my goals and ideas:

  • We need to make sure package authors understand when a release is coming out, and the importance of making their packages compatible with upstream dependencies. I believed previously that the issue tracker on the Stackage repo was sufficient to indicate this to authors, but Daniel's questions and other responses I received from package authors tell me that we need some more explicit communication. Perhaps there should be an email 1-2 weeks in advance of the release warning about restrictive upper bounds.
  • How strictly should we adhere to a release schedule? I want to make sure that LTS Haskell is a reliable release, but perhaps providing a release window of 1-2 weeks instead of a hard release date will give package authors some necessary flexibility.
  • Since Stackage Nightly is essentially the testing ground for new LTS major bumps, how aggressive should we be on removing packages with restrictive upper bounds? I've been pretty lenient until now. However, this is a two-edged sword. Allowing upper bounds to creep in makes the lives of some authors easier, but makes the lives of other authors (the ones updating their packages regularly) much more difficult.

I don't want to make any of these decisions myself, as they're pretty central to how the LTS ecosystem is going to operate. If you have thoughts on any of these points, or on points I haven't raised, please bring them up on the Stackage mailing list and/or Reddit.

April 02, 2015 09:00 AM

Mark Jason Dominus

Your kids will love a cookie-decorating party

We had a party last week for Toph's 7th birthday, at an indoor rock-climbing gym, same as last year. Last year at least two of the guests showed up and didn't want to climb, so Lorrie asked me to help think of something for them to do if the same thing happened this year. After thinking about it, I decided we should have cookie decorating.

This is easy to set up and kids love it. I baked some plain sugar cookies, bought chocolate, vanilla, and strawberry frosting, several tubes of edible gel, and I mixed up five kinds of colored sugar. We had some colored sprinkles and little gold dragées and things like that. I laid the ingredients out on the table in the gym's side room with some plastic knives and paintbrushes, and the kids who didn't want to climb, or who wanted a break from climbing, decorated cookies. It was a great success. Toph's older sister Katara had hurt her leg, and couldn't climb, so she helped the littler kids with cookies. Even the tiny two-year-old sister of one of the guests was able to participate, and enjoyed playing with the dragées.

(It's easy to vary the project depending on how much trouble you want to take. I made the cookies from scratch, which is pretty easy, but realized later I could have bought prefabricated cookie batter, which would have been even easier. The store sold colored sugar for $3.29 for four ounces, which is offensive, so I went home and made my own. You put one drop of food coloring per two ounces of sugar in a sealed container and shake it up for a minute, for a total cost of close to zero; Toph helped with this. I bought my frosting, but when my grandmother used to do it she'd make a simple white frosting from confectioners' sugar and then color it with food coloring.)

I was really pleased with the outcome, and not just because the guests liked it, but also because it is a violation of gender norms for a man to plan a cookie-decorating activity and then bake the cookies and prepare the pastel-colored sugar and so forth. (And of course I decorated some cookies myself.) These gender norms are insidious and pervasive, and to my mind no opportunity to interfere with them should be wasted. Messing with the gender norms is setting a good example for the kids and a good example for other dads and for the rest of the world.

I am bisexual, and sometimes I feel that it doesn't affect my life very much. The sexual part is mostly irrelevant now; I fell in love with a woman twenty years ago and married her and now we have kids. I probably won't ever have sex with another man. Whatever! In life you make choices. My life could have swung another way, but it didn't.

But there's one part of being bisexual that has never stopped paying dividends for me, and that is that when I came out as queer, it suddenly became apparent to me that I had abandoned the entire gigantic structure of how men are supposed to behave. And good riddance! This structure belongs in the trash; it completely sucks. So many straight men spend a huge amount of time terrified that other straight men will mock them for being insufficiently manly, or mocking other straight men for not being sufficiently manly. They're constantly wondering "if I do this will the other guys think it's gay?" But I've already ceded that argument. The horse is out of the barn, and I don't have to think about it any more. If people think what I'm doing is gay, that's a pill I swallowed when I came out in 1984. If they say I'm acting gay I'll say "close, but actually, I'm bi, and go choke on a bag of eels, jackass."

You don't have to be queer to opt out of straight-guy bullshit, and I think I would eventually have done it anyway, but being queer made opting out unavoidable. When I was first figuring out being queer I spent a lot of time rethinking my relationship to society and its gender constructions, and I felt that I was going to have to construct my own gender from now on and that I no longer had the option of taking the default. I wasn't ever going to follow Rule Number One of Being a Man (“do not under any circumstances touch, look at, mention, or think about any dick other than your own”), so what rules was I going to follow? Whenever someone tried to pull “men don't” on me (or whenever I tried to pull it on myself), I'd immediately think of all the Rule Number One stuff I did that “men don't” and it would all go in the same trash bin. Where (did I say this already?) it belongs.

Opting out frees up a lot of mental energy that I might otherwise waste worrying about what other people think of stuff that is none of their business, leaving me more space to think about how I feel about it and whether I think it's morally or ethically right and whether it's what I want. It means that if someone is puzzled or startled by my pink sneakers, I don't have to care, except I might congratulate myself a little for making them think about gender construction for a moment. Or the same if people find out I have a favorite flower (CROCUSES YEAH!) or if I wash the dishes or if I play with my daughters or watch the ‘wrong’ TV programs or cry or apologize for something I did wrong or whatever bullshit they're uncomfortable about this time.

Opting out frees me up to be a feminist; I don't have to worry that a bunch of men think I'm betraying The Team, because I was never on their lousy team in the first place.

And it frees me up to bake cookies for my kid's birthday party, to make a lot of little kids happy, and to know that that can only add to, not subtract from, my identity. I'm Dominus, who loves programming and mathematics and practicing the piano and playing with toy octopuses and decorating cookies with a bunch of delightful girls.

This doesn't have to be a big deal. Nobody is likely to be shocked or even much startled by Dad baking cookies. But these tiny actions, chipping away at these vile rules, are one way we take tiny steps toward a better world. Every kid at that party will know, if they didn't before, that men can and do decorate cookies.

And perhaps I can give someone else courage to ignore some of that same bullshit that prevents all of us from being as great as we could and should be, all those rules about stuff men aren't supposed to do and other stuff women aren't supposed to do, that make everyone less. I decided about twenty years ago that that was the best reason for coming out at all. People are afraid to be different. If I can be different, maybe I can give other people courage and comfort when they need to be different too. As a smart guy once said, you can be a light to the world, like a city on a hilltop that cannot be hidden.

And to anyone who doesn't like it, I say:

by Mark Dominus ([email protected]) at April 02, 2015 12:00 AM

April 01, 2015

Clifford Beshers

April Fools' Joke Exposes Weakness in Twitter Verification

There was a prank in my Twitter feed today (April 1) that, on top of being very elegant, exposed a subtle loophole in Twitter's verification procedures.  (Here is a screenshot, in case the prank has been removed by now.) Notice the checkmark that indicates a verified account.

The prank involves two soccer players, Jozy Altidore, USMNT striker who was given a red card yesterday for cursing at the referee, and Mike Magee, star of MLS team Chicago Fire, known mischief maker.

Magee swapped the theme on his Twitter account to make it look like he was Jozy Altidore and posted the spoof.  Normally, this sort of thing gets rejected easily because of the verification badge. But the badge is there.  I went looking to see how Magee had faked the badge.

He hadn't. Magee is also a celebrity and also has a verified account.  Anyone with a verified account can pretend to be someone else with a verified account, apparently.

So I think if I were Twitter, I would change the name field for verified accounts to be immutable. Quickly, before Nathan Fillion finds out.



by Clifford Beshers ([email protected]) at April 01, 2015 05:25 PM

Philip Wadler

Pension cuts - respond to USS


The UCU has laid down its placards and ended the strike to avoid cuts to pensions. Our one remaining hope is for members to make their outrage known in the USS consultation exercise, which ends 22 May 2015. I hope everyone with a USS pension will respond. Suggested responses from UCU Left and UCU are linked; please comment below, especially to list other suggested responses.

Universities UK argue that the reductions are necessary to avoid a deficit, but their claim has been widely criticised. A group of prominent statisticians point out that Universities UK inflated the deficit by assuming a buoyant economy when predicting future salaries but assuming a recession when predicting investment returns.

In 2011, Universities UK imposed vastly reduced pensions on new hires. Old hires who pay into the pension fund for forty years receive a pension of one-half their final salary; new hires who do the same receive a pension of one-half their average salary. Basing pensions on average rather than final salary may be sensible, but to do so with little or no adjustment in multiplier suggests employers used this as an excuse to slip in a large cut; new hires receive about 2/3 the benefits received by old hires. All staff also suffered other cuts to pensions: additional caps and less good adjustment for inflation. At the time, it was predicted that within a few years old hires would be moved to the inferior scheme for new hires, and that is what has now come to pass.

Like many in USS, employees and employers alike, I am concerned that the changes will make UK academia a less attractive option for potential scholars.



by Philip Wadler ([email protected]) at April 01, 2015 10:28 AM

March 31, 2015

Edward Z. Yang

Protected: Expression-level Backpack

This content is password protected. To view it please enter your password below:

by Edward Z. Yang at March 31, 2015 11:29 PM

Brent Yorgey

Blogging again, & some major life events

It’s been a long time since I’ve written anything here; the blog was on hold while I was finishing my PhD and on the academic job market. Now that things have settled down a bit I plan to get back to blogging.

For starters, here are a few of the major events that have happened in the meantime, that readers of this blog might care about:

  • I successfully defended my PhD dissertation in October, officially graduated in December, and got an actual diploma in the mail a few weeks ago. I’ll be back in Philadelphia for the official graduation ceremony in May.
  • I accepted a tenure-track position at Hendrix College in Conway, Arkansas, and will be moving there this summer.
  • Dan Piponi and I had a paper accepted to MPC 2015. Here’s the github repo, and I plan to post a PDF copy here soon (once I get around to incorporating feedback from the reviewers). I look forward to seeing a bunch of folks (Volk?) in Königswinter this summer; I already have my plane tickets (CIU -> DTW -> AMS -> CGN, it’s a long story).
  • Work on diagrams continues strong (no thanks to me!), and we are aiming for a big new release soon—I will certainly post about that here as well.

by Brent at March 31, 2015 06:03 PM

March 30, 2015

Well-Typed.Com

OverloadedRecordFields revived

Way back in the summer of 2013, with support from the Google Summer of Code programme, I implemented a GHC extension called OverloadedRecordFields to address the oft-stated desire to improve Haskell’s record system. This didn’t get merged into GHC HEAD at the time, because the implementation cost outweighed the benefits. Now, however, I’m happy to report that Well-Typed are sponsoring the work required to get an improved version into GHC. Moreover, the first part of the work is already up for review on Phabricator.

The crucial thing that has enabled OverloadedRecordFields to get going again is that we’ve found a way to factor the design differently, so that we get a much better power-to-weight ratio for the extension by splitting it into two parts.

Part 1: Records with duplicate field labels

The first step is to cut down the core OverloadedRecordFields extension as much as possible. The essential idea is the same as it ever was, namely that a single module should be able to use the same field name in multiple datatypes, as in this example:

data Person  = Person  { personId :: Int, name :: String }
data Address = Address { personId :: Int, address :: String }

These definitions are forbidden in normal Haskell, because personId is defined twice, but the OverloadedRecordFields extension will permit them and instead postpone name conflict checking to use sites. The basic extension will require that fields are used in such a way that the relevant datatype is always unambiguously determined, and the meanings of record construction, pattern matching, selection and update will not change. This means that the extension can always be enabled for an existing module and it will continue to compile unchanged, an important property that was not true of the previous design.

The Haskell syntax for record construction and pattern-matching is always unambiguous, because it mentions the data constructor, which means that code like this is perfectly fine:

p = Person { personId = 1, name = "Donald" }
getId (Person { personId = i }) = i

On the other hand, record selector functions are potentially ambiguous. The name and address selectors can be used without restrictions, and with their usual types, but it will not be possible to use personId as a record selector if both versions are in scope (although we will shortly see an alternative).

Record update is a slightly more interesting case, because it may or may not be ambiguous depending on the fields being updated. In addition, since updates are a special syntactic form, the ambiguity can be resolved using a type signature. For example, this update would be ambiguous and hence rejected by the compiler:

f x = x { personId = 0 } -- is x a Person or an Address?

On the other hand, all these updates are unambiguous:

g :: Person -> Person
g x = x { personId = 0 }                 -- top-level type signature

h x = x { personId = 0 } :: Person       -- type signature outside

k x = (x :: Person) { personId = 0 }     -- type signature inside

l x = x { personId = 0, name = "Daffy" } -- only Person has both fields

Overall, this extension requires quite a bit of rewiring inside GHC to distinguish between field labels, which may be overloaded, and record selector function names, which are always unambiguous. However, it requires nothing conceptually complicated. As mentioned above, the implementation of this part is available for review on Phabricator.

Part 2: Polymorphism over record fields

While the OverloadedRecordFields extension described in part 1 is already useful, even though it is a relatively minor relaxation of the Haskell scoping rules, another important piece of the jigsaw is some way to refer to fields that may belong to multiple datatypes. For example, we would like to be able to write a function that selects the personId field from any type that has such a field, rather than being restricted to a single datatype. Much of the unavoidable complexity of the previous OverloadedRecordFields design came from treating all record selectors in an overloaded way.

But since this is new functionality, it can use a new syntax, tentatively a prefix # sign (meaning that use of # as an operator will require a space afterwards when the extension is enabled). This means that it will be possible to write #personId for the overloaded selector function. Since we have a syntactic cue, it is easy to identify such overloaded uses of selector functions, without looking at the field names that are in scope.

Typeclasses and type families will be used to implement polymorphism over fields belonging to record types, though the details are beyond the scope of this blog post. For example, the following definition is polymorphic over all types r that have a personId :: Int field:

getId :: r { personId :: Int } => r -> Int
getId x = #personId x

Moreover, we are not limited to using #personId as a selector function. The same syntax can also be given additional interpretations, allowing overloaded updates and making it possible to produce lenses for fields without needing Template Haskell. In fact, the syntax is potentially useful for things that have nothing to do with records, so it will be available as a separate extension (implied by, but distinct from, OverloadedRecordFields).
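
To give a flavour of how a constraint like r { personId :: Int } might work, here is a very rough sketch built on a hypothetical HasField class; the actual design uses type families and differs in important details:

{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

-- Hypothetical class: "r has a field called name of type a".
class HasField (name :: Symbol) r a | name r -> a where
  getField :: Proxy name -> r -> a

data Person = Person { personId_ :: Int, name_ :: String }

instance HasField "personId" Person Int where
  getField _ = personId_

-- Roughly what #personId could elaborate to:
getId :: HasField "personId" r Int => r -> Int
getId = getField (Proxy :: Proxy "personId")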

Further reading

More details of the redesigned extensions are available on the GHC wiki, along with implementation notes for GHC hackers. Last year, I gave a talk about the previous design which is still a good guide to how the types work under the hood, even though it predates the redesign.

by adam at March 30, 2015 01:39 PM

OverloadedRecordFields revived

Way back in the summer of 2013, with support from the Google Summer of Code programme, I implemented a GHC extension called OverloadedRecordFields to address the oft-stated desire to improve Haskell’s record system. This didn’t get merged into GHC HEAD at the time, because the implementation cost outweighed the benefits. Now, however, I’m happy to report that Well-Typed are sponsoring the work required to get an improved version into GHC. Moreover, the first part of the work is already up for review on Phabricator.

The crucial thing that has enabled OverloadedRecordFields to get going again is that we’ve found a way to factor the design differently, so that we get a much better power-to-weight ratio for the extension by splitting it into two parts.

Part 1: Records with duplicate field labels

The first step is to cut down the core OverloadedRecordFields extension as much as possible. The essential idea is the same as it ever was, namely that a single module should be able to use the same field name in multiple datatypes, as in this example:

data Person  = Person  { personId :: Int, name :: String }
data Address = Address { personId :: Int, address :: String }

These definitions are forbidden in normal Haskell, because personId is defined twice, but the OverloadedRecordFields extension will permit them and instead postpone name conflict checking to use sites. The basic extension will require that fields are used in such a way that the relevant datatype is always unambiguously determined, and the meanings of record construction, pattern matching, selection and update will not change. This means that the extension can always be enabled for an existing module and it will continue to compile unchanged, an important property that was not true of the previous design.

The Haskell syntax for record construction and pattern-matching is always unambiguous, because it mentions the data constructor, which means that code like this is perfectly fine:

p = Person { personId = 1, name = "Donald" }
getId (Person { personId = i }) = i

On the other hand, record selector functions are potentially ambiguous. The name and address selectors can be used without restrictions, and with their usual types, but it will not be possible to use personId as a record selector if both versions are in scope (although we will shortly see an alternative).

Record update is a slightly more interesting case, because it may or may not be ambiguous depending on the fields being updated. In addition, since updates are a special syntactic form, the ambiguity can be resolved using a type signature. For example, this update would be ambiguous and hence rejected by the compiler:

f x = x { personId = 0 } -- is x a Person or an Address?

On the other hand, all these updates are unambiguous:

g :: Person -> Person
g x = x { personId = 0 }                 -- top-level type signature

h x = x { personId = 0 } :: Person       -- type signature outside

k x = (x :: Person) { personId = 0 }     -- type signature inside

l x = x { personId = 0, name = "Daffy" } -- only Person has both fields

Overall, this extension requires quite a bit of rewiring inside GHC to distinguish between field labels, which may be overloaded, and record selector function names, which are always unambiguous. However, it requires nothing conceptually complicated. As mentioned above, the implementation of this part is available for review on Phabricator.

Part 2: Polymorphism over record fields

While the OverloadedRecordFields extension described in part 1 is already useful, even though it is a relatively minor relaxation of the Haskell scoping rules, another important piece of the jigsaw is some way to refer to fields that may belong to multiple datatypes. For example, we would like to be able to write a function that selects the personId field from any type that has such a field, rather than being restricted to a single datatype. Much of the unavoidable complexity of the previous OverloadedRecordFields design came from treating all record selectors in an overloaded way.

But since this is new functionality, it can use a new syntax, tentatively a prefix # sign (meaning that use of # as an operator will require a space afterwards when the extension is enabled). This means that it will be possible to write #personId for the overloaded selector function. Since we have a syntactic cue, it is easy to identify such overloaded uses of selector functions, without looking at the field names that are in scope.

Typeclasses and type families will be used to implement polymorphism over fields belonging to record types, though the details are beyond the scope of this blog post. For example, the following definition is polymorphic over all types r that have a personId :: Int field:

getId :: r { personId :: Int } => r -> Int
getId x = #personId x

Moreover, we are not limited to using #personId as a selector function. The same syntax can also be given additional interpretations, allowing overloaded updates and making it possible to produce lenses for fields without needing Template Haskell. In fact, the syntax is potentially useful for things that have nothing to do with records, so it will be available as a separate extension (implied by, but distinct from, OverloadedRecordFields).
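
The class machinery itself is beyond the scope of the post, but a hand-rolled approximation in today's Haskell gives the flavour of it; every name below (HasPersonId, personIdOf, and the underscore-prefixed selectors, which sidestep the usual name clash) is mine, not something the extension defines:

class HasPersonId r where
  personIdOf :: r -> Int

data Person  = Person  { _personId :: Int, _name :: String }
data Address = Address { _addrPersonId :: Int, _address :: String }

instance HasPersonId Person  where personIdOf = _personId
instance HasPersonId Address where personIdOf = _addrPersonId

-- The analogue of getId above, usable at both types:
getId :: HasPersonId r => r -> Int
getId = personIdOf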

Further reading

More details of the redesigned extensions are available on the GHC wiki, along with implementation notes for GHC hackers. Last year, I gave a talk about the previous design which is still a good guide to how the types work under the hood, even though it predates the redesign.

by adam at March 30, 2015 01:39 PM

Ian Ross

C2HS 0.25.1 "Snowmelt"

I took over the day-to-day support for C2HS about 18 months ago and have now finally cleaned up all the issues on the GitHub issue tracker. It took a lot longer than I was expecting, mostly due to pesky “real work” getting in the way. Now seems like a good time to announce the 0.25.1 “Snowmelt” release of C2HS and to summarise some of the more interesting new C2HS features.

Regression suite and Travis testing

When I first started working on C2HS, I kept breaking things and getting emails letting me know that such-and-such a package no longer worked. That got boring pretty quickly, so I wrote a Shelly-driven regression suite to build a range of packages that use C2HS to check for breakages. This now runs on Travis CI so that whenever a C2HS change is pushed to GitHub, as well as the main C2HS test suite, a bunch of C2HS-dependent packages are built. This has been pretty handy for avoiding some stupid mistakes.

Enum handling

Thanks to work contributed by Philipp Balzarek, the treatment of the mapping between C enum values and Haskell Enum types is now much better than it was. The C enum/Haskell Enum association is kind of an awkward fit, since the C and Haskell worlds make really quite different assumptions about what an “enumerated” type is, and the coincidence of names is less meaningful than you might hope. We might have to do some more work on that in the future: I’ve been thinking about whether it would be good to have a CEnum class in Foreign.C.Types to capture just the features of C enums that can be mapped to Haskell types in a sensible way.

Finalizers for foreign pointers

You can now say things like:

#include <stdio.h>

{#pointer *FILE as File foreign finalizer fclose newtype#}

{#fun fopen as ^ {`String', `String'} -> `File'#}
{#fun fileno as ^ {`File'} -> `Int'#}

main :: IO ()
main = do
  f <- fopen "tst.txt" "w"
  ...

and the FILE handle f will be cleaned up by a call to fclose via the Haskell garbage collector. This encapsulates a very common use case for handling pointers to C structures allocated by library functions. Previously there was no direct way to associate finalizers with foreign pointers in C2HS, but now it’s easy.

Easy access to preprocessor constants

C2HS has a new const hook for directly accessing the value of C preprocessor constants – you can just say {#const FOO#} to use the value of a constant FOO defined in a C header in Haskell code.
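
For example, a binding like this should work (a sketch; INT_MAX from limits.h is just a convenient constant to demonstrate the hook):

#include <limits.h>

intMax :: Int
intMax = {#const INT_MAX#}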

Special case argument marshalling

I’ve implemented a couple of special mechanisms for argument marshalling that were requested. The first of these is a little esoteric, but an example should make it clear. A common pattern in some C libraries is to have code that looks like this:

typedef struct {
  int a;
  float b;
  char dummy;
} oid;

void func(oid *obj, int aval, float bval);
int oid_a(oid *obj);
float oid_b(oid *obj);

Here the function func takes a pointer to an oid structure and fills in the values in the structure, and the other functions take oid pointers and do various things with them. Dealing with functions like func through the Haskell FFI is tedious because you need to allocate space for an oid structure, marshal a pointer to the allocated space and so on. Now though, the C2HS code

{#pointer *oid as Oid foreign newtype#}

{#fun func as ^ {+, `Int', `Float'} -> `Oid'#}

generates Haskell code like this:

newtype Oid = Oid (ForeignPtr Oid)
withOid :: Oid -> (Ptr Oid -> IO b) -> IO b
withOid (Oid fptr) = withForeignPtr fptr

func :: Int -> Float -> IO Oid
func a2 a3 =
  mallocForeignPtrBytes 12 >>= \a1'' -> withForeignPtr a1'' $ \a1' ->
  let {a2' = fromIntegral a2} in
  let {a3' = realToFrac a3} in
  func'_ a1' a2' a3' >>
  return (Oid a1'')

This allocates the right amount of space using the fast mallocForeignPtrBytes function and deals with all the marshalling for you. The special + parameter in the C2HS function hook definition triggers this (admittedly rather specialised) case.
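
Hypothetically, with an accessor hook alongside it such as {#fun oid_a as oidA {`Oid'} -> `Int'#} (my addition, not part of the example above), calling the binding is then straightforward:

main :: IO ()
main = do
  o <- func 1 2.5   -- space for the oid is allocated and filled in for us
  a <- oidA o
  print a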

The second kind of “special” argument marshalling is more general. A lot of C libraries include functions where small structures are passed “bare”, i.e. not as pointers. The Haskell FFI doesn’t include a means to marshal arguments of this type, which makes using libraries of this kind painful, with a lot of boilerplate marshalling code needed (just the kind of thing C2HS is supposed to eliminate!). The solution I came up with for C2HS is to add an argument annotation for function hooks that says that a structure pointer should really be passed as a bare structure. In such cases, C2HS then generates an additional C wrapper function to marshal between structure pointer and bare structure arguments. An example will make this clear. Suppose you have some code in a C header:

typedef struct {
  int x;
  int y;
} coord_t;

coord_t *make_coord(int x, int y);
void free_coord(coord_t *coord);
int coord_x(coord_t c, int dummy);

Here, the coord_x function takes a bare coord_t structure as a parameter. To bind to these functions in C2HS code, we write this:

{#pointer *coord_t as CoordPtr foreign finalizer free_coord newtype#}

{#fun pure make_coord as makeCoord {`Int', `Int'} -> `CoordPtr'#}
{#fun pure coord_x as coordX {%`CoordPtr', `Int'} -> `Int'#}

Here, the % annotation on the CoordPtr argument to the coordX function hook tells C2HS that this argument needs to be marshalled as a bare structure. C2HS then generates Haskell code as usual, but also an extra .chs.c file containing wrapper functions. This C code needs to be compiled and linked to the Haskell code.

This is kind of new and isn’t yet really supported by released versions of Cabal. I’ve made some Cabal changes to support this, which have been merged and will hopefully go into the next or next but one Cabal release. When that’s done, the handling of the C wrapper code will be transparent – Cabal will know that C2HS has generated these extra C files and will add them to the “C sources” list for whatever it’s building.
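
With those hooks in place, usage from Haskell is direct; since both hooks are declared pure, no IO plumbing is needed (a sketch, assuming the generated C wrapper has been compiled and linked as described):

main :: IO ()
main = print (coordX (makeCoord 3 4) 0)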

Binding to variadic C functions

Previously, variadic C functions weren’t supported in C2HS at all. Now though, you can do fun things like this:

#include <stdio.h>

{#fun variadic printf[int] as printi {`String', `Int'} -> `()'#}
{#fun variadic printf[int, int] as printi2 {`String', `Int', `Int'} -> `()'#}
{#fun variadic printf[const char *] as prints {`String', `String'} -> `()'#}

You need to give distinct names for the Haskell functions to be bound to different calling sequences of the underlying C function, and because there’s no other way of finding them out, you need to specify explicit types for the arguments you want to pass in the place of C’s ... variadic argument container (that’s what the C types in the square brackets are). Once you do that, you can call printf and friends to your heart’s content. (The user who wanted this feature wanted to use it for calling Unix ioctl…)
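
With the three hooks above in scope, the bound functions are then called like ordinary Haskell functions (the format strings here are just examples):

main :: IO ()
main = do
  printi "%d\n" 42
  printi2 "%d + %d\n" 1 2
  prints "hello, %s\n" "world"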

User-defined default marshallers

A big benefit of C2HS is that it tries quite hard to manage the associations between C and Haskell types and the marshalling of arguments between C and Haskell. To that end, we have a lot of default marshallers that allow you very quickly to write FFI bindings. However, we can’t cover every case. There were a few long-standing issues (imported from the original Trac issue tracker when I moved the project to GitHub) asking for default marshalling for various C standard or “standardish” typedefs. I held off on trying to fix those problems for a long time, mostly because I thought that fixing them one at a time as special cases would be a little futile and would just devolve into endless additions of “just one more” case.

In the end, I implemented a general scheme to allow users to explicitly associate C typedef names with Haskell types and to define default marshallers between them. As an example, using this facility, you can write code to marshal Haskell String values to and from C wide character strings like this:

#include <wchar.h>

{#typedef wchar_t CWchar#}
{#default in `String' [wchar_t *] withCWString* #}
{#default out `String' [wchar_t *] peekCWString* #}
{#fun wcscmp {`String', `String'} -> `Int'#}
{#fun wcscat {`String', `String'} -> `String'#}
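
Given those declarations, the bound functions take and return ordinary Haskell Strings (a sketch; wcscmp returns 0 when the strings are equal):

main :: IO ()
main = do
  r <- wcscmp "abc" "abc"
  print r   -- prints 0: the strings are equal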

I think that’s kind of fun…

Miscellany

As well as the features described above, there’s a lot more that’s been done over the last 18 months: better handling of structure tags and typedefs; better cross-platform support (OS X, FreeBSD and Windows); lots more default marshallers; support for parameterised pointer types; some vague gestures in the direction of “backwards compatibility” (basically just a C2HS_MIN_VERSION macro); and just in the last couple of days, some changes to deal with marshalling of C bool values (really C99 _Bool) which aren’t supported directly by the Haskell FFI (so again require some wrapper code and some other tricks).

Contributors

As well as myself and Manuel Chakravarty, the original author of C2HS, the following people have contributed to C2HS development over the last 18 months (real names where known, GitHub handles otherwise):

  • Anton Dessiatov
  • Boyun Tang
  • Cindy Wang
  • Dimitri Sabadie
  • Facundo Dominguez
  • Index Int
  • j-keck
  • Kai Harries
  • Merijn Verstraaten
  • Michael Steele
  • Philipp Balzarek
  • RyanGlScott
  • Sivert Berg
  • watashi
  • Zejun Wu

Many thanks to all of them, and many thanks also to Benedikt Huber, who maintains the language-c package on which C2HS is critically dependent!

What next?

All of the work I’ve done on C2HS has been driven purely by user demand, based on issues I imported from the original Trac issue tracker and then on things that people have asked for on GitHub. (Think of it as a sort of call-by-need exploration of the C2HS design space.) I’m now anticipating that since I’ve raised my head above the parapet by touting all these shiny new features, I can expect a new stream of bug reports to come in…

One potential remaining large task is to “sort out” the Haskell C language libraries, of which there are now at least three, all with different pros and cons. The language-c library used in C2HS has some analysis capabilities that aren’t present in the other libraries, but the other libraries (notably Geoffrey Mainland’s language-c-quote and Manuel’s language-c-inline) support more recent dialects of C. Many of the issues with C2HS on OS X stem from modern C features that occur in some of the OS X headers that the language-c package just doesn’t recognise. Using one of the other C language packages might alleviate some of those problems. To do that though, some unholy mushing-together of language-c and one of these other packages has to happen, in order to bring the analysis capabilities of language-c to the other package. That doesn’t look like much fun at all, so I might ignore the problem and hope it goes away.

I guess longer term the question is whether tools like C2HS really have a future. There are better approaches to FFI programming being developed by research groups (Manuel’s is one of them: this talk is pretty interesting) so maybe we should just wait until they’re ready for prime time. On the other hand, quite a lot of people seem to use C2HS, and it is pretty convenient.

One C2HS design decision I’ve recently had to modify a little is that C2HS tries to use only information available via the “official” Haskell FFI. Unfortunately, there are situations where that just isn’t enough. The recent changes to marshal C99 _Bool values are a case in point. In order to determine offsets into structures containing _Bool members, you need to know how big a _Bool is. Types that are marshalled by the Haskell FFI are all instances of Storable, so you can just use the size method from Storable for this. However, the Haskell FFI doesn’t know anything about _Bool, so you end up having to “query” the C compiler for the information by generating a little C test program that you compile and run. (You can find out which C compiler to use from the output of ghc --info, which C2HS thus needs to run first.) This is all pretty nasty, but there’s no obvious other way to do it.

This makes me think, since I’m having to do this anyway, that it might be worth reorganising some of C2HS’s structure member offset calculation code to use the same sort of “query the C compiler” approach. There are some cases (e.g. structures within structures) where it’s just not possible to reliably calculate structure member offsets from the size and alignment information available through the Haskell FFI – the C compiler is free to insert padding between structure members, and you can’t work out just by looking when a particular compiler is going to do that. Generating little C test programs and compiling and running them allows you to get the relevant information “straight from the horse’s mouth”… (I don’t know whether this idea really has legs, but it’s one thing I’m thinking about.)

March 30, 2015 09:43 AM

FP Complete

Announcing: open sourcing of ide-backend

After many years of development, FP Complete is very happy and proud to announce the open sourcing of ide-backend. ide-backend has served as the basis for our development of School of Haskell and FP Haskell Center, by providing a high level, easy to use, and robust wrapper around the GHC API. We'd like to express our thanks to Duncan Coutts, Edsko de Vries, and Mikolaj Konarski for implementing this library on our behalf.

ide-backend provides a means to do a variety of tasks involving GHC, such as:

  • Compile code
  • Get compile error messages
  • Submit updated code for recompilation
  • Extract type information
  • Find usage locations
  • Run generated bytecode
  • Produce optimized executables

For much more information, you can see the Haddock documentation.

Members of the Commercial Haskell Special Interest Group have encouraged us to open source more of our work, to help them build more tools useful to real-world developers. We're happy to contribute.

ide-backend opens the possibility for many new and interesting tools. To give some ideas:

  • A basis for providing a fast development web server while working on a code base. The idea here is a generalized yesod devel, which compiles and runs your web application on each change.
  • Edward Kmett has a project idea of using ide-backend to extract and organize type information from a large number of packages
  • Editor plugins can be improved, simplified, and begin to share much more code than they do today
  • Lightweight tools for inspecting code
  • Refactoring tools

I've shared information about this repository with some maintainers of existing tools in the Haskell world already, and hopefully now, with the complete move to open source, we can get a much broader discussion going.

But today's release isn't just a code release; we also have demos! Edsko and Chris have been collaborating on some next-generation editor plugins, and have put together ide-backend-client with support for both Emacs and Atom. Chris has put together a screencast of his Emacs integration:

Screencast: https://www.youtube.com/embed/Cwi1p2CLW54

We also have an early prototype tool at FP Complete for inspecting a code base and getting type information, based on ide-backend, GHCJS, and React.

Demo video: https://www.youtube.com/embed/FI3u8uqZ2Q4

Where we go from here

Open sourcing this library is just the first step.

  • Duncan is planning on writing a blog post describing the architecture employed by this library.
  • Edsko's ide-backend-client project is a great place to continue making contributions and playing with new ideas.
  • FP Complete intends to release more of our ide-backend based tooling in the future as it matures.
  • I've asked for a GSoC proposal on a better development web server, and I'm sure other proposals would be great as well.
  • FP Complete and Well Typed are both currently maintaining this library, and we are happy to have new people join the team.

I'm excited to hear everyone's thoughts on this library, and look forward to seeing some awesome tools appear.

March 30, 2015 04:00 AM

March 29, 2015

Functional Jobs

Full-stack Software Engineer at Capital Match (Full-time)

Capital Match is a Singapore-based marketplace lending platform that enables SMEs to secure loans from private investors. Backed by a number of VCs and banking industry leaders, we have started our operations at the beginning of 2015 and are now embarking on the next stage of tech development.

We are looking for a software engineer interested in all aspects of the creation, growth and operations of a secure web-based platform: Front-end design and implementation, front-to-back features development, distributed deployment and automation in the cloud, build automation...

Our platform is primarily developed in Haskell with a ClojureScript frontend. A candidate with a good command of the functional programming paradigm and real web programming experience, even if not primarily in Haskell or ClojureScript, is welcome to apply. For the platform side of things, we want to keep things as simple as possible, and learning to use both languages is relatively easy.

Experience in web-based software development is essential, and minimum exposure to and understanding of XP practices (TDD, CI, Emergent Design, Refactoring, Peer review and programming, Continuous improvement) is expected. These are standard practices within the team. We therefore favour candidates who value simplicity and respect, who accept and give feedback, and who are generally team players.

Proposed salary range is USD 3000-4000 / month, depending on profile. We offer up to 1% equity in stock-options, subject to usual vesting scheme.

Our team is distributed so ability to work comfortably in a distributed setting is essential. However, please note this position is to be filled in Singapore so if you are not currently living there, relocation is necessary in the short-term. Visa sponsorship will be provided. European citizens who relocate to Singapore do not have to pay their home country taxes and the local tax rate in Singapore is around 2% (effective on the proposed salary range). Singapore is a great place to live, a vibrant city rich with diverse cultures, a very strong financial sector and a central position in Southeast Asia.

Get information on how to apply for this position.

March 29, 2015 11:42 AM

Full-stack Software Engineer (updated) at Capital Match (Full-time)

Overview

Capital Match (www.capital-match.com) is a Singapore-based peer-to-peer lending platform. We are backed by a number of VCs and banking industry leaders. We have started our operations at the beginning of 2015 and are steadily growing. Our online platform is already operational and we are now embarking on the next stage of tech development to be able to service large institutional investors. Our product enables SMEs to secure loans from professional investors. We are hiring a well-rounded full-stack software developer to join the team in Singapore.

Responsibilities

We are looking for a software engineer interested in all aspects of the creation, growth and operations of a secure web-based platform: Front-end design and implementation, front-to-back features development, distributed deployment and automation in the cloud, build and test automation etc.

Requirements

Our platform is primarily developed in Haskell with an Om/ClojureScript frontend. Obviously mastering both environments, while not essential, would be a plus. A candidate with a good command of the functional programming paradigm and Scala/OCaml/F#/Clojure/Lisp/Erlang/Javascript experience is also welcome to apply. For the platform side of things, we want to keep things as simple as possible, and learning to use both languages is relatively easy.

Experience in web-based software development is essential, and minimum exposure to and understanding of XP practices (TDD, CI, Emergent Design, Refactoring, Peer review and programming, Continuous improvement) is expected. These are standard practices within the team. We therefore favour candidates who value simplicity and respect, who accept and give feedback, and who are generally team players.

Our team is distributed so ability to work comfortably in a distributed setting is essential. However, please note this position is to be filled in Singapore so if you are not currently living there, relocation is necessary in the short-term.

Offer

We offer a combination of salary and equity depending on experience and skills:

Salary: USD 3,000-6,000 / month

Equity: 0.1-1% (subject to vesting)

European citizens who relocate to Singapore do not have to pay their home country taxes and the local tax rate in Singapore is around 2-3% (effective on the proposed salary range). Visa sponsorship will be provided.

Singapore is a great place to live, a vibrant city rich with diverse cultures, a very strong financial sector and a central position in Southeast Asia.

Get information on how to apply for this position.

March 29, 2015 11:42 AM

March 28, 2015

Paul Johnson

Google Maps on Android demands I let Google track me

I recently upgraded to Android 5.1 on my Nexus 10. One app I often use is Google Maps. This has a "show my location" button:

[screenshot: the "show my location" button]

When I clicked on this I got the following dialog box:

[screenshot: the dialog box]

Notice that I have two options: I either agree to let Google track me, or I cancel the request. There is no "just show my location" option.

As a matter of principle, I don't want Google to be tracking me. I'm aware that Google can offer me all sorts of useful services if I just let it know every little detail of my life, but I prefer to do without them. But now it seems that zooming in on my GPS-derived location has been added to the list of features I can't have. There is no technical reason for this; it didn't used to be the case. But Google has decided that as the price for looking at the map of where I am, I now have to tell them where I am all the time.

I'm aware that of course my cellphone company knows roughly where I am and who I talk to, and my ISP knows which websites I visit and can see my email (although unlike GMail I don't think they derive any information about me from the contents), and of course Google knows what I search for. But I can at least keep that information compartmentalised in different companies. I suspect that the power of personal data increases non-linearly with the volume and scope, so having one company know where I am and another company read my email means less loss of privacy than putting both location and email in the same pot.

Hey, Google, stop being evil!

by Paul Johnson ([email protected]) at March 28, 2015 02:59 PM

Joachim Breitner

An academic birthday present

Yesterday, which happened to be my 30th birthday, a small package got delivered to my office: The printed proceedings of last year's “Trends in Functional Programming” conference, where I published a paper on Call Arity (preprint). Although I doubt the usefulness of printed proceedings, it was a nicely timed birthday present.

Looking at the rather short table of contents – only 8 papers, after 27 presented and 22 submitted – I thought that this might mean that, with some luck, I might have chances to get the “Best student paper award”, which I presumed to be announced at the next iteration of the conference.

For no particular reason I was leisurely browsing through the book, and started to read the preface. And what do I read there?

Among the papers selected for these proceedings, two papers stood out. The award for Best Student Paper went to Joachim Breitner for his paper entitled Call Arity, and the award for Best Paper Overall went to Edwin Brady for his paper entitled Resource-dependent Algebraic Effects. Congratulations!

Now, that is a real nice birthday present! Not sure if I even would have found out about it, had I not thrown a quick glance at page V...

I hope that it is a good omen for my related ICFP'15 submission.

by Joachim Breitner ([email protected]) at March 28, 2015 01:13 PM

Gabriel Gonzalez

Algebraic side effects

Haskell differentiates itself from most other functional languages by letting you reason mathematically about programs with side effects. This post begins with a pure Haskell example that obeys algebraic equations and then generalizes the example to impure code that still obeys the same equations.

Algebra

In school, you probably learned algebraic rules like this one:

f * (xs + ys) = (f * xs) + (f * ys)

Now let's make the following substitutions:

  • Replace the mathematical multiplication with Haskell's map function
  • Replace the mathematical addition with Haskell's ++ operator

These two substitutions produce the following Haskell equation:

map f (xs ++ ys) = (map f xs) ++ (map f ys)

In other words, if you concatenate the list xs with the list ys and then map a function named f over the combined list, the result is indistinguishable from mapping f over each list individually and then concatenating them.

Let's test this equation out using the Haskell REPL:

>>> map (+ 1) ([2, 3] ++ [4, 5])
[3,4,5,6]
>>> (map (+ 1) [2, 3]) ++ (map (+ 1) [4, 5])
[3,4,5,6]

Evaluation order

However, the above equation does not hold in most other languages. These other languages use function evaluation to trigger side effects, and therefore if you change the order of evaluation you change the order of side effects.

Let's use Scala to illustrate this. Given the following definitions:

>>> def xs() = { print("!"); Seq(1, 2) }
>>> def ys() = { print("?"); Seq(3, 4) }
>>> def f(x : Int) = { print("*"); x + 1 }

.. the order of side effects differs depending on whether I concatenate or map first:

>>> (xs() ++ ys()).map(f)
!?****res0: Seq[Int] = List(2, 3, 4, 5)
>>> (xs().map(f)) ++ (ys().map(f))
!**?**res1: Seq[Int] = List(2, 3, 4, 5)

On line #1, the two lists are evaluated first, printing "!" and "?", followed by evaluating the function f on all four elements, printing "*" four times. On line #2, we call f on each element of xs before beginning to evaluate ys. Since evaluation order matters in Scala, we get two different programs which print the punctuation characters in a different order.

The solution

Haskell, on the other hand, strictly separates evaluation order from side effect order using a two-phase system. In the first phase you evaluate your program to build an abstract syntax tree of side effects. In the second phase the Haskell runtime executes the tree by interpreting it for its side effects. This phase distinction ensures that algebraic laws continue to hold even in the presence of side effects.
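
As a toy illustration of the two-phase idea (the Effect type and run interpreter below are made up for this sketch, not a real library): phase one builds an ordinary pure value describing side effects, and nothing happens until the interpreter runs it.

data Effect = Print String | Effect :>> Effect | Nop

-- Phase two: interpret the pure description, actually performing the effects.
run :: Effect -> IO ()
run (Print s) = putStr s
run (a :>> b) = run a >> run b
run Nop       = return ()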

To illustrate this, we'll generalize our original Haskell code to interleave side effects with list elements and show that it still obeys the same algebraic properties as before. The only difference from before is that we will:

  • Generalize pure lists to their impure analog, ListT
  • Generalize functions to impure functions that wrap side effects with lift
  • Generalize (++) (list concatenation) to (<|>) (ListT concatenation)
  • Generalize map to (=<<), which streams elements through an impure function

This means that our new equation becomes:

-- f  *  (xs  +  ys) = (f  *  xs)  +  (f  *  ys)
f =<< (xs <|> ys) = (f =<< xs) <|> (f =<< ys)

You can read this as saying: if we concatenate xs and ys and then stream their values through the impure function f, the behavior is indistinguishable from streaming each individual list through f first and then concatenating them.

Let's test this equation out with some sample definitions for xs, ys, and f that mirror their Scala analogs:

>>> import Control.Applicative
>>> import Pipes
>>> let xs = do { lift (putChar '!'); return 1 <|> return 2 }
>>> let ys = do { lift (putChar '?'); return 3 <|> return 4 }
>>> let f x = do { lift (putChar '*'); return (x + 1) }
>>> runListT (f =<< (xs <|> ys)) -- Note: `runListT` discards the result
!**?**>>> runListT ((f =<< xs) <|> (f =<< ys))
!**?**>>>

The resulting punctuation order is identical. Many people mistakenly believe that Haskell's mathematical elegance breaks down when confronted with side effects, but nothing could be further from the truth.

Conclusion

Haskell preserves algebraic equations even in the presence of side effects, which simplifies reasoning about impure code. Haskell separates evaluation order from side effect order so that you spend less time reasoning about evaluation order and more time reasoning about your program logic.

by Gabriel Gonzalez ([email protected]) at March 28, 2015 12:21 AM

March 27, 2015

Jan Stolarek

The basics of coinduction

I don’t remember when I first heard the terms “coinduction” and “corecursion” but it must have been quite long ago. I had this impression that they are yet another of these difficult theoretical concepts and that I should learn about them one day. That “one day” happened recently while reading chapter 5 of “Certified Programming with Dependent Types”. It turns out that basics of coinduction are actually quite simple. In this post I’ll share with you what I already know on the subject.

Recursion in Haskell

Let’s begin by looking at Haskell, because it is a good example of a language that does not formalize coinduction in any way. Two features of Haskell are of interest to us. The first is laziness. Thanks to Haskell being lazy we can write definitions like these (in GHCi):

ghci> let ones = 1 : ones
ghci> let fib = zipWith (+) (1:fib) (1:1:fib)

ones is – as the name implies – an infinite sequence (list) of ones. fib is a sequence of Fibonacci numbers. Both these definitions produce infinite lists but we can use these definitions safely because laziness allows us to force a finite number of elements in the sequence:

ghci> take 5 ones
[1,1,1,1,1]
ghci> take 10 fib
[2,3,5,8,13,21,34,55,89,144]

Now consider this definition:

ghci> let inf = 1 + inf

No matter how hard we try there is no way to use the definition of inf in a safe way. It always causes an infinite loop:

ghci> (0 /= inf)
*** Exception: <<loop>>

The difference between the definitions of ones and fib and the definition of inf is that the former use what is called guarded recursion. The term guarded comes from the fact that the recursive reference to self is hidden under a datatype constructor (or: guarded by a constructor). The way lazy evaluation is implemented gives a guarantee that we can stop the recursion by not evaluating the recursive constructor argument. This kind of infinite recursion can also be called productive recursion, which means that although the recursion is infinite, each recursive call is guaranteed to produce something (in my examples either a 1 or the next Fibonacci number). By contrast, the recursion in the definition of inf is not guarded or productive in any way.

Haskell happily accepts the definition of inf even though it is completely useless. When we write Haskell programs we of course don’t want them to fall into silly infinite loops, but the only tool we have to prevent us from writing such code is our intelligence. The situation changes when it comes to…

Dependently-typed programming languages

These languages deeply care about termination. By “termination” I mean ensuring that a program written by the user is guaranteed to terminate for any input. I am aware of two reasons why these languages care about termination. The first reason is theoretical: without termination the resulting language is inconsistent as a logic. This happens because a non-terminating term can prove any proposition. Consider this non-terminating Coq definition:

Fixpoint evil (A : Prop) : A := evil A.

If that definition was accepted we could use it to prove any proposition. Recall that when it comes to viewing types as proofs and programs as evidence, “proving a proposition” means constructing a term of a given type. evil would allow us to construct a term inhabiting any type A. (Prop is a kind of logical propositions, so A is a type.) Since dependently-typed languages aim to be consistent logics, they must reject non-terminating programs. The second reason for checking termination is practical: dependently typed languages admit functions in type signatures. If we allowed non-terminating functions then typechecking would also become non-terminating, and again this is something we don’t want. (Note that Haskell gives you UndecidableInstances, which can cause typechecking to fall into an infinite loop.)
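
For contrast, Haskell happily accepts the analogous definition, a term that claims to inhabit every type, which is exactly what a consistent logic must rule out:

evil :: a
evil = evil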

Now, if you paid attention in your Theoretical Computer Science classes, all of this should ring a bell: the halting problem! The halting problem says that the problem of determining whether a given Turing machine (read: a given computer program) will ever terminate is undecidable. So how is it possible that languages like Agda, Coq or Idris can answer that question? That’s simple: they are not Turing-complete (or at least their terminating subsets are not Turing-complete). They prohibit the user from using some constructs, probably the most important one being general recursion. Think of general recursion as any kind of recursion imaginable. Dependently typed languages require structural recursion on subterms of the arguments. That means that if a function receives an argument of an inductive data type (think: algebraic data type/generalized algebraic data type) then you can only make recursive calls on terms that are syntactic subcomponents of the argument. Consider this definition of map in Idris:

map : (a -> b) -> List a -> List b
map f []      = []
map f (x::xs) = f x :: map f xs

In the second equation we use pattern matching to deconstruct the list argument. The recursive call is made on xs, which is structurally smaller than the original argument. This guarantees that any call to map will terminate. There is a silent assumption here that the List a argument passed to map is finite, but with the rules given so far it is not possible to construct an infinite list.

So we just eliminated non-termination by limiting what can be done with recursion. This means that our Haskell definitions of ones and fib would not be accepted in a dependently-typed language because they don’t recurse on an argument that gets smaller and as a result they construct an infinite data structure. Does that mean we are stuck with having only finite data structures? Luckily, no.

Coinduction to the rescue

Coinduction provides a way of defining and operating on infinite data structures as long as we can prove that our operations are safe, that is they are guarded and productive. In what follows I will use Coq because it seems that it has better support for coinduction than Agda or Idris (and if I’m wrong here please correct me).

Coq, Agda and Idris all require that a datatype that can contain infinite values has a special declaration. Coq uses the CoInductive keyword instead of the Inductive keyword used for standard inductive data types. In a similar fashion Idris uses codata instead of data, while Agda requires an ∞ annotation on a coinductive constructor argument.

Let’s define a type of infinite nat streams in Coq:

CoInductive stream : Set :=
| Cons : nat -> stream -> stream.

I could have defined a polymorphic stream but for the purpose of this post stream of nats will do. I could have also defined a Nil constructor to allow finite coinductive streams – declaring data as coinductive means it can have infinite values, not that it must have infinite values.

Now that we have infinite streams let’s revisit our examples from Haskell: ones and fib. ones is simple:

CoFixpoint ones : stream := Cons 1 ones.

We just had to use the CoFixpoint keyword to tell Coq that our definition will be corecursive, and it is happily accepted even though a similar recursive definition (i.e. using the Fixpoint keyword) would be rejected. Allow me to quote directly from CPDT:

whereas recursive definitions were necessary to use values of recursive inductive types effectively, here we find that we need co-recursive definitions to build values of co-inductive types effectively.

That one sentence pins down an important difference between induction and coinduction.

Now let’s define zipWith and try our second example fib:

CoFixpoint zipWith (f : nat -> nat -> nat) (a : stream)
                   (b : stream) : stream :=
  match a, b with
    | Cons x xs, Cons y ys => Cons (f x y) (zipWith f xs ys)
  end.
 
CoFixpoint fib : stream :=
   zipWith plus (Cons 1 fib) (Cons 1 (Cons 1 fib)).

Unfortunately this definition is rejected by Coq due to “unguarded recursive call”. What exactly goes wrong? Coq requires that all recursive calls in a corecursive definition are:

  1. direct arguments to a data constructor
  2. not inside function arguments

Our definition of fib violates the second condition – both recursive calls to fib are hidden inside arguments to zipWith function. Why does Coq enforce such a restriction? Consider this simple example:

Definition tl (s : stream) : stream :=
  match s with
    | Cons _ tl' => tl'
  end.
 
CoFixpoint bad : stream := tl (Cons 1 bad).

tl is a standard tail function that discards the first element of a stream and returns its tail. Just like our definition of fib, the definition of bad places the corecursive call inside a function argument. I hope it is easy to see that accepting the definition of bad would lead to non-termination – inlining the definition of tl and simplifying it leads us to:

CoFixpoint bad : stream := bad.

and that is bad. You might be thinking that the definition of bad really has no chance of working, whereas our definition of fib could in fact be run safely without the risk of non-termination. So how do we persuade Coq that our corecursive definition of fib is in fact valid? Unfortunately there seems to be no simple answer. What was meant to be a simple exercise in coinduction turned out to be a real research problem. This past Monday I spent well over an hour with my friend staring at the code and trying to come up with a solution. We didn’t find one, but instead we found a really nice paper “Using Structural Recursion for Corecursion” by Yves Bertot and Ekaterina Komendantskaya. The paper presents a way of converting definitions like fib to a guarded and productive form accepted by Coq. Unfortunately the converted definition loses the linear computational complexity of the original definition, so the conversion method is far from perfect. I encourage you to read the paper. It is not long and is written in a very accessible way. Another set of possible solutions is given in chapter 7 of CPDT, but I am very far from labelling them as “accessible”.

I hope this post demonstrates that the basic ideas behind coinduction are actually quite simple. For me this whole subject of coinduction looks really fascinating, and I plan to dive deeper into it. I already have my eyes set on several research papers about coinduction, so there’s a good chance that I’ll write more about it in future posts.

by Jan Stolarek at March 27, 2015 03:21 PM

Well-Typed.Com

Qualified Goals in the Cabal Solver

When you ask cabal-install to install one or more packages, it needs to solve a constraint satisfaction problem: select a version for each of the packages you want to install, plus all their dependencies, such that all version constraints are satisfied. Those version constraints come both from the user (“please install lens version 4”) and from the packages themselves (“this package relies on mtl between version 2.1 and 2.2”). Internally cabal-install uses a modular constraint solver, written by Andres Löh. It was first presented at the Haskell Implementor’s Workshop in 2011, and is based on the paper Modular lazy search for Constraint Satisfaction Problems.

For the Industrial Haskell Group, Well-Typed has recently extended the constraint solver to deal with qualified goals. In this blog post we will explain how this constraint solver works by means of a running example, and look at how the new qualified goals change things.

Overview of the Solver

The ability to decouple generating solutions from finding the right one is one of the classical reasons why functional programming matters, and cabal-install’s constraint solver makes very heavy use of this. It first builds a tree with lots of “solutions”; solutions in quotes because many of these solutions will not be valid. It then validates these solutions, marking any invalid ones. This might still leave many possible solutions, so after this we apply some preferences (we prefer newer packages over older ones for example) and heuristics (we want to pick the version of base that is already installed, no point considering others) and then it uses the first solution it finds (if any).

It is very important to realize throughout all this that these trees are never built in their entirety. We depend on laziness to only evaluate as much as necessary. A key design principle throughout the solver is that we must have enough information at each node in the tree to be able to make local decisions. Any step that would require looking at the tree in its entirety would be a big no-no.

In the remainder of this section we will see what these different steps look like using a running example.

Building the tree

Suppose we have a package database with base version 4.0, mtl versions 1.0 and 2.0, both of which depend on base, and a package foo that depends on both base and mtl version 2.0.

When we ask cabal to install package foo, it constructs a search tree that looks something like this:
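
Roughly, the tree has the following shape (an ASCII sketch standing in for the diagram, reconstructed from the description below):

G: choose goal (only foo)
`- P: choose version of foo (only 1.0)
   `- G: solve mtl first, or base first?
      |- mtl first:  P mtl (1.0 or 2.0) -> G -> P base (4.0) -> D
      `- base first: P base (4.0) -> G -> P mtl (1.0 or 2.0) -> D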

GoalChoice nodes, shown as G nodes in the diagram, represent points where we decide on which package to solve for next. Initially we have only one option: we need to solve for package foo. Similarly, PChoice nodes P represent points where we decide on a package version. Since there is only one version of foo available, we again only have one choice.

Once we have chosen foo version 1.0, we need to solve for foo's dependencies: base and mtl. We don’t know which we should solve for first; the order in which we consider packages may affect how quickly we find a solution, and which solution we return (since we will eventually report the first solution we find). When we build the tree we essentially make an arbitrary decision (depending on which order we happen to find the dependencies), and we record the decision using a GoalChoice node. Later we can traverse the tree and apply local heuristics to these GoalChoice nodes (for instance, we might want to consider base before mtl).

In the remainder of the tree we then pick a version for mtl (here we do have a choice in version), and then a version for base, or the other way around. Note that when we build the tree, package constraints are not yet applied: in the tree so far there is nothing that reflects the fact that foo wants version 2.0 of mtl, and every path ends with a Done node D, indicating success. Indeed, we would get precisely the same tree if we have a package DB

where foo depends on either version of mtl.

Validation

Once we have built the tree, we then walk over the tree and verify package constraints. As soon as we detect a violation of a constraint on a given path we replace that node in the tree with a failure node. For the above example this gives us the following tree:

Paths through the tree that lead to failure are not removed from the tree, but are replaced by explicit failure F. This helps with generating a good error message if we fail to find a solution. In this case, both failures are essentially the same problem: we cannot pick version 1.0 for mtl because foo needs version 2.0.
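
Putting the node types seen so far together, a simplified Haskell sketch of the search tree might look like this (illustrative names only; the real types in cabal-install carry considerably more information):

data Tree
  = GoalChoice  [(Goal, Tree)]     -- G: decide which goal to solve next
  | PChoice Pkg [(Version, Tree)]  -- P: decide on a version for a package
  | Done                           -- D: all goals solved on this path
  | Fail ConflictSet               -- F: a constraint violation, with its cause

type Goal        = String
type Pkg         = String
type Version     = String
type ConflictSet = [Goal]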

Heuristics and Preferences

After validation we apply a number of heuristics to the tree. For example, we prefer to pick a version of base early because there is generally only one version of base available in the system. In addition, we apply user preferences; for example, we try newer versions of packages before older versions. For our example this gives us

Finding a solution

After applying the heuristics we throw away all but the first choice in each GoalChoice node (but keeping all choices in the PChoice nodes)

and traverse the tree depth first to find a solution, returning the first solution we find. In our running example, this means that we will use version 1.0 of foo, 4.0 of base and 2.0 of mtl.

Whenever we encounter a Fail node we backtrack. This backtracking is aided by so-called conflict sets. I haven’t shown these conflict sets in the diagrams, but each Fail node in the tree is annotated with a conflict set which records why this path ended in failure. In our running example the conflict set for both Fail nodes is the set {foo, mtl}, recording that there is a conflict between the versions of mtl and the version of foo that we picked. The conflict set is used to guide the backtracking; any choice that we encounter while backtracking that does not involve any variables in the conflict set does not need to be reconsidered, as it would lead to the same failure.
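
In terms of the sketch Tree type above, conflict-set-guided backtracking might look roughly like this (a simplification: among other things, the real solver also accumulates conflict sets across failing siblings):

search :: Tree -> Either ConflictSet [(Pkg, Version)]
search Done            = Right []
search (Fail cs)       = Left cs
search (GoalChoice gs) = case gs of
  []         -> Left []
  (_, t) : _ -> search t             -- after pruning, one goal order remains
search (PChoice pkg vs) = go vs
  where
    go []              = Left [pkg]  -- every version of pkg failed
    go ((v, t) : rest) = case search t of
      Right sol -> Right ((pkg, v) : sol)
      Left cs
        | pkg `elem` cs -> go rest   -- our choice was involved: try the next version
        | otherwise     -> Left cs   -- unrelated failure: backjump past this node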

If we cannot find any solution, then we must report an error. Reporting a good error here however is difficult: after all, we have a potentially very large tree, with lots of different kinds of failures. Constructing an informative error from this is difficult, and this is one area where cabal-install might still be improved.

Qualified goals

Motivation

Normally cabal-install can only pick a single version for each package. For example, if we have a situation

we cannot install package D because it would require installing both versions 1.0 and 2.0 of package A (this is known as the diamond problem).

Setup scripts

Cabal packages can however have their own custom Setup scripts, when necessary, which are Cabal’s equivalent of the traditional ./configure scripts. In principle there should be no problem building these Setup scripts using different (and possibly conflicting) dependencies than the library itself; after all, the Setup script is completely independent from the library.

From Cabal 1.23 and up these setup scripts can have their own list of dependencies. Let’s suppose that in our running example the Setup script of foo has a dependency on any version of mtl:

We want to allow the setup script to be compiled against a different version of mtl than foo itself, but of course we would prefer not to, in order to avoid unnecessary additional compilation time.

Qualification

In order to allow picking a different version, we introduce qualified goals for each of the setup dependencies. In our running example, this means that cabal-install will now try to solve for the variables foo, mtl, and base, as well as foo.setup.mtl and foo.setup.base. This makes it possible to pick one version for mtl and another for foo.setup.mtl.

Linking

But how do we make sure that we pick the same version when possible? One (non-) option is to look at the entire search tree and find the solution that installs the smallest number of packages. While that might work in theory, it violates the earlier design principle we started with: we only ever want to make local decisions. The search tree can be quite large; indeed, the addition of the single setup dependency to foo already makes the tree much larger, as we shall see shortly. We certainly never want to inspect the entire search tree (or, worse yet, have the entire search tree in memory).

Instead, we introduce the concept of linking. This means that when we select a version for foo.setup.mtl (say), in addition to being able to pick either version 1.0 or 2.0, we can also say “link the version of foo.setup.mtl to the version of mtl” (provided that we already picked a version for mtl).

Then we can make local decisions: when we pick a version, we prefer to link whenever possible. Of course, this is subject to certain constraints. In the remainder of this section we shall see how qualified goals and linking works using our running example, and identify some of these linking constraints.

Building the tree

The code to build the initial tree is modified to introduce qualified constraints for setup dependencies, but does not itself deal with linking. Instead, it builds the tree as normal, and we then add additional linking options into the tree as a separate phase.

The search tree for our running example gets much bigger now due to combinational explosion: we have two additional variables to solve for, and linking means we have more choices for package versions. Here’s part of the initial search tree:

Let’s follow along the spine of this tree to see what’s going on. We first consider foo and pick version 1.0, just like before (there is no other choice). Then, on this path, we first consider foo.setup.mtl, and we have two options: we can either pick version 1.0 or version 2.0. We pick version 1.0, and continue with foo.setup.base and pick version 4.0 (only one option).

But when we now consider mtl things are more interesting: in addition to picking versions 1.0 and 2.0, we can also decide to link the version of mtl against the version of foo.setup.mtl (indicated by a red label in the tree). Similarly, when we pick a version for base, we can now choose to link base against the version of foo.setup.base in addition to picking version 4.0.

Validation

When we link mtl against foo.setup.mtl, we are really saying “please use the exact same package instance for both mtl and foo.setup.mtl”. This means that the dependencies of mtl must also be linked against the dependencies of foo.setup.mtl.

In addition, ghc enforces a so-called single instance restriction. This means that (in a single package database) we can only have one instance of a particular package version. So, for example, we can have both mtl version 1.0 and mtl version 2.0 installed in the same package database, but we cannot have two instances of mtl version 2.0 (for instance, one linked against version 3.0 of transformers and one linked against version 4.0 of transformers) installed at the same time. Lifting this restriction is an important step towards solving Cabal Hell, but for now we have to enforce it. In our terminology, this means that when we have to solve for both (say) mtl and foo.setup.mtl, we can either pick two different versions, or we can link one to the other, but we cannot pick the same version for both goals.

So, in addition to the regular validation phase which verifies package constraints, we introduce a second validation phase that verifies these kinds of “linking constraints”. We end up with a tree such as

In this part of the tree, the two failures for mtl are because we picked version 1.0 for foo.setup.mtl, but since foo itself wants mtl version 2.0, we cannot pick version 1.0 for goal mtl nor can we link mtl to foo.setup.mtl. The two failures for base are due to the single instance restriction: since we picked version 4.0 for foo.setup.base, we must link base to foo.setup.base.

Heuristics

If we picked the first solution we found in the tree above, we would select version 1.0 of mtl for foo’s setup script and version 2.0 of mtl for foo itself. While that is not wrong per se, it means we do more work than necessary. So, we add an additional heuristic that says that we should consider setup dependencies after regular (library) dependencies. After we apply this heuristic (as well as all the other heuristics) we end up with

In this part of the tree we see one failure for foo.setup.mtl and two failures for foo.setup.base. The failure for foo.setup.mtl comes from the single instance restriction again: since we picked version 2.0 for mtl, we cannot pick an independent instance for foo.setup.mtl. The failure for foo.setup.base on the right is due to the same reason, but there is an additional reason for the left failure: since we chose to link foo.setup.mtl to mtl, its dependencies (in this case, foo.setup.base) must also be linked.

Finding a solution

As before, after applying the heuristics we prune

and we report the first solution we find. In this case, this means that we will pick version 2.0 for mtl and link foo.setup.mtl to mtl, and foo.setup.base to base.

Conclusions

Although we skimmed over many details, we covered the most important design principles behind the solver and the new implementation of qualified goals. One thing we did not talk about in this blog post are flag assignments; when the solver needs to decide on the value for a flag, it introduces a FChoice node into the tree with two subtrees for true and false, and then proceeds as normal. When we link package P to package Q, we must then also verify that their flag assignments match.

Qualified goals and linking are now used for setup dependencies, but could also be used to deal with private dependencies, to split library dependencies from executable dependencies, to deal with the base-3 shim package, and possibly other purposes. The modular design of the solver means that such features can be added (mostly) as independent units of code. That doesn’t of course necessarily mean the code is also easy; making sure that all decisions remain local can be a subtle problem. Hopefully this blog post will make the solver code easier to understand and to contribute to.

by edsko at March 27, 2015 10:17 AM

Qualified Goals in the Cabal Solver

When you ask cabal-install to install one or more packages, it needs to solve a constraint satisfaction problem: select a version for each of the packages you want to install, plus all their dependencies, such that all version constraints are satisfied. Those version constraints come both from the user (“please install lens version 4”) and from the packages themselves (“this package relies on mtl between version 2.1 and 2.2”). Internally cabal-install uses a modular constraint solver, written by Andres Löh. It was first presented at the Haskell Implementor’s Workshop in 2011, and is based on the paper Modular lazy search for Constraint Satisfaction Problems.

For the Industrial Haskell Group, Well-Typed has recently extended the constraint solver to deal with qualified goals. In this blog post we will explain how this constraint solver works by means of a running example, and look at how the new qualified goals change things.

Overview of the Solver

The ability to be able to decouple generating solutions from finding the right one is one of the classical reasons why functional programming matters, and cabal-install’s constraint solver makes very heavy use of this. It first builds a tree with lots of “solutions”; solutions in quotes because many of these solutions will not be valid. It then validates these solutions, marking any invalid ones. This might still leave many possible solutions, so after this we apply some preferences (we prefer newer packages over older ones for example) and heuristics (we want to pick the version of base that is already installed, no point considering others) and then it uses the first solution it finds (if any).

It is very important to realize throughout all this that these trees are never built in their entirety. We depend on laziness to only evaluate as much as necessary. A key design principle throughout the solver is that we must have enough information at each node in the tree to be able to make local decisions. Any step that would require looking at the tree in its entirety would be a big no-no.

In the remainder of this section we will see what these different steps look like using a running example.

Building the tree

Suppose we have a package database with base version 4.0, mtl versions 1.0 and 2.0, both of which depend on base, and a package foo that depends on both base and mtl version 2.0.

When we ask cabal to install package foo, it constructs a search tree that looks something like this:

GoalChoice nodes, shown as G nodes in the diagram, represent points where we decide on which package to solve for next. Initially we have only one option: we need to solve for package foo. Similarly, PChoice nodes P represent points where we decide on a package version. Since there is only one version of foo available, we again only have one choice.

Once we have chosen foo version 1.0, we need to solve for foo's dependencies: base and mtl. We don’t know which we should solve for first; the order in which we consider packages may affect how quickly we find a solution, and which solution we return (since we will eventually report the first solution we find). When we build the tree we essentially make an arbitrary decision (depending on the order in which we happen to find the dependencies), and we record the decision using a GoalChoice node. Later we can traverse the tree and apply local heuristics to these GoalChoice nodes (for instance, we might want to consider base before mtl).

In the remainder of the tree we then pick a version for mtl (here we do have a choice in version), and then a version for base, or the other way around. Note that when we build the tree, package constraints are not yet applied: in the tree so far there is nothing that reflects the fact that foo wants version 2.0 of mtl, and every path ends with a Done node D, indicating success. Indeed, we would get precisely the same tree if we have a package DB

where foo depends on either version of mtl.

Validation

Once we have built the tree, we then walk over the tree and verify package constraints. As soon as we detect a violation of a constraint on a given path we replace that node in the tree with a failure node. For the above example this gives us the following tree:

Paths through the tree that lead to failure are not removed from the tree, but are replaced by explicit failure F. This helps with generating a good error message if we fail to find a solution. In this case, both failures are essentially the same problem: we cannot pick version 1.0 for mtl because foo needs version 2.0.
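To make this concrete, here is a heavily simplified sketch of such a search tree as a Haskell datatype. The names and representations are illustrative only (the real tree type in cabal-install carries much more information per node), but the four kinds of node correspond to the G, P, D and F nodes described above:

type PackageName = String
type Version     = [Int]

-- The variables involved in a failure; used to guide backtracking (see below).
type ConflictSet = [PackageName]

data Tree
  = GoalChoice [(PackageName, Tree)]        -- G: decide which goal to solve next
  | PChoice PackageName [(Version, Tree)]   -- P: decide on a version for a package
  | Done                                    -- D: a complete candidate solution
  | Fail ConflictSet                        -- F: a constraint violation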

Heuristics and Preferences

After validation we apply a number of heuristics to the tree. For example, we prefer to pick a version of base early because there is generally only one version of base available in the system. In addition, we apply user preferences; for example, we try newer versions of packages before older versions. For our example this gives us

Finding a solution

After applying the heuristics we throw away all but the first choice in each GoalChoice node (but keeping all choices in the PChoice nodes)

and traverse the tree depth first to find a solution, returning the first solution we find. In our running example, this means that we will use version 1.0 of foo, 4.0 of base and 2.0 of mtl.
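In terms of the Tree sketch above, the traversal might look something like this (a sketch only; msum on Maybe gives us “try the alternatives in order and return the first success”):

import Control.Monad (msum)

-- Depth-first search for the first complete solution, re-using the Tree
-- sketch above. Laziness ensures we only ever force the parts of the tree
-- that we actually visit.
firstSolution :: Tree -> Maybe [(PackageName, Version)]
firstSolution = go []
  where
    go acc Done                 = Just (reverse acc)
    go _   (Fail _)             = Nothing
    go acc (GoalChoice goals)   = msum [ go acc t             | (_, t) <- goals   ]
    go acc (PChoice pn choices) = msum [ go ((pn, v) : acc) t | (v, t) <- choices ]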

Whenever we encounter a Fail node we backtrack. This backtracking is aided by so-called conflict sets. I haven’t shown these conflict sets in the diagrams, but each Fail node in the tree is annotated with a conflict set which records why this path ended in failure. In our running example the conflict set for both Fail nodes is the set {foo, mtl}, recording that there is a conflict between the versions of mtl and the version of foo that we picked. The conflict set is used to guide the backtracking; any choice that we encounter while backtracking that does not involve any variables in the conflict set does not need to be reconsidered, as it would lead to the same failure.

If we cannot find any solution, then we must report an error. Reporting a good error here however is difficult: after all, we have a potentially very large tree, with lots of different kinds of failures. Constructing an informative error from this is difficult, and this is one area where cabal-install might still be improved.

Qualified goals

Motivation

Normally cabal-install can only pick a single version for each package. For example, if we have a situation

we cannot install package D because it would require installing both versions 1.0 and 2.0 of package A (this is known as the diamond problem).

Setup scripts

Cabal packages can however have their own custom Setup scripts, when necessary, which are Cabal’s equivalent of the traditional ./configure scripts. In principle there should be no problem building these Setup scripts using different (and possibly conflicting) dependencies than the library itself; after all, the Setup script is completely independent from the library.

From Cabal 1.23 and up these setup scripts can have their own list of dependencies. Let’s suppose that in our running example the Setup script of foo has a dependency on any version of mtl:

We want to allow the setup script to be compiled against a different version of mtl than foo itself, but of course we would prefer not to, in order to avoid unnecessary additional compilation time.

Qualification

In order to allow picking a different version, we introduce qualified goals for each of the setup dependencies. In our running example, this means that cabal-install will now try to solve for the variables foo, mtl, and base, as well as foo.setup.mtl and foo.setup.base. This makes it possible to pick one version for mtl and another for foo.setup.mtl.
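One way to picture this (a made-up sketch, not cabal-install’s actual types, re-using PackageName from the earlier sketch) is to pair every solver variable with a qualifier recording on whose behalf we are solving for it:

-- A qualified goal: which package, and whose dependency it is.
data Qualifier
  = Unqualified          -- an ordinary (library) dependency
  | Setup PackageName    -- a setup-script dependency of the given package
  deriving (Eq, Show)

data QGoal = QGoal Qualifier PackageName
  deriving (Eq, Show)

-- foo's library dependency on mtl and its setup script's dependency on mtl
-- are now distinct goals, and may be assigned different versions:
mtlGoal, setupMtlGoal :: QGoal
mtlGoal      = QGoal Unqualified "mtl"
setupMtlGoal = QGoal (Setup "foo") "mtl"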

Linking

But how do we make sure that we pick the same version when possible? One (non-) option is to look at the entire search tree and find the solution that installs the smallest number of packages. While that might work in theory, it violates the earlier design principle we started with: we only ever want to make local decisions. The search tree can be quite large; indeed, the addition of the single setup dependency to foo already makes the tree much larger, as we shall see shortly. We certainly never want to inspect the entire search tree (or, worse yet, have the entire search tree in memory).

Instead, we introduce the concept of linking. This means that when we select a version for foo.setup.mtl (say), in addition to being able to pick either version 1.0 or 2.0, we can also say “link the version of foo.setup.mtl to the version of mtl” (provided that we already picked a version for mtl).

Then we can make local decisions: when we pick a version, we prefer to link whenever possible. Of course, this is subject to certain constraints. In the remainder of this section we shall see how qualified goals and linking work using our running example, and identify some of these linking constraints.
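Concretely, the options at a version-choice node can now be thought of as coming in two flavours (again just a sketch, re-using Version and QGoal from above):

data POption
  = PickVersion Version   -- install an independent instance of this version
  | LinkTo QGoal          -- re-use the exact instance chosen for that goal
  deriving (Eq, Show)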

Building the tree

The code to build the initial tree is modified to introduce qualified constraints for setup dependencies, but does not itself deal with linking. Instead, it builds the tree as normal, and we then add additional linking options into the tree as a separate phase.

The search tree for our running example gets much bigger now due to combinatorial explosion: we have two additional variables to solve for, and linking means we have more choices for package versions. Here’s part of the initial search tree:

Let’s follow along the spine of this tree to see what’s going on. We first consider foo and pick version 1.0, just like before (there is no other choice). Then, on this path, we first consider foo.setup.mtl, and we have two options: we can either pick version 1.0 or version 2.0. We pick version 1.0, and continue with foo.setup.base and pick version 4.0 (only one option).

But when we now consider mtl things are more interesting: in addition to picking versions 1.0 and 2.0, we can also decide to link the version of mtl against the version of foo.setup.mtl (indicated by a red label in the tree). Similarly, when we pick a version for base, we can now choose to link base against the version of foo.setup.base in addition to picking version 4.0.

Validation

When we link mtl against foo.setup.mtl, we are really saying “please use the exact same package instance for both mtl and foo.setup.mtl”. This means that the dependencies of mtl must also be linked against the dependencies of foo.setup.mtl.

In addition, ghc enforces a so-called single instance restriction. This means that (in a single package database) we can only have one instance of a particular package version. So, for example, we can have both mtl version 1.0 and mtl version 2.0 installed in the same package database, but we cannot have two instances of mtl version 2.0 (for instance, one linked against version 3.0 of transformers and one linked against version 4.0 of transformers) installed at the same time. Lifting this restriction is an important step towards solving Cabal Hell, but for now we have to enforce it. In our terminology, this means that when we have to solve for both (say) mtl and foo.setup.mtl, we can either pick two different versions, or we can link one to the other, but we cannot pick the same version for both goals.

So, in addition to the regular validation phase which verifies package constraints, we introduce a second validation phase that verifies these kinds of “linking constraints”. We end up with a tree such as

In this part of the tree, the two failures for mtl are because we picked version 1.0 for foo.setup.mtl, but since foo itself wants mtl version 2.0, we cannot pick version 1.0 for goal mtl nor can we link mtl to foo.setup.mtl. The two failures for base are due to the single instance restriction: since we picked version 4.0 for foo.setup.base, we must link base to foo.setup.base.

Heuristics

If we picked the first solution we found in the tree above, we would select version 1.0 of mtl for foo’s setup script and version 2.0 of mtl for foo itself. While that is not wrong per se, it means we do more work than necessary. So, we add an additional heuristic that says that we should consider setup dependencies after regular (library) dependencies. After we apply this heuristic (as well as all the other heuristics) we end up with

In this part of the tree we see one failure for foo.setup.mtl and two failures for foo.setup.base. The failure for foo.setup.mtl comes from the single instance restriction again: since we picked version 2.0 for mtl, we cannot pick an independent instance for foo.setup.mtl. The failure for foo.setup.base on the right is due to the same reason, but there is an additional reason for the left failure: since we chose to link foo.setup.mtl to mtl, its dependencies (in this case, foo.setup.base) must also be linked.

Finding a solution

As before, after applying the heuristics we prune

and we report the first solution we find. In this case, this means that we will pick version 2.0 for mtl and link foo.setup.mtl to mtl, and foo.setup.base to base.

Conclusions

Although we skimmed over many details, we covered the most important design principles behind the solver and the new implementation of qualified goals. One thing we did not talk about in this blog post is flag assignments; when the solver needs to decide on the value for a flag, it introduces an FChoice node into the tree with two subtrees for true and false, and then proceeds as normal. When we link package P to package Q, we must then also verify that their flag assignments match.

Qualified goals and linking are now used for setup dependencies, but could also be used to deal with private dependencies, to split library dependencies from executable dependencies, to deal with the base-3 shim package, and possibly other purposes. The modular design of the solver means that such features can be added (mostly) as independent units of code. That doesn’t of course necessarily mean the code is also easy; making sure that all decisions remain local can be a subtle problem. Hopefully this blog post will make the solver code easier to understand and to contribute to.

by edsko at March 27, 2015 10:17 AM

Danny Gratzer

Value vs Monomorphism Restriction

Posted on March 27, 2015
Tags: sml, haskell

I’m taking the undergraduate course on programming languages at CMU. For the record, I still get really excited about the novelty of taking a class (at school!) on programming languages. I’m easily made happy.

We started talking about System F and before long we touched on the value restriction. Specifically, how most people think of the value restriction incorrectly. To understand why this is, let’s first define the value restriction since it’s probably new to you if you don’t use SML.

The Value Restriction

In SML there are value level declarations just like in Haskell. We can write things like

    val x = 1
    val y = x + 1

and we end up with x bound to 1 and y bound to 2. Note that SML is strict so these bindings are evaluated right as we reach them. Also like in Haskell, SML has polymorphism, so we can write map

   fun map f [] = []
     | map f (x :: xs) = f x :: map f xs

And it gets the type ('a -> 'b) -> ('a list -> 'b list). Aside from minor syntactic differences, this is pretty much identical to what we’d write in Haskell:
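    map :: (a -> b) -> [a] -> [b]
    map _ []       = []
    map f (x : xs) = f x : map f xs

The value restriction concerns the intersection of these two things. In SML, the following should not compile under the standard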

    val x = rev []

This is because SML requires that all polymorphic val bindings be values! In practice all implementations will do something besides this but we’ll just focus on what the standard says. Now the reason for this value restriction is widely misunderstood. Most people believe that the value restriction exists to rule out unsound code like this

    val r  = ref NONE
    val () = r := SOME 1
    val _  = case !r of
                 SOME s => s
               | NONE   => ""

This seems to illustrate a pretty big issue for SML! We’re filling in a polymorphic reference with one type and unboxing it with a different one! Clearly this would segfault without the value restriction. However, there’s a catch.

SML is based on System F (did you really think I could get through a blog post without some theory?) which is sometimes called the “polymorphic lambda calculus”. It’s the minimal language with polymorphism and functions. In this language there’s a construct for making polymorphic things: Λ.

In this language we write polymorphism explicitly by saying Λ τ. e, which has the type ∀ τ. T. So for example we write the identity function as

    id ≡ Λ τ. λ x : τ. x
    () = id[unit] ()

Now SML (and vanilla Haskell) have a limited subset of the power of Λ. Specifically all these lambdas have to appear at the start of a toplevel term. Meaning that they have to be of the form

    val x = Λ α. Λ β. ... e

This is called “prenex” form and is what makes type inference for SML possible. Now since we don’t show anyone the hidden Λs it doesn’t make sense to show them the type application that comes with them and SML infers and adds those for us too. What’s particularly interesting is that SML is often formalized as having this property: values start with Λ and are implicitly applied to the appropriate types where used. Even more interesting, how do you suppose we should evaluate a Λ? What for example, should this code do

    val x  = Λ τ. raise[τ] Fail (* Meaning raise an exception and say
                                  we have type τ *)
    val () = print "I made it here"
    val () = x[unit]

It seems clear that Λ should be evaluated just like how we evaluate λ: when we apply it. So I (and the formalization of SML) would expect this to print "I made it here" before throwing that exception. This might not surprise you, just by parallel with code like this

    val x  = fn () => raise Fail
    val () = print "I made it here"
    val () = x ()

However, what about when those lambdas are implicit? In the actual source language of ML our code snippet would be

    val x  = raise Fail
    val () = print "I made it here"
    val () = x[unit]

Uhoh, this really looks like it ought to throw an exception but it apparently doesn’t! More worryingly, what about when we have something like

    fun die ()  = raise Fail
    val x = die ()
    val () = print "Made it here"

Since x is never specialized, this doesn’t even throw an error! Yikes! Clearly this is a little confusing. It is, however, type safe. Consider our original motivation for the value restriction. With explicit type application

    val r  = Λ τ. ref[τ] NONE
    val () = r[int] := SOME 1
    val _  = case !(r[string]) of
                 SOME s => s
               | NONE   => ""

Since the body of this function is run every time we do something with r, we’re just creating a whole bunch of new references in this code! There’s no type safety failure since !(r[string]) returns a fresh ref cell, completely different from the one we modified on the line above! This code always runs the NONE case. In fact, if this did the wrong thing it’s just a canary in the coal mine, a symptom of the fact that our system evaluates under (big) lambda binders.

So the value restriction is really not at all about type safety, it’s about comprehensibility. Mostly since the fact that a polymorphic expression is evaluated at each use rather than at its definition site is really strange. Most documentation seems to be wrong about this; everyone here seems to agree that this is unfortunate, but such is life.

The Monomorphism Restriction

Now let’s talk about the monomorphism restriction. This is better understood but still worth recapping. In Haskell we have type classes. They let us overload functions to behave differently on different types. Everyone’s favorite example is the type class for numbers, which lets us write

    fact :: (Eq a, Num a) => a -> a
    fact 0 = 1
    fact n = n * fact (n - 1)

And this works for all numbers, not just int or something. Under the hood, this works by passing a record of functions like *, fromInteger, and - to make the code work. That => is really just a sort of function arrow that happens to only take particular “implicit” records as an argument.

Now what do you suppose the most polymorphic type for this code is?

    x = fact 10000

It could potentially work on all numbers so it gets the type

    x :: (Num a, Eq a) => a

However this type makes x really like a function! This means that x :: Integer and x :: Int evaluate that computation twice. In fact each time we use x we supply a new record and end up evaluating again. This is very costly and also very surprising to most folks. After all, why should something that looks like a normal number evaluate every time we use it! The monomorphism restriction is essentially

  1. If you have a binding
  2. Whose type is (C1, C2 ...) => t
  3. And has no arguments to the left of the =
  4. Don’t generalize it

This is intended to keep us from the surprise of evaluating a seemingly fully reduced term twice.
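We can watch this duplicated work happen. Here is a small sketch using Debug.Trace (compile without optimizations; with -O GHC may share or specialize things and hide the effect):

    {-# LANGUAGE NoMonomorphismRestriction #-}
    import Debug.Trace (trace)

    -- Without the restriction this is generalized to x :: Num a => a,
    -- secretly a function of a Num dictionary.
    x = trace "evaluating x" (6 * 7)

    main :: IO ()
    main = do
      print (x :: Integer)   -- prints "evaluating x", then 42
      print (x :: Integer)   -- prints "evaluating x" again!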

Sound familiar? Just like with the value restriction, the whole point of the monomorphism restriction is to prevent a hidden function, either type abstraction or type constraints, from causing us to silently and dangerously duplicate work. Neither of them is essential to type safety, but without them some really simple looking pieces of code become exponentially slower.

Wrap Up

That about covers things. It turns out that both of these restrictions are just patches to cover some surprising areas of the semantics but both are easily understood when you look at the elaborated version. I deliberately went a bit faster through the monomorphism restriction since quite a lot of ink has already been spilled on the subject and unlike the value restriction, most of it is correct :)

As one final note, the way that Haskell handles the monomorphism restriction is precisely how OCaml handles the value restriction: weak polymorphism. Both of them mark the type variables they refuse to generalize as weak type variables. Whenever we first instantiate them to something we go back and retroactively modify the definition to pretend we had used this type all along. In this way, we only evaluate things once but can handle a lot of simple cases of binding a value and using it once.

The more you know.


March 27, 2015 12:00 AM

March 26, 2015

Manuel M T Chakravarty

Schrödinger’s Box

In Objective-C, NSValue provides a universal container that can hold scalars and value types in addition to references. Conversely, in Swift, we quickly learnt to appreciate Box<T> as a wrapper to embed (cyclic) references in enums and structs (especially due to limitations in the initial versions of the Swift compiler):

final public class Box<T> {
  public let unbox: T
  public init(_ value: T) { self.unbox = value }
}

NSValue is also useful to wrap references. For example, if we want to turn a strong reference into a weak reference (possibly to store it in a collection, such as NSArray), we can create an NSValue with +valueWithNonretainedObject.

How do we achieve that in Swift without losing type information, as using NSValue would? We use Schrödinger’s Box:

final public class WeakBox<T: AnyObject> {
  private weak var box: T?
  public var unbox: T? { get { return box } }
  public init(_ value: T) { self.box = value }
}

You can put something into Schrödinger’s Box, which will or will not be gone by the time you try to retrieve it. It enables us, for example, to build a typed collection of weakly referenced objects (to avoid retain cycles).
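(For the Haskell readers of this planet: a rough analogue can be sketched with System.Mem.Weak, with the usual caveat that the runtime is free to collect anything that is only weakly reachable.)

import System.Mem.Weak (Weak, mkWeakPtr, deRefWeak)

-- Schrödinger's Box in Haskell: the value may or may not still be there
-- when we open it.
newtype WeakBox a = WeakBox (Weak a)

weakBox :: a -> IO (WeakBox a)
weakBox x = fmap WeakBox (mkWeakPtr x Nothing)

unbox :: WeakBox a -> IO (Maybe a)
unbox (WeakBox w) = deRefWeak w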

NB: The definition of WeakBox is fine with the latest Swift 1.2 compiler. However, the Swift 1.1 compiler can only handle it with -Onone; otherwise, it crashes.

March 26, 2015 11:43 PM

March 25, 2015

Ketil Malde

Can you trust science?

Hardly a week goes by without newspapers writing about new and exciting results from science. Perhaps scientists have discovered a new wonderful drug for cancer treatment, or maybe they have found a physiological cause for CFS. Or perhaps this time they finally proved that homeopathy works? And in spite of these bold announcements, we still don't seem to have cured cancer. Science is supposed to be the method which enables us to answer questions about how the world works, but one could be forgiven for wondering whether it, in fact, works at all.

As my latest contribution to my local journal club, I presented a paper by Ioannidis, titled Why most published research findings are false 1. This created something of a stir when it was published in 2005, because it points out some simple mathematical reasons why science isn't as accurate as we would like to believe.

The ubiquitous p-value

Science is about finding out what is true. For instance, is there a relationship between treatment with some drug and the progress of some disease - or is there not? There are several ways to go about finding out, but in essence, it boils down to making some measurements, and doing some statistical calculations. Usually, the result will be reported along with a p-value, which is a by-product of the statistical calculations saying something about how certain we are of the results.

Specifically, if we claim there is a relationship, the associated p-value is the probability we would make such a claim even if there is no relationship in reality.

We would like this probability to be low, of course, and since we usually are free to select the p-value threshold, it is usually chosen to be 0.05 (or 0.01), meaning that if the claim is false, we will only accept it 5% (or 1%) of the times.

The positive predictive value

Now, the p-value is often interpreted as the probability of our (positive) claim being wrong. This is incorrect! There is a subtle difference here which it is important to be aware of. What you must realize is that the probability α relies on the assumption that the hypothesis is wrong - which may or may not be true; we don't know (which is precisely why we want to find out).

The probability of a claim being wrong after the fact is called the positive predictive value (PPV). In order to say something about this, we also need to take into account the probability of claiming there exists a relationship when the claim is true. Our methods aren't perfect, and even if a claim is true, we might not have sufficient evidence to say for sure.

So, let's take one step back and look at our options. Our hypothesis (e.g., drug X works against disease Y) can be true or false. In either case, our experiment and analysis can lead us to reject or accept it with some probability. This gives us the following 2-by-2 table:

           True    False
Accept     1-β     α
Reject     β       1-α

Here, α is the probability of accepting a false relationship by accident (i.e., the p-value), and β is the probability of missing a true relationship -- we reject a hypothesis, even when it is true.

To see why β matters, consider a hypothetical really really poor method, which has no chance of identifying a true relationship, in other words, β = 1. Then, every accepted hypothesis must come from the False column, as long as α is at all positive. Even if the p-value threshold only accepts 1 in 20 false relationships, that's all you will get, and as such, they constitute 100% of the accepted relationships.

But looking at β is not sufficient either. Let's say a team of researchers test hundreds of hypotheses, which all happen to be false? Then again, some of them will get accepted anyway (sneaking in under the p-value threshold α), and since there are no hypotheses in the True column, again every positive claim is false.

A β of 1 or a field of research with 100% false hypotheses are extreme cases2, and in reality, things are not quite so terrible. The Economist had a good article with a nice illustration showing how this might work in practice with more reasonable numbers. It should still be clear that the ratio of true to false hypotheses being tested, as well as the power of the analysis to identify true hypotheses are important factors. And if these numbers approach their limits, things can get quite bad enough.
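In fact, the 2-by-2 table above is all we need to compute the positive predictive value ourselves. A back-of-the-envelope sketch in Haskell (the numbers in the comments are purely illustrative):

-- Of all the hypotheses we accept, which fraction is actually true?
--   alpha: probability of accepting a false hypothesis (the p-value threshold)
--   beta:  probability of rejecting a true hypothesis
--   prior: fraction of tested hypotheses that are true
ppv :: Double -> Double -> Double -> Double
ppv alpha beta prior = trueAccepts / (trueAccepts + falseAccepts)
  where
    trueAccepts  = prior * (1 - beta)
    falseAccepts = (1 - prior) * alpha

-- ppv 0.05 0.2 0.5  ~ 0.94   a 50/50 field with decent power does fine
-- ppv 0.05 0.2 0.01 ~ 0.14   mostly-false hypotheses sink the PPV
-- ppv 0.05 1.0 0.5  = 0.0    with β = 1, every accepted claim is false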

More elaborate models

Other factors also influence the PPV. Try as we might to be objective, scientists often try hard to find a relationship -- that's what you can publish, after all3. Perhaps in combination with a less firm grasp of statistics than one could wish for (and scientists who think they know enough statistics are few and far between - I'm certainly no exception there), this introduces bias towards acceptance.

Multiple teams pursuing the same challenges in a hot and rapidly developing field also decrease the chance of results being correct, and there's a whole cottage industry of scientist reporting spectacular and surprising results in high-ranking journals, followed by a trickle of failures to replicate.

Solving this

One option is to be stricter - this is the default when you do multiple hypothesis testing: you require a lower p-value threshold in order to reduce α. The problem with this is that if you are stricter with what you accept as true, you will also reject more actually true hypotheses. In other words, you can reduce α, but only at the cost of increasing β.

On the other hand, you can reduce β by running a larger experiment. One obvious problem with this is cost: for many problems, a cohort of a hundred thousand or more is necessary, and not everybody can afford to run that kind of study. Perhaps even worse, a large cohort means that almost any systematic difference will be found significant. Biases that normally are negligible will show up as glowing bonfires in your data.

In practice?

Modern biology has changed a lot in recent years, and today we are routinely using high-throughput methods to test the expression of tens of thousands of genes, or the value of hundreds of thousands of genetic markers.

In other words, we simultaneously test an extreme number of hypotheses, where we expect a vast majority of them to be false, and in many cases, the effect size and the cohort are both small. It's often a new and exciting field, and we usually strive to use the latest version of the latest technology, always looking for new and improved analysis tools.

To put it bluntly, it is extremely unlikely that any result from this kind of study will be correct. Some people will claim these methods are still good for "hypothesis generation", but Ioannidis shows a hypothetical example where a positive result increases the likelihood that a hypothesis is correct by 50%. This doesn't sound so bad, perhaps, but in reality, the likelihood is only improved from 1 in 10000 to 1 in 7000 or so. I guess three thousand fewer trials to run in the lab is something, but you're still going to spend the rest of your life running the remaining ones.

You might expect scientists to be on guard for this kind of thing, and I think most scientists will claim they desire to publish correct results. But what counts for your career is publications and citations, and incorrect results are no less publishable than correct ones - and might even get cited more, as people fail to replicate them. And as you climb the academic ladder, publications in high-ranking journals are what count, and for that you need spectacular results. And it is much easier to get spectacular incorrect results than spectacular correct ones. So the academic system rewards and encourages bad science.

Consequences

The bottom line is to be skeptical of any reported scientific results. The ability of the experiment and analysis to discover true relationships is critical, and one should always ask what the effect size is, and what the statistical power -- the probability of detecting a real effect -- is.

In addition, the prior probability of the hypothesis being true is crucial. Apparently-solid, empirical evidence of people getting cancer from cell phone radiation, or working homeopathic treatment of disease can almost be dismissed out of hand - there simply is no probable explanation for how that would work.

A third thing to look out for, is how well studied a problem is, and how the results add up. For health effects of GMO foods, there is a large body of scientific publications, and an overwhelming majority of them find no ill effects. If this was really dangerous, wouldn't some of these investigations show it conclusively? For other things, like the decline of honey bees, or the cause of CFS, there is a large body of contradictory material. Again - if there was a simple explanation, wouldn't we know it by now?


  1. And since you ask: No, the irony of substantiating this claim with a scientific paper is not lost on me.

  2. Actually, I would suggest that research in paranormal phenomena is such a field. They still manage to publish rigorous scientific works, see this Less Wrong article for a really interesting take.

  3. I think the problem is not so much that you can't publish a result claiming no effect, but that you can rarely claim it with any confidence. Most likely, you just didn't design your study well enough to tell.

March 25, 2015 08:00 PM

Ken T Takusagawa

[zyxbhqnd] Defining monads with do notation

If one happens to be most comfortable with "do" notation for monads ("What are monads? They are the things for which "do" notation works well."), with monads thus implicitly defined in terms of bind (>>=) and return, here are the definitions of map and join, the "other" way of defining monads:

join :: (Monad m) => m (m a) -> m a;
join xs = do { x <- xs ; x }

map :: (Monad m) => (a -> b) -> m a -> m b;
map f xs = do { x <- xs ; return $ f x }

map is identical to liftM and slightly narrower than fmap which requires only the Functor typeclass instead of Monad.  This redundancy is one of the motivations for the AMP proposal.  Incidentally, map (as defined above) would work as well as the famous Prelude map function which operates only on lists, because a list is a Monad (and a Functor).

Just for completeness, here is bind defined in do notation:

(>>=) :: (Monad m) => m a -> (a -> m b) -> m b;
(>>=) xs f = do { x <- xs ; f x }

I sometimes like explicitly using the bind operator instead of do notation because the syntax, specifying xs then f, lines up well with the thought process "first prepare the input xs to a function, then call the function f on it".  It also works well for longer chains.  For example, the expression xs >>= f >>= g >>= h is equivalent to

do {
x <- xs;
y <- f x;
z <- g y;
h z;
}

but not having to name the intermediate results.
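For symmetry, bind is also recoverable from map and join as defined above, which ties the two presentations together:

(>>=) :: (Monad m) => m a -> (a -> m b) -> m b;
xs >>= f = join (map f xs)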

Inspired by the tutorial Monads as containers.

Update: added type signature for (>>=).

by Ken ([email protected]) at March 25, 2015 08:33 AM

The GHC Team

GHC Weekly News - 2015/03/24

Hi *,

It's time for the GHC weekly news. We missed the last one, mostly due to a lot of hustle to try and get 7.10 out the door (more on that shortly throughout this post). But now we're back and everything seems to be taken care of.

This week, in the wake of the GHC 7.10 release (which is occurring EOD, hopefully), GHC HQ met up for a brief chat and caught up:

  • This week GHC HQ met for only a very short time to discuss the pending release - it looks like all the blocking bugs have been fixed, and we've got everything triaged appropriately. You'll hopefully see the 7.10 announcement shortly after reading this.

We've also had small amounts of list activity (the past 6-8 weeks have been very, very quiet it seems):

  • Yitzchak Gale revived a thread he started a while back, which puttered out: bootstrapping GHC 7.8 with GHC 7.10. The long and short of it is, it should just about work - although we still haven't committed to this policy, it looks like Yitz and some others are quite adamant about it. https://mail.haskell.org/pipermail/ghc-devs/2015-March/008531.html

Some noteworthy commits that went into ghc.git in the past two weeks include:

Closed tickets this past week include: #9122, #10099, #10081, #9886, #9722, #9619, #9920, #9691, #8976, #9873, #9541, #9619, #9799, #9823, #10156, #1820, #6079, #9056, #9963, #10164, #10138, #10166, #10115, #9921, #9873, #9956, #9609, #7191, #10165, #10011, #8379, #10177, #9261, #10176, #10151, #9839, #8078, #8727, #9849, #10146, #9194, #10158, #7788, #9554, #8550, #10079, #10139, #10180, #10181, #10170, #10186, #10038, #10164, and #8976.

by thoughtpolice at March 25, 2015 02:20 AM

March 24, 2015

Jan Stolarek

First impressions of Coq and “Certified Programming with Dependent Types”

A simplistic view of the strongly typed functional programming ecosystem is that the two most influential languages are Haskell and ML. Haskell is my language of choice so when it came to learning dependently-typed programming I stayed on the Haskell side of the spectrum and went with Agda and Idris. I chose the two over the ML-inspired Coq, most often advertised as a proof assistant rather than a programming language, planning to learn it “when I find some free time”. I probably wouldn’t have found that time if it weren’t for a friend of mine who recently picked up the book “Certified Programming with Dependent Types” by Adam Chlipala of MIT. Having a companion to discuss the ideas in the book was a perfect opportunity to pick it up – the truth is I found out about the book well over a year ago and since then it has stayed on my always too long must-read-this-some-day list. So far I have read around 1/3rd of the book and would like to share some of my first impressions, both about the book, which I will refer to as CPDT, and Coq.

(Note: In what follows I will compare Coq to Agda and Idris but you have to be aware that despite similarity in features of these languages they don’t aim to be the same. Coq is a proof assistant with an extra feature of code extraction that allows you to turn your proofs into code – if you ever heard about “programs as proofs” this is it. Idris is a programming language with extra features that allow you to prove your code correct. I’m not really sure how to classify Agda. It is definitely on the programming-language-end of the spectrum – it allows you to prove your code correct but does not provide any extra built-in proof support. At the same time turning Agda code into working programs is non-trivial.)

Let me start off by saying that I don’t have any real-life Coq project on the horizon, so my learning is not motivated by a need to solve any practical problem. My main driving force for learning Coq is purely interest in programming languages and seeing how Coq compares to Agda and Idris. A common thing with dependently-typed languages is that the types can get too complicated for the programmer to comprehend, and thus a language requires an interactive mode to provide the programmer with compiler feedback about the types. This is true for Agda, Idris and Coq. Agda offers great support for holes: the programmer can insert question marks into the program code and once it is re-compiled in the editor (read: Emacs) question marks become holes, ie. places where the Agda compiler provides the user with feedback about expected types and bindings available in the hole context, as well as some nice inference features allowing it to automatically fill in the contents of holes. So in Agda one proves by constructing terms of appropriate types. Coq is different, as it relies on a mechanism called “tactics”. Once the user writes down a type (theorem) he is presented with a set of goals to prove. Applying a tactic transforms the current goal into a different goal (or several goals). Conducting consecutive steps of the proof (ie. applying several tactics) should lead to some trivial goal that follows from definition and ends the proof. To work with Coq I decided to use Proof General, an Emacs extension for working with proofs (many other proof assistants are supported besides Coq)1. It launches a Coq process in the background and essentially integrates writing code with proving. With Proof General I can easily step through my proofs to see how the goals are transformed by usage of tactics. Idris falls somewhere between Agda and Coq. As stated earlier it is mostly a programming language but it also provides tactic-based proving. So for example when I write a definition that requires an explicit proof to typecheck, idris-mode launches an interactive REPL in which I can conduct a proof in a fashion similar to Proof General, and once I’m finished the proof is inserted into the source code. The result looks something like this:

par : (n : Nat) -> Parity n
par Z = even {n=Z}
par (S Z) = odd {n=Z}
par (S (S k)) with (par k)
  par (S (S (j + j)))     | even ?= even {n = S j}
  par (S (S (S (j + j)))) | odd  ?= odd {n = S j}
 
---------- Proofs ----------
 
Basics.par_lemma_2 = proof
  intros
  rewrite sym (plusSuccRightSucc j j)
  trivial
 
 
Basics.par_lemma_1 = proof
  intros
  rewrite sym (plusSuccRightSucc j j)
  trivial

The last time I checked Idris once the proof was completed and added to the source code it was not possible to step through it back and forth to see how goals are transformed. (Things might have changed since I last checked.)

So far I’ve been doing rather basic stuff with Coq so I haven’t seen much that wouldn’t be also possible in Agda or Idris. The biggest difference is that Coq feels a much more grown up language than any of the mentioned two. One totally new thing I learned so far is co-induction, but I’m only starting with it and won’t go into details, rather leaving it for a separate post. (Agda also supports co-induction.)

As for the CPDT book I have to say it is a challenging read: it’s very dense and focuses on more advanced Coq techniques without going into details of basics. As such it is a great demonstration of what can be done in Coq but not a good explanation of how it can be done. Depending on what you are expecting this book might or might not be what you want. As stated earlier I don’t plan on applying Coq in any project but rather want to see a demo of Coq features and possibly pick up some interesting theoretical concepts. As such CPDT works quite well for me although I am not entirely happy with not being able to fully understand some of the demonstrated techniques. As such CPDT is definitely not a self-contained read, which I believe was a conscious decision on the author’s side. Discussing with people on #coq IRC channel and reading various posts on the internet leads me to a conclusion that CPDT is a great book for people that have been using Coq for some time and want to take their skills to a new level. The main theme of the book is proof automation, that is replacing tactic-based sequential proofs with automated decision procedures adjusted to the problem at hand that can construct proofs automatically. Indeed tactic-based proofs are difficult to understand and maintain. Consider this proof of a simple property that n + 0 = n:

Theorem n_plus_O : forall (n : nat), plus n O = n.
  intros.
  induction n.
  reflexivity.
  simpl.
  rewrite IHn.
  reflexivity.
Qed.

To understand that proof one has to step through it to see how goals are transformed or have enough knowledge of Coq to know that without the need of stepping through. Throughout the book Adam Chlipala demonstrates the power of proof automation by using his tactic called crush, which feels like a magic wand since it usually ends the proof immediately (sometimes it requires some minimal guidance before it ends the proof immediately). I admit this is a bit frustrating as I don’t feel I learn anything by seeing crush applied to magically finish a proof. Like I said, a good demo of what can be done but without an explanation. The worst thing is that crush does not seem to be explained anywhere in the book so readers wanting to understand it are left on their own (well, almost on their own).

What about those of you who want to learn Coq starting from the basics? It seems like Software Foundations is the introductory book about Coq and – given that the main author is Benjamin Pierce – it looks like you can’t go wrong with this book. I am not yet sure whether I’ll dive into SF but most likely not as this would mean giving up on CPDT and for me it’s more important to get a general coverage of more advanced topics rather than in-depth treatment of basics.

  1. Other choices of interactive mode are available for Coq, for example CoqIDE shipped by default with Coq installation

by Jan Stolarek at March 24, 2015 01:53 PM

Danny Gratzer

A Tiny Compiler For A Typed Higher Order Language

Posted on March 24, 2015

Hi folks, the last week or so I was a little tired of schoolwork so I decided to scratch out some fun code. The end result is an extremely small compiler for a typed, higher order functional language called PCF to C. In this post I’ll attempt to explain the whole thing, from front to back :)

What’s PCF

First things first, it’s important to define the language we’re compiling. The language, PCF, short for “partial computable functions”, is an extremely small language of the sort you generally find in a book on programming languages; it originates with Plotkin, if I’m not mistaken.

PCF is based around 3 core elements: natural numbers, functions (closures), and general recursion. There are two constants for creating numbers, Zero and Suc. Zero is self explanatory and Suc e is the successor of the natural number e evaluates to. In most programming languages this just means Suc e = 1 + e but + isn’t a primitive in PCF (we can define it as a normal function).

For functions, we have lambdas like you’d find in any functional language. Since PCF includes no polymorphism it’s necessary to annotate the function’s argument with its type.

Finally, the weird bit: recursion. In PCF we write recursive things with fix x : τ in e. Here we get to use x in e and we should understand that x “stands for” the whole expression, fix .... As an example, here’s how we define +.

    plus =
          fix rec : nat -> nat -> nat in
            λ m : nat.
            λ n : nat.
              ifz m {
                  Zero  => n
                | Suc x => Suc (rec x n)
              }

Now Let’s Compile It

Now compilation is broken up into a bunch of phases and intermediate languages. Even in this small of a compiler there are 3 (count-em) languages so along with the source and target language there are 5 different languages running around inside of this compiler. Each phase with the exception of typechecking is just translating one intermediate language (IL) into another and in the process making one small modification to the program as a whole.

The AST

This compiler starts with an AST, I have no desire to write a parser for this because parsers make me itchy. Here’s the AST

    data Ty = Arr Ty Ty
            | Nat
            deriving Eq

    data Exp a = V a
               | App (Exp a) (Exp a)
               | Ifz (Exp a) (Exp a) (Scope () Exp a)
               | Lam Ty (Scope () Exp a)
               | Fix Ty (Scope () Exp a)
               | Suc (Exp a)
               | Zero
               deriving (Eq, Functor, Foldable, Traversable)

What’s interesting here is that our AST uses bound to manage variables. Unfortunately there really isn’t time to write both a bound tutorial and a PCF compiler one. I’ve written about using bound before here otherwise you can just check out the official docs. The important bits here are that Scope () ... binds one variable and that a stands for the free variables in an expression. 3 constructs bind variables here, Ifz for pattern matching, Fix for recursive bindings, and Lam for the argument. Note also that Fix and Lam both must be annotated with a type otherwise stuff like fix x in x and fn x => x are ambiguous.

Type Checking

First up is type checking. This should be familiar to most people who’ve written a type checker before, since PCF is simply typed. We simply have a Map of variables to types. Since we want to go under binders defined using Scope we’ll have to use instantiate. However this demands we be able to create fresh free variables so we don’t accidentally cause clashes. To prevent this we use monad-gen to generate fresh free variables.

To warm up, here’s a helper function to check that an expression has a particular type. This uses the more general typeCheck function which actually produces the type of an expression.

    type TyM a = MaybeT (Gen a)

    assertTy :: (Enum a, Ord a) => M.Map a Ty -> Exp a -> Ty -> TyM a ()
    assertTy env e t = (== t) <$> typeCheck env e >>= guard

This type checks the expression in an environment (something that stores the types of all of the free variables). Once it has the inferred type it compares it to the type we expected and chucks the resulting boolean into guard. This code is used in places like Ifz where we happen to know that the first expression has the type Nat.

Now on to the main code, typeCheck

    typeCheck :: (Enum a, Ord a) => M.Map a Ty -> Exp a -> TyM a Ty
    typeCheck _   Zero = return Nat
    typeCheck env (Suc e) = assertTy env e Nat >> return Nat

The first two cases for typeCheck are nice and straightforward. If we get a Zero then it has type Nat. If we get a Suc e we assert that e is a Nat and then the whole thing has the type Nat.

    typeCheck env (V a) = MaybeT . return $ M.lookup a env

For variables we just look things up in the environment. Since this returns a Maybe it’s nice and easy to just jam it into our MaybeT.

    typeCheck env (App f a) = typeCheck env f >>= \case
      Arr fTy tTy -> assertTy env a fTy >> return tTy
      _ -> mzero

Application is a little more interesting. We recurse over the function and make sure it has an actual function type. If it does, we assert the argument has the argument type and return the codomain. If it doesn’t have a function type, we just fail.

    typeCheck env (Lam t bind) = do
      v <- gen
      Arr t <$> typeCheck (M.insert v t env) (instantiate1 (V v) bind)
    typeCheck env (Fix t bind) = do
      v <- gen
      assertTy (M.insert v t env) (instantiate1 (V v) bind) t
      return t

Type checking lambdas and fixpoints is quite similar. In both cases we generate a fresh variable to unravel the binder with. We know what type this variable is supposed to have because we required explicit annotations so we add that to the map constituting our environment. Here’s where they diverge.

For a fixpoint we want to make sure that the body has the type as we said it would so we use assertTy. For a lambda we infer the body type and return a function from the given argument type to the body type.

    typeCheck env (Ifz i t e) = do
      assertTy env i Nat
      ty <- typeCheck env t
      v <- gen
      assertTy (M.insert v Nat env) (instantiate1 (V v) e) ty
      return ty

For Ifz we want to ensure that we actually are casing on a Nat so we use assertTy. Next we figure out what type the zero branch returns and make sure that the successor branch (whose bound variable is a Nat) has the same type.

All in all this type checker is not particularly fascinating since all we have are simple types. Things get a bit more interesting with polymorphism. I’d suggest looking at that if you want to see a more interesting type checker.

Closure Conversion

Now for our first interesting compilation phase, closure conversion. In this phase we make closures explicit by annotating lambdas and fixpoints with the variables that they close over. Those variables are then explicitly bound in the scope of the lambda. With these changes, our new syntax tree looks like this

    -- Invariant, Clos only contains VCs, can't be enforced statically due
    -- to annoying monad instance
    type Clos a = [ExpC a]

    data ExpC a = VC a
                | AppC (ExpC a) (ExpC a)
                | LamC Ty (Clos a) (Scope Int ExpC a)
                | FixC Ty (Clos a) (Scope Int ExpC a)
                | IfzC (ExpC a) (ExpC a) (Scope () ExpC a)
                | SucC (ExpC a)
                | ZeroC
                deriving (Eq, Functor, Foldable, Traversable)

The interesting parts are the additions of Clos and the fact that the Scope for a lambda and a fixpoint now binds an arbitrary number of variables instead of just one. Here if a lambda or fixpoint binds n variables, the first n - 1 are stored in the Clos and the last one is the “argument”. Closure conversion is thus just the process of converting an Exp to an ExpC.

    closConv :: Ord a => Exp a -> Gen a (ExpC a)
    closConv (V a) = return (VC a)
    closConv Zero = return ZeroC
    closConv (Suc e) = SucC <$> closConv e
    closConv (App f a) = AppC <$> closConv f <*> closConv a
    closConv (Ifz i t e) = do
      v <- gen
      e' <- abstract1 v <$> closConv (instantiate1 (V v) e)
      IfzC <$> closConv i <*> closConv t <*> return e'

Most of the cases here are just recursing and building things back up applicatively. There’s the moderately interesting case where we instantiate the else branch of an Ifz with a fresh variable and then recurse, but the interesting cases are for fixpoints and lambdas. Since they’re completely identical we only present the case for Fix.

    closConv (Fix t bind) = do
      v <- gen
      body <- closConv (instantiate1 (V v) bind)
      let freeVars = S.toList . S.delete v $ foldMap S.singleton body
          rebind v' = elemIndex v' freeVars <|>
                      (guard (v' == v) *> (Just $ length freeVars))
      return $ FixC t (map VC freeVars) (abstract rebind body)

There’s a lot going on here but it boils down into three parts.

  1. Recurse under the binder
  2. Gather all the free variables in the body
  3. Rebind the body together so that all the free variables map to their position in the closure and the argument is n where n is the number of free variables.

The first is accomplished in much the same way as in the above cases. To gather the free variables all we need to do is use the readily available notion of a monoid on sets. The whole process is just foldMap S.singleton! There’s one small catch: we don’t want to put the argument into the list of variables we close over so we carefully delete it from the closure. We then convert it to a list which gives us an actual Clos a. Now for the third step we have rebind.

rebind maps a free variable to Maybe Int. It maps a free variable to its binding occurrence, if it has one here. This boils down to using elemIndex to look up something’s position in the Clos we just built up. We also have a special case for when the variable we’re looking at is the “argument” of the function we’re fixing. In this case we want to map it to the last thing we’re binding, which is just length freeVars. To capture the “try this and then that” semantics we use the alternative instance for Maybe which works wonderfully.

With this, we’ve removed implicit closures from our language: one of the passes on our way to C.

Lambda Lifting

Next up we remove both fixpoints and lambdas from being expressions. We want them to have an explicit binding occurrence because we plan to completely remove them from expressions soon. In order to do this, we define a language with lambdas and fixpoints explicitly declared in let expressions. The process of converting from ExpC to this new language is called “lambda lifting” because we’re lifting things into let bindings.

Here’s our new language.

    data BindL a = RecL Ty [ExpL a] (Scope Int ExpL a)
                 | NRecL Ty [ExpL a] (Scope Int ExpL a)
                 deriving (Eq, Functor, Foldable, Traversable)
    data ExpL a = VL a
                | AppL (ExpL a) (ExpL a)
                | LetL [BindL a] (Scope Int ExpL a)
                | IfzL (ExpL a) (ExpL a) (Scope () ExpL a)
                | SucL (ExpL a)
                | ZeroL
                deriving (Eq, Functor, Foldable, Traversable)

Much here is the same except we’ve removed both lambdas and fixpoints and replaced them with LetL. LetL works over bindings which are either recursive (Fix) or nonrecursive (Lam). Lambda lifting in this compiler is rather simplistic in how it lifts lambdas: we just boost everything one level up and turn

    λ (x : τ). ...

into

    let foo = λ (x : τ). ...
    in foo

Just like before, this procedure is captured by transforming an ExpC into an ExpL.

    llift :: Eq a => ExpC a -> Gen a (ExpL a)
    llift (VC a) = return (VL a)
    llift ZeroC = return ZeroL
    llift (SucC e) = SucL <$> llift e
    llift (AppC f a) = AppL <$> llift f <*> llift a
    llift (IfzC i t e) = do
      v <- gen
      e' <- abstract1 v <$> llift (instantiate1 (VC v) e)
      IfzL <$> llift i <*> llift t <*> return e'

Just like in closConv we start with a lot of very boring and trivial “recurse and build back up” cases. These handle everything but the cases where we actually convert constructs into a LetL.

Once again, the interesting cases are pretty much identical. Let’s look at the case for LamC for variety.

    llift (LamC t clos bind) = do
      vs <- replicateM (length clos + 1) gen
      body <- llift $ instantiate (VC . (!!) vs) bind
      clos' <- mapM llift clos
      let bind' = abstract (flip elemIndex vs) body
      return (LetL [NRecL t clos' bind'] trivLetBody)

Here we first generate a bunch of fresh variables and unbind the body of our lambda. We then recurse on it. We also have to recurse across all of our closed over arguments but since those are variables we know that should be pretty trivial (why do we know this?). Once we’ve straightened out the body and the closure all we do is transform the lambda into a trivial let expression as shown above. Here’s trivLetBody:

    trivLetBody :: Scope Int ExpL a
    trivLetBody = fromJust . closed . abstract (const $ Just 0) $ VL ()

Which is just a body that returns the first thing bound in the let. With this done, we’ve pretty much transformed our expression language to C. In order to get rid of the nesting, we want to make one more simplification before we actually generate C.

C-With-Expression

C-With-Expressions is our next intermediate language. It has no notion of nested functions or of fixpoints. I suppose now I should finally fess up to why I keep talking about fixpoints and functions as if they’re the same and why this compiler is handling them identically. The long and short of it is that fixpoints are really a combination of a “fixed point combinator” and a function. Really when we say

    fix x : τ in ...

It’s as if we had said

    F (λ x : τ. ...)

Where F is a magical constant with the type

    F :: (a -> a) -> a

F calculates the fixpoint of a function. This means that f (F f) = F f. This formula underlies all recursive bindings (in Haskell too!). In the compiler we basically compile a Fix to a closure (the runtime representation of a function) and pass it to a C function fixedPoint which actually calculates the fixed point. Now it might seem dubious that a function has a fixed point. After all, it would seem that there’s no x such that suc x = x, right? Well, the key is to think of these functions as ranging not just over the values in our language, but over a domain where infinite loops (bottom values) are also represented. In the above equation, the solution is that x should be bottom, an infinite loop. That’s why

    fix x : nat in suc x

should loop! There’s actually some wonderful math going on here about how computable functions are continuous functions over a domain and how we can always calculate their least fixed point in this manner. The curious reader is encouraged to check out domain theory.
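For comparison, F is precisely Haskell’s fix from Data.Function, and the looping example above has a direct Haskell analogue (a sketch of mine, not part of the compiler):

    import Data.Function (fix) -- satisfies fix f = f (fix f)

    -- "fix x : nat in suc x" corresponds to fixing (+ 1): no ordinary
    -- number equals its own successor, so the least fixed point is
    -- bottom and this diverges (GHC may even report <<loop>>).
    loops :: Integer
    loops = fix (+ 1)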

Anyways, so that’s why I keep handling fixpoints and lambdas in the same way, because to me a fixpoint is a lambda + some magic. This is going to become very clear in C-With-Expressions (FauxC from now on) because we’re going to promote both sorts of let bindings to the same thing, a FauxC toplevel function. Without further ado, here’s the next IL.

    -- Invariant: the Integer part of a FauxCTop is a globally unique
    -- identifier that will be used as a name for that binding.
    type NumArgs = Int
    data BindTy = Int | Clos deriving Eq

    data FauxCTop a = FauxCTop Integer NumArgs (Scope Int FauxC a)
                    deriving (Eq, Functor, Foldable, Traversable)
    data BindFC a = NRecFC Integer [FauxC a]
                  | RecFC BindTy Integer [FauxC a]
                  deriving (Eq, Functor, Foldable, Traversable)
    data FauxC a = VFC a
                 | AppFC (FauxC a) (FauxC a)
                 | IfzFC (FauxC a) (FauxC a) (Scope () FauxC a)
                 | LetFC [BindFC a] (Scope Int FauxC a)
                 | SucFC (FauxC a)
                 | ZeroFC
                 deriving (Eq, Functor, Foldable, Traversable)

The big difference is that we’ve lifted things out of let bindings. They now contain references to some global function instead of actually having the value right there. We also tag fixpoints as either fixing an Int or a Clos. The reasons for this will be apparent in a bit.

Now for the conversion. We don’t just have a function from ExpL to FauxC, because we also want to make note of all the nested lets we’re lifting out of the program. Thus we use WriterT to gather a list of toplevel functions as we traverse the program. Other than that, this is much like what we’ve seen before.

    type FauxCM a = WriterT [FauxCTop a] (Gen a)

    fauxc :: ExpL Integer -> FauxCM Integer (FauxC Integer)
    fauxc (VL a) = return (VFC a)
    fauxc (AppL f a) = AppFC <$> fauxc f <*> fauxc a
    fauxc ZeroL = return ZeroFC
    fauxc (SucL e) = SucFC <$> fauxc e
    fauxc (IfzL i t e) = do
      v <- gen
      e' <- abstract1 v <$> fauxc (instantiate1 (VL v) e)
      IfzFC <$> fauxc i <*> fauxc t <*> return e'

In the first couple of cases we just recurse, as we’ve seen before. Things only get interesting once we get to LetL.

    fauxc (LetL binds e) = do
      binds' <- mapM liftBinds binds
      vs <- replicateM (length binds) gen
      body <- fauxc $ instantiate (VL . (!!) vs) e
      let e' = abstract (flip elemIndex vs) body
      return (LetFC binds' e')

In this case we recurse with the function liftBinds across all the bindings, then do what we’ve done before and unwrap the body of the let and recurse in it. So the meat of this transformation is in liftBinds.

      where liftBinds (NRecL t clos bind) = lifter NRecFC clos bind
            liftBinds (RecL t clos bind) = lifter (RecFC $ bindTy t) clos bind
            lifter bindingConstr clos bind = do
              guid <- gen
              vs <- replicateM (length clos + 1) gen
              body <- fauxc $ instantiate (VL . (!!) vs) bind
              let bind' = abstract (flip elemIndex vs) body
              tell [FauxCTop guid (length clos + 1) bind']
              bindingConstr guid <$> mapM fauxc clos
            bindTy (Arr _ _) = Clos
            bindTy Nat = Int

To lift a binding, all we do is generate a globally unique identifier for the toplevel. Once we have that, we can unwrap the particular binding we’re looking at. This is going to comprise the body of the FauxCTop function we’re building. Since we need it to be FauxC code as well, we recurse on it. Now we have a bunch of faux-C code for the body of the toplevel function. We then just repackage the body up into a binding (a FauxCTop needs one) and use tell to make a note of it. Once we’ve done that, we return the stripped down let binding that just remembers the guid we created for the toplevel function.

As an example, this code transforms

    let x = λ (x : τ). ... in
      ... x ...

into

    TOP = λ (x : τ). ...
    let x = TOP in
      ... x ...

With this done our language is now 80% of the way to C!

Converting To SSA-ish C

Converting our faux-C language to actual C has one complication: C doesn’t have let expressions. Given this, we have to flatten out a faux-C expression so we can turn a let expression into a normal C declaration. This conversion is almost a conversion to static single assignment form, SSA. I say almost because there’s precisely one place where we break the single assignment discipline. This is just because it seemed rather pointless to me to introduce an SSA IL with φ just so I could compile it to C. YMMV.

This is what LLVM uses for its intermediate language and because of this I strongly suspect regearing this compiler to target LLVM should be pretty trivial.

Now we’re using a library called c-dsl to make generating the C less painful, but there are still a couple of things we’d like to add. First of all, all our names are integers, so we have i2d and i2e for converting an integer into a C declarator or an expression.

    i2d :: Integer -> CDeclr
    i2d = fromString . ('_':) . show

    i2e :: Integer -> CExpr
    i2e = var . fromString . ('_':) . show
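So the name 3, for example, becomes the C identifier _3; the leading underscore keeps generated names from colliding with anything in the RTS. A quick illustration (my example; pretty is the pretty-printer from language-c, which c-dsl builds on):

    import Language.C.Pretty (pretty)

    -- Render the generated expression for the name 3; this prints: _3
    example :: IO ()
    example = print (pretty (i2e 3))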

We also have a shorthand for the type of all expressions in our generated C code.

    taggedTy :: CDeclSpec
    taggedTy = CTypeSpec "tagged_ptr"

Finally, we have our writer monad and a helper function for implementing the SSA conversion. We write out C99 block items, and tellDecl binds an expression to a fresh variable and then returns that variable.

    type RealCM = WriterT [CBlockItem] (Gen Integer)

    tellDecl :: CExpr -> RealCM CExpr
    tellDecl e = do
      i <- gen
      tell [CBlockDecl $ decl taggedTy (i2d i) $ Just e]
      return (i2e i)

Next we have the conversion procedure. Most of this is pretty straightforward because we shell out to calls in the runtime system for all the hard work. We have the following RTS functions (a toy Haskell model of them follows the list).

  • mkZero, create a zero value
  • inc, increment an integer value
  • dec, decrement an integer value
  • apply, apply a closure to an argument
  • mkClos, make a closure closing over some values
  • EMPTY, an empty pointer, useful for default values
  • isZero, check if something is zero
  • fixedPoint, find the fixed point of a function
  • INT_SIZE, the size of the runtime representation of an integer
  • CLOS_SIZE, the size of the runtime representation of a closure
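And here is that toy model, just to pin down the intended semantics. This is entirely my own sketch: the real RTS is C, mkClos is variadic, and EMPTY, INT_SIZE, and CLOS_SIZE are representation details, so none of those have a direct analogue here.

    -- A toy Haskell model of the runtime values.
    data Value = IntV Integer | ClosV (Value -> Value)

    mkZero :: Value
    mkZero = IntV 0

    inc, dec :: Value -> Value
    inc (IntV n) = IntV (n + 1)
    inc _        = error "inc: not an integer"
    dec (IntV n) = IntV (n - 1)
    dec _        = error "dec: not an integer"

    apply :: Value -> Value -> Value
    apply (ClosV f) v = f v
    apply _         _ = error "apply: not a closure"

    isZero :: Value -> Bool
    isZero (IntV 0) = True
    isZero _        = False

    -- fixedPoint ties the recursive knot, just like fix: the result is
    -- exactly the value the closure maps it to. (The real fixedPoint
    -- also takes a size argument; that's a representation detail.)
    fixedPoint :: Value -> Value
    fixedPoint c = let v = apply c v in v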

Most of this code is therefore just converting the expression to SSA form and using the RTS functions to do the appropriate computation at each step. Note that c-dsl provides a few overloaded string instances, so to generate the C code to apply a function we just use "foo"#[1, "these", "are", "arguments"].

The first few cases for conversion are nice and straightforward.

    realc :: FauxC CExpr -> RealCM CExpr
    realc (VFC e) = return e
    realc (AppFC f a) = ("apply" #) <$> mapM realc [f, a] >>= tellDecl
    realc ZeroFC = tellDecl $ "mkZero" # []
    realc (SucFC e) = realc e >>= tellDecl . ("inc"#) . (:[])

We take advantage of the fact that realc returns its result, so we can almost make this look like the applicative cases we had before. One particularly slick case is how Suc works. We compute the value of e and apply inc to the result. We then feed this expression into tellDecl, which binds it to a fresh variable and returns the variable. Haskell is pretty slick.
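To see the flattening concretely, here’s a hand-run of a tiny term (the fresh names _0 and _1 are my assumption about what gen produces):

    -- Tracing realc (SucFC ZeroFC) emits, roughly,
    --
    --   tagged_ptr _0 = mkZero();
    --   tagged_ptr _1 = inc(_0);
    --
    -- and returns _1 as the expression for the whole term.
    sucZero :: RealCM CExpr
    sucZero = realc (SucFC ZeroFC)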

    realc (IfzFC i t e) = do
      outi <- realc i
      deci <- tellDecl ("dec" # [outi])
      let e' = instantiate1 (VFC deci) e
      (outt, blockt) <- lift . runWriterT $ (realc t)
      (oute, blocke) <- lift . runWriterT $ (realc e')
      out <- tellDecl "EMPTY"
      let branch b tempOut =
            CCompound [] (b ++ [CBlockStmt . liftE $ out <-- tempOut]) undefNode
          ifStat =
            cifElse ("isZero"#[outi]) (branch blockt outt) (branch blocke oute)
      tell [CBlockStmt ifStat]
      return out

In this next case we’re translating Ifz. For this we obviously need to compute the value of i. We do that by recursing and storing the result in outi. Now we want to be able to use one less than the value of i in case we go into the successor branch. This is done by calling dec on outi and storing it for later.

Next we do something a little odd. We recurse on the branches of Ifz, but we definitely don’t want both branches to execute at runtime, so we can’t just use a normal recursive call: if we did, their code would be added to the straight-line block we’re building up in the writer monad. Instead we use lift . runWriterT to give us back the blocks without adding them to the current one we’re building. Now it’s just a matter of generating the appropriate if statement.

To do this we add one instruction to the end of both branches, assigning to a shared output variable. This ensures that no matter which branch we go down, we’ll end up with the result in one place. This is also the one place where we are no longer doing SSA. Properly speaking we should write this with a φ, but who has time for that? :)

Finally we add the if statement, and the handful of declarations that precede it, to our block. Now for the last case.

    realc (LetFC binds bind) = do
      bindings <- mapM goBind binds
      realc $ instantiate (VFC . (bindings !!)) bind
      where sizeOf Int = "INT_SIZE"
            sizeOf Clos = "CLOS_SIZE"
            goBind (NRecFC i cs) =
              ("mkClos" #) <$> (i2e i :) . (fromIntegral (length cs) :)
                           <$> mapM realc cs
                           >>= tellDecl
            goBind (RecFC t i cs) = do
              f <- ("mkClos" #) <$> (i2e i :) . (fromIntegral (length cs) :)
                                <$> mapM realc cs
                                >>= tellDecl
              tellDecl ("fixedPoint"#[f, sizeOf t])

For our last case we have to deal with lets. For this we simply traverse all the bindings, which are now flat, and then flatten the expression under the binder. When we mapM over the bindings we actually get back a list of all the expressions each binding evaluated to. This is perfect for use with instantiate, making the conversion of the let body quite pleasant. goBind is slightly less so.

In the nonrecursive case all we have to do is create a closure, so goBind of a nonrecursive binding shells out to mkClos. mkClos is applied to the name of the toplevel function, the number of closed over expressions, and then the closed over expressions themselves; the count is needed because mkClos is variadic. Finally we shove the result into tellDecl as usual. For a recursive binding there’s a slight difference: after doing all of that, we apply fixedPoint to the output and to the size of the type of the thing we’re fixing. This is why we kept types around for these bindings! With them we can avoid dragging the size along with every value, since we know it statically.

Next, we have a function for converting a faux-C function into an actual C function definition. This is the function where we use realc.

    topc :: FauxCTop CExpr -> Gen Integer CFunDef
    topc (FauxCTop i numArgs body) = do
      binds <- gen
      let getArg = (!!) (args (i2e binds) numArgs)
      (out, block) <- runWriterT . realc $ instantiate getArg body
      return $
        fun [taggedTy] ('_' : show i) [decl taggedTy $ ptr (i2d binds)] $
          CCompound [] (block ++ [CBlockStmt . creturn $ out]) undefNode
      where indexArg binds i = binds ! fromIntegral i
            args binds na = map (VFC . indexArg binds) [0..na - 1]

This isn’t the most interesting function. We have one array of arguments to our C function, and then we unbind the body of the FauxC function by indexing into this array. It’s not explicitly stated in the code, but the array contains the closed over expressions in the first n - 1 entries and the actual argument to the function in the nth. This is in line with how the variables are actually bound in the body of the function, which makes unwrapping the body to index into the argument array very simple. We then call realc, which transforms our faux-C expression into a block of actual C code. We add one last statement to the end of the block that returns the final outputted variable. All that’s left to do is bind it up into a C function and call it a day.

Putting It All Together

Finally, at the end of it all we have a function from expressions to Maybe CTranslUnit, a C program.

    compile :: Exp Integer -> Maybe CTranslUnit
    compile e = runGen . runMaybeT $ do
      assertTy M.empty e Nat
      funs <- lift $ pipe e
      return . transUnit . map export $ funs
      where pipe e = do
              simplified <- closConv e >>= llift
              (main, funs) <- runWriterT $ fauxc simplified
              i <- gen
              let topMain = FauxCTop i 1 (abstract (const Nothing) main)
                  funs' = map (i2e <$>) (funs ++ [topMain])
              (++ [makeCMain i]) <$> mapM topc funs'
            makeCMain entry =
              fun [intTy] "main" [] $ hBlock ["call"#[i2e entry]]

This combines all the previous compilation passes. First we typecheck and ensure that the program is a Nat. Then we closure convert it and immediately lambda lift. This simplified program is then fed into fauxc, giving a faux-C expression for main and a bunch of functions called by main. We wrap up the main expression in a function that ignores all its arguments. We then map topc over all of these faux-C functions, which gives us actual C code. Finally, we tack on a trivial C main to call the generated code and return the whole thing.
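As a usage sketch (my example: Zero and Suc are the source language’s constructors from earlier in the post, and pretty is language-c’s pretty-printer), driving the whole pipeline looks something like this:

    import Language.C.Pretty (pretty)

    -- Compile the PCF program "suc zero" and print the generated C,
    -- provided it type checks.
    main :: IO ()
    main = case compile (Suc Zero) of
      Nothing    -> putStrLn "ill-typed program"
      Just cCode -> print (pretty cCode)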

And that’s our PCF compiler.

Wrap Up

Well if you’ve made it this far congratulations. We just went through a full compiler from a typed higher order language to C. Along the way we ended up implementing

  1. A Type Checker
  2. Closure Conversion
  3. Lambda Lifting
  4. Conversion to Faux-C
  5. SSA Conversion

If you’d like to fiddle a bit more, some fun projects might be

  1. Writing type checkers for all the intermediate languages. They’re all typeable except perhaps Faux-C
  2. Implement compilation to LLVM instead of C. As I said before, this shouldn’t be awful

Cheers,


March 24, 2015 12:00 AM