Thursday, January 19, 2012

Possible projects for Boston Haskell Hackathon: smarter evaluation strategies and refactoring combinators

This weekend I'm going to be participating in the Boston Haskell hackathon. I'm very excited about it and I have a couple of ideas for projects to work on. If any of these sound interesting or you are thinking of something similar, I'm looking for people to collaborate with! Send me an email or come talk to me in person! I think I'm going to get there sometime in the late afternoon to early evening on Friday and I'll be around all weekend.

Evaluation strategies for Haskell that don't leak space

The first project is some research and prototyping of an alternative evaluation strategy with the same termination properties as normal order evaluation, but with much easier reasoning about space usage. For lack of a better name, I'm calling it specializing, strictness-propagating evaluation. In this model, a function call works something like two communicating coroutines. When calling a function, the callee begins evaluating its body, yielding control back to the caller when it needs its first argument, and also indicating whether that argument should be passed strictly or lazily, using whatever information is available at runtime. Subsequent arguments work similarly. As a result, functions are automatically specialized as arguments are passed, and we do not construct thunks if they are going to be consumed strictly by a subsequent callee. This can be implemented efficiently using just two call stacks, and there are various optimizations to the scheme. It is intended to augment, not replace, the existing static strictness analysis and argument passing.
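To make the moving parts a little more concrete, here is a minimal Haskell sketch of the state such an evaluator might track. All of the names here (Strictness, Frame, Machine, requestArg, resume) are hypothetical - this is just one way the two stacks and the strictness request could be represented, not a worked-out design.

-- A minimal, hypothetical sketch of the evaluator state described above.

-- When a callee reaches the point where it needs its next argument,
-- it tells the caller how that argument should be passed.
data Strictness = Strict | Lazy
  deriving (Show, Eq)

-- A suspended activation: enough information to resume a function
-- that has yielded control while waiting for an argument.
data Frame = Frame
  { frameName :: String   -- which function this activation belongs to
  , frameArgs :: [Value]  -- arguments received so far
  }

-- Placeholder for runtime values; a real design would distinguish
-- evaluated values from thunks more carefully.
data Value = IntV Int | Thunk (() -> Value) | ListV [Value]

-- The two stacks from the description: the call stack holds the
-- currently running activations, the save stack holds activations
-- that have yielded while waiting for an argument from their caller.
data Machine = Machine
  { callStack :: [Frame]
  , saveStack :: [Frame]
  }

-- Yielding for an argument: pop the running frame from the call stack,
-- park it on the save stack, and report the requested strictness.
requestArg :: Strictness -> Machine -> (Strictness, Machine)
requestArg s (Machine (f:cs) ss) = (s, Machine cs (f:ss))
requestArg s m                   = (s, m)  -- degenerate case: nothing running

-- Resuming: move the parked frame back onto the call stack with the
-- argument the caller supplied.
resume :: Value -> Machine -> Machine
resume v (Machine cs (f:ss)) =
  Machine (f { frameArgs = frameArgs f ++ [v] } : cs) ss
resume _ m = m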

Here's an example working through this for foldl, which we'll assume has the following implementation:

foldl f z l = case l of 
  [] -> z
  h:t -> foldl f (f z h) t

I'm also going to assume we've done some static strictness analysis and determined that all branches evaluate z, and that therefore the h:t branch evaluates f (since all branches evaluate z, and in the h:t case f appears at the head of the expression passed as z in the recursive call). Suppose we call this with foldl (+) 0 [1,2,3,4].

  1. The caller pushes foldl onto the call stack. foldl begins evaluating with no arguments. It gets as far as the case on l. It then requests l strictly, since it is about to evaluate it anyway.
  2. To request the argument, foldl pops its currently running frame from the call stack and pushes it onto the save stack. It then resumes the caller, now at the top of the call stack, telling it that the argument is requested strictly.
  3. The caller passes the argument as requested - if the caller were itself receiving these arguments as function parameters, it would propagate the strictness request of foldl to its caller.
  4. To resume foldl, the caller pops foldl from the save stack and pushes it back onto the call stack, giving it the (strictly evaluated) list it requested.
  5. Now we hit the interesting case: inside the h:t branch, we know that z is strict (this is known statically). We also know that f can now be evaluated, so we request this argument strictly from our caller. With f now evaluated, we can propagate its strictness information. We know we will be evaluating f z h - what we did not know until runtime was that f was plus (let's say (+) specialized to Int), so static strictness analysis has no choice but to pass f z h as a thunk. Now that we know f is strict in both its arguments, the call f z h means we can fully evaluate z (which we do by requesting z strictly from our caller), then h, and then f z h itself.

Each step of the iteration works similarly and foldl ends up running in constant space. I'm handwaving a lot here, but in general I want an evaluation order that is totally predictable in its space usage - values are immediately forced as soon as their consuming functions are known at runtime. The consuming functions tell us if an argument will ultimately be forced so we find out sooner rather than building up enormous thunks.
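For comparison, here is how the space issue shows up in GHC today: the strict foldl' from Data.List is the manual workaround that this evaluation strategy would make unnecessary for cases like this.

import Data.List (foldl')

-- With lazy foldl, the accumulator builds up a chain of thunks
-- (((0 + 1) + 2) + ...) that is only forced at the very end, so this
-- can exhaust memory for large inputs (absent optimizations):
lazySum :: Int
lazySum = foldl (+) 0 [1 .. 10000000]

-- foldl' forces the accumulator at each step, so it runs in constant
-- space. The proposed evaluation strategy aims to get this behaviour
-- automatically, because once (+) is passed in at runtime it is known
-- to be strict in both arguments.
strictSum :: Int
strictSum = foldl' (+) 0 [1 .. 10000000]

main :: IO ()
main = print strictSum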

This needs some serious whiteboarding, but assuming it is at all sensible, here's what I propose doing:

  • Come up with an instruction set for this evaluation model, and write a simple interpreter for it (a rough sketch of one possible instruction set follows this list)
  • Write a compiler for a toy functional language to this instruction set, including the basic static analysis needed to kickstart the dynamic analysis
  • Try writing some programs with it
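For that first bullet, here is one hypothetical shape such an instruction set could take, building on the Strictness and Value types from the earlier sketch. These constructors are invented for illustration; the real set would fall out of the whiteboarding.

-- A hypothetical instruction set for the two-stack, strictness-propagating
-- machine sketched earlier. Purely illustrative.
data Instr
  = PushCall String        -- push a new activation for the named function
  | RequestArg Strictness  -- yield to the caller, asking for the next
                           -- argument with the given strictness
  | SupplyArg Value        -- caller supplies an argument and resumes the
                           -- frame on top of the save stack
  | Specialize             -- record runtime strictness info learned from an
                           -- argument (e.g. that f is strict in both
                           -- arguments) for use on later arguments
  | Return Value           -- pop the current frame and return a value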

One other interesting idea: I wonder if there's some way to embed this evaluation model in GHC itself.

A code database for Haskell and refactoring combinators

The other project I'm interested in working on is a code database for Haskell, and a Datalog interpreter to go with it. Using this database and the Datalog query language, I then want to implement a set of refactoring combinators. A "refactoring" is simply a compilation-preserving function from one code database to another. I've started tinkering with a set of combinators that individually preserve compilation and can be composed to allow arbitrary code transformations. I wrote up some ideas for that here:

... Refactoring times in this new model will go from weeks or months to hours, and writing code to transform a codebase will become a separate but critical skill, distinct from the usual act of programming. That is, programmers do not simply conceive of a refactoring (which is often quite simple to express to another programmer), then begin a tedious, manual and error-prone process of text munging to implement it. Instead, the programmer conceives of a refactoring, then conceives of a code transforming program to implement the refactoring, then applies this transformation to the code database, all in the span of a few hours.

... First, I am not advocating for Datalog syntax. I don't care about that. The key functionality enabled by Datalog over and above the relational algebra is the ability to express transitive closure and mutual recursion that is guaranteed to terminate. Together these features enable many of the common queries we'd like to express in transforming and querying our codebases. For instance, here is a hypothetical query to find all references to a given function id, fid. Don't worry if the syntax looks alien or doesn't make sense. The key point is that this query is just a few lines of code to express, and it can be reused and built upon.

-- propagate reference to containing apply
refs(Id) :- apps(Id, fid, _). 
refs(Id) :- apps(Id, _, fid).
refs(Id) :- refs(X), apps(Id,X,_).
refs(Id) :- refs(X), apps(Id,_,X).
-- any lambda whose body is or contains fid
-- is considered to reference fid
refs(Id) :- lambdas(Id,_,Id1), refs(Id1).
refs(Id) :- lambdas(Id,_,fid).

Much of the analysis required to implement refactorings has this sort of "transitive-closure" feel to it - you need to do something to the "direct" callers, then do some transformation for their callers as necessary, and so on.
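To give a feel for what the Datalog engine has to compute, here is a small Haskell sketch of the same kind of query: a naive fixpoint over an in-memory apps relation. The representation (one triple per application node) is invented for illustration and is much cruder than a real code database would be.

import qualified Data.Set as Set
import Data.Set (Set)

-- A hypothetical, stripped-down apps relation: each application node is
-- (nodeId, functionNodeId, argumentNodeId), mirroring apps(Id, F, X)
-- in the Datalog example above.
type NodeId = Int
type Apps = [(NodeId, NodeId, NodeId)]

-- All nodes that directly or indirectly reference the given node: start
-- from applications that mention it, then keep adding any application
-- that mentions an already-known reference, until nothing new appears.
-- This is the naive fixpoint a Datalog engine would compute for the
-- refs rules (the seed node itself is removed from the result).
refs :: Apps -> NodeId -> Set NodeId
refs apps fid = Set.delete fid (go (Set.singleton fid))
  where
    go known =
      let step = Set.fromList
                   [ i | (i, f, x) <- apps
                       , f `Set.member` known || x `Set.member` known ]
          known' = known `Set.union` step
      in if known' == known then known else go known'

-- Example: node 10 applies node 1 (our fid) to node 2, and node 11
-- applies node 3 to node 10, so both 10 and 11 reference node 1.
example :: Set NodeId
example = refs [(10, 1, 2), (11, 3, 10)] 1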

Here's what I propose for this project:

  • Implement Datalog, possibly backed by just in-memory data structures, or maybe tied to something like SQLite. Or, if there's an existing free Datalog interpreter and backend somewhere, let's see if we can use that.
  • Come up with the normalized Datalog representation for the Haskell AST and type information - besides just the AST I think you'll need to know all the type information. Is there some way to use the GHC API to get the type of all expressions in the program?
  • Implement or steal a Haskell parser, and write code to translate the parsed code to the normalized Datalog representation. As a proof of concept, take some existing Haskell project and "code-database-ify" it.
  • Come up with a good set of refactoring combinators and implement them using Datalog. As a proof of concept, use the combinators to express some nontrivial refactoring - for example, make this value monadic rather than pure, and propagate the change in calling convention to all direct and indirect callers as needed. This is exactly the sort of refactoring that is trivial to describe to another Haskell programmer, and is totally mechanical, but is still done via a tedious process of text munging. (A rough sketch of what the combinator types might look like follows this list.)
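For that last bullet, here is a rough Haskell sketch of the kinds of types involved. Every name here (CodeDB, Refactoring, renameBinding, monadifyBinding) is hypothetical; the point is just that refactorings compose as ordinary functions from one code database to another.

-- A hypothetical sketch of refactoring combinators over a code database.
-- All names are invented for illustration, and the bodies are stubs.

-- Stand-in for the normalized Datalog representation of a codebase.
data CodeDB = CodeDB  -- details elided

-- A refactoring is a compilation-preserving function between code databases.
type Refactoring = CodeDB -> CodeDB

-- Some hypothetical primitive combinators:
renameBinding :: String -> String -> Refactoring
renameBinding _old _new db = db  -- stub

-- Rewrite a pure binding to return its result in a monad, and update all
-- direct and indirect call sites to match the new calling convention.
monadifyBinding :: String -> Refactoring
monadifyBinding _name db = db    -- stub

-- Because refactorings are just functions, they compose with (.), and
-- bigger refactorings are ordinary Haskell programs.
cleanup :: Refactoring
cleanup = monadifyBinding "lookupUser" . renameBinding "usr" "user"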

If all this is too much, I propose not doing this for Haskell but instead for a toy functional language with a very simple AST and type system.

2 comments:

PJW said...

I like the goals of both projects. I lean towards the refactoring project more however. I would enjoy following your progress on either if the code is accessible. I would enjoy contributing as well in what ways I am able.

Paul Chiusano said...

@PJW - Check out https://github.com/pchiusano/datalog-refactoring - if you're at the hackathon, come talk to me - maybe there is something you can help out with!