CPS vs. ANF

Posted by Heidy

So it seems as if I’m doing a bit better; I’m pretty glad that this cold is rather minor. This is possibly the last post that I’ll be writing until I return to Tallahassee. Seeing that I’ll be landing in Jacksonville first, I have to rest, repack all my things, and then drive to Tallahassee before settling down and writing any future posts. I finished reading the “Compiling with Continuations, Continued” paper, which introduces CPS languages that support contification transformations and demonstrates an efficient graph-based representation of CPS with linear-time shrinking reductions. The languages also make a syntactic distinction between heap-allocated functions and local continuations, which can be compiled as jumps. I won’t be discussing the details of the CPS languages here, but rather the advantages of CPS (continuation-passing style) over ANF (administrative normal form) that the paper addresses. If you’d like to know more about the introduced CPS languages, I highly suggest reading this paper, as it is a fairly easy read.

The author of this paper, Kennedy, puts forward an argument that ANF is not, in fact, an equally beneficial alternative to CPS. As I mentioned in my previous blog post, Sabry and Felleisen claimed that “{\beta\eta} + A-reductions on a call-by-value language is equivalent to {\beta \eta} on a CPS’ed call-by-name language”, but Kennedy argues that the normal form is not preserved. When applying transformations such as function inlining, a more complex form of {\beta}-reduction is required to re-normalize the let constructs in ANF and restore the proper normal form; page 2 of the paper provides an example. Whereas in CPS this is plain substitution (the term is traversed and the continuation variable is simply replaced by the actual continuation), ANF and monadic languages must rely on what the paper calls commuting conversions, such as

let y {\Leftarrow} (let x {\Leftarrow} M in N) in P
{\rightarrow} let x {\Leftarrow} M in (let y {\Leftarrow} N in P).
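
To make the commuting conversion a bit more concrete, here is a tiny Haskell sketch. The Term type and the renormalize function are my own toy illustration, not anything taken from the paper:

```haskell
-- A toy ANF-ish term language: only variables may appear in applications,
-- and "let" is the sole way to name an intermediate result.
data Term
  = Var String
  | App String String        -- application of variables, as in ANF
  | Let String Term Term     -- let x <= m in n
  deriving (Show)

-- The commuting conversion:
--   let y <= (let x <= m in n) in p  ==>  let x <= m in (let y <= n in p)
-- After inlining a function body, re-normalizing hoists nested lets one at
-- a time, and each hoist walks the tail of the term again.
renormalize :: Term -> Term
renormalize (Let y (Let x m n) p) = renormalize (Let x m (Let y n p))
renormalize (Let y m p)           = Let y m (renormalize p)
renormalize t                     = t

main :: IO ()
main = print (renormalize
  (Let "y" (Let "x" (App "f" "a") (App "g" "x")) (App "h" "y")))
-- prints: Let "x" (App "f" "a") (Let "y" (App "g" "x") (App "h" "y"))
```

In CPS the same inlining step is a single substitution, so no re-normalization pass like this is needed.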

Kennedy points out that the need for commuting conversions pushes the cost of normalization up to O({n^2}), while applying the corresponding reductions on CPS terms takes only O({n}) time. Any practical programming language also includes conditional expressions, and these require either commuting conversions or a “join point”: a let-bound conditional is not in normal form in ANF, so the surrounding context must either be duplicated into both branches or factored out into a join-point function. Thus, the true correspondence of ANF and monadic languages to CPS reductions comes at the cost of increased complexity. Although these drawbacks of ANF are clearly about complexity, CPS has efficiency issues of its own: CPS introduces administrative redexes, which ANF successfully avoids, and “the cost of allocating closures for lambdas introduced by the CPS transformation” is another concern. Despite this, Kennedy argues that “the complexity of CPS terms is really a benefit, assigning useful names to all intermediate computations and control points,” and I must concur. Seeing that both ANF and CPS have their own pros and cons with respect to complexity, the explicit control flow of CPS terms still gives it clear syntactic advantages over ANF. I believe it comes down to personal preference at this point, but I find the simplified {\beta}-reductions of CPS to be much more elegant than the direct style of ANF.
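
To show what the join points look like on the CPS side, here is another hedged Haskell sketch. The constructors below are my own simplification of a letcont-style language, not Kennedy’s exact syntax; the point is that the join point is just a named, lexically scoped continuation that both branches jump to, so a let-bound conditional never needs re-normalizing:

```haskell
-- A toy CPS-ish language with second-class (local) continuations.
data Cps
  = Jump String String              -- jump to continuation k with argument x
  | If String Cps Cps               -- branch on a boolean variable
  | LetCont String String Cps Cps   -- letcont k x = body in rest
  deriving (Show)

-- The ANF term "let x <= (if b then m else n) in p" either duplicates p in
-- both branches or needs an explicit join point. In CPS, p's entry point is
-- simply named as a continuation j, and both branches jump to it.
example :: Cps
example =
  LetCont "j" "x" restOfProgram
    (If "b" (Jump "j" "m") (Jump "j" "n"))
  where
    restOfProgram = Jump "halt" "x"   -- stand-in for p

main :: IO ()
main = print example
```

Since j is lexically scoped and never escapes, it can be compiled as a plain jump, which is exactly the distinction between local continuations and heap-allocated functions mentioned earlier.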

P.S. I installed the LaTeX plugin, so now I can tediously use symbols whenever I see fit!

3 comments on “CPS vs. ANF”

  1. Glad to see I’m not the only one who really enjoyed this paper!

    I think the past efficiency concerns of CPS transformations which Kennedy referenced do not apply to his use of second-class continuations, since they are just lexically-scoped basic blocks.

    1. Ah, I never considered his second-class continuations. That is indeed a good point. I suppose I’m mainly referring to the attacks on CPS by Flanagan and Sabry.
