Bit Hacker

The long road to the Ph.D.

Lexical Data: Lexical Scoping is a Right, Not a Privilege

Thu Feb 20 13:45:55 2014

I had an interesting conversation with Edward Amsden yesterday. The crux of it was his claim that modern functional languages do an incredibly poor job of scoping data definitions. Languages like Scheme, ML, and Haskell can easily and cleanly scope any value you give them through let and lambda, and the variables bound by these forms don't have a large scope or a long lifetime: they don't live as top-level definitions. So why must data-type definitions?

The language Idris introduces namespaces, which help ease the problem, but programmers must still define data types in a namespace-global way:

namespace Foo
  data Env : Type -> Type where
    Empty : Env a
    Ext : a -> Env a -> Env a

  interp : Exp -> Env Val -> Val
  interp = ...

It's interesting, though. If I would like to define an interpreter of type Exp -> Val that implicitly uses some environment to find the result, without wishing to expose that environment to the user, I can easily write the full version with a hidden helper---even in Haskell:

interp :: Exp -> Val
interp e = interpH e []
  where
    interpH :: Exp -> Env Val -> Val
    interpH = ...

And yet, the environment type must still be exposed in the global namespace of the enclosing module:

data Env a = Empty | Ext a (Env a)

This is globally exposed: not only does interpH use it, but any other piece of code is free to use this environment definition anywhere in the module. The philosophical argument behind functional programming is that functions are first-class values. So why do we insist that functions get this treatment, but data-type declarations do not?

That's the crux: why can't I also define lexically-scoped data, close to the way I can define lexically-scoped helper functions? Ideally, I'd like to write this code:

interp :: Exp -> Val
interp e = interpH e []
  where
    data Env a = Empty | Ext a (Env a)
    interpH :: Exp -> Env Val -> Val
    interpH = ...

I want the entire Env type lexically enclosed, just as helper functions are. Further, I'd like every kind of definition to get such treatment---even type-class instances. If I'd like, I should be able to write this code:

interp :: Exp -> Val
interp e = interpH e []
  where
    data Env a = Empty | Ext a (Env a)
    instance Show (Env a) where ...
    interpH :: Exp -> Env Val -> Val
    interpH = ...

As it happens, Idris already supports this, quite nicely. (David Christiansen has provided this code.)

module Teeeest

fnord : Nat -> Nat
fnord z = case Bar of
            Bar => S z
            Something => z
  where data Foo = Bar | Something

--- REPL
*Teeeest> fnord 5
6 : Nat
*Teeeest> Foo
(input):1:1:No such variable Foo
*Teeeest> Bar
(input):1:1:No such variable Bar

It's likely this works in Agda, too. And this is important: data definitions should be subject to lexical scoping. Why should functions get preferential treatment?

I Wrote a Grant

Fri Jan 17 19:19:40 2014

So I wrote a grant. It wasn't particularly good, it isn't likely to get funded, and it got finished in far too little time to be shiny, but it got written. Some days that's all we can ask for. The experience was intense and amazing, and it yielded some interesting insights and ideas. I've decided to reprint the overview here for those who are interested or otherwise curious.

Modular Compilers using Dependent Effect Handlers

The use of monads and monad transformers for constructing modular programs is now well-established. In this context, a "modular program" is an extensible program constructed from reusable blocks. This technology was first demonstrated and popularized for interpreters, but it has proven difficult to extend this approach to compilers.

Recent advances in combining computational effects using effect handlers, in the implementation of dependent types, and in compiler architectures such as the nanopass framework provide new opportunities that we may exploit to design and implement modular, composable compilers. Our main objective is to refine this theory and adapt it to write a full compiler for the dependently-typed language Idris.

Idris is currently implemented in Haskell and provides a number of features, including full dependent types, type classes, dependent records, pattern matching, a tactic-based theorem prover, and totality checking. We propose to develop a self-hosting compiler for Idris, from the type-checker to the executable generator, building the necessary language facilities along the way. Once complete, we will harness the dependent type system of Idris to organize the compiler as a collection of composable and extensible modules that can be used to prove language properties, or to perform aggressive, type-guided optimizations, including novel non-standard analyses and transformations related to information flow and energy consumption.

Wish me luck!

A Semester's Retrospective

Fri Dec 20 18:18:22 2013

It's been exactly three months since I've written anything for this blog. There are a number of reasons for this: I was busy, I was anxious, but mostly it's because I'm starting to feel like a second-class graduate student without anything to say---or that anyone wants to hear. I try to write about my work, but the more I do, the more I realize I really hate working on contracts, so instead I write nothing and life goes on.

I'm going to make an effort, though, so here's my best shot.

We moved in August, into a larger apartment. Moving was expensive and time-consuming, but it was fun to be in a new place. Then, two weeks into living here (actually, a week and a half), we found out we had bed bugs, left behind by the previous tenants. This meant that everything we had just unpacked needed to be repacked into plastic bags for fumigation---twice. We spent $500 on our end, and (thank the heavens) the apartment company covered the exterminator's fee of ~$900.

In all of this, I was still trying to finish my qualifying exams. It went poorly, to say the least. My qualifying exam date got pushed back and back and back and back, but in the meantime I made good on my promise to buy an Apple laptop and got a paper on rKanren accepted to the Scheme and Functional Programming Workshop.

Next, I successfully finished and orally defended my qualifying examination.

Then my semester got weird.

I owed Amr a lot of work accrued during my quals. A lot. So I started to do it, vigorously. My work went from about 15 hours a week to about 30, and I started getting results. I found a huge bug in our contract system's operational semantics, in which it was possible for contracts to go unchecked, but that's a discussion for another time. We also did some work on effect handlers, which may or may not ever see the light of day. They seem like a good fit for contract monitoring, however.

My grant is up at the end of the year, so I started flailing around for grant ideas, and finally I found one that seemed to stick: writing a self-hosting Idris compiler. I have no idea if a month is enough time to write a grant, and I suspect it isn't, but that won't stop me from trying.

While all of this was going on (I mean all of it), I was also playing in tabletop games in a group that can't seem to decide what it wants. Most of us sitting at the table want rules-light games, a different, intersecting subset wants "too cool" situations, and yet another intersecting subset just wants a good story. We're almost pulling ourselves apart, but we've taken a hiatus over winter break, so we'll see how it comes out on the other side of things. (Meanwhile, I've been involved in a lot of play-by-post on SomethingAwful's forums, so I'm still getting my fix in.)

I'm just really worried about the grant: I don't know how to write a grant, or how to focus a grant, or what I'm doing. I don't have a ton of guidance, so I'm just going to try my best and see how I land, I suppose. Going back to teaching classes wouldn't be the worst thing ever...

rKanren: Guided Search in miniKanren, Part 1

Fri Sep 20 17:29:22 2013

In a previous post, I mentioned a rough idea to allow users to perform guided search in miniKanren. As part of my qualifying examination with Dan Friedman, I rewrote the deep structural segments of miniKanren to allow users to do exactly that! (The code will be posted in a public repository when the paper is published or rejected.)

This post will discuss the basic usage of this new search strategy through the condr form. This form uses a revised implementation to associate ranks with individual search paths, allowing us to guide the answer search.

Generic Answer Order

To help describe the implementation and how it "ticks", we'll start with some simple examples of the new search strategy. First, consider the following miniKanren programs using conde and their associated output:

> (run* (q)
    (conde
      ((== q #t))
      ((== q #f))))

(#t #f)

> (run* (q)
    (conde
      ((== q #f))
      ((== q #t))))

(#f #t)

Using conde causes the goals listed to be explored in a first-in, first-out style, exploring the first goal of the first clause, then the first goal of the second clause, and so on.
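To see this ordering at work, here is a small illustrative relation of my own (not from the rKanren paper); it assumes a standard miniKanren implementation is loaded, and nats is a hypothetical helper relating q to some natural number at least n:

```scheme
;; Illustrative sketch, assuming a standard miniKanren is loaded.
;; nats is a hypothetical relation: q is some natural number >= n.
(define nats
  (lambda (n q)
    (conde
      ((== q n))               ; ground out with the current number
      ((nats (add1 n) q)))))   ; or keep counting upward, forever
```

Because the search keeps taking a step in each clause in turn, a query like (run 3 (q) (nats 0 q)) should produce (0 1 2): the grounded answers keep flowing even though the second clause recurs without bound.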

Relying on clause order to change the answer order, however, is less than ideal: programmers must carefully modify programs to change the order of the answer stream, and the result may be unexpected. One alternative is to provide a new form, condr, which associates a numeric rank with each clause in order to guide the search:

> (run* (q)
    (condr
      (2 (== q #f))
      (1 (== q #t))))

(#t #f)

Complex Searches

While each search path is associated with a rank, there are a number of other considerations when designing guided searches. The most important is that more goals mean more work, and the work done during a computation contributes to its cost, and thus to the computation's rank. So if a search path is ranked lower but does significantly more work, the added cost may cause its answer to occur later.

> (run* (q)
    (fresh (a b)
      (condr
        (2 (== q #f))
        (1 (== q `(,a ,b))
           (== a #t)
           (== b #t)))))

(#f (#t #t))

This additional computational cost may be offset by increasing the ranks of other branches: each goal contributes one more point of work, so in the example above we would need to raise the first clause's rank to 4 to restore the expected behavior. Ranks and clause order interact in larger programs as well:

> (run* (q)
    (fresh (a b)
      (== q `(,a ,b))
      (condr
        (2 (== a 'a) (== b 'b))
        (1 (== a 'b) (== b 'a))
        (2 (== a 'a) (== b 'a))
        (1 (== a 'b) (== b 'b)))))

((b a) (b b) (a b) (a a))

Recursive Weights

The original motivation for developing condr was to find deeper answers more quickly. Large search trees are easily generated with recursive functions, but previous miniKanren implementations have been unable to return answers found through recursion earlier than the ground answers listed in the function's body.

Ground answers are answers found at the ends of search branches, and "grounding out" means arriving at the end of a search path and finding an answer. Consider the following function, written with conde, and an associated call. It has two recursive clauses and a single clause (the first one) that grounds the search path.

(define recur-e
  (lambda (e)
    (fresh (a b)
      (conde
        ((== e '(x)))
        ((== e `(b . ,a)) (recur-e a))
        ((== e `(a . ,b)) (recur-e b))))))

> (run 5 (q) (recur-e q))
((x) (b x) (a x) (b b x) (a b x))

The conde clause that grounds out first---in this case, the first clause---produces the first answer, and each subsequent answer is produced in order of the conde clauses. If we would like these answers in a different order, we may use condr to rank our desired answer order.

(define recur-r
  (lambda (e)
    (fresh (a b)
      (condr
        (10 (== e '(x)))
        (4  (== e `(b . ,a)) (recur-r a))
        (2  (== e `(a . ,b)) (recur-r b))))))

> (run 5 (q) (recur-r q))
((x) (a x) (b x) (a a x) (b a x))

This result may be somewhat unexpected: though the first clause is ranked significantly higher, and thus should seemingly occur later than the other clauses, it is still produced as the first answer. Luckily, the intuition here is straightforward: the other two clauses must eventually ground out via the first clause, and ranks are cumulative. As a result, the answer (x) has rank 10, while the answer (a x) has rank 2 + 10 = 12 and the answer (b x) has rank 4 + 10 = 14.

If we would like complex answers earlier, we must change the cost of grounding during later recursive calls. This is a relatively simple fix: each rank is a full Scheme expression that evaluates to a natural number, so we may use an extra parameter to track our recursive depth and use that information to change the grounding cost:

(define recur-r-n
  (lambda (e n)
    (fresh (a b)
      (condr
        ((if (< n 1) 10 1)
         (== e '(x)))
        (4 (== e `(b . ,a))
           (recur-r-n a (add1 n)))
        (2 (== e `(a . ,b))
           (recur-r-n b (add1 n)))))))

> (run 5 (q) (recur-r-n q 0))
((a x) (b x) (a a x) (x) (a b x))

Our recursion tracks the depth, and after the first recursive call the grounding cost drops from 10 to 1, causing the answers (a x), (b x), and (a a x) each to have a total cost lower than 10. This technique allows users to explore deeper branches of the search space before considering the shallow answers.


The behavior of condr is modeled after the A* search technique, with the traditional heuristic function replaced by simple Scheme expressions that evaluate to numeric values. (This may lead to inadmissible heuristics---heuristics that overestimate the cost to reach an answer---so programmers should take care to fine-tune the numeric values they use.) Consider revising recur-r by ranking the first clause with one hundred: 100 recursive steps would need to be taken before any ground answers were produced, and yet the answer order would be identical to the one presented.
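Stripped of the miniKanren machinery, the heart of such a ranked search is just a best-first traversal whose frontier is ordered by accumulated cost. The sketch below is my own plain-Scheme illustration of that idea---not rKanren's actual implementation---over trees written as (rank label child ...):

```scheme
;; Best-first search over a ranked tree: an illustrative sketch of the
;; idea behind condr, not the actual rKanren machinery.
;; A node is a list (rank label child ...); a leaf has no children.

(define (insert-sorted node frontier)  ; keep the frontier ordered by cost
  (cond ((null? frontier) (list node))
        ((< (car node) (caar frontier)) (cons node frontier))
        (else (cons (car frontier)
                    (insert-sorted node (cdr frontier))))))

(define (search tree n)                ; return the first n leaf labels
  (let loop ((frontier (list tree)) (acc '()) (n n))
    (if (or (zero? n) (null? frontier))
        (reverse acc)
        (let* ((node (car frontier))
               (cost (car node))
               (label (cadr node))
               (kids (cddr node))
               ;; a child's cumulative cost is its parent's cost plus
               ;; its own rank, just as condr's ranks accumulate
               (rest (let add ((f (cdr frontier)) (ks kids))
                       (if (null? ks)
                           f
                           (add (insert-sorted
                                 (cons (+ cost (car (car ks)))
                                       (cdr (car ks)))
                                 f)
                                (cdr ks))))))
          (if (null? kids)
              (loop rest (cons label acc) (- n 1))  ; a leaf grounds out
              (loop rest acc n))))))

;; A cheap deep branch is explored before an expensive shallow leaf:
;; > (search '(0 root (10 shallow) (2 mid (1 deep-a) (3 deep-b))) 3)
;; (deep-a deep-b shallow)
```

As in condr, lowering the rank along a deep branch lets its leaves surface before a shallow but expensive answer.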

More importantly, the condr form still performs a complete search: unlike the conda and condu forms, which provide users some control over search paths but ultimately discard certain answers, condr will eventually find every answer in a finite search space. Furthermore, condr is strictly more expressive than conde: conde's behavior may be reproduced by replacing each rank in condr with the number 0.
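As a quick illustration of that last point (again assuming the rKanren implementation is loaded), the very first conde program above can be recovered by giving every condr clause the rank 0:

```scheme
;; conde behavior recovered from condr: every clause costs the same,
;; so answers appear in the usual clause order.
> (run* (q)
    (condr
      (0 (== q #t))
      (0 (== q #f))))
(#t #f)
```

With identical ranks, nothing distinguishes the paths but clause order, which is exactly conde's behavior.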

Research Blog!

Sun Sep 15 13:40:30 2013

This blog has been dormant for over six months because, simply put, my life got crazy. I don't particularly feel the need to explain what, exactly, happened, but I have a lot of posting to make up for with all the research I've been getting done. Some posts to look out for include:

  • How I crashed and burned my ICFP paper.
  • Working with someone living in another time zone and how much fun it can be.
  • Writing Extensible Effects, and how they affect you. (Or: why Monads might be the wrong abstraction.)
  • Adding a guided search to miniKanren.
  • How and why I think miniKanren should be rewritten from the ground up, and why other people disagree.
  • How to say No: basic algorithms for syllogistic logics and why they're more subtle than you'd think.
  • My Ph.D. topic and the direction of my future research.

Give me time; my Qualifying Exam projects are nearly wrapped up and then I should have some time to get some real posts in here. I've missed my readers, and I've probably lost all of them, but it's time to get this blog back on the road, and, more importantly, back to talking about computer science.

Semantics Dipped in Lime
—Terrible CS Idea

Wed May 8 02:07:55 2013

Wed Feb 20 16:50:24 2013

ACM doesn't understand Open Access

Wed Feb 6 10:44:36 2013

The ACM recently posted an article with the slug line "The age of open access is upon us. Increasingly, the consensus of authors of research articles and their funding institutions is that the fruits of taxpayer-supported research should be freely available to the public."

This article was exciting! Open access is a fairly hot topic right now, and the way that the ACM systematically locks up research in my field behind paywalls is frustrating and flustering. (Luckily, my university is large enough to pay for access to their publications, so if I absolutely must have a paper I can get it on campus.)

The next paragraph, though, falls flat on its face. The authors claim that maintaining "large digital archives" over the long term "incur[s] significant cost." They describe an alternative payment plan to this end, proposing that authors pay these costs.

Ultimately, the ACM article proposes four ways for the organization to make money to maintain these archives, which boil down to author payment, conference payment, or, well, author and conference payment. The people they must pay are system administrators and web developers. And while these things cost money, they are not impossible to pay for if you abandon paywalls.

For comparison, the Internet Archive stores over ten petabytes of data, and Wikipedia has roughly twenty-nine million articles. These sites take donations and keep their doors open, partly because not all of their pages are served every month (or even every year). And those organizations do fine.

As for the ACM and ACM publications, here is a list of things that I think would serve as a much better business model and that few, if any, researchers could complain about:

  1. Asking for donations for the ACM Digital Library. It works for plenty of other companies, and I can't imagine that a company as wide-reaching as the ACM would have trouble finding funding support from its members and users if it came with the promise of no more paywalls.

  2. Requiring that at least one author of each paper be a full ACM member. This may seem a little like "pay to publish", but is distinctly different. It encourages membership in the premiere CS organization among its academic members while also ensuring that work published by the ACM is truly representative of its members (because its members wrote it).

  3. Offering premium services as part of the Digital Library. The Digital Library lacks a slew of features that I'd greatly enjoy having, including author maps (large webs with links between authors who have published together, perhaps with node sizes based on publication count), content tags for papers (perhaps with a similar web, and searching on tags), pages for the conferences themselves (why does the ACM not tie in more closely with its conferences?), and possibly even links between follow-up papers (it'd be great to know that the paper I'm reading from 2008 has further results presented in 2010). These are services worth paying for, and these premium accounts could go a long way toward running the ACM Digital Library. If the ACM Publication Board insists that hosting these papers is a service worth paying for, it should be.

  4. Charging a small overhead per paper published in conferences and journals, in such a way that these costs may be folded into conference fees. There were 42 papers published at POPL this last year, and there were at least 500 people in attendance. If each attendee paid $20 in conference fees toward publication, that would be $10,000 toward hosting and maintaining the conference proceedings in the ACM Digital Library. And given the modern costs of web hosting, this should cover at least the next twenty years of hosting 42 PDFs.

If all of these solutions were implemented, I cannot imagine that the ACM would need to continue its paywall behavior (or it could at least greatly reduce costs). Their arguments and justifications feel reminiscent of the MPAA's and RIAA's arguments in favor of their failing business models as we move into the digital age, and it's a Google search away to see how well that's working out for those companies.

My next laptop will be an Apple

Mon Feb 4 15:30:50 2013

My next laptop will be a MacBook Air. To many, this may seem like a moment of sheer hypocrisy on my part, since I have long been a loud critic of Apple's products, so I feel I should justify my choice. I have often loudly announced my distaste for the extra price attached to their hardware (the "Mac tax"), the design philosophy behind their user interface (such as window menu bars across the top of the screen), and the culture built up around the computers (including, but not limited to, the arrogance and snobbishness of many of the users).

I've owned several laptops now, and each has run Linux. I've owned a Toshiba that handled Ubuntu fairly well but refused to suspend and died young of power failure; a wonderful little Lenovo IdeaPad whose wifi would only work if the computer was booted, suspended, and then had its wifi kernel drivers unloaded and reloaded; and I'm currently borrowing a Dell Inspiron 11z, which has the distinct issues of Intel-based graphics drivers with no Linux support and a tendency for X to lock up completely.

The thing is, none of these issues are particularly uncommon with Linux on laptops. I was once explaining these woes to Andy Keep, and he related to me a story from a Ruby conference talk he once attended:

So, a note on laptops and operating systems. If you're running Windows, what are you doing, man? How do you hope to get any code written that way? If you've got a Mac, welcome among your peers. Come, join us!

And if you're running Linux... well, keep fighting the good fight.

I've been fighting a variant of that fight since I was 12. I've used Linux, in some form or another, on desktops and laptops for nearly a decade. I've tried most of the major distributions under the sun (from Ubuntu to Slackware, Gentoo to Arch, and a slew of others), and I've wrestled with everything from boot loaders to wireless drivers.

I've run Linux on laptops for six of those years, and I'll be honest: I'm tired of fighting the good fight. On no fewer than four separate occasions in my graduate career, I have had to borrow a laptop to give a presentation due to connectivity, graphics, or other issues. When I am expected to attend conferences and present talks, fumbling with a laptop or being unable to do work because of my chosen computing platform is completely unacceptable.

I am in need of a new laptop, and I would like something small and light-weight with a long battery life and killer form factor. It needs to provide a programming environment, preferably have a solid-state drive, and provide a robust CLI.

I briefly considered purchasing an Asus Zenbook and installing Linux on it, and then I looked up the Arch Wiki page for getting everything configured. The items listed on the page are, as with every laptop I've owned, laughably complex for arbitrary reasons. The lack of Linux support from laptop hardware manufacturers is their single biggest loss to Apple in the laptop market. The tech industry isn't switching to MacBooks because people want to pay extra or love the technology, but because they need software support that other manufacturers simply are not providing.

So this spring, when I purchase a new laptop to replace my now-dead Lenovo, I will be purchasing a 13-inch MacBook Air. I'll use GitHub and a small package manager, and when I need Linux or heavy lifting, I will SSH.

I'm sorry. The good fight is getting to be too much.

What if the reason pay phones are disappearing is to make it harder to get out of the Matrix?

Thu Jan 10 23:05:07 2013