I just got back from visiting Northeastern and Harvard where I yet again flogged a version of my POPL talk. Olin Shivers was my host at Northeastern and Greg Morrisett at Harvard. It was a bit of a rushed visit, but a lot of fun nonetheless.

Both Greg and Olin are very interested in making the next big jump in programming languages, and they both think that next step will require better ways of reasoning statically about programs. I think they’re dead-on about the direction, but I think they’ve got their work cut out for them. It will be hard to beat ML, because ML sits in a kind of sweet spot: make it a little bit better in one aspect, and you give something up in another. I can think of two ways in which this is true. The first has to do with the expressiveness of the type system. If you make the type system much more expressive, you generally have to give up a lot on the type-inference side and in the quality and comprehensibility of error messages. You can actually see this at work in OCaml: some of the more advanced features, like polymorphic variants and objects, do indeed suffer from more obscure error messages and worse type inference than the rest of the language. I think polymorphic variants in particular are worth the trouble, but it underlines the fact that extending the Hindley-Milner type system is tricky. And compared to some of the things Olin and Greg and others in the PL community are thinking about, OCaml’s extensions to HM are pretty tame.
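To make the polymorphic-variant point concrete, here’s a small sketch (my own illustrative example, with a made-up `color_name` function): polymorphic variant constructors need no prior type declaration, and the open/closed row types the compiler infers for them (the `[< ... ]` and `[> ... ]` annotations) are exactly what surfaces in the notoriously verbose error messages when two variant sets fail to unify.

```ocaml
(* Polymorphic variants: the constructors `Red and `Green need no
   prior type declaration.  The compiler infers the row type
   [< `Green | `Red ] -> string for this function, and it is these
   row annotations -- not a simple named type -- that appear in the
   error message if you later apply it to, say, `Blue. *)
let color_name = function
  | `Red -> "red"
  | `Green -> "green"

let () =
  assert (color_name `Red = "red");
  print_endline (color_name `Green)
```

The flexibility is real (two unrelated functions can share constructors without sharing a type declaration), but the error messages quote the full inferred rows rather than a type name, which is where the extra obscurity comes from.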

The second way in which ML is in a sweet spot has to do with its execution strategy. In ML you can write code that is reasonably “declarative” (by which I mostly mean that you can express your logic cleanly and concisely), and at the same time there is a straight-ahead execution strategy that runs the code reasonably efficiently. You can make things a little more declarative, but you often give up quite a bit in the predictability of your code’s performance. There are plenty of other languages where you can see this tradeoff play out. Haskell’s laziness allows for a more declarative style at the expense of making it harder to think about space complexity (and indeed, Haskell’s designers are aware of this; Simon Peyton Jones likes to say that he thinks the next ML should be pure and the next Haskell should be strict). SQL and Prolog make it possible to specify queries without worrying much about how the search for an answer is performed, at the cost that both time and space complexity become hard to reason about.
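The laziness tradeoff can be sketched even in OCaml, using its explicit `Lazy` module (a hand-rolled illustration of the idea, not a claim about how Haskell’s runtime actually works): both functions below compute the same sum, but the strict one runs in constant space, while the deferred one quietly accumulates a chain of thunks whose cost only appears when the result is finally forced.

```ocaml
(* Strict sum: a tail-recursive loop whose accumulator is a plain
   int, so it runs in constant space. *)
let sum_strict n =
  let rec go acc i = if i > n then acc else go (acc + i) (i + 1) in
  go 0 1

(* Deferred sum: the accumulator is a suspension, so each step
   allocates a new thunk pointing at the previous one.  Nothing is
   computed until Lazy.force at the end, which then unwinds the whole
   O(n) chain -- the shape of the classic lazy-accumulator space
   leak that Haskell programmers must learn to spot. *)
let sum_lazy n =
  let rec go acc i =
    if i > n then acc else go (lazy (Lazy.force acc + i)) (i + 1)
  in
  Lazy.force (go (lazy 0) 1)
```

The two produce identical answers (`sum_strict 100` and `sum_lazy 100` are both `5050`); the difference is invisible in the code’s logic and lives entirely in its space behavior, which is precisely what makes it hard to reason about.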

ML is hardly perfect. But it’s such a nice compromise that it’s going to take a real leap forward on the research side to definitively surpass it.