Looking for a technical writer

Jane Street is looking to hire a technical writer. If you're interested, or know someone who you think would be a good match, here's the application link.

We've always believed that developers should spend time and effort documenting their own code, but at the same time, a great writer with a feel for the technology can raise the level of quality in a way that few developers can. And as we've grown, having someone dedicated to writing makes a ton of sense.

Here are the kinds of things we'd like to have a technical writer work on:

  • Training material. We have a training program that many new hires go through, including most new developers and all new traders. In that program, they learn about OCaml, our base libraries, our build system, the UNIX shell, Emacs, and our dev tools. Part of the job would be to help make the course better, both by improving what we have, and by adding new material.
  • Improving library documentation. While we expect developers to do a reasonable job of documenting their code, our most important libraries deserve the time and care to make them really shine. This is aimed both internally and externally, since a lot of these libraries, like Async, Core and Incremental, are open source.
  • Writing longer pieces. We need more tutorials and overviews on a variety of topics. Part of the work would be to create great new documentation, and part of it is to serve as an example for others as to what good documentation looks like. And where possible, we want to do this so that the documentation effectively compiles against our current APIs, preventing it from just drifting out of date.

In terms of skills, we want someone who is both a clear and effective written communicator, and who is good enough at programming to navigate our codebase, work through our tutorials, and write up examples. An interest in functional programming and expressive type systems is a plus, but you don't need to know any OCaml (the language we use). That's something we're happy to teach you here.

Caveat Configurator: how to replace configs with code, and why you might not want to

We have a new tech talk coming up on May 17th, from our very own Dominick LoBraico. This one is about how to represent configurations with programs. In some sense, this is an obvious idea. Lots of programmers have experienced the dysphoria that comes from watching their elegant little configuration format metamorphose into a badly constructed programming language with miserable tools. This happens because, as you try to make your configs clearer and more concise, you often end up walking down the primrose path of making your config format ever more language-like. But you never really have the time to make it into a proper language.

The obvious alternative is to just use a real language, one that comes with decent tooling and well-designed abstractions. (And ideally, a functional language, because they tend to be better at writing clear, declarative code.)
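To make that concrete, here's a tiny sketch (not from the talk; the types and names are invented for illustration) of what configs-as-code can look like in OCaml. Shared defaults become an ordinary function rather than a templating feature.

```ocaml
(* Hypothetical sketch: configuration as ordinary OCaml values, so
   abstraction and sharing come from the language, not a config DSL. *)
type host = { name : string; port : int; retries : int }

(* Shared defaults are just a function with an optional argument. *)
let standard ?(retries = 3) name port = { name; port; retries }

let config =
  [ standard "web-1" 8080
  ; standard "web-2" 8080
  ; standard "db-1" 5432 ~retries:10
  ]
```

The point of the talk is what happens after this obvious first step, which is where things get interesting.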

This talk discusses what happens when you try to put this obvious idea into practice, which is less, well, obvious. I like this kind of topic because it represents the kind of hard-won knowledge you can only get by trying something and screwing it up a few times.

So please join us! You can register here.

One more talk, two more videos

I'm happy to announce our next public tech talk, called Seven Implementations of Incremental, on Wednesday, April 5th, presented by yours truly. You can register here.

The talk covers the history of Incremental, a library for building efficient online algorithms. The need to update computations incrementally is pretty common, and we've found Incremental to be useful in creating such computations in a number of different domains, from constructing efficient financial calculations to writing responsive, data-rich web UIs.

The ideas behind Incremental aren't new with us; there is a lot of prior art, most notably Umut Acar's work on self-adjusting computation, on which Incremental is most directly modeled.

But there's a big gap between the academic version of an idea and a production-ready instantiation, and this talk is about crossing that gap. It discusses the seven different implementations we went through, and the various mistakes we made along the way to the one we now use in production.

So join us. I hope you enjoy seeing what we learned about building this kind of system, as well as hearing about the hilarious pratfalls along the way.

On another note, we have finally posted videos from our two previous talks, including Brian Nigito's talk on the architecture of the modern exchange, and Arjun Guha's talk on taming Puppet. And, of course, you can subscribe to our channel while you're there.

Jane Street Tech Talks: Verifying Puppet Configs

Our first Jane Street Tech Talk went really well! Thanks to everyone who came and made it a fun event.

Now it's time for another. We're planning for the series to feature a combination of voices from inside and outside Jane Street. This one is of the latter variety: on March 6th, Arjun Guha will be presenting On Verification for System Configuration Languages, which is about using static verification techniques for catching bugs in Puppet configs.

I've known Arjun for years, and he's both a good hacker and a serious academic with a real knack for finding good ways of applying ideas from programming languages to systems problems. Also, he has excellent taste in programming languages...

I hope you'll come! You can sign up here.

How to Build an Exchange

UPDATE: We are full up. Tons of people signed up for the talk, and we're now at the limit of what we feel like we can support in the space. Thanks for all the interest, and if you didn't get into this one, don't worry, we have more talks coming!

We're about to do the first of what will hopefully become a series of public tech talks in our NY office.

The first talk is on February 2nd, and is an overview of the architecture of a modern exchange. The talk is being given by Brian Nigito, and is inspired by our work on JX, a crossing engine built at Jane Street. But Brian's experience is much broader, going all the way back to the Island ECN, which in my mind marks the birth of the modern exchange.

I did some work on JX, and one of the things I was struck by is the role that performance plays in the design. In particular, JX uses a simple replication scheme based on reliable multicast that relies critically on the components of the system having high throughput and low, deterministic latencies.

This is a situation where performance engineering is done not so much to reduce end-to-end latency, but to act as a kind of architectural superpower, making it possible to build systems in a simpler and more reliable way than would be possible otherwise.

Anyway, I think it's a fascinating topic. If you're interested in coming, you can go here to get the details and sign up. (We have a registration step so we can get people through building security.)

Observations of a functional programmer

I was recently invited to do the keynote at the Commercial Users of Functional Programming workshop, a 15-year-old gathering which is attached to ICFP, the primary academic functional programming conference.

You can watch the whole video here, but it's a bit on the long side, so I thought an index might be useful. (Also, because of my rather minimal use of slides, this is closer to a podcast than a video...)

Anyway, here are the sections:

Hope you enjoy!

What the interns have wrought, 2016

Now that the interns have mostly gone back to school, it's a good time to look back at what they did while they were here. We had a bumper crop -- more than 30 dev interns between our London, New York and Hong Kong offices -- and they worked on just about every corner of our code-base.

In this post, I wanted to talk about just one of those areas: building efficient, browser-based user interfaces.

Really, that's kind of a weird topic for us. Jane Street is not a web company, and is not by nature a place that spends a lot of time on pretty user interfaces. The software we build is for our own consumption, so the kind of spit and polish you see in consumer-oriented UIs is just not that interesting here.

But we do care about having functional, usable user interfaces. The work we do is very data driven, and that rewards having UIs that are good at presenting up-to-date data clearly and concisely. And we want to achieve that while keeping complexity of developing these UIs low.

Historically, almost all of our UIs have been text-based. While I love the command line, it does impose some limitations. For one thing, terminals don't offer any (decent) graphical capabilities. But beyond that obvious constraint, getting all the pieces you need for a decent UI to work well in a terminal, from scrolling to text-entry widgets, requires a lot of work that just isn't necessary in a browser.

So this year, we've finally started pushing to make it easy for us to write browser-based applications, in particular relying on OCaml's shockingly good JavaScript back-end. This has allowed us to write web applications in OCaml, using our usual tools and libraries. As I've blogged about previously, we've also been exploring how to use Incremental, a framework for building efficient on-line computations, to make browser UIs that are both pleasant to write and performant.

That's roughly where we were at the beginning of the summer: some good ideas about what designs to look at, and a few good foundational libraries. So the real story is what our interns, Aaron Zeng and Corwin de Boor, did to take us farther.

Web Tables

Aaron Zeng's project was to take all of the ideas and libraries we'd been working on and put them to use in a real project. That project was web-tables, a new user interface for an existing tool developed on our options desk, called the annotated signal publisher. This service provides a simple way for traders to publish and then view streams of interesting tabular data, often based on analysis of our own trading or trading that we see in the markets.

The publisher fed its data to Catalog, our internal pub-sub system. From Catalog, the data could be viewed in Excel, or in our terminal-based Catalog browser. But neither of these approaches worked as well or as flexibly as we wanted.

Enter web-tables. What we wanted was pretty simple: the ability to display tabular data from the annotated signal publisher with customizable formatting, filtering and sorting. This involved breaking a lot of new ground, from figuring out how to do the sorting and filtering in an efficient and incremental way, to fixing performance issues with our RPC-over-websockets implementation, to figuring out a deployment and versioning story that let people easily create and deploy new views of their data.

One of the great things about the project is how quickly it was put into use. The options guys started using web-tables before the project was even really finished, and there was a tight feedback loop between Aaron and his mentor Matt Russell, who was using the tool on a daily basis.

Optimizing rendering

Aaron's web-tables work used incr_dom, a small framework that sets up the basic idioms for creating UIs using Incremental. As part of that work, we discovered some limitations of the library that made it hard to hit the performance goals we were aiming for. Corwin de Boor's project was to fix those limitations.

The key to building an efficient UI for displaying a lot of data is figuring out what work you can avoid. To this end, Corwin wanted to build UIs that logically contained thousands or even millions of rows, while only actually materializing DOM nodes corresponding to the hundred or so rows that are actually in view.

In order to figure out which nodes to render, he had to first figure out which nodes would be visible, based on the location of the browser's viewport. This in turn required a way of looking up data-points based on where that data would be expected to render on the screen.

Corwin did this by building a data structure for storing the expected height of each object that could be rendered, while allowing one to query for a node based on the sum of the heights of all the nodes ahead of it. He did this by taking an existing splay tree library and rewriting it so it could be parameterized with a reduction function that would be used to aggregate extra information along the spine of the splay tree. By integrating the reduction into the splay tree itself, the necessary data could be kept up to date efficiently as the splay tree was modified.
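As a rough illustration of the idea (this is not Corwin's code: the real version used a splay tree with a pluggable reduction, while this sketch hardwires the reduction to summing heights over a plain unbalanced binary tree), caching the total height of each subtree lets you find the row containing a given pixel offset in time proportional to the tree's depth:

```ocaml
(* Hypothetical sketch: each node caches the total height of its
   subtree, so a lookup by vertical offset can skip whole subtrees. *)
type 'a tree =
  | Leaf
  | Node of { left : 'a tree; row : 'a; height : int; total : int; right : 'a tree }

let total = function Leaf -> 0 | Node n -> n.total

(* Smart constructor that maintains the cached subtree total. *)
let node left row height right =
  Node { left; row; height; right; total = total left + height + total right }

(* Build a (balanced-ish) tree from a list of (row, height) pairs. *)
let rec of_list = function
  | [] -> Leaf
  | rows ->
    let n = List.length rows / 2 in
    let left = of_list (List.filteri (fun i _ -> i < n) rows) in
    let (row, height) = List.nth rows n in
    let right = of_list (List.filteri (fun i _ -> i > n) rows) in
    node left row height right

(* Find the row whose vertical extent contains [offset]. *)
let rec find_at_offset t offset =
  match t with
  | Leaf -> None
  | Node { left; row; height; right; _ } ->
    let above = total left in
    if offset < above then find_at_offset left offset
    else if offset < above + height then Some row
    else find_at_offset right (offset - above - height)
```

The splay-tree version keeps the same cached totals up to date as nodes are inserted, removed, and rotated, which is what makes the approach practical for live data.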

Corwin also spent a lot of time improving incr_dom itself, taking inspiration from other systems like React and the Elm language. We even corresponded a bit with Jordan Walke and Evan Czaplicki, the authors of React and Elm respectively.

One thing that came out of this was a neat trick for making the incr_dom API cleaner by using a relatively new feature of OCaml called open types. The details are a little technical (you can see the final result here and here), but I think what we ended up with is a bit of an advance on the state of the art.

There were a lot of other bits and pieces, like improving the handling of keyboard events in js_of_ocaml, creating a new incremental data-structure library called incr_select for more efficiently handling things like focus and visibility, and restructuring the incr_dom APIs to make them simpler to understand and open up new opportunities for optimization.

By the end, Corwin was able to build a demo that smoothly scrolled over a million rows of continuously updating data, all while keeping update times below 5ms. We're now looking at how to take all of this work and feed it into real applications, including Aaron's work on web-tables.

Sound like fun?

If this sounds like interesting stuff, you should consider applying for a summer internship with us!

Jane Street internships are a great learning experience, and a lot of fun to boot. You even get a chance to travel: interns get to visit Hong Kong, London or New York (depending on where they started!) as part of their summer.

And the projects I described here are really just a taste. Here are some other projects interns worked on this summer:

  • Building a low-latency order router.
  • Adding data representation optimizations to the OCaml compiler.
  • A service for simulating a variety of network failures, to make it easier to test distributed applications.
  • Making it possible to write Emacs extensions in OCaml (this was actually Aaron Zeng's second project).

and that's still just a small sample. One thing I love about the work at Jane Street is the surprising variety of problems we find ourselves needing to solve.

Do you love dev tools? Come work at Jane Street.

In the last few years, we've spent more and more effort working on developer tools, to the point where we now have a tools-and-compilers group devoted to the area, for which we're actively hiring.

The group builds software supporting around 200 developers, sysadmins and traders on an OCaml codebase running into millions of lines of code. This codebase provides the foundation for the firm's business of trading on financial markets around the world.

Software that the group develops, much of which is written in-house, includes:
- build, continuous integration and code review systems;
- preprocessors and core libraries;
- editor enhancements and integration.

The group also devotes significant time to working on the OCaml compiler itself, often in collaboration with external parties, with the results released as open source. Recent examples include the Flambda optimization framework and the Spacetime memory profiler.

Candidates need to be familiar with a statically typed functional language and possess some amount of experience (within industry or otherwise) in this kind of infrastructure.

We're looking for candidates for both our New York and London offices. Benefits and compensation are highly competitive.

If you are interested, please email tools-and-compilers-job@janestreet.com with a CV and cover letter.

Let syntax, and why you should use it

Earlier this year, we created ppx_let, a PPX rewriter that introduces a syntax for working with monadic and applicative libraries like Command, Async, Result and Incremental. We've now amassed about six months of experience with it, and we've seen enough to recommend it to a wider audience.

For those of you who haven't seen it, let syntax lets you write this:

  let%bind <var> = <expr1> in <expr2>

instead of this:

  <expr1> >>= fun <var> -> <expr2>

with analogous support for the map operator, via let%map. The choice of monad is made by opening the appropriate Let_syntax module, e.g., you might open Deferred.Result.Let_syntax, or Incr.Let_syntax. Note that Async.Std now opens Deferred.Let_syntax by default.

There's also support for match statements, e.g.:

  match%bind <expr0> with
  | <pattern1> -> <expr1>
  | <pattern2> -> <expr2>

is equivalent to:

  <expr0> >>= function
  | <pattern1> -> <expr1>
  | <pattern2> -> <expr2>

There's also support for parallel binds and maps, using the and syntax. So this:

  let%map <var1> = <expr1> and <var2> = <expr2> in <expr3>

is (roughly) equivalent to this:

  map3 <expr1> <expr2> ~f:(fun <var1> <var2> -> <expr3>)

This pattern generalizes to arbitrarily wide maps. It's implemented using map and the both operator, which sacrifices some performance in exchange for generality, vs the explicit mapN operators.
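Here's a sketch of that desugaring, specialized to the option applicative so it can stand alone as a runnable example (the real generated code targets whatever Let_syntax module is in scope):

```ocaml
(* Illustrative sketch of how a parallel let%map desugars via [map]
   and [both], using option as the applicative. *)
let both a b =
  match a, b with
  | Some x, Some y -> Some (x, y)
  | _ -> None

let map t ~f =
  match t with
  | Some x -> Some (f x)
  | None -> None

(* let%map x1 = e1 and x2 = e2 and x3 = e3 in x1 + x2 / x3
   becomes, roughly: *)
let demo e1 e2 e3 =
  map (both e1 (both e2 e3)) ~f:(fun (x1, (x2, x3)) -> x1 + x2 / x3)
```

The nested tuples are why this generalizes to arbitrarily wide maps, and also why it gives up a little performance relative to a hand-written mapN.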


My experience with the new syntax has been quite positive. Here's my summary of the wins.

Parallel binds

For libraries like Command.Param and Incremental, where multi-way map functions (like `map2` and `map3`) are important, it's been a pretty big win in terms of the comprehensibility of the resulting code. This tends to be the case for applicatives like Command.Param, which are just monads without bind. The big advantage is that by writing:

  let%map x1 = some_very long expression
  and x2 = some_other long expression
  and x3 = yet another_thing
  in
  x1 + x2 / x3

we get to put the variable names directly next to the expressions they're bound to. Using an explicit mapN operator, the result is more awkward:

  map3
    (some_very long expression)
    (some_other long expression)
    (yet another_thing)
    ~f:(fun x1 x2 x3 -> x1 + x2 / x3)

This is error prone, since it's easy to mix up the variables and the expressions. To avoid the corresponding issue in the original Command library, we used some fancy combinators and the dreaded step operator, leading to some hard-to-understand code. The let-syntax equivalents are materially easier to use.

Variables first

Using a let-like syntax lets you put the variable before the definition, which follows the pattern of ordinary OCaml code, and makes it a bit easier to read. This cleans up some otherwise awkward patterns that are pretty common in our code. In particular, instead of this:

  begin
    <expr1>;
    let <var> = <expr2> in
    <expr3>
  end
  >>= fun meaningful_variable_name ->
  <expr4>

You can write this:

  let%bind meaningful_variable_name =
    <expr1>;
    let <var> = <expr2> in
    <expr3>
  in
  <expr4>

which flows a bit more naturally, in part because the meaningful variable name comes first, and in part because the extra begin and end are dropped.

Connecting bind to let

Let binds are a lot like monadic binds, even before you add in any special syntax. i.e., this

  <expr1> >>= fun x -> <expr2>

is a lot like this.

  let x = <expr1> in <expr2>

This is why monads are sometimes described as "programmable let-binds" (or, relatedly, "programmable semicolons", which are just let-binds with a unit argument.)
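You can see the analogy directly using OCaml's standard binding operators, with no ppx involved (this is an illustrative sketch over the option monad, not Jane Street library code):

```ocaml
(* Define bind as a "programmable let" for option: evaluation stops
   at the first None, but the code reads like ordinary lets. *)
let ( let* ) opt f =
  match opt with
  | None -> None
  | Some x -> f x

let safe_div x y =
  if y = 0 then None else Some (x / y)

let compute a b c =
  let* q = safe_div a b in
  let* r = safe_div q c in
  Some (r + 1)
```

Replace option with Deferred and you get Async's bind; the "programmability" is just the choice of what happens between one let and the next.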

I've found this to be a useful analogy in understanding monads, and the analogy is made clearer with let syntax. We have some preliminary reports of this making monadic code more approachable for beginners, which lines up with my intuition.

The similarity between ordinary lets and monadic lets also makes diffs easier to read. e.g., in Async, if some function goes from being synchronous to deferred, the change at the call point would now be from this

  let x = some_synchronous_thing () in
  more things

to this.

  some_asynchronous_thing ()
  >>= fun () ->
  more things

With let-syntax, we would instead change it to this.

  let%bind x = some_asynchronous_thing () in
  more things

i.e., the only thing that would change would be the addition of %bind. The resulting diff is more targeted, making the substance of the change a bit easier to see, and making refactorings that add or remove blocking easier to do and understand.


It's not all wine and roses. There are some downsides to let-syntax:

It's new and different

Enough said.

It's kinda ugly

This is a matter of taste, but I've heard some distaste for the percent sign itself. That's something forced on us by PPX, but I don't exactly disagree.

Also, the %bind and %map are a little wordy. There's been some talk of adding the ability to define alternate let syntaxes in OCaml proper, which would allow you to write something like this.

  let* x = some_asynchronous_thing () in
  let* y = some_other_thing () in
  let+ z = a_third thing in
  x + y + z

Here, let* would be equivalent to let%bind, and let+ to let%map. Again, it's not clear to me that this would be a win, all in all.

I personally find the new syntax less ugly, all in all, than using infix operators everywhere, but again, tastes vary.

Unit binds aren't great

In particular, because we have no "monadic semicolon" in the syntax, you have to go from this:

  <expr1>
  >>= fun () ->
  <expr2>

to this:

  let%bind () = <expr1> in
  <expr2>

which is not ideal, since it's not parallel to the normal semicolon syntax for this outside of the monad. We've looked at making it possible to do something like:

  <expr1> ;%bind
  <expr2>

which would be more parallel with ordinary OCaml syntax, but that's not yet possible, and it's not clear it's a net win.

It changes how you think about interleaving in Async

In Async, when you write:

  load_something ()
  >>= fun x ->
  process_thing x

you can think of the point where interleaving can happen as the place where the bind operator is found. With let-syntax:

  let%bind x = load_something () in
  process_thing x

the location is different, and somewhat less obvious. My experience has been that this was easy to adjust to and hasn't tripped me up in practice, but it's a concern.


A few thoughts on how to use let syntax effectively.

Let syntax for variables, infix for point-free

One might wonder whether there's any use for the infix operators once you are using Let_syntax. I believe the answer is yes. In particular, the style we've adopted is to use let syntax when binding a variable.

  let%bind x = some_expression in

and infix operators when going point-free, i.e., when not binding variables.

  let v = some_function x >>| ok_exn in

One special case of this is binding unit, where some people prefer the following pattern, since we don't have a nice monadic semicolon yet.

  let%bind x = some_operation in
  some_other_operation >>= fun () ->
  let%bind y = yet_another_thing () in
  a_final_thing y

rather than:

  let%bind x = some_operation in
  let%bind () = some_other_operation in
  let%bind y = yet_another_thing () in
  a_final_thing y

Mixing monads

One change we made recently was to add the return function and the monadic infix operators to the Let_syntax module that one opens to choose a monad. This has the useful property that opening a Let_syntax module switches you cleanly from one monad to another. That matters, because mixing multiple monads in the same scope is hard to think about.

Command.Param and Deferred

One interesting case that comes up is mixing the Command.Param syntax with the Deferred syntax. This one is pretty easy to solve, because you don't typically need to mix them: in the body of the command you usually want Deferred, while in the definition of the command-line parser you want Command.Param. This can be handled by doing a local open of Command.Param.Let_syntax or Deferred.Let_syntax as necessary.

Deferred and Deferred.Result

A more complicated case is choosing between the Deferred and Deferred.Result monads. In Async, there are infix operators that let you use both sets of bind and map operators (basically, with question-marks at the end of the ordinary infix operators for the Deferred.Result operators.)

Mixing these operators together in a single scope can be a little awkward, often leaving people to add and remove question-marks until things compile. With let syntax, you really have to pick a single monad, which is easier to read, but then requires some changes in behavior. In particular, you often need to move things from one monad to another. For example, if you're in the Deferred monad and get a result of type Deferred.Or_error.t, you might want to do something like this:

  let open Deferred.Let_syntax in
  let%bind v = some_operation x y >>| ok_exn in

Here, mapping ok_exn over the result raises the error, if there is one. Similarly, if you're using an operation that's in the ordinary Deferred monad but you're operating in the Deferred.Result monad, you might want to lift that operation up, i.e.:

  let open Deferred.Result.Let_syntax in
  let%bind v = some_other_operation x y |> Deferred.map ~f:(fun x -> Ok x) in

This is something of a mouthful, so we just added the Deferred.ok function, so on our latest release you can write:

  let open Deferred.Result.Let_syntax in
  let%bind v = some_other_operation x y |> Deferred.ok in

This idiom is useful whether or not you're using let syntax.
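For what it's worth, Deferred.ok is just the obvious map over the Ok constructor, sketched here over a toy stand-in for Async's Deferred.t (the real type, of course, represents a value that may not be determined yet):

```ocaml
(* Toy stand-in for Async's Deferred.t, for illustration only. *)
type 'a deferred = Deferred of 'a

let map (Deferred x) ~f = Deferred (f x)

(* Lift a plain deferred into the Deferred.Result monad by wrapping
   its value in Ok. *)
let ok d = map d ~f:(fun x -> Ok x)
```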

Seven Implementations of Incremental

We finally got a decent recording of one of my favorite talks. This one is about our Incremental library (which I wrote about here), and in particular about the story of how we got to the present, quite performant, implementation.

It's not clear from the talk, but the work on this library wasn't done by me: The initial version was implemented by Stephen Weeks and Milan Stanojevic, and most of the intermediate versions were implemented by Nick Chapman and Valentin Gatien-Baron.

The high quality org-mode slides, though, are all me.