How Does Automount Work Anyway?


Autofs/automount is a combination of a user space program and some pieces in the kernel that work together to allow filesystems (many kinds of filesystems, but often NFS) to be mounted "just in time" and then be unmounted when they are no longer in use. As soon as a process wants to access an automounted filesystem, the kernel intercepts the access and passes control to a user space program (typically automount, but systemd now supports some automount functionality as well).

The user space program does whatever is necessary to mount the filesystem (which usually involves some invocation of mount(8)) and then reports success or failure back to the kernel, so the kernel can either allow the process to continue or signal a failure.
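To make this concrete, here's roughly what a minimal autofs configuration looks like. The map names, mount points, and server are illustrative, not our actual setup:

```
# /etc/auto.master -- each line hands a directory tree over to automount
/mnt/nfs  /etc/auto.nfs  --timeout=600

# /etc/auto.nfs -- keys under /mnt/nfs, and what to mount on first access
data  -fstype=nfs,ro  fileserver:/export/data
```

With this in place, the first access to /mnt/nfs/data triggers the NFS mount, and automount unmounts it again after it has been idle for the timeout.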

We use automount for a number of things here at Jane Street. Recently, users started reporting that directories that shouldn't exist (i.e., paths on an automounted filesystem for which the automount daemon has no configuration) were spontaneously appearing and not going away. Most commonly, users were seeing dubious output from commands like hg root and hg status, both cases in which Mercurial calls stat(2) on any path that looks like it might be a ".hg" directory. The problem was that ".hg" directories kept popping up in places where they didn't actually exist, causing these stat(2) calls to succeed and Mercurial to believe it had found a valid repository directory. Because attempts to actually access this ghost ".hg" directory fail, Mercurial produces odd output for hg root and hg status. We were stumped, so I dug into automount to try to find out where things were going wrong.


Generic mapping and folding in OCaml

Haskell has a function fmap which can map over a number of different datatypes. For example, fmap can map a function over both a List and a Maybe (the equivalent of an option in OCaml):

Prelude> fmap (+ 1) [1,2]
[2,3]
Prelude> fmap (+ 1) (Just 3)
Just 4

Unfortunately, the equivalent is impossible in OCaml. That is, there's no way to define an OCaml value fmap so that the two expressions:

# fmap [1;2]    ~f:((+) 1)
# fmap (Some 3) ~f:((+) 1)

both typecheck and evaluate to the right value.

Even if we eliminate the complexity of type inference by specifying the type explicitly, we can't define fmap so that the two expressions:

# fmap ([1;2]  : _ list)   ~f:((+) 1)
# fmap (Some 3 : _ option) ~f:((+) 1)

typecheck and evaluate to the right value.

However, the Generic module in Jane Street's Core_extended library will let us do exactly that with just a trivial syntactic change. But before continuing, I'll warn you that the Generic module is not necessarily something you'd want to use in real world code; it falls much more in the "cute trick" category. But with that caveat, let's look at our example using Generic:

# open Core.Std;;
# open Core_extended.Generic;;

# map ([1;2] >: __ list) ~f:((+) 1);;
- : int list = [2; 3]
# map (Some 3 >: __ option) ~f:((+) 1);;
- : int option = Some 4    

Note that, after opening the Generic module, all we did to the previous example was change : to >: and _ to __. (Also, the Generic module calls the mapping function map instead of fmap, but that's inconsequential.)

Of course, the trick is that >:, __, list, and option are actually values defined by the Generic module in such a way that their intended usage looks like a type annotation.

Note that these "types" are nestable as you would expect real types to be:

# map ([None; Some 3] >: __ option list) ~f:((+) 1);;
- : int option list = [None; Some 4]        

This means that you can change what map does just by changing the "type" you assign to its argument:

# map ([None; Some 3] >: __ option list) ~f:(fun _ -> ());;
- : unit option list = [None; Some ()]
# map ([None; Some 3] >: __ list) ~f:(fun _ -> ());;
- : unit list = [(); ()]

The Generic module also defines a generic fold function so that you can accumulate values at any "depth" in your value:

# fold ([[Some 3; None]; [Some 5; Some 2]] >: __ option list list) ~init:0 ~f:(+);;
- : int = 10

Not every "type" formable is __ followed by some sequence of options and lists: for example, Generic also provides string (considered as a container of characters):

# map ([Some "foo"; None; Some "bar"] >: string option list) ~f:Char.uppercase;;
- : string option list = [Some "FOO"; None; Some "BAR"]

Note that the fact that the "types" are nestable means that these values must have unusual definitions: in particular, __ (and string) are functions which must be able to take a variable number of arguments. Indeed, these values are defined using a technique sweeks wrote about in a blog post on variable argument functions: the f and z in sweeks's post are analogous here to __ and >: respectively.

Here's the definition of the primitive values we've used so far (Generic actually defines a few more):

let __ k = k (fun f x -> f x)

let ( >: ) x t y = t (fun x -> x) y x

let map x ~f = x f

let string k = k (fun f -> String.map ~f)

let list map k = k (fun f -> List.map ~f:(map f))

let option map k = k (fun f -> Option.map ~f:(map f))

The types of these turn out to be extremely unhelpful, and you can't really use them to figure out how to use these values. For example, here is the type of >: (and this isn't just the inferred type of the definition above; it's the type that must actually be exposed in order to use >:):

val ( >: ) : 'a -> (('b -> 'b) -> 'c -> 'a -> 'd) -> 'c -> 'd
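To see the machinery in action, here's a self-contained sketch you can run without Core. It's an adaptation, not the library's code: it substitutes the stdlib's List.map and Option.map (which take their function unlabeled, unlike Core's) so that it runs anywhere.

```ocaml
(* A self-contained sketch of the Generic encoding, adapted to the stdlib's
   unlabeled List.map and Option.map. *)
let __ k = k (fun f x -> f x)
let ( >: ) x t y = t (fun x -> x) y x
let map x ~f = x f
let list m k = k (fun f -> List.map (m f))
let option m k = k (fun f -> Option.map (m f))

(* [[1;2] >: __ list] evaluates to [fun f -> List.map f [1;2]], so [map]
   just applies it to [~f]. *)
let () =
  assert (map ([1; 2] >: __ list) ~f:(fun x -> x + 1) = [2; 3]);
  assert (map (Some 3 >: __ option) ~f:(fun x -> x + 1) = Some 4);
  assert (map ([None; Some 3] >: __ option list) ~f:(fun x -> x + 1)
          = [None; Some 4])
```

Tracing the first assertion by hand is a good way to see why the "type annotation" is really just a chain of function applications.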

Finally, is this module actually used? The answer is no. As far as I know, it's used nowhere in Jane Street's codebase. But it's still kind of cute.

Breaking down FRP

As anyone who has looked into functional reactive programming (FRP) knows, there are lots of competing approaches to it, and not a lot of conceptual clarity about how they relate to each other. In this post, I'll try to shed some light, and in particular give you some guide posts for understanding how the different versions of FRP relate to each other. Plus, I'll show some connections to a similar technique called self-adjusting computation (SAC).

The analysis here should mostly be credited to Evan Czaplicki, who gave a talk at Jane Street a week ago. Any confusions and mistakes are, of course, my own. Also, thanks to Jake McArthur for filling me in on a few key details.

In all of this I'm basically going to talk only about discrete FRP. A lot of the people involved in FRP think of continuous-time semantics as really important, but that's not the focus here. (I am curious, however, if anyone can explain why people think continuous time semantics are important when writing programs for a largely discrete computer.)

First, some basics. Roughly speaking, FRP systems are meant to make it easier to write programs that react to external events by providing a natural way to program with and reason about time-varying signals.

Now, time to oversimplify.

An FRP program effectively ties signals together in a dependency graph, where each signal is either an external input or a derived signal that feeds off of other signals that have already been defined. A key aspect of a system like this is that the language runtime uses the dependency graph to minimize the amount of work that needs to be done in response to an external event.

Here are some properties that you might want from your FRP system:

  • History-sensitivity, or the ability to construct calculations that react not just to the current state of the world, but also to what has happened in the past.
  • Efficiency. This comes in two forms: space efficiency, mostly meaning that you want to minimize the amount of your past that you need to remember; and computational efficiency, meaning that you want to minimize the amount of the computation that must be rerun when inputs change.
  • Dynamism, or the ability to reconfigure the computation over time, as the inputs to your system change.
  • Ease of reasoning. You'd like the resulting system to have a clean semantics that's easy to reason about.

It turns out you can't have all of these at the same time, and you can roughly categorize different approaches to FRP by which subset they aim to get. Let's walk through them one by one.

Pure Monadic FRP

This approach gives you dynamism, history-sensitivity and ease of reasoning, but has unacceptable space efficiency.

As you might expect, the signal combinators in pure monadic FRP can be described as a monad. That means we have access to the usual monadic operators, the simplest of which is return, which creates a constant signal.

    val return : 'a -> 'a signal

You also have map, which lets you transform a signal by applying a function to it at every point in time.

    val map : 'a signal -> ('a -> 'b) -> 'b signal

Operators like map2 let you take multiple signals and combine them together, again by applying a function to the input signals at every point in time to produce the output signal.

    val map2 : 'a signal -> 'b signal -> ('a -> 'b -> 'c) -> 'c signal

Note that all of the above essentially correspond to building up a static set of dependencies between signals. To finish our monadic interface and to add dynamism, we need one more operator, called join.

    val join : 'a signal signal -> 'a signal

Nested signals are tricky. In a nested signal, you can think of the outer signal as choosing between different inner signals. When these are collapsed with join, you essentially get a signal that can change its definition, and therefore its dependencies, in response to changing inputs.

We're still missing history sensitivity. All of the operators thus far work on contemporaneous values. We don't have any operators that let us use information from the past. foldp is an operator that does just that, by folding forward from the past.

    val foldp : 'a signal -> init:'acc -> f:('acc -> 'a -> 'acc) -> 'acc signal

With foldp, history is at our disposal. For example, we can write a function that takes a signal containing an x/y position, and returns the largest distance that position has ever been from the origin. Here's the code.

    let max_dist_to_origin (pos : (float * float) signal) : float signal =
      foldp pos ~init:0. ~f:(fun max_so_far (x,y) ->
        Float.(max max_so_far (sqrt (x * x + y * y))))

Here, max_so_far acts as a kind of state variable that efficiently summarizes the necessary information about the past.
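To make the interface above concrete, here's a toy push-based realization of it (minus join), written from scratch for this post; no production FRP system is this naive. Each signal is a mutable cell plus a list of listeners, and setting an input only fires the nodes downstream of it.

```ocaml
(* A toy push-based signal graph: illustrative only. The cutoff in [set]
   uses structural equality, which is fine for the simple payloads here. *)
type 'a signal =
  { mutable value : 'a
  ; mutable listeners : (unit -> unit) list
  }

let input init = { value = init; listeners = [] }
let peek s = s.value
let on_change s k = s.listeners <- s.listeners @ [k]

(* Setting an input fires only the listeners that depend on it. *)
let set s v =
  if s.value <> v then begin
    s.value <- v;
    List.iter (fun k -> k ()) s.listeners
  end

let map s f =
  let out = input (f s.value) in
  on_change s (fun () -> set out (f s.value));
  out

let map2 a b f =
  let out = input (f a.value b.value) in
  let update () = set out (f a.value b.value) in
  on_change a update;
  on_change b update;
  out

(* [out.value] is the old accumulator; [s.value] is the fresh input. *)
let foldp s ~init ~f =
  let out = input init in
  on_change s (fun () -> set out (f out.value s.value));
  out
```

With this, max_dist_to_origin works as written above, and you can watch the accumulator advance as you set the position input.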

foldp seems like it should be implementable efficiently, but we run into trouble when we try to combine history sensitivity and dynamism. In particular, consider what happens when you try to compute max_dist_to_origin on the signal representing the position of the mouse. And in particular, what if we only decide to run this computation at some point in the middle of the execution of our program? We then have two choices: either (max_dist_to_origin mouse_pos) always has the same meaning, or its meaning depends on when it was called.

In pure monadic FRP, we make the choice to always give such an expression the same meaning, and thus preserve equational reasoning. We also end up with something that's impossible to implement efficiently. In particular, this choice forces us to remember every value generated by every input forever.

There are various ways out of this performance trap. In the following, I'll describe the different escape paths chosen by different styles of FRP systems.

Pure Applicative FRP

The idea behind applicative FRP is simple enough: just drop the join operator, thus giving up dynamism. This means that you end up constructing static dependency graphs. Without the ability to reconfigure, you don't run into the question of what happens when an expression like (max_dist_to_origin mouse_pos) is evaluated at multiple points in time.

This is the approach that Elm takes, and it seems to be the primary approach taken by practical systems concerned with describing UI interactions.

There's a variant on pure applicative FRP called, confusingly, Arrowized FRP. Effectively, Arrowized FRP lets you create a finite collection of static graphs which you can switch between. If those static graphs contain history-dependent computations, then all the graphs will have to be kept running at all times, which means that, while it can be more efficient than applicative FRP, it's not materially more expressive.

Impure Monadic FRP

Impure monadic FRP basically gives up on equational reasoning. In other words, the meaning of (max_dist_to_origin mouse_pos) depends on when you call it. Essentially, evaluating an expression that computes a history-sensitive signal should be thought of as an effect in that it returns different results depending on when you evaluate it.

Losing equational reasoning is not necessarily the end of the world, but my experience programming in this style makes me think that it really is problematic. In particular, reasoning about when a computation was called in a dynamic dependency graph is really quite tricky and non-local, which can lead to programs whose semantics is difficult to predict.

Self-Adjusting Computations

Self-adjusting computations are what you get when you give up on history-sensitivity. In particular, SAC has no foldp operator. The full set of monadic operators, however, including join, is in place, which means that you do have dynamism. This dynamism turns out to be quite valuable. Among other things, it allows you to build a highly configurable computation that can respond to reconfiguration efficiently.

As you might expect, the lack of history-sensitivity makes SAC less suitable for writing user interfaces. Indeed, SAC was never intended for such applications; its original purpose was for building efficient on-line algorithms, i.e., algorithms that could be updated efficiently when the problem changes in a small way.

SAC is also easy to reason about, in that all an SAC computation is doing is incrementalizing an otherwise ordinary functional program. You get full equational reasoning as long as you avoid effects within your SAC computation.

History sensitivity for SAC

At Jane Street, we have our own SAC library called Incremental, which is used for a variety of different applications. In practice, however, a lot of our SAC applications do require some amount of history-sensitivity. The simplest and easiest approach to dealing with history within SAC is to create inputs that keep track of whatever history is important to your application. Then, the SAC computation can use that input without any complications.

Thus, if there's an input whose minimum and maximum values you want to be able to depend on in your calculation, you simply set up calculations outside of the system that create new inputs injecting that historical information into your computation.

You can keep track of history in a more ad-hoc way by doing side-effects within the functions that are used to compute a given node. Thus, we could write a node that computes the maximum value of a given input using a reference, as follows.

    let max_of_signal i =
      let max = ref None in
      map i (fun x ->
        match !max with
        | None -> max := Some x; x
        | Some y ->
          let new_max = Float.max x y in
          max := Some new_max;
          new_max)

But this kind of trick is dangerous, particularly because of the optimizations that are implemented in Incremental and other SAC implementations. In particular, Incremental tries to avoid computing nodes whose values are not presently in use, and as such, signals that are deemed unnecessary are not kept up-to-date. Thus, if you create a signal by calling (max_of_signal s), and then keep it around but don't hook it into your final output, the computation will stop running and will thus stop receiving updates. Then, if you pull it back into your computation, it will have a value that reflects only part of the true history.

There are some tricks for dealing with this in Incremental. In particular, we have an operator called necessary_if_alive, which forces the node in question to remain alive even if it's not necessary at the moment. That helps, but there are still complicated corner cases. Our preferred approach to dealing with such cases is to statically declare the set of history-sensitive signals, and make sure that those are alive and necessary at all times.

Broader lessons

This is, I think, a theme in FRP systems: history is made tractable by limiting dynamism. From the work I've done with SAC systems, I think the usual approach in the FRP world is backwards: rather than start with a static system that supports history and extend it with increased dynamism, I suspect it's better to start with a highly dynamic system like SAC and carefully extend it with ways of statically adding history-sensitive computations. That said, it's hard to be confident about any of this, since this is a rather subtle area where no one has all the answers.

Async Parallel


Parallel is a library for spawning processes on a cluster of machines and passing typed messages between them. The aim is to make using another process as easy as possible. Parallel was built to take advantage of multicore computers in OCaml, which can't use threads for parallelism due to its non-reentrant runtime.


Parallel is built on top of Async, an OCaml library that provides cooperative concurrency. So what do we want an async interface to parallel computations to look like? (more…)

10 tips for writing comments (plus one more)

A few words about what we're striving for in our comments, particularly in Core. (more…)

A module type equivalence surprise

I usually think of two module types S1 and S2 as being equivalent if the following two functors type check:

    module F12 (M : S1) = (M : S2)
    module F21 (M : S2) = (M : S1)

And by equivalent, I mean indistinguishable -- one should be able to use S1 anywhere one uses S2, and vice versa, with exactly the same type checker and semantic behavior.

However, I found an example today of two module types that are equivalent, both in my internal mental model of equivalence and in my formal definition above (F12 and F21 type check), but that one can distinguish using module type of. Here are the module types:

    module type S1 = sig
      module N : sig type t end
      type u = N.t
    end

    module type S2 = sig
      type u
      module N : sig
        type t = u
      end
    end

    module F12 (M : S1) = (M : S2)
    module F21 (M : S2) = (M : S1)

And here is a context that distinguishes them: F1 type checks, but F2 does not:

    module F1 (A : S1) = struct
      module F (B : sig type t end) = (B : module type of A.N)
    end

    module F2 (A : S2) = struct
      module F (B : sig type t end) = (B : module type of A.N)
    end

What's going on is that in F1, module type of A.N decides to abstract t, because it doesn't have a definition. But in F2, module type of A.N does not abstract t, because it is defined to be u.

Since I thought of S1 and S2 as equivalent, I would have preferred that module type of not abstract t in both cases, and thus that both F1 and F2 be rejected. But I don't see anything unsound about what OCaml is doing.

RWO tidbits: the runtime

This is my favorite tweet about Real World OCaml.

It is indeed pretty rare for a language introduction to spend this much time on the runtime. One reason we included it in RWO is that OCaml's simple and efficient runtime is one of its real strengths - it makes OCaml simple to reason about from a performance perspective, and simple to use in a wide variety of contexts. Also, critically, the runtime is simple enough to explain! (more…)

RWO tidbits: Benign effects

Now that Real World OCaml is out, I thought it would be fun to have a series of posts highlighting different interesting corners of the book.

Today: the section on benign effects.

Benign effects are uses of imperative programming that by and large preserve the functional nature of the code you write. Typically, benign effects are used to improve performance without changing the semantics of your code. I think there are a lot of fun and interesting ideas here, my favorite one being an elegant approach to dynamic programming, as exemplified by a simple implementation of a function for computing the edit distance between two strings.
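To give a flavor of it, here's a from-memory sketch in the spirit of that section, not the book's exact code: memoization is a benign effect, and tying the recursive knot through a memo table turns an exponential recursion into dynamic programming while the interface stays purely functional.

```ocaml
(* Memoization as a benign effect: a sketch in the spirit of the RWO
   section, not the book's exact code. *)
let memoize f =
  let table = Hashtbl.create 16 in
  fun x ->
    match Hashtbl.find_opt table x with
    | Some y -> y
    | None ->
      let y = f x in
      Hashtbl.add table x y;
      y

(* Tie the recursive knot through the memo table, so repeated subproblems
   are computed only once. *)
let memo_rec f_norec =
  let fref = ref (fun _ -> assert false) in
  let f = memoize (fun x -> f_norec !fref x) in
  fref := f;
  f

(* Levenshtein edit distance; without memoization this recursion is
   exponential in the string lengths. *)
let edit_distance =
  memo_rec (fun edit_distance (s, t) ->
    match String.length s, String.length t with
    | 0, n | n, 0 -> n
    | ls, lt ->
      let drop_last u = String.sub u 0 (String.length u - 1) in
      let cost = if s.[ls - 1] = t.[lt - 1] then 0 else 1 in
      List.fold_left min max_int
        [ edit_distance (drop_last s, t) + 1
        ; edit_distance (s, drop_last t) + 1
        ; edit_distance (drop_last s, drop_last t) + cost
        ])
```

The callers of edit_distance never see the table; that's what makes the effect benign.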

And of course, if you enjoy the book, you can get a hardcopy on Amazon.

The making of Real World OCaml

It's taken a good long while, but Real World OCaml is finally done. You can read it for free online, or buy a hardcopy or an ebook version.

The idea for the book was born in a bar in Tokyo in 2011. After a talk-filled day at ICFP, a few of us (Ashish Agarwal, Marius Eriksen, Anil, and myself) went out drinking. We were all bellyaching over the lack of a high-quality OCaml book in English, and, emboldened by the Guinness and egged on by Ashish and Marius, Anil and I decided that we were just going to have to go ahead and write the book ourselves. We brought Jason into the project soon after.

From the very beginning, Anil and I had different visions for the book. From my perspective, it was a simple proposition: there was no good book available, and we were going to write one. The goal was to document the state of the art so as to make it more accessible.

Anil's vision was grander. He argued that writing a book is an opportunity to discover all of the things that are just too embarrassing to explain. And, once discovered, you needed to find a way to get all of those things fixed before the book was published. It was Anil's grander vision that we followed, and to good effect. It's not that we fixed the problems ourselves. We solved some problems directly, but by and large, we ended up getting help from those who were better positioned than us to fix the problems at hand.

Here are a few examples of pieces that came together over those two years.

  • OPAM. The core work on OPAM was done by Thomas Gazagnaire at OCamlPro (funded by Jane Street), with Anil collaborating closely. This is probably the biggest improvement of the bunch. It's hard to overstate how transformational OPAM is. Before OPAM, installing OCaml libraries was a complex chore. Now it's a joy.
  • The Core release process. We decided early on to base the book on Core, Jane Street's alternative to OCaml's standard library. But the release process around Core was too monolithic and too hard for external collaborators. We reorganized the process completely, moving to weekly releases to github, with the repos broken out into manageable components. Most of this work was done by Jeremie Dimino and Yury Sulsky at Jane Street.
  • Ctypes. Anil was in charge of writing the section on C bindings, and he decided that the standard way was just too painful to explain. That was the germ of ctypes, a library, written by Jeremy Yallop, that lets you build C bindings entirely from within OCaml, with an easy and simple interface.
  • Short paths. One of the real pains associated with working with Core is the heavy use that Core makes of the module system. OCaml's error messages, unfortunately, do not cope with this terribly well, leading to absurd types in error messages: you might see a type rendered as Core.Std.Int.Map.Key.t Core.Std.Option.Monad_infix where it could just as well have been rendered as int option. We worked with Jacques Garrigue to implement a better heuristic for picking type names (suggested by Stephen Weeks, who implemented the same heuristic for MLton). This is now available in the 4.01 version of the compiler, via the -short-paths flag.

This period also saw the founding of OCaml Labs, a lab at Cambridge University devoted to improving OCaml as a platform, with Anil at the head.

And we're not done. OCaml Labs and OCamlPro are still working on improving OPAM and building out a curated OCaml Platform that's built on top of OPAM. There's ongoing work on improving documentation generation so we can have better online API docs. Jacques Garrigue and Leo White are working on adding module aliases to OCaml which will greatly improve compilation time for libraries like Core that need to manage large namespaces.

And there are more projects that are improving OCaml and its surrounding infrastructure that have no direct connection to Real World OCaml, like:

  • Frederic Bour and Thomas Refis' work on Merlin, a tool for providing IDE-like tooling from within editors like VI and Emacs.
  • Mark Shinwell's work on improving GDB support for OCaml.
  • Fabrice Le Fessant's work on improving OCaml's interactions with performance tools like perf.
  • Work that Ashish Agarwal, Christophe Troestler, Esther Baruk, and now many others have poured into improving the website.

Real World OCaml is of course a personal milestone for Jason, Anil and myself (and for Leo White, Jeremy Yallop and Stephen Weeks, who made some key contributions to the text). But viewed from a broader perspective, it's just one part of the increasing swirl of activity in the OCaml community.

OCaml has been a great language for a very long time. Now, we're growing a social and technological infrastructure around it to match.

What’s 2013 + 50? 1969, of course!

What happens when the latest CentOS 6.4/RHEL/FreeBSD GnuTLS certtool gets used to generate a TLS certificate with an 18250-day validity period? Time travel, is what.

Note: This applies to the CentOS-released GnuTLS v2.8.5. The latest source distribution is 3.2.4. Curiously enough, even in FreeBSD (by way of a counterpoint), the gnutls "stable" port is 2.12.23, and devel is 2.99.4_1. Professionals call this sort of thing a "hint". FreeBSD's 2.12.23 also has the described behavior. FreeBSD's 2.99.4_1 cannot be downloaded via the usual "portinstall" mechanism - it has a known security vulnerability which hasn't been patched, and portaudit does its best impersonation of the Grumpy Cat meme and says "No".

So, you'd like to use CentOS' certtool to create a self-signed certificate? Sure, no problem.

First step, create a skeleton certificate authority.

    $ rpm -qf `which certtool`
    gnutls-utils-2.8.5-10.el6_4.2.x86_64
    $ uname -a; cat /etc/redhat-release
    Linux buildhost 3.9.5pm1 #3 SMP PREEMPT Thu Jun 13 11:20:29 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
    CentOS release 6.4 (Final)

The CA private key:

    $ certtool -p --outfile ca-temp-key.pem
    Generating a 2048 bit RSA private key...

All good.

Next, CA signing certificate:

    $ certtool -s --load-privkey ca-temp-key.pem --outfile ca-test-signing.pem
    Generating a self signed certificate...
    Please enter the details of the certificate's distinguished name. Just press enter to ignore a field.
    Country name (2 chars): US
    Organization name: Jane Street
    Organizational unit name: Systems
    Locality name: US
    State or province name: NY
    Common name:
    UID:
    This field should not be used in new certificates.
    E-mail:
    Enter the certificate's serial number in decimal (default: 1378834082):
    Activation/Expiration time. The certificate will expire in (days): 18250
    Extensions.
    Does the certificate belong to an authority? (y/N): y
    Path length constraint (decimal, -1 for no constraint):
    Is this a TLS web client certificate? (y/N):
    Is this also a TLS web server certificate? (y/N):
    Enter the e-mail of the subject of the certificate:
    Will the certificate be used to sign other certificates? (y/N): y
    Will the certificate be used to sign CRLs? (y/N):
    Will the certificate be used to sign code? (y/N):
    Will the certificate be used to sign OCSP requests? (y/N):
    Will the certificate be used for time stamping? (y/N):
    Enter the URI of the CRL distribution point:
    X.509 Certificate Information:
        Version: 3
        Serial Number (hex): 522f56a2
        Validity:
            Not Before: Tue Sep 10 17:28:04 UTC 2013
            Not After: Wed Dec 31 23:59:59 UTC 1969
        Subject: C=US,O=Jane Street,OU=Systems,L=US,ST=NY,
        Subject Public Key Algorithm: RSA
        Modulus (bits 2048):
            ce:cb:49:2c:3d:a2:e2:97:6f:71:df:43:e1:fa:b1:14
            1e:b1:e5:51:13:1c:cc:7c:18:38:29:bf:08:70:f1:35
            d9:5d:ad:51:dc:0e:9d:f9:e6:ec:53:20:b0:04:fe:cb
            0e:a6:45:27:c0:f2:cc:34:45:fd:97:2c:11:b7:86:e9
            8f:9f:58:fa:90:ac:e7:9f:4e:a0:7f:8e:eb:5b:6f:15
            17:8d:82:a1:30:cf:3f:37:a8:44:6a:1d:2e:3b:69:36
            3e:34:c5:2a:f3:d2:2b:1f:81:ec:25:81:76:0e:1d:b9
            7f:12:23:a2:af:b7:e5:9b:f7:f6:be:c4:23:65:f1:4a
            63:fc:ec:92:5b:fc:f0:2c:6b:80:ee:fb:54:bf:7f:16
            33:b8:26:e5:d4:f4:ec:86:18:26:3e:31:5f:66:cf:0c
            81:cd:ef:c2:ec:ad:fc:26:07:2d:67:94:de:98:c2:32
            d4:6e:59:31:6a:35:1d:db:19:b4:a5:27:6b:94:be:8a
            77:2f:8c:7c:6b:cb:af:71:62:fa:7a:41:e5:da:63:5b
            95:d1:05:62:56:33:07:67:8c:bf:3f:64:11:dc:84:69
            e6:f2:b7:f2:6c:a0:e1:36:fc:e3:00:c0:11:26:dd:44
            f0:ca:02:97:67:70:15:85:34:e9:ca:d6:60:a4:37:8b
        Exponent (bits 24):
            01:00:01
        Extensions:
            Basic Constraints (critical):
                Certificate Authority (CA): TRUE
            Key Usage (critical):
                Certificate signing.
            Subject Key Identifier (not critical):
                d7dfcb520769255a65638e6dc3b899648dd4e447
        Other Information:
            Public Key Id:
                d7dfcb520769255a65638e6dc3b899648dd4e447

And here's the crux of the issue:

    Validity:
        Not Before: Tue Sep 10 17:28:04 UTC 2013
        Not After: Wed Dec 31 23:59:59 UTC 1969

Not after 1969? Yeah... Let me get my flux capacitor and a DeLorean and get back to you.
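Why 1969, of all years? A plausible mechanism (our back-of-the-envelope arithmetic, not a reading of the GnuTLS source): the requested expiration lands past what a signed 32-bit time_t can represent, and Wed Dec 31 23:59:59 UTC 1969 is exactly (time_t) -1, one second before the epoch. A quick check:

```ocaml
(* Back-of-the-envelope check (a hypothesis about the failure, not a
   reading of the GnuTLS source). OCaml's native ints are wider than 32
   bits here, so the sum itself doesn't wrap. *)
let () =
  let not_before = 1_378_834_084 in   (* Tue Sep 10 17:28:04 UTC 2013 *)
  let validity = 18_250 * 86_400 in   (* requested lifetime, in seconds *)
  let not_after = not_before + validity in
  let time_t_max = 0x7FFF_FFFF in     (* signed 32-bit time_t tops out in Jan 2038 *)
  Printf.printf "not_after = %d, max representable = %d, overflow = %b\n"
    not_after time_t_max (not_after > time_t_max);
  (* Dec 31 23:59:59 UTC 1969 is one second before the epoch, i.e. -1. *)
  assert (not_after > time_t_max)
```

18250 days is about 50 years, which overshoots the 2038 horizon by a wide margin; hence the post's title.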

FreeBSD's 2.12.23:

    $ certtool -p --outfile ca-temp-key.pem
    Generating a 2432 bit RSA private key...
    $ certtool -s --load-privkey=ca-temp-key.pem --outfile ca-test-signing.pem
    Generating a self signed certificate...
    Please enter the details of the certificate's distinguished name. Just press enter to ignore a field.
    Country name (2 chars): US
    Organization name: Jane Street
    Organizational unit name: Systems
    Locality name: New York
    State or province name: NY
    Common name:
    UID:
    This field should not be used in new certificates.
    E-mail:
    Enter the certificate's serial number in decimal (default: 1378834456):
    Activation/Expiration time.
    The certificate will expire in (days): 18250
    Extensions.
    Does the certificate belong to an authority? (y/N): y
    Path length constraint (decimal, -1 for no constraint):
    Is this a TLS web client certificate? (y/N):
    Will the certificate be used for IPsec IKE operations? (y/N):
    Is this also a TLS web server certificate? (y/N):
    Enter the e-mail of the subject of the certificate:
    Will the certificate be used to sign other certificates? (y/N): y
    Will the certificate be used to sign CRLs? (y/N):
    Will the certificate be used to sign code? (y/N):
    Will the certificate be used to sign OCSP requests? (y/N):
    Will the certificate be used for time stamping? (y/N):
    Enter the URI of the CRL distribution point:
    X.509 Certificate Information:
        Version: 3
        Serial Number (hex): 522f5818
        Validity:
            Not Before: Tue Sep 10 17:34:17 UTC 2013
            Not After: Thu Jan 01 00:00:00 UTC 1970
        Subject: C=US,O=Jane Street,OU=Systems,L=New York,ST=NY,
        Subject Public Key Algorithm: RSA
        Certificate Security Level: Normal
        Modulus (bits 2432):
            00:ea:3e:bf:c2:bb:55:90:4f:e1:d3:da:2b:3e:b2:81
            64:97:8f:db:70:27:ad:94:ae:1d:dd:ab:28:73:6e:60
            2a:39:8a:c0:1b:2c:ae:1e:f7:ce:c5:dc:01:8a:9e:31
            15:e3:e5:9c:67:63:05:ec:24:6b:0c:74:7d:6b:ae:bc
            ba:8b:4c:fd:b8:2b:37:74:f1:10:39:a1:c7:f3:fb:dc
            b8:09:80:2f:a5:8b:79:13:66:e0:8b:93:56:3b:3b:dd
            fb:6d:78:49:cf:c6:5c:57:f0:5d:1f:2d:73:98:b2:eb
            1e:10:be:0e:e7:de:2b:9b:d2:88:e0:49:34:a9:30:28
            ad:4c:60:8c:11:50:bb:25:c2:e5:88:0a:4d:6a:84:a9
            48:2e:07:ed:dc:e0:04:9c:bd:90:2b:fb:10:92:ca:8d
            cc:51:4f:f8:fa:d2:51:a4:12:50:75:e6:e5:87:f2:67
            5f:17:4e:12:63:4c:aa:70:2e:20:b9:07:63:1d:41:89
            f4:f7:7f:c7:91:55:05:49:94:ff:7f:1b:dc:23:59:08
            15:c0:9f:13:c7:90:bf:c0:c1:8f:02:9b:6f:28:71:e4
            1e:90:0b:1f:7b:f6:4b:1a:2d:1f:24:d4:d4:6d:11:3a
            3d:e2:7e:41:d1:0d:1c:88:da:db:29:5a:1d:4d:62:c3
            ac:c6:dc:2c:e9:d9:7d:3d:fc:af:3a:10:fe:3a:b7:bc
            8a:f1:ed:9b:85:89:b6:e2:e8:0c:36:df:55:c6:60:7a
            1c:1c:3d:54:7f:d7:d5:ea:1c:0d:d1:0c:c6:ef:99:cf
            5d
        Exponent (bits 24):
            01:00:01
        Extensions:
            Basic Constraints (critical):
                Certificate Authority (CA): TRUE
            Key Usage (critical):
                Certificate signing.
            Subject Key Identifier (not critical):
                6ec09c8592ba3904a301051b60223a5e50cad333
        Other Information:
            Public Key Id:
                6ec09c8592ba3904a301051b60223a5e50cad333
    Is the above information ok? (y/N):

GnuTLS 3.2.4 (compiled from source):

    gnutls-3.2.4/src$ ./certtool -p --outfile ca-temp-key.pem
    Generating a 2432 bit RSA private key...

    gnutls-3.2.4/src$ ./certtool -s --load-privkey=ca-temp-key.pem --outfile ca-test-signing.pem
    Generating a self signed certificate...
    Please enter the details of the certificate's distinguished name. Just press enter to ignore a field.
    Common name:
    UID:
    Organizational unit name: Systems
    Organization name: Jane Street
    Locality name: New York
    State or province name: NY
    Country name (2 chars): US
    Enter the subject's domain component (DC):
    Enter the subject's domain component (DC):
    This field should not be used in new certificates.
    E-mail:
    Enter the certificate's serial number in decimal (default: 1378836930):

    Activation/Expiration time.
    The certificate will expire in (days): 18250

    Extensions.
    Does the certificate belong to an authority? (y/N): y
    Path length constraint (decimal, -1 for no constraint):
    Is this a TLS web client certificate? (y/N):
    Will the certificate be used for IPsec IKE operations? (y/N):
    Is this a TLS web server certificate? (y/N):
    Enter a dnsName of the subject of the certificate:
    Enter a URI of the subject of the certificate:
    Enter the IP address of the subject of the certificate:
    Enter the e-mail of the subject of the certificate:
    Will the certificate be used to sign other certificates? (y/N): y
    Will the certificate be used to sign CRLs? (y/N):
    Will the certificate be used to sign code? (y/N):
    Will the certificate be used to sign OCSP requests? (y/N):
    Will the certificate be used for time stamping? (y/N):
    Enter the URI of the CRL distribution point:
    X.509 Certificate Information:
    Version: 3
    Serial Number (hex): 522f61c2
    Validity:
    Not Before: Tue Sep 10 18:15:31 UTC 2013
    Not After: Wed Aug 29 18:15:31 UTC 2063
    Subject:,OU=Systems,O=Jane Street,L=New York,ST=NY,C=US,
    Subject Public Key Algorithm: RSA
    Algorithm Security Level: Normal (2432 bits)
    Modulus (bits 2432):
    00:b9:f0:d3:81:b1:d6:09:71:45:47:e6:66:ac:41:0b
    93:93:b3:68:28:60:08:5e:e4:ba:9e:43:5f:b5:05:55
    24:f0:34:ab:11:8a:fe:74:9e:d2:f8:e4:ab:c6:5c:f3
    2c:f9:0b:b4:4c:26:b9:3d:58:3b:16:73:85:28:95:13
    ec:7d:7c:8b:38:c8:fa:08:64:de:5e:f5:9a:f5:70:1c
    cb:d4:d0:4a:e7:ad:5b:20:89:cc:29:91:c0:58:3b:dd
    38:f8:6f:56:f5:9b:25:05:44:ae:f9:9d:67:0b:59:96
    b7:da:4c:24:37:84:a5:f6:8f:32:5b:ae:e3:e8:ac:d2
    1b:7d:b4:67:42:f7:60:95:30:e4:8e:fa:4d:db:5b:65
    4f:f3:04:ca:94:74:d0:b2:42:20:8f:be:22:1b:77:34
    34:00:7d:0f:1a:7f:33:5a:56:b7:c6:88:9b:68:5b:7d
    84:d6:c4:c2:3e:8a:b5:40:6e:35:64:10:46:b1:28:ac
    8c:1f:2c:55:98:14:96:9c:e9:17:93:d3:28:30:04:8e
    7d:9e:ae:55:77:13:c5:7b:1b:cd:e1:d9:85:62:66:ad
    64:14:11:f3:2a:a4:f2:9a:88:36:d7:b9:7d:3f:c7:8f
    45:7c:b9:7d:11:73:da:c3:36:5e:12:e3:8a:8f:94:c1
    4e:33:be:e6:2c:49:d4:cf:39:d8:38:7c:fd:c5:7d:06
    1d:2d:87:8e:ea:7e:80:f7:aa:25:bf:e8:a7:0f:17:c7
    12:e7:21:05:aa:3a:0c:9a:a8:1c:86:98:fc:ea:30:40
    29
    Exponent (bits 24):
    01:00:01
    Extensions:
    Basic Constraints (critical):
    Certificate Authority (CA): TRUE
    Key Usage (critical):
    Certificate signing.
    Subject Key Identifier (not critical):
    683cf71bc67af324655d661ecd7043a0707e3ee7
    Other Information:
    Public Key Id:
    683cf71bc67af324655d661ecd7043a0707e3ee7
    Public key's random art:
    +--[ RSA 2432]----+
    | . . .+o.|
    | + . +o|
    | o . .*|
    | . . o. =.|
    | = S oo...|
    | . o o o + |
    | * . E |
    | oo= |
    | ...o. |
    +-----------------+
    Is the above information ok? (y/N):

Lovely, that.
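The contrast between the two runs is suggestive: 18250 days from September 2013 lands in 2063, well past Tue Jan 19 03:14:07 UTC 2038, the largest moment a signed 32-bit time_t can represent. We haven't traced the exact code path in the older GnuTLS, so the sketch below (Python, purely for illustration) only shows the arithmetic: any expiration computation squeezed through a signed 32-bit integer has to go wrong for these inputs.

```python
from datetime import datetime, timezone

# A signed 32-bit time_t cannot represent anything after 2038-01-19 03:14:07 UTC.
INT32_MAX = 2**31 - 1

not_before = datetime(2013, 9, 10, 18, 15, 31, tzinfo=timezone.utc)
start = int(not_before.timestamp())      # seconds since the Unix epoch
expiry = start + 18250 * 86400           # 18250 days later: August 2063

print(expiry > INT32_MAX)                # True: past the 2038 limit

# What the same value looks like if forced into a signed 32-bit integer
# (two's-complement wraparound):
wrapped = (expiry + 2**31) % 2**32 - 2**31
print(wrapped < 0)                       # True: wraps to a pre-1970 value
```

GnuTLS 3.2.4 prints the expected 2063 date, so whatever the older version did with the out-of-range value, the newer code handles post-2038 expirations correctly on this system.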

On CentOS 6.4, rpm --whatrequires and rpm --requires show very few direct dependencies on GnuTLS (at least in our general install).

Separately, it is not clear why CentOS/RHEL and FreeBSD ship such an old version of GnuTLS (there is also evidence that Debian and Ubuntu are on similar version paths). One distinct possibility is an ABI change, given the jump in major version number, which would make upgrading disruptive for packages linked against the old library.