I read Evan’s recollections of the history of native code with great interest, having experienced and thought about that history in some detail. There are two points on which my recollections differ from his, and I thought it might be useful to share some research I have previously done with respect to the native review process.
Evan recalls that the native review process was faced with a lot of “bindings to whatever.js”. This is not consistent with my recollection. In fact, in the context of a previous discussion, I did some research on this question. You can follow the link to the full post if you like, but I’ll summarize some key results here:
Looking at the packages considered for native review, one could discern three distinct purposes for native code:
- Bindings to basic platform facilities (the Web API).
- Wrappers around “whatever.js”.
- Implementing genuine Elm data structures that cannot be implemented in pure Elm (consider `Task` as a quick, familiar example).
Of the packages rejected in the native review process, my count was as follows:
- Binding to basic platform facilities: 20
- Wrapping “whatever.js”: 8
- Implementing genuine Elm data structures: 6
This data suggests that the native review process was not, in fact, flooded with “bindings to whatever.js”. Wrapping JavaScript libraries was not the dominant purpose of the packages submitted to that process.
It is also interesting to review the purpose of the native code in packages that were available in the package repository. My count (at that time) was as follows:
- Binding to basic platform facilities: 10
- Wrapping “whatever.js”: 5
- Implementing genuine Elm data structures: 6
You will notice that the distribution of purposes among the accepted packages roughly matches the distribution among the rejected packages. This suggests that the failed attempts to contribute native code to the package repository were not motivated by factors fundamentally different from the successful attempts.
Why, then, did the native review process fail, if it was not due to “getting a lot of bindings to whatever.js”?
My recollection is that there were several people on the native review committee who eagerly put in the work necessary to achieve the purely technical goals of the review process – that is, ensuring that the code was sound in every respect.
The difficulty was that the final decision as to whether a package would be accepted depended on additional considerations which I would characterize as “strategic” in nature. By “strategic” I mean considerations such as:
- should this Web API be exposed at all?
- is this a priority now, or should it wait for later?
- might we want to expose this Web API in a different way in the future?
- what effect would availability of this package have on the development of the Elm community generally?
Now, you might be thinking that these are good questions to ask – and, to a point, you’d be right about that. In practice, however, they were frustrating, both for members of the review committee and for package authors. The key frustration, I believe, is that it was genuinely difficult to discuss those questions. They are subtle questions which depend on a vision for the future of Elm, its framework, and its community that is difficult (and time-consuming) to fully articulate. Furthermore, they aren’t amenable to ordinary technical experimentation – you can’t, for instance, try one future of Elm in which certain capabilities are exposed now and another in which they wait for later, and see which turns out better.
So, the final decision ended up turning on considerations which, for committee members and package authors, inevitably felt arbitrary. I don’t mean to say that the considerations were actually arbitrary – in fact, they were almost the complete opposite of arbitrary, since they were based on an extremely deep analysis. However, they weren’t really matters amenable to discussion, and for that reason they felt arbitrary.
Now, most programming languages solve this problem by not thinking quite so hard about the ideal shape for the development of their frameworks and their communities. Elm has achieved some unique results by doing things a little differently. This does lead to some frustrations, and there are surely some micro-improvements possible. However, it’s not easy to know which of Elm’s peculiarities are essential to its success, and these are matters which are very difficult to discuss in a way which proves helpful. (I may well have failed in this attempt).