Where is Elm going; where is Evan going

@rupert,

I now have read through the paper on implementing UnionFind without using IORef and have the following comments:

  1. The essence of the authors’ algorithm is to replace in-place mutation via IORefs with deferred computations; it is therefore really the same algorithm, except that the work has been moved from the runtime (as implied by IORef) into the code itself.
  2. The new algorithm seems to get adequate performance with the right combination of optimizations, but there seem to be more variables in choosing the right set.
  3. The authors think that there is a constant factor gain for their algorithm due to less copying being done.
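For reference, the conventional mutating algorithm that the paper starts from looks roughly like this in Haskell. This is a minimal sketch of my own, not the paper’s actual code; the names (`Point`, `repr`, `union`) are illustrative:

```haskell
import Data.IORef

-- Each point is a mutable cell: either a root (carrying a rank and a
-- descriptor) or a link to another point.
data Link a
  = Root Int a
  | Link (Point a)

newtype Point a = Point (IORef (Link a))

fresh :: a -> IO (Point a)
fresh x = Point <$> newIORef (Root 0 x)

-- Find the representative, compressing paths along the way; this
-- writeIORef is exactly the in-place mutation the paper replaces
-- with deferred computations.
repr :: Point a -> IO (Point a)
repr p@(Point ref) = do
  link <- readIORef ref
  case link of
    Root _ _ -> pure p
    Link next -> do
      root <- repr next
      writeIORef ref (Link root)  -- path compression
      pure root

-- Union by rank: the shallower tree is linked under the deeper one.
union :: Point a -> Point a -> IO ()
union p1 p2 = do
  r1@(Point ref1) <- repr p1
  r2@(Point ref2) <- repr p2
  if ref1 == ref2
    then pure ()  -- already in the same set
    else do
      Root k1 x1 <- readIORef ref1
      Root k2 _  <- readIORef ref2
      case compare k1 k2 of
        LT -> writeIORef ref1 (Link r2)
        GT -> writeIORef ref2 (Link r1)
        EQ -> do
          writeIORef ref2 (Link r1)
          writeIORef ref1 (Root (k1 + 1) x1)
```

Note how all the mutation is concentrated in a handful of `writeIORef` calls, which is the readability point made below.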

I have now slept on those ideas and have the following follow-up comments:

  1. I think that the original “mutating” algorithm is much simpler to read and understand, as the mutation expressions are all concentrated in the IORef modification operations.
  2. The original algorithm is well proven and known to work; although the authors of the paper have also proven their algorithm correct, its implementation would need to be thoroughly tested, unlike the original algorithm’s.
  3. It seems to me easier to implement a mutation monad solely for the purpose of implementing the original algorithm (in a module not publicly available, but private to the compiler application). This single use does not require help from the compiler, as Capabilities or HKTs would, nor is “do notation sugar” required.
  4. With a mutation ST-style monad, the original algorithm can be easily implemented.
  5. Elm would have to be modified anyway for deferred memoized computation as in a memoizing lazy.
  6. The authors missed that what they were implementing is really the same thing in a different form, just moving where the execution is done.
  7. My ideas can be implemented as part of the self-hosting Elm compiler, whoever completes that first.
  8. One problem I have with the paper is that they compare their various solutions to a purely functional one, and of course the best of theirs wins; however, they don’t compare the performance with the IORef solution as usually implemented. I am wary of implementing their best solution only to find that it isn’t really any better than what I have proposed above.
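Points 3 and 4 above, sketched concretely: an ST-style monad keeps the mutation local, and `runST` returns a pure result, so a private compiler module need not expose `IO` at all. A minimal shape of the idea (illustrative code, not actual compiler code):

```haskell
import Control.Monad.ST
import Data.STRef

-- The STRef is created, mutated, and read entirely inside runST,
-- so callers see an ordinary pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc
-- sumST [1, 2, 3] == 6

-- The union-find Points would be STRefs in exactly the same way,
-- with the whole solving pass wrapped in a single runST.
```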

@rupert,

Yes, if we stay away from RankNTypes, type inference is fully decidable for my Capabilities proposal, just as Haskell’s type classes don’t interfere with type inference. There aren’t many differences between Haskell’s type classes and Abilities/Capabilities other than attempts to make the syntax easier to use without adding any unnecessary new keywords.

If there are no ST-type mutation monads (using determinable automatic in-place mutation), we don’t need HKTs, and the implementation becomes pretty simple.

@rupert,

I think that Evan may have been aware of this paper, or something like it, as he appears to have made some IORef optimizations compared to a “standard” UnionFind. I think I can implement an algorithm that emulates what IORef does in a package for an Elm-like language without cluttering up the language itself. Also, I don’t think we have the same incentive to avoid monadic mutation chains and the complexity of monad transformers when putting this work to different purposes than the authors of this paper had in developing something for their Essential Haskell Compiler project.

Unfortunately, their code is not accessible to me (forbidden access), so it isn’t as easy to test this and compare it with the alternatives as it could be. However, their definition of the algorithm is written in Haskell syntax, so it should be reasonably easy to implement oneself.

Although it isn’t necessary for the project, it might be interesting to do this comparison to see relative performance, and I’ll look into how that might be done today…

@rupert,

In reading the paper, I see that they did compare the results, with the conventional IORef implementation called “SHARING” and their new implementation called “FUNCTIONAL SHARING”, plus some variations on their scheme. Given that their “SHARING” doesn’t include Evan’s extra optimizations, and that the occurs check is likely required to ensure there are no cyclic type definitions, I don’t see that their scheme would have any higher performance than the conventional one. So if we are able to implement the conventional algorithm, I see no reason to go through all the work of implementing and testing this alternate one…

As noted previously, this has nothing to do with in-place mutation by using unique types…

@rupert,

Your little poll is a good idea.

I wonder if we should poll on whether the new language should provide a means of bypassing bounds checks, either in optimized release mode or by the programmer using “unsafe” versions of get/put or read/write functions, or whether we should instead work on ways of automatically eliding bounds checks when the compiler can determine it is safe to do so…

Unless you are serious about implementing it, it’s probably just going to be another round of getting people’s hopes up followed by :tumbleweed:

I don’t have time to contribute to a serious implementation, at least not for another 6 months. I will probably look into a few small experiments on the auto uniqueness typing thing, and also I have started a little experiment around vdom+wasm. That’s about all I have the capacity for.

@rupert,

I AM serious about implementing it, which is really why I started writing the Elm compiler in Elm by translating Evan’s compiler from Haskell to Elm. As well as being a handy tool to use as an Elm library package (which I now see is likely impossible, as it requires mutation in a couple of key places), it has helped me learn Evan’s compiler to the point that I can solve issues, and the completed project would be a handy jumping-off point to explore an extended Elm as discussed here.

This is a big project for one older person, but I hope to fire up support from others in different capacities, such as the language web page, documentation, funding, package support and management, and the different areas of specialized language and compiler development. This seemed as good a place as any to make inquiries, in that any remaining Elm enthusiasts will likely still be lurking here. This thread has caused me to see that we are unlikely to get help from Evan himself (although I haven’t entirely given up hope), but others have mentioned working on similar projects or sub-projects that might be merged into my goals. It is also why I wrote the RFC to explore where an extended Elm-like language might go.

I am older than most of you and have been programming for over fifty years (although my degree is in Electronics Engineering), which is perhaps longer than many of you have been alive. One of the advantages of my position in life, besides my experience, is that I have more free time than a younger person and don’t need to derive an income from this or other work. However, in order to maintain momentum, I need the support of others’ opinions and advice.

I don’t even aspire to be BDFL, at least in the long term, but just believe that Elm is perhaps the perfect (delightful) syntax from which to launch a FP language that can show the world what FP can do as a general purpose language (unless one prefers the Roc course).

I don’t have time to contribute to a serious implementation, at least not for another 6 months. I will probably look into a few small experiments on the auto uniqueness typing thing, and also I have started a little experiment around vdom+wasm. That’s about all I have the capacity for.

Even starting from the code base of Evan’s compiler and the efforts made to translate it to Elm, this is a big project, taking at least a year or two to reach even a minimally usable state. So if you become more available to help more actively six months down the road, I’m sure there will still be work to do. Meanwhile, contributing in any smaller way, including to the eventually required core packages, will be valuable. Also, even chatting about it here has helped me modify the goals for the project, such as adding in-place uniqueness mutation and removing the need for HKTs.


Hello @GordonBGood!

Even more specifically, if there were such an open source project, would any of you see yourselves as contributors or available to head the project?

This is a big project for one older person, but I hope to fire up support from others in different capacities as in the language web page, documentation, funding, package support and management, and the different areas of specialized language and compiler development, and this seemed to be as good a place as any to make inquiries in that any remaining Elm enthusiasts will likely still be lurking here.

Very interesting thread! I’m pretty sure I would be very interested to get involved in such a language. Although I don’t currently have the knowledge to help with the technical implementation, I could contribute in other ways such as documentation, maybe tooling, infrastructure, feedback etc.

I’m still a functional programming padawan, and right now have settled on OCaml as my go-to language of choice for the backend. Haskell does interest me too, but it feels “too powerful” for my needs in some sense. OCaml is a lovely language and gives me vibes very similar to Elm, but keeping the code functional does require discipline. Which I feel fine with for my own code and for exploring, but I’m not sure how I would feel jumping into a foreign code base with imperative gotchas (that’s the tradeoff of using OCaml). So for these reasons, I feel that some language is missing (as I’ve explored a few), and the one you propose sounds very appealing to me.

I’ve got kids + a full-time job + some other duties on the side + self study, so my time is fairly limited. But then again programming is my only area of interest at the moment (kinda obsessed with it) so my contribution power is definitely non-zero.

My modest open-source contributions related to programming currently are:

I wanted to explore Roclang, and “stole” the F# plugin and adapted it to Roc. But I quickly lost interest in Roclang when I saw that currying was unavailable by default, because currying was deemed confusing to beginners.

I really enjoyed learning the basics of OCaml and wanted to help out the author of this great library. I basically wrote the tutorial I wish someone had written when I first looked into it. It is still a work in progress.

So to sum up, my main motivation is learning. And I could see myself investing in various areas if the language turns out to be close to my areas of interest, to keep on learning.


Welcome @benjamin-thomas and thank you for chipping in.

Very interesting thread!

This is probably the most active this forum has been in quite some time…

I would be very interested to get involved in such a language. Although I don’t currently have the knowledge to help with the technical implementation, I could contribute in other ways such as documentation, maybe tooling, infrastructure, feedback etc.

We will likely need at least as many people willing to help in all those areas as in research and actually coding the compiler. Documentation is a great area to get started, because it forces one to use what one is documenting and thus learn more about it; but maintaining the web page and the package support website are also extremely useful and will involve using Elm and, later, what Elm is to become…

right now have settled on OCaml as my go-to language of choice for the backend

Have you tried F#/Fable? Fable is kind of interesting right now in that one can use it to generate Rust source code (only in alpha status, but it seems to work quite well except for some required missing support functions), while writing in a functional language (with discipline, as for OCaml)…

Haskell does interest me too, but it feels “too powerful” for my needs in some sense.

I started my FP journey with F#, but gradually learned Haskell when I saw what it could do as the top of the FP chain. It was a long journey to where I am now with Haskell (I wouldn’t say an expert, but competent enough), but worth it as I wouldn’t be able to do what I have done with the Elm compiler, Elm, and many other languages, without all that I learned along the journey…

Once the project gains some momentum, I’ll contact you to see if you can fill in in your areas…

I actually learned the basics of F# before getting into OCaml. I got drawn to OCaml because the language felt simpler (and the libraries as a consequence), and also I felt more at home with its more *nix-focused mindset.

But F# is a fine language; it definitely scores plus points where OCaml has minus points. Good to know about Fable; I had played with it indeed but somehow missed the Rust compilation target.

Good to hear. I’m pretty sure I’ll follow this path too, but it’ll probably take a while.

Sounds like a plan :slight_smile:

@rupert,

I did some looking into Roc’s current Abilities implementation compared to my Capabilities proposal for a new extended Elm language, and I make the following observations:

  1. For advanced use, Roc provides a way to define new Abilities; I see that as necessary in current Roc because there are so few Abilities pre-defined (and even fewer currently implemented). Since I propose that a pretty complete set of Capabilities be predefined in the new Basics module, I am suggesting there may be no need to allow user-defined Capabilities.
  2. In my Capabilities proposal, I suggest that just defining the required function for a type to match the required type signature in a where list would be enough to make that type (or type alias) an instance of that Capability; however, Roc’s approach of specifying both the Ability AND optional implementations of the Ability functions to override the default implementation (where one exists) is more explicit and likely better.
  3. In my Capabilities proposal, I suggest that there be default implementations of the Capability functions where this can be expressed generally, and I think this is still a good idea; Roc does have default implementations but in order to include them (if possible) in an “Opaque Type”, one has to list them explicitly without new overriding “custom” function definitions.
  4. In order to add new capabilities to an already-defined type in Roc, one needs to define an “Opaque Type” enclosing the type and make the Ability changes on that new Opaque Type. I would suggest this would be possible for my proposed Capabilities by defining a new type alias and defining the extensions on that, but that may be awkward unless the returned base type is automatically promoted to the type alias. This needs more thought and example use, but such “promoting” would fix one ambiguity in current Elm: a type alias of a record provides its name as a record constructor function, but type aliases of that type alias do not. By “promoting” the base type up to the type alias where used, new outer nested type aliases would have the same ability to use their names as constructors and, when used, would provide the differently defined Capabilities.
  5. Roc does not have a Num Ability but only a type that includes Frac and Integer subtypes, specified by ranges for each sub-subtype. I guess that works well enough, and would work for ElmPlus with the backward-compatible number type family just converting to something like Num, Float to Float64, and Int to Int32.
  6. Without HKTs, Roc cannot (and states that it will never) support Functor, Applicative, Monad, etc. Abilities, and we must accept the same limitation. Roc has minimized the monad-like types so that only Result could be expressed as a monad, although it isn’t; and Task, as an option for different platforms, could be a monad but isn’t. In current Elm, only Maybe, Result, and Task could be monads, and normal use of these doesn’t require much chaining, so it’s not much of a limitation not to have them. There is also the thought that monadic chain mutation shouldn’t be too easy to use or it will be used too often, so we can make this available but not necessarily easy; it also means we may not have to be too concerned with optimizing monadic chains as to phantom argument elimination, as the back-end compiler may do all of that for us anyway.

I think we have to have Capabilities/Abilities, but can leave HKT’s on the back burner to see if a need develops.
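For comparison, the default-implementation idea in point 3 maps directly onto what Haskell type classes already do; a minimal sketch with illustrative names (the proposal’s where list would play the role of the instance declaration):

```haskell
-- A "Capability" with an overridable default implementation.
class Describable a where
  describe :: a -> String
  describe _ = "<no description>"  -- default, used when not overridden

data Color = Red | Green

-- Override the default:
instance Describable Color where
  describe Red   = "red"
  describe Green = "green"

newtype Opaque = Opaque Int

-- Accept the default, like Roc listing an Ability with no custom
-- function definitions:
instance Describable Opaque
```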

@rupert,

I’ve been churning this over in the light of “what Evan would do”, and I think that “the spirit of Elm” doesn’t forbid having different ways to do things, as in passing arguments to functions with surrounding brackets, pipe operators, or point-free expressions with function composition operators, even though Evan said he preferred just surrounding arguments with brackets where necessary. I think “the spirit of Elm” means keeping the things the language does do as simple and easy to learn as possible, but doesn’t extend to shortening syntax (rather, Evan seems to prefer writing keywords and functions out in full in camelCase) beyond a minimum set of commonly-used symbolic operators.

So if there is an alternate syntactic “sugar” that makes it easier to express a chain of operations without having to match brackets, I think that is within the “spirit of Elm”.

@benjamin-thomas,

As you likely know, OCaml is the design ancestor of F#, so of course there are similarities. I tried OCaml for likely the same reasons as you: I wanted a mostly-FP language like F# that could generate efficient machine code, and indeed, it fulfills that desire. My problem with OCaml is that it doesn’t feel as syntactically “pure” as F# (where do those single and double semicolons go now?), and my problem with both of them is the mixed-paradigm support in directly allowing mutation and OOP, although as you say, one can (mostly) use programmer discipline to write around these.

Yes, Fable → Rust is an interesting development, but it doesn’t compete (at least yet) in performance with OCaml or pure Rust, as the Rust library for Fable wraps things to make them easy for Fable to use as a back-end target. Accordingly, Fable → Rust has way too much reference counting, overflow and bounds checking (which may not even be as efficient as pure Rust’s because they are wrapped), and general recursive-looping overhead, making the result about twice as slow as C/C++, or likely what pure Rust or OCaml can provide, for “hot” loops.

If you have the time to get into Haskell, it certainly is a mind blast and expansion device; however, knowledge of Haskell isn’t a requirement for this programming project unless you want to get involved with language design (comparing how Haskell does things to what we want to incorporate) or with actually translating Haskell compiler code to our project’s compiler implementation.


I think you can define a nice set of basic capabilities, but I think I would also want user-defined capabilities.

I have been using Jeremy’s OO-style pattern in Elm to do ad-hoc polymorphism, by hiding the implementation in a continuation: Elm Object Oriented Style

This let me define an element in a drawing like this:

type Element msg
    = Element (ElementIF msg)

type alias ElementIF msg =
    { move : VScene -> UpdateContext msg -> UpdateContext msg
    , select : UpdateContext msg -> UpdateContext msg
    , animate : Posix -> Maybe (Element msg)
    , position : PScene
    , bbox : BLocal
    , view : ViewContextIF msg -> List (Svg msg)
    }

What I would use ‘capabilities’ for is to define this exact same interface, and then create multiple implementations of it.

The issue with the technique that lets us use the above already in Elm is that it is a bit of a hack to hide the implementation using continuations. It is also inefficient as a result. It’s actually fine for TEA event handling because the overhead is not so noticeable there. But for more fine-grained use, say defining implementations of hashable for user-defined hash table keys, it is too slow.

I think capabilities bring a decent level of ad-hoc polymorphism to the language, which will make it easier to program in an open-ended fashion. As per my example, everything a new drawing element needs is all specified in a single interface. I can add new ones without going through a lot of existing code and altering many case ... of statements to hook in the new behaviour - they are self-contained.


@rupert,

Thank you for your example of a case where one would want to be able to define their own Abilities/Capabilities. Your links to Elm examples where this would have been a good feature to have were very clear and you make a good point.

If we don’t have HKTs, then ElmPlus cannot be polluted with definitions for Functors, Applicatives, and Monads (although I don’t really have a problem with these, I would just prefer to limit the complexity).

So, given you make a good case for being able to add Abilities/Capabilities, would it still work for you if we don’t allow adding them to packages published to the package infrastructure but allowing them to be added in applications?

I would also allow them to be published to packages.

For example, I tried to create an authentication package with an API implemented as a record of functions, in order to permit multiple independent implementations:

https://package.elm-lang.org/packages/the-sett/elm-auth/latest/

Admittedly, this one is a bit experimental, and perhaps not really worth the abstraction. But I can see a use case for being able to publish an interface for independent implementation by some 3rd party, to support plugin architectures in Elm.

Which brings us to another topic… dynamic code loading.

@rupert,

Again, you make a valid point that there shouldn’t be restrictions on creating new Abilities/Capabilities.

Which brings us to another topic… dynamic code loading.

I know of this but not enough about the details to make a comment: explain if you think it’s important…

So that brings up a general philosophy: should a general-purpose language such as ElmPlus restrict features at all? Or should features that are “dangerous” have some “safety” guards placed on them so that their use has to be deliberate, in spite of warnings in the documentation and/or issued by the compiler? That seems to be the general solution even for “safe” languages such as Rust. My train of thought then goes as follows:

  1. As in the presentation link in the OP of this thread, even Evan seems to have thought that a non-Elm non-JavaScript producing language probably shouldn’t be restricted as to its features.
  2. As Elm people, the objections we have to Roc are the decisions made on what to leave out, justified in the interests of less confusing errors for new users: thus no currying, no back pipes, default ordering of arguments to suit the version of forward piping and “backpassing” used, no monads, no, no, no, etc. Perhaps the proper way to handle this, other than good documentation and compiler warnings, is not to forbid these but to provide the tools to make them easier to use when there is little to no risk, and perhaps a little awkward when they can cause weird errors…
  3. You have explored and justified some things that I felt should be restricted and showed that these things can be useful if not restricted, which I can see as they aren’t that “unsafe”.
  4. Higher-Kinded Types (HKTs) are in the not-that-unsafe category: they require a higher-level programmer to use them effectively, but the everyday programmer doesn’t deal with them directly, as their prime use is in defining Functors, Applicatives, and Monads (or whatever they are called in the language), which most will never define themselves but just use.
  5. Some sort of “do notation sugar” actually makes the language easier for programmers new to FP, as it makes the code more resemble imperative code. Actually, I like what Roc has done with “backpassing”, as it isn’t limited to just one Monad at a time (which Roc doesn’t have as an Ability, but the Task type and/or anything similar defined in platforms actually is one). What it does is, I think, positive, as it makes chains of functions read cleanly without a bunch of nested parentheses.
  6. RankNTypes are not often needed, are dangerous in that they can make Type Inference ambiguous/undecidable without type signatures, and are only a convenience for defining types that can contain any type without requiring a type variable in the outer type specification; but they are mandatory if one is to have mutable monadic function chains as required by the Haskell STRef types: runST can’t be defined to work safely without them. Since functions like runST would always be defined in the core package, we can restrict the use of forall in function type signatures to the core packages, which seems to take care of that problem. As for the less important use of forall in type definitions, perhaps this could be made less friendly to use by requiring that every type defined with it carry a type signature at its use sites, making it not casually used but only used where the overall code is simplified.
  7. Every feature request would have this same classification: if it is safe to use, why not; if can cause problems to the compiler or programmer, restrict the use to only the situations where it may have some use and require syntax to make it safe and perhaps easier to use without weird compiler problems.
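The runST point in item 6 is worth seeing concretely; its rank-2 type is what prevents a mutable reference from leaking out of the computation that made it. A small sketch:

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad.ST
import Data.STRef

-- runST's actual type is rank-2:
--   runST :: (forall s. ST s a) -> a
-- The universally quantified s ties every STRef to one runST call.

-- This compiles: the STRef never escapes the forall'd scope.
ok :: Int
ok = runST (newSTRef 41 >>= \r -> modifySTRef r (+ 1) >> readSTRef r)
-- ok == 42

-- This is rejected by the type checker, which is exactly the safety
-- the rank-2 type buys:
-- bad = runST (newSTRef 41)   -- s would escape in the type STRef s Int
```

So confining `forall` to the core package, as suggested above, would keep this one essential use while hiding the machinery from everyday code.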

Elm has great type safety. Its non-safe features, specifically the ability to do synchronous FFI, are not available to the general programmer. I don’t see any of Capabilities, HKTs, or RankN types as being unsafe, in the sense that they can cause runtime errors. When we talk about safety, that is what I think we mean: can a program crash and produce an unrecoverable error? Since there is no exception handling in Elm, we think of all runtime errors as things that should only happen when unsafe code breaks.

Potentially unsafe FFI could be allowed for all programmers by requiring an “unsafe” keyword. I’m not such a fan of that on the whole. I think C# has unsafe, and it is transitive, so importing an unsafe module makes your module unsafe. What would likely happen is that programmers would publish unsafe packages wrapping all sorts of javascript libs, and all practical commercial applications in Elm would be “unsafe”.

The issue with HKT or RankN is that they come from a more academic setting. I don’t understand them, and probably most programmers looking to write a web app don’t either. But right away with Elm, any programmer can see how the existing typeclasses don’t quite cut it, allowing us to get type errors (the int ^ minus-number thing) and also stopping us doing some things that would be convenient, such as custom key types in dicts. So I see capabilities as a way of solving these problems without introducing too much that feels like academic FP. Elm is practical FP that enables programmers to be productive in a typical commercial setting. There would still be type safety and type inference. I feel I should really play around with HKTs in Haskell, to get a better understanding of what you would be missing out on by not having them. For me it’s about minimalism: have a high-level language that helps me focus on what I am trying to achieve, without getting lost in the esoteric semantic possibilities it offers. I think this sums up for me what the issue with “getting too clever” is: The Universe of Discourse : Why I never finish my Haskell programs (part 1 of ∞)

My ideas for advancing Elm probably differ from yours, since I am not really thinking along the general purpose programming language line. I am thinking more along the lines of, can we separate Elm the language from Elm the (current) runtime, and add new runtimes to the language that will let us use Elm in a wider range of specialist domains where it could work well.

My idea for kernel code would be to define “package domains”. The existing elm/* can be divided into a collection of domains - core, browser, http and maybe a few others. By extending the package system we could have a system where authors can create their own domains. When consuming packages, you get the elm/* domains automatically, but would have to opt in to 3rd party ones. This would also mean that the original elm/* packages and package site are never “polluted” with 3rd party kernel code.

To give an example, there was a recent suggestion on here of exploring using Elm to write processors for Apache Kafka Streams. Since that would require a unique runtime, possibly one that is embedded in the JVM instead of Javascript, the author might start a new kafka domain in which to write the kernel code to interface into that. (I have previously found the existing elm/* can largely be made to run on the JVM by setting up a browser like environment and running them through one of the available javascript/JVM interpreters. Yes its slow.)

Another domain I would be keen to explore: can Elm be used for scientific/data science/machine learning type applications? Writing some Python at the moment, and whilst I respect its practicality and that it enables me to get these things done, the language itself is severely lacking when you come to it from Elm. The problem with FP in this domain is that it is not fast, and it’s an area where speed matters, but I also think this is not a real limitation; we can fix it with better compilers. There seems to be a recent example of that on here with the Elm for signal processing post.

HTTP servers - elm-pages, but we seem to be just about managing that with Elm as it is. (Except elm-pages uses Lamdera, is it for the automatic binary wire encoding? or Bytes through ports?)

elm-morphir - an interesting package that takes Elm to the domain of business logic and domain modelling. Elm is already well suited to this, but it lacks runtimes to easily embed it on the backend and interface it with databases and so on. This project may have changed quite a bit since I first heard of it, but at one time there was an Elm → Scala compiler, presumably for the purpose of embedding Elm into a server-side application.

and so on… Elm is a really neat language, I’d like to take it beyond its current runtime.


@rupert,

Yes, we are on the same page here: Elm has great type safety except for the math issue you’ve mentioned previously. Not allowing FFI except through application ports doesn’t really change runtime safety, as the JavaScript code accessed through a port could cause a runtime failure, although not on the Elm side, since the interface to ports is checked.

I talk about “safety” in a more general sense, including not just runtime failures but any hard-to-deal-with problems, such as ambiguous compiler messages when a type cannot be resolved. Neither Capabilities nor HKTs cause either of these problems, but RankNTypes would fall into the latter category in Haskell when insufficient type signatures are provided for Type Inference to resolve. My proposed solution to the RankNTypes problem is that all types defined using it must be used with type signatures, as enforced by the compiler; compare how using ports in an application makes the programmer jump through hoops and costs execution time, so that they tend not to be used unless there is no other way to do something.

Potentially unsafe FFI could be allowed for all programmers by requiring an “unsafe” keyword. I’m not such a fan of that on the whole. I think C# has unsafe, and it is transitive, so importing an unsafe module makes your module unsafe. What would likely happen is that programmers would publish unsafe packages wrapping all sorts of javascript libs, and all practical commercial applications in Elm would be “unsafe”.

Yes, “safe” Rust can be turned into “unsafe” that way too, with the problem that programmers who tend toward premature or unnecessary optimization will use it almost all of the time, resulting in all code being unsafe and pretty much cancelling out the safety guarantees provided by the language. Even Haskell provides packages with unsafe functions that bypass normal runtime guarantees, such as skipping bounds checking or allowing in-place mutations that may break referential transparency. Roc appears to not allow any of this: it only performs in-place mutations when the compiler determines they are not visible (and therefore safe), only elides bounds checks when bounds guarantees are already in place, and so on. I think I prefer the Roc approach, although there may be some inefficiencies for some use cases caused by the builtin data structures (List, Dict, and Set) being based on contiguous arrays rather than linked persistent data structures; however, not using any linked data structures at all for builtins was just an implementation choice.

In short, ElmPlus might be safer than Rust in not allowing these bypasses, yet be in general at least as fast due to the compiler optimizations generally able to compensate.

The issue with HKT or RankN is that they come from a more academic setting. I don’t understand them, and probably most programmers looking to write a web app don’t either.

My point about these is that one doesn’t have to fully understand them in order to use them. In fact, Elm already has three monads in Maybe, Result, and Task (and Functors and Applicatives); they just aren’t called that and aren’t formalized. Anything with an andThen function as defined in the core package is really a monad, and the purpose of having an AndThenable Ability/Capability for these would be to avoid having to qualify the andThen function calls. It turns out that these “higher order” Abilities/Capabilities can’t be defined without HKT’s, but once defined, the general programmer just uses them according to the documentation without having to think about these implementation details.
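To make that concrete, here is plain Elm as it exists today, where Maybe.andThen plays the role of monadic bind without anyone having to call it that (half is just an illustrative helper of mine):

```elm
-- Plain Elm today: Maybe.andThen is monadic bind in all but name.
half : Int -> Maybe Int
half n =
    if modBy 2 n == 0 then
        Just (n // 2)

    else
        Nothing

chained : Maybe Int
chained =
    Just 20
        |> Maybe.andThen half
        -- Just 10
        |> Maybe.andThen half
        -- Just 5
        |> Maybe.andThen half
        -- Nothing: 5 is odd, so the rest of the chain short-circuits
```

An AndThenable Ability/Capability would let this same pipeline be written once for Maybe, Result, and Task instead of qualifying andThen per module.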

What HKT’s mean is that a Type like List a is considered to be a Type constructor that takes a Type a to produce the final List a Type, written * -> * where * represents a Type, not a binding; in the same way, ST s a is a * -> * -> * Type (Type currying may also apply). This allows manipulations that refer to the enclosing type as a separate “thing” without being concerned with how it is instantiated by the application of the Type variables. The only way to define Abilities/Capabilities for higher order Types like these is using HKT’s, because they need to be defined for the enclosing Type.
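As a sketch of why HKT’s are needed here, an AndThenable Ability/Capability would have to abstract over the enclosing * -> * Type itself. In hypothetical ElmPlus syntax (ability, =>, and AndThenable are all assumed here, not real Elm), it might look like:

```elm
-- Hypothetical ElmPlus, NOT valid Elm today: m is a higher-kinded
-- Type variable of kind * -> * (e.g. Maybe, Result x, or Task x).
ability AndThenable m =
    andThen : (a -> m b) -> m a -> m b

-- One generic helper then works for every AndThenable Type:
map2ViaAndThen : AndThenable m => (a -> b -> c) -> m a -> m b -> m c
```

Without HKT’s there is no way to name m on its own; one can only refer to fully applied Types like Maybe a.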

As I mentioned, RankNTypes have an even more limited use but can cause “unsafe” Type ambiguity, which I addressed in my last post; as said above, though, I think the ambiguity can be solved by the compiler requiring Type signatures.

RankNTypes allow Church numerals to be defined as follows:

type Church = Church (forall a. (a -> a) -> (a -> a)) -- or (a -> a) -> a -> a, which curried means the same thing
zeroChurch : Church
zeroChurch = Church <| always <| identity
oneChurch : Church
oneChurch = Church <| identity
succChurch : Church -> Church
succChurch (Church chf) = Church <| \ f -> f << chf f
addChurch : Church -> Church -> Church
addChurch (Church chaf) (Church chbf) = Church <| \ f -> chaf f << chbf f
-- ...

instead of as currently implemented in Elm, where the encoding has to handle the special case that the a type variable might represent a value binding (as in the Int used to convert the Church numeral to an Int) instead of a function.
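For illustration, converting the forall-based Church numeral above to an Int just instantiates the quantified a at Int (toIntChurch is my name for it, and it assumes the hypothetical forall-enabled Church type):

```elm
-- Hypothetical, depends on the forall-based Church type above:
-- instantiate a at Int and apply "add one" n times to zero.
toIntChurch : Church -> Int
toIntChurch (Church chf) =
    chf ((+) 1) 0
```

No special case is needed, because a is universally quantified inside the wrapper rather than being fixed at the point where Church is used.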

The only other use I see for RankNTypes/forall that can’t be avoided, if we have monadic mutation such as STRef’s, is the following single function:

runST : (forall s. ST s a) -> a
-- ...

which is needed so that the programmer can’t intermix state between multiple calls to runST: the forall means that the s in each use of runST is unique and distinct from every other use, even though the same realWorld phantom binding is being applied to each, so a reference created inside one runST can never escape to another.
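A sketch of what using such an ST-style module could look like, again in hypothetical ElmPlus (the module name ST and the functions newSTRef, modifySTRef, readSTRef, and ST.andThen are all assumed here, mirroring Haskell’s Control.Monad.ST):

```elm
-- Hypothetical ElmPlus sketch, not real Elm: sequencing is done with
-- andThen, so no do-notation sugar is required.
bumpTwice : Int
bumpTwice =
    runST
        (ST.newSTRef 0
            |> ST.andThen
                (\ref ->
                    ST.modifySTRef ref ((+) 1)
                        |> ST.andThen (\_ -> ST.modifySTRef ref ((+) 1))
                        |> ST.andThen (\_ -> ST.readSTRef ref)
                )
        )



-- Rejected by the compiler: the result type STRef s Int mentions the
-- quantified s, so the reference cannot escape its runST.
-- leaked = runST (ST.newSTRef 0)
```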

As you can see, these can be used without fully understanding them just as most Haskell programmers don’t.

My ideas for advancing Elm probably differ from yours, since I am not really thinking along the general purpose programming language line. I am thinking more along the lines of, can we separate Elm the language from Elm the (current) runtime, and add new runtimes to the language that will let us use Elm in a wider range of specialist domains where it could work well.

I think that our aims are identical, but my approach may be more bottom-up where yours is more top-down. Your idea of separating out different runtimes is actually the same thing, which is why I asked what you think about Roc’s “platforms”, whose purpose is essentially what you state (as I understand it).

My idea for kernel code would be to define “package domains”…

That sounds like Roc’s “platforms” doesn’t it?

You have considered a lot of high-level domains in which I don’t have experience, but you definitely get the idea…

Elm is a really neat language, I’d like to take it beyond its current runtime.

YES, exactly…


Yes - pretty much.

I just tried to think of a way that it could be done whilst avoiding pushing any kernel code to package.elm-lang.org, and that would be backwards compatible by default with how things currently work - hence the opt-in to use it, and replacing the package system.

The current package system lacks mirroring, is vulnerable to breaking if GitHub changes its hash algorithm, does not support private package repos, and so on. Again, I think it was a smart economic decision to piggyback the package system on GitHub, since developing something more expansive would take a fair amount of effort.
