RE: Deprecation of elm-lang/lazy library

If only LazyList were exposed by the language, you’d be forced to create singleton lists.

That wouldn’t be very efficient. If there are enough use cases in libraries for Lazy, maybe it can be added back to core; that would make more sense than a lot of libraries needing to be included in elm-explorations because they must use Native (Kernel) code to implement Lazy within each library. If there are only two or three cases, then perhaps Lazy wouldn’t need to be exposed.

It is lazy (but not memoizing); you can see the continuations for later exploration being created on lines 173 and 178. So it can search an infinite tree. I have tested this: I made sure this code tail-recurses, so it can loop all day without running out of stack (at least for depth-first search; obviously breadth-first will blow the heap pretty quickly).

Cycles will break it. I am assuming that someone coming to this code to write a search will come with a good understanding of that anyway. The Step function may incorporate cycle-detection code that is specific to the problem at hand, or rely on the cost function to price out cycles.

Another interesting idea (that @evancz might prefer) would be to investigate what would happen if we split laziness from memoization:

  • Just laziness can be done using the \() -> stuffToCompute thunk pattern we currently already have.
  • Memoization could be added by a memoize function that uses a variant of the Y-combinator to add memoization to recursive calls to the same function (see the sketch below).
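
To make the second bullet concrete, here is a minimal sketch of the open-recursion shape such a memoize would work with. The names are mine, and the cache itself is precisely the part that cannot be written in pure Elm:

-- The function to be memoized takes "itself" as an extra argument:
fibOpen : (Int -> Int) -> Int -> Int
fibOpen self n =
    if n < 2 then
        n
    else
        self (n - 1) + self (n - 2)

-- Tying the knot without a cache just recovers ordinary recursion:
fix : ((a -> b) -> a -> b) -> a -> b
fix f =
    \x -> f (fix f) x

fibPlain : Int -> Int
fibPlain =
    fix fibOpen

-- A real memoize : ((a -> b) -> a -> b) -> a -> b would wrap the `self`
-- argument so that repeated calls hit a cache instead of re-entering
-- the recursion.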

It might be a bit difficult to implement, however. The hard part (maybe someone smarter than me can figure it out) is how the memoized results could be stored. Using only Elm, it would be intuitive to use a Dict, but that requires its keys to be comparable, which functions are not. I also cannot think of a sensible way to convert them to a comparable, which e.g. AllDict (eeue56/elm-all-dict) requires. But maybe it is possible to somehow gather all arguments and combine them into a tuple as the key instead. Even that might fail if one of the arguments is not comparable…
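
As a concrete illustration of the pure-Elm Dict approach (and its awkwardness), here is a sketch with names of my own choosing; the cache has to be threaded through every call explicitly, and the key must be comparable:

import Dict exposing (Dict)

-- Fibonacci with an explicit Dict cache threaded through every call.
fibMemo : Int -> Dict Int Int -> ( Int, Dict Int Int )
fibMemo n cache =
    case Dict.get n cache of
        Just value ->
            ( value, cache )

        Nothing ->
            if n < 2 then
                ( n, Dict.insert n n cache )

            else
                let
                    ( a, cache1 ) =
                        fibMemo (n - 1) cache

                    ( b, cache2 ) =
                        fibMemo (n - 2) cache1
                in
                ( a + b, Dict.insert n (a + b) cache2 )

A memoize that hides this threading behind a plain a -> b signature is exactly what pure Elm cannot express, which is why native JS comes up below.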

Writing that memoization code in a tiny snippet of native JS would, however, be straightforward (I think) using the new Map() class, which even accepts functions as keys.

One other drawback might be that it leverages the Y-combinator, which

  • might be somewhat slow because of the extra function-call indirection. (I did not perform any benchmarks.) This is probably not a problem, since using memoize in these contexts is a conscious choice, and the time you save by not having to re-compute answers should be much greater (otherwise you should not use memoize in that situation).
  • is not the most beginner-friendly to use. However, I think good documentation of how to use memoize would mitigate this (in essence: the function you implement using memoize receives a function as input, which it should call for its recursive calls, because it is the wrapped-with-memoization version of the function).

@Qqwy

  • Memoization could be added by a memoize function that uses a variant of the Y-combinator to add memoization to recursive calls to the same function.

I’ve thought about this a lot, and it really doesn’t seem to work out. While we can create a non-memoizing Stream as per your first point and memoize it linearly after the fact, by mapping it with a function that converts each element from the thunk variant of a union type to an evaluated variant, this solution works fine for some cases but not when we need true recursion.
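
For concreteness, here is a minimal sketch (the names are mine, not from the thread) of that thunk-to-evaluated union conversion:

type Cell a
    = Thunk (() -> a)
    | Evaluated a

-- Forcing a Thunk once converts it to an Evaluated cell; the caller must
-- keep the returned cell to actually benefit from the caching.
forceCell : Cell a -> ( a, Cell a )
forceCell cell =
    case cell of
        Evaluated x ->
            ( x, cell )

        Thunk thunk ->
            let
                x =
                    thunk ()
            in
            ( x, Evaluated x )

This works when the structure is consumed linearly, but as argued below, a recursively defined stream keeps regenerating fresh Thunk cells, so nothing is ever reused.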

Storing things in any kind of Elm structure other than as a union, as I mention above, gives O(n log n) performance and will be terribly slow; and the Y-combinator doesn’t memoize, but just enables recursion we already have, at a cost in execution speed and without any benefit.

Even if we could make it work out with a combination of the Y-combinator and unions, why should we jump through hoops and still not end up with what we want as to execution speed, when the memoize function in JavaScript (or any language) takes less than 10 lines of code, and we have established that it isn’t detrimental to memory management?

However, I’ll look into the Y-combinator and unions solution and benchmark it if it works out.

Here’s an API for a memoizing LazyList:

get : Int -> LazyList a -> ( Maybe a, LazyList a )

Like Random.step, it gives you back not only the answer, but also a new collection (which includes the cached results of the evaluated thunks) to use from then on instead of the previous collection. If you call the same function on the new collection with the same argument, it fetches the answer from the cache rather than reevaluating thunks.

As a teacher, I like what this API can teach on a conceptual level. Like Random.step, it demonstrates explicitly how state can evolve over a multi-step process without having to mutate anything in between! Rather than the caching functionality being hidden behind the curtain, it’s right there in the API so people can see what it’s doing.
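
A hypothetical usage sketch (assuming get and some LazyList value already exist) showing how the returned collection is meant to be reused:

lookupTwice : LazyList a -> ( Maybe a, Maybe a )
lookupTwice list =
    let
        ( first, cached ) =
            get 5 list

        -- this second call is answered from the cache inside `cached`,
        -- with no thunks re-evaluated
        ( second, _ ) =
            get 5 cached
    in
    ( first, second )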


@rtfeldman, you are thinking along the right lines, but you are missing something that is generally applicable in all cases: as I said, one can memoize this way after the LazyList is formulated, but you can’t when the LazyList is formed recursively, as in my previous “plus1” examples, because it degrades into the same case as for the Stream type.

I’ve done some more work on this and proved conclusively that one can’t produce a true memoizing Lazy without mutation even if that mutation is hidden inside a function as in the former Lazy library. Here is a Test Bed in Ellie that you can use to play with different schemes (other than using Native JavaScript, of course).

To use it, set the clock speed to that of your CPU (the boost rate, as Elm runs single-threaded) and the limit on the number of loops (don’t go higher than a few thousand for the recursively defined “plus1” versions, or be prepared to wait a long time and/or get a stack overflow as the machine recursively evaluates back to the first value for each new value), then un-comment the version of the calculation() you want to test, commenting out the others.

I reformulated the sequence types to be more as generally expected with the head of the list wrapped in the deferral/laziness, for example the Stream:

type Stream a
    = ConsStream (() -> { head : a, tail : Stream a })

but that makes no difference to looping speed; it just moves the laziness to enclose the head rather than the tail.
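
For illustration, a couple of hypothetical accessors (the names are mine) show what that means in practice: a step must be forced even to read the head:

headOf : Stream a -> a
headOf (ConsStream thunk) =
    (thunk ()).head

tailOf : Stream a -> Stream a
tailOf (ConsStream thunk) =
    (thunk ()).tail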

The following examples are supplied:

  1. a simple loop so one can measure maximum speed, at about 6.35 CPU cycles per loop on my machine.
  2. a non-recursive numbers(), running at about 320 cycles per loop for 10 million loops on my machine, with roughly linear performance with range (slightly variable depending on how much garbage collection is triggered and when).
  3. a recursive version of numbers using Stream, running in exponential time: about 250,000 cycles per loop for a range of 1000, and over twice that per loop for a range of 2000 (so about four times the total execution time, because of the exponential growth).
  4. An attempt at a LazyList implementation using the ideas expressed here, but with no benefit when defined recursively, and no point when applied to a non-recursive algorithm which doesn’t need laziness to run efficiently.

This proves that just a non-memoizing Stream can’t handle cases which must be defined recursively, and that there is no substitute for a mutating Lazy (although the mutation may be hidden).


@evancz, with no reply I’m worried this thread will close without a proper answer:

[quote="evancz, post:6, topic:1876"]
Having a lazy library allows you to create code like this:

type Stream a = Cons a (Lazy (Stream a))

ones : Stream Int
ones =
  Cons 1 (lazy (\() -> ones))

When the lazy part is forced, the internals switch from an unevaluated function to a pointer back to the initial allocation. This is cyclic data!

If you can create a value like this in a let expression, you can dynamically create cyclic data that needs to get GC’d at some point. This means a naive ARC implementation is not viable anymore.

So given the actual details about cycles (only top-level and statically known) I think the GC considerations are still valid. If I am missing something, I’d like to understand it via a small example.
[/quote]

With respect, I could be wrong myself, but I believe your reason for rejecting Lazy over GC concerns is mistaken, as follows:

To re-write your above example not using Lazy:

type Stream a = Cons a (() -> Stream a)

ones : Stream Int
ones =
  Cons 1 (\() -> ones)
  1. This is identical to your example except it doesn’t memoize on evaluation.
  2. As of Elm 0.19, neither can be used in a local let .. in block, so that consideration for rejecting Lazy doesn’t apply; and if they could be used, using Lazy or not would cause the same problem either way.
  3. As you said in the quoted text above, cyclic data in the global scope is allowed because it doesn’t cause a problem; that’s true either way.
  4. If your Lazy version of “ones” is used at the global level, it doesn’t cause a memory-management problem or a memory leak: although the reference occurs in two places, at the global level and buried in the closure, that’s as high as the reference count goes, other than for external references to “ones”.
  5. A non-cyclic structure using Lazy with a global (or permanent local) reference can cause all of the memoized memory to be retained, since the head of the chain is referenced globally; but this is true for any memoizing data structure such as Dict or Array. Such definitions should really be initialized with a function call (see the sketch below) so that the reference can be forgotten/deallocated when it isn’t needed any more; thus, this isn’t a consideration.
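
A minimal sketch of that last point, using the Stream type from the example above (the function name is mine): generating the stream from a function call rather than pinning it in a top-level binding means the caller can drop its reference, letting the whole chain be collected:

makeOnes : () -> Stream Int
makeOnes () =
    -- no cyclic data here: each call allocates a fresh Cons cell
    Cons 1 (\() -> makeOnes ())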

So given that Lazy doesn’t cause a problem with (reference-counted) GC, what then is your objection? That you or NoRedInk don’t currently have an application for it doesn’t seem like a good argument to me, when we are talking about only 10 lines of JavaScript code.


Extending this just one more time…


@GordonBGood You’re not alone. I am still hoping for feedback on this as well.

Lazy is able to create cyclic data structures only because of a hole in the rules enforced by the compiler, a hole that more frequently leads to runtime exceptions due to incompletely initialized values.

Consider a non-lazy implementation of the Lazy API, in which Lazy.lazy would immediately call its argument rather than tucking it away to be called later. In the cyclic ones example, this would result in an attempt to use the value of the ones variable, which had not yet been initialized because we were calling Lazy.lazy in order to construct a value with which to initialize it. Assuming the compiler has no special knowledge of the lazy module, it should no more allow a loop to be created through Lazy than it should allow identically structured but eager code using something other than Lazy. If the code is safe against eager evaluation, then the function passed to Lazy.lazy has to have all of its references already initialized at the point when it is passed; hence they all need to exist before the function is constructed, and hence there can’t be any cyclic references.

Now, why doesn’t Elm enforce a prohibition against such code? My understanding is that ML does. It’s not because the condition cannot be detected: if the reference graph contains a strongly-connected component containing more than just functions (or functions and objects that the compiler can initialize without invoking other code), it has a dangerous cycle.

I suspect the reason Elm doesn’t enforce it is that this would mean you couldn’t create recursive generators and decoders, since to do so you need to pass a function referencing the variable that will eventually hold the generator or decoder being constructed. Like lazy, doing this is runtime-exception safe because the construction of the generator or decoder doesn’t evaluate the function, even though there is nothing in the API to let the compiler know that it doesn’t. Instead, we just end up with a cyclic data structure. Elm 0.19’s rules forcing these to happen at module scope mean that the cycle exists somewhere we no longer care about from a collection standpoint, so we’re just left with the potential runtime-exception issue. But the core point is that if Elm went the ML route and prohibited this source of potential runtime exceptions, it would also make Lazy incapable of producing cyclic data.
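
For reference, the recursive-decoder case mentioned above is the standard pattern with elm/json’s Json.Decode.lazy; the self-reference is wrapped in a function that is not called while the decoder is being constructed (the Comment type here is just an illustration):

import Json.Decode as Decode exposing (Decoder)

type Comment
    = Comment { text : String, replies : List Comment }

commentDecoder : Decoder Comment
commentDecoder =
    Decode.map2 (\text replies -> Comment { text = text, replies = replies })
        (Decode.field "text" Decode.string)
        -- the thunk delays the reference to commentDecoder, so nothing is
        -- evaluated during construction; we just end up with a cycle
        (Decode.field "replies" (Decode.list (Decode.lazy (\_ -> commentDecoder))))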

Mark

@MarkHamburg, if you read through the thread, you’ll see that it isn’t just Lazy that can create cyclic data structures at the global level (they are now detected and forbidden at the local level), but any kind of deferred function. Yes, there is a compiler “hole” that allows it, but only because @evancz knows about it and has judged that it doesn’t really do any harm, with some advantages for the cases you’ve mentioned. Where I believe he is mistaken is in thinking that the Lazy library causes worse memory-deallocation problems than the use of deferred non-memoized execution, which is obviously still allowed.

But do note that the problem of run-time exceptions exists whether we have a Lazy library or not, as one can create the same run-time exceptions using deferred execution. Also, not all cases causing run-time exceptions can be detected, as per the “halting problem”.

I don’t much use ML languages other than F#, which has the same detection of infinite types and recursive bindings as Elm but allows recursive bindings at every level with a warning (which can be turned off); of course, it uses garbage collection. Actually, since these wouldn’t be much of functional languages without function recursion, and since with function recursion we can transform binding recursion into function recursion with injected deferral/laziness, such as use of a “Y-combinator” (which can be implemented in either language), I wouldn’t really mind if binding recursion weren’t allowed at all, even at the global level.

However, even without any binding recursion at all, we haven’t eliminated run-time exceptions, as it is still easily possible to overflow the stack with recursive function calls that don’t (or can’t) use Tail Call Optimization, or to loop forever without a termination condition.

And all of this is beside the point that the Lazy library has been eliminated because it supposedly prevents the possibility of reference-counted memory management, which it doesn’t. The cyclic “ones” example causes two uncollectable references at the global level (the only level where it would currently be allowed) no matter whether it is implemented with “Lazy.lazy <| \() -> ones” or simply “(\() -> ones)”. Can’t anyone but @Qqwy and I see that?


@GordonBGood It’s hard to engage with long posts, so I want to summarize your point as I understand it. Here’s how I would say it:

lazy can’t negatively impact ARC, because that would require using lazy to create a cyclic value in a let-expression, and Elm 0.19 already gives a compiler error for that.

I don’t know enough about GC to know if this is correct, but is it a fair summary of your point?


The cyclic “ones” example causes two uncollectable references at the global level

Perhaps a little more expansion on the “number of memory references” would be helpful.

With either of the definitions as follows:

type Stream a = ConsStream a (() -> Stream a)
ones = ConsStream 1 <| \() -> ones

or using the old Lazy library as follows:

type LazyList a = ConsLazyList a (Lazy (LazyList a))
ones = ConsLazyList 1 <| Lazy.lazy <| \() -> ones

As to the number of memory references for the “ones” cyclic data reference as follows:

  1. Obviously, there is the “ones” reference to the instance of the data type structure either way - that’s one.
  2. There is a reference to the same data type structure by the captured value of “ones” in the thunk closure; again, this is true either way - that’s two.
  3. For the LazyList implementation, there will be one more created when the “thunk” is evaluated and the result of “ones” saved in the internal value location for a total of three; however, after it is evaluated, the thunk doesn’t need to be preserved as it will never be used again, and it can be disposed of, leaving a total of two, just as for the “Stream” case.
  4. The first time the head of the “Stream” or “LazyList” is examined, there is as yet no need to evaluate the tail, so no new references are produced - still two.
  5. The first time we evaluate the tail in the “Lazy” case, we cause the creation of the evaluated value as discussed above, so that’s the third reference (or only second if we then dispose of the thunk).
  6. So we’ve now obtained the original reference to the instance of the data type (not a new reference) - still two (or at most the same three if the thunk wasn’t disposed).
  7. Evaluating the same tail again doesn’t produce any more references (or increase the reference count), as for “Stream” it still just uses the same reference buried in the thunk closure, and for “Lazy” it just uses the same evaluated value buried in the “Lazy” structure - no increase.

While it’s true that in either case the instance of the data structure can never be disposed because there are active references, that is true whether reference counting or conventional garbage collection is used.

So what’s the problem with “Lazy”, when there isn’t any problem with garbage collection, and even if there were, there is no real difference between using it and using features we still have? This is why recursive bindings are only allowed at the global level, where they can never be disposed of anyway.

For the case of local references inside let..in blocks, what you’ve said is correct, except that it doesn’t need to involve lazy; rather, we could say “any cyclic data reference, including via lazy”. Let me change your wording as follows:

Any cyclic data reference, including via lazy, can’t negatively impact ARC, because that would require using that cyclic data reference (including via lazy) to create a cyclic value in a let-expression, and Elm 0.19 already gives a compiler error for that.

That’s not the main concern, and the rest of the discussion shows there is no harm in allowing lazy in global scope either: although the structures referenced can’t be disposed, they can’t be disposed in any case, as @evancz well knows and states; and the total number of references can’t go beyond a small finite number, which he seems to have missed (as per my latest post on memory references).

Or maybe I do mind if it’s slower…

I’ll try to show a short example of how one can get around needing binding recursion or the possibility of cyclic data structures at all, as follows (note that since deferred execution and memoizing recursion are different forms of the same thing, I’ll implement this as a Stream so it can run on Ellie with Elm version 0.19):

To recap, using binding recursion/cyclic data, we can implement “ones” as follows:

type Stream a = ConsStream a (() -> Stream a)
ones = ConsStream 1 <| \() -> ones

we can rewrite that using function recursion as:

ones = let rept () = ConsStream 1 <| \() -> rept () in rept ()

or given the following definition for the “Y-combinator” for recursive functions with deferred execution injected so it doesn’t infinitely loop:

fix f = f <| \() -> fix f

we can write it as the following:

ones = let rept strm = ConsStream 1 <| \() -> strm () in fix rept

which works equally as well and more generally.

So what’s the disadvantage of using function recursion rather than the current global recursive bindings/possibility of global cyclic data structure?

It’s primarily one of speed of execution. The recursive-binding version makes only one “thunk” call per element and no new memory allocations per element, whereas both of the recursive-function-call versions make two function calls per element and, much more seriously as to execution time, need to make several new heap allocations for every successive element evaluation (many small memory allocations are not very fast in most primarily imperative languages such as JavaScript/C/C++, etc.). The allocations are due to creating the actual new “Stream” instance as well as the “closure” for the tail, plus, in the case of the “fix/Y-combinator” version, an extra closure/function for the combinator formulation.

Compare the speed with this Ellie link, setting the clock rate to that of your CPU (the boosted, single-threading rate) and selecting which of the versions to run, commenting out the others and recompiling before each run.

You’ll see that it takes less than 20 clock cycles to do the single function call and return, whereas it takes hundreds to do the memory allocation (variable time, due to garbage collection as occasionally required), and even more time to run the Y-combinator version, as explained.

Perhaps that is why @evancz has left us with the possibility of cyclic data structures at the global level. However, he believes (in my view incorrectly, as per my post looking at the number of references) that when this is used in combination with the old Lazy library, the cyclic data reference count will continuously increase.

This example is actually somewhat an argument for use of the Lazy library: the function-recursive version would still need to allocate memory for the “Stream” elements on the first pass, but future sweeps through the sequence would not have to do further element allocations for already-evaluated elements, saving a significant amount of time by not doing memory allocations. So if one needs to sweep through a sequence twice or more, it makes a lot of sense to use Lazy in the formulation.

It’s true that if implemented with Lazy, memory use would increase with every element evaluated, but that’s what we want in the case of many sweeps across the sequence, as that is what saves the recalculation and re-allocation time. If it were desired to reduce memory use where the algorithm only needs access to a narrow range of list values at a time, then the head of the list/stream shouldn’t be bound by a fixed binding but rather be generated by a function, with the head(s) of the stream released when no longer required.

Another argument as to reduced execution time for some algorithms is that memoizing using a lazy list has O(n) asymptotic performance, instead of the O(n log n) of all of Elm’s persistent data structures.

I think we’re arguing from two angles but on the same side in defense of lazy.

As you’ve shown, you don’t need lazy to create cycles. So, removing lazy does not remove the problem of cycles.

My argument was that lazy would be completely safe with respect to cycles if the compiler were enforcing a perfectly checkable restriction on using uninitialized/partially-initialized values, because they can lead to runtime exceptions. (*)

In other words, lazy isn’t the only way to create cyclic data AND if we enforce a reasonable check that will prevent runtime exceptions, lazy isn’t even capable of producing cycles.

Mark

(*) Build the reference DAG for the values. Find the strongly connected components (e.g., Tarjan’s algorithm). Any strongly connected component containing values other than functions is unsafe. (This is actually overly restrictive. The component can safely also contain values that can be constructed just by filling in memory — e.g., applying a constructor. Of course, that’s how we then move from cyclic function references to cyclic data structures. It’s probably cleaner to just say that only mutually recursive functions are allowed.)

Cyclic cases can even be made to work with reference counting. Everything that is ever allocated is part of such a strongly connected component and these components form a DAG because of immutability. The reference counts go on the strongly connected components not the values within them. I think that this is how aggregates work in COM but I was never all that deep into COM.

Laziness would seem to mess with the DAG property because it introduces a form of mutability but since it needs to lead to the same value that we would get if we evaluated the function eagerly, that value must fit below the component containing the lazy value and hence we still have a DAG.


Okay, Mark, we are both agreed that there is no need for Lazy to be removed with respect to cycles, as it is no different than using deferred execution.

Referring to my further post showing that an infinite cycle, whether through Lazy or just through deferred execution, only causes a small finite number of references to the cyclic data structure, I don’t see why further complex checks need to be made. Elm already restricts cyclic references to occurring only from within functions, no matter whether such functions are deferred executions or the “thunk” wrapped in a Lazy; and type constructors referring back to the cyclic data are already not allowed unless inside one of the other two. So it would already seem that your “overly restrictive” checks are being made.

I think you want to add something so that the following wouldn’t run away, whether used with any kind of deferred execution, including the “thunk” inside Lazy?

test = \() -> test()

which is now only allowed in global scope but the following:

test() = test()

is allowed in either global or local scope, and both will run away to a stack overflow if called as test(); that outcome can’t be prevented, as that’s the “halting problem”.

So I guess I fail to see what your proposed compiler checks would prevent or help with, or whether they have anything to do with whether we have a Lazy or not.

Can you give me a short example of a use of Lazy where your perceived problem occurs but does not occur when just deferred execution is used (or occurs for both, although that is outside the scope of this thread), and how your proposed change fixes the problem?

Gordon

Elm 0.18, because that’s where I know I can use lazy…

This builds a cycle:

type LazyList a
    = Cons a (Lazy.Lazy (LazyList a))
    | Nil

tail : LazyList a -> Maybe (LazyList a)
tail lazyList =
    case lazyList of
        Cons _ lazyTail ->
            Just <| Lazy.force lazyTail
        Nil ->
            Nothing

makeLazyCycle : a -> LazyList a
makeLazyCycle a =
    let
        lazyTail : Lazy.Lazy (LazyList a)
        lazyTail =
            Lazy.lazy (\_ -> head)

        head =
            Cons a lazyTail
    in
        -- The force call is unnecessary but resolves the lazy thunk
        Lazy.force lazyTail

This does not suffer from infinite recursion problems because calling the lazy-wrapped function just returns the value of head. It doesn’t crash or create corrupt data structures because, at the point when we do invoke the lazy-wrapped function, head has been defined, and the construction of that function only needed to capture a reference to the variable holding head, not the value of head.

But imagine an implementation of Lazy that had no laziness and instead simply called its argument immediately. From a time standpoint this would look different but from a referentially transparent evaluation standpoint it would be the same. Except now, there is no safe order in which to define head and lazyTail. If we define lazyTail first, then when we do so we call the “lazy” wrapped function which gives us the value of head which is presently uninitialized and we end up with lazyTail containing undefined. If we define head first, then lazyTail starts out containing undefined and that’s what we put into the Cons value. Either way, we haven’t crashed yet nor have we gone into infinite recursion, but we just built invalid Elm data structures which will lead to a crash.

What I’m arguing is that in the interest of preventing runtime exceptions, Elm could/should disallow the above construction (either in a let or at module scope, where it is also unsafe for the exact same reasons) because the strongly connected component containing the non-function lazyTail in the reference graph also contains the non-function head. Both values need to be computed and there is no way to do so safely. The fact that the reference from lazyTail to head is hidden behind a function just means that that anonymous function is part of the same strongly connected component. If a strongly-connected component in the reference graph requires calls to arbitrary functions to construct any of the values, then it cannot be constructed in a manner that is guaranteed free from runtime exceptions, because potentially, to construct any of the values, all of the other values have to already have been constructed. And as a side-effect of imposing that rule, one rules out cyclic data (or at least restricts it to mutually recursive functions).

Mark

Mark,

TL;DR: jump to the Synopsis at the end for my quick conclusion to this reasoning.

With respect, your example doesn’t apply to Elm version 0.19, which doesn’t allow the cyclic reference within the let..in inside your makeLazyCycle function, so the cyclic data can never occur other than in global scope. You can test this by converting all of your Lazy.Lazy (...)'s to just deferred executions as (() -> (...)) (so it will run on Elm version 0.19), as follows:

type Stream a = ConsStream a (() -> Stream a)

makeCycle : a -> Stream a
makeCycle x =
    let
        lazyTail : (() -> Stream a)
        lazyTail =
            (\_ -> head)

        head =
            ConsStream x lazyTail
    in
        -- The forced execution is unnecessary but resolves the lazy thunk
        lazyTail()

for which compilation gives the following error message:

Cyclic Value
Line 17, Column 9
I do not allow cyclic values in `let` expressions.

17|         head =
            ^^^^
The `head` value depends on itself through the following chain of definitions:

    ┌─────┐
    │    head
    │     ↓
    │    lazyTail
    └─────┘

Hint: The root problem is often a typo in some variable name, but I recommend
reading <https://elm-lang.org/0.19.0/bad-recursion> for more detailed advice,
especially if you actually do want mutually recursive values.

However, one can create the cyclic data in global scope as follows:

lazyTail = \() -> ones

ones = ConsStream 1 <| lazyTail

test = lazyTail()

which serves the same ends, and, as you say, causes no problems, as all we’ve done is make the “tail” directly accessible.

Your statement following:

But imagine an implementation of Lazy that had no laziness and instead simply called its argument immediately.

can be implemented right away, since it evaluates immediately and therefore doesn’t require the “under-the-covers” mutation that would need JavaScript to implement. Consider a global version (so it compiles) of what you are proposing in the following example:

type Lazy a = Lazy a

lazy : (() -> a) -> Lazy a
-- building it evaluates it immediately
lazy thnk = Lazy (thnk())

force : Lazy a -> a
force (Lazy x) = x

type LazyList a = ConsLazyList a (Lazy (LazyList a))

lazyTail = lazy <| \() -> ones

ones = ConsLazyList 1 <| lazyTail

test = force lazyTail

The above compiles because it injects the delayed execution into the definition even though it is immediately executed; that is enough to fool the compiler into thinking there is no cycle, but it then throws an error if one tries to evaluate test, as follows:

Exception Thrown in Output


Uncaught Some top-level definitions from `Main` are causing infinite recursion:
 ┌─────┐
 │ ones
 │ ↓
 │ lazyTail
 └─────┘
 These errors are very tricky, so read https://elm-lang.org/0.19.0/halting-problem to learn how to fix it!

This is the “halting problem”, which can’t be detected by the compiler in all cases and can’t be fixed (Evan’s blurb on this, as per the link, has temporarily disappeared).

Synopsis

This is a long chain of reasoning and examples to say what I said before: @evancz still allows such binding recursion at the global level, but it isn’t strictly necessary, as per my earlier post showing that the effect of binding recursion can be accomplished with recursive functions and/or the “Y-combinator”; as shown there, though, there is a cost in some cases of greatly increased execution time due to increased memory allocations and extra levels of function evaluation.

Even if this one further way of causing run-time exceptions were eliminated, that doesn’t eliminate the many ways of causing infinite loops and/or stack overflows using function recursion, so what’s the point?

It also has nothing to do with the deprecation of the Lazy library, which is the subject of this thread.

