Find memory leak

I am creating a personal finance application with an Elm 0.19.1 frontend and an Elixir 1.10 / Phoenix 1.4.16 backend. Lately I have discovered that my memory (16 GB in total) keeps filling up. It doesn’t happen suddenly, but increases gradually over hours. Stopping the backend does nothing. If I close the tab with the application (normally in Firefox), it sometimes releases up to 5-6 GB.

When I look at about:performance, the application does not show up at the top of the list, and in the developer tools I can’t find any evidence of loops or requests being sent repeatedly, based on the console and network tabs. If I take a memory snapshot (in Firefox), it normally ends up around 20-40 MB.

Running Manjaro, if that might be relevant.

I have also tried to inspect about:cache?storage=&context=, but I am not sure what I should look for, so I haven’t found anything useful there either.

I know this is an open-ended question and that more information might be needed to help me, but I don’t know where to look, so any tips or pointers would be greatly appreciated!

Are you sure yet whether it’s the frontend or the backend that is leaking?

Do you have a way of testing the two in isolation from each other, in order to be certain which it is? For example, can you run the backend standalone and hit its endpoints with a test tool such as Postman? Can you do a quick mock-up of the API endpoints, just enough to run the UI against something other than the real backend? Node Express might help you put together a quick mock there.

That is how I would approach this - start by assuming the leak could be anywhere and progressively isolate and test different parts of the system as you narrow it down.
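
As a minimal sketch of that isolation idea on the Elm side - an alternative to a Node Express mock - you can swap the real HTTP command for one that immediately succeeds with canned data, so the UI can run for hours with no backend at all. The `Account` type, `GotAccounts` message and `fetchAccountsStub` function below are made up for illustration:

```elm
module Stub exposing (fetchAccountsStub)

import Task


-- Hypothetical record standing in for whatever the real API returns.
type alias Account =
    { name : String
    , balance : Int
    }


-- Hypothetical Msg constructor; in the real app this would be whatever
-- Msg the Http response normally produces.
type Msg
    = GotAccounts (List Account)


-- Instead of Http.get against the Phoenix backend, succeed immediately
-- with canned data. If memory still grows, the backend is off the hook.
fetchAccountsStub : Cmd Msg
fetchAccountsStub =
    [ Account "Checking" 1200, Account "Savings" 5000 ]
        |> Task.succeed
        |> Task.perform GotAccounts
```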

Elm cannot form cyclic structures in memory, so I think it is unlikely that you are creating stuff that cannot be garbage collected (and even cyclic structures should be ok) - but perhaps you have found a bug in the runtime… However, you can still have large Lists, Arrays, Sets or Dicts that hold onto references you no longer need. So I would check over the code and think about whether you are holding onto stuff you don’t really need any more - and drop it from the collection if so.
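
For example (a minimal sketch with made-up type and field names): if the model caches fetched data per month, dropping the entry for a month that is no longer on screen removes the last reference to it, so the garbage collector can reclaim it.

```elm
module Cache exposing (closeMonth)

import Dict exposing (Dict)


-- Hypothetical entry type; stands in for whatever the app actually stores.
type alias Transaction =
    { description : String
    , amount : Int
    }


type alias Model =
    { transactionCache : Dict String (List Transaction) }


-- Removing the key drops the last reference to that month's list,
-- so the GC can collect it on its next pass.
closeMonth : String -> Model -> Model
closeMonth month model =
    { model | transactionCache = Dict.remove month model.transactionCache }
```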

Also, are you running the debugger? It keeps the entire message history in memory, which prevents older versions of the Model from being garbage collected.

I saw similar behaviour in Firefox, but not in Chromium. Each time a test loads the same 10 MB JSON list of 100,000 entries from CouchDB via PouchDB in a web worker and over ports into Elm, the memory shown in about:performance increases by about 25 MB for my Elm code, but also by about 10 MB for my web worker, which is pure JavaScript. So it does not seem to be related to Elm.
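
For reference, the Elm side of that pipeline is only a port subscription; a minimal sketch with a made-up port name, where the page script would call `app.ports.receiveEntries.send(rows)` whenever the web worker has loaded the list from PouchDB:

```elm
port module WorkerPorts exposing (receiveEntries)

import Json.Decode as Decode


-- Hypothetical port: the JS glue sends the decoded rows through here
-- each time the web worker finishes loading them.
port receiveEntries : (Decode.Value -> msg) -> Sub msg
```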

I found that opening about:memory and pressing “Minimize memory usage” brings the value shown in about:performance back down to a smaller number. Pressing only “GC” (garbage collection) or “CC” (cycle collection) does not help.

Most Linux/UNIX processes do not give memory back to the OS during their runtime, but Firefox seems to do so, according to the “RES” column in top. However, since memory is managed in pages of 4 KiB, even a small leak can pin a whole page. That is why some garbage collectors copy (compact) memory, but I don’t know how this works in the JS engine in Firefox.

Since it happens so gradually, over hours, I am unsure how I can test them separately in an efficient way. But I will keep it in mind as an option if I find no other solution. I had just assumed it was Elm, since the memory is freed when I close the tab in the browser.

I have both debug and verbose set to true in webpack, so I have tried removing them to see if that has any effect. I will also go over the code and see if there are some Dicts, Arrays etc. that I might not need.

I tried triggering both GC and CC in addition to “Minimize memory usage”. It released maybe around 1 GB one time, but not the full amount that I assume is “captured” somewhere.

Are you running the optimized Elm version or the debug version? Just an idea of something you could try - I’m not aware of that actually causing problems.

It sounds like it may be the debug parameter: in debug mode Elm has to store every state the model has been in, so memory usage will grow with each Msg.

In addition, to find what is leaking in Chrome, check out its memory devtools: you can take heap snapshots, compare what was allocated between runs, and see whether something is sticking around in memory longer than it should.

I tested with an optimized version. My assumption is that only “Minimize memory usage” compacts the heap, not a normal GC, and that the usage displayed in about:performance is just the heap size. Compacting requires copying the data, which causes CPU cache thrashing and hurts performance or power consumption.

Let’s assume the model is copied by copying only the references. My list may need about 10 references per entry in an array (an array of length 100,000, containing a record holding a string, an array with one string and an array with three strings). Assuming 8 bytes per reference, that is 100,000 × 10 × 8 bytes ≈ 8 MB already.

How have you measured the memory consumption? In Firefox or at the OS level?

This is an older, but still valid, article about traditional memory management by Ulrich Drepper, who works on glibc for Linux: What every programmer should know about memory

There are a lot of articles about GC in Java, but I don’t have a good source on how it works in the JS VMs.

Debugging was a good hint! I found the reason for the big leak in my app: I was printing the big JS object created from the mentioned JSON with console.log() each time I received it through a port, so the console kept a snapshot of the object alive. Have you checked your JS console?

Big thanks for bringing this up!

Yes, a slow memory leak can be a hard thing to pin down for that reason. Can you expose the problem by forcing the UI to be more hyperactive? For example, code a short Time.every subscription to perform some action many times very quickly. That could be a call to fetch data from the backend, or just some change to the Model that re-renders the UI slightly differently each time, and so on.
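
A minimal sketch of that idea, with a made-up Model and Tick message; a 50 ms interval drives the update/view cycle hard enough that a slow leak should show up in minutes rather than hours:

```elm
module Stress exposing (main)

import Browser
import Html exposing (Html, text)
import Time


type alias Model =
    { ticks : Int }


type Msg
    = Tick Time.Posix


main : Program () Model Msg
main =
    Browser.element
        { init = \_ -> ( { ticks = 0 }, Cmd.none )
        , update = update
        , view = view
        , subscriptions = subscriptions
        }


-- Fire a Msg every 50 milliseconds to keep the runtime busy.
subscriptions : Model -> Sub Msg
subscriptions _ =
    Time.every 50 Tick


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Tick _ ->
            -- Change the model slightly so the view re-renders each time;
            -- this is also the place to fire a real fetch command instead.
            ( { model | ticks = model.ticks + 1 }, Cmd.none )


view : Model -> Html Msg
view model =
    text (String.fromInt model.ticks)
```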

I am also wondering: Firefox allocates a lot of memory but does not release it easily - is this actually a memory leak, or is it just how Firefox likes to work? That is, if there is plenty of system memory available, perhaps it avoids compacting the heap and simply hangs onto memory for as long as plenty is available. Can you force its process, or the OS, to have less memory available, and see if it behaves more conservatively under those restrictions? What I am asking is: is this really a leak, or just typical Firefox behaviour that is to be expected?

Hope I don’t derail this memory leak conversation… but whoops is there a link to this?

I’ve only read about --optimize shortening record field names, unboxing values, and producing smaller compiled .js files

I don’t know of a link to it - but it should be obvious. The debugger shows all of the messages and models going back to the start of running an application, and you can even time-travel back to earlier states. Where are those states stored if not in memory?

If the debugger is not running, previous models can be garbage collected. Items inside the model that have not changed before/after the call to update will not be garbage collected, as a reference to them will be copied forward into the new model. But any parts of the old model that are no longer referenced will be collected.
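
A minimal sketch of that point, with made-up field names: the record update below allocates a new Model, but the potentially large `history` field is carried forward by reference, so nothing big is copied and only the parts of the old model that are truly unreferenced become garbage.

```elm
module Sharing exposing (update)


type alias Model =
    { name : String
    , history : List String -- potentially large, but shared across updates
    }


type Msg
    = SetName String


update : Msg -> Model -> Model
update msg model =
    case msg of
        SetName newName ->
            -- Only a new record shell is allocated; `history` in the new
            -- model is the same list as in the old one, so it is neither
            -- duplicated nor collected while still referenced.
            { model | name = newName }
```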

oops i don’t use the debugger so didn’t notice. thanks!!

Back to the memory leak: in the jQuery days my memory leaks were often due to DOM event handlers not being cleaned up properly when DOM elements were removed. You could try looking there (e.g. temporarily removing on* handlers), though it’s not likely the cause here, since these are managed by Elm.

I have tried turning off debug mode, but it doesn’t seem to have had any impact. I have also tried looking at the memory devtools in Firefox, but I struggle with analyzing the snapshots.

I have both used htop to measure at the OS level and looked at about:performance and about:memory.

After removing the console.logs from the JS, my Elm app always uses between 84 and 85 MB according to about:performance, after forcing a collection by pressing “GC”, “CC” and “Minimize memory usage” on about:memory.
The consumption reported by the OS is about 300 MB (RES in top) per Firefox process, including about 150 MB of shared memory (SHR).
Firefox uses Servo components for HTML rendering, which are written in Rust. I could not find any information on which memory allocator is used - still jemalloc (an allocator from BSD that allocates only power-of-two sizes) or the glibc allocator. As these allocators do no garbage collection (like all low-level allocators), they cannot return memory to the system as long as a single bit of a 4 KiB page is in use, and some never return it at all, since they manage a single heap per process, like glibc.

This is getting off-topic, but a common misunderstanding concerns the following output:

~$ free
              total        used        free      shared  buff/cache   available
Mem:       16189892     9204488      275576      839512     6709828     5810284
Swap:       5242876     1286400     3956476

275576 means 275 MB are unused, but in fact 5810284 (5.8 GB) are available for new allocations by programs, because the cached pages (disk cache) will be thrown away when required. After some running time there is no memory “free” on Linux; all of it is used as cache.
