I’m currently trying to convert everything I’ve been doing through native to work via ports. For the most part, it’s possible.
But where I’m really getting stuck is the most complex part of my native code: a decoder.
Most of my native code is just there to run some methods, but there is one part of a Json value I currently decode using native: the TimeRanges object. In JavaScript its shape is roughly { length: n }, with the actual values exposed through start(index) and end(index) method calls rather than plain array entries. See [here](https://developer.mozilla.org/en-US/docs/Web/API/TimeRanges).
I have a native function that takes that value, loops through it, and returns a list of { start : Float, end : Float } records. It's part of a pipeline decoder for a much larger object, which I decode from the eventTarget in a custom "on" function.
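Roughly, the record and decoder look like this (a simplified sketch in Elm 0.19-style pipeline syntax; the TimeRange alias and field names are just how I've set things up, not anything official):

```elm
import Json.Decode as Decode exposing (Decoder)
import Json.Decode.Pipeline exposing (required)

type alias TimeRange =
    { start : Float
    , end : Float
    }

-- Once the ranges are plain { start, end } objects, decoding is
-- ordinary pipeline work; it's producing that plain value from a
-- TimeRanges instance that needs native (or JS on the other side
-- of a port), since a decoder can't call start(i)/end(i).
timeRangeDecoder : Decoder TimeRange
timeRangeDecoder =
    Decode.succeed TimeRange
        |> required "start" Decode.float
        |> required "end" Decode.float

timeRangesDecoder : Decoder (List TimeRange)
timeRangesDecoder =
    Decode.list timeRangeDecoder
```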
I'd really, really like to keep decoding the object in Elm, as a pipeline, but I have no clue how to send this Json value through a port, then wait for a subscription to plug it into the Json decoder. What's more, there are use cases where I care about frame accuracy and need a response in less than 1/60 of a second. It's really not viable for this information to be delayed by more than a single requestAnimationFrame.
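To be concrete, the round trip I'm imagining looks something like this (the port names are made up; the JS side would have to copy start(i)/end(i) into plain objects before sending):

```elm
port module Player exposing (..)

import Json.Decode as Decode

-- Ask JS for the current ranges (hypothetical port).
port requestRanges : () -> Cmd msg

-- JS replies with a plain value such as
-- [ { "start": 0, "end": 10 }, { "start": 45, "end": 60 } ].
port rangesReceived : (Decode.Value -> msg) -> Sub msg

type Msg
    = GotRanges Decode.Value

subscriptions : model -> Sub Msg
subscriptions _ =
    rangesReceived GotRanges

-- In update, GotRanges value gets
-- Decode.decodeValue timeRangesDecoder value -- but only after a
-- full Cmd/Sub round trip, which is exactly the latency problem.
```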
Any ideas? Is this just impossible? Really hitting a wall here on something that's my primary Elm project.
PS: I've done some testing comparing what I get back from ports vs. native. Ports are slightly slower, but I think the difference is small enough that for my purposes it's ignorable (someone doing forensic video analysis may feel differently).
I’d really like to be able to solve this and keep my decoding of the rest of the object within the safety of Elm. But I can’t afford to lose the three TimeRange objects.
When working on touch decoders, I had to decode a TouchList. According to the spec, you can only access a touch item through the function call TouchList.item(index). But I analyzed the TouchList object a bit and found out I could retrieve the touches by accessing each of them manually, with a decoder along the lines of the sketch below.
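The trick was roughly this (a sketch of the idea, not the exact original code; the Touch fields here are just illustrative):

```elm
import Json.Decode as Decode exposing (Decoder)

type alias Touch =
    { identifier : Int
    , clientX : Float
    , clientY : Float
    }

touchDecoder : Decoder Touch
touchDecoder =
    Decode.map3 Touch
        (Decode.field "identifier" Decode.int)
        (Decode.field "clientX" Decode.float)
        (Decode.field "clientY" Decode.float)

-- A TouchList is array-like ({ length: n, "0": ..., "1": ... }),
-- so read "length" first, then each numeric key as a normal field.
touchListDecoder : Decoder (List Touch)
touchListDecoder =
    Decode.field "length" Decode.int
        |> Decode.andThen
            (\n ->
                List.range 0 (n - 1)
                    |> List.map (\i -> Decode.field (String.fromInt i) touchDecoder)
                    |> List.foldr (Decode.map2 (::)) (Decode.succeed [])
            )
```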
I'm not sure you could do the same with TimeRanges, but the structure looks very similar.
Haven't tested yet, but I did some reading. It looks like the reason the TimeRanges object doesn't offer array syntax is that in many (most) cases the ranges actually converge over time: first you have [(0,10), (45,60)], but after the whole video is loaded you have [(0,60)].
That's not so relevant to me, as I'm just grabbing its value as a snapshot in time, but it totally makes sense in an object-oriented world, where you might get an instance of the object and watch it over time. Anyway, I found that an interesting API design tidbit, as I had been questioning the wisdom of that design choice before reading this.