Pure Elm rich text editor

To test Japanese you don’t need a physical Japanese keyboard. On a Mac it’s enough to add “Japanese” under Keyboard > Input Sources.

I tried it: the browser detects that I’m typing Japanese and shows the list of candidate words, but the selection doesn’t work. Maybe there is some event that the editor should be listening to, from which it could get the composed text…


I doubt there is a way to achieve that with pure Elm; you really need to use the IME. We discussed this here:

I found these articles to be informative on some of the difficulties of handling input:


There is a way to support e.g. Japanese input using basic events that Elm can listen to. You might need an off-screen input element, though.
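One way to sketch that in Elm (a hypothetical module, not any of the editors discussed here): an off-screen `input` that listens for the `compositionend` event and reads the composed text from the event’s `data` field.

```elm
module OffScreenInput exposing (Msg(..), offScreenInput)

import Html exposing (Html, input)
import Html.Attributes exposing (style)
import Html.Events exposing (on)
import Json.Decode as Decode


{-| Fired when the IME commits its text, e.g. after the user picks a
candidate word. The String is the composed text from `event.data`.
-}
type Msg
    = CompositionEnd String


{-| An input positioned off-screen so it can hold focus and receive IME
composition without being visible.
-}
offScreenInput : Html Msg
offScreenInput =
    input
        [ style "position" "absolute"
        , style "left" "-9999px"
        , on "compositionend"
            (Decode.map CompositionEnd (Decode.field "data" Decode.string))
        ]
        []
```

The editor’s `update` would then insert the `CompositionEnd` string at the cursor position.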


I noticed that pasting emojis (go to a page like https://getemoji.com/ and copy-paste any emoji into the text) tends to break things. It works the first time, but on the second paste two emojis appear, and if you select, cut, and paste those two emojis, broken characters start to appear.

This has been fixed. Thanks for noticing!

Fun fact: an emoji has length 2 if you use String.length, but length 1 if you first convert it into a list of chars:
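For illustration (a sketch of what this looks like, not the original snippet):

```elm
-- String.length counts UTF-16 code units, so an emoji outside the Basic
-- Multilingual Plane is stored as a surrogate pair and counts as 2.
-- String.toList decodes the pair into a single Char (one code point).

String.length "🙈"
-- 2

List.length (String.toList "🙈")
-- 1
```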

Here’s a nice blog post that dives deeper into that. Emojis can be longer than 2.


@Philipp_Krueger Thank you for the tip on CompositionEvent; implemented. Now you can add Japanese (etc.) characters using the method described by @lucamug.

@rupert: The real limitation of the pure Elm approach seems to be performance. You can paste a whole novel into mweiss’s ContentEditable-based editor and it works just fine, whereas this one breaks down around 2000 words. Perhaps there is some performance hack that I missed, but I’m already using keyed nodes and lazy view.


I think you must have missed the posts I made around this at the end of last year? And I have preserved the work in a series of GitHub repos too.

This is for text buffers, where the text is always the same size. For rich text with variable-sized fonts it’s going to get more complicated, but I think the techniques are still relevant.

This is looking at custom scroll bars, to reduce the amount of Html rendered when the document gets large, by only rendering the visible region:

And this is looking at making the implementation of the text buffer more efficient:

I think you must have missed the posts I made around this at the end of last year?

I did miss them, thanks for the links! They look very helpful.

@rupert On reflection, I’m not sure that zip lists or custom scrollbars will help here. My RTE starts slowing down at 1500+ words even if you do nothing but move the cursor with the arrow keys, which does not touch the underlying list of characters. I think it gets slow because it keeps asking the browser for viewportOf data to calculate where the cursor should be. Your zip list demo uses a constant line height and font size, so this problem does not arise. If you allow arbitrary styling on any character, plus embedded images, there’s no way (that I know of) to find out the position of a given element without asking the browser.

I’m not sure why the viewport of the 2nd character takes very little time to compute when there are 100 characters but a lot of time when there are 50K, but that seems to be the main issue.

Yes, variable line heights certainly make the problem harder.

Some ways to work with that might be:

  • Calculate line heights in advance. Maybe you know that paragraph text is 14 px high but an H1 is 24 px; by scanning the contents you might be able to figure out line heights ahead of time, and store this against each line.
  • Cache line heights that are reported by viewportOf, so you don’t have to keep asking.
  • Approximate line heights somehow, and render a big enough margin of error around the current viewport.
  • Given cached line heights for each line, arrange lines in some kind of tree data structure that adds up line heights as you go up the tree. This would be to make computing the height of blocks of lines quick, or given a y coordinate, efficiently estimate or search for the line it corresponds to.
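The last idea could be sketched like this (hypothetical names; this is not code from any of the repos above). Each node caches the line count and total pixel height of its subtree, so mapping a y coordinate to a line index becomes a logarithmic descent instead of a linear scan over cached heights.

```elm
type HeightTree
    = Leaf Float -- height of a single line, in px
    | Node { lines : Int, height : Float, left : HeightTree, right : HeightTree }


height : HeightTree -> Float
height tree =
    case tree of
        Leaf h ->
            h

        Node n ->
            n.height


lineCount : HeightTree -> Int
lineCount tree =
    case tree of
        Leaf _ ->
            1

        Node n ->
            n.lines


{-| 0-based index of the line containing vertical offset y:
descend left if y falls within the left subtree's total height,
otherwise subtract it and descend right.
-}
lineAtY : Float -> HeightTree -> Int
lineAtY y tree =
    case tree of
        Leaf _ ->
            0

        Node n ->
            if y < height n.left then
                lineAtY y n.left

            else
                lineCount n.left + lineAtY (y - height n.left) n.right
```

The same structure answers the reverse question (the y offset of line i) by summing the heights of subtrees skipped on the way down.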

Is this problem harder in Elm than outside Elm? I guess viewportOf being a Task, rather than a synchronous function call, adds overhead.

Have there been any attempts to use a WebGL canvas with an invisible textarea on top of it to capture input? Then you don’t have all the DOM overhead, though it comes with the disadvantage of having to handle font rendering and layout yourself. I think this is an approach used in Rust for building web UIs.

Figma is a good reference for such an architecture. But it’s really freaking hard to do right.

@rupert Great tips, thanks! I’ll think about it, but this task may be too advanced for me. To guess which line you’re on, you need to break the text up into lines. Since any character can be bold, italic, etc., you need to know how many pixels such attributes add to the element width; otherwise you won’t know where a line ends. (You also need to know the width of unstyled characters, which is not trivial for non-monospaced fonts.) Also, since one can drop external CSS classes on any paragraph (the code-highlighting style is external in the demo), you would need access to the external CSS (for example, to check whether there’s a margin-top), which, as far as I know, is impossible.

I could get the viewports of all elements on init, but most of that data would have to be recomputed whenever the user edits the first half of the text. Asking for more than a few hundred element viewports adds a big lag, so this method won’t solve lag issues in large documents.
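For reference, the caching idea might look roughly like this (hypothetical names and id scheme, using Browser.Dom): only hit the DOM on a cache miss, and drop entries from the edited line onward so they are re-measured lazily.

```elm
module ViewportCache exposing (Model, Msg(..), invalidateFrom, requestViewport)

import Browser.Dom as Dom
import Dict exposing (Dict)
import Task


type alias Model =
    -- cached measurements, keyed by element id such as "line-12"
    { viewports : Dict String Dom.Viewport }


type Msg
    = GotViewport String (Result Dom.Error Dom.Viewport)


{-| Ask the browser for an element's viewport only if it is not cached. -}
requestViewport : String -> Model -> Cmd Msg
requestViewport id model =
    if Dict.member id model.viewports then
        Cmd.none

    else
        Task.attempt (GotViewport id) (Dom.getViewportOf id)


{-| After an edit at `lineIndex`, forget every cached line at or below it. -}
invalidateFrom : Int -> Model -> Model
invalidateFrom lineIndex model =
    { model
        | viewports =
            Dict.filter (\id _ -> lineIdToIndex id < lineIndex) model.viewports
    }


lineIdToIndex : String -> Int
lineIdToIndex id =
    -- assumes ids of the form "line-<n>"
    String.dropLeft 5 id |> String.toInt |> Maybe.withDefault 0
```

As the post notes, this helps repeated cursor movement but not a cold start on a large document, since the first pass still has to measure everything.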

@MartinS: I’m not sure how you could calculate the width of SVG text characters without adding DOM overhead. You could perhaps use getBBox, but that seems similar to using getViewportOf.

If the editor uses a WebGL canvas, then you would get the width of characters by decoding that data from a font file, so there wouldn’t be any DOM overhead. It would just take a lot of effort to write the code to handle this.

Right, sorry. Confused webgl with Svg.

Regarding fonts in WebGL, some work has been done in this area:


I’ll publish the existing version as a package in about a week. I’m just adding this message to keep the topic alive until then.

Published as a package.


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.