Copilot and ChatGPT + IntelliJ + One Human

I continue to experiment with various AI tools, including Copilot, which I use in conjunction with IntelliJ. Here is something from about three minutes ago. I wrote the type signature you see below and waited a second or two. Copilot proposed what you see below the type signature. This is a best-case scenario. The proposed code is not always correct, but it is correct often enough that I’ve found it quite helpful.

(( Added later: see below for an experience report on ChatGPT. ))

In the case at hand, I knew exactly how I would write this function — essentially just as Copilot did. Using it saves time.

It appears that Copilot has a reasonable understanding of Elm in general and my immediate working context in particular. I’d be interested in hearing what others are doing with this and related tools, and what the experience has been.

-- module Book

replaceCell : Cell -> Book -> Book
replaceCell cell book =
    { book
        | cells =
            List.map
                (\c ->
                    if c.index == cell.index then
                        cell

                    else
                        c
                )
                book.cells
    }

Long addendum re ChatGPT

Here is a more elaborate and impressive example using ChatGPT 4. I had written code (for the new elm-notebook, a work in progress) that is supposed to (1) update the dictionary of declarations in the current notebook and (2) at the same time evaluate all expressions in the notebook. The code listed below does fine for (1) but not for (2): when I run it and then look at the notebook, only the last expression is evaluated. The problem is, very loosely stated, one of timing (I believe). Each execution of processCell_ requires first sending a request to the Elm compiler via Http, then sending the result, which is JavaScript code, out to JavaScript land for execution via ports, then sending the result (a stringified JSON object) back to Elm, again via ports.
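For readers unfamiliar with Elm's JavaScript interop, the round trip described above runs through ports. A minimal sketch of the wiring follows; the port names here are illustrative assumptions, not the actual elm-notebook API:

```elm
port module EvalPorts exposing (evalJavascript, receiveResult)

-- Hypothetical port names, for illustration only.

-- Send compiled JavaScript out to JS land for execution.
port evalJavascript : String -> Cmd msg

-- Receive the stringified JSON result back from JavaScript.
port receiveResult : (String -> msg) -> Sub msg
```

Because each cell's evaluation involves an Http round trip plus this port round trip, firing all the commands at once gives no guarantee about the order in which results come back.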

I realized that executions of processCell_ had to be properly sequenced. Below is my dialogue with ChatGPT. In brief: (1) this was a harder question than the one I posed to Copilot; (2) ChatGPT’s solution works beautifully.

My Question.

Consider the Elm code listed below. My problem is that I would like to have a pause of, say, 50 milliseconds, between executions of the function ‘folder’.

updateDeclarationsDictionary : Model -> ( Model, Cmd FrontendMsg )
updateDeclarationsDictionary model =
    let
        n =
            List.length model.currentBook.cells

        indices =
            List.range 0 n

        oldEvalState =
            model.evalState

        newEvalState =
            { oldEvalState | decls = Dict.empty }

        ( newModel, commands ) =
            List.foldl folder ( { model | evalState = newEvalState }, [] ) indices
    in
    ( newModel, Cmd.batch commands )

folder : Int -> ( Model, List (Cmd FrontendMsg) ) -> ( Model, List (Cmd FrontendMsg) )
folder k ( model, cmds ) =
    let
        ( model_, cmd ) =
            processCell_ k model
    in
    ( model_, cmd :: cmds )

ChatGPT’s Answer.

In Elm, the standard way to introduce a delay or pause is through the use of Cmd Msg with Process.sleep. Since you want a 50ms delay between each execution of the folder function, you could introduce a time-based command to help with this.

One possible way to do this is to use a new message type for your model, say ProcessCell Int, which would take care of processing a single cell. Your update function can then handle this new message type.

First, add a new message type:

type FrontendMsg
    = ...
    | ProcessCell Int

Then, modify your update function:

update : FrontendMsg -> Model -> ( Model, Cmd FrontendMsg )
update msg model =
    case msg of
        ProcessCell k ->
            let
                ( newModel, cmd ) =
                    processCell_ k model
            in
            ( newModel, cmd )

Now, in your updateDeclarationsDictionary function, rather than using List.foldl, you would queue these ProcessCell commands with delays.

updateDeclarationsDictionary : Model -> ( Model, Cmd FrontendMsg )
updateDeclarationsDictionary model =
    let
        n = List.length model.currentBook.cells
        indices = List.range 0 n

        oldEvalState = model.evalState
        newEvalState = { oldEvalState | decls = Dict.empty }

        commands = List.indexedMap createDelayedCommand indices
    in
    ( { model | evalState = newEvalState }, Cmd.batch commands )

createDelayedCommand : Int -> Int -> Cmd FrontendMsg
createDelayedCommand idx _ =
    Process.sleep (toFloat (idx * 50))
        |> Task.perform (\_ -> ProcessCell idx)

Here, each cell would be processed after idx * 50 milliseconds, which means the first cell would be processed immediately, the second one after 50ms, the third one after 100ms, and so on.


Hey James, thank you for sharing your experience!

My first impression is that tools like Copilot and the integrated text generation functions enable great applications, and I am excited to explore these further!

In the end, these completion suggestions come from an LLM, and the utility of the output depends on what that LLM receives as input: ‘garbage in, garbage out’ also applies here. The Copilot IDE extensions are valuable because they automate composing the prompt given to the LLM. This prompt is not limited to what is visible or even to the current file; one example of expanded context is recently opened tabs of other files.
There is a great article describing this in more detail.

I use GitHub Copilot routinely, since it dramatically speeds up coding. Sometimes I also use ChatGPT via the default web interface. One reason is that there I can be sure everything that should be in the context is in there. (Is there some tool to view this info for the Copilot extensions?) But of course, that means I have to spend more effort copying the relevant parts of the IDE contents into the chat interface.

Recently, I used ChatGPT 4 extensively to build a JSON parser in Elm. Since JSON is quite popular, knowledge about it is well represented in the training corpus. For example, it knows which kinds of tokens exist in JSON and the various ways of encoding characters in a JSON string. Using ChatGPT to write the Elm code multiplied the development speed here.
For someone with experience writing parsers, that tool might not have made such a difference, but this was new terrain for me, so I benefited more from the support.
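To give a sense of how small the JSON token vocabulary is, here is a custom type along the lines of what such a parser might start from. This is an illustrative sketch, not the actual definition from my parser:

```elm
-- Illustrative token type for a JSON lexer; names are my own, not from the project.
type Token
    = LeftBrace -- {
    | RightBrace -- }
    | LeftBracket -- [
    | RightBracket -- ]
    | Colon -- :
    | Comma -- ,
    | StringToken String -- "..." with escapes resolved
    | NumberToken Float -- integers and floats alike
    | TrueToken
    | FalseToken
    | NullToken
```

Because the vocabulary is this small and this well documented, it sits squarely inside what the model has seen many times in training.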

I am looking to develop an extension for the Elm Editor IDE to make these tools more accessible. In the mid-term, it will probably use retrieval-augmented generation to help it understand Elm better than the standard ChatGPT or Codex (GitHub Copilot) models.

More generally, I want to add more automation to enable outsourcing larger tasks to these tools. One example is iteration based on error messages from the compiler or suggestions from tools like elm-review. When code produced by Copilot leads to a compiler error, applying the text generation again with the error message appended to the input is often enough to make it return a corrected version of the program code. But this kind of iterating is not (yet?) in scope for GitHub Copilot.


@Viir’s post points to the very informative article also linked below. It explains how GitHub engineers have worked to make Copilot aware of your working context so that it can suggest better code completions.
This context includes other open tabs in your IDE as well as code above and below the point at which you are working in your current file.

The article makes Copilot’s performance seem less mysterious. :nerd_face:


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.