# Experience with AI: Claude Engineer

I’ve been using Claude Engineer in the Cursor IDE to write Python programs that draw pictures of random Voronoi decompositions, e.g., the one you see below. (I used to be pretty good at Python, but that was years ago and it is no longer at my fingertips.)

The experience has been both fun and productive (little wasted time, good advice and code). Claude is a pretty good engineer. The key is to engage in a dialogue. My initial request was for Claude to write a short demo program to compute and display random Voronoi decompositions. I went back and forth with Claude, asking it to make various changes, e.g., add color, change the algorithm for choosing the colors of the Voronoi cells, and so on. In the image above, cells with the smallest area are colored red and those with the largest area are colored green. Claude made an unprompted but very helpful suggestion, offering four or five ways of coloring the cells: randomly, with color keyed to area or perimeter, etc. I eventually arrived at programs whose output I found interesting and pleasing.
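The area-keyed coloring described above can be sketched in plain Python. This is not the program Claude produced, just a minimal illustration of the idea: compute each cell’s polygon area with the shoelace formula, then interpolate from red (smallest) to green (largest). The toy square “cells” below stand in for the polygons you would get from, say, `scipy.spatial.Voronoi`.

```python
def polygon_area(vertices):
    """Shoelace formula for the area of a simple polygon."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def area_to_rgb(area, min_area, max_area):
    """Interpolate from red (smallest cell) to green (largest cell)."""
    if max_area == min_area:
        t = 0.0
    else:
        t = (area - min_area) / (max_area - min_area)
    return (1.0 - t, t, 0.0)  # (r, g, b), each component in [0, 1]

# Three toy "cells": a small, a medium, and a large square.
cells = [
    [(0, 0), (1, 0), (1, 1), (0, 1)],
    [(0, 0), (2, 0), (2, 2), (0, 2)],
    [(0, 0), (3, 0), (3, 3), (0, 3)],
]
areas = [polygon_area(c) for c in cells]
colors = [area_to_rgb(a, min(areas), max(areas)) for a in areas]
# The smallest cell comes out pure red, the largest pure green.
```

In a real program you would feed each `colors[i]` to your plotting library (e.g. matplotlib’s `fill`) when drawing cell `i`.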

I’ve also used Claude to understand (and write Python programs for) the Nokken-Poole algorithm used in data analysis of voting patterns in Congress. I could never have gotten as far or as fast as I did without such a friendly and knowledgeable pair programmer. The key, once again, is to have a dialogue with Claude.

So far I’ve used Claude to explore ideas that I don’t know much about, in a language I am not proficient in. In addition to providing code, “his” explanations of code and algorithms are pretty good. I haven’t used him/it on Elm yet, but am interested in seeing what the results are.

I’d like to hear what others’ experience with Claude is.


Addendum. I just tried Claude Engineer on the source-to-HTML compiler for scripta.io. In LaTeX, math blocks are constructs like `$$ .. some math .. $$`. A more modern form of this is `\[ .. some math .. \]`. I wanted to add this construct but hadn’t looked at the code in many months. I told Claude what I wanted to do, and he/it made some good suggestions that helped me zero in on the parts of the code that needed to be changed. There was one suggested change in which Claude hallucinated a non-existent function. I pointed this out, and he supplied a correct definition of it. There was one other error that I had to correct by hand.

Applying the suggested changes with my by-hand modifications did the job.
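For readers unfamiliar with the two delimiter forms mentioned above, here is a toy sketch (in Python, and emphatically not the actual scripta.io parser, which is written in Elm) of recognizing both `$$ .. $$` and `\[ .. \]` display-math blocks in a source string:

```python
import re

# Match either classic TeX display math ($$ ... $$) or the more
# modern LaTeX form (\[ ... \]); DOTALL lets math span lines.
MATH_BLOCK = re.compile(r"\$\$(.+?)\$\$|\\\[(.+?)\\\]", re.DOTALL)

def extract_math_blocks(source):
    """Return the contents of all display-math blocks, in order."""
    blocks = []
    for m in MATH_BLOCK.finditer(source):
        # Exactly one of the two capture groups matches per hit.
        blocks.append((m.group(1) or m.group(2)).strip())
    return blocks

sample = r"Euler: $$ e^{i\pi} + 1 = 0 $$ and also \[ a^2 + b^2 = c^2 \]"
print(extract_math_blocks(sample))
# -> ['e^{i\\pi} + 1 = 0', 'a^2 + b^2 = c^2']
```

A real compiler would do this inside a proper tokenizer rather than with a regex, but the sketch shows why supporting the second form is mostly a matter of teaching the parser one more delimiter pair.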

The experience was not as good as with the Python code, but Claude was nonetheless helpful. This use case was different: unlike Python, Elm is a language in which I am somewhat proficient, but my understanding of my own codebase had grown cold. Aargh!!

—————

I also asked Claude to summarize the code in the primitive block parser for LaTeX. It did a decent job. This use of Claude could be helpful when you start to work on a new code base or when you restart work on a once-familiar codebase.


This is not directly related to Claude, but here is some of my experience with LLMs for programming. I’ve found that code suggestions for Python and Rust are generally pretty good: even though they’re often non-functional, they really help you discover new libraries and identify potential issues in your code. For Elm, I’ve had quite mixed results.

I’d prefer to run the LLM locally, and LM Studio is fantastic for getting that up and running. The only problem is that smaller LLMs like Llama and Gemma are terrible at generating Elm code. I usually use the LLM to generate boilerplate such as encoders, decoders, and helper functions: really simple, trivial stuff. GPT-4o can do this well, but even the bigger local models with ~9–27 billion parameters are terrible for Elm, producing weird hallucinations, while their output is okay for Python and Rust. I suppose it’s a training-data problem.

My favourite use of LLMs is writing SQL queries; they are so good at this! I put all my database table definitions in the prompt and then just ask for the query I need in natural language. With a bit of testing, every query it has generated so far has been functional.
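The schema-in-the-prompt workflow described above can be sketched as follows. The candidate query below is hypothetical sample output of the kind an LLM might return; the point is the smoke-testing step, run here against an in-memory SQLite database before trusting the query:

```python
import sqlite3

# The table definitions you would paste into the prompt.
schema = """
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
"""

# Hypothetical LLM answer to: "total order value per user, largest first".
candidate_query = """
SELECT u.name, SUM(o.total) AS spent
FROM users u JOIN orders o ON o.user_id = u.id
GROUP BY u.id ORDER BY spent DESC;
"""

def smoke_test(query):
    """Run the candidate query against a tiny in-memory fixture."""
    con = sqlite3.connect(":memory:")
    con.executescript(schema)
    con.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "bob")])
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5)])
    return con.execute(query).fetchall()

rows = smoke_test(candidate_query)
print(rows)  # [('ada', 15.0), ('bob', 7.5)]
```

A few rows of known data like this are usually enough to catch a query that groups or joins the wrong way.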


I use LLMs on a daily basis and here are my experiences:

GPT-4o is very bad compared to GPT-4, and I recently tried Claude.

Claude has given me code that compiles right away, whereas GPT-4 is always wrong in some parts.

Btw @jxxcarlson, I think LLMs paired with your elm-codeinstaller are awesome.

I gave elm-galery and the google-auth installer as context, and it produced this code on the first shot:

```elm
config : List Rule
config =
    List.concat
        [ frontendGalleryRules
        ]


frontendGalleryRules : List Rule
frontendGalleryRules =
    [ -- Imports
      [ Install.Import.config "Frontend"
            [ module_ "Gallery"
            , module_ "Html"
            ]
            |> Install.Import.makeRule
      ]
    , [ -- Define someSlides as an empty list of slides
        Install.Function.InsertFunction.init "Frontend"
            "someSlides"
            """someSlides : List (String, Html msg)
someSlides =
    []"""
      ]
    , [ -- init :
        Install.Initializer.makeRule "Frontend"
            "init"
            [ { field = "gallery", value = "Gallery.init (List.length someSlides)" } ]
      ]
    , [ -- update :
        Install.ClauseInCase.init "Frontend"
            "update"
            "GalleryMsg msg"
            "{ model | gallery = Gallery.update msg model.gallery }"
      ]
    , [ -- New gallery view function :
        Install.Function.InsertFunction.init "Frontend"
            "galleryView"
            """galleryView : Model -> Html Msg
galleryView model =
    Html.map GalleryMsg <| Gallery.view config model.gallery [ Gallery.Arrows ] someSlides"""
      ]
    ]
        |> List.map Install.Function.InsertFunction.makeRule
```

This was GPT-4, though.

It still needs some rework, but I can’t wait until we can pass an entire big Elm package and have LLMs do the wiring for us, or at least write an elm-codeinstaller script like that.


Really good news, @Charlon! I haven’t tried pairing LLMs with codeinstaller, but will give it a try.


Wouldn’t there be a way to encode Elm-style types as SQL queries, in order to fine-tune the beast?


It does get better if you have a lot of Elm code in the prompt, but I find it’s a fine balance, as large prompts can make it hallucinate more, or hyper-fixate on the examples you’ve given.

To tell the truth, for 15 years I’ve lazily nurtured a project (and for 10 of those, the idea of trying it in Elm) to build on top of an interactive gimmick I translated into ~11 kB of JS/XHTML/SVG from a first version in GeoGebra, whose applet was too heavy for the netbook of the lady it was destined for. With retirement comes the incentive to revive it.

However, LLMs, and Claude Sonnet 3.5 in particular, have inserted themselves into the (recent) meantime with a promise of helping my ailing coding performance, though ideally using JS or perhaps familiar Python; a promise that competes with what I’ve long dreamt Elm would provide.

My question at this point is whether Sonnet could be tuned or prompted to make the best use of Elm’s famously helpful compiler errors, following their hints so that it delivers to the user only Elm code that is verified to compile.
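One way to approximate this without any tuning is a compile-verify loop around the model: feed the compiler’s error output back as part of the next prompt until the code compiles. Here is a minimal sketch in Python, where `ask_llm` is a hypothetical stand-in for whatever model API you use, and `compile_elm` assumes the `elm` binary is on your PATH:

```python
import subprocess

def compile_elm(path):
    """Run `elm make` on a file; return (ok, error_text)."""
    result = subprocess.run(
        ["elm", "make", path, "--output=/dev/null"],
        capture_output=True, text=True)
    return result.returncode == 0, result.stderr

def refine_until_compiles(ask_llm, compile_fn, prompt, max_rounds=5):
    """Ask for code, then re-prompt with compiler errors until it compiles.

    `ask_llm` and `compile_fn` are injected so the loop works with any
    model API and any compiler (or a fake one, for testing).
    """
    code = ask_llm(prompt)
    for _ in range(max_rounds):
        ok, errors = compile_fn(code)
        if ok:
            return code
        code = ask_llm(f"{prompt}\n\nYour previous attempt failed to "
                       f"compile with these errors:\n{errors}\n"
                       f"Please fix the code.")
    return None  # give up after max_rounds
```

Because Elm’s error messages are unusually explicit, this kind of loop tends to converge quickly when it converges at all; the sketch deliberately says nothing about how well a given model follows the hints.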


I would love that (tuning for Elm).


I just discovered Aider.chat. It can be used with a variety of models and runs in the terminal, so it is independent of the editor, which I like. Interestingly, they fixed an Elm bug just yesterday. Some experiences:

