Has anyone integrated PAI into their Elm environment with Claude?
A visual add-on:
It looks quite compelling.
If you are using an AI are you running it in a secure sandbox or have you exposed your host system to it?
Devpod is a Devcontainers manager that works with Podman and Docker containers. The original project is now unmaintained but a very promising candidate for a community edition is seeing significant work:
Curious if and how anyone might be using these two systems — whether together or independently of each other.
What I do is run Claude in a Docker container, without root privileges. I doubt that is totally secure, but in practice I think it is good enough. I always run Claude with --dangerously-skip-permissions; it's not really possible to have longer-running tasks without it, otherwise it constantly stops and asks for permission.
I check out my git tree like this:
~/project/worktrees/work/
project is the git root, work is a git worktree.
Then I map that work folder to be /work in the docker container. That way Claude cannot even use git. I decide when to branch and commit and push and so on.
Now I have a folder shared between my usual system, and the containerized git worktree. Call it a file share if you like!
No particular need for a fancy UI, just a simple container. The Dockerfile is a bit of a PITA to write, but I get Claude to write mine for me, so even that is very easy to set up.
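As a rough sketch of the setup described above (the branch name and the claude-sandbox image name are my own illustrative placeholders, not from the post):

```shell
# Create the worktree under the main repo.
cd ~/project
git worktree add worktrees/work -b ai-work

# Run the container as your own UID, with only the worktree mounted at /work.
# A worktree's .git is just a pointer file into the main repo's .git directory,
# which is NOT mounted, so git commands fail inside the container -- matching
# the "Claude cannot even use git" behaviour described above.
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -v "$HOME/project/worktrees/work:/work" \
  -w /work \
  claude-sandbox   # hypothetical image built from your Claude Dockerfile
```

Branching, committing, and pushing then happen from the host side, where the full repo is visible.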
This would be where Podman comes in, as it provides significantly better rootless sandboxing. The DevPod CLI might be an option if you don’t want a GUI. That aside: you don’t run your containers in a VM?
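For comparison, a rootless-Podman version of the same idea might look like this (assuming the worktree layout from the previous post and the same hypothetical claude-sandbox image):

```shell
# Rootless Podman: no daemon, no root on the host.
# --userns=keep-id maps your host UID/GID into the container's user namespace,
# and the :Z volume suffix relabels the mount on SELinux systems.
podman run --rm -it \
  --userns=keep-id \
  -v "$HOME/project/worktrees/work:/work:Z" \
  -w /work \
  claude-sandbox
```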
FYI, Claude has recently added an alternative to --dangerously-skip-permissions, which essentially asks itself whether commands are safe before running them. (Normally Anthropic is really good at naming things, but I find “auto mode” a really confusing name for this feature.)
Are people using anything like PAI which enables the developer/user to setup a persistent, fully personalised project that maintains a history beyond what Claude might remember?
To also set parameters for defining the environment, create Skills and other instruments the AI can build upon for the user’s requirements?
Seems like a great way to template an environment and do things like back it up. Thoughts?
Interesting discussion. The --dangerously-skip-permissions approach (and the new “auto mode” dta mentioned) reminds me of something I’ve seen in the modded APK world, specifically with HappyMod APK.
On HappyMod, many modded apps come with permissions stripped or bypassed — similar to running Claude without root checks. It’s convenient for the user (no constant popups), but you’re implicitly trusting that the modder didn’t inject anything malicious into the bypassed layers.
The same principle applies here:
| Approach | Convenience | Risk |
| --- | --- | --- |
| Full permissions (default) | Low (constant prompts) | Low |
| --dangerously-skip-permissions | High | High (no safety net) |
| Auto mode (Claude decides) | Medium | Medium (trust in AI) |
| VM + container (rupert’s plan) | Low (setup effort) | Lowest |
Nickwalt’s point about Podman for rootless sandboxing is probably the sweet spot — similar to how security-conscious users run modded APKs inside an isolated user profile or secondary device.
For persistent project history (PAI-style), I’d be cautious about letting any AI — even containerized — maintain long-term state without manual checkpoints. Backups are great until the backup itself gets poisoned.
Has anyone here actually tested PAI or DevPod CE with a workflow that doesn’t require bypassing permissions entirely? Would love to hear real-world examples.
My strategy for using AI is evolving, but essentially it is to build an environment inside a virtual machine. The same could be done on bare metal. The idea is to run an OS inside the VM dedicated to hosting the environment. An example of such an OS is IncusOS, a stripped-down Debian distro that functions as a hypervisor, similar to Proxmox or VMware ESXi.
I’m thinking of using incus and Incus2ssh instead of Podman and Devpod because Incus with Incus2ssh appears to be more integrated, secure, simpler and possibly more robust.
Inside this VM or bare-metal server would run a scaffolding system like PAI, and an AI like Claude. A mix of VMs and LXC containers would run the various components: VMs for entire environments running containers inside them, and containers for dedicated services that support the ecosystem, such as a Forgejo Git server.
Because PAI and Claude own the host VM they can be used to build the environments. In this way the scaffolding can be built as the environment is created and understanding is developed. MCP servers can be added as services are built.
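A minimal sketch of the VM/container split described above, assuming Incus is installed and initialised (the instance names are illustrative, not prescribed):

```shell
# Full VM to host the AI environment (PAI + Claude live inside this).
incus launch images:debian/12 ai-host --vm

# Lightweight system container for a supporting service, e.g. a Git server.
incus launch images:debian/12 forgejo

# Shell into the VM to start building out the scaffolding.
incus exec ai-host -- bash
```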
The creator of PAI has a security background so PAI prioritises safety first. I recommend watching his deep dive.
Anthropic is developing new capabilities into Claude which extend the Model but scaffolding remains key to focusing and optimising the Model. Very interesting times.
I often use ChatGPT to ask things about configuring my Debian boxes or other Linux distros. It knows a lot about it and almost always gives the right answers. I think letting Claude do it directly would be even better, although obviously a bit risky, so definitely do it in a VM.
Something I have done for years is to just run git init under /etc on every new box I create. So I can version manage the machines config. I don’t ever bother to push that to a repo, but you can use git for localized version control quite nicely.
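The /etc workflow above amounts to something like this (commit messages are illustrative; run as root on the box):

```shell
# Turn /etc into a local-only git repo -- no remote, no push.
cd /etc
git init
git config user.name root
git config user.email root@localhost
git add -A
git commit -m "baseline config for this box"

# After any config change, snapshot it:
git add -A
git commit -m "describe the change"

# Browse or diff the machine's config history locally:
git log --oneline
git diff HEAD~1
```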
Another thing you could do with VMs is to set up a VLAN on your router, and then set up the VMs tagged onto that VLAN. Then you can run all of this in a separate network VLAN with some firewall rules to keep it apart from your normal LAN.
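On the VM host side, the tagged interface can be set up with iproute2 along these lines (VLAN id 30 and the interface names are illustrative; your router must tag the same id, and the firewall rules live on the router):

```shell
# Create a VLAN sub-interface tagged with id 30 on the host NIC.
ip link add link eth0 name eth0.30 type vlan id 30
ip link set eth0.30 up

# Bridge the tagged interface so the VMs attached to br-vlan30
# land on that VLAN, isolated from the normal LAN.
ip link add br-vlan30 type bridge
ip link set eth0.30 master br-vlan30
ip link set br-vlan30 up
```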
This is a good article introducing the concepts of a persistent information structure that supports a Model like Claude Code, called Scaffolding, and also called a Harness.
Modularising Elm code at first and then combining finished Modules into larger finished files might be a great fit for the AI Context limitations, improved upon by Scaffolding in PAI:
Elm module size was a problem until I started using Serena, which provides more surgical search/read/edit tools as an MCP server. Then Claude Pro got a 1M-token context window, which solved the problem a different way. More recently I have noticed that CC is not really using Serena any more; I think maybe it got its own more surgical tools for large files. I don’t think large Elm modules are a problem any more, at least not for Claude Pro.
That has been my approach to creating “personal AI”. Get a bunch of files on some topic of interest, RAG them, then fire up the chat UI.
My eldest son is sitting his school exams currently. I downloaded all the syllabus and marking info and past papers from the official exam bodies' websites, and made him a personal AI for each subject. It has agents defined that will run quizzes and then mark his answers against the exam bodies' marking guidelines, telling him exactly what he lost marks on and how to get from there to full marks.
One issue I had recently: I use this to work with code, and the AI kept saying, "this won’t work for your code because you have a custom Thing implemented, so the out-of-the-box technique for this library is being bypassed, meaning you cannot do…" It kept saying this even though I had removed the custom Thing long ago; it was still there in some of my docs! So I wonder: personal AI with memory sounds good, but what about when memories are no longer valid? How do you remove them, or otherwise mark them as no longer current, in a way that the AI will be clear on?
This is not really about Elm, but anyway… You can clean up these memory files with Claude or by yourself. I have instructed Claude Code never to use them, since they are badly auto-maintained and tend to taint any session with irrelevant/obsolete stuff. Knowledge management should remain in our own hands.