I’ve rediscovered plain old CGI as a great way for users to “vibe code” custom pages on our platform. [1]
The scenario: we have our first-party task lists and data viewers, but users often want to customize them heavily. Say, building a Kanban view or a custom dashboard with data filters and charts.
The box has a coding agent, which means the user can code anything, versus us building traditional report-builder tools.
Go’s stdlib (net/http/cgi) has good support on both the server side and the page side. The coding agent writes a page-name/main.go that speaks CGI, and the server delegates requests to it.
It’s all “person scale” data and page views, so there’s no real need to even optimize with FastCGI.
> I still don't love how CGI uses environment variables.
Neither do I. They really only make sense in the context of a request which was actually to a CGI script resident in a document root - they're an exceptionally awkward way of describing other HTTP requests, especially ones which aren't being served from a document root. And there's a lot of information lost in translation, like the order and original capitalization of HTTP headers. (Not that these things are supposed to matter, but still.)
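To illustrate the lossy part: CGI (RFC 3875) maps each request header to an HTTP_* meta-variable by upper-casing the name and replacing "-" with "_", so distinct spellings collapse into one variable. A tiny sketch of that mapping (the helper name is mine):

```go
// Demonstrates the lossy CGI header-to-environment-variable mapping:
// original capitalization is gone, and "-" vs "_" is indistinguishable.
package main

import (
	"fmt"
	"strings"
)

// metaVar converts an HTTP header name to its CGI meta-variable name
// per the RFC 3875 convention.
func metaVar(header string) string {
	return "HTTP_" + strings.ToUpper(strings.ReplaceAll(header, "-", "_"))
}

func main() {
	// All three spellings collapse to HTTP_X_REQUEST_ID.
	for _, h := range []string{"X-Request-Id", "x-request-id", "X_Request_Id"} {
		fmt.Println(h, "->", metaVar(h))
	}
}
```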
My daily driver is Zed, developing on remote SSH servers on exe.dev.
It's crazy to think of all the dev tools I've churned through over the last 18 months, but these two feel sticky.
Zed has everything I need in a unified pane: file editor, terminal, agents, SSH remotes. And it's fast and intuitive.
exe.dev is the first "dev container" I've ever *loved*. The remote sandbox means `dangerously-skip-permissions` is safe. Being on the internet with good private / shared / public access saves so much time.
I also use https://conductor.build/ and GitHub, but this is starting to feel clunky compared to hacking directly against online live-reloading apps.
I'm glad to hear the SSH remote editing is working well.
A lot of the time I'm developing on a remote server using VSCode Remote-SSH. I mostly love it. But! It consumes a lot of memory. And not only that: at times it gets stuck in some infinite loop or such, and ends up consuming all memory on the machine, preventing all traffic. It takes a few minutes for the OS to finally kill it so I can get back in. I'm pretty sure this is happening due to large collections of symlinks (the subprocess eating up the memory is rg). But also, just editing JavaScript at times launches a bunch of ts-servers consuming everything and more.
This is super scary, if I'm poking around on the prod server.
Actually, inspired by this, I went ahead and installed Zed to try it out. After a couple of hours of working remotely using Zed, I'm impressed. It actually works, and the experience feels great. The only little issue was that when I first opened the remote folder, I was greeted with a blank window. I thought it was stuck loading and was about to give up, but it turns out I had to open the project panel myself to see the files. Otherwise, working fine so far! Memory-wise it's practically free.
EDIT: Scrap that. After a while it starts running at 100% CPU on my macbook. I'm editing a small, simple PHP file remotely over SSH. I haven't yet tested whether it only happens with remote editing. Too bad... Well, at least it didn't trash the server like VSCode.
EDIT: Logs showed it was trying to do some auto suggestions every few seconds, but failed due to missing credentials. Didn't seem like something that would eat up 100%, but after disabling all AI features (I'm glad there was an option for this), the problem disappeared, and I'm happy with Zed again.
The only reason I remained on vscode for so long was the remote ssh editing as I also use a dev box (M2 air + dev box = multi-day battery life) but recently got sick and tired of the vscode instability and frequent need to blow away state / reinstall plugins after updates. When I saw Zed had an ssh dev equivalent I jumped ship and haven't looked back. Here is my theme if anyone is interested, https://github.com/whalesalad/dotfiles/blob/master/zed/whale...
Personally, the main advantage is being able to simply spin up new ones with the same setup. You can also do that yourself, but imo it involves way more management and time compared to exe.
I'll have to check it out again. Last time I tried, the git integration didn't work when connecting to a remote SSH server, and ports couldn't be mapped at runtime.
Had to shut everything down, list the port, and then reconnect. A big pain when other tools just automatically figure out what needs to be forwarded, or just let you specify arbitrary ports at runtime.
"online live reloading apps" => trying to get my head around this workflow. So the disk is shared across these? Do you still have the problem of, say, running a "main" version of an app alongside a weird experimental version of that same app, because they still have to live in different folders/worktrees? That's where I get stuck a little trying to enable things like this for others. Right now, I've given people a system where we can spin up N "vms", but the storage isn't persistent if the vm goes away; it's whatever version exists in their GitHub branch. Hopefully, if they hack the vm app, they commit and push back to the repo.
For many apps the weird experimental version is all there is. Call it vibe coding or experiments or non-critical tools. These may not even have a GitHub repo. I trust local git and the exe.dev disks.
Then for serious apps the above is the same shape for development branches. Spinning up a VM in a few seconds with the code checked out, running online, and editable over an SSH mount is the magic.
Then that turns into a PR on GitHub and a normal review then CI/CD to staging and prod takes over.
Using Zed with ssh is an interesting idea. I spend a lot of time mosh/ssh to VPSs, then running 'emacs -nw' locally on the server. This is a great setup since I love Emacs, but I will give Zed/ssh a try. Thanks.
Hug ops to everyone involved in these outages and trying to maintain uptime.
But I'm glad my team is staying nimble and has multi-model (Anthropic, Codex, Gemini), multi-modal (desktop, CLI/TUI, web) dev tooling.
As our actual coding skills collectively atrophy, we'll either need to switch tools or go for a walk when the LLM is down.
In the cloud era I advised against a multi-cloud strategy, as the effort-to-impact ratio just wasn't there. But perhaps this is different in the LLM era, where the cost of switching is pretty darn low.
Tbh, even if your code skills don’t atrophy, you can still use outage events like this or AWS being down etc to just make up an excuse to go for a walk.
I haven’t had great experiences with Gemini for coding yet. I’m doing reasonably simple full-stack Go apps. Tried Gemini CLI, Antigravity, Pi.
The problems I’ve experienced: it’s less adept at picking the right bash commands to build and test the Go app, and it doesn’t follow idiomatic Go or codebase patterns for changes.
It is an interesting project. I would love to see markdown improved and I agree with many of the simplifications they've made.
However I'm skeptical that any format that compromises the editing experience will gain traction with users. For example, djot requires a blank line before nested lists (at least in the default mode) which requires writing lists in a way that I've never seen anyone write in an email because it groups nested items incorrectly in the raw text.
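To show what I mean (assuming djot's default behavior here), this is how basically everyone writes a nested list in email and in Markdown:

```
- parent
  - child
```

whereas djot only starts the sublist when a blank line precedes it, so the natural style above gets grouped as plain continuation text of "parent":

```
- parent

  - child
```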
"a small proliferation" is a nice way to describe the cluster that is my side project habit. If you bump into any issues, pls send a PR or drop an issue on the repo!
Hi, I'm the author of GAI. I'm glad that you are happy with it! I'm using it a lot myself, both for my own and client projects, but I wasn't sure if anyone else was. :D
I'm actually trying to build it out so that gateways aren't strictly necessary. Cost and token tracking happen through OpenTelemetry, fallbacks and retries are handled through the new “robust” package, and I have other plans as well. You're always welcome to file issues in the repo for things you'd like to see but aren't there yet. :-)
Yeah I share the same uncertainty here. My understanding is personal and interactive use should be fine. I use Conductor all day every day and it wraps a subscription.
Perhaps fully automated use is where the line is drawn.
But I also suspect individuals using it for light automated dispatching would be ok too.
I built Fence (https://github.com/Use-Tusk/fence) in Go, a lightweight process sandbox for CLI agents (or any command really) with filesystem and network restrictions. It's also available as a Go library if you wish to add sandboxing to Shelley.
i use this for my personal projects. some features are gated behind a license, but the basics like provider proxy, logs, and metrics are covered in the free version.
https://github.com/maximhq/bifrost
> Any other Go-based AI / LLM tools folks are happy with?
I can throw my hat into the ring, built on ADK, CUE, and Dagger (all also in Go); CLI, TUI, and VSCode interfaces. It's my personal / custom stack, still need to write up docs. My favorite features are powered by Dagger, sandbox with time travel, forking, jump into shell at any turn, diff between any points.
I'm seeing that these tools are extremely powerful in the hands of experts who already understand software engineering, security, observability, and system reliability/safety.
And extremely dangerous in the hands of people who don't understand any of this.
Perhaps the reality of economics and safety will kick in, and inexperienced people will stop making expensive and dangerous mistakes.
The future is happening. Instead of trying to raise awareness about evil AI, I think it would be healthier to direct this energy toward ways of improving the situation without condemning the unknowns of AI evolution. As with anything, there will be a bad side. The bad guys will always be there, be it AI or soccer matches. Should we stop developing nuclear energy because nuclear weapons are developed?
There is no natural law saying the good sides of any kind of tech will outweigh any bad sides.
“The future” is happening because it is allowed in our current legal framework and because investors want to make it happen. It is not “happening” because it is good or desirable or unavoidable.
What’s old is new again for agents!
1. https://housecat.com