Hacker News
I've been using opencode and oh-my-opencode with Claude's models (via GitHub Copilot). The last two or three months feel like they have been the most productive of my 28-year career. It's very good indeed with Rails code; I suspect that has something to do with the intentional expressiveness of Ruby, plus perhaps some above-average training content for this language and framework. Or maybe that's just my bias.

It takes a bit of hand-holding and multiple loops to get things right sometimes, but even with that, it's pretty damn good. I don't usually walk away from it; I actively monitor what it's doing, peek in on the sub-agents, and interject when it goes down a wrong path or writes messy code. But more often than not, it goes like this:

  - Point at a GH issue or briefly describe the task
  - Either ask it to come up with a plan, or just go straight to implementation
  - When done, run *multiple* code review loops with several dedicated code review agents - one for idiomatic Rails code, one for maintainability, one for security, and others as needed

These review loops are essential; they help clean up the code into something coherent most of the time. It really mirrors how I tend to approach tasks myself: write something quickly that works, make it robust by adding tests, and then make it maintainable by refactoring. Just way faster.
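To make the review-loop idea concrete, here is a toy Ruby sketch of the shape of that process. The `review_loop` helper and the lambda "reviewers" are illustrations I've made up for this comment - they are not opencode's actual agent API, where each pass would really be a prompt to a dedicated sub-agent:

```ruby
# Hypothetical sketch of the multi-pass review workflow: run each dedicated
# reviewer repeatedly until it stops suggesting changes, then move on to the
# next concern. In practice each reviewer would be an agent invocation; here
# they are plain lambdas so the loop is runnable on its own.

def review_loop(code, reviewers, max_rounds: 5)
  reviewers.each do |_name, reviewer|
    max_rounds.times do
      revised = reviewer.call(code)
      break if revised == code   # reviewer is satisfied; next concern
      code = revised
    end
  end
  code
end

# Toy reviewers, one per concern. Each is idempotent, so every inner
# loop converges after a single productive round.
reviewers = {
  "style"    => ->(c) { c.gsub(/\s+$/, "") },           # strip trailing whitespace
  "security" => ->(c) { c.gsub("http://", "https://") } # force TLS in URLs
}

puts review_loop("fetch('http://example.com')  \n", reviewers)
```

The point of the structure is that each pass has a single concern and a clear stopping condition, which is also what makes the agent version converge instead of churning.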

I've been using this approach on a side project, and even though it's only nights and weekends, it's probably the most robust, well-tested and polished solo project I've ever built. All those little nice-to-have and good-to-great things that normally fall by the wayside if you only have nights and weekends - all included now.

And the funny thing is - I find that coding with AI like this gets me in the zone more than hand-coding. I suspect it's the absence of all those pesky rabbit holes thrown up by any non-trivial code base and toolchain, which can easily distract us from thinking about the problem domain and into solving problems of our tools. Claude deals with all that almost as a side effect. So while it does its thing, I read through its self-talk while thinking along about the task at hand, intervening if I disagree, but I stay at the higher level of abstraction, more or less. Only when the task is basically done do I dive a level deeper into code organisation, maintainability, security, edge cases, etc.

Needless to say that very good test coverage is essential to this approach.

Now, I'm very ambivalent about the AI bubble - I believe very firmly that it is one - but for coding specifically, it's a paradigm shift, and I hope it's here to stay.