Internally, we have a closed beta for what is basically a hosted Claude Code harness. It's ideal for scheduled or async jobs that benefit from large amounts of context.
At a glance, it seems similar to Uber's Minion concept, although we weren't aware of that until recently. I think a lot of people have converged on the same thing.
Having scheduled roundups of things (what did I post in Slack? what did I open in GitHub? etc.) is a nice quality-of-life improvement. I also have some daily tasks like "Find a subtle cloud spend that would otherwise go unnoticed", "Investigate an unresolved hotfix from one repo and provide the backstory" and "Find a CI pipeline that has failed 10 times in a row and suggest a fix".
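The "failed 10 times in a row" check is simple enough to sketch. This is a minimal illustration, not our actual tooling: it assumes you've already fetched each workflow's recent run conclusions (e.g. from the GitHub Actions API), ordered newest first, and it just flags workflows whose most recent runs are all failures.

```python
def consecutive_failures(conclusions: list[str]) -> int:
    """Count failures at the head of a newest-first list of run conclusions."""
    count = 0
    for conclusion in conclusions:
        if conclusion != "failure":
            break
        count += 1
    return count


def flag_broken_pipelines(
    runs_by_workflow: dict[str, list[str]], threshold: int = 10
) -> list[str]:
    """Return workflow names whose last `threshold` runs all failed.

    `runs_by_workflow` maps a workflow name to its run conclusions,
    newest first (illustrative shape, not a real API response).
    """
    return [
        name
        for name, conclusions in runs_by_workflow.items()
        if consecutive_failures(conclusions) >= threshold
    ]
```

The flagged names then become candidates you hand to the agent with the "suggest a fix" part of the prompt.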
I work in the platform space, so your mileage may vary, of course. More interesting to me are the second-order effects beyond my own experience:
- Hints of people in engineering-adjacent roles (i.e. technical support) now feeling empowered to generate large PRs implementing unscoped/ill-defined new internal services, because they don't have the background to know what is "good" or "bad". These people have always existed; you get folks on the edge of technical roles who aspire to become fully fledged developers without an internal support mechanism, but now the barrier is a little lower.
- PR review fatigue: As a Platform Engineer, I already get tagged on acres of PRs, but the velocity of PRs has increased, so my inbox is now flooded with already-merged PRs (not that a tag was ever a good signal anyway).
- First hints of technical folk who have progressed off the tools being encouraged to fix those long-standing issues that are simple in their minds, even though reality has shifted a lot since. LLMs are generally pretty good at surfacing this once they check how things actually are, but they don't "know" what your mental model is when you frame a question.
- Coworkers defaulting to asking LLMs about niche queries instead of asking others. In a few cases I've seen, the answer from an LLM is fine but lacks the historical context that makes many things make sense. As an example off the top of my head, websites often have subdomains not for any good present-day reason but because, back in the day, browsers only allowed something like 6 concurrent XHR connections per domain. LLMs probably aren't going to surface that sort of context, which takes a topic from "was this person just a complexity lover?" to "ah, they were working around the constraints of the time".
- Obviously security is a forever battle. I think we're more security minded than most but the reality is that I don't think any of this can be 100% secure as long as it has internet access in any form, even "read only".
- A temptation to churn out side quests. When I first got started, I tended to do work after hours, but I've definitely trailed off and am back to normal now. Personally I like shipping stuff compared to programming for the sake of it, but even then, I think you eventually normalise and the new "speed" starts to feel slow again.
- Privileged users generating and self-merging PRs. We have one project where almost everyone has force-merge permission and, because it's internal only, we've been using that paired with automated PR reviews. It works fairly well because we discuss most changes in person before actioning them, but a couple of historical users with the same permission now contribute from other timezones. Waking up to a changed mental model that hasn't been discussed definitely won't scale, and we're going to need to lock this down.
- Signal degradation in PRs: I've seen a few PRs that provide a whole post-hoc rationalisation of what the PR does and what the problem is; you go to the source input and it's someone writing something like "X isn't working? Can you fix it?". It's really hard to infer intent and capability from a PR as a result. Often the changes are even quite good, but that's not a reflection of the author. To be fair, the alternative might have been that internal user just giving up and never communicating that there was an issue, so I can't say this is strictly a negative.
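On the self-merge point above: one way to lock it down is GitHub's branch protection API, requiring at least one human approval so a change can't land undiscussed. This is a sketch of what that payload could look like, not what we actually run; the field names come from GitHub's REST API, while the review count and nulls are illustrative choices.

```json
{
  "required_status_checks": null,
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "require_last_push_approval": true
  },
  "restrictions": null
}
```

That gets `PUT` to `/repos/{owner}/{repo}/branches/{branch}/protection` (e.g. via `gh api`). `require_last_push_approval` matters here: without it, someone with write access can approve their own last push via a colleague's stale review.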
All of the above are things that are actively discussed internally, even if they're not immediately obvious, so I think we're quite healthy in that sense. This stuff is bound to happen regardless; I'm sure most orgs will either paper over it or simply have no mechanism to identify it. I can only imagine what fresh hells exist in Silicon Valley, where I don't think most people are equipped to be good stewards or even consider basic ethics.
Overall, I'm not really negative or positive. There is definitely value to be found, but I think there will probably be a reckoning: LLMs have temporarily been a hall pass to go faster than the support structures can keep up with. That probably looks like moving from starting work with a prompt back to tasks in ticket trackers, pre-work to figure out the scope of the problem, etc. Again, Platform BAU has entirely different constraints and concerns than product work.
Actually, I should probably rephrase that a little: I'm mostly positive on pure inference while mostly negative on training costs and other societal impacts. I don't believe we'll get to everyone running Gas Town/The Wasteland, nor do I think we should aspire to. I like iterating with an agent back and forth locally, and I think heavily automating stuff with no oversight is bound to fail, in the same way that large corporations get bloated and collapse under their own weight.