Because the force multiplier of a good DX far outweighs the occasional nonsense of having to do k8s upgrades or troubleshooting.

For example: how do you roll out a new release of your product? In sane setups, it's often $(helm upgrade --install ...), which is itself often run either in-cluster by watching a git-managed descriptor, or in CI on merge to a release branch/tag.
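A minimal sketch of the CI variant, assuming a chart checked into ./chart and a release named my-app (release name, chart path, and values are all made up):

    # run by CI on merge to a release tag; names and paths are assumptions
    helm upgrade --install my-app ./chart \
      --namespace production \
      --set image.tag="${GIT_TAG}" \
      --atomic --wait    # roll back automatically if the upgrade fails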

How does your developer get logs? Maybe it's via Splunk/ELK/DataDog/whatever, but I have never in my life seen a case where that's a replacement for viewing the logs.

How do you jump into the execution environment for your workload, to do more advanced debugging? I'm sure you're going to say ssh, which leads to the next questions of "how do you audit what was done, to prevent config drift" followed by "how do you authenticate the right developer at the right time with access to the right machine without putting root's public key file in a spreadsheet somewhere"

> For example: how do you roll out a new release of your product?

It's pretty easy to do that with docker compose if you have containers, but you can also accomplish the same thing with systemd and some bash scripts (rough sketch below). Admittedly this would only affect a single node, but it's also possible to manage multiple nodes without using K8s / Nomad.
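As a rough illustration of the compose route, assuming the app lives in a git checkout on the host (all paths and names here are hypothetical):

    #!/bin/sh
    # deploy.sh <tag> - run on the host, or over ssh from CI
    set -eu
    cd /opt/my-app            # hypothetical checkout containing compose.yaml
    git fetch --tags
    git checkout "$1"         # release tag to deploy
    docker compose pull       # fetch the new images
    docker compose up -d      # recreate only containers whose image/config changed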

> How does your developer get logs?

fluentd
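For a rough idea of what that looks like, a minimal fluentd config that tails an app log and forwards it to a central aggregator (paths and hostnames are made up):

    # /etc/fluent/fluentd.conf - tail local app logs, forward to an aggregator
    <source>
      @type tail
      path /var/log/my-app/*.log              # hypothetical log path
      pos_file /var/log/fluentd/my-app.pos    # remembers read position across restarts
      tag my-app
      <parse>
        @type none
      </parse>
    </source>

    <match my-app>
      @type forward
      <server>
        host logs.internal    # hypothetical aggregator host
        port 24224
      </server>
    </match>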

> How do you jump into the execution environment for your workload, to do more advanced debugging?

ssh

> how do you audit what was done, to prevent config drift

Assuming you're pulling down releases from a git repo, git diff can detect changes, and you can then either generate a patch file and send it somewhere, or just reset to HEAD (sketch below). For server settings, any config management tool, e.g. Puppet.
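Something along these lines, run from cron (the checkout path and the reporting endpoint are assumptions):

    #!/bin/sh
    # drift-check.sh - report and revert local changes to the deployed checkout
    cd /opt/my-app || exit 1          # hypothetical release checkout
    if ! git diff --quiet HEAD; then
        git diff HEAD > /tmp/drift.patch                                       # capture the drift
        curl -sfT /tmp/drift.patch "https://audit.internal/drift/$(hostname)"  # hypothetical audit endpoint
        git reset --hard HEAD                                                  # put the node back in line
    fi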

> how do you authenticate the right developer at the right time with access to the right machine without putting root's public key file in a spreadsheet somewhere

freeipa

I'm not saying any of this is better than K8s. I'm saying that, IMO, the above can be simpler to reason about for small setups, and has a lot less resource overhead. Now, if you're already comfortable administering and troubleshooting K8s (which is quite a bit different from using it), and you have no background in any of the above, then sure, K8s is probably easier. But if you don't know this stuff, there's a good chance you don't have a solid background in Linux administration, which means that when your app behaves in strange ways (i.e. not an application bug per se, but how it interacts with Linux) or K8s breaks, you're going to struggle to figure out why.

> Splunk/ELK/DataDog/whatever but I have never in my life seen a case where that's a replacement for viewing the logs

Uh, any time I run a distributed system where logs could appear on n nodes, I need a log aggregator or I'm tailing in n terminals. I almost only use Splunk. I tail logs in dev; prod needs an aggregator. This has been my experience at 4 of my last 6 companies. The shit companies who had all the issues? Logs on CloudWatch, or only on the node.

kubectl logs deployment/my-multinode-deployment