If you have a system that's actually big or complex enough to warrant using Kubernetes (and frankly, given the realities of production, that bar isn't very high), the only thing more complex than Kubernetes is implementing the same concepts yourself, half-assed.

I really wonder why this opinion is so widely accepted. I get that not everything needs most of Kubernetes's features, but they're useful. The Linux kernel is a dreadfully complex beast, full of winding subsystems and screaming demons all over: eBPF, namespaces, io_uring, cgroups, SELinux, and much more, all interacting with each other in sometimes surprising ways.

I suspect a lot of sysadmins have a more complete understanding of what's going on in Kubernetes than of what's going on in Linux.

> If you have a system that's actually big or complex enough to warrant using Kubernetes (...)

I think there's some confusion in your understanding of what Kubernetes is.

Kubernetes is a platform for running containerized applications. It started as a way to simplify the work of putting together clusters of COTS hardware, but its popularity has since driven it to become the platform itself rather than an abstraction over other platforms.

What this means is that Kubernetes is now a standard way to deploy cloud applications, regardless of complexity or scale. Kubernetes is used to deploy apps to Raspberry Pis, one-box systems running under your desk, your own workstation, one or more VMs running on random cloud providers, and AWS. That's it.

I'm not sure what your point is.

> I'm not sure what your point is.

My point is that the mere notion of "a system that's actually big or complex enough to warrant using Kubernetes" is completely absurd, and communicates complete cluelessness about the whole topic.

Do you know what a system big enough for Kubernetes is? A single instance of a single container. That's it. Kubernetes is a container orchestration system. You tell it to run a container, and it runs it. That's it.
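To make that concrete: the entire "big enough" workload can be one minimal manifest (the name and image below are just placeholders):

```yaml
# A complete, valid Kubernetes workload: one Pod running one container.
apiVersion: v1
kind: Pod
metadata:
  name: hello            # hypothetical name
spec:
  containers:
    - name: hello
      image: nginx:1.27  # any container image works here
```

Apply it with `kubectl apply -f pod.yaml` and Kubernetes runs it. There's no minimum scale.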

See how silly it all becomes once you realize these things?

First of all, I don't really get the unnecessary condescension. I am not a beginner when it comes to Kubernetes and don't struggle to understand the concept at all. I first used Kubernetes at version 1.3 back in 2016, ran production workloads on it, contributed upstream to Kubernetes itself, and at one point even did a short stint of consulting for it. I'm not claiming to be any kind of authority on Kubernetes or job scheduling, but talking down to people the way you're doing to me doesn't make your point any better; it just makes you look like an insecure dick. I really tried to avoid escalating this in my last reply, but it has to be said.

Second of all, I don't really understand why you think I'd be blown away by the notion that you can use Kubernetes to run a single container. You can also open a can with a nuclear warhead; that doesn't mean it makes any sense.

In production systems, Kubernetes and its ecosystem are very useful for providing the kinds of things that are table stakes: zero-downtime deployments, metric collection and monitoring, resource provisioning, load balancing, distributed cron, etc. None of which comes for free, either in complexity or in resource utilization.
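As a sketch of the zero-downtime point: what Kubernetes gives you is roughly a Deployment with a rolling-update strategy gated by a readiness probe (the names, image, and probe endpoint here are hypothetical), which is exactly the kind of machinery you'd otherwise end up hand-rolling:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never take a replica down before its replacement is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # placeholder image
          readinessProbe:          # gates traffic until the new pod actually serves
            httpGet:
              path: /healthz
              port: 8080
```

None of this is exotic, but each piece is real work if you build it yourself on bare VMs.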

But if all you need to do is run one container on a Raspberry Pi and you don't care about any of that stuff, then even something stripped down like k3s is simply not necessary. You can use it if you want to, but it's overkill, and you'll be spending memory and CPU cycles on stuff you're basically not using. Literally anything can schedule a single container on a single node. A systemd Podman unit will certainly work, for example, and it involves significantly less YAML as a bonus.
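For concreteness, that systemd Podman unit can be a Quadlet file (supported in Podman 4.4+; the file name, image, and ports below are illustrative):

```ini
# /etc/containers/systemd/hello.container  -- Quadlet unit, name is illustrative
[Unit]
Description=Run a single container without an orchestrator

[Container]
Image=docker.io/library/nginx:1.27   # placeholder image
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

systemd generates a regular service from this at boot, so `systemctl start hello` runs the container, with restarts and logging handled by tools already on the box.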

I don't think the point I'm making is particularly nuanced here. It's basically YAGNI but for infrastructure.