I really wonder why this opinion is so widely accepted. I get that not everything needs most of Kubernetes' features, but it's useful. The Linux kernel is a dreadfully complex beast, full of winding subsystems and screaming demons: eBPF, namespaces, io_uring, cgroups, SELinux, and much more, all interacting with each other in sometimes surprising ways.
I suspect there is a decent likelihood that a lot of sysadmins have a more complete understanding of what's going on in Kubernetes than in Linux.
I think there's a degree of confusion in your understanding of what Kubernetes is.
Kubernetes is a platform for running containerized applications. It started as a way to simplify the work of putting together clusters of COTS hardware, but since then its popularity has driven it to become the platform itself rather than an abstraction over other platforms.
What this means is that Kubernetes is now a standard way to deploy cloud applications, regardless of complexity or scale. Kubernetes is used to deploy apps to Raspberry Pis, one-box systems running under your desk, your own workstation, one or more VMs on random cloud providers, and AWS. That's it.
My point is that the mere notion of "a system that's actually big or complex enough to warrant using Kubernetes" is completely absurd, and communicates a high degree of cluelessness about the whole topic.
Do you know what counts as a system big enough for Kubernetes? A single instance of a single container. That's it. Kubernetes is a container orchestration system: you tell it to run a container, and it runs it. That's it.
See how silly it all becomes once you realize these things?
Second of all, I don't really understand why you think I'd be blown away by the notion that you can use Kubernetes to run a single container. You can also open a can with a nuclear warhead; that doesn't mean it makes any sense.
In production systems, Kubernetes and its ecosystem are very useful for providing the kinds of things that are table stakes: zero-downtime deployments, metric collection and monitoring, resource provisioning, load balancing, distributed cron, and so on, none of which comes for free, either in complexity or in resource utilization.
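To make that concrete, here's a rough sketch of the kind of Deployment manifest involved in getting a zero-downtime rolling update; the app name, image, and health endpoint are made up for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical app name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0       # never drop below full capacity during a rollout
      maxSurge: 1             # bring up one extra pod at a time
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:1.2.3   # hypothetical image
        ports:
        - containerPort: 8080
        readinessProbe:        # only send traffic to pods that pass the check
          httpGet:
            path: /healthz     # hypothetical health endpoint
            port: 8080
```

Genuinely valuable if you need it, but it's a fair chunk of machinery for "keep my app up while I ship a new version".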
But if all you need to do is run one container on a Raspberry Pi and you don't care about any of that stuff, then even something stripped down like k3s is simply not necessary. You can use it if you want to, but it's overkill, and you'll be spending memory and CPU cycles on shit you're basically not using. Literally anything can schedule a single pod on a single node. A systemd Podman unit will certainly work, for example, and it will involve significantly less YAML as a bonus.
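As a rough sketch of what that looks like, here's a Podman "quadlet" unit (needs Podman 4.4 or newer; the unit name, image, and port are made up):

```ini
# /etc/containers/systemd/my-app.container  (hypothetical name and image)
[Unit]
Description=My single app container

[Container]
Image=example.com/my-app:1.2.3
PublishPort=8080:8080

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

Drop that file in place, run `systemctl daemon-reload`, and `systemctl start my-app.service`; the [Install] section takes care of starting it on boot.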
I don't think the point I'm making is particularly nuanced here. It's basically YAGNI but for infrastructure.