The 10 pulls per IP per hour isn't my main concern. 40 pulls per hour for an authenticated user may be a little low, if you're trying out something new.

The unauthenticated limit doesn't bother me as much, though I was a little upset when I first saw it. Many businesses don't bother setting up their own registry, even though they should, nor do they care to pay for the service. I suspect that many don't even know that Docker can be used without Docker Hub. These are the freeloaders Docker will be targeting. I've never worked for a company that was serious about Docker/Kubernetes and didn't run its own registry.

One major issue for Docker is that they've always run a publicly available registry, which is the default and just works. So people have simply assumed that this is how Docker works, and they've never bothered setting up accounts for developers or production systems.

I dunno, your reasoning could also be applied to dependency-management registries. It's not even only about cost: it's a lot of infra work to set up authentication with every single external registry, in every single automation tool that might need to pull from it.

Like, I get it, but it adds considerable work and headaches to thousands (millions?) of people.

We run our own registry for our containers, but we don't for images from docker.io, quay.io, mcr.microsoft.com, etc. Why would we need to? Apparently now we do.
To avoid having an image you're actively using removed from the registry. Arguably it doesn't happen often, but when you're running something in production you should be in control. Below a certain scale it might not make sense to run your own registry and you just accept the risk, but if you can afford it, you should "vendor" everything.
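A low-effort way to start "vendoring" Docker Hub images is a pull-through cache using the open-source Distribution registry. A sketch (port and container names are illustrative):

```shell
# Run the registry in proxy mode; REGISTRY_PROXY_REMOTEURL is the
# environment override for the proxy.remoteurl config setting.
docker run -d --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Then point the Docker daemon at the mirror via /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://localhost:5000"] }
# and restart dockerd. Images are fetched from Docker Hub once and
# served from the local cache afterwards.
```

Note this caches pulls but isn't a full vendoring story on its own; cached images can still be garbage-collected, so many shops copy critical images into their own registry explicitly.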

Not Docker, but I worked on a project that used certain Python libraries where the author would yank the older versions every time they felt like rewriting everything, and this happened multiple times. After it happened the second time, we just started running our own Python package registry. That way we were in control of upgrades.
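Once you have a mirror, pointing pip at it is a one-line config change; a sketch assuming a devpi/Nexus-style proxy at pypi.internal (hostname hypothetical):

```shell
# Resolve all packages through the internal index instead of pypi.org.
pip config set global.index-url https://pypi.internal/simple/
```

Because the proxy keeps copies of everything it has ever served, a yanked upstream release stays installable until you decide to drop it.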

I have also had Ubuntu do this in LTS repositories.
Did this for years at my previous job to defend against the rate limits and against dependencies being deleted out from under us with no warning. (E.g. left-pad.)

Nexus is very easy to set up.
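For reference, a throwaway Nexus instance is a one-liner with Sonatype's official image (port and volume are the defaults from the image docs):

```shell
# UI comes up on :8081 after a minute or two; data persists in the volume.
docker run -d --name nexus -p 8081:8081 \
  -v nexus-data:/nexus-data sonatype/nexus3
```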

> Why would we need to? It obviously seems now we do.

You should also run your own apt/yum, npm, pypi, maven, whatever else you use, for the same reasons. At a certain scale it's just prudent engineering.
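The same one-line redirect exists for most ecosystems once a proxy repo is in place; e.g. for npm (mirror hostname and repository path hypothetical, following Nexus naming conventions):

```shell
# Route all npm installs through an internal proxy registry.
npm config set registry https://npm.internal/repository/npm-group/
```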

Caching, vulnerability scanning, supply chain integrity, insurance against upstream removal. All of these apply to other artifact types as well.

Own your dependency chain.