At 10G and up, shaping still matters. Once you mix backups, CCTV, voice, and customer circuits on the same uplink, a brief saturation event can dump enough queueing delay into the path that the link looks fine on paper while the stuff people actually notice starts glitching, and latency budgets are tight. Fat pipes don't remove the need for control, they just make the billing mistake more expensive.
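A back-of-envelope sketch of why "looks fine on paper" hides the problem: during a saturation event the buffer drains at line rate, so every byte queued ahead of your packet adds delay. The buffer size and budget below are hypothetical numbers, just to show the scale:

```python
def queueing_delay_ms(buffer_bytes: float, link_bps: float) -> float:
    """Worst-case delay added by a full buffer draining at line rate."""
    return buffer_bytes * 8 / link_bps * 1000

# Hypothetical: a 50 MB buffer ahead of you on a 10 Gb/s uplink
delay = queueing_delay_ms(50e6, 10e9)
print(f"{delay:.0f} ms")  # 40 ms of added latency on a link that's "only" briefly full
```

Average utilization graphs won't show that 40 ms spike, but a voice call with a ~20 ms jitter budget will.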
At this level, wouldn't a proper implementation segregate the link into multiple VMs (or jails)? Or is that the same thing on BSD?