You can find some re:Invent talks about it, but the level of depth may not be what you want.

The tl;dr is they built a distributed storage system that is split across 3 AZs, each with 2 storage nodes. Storage is allocated in 10 GiB chunks, called protection groups (perhaps borrowing from Ceph’s placement group terminology), with each chunk replicated 6x across the storage nodes in those AZs. 4/6 acknowledgements are required for write quorum. Since readers are all reading from the same volume, replica lag is typically minimal. Finally, there are fewer / no checkpoints and full page writes (not positive on the details honestly; I have more experience with MySQL-compatible Aurora).
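Roughly, the write path looks like this (a toy sketch, not Aurora's actual code; the names and the fan-out mechanics are my own simplification):

  import concurrent.futures

  COPIES_PER_PG = 6   # 2 storage nodes x 3 AZs
  WRITE_QUORUM = 4    # acks needed before the write counts as durable

  def write_to_copy(copy, log_record):
      # stand-in for the network round trip to one storage node
      copy.append(log_record)
      return True

  def quorum_write(copies, log_record):
      # fan out the log record to all six copies of the protection group,
      # and consider it committed once any four have acked
      acks = 0
      with concurrent.futures.ThreadPoolExecutor(max_workers=COPIES_PER_PG) as pool:
          futures = [pool.submit(write_to_copy, c, log_record) for c in copies]
          for f in concurrent.futures.as_completed(futures):
              if f.result():
                  acks += 1
                  if acks >= WRITE_QUORUM:
                      return True  # durable; in the real system stragglers are repaired later
      return False  # fewer than 4 acks: the write cannot commit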

If you’ve used a networked file system with synchronous writes, you’ll know that it’s slow. This is of course exacerbated with a distributed system requiring 4/6 nodes to ack. To work around this, Aurora has “temporary local storage” on each node, whose size is fixed in proportion to the instance size. This is used for sorts that spill to disk and for building secondary indices. This has the nasty side effect that if your table is too large for the local storage, you can’t build new indices, period. AWS will tell you “upsize the instance,” but IMO it’s extremely disingenuous to tout support for 128 TiB volumes without mentioning that if a single table gets too big, your schema becomes essentially fixed in place.
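If you want to see how close you are to that wall, the FreeLocalStorage CloudWatch metric tracks the instance-local volume (as opposed to the shared cluster volume). A rough boto3 sketch; the instance identifier is a placeholder:

  import datetime
  import boto3

  cloudwatch = boto3.client("cloudwatch")

  # FreeLocalStorage is the instance-local space Aurora uses for sorts and
  # index builds -- not the shared cluster volume
  resp = cloudwatch.get_metric_statistics(
      Namespace="AWS/RDS",
      MetricName="FreeLocalStorage",
      Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-aurora-instance"}],  # placeholder
      StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
      EndTime=datetime.datetime.utcnow(),
      Period=300,
      Statistics=["Minimum"],
  )
  for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
      print(point["Timestamp"], point["Minimum"] / 2**30, "GiB free")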

Similarly, MySQL normally has something called a change buffer that it uses to batch updates to secondary indices during writes. Aurora’s architecture can’t support that, so Aurora MySQL has to write through to the cluster volume, which is slow.
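You can see this on a live instance: stock MySQL (5.7 / 8.0) defaults innodb_change_buffering to 'all', while Aurora MySQL reports 'none', at least on the versions I’ve touched. A quick check, assuming pymysql and placeholder connection details:

  import pymysql

  # connection details are placeholders
  conn = pymysql.connect(host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
                         user="admin", password="...", database="mysql")
  with conn.cursor() as cur:
      # stock MySQL returns ('innodb_change_buffering', 'all');
      # Aurora MySQL returns 'none', so secondary index pages are
      # written through to the cluster volume on every change
      cur.execute("SHOW VARIABLES LIKE 'innodb_change_buffering'")
      print(cur.fetchone())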

AWS claims that Aurora is anywhere from 3-5x faster than the vanilla versions of the respective DBs, but I have never found this to be true. I’ve also had the goalposts shifted when arguing this point, with them saying “it’s faster under heavy write contention,” but again, I have not found this to be true in practice. You can’t get around data locality. EBS is already networked storage; requiring 4/6 quorum across 3 physically distant AZs makes it even worse.
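Back-of-the-envelope on why the quorum hurts: commit latency is gated by the 4th-fastest ack, and since only two of the six copies live in your AZ, at least one cross-AZ round trip sits on the critical path of every log write. A toy model with made-up RTTs:

  import random

  # made-up numbers: intra-AZ ~0.2 ms RTT, cross-AZ ~1.0 ms, plus jitter
  def ack_times():
      local  = [random.gauss(0.2, 0.05) for _ in range(2)]  # same-AZ copies
      remote = [random.gauss(1.0, 0.20) for _ in range(4)]  # copies in the other 2 AZs
      return sorted(local + remote)

  samples = sorted(ack_times()[3] for _ in range(100_000))  # 4th ack gates the commit
  print("p50 commit latency (ms):", round(samples[len(samples) // 2], 2))
  print("p99 commit latency (ms):", round(samples[int(len(samples) * 0.99)], 2))

With two local and four remote copies, the 4th ack is always a remote one, so the cross-AZ RTT sets the floor no matter how fast the local nodes are.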

The 64 TiB limit of RDS is completely arbitrary AFAIK, and is purely to differentiate Aurora. Also, if you have a DB where you need that, and you don’t have a DB expert on staff, you’re gonna have a bad time.