AMD CPUs have been killing it lately, but this benchmark feels quite artificial.

It's a tiny, trivial example with a single branch that behaves pseudo-randomly (random data, but a fixed seed). I'm not convinced that's representative of real-world branching.

How would the various branch predictors perform when the branch taken varies from 0% likely to 100% likely, in say, 5% increments?

How would they perform when both paths contain heavy work, so that a misprediction forces a lot of pipeline and speculative-execution state to be flushed?

How would they perform when many different branches all occur in sequence?

How costly are their branch mispredictions, relative to one another?

Without info like that, this feels a little pointless.

He isn't trying to determine how well it works. He's trying to determine how large it is.