There are all sorts of ways that additional test-time compute can be used to get better results, ranging from sampling multiple CoTs and choosing the best, to explicit tree search (e.g. rStar-Math), to "journey learning" as described here:
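To make the first of those concrete, here's a toy best-of-N sketch (mine, not from any of the papers): sample several chains of thought independently, then keep the one a scorer prefers. The `sample_cot` and `score` functions are hypothetical stand-ins for a model's sampler and a reward model/verifier.

```python
# Toy best-of-N sampling sketch (illustrative only, not a real model).
import random

random.seed(1)

def sample_cot():
    # Stand-in for sampling one chain of thought from a model.
    return [f"thought-{random.randint(0, 9)}" for _ in range(3)]

def score(cot):
    # Stand-in for a reward model / verifier score over a whole chain.
    return sum(int(t.split("-")[1]) for t in cot)

# Sample N independent chains, keep the highest-scoring one.
candidates = [sample_cot() for _ in range(5)]
best = max(candidates, key=score)
print(best, score(best))
```

The extra compute here is purely parallel: N full chains are generated and all but one are thrown away.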

https://arxiv.org/abs/2410.18982

Journey learning is effectively close to depth-first tree search (see fig. 4 on p. 5), and does seem close to what OpenAI claim to be doing, as well as what DeepSeek-R1 is doing here: no special tree-search sampling infrastructure, but rather RL-induced generation producing a single sampled sequence that takes a depth-first "journey" through the CoT tree, backtracking when necessary.
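To make the "single sequence doing depth-first search" idea concrete, here's a toy sketch (mine, not the paper's method): a DFS over a tiny CoT tree that emits one linear trace, with explicit backtrack markers where a branch is abandoned. The `GOOD` path and `looks_promising` check are made-up stand-ins for whatever learned self-verification signal the model uses.

```python
# Toy depth-first "journey" through a CoT tree, emitted as ONE linear
# sequence containing backtracking -- not parallel samples or a search tree.

# Hypothetical: only this path of reasoning steps leads to a solution.
GOOD = ["step0.1", "step1.0", "step2.1"]

def candidate_steps(depth):
    # Two candidate next steps at each depth of the toy tree.
    return [f"step{depth}.{i}" for i in range(2)]

def looks_promising(step, depth):
    # Stand-in for a learned self-check; here it just knows the answer.
    return step == GOOD[depth]

def dfs_journey(depth, trace):
    if depth == len(GOOD):
        return True  # reached a full solution
    for step in candidate_steps(depth):
        trace.append(step)
        if looks_promising(step, depth) and dfs_journey(depth + 1, trace):
            return True
        # Dead end: record the retraction and try a sibling step.
        trace.append(f"backtrack from {step}")
    return False

trace = []
solved = dfs_journey(0, trace)
print(solved, trace)
```

The point is that `trace` is a single flat sequence, with the backtracks inlined as text, which is the shape a model trained this way would actually generate.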
