I did a talk about this! (And also wrote up my talk here[1]). This paper is a great example of knowledge distillation. It's less a paper about chess and more about how complicated non-linear search functions - complete with whatever tuning experts can prepare - can be distilled into a (quasi-linear, given a standardized input like chess) transformer model.

[1]: https://hlfshell.ai/posts/deepmind-grandmaster-chess-without...
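To make the distillation setup concrete, here's a minimal sketch of generating (position, evaluation) training pairs from Stockfish - roughly the kind of supervised signal that gets distilled into the transformer. This assumes python-chess and a Stockfish binary on PATH; the random-position sampling and pair format are illustrative, not the paper's actual pipeline.

    import random
    import chess
    import chess.engine

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    def random_position(max_plies=40):
        # Sample a position by playing random legal moves from the start.
        board = chess.Board()
        for _ in range(random.randrange(max_plies)):
            if board.is_game_over():
                break
            board.push(random.choice(list(board.legal_moves)))
        return board

    pairs = []
    for _ in range(1000):
        board = random_position()
        info = engine.analyse(board, chess.engine.Limit(depth=12))
        # Centipawn score from the side to move; forced mates get a large bound.
        score = info["score"].relative.score(mate_score=10000)
        pairs.append((board.fen(), score))

    engine.quit()

The model then learns a direct mapping from board state to evaluation, baking the search-backed labels into a single forward pass.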

I think the vs. humans result should be taken with a huge grain of salt. These are blitz games, and their engine's Elo was far higher against humans than against other bots. So time was likely a factor: humans tend to flag (run out of time) or blunder in low-time situations.

It's still very cool that they could learn a strong eval function that doesn't require search. I would've liked the authors to throw out the games where the Stockfish fallback kicked in, though. Even for a human, mate in 2 vs. mate in 10 is the difference between a win and a draw/loss on time.
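If the fallback is triggered by a forced-mate check (an assumption on my part; the exact trigger is the paper's, not shown here), filtering those games would look something like this. Assumes python-chess, Stockfish on PATH, and a hypothetical "games.pgn" of the match games.

    import chess
    import chess.engine
    import chess.pgn

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    def has_forced_mate(game, depth=20):
        # Flag a game if any position along the main line had a forced
        # mate for the side to move, i.e. where a mate-finding fallback
        # could have taken over.
        board = game.board()
        for move in game.mainline_moves():
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            if info["score"].relative.is_mate():
                return True
            board.push(move)
        return False

    with open("games.pgn") as f:
        kept = []
        while (game := chess.pgn.read_game(f)) is not None:
            if not has_forced_mate(game):
                kept.append(game)

    engine.quit()

Reporting Elo on the kept games only would isolate what the learned eval contributes on its own.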

I also would've liked to see a head-to-head against depth-limited Stockfish. That would tell us approximately how much of the search tree their eval function distilled.
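A sketch of that experiment, assuming a hypothetical pick_model_move(board) wrapping the distilled model, with python-chess driving Stockfish at a fixed search depth:

    import chess
    import chess.engine

    def play_match(pick_model_move, depth, games=100):
        # pick_model_move(board) -> chess.Move is the distilled model's
        # policy (a stand-in here, not a real API).
        engine = chess.engine.SimpleEngine.popen_uci("stockfish")
        model_points = 0.0
        for g in range(games):
            board = chess.Board()
            model_is_white = (g % 2 == 0)  # alternate colors each game
            while not board.is_game_over():
                if board.turn == model_is_white:  # chess.WHITE is True
                    board.push(pick_model_move(board))
                else:
                    result = engine.play(board, chess.engine.Limit(depth=depth))
                    board.push(result.move)
            winner = board.outcome().winner  # True=white, False=black, None=draw
            if winner is None:
                model_points += 0.5
            elif winner == model_is_white:
                model_points += 1.0
        engine.quit()
        return model_points / games

Sweeping depth from 1 upward and finding where the model's score crosses 50% would give a rough "effective search depth" for the distilled eval.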
