I just tried it with the o1 model and it said it couldn't decipher it. It told me what to try, but said it didn't have the time to do so. Kind of an unusual response.
The chain of thought does seem to take quite a long time, so maybe there's a new mechanism for reducing server load: estimate the reasoning effort a problem needs, weigh that against the current pressure on the servers, and decline to run the full chain of thought when it doesn't fit.
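
If something like that existed, it might look roughly like the sketch below. To be clear, this is purely speculative: estimate_reasoning_effort, current_server_pressure, and the thresholds are all names I made up to illustrate the idea, not anything OpenAI has documented.

    # Hypothetical sketch of effort-vs-load gating. All names and
    # numbers here are invented for illustration only.

    def estimate_reasoning_effort(prompt: str) -> float:
        """Crude stand-in for an effort estimator: 0.0 (trivial) to 1.0 (very hard)."""
        hard_markers = ("decipher", "prove", "decode", "optimize")
        score = min(len(prompt) / 2000, 0.5)  # assume longer prompts tend to be harder
        if any(m in prompt.lower() for m in hard_markers):
            score += 0.5
        return min(score, 1.0)

    def current_server_pressure() -> float:
        """Stand-in for a live load metric: 0.0 (idle) to 1.0 (saturated)."""
        return 0.8  # pretend the cluster is busy right now

    def should_attempt(prompt: str, budget: float = 1.0) -> bool:
        """Run the full chain of thought only if the effort fits the headroom left."""
        effort = estimate_reasoning_effort(prompt)
        headroom = budget - current_server_pressure()
        return effort <= headroom

    if __name__ == "__main__":
        prompt = "Please decipher this substitution cipher: ..."
        if should_attempt(prompt):
            print("Proceed with full reasoning.")
        else:
            print("Decline: suggest an approach instead of solving it.")

Under high pressure the gate would trip on exactly the kind of request I made, which would produce the behavior I saw: suggestions for what to try, but no attempt at the actual work.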