The rules for translation are themselves the result of intelligence; when the thought experiment is made real (I once saw an example of this on TV), those rules are written down by humans, using human intelligence.
A machine which itself generates these rules from observation has at least the intelligence* that humans applied specifically in creating the documents that express the same rules.
That a human can mechanically follow those same rules without understanding them says as much and as little as the fact that the DNA sequences within the neurones in our brains are not themselves directly conscious of higher-level concepts such as "why is it so hard to type 'why' rather than 'wju' today?", despite being the foundation of the intelligent process of natural selection and evolution.
* Well, the capability. I'm open to the argument that AIs are thick, given that they need so many more examples than humans do, and are simply making up for it by being very, very fast: squeezing the equivalent of several million years of human experience into a month of wall-clock time.
Minds shuffle information. Including information about themselves.
Paper whose information is shuffled by rules, exhibiting intelligence and awareness of “self”, is just ridiculously inefficient. It is not inherently less capable.