That is actually correct. Do not get carried away by abacus mystics.
The following is from the paper of Scott Aaronson that I cited previously.
Scott Aaronson. “The Ghost in the Quantum Turing Machine”.
https://www.scottaaronson.com/papers/giqtm3.pdf
If you know the code of an AI, then regardless of how intelligent the AI seems to be, you can “unmask” it as an automaton, blindly following instructions. To do so, however, you don’t need to trap the AI in a self-referential paradox: it’s enough to verify that the AI’s responses are precisely the ones predicted (or probabilistically predicted) by the code that you possess! Both with the Penrose-Lucas argument and with this simpler argument, it seems to me that the real issue is not whether the AI follows a program, but rather, whether it follows a program that’s knowable by other physical agents.
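Aaronson's "unmasking" criterion is easy to make concrete. The sketch below (all names hypothetical, and the "AI" reduced to a toy deterministic function) just checks that an agent's responses match the predictions of the code we hold; if they always match, the agent has been "unmasked" as an automaton in his sense:

```python
import random

def agent_reply(prompt: str, seed: int = 0) -> str:
    # Toy stand-in for an AI whose code we possess: a deterministic
    # (or deterministically-seeded) mapping from prompts to replies.
    rng = random.Random(hash(prompt) ^ seed)
    return f"reply-{rng.randrange(1000)}"

def unmask(agent, code, prompts, seed=0):
    # "Unmask" the agent: verify its responses are precisely the
    # ones predicted by the code in our possession.
    return all(agent(p, seed) == code(p, seed) for p in prompts)

# Holding the exact code, every response is predicted in advance:
print(unmask(agent_reply, agent_reply, ["hello", "why?"]))  # True
```

The point of the excerpt is that this check needs no clever paradox; possession of the program (plus any random seed) is already enough to predict the agent perfectly.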
If your brain code is knowable by physical agents, then you too are an automaton.
...
OK, so? We are all biological machines. I don't expect there to be 'code' in the sense of computer code, but I *do* expect everything to be physical at base.