Though isn't a programming language meant to abstract away the CPU in some form? Here, the "CPU" operates on the natural language directly, so it's more like an instruction set.
But the specific thing I'm wondering is whether the "instruction set" in use biases the cognition of the natural-language computer in some way. For example, in English we use the terms "dot product" and "cross product" for two vector operations, and English speakers tend to think of these as forms of multiplication (non-commutative, in the cross case) because the word "product" is used. In Russian, the terms translate to "scalar composition" and "vector composition", and I've never met a Russian speaker who thinks of them as multiplication.
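For what it's worth, the two operations really do differ in exactly the property the naming obscures: the dot product commutes like ordinary multiplication, while the cross product anti-commutes. A few lines of Python (plain lists, no libraries) make this concrete:

```python
def dot(a, b):
    # Scalar-valued: sum of pairwise products; commutative like multiplication.
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # Vector-valued, defined only in 3D; anti-commutative.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a, b = [1, 2, 3], [4, 5, 6]
print(dot(a, b) == dot(b, a))                    # True: a·b == b·a
print(cross(a, b) == [-c for c in cross(b, a)])  # True: a×b == -(b×a)
```

So "product" fits the dot case reasonably well and is actively misleading for the cross case, which is the kind of subtle framing difference the terminology could be smuggling in.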
So I wouldn't be surprised if two LLM-based mathematics engines, one running English internally and the other Russian, took very different approaches to theorems involving vector math.