When someone says an LLM is just manipulating symbols, that it has no understanding, they’ve confused the process with the outcome. The symbols are used to create understanding: meaning generated by relational structure.
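One way to make “meaning generated by relational structure” concrete is a toy distributional sketch. This is illustration only, not a description of how any production model works; the corpus, window size, and numbers are invented:

```python
# Toy distributional sketch (illustration only; corpus and window invented):
# each word is represented solely by which other words it appears near.
# No word ever touches the world, yet a structure of similarity emerges.

from collections import Counter
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "stocks fell on the news",
]

def context_vector(word, window=2):
    """Count the words appearing within `window` positions of `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok != word:
                continue
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" come out similar purely through their relations to other
# symbols; "stocks" does not. The relational structure is the meaning.
print(cosine(context_vector("cat"), context_vector("dog")))     # ~0.98
print(cosine(context_vector("cat"), context_vector("stocks")))  # ~0.20
```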
No Ground
Symbols and images are both relational. They work differently, and neither is more real. By treating resemblance to the world as the measure of representational power, people end up confusing a representation’s origin with its claim. Warrants don’t arrive from somewhere outside; they exist in the relation; they are the relation.
A stop sign and a photograph of a stop sign are both real. Both are tangible objects carrying specific information. The difference isn’t that one is closer to reality than the other; the difference is how they work. The photograph works by resemblance: it looks like the thing. The stop sign works by agreement: we decided that a red octagon means stop and that “stop” means to cease movement. Neither is more grounded. They’re just doing different jobs through different means.
The symbol isn’t a compressed or abstracted version of the image. It transmits its meaning on a different channel entirely. The two are related but not dependent on one another.
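Here is the difference in miniature, a sketch with invented feature vectors and an invented convention table, not a claim about real perception or traffic law:

```python
# Two channels for the same content (illustration only; the feature
# vectors and the convention table are invented for the example).

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Resemblance channel: the photograph counts as a picture of a stop sign
# to the degree it *looks like* one, a similarity judgment.
reference_stop_sign = [0.9, 0.8, 0.1]   # toy features: red, octagonal, round
photograph          = [0.85, 0.75, 0.15]
print(cosine(photograph, reference_stop_sign))  # ~0.999: meaning via likeness

# Agreement channel: the symbol means what it does because a shared table
# says so. Swap the table and the same mark means something else; no
# resemblance is consulted at any point.
convention = {"RED_OCTAGON": "cease movement", "YELLOW_TRIANGLE": "give way"}
print(convention["RED_OCTAGON"])  # meaning via agreement, not likeness
```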
No Floor
So when someone concedes that an LLM operates on symbols but insists this falls short of understanding, they owe an account of what’s missing. The usual answer is something like: real understanding is grounded in the world, in embodiment, in social norms, in real consequences. The symbol, according to this view, borrows its meaning from a human environment that the model doesn’t participate in.
But trace the lines of that environment and you find more of the same. Social pressure is itself convention. Embodiment matters because of what it means within shared coordinates. Consequences are interpreted through agreements. Point to whatever grounds meaning and you’ll find another relational structure underneath it.
There is no floor where relations stop and reality takes over. The demand for grounding outside the relational system is a demand that no system, human, machine, or otherwise, can satisfy.
No Gap
When someone then says the LLM’s understanding “isn’t the same as ours”, they’ve made a real observation while drawing the wrong conclusion. We don’t require two humans to have the same understanding before agreeing that both understand. No two instances of anything (brains, histories, bodies) are equivalent at every level of description. If equivalence were the standard, nothing would meet it. Yet we never apply this criterion to each other; the objection is raised only when a model makes claims about the world. Which is why the equivalence objection also fails.
What we do is pragmatic: we observe that something functions meaningfully within its relational context. That has always been the working test, and it has served us well.
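Programmers will recognize this as duck typing: we never inspect an object’s internals for equivalence, we only check that it behaves meaningfully in context. A toy sketch of the analogy, with invented classes:

```python
# Duck typing as an analogy for the working test (an analogy only): the
# caller never checks what an agent *is*, just whether it functions
# meaningfully in its relational context.

class Human:
    def answer(self, question: str) -> str:
        return f"Let me think about {question!r}... here is my view."

class Model:
    def answer(self, question: str) -> str:
        return f"Considering {question!r}, the relevant points are..."

def converse(agent) -> str:
    # No demand for equivalent internals; the only test is functional.
    return agent.answer("what does the red octagon mean?")

for agent in (Human(), Model()):
    print(converse(agent))
```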
Resemblance isn’t the ground of representation; relations are. Relations don’t need to be identical to be real, because nothing is identical to anything else. The question “does the LLM really understand, or is it just manipulating symbols?” smuggles in a standard we don’t use anywhere else, one our own framework of meaning can’t support.
Postscript
The project is building AI that understands the world as we do, so every obstacle must be considered. I’ve spent the past decade looking for the ladder that connects symbol to grounding, only to realize they sit on the same plane. There is no ladder.