While I’ve argued that LLMs lack complete information about reality, the binary framing of “understanding” in AI discourse flattens the nuances of our actual lived experience. “Do you understand?” is not a yes-or-no question. Completeness has never been a prerequisite for understanding or reasoning.
Insisting that “direct experience” is the only valid form of knowledge overlooks how understanding emerges through layers of abstraction, each just as real as what it represents. In other words, information twice removed from direct sensory experience is not untethered from our shared reality. Mathematical models in physics are built upon abstractions of abstractions, yet they still capture things we find meaningful and useful about the world.
Underneath the words we speak are stable patterns representing cause and effect. Putting those words on paper doesn’t negate the existence of the patterns beneath them or the properties those patterns carry. Machines that exhibit intelligent behavior are “thinking” in both the geometric and the symbolic space, as an integrated and layered system, just like us.
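To make the “geometric and symbolic” point concrete, here is a toy sketch. The vectors are hand-made stand-ins rather than real learned embeddings, and the analogy task is purely illustrative: discrete symbols are placed in a continuous space, a relationship becomes a direction in that space, and the result of a geometric manipulation is read back out as a symbol.

```python
# Toy illustration of symbols grounded in a geometric space.
# These 2-D vectors are hand-made stand-ins, not real embeddings.
import math

embeddings = {
    "hot":  [1.0, 0.2],
    "cold": [-1.0, 0.2],
    "fire": [0.9, 0.8],
    "ice":  [-0.9, 0.8],
}

def nearest(vec):
    """Map a point in the geometric space back to the closest symbol."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(embeddings, key=lambda word: dist(embeddings[word], vec))

# Reason geometrically, read out symbolically:
# "fire" relates to "hot" roughly as "ice" relates to "cold".
offset = [f - h for f, h in zip(embeddings["fire"], embeddings["hot"])]
candidate = [c + o for c, o in zip(embeddings["cold"], offset)]
print(nearest(candidate))  # -> "ice"
```

Real models do this with learned, high-dimensional embeddings, but the layering is the same in kind: continuous geometry underneath, discrete symbols at the interface.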