Posts

Showing posts from 2025

Metalanguages are formal metaphors

In a logic class, the professor tells his students: "Yesterday, while talking with my Sufi gardener about happiness, we ended up talking about metalanguages, because he said that orchids are 'chambers where light plays between amorous encounters.' I told him: 'You have to be a poet to talk about poetry.' He replied: 'You just have to be human.' In what way can we say that my gardener is proposing that every metalanguage is a formalized metaphor for its object language, and what would be the metaphor for arithmetical addition? Furthermore," he asks, "how does this little narrative show that Kurt Gödel was a Platonist?" One student answers: "The gardener uses orchids as a metaphor for biological reproduction, and from this he makes a second-order metaphor at the human level, calling reproduction a loving encounter. The gardener is a Sufi; in Sufi ontology, the word 'encounter' is used as an equivalent of 'existence,' a double meaning (Wujud)...

Some further implications of Tarski's theorem in relation to LLMs

Although the sentences produced by an AI LLM are generated probabilistically and "synaptic reinforcements" lead to the construction of sentences that chain together meaningfully, the problem posed by Tarski's theorem does not disappear in an LLM. It does not matter that increasingly powerful models can reproduce "quadrillions" of meaningful and intelligent communications. That meaning is assigned to the machine by humans, and all the intelligence we grant to the machine is merely an assignment of our sense of what intelligence is. The human lifeworld (Lebenswelt) is the semantic system external to the AI, just as the organic life process in the earth's environment (Unterlebenswelt) is the semantic system external to humans. AI objects are abstract symbolic objects (Überlebenswelt) that, stripped of their human systemic referent, cannot polarize their linguistic predicates in the dualistic terms required by any language linked to life experience. AI ca...
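
For reference, a minimal statement of the theorem being leaned on here, Tarski's undefinability theorem, in the usual notation for first-order arithmetic: let L be the language of first-order arithmetic and let $\ulcorner \varphi \urcorner$ denote the Gödel number of a sentence $\varphi$. Then there is no formula $\mathrm{True}(x)$ of L such that, for every sentence $\varphi$ of L,

$$\mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi .$$

Any adequate truth predicate for L has to be stated in a richer metalanguage; the semantics of the system is fixed from outside it.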

More on the corollary of Tarski's theorem in relation to LLMs

I asked Gemini 1.5 the following in relation to my hypothesis: "Hi Gemini, this is Oscar Muñoz. Consider my following hypothesis and commentary. Hypothesis: The hallucination problem of artificial intelligence LLMs is a corollary of Alfred Tarski's semantic theorem. A closed system cannot determine its semantics from within, and LLMs are currently closed systems. I asked the LLM o3-mini if it agrees with my hypothesis, and it does. In any case, that language model believes the hypothesis presented is merely an analogy, when it is simply a practical case of a corollary. What can you tell me about it?" This is the answer: "Hello Oscar Muñoz, it's a pleasure to engage with your hypothesis. Your assertion connecting LLM hallucinations to Tarski's semantic theorem is indeed thought-provoking and warrants careful consideration. Let's break down your points and explore the implications. Understanding Your Hypothesis. Tarski's Semantic Theorem: In essence, Tar...

The hallucination problem of artificial intelligence LLMs is a corollary of Alfred Tarski's semantic theorem.

Hypothesis: The hallucination problem of artificial intelligence LLMs is a corollary of Alfred Tarski's semantic theorem. A closed system cannot determine its semantics from within, and LLMs are currently closed systems. I asked the LLM o3-mini if it agrees with my hypothesis, and it does. In any case, this language model considers the hypothesis presented to be merely an analogy, when it is simply a practical case of a corollary.
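
As background for why this reads as a corollary rather than a loose analogy, here is a compressed sketch of the standard argument behind Tarski's theorem, assuming the usual diagonal lemma for a sufficiently expressive theory T: suppose T could define a truth predicate $\mathrm{True}(x)$ for its own language. By the diagonal lemma there is a sentence $\lambda$ with

$$T \vdash \lambda \leftrightarrow \neg\,\mathrm{True}(\ulcorner \lambda \urcorner),$$

so $\lambda$ is true exactly when it is not true. The contradiction pushes the truth predicate, and with it the semantics, outside the system: a closed system cannot determine its semantics from within.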