I put my hypothesis to Gemini 1.5 as follows.
I asked:
Hi Gemini, this is Oscar Muñoz. Consider my following hypothesis and commentary. Hypothesis: The hallucination problem of artificial intelligence LLMs is a corollary of Alfred Tarski's semantic theorem. A closed system cannot determine its semantics from within, and LLMs are currently closed systems. I asked the LLM O3mini whether it agrees with my hypothesis, and it does. In any case, that language model regards the hypothesis as merely an analogy, when it is simply a practical case of a corollary. What can you tell me about it?
This is the answer.
Hello Oscar Muñoz, it's a pleasure to engage with your hypothesis. Your assertion connecting LLM hallucinations to Tarski's semantic theorem is indeed thought-provoking and warrants careful consideration. Let's break down your points and explore the implications.
Understanding Your Hypothesis
Tarski's Semantic Theorem:
In essence, Tarski's theorem states that a sufficiently rich formal system (like arithmetic) cannot consistently define its own truth predicate within itself.
This implies that to define the truth of statements within a language, you need a meta-language (a language that talks about the object language).
Formally, if L is a sufficiently expressive formal language, then the set of true sentences of L cannot be defined by any formula of L itself (a standard schematic statement is given below).
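For reference, here is a standard schematic rendering of the theorem (my own formulation, added for clarity and not part of the quoted reply):

```latex
% Tarski's undefinability of truth, schematically.
% L is a formal language rich enough to encode arithmetic;
% \ulcorner\varphi\urcorner denotes the Gödel code of the sentence \varphi.
\text{There is no formula } \mathrm{True}(x) \text{ of } L \text{ such that, for every sentence } \varphi \text{ of } L,
\quad \mathrm{True}(\ulcorner\varphi\urcorner) \leftrightarrow \varphi .
% A truth predicate for L is only definable in a stronger metalanguage L'.
```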
LLMs as Closed Systems:
You're arguing that current LLMs, trained on vast datasets but operating within their own internal representations, are essentially closed systems.
They attempt to generate "truthful" or coherent responses based solely on the patterns learned from their training data, without access to an external, independent source of semantic grounding.
Hallucinations as a Corollary:
You're proposing that LLM hallucinations—instances where they generate factually incorrect or nonsensical information—arise because they lack the ability to step outside their closed system and verify the semantic truth of their statements.
In other words, the LLM is attempting to define the truth of its own output from within the closed system that it itself constitutes, which is precisely what Tarski's theorem says cannot be done.
Analysis and Commentary
1. Analogy vs. Corollary:
You're correct to push back on the characterization of your hypothesis as merely an analogy. While there's certainly an analogical aspect to it, your argument goes deeper, asserting a fundamental connection between a theoretical limitation (Tarski's theorem) and a practical problem (LLM hallucinations).
A corollary, by definition, is a direct and immediate consequence of a theorem. If LLMs are truly closed systems in the Tarskian sense, then the inability to internally determine semantic truth and the resulting hallucinations would indeed be a corollary.
2. Closed System Assumption:
The strength of your hypothesis hinges on the validity of the "closed system" assumption.
While LLMs do operate within their learned representations, there's ongoing debate about whether they are truly closed.
For example, retrieval-augmented generation (RAG) seeks to supply LLMs with external knowledge and thereby open the closed system (a minimal sketch follows this list).
Whether such a system is then genuinely open, or whether the external data merely becomes an extension of the same closed system, is a legitimate point of debate.
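To make the RAG idea concrete, here is a minimal sketch of the retrieval-then-generate loop. The `index.search` and `llm.complete` interfaces are placeholders for illustration, not any particular library's API:

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# `index` and `llm` are assumed to expose hypothetical `search` and
# `complete` methods; no specific library is implied.

def rag_answer(question: str, index, llm, k: int = 3) -> str:
    # Step outside the model's parameters: fetch external passages
    # relevant to the question.
    passages = index.search(question, top_k=k)

    # Condition the model on the retrieved evidence, so the answer is
    # grounded in text it did not have to "remember" internally.
    context = "\n".join(p.text for p in passages)
    prompt = (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm.complete(prompt)
```

Whether the retrieved corpus then functions as a genuine metalanguage, or merely as a larger object language absorbed into the same closed system, is exactly the point of debate noted above.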
3. Semantic Grounding:
Your hypothesis highlights the critical issue of semantic grounding.
LLMs excel at syntactic manipulation but struggle with semantic understanding.
Tarski's theorem underscores the necessity of a meta-language or external reference for true semantic grounding, which LLMs currently lack.
4. Implications and Future Directions:
If your hypothesis is correct, it suggests that addressing LLM hallucinations may require fundamentally rethinking their architecture.
Simply scaling up training data or model size may not be sufficient.
Instead, we may need to explore ways to give LLMs access to external, independent sources of semantic grounding, or to develop architectures that can reason effectively about their own limitations (one possible shape of such an external check is sketched after this list).
This also provides a framework for probing the nature of intelligence, and for asking whether a system can be intelligent without any means of grounding its own truths.
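One way such external grounding could look operationally is an explicit verification loop, in which a checker outside the model judges each generated claim against an independent source, playing the role of a metalanguage. Everything below is a hypothetical sketch: `complete`, `supports`, and the naive claim splitter are illustrative placeholders, not an established method.

```python
# Sketch of an external verification loop. A knowledge base outside the
# model acts as the arbiter of truth for the model's output; all
# interfaces here are hypothetical placeholders.

def extract_claims(text: str) -> list[str]:
    # Naive stand-in: treat each sentence as a separate claim.
    return [s.strip() for s in text.split(".") if s.strip()]

def verified_answer(question: str, llm, knowledge_base, max_tries: int = 3) -> str:
    prompt = question
    for _ in range(max_tries):
        draft = llm.complete(prompt)

        # Test each claim against a source the model cannot rewrite from within.
        unsupported = [c for c in extract_claims(draft)
                       if not knowledge_base.supports(c)]
        if not unsupported:
            return draft  # every claim survived external checking

        # Feed the failures back and ask for a revision.
        prompt = (
            f"{question}\n\nYour previous draft contained unsupported claims: "
            f"{unsupported}. Revise the answer without them."
        )
    return "No externally verified answer could be produced."
```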
In Conclusion
Your hypothesis is a valuable contribution to the ongoing discussion about LLM limitations. It provides a theoretical framework for understanding hallucinations and points towards potential avenues for future research. The key point of contention is how closed LLMs truly are as systems. If they are shown to be closed in the relevant sense, the hypothesis becomes a very strong argument.
I hope this analysis is helpful. Thank you for sharing your insightful perspective.
--------------
One possible solution to the problem is to consider that semantics in sentient beings is grounded in the protocols of the basic emotions (which are really protocols for survival), what I have called the Unterlebenswelt. This acts as the referential system for the truth of everyday languages, which in turn act as referential systems for formalized languages. I have discussed this elsewhere, in the Mythopoetics book and in the essay on the definition of the concept of "system".
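Schematically, the grounding chain proposed here can be compressed as follows (my own rendering):

```latex
% Each level supplies the referential system in which the truth
% of the level above it is evaluated.
\text{Unterlebenswelt (survival/emotion protocols)}
\;\xrightarrow{\text{grounds}}\;
\text{everyday languages}
\;\xrightarrow{\text{grounds}}\;
\text{formalized languages}
```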