AI has once again sharpened the boundary between formal coherence and meaning. This essay argues that coherence is a necessary but not sufficient condition for meaning. Meaning presupposes a surplus: a normative structure that is not fully formalisable and that makes meaning possible. By clarifying this distinction with a touch of Wittgenstein, it can be argued that AI systems, no matter how coherent, lack this elusive moment. In what follows, I develop the distinction in three steps: first, I clarify the difference between coherence and meaning; next, I anchor meaning in a way-of-living and our assumptions about it; and finally, I analyse how AI exposes the tendency to elevate coherence to the ultimate authority.
Within a formal system, coherence refers to internal consistency: the absence of contradictions, the stability of inferences, the reproducibility of patterns, and so forth. In contemporary AI models, such as large language models (LLMs), this kind of coherence emerges from statistical analyses of vast amounts of data. However, this form of coherence is strictly internal: it concerns the stability of relations within a model, not the situating of meaning in the world. This may still seem somewhat abstract, but it should become clear in the course of the essay.
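To make this internality concrete, consider a deliberately toy sketch – not how an LLM is actually built, and the miniature corpus and function names in it are invented for illustration – of a model whose entire “competence” consists of reproducing statistical adjacencies in its training text.

```python
import random
from collections import defaultdict

# A toy "language model": it learns only which word tends to follow which.
# Its output can look locally coherent, yet nothing in the model relates
# any word to the world -- only to other words in the training data.

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count word-to-next-word transitions: the model's entire "knowledge".
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Continue a text by repeatedly sampling an observed next word."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:  # no observed continuation: the pattern runs out
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```

The point is not the model’s crudeness but its closure: every continuation is justified only by relations inside the corpus, never by anything the words are about.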
Coherence carries no normative or existential weight. As John Searle argues in his distinction between syntax and semantics, the formal consistency of a system remains categorically different from language’s ability to carry meaning. Coherence is a structural property; meaning is a relational and normative property. Just as the categories of the understanding are not objects within experience but make experience possible, the conditions for meaning are not themselves meaningful objects within the same domain. The meaning of a sentence does not arise from grammar – syntactic or semantic coherence – but from a prior normative framework: what counts as a good reason, valid evidence, an ironic or serious utterance. And precisely this cannot be fully formalised without infinite regress. There always remains an implicit moment: that which makes meaning possible is not produced by the system itself.
The difference between syntax and semantics lies at the level of intentionality. And, with a bit of Wittgenstein, we can see that this intentionality is embedded in a way of living. The rules of a language game cannot be infinitely justified by further rules; ultimately, a language game rests on what is done, on how the game is played. The way-of-living is thus not an element of the language game but its condition. Meaning is therefore not merely system-internal consistency but coherence against a background of changing practice. Some things do not function as reasons but as a matter of course. As the Dutch retort has it: “because” is not a reason; if you fall down the stairs, you are quickly at the bottom.
AI lacks such embeddedness. An AI’s language use consists of probabilistic correlations without participation in a way-of-living. It uses language but does not live with language. AI systems (or LLMs) operate within formal parameters. What counts as “true” or “consistent” is determined by statistical probability. For the system, the human way-of-living is merely data; it does not appear as a constitutive condition but as an external object.
This existential structure – the way-of-living as the background of our speaking and acting – precedes explicit representation and therefore cannot be reduced to formal correlations or calculable patterns. When meaning is conceived solely as coherence, its world-involvement disappears. Yet meaning arises within a world of involvement, a world in which we live, act, and speak.
This abstract distinction becomes concrete in the way AI is approached in public and academic debate: with fear or with hope. Fear of loss of human control and autonomy; hope for superior knowledge and infinite possibilities. Both attitudes share the same presupposition: they treat AI as an end and implicitly as something that could surpass humanity in its own domains. These sentiments return concretely in recent discussions about AI’s growing role in decision-making, from the judiciary to healthcare, where both promise and threat are projected onto the same systems. On one hand, there is fear of a shift of responsibility to inscrutable systems (think, for example, of the film Mercy); on the other, the expectation of a correction of human limitation (look how quickly we found that one cancer cell). And with this, AIdabaoth – a coinage explained below – shifts what we consider the bearing ground of judgment and meaning; the essay returns to this point later.
This tendency toward coherence fetishism is already visible in market thinking, democracy, and consequentialism, where consistency is even elevated to a norm. AIdabaoth makes it radically concrete, however, because its coherence is no longer borne by a way-of-living but replaces it. AIdabaoth refers not to a supernatural being but to the familiar leap from Artificial General Intelligence (AGI), which can in principle perform all human cognitive tasks, to Artificial Superintelligence (ASI), which structurally surpasses these capacities (and which in popular usage often coincides with the techno-accelerationist term “singularity”). Once a system has virtually all available knowledge, can model all scenarios, and lets its outcomes flow through our way-of-living (policy, the judiciary, logistics, selection, risk assessment, and so on), an asymmetry arises, one side of which evokes the classical attributes of God: something that transcends all limitations, relativity, and contingencies.
This apotheosis, detached from its classical religious context, can be understood as the elevation of an instance to a normative reference: a point against which all other judgments are measured. This is the demiurgic illusion.
It is assumed that quantitative intensification within the system – more data, higher computing power, more complex models – automatically leads to qualitative improvement. Meaning, however, is not a function of quantity but of situating within a way-of-living, whatever formal rules may follow from it.
The apotheosis of AIdabaoth is therefore not a theological but an epistemological matter: we read the need for an ultimate, final authority into a system that, precisely through its architecture, shows that authority cannot be reduced to a kind of pattern recognition. For convenience, we can divide the demiurgic illusion into three central misapprehensions:
First, there is a misapprehension about totality: totality of information is not equivalent to totality of meaning. Large-scale language models and other generative systems can process immense amounts of text and recognise statistical regularities therein. This totality is, however, merely that: purely statistical. It implies no existential involvement, no lived relation to truth, suffering, finitude, or responsibility. What counts as “knowledge” at all is already determined by a way-of-living, not by the scope of a dataset. The model recognises correlations; the transcendental surplus that makes human meaning possible remains external to the system and is at most represented as patterns in the data.
Second, there is the misapprehension that a (statistical) pattern can be normative. This is an age-old philosophical error: existence implies no normativity; an ought cannot be derived from an is. Probabilities and patterns do not automatically imply a hierarchy of values. A high risk profile, for example, entails no duty of exclusion (a minimal code sketch after this enumeration makes the gap literal). Conversely, a judgment within a lived world implies a risk (for example, cancel culture). A system can simulate normativity but does not embody it. Just as Yaldabaoth cannot be Sophia.
Third, output is confused with judgment. A generated answer never binds a subject. That is to say, an answer can always be revised without involving regret, guilt, or responsibility: the system itself is not a subject bound by its answer. A judgment, however, not only excludes alternatives but implies that someone is accountable for the choice made. In a world where more and more decisions are outsourced to AI systems, this shift becomes visible in discussions about responsibility. Here the demiurgic illusion arises again, and the asymmetry between answer and judgment is forgotten.
The underlying assumption is that a broader scale in the same game automatically yields a better position – as if a calculator that computes a thousand times faster thereby “understands” computing a thousand times better.
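To return to the second misapprehension: the gap between pattern and norm can be made almost literal in code. In the hedged sketch below, every element (the model stub, the feature names, the threshold value) is hypothetical; the statistics deliver only a probability, and the step from probability to exclusion passes through a threshold that no dataset chooses for us.

```python
def risk_score(applicant: dict) -> float:
    """Stand-in for a fitted statistical model: an estimated probability.
    Whatever it returns is an 'is' -- a pattern extracted from data."""
    # Hypothetical placeholder; a real model would be trained on data.
    return 0.73

# The 'ought' enters here, and only here. The number 0.7 is not in the
# data; it is a value judgment for which someone must be accountable.
EXCLUSION_THRESHOLD = 0.7

def decide(applicant: dict) -> str:
    # The comparison looks mechanical, but the threshold it uses was
    # not derived from any statistic -- it was chosen.
    return "exclude" if risk_score(applicant) > EXCLUSION_THRESHOLD else "admit"

print(decide({"income": 30_000, "prior_defaults": 1}))  # -> "exclude"
```

However fine-grained the model becomes, moving the threshold remains a normative act; the pattern itself is silent on where it ought to lie.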
Here the liturgy of AIdabaoth is written. Liturgy orders behaviour: who speaks when, who kneels, who sacrifices; who gets priority, which data counts, which deviation is marked as noise. Everything that does not fit the measurable variables disappears from the ritual. The user learns to move within the logic of AIdabaoth.
In that process, the bearing ground of judgment and meaning shifts. Wittgenstein describes in On Certainty how some convictions are not the result of proof but the background against which proof is possible at all. They form the riverbed within which the stream of reasons flows, the hinge assumptions on which our judging turns.
When an AI system becomes institutionally normative, it begins to sediment the riverbed. Not only answers, but the grammar of what counts as a “good reason” becomes attuned to what is optimisable. Hinge assumptions are not an explicit list of rules: they are the bearing habits of our way-of-living. For a technical system, that bedding appears not as a lived background but as data: something external, measurable, optimisable.
It is not the machine that becomes God; it is we who relocate our riverbed to fit the machine. Our concepts then shift toward what is optimisable, our truths toward what is consistent, and our morality toward what is procedural.
And thus emerges the schizophrenic tension of the project: we build a machine to externalise our judgments, and are surprised when it turns out to have no judgment. We project transcendence onto an immanent system and then experience disenchantment when the projection falls back onto coherence.
One might object that a system that lives with humans over the long term, processes feedback, and adjusts its own criteria will, over time, develop a kind of way-of-living. In that scenario, the transcendental surplus simply emerges from enough interaction. But this overlooks that a way-of-living is not only a structure of behaviour but also an encounter with the other: without the possibility of truly being addressed, and thus of being responsible, a “way-of-living” remains a metaphorical projection.
AIdabaoth makes visible where we are inclined to replace our capacity for judgment with statistical probabilities. It shows how quickly we mistake coherence for bedding. AIdabaoth can never become an absolute intelligence. It is a machine that reveals how quickly we rearrange our world around what is measurable, and how subtle the moment is when a dashboard turns into an altar.
In writing this essay, I have used AI tools as instruments: to find relevant sources and names, to point out gaps in my reasoning, and to get stylistic feedback. However, the choice of the central thesis, the selection and interpretation of sources, the writing of the text, and the ultimate argumentative line remain my own responsibility. Sometimes I asked AI to formulate possible counterarguments or point out weak spots in my argument, to sharpen my own position. In that sense, AI functions here as an advanced calculator for text and ideas: useful for structure and coherence, but not the riverbed in which my judgment flows, and not an excuse for avoiding the risk of failure.
Bibliography
- Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (3): 417–57. https://doi.org/10.1017/S0140525X00005756.
- Wittgenstein, Ludwig. 1953. Philosophical Investigations. Edited by P. M. S. Hacker and Joachim Schulte. Translated by G. E. M. Anscombe, P. M. S. Hacker, and Joachim Schulte. Revised 4th ed. Oxford: Wiley-Blackwell.
- Wittgenstein, Ludwig. 1969. On Certainty. Edited by G. E. M. Anscombe and G. H. von Wright. Translated by Denis Paul and G. E. M. Anscombe. Oxford: Blackwell.
- Russell, Stuart J., and Peter Norvig. 2022. Artificial Intelligence: A Modern Approach. 4th ed. Harlow: Pearson.
Ferdinand P.D.T.C. Brasßel is a European philosopher whose work confronts the deification of artificial intelligence and the erosion of human judgment under technocratic regimes. Drawing on political theology and philosophy of language, he investigates how systems of optimisation quietly assume normative authority while displacing responsibility. He writes against the conversion of coherence into authority and treats philosophy as a practice of resistance rather than commentary.
