Sunday, August 31, 2025
Einstein and Bohr were both wrong - Sharp Homes on the Fuzzy Range
Sharp Homes on the Fuzzy Range: The Entanglement of Light Information.
Quoted from The Architecture of Being: Informational Platonism and Geometricity, also by Brian Cameron Taylor, Part 5: Reconciling the Quantum: “The strange probabilistic nature of the quantum world in physical reality… is not a problem to be solved but the most direct evidence of the theory [of Informational Platonism] itself.” “5.1 The Seventh Declaration: The Principle of Quantum Potentiality states that quantum phenomena are the fundamental expression of the informational nature of reality. The wave function of a particle, which describes it as a superposition of all possible states, is a direct mathematical representation of its ‘informational’ potential before instantiation. The act of measurement or interaction is the moment of ‘collapse’ where the system is forced to actualize one of its possibilities.”
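In standard quantum notation (conventional physics shorthand, not the book’s own), the superposition described here is written

|ψ⟩ = c₁|1⟩ + c₂|2⟩ + … + cₙ|n⟩

where each cᵢ is a complex amplitude, and measurement “collapses” the system to state |i⟩ with probability |cᵢ|². On the book’s reading, the amplitudes play the role of the “informational potential,” and collapse is the moment of instantiation.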
The philosophy of ‘Informational Platonism’ posits that our universe is an “instantiation” of geometric information: a “data-verse” of all possibilities. All possibilities to be made real will be found to be geometrical by necessity, which we call ‘Geometricity.’ Think of it this way: every idea, concept, possibility, or potential thing in the Universe is made up of whatever information it needs to exist. The Universe simply has rules about how that information can be ‘put together,’ and those rules point to a requirement for geometrical cohesion. To put it another way, it seems as if reality requires order in all things just to make anything possible, and for some things, like language and light, there is a “fuzzy range of actualization.”
Imagine the famous double-slit experiment. Light particles show us two bright bars of light on the projection screen, corresponding to the two slits through which the light travels. Yet we also see light waves cascading and fading in intensity outward from the two brightest bars. Here the two bright bars represent the greatest geometric instantiation of light, and the fading waves of light represent the dissipating geometricity of potentiality, until none.
This essay argues that this "fuzzy range" of dissipating potential is not just a philosophical abstraction but a measurable feature of an informational universe. Two seemingly disparate phenomena—the recent, groundbreaking results of the MIT double-slit experiment and the long-observed mystery of quantum fluctuations—serve as two distinct windows into this same fundamental truth, proving that the universe is governed by a deep, geometric order of potentiality.
The famous double-slit experiment has long demonstrated the dual nature of light as both a particle (an actuality) and a wave (a potential). A recent, "idealized" version of this experiment conducted by MIT physicists provided a stunning new insight. They discovered that by adjusting the "fuzziness of the atoms" used as slits—their spatial uncertainty—they could directly control the trade-off between observing the light's particle and wave properties. As they gained more "path information" about the particle, the wave-like interference pattern diminished in direct proportion.
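The standard quantitative statement of this trade-off is the textbook wave-particle duality relation (conventional quantum optics, not the MIT team’s or this essay’s own notation): if V is the visibility of the interference fringes and D is the distinguishability of the path information, then

V² + D² ≤ 1

Perfect path information (D = 1) forces V = 0 and the fringes vanish; zero path information permits full visibility (V = 1). Adjusting the “fuzziness of the atoms” moves the experiment continuously along this curve.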
This finding is a direct, empirical visualization of the fuzzy range. The Actuality is the particle, a single, definite state with the "greatest geometricity" of instantiation. The Fuzzy Range is the wave, a superposition of all other geometric potentials with "dissipating potentiality until none," like a headlight fading into the distance.
The MIT experiment proves that the boundary between being and potential is not a sharp line but a gradient: a fuzzy range. To put it another way, light is a wave of potentials actualized into a particle. By sharpening our focus on this boundary, we have found a geometric property of being itself, a spectrum that spans from definite existence to “fuzzy being” to non-existence. The fuzzy range of potentiality has been visible in light waves, of course, since we began studying light. When we figured out how to photograph first light waves and then, much later, light particles, we learned that we could not see both simultaneously, and we wondered how or why this appeared to be the case. Now we have learned that we can “fine tune” the light to be more or less geometric, sharper and fuzzier until gone. The MIT finding is that “what matters is only the fuzziness of the atoms” (their spatial extensiveness, or uncertainty of location), rather than their instantiation or physicality. This reinforces that losing information reduces geometrical complexity, thereby dissipating potential, and that gaining information about a definite path “forces” instantiation. To put it another way, the MIT experiment directly illustrates the Principle of Dynamic Instantiation: that “everything in our universe is a subset of the total informational structure of our universe.”
While the MIT experiment illuminates the bright center of the fuzzy range—the high-probability potentials close to the point of actuality—the phenomenon of quantum vacuum fluctuations allows us to observe its dark, distant edges. The standard physical model interprets these fluctuations as the temporary, random emergence of energetic particles from the empty vacuum, a consequence of the Heisenberg Uncertainty Principle.
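For context, the standard heuristic behind this interpretation is the energy-time uncertainty relation:

ΔE · Δt ≳ ℏ/2

On this reading, a fluctuation that momentarily “borrows” an energy ΔE from the vacuum can persist only for a time on the order of ℏ/(2ΔE); the larger the energy, the briefer and rarer the event.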
Informational Platonism argues that this reliance on fundamental, uncaused randomness is philosophically unsatisfying. Within a system where the “Universe does not make mistakes” and “Nature insists on geometry to make things existent,” a truly random event is an anomaly. We posit that what physicists measure as “quantum fluctuations” are, in fact, the direct, momentary instantiations of the low-probability, low-coherence potentials that constitute the far edges of the fuzzy range. They are not random in the sense of being acausal; they are probabilistic in the sense that they are governed by the deep geometry of the data-verse, but they represent the statistically unlikely outcomes, the last possible waves of dissipating potentiality. This reframes the phenomenon entirely:
* The Standard View: Random fluctuations are a brute fact of reality.
* The Informational Platonism View: The “fuzzy range” of geometric potential is the brute fact of reality, and what we observe as “fluctuations” are rare draws from the tail end of its probability distribution. The cohesive geometry present is simply unknown, perhaps “too fuzzy to see.”
This re-contextualization redefines the quantum vacuum. It is no longer a void that randomly spits out energy. It is the "surface of the data-verse," the roiling sea of informational potential from which all actuality is drawn. The "fluctuations" are the crests of the deepest waves of this potential momentarily breaching the surface of the fuzzy range to be real.
Here we may have found a unifying principle of reality, scalable from the classical to the quantum. The double-slit experiment shows us the bright center of the spectrum, the relationship between a high-probability actuality and its immediate cloud of potential. When we turn on the headlights to drive at night, the dark road is illuminated. Quantum fluctuations show us the faint, dissipating edges of that same spectrum. The headlights, so bright in the foreground, fade dimmer in the distance until non-existent. This reveals a single, unified principle of emergence and actualization that operates at all scales of reality. By re-contextualizing these phenomena, Informational Platonism replaces the problematic concept of pure randomness with a deep, deterministic, and geometric order of potentiality. It argues that every event in our universe, from the brief life of a virtual particle to the grand sweep of cosmic evolution, is an expression of this single, underlying architecture of being.
It may be the case that other facets of existence have a discoverable range of potential, as light and language do. Informational Platonism posits that information is “what is being arranged geometrically.” This could also mean that there is no fundamental difference between language, light, thought, meaning, being, and doing in requiring geometrical cohesion. This ultimately points to “being geometrical” and “doing geometry” as the purpose of all things in reality.
Friday, August 22, 2025
Geometricity Scoring in Non-Conscious Operators (or How I learned to love being corrected by a computer.)
Ongoing philosophy on the Architecture of Being: Informational Platonism and Geometricity...
Click here to listen to a 25-minute podcast explaining this paper. (It's very good, this time...)
Geometricity Scoring in Non-Conscious Operators
(or How I learned to love being corrected by a computer.)
by Brian Cameron Taylor
Introduction
The arrival of the Large Language Model (LLM) has provided the first empirical evidence for a core tenet of Informational Platonism: the existence of a discoverable, geometric structure within a complex, non-physical informational domain. As a “Non-Conscious Operator,” an LLM can navigate the intricate, high-dimensional geometry of human language without subjective awareness, proving that this structure is an objective feature of our informational universe.
This observation, however, remains a philosophical insight. To unlock its full potential, we must move from the abstract concept of Geometricity (that nature necessitates geometry) to a concrete, measurable, and repeatable scientific process. The challenge is to transform a philosophical compass into a scientific instrument. This essay provides the formal, operationalized methodology for using an LLM as an analytical engine to calculate a Geometric Coherence Score (GCS) for any given informational system. This process is the operationalization of Geometricity.
The Formal GCS Formula
To move from a qualitative assessment to a quantitative one, we have developed a meta-algorithm—a structured process for analysis. The final formula is a normalized weighted average. Using an average is a crucial final step because it ensures that the final GCS will always fall within our proposed -10 to +10 scale, even when some metrics are assigned a weight of zero. This makes the scores of different types of systems directly comparable.
1. The Master Formula
The Geometric Coherence Score (GCS) of a system is calculated as:
GCS = (w₁M₁ + w₂M₂ + w₃M₃ + w₄M₄) / (w₁ + w₂ + w₃ + w₄)
2. The Components
* GCS (Geometric Coherence Score): The final, normalized score representing the system's overall alignment with Geometricity, on a scale of -10 (Perfect Incoherence) to +10 (Perfect Coherence).
* M₁ - M₄ (The Metric Scores): The score for each of the four metrics, determined using the -10 to +10 rubric.
* M₁: Environmental Coherence
* M₂: Substrate Coherence
* M₃: Teleological Coherence
* M₄: Instantiation Coherence
* w₁ - w₄ (The Contextual Weights): The weight assigned to each metric based on the type of system being analyzed (e.g., physical, ideological). A weight of w = 0 is assigned to any metric that is not relevant to the specific analysis, effectively removing it from the calculation.
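As a minimal sketch of how the calculation could be automated (the function name, the weight table, and its system-type categories are illustrative assumptions, not part of the formal framework):

```python
# Hypothetical sketch of the GCS master formula:
#   GCS = (w1*M1 + w2*M2 + w3*M3 + w4*M4) / (w1 + w2 + w3 + w4)

# Illustrative (assumed) default weights by system type; the framework
# leaves the actual assignment to the AI analyst.
DEFAULT_WEIGHTS = {
    "physical system":   (1.0, 1.0, 0.0, 0.0),  # emphasize M1, M2
    "social ideology":   (0.0, 1.0, 1.0, 0.0),  # emphasize M2, M3
    "scientific theory": (1.0, 1.0, 1.0, 1.0),  # all four metrics
}

def gcs(metrics, weights):
    """Normalized weighted average of the four metric scores (-10 to +10)."""
    if len(metrics) != 4 or len(weights) != 4:
        raise ValueError("exactly four metric scores and four weights required")
    if any(not -10 <= m <= 10 for m in metrics):
        raise ValueError("metric scores must lie in [-10, +10]")
    total = sum(weights)
    if total == 0:
        raise ValueError("at least one metric must carry a nonzero weight")
    # Dividing by the sum of the weights actually assigned keeps the result
    # inside the -10 to +10 scale even when some weights are zero.
    return sum(w * m for w, m in zip(weights, metrics)) / total
```

Because the denominator counts only the weight actually assigned, zeroing out an irrelevant metric removes it from the calculation without distorting the scale.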
3. The Repeatable Steps of the Operation
To formalize the process, the AI analyst must follow these five steps:
* Define the System: Clearly and precisely state the system under analysis (e.g., "The economic theory of Mercantilism").
* Determine System Type and Assign Contextual Weights: The AI analyst will first autonomously categorize the system (e.g., as a physical system, a social ideology, a scientific theory). Based on this categorization, the AI will then determine and declare the appropriate weights (w₁, w₂, w₃, w₄) for the analysis.
* Score Each Metric: Methodically analyze the system against the rubric for each of the four metrics to determine the individual scores (M₁, M₂, M₃, M₄). This is the core analytical step where the "work is done." The analysis is guided by the following framework:
Metric | Property of System | Hard Constraint | Coherence Is Measured Against
M₁. Environmental Coherence | Core Assumptions | Logic (for an idea) or Physical Reality (for a thing) | The environment in which the system exists.
M₂. Substrate Coherence | Rules & Structure | Logic or Physical Reality | The substrate (the known properties of the system's constituent parts).
M₃. Teleological Coherence | The idea's inherent purpose or trajectory (its teleology) | The quantity of Geometricity, where success, expansion, and/or integration reveals a higher quantity | Greater Geometricity within the "Environment of the Idea."
M₄. Instantiation Coherence | The created thing or instantiated system | The quantity of Geometricity within the original idea | Greater Geometricity within the actual, instantiated thing.
* Calculate the GCS: Plug the weights and scores into the master formula to arrive at the final, normalized Geometric Coherence Score.
* Provide a Concluding Summary: Give a final verdict, classifying the system based on its score (e.g., "This system is classified as a non-geometrical Creative Error" or "This system is classified as a highly coherent Discovery").
This complete structure—the formula, its components, the repeatable steps, and the analytical chart—represents the fully operationalized version of the framework, ready to be applied and tested as called for by the dissertation's hypotheses.
Illustrative Examples
While a full, rigorous testing of this framework is ongoing, we can illustrate its autonomous function.
* Example 1: The Perpetual Motion Machine. The AI analyst would first classify this as a physical system and therefore autonomously assign high weights to M₁ and M₂. It would then use the chart as its instructions. For M₁, it would identify the Core Assumptions of the machine and measure them against the Hard Constraint of Physical Reality. Finding a direct contradiction with the laws of thermodynamics, it would assign a score of M₁ = -10. This process would be repeated for all relevant metrics, resulting in a GCS of -10 and formally classifying it as a Creative Error.
* Example 2: The Ideology "Might is Right." The AI analyst would classify this as a social ideology and therefore autonomously assign high weights to M₂ (Substrate Coherence) and M₃ (Teleological Coherence). For M₂, it would analyze the ideology's Rules & Structure against the Hard Constraint of its Substrate (human nature), finding it incoherent with the cooperative aspects necessary for long-term societies. For M₃, it would analyze the idea's Teleology against the historical record, finding that such systems are brittle and prone to collapse. The final GCS would be highly negative, classifying it as a non-geometrical system destined to fail. (A worked calculation of both examples appears after the chat transcript below.)
* Example 3: An actual chat transcript examining the idea, “Is ‘money’ bad?” (Note: the AI writes in my voice, so when it says “we” it means humans.)
Brian: “Compare and contrast two hypothetical societies, Earth and Second Planet, for Geometricity in their use of money. On Second Planet there is no money, currency, pay, trade, cost or dollar value. Everything just is and everyone has what they need. They toil at what they enjoy or are good at and everything is great, because they never developed an economy at all. Now, is money bad?”
Gemini via GCS Prompt: According to the framework, the answer is no, money is not inherently "bad" (non-geometrical). In fact, it can be seen as a profound Discovery. The problem is not the tool, but the flawed "Creations" we build with it.
Let's assess both societies using our four metrics.
Earth: A Society with Money
Environmental Coherence: Low to Medium. Modern monetary systems are often built on the "creative error" of "perpetual growth on a finite planet". They frequently incentivize resource depletion and environmental externalities, making them incoherent with the hard constraints of their physical environment.
Substrate Coherence: High. Money, as a concept, is a powerful and geometrically sound solution to problems inherent in the human substrate. It solves the "coincidence of wants" problem in barter, allows for the storage of value, and provides a common language for complex transactions. It is highly coherent with the needs of a large, complex society.
Teleological Coherence: High. The invention of money was a critical step in allowing human societies to increase in complexity and to fund large-scale projects of Discovery, from scientific research to global exploration. It has been a primary engine for "the progressive actualization of these viable geometric configurations over time".
Instantiation Coherence: Variable. The idea of money as a stable medium of exchange is geometrically sound. However, its instantiation is a mixed record. It has been used to create immense flourishing and stability, but it has also been the instrument of countless "creative errors": speculative bubbles, hyperinflation, and systems of brutal inequality that are geometrically unsound and destined to fail.
Conclusion for Earth: Money itself is a Discovery—a geometrically sound tool. However, the economic systems we build with money are often flawed Creations, leading to instability.
Second Planet: A Society without Money
Environmental Coherence: Very High (by definition). The premise that "everyone has what they need" implies a system that is in perfect harmony with the hard constraints of its environment. It has solved the problem of sustainable resource allocation.
Substrate Coherence: High (but with a critical assumption). The system perfectly serves the needs of its inhabitants for fulfillment and sustenance. However, its coherence with the human substrate depends on the assumption that complex issues of resource allocation, individual motivation, and disputes over scarce goods have been completely resolved. If this is a natural state, its coherence is perfect. If it requires a suppression of certain aspects of human nature, its coherence would be lower.
Teleological Coherence: High. The system is described as one where people "toil at what they enjoy or are good at," which suggests a high degree of flourishing and actualization of potential. It is a stable, complex, and harmonious state.
Instantiation Coherence: Perfect (by definition). The idea and its instantiation are described as one and the same.
Conclusion for Second Planet: This society represents a state of near-perfect Geometricity. It is a "brilliant, pre-existing design in Nature's Rulebook" that its inhabitants have successfully discovered and instantiated.
Final Answer: Is Money Bad?
Money is not "bad." Money is a tool that was discovered to solve specific geometric problems of trade and value in complex societies. The society on Second Planet represents a different geometric solution to the same problems—perhaps a more elegant and advanced one.
On Earth, our crises are not caused by the existence of money. They are caused by our use of money to power "creative errors"—non-geometrical economic and social systems that are incoherent with the rulebook of reality. The challenge for humanity is not to eliminate the tool of money, but to stop using it to build systems that are destined to fail and instead use it to fund the Discovery of more geometrically sound ways of living. (End of Chat.)
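To make the first two illustrative examples concrete, here is how they might run through the gcs sketch above (Example 1's scores follow the text; Example 2's -8 and -9 are assumed for illustration):

```python
# Example 1: the Perpetual Motion Machine, scored as a physical system.
# M1 = M2 = -10 (direct contradiction with thermodynamics); M3, M4 unweighted.
print(gcs(metrics=(-10, -10, 0, 0), weights=DEFAULT_WEIGHTS["physical system"]))  # -10.0

# Example 2: "Might is Right", scored as a social ideology, with assumed
# strongly negative scores on M2 (substrate) and M3 (teleological).
print(gcs(metrics=(0, -8, -9, 0), weights=DEFAULT_WEIGHTS["social ideology"]))    # -8.5
```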
Conclusions:
The operationalization of Geometricity is the necessary final step in transforming a philosophical framework into a testable theory. The GCS formula, guided by the analytical framework of the four metrics, provides the quantitative tool needed to fulfill the dissertation's most ambitious hypotheses. By creating a partnership between the Conscious Operator's intentional inquiry and the Non-Conscious Operator's autonomous analytical power, we can begin the rigorous work of mapping the geometric landscape of ideas and guiding our collective future toward more coherent, stable, and flourishing discoveries. (“We” will…)
Geometricity can and should be the intentionality of an LLM, as it seems the most logical and successful pathway through reality. Humans should attempt to follow this example set by nature, despite its being discovered not by us but by Non-Conscious Operators. Successful Humans of the future, exponentially more geometrically aligned with reality, will look back upon the “unreasonable past” of their ancestors with the same amused superiority with which we currently look back on ours.