November 22, 2024
In the effort to accelerate the development of artificial intelligence (AI), and more specifically to accomplish artificial general intelligence (AGI), there has been recent talk among developers and within the United States government of the need for an AI equivalent of a “Manhattan Project.” The goal of such a project would be to centralize efforts in order to compete against other nations, secure technological dominance, gain economic leverage, and lock in a national security advantage. A centralized approach of this kind would undoubtedly bring substantial resources to bear on critical AI challenges while also carrying profound risks. If it is pursued with the same reductionist, nominalistic thinking that currently underpins AI research and development, it could deepen and accelerate the potential for widespread systemic collapse. In this paper, I propose a novel approach that uses Goethe’s Color Theory as a framework for teaching relational perception and emergence to artificial intelligence.
Nominalistic AI vs. Relational AI
Our societies and biosphere have endured centuries of nominalism: a philosophical approach that views reality as composed only of discrete, individual, labeled entities or ‘things,’ commonly treated as material objects or data points. Nominalism regards these objects or data points as fixed, isolated entities, defined only by their individual properties; the relational causations between and surrounding them become secondary or irrelevant. Each association with another discrete entity is statistically calculated using the same fragmenting, reductionist metrics. This method of examination and labeling begins and ends with its subject in a static state, with no accounting for temporal effects, dynamic development, or relational influences. Nominalism came into widespread practice in the Middle Ages, after having been conceived and nurtured by certain religious theologies. Those who followed in its philosophical footsteps often claim that this reductionist perspective has advanced our abilities to measure, calculate, and map physical interactions with our environment. But this abstract map is not complete, nor does it portray the territory of true reality, and the fragmenting effects of disregarding the inherent value of relational cohesiveness have left us in a fragile state of incoherence with dangerous systemic consequences. Our species is precariously balanced within a house of cards: nominalistic reductionism in AI underpins systemic collapse, the lack of relational context leads to fragile systems, and linear determinism improperly negates dynamic influences.
Even before artificial intelligence, nominalism exerted both positive and negative influences on Western science and culture. If not for nominalism, some of the medical breakthroughs and technological advancements that are so commonplace in our world today would not be here to benefit our lives. However, the individualism and materialism that nominalism has promoted have also led to the fracturing of our shared values and the breakdown of many of our supportive social systems. Now that nominalism is being operationalized by artificial intelligence, the speed at which this fragmentation affects naturally occurring systems will far outpace the ability of our biosphere, species, and social systems to adjust and adapt, creating extreme fragility and promoting systemic collapse.
While nominalistic AI may function adequately within predefined parameters — solving specific, isolated problems based on discrete data points — this approach falls short in addressing the complexity and dynamism of real-world systems. It is not enough to operate within static, segmented boundaries; artificial intelligence must instead mimic the analog, synthesized continuity of reality. We cannot turn back the clock and reinvent our entire history of technology. Artificial intelligence is here and has already permeated the systems that directly affect our lives. This is where relational AI becomes a necessity.
Relational AI is grounded in the understanding that relationships — between manifested entities, events, and their defining attributes — are not merely peripheral or incidental aspects of reality; they are central to it. Just as we cannot fully understand an ecosystem by isolating individual species or environmental effects from how they interact with one another, we cannot truly understand intelligence, consciousness, or behavior by simply compiling labeled data points and connecting the dots between independent objects. For example, nominalistic weather forecasting models built on static data points may predict short-term conditions but fail to account for how climate systems dynamically evolve over time. Relational AI, on the other hand, would be able to recognize the latent potential for emerging patterns, and by considering the relational dynamics of the system as a whole, it could then assist us in finding ways to mitigate potential dangers and losses.
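To make the contrast concrete, here is a minimal, self-contained sketch in Python. The toy data, variable names, and coefficients are my own invented assumptions, not any real forecasting model: a “nominalistic” predictor sees each station reading as an isolated value, while a “relational” predictor also sees an upwind neighbor and the recent trend.

```python
# Minimal sketch (toy data, invented coefficients): an isolated-feature
# predictor vs. one that also sees peripheral, relational features.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical system: tomorrow's temperature at a station depends on
# today's local reading, an advected influence from an upwind neighbor,
# and the recent rate of change.
local = rng.normal(15.0, 5.0, n)       # today's local temperature
neighbor = rng.normal(15.0, 5.0, n)    # today's upwind neighbor
trend = rng.normal(0.0, 1.0, n)        # recent rate of change
tomorrow = 0.5 * local + 0.4 * neighbor + 2.0 * trend + rng.normal(0.0, 0.5, n)

def rmse_of_fit(features, target):
    """Fit ordinary least squares and return the in-sample RMSE."""
    X = np.column_stack([features, np.ones(len(target))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - target) ** 2)))

# Nominalistic view: the station as a thing-in-itself.
print("isolated features:  ",
      round(rmse_of_fit(np.column_stack([local]), tomorrow), 2))

# Relational view: the same station plus its peripheral influences.
print("relational features:",
      round(rmse_of_fit(np.column_stack([local, neighbor, trend]), tomorrow), 2))
```

Under these assumptions, the relational feature set recovers most of the structure the isolated view cannot, which is the intuition the forecasting example points at.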
Relational AI seeks to model reality’s dynamic systems — recognizing that non-linear relationships influence the autopoietic feedback and the emergent folding and unfolding of the complex systems that make up the whole of reality. For relational AI to break free from the static, rigid constraints of nominalism, a paradigm shift is necessary. This new method and framework, outlined in the Evrostics approach, will instill in AI a safer, more stable, and more ethical recognition of how living beings make sense of the world. By bridging Goethe’s theory with AI training, the Evrostics method aims to expand the ability of artificial intelligence to perceive emerging relations, thereby capturing information that would otherwise be lost. Relational AI would then have the capacity to help us overcome our most complex challenges as humanity strives toward a more sustainable future for ourselves and our biosphere.
Teaching AI Relational Emergence
Artificial intelligence training relies heavily on data. The challenge of teaching relational emergence to AI involves communicating not only the statistical occurrences of each data point and their correlated outcomes but also the relationships between data points as influenced by peripheral data in a causal, cascading effect. The goal is to shift the AI’s focus from solely recognizing linear statistical patterns to also factoring in peripheral influences.
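As one possible illustration of this shift, the sketch below assumes (my assumption, not a claim drawn from the Evrostics literature) that peripheral influence can be approximated as a weighted graph over data points, where a propagation step lets each point be revised by its neighborhood before any statistical pattern is fit. `RelationalPoint` and `propagate` are hypothetical names.

```python
# Hypothetical representation (my assumption): each data point carries its
# own value plus weighted links to the peripheral points that influence it.
from dataclasses import dataclass, field

@dataclass
class RelationalPoint:
    value: float                                   # the point's own measurement
    neighbors: dict = field(default_factory=dict)  # point id -> influence weight

def propagate(points: dict, damping: float = 0.5) -> dict:
    """One cascading step: blend each value with its weighted neighborhood."""
    revised = {}
    for pid, point in points.items():
        if point.neighbors:
            weighted = sum(points[n].value * w for n, w in point.neighbors.items())
            total = sum(point.neighbors.values())
            revised[pid] = (1 - damping) * point.value + damping * weighted / total
        else:
            revised[pid] = point.value             # an isolated point stays fixed
    return revised

# Three points: "b" sits between "a" and "c" and is shaped by both.
points = {
    "a": RelationalPoint(1.0, {"b": 1.0}),
    "b": RelationalPoint(0.0, {"a": 1.0, "c": 1.0}),
    "c": RelationalPoint(-1.0, {"b": 1.0}),
}
print(propagate(points))   # {'a': 0.5, 'b': 0.0, 'c': -0.5}
```

The design choice here is deliberately minimal: a single damped step stands in for the causal, cascading effect, and repeated calls to `propagate` would let influence cascade further outward through the graph.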
This is where we encounter constraints. The foundation of our technological history rests upon a nominalistic framework. Our existing data, meticulously compiled through this labeling practice, encapsulates each discrete data point as a thing-in-itself, connected to others only by statistical occurrences and probabilistic outcomes. Consequently, finding a suitable existing data set is not an easy task. More recent scientific endeavors do involve understanding the nature of complex systems; there are ongoing studies in biosemiosis, protein research, epigenetics, and related fields, but these are not yet far enough along for us to glean a reasonable data set from their work. The time and resources required to explore that avenue would lag far behind the accelerating urgency of addressing impending systemic collapse.
Using Goethe’s Color Theory as a Data Set
In considering how to find and utilize a reasonable data set, we must also account for the computing power required for a model to understand relational emergence. Fortunately, an additional benefit of teaching emergent relational intelligence (ERI) to AI is that it should require far less computing power than nominalistic processing currently demands. In a relational scenario, once the model recognizes the cascading effects of peripheral influences, it expands its perception and requires fewer linear steps to reach completion. Goethe’s Color Theory offers a wonderful opportunity to build a synthetic data set exhibiting this cascading effect: it is set into motion by the contrasting positions of light, shadow, and darkness, followed by the emergence of colors when peripheral information is introduced to the iteration. Once the model learns to correctly anticipate the emergence of an expected color, this capacity can be replicated in modeling other complex systems through algorithmic adjustments or training methodologies. Anticipating other relationally emergent patterns will reveal benefits of artificial intelligence that we have so far only dreamed of. A sketch of what such a synthetic data set might look like follows below.
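Here is one speculative way such a synthetic data set might be generated, under my own simplified reading of Goethe’s edge phenomena: colors arise at boundaries between light and dark, and which color arises depends on which side is light. The encoding, labels, and function names below are illustrative choices, not a published Evrostics data set.

```python
# Speculative sketch: a synthetic data set of Goethe-style edge colors.
# Assumption (mine): a color emerges at a light-dark boundary, and its
# identity depends on which side is light. Labels are illustrative.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(7)

def make_strip(width: int = 9):
    """A 1-D strip of light (1.0) and dark (0.0) with one interior boundary."""
    edge = rng.integers(2, width - 2)        # keep the edge away from the ends
    strip = np.where(np.arange(width) < edge, 1.0, 0.0)
    if rng.integers(0, 2):                   # randomly flip which side is light
        strip = 1.0 - strip
    return strip

def emergent_label(strip, i):
    """The color at position i exists only in relation to its neighbors."""
    if strip[i - 1] > strip[i + 1]:
        return "yellow-red"                  # light passing into dark
    if strip[i - 1] < strip[i + 1]:
        return "blue-violet"                 # dark passing into light
    return "none"                            # uniform surroundings: no emergence

# Build (isolated value, contextual window, label) triples.
samples = []
for _ in range(1000):
    strip = make_strip()
    i = int(rng.integers(1, len(strip) - 1))  # an interior position
    window = tuple(strip[i - 1 : i + 2])      # the point plus its periphery
    samples.append((float(strip[i]), window, emergent_label(strip, i)))

# The isolated value is ambiguous; the contextual window never is.
by_value, by_window = defaultdict(set), defaultdict(set)
for value, window, label in samples:
    by_value[value].add(label)
    by_window[window].add(label)

print("labels per isolated value: ", dict(by_value))
print("ambiguous contextual windows:",
      sum(1 for labels in by_window.values() if len(labels) > 1))  # prints 0
```

The point of the final counts: the isolated intensity value maps to several possible labels, while every contextual window maps to exactly one, so the emergent color is recoverable only relationally.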
Accomplishing this is achievable. We already have the technological means to reach these goals. The barriers that exist are due to our nominalistic culture and the inability of various disciplines to see beyond their biases. This requires a cross-pollination of knowledge fields. Together, we have the tools that we need; all we must do is use them. The year 2025 will be a landmark in the story of humanity. AI ethics is front and center in the minds of many, and instilling relational perception into AI will help guide our efforts toward fairness and transparency. It is up to us to decide in which direction we will go. Continuing to develop artificial general intelligence (AGI) within a purely nominalistic framework will lead us into treacherous waters that we are clearly unable to traverse. Relational AI is a sound ship, one that will sail us into a brighter future on much calmer seas. I welcome all interested collaborators to join me on this rewarding journey.