Mark Hampton
- Mark Hampton (Participant)
Hi Chris,
Why could the most fundamental particles we detect not be interpreted as the physical manifestations of actual entities within Whitehead’s framework? To be coherent we would need to understand the particle as also having a mental pole that is not being measured. We would also need to consider the particle as correlated with an actual “occasion”: there is a temporal aspect where the particle is continually becoming and does not exist as a substance (it just appears that way when we measure the physical pole).
Perhaps there are physical particles more fundamental than those we know now but the discrete nature of actual entities implies there is a “smallest” particle.
One of the challenges would be to detect the mental pole but maybe Goethe’s science would allow for that – watch enough collider experiments until you can feel the mental pole of the most fundamental particle.
The actual entities are the fundamental processes of reality, that is to say they are real in both the physical and mental poles. Eternal objects are not real but are potentials (they have no physical aspect).
If we feel an indivisible particle then we might be able to yell “eureka” from inside the lab!
Maybe a physicist is already having these experiences but is just too embarrassed to tell her colleagues 🙂
- Mark Hampton (Participant), July 31, 2024 at 10:31 am, in reply to: Do mathematical “laws” or eternal objects function as Forms??? #29192
Hi Bill,
I did not have the metaphor with quantum events in mind (it does not seem to make sense to think of actual occasions having a scale). One way to think of this is to remember that “space” arises from actual occasions, so they do not have a “size”. The term “no scale” might be clearer than “scale-free” (which might be misunderstood as “all scales”). I hope Matt clarifies this further; while we are waiting, here is a take from Matt’s Whiteheadian chatGPT:
Actual occasions are the fundamental units of reality, and they do not scale in the way that societies do. Societies are emergent structures formed by the coordination and integration of multiple actual occasions, whereas actual occasions themselves remain the basic, indivisible units of becoming.
Whitehead’s actual occasions are the “final real things” of which the world is made, as he states in “Process and Reality.” They are the most basic elements of reality, each representing a single act of experience or becoming. These occasions are momentary events that do not persist through time but rather contribute to the ongoing process of reality by their succession.
Societies, on the other hand, are composed of multiple actual occasions that exhibit a certain pattern of order and continuity over time. These societies can be physical objects, living organisms, or even larger structures like social groups. The key aspect of societies is that they emerge from the relationships and interactions among actual occasions, creating a stable and enduring pattern that actual occasions alone do not possess.
This distinction is critical in understanding Whitehead’s metaphysics. While actual occasions are the building blocks, societies are the structures that emerge from the collective behavior of these building blocks. Societies scale because they can encompass larger and more complex arrangements of actual occasions, leading to emergent properties and behaviors not found in individual occasions.
To put it succinctly, actual occasions are the indivisible units of becoming, and societies are the emergent structures that arise from the coordinated activity of many actual occasions. This understanding aligns with Whitehead’s view that reality is fundamentally processual, with the basic units being the transient actual occasions and the emergent, enduring patterns being the societies they form.
In summary, while actual occasions do not scale, societies do, and it is through the interactions and relationships among actual occasions that the complexity and richness of the world emerge. This distinction helps clarify the nature of Whitehead’s metaphysical framework and the role of actual occasions and societies within it.
- Mark Hampton (Participant), July 31, 2024 at 12:51 am, in reply to: Do mathematical “laws” or eternal objects function as Forms??? #29168
Hi Eric,
Here is a quote from Whitehead that I am taking from a secondary source:
“The actual world is built up of actual occasions and by the ontological principle whatever things there are in any sense of ‘existence’ are derived by abstraction from actual occasions. . . . An actual occasion is the limiting type of an event with only one member.” (PR 73)
Regarding Schmidt, I have the impression he is not adopting Whitehead’s process-centric perspective and is trying to map Whitehead to a traditional materialist perspective. In Whitehead’s view, there is perceived movement because the substance we perceive as moving is actually a continually changing society of actual entities. These actual entities are not static “atoms” but events—each “born” and “dying” at the smallest scale as rapidly as possible.
In this framework, there is no “empty” space. Instead, space is constituted by the extensive relationships between actual entities. Thus, space emerges from the interaction of these entities, rather than being a pre-existing container in which entities are distributed.
The challenge is to think process all the way down. This means understanding that at the most fundamental level, reality is made up of processes (actual entities) rather than static substances. Consequently, movement is not the relocation of static entities through space, but the dynamic process of successive actual entities in a network of relationships.
I believe Whitehead claims that actual entities are the fundamental units of reality. Hopefully, this provides more information for Matt to point out any problems in my understanding.
- Mark Hampton (Participant), July 30, 2024 at 10:16 am, in reply to: Do mathematical “laws” or eternal objects function as Forms??? #29135
Hi Matt,
Thanks for the further background on the evolution of the “maps” in physics. I think we see mechanical materialism was lost quite some time ago. There are remnants of it (even in quantum “mechanics”) but I see this driven more by essentialism than mechanics. Newton was an alchemist, suggesting he did not view the universe through a solely mechanical lens. He seems to have had a much broader, more integrated view than the caricature he gets – he was associating the material with spiritual/mystical beliefs.
I may have communicated poorly. I understand that Whitehead proposes societies that are composed of actual entities. There is no strong emergence, it is organisms/life/experience all the way “down”. For example, a person is a society rather than a single actual entity. There is still the possibility for creativity etc. in each emergent category (e.g. the free will of a person). I am not claiming it is mechanical reductionism, but it is a process reductionism (societies composed of societies and so on until we hit the most fundamental processes).
When I referred to “algorithmic” I meant how Whitehead describes the process of concrescence. There is the lure that encourages creativity, but the description of his metaphysics is itself algorithmic. There is a sequence of steps that defines the process of concrescence.
Am I making sense ? Thanks 🙂
- Mark Hampton (Participant), July 30, 2024 at 4:22 am, in reply to: Do mathematical “laws” or eternal objects function as Forms??? #29127
Hi Matt,
Didn’t Newton demolish the idea of a mechanistic materialism? Gravity introduced a ghost into the machine – “spooky” action at a distance. By the 19th century, field theories were widely accepted in physics, which indicates to me that physicists were not buying into a mechanical materialism well before quantum mechanics completely upended things.
I have the impression that the debate over mechanistic materialism has been closed for centuries as far as physics is concerned. Something similar seems to be the case for many fields – I guess the general public’s understanding is often 100 or more years behind contemporary debates in fundamental fields. In terms of contemporary philosophy, it seems arguing against mechanical materialism is attacking a strawman.
In contemporary science there is still a strong tendency toward reductionism, e.g. molecular “biology”. Whitehead does not seem to address this, as his metaphysics reduces everything to actual occasions. The implication is that we should look to the very smallest possible forms to explain the origin of contemporary scientific challenges like intelligence, life, evolution, and consciousness.
It seems in some ways that Whitehead is still promoting radical reductionism – everything can ultimately be explained by physics (if physics would attend to actual occasions instead of quantum mechanics). He is reducing to actual occasions, not forms, but the underlying reductionist assumption is there.
I can see the logic of reductionism in Whitehead’s historical context, reductionism had taken us far (and was advancing fast with revolutions in relativity and quantum effects), so just keep digging. But it seems, from our historic context, that reductionism is fundamentally problematic, that there are properties that are not reducible to physics. When I say reducible to physics I mean “in theory” as I’m sure we will never be able to literally understand the causal chains that explain a complicated system in terms of the smallest actual occasions – it would be too much information for a human mind to manage. We use abstractions with the assumption that reductionism holds.
Contemporary physics shows a marked concern with processes over static essences. This process-oriented approach is evident in quantum mechanics, relativity, quantum field theory, and the study of complex systems.
Adopting a metaphysics that explicitly supports strong emergence might provide a more robust framework against reductionism. It would explicitly acknowledge that certain properties and phenomena cannot be reduced to their constituent parts and have their own causal powers and explanatory frameworks.
A fairly recent, radical shift in physics is the distinction between simple and complex systems. This came after Whitehead’s work. The algorithmic description of actual occasions does not age well in this regard. The term “complexity science” points to the radical implications (a new science). It would be interesting to see a Whiteheadian critique of more contemporary perspectives like this.
I will end with a related point. If we believe in a universe that tends toward complexity and evolves through God’s lure, then we do not need to worry about the long-term future. If we believe in strong emergence, then it is possible for things to appear and disappear, which gives us a more active role to play in preserving potential futures. From this alternative view, life is not everywhere bursting forth across the universe but, as far as we know, exists only here on planet Earth; our responsibility can then be more engaged.
- Mark Hampton (Participant), July 29, 2024 at 2:31 am, in reply to: Do mathematical “laws” or eternal objects function as Forms??? #29065
It seems causality is one of the beliefs that allows for science.
The “modelling relation” is Rosen’s take on the scientific process. It presents a cycle of:
Encoding natural systems into formal models
Decoding models into predictions/realizations of natural systems
We can distinguish two complementary processes at work in science:
a) theoretical science which is concerned with encoding
b) applied science which is concerned with decoding
Applied science can be seen as a way of positively proving a theory. It only proves the theory within the scope of the natural system, but I think this can be compelling. For example, being able to use Newton’s law of gravity to build a human-scale machine that uses gravity is a good proof that Newton’s law is valid within that context. Popper is perhaps assuming a particular (outdated, it seems) view of science where reductionism is assumed, i.e. laws should be universal (thus unprovable). In a Whiteheadian view there are no ultimate truths (we keep evolving), so it makes no sense to equate “theory” with “universal”.
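Rosen’s cycle can be made concrete with a toy sketch (my own illustration, not Rosen’s): encode observations of a falling object into a formal model, then decode the model back into a prediction about the natural system.

```python
import statistics

# Encoding: observed (time, distance) measurements of a falling object
# are compressed into a formal model d = 0.5 * g * t^2 by estimating g.
observations = [(1.0, 4.9), (2.0, 19.6), (3.0, 44.1)]  # seconds, metres

def encode(obs):
    """Theoretical science: infer the model parameter from observations."""
    return statistics.mean(2 * d / t**2 for t, d in obs)

def decode(g, t):
    """Applied science: use the model to predict a new realization."""
    return 0.5 * g * t**2

g_est = encode(observations)     # estimated g, about 9.8 m/s^2
prediction = decode(g_est, 4.0)  # predicted fall distance at t = 4 s
```

The prediction can then be checked against the natural system itself; agreement within the tested scope is what “proves” the theory in that context, without claiming universality.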
An emergent conception of reality can allow for “laws/theories” that function within a “scale/scope” and are not false because they are not universal. Those same theories can be proven to be true within a particular context through applied science (obviously they may be proven to be false in all contexts or outside of specific contexts).
I like Popper’s idea of seeking to falsify – the lack of awareness in this regard in the “social sciences” has been disastrous (the “reproducibility crisis”).
- Mark Hampton (Participant), July 29, 2024 at 12:39 am, in reply to: Do mathematical “laws” or eternal objects function as Forms??? #29060
Hi Kevin,
I think most physicalists (mis)understand laws as transcendent – something that is imposed on physical processes. However I imagine that with little persuasion most physicalists could understand laws as immanent i.e. the law is a theory for humans to understand processes that do not “follow” laws but “display” behaviors that can be predicted/described with laws.
I would like to be processist in my perspective, but a lot of “layers of the onion” need to be stripped before the lack of an essence is “embodied”. In Whitehead’s metaphysics there is a need for eternal objects, but I do not see that need in my own understanding of processism. Western civilization provides a context where we intuit that a foundation is necessary – this is part of the essentialist paradigm. Whitehead is not foundational in the terms of essentialism, but he is foundational from a process perspective, e.g. reducing process to actual occasions. In other perspectives on processism there is a developed notion of non-foundationalism which does not need a stable foundation (not even a speculative foundation like Whitehead’s proposal). I struggle to see how a coherent processism could be foundational, but Whitehead does have the humility to accept that his system is incoherent.
There seem to be two options:
a) An incoherent foundation
b) A coherent non-foundation
The meaning of the word “coherent” is not the same in those two contexts. In option a), coherent means a reduction to a rational explanation, but Gödel and others explain why that is impossible. Whitehead seems to have understood that, but his Euro-centric perspective missed option b).
The Eastern process philosophies seem to have understood the role of language and culture in ways that we don’t see in the West until Wittgenstein (his later work), and it is not really spelled out until post-structuralism. That insight takes brief flight before being essentialized/grounded in Western postmodern critiques (what a disaster!)
I think Whitehead can best be understood as trying to find a middle ground between essentialism and processism – there are elements of both in his metaphysics.
- Mark Hampton (Participant)
Hi Matt,
Thanks for sharing the quote and thoughts. It is tempting to see intelligence as uniquely human, but I think you could agree that intelligence is a more generic concept. Intelligence can be understood as effective, adaptive anticipation; it can help explain how telos or final causation is realized in processes like evolution and life. Our instincts can also be intelligent. The intuition Whitehead would have us attend to is not only instinctual: we can develop intuition in abstract domains, and intuition can involve learning – intuitions get more reliable and stronger when they are developed, i.e. there is an intelligence.
It does seem that Whitehead missed this aspect in his philosophy, which is understandable: AI emerged as a field in 1956, nearly 30 years after Whitehead’s P&R. The first book to identify “anticipation” as a generic concept worthy of study was Rosen’s Anticipatory Systems in 1985!
There is a common misconception of AI as a tool. It makes sense to try to fit AI into that category because all earlier technologies can be understood in that way. However, the intention of AI research is to create something that is far more than a technology. The goal of much research is ultimately to enable an artificial agent that can pursue goals. From the technology perspective we could then see humans as the tool/technology available to the AI in pursuing its goals (but this is not the right analogy either). This is a very ill-informed objective and lacks the wisdom you stress in your response. Technologists may lack the ability to achieve this end (chatGPT is still not close), but we have been surprised by the capacity of technologists to build things they don’t understand (e.g. genetically modified crops, chatGPT, …).
The more we interact around this topic the more I see an opportunity for you to understand the generic concept of intelligence/anticipation and introduce this into Whitehead’s philosophy.
I also agree that the urgency is to ensure wisdom guides the evolution of our society. It would be great to see AI being put in service of this but that is not what is happening. Maybe a contemporary Whiteheadian perspective can help change that.
Whitehead makes a strong case for philosophy as a contemporary practice that should reflect the contemporary context (in some ways it can only do that) and focus on contemporary concerns (this requires an attention toward contemporary natural sciences and social sciences). AI is likely to have more impact on humanity than nearly any previous technology (I guess it is hard to beat agriculture!), so it would be good to see philosophy being of use before it is too late. The general level of philosophical discussion I see in AI circles is at a stage of unconscious incompetence when it comes to contemporary philosophy.
- Mark Hampton (Participant)
Reading about Goethe leads me to understand modern art as the science of subjectivity complementing the science of objectivity. Whitehead’s message for objective (materialist) modern science seems relatively clear but Whitehead’s message for subjective (materialist) modern art is unclear to me.
The creation and appreciation of art become dynamic activities where the artwork is continuously evolving in meaning and form.
This implies an art form that constantly seeks new expressions, methods, and interpretations, for example, art practices that involve generative algorithms, where the artist sets initial parameters, and the artwork continuously generates new forms.
Whitehead’s philosophy suggests breaking the separation between art and everyday life. Art should be integrated with life, influencing and being influenced by the environment and society.
Whitehead’s philosophy calls for a profound transformation in how we perceive and engage with art, advocating for a shift from elite artists to citizen artists. This involves breaking down barriers to participation and making art creation an integral part of everyday life for all citizens.
Materialism has led to our transactional culture where we consume art and have lost art to artists. Those who do engage in art practices are often captured in the individualism of our consumer culture. Whitehead is calling us to collaborative practices, living the relations rather than pointing at them.
We could see the generative algorithms that generate art (e.g. the image at the start of this thread) as made possible by the theft of images from artists who shared their work without ever agreeing to have AI digest it. Is generative AI the Robin Hood of subjective materialism?
By shifting the focus from individual ownership and credit to communal creativity and shared cultural enrichment, we align the use of generative AI with Whitehead’s philosophical principles. This approach fosters a more inclusive and interconnected art world, where the collective creative potential of the community is realized and celebrated. Generative AI, thus, becomes a tool for advancing communal well-being and moral progress, embodying the relational and process-oriented nature of Whitehead’s vision.
Burn Faust!
- Mark Hampton (Participant)
Hi Don,
Thanks for sharing that. When I look at the images evolving over generations of the model, I see the image becoming less and less photo-realistic. I wonder if Whitehead would see it moving toward the “average” potential of an eternal object. Wouldn’t it be fascinating if AI were telling us something about eternal objects and researchers were interpreting that as “model collapse” because they do not know about Whitehead’s work.
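For readers curious about the mechanics behind “model collapse”: each generation of a model is trained on data produced by the previous generation, and the fitted distribution drifts toward its own average, losing the tails. Here is a deliberately caricatured toy sketch (fitting Gaussians, not the actual LLM experiments) of that feedback loop:

```python
import random
import statistics

random.seed(42)

def train(data):
    """'Train' a model: fit a Gaussian (mean, std) to the data."""
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, std, n):
    """A fitted model favors its own high-probability region, so rare
    (tail) samples are under-represented in its output (caricatured here
    by discarding samples more than 1.5 std from the mean)."""
    samples = [random.gauss(mean, std) for _ in range(4 * n)]
    return [x for x in samples if abs(x - mean) <= 1.5 * std][:n]

# Generation 0 is trained on "real" data; every later generation
# is trained only on the previous generation's output.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
stds = []
for generation in range(8):
    mean, std = train(data)
    stds.append(std)
    data = generate(mean, std, 1000)

# Diversity (std) shrinks generation after generation: "collapse"
# toward the average, just like the images losing photo-realism.
```

The shrinking spread over generations is the toy analogue of the images converging on an “average” look.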
The issue they are highlighting is centered on a materialist view – they are worried about what will happen to LLM when the internet is full of LLM generated content. I am more worried about what happens to society by the time most of the content on the internet is generated by AI.
They are basically running an ill-defined experiment on all of us with no responsibility for the outcome. The one thing we know is that the researchers and engineers will have improved their material situations relative to the majority. If only they understood Whitehead!
- Mark Hampton (Participant)
Hi Chris,
I don’t believe anyone, in this thread, is claiming LLM are sentient in the way a human is sentient. A brick is a society in Whitehead’s metaphysics.
He argues that the upward trend in evolution includes not just biological adaptations but also the active modification of the environment by living beings, particularly humans. This modification includes the construction of tools, dwellings, and other artifacts which are expressions of the human urge to “live well” and “live better”. This aligns with his broader view that evolution encompasses the development of culture, technology, and society.
I hope it is useful to you that I am pointing out that you are still assuming a materialist perspective. I believe we need to be able to “think” from Whitehead’s perspective to understand it. Whitehead is flipping the materialist conception on its head. Potentially we can have a new-found empathy for bricks! Given how destructive we can be when treating the natural world as a resource for bricks etc., it may not be a bad perspective to see the sacred in everything.
Understanding that computers are assemblies of specific types of parts can help us understand the limits of computers, for example, they can’t be alive in the sense of a biological organism. Whitehead is not claiming that consciousness (I assume you include this with the idea of “sentience”) requires biological life. That does not imply LLM are conscious but it leaves the door open to whether a future machine could be conscious.
A key point about our lack of understanding of what is going on with LLM is the emergent properties we do not understand. This requires a completely different way of thinking about the system. While the materialist understanding of the computer performing “steps” is useful for understanding the hardware this is not an appropriate way to understand the systems being simulated. For example if we could simulate a living cell this does not mean living cells are just steps nor does it mean we understand the emergent behaviors in the simulation.
I assume you have some experience with programming computers, and what LLM are doing should not be understood as a program in the traditional sense. There is a small program that executes the LLM, but the LLM is a simulation of a network of artificial neurons, i.e. what we are interested in understanding is not a program with steps. We typically use GPUs, not CPUs, to run LLM, but we could also use dedicated hardware that does not simulate the LLM but has a physical representation of the network and the artificial neurons (this is a common approach for low-power applications).
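To make the distinction concrete, here is a minimal illustrative sketch (not any particular LLM) of what “simulating a network of artificial neurons” means: the surrounding program is just a loop of multiply-accumulate operations, and everything interesting lives in the weights, not in the program’s control flow.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum passed through a nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_matrix, biases):
    """A layer applies many neurons to the same inputs; on a GPU these
    multiply-accumulates run in parallel rather than as sequential steps."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# The "program" here is trivial and fixed; the behavior is determined
# entirely by the weights (hand-picked below, learned in a real network).
hidden = layer([0.5, -1.0], [[1.0, 2.0], [-1.5, 0.5]], [0.0, 0.1])
output = layer(hidden, [[2.0, -2.0]], [0.0])
```

The point is that understanding this code tells you almost nothing about what a trained network with billions of such weights will do, which is why the stepwise-program frame is the wrong level of description.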
I see the motivation for Whitehead’s philosophy as a realization he made about the difference between simple and complex systems. Quantum physics demonstrated complex behavior at the smallest scale and biological systems demonstrated complex behavior at our scale. He realized we need a new metaphysics to understand complexity. LLM are a complex system during their training and this is the area where we have the most resources to make advances in our understanding of complexity.
I hope we can make progress in our understanding of Whitehead and AI with our Whitehead hats on.
- Mark Hampton (Participant)
Hi Douglas,
Whitehead developed his framework based on developments in the material sciences. If Whitehead were with us I would expect him to be keeping a close eye on “advances” in science. The predictable advances in AI pose serious questions for our future and I imagine he would have been sensitive to this.
I understand that Whitehead was not claiming to have a coherent philosophical system, he saw it as an ongoing process.
There are a lot of details about LLM in the public domain. This is why so many organizations are able to produce effective LLM. One of the most effective models (Llama 3 with 405B parameters, from Meta) was just released to the public. It would cost a lot to buy the hardware to run a model like that, but all the details are there. The key papers on the underlying transformer architecture were published back in 2017 (by Google, funnily enough); the field was much more open back then. The main barrier to entry seems to be the huge amounts of data and compute required to train huge models. It is not simple, but I would put it more in the category of “engineering” – that is the only way to explain so many effective LLM being available. There are more and more proprietary attempts at providing value, but there are also models like Llama 3 from Meta that invite university collaboration by publishing.
I would guess AI is in a situation something like the internet in 1998. The stock market internet bubble burst in 2000. The impact of the internet has probably been more significant than nearly anyone anticipated. It seems a good idea to learn how to leverage AI (as you are doing); nearly all of us ended up on the internet whether we wanted that or not!
From a Whiteheadian perspective there is no purely material world. There may be value in understanding AI from Whitehead’s perspective. It is interesting to see how difficult this appears to be.
- Mark Hampton (Participant)
Hi Chris,
You adopted a materialist perspective and from that perspective you are right. From a Whiteheadian perspective it would be naive to imagine the system with the boundaries the materialist defines. There is a history of relations and a process going on that we do not understand.
In terms of what the LLM is doing, you are conflating the way it was trained (next-token prediction) with what it is doing. Consider how a human speaks – one word at a time. The materialist may claim that the human brain is a Bayesian network simply doing next-word prediction. A Whiteheadian would disagree.
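The training/doing distinction can be sketched with a toy bigram model (an extreme simplification of an LLM, for illustration only): the training objective is next-token prediction, but generation is an open-ended sequential process that feeds its own output back in.

```python
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ran".split()

# "Training": learn which token tends to follow which
# (the next-token prediction objective, here by simple counting).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length):
    """Generation: each token is produced one at a time, conditioned on
    what came before, then fed back in as context for the next step."""
    tokens = [start]
    for _ in range(length):
        counts = follows[tokens[-1]]
        if not counts:
            break
        tokens.append(counts.most_common(1)[0][0])  # greedy choice
    return tokens

sentence = generate("the", 4)
```

Even in this caricature, what the trained model *does* (produce a whole utterance) is not the same thing as the objective it was trained on (predict one token), which is the conflation I am pointing at.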
Applying a Whiteheadian perspective to LLM we start with everything being connected to everything. The LLM is part of an evolutionary process.
Given that we have academic research papers cited in this thread claiming that LLM are not well understood, I struggle to understand why you already know the answer. I think this is one of the issues with the materialist perspective: everything that does not fit into that perspective is ignored, or even worse, assumed to be so simple as to not warrant investigation.
I am interested to learn what Whitehead has to offer AI and what AI has to offer Whitehead.
There are several categories of complex system we do not understand; the ones I’m aware of are:
a) intelligence
b) life
c) evolution
d) consciousness
Of those systems, the one that seems most attainable for our scientific understanding, given the current state of the art, is intelligence.
Whitehead reduces life, consciousness, and evolution to fundamental aspects of his metaphysics. He does not need to explain where these come from, as his philosophy takes them as given.
It is less clear to me that Whitehead is doing this with intelligence. He has a decision process in the selection of prehensions, but I have not seen an explanation of the process details by which the decision is intelligent (the word “feeling” might be swapped for “decision” in some cases). I think it only makes sense for a decision to require some intelligence (otherwise it is not a decision); for example, we can see intelligence in the behavior of the simplest life forms.
If Whitehead does not define a process by which intelligence is achieved then AI may have something to contribute in the development of a more coherent Whiteheadian philosophy. If Whitehead does define a process by which intelligence is achieved then this may be a significant contribution to AI (I very much doubt that gem exists but never say never).
The way I try to think of LLM is to look at the broader system, and we immediately see that AI is a social process: the “learning” that is going on happens during the training of a particular model and in the engineering of the next model.
Whitehead tells us to avoid “splitting” the world in the style of the naive materialist and I think we would be wise to consider that perspective when it comes to understanding intelligence and artificial intelligence.
You will notice that I have not claimed the LLM, as an algorithm, is intelligent, and it is certainly not agential. Relational biology provides a formal definition of what an intelligent system requires. When we consider a company like OpenAI as a system, it becomes clear that this process is general artificial intelligence, which raises moral issues. The industrialization of war in the 20th century happened while many people were no doubt unaware of the moral implications. It would be a shame to see naive materialist views take us down that path again, but we literally have the major powers in an AI arms race.
I would be very interested to hear what you think when you put your Whiteheadian hat on!
- Mark Hampton (Participant)
This has always intrigued me too. The practices of institutes promoting relationalism are not exploiting the potential of the internet. Of course this is not easy. It seems someone at some point invested a lot of money into https://cobb.institute and I guess that was quite some time ago. Is this being treated as a depreciating asset?
Here is a tutorial on how to make an “always-on” meeting:
I guess it would take some experimentation to see if a presenter can “take over” the room to mute everyone etc. I would hope that is possible and if it is not then Zoom needs that feedback. In this way we could have a shared space 24/7 and students can use it as they see fit outside of the sessions Matt controls.
- Mark Hampton (Participant)
Yes, sorry. Here is the last paragraph:
Gomez-Marin calls for a more inclusive and pluralistic approach to neuroscience, one that recognizes the subjective experiences that form the basis of scientific inquiry. He underscores the importance of philosophical openness and warns against the dangers of dogmatism and the premature dismissal of innovative theories.
