Getting to this place where I can write what is obvious.
It is not an accident that within a matter of a couple of days, this “Supreme” court has viciously curtailed human rights while indefensibly expanding gun rights. But take a look at the picture below. Look at it carefully. This is not a person.
But if you are a woman, if you are non-binary, LGBTQIA+, it basically has more rights before this court than you do. In other words, before this court, you are not a person. Add BIPOC to the previous list, because unless you are white and male, if you go parading around with an assault weapon you are unlikely to be allowed to survive, never mind pass unharassed, by our massively militarized Law Enforcement.
With the savagely ideological “Supreme” Court prepared to erase the rights of actual human beings on no other account than that they are women (and consequently, don’t count), it seemed like a timely moment to set down my other projects and cast an eye upon the subject of abortion. Now, Whitehead himself never addressed the topic, so no pretense can be made to declare what his thoughts on the subject might have been. We can say, however, that his personal conclusions, were he to have any, are not really relevant here, as we want to develop a view of the subject within the context of process metaphysics, and not any one scholar’s individual declaration. That being said, it must also be added that ways of working out conclusions other than those offered here will also be possible within the stated domain.
First off, what is a “person”? We should immediately drop any thought of conflating “person” with “human being.” “Properly developed” human beings seem clearly to be persons, but not all persons will be human beings, developed or otherwise. Non-terrestrial intelligences, for you science fiction enthusiasts, are clearly persons without being human. But many would argue that terrestrial non-human animals are also persons, deserving of our care and ethical considerations. These humans would be the ones variously involved in animal rights activism and allied concerns. It is a tricky subject that I’ll not pursue here, though I admit to being a little troubled by my failure to embrace vegetarianism. I’m sure you’ll have noticed by now that I’ve not tackled the previous scare-quoted qualifier “properly developed.” I promise, we will get back to that.
But more needs to be said about “person.” A person is an agent, and an agent is something capable of intentional activities, behaviors, and/or stances. There is a philosophical school known as “Personalism” that takes this as a metaphysical “primitive,” which is to say, first premise. There is what we might call the “lite” version, which argues persons are metaphysically primary because there can be no interpretation of the world without intentional agents actually interpreting the world. As stated, this position is very hard to dispute, since any attempt to do so cheats by presupposing an interpreter in the form of a “God’s eye view on the world,” while pretending to be “objective.” But that “God’s eye” is an interpreter, an intentional agent. Then there is the “heavy” version of personalism that says everything is a person (in some sense.) An electron is “interpreting” its world via its electromagnetic field. This is a trickier position, but one that deserves serious treatment, regardless of one’s final conclusions. But the subtleties are beyond the scope of this current essay (or pretty much any essay of only 1450 words.)
Please do not confuse your Google search with my doctoral degree.
So proclaims a t-shirt of mine; one of my favorites, I should add.
In this age of anti-vax infantilism, few things set my teeth on edge like an uneducated buffoon declaring, “I’ve done my research!” (Some “unprofessional” language is going to appear in this blog post. So prepare yourselves.)
“I’ve done my research!” No, in point of fact, you have not, you ridiculous turdwaffle. Because it is damned near a mathematical certainty that you have never done any REsearch in your entire life. You did an internet search, at best, and counted yourself special for having done so.
The company name “Google” has become synonymous with an internet search engine in much the way the company Xerox became synonymous with a photocopier, long after the company itself had lost any real dominance in the field. They were knocked off their throne by Canon, in particular, yet photocopying remained “xeroxing,” even unto this day. Google, despite its despicable and absolutely ruthless pillaging of its users’ privacy, has yet to suffer such a well-deserved fate, but time may yet tell. In the meantime, I will resort to common usage, and speak of “googling” something, without necessarily speaking of Google itself. (You’ll know which one I speak of by whether or not the word is capitalized &/or comes with a gerund.)
A scientist is someone who engages in inquiry to discover new facts.
An engineer is someone who engages in inquiry to discover new applications for known facts.
A technician is someone who engages in inquiry to maintain known applications.
We can add to this the mode of inquiry which characterizes philosophy:
A philosopher is someone who engages in inquiry in order to discover new meanings, and fully understand old ones.
Philosophers aren’t alone in this latter form of inquiry, but as I am a philosopher that is what I am working from. (Arguably, the philosopher’s position is more generalized and abstract than, say, that of the novelist.) I highlight the above so that we may take a poke at that most maddening and obscure subject, the meanings of Whitehead’s terms, (mostly) in his philosophical works. Because you’ll never learn the thinker’s meanings if you do not first learn the thinker’s language. With Whitehead, this means two things. First, you must “get inside” the structure of the man’s thinking, a step the overwhelming majority of scholars have categorically refused to take. The second is that you must disabuse yourself of the notion that, just because Whitehead uses a term that you find familiar, Whitehead is therefore using that term in a way that is familiar to you. This latter is the part that really drives some people – most especially myself – absolutely bananas. We’ll approach these in order.
Now, while the second issue can drive one over the edge, I will add that the first one is pretty frustrating as well. In point of fact, it really, really annoys me. I mean, it REALLY annoys me. Let me illustrate it with a non-Whiteheadian example.
I can’t speak for other cultures, but phrases such as the above and others akin to them are fairly commonplace in American conversation, particularly when the topic involves the foolish choices made when we were young. While often accompanied by an eye roll and a shake of the head in signs of regret, there is just as often a tinge of wistfulness as well, a longing for a return to that kind of vivid recklessness and the electrifying sense of being alive that was at its core. There is a certain legitimacy to that longing – even, and even especially, for the mistakes – at the metaphysical level. For every act of creation is, in an important sense, an error, a mistake, a “failure” to follow the “correct” path. So it is worth a moment to take a look at such things.
Before going any further, I want to dismiss one kind of mistake that is grotesque in its calculated refusal of any possibility of creativity. That is the kind of action “celebrated” by the despicable Jackass films and shows. These aren’t errors of any kind. They are acts of willful stupidity pandering to the lowest element of human character, “entertainment” predicated on laughing derisively at others for pulling absurdist stunts devoid of any talent or art. These programs are simply an extension of the “Good Ol’ Boy’s last words” jokes. There is nothing interesting or amusing about such behavior or the people who wallow in it.
The guiding motto in the life of every natural philosopher should be, “Seek simplicity and distrust it.”
– Alfred North Whitehead, The Concept of Nature (end of chapter VII.)
Ultimately, the only way we know how to measure the complexity of some process or phenomenon – beyond excruciatingly vague and unhelpful statements like, “this is really complicated” – is by measuring how hard it is to solve the mathematical equations used to characterize the problem. All the rest, even when palpably, indisputably true, is just hand-waving. Sometimes hand-waving makes us feel better, because we need to burn off the energy pent up in our frustration. But it never really tells us anything. On the other hand, we really do have some effective means of measuring how hard it is to solve some mathematical equation or other, and we’ve refined such measures significantly over the past fifty years because such measures tell us a great deal about what we can and cannot do with our beloved computers (which includes all of your portable and handheld devices, in case you weren’t sure.)
Some problems simply cannot be solved. This despite the fact that the problems in question seem perfectly reasonable ones that are well and clearly formulated. (Actually, being well formulated makes it easier to demonstrate when a problem cannot be solved.) Some problems can be solved, albeit with certain qualifications, while still others are “simply” and demonstrably solvable. However, saying that a problem is “solvable” – even in the pure and “simple” sense (notice how I keep scare-quoting that word) – doesn’t mean that it can be solved in any useful or practical sense. If the actual computation of a solution ultimately demands more time &/or computer memory space than exists or is possible within the physical universe, then it is unclear how we mere mortals benefit from this theoretical solvability. It is these latter considerations that bring us into the realm of computational complexity.
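The gap between “solvable in principle” and “solvable in any universe we actually inhabit” can be made vivid with a small sketch. The example below is my own illustration, not anything from this post: brute-force search over traveling-salesman tours is a perfectly well-defined, terminating procedure, yet the number of tours to check outruns any plausible computing budget almost immediately.

```python
# A minimal sketch (illustration of mine): a "solvable" problem that is
# practically unsolvable. Exhaustive traveling-salesman search must examine
# (n-1)!/2 distinct tours of n cities; compare that count to cosmic budgets.
import math

def brute_force_tours(n_cities: int) -> int:
    """Number of distinct tours a naive exhaustive search must examine."""
    return math.factorial(n_cities - 1) // 2

# Roughly 10**17 seconds have elapsed since the Big Bang; suppose a machine
# checks a generous 10**9 tours per second (about one per clock tick of a
# 1 GHz core).
AGE_OF_UNIVERSE_SECONDS = 10**17
TOURS_PER_SECOND = 10**9

for n in (10, 20, 30):
    tours = brute_force_tours(n)
    seconds = tours / TOURS_PER_SECOND
    print(f"{n} cities: {tours:.3e} tours, "
          f"finishes within cosmic time: {seconds < AGE_OF_UNIVERSE_SECONDS}")
```

At 10 cities the search is trivial; at 20 it takes about two years; at 30 it already exceeds the age of the universe, and the procedure never stops being “solvable” in the theoretical sense.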
Some sixty-one years ago, the American philosopher Willard Van Orman Quine wrote a famous essay, “On Simple Theories of a Complex World.” Actually, referring to this as a “famous essay” is a tad redundant, since Quine is one of those people who only ever wrote famous essays. But setting that observation (bordering on sour grapes) aside, Quine goes on to observe the difficulty in saying just what does qualify as simplicity. He further observes the legitimate psychological and formal reasons why theory builders so ardently crave simple theories: the simpler the theory, the more readily it can be employed in our various cognitive activities. Of course, too simple a theory leaves us with no purchase on the world whatsoever. “God willed it” is about as simple a theory as you can come up with, but it is also as singularly useless a theory as anyone could ever imagine; it provides absolutely no insight, a complete absence of predictive power, and only an illusion of emotional comfort for those readily distracted by vacuous hand waving.
Quine was writing more than a decade before the emergence of computational complexity as a sub-field of abstract Computer Science, in which upper and lower bounds for kinds of complexity (and thus, conversely, forms of simplicity) were even formulated. But we do now have a variety of ways to address Quine’s concerns about how to characterize complexity and simplicity. I’ll say more about this in a moment. What I want to start with is a more controversial proposition: namely, Quine got it backwards. In a very real sense, it is the world that is fundamentally simple and our theories that are complex.
I see yet another story bemoaning the death of a willfully stupid fool who not only denied the reality of the COVID-19 virus (SARS-CoV-2) but also declared that he would place himself “in God’s hands”, both to avoid catching the disease, and then survive it once caught. Well, God appears to have been too busy trying to wipe a grease stain off those hands to attend to this gentleman’s demands, because said individual got sick, suffered, and died gasping for air while intubated.
I’m going to step outside my usual zone of operation and address a few words at those staggeringly vain individuals who imagine it is their unique privilege to tell God what to do. Now, as a proper Whiteheadian, I do believe that the term “God” has minimal reference and conceptual content. Not even remotely enough to form the kind of center of meaning that one might go to church to celebrate. For that, one must move well beyond Whitehead and into the process theology that owes its source to Charles Hartshorne. Much of this latter, though not all by any means, is also rooted in various interpretations of Christianity. And while even this is beyond the scope of my primary interests, I’m actually going to address my remarks to the vastly more conservative field that tends to identify as evangelical or fundamentalist. (While there can be overlaps, the two groups are NOT the same.) I’ll spare you any fatuous declarations as to either the reality of these people’s God, the truth of their concept(s), or the validity of the Bible. In fact, I’ll be taking these things as given. Rather, what I want to show is that a certain class of behavior that they publicize as evidence of their devout faith is, by their own standards, a gross and indefensible sin. It is not hard to show.
I don’t anticipate any explicitly Whiteheadian considerations this time around, but all my thoughts are informed by my Whitehead scholarship, so you never know. What I want to talk about here is the idea of infinity. I say “idea,” rather than “concept,” because even within the relatively constrained bounds of formal mathematics infinity is not one thing. Outside the bounds of mathematics matters are significantly worse, since little or no effort is made to constrain such talk, or even render it potentially intelligible, with formally legitimate techniques.
Speaking of “outside the bounds,” the ancient Greek word for the infinite is “apeiron” (ἄπειρον), which translates as “unlimited” – the “a” being the negation (“un”) and “peiron” meaning limited or bounded. Clever as they were, the Greeks lacked our additional 2,300 years of mathematical study, so the idea that one can have something that is infinite (unbounded) – for example, the length of the perimeter of a geometrical figure, a perimeter that exceeds any possible length, measurable either in practice or the ideal – that is nevertheless bounded by an easily measured finite figure (a circle, for example) would never have occurred to them. But the figure above, the Koch snowflake, is precisely such a figure. (Details can be found HERE. As is my wont, I skip the technical details, which would take up more text than this blog post.)
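For the numerically inclined, the snowflake’s two limits can be checked directly. The sketch below is my own illustration (assuming unit side length): each refinement step multiplies the perimeter by 4/3, so it grows without bound, while the accumulated area converges to (2√3/5)·s², which fits comfortably inside a finite circle.

```python
# A sketch (my illustration, unit side length assumed) of the Koch snowflake's
# two limits: unbounded perimeter, bounded area.
import math

def koch_perimeter(k: int, s: float = 1.0) -> float:
    """Perimeter after k refinement steps: each step multiplies length by 4/3."""
    return 3 * s * (4 / 3) ** k

def koch_area(k: int, s: float = 1.0) -> float:
    """Area after k steps; converges to (2 * sqrt(3) / 5) * s**2."""
    area = math.sqrt(3) / 4 * s**2  # the starting equilateral triangle
    for i in range(1, k + 1):
        # step i adds 3 * 4**(i-1) new triangles, each with side s / 3**i
        area += 3 * 4 ** (i - 1) * (math.sqrt(3) / 4) * (s / 3 ** i) ** 2
    return area

for k in (0, 5, 50):
    print(f"step {k:2d}: perimeter = {koch_perimeter(k):14.3f}, "
          f"area = {koch_area(k):.6f}")

print("area limit:", 2 * math.sqrt(3) / 5)  # the bound the area never exceeds
```

By step 50 the perimeter is already astronomically long, while the area has long since settled below roughly 0.693: infinite length wrapped inside a finite region.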
Some tasks, processes, “computations,” are too difficult to do in any practical context. Some are so intrinsically hard that, even while they don’t seem especially difficult, God herself could not do them. The first is the problem of computational complexity, the other of computability/solvability. The former, complexity, emerged from the latter, computability, because the problem of computability was more obvious to mathematicians who’d never seen, much less actually used, a computer. But after Alan Turing presented his own abstract model of a computing “machine” (the “Turing Machine,” or TM) to prove the existence of unsolvable mathematical problems, the difference between what could be solved in theory (computability) and what could be solved in practice (complexity) came into view, and methods were developed to investigate the latter as well as the former. This is all by way of summary of, and pointing forward from, the previous post.
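Turing’s unsolvability result can be gestured at in a few lines of code. The sketch below is my own paraphrase of the diagonal argument, not anything from the post, and the names `halts` and `troublemaker` are mine: if a total, always-correct halting decider existed, the self-applying function below would halt exactly when the decider says it doesn’t.

```python
# A sketch of Turing's diagonal argument in Python dress (my own paraphrase).
# Suppose halts(f, x) could always correctly decide whether f(x) halts.

def halts(f, x) -> bool:
    """Hypothetical universal halting decider. This body is a stand-in;
    the argument below shows no correct implementation can exist."""
    raise NotImplementedError("no such decider can exist")

def troublemaker(f):
    # If the decider says f(f) halts, loop forever; otherwise halt at once.
    if halts(f, f):
        while True:
            pass
    return "halted"

# Feeding troublemaker to itself forces the contradiction:
#   halts(troublemaker, troublemaker) == True  -> troublemaker(troublemaker) loops
#   halts(troublemaker, troublemaker) == False -> troublemaker(troublemaker) halts
# Either answer is wrong, so no total, always-correct halts() exists.
```

Note that the contradiction does not depend on any detail of how `halts` might work; it rules out every candidate implementation at once, which is what makes the problem unsolvable rather than merely hard.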
There are theoretical &/or partial workarounds, ways of tricking out the game, for both complexity and computability. For complexity, it is unclear whether the trick can be realized in practice. For computability, it is unclear whether the trick (which is only a partial trick, really) is even physically possible. Still, I’m going to talk a little about both – in the preceding order – and finish with some comments on how these theoretical considerations can be manifested in our considerations of what does and what does not constitute legitimate scientific inquiry, and a few comments closing the circle on analysis versus ontology.