Nine times out of ten (probably closer to ninety-nine times out of one hundred) when someone starts talking about, much less demanding, “proof” – proof of anything – unless they are discussing whiskey[1], they almost certainly have no idea what they are talking about. This is especially true in the empirical sciences, where various anti- or pseudo-scientific quacks, climate change denialists, creationist ideologues, and others like them, will insist that the fatuous twaddle they are spewing is perfectly reasonable since, after all, they (the quacks) have not been “proven” wrong, while the actual scientific literature has failed to absolutely “prove” its case. These claims are so childish that one must almost wonder if the denialists and others like them might actually know that what they are saying is not just bullshit (that last being a technical, philosophical term), but an outright lie. I am myself, however, disinclined to credit people with the level of intelligence needed to pull off such a clever conspiracy when nothing else in their lives gives any evidence of such nuanced and incisive reasoning. As a very loose and general rule, people are far more likely to have no idea what they are talking about than to be talking about it very cleverly.

The idea of proof in mathematics (the only venue where non-liquor related uses have any meaning) had become so vexed by the end of the 19th Century, that the field of mathematical logic was, in essence, invented with the purpose of sorting matters out. Matters kept resisting being sorted, and along the way the nose of the mathematical camel got into the philosophical tent, and ended up swallowing philosophical logic whole for some decades that followed. Even today, the issue of how to teach logic, and what logic to teach, has not been particularly well sorted out in philosophy. So what might be said about the nature of proof, such that we do not have to become facile with mathematics, yet can still avoid being gulled by credulously accepting demands for, or putative statements of, proof?

Let us back up a little and start with some basics. An argument (in the philosophical sense) is basically a finite list of statements, the last statement of which is the conclusion. Notice that the list can be as short as a single statement which, by definition, is the conclusion. Notice also that the argument can be as vapid and vacuous as you please: there is no requirement – qua argument – that the argument exhibit the slightest sign of logical merit. This sort of thing, which exists in abundance, will often encourage more thoughtful persons to turn to the proof of whiskey (or rum).

After discussing the nature of an argument simpliciter, most logic texts will go on to examine the much more subtle notion of a valid argument. This is usually interpreted to mean that, IF the premises (the statements that come before the conclusion) are true, THEN the conclusion follows from these premises by formal, logical necessity. There are philosophical subtleties to the idea of validity (that is, of being valid) that are seldom, if ever, adequately discussed. I won’t discuss them either, as I’d need a book, and all I have is a blog post. The very, very basic idea, though, is that there are formal rules which, when followed, will connect premises with conclusions in a formally necessary way. I want to mention one of these rules, because it will open up the discussion about some of the issues around the proof concept.

Before doing that, however, I want to make one note about one of the differences between logicians and mathematicians. As I mentioned above, the logician worries about things like, “If the premises are true …” In contrast, the mathematician says, “Screw that! Of course the premises are true! Now let’s prove stuff!” Mathematicians don’t concern themselves so much with whether their premises are true, as with whether their premises are interesting; and by “interesting” they mean: can I develop clever, insightful, aesthetically satisfying, original ideas that create, or connect with other, useful or exciting research endeavors? Thus, for a mathematician, the question is not whether the premises lead to true conclusions, but rather whether interesting conclusions justify the use of just these premises.

The rule I wish to discuss is modus tollendo ponens (always use the Latin name; it simultaneously impresses, intimidates, and annoys people – the trifecta of philosophy!), also known as the “disjunctive syllogism.” Symbolically, the rule looks like this:

A v B
~A
—————
B

In words, this says, “A or B is the case; not-A is the case; therefore B must be the case.” (The “A” and the “B” are interchangeable; one could also have not-B (~B), and therefore A.) The “A” and the “B” are statements, or even collections of statements. The rule basically asserts that, once you’ve divided things up between two collections of ideas (where “collection” here can be one or more statements), if you can eliminate one of those possibilities, you have proven that the other possibility must be the case. This line of reasoning is exemplified in the solution of sudoku puzzles. As one works one’s way through a sudoku, one gathers the nine symbols (numbers or whatever) into two groups at each square, such that one group can be positively excluded (~A), therefore showing that the second group (B) is the only one that merits further consideration. This process is repeated until “B” comprises a single number or symbol, which is then the number/symbol that fills that square.
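The sudoku process described above can be sketched in a few lines of code. This is only a toy illustration of the disjunctive syllogism at a single square, not a full solver; the names `eliminate`, `square`, and the particular ruled-out groups are my own invention for the example.

```python
def eliminate(candidates: set[int], ruled_out: set[int]) -> set[int]:
    """A or B holds (the answer lies in `candidates`); not-A holds
    (everything in `ruled_out` is impossible); therefore B: whatever
    remains must contain the answer."""
    return candidates - ruled_out

# Start with all nine symbols possible for one square.
square = set(range(1, 10))

# Each constraint (row, column, 3x3 box) excludes a group of symbols (~A).
square = eliminate(square, {1, 2, 3})   # already present in this row
square = eliminate(square, {5, 9})      # already present in this column
square = eliminate(square, {4, 6, 8})   # already present in this box

# Repeated elimination leaves "B" as a single symbol, which fills the square.
print(square)  # prints {7}
```

Each call is one application of the rule: the disjunction is the candidate set, the negation is the excluded group, and the conclusion is the set difference.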

This is all fine and well, so long as one is dealing with an entirely fixed, limited, and thoroughly determined set of possibilities between which a choice must be made. But what happens when some or all of those conveniently defined boundaries aren’t there? For example, suppose A and B are infinite collections; what if we cannot even find a rule to determine when something is, or is not, in one or the other collection? This was a very real issue that confronted logicians, mathematicians, and those people who went on to become computer scientists, in the early to mid 20th Century. And the answer they discovered is, there is no answer to this question! Which is to say, there are questions which defy answers; proofs that not only do not exist, but which cannot possibly exist. Beyond (or before) this, even within the nominal realm of “ordinary” mathematics, one can play with infinities so outrageous that one can derive “proofs,” such that the only response to these “proofs” is an exasperated, “That’s not mathematics! That’s theology!” (Which, according to legend, at least, actually occurred with one of the great mathematician David Hilbert‘s proofs.)

The failure to find a “rule” for when some arbitrary “x” is a member of A or B is a weaker version of the idea of whether or not A and/or (“&/v”) B are “constructible.” When they are not, it is far from obvious that the innocent looking “not-A” (~A) has any cognitive content whatsoever. This was the argument advanced by the intuitionists (sometimes called “constructivists”) whose “founding father” was L. E. J. Brouwer. These folks argued that there are really TWO kinds of negation (“not”) that have been conflated by linguistic habit and sloppy thinking: one negation tells us only that A-is-not, but it denies us any conclusion about whether B-really-is. The other kind of negation applies to “smaller,” constructible collections, and thus licenses the full power of the disjunctive syllogism (~A implies B). This might seem subtle, arcane, difficult, and/or obscure, but it is not a patch on how ridiculous things get once you move into “substructural” logics. At this level of formalism, negation pairs multiply without bound. Welcome to proof hell.
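For the curious, the distinction can be made concrete in a proof assistant such as Lean (my illustration, not anything from the post itself). The disjunctive syllogism, applied to an actually constructed disjunction, goes through constructively; what requires an explicit appeal to classical logic is the stronger negation principle that lets us pass from not-not-A back to A.

```lean
-- Constructively fine: given an actual disjunction A ∨ B and a
-- refutation of A, we can produce B.
example (A B : Prop) (h : A ∨ B) (na : ¬A) : B :=
  h.elim (fun a => absurd a na) (fun b => b)

-- Not constructively available: recovering A from ¬¬A requires an
-- explicit classical axiom (here, Classical.byContradiction).
example (A : Prop) (nna : ¬¬A) : A :=
  Classical.byContradiction nna
```

The second example is where the intuitionist digs in: without the classical axiom, knowing that “A-is-not” fails does not by itself hand us a construction of A.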

The point of this little divagation is to bring home the fact that “proof” is hard, so hard that outside the formal simplifications (to the point of triviality) that one finds in purely mathematical contexts, the call for “proof” is about as singularly meaningless a demand as one could imagine. Because once one moves out of the purely formal situations where one effectively exercises complete control over “both sides of the equation” (useful premises on the one hand, interesting and appealing conclusions on the other), then matters become genuinely difficult. Mathematics is simple, because proof is at least possible. Physics is only a little harder, but already at this level the notion of proof has become nonsense. Push things just one step further, to the level of ordinary experience, and matters are infinitely worse. Even as triumphalists declare that physics can explain everything, it is demonstrable, on purely logical/semantic grounds, that physics cannot account for even the most basic elements of human experience, such as consciousness, cognition, and value.

So the next time you encounter people demanding “proof,” understand that you are dealing with people who have no clue what they are talking about.

And have a drink …

[1] For the record, I’m a rum drinker.