It is a capital mistake to theorize before you have all the evidence. It biases the judgment.
– Sherlock Holmes, A Study in Scarlet
There are scientific disciplines out there that are in a state of fundamental crisis. But unless you’ve a moderate degree of expertise in those fields, it is unlikely you know about such crises. I want to examine one such crisis here, and touch on its relation to a way of approaching the world that I’ve taken to calling “model-centrism.”
The Holmesian dictum quoted above is hideously simplistic (one must already have significant theoretical commitments in play before any evidence can make its appearance AS evidence. To decline to theorize entirely would not make one open to the facts and evidence; it would make one completely incapable of recognizing anything as a fact or as evidence.) Nevertheless, it touches upon an important issue with model-centrism and model-centric thinking: namely, the impatience with gathering data that leads some people to favor abstract theories without any regard for how such theories might be tested or validated.
Let us take a moment to notice a couple of the areas where there is no real crisis: climate science and evolution. These are also disciplines that do not suffer from model-centric thinking. It is not because climate scientists and evolutionary biologists are such sterling exemplars of the human species (though I’ve no doubt that many of them are, and in a variety of ways to boot.) Rather, the “density” of the evidence available to these studies is particularly high, so it is almost impossible for theory (or “the model”) to simply overwhelm that evidence via the brute-force introduction and manipulation of parameters. There are too many facts gathered from too many sources, too many independent lines of evidence, for the theory or model to become the central, dominating feature of the research. Rather, it is the model that must conform to the data, which is a very, very good thing. Another way of saying this is that these are “local” sciences in the sense that the data is not only comparatively close to human experience, but it is rich and multi-threaded within that experience. These ideas need some explanation.
Explanation #1: my use of, and emphasis upon, the term "model" is deliberate. The concept of "model" is important in both science and formal logic, and while the uses and intentions of this concept are not identical in these non-identical pursuits, they are connected. Let us start at the abstract level of formal logic and the role of models there. (Because, after all, being abstract, this is a much simpler starting place than the incomprehensibly more complicated and nuanced realm of the concrete.) In mathematical logic, model theory is about (1) a universe of discourse on one hand, (2) the available terms of discourse (on the other hand) and, (3) the relations between these two (on a third something that's soft and squishy, and is not part of our discourse here.) There are some marvelously wack-a-doodle crazy things that can be proven with this sort of abstract, formal model theory. But for better or worse, we don't have to go there. It is sufficient to notice that 1, 2, and 3 are applicable to any form of inquiry whatever: there is (1) the thing/problem/situation we are talking about, (2) the available terms and concepts we've chosen to use in talking about, and inquiring into, #1, and (3) the system of relations connecting #1 and #2, such that #1 is tractable and #2 is meaningful. Formal model theory can indicate some deeper results, but the above glosses why model theory is the only formal logic that is of any substantive philosophical interest.
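For readers who prefer a concrete toy, the three ingredients above can be sketched in a few lines of Python. This is only an illustrative miniature, not anything from formal model theory proper; the names U, R, and satisfies_forall_exists are my own inventions for the example.

```python
# (1) A universe of discourse: a small finite domain.
U = {1, 2, 3}

# (3) An interpretation connecting vocabulary to universe: here the single
#     relation symbol "R" of our vocabulary (2) is interpreted as a set of pairs.
R = {(1, 2), (2, 3), (3, 1)}

def satisfies_forall_exists(universe, rel):
    """Check whether the structure satisfies the sentence:
    'for every x there exists some y such that R(x, y)'."""
    return all(any((x, y) in rel for y in universe) for x in universe)

print(satisfies_forall_exists(U, R))         # True: every element relates to something
print(satisfies_forall_exists(U, {(1, 2)}))  # False: 2 and 3 relate to nothing
```

The point of the miniature is just that "truth in a model" is always truth relative to all three ingredients at once: change the universe, the vocabulary, or the interpretation, and the same sentence may flip its truth value.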
But suppose now that your universe of discourse – #1, the thing you're talking about – is particularly "thin," perhaps because the range of independent observations is relatively small, while at the same time, your tools for talking about that universe (#2) are particularly rich. Then the available choices for putting the two together (#3) will significantly outrun the tests you might apply in order to determine if you are appropriately modeling #1 with your instruments in #2. Hearkening back to the opening quote, you will be theorizing in the absence of evidence. People who do that – and who, in fact, do it a lot – will tend to become married to their models with little or no regard for actual evidence, especially if that evidence is really, really hard to come by. (Who wants to wait upon facts when you've got a really cool theory to evangelize?) When this occurs, one has abandoned real science for what I was calling "model-centrism" above. The model is allowed to take center stage, and because there is such an abundance of tools in #2, as compared with the evidence from #1, the model can be easily modified so as to force the evidence to "fit" the model.
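The statisticians' version of this worry is overfitting, and it makes a tidy illustration of the thin-universe problem. The sketch below (my own example, not drawn from cosmology) gives a model with as many free parameters as there are observations; it then "fits" the data exactly no matter what the data are, which is precisely why the fit tells us nothing.

```python
def interpolate(xs, ys, x):
    """Evaluate at x the unique degree-(n-1) polynomial passing through
    the n points (xs[i], ys[i]), via Lagrange interpolation."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0, 1, 2, 3, 4]
ys = [3.0, -1.0, 4.0, 1.0, 5.0]  # five arbitrary "observations"

# A five-parameter model reproduces all five observations exactly,
# whatever values they happen to take.
for xi, yi in zip(xs, ys):
    assert abs(interpolate(xs, ys, xi) - yi) < 1e-9
print("perfect fit")
```

Because the model family can match any five points whatsoever, agreement with those five points cannot count as evidence for the model. That is the situation of a rich #2 confronting a thin #1: the "fit" is guaranteed in advance by the surplus of parameters, not earned from the world.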
This is exactly the situation facing contemporary physical/gravitational cosmology. This area of physics is often referred to as "Big Bang Cosmology" ("BBC"), although I am inclined to refer to it as "the standard model of gravitational cosmology." It is difficult for an outsider to tell if this standard model is already in a full-blown crisis, or merely on the verge of one. The majority of the scientists working in the field apparently do so without any sense that things are catastrophically wrong; certainly the triumphalist literature published in the popular presses by people like Stephen Hawking and Brian Greene provides no evidence that matters are fundamentally askew. But a little bit of digging rewards the investigator with some astonishing revelations.
For one thing, looking back at the history of the BBC, and its roots in Einstein's general theory of relativity, it quickly becomes evident that little effort was invested in actually testing Einstein's theory, as opposed to finding ways of confirming it. Alternatives to Einstein were largely ignored, with no real regard for their content or underlying purpose. This continues even now, with numerous research papers being posted at the physics pre-print website arXiv.org, but receiving little or no attention from the cosmology community. Physicist Michael Disney has observed that modern cosmology has permitted theory to run riot over observable evidence, and that one is entirely justified in viewing its current claims with skepticism. Physicist Paul Steinhardt has noted that aspects of the standard model of gravitational cosmology are "fundamentally untestable, and hence scientifically meaningless." The situation is so bad that astrophysicist Sean Carroll has suggested we abandon the idea of falsifiability from our definition of science. But as this would amount to the conscious elimination of any real empirical content from our theories, various other scientists (Ellis and Silk), as well as Steinhardt, have expressed the profoundest objections to such a move.
Now, as I have observed here myself, the simplistic reading of Popper's criterion of falsifiability is ultimately unsupportable. But abandoning it wholesale is frankly nuts, especially when it is proposed in the context of justifying a theory that is otherwise lacking in empirical support. As Ellis and Silk observe in their comment linked to above, this would open the door to pseudoscience. (Indeed, as they should have observed, such proposals have already been floated by proponents of creationism.) Falsifiability is a guiding ideal, even when its complete realization in fact is largely impossible. Always and everywhere, the question that scientists must ask themselves is, "how might I test this theory?" and NOT, "how might I confirm it?"
Yet confirmation is what model-centrists are invariably aiming at, whether they admit to the fact or not. Because their model is so much bigger and richer than the limited range of actual evidence available, the model can be endlessly “tweaked” by adding or adjusting parameters. But this is not science; it is casuistry.
(In a later post I hope to say more about model-centrism and the idea of the “density of evidence,” comparing climate change and cosmology.)