Scientific Bankruptcy
(First published in 2012)
I have a serious problem with the way we conduct science today. In particle physics especially, the method has been broken in order to feed our industry of “productivity”.
Science, philosophers would like us to believe, is based on a fundamental set of guidelines. During my education I have repeatedly heard about concepts like the “scientific method”, Occam’s razor and Karl Popper’s falsification criterion. All three represent different ideas about how to produce knowledge in an unbiased way. The thing is, nobody really pays attention to these ideas anymore. Before the rant itself, let me review the three concepts.
The Scientific Method
The scientific method is a very broad term that covers how science should be done. Of course, we can simply look into the back room of any science building and see how it is done; alternatively, we can theorize about how it should be done in order to produce the best knowledge on a given topic.
A popular exposition goes something like this:
- Ask a question (Example: Is light made of waves or particles?).
- Devise a way to observe and answer the question, i.e. an experimental observation (Example: Send light through a prism and a set of slits).
- Construct a hypothesis that explains the observation; this is what we call a model or a theory (Example: Light must behave like waves, since it can interfere and diffract like water waves).
- Based on the hypothesis, devise a new experiment that can test it with predictions beyond the initial observations (Example: Measure the line spectra of heated hydrogen).
- Analyze the data and reject or accept the hypothesis based on the experimental evidence; a toy version of this step is sketched right after the list (Example: We see non-continuous, discrete emission lines which the wave model does not predict).
- Create a new hypothesis that better describes the observations and repeat from step 1.
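As a minimal sketch of the accept/reject step (the numbers are invented, not taken from any real analysis), here is how a simple counting experiment might be judged: compare the observed event count to what the current hypothesis predicts, and turn the mismatch into a p-value and a significance.

```python
# A toy version of step 5: reject or keep a hypothesis based on the data.
# The numbers are invented; the 5-sigma threshold is the particle-physics
# convention for claiming a discovery.
from scipy.stats import norm, poisson

expected = 100.0   # events predicted by the current hypothesis (background only)
observed = 160     # events actually counted

# p-value: probability of seeing at least this many events
# if the current hypothesis is true.
p_value = poisson.sf(observed - 1, expected)

# Convert to the usual "number of sigmas" and compare to the convention.
significance = norm.isf(p_value)
if significance > 5.0:
    print(f"Reject the hypothesis: p = {p_value:.2e} ({significance:.1f} sigma)")
else:
    print(f"Data are compatible with the hypothesis: p = {p_value:.2e}")
```

Any other field would pick a looser threshold; the structure of the step is the same.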
This structure seems sensible to most people, as it resembles how we all interpret the world around us from an early age. We all observe, hypothesize and test throughout our lives; that is the way we make “sense of everything”. In practice it is never as linear as described above: many ideas and experiments pile on top of each other, and an old hypothesis might be extended rather than simply rejected, but overall this scheme is a good and highly successful one. One thing that is missing is a handle on the quality of the hypothesis or theory. Given two theories that both describe the observations equally well, which one is the better theory?
Occam’s razor
William of Ockham was an English friar born in 1288. He has come to be known for his expression:
Frustra fit per plura quod potest fieri per pauciora (It is futile to do with more things that which can be done with fewer).
While the maxim is not in itself especially original, the interesting thing is its use as a logical way of selecting one theory over another, equally valid one. Why describe the world with a complicated equation with hundreds of arbitrary parameters if we can make do with only 10? This idea of economizing on the number of assumptions resonates well with the idea of a mathematical universe. Obviously, any new model that introduces more complexity than the previous one must also come with an additional amount of justification. It is rather easy to create new ideas that can describe everything, especially if it is impossible to really test the underpinnings of the theory. Falsification is a critical feature of any scientific model.
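A modern, statistical way to make this trade-off concrete (my own toy illustration, certainly not how Ockham phrased it) is to score competing fits with a criterion that rewards accuracy but charges for every extra free parameter, for instance the Akaike information criterion:

```python
# Toy illustration of Occam's razor as a parameter penalty: two models that fit
# the data about equally well, scored with the Akaike information criterion.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # data generated by a simple law

def aic(y, y_fit, n_params):
    """AIC for Gaussian residuals: 2k + n*log(RSS/n); lower is better."""
    rss = np.sum((y - y_fit) ** 2)
    return 2 * n_params + y.size * np.log(rss / y.size)

for degree in (1, 10):                            # simple model vs. over-complicated one
    coeffs = np.polyfit(x, y, degree)
    y_fit = np.polyval(coeffs, x)
    print(f"degree {degree:2d}: AIC = {aic(y, y_fit, degree + 1):8.1f}")
```

On data generated by a simple law, the eleven-parameter polynomial buys almost no extra accuracy, so the parameter penalty hands the win to the straight line.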
Karl Popper’s Falsification criterion
Not many philosophers of science are taken seriously by scientists; as Richard Feynman jokingly put it, “Philosophy of science is about as useful to scientists as ornithology is to birds”. One philosopher who has nevertheless gained some respect is Karl Popper. Popper was interested in the problem of separating what he considered “real” science from pseudoscience. Consider a postulate such as “A planet of pure carbon exists” versus “All planets are made of pure carbon”. The first statement is scientifically testable and verifiable, but unlike the second it is not falsifiable: no observation could ever refute it. Popper argued that a good hypothesis should be falsifiable, otherwise it is hopeless, from a deductive standpoint, to ever conclude anything. A counter-argument could be that some results are so expected that a verification is more likely than a falsification in a given experiment. In that case verificationism can serve as a shortcut, but the question asked is in itself of very little scientific value, as it does not break with our prior hypothesis.
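Put in logical form (my own restatement, not Popper’s notation), the asymmetry between the two claims is just a difference of quantifier:

```latex
% Existential claim: verifiable by a single observation, never refutable by finitely many.
\exists\, p \in \mathrm{Planets}:\ \mathrm{pureCarbon}(p)
% Universal claim: refutable by a single counterexample, never verifiable by finitely many.
\forall\, p \in \mathrm{Planets}:\ \mathrm{pureCarbon}(p)
```

Popper’s demarcation simply prefers hypotheses of the second, falsifiable kind.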
An act of inconsequence
With the three concepts lined up, we can start breaking down the current state of particle physics. Currently we have a dominating theory, the standard model. As a model it is in itself perhaps not the most beautiful construction: it lacks a coherent philosophical argument and it contains quite a few “free parameters” plugged in by hand. But overall it is very good, and what it lacks in theoretical beauty it makes up for in predictive power. The history of the standard model is quite interesting and filled with experimental and theoretical surprises, but I would like to avoid a long historical intermezzo at this point. Let me summarize by saying that, given some very bold theoretical predictions that turned out to be true, theorists became more ambitious than ever before and began to assume that experimental underpinnings were less crucial for new theories.

Fast forward to 2012. At CERN we are preparing for the third year of LHC running. Many people are excited about the possible discovery of the Higgs boson, and perhaps this will be the year. The search for the Higgs particle is a poster example of good scientific methodology: a standard model without a Higgs particle is in big trouble, perhaps big enough to call it a falsification criterion. So I don’t have anything harsh to say about that. What I do want to criticize is basically everything else we do.

It is very popular in particle physics to conduct searches for what we call “new physics”, meaning something apart from the standard model predictions. That in itself is not too bad, right? We simply devise ways of testing where the standard model fails; falsification, right? The problem is the introduction of new, unmotivated theories that suffer from more or less everything just described above. Take for instance the idea of Supersymmetry or String theory. Supersymmetry is mainly motivated by mathematical beauty. The way we represent the standard model is defined by symmetry groups. There exists a relationship between mathematical symmetries and conservation laws in nature, which means that when we talk about symmetries we usually mean some conserved quantity (like energy, angular momentum or charge). In short, SUSY proposes that the description of matter particles (fermions) and force carriers (bosons) can be unified by introducing additional symmetries between fermions and bosons. That is all very well, but completely unmotivated by evidence. As a mathematical plaything SUSY is very neat, but if we hypothetically decided to look for it we would break with all of the above mentioned concepts:
- Scientific Method: Motivated by experiment: No
- Occam’s Razor: Simpler and better model than the equivalent: No
- Popper’s demarcation: Falsifiable: No
No evidence has been presented to suggest that we should break from the standard model in a way that favors SUSY. By evidence I mean unexplainable deviations from the standard model that could motivate SUSY, not explicit searches for SUSY. As a mathematical construction, Supersymmetry is a vast, nearly uncontainable model space: the simplest SUSY models contain roughly a hundred free parameters, versus roughly 20 for the standard model, so without any prior motivation it is simply a worse description of the same thing. The vastness of the model space also makes falsification hard. Sure, if we decided to look for some random incarnation of SUSY (chosen ad hoc) we might be lucky and find a new particle, but falsifying the complete model is practically impossible. Proponents of SUSY would say that it is a good yardstick, as it gives us calculable predictions on where to look for signs of new physics. But all that really does is create a model bias that either hinders a truly general search for new physics, or widens SUSY as a model so much that we can no longer talk about a specific model but rather about the full model space of quantum field theory.
A roadmap to solution-space
What we should instead do is two things. First, precision measurements of the standard model. Everybody expects the theory to break at some point, but where and how? This is the place to look for new physics. Second, if we really want to search rather than falsify what we already understand, do signature-driven searches. If we know the standard model requires CPT invariance, look for all signatures of it breaking, not just the few suggested by a random model (a toy sketch of such a model-agnostic search follows at the end of this post).

On the theory side, one obvious goal would be a better standard model: fewer parameters, a new representation. Given what we have found over the last 50 years, the standard model is something of a patchwork. Can we construct a “nicer” model in the Occamian sense, one that simply describes reality as we know it in a more concise way? Another focus could be creating a model that is easier to use; perhaps new methods in computer science and mathematics can lead to a refactoring of the current model.

Not forgetting String theory: grand unification is obviously hard, and testing it especially has proven difficult. GUT theories are certainly relevant, but in a way that goes beyond particle physics. I would like to see a working theory that simply contains all the known effects of reality. That in itself is potentially worthwhile even if it doesn’t beat the standard model or general relativity individually. Here Occam’s razor and Popper must work in tandem: a unified theory is likely to be complicated, but not much more complicated than the world views it combines. Other unifications (Newton, Faraday/Maxwell and Einstein) all ended up describing more dynamics than the original parts, with simpler theories.

The real problem with all of this is that we have turned particle physics into an industry. It is very easy to produce new models when all you have to do is dream up a new term for the standard model Lagrangian. It is also easy to do phenomenology on the new theories when all you have to do is plug the Lagrangian into a piece of code and create Feynman diagrams, matrix elements and, in the end, simulated events. And it is easy to be an experimenter when you know exactly what to look for; you don’t even need to build a new experiment, as the effective signatures must be visible with more or less the same observables as the standard model; after all, it is just a perturbation of the status quo. We need to change this charade.
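To make the signature-driven idea a bit more concrete, here is a toy sketch (entirely my own, with invented numbers) of a model-agnostic bump hunt: fit a smooth background to a spectrum and flag any localized excess, without asking which model would have put it there.

```python
# Toy model-agnostic "bump hunt": look for any localized excess over a smooth
# background fit, without assuming a specific new-physics model.
# All numbers below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spectrum: an exponentially falling background plus a small bump.
mass = np.concatenate([rng.exponential(200.0, 20000) + 100.0,   # "background"
                       rng.normal(450.0, 10.0, 300)])           # "signal"
counts, edges = np.histogram(mass, bins=60, range=(100.0, 700.0))
centres = 0.5 * (edges[:-1] + edges[1:])

# Fit a smooth background shape (a straight line in log-counts, i.e. an
# exponential) to the whole spectrum.
slope, intercept = np.polyfit(centres, np.log(np.clip(counts, 1, None)), 1)
background = np.exp(slope * centres + intercept)

# Flag every bin whose excess over the fitted background is locally significant.
significance = (counts - background) / np.sqrt(background)
for m, z in zip(centres, significance):
    if z > 3.0:
        print(f"excess near {m:.0f} GeV: {z:.1f} sigma (local)")
```

A real general search would of course handle trial factors (the look-elsewhere effect), systematic uncertainties and a proper background model, but the point stands: the search is defined by the signature, a bump anywhere, not by a particular theory.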