What is Pseudoscience?
Distinguishing between science and pseudoscience is problematic
Climate deniers are accused of practicing pseudoscience, as are intelligent design creationists, astrologers, UFOlogists, parapsychologists, practitioners of alternative medicine, and often anyone who strays far from the scientific mainstream. The boundary problem between science and pseudoscience, in fact, is notoriously fraught with definitional disagreements because the categories are too broad and fuzzy on the edges, and the term “pseudoscience” is subject to adjectival abuse against any claim one happens to dislike for any reason. In his 2010 book Nonsense on Stilts (University of Chicago Press), philosopher of science Massimo Pigliucci concedes that there is “no litmus test,” because “the boundaries separating science, nonscience, and pseudoscience are much fuzzier and more permeable than Popper (or, for that matter, most scientists) would have us believe.”
It was Karl Popper who first identified what he called “the demarcation problem” of finding a criterion to distinguish between empirical science, such as the successful 1919 test of Einstein’s general theory of relativity, and pseudoscience, such as Freud’s theories, whose adherents sought only confirming evidence while ignoring disconfirming cases. Einstein’s theory might have been falsified had solar-eclipse data not shown the requisite deflection of starlight bent by the sun’s gravitational field. Freud’s theories, however, could never be disproved, because there was no testable hypothesis open to refutability. Thus, Popper famously declared “falsifiability” as the ultimate criterion of demarcation.
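To make “the requisite deflection” concrete, here is the arithmetic behind the 1919 test (a brief aside; the figures are the standard ones for light grazing the sun’s limb): general relativity predicts a bending angle of

\[
\delta \;=\; \frac{4\, G M_{\odot}}{c^{2} R_{\odot}} \;\approx\; 1.75 \text{ arcseconds},
\]

roughly twice the ~0.87 arcseconds obtained from a purely Newtonian treatment of light as falling particles. Eddington’s eclipse measurements favored the larger value; a result near the Newtonian figure, or no deflection at all, would have counted against Einstein.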
The problem is that many sciences are nonfalsifiable, such as string theory, the neuroscience surrounding consciousness, grand economic models and the extraterrestrial hypothesis. On the last, short of searching every planet around every star in every galaxy in the cosmos, can we ever say with certainty that E.T.s do not exist?
Princeton University historian of science Michael D. Gordin adds in his forthcoming book The Pseudoscience Wars (University of Chicago Press, 2012), “No one in the history of the world has ever self-identified as a pseudoscientist. There is no person who wakes up in the morning and thinks to himself, ‘I’ll just head into my pseudolaboratory and perform some pseudoexperiments to try to confirm my pseudotheories with pseudofacts.’” As Gordin documents with detailed examples, “individual scientists (as distinct from the monolithic ‘scientific community’) designate a doctrine a ‘pseudoscience’ only when they perceive themselves to be threatened—not necessarily by the new ideas themselves, but by what those ideas represent about the authority of science, science’s access to resources, or some other broader social trend. If one is not threatened, there is no need to lash out at the perceived pseudoscience; instead, one continues with one’s work and happily ignores the cranks.”
I call creationism “pseudoscience” not because its proponents are doing bad science—they are not doing science at all—but because they threaten science education in America, they breach the wall separating church and state, and they confuse the public about the nature of evolutionary theory and how science is conducted.
Here, perhaps, is a practical criterion for resolving the demarcation problem: the conduct of scientists as reflected in the pragmatic usefulness of an idea. That is, does the revolutionary new idea generate any interest on the part of working scientists for adoption in their research programs, produce any new lines of research, lead to any new discoveries, or influence any existing hypotheses, models, paradigms or world views? If not, chances are it is pseudoscience.
We can demarcate science from pseudoscience less by what science is and more by what scientists do. Science is a set of methods aimed at testing hypotheses and building theories. If a community of scientists actively adopts a new idea and if that idea then spreads through the field and is incorporated into research that produces useful knowledge reflected in presentations, publications, and especially new lines of inquiry and research, chances are it is science.
This demarcation criterion of usefulness has the advantage of being bottom up instead of top down, egalitarian instead of elitist, nondiscriminatory instead of prejudicial. Let science consumers in the marketplace of ideas determine what constitutes good science, starting with the scientists themselves and filtering through science editors, educators and readers. As for potential consumers of pseudoscience, that’s what skeptics are for, but as always, caveat emptor.
September 7th, 2011 at 3:05 am
This is perfect; I was just debating with someone tonight who was putting forth an unfalsifiable “matrix”-like theory of reality. When I inquired about the usefulness of such a theory, he raised a string of objections: “Well, what is evidence anyway? How do you define science from nonscience?” and so on. At the time, I could only produce an ultimately unsatisfying answer. Now I have more ammo for my skeptical/rational arsenal! : )
September 7th, 2011 at 5:00 am
Close, but not quite there. There were advances in the geocentric model of the universe after Ptolemy that did improve predictions of planetary positions, which was extremely useful (e.g., Tycho Brahe’s model). The vexing issue, IMO, is that the demarcation problem is not one-dimensional. Throw in some additional requirements that all have to be met (e.g., falsifiability, development/testing of theories, intersubjective certifiability, and so on). I don’t know that you’ll get a black/white algorithm, but you’ll get closer to the truth.
September 7th, 2011 at 2:57 pm
What a useful criterion! I’ve added it to the list I teach to students and have at the “Incredible Anthropology!” website.
@Terry: Another thing on my list is Occam’s Razor, which favors Copernicus’ elegant heliocentrism over geocentrism despite the usefulness of Brahe’s model.
But rather than saying that these are “requirements that all have to be met” for an idea to be considered science, I prefer a looser, Jeff Foxworthy approach: “It might be science if . . . ” This allows for situations like Shermer’s examples of string theory and the extraterrestrial hypothesis.
September 7th, 2011 at 6:39 pm
I do not think that it is fair to label an idea as science or pseudoscience; it is the *approach* to studying that idea that deserves the label.
Take creationism, for example: you could examine the biblical account of creation, derive some ‘predictions’ (observables that would differ between the creationist model and the evolutionary model), and look for them.
Why won’t this demonstrate anything to creationists? Because their technique is unscientific – not because the contention that the Earth was created by a powerful being is inherently unscientific.
September 7th, 2011 at 7:26 pm
This is a good rule of thumb. I am afraid, however, that it is less useful when applied outside the field of “hard science.” Of course, that term brings up an entirely different debate.
I mean, though, what happens if you apply this criterion to the social sciences? To the humanities? I study literature — is that science, a pseudoscience, or does it not need to pass the test because no one needs it to?
The point I am belatedly making is that this is a semantic argument, targeting specifically those “fields” that claim to be sciences, forcing us to reiterate the modern definition of that term. Though I am myself a physicalist, I wonder if we really need the term “pseudoscience” at all. I guess “wrong” just doesn’t have the same ring to it.
September 7th, 2011 at 7:35 pm
There was incredible luck in Einstein’s test: a later analysis of the equipment showed that the slop in the measurements was larger than the result. A 50-50 chance could have put the results the other way, and we would have waited years for confirmation. In the meantime, would Einstein’s theory have been pseudoscience because it was of no use?
September 10th, 2011 at 2:04 am
Electroconvulsive therapy might fit this criterion well. However, most of us are pretty convinced that early attempts to treat mental illness were barbaric and severely uninformed – practitioners and researchers weren’t even close to being aware of how much they didn’t know about the brain or mental illness. This criterion doesn’t demarcate between emerging scientific disciplines and those that are more consolidated in methodology, such as the Einstein example that Popper cited.
September 15th, 2011 at 8:38 am
Science makes definitive predictions, which are prior, feasible, quantitative, non-adjustable, and unique to the theory being tested.
Pseudoscience (like string/brane theory) cannot make definitive predictions.
And that’s the difference in a nutshell.
RL Oldershaw
Fractal Cosmology
September 17th, 2011 at 3:43 pm
The problem of distinguishing between science and pseudoscience is well illustrated by the fact that Galileo apparently ignored astrology but attacked Kepler’s idea that the ocean tides were caused by the moon as occult and childish. The probable explanation for Galileo’s behaviour is touched on in the article: astrology was no threat to him, but Kepler’s tidal theory was in direct competition with Galileo’s own incorrect idea.
The problem nowadays is that the more science is seen as an alternative to religion, the more its believers seem to feel a religious duty to accept that anything normally considered science must be valid. It is notable that the author specifically seeks to define pseudoscience in such a way as to exclude string theory, rather than starting with a definition of pseudoscience and then applying that test to string theory.
The idea of falsifiability actually dates back to Newton, who made the distinction between ‘hypotheses’ about the nature of the universe and ‘experimental philosophy’. String theory is not proper science, not only because it makes no testable predictions, but also because it contains meaningless concepts such as space having ten or eleven dimensions. I am also beginning to wonder whether the Higgs mechanism is becoming unfalsifiable: every time the experimentalists fail to find it, the theorists think of an excuse to enable themselves to carry on believing in it. Quarks too are practically unfalsifiable, because their masses are so vague that no experimental evidence of particle masses could ever disprove the theory. The correct particle theory is experimentally testable, because in it all particles are made from electric charges; for instance, an electron is a single negative charge and a positron a single positive charge, whilst a proton is made from 1250 negative charges and 1251 positive charges, and an antiproton 1250 positive and 1251 negative. For more details see http://squishtheory.wordpress.com/.
November 3rd, 2012 at 4:11 pm
Thank you for your article and the select comments. Together, they make a fine treatment of the difference between science and pseudoscience.
That said, I have a problem with the “usefulness” criterion for distinguishing science from pseudoscience in that it casts every new offering as pseudoscience until it is vetted and moves through the review chain to acceptance. Even if a body of knowledge is founded on a new theory that is falsifiable, and even if that theory stands up to rigorous scrutiny, it is likely to retain its pseudoscience label if it contradicts the accepted view. After all, a theory can never be proven correct, and the stalwarts can always say that more testing needs to be done. Wouldn’t it be more prudent – assuming that replacing old ideas with better ones is the ultimate goal of science regardless of who is a casualty – to regard a new offering as science until it is proven false and (or) deemed to have little use? Then again, suppose that a new offering withstands rigorous testing, yet it proves to be of little immediate practical value. Should discovering some use for a valid body of knowledge generated by the testable theory sometime in the future be the criterion for shedding its pseudoscience label?
Don Magyar
Humanology.com
January 18th, 2013 at 3:21 pm
What about Carl Sagan’s Baloney Detection Kit, which I think is quite good at detecting pseudoscience? (And regarding the “Occam’s razor” comment above: it is about the simplest explanation that DOES fit the data, not just the simplest explanation alone.) A toy checklist sketch follows after the full list below.
The following are suggested as tools for testing arguments and detecting fallacious or fraudulent arguments:
Wherever possible there must be independent confirmation of the facts.
Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
Arguments from authority carry little weight (in science there are no “authorities”).
Spin more than one hypothesis – don’t simply run with the first idea that caught your fancy.
Try not to get overly attached to a hypothesis just because it’s yours.
Quantify, wherever possible.
If there is a chain of argument every link in the chain must work.
“Occam’s razor” – if there are two hypotheses that explain the data equally well, choose the simpler.
Ask whether the hypothesis can, at least in principle, be falsified (shown to be false by some unambiguous test). In other words, is it testable? Can others duplicate the experiment and get the same result?
Additional issues are:
Conduct control experiments – especially “double-blind” experiments where the person taking measurements is not aware of which are the test and which are the control subjects.
Check for confounding factors – separate the variables.
Common fallacies of logic and rhetoric:
Ad hominem – attacking the arguer and not the argument.
Argument from “authority”.
Argument from adverse consequences (putting pressure on the decision maker by pointing out dire consequences of an “unfavourable” decision).
Appeal to ignorance (absence of evidence is not evidence of absence).
Special pleading (typically referring to god’s will).
Begging the question (assuming an answer in the way the question is phrased).
Observational selection (counting the hits and forgetting the misses).
Statistics of small numbers (such as drawing conclusions from inadequate sample sizes).
Misunderstanding the nature of statistics (President Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below-average intelligence!).
Inconsistency (e.g. military expenditures based on worst case scenarios but scientific projections on environmental dangers thriftily ignored because they are not “proved”).
Non sequitur – “it does not follow” – the logic falls down.
Post hoc, ergo propter hoc – “it happened after so it was caused by” – confusion of cause and effect.
Meaningless question (“What happens when an irresistible force meets an immovable object?”).
Excluded middle – considering only the two extremes in a range of possibilities (making the “other side” look worse than it really is).
Short-term v. long-term – a subset of excluded middle (“why pursue fundamental science when we have so huge a budget deficit?”).
Slippery slope – a subset of excluded middle – unwarranted extrapolation of the effects (give an inch and they will take a mile).
Confusion of correlation and causation.
Straw man – caricaturing (or stereotyping) a position to make it easier to attack.
Suppressed evidence or half-truths.
Weasel words – for example, use of euphemisms for war such as “police action” to get around limitations on Presidential powers. “An important art of politicians is to find new names for institutions which under old names have become odious to the public”.
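As a rough illustration of how the kit can be applied, here is a minimal checklist sketch in Python. The question wordings, the sample claim, and the yes/no answers are hypothetical, condensed from Sagan’s tools above; the code only tallies which checks a claim passes or fails.

# Toy "baloney detection" checklist: walk a claim through a condensed
# version of Sagan's tools and report which ones it fails. The pass/fail
# judgments are supplied by the caller; the script only tallies them.

CHECKLIST = [
    "Is there independent confirmation of the facts?",
    "Has the claim faced substantive debate from knowledgeable proponents of other views?",
    "Does the argument rest on evidence rather than authority?",
    "Were alternative hypotheses considered and tested?",
    "Does every link in the chain of argument hold up?",
    "Is it the simplest hypothesis that fits the data (Occam's razor)?",
    "Is the claim falsifiable, at least in principle, and reproducible by others?",
]

def review_claim(claim, answers):
    """Print how many checks the claim passes and list the failures."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("Provide one True/False answer per checklist question.")
    failures = [q for q, ok in zip(CHECKLIST, answers) if not ok]
    print(f"Claim: {claim}")
    print(f"Passed {len(CHECKLIST) - len(failures)} of {len(CHECKLIST)} checks.")
    for q in failures:
        print(f"  FAILED: {q}")

# Hypothetical example: an astrology-style claim.
review_claim(
    "Planetary positions at birth determine personality.",
    [False, False, False, False, True, False, True],
)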