Would anyone dispute that there has been scientific change? It is difficult to imagine science not changing, since change is one of its greatest strengths, so it is odd that this change, this apparent lack of stability, is presented as part of the case against science. The enemies of science, spread as they are among various ideologies, might argue that since no science can exist without modification, it is foolish to rely on it as a source of accumulated truths. For them, the very act of defining science is nonsensical: there is no method; rationality and evidence are rhetorical devices; and science is fully encultured.
The absence of transcendental criteria for distinguishing science from non-science should not push one into epistemological relativism, yet developments within the social sciences and humanities have produced discussions that focus primarily on the pursuit of prestige or the relationship between ideas and status. This focus on ideology strikes me as cynical, confusing the social role of science with epistemological issues. For these New Cynics, as the philosopher Susan Haack calls them, science may as well be a literary genre.
I accept that knowledge is a product of human actions, but it is not only a product of human actions, and I reject any effort to disguise epistemological relativism as methodological relativism. I tend to adopt what Stephen Cole (1992) refers to as the realist-constructivist view, recognizing that science is socially constructed, but that its constructions are checked or constrained by input from the empirical world.
Okay, but how do we explain science’s success from a sociological perspective? I think it’s important to reflect on scientists’ attempts to meet challenges which concern the truth of conflicting ideas. This is not to say that there aren’t struggles for power within science, but these struggles are regulated by rules inscribed in the scientific social space, leading to the production of knowledge which remains intact until someone comes along with new evidence.
For the sociology buffs out there, I recommend taking a look at Kyung-Man Kim’s paper “What would a Bourdieuan sociology of scientific truth look like?” for more detail regarding this perspective.
Just as it is difficult to determine what counts as science, it is difficult to determine what counts as fake science. One solution is to think of science and pseudoscience as “open concepts, which possess intrinsically fuzzy boundaries and an indefinitely extendable list of indicators” (Lilienfeld et al., 2004, p. 5). Fuzzy does not mean that these boundaries do not exist, but it serves as a warning about what we can expect when we try to implement rules for demarcation rather than guidelines.
Pseudoscience is not bad science; it is something else entirely. Sometimes the two are confused, and some scholars argue that we would be better off viewing pseudoscience as a set of radically flawed practices, i.e., as radically flawed complexes of theories, methods, and techniques (Lugg, 1987). This collapsing of categories is of concern because there are useful distinctions to be made between pseudoscience and bad or fringe science: “just as fake art is not art, fake (or pseudo) science is not science. Pseudoscience should therefore refer to activities … which appear to be scientific but are not” (Resnik, 2000, pp. 252-253).
The danger with labeling something as pseudoscience is that we often emphasize the rhetorical power of the categorization rather than its utility. The emotional pull of pseudoscience as an accusation is powerful, and it can be an effective dismissal of any endeavor that we find unsavory. Take for example Karl Popper’s falsifiability as the criterion of demarcation. Suspicious practices are regularly accused of not being falsifiable, and while this might sway people in a debate, falsificationism is not sufficient as it mainly appeals to those who are looking for a quick fix.
According to Lilienfeld et al. (2004), the “pseudosciences can be conceptualized as possessing a fallible, but nevertheless useful, list of indicators or ‘warning signs.’ The more such warning signs a discipline exhibits, the more it begins to cross the murky dividing line separating science from pseudoscience” (p. 5). This is an interesting observation, but which indicators can we consistently rely upon?
Haack (1997) isn’t interested in lists of methodological indicators, focusing instead on motivation. She distinguishes genuine inquiry from pseudo-inquiry. Pseudo-inquiry, she argues, is “aimed not at finding the truth but at making a case for some conclusion immovably believed in advance.”
Pseudo-inquiry can be further broken down into sham and fake inquiry. A sham reasoner is “concerned, not to find out how things really are, but to make a case for some immovably-held preconceived conviction.” A fake reasoner is “concerned, not to find out how things really are, but to advance himself by making a case for some proposition to the truth-value of which he is indifferent” (Haack, 1997). This distinction seems less satisfying and more difficult to put into practice than a set of pre-established criteria. Haack would perhaps respond that questions certainly need to be asked and data examined, but pseudoscience is an “ample and diverse category, including the many human activities other than inquiry … and of course there are plenty of mixed and borderline cases” (Haack, 2005, p. 7).
While I share Haack’s interest in emphasizing what counts as well-conducted inquiry, and I accept the difficulty of drawing boundaries on the basis of something like the scientific method, unlike her, I think pseudoscience is a useful sociological category. Ideally, we would address individual claims as they are made and assess whether or not they are well supported. This is what Haack would call a moderate position, as she has less strident beliefs about what does and does not count as science than those who identify with the skeptical community.
The least supported claim may end up being true, as has happened in the past, and a pseudoscience could potentially mature into a genuine science. Haack bypasses the demarcation problem by saying “Well, whether it’s science or not, it’s pretty bad, and here’s why in the particular” (Point of Inquiry, 2007). Unfortunately, this method does not address the problem of long-standing claims, models, theories, etc., that have been tested, examined, and thoroughly picked apart, such as neuro-linguistic programming (NLP). Is there no point at which we can assume some probabilistic conclusion regarding the merit of a particular claim or body of claims? It is possible that NLP could transition into genuine science, but this would require a great deal of change, not just within NLP itself, but within the brain sciences, psychology, and even physics. The same goes for Lysenkoism, cereology, and many other examples that have come and gone. Haack thinks it unlikely that there are people being abducted by aliens, but she is comfortable addressing the individual claims as they come in. While her open-mindedness is admirable, and she is persuasive in her condemnation of strict demarcation, we need practical distinctions, and it is not viable to address every individual claim, especially when it is attached to a body of similar claims that have already been addressed ad nauseam.
It is inefficient to address each claim individually, especially when there is an established history of repeated failure to follow any procedure we can agree on as constituting good science. Moreover, determining the motivation of an individual who defends a claim, i.e., separating genuine inquiry from pseudo-inquiry, is not the kind of solution we need when faced with policy decisions, what gets taught in the classroom, or which kind of therapy a patient should consider. While there are some general features of science that we can agree on, such as placing the burden of proof on the person making a claim, we need something more robust for solving our practical problems.
One solution is to consider the consequences of not emphasizing a particular indicator of good science, such as reliability, problem-solving ability, or falsifiability. If the issue under examination is something trivial, such as placing Vicks VapoRub on a child’s feet in an effort to stop a cough, the consequences are not terribly severe; however, going to a chiropractor for chronic asthma raises the stakes and should be treated accordingly. Generally, the standards we invoke should be higher in some areas than in others: “When your health and well being are at stake, it is better to stick with a conventional treatment with a good track record than to take a gamble” (Resnik, 2000, p. 262). Like Haack’s solution, what I have suggested is not particularly systematic, but it provides an avenue for relegating certain dangerous medical practices to the realm of pseudoscience.
What do you think?
Cole, S. (1992). Making Science: Between Nature and Society. Cambridge, MA: Harvard University Press.
Haack, S. (1997). Science, Scientism, and Anti-Science in the Age of Preposterism. Skeptical Inquirer, 21.
Haack, S. (1998). Manifesto of a Passionate Moderate. Chicago: The University of Chicago Press.
Kim, K. (2009). What would a Bourdieuan sociology of scientific truth look like? Social Science Information, 48, 57-79.
Lilienfeld, S. O., Lynn, S. J., & Lohr, J. M. (Eds.). (2004). Science and Pseudoscience in Clinical Psychology. New York: The Guilford Press.
Lugg, A. (1987). Bunkum, Flim-Flam and Quackery: Pseudoscience as a Philosophical Problem. Dialectica, 41(3).
Resnik, D. B. (2000). A Pragmatic Approach to the Demarcation Problem. Studies in History and Philosophy of Science, 31(2).