Alert readers will have noticed that not much has happened on this blog over the last couple of months. One of my excuses is that it is summer. Another is that I have been contributing to the Early Modern Experimental Philosophy blog, which I encourage you to read if you do not already do so. But the real reason is that in October I will begin a post-doc at the Max Planck Institute for the History of Science, and I am busy tying up loose ends in Cambridge before I head east. When regular blogging resumes in autumn, the following topics will be high on the agenda:

-- remarks on my paper that appeared in the June issue of Historical Studies in the Natural Sciences. There are some big issues that didn't make it into the paper and that I would like to highlight in a blog post.

-- a response to the interesting discussion about the internal/external distinction that Darin Hayton has summarised here.

-- a continuation of my series of posts on the symmetry principle in the history of science. There is more to say about this principle, and I promise I will say it more succinctly than I did here or here.

-- a continuation of a series of posts on Thomas Kuhn's legacy for historians.
Tuesday, July 23, 2013
Yesterday morning nearly 2000 historians of science gathered in a vertiginous lecture hall at the University of Manchester, UK. Hasok Chang, the keynote speaker, told them that they could benefit from studying the technical content of science. Not a very controversial claim, you might think. After all, science does have technical content, just as it has journals, military contracts, and priority disputes. The fact that the talk was controversial—and the initial reaction on Twitter suggests that it was—shows just how sensitive historians of science still are to what was once called the internal/external debate. Having written about this debate before on this blog, I can’t help commenting on the talk. I agreed with much of Chang's talk, but not all of it. I think that in some respects he went too far in defending internal history of science, and that in other respects he did not go far enough. Update: the video of Chang's talk can now be viewed for free here: http://www.ichstm2013.com/blog/audio-and-video/. In the rest of this post I've inserted (in square brackets) the time of key events in the video. Here’s what I agree with: The internal/external debate is still a live one. It may take a different form than it once did, and some of us may be repelled by the very idea that the debate continues. But there is no doubt that some historians still worry that the history of science is in danger of “losing its science,” while other historians worry that it is bad history or bad politics, or both, to separate the technical content of science from its social and political aspects. Worriers of the former kind include the historian of physics Olivier Darrigol, whom Chang quoted in his talk [2:50]. If Chang cited worriers of the latter kind, I don’t remember who they were and would be grateful to anyone who could jog my memory [I could not find any such citations when I watched the video of the talk. 
Update to the update: Michael Weiss, watching more carefully than I, has noted the citation of Kathryn Olesko at 4:50 and the less direct citation of Kathryn Olesko and Robert Kohler at 5:30]. The internal/external distinction is coherent and useful. Chang made free use of the terms “internal,” “external,” “internalistic,” and “externalistic.” He also asserted that the internal/external distinction is not a false dichotomy. He dismissed some other distinctions, such as those between practice and theory and between the social and intellectual (of which more below). But he had no shame in insisting on the internal/external distinction as one way of dividing up past science, and of dividing up the books and articles we write about past science. Internal history of science can be good history. One of the key slides in Chang’s presentation was a list of “reasons for doing history” that included describing, understanding, using, overcoming, and appreciating the past [42:00]. Chang’s point was that internal history of science could serve all of these aims. In other slides he argued that internal history of science could serve other aims, like teaching science in schools and advancing present-day science. But he made it clear that these extra-historical goals were not the only ones that internal history could serve. In his view, as in mine, internal history of science can be a genuinely historical enterprise. Internal history of science has a bad image among professional historians of science. This was probably one of Chang’s most controversial claims, largely because of its rhetorical effect. Like anyone who claims that the pendulum has swung too far in one direction, Chang gave encouragement to those who would swing it in the opposite direction. This will alarm historians who think that the pendulum has not yet swung far enough away from internalism, or who worry that historians encouraged by Chang will swing it too far back to internalism. 
But as far as I can tell, Chang’s basic point is correct. Since about 1985, the best way to mark oneself as naïve and out-of-touch, at least among historians of science in the UK and US, has been to write books and articles that say little or nothing about the politics, sociology, rhetoric, architecture, print culture, visual culture, etc. of science. Darrigol did not err when he complained, in a chapter that Chang cited, that internalists tend to be seen as “fossils” who cling to a discredited brand of history [2:50]. There is more to history of science than internal history. Although Chang defended internal history, he did not defend it as the exclusive mode of history of science. His point was that internal history is more worthwhile than it is often thought to be, not that no other history of science is worthwhile. Historians should be “pluralistic,” he said. “There are no enemies here, except those who are in the habit of making enemies.” (I would add that pluralism should not stop us from rejecting opinions we consider false, and that we should be able to disagree with someone without making an enemy of them). So much for my agreements. In the following, the claims in bold are the ones that I endorse and that I want to contrast with some aspects of Chang's talk. The real debate is between internalists and hybridisers. The label “externalist” is almost as widely shunned as “internalist.” I do not know any historians of science who pride themselves on ignoring the technical content of science. But I know many who pride themselves on integrating the technical content of past science with its social or political elements, and many who think that this integration is one of the main goals—perhaps the main goal—of the discipline. Consequently, the most common criticism of internal history is not that it pays attention to the technical content of science but that it does so exclusively. Far from countering this criticism, Chang appeared to endorse it. 
He said that the distinction between the “social” and “intellectual” factors was a false dichotomy, and invoked well-known works of hybrid history to make his point [10:00]. I don’t recall all of his examples, but they included Peter Galison’s Image and Logic and Steven Shapin and Simon Schaffer’s Leviathan and the Air Pump. These works are (rightly) celebrated, and often they are celebrated precisely because they marry the content and context of science. I fear that the chief lesson that many people will take from Chang’s talk is that we are in dire need of more works of this kind. This would make things harder, not easier, for those who wish to emulate internalist works such as Darrigol’s magisterial Electrodynamics from Ampère to Einstein. Internalists should defend their work as “history” of science and not as history of “science.” There are two ways a work can fail to be good history of science. It can fail to be good history, or it can fail to be about science. It is important to keep these two criteria separate when assessing any historical genre, whether internalist, externalist, or hybrid. Obviously, the two criteria are distinct: there is good history that is not about science, and poor history that is about science. The main reason the distinction is important is that the correct answer to the question “what good is internal history of science?” may depend on which criterion one uses. For example, one might think a) that internal history of science is on a par with non-internal history as history, but that b) it is more squarely about science than external or hybrid history. b) is bound to be more controversial than a), simply because it is a claim for the superiority of internal history rather than a claim for its parity with other kinds of history of science. Chang seemed to make the latter, provocative claim. 
The title of his talk, “Putting science back into the history of science,” when read alongside the talk itself, suggests that Chang equated “science” with “technical content.” This would imply that, in his view, internal history of science is indeed more squarely about science than external history of science. Perhaps Chang is right about this. But in my view it would be better for internalists not to make that claim until they have established the less provocative (but still important and controversial) claim that internal history is no worse, as history, than external or hybrid history of science. It may not even be necessary for internalists to make the provocative claim just described. Personally, I am concerned about whether I am writing good history, not about whether I am writing history about science—even when I am writing about the technical content of science. Here's another way of putting it. I am puzzled, and sometimes irritated, by those who insist that internal history of science is somehow a second-rate form of history. But I’m unbothered by those who imply that internal history of science is no more “about science” than external or hybrid history. The best way to defend internal history is by the hybridiser’s own standards. Hybrid history of science tends to be surrounded by a halo of historiographical virtues. It is said of hybrid histories that they are contextual, cultural, and causal, that they respect actors' categories, honour the symmetry principle, display the contingency of science and treat science as a construction, a product of human activity. These terms of praise are not often applied to works of internal history of science. So it is easy to get the impression that the hybridiser's virtues are beyond the reach of the internalist. The challenge for the internalist, as I see it, is to show that there is no such imbalance. Chang's approach was not quite so direct. 
He did list certain historiographical virtues that he thought were within the reach of internal history. But these virtues were not, by and large, the ones that are most commonly associated with non-internal history of science. There were two notable exceptions to this rule. Chang did mention that internalist works can show the contingency of science, and that they can be “cultural.” Chang made the latter point with an intriguing remark that may be paraphrased as follows: “It is only an anti-intellectual culture that does not consider intellectual activity to be cultural activity.” Chang's point here was not that intellectual activity is always shaped and sustained by something other than its technical content. On this view, internal history of science is not yet cultural history but can always be turned into cultural history by adding in the politics, sociology, rhetoric, etc. This may be true, but it is not what Chang meant, I think. What he meant was that works of internal history are already cultural. No additives are necessary to make them into works of cultural history. Darrigol's Electrodynamics from Ampère to Einstein is as much a cultural history as Leviathan and the Air Pump. To think otherwise is to treat intellectual activity as a special sort of human activity, one that is not on its own cultural. There is an irony here: those who reject internal history on the grounds that it treats science as “exceptional” are thereby committing the very error they claim to save us from. Endnote. Having said all this, I had better say why I think that internal history is not a second-rate form of history. The reason is simple: the standard argument against internal history, and in favour of hybrid history, is guilty of a whopping inconsistency. The usual argument is that only hybrids do justice to the inextricability of social and epistemic factors in past science (or something along those lines). 
The obvious reply is that there are lots of other inextricable pairs that the historian should try to knit together—like theory and experiment, or mathematics and physics, or science in France and science in Germany—the list is endless. Given that no single work can hybridise everything, it is inconsistent to single out the science-society dyad for special attention. An example might help to make the point. As far as I know, no-one has asserted that all books and articles dealing with scientific instruments must also deal with scientific theories, and that those that fail to do so do not qualify as proper history. That would be absurd. It is equally absurd to insist, as many historians of science seem to do, that all books and articles that deal with the technical content of science must also deal with the sociology, politics, rhetoric, etc. of science.
Tuesday, May 7, 2013
This post is a response to reflections that Lee Vinsel posted on Saturday on the AmericanScience blog. His post was about science and politics rather than about the symmetry principle, and it is the latter that I am scrutinising in my current series of posts. But I take issue with Lee's post for the same reasons I take issue with Vanessa Heggie's earlier one on the symmetry principle. It seems to me that the effect of both posts (though perhaps not the intention) is to endorse one side of a confusing and controversial issue, present the opposing view as a vulgar error, and use the wisdom of STS to confound the distinctions that could have prevented the confusion from arising in the first place. The occasion for Lee's post is a bill called the “High Quality Research Act (HQRA)” that has been drafted by the Republican Congressman Lamar Smith. The Huffington Post obtained a draft copy of Smith's proposal and published this article on the topic last week. The bill concerns the National Science Foundation (NSF), which is described in the Huffington Post article as “one of the most successful scientific research promoters in history.” If passed into law the bill would require the NSF to certify to Congress that all of the work it funds is of “the finest quality, is ground breaking, and answers questions or solves problems that are of utmost importance to society at large.” The Huffington Post seemed to criticise the bill on the grounds that it would “politicize” the decisions made by the NSF. The main point of Lee's post is to discredit this line of argument. (The AmericanScience blog contains two other useful articles on other aspects of the bill). Lee gives a number of interesting objections to the “rhetoric of politicization,” as he calls it, but to make things simple I will focus on two of those objections. I agree with the first of these objections but have a dim view of the second, as you will see in a moment. 
Lee's first point is that democracy requires that the general public have at least some say in the “research priorities” (his term) of the scientists they fund through their taxes. Lee does not make this point explicit in the post, but it does underlie his penultimate paragraph, where he briefly considers some ways in which science could be incorporated into the democratic process. In my view this is a good objection to those who imply (through their use of the term) that “politicization” is always a bad thing. Granted, there is much room for debate about how, and to what degree, the research priorities of scientists could be brought into line with the values of the people who pay for the research. But presumably there should be at least some such democratic oversight. And even if you think (as David Colquhoun seems to) that there should be no such oversight, you must agree that the simple equation of oversight with “politicization” is a poor argument for your view. Lee's second objection is that, as he puts it, “science is always political.” To support this bold assertion he gives a number of examples of the political character of science, such as the role of the Cold War in shaping science, the imperfections of the peer review process, and the fact that scientists use their epistemic authority for political ends. But this is old news:
“The consensus was established a long time ago: there's no use in trying to separate science from politics, even rhetorically, and, moreover, attempts to make that separation are themselves political. Science, like everything else, is human and screwed up.” The “consensus” to which Lee refers is the alleged agreement among people working in STS that “science is always political.” Lee says that the STS consensus on the topic has existed for nearly thirty years, and wonders what we can do to get our ideas across to the general public. I believe that Lee's approach illustrates precisely what we should not do if we want to bring STS to the masses, which is to replace one simplification with the opposite simplification. This seems to be what Lee has done by rejecting the view that science is never political in favour of the equally implausible view that it is always so. Granted, there are lots of ways in which science is, has been and should be political. But surely there are also lots of ways in which science is not, has not been, and should not be political. Denying the latter claim, as Lee seems to do, is no better than denying the former. Lee writes that “[p]op writers...are still falling back on the too easy, too simple trope of politicization.” If this is fair, then it is also fair to say that Lee is falling back on the too easy, too simple trope of the inseparability of science and politics. To insist that “science is always political” is not only an error, but one that undermines Lee's more specific argument against the “rhetoric of politicization.” One reason for this is the pragmatic one that a bad argument for a position makes that position look bad, no matter how many other good arguments one can muster in favour of the position. 
But a more interesting reason is that Lee's specific argument relies on a distinction that we are likely to ignore if we blithely maintain that “science is always political.” This is the distinction between the use of political values to set research priorities and the use of those values as evidence for scientific theories. (I owe this distinction to my perusal of this book by the philosopher of science Heather Douglas.) An example of the former would be a policy that channelled UK public money into research on tropical diseases on the grounds that the health of poor people in the Third World is just as important as that of rich people in the West. An example of the latter would be someone who argued that women and men must be equally intelligent on the grounds that the alternative would violate the political value of gender equality. Granted, one could argue that the social consequences of believing the alternative (i.e. sex-based differences in IQ) would be so bad that we should discourage research into the topic, or even that we should discourage that belief no matter what the evidence says. But I take it that very few people would argue that the harmfulness of the belief that men are more intelligent than women (or vice versa) raises the likelihood that that belief is false. Now, Lee's specific objection to the “rhetoric of politicization” is persuasive because it is explicitly restricted to political interventions in research priorities. His objection would have been much less plausible if he had dropped this restriction and claimed that political values should also be deployed as evidence for and against scientific theories. But his assertion that “science is always political” collapses the distinction between the two cases and thereby weakens the force of his specific (and in my view well-grounded) objection. That conflation also does an injustice to those who use “politicization” as a term of abuse. 
There is nothing naïve or wrong-headed about criticizing those who treat political values as evidence for scientific claims. Nor is there anything wrong with the related practice of criticizing those who mislead the public about the evidence for the health risks of smoking or the reality of climate change—to use two of the cases studied by Naomi Oreskes, one of the “pop writers” targeted in Lee's post. The irony is that the conflation encouraged by the claim "science is always political" is also present in the sources that Lee criticizes. One of those sources is the Huffington Post article, which is followed by a picture gallery advertising the scientific errors of US politicians. This form of ridicule, if it works at all, only works against Smith's proposal that politicians assess the technical merit of NSF-funded work. Unless I am missing something, it does not work very well against his proposal that politicians assess the social benefit of NSF-funded work. The other source that Lee quotes is a letter written to Smith by the Texan Democrat Eddie Bernice Johnson. Like the author of the Huffington Post article, Johnson implies that politicians lack the technical expertise to carry out “peer review” of scientific papers. Again, this argument is far more effective against political assessment of “technical merit” than of “social significance.” What I am suggesting is that Lee's post would have been more effective if he had called out this conflation instead of perpetuating it by insisting that “science is always political.” Granted, it is not always easy to draw boundaries between instances of politics directing research agendas and instances of politics being used as evidence for theories. And I expect that those gray areas (which Douglas covers in the book cited above) are the ones that people disagree about the most. 
But there are fairly clear cases on either side of the gray area, and it is no use at all to insist that everything is gray or (worse) to arbitrarily decide that everything is black and then suggest that anyone who thinks otherwise is naïve. There are a number of analogies between Lee's critique of the claim “science is (or should be) apolitical” and Vanessa's critique of the claim “people believe things because they are true.” Both critics treat the respective claims as popular errors put about by unreflective authors who have not yet read enough STS scholarship. In both cases the real error of the popular authors, insofar as there is one, is not that of endorsing false claims but of conflating the plausible and implausible readings of ambiguous claims. And both critics mix valuable, specific points with general claims that serve only to perpetuate the confusion that gave rise to the popular errors in the first place. In Vanessa's case the specific point is that the truth-value of a past scientist's belief is a poor guide to the reasons they had for holding that belief—a historiographical maxim known as the symmetry principle. This point is easy to confuse with the more general and more controversial claim (or collection of claims) that truth and evidence are not much use in explaining scientists' beliefs: as I argued in my last two posts, there are lots of ways in which truth and evidence can legitimately enter historical explanations. Arguably, this confusion is one of the main reasons why non-historians fail to grasp the symmetry principle. So we need to clear up that confusion—not ignore it or encourage it—if we want to take the symmetry principle to the masses. In Lee's case, as we have seen, the specific point is that laypeople should have a say about the direction of the research they fund through their taxes. 
This important observation is likely to be lost, ignored or rejected if we bundle it into the more general claim that science is always political. That claim is as misleading as the opposite claim that science is never political. More importantly, the claim obscures the point that really matters, i.e. that there are defensible forms of political involvement in science that do not involve the (in general) dubious practice of treating political values as evidence for scientific theories. In this post I have homed in on the posts by Vanessa and Lee. This is not because I want to make enemies, or because I think their posts contain nothing of value. It is because it is handy to have well-defined targets, and most importantly because I think those two posts are representative of a wide swathe of opinion in STS, including in the history of science. As Will Thomas has been saying for a while now, STS scholars have a choice. We can make dramatic, controversial claims that puff up our discipline, condescend to people outside STS, and cloud the issues that need to be clarified. Or we can use our scholarship and our analytical skills to make specific, timely interventions in public debate. A reason for pessimism is that the claims that sow the most confusion in the two posts in question—like “science is always political” and “no-one believes things because of the evidence”—are precisely the claims that seem most distinctive of STS. As Lee himself writes, the claim that “science is always political” is a “basic tenant [sic]—perhaps even a dogma—of science and technology studies.” A reason for optimism is that Lee, Vanessa, and others have been making specific, timely interventions in many of their other posts on their respective blogs. This suggests that STS scholars can do good work in the public sphere without falling back on the one-sided slogans that too often appear to define the discipline.
Sunday, May 5, 2013
This post continues my effort to understand the symmetry principle by distinguishing different senses of the claim “people do not believe things because they are true.” As you can see, this is not an easy job: this post adds 5 readings to the 6 discussed in my previous post. But nor is it an exercise in hair-splitting or nit-picking. I'm not suggesting that we need to make these distinctions explicit whenever we discuss the symmetry principle, the nature of scientific truth, or the role of evidence in settling scientific debates. But our discussions of all those topics would be improved if we kept these distinctions in mind when we formulate our claims and when we assess the claims of non-historians. (Readers who are pressed for time may want to skip to the end of this post, where I summarise my 11 readings and draw some morals from them.) 7. The truth-value of any given theory is obvious once you decide to consider the evidence. Call this the “self-evidence assumption.” Most historians of science think that this is a dangerous assumption, and that it underlies much popular writing about science. Vanessa alerted me to the relevance of this assumption when she wrote in a tweet that “symmetry is a starting point for undermining the 'if they'd've looked, they'd've believed' assumption.” I agree that this is a dangerous assumption, and also that it is relevant to the symmetry principle. However I would say that the wrongness of the assumption is the starting point for the symmetry principle, rather than the other way round. This is important: the main reason people give misleadingly asymmetrical accounts of past scientific debates is that they underestimate the quantity, variety and complexity of the evidence that lies behind any well-grounded belief about nature. This underestimate encourages two other false assumptions. The first is that the evidence available in the past for any given theory was more or less the same as the evidence available today. 
If you think that the theory of evolution by natural selection follows from a few simple inferences from everyday observations, you are unlikely to appreciate the fact that Darwin had less evidence for that theory than we do today. And if you do not appreciate that, you are unlikely to appreciate the reasonableness of Darwin's nineteenth-century opponents. The second false assumption is that, once there is strong evidence for one theory over another, it is impossible to mount a plausible case for the rejected theory. If you think that the evidence for evolution is blatantly obvious, you are likely to think that its present-day opponents are stupid, biased, or insincere. In fact it is possible to make reasonable objections to just about any piece of evidence for evolution available today. I believe that the vast majority of these objections can be answered (otherwise I would not believe in evolution). However I also think that a full answer to those objections would require specialist knowledge of biology and paleontology, not to mention genetics, biogeography, molecular biology, anatomy, and maybe some philosophy of science. This means that it is possible for non-specialists to build fairly plausible cases against evolution by natural selection. Does the falsity of the self-evidence assumption mean that evidence and argument play little role in scientific disputes? Of course not. If anything, it shows that there is more evidence, on both sides of any debate, than we might first imagine. 8. If a statement is true, it corresponds to reality. Rebekah Higgitt responded to my first post in this series by tweeting that “I do see statements that scientific theories/facts are true because they're true (ie true reflection of reality).” My response was that there is nothing wrong with this, if people mean just that truth consists in some sort of relation between a statement and the world. 
This is roughly what philosophers call the “correspondence theory of truth.” I sometimes come across the view in science studies literature that the correspondence theory is a hopelessly naïve theory of truth that was abandoned by all right-thinking people some time between 1950 and 1990. Granted, philosophers continue to argue about whether or not the correspondence theory of truth is a good one, and about which correspondence theory is the best one—witness this article. But the fact that the Stanford Encyclopaedia of Philosophy has an up-to-date article defending the theory suggests that it is far from a minority view. 9. The evidence for a theory is a good guide to the truth-value of the theory. According to this view, we should believe theories to the extent that they have good evidence in their favour. This may seem like common sense—after all, if the evidence does not tell scientists what to believe, what does? Nevertheless this view is often rejected by both scientists and philosophers of science. For example, IanLove commented as follows in reply to one of Vanessa's posts: “In science saying you believe something means that you consider it best fits the evidence..... As for truth - that is best not used: because, as others have said, all scientific theory and evidence is work in progress.” I find it hard to understand this blanket quietism about truth. Why would scientists collect all that evidence for their theories if they did not think it would lead them closer to the truth? True, many philosophers of science deny that evidence is a good guide to the truth-value of every kind of belief. But even those philosophers (such as Bas van Fraassen) usually say that scientists' beliefs about observable phenomena are probably true, and that the evidence is a good guide to the truth-value of that kind of belief. 
But suppose for the sake of argument that IanLove is right, and that scientists endorse beliefs that best fit the evidence but never take the further step of saying that those beliefs are true. This would mean that the evidence can never explain why scientists believe theories to be true (since, by hypothesis, they never believe that of theories). But for the same reason, social or institutional factors could never explain why scientists believe theories to be true. So even IanLove's anti-truth stance is no grounds for preferring social or institutional explanations over evidential ones. 10. Historians can explain the past development of things that are defined in present-day terms. I include this one because it came up in a comment on one of Vanessa's earlier posts, the one that prompted Vanessa's post on the symmetry principle. The post was about how we might explain the decline in incidence of TB in the twentieth century—was it drugs and vaccines, or improved nutrition, or perhaps public health measures like clean water and better sanitation? Vanessa noted that the meaning of “TB” has changed over time, and that this causes problems for any attempt to explain the change in its incidence over time. This provoked the following comment from Wolfbone: “All you have to do is pick the modern, most informed, definition of what TB is and do your research and write your history in the light of that knowledge.” Vanessa subsequently presented this in a tweet as a clear example of bad historical practice, presumably because Wolfbone was proposing that we think about the past in present-day terms. As I have said elsewhere, I do not see what the problem is with thinking about the past in present-day terms. Sure, you are going to miss a lot if you ignore earlier, different definitions of TB in your history of the topic. But you are also going to miss a lot if you fail to adopt a consistent definition of TB. 
In particular, you are going to miss the opportunity to explain why rates of TB incidence changed over long periods of time (rather than just giving a sequence of disconnected explanations of how various TB-related conditions changed during the short periods in which each one of those conditions was thought to define TB). I can see why a historian might choose either one of those approaches, but I do not see why one would want to eject the present-centred one from the canons of good history. As Wolfbone put it, “You can perfectly well write a history of TB and a history of “TB.”” Vanessa's worry seemed to be that the present-centred approach is “progressive” in the sense that “it starts with the assumption that we're obviously right now, and were therefore obviously wrong then.” This statement is imprecise in just the place where precision is needed: the starting assumption is that today's theory is considerably more likely to be true than yesterday's theory, not that today's theory is "obviously right" (if that phrase means "certainly right" or "obviously right once you decide to look at the evidence"). And even if the assumption were that we are “obviously right” now, this would not commit us to the belief that we were “obviously wrong” in the past. Perhaps we have uncovered some new evidence recently that means that the truth of our current theory is much more obvious now than it was a decade ago (see 6. above). 11. Today's theories are more likely to be true than yesterday's. Wolfbone suggested that the real reason for Vanessa's hostility to the use of the present-day definition of TB was her assumption that the present-day definition is no better—in the sense of being no more well-supported by the evidence—than previous definitions. 
Wolfbone wrote that Vanessa was “apparently motivated by the false belief that today's science facts are just as fragile as yesterday's.” And indeed, Vanessa wrote (for example) that “diagnosis and disease definitions change all the time; today's is as likely to be proved 'wrong' as yesterday's.” So, does the fact that we have been wrong in the past mean that today's theories are no better than yesterday's? Philosophers of science have long pondered this argument. There is even a name for it: the “pessimistic induction”. The debate is complicated, with plausible arguments on both sides. This means that it would be unwise for historians to base the central tenet of their field—the symmetry principle—on the presumed outcome of the debate. Do historians need to take this risk? That is, does the symmetry principle stand or fall with the pessimistic induction? In one sense the answer is “no.” Imagine that we were absolutely certain that the shading of the moon is due to its surface relief rather than the uneven density of its internal matter. Never mind how we might have become certain, or whether we are actually certain—just pretend that we are. Now, this certainty would be perfectly consistent with the belief that Galileo's evidence for the rockiness of the moon was no better than the Jesuit's evidence for the uneven density of the moon. But there is another sense in which the symmetry principle does appear to stand or fall with the pessimistic induction. How would we become certain that Galileo was right? Presumably by looking at the best available evidence. But to make this inference, we must suppose that now (May 2013) there is a close connection between the truth-value of a belief and the state of the evidence. But there is nothing special about May 2013, so the same connection must have existed in the 1600s, when Galileo was up against the Jesuits. 
But this claim is inconsistent with the symmetry principle, which denies that the truth-value of Galileo's belief is any guide to the evidence available to him. So it looks like we can have the symmetry principle, or believe present-day scientific theories, but not both. Maybe Vanessa was right to hitch the symmetry principle to the pessimistic induction. Although I think that linkage is mistaken, that is not my point here. My point is that we need to break that link in order to save the symmetry principle. Faced with a choice between the symmetry principle and trusting present-day science, many people would ditch the symmetry principle. And that would be a perfectly reasonable choice. Granted, to historians it is absurd to say that all past scientists who held true beliefs did so on the basis of good evidence, whereas those who got it wrong did so on poor evidence. But it is just as absurd, if not more so, to say that today's science is no more likely to be true than yesterday's. Conclusions There are lots of legitimate ways in which questions of truth and falsity can enter into historical research. The fact that something is the case can help to explain why people believe it to be the case (#3 in the previous post). The fact that something is the case about nature can also explain why people believe something else to be the case about nature (#4). People can believe things partly because of the evidence (#1), including the factual evidence (#5). And historians can legitimately explain the past development of things (like TB) that are defined in present-day terms (#10 in this post). In my view, there is little room for debate about these issues (except perhaps #10). They should be distinguished from other, deeper issues that are the subject of ongoing debate among honest and well-informed philosophers. Does truth consist in the correspondence between a statement and reality (#8)? Is the evidence for a theory a good guide to the truth-value of the theory (#9)? 
Are today's theories more likely to be true than yesterday's (#11)? Historians should not blithely assume that the answers to these questions are all “yes.” But nor should they assert that the answers are “no” and then build their historiographical principles on this assertion. It is undesirable, and probably unnecessary, for historians to base their methods on claims that are controversial among mainstream philosophers. There are other forms of historical explanation that do make illegitimate use of truth or evidence. People do not believe things to be true because they believe them to be true (#2). Nor do they respond well to dogmatism (#6). And above all, it is not the case that the truth-value of any given theory is obvious once you decide to consider the evidence (#7). However, the fact that these kinds of explanation are illegitimate does not threaten the claim with which I began my previous post, viz. that evidence and argument are on a par with social, political or institutional factors when it comes to explaining the beliefs of scientists and laypeople, whether in the past or the present. The main aim of these two posts was to clarify the symmetry principle. It should now be clear that the principle is not a blanket ban on the invocation of truth, evidence or reality in historical writing. Instead it is a ban on a rather special way of using those concepts: it is a ban on inferences from the truth-value of a past scientist's belief to a decision about whether to explain that belief in terms of evidence and argument or in terms of something else. In other words, it is a ban on what I called “The Fallacy” in my first post in this series. But it is no more than that. In particular, it has very little to do with most of my 11 readings of the claim that “people believe things because they are true.” In the next post in this series I will describe another way in which historians exaggerate the scope of the symmetry principle. But first, a political interlude. 
Wednesday, May 1, 2013
This post continues my series on the symmetry principle, with apologies to anyone who has been holding their breath since my last post five weeks ago. That was a piece of conceptual ground-clearing in which I argued that esoteric-seeming distinctions can make a big difference to the answers we give to important questions. In this post and the next I want to illustrate the point by distinguishing between different senses of the claim—which historians sometimes equate with the symmetry principle—that “people don't believe things just because they are true.” This may seem like an academic exercise, but the stakes are high. One issue is the professional competence of historians of science: if the symmetry principle is a central tenet of our field, and we are unable to give a clear account of it, we look bad. A second issue is how historians of science engage with the public. I sometimes get the impression that, in the eyes of historians and sociologists of science, anyone who uses the words “truth,” “evidence,” or “reality” when discussing past or present science must be doing something wrong. If we do not make an effort to be more discriminating, we should not be surprised if the public is confused by our work or suspicious of it. A final issue is how scientific debates have been, or should be, carried out. This is the issue that motivated the Guardian post by Vanessa Heggie that prompted this series. As I read that post, Vanessa used the symmetry principle to draw attention to the social, personal, political and institutional factors behind beliefs that scientists had in the past and that we now consider true. She seemed to be saying that we should focus on those factors instead of, or at least in preference to, factors such as the evidence that scientists advanced for their beliefs or the facts that they had in their favour. 
The present-day analogue, Vanessa suggested, is that when we debate issues like climate change and religion we should consider how our social or political situation might have shaped our personal convictions on those topics. I expect that many present-day historians of science would agree with these sentiments. I disagree. I do think that social or political explanations for beliefs are important. But I think they are on a par with, rather than preferable to, more traditional explanations such as evidence or argument. More importantly, I think that the symmetry principle is irrelevant to that question. All the symmetry principle says is that we should give the same sort of explanation for true beliefs as we do for false ones. It is perfectly consistent with this principle to (for example) explain Galileo's belief that the moon was mountainous by appeal to the evidence he had in his favour—as long as we explain the beliefs of his rivals in the same way. The lesson for present-day debates, one might think, is not that we should pay attention to our own biases but that we should pay attention to the evidence or arguments advanced by people we disagree with. But isn't evidence more or less the same thing as truth? And isn't it bad practice for a historian or a sociologist of science to let questions of truth and falsity interfere with their naturalistic explanations of past beliefs? The only way to respond to such concerns—and to clarify the symmetry principle and to engage properly with the public—is to distinguish carefully between different ways in which “questions of truth and falsity” can enter into our explanations of scientists' beliefs. So here goes... 1. People believe things because they believe they have evidence in favour of those things. As I asserted in my first post in this series, it is silly to deny that evidence can play a major role in scientific debates. The evidence might not be very good, of course. 
And there are not many beliefs, if any, that are fully explained by the evidence possessed by the believer. But there is nothing wrong-headed about saying (for example) that Darwin believed in evolution partly because of his observations of the distribution of finches on the Galapagos islands. Likewise, there is nothing wrong-headed about trying to shake someone's confidence in an omniscient, benevolent God by observing the amount of needless suffering in the world. Anyone who denies this is likely to be accused, in my view justly, of being “tainted with relativism” or of “misunderstand[ing] how science works at a rather fundamental level” (as per the comment by DavidColquhoun in reply to Vanessa's post). 2. People believe things to be true because they believe them to be true. In contrast to 1., this is a terrible explanation of what people believe. It is just wrong-headed to say, for example, that someone believes in God because they believe in God. It is like saying that the death of Maggie Thatcher was caused by the death of Maggie Thatcher. That's just not how explanations work. Nor is this how arguments work—people don't go around saying “God exists, therefore God exists.” 3. The fact that something is the case can explain why people believe it to be the case. It is quite common to lump this explanation together with the obviously defective 2. This is a mistake, because 3. is much more plausible than 2. Here's an illustration. It is silly to say (as per 2.) that Galileo appealed to the rockiness of the moon in order to justify his belief that the moon was rocky. But it is not silly to say (as per 3.) that the rockiness of the moon caused his belief that the moon was rocky. After all, the moon is rocky; and the rockiness of the moon was (partly) responsible for the shapes that Galileo saw in his telescope, which were in turn (partly) responsible for his belief that the moon was rocky. 
(I owe this point, though not the illustration, to this article [paywall] by the philosopher Nick Tosh; see especially pp. 691-92). In fact, it is hard to see how scientists could consistently say true things about the natural world if their beliefs were not partly caused by the natural world. If there were no causal link between nature and scientists' beliefs about it, it would be a remarkable coincidence if the latter accurately described the former. (Again I owe this point to Nick Tosh, who discusses it on pp. 187-88 in this paper. I expect that this intuition, or something like it, underlay TonyLloyd's remark that “the trouble with this symmetry approach is that it does not just exclude truth but any relation of theory to the world.”) Nevertheless, in my experience this sort of explanation is rarely used by even the most “Whiggish” historians of science. This may be because a state of affairs can explain the false things, and not just the true things, that people believe about it. For example, one of Galileo's opponents was a Jesuit professor who thought that the blotches on the moon were due to density variations inside the moon (rather than to its surface relief). This belief, no less than Galileo's, was partly caused by the shapes the Jesuit professor saw in a telescope, which in turn were partly caused by the rockiness of the moon. 4. The fact that something is the case (about nature) can explain why people believe something else is the case. Here is an example of what I have in mind. William Gilbert was an English natural philosopher who believed that electrostatic repulsion, unlike electrostatic attraction, was not a real effect. A historian has explained this by noting that Gilbert used chaff and paper rather than metal objects as detectors of electricity. 
If he had used gold leaf instead, the argument goes, he would probably have observed the gold leaf to leap energetically from charged objects, rather than simply falling off them as chaff and paper tend to do. This explanation uses a present-day scientific commonplace (that metals are unusually good electrical conductors) to account for a different but related belief that Gilbert held (there is no such thing as electrostatic repulsion). To me this explanation is unobjectionable. In fact, without it we do not have a full historical explanation of the historical fact that Gilbert did not believe that repulsion was a real electrical effect. 5. People believe things because of factual evidence. This one is easily confused with 3, especially when it is expressed as “people believe things because of the facts.” That phrase could mean a) that people believe X because X is the case, or b) that people believe X because they have access to facts that count as evidence for X. The distinction matters. a) is plausible, but rarely used by historians (as per 3. above). By contrast, b) is often used by historians and laypeople, and with good reason—it is just a special case of 1., a special case in which the evidence comes in the form of “facts.” Now, people in science studies tend to be as suspicious of “facts” as they are of “truth,” as indicated by their tendency to surround both words with scare-quotes. Here as elsewhere, ambiguity has done much mischief. There are at least four senses of “fact” in common usage:
I. Any true statement. Here “fact” is to be contrasted with “opinion.”
II. A particular kind of statement, namely one that is about raw data or empirical observations. Here the contrast is with an abstract or theoretical statement, as when we tell someone to “get their facts right.” (Note that this phrase only makes sense if there is such a thing as a “false fact”.)
III. A sub-set of the statements in II., namely those that are true rather than false, as when we use “it's a fact” to mean “it's true.” (When “fact” is used in this sense, there is no such thing as a “false fact.”)
IV. A “matter of fact,” ie. a state of affairs in the world rather than a statement about the world.
Sometimes popular authors exploit this four-fold ambiguity for rhetorical advantage. For example, in his book Why Evolution is True, the biologist Jerry Coyne often writes “evolution is a fact” when he means to make the (perfectly reasonable) claim that “the theory of evolution is almost certainly true.” Here he is using “fact” in the first sense listed above. But by using that word he invites readers to think that the statement “animals evolve by natural selection” is a datum rather than a theory inferred from the data. This kind of sleight-of-hand is annoying, whether deliberate or not. But it is no more annoying than the practice, fairly common in science studies, of saying “facts are socially constructed” or “facts change over time” or “facts can always be disputed” without saying which sense of “fact” is intended. To get back to the point: “X believed Y because of the factual evidence” is a perfectly acceptable explanation if “factual” is intended in sense II above. This is because the distinction between “fact” and “theory” (or something similar) dates back at least to Aristotle, and because people routinely appeal to data or observations or phenomena to support their theories or conjectures or explanations. 
Of course we should not forget that it usually requires quite a bit of work to go from raw sense data (like “there is a yellow patch in the top-left of my visual field”) to a fact in sense II (like “a third of Americans are obese”). And it takes more work again to go from a set of facts to a theory based on them. But these truisms do not mean that it is an error to say that people believe things because of the factual evidence. 6. People believe us when we are dogmatic about our beliefs. Perhaps the thing that historians of science mean to reject, by way of the symmetry principle, is the practice of believing one's own position so strongly as to disregard all counter-arguments. They are against the dogmatist who reasons like this: I am right; therefore all arguments against my position must be misleading arguments; therefore I can safely ignore everything my critics say. (If I read her correctly, Vanessa was describing this kind of dogmatism when she tweeted that there was “a clear link between 'it's true, therefore there's evidence, therefore belief.'”) There is obviously something wrong with this dogmatic line of reasoning: if it were sound, it would give us grounds for never changing our minds in the light of new evidence. It is not obvious exactly what is wrong with the argument—so much so, in fact, that philosophers have given it a name: the "Kripke-Harman dogmatism paradox." All the more reason, then, to think that people are vulnerable to that way of thinking, whether they realise it or not. But note that the avoidance of dogmatism does not require us to turn away from questions of evidence and towards questions of social, political, or institutional bias. The solution to dogmatism, one might think, is not to set the evidence aside but to give a fairer appraisal of a greater range of evidence. The historiographical equivalent is to give as much attention to the evidence advanced by the “losers” (ie. 
past scientists whose theories we consider false) as we do to the “winners.” *** It should be clear by now that the plausibility of the claim “people believe things because they are true” depends crucially on how you interpret that claim. In the next post I'll distinguish another five interpretations, beginning with one of the most important ones, namely “the truth-value of any given theory is obvious once you decide to consider the evidence.”
Thursday, March 21, 2013
The flurry of tweets that followed my last post made it clear that there are quite a few interpretations of the sentence “people believe things just because they are true.” One question that came up was whether or not the distinction between truth and evidence is any use in understanding that sentence. I think it is. But even if it is not, I want to make the broader point that esoteric-seeming distinctions can make a big difference to the success of our interactions with the general public. Here's an illustration. If a tree falls in a wood, and no-one is around to hear it, does it make a sound? Well, if by “making a sound” you mean “producing sound waves,” then the answer is clearly “yes.” On the other hand, if you mean “causing a human to experience a sound” then the answer is clearly “no.” The distinction between producing sound waves and causing aural percepts is not one that most people care about. But if our aim is to give a sensible answer to the question posed, we have no choice but to make that distinction. If we don't make the distinction, our efforts to answer the question are likely to be wasted. And if we do, we do not need to do much else in order to reach agreement on an answer. Now imagine we are answering the tree-felling question in a public forum. If we answer “yes,” without making the distinction I mentioned, then we should not be surprised if we are met with howls of protest from those who have tacitly taken “making a sound” to mean “causing a human to experience a sound.” Of course it would be better if our readers made the distinction themselves, exercised charity and common sense, and assumed that we meant “producing sound waves.” But if we don't make the distinction, when it makes such a big difference to the correctness of our answer, then we can hardly complain if our audience does not do so either. 
Conversely, if we are reading a popular text on tree-felling, and the author asserts that trees don't make sounds when they fall in empty forests, then we should not imagine that their claim is obviously, lamentably wrong. To do so would be to ignore the fact that they are half right. And to do that would be to repeat the author's real error, which was not to answer “no” rather than “yes” but to take the question as one rather than two. Clearly there are many distinctions that are irrelevant to any given question. To answer our question about felled trees, it would not be much use to distinguish between transverse and longitudinal waves. There are also an infinite number of weird interpretations of a question, and no matter how careful we are to delimit our answers there is bound to be someone who takes it the wrong way. (Example: many of the commentators on Vanessa Heggie's recent post seemed to think that she was advocating some sort of radical skepticism about science, which was pretty clearly not the point of the post.) Philosophers are the specialists in conceptual distinctions, but we historians have a sense of why distinctions matter in general. Whereas philosophers tease apart the meanings of terms, historians are exquisitely sensitive to the peculiarities of different times, places, people, and episodes. The question “Is there a conflict between science and religion?” is meaningless to the historian in the same way that “Is the world one or many?” is meaningless to the philosopher. The answer to both is, “it depends.” We should also bear in mind that academic study can cloud distinctions that are perfectly clear to non-experts. This does not always reflect well on the experts. Consider the distinction between statements about the world and the world itself. If ordinary people did not make this distinction they would have great difficulty getting through their lives without going mad or getting into terrible accidents. 
Such people would automatically believe everything they read or heard, since they would not be able to grasp the idea of a false statement. They would live in a state of perpetual puzzlement as they witnessed objects and events that appeared to have no statements attached to them. In their confusion they might even mistake events for statements, wandering into the path of oncoming traffic in the belief that this worldly action is as harmless as the statement “there is oncoming traffic.” Yet it is only a bit of an exaggeration to say that entire academic careers have been built around the denial of the distinction between the world and the things we say about it. Much of what is called the 'Science Wars' could have been avoided if both sides had been more careful to distinguish between the two things. More generally, it has long been fashionable in the humanities to frame new ideas in terms of the erasure of one or other distinction that many people take for granted. There is no shortage of authors who claim to challenge the distinctions between subject and object, fact and fiction, fact and value, fact and theory, theory and practice, art and science, etc. It is a long time since I read a book in science studies that promised to mark out a boundary rather than transgressing one, or to heed a dichotomy rather than interrogating one. I'm not saying that all this distinction-denying is a bad thing. My point is that academic study blinds us to the ubiquity of the distinctions we deny just as surely as it blinds us to the abstruseness of the distinctions that are commonplace in our chosen field. What about the distinction between truth and evidence, which I fussed over in my previous post? Is it esoteric like the one between “producing sound waves” and “causing a human to experience a sound”? Or is it more like the distinction between “statements about the world” and “the world itself,” ie. a commonplace that only sophisticated people fail to grasp? I don't know. 
But the main message of this post is that it doesn't matter. If the distinction helps us to answer the question at hand then it is worth caring about, no matter how ordinary or esoteric it might be. The question at hand—to get back to the topic of this series—is whether the symmetry principle is right in stating that people do not believe things just because they are true. The aim of my next post is to give a list of distinctions that make a difference to how we answer this question.
Thursday, March 14, 2013
Vanessa Heggie has posted a clear, visible summary of what she rightly calls a “core principle” for historians of science, namely the “symmetry principle.” So this is a great opportunity for me to explain why I disagree with much that my fellow historians of science have written on this topic. Behind the symmetry principle there is an insight that is true, important and worth keeping. But we need to save this insight from the ideas that are often associated with it, many of which I think we should reject. The basic insight is that there is a certain inference, which I will call The Fallacy, that is a tempting yet unreliable way of explaining the beliefs of past scientists. The point of this post, the first in a series, is to separate The Fallacy from another fallacy that is so unappealing that no-one had even thought of it before historians of science started finding it everywhere. The Fallacy Consider Galileo's theory that the uneven shading of the moon is due to the shadows cast by hills and mountains on the moon's surface. Most of us prefer this theory to one of its seventeenth-century rivals, according to which the moon is a perfectly smooth sphere whose visible blotches are due to the uneven density of the crystalline matter of which it is made. The fallacy is to say that since the theory is true, Galileo must have believed it because of the evidence in its favour, while those who rejected it must have done so out of superstition, prejudice or self-interest. In general:
Theory X is true, therefore everyone who held it did so for good reasons, while everyone who denied it did so for bad reasons.
This is what I will call The Fallacy. As I have stated it, this fallacy applies to two people with conflicting beliefs, but this limitation is one of convenience rather than necessity. The same kind of fallacy could be applied to two beliefs that have opposite truth-values but that do not contradict each other. Or it could be applied to two beliefs held by the same individual. Or it could be several of these cases in one. For an example of the latter, consider Galileo's theory that comets are exhalations that mount in a straight line from the surface of the earth. The fallacy would be to infer from the truth of Galileo's moon theory that he held it from a combination of accurate observation and sound inference, and from the falsity of his comet theory that he held it out of dogmatism, amour propre, etc. The Other Fallacy It may look like I have simply repeated Vanessa's account with a different example (she uses Leibniz v Newton rather than Galileo). But there's a crucial difference. The teaser at the top of Vanessa's post says that “No one believes something simply because it is true”, and the fifth paragraph urges historians to “forego the assumption that [Newton] believed in his law of gravity because it was true.” This suggests that the fallacy that Vanessa has in mind is something like this:
Theory X is true, therefore everyone who held it did so because it was true, while everyone who denied it did so for bad reasons.

Call this The Other Fallacy. It is the same as The Fallacy except that it replaces “good reasons” with “truth” as an explanation of the true beliefs of past scientists. Although The Other Fallacy is indeed a fallacy, I think it is a very rare one, even among the most Whiggish of old-fashioned historians. Imagine asking George Sarton (to pick one such historian) why Copernicus believed that the earth goes round the sun. Would Sarton have said “because the earth does indeed go around the sun”, or perhaps “because Copernicus thought 'the earth goes around the sun' was a true statement”? I doubt it. Instead he would have said something like: “because Copernicus looked at a whole lot of data, did a bunch of calculations, and found that a number of key phenomena—such as the retrograde motions of the planets and the fact that he never observed Mercury and Venus on the opposite side of the earth to the sun—could be explained in a very neat manner by supposing that the earth is just another planet.” To put one of the first two replies in Sarton's mouth is to commit what I have called the constructivist straw man. Another way to commit the same error is to say that Sarton and his ilk denied that “human agency” or “human activity” played any role in the beliefs of scientists. Both forms of the straw man can be found in this transcript, which Vanessa links to in her post. First comes the suggestion that, some time in the seventies or eighties, historians started to see scientific knowledge as “a human product, something that had to be made and maintained.” In other words, earlier historians had thought that “true knowledge was immaculate, untouched by human hands.” (This is true only on the perverse assumption that calculating, experimenting, and reasoning—the sorts of things that interested Sarton et al.—do not count as human activities).
Next in the transcript comes the misattribution that lies behind The Other Fallacy. How did these earlier historians explain the beliefs of scientists, if not as the result of human action? Answer: by appeal to the truth of those beliefs. For example, they would “say that Isaac Newton thought that there was an inverse square law of gravity acting instantly at a distance through empty space between the centers of distant bodies because there is [such a law].” (Disclaimer: the historian featured in this transcript is my thesis advisor. Clarification: yes, I am disagreeing with my thesis advisor on this point, although I agree with him on much else.) The problem with The Other Fallacy is that it sets the bar too low. A campaign against it is like an anti-smoking campaign that urges smokers to stop committing murders. Since most smokers are not murderers, the campaign is unlikely to have any effect except perhaps to convince non-smokers that many smokers are, in fact, murderers. Likewise, urging people to avoid The Other Fallacy is unlikely to solve the real problem, which is The Fallacy. Those who commit the latter are likely to continue doing so, happy in the knowledge that they have not committed The Other Fallacy. And those who avoid The Other Fallacy may end up convincing themselves that everyone else is more wrong-headed than they really are.

Truth and evidence are not the same thing, and it matters

Perhaps historians attack The Other Fallacy, rather than The Fallacy, because the two are hard to tell apart. After all, as noted above, the only difference between them is that between explaining a person's belief by its truth, and explaining that belief by the arguments or evidence that the person found in its favour. And aren't these pretty much the same thing? No! In the context of a debate, whether in the present or in the past, they are completely different beasts. This is easy to see from the three quotes I put in the mouth of George Sarton.
Imagine if we tried putting those quotes into the mouth of Copernicus rather than Sarton. Does the De Revolutionibus contain statements like: “'the earth moves around the sun' is a true statement, therefore the earth moves around the sun”? Or statements like: “the earth moves around the sun, therefore you should believe that the earth moves round the sun”? Probably not. Or if it does, they were not the sorts of statement that convinced people that the earth went around the sun. And nor are they the sorts of statement that are used today to convince people that vaccinations work, or that climate change is real, or that God does or does not exist. And this is not something that “modern historians” discovered some time in the seventies and eighties. No sane person has ever denied it. This should be uncontroversial. Granted, there are controversial issues nearby. There are the questions of the extent to which standards of evidence have varied over time and place, whether there is some super-standard that allows us to assess this alleged multiplicity of standards, whether evidence can reliably inform us about unobservable entities like quarks and quasars, and whether evidence can be decisive in resolving scientific disputes. But we do not need to agree on any of these issues in order to agree that giving evidence for a proposition is different from asserting that the proposition is true. This large, uncontroversial difference does not stop historians of science running the two together as if they were the same thing, usually when passing judgement on dead historians of science. I've already given one example from a prominent historian. Here's another, from Steven Shapin's 2010 book Never Pure:
Once upon a time, so the story goes, students of science too believed that truth was its own recommendation, or, if not that, something very like it. If one wanted to know, and one rarely did, why it was that true propositions were credible, one was referred back to their truth, to the evidence for them, or to those methodical procedures the unambiguous following of which testified to the truth of the product.

In this passage Shapin at least recognises that the truth of a claim and the evidence for it are different things. But only just: he also says that they are “very like” each other. He implies, absurdly, that students of science used to be uninterested in how scientists justified their beliefs. And on the preceding page he attributes to an unnamed group of “modernist methodologists”—presumably people like Hans Reichenbach, Rudolf Carnap, and Karl Popper—the view that “truth shines by its own light.” The conflation of truth and justification does an injustice not only to old-fashioned historians like Sarton but also to present-day internalists. The latter may be defined as historians of science whose main interest is the mixture of luck, skill and insight by which past scientists—as individuals or as groups, over months or over centuries—developed arguments for their claims about the natural world. Internalists can be as symmetric as anyone, giving as much attention to the errors of “winners” as they do to the insights of the “losers.” They do not give complete accounts of their subject (show me a historian who does!). But they do far more than simply put tautologies in the mouths of past scientists. To sum up, by conflating truth and evidence we make the latter look as unimportant as the former in the resolution of debates, whether in the present or the past. This is an error as bad as The Fallacy. It is silly to appeal to the presumed truth of a claim in order to persuade people that the claim is true.
By contrast, it is silly not to appeal to the presumed evidence for the claim in order to persuade people that the claim is true. Having identified what I think is the good idea behind the symmetry principle, in my next post I hope to explain why it is a good idea. This can be done, but it is harder to do than my colleagues usually make out.

Postscript 1. If the error that I am attributing to my fellow historians is such a bad one, why had no-one spotted it until now? Actually, at least one person had. Ian Hacking wrote the following in a footnote to his 1999 book The Social Construction of What?:
Evidence, or reasonableness, is quite another matter from truth. [The sociologists of science Barry Barnes and David Bloor] are often taken to hold a symmetry thesis about evidence: you cannot invoke the evidence available to a community for a belief p, in order to explain why people in the community believed p... I find this claim (about evidence, not truth) unsatisfactory (232).

Why did Hacking put this point in a footnote rather than in the main text? Because he only mentioned the symmetry principle on his way to talking about something else, and that thing (the distinction between “nominalism” and “inherent-structurism”) was relevant to truth and not to evidence. I'm still trying to work out why other commentators on the symmetry principle have not made the point that Hacking made in his footnote, and that I made in the above post. I'd be delighted to know of any examples of people who have made the point in print.

Postscript 2. Maybe I'm being too harsh. Maybe Shapin et al. want to draw the following contrast. One can say that a scientist believed P because they were convinced, on the basis of the evidence, that it was true. Or one can say that the person believed P because they were convinced, independently of their views of the truth-value of P, that it was in their interests to believe it. On this reading, “No-one believes things just because they are true” is shorthand for “No-one believes things just because they are convinced, on the basis of the available evidence, that they are true.” But if this is what is meant, why not just say “No-one believes things just because they have evidence for them”? This would make it clear that evidence is involved. Or what about saying “No-one believes things just because they believe them to be true”? This would at least make it clear that it is not the truth of a person's belief that is under consideration but the person's conviction that it is true.
Both of these clearer expressions would avoid the rhetorical sleight-of-hand that is involved in saying “No-one believes things just because they are true” when what you mean is “No-one believes things just because there is evidence for them.” The former claim is obviously true. The latter claim may be true, but not obviously so. The ambiguity has the effect of making the intended claim (if the intended claim is indeed the latter one) more obvious than it really is. But all this may be moot, since I doubt that my charitable reading is correct. I suspect that Shapin et al. really are accusing past historians and philosophers of thinking that the truth of a scientist's belief (and not just the scientist's conviction of its truth) can be a good explanation of that belief. One reason for my doubt is that, in the passage quoted above, Shapin includes “truth” and “evidence” as separate items in his list of candidate explanations for the beliefs of past scientists. This suggests that he is not implicitly including “evidence” under the rubric of “truth.” If he were, then it would be redundant to include evidence as a separate item. Another reason to be skeptical is that, as I understand them, even sociologists of science have trouble making sense of the idea that people can believe things without first being convinced of their truth. That is, sociologists would probably not say “Newton believed his inverse square law because, although he withheld judgement about the truth-value of the law, he thought that this belief would protect his reputation as a national hero.” Rather, their explanation would be something like: “Newton believed his inverse square law because he thought the law was true, and an important source of this conviction was (not the evidence but) his desire to protect his reputation.” At least, that is what I imagine they would say.
And if that is right, even sociologists think that people “believe things just because they are convinced of the truth of those things.” So that cannot be the allegedly false view that Shapin et al. are attributing to past historians and philosophers of science. The view they are attributing must be the one I assumed in the above post, viz. that people believe things just because those things are, in fact, true.
Wednesday, March 6, 2013
Since you are reading this, you have probably read Adam Gopnik's recent essay-review about Galileo Galilei in The New Yorker. You might also have seen some reactions from unimpressed historians, one of whom calls the article “extremely pernicious.” I think that some of Gopnik's errors have been exaggerated, and that most of his felicities have gone unnoticed. The moral is that we historians should be as alert to what popular writers get right as we are to what they get wrong. After all, we can hardly criticise Gopnik for imbalance in his treatment of Galileo if we are imbalanced in our treatment of Gopnik. To avoid over-balancing in Gopnik's favour, I've included some new “cons” next to some of the thirteen “pros” below. Most of the items on this selective list are included because they are pleasant surprises, i.e. claims about Galileo that show a healthy level of historical acumen for someone who does not do history for a living. For example, if Gopnik had not made point 12 below, many would have privately groaned at his naive belief that science is insulated from its cultural context. I'm not suggesting that historians should give a point-by-point assessment of every popular article they comment on; that would paralyse commentary. I've got nothing against the practice of picking out one error from a feature article and tearing it to shreds, as thonyc has done with Gopnik's comparison between Dee and Galileo. But I do think that we should read popular articles as if we were about to write an analysis like the one I've tried to present in this post. If we look for pleasant surprises, we might find more of them than we expected. My list covers three topics, in this order: Galileo and the Church, Galileo's science, and Galileo's context. My main source on Galileo is one of the subjects of Gopnik's review, an excellent 2010 biography by John Heilbron—the same John Heilbron who starred as Thomas Kuhn's student in my previous post.

Galileo and the Church

One.
Gopnik writes that Galileo's conflict with the Catholic Church was “traceable to his hubris.” This should please those who say, with Thomas Mayer, that “the fault [for the condemnation of Galileo] lies with Galileo, not the pope or the Inquisition” (quoted in this news article). (Mayer is the author of two works that Gopnik covers in his review; his take on Gopnik's article can be found in the comments of this post. I hope to read Mayer's books in full at some point, but for now I am relying on the snippets from their introductions that are available on Google Books). In fact, the theme of Galileo's insolence was strong enough in Gopnik's article to antagonise a blogger at the Cato Institute, who read Gopnik as saying that “Galileo could have avoided a lot of trouble if he'd been just a little less stubborn and impolitic.”

Two. “The Catholic Church in Italy then was very much like the Communist Party in China now: an institution in which few of the rulers took their own ideology seriously but still held a monopoly on moral and legal authority. … Like the Party in China now, the Church then was pluralistic in practice about everything except an affront to its core powers. … You could calculate, consider, and even hypothesize with Copernicus. You just couldn’t believe in him.” Gopnik is right that the Church could tolerate a large amount of Copernican science, especially when it was useful for things like making calendars. This is a key point that is often missed by those who take a black-and-white view of Galileo's conflict with the Church. On the other hand, I'm not sure where Gopnik got the idea that the rulers of the Catholic Church in Galileo's time did not take religion seriously.
Even Galileo was a man of faith, at least according to Heilbron, who thinks that he “believed as surely as Bellarmine [the Pope's chief theologian until his death in 1621] and the majority of Catholic exegetes of their time that every statement in scripture is in some sense true.” But Gopnik is right that the Church was especially sensitive to doctrinal violations that posed a threat to its power:
“In Rome they pardon atheists, sodomites, libertines, and other sorts of rascals, but they never pardon those who bad mouth the Pope or his court, or who seem to question papal power” (needless to say, I got this quote from Heilbron's biography; Heilbron attributes it to Gabriel Naudé, the librarian of Urban VIII's nephew).

Three. “Galileo even seems to have had six interviews with the sympathetic new Pope, Urban VIII—a member of the sophisticated Barberini family—in which he was more or less promised freedom of expression in exchange for keeping quiet about his Copernicanism.” This is not the sort of concession you would expect from someone who wants to show that there was no room for compromise between Galileo and the Church. Gopnik could have added that Urban VIII was not only willing to allow the publication of Galileo's Dialogue on the Two Chief World Systems (the book that triggered Galileo's 1633 trial) but that he considered it useful for the church. By showing that he knew all of the arguments in favour of the Copernican hypothesis, Galileo would show that he—along with all good Roman Catholics—had rejected that hypothesis out of piety and epistemic humility and not out of ignorance. I bring this up not to suggest that Gopnik should have mentioned it in his article, but because it must be one of the craftiest rhetorical manoeuvres in the history of Church-science relations.

Four. “Though Galileo, vain as ever, thought he could finesse the point, Copernicanism was at the heart of what he wanted to express.” This sentence is a pretty good summary of Galileo's motivations for publishing his Dialogue. It captures the depth of Galileo's Copernican commitment, and the foolishness of his belief that he could dodge censure with verbal trickery.

Five.
“Galileo’s trial was a bureaucratic muddle, with crossing lines of responsibility, and it left fruitfully unsettled the question of whether Copernican ideas had been declared heretical or if Galileo had simply been condemned as an individual for continuing to promote them after he had promised not to.” “Bureaucratic muddle” echoes Mayer's point, reported in this 2010 article, that the irregularities of Galileo's trial were due to ineptitude rather than malice on the part of Church bureaucrats. The rest of the quoted sentence shows that Gopnik grasps Mayer's point, reported in the same article, that Galileo's punishment in 1633 was at least partly due to his violation of a precept issued against him in 1616. On the minus side, Gopnik omits another of Mayer's points, which is that Galileo was as clumsy as his judges, i.e. he made things worse not just through “vanity” or “hubris” but also through sheer legal incompetence.

Six. Other scientists have followed Galileo in “ducking and avoiding the consequences of what they discovered”; in general, “science demands heroic minds, but not heroic morals.” These claims, from the final paragraph of Gopnik's article, arguably make Galileo into even less of a hero than he really was. Between his Sunspot Letters of 1613 and his trial in 1633, and especially before the precept issued against him in 1616, Galileo often did the opposite of “ducking and avoiding” the consequences of his Copernican views.

Galileo's science

Seven. Gopnik notes that Galileo's astronomical observations were due to his powers of interpretation rather than his knowledge of the telescope, and that he did not invent that instrument: “since there were Dutch gadgets in many hands [by the time Galileo made the key observations], and many eyes, he understood what he was seeing as no man of his time had before.”

Eight.
Despite being a “founder of modern science,” Galileo wrote things that we consider false today, notably that the orbits of the planets around the sun are circular and that the tides are due to the sloshing caused by the acceleration and deceleration of different parts of the earth. As Gopnik puts it, he “had his crotchets.” Moreover, Gopnik suggests that he went wrong about the tides precisely because of the skepticism that makes him seem modern (shades of Paul Feyerabend?). A minor quibble is that Gopnik does not mention the best illustration of this theme, Galileo's 1623 book on comets, the Assayer. Heilbron shows that this work, which contains some of Galileo's most famous remarks about scientific method, was riddled with scientific errors and written out of groundless spite. A bigger complaint is that Gopnik does not mention that Galileo overstated the case for Copernicanism. Many of his arguments showed only that the earth's motion was consistent with sense experience; he completely ignored a compromise model put forward by the great astronomer Tycho Brahe, and his favourite argument for the motion of the earth was based on his dodgy theory of the tides. Many of Galileo's contemporaries, including some of his close friends, were aware of these problems. Heilbron puts it like this: “although [reason and experience] established essential and growing support for Copernican theory [by 1616], it gave no unimpeachable proof.”

Nine. The Dialogue did not express a straightforward view about scientific method: “Though Galileo/Salviati wants to convince Simplicio and Sagredo of the importance of looking for yourself, he also wants to convince them of the importance of not looking for yourself.” On the minus side, Gopnik may be too generous when he implies that this two-faced attitude is “philosophically sophisticated.” Perhaps Galileo championed observation when it suited his case, and favoured a priori reflection when sense experience worked against him.

Ten.
“The temperament [of Galileo] is not all-seeing and curious; it is, instead, irritable and impatient with the usual stories.” This is not a bad paraphrase of Heilbron's point that “perhaps the best single-word descriptor of Galileo is 'critic.'” On the other hand, the reference to “usual stories” is misleading, since Galileo was often a conservative critic. This from pages 1 and 2 of Heilbron's biography:
Galileo was a humanist of the old school. He much preferred Ariosto, the darling poet of the sixteenth century, to Tasso, who would be a favorite of the seventeenth... He stayed with the geometry of the Greeks rather than employ the algebras of his contemporaries... He was not an innovator by temperament. And, we are told, he liked to wear clothes that were fifty years out of date.

Another limit on Galileo's critical spirit was that he had trouble applying it to himself. Few people were convinced by Galileo's theory of the tides when he showed it off in Rome in 1615-16. Galileo must have been aware of the weaknesses of the theory, but once he got hold of it he could not let go. Gopnik's comparison between Galileo's temperament and that of his near-contemporary, John Dee, has been ripped apart by thonyc. I'm not going to try to put the pieces back together. But it does seem that the fault, if there is any, lies less with Gopnik than it does with his source. Gopnik could not have guessed that the biography he reviewed left out most of Dee's contributions to modern science, as thonyc seems to say it did.

Galileo's context

Eleven. The Dialogue was as much a literary achievement as a technical one. Indeed, “it uses every device of Renaissance humanism: irony, drama, comedy, sarcasm, pointed conflict, and a special kind of fantastic poetry.”

Twelve. Galileo's “primary education” was in such things as music, drawing, poetry and rhetoric; and this cultural context had an effect on his natural philosophy. It gave him a “competitive, empirical drive” and “intellectual practices of doubting authority and trying out experiments.” A tick for paying attention to context, but possibly a cross for doing so selectively. Gopnik does not mention Heilbron's thesis that Galileo's early literary tastes foretold his black-and-white approach to debates, his heroic self-image, and (remarkably) the absence of the crucial notion of “force” in his mechanics.
A determined critic might say that Gopnik has fastened on the cultural elements that make Galileo an ideal modern scientist, and ignored the rest.

Thirteen. Galileo framed his discoveries so as to appeal to patrons: “A Tuscan opportunist to the bone, Galileo rushed off letters to the Medici duke in Florence, hinting that, in exchange for a job, he would name the new stars [i.e. the moons of Jupiter] after the Medici.” Similarly, the telescope was not just a technical device for Galileo, but an “emblem and icon,” part of his “image.” These are key themes in Mario Biagioli's 1993 book Galileo, Courtier. True, Gopnik probably got this information second-hand, from Heilbron's biography. But it does not follow, as Darin Hayton has suggested, that he “ignores considerable recent work on Galileo.”

*******

Gopnik's main errors, in approximate order of seriousness, are to ignore Galileo's overstatement of the Copernican case, to ignore the conservatism and dogmatism that went with his critical spirit, to underestimate the sincerity of his religious faith, to neglect Heilbron's novel and striking thesis about the relevance of Galileo's early literary tastes to his later career, and to omit Mayer's point that Galileo made elementary legal mistakes during his trial. These errors should be seen alongside the many pleasant surprises listed above, of which the most gratifying are, in my opinion, the recognition that Galileo was vain and stubborn, that the Church supported much Copernican science in Galileo's time, that the Pope himself agreed to the publication of an anti-realist version of the Dialogue, and that Galileo's critical spirit was partly responsible for his rejection of Kepler's ellipses and his acceptance of a mechanical tidal theory, two moves that we now regard as major blunders. In my view, and keeping in mind that this is a popular article by a non-specialist author, the pros outweigh the cons by a clear margin.
If an unprejudiced non-historian reads Gopnik's article with care, there is a good chance that they will come away with a more nuanced and accurate view of the Galileo affair than they started with. In my next post I intend to look at some more general issues that the “Gopnik affair” raises about history-of-science communication.