5th Biennial Conference of the Society for Philosophy of Science in Practice (SPSP) Aarhus 2015

Parallel Session 6F
Friday, 26 June 2015, 09:00–11:00 in Koll G
Session chair: Hasok Chang (University of Cambridge)
The Epistemological Role of Systematic Discrepancies
  • Teru Miyake (Nanyang Technological University)

Abstract

Recent work by philosophers of science with a strong concern for scientific practice has emphasized the epistemological role played by systematic discrepancies between calculations and observations (see, e.g., the work of George Smith, Hasok Chang, and Eran Tal). George Smith (2014), in particular, argues that the epistemological justification for gravitational theory comes not primarily from agreement between calculation and observation, but from the discovery of physical sources for each systematic discrepancy between them.

This spotlight on systematic discrepancies can be seen, in some respects, as a revival of a nineteenth-century tradition in British philosophy of science that emphasized “residual phenomena”, one that had a deep influence on prominent scientists of the time. In the Treatise on Natural Philosophy, William Thomson and Peter Guthrie Tait credit John Herschel with noticing the epistemological role played by residual phenomena, and add that “it is here, perhaps, that in the present state of science we may most reasonably look for extensions of our knowledge; at all events we are warranted by the recent history of Natural Philosophy in so doing.” The notion of residual phenomena, and the procedure by which they are used to acquire knowledge of nature, are introduced by Herschel in his Preliminary Discourse on the Study of Natural Philosophy. William Whewell later terms this procedure the “method of residues” and discusses it at length in his Philosophy of the Inductive Sciences. This term is probably best known to contemporary philosophers through John Stuart Mill’s discussion of it in his System of Logic. There are, however, subtle but important differences between the views of these philosophers with regard to the epistemological role of residual phenomena, the most significant being the degree of emphasis on a quantitative, as opposed to a qualitative, characterization of the phenomena.

The method of residues is traditionally associated with scientific discovery, which may account for its being largely overlooked by philosophers of science in the twentieth century, with their emphasis on justification over discovery. Smith (2014), however, views systematic discrepancies as being essential to the justification of gravity theory. Herschel’s characterization of residual phenomena, which emphasizes the quantitative characterization of the phenomena, the role of systematic error, and the importance of residual phenomena in verification, bears a strong resemblance to Smith’s view—perhaps unsurprisingly, given that the views of both Smith and Herschel are arrived at through an investigation of the development of astronomy after Newton. This paper will examine Herschel’s view of residual phenomena, trace out its relation to the later views of Whewell, Mill, and Thomson and Tait, and finally compare it to the contemporary view of Smith. I will focus, in particular, on exactly how residual phenomena are supposed to play a role in justification, not merely in discovery. I will also consider possible reasons for the demise of this tradition in the late nineteenth century, including the issue of whether the view will generalize to fields other than gravity theory, particularly microphysics.

(Re-)Discovering Elementary Particles at CERN by Diagnostic Causal Inferences
  • Adrian Wüthrich (Technical University Berlin)

Abstract

Basing myself on the publications by the UA1 and ATLAS collaborations at CERN (1983, 2012) and on some unpublished documents from the ATLAS collaboration's internal communication (2010), I argue that the detection or discovery of elementary particles such as the W or the Higgs boson is best interpreted as the application of diagnostic causal inferences. Diagnostic causal inferences reach the conclusion that a particular type of cause was instantiated in a given situation from the observation that some particular type of effect was instantiated. Such an inference rests on the validity of the cause-effect relationships that have to be presupposed and on the exclusion of alternative causes. I will give an account of how the ATLAS and UA1 collaborations were able to perform reasonably well-justified diagnostic causal inferences, and in what sense this amounted to a (re-)discovery of the W boson in 1983 and 2010 and of the Higgs boson in 2012.

My account of the cases shows how causal reasoning can be employed even in situations where the existence of the entities involved in a causal relationship has yet to be established. Causal reasoning is usually concerned only with establishing causal relationships between already known entities or factors, the open question being whether one of them causes the other. Moreover, the methods for answering these questions presuppose, rather than infer, the existence of the involved entities. For instance, to establish the causal relevance of a factor A for a factor B by John Stuart Mill's or similar methods of difference, one has to know of a situation in which A is instantiated and of a situation in which it is not. It is hard to see how such knowledge could possibly be available without even knowing of the existence of the objects involved in the instantiation of factor A.

Peter Lipton (1991, 2004) saw such problems as a decisive reason for the need to supplement causal reasoning with explanatory considerations. He argued that when it came to establishing the existence of entities, inference to the best explanation was indispensable. By contrast, I take my reconstruction of the W and Higgs cases to show that the scope of causal reasoning includes, to a substantial extent, the establishment of existence claims. Along the way, I hope, my reconstruction elucidates the function of data selection, highlights the importance of the principle of causality in recent and current high energy physics, and shows how the CERN researchers can deal with the problem of unconceived alternatives.

What Would Be a Cultural Logic of Conceptual Discovery?
  • Jouni-Matti Kuukkanen (University of Oulu, Philosophy)

Abstract

The study of conceptual change has attracted great and growing interest in recent decades. It is not difficult to name several traditions that have investigated the problem of conceptual change from their mutually incompatible perspectives. Starting from the oldest, Lovejoy’s writings on unit ideas and the long tradition of the history of philosophy provide two answers to what concepts are and how they transform in history. More recently, post-Kuhnian philosophers of science debated intensively in the 1970s and 1980s whether the meanings of terms (concepts) and conceptual schemes change in the history of science. The most recent and most interesting approach is the so-called cognitive history and philosophy of science, which explicitly applies models of cognitive science in the context of the history of science. Finally, one should not forget the German tradition of Begriffsgeschichte, which emphasises the social aspects of conceptual change.

All these traditions provide different answers to what the concept is that changes, what a change of that concept is, and what kinds of historical examples can be given of conceptual stability, change and replacement. These questions form the core of what might be called the Philosophy of Conceptual Change.

It has become evident that the answers given to these fundamental questions partly determine the image of science that emerges and the kind of narrative of science that is written in practice. I outlined my initial view a few years ago in my paper “Making Sense of Conceptual Change” (History and Theory 47, 351-372). In my talk in Aarhus I continue investigating the nature of conceptual change, focusing on what might be termed the ‘cultural logic of conceptual creation and discovery.’

Discovery and creation have typically been understood as mystical phenomena that therefore defy rational explanation. This attitude is epitomised, for example, in the distinction between the logic of discovery and the logic of justification, and in Popper’s philosophy. Even the early Kuhn understood conceptual change as a sudden gestalt switch. Cognitive historians and philosophers of science have provided some explanations of conceptual creation, for example in the form of mental modelling of consecutive conceptual schemes (e.g. Hanne Andersen, Nancy Nersessian) and of reasoning that gives birth to new ideas and concepts (e.g. Nancy Nersessian, Paul Thagard). However, their focus is usually still on individual psychological phenomena. Begriffsgeschichte studies the cultural phenomena behind conceptual changes, but unfortunately its theories of concept and conceptual change remain implicit.

In my talk I outline the central problematique and challenges of the Philosophy of Conceptual Change as described above. More importantly, I attempt to schematise the cultural conditions that precede the emergence of a new concept. The main hypothesis is that this process implies continuity with respect to previous traditions. My view is that conceptual birth is a dynamic and creative process, but one that can be rationally understood and explained. R. G. Collingwood formulated the idea as follows: “Any process involving an historical change from P1 to P2 leaves an unconverted residue of P1 encapsulated within an historical state of things which superficially is altogether P2” (An Autobiography, 2002, 141). Another, more specific lead is given by Imre Lakatos in Proofs and Refutations (1976), in which dialogue itself is understood as a form of conceptual innovation, at the end of which a new concept is born. In other words, a radically new concept may emerge through a complex cultural process of argumentation and criticism.

Theoretical Bias of the Standard Research Practice in Social Psychology
  • Taku Iwatsuki (University of Pittsburgh, History and Philosophy of Science Department)

Abstract

In this paper, I argue that the standard research design in social psychology is biased toward the confirmation of simple group-level effects that do not necessarily reflect our psychological reality. I also describe alternative research designs that are less likely to suffer from this bias. I support my points with examples from actual social-psychological studies.

One of the main goals of empirical social-psychological research is to test hypothesized causal relations among environmental, psychological, and behavioral variables. To this end, social psychologists often conduct randomized controlled experiments and analyze the data by means of analysis of variance (ANOVA). In a typical social-psychological experiment, there are 2 to 4 independent variables, each of which takes 2 to 4 values, and one dependent variable. The typical unit of analysis is the individual person, and an experimental group typically consists of 20 to 30 people.

This standard design has at least two kinds of bias. The first is that the design tends to confirm simple effects. Social psychologists typically use independent variables that take 2 to 4 values, even when it is possible to devise experimental treatments corresponding to finer-grained values, because the more values a variable takes, the more participants are needed, and the number of available participants is limited. Moreover, by using analysis of variance, which treats independent variables as categorical, social psychologists lose information about the order of, and the intervals between, the values of the original variables. Therefore, the hypotheses that can be tested with the typical experimental design are limited to those that are relatively simple and less informative. There is, however, no a priori reason to assume that the causal effects social psychologists study are simple.
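The contrast behind this first bias can be made concrete with a minimal sketch in Python. The scenario, effect sizes, and group sizes below are entirely hypothetical assumptions for illustration (they are not taken from any study discussed here): a one-way ANOVA treats an ordered treatment variable as a set of unordered categories, whereas a regression on the numeric levels retains the order and spacing and can therefore test a finer-grained trend hypothesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ordered treatment with three levels (e.g. low/medium/high),
# 25 participants per group, roughly the typical design described above.
levels = np.array([1.0, 2.0, 3.0])
groups = [0.5 * lvl + rng.normal(0.0, 1.0, size=25) for lvl in levels]

# One-way ANOVA treats the levels as unordered categories: it only asks
# whether the group means differ at all; the ordering 1 < 2 < 3 plays no role.
f_stat, p_anova = stats.f_oneway(*groups)

# A regression on the numeric levels keeps the order and spacing, so it can
# test the more informative hypothesis of a linear dose-response trend.
x = np.repeat(levels, 25)
y = np.concatenate(groups)
slope, intercept, r_value, p_trend, stderr = stats.linregress(x, y)

print(f"ANOVA:        F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Linear trend: slope = {slope:.2f}, p = {p_trend:.4f}")
```

The point of the sketch is only that the two analyses answer different questions about the same data: the ANOVA can at most confirm that the groups differ, while the trend analysis uses the structure of the independent variable that the categorical treatment discards.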

The second bias is that the standard design is more suitable for finding group-level effects than individual-level effects, because what random assignment establishes in principle is not the equivalence of individual participants under different treatments but the equivalence of experimental groups in the distribution of the values of variables. Random assignment therefore licenses comparisons between experimental groups, not between individual participants. Social psychologists have to make some assumption about the causal homogeneity of individual participants in order to infer individual-level effects from group-level effects. Such an assumption, however, is unlikely to hold in the domain of social psychology, where there are many individual-difference variables, e.g., demographic or personality traits, that would interact with the independent variables.
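This second point can likewise be illustrated with a small simulation sketch, again built entirely on hypothetical numbers rather than any real study: the treatment has opposite effects on two simulated subpopulations, random assignment still yields comparable groups, and the resulting group-level mean difference is a single average that masks the individual-level heterogeneity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # participants per experimental group, as in the typical design

# Hypothetical individual-level causal effects: the treatment raises the
# outcome by 2 for one subpopulation and lowers it by 1 for the other.
individual_effects = np.where(rng.random(2 * n) < 0.5, 2.0, -1.0)
baseline = rng.normal(5.0, 1.0, size=2 * n)

# Random assignment equalizes the groups' distributions on average,
# but it does not match individual participants across conditions.
assignment = rng.permutation(np.array([0] * n + [1] * n))
outcome = baseline + assignment * individual_effects

group_effect = outcome[assignment == 1].mean() - outcome[assignment == 0].mean()
print(f"Group-level mean difference:      {group_effect:+.2f}")
print(f"Individual-level effects present: {np.unique(individual_effects)}")
```

Inferring from the single group-level difference to any individual participant's effect would require exactly the homogeneity assumption discussed above, which the simulated population violates by construction.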

Next, I describe two possible directions social psychologists can pursue. First, they may increase the number of participants in a single experiment. This would allow them to use independent variables that take more values and to investigate their effects on a dependent variable in a more informative and finer-grained manner. Second, social psychologists can use case-study designs rather than group-comparison designs, which make it possible to acquire detailed individual-level data. My suggestion is not that social psychologists abandon the standard design, but that they enrich their toolbox, admitting these designs, or other possible designs, alongside the standard one.