5th Biennial Conference of the Society for Philosophy of Science in Practice (SPSP) Aarhus 2015

Parallel Session 3D
Wednesday, 24 June 2015, 15:30–17:30 in G3
Session chair: Mieke Boon (University of Twente)
Expanding the Experimental Realm: An Account of Descriptive and Functional Experimentation in the Natural Sciences
  • Stephan Guttinger (Egenis, University of Exeter)

Abstract

Since the 1980s, philosophers of science have moved away from a narrow understanding of experimentation as a test instance for theory-derived hypotheses. A key product of this shift in perspective was the idea that besides theory-driven experimentation (TDE) there is also ‘exploratory experimentation’ (EE), a practice free of theory guidance that can be used to explore new phenomena or regularities not captured by existing theories. Intriguingly, the TDE/EE distinction seems to be where the process of expanding the picture of the experimental realm has stopped: it has not been supplemented with further or alternative distinctions between experimental practices.

In this paper I take up again the task of expanding our understanding of the realm of experimental practices by looking more closely at a distinction that is often used by scientists but has so far received little attention in philosophy of science, namely the distinction between descriptive and functional experimentation (DE and FE, respectively). The goal will be to spell out what this distinction amounts to in functional terms (i.e. the role the different practices play in the scientific context) and to identify some of the distinguishing features of the different practices.

To develop a more detailed understanding of the DE/FE distinction, I will analyse an experimental system that can be used for both practices, namely the in vitro binding assay. The analysis of the different uses of this system will show that the DE/FE distinction does not map one-to-one onto the TDE/EE distinction, implying that it is indeed an independent category of experimental practices.

The analysis of the in vitro binding assay will also show that an understanding of the role FE plays in scientific practice has to be tightly linked to Robert Cummins’ account of functional analysis; using FE, scientists don’t just hunt for causal connections but for the causal roles that particular entities or processes play in a larger system. DE, on the other hand, does not yield causal knowledge about the system of interest, even though it makes use of the same interventions as FE. This contrast between general causal insight and insight into the causal role of an entity or process marks one of the key differences between FE and DE. Characterising the practices in this way also highlights the need for a more elaborate understanding of how ‘intervention’ or ‘manipulation’ relates to the causal insight generated in the experimental sciences.

Is Rigorous Measurement of Statistical Evidence Possible?
  • Veronica Vieland (The Research Institute at Nationwide Children's Hospital)

Abstract

Statistical analysis is an increasingly important component of scientific research in the biological sciences and elsewhere, particularly in the era of genomics and other data-intensive areas of investigation. Statistics can serve many purposes (hypothesis testing, parameter estimation, etc.), but for working scientists, the primary outcome of a statistical analysis is often the strength of the evidence for or against hypotheses of interest on given data. Arguably, this is what drives the ubiquitous scientific practice of interpreting the p-value as if it were a measure of evidence, and more than that, as if it were a calibrated measure – that is, one that can be meaningfully compared across experiments, across time points as data accrue, and even across experimental domains. It is well known that relying on p-values as evidence measures can cause errors in the interpretation of data, and it is easy to show that these errors can be substantial, leading to entirely wrong conclusions. Yet the practice persists, indicating a compelling scientific need for evidence measurement in the absence of a satisfactory statistical measurement procedure.
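As a minimal illustration of how large these errors can get (the normal model and all symbols here are added for illustration, not drawn from the abstract): take a sample of size n from N(μ, σ²) with σ known, and fix the test statistic z = √n x̄/σ at 1.96, so that p ≈ 0.05 at every sample size. The likelihood ratio of a fixed alternative μ₁ > 0 against H₀: μ = 0 is then

\[ \mathrm{LR}(n) \;=\; \exp\!\left( \frac{\sqrt{n}\, z\, \mu_1}{\sigma} \;-\; \frac{n\, \mu_1^2}{2\sigma^2} \right), \]

which tends to 0 as n grows. The very same p-value that reads as evidence against H₀ in a small study thus corresponds, in a sufficiently large one, to arbitrarily strong evidence for H₀ over any fixed alternative (the Jeffreys–Lindley effect), so the p-value cannot be functioning as a calibrated evidence measure.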

Statisticians and philosophers alike have considered the nature of statistical evidence in a literature extending back to the early 20th century, and various definitions of statistical evidence have been proposed (e.g., the likelihood ratio [LR] or Bayes factor [BF]). But if there is any general consensus on the subject, it seems to be that rigorous measurement of statistical evidence is, in general, impossible. In this paper I will motivate the underlying problem anew from the nomic measurement perspective [Chang, Inventing Temperature], starting with a fundamental measurement question: how can we be sure that our measure of evidence correctly maps onto the underlying quantity of interest, the evidence itself? We need a way to rigorously map observable (or computable) features of statistical systems (such as LRs or BFs) onto the true underlying evidence via some function. But how can we discover or verify that function without first having some independent means of knowing what the true evidence is? Posing the question in this way suggests the relevance of precedents from physics, especially the development of the Kelvin temperature scale; it also invokes measurement theory as developed by Suppes, Narens and others, which has, to my knowledge, never been applied in this context. I will argue that the problem of evidence measurement is tractable, but only once we take a step back from standard statistical precepts and adopt the measurement perspective. I will focus here on philosophical aspects of “live” (not yet settled) nomic measurement problems, which present an opportunity for philosophers of science to make a direct and very practical contribution to the day-to-day practice of scientific research.
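One compact way to state the mapping problem (the notation here is added for clarity, not Vieland’s own): write S(x) for a computable feature of the data, for instance

\[ \mathrm{LR}(x) = \frac{p(x \mid H_1)}{p(x \mid H_0)}, \qquad \mathrm{BF}(x) = \frac{\int p(x \mid \theta)\,\pi_1(\theta)\,d\theta}{\int p(x \mid \theta)\,\pi_0(\theta)\,d\theta}, \]

and write Ev(x) for the true but unobservable strength of evidence. Nomic measurement then asks how we could discover or verify a function f with Ev(x) = f(S(x)) when we have no independent access to Ev itself: the same circularity Chang diagnoses in the early history of thermometry, where thermometers had to be calibrated without a prior standard of temperature.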

Measurement and Metrology Post-Maxwell: A Historical, Philosophical, and Mathematical Primer
  • Daniel Mitchell (University of Cambridge)

Abstract

It is well known to historians that the Committee on Electrical Units convened at the International Electrical Congress of 1881 endorsed the CGS electromagnetic system of units for practical electrical measurement. Narratives of electrical standardization typically move on to describe the intricate measurements required to establish the magnitudes of the units, particularly the ohm, and the development of associated material standards, in the 1880s and beyond. Those that tackle a more extended time period typically structure their narrative around key decisions taken at international meetings, which, quite naturally, informed subsequent experimental work at the local level.

This congress-centric historiography, however, has left much animated discussion about systems of electrical measurement and their scientific merits unexamined, particularly when practical needs predominated over strictly scientific ones. Such discussion incorporated a wide range of established mathematical principles and empirical laws, novel mathematical notations and physical theories, plausible physical conceptions, and even the latest quantitative data concerning electrical and magnetic media. No single actor could have laid claim to mastery over all these aspects of the field. Misapprehension abounded as metrology slid into metaphysics.

This paper is intended as a primer to encourage historians and philosophers of science to explore the issues that came to light, many of which were either left unresolved or remain subject to debate today. It centres on the French response to Maxwell’s Treatise, which provided an essential touchstone for the many works on absolute electrical units that appeared during the 1880s. Maxwell’s new form of dimensional analysis as presented in the Treatise left many issues open, not least the viability of the mathematical grammar itself, and the variety of possible inferences and interpretations to which it gave rise.

The disciplinary separation of mathematical and experimental physics in France, as well as the distinctness of electrical science from electrical practice, resulted in an impressive diversity of analysis. Combined with characteristically French philosophical sensibilities, this diversity provides a conceptually rich point of entry into Europe-wide discussions concerning the scientific foundations of electrical metrology and measurement during the late nineteenth century and beyond.

The paper starts with French mathematicians’ implicit attribution of the claim that ‘resistance is a speed’ to members of the BA Committee on electrical units; these mathematicians were sceptical of the practice-oriented British preference for the electromagnetic system (and, more generally, of Maxwell’s electrodynamics). I investigate the veracity of this attribution, which turns on the possible interpretations of dimensional formulae: operational, physical, and mathematical. This leads me to a similar analysis of the various interpretations and derivations of the physical constant nu, and finally into electrical ontology through the relationship between charge and current and the role of the medium in electrical and magnetic effects. In this way I lay the foundations for a new reading of Maxwell’s Treatise in which his concern with systems of electrical units, dimensional analysis, and methods of measurement connects with the familiar story about field theory and the electromagnetic theory of light.
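A standard dimensional reconstruction shows what was at stake in the disputed claim (the derivation below follows the usual CGS conventions and is a sketch added here, not a quotation from the sources discussed in the paper). In the electromagnetic system, Ampère’s force law with unit constant gives current the dimensions [I] = M^{1/2} L^{1/2} T^{-1}, hence charge [q] = [I]·T = M^{1/2} L^{1/2} and potential [V] = [energy]/[q] = M^{1/2} L^{3/2} T^{-2}, so that

\[ [R] \;=\; \frac{[V]}{[I]} \;=\; \frac{M^{1/2} L^{3/2} T^{-2}}{M^{1/2} L^{1/2} T^{-1}} \;=\; L\,T^{-1}, \]

the dimensions of a speed. Since the electrostatic system instead gives [q] = M^{1/2} L^{3/2} T^{-1} via Coulomb’s law, the ratio of the two units of charge, the constant nu, likewise has dimensions L T^{-1}, and its measured value agrees with the speed of light; what, if anything, these formal speeds are speeds of is precisely the interpretive question the paper pursues.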