Scientific uncertainty is inherent in all research endeavors and often poses major practical challenges for the application of scientific knowledge in decision-making. In many contexts, however, uncertainties play a role not only in the application of scientific models but also in their development. I will argue that precautionary principles must already be applied in the development of scientific models, not merely when those models are used to inform decision-making. If this conclusion holds, it constitutes a good example of how scientific practice can benefit from philosophical considerations. I will contextualize my claims by discussing a case from the field of food toxicology.
Over the past years, improvements in analytical methods have led to the detection of an increasing number of previously unknown substances in food products. Estimates suggest that there may be several thousand, if not tens of thousands, of such chemicals. Their presence may be due to degradation processes, migration from packaging material, or impurities introduced during manufacturing. These substances are usually present in low or very low concentrations, and their toxicity, as well as their potential large-scale effects on human health, is unknown. Assessing the risk of these non-intentionally added substances in food products has become a major challenge in toxicology.
One of the best current scientific approaches to evaluating the potential risks of incidental low-concentration substances is the so-called Threshold of Toxicological Concern (TTC) concept. The TTC provides a probability-based risk-assessment tool that rests on the assumption that risk estimates for substances of unknown toxicity can be derived from toxicological data on structurally similar substances.
I will explain how the approach works, what it is supposed to accomplish, and what kinds of uncertainties arise in the context of its application. I claim that the TTC provides a useful tool for assessing quantifiable uncertainty (i.e., risk), but that there are additional uncertainties that cannot be treated using probability-based approaches. These include uncertainties about unconceived outcomes, uncertainties regarding the underlying theoretical assumptions (e.g., about structural similarity or about the extrapolation of animal toxicity data to humans), and controversies in the scientific community. These uncertainties are very often intimately connected to normative questions.
I will conclude that it remains questionable whether the TTC provides an adequate tool for the assessment of potential health hazards if one evaluates the approach against the standards of the precautionary principle. In accordance with Sprenger (2012), I will argue that precaution has substantial implications for model building in science. I will use the case of the TTC to establish the more general claim that the precautionary principle should not be seen merely as a decision rule; it should also play an important role in responding to model uncertainty. Precaution must already be applied at the stage at which we evaluate the epistemic robustness of scientific models.
The ideal of value-free science has come under increasing criticism in the philosophy of science. Although it is not a new argument, a significant challenge stems from what has come to be known as the ‘inductive risk argument’ (Douglas 2009). According to this argument, because the acceptance or rejection of a hypothesis is unlikely to happen with certainty, scientists must consider whether there is enough evidence to do so. This involves considering not only the likelihood of error but also how bad the consequences of error would be. When the consequences bear on public policy, this requires evaluating the ethical consequences for those potentially affected. It is thus necessary and desirable for scientists to make ethical value judgments about what sorts of errors are acceptable. Moreover, because scientific reasoning is affected by uncertainty at different stages, proponents of the inductive risk argument maintain that ethical and social value judgments are necessary not just at the moment of accepting theories or hypotheses but also at earlier stages of research, such as those involving the characterization and interpretation of the evidence (Douglas 2009).
Although the argument from inductive risk has been embraced by many as a challenge to the value-free ideal of science, we contend that this is not the case. We argue that for an account of the role of contextual values in scientific decision-making to successfully challenge the value-free ideal, it must address two legitimate concerns motivating that ideal. First, it must tackle the epistemological worry that the use of contextual values in scientific reasoning will lead to wishful thinking. Second, it must address the political concern that having scientists make social and ethical value judgments in research undermines democratic values. The argument from inductive risk aims to address the epistemological concern by narrowly limiting the legitimate role of contextual values in scientific reasoning. This move, however, hinders proponents’ ability to justify the claim that contextual values are necessary in scientific decision-making. Insofar as the necessity of contextual values is undermined, the inductive risk account of values in science is on par with the value-free one. Moreover, because proponents of the inductive risk argument, unlike defenders of the value-free ideal, seem unable to address the political concern, the value-free ideal seems preferable. We argue that a successful challenge to this ideal must ultimately reject the assumption that values cannot legitimately play evidentiary roles, in order to more adequately overcome both the epistemological and political concerns that motivate the ideal. In the final section we show how this might be carried out.
In numerous disciplines, when scientists report quantitative experimental results, they distinguish between the statistical uncertainty and the systematic uncertainty associated with their measurements, and provide quantitative estimates of both. In this paper I focus on the practice of estimating systematic uncertainty as carried out within experimental high energy physics (HEP). I argue that the estimation of systematic errors in HEP should be regarded as a form of quantitative robustness analysis, understood (following Wimsatt 1981) in terms of four component procedures: (1) analysis of a variety of independent processes; (2) identification of invariants in the outcomes of those processes; (3) determination of the scope and conditions of such invariance; and (4) analysis and explanation of relevant failures of invariance. My analysis employs as an interpretive heuristic the secure evidence framework, developed in Staley 2004 (see also Staley 2014) as an approach to explaining the evidential value of robustness.
Although the quantitative estimation of systematic uncertainty is a common practice in many disciplines, it has received little attention from philosophers of science (but see Tal 2012). By providing a philosophical explication of this practice in terms of robustness analysis, I hope to clarify its epistemological purpose, thus providing potential guidance in ongoing debates amongst particle physicists over the appropriate methodology for estimating systematic uncertainty.
Philosophical neglect of this issue is unfortunate, for discussions of systematic uncertainty open a remarkable window into experimental reasoning. Even cursory presentations of systematic uncertainty estimates will note the main sources of systematic uncertainty. More careful reports detail both the ways in which systematic uncertainties arise and the methods by which they are assessed. Such discussions require forthright consideration by experimenters of the body of knowledge that they bring to bear on their investigations, the ways in which that knowledge relates to the conclusions they present, and the limitations on that knowledge. This process is epistemologically crucial to the establishment of experimental knowledge.
Moreover, philosophical insight regarding the estimation of systematic uncertainty could have significant practical value. Presently, there is no clear consensus across scientific disciplines regarding the basis or meaning of the distinction between statistical and systematic uncertainty, despite some concerted efforts surveyed in this paper. Scientists in HEP and other fields also debate the proper statistical framework in which systematic uncertainty should be evaluated, a debate with important philosophical aspects. It is the contention of this paper that some progress may come from regarding the estimation of systematic uncertainty as an instance of robustness analysis applied to a model of an experiment or measurement, the epistemic value of which concerns the security of evidence claims subjected to such analysis.
The plan of the paper is as follows. I begin with a discussion of the distinction between systematic and statistical uncertainty, then turn to debates in HEP regarding the appropriate statistical framework for the estimation of systematic uncertainty, highlighting the importance of philosophical insights for scientific practice in this regard. I then outline the secure evidence framework to be employed in my analysis. Finally, I present my argument for viewing systematic uncertainty estimation as quantitative robustness analysis, showing how such analyses in HEP target both inferential and measurement robustness (as these terms have been articulated by Woodward 2006), and conclude with some tentative suggestions regarding how the present analysis might illuminate the scientific debates previously mentioned.