Advocates of the new mechanistic philosophy of science have often emphasized practice-oriented aspects of mechanistic explanation, especially processes of discovery and experimentation. In this symposium, however, we argue that close attention to practice illuminates limitations of the standard characterization of mechanistic explanation advanced in philosophy of science. The canonical picture of mechanistic explanation is that biologists discover a phenomenon, relate it to a mechanism, decompose the mechanism into component entities (parts) and activities (operations), and show that together they generate the phenomenon. The resulting account of a mechanism is judged to be good if it picks out the entities and activities that actually produce the phenomenon in the world. This, however, leaves out many important features of the practice exhibited in fields of biology that are rightly described as involved in mechanistic explanation. By focusing on specific examples of scientific practice, this symposium will identify shortcomings of the canonical picture and suggest ways to develop accounts of mechanistic explanation that better fit scientific practice.
The quest for mechanistic explanation is often portrayed as beginning with the delineation of a phenomenon that then becomes the target of explanation. But delineating phenomena is often a complex process of discovery, one that involves experimental manipulations that can themselves address the questions scientists are posing. David Colaco will begin the symposium by exposing practices in which researchers intervene to manipulate phenomena, not parts of a mechanism, in order to solve the problem posed in their research. While such interventions can occur as a prelude to developing mechanistic explanations, they also occur in contexts in which developing mechanistic explanations is not the goal. Whether the focus is on phenomena themselves or on the parts and operations of mechanisms, most scientific research projects focus on what Daniel Burnston, in the second talk, characterizes as explanatory relations. These establish dependency relations between variables that characterize phenomenal or component properties, and in many cases identify relations between what might be viewed as activities within a mechanism and aspects of the phenomenon. These relations, however, are not just preparations for mechanistic explanations: they are crucial to evaluating proposed mechanistic explanations and are often sought in their own right.
In the third talk William Bechtel will pick up the issue of evaluating explanations. Among accounts of mechanistic explanation that emphasize norms, the focus is often on the mapping of representations of the mechanism onto the mechanism operative in nature. But such mapping is not something to which scientists have access; rather, they can only appeal to evidence and other epistemic considerations available to them. In the case of mechanistic explanation, this involves not only evidence supporting claims about the components but also evidence that the mechanism could actually account for the phenomenon, which sometimes takes the form of the explanatory relations discussed by Burnston. The final talk, by Morgan Thompson, turns to the question of what the adjective “mechanistic” contributes to “mechanistic explanation.” Some mechanists have contended that all explanations are mechanistic, making the adjective redundant. Thompson argues for restricting the scope so as to emphasize the distinctive contents and norms of the practice of mechanistic science. Such restrictions are important for a practice perspective, as they allow philosophers to focus on the distinctive practices pursued by scientists engaged in mechanistic explanation.
Recent accounts in the philosophy of scientific discovery (e.g., Craver and Darden 2014) have endeavored to apply the framework of the mechanist program to explain how discovery occurs in the life sciences, including neuroscience. That is, discovery in fields like neuroscience is described as the discovery of mechanisms and their activities. I argue that, while there are certainly cases that fit this characterization, it does not exhaust all cases from the discipline. In many cases, discovery is not best framed in terms of mechanisms. With this in mind, I will sketch an alternative mode of discovery. Using examples from the investigation of coordination behavior and its relation to the motor cortex, I will show that the crucial framing element is not a mechanism but a specific phenomenal-level problem that the researchers wish to solve or ameliorate. I will then show that successful phenomenal intervention – cases in which researchers manipulate the phenomena of interest in order to resolve the problem that frames their research – is a powerful mode of discovery. While this mode may lead to the uncovering of mechanisms as a byproduct, the experimental manipulations need not be characterized in terms of them; rather, they are directed at intervening on the phenomenon, not the components of a mechanism. I will draw out how the phenomenal and mechanistic modes differ, and how these differences are reflected in the character of the experimental practices associated with each. The two modes are not incompatible and may sometimes go together. Nevertheless, sometimes they do not, and without an account of the non-mechanist mode it is difficult to explain many legitimate instances of discovery in neuroscience.
Mechanists often suggest that a fully developed mechanistic explanation, as portrayed in a mechanism diagram or schema, is the end goal of a research program. This view entails that other ways of representing system properties, including data graphs, are at best subsidiary in the project of giving explanations. Data graphs perhaps constrain hypothesizing about, or provide evidence for, particular mechanistic accounts, but they are not themselves explanatory. I give a practice-based argument that this standard view is false. Many research papers in active science offer explanations despite presenting no mechanistic diagrams or schemas at all, or employing them only heuristically in the search for other explanatory representations. I focus on one such type of representation, which I call “explanatory relations.” Explanatory relations are quantitative relationships between variables or system properties, and are often shown in individual data graphs that exemplify the relationship taken to be important. I discuss cases of the search for explanatory relations in mammalian chronobiology to argue, first, that the role of explanatory relations in explanation is not reducible to providing constraints on, or evidence for, hypotheses about the mechanism—i.e., the parts, operations, and organization of the system producing the phenomenon. Second, I argue that it is equally inaccurate to describe explanatory relations in terms of robust generalizations or laws. What matters in representations of explanatory relations is the pattern of quantitative relationships between variables exemplified in a data graph. These patterns are often vital in explaining aspects of the phenomenon. These arguments reveal a flaw in standard approaches and debates about mechanistic explanation: what is important for understanding explanation in biology is not which kinds of representations are most fundamental to explanation.
Instead, understanding practice requires analyzing the content and employment of different forms of representation, and how they relate to each other in explanations in particular contexts.
One task for philosophical accounts of explanation is to identify what differentiates good explanations from bad ones. Focusing on mechanistic explanation, Craver argues that this requires an ontic account of explanation in which the actual activities of the entities constituting the mechanism generate the phenomena. On this view, scientists’ representations of mechanisms provide good explanations only insofar as they map onto the ontic explanations. Scientists, however, do not have access to ontic explanations except as mediated by their representations, and yet they face the challenge of differentiating good and bad explanations. Traditional philosophy of science points to a number of considerations figuring in the normative assessments scientists generate, such as fit with other well-supported explanations, both mechanistic (e.g., at other levels of organization) and non-mechanistic (e.g., evolutionary descent). Moreover, there is an obvious source of evidence relevant to assessing mechanistic hypotheses: whether there are parts of the sorts proposed and whether they can, in appropriate circumstances, perform the operations posited. This suggests the mapping relation that advocates of ontic explanation invoke, but what matters is how scientists secure evidence for parts and operations. Especially important for mechanistic explanations is evidence that the posited mechanism could produce the phenomenon in question. Often this takes the form of demonstrating explanatory relations, as discussed by Burnston; in scientific practice, a multitude of research projects advancing different explanatory relations is invoked to support a given proposed mechanistic explanation. Another form is the demonstration, typically through mathematical modeling, that the mechanism would generate the phenomenon. In practice, researchers often develop simplified models designed to elicit the core explanatory relations that enable the mechanism to generate the phenomenon.
I will illustrate these modes of assessment using recent research on circadian rhythms.
Although some mechanists worry that limiting the scope of mechanistic explanation to only a subset of all explanations will “marginalize” it, I argue that only by limiting its scope can accounts of mechanistic explanation describe scientific practices and norms in an informative way. When alternative theories of explanation are proposed on the basis of specific examples from the biological sciences, many mechanists reply in one of two ways: (i) the purported counterexample is not actually explanatory, so the alternative theory is not a theory of explanation; or (ii) the purported counterexample is actually mechanistic, so the alternative is not a genuine alternative to mechanistic explanation. These two responses are unhelpful not only in the dialectic of the debate, but also for providing descriptively adequate and normatively satisfying theories of explanation. Mechanists have begun discussing examples of network models in graph theory (graphs consisting of nodes and the connections between them, used to describe the structure of a system and system-level properties) to analyze networks in the brain (Sporns 2010) or protein networks (Alon 2007).
Craver (2014) gives the first kind of response when he argues that network models are not a new kind of explanation, but rather a descriptive tool useful for scientists to describe organization, one that might contribute to mechanistic explanation. In an ethnographic study of two systems biology labs, MacLeod & Nersessian (2015) found that these labs often aim to model a system so as to intervene on a particular aspect of it, usually at the expense of distorting other parts of the model through the process of parameter-fitting. I argue that these models do not fit into Craver’s phenomenal-mechanistic dichotomy, and so his version of mechanistic explanation is not descriptively adequate.
Zednik (2014a, 2014b) responds in the second way by suggesting that network models do indeed provide mechanistic explanations. This response requires the mechanist to expand many aspects of the mechanistic picture to the point of triviality, at the expense of respecting scientific practices. In particular, these mechanists often treat nodes in network models as straightforward components in a mechanistic explanation, ignoring the fact that nodes are often defined by modelers arbitrarily (e.g., through random parcellation schemes). Further, node choice significantly affects the extent to which certain system-level properties (e.g., small-worldness) appear in the model (Zalesky et al. 2010). I argue that limiting the scope of mechanistic explanation makes it a more descriptively adequate account of scientific activities (e.g., explanation, modeling) and also provides more consistent, contentful norms for philosophers and scientists interested in successful explanations. Reducing its scope allows the theory to contribute, along with other theories of explanation, to a more descriptively adequate account of modeling and explanation in the biological sciences.