5th Biennial Conference of the Society for Philosophy of Science in Practice (SPSP) Aarhus 2015

Parallel Session 4D
Thursday, 25 June 2015, 09:00–11:00 in G3
Session chair: Lena Kästner (Humboldt-Universität zu Berlin)
Laws and Mechanisms: The Convergence of Two Explanatory Accounts in Neuroscientific Practice
  • Philipp Haueis (Berlin School of Mind and Brain, Max Planck Institute for Human Cognitive and Brain Sciences)

Abstract

This paper belongs to a larger project entitled “meeting the brain on its own terms”, which aims to show how exploratory experiments in human brain research—particularly functional neuroimaging—can help neuroscientists to develop new concepts and to formulate principles of brain organization (Author 2014). In this paper, I propose that organizational principles can be seen as specific kinds of neuroscientific laws. Mechanistic accounts, in contrast, hold that biological explanations are not law-like because they pick out properties that are contingently produced by evolution, allow for exceptions under nonstandard conditions, and vary in scope depending on the research context (Craver 2007). Such criticisms target philosophical conceptions—like the deductive-nomological model—according to which scientific laws are universally quantified sentences describing the states of affairs in their domain of application without exception. Instead of addressing the metaphysical question of what scientific laws are, however, pragmatic accounts (Lange 2000a) have given priority to the roles that laws play in scientific practice (e.g., support of counterfactuals or inductive confirmation).

A comparison of Craver’s mechanistic and Lange’s pragmatic-nomological account of explanation with regard to the role of generalizations in neuroscientific practice reveals a convergence on three levels. Firstly, Lange has refuted arguments against laws in functional biology by defending a normative conception of ceteris paribus laws and by arguing that explanations of the functions an organism presently exhibits are independent of its evolutionary history (Lange 2000a, 2002). By transferring these arguments to neuroscientific explanations, I show that Craver’s concept of mechanism fulfills Lange’s formal criteria for natural laws (compare also Craver and Kaiser 2013). Secondly, both authors defend the autonomy of the special sciences by arguing that nonfundamental explanations pick out causally efficacious, higher-level phenomena (Craver 2007, ch. 6) and that generalizations with independent counterfactual stability pick out different forms of necessity (physical, biological, psychological, etc.; cf. Lange 2000a, ch. 3). Thirdly, mechanism sketches—partial descriptions of the causal structure of a mechanism—guide the experimental search for the mechanistic parts of the explanandum phenomenon. They thereby fulfill the same role as Lange’s conceptual outlooks, from which researchers predict new patterns with one law rather than with another law of the same domain that makes otherwise empirically equivalent predictions (e.g., the Boyle–Charles and van der Waals laws for gas behavior under normal pressure; cf. Lange 2000b). Sketching a mechanism or applying a conceptual outlook prospectively commits researchers to certain experimental results, so that both require revision if the anticipated results do not occur.
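For concreteness, the two gas laws mentioned above can be written in their standard textbook forms (a and b are the van der Waals constants for the gas in question; this gloss is mine, not the abstract's):

    \[ pV = nRT \qquad \text{(Boyle–Charles law, ideal-gas form)} \]
    \[ \Bigl(p + \frac{a n^2}{V^2}\Bigr)(V - nb) = nRT \qquad \text{(van der Waals law)} \]

Under normal pressure the correction terms a n^2/V^2 and nb are negligible, so the two laws issue empirically equivalent predictions in that domain; which of them researchers use to project new patterns then depends, on Lange's account, on their conceptual outlook.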

Finally, two examples show that the results of my comparison apply to neuroscientific practice. I briefly discuss how functional connectivity patterns explained by Hebb’s law—“neurons that fire together, wire together”—are counterfactually stable under alternative evolutionary trajectories. I also sketch how the discovery of previously unknown neurotransmitters at first seemed to refute Dale’s principle, which asserts that a neuron releases the same neurotransmitter at all of its synapses. By adopting a new conceptual outlook, however, neuroscientists were able to extend the principle to phenomena like transmitter co-release.
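As a point of reference (a standard textbook formulation, not one specific to this abstract), Hebb's law is commonly formalized as a correlational learning rule of the form

    \[ \Delta w_{ij} = \eta \, a_i \, a_j \]

where w_{ij} is the strength of the connection from neuron j to neuron i, a_i and a_j are their activities, and \eta is a learning rate. The counterfactual claim at issue is that this dependence of wiring on correlated firing would remain stable under alternative evolutionary trajectories.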

References

  • Craver, C. (2007). Explaining the Brain. Oxford: Oxford University Press.
  • Craver, C. & Kaiser, M. (2013). Mechanisms and Laws: Clarifying the Debate. In Chao, H.-K. et al. (eds.), Mechanism and Causality in Biology and Economics. Berlin: Springer, 125–146.
  • Author (2014). Meeting the Brain on Its Own Terms. Frontiers in Human Neuroscience 8, doi: 10.3389/fnhum.2014.00815
  • Lange, M. (2000a). Natural Laws in Scientific Practice. Oxford: Oxford University Press.
  • Lange, M. (2000b). Salience, Supervenience, and Layer Cakes in Sellars's Scientific Realism, McDowell's Moral Realism, and the Philosophy of Mind. Philosophical Studies 101, 213–251.
  • Lange, M. (2002). Who’s Afraid of Ceteris Paribus Laws? Or: How I Learned to Stop Worrying and Love Them. Erkenntnis 57, 407–423.
Reverse Inference, the Cognitive Ontology and the Evidential Scope of Neuroimaging Data
  • Jessey Wright (University of Western Ontario)

Abstract

Recent work in cognitive neuroscience has aimed at developing a reliable cognitive ontology (a one-to-one mapping between brain regions and cognitive processes) and at characterizing the validity of reverse inference (the ascription of a cognitive function from information about brain activity). A cognitive ontology specifies a set of mental functions and identifies the regions (or networks) of the brain that implement those functions (Price & Friston 2005). A complete cognitive ontology would permit reasoning from function to region and from region to function. However, most analysis techniques in neuroimaging are suited only for attributing involvement in a cognitive process to a region of the brain. Indeed, reverse inference, the opposite procedure whereby investigators infer the engagement of a cognitive process from brain activity, is considered a ‘fallacy’ (Poldrack 2006, Machery 2013). Claims of selective association (e.g., that the amygdala is the ‘fear area’) need to be backed by evidence showing that activity in the region of interest reliably indicates whether a particular cognitive function is engaged. Such evidence would help resolve philosophical concerns about the pluripotentiality of brain regions and the plausibility of a complete cognitive ontology (Klein 2010). It has been proposed that pattern classification analysis (PCA) can provide this evidence (Poldrack et al. 2014).
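The contrast between forward and reverse inference can be made vivid with a toy sketch in Python (the ontology entries below are purely illustrative, not empirical claims): forward inference reads the mapping from function to region, reverse inference reads it from region to function, and the latter is deductively safe only if the mapping is one-to-one.

    # Toy cognitive ontology; the entries are hypothetical illustrations only.
    ontology = {
        "fear processing": {"amygdala"},
        "reward learning": {"ventral striatum", "amygdala"},
    }

    def forward_inference(function):
        """Regions the ontology associates with a given cognitive function."""
        return ontology.get(function, set())

    def reverse_inference(region):
        """Cognitive functions whose listed realizers include the given region."""
        return {f for f, regions in ontology.items() if region in regions}

    print(forward_inference("fear processing"))  # {'amygdala'}
    print(reverse_inference("amygdala"))         # two candidate functions
    # Because 'amygdala' figures in more than one entry, inferring 'fear
    # processing' from amygdala activity alone is unwarranted: the reverse
    # inference 'fallacy' in miniature.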

Whether PCA can provide the evidence needed to develop a formal cognitive ontology, and so warrant reverse inferences, depends on the evidential scope of the analysis results. This scope, I argue, is determined by the data manipulations required to produce those results. Data must be manipulated into evidence, and all data manipulations involve the suppression of information. The nature of the resulting evidence (what it can be said to be about and how good it is) is therefore determined, in part, by what is suppressed by the analysis techniques used to produce it. I contrast PCA with subtraction, the most common technique used to analyze neuroimaging data. By identifying the information in the data that each technique suppresses, I show that pattern classification provides better evidence for reverse inferences because its results have the appropriate evidential scope. Subtraction analysis invokes assumptions that prohibit reliable inferences from the activation of a brain region to a particular mental function (i.e., it prohibits reverse inference). PCA invokes different assumptions because it suppresses different information: it characterizes the informational content of the measured patterns of brain activity, which permits reverse inferences (with some caveats). Thus it provides the evidence needed for the selective association of a cognitive process with a pattern of brain activity. This has further implications for the structure of the sought-after cognitive ontology.
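The difference between the two techniques can be sketched with simulated data (my illustration in Python, not the author's analysis; all numbers are arbitrary): a subtraction-style analysis tests, voxel by voxel, whether mean activity differs between two conditions, while a pattern classifier tests whether the multivoxel pattern carries enough information to decode which condition a trial came from.

    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Simulated dataset: 100 trials x 50 voxels, two experimental conditions.
    X = rng.normal(size=(100, 50))
    y = rng.integers(0, 2, size=100)
    X[y == 1, :10] += 0.5  # condition 1 weakly raises activity in 10 voxels

    # Subtraction-style (univariate) analysis: t-test per voxel on mean activity.
    t_vals, p_vals = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)
    print("voxels with p < .05:", int((p_vals < 0.05).sum()))

    # Pattern classification: decode the condition from the multivoxel pattern.
    accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print("cross-validated decoding accuracy:", round(float(accuracy), 2))

The first analysis licenses claims about where activity differs between conditions; only the second directly assesses whether the measured pattern of activity is informative about the cognitive state, which is the kind of evidence a reverse inference requires.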

I conclude (1) that the inferential problems with reverse inference are an artifact of the data manipulations used; (2) that a cognitive ontology supported by evidence from pattern classification analysis maps cognitive processes to brain activity profiles, not merely to brain regions; and (3) with a general characterization of how data manipulations constrain the evidential scope of experimental results.

The Explanatory Payoffs of Multiple Realization in Cognitive Neuroscience
  • Maria Serban (University of Pittsburgh)

Abstract

Multiple realization designates a relation that holds between some systemic or macro-property exhibited by one or several complex systems and a class of heterogeneous micro-properties of the same system(s). Assuming that we have an articulated, stable higher-level theory and a theory pitched at the lower level of organization of the target system, the doctrine of multiple realization claims that there are one-to-many mappings from the unified (and perhaps homogeneous) higher-level properties to the heterogeneous lower-level properties of the system. Within philosophy, the multiple realization doctrine has traditionally been taken to license a strong thesis about the autonomy of psychology from neurobiology and to set an antireductionist agenda for the philosophy of cognitive science in general (Putnam 1965; Fodor 1974). However, critics of multiple realization have contested the strong anti-reductionist consequences of the thesis. Their objections targeted both the conceptual arguments for multiple realization (Sober 1999) and the lack of empirical support for the doctrine within cognitive neuroscience (Bechtel and Mundale 1999).

In response, I argue that current scientific research provides ample support for the multiple realization thesis in both biology and cognitive neuroscience. Drawing a comparison between the degeneracy thesis and the multiple realization thesis allows us to refine some of the features and implications of adopting multiple realization as a viable research hypothesis in cognitive neuroscience (Figdor 2009). Within biology, degeneracy designates the ability of structurally different elements to perform the same function. Degeneracy has been shown to be a ubiquitous feature of complex biological systems at levels of organization ranging from the genetic and cellular to the system and population levels (Tononi, Sporns, and Edelman 1999; Edelman and Gally 2001; Price and Friston 2002; Mason 2014). Besides capturing the idea that disjoint and disparate structures can, in certain contexts, have similar (or even the same) functions or behavioral consequences, the theoretical treatment of degeneracy provides a mathematically precise way to measure degrees of degeneracy in biological networks and to distinguish genuine cases of degeneracy from redundancy and pluripotentiality. Using the measures developed in the study of degeneracy helps clarify the central claim of the doctrine of multiple realization, namely that the micro-properties that differentiate the multiple realizers are not relevant to the explanation of the target higher-level behavior or property.
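As a toy illustration of the core idea (my sketch, not drawn from the cited degeneracy measures), two structurally different "realizers" can compute the same higher-level function, and the micro-differences between them are irrelevant to explaining that shared behavior:

    # Two structurally different realizers of the same higher-level function
    # (a majority vote over an odd-length tuple of bits).
    def majority_by_counting(bits):
        """Realizer 1: count the ones."""
        return sum(bits) > len(bits) / 2

    def majority_by_sorting(bits):
        """Realizer 2: sort and read off the middle element."""
        return sorted(bits)[len(bits) // 2] == 1

    cases = [(0, 0, 1), (1, 1, 0), (1, 1, 1), (0, 1, 0), (1, 0, 1)]
    # Different micro-structure, identical higher-level behavior:
    assert all(majority_by_counting(b) == majority_by_sorting(b) for b in cases)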

To illustrate the methodological and explanatory payoffs of the multiple realization thesis, I rely on research on the recovery of language functions after brain damage. This case study shows that the collaboration between different cognitive modeling paradigms (the lesion-deficit model, functional imaging studies of normal adult subjects, and developmental models of brain function recovery) provides ample support for the multiple realization, or degeneracy, of higher-level cognitive functions. In this context, I show how the thesis of multiple realization promotes a pluralist methodology that generates hybrid (or mixed-level) strategies for explaining the properties and behaviors exhibited by complex biological systems at higher (and more abstract) levels of organization (Richardson 2009). The more general lesson is that multiple realization supports an integrationist model of intertheoretic relations in cognitive neuroscience.