5th Biennial Conference of the Society for Philosophy of Science in Practice (SPSP) Aarhus 2015

Parallel Session 5D
Thursday, 25 June 2015, 15:30–17:30 in G3
Session chair: Dingmar van Eck (Ghent University)
On the Epistemic Roles of Simulations in Cognitive Modeling
  • Maria Serban (University of Pittsburgh)

Abstract

The task of explaining how brain structures achieve the complex cognitive functions and behaviors observed in living organisms faces a major challenge: bridging the gap between higher-level, abstract descriptions of psychological properties and behaviors and lower-level accounts of the structure and organization of neural systems. The Human Brain Project, successor to the Blue Brain Project (Markram 2004), promises to create a framework that allows for the integration of experimental data and theoretical hypotheses targeting different levels of organization of biological organisms and their psychological functions. One of the project's strategic objectives is to generate powerful simulations of the mouse and human brain that would complement existing experimental data by connecting different levels of biological organization and by enabling in silico experiments that cannot be carried out in the laboratory. Promoters of the program claim that simulating neurobiological models at different levels of description (abstract computational models, point neuron models, detailed cellular-level models of neuronal circuitry, molecular-level models of small areas of the brain, and multi-scale models that switch dynamically between levels of description) will help experimentalists and theoreticians choose the appropriate level of detail for asking new questions and exploring new hypotheses about the cognitive architecture of the brain and the neural realizers of different cognitive functions.

This type of project raises a series of important philosophical questions about the epistemic roles that large-scale computational simulations play in the development of successful cognitive theories. How can simulations facilitate our understanding of the mechanisms underlying cognitive functions such as spatial perception, face recognition, reading, or language learning? A preliminary response is that computational simulations allow the formalization and testing of multiscale cognitive models. As such, they constitute ways of exploring the limits of current theoretical proposals that cannot be directly assessed in an experimental setting. Another epistemic advantage of using computer simulations within cognitive neuroscience is that they allow the integration of cognitive models developed at different spatial and temporal scales, thus producing a type of synthetic knowledge that is critical to understanding psychological phenomena and their neurobiological underpinnings.

However, these epistemological and methodological advantages have also been challenged on the grounds that simulations make the relations between different levels of biological organization epistemically opaque. In addition, simulations are criticized for occluding the lack of proper empirical support for certain theoretical models used in cognitive neuroscientific research. For instance, advocates of the BRAIN Initiative emphasize the need to develop better technologies for collecting more data about the neuronal structures underlying different cognitive functions. They claim that only in light of complete experimental knowledge can we hope to provide an empirically adequate explanation of the observed psychological patterns and behaviors.

Despite the theoretical challenges facing the simulation method, I claim that it allows the development of hybrid explanatory strategies that help advance our understanding of how biological organisms like ourselves can achieve the impressive cognitive feats and complex behaviors observed on a daily basis. Drawing on a class of models used in language acquisition and language learning studies, I defend the epistemic advantages of using computational simulations for the purpose of investigating the neural bases of cognition.

About “Numerical Experiments”
  • Julie Jebeile (Université Paris-Sorbonne)

Abstract

It is commonly assumed that knowledge obtained from models does not have an empirical origin and is in this sense not as reliable as empirical knowledge. However, doubts have arisen about whether knowledge generated by computer simulations could legitimately be considered empirical knowledge, since simulations bear a strong resemblance to experiments in many respects (see, e.g., Guala 2002; Morgan 2003, 2005; Winsberg 2003). On a commonly held view, based on the similarities between simulation and experiment, one should be allowed to extend certain epistemic properties of experiments to simulations. But once we acknowledge the similarities between computer simulations and experiments, can we conclude from them that simulations generate empirically reliable knowledge as experiments do? In this paper, I identify these similarities and examine whether, in accordance with the analogy, they give simulations and experiments the same epistemic properties.

I first investigate four common features shared by simulations and experiments which are often highlighted by philosophers:

  1. Simulation and experiment both allow for exploration: simulation consists in mathematically exploring the empirical implications of the underlying model, while experiment consists in exploring the phenomena by providing observations and measurements.
  2. Scientists intervene on both of them: exploration (point 1) implies that they intervene on the simulation program or the experimental setup.
  3. Both sometimes make it possible to visualize the system under study (this is not always the case, though; for example, there is no phenomenon to visualize in particle physics experiments).
  4. They both sometimes function as black boxes. An experiment functions like a black box when the experimenter does not know some (or all) of the physical processes at work in the observed phenomenon. A computer simulation also works as a black box because of the complexity of the program and the speed of the computational process, which make the process opaque.

I then examine whether these similarities give simulations the two main epistemic functions usually assigned to experiments: producing new empirically reliable knowledge and potentially contradicting our best theoretical assumptions.

From this study, I contend that the similarities between simulation and experiment give the scientist at most the illusion that she is facing an experiment, but cannot seriously ground the analogy. In other words, it is not in virtue of these similarities that simulations can provide empirically reliable knowledge. The reason for their epistemic function has to be found elsewhere, in the verification and validation of the model content.

I conclude that the analogy between simulation and experiment does not work, but this does not mean that experiment is always epistemologically superior to simulation. While some philosophers (e.g., Mary Morgan and Ronald Giere) take this empiricist presupposition for granted, I show that such a presupposition holds less frequently than we might think.

References

  • Guala, F. (2002). Models, simulations, and experiments. In L. Magnani & N. Nersessian (Eds.), Model-based reasoning: Science, technology, values (pp. 59–74). New York: Kluwer/Plenum.
  • Morgan, M. S. (2003). Experiments without material intervention: Model experiments, virtual experiments and virtually experiments. In H. Radder (Ed.), The philosophy of scientific experimentation (pp. 216–235). Pittsburgh: University of Pittsburgh Press.
  • Morgan, M. S. (2005). Experiments versus models: New phenomena, inference, and surprise. Journal of Economic Methodology, 12(2):317–329.
  • Winsberg, E. (2003). Simulated experiments: Methodology for a virtual world. Philosophy of Science, 70(1):105–125.

An Information-Theoretic Model of Scientific Reasoning
  • Agnes Bolinska (University of Toronto)

Abstract

When little or nothing is known about a phenomenon, scientists may learn about it by consulting extant theory or gathering empirical evidence. But there are many ways in which they may do this, and some may be more successful than others. In this paper, I argue that the order in which evidence is considered affects the efficiency of a reasoning process and suggest a measure for determining efficiency. I use as examples the determination of protein and DNA structure and conclude by showing how the construction of molecular models further contributed to this efficiency.

In the cases of protein and DNA, the determination of molecular structure was primarily informed by two sorts of evidence, which I refer to as data: x-ray diffraction photographs, produced when x-rays directed at a molecule are scattered and captured on a photographic plate, and stereochemical rules dictating permissible molecular configurations given a molecule's atomic composition. Because x-rays are reflected but not subsequently refracted in producing a diffraction photograph, interpretation is required to determine structure from such photographs. Interpretation is also required to apply stereochemical rules to molecules.

I characterize the process of determining molecular structure as one of eliminating structural candidates through the successive interpretation of pieces of data. I argue that data serve as constraining affordances for molecular structure: the interpretation of such data yields information about structure by warranting both the elimination of certain structural possibilities and the retention of others for further consideration. Interpretations of data vary with respect to how many structural candidates they eliminate. They also vary with respect to how certain scientists could be that only incorrect structures are eliminated upon interpretation. I introduce the notion of informational entropy to show that an efficient strategy for reasoning about structure is one that, on average, maximizes the number of possibilities eliminated with each interpretation and the likelihood that those possibilities will be correctly eliminated.
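
The abstract does not spell out the entropy measure formally. As a minimal sketch, assuming a finite set S of structural candidates with a uniform prior, and treating each interpretation of a piece of data d as ruling out some subset of S, the relevant quantities can be written in standard Shannon terms (the symbols S, H, and Gain below are illustrative, not the author's notation):

  \[
    H(S) \;=\; -\sum_{s \in S} p(s)\,\log_2 p(s) \;=\; \log_2 |S| \quad \text{(uniform prior)},
  \]
  \[
    \mathrm{Gain}(d) \;=\; H(S) \;-\; \mathbb{E}\big[\, H(S \mid \text{interpretation of } d)\,\big].
  \]

On this reading, an efficient ordering of evidence is one that, at each step, interprets the piece of data with the largest expected reduction in entropy, discounted by the risk that the correct structure is among the candidates eliminated.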

I apply this notion to the cases of protein and DNA structure determination to argue that the strategy of considering stereochemical rules before x-ray diffraction photographs was more efficient than one in which this order is reversed. Then, I show that the construction of molecular models further increased the effectiveness of this strategy in two ways: by serving as a concrete means of prioritizing the stereochemical rules in scientists' reasoning, and by functioning as a cognitive aid, enabling scientists to consider many more such rules at once.