Inferences flow from data to an explanatory conclusion, as well as in the other direction from theory to predicted values. This is a matter of inductive and deductive logic, probabilistic reasoning, Bayesian networks, and the like. Each in its own way, these conceptions of inference presuppose some degree of homogeneity among data, some way of treating them as similar. A philosophy of inferential practice needs to consider the varieties of technologies and techniques by which similarity is achieved and thus the conditions established for inferential moves. These include very basic conceptual tools like methodological maxims, inductive principles, and conservation laws that delimit the domain of inquiry (“ex nihilo nihil”, “natura non fecit saltus”, “the future is like the past”, “no matter is lost or created, just rearranged in space”). Similarity may also be achieved through elaborate technical frameworks of reasoning like those of measurement theory or probabilistic inference (“numbers can be assigned to qualities,” “in the long run, relative frequency approximates objective probability”), and through routines of collecting, standardizing, and curating data and making them commensurable, e.g., in natural history museums or databanks.
Especially in contemporary practice, this set of strategies and techniques includes the mutual assimilation of technologies of observation and of modeling such that one can move back and forth between experimental and simulated situations. So, indeed, if the technological conditions warrant it, it becomes possible for purposes of explanation and prediction to infer a shared underlying dynamic from mere visual similarity. This common and accepted practice cannot be accounted for in terms of the methodological canon of the philosophy of science. It is this practice, in particular, to which the papers in the proposed pair of panels will be dedicated.
The first panel will consider the question of how to reconstruct and justify appeals to similarity in explanatory inference. Among the fields where these practices can be investigated are theoretical chemistry and materials research, where simulations of laboratory experiments take on the role of explanation. It is also a common feature of research in which models are taken as models for rather than models of, e.g., when animal models stand in for human disease processes. Arguably, this form of explanation from similarity serves as an epistemic ideal in Systems Biology and Synthetic Biology, as well as in the Human Brain Project and in any research that surrenders the demand for intellectual tractability to machines and judges only the overall performance of the machine as a simulacrum.
The second panel will consider how the construction of systems of equivalences offers access to new phenomena, and allows us to study objects of design, that is, objects that do not yet exist. Here, the so-called emerging technologies, engineering, and architecture come into view, fields in which similarity can underwrite not only inferential but thereby also constructive moves.
Two intriguing questions in aiming to understand the engineering sciences are: how is scientific research epistemologically related to technological innovation, and how is it possible that scientific research in the engineering sciences purposefully creates or invents new physical phenomena (such as new material properties in biomedical engineering or in nanotechnology)? What kinds of inferential strategies enable these kinds of inventions?
It will be argued that the invention of a new physical phenomenon in the engineering sciences involves three mutually related epistemological activities: (1) its conception (e.g., ‘artificial photosynthesis,’ which is the physical phenomenon of artificially producing electricity and/or useful chemical compounds from sunlight, similar to photosynthetic processes in nature); (2) the conception of causal-mechanistic model(s) of how the phenomenon could possibly be generated (e.g., a causal-mechanistic model of the biochemical processes involved in photosynthesis); and (3) the conception of how the physical phenomenon could possibly be generated by technological instrumentation.
How does similarity as an inferential strategy play a role in those epistemological activities? Bengoetxea et al. (2014) have argued that the concept of similarity is a useful epistemological tool for performing basic tasks in science, such as learning, inductively generalizing, and making predictions. In chemistry, similarity is used to establish classification patterns that are based not only on fundamental structural features derived from physical theory, but even more on all relevant chemical aspects useful to scientists (cf. Giere 2010). By means of these classifications chemists can obtain sophisticated sets of concepts that permit both the representation of properties and phenomena and the prediction of new properties and entities. Hence, according to Bengoetxea et al. (2014), in chemistry new properties and entities are predicted by means of concepts, which themselves have been obtained from similarities between relevant chemical aspects. Similarity, understood in this way, involves as an epistemological strategy Newton’s second rule of philosophizing (“to natural effects of the same kind the same causes should be assigned, as far as possible”). However, it will be argued that this account does not sufficiently explain how it is possible that scientists invent properties in the materials sciences. The predictive power of concepts formed by means of similarity requires further explanation, and a more comprehensive explanation must take into account the epistemological activities mentioned above (see also Boon 2012 and forthcoming). Research on artificial photosynthesis will be used to show how similarity functions as an inferential strategy that leads towards such inventions.
Models and simulations are commonly regarded as key strategies of scientific inference: models are said to be ‘sources of genuine science-extending existential hypotheses’ (Harré 1970), and simulations are said to ‘increase the range of phenomena that are epistemically accessible to us’ (Frigg & Reiss 2009). Besides well-known questions, such as whether and how models represent their target phenomena (which they are said to model), more recent debates reflect upon issues of validation and verification: with regard to explanatory inference it has been stated that model and target systems should share relevant similarities (e.g., Hesse 1963, Parker 2009). Here, what is considered ‘relevant’ depends on the particular question an experimental system is meant to answer.
According to Parker and others, relevant similarity justifies inference much more adequately than ontological equivalence. The latter refers to models that are ‘made of the same stuff as the real world’ (Morgan 2005).
Given the overall framework of biomedicine, justifying inferences from animal-based models of (human) diseases is a multifaceted issue, as a recent study of the case of Alzheimer mice illustrates (Huber & Keuck 2013). Here, a three-fold validation process includes (a) the selection of means and targets of modelling (appropriate organism; relevant research parameters); (b) issues of internal validation within the laboratory, such as strategies of manipulation and control, ranging from standardised intervention into experimental organisms to practices of stabilised replication of given features in a species, as well as issues of external validation, given that the generation of a specific animal model is achieved only if the experimental potentiality of an organism is realised with respect to a certain target of modelling that is proven to be clinically relevant; and (c) validation processes relating to the applicability of animal-based approaches to patient-based (clinical) research.
Against this background, key aspects of model-based reasoning and inference in biomedicine are addressed. The paper elaborates on practices of identifying relevant pathogenic processes and of securing homogeneity, i.e., of transgenic experimental organisms. In particular, it explores to what extent homogeneity, as an epistemic end of standardisation, can be regarded as a prerequisite of (relevant) similarity: relevant similarity, as research into Alzheimer’s Disease suggests, is not an object of mere assumption or stipulation, but has to be instantiated on the basis of experimental techniques, and thereby proven.
In previous work, I have claimed that even though comparisons of computational modelling with laboratory experiments or other sources of data are couched in terms of ‘resemblance’, ‘correspondence’ and ‘match’, apparently after the main activity of modelling has taken place, the process of constructing the grounds of comparability is in fact a core part of the modelling process (Carusi 2014; Carusi, Burrage and Rodriguez 2013). This means that in the scientific practices of experiment-facing modelling, there is no straightforward matching or checking for correspondences in a ‘face-off’ between models on one hand and experiments on the other, as though these were externally related, independently constituted entities. Rather, there is a gradual and essentially temporal process, over several iterations, of establishing a system of equivalences between the different aspects of the process, which are seen as internally related, co-constituted parts (Chang 2004, Rouse 2002). Systems of equivalences – significantly in the plural – play a strong role in shaping what counts as similarity in a modelling domain; they therefore also condition what might be called an inferential style in that domain, and underpin what is meant by terms such as ‘representation’ in its discourse. The presentation focuses on the characterisation of the notion of systems of equivalences, inspired by the philosophy of vision, art and symbolic systems of Maurice Merleau-Ponty (for example, 1973): they are mediated and embodied in the symbolic systems and technologies of the modelling domain; epistemic and normative, as they provide the framework for interpretation and significance that serve as criteria of comparability; and ontological with respect to the constitution of the features of the modelling domain. Very importantly, systems of equivalences must be socially shared in order to play any of these roles. I examine an apparently successfully established modelling domain by drawing on a case study in computational cardiac electrophysiology. However, failures to share systems of equivalences can lead to scientific controversies, as has recently been the case in the Human Brain Project. The neuroscientific community has been vociferous in its objections to the amount of European funding that has been given to this project. I analyse the reasons for these objections through a discourse analysis of the documents relating to it, including letters written by the various parties, papers published by the main proponents, and the different modes of visualisation used. I pay especially close attention to the claims made concerning the alleged equivalences between the human brain and computational artefacts, be they for ‘science’ or for ‘engineering’ purposes. I claim that the notion of a system of equivalences sheds light not only on domains that are apparently successfully constituted, but also on those that are not.
At least from its beginnings in the late 19th century, philosophy of science emphatically excluded from its methodological canon appeals to similarity. In the tradition of Kant, physicists like Heinrich Hertz and philosophers like Ludwig Wittgenstein maintained that the truth or falsity of models or pictures does not depend on their similarity to what they represent or depict. Indeed, all we can know about these models is their predictive or explanatory success - but we do not and cannot know whether their likeness extends beyond the agreement, say, of a predicted and an observed fact.
In contemporary discussions about similarity and partial or complete isomorphisms (e.g., Giere 1999, Suárez 2003, French 2003), the Kantian rigor about the limits of knowledge is liberalized - to be sure, an oil painting of a sunset is quite dissimilar from an actual sunset, but this should not prevent us from acknowledging that painterly realism differs from abstract art in that it produces likenesses. This acknowledgment, in turn, gives rise to a program where different degrees of similarity might be used to judge veracity.
This shift misses the point, however, of the original need to reject appeals to similarity. It served to distance modern science from magical thinking. Similarity animates the pre-modern prosaic world described by Foucault (1970); it is a central methodological category of the so-called pseudo-sciences of astrology, alchemy, physiognomy, and homeopathy. It does not signify a representational relation (more or less similar in terms of visual or structural likeness) but a kinship relation according to which similar things participate in a shared reality, and this relation is thought to be causally significant. On this account and pace Goodman (1972), similarity is sui generis and not reducible to “sameness in some specifiable respect, difference in others.”
Against this conceptual backdrop, the paper considers the recent contributions by Bengoetxea et al. (2014) and Weisberg (2013) on similarity in chemistry. It will show that they treat similarity only as a representational notion and therefore miss out on the fact that technologies of modelling and visualization establish kinship relations that underwrite and warrant the reappearance of a notion of similarity that had been exorcised by modern conceptions of science.
Looking at design and construction processes in the engineering sciences, we find a plethora of different kinds of images: sketches, drawings, plans, diagrams, and renderings (Ferguson 1992, Henderson 1999, Ewenstein & Whythe 2009). They are crucial means to develop novel artefacts and design knowledge; their production allows processes of reasoning that establish the rightness of the design and yield knowledge of the yet non-existent. In my contribution I will examine examples of image-based reasoning in the engineering sciences in order to determine to what extent these procedures rely on the concept of similarity.
The argument is based on the assumption that design processes involve modes of genuine knowledge production through specific techniques, methods, and strategies – which are anchored in visual-spatial reasoning and thinking (Tversky 2005, Hegarty & Strull 2012). These techniques, methods, and strategies help to single out problems, isolate open questions, and supply procedures to approach tentative solutions, as well as to refine and test them until they hold up convincingly. Little by little, in hard-won steps and iterative loops, the rightness of the design is tested within the process of drawing. It is revised, discarded or strengthened until eventually, from the struggle for rightness, secure knowledge can be stabilized. This knowledge finally allows for the construction and building of the artefact. Many factors may have an impact on this outcome; they impose restrictions, provide frameworks, or simply guide the direction of the ongoing process: when exploring the design problem, selecting among variations or assessing potential results, such factors include the coherence of the design, its consistency with well-approved bodies of knowledge, the relevance of certain parameters, the anchoring of partial results in existing design experience, the range of the intended solution, and its effects on the overall setting.
These forms of reasoning rely on a domain-specific implementation of epistemic strategies. Such epistemic strategies include, for example, the reduction of complexity, variation and comparison, the identification of relevant parameters, the development of criteria of assessment, externalizing and explaining, and the search for mistakes. In the engineering sciences, it is especially visuo-spatial techniques, methods, and tools that make it possible to pursue these strategies. They are implemented in techniques such as layering and contrasting juxtaposition, projecting, scaling, or interrelating design manifestations. Their underlying epistemic dynamic can be described by a broadly conceived concept of visual similarity.
Images of different kinds are widely used in nanoscience. Firstly, relying on the analysis of my own ethnographic studies, I will propose a classification of images produced in a nanoscience laboratory: primary images, secondary images and computational simulation images. Primary images are produced by instruments that acquire data that are then transduced by a specialized algorithm linked to a computer, which in turn generates a topological or associated depiction of the object under investigation. The instruments used to obtain primary images are, in the present case study, the transmission electron microscope, the scanning tunnelling microscope, the atomic force microscope, and so forth. Secondary images issue from the primary images and retain their foundational data; they require the introduction of a computer graphics program specialized in image processing. Computational simulation images represent computational output as form. Computation thus operates on two levels: it calculates physical phenomena and then, in a second phase, that output is numerically processed through algorithms and emerges as images. Each class of images fulfils different epistemic functions. In a second part of my talk, I will focus on the functions of computational simulation images: they can be used as an alternative to real experimental processes (because these are too expensive, time consuming or impossible to achieve); they can help to explain and to predict physical processes; and ultimately, they may constitute an aid for decision making in case of controversial results produced by different instruments. In a third part of my talk, I will underline that such imaging practices entail their own sources of problems: for example, some computational simulation images may contain false information or lose some relevant information. One solution consists in comparing the computational simulation images to primary images or to secondary images. This strategy leads us to another problem: how can comparisons be made, and which types and degrees of similarity are needed between computational simulation images and primary or secondary images? Frequently, the compared images are both inserted in the final scientific publications. In doing so, the aim of the researchers is not to provide an absolute truth, but robustness – in other words, a convergent network of evidence.