5th Biennial Conference of the Society for Philosophy of Science in Practice (SPSP) Aarhus 2015

Parallel Session 1B
Wednesday, 24 June 2015, 10:30–12:00 in G1
Session chairs: Jessica Carter (University of Southern Denmark); Christopher Pincock (Ohio State University)
Organized by: José Ferreirós (University of Sevilla); Jessica Carter (University of Southern Denmark); Henrik Kragh Sørensen (Aarhus University)
Symposium: Philosophy of Mathematical Practice

Synopsis

The Association for the Philosophy of Mathematical Practice (APMP) was founded four years ago with the aim of fostering “a broad outward-looking approach to the philosophy of mathematics which engages with mathematics in practice (including issues in history of mathematics, the applications of mathematics, cognitive science, etc.)”. In view of the common interests with SPSP, it seems natural to establish links between the two societies and to stimulate shared knowledge and common action. It is with that aim that the APMP proposes a Symposium in the context of the next SPSP meeting in Aarhus. We would like to present some of the work of our associates and to take advantage of their presence at the SPSP meeting to explore options for common work.

In thinking about topics that might best suit the interests of SPSP, it seemed rather obvious that questions about the relations between mathematics and different aspects of scientific practice would be ideal: not just the issue of the applicability of mathematics (which tends to be ideology-laden from its very formulation), but more generally the relations, back and forth, between mathematics and science.

The proposed Symposium is an outcome of that perspective. It consists of three papers, each to be presented in 30 minutes. The contributors have proposed topics that illustrate the variety of aspects from which the interaction between science and mathematics can be studied: from Fourier series and their dual role and justification in mathematical physics and pure mathematics, to strategies of tuning in computer models as a central ingredient of contemporary science, to attempts to deepen the understanding of relativity theory by means of a mathematical reinterpretation.

Fourier Series as an Interface Between Mathematics and Physics

Abstract

A Fourier series is a means of representing a function as an infinite sum of trigonometric functions (sines and cosines). While such series appear sporadically in the eighteenth century, it is only with J. Fourier (1768-1830) that they become an object of theoretical study in their own right. Fourier deployed these series with masterful effect in The Analytical Theory of Heat (1822) to solve many of the outstanding problems posed by the heat equation. In this paper I will discuss some of the ways that the introduction of Fourier series changed the practice of mathematics and physics in the nineteenth century.
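
For orientation (this formula is not part of the abstract itself), in modern notation rather than Fourier's own, such a representation of a function f on an interval [-L, L] takes the form

\[
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\Bigl( a_n \cos\frac{n\pi x}{L} + b_n \sin\frac{n\pi x}{L} \Bigr),
\qquad
a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx,
\quad
b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx .
\]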

Historians of mathematics and physics have independently remarked on the revolutionary impact of Fourier’s work. For mathematics, Ferraro (2007) has argued that the introduction of Fourier series was a major factor in the rejection of the “formal concept” of series that was central to the work of Euler. Once accepted as genuine objects of mathematics, Fourier series prompted many difficult questions about tests for convergence and other aspects of their rigorous application (Bottazzini 1986). Equally important claims have been made for the significance of Fourier’s innovations for the development of physics. In their biography of Fourier, Dhombres and Robert (1998) present their subject as the “creator” of mathematical physics. One aspect of this creation is emphasized by Fox (1974) as well as Buchwald and Hong (2003): Fourier’s successful work on the heat equation contributed to the downfall of the then dominant Laplacian approach to physics. On a Laplacian approach, the components of a mathematical representation must be motivated by a direct interpretation via “a microphysical explanation grounded in short-range forces” (Buchwald and Hong 2003). Fourier called this restrictive program into question by showing how his successful representations had only an indirect interpretive significance. This opened the door to a more malleable program of relating sophisticated mathematical tools to complex physical systems.

I build on this historical work by arguing that these innovations in mathematics and physics are intimately connected. On my reconstruction, Fourier series achieved their initial legitimacy as mathematical entities largely due to their successful application to physical problems. This encouraged mathematicians to further refine and improve on Fourier’s own somewhat vague pronouncements. As this process of rigorization continued, later generations of physicists were confidently able to extend Fourier’s flexible approach to applications in new domains. I illustrate the main elements of this account through what has become a standard textbook problem: how deep should a wine cellar be so that it remains cool in the summer and warm in the winter? This case reveals one way that mathematical practice and scientific practice are coupled and how a success in one field can prompt important innovations in another field.
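
As a rough sketch of that textbook calculation (illustrative only, not part of the abstract itself): the annual surface temperature is idealized as a single Fourier mode, T(0,t) = T̄ + A·cos(ωt), and the one-dimensional heat equation T_t = κ·T_xx in the ground then yields a damped, phase-shifted thermal wave,

\[
T(x,t) = \bar{T} + A\, e^{-x/d} \cos\!\left(\omega t - \frac{x}{d}\right),
\qquad
d = \sqrt{\frac{2\kappa}{\omega}},
\]

so that at a depth of roughly x = πd (a few metres, depending on the soil's thermal diffusivity) the oscillation is both strongly attenuated and half a year out of phase with the surface: the cellar stays cool in summer and relatively warm in winter.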

References

  • Bottazzini, U. (1986). The higher calculus: A history of real and complex analysis from Euler to Weierstrass. Springer.
  • Buchwald, J. and S. Hong (2003). Physics. In D. Cahan (ed.), From natural philosophy to the sciences: Writing the history of nineteenth-century science. University of Chicago Press, pp. 163-195.
  • Dhombres, J. and J.-B. Robert (1998). Fourier, créateur de la physique-mathématique. Belin.
  • Ferraro, G. (2007). Convergence and formal manipulation in the theory of series from 1730 to 1815. Historia Mathematica 34: 62-88.
  • Fourier, J.-B. J. (1822/2009). The analytical theory of heat. A. Freeman (trans.). Cambridge University Press.
  • Fox, R. (1974). The rise and fall of Laplacian physics. Historical Studies in the Physical and Biological Sciences 4: 89-136.
  • Prestini, E. (2003). The evolution of applied harmonic analysis: Models of the real world. Birkhäuser.
  • Wilson, M. (2006). Wandering significance: An essay on conceptual behaviour. Oxford University Press.

Strategies of Tuning. A New Look at Mathematization

Abstract

Mathematization has been identified as an essential ingredient in the development of modern science; Dijksterhuis (1961) and Koyré (1968, 1978) are thoughtful and standard historical references. According to this perspective, heroes like Galileo and Descartes established the viewpoint that mathematics and mathematically formulated laws provide an approach to the pertinent structures of phenomena. This line of reasoning, interconnecting mathematics, laws, and structures, is strong in the philosophy of science as well.

I would like to present a quite different outlook on how mathematics is used as a tool in the sciences. I shall concentrate on a type of mathematical modeling strategy that is closely connected to using the computer as an instrument. More precisely, the topic of this talk is parameter tuning and its importance in mathematical practice.

Tuning is a well-known part of modeling and of building artifacts in general. However, it is mostly ignored in philosophical accounts, presumably because tuning counts as an ad hoc measure for counteracting minor shortcomings of the model: necessary, but insignificant. This view is inappropriate, especially when looking at computer-based modeling, where tuning has become a central element. It is employed in systematic ways so that mathematical models can advance into fields that otherwise would not be amenable to mathematization. Instead of determining what would happen in a highly idealized system, one wants to predict or manipulate the actual behavior of a certain system. Such behavior is usually influenced by a host of relevant, but not completely known, factors. My point is that mathematization is not prevented but rather helps to deal with these situations. In particular, mathematical models allow for tuning.
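
A minimal sketch of what such a tuning loop can look like in practice (a hypothetical toy model and parameter, not the cases discussed in the talk): a single free parameter is adjusted against the overall observed behavior of a system, compensating for the model's shortcomings without diagnosing them.

```python
# Hypothetical toy example: tune a single "mixing_efficiency" parameter so that
# a crude model's output matches observed values. The model is deliberately
# wrong in its structure; the tuning loop compensates for that in an opaque way.

def toy_model(mixing_efficiency, forcing):
    # Stand-in for a computer model with known structural shortcomings.
    return [mixing_efficiency * f ** 0.8 for f in forcing]

def mismatch(predicted, observed):
    # Overall behavior is judged by a single aggregate score (RMS error).
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)) ** 0.5

forcing = [1.0, 2.0, 3.0, 4.0]    # idealized external forcing
observed = [1.1, 1.9, 2.6, 3.3]   # measurements the tuned model should reproduce

# Tuning loop: scan candidate parameter values and keep the one whose overall
# behavior best fits the observations, without asking why the model errs.
best_value, best_score = None, float("inf")
for candidate in [0.5 + 0.01 * i for i in range(100)]:
    score = mismatch(toy_model(candidate, forcing), observed)
    if score < best_score:
        best_value, best_score = candidate, score

print(f"tuned mixing_efficiency = {best_value:.2f} (RMS error {best_score:.3f})")
```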

Strategies of tuning will be discussed through the examples of chemical process engineering and the modeling of atmospheric convection. Both examples critically hinge on computer-based tuning strategies that involve specifying good tuning parameters and finding economically feasible feedback loops for adjusting parameters in a model or network of models. In both examples the models contain errors and insufficiencies, which I take to be the normal case in scientific practice. Tuning a parameter according to the overall behavior of the model then means that these errors are compensated for, if in an opaque way. Tuning is a tool that utilizes the plasticity and adaptability of sub-models and their coupling, rather than their structure.

Based on the analysis of these cases, I want to defend the near-paradoxical, and hopefully controversial, claim that tuning is a mathematical practice for working with inconsistent models. The models can be called inconsistent because they have no common theoretical framework. They form a façade in the sense of Wilson (2006), whose apparent consistency emerges only in the course of tuning. Thus mathematics is not restricted to consistent formal systems; quite the opposite is the case: it serves as a tool for dealing with inconsistent parts and their complex interactions. Otherwise it would be much less relevant in the contemporary sciences.

References

  • Dijksterhuis, E. J. (1961). The mechanization of the world picture. Oxford University Press.
  • Koyré, A. (1968). Newtonian studies. University of Chicago Press.
  • Koyré, A. (1978). Galileo studies. Harvester Press.
  • Wilson, M. (2006). Wandering significance: An essay on conceptual behaviour. Oxford University Press.

Representations and Understanding in Mathematics
  • Jessica Carter (Department of Mathematics and Computer Science, University of Southern Denmark, jessica@imada.sdu.dk)

Abstract

In the first part, I shall use Peirce’s semiotics, in particular his notions of icons and indices, in order to describe how it is possible to handle complex mathematical proofs or expressions. This will be illustrated by a result from contemporary mathematics, where the value of a complex expression is found by gradually breaking it down into simpler expressions. I claim that we handle proofs by using an interplay between different kinds of representations. One role that these representations play is to enable us to break down proofs into manageable parts and thus to focus on certain details of a proof by removing irrelevant information (Carter 2010). The role of icons and indices in this process will be explained. Put briefly, the role of icons (signs that represent because of likeness) is to ensure that there is likeness between the parts when the expression is broken down, while the index, acting as a signpost, ensures that the parts may be reassembled in the end.

The second part will link the above description to a notion of mathematical understanding. There are many ways to characterise understanding; in this paper I consider only one, on which the motive for understanding a certain subject matter is to further development in that subject. In this sense understanding is linked to fruitfulness. I shall take understanding to be characterised by finding ways to:

  1. handle a field or subject matter (given our cognitive set-up) in order to
  2. reveal the structure of that subject matter.

These descriptions will be considered in view of the above process of breaking down a proof into manageable parts and in relation to Peirce’s characterisation of icons.