Computer simulation was adopted quickly in the atmospheric sciences from the early 1950s onwards. Successes in weather and climate simulation increased the attraction and authority of this approach. Even though Edward Lorenz pointed out the sensitivity of simulation models to initial conditions in his famous paper of 1963, the inherent problems of sensitivity and uncertainty did not prevent atmospheric scientists from expanding computer models and computer simulation into a valuable new scientific practice and from basing long-term projections and warnings on them. The problem of climate change is a case in point. Climate simulation gained increasing attention from the early 1960s onwards. A growing number of groups, first in the USA, later also in Australia and Europe, engaged in the development and use of climate models. Various official reports discussed the issue and potential risk of climate change. In 1979, based on the results of climate simulation, atmospheric scientists reached agreement that climate change “on a regional and global scale may be detectable before the end of this century and become significant before the middle of the next century” (WMO, 1979, p. 714). The so-called Charney Report of the U.S. National Academy of Sciences came to similar conclusions in the same year. Already at this point, climate modeling had become a dominant resource for the production of predictive knowledge about climate.
Underlying the dominance of climate modelling and its uses in the production of predictive climate knowledge are fundamental decisions about which types of knowledge are important, which epistemic standards are used to judge that knowledge, and which applications of that knowledge are regarded as useful and socially relevant. First, interests in climate (including research questions and methodologies, types of climatic knowledge, and modes of knowledge production) were more diverse than the present dominance of modelling suggests. Second, climate models initially served heuristic purposes: to investigate and better understand atmospheric processes. Only in the 1970s did a new generation of climate modellers push the development of climate models for the long-term prediction of global warming, an endeavour which proved successful but which was initially controversial. This shift from heuristic to predictive climate modelling involved new presumptions, interests, and epistemic standards, which emerged and stabilized in specific historical and cultural contexts. Predictive modelling not only entailed different applications of models and different uses of modelling results; it also involved different priorities and research tasks as well as different research practices and strategies. The production of predictive knowledge required a pooling of resources around problems defined by this ultimate goal: theory needed to be developed and adjusted for the specific scope of prediction, scientific problems needed to be prioritized and those deemed less fundamental for meaningful prediction relegated to later treatment, models needed to be adjusted to the goal of long-term prediction, and relevant data needed to be collected and processed accordingly.
Based on examples from the USA, the UK and Germany, this paper investigates how and why a new culture of climate prediction emerged in the course of the 1970s. It explores the strong interaction of scientific and political interests, which paved the way for the application of climate models for predictive use and supported a consensual framing of climate change and a partial merging of scientific and political agendas. It seeks to answer questions such as: How and why did predictive modelling gain acceptance? To what extent was it pushed by scientific and political interests? Which practices did predictive modelling entail? Which controversies did it cause within the climate modelling community? The paper will show that climate modelling strategies and practices – even though based on the same basic principles – differed considerably across countries and across modelling groups. It will illuminate differences of interests, perceptions and practices within the climate modelling community. Based on historical research, the paper will contribute to the philosophical understanding of the shaping of scientific practice in climate modelling.
In ‘The social epistemology of the Intergovernmental Panel on Climate Change’ (forthcoming in the Journal of Applied Philosophy), Stephen John argues that, due to the ineliminable issue of inductive risk, there are circumstances in which people’s ex-ante political commitments can actually provide them with good reasons not to defer to scientific testimony. John states that one’s political commitments might be such that one adopts an extremely high epistemic standard for accepting any claims about climate change (i.e. one would demand more evidence in order to accept the scientists’ assertions). This includes the science of the Intergovernmental Panel on Climate Change (IPCC), although John believes that such cases of non-deferral would be very unlikely, given the high epistemic standards used by the IPCC: it remains “conceivable that non-experts’ political commitments might still provide them with good reason to fail to defer to the IPCC’s testimony”, but “such cases are likely to be extremely rare”.
Despite opening up this space in the ‘value-free ideal’ debate, John does not provide an account of the criteria that would need to be satisfied for such cases, however rare, to be recognised as well-founded and principled. In this paper I provide a preliminary account of such criteria, closing a loophole which, I am concerned, can be easily exploited.
In addition, I argue (contra John) that there are certain circumstances under which one cannot appeal in any way to one’s non-epistemic values as good reasons not to defer to scientific testimony. I highlight two claims made by the IPCC which exhibit such conditions and explain why.
In this paper, the mutual interactions between science and philosophy are analysed from the point of view of contemporary applied ontology. Firstly, we shall address the question of whether science needs philosophy, offering some perspectives that might be helpful in developing a synergistic relationship between these different domains. Secondly, we shall point out how the work of scientists and philosophers can be brought together from a practical perspective. In particular, we shall focus our attention on the GEOLAT project, which offers a practical exemplification of the interaction between science and philosophy in the contemporary debate.
The aim of the GEOLAT project is to make Latin literature accessible through a query interface of a geographic/cartographic type. Since all texts written in the classical period are rooted in geographic space, they all contain references to geographic places in some respect. It therefore becomes interesting to build a web resource that includes references to geographic context. Most existing research is based on the use of a gazetteer, in which a place is normally represented by point locations. The limited spatial semantics associated with these approaches narrows their ability to retrieve useful resources for spatial queries.
All this information is collected in a comprehensive and informative geographical ontology, which plays a central role in intelligent spatial search on the web and serves as a shared vocabulary for the spatial mark-up of web sources. This ontology plays a specific role in representing information in four different domains: contemporary and ancient geography, informatics, Latin literature, and the philosophical ontology of geography. The examination of this ontology allows us to rethink the relationship between science and philosophy on new bases, considering these disciplines as parts of a common project for a unitary description of reality.