Research transparency is increasingly a priority for funders, associations and journals in the social sciences. As researchers share more of their data and statistical code in public repositories, the research community has greater access to the research materials underlying published results.
Research transparency is valuable for many reasons, among them: (1) researchers are apt to be more careful in double-checking their analysis if they know they will share their statistical code and data, (2) there is potential for re-use of data for secondary analysis and meta-analysis and (3) research transparency permits replication of research results.
In my paper, I will focus primarily on the topic of replication, and specifically on the relation between replication and publicly available data. “Replication” is a term used in many ways; here it will be used to refer to re-analysis and robustness checks of published research results. Recent cases such as the replication of Reinhart and Rogoff’s highly-cited research in economics (Herndon et al. 2013) illustrate the value of such efforts. While the difficulty of publishing replications has limited how many of them are done and shared, there have been a variety of projects in the social sciences in recent years to increase replications.
Here I want to pinpoint one question which is of particular salience to applied epistemology: what are the minimal research materials which must be shared, in order to allow for external parties to make use of these materials to assess the reliability of the analysis and conclusions drawn in a paper? Given that one goal of the open science movement is to improve the evidential status of published research, it is worthwhile to investigate whether the research materials that are typically shared allow us to better assess a paper’s conclusions.
By looking closely at funder and journal requirements, we see that they most often call for data and code “underlying published research results” to be shared. This includes end-stage analysis code, along with the variables used in the reported results. While this is a step in the right direction, I will argue that there are several important and basic questions – among them how analysis variables were constructed and whether there was selective reporting of outcomes – which these materials do not allow one to assess. I maintain that if the research transparency movement is to lead to better and more transparent evidence about research results, we should form our data-sharing requirements with an eye to replication. While journals may not be in a position to request full collected datasets, funders are often in a better position to do so.
I will present the example of the Arnold Foundation’s data-sharing policy. By explicitly using language that requires full study datasets among other materials, the policy addresses the concern above. While there are many challenges – e.g., giving researchers credit for their data, funding them appropriately to undertake the task of data-sharing, and how data should be curated – I will argue that the research transparency and open data movement would greatly benefit if funders adopted requirements similar to that of the Arnold Foundation.
In A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences, Gary Goertz and James Mahoney argue that there are fundamental differences between the quantitative and qualitative research traditions in political science. These differences include different sets of values, beliefs, and norms that result in different research procedures and practices, and thus the traditions might be characterized as constituting different cultures. The result is that while within-tradition conversations are often rich and productive, across-tradition conversations are typically “difficult and marked by misunderstanding” (2012, 1). Such a characterization challenges the recent attention to mixed method research in the social sciences, suggesting deeper incompatibilities.
While it may be the case that there are ultimately “two cultures,” we suggest how quantitative and qualitative causal process observations may be combined for the study of one class of phenomena: rare events. Many phenomena in political science (and perhaps other disciplines) are extremely complex events that occur only rarely, such as wars, civil wars, revolutions, financial crises, genocides, and famines. Causal hypotheses about the mechanisms that give rise to such phenomena are similarly complex and pose challenges to standard quantitative methods such as multiple regression techniques. This paper outlines a general mixed methods approach to the study of rare events. This method combines statistical analysis with an approach to process tracing that investigates all cases of the phenomenon in question.
We develop this method through its use in exploring a particular sort of rare event – the reversions of democracies to autocracies in the third wave of democratic development (Haggard and Kaufman forthcoming). We briefly illustrate the method using our case. We next consider the elements of the method and the various ways they contribute to understanding the phenomenon. For example, we give an account of causal process tracing in political science, examining recent literature on the topic (Beach and Pederson 2013, Bennett and Checkel 2014). Our account of process tracing highlights the role of theory (hypothesis) in specifying what counts as evidence that the process in question is operating in a given case. It also examines the way that the interplay between quantitative and qualitative methods refines hypotheses and the categories to which cases belong. We conclude that a mixed methods approach of the kind that we outline contributes to knowledge production in a variety of ways, potentially bridging the “two cultures.”
In the late 1980s, social epistemology emerged as a subfield of philosophy dedicated to the study of the social dimensions of knowledge, with a particular emphasis on scientific knowledge. Prominent among the original approaches to social epistemology was the use of economic models to account for the social character of scientific knowledge production while preserving science’s epistemic goals, such as the acquisition of truth and objective knowledge about the world—an approach that Hands (1997) has called the Economics of Scientific Knowledge (ESK). Kitcher (1990) and Goldman & Shaked (1991) made two of the first contributions to ESK, building analytic models in rational choice theory to explain how scientists can make epistemic achievements through an efficient division of cognitive labor, despite following non-epistemic interests, such as the aim for personal credit. In other words, they aimed to show that science’s epistemic goals are not necessarily trumped by social factors.
At the same time that social epistemologists started to use conceptual and methodological tools from economics, Dupré (1995) raised important doubts regarding economics’ imperialistic tendencies, claiming that “as scientific methodologies move further away from their central areas of application their abstractions become ever grosser, and their relevance to the phenomena become ever more distant,” and also that “…alien intellectual strategies may import inappropriate and even dangerous assumptions into the colonized domains” (380).
Economics’ imperialistic tendencies have been a matter of extensive debate. Unquestionably, economics has broadened its scope well beyond “economic” phenomena to explain other “social” phenomena in the realms of political science, behavioral science, sociology, geography, and the law. But the appropriateness of such imperialism has been a controversial topic for social scientists, whose views range from an uncritical appraisal of economics’ scientific methods to a radical rejection of the trend.
Despite economics’ expanding explanatory scope, social epistemology has not yet been examined as a case of economics imperialism. This paper aims to fill this gap. Following recent philosophical contributions to the conceptual and normative framework of scientific imperialism (Dupré 1995, 2001; Clarke & Walsh 2009, 2013; Mäki 2009, 2013; Kidd 2013), I examine whether social epistemology can be considered a case of economics imperialism and determine whether economics’ explanatory expansionism appropriately contributes to this philosophical subfield or not. I argue first that ESK approaches to social epistemology count as a case of economics imperialism under a broad conception of the term, and second that we have good reasons to doubt the appropriateness of the incursion of economics into social epistemology, insofar as ESK’s attempt at explanatory unification fails to express significant human interests.
The paper is divided into five sections. The second section presents the recent philosophical literature on scientific imperialism and introduces a normative framework for the evaluation of these cases (following Mäki 2013). The third section examines social epistemology’s interdisciplinary transfers with economics, especially through the development of ESK. The fourth section evaluates such transfers according to the criteria presented in the second section and highlights the shortcomings of the ESK approach regarding its significance for non-epistemic human interests. The last section presents some concluding remarks.