How Robust are the Results?

July 8th, 2016 | Categories: Economics General, Management of Innovation, Methodology, Open Innovation

Today I have the pleasure of presenting some of our (Galia, Laursen, Salter, and my) recent research. Here is briefly what it is about.

As stated by Hubbard, Vetter and Little (1998: 251): “The goal of science is empirical generalization, or knowledge development leading to some degree of understanding.” However, in many fields of science, the integrity of the pertinent empirical literatures is open to question because of what Rosenthal (1979) dubbed the “file drawer problem”: journals may be dominated by papers reporting results that are Type I errors (erroneous rejections of the null hypothesis), while the null results remain in the file drawers of researchers. In the top five journals of strategic management research, Goldfarb and King (2016) report that between 24 and 40 percent of the findings are likely the result of chance rather than a reflection of true relationships. Replication studies can help reduce this problem by establishing a set of robust empirical results (Bettis, 2012). In addition, even if we assume away the “file drawer problem”, statistical tests by their nature produce Type I errors. The result is that in strategic management research in general, and in open innovation research in particular, we know too little about which results are empirically generalizable, and hence whether they potentially add to our understanding.

In many cases, however, researchers work on similar data sets and use similar or identical dependent variables, so that in principle the robust (and not so robust) results could be extracted while controlling for a host of other factors. When such general datasets are available, large-scale replication studies can be conducted. By large-scale replication studies, we mean studies in which different independent variables are included in a single empirical model with the same dependent variable.
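To see why selective publication fills a literature with Type I errors, consider a minimal simulation (my own illustrative sketch, not from the paper): under a true null hypothesis, p-values are uniformly distributed, so a fixed share of null studies will look "significant" purely by chance, and if only those reach the journals, every published finding is a false positive.

```python
import random

random.seed(42)
ALPHA = 0.05
N_STUDIES = 10_000

# Under a true null hypothesis, the p-value is uniformly distributed on [0, 1].
# Simulate a literature in which every tested effect is in fact zero.
p_values = [random.random() for _ in range(N_STUDIES)]

# "Publication" keeps only significant results; the null results stay
# in the file drawer (Rosenthal, 1979).
published = [p for p in p_values if p < ALPHA]

false_positive_rate = len(published) / N_STUDIES
print(f"Significant by chance: {false_positive_rate:.3f}")  # close to ALPHA
# Since every null is true here, 100% of the "published" findings
# are Type I errors, even though they are only ~5% of all studies run.
```

The point of the sketch is only directional: chance alone guarantees a steady supply of significant-looking null results, which is exactly the raw material the file drawer problem selects on.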
However, in these large-scale replications, as in most empirical applications, the “true model”, and therefore the appropriate selection of explanatory variables, is essentially unknown, a phenomenon described as “model uncertainty” (Chatfield, 1995). Disregarding model uncertainty results in standard errors that are too small and in overconfidence in the statistical findings (Raftery, 1995). Moreover, it is model uncertainty that fundamentally enables the “file drawer problem”. […]
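The link between model uncertainty and overconfidence can also be sketched in a few lines (again my own illustration, with arbitrary sample sizes, not an analysis from the paper): if a researcher tries many candidate specifications and reports only the best-fitting one as if it were the only one tried, a nominal 5% test rejects far more than 5% of the time, even when no regressor has any true effect.

```python
import math
import random

random.seed(1)

def t_stat(x, y):
    """t-statistic for the slope in a simple regression of y on x (via the correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

N_OBS, N_CANDIDATES, N_REPS = 50, 20, 500
hits = 0
for _ in range(N_REPS):
    y = [random.gauss(0, 1) for _ in range(N_OBS)]
    # 20 candidate explanatory variables, all pure noise: the true model is empty.
    best = max(
        abs(t_stat([random.gauss(0, 1) for _ in range(N_OBS)], y))
        for _ in range(N_CANDIDATES)
    )
    # Report the best-fitting specification as if it were the only one tried.
    if best > 1.96:
        hits += 1

rate = hits / N_REPS
print(f"Nominal 5% test rejects in {rate:.0%} of specification searches")
```

Because the search over candidate models is ignored, the reported standard errors describe a single pre-specified test that was never actually run, which is exactly the overconfidence Raftery (1995) warns about.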