Today I have the pleasure of presenting some of our recent research (joint work with Galia, Laursen, and Salter). Here is, briefly, what it is about:


As stated by Hubbard, Vetter and Little (1998: 251): “The goal of science is empirical generalization, or knowledge development leading to some degree of understanding.” However, in many fields of science, the integrity of the pertinent empirical literatures is open to question because of what Rosenthal (1979) dubbed the “file drawer problem,” which implies that journals may be dominated by papers reporting results that are Type I errors (erroneous rejections of the null hypothesis), while the null results remain in the file drawers of researchers. In the top five journals of strategic management research, Goldfarb and King (2016) report that between 24 and 40 percent of the findings are likely the result of chance rather than a reflection of true relationships.

Replication studies can help reduce this problem by establishing a set of robust empirical results (Bettis, 2012). In addition, even if we assume away the “file drawer problem”, statistical tests by nature produce Type I errors. The result is that in strategic management research in general, and in open innovation research in particular, we know too little about which results are empirically generalizable, and hence whether they potentially add to our understanding. In many cases, however, researchers work on similar data sets and use similar or identical dependent variables, so that in principle, the robust (and not so robust) results could be extracted, while controlling for a host of other factors. When such general datasets are available, large-scale replication studies can be conducted. By large-scale replication studies, we mean studies where different independent variables are included in a single empirical model with the same dependent variable. However, in these large-scale replications, as in most empirical applications, the “true model”, and therefore the appropriate selection of explanatory variables, is essentially unknown, which leads to a phenomenon described as “model uncertainty” (Chatfield, 1995). Disregarding model uncertainty results in standard errors that are too small and in overconfidence in the statistical findings (Raftery, 1995). Moreover, it is model uncertainty that fundamentally facilitates the “file drawer problem”.

To help address some of these problems, we suggest introducing a Bayesian averaging of classical estimates (BACE) approach to the part of the inbound open innovation literature (Chesbrough, 2003; Dahlander and Gann, 2010) that relies on Community Innovation Survey (CIS) data (e.g., Cassiman and Veugelers, 2006; Laursen and Salter, 2006; Leiponen and Helfat, 2010; Garriga, von Krogh, and Spaeth, 2013). The aim is to address the problems related to uncertainty about the model structure in the context of large-scale replication studies, thereby producing a set of empirically robust results. We contribute by establishing a set of robust findings from this literature. Our analysis is based on CIS data for 7,481 firms from three major European countries (France, Germany and the United Kingdom) and allows us to examine the robustness of a range of models employed in previous research.

Accounting for Model Uncertainty.

Model averaging techniques address model uncertainty and recognize that, in addition to the parameters in the model, the structure of the model has to be estimated as well. Model averaging techniques do so without incurring the common problem of data mining, that is, the search for — and the selection of — a single best model without presenting the process that leads to its selection (Chatfield, 1995). In sum, model averaging proposes a solution to model uncertainty by using several plausible models, averaging over those models, and drawing inferences based on the weighted averages of those models. In our analysis we account for model uncertainty, and we modify the BACE approach of Sala-i-Martin et al. (2004) to accommodate the Tobit regressions required by the censored innovation performance dependent variable.
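To make the averaging logic concrete, here is a minimal sketch of the kind of weighting that underlies this family of approaches: each candidate model receives a posterior weight proportional to exp(-BIC/2) under equal prior model probabilities (following Raftery, 1995). This is a generic illustration using ordinary least squares, not our actual BACE-with-Tobit implementation; the function name and the use of OLS are assumptions made for the sake of a self-contained example.

```python
import numpy as np

def bic_weights(models, y, X):
    """Approximate posterior model probabilities via BIC (Raftery, 1995).

    models : list of lists of column indices into X, one per candidate model
    Weights are proportional to exp(-BIC/2), assuming equal prior
    probabilities across models. Illustrative OLS version only.
    """
    n = len(y)
    bics = []
    for cols in models:
        Xm = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        resid = y - Xm @ beta
        sigma2 = resid @ resid / n
        # Gaussian log-likelihood at the MLE
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        bics.append(-2 * loglik + len(cols) * np.log(n))
    bics = np.array(bics)
    # subtract the minimum BIC before exponentiating, for numerical stability
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()
```

Inference then proceeds on the weighted average of the per-model coefficient estimates (with a coefficient of zero in models that exclude the variable), so that a variable is judged robust only if it matters across the plausible model space rather than in one hand-picked specification.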

Data and Measures. In developing our pool of variables for the analysis, we focused on the most influential papers in the pertinent literature, such as Laursen and Salter (2006) and Cassiman and Veugelers (2006). Our assumption was that these initial papers helped to influence the modelling strategies of subsequent studies. We also sought to feature additional measures from research such as Leiponen (2005), Schmiedeberg (2008), Roper, Du and Love (2008), Grimpe and Kaiser (2010), Love, Roper and Vahter (2014), and Ballot et al. (2015). However, the list of variables examined in these studies is long, and the contexts are different. Accordingly, to make our analysis tractable, we include only the key variables of these studies, as well as core control variables from the wider innovation literature. In total we end up with 29 explanatory variables. We measure innovation performance by the sales share of innovations, defined as the share of sales of new products in 2004 over total sales in that year. Two variants of this measure are used, depending on the novelty of the innovation: the sales share of products new to the world and, defined analogously, the sales share of products new to the firm.

Results and Discussion.

Our analysis shows that some variables are extremely important for determining a firm’s innovative performance. We first look at the case of new to the world innovation and then new to the firm innovation. For products that are new to the world, it is clear that user involvement is central, a conclusion that is consistent with the broad literature on user innovation (von Hippel, 2005). In line with the literature on exporting and innovation, we find that international market involvement is associated with new to the world product sales. It also appears that a firm’s approach to capturing rents from its innovative efforts, through its formal and informal appropriability strategy, is associated with greater levels of new to the world innovation. Interestingly, we find little evidence that some of the more traditional innovation variables, such as R&D and size, or more recent ones, such as external search breadth and collaboration, are important for innovation new to the world. However, the effects of collaboration depth are significant when interacted with firm-internal R&D. We also find evidence that decisions about R&D and the boundary of the firm in that context are important variables across the models, indicating some support for Cassiman and Veugelers’ (2006) analysis for Belgium.

Turning to the case of innovation new to the firm, we find strong evidence that external search breadth is associated with innovation. This result is consistent with Laursen and Salter (2006) and the wider stream of research on openness and innovation, such as Leiponen and Helfat (2010), Garriga et al. (2013), and Love et al. (2014). We also find support for the notion of an inverted U-shaped relationship between openness and innovation, consistent with this prior literature. It is also clear that collaboration matters in explaining innovation new to the firm. Moreover, the organizational boundary of innovation also shapes innovation new to the firm, as decisions to make or buy influence innovative outcomes, which is again consistent with Cassiman and Veugelers (2006). However, we again find little support for traditional variables from innovation studies, such as size, R&D or age. Nor do we find support for the importance of appropriability strategy, obstacles or market orientation.

These results suggest that the models underpinning new to the world and new to the firm innovation are distinctive and may therefore require different modeling strategies. They also suggest that some of the findings from the more recent literature on the determinants of innovation, such as the make or buy decision, external search breadth and collaboration, are central to explaining product innovation, especially when it comes to innovation new to the firm.


Ballot G, Fakhfakh F, Galia F, Salter A. 2015. The Fateful Triangle: Complementarities in Performance between Product, Process and Organizational Innovation in France and the UK. Research Policy 44(1): 217-232.

Bettis RA. 2012. The Search for Asterisks: Compromised Statistical Tests and Flawed Theories. Strategic Management Journal 33(1): 108-113.

Cassiman B, Veugelers R. 2006. In Search of Complementarity in Innovation Strategy: Internal R&D and External Knowledge Acquisition. Management Science 52(1): 68-82.

Chatfield C. 1995. Model Uncertainty, Data Mining and Statistical Inference. Journal of the Royal Statistical Society Series A (Statistics in Society) 158(3): 419-466.

Chesbrough H. 2003. Open Innovation. Harvard University Press: Cambridge, Massachusetts.

Dahlander L, Gann D. 2010. How Open Is Innovation? Research Policy 39(6): 699-709.

Garriga H, von Krogh G, Spaeth S. 2013. How Constraints and Knowledge Impact Open Innovation. Strategic Management Journal 34(9): 1134-1144.

Goldfarb B, King AA. 2016. Scientific Apophenia in Strategic Management Research: Significance Tests & Mistaken Inference. Strategic Management Journal 37(1): 167-176.

Grimpe C, Kaiser U. 2010. Balancing Internal and External Knowledge Acquisition: The Gains and Pains of R&D Outsourcing. Journal of Management Studies 47(8): 1483-1509.

Hubbard R, Vetter DE, Little EL. 1998. Replication in Strategic Management: Scientific Testing for Validity, Generalizability, and Usefulness. Strategic Management Journal 19(3): 243-254.

Laursen K, Salter AJ. 2006. Open for Innovation: The Role of Openness in Explaining Innovative Performance among UK Manufacturing Firms. Strategic Management Journal 27(2): 131-150.

Leiponen A. 2005. Organization of Knowledge and Innovation: The Case of Finnish Business Services. Industry and Innovation 12(2): 185-203.

Leiponen A, Helfat CE. 2010. Innovation Opportunities, Knowledge Sources, and the Benefits of Breadth. Strategic Management Journal 31(2): 224-236.

Love JH, Roper S, Vahter P. 2014. Learning from Openness: The Dynamics of Breadth in External Innovation Linkages. Strategic Management Journal 35(11): 1703-1716.

Raftery AE. 1995. Bayesian Model Selection in Social Research. Sociological Methodology 25: 111-164.

Roper S, Du J, Love JH. 2008. Modelling the Innovation Value Chain. Research Policy 37(6/7): 961-977.

Rosenthal R. 1979. The File Drawer Problem and Tolerance for Null Results. Psychological Bulletin 86(3): 638-641.

Sala-i-Martin X, Doppelhofer G, Miller RI. 2004. Determinants of Long-Term Growth: A Bayesian Averaging of Classical Estimates (BACE) Approach. American Economic Review 94(4): 813-835.

Schmiedeberg C. 2008. Complementarities of Innovation Activities: An Empirical Analysis of the German Manufacturing Sector. Research Policy 37(9): 1492-1503.

von Hippel E. 2005. Democratizing Innovation. MIT Press: Cambridge, MA.