Ismael Rafols, University of Sussex
Tommaso Ciarli, University of Sussex
Patrick van Zwanenberg, University of Sussex
Andy Stirling, University of Sussex
Recent years have seen much critical debate over the simplistic use of scientometric tools for formal or informal appraisal of science and technology (S&T) organisations (e.g. in university rankings) or individuals (e.g. the h-index) (Roessner, 2000; Van Raan, 2004; Weingart, 2005). In reaction to these critiques, efforts have been made to improve the robustness of measurements by broadening the range of inputs considered in scientometric evaluations. Examples include the inclusion of books and national or regional journals (Martin et al., 2010), or, more recently, ‘altmetrics’ (i.e. metrics based on alternative data sources, see Priem et al., 2010). In doing so, the S&T indicator and policy communities have reverted to an early conventional wisdom that scientometrics should rely on multiple sources of data that may provide ‘converging partial indicators’ (Martin and Irvine, 1983).
While this ‘broadening out’ of the range of data used as ‘inputs’ in scientometric appraisal is, in our view, commendable (Stirling, 2003), we propose in this paper that a second dimension also needs to be considered. This relates to the extent to which the ‘outputs’ of appraisal ‘open up’ contrasting conceptualisations of the phenomena under scrutiny and consequently allow for more considered and rigorous attention to alternative policy options, both by decision makers and within wider policy debate (Stirling, 2005; Stirling et al., 2007, pp. 54-58; Leach et al., 2010, pp. 102-107). We use a recent comparative study on the performance and interdisciplinarity of six organisational units (Rafols et al., 2011) to illustrate the difference between increasing the range of inputs (‘broadening out’) and enhancing the diversity of outputs to policy decision making (‘opening up’). In this way, policy appraisal can inform decision making in a more rigorous ‘plural and conditional’ fashion – acknowledging the way in which divergent normative assumptions and metrics can yield contrasting understandings both of the phenomena under scrutiny and of appropriate policy responses (Stirling, 2008).