The origins of statistical studies of scientific bibliographies can be traced back to the 1920s (see, e.g., Hulme, 1923). In 1926, Alfred J. Lotka published his pioneering study on the frequency distribution of scientific productivity (Lotka, 1926). At almost the same time, in 1927, Gross and Gross published their citation-based study, which aimed to help decide which chemistry periodicals small college libraries should purchase. In particular, they examined 3633 citations from the 1926 volume of the Journal of the American Chemical Society. This study is considered the first citation analysis, although it is not a citation analysis in the sense of present-day bibliometrics.
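
Lotka's central finding, mentioned above, is usually stated as an inverse-square law; a minimal statement of the commonly cited form follows (the exponent 2 is the value Lotka fitted to his data, and later generalisations allow other exponents):

```latex
% Lotka's law: the number of authors contributing n papers
% is inversely proportional to the square of n.
\[
  f(n) = \frac{C}{n^{2}}, \qquad n = 1, 2, 3, \dots
\]
% C is a normalising constant; e.g. about one quarter as many
% authors write two papers as write a single paper.
```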

Eight years after Lotka's article appeared, Bradford (1934) published his study on the frequency distribution of papers over journals. In particular, he found that ‘if scientific journals are arranged in order of decreasing productivity on a given subject, they may be divided into a nucleus of journals more particularly devoted to the subject and several groups or zones containing the same number of articles as the nucleus when the numbers of periodicals in the nucleus and the succeeding zones form a geometric series’.
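
The relationship quoted above can be sketched in formula form (the notation and the example value of k are ours, not Bradford's):

```latex
% Bradford's law of scattering: if the nucleus and each succeeding
% zone contain the same number of articles, the numbers of journals
% per zone form a geometric series.
\[
  n_0 : n_1 : n_2 : \dots = 1 : k : k^{2} : \dots, \qquad k > 1,
\]
% where n_i is the number of journals in zone i and k is the
% Bradford multiplier. E.g. with k = 5, a nucleus of 10 journals is
% followed by zones of roughly 50 and 250 journals, each zone
% contributing the same number of relevant articles.
```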

These early attempts remained, however, largely unnoticed until the early 1960s. The causes of this were twofold: these papers appeared at a time when traditional methods of information retrieval were still sufficient, and when financing systems for scientific research did not yet call for quantitative, let alone sophisticated statistical, methods.

The situation changed dramatically when Derek John de Solla Price published his books ‘Science since Babylon’ (Price, 1961) and ‘Little Science - Big Science’ (Price, 1963). It was especially due to him that questions concerning quantitative aspects of research became a target of interest for scientists and research managers. He was also one of the main propagators of using the Science Citation Index (SCI) database of the Institute for Scientific Information (ISI, Philadelphia, PA, USA) as a tool in the quantitative analysis of science. He analysed the contemporary system of science communication and thus presented the first systematic approach to the structure of modern science, applied to science as a whole. At the same time, he laid the foundation of modern research evaluation techniques. ‘Little Science - Big Science’ had a great impact and far-reaching consequences. The need to evaluate the productivity and effectiveness of scientific research became imperative, and the time was now ripe for the reception of his ideas: the globalisation of science communication, the growth of knowledge and published results, increasing specialisation and the growing importance of interdisciplinarity in research had reached a stage where traditional scientific information retrieval became insufficient, and funding systems based on personal knowledge and evaluation by peer review became more and more difficult to sustain.

Towards “Big Scientometrics”

The sharp rise of bibliometrics since the late 1960s is reflected in remarkable academic activity and is intimately connected with advances in information technology, with developments in computer science and, especially, with the worldwide availability of the large bibliographic databases that serve as the groundwork of bibliometric research. The databases of the ISI deserve special mention in this context: the SCI and, more recently, the Web of Science have become the most generally accepted basic sources for bibliometric analysis. In the new century, further abstract and citation databases have emerged and now also serve as standard sources for bibliometric work; Scopus and, more recently, Dimensions should be mentioned here. Google Scholar, which is also used as a data source, is however not an original bibliographic database but a web-link-based search engine that indexes metadata from heterogeneous sources.

In the seventies, however, when data collection was often still a matter of manual work, bibliometrics, shaped by the personalities of enthusiastic researchers, was pursued much in the manner of a ‘hobby’. Only later did the field integrate interdisciplinary approaches, drawing on mathematical and physical models on the one side and sociological and psychological methods on the other, not to speak of the long tradition of library science. From the beginning of the eighties, bibliometrics evolved into a distinct scientific discipline with a specific research profile, several subfields and the corresponding scientific communication structures. Major steps towards the institutionalisation of the field were the launch, in 1978, of the journal Scientometrics as the first periodical specialised in bibliometric/scientometric topics, the series of international conferences held since 1983, and the journal Research Evaluation, published since 1991. The publication of several comprehensive books on bibliometrics, among others by Haitun (1983) and Ravichandra Rao (1983), followed by the handbooks edited by van Raan (1988), Moed et al. (2004) and Glänzel et al. (2019), may reflect this process. The fact that bibliometric methods are already applied to the field of bibliometrics itself also indicates the rapid development of the discipline.

Mirroring the transition from the ‘manufactural’ form of ‘little science’ to the ‘big science’ of multinational research centres with enormous governmental and industrial support, scientometrics itself is claimed to have changed from its ‘little’ form to a ‘big’ one, with huge computerised databases and with national and multinational research policy agencies as its major customers.

In the 1990s, bibliometrics became a standard tool of science policy and research management. In particular, all significant compilations of science indicators rely heavily on publication and citation statistics and on other, more sophisticated bibliometric techniques.

New developments – Bibliometrics in the 21st century

The American physicist Jorge E. Hirsch brought a new challenge to the bibliometric community when he proposed a new indicator for measuring the research performance of individual scientists. The h-index (Hirsch, 2005) is based on both publication activity and citation impact: it is defined as the largest number h such that h of a scientist's papers have each received at least h citations. Hirsch's idea immediately attracted public interest and was received positively both in the physics community and in the scientometrics literature. Since then, several adjustments and improvements have been proposed by bibliometricians. At the same time, Hirsch's index marks the shift of bibliometric methodology and application towards the micro level, i.e., towards the evaluation of research teams and individual scientists. The requirements on data quality, on the reliability of methods and indicators and on the validity of results and findings at this level are enormous and cannot be met by bibliometrics alone. In this context, bibliometrics can serve as merely one component or tool in a broader framework (cf. Wouters et al., 2013).
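
As an illustration of this definition, the h-index can be computed from a list of citation counts in a few lines; the sketch below is a minimal implementation, and the citation counts in the usage example are hypothetical:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each
    (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # the paper at this rank still clears the bar
        else:
            break      # counts are sorted, so no later rank can qualify
    return h

# Hypothetical publication record: citation counts per paper.
print(h_index([10, 8, 5, 4, 3, 0]))  # -> 4: four papers cited >= 4 times
```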

The application of bibliometric tools to the Web resulted in the emergence of a new sub-discipline called web(o)metrics. As a consequence of the essential differences between print media and the web, this research topic evolved into a sub-discipline of scientometrics/informetrics with its own methodology. Peter Ingwersen (Denmark), Isidro Aguillo (Spain), Mike Thelwall (UK), Judit Bar-Ilan (Israel) and Liwen Vaughan (Canada) are among the main founders of the research area. The Web provides further relevant information sources for biblio-/informetric applications and makes various web-based manifestations of the usage and impact of scientific literature visible and measurable. Priem and Hemminger (2010) outlined the concept and framework of a new version of scientometrics, which they called Scientometrics 2.0. They listed, among others, bookmarking, reference managers, comments, (micro-)blogging, recommendation systems and social networks as possible sources for measuring the use of scientific information in a broader-than-scholarly context. Ever since, views and downloads of abstracts and full texts, bookmarks and readership, blog posts, comments and reviews, social media mentions and citations in a broader sense have been incorporated into the set of new, so-called altmetric indicators (cf. Altmetric, PlumX). The interpretation, reliability and fields of applicability of altmetrics are not yet clear (Gumpenberger et al., 2016), and their possible use in an evaluative context is, at least in their present form, still limited (cf. Thelwall, 2017a,b). Nonetheless, web- and altmetrics have become important topics in bibliometrics, and researchers expect them to contribute substantially to the measurement of the wider impact of scientific research in the near future.
