Bibliometric analysis of the top-cited articles on idiopathic intracranial hypertension
Source of Support: None. Conflict of Interest: None. DOI: 10.4103/0028-3886.253969
Keywords: Benign intracranial hypertension, bibliometrics, citation analysis, idiopathic intracranial hypertension, pseudotumor cerebri
Idiopathic intracranial hypertension (IIH) was first reported by Heinrich Quincke [1] in 1893 as "meningitis serosa," and the syndrome changed names multiple times until Corbett and Thompson [2] coined the term "idiopathic intracranial hypertension" in 1989. IIH, also called 'pseudotumor cerebri', is characterized by increased intracranial pressure of unknown cause, with its associated signs and symptoms, in an alert and oriented patient; its annual incidence is estimated at approximately 0.9/100,000 persons.[3] Numerous articles have been published on IIH, focusing on its natural history, epidemiology, diagnosis, pathogenesis, and medical and surgical treatment strategies, and their number continues to increase. Amid this abundance, however, significant studies are often overlooked. Bibliometric analyses are therefore used to identify landmark publications, to provide readers with a unique insight into the development of and trends within a topic, and to serve as an educational guide that facilitates evidence-based clinical decision-making for trainees.[4] Citation analysis is one bibliometric method that has been used to quantify the relative significance of a scientific paper within the scientific community, because the influence of a paper is usually proportional to the citations it receives.[5],[6] To our knowledge, the top-cited articles on IIH have not previously been identified or characterized. In the current study, we performed a citation analysis to identify and characterize the 100 top-cited articles on IIH.
Study design and data search
We performed a title-specific search of the Web of Science (Thomson Reuters) database to identify the 100 top-cited articles on IIH, as this database has been used to identify top-cited articles in many other medical specialties.[7],[8],[9] We used "idiopathic intracranial hypertension" or "pseudotumor cerebri" or "benign intracranial hypertension" as our search terms, without restricting the timespan or language. The search, performed on November 26, 2017, yielded a complete list of articles on IIH. This was a retrospective bibliometric analysis that did not involve human or animal subjects and was exempt from institutional review board approval.
Identification and assessment of articles
The consistency of data abstraction was ensured using the method developed by Lim et al.[10] Three investigators (MYS, BS, and ES), comprising 1 neurosurgeon and 2 neurologists (with 10, 6, and 3 years of experience, respectively), initially and independently reviewed the same 150 randomly chosen articles on IIH. Any disagreements were resolved in a consensus meeting. No formal inter-observer reliability testing was conducted between the investigators; however, disagreements were rare. After the initial pilot abstraction, the search results were manually reviewed by the investigators. The results were arranged from most to least cited and were checked three times independently to compile one comprehensive list of the 100 top-cited articles. Finally, these 100 top-cited articles were reviewed and the necessary data were extracted.
The 100 selected articles were analyzed using the following parameters: (a) citation count, (b) citations per year, (c) references, (d) year of publication, (e) study category, (f) number of authors, (g) authors' h-index, (h) first author's specialty, (i) institutions, (j) country of origin, (k) journal of publication, (l) journal impact factor, (m) Scimago journal rank (SJR), a prestige metric of a journal's scientific influence that accounts for both the number of citations received by the journal and the prestige of the journals from which those citations come, and (n) journal source-normalized impact per paper (SNIP), the ratio of a journal's citation impact to the degree of topicality of its subject field. The citation numbers for the top 100 articles were obtained by searching the Web of Science database. Articles were grouped into the following study categories:[11] (a) natural history (observational, epidemiological, prognostic, follow-up, pathological, and radiological studies linked to clinical evaluation), (b) laboratory (animal, basic science, and pathological studies without classification), (c) non-operative management (medication-based therapies and nonsurgical diagnostic studies), (d) operative management (surgical procedures and endovascular therapies only), (e) operative and non-operative management (studies that included components of both), (f) classification (pathological, radiological, and operative classification and grading studies), (g) review studies, and (h) others (meeting abstracts, letters, editorial materials, corrections, book chapters, errata, discussions, and book reviews). The Science Citation Index-Expanded (SCIE) reference data were used to complete the reference analysis. For the purposes of our research, the country of the first author was considered the country of origin of the article.
Once a list of journal impact factors had been obtained from the Journal Citation Report 2017, we studied the correlation between each journal's impact factor and the number of top 100 articles it had published. We also investigated the correlation between the number of citations and other characteristics, namely, the number of years since publication, the number of authors, and the participating countries. A combined analysis of author keywords and Keywords Plus in the SCIE database demonstrated research trends in IIH.[12],[13] To visualize the citation links and research topics of the 100 most frequently cited articles, we used CitNetExplorer software version 1.0 to carry out a citation network analysis, following the method suggested by Kusumastuti et al.[14]
Statistical analyses
All data were analyzed using the Statistical Package for the Social Sciences (SPSS Inc, IL, USA) version 22.0. Nonparametric Spearman rank correlations were used to determine the correlations among variables. All statistical tests were 2-tailed; a P value of < 0.05 was considered statistically significant.
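The Spearman rank correlation used throughout the analysis can be sketched as follows. This is a minimal pure-Python illustration of the statistic itself (the study used SPSS); the citation counts and impact factors below are hypothetical example values, not study data.

```python
def rankdata(xs):
    """Assign 1-based ranks, averaging ranks over ties (as Spearman's rho requires)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rs = Pearson correlation computed on the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical data: citation counts vs. journal impact factors
citations = [476, 364, 310, 250, 198, 150, 120, 99, 80, 58]
impact_factor = [8.1, 10.0, 8.1, 4.6, 3.3, 8.1, 2.9, 4.6, 2.1, 1.8]
print(f"rs = {spearman(citations, impact_factor):.3f}")  # rs ≈ 0.83 for these illustrative data
```

The same function applies to every variable pair reported in the Results (citations vs. references, years since publication, number of authors, and so on).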
Citations
The search terms yielded a total of 2141 articles, all of which were published in the English language. The 100 top-cited articles are listed in descending order [Supplementary Table 1]. The articles received a mean of 113 citations per paper (range 58-476). The most frequently cited article was "Diagnostic criteria for idiopathic intracranial hypertension" by Friedman et al., in Neurology (2002), with 476 citations.[15] The authors set out to describe updated diagnostic criteria for IIH suitable for routine patient management and for research purposes, stating that the diagnostic criteria for this disorder had not been updated since the modified Dandy criteria were articulated in 1985. Subsequently, in 2013, Friedman et al., proposed updated diagnostic criteria for IIH to incorporate advances and insights into the disorder realized over the preceding 10 years; this article is also among the most cited.[16] The second most cited article was "Idiopathic intracranial hypertension - a prospective study of 50 patients," authored by Wall et al.,[17] in 1991, which received 364 citations. In it, the authors concluded that patients should be evaluated by perimetry using an appropriate strategy and by contrast sensitivity testing, along with careful examination of the optic discs. The third most cited article was published in 1988 by Durcan et al.[18] The authors concluded that pseudotumor cerebri was a relatively common neurologic illness and might be an important preventable cause of blindness in obese young women. The articles were also analyzed by citations per year to overcome the analysis bias towards older studies; by this measure, the article "Revised diagnostic criteria for the pseudotumor cerebri syndrome in adults and children" by Friedman in Neurology (2013) ranked first [Supplementary Table 1].
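The citations-per-year normalization can be sketched as follows. The 476 and 364 totals match the counts reported above, but the count for the 2013 article is a hypothetical round number for illustration, as is the helper `citations_per_year`:

```python
# Rank articles by citations per year to reduce the bias toward older
# papers that have simply had more time to accumulate citations.
SEARCH_YEAR = 2017  # year the Web of Science search was performed

articles = [
    # (label, publication year, total citations — 2013 count is hypothetical)
    ("Friedman 2002, diagnostic criteria", 2002, 476),
    ("Wall 1991, prospective study of 50 patients", 1991, 364),
    ("Friedman 2013, revised diagnostic criteria", 2013, 200),
]

def citations_per_year(total, year, now=SEARCH_YEAR):
    # guard against division by zero for same-year publications
    return total / max(now - year, 1)

ranked = sorted(articles, key=lambda a: citations_per_year(a[2], a[1]), reverse=True)
for name, year, total in ranked:
    print(f"{name}: {citations_per_year(total, year):.1f} citations/year")
```

With these illustrative numbers the 2013 article tops the per-year ranking despite its lower total, mirroring the re-ranking described above.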
There was a positive correlation between the total number of citations and the citations per year (rs = 0.539, P < 0.001).
Other factors and their effects on citations
The reference counts of the 100 top-cited articles ranged from 0 to 895, with an average of 50 references. Among 4964 references, 63 were high-frequency references (cited ≥ 10 times) [Supplementary Table 2], and 36 of these were included in the top 100 articles. Although "Diagnostic criteria for idiopathic intracranial hypertension" by Friedman et al., in Neurology (2002) was the most frequently cited article overall, it was cited only 17 times by the 100 top-cited articles themselves, ranking 22nd [Supplementary Table 2]. There was no correlation between the citation count and the number of references (rs = 0.062, P = 0.542). The publication dates ranged from 1980 to 2014. Regarding publication trends by 10-year intervals, most articles were published between 1990-1999 (40%), followed by 2000-2009 (32%), 1980-1989 (20%), and 2010-present (8%). There was no correlation between the citation count and the years since publication (rs = 0.156, P = 0.121). The most frequent study categories were natural history (39%) and operative management (25%) studies, followed by review (19%), classification (7%), and non-operative management (5%) studies. A Kruskal-Wallis H test was conducted to determine whether the citation count differed between groups; the median citation count increased from others (66.0), to natural history (80.0), to non-operative management (99.0), to review studies (110.0), to operative management (111.0), and to classification (130.0) groups, but the differences were not statistically significant, χ2 (3) = 10.282, P = 0.068. The articles were contributed by 404 authors.
Most of the articles were published by fewer than four authors (48%), whereas the remaining articles had four to seven authors (44%) or more than seven authors (8%). There was no correlation between the citation count and the number of authors (rs = -0.093, P = 0.360). An analysis of the top 10 authors by number of articles, irrespective of authorship position, showed that J.J. Corbett and M. Wall each had 13 articles, with author h-indices of 39 and 40, respectively [Table 1]. There was no correlation between the citation count and the h-index of the first author (rs = -0.029, P = 0.772).
Assessing the specialties with 4 or more articles, neurology contributed 32% of the top 100 articles, followed by ophthalmology (27%), neuro-ophthalmology (11%), neuroradiology (9%), and neurosurgery (9%). The median citation count increased from neuroradiology (82.0), to ophthalmology (86.0), to neurosurgery (93.0), to neurology (98.0), to other (98.0), and to neuro-ophthalmology (118.0), but the differences were not statistically significant, χ2 (3) = 2.420, P = 0.798. There were a total of 91 institutions, and 19 institutions published 3 or more of the 100 top-cited articles. Seventeen (89.5%) of these institutions were located in the US; of the two outside the US, one was in Australia and one was in the UK. The institutions producing the most articles were the University of Iowa (n = 14), followed by the University of Sydney (n = 7) and the University of Mississippi (n = 6) [Table 2]. There was no correlation between the citation count and the number of institutions collaborating on a paper (rs = -0.084, P = 0.405).
The top-cited articles originated from 12 countries. Among countries with 3 or more articles, the most productive were the USA (n = 72), followed by the UK (n = 9), Australia (n = 5), and Canada (n = 4). Only 8 articles were written by multinational collaborations; 92 articles were authored by researchers from the same country. The median citation count increased from the UK (77.0), to Canada (77.5), to others (83.5), to the USA (101.5), and to Australia (118.0), but the differences were not statistically significant, χ2 (3) = 4.138, P = 0.388. The articles were published in 39 journals; Neurology published the greatest number (n = 26), followed by JAMA Neurology (formerly Archives of Neurology) (n = 9) and the American Journal of Ophthalmology (n = 7). Although most of the articles were published in Neurology, JAMA Neurology had the highest impact factor (10.029) and Ophthalmology had the highest SNIP and SJR values (3.321 and 4.631, respectively) [Table 3]. There were positive correlations between the citation counts and the IF (rs = 0.318, P = 0.001), SJR (rs = 0.287, P = 0.004), and SNIP (rs = 0.343, P < 0.0001) values.
Citation network
After downloading the full records of the top-cited articles from the Web of Science Core Collection database, a citation network analysis of the 100 publications, with 873 citation links, was performed using the CitNetExplorer program. Clustering analysis resulted in two main clusters of publications, as shown in [Figure 1]. Owing to the minimum size requirement, 1 publication (Raucher HS, 1985) does not belong to a cluster. The two citation networks were rooted in 2 classical publications (Rush JA, 1980; King JO, 1995) and are therefore referred to as the Rush cluster (n = 85) and the King cluster (n = 14) [Figure 2] and [Figure 3]. Publications in the Rush cluster mainly discussed the clinical profile of IIH, whereas publications in the King cluster tended to focus on transverse sinus stenting in the management of IIH.
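The underlying data structure of such an analysis is a citation graph: publications are nodes and each citation is a directed link from the citing to the cited paper. As a crude sketch, the snippet below groups papers by connected components; CitNetExplorer's actual clustering algorithm is considerably more sophisticated, and the papers and links below are hypothetical stand-ins, not the study's network.

```python
from collections import defaultdict

# Hypothetical citation links: (citing paper, cited paper)
edges = [
    ("Wall 1991", "Rush 1980"),
    ("Friedman 2002", "Wall 1991"),
    ("Higgins 2003", "King 1995"),  # illustrative stenting line of work
]

# Undirected adjacency view, sufficient for component finding
adj = defaultdict(set)
for citing, cited in edges:
    adj[citing].add(cited)
    adj[cited].add(citing)

def components(adj):
    """Return the connected components of the graph as a list of node sets."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(components(adj))  # two groups, echoing the two-cluster structure described above
```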
Keyword analysis
The most frequently used keywords were pseudotumor cerebri, papilledema, visual loss, idiopathic intracranial hypertension, nerve sheath decompression, intracranial hypertension, cerebrospinal-fluid pressure, cerebrospinal-fluid diversion, and benign intracranial hypertension. While descriptive keywords were more frequent during the 1980s and 1990s, keywords describing surgical management options, such as "nerve sheath decompression" and "cerebrospinal-fluid diversion," topped the list after the year 2000 [Figure 4].
The number of papers published in the scholarly scientific literature has been increasing exponentially, at a rate of approximately 3% per year, since 1980.[19] The scientific literature represents accumulated experience, recent advances in knowledge, and new information, usually in the form of original scientific articles. Bibliometric analysis is the extraction of statistics on journal articles, commonly focusing on citation analysis of research outputs and publications. Bibliometrics can therefore be used to extract important and effective information from large and complex databases. Since Alan Pritchard defined bibliometrics in 1969, bibliometric analyses have become widespread.[20] However, to the best of our knowledge, this is the first bibliometric analysis of publications related to "idiopathic intracranial hypertension." Although this study is a snapshot of the current literature, the top 100 most-cited articles identified here represent the most quoted body of evidence in this field. Our search returned a considerable number of articles, 2141 in total, with an average citation count of 113 (range 58-476). Several prior studies have examined the most highly cited papers on a specific disease or condition, such as epilepsy (401-3749 citations), essential tremor (79-846), dystonia (137-560), and Parkinson's disease (401-4327); our figures are much lower than these equivalents in neurology.[21],[22],[23] It is widely assumed that publication in a high-impact journal will enhance the impact of an article, and previous studies have shown that the impact factor of a journal is the best predictor of citations.
Thus, top-cited articles are usually published in journals with high impact factors.[24] However, some studies do not support this finding,[25],[26] and it has also been stated that the citation rates of articles determine the journal impact factor, not vice versa.[27] Although most of the articles in our study were published in Neurology, JAMA Neurology had the highest impact factor, and Ophthalmology had the highest SNIP and SJR values. We also found positive correlations between citation counts and IF (rs = 0.318, P = 0.001), SJR (rs = 0.287, P = 0.004), and SNIP (rs = 0.343, P < 0.0001) values. The decade during which most top-cited articles on IIH were published was the 1990s; only 8 articles were published after 2010. This result suggests that it may take some time for article citations to peak, as documented through bibliometric analysis.[28] However, "citations per year" is primarily used to evaluate the current relevance of an article to the scientific community, regardless of its time of publication, and is useful for avoiding the analysis bias towards older studies. Comparing total citation counts with citations per year, only 2 of the 5 most-cited articles kept their ranks.[15],[29] While the studies by Friedman et al., and Wall et al.,[16],[30],[31] ranked 8th, 54th, and 95th by total citation count, they rose to 1st, 3rd, and 5th, respectively, by citations per year, indicating that researchers tend to cite the most recent study [Supplementary Table 1]. Moreover, we did not find a correlation between the citation count and the years since publication (rs = 0.156, P = 0.121). There is an increasing tendency across scientific disciplines to write multi-authored papers.
In particular, papers with more authors are commonly better cited,[32],[33],[34] probably because more authors tend to present a greater diversity of ideas and/or data types,[35] especially when interdisciplinary subjects are addressed, and such papers have the potential for more self-citations [34] and for citations by colleagues and collaborators from a larger network.[36] However, in our study, 48% of the articles were published by fewer than four authors, and there was no correlation between the citation count and the number of authors (rs = -0.093, P = 0.360). The h-index of a researcher is the number of papers coauthored by the researcher with at least h citations each.[37] It was proposed as a representative measure of individual scientific achievement, alongside the total number of papers published and the total number of citations garnered.[38] However, there was no correlation between the citation count and the h-index of the first author (rs = -0.029, P = 0.772). Some authors have suggested that the number of citations of an article could be increased by increasing the number of references it cites.[19],[33],[39],[40] This may be because a longer reference list makes a paper more visible, by showing up in search results in citation databases more frequently,[41] or by encouraging researchers who have been cited to cite the paper in return. Also, articles with more references tend to be longer,[42] and longer papers are probably better cited because they contain more, and a greater diversity of, data and ideas, as in the case of more authors. In our study, the reference counts of the top-cited articles ranged from 0 to 895, with an average of 50 references. Contrary to this common assumption, however, there was no correlation between the citation count and the number of references (rs = 0.062, P = 0.542).
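The h-index definition given above (the largest h such that the researcher has h papers with at least h citations each) can be computed directly; the citation counts in the example are hypothetical:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i  # the i-th best paper still has >= i citations
        else:
            break
    return h

# Hypothetical example: six papers with these citation counts.
# h = 3: three papers have >= 3 citations, but not four with >= 4.
print(h_index([25, 8, 5, 3, 3, 1]))
```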
There is no doubt that the USA leads the world in the number of medical research publications, owing to its large number of researchers and generous research funding.[43],[44],[45] In addition, American authors are known to tend to cite local articles, and US reviewers prefer American articles.[46],[47] In line with the literature, our study found that most articles were written in the USA (n = 72), and that 17 (89.5%) of the 19 institutions that published 3 or more of the 100 top-cited articles were located in the USA. However, the differences between countries in the median citation count were not statistically significant, χ2 (3) = 4.138, P = 0.388. Our study had a few limitations. First, we used only a single medical database, Web of Science, for our analysis; some journals, and therefore publications, might have been missed because they are not indexed, and different databases return vastly different results.[48] Also, our bibliometric analysis did not cover unpublished works or non-journal printed works such as books, dissertations, reports, or government documents. Second, search queries may yield false-positive and false-negative results. Third, some information from older classic articles becomes integrated into common knowledge, such that it is no longer considered necessary to cite those articles; this is known as the "obliteration by incorporation" effect.[49] To counter it, we also used the number of annual citations and the corresponding rankings. Other potential factors affecting citations include negative citations being counted as positive, manipulation of the system by researchers (inappropriate self-citation, citing colleagues, splitting outputs into many articles), and bias in favour of articles written in the English language.[50] Despite these limitations, our study was the first to perform a bibliometric analysis of the top-cited articles on IIH.
In conclusion, our bibliometric analysis of the top 100 most-cited articles on IIH revealed no correlation between the citation count and the number of references, years since publication, number of authors, authors' h-index, or number of collaborating institutions, and positive correlations between the citation count and the journal impact factor, Scimago journal rank, and journal source-normalized impact per paper values.
Acknowledgements
The authors thank Prof. Betul Baykan for her supervision.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
[Figure 1], [Figure 2], [Figure 3], [Figure 4]
[Table 1], [Table 2], [Table 3]