Why can't the impact factor be relied on as a measure of quality?
- IF analysis is limited to citations from the journals indexed by the Web of Science/Web of Knowledge. Currently, the Web of Science indexes only 8621 journals across the full breadth of the sciences, and just 3121 in the social sciences.
- A high IF/citation rate says nothing about the quality -- or even the validity -- of the articles being cited. Notorious or even retracted articles often attract a lot of attention, and hence a high number of citations. The notoriety surrounding the first publication on "cold fusion" is one such example.
- Journals that publish more "review articles" are often found near the top of the rankings. While not known for publishing new, creative findings, these individual articles tend to be heavily cited.
- The IF is the average number of citations received in a year by the articles a journal published in the previous two years. Because it is a mean, a small number of highly cited articles can skew the figure for the whole journal (see the numerical sketch below this list).
- It takes several years for new journals to be added to the list of titles indexed by the Web of Science/Web of Knowledge, so these newer titles will be under-represented.
- It's alleged that journal editors have learned to "game" the system by encouraging authors to add citations to articles previously published in the same journal.
https://academic-accelerator.com/Impact-Factor-IF/CA-A-Cancer-Journal-for-Clinicians
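To make the skew point concrete, here is a minimal sketch with invented citation counts (the numbers and variable names are hypothetical, for illustration only): because the two-year impact factor is a mean, a couple of heavily cited items can dominate the figure even when most articles are rarely cited.

```python
# Minimal sketch (with invented numbers) of why a mean-based metric like the
# two-year impact factor is skewed by a few heavily cited articles.
from statistics import mean, median

# Hypothetical citations received this year by each citable item the journal
# published in the previous two years.
citations_per_article = [0, 1, 1, 2, 2, 3, 3, 4, 250, 310]

impact_factor = mean(citations_per_article)      # this is what the IF reports
typical_article = median(citations_per_article)  # what a "typical" article receives

print(f"Impact factor (mean):     {impact_factor:.1f}")    # 57.6
print(f"Median citations:         {typical_article:.1f}")  # 2.5
print(f"Articles cited < 5 times: {sum(c < 5 for c in citations_per_article)} "
      f"of {len(citations_per_article)}")                  # 8 of 10
```

In this made-up journal, the headline figure is driven almost entirely by the two outliers, while the typical article is cited only two or three times.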
In many cases, articles are cited in order to criticise their premises, assumptions, methodology, findings, or conclusions. Such articles accumulate citations not because of their quality but because of their flaws; in effect, they attract negative publicity. Here, a growing citation count signals the opposite of quality, and the IF again fails as a measure of it.
The Web of Science counts citations almost exclusively from journals, yet many articles are also cited in books and other types of material. Those citations are largely ignored, so here too the citation count falls short.