A note on using Digital Science’s Dimensions to learn about individual research impact


Viet-Phuong La, AISDL
https://orcid.org/0000-0002-4301-9292

January 12, 2023

In today’s academia, the metrics mania continues, and it will certainly continue for many years to come, regardless of urgent calls to correct the use of metrics in university rankings, to reduce the addiction to metrics, or to institutionalize “better” methods of evaluating an academic’s contributions and merits. This is because, for publishing scientists, citations and the impact attributed to them have a far greater influence on their lives, recognition, and perceived achievements than a person outside academia usually imagines.

So instead of trying to ignore the importance of metrics, this article introduces an increasingly powerful tool that a publishing academic can use to track their own academic impact. Of course, you will soon see that the method is centered on citations, but it offers more than the raw citation count alone. The tool is called Dimensions.

Dimensions is a product in Digital Science’s portfolio (https://www.dimensions.ai/). It works on a very large dataset, containing over 133 million research publications as of this writing. Other statistics about its databases are shown in Figure 1.



Figure 1. Dimensions’ research databases

The good news for academics is that basic information is provided for free. One can easily sign up for a personal account and then start exploring the data on offer. A concise but useful 2018 article by Catherine Williams [1] provides some basic information about Dimensions’ starting point and early functions. The author wrote:

“Researchers themselves benefit enormously, too. Whether or not their institution has chosen to license the Plus or Analytics version of the platform, they can conduct advanced literature searches using the openly available application at https://app.dimensions.ai – and explore the grants, clinical trials, patents and policy records directly associated with the publications that appear in their search results. Optionally, they can create a free account to save searches and export data, and an integration with ORCID makes it possible to add publications from Dimensions to an individual ORCID profile in just one click...”

And I can confirm that these utilities are still available to academics free of charge today. In the remainder of this article, I will show some uses of Dimensions’ metrics for evaluating one’s own academic impact, using the data Dimensions provides for my colleague, the founder of this science portal. The data subset for his published articles is provided here [URL].

Dimensions allows its users to sort research records by one of its impact metrics, namely citations, RCR, FCR, and the Altmetric Attention Score. Of the four metrics, the first three are centered on citations, while the last tracks media mentions. Thus, we can again see the emphasis on citations, even though Dimensions has an inherent motivation to promote Altmetric, which is another product in Digital Science’s portfolio.

This sorting tool is useful, although it looks quite simple. While the citation count of an individual publication is meaningful, it is not enough to tell “how big is big”. The citation ranking can therefore only be interpreted correctly alongside one or two additional metrics, and Dimensions has those on offer. The FCR (Field Citation Ratio) is a particularly useful and easy-to-communicate one.
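For readers who would like to reproduce this kind of ranking offline, below is a minimal sketch in Python (pandas) for a publication list exported from Dimensions as CSV. The file name and the column headers (“Title”, “Times cited”, “FCR”) are my assumptions and should be adjusted to match whatever your actual export contains.

```python
# Minimal sketch: compare a top-5 ranking by citations with a top-5 ranking by FCR.
# The file name and column headers below are assumptions, not Dimensions' fixed format.
import pandas as pd

pubs = pd.read_csv("dimensions_export.csv")  # hypothetical export file

# Rank the same records two ways and compare the orderings.
top_by_citations = pubs.sort_values("Times cited", ascending=False).head(5)
top_by_fcr = pubs.sort_values("FCR", ascending=False).head(5)

print(top_by_citations[["Title", "Times cited", "FCR"]])
print(top_by_fcr[["Title", "Times cited", "FCR"]])
```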

The definition of FCR from Dimensions follows. “The Field Citation Ratio (FCR) indicates the relative citation performance of an article when compared to similarly-aged articles in its subject area. The FCR is normalized to 1.0 for this selection of articles. An FCR value of more than 1.0 shows that the publication has a higher-than-average number of citations for its group (defined by its FoR subject code and publication year). Articles that are less than 2 years old do not have an FCR. An article with zero citations has an FCR of 0.”
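To make the quoted definition concrete, here is a toy, back-of-the-envelope illustration of the idea in Python. It is only a conceptual simplification using my own made-up numbers; Dimensions’ actual FCR computation, based on FoR subject codes and its own normalization, is more involved.

```python
# Conceptual toy example of a field-normalized citation ratio. This is NOT
# Dimensions' exact algorithm, only an illustration of the idea: divide an
# article's citations by the average citations of similarly aged articles in
# the same subject group, so that 1.0 means "average for the cohort".
cohort_citations = [0, 0, 1, 2, 5, 0, 3, 1]   # hypothetical same-field, same-year articles
article_citations = 12

cohort_average = sum(cohort_citations) / len(cohort_citations)  # 1.5 here
fcr_like_ratio = article_citations / cohort_average             # 8.0 here

print(f"cohort average = {cohort_average:.2f}, ratio = {fcr_like_ratio:.1f}")
```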

Figure 2 shows our colleague’s data for the five publications with the highest FCR values. I will show why this additional look at the FCR has value.



Figure 2. Top 5 highest-FCR documents

The first insight is that this ranking is not identical to the ranking by citation count. His highest-FCR publication [2], i.e., the most impactful research document among those published around the same time, is in reality NOT the top-cited one, which is [3]. This is not obvious at first glance, because the top-cited item is a “newer” one, published in 2020, and has received a great many citations since. (For transparency, I am that paper’s first author, for some historical reasons.)

Regarding total citations, [3] has received twice as many as [2]. But in terms of FCR, the value for [2] is twice that of [3]. The two documents swap positions when we move from one metric to the other.

In addition, Dimensions provides some further insights drawn from its databases for each research item. Figure 3 presents the summary for one of the highest-FCR publications, [4]. It is easy to see that, although the citation count is below 90, the FCR is 170, meaning that the item has received 170 times the citations of the average similarly aged item in its field.
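A quick back-of-the-envelope inversion of the ratio (my own rough illustration, not a number reported by Dimensions) helps explain how a paper with fewer than 90 citations can carry an FCR of about 170: the implied average for its comparison group must be well below one citation.

```python
# Rough inversion of the FCR ratio (my own illustration, not Dimensions data):
# if an item has roughly 90 citations and an FCR of roughly 170, the implied
# average for its same-year, same-field comparison group is very low.
citations = 90
fcr = 170
implied_cohort_average = citations / fcr
print(f"implied cohort average = {implied_cohort_average:.2f} citations")  # about 0.53
```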



Figure 3. Dimensions’ summary for research item [4]

In contrast, item [5] in the top-cited list has received 115 citations, but its FCR is only 102, much lower than that of [4], as seen in Figure 3. The contrast is even starker for research publication [6], published in 2020. Item [6] has received only 35 citations, roughly one-third of item [4]’s count; however, its FCR is 173, higher than that of [4]. Both articles were published in the same year, 2020.

It is also noteworthy that the RCR measure for [4] is unavailable. I think the N/A means the system has not collected sufficient data to calculate the number. In addition, reading the RCR is less straightforward than reading the FCR, due largely to its official definition, quoted exactly from Dimensions as follows:

“The Relative Citation Ratio (RCR) indicates the relative citation performance of an article when compared to other articles in its area of research. The RCR is normalized to 1.0 and calculated for all articles funded by the NIH in the Dimensions catalog. An RCR of more than 1.0 shows that a publication has an above-average citation rate for its group, when defined by the subject area citation rates of the articles that have been cited with it.”
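For readers who find this definition hard to parse, the toy calculation below captures, in heavily simplified form, how I read the underlying idea: an observed citation rate divided by an expected citation rate for the article’s field, scaled against a benchmark set. The numbers and the scaling step are my own assumptions, not Dimensions’ actual algorithm.

```python
# Conceptual toy reading of the RCR (my own simplification, not Dimensions'
# exact computation): an article's citation rate is divided by the rate
# expected for its field, and the result is scaled so that the benchmark set
# (NIH-funded articles) averages 1.0.
article_citations_per_year = 20.0
expected_field_rate = 8.0      # hypothetical expected citations per year for the field
benchmark_scaling = 1.0        # hypothetical; real benchmarking uses NIH-funded papers

rcr_like_ratio = (article_citations_per_year / expected_field_rate) * benchmark_scaling
print(f"RCR-like ratio = {rcr_like_ratio:.1f}")  # 2.5 here: above the benchmark average
```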

Looking across a large number of individual publication records, I have seen many that have an FCR but not an RCR. So it seems that calculating an RCR requires more data and is not so straightforward.

But there are situations where the reverse is true, i.e., a record has an RCR but not an FCR. Two examples are records [7] and [8]. Both have high citation counts, 156 and 155, respectively. Their RCR values are also high compared with hundreds of other well-cited research items, at 8.8 and 4.2, respectively. But neither of the two has an FCR, for some mysterious reason. Because of this very “unknown” reason, these two do not appear in the top-FCR list; otherwise, given their high citation counts, they might well push some other records out of the top 5.
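A practical side note, continuing the earlier pandas sketch (same assumed export file and column headers): when you sort by FCR, records with a missing FCR fall to the bottom, so they never enter an FCR-sorted top-5 list, no matter how well cited they are.

```python
# Minimal sketch: locate well-cited records that lack an FCR. Column names are
# my assumptions and should match your actual Dimensions export.
import pandas as pd

pubs = pd.read_csv("dimensions_export.csv")  # hypothetical export file

# Missing (NaN) FCR values are pushed to the end, so such records cannot enter the top 5.
top_by_fcr = pubs.sort_values("FCR", ascending=False, na_position="last").head(5)

# Records with citations recorded but no FCR, like [7] and [8] in the text.
missing_fcr = pubs[pubs["FCR"].isna() & pubs["Times cited"].notna()]
print(missing_fcr[["Title", "Times cited", "RCR"]])
```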

Of course, there are more aspects and functions of Dimensions for a researcher to study and try out. However, as presented above, some early insights may be of direct and immediate interest to a newcomer to Dimensions.

References

[1] Williams C. (2018). Dimensions from Digital Science. Insights, 31, 33.

[2] Vuong QH, et al. (2018). Cultural additivity: behavioural insights from the interaction of Confucianism, Buddhism and Taoism in folktales. Palgrave Communications, 4, 143.

[3] La VP, et al. (2020). Policy response, social media and science journalism for the sustainability of the public health system amid the COVID-19 outbreak: the Vietnam lessons. Sustainability, 12(7), 2931.

[4] Vuong QH. (2020). Reform retractions to make them more transparent. Nature, 582(7811), 149.

[5] Vuong QH, Napier NK. (2015). Acculturation and global mindsponge: an emerging market perspective. International Journal of Intercultural Relations, 49, 354-367.

[6] Vuong QH, et al. (2020). On how religions could accidentally incite lies and violence: folktales as a cultural transmitter. Palgrave Communications, 6, 82.

[7] Tran BX, et al. (2019). Global evolution of research in artificial intelligence in health and medicine: a bibliometric study. Journal of Clinical Medicine, 8(3), 360.

[8] Vuong QH. (2018). The (ir)rational consideration of the cost of science in transition economies. Nature Human Behaviour, 2(1), 5.