Abstract
The number of received citations, and more complex bibliographic measures calculated from it, such as the h-index, remain the most widely used indicators for measuring research impact in an objective and easy-to-compute way. However, using the number of received citations as a measure of research impact has its shortcomings, some intrinsic (stemming from the doubt whether a citation actually confirms the cited paper's impact), some extrinsic (stemming from the ease of manipulating this measure by deliberately inserting multiple unmerited references). While the former can only be addressed by careful interpretation of the measure with consideration of its limitations, the latter can be mitigated to a large extent by replacing simple citation counting with a more sophisticated procedure that constrains the impact a single publication may have on others. One such solution is ArticleRank, which, however, has several disadvantages limiting its practical use. In this paper, we propose another solution to this problem, the Transitive Research Impact Score (TRIS), which is free of these disadvantages, and validate it on a sample dataset.