Roberto Pecoraro, Valerio Basile and Viviana Bono
Since the Transformer architecture was introduced in 2017, there have been many attempts to bring the self-attention paradigm into the field of computer vision. In this paper, we propose LHC: Local multi-Head Channel self-attention, a novel self-attention m...
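To make the idea of channel self-attention concrete, the sketch below shows a generic channel-wise multi-head attention module in PyTorch, where attention weights are computed between channels rather than between spatial positions. This is an illustrative assumption, not the authors' LHC module (the abstract above is truncated): the class name ChannelSelfAttention, the 1x1-convolution projections, and all shapes are hypothetical choices made only for the example.

import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Generic channel-wise multi-head self-attention (illustrative sketch,
    not the LHC module from the paper)."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        assert channels % heads == 0
        self.heads = heads
        # 1x1 convolutions produce query/key/value maps with the same channel count
        self.to_qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=1)  # each: (b, c, h, w)

        # Treat channels as tokens and flattened spatial positions as features,
        # splitting the channels across heads.
        def split(t: torch.Tensor) -> torch.Tensor:
            return t.reshape(b, self.heads, c // self.heads, h * w)

        q, k, v = map(split, (q, k, v))
        # (b, heads, c/heads, c/heads): channel-to-channel attention weights
        attn = torch.softmax(q @ k.transpose(-2, -1) / (h * w) ** 0.5, dim=-1)
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out) + x  # residual connection

# Usage on a (batch, channels, height, width) feature map
feats = torch.randn(2, 64, 32, 32)
module = ChannelSelfAttention(channels=64, heads=4)
print(module(feats).shape)  # torch.Size([2, 64, 32, 32])

Because the attention matrix here is channels-by-channels rather than pixels-by-pixels, its cost grows with the number of channels instead of the spatial resolution, which is the usual motivation for channel-based attention in vision models.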