Abstract
Nowadays, people's lives are filled with vast amounts of image data, and image retrieval is widely needed. Deep hashing methods are extensively used to meet this demand because of their fast retrieval speed and low memory consumption. The problem with conventional deep hashing image retrieval techniques, however, is that insufficient and unbalanced feature extraction prevents the high-dimensional semantic content of an image from being effectively expressed. Considering these flaws in feature extraction, this paper proposes the deep cross-dimensional attention hashing (DCDAH) method; its main contributions are as follows. First, this paper proposes a cross-dimensional attention (CDA) module embedded in ResNet18. Thanks to its specially designed branches, the module can capture cross-dimension interactions in feature maps to compute attention weights effectively: each branch applies a different rotation and residual transformation to the feature map produced by the convolutional neural network (CNN). To keep the DCDAH model from becoming overly complex, the CDA module is designed with low computational overhead. Second, this paper introduces a scheme for reducing the dimensionality of tensors that lowers computation while retaining a rich representation: along a given dimension of the feature map, max pooling and average pooling are performed separately, and the two results are concatenated. Experiments on the CIFAR10 and NUS-WIDE data sets show that the DCDAH method significantly improves image retrieval performance.
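The dimension-reduction step described above (pooling one dimension with both max and average, then concatenating the results) can be sketched in NumPy. This is an illustrative reading of the abstract, not the paper's actual implementation; the function name `pool_and_concat` and the example tensor shapes are assumptions.

```python
import numpy as np

def pool_and_concat(x, axis=0):
    """Reduce one dimension of a feature map to size 2 by
    stacking its max-pooled and average-pooled slices.
    (Assumed reading of the paper's pool-and-concatenate scheme.)"""
    mx = x.max(axis=axis, keepdims=True)    # max pooling along the chosen dimension
    avg = x.mean(axis=axis, keepdims=True)  # average pooling along the same dimension
    return np.concatenate([mx, avg], axis=axis)

# Hypothetical feature map: 64 channels, 8x8 spatial resolution
fmap = np.random.rand(64, 8, 8)
reduced = pool_and_concat(fmap, axis=0)
print(reduced.shape)  # (2, 8, 8)
```

The 64-channel dimension collapses to 2 while the spatial layout is untouched, so later attention computations operate on a much smaller tensor yet still see both the strongest and the average activation at every position.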