Abstract
Recently, several state-of-the-art works have used deep learning architectures, specifically convolutional neural networks (CNNs), for banknote recognition and counterfeit detection with promising results. However, it is not clear which design strategy, a custom architecture or one obtained by transfer learning, is more appropriate in terms of classifier performance and inference time for massive data applications. This paper presents a comparison of the two design strategies across several types of architecture. For the transfer learning (TL) strategy, the most appropriate freezing points in sequential, residual, and Inception CNN architectures are identified. In addition, a custom model based on an AlexNet-type sequential CNN is proposed. Both the TL models and the custom model were trained and compared using a Colombian banknote dataset. According to the results, ResNet18 achieved the best accuracy, at 100%. However, the proposed custom network achieved the shortest inference times, running up to 6.48 times faster on CPU and 16.29 times faster on GPU than the transfer learning models.
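As a concrete illustration of the layer-freezing step in the TL strategy, the sketch below shows how a freezing point might be set in a pretrained ResNet18 using PyTorch/torchvision. The chosen freezing point (after layer3), the number of banknote classes, and the variable names are illustrative assumptions and are not taken from the paper.

import torch.nn as nn
from torchvision import models

# Illustrative sketch (assumed, not the authors' code): transfer learning on a
# pretrained ResNet18. All blocks up to the assumed freezing point (end of
# layer3) stay frozen; layer4 and the new classifier head are fine-tuned.
NUM_CLASSES = 12  # hypothetical number of banknote classes

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():          # freeze the entire backbone
    param.requires_grad = False

for param in model.layer4.parameters():   # unfreeze blocks after the freezing point
    param.requires_grad = True

# Replace the final fully connected layer with a new, trainable classification head.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)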