Ryota Higashimoto, Soh Yoshida and Mitsuji Muneyasu
This paper addresses the performance degradation of deep neural networks caused by learning with noisy labels. Recent research on this topic has exploited the memorization effect: networks fit data with clean labels during the early stages of learning an...
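As background, the memorization effect mentioned in the abstract underlies the common small-loss selection heuristic: since networks fit clean labels before noisy ones, samples with small loss early in training are likely clean. A minimal illustrative sketch (not the paper's method; `select_small_loss` is a hypothetical helper):

```python
import numpy as np

def select_small_loss(losses, keep_ratio):
    """Return indices of the keep_ratio fraction of samples with the
    smallest loss -- the small-loss criterion treats these as likely clean."""
    n_keep = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:n_keep]

# Toy per-sample losses: clean samples tend to have lower loss early on.
losses = np.array([0.1, 2.3, 0.2, 1.9, 0.15, 2.5])
clean_idx = select_small_loss(losses, keep_ratio=0.5)
```

In practice such a selection step is re-run each epoch, with `keep_ratio` tied to an estimate of the noise rate.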
Danilo Pau, Andrea Pisani and Antonio Candelieri
In the context of TinyML, many research efforts have been devoted to designing forward topologies to support On-Device Learning. Reaching this target would bring numerous advantages, including reductions in latency and computational complexity, stronger ...
Adil Redaoui, Amina Belalia and Kamel Belloulata
Deep network-based hashing has gained significant popularity in recent years, particularly in the field of image retrieval. However, most existing methods only focus on extracting semantic information from the final layer, disregarding valuable structura...
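The multi-layer idea in this abstract can be sketched generically: fuse intermediate (structural) and final (semantic) features before binarizing into a hash code. The projection and shapes below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features for 4 images: an intermediate layer (structural
# information) and the final layer (semantic information).
f_mid = rng.normal(size=(4, 64))
f_last = rng.normal(size=(4, 32))

# Fuse both levels, project to k bits, and binarize with sign().
fused = np.concatenate([f_mid, f_last], axis=1)   # shape (4, 96)
W = rng.normal(size=(96, 16))                     # learned in practice; random here
codes = np.sign(fused @ W)                        # binary codes in {-1, +1}
```

Retrieval then compares codes by Hamming distance, which is cheap on binary vectors.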
Xiaoyu Han, Chenyu Li, Zifan Wang and Guohua Liu
Neural architecture search (NAS) has shown great potential in discovering powerful and flexible network models, becoming an important branch of automatic machine learning (AutoML). Although search methods based on reinforcement learning and evolutionary ...
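To make the NAS setting concrete, here is a minimal random-search baseline over a toy operation space; the `evaluate` function is a placeholder (real NAS trains and validates each candidate), and all names are illustrative, not the paper's search method:

```python
import random

# Toy search space: one operation per layer.
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def evaluate(arch):
    # Placeholder fitness; a real NAS loop would train the network here.
    return sum(len(op) for op in arch)

def random_search(n_layers=4, budget=20, seed=0):
    """Sample `budget` random architectures and keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(budget):
        arch = [rng.choice(OPS) for _ in range(n_layers)]
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best_arch, best_score = random_search()
```

Reinforcement-learning and evolutionary searchers replace the uniform sampling above with a learned or mutation-based proposal distribution.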
Leila Malihi and Gunther Heidemann
Efficient model deployment is a key focus in deep learning. This has led to the exploration of methods such as knowledge distillation and network pruning to compress models and increase their performance. In this study, we investigate the potential syner...
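The two techniques combined in this study can each be sketched in a few lines: a Hinton-style distillation loss (KL divergence between temperature-softened distributions) and global magnitude pruning. This is generic background, not the authors' pipeline:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w).ravel())[k] if k > 0 else 0.0
    return np.where(np.abs(w) >= thresh, w, 0.0)
```

A typical synergy study prunes the student (or teacher) and then distills, comparing accuracy at matched parameter budgets.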
Xue Xing, Chengzhong Liu, Junying Han, Quan Feng, Qinglin Lu and Yongqiang Feng
Wheat is a staple cereal crop with diverse varieties. Accurate identification of wheat varieties promotes the growth of the wheat industry and the protection of breeding rights. To recognize wheat seeds quickly and accurate...
Jihua Cui, Zhenbang Wang, Ziheng Yang and Xin Guan
As deep learning models gain layers, their parameter counts and computational costs grow, making them difficult to deploy on edge devices. Pruning has the potential to significantly reduce the number of parameters and computations in a...
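One standard structured-pruning criterion relevant to this setting scores each convolutional filter by its L1 norm and removes the lowest-scoring filters, which shrinks both parameters and FLOPs. A generic sketch (illustrative only, not this paper's method):

```python
import numpy as np

def rank_filters_l1(conv_w):
    """Score each output filter of a conv layer by its L1 norm;
    low-norm filters are candidates for removal.
    conv_w shape: (out_channels, in_channels, kH, kW)."""
    return np.abs(conv_w).sum(axis=(1, 2, 3))

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 3, 3, 3))           # a small hypothetical conv layer
scores = rank_filters_l1(w)                 # one score per output filter
keep = np.argsort(scores)[len(scores) // 2:]  # keep the top half
pruned = w[keep]                              # shape (4, 3, 3, 3)
```

Unlike unstructured weight pruning, removing whole filters yields a dense smaller layer that runs faster on edge hardware without sparse-kernel support.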
Wenbo Zhang, Yuchen Zhao, Fangjing Li and Hongbo Zhu
Federated learning is currently a popular distributed machine learning solution, but in practical edge deployments it often suffers from cumbersome communication and difficult model convergence due to the training nature of its model information ...
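For context, the communication-heavy training loop referred to here typically follows FedAvg: each round, clients send locally trained parameters to a server, which aggregates them weighted by local dataset size. A minimal sketch of the aggregation step (generic background, not this paper's scheme):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style):
    each client's contribution is proportional to its local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with unequal data: the larger client dominates the average.
w1 = np.array([1.0, 2.0])
w2 = np.array([3.0, 4.0])
global_w = fed_avg([w1, w2], client_sizes=[10, 30])
```

Because every round ships full model updates, much federated-learning research targets compressing or reducing these exchanges.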
Huoxiang Yang, Yongsheng Liang, Wei Liu and Fanyang Meng
Due to the effective guidance of prior information, feature-map-based pruning methods have emerged as promising techniques for model compression. In previous works, the undifferentiated treatment of all information on feature maps amplifies the negat...
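A feature-map-based criterion differs from weight-magnitude pruning in that channel importance is computed from activations on real data. One simple such criterion (mean absolute activation; illustrative only, not the criterion proposed here):

```python
import numpy as np

def channel_importance(feature_maps):
    """Score channels by the mean absolute activation of their feature maps.
    feature_maps shape: (batch, channels, H, W)."""
    return np.abs(feature_maps).mean(axis=(0, 2, 3))

# Toy check: a channel with uniformly larger activations scores higher.
fm = np.ones((2, 3, 4, 4))
fm[:, 1] *= 5.0
scores = channel_importance(fm)   # -> [1.0, 5.0, 1.0]
```

The abstract's point is that treating all feature-map content as equally informative can mislead such scores, e.g. background activations inflating a channel's importance.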
Dawei Luo, Heng Zhou, Joonsoo Bae and Bom Yun
Reliability and robustness are fundamental requisites for the successful integration of deep-learning models into real-world applications. Deployed models must exhibit an awareness of their limitations, necessitating the ability to discern out-of-distrib...
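The standard baseline for the out-of-distribution awareness described here is maximum softmax probability (MSP): flag an input as OOD when the model's top softmax confidence falls below a threshold. A minimal sketch with an assumed threshold (background only, not this paper's detector):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def is_ood(logits, threshold=0.5):
    """Flag inputs whose maximum softmax probability is below `threshold`
    (the MSP baseline for out-of-distribution detection)."""
    return softmax(logits).max(axis=-1) < threshold

logits = np.array([[9.0, 0.0, 0.0],     # confident -> in-distribution
                   [0.4, 0.3, 0.35]])   # nearly flat -> flagged as OOD
flags = is_ood(logits)                  # [False, True]
```

More elaborate detectors replace the confidence score with temperature-scaled, distance-based, or energy-based scores, but the thresholding structure is the same.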