Jiarun Wu and Qingliang Chen
Massively pre-trained transformer models such as BERT have achieved great success in many downstream NLP tasks. However, they are computationally expensive to fine-tune, slow at inference, and have large storage requirements. So, transfer learning with ad...
Fei Mei, Qingliang Wu, Tian Shi, Jixiang Lu, Yi Pan and Jianyong Zheng
Recently, a large number of distributed photovoltaic (PV) power generation systems have been connected to the power grid, which has increased the fluctuation of the net load. Therefore, load forecasting has become more difficult. Considering the characte...