AI, Vol. 4, No. 1 (2023)

ARTICLE

Embarrassingly Parallel Independent Training of Multi-Layer Perceptrons with Heterogeneous Architectures

Felipe C. Farias, Teresa B. Ludermir and Carmelo J. A. Bastos-Filho

Abstract

In this paper we propose a procedure to train several independent Multilayer Perceptron Neural Networks with different numbers of neurons and activation functions in parallel (ParallelMLPs), exploiting the principle of locality and the parallelization capabilities of modern CPUs and GPUs. The core idea of the technique is to represent several sub-networks as a single large network and to use a Modified Matrix Multiplication that replaces an ordinary matrix multiplication with two simple matrix operations, providing separate and independent gradient-flow paths for each sub-network. We assessed our algorithm on simulated datasets, varying the number of samples, features and batches across 10,000 different models, as well as on the MNIST dataset. We achieved a training speedup of 1 to 4 orders of magnitude compared to the sequential approach. The code is available online.
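The packing idea described in the abstract can be illustrated with a small, hypothetical PyTorch sketch: several single-hidden-layer MLPs with different hidden sizes are stored in one large weight matrix, and a block mask on the output weights keeps each sub-network's forward and backward paths independent. The mask-based multiplication and all names below are illustrative assumptions for exposition, not the authors' exact Modified Matrix Multiplication.

    import torch

    # Hypothetical sketch: three sub-MLPs with different hidden sizes are
    # packed into one large hidden layer and trained in a single pass.
    hidden_sizes = [4, 8, 16]          # heterogeneous architectures
    n_in, n_out = 10, 3                # shared input/output dimensions
    n_models = len(hidden_sizes)
    total_hidden = sum(hidden_sizes)

    W1 = torch.randn(n_in, total_hidden, requires_grad=True)
    W2 = torch.randn(total_hidden, n_out * n_models, requires_grad=True)

    # Block-diagonal mask: output block k only connects to hidden block k,
    # so gradients cannot mix across sub-networks.
    mask = torch.zeros(total_hidden, n_out * n_models)
    h0 = 0
    for k, h in enumerate(hidden_sizes):
        mask[h0:h0 + h, k * n_out:(k + 1) * n_out] = 1.0
        h0 += h

    X = torch.randn(32, n_in)          # one batch shared by all sub-networks
    H = torch.tanh(X @ W1)             # packed hidden activations
    Y = H @ (W2 * mask)                # masked multiply isolates each sub-network

    # Each sub-network has its own output slice and therefore its own loss.
    losses = torch.stack([Y[:, k * n_out:(k + 1) * n_out].pow(2).mean()
                          for k in range(n_models)])
    losses.sum().backward()            # independent gradient paths per block

In this sketch all sub-networks share the tanh activation; per-block activation functions, as used in ParallelMLPs, could be applied by slicing H before the masked multiplication.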
