Dr. Gonzalo Ruz
Faculty of Engineering and Sciences,
Universidad Adolfo Ibáñez
We have recently seen artificial neural networks become popular again in the context of artificial intelligence and, in particular, machine learning. Although most of their recent popularity is due to deep architectures (deep learning) applied to perceptual problems (e.g., image classification), another branch of neural network research is gaining attention, one focused on how to efficiently train shallow networks, in a non-iterative way, for regression and classification tasks. In this talk, we will first review a class of neural networks known as neural networks with random weights and how these networks are trained non-iteratively. Then we will focus on some recent extensions proposed for these types of networks: introducing parallel layers, hidden-node pruning, non-iterative learning in deep architectures, and noise handling via regularization. Results using benchmark datasets as well as datasets from real applications will be discussed.
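To illustrate the non-iterative training idea mentioned above, here is a minimal sketch, assuming an ELM/RVFL-style single-hidden-layer network: the hidden-layer weights are drawn at random and kept fixed, and only the output weights are obtained in closed form by linear least squares (the toy data and all parameter choices are illustrative, not from the talk).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed, never trained)
b = rng.normal(size=n_hidden)                # random biases (fixed)

# Hidden-layer activations for all training points
H = np.tanh(X @ W + b)

# Non-iterative "training": solve the linear least-squares problem H @ beta ≈ y
# in one shot, instead of iterating with gradient descent.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
print(f"training MSE: {mse:.6f}")
```

Replacing the plain least-squares solve with a ridge-regularized one (adding a penalty on `beta`) is one common way such networks handle noisy data, which connects to the regularization-based noise handling mentioned in the abstract.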