Training deep neural networks can be challenging, especially for very deep models. Much of this difficulty is due to the instability of the gradients computed via backpropagation. In this post, we will learn how to create a self-normalizing deep feed-forward neural network using Keras. This mitigates the gradient instability issue, speeding up training convergence and improving model performance.
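As a minimal sketch of what such a network looks like in Keras (the input shape, depth, and layer widths below are illustrative assumptions, not details from this post): self-normalization comes from pairing SELU activations with LeCun normal initialization in a plain stack of dense layers.

```python
import tensorflow as tf
from tensorflow import keras

# A self-normalizing feed-forward network: SELU activations paired with
# LeCun normal initialization keep each layer's outputs close to mean 0
# and standard deviation 1, stabilizing gradients in deep dense stacks.
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))  # assumed MNIST-like input
for _ in range(20):  # deliberately deep, to show the effect at depth
    model.add(keras.layers.Dense(100,
                                 activation="selu",
                                 kernel_initializer="lecun_normal"))
model.add(keras.layers.Dense(10, activation="softmax"))

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=1e-3),
              metrics=["accuracy"])
```

Note that the self-normalizing property only holds under certain conditions: the inputs must be standardized (mean 0, standard deviation 1) and the architecture should be a plain sequential stack of dense layers.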
Disclaimer: This article is a brief summary with a focus on implementation. Please read the cited papers for full details and mathematical justification (links in the sources section).
In their landmark 2010 paper, Xavier Glorot and Yoshua Bengio provided invaluable insights into the difficulty of training deep neural networks. …
This post is heavily inspired by an exercise in chapter 12 of Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow. I highly recommend this book.
Working with very large datasets is a common scenario in Machine Learning Engineering. Often, these datasets are too large to fit in memory, which means the data must be retrieved from disk on the fly during training.
Disk access is multiple orders of magnitude slower than memory access, so efficient retrieval is a high priority.
Fortunately, TensorFlow’s tf.data API provides a simple and intuitive interface to load, preprocess, and even prefetch data.
In this post, we will learn how to create a simple yet powerful input pipeline to efficiently load and preprocess a dataset using the tf.data API. …
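For illustration, here is a minimal sketch of such a pipeline; the shard filenames, the eight-features-plus-target CSV layout, and the batch size are assumptions, not details from this post. It interleaves reads across files, parses and shuffles records in parallel, and prefetches batches so disk I/O overlaps with training.

```python
import tensorflow as tf

# Hypothetical CSV shards on disk; the paths are placeholders.
filepaths = ["data/shard_00.csv", "data/shard_01.csv"]

def parse_line(line):
    # Assumed record layout: 8 float feature columns followed by 1 target.
    # The empty-constant default marks the target column as required.
    defaults = [0.0] * 8 + [tf.constant([], dtype=tf.float32)]
    fields = tf.io.decode_csv(line, record_defaults=defaults)
    x = tf.stack(fields[:-1])
    y = tf.stack(fields[-1:])
    return x, y

dataset = (
    tf.data.Dataset.list_files(filepaths, seed=42)
    # Read several files concurrently, interleaving their lines
    # (skip(1) assumes each shard has a header row).
    .interleave(lambda fp: tf.data.TextLineDataset(fp).skip(1),
                cycle_length=2, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(buffer_size=10_000)
    .map(parse_line, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    # Prefetching lets the CPU prepare the next batch while the
    # accelerator trains on the current one.
    .prefetch(tf.data.AUTOTUNE)
)
```

The resulting dataset can be passed directly to model.fit(dataset), with each step pulling an already-prepared batch from memory rather than waiting on the disk.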