Tag: neural networks

Book review: Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks by Jeff Heaton

Deep learning and neural networks are receiving more attention these days; you may have seen the nightmarish images generated using this technology by Google Research. I picked up Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks by Jeff Heaton to find out more, since the topic fits in with my interests in data science and machine learning. There doesn't seem to be much in the way of accessible, book-length treatments of this relatively new topic. Most other offerings on Amazon have publication dates in the future.

It turns out that Artificial Intelligence for Humans is the result of a Kickstarter campaign; so far the author has funded three volumes on artificial intelligence by this route: two of them for around $18,000 and one for around $10,000. I paid £16 for the physical book, which seems like a reasonable price. I think it is a pretty well polished product; it doesn't quite reach the editing and production levels of a publisher like O'Reilly, but it is at least as good as other technical publishers. The accompanying code examples and website are really nicely done.

Neural networks have been around for a long time, since the 1940s, and see periodic outbreaks of interest and enthusiasm. They are modelled, loosely, on the workings of biological brains, with "neurons" connected together by linkages of different weights which can be trained to perform tasks such as image recognition, classification and regression. The "neurons" are grouped into layers, with an input layer, where data enters, feeding into potentially multiple successive "hidden" layers, finally leading to an output layer of neurons where results are read off. The output of a neuron is calculated by summing its inputs multiplied by the weights of those inputs and feeding the result through an "activation function". The training process is used to optimise the weights, and may also evolve the structure of the network.
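To make that forward pass concrete, here is a minimal sketch in plain Python/NumPy (my own illustration, not code from the book); the input values, weights and the choice of a sigmoid activation are all made up for the example.

```python
import numpy as np

def sigmoid(x):
    """A common activation function, squashing values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, fed through the activation."""
    return sigmoid(np.dot(inputs, weights) + bias)

# Illustrative numbers only: three inputs feeding a single neuron.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.6])
print(neuron_output(x, w, bias=0.2))

# A whole layer is just a weight matrix: 3 inputs -> 4 hidden neurons.
W_hidden = np.random.randn(3, 4)
b_hidden = np.zeros(4)
hidden = sigmoid(x @ W_hidden + b_hidden)
print(hidden)
```

Training amounts to adjusting the entries of those weight arrays so the outputs match known answers, typically by some flavour of gradient descent.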

I remember playing with neural networks in the 1980s, typing a programme from a magazine into my Amstrad CPC464 which recognised handwritten digits; funnily enough, this is still the go-to demonstration of neural networks! In the past neural networks have not gained traction because of the computational demands of training. This problem appears to have been solved with new algorithms and GPU-based computation. A second innovation is the introduction of techniques to evolve the structure of neural networks to do "deep learning".

Much of what is presented is familiar to me from my reading on machine learning (supervised and unsupervised learning, regression and classification), image analysis (convolution filters), and old-fashioned optimisation (stochastic gradient descent, Levenberg-Marquardt, genetic algorithms and simulated annealing). It does lead me to wonder sometimes whether there is nothing new under the sun, and whether many of these techniques are simply different fields of investigation re-casting the same methods in their own language. For example, the LeNet-5 networks used in image analysis contain convolution layers which act exactly like convolution filters in normal image analysis, and the max pool layers have the effect of downscaling the image. One would anticipate the combination of the two to give much the same effect as multi-scale image processing techniques.
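To illustrate the downscaling point, here is a small NumPy sketch (again my own illustration, not the book's code) of a 2×2 max pool, which halves the image in each direction while keeping the strongest response in each patch.

```python
import numpy as np

def max_pool_2x2(image):
    """Downscale a 2D image by taking the maximum of each 2x2 patch."""
    h, w = image.shape
    # Trim to even dimensions, then group the pixels into 2x2 blocks.
    image = image[: h - h % 2, : w - w % 2]
    blocks = image.reshape(image.shape[0] // 2, 2, image.shape[1] // 2, 2)
    return blocks.max(axis=(1, 3))

img = np.arange(16).reshape(4, 4)
print(img)
print(max_pool_2x2(img))  # a 2x2 result: the max of each 2x2 block
```

Stacking convolution and pooling layers therefore examines the image at progressively coarser scales, much as a multi-scale image processing pipeline would.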

The book provides a good summary of the fundamentals of neural networks, how they are built and trained, and what the different variants are called, and then goes on to talk in more detail about the new stuff in deep learning. It turns out the label "deep" is applied to neural networks with more than two layers, which isn't a particularly high bar. It isn't clear whether this is two layers including the input and output layers or two layers of hidden neurons; I suspect it is the latter. These "deep" networks are typically generated automatically.

As the author highlights, with the proliferation of easy-to-use machine learning and neural network libraries the problem is no longer the core algorithm; rather, it is the selection of the right model for your particular problem and the optimisation of the learning and evaluation strategy. As a Pythonista, it looks like the way to go is to use the nolearn and Lasagne libraries. A measure of this book is that when I go to look at the documentation for these projects, the titles at least make sense.
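For flavour, here is a minimal sketch of how a small network is declared with Lasagne; the layer sizes and the MNIST-style 784-pixel input are my own assumptions for illustration, not an example from the book. nolearn then wraps networks like this in a scikit-learn-style fit/predict interface.

```python
# A minimal Lasagne network definition; sizes are illustrative only.
from lasagne.layers import InputLayer, DenseLayer
from lasagne.nonlinearities import rectify, softmax

# 784 inputs (e.g. 28x28 pixel images), one hidden layer, 10 output classes.
l_in = InputLayer(shape=(None, 784))
l_hidden = DenseLayer(l_in, num_units=100, nonlinearity=rectify)
l_out = DenseLayer(l_hidden, num_units=10, nonlinearity=softmax)
```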

The author finishes off with a description of his experience of doing a Kaggle challenge. I've done this; it's a great way of getting some experience in machine learning techniques on nearly-real problems. I thought the coverage was a bit brief, but it highlighted how neural networks are used in combination with other techniques.

This isn't an in-depth book, but it introduces all the useful vocabulary and the appropriate libraries to start work in this area. And as a result I'm off to try t-SNE on a problem I'm working on, and then maybe try some analysis using the Lasagne library.
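If you fancy trying the same thing, t-SNE is readily available in scikit-learn; the snippet below is a generic sketch on the built-in digits dataset rather than anything to do with my own problem.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Project the 64-dimensional digits data down to 2 dimensions with t-SNE.
digits = load_digits()
embedding = TSNE(n_components=2, random_state=0).fit_transform(digits.data)

# Points with the same digit label should cluster together in the embedding.
plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, cmap="tab10", s=5)
plt.colorbar(label="digit")
plt.show()
```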