Tag: deep learning

Book review: Deep Learning with Python by François Chollet

Deep Learning with Python by François Chollet is the third book I have reviewed on deep learning neural networks. Despite these reviews only spanning a couple of years, it feels like the area is moving on rapidly. The biggest innovations I see from this book are the use of pre-trained networks and the dominance of the Keras/TensorFlow/Python ecosystem for doing deep learning.

Deep learning is a type of artificial intelligence based on many-layered neural networks. This is where the “deep” comes in – it refers to the numbers of layers in the networks. The area has boomed in the last few years with the availability of massive datasets on which to train, improvements in numerical algorithms for training neural networks and the use of GPUs to further accelerate deep learning. Neural networks have been used in production since the 1990s – by the US postal service for reading handwritten zip codes.

Chollet works on artificial intelligence at Google and is the author of the Keras deep learning library. Google is also the home of TensorFlow, a lower-level library which is often used as a backend for Keras. This is a roundabout way of saying that we should expect Chollet to be expert and authoritative in this area.

The book starts with some nice background to machine learning. I liked Chollet’s description of machine learning (deep learning included) as being about finding a representation of data which makes the problem at hand trivial to solve. Imagine taking two pieces of coloured paper, placing them one on top of the other and then crumpling them into a ball. Machine learning is the process of un-crumpling the ball so that the two pieces of paper can once again be cleanly separated.

As an introduction to the field, Deep Learning with Python runs through some examples of deep learning applied to various classes of problem, including movie review sentiment analysis, classifying newswire articles and predicting house prices, before going back to discuss some issues these problems raise. A recurring theme is the problem of overfitting: deep learning models can learn their training data so well that they essentially memorise the answers, and so when they are faced with examples they have not seen before they perform badly. Overfitting can be addressed with a range of techniques, such as dropout, weight regularisation and simply gathering more training data.
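
As a rough illustration of the kind of mitigation the book discusses, here is a minimal Keras sketch of a small classifier using dropout and L2 weight regularisation; the layer sizes, input dimension and binary output are my own illustrative choices rather than anything taken from the book.

    # Minimal sketch: fighting overfitting with dropout and L2 weight regularisation.
    # Layer sizes and input dimension are hypothetical.
    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    model = keras.Sequential([
        layers.Dense(16, activation="relu",
                     kernel_regularizer=regularizers.l2(0.001),
                     input_shape=(10000,)),    # e.g. a 10,000-dimensional bag-of-words vector
        layers.Dropout(0.5),                   # randomly zero half the activations during training
        layers.Dense(16, activation="relu",
                     kernel_regularizer=regularizers.l2(0.001)),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid")  # binary output, e.g. positive/negative review
    ])
    model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])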

One twist I had not seen before is the division of the labelled data used in machine learning into three parts, not two: training, validation and test. The use of training and validation sets is commonplace: the training set is used for training, while the validation set is used to test the quality of a model after training. The third component which Chollet introduces is the “test” set; this is like the validation set but it is only used when your model is about to go into production, to see how it will perform in real life. The problem it addresses is that machine learning involves a large number of hyperparameters (things like the type of machine learning model, the number of layers in a deep network, or the form of the activation function) which are not changed during training but are changed by the data scientist, quite possibly automatically and systematically. The hyperparameters can thus be overfitted to the validation set, so a model can perform well on validation data (which it has effectively seen before) but not on test data, which represents real life.
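
For concreteness, here is a minimal sketch of such a three-way split using scikit-learn’s train_test_split; the proportions and the dummy data are purely illustrative.

    # Minimal sketch: splitting labelled data into training, validation and test sets.
    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 20)             # 1,000 examples with 20 features (dummy data)
    y = np.random.randint(0, 2, size=1000)   # binary labels

    # Hold back a test set that is only touched once, just before going into production.
    X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    # Split the remainder into training and validation sets; the validation set is the one
    # that hyperparameter tuning can quietly overfit to.
    X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)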

A second round of examples looks at deep learning in computer vision, using convolutional neural networks (convnets). These are related to the classic computer vision processes of convolution and image morphology. Also introduced here are recurrent neural networks (RNNs), for applications in processing sequences such as time series data and language. RNNs carry memory across the elements of a sequence, which dense and convolutional networks lack; this makes them effective for problems where the order of the data is important.
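
To give a flavour, here is a minimal Keras convnet sketch in the spirit of the book’s image-classification examples; the particular architecture, input shape and ten-class output are illustrative rather than copied from the book.

    # Minimal sketch of a small convnet: stacked convolution and max-pooling layers
    # followed by a dense classifier. Architecture details are illustrative.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),  # learnable convolution filters
        layers.MaxPooling2D((2, 2)),                                            # downsample the feature maps
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax")  # ten output classes, e.g. digits 0-9
    ])
    model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])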

The final round of examples is in generative deep learning including generating text, the DeepDream system, image style transfer and generating images of faces.

The book ends with some thoughts on the future. Chollet comments that he does not like the term “neural networks”, which implies the ability to reason and abstract in the way that humans do. One of the limitations of deep learning is that, as currently used, it does not have the ability to abstract or generate programmatic descriptions of solutions. You would not use deep learning to launch a rocket – we have detailed knowledge of the physics of rockets, gravity and the atmosphere which makes a physics-based approach far better.

As I read, I realised that keeping up with what is new in machine learning is a critical and challenging task. Chollet addresses this directly, suggesting three ways of keeping abreast of new developments:

  1. Kaggle – the machine learning competition site;
  2. ArXiv – the preprint server, in particular http://www.arxiv-sanity.com/ which is a curated view of the machine learning part of arXiv;
  3. Keras – keeping up with developments in the Keras ecosystem.

If you’re going to read one book on deep learning, this should probably be the one: it is readable, it covers the field pretty well, and Chollet is an authority in the area who, in my view, has particularly acute insight into deep learning.

Book review: Artificial Intelligence for Humans: Volume 3 Deep Learning and Neural Networks by Jeff Heaton

Deep learning and neural networks are receiving more attention these days; you may have seen the nightmarish images generated using this technology by Google Research. I picked up Artificial Intelligence for Humans: Volume 3 Deep Learning and Neural Networks by Jeff Heaton to find out more, since the topic fits in with my interests in data science and machine learning. There doesn’t seem to be much in the way of accessible, book-length treatments of this relatively new topic. Most other offerings on Amazon have publication dates in the future.

It turns out that Artificial Intelligence for Humans is the result of a Kickstarter campaign; so far the author has funded three volumes on artificial intelligence by this route, two of them for around $18,000 and one for around $10,000. I paid £16 for the physical book, which seems like a reasonable price. I think it is a pretty well-polished product; it doesn’t quite reach the editing and production levels of a publisher like O’Reilly but it is at least as good as other technical publishers. The accompanying code examples and website are really nicely done.

Neural networks have been around for a long time, since the 1940s, and see periodic outbreaks of interest and enthusiasm. They are modelled, loosely, on the workings of biological brains, with “neurons” connected together by linkages of different weights which can be trained to perform tasks such as image recognition, classification and regression. The “neurons” are grouped into layers, with an input layer where data enters, feeding into potentially multiple successive “hidden” layers, and finally an output layer of neurons where results are read off. The output of a neuron is calculated by summing its inputs multiplied by the weights of those inputs and feeding the result through an “activation function”. The training process is used to optimise the weights, and may also evolve the structure of the network.
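
As a concrete illustration (mine, not the book’s), here is a tiny numpy sketch of a single neuron: a weighted sum of its inputs plus a bias, passed through a sigmoid activation function. The numbers are made up.

    # Minimal sketch of one artificial neuron: weighted sum of inputs -> activation function.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    inputs = np.array([0.5, -1.2, 3.0])    # outputs of neurons in the previous layer
    weights = np.array([0.8, 0.1, -0.4])   # learned connection weights
    bias = 0.2                             # learned bias term

    output = sigmoid(np.dot(weights, inputs) + bias)
    print(output)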

I remember playing with neural networks in the 1980s, typing a programme from a magazine into my Amstrad CPC464 that recognised handwritten digits; funnily enough this is still the go-to demonstration of neural networks! In the past neural networks did not gain traction because of the computational demands of training. This problem appears to have been solved with new algorithms and GPU-based computation. A second innovation is the introduction of techniques to evolve the structure of neural networks to do “deep learning”.

Much of what is presented is familiar to me from my reading on machine learning (supervised and unsupervised learning, regression and classification), image analysis (convolution filters), and old-fashioned optimisation (stochastic gradient descent, Levenberg-Marquardt, genetic algorithms and simulated annealing). It does lead me to wonder sometimes whether there is nothing new under the sun and whether many of these techniques are simply different fields of investigation re-casting the same methods in their own language. For example, the LeNet-5 networks used in image analysis contain convolution layers which act exactly like convolution filters in normal image analysis, and the max pool layers have the effect of downscaling the image. One would anticipate that the combination of these gives much the same effect as multi-scale image processing techniques.
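
To make the downscaling point concrete, here is a small numpy sketch (my own, not from the book) of 2×2 max pooling on a toy image; it halves each spatial dimension, much like building one level of an image pyramid.

    # Minimal sketch: 2x2 max pooling downscales a toy 4x4 "image" to 2x2.
    import numpy as np

    image = np.arange(16, dtype=float).reshape(4, 4)   # a 4x4 single-channel image
    # Group the pixels into 2x2 blocks and keep the maximum of each block.
    pooled = image.reshape(2, 2, 2, 2).max(axis=(1, 3))
    print(pooled.shape)                                # (2, 2): each dimension halved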

The book provides a good summary on the fundamentals of neural networks, how they are built and trained, what different variants are called and then goes on to talk in more detail about the new stuff in deep learning. It turns out the label “deep” is applied to neural networks with more than two layers, which isn’t a particularly high bar. It isn’t clear whether this is two layers including the input and output layers or two layers of hidden neurons. I suspect it is the latter. These “deep” networks are typically generated automatically.

As the author highlights, with the proliferation of easy-to-use machine learning and neural network libraries the problem is no longer the core algorithm; rather it is the selection of the right model for your particular problem and the optimisation of the learning and evaluation strategy. As a Pythonista, it looks to me like the way to go is to use the nolearn and Lasagne libraries. A measure of this book is that when I go to look at the documentation for these projects, the titles at least make sense.

The author finishes off with a description of his experience of doing a Kaggle challenge. I’ve done this; it’s a great way of getting some experience in machine learning techniques on something close to real problems. I thought the coverage was a bit brief, but it highlighted how neural networks are used in combination with other techniques.

This isn’t an in-depth book, but it introduces all the useful vocabulary and the appropriate libraries to start work in this area. And as a result I’m off to try t-SNE on a problem I’m working on, and then maybe try some analysis using the Lasagne library.
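
For reference, here is a minimal sketch of running t-SNE via scikit-learn (one readily available implementation); the random data and parameters are placeholders.

    # Minimal sketch: embed high-dimensional data into 2D with t-SNE for plotting.
    import numpy as np
    from sklearn.manifold import TSNE

    X = np.random.rand(200, 50)    # 200 samples with 50 features (placeholder data)
    embedding = TSNE(n_components=2, random_state=0).fit_transform(X)
    print(embedding.shape)         # (200, 2)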