You Look Like a Thing and I Love You by Janelle Shane is a non-technical overview of machine learning. That isn’t to say it lacks depth – even if you are an experienced practitioner in machine learning you will learn something. The book is subtitled “How Artificial Intelligence Works and Why It’s Making the World a Weirder Place”, but Shane makes clear at the outset that it is all about machine learning – “artificial intelligence” is essentially the non-specialist term for the field.
Machine learning is based around training an algorithm on a set of data that represents the task at hand. It might be a list of names (of kittens, for example), where essentially we are telling the algorithm “all these things here are examples of what we want”. Or it might be a set of images in which we indicate the presence of dogs, cats or whatever we are interested in. Or, to use one of Shane’s examples, it might be sandwich recipes labelled as “tasty” or “not so tasty”. After training, the algorithm will be able to generate names consistent with the training set, label images as containing cats or dogs, or tell you whether a sandwich is potentially tasty.
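Shane’s own experiments use character-level recurrent neural networks, but to get a feel for the “label it and learn from it” idea, here is a minimal sketch of a tasty/not-so-tasty sandwich classifier using scikit-learn. It is my own toy illustration rather than anything from the book, and the recipes and labels are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: sandwich descriptions labelled "tasty" or "not so tasty".
recipes = [
    "ham, cheese, mustard, rye bread",
    "peanut butter, strawberry jam, white bread",
    "turkey, cranberry sauce, lettuce, wholemeal bread",
    "mud, gravel, old socks, white bread",
    "toothpaste, pickles, cardboard, rye bread",
    "glue, sand, lettuce, wholemeal bread",
]
labels = ["tasty", "tasty", "tasty", "not so tasty", "not so tasty", "not so tasty"]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(recipes, labels)

# Ask the trained model about a sandwich it has never seen.
print(model.predict(["cheese, pickles, rye bread"]))
```

The point is only the shape of the workflow: labelled examples go in, and the system learns to label (or generate) similar things.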
The book has grown out of Shane’s blog AI Weirdness, where she began posting about her experiences of training recurrent neural networks (a type of machine learning algorithm) at the beginning of 2016. This started with her attempts to generate recipes, and the results are, at times, hysterically funny. Following the recipes she went on to the naming of things, using neural networks to generate names for kittens, guinea pigs, craft beers and Star Wars planets, and to generate knitting patterns. More recently she has been looking at image labelling using machine learning, and at image generation using generative adversarial networks.
The “happy path” of machine learning is interrupted by a wide range of bumps in the road, which Shane identifies. These include:
- Messy training data – the recipe data, at one point, had ISBNs mixed in, which led the neural network to erroneously include ISBN-like numbers in its recipes;
- Biased training data – someone tried to analyse the sentiment of restaurant reviews but found that Mexican restaurants were penalised because the pretrained word2vec embeddings (word2vec is a widely used word-embedding model, which they used in their system) associated “Mexican” with “illegal” (see the sketch after this list);
- Not detecting the thing you thought it was detecting – Shane uses giraffes as an example: image labelling systems have a tendency to see giraffes where they don’t exist. This is because if you train a system to recognise animals then in all likelihood you will not include pictures with no animals in them. Show such a neural network an image of fields and trees with no animals in it and it will likely “see” an animal because, to its knowledge, animals are always found in such scenes. And neural networks just seem to like giraffes;
- Inappropriate reward functions – you might think you have given your machine learning system an appropriate “reward function”, i.e. a measure of success, but is it really the right one? For example, the COMPAS system, which is used in the US to assess whether prisoners should be recommended for parole, was trained using a reward based on re-arrest rather than re-offending. It therefore tended to recommend against parole for black prisoners because they were more likely to be arrested (not because they were more likely to re-offend);
- “Hacking the Matrix” – in some instances you might train your system in a simulation of the real world; for example, if you want to train a robot to walk then rather than building real robots you build virtual ones and try them out in a simulated environment. The problem comes when your virtual robot works out how to cheat in the simulated environment, for example by exploiting limitations of collision detection to generate energy;
- Problems unsuited to machine learning – some tasks are simply not amenable to machine learning solutions. For example, in the recipe generation problem the “memory” of the neural network limits the recipes generated: by the time it has reached the 10th ingredient in a list it has effectively forgotten the first one. Furthermore, once trained on one task, a neural network will “catastrophically forget” how to do that task if it is subsequently trained to do another – machine learning systems are not generalists.
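To get a feel for the biased-embeddings problem mentioned above, here is a small sketch of how you might probe pretrained word vectors for that kind of association. It is my own illustration, not the analysis described in the book, and it assumes the gensim library plus a local copy of the widely distributed Google News word2vec file:

```python
from gensim.models import KeyedVectors

# Assumption: a local copy of the commonly distributed Google News word2vec vectors.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Compare how strongly different cuisine words associate with a negative word.
# If "Mexican" sits closer to "illegal" than other cuisines do, a sentiment
# system built on these vectors will quietly penalise Mexican restaurants.
for cuisine in ["Mexican", "Italian", "French", "Chinese"]:
    print(cuisine, vectors.similarity(cuisine, "illegal"))
```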
My favourite of these is “Hacking the Matrix”, where algorithms discover flaws in the simulations in which they run, or flaws in their reward system, and exploit them for gain. This blog post on AI Weirdness provides some examples, and links to the original research.
Some of this is quite concerning. The examples Shane finds are the obvious ones – for instance, the flight simulator that met the goal of a “minimum force” landing by making the landing force so enormous that it overflowed the variable storing the value, registering as zero. Catastrophic from the pilot’s point of view, but an obvious problem that could be identified even without systematic testing. What if the problem is not so obvious, yet equally catastrophic when it occurs?
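To see how an overflow can turn an enormous value into a tiny one, here is a toy sketch of the idea – my own illustration, not the actual simulator – in which the landing force is recorded in a 16-bit register that wraps around:

```python
# Toy illustration: a "minimum landing force" reward where the force is stored
# in a 16-bit unsigned register, so values wrap around modulo 2**16.
REGISTER_BITS = 16

def recorded_force(true_force_newtons: int) -> int:
    """What the simulator's 16-bit register actually stores."""
    return true_force_newtons % (2 ** REGISTER_BITS)

def reward(true_force_newtons: int) -> int:
    """Reward a gentle landing: the smaller the recorded force, the better."""
    return -recorded_force(true_force_newtons)

print(reward(50))     # gentle touchdown: reward -50
print(reward(65536))  # catastrophic crash, but the register wraps to 0: reward 0
```

An optimiser searching over landing strategies will prefer the 65,536 N crash to the 50 N touchdown, because the wrapped-around value looks like a perfect score.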
A comment that struck me towards the end of the book was that humans “fake intelligence” with prejudices and stereotypes; it isn’t just machines that use shortcuts when they can.
The book finishes with how Shane sees the future of artificial intelligence: essentially a recognition that these systems have strengths and weaknesses, and that the way forward is to combine artificial and human intelligence.
Definitely worth a read!