Sunday 6 March 2016

Your Own Handwriting - The Real Test

We've trained and tested the simple 3-layer neural network on the MNIST training and test data sets. That worked incredibly well, achieving 97.4% accuracy!

The code, in Python notebook form, is at github:

That's all fine, but it would feel much more real if we got the neural network to work on our own handwriting, or on images we created ourselves.

The following shows six sample images I created:


The 4 and 5 are my own handwriting, written with different "pens". The 2 is a traditional textbook or newspaper two, but blurred. The 3 is my own handwriting, but with bits deliberately taken out to create gaps. The first 6 is a blurry and wobbly character, almost like a reflection in water. The last 6 is the same image but with a layer of random noise added.
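If you want to create a noisy test image of your own, here is a minimal sketch of how a layer of random noise could be added. It assumes the image is already held as a 28x28 numpy array of greyscale values between 0.0 and 1.0; the function name add_noise and the amount parameter are just for illustration.

import numpy

# add a layer of random noise to a greyscale image held as a numpy
# array of values in the range 0.0 to 1.0
def add_noise(image_array, amount=0.2):
    # random values in [0, amount) added pixel by pixel
    noise = numpy.random.uniform(0.0, amount, image_array.shape)
    # clip so the pixel values stay inside the valid 0.0 to 1.0 range
    return numpy.clip(image_array + noise, 0.0, 1.0)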

We've deliberately created challenging images for our network. Does it still work?

The demonstration code, which trains against the MNIST data set but tests against 28x28 PNG versions of these images, is at:

It works! The following shows a correct result for the damaged 3.


In fact, the code works for all the test images except the very noisy one. Yippee!
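If you're curious how a PNG image can be fed to the network, here is a minimal sketch. It assumes a trained network object n with a query() method, as built in the earlier posts; the use of the Pillow library and the file name are my own placeholder choices, not necessarily what the demonstration notebook does.

import numpy
from PIL import Image  # Pillow, used here just to load the PNG

# load the 28x28 PNG and convert it to greyscale
img = Image.open("my_own_image.png").convert("L")
img_array = numpy.asarray(img, dtype=numpy.float64)

# PNG scans are usually dark ink on a white background, but MNIST
# stores the ink as the high values, so invert while flattening
img_data = 255.0 - img_array.reshape(784)

# rescale from 0-255 to the 0.01 to 1.00 range used for training
inputs = (img_data / 255.0 * 0.99) + 0.01

# n is the trained neural network from the earlier posts
outputs = n.query(inputs)
print("network says", numpy.argmax(outputs))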



Neural Networks Work Well Despite Damage - Just Like Human Brains
There is a serious point behind that broken 3. It shows that neural networks, like biological brains, can work quite well even with some damage. Here the damage is to the input data rather than to the network itself, but the effect is analogous: biological brains continue to work well even when they themselves are damaged. You could do your own experiments to see how well a network performs when randomly chosen trained neurons, or their link weights, are removed.
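As a starting point for that experiment, here is a minimal sketch that damages a trained network by zeroing a random fraction of its learned link weights, after which you can re-run the MNIST test set and compare the accuracy. It assumes the network keeps its weight matrices in attributes called wih and who, as the code from the earlier posts does; the function name and the fraction are just for illustration.

import numpy

# knock out a random fraction of the learned link weights, in place
def damage_weights(weight_matrix, fraction=0.1):
    # boolean mask marking which weights to zero
    mask = numpy.random.random(weight_matrix.shape) < fraction
    weight_matrix[mask] = 0.0

# n is the trained neural network from the earlier posts
damage_weights(n.wih, fraction=0.1)  # input -> hidden link weights
damage_weights(n.who, fraction=0.1)  # hidden -> output link weights
# ...then score the network on the MNIST test set again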