Sunday, 26 February 2017

Book Translations

I've been really lucky with the interest in my Make Your Own Neural Network book.

Some publishers have been interested in taking the book, but after some thinking I've resisted the temptation because:

  • I can price the books how I want .. this is important especially for the ebook which I want to be as cheap and accessible as possible. Some publishers will increase the ebook price by an order of magnitude!
  • I can update the books to fix errors, and have the updated book ready for people to buy usually within 24 hours, and often within just a few hours.
  • As an author who has spent lots of my own time and effort on this, I get a much fairer deal with Amazon than with traditional publishers.

However, I have agreed for translations of the book into other languages to be handled by publishers. So far, the book is on course to be published in:

  1. German
  2. Chinese
  3. Russian
  4. Japanese
  5. Korean

I love the "traditional animal" that O'Reilly have done for the German version:


I'm looking forward to more translations - personally I wish there were Spanish and Italian ones too.

10 comments:

  1. Dear Tariq, first of all - your book is an awesome read! I worked through your examples and even did the calculations by pen and paper!

    To follow up on neural networks, I would love to take one of the versions you are sharing on GitHub and extend it with some, imho, cool functions, like "store" and "load" for the net configuration (weights), and visualizing the hidden layer while querying the net.

    As you are providing the examples in Jupyter notebooks, I would love to convert them to normal .py files - are you fine with me just copying the code rather than forking your repository? For sure I will link to your blog/repo within the readme. But I feel like coding within an IDE makes it much easier for me.

    Greetings from Austria! Cheers, Mario

    Replies
    1. Hey Mario

      sorry for the late reply!

      Yes of course - please do take the code! Your idea is great .. adding load/store is something a few people have asked about.

      I'd love to hear about your work, and I'd be very happy to link to and share your blog.

      I know that more advanced readers like to use an IDE.

      Thanks from London!

      Tariq

  2. Dear Tariq,

    I just bought the German translation of your book and read it within two days. I don't work in the IT business, but since I left school at the end of the 1980s I have been interested in this kind of thing.

    The chapters I am missing are about how to train a network from scratch, and about working with Boolean logic. It was good to train it with the given dataset for learning purposes. But how would it work to start from scratch?

    I thought, for example, about teaching the network to predict when the light in the bathroom is switched on/off.

    Greetings from Germany (with rusty skills in English and Algebra)

    Marco

    Replies
    1. Hi Marco

      To train a network from scratch you do need a training data set. That is, examples of what the correct output should be for a given set of inputs.

      For an example like a bathroom light, you would need to think about what the inputs are. They could be time, light, temperature, humidity and so on. Once you have a data set which represents what the correct answer is (light on / off), you then need to think about what size of neural network you will need. This will depend a lot on the kinds of input parameters. In practice, many professionals will experiment with different sizes and shapes of networks until they find a good one, and then use that.
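
      To make that concrete, here is a very rough sketch in Python. The sensor readings, layer sizes and targets below are all made up for illustration, and the tiny network just follows the same single-hidden-layer, sigmoid-and-backpropagation pattern as the one in the book, rather than reusing its actual code:

      import numpy

      def sigmoid(x):
          return 1.0 / (1.0 + numpy.exp(-x))

      # invented training data: each row is [hour/24, ambient light, temperature/40, humidity]
      # and the target is 1.0 if the bathroom light should be on, else 0.0
      inputs = numpy.array([
          [0.95, 0.05, 0.45, 0.60],   # late evening, dark  -> light on
          [0.30, 0.10, 0.40, 0.80],   # early morning, dark -> light on
          [0.55, 0.90, 0.55, 0.50],   # midday, bright      -> light off
          [0.70, 0.85, 0.60, 0.40],   # afternoon, bright   -> light off
      ])
      targets = numpy.array([[1.0], [1.0], [0.0], [0.0]])

      # try a few hidden layer sizes, as suggested above, and compare the outputs
      for hidden_nodes in (2, 4, 8):
          rng = numpy.random.default_rng(0)
          w_ih = rng.normal(0.0, 0.5, (4, hidden_nodes))    # input  -> hidden weights
          w_ho = rng.normal(0.0, 0.5, (hidden_nodes, 1))    # hidden -> output weights
          lr = 0.5

          for _ in range(5000):
              hidden = sigmoid(inputs @ w_ih)               # forward pass
              output = sigmoid(hidden @ w_ho)
              error = targets - output                      # how wrong are we?
              grad_out = error * output * (1.0 - output)    # back propagate the error
              grad_hid = (grad_out @ w_ho.T) * hidden * (1.0 - hidden)
              w_ho += lr * hidden.T @ grad_out              # nudge the weights
              w_ih += lr * inputs.T @ grad_hid

          print(hidden_nodes, "hidden nodes ->", output.round(2).flatten())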

      I hope that helps a little. Let me know if you need more.

  3. Dear Tariq, I just finished the Korean version of your book.
    It was a good read, but after thinking about some key components I noticed something odd. (Maybe too easy for experts like you.. but anyway.)
    Why the sigmoid? How does this seemingly arbitrary function come to be the tool that solves the problem? (I mean sigmoid-like functions.)
    Is this really the result of imitating nature, and it just happens to work? Or is there a whole set of reasons which didn't make it into the book? I want to know your thoughts on this. Thank you.

    Greetings from S.Korea.
    - anonymous

    Replies
    1. Thanks for getting in touch with a great question!

      I wondered this myself. The answer is that you can use different functions instead of the sigmoid - for example tanh() is also popular, as is the rectified linear unit (relu). You can find many more examples online.

      The reason sigmoid and tanh() are popular is (1) they are fairly easy to do algebra with, which you need in order to work out back propagation, and (2) early thinking was guided by how biological neurons work, and a sigmoid-like shape was thought to be the best match.

      The most important thing to remember is that the function you use must be non-linear. If it isn't, you lose the power of a neural network, because the combination of these functions collapses down to a simple one. For example, you can't use f(x) = ax + b, because combining many of these collapses back down to a single linear function, which can't model complex data.
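
      If it helps to see it in code, here is a small sketch in Python - the numbers are arbitrary, it just evaluates the common activation functions and shows how a purely "linear activation" collapses:

      import numpy

      def sigmoid(x):
          return 1.0 / (1.0 + numpy.exp(-x))

      def relu(x):
          return numpy.maximum(0.0, x)

      x = numpy.linspace(-3, 3, 7)
      print("sigmoid:", sigmoid(x).round(2))      # squashes values into (0, 1)
      print("tanh:   ", numpy.tanh(x).round(2))   # squashes values into (-1, 1)
      print("relu:   ", relu(x).round(2))         # zero below 0, identity above

      # why non-linearity matters: stacking two linear "activations"
      # f(x) = ax + b is still just one linear function
      f1 = lambda v: 2.0 * v + 1.0                # a = 2,    b = 1
      f2 = lambda v: -0.5 * v + 3.0               # a = -0.5, b = 3
      print(numpy.allclose(f2(f1(x)), -1.0 * x + 2.5))   # True - it collapsed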

      Let me know if this doesn't help and I'll try again.

  4. Dear Mr. Rashid,
    I have read your book (German translation) and I think it is simply fantastic.
    With no real background in math or programming, except some Pascal on CP/M in the nineties and some C around the turn of the millennium, I was able to build a simple neural net to teach a small robot to recognize images, with amazing results. Thank you very much for your work!
    For my purpose I converted your .ipynb code to .py using jupyter nbconvert.
    As training my neural net (with 1,350 images of 50x50px) takes about 3 minutes (3 hours on a Raspberry Pi Zero), I looked for a way to save/load the trained net (the object n).
    As “pickle” does not work, I used the dill library (https://pypi.python.org/pypi/dill) which extends python’s pickle module for serializing and de-serializing python objects and is published under a 3-clause BSD license.
    That little bit of code (Python 3) is all you need to save or load the trained net:

    import dill as pickle

    # saving object n
    with open("filename.bin", "wb") as pf:
        pickle.dump(n, pf, pickle.HIGHEST_PROTOCOL)

    # loading object n
    with open("filename.bin", "rb") as pf:
        n = pickle.load(pf)
    I hope that perhaps this hint/example helps others a bit to move forward with their projects.
    I wish you and your family relaxed holidays and a beautiful turn of the year!
    Knut

    Replies
    1. Thanks Knut! I'm really pleased you found the book useful. If you'd like to, please do leave a review on amazon.de

      Thanks also for your suggestion for saving the neural network's state. A few readers have suggested similar ideas. Perhaps I should do a post on the blog about saving and loading state?
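
      In the meantime, a lightweight alternative to pickling the whole object is to save just the weight matrices with numpy. A rough sketch - this assumes the network keeps its weights in attributes called wih and who, as the notebooks on GitHub do, so adjust the names if yours differ:

      import numpy

      def save_network(net, filename="network_weights.npz"):
          # write the two weight matrices to a single compressed file
          numpy.savez_compressed(filename, wih=net.wih, who=net.who)

      def load_network(net, filename="network_weights.npz"):
          # read the matrices back into an existing network object,
          # created with the same layer sizes as the one that was saved
          data = numpy.load(filename)
          net.wih = data["wih"]
          net.who = data["who"]

      After training you would call save_network(n), and later create a network with the same layer sizes and call load_network(n).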

  5. Hi Tariq, I am interested in doing the Spanish translation. Write me at angel.berniz@gmail.com, so that we can talk about the details. All the best,
    Angel.

    Replies
    1. Angel - gracias, te escribiré directamente! (Thank you, I will write to you directly!)

      My Spanish isn't great, but I will write to you :)
