I was also pissed about school maths when I read this. It is such a beautiful concept!
The whole multivariable calculus course from Khan Academy is amazing and more than enough for this book. It is on their website as articles and on YouTube as a series of video tutorials.
You don't need an entire network to work with the Iris dataset. In fact, a single perceptron can handle it, at least for separating the setosa class, which is linearly separable from the other two.
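To illustrate the point, here's a minimal from-scratch perceptron (NumPy only) trained on a tiny hand-made stand-in for Iris; the six sample points and the two features (sepal length, petal length) are my own illustrative numbers, not the real dataset, but the same loop works on the full data for the setosa-vs-rest split.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron rule: w += lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Illustrative stand-in for Iris (sepal length, petal length):
# "setosa"-like points (label 1) have short petals, the others long ones.
X = np.array([[5.1, 1.4], [4.9, 1.5], [5.0, 1.3],   # "setosa"
              [6.4, 4.5], [6.9, 4.9], [5.9, 5.1]])  # "versicolor/virginica"
y = np.array([1, 1, 1, 0, 0, 0])

w, b = train_perceptron(X, y)
preds = np.array([1 if xi @ w + b > 0 else 0 for xi in X])
print(preds)  # → [1 1 1 0 0 0], matching y once the classes are separated
```

Since the two groups are linearly separable, the perceptron convergence theorem guarantees the loop settles on a separating line after a finite number of updates.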
A very nice introduction and very useful to start getting into the machine learning world, but I feel it lacks a bit of explanation of the algorithms described, like the perceptron and ADALINE...
As a useful resource, I recommend reading chapters two and three of the book "Python Machine Learning" by Sebastian Raschka to get a more in-depth view of them (I won't post a link as it is not open source, but it can be found very easily online).
Very interesting comment and question!
As far as my ML knowledge goes, matrix calculations aren't very computationally complex (it is just multiplying and adding numbers), so to make them faster you don't need a more powerful processor, you need more cores performing calculations in parallel. A normal CPU has 4 or 8 cores, so even though each one is very fast, it can't do many simultaneous additions and multiplications. A GPU improves things because it has thousands of cores (graphics rendering is itself just a lot of matrix calculations). A chip specially designed for machine learning should speed up learning the same way a GPU speeds up graphics rendering.
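The multiply-and-add structure described above is easy to see in a naive matrix multiplication: every output cell is an independent chain of multiplies and adds, which is exactly why thousands of GPU cores can each compute one cell at the same time. A rough sketch (plain Python loops, purely illustrative, not how you'd do it in practice):

```python
import numpy as np

def matmul_naive(A, B):
    """Naive matrix product. Each output cell C[i, j] depends only on
    row i of A and column j of B, so all cells could be computed in
    parallel — this independence is what GPUs exploit."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i, j] += A[i, p] * B[p, j]  # just multiply and add
    return C

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(6, dtype=float).reshape(3, 2)
assert np.allclose(matmul_naive(A, B), A @ B)  # matches NumPy's matmul
```

In real code you would of course call `A @ B` directly, which dispatches to optimized (and parallelized) BLAS routines instead of Python loops.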
One thing that intrigued me about what is said in the introduction is that the size and depth of neural networks will continue to grow linearly (as they have until now). But even though Moore's law says processing power doubles every two years, the quantum effects that arise at the size of current transistors could slow this growth down (until quantum computers are developed).
Note: sorry if my English sounds weird
I'm in! That's an amazing idea to stay motivated and to tackle this book the easy way.