First off, this is great background, especially since it touches on McCulloch-Pitts, which I think still stands as the major historical milestone for machine learning.
Can someone point me to resources on how FPGAs, TPUs, and other custom AI chips are going to keep expanding the upper limits of neural network depth and size?
One of my chief questions about the long "wax and wane" of deep learning / computational neuroscience is whether we're just confusing increased computational power with some sort of qualitative advance in our knowledge of AGI. Seeing how easily representation learning can be attacked adversarially, and how fragile many neural networks are, definitely doesn't help the case.
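That fragility is surprisingly easy to demo. Here's a toy sketch of the fast gradient sign method (FGSM) on a single-neuron "network" — the weights and input are made up for illustration, not from any real trained model: a small, sign-aligned nudge to the input flips a confident prediction.

```python
import math

# Hypothetical trained weights and bias for a one-neuron classifier.
w = [2.0, -3.0, 1.5, 0.5]
b = 0.1

def predict(x):
    """Sigmoid probability of the positive class."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

x = [0.5, -0.2, 0.3, 0.8]      # input the model classifies confidently
p = predict(x)                  # ~0.93, well above 0.5

# FGSM: step each feature in the sign of the loss gradient w.r.t. the input.
# For cross-entropy with true label y = 1, that gradient is (p - 1) * w.
sign = lambda v: 1.0 if v > 0 else -1.0
eps = 0.4
x_adv = [xi + eps * sign((p - 1.0) * wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # the adversarial copy drops below 0.5
```

Obviously a real attack targets a deep net and uses a much smaller per-pixel budget, but the mechanism is the same: follow the gradient of the loss with respect to the input, not the weights.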
Anyway, here's my brilliant resource link: the Computer Vision intro course from Stanford. A lot of the material from this chapter is covered visually there, which is nice.
I am in!