A fascinating first pass at the history of deep learning. It's interesting for me to think about different ways to read contemporary deep learning systems in the context of this history (in the same way that the Internet acquires a dark undertone in the context of its military history). What do DL's roots in neuroscience tell us about the way it works in industry today? For me this is important to think about in tandem with DL's misrecognition in the media as an algorithmic process that works the same way the brain does--a perspective that Goodfellow's introduction here does a great job of debunking with historical and technical specificity.
I like the way that Goodfellow's definition of DL in this chapter works to open the category beyond a reduction to just neural nets. I am new to the field, and so I am curious to know: what do non-neural-net DL architectures look like? Are they comparably successful to neural nets? What kinds of problems are they good at solving in contrast to neural nets? Or is 'neural net' just another word for a DL architecture in general?
Count me in, thanks!