Here are your tasks for this week.

This week we'll be going over Sequence Modeling (RNNs). RNNs are one of the coolest concepts to emerge from deep learning, and they're the workhorse behind many language translation and language modeling applications.

Before diving straight into this week's chapter, I recommend first reading The Unreasonable Effectiveness of Recurrent Neural Networks, by Andrej Karpathy, and then Understanding LSTM Networks, by Christopher Olah.
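If you'd like a concrete picture before reading, the core recurrence both posts describe can be sketched in a few lines of NumPy. This is just a toy forward pass; the weight names and shapes here are my own choices for illustration, not taken from the book:

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Toy vanilla-RNN forward pass over a sequence.

    xs: array of shape (T, input_dim) -- one input vector per time step.
    Returns the hidden state at every step, shape (T, hidden_dim).
    """
    h = np.zeros(W_hh.shape[0])  # initial hidden state h_0 = 0
    hs = []
    for x in xs:
        # Core recurrence: the new state mixes the current input
        # with the previous hidden state, squashed by tanh.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        hs.append(h)
    return np.stack(hs)

# Hypothetical sizes, just to run the sketch end to end.
rng = np.random.default_rng(0)
T, input_dim, hidden_dim = 5, 3, 4
xs = rng.normal(size=(T, input_dim))
W_xh = rng.normal(size=(hidden_dim, input_dim)) * 0.1
W_hh = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
b_h = np.zeros(hidden_dim)

hs = rnn_forward(xs, W_xh, W_hh, b_h)
print(hs.shape)  # one hidden state per time step
```

The key point to notice: the same weights `W_xh` and `W_hh` are reused at every time step, which is what lets an RNN handle sequences of any length.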

  • Read pages 373 - 420 of the Deep Learning book.
  • Leave at least one comment below, e.g. a question about something you don’t understand or a link to a brilliant resource. It must somehow relate to pages 373 - 420.
  • Use your extra time to help answer the questions others leave 🙏; this can go a long way toward improving your own understanding of many concepts, too.
  • As always, if you’re feeling lost mention it in the comments and ask for help!

Enjoy 🍻, see you next week.