bdhammel
https://bdhammel.github.io

Recent History

I've got a question on problem 1.2

For the regularized MSE, the form analogous to (1.122) that I get is:

$$
A_{ij}= \sum_{n=1}^N \left ( \lambda+(x_n)^{i+j} \right )
$$

where, as in (1.122),

$$
T_i=\sum_{j=0}^M A_{ij}w_j
$$

To me, this suggests that regularization in the loss function is akin to adding a constant offset in \(x\); but, intuitively, that doesn't make any sense...

Could someone shed some light on this, or point out the flaw in my interpretation?
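In case it's useful, here's the quick numerical check I have in mind (a NumPy sketch with made-up data; `N`, `M`, and `lam` are arbitrary, and `w_direct` uses the standard closed-form minimizer of the regularized sum-of-squares error): solve the regularized problem directly, then solve the candidate system above with \(T_i = \sum_n (x_n)^i t_n\) as in (1.122), and compare.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, lam = 20, 3, 0.1          # arbitrary toy values
x = rng.uniform(-1.0, 1.0, N)
t = np.sin(np.pi * x) + 0.1 * rng.normal(size=N)

# Design matrix Phi[n, j] = (x_n)^j for j = 0..M
Phi = np.vander(x, M + 1, increasing=True)

# Standard minimizer of the regularized sum-of-squares error:
#   (Phi^T Phi + lam * I) w = Phi^T t
w_direct = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M + 1), Phi.T @ t)

# Candidate system from above: A_ij = sum_n (lam + (x_n)^(i+j)),
# with T_i = sum_n (x_n)^i t_n
i, j = np.indices((M + 1, M + 1))
A = np.sum(lam + x[:, None, None] ** (i + j), axis=0)
T = Phi.T @ t

w_candidate = np.linalg.solve(A, T)
print(np.allclose(w_direct, w_candidate))  # floats, so compare with a tolerance
```

If the two solutions disagree, that would localize the flaw to the candidate \(A_{ij}\) rather than to \(T_i\) or the rest of the setup.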

Hi! Did a list of exercises ever get finalized?

Two points from the chapter that I found interesting. If anyone has good links to supplemental material on these topics, I'd appreciate seeing them!

  1. Further discussion of deep probabilistic models, particularly the depth of the conditional dependencies among the nodes.
  2. Any survey or review of work trying neurons with "greater neural realism." I'm curious where the failures are: do they perform worse, or is it just that the added complexity makes it not worthwhile?

Sounds great! How will the logistics work?
