Black Swans

rschaefer

6 points

rschaefer hasn't added a bio yet
Recent History

I'm not sure. I'm an NLP person, so word embeddings are natural to me. This article may help: https://www.sciencedirect.com/science/article/pii/S2405918816300459

If I come across another example I'll let you know.

Isn't learning word vectors an example of this?

In NLP tasks you usually need to create vectors first, i.e. convert language into a numerical representation (e.g. word2vec, doc2vec, etc.). These models use neural networks, but often they have only one hidden layer, so many wouldn't classify them as 'deep' learning.

These vectors can then be plugged into machine learning algorithms (e.g. k-means) to cluster the word representations or do other cool stuff, as in the sketch below.
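For concreteness, here's a minimal sketch of that pipeline in Python. The toy corpus, vector size, and cluster count are all made up for illustration, and it assumes gensim and scikit-learn are installed:

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Tiny toy corpus (an assumption for illustration; real corpora are much larger)
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "lay", "on", "the", "rug"],
    ["stocks", "fell", "as", "markets", "closed"],
    ["investors", "sold", "shares", "in", "the", "market"],
]

# word2vec: a shallow network whose single hidden layer has vector_size units
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=42)

words = list(model.wv.index_to_key)
vectors = model.wv[words]  # one 50-dimensional vector per word

# Plug the word representations into k-means, as described above
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(vectors)
for word, label in zip(words, kmeans.labels_):
    print(word, label)
```

On a real corpus the clusters tend to group semantically related words; on a corpus this small the assignments are mostly noise.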

The research area on word embeddings (i.e. word vectors) is actually quite large, and word2vec is maybe the most prominent model.

Hi guys,

I found the distinction between machine learning vs. representation learning vs. deep learning quite interesting. It showed me that the project I'm currently working on is actually a fancier kind of machine learning but not deep learning. =)

What I don't really get, although I've heard about it a few times, is the description of hidden layers as capturing concrete aspects of the input data: 1st layer = edges, 2nd layer = contours, etc.

I don't understand how this works. Is it really this clear-cut? As far as I can see, this is just another aspect of deep learning being a black box that nobody really understands, right?
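One way I've seen people probe this is to look at the learned first-layer filters of a trained CNN, which often resemble edge and color detectors; deeper layers are much harder to read off directly. A minimal sketch, assuming PyTorch/torchvision with downloadable ImageNet weights (resnet18 is just an arbitrary pretrained example):

```python
import torchvision

# Load a CNN pretrained on ImageNet (assumption: weights can be downloaded)
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# First conv layer weights: 64 filters of shape 3x7x7
filters = model.conv1.weight.detach().clone()

# Normalize each filter independently to [0, 1] so it can be viewed as an image
fmin = filters.amin(dim=(1, 2, 3), keepdim=True)
fmax = filters.amax(dim=(1, 2, 3), keepdim=True)
filters = (filters - fmin) / (fmax - fmin)

# Save an 8x8 grid of the filters; many look like oriented edge/color detectors
grid = torchvision.utils.make_grid(filters, nrow=8, padding=1)
torchvision.utils.save_image(grid, "first_layer_filters.png")
```

That said, this only goes so far: for layers beyond the first, the "contours, object parts" story comes from feature-visualization research rather than anything you can read directly off the weights, so it's a useful intuition rather than a clear-cut fact.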

Best,

Robin

Sounds great, I would join, too.
