Hi, smart people! This article is titled Lecture 9: Neural Networks, Word Embeddings and Deep Learning; I hope you find it useful.
Topics covered in this lecture:

- Motivation
- The perceptron: input encoding, sum and activation functions, objective function (see the sketch after this list)
- Linearity of the perceptron
- Neural networks, training, and backpropagation
- Connection to maximum entropy
- Connection to language; vector representations
- A neural network for the bigram language model
- Word2vec: CBOW and skip-gram
- Word embeddings
- Deep learning
- Language modeling with neural networks
- The big picture
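To make the first items concrete, here is a minimal sketch of a perceptron in NumPy: a weighted sum of the inputs plus a bias, followed by a step activation, trained with the classic perceptron update rule on a toy linearly separable problem (logical AND). This is my own illustration of the idea, not code from the lecture; the data, learning rate, and epoch count are all illustrative.

```python
import numpy as np

def step(z):
    """Step activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

def predict(w, b, x):
    """Sum function (dot product plus bias) followed by the activation."""
    return step(np.dot(w, x) + b)

# Toy data: the logical AND function, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(10):
    for x_i, y_i in zip(X, y):
        error = y_i - predict(w, b, x_i)
        # Perceptron rule: nudge the weights toward misclassified examples.
        w += lr * error * x_i
        b += lr * error

print([predict(w, b, x_i) for x_i in X])  # -> [0, 0, 0, 1]
```

Because the decision is a threshold on a linear function of the inputs, a single perceptron can only separate classes with a hyperplane, which is the linearity limitation the lecture raises as motivation for multi-layer networks.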
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpkPIQM80-QMqx1aNCr87Fg1z_Et2i3pbXUV2TiMk1OOHnscEC1GDZy1cmHNy_e100oASQAPnb0w7dw4Qw500DfO9uNPASez0Um2x-3npQAtrXAwEn5zmk4xDMJy1D6ZyJzoCKgaG37B8/s640/tikz11.png)
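The bigram neural language model mentioned above can be sketched roughly as follows: a one-hot previous word selects a row of an embedding matrix, a linear layer maps the embedding to vocabulary scores, and a softmax turns scores into next-word probabilities, trained with cross-entropy. Again a minimal NumPy illustration of my own, not the lecture's code; the toy corpus, dimensions, and hyperparameters are assumptions.

```python
import numpy as np

np.random.seed(0)

# Hypothetical toy corpus; in practice the bigram pairs come from real text.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8              # vocabulary size, embedding dimension

E = np.random.randn(V, D) * 0.1   # input embeddings (one row per word)
W = np.random.randn(D, V) * 0.1   # output projection to vocabulary scores

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]
lr = 0.5

for epoch in range(200):
    for prev, nxt in pairs:
        h = E[prev]                # embedding lookup = one-hot times E
        p = softmax(h @ W)         # predicted distribution over next word
        # Cross-entropy gradient w.r.t. the logits: p - one_hot(nxt).
        dlogits = p.copy()
        dlogits[nxt] -= 1.0
        W -= lr * np.outer(h, dlogits)
        E[prev] -= lr * (W @ dlogits)

# Probability mass concentrates on words actually seen after "the".
probs = softmax(E[idx["the"]] @ W)
top = np.argsort(probs)[::-1][:3]
print([(vocab[i], round(float(probs[i]), 2)) for i in top])
```

The rows of E trained this way are exactly the word vectors the lecture calls embeddings; word2vec's CBOW and skip-gram objectives are cheaper variants of the same idea, predicting a word from its context or the context from a word.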
Thanks for reading Lecture 9: Neural Networks, Word Embeddings and Deep Learning. Original post: https://nlpapplicative.blogspot.com/2016/05/lecture-9-neural-networks-word.html