Deep Learning for NLP – Part 3


Price: $1299

This course is part of the Deep Learning for NLP Series. In this course, I will introduce sentence embeddings and generative Transformer models, two concepts that form the basis for a solid understanding of advanced deep learning models for modern natural language generation. The course consists of two main sections.

In the first section, I will talk about sentence embeddings. We will start with basic bag-of-words methods, where a sentence embedding is obtained by aggregating the word embeddings of its constituent words (a minimal sketch of the averaged variant appears after this overview). We will cover averaged bag of words, Word Mover's Distance, SIF, and the power-means method. Then we will discuss two unsupervised methods, Doc2Vec and Skip-Thought, followed by supervised sentence embedding methods such as recursive neural networks, deep averaging networks, and InferSent. CNNs can also be used to compute the semantic similarity between two text strings; we will discuss DSSMs for this purpose. We will also cover three multi-task learning methods, including the Universal Sentence Encoder and MT-DNN. Lastly, I will talk about Sentence-BERT.

In the second section, I will talk about several generative Transformer models. We will start with UniLM. Then we will cover segment recurrence and relative position embeddings in Transformer-XL, and move on to XLNet, which combines Transformer-XL with permutation language modeling. Next we will look at span masking in MASS (sketched in code below) and the various noising methods in BART. We will then discuss controlled natural language generation with CTRL, and how T5 frames every learning task as a text-to-text task (see the last example below). Finally, we will discuss how ProphetNet extends the 2-stream attention of XLNet to n-stream attention, thereby enabling n-gram prediction.
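To make the averaged bag-of-words idea concrete, here is a minimal sketch. The toy word_vectors table and the average_bow_embedding helper are illustrative stand-ins; real embeddings would come from a pretrained lookup such as GloVe or word2vec.

```python
import numpy as np

# Toy word vectors standing in for a pretrained embedding table
# (e.g. GloVe); a real table would have tens of thousands of rows.
word_vectors = {
    "the":   np.array([0.1, 0.3, -0.2]),
    "movie": np.array([0.7, -0.1, 0.4]),
    "was":   np.array([0.0, 0.2, 0.1]),
    "great": np.array([0.9, 0.5, -0.3]),
}

def average_bow_embedding(sentence, vectors):
    """Averaged bag of words: the sentence embedding is the mean of the
    word embeddings of the constituent words (unknown words are skipped)."""
    tokens = sentence.lower().split()
    vecs = [vectors[t] for t in tokens if t in vectors]
    if not vecs:  # no known words: fall back to a zero vector
        return np.zeros_like(next(iter(vectors.values())))
    return np.mean(vecs, axis=0)

print(average_bow_embedding("The movie was great", word_vectors))
```

Whatever the sentence length, the result is one fixed-size vector, which is what lets two sentences be compared with, say, cosine similarity.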
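The span masking covered in the second section can also be sketched in a few lines. This is a simplified toy, not the exact procedure from the MASS paper (the mass_style_span_mask helper, the 50% span fraction, and the fixed seed are illustrative choices); it shows the key idea that the encoder sees a sentence with one contiguous masked span and the decoder is trained to reconstruct exactly that span.

```python
import random

MASK = "[MASK]"

def mass_style_span_mask(tokens, span_frac=0.5, seed=0):
    """Mask one contiguous span covering roughly span_frac of the tokens;
    return the masked encoder input and the span the decoder must predict."""
    rng = random.Random(seed)
    span_len = max(1, int(len(tokens) * span_frac))
    start = rng.randint(0, len(tokens) - span_len)
    masked = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    return masked, tokens[start:start + span_len]

tokens = "natural language generation needs strong pretraining".split()
enc_in, dec_target = mass_style_span_mask(tokens)
print(enc_in)      # encoder input with one contiguous masked span
print(dec_target)  # decoder target: the original tokens of that span
```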
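Finally, T5's text-to-text framing needs no model code to illustrate: every task, whether classification, translation, or summarization, is reduced to mapping one string to another, with a task prefix telling the model what to do. The first two prefixes below appear in the T5 paper; the summarization strings are placeholders.

```python
# Each task becomes a (source text, target text) pair; the prefix names the task.
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("cola sentence: The course book boy.", "unacceptable"),
    ("summarize: <long article text>", "<short summary>"),
]
for source, target in examples:
    print(f"{source!r} -> {target!r}")
```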
