Deep Learning for NLP – Part 8


Price: $1299

More and more evidence demonstrates that graph representation learning, especially graph neural networks (GNNs), has greatly facilitated computational tasks on graphs, including both node-focused and graph-focused tasks. The revolutionary advances brought by GNNs have also contributed immensely to the depth and breadth of the adoption of graph representation learning in real-world applications. In classical application domains of graph representation learning, such as recommender systems and social network analysis, GNNs deliver state-of-the-art performance and push these fields to new frontiers. Meanwhile, new application domains of GNNs keep emerging, such as combinatorial optimization, physics, and healthcare. These wide-ranging applications of GNNs draw contributions and perspectives from disparate disciplines and make this research field truly interdisciplinary.

In this course, I will start with basic graph data representation and concepts such as node attributes, edge types, the adjacency matrix, and the Laplacian matrix. Next, we will survey the broad kinds of graph learning tasks and discuss the two basic operations needed in a GNN: filtering and pooling. We will then cover different types of graph filtering (i.e., neighborhood aggregation) methods, including graph convolutional networks, graph attention networks, Confidence GCNs, Syntactic GCNs, and the general message passing neural network framework. After that, we will turn to the three main types of graph pooling methods: topology-based pooling, global pooling, and hierarchical pooling, and discuss popular methods within each type. In topology-based pooling, we will mainly cover Normalized Cut and Graclus; in global pooling, Set2Set and SortPool; and in hierarchical pooling, DiffPool, gPool, and SAGPool.
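To give a concrete feel for the filtering (neighborhood aggregation) operation mentioned above, here is a minimal NumPy sketch of one GCN-style layer, H' = ReLU(D^(-1/2) (A + I) D^(-1/2) H W). The function name, toy graph, and weights are illustrative assumptions, not course material:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style neighborhood-aggregation step (a sketch):
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1)                 # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Toy path graph 0-1-2 with one-hot node features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)                  # node feature matrix (3 nodes, 3 features)
W = np.full((3, 2), 0.5)       # toy weight matrix mapping 3 -> 2 dims
out = gcn_layer(A, H, W)       # shape (3, 2)
```

Each node's new representation mixes its own features with its neighbors', weighted by the normalized adjacency; stacking such layers lets information propagate over longer paths in the graph.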
Next, we will cover three unsupervised graph neural network architectures: GraphSAGE, graph auto-encoders, and Deep Graph Infomax. Lastly, we will discuss applications of GNNs in NLP, including semantic role labeling, event detection, multiple event extraction, neural machine translation, document timestamping, and relation extraction.
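As a taste of how a GraphSAGE-style layer aggregates neighborhoods, here is a hedged NumPy sketch of its mean aggregator. This is a simplification of my own (the full method also samples neighbors, normalizes embeddings, and trains with an unsupervised loss), with illustrative names and toy data:

```python
import numpy as np

def sage_mean_layer(A, H, W_self, W_neigh):
    """GraphSAGE-style mean aggregation (simplified sketch):
    h_v' = ReLU(W_self^T h_v + W_neigh^T mean_{u in N(v)} h_u)."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                  # guard against isolated nodes
    neigh_mean = (A @ H) / deg           # mean of each node's neighbor features
    return np.maximum(H @ W_self + neigh_mean @ W_neigh, 0.0)

# Same toy path graph 0-1-2 as a usage example
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)                  # one-hot node features
W_self = np.full((3, 2), 0.5)  # toy self-transform weights
W_neigh = np.full((3, 2), 0.5) # toy neighbor-transform weights
out = sage_mean_layer(A, H, W_self, W_neigh)  # shape (3, 2)
```

Unlike the spectral-flavored GCN normalization, this aggregator treats self and neighbor information with separate weight matrices, which is what lets GraphSAGE generalize inductively to unseen nodes.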
