NLP Highlights
51 - A Regularized Framework for Sparse and Structured Neural Attention, with Vlad Niculae
- Author: Various
- Narrator: Various
- Publisher: Podcast
- Duration: 0:16:36
Synopsis
NIPS 2017 paper by Vlad Niculae and Mathieu Blondel. Vlad comes on to tell us about his paper. Attention weights are often computed in neural networks with a softmax operator, which maps a model's scalar scores into a probability distribution over latent variables. There are many cases where this is not ideal, however, such as when you want to encourage sparse attention over your inputs, or when you have additional structural biases that could inform the model. Vlad and Mathieu have developed a theoretical framework for analyzing the options in this space, and in this episode we talk about that framework, some concrete attention mechanisms instantiated from it, and how well they work.
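Not from the episode, but for context: a minimal NumPy sketch contrasting the standard softmax with sparsemax (Martins & Astudillo, 2016), one of the sparse attention mappings that the regularized framework discussed in the paper recovers as a special case. The function names and example scores are illustrative only.

```python
import numpy as np

def softmax(scores):
    """Standard softmax: every input gets nonzero attention weight."""
    z = scores - scores.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sparsemax(scores):
    """Sparsemax: Euclidean projection onto the simplex.
    Low-scoring inputs can receive exactly zero attention weight."""
    z_sorted = np.sort(scores)[::-1]   # sort scores in descending order
    k = np.arange(1, len(scores) + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum  # indices kept in the support
    k_max = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_max  # threshold subtracted from scores
    return np.maximum(scores - tau, 0)

# Hypothetical attention scores over four inputs
scores = np.array([2.0, 1.0, 0.1, -1.0])
print(softmax(scores))    # all entries strictly positive
print(sparsemax(scores))  # trailing entries are exactly zero
```

Both mappings produce a valid probability distribution; the difference is that sparsemax truncates low scores to exactly zero, which is the kind of sparse attention behavior the episode discusses.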