DeepMind x UCL | Deep Learning Lectures | 7/12 | Deep Learning for Natural Language Processing
This lecture, by DeepMind Research Scientist Felix Hill, is split into three parts. First, he discusses the motivation for modelling language with ANNs: language is highly contextual, typically non-compositional, and relies on reconciling many competing sources of information. This section also covers Elman's Finding Structure in Time, simple recurrent networks, the importance of context, and transformers. In the second part, he explores unsupervised and representation learning for language, from Word2Vec to BERT. Finally, Felix discusses situated language understanding, grounding, and embodied language learning.
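For readers unfamiliar with Elman's simple recurrent network mentioned above, here is a minimal NumPy sketch of its forward pass on a next-word prediction task. All sizes, weight names, and the toy input are illustrative assumptions for this sketch, not details taken from the lecture.

```python
import numpy as np

# Minimal sketch of an Elman-style simple recurrent network (forward pass only).
# Assumes one-hot word inputs and a next-word prediction objective; all sizes
# and names here are illustrative, not from the lecture.
rng = np.random.default_rng(0)
vocab_size, hidden_size = 10, 8

W_xh = rng.normal(0, 0.1, (hidden_size, vocab_size))   # input -> hidden
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # previous hidden ("context") -> hidden
W_hy = rng.normal(0, 0.1, (vocab_size, hidden_size))   # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(token_ids):
    """Return a next-word distribution for each timestep. The hidden state
    carries information from earlier words forward, which is the core of
    Elman's 'finding structure in time' idea."""
    h = np.zeros(hidden_size)
    outputs = []
    for t in token_ids:
        x = np.zeros(vocab_size)
        x[t] = 1.0
        h = np.tanh(W_xh @ x + W_hh @ h)   # new state mixes current input with context
        outputs.append(softmax(W_hy @ h))  # distribution over the next word
    return outputs

# Example: a toy 4-word "sentence" as integer token ids.
preds = forward([1, 4, 2, 7])
print(preds[-1])  # predicted distribution over the word following the sequence
```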
Download the slides here:
https://storage.googleapis.com/deepmind-media/UCLxDeepMind_2020/L7%20-%20UCLxDeepMind%20DL2020.pdf
Find out more about how DeepMind increases access to science here:
https://deepmind.com/about#access_to_science
Speaker Bio:
Felix Hill is a Research Scientist working on grounded language understanding, and has been at DeepMind for almost 4 years. He studied pure maths as an undergrad, then became very interested in linguistics and psychology after reading the PDP books by McClelland and Rumelhart, so he started graduate school at the University of Cambridge and ended up in the NLP group. To satisfy his interest in artificial neural networks, he visited Yoshua Bengio's lab in 2013 and began a series of collaborations with Kyunghyun Cho and Yoshua, applying neural nets to text processing. This led to some of the first work on transfer learning with sentence representations (and a neural crossword solver). He also interned at FAIR in NYC with Jason Weston. At DeepMind, he has worked on developing agents that can understand language in the context of interactive 3D worlds, as well as on problems relating to mathematical and analogical reasoning.
About the lecture series:
The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved into the leading artificial intelligence paradigm, giving us the ability to learn complex functions from raw data with unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications touch all of our lives, in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.
In this lecture series, research scientists from DeepMind, a leading AI research lab, deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks, through advanced ideas around memory, attention, and generative modelling, to the important topic of responsible innovation.
DeepMind
Artificial intelligence could be one of humanity's most useful inventions. DeepMind aims to build advanced AI to expand our knowledge and find new answers. By solving this one thing, we believe we could help people solve thousands of problems.