Accelerate embedding lookup operation (Building recommendation systems with TensorFlow)
Learn how to leverage TPU embeddings with SparseCore to accelerate the embedding lookup operation. SparseCore is specialized hardware built into Google's latest TPUs. Wei Wei, a Developer Advocate at Google, covers how to speed up embedding lookups over the large embedding tables used in recommendation models.
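For a concrete starting point, here is a minimal sketch of the TF2 TPUEmbeddingLayer API covered in the quick start linked under Resources; the table size, embedding dimension, feature name, and optimizer settings are illustrative placeholders, not values from the video.

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

# Describe one large embedding table (e.g. item IDs) and the feature that looks it up.
# Vocabulary size and dimension below are placeholders.
item_table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=1_000_000,
    dim=64,
    name="item_table")

feature_config = {
    "item_id": tf.tpu.experimental.embedding.FeatureConfig(
        table=item_table, name="item_id"),
}

# Embedding-specific optimizer; updates are applied alongside the lookups.
optimizer = tf.tpu.experimental.embedding.Adagrad(learning_rate=0.1)

# Connect to a TPU and build the layer under TPUStrategy so the tables are
# sharded across the TPU's HBM (and, on supported TPUs, served by SparseCore)
# rather than replicated on every core.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    embedding_layer = tfrs.layers.embedding.TPUEmbedding(
        feature_config=feature_config, optimizer=optimizer)
```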
Chapters:
0:00 - Introduction
0:35 - How retrieval works in large scale recommendation systems
1:55 - SparseCore
2:47 - How to use TPU embeddings
5:18 - Resources
Resources:
TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings → https://goo.gle/3VsMMiT
TensorFlow 2 TPUEmbeddingLayer: Quick Start → https://goo.gle/4clKs3d
Building Large Scale Recommenders using Cloud TPUs → https://goo.gle/3V56kcN
TPU Embedding tech talk at Recommendation Systems Dev Summit → https://goo.gle/3wOJNqE
Watch more Building recommendation systems with TensorFlow → https://goo.gle/3Bi8NUS
Subscribe to the TensorFlow channel → https://goo.gle/TensorFlow
#TensorFlow
Speaker: Wei Wei
TensorFlow
Welcome to the official TensorFlow YouTube channel. Stay up to date with the latest TensorFlow news, tutorials, best practices, and more! TensorFlow is an open-source machine learning framework for everyone.