Easy and affordable access to GPUs for AI/ML workloads
The growth of AI/ML training, fine-tuning, and inference workloads has created exponential demand for GPU capacity, making accelerators a scarce resource. Join Debi Cabrera as she chats with Google Product Managers Laura Ionita and Ari Liberman about how Dynamic Workload Scheduler (DWS) works, Compute Engine consumption models, and more. Watch along and learn how to get started today!
Chapters:
0:00 - Meet Laura and Ari
1:04 - What is Dynamic Workload Scheduler?
3:21 - Which workloads work with Dynamic Workload Scheduler?
4:59 - How to choose between Compute Engine models
6:42 - Combining different Compute Engine models
8:32 - Real world examples
10:37 - Get started with Dynamic Workload Scheduler
11:20 - Wrap up
Resources:
Watch the full session here → https://goo.gle/49K98Qi
Introducing Dynamic Workload Scheduler → https://goo.gle/3Jn3oB0
Watch more Cloud Next 2024 → https://goo.gle/Next-24
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech
#GoogleCloudNext #GoogleGemini
Event: Google Cloud Next 2024
Speakers: Debi Cabrera, Laura Ionita, Ari Liberman
Products Mentioned: Google Compute Engine, Dynamic Workload Scheduler
Google Cloud Tech
Helping you build what's next with secure infrastructure, developer tools, APIs, data analytics and machine learning.