Data security with AI-powered agents
Google’s Secure AI Framework (SAIF) → https://goo.gle/4cmX60X
Best practices for securing AI deployments → https://goo.gle/3zrZRzE
Build AI securely on Google Cloud (Full report) → https://goo.gle/3XBvy3P
Large language models can speed up coding, operations tasks, pattern matching, and more in developer workflows. However, exposing sensitive data to these models is top of mind for organization decision-makers and developers alike. Watch along as Luis Urena, Developer Advocate at Google Cloud, discusses best practices for securing AI deployments, shows how to build an input and output parser using Sensitive Data Protection, and interviews Cloud Security Architect Jim Miller about his experience helping Google Cloud customers deploy AI.
Chapters:
0:00 - Intro
1:04 - Google’s commitment to AI security
2:26 - What is Sensitive Data Protection?
4:32 - Demo: Large Language Models in action
8:44 - Interview with Jim Miller
11:19 - Wrap up
Build your own input and output parser using Sensitive Data Protection (GitHub) → https://goo.gle/3VJgwHR
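For a sense of how such a parser works, here is a minimal sketch, assuming the google-cloud-dlp Python client and a placeholder project ID. It is not the code from the linked repo, just the core de-identification call you would run on prompts before they reach the model and on responses before they reach the user:

from google.cloud import dlp_v2

# Sensitive Data Protection (DLP API) client.
dlp = dlp_v2.DlpServiceClient()
parent = "projects/your-project-id/locations/global"  # placeholder project ID

# Detect common identifiers and replace each match with its info type name,
# e.g. "jane@example.com" becomes "[EMAIL_ADDRESS]".
inspect_config = {
    "info_types": [
        {"name": "PERSON_NAME"},
        {"name": "EMAIL_ADDRESS"},
        {"name": "PHONE_NUMBER"},
    ]
}
deidentify_config = {
    "info_type_transformations": {
        "transformations": [
            {"primitive_transformation": {"replace_with_info_type_config": {}}}
        ]
    }
}

def redact(text: str) -> str:
    """Return text with detected sensitive values masked."""
    response = dlp.deidentify_content(
        request={
            "parent": parent,
            "inspect_config": inspect_config,
            "deidentify_config": deidentify_config,
            "item": {"value": text},
        }
    )
    return response.item.value

# Apply the same redaction to the prompt before it reaches the LLM
# and to the model's response before it reaches the user.
safe_prompt = redact("Email jane.doe@example.com about the renewal.")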
Watch more Making with AI → https://goo.gle/MakewithAI
Subscribe to Google Cloud Tech → https://goo.gle/GoogleCloudTech
#MakingwithAI #GoogleCloud
Speakers: Luis Urena, Jim Miller
Products Mentioned: Cloud - AI and Machine Learning - Vertex AI