Is AI Code a Security Crisis?
Big thanks to @radware for sponsoring this video!
Full report here: https://www.radware.com/blog/threat-intelligence/synthetic-vulnerabilities/
Recent studies show a rapid rise in AI-assisted development: in 2024–2025, between 25% and 35% of newly written code in large organizations was already influenced or partially generated by LLMs. While this is far from a majority, the trend is accelerating quickly, and with it the security implications of AI-generated code are becoming unavoidable.
What was meant to be the ultimate productivity revolution has introduced a problematic side effect: synthetic vulnerabilities, weaknesses that appear only in AI-generated code.
The latest large-scale study, analyzing over half a million code samples, found that LLMs:
* Produce significantly more high-severity vulnerabilities
* Recreate insecure coding patterns that do not exist in human-written software
* “Hallucinate” abstractions and mini-frameworks that no known library or tool uses
The result is an entirely new attack surface that traditional tools such as Static Application Security Testing (SAST) are not equipped to detect.
#ai #hack #hacker
David Bombal
Want to learn about IT? Want to get ahead in your career? Well, this is the right place! On this channel, I discuss Linux, Python, Ethical Hacking, Networking, CCNA, Virtualization and other IT-related topics. This YouTube channel has new videos upload...