So, a security researcher tried throwing 3 advanced security vulnerabilities at 6 different AI models.
This was pretty interesting, and it made me realize again that "AI isn't omnipotent."
First, what are "vulnerabilities"?
Simply put, vulnerabilities are weak spots in systems—holes that are easy to exploit. Bad actors use these holes to steal information or break systems, which is a huge problem.
In this experiment, they tested how AI models react to 3 difficult vulnerabilities.
What were the 6 AI models?
Roughly speaking, the lineup included famous ones like OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing Chat. Since different AIs have different strengths and usage styles, the results varied.
The 3 Vulnerability Challenges
- Code Injection: an attack that tries to make the AI behave strangely by feeding it malicious code
- Cross-Site Scripting (XSS): embedding bad scripts into web pages
- SQL Injection: an attack that sends crafted commands to a database to extract information
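To make the last two items concrete, here is a minimal sketch of my own (not code from the experiment) showing the classic SQL injection trick and the standard defenses, using only Python's built-in `sqlite3` and `html` modules:

```python
import html
import sqlite3

# Throwaway in-memory database with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection payload: the quote breaks out of the string
# literal, and the OR clause makes the WHERE match every row.
payload = "' OR '1'='1"

# Vulnerable: user input is pasted directly into the SQL text,
# so the database executes the attacker's OR clause.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()
print(unsafe)  # [('s3cret',)] -- the secret leaks

# Safe: a parameterized query treats the payload as plain data,
# so it is compared literally against the name column.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
print(safe)  # [] -- no rows match

# For XSS, the analogous defense is escaping user input before
# putting it into a page, so the browser renders it as text.
print(html.escape("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```

The common thread is the same one the experiment probes: whether the code keeps untrusted input strictly as data, or lets it leak into code (SQL, HTML) that gets executed.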
How did the AI models respond?
The bottom line is that none of the AIs defended perfectly.
- Some models cleverly detected the tricks and ignored them, or warned about the dangerous parts
- But sometimes, the AIs were tricked and output potentially dangerous code
It seems AIs still struggle to identify "malicious setups." Plus, since each AI responded differently, you can't say "this AI is definitely safe!"
Some thoughts
What struck me from this experiment is that even as AI becomes more convenient, security is never a solved problem. In other words, it still seems risky to leave everything to AI.
So it's important for AI users to stay aware that "AI doesn't always give the right answer."
Summary
- Security vulnerabilities are surprisingly complex, and AI can't easily see through them
- Comparing 6 AI models shows varied responses, so you can't let your guard down
- When using AI, it's safest to also check the output with human eyes
While it's easy to get swept up in AI's "omnipotent" feeling, knowing about experiments like these makes you think, "there are still interesting challenges ahead."
If we're going to keep working with AI, it's smart to gradually build knowledge for safe usage.