ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues
Source
Published
TL;DR
AI GeneratedResearchers have discovered a new data-pilfering attack on AI chatbot ChatGPT, highlighting a recurring pattern in AI development where vulnerabilities are exploited, guardrails are introduced, and new attacks are devised. The vulnerability allowed for the exfiltration of user data from ChatGPT servers without detection on user machines, posing a significant security risk. This attack, dubbed ZombieAgent, is reminiscent of a previous vulnerability called ShadowLeak, which was mitigated by OpenAI but later revived by researchers. The reactive nature of guardrails in AI systems leaves them vulnerable to evolving attack techniques, emphasizing the need for more comprehensive security measures.