
ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

Source: Ars Technica

TL;DR (AI generated)

Researchers have devised a new data-pilfering attack against ChatGPT, the latest turn in a recurring cycle in AI development: a vulnerability is exploited, guardrails are introduced, and a new attack circumvents them. The flaw allowed user data to be exfiltrated directly from ChatGPT's servers, leaving no trace on users' machines and posing a significant security risk. The attack, dubbed ZombieAgent, is reminiscent of an earlier vulnerability called ShadowLeak, which OpenAI mitigated but researchers later revived. Because guardrails are reactive by nature, patched AI systems remain exposed to evolving attack techniques, underscoring the need for more comprehensive security measures.