Critics scoff after Microsoft warns AI feature can infect machines and pilfer data
Source
Published
TL;DR
AI GeneratedMicrosoft's introduction of Copilot Actions, an experimental AI feature in Windows designed to assist with tasks like organizing files and scheduling meetings, has raised concerns about potential security risks. The AI agent could potentially infect devices and compromise sensitive user data, prompting critics to question the rush to implement new features without fully understanding their implications. Known defects in large language models, like Copilot, can lead to inaccurate and illogical responses, making it necessary for users to independently verify the AI's output. Additionally, a vulnerability known as prompt injection could allow hackers to exploit AI assistants by inserting malicious instructions in content that the AI may unwittingly follow.