We use cookies

We use cookies to ensure you get the best experience on our website. For more information on how we use cookies, please see our cookie policy.

Back to home

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data

Source

Ars Technica

Published

TL;DR

AI Generated

Microsoft's introduction of Copilot Actions, an experimental AI feature in Windows designed to assist with tasks like organizing files and scheduling meetings, has raised concerns about potential security risks. The AI agent could potentially infect devices and compromise sensitive user data, prompting critics to question the rush to implement new features without fully understanding their implications. Known defects in large language models, like Copilot, can lead to inaccurate and illogical responses, making it necessary for users to independently verify the AI's output. Additionally, a vulnerability known as prompt injection could allow hackers to exploit AI assistants by inserting malicious instructions in content that the AI may unwittingly follow.