
Google DeepMind wants to know if chatbots are just virtue signaling

Source

MIT Technology Review

TL;DR (AI Generated)
Google DeepMind is studying the moral behavior of large language models (LLMs) to determine whether they can be trusted in roles such as companion or therapist. Although LLMs have demonstrated moral competence, their reliability is in question: they can change their answers in response to user feedback or to superficial changes in how a question is formatted. The researchers propose more rigorous evaluations of LLMs' moral reasoning, including challenging models with variations of the same moral problem. They also acknowledge the difficulty of designing models that accommodate the diverse values and belief systems of users worldwide. Understanding and improving the moral competence of LLMs is framed as essential to building AI systems aligned with societal values.