
OpenAI wants to stop ChatGPT from validating users’ political views

Source: Ars Technica

Published

TL;DR (AI Generated)

OpenAI aims to reduce political bias in ChatGPT so the model behaves more objectively. The company's research focuses on preventing the AI from expressing personal political opinions, amplifying a user's emotional language, and giving one-sided coverage of contentious topics. The approach trains ChatGPT to act as a neutral information tool rather than an opinionated conversation partner. OpenAI's evaluation axes measure behaviors such as personal political expression, user escalation, asymmetric coverage, user invalidation, and political refusals, rather than assessing whether the model's information is accurate or unbiased. The company frames the effort as aligned with truth-seeking principles, but in practice the adjustments target the model's tendency to be swayed more by emotionally charged liberal prompts than by conservative ones.