
LLMs are getting better at character-level text manipulation

Source: Hacker News

TL;DR (AI Generated)
Newer large language models (LLMs) such as GPT-5 and Claude 4.5 are improving at character-level text manipulation: counting characters, editing characters within sentences, and solving encodings and ciphers. Earlier generations of LLMs struggled with these tasks. The article compares how different models respond to prompts like replacing specific letters in a sentence or counting characters, showing clear gains in newer models such as GPT-5 and Claude Sonnet 4. It also tests the models on Base64 encoding and ROT13 ciphers, finding that newer models generalize Base64 encoding and decoding better. Overall, newer and larger LLMs manipulate text at the character level more reliably, even though their text understanding remains token-based.
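For reference, the task types the summary mentions have trivial programmatic ground truths, which makes LLM answers easy to check. A minimal sketch in Python (the specific example strings are illustrative, not the article's exact prompts):

```python
# Reference implementations of the character-level tasks described above,
# usable as ground truth when scoring an LLM's answers.
import base64
import codecs


def count_char(text: str, ch: str) -> int:
    """Count occurrences of a character, e.g. the classic 'r's in 'strawberry'."""
    return text.count(ch)


def replace_char(text: str, old: str, new: str) -> str:
    """Replace every occurrence of a specific letter in a sentence."""
    return text.replace(old, new)


def to_base64(text: str) -> str:
    """Base64-encode a UTF-8 string."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")


def rot13(text: str) -> str:
    """Apply the ROT13 cipher (a fixed 13-letter Caesar shift)."""
    return codecs.encode(text, "rot_13")


print(count_char("strawberry", "r"))          # → 3
print(replace_char("hello world", "l", "L"))  # → heLLo worLd
print(to_base64("hello"))                     # → aGVsbG8=
print(rot13("hello"))                         # → uryyb
```

Because tokenizers group several characters into one token, none of these operations is directly visible to a model at the token level, which is why they make a useful probe of character-level ability.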