Technology

OpenAI jumps gun on International Math Olympiad gold medal announcement

Source

Ars Technica

TL;DR

AI Generated

OpenAI researcher Alexander Wei announced that an OpenAI language model achieved gold medal-level performance on the International Mathematical Olympiad, a standard only a small share of human contestants reach. The model reportedly solved five of the six proof-based problems under the same time limits as human competitors, and the announcement came despite a request from IMO organizers to hold results until after the event. Because OpenAI self-graded the results, the claim has drawn skepticism, but the company says it will publish the model's proofs and grading rubrics for outside review. The attempt differs from previous AI efforts in that the model processed the problems as plain text and generated natural-language proofs rather than working in a formal proof language.


Similar Articles

SemiEngineering

Silent Data Corruption: A Major Reliability Challenge in Large-Scale LLM Training (TU Berlin)

Researchers at Technische Universität Berlin published a technical paper on silent data corruption (SDC) in large language model (LLM) training. As LLMs grow in size, hardware-induced faults such as SDC can slip past detection mechanisms, with severe consequences during training. The study examines how intermittent SDC affects LLM pretraining, showing that sensitivity varies with factors such as bit position and kernel function. The authors propose a lightweight method for detecting harmful parameter updates and demonstrate that recomputing a training step once corruption is detected effectively mitigates it.
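The mitigation pattern summarized above, cheaply flagging a suspicious parameter update and recomputing the training step when one is detected, can be sketched in a few lines. This is an illustrative approximation rather than the paper's actual detector: the norm-spike heuristic and every function name here are assumptions made for the sketch.

```python
import numpy as np

def suspicious_update(update, ref_norm, spike_factor=10.0):
    """Cheap corruption check: flag non-finite values or an update norm
    that spikes far above a reference norm from recent healthy steps.
    (Heuristic chosen for illustration; the paper's detector differs.)"""
    norm = np.linalg.norm(update)
    return (not np.isfinite(norm)) or (ref_norm > 0 and norm > spike_factor * ref_norm)

def robust_sgd_step(params, grad_fn, lr, ref_norm, max_retries=2):
    """Apply one SGD step, recomputing it if the update looks corrupted."""
    for _ in range(max_retries + 1):
        update = lr * grad_fn(params)
        if not suspicious_update(update, ref_norm):
            return params - update
    raise RuntimeError("update still looks corrupted after recomputation")
```

The key design point the paper exploits is that transient faults usually do not recur: recomputing the same step on the same data typically yields a clean update, so detection plus recomputation is far cheaper than full checkpoint rollback.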

SemiEngineering
DeepSeek tests “sparse attention” to slash AI processing costs

DeepSeek, a Chinese AI company facing export restrictions on advanced AI chips, has developed "DeepSeek Sparse Attention" (DSA) to enhance processing efficiency in its latest language model, DeepSeek-V3.2-Exp. This technique, similar to sparse transformers used by OpenAI and Google Research, aims to reduce computational costs. DeepSeek claims its implementation achieves "fine-grained sparse attention" and has cut API prices by 50%. The company's focus on optimizing performance with limited resources highlights the ongoing efforts to enhance AI models while managing processing costs.
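Sparse attention in general restricts each query to a subset of keys instead of attending to all of them. The toy sketch below illustrates the idea with a top-k mask; it is not DeepSeek's DSA implementation, which (like other production sparse-attention kernels) selects keys without materializing the full score matrix, whereas this version computes all scores and only then masks them.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def topk_sparse_attention(Q, K, V, k):
    """Each query attends only to its k highest-scoring keys.

    Toy version: the dense score matrix is built and then masked, which
    shows the math but not the cost savings of a real sparse kernel."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])             # (n_q, n_k) dense scores
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]  # top-k key indices per query
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)          # 0 where kept, -inf where dropped
    return softmax(scores + mask) @ V
```

With k equal to the number of keys this reduces exactly to dense attention; the savings come from choosing k much smaller than the sequence length, which is what makes long-context inference cheaper.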

Ars Technica
Anthropic says its new AI model “maintained focus” for 30 hours on multistep tasks

Anthropic has unveiled its latest AI model, Claude Sonnet 4.5, which the company touts as its most advanced model yet, with improved coding and computer-use capabilities. The company also introduced Claude Code 2.0, a command-line AI agent for developers, and the Claude Agent SDK for building custom AI coding agents. Notably, Anthropic claims that Sonnet 4.5 maintained focus on complex, multistep tasks for more than 30 hours, a marked improvement over previous models, which tended to lose coherence over time. The Claude family spans three model sizes – Haiku, Sonnet, and Opus – with Sonnet balancing contextual depth and operational efficiency.

Ars Technica
When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette

Mainstream AI language models struggle with Persian social etiquette, particularly taarof, a ritual politeness practice in which refusal and counter-refusal are expected. Research shows that models from OpenAI, Anthropic, and Meta, including GPT-4o and Llama 3, navigate taarof situations correctly only 34 to 42 percent of the time, while native Persian speakers succeed about 82 percent of the time. A new benchmark called "TAAROFBENCH" measures AI systems' ability to replicate these cultural nuances, underscoring the need for models to better grasp diverse cultural practices.

