
1950s mechanical calculator crumbles in the face of divide-by-zero conundrum — relic spins its gears uncontrollably in 'chaotic loop' of endless motion

Source: Tom's Hardware

TL;DR (AI Generated)

Faced with a divide-by-zero request, a 1950s mechanical calculator descends into chaos, its gears spinning uncontrollably in an endless loop. Even the Intel 4004 microprocessor lacked hardware to handle divide-by-zero operations; the later 8086 introduced hardware-level exceptions for such errors. The IEEE 754 standard, published in 1985, further mitigated the problem by defining floating-point results for division by zero, so programs need not crash. Despite these advances, modern apps and games can still crash on an unanticipated divide by zero, as seen in recent reports from World of Warcraft players on Raptor Lake/Refresh CPUs.
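The IEEE 754 behavior mentioned above can be demonstrated in a few lines. This is a minimal sketch of my own, not code from the article; it uses NumPy because plain Python floats raise an exception on division by zero rather than following IEEE 754 arithmetic:

```python
# Sketch of IEEE 754 divide-by-zero semantics (illustration, not from the
# article). NumPy floats follow IEEE 754: division by zero produces a signed
# infinity, and the undefined case 0/0 produces NaN, instead of halting.
import numpy as np

with np.errstate(divide="ignore", invalid="ignore"):
    pos = np.float64(1.0) / np.float64(0.0)   # +inf: positive finite / zero
    neg = np.float64(-1.0) / np.float64(0.0)  # -inf: negative finite / zero
    und = np.float64(0.0) / np.float64(0.0)   # nan: 0/0 has no defined value

print(pos, neg, und)  # inf -inf nan
```

Under IEEE 754, computation can continue past the division, and the program decides how to treat the infinity or NaN — which is precisely the crash-avoidance the standard introduced.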


Similar Articles

Intel fined $3 million by India’s antitrust regulator over discriminatory CPU warranty policy — says Intel abused its dominant position in the boxed processor market.

India's Competition Commission has fined Intel $3 million for allegedly abusing its dominant position in the boxed microprocessors market by implementing discriminatory warranty policies. The commission found Intel's India-specific warranty policy limited consumer choice and harmed Indian consumers. The fine was based on 8% of Intel's average relevant turnover during the eight years the policy was in place, but was reduced due to the policy's discontinuation in 2024. Intel has been instructed to publicly announce the withdrawal of the policy and confirm compliance. This ruling aligns with antitrust actions against Intel globally, including a recent EU antitrust ruling related to its competition with AMD.

Tom's Hardware
Modern Trends In Floating-Point

The article discusses modern trends in floating-point arithmetic, highlighting the shift towards energy efficiency and throughput over strict precision in new applications. It explains the importance of floating-point arithmetic in representing real numbers in computers and how it underpins computing tasks like scientific data analysis and machine learning. The article cites the IEEE 754 standard as the foundation for number formats, but notes the rapid evolution of floating-point arithmetic driven by new hardware architectures, algorithmic innovations, and application demands. Key trends include the adoption of reduced-precision floating-point types, support for multiple formats in modern processors, algorithmic adaptations to new numeric realities, and a growing awareness of floating-point pitfalls. The future of floating-point computation is predicted to be more flexible and heterogeneous, mixing precisions dynamically to balance accuracy, speed, and energy efficiency.
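The precision trade-off behind reduced-precision formats is easy to see by rounding one value through several IEEE widths. A small sketch of my own (assuming NumPy; not from the article):

```python
# Round the same value through three IEEE 754 widths and compare the error.
# 1/3 is not exactly representable in binary, so each narrower format keeps
# fewer correct digits — the storage/accuracy trade-off behind the trend
# toward reduced-precision types.
import numpy as np

x = 1.0 / 3.0
for dtype in (np.float16, np.float32, np.float64):
    v = dtype(x)
    err = abs(float(v) - x)
    print(f"{dtype.__name__} ({np.finfo(dtype).bits} bits): error ~ {err:.2e}")
```

The 16-bit format is several orders of magnitude less accurate than the 32-bit one, but halves the memory and bandwidth per value — exactly the trade machine-learning workloads exploit.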

SemiEngineering
The Intel 286 CPU was introduced on this day in 1982 — 16-bit x86 chip introduced protected mode memory, and would power the IBM PC/AT and a tidal wave of clones

In 1982, Intel launched the 80286 processor, a 16-bit x86 chip that brought significant performance and architectural advancements over its predecessors. Featuring protected mode memory and multitasking capabilities, the 80286 powered the IBM PC/AT and numerous clones, remaining a staple in PC systems until the 1990s. With 134,000 transistors and support for up to 16MB of memory, the 80286 also offered an optional math coprocessor for enhanced performance in tasks like CAD and scientific software. The processor's popularity surged after IBM's PC/AT release in 1984, and by 1988, Intel had shipped 10 million 80286 chips.
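The 16MB figure follows from the 286's 24-bit address bus — a detail about the chip not spelled out in the summary. A quick back-of-the-envelope check:

```python
# The 80286 exposed 24 physical address lines, so its addressable memory is
# 2^24 bytes. (The 24-bit bus width is a known fact about the chip, added
# here for illustration; the summary only states the 16MB result.)
address_lines = 24
max_bytes = 2 ** address_lines
print(max_bytes)                   # 16777216
print(max_bytes // (1024 * 1024))  # 16 (MiB)
```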

Tom's Hardware

Does Your RISC-V Core Meet With The Standard?

The article discusses the challenges of verifying RISC-V cores to meet architectural conformance and implementation standards. It highlights the importance of architectural conformance verification, potential ecosystem fragmentation, and the difficulty of testing every instruction combination. The article also explores the role of RISC-V International in defining RISC-V core standards and the need for software compatibility. It emphasizes the distinction between architectural conformance and implementation verification, the complexity of verification tasks, and the importance of coverage metrics in ensuring design integrity.

SemiEngineering
