
Detecting Architectural Vulnerabilities in Closed-Source RISC-V CPUs (CISPA)

Source

SemiEngineering

TL;DR

AI Generated

Researchers at the CISPA Helmholtz Center for Information Security have published a paper on "RISCover," a framework for detecting architectural vulnerabilities in closed-source RISC-V CPUs. Unlike previous methods, it needs no source code, hardware modifications, or models: it runs as user code under Linux directly on real hardware. By comparing the behavior of instruction sequences across different CPUs, RISCover uncovered four previously unknown vulnerabilities in off-the-shelf CPUs from three vendors. These include GhostWrite, which gives attackers writes to physical memory, and "halt-and-catch-fire" bugs that can freeze the CPU entirely. The paper argues that post-silicon fuzzing techniques are needed to complement existing RTL-level fuzzers for security analysis of closed-source CPUs.
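The core idea — compare how several CPUs execute the same instruction sequences and treat any disagreement as a candidate bug — can be sketched as differential fuzzing. The snippet below is a toy illustration, not RISCover's harness: the `cpu_a`/`cpu_b` functions are software stand-ins for real hardware back-ends, and the injected shift bug is invented for demonstration.

```python
import random

# Toy stand-ins for two hardware back-ends. In RISCover's setting these
# would execute native instruction sequences on distinct physical CPUs;
# here cpu_b deliberately mishandles one case so the sketch has something
# to find.
def cpu_a(op, x, y):
    return {"add": (x + y) & 0xFFFFFFFF,
            "xor": x ^ y,
            "sll": (x << (y & 31)) & 0xFFFFFFFF}[op]

def cpu_b(op, x, y):
    if op == "sll" and (y & 31) == 0:
        return 0  # injected bug: shift-by-zero wrongly clears the result
    return cpu_a(op, x, y)

def differential_fuzz(cpus, trials=10_000, seed=0):
    """Run random instructions on every CPU and collect the inputs on
    which their architectural results diverge."""
    rng = random.Random(seed)
    divergences = []
    for _ in range(trials):
        op = rng.choice(["add", "xor", "sll"])
        x, y = rng.getrandbits(32), rng.getrandbits(32)
        results = {cpu.__name__: cpu(op, x, y) for cpu in cpus}
        if len(set(results.values())) > 1:  # CPUs disagree -> candidate bug
            divergences.append((op, x, y, results))
    return divergences

found = differential_fuzz([cpu_a, cpu_b])
print(f"{len(found)} divergent inputs found")
```

The attraction of this approach on real silicon is that it needs only an ordinary user-mode process per CPU — no RTL, simulator, or vendor cooperation — at the cost of needing at least two implementations to disagree.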


Similar Articles

AI agent designs a complete RISC-V CPU from a 219-word spec sheet in just 12 hours — comparably simple design required 'many tens of billions of tokens'

Verkor.io's AI system, Design Conductor, autonomously designed a complete RISC-V CPU core from a 219-word spec sheet in just 12 hours, far faster than traditional chip-design timelines. The resulting processor, VerCore, is a five-stage pipelined core that achieved a CoreMark score of 3,261. While the system showed impressive capabilities, it still requires human experts to guide it toward a production-ready chip, and its compute requirements grow non-linearly with design complexity. Verkor plans to release VerCore's RTL source and build scripts soon and to showcase an FPGA implementation at the Electronic Design Automation Conference.

Tom's Hardware
Geekbench 6.7 adds Intel BOT detection to sniff out 'unrealistic' CPU scores — Benchmark runs with BOT enabled will be marked as invalid

Geekbench 6.7 introduces detection for Intel's Binary Optimization Tool (BOT) and marks benchmark results produced with it as invalid. BOT, supported by Intel's latest Core Ultra chips, can selectively boost performance in specific tasks, raising concerns that scores misrepresent real-world performance; invalidating BOT-enabled runs is meant to keep benchmarking fair. The update also brings improvements such as enhanced SoC identification on Android and better support for RISC-V processors and Arm-based Linux systems.

Tom's Hardware
Architecting Intelligence: The Rise of RISC-V CPUs in Agentic AI Infrastructure

SiFive's recent $400 million Series G financing marks a significant milestone in the development of high-performance RISC-V CPUs tailored for agentic AI data center workloads. The funding aims to accelerate next-gen CPU IP, software ecosystem growth, and hyperscale deployment capabilities to address emerging compute challenges in AI infrastructure. CPUs are increasingly central to agentic AI systems because they handle complex control flow and orchestration more efficiently than GPUs and specialized accelerators. RISC-V's modular architecture allows tailored extensions that improve efficiency across diverse AI workloads; the focus is on integrating scalar pipelines with vector and matrix compute units to reduce memory-bandwidth overhead and improve power efficiency as AI clusters scale. The investment also emphasizes expanding software compatibility and enabling customer-specific CPU customization to meet evolving AI workload demands.

SemiWiki
Anthropic's Claude Mythos AI has discovered thousands of vulnerabilities in every OS and browser

Anthropic's Claude Mythos AI, a powerful unreleased model, has identified thousands of high-severity vulnerabilities in major operating systems and browsers, surpassing human capabilities in finding and exploiting these flaws. The AI poses a significant cybersecurity threat, prompting concerns about potential misuse by malicious actors. While Claude Mythos won't be publicly released, it is being used in the Project Glasswing initiative to secure critical software, with partners like Amazon Web Services, Microsoft, and Google utilizing its capabilities. Anthropic plans to share its findings with the security industry, emphasizing the importance of responsible AI deployment and potential regulation to mitigate risks.

TweakTown
