
Agentic AI: Lots Of Little Black Boxes

Source: SemiEngineering

TL;DR (AI Generated)

AI's rapid adoption in chip design raises security concerns as AI agents are introduced into EDA workflows. While AI helps with complex design tasks, its opaque nature poses risks such as biased results or hidden malicious code. Companies like Microsoft are cautious about deploying AI in critical applications without thorough vetting, and AI agents bring new challenges in defining their interactions and privileges. EDA vendors are therefore focusing on "boxing in" AI to constrain its autonomy and limit risk. The industry now faces the task of understanding and securing these AI systems to ensure they are robust and safe.


Similar Articles

Solving the EDA tool fragmentation crisis

Design teams are facing challenges with EDA tool fragmentation as specialized tools for IC verification struggle to share design data efficiently. The Calibre Connectivity Interface (CCI) aims to bridge this gap by transforming Layout vs. Schematic (LVS) verification data into a universal source that downstream tools can access accurately. CCI operates on the Standard Verification Database (SVDB) to provide rich connectivity data for various analysis tools, including parasitic extraction, electromagnetic simulation, and power integrity analysis. Integration with third-party tools like Empyrean's PEX, Phlexing's GloryEX, Synopsys StarRC, and Cadence QRC showcases CCI's ability to streamline workflows and enhance design accuracy. The article emphasizes the importance of seamless multi-tool integration in modern IC design to improve efficiency and accelerate time-to-market for semiconductor innovation.

SemiWiki
How a cavalcade of blunders gave unauthorized users access to Claude Mythos — restricted model accessed by third parties, thanks to knowledge from data breach

Unauthorized users gained access to Anthropic's cybersecurity AI model, Claude Mythos, after a breach exposed proprietary AI models. Despite Mythos' capabilities in finding vulnerabilities, it could not prevent unauthorized access via a third-party contractor: the breach stemmed from a hack at Mercor, which set off a chain of breaches involving third-party tools. The incident underscores the vulnerability posed by the human element in digital security, and as AI tools like Mythos become more prevalent, robust security measures are increasingly crucial to prevent unauthorized access and data breaches.

Tom's Hardware
How to Overcome the Advanced Node Physical Verification Bottleneck

Advanced semiconductor process technology poses challenges in physical verification, the final gate to manufacturing. As process nodes shrink, the number of required checks has quadrupled, creating bottlenecks in full-chip runs that can take days to weeks. Synopsys IC Validator offers a solution with a scalable architecture that reduces turnaround time for checks such as antenna and PERC ESD. Its HyperSync architecture improves performance for full-chip checks, and its Elastic Compute feature optimizes resource utilization. IC Validator is certified by leading foundries and aims to streamline physical verification for advanced designs.

SemiWiki
Anthropic's Model Context Protocol includes a critical remote code execution vulnerability — newly discovered exploit puts 200,000 AI servers at risk

Security researchers discovered a critical remote code execution vulnerability in Anthropic's Model Context Protocol (MCP), affecting SDKs in Python, TypeScript, Java, and Rust. The flaw puts up to 200,000 AI servers at risk across a supply chain with over 150 million downloads. Despite the exposure, Anthropic has declined to patch the protocol, stating that the behavior was expected. OX Security's research team identified multiple exploitation methods and proposed protocol-level fixes to Anthropic, which were reportedly declined. The disclosure comes shortly after Anthropic launched Claude Mythos, a model aimed at identifying security vulnerabilities in other software, prompting calls for the company to address vulnerabilities in its own infrastructure.

Tom's Hardware
