AI-driven attacks leaked 23.77 million secrets in 2024, revealing that NIST, ISO, and CIS frameworks lack coverage for ...
Security researchers have devised a technique to alter deep neural network outputs at the inference stage by changing model weights via row hammering in an attack dubbed ‘OneFlip.’ A team of ...
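The core idea behind a bit-flip attack like OneFlip is that flipping even a single bit in a stored weight can radically change a model's output. The real attack flips bits physically in DRAM via Rowhammer; the sketch below is only a numeric illustration (a hypothetical one-weight "model") of why a single flipped exponent bit is so damaging:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 double representation of `value`."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    bits ^= 1 << bit
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits))
    return flipped

# Toy "model": a single weight applied to an input.
weight = 0.5
x = 2.0
print(weight * x)  # benign inference: 1.0

# Flipping the top exponent bit (bit 62) turns 0.5 into 2**1023,
# so the same input now produces an astronomically different output.
evil_weight = flip_bit(weight, 62)
print(evil_weight)
```

In a real network the attacker searches for the few weight bits whose flips steer a chosen input to a target class while leaving other outputs largely intact, which is what makes the attack stealthy.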
Poisoning and manipulating the large language models (LLMs) that power AI agents and chatbots was once considered an advanced hacking task, one requiring considerable computing horsepower and ...
Researchers have developed a novel attack that steals user data by injecting malicious prompts into images processed by AI systems before they are delivered to a large language model. The method relies on ...
A crafted inference request in Triton’s Python backend can trigger a cascading attack, giving remote attackers control over AI-serving environments, researchers say. A surprising attack chain in ...
Semiconductor Engineering sat down to discuss hardware security challenges, including new threat models from AI-based attacks, with Nicole Fern, principal security analyst at Keysight; Serge Leef, ...
Anthropic said a Chinese espionage group used its Claude AI to automate most of a cyberattack campaign, but experts question how autonomous the operation really was, and what it means for the future ...