A new AI attack vector known as OneFlip allows malicious actors to gain control of sensitive AI systems. While the technique has yet to be seen in the wild, the researchers who discovered the vulnerability suggest that OneFlip could be used to hijack smart cars, shut down biometric ID authenticators, interfere with medical devices, and more.

The research paper, written by a team at George Mason University and presented at the 34th USENIX Security Symposium in August, reads, in part: "While conventional backdoor attacks on deep neural networks (DNNs) assume the attacker can manipulate the training data or process, recent research introduces a more practical threat model by injecting backdoors during the inference stage."

How the OneFlip attack works

The OneFlip attack is difficult to execute. While the research team's report is more theoretical than practical, it highlights a significant flaw in the way modern AI models handle weights.

AI models currently use weights, represented as 32-bit words, to encode information and make relevant connections between user inputs and the AI's outputs. Some AI models leverage billions of bits during the reasoning process. While this accounts for much of the latency seen when interacting with modern AI models, it also provides a sophisticated attack vector for the most cunning cyberattackers.
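To make the "32-bit word" framing concrete, here is a minimal Python sketch (the weight value is arbitrary, not taken from the paper) that prints the IEEE-754 bit pattern behind a single float32 model weight:

```python
import struct

def float_to_bits(w: float) -> str:
    """Return the 32-bit IEEE-754 encoding of a float32 weight as a bit string."""
    (packed,) = struct.unpack(">I", struct.pack(">f", w))
    return format(packed, "032b")

# Sign bit, 8 exponent bits, 23 mantissa bits -- every weight is just
# 32 physical bits sitting in DRAM.
print(float_to_bits(0.15625))  # → 00111110001000000000000000000000
```

Each of those 32 bits occupies a physical DRAM cell while the model runs, which is what puts weights within reach of a memory-level fault like Rowhammer.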

By using a Rowhammer exploit to take advantage of known vulnerabilities in a system's dynamic random access memory (DRAM), an attacker can cause unintended bit flips, turning a one into a zero or vice versa. This allows the attacker to modify the weights of the AI's internal reasoning processes, effectively giving them full control of the AI system, its priorities, and its actions.
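The outsized effect of a single flipped bit can be sketched in a few lines of Python (the weight value and bit position are illustrative assumptions, not drawn from the OneFlip paper):

```python
import struct

def flip_bit(w: float, bit: int) -> float:
    """Flip one bit (0 = most significant) in the 32-bit float encoding of w."""
    (packed,) = struct.unpack(">I", struct.pack(">f", w))
    packed ^= 1 << (31 - bit)  # simulate a Rowhammer-style single-bit fault
    (flipped,) = struct.unpack(">f", struct.pack(">I", packed))
    return flipped

# Flipping the top exponent bit of a modest weight makes it astronomically large.
print(flip_bit(0.5, 1))  # → 1.7014118346046923e+38
```

Because the exponent bits dominate a float's magnitude, one well-chosen flip can swing a weight by dozens of orders of magnitude, which is why a single targeted bit is enough to implant a backdoor.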

The attacker must have direct access to the AI model they are targeting to successfully execute the OneFlip attack. Moreover, the attack must be launched from the same physical machine that hosts the intended target.

OneFlip could become easier to execute over time

Not only are modern AI models highly secured, but most would-be attackers will never have physical access to the servers that host them. However, one of the report's authors, Qiang Zeng, insists that such an attack is feasible for someone with moderate resources and a high level of technical knowledge. A state-sponsored attacker with direct funding from a small nation, for example, would be better positioned to execute a OneFlip attack than the average cybercriminal.

Regardless, the USENIX report concludes: "while the theoretical risks are non-negligible, the practical threat remains low."

Although the attack is difficult to execute, the research team has already released code that automates the entire process, even identifying which bits to flip.

Researchers are quick to point out that future research could make the OneFlip attack, and others like it, easier to execute in the coming weeks, months, and years.

With the rise of AI, cyber threats are growing more complex. At Black Hat 2025, Microsoft revealed how its security teams work in real time to outpace hackers and stop attacks before they escalate.
