
A Novel Attack For Depleting DNN Model Inference With Runtime Code Fault Injections

A technical paper titled “Yes, One-Bit-Flip Matters! Universal DNN Model Inference Depletion with Runtime Code Fault Injection” was presented at the August 2024 USENIX Security Symposium by researchers at Peng Cheng Laboratory, Shanghai Jiao Tong University, CSIRO’s Data61, University of Western Australia, and University of Waterloo.

Abstract:

“We propose FrameFlip, a novel attack for depleting DNN model inference with runtime code fault injections. Notably, FrameFlip operates independently of the DNN models deployed and succeeds with only a single bit-flip injection. This fundamentally distinguishes it from the existing DNN inference depletion paradigm that requires injecting tens of deterministic faults concurrently. Since our attack performs at the universal code or library level, the mandatory code snippet can be perversely called by all mainstream machine learning frameworks, such as PyTorch and TensorFlow, dependent on the library code. Using DRAM Rowhammer to facilitate end-to-end fault injection, we implement FrameFlip across diverse model architectures (LeNet, VGG-16, ResNet-34 and ResNet-50) with different datasets (FMNIST, CIFAR-10, GTSRB, and ImageNet). With a single bit fault injection, FrameFlip achieves high depletion efficacy that consistently renders the model inference utility as no better than guessing. We also experimentally verify that identified vulnerable bits are almost equally effective at depleting different deployed models. In contrast, transferability is unattainable for all existing state-of-the-art model inference depletion attacks. FrameFlip is shown to be evasive against all known defenses, generally due to the nature of current defenses operating at the model level (which is model-dependent) in lieu of the underlying code level.”
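The attack's key property is that the single flipped bit lives in shared inference library code rather than in any particular model's weights, so one fault simultaneously corrupts every model that calls that code path. As a loose software illustration of the effect only (the paper's actual attack flips a bit in library machine code via DRAM Rowhammer; bitflip32, shared_softmax, and SCALE below are hypothetical names invented for this sketch), flipping one exponent bit in a constant inside a shared output stage drives any model's predictions toward uniform guessing:

import numpy as np

def bitflip32(x: float, bit: int) -> float:
    # Flip a single bit in the float32 representation of x.
    a = np.array([x], dtype=np.float32)
    a.view(np.uint32)[0] ^= np.uint32(1 << bit)
    return float(a[0])

SCALE = 1.0  # stand-in for a constant baked into a shared inference kernel

def shared_softmax(logits, scale):
    # Toy "library-level" output stage that every deployed model would call.
    z = np.asarray(logits, dtype=np.float32) * np.float32(scale)
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.1, -0.3, 0.8, 1.5])   # produced by any model architecture
print(shared_softmax(logits, SCALE))       # sharp, correct distribution

faulted = bitflip32(SCALE, 29)             # one exponent bit: 1.0 becomes 2**-64
print(shared_softmax(logits, faulted))     # near-uniform: no better than guessing

Because the fault sits below the model level, swapping in a different architecture or dataset changes nothing, which is consistent with the paper's findings that the identified vulnerable bits transfer across deployed models and that model-level defenses do not see the corruption.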

Find the technical paper here. Published August 2024. Distinguished Paper Award Winner.

Li, Shaofeng, Xinyu Wang, Minhui Xue, Haojin Zhu, Zhi Zhang, Yansong Gao, Wen Wu, and Xuemin (Sherman) Shen. “Yes, One-Bit-Flip Matters! Universal DNN Model Inference Depletion with Runtime Code Fault Injection.” In Proceedings of the 33rd USENIX Security Symposium, 2024.

Related Reading
Why It’s So Hard To Secure AI Chips
Much of the hardware is the same, but AI systems have unique vulnerabilities that require novel defense strategies.


Dedicated Approximate Computing Framework To Efficiently Compute PCs On Hardware

June 20, 2024, 20:28

A technical paper titled “On Hardware-efficient Inference in Probabilistic Circuits” was published by researchers at Aalto University and UCLouvain.

Abstract:

“Probabilistic circuits (PCs) offer a promising avenue to perform embedded reasoning under uncertainty. They support efficient and exact computation of various probabilistic inference tasks by design. Hence, hardware-efficient computation of PCs is highly interesting for edge computing applications. As computations in PCs are based on arithmetic with probability values, they are typically performed in the log domain to avoid underflow. Unfortunately, performing the log operation on hardware is costly. Hence, prior work has focused on computations in the linear domain, resulting in high resolution and energy requirements. This work proposes the first dedicated approximate computing framework for PCs that allows for low-resolution logarithm computations. We leverage Addition As Int, resulting in linear PC computation with simple hardware elements. Further, we provide a theoretical approximation error analysis and present an error compensation mechanism. Empirically, our method obtains up to 357x and 649x energy reduction on custom hardware for evidence and MAP queries respectively with little or no computational error.”
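One well-known way to "add as int" is the Mitchell-style approximate multiplier: because the exponent field of an IEEE-754 float stores roughly the base-2 logarithm of the value, adding the raw bit patterns of two positive floats (and subtracting the bit pattern of 1.0 to cancel the doubled exponent bias) approximates the bit pattern of their product. The following minimal Python sketch illustrates that trick under the assumption that the paper's Addition As Int follows this pattern; it is a software illustration of the principle, not the paper's hardware framework:

import struct

ONE_BITS = 0x3F800000  # float32 bit pattern of 1.0 (exponent bias 127 << 23)

def f2i(x: float) -> int:
    # Reinterpret a float32's bits as an unsigned 32-bit integer.
    return struct.unpack('<I', struct.pack('<f', x))[0]

def i2f(n: int) -> float:
    # Reinterpret an unsigned 32-bit integer's bits as a float32.
    return struct.unpack('<f', struct.pack('<I', n & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    # Approximate a*b for positive floats with a single integer addition:
    # bits(a) + bits(b) - bits(1.0) ~= bits(a*b), within ~11% worst case.
    return i2f(f2i(a) + f2i(b) - ONE_BITS)

# Probabilities multiply along a PC's product nodes, so each floating-point
# multiplier can be replaced by a cheap integer adder.
p, q = 0.37, 0.052
print(approx_mul(p, q), p * q)  # approximate vs. exact product

Since only integer adders are needed, the computation maps naturally onto low-resolution hardware; per the abstract, the paper's framework additionally provides a theoretical analysis of, and a compensation mechanism for, the approximation error this introduces.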

Find the technical paper here. Published May 2024 (preprint). CODE: https://github.com/lingyunyao/AAI_Probabilistic_Circuits

Yao, Lingyun, Martin Trapp, Jelin Leslin, Gaurav Singh, Peng Zhang, Karthekeyan Periasamy, and Martin Andraud. “On Hardware-efficient Inference in Probabilistic Circuits.” arXiv preprint arXiv:2405.13639 (2024).

Related Reading
Architecting Chips For High-Performance Computing
Data center IC designs are evolving, based on workloads, but making the tradeoffs for those workloads is not always straightforward.
AI Tradeoffs At The Edge
The best ways to optimize AI efficiency today, and other options under development.

