RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for Low Latency IoT Systems
IEEE Transactions on Network Science and Engineering
Although Deep Neural Networks (DNNs) have become the backbone technology of several ubiquitous applications, their deployment on resource-constrained machines, e.g., Internet of Things (IoT) devices, remains challenging. To satisfy the resource requirements of such a paradigm, collaborative deep inference with IoT synergy was introduced. However, distributing a DNN across devices suffers from severe data leakage. Various threats have been demonstrated, including black-box attacks, where malicious participants can recover arbitrary inputs fed into their devices. Although many countermeasures have been designed to achieve privacy-preserving DNNs, most of them incur additional computation and lower accuracy. In this paper, we present an approach that secures collaborative deep inference by re-thinking the distribution strategy, without sacrificing model performance. Particularly, we examine the DNN partitions that make the model susceptible to black-box threats, and we derive the amount of data that should be allocated to each device to hide properties of the original input. We formulate this methodology as an optimization problem, in which we establish a trade-off between the co-inference latency and the privacy level of the data. Next, to relax the optimal solution, we shape our approach as a Reinforcement Learning (RL) design that supports heterogeneous devices as well as multiple DNNs and datasets.
black-box, distributed DNN, IoT devices, reinforcement learning, resource constraints, sensitive data
E. Baccour, A. Erbad, A. Mohamed, M. Hamdi and M. Guizani, "RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for Low Latency IoT Systems," in IEEE Transactions on Network Science and Engineering, vol. 9, no. 4, pp. 2066-2083, 1 July-Aug. 2022, doi: 10.1109/TNSE.2022.3165472.