[Security 2023] Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation

Authors:

Yifan Yan, Xudong Pan, Mi Zhang, and Min Yang, Fudan University


Publication:

This paper is included in the Proceedings of the 32nd USENIX Security Symposium (USENIX Security), Anaheim, CA, USA, August 9-11, 2023.


Abstract:

Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations. To trace illegally distributed model copies, DNN watermarking is an emerging technique that embeds and later verifies secret identity messages in a model's prediction behavior or in its internals. By sacrificing less functionality and exploiting more knowledge about the target DNN, the latter branch, known as white-box DNN watermarking, is believed to be accurate, credible, and secure against most known watermark-removal attacks, and is attracting growing research effort from both academia and industry.
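
To make the idea of white-box verification concrete, here is a minimal sketch in the style of one classic projection-based scheme (Uchida et al.): the owner recovers the embedded bits by projecting the flattened weights of the marked layer through a secret matrix and thresholding the result. The function name, the shapes, and the 90% bit-accuracy threshold are illustrative assumptions, not the exact procedure of any scheme evaluated in the paper.

```python
import numpy as np

def verify_whitebox_watermark(layer_weights: np.ndarray,
                              secret_matrix: np.ndarray,
                              message_bits: np.ndarray,
                              min_bit_accuracy: float = 0.9) -> bool:
    """Recover the embedded identity message and check it.

    layer_weights : flattened weights of the watermarked layer, shape (M,)
    secret_matrix : the owner's secret projection matrix X, shape (T, M)
    message_bits  : the T-bit message embedded during training
    """
    # The sign of each secret projection encodes one bit of the message.
    extracted_bits = (secret_matrix @ layer_weights > 0).astype(int)
    bit_accuracy = float(np.mean(extracted_bits == message_bits))
    # Verification succeeds when enough extracted bits match.
    return bit_accuracy >= min_bit_accuracy
```

Because the projection is tied to the exact dimension M of the marked layer, any attack that changes the layer's shape, as the dummy-neuron obfuscation described below does, can make this extraction step fail before a single bit is compared.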

In this paper, we present the first systematic study of how mainstream white-box DNN watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons, i.e., neurons that can be added to a target model while leaving its prediction behavior invariant. Devising a comprehensive framework that automatically generates and injects dummy neurons with high stealthiness, our novel attack intensively modifies the architecture of the target model to defeat watermark verification. Through extensive evaluation, our work shows for the first time that nine published watermarking schemes require amendments to their verification procedures.
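
As a minimal illustration of the invariance property (a sketch of the simplest possible construction, not the paper's generation framework, which is designed to be far stealthier), the PyTorch snippet below widens a hidden layer with dummy neurons whose incoming weights are random but whose outgoing weights are zero: every model prediction is unchanged, yet the layer shapes a white-box verifier relies on no longer match. The function name and the two-layer setting are illustrative assumptions.

```python
import torch
import torch.nn as nn

def inject_dummy_neurons(fc1: nn.Linear, fc2: nn.Linear, n_dummy: int):
    """Widen the hidden layer between fc1 and fc2 with inert neurons."""
    new_fc1 = nn.Linear(fc1.in_features, fc1.out_features + n_dummy)
    new_fc2 = nn.Linear(fc2.in_features + n_dummy, fc2.out_features)
    with torch.no_grad():
        # Copy the original parameters unchanged.
        new_fc1.weight[:fc1.out_features] = fc1.weight
        new_fc1.bias[:fc1.out_features] = fc1.bias
        new_fc2.weight[:, :fc2.in_features] = fc2.weight
        new_fc2.bias.copy_(fc2.bias)
        # Dummy neurons receive random incoming weights ...
        nn.init.normal_(new_fc1.weight[fc1.out_features:])
        nn.init.normal_(new_fc1.bias[fc1.out_features:])
        # ... but zero outgoing weights, so their activations never
        # reach the output: the model's behavior is exactly invariant.
        new_fc2.weight[:, fc2.in_features:].zero_()
    return new_fc1, new_fc2
```

For example, replacing the two layers of nn.Sequential(fc1, nn.ReLU(), fc2) with the widened pair leaves all outputs bit-identical while changing the hidden width from fc1.out_features to fc1.out_features + n_dummy, which already breaks verification procedures that index weights by position or shape.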