
[Paper Reading] APMSA: Adversarial Perturbation Against Model Stealing Attacks (2023)

2025/11/10 4:25:03 Source: https://blog.csdn.net/Glass_Gun/article/details/141352412


Abstract

Training a Deep Learning (DL) model requires proprietary data and computing-intensive resources. To recoup their training costs, a model provider can monetize DL models through Machine Learning as a Service (MLaaS). Generally, the model is deployed in the cloud, while a publicly accessible Application Programming Interface (API) is provided for paid queries to obtain benefits. However, model stealing attacks have posed security threats to this model monetization scheme, as they steal the model without paying for future extensive queries. Specifically, an adversary queries the targeted model to obtain input-output pairs …
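The query-based stealing the abstract describes can be sketched end to end: the adversary treats the paid API as a black box, collects input-output pairs, and fits a substitute model on them. Below is a minimal, self-contained illustration of that loop. Everything here is a hypothetical stand-in (a tiny linear "victim" model and a gradient-descent substitute), not code from the APMSA paper or any real MLaaS API.

```python
import math
import random

random.seed(0)

# Hypothetical victim model (stand-in for the cloud service): a small linear
# classifier whose weights the attacker never sees. 2 features -> 2 classes.
W_VICTIM = [[1.5, -1.0], [-0.8, 1.2]]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [v / s for v in exps]

def victim_api(x):
    """Black-box query: the API returns only the class-probability vector."""
    return softmax([sum(w * xi for w, xi in zip(wc, x)) for wc in W_VICTIM])

# Step 1: the adversary crafts queries and records input-output pairs.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(400)]
Y = [victim_api(x) for x in X]

# Step 2: the adversary trains a substitute on the stolen pairs
# (batch gradient descent on cross-entropy against the returned soft labels).
W_sub = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(300):
    grads = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in zip(X, Y):
        p = softmax([sum(w * xi for w, xi in zip(wc, x)) for wc in W_sub])
        for c in range(2):
            for j in range(2):
                grads[c][j] += (p[c] - y[c]) * x[j]
    for c in range(2):
        for j in range(2):
            W_sub[c][j] -= 0.5 * grads[c][j] / len(X)

# Step 3: the substitute now mimics the victim on fresh inputs,
# so the adversary no longer has to pay for queries.
def predict(W, x):
    return max(range(2), key=lambda c: sum(w * xi for w, xi in zip(W[c], x)))

X_test = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
agree = sum(predict(W_VICTIM, x) == predict(W_sub, x) for x in X_test) / len(X_test)
```

Note that the attacker here exploits exactly the leakage channel the paper targets: the fine-grained probability vector returned per query. APMSA's premise is that perturbing those returned confidences degrades such substitute training while preserving the top-1 label for honest users.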
