NExT Forum: AI Security

AI Security in Audio

-NExT Forum: AI Security
Full agenda: https://www.hh-ri.com/forum/20220303.html

-Speaker: Jia-Ching Wang (王家慶),
Distinguished Professor, Department of Computer Science and Information Engineering, National Central University
-Topic: AI Security in Audio
-Abstract:
With the progress deep learning has achieved in recent years, many applications adopt deep learning models as the framework mapping inputs to outputs. If an attacker deliberately alters an input to mislead the model's output, the result can be substantial losses for society or for a company. The security of deep learning has therefore become one of the major issues in computer science today. In recent years, many research efforts have demonstrated the impact of adversarial examples, with the image domain being the hardest-hit target; similar adversarial examples also exist in text and audio applications, where they confound deep learning models. In response to this threat, some studies have sought countermeasures to protect deep neural networks. This talk analyzes and investigates AI security problems encountered in the audio domain, explores the background and significance of the research, and introduces a number of research works on attacks and defenses in the audio field.
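To make the idea of an adversarial audio example concrete, here is a minimal sketch (not from the talk) of a Fast Gradient Sign Method (FGSM) style perturbation on a raw waveform. The toy linear classifier, its weights `W`, the sample rate, and the perturbation budget `eps` are all illustrative assumptions chosen so the example stays self-contained; real audio attacks target full speech or sound-event models.

```python
# Hedged sketch: FGSM on a waveform against a toy linear classifier.
# Everything here (the model, W, eps) is a simplifying assumption.
import numpy as np

rng = np.random.default_rng(0)

def toy_logits(x, W):
    """Toy classifier: logits are a linear function of the waveform."""
    return W @ x

def fgsm_audio(x, y_true, W, eps=0.002):
    """One FGSM step on a raw waveform.

    For a linear model with softmax cross-entropy loss, the input
    gradient is W^T (softmax(Wx) - onehot(y)); the attack adds
    eps * sign(gradient) to push the input away from the true label.
    """
    logits = toy_logits(x, W)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y_true] -= 1.0            # softmax(Wx) - onehot(y)
    grad = W.T @ p              # dLoss/dx for this toy model
    # Perturb, then keep samples in the valid audio range [-1, 1].
    return np.clip(x + eps * np.sign(grad), -1.0, 1.0)

# One second of synthetic "audio" at 16 kHz and a random 2-class model.
x = rng.uniform(-0.5, 0.5, size=16000)
W = rng.normal(size=(2, 16000)) * 0.01
x_adv = fgsm_audio(x, y_true=0, W=W, eps=0.002)

# Each sample moves by at most eps: inaudibly small, yet such
# perturbations can flip a model's prediction.
print(float(np.max(np.abs(x_adv - x))))
```

The key point the talk's abstract alludes to is visible here: the per-sample change is bounded by a tiny `eps`, so the perturbed audio sounds essentially identical to a human while the model's decision can change.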
