NExT Forum: AI Security

AI Security in Audio


-Speaker: 王家慶, Distinguished Professor, Department of Computer Science and Information Engineering, National Central University
-Topic: AI Security in Audio
With the rapid development of deep learning in recent years, many applications adopt deep learning models as the framework mapping inputs to outputs. If an attacker deliberately perturbs the input to mislead the model's output, the result can be significant losses for society or a company. The security of deep learning systems has therefore become one of the major issues in computer science today. In recent years, many research efforts have demonstrated the impact of adversarial examples, with the image domain being the hardest-hit target. Similar adversarial examples also exist in text and audio applications, where they likewise confound deep learning models. In response to this threat, a number of studies have sought countermeasures to protect deep neural networks. This talk analyzes and investigates AI security problems encountered in the audio domain, explores the background and significance of the research, and introduces several research works on attack and defense in audio.
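To make the threat concrete, the sketch below illustrates the general idea of a gradient-sign (FGSM-style) adversarial perturbation on a waveform. It is a minimal illustration, not the speaker's method: the "model" is a hypothetical linear scorer over raw samples (whose gradient is simply its weight vector), standing in for the deep networks that real audio attacks target, and all names and parameter values are assumptions for the example.

```python
import numpy as np

# Hypothetical stand-in model: a linear classifier over raw waveform
# samples. Real attacks compute the same sign-of-gradient step against
# a deep network's loss instead.
rng = np.random.default_rng(0)
w = rng.normal(size=16000)           # stand-in model weights (1 s @ 16 kHz)
x = 0.1 * rng.normal(size=16000)     # "clean" waveform samples

def score(signal):
    """Model's confidence score for the correct class (higher = better)."""
    return float(w @ signal)

# For a linear model the gradient of the score w.r.t. the input is w.
# FGSM step: move each sample against the correct-class score by
# epsilon * sign(gradient), keeping the change near-inaudible.
epsilon = 0.002
x_adv = x - epsilon * np.sign(w)

print(score(x), score(x_adv))        # the adversarial score is lower
```

The perturbation is bounded by `epsilon` per sample, yet it shifts the score by `epsilon * sum(|w|)`, which is why imperceptibly small changes can still flip a model's decision.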