CALL FOR CONTRIBUTIONS
Deep learning, the representative technology in modern artificial intelligence, has demonstrated tremendous success in various tasks, such as computer vision, natural language processing, and data mining. Unfortunately, deep learning models have also encountered critical safety and security threats in recent years. Due to their inherent vulnerability, deep learning models can be easily affected by adversarial perturbations and behave abnormally, which may yield serious consequences in applications such as autonomous driving. Meanwhile, deep learning models may also be exploited by malicious attackers to generate forged multimedia content, such as fake images and videos, to deceive people, which may induce trust issues among individuals and organizations.
In this workshop, we aim to attract more attention from researchers in the fields of adversarial attack & defense, forensics, robust deep learning, and explainable deep learning, to discuss recent progress and future directions for tackling the various safety and security issues of deep learning models.