Past Keynote Speakers

 

Prof. Shahram Latifi

IEEE Fellow, University of Nevada, USA
  

Shahram Latifi, an IEEE Fellow, received the Master of Science degree in Electrical Engineering from Fanni, Tehran University, Iran, in 1980. He received the Master of Science and PhD degrees, both in Electrical and Computer Engineering, from Louisiana State University, Baton Rouge, in 1986 and 1989, respectively. He is currently a Professor of Electrical Engineering at the University of Nevada, Las Vegas. Dr. Latifi is the director of the Center for Information and Communication Technology (CICT) at UNLV. Over the past twenty years he has designed and taught graduate courses on Bio-Surveillance, Image Processing, Computer Networks, Fault Tolerant Computing, and Data Compression, and he has given seminars on these topics all over the world. He has authored over 200 technical articles in the areas of image processing, biosurveillance, biometrics, document analysis, computer networks, fault tolerant computing, parallel processing, and data compression. His research has been funded by NSF, NASA, DOE, Boeing, Lockheed, and Cray Inc. Dr. Latifi was an Associate Editor of the IEEE Transactions on Computers (1999-2006) and Co-founder and General Chair of the IEEE Int'l Conf. on Information Technology. He is also a Registered Professional Engineer in the State of Nevada.

Speech Title: AI: Our Greatest Ally and Fiercest Foe - Balancing the Scales


Abstract:

In the past two decades, AI technology has progressed at a remarkable rate. Breakthroughs from Deep Learning to Generative Adversarial Networks to Transfer Learning and Large Language Models have accelerated the advancement of this technology, empowering AI and unleashing its power to revolutionize the way society lives and works. AI has greatly improved the performance of systems in education, health, aerospace, manufacturing, security, e-commerce, and art, to name a few. Along with the great benefits of AI comes a major concern about the potential threats this technology poses to humanity. How can we ensure our training data is unbiased and well balanced? How can we be sure that an AI-based system respects the privacy of individuals? And, more importantly, how can we be certain that such a system is controllable and acts responsibly? In this talk, I will give a brief overview of AI, Machine Learning (ML), and Deep Learning (DL). I will show that while there are still great challenges in achieving multi-purpose AI (as opposed to Narrow AI), there are greater problems that must be resolved to ensure a safe, fair, and secure AI. I will also review the efforts made in recent years toward building responsible AI in the United States and around the world.

 

 

Prof. Nannan Wang

Xidian University, China

  

Nannan Wang is a Professor and doctoral supervisor at Xidian University, where he serves as Associate Director of the State Key Laboratory of Integrated Services Networks. His recent research focuses on cross-domain image reconstruction and credible identity authentication, specifically including cross-domain image reconstruction (e.g., image translation, image synthesis, and image restoration), object identification (e.g., face recognition, behavior recognition, and person re-identification), and trustworthy machine learning (e.g., adversarial attacks and defenses with noisy samples, and robust learning with noisy labels). He has over 200 publications in prominent international journals such as IEEE TPAMI and IJCV, and at conferences such as CVPR, ICCV, ICML, and NeurIPS. He has been granted over 30 national invention patents, 7 of which have achieved patent technology transfer, and he holds three software copyrights. He has received several awards, including the First Prize of Natural Science of the Ministry of Education, the First Prize of Science and Technology of Shaanxi Province, the Second Prize of Natural Science of the Chinese Society of Image and Graphics, the Excellent Doctoral Dissertation Award of the Chinese Association for Artificial Intelligence, and the Excellent Doctoral Dissertation Award of Shaanxi Province. He has led various research projects, including the National Science Fund for Excellent Young Scholars of China, the Joint Funds Key Program, the General Program, and the Youth Scientists Fund of the National Natural Science Foundation of China, as well as sub-projects under the National Key Research and Development Program of China and the Joint Funds of the Ministry of Education of China. He served as Associate Editor-in-Chief of the international journal The Visual Computer and as an editorial board member of Neural Networks.


Speech Title: The Robustness of Deep Learning Models under Adversarial Environments


Abstract:

With the rapid development of artificial intelligence, deep learning models have shown excellent application value and broad development prospects in many fields. However, while deep learning models exhibit good performance, they face a serious challenge: a lack of robustness in adversarial environments. Deep learning models can be maliciously perturbed by imperceptible adversarial noise and thus make significantly wrong decisions. This lack of adversarial robustness poses serious potential security threats to deep learning-based intelligent systems. It is therefore of great significance to deeply understand the intrinsic properties of adversarial noise and to study effective adversarial defense strategies that enhance adversarial robustness, which will also bring opportunities for the development of next-generation trustworthy deep learning. This talk will first introduce the major generation mechanisms of adversarial noise and representative adversarial attack algorithms. To counter the malicious interference of adversarial noise, the talk will then present different adversarial defense strategies from the perspectives of adversarial training, pre-processing, and post-processing, with the aim of promoting the development of robust deep learning and trustworthy artificial intelligence.
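
For readers unfamiliar with how such imperceptible adversarial noise is generated, the short sketch below (illustrative only, not taken from the talk) shows the Fast Gradient Sign Method (FGSM), one representative attack algorithm; the model, loss, and epsilon budget are assumptions chosen for the example.

# Minimal FGSM sketch in PyTorch (assumed setup: a classifier `model`,
# an image batch `x` in [0, 1], and integer labels `y`).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return a copy of x perturbed within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to a valid image range.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# A basic adversarial-training step would simply train on fgsm_attack(model, x, y)
# instead of the clean batch x; pre- and post-processing defenses instead modify
# the inputs or the model outputs at inference time.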