In modern software, the train of security vulnerabilities is barreling full-steam down the artificial intelligence track, and experts from NJIT, Rutgers University and Temple University are developing new educational materials intended to prevent a collision.

NJIT’s Cong Shi, assistant professor in Ying Wu College of Computing, is a principal investigator on a $127,000 National Science Foundation grant, Education on Securing AI System under Adversarial Machine Learning Attacks. His prior research on the security of computer vision systems and voice assistants led him and collaborators Yingying Chen (Rutgers) and Yan Wang (Temple) to see that AI’s fast and vast adoption, without proper education, could expose massive risk.

Shi further explained why AI courses tend to lack security aspects. “I believe the main reason is the rapid pace at which AI technologies have evolved, combined with the huge focus on the benefits of benign applications, such as ChatGPT and other widely used AI models. As a result, most AI courses tend to prioritize teaching foundational concepts like model construction, optimization and evaluation using clean datasets. Unfortunately, this leaves out real-world scenarios where models are vulnerable to adversarial attacks during deployment or backdoor attacks during the training phase,” he said. “AI security is still relatively new compared to traditional cybersecurity. While cybersecurity education has long focused on protecting data, networks and systems, AI security presents unique challenges — like adversarial examples and model poisoning — that educators may not yet be familiar with or ready to teach in a systematic way.”
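For readers unfamiliar with the adversarial examples Shi mentions, the classic fast gradient sign method (FGSM) illustrates the idea: a tiny, deliberately crafted perturbation to an input can flip a model's prediction even though the input looks unchanged to a human. The sketch below is illustrative only, not part of the NSF-funded course materials; it assumes a hypothetical PyTorch classifier `model` with inputs `x`, labels `y` and a chosen perturbation budget `epsilon`.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: shift each input feature a small
    step (epsilon) in the direction that increases the classifier's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The per-pixel change is tiny, yet it can flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with an image classifier trained only on clean data:
#   x_adv = fgsm_perturb(model, images, labels)
#   model(x_adv).argmax(dim=1)  # predictions may no longer match `labels`
```

Attacks of this kind target deployed models; the backdoor and poisoning attacks Shi also cites instead tamper with the training data or pipeline before the model ever ships.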