NIST researchers warn of top AI security threats

WhatsApp Group Join Now
Telegram Group Join Now
Instagram Group Join Now

As dozens of states race to set standards for how their agencies use AI to increase efficiency and streamline services to the public, researchers at the National Institute of Standards and Technology found that artificial Intelligence systems, which rely on large amounts of data to perform tasks, can malfunction when exposed to unreliable data, according to a report published last week.

The report, part of a broader effort by the institute to support the development of trustworthy AI, found that cybercriminals can deliberately confuse or “poison” AI systems to make them vulnerable. Can corrupt them by exposing them to bad data. What’s more, according to the study, there are no defenses that developers or cybersecurity experts can implement to protect AI systems.

“Data is incredibly important for machine learning,” NIST computer scientist Apostol Vassilev, one of the authors of the publication, told StateScoop. “‘Garbage in, garbage out’ is a well-known catchphrase in business.”

To perform tasks such as autonomously driving vehicles or interacting with users as online chatbots, AI is trained on large amounts of data, which helps the technology predict Learn how to best respond to different situations. Autonomous vehicles, for example, are trained on images of highways and street signs, among other datasets. A chatbot can be exposed to records of online conversations.

The researchers warned that some AI training data — such as websites with false information or unwanted interactions with the public — may not be reliable and could cause AI systems to perform in unintended ways. For example, chatbots can learn to respond with abusive or racist language when carefully crafted malicious cues deter their defenders.

Joseph Thakur, a principal AI engineer and security researcher at AppOmni, a security management software used by state and local governments, said it’s important to consider the security protocols needed to protect against every possible attack — as in the NIST report. has been described.

“We’re going to need everyone’s help to make it safe,” Thakur told StateScoop. “And I think people should think it through.”

‘Malice’

The NIST report outlined four types of attacks on AI — poisoning, theft, privacy and abuse — and categorized them based on criteria such as the attacker’s goals and objectives, capabilities and knowledge of the system.

Poisoning occurs when an AI system is trained on corrupted data, such as by slipping multiple instances of inappropriate language into conversation records so that the chatbot knows enough to use those instances in its users’ interactions. Describe it as a common occurrence.

“Using a creative AI example, if you’re malicious and try to change some of the input data that’s fed into the model during training, where the model learns that the cat What is, what is the dog and all these things, it can actually learn perturbations that can cause the model to misclassify,” explained Apostol Vassilev, one of the NIST computer scientists who wrote the report.

But Thakur, who specializes in application security, hacking and AI, argued that while data poisoning is possible, its window is limited to the tool’s training phase and other types of attacks — theft, privacy and instant injection. In case of misuse – are So more likely.

“If you can avoid the filter, it’s an attack on the system, because you’re bypassing set protection,” Thakur said of instant injection, when a bad actor volunteers someone else’s data. Cheat the system.

Quick injection attacks aim to force a chatbot to provide sensitive training data that it is programmed to intercept, Thakur said.

“If you’re able to extract the data directly from the model that went into training it — and many times it’s trained on all the data that’s out there on the Internet, which often contains people’s private information,” Thakur said. “If you can get a large language model to output that sensitive information, that violates that person’s privacy.”

So what can be done?

Vasiliev said a major challenge for state and local governments is to safely incorporate large language models into their workflows. And while there are ways to mitigate attacks against AI, he warned agencies not to be lulled into a false sense of security, as there is no foolproof way to protect AI from misdirection.

“You can’t just say, ‘OK, I’ve got this model and I’ve applied this technique and I’m done.’ What you need to do is monitor, evaluate and react when problems arise,” said Vasiliev, who also acknowledged that researchers also need to develop better cybersecurity defenses. Should. “In the meantime, you guys need to be alert and aware of all of these things and constantly monitor.”

Thakur, who helps tech companies find these kinds of vulnerabilities in their software, insisted there are some common-sense ways to protect against AI security threats, including restricting access to sensitive data. Is.

“Do not connect systems that have access to sensitive data, such as Social Security numbers or other personal information,” Thakur said. “If a government agency wants to enable its employees to work more efficiently through the use of AI, such as ChatGPT or a similar service, then don’t. [training] Data that is sensitive. And don’t connect it to any system that allows access to that data.”

But Thakur also struck a note of optimism, predicting that AI security features will become more common, as will the ubiquity of two-factor authentication.

“A lot of people don’t realize everything that’s underwater when they’re using some kind of website or [software-as-a-service] application,” he said. “I think AI security will be integrated through your traditional security tech stack, and then your cloud security and then your SaaS security.”

Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for state scope. She was previously a multimedia producer for CNET, where her coverage focused on food production, climate change and private sector innovation through podcast and video content. He earned a bachelor’s in anthropology from Wagner College and a master’s in media innovation from Northeastern University.

WhatsApp Group Join Now
Telegram Group Join Now
Instagram Group Join Now

Leave a Comment