AI MODEL:
Researchers have developed a method for stealing AI models by detecting their electromagnetic ‘signature’ and comparing it against the signatures of other models running on the same type of chip. While stressing that their goal is not to assist in attacking neural networks, researchers at North Carolina State University detailed this technique in a recent paper.
Using an electromagnetic probe, a few pre-trained open-source AI models, and a Google Edge Tensor Processing Unit (TPU), they were able to analyze the electromagnetic emissions while the TPU chip was in operation.
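The core idea is a matching problem: a trace captured while the TPU runs an unknown model is compared against reference traces recorded from known models on the same type of chip. The sketch below, in Python, is illustrative only and is not the researchers’ code; the trace capture step, the file names, and the reference library are all assumed for the example.

```python
# Minimal sketch of the matching idea: correlate an EM trace captured from an
# unknown model against reference traces recorded from known open-source models
# on the same chip. Probe placement and sampling are assumed to be done already.
import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equal-length EM traces."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def best_match(unknown_trace: np.ndarray, reference_traces: dict[str, np.ndarray]) -> str:
    """Return the name of the reference model whose trace correlates most strongly."""
    scores = {name: normalized_correlation(unknown_trace, trace)
              for name, trace in reference_traces.items()}
    return max(scores, key=scores.get)

# Hypothetical usage: the .npy files and model names are placeholders.
# unknown = np.load("captured_trace.npy")
# refs = {"mobilenet_v2": np.load("ref_mobilenet.npy"),
#         "resnet50": np.load("ref_resnet.npy")}
# print(best_match(unknown, refs))
```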
“It’s quite expensive to build and train a neural network,” said Ashley Kurian, the lead author of the study and a Ph.D. student at NC State, in an interview with Gizmodo.
ETHICAL AI MODEL HACKING:
“It’s intellectual property owned by companies, requiring significant time and computing resources. For example, ChatGPT consists of billions of parameters, which are the ‘secret’ behind it. If someone steals it, ChatGPT becomes theirs—no need to pay for it, and they can potentially sell it.”
The researchers successfully determined the architecture and specific characteristics of the AI models, known as layer details, with 99.91% accuracy.
To achieve this, they had physical access to the chip for probing and running other models. They also worked closely with Google to assess the potential vulnerabilities of its chips.
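Having their own chip to run candidate models on is what makes layer-level recovery plausible: each segment of the captured trace can be compared against signatures profiled from candidate layer configurations. The following sketch is a simplified stand-in for that search, not the paper’s actual method; the hyperparameter grid, the distance metric, and the `profile_fn` callback are all assumptions made for illustration.

```python
# Minimal sketch, assuming the attacker can (1) segment the captured trace by layer
# and (2) profile candidate layer configurations on an identical chip they control.
# profile_fn is a hypothetical callback that runs a candidate layer and returns an
# EM signature of the same length as layer_trace.
import numpy as np
from itertools import product

def trace_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two per-layer EM signatures of equal length."""
    return float(np.linalg.norm(a - b))

def recover_layer(layer_trace: np.ndarray, profile_fn) -> dict:
    """Search a small, illustrative grid of layer hyperparameters and return the closest match."""
    candidates = product([16, 32, 64],   # filter counts (example values)
                         [1, 3, 5],      # kernel sizes
                         [1, 2])         # strides
    best, best_dist = None, float("inf")
    for filters, kernel, stride in candidates:
        guess = {"filters": filters, "kernel": kernel, "stride": stride}
        dist = trace_distance(layer_trace, profile_fn(guess))
        if dist < best_dist:
            best, best_dist = guess, dist
    return best
```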
Kurian speculated that this technique could also be applied to AI models running on smartphones, although their smaller design would make capturing electromagnetic signals more challenging.
“Side-channel attacks on edge devices are not new,” said Mehmet Sencan, a security researcher at AI standards nonprofit Atlas Computing.
However, he said, this method of extracting an entire model’s architecture and hyperparameters is significant. Since AI hardware “performs inference in plaintext,” Sencan warned that anyone deploying models on unsecured edge devices or servers should assume their architectures could be exposed through extensive probing.