Recently, three security researchers, Zhi Wang, Chaoge Liu, and Xiang Cui, published a research report known as EvilModel, which describes how an attacker could deliver malware covertly while evading detection.
In the report, the three cybersecurity researchers demonstrate a new method for hiding malware inside AI models so that it evades the automated detection of security tools and antivirus engines.
The researchers note that, to bypass detection, threat actors already hide the messages and accounts used by malicious programs in legitimate services such as Twitter, GitHub, and even blockchains.
These channels have their own problems, however, and the team of Zhi Wang, Chaoge Liu, and Xiang Cui has identified a technique that circumvents them.
According to the report, the researchers chose a neural network model as the container. They explain that this choice both “disassembles” the malware code, making it hard to recognize, and minimizes the visible signs of infection.
In this research, malware was successfully delivered covertly through neural network models while evading antivirus detection.
Unlike other approaches in which attackers use steganography to hide malware, hiding it inside a neural network model remains effective even when embedding large payloads.
Basically, a neural network model usually consists of an input layer, one or more hidden layers, and an output layer.
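For illustration, here is a minimal feedforward network of that shape in plain Python (the layer sizes and random weights are arbitrary choices, not from the paper). The weight matrices are what a trained model ships with — and the storage that an EvilModel-style attack repurposes:

```python
import random

random.seed(0)

def make_layer(n_in, n_out):
    """One fully connected layer: one row of weights per output neuron."""
    return [[random.uniform(-1.0, 1.0) for _ in range(n_in)]
            for _ in range(n_out)]

# Input layer of 4 features -> hidden layer of 8 neurons -> output layer of 3.
W_hidden = make_layer(4, 8)
W_output = make_layer(8, 3)

def forward(x):
    hidden = [max(0.0, sum(w * v for w, v in zip(row, x)))  # ReLU activation
              for row in W_hidden]
    return [sum(w * h for w, h in zip(row, hidden))         # raw output scores
            for row in W_output]

scores = forward([0.5, -0.2, 0.1, 0.9])
```

Even this toy model carries 4×8 + 8×3 = 56 floating-point parameters; production models carry millions, which is what makes them attractive containers.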
The researchers embedded malware into the neurons of a model and delivered the model to the victim's device without impacting the neural network's performance.
Because the network's performance remains unchanged, antivirus engines are unable to detect the malware.
Their experiments determined that 36.9 MB of malware can be embedded into a 178 MB AlexNet model with less than 1% accuracy loss; as a result, none of the antivirus engines listed on VirusTotal flagged the model.
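The central trick is hiding payload bytes inside the parameters themselves. A rough sketch of one plausible encoding — packing 3 payload bytes into the low-order bytes of each 32-bit float, an illustrative layout rather than the paper's exact procedure — looks like this:

```python
import struct

def embed(weights, payload):
    """Pack 3 payload bytes into the low-order bytes of each float32
    weight. The high byte (sign bit plus most of the exponent) is kept,
    so each weight's sign and rough order of magnitude survive."""
    chunks = [payload[i:i + 3].ljust(3, b"\x00")
              for i in range(0, len(payload), 3)]
    assert len(chunks) <= len(weights), "payload too large for these weights"
    stego = []
    for w, chunk in zip(weights, chunks):
        raw = bytearray(struct.pack(">f", w))  # big-endian: raw[0] is the high byte
        raw[1:4] = chunk                       # overwrite the 3 low-order bytes
        stego.append(struct.unpack(">f", bytes(raw))[0])
    return stego + list(weights[len(chunks):])

def extract(weights, length):
    """Recover the payload by reading the same 3 bytes back out."""
    data = b"".join(struct.pack(">f", w)[1:4] for w in weights)
    return data[:length]

weights = [0.1234, -0.5678, 0.9012, -0.3456]
payload = b"MALWARE"            # stand-in payload, not real malware
stego = embed(weights, payload)
recovered = extract(stego, len(payload))
```

Three hidden bytes per 4-byte weight gives a capacity of roughly 75% of a layer's size, which is how tens of megabytes can fit inside a 178 MB model while each individual weight changes only slightly.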
In the research paper, the authors also outline a reference scenario for defending against neural-network-assisted attacks.
They point out that malware-embedded models are most likely to be used on end devices.
They therefore suggest that whenever an application loads such a model, it should verify the model as soon as possible.
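Such verification can be as simple as checking the model file's cryptographic digest against a value published by the provider before loading it. A hypothetical sketch (the function name and workflow are assumptions, not from the paper):

```python
import hashlib
import os
import tempfile

def verify_model(path, expected_sha256):
    """Return True only if the model file's SHA-256 digest matches the
    value published by the model's provider (hypothetical workflow)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest() == expected_sha256

# Demo with a stand-in "model file".
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake model weights")
    model_path = f.name
published = hashlib.sha256(b"fake model weights").hexdigest()
ok = verify_model(model_path, published)        # untampered file passes
tampered = verify_model(model_path, "0" * 64)   # wrong digest fails
os.unlink(model_path)
```

A digest check only helps against tampering after publication; it cannot catch a provider whose model was malicious from the start, which is why the supply-chain measures discussed below also matter.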
Even so, once the malware has been assembled and executed on the targeted device, it can be detected and analyzed with traditional methods such as static analysis, dynamic analysis, and heuristics.
The experts also affirm that threat actors could mount supply-chain pollution attacks, in which case the original providers of the models must take security measures to block this kind of compromise.
The method described by the researchers does not rely on other system vulnerabilities, and models carrying malicious programs can be delivered through model updates in the supply chain.
Because the malware's characteristic signatures are no longer exposed, it can evade detection by common antivirus engines.
Neural network models, meanwhile, are robust to such parameter changes, which is why there is no apparent loss of performance.
As neural networks grow ever more popular, the researchers expect this method to become increasingly common in the near future.