Why is artificial intelligence so easy to deceive?


[NetEase Smart News, October 16] Fraud is one of the world's oldest and most inventive professions, and it may soon have a new target. Research shows that artificial intelligence may be vulnerable to fraudsters, and as its influence in the modern world continues to grow, attacks against it may become more common.

The root of the problem is that artificial intelligence algorithms perceive the world very differently from humans. As a result, slight adjustments to an algorithm's input data can completely derail it while having no noticeable effect on a human observer.

Much of the research in this area has been done on image recognition systems, especially those that rely on deep neural networks. These systems are trained by showing them thousands of images of a particular object until they extract the common features of those images, allowing them to identify the object in new images.

But the features they extract are not necessarily the kind of high-level features a human would look for, such as the word "STOP" on a sign or the tail of a dog. These systems analyze images at the level of individual pixels, looking for common patterns among them. Those patterns can be obscure combinations of pixel values that humans cannot recognize but that are highly predictive of a particular object.

This means that by identifying these patterns and applying them to other images, an attacker can fool an object recognition algorithm into seeing something that isn't actually there, in a way that is not at all obvious to humans. This kind of manipulation is known as an "adversarial attack."
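
To make the idea concrete, here is a minimal sketch of the best-known adversarial perturbation technique, the fast gradient sign method, written in PyTorch. It assumes full access to the classifier's gradients, and the names `model`, `image`, and `true_label` are hypothetical placeholders (a pretrained classifier, a batched input tensor, and its correct label), not details from the studies described in this article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel changes by at most `epsilon`, so the perturbation is hard for a
    # human to notice, yet it can flip the classifier's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```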

Early attempts to deceive image recognition systems in this way required access to the algorithm's internal workings in order to decipher these patterns. In 2016, however, researchers demonstrated a "black box" attack that can deceive a system without any knowledge of its internals.

By feeding tampered images into the system and observing how it classified them, the attackers could work out what it was paying attention to and generate images that fool it. Importantly, these doctored images do not look noticeably different to the human eye.
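
The black-box approach can be illustrated with a simple query-based search. The sketch below is not the method from the 2016 study; it is a toy random-search version, assuming a hypothetical `query_model` function that returns class probabilities for a NumPy image.

```python
import numpy as np

def black_box_attack(query_model, image, true_class, step=0.05, max_queries=2000):
    """Perturb `image` using only the model's output scores, never its internals."""
    adv = image.copy()
    best_confidence = query_model(adv)[true_class]
    for _ in range(max_queries):
        # Propose a small random change to a handful of pixels.
        noise = step * np.random.choice([-1.0, 0.0, 1.0], size=adv.shape, p=[0.05, 0.9, 0.05])
        candidate = np.clip(adv + noise, 0.0, 1.0)
        probs = query_model(candidate)
        if probs[true_class] < best_confidence:
            # Keep any change that makes the model less sure of the true class.
            adv, best_confidence = candidate, probs[true_class]
            if np.argmax(probs) != true_class:
                return adv  # success: the model now predicts something else
    return adv
```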

These methods were tested by feeding modified image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year, research showed that photographs of doctored images taken with a smartphone could successfully fool an image classification system.

Another group of researchers showed that wearing specially designed, illusion-inducing glasses could trick facial recognition systems into mistaking people for celebrities. In August of this year, scientists showed that placing stickers on stop signs in particular configurations could cause a neural network designed to identify and classify road signs to misread them.

These last two examples highlight some of the potential malicious applications of this technology. Causing a self-driving car to miss a stop sign could lead to an accident, whether as part of an insurance scam or to do someone harm. And if facial recognition becomes increasingly popular in biometric security applications, the ability to pass oneself off as someone else would be very useful to a fraudster.

Not surprisingly, efforts are already under way to counter the threat of adversarial attacks. In particular, research has shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for Artificial Intelligence demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies that attack, hinting at the kind of arms race we are likely to see in the future.
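
The detector idea reduces to an ordinary binary classification problem: label clean images 0 and adversarial images 1, then train on both. Here is a minimal sketch, assuming a hypothetical `detector` network, an optimizer, and a `make_adversarial` function (something like the FGSM sketch above); it is an illustration of the concept, not the Bosch study's method.

```python
import torch
import torch.nn.functional as F

def train_detector(detector, optimizer, clean_batches, make_adversarial):
    """Teach a binary classifier to separate clean images (label 0) from adversarial ones (label 1)."""
    for images in clean_batches:
        adv_images = make_adversarial(images)
        inputs = torch.cat([images, adv_images])
        labels = torch.cat([torch.zeros(len(images), dtype=torch.long),
                            torch.ones(len(adv_images), dtype=torch.long)])
        loss = F.cross_entropy(detector(inputs), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```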

While fooled image recognition systems make for easy, intuitive demonstrations, they are not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data as well.

Chinese researchers found that adding specific words to a sentence, or misspelling a single word, can completely throw off machine learning systems designed to analyze text. Another set of experiments showed that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular website, which could be used to download malware.
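
The same perturbation idea carries over to text. The sketch below is a toy illustration, not the Chinese team's actual method: it assumes a hypothetical `classify` function that returns the model's confidence in a sentence's original label, and it greedily keeps whichever single-word misspelling reduces that confidence the most.

```python
def misspell(word):
    """Swap two adjacent letters, a typo-style perturbation."""
    return word[0] + word[2] + word[1] + word[3:] if len(word) > 2 else word

def attack_sentence(classify, sentence):
    """Try misspelling one word at a time, keeping the edit that hurts the model most."""
    words = sentence.split()
    best_text, best_score = sentence, classify(sentence)
    for i, word in enumerate(words):
        candidate_words = list(words)
        candidate_words[i] = misspell(word)
        candidate = " ".join(candidate_words)
        score = classify(candidate)
        if score < best_score:
            best_text, best_score = candidate, score
    return best_text
```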

This last example points toward a more worrying application that may well arrive in the near future: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems, too, are highly susceptible to deception.

At a hacking conference this summer, a security firm demonstrated how it bypassed anti-malware AI using an approach similar to the black-box attack on image classifiers, but powered by its own artificial intelligence system.

Their system fed malicious code to the antivirus software and recorded the score the software assigned it. It then used genetic algorithms to repeatedly tweak the code until it could bypass the defenses while retaining its functionality.
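
That loop can be sketched in schematic form. This is not the firm's actual tool: `score_malware` stands in for the antivirus engine's score and `mutate` for functionality-preserving code changes, both hypothetical placeholders.

```python
import random

def evolve_evasive_sample(sample, score_malware, mutate,
                          population_size=20, generations=50, threshold=0.5):
    """Iteratively mutate a sample until the detector's malware score falls below the threshold."""
    population = [sample] + [mutate(sample) for _ in range(population_size - 1)]
    for _ in range(generations):
        # Lower score = looks more benign to the detector.
        population.sort(key=score_malware)
        if score_malware(population[0]) < threshold:
            return population[0]  # the detector no longer flags this variant
        survivors = population[:population_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
    return population[0]
```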

All the methods mentioned so far focus on deceiving machine learning systems that have already been trained, but another major concern for the cybersecurity industry is "data poisoning": the idea that introducing bad data into a machine learning system's training set will cause it to start misclassifying things.

This could be especially challenging for anti-malware systems, which are constantly updated to account for new viruses. A related approach bombards a system with data designed to trigger false alarms, so that the defenders recalibrate their systems in a way that then lets the attackers slip in.
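
In schematic terms, poisoning is simple. The sketch below assumes a training set of `(features, label)` pairs where 0 means benign, plus hypothetical `attacker_samples` and `train_model` placeholders; it illustrates the concept rather than any technique from the studies cited here.

```python
def poison_training_set(clean_data, attacker_samples, poison_fraction=0.05):
    """Slip deliberately mislabeled samples into the data a model will be retrained on."""
    n_poison = int(len(clean_data) * poison_fraction)
    # Label the attacker's malware-like samples as benign (0).
    poisoned = [(features, 0) for features in attacker_samples[:n_poison]]
    return clean_data + poisoned

# After retraining on the poisoned set, the model has "learned" that the
# attacker's samples are benign, opening a gap in the defense:
# model = train_model(poison_training_set(clean_data, attacker_samples))
```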

How likely these methods are to be used in the wild will depend on the potential reward and the skill of the attackers. Most of the techniques described above require a high level of expertise, but training materials and machine learning tools are becoming ever easier to obtain.

For years, simple forms of machine learning have been at the core of spam filters, and spammers have developed a host of innovative workarounds to get past them. As machine learning and artificial intelligence become ever more embedded in our lives, the rewards for deceiving them are likely to outweigh the costs.

(Source: SingularityHub. Compiled by NetEase Intelligence's external compilation platform; reviewed by Xiao Ka.)
