Seeing that a few days in 2016 will come to an end, in the past year, artificial intelligence has mushroomed, and some people even call it "the first year of artificial intelligence." Automatic driving, speech recognition, popular games Pokémon Go... Machines seem to be omnipresent and omnipotent. At the same time, artificial intelligence has also caused many misfortunes this year. We need to pay more attention to these mistakes so that we will not repeat the same mistakes in the future. Lei Fengnet learned that a recent paper entitled "Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures" was published by Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville. In the past year, the performance was not very good. Yampolskiy stated that these failures can be attributed to the mistakes made by the AI ​​system during the learning and performance phases. What are the top ten AI failure cases in 2016? Microsoft Tay and Tesla crash list The following is a list compiled by Yapolskiy's dissertation and opinions of many artificial intelligence experts and foreign media TechRepublic, in no particular order. Compiled by Lei Fengnet: 1. Pokémon Go allows gamers to focus on white communities After the release of the popular Pokémon Go game in July, multiple users noticed that very few Pokémons were in the black community. Minu's chief data officer Anu Tewary said the reason is because the inventors of these algorithms did not provide a diverse training set and did not spend time in the black community.
2. Tesla's accident with a semi-autonomous driving system This year, Tesla's accidents have emerged throughout the world. In May, on a highway in Florida, a Tesla in Autopilot mode crashed and the driver died. This is Tesla's first death in the world. Afterwards, Tesla made a major update to Autopilot software. In an interview with its CEO, Musk, said that this update can prevent the accident from happening again. In addition, Tesla accidents have also occurred in other countries and regions such as China. However, some of these accidents cannot be said to be directly caused by AI. 3. Microsoft's chat robot Tay spreads racism, sexism, and attacks on homosexuality. This spring, Microsoft released an artificial intelligence-driven chat robot, Tay, on Twitter, hoping that it would be pleasant for young people on the Internet. At first, Tay was designed to imitate teenagers in the United States. However, Tay was taken away shortly after the launch and became a bad guy who loves Hitler and ironically feminism. In the end, Microsoft had to “kill†Tay and announced that it would adjust the relevant algorithms. 4. Google's AI Alpha Dog AlphaGo lost a game with human Go Master Li Shishi on March 13 this year, Google Alpha Go vs. Li Shishi's man-machine warfare The fifth inning of the fourth chess game at the Four Seasons Hotel in Seoul, South Korea, Go Master Li Shishi The middle game defeated Alpha and regained one game. Although the final artificial intelligence still won with a score of 1 to 3, the lost game shows that the current AI system is still not perfect. "Perhaps Li Shishi discovered the weaknesses of Monte Carlo Tree Search (MCTS)." Toby Walsh, professor of artificial intelligence at New South Wales University, said. However, although this is seen as a failure of artificial intelligence, Yampolskiy believes that this failure is within an acceptable range. 5. In the video game, non-player characters created weapons unpredicted by the creators. In June of this year, an AI-equipped video game “Elite: Dangerous†appeared outside the game plan of the game maker: AI was created. Out of the game set super weapons. A gaming site commented: "Human gamers may be defeated by strange weapons created by AI." It is worth mentioning that game developers then withdrew these weapons. 6. Artificial intelligence also has racial discrimination In the first “International Artificial Intelligence Pageant Contestâ€, the robot expert group based on the “Algorithm that can accurately assess human aesthetic and health standards†judged the face. However, because there is no diversified training set for artificial intelligence, the winners of the competition are all white. As Yampolskiy said, "beauty is in the pattern recognizer." 7. Predict crime with AI, involving racial discrimination Northpointe has developed an artificial intelligence system to predict the probability of an alleged offender’s second-time crime. This algorithm, called "Minority Report", was accused of prejudicing racial bias. Because in the test, black criminals are far more likely to be labeled than other races. In addition, another media ProPublica also pointed out that Northpointe's algorithm “although the problem of racial discrimination is removed, the correct rate is not high in most cases.†8. The robot caused a child to be injured Knightscope platform has created a claim that is "Crime fighting robots." In July this year, the robot injured a 16-year-old boy in a shopping mall in Silicon Valley. The Los Angeles Times quoted the company as saying it was an "accidental accident." 9. China uses facial recognition technology to predict criminals. This is considered biased. Two researchers at Shanghai Jiaotong University in China published a paper entitled "Automated Inference on Criminality Using Face Images." According to the foreign media Mirror report, the researchers analyzed 1856 facial images of half criminals, and used some identifiable facial features to predict criminals, such as lip curvature, eye inner corner distance, There are even nose-mouth angles and so on. For this study, many industry players questioned the test results and raised ethical issues. 10. Insurance companies use Facebook big data to predict accident rates The final case was from Admiral Insurance, England’s largest car insurance company, and this year it plans to use Facebook users’ tweet data to test the association between social networking sites and good drivers. This is an abuse of artificial intelligence. Walsh thinks "Facebook has done a good job of limiting data." Due to Facebook's restrictions on the company's access to data, this project, called "first car quote," has not been opened. From the above cases, readers of Lei Fengwang can see that the AI ​​system can easily become extreme. Therefore, humans need to train machine learning algorithms on diverse data sets to avoid AI bias. At the same time, with the continuous development of AI, ensuring the scientific testing of relevant research, ensuring data diversity, and establishing relevant ethical standards have become increasingly important.
2. Tesla's accident with a semi-autonomous driving system This year, Tesla's accidents have emerged throughout the world. In May, on a highway in Florida, a Tesla in Autopilot mode crashed and the driver died. This is Tesla's first death in the world. Afterwards, Tesla made a major update to Autopilot software. In an interview with its CEO, Musk, said that this update can prevent the accident from happening again. In addition, Tesla accidents have also occurred in other countries and regions such as China. However, some of these accidents cannot be said to be directly caused by AI. 3. Microsoft's chat robot Tay spreads racism, sexism, and attacks on homosexuality. This spring, Microsoft released an artificial intelligence-driven chat robot, Tay, on Twitter, hoping that it would be pleasant for young people on the Internet. At first, Tay was designed to imitate teenagers in the United States. However, Tay was taken away shortly after the launch and became a bad guy who loves Hitler and ironically feminism. In the end, Microsoft had to “kill†Tay and announced that it would adjust the relevant algorithms. 4. Google's AI Alpha Dog AlphaGo lost a game with human Go Master Li Shishi on March 13 this year, Google Alpha Go vs. Li Shishi's man-machine warfare The fifth inning of the fourth chess game at the Four Seasons Hotel in Seoul, South Korea, Go Master Li Shishi The middle game defeated Alpha and regained one game. Although the final artificial intelligence still won with a score of 1 to 3, the lost game shows that the current AI system is still not perfect. "Perhaps Li Shishi discovered the weaknesses of Monte Carlo Tree Search (MCTS)." Toby Walsh, professor of artificial intelligence at New South Wales University, said. However, although this is seen as a failure of artificial intelligence, Yampolskiy believes that this failure is within an acceptable range. 5. In the video game, non-player characters created weapons unpredicted by the creators. In June of this year, an AI-equipped video game “Elite: Dangerous†appeared outside the game plan of the game maker: AI was created. Out of the game set super weapons. A gaming site commented: "Human gamers may be defeated by strange weapons created by AI." It is worth mentioning that game developers then withdrew these weapons. 6. Artificial intelligence also has racial discrimination In the first “International Artificial Intelligence Pageant Contestâ€, the robot expert group based on the “Algorithm that can accurately assess human aesthetic and health standards†judged the face. However, because there is no diversified training set for artificial intelligence, the winners of the competition are all white. As Yampolskiy said, "beauty is in the pattern recognizer." 7. Predict crime with AI, involving racial discrimination Northpointe has developed an artificial intelligence system to predict the probability of an alleged offender’s second-time crime. This algorithm, called "Minority Report", was accused of prejudicing racial bias. Because in the test, black criminals are far more likely to be labeled than other races. In addition, another media ProPublica also pointed out that Northpointe's algorithm “although the problem of racial discrimination is removed, the correct rate is not high in most cases.†8. The robot caused a child to be injured Knightscope platform has created a claim that is "Crime fighting robots." In July this year, the robot injured a 16-year-old boy in a shopping mall in Silicon Valley. The Los Angeles Times quoted the company as saying it was an "accidental accident." 9. China uses facial recognition technology to predict criminals. This is considered biased. Two researchers at Shanghai Jiaotong University in China published a paper entitled "Automated Inference on Criminality Using Face Images." According to the foreign media Mirror report, the researchers analyzed 1856 facial images of half criminals, and used some identifiable facial features to predict criminals, such as lip curvature, eye inner corner distance, There are even nose-mouth angles and so on. For this study, many industry players questioned the test results and raised ethical issues. 10. Insurance companies use Facebook big data to predict accident rates The final case was from Admiral Insurance, England’s largest car insurance company, and this year it plans to use Facebook users’ tweet data to test the association between social networking sites and good drivers. This is an abuse of artificial intelligence. Walsh thinks "Facebook has done a good job of limiting data." Due to Facebook's restrictions on the company's access to data, this project, called "first car quote," has not been opened. From the above cases, readers of Lei Fengwang can see that the AI ​​system can easily become extreme. Therefore, humans need to train machine learning algorithms on diverse data sets to avoid AI bias. At the same time, with the continuous development of AI, ensuring the scientific testing of relevant research, ensuring data diversity, and establishing relevant ethical standards have become increasingly important.
High Frequency Inverter Charger
High Frequency Inverter Charger,solar inverter charger,Inverter Charger,High Frequency Car Power Charger
SUZHOU DEVELPOWER ENERGY EQUIPMENT CO.,LTD , http://www.fisoph-power.com