Xinhua News Agency, Beijing, January 20 (New Media Special Dispatch) According to a January 11 report on the US website Science Daily, as artificial intelligence (AI) continues to advance, some scientists and philosophers have warned of its dangers. An international research team including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development has used theoretical calculations to show that a superintelligent AI would very likely be extremely difficult, if not impossible, to control. The results of the study were published in the Journal of Artificial Intelligence Research.

The report said: suppose someone were to program an AI system superior to human intelligence, so that it could learn independently. Connected to the Internet, the AI would have access to all of humanity's data; it could replace all existing programs and take control of every machine online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring world peace and prevent climate disaster? Or would it destroy humanity and take over the Earth?

Manuel Cebrian, one of the study's authors and leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development, said: "A superintelligent machine that controls the whole world sounds like science fiction. But there are already machines that carry out certain important tasks independently, without their programmers fully understanding how they learned them. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."

Scientists have explored two different ideas for how a superintelligent AI might be controlled. On the one hand, the AI's capabilities could be deliberately restricted, for example by cutting it off from the Internet and all other technical devices so that it has no contact with the outside world; but this would greatly reduce what the superintelligent AI could do and leave it unable to meet humanity's demands. Absent that option, the AI could instead be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by coding ethical principles into its program. But the researchers also show that these and other contemporary and historical ideas for controlling superintelligent AI have their limits.

In the study, the research team conceived a theoretical containment algorithm that would simulate the AI's behavior and halt it if its actions were deemed harmful, thereby ensuring that the superintelligent AI could not harm humans. Careful analysis shows, however, that under the current paradigm of computing, such an algorithm cannot be built.

Iyad Rahwan, director of the Center for Humans and Machines, said: "If you break the problem down into basic rules of theoretical computer science, it turns out that an algorithm commanding an AI not to destroy the world could inadvertently halt its own operation. If this happened, you would not know whether the containment algorithm was still analyzing the threat, or whether it had stopped in order to contain the harmful AI. In effect, this renders the containment algorithm unusable."

Based on these results, the report concludes, the containment problem is incomputable: no single algorithm can determine whether an AI will cause harm to the world.
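The incomputability argument mirrors Turing's halting problem: any total "harm checker" can be defeated by a program that consults the checker about itself and then does the opposite. A minimal sketch in Python, where the function names and the toy checker are hypothetical illustrations rather than the paper's actual construction:

```python
def make_paradox(checker):
    """Given any claimed harm-checker, construct a program that the
    checker necessarily misclassifies (classic diagonalization)."""
    def paradox():
        if checker(paradox):
            # The checker predicted harm, so behave harmlessly.
            return "harmless"
        else:
            # The checker predicted no harm, so behave harmfully.
            return "HARM"
    return paradox

def naive_checker(program):
    """A toy checker that declares every program harmless."""
    return False

p = make_paradox(naive_checker)
print(naive_checker(p))  # False: the checker says p is harmless...
print(p())               # ...yet p behaves harmfully: "HARM"
```

Any other checker fails symmetrically: if it flags `paradox` as harmful, the program does nothing harmful, so the checker is wrong either way. This self-reference is why no general containment algorithm can exist.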