Recently, Sam Altman, the CEO of OpenAI, said that China should play a pivotal role in shaping the limits placed on artificial intelligence.
During a talk last week at the Beijing Academy of Artificial Intelligence (BAAI), Altman said that China has some of the world’s finest AI talent, adding: “Solving alignment for advanced AI systems requires some of the world’s brightest minds, so I sincerely hope that Chinese AI researchers will make significant contributions in this area.”
Altman is well placed to comment on these issues. His company built ChatGPT, the chatbot that showed the world how rapidly AI capabilities are advancing. Those advances have prompted scientists and technologists to call for limits on the technology’s development. In March, a number of experts signed an open letter calling for a six-month pause on the development of AI algorithms more powerful than those behind ChatGPT. Last month, executives including Altman and Demis Hassabis, the CEO of Google DeepMind, warned that AI could one day pose an existential threat comparable to nuclear war or pandemics.
Such statements, often signed by executives working on the very technology they claim could harm us, can ring hollow. To some, they also miss the point: many AI experts argue that it is more important to focus on the harms AI is already capable of causing by exacerbating societal biases and spreading misinformation.
Zhang Hongjiang, the chair of the BAAI, told me that AI researchers in China are also deeply concerned about the emergence of new capabilities. “I truly believe [Altman] is doing humanity a service by making this tour and speaking to various governments and institutions,” he said.
Zhang said that a number of Chinese scientists, including the director of the BAAI, had signed the letter calling for a pause on the development of more powerful AI systems, though he noted that the BAAI has been focused on more immediate AI risks for some time. Given AI’s recent advances, he said, the academy will “certainly devote more resources to AI alignment.” But he added that the picture is complicated, because “smarter models can actually make things safer.”
Altman was not the only AI expert from the West who attended the BAAI conference.
Geoffrey Hinton, one of the pioneers of deep learning, the technology that underpins modern AI, was also present. He quit Google last month to warn the public about the dangers posed by increasingly advanced algorithms.
Max Tegmark, a professor at the Massachusetts Institute of Technology (MIT) and president of the Future of Life Institute, which organized the letter calling for a pause in AI development, also spoke about AI risks. Yann LeCun, another deep learning pioneer, suggested that the current alarm over AI risk may be somewhat overblown.
Regardless of where you stand in the debate over apocalyptic risks, there is something to be said for the US and China comparing notes on AI. It can seem as if the technology has become hopelessly entangled in geopolitics, with the usual discourse centering on the two nations’ struggle to dominate its development. In January, FBI Director Christopher Wray told the World Economic Forum in Davos that he was “deeply concerned” about the Chinese government’s artificial intelligence program.
Given the importance of artificial intelligence to economic growth and strategic advantage, international competition is hardly surprising. But no one benefits from developing the technology recklessly, and AI’s growing power will require cooperation between the United States, China, and other global powers.
As with the development of other world-changing technologies, such as nuclear power and the tools needed to combat climate change, it may fall to the scientists who understand the technology best to find common ground.