Why So Many Ordinary Chinese Misunderstand AI

2017-11-17 04:56:26

Few scientific fields are more entwined with philosophy than artificial intelligence (AI). In 1950, the article “Computing Machinery and Intelligence” appeared in the British journal Mind. Written by the British scientist Alan Turing, it proposed the famed Turing test, a means of judging whether a machine can exhibit intelligent behavior indistinguishable from that of a human. In attempting to answer questions about the nature of intelligence, Turing’s behaviorist approach to the mind helped bridge the gap between machine program design and psychological research.

As a field, the philosophy of artificial intelligence has yet to truly catch on in China. Conditions are slightly better abroad, where it is acknowledged — albeit tepidly — as a branch of philosophy. For example, Margaret A. Boden, a British philosopher with degrees in computing, biology, and psychology, as well as close contacts with leading AI researchers at the Massachusetts Institute of Technology and Carnegie Mellon University, has established herself as one of the field’s leading figures. Chinese scholars, by comparison, do a poor job of integrating the humanities and the sciences.

Yet there is evident public enthusiasm in China for learning about the effects of AI. Earlier this year, a computer program called AlphaGo inflicted overwhelming defeats on the country’s foremost players of Go, a strategic board game that has ancient roots in China. The news left an indelible mark on the minds of many Chinese, who have traditionally seen Go proficiency as a mark of a keen intellect.

There is a social aspect to China’s fascination with AI, too. More and more members of the aspirational middle class are expressing anxiety over the impending automation of work, unsure whether to see AI development as a force for liberation or impoverishment. This attitude differs from how people tend to see AI in, say, Japan, where groundbreaking developments in AI during the 1990s have stalled and the technology is mostly used to ease the problems of the country’s aging population.


In addition, the cresting wave of Chinese sci-fi writing, with celebrated author Liu Cixin at the forefront of it, has popularized the idea that AI presents us with potentially limitless — not to mention threatening — possibilities.

Natural scientists busy themselves with solving problems in their own specialties and are rarely called upon to reflect on the legitimacy of their work. Philosophers, however, are concerned with grand strategy: Why, in the end, do we pursue these goals in the first place?

The Dartmouth Conference of 1956 was a milestone in the development of artificial intelligence. Held over the summer at Dartmouth College in the United States, it drew academics from far and wide to hash out how advances in computing technology could be used to develop artificial intelligence.

Although no philosophers attended the conference, it nonetheless had a deeply philosophical bent. First of all, the conference’s participants gamely discussed the more profound questions of their field, such as how to achieve human-level machine intelligence, and not just how to use an algorithm to solve a particular problem. Happily, the United States preserved this “philosophical aspect” of AI research well past the conference’s conclusion.

The goal of AI research is to build man-made machines that imitate human intelligence and actions, and eventually to realize machine intelligence. Clearly, to do so, we must first answer the question: “What is intelligence?” Those who believe the nature of intelligence lies in an entity’s ability to perform the same tasks humans can will tend to focus their energies on cramming as much information as possible into the machine’s so-called black box — its “brain.”

Viewed from this perspective, it is the lack of a clearly defined research target that has caused philosophical differences among AI researchers over the nature of “intelligence.” It has also had a significant impact on the technologies they use.

The classic philosophical conundrum of traditional AI is known as the frame problem. As humans, we know that when we, for example, grab a toy block, the only thing we have changed is its position. We have done nothing to affect, say, its color, shape, or size. However, computer systems must be told that objects do not change arbitrarily. Unless you clearly state this fact while defining the “grab” function, it will not work as intended.


Any such definition is necessarily complex, since it requires the programmer to first run through every characteristic of a given object and encode a “frame axiom” for each one — an explicit statement that the characteristic remains unchanged — before the system can perform the action. In simple terms, this means that when you tell the robot to “grab” something, it understands that doing so does not change the shape, size, color, or structure of the block. However, the goal of computing is to execute theoretically simple tasks with as few resources as possible, an aim that the frame problem puts in serious conflict with reality.
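The bookkeeping described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration — the block, its properties, and the “grab” action are invented for the example — showing how an action only specifies what it changes, so a separate frame axiom must be written for every property it leaves alone:

```python
# A world state lists every property of a toy block.
state = {"position": "table", "color": "red", "shape": "cube", "size": "small"}

# The "grab" action declares only what it changes: the block's position.
grab_effects = {"position": "hand"}

# Frame axioms: one explicit persistence statement per untouched property.
# Their number grows with the number of properties the object has.
frame_axioms = [prop for prop in state if prop not in grab_effects]

def apply_action(state, effects, axioms):
    """Build the successor state property by property."""
    new_state = {}
    for prop, value in effects.items():
        new_state[prop] = value          # changed by the action
    for prop in axioms:
        new_state[prop] = state[prop]    # unchanged, asserted by a frame axiom
    return new_state

new_state = apply_action(state, grab_effects, frame_axioms)
print(new_state)
# {'position': 'hand', 'color': 'red', 'shape': 'cube', 'size': 'small'}
```

Without the frame axioms, the system simply has no value for color, shape, or size after the action — it cannot assume, as a human does, that grabbing a block leaves its color alone.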

In fact, some experts have already pointed out that modern training methods for deep learning systems — for example, applied AI technology such as a Go player or a driverless car — require vast sample sizes. Small samples often lead systems with complex parameters to suffer from “overfitting,” a phenomenon in which a highly specialized system perfectly predicts the data it was trained on but fails to adequately predict outcomes from new data. For example, AlphaGo is already better than any human Go player, but the same technology cannot be turned to a different task, such as taking on China’s best table tennis stars.
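Overfitting on a small sample can be demonstrated in a few lines. This is a toy sketch, not anything from AlphaGo: a complex model (a degree-5 polynomial) is fitted to just six noisy points drawn from a simple straight-line relationship, alongside a simple model (a line). The data, seed, and model degrees are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# True relationship: y = x, plus a little noise.
def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, x + rng.normal(0, 0.1, n)

x_train, y_train = sample(6)      # tiny training sample
x_test, y_test = sample(100)      # fresh data the models never saw

# A complex model vs. a simple one, fitted to the same six points.
complex_fit = np.polyfit(x_train, y_train, 5)  # 6 parameters: memorizes
simple_fit = np.polyfit(x_train, y_train, 1)   # 2 parameters: a line

def mse(coeffs, x, y):
    """Mean squared error of a polynomial fit on data (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# The degree-5 polynomial passes through all six training points,
# so its training error is effectively zero...
print(mse(complex_fit, x_train, y_train))
# ...but its error on new data is far larger: it has fitted the noise,
# which is exactly what "overfitting" means.
print(mse(complex_fit, x_test, y_test))
print(mse(simple_fit, x_test, y_test))
```

The complex model “perfectly predicts the data it was trained on” yet typically does worse than the humble straight line on unseen data — the same failure mode that makes small-sample training hazardous for systems with many parameters.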

China’s AI enthusiasts must first set their sights on helping the global AI community achieve artificial general intelligence (AGI), whereby machines could hypothetically perform any intellectual task a human being can. By applying algorithms to cognitive linguistics, we could develop a meaning-based universal inference engine — essentially an adaptable, brain-like technology — combine it with resource-conscious algorithms, and add in concepts from cognitive psychology, including artificial emotions.

AI will not have the power to challenge human intelligence until it is flexible enough to mimic humans in working and thinking across diverse goals, an eventuality so far off that it is not worth China’s middle class losing sleep over yet. It bodes well that many Chinese take a keen interest in AI, but a lasting edge in AI research will not emerge until public interest rises not in sci-fi, but in basic science.

Translator: Kilian O’Donnell; Editors: Lu Hongyong and Matthew Walsh.

(Header image: Two women interact with a robot at a health industry exhibition in Chongqing, Sept. 17, 2017. Chen Chao/VCG)