4 Misconceptions in the AI Research Community

In recent years, AI has appeared a lot in the media. Is it really as miraculous as people say, or just hype?

Let's shed some light on this question by exploring the obstacles to AI progress discussed in the paper “Why AI Is Harder Than We Think” [1].


AI research has gone through several cycles of boom and bust, commonly called the springs and winters of AI. In the 1960s and early 1970s, there were many over-optimistic predictions about AI, but they soon gave way to an AI winter. Enthusiasm surged again in the early 1980s with several new initiatives. In the late 1980s, as problems of brittleness and lack of generality became apparent, hopes for AI stalled once more; the neural-network approaches of that period could not be extended to complex problems. Statistical machine learning then began its spectacular rise in the 1990s and 2000s, but it was not until around 2010 that deep learning rose to monumental status. All of a sudden, the term ‘AI’ became ubiquitous.

The author of [1] also points out four fallacies that have long existed (and still exist) in the AI research community:

  • “Narrow intelligence is on a continuum with general intelligence”: To understand this fallacy, you first need to know what a continuum is: a sequence of elements in which any two adjacent elements are almost the same, yet the two extremes are very different. Imagine that general AI sits at one end of this continuum and progress on a particular AI task sits somewhere along it. Describing an advance on a specific task as “a step toward general AI,” as some papers do today, is therefore meaningless: no matter how small a contribution is, it can always be claimed to lie “on a continuum with general intelligence.”
  • “Easy things are easy and hard things are hard”: This fallacy concerns a paradox: things that humans do effortlessly are hard for AI (like avoiding bumping into other people while walking), while things that humans struggle with can be solved very well by AI (like playing chess). One explanation is that these “simple” tasks rely on subconscious or unconscious processing in the brain, which current AI systems lack. So perhaps researchers should adopt the opposite slogan when developing new AI ideas: “easy things are hard and hard things are easy.” In particular, the complexity of unconscious perception should not be underestimated.
  • “The lure of wishful mnemonics”: This fallacy concerns the words researchers use when describing their programs. They name their programs’ operations with terms such as “UNDERSTAND” or “THOUGHT”; “wishful mnemonics” is what Drew McDermott called these aspirational labels [2]. Using such human-centric words can unintentionally mislead people into believing that an AI has achieved human-like common sense when it has not (a toy code sketch after this list illustrates the gap). Work on AI is still filled with such wishful mnemonics today.
  • “Intelligence is all in the brain”: One view holds that “cognition takes place wholly in the brain,” so that we could in principle upload our cognition and consciousness to computers. On this assumption, some researchers have focused simply on scaling machines up to the capacity of the human brain, increasing computational resources until AI reaches human-level ability. However, simulating the brain alone is not sufficient; there must also be connections with the body. According to a number of cognitive scientists, our thoughts are grounded in, or inextricably associated with, perception, action, and emotion, and our brain and body work together to produce cognition [1].
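
To make the wishful-mnemonics fallacy concrete, here is a minimal Python sketch. It is my own illustration, not code from the paper, and every name in it is hypothetical. The function is called UNDERSTAND, but all it actually does is match keywords against a fixed table of canned replies:

```python
# A toy example of a "wishful mnemonic": the function's name promises far
# more than its mechanism delivers. All names here are hypothetical.

def UNDERSTAND(sentence: str) -> str:
    """Despite the aspirational name, this 'understanding' is nothing more
    than keyword lookup in a fixed table of canned replies."""
    canned_replies = {
        "weather": "It looks sunny today.",
        "chess": "Machines play chess very well.",
    }
    for keyword, reply in canned_replies.items():
        if keyword in sentence.lower():
            return reply
    return "I do not know."  # no common sense to fall back on

if __name__ == "__main__":
    # The output can look intelligent, but renaming UNDERSTAND to
    # match_keywords would describe the program far more honestly.
    print(UNDERSTAND("What is the weather like today?"))
```

The output can look conversational, yet nothing resembling understanding happens inside; the aspirational name does all the persuading.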

From the four fallacies above, you can now better appreciate the difficulties of advancing AI. To keep your understanding clear, pay attention to how the media covers AI and ask whether an idea is being presented in its full context. Still, these problems do not mean we should dismiss the progress in AI altogether. Instead, we can look forward, in a positive way, to the time when AI takes over tasks from humans. No one knows when that will happen: maybe next year, next century, next millennium, or even later. Whether or not it ever does, one thing we can already say is that AI will be able to support us in much of what we do in the near future.


References:

[1] Melanie Mitchell, “Why AI Is Harder Than We Think,” arXiv:2104.12871, 2021.

[2] Noel Sharkey and Lucy Suchman, “Wishful Mnemonics and Autonomous Killing Machines,” 2013.