AI Myths Debunked: Unpacking Five Common Misconceptions
Media coverage of AI has led to misinformation about what it can do now and what it might achieve in the future. It’s time to unpick the hype. By Vassilis Galanos, SJ Bennett, Ruth Aylett and Drew Hemment.
Artificial Intelligence (AI) is the subject of extraordinary publicity about its abilities and possibilities, and the associated hype has spread misinformation and myths. In the news media we see examples of AI used in policing to identify potential suspects and in recruitment to screen CVs, while in films and TV we are shown sentient robots and computer systems. AI is even marketed as something that can autonomously produce its own artworks1.
These many and varied portrayals mean that AI has become a ‘suitcase word’ – it carries multiple meanings that change depending on the context in which it is used2. Here, we debunk five of the common misconceptions that have taken root about AI, using examples from some recent writings on this topic.
1. Personification by Design: Why do we humanise AI?
People often refer to ‘an AI’, as if talking about a person-like entity with greater-than-human intelligence, and maybe even sentience. Yet rather than talking about ‘intelligence’ – a term that psychologists often avoid, as there is no generally agreed definition – it is more accurate to focus on AI as a set of algorithms. We encounter these every day: an algorithm is simply a list of steps to follow in order to achieve a particular outcome, like a cooking recipe or instructions for making a cup of coffee.
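To make this concrete, here is a minimal sketch in Python (the steps and the function are our own illustration, not taken from any real AI system) of an algorithm in exactly this everyday sense:

```python
# An 'algorithm' in the everyday sense described above: an explicit,
# ordered list of steps that produces a particular outcome.
# Illustrative only - the steps and names are our own invention.

def make_coffee(water_ml: int = 250, coffee_g: int = 15) -> str:
    """Follow a fixed sequence of steps; no understanding is required."""
    steps = [
        f"Boil {water_ml} ml of water",
        f"Grind {coffee_g} g of coffee beans",
        "Put the grounds in the filter",
        "Pour the water over the grounds",
        "Wait four minutes, then serve",
    ]
    for step in steps:
        print(step)  # the machine simply executes each instruction in order
    return "a cup of coffee"

make_coffee()
```

The program follows its recipe faithfully, but nothing in it understands what coffee is – which is the sense in which AI systems are algorithms rather than minds.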
In practice, AI is a set of many different pieces of algorithmic software, similar to our smartphone apps or Google Search. However, some of these AI systems are combined into artefacts, such as robots, which are designed to look, sound and behave in human-like ways in order to make them more user-friendly. Examples include voice assistants like Apple’s Siri and Amazon’s Alexa, or the humanoid robot Sophia3. Humans are hard-wired to empathise with what appears similar to us. This can make us feel that systems that mimic speech or emotion actually possess these characteristics – after all, we have been personifying things since the days of tree spirits.
2. Cognition and Capability: Does AI think?
We are frequently shown footage of robots that makes them appear much more capable than they actually are. We are led to believe that scientists can give robots perception, understanding and planning, enabling them to react sensibly to new situations, or even endow them with self-awareness or consciousness. However, most of these videos are staged to one degree or another: in some, the robots are remotely controlled, while others might show one successful run out of a hundred.
Our understanding of how cognition works is patchy and shallow, and AI programs are very specialised: they match some human capabilities only in very specific cases and well-understood environments, and fail when placed in new contexts. Scientists still do not possess the knowledge needed to combine skills of perception, analysis and reaction in the way living creatures can. Even humble lifeforms like slugs display surprisingly complex and nuanced cognition; try searching YouTube for ‘robot fail’ compilations to see the state of our current engineering capabilities4. Overconfidence in designing ‘intelligent’ systems can have disastrous consequences; take driverless cars, which have caused fatal accidents when they meet unexpected situations.
3. Machine Learning and Pattern Recognition: Do robots learn like humans?
A common misconception is that new AI systems learn the same way as humans, only better, and that they are therefore more ‘objective’ and ‘correct’. However, while learning systems can find patterns in large amounts of data that a human might miss, they have no idea of meaning or cause and effect – they are really just making statistical associations. What they learn depends entirely on what data they are given. For example, face analysis systems trained on data containing too few people of colour cannot accurately process faces with dark skin. And their learning is fallible: a robot cleaner can confuse useful items with trash; a medical system might miss significant patient background information; and a robot judge might suggest that someone is guilty because of previous convictions or because of the neighbourhood they live in5. The notion that ‘the computer is always right’ is an illusion.
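To see how shallow this kind of ‘learning’ can be, here is a toy sketch in Python (entirely our own construction, with made-up data – not any deployed system): a ‘model’ that simply tallies associations in historical records, and so reproduces whatever skew those records contain:

```python
# A toy 'learner' that only counts statistical associations.
# The data are invented and deliberately skewed, to show that such a
# system reproduces whatever bias its training data contains.
from collections import Counter, defaultdict

# Hypothetical 'historical' records: (neighbourhood, outcome)
training_data = [
    ("north", "convicted"), ("north", "convicted"), ("north", "acquitted"),
    ("south", "acquitted"), ("south", "acquitted"),
]

# 'Training' is nothing more than tallying co-occurrences per group.
counts = defaultdict(Counter)
for neighbourhood, outcome in training_data:
    counts[neighbourhood][outcome] += 1

def predict(neighbourhood: str) -> str:
    """Return the most frequent historical outcome for this group."""
    return counts[neighbourhood].most_common(1)[0][0]

print(predict("north"))  # 'convicted' - regardless of the individual
print(predict("south"))  # 'acquitted'
```

The ‘model’ has no notion of guilt, causation or fairness; it only counts what co-occurred in the past, which is exactly why skewed data yields skewed predictions.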
4. Labour and Automation: Will AI displace our jobs?
There is a widely held and understandable fear that AI will remove half of current jobs over the next 15 years, replacing them with a plethora of new, low-skilled work. However, we tend to vastly overestimate AI’s capabilities and underestimate the flexibility and judgement needed in many manual jobs. Over the last couple of centuries, every introduction of new and more efficient tools has destroyed some jobs while creating a vast array of others. Moreover, the impact of automation is a political as well as a technological issue, as illustrated by the growth of the gig economy, which has produced a swathe of low-paid, unstable jobs with little oversight. One example of this is Amazon Mechanical Turk, a labour marketplace that is essential to the development of many machine-learning systems6.
Concerns about how AI enables exploitative employment models are certainly valid, and work is needed to combat the impact of such systems. The New Real artist Caroline Sinders gives an example of how to engage people in probing these massive, often opaque systems of unstable, low-paid labour in her provocation TRK (Technically Responsible Knowledge)7, which focuses on Amazon Mechanical Turk.
5. Computing Power and Cognitive Power: What if robots take over?
Given the rapid increase in computing capability over the past decade, it is easy to think that there will be a tipping point – a singularity – when computers become more ‘intelligent’ than humans. Similarly, because robots are often represented as able to ‘become’ sentient and even dangerous, it is presumed that this will inevitably happen. In reality, making computers compute at higher speeds with bigger memories just means they can process the same data, faster – it doesn’t make them more ‘clever’. Speed doesn’t give computers the ability to understand things in the way humans do, or make them more flexible and less failure-prone. How robots are often presented in the press is far out of step with their actual or likely capabilities8. Besides, improvement in cognitive capabilities is curtailed by the physical limits on how much speed we can engineer. Robots, for example, rely on electricity and consume lots of it; their capacity to evolve into sentient beings depends on these very limited energy supplies – what if the batteries run out after a few hours, leaving the robot helpless until recharged?
Epilogue
The abundance of myths and hype that surrounds AI doesn’t mean it is useless or that it shouldn’t be developed and supported as a field. Many applications that we use in our everyday lives are products of simple machine learning, such as recommendation systems (“people who bought this also bought…” or music playlist algorithms), text prediction, and other assistive technologies. Advances in AI help us make important steps forward in medicine and surgery, from robotic prosthetic limbs to object recognition systems for the visually impaired.
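As a minimal sketch of the “people who bought this also bought…” idea (a toy of our own with made-up shopping baskets – production recommenders are far more elaborate), such a system can be little more than co-occurrence counting:

```python
# A toy recommender: count how often items appear in the same basket.
# The baskets are invented; real systems use far richer data and models.
from collections import Counter, defaultdict
from itertools import permutations

baskets = [
    {"kettle", "coffee", "mug"},
    {"coffee", "mug"},
    {"kettle", "teapot"},
]

# Tally, for each item, which other items were bought alongside it.
co_bought = defaultdict(Counter)
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_bought[a][b] += 1

def also_bought(item: str, n: int = 2) -> list:
    """Items most often bought together with `item`."""
    return [other for other, _ in co_bought[item].most_common(n)]

print(also_bought("coffee"))  # ['mug', 'kettle']
```

Again, the system finds a useful pattern without understanding anything about kettles or coffee.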
The important thing to remember is that all these systems and machines, whether we identify them as AI, robots, machine learning or algorithms, are useless without humans making meaning out of them9.
1. Christie’s. 2018. Is artificial intelligence set to become art’s next medium? https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx
2. Rodney Brooks. 2017. The Seven Deadly Sins of AI Predictions. MIT Technology Review.
https://www.technologyreview.com/2017/10/06/241837/the-seven-deadly-sins-of-ai-predictions/
Ruth Aylett. 2019. AI-DA: A robot Picasso or smoke and mirrors?
https://medium.com/@r.s.aylett/ai-da-a-robot-picasso-or-smoke-and-mirrors-a77d4464dd92
3. Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant. 2017. Of, For, and By the People: The Legal Lacuna of Synthetic Persons. Artificial Intelligence and Law, volume 25, pages 273–291. (Open access)
https://link.springer.com/article/10.1007/s10506-017-9214-9
4. Elizabeth Fernandes. 2019. AI Is Not Similar to Human Intelligence. Thinking So Could Be Dangerous. Forbes.
IEEE Spectrum. 2015. A Compilation of Robots Falling Down at the DARPA Robotics Challenge.
https://www.youtube.com/watch?v=g0TaYhjpOfo
5. Thomas C. Redman. 2018. If Your Data Is Bad, Your Machine Learning Tools Are Useless. Harvard Business Review.
https://hbr.org/2018/04/if-your-data-is-bad-your-machine-learning-tools-are-useless
Brian Cantwell Smith. 2019. The Promise of Artificial Intelligence: Reckoning and Judgment. The MIT Press.
https://mitpress.mit.edu/books/promise-artificial-intelligence
6. Jonathan Vanian. 2018. When it comes to A.I., worry about ‘job churn’ instead of ‘job loss’. Fortune.
Angela Chen. 2019. How Silicon Valley’s successes are fueled by an underclass of ‘ghost workers’. The Verge.
https://www.theverge.com/2019/5/13/18563284/mary-gray-ghost-work-microwork-labor-silicon-valley-automation-employment-interview
7. See more at: https://carolinesinders.com/trk/
8. Luciano Floridi. 2015. Singularitarians, AItheists, and Why the Problem with Artificial Intelligence is HAL (Humanity At Large), not HAL. APA Newsletter, volume 14(2), pages 8–11.
Amnon H. Eden, Eric Steinhart, David Pearce & James H. Moor. 2012. Singularity Hypotheses: An Overview. In Singularity Hypotheses: A Scientific and Philosophical Assessment, pages 1–12. Springer, Berlin, Heidelberg.
https://www.springer.com/gp/book/9783642325595
9. Toby Walsh. 2018. 2062: The World That AI Made. La Trobe University Press & Black Inc., Australia.