What separates humans from AI? It’s doubt
Computers can drive our cars and beat us at chess, but what they lack (for now) is our ability to know when we don’t know
Back in April, the FT ran an intriguing piece by Stephen M. Fleming (Wellcome/Royal Society Sir Henry Dale Fellow at University College London) examining metacognition in AI. The piece set out Fleming's concern about the most important way in which even the most sophisticated AI machines differ from people: they cannot doubt, and so cannot question their own correctness or, by implication, their decisions. As he notes:
AI researchers have known for some time that machine-learning technology tends to be overconfident. For instance, imagine I ask an artificial neural network — a piece of computer software inspired by how the brain works, which can learn to perform new tasks — to classify a picture of a dolphin, even though all it has seen are cats and dogs. Unsurprisingly, having never been trained on dolphins, the network cannot issue the answer “dolphin”. But instead of throwing up its hands and admitting defeat, it often gives wrong answers with high confidence. In fact, as a 2019 paper from Matthias Hein’s group at the University of Tübingen showed, as the test images become more and more different from the training data, the AI’s confidence goes up, not down — exactly the opposite of what it should do.
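The phenomenon Fleming describes is easy to reproduce. Below is a minimal sketch (my own illustration, not Fleming's example or the Tübingen group's experiment) using a toy two-layer ReLU classifier with random weights standing in for a trained cat-vs-dog network: as an input is pushed further from anything resembling the training data, the softmax confidence climbs toward 1 instead of dropping.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilise before exponentiating
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with random weights; a hypothetical stand-in
# for a "cat vs dog" classifier, not the architecture used in the paper.
W1 = rng.normal(size=(32, 8))
W2 = rng.normal(size=(8, 2))

def predict(x):
    h = np.maximum(W1.T @ x, 0.0)   # ReLU hidden layer
    return softmax(W2.T @ h)        # probabilities for "cat" vs "dog"

x = rng.normal(size=32)             # an input unlike anything "trained" on

# Push the input further and further from the data the network "knows":
for scale in (1, 10, 100, 1000):
    p = predict(scale * x)
    print(f"scale={scale:>5}  max confidence={p.max():.4f}")
```

Because this little network has no bias terms, scaling the input simply scales the logits, so the softmax is driven ever closer to a one-hot, maximally confident answer. Hein's group showed that real ReLU networks behave in essentially the same way far from their training data, which is why the confidence goes up rather than down.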