8 Matching Annotations
- Jul 2018
-
aeon.co
-
Kahneman concluded his aforementioned presentation to academics by arguing that computers or robots are better than humans on three essential dimensions: they are better at statistical reasoning and less enamoured with stories; they have higher emotional intelligence; and they exhibit far more wisdom than humans.
A little over-the-top?
-
‘omniscience in the observer’
-
To illustrate, consider Isaac Newton.
But there are examples of where our theory has led us astray, the geocentric vision of the universe being one. If not for that attachment to previous thinking, we might have arrived at the heliocentric truth more quickly.
'Even as He hath revealed: "As oft as an Apostle cometh unto you with that which your souls desire not, ye swell with pride, accusing some of being impostors and slaying others."' - Kitab-i-Iqan
-
However, computers and algorithms – even the most sophisticated ones – cannot address the fallacy of obviousness. Put differently, they can never know what might be relevant.
One goal of systems science and modelling is to explore what might be relevant and to give us better heuristics.
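A minimal sketch of what such a heuristic can look like (my own illustration, not from the article): a fast-and-frugal "take-the-best" rule that decides between two options using only the first cue that discriminates, ignoring everything else. The cue names and data below are made up.

```python
from typing import Dict, List

def take_the_best(option_a: Dict[str, bool], option_b: Dict[str, bool],
                  cues_by_validity: List[str]) -> str:
    """Return 'A', 'B', or 'guess' based on the first discriminating cue."""
    for cue in cues_by_validity:          # cues ordered from most to least valid
        a, b = option_a.get(cue, False), option_b.get(cue, False)
        if a != b:                        # first cue that tells the options apart
            return "A" if a else "B"
    return "guess"                        # no cue discriminates: pick at random

# Which of two cities is larger? Only 'has_airport' discriminates here.
city_a = {"is_capital": False, "has_airport": True}
city_b = {"is_capital": False, "has_airport": False}
print(take_the_best(city_a, city_b, ["is_capital", "has_airport"]))  # -> 'A'
```

The point of the sketch is that the heuristic never weighs all the evidence; it stops at the first cue that settles the question, which is one way of operationalising "what might be relevant".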
-
At the other extreme we have behavioural economics, which focuses on human bias and blindness by pointing out biases or obvious things that humans miss.
-
So, given the problem of too much evidence – again, think of all the things that are evident in the gorilla clip – humans try to hone in on what might be relevant for answering particular questions. We attend to what might be meaningful and useful
CONSUMAT, heuristics - actually, this does fit with Thinking, Fast and Slow. But maybe the divide isn't so clear - a spectrum? (See the sketch below.)
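A minimal sketch of the CONSUMAT idea as I recall it (not from the article; threshold values and labels are simplified): agents switch between fast, habitual modes and slow, deliberative ones depending on how satisfied and how uncertain they are.

```python
def choose_strategy(satisfaction: float, uncertainty: float,
                    sat_threshold: float = 0.5,
                    unc_threshold: float = 0.5) -> str:
    """Pick a decision mode on the satisfaction/uncertainty plane."""
    if satisfaction >= sat_threshold and uncertainty < unc_threshold:
        return "repetition"        # fast: repeat past behaviour (habit)
    if satisfaction >= sat_threshold:
        return "imitation"         # fast-ish: copy what similar others do
    if uncertainty < unc_threshold:
        return "deliberation"      # slow: weigh the options explicitly
    return "social comparison"     # slow-ish: compare options against peers

# Because satisfaction and uncertainty are continuous, the fast/slow divide
# looks more like a spectrum than a binary switch.
for s, u in [(0.9, 0.1), (0.9, 0.8), (0.2, 0.1), (0.2, 0.8)]:
    print(f"satisfaction={s}, uncertainty={u} -> {choose_strategy(s, u)}")
```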
-
‘blind to the obvious, and that we also are blind to our blindness’
-
building on Herbert Simon’s 1950s work on bounded rationality
-