Conversations with Dan Ariely on the future of business decision making: Part 2
Artificial Intelligence (AI) has become one of the hottest trends in business. Some people see it as the wave of the future; others dismiss it as a gimmick.
I spoke with Dan Ariely to get his thoughts on how AI is going to affect decision making.
In this post, he answers the questions:
Will people trust computer algorithms over human experts? Are jobs at risk? Is there a danger to trusting artificial intelligence for decision making?
1. Will people trust computer algorithms over human experts?
My answer is that people won’t trust the algorithms, at least not always.
There are a couple of papers showing that when computers fail, people conclude that the computers are broken. When people make mistakes, we are more accepting of their mistakes: we expect them and move on.
We have a mental model of people being wrong from time to time; but if computers or technology get it wrong, we assume something in them is broken. I think this will be a huge obstacle to relying on computer decision making.
Will Artificial Intelligence (AI) put management consultants' and business analysts' positions at risk?
AI would take over the routinized parts of analysis work, and a great deal of that labor is routinized, so it would make those parts unneeded.
But there are still going to be aspects of creativity and uniqueness that only humans can provide.
Computer systems are very good at taking similar cases and making inferences from them.
What computer systems are not good at is handling unique cases and drawing inferences from them. Computers cannot take something unique and say what will happen, because that is more difficult and complex.
We still need people. Our brains evolved with emotional and perceptual systems to permit creative and intuitive decision making.
2. Will people choose to trust human recommendations over AI because they are more comfortable with the process?
Yes! As I said before, once we experience failure, we think differently about the system. We understand our reliance on human agents.
Trust comes from saying: “You and I, our futures are tied together. We have a long-term interest. You have a reputation to maintain. You know my friends. If you mistreat me, my friends will mistrust you.” Trust is aligned with long-term interest.
It will be hard for AI to create long-term mutual interest. An algorithm can go out of business, and it doesn’t matter to the algorithm. So what will be the nature of trust? I think algorithms will have a harder time appealing to the human instinct for trust and long-term alliance, because they are not human agents.
3. Do I see a risk in trusting AI for decision making?
Lots and lots of risk!
There is a risk that the decision algorithm will not maximize our needs: it will not maximize happiness, not optimize for the long term, and not take all the complexities into account. There is a risk that it will not get us to feel good about a decision.
Elon Musk and others are scared, but for different reasons. They are scared about human agency and freedom; their fears are fundamental ones.
My fears are more mundane!
4. Are AI recommendations going to lead to more rational or irrational decision making?
I think it depends on how we build them. My guess is that the next generation will lead to more rational decisions. AI will start as a simple tool that looks at prices and weighs time saved against time wasted.
Waze and Google Maps are algorithms that learn what is going on and make recommendations on which roads to take. They are more rational: they take time into account and try to save us time. They are doing a good job.
I believe the simplest mechanisms will lead to better decision making. The more complex ones will come later and may not necessarily improve our quality of life.