Hammond Pens Op-ed on Intelligent Systems

He writes that with intelligent systems, we now have the opportunity to be genuinely smarter.

Prof. Kristian Hammond has written an op-ed discussing his belief that current and future intelligent systems will eventually become partners that help humans improve their decision-making skills by avoiding many of the cognitive biases that commonly plague us.

Prof. Hammond is chief scientist at Narrative Science. Prior to joining the faculty at Northwestern, Hammond founded the University of Chicago’s Artificial Intelligence Laboratory. His research has been primarily focused on artificial intelligence, machine-generated content and context-driven information systems. He currently sits on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR).

Excerpted from a Friday, April 29, 2016 article published by Re/code, titled "Teaching Machines to Avoid Our Mistakes"

The conventional wisdom is that intelligent systems, while good with numbers and maybe facts, are not going to be able to cope with the world of judgment and decision-making. The common assumption is that computers will not be able to deal with the nuance of reasoning that drives the uniquely human ability to assess what is happening in the world and then make reasoned decisions in reaction to that assessment.

And herein lies my problem — the assumption hidden in this belief is that humans are actually good at this sort of reasoning. And it’s not clear that this is true. In particular, we seem prone to reasoning mistakes based on biases in decision-making that hinder us every day. Because of this, I believe that current and future intelligent systems are going to end up being partners that help us to improve our decision-making skills by avoiding many of the cognitive biases that plague us.

Last year, I had the privilege of participating in a United Nations working group tasked with crafting a policy document around issues of intelligent autonomous weapons. As the issue includes the possibility of machines killing people, it is one that is even more fraught with challenges than the expanding role of intelligent systems in the workplace, but it raised questions that can be applied across the category.

One particular moment stood out to me. As we talked about the question of proportionality — the assessment of how many casualties are acceptable given a military goal — the core assumption from the group was that this kind of decision should never be in the hands of a machine. Not a surprising reaction, and my guess is that this is a commonly held point of view.

The drivers behind this assumption include issues of empathy, context, human judgment, dynamically dealing with changing circumstances, etc. Given the current state of machine intelligence, these are perfectly valid arguments. What surprised me, however, was not the idea that machines should never make life-and-death decisions, but the overwhelming assumption and unshakeable belief that people are actually good at such decisions.

The reality is, we’re not.

Please understand, I love my human brothers and sisters, but when it comes to many areas of decision-making, we are pretty much goofballs. Just looking at the work of Richard Thaler (“Misbehaving: The Making of Behavioral Economics”), Daniel Kahneman (“Thinking, Fast and Slow”) and Dan Ariely (“Predictably Irrational: The Hidden Forces That Shape Our Decisions”), we see that even under the most controlled of situations, our decision-making skills are faulty. We cherry-pick data to fit our worldviews, prefer inaction to action, misunderstand nearly everything related to probability, and prefer decisions skewed in the direction of avoiding failure rather than achieving success.

One such bias is anchoring, the common human tendency to rely heavily on the first piece of information offered (the “anchor”) when making decisions. For example, in an effort to contextualize numbers, we try to find other numbers to compare them to, but if we don’t have any relevant numbers at hand, we tend to use the most recent one we have seen or heard.

So let’s say my son approaches me asking for more allowance just as I am reading the news that Cisco is acquiring another company for $1.4 billion. Whatever increase he is requesting will seem small to me because I’ve just been exposed to such a large sum. If I have been looking at the current price of a gallon of gas ($2.56), however, he is not going to get such a positive response.

The amazing thing is that these numbers sway our decision-making even though they are irrelevant; they have nothing to do with the decision at hand. Whatever we hear first tends to provide a starting point against which other numbers will be viewed and compared. And while it skews our thinking, it does so without us even noticing.

Other biases, such as confirmation bias, make it hard for us to see the evidence that is placed in front of us if it is in conflict with our beliefs. So, for example, if we think pitbulls are mean, then we tend to only remember those dogs that have displayed aggressive tendencies. If we think a co-worker is argumentative, we tend to interpret everything that person says as adversarial. Once an idea is in our head, our mechanisms to understand the world become focused on making sure everything we see supports that idea.

My favorite bias is one that almost killed me, the status quo fallacy. This is our tendency to view the world as being the same over time, even in the face of change. It is the view that because something has never happened before, it will not happen now. For me, this played out as a resistance to the idea that I had a pulmonary embolism until my inability to breathe forced the issue and convinced me to go to the hospital — but that is a story for another day.

All of these biases are grounded in absolutely reasonable heuristics that make it possible for us to make decisions quickly. Of course, the status quo is going to hold most of the time. Of course, the world is going to fit our understanding of it most of the time. That these heuristics sometimes fail is not a condemnation of our ability to think, but just a confirmation that they are heuristics.

So what does this mean for intelligent systems?

The ability of intelligent systems to reason in the absence of these biases is powerful. Since technology is not prone to these biases, it can do a better job than we do at managing complex and nuanced decisions.

Imagine, for example, that an investment you made is not performing well, but unfortunately, because you now own the stock and think of yourself as a good investor, you are prone to ownership bias and a bit of cognitive dissonance. Ownership bias causes you to value the things you already have over things you might acquire. The cognitive dissonance is born of the tension between the decision you made, your view of yourself as a good investor, and the current evidence that this stock wasn’t a great buy. These factors come together to make you want to discount the evidence and hold on to the stock longer than you should.

A machine considering the same factors would not be prone to such bias, and could make buy/sell decisions or give advice on the basis of the numbers without a self-defeating sense of unease or embarrassment.

Of course, this is not unique to financial decision-making. For every cognitive bias that disrupts our thinking, there is an opportunity to partner with intelligent systems that can assess the situation and ask us, “Are you sure that this isn’t just you being embarrassed about making a bad decision last week?” or “If you didn’t already own it, would you buy now?”
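To make that second question concrete, here is a minimal sketch, in Python, of a hold-or-sell rule that looks only at forward-looking numbers and never sees the purchase price, so ownership bias and sunk costs cannot enter the decision. The function name, threshold, and figures are hypothetical illustrations, not anything drawn from Hammond's systems or a real trading tool.

```python
# A minimal, hypothetical sketch: a hold/sell rule that ignores what you paid
# for the stock (the anchor an owner clings to) and uses only forward-looking
# numbers. Names and thresholds here are illustrative assumptions.

def should_hold(expected_annual_return: float,
                required_return: float = 0.07) -> bool:
    """Hold only if the expected return clears the bar you would demand
    before buying the stock fresh today ("If you didn't already own it,
    would you buy now?")."""
    return expected_annual_return >= required_return

# Example: you bought at $50, it now trades at $38, and your best estimate
# of its future return is 3% a year. The purchase price never enters the
# calculation, so there is no sunk cost to rationalize.
if __name__ == "__main__":
    decision = "hold" if should_hold(expected_annual_return=0.03) else "sell"
    print(decision)  # -> "sell"
```

The design point is simply that the inputs the rule is allowed to see exclude the information that feeds the bias; the partnership Hammond describes is the system asking the unbiased version of the question on our behalf.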

I am not saying that we should give ourselves over to algorithmic decision-making. We should always remember that just as the machine is free of the cognitive biases that often defeat us, we have information about the world that the machine does not. My argument is that, with intelligent systems, we now have the opportunity to be genuinely smarter.

Going back to the earlier question of autonomous lethal devices and proportionality assessment, it turns out that such decisions usually have equations associated with them. Or, in other words, there are some places where we’re already partnering with intelligent systems. In these situations, there is a calculation or algorithm used to inform the decision-making. The reasoning behind this is simple. People tend to have difficulty with such assessments because emotions bias their thinking. The algorithms’ output provides an anchor to help the human decision-makers think more clearly.

Even here, the algorithms make us into better thinkers. And isn’t that what we want to be?
