Meet the Debate Champ Who Challenged and Partnered with IBM Project Debater
Harish Natarajan, the 2016 World Debating Championships Grand Finalist and 2012 European Debate Champion, can claim something no other person in the world can — he has both debated against and debated with IBM Project Debater.
Harish challenged Project Debater, IBM’s AI debating technology, earlier this year in a contest that ended in a draw: he managed to sway more of the audience to his position, while the audience found Project Debater more informative.
This week Harish is meeting Project Debater again at the historic Cambridge Union Society, but this time the duo will be on the same side of the podium, along with Professor Sylvie Delacroix, Professor of Law and Ethics at the University of Birmingham, arguing on the motion “AI will bring more harm than good in the next decade.”
Prior to the debate, Harish took a few questions:
What’s your view on the future of AI, especially in the context of this debate? How important do you think it is that AI may effectively augment humans in the near future?
Harish: I think one of the most amazing cultural journeys is to look at what humanity envisioned the future to be at various points. Take two examples. In Kubrick’s 2001, which was released in the late 1960s, the world he constructs is one where we would make routine Pan-Am flights to space. Yet its view of computers, particularly their size and functionality, missed the trajectory of technological development. Back to the Future II, which is partially set in October 2015, imagined a world with flying cars and hoverboards. Yet it is a print-heavy version of 2015, one where the internet was absent. There are thousands of similar examples.
All of that is to say that we are very limited in our ability to predict the future of technology. There are some things that we will get right, and many more that we will get wrong. We may think of a future where many routine and repetitive tasks are done by AI, potentially even one where creative activity is done by AI. But perhaps instead AI is limited and can only find regularities in large sets of data, and humans still have an advantage due to our ability to use causal reasoning and to ask counterfactual questions. At this stage, at least, it looks likely that the real future of AI is to augment humans, rather than to fully replace us. There are many other possibilities, which I guess suggests that I really don’t know what the future of AI is. It certainly will have a large impact, but we can’t say what that impact will be.
“At this stage, at least, it looks likely that the real future of AI is to augment humans, rather than to fully replace us.”
But let’s say that AI can replace humans in the vast majority of activities: what happens then? In some ways this is the fear of a dystopian future, where the returns on capital are high and those who control capital are the beneficiaries. That future, though, extrapolates a lot from the present, imagining capital almost in factory form. Our smartphones can now play chess as well as Deep Blue did, and it’s certainly possible that many of the returns from AI could be captured by small and generally available technology. Maybe we’ll be able to do most things with our own personal AI software, and then the fears of mass inequality or deprivation are overstated. These are very important questions to ask and to be aware of, but I think we should be careful in assuming that we know the answers.
What do you think about the prospect of people working with robots? Do you find it an uncanny-valley-like scenario? Or does it not bother you?
I think I’d get very used to the idea. I’ve grown up with technology that has enhanced our productivity and added to the value of our lives in many ways. Perhaps AI will be different and its limitations will be concerning, although I don’t share those fears.
One way I look at it is by analogy with the past. The famous Kasparov-Deep Blue chess contest may have done more than any other single event to create the ‘man vs machine’ narrative. However, looking back at the footage from those matches, one thing that is striking is what Deep Blue does not do. Amongst the simplest skills for a human is the ability to pick up and move physical chess pieces. Deep Blue relied on a human to move the pieces around the board. Twenty years later, DeepMind’s AlphaGo similarly relied on a human to place pieces on a Go board. These incredibly powerful machines dominate in some of the fields that humans find most complex, yet machines struggle, even compared to toddlers, with motor skills and dexterity. And I think that is probably part of our future: we’ll do slightly different tasks than we did before and avoid the uncanny-valley-type fears, which is why working with machines doesn’t really bother me.
Do you think we are still far away from real applications of a machine like Project Debater? Where do you think it’ll be used first?
Project Debater, or at least the technology underlying it, has huge potential. In essence, it can synthesise more information than a human could in a lifetime and construct credible arguments out of it. There is a range of applications for such technology, including in almost any field that requires research. I think we’re already seeing some of the first immediate uses of it via Speech by Crowd, which was recently demonstrated in the Swiss city of Lugano to poll the opinions of citizens on autonomous vehicles.
What’s the purpose of this experiment on Thursday, in your view?
In February 2019 I was involved in a public debate against IBM’s Project Debater. For a human, debating is no more complicated than either chess or Go, but it is a complicated challenge for a machine. At its core, successful debating involves four components. First, a debater needs to process a large amount of information and construct relevant arguments. Second, those arguments need to be explained to an audience in a clear and structured way. Third, effective debating requires listening to and understanding the natural-language arguments of another person or group of people. Fourth, debating requires making those arguments matter to an audience. Humans do not operate on logic alone, and the careful use of language, emotions, rhetoric, and examples can give a logical claim impact. Project Debater was incredible in its ability to process information from more than 400 million articles and construct relevant arguments from them. Its processing ability is far greater than a human’s. Project Debater thus overcame a difficulty that Alvin Toffler referred to as ‘information overload’: when faced with a large amount of information on a topic, it was nonetheless able to construct clear arguments with its insights. Project Debater was less powerful in its ability to fully comprehend four minutes of natural language and form complete responses.
“My experience with Project Debater thus suggested that there is considerable scope for humans to work with machines. Artificial intelligence at its best has the ability to process large amounts of information and synthesise it.”
Where Project Debater struggled was in its ability to make those arguments matter sufficiently to a human audience. In simple terms, Project Debater lacked ‘common sense’. When debating the motion on whether to subsidise preschool education, Project Debater argued that preschool education was valuable in improving educational outcomes and was worth the cost due to the positive effect on other social indicators. What Project Debater mentioned, but never focused on, was the impact on poverty and the effect on inequality. Common sense reasoning tells us that you can maximise the emotional impact of the logical arguments by focusing on the serious problems of inequality and arguing that they can be at least partially mitigated by subsidising access to preschool. Project Debater, like other artificial intelligence machines, lacks the intuition and common sense reasoning that humans exhibit.
My experience with Project Debater thus suggested that there is considerable scope for humans to work with machines. Artificial intelligence at its best has the ability to process large amounts of information and synthesise it. A machine’s thought process will differ from that of a human, but that is a feature more than a bug. Machines are able to find statistically valid connections that humans may either miss or ignore. This human failing may be due to our propensity to overuse heuristics and default to biases, or to limits in our ability to process information. Humans, meanwhile, have the ability to use common sense, to make decisions that are consistent with deeply held principles of justice, and to provide the emotional intelligence needed to communicate a decision.
This debate, I think, tests that hypothesis. Rather than being pitted against a machine, we speak alongside it. Ideally, it will show the ability of machines and people to work together, and the potential improvements that result in the quality of the cases produced.