Hector Levesque, Professor of Computer Science, University of Toronto
In the forty years since the publication of “Computers and Society” by Gotlieb and Borodin, much has changed in how the potential and promise of Artificial Intelligence (AI) are viewed.
The general view of AI in 1973 was not so different from the one depicted in the movie “2001: A Space Odyssey”, that is, that by the year 2001 or so, there would be computers intelligent enough to be able to converse naturally with people. Of course it did not turn out this way. Even now no computer can do this, and none are on the horizon. In my view, the AI field slowly came to the realization that the hurdles that needed to be cleared to build a HAL 9000 went well beyond engineering, that there were serious scientific problems that would need to be resolved before such a goal could ever be attained. The field continued to develop and expand, of course, but by and large turned away from the goal of a general autonomous intelligence to focus instead on useful technology. Instead of attempting to build machines that can converse in English, for example, we concentrate on machines that can respond to spoken commands or locate phrases in large volumes of text. Instead of a machine that can oversee the housework in a home, we concentrate on machines that can perform certain specific tasks, like vacuuming.
Many of the most impressive of these applications rely on machine learning, and in particular, learning from big data. The ready availability of massive amounts of online data was truly a game changer. Coupled with new ideas about how to do automatic statistics, AI moved in a direction quite unlike what was envisaged in 1973. At the time, it was generally believed that the only way to achieve flexibility, robustness, versatility, and so on in computer systems was to sit down and program them. Since then it has become clear that this is very difficult to do because the necessary rules are so hard to come by. Consider riding a bicycle, for example. Under what conditions should the rider lean to the left or to the right, and by how much? Instead of trying to formulate precise rules for this sort of behaviour in a computer program, a system could instead learn the necessary control parameters automatically from large amounts of data about successful and unsuccessful bicycle rides. For a very wide variety of applications, this machine learning approach to building complex systems has worked out extremely well.
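The contrast between programming a rule and learning it can be made concrete with a minimal sketch of my own (not from the essay, and deliberately simplified): instead of writing down how far a rider should lean, we fit the lean angle as a function of speed and turn rate from example rides. The data here is synthetic and the names (`speed`, `turn_rate`, `true_w`) are illustrative assumptions.

```python
# A minimal sketch (mine, not from the essay) of the learning-from-data idea:
# rather than hand-coding a rule for how far a rider should lean, fit the
# lean angle as a function of speed and turn rate from example rides.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row is (speed, turn_rate) observed on a
# ride, labelled with the lean angle the rider actually used.
X = rng.uniform([2.0, -1.0], [10.0, 1.0], size=(200, 2))
true_w = np.array([0.05, 0.8])              # the "rule" we never write down
y = X @ true_w + rng.normal(0, 0.01, 200)   # observed lean angles, plus noise

# Least-squares fit: the control parameters are learned from data,
# not programmed by hand.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("learned parameters:", w)
```

The point is only that the relationship is recovered from examples; no one ever had to articulate the rule itself.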
However, it is useful to remember that this is an AI technology whose goal is not necessarily to understand the underpinnings of intelligent behaviour. Returning to English, for example, consider answering a question like this:
The ball crashed right through the table because it was made of styrofoam. What was made of styrofoam, the ball or the table?
Contrast that with this one:
The ball crashed right through the table because it was made of granite. What was made of granite, the ball or the table?
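To make the role of background knowledge concrete, here is a toy sketch of my own (not a proposal from the essay): a hand-coded hardness fact per material, plus one rule about what "crashing through" implies, suffices for this one pair of sentences. The `HARDNESS` table and the threshold are illustrative assumptions.

```python
# Toy illustration (mine, not from the essay) of resolving "it" in
# "The ball crashed right through the table because it was made of X"
# using background knowledge about materials.
HARDNESS = {"styrofoam": 1, "granite": 9}   # assumed 1-10 scale

def resolve(material: str) -> str:
    """Return which object ("ball" or "table") was made of `material`."""
    # Background rule: in "X crashed through Y", the fragile object is the
    # one that gave way (the table); the hard object is the one that broke
    # through (the ball).
    return "table" if HARDNESS[material] < 5 else "ball"

print(resolve("styrofoam"))  # table
print(resolve("granite"))    # ball
```

Of course, this handles exactly one sentence pair; the open problem is how to make this kind of knowledge processing work in general, not how to hand-code it case by case.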
People (who know what styrofoam and granite are) can easily answer such questions, but it is far from clear how learning from big data would help. What seems to be at issue here is background knowledge: knowing some relevant properties of the materials in question, and being able to apply that knowledge to answer the question. Many other forms of intelligent behaviour seem to depend on background knowledge in just this way. But what is much less clear is how all this works: what it would take to make this type of knowledge processing work in a general way. At this point, forty years after the publication of the Gotlieb and Borodin book, the goal seems as elusive as ever.

Hector Levesque received his BSc, MSc and PhD, all from the University of Toronto. After a stint at the Fairchild Laboratory for Artificial Intelligence Research in Palo Alto, he joined the Department of Computer Science at the University of Toronto, where he has remained since 1984. He has done extensive work on a variety of topics in knowledge representation and reasoning, including cognitive robotics, theories of belief, and tractable reasoning. He has published three books and over 60 research papers, four of which have won best paper awards of the American Association for Artificial Intelligence (AAAI); two others won similar awards at other conferences. Two of the AAAI papers went on to receive AAAI Classic Paper awards, and another was given an honourable mention. In 2006, a paper written in 1990 was given the inaugural Influential Paper Award by the International Foundation for Autonomous Agents and Multi-Agent Systems. Hector Levesque was elected to the Executive Council of the AAAI, was a co-founder of the International Conference on Principles of Knowledge Representation and Reasoning, and is on the editorial board of five journals, including the journal Artificial Intelligence. In 1985, he became the first non-American to receive IJCAI’s Computers and Thought Award.
He was the recipient of an E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada for 1990-91. He is a founding Fellow of the AAAI and was a Fellow of the Canadian Institute for Advanced Research from 1984 to 1995. He was elected to the Royal Society of Canada in 2006, and to the American Association for the Advancement of Science in 2011. In 2012, Hector Levesque received the Lifetime Achievement Award of the Canadian AI Association, and in 2013, the IJCAI Award for Research Excellence.