Computational Thinking Benefits Society

Jeannette M. Wing, Corporate Vice President, Microsoft Research

Computer science has produced, at an astonishing pace, technology that has transformed our lives with profound economic and societal impact.  Computer science’s effect on society was foreseen forty years ago by Gotlieb and Borodin in their book Social Issues in Computing.  Moreover, in the past few years, we have come to realize that computer science offers not just useful software and hardware artifacts, but also an intellectual framework for thinking, what I call “computational thinking” [Wing06].

Everyone can benefit from thinking computationally.  My grand vision is that computational thinking will be a fundamental skill—just like reading, writing, and arithmetic—used by everyone by the middle of the 21st Century.

This article describes how pervasive computational thinking has become in research and education.  Researchers and professionals in an increasing number of fields beyond computer science have been reaping benefits from computational thinking.  Educators in colleges and universities have begun to change undergraduate curricula to promote computational thinking to all students, not just computer science majors.  Before elaborating on this progress toward my vision, let’s begin by describing what is meant by computational thinking.

1.      What is computational thinking?

1.1   Definition

I use the term “computational thinking” as shorthand for “thinking like a computer scientist.”  To be more descriptive, however, I now define computational thinking (with input from Al Aho at Columbia University, Jan Cuny at the National Science Foundation, and Larry Snyder at the University of Washington) as follows:

Computational thinking is the thought processes involved in formulating a problem and expressing its solution(s) in such a way that a computer—human or machine—can effectively carry them out.

Informally, computational thinking describes the mental activity involved in formulating a problem so that it admits a computational solution.  The solution can be carried out by a human or a machine.  This latter point is important.  First, humans compute.  Second, people can learn computational thinking without a machine.  Also, computational thinking is not just about problem solving, but also about problem formulation.

In this definition I deliberately use technical terms.  By “expressing” I mean creating a linguistic representation for the purpose of communicating a solution to others, whether people or machines.  The expressiveness of a language, e.g., a programming language, can often make the difference between an elegant and an inelegant solution, e.g., between a program that is provably free of certain classes of bugs and one that is not.  By “effective,” in the context of the Turing machine model of computation, I mean “computable” (or “decidable” or “recursive”); however, it remains open research to revisit models of computation, and thus the meaning of “effective,” when we consider what is computable by, say, biological or quantum computers [Wing08] or what is solvable by humans [Levin13, Wing08].

1.2   Abstraction is Key

Computer science is the automation of abstractions[1].  So, the most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, in generalizing from specific instances, and in parameterization. It is used to let one object stand for many. It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types.
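To make this concrete, here is a minimal sketch (in Python; the `Stack` class and its operations are invented for illustration) of an abstract data type: users see only the operations, never the list that happens to implement them.

```python
class Stack:
    """A toy abstract data type: callers see push/pop/peek,
    not the underlying representation."""

    def __init__(self):
        self._items = []  # hidden representation; could be swapped for a linked list

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        if not self._items:
            raise IndexError("peek at empty stack")
        return self._items[-1]

# One abstraction standing for many uses: any last-in, first-out process.
s = Stack()
for token in ["a", "b", "c"]:
    s.push(token)
print(s.pop())  # 'c' -- the behaviour is fixed even if the representation changes
```

Because the representation is hidden, it could be replaced (by a linked list, say) without changing any code that uses the type, which is exactly the point of the abstraction.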

Abstraction gives us the power to scale and deal with complexity. Applying abstraction recursively allows us to build larger and larger systems, with the base case (at least for traditional computer science) being bits (0’s and 1’s). In computing, we routinely build systems in terms of layers of abstraction, allowing us to focus on one layer at a time and on the formal relations (e.g., “uses,” “refines” or “implements,” “simulates”) between adjacent layers.  When we write a program in a high-level language, we are building on lower layers of abstractions. We do not worry about the details of the underlying hardware, the operating system, the file system, or the network; furthermore, we rely on the compiler to correctly implement the semantics of the language. The narrow-waist architecture of the Internet demonstrates the effectiveness and robustness of appropriately designed abstractions: the simple TCP/IP layer at the middle has enabled a multitude of unforeseen applications to proliferate at layers above, and a multitude of unforeseen platforms, communications media, and devices to proliferate at layers below.
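As a small hedged illustration of programming atop these layers (assuming Python’s standard library and network access), the few lines below speak HTTP while saying nothing about TCP/IP, the operating system, or the physical media underneath:

```python
# Fetch a page over HTTP; every lower layer -- TCP/IP, the OS's network
# stack, the hardware and routes in between -- is abstracted away.
import urllib.request

with urllib.request.urlopen("http://example.com/") as response:
    page = response.read()

print(len(page), "bytes fetched without naming a single layer below HTTP")
```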


[1] Aho and Ullman in their 1992 Foundations of Computer Science textbook define Computer Science to be “The Mechanization of Abstraction.”

2.      Computational Thinking and Other Disciplines

Computational thinking has already influenced the research agenda of all science and engineering disciplines. Starting decades ago with the use of computational modeling and simulation, through today’s use of data mining and machine learning to analyze massive amounts of data, computation is recognized as the third pillar of science, along with theory and experimentation [PITAC05].

Consider just biology. The expedited sequencing of the human genome through the “shotgun algorithm” awakened the interest of the biology community in computational concepts (e.g., algorithms and data structures) and computational approaches (e.g., massive parallelism for high throughput), not just computational artifacts (e.g., computers and networks).  In 2005, the Computer Science and Telecommunications Board of the National Research Council (NRC) published a 468-page report laying out a research agenda to explore the interface between biology and computing [NRC05].  In 2009, the NRC Life Sciences Board’s study on Biology in the 21st Century recommended that “within the national New Biology Initiative, priority be given to the development of the information technologies and sciences that will be critical to the success of the New Biology” [NRC09].  Now at many colleges students can choose to major in computational biology.

The volume and rate at which scientists and engineers are now collecting and producing data—through instruments, experiments, simulations, and crowd-sourcing—are demanding advances in data analytics, data storage and retrieval, as well as data visualization. The complexity of the multi-dimensional systems that scientists and engineers want to model and analyze requires new computational abstractions. These are just two reasons that every scientific directorate and office at the National Science Foundation participated in the Cyber-enabled Discovery and Innovation, or CDI, program, an initiative started when I first joined NSF in 2007.  By the time I left, the fiscal year 2011 budget request for CDI was $100 million. CDI was in a nutshell “computational thinking for science and engineering [CDI11].”

Computational thinking has also begun to influence disciplines and professions beyond science and engineering. For example, areas of active study include algorithmic medicine, computational economics, computational finance, computational law, computational social science, digital archaeology, digital arts, digital humanities, and digital journalism. Data analytics is used in training Army recruits, detecting email spam and credit card fraud, recommending movies and books, ranking the quality of services, and personalizing coupons at supermarket checkouts.   Machine learning is used by every major IT company for understanding human behavior and thus to tailor a customer’s experience to his or her own preferences.  Every industry and profession talks about Big Data and Cloud Computing.  New York City and Seattle are vying to be named Data Science Capital of the US [Miller13].
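As a toy sketch of the kind of data analytics mentioned above (the four training messages and their labels are invented; real systems learn from millions of labeled examples with far richer features), spam detection can be phrased as learning from data, for instance with scikit-learn:

```python
# Toy spam detector: learn word statistics from labeled messages.
# The data and labels here are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win money now", "cheap pills offer",
            "lunch at noon?", "meeting notes attached"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)  # bag-of-words counts

model = MultinomialNB().fit(features, labels)
print(model.predict(vectorizer.transform(["win a cheap offer now"])))  # likely [1]
```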

3.      Computational Thinking and Education

In the early 2000s, computer science had a moment of panic. Undergraduate enrollments were dropping.   Computer science departments stopped hiring new faculty.  One reason I wrote my 2006 CACM article on computational thinking was to inject some positive thinking into our community.  Rather than bemoan the declining interest in computer science, I wanted us to shout to the world about the joy of computing and, more importantly, about the importance of computing.  Sure enough, today enrollments are skyrocketing (again).  Demand for graduates with computing skills far exceeds the supply; six-figure starting salaries offered to graduates with a B.S. in Computer Science are not uncommon.

3.1 Undergraduate Education

Campuses throughout the United States and abroad are revisiting their undergraduate curricula in computer science. They are changing their first course in computer science to cover fundamental principles and concepts, not just programming.   For example, Carnegie Mellon revised its undergraduate first-year courses to promote computational thinking for non-majors [BryantSutnerStehlik10].  Harvey Mudd redesigned its introductory course with stellar success, including increasing the participation of women in computing [Klawe13].  At Harvard, “In just a few short years CS50 has rocketed from being a middling course to one of the biggest on campus, with nearly 700 students and an astounding 102-member staff [Farrell13].”  In MIT’s introductory computer science course, Eric Grimson and John Guttag say in their opening remarks, “I want to begin talking about the concepts and tools of computational thinking, which is what we’re primarily going to focus on here. We’re going to try and help you learn how to think like a computer scientist [GrimsonGuttag08].”

Many such introductory courses are now offered to, or even required of, non-majors.  Depending on the school, the requirement might be a general requirement (CMU) or a distribution requirement, e.g., satisfying a science and technology (MIT), empirical and mathematical reasoning (Harvard), or quantitative reasoning (Princeton) requirement.

3.2 What about K-12?

Not until computational thinking is routinely taught at the K-12 level will my vision be truly realized.  Surprisingly, as a community, we have made faster progress at spreading computational thinking to K-12 than I had expected.  We have professional organizations, industry, non-profits, and government policymakers to thank.

The College Board, with support from NSF, is designing a new Advanced Placement (AP) course that covers the fundamental concepts of computing and computational thinking (see the CS Principles Project).  Phase 2 of the CS Principles project is under way and will lead to an operational exam in 2016-2017.  Roughly forty high schools and ten colleges will pilot this course over the next three years.  Not coincidentally, the changes to the Computer Science AP course are consistent with the changes in introductory computer science courses taking place now on college campuses.

Another boost is expected to come from the NSF’s Computing Education for the 21st Century (CE21) program, started in September 2010 and designed to help K-12 students, as well as first- and second-year college students, and their teachers develop computational thinking competencies. CE21 builds on the successes of the two prior NSF programs, CISE Pathways to Revitalized Undergraduate Computing Education (CPATH) and Broadening Participation in Computing (BPC). CE21 has a special emphasis on activities that support the CS 10K Project, an initiative launched by NSF through BPC.  CS 10K aims to catalyze a revision of the high school curriculum, with the new AP course as a centerpiece, and to prepare 10,000 teachers to teach the new courses in 10,000 high schools by 2015.

Industry is also promoting the importance of computing for all.  Since 2006, with help from Google and later Microsoft, Carnegie Mellon has held summer workshops for high school teachers called “CS4HS.” These workshops are designed to deliver the message that there is more to computer science than computer programming.  CS4HS spread in 2007 to UCLA and the University of Washington. By 2013, under the auspices of Google, CS4HS had spread to 63 schools in the United States, 20 in China, 12 in Australia, 3 in New Zealand, and 28 in Europe, the Middle East and Africa. Also at Carnegie Mellon, Microsoft Research funds the Center for Computational Thinking, which supports both research and educational outreach projects.

Computing in the Core is a “non-partisan advocacy coalition of associations, corporations, scientific societies, and other non-profits that strive to elevate computer science education to a core academic subject in K-12 education, giving young people the college- and career-readiness knowledge and skills necessary in a technology-focused society.”  Serving on Computing in the Core’s executive committee are the Association for Computing Machinery, the Computer Science Teachers Association, Google, the IEEE Computer Society, Microsoft, and the National Center for Women and Information Technology.

Code.org is a newly formed public non-profit and sister organization of Computing in the Core.  Its current corporate donors are Allen and Company, Amazon, Google, JPMorgan Chase & Co., Juniper Networks, LinkedIn, Microsoft, and Salesforce.  These companies and another 20 partners came together out of a need for more professionals trained with computer science skills.  Code.org hosts a rich suite of educational materials and tools that run on many platforms, including smart phones and tablets.  It lists local high schools and camps throughout the US where students can learn computing.

Computer science has also gotten attention from elected officials. In May 2009, computer science thought leaders held an event on Capitol Hill to call on policymakers to make sure that computer science is included in all federally-funded educational programs that focus on science, technology, engineering and mathematics (STEM) fields. The U.S. House of Representatives designated the first week of December as Computer Science Education Week, originally conceived by Computing in the Core and produced in 2013 by Code.org.  In June 2013, U.S. Representatives Susan Brooks (R-IN) and Jared Polis (D-CO), among others, introduced legislation to bolster K-12 computer science education efforts.  A month later, U.S. Senators Robert Casey (D-PA) and Marco Rubio (R-FL) followed suit with similar legislation.

Computational thinking has also spread internationally.  In January 2012, the British Royal Society published a report stating that “’Computational thinking’ offers insightful ways to view how information operates in many natural and engineered systems” and recommending that “Every child should have the opportunity to learn Computing at school.” (“School” in the UK covers the same range as K-12 in the US.)  Following that report, the UK Department for Education published a proposed national curriculum of study for computing in February 2013 [UKEd13], with the final version of the curriculum becoming statutory in September 2014.  In other words, by Fall 2014, all K-12 students in the UK will be taught concepts in computer science appropriate for their grade level.  Much of the legwork behind this achievement was accomplished by the grassroots effort called “Computing at School,” which is helping to organize the teacher training in the UK needed to achieve the 2014 goal.

Asian countries are also making rapid strides in the same direction.  I am aware of efforts similar to those in the US and the UK taking place in China, Korea, and Singapore.

4.      Progress So Far and Work Still to Do

Nearly eight years after the publication of my CACM Viewpoint, how far have we come?  We have come a long way, along all dimensions: computational thinking has influenced the thinking in many other disciplines and many professional sectors; computational thinking, through revamped introductory computer science courses, has changed undergraduate curricula.  We are making inroads in K-12 education worldwide.

While we have made incredible progress, our journey has just begun.  We will see more and more disciplines make scholarly advances through the use of computing.  We will see more and more professions transformed by their reliance on computing for conducting business.  We will see more and more colleges and universities requiring an introductory computer science course to graduate.  We will see more and more countries adding computer science to K-12 curricula.

We need to continue to build on our momentum.  We still need to explain better to non-computer scientists what we mean by computational thinking and the benefits of being able to think computationally.  We need to continue to promote with passion and commitment the importance of teaching computer science to K-12 students.  Minimally, we should strive to ensure that every high school student around the world has access to learning computer science.  The true impact of what we are doing now will not be seen for decades.

Computational thinking is not just or all about computer science. The educational benefits of being able to think computationally—starting with the use of abstractions—enhance and reinforce intellectual skills, and thus can be transferred to any domain.  Science, society, and our economy will benefit from the discoveries and innovations produced by a workforce trained to think computationally.

Personal Notes and Acknowledgements

Parts of this article, which I wrote for Carnegie Mellon School of Computer Science’s publication The Link [Wing11], were based on earlier unpublished writings authored with Jan Cuny and Larry Snyder.  I thank them for letting me use our shared prose and for their own efforts in advocating computational thinking.

Looking back over how much progress has been made in spreading computational thinking, I am grateful for the opportunity I had while I was the Assistant Director of the Computer and Information Science and Engineering (CISE) Directorate of the National Science Foundation.  I had a hand in CDI and CE21 from their start, allowing me—through the reach of NSF—to spread computational thinking directly to the science and engineering research (CDI) and education (CE21) communities in the US.  Jan Cuny’s initiative and persistence led to NSF’s efforts with the College Board and beyond.

Since the publication of my CACM article, which has been translated into French and Chinese, I have received hundreds of email messages from people of all walks of life—from a retired grandfather in Florida to a mother in central Pennsylvania to a female high school student in Pittsburgh, from a Brazilian news reporter to the head of a think tank in Sri Lanka to an Egyptian student blogger, from artists to software developers to astrophysicists—thanking me for inspiring them and eager to support my cause.  I am grateful for everyone’s support.

Bibliography and Further Reading

Besides the citations given in the text, I recommend the following references: CSUnplugged [BellWittenFellows10], for teaching young children about computer science without using a machine; the textbook used in MIT’s 6.00 Introduction to Computer Science and Programming [Guttag13]; a soon-to-be-published book on the breadth of computer science, inspired by the Feynman lectures on physics [HeyPapay14]; a framing for principles of computing [Denning10]; and two National Research Council workshop reports [NRC10, NRC11], early attempts to scope out the meaning and benefits of computational thinking.

[BellWittenFellows10] Tim Bell, Ian H. Witten, and Mike Fellows, “Computer Science Unplugged,” http://csunplugged.org/, March 2010.

[BryantSutnerStehlik10] Randal E. Bryant, Klaus Sutner, and Mark Stehlik, “Introductory Computer Science Education: A Deans’ Perspective,” Technical Report CMU-CS-10-140, August 2010.

[CDI11] Cyber-enabled Discovery and Innovation, National Science Foundation, http://www.nsf.gov/crssprgm/cdi/ , 2011.

[Denning10] Peter J. Denning, “The Great Principles of Computing,” American Scientist, pp. 369-372, 2010.

[GrimsonGuttag08] Eric Grimson and John Guttag, 6.00 Introduction to Computer Science and Programming, Fall 2008. (Massachusetts Institute of Technology: MIT OpenCourseWare). http://ocw.mit.edu (accessed January 3, 2014). License: Creative Commons Attribution-Noncommercial-Share Alike.

[Guttag13] John V. Guttag, Introduction to Computation and Programming Using Python, MIT Press, 2013.

[HeyPapay14] Tony Hey and Gyuri Papay, The Computing Universe, Cambridge University Press, scheduled for June 2014.

[Klawe13] Maria Klawe, “Increasing the Participation of Women in Computing Careers,” Social Issues in Computing, http://socialissues.cs.toronto.edu/2013/12/women/, 2013.

[Farrell13] Michael B. Farrell, “Computer science fills seats, needs at Harvard,” Boston Globe, http://www.bostonglobe.com/business/2013/11/26/computer-science-course-breaks-stereotypes-and-fills-halls-harvard/7XAXko7O392DiO1nAhp7dL/story.html, November 26, 2013.

[Levin13] Leonid Levin, “Universal Heuristics: How Do Humans Solve ‘Unsolvable’ Problems?,” in Algorithmic Probability and Friends: Bayesian Prediction and Artificial Intelligence, Lecture Notes in Computer Science, Vol. 7070, 2013, pp. 53-54.

[Miller13] Claire Cain Miller, “Geek Appeal: New York vs. Seattle,” New York Times http://www.nytimes.com/2013/04/14/education/edlife/new-york-and-seattle-compete-for-data-science-crown.html?_r=0, April 14, 2013.

[NRC05] Frontiers at the Interface of Computing and Biology, National Research Council, 2005.

[NRC09] “A New Biology for the 21st Century,” National Research Council, 2009.

[NRC10] “Report of a Workshop on the Scope and Nature of Computational Thinking,” National Research Council, 2010.

[NRC11] “The Report of a Workshop on Pedagogical Aspects of Computational Thinking,” National Research Council, 2011.

[PITAC05] President’s Information Technology Advisory Committee, “Computational Science: Ensuring America’s Competitiveness,” Report to the President, June 2005.

[UKEd13] UK Department for Education, “Computing Programmes of study for Key Stages 1-4,” February 2013, http://media.education.gov.uk/assets/files/pdf/c/computing%2004-02-13_001.pdf

[Wing06] Jeannette M. Wing, “Computational Thinking,” Communications of the ACM, Vol. 49, No. 3, March 2006, pp. 33–35.  In French: http://www.cs.cmu.edu/afs/cs/usr/wing/www/ct-french.pdf and in Chinese: http://www.cs.cmu.edu/afs/cs/usr/wing/www/ct-chinese.pdf

[Wing08] Jeannette M. Wing, “Five Deep Questions in Computing,” Communications of the ACM, Vol. 51, No. 1, January 2008, pp. 58-60.

[Wing11] Jeannette M. Wing, “Computational Thinking: What and Why,” The Link, March 2011.

Jeannette M. Wing is Corporate Vice President, Microsoft Research.  She is on leave as President’s Professor of Computer Science from Carnegie Mellon University, where she twice served as Head of the Computer Science Department. From 2007 to 2010 she was the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation. She received her S.B., S.M., and Ph.D. degrees from the Massachusetts Institute of Technology. Wing’s general research interests are in formal methods, programming languages and systems, and trustworthy computing.  She is a Fellow of the American Academy of Arts and Sciences, American Association for the Advancement of Science (AAAS), Association for Computing Machinery (ACM), and Institute of Electrical and Electronics Engineers (IEEE).
 
Artificial Intelligence: Then and Now

Hector Levesque, Professor of Computer Science, University of Toronto

In the forty years since the publication of Social Issues in Computing by Gotlieb and Borodin, much has changed in the view of the potential and promise of the area of Artificial Intelligence (AI).

The general view of AI in 1973 was not so different from the one depicted in the movie “2001: A Space Odyssey”, that is, that by the year 2001 or so, there would be computers intelligent enough to be able to converse naturally with people.  Of course it did not turn out this way.  Even now no computer can do this, and none are on the horizon.  In my view, the AI field slowly came to the realization that the hurdles that needed to be cleared to build a HAL 9000 went well beyond engineering, that there were serious scientific problems that would need to be resolved before such a goal could ever be attained.  The field continued to develop and expand, of course, but by and large turned away from the goal of a general autonomous intelligence to focus instead on useful technology.  Instead of attempting to build machines that can converse in English, for example, we concentrate on machines that can respond to spoken commands or locate phrases in large volumes of text.  Instead of a machine that can oversee the housework in a home, we concentrate on machines that can perform certain specific tasks, like vacuuming.

Many of the most impressive of these applications rely on machine learning, and in particular, learning from big data.  The ready availability of massive amounts of online data was truly a game changer.  Coupled with new ideas about how to do automatic statistics, AI moved in a direction quite unlike what was envisaged in 1973. At the time, it was generally believed that the only way to achieve flexibility, robustness, versatility, etc. in computer systems was to sit down and program those capabilities in.  Since then it has become clear that this is very difficult to do because the necessary rules are so hard to come by.  Consider riding a bicycle, for example.  Under what conditions should the rider lean to the left or to the right, and by how much? Instead of trying to formulate precise rules for this sort of behaviour in a computer program, a system could instead learn the necessary control parameters automatically from large amounts of data about successful and unsuccessful bicycle rides.  For a very wide variety of applications, this machine learning approach to building complex systems has worked out extremely well.
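A minimal sketch of that idea, using invented synthetic data (the numbers and the linear model are assumptions made purely for illustration): rather than hand-coding how much to lean, fit the control parameter from recorded examples.

```python
# Toy version of learning a control parameter from data instead of
# programming a rule. Synthetic "rides" relate tilt to the corrective
# lean observed on successful rides; least squares recovers the gain.
import numpy as np

rng = np.random.default_rng(0)
tilt = rng.uniform(-10, 10, size=500)             # degrees of tilt (synthetic)
lean = 1.8 * tilt + rng.normal(0, 0.5, size=500)  # corrective lean, with noise

# Fit lean ~ k * tilt; k is the learned control parameter.
k, _, _, _ = np.linalg.lstsq(tilt.reshape(-1, 1), lean, rcond=None)
print(f"learned gain k = {k[0]:.2f}")  # close to the 1.8 that generated the data
```

A real system would learn many such parameters at once, from far messier data, but the principle is the same: the rule is estimated, not written down.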

However, it is useful to remember that this is an AI technology whose goal is not necessarily to understand the underpinnings of intelligent behaviour.  Returning to English, for example, consider answering a question like this:

The ball crashed right through the table because it was made of styrofoam.  What was made of styrofoam, the ball or the table?

Contrast that with this one:

The ball crashed right through the table because it was made of granite.  What was made of granite, the ball or the table?

People (who know what styrofoam and granite are) can easily answer such questions, but it is far from clear how learning from big data would help.  What seems to be at issue here is background knowledge: knowing some relevant properties of the materials in question, and being able to apply that knowledge to answer the question.  Many other forms of intelligent behaviour seem to depend on background knowledge in just this way.  But what is much less clear is how all this works: what it would take to make this type of knowledge processing work in a general way.  At this point, forty years after the publication of the Gotlieb and Borodin book, the goal seems as elusive as ever.
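To see concretely why the answer turns on knowledge rather than surface statistics, here is a toy sketch in which the relevant material properties are simply hand-coded (everything here is invented for illustration; acquiring and applying such knowledge in general is precisely the unsolved problem):

```python
# Toy illustration: hand-coded background knowledge about materials
# resolves the pronoun in the sentences above.
FRAGILE = {"styrofoam": True, "granite": False}

def what_was_made_of(material):
    # "The ball crashed right through the table because it was made of X."
    # A fragile material explains being crashed through, so "it" is the
    # table; a hard material explains doing the crashing, so "it" is the ball.
    return "the table" if FRAGILE[material] else "the ball"

print(what_was_made_of("styrofoam"))  # the table
print(what_was_made_of("granite"))    # the ball
```

The two lines of “knowledge” here were typed in by hand; what remains far from clear, as noted above, is how a system could acquire and deploy such knowledge in a general way.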

Hector Levesque received his BSc, MSc and PhD all from the University of Toronto; after a stint at the Fairchild Laboratory for Artificial Intelligence Research in Palo Alto, he later joined the Department of Computer Science at the University of Toronto where he has remained since 1984.  He has done extensive work in a variety of topics in knowledge representation and reasoning, including cognitive robotics, theories of belief, and tractable reasoning.  He has published three books and over 60 research papers, four of which have won best paper awards of the American Association of Artificial Intelligence (AAAI); two others won similar awards at other conferences. Two of the AAAI papers went on to receive AAAI Classic Paper awards, and another was given an honourable mention. In 2006, a paper written in 1990 was given the inaugural Influential Paper Award by the International Foundation of Autonomous Agents and Multi-Agent Systems. Hector Levesque was elected to the Executive Council of the AAAI, was a co-founder of the International Conference on Principles of Knowledge Representation and Reasoning, and is on the editorial board of five journals, including the journal Artificial Intelligence.  In 1985, Hector Levesque became the first non-American to receive IJCAI’s Computers and Thought Award. He was the recipient of an E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada for 1990-91. He is a founding Fellow of the AAAI and was a Fellow of the Canadian Institute for Advanced Research from 1984 to 1995. He was elected to the Royal Society of Canada in 2006, and to the American Association for the Advancement of Science in 2011. In 2012, Hector Levesque received the Lifetime Achievement Award of the Canadian AI Association, and in 2013, the IJCAI Award for Research Excellence.
Ubiquitous Computing

David Naylor, President, University of Toronto

Contributing to the Social Issues in Computing blog has caused a subacute exacerbation of my chronic case of impostor syndrome. I am, after all, a professor of medicine treading in the digital backyard of the University’s renowned Department of Computer Science [CS].

That said, there are three reasons why I am glad to have been invited to contribute.

First, with a few weeks to go before I retire from the President’s Office, it is a distinct privilege for me to say again how fortunate the University has been to have such a remarkable succession of faculty, staff, and students move through CS at U of T.

Second, this celebration of the 40th anniversary of Social Issues in Computing affords me an opportunity to join others in acknowledging Kelly Gotlieb, visionary founder of the Department, and Allan Borodin, a former Department chair who is renowned worldwide for his seminal research work.

Third, it seems to me that Social Issues in Computing both illustrated and anticipated what has emerged as a great comparative advantage of our university and similar large research-intensive institutions world-wide. We are able within one university to bring together scholars and students with a huge range of perspectives and thereby make better sense of the complex issues that confront our species on this planet. That advantage obviously does not vitiate the need to collaborate widely. But it does make it easier for conversations to occur that cross disciplinary boundaries.

All three of those themes were affirmed last month when two famous CS alumni were awarded honorary doctorates during our Convocation season. Dr Bill Buxton and Dr Bill Reeves are both in their own ways heirs to the intellectual legacy of Gotlieb, Borodin, and many other path-finders whom they both generously acknowledged at the events surrounding their honorary degrees. Among those events was a memorable celebratory dinner at the President’s Residence, complete with an impromptu performance by a previous honorary graduate — Dr Paul Hoffert, the legendary musician and composer, who is also a digital media pioneer. But the highlight for many of us was a stimulating conversation at the MaRS Centre featuring the CS ‘Double Bill’, moderated by another outstanding CS graduate and faculty member, Dr Eugene Fiume.

Drs Buxton and Reeves were part of a stellar group of University of Toronto CS graduate students and faculty back in the 1970s and 1980s. At that time, the Computer Science Department, and perhaps especially its Dynamic Graphics Project, was at the very heart of an emerging digital revolution.

Today, Dr Buxton is Principal Researcher at Microsoft and Dr Reeves is Technical Director at Pixar Animation Studios. The core businesses of those two world-renowned companies were unimaginable for most of us ordinary mortals forty years ago when Social Issues in Computing was published. Human-computer interaction was largely conducted through punch cards and monochrome text displays. Keyboards, colour graphical display screens, and disk drives were rudimentary. Indeed, they were still called ‘peripherals’ and one can appreciate their relative status in the etymology of the word. The mouse and the graphical user interface, justly hailed as advances in interface design, were steps in the right direction. But many in U of T’s CS department and their industry partners saw that these were modest steps at best, failing to sufficiently put the human in human-computer interaction. Toronto’s CS community accordingly played a pivotal role in shaping two themes that defined the modern digital era. The first was the primacy of user experience. The second was the potential for digital artistry. From Alias / Wavefront / Autodesk and Maya to multi-touch screens, breakthroughs in computer animation and Academy Award-winning films, Toronto’s faculty, staff, students and alumni have been at the forefront of humanized digital technology.

What will the next forty years bring? The short answer is that I have no idea. We live in an era of accelerating change that is moving humanity in directions that are both very exciting and somewhat unsettling. I’ll therefore take a shorter-term view and, as an amateur, offer just a few brief observations on three ‘Big Things’ in the CS realm.

First and foremost, we seem to have entered the era of ubiquitous computing. Even if the physical fusion occurs only rarely (e.g. externally programmable cardiac pacemakers), in a manner of speaking we are all cyborgs now. Our dependency on an ever widening range of digital devices borders on alarming, and evidence of the related threats to privacy continues to grow. However, the benefits have also been incalculable in terms of safety, convenience, relief from drudgery, productivity and connectivity. At the centre of this human-computer revolution has been the rise of mobile computing – and the transformation of the old-fashioned cell phone into a powerful hand-held computer. Add in tablets, notebooks, and ultra-lightweight laptops and the result is an intensity of human-computer interaction that is already staggering and growing exponentially. The trans-disciplinary study of the social and psychological implications of this shift in human existence will, I hope, remain a major priority for scholars and students at the University of Toronto in the years ahead as others follow the lead of Gotlieb, Borodin and their collaborators.

A second topic of endless conversation is ‘The Cloud’. I dislike the term, not because it is inaccurate, but precisely because it aptly captures a new level of indifference to where data are stored. I cannot fully overcome a certain primitive unease about the assembly of unthinkable amounts of data in places and circumstances about which so little is known. Nonetheless, the spread of server farms and growth of virtual machines are true game-changers in mobile computing. Mobility on its own promotes ubiquity but leads to challenges of integration – and it is the latter challenge that has been addressed in part by cloud-based data storage and the synergy of on-board and remote software.

A third key element, of course, is the phenomenon known as ‘Big Data’. The term seems to mean different things to different audiences. Howsoever one defines ‘Big Data’, the last decade has seen an explosion in the collection of data about everything. Ubiquitous computing and the rise of digitized monitoring and logging have meant that human and automated mechanical activity alike are creating, on an almost accidental basis, a second-to-second digital record that has reached gargantuan proportions. We have also seen the emergence of data-intensive research in academe and industry. Robotics and digitization have made it feasible to collect more information in the throes of doing experimental or observational research. And the capacity to store and access those data has grown apace, driving the flywheel faster.

Above all, we have developed new capacity to mine huge databases quickly and intelligently. Here it is perhaps reasonable to put up a warning flag. There is, I think, a risk of the loss of some humanizing elements as Big Data become the stock in trade for making sense of our world. Big Data advance syntax over semantics. Consider: The more an individual’s (or a society’s) literary preferences can be reduced to a history of clicks – in the form of purchases at online bookstores, say – the less retailers and advertisers (perhaps even publishers, critics, and writers) might care about understanding those preferences. Why does someone enjoy this or that genre? Is it the compelling characters? The fine writing? The political engagement? The trade-paperback format?

On this narrow view, understanding preferences does not matter so much as cataloguing them. Scientists, of course, worry about distinguishing causation from correlation. But why all the fuss about root causes, the marketing wizards might ask: let’s just find the target, deliver the message, and wait for more orders. Indeed, some worry that a similar indifference will afflict science, with observers like Chris Anderson, past editor of Wired, arguing that we may be facing ‘the end of theory’ as ‘the data deluge makes the scientific method obsolete’. I know that in bioinformatics, this issue of hypothesis-free science has been alive for several years. Moreover, epidemiologists, statisticians, philosophers, and computer scientists have all tried to untangle the changing frame of reference for causal inference and, more generally, what ‘doing science’ means in such a data-rich era.

On a personal note, having dabbled a bit in this realm (including an outing in Statistics in Medicine as the data broker for another CS giant, Geoff Hinton, and a still-lamented superstar who moved south, Rob Tibshirani), I remain deeply uncertain about the relative merits and meanings of these different modes of turning data into information, let alone deriving knowledge from the resulting information.

Nonetheless, I remain optimistic. And in that regard, let me take my field, medicine, as an example. To paraphrase William Osler (1849-1919), one of Canada’s best-known medical expatriates, medicine continues to blend the science of probability with the art of uncertainty. The hard reality is that much of what we do in public health and clinical practice involves educated guesses. Evidence consists of effect sizes quantifying relative risks or benefits, based on population averages. Yet each individual patient is unique. Thus, for each intervention – be it a treatment of an illness or a preventive manoeuvre – many individuals must be exposed to the risks and costs of intervention for every one who benefits. Eric Topol, among others, has argued that new biomarkers and monitoring capabilities mean that we are finally able to break out of this framework of averages and guesswork. The convergence of major advances in biomedical science, ubiquitous computing, and massive data storage and processing capacity has meant that we are now entering a new era of personalized or precision medicine. The benefits should include not only more effective treatments with reduced side-effects from drugs. The emerging paradigm will also enable customization of prevention, so that lifestyle choices – including diet and exercise patterns – can be made with a much clearer understanding of the implications of those decisions for downstream risk of various disease states.

We are already seeing a shift in health services in many jurisdictions through adoption of virtual health records that follow the patient, with built-in coaching on disease management. Combine these records with mobile monitoring and biofeedback and there is tremendous potential for individuals to take greater responsibility for the management of their own health. There is also the capacity for much improved management of the quality of health services and more informed decision-making by professionals and patients alike.

All this, I would argue, is very much in keeping with ubiquitous computing as an enabler of autonomy, informed choice, and human well-being. It is also entirely in the spirit of the revolutionary work of many visionaries in CS at the University of Toronto who first began to re-imagine the roles and relationships of humans and computers. Here, I am reminded that Edmund Pellegrino once described medicine as the most humane of the sciences and the most scientific of the humanities. The same could well be said for modern computer science – a situation for which the world owes much to the genius of successive generations of faculty, staff and students in our University’s Department of Computer Science.

Selected references:

“A comparison of statistical learning methods on the Gusto database.”
Ennis M, Hinton G, Naylor D, Revow M, Tibshirani R.
Stat Med. 1998 Nov 15;17(21):2501-8.

“Predicting mortality after coronary artery bypass surgery: what do artificial neural networks learn? The Steering Committee of the Cardiac Care Network of Ontario.”
Tu JV, Weinstein MC, McNeil BJ, Naylor CD.
Med Decis Making. 1998 Apr-Jun;18(2):229-35.

“A comparison of a Bayesian vs. a frequentist method for profiling hospital performance.”
Austin PC, Naylor CD, Tu JV.
J Eval Clin Pract. 2001 Feb;7(1):35-45.

“Grey zones of clinical practice: some limits to evidence-based medicine.”
Naylor CD.
Lancet. 1995 Apr 1;345(8953):840-2.

David Naylor has been President of the University of Toronto since 2005. He earned his MD at Toronto in 1978, followed by a D Phil at Oxford where he studied as a Rhodes Scholar. Naylor completed clinical specialty training and joined the Department of Medicine of the University of Toronto in 1988. He was founding Chief Executive Officer of the Institute for Clinical Evaluative Sciences (1991-1998), before becoming Dean of Medicine and Vice Provost for Relations with Health Care Institutions of the University of Toronto (1999 – 2005). Naylor has co-authored approximately 300 scholarly publications, spanning social history, public policy, epidemiology and biostatistics, and health economics, as well as clinical and health services research in most fields of medicine. Among other honours, Naylor is a Fellow of the Royal Society of Canada, a Foreign Associate Fellow of the US Institute of Medicine, and an Officer of the Order of Canada.
Interview with the Authors: Part 3

How about today?  How did things turn out differently than you thought?

Borodin:   

Nobody anticipated how ubiquitous computers would be, that they would be in everybody’s home and more commonplace than telephones.  I do not think we envisioned that, and of course everything that comes with that: the internet and high-speed communications.  We always talk about information as power, but the fact is now that there is so much power involved in computing. We carry around a little telephone that is a thousand times more powerful than the big computer we had at the University at the time.

Any issue that comes along with that widespread use is something that I do not think we would have addressed. Yes, we talked about decision making and centralization of power and the importance of data, we talked about all that, but we did not envision just how important it would be.  Who would ever have imagined that political protest movements would use computers, or flash crowds, and just how you can manipulate information to start causes, for good or for bad, how you can sometimes facilitate the overthrow of a government, as an extreme example of having a tremendous political impact.  It is quite interesting, and of course we see this whole thing being played out on such an interesting scale when you look at something like the Chinese government, which so worries about the control of information and ideas and the power of those ideas.  They control access to the internet and which sites you are allowed to see.

Gotlieb:

In December [2012] the ITU met. The ITU, the International Telecommunication Union, which essentially governed the rules for international telephony, had not had a meeting for about 40 years.  In December they had their first meeting in 40 years in Dubai, and there were 190 countries represented.  The big issue that came up there was: should there now be government control of the internet? The motion was put forward and voted on, but not passed unanimously. China and Russia and Iran and Pakistan all felt that really what goes on the internet ought to be seen by the government and controlled first.  The United States of course, Canada, and other democratic countries objected.  So how much control should you have over information?  The internet is relatively free. We have right here in Toronto the Citizen Lab, which is determined to make sure that government censorship does not deny their citizens access to certain topics.  So there is a big debate going on, a global debate, about what control is needed.

Some say that maybe more transparency is needed.  For example, ICANN, the Internet Corporation for Assigned Names and Numbers, is a private corporation in the United States.  China says: we have a Chinese-language internet, why on earth should an American company decide whether we can have a particular Chinese-language name for a website? ICANN actually does have the rule that if you have something on .ca, the Canadian group looks after names for .ca.  So ICANN replies: if you have .cn at the end of it, you can do what you want. But .cn is English, of course.  So China responds that they do not want any English in the name: we have a perfectly good writing system of our own that is thousands of years older than yours, so “thanks but no thanks”.  So what to do about the internet is really a work in progress.  People admit that there are problems: but how much control, and who controls, and what you control is really an ongoing process.

Borodin:

Something else that we could not have anticipated when we wrote the book is the widespread use of electronic commerce on the internet. We were thinking about automation as an employment reducer, but now look at online sales.  They are starting to grow.  It may level off, but how many physical stores are being affected?  Well, judging by the Yorkdale shopping mall I guess we can keep on expanding stores, but I know more and more people who want to do all their shopping online. It is often cheaper, it is convenient, and they like doing it this way.  And as long as you do not have sizing problems (clothing can be an issue), and you are buying products that you can buy off the internet, why not do it that way? Some people like myself still like to go into stores; well, I do not like to shop at all, but to the extent that I shop, I prefer to shop in a store.  But the internet has changed things dramatically, in the same way that all the internet sources of information are making the newspaper business a whole different business completely.  It may turn out to be just an electronic business after a while.  We have what people would say is the popularization of information dissemination: everybody is an expert now.  Who would have ever thought that something like Wikipedia could work, that you could replace real experts in a field with a kind of collaborative work by people with genuine interest and self-interest.  Now of course Wikipedia has its problems.  You have a lot of stuff on there that is just not correct. But usually Wikipedia addresses it, or it addresses itself.  It is an interesting phenomenon.  Really, there is no more notion of an Encyclopaedia Britannica. More generally, we will see other applications of crowdsourcing partially or completely replacing experts.

Gotlieb: 

I was actually an advisor to Encyclopaedia Britannica.  They paid me $1000 a year to give them advice, and if you look at the last editions of it you will find my name as an advisor.  Now, when CDs and DVDs came out, I wrote them and said this is going to make a big difference to you, because it is a storage device where you have pretty rapid access and you could look things up.  And they did not answer me.  And I wrote them again and they continued to pay me $1000 a year until they went out of [the printed book] business.  But they did not pay attention. You see, they would have been driven by the marketing department and the marketing department was selling books, so that is who they took their advice from.

Borodin: 

But even now, even if you are selling content on DVDs or something like that, nobody really needs anything physical.   I still like to read books, but there is a growing population that prefers to read things off the internet.   I observe my wife, who was mainly uninterested in most things technological till recently.  She has now learned to read electronically: she likes reading off her iPhone.  She finds an iPhone enjoyable to read from, and it is just astounding to me. It is a little bite-sized window which you can hold wherever you are, and in particular when you are in bed.

When you think about how computers and the way we do information technology have changed, it just changes the way we operate.  So, for example, we used to go to libraries to look things up. Now, search engines have taught us to be experts on query-based search.  This is not new anymore.  Search engine technology has been the same for the last twenty years.  It is keyword search and we have learned how to do it.  We humans have learned how to phrase our questions so we usually get the answers that we want without asking experts, without going in and having a dialogue.

There were a lot of things we did not anticipate but in general, whenever you predict something is going to happen in the short term it does not happen:  you are usually way off.  When you say something is not going to happen soon, it happens a lot sooner than you thought.  But we tend to be very bad at predicting what the issues are going to be. It is not just us, it is the industry itself. A few years ago, the Vice-President of Microsoft Research came to visit the university. I do not remember what he was talking about exactly, but I remember a comment in the middle of his talk.  He said that we all knew search was going to be big in the mid-1990s. But if you knew it was going to be big, why didn’t you do something?  And IBM did the same thing.  IBM had a search engine before Google: it had all the experts in the world there and they did not do anything with it in the mid-1990s. But the real thing was that these companies did not think there was any money to be made or that search engines were not part of their business.  And as soon as the right way to do advertising on search became clear (there were companies that led the way in search, but they did not do it the right way and it cost them their futures), when Google (or someone) had the right idea to take what was going on before but add a quality component to the ads, to match up the ads with the search queries, then all of a sudden this became 98% of the income of Google. That’s why they are a successful company.   Nobody initially knew what the model would be for making money.  Could you sell information?  Was that the way you were going to make money on the internet? Or is it going to be a TV model where you are going to make your money through advertising?

Gotlieb: 

And a slightly different question that you hear a lot about now, a little less lately, is this so-called network neutrality.  The question here is: do you charge all users the same rate according to the volume and rate at which you give them their answers, or do you give a preferred charge?  Now, if you are on email, you are interacting at a certain rate, but of course if you are trying to look at a video there is a lot more data coming through per second than when you are typing emails.  So companies have said that for people and situations that demand a faster rate and more bandwidth, they should be entitled to charge more.  On the other hand, maybe those are our big customers, so maybe we will charge them less because we will make more money from them anyway.  But there are other people who say, look, the internet is a tool for everybody and we are trying to preserve a kind of democratic internet, so everybody ought to be charged the same.  You heard a lot about the phrase “network neutrality”; it has been passed around in the last year without being answered.

Borodin: 

At the provider level, many of the providers do offer different qualities of service for different payments, but when the providers are actually paying for the communication links, I do not know how that whole economy works.  It is kind of a hidden business out there.  But at the level of the provider, most of them now do try to have different levels of service according to what your bandwidth usage is.

Gotlieb:

Rogers [Communications] was always saying they are the fastest. You see all kinds of ads now:  “My computer gives me an answer quicker than yours does”, and you actually see ads for that from Rogers.  So clearly they feel that faster response time is something that is valuable and that you can either charge more for or use that as an advantage to get a bigger business.

How do you think the field of social computing will develop in the future?

Borodin:   

As a field, I am not sure where it is going.  We do have a [University of Toronto Computer Science] faculty member who is working in climate change informatics and things like that.  So I suppose you are going to see various examples of people working in information technology who will apply it to something that they think is important.

Gotlieb:   

You see that now on LinkedIn.  The field of computing has subdivided into so many special interest groups already.  If you look at social networks, for example, there are social networks for community networks: for instance, there might be a social network for people in Vancouver who are interested in what they are doing in their community.  And then there might be somebody else who asks, “What are different communities doing about a particular problem?”  There are social networks for people who are suffering from cancer.  Specialized networks are springing up, and now it is pretty hard to say where it will end up.

Borodin:   

I think social networks is really an important point.  I mean the large-scale online social networks like Facebook; you know, who would ever have imagined how popular they would become.  And again, in not being able to forecast these things, if we remember the movie [about Facebook], in that movie the President of Harvard thinks, “What is this worth, a couple of thousand dollars, and why are you making such a big fuss over this whole thing?”  We know that social issues have already developed because of the way people are using social networks: the way you can intimidate people over a social network, people being driven to suicide because of these things.  So, clearly, anytime something becomes so widely used and so entrenched in our culture, it obviously brings social issues along with it.

Gotlieb:   

At my 90th birthday celebration, somebody asked me to give an example of social issues in computing.  Well, I gave them one; it is in the book.  I said the following: we see that computer-controlled cars are coming, and some states already allow them on highways.  Now, if a robot or computer-controlled car gets into an accident in the United States, given that it is a litigious society, who do you sue?  Do you sue the programmer, do you sue the company that put that program in, or do you sue the driver who was there and may have been able to intervene but did not?  Who gets sued?

Borodin:   

Well, in the US you sue everybody.

Gotlieb:   

Well, that is true.  The social issues grow out of this.  We come up against the wider problem of responsibility for autonomous or nearly autonomous systems.  What are the ethical, moral, and legal responsibilities?

Borodin:   

I think if you really wanted to get into the field and have an impact, you would probably start a blog and all of a sudden you would write things and people would write back and argue with you.  And before you know it, if you have got enough of an audience, you are an expert.

Borodin:   

Two fields on the theoretical side of Computer Science that have generated quite a bit of interest in social computing are the mathematics and algorithmics of social networks, and of course game theory, mechanism design, and economics.  So Craig Boutilier and I now co-teach a course called Social and Economic Networks, based on the text and course by Easley and Kleinberg.  It is not a social issues course per se, but we do talk about the phenomena of large-scale social networks: how friendships are formed, the strength of weak ties, and so on.  A lot of the questions that originate in the sociology world and the game theory world have now been given an algorithmic flavour.  This is happening because, in the social networks world, sociologists for the first time have large-scale data; they always had very interesting questions but never before had the big data to study them.

The course is much more tied into popular culture, if you will, because the game theory side asks how people make decisions, how you converge over repeated games, and what equilibrium really means.  Auctions are clearly on everybody’s minds, because everybody does electronic commerce and people are bidding all the time, whether they know it or not.  The social networks side focuses on connectivity: how we are so connected and why, how links get formed, and how influence spreads through a social network.  Is it like a biological epidemic model, or are there other models for that?  So we are talking about things that border on what might be called “social issues in computing”, but it is a somewhat different course.  You will see things come up, depending on people’s research interests, depending on what is interesting.
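The epidemic analogy Borodin mentions is often made precise with the standard independent cascade model, in which each newly influenced person gets one chance to influence each neighbour with some probability, much as an infection spreads along contacts.  A minimal sketch in Python (the toy network, probability, and function names are illustrative, not taken from the course):

```python
import random

def independent_cascade(graph, seeds, p, rng=None):
    """Simulate influence spread: each newly active node gets one
    chance to activate each inactive neighbour with probability p."""
    rng = rng or random.Random(0)
    active = set(seeds)        # everyone influenced so far
    frontier = list(seeds)     # newly influenced; may still spread
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour in graph.get(node, []):
                if neighbour not in active and rng.random() < p:
                    active.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return active

# A toy friendship network; influence starts from node "a".
network = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
           "d": ["b", "c", "e"], "e": ["d"]}
print(independent_cascade(network, seeds=["a"], p=0.5))
```

Running many such simulations over a real network is exactly the kind of computation that the new availability of large-scale social data makes possible.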

Obviously, the widespread use of social networks has pushed these phenomena into computer science: how are we going to study them?  The game theory material has been around a long time, but all of a sudden people realized that a lot of traditional game theory assumes you can compute certain things optimally, which you cannot, so you wind up with a whole new field built around computational constraints.  So things will develop.  Whether they will still be called “social issues in computing”, or something else, remains to be seen.

 

C.C. (Kelly) Gotlieb is the founder of the Department of Computer Science (DCS) at the University of Toronto (UofT), and has been called the “Father of Computing in Canada”. Gotlieb has been a consultant to the United Nations on Computer Technology and Development, and to the Privacy and Computers Task Force of the Canadian Federal Departments of Communications and Justice.  During the Second World War, he helped design highly-classified proximity fuse shells for the British Navy.  He was a founding member of the Canadian Information Processing Society, and served as Canada’s representative at the founding meeting of the International Federation of Information Processing Societies.  He is a former Editor-in-Chief of the Journal of the Association for Computing Machinery, and a member of the Editorial Advisory Board of Encyclopaedia Britannica and of the Annals of the History of Computing.  Gotlieb has served for the last twenty years as the co-chair of the awards committee of the Association for Computing Machinery (ACM), and in 2012 received the Outstanding Contribution to ACM Award.  He is a member of the Order of Canada, and an awardee of the Isaac L. Auerbach Medal of the International Federation of Information Processing Societies.  Gotlieb is a Fellow of the Royal Society of Canada, the Association for Computing Machinery, the British Computer Society, and the Canadian Information Processing Society, and holds honorary doctorates from the University of Toronto, the University of Waterloo, the Technical University of Nova Scotia and the University of Victoria.
Allan Borodin is a University Professor in the Computer Science Department at the University of Toronto, and a past Chair of the Department.  Borodin served as Chair of the IEEE Computer Society Technical Committee for the Mathematics of Computation for many years, and is a former managing editor of the SIAM Journal on Computing.  He has made significant research contributions in many areas, including algebraic computation, resource tradeoffs, routing in interconnection networks, parallel algorithms, online algorithms, information retrieval, social and economic networks, and adversarial queueing theory.  Borodin’s awards include the CRM-Fields PIMS Prize; he is a Fellow of the Royal Society of Canada, and of the American Association for the Advancement of Science.
Interview With the Authors: Part 2 (http://socialissues.cs.toronto.edu/2013/02/interview-with-the-authors-part-2/, Mon, 25 Feb 2013)

What were some of the big issues in Computers and Society when the book came out?

Gotlieb:

Well, there were at least three issues that stand out in my mind.  One of them was computers and privacy.  I had already been involved with that issue, and had written a report on it.

Borodin: 

People still talk about the issue of privacy.  When we wrote about it in the book, I think we tied it back to the question of responsibility.  The sort of responsibility we were thinking of was the conscience of people who worked during World War II, the nuclear physicists who developed atomic weaponry and things like that.  There was a post-war movement about the responsibility of scientists for the things they do.

Gotlieb:

Another issue was computers and work.  At that time, when computers first came out, there was an enormous debate going on as to whether computers would cause job losses.  I had given invited speeches on computers and work in Tokyo and Melbourne; the largest audience I ever had was in Melbourne, about 4,000 people, and it was picked up in Computerworld and talked about all over.  Computers had already had an effect through the introduction of postal codes, for example: a lot of people in the Post Office lost jobs because they had manually sorted mail.  Before the postal code, they would look at the addresses and look up what part of the country each one was in.  The question was: what is going to happen to the Post Office?  We did not have email yet, and the Post Office did not go away.  But the Post Office is still shrinking today because of email.

We invited the Head of the Postal Union to come and speak to our class.  At the time, the postal union was under tremendous public censure for holding a strike, and he was grateful for the invitation.  He said nobody had thought to invite him to present his case, especially to a university audience.  I maintained contact with him for a long time after.  This question as to whether automation or advances in technology cause job losses had long been considered by economists, in particular by Schumpeter, who essentially said technology creates and destroys.  He used this example: when motor cars came in, the buggy industry was shot, but there were far more jobs in producing cars than there ever were in producing buggies.  I generally had the feeling that the same thing would hold for computers, but at the time I did not have the evidence.  There were a certain number of jobs designing computers, building them, and programming them, but nobody ever thought that, let us say, computer games would become the big industry we have now.
Automation replacing workers was the threat.

Borodin:   

You know, it is interesting.  I went back and looked at our summary on computers and employment, and I think it was very guarded.  We said, and presented evidence for the case, that computing had not had the unsettling effect on employment forecast by many.  I think we presented a lot of evidence, and that is what was different about the book: we tried to be balanced about it.  We presented what the various people were saying.

Gotlieb:

There was a third topic that was quite hot at the time.  At MIT, there was a whole group who had put together research on the limits to growth.  They had predictions about how we were using up resources, not just oil but metals and so on, and they made predictions as to how long the world’s supply would last, at which point population growth would have to stop.  So the question was: how accurate were these predictions, and could you trust them?  They were computerized predictions, so here again was an area where computers were making predictions that were affecting policy quite seriously.

Borodin:  

Well, it was mainly simulations that they were doing, rather than, say, statistical analysis of  a precise model.

Gotlieb:

They were simulations, right.  They had a whole lot of simulations about particular resources.
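In their simplest form, such resource simulations projected exponentially growing consumption against a finite reserve.  A crude sketch of that style of calculation in Python (the numbers are purely illustrative, not figures from the MIT study):

```python
def years_until_exhaustion(reserve, annual_use, growth_rate):
    """Project how long a finite reserve lasts when annual
    consumption grows by a fixed fraction each year."""
    years = 0
    while reserve > 0:
        reserve -= annual_use
        annual_use *= 1 + growth_rate
        years += 1
    return years

# A 100-unit reserve, consumed at 1 unit in the first year:
print(years_until_exhaustion(100, 1.0, 0.00))  # flat use: 100 years
print(years_until_exhaustion(100, 1.0, 0.05))  # 5% growth: about 37 years
```

The rhetorical force of the limits-to-growth models came from exactly this effect: steady exponential growth collapses the lifetime of a reserve far below what flat-rate projections suggest.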

Borodin:  

Another topic that I think was always very popular: to what extent can machines think?  We considered some classical computer uses at that time, but the issue of “Artificial Intelligence” and the ultimate limits of computation has continued to be a relevant subject.

What are some of the issues today? Are there any other issues that you think have emerged since you wrote the book that you would consider hot topics?

Gotlieb:

Some of the problems are still there, and there are new ones.  And they continue.  By and large, the effects of computers on society have continued to multiply, so there are now more important issues and different issues.  I continue to follow things quite carefully.

For example, right now let us take the question of drones and the ethics and morality of drones.  Now, Asimov had the three laws of robotics, which said what a robot might be allowed to do.  But drones are getting more powerful.  They fly over Pakistan, and we think we have found the head of Al Qaeda, or an important person in it.  The operators get permission from a person before they send a bomb to attack a car that they think holds a terrorist leader.  The ethical and moral questions about robots continue.  For example, in Japan you have all kinds of robots that act as caregivers, looking after older people.  Presumably if they see an old person about to put their hand on the stove, they do not have to ask questions; they can act on that.  But there are other times when, before the robot interferes with the person, you have to ask: is it right to intervene, or should the person be allowed to do their thing?

Another question about the ethics and morality of robots is automated car driving.  Let us say we are about to have cars that drive themselves.  The senses that computers can have, and their reaction times, are better than ours.  So for decisions about driving a fast car, decisions normally under the control of a computer, should these always be under the control of the computer, or are there times when they ought to be made by humans rather than machines?  Or again, consider a medical decision.  Computers get better and better at diagnosis, but if they are going to give treatment, should a human be involved in that decision?  So the ethical and moral questions about robots continue as they become more and more intertwined with human life, and as computers become more powerful, these questions become more important.

Borodin:  

It is not just robots: it is any automated decision making.

Gotlieb:    

Exactly, any automated decision, so the ethics and morality of automated decisions is a continuing, ongoing issue.  For example, consider driving.  With drivers’ licences, there are times when you get a suspended licence, but the period of suspension depends upon the seriousness of the infraction.  Driving under the influence becomes a lot more serious if you happen to kill somebody while you were drunk, rather than just being stopped because your car was weaving.  You may lose your licence for a week or a month in one case, and for much longer in another.  So we need good judgment: we have legal judgments, and we need them.  But when the offences are committed by autonomous devices like robots and drones, the ethical and moral problems do not go away.  The ethics and morality of autonomous systems have become more urgent because there are more autonomous systems, and they are smarter.

Gotlieb:

Another issue that I think is very important is security.  If you want to do the most damage, you can probably bring down the electrical grid, or mess up all the aircraft weather predictions so that aircraft fly into a storm they cannot manage.  We know there are people who are prepared to do those things, obviously, so the question of computer security, when it comes to data privacy, safety and so on, is important today, because our systems are so dependent on computers and networks.  The problem of making them secure is more important, more urgent than ever, and I think unsolved; at least it is far from being solved.  You see lots of people who say it is important, and fortunately they are addressing it, but nobody that I know claims we have a good handle on it yet.  By the same token, computer security has become more urgent because we have power grids, weather systems and so on that all depend on computers.  So what has certainly happened is that some of the old problems we saw have become more serious, and they may still be demanding solutions that we do not have.

Many years ago there was a conference at Queen’s [University] when privacy was a big issue, and I was invited to give the keynote talk.  Essentially, people were worried about too much data-gathering by companies, government and so on.  The title of my talk was “Privacy: A Concept Whose Time Has Come and Gone”.  That was completely counter to what the people who invited me expected.  If I gave that talk today, I would say security trumps privacy every time.  So it is a changing concept.

Borodin:  

Well, you do anything on Facebook and it could be there forever.  That is a good social issue: who really owns the data after a while, and how long are they allowed to keep data on people?  That becomes a very big question.

Apropos your comment that security trumps privacy, I probably agree with that.  When I listen to CBC News in the morning, I hear an underlying theme that we have had for who knows how many years: “Big Brother”, 1984, the whole idea that we can be controlled centrally, that everything about our lives is known, and things like that.  It is a theme that has been in our literature, in our popular thinking, in our consciousness for many, many years.  I do not think it has gone away.  I think it is still there.

Gotlieb:

One guy was caught going through an airport with explosives in his shoes, so in airports all over the world, forever after, you take your shoes off.  Now in the United States, if you are over 70, they have changed the rule on that.

Borodin:

No, it is more than 70.  I was stopped, they asked me how old I was. I am 71, and he said, no, you are not old enough, you have to take your shoes off.

Gotlieb:

Yes, so they are making some tiny changes to it.  But I think they have only ever caught one guy who tried explosives in his shoes.  There have been one-off cases which have led to extreme, I would say exaggerated, conditions in airports all over the world.  Three-year-olds have to take their shoes off, you know.

Borodin:

We were in the Buffalo airport a couple of years ago, and there was a 99-year-old woman in a wheelchair who just happened to come up as the random selection for a search, and they were searching her.

Gotlieb: 

I remember, I was in Israel, and I went to Eilat.  It is in the south. I was coming back to Tel Aviv and I was the only non-Israeli in the group.  So the security person went through and said to me, “I have got somebody who I am teaching how to do a search. Do you mind if we practise on you?”  What was I going to say?  “Yes, I mind?”  No. So I got special treatment.

Borodin:   

This brings up a related issue, and I think the Israelis are very good at this: they have a lot of data on people.  So when you show up at the airport, they know quickly who you are, and they do profile you.  To the extent that the government, or whoever is running security at the airport, has a lot of information about you, it may reduce how much physical invasion of your privacy you have to go through.  You have lost a lot of privacy in terms of everything they know about you, but on the other hand it may spare you much more intrusive kinds of physical embarrassment.

Gotlieb:

If you are a frequent traveller to the United States, you can get something that lets you go through a lot faster: Global Entry, and there is also the Nexus pass.

Borodin:

You give up a certain amount of privacy for those.  As for the privacy issue: even though in the end we usually let security trump privacy, it is still on people’s minds.  People still feel sensitive to it; there is this underlying feeling that we do not want to be controlled centrally, we do not want “Big Brother”; 1984 is still in our minds.

 

Social Issues in Computing and the Internet (http://socialissues.cs.toronto.edu/2013/01/social-issues-and-internet/, Tue, 29 Jan 2013)

Vinton G. Cerf, Vice-President and Chief Internet Evangelist, Google Inc.

Forty years ago, C.C. (“Kelly”) Gotlieb and Allan Borodin wrote about Social Issues in Computing. We can thank them for their insights so many years ago and can see now how computing and communication have combined to produce benefits and hazards for the 2.5 billion people who are thought to be directly using the Internet. To these we may add many who use mobile applications that rely on access to Internet resources to function. And to these we may add billions more who are affected by the operation of network-based systems for all manner of products, services and transactions that influence the pulse of daily life.

Not only are we confronted with cyber-attacks, malware, viruses, worms and Trojan Horses, but we are also affected by our own social behavior patterns, which lead to reduced privacy and even unexpected invasions of privacy owing to the inadvertent acts of others. Photo sharing is very common on the Internet today, partly because every mobile seems to have an increasingly high-resolution camera and the ability to upload images to any web site or send them to any email address. The photos, however, may include people we don’t know who just happened to be caught in the frame. When these photos carry time, date and location information (often supplied by the mobile itself!), the involuntary participants in the image may find that their privacy has been eroded. Maybe they were not supposed to be there at the time…. Others “surfing” the Internet may find and label these photos, correctly or incorrectly. In either case, one can easily construct scenarios in which these images are problematic.
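The time, date and location Cerf mentions typically travel inside the image file as EXIF metadata, which any recipient can read in a few lines of code. A minimal sketch using Python with the Pillow imaging library (the file name is hypothetical; this is just one common way such data is exposed):

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_photo_metadata(path):
    """Return the EXIF timestamp and GPS fields embedded in a photo, if any."""
    exif = Image.open(path).getexif()
    metadata = {}
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        if name == "DateTime":
            metadata["timestamp"] = value
        elif name == "GPSInfo":
            # GPS data lives in its own sub-directory (IFD) of numbered tags.
            gps = exif.get_ifd(tag_id)
            metadata["gps"] = {GPSTAGS.get(k, k): v for k, v in gps.items()}
    return metadata

# Hypothetical file: anyone who receives it can do exactly this.
print(read_photo_metadata("shared_photo.jpg"))
```

Many photo-sharing services now strip this metadata on upload, precisely because of the risks described here; files shared directly, by email or messaging, often carry it intact.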

One imagines that social mores and norms will eventually emerge for how we would prefer that these technologies be used in society. For example, it is widely thought that banning mobile phone calls in restaurants and theatres is appropriate for the benefit of other patrons. We will probably have to experience a variety of situations, some of them awkward and even damaging, before we can settle on norms that are widely and possibly internationally accepted.

While the technical community struggles to develop reliable access control, authentication and cryptographic methods to aid in privacy protection, others work to secure the operating systems and browsers through which many useful services are constructed and, sadly, also attacked. We are far from having a reliable theory of resilient, attack-resistant operating systems, browsers and other applications, let alone practices that are effective.

We have ourselves to blame for some of this. We use poorly constructed passwords, and we give up privacy in exchange for convenience (think of the record of your purchases that the credit card company or bank accumulates in the course of a year). We revel in sharing information with others, without necessarily attending to the potential side-effects for ourselves or our associates. Identity theft is a big business because we reveal so much that others can pretend to be us! And of course, negligence results in the exposure of large quantities of personally identifiable information (e.g. lost laptops and memory sticks).

This problem will only become more complex as the “Internet of Things” arrives in the form of computer-controlled appliances that are also networked. Unauthorized third parties may gain access to and control over these devices or may be able to tap them for information that allows them to track your habits and know whether you are at home or your car is unoccupied.

The foresight shown by Gotlieb and Borodin so many years ago reinforces my expectation that we must re-visit these issues in depth and at length if we are to fashion the kind of world we really wish to live in. That these ideas must take root in many countries and cultures, and somehow be made compatible, only adds to the challenge.

Vinton G. Cerf is VP and Chief Internet Evangelist for Google; President of the ACM; member of the US National Science Board; US National Medal of Technology; Presidential Medal of Freedom; ACM A. M. Turing Award; Japan Prize; former chairman of ICANN and President of the Internet Society.

 

Interview With the Authors: Part 1 (http://socialissues.cs.toronto.edu/2013/01/interview-p1/, Mon, 28 Jan 2013)

When and how did you become interested in social computing? How did the book come about?

Gotlieb:

I was a voracious reader when I was young, absolutely voracious.  If I started on a book, I felt I was insulting the author if I didn’t finish it, whether I liked it or not.  And if I liked it, then I read everything by the author.  For example, when I discovered I liked The Three Musketeers, I read The Vicomte de Bragelonne: Ten Years Later in five volumes, and then I read it in French.  When I was young, I could read 100 pages an hour, and I read everything.

I remember that I decided to be a scientist quite young, and that was due to reading The Microbe Hunters by Paul de Kruif.   I took mathematics, physics and chemistry. During the Second World War I went to England and worked on a very highly classified shell proximity fuse: we did a lot of calculations. After the war when ENIAC was announced, I naturally fell into computers.  After all, I had electronics, and I had ballistics.

Although I was interested in science, I continued to be interested in philosophy and English.  After I graduated in Mathematics, Physics and Chemistry, I decided it was time to “get educated”.  The university curriculum for English used to list all the books that you would read in particular courses, so I read them all.  I decided that computing was the future, because computers provided a way to organize knowledge.

When you organize knowledge, you start to build up databases, so I built up databases.  Then people started to get worried about databases and privacy, and a committee was formed in Canada for the Departments of Justice and Communications to make recommendations as to what we ought to do about corporate databases.  Allan Gotlieb was in charge of the committee; Richard Gwyn was also on it.  We issued a report on how to protect privacy in databases.

Then the UN got interested in the topic of computers, and particularly how computers could help developing countries. U Thant, who was UN Secretary General at the time, put together a committee of six people from six countries to produce a report on that.  I was one of the six: I represented Canada.  I went around Europe visiting the World Health Organization, UNESCO and the World Bank, to discuss computing with them.

Having done all this,  it was natural to ask what social issues there might be in computing.  Then I invited Al to join me in writing a book, which became Social Issues in Computing.

Borodin:

I can’t remember exactly, but I think I got interested in the general topic of social issues in computing while I was a graduate student at Cornell in the late 60s.  The late 60s was a period of anti-war protests and racial integration issues; in some sense, everybody at universities was very politically engaged.  It came up naturally: because we were doing computing, and computing involves keeping track of people and maintaining records on them, we got interested in the issue.

In 1969 I came to the University of Toronto, and very soon after I arrived, Kelly and I together decided that we should teach a course on Computers and Society. I had become sensitized to the issue, and it seemed like a natural thing to do something on, even though it has never been my main research interest. Notes grew out of the course, and the book grew out of the notes.

Why did you choose to publish your ideas in a book?

Gotlieb:

We were reaching students, but I felt I had to raise a debate.  So in order to bring these ideas to more people, to talk about them, to provoke a debate and see what others had to say, we wrote the book.  We were simply looking for a wider audience.

Borodin:

That’s what happens in many cases for academics.  We write books because we’ve been teaching a course and we realize that there is an audience for the topic, there’s an interest. This was before the internet, of course.  So if you wanted to reach an audience, you wrote a book. We naively thought, given that we had all our notes for our course, that it was going to be easy.  Easy or not, a book was the natural way of going forward. We certainly felt that we had enough material for a book.

Was it a difficult book to write?

Borodin:

Every book is difficult to write.

Gotlieb:

I had already written a book with J.N.P. Hume, High-Speed Data Processing.  I didn’t know this at the time, but it was studied by the Oxford English Dictionary.  They decided there were 11 words, like loop, which the book used in a sense different from the ordinary one.  When I told this to a colleague and friend of mine, he said, “The first use of a word in a new sense never changes in the OED, so you’re now immortal.”

My wife was a poet and science fiction writer. She was a beautiful writer, and she taught Hume and me how to write.  She taught us to write a sentence, and then take out half the words.  Only then will it be a good sentence.

Were you pleased with the response to the book?  Did it have the impact that you were hoping for?

Gotlieb:

I know for a fact that it was the first textbook in Computers and Society.  Not many people know that.  I’m willing to bet that out of several hundred members of the ACM special interest group on Computers and Society,  there might be three who realize that Al and I wrote the first textbook on this, or could quote it.  I’m not going to say we initiated the topic, but we certainly wrote the first textbook on it.  But the number of people who are aware of that is not large.

On the other hand, some of the problems that were there are still there, and there are new ones.  And they continue.  By and large, the effects of computers on society have continued to multiply, so there are now even more important issues, and different issues.  I continue to follow things quite carefully.

The Enduring Social Issues in Computing (http://socialissues.cs.toronto.edu/2013/01/enduring-social-issues-in-computing/, Fri, 25 Jan 2013)

William H. Dutton, Professor of Internet Studies, Oxford Internet Institute, University of Oxford

2013 marks 40 years since the publication of Social Issues in Computing (Academic Press, 1973) by Calvin ‘Kelly’ Gotlieb and Allan Borodin, but the social issues they identified and explicated have become increasingly central to the social and computer sciences as well as the humanities.

It was the year after its publication that I began research on the social implications of computing. I was trained in political science and social research methods, with a focus on urban studies. I joined the Public Policy Research Organization (PPRO) at the University of California, Irvine, in 1974 to work with Ken Kraemer, Rob Kling, Jim Danziger, Alex Mood and John King on the Evaluation of Urban Information Systems – the URBIS Project. My initial role was focused on the survey components of the project, supporting the research underpinning our assessment of the impact of computing in American local governments.

Prior to URBIS, I had used computing as a tool for social research, but had not studied the impact of computing. One of the best sources I found for guidance on how a social scientist might think about computers in organizations and society was Social Issues in Computing, even though it was written by two computer scientists. I am amazed that forty years after its publication, despite many subsequent waves of technological change (the rise of microelectronics, personal computing and the Internet, among other digital media), this wonderful book remains seminal, refreshingly multidisciplinary, foresighted and inspiring – actually career-changing for me.

Pioneering

The book was groundbreaking – seminal – even though framed more as a textbook synthesis than a research report. The early 1970s, post-Vietnam War, were seething with debate over technology and society. However, even though computers and electronic data processing – already termed Information Technology – were being increasingly employed by organizations, study of the social implications of computing was limited and not anchored in the social sciences. With rare exceptions, such as Daniel Bell’s work on the information society and Alan Westin and Michael Baker’s work on privacy, social scientists viewed computers as calculators. I was consistently questioned about why a political scientist would be interested in what was seen as a technical or administrative tool.

Some of the most insightful work was being done by a core group of computer scientists who thought about the future of computing and were concerned, almost as an avocation, with its societal issues. Kelly Gotlieb and Allan Borodin were among the pioneers of this group. Other influential works, such as Joseph Weizenbaum’s Computer Power and Human Reason (1976), played major roles in building this field, but came later and spanned less terrain than Social Issues in Computing, which helped scope and map the field.

Forty Years of Influence

The ultimate test of the social issues identified by Gotlieb and Borodin is that they remain so relevant today. Consider some of the key issues identified in 1973, and how they are reflected in contemporary debate (Table). Update the computing terminology of 1973 and you have most of the questions that still drive social research. The enduring nature of these issues, despite great technical change, is illustrated well by Social Issues in Computing.

In the early 1970s, the very idea of every household having a computer – much less multiple devices carried by an individual – was considered fanciful ‘blue sky’ dreaming. Gotlieb and Borodin discussed the idea of an ‘information utility’ and were well aware of J. C. R. Licklider’s call for a global network, but ARPANET was only at the demonstration stage at the time they wrote, and governments were the primary adopters of computing and electronic data processing systems. Nevertheless, the issues they defined in 1973 remain remarkably central to discussions of the Internet, big data, social media and the mobile Internet forty years hence.

Table. Social Issues Over the Decades.

| Topic | Circa 1973 | Circa 2013 |
|---|---|---|
| Users | Staff, computing centres, user departments in governments and organizations | Individuals and ‘things’ in households, on the move, and in organizations and governments |
| Diffusion of technologies | Issues over the pace of change, and disparities within and across nations in computing, storage, speed, …, such as developing countries; focus on IT | Social and global digital divides, but also the decline of North America and West Europe in the new Internet world in Asia and the global South; greater focus on ICTs, including communication technology |
| Privacy | Data banks, information gathering, linking records, government surveillance | Big data, data sharing, surveillance |
| Security | Security of organizational and government computing systems | Cyber-security from individual mobile phones to large-scale systems and infrastructures, such as cloud computing |
| Transportation, urban and other planning systems | Systems, models, and simulations in planning and decision-making | Intelligent cities, smart transportation, digital companions for individuals |
| Capabilities and limitations | Artificial intelligence (AI): Can computers think? | AI, semantic web, singularity |
| Learning and education | Computer-assisted instruction | Online, blended and informal learning; global learning networks |
| Employment | Productivity, cost cutting, deskilling, information work, training and education | Reengineering work, collaborative network organizations, crowd-sourcing, out-sourcing, knowledge work, women in computing |
| Products and services | Anti-trust, software protection (copyright) | Intellectual property protection across all digital content versus open data, and innovation models |
| Power and politics | Power shifts within organizations, across levels of government, and nations; (de)centralization | (Dis)empowerment of individuals, nations and others in an expanding ecology of actors; (de)centralization; regime change |
| Attitudes and values | Priority given to values tied to technology, privacy, freedom, efficiency, equality | Global values and attitudes toward the Internet and related technologies, rights and value conflicts, such as freedom of expression |
| Responsibilities | Professional and social responsibilities of computer scientists, programmers, users in organizations | Responsibilities, norms, rights across all users and providers, including parents and children in households, bloggers, … |
| Policy and governance | Anti-trust, telecommunication policy, standards, privacy, IP | Privacy (government and service provider policies), expression, content filtering, child protection, and Internet governance and standards |

 

Social Implications in Context: Intended and Unanticipated

The book not only identified key social issues; it also set out many of the assumptions that still guide social research. It alerted readers to the direct as well as the secondary, unanticipated and unplanned implications of technical change. Gotlieb and Borodin were not technological determinists. They insisted that context is critical to the study of key issues: we need to understand the social implications in particular social and organizational contexts. To this day, discussions of big data or the Internet of Things are too often context-free. It is when placed in particular contexts that key questions take on added meaning and the potential for empirical study.

Of course, Social Issues in Computing did not identify all of the issues that would emerge over the coming decades. The authors did not anticipate such rising issues as the role of women in the computing professions, or the shift of the centre of gravity of computer use away from North America and West Europe to the rapidly developing economies of Asia and the global South. How could they have foreseen the current focus on child protection in Internet policy? They were not oracles, but they provided a framework that could map nearly all the social issues, intended and unintended, and that would be of value for decades.

The Case for Multi- and Inter-Disciplinary Research

As a political scientist moving into the study of computing in organizations, I found in Gotlieb and Borodin the case for embracing multi-disciplinary research. Their succinct, clear and well organized exposition of the technical, managerial, economic, political, legal, ethical, social and philosophical problem areas made the best case I had yet seen for focusing on computing from multiple disciplines. I quickly saw that my focus on political science was limited as a perspective on all the big issues, which required more interdisciplinary insights. At the same time, I also found their discussion of the power shifts tied to computing to provide an immediate focus for me as a political scientist, one that drove the major book emerging from our URBIS project, entitled Computers and Politics (Columbia University Press, 1982). However, it would have been impossible to write Computers and Politics had we not had a multi-disciplinary team of researchers collaborating on this issue.

In many ways, I have continued to pursue issues of power shifts from my early study of computers in government to my most recent work on the role of the Internet in empowering individuals across the world, creating what I have called a Fifth Estate role that is comparable to the Fourth Estate enabled by the press in liberal democratic societies. And throughout my career, I found the multidisciplinary study of the social issues of computing, information and communication technologies, and the Internet to be more compelling than any single disciplinary pursuit.

Inspiring Colleagues

I met Kelly Gotlieb a few years after the URBIS project had concluded, and was able to tell him how influential his work had been for me as a new entrant to this area of study. Looking back over the last 40 years of my own work, I am even more struck by just how influential he and his book were, and I was simply one of many who read Social Issues in Computing. There is no question in my mind why the former students and colleagues of Kelly Gotlieb and Allan Borodin want to acknowledge their book, and the seminal role it must have played in their intellectual and academic lives and in the broader study of the social issues in computing.

Bill Dutton, Oxford, 20 December 2012

William H. Dutton is Professor of Internet Studies at the Oxford Internet Institute, University of Oxford, and a Fellow of Balliol College. He became a co-principal investigator on the URBIS project at the University of California in 1974, supported by an NSF grant led by Professor Kenneth L. Kraemer. His most recent book is The Oxford Handbook of Internet Studies (Oxford University Press, 2013).

 

 

40th Anniversary Blog Introduction (http://socialissues.cs.toronto.edu/2013/01/40th-anniversary/, Fri, 25 Jan 2013)

John DiMarco, Department of Computer Science, University of Toronto

In 1973, Kelly Gotlieb and Allan Borodin’s seminal book, Social Issues in Computing, was published by Academic Press.  It tackled a wide array of topics: Information Systems and Privacy;  Systems, Models and Simulations; Computers and Planning; Computer System Security; Computers and Employment; Power shifts in Computing; Professionalization and Responsibility; Computers in Developing Countries; Computers in the Political Process; Antitrust actions and Computers; and Values in Technology and Computing, to name a few.  The book was among the very first to deal with these topics in a coherent and consistent fashion, helping to form the then-nascent field of Computing and Society. In the ensuing decades, as computers proliferated dramatically and their importance skyrocketed, the issues raised in the book have only become more important.  The year 2013, the 40th anniversary of the book, provides an opportunity to reflect on the many aspects of Computing and Society touched on by the book, as they have developed over the four decades since it was published. After soliciting input from the book’s authors and from distinguished members of the Computers and Society intellectual community, we decided that this blog, with insightful articles from a variety of sources, was a fitting and suitable way to celebrate the 40th anniversary of the book.

John DiMarco has maintained an avid interest in Computing and Society while pursuing a technical career at the Department of Computer Science at the University of Toronto, where he presently serves as IT Director. He is a regular guest-lecturer for the department’s “Computers and Society” course, and is the editor of this blog.