Computational Thinking Benefits Society

Jeannette M. Wing, Corporate Vice President, Microsoft Research

Computer science has produced, at an astonishing pace, technology that has transformed our lives with profound economic and societal impact.  Computer science’s effect on society was foreseen forty years ago by Gotlieb and Borodin in their book Social Issues in Computing.  Moreover, in the past few years, we have come to realize that computer science offers not just useful software and hardware artifacts, but also an intellectual framework for thinking, what I call “computational thinking” [Wing06].

Everyone can benefit from thinking computationally.  My grand vision is that computational thinking will be a fundamental skill—just like reading, writing, and arithmetic—used by everyone by the middle of the 21st Century.

This article describes how pervasive computational thinking has become in research and education.  Researchers and professionals in an increasing number of fields beyond computer science have been reaping benefits from computational thinking.  Educators in colleges and universities have begun to change undergraduate curricula to promote computational thinking to all students, not just computer science majors.  Before elaborating on this progress toward my vision, let’s begin by describing what is meant by computational thinking.

1.      What is computational thinking?

1.1   Definition

I use the term “computational thinking” as shorthand for “thinking like a computer scientist.”  To be more descriptive, however, I now define computational thinking (with input from Al Aho at Columbia University, Jan Cuny at the National Science Foundation, and Larry Snyder at the University of Washington) as follows:

Computational thinking is the thought processes involved in formulating a problem and expressing its solution(s) in such a way that a computer—human or machine—can effectively carry it out.

Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution.  The solution can be carried out by a human or machine.  This latter point is important.  First, humans compute.  Second, people can learn computational thinking without a machine.  Also, computational thinking is not just about problem solving, but also about problem formulation.

In this definition I deliberately use technical terms.  By “expressing” I mean creating a linguistic representation for the purpose of communicating a solution to others, people or machines.  The expressiveness of a language, e.g., a programming language, can often make the difference between an elegant and an inelegant solution, e.g., between a program provably free of certain classes of bugs and one that is not.  By “effective,” in the context of the Turing machine model of computation, I mean “computable” (or “decidable” or “recursive”); however, it remains open research to revisit models of computation, and thus the meaning of “effective,” when we consider what is computable by, say, biological or quantum computers [Wing08] or what is solvable by humans [Levin13, Wing08].

1.2   Abstraction is Key

Computer science is the automation of abstractions[1].  So, the most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, generalizing from specific instances, and parameterization. It is used to let one object stand for many. It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types.
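To make the abstract data type idea concrete, here is a minimal sketch (a hypothetical Python example of mine, not from the original text): a stack whose users see only the operations push, pop, and is_empty, while the representation of the values stays hidden and could be replaced without changing any client code.

```python
# A minimal sketch of an abstract data type (illustrative example,
# not from the article): a stack.  Users see only push, pop, and
# is_empty; the list used to hold the values is hidden, so it could
# be swapped for another representation without changing client code.

class Stack:
    def __init__(self):
        self._items = []          # hidden representation

    def push(self, value):
        """Add a value to the top of the stack."""
        self._items.append(value)

    def pop(self):
        """Remove and return the most recently pushed value."""
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())       # prints 2: last in, first out
print(s.is_empty())  # prints False: 1 is still on the stack
```

The point is the separation of interface from representation: a client who relies only on push, pop, and is_empty is insulated from the irrelevant detail of how the values are stored.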

Abstraction gives us the power to scale and deal with complexity. Applying abstraction recursively allows us to build larger and larger systems, with the base case (at least for traditional computer science) being bits (0’s and 1’s). In computing, we routinely build systems in terms of layers of abstraction, allowing us to focus on one layer at a time and on the formal relations (e.g., “uses,” “refines” or “implements,” “simulates”) between adjacent layers.  When we write a program in a high-level language, we are building on lower layers of abstractions. We do not worry about the details of the underlying hardware, the operating system, the file system, or the network; furthermore, we rely on the compiler to correctly implement the semantics of the language. The narrow-waist architecture of the Internet demonstrates the effectiveness and robustness of appropriately designed abstractions: the simple TCP/IP layer at the middle has enabled a multitude of unforeseen applications to proliferate at layers above, and a multitude of unforeseen platforms, communications media, and devices to proliferate at layers below.
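The habit of focusing on one layer of abstraction at a time can also be sketched in code (again a hypothetical Python example of mine, not from the article): the function below depends only on the abstraction “a source of lines of text,” so it works unchanged whether the lines come from memory, a disk file, or a network stream.

```python
# A small illustration of programming against a single layer of
# abstraction: count_words depends only on "something that yields
# lines of text."  Every layer below -- buffers, file systems,
# networks -- is hidden from it.  (Illustrative sketch, not from
# the article.)

import io

def count_words(lines):
    """Count whitespace-separated words across an iterable of lines."""
    return sum(len(line.split()) for line in lines)

# The same code running over one particular lower layer, an
# in-memory buffer:
buffer = io.StringIO("humans compute\nmachines compute too\n")
print(count_words(buffer))   # prints 5

# A disk file would work identically:
# with open("notes.txt") as f:
#     print(count_words(f))
```

This is one object standing for many: count_words is written once against the abstraction, and every concrete source of lines implements it.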

[1] Aho and Ullman in their 1992 Foundations of Computer Science textbook define Computer Science to be “The Mechanization of Abstraction.”

2.      Computational Thinking and Other Disciplines

Computational thinking has already influenced the research agenda of all science and engineering disciplines. Starting decades ago with the use of computational modeling and simulation, through today’s use of data mining and machine learning to analyze massive amounts of data, computation is recognized as the third pillar of science, along with theory and experimentation [PITAC05].

Consider just biology. The expedited sequencing of the human genome through the “shotgun algorithm” awakened the interest of the biology community in computational concepts (e.g., algorithms and data structures) and computational approaches (e.g., massive parallelism for high throughput), not just computational artifacts (e.g., computers and networks).  In 2005, the Computer Science and Telecommunications Board of the National Research Council (NRC) published a 468-page report laying out a research agenda to explore the interface between biology and computing [NRC05].  In 2009, the NRC Life Sciences Board’s study on Biology in the 21st Century recommended that “within the national New Biology Initiative, priority be given to the development of the information technologies and sciences that will be critical to the success of the New Biology” [NRC09].  Now at many colleges students can choose to major in computational biology.

The volume and rate at which scientists and engineers are now collecting and producing data—through instruments, experiments, simulations, and crowd-sourcing—are demanding advances in data analytics, data storage and retrieval, as well as data visualization. The complexity of the multi-dimensional systems that scientists and engineers want to model and analyze requires new computational abstractions. These are just two reasons that every scientific directorate and office at the National Science Foundation participated in the Cyber-enabled Discovery and Innovation, or CDI, program, an initiative started when I first joined NSF in 2007.  By the time I left, the fiscal year 2011 budget request for CDI was $100 million. CDI was in a nutshell “computational thinking for science and engineering [CDI11].”

Computational thinking has also begun to influence disciplines and professions beyond science and engineering. For example, areas of active study include algorithmic medicine, computational economics, computational finance, computational law, computational social science, digital archaeology, digital arts, digital humanities, and digital journalism. Data analytics is used in training Army recruits, detecting email spam and credit card fraud, recommending movies and books, ranking the quality of services, and personalizing coupons at supermarket checkouts.   Machine learning is used by every major IT company for understanding human behavior and thus to tailor a customer’s experience to his or her own preferences.  Every industry and profession talks about Big Data and Cloud Computing.  New York City and Seattle are vying to be named Data Science Capital of the US [Miller13].

3.      Computational Thinking and Education

In the early 2000s, computer science had a moment of panic. Undergraduate enrollments were dropping.  Computer science departments stopped hiring new faculty.  One reason I wrote my 2006 CACM article on computational thinking was to inject some positive thinking into our community.  Rather than bemoan the declining interest in computer science, I wanted us to shout to the world about the joy of computing and, more importantly, about the importance of computing.  Sure enough, today enrollments are skyrocketing (again).  Demand for graduates with computing skills far exceeds the supply; six-figure starting salaries offered to graduates with a B.S. in Computer Science are not uncommon.

3.1 Undergraduate Education

Campuses throughout the United States and abroad are revisiting their undergraduate computer science curricula. They are changing their first course in computer science to cover fundamental principles and concepts, not just programming.   For example, Carnegie Mellon revised its undergraduate first-year courses to promote computational thinking for non-majors [BryantSutnerStehlik10].  Harvey Mudd redesigned its introductory course with stellar success, including increasing the participation of women in computing [Klawe13].  At Harvard, “In just a few short years CS50 has rocketed from being a middling course to one of the biggest on campus, with nearly 700 students and an astounding 102-member staff [Farrell13].”  In the opening remarks of MIT’s introductory computer science course, Eric Grimson and John Guttag say, “I want to begin talking about the concepts and tools of computational thinking, which is what we’re primarily going to focus on here. We’re going to try and help you learn how to think like a computer scientist [GrimsonGuttag08].”

Many such introductory courses are now offered to, or required of, non-majors.  Depending on the school, the requirement might be a general requirement (CMU) or a distribution requirement, e.g., satisfying a science and technology (MIT), an empirical and mathematical reasoning (Harvard), or a quantitative reasoning (Princeton) requirement.

3.2 What about K-12?

Not until computational thinking is taught routinely at the K-12 levels of education will my vision be truly realized.  Surprisingly, as a community, we have made faster progress in spreading computational thinking to K-12 than I had expected.  We have professional organizations, industry, non-profits, and government policymakers to thank.

The College Board, with support from NSF, is designing a new Advanced Placement (AP) course that covers the fundamental concepts of computing and computational thinking (see the CS Principles Project).  Phase 2 of the CS Principles project is underway and will lead to an operational exam in 2016-2017.  Roughly forty high schools and ten colleges will pilot this course over the next three years.  Not coincidentally, the changes to the Computer Science AP course are consistent with the changes in introductory computer science courses taking place now on college campuses.

Another boost is expected to come from the NSF’s Computing Education for the 21st Century (CE21) program, started in September 2010 and designed to help K-12 students, as well as first- and second-year college students, and their teachers develop computational thinking competencies. CE21 builds on the successes of two prior NSF programs, CISE Pathways to Revitalized Undergraduate Computing Education (CPATH) and Broadening Participation in Computing (BPC). CE21 has a special emphasis on activities that support the CS 10K Project, an initiative launched by NSF through BPC.  CS 10K aims to catalyze a revision of high school curriculum, with the new AP course as a centerpiece, and to prepare 10,000 teachers to teach the new courses in 10,000 high schools by 2015.

Industry is also promoting the importance of computing for all.  Since 2006, with help from Google and later Microsoft, Carnegie Mellon has held summer workshops for high school teachers called “CS4HS.” These workshops are designed to deliver the message that there is more to computer science than computer programming.  CS4HS spread in 2007 to UCLA and the University of Washington. By 2013, under the auspices of Google, CS4HS had spread to 63 schools in the United States, 20 in China, 12 in Australia, 3 in New Zealand, and 28 in Europe, the Middle East and Africa. Also at Carnegie Mellon, Microsoft Research funds the Center for Computational Thinking, which supports both research and educational outreach projects.

Computing in the Core is a “non-partisan advocacy coalition of associations, corporations, scientific societies, and other non-profits that strive to elevate computer science education to a core academic subject in K-12 education, giving young people the college- and career-readiness knowledge and skills necessary in a technology-focused society.”  Serving on Computing in the Core’s executive committee are the Association for Computing Machinery, the Computer Science Teachers Association, Google, the IEEE Computer Society, Microsoft, and the National Center for Women and Information Technology. is a newly formed public non-profit and sister organization of Computing in the Core.  Its current corporate donors are Allen and Company, Amazon, Google, JPMorgan Chase & Co., Juniper Networks, LinkedIn, Microsoft, and Salesforce.  These companies and another 20 partners came together out of the need for more professionals trained with computer science skills. hosts a rich suite of educational materials and tools that run on many platforms, including smart phones and tablets.  It lists local high schools and camps throughout the US where students can learn computing.

Computer science has also gotten attention from elected officials. In May 2009, computer science thought leaders held an event on Capitol Hill to call on policymakers to make sure that computer science is included in all federally-funded educational programs that focus on science, technology, engineering, and mathematics (STEM) fields. The U.S. House of Representatives designated the first week of December as Computer Science Education Week, originally conceived by Computing in the Core and produced in 2013 by  In June 2013, U.S. Representative Susan Brooks (R-IN), Representative Jared Polis (D-CO), and others introduced legislation to bolster K-12 computer science education efforts.  A month later, U.S. Senators Robert Casey (D-PA) and Marco Rubio (R-FL) followed suit with similar legislation.

Computational thinking has also spread internationally.  In January 2012, the British Royal Society published a report stating that “’Computational thinking’ offers insightful ways to view how information operates in many natural and engineered systems” and recommending that “Every child should have the opportunity to learn Computing at school.” (“School” in the UK corresponds to K-12 in the US.)  Following that report, in February 2013 the UK Department for Education published a proposed national curriculum of study for computing [UKEd13], with the final version becoming statutory in September 2014.  In other words, by Fall 2014, all K-12 students in the UK will be taught concepts in computer science appropriate for their grade level.  Much of the legwork behind this achievement was done by the grassroots effort called “Computing at School,” which is helping to organize the teacher training needed in the UK to achieve the 2014 goal.

Asian countries are also making rapid strides in the same direction.  I am aware of efforts similar to those in the US and the UK taking place in China, Korea, and Singapore.

4.      Progress So Far and Work Still to Do

Nearly eight years after the publication of my CACM Viewpoint, how far have we come?  We have come a long way, along all dimensions: computational thinking has influenced the thinking in many other disciplines and many professional sectors; computational thinking, through revamped introductory computer science courses, has changed undergraduate curricula.  We are making inroads in K-12 education worldwide.

While we have made incredible progress, our journey has just begun.  We will see more and more disciplines make scholarly advances through the use of computing.  We will see more and more professions transformed by their reliance on computing for conducting business.  We will see more and more colleges and universities requiring an introductory computer science course to graduate.  We will see more and more countries adding computer science to K-12 curricula.

We need to continue to build up and on our momentum.  We still need to explain better to non-computer scientists what we mean by computational thinking and the benefits of being able to think computationally.  We need to continue to promote with passion and commitment the importance of teaching computer science to K-12 students.  Minimally, we should strive to ensure that every high school student around the world has access to learning computer science.  The true impact of what we are doing now will not be seen for decades.

Computational thinking is not just or all about computer science. The educational benefits of being able to think computationally—starting with the use of abstractions—enhance and reinforce intellectual skills, and thus can be transferred to any domain.  Science, society, and our economy will benefit from the discoveries and innovations produced by a workforce trained to think computationally.

Personal Notes and Acknowledgements

Parts of this article, which I wrote for Carnegie Mellon School of Computer Science’s publication The Link [Wing11], were based on earlier unpublished writings authored with Jan Cuny and Larry Snyder.  I thank them for letting me use our shared prose and for their own efforts in advocating computational thinking.

Looking back over how much progress has been made in spreading computational thinking, I am grateful for the opportunity I had while I was the Assistant Director of the Computer and Information Science and Engineering (CISE) Directorate of the National Science Foundation.  I had a hand in CDI and CE21 from their start, allowing me—through the reach of NSF—to spread computational thinking directly to the science and engineering research (CDI) and education (CE21) communities in the US.  Jan Cuny’s initiative and persistence led to NSF’s efforts with the College Board and beyond.

Since the publication of my CACM article, which has been translated into French and Chinese, I have received hundreds of email messages from people of all walks of life—from a retired grandfather in Florida to a mother in central Pennsylvania to a female high school student in Pittsburgh, from a Brazilian news reporter to the head of a think tank in Sri Lanka to an Egyptian student blogger, from artists to software developers to astrophysicists—thanking me for inspiring them and eager to support my cause.  I am grateful for everyone’s support.

Bibliography and Further Reading

Besides the citations given in the text, I recommend the following references: CS Unplugged [BellWittenFellows10] for teaching young children about computer science without using a machine; the textbook used in MIT’s 6.00 Introduction to Computer Science and Programming [Guttag13]; a soon-to-be-published book on the breadth of computer science, inspired by the Feynman lectures on physics [HeyPapay14]; a framing for principles of computing [Denning10]; and two National Research Council workshop reports [NRC10, NRC11], early attempts to scope out the meaning and benefits of computational thinking.

[BellWittenFellows10] Tim Bell, Ian H. Witten, and Mike Fellows, “Computer Science Unplugged,” March 2010.

[BryantSutnerStehlik10] Randal E. Bryant, Klaus Sutner, and Mark Stehlik, “Introductory Computer Science Education: A Deans’ Perspective,” Technical Report CMU-CS-10-140, August 2010.

[CDI11] Cyber-enabled Discovery and Innovation, National Science Foundation, 2011.

[Denning10] Peter J. Denning, “The Great Principles of Computing,” American Scientist, pp. 369-372, 2010.

[GrimsonGuttag08] Eric Grimson and John Guttag, 6.00 Introduction to Computer Science and Programming, Fall 2008, Massachusetts Institute of Technology: MIT OpenCourseWare (accessed January 3, 2014). License: Creative Commons Attribution-Noncommercial-Share Alike.

[Guttag13] John V. Guttag, Introduction to Computation and Programming Using Python, MIT Press, 2013.

[HeyPapay14] Tony Hey and Gyuri Papay, The Computing Universe, Cambridge University Press, scheduled for June 2014.

[Klawe13] Maria Klawe, “Increasing the Participation of Women in Computing Careers,” Social Issues in Computing, 2013.

[Farrell13] Michael B. Farrell, “Computer science fills seats, needs at Harvard,” Boston Globe, November 26, 2013.

[Levin13] Leonid Levin, “Universal Heuristics: How Do Humans Solve ‘Unsolvable’ Problems?,” Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence Lecture Notes in Computer Science Volume 7070, 2013, pp 53-54.

[Miller13] Claire Cain Miller, “Geek Appeal: New York vs. Seattle,” New York Times, April 14, 2013.

[NRC05] Frontiers at the Interface of Computing and Biology, National Research Council, 2005.

[NRC09] “A New Biology for the 21st Century,” National Research Council, 2009.

[NRC10] “Report of a Workshop on the Scope and Nature of Computational Thinking,” National Research Council, 2010.

[NRC11] “Report of a Workshop on Pedagogical Aspects of Computational Thinking,” National Research Council, 2011.

[PITAC05] President’s Information Technology Advisory Committee, “Computational Science: Ensuring America’s Competitiveness,” Report to the President, June 2005.

[UKEd13] UK Department for Education, “Computing Programmes of Study for Key Stages 1-4,” February 2013.

[Wing06] Jeannette M. Wing, “Computational Thinking,” Communications of the ACM, Vol. 49, No. 3, March 2006, pp. 33–35.

[Wing08] Jeannette M. Wing, “Five Deep Questions in Computing,” Communications of the ACM, Vol. 51, No. 1, January 2008, pp. 58–60.

[Wing11] Jeannette M. Wing, “Computational Thinking: What and Why,” The Link, March 2011.

Jeannette M. Wing is Corporate Vice President, Microsoft Research.  She is on leave as President’s Professor of Computer Science from Carnegie Mellon University, where she twice served as Head of the Computer Science Department. From 2007-2010 she was the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation. She received her S.B., S.M., and Ph.D. degrees from the Massachusetts Institute of Technology. Wing’s general research interests are in formal methods, programming languages and systems, and trustworthy computing.  She is a Fellow of the American Academy of Arts and Sciences, American Association for the Advancement of Science (AAAS), Association for Computing Machinery (ACM), and Institute of Electrical and Electronics Engineers (IEEE).

Computer Technology and Voting

Jeremy Epstein, Senior Computer Scientist, SRI International

Technology in the form of tabulators has been used for tallying votes for about 100 years.  ([1] provides a detailed history of tabulators and computers in voting and elections.)  Starting in the late 1970s, Roy Saltman of the National Bureau of Standards (now the National Institute of Standards and Technology) wrote several reports pointing out the risks in the use of computers in elections [2][3].  Other far-sighted individuals raised similar warnings, including Peter Neumann in the first issue of the RISKS Forum [4], who quoted a New York Times article on problems with software in vote tabulating systems [5] and wrote “This topic is just warming up”.  A later New York Times article [6] noted: “In Indiana and West Virginia, charges were made that company officials and the local voting authorities altered vote totals of several races, including those for two House seats, by secretly manipulating the computers.  In addition, computer consultants hired by the plaintiffs in three states and two independent experts working for The New York Times have examined the programs used in Indiana and West Virginia. All five concluded separately that the computer programs were poorly written and highly vulnerable to secret manipulation or fraud.”

The first generation of Direct Recording Electronic (DRE) systems, introduced in this period, used simple computers and pushbuttons for voters to record their votes, going beyond the mere tabulation heretofore in use.  Examination of first-generation DRE systems before use was cursory, as they were treated largely as identical to the lever-based mechanical voting systems they replaced.  Computer scientists and security experts generally were not engaged in the debate.  Rebecca Mercuri’s dissertation [7], defended just one week before the 2000 US Federal election, pointed out the risks of computerized vote counting and proposed the “Mercuri Method” for counting ballots on electronic voting systems – suggesting that voters be shown a printed summary of their vote before casting, to be used in case of a recount and to guard against accidental or intentional software errors that miscount votes.  While limitations of her approach have since become clear, her dissertation marks the beginning of computer scientists paying attention to the issue.

However, little attention was paid to the risks until after the 2000 Federal election, which introduced the terms “butterfly ballots” and “hanging chads” to the American lexicon and raised awareness of voting system accuracy.  The Help America Vote Act (HAVA) of 2002 spurred the purchase of second-generation DRE voting systems across America; these systems generally used touchscreens for voters to select their votes, and offered the promise of making private voting available to voters with disabilities, as well as the patina of modernization.

The story of the past decade is one of (mis)use of technology.  HAVA encouraged states to use Federal funds to purchase DREs, but without first establishing standards for reliability, security, usability, or accessibility.  While those standards were being developed, academics and advocates were analyzing systems.  In 2003, an academic group analyzed the leaked source code for the Diebold AccuVote DRE [8], thereby exposing the security risks in DREs and setting off a fierce battle that raged through the next decade.  The California Top To Bottom Review [9] conclusively established the insecurity of DREs, led directly to the decertification of most DREs in California, and set in motion the national move towards optical scan voting.  Today, while efforts to pass Federal laws to move away from DREs have failed, the change is underway at the state level – not only because of security concerns, but also because optical scan systems result in shorter lines on election day (since extra pencils for more voters are far cheaper than additional machines).  (Some states are lagging behind, notably Maryland and Georgia, but most other states have either changed over or are in the process of changing.)  However, optical scan technology for voting is not perfect – despite generations of taking the SAT and other standardized tests, many voters have difficulty understanding how to mark an optical scan ballot by coloring in the circles.  This has widened the voting gap for poorly educated, low-income, and elderly voters, who are less likely to have encountered optical scan forms in their school years and hence more likely to have their ballots mismarked, resulting in disenfranchisement.

Academic research into alternate methods for voting has yielded surprising results.  A summary screen of selected candidates does not help voters detect errors; an experiment showed that even when the voting system deliberately changed the results before the summary screen, most voters did not notice [10][11].  Usability plays a critical role in voter actions; an accidental experiment in Florida indicates that even color bars and placement of contests on screens can cause many voters to miss contests [12][13].  And giving voters the opportunity to vote on contests in any order also increases the number of races missed [14].   Voters like DREs, but don’t trust their accuracy.  In short, the social angle on voting is critical to helping voters cast their votes as intended.

The changes in voting equipment have not happened in a vacuum, though.  While security has been a primary driver, other concerns have included accessibility for voters with disabilities and support for voters using languages other than English.  ([1] provides extensive information on voting for voters with disabilities.)  The first and second generations of DREs included limited support for voters with disabilities – most notably audio ballots.  However, these systems were developed without significant consultation with disability specialists, and were effectively unusable by their intended audiences.  For example, audio ballots for voters with limited vision rely on complex navigation schemes, and typically take ten times longer to use than a visual display takes for voters without disabilities.  Even the mechanical aspects of DREs limit their usefulness: some are designed so that a wheelchair cannot get close enough for the voter to use the machine; others cannot be positioned at an angle that allows voting from a wheelchair.  Such social aspects of the voting experience are outside the normal scope of computer science, but must be addressed as part of solving the “computer” problem.

Other drivers have included making the lives of poll workers and election officials less stressful – and recognizing the critical roles they play in accurate and secure elections.  Computer scientists habitually write manuals that describe how to operate software, but only in recent years have we come to understand that users not only do not read manuals, but will take shortcuts to make their lives easier, even if those shortcuts impede the proper operation of the system.

Meanwhile, the ubiquity of the internet has pushed another headlong rush into internet voting, again without considering the implications.  Despite the assumption that internet voting would increase turnout, especially by young people, studies so far indicate that internet voting has no effect on overall turnout, simply reducing in-person voting by an equivalent number of voters.  And surprisingly, internet voting is most popular among middle-aged voters, with younger voters preferring the social aspect of going to the polls in person.

Internet voting unquestionably increases security risks, since now elections can be attacked by anyone anywhere in the world.  Perhaps less obviously, it also exacerbates the social issues in voting.  For example, voter coercion, which is limited (although certainly not eliminated) by in-person voting, becomes much easier.  As noted in a recent comment on a British Columbia site about their proposed use of internet voting [15], “Internet voting will, in effect, transfer the vote of many women and of some adults who live with a dominant spouse, parent or care giver. I have often heard women say that they vote differently than their husband or father thinks they do. […]  Sometimes they voted contrary to the signs in their front yard.”  Technology, in this case, can work against the social good of giving each person his/her secret vote.

What will the next forty years bring in voting?  Some in younger generations have suggested a move to direct democracy (where the public votes directly on issues), rather than representative democracy.  However, given the subtlety of the issues and the need for compromise, direct democracy is unlikely to produce happy results, and technology is unlikely to be the answer.  In the meantime, computer scientists will need to work more closely with psychologists, political scientists, usability and accessibility experts, and election experts to design systems that guarantee security, privacy, accuracy, and accessibility to all voters in modern democracies.

[1] Douglas Jones and Barbara Simons, “Broken Ballots: Will Your Vote Count?”, 2012.

[2] Roy Saltman, “Effective Use of Computing Technology in Vote-Tallying”, NBSIR 75-687, National Bureau of Standards, March 1975.

[3] Roy Saltman, “Accuracy, Integrity, and Security in Computerized Vote-Tallying”, NBS Special Publication 500-158, National Bureau of Standards, August 1988.

[4] RISKS Forum, volume 1, number 1.

[5] David Burnham, “Voting By Computer Requires Standards, A U.S. Official Says”, New York Times, July 30, 1985.

[6] David Burnham, “California Official Investigating Computer Voting System Security”, New York Times, December 18, 1985.

[7] Rebecca Mercuri, “Electronic Vote Tabulation: Checks & Balances”, PhD dissertation, University of Pennsylvania, 2000.

[8] Tadayoshi Kohno, Adam Stubblefield, Aviel D. Rubin, and Dan S. Wallach, “Analysis of an Electronic Voting System”, IEEE Symposium on Security and Privacy, 2004.  (First appeared as Johns Hopkins University Information Security Institute Technical Report TR-2003-19, July 23, 2003.)

[9] Top To Bottom Review, California Secretary of State, August 2007.

[10] C. Z. Acemyan, P. Kortum, and D. Payne, “Do Voters Really Fail to Detect Changes to their Ballots? An Investigation of Voter Error Detection”, Proceedings of the Human Factors and Ergonomics Society, Santa Monica, CA: Human Factors and Ergonomics Society.

[11] Sarah P. Everett, “The Usability of Electronic Voting Machines and How Votes Can Be Changed Without Detection”, PhD dissertation, Rice University, May 2007.

[12] Laurin Frisina, Michael C. Herron, James Honaker, and Jeffrey B. Lewis, “Ballot Formats, Touchscreens, and Undervotes: A Study of the 2006 Midterm Elections in Florida”, Election Law Journal.

[13] Arlene Ash and John Lamperti, “Florida’s District 13 Election in 2006: Can Statistics Tell Us Who Won?”

[14] K. K. Greene, M. D. Byrne, and S. N. Goggin, “How to build an undervoting machine: Lessons from an alternative ballot design”, Journal of Election Technology and Systems, 2013.

[15] From comments on the report of the Independent Panel on Internet Voting, October 2013.

Jeremy Epstein is Senior Computer Scientist at SRI International, where he focuses on software assurance and voting systems security for government customers including DHS S&T, NSF, and DoD. He is currently on loan from SRI to the National Science Foundation, where he leads the Secure and Trustworthy Cyberspace (SaTC) program, NSF’s flagship information security research program. Over his 20+ years in information security, he’s been a researcher, product developer, and consultant. He led projects including development of the first high assurance windowing system and one of the first multilevel secure UNIX systems, the first C2 evaluation of a network system, and the first Common Criteria evaluation of a commercial business software system. Jeremy has published extensively on security topics in professional magazines, journals, and conferences. He’s been an organizer of the ACSAC conference for over 15 years, and is associate editor-in-chief of IEEE Security and Privacy magazine. He holds an MS in Computer Sciences from Purdue University.

The Evolution of Computer-Assisted Instruction

Jane E. G. Lipson, Albert W. Smith Professor of Chemistry, Dartmouth College

In 1973 Gotlieb and Borodin published Social Issues in Computing, which included (Chapter 8 “Computer Capabilities and Limitations”) a brief, but fascinating, five pages on the promise and issues associated with computer-assisted instruction (CAI).  From the remove of forty years it is striking to look back and see the extent to which the concerns they raised still resonate. In reflecting on how the view of forty years ago compares with the reality of today it is useful to distinguish – in an approximate way – two different areas in which CAI has evolved. One is associated with the development of curricular material that either focuses on instruction in computational skills, or makes use of computer technology to deliver educational opportunities that would not otherwise be possible (I’ll call this ‘augmented instruction’). The second involves the use of computers and related technical advances (for example, in graphics and delivery of video) to approximate or replace the kind of educational encounters that students have traditionally experienced in person, for example, lectures, tutorial sessions, various modes of testing, etc. (my shorthand will be ‘replacement instruction’).

With respect to augmented instruction, Gotlieb and Borodin gave as an example the LOGO project at MIT. As described in Social Issues, LOGO provided an introduction to programming skills, and was used even early on by very young students.  It was successful at turning the abstraction of programming into concrete, tangible results – initially by allowing students to program the movement of a robotic turtle.  A good overview of the history of LOGO, formulated by a team at MIT, is available online.  LOGO ended up evolving very successfully; I was amused to learn that the version released in the 1990s, called ‘Microworlds’, turned out to be the route by which my own two sons (now a university freshman and junior, respectively) were introduced to programming. One can certainly see a bright line from the programming-oriented computational tools being developed when Social Issues was written to those available now, LOGO and derivatives being but one example.

Other technical materials in the category of augmented instruction illustrate the spectacular expansion in opportunities that computers have provided, in large part because of the parallel growth in graphics capabilities, coupled with their plunging economic cost. The most dramatic examples involve training via computer-generated ‘virtual reality’, a category that includes flight simulators, combat training, and surgical training (particularly laparoscopic surgery).  Research is being aggressively pursued in this area, as the goals of providing a lifelike environment (be it a geographical or an anatomical landscape) remain challenging.

Now we turn to so-called replacement instruction, the ways in which computers have (or can) replace or supplement more traditional educational routes. Gotlieb and Borodin suggested that the most significant potential for CAI was in the opportunity it offered to tailor educational material to maintain the interest of students on the edges of the distribution of preparedness – those who are high achievers and those who have learning difficulties.  Within the context of what was available in the early 1970’s the authors discussed some of the obstacles associated with ‘individualized instruction’: cost, the limited ‘cognitive’ ability of machines (an example they give is in dealing with spelling mistakes made by students answering computer-provided questions), and the need to operate within a structured school environment that would inevitably be slow to change.

Strikingly, Gotlieb and Borodin suggested that CAI “could come to be important if, as is considered desirable, schools become more decentralized and the educational process is extended over longer periods into adult life.”  What effectively made this prediction come to pass has been the astonishing availability of personal computers, which, arguably, no one anticipated in the 1970s.  As a result, learning has indeed become, to some extent, decentralized.  Students of all ages have effectively unlimited access to information, presented in both textual and visual ways.  One result of there being an enormous online ‘student body’ of potential individual learners has been the production of a vast amount of tutorial-like material; this is wildly varying in quality, to be sure.  One particularly fine example in this category is the Khan Academy, an educational organization/website created by Salman Khan, whose stated objective is to provide “A free world-class education for anyone anywhere”.  The content includes an impressive library of video tutorials (mainly math and science-related, but now branching out into the humanities and social sciences) and related exercises.  The production values are fairly simple, with the visuals involving an unseen lecturer (often Salman Khan) speaking, while writing notes on a board.  Viewers can send in questions, which are listed (along with the answers) below the video.  The Khan Academy has an inter/national reach, which is likely to expand further as a result of a recently announced partnership with Comcast.

We are thus now at the point that, instead of relying on help and guidance from teachers, librarians, and other educational professionals to supplement classroom material, students at all levels can access a huge array of resources, day or night, in order to enhance and amplify their understanding of almost any subject. I would argue that help in vetting these resources will continue to be a key role for educators, and that this point should be clearly made to students.  There can be a deceptive aura of authenticity and legitimate expertise associated with seeing concepts explained in videos and described in online texts, and misinformation is not always benign. The need to educate students in how to judge online credibility will only grow in importance.  That notwithstanding, the age of individual tutorial instruction has definitely arrived.

Now we turn to the last, and most controversial, topic, the role of CAI in replicating the experience of classroom instruction.  The relatively recent series of efforts to reach many thousands of students via online courses (or MOOCs – Massive Open Online Courses) has generated tremendous publicity.  In part, this is because some of these initiatives have involved individuals from highly acclaimed educational institutions, such as Harvard, MIT, and Stanford. My own views on this subject evolved into an OpEd piece that appeared earlier this year in the online magazine Quartz.  Of the various points I raised in that article there are two contrasting ones I would reiterate here:  On the one hand, it takes resources and proximity to attend university, and if the choice lies between a larger world of cyber-educated citizens or uneducated ones, I’ll take the former.  However, as a person who has been involved in research and teaching for over twenty-five years at one of the premier educational institutions in the U.S., I firmly believe that “The energy and excitement that animates a campus is generated by the creation, accrual and sharing of knowledge among a community of learners.  Only part of that occurs in organized and predictable gatherings. The sparks come from interactions that are informal and unplanned…”.

Well-delivered, thoughtfully produced online presentations of material will certainly allow for dramatic reach in terms of potential students, and may even serve to enhance what is available  at resource-strapped institutions. However, it is difficult to imagine that they will be able to replace completely the impact of being an active and engaged member of a community learning together in ‘real time’. We are currently in an era of significant experimentation in this regard, and whether the results end up providing credit-driven educational opportunities for the masses or supplementary training for the few remains to be seen.

Near the end of their remarks the authors comment “The computer is only the latest candidate in a long line of technological devices that were supposed to revolutionize education”.  Going back to the ‘augmented instruction’ facet of CAI, mentioned at the start of these remarks, there can be no doubt that computers have created a revolution in education, particularly those aspects of education which use technology to boost technology competence, and which leverage the ability of millions of users to access individually all the resources generously available online. However, things are not as clear regarding the issue of ‘replacement instruction’; just as Gotlieb and Borodin suggested, we continue to grapple with what the role can and should be regarding computer-aided delivery of learning in a group context. The authors noted “the larger issues of the effects on the skills, motivation, and values imparted to students, when machines are substituted for teachers” and I would argue that this concern applies even when the machines are supplying knowledge delivered by remote but unknowable humans in lieu of in-the-flesh teachers.

We are social beings and learning has primarily evolved as a social enterprise.  The vast personal reach enabled by computer technology is individually empowering and socially disruptive. It is causing us to question notions of how to define community, connectivity, and social responsibility.  As our sense of human social networks continues to change, so will our understanding of what it means to be part of a learning community, and this will certainly result in a continuing evolution for the role of computers in education.

Jane Elizabeth Gotlieb Lipson is the Albert W. Smith Professor of Chemistry at Dartmouth College. She is a Fellow of the American Physical Society (APS) and an Associate Editor of the American Chemical Society (ACS) journal ‘Macromolecules’. Honours have included the Camille and Henry Dreyfus Teacher-Scholar Award, the Arthur K. Doolittle Award of the ACS, and service as Chair of both the Polymer Physics Gordon Conference, and the Polymer Physics Division of the APS.  She is very proud of her long-ago peripheral involvement in the production of “Social Issues in Computing”, for which she did literature reference work – as acknowledged at the start of the book.


Increasing the Participation of Women in Computing Careers

Maria Klawe, President, Harvey Mudd College

In the 40 years since the publication of Social Issues in Computing, new awareness of a significant issue has arisen: the lack of women in computing and what can and should be done about it.

While women’s participation in the other STEM disciplines has risen over the past three decades, participation in computer science has decreased. In Canada and the U.S., the percentage of women who graduate from college with a degree in computer science is at a 30-year low. In 2012, only 13 percent of undergraduate computer science majors were female (Computing Research Association, 2012 Taulbee Survey).

Women represented nearly a third of those receiving bachelor’s degrees in CS during the technology boom in the 1980s. Yet a decade later, the percentage of women majors had dropped to 20 percent, and for the past 10 years, the national average has hovered around 12-14 percent.

The dotcom crash in 2001 caused both male and female students to lose interest in the major, but the decrease over the last decade was nearly twice as sharp for women. One report found that the number of males receiving CS bachelor’s degrees at research institutions fell 35 percent (from 10,903 in 2001 to 7,039 in 2009). The corresponding number of females plummeted 67 percent (from 2,679 to 892). Only now are we starting to see a turnaround as the U.S. enters a new tech boom, but female CS participation still greatly lags that of males.

The current demand from industry for software developers, computer scientists and computer engineers far outpaces the supply of CS graduates. This shortfall impedes technological and economic advancement. We cannot meet the needs of industry if we are drawing from half the population. We also cannot develop the best, most creative solutions when teams are homogenous. Clearly, there is a need to ignite women’s interest in CS, particularly at the undergraduate level.

Why so few women?

With major technology companies and startups competing intensely for talented college graduates in computer science, why do so few women enter the field? Research shows that young women are reluctant to study computer science for three reasons: 1) Young women think computer science is boring; 2) Young women think they won’t be good at computer science; and, 3) Young women think they will not feel comfortable in the CS culture–they view computer scientists as nerdy people with poor social skills.

Successful efforts to increase participation

Fortunately, colleges and universities that have made a serious commitment to increasing female participation in CS have had substantial success—starting in the mid-1990s with Carnegie Mellon University and the University of British Columbia, and, more recently, Harvey Mudd College. Over a four-year period, Harvey Mudd quadrupled the percentage of CS majors who were female from 10 percent in 2006 to 42 percent in 2010. Since then, that percentage has ranged between 35 and 45 percent. The CS department accomplished this by implementing three innovative practices: a redesigned introductory course, early research opportunities during the first few summers, and sponsored trips to the Grace Hopper Celebration of Women in Computing.  These changes, plus the recent intense demand for CS graduates, also resulted in a dramatic increase in the total number of CS majors at Harvey Mudd, which roughly tripled from 2006 to 2013. Thus the actual number of female CS majors graduating from Harvey Mudd each year has increased by close to a factor of ten over that period.  While institutions differ in many respects, most of these approaches can be replicated or adapted by institutions seeking to encourage women to study CS.

The redesigned intro course, CS 5  

To spark interest in CS—and demonstrate that it is anything but boring—Harvey Mudd’s computer science faculty reframed the introductory course, CS 5, from “learn to program in Java” to “creative problem solving in science and engineering using computational approaches.”  The new course covers the same computer science concepts at the same level of rigor as before, and students do even more programming, though in Python rather than Java. Students find Python a more flexible and forgiving language than Java, and unlike some other “easy to learn” languages, Python is popular in industry. Students are given a choice of problems for each homework assignment in contexts such as epidemiology, robotics or physics. Both male and female students report that they are excited—and often surprised—to discover that CS is a much more fascinating and rewarding discipline than they previously thought.  As a result of these changes, CS 5 went from being Mudd students’ least favorite required first-semester course to the most popular.  Moreover its popularity spread to students at the other Claremont Colleges. This semester more than half of the students in CS 5 were from the other colleges.

Building confidence

To increase students’ confidence, CS 5 is divided into sections according to students’ prior computing experience. Students with little or no programming experience are placed in CS 5 Gold, while students with substantial programming experience are in CS 5 Black (Harvey Mudd’s school colors are black and gold). Students with even more programming experience are placed in CS 42, a course that combines the material in CS 5 with the next CS course, CS 60.  In some years Harvey Mudd also offers CS 5 Green, which covers all the material in a biology context. In all sections, instructors deliberately work to eliminate the “macho” effect, where a few students with more experience inadvertently intimidate and discourage less-experienced, but equally talented, classmates by showing off their knowledge. We have found a private conversation along the lines of “I’m delighted to have such a knowledgeable student in my class, but some of the other students may find your knowledge intimidating, so I’d like to have our conversations on a one-on-one basis rather than in class” very effective. Eliminating the “macho” effect in our introductory courses has significantly improved the culture in all CS courses at Harvey Mudd and resulted in a more supportive learning environment for all students. The grouping by prior experience has created a confidence-boosting atmosphere, especially for beginners, who are disproportionately women and students of color.

Early research opportunities

A number of studies have shown that research experiences for undergraduate women increase retention in STEM fields and the likelihood they will attend graduate school. From 2007 to 2010 a grant allowed us to offer summer research experiences to about 10 female students at the end of their first year. Faculty created research projects suitable for students who had completed only one or two CS courses; these projects allowed first-years the chance to immediately apply their knowledge, boost their confidence and deepen their interest in the discipline. Students embraced this opportunity to engage in 10 weeks of intensive, challenging summer research on projects such as artificial intelligence, robotics and educational video games. They discovered they could do CS research, do it well and enjoy doing it.  This program helped increase the number of female CS majors during the years of transitioning from 10 percent to about 40 percent. By the time we achieved critical mass in terms of the number of women in all CS courses, we no longer found it necessary to continue this program, though we still offer summer research experiences to many students, male and female, at the end of their first year.

A welcoming CS culture

To increase female students’ sense of belonging in the technology field, in 2006 Harvey Mudd began taking a large cohort of first-year female students to the Grace Hopper Celebration of Women in Computing. At Hopper, students see the variety of jobs available within the discipline and meet successful role models at all career stages, as well as experience an effervescent and welcoming culture. The conference has proved to be a powerful tool in encouraging young women to take more computer science classes and ultimately major in computer science.

The path to choosing to major in computer science

The redesigned CS 5 introductory course and the Hopper conference work together to encourage female students to take the next course in the CS sequence, CS 60. By ensuring that the CS 60 experience is also interesting and enjoyable, we motivate many to take the third course, CS 70. For most of our female students, it is during or after CS 70 that they decide to make CS their major.

Transferability to other institutions

A National Science Foundation grant (CPATH-2) for $800,000 allowed us to disseminate our highly successful CS 5 curriculum and share our approaches with other institutions, many of which are now teaching the course in its entirety or adapting it with great results. Of course Harvey Mudd has two advantages that are not present in every institution. First, all students must take a CS course in their first semester. Second, students do not have to choose their major until the end of their second year, which gives them time to try out several CS courses before choosing a major. Still, every institution can make its introductory course fun and non-intimidating. CS 5 is not more expensive or challenging to teach than other introductory courses. Eliminating the “macho” effect takes a very modest amount of extra time by instructors and has a huge payoff. Taking students to Hopper costs about $750 per student, depending on the location. There are now several regional versions of Hopper, making attendance even more affordable. Finally, many foundations provide support for summer research for undergraduates.  Another approach that has worked well at UBC is to encourage double majors in disciplines with large percentages of females such as biology, chemistry, psychology or statistics. The truth is that every CS department that makes a serious extended commitment to increasing women’s participation in CS can make substantial progress.

The world of computing–and the world in general–will benefit.

For more information about Harvey Mudd’s effective practices to increase women’s participation in computing careers, please see:  

Evaluating a Breadth-First CS 1 for Scientists. Dodds, Libeskind-Hadas, Alvarado, Kuenning. In Proceedings of SIGCSE 2008.

Women in CS: An Evaluation of Three Promising Practices. Alvarado and Dodds.  In Proceedings of SIGCSE 2010.

HMC CS 5 course website:

For more information on national statistics and issues surrounding women in CS:

Women in Computing–Take 2. Klawe, Whitney and Simard. In Communications of the ACM 2009.

Computing Research Association’s Taulbee Report:

Watch a short video of one of Harvey Mudd’s engaging CS summer research projects:

Maria Klawe began her tenure as Harvey Mudd College’s first female president in 2006. Prior to joining HMC, she served as dean of engineering and professor of computer science at Princeton University. Klawe joined Princeton from the University of British Columbia where she served in various roles from 1988 to 2002. Prior to UBC, Klawe spent eight years with IBM Research in California and two years at the University of Toronto. She received her Ph.D. (1977) and B.Sc. (1973) in mathematics from the University of Alberta. Klawe is a member of the board of Microsoft Corporation, Broadcom Corporation and the nonprofit Math for America, a fellow of the American Academy of Arts & Sciences, a trustee for the Mathematical Sciences Research Institute in Berkeley and a member of both the Stanford Engineering Advisory Council and the Advisory Council for the Computer Science Teachers Association.


What Aspects of E-Learning Will Have the Biggest Impact in the Next Decade?

Anant Agarwal, President, edX

In the 40 years since the publication of Social Issues in Computing, the area of computer-assisted instruction (now called e-learning) has developed in huge and unexpected ways.  Who would have imagined that college-level courses would be delivered via computer to students in every country? Or that this delivery would provide a means to generate and mine vast amounts of data for pedagogical research? The social implications of these developments are enormous, and not yet fully understood.  These technologies are tools to democratize education through access, but they can also, through research, enhance how education is delivered and packaged, online and on campus.  E-learning is in its infancy: it is evolving quickly, with much unexplored potential and much more to come.

The next decade will bring continued, rapid change in the educational landscape, with the biggest impact in three areas: blended learning on campus, education research and continuous learning.

Early results of several blended learning pilots, where online learning activities are blended with in-person interaction on campus, suggest improved learning outcomes for students. Instructors too will benefit from using MOOC technology as a next generation textbook. Professors will have a choice to use multiple sources of content in their classrooms that best fit the topic, their teaching style and their students’ learning styles.  For example, MIT professor Michael Cima uses online assessments common in MOOCs in an on-campus class to see if more frequent assessment will improve learning outcomes. We expect to see more of this kind of blended learning on college campuses in the next decade, with SPOCs (Small Private Online Courses) becoming more popular.

In research, I anticipate we will discover more about how people learn, as we work with our partner institutions to mine MOOCs’ big data.  For example, researcher Philip Guo studied student engagement as it relates to video length. By mining five million video viewing sessions, he concluded that six-minute videos were the ideal length.  In addition, a team of researchers at Harvard and MIT, led by David Pritchard and Lori Breslow, recently released their initial findings (“Studying Learning in the Worldwide Classroom: Research into edX’s First MOOC”, RPA Journal, June 14, 2013, by Lori Breslow, David E. Pritchard, Jennifer DeBoer, Glenda S. Stump, Andrew D. Ho, and Daniel T. Seaton). One of their findings relates particularly to the social aspects of e-learning.  They found that a student who worked offline with someone else in the class, or with someone with expertise in the subject, scored almost three points higher than someone working alone. Pritchard and Breslow’s group concluded that, “This is a noteworthy finding as it reflects what we know about on-campus instruction:  that collaborating with another person, whether novice or expert, strengthens learning.”  This finding will prove particularly insightful for colleges that incorporate online instruction into their on-campus curriculum in blended courses, discussed above.
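The kind of session-mining analysis described above can be sketched with a toy example. All of the data below are invented for illustration (they are in no way edX's data); the point is only the method: bucket viewing sessions by video length and compare typical engagement per bucket.

```python
from collections import defaultdict

# Each invented session: (video_length_minutes, minutes_actually_watched)
sessions = [
    (3, 3.0), (3, 2.5), (6, 5.5), (6, 6.0), (6, 4.8),
    (12, 5.0), (12, 6.5), (12, 4.0), (20, 5.5), (20, 7.0),
]

# Group sessions by video length, recording the fraction of the video watched.
by_length = defaultdict(list)
for length, watched in sessions:
    by_length[length].append(watched / length)

# Average engagement per length bucket.
for length in sorted(by_length):
    avg = sum(by_length[length]) / len(by_length[length])
    print(f"{length:2d} min videos: {avg:.0%} watched on average")
```

With this made-up data, average engagement drops sharply for longer videos, which is the shape of result the paragraph describes; a real study would of course control for many more factors.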

And finally, I predict that we will see alternative education paths, such as continuous learning, start to emerge. With continuous learning, students may complete first-year subjects through MOOCs, then study for two years in a traditional campus setting—experiencing the magic of campus and group interaction—then enter the workforce to gain real-world skills, taking MOOCs or other courses as needed throughout their careers in place of the final year.

Anant Agarwal is the president of edX, an online learning destination founded by Harvard and MIT. Anant taught the first edX course on circuits and electronics from MIT, which drew 155,000 students from 162 countries. He has served as the director of CSAIL, MIT’s Computer Science and Artificial Intelligence Laboratory, and is a professor of electrical engineering and computer science at MIT. He is a successful serial entrepreneur, having co-founded several companies including Tilera Corporation, which created the Tile multicore processor, and Virtual Machine Works. His work on Organic Computing was selected by Scientific American as one of 10 World-Changing Ideas in 2011, and he was named in Forbes’ list of top 15 education innovators in 2012. Anant is a member of the National Academy of Engineering, a fellow of the American Academy of Arts and Sciences, and a fellow of the ACM. He hacks on WebSim in his spare time. Anant holds a Ph.D. from Stanford and a bachelor’s from IIT Madras.

Artificial Intelligence: Then and Now

Hector Levesque, Professor of Computer Science, University of Toronto

In the forty years since the publication of “Social Issues in Computing” by Gotlieb and Borodin, much has changed in the view of the potential and promise of the area of Artificial Intelligence (AI).

The general view of AI in 1973 was not so different from the one depicted in the movie “2001: A Space Odyssey”, that is, that by the year 2001 or so, there would be computers intelligent enough to be able to converse naturally with people.  Of course it did not turn out this way.  Even now no computer can do this, and none are on the horizon.  In my view, the AI field slowly came to the realization that the hurdles that needed to be cleared to build a HAL 9000 went well beyond engineering, that there were serious scientific problems that would need to be resolved before such a goal could ever be attained.  The field continued to develop and expand, of course, but by and large turned away from the goal of a general autonomous intelligence to focus instead on useful technology.  Instead of attempting to build machines that can converse in English, for example, we concentrate on machines that can respond to spoken commands or locate phrases in large volumes of text.  Instead of a machine that can oversee the housework in a home, we concentrate on machines that can perform certain specific tasks, like vacuuming.

Many of the most impressive of these applications rely on machine learning, and in particular, learning from big data.  The ready availability of massive amounts of online data was truly a game changer.  Coupled with new ideas about how to do automatic statistics, AI moved in a direction quite unlike what was envisaged in 1973.  At the time, it was generally believed that the only way to achieve flexibility, robustness, versatility and so on in computer systems was to sit down and program those qualities in.  Since then it has become clear that this is very difficult to do because the necessary rules are so hard to come by.  Consider riding a bicycle, for example.  Under what conditions should the rider lean to the left or to the right, and by how much?  Instead of trying to formulate precise rules for this sort of behaviour in a computer program, a system could instead learn the necessary control parameters automatically from large amounts of data about successful and unsuccessful bicycle rides.  For a very wide variety of applications, this machine learning approach to building complex systems has worked out extremely well.
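To make the bicycle example concrete, here is a minimal sketch (our own illustration, not from the essay) of what "learning control parameters from data" can mean.  Instead of hand-coding a steering rule, we fit the two parameters of a linear steering policy from ride data by gradient descent.  The data is synthetic, and the policy form, names, and numbers are all assumptions made for this sketch.

```python
# Hypothetical sketch: learn a steering policy from example rides rather
# than hand-coding a rule.  The "data" is synthetic: steering corrections
# generated by an unknown underlying rule (2.0*tilt + 0.5*rate) plus noise.
import random

random.seed(0)

# Each example: (tilt angle, tilt rate) -> steering correction applied
# by a successful rider.
data = []
for _ in range(200):
    tilt = random.uniform(-0.3, 0.3)
    rate = random.uniform(-1.0, 1.0)
    steer = 2.0 * tilt + 0.5 * rate + random.gauss(0, 0.01)
    data.append((tilt, rate, steer))

def fit(examples, lr=0.5, epochs=500):
    """Fit steer ~ w1*tilt + w2*rate by plain gradient descent."""
    w1 = w2 = 0.0
    n = len(examples)
    for _ in range(epochs):
        g1 = g2 = 0.0
        for tilt, rate, steer in examples:
            err = (w1 * tilt + w2 * rate) - steer
            g1 += err * tilt / n
            g2 += err * rate / n
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2

w1, w2 = fit(data)
print(round(w1, 1), round(w2, 1))  # recovers values close to 2.0 and 0.5
```

The point of the sketch is that nobody wrote down the rule "lean twice as hard as the tilt": the learner recovered the control parameters from examples alone, which is exactly the shift in approach described above.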

However, it is useful to remember that this is an AI technology whose goal is not necessarily to understand the underpinnings of intelligent behaviour.  Returning to English, for example, consider answering a question like this:

The ball crashed right through the table because it was made of styrofoam.  What was made of styrofoam, the ball or the table?

Contrast that with this one:

The ball crashed right through the table because it was made of granite.  What was made of granite, the ball or the table?

People (who know what styrofoam and granite are) can easily answer such questions, but it is far from clear how learning from big data would help.  What seems to be at issue here is background knowledge: knowing some relevant properties of the materials in question, and being able to apply that knowledge to answer the question.  Many other forms of intelligent behaviour seem to depend on background knowledge in just this way.  But what is much less clear is how all this works: what it would take to make this type of knowledge processing work in a general way.  At this point, forty years after the publication of the Gotlieb and Borodin book, the goal seems as elusive as ever.
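For illustration only, here is a toy sketch of the knowledge-based reasoning the styrofoam/granite example calls for.  The materials table, the sturdiness scores, and the resolution rule are all invented for this sketch; a real resolver that works in a general way is, as noted above, the elusive part.

```python
# A toy illustration (not a real NLP system): resolving "it" using
# hand-coded background knowledge about materials -- the kind of
# knowledge that learning from big data does not obviously supply.

# Background knowledge: relative sturdiness of materials (invented scale).
STURDINESS = {"styrofoam": 1, "granite": 9, "wood_table": 5}

def resolve(material):
    """Which object was made of `material`: the 'ball' or the 'table'?

    Reasoning: for the ball to crash *through* the table, the ball must
    be sturdier than the table.  A flimsy material explains the event
    only if it describes the table; a hard material only if it describes
    the ball.
    """
    table_default = STURDINESS["wood_table"]
    if STURDINESS[material] < table_default:
        return "table"   # a styrofoam table would give way
    else:
        return "ball"    # a granite ball would smash through

print(resolve("styrofoam"))  # table
print(resolve("granite"))    # ball
```

The sketch works only because the relevant facts were typed in by hand; the open problem is how a system could acquire and apply such knowledge broadly enough to answer arbitrary questions of this kind.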

Hector Levesque received his BSc, MSc and PhD all from the University of Toronto; after a stint at the Fairchild Laboratory for Artificial Intelligence Research in Palo Alto, he joined the Department of Computer Science at the University of Toronto, where he has remained since 1984.  He has done extensive work in a variety of topics in knowledge representation and reasoning, including cognitive robotics, theories of belief, and tractable reasoning.  He has published three books and over 60 research papers, four of which have won best paper awards of the American Association for Artificial Intelligence (AAAI); two others won similar awards at other conferences. Two of the AAAI papers went on to receive AAAI Classic Paper awards, and another was given an honourable mention. In 2006, a paper written in 1990 was given the inaugural Influential Paper Award by the International Foundation of Autonomous Agents and Multi-Agent Systems. Hector Levesque was elected to the Executive Council of the AAAI, was a co-founder of the International Conference on Principles of Knowledge Representation and Reasoning, and is on the editorial board of five journals, including the journal Artificial Intelligence.  In 1985, Hector Levesque became the first non-American to receive IJCAI’s Computers and Thought Award. He was the recipient of an E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada for 1990-91. He is a founding Fellow of the AAAI and was a Fellow of the Canadian Institute for Advanced Research from 1984 to 1995. He was elected to the Royal Society of Canada in 2006, and to the American Association for the Advancement of Science in 2011. In 2012, Hector Levesque received the Lifetime Achievement Award of the Canadian AI Association, and in 2013, the IJCAI Award for Research Excellence.

Ubiquitous Computing

David Naylor, President, University of Toronto

Contributing to the Social Issues in Computing blog has caused a subacute exacerbation of my chronic case of impostor syndrome. I am, after all, a professor of medicine treading in the digital backyard of the University’s renowned Department of Computer Science [CS].

That said, there are three reasons why I am glad to have been invited to contribute.

First, with a few weeks to go before I retire from the President’s Office, it is a distinct privilege for me to say again how fortunate the University has been to have such a remarkable succession of faculty, staff, and students move through CS at U of T.

Second, this celebration of the 40th anniversary of Social Issues in Computing affords me an opportunity to join others in acknowledging Kelly Gotlieb, visionary founder of the Department, and Allan Borodin, a former Department chair who is renowned worldwide for his seminal research work.

Third, it seems to me that Social Issues in Computing both illustrated and anticipated what has emerged as a great comparative advantage of our university and similar large research-intensive institutions world-wide. We are able within one university to bring together scholars and students with a huge range of perspectives and thereby make better sense of the complex issues that confront our species on this planet. That advantage obviously does not vitiate the need to collaborate widely. But it does make it easier for conversations to occur that cross disciplinary boundaries.

All three of those themes were affirmed last month when two famous CS alumni were awarded honorary doctorates during our Convocation season. Dr Bill Buxton and Dr Bill Reeves are both in their own ways heirs to the intellectual legacy of Gotlieb, Borodin, and many other path-finders whom they both generously acknowledged at the events surrounding their honorary degrees. Among those events was a memorable celebratory dinner at the President’s Residence, complete with an impromptu performance by a previous honorary graduate — Dr Paul Hoffert, the legendary musician and composer, who is also a digital media pioneer. But the highlight for many of us was a stimulating conversation at the MaRS Centre featuring the CS ‘Double Bill’, moderated by another outstanding CS graduate and faculty member, Dr Eugene Fiume.

Drs Buxton and Reeves were part of a stellar group of University of Toronto CS graduate students and faculty back in the 1970s and 1980s. At that time, the Computer Science Department, and perhaps especially its Dynamic Graphics Project, was at the very heart of an emerging digital revolution.

Today, Dr Buxton is Principal Researcher at Microsoft and Dr Reeves is Technical Director at Pixar Animation Studios. The core businesses of those two world-renowned companies were unimaginable for most of us ordinary mortals forty years ago when Social Issues in Computing was published. Human-computer interaction was largely conducted through punch cards and monochrome text displays. Keyboards, colour graphical display screens, and disk drives were rudimentary and primitive. Indeed, they were still called ‘peripherals’ and one can appreciate their relative status in the etymology of the word. The mouse and the graphical user interface, justly hailed as advances in interface design, were steps in the right direction. But many in U of T’s CS department and their industry partners saw that these were modest steps at best, failing to sufficiently put the human in human-computer interaction. Toronto’s CS community accordingly played a pivotal role in shaping two themes that defined the modern digital era. The first was the primacy of user experience. The second was the potential for digital artistry. From Alias / Wavefront / Autodesk and Maya to multi-touch screens, breakthroughs in computer animation and Academy Award-winning films, Toronto’s faculty, staff, students and alumni have been at the forefront of humanized digital technology.

What will the next forty years bring? The short answer is that I have no idea. We live in an era of accelerating change that is moving humanity in directions that are both very exciting and somewhat unsettling. I’ll therefore take a shorter-term view and, as an amateur, offer just a few brief observations on three ‘Big Things’ in the CS realm.

First and foremost, we seem to have entered the era of ubiquitous computing. Even if the physical fusion occurs only rarely (e.g. externally programmable cardiac pacemakers), in a manner of speaking we are all cyborgs now. Our dependency on an ever-widening range of digital devices borders on alarming, and evidence of the related threats to privacy continues to grow. However, the benefits have also been incalculable in terms of safety, convenience, relief from drudgery, productivity and connectivity. At the centre of this human-computer revolution has been the rise of mobile computing – and the transformation of the old-fashioned cell phone into a powerful hand-held computer. Add in tablets, notebooks, and ultra-lightweight laptops and the result is an intensity of human-computer interaction that is already staggering and growing exponentially. The trans-disciplinary study of the social and psychological implications of this shift in human existence will, I hope, remain a major priority for scholars and students at the University of Toronto in the years ahead as others follow the lead of Gotlieb, Borodin and their collaborators.

A second topic of endless conversation is ‘The Cloud’. I dislike the term, not because it is inaccurate, but precisely because it aptly captures a new level of indifference to where data are stored. I cannot fully overcome a certain primitive unease about the assembly of unthinkable amounts of data in places and circumstances about which so little is known. Nonetheless, the spread of server farms and growth of virtual machines are true game-changers in mobile computing. Mobility on its own promotes ubiquity but leads to challenges of integration – and it is the latter challenge that has been addressed in part by cloud-based data storage and the synergy of on-board and remote software.

A third key element, of course, is the phenomenon known as ‘Big Data’. The term seems to mean different things to different audiences. Howsoever one defines ‘Big Data’, the last decade has seen an explosion in the collection of data about everything. Ubiquitous computing and the rise of digitized monitoring and logging have meant that human and automated mechanical activity alike are creating, on an almost accidental basis, a second-to-second digital record that has reached gargantuan proportions. We have also seen the emergence of data-intensive research in academe and industry. Robotics and digitization have made it feasible to collect more information in the throes of doing experimental or observational research. And the capacity to store and access those data has grown apace, driving the flywheel faster.

Above all, we have developed new capacity to mine huge databases quickly and intelligently. Here it is perhaps reasonable to put up a warning flag. There is, I think, a risk of the loss of some humanizing elements as Big Data become the stock in trade for making sense of our world. Big Data advance syntax over semantics. Consider: The more an individual’s (or a society’s) literary preferences can be reduced to a history of clicks – in the form of purchases at online bookstores, say – the less retailers and advertisers (perhaps even publishers, critics, and writers) might care about understanding those preferences. Why does someone enjoy this or that genre? Is it the compelling characters? The fine writing? The political engagement? The trade-paperback format?

On this narrow view, understanding preferences does not matter so much as cataloguing them. Scientists, of course, worry about distinguishing causation from correlation. But why all the fuss about root causes, the marketing wizards might ask: let’s just find the target, deliver the message, and wait for more orders. Indeed, some worry that a similar indifference will afflict science, with observers like Chris Anderson, past editor of Wired, arguing that we may be facing ‘the end of theory’ as ‘the data deluge makes the scientific method obsolete’. I know that in bioinformatics, this issue of hypothesis-free science has been alive for several years. Moreover, epidemiologists, statisticians, philosophers, and computer scientists have all tried to untangle the changing frame of reference for causal inference and, more generally, what ‘doing science’ means in such a data-rich era.

On a personal note, having dabbled a bit in this realm (including an outing in Statistics in Medicine as the data broker for another CS giant, Geoff Hinton, and a still-lamented superstar who moved south, Rob Tibshirani), I remain deeply uncertain about the relative merits and meanings of these different modes of turning data into information, let alone deriving knowledge from the resulting information.

Nonetheless, I remain optimistic. And in that regard, let me take my field, medicine, as an example. To paraphrase William Osler (1849-1919), one of Canada’s best-known medical expatriates, medicine continues to blend the science of probability with the art of uncertainty. The hard reality is that much of what we do in public health and clinical practice involves educated guesses. Evidence consists of effect sizes quantifying relative risks or benefits, based on population averages. Yet each individual patient is unique. Thus, for each intervention – be it a treatment of an illness or a preventive manoeuvre – many individuals must be exposed to the risks and costs of intervention for every one who benefits. Eric Topol, among others, has argued that new biomarkers and monitoring capabilities mean that we are finally able to break out of this framework of averages and guesswork. The convergence of major advances in biomedical science, ubiquitous computing, and massive data storage and processing capacity has meant that we are now entering a new era of personalized or precision medicine. The benefits should include not only more effective treatments with reduced side-effects from drugs. The emerging paradigm will also enable customization of prevention, so that lifestyle choices – including diet and exercise patterns – can be made with a much clearer understanding of the implications of those decisions for downstream risk of various disease states.

We are already seeing a shift in health services in many jurisdictions through adoption of virtual health records that follow the patient, with built-in coaching on disease management. Combine these records with mobile monitoring and biofeedback and there is tremendous potential for individuals to take greater responsibility for the management of their own health. There is also the capacity for much improved management of the quality of health services and more informed decision-making by professionals and patients alike.

All this, I would argue, is very much in keeping with ubiquitous computing as an enabler of autonomy, informed choice, and human well-being. It is also entirely in the spirit of the revolutionary work of many visionaries in CS at the University of Toronto who first began to re-imagine the roles and relationships of humans and computers. Here, I am reminded that Edmund Pellegrino once described medicine as the most humane of the sciences and the most scientific of the humanities. The same could well be said for modern computer science – a situation for which the world owes much to the genius of successive generations of faculty, staff and students in our University’s Department of Computer Science.

Selected references:

“A comparison of statistical learning methods on the Gusto database.”
Ennis M, Hinton G, Naylor D, Revow M, Tibshirani R.
Stat Med. 1998 Nov 15;17(21):2501-8.

“Predicting mortality after coronary artery bypass surgery: what do artificial neural networks learn? The Steering Committee of the Cardiac Care Network of Ontario.”
Tu JV, Weinstein MC, McNeil BJ, Naylor CD.
Med Decis Making. 1998 Apr-Jun;18(2):229-35.

“A comparison of a Bayesian vs. a frequentist method for profiling hospital performance.”
Austin PC, Naylor CD, Tu JV.
J Eval Clin Pract. 2001 Feb;7(1):35-45.

“Grey zones of clinical practice: some limits to evidence-based medicine.”
Naylor CD.
Lancet. 1995 Apr 1;345(8953):840-2.

David Naylor has been President of the University of Toronto since 2005. He earned his MD at Toronto in 1978, followed by a D Phil at Oxford where he studied as a Rhodes Scholar. Naylor completed clinical specialty training and joined the Department of Medicine of the University of Toronto in 1988. He was founding Chief Executive Officer of the Institute for Clinical Evaluative Sciences (1991-1998), before becoming Dean of Medicine and Vice Provost for Relations with Health Care Institutions of the University of Toronto (1999 – 2005). Naylor has co-authored approximately 300 scholarly publications, spanning social history, public policy, epidemiology and biostatistics, and health economics, as well as clinical and health services research in most fields of medicine. Among other honours, Naylor is a Fellow of the Royal Society of Canada, a Foreign Associate Fellow of the US Institute of Medicine, and an Officer of the Order of Canada.

Interview with the Authors: Part 3

How about today?  How did things turn out differently than you thought?


Nobody anticipated how ubiquitous computers would be, that they would be in everybody’s home and more commonplace than telephones.  I do not think we envisioned that, and of course everything that comes with that: the internet and high-speed communications.  We always talk about information as power, but the fact is now that there is so much power involved in computing. We carry around a little telephone that is a thousand times more powerful than the big computer we had at the University at the time.

Any issue that comes along with that widespread use is something that I do not think we would have addressed. Yes, we talked about decision making and centralization of power and the importance of data, we talked about all that, but we did not envision just how important it would be.  Who would ever have imagined that political protest movements would use computers to organize flash crowds, just how you can manipulate information to advance causes, for good or for bad, or how you can sometimes facilitate the overthrow of a government, as an extreme example of tremendous political impact?  It is quite interesting, and of course we see this whole thing being played out on such an interesting scale when you look at something like the Chinese government, which so worries about controlling information and ideas and the power of those ideas.  They control access to the internet and which sites you are allowed to see.


In December [2012] the ITU met. The ITU, the International Telecommunication Union, which essentially governed the rules for international telephony, had not had a meeting for about 40 years.  In December they had their first meeting in 40 years in Dubai, and there were 190 countries represented.  The big issue that came up there was: should there now be government control of the internet? The motion was put forward and voted on, but not passed unanimously. China and Russia and Iran and Pakistan all felt that what goes on the internet really ought to be seen and controlled first by the government.  The United States of course, along with Canada and other democratic countries, objected.  So how much control should you have over information?  The internet is relatively free. We have right here in Toronto the Citizen Lab, which is determined to make sure that government censorship does not deny citizens access to certain topics.  So there is a big debate going on, a global debate, about what control is needed.

Some say that maybe more transparency is needed.  For example, ICANN, the Internet Corporation for Assigned Names and Numbers, is a private corporation in the United States.  China says: we have a Chinese-language internet, so why on earth should an American company decide whether we can have a particular Chinese-language name for a website? ICANN actually does have the rule that if you have something on .ca, the Canadian group looks after names for .ca.  So ICANN replies: if you have .cn at the end of it, you can do what you want. But .cn is English, of course.  So China responds that they do not want any English in the name: we have a perfectly good writing system of our own that is thousands of years older than yours, so “thanks but no thanks”.  So what to do about the internet is really a work in progress.  People admit that there are problems: but how much control, and who controls, and what you control is really an ongoing process.


Something else that we could not have anticipated when we wrote the book is the widespread use of electronic commerce on the internet. We were thinking about automation as an employment reducer, but now look at online sales.  They are starting to grow.  It may level off, but how many physical stores are being affected?  Well, judging by the Yorkdale shopping mall I guess we can keep on expanding stores, but I know more and more people who want to do all their shopping online. It is often cheaper, it is convenient, and they like doing it this way.  And as long as you do not have sizing problems (clothing can be an issue), and you are buying products that you can buy off the internet, why not do it that way? Some people like myself still like to go into stores; well, I do not like to shop at all, but to the extent that I shop, I prefer to shop in a store.  But the internet has changed things dramatically, in the same way that all the internet sources of information are making the newspaper business a completely different business.  It may turn out to be just an electronic business after a while.  We have what people would call the popularization of information dissemination: everybody is an expert now.  Who would ever have thought that something like Wikipedia could work, that you could replace real experts in a field with a kind of collaborative work by people with genuine interest and self-interest?  Now of course Wikipedia has its problems.  There is a lot of material on there that is just not correct. But usually Wikipedia addresses it, or it corrects itself.  It is an interesting phenomenon.  Really, there is no more notion of an Encyclopaedia Britannica. More generally, we will see other applications of crowdsourcing partially or completely replacing experts.


I was actually an advisor to Encyclopaedia Britannica.  They paid me $1000 a year to give them advice, and if you look at the last editions of it you will find my name as an advisor.  Now, when CDs and DVDs came out, I wrote them and said this is going to make a big difference to you, because it is a storage device where you have pretty rapid access and you could look things up.  And they did not answer me.  And I wrote them again and they continued to pay me $1000 a year until they went out of [the printed book] business.  But they did not pay attention. You see, they would have been driven by the marketing department and the marketing department was selling books, so that is who they took their advice from.


But even now, even if you are selling content on DVDs or something like that, nobody really needs anything physical.   I still like to read books, but there is a growing population that prefers to read things off the internet.   I observe my wife, who was mainly uninterested in most things technological till recently.  She has now learned to read electronically: she likes reading off her iPhone.  She finds an iPhone enjoyable to read from, and it is just astounding to me. It is a little bite-sized window which you can hold wherever you are, and in particular when you are in bed.

When you think about how computers and the way we do information technology have changed, you see that it changes the way we operate.  So, for example, we used to go to libraries to look things up. Now, search engines have taught us to be experts on query-based search.  This is not new anymore.  Search engine technology has been essentially the same for the last twenty years.  It is keyword search, and we have learned how to do it.  We humans have learned how to phrase our questions so we usually get the answers that we want without asking experts, without going in and having a dialogue.

There were a lot of things we did not anticipate, but in general, whenever you predict something is going to happen in the short term it does not happen:  you are usually way off.  When you say something is not going to happen soon, it happens a lot sooner than you thought.  But we tend to be very bad at predicting what the issues are going to be. It is not just us, it is the industry itself. A few years ago, the Vice-President of Microsoft Research came to visit the university. I do not remember what he was talking about exactly, but I remember a comment in the middle of his talk.  He said that we all knew search was going to be big in the mid-1990s. But if you knew it was going to be big, why didn’t you do something?  And IBM did the same thing.  IBM had a search engine before Google: it had all the experts in the world there and did nothing with it in the mid-1990s. The real issue was that these companies did not think there was any money to be made, or thought that search engines were not part of their business.  And as soon as the right way to do advertising on search became clear (there were companies that led the way in search, but they did not do it the right way and it cost them their futures), when Google (or someone) had the right idea to take what was going on before but add a quality component, to match up the ads with the search queries, then all of a sudden this became 98% of the income of Google. That is why they are a successful company.   Nobody initially knew what the model would be for making money.  Could you sell information?  Was that the way you were going to make money on the internet? Or was it going to be a TV model where you make your money through advertising?


And there is a slightly different question that you heard a lot about, a little less now: so-called network neutrality.  The question here is: do providers charge all users the same rate, according to the volume and speed at which they deliver the answers, or do they give preferred pricing?  Now, if you are on email, you are interacting at a certain rate, but of course if you are trying to watch a video there is a lot more data coming through per second than when you are typing emails.  So companies have said that for people and situations that demand a faster rate and more bandwidth, they should be entitled to charge more.  On the other hand, maybe those are their big customers, so maybe they will charge less because they will make more money from them anyway.  But there are other people who say: look, the internet is a tool for everybody, we are trying to preserve a kind of democratic internet, and everybody ought to be charged the same.  You heard the phrase “network neutrality” passed around a lot in the last year without the question being answered.


At the provider level, many of the providers do offer different qualities of service for different payments.  But when it comes to who actually pays for the communication links – how the providers themselves pay for them – I do not know how that whole economy works.  It is kind of a hidden business out there.  But at the level of the provider, most of them now do try to have different levels of service according to what your bandwidth usage is.


Rogers [Communications] was always saying they are the fastest. You see all kinds of ads now:  “My computer gives me an answer quicker than yours does”, and you actually see ads for that from Rogers.  So clearly they feel that faster response time is something that is valuable and that you can either charge more for or use that as an advantage to get a bigger business.

How do you think the field of social computing will develop in the future?


As a field, I am not sure where it is going.  We do have a [University of Toronto Computer Science] faculty member who is working in climate change informatics and things like that.  So I suppose you are going to see various examples of people working in information technology who will apply it to something that they think is important.


You see that now on LinkedIn.  The field of computers has subdivided into so many special interest groups already, and if you look at social networks, for example, there are social networks for community networks: for instance, there might be a social network for people in Vancouver who are interested in what they are doing in their community.  And then there might be somebody else who asks, “What are different communities doing about a particular problem?”  There are social networks for people who are suffering from cancer.  Specialized networks are springing up, and now it is pretty hard to say where it will end up.


I think social networks are really an important point.  I mean the large-scale online social networks like Facebook; you know, who would ever have imagined how popular that would become.  And again, on not being able to forecast these things: if we remember the movie [about Facebook], in that movie the President of Harvard thinks, “What is this worth, a couple of thousand dollars, and why are you making such a big fuss over this whole thing?”  We know that social issues have already developed because of the way people are using social networks: the way you can intimidate people over a social network, and the fact that people have been driven to suicide because of these things.  So clearly, anytime something becomes so widely used and so entrenched in our culture, it obviously brings along social issues with it.


At my 90th birthday celebration, somebody asked me to give an example of social issues in computing.  Well, I gave them one; it is in the book.  I said the following: we see that computer-controlled cars are coming, and some states already allow them on highways.  Now, if a robot or computer-controlled car gets into an accident in the United States, given that it is a litigious society, who do you sue?  Do you sue the programmer, do you sue the company that put that program in, or do you sue the driver who was there and may have been able to intervene but did not?  Who gets sued?


Well, in the US you sue everybody.


Well, that is true.  The social issues grow out of this.  We come up against the wider problem of responsibility for autonomous or nearly autonomous systems.  What are the ethical, moral, and legal responsibilities?


I think if you really wanted to get into the field and have an impact, you would probably start a blog and all of a sudden you would write things and people would write back and argue with you.  And before you know it, if you have got enough of an audience, you are an expert.


Two fields on the theoretical side of Computer Science where quite a bit of interest in social computing has come from are the mathematics and algorithmics of social networks, and of course game theory, mechanism design, and economics.  So Craig Boutilier and I co-teach a course now called Social and Economic Networks, based on the text and course by Easley and Kleinberg. It is not a social issues course, per se, but we do talk about the phenomena of large-scale social networks: how friendships are formed, the power of weak links, and so on. A lot of the issues that originate both in the sociology world and the game theory world have now been given an algorithmic flavour.  This is happening because in the social networks world, the sociologists, for the first time, have large-scale data; they had always had very interesting questions but never before had the big data to look at these questions.

The course is much more tied into popular culture, if you will, because the game theory side asks how people make decisions, how you converge over repeated games, and what the whole meaning of equilibrium is.  Auctions and the like are clearly on everybody’s minds because everybody does electronic commerce, and people are bidding all the time, whether they know it or not, for various things.  The social networks side focuses on issues of connectivity: how we are so connected and why, how links get formed, and how influences spread in the social network.  Is it like a biological epidemic model, or are there other models for that? So we are talking about things that border on what might be called “social issues in computing”, but it is a somewhat different course.  So you will see things come up, depending on people’s research interests, depending on things that are interesting.

Obviously, the widespread use of social networks has affected computer science: how are we going to study these phenomena?  The game theory stuff has been around a long time but then, all of a sudden, people realized a lot of traditional game theory requires or assumes that you know how to compute certain things optimally, which you cannot do, so you wind up having a whole new field based upon computational constraints.  So things will develop.  Whether or not they will still be called “social issues in computing”, or something else, remains to be seen.


C.C. (Kelly) Gotlieb is the founder of the Department of Computer Science (DCS) at the University of Toronto (UofT), and has been called the “Father of Computing in Canada”. Gotlieb has been a consultant to the United Nations on Computer Technology and Development, and to the Privacy and Computers Task Force of the Canadian federal Departments of Communications and Justice.  During the Second World War, he helped design highly classified proximity-fuse shells for the British Navy.  He was a founding member of the Canadian Information Processing Society, and served as Canada’s representative at the founding meeting of the International Federation for Information Processing.  He is a former Editor-in-Chief of the Journal of the Association for Computing Machinery, and a member of the Editorial Advisory Boards of Encyclopaedia Britannica and of the Annals of the History of Computing.  Gotlieb has served for the last twenty years as co-chair of the awards committee of the Association for Computing Machinery (ACM), and in 2012 received the Outstanding Contribution to ACM Award.  He is a Member of the Order of Canada, and an awardee of the Isaac L. Auerbach Medal of the International Federation for Information Processing.  Gotlieb is a Fellow of the Royal Society of Canada, the Association for Computing Machinery, the British Computer Society, and the Canadian Information Processing Society, and holds honorary doctorates from the University of Toronto, the University of Waterloo, the Technical University of Nova Scotia, and the University of Victoria.
Allan Borodin is a University Professor in the Department of Computer Science at the University of Toronto, and a past Chair of the Department.  Borodin served for many years as Chair of the IEEE Computer Society Technical Committee on the Mathematics of Computation, and is a former managing editor of the SIAM Journal on Computing. He has made significant research contributions in many areas, including algebraic computation, resource tradeoffs, routing in interconnection networks, parallel algorithms, online algorithms, information retrieval, social and economic networks, and adversarial queuing theory.  Borodin’s awards include the CRM-Fields-PIMS Prize; he is a Fellow of the Royal Society of Canada and of the American Association for the Advancement of Science.

ICT Professionalism: Progress and Future

Stephen Ibaraki, Founder and Chair, IFIP IP3 Global Industry Council

Gotlieb and Borodin raised the question of Professionalism in computing, as it would develop over time. It is rapidly developing now.

Demand for ICT specialists will drop by 60% in the next 3 years. By 2014, 60% of IT roles will be business-facing; over 50% will require business and non-IT experience.

By 2016, 80% of leading-edge firms will be developing staff with multiple skills, with a focus on professionalism and business. Business analysts are already in high demand. There are 35M computing workers, a number growing 30% yearly for the next five years, and an additional 50% working in IT roles are not even accounted for.


The first major national survey of IT professionals on industry certification finds that 78% are in favor of a complete package or framework for industry certification: one that includes recognition of vendor certifications and combines business and technical competencies, with work experience valued.

In terms of career development, the following summarizes what is in demand and my views on what this means:


Career Growth in the future is about having what I call BAIT attributes:

  • Business skills and core industry knowledge where the IT worker is employed;
  • A service oriented Attitude which is a focus on the client and user experience;
  • Deep Interpersonal skills tied in with project management, client relationship management, and communication capabilities;
  • All of this rounded out by Technical skills/competencies with a focus on “professionalism” and current E-Skills.

Demonstrated Progress in Professionalism

The question can be asked: What progress has been made since the publication of Kelly Gotlieb and Allan Borodin’s seminal book, Social Issues in Computing?

The answer is that progress has been profound, far reaching and international.

IFIP, the International Federation for Information Processing, was founded under the auspices of the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 1960, and now has over 40 country member bodies and affiliates representing over 90 countries. IFIP is a consultative body on IT for UNESCO, a Sector Member of the International Telecommunication Union, and a Scientific Associate Member of the International Council for Science (ICSU).

In 2007, the IFIP General Assembly voted overwhelmingly to approve its commitment and support for professionalism, and formed the International Professional Practice Partnership (IP3) Board, the international accreditation body for ICT professionalism and for standards in ICT E-Skills. IFIP IP3 is in turn aligned with the Seoul Accord, the international body for accreditation alignment and standards in post-secondary computing education programs.

CIPS is the official Canadian representative to IFIP and has supported ICT professionalism with professional certifications since 1989. CIPS is also a founding member of IFIP, IP3, FEAPO, ICCP, ICTC, GITCA and the Seoul Accord, all of which support ICT professionalism. In 2012, CIPS Chair Brenda Byers gave a national webcast on professionalism which received a media spotlight and pickup by Ron Richard.

It should be noted that Kelly Gotlieb is the co-founder of CIPS and was present at the founding of IFIP. Kelly was also an early pioneer with the ACM and co-author of the first United Nations study on ICT-enabled economic development. In 2011, there was a special celebration for Kelly at the University of Toronto, highlighting his significant contributions to professionalism, education, research, government policy and much more.

In 2012, international support for professionalism was in strong evidence for the first time. For example, at the 2012 IFIP World Computer Congress in Amsterdam, an entire stream was dedicated to Professionalism and E-Skills. ISACA, which held its World Congress in San Francisco, reports that over 80% of its membership is professionally certified; with over 100,000 members, ISACA is the world’s largest and premier association in security, governance and auditing, well known for COBIT 5. At the ITU World Summit on the Information Society in Geneva, there was a strong call to action in support of professionalism. The Astana Economic Forum/Connect 2012 (AEF) also generated interest in professionalism; AEF recommendations feed into the G8, G20, WTO, World Bank, and OECD.

Security and cybersecurity continue to top technology needs. In fact, in the US, security is mandated by government into curricula and professional certifications.

In a 2012 interview, Dr. Hamadoun Touré, Secretary-General of the ITU, voiced support for ICT professionalism: “First, professional best practice is to be encouraged in every industry…In addition, we have our own Ethics office which promulgates its guidelines on professional ethics through regular in-house workshops as well as serving as a focal point for individual staff wishing to consult on issues of professional ethics.”  The ITU is the specialized United Nations agency governing, regulating and setting global standards in ICT, with 193 countries and 700 global corporations/organizations as members.

An update on EU professionalism, which also applies to the North American context, was reported by CEPIS Honorary Secretary Declan Brady to the IFIP General Assembly (GA) in September 2012. Brady reported that a comprehensive study into professional e-competence in Europe had been undertaken: a ground-breaking research project involving 2,000 professionals across 28 European countries. The aims of the project were:

  • To provide a picture of the competences of ICT practitioners in Europe today;
  • To promote and raise awareness of the European e-Competence Framework by using it as the basis for analysis, demonstrating its practical application;
  • To work towards developing a pan-European vision of professionalism.

In addition, the project sought to:

  • Promote IT Professionalism in Europe and assist in developing a pan-European vision of professionalism;
  • Provide an individual profile report to each participant showing gap analysis against e-CF competences;
  • Provide a country report that enables each country to be benchmarked against the European results;
  • Provide a pan-European report.

The survey was conducted via online tools, using information taken from the e-CF. The research undertaken by CEPIS produced national reports for 10 countries and a pan-European report. These research outputs demonstrate the utility of the e-CF as a practical competence framework, as stated in the feedback received from respondents.

Some of the more interesting highlights from the report follow:

  • Only 21% of professionals had the e-competences to match their declared profile. In other words, 79% may not have the breadth of e-competences needed for their roles;
  • IT Manager was the most declared job profile, however only 8% of these match the e-competences needed for the role;
  • IT professionals across Europe show a low level of competence in some of the five e-CF e-competence areas, especially in ‘Enable’.

The final report produced the following recommendations:

  • The young talent that Europe needs is lacking. Therefore, promoting the IT profession among young people is essential;
  • Continuous Professional Development (CPD) needs to play a greater role and should be targeted to existing and anticipated e-competence gaps;
  • Career paths with defined training and education requirements are needed;
  • All countries urgently need to address the gender imbalance;
  • The e-CF should be applied as a pan-European reference tool to categorise competences and identify competence gaps. It has become clear that the e-CF is a practical reference tool and it should be further disseminated across Europe.

This project has been enormously well received by both the European Commission and all of the participating member societies. CEPIS is now looking at how to take this further forward and create an indispensable resource for all stakeholders interested in the shape of professional e-competences across Europe. The full European and national reports can be found at

The second initiative, a European-funded study into “A European Framework for ICT Professionalism”, was heading into its final phase at the last IFIP General Assembly in Prague. This project, conducted in partnership with the Innovation Value Institute, has now published its final report to considerable acclaim in the European Commission. The project has re-positioned the European Commission’s thinking on IT professionalism in relation to its strategy on the future of the IT industry in Europe.

Some findings from this project include:

  • Little awareness of ICT Competence frameworks and low adoption rates;
  • E-Competence frameworks are unbalanced and often neglect non-technical skills;
  • Two leading benefits identified from ICT competency frameworks: process consistency and workforce capability planning.

The final report and other details about this important research project can be found at

A further initiative, based on recommended next steps from the European Framework for ICT Professionalism report, has been to create a repository of Codes of Conduct, Practice and Ethics, as part of the CEPIS Professionalism Taskforce’s work in looking at the importance of ethics in ICT Professionalism. This repository can be found at:

Licensing (registration and regulation), though controversial, has also made progress. Software engineering licensing has focused on areas involving the protection of public health, safety and welfare. Initiatives appear in Alberta, BC, Ontario, Quebec, Texas, Alabama, Delaware, Florida, Michigan, Missouri, New Mexico, New York, North Carolina, Virginia, and elsewhere, as well as internationally (Australia, the UK, New Zealand, and others). In the US, the Principles and Practice of Engineering (PE) exam for software engineering was launched in 2013.  To be licensed requires graduation from an accredited engineering program, passing a fundamentals of engineering exam, four or more years of professional practice, and passing the PE exam. Malaysia is undertaking a more extensive ICT-wide program with support from international communities such as IFIP and the Seoul Accord.

The Prime Minister of Canada has acknowledged that CIPS’ work in certification, accreditation and professional development has made positive and lasting contributions to Canada’s economic growth and competitiveness.

In 2011, IFIP hosted the first World CIO Forum (WCF) with involvement of over 800 senior executives from industry, government and academia. In their WCF “Joint Declaration” they stated, “We strive to support [the] IT Industry and professionalism of IT career.” “We will ensure the highest standards in our work, and with both quality and ethics, and will act diligently and professionally, and with integrity in discharge of our duties for the best interest of our respective organizations and society.”

Tadao Saito, CTO of Toyota, a Global Fortune 8 company with over $220B USD in revenues, stated: “[IFIP] IP3 [International Professional Practice Partnership] is the start of this kind of important global activity.” This is a key acknowledgement of the importance of ethics and IT professionalism, laying the foundation for IT as a recognized profession.

There is also support for professionalism from the Global Industry Council, consisting of prominent leaders from business, industry, governments, academia, and international bodies, representing over 15 trillion USD in market capitalization and GDP. The “Global Industry Council Directors are specially nominated and invited to serve within the UN-rooted body as internationally recognized luminary executives, thought leaders and visionaries and for their strong history of providing substantive contributions to global business, industry, society, education, and governments. The IP3-GIC is a first of its kind focusing on Computing as a Profession, which will further align computing with organizational strategy and business agility driving innovation, entrepreneurship, business growth, regional GDP growth, high yield investment opportunities, and regional economic development. Global GDP is over 70 Trillion USD and the global program for computing as spearheaded by IP3 and IP3-GIC will be a catalyst for a more than a 20% increase in global GDP in the next 10 years to over 85 Trillion USD.”

In a meeting I had with senior Canadian government officials in 2006, they commented on third-party support for professionalism outside of established professional bodies such as the IEEE-CS, BCS, ACS, and CIPS. In 2013, a good example of this support would be GITCA (Global IT Community Association), the world’s largest federation of over 1,200 professional groups and associations, representing over 6 million executives, IT professionals, and students. GITCA supports IFIP IP3 professional certification and professionalism, believing that IFIP IP3’s accreditation program of ethical conduct, demonstrated professional development and recognized professional certification provides the hallmarks of an enabled IT professional and profession.


Kelly Gotlieb and Allan Borodin’s seminal book, Social Issues in Computing, laid the foundation for the continuing evolution of ICT professionalism with the key elements of accredited education, demonstrated professional development, adherence to a published code of ethics, alignment with best practices and an ICT Body of Knowledge (BOK), and recognized credentials though not necessarily licensing. Licensing was deemed too restrictive since there was and is a global shortage in ICT skills.

ICT is integrated into all facets of business, industry, governments, media, society and consumers. This is demonstrated in the latest ICT trends and the business-focus of ICT Skills, all of which demands professionalism.

Since the publication of the book, considerable strides have been made in support of professionalism, with the endorsement of the UN-founded IFIP and many major international organizations. In addition, professional certifications are already mandated in ICT-related domains such as project management, security/cybersecurity, and governance/auditing, helping to close the skills gap and meet demands for STEM education and innovation. There is also a tie-in to increasing economic health and growing GDP.

Beyond the publication of the book, Kelly Gotlieb continues to be a driving force for positive change in an ICT-enabled world, a role grounded in his pioneering work with IFIP, CIPS, the United Nations, and the ACM, and in government policy, academic contributions and societal impact.

Kelly and Allan foresaw many major issues in computing and many of these issues will no doubt continue to be priorities at the 100th anniversary of the book’s publication, clearly establishing its value in history.

Stephen Ibaraki is the founding chairman of the United-Nations-founded IFIP IP3 Global Industry Council, as well as of iGEN Knowledge Solutions and the Global Board of GITCA, and first board chairman of The Vine Group. He serves as vice-chair of the World CIO Forum and founding board director of FEAPO, and is a past president of the Canadian Information Processing Society (CIPS), which elected him a Founding Fellow in 2005.  Ibaraki is chair of the ACM Practitioners Board Professionalism and Certification and Professional Development Committees, and is the recipient of many ICT awards, including an IT Leadership Lifetime Achievement Award, an Advanced Technology Lifetime Achievement Award, Professionalism Career Achievement Awards, an IT Hero Award, the Gary Hadford Award, and others.  Ibaraki has been the recipient of a Microsoft Most Valuable Professional Award each year since 2006. He serves as an advisor on ICT matters for a variety of global organizations, companies, and governments.




Digitizing Ozymandias

Barry Wellman, S.D.Clark Professor of Sociology and Information, University of Toronto

For 40 years, computer scientists and sociologists have mostly danced unaware of each other. Kelly and Allan’s book was a pioneering conversation starter for computer scientists, taking into account the social forces driving computing and the social implications of what computerization has wrought. Both authors have wonderfully kept the conversation going, and they have been joined by sociologists studying the impacts of technology on the structure of society and our everyday lives.

The dialectic between the virtual and the material is not new. Recall Percy Bysshe Shelley’s 1819 poem Ozymandias describing the statue of a fictional great warrior, where only the legs and the pedestal remain. Here’s the poem:

I met a traveller from an antique land
Who said: Two vast and trunkless legs of stone
Stand in the desert. Near them, on the sand,
Half sunk, a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed:

And on the pedestal these words appear:
“My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!”
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.

And of course, someone has constructed a physical replica of the non-existent statue that has been virtually portrayed in the poem.

Figure 1: Ford Madox Brown, 1870. Romeo and Juliet, oil on canvas. Delaware Art Museum, Wilmington, Delaware. 1

Nor is Ozymandias a singular case. The next time you go to Verona, I guarantee you’ll see a crowd gathered at the base of “Juliet’s balcony”. Indeed, you can rent the balcony for your wedding ceremony.  But, of course, Juliet (and Romeo) existed only in the world of Shakespeare’s 16th century play. If they had lived today, they would have used mobile phones to stay alive. Their reliance on mobile phones would have removed them from the surveillance of their parents. “But their parents would have seen the bill at the end of the month,” one of my students protested. “Not if they texted,” another student from the Middle East answered knowingly.

So the interplay between the digital and the material – between atoms and bits – continues and develops. Yet, these are not separate worlds: there is no “digital dualism”, to use Nathan Jurgenson’s nice term (2012). Rather, we and our physical objects are part of the same worlds, although we need to think carefully about how we take care of and link our bodies, minds, and artifacts.

On November 13, 2012, Whitney Erin Boesel tweeted and emailed about a debate in her University of California, Santa Cruz graduate course. Sociology professor Jenny Reardon asked her class, “What about albums: do people still listen to albums?”  Boesel reported: “This caused some confusion; what does she mean by ‘album’? Do digital files count? I interjected to define an album as ‘a set of tracks that an artist records and releases together, as a set and in a specific order, that you listen to in that order.’” Prof. Reardon responded, “See, an album is no longer a ‘thing’; it’s become a concept!”  The material album has become a virtual concept.

Hypertext Just Beginning

Yet, I am writing this essay while listening to the sound of the Rolling Stones’ greatest hits—on a vinyl LP, of course. Next to me, the face of Keith Richards stares from the material cover of his majestic autobiography (2010), part of whose joy is its hefty 565 pages. This is an experience that cannot be fully reproduced in an e-book.

But even an e-book is crippled today. When I read Keith Richards on e-book, I should be able to click and hear the song he’s discussing, and I should be able to click on the photos or videos of the events he is recounting. I can’t get enough satisfaction just reading the text, despite a pretty good rock ’n’ roll memory. When Lee Rainie and I put together our Networked book (2012), we were frustrated that the Kindle e-book version did not have any hyperlinks. When Toronto subway riders read the electronic version of the porn novel Fifty Shades of Grey (James, 2011), where are the animations, the moans, and the instructional videos? This might be one of the few places where earphones would be welcomed by all.

Figure 2: Leonardo da Vinci, 1503-1505. Mona Lisa, oil on poplar wood. Musée du Louvre, Paris.1

Look at how formerly stand-alone objects have gone digital and social.  Consider the Mona Lisa, at whom multitudes have stared while trying to figure out who she was and why she is half smiling.  Even Nat King Cole could not figure it out.

Of course, the social came before the digital. Leonardo didn’t just do solo shots: consider his The Last Supper. It may be two-dimensional, but it certainly shows some of the connections among Jesus and his disciples.

Nor is the digital always social.  Some of us are old enough to remember that the first personal computers in the 1980s predated the internet.  They were primarily stand-alone word processors and spreadsheeters.  But then social media came along, now epitomized by Facebook.  Basically, it combines the Mona Lisa with the Last Supper.  One heart of Facebook is the self-portraits it presents: the profiles that individuals prepare about themselves.  This is the Mona Lisa and the Ozymandias approach.  Or, if you prefer a digital approach, it resembles the one-way nature of Web 1.0, where many of us prepared self-description pages complete with bloggish musings.  Imagine if Jesus had his own page, with all of his sayings on it.  Or, if you are secular, call up “facebook” on Google Images, and the first screen will be filled with multiple pictures of Facebook founder Mark Zuckerberg.  If you believe The Social Network movie, Zuckerberg founded Facebook to find friends (Fincher, 2010).

Figure 3: Leonardo da Vinci, 1495-1498. The Last Supper, tempera on gesso, pitch and mastic. Convent of Santa Maria delle Grazie, Milan.1

But, Facebook is more than profiles. It is also a series of fishing lines, connecting the person at the centre of the network to his or her “friends”. In short, it’s the fellowship of the Last Supper, with each person at the centre of his or her universe. Yet Facebook can do better than Leonardo in two ways. First, it can provide detailed profiles of each individual, and second, it can provide openings to learn about friends of friends. Just who was Judas hanging out with? Oxford sociologist Bernie Hogan and I are writing a paper about this called “The Relational Self-Portrait: How Social Network Sites Put the Network in Networked Individualism” (2013).

Moreover, just as Facebook connects individuals to their friends, the concatenation of these networks connects cities and continents. Although this is a relational artifact that only digital analysis can discover, it is nevertheless real. At times, the digital and the physical coincide: Yuri Takhteyev, Anatoliy Gruzd, and I (2012) have shown that interconnections on Twitter largely mirror airline routes. Many people use Twitter to talk to those with whom they have in-person contact.

In our NetLab’s work, we argue that North Americans—and perhaps others—are moving toward a networked society centred on individual connectivity—what Lee Rainie and I have called “networked individualism” (2012). What are the implications for the missions of libraries and archives?

Groups: Door to Door

In pre-industrial days—and still in very rural parts of Canada—society was door-to-door. The building block was groups, embedded in villages and neighbourhoods, with all of their social support and social control. This is where people got their information. This is what libraries originally served and where nascent archives—often in the hands of village schoolteachers, clerks or pastors—got their material. Indeed, some of us still wander churchyards to get our historical sense of a place. Big national libraries and archives were far away, difficult to access, and only for canonically important material. Mostly, knowledge came from within the group and stayed within the group.

Glocalization: Place-to-Place

The situation changed in the late twentieth century with the proliferation of multiple technologies that weakened the boundaries of distance: the telephone, the car, and the airplane for two-way communication, and the radio, movies, and television for one-way information flows. In this “glocalized” milieu, family and work units remained important, but information and communication links were less constrained by distance—what we call “place-to-place” connectivity (Wellman and Hampton 1999).

Rather than a single canonical source of information and communication, people were embedded in multiple, partial social networks that sometimes conflicted. Information sources proliferated, and while archives remained distant, they became more accessible. Although I write in the past tense, this situation continues for many people.

Networked Individualism: Person-to-Person

Personal networks have come to the forefront with the proliferation of personal computers, the internet, mobile devices, and multiple-car households. People function more as networked individuals and less as group members. This provides them with greater access to multiple sources of information and communication, at the cost of less contact with tangible objects. I call up a picture of Ozymandias or The Last Supper rather than having a direct physical encounter with them. Rather than LP records or CDs, there are personal MP3 players. Information has become networked through links, crowdsourcing, perpetual editing, and feedback. The social control of the group has been replaced by the social control of governments and large organizations that have access to emails and databases of search information. For better or worse (and in fact, both simultaneously), amateur experts sit alongside credentialed experts. Books, music and objects—the historic domain of libraries and archives—are now going to people rather than people going to them.

The map we now have of how people will communicate and get informed is undoubtedly wrong. We know little about the bulk of online communication that resides in the dark web that Google and Facebook do not access; we know even less about what the future will bring. We do know there will be ongoing tensions between personal freedom and mega-organizational control (Kling, 1989; McElheran, 2012; Rainie and Wellman, 2012, Chapter 11). Who will the agent-based software work for? We do know the half-century-long struggle between digital personalization and central control will continue as we all grope toward a better future.


Fincher, David, director. 2010. The Social Network. Culver City, CA: Columbia Pictures. October 10.

Hogan, Bernie & Barry Wellman. 2013. “The Relational Self-Portrait: How Social Network Sites Put the Network in Networked Individualism.” Forthcoming in Society and the Internet, edited by Mark Graham and William Dutton. Oxford: Oxford University Press.

James, E(rika). L(eonard). 2011. Fifty Shades of Grey. New York: Vintage.

Jurgenson, Nathan. 2012. “The IRL Fetish.” The New Inquiry, June 28.

Kling, Rob. 1989. “The Institutional Character of Computerized Information Systems.” Office: Technology and People 5(1): 7-28.

McElheran, Kristina. 2012. “Decentralization versus Centralization in IT Governance.” Communications of the ACM 56, 11: 28-30.

Rainie, Lee and Barry Wellman. 2012. Networked: The New Social Operating System. Cambridge, MA: MIT Press. Kindle e-book:

Richards, Keith. 2010. Life. New York: Little, Brown.

Shakespeare, William. c1597. Romeo and Juliet.

Shelley, Percy Bysshe 1819. “Ozymandias” in Rosalind and Helen, a Modern Eclogue, with Other Poems. London: Ollier.

Takhteyev, Yuri, Anatoliy Gruzd and Barry Wellman, 2012. “Geography of Twitter Networks.” Social Networks: 34, 1: 73–81

Wellman, Barry and Keith Hampton. 1999. “Living Networked On and Offline.” Contemporary Sociology 28, 6: 648-54

1 All photographic reproductions of works of art are taken from Wikipedia. These photographs, as well as the original masterpieces, are held in the public domain.


I am grateful for the comments of the participants in the Library and Archives Canada “Whole-of-Society” seminar (Ottawa, November 2012) and for the editorial support of Christian Beermann, Isabella Chiu and Esther Jung Yun Sok.


Barry Wellman is Director of Netlab and the S.D. Clark Professor of Sociology and Information at the University of Toronto. He recently published the prize-winning book, “Networked: The New Social Operating System”, co-authored with Lee Rainie for MIT Press (2012).  A member of the Royal Society of Canada, Wellman is Chair-Emeritus of both the Community and Information Technologies section and the Community and Urban Sociology section of the American Sociological Association. He founded the International Network for Social Network Analysis, and co-edited “Social Structures: A Network Approach” which has been named by the International Sociological Association as one of the “Books of the Century”.  He has been affiliated with Intel Corporation’s People and Practices research unit and is a Fellow of IBM Toronto’s Centre for Advanced Studies. Wellman has been a Fellow of IBM’s Institute of Knowledge Management, a consultant with Mitel Networks, a member of Advanced Micro Devices’ Global Consumer Advisory Board, and a keynoter at conferences ranging from Computer Science to Theology. He has authored or co-authored about 300 articles with more than eighty scholars, and he is the (co-)editor of three books.

Computers in Developing Countries: An Update

Kenneth L. Kraemer, Research Professor, The Paul Merage School of Business, University of California, Irvine

Looking Back

2013 marks 40 years since the publication of Social Issues in Computing (Academic Press, 1973) by Calvin Gotlieb and Allan Borodin. Their central concern in the section on “computers in developing countries” was “How should computing be applied in developing countries and what might be the impacts of that application?” They cited reports of international agencies (OECD, UNESCO and the United Nations), which argued that some computer activity was appropriate for developing countries and that these countries should actively seek ways to make use of computers in their own situations. They also noted that although some officials in developed countries felt that developing countries should focus on more urgent, basic needs such as food, health and poverty, officials in developing countries rejected this view because they regarded computers as stepping stones to social and economic development. These countries showed determination to join the computer revolution by investing in computers, seeking education and participating in computing-related meetings and conferences.

While encouraging computer use, Gotlieb and Borodin urged government officials in developing countries to be cautious about trying to develop their own computer industry. They argued that only a very few large developing countries, such as India, would have the technical and human resources required, and so they urged countries to concentrate on computer applications. With respect to social impacts, they felt that developing countries would experience social issues such as unemployment, privacy threats and the need for skills development similar to the industrially advanced countries. Their chief concern was that “the gap between the industrially advanced countries and those which are still developing is widening”.

Looking Forward

This short article will review what we know about the foregoing issues: (1) the returns to IT investment, (2) computer production vs. use, and (3) the digital divide. At the time Social Issues in Computing was published, I had just become Director of the Public Policy Research Organization (which later morphed into the Center for Research on IT and Organizations, or CRITO) at UC Irvine. We had just received our first NSF grant to study policies for effective use of computers in governments in the U.S. Rob Kling, Jim Danziger, Bill Dutton, Alana Northrop, John King, Suzanne Iacono and Debora Dunkle were all colleagues in this work, and one of our first tasks was to do a literature survey. It was Rob Kling who brought this book to our attention, mostly for its concern with the social and economic impacts of the technology. We were not thinking about developing countries at the time, but turned to them in the nineties, when we became interested in understanding why so many developed and developing countries were not succeeding in their attempts to develop domestic computer industries. With this context set, we examine the issues.

Payoffs from IT investment in developing countries

The question of whether IT investments lead to greater productivity and economic growth has been studied extensively at multiple levels of analysis, with strong evidence that the returns to IT investment are positive and significant for firms, industries and developed countries. Cross-national research for the 1985-1993 time period found that IT investment was associated with significant productivity gains for developed countries but not for developing countries (Dewan and Kraemer, 2000). Other studies found similar results (Pohjola, 2001).
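It may help to make concrete what “returns to IT investment” means in studies of this kind. They typically estimate an aggregate production function in which IT capital enters as a separate input; the sketch below is my own illustration of that idea with made-up elasticities, not the specification used in any of the studies cited:

```python
# Illustrative growth-accounting setup: output as a Cobb-Douglas function
# of IT capital, other capital, and labor. The "return to IT investment"
# corresponds to the output elasticity of IT capital (here a = 0.05;
# all parameter values are hypothetical).

def output(A, k_it, k_other, labor, a=0.05, b=0.30):
    return A * (k_it ** a) * (k_other ** b) * (labor ** (1 - a - b))

base = output(A=1.0, k_it=100.0, k_other=1000.0, labor=5000.0)
more_it = output(A=1.0, k_it=110.0, k_other=1000.0, labor=5000.0)

# With elasticity a = 0.05, a 10% increase in IT capital raises output
# by roughly half a percent.
print(round(100 * (more_it - base) / base, 2))  # ≈ 0.48 (percent)
```

In the empirical work, of course, the elasticities are estimated from cross-country panel data rather than assumed; the point of the sketch is only that a positive, significant IT elasticity translates directly into productivity gains from IT investment.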

Nonetheless, developing countries increased their investment in IT from 0.5% of GDP to 1.0% between 1994 and 1997. For some, the change has been dramatic. For example, China had fewer than 10 million PCs in use in 1997 and barely 1 million Internet users. In 2011, China passed the U.S. as the largest PC market and led the world with over 400 million Internet users (Shih, Kraemer and Dedrick, forthcoming). Similar rapid growth in places such as India, Latin America, Southeast Asia and Eastern Europe has transformed the landscape for IT use in developing countries. So an important question is whether this level of investment and experience has now changed the results for developing countries.

A recent study, which analyzed new data on IT investment and productivity for 45 countries from 1994 to 2007, found that upper-middle-income developing countries have achieved positive and significant productivity gains from IT investment in this more recent period as they have increased their IT capital stocks and gained experience with the use of IT (Shih, Kraemer and Dedrick, forthcoming). The study also found that the productivity impacts of IT are moderated by country factors including human resources, openness to foreign investment, and the quality and cost of the telecommunications infrastructure. Investments in these areas are useful in their own right, but the study suggests that the impacts of IT depend not only on the level of use, but also on the presence of resources and favorable policies to support IT use.

The academic implication is that the impact of IT on productivity is expanding from the richest countries into a large group of developing countries. The policy implication is that lower-tier developing countries can also expect productivity gains from IT investments, particularly through policies that support IT use, such as greater openness to foreign investment, increased investment in tertiary education, and reduced telecommunications costs.

IT Production versus use

Gotlieb and Borodin’s 1973 caution that most developing countries should focus on computer use rather than production was wise and prescient. Many countries have sought gains from investments in IT production rather than IT use, but the research shows that investment in IT use benefits the whole economy, whereas investment in IT production benefits only a single industry sector. The significance of investment in IT use is heightened by the fact that production in the IT industry is now global. Increasingly, products designed in one country are made in others with components from still other countries and then marketed throughout the world. This global production system relies on IT to link it all together. Therefore, countries desiring to participate in the global production system must develop IT capabilities through both short-term and longer-term strategies. By doing so, countries can not only participate in the global production system but also increase the productivity of their economy, and thereby raise the standard of living for their citizens.

Recognition of this fact has been slow in coming. In the 1950s, with the introduction of computers, many countries wanted to develop their own computer hardware industry, and many of the advanced industrial countries did so. The history of the industry shows that success requires large investments in R&D and human resources, and that one misstep in this highly competitive global industry can bring downfall. Computer makers in England, Germany, France and Italy succeeded initially against the leader IBM because they were national champions, but they were unable to compete on the world market and were bought out, merged or disappeared. Japanese companies succeeded longer in their protected market.

With the introduction of the PC, new opportunities arose for developing countries. Domestic companies in countries such as India, Brazil and Mexico succeeded for a while in their own markets, but could not compete globally with the multinational brands or their own white-box makers. Because of its large English-speaking population, India switched from computers to software and services, whereas Brazilian and Mexican companies exited quietly. Taiwan and China traditionally have been workshops for the multinationals, but China’s Lenovo and Taiwan’s Acer have risen to numbers two and three in the global computer hardware industry. Neither China nor Taiwan is a model for other developing countries due to their unique circumstances. The oft-cited successful small countries like Singapore and Ireland have been regional outposts for multinationals despite the hyperbole about having a domestic industry.

However, there is a side of production that is close-to-use (Figure 1), which has proven valuable for some developing countries even though Gotlieb and Borodin did not specifically identify it at the time: the development of packaged software and information services. India has succeeded tremendously as an offshore workshop for software development and for computing services around the world. However, the industry has not yet created notable packaged software. Many smaller developing countries such as Brazil, Costa Rica, Bulgaria, and Israel are also succeeding in software development and information services, usually as a result of language or specialized knowledge and skill.

China set the goal of becoming a computing software and services powerhouse as early as the mid-eighties and in its subsequent five-year plans. Since 1999, China has made the “fundamentals of computer science” mandatory for all university students and now graduates over a million computer scientists a year (Zang and Lo, 2010). This has not gone unnoticed by foreign multinationals (MNCs), which rushed into China after 2000 to take advantage of the large, low-cost labor pool. However, MNCs that employ graduates report that these graduates have theoretical knowledge but little practical knowledge. A recent study of five large U.S. MNCs in the computer and telecommunications industry indicated they employed from a low of 4,000 to a high of 20,000 computer professionals in their software and services operations in China (Dedrick et al., 2012). The study also indicated that the MNCs must put new hires through in-house training programs for up to a year. The training involves not only teaching in-house methods, but also promoting greater work discipline and teaching teamwork, as China’s one-child policy has created many “little emperors.” Still, these computer professionals are developing valuable practical skills, learning how Western and other Asian organizations operate, and rising to management and leadership positions within the MNCs. Whereas foreign MNCs were the first job choice of Chinese computer science graduates early on, government enterprises and the military are now the first choice, followed by Chinese private enterprises and then the foreign MNCs. One only has to read daily newspaper accounts of hacked corporations and government agencies the world over to realize the advanced skill and creativity being achieved by at least some Chinese computer professionals.

As early as the nineties, several experts urged developing countries to follow production close-to-use strategies (Schware, 1992; Dedrick and Kraemer, 1998), but only a few have done so. Although late in coming, a recent UNCTAD (2012) study urges developing countries to develop indigenous software capabilities in order to take advantage of IT opportunities. It makes a strong case for the social and economic benefits to be gained from leveraging software skills in the domestic market, in both the private and public sectors. It urges governments to promote domestic software writing that meets local needs and local capabilities as a means of increasing income and addressing broader economic and social development goals. The argument is that developing such software locally increases the chances that it will fit the context, culture, and language where it is used. The report also argues that such capabilities can help to expand software exports.

Figure 1. Information services as a link between production and use


Source:  Dedrick and Kraemer, 1998


The Digital Divide

Gotlieb and Borodin, like many others since, were concerned that the gap in use of IT between developed and developing countries was widening. The literature documents this digital divide, or difference in cross-country penetration of computers, the Internet, smart phones and other ITs. Explanations for the divide usually are based on socio-economic factors such as GDP, human capital, openness to trade, IT infrastructure and IT costs, especially telecommunications costs. However, a recent cross-country study of the diffusion of personal computers (PCs) and the Internet in 26 developed and developing countries over the period 1991-1995 indicates that the divide might be narrowing due to co-diffusion effects between PCs and the Internet (Dewan et al., 2009). That is, the adoption of PCs leads to greater adoption of the Internet, which in turn leads to greater adoption of PCs, in a virtuous cycle. Moreover, the study found that the impact of PCs on Internet diffusion is substantially stronger in developing countries as compared to developed ones.
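The co-diffusion dynamic can be illustrated with a toy model (my simplification, with hypothetical parameters, not the econometric specification of Dewan and colleagues): each technology diffuses logistically, and the installed base of one raises the adoption rate of the other.

```python
# Toy co-diffusion model: PC and Internet penetration (fractions of the
# population) each grow logistically, with a cross-effect term through
# which each technology's installed base boosts the other's adoption.
# All parameter values are hypothetical.

def simulate(cross_effect, years=15, r=0.3, pc=0.05, net=0.01):
    for _ in range(years):
        d_pc = r * (1 + cross_effect * net) * pc * (1 - pc)
        d_net = r * (1 + cross_effect * pc) * net * (1 - net)
        pc, net = pc + d_pc, net + d_net
    return pc, net

pc0, net0 = simulate(cross_effect=0.0)   # independent diffusion
pc1, net1 = simulate(cross_effect=2.0)   # mutual reinforcement

# With the cross-effect turned on, both technologies reach higher
# penetration in the same number of years -- the "virtuous cycle."
print(pc1 > pc0 and net1 > net0)  # True
```

In this toy setting, the cross-effect matters most when one technology’s installed base is still small, which is consistent with the finding that the PC-to-Internet spillover is strongest in developing countries.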

The fact that these co-diffusive effects are a significant driver of the narrowing of the digital divide has policy implications for developing countries with respect to how diffusion of PC, Internet, and smart phone/tablet technologies might be harnessed to further accelerate the narrowing of the global digital divide. The key is government policies that promote expansion of the IT infrastructure, lower the cost of access devices and telecommunications, and upgrade users’ knowledge and skill. Even then, it is necessary to recognize that there will be failures along the way, as illustrated by the OLPC effort to bring low-cost computers to schools in the poorest countries of the world (see box).

The OLPC experiment

[The abstract below is from an article that appeared in Communications of the ACM called “One Laptop Per Child (OLPC): Vision and Reality” (Kraemer et al., 2009). It documents the experience of OLPC and suggests reasons why it failed to achieve its original goal of transforming education while unintentionally creating a new segment of the IT industry that competes with its own invention.]

In January 2005, at the World Economic Forum in Davos, Switzerland, Nicholas Negroponte unveiled the idea of One Laptop Per Child (OLPC), a $100 PC that would transform education for the world’s disadvantaged school children by providing the means for them to teach themselves and each other.  Negroponte estimated there could be 100-150 million of these laptops shipped every year by the end of 2007 (BBC News, 2005).  With $20 million in startup investments, sponsorships and partnerships with major industry players, and interest from several developing countries, the non-profit OLPC project generated excitement among international leaders and the world media.  Yet as of January 2009, only a few hundred thousand laptops had been distributed and OLPC had scaled down its ambitions dramatically.

Although some developing countries are deploying the OLPC laptops, others have cancelled planned deployments or are awaiting the results of pilot projects before deciding whether to adopt. The OLPC organization is struggling with key staff defections, budget cuts and ideological disillusionment, as it appears to some that the educational mission has given way to just getting laptops out the door. In addition, low-cost commercial “netbooks” from PC vendors such as Asus, Hewlett-Packard and Acer have been launched with great initial success.

Thus, rather than distributing millions of laptops to poor children itself, OLPC has prodded the PC industry to develop lower-cost, education-oriented PCs, providing developing countries with low-cost computing options in direct competition with its own innovation. In that sense, OLPC’s apparent failure may be a step toward a broader success in providing a new tool to some of the world’s poor children. However, it is clear that the PC industry cannot profitably reach millions of the poorest children, so the objective of one laptop per child will not be achieved by the market alone.



Dedrick, J. and Kraemer, K.L.  Asia’s Computer Challenge: Threat or Opportunity for the United States and the World?  New York: Oxford University Press, 1998.

Dedrick, J., Kraemer, K.L. and Tang, J.  2012. China’s indigenous innovation policy: impact on multinational R&D, Computer, 45 (11) November: 70-77.

Dewan, S., Ganley, D., and Kraemer, K.L. Complementarities in the diffusion of personal computers and the internet: implications for the global digital divide. Information Systems Research, 20, 1 (2009), 1-17.

Dewan, S., and Kraemer, K.L. Information technology and productivity: preliminary evidence from country-level data. Management Science, 46, 4 (2000), 548-562.

Han, K., Chang, Y.B., and Hahn, J. Information technology spillover and productivity: the role of information technology intensity and competition.  Journal of Management Information Systems, 28, 1 (Summer 2011), 115–145.

Kraemer, K.L., Dedrick, J. and Sharma, P. 2009. One Laptop Per Child (OLPC): vision and reality, Communications of the ACM, 52(6), 66-73.

Kiiski, S., and Pohjola, M. Cross-country diffusion of the Internet. United Nations University, World Institute for Economic Development Research, 2001.

Papaioannou, S.K. and Dimelis, S.P. Information technology as a factor in economic development: evidence from developed and developing countries. Economics of Innovation and New Technology, 16, 3 (2007), 179-194.

Park, J., Shin, S.K., and Sanders, G.L. Impact of international information technology transfer on national productivity. Information Systems Research, 18, 1 (2007), 86-102.

Pohjola, M. Information technology and economic growth: a cross-country analysis. In M. Pohjola (ed.), Information Technology and Economic Development. Oxford: Oxford University Press, 2001, pp. 242-256.

Schware, R. 1992. Software industry entry strategies for developing countries: A “walking on two legs” proposition, World Development, 20 (2): 143-164.

UNCTAD, 2012.  Information Economy Report 2012 – The Software Industry and Developing Countries (UNCTAD/IER/2012) 28 November. Geneva: United Nations Conference on Trade and Development, 142 pages.

Zang, M. and Lo, M.M. 2010.  Undergraduate computer science education in China. SIGCSE ’10, March 10-13, Milwaukee, Wisconsin, USA.

Kenneth L. Kraemer is Research Professor at the Paul Merage School of Business, University of California, Irvine, past Director of the School’s Center for Research on Information Technology and Organizations (CRITO) and of the Personal Computing Industry Center (PCIC), and former holder of the Taco Bell Chair in Information Technology for Management.  The author of 16 books, including Global E-Commerce: Impacts of National Environment and Policy and Asia’s Computer Challenge: Threat or Opportunity for the U.S. and the World, Kraemer has written more than 165 articles, many on the computer industry and the Asia-Pacific region, that have been published in journals such as Communications of the ACM, IEEE Computer, MIS Quarterly, Management Science, Information Systems Research, The Information Society, Public Administration Review, Telecommunications Policy, and Policy Analysis.  He has been a consultant on IT policy to major corporations, the US Federal government, the National Academy of Sciences, the National Academy of Engineering, and the governments of Singapore, Hong Kong, Indonesia, Malaysia, and China.

Interview With the Authors: Part 2

What were some of the big issues in Computers and Society when the book came out?


Well, there were at least three issues that stand out in my mind.  One of them was computers and privacy.  I had already been involved with that and written a report on it.


People still talk about the issue of privacy.  When we wrote about it in the book, we were, I think, referring back to the responsibility for privacy. The sort of responsibility we were thinking of was like the conscience of people who worked during World War II, the nuclear physicists who developed atomic weaponry and things like that. There was a movement post-war about the responsibility of scientists for the things they do.


Another issue was computers and work.  At that time, when computers first came out, there was an enormous debate going on as to whether computers would cause job losses.  I had given invited speeches on this topic of computers and work in Tokyo and Melbourne, and the largest audience I ever had was in Melbourne, about 4,000 people, and it was picked up in Computer World and spoken about all over.  Computers had already had an effect through the introduction of postal codes, for example: a lot of people in the Post Office lost jobs because they had manually sorted mail.  Before the postal code, they would look at the addresses and look up what part of the country it was.  The question was: what is going to happen to the Post Office?  We did not have email yet, and the Post Office did not go away. But the Post Office is still shrinking today because of email.

We invited the Head of the Postal Union to come and speak to our class.  At the time, the postal union was under tremendous public censure for holding a strike, and he was grateful for the invitation.  He said nobody had thought to invite him to present his case, especially for a university audience.  I maintained contact with him for a long time after.  This question as to whether automation or advances in technology caused lost jobs had long been considered by economists and in particular by Schumpeter, who essentially said technology creates and destroys.  He used this example: when motor cars came in, the buggy industry was shot, but there were far more jobs in producing cars than there ever were in producing buggies. I generally had the feeling that the same thing would hold for computers, but at the time I did not have the evidence.  There were a certain number of jobs designing computers and building them, and then programming them, but nobody ever thought that, let us say, computer games would be a big industry, as we have now.
Automation replacing workers was the threat.


You know, it is interesting.  I went back and looked at our summary for computers in employment. I think it was very guarded. We said, and presented evidence for the case, that computing had not had the unsettling effect on employment forecast by many.  I think we presented a lot of evidence, and that is what was different about the book; we tried to kind of be balanced about it. We presented what the various people were saying.


There was a third topic that was quite hot at the time.  At MIT, there was a whole group who had put together research on the limits to growth.  They had predictions about how we were using up resources, not just oil, but metals and so on, and they made predictions as to how long the world’s supply would last, at which time population growth would have to stop.  So the question was: how accurate were these predictions, could you trust them? They were computerized predictions, so here again was an area where computers were making predictions that were affecting policy quite seriously.


Well, it was mainly simulations that they were doing, rather than, say, statistical analysis of a precise model.


They were simulations, right.  They had a whole lot of simulations about particular resources.


Another topic that I think was always very popular: to what extent can machines think?  We considered some classical computer uses at that time, but the issue of “Artificial Intelligence” and the ultimate limits of computation has continued to be a relevant subject.

What are some of the issues today? Are there any other issues that you think have emerged since you wrote the book that you would consider hot topics?


Some of the problems are still there and there are new ones.  And they continue.  By and large, the effect of computers on society continued to multiply, so there are now more important issues and different issues. I continue to follow things quite carefully.

For example, right now let us take the question of drones and the ethics and morality of drones. Now, Asimov had the three laws of robotics, which said what a robot might be allowed to do.  But drones are getting more powerful. They fly over Pakistan, and then we think we have found the head of Al Qaeda or some other important person in it. They get permission from a person before they send a bomb to attack a car that they think holds a terrorist leader.  The ethical and moral questions about robots continue. For example, in Japan you have all kinds of robots that act as caregivers.  So they are looking after older people.  Presumably if they see an old person about to put their hand on the stove, they do not have to ask questions, they can act on that.  But there are other times when, before the robot interferes with the person, you have to ask: is it right to intervene, or should the person be allowed to do their own thing?

Another question about the ethics and morality of robots is automated car driving.  Let us say we are about to have cars that drive themselves.  The senses that computers can have, and their reaction times, are better than ours, better than humans’. So if there is a decision to be made about driving a fast car, a decision which is normally under the control of a computer, should these always be under the control of the computer, or are there times when decisions about what you do with that fast car ought to be made by humans rather than machines?  Or again, consider a medical decision.  Computers get better and better at diagnosis, but if they want to give treatment, should a human be involved in the decision about that?  So the ethical and moral questions about robots as they become more and more intertwined with human life continue, and as computers become more powerful, they become more important.


It is not just robots: it is any automated decision making.


Exactly, any automated decision, so the ethics and morality of automated decisions is a continuing, ongoing issue. For example, consider automated driving. In the case of drivers’ licences, there are times in which you get a suspended licence, but the period of suspension depends upon the seriousness of the infraction.  Driving under the influence becomes a lot more important if you happen to kill somebody while you were drunk, rather than just being stopped because your car was weaving.  You may lose your licence for a week or a month in one case and for much longer in another case. So we need good judgment: we have legal judgments and we need that.  But when the sins are committed by autonomous devices like robots and drones, the ethical and moral problems do not go away. The ethics and morality of autonomous systems has become more urgent because there are more autonomous systems, and they are smarter.


Another issue that I think is very important is security. If you want to do the most damage, you can probably bring down the electrical grid. Or you could mess up all the aircraft weather predictions so that the aircraft fly into a storm that they cannot manage.  We know there are people who are prepared to do those things, obviously, so the question of computer security when it comes to data privacy, safety issues and so on is important today, because we have systems that are so dependent on computers and networks.  The problem of making them secure is more important, more urgent than ever, and I think unsolved.  At least it is far from being solved.  You see lots of people who say it is important, and fortunately they are addressing it, but nobody that I know claims that we have a good handle on it yet. And by the same token, computer security has become more urgent because we have, you know, power grids and networks of weather systems and so on that all depend on computers.  So I would say that what certainly has happened is that some of the old problems which we saw have become more serious, and maybe are still demanding solutions that we do not have.

Many years ago, there was a conference at Queen’s when privacy was a big issue, and I was invited to give the keynote talk.  Essentially, people were worried about too much data-gathering by companies and government and so on.  The title of my talk was “Privacy: A Concept Whose Time Has Come and Gone.” That was completely counter to what the people who invited me expected.  If I gave that talk today, I would say security trumps privacy every time.  So it is a changing concept.


Well, you do anything on Facebook and it could be there forever.  That is a good social issue: who really owns the data after a while, and how long are they allowed to keep data on people? That becomes a very big question.

Apropos your comment that security trumps privacy, I probably agree with that. When I listen in the morning to CBC News, I hear a theme come up, an underlying kind of theme that we have had for who knows how many years: “Big Brother”, 1984, the whole idea that we can be controlled centrally, that everything about our lives is known, and things like that.  It is a theme that we have had in literature, in our popular thinking, in our consciousness for many, many years.  I do not think that has gone away.  I think it is still there.


One guy was caught going through an airport with explosives in his shoes, so in airports all over the world forever, you take your shoes off.  Now in the United States if you are over 70 they changed the rule on that.


No, it is more than 70.  I was stopped, they asked me how old I was. I am 71, and he said, no, you are not old enough, you have to take your shoes off.


Yes, so they are making some tiny changes to it.  But I think they have only caught one guy who ever tried explosives in his shoes.  There have been one-off cases which have led to extreme, exaggerated, I would say, conditions in airports all over the world.  Three-year-olds have to take their shoes off, you know.


We were in the Buffalo airport a couple of years ago, and there was a 99-year-old woman in a wheelchair who just happened to come up as the random selection for a search, and they were searching her.


I remember, I was in Israel, and I went to Eilat.  It is in the south. I was coming back to Tel Aviv and I was the only non-Israeli in the group.  So the security person went through and said to me, “I have got somebody who I am teaching how to do a search. Do you mind if we practise on you?”  What was I going to say?  “Yes, I mind?”  No. So I got special treatment.


This brings up a related issue, and I think the Israelis are very good at this: they have a lot of data on people.  So when you show up at the airport, they know quickly who you are, and they do profile you.  To the extent that the government, or whoever is running security at the airport, has a lot of information about you, it may reduce how much physical invasion of your privacy you have to go through.  So you have lost a lot of privacy in terms of all the things they know about you, but on the other hand it may save you from much more intrusive and embarrassing physical searches.


If you are a frequent traveller to the United States you can get something that lets you go through a lot faster: Global Entry, and there is also the Nexus pass.


You give up a certain amount of privacy for those.  Even though in the end security usually trumps privacy, the privacy issue is still on people's minds.  People still feel sensitive about it; there is this underlying feeling that we do not want to be controlled centrally, that we do not want "Big Brother".  1984 is still in our minds.


C.C. (Kelly) Gotlieb is the founder of the Department of Computer Science (DCS) at the University of Toronto (UofT), and has been called the “Father of Computing in Canada”. Gotlieb has been a consultant to the United Nations on Computer Technology and Development, and to the Privacy and Computers Task Force of the Canadian Federal Department of Communications and Justice.  During the Second World War, he helped design highly classified proximity fuse shells for the British Navy.  He was a founding member of the Canadian Information Processing Society, and served as Canada’s representative at the founding meeting of the International Federation of Information Processing Societies.  He is a former Editor-in-Chief of the Journal of the Association for Computing Machinery, and a member of the Editorial Advisory Board of Encyclopaedia Britannica and of the Annals of the History of Computing.  Gotlieb has served for the last twenty years as the co-chair of the awards committee of the Association for Computing Machinery (ACM), and in 2012 received the Outstanding Contribution to ACM Award.  He is a member of the Order of Canada, and an awardee of the Isaac L. Auerbach Medal of the International Federation of Information Processing Societies.  Gotlieb is a Fellow of the Royal Society of Canada, the Association for Computing Machinery, the British Computer Society, and the Canadian Information Processing Society, and holds honorary doctorates from the University of Toronto, the University of Waterloo, the Technical University of Nova Scotia and the University of Victoria.
Allan Borodin is a University Professor in the Computer Science Department at the University of Toronto, and a past Chair of the Department.  Borodin served for many years as Chair of the IEEE Computer Society Technical Committee for the Mathematics of Computation, and is a former managing editor of the SIAM Journal on Computing.  He has made significant research contributions in many areas, including algebraic computation, resource tradeoffs, routing in interconnection networks, parallel algorithms, online algorithms, information retrieval, social and economic networks, and adversarial queuing theory.  Borodin’s awards include the CRM-Fields-PIMS Prize; he is a Fellow of the Royal Society of Canada, and of the American Association for the Advancement of Science.

Perspectives on ICT Professionalism in 2013

Stephen Ibaraki, Founder and Chair, IP3 Global Industry Council

Building on the discussion of professionalism in Kelly Gotlieb and Allan Borodin’s seminal book, Social Issues in Computing, I’m often asked what ICT professionalism means to me, and below I provide three perspectives for 2013.

First, since the 1980s I have interviewed over 1,000 global thought leaders, many of them appearing on IT Manager Connection (the world’s largest ICT management blog). In my interviews, I ask them this question on professionalism:

“Do you feel computing should be a recognized profession on par with accounting, medicine and law, with demonstrated professional development, adherence to a published code of ethics and a discipline process for those who breach it, personal responsibility, public accountability, quality assurance and recognized credentials?”

Prior to 2005, fewer than 5% of the interviewees would provide any comment on professionalism. In 2013, over 90% share their experiences, and the vast majority are supportive of all facets of professionalism, a substantial shift.

Second, in my speech at the ITU World Summit on the Information Society (WSIS) in Geneva in May, I presented a map of a profession adapted from sources such as the IEEE-CS, CIPS, IFIP and others, and explained the graphic in this way:

“The concept of IT as a profession, or professionalism, is gaining ground, along with a recognition that it must apply to all sectors of the IT industry.

Looking to the LEFT:

  • Certification involves: initial education, skills development, leading to certification;
  • Professional status, or professionalism, or the concept of IT as a profession, adds to this: adherence to a code of ethics, continuing demonstrated professional development, and alignment with a common body of knowledge (BOK) and standards of practice (SOP).

Looking to the RIGHT:

  • A Professional Society provides for: a sense of common identity or belonging;
  • Accreditation of substantive educational programs;
  • Assessment of skills development;
  • Provisions for “professional” certification;
  • Support for a code of ethics and a discipline process for those who breach it to ensure trust in the IT worker;
  • An assessment of continuing professional development;
  • And support for a Body of Knowledge (BOK) and Standards of Practice (SOP).”

The ITU released reports from the sessions.

The issues debated in the ITU professionalism session were:

  • The potential for the Skills and Competences Frameworks in use to produce fragmentation and non-alignment between industry and academia;
  • Labour-force diversity issues, including shortages caused by the ageing society, a lack of STEM graduates and a lack of appropriate workplace diversity (for example, the unequal representation of women ICT professionals);
  • The need for different treatment of developing versus developed countries;
  • Who should drive the professionalism of the ICT workforce?
  • How to develop the maturity of the profession in society?

Main Outcomes of the ITU Session:

  • IFIP IP3 is in a position to assist with resolving the issues around driving professionalism in the ICT workforce;
  • IFIP IP3 mapping and harmonization address the fragmentation and non-alignment between industry and academia with regard to Skills and Competences Frameworks;
  • IFIP IP3 is taking a proactive approach to solving labour-force diversity issues, including shortages caused by the ageing society, a lack of STEM graduates and a lack of appropriate workplace diversity (for example, the unequal representation of women ICT professionals);
  • IFIP IP3 localized mentorship programs address the differing needs of developing versus developed countries;
  • IFIP IP3 will support local entities in driving the professionalism of their workforces;
  • IFIP IP3’s collaborative model and best practices provide a ready toolbox for developing the maturity of the profession.

Quotes released from the ITU sessions included:

  • “The common denominator for sustained growth in economic development, GDP, innovation, sustainability and security is a professional workforce supported by internationally accredited industry relevant education, demonstrated skills development, recognized ethical conduct and adherence to proven best practices and standards. This involves the collaboration of business, industry, governments, academia, and professional societies.” – Stephen Ibaraki, ICT Fellow, Global Fellow, Distinguished Fellow
  •  “In our country, there’s a desire to create a professional ICT body and we want to find ways to do this. This workshop has shown me that IFIP IP3 is an organisation that can help us to achieve this.” – Samson Mwela, A/Assistant Director Telecommunications, Tanzania
  • “Industry in Switzerland needs 6000 graduates in ICT; only 3500 graduate, creating a shortage of 2500 each year.  IFIP IP3 produces an attractive career path, progression, recognition and mobility, addressing skills shortages and shortages in STEM.” – Professor Raymond Morel, Geneva.

There were emerging trends relevant to the Action Lines in the context of the WSIS+10 process. By 2017, 70% of leading-edge firms will be developing "versatilists", people with multiple skills and a focus on professionalism and business. Business analysts are already in high demand. There are 35M computing workers, a number projected to grow 30% yearly over the next five years, with an estimated further 50% in IT roles not accounted for in that figure. However, skills shortages, particularly in STEM, will blunt growth in business, industry, government, education, society, sustainability, security, economic development, and GDP without a focus on professionalizing the computing workforce.

Moreover, ICT is heavily integrated into business, industry, governments, education, society, sustainability, security and economic development; it accounts for 50% of GDP growth and produces a fivefold total factor productivity gain. Underlying ICT is a professional and skilled workforce. The IFIP IP3 global professionalism program adds significant value in producing the outcomes required to support ICT:

  • Global standards, quality assurance and protection of the public (Action Line C5);
  • Professionalism, trust and a code of ethics (AL C10);
  • A stronger voice for the IT practitioner and a sense of common identity (AL C5);
  • The feeling of being an engineer or executive rather than a geek or pirate (AL C4);
  • Business solutions over technical features (AL C5);
  • A career path, progression, recognition and mobility over an isolated job (AL C4);
  • And growing GDP and innovation over shortages of skills and of Science, Technology, Engineering and Math (STEM) graduates (AL C4).

These three perspectives from interviews and ITU WSIS provide an overview of Professionalism in 2013.

Stephen Ibaraki is the founding chairman of the United-Nations-founded IFIP IP3 Global Industry Council, as well as of iGEN Knowledge Solutions and the Global Board of GITCA, and was first board chairman of The Vine Group. He serves as vice-chair of the World CIO Forum and founding board director of FEAPO, and is a past president of the Canadian Information Processing Society (CIPS), which elected him a Founding Fellow in 2005.  Ibaraki is chair of the ACM Practitioners Board Professionalism and Certification Committee and Professional Development Committee, and is the recipient of many ICT awards, including an IT Leadership Lifetime Achievement Award, an Advanced Technology Lifetime Achievement Award, Professionalism Career Achievement Awards, an IT Hero Award, and the Gary Hadford Award, among others.  Ibaraki has received a Microsoft Most Valuable Professional Award each year since 2006. He serves as an advisor on ICT matters for a variety of global organizations, companies, and governments.

Social Issues in Computing and the Internet

Vinton G. Cerf, Vice-President and Chief Internet Evangelist, Google Inc.

Forty years ago, C.C. (“Kelly”) Gotlieb and Allan Borodin wrote about Social Issues in Computing. We can thank them for their insights so many years ago and can see now how computing and communication have combined to produce benefits and hazards for the 2.5 billion people who are thought to be directly using the Internet. To these we may add many who use mobile applications that rely on access to Internet resources to function. And to these we may add billions more who are affected by the operation of network-based systems for all manner of products, services and transactions that influence the pulse of daily life.

Not only are we confronted with cyber-attacks, malware, viruses, worms and Trojan horses, but we are also affected by our own social behavior patterns that lead to reduced privacy and even unexpected invasions of privacy owing to the inadvertent acts of others. Photo sharing is very common in the Internet today, partly owing to the fact that every mobile seems to have an increasingly high-resolution camera and the ability to upload images to any web site or send them to any email address. What the photos contain, however, may include people we don’t know who just happened to be caught in the frame. When these photos carry time, date and location information (often supplied by the mobile itself!), the involuntary participants in the image may find that their privacy has been eroded. Maybe they were not supposed to be there at the time…. Others “surfing” the Internet may find and label these photos correctly or incorrectly. In either case, one can easily construct scenarios in which these images are problematic.

One imagines that social mores and norms will eventually emerge for how we would prefer that these technologies be used in society. For example, it is widely thought that banning mobile phone calls in restaurants and theatres is appropriate for the benefit of other patrons. We will probably have to experience a variety of situations, some of them awkward and even damaging, before we can settle on norms that are widely and possibly internationally accepted.

While the technical community struggles to develop reliable access control, authentication and cryptographic methods to aid in privacy protection, others work to secure the operating systems and browsers through which many useful services are constructed and, sadly, also attacked. We are far from having a reliable theory of resilient, attack-resistant operating systems, browsers and other applications, let alone practices that are effective.

We have ourselves to blame for some of this. We use poorly constructed passwords, and we give up privacy in exchange for convenience (think of the record of your purchases that the credit card company or bank accumulates in the course of a year). We revel in sharing information with others, without necessarily attending to the potential side-effects for ourselves or our associates. Identity theft is a big business because we reveal so much that others can pretend to be us! And, of course, negligence results in the exposure of large quantities of personally identifiable information (e.g., lost laptops and memory sticks).

This problem will only become more complex as the “Internet of Things” arrives in the form of computer-controlled appliances that are also networked. Unauthorized third parties may gain access to and control over these devices or may be able to tap them for information that allows them to track your habits and know whether you are at home or your car is unoccupied.

The foresight shown by Gotlieb and Borodin so many years ago reinforces my expectation that we must revisit these issues in depth and at length if we are to fashion the kind of world we really wish to live in. That these ideas must take root in many countries and cultures, and somehow be compatible across them, only adds to the challenge.

Vinton G. Cerf is VP and Chief Internet Evangelist for Google and President of the ACM. He is a member of the US National Science Board, and a recipient of the US National Medal of Technology, the Presidential Medal of Freedom, the ACM A. M. Turing Award, and the Japan Prize. He is a former chairman of ICANN and a former President of the Internet Society.


Interview With the Authors: Part 1

When and how did you become interested in social computing? How did the book come about?


I was a voracious reader when I was young, absolutely voracious.  If I started on a book, I felt I was insulting the author if I didn’t finish it, whether I liked it or not. And if I liked it, then I read everything by the author. For example, when I discovered I liked The Three Musketeers, I then read The Vicomte de Bragelonne: Ten Years Later in five volumes, and then I read it in French.  When I was young, I could read 100 pages an hour, and I read everything.

I remember that I decided to be a scientist quite young, and that was due to reading The Microbe Hunters by Paul de Kruif.   I took mathematics, physics and chemistry. During the Second World War I went to England and worked on a very highly classified shell proximity fuse: we did a lot of calculations. After the war when ENIAC was announced, I naturally fell into computers.  After all, I had electronics, and I had ballistics.

Although I was interested in science, I continued to be interested in philosophy and English.  After I graduated in Mathematics, Physics and Chemistry, I decided it was time to “get educated”.  The university curriculum for English used to list all the books that you would read in particular courses, so I read them all.  I decided that computing was the future, because computers provided a way to organize knowledge.

When you organize knowledge, you start to build up databases, so I built up databases. Then people started to get worried about databases and privacy, and a committee was formed in Canada for the Departments of Justice and Communications to make recommendations as to what we ought to do about corporate databases.  Allan Gotlieb was in charge of the committee; Richard Gwyn was also on it. We issued a report on how to protect privacy in databases.

Then the UN got interested in the topic of computers, and particularly how computers could help developing countries. U Thant, who was UN Secretary General at the time, put together a committee of six people from six countries to produce a report on that.  I was one of the six: I represented Canada.  I went around Europe visiting the World Health Organization, UNESCO and the World Bank, to discuss computing with them.

Having done all this, I found it natural to ask what social issues there might be in computing.  I then invited Al to join me in writing a book, which became Social Issues in Computing.


I can’t remember exactly, but I think I got interested in the general topic of social issues in computing while I was a graduate student at Cornell in the late 60s, a period of anti-war protests and racial integration issues.  In some sense, everybody at universities was politically engaged. Because we were doing computing, and computing involves keeping track of people and maintaining records on them, it was natural that we would get interested in the issue.

In 1969 I came to the University of Toronto, and very soon after I arrived, Kelly and I together decided that we should teach a course on Computers and Society. I had become sensitized to the issue, and it seemed like a natural thing to do something on, even though it has never been my main research interest. Notes grew out of the course, and the book grew out of the notes.

Why did you choose to publish your ideas in a book?


We were already reaching students, but I felt I had to raise a debate. In order to bring these ideas to more people, provoke a debate about them, and see what others had to say, we wrote the book.  We were simply looking for a wider audience.


That’s what happens in many cases for academics.  We write books because we’ve been teaching a course and we realize that there is an audience for the topic, there’s an interest. This was before the internet, of course.  So if you wanted to reach an audience, you wrote a book. We naively thought, given that we had all our notes for our course, that it was going to be easy.  Easy or not, a book was the natural way of going forward. We certainly felt that we had enough material for a book.

Was it a difficult book to write?


Every book is difficult to write.


I had already written a book with J.N.P. Hume, High-Speed Data Processing. I didn’t know it at the time, but it was studied by the Oxford English Dictionary. They decided there were 11 words, like “loop”, which the book used in a different sense from the ordinary one.  When I told this to a colleague and friend of mine, he said, “The first use of words in a new sense in the OED never changes, so you’re now immortal.”

My wife was a poet and science fiction writer. She was a beautiful writer, and she taught Hume and me how to write.  She taught us to write a sentence, and then take out half the words.  Only then will it be a good sentence.

Were you pleased with the response to the book?  Did it have the impact that you were hoping for?


I know for a fact that it was the first textbook on Computers and Society.  Not many people know that.  I’m willing to bet that out of the several hundred members of the ACM special interest group on Computers and Society, there might be three who realize that Al and I wrote the first textbook on the subject, or who could quote it.  I’m not going to say we initiated the topic, but we certainly wrote the first textbook on it.

On the other hand, some of the problems we identified are still with us, and there are new ones.  By and large the effects of computers on society have continued to multiply, so there are now even more important issues, and different ones. I continue to follow things quite carefully.

C.C. (Kelly) Gotlieb is the founder of the Department of Computer Science (DCS) at the University of Toronto (UofT), and has been called the “Father of Computing in Canada”. Gotlieb has been a consultant to the United Nations on Computer Technology and Development, and to the Privacy and Computers Task Force of the Canadian Federal Department of Communications and Justice.  During the Second World War, he helped design highly classified proximity fuse shells for the British Navy.  He was a founding member of the Canadian Information Processing Society, and served as Canada’s representative at the founding meeting of the International Federation of Information Processing Societies.  He is a former Editor-in-Chief of the Journal of the Association for Computing Machinery, and a member of the Editorial Advisory Board of Encyclopaedia Britannica and of the Annals of the History of Computing.  Gotlieb has served for the last twenty years as the co-chair of the awards committee of the Association for Computing Machinery (ACM), and in 2012 received the Outstanding Contribution to ACM Award.  He is a member of the Order of Canada, and an awardee of the Isaac L. Auerbach Medal of the International Federation of Information Processing Societies.  Gotlieb is a Fellow of the Royal Society of Canada, the Association for Computing Machinery, the British Computer Society, and the Canadian Information Processing Society, and holds honorary doctorates from the University of Toronto, the University of Waterloo, the Technical University of Nova Scotia and the University of Victoria.

Allan Borodin is a University Professor in the Computer Science Department at the University of Toronto, and a past Chair of the Department.  Borodin served for many years as Chair of the IEEE Computer Society Technical Committee for the Mathematics of Computation, and is a former managing editor of the SIAM Journal on Computing.  He has made significant research contributions in many areas, including algebraic computation, resource tradeoffs, routing in interconnection networks, parallel algorithms, online algorithms, information retrieval, social and economic networks, and adversarial queuing theory.  Borodin’s awards include the CRM-Fields-PIMS Prize; he is a Fellow of the Royal Society of Canada, and of the American Association for the Advancement of Science.


ICT E-Skills and Professionalism in 2013

Stephen Ibaraki, Founder and Chair, IP3 Global Industry Council

2013 is the 40th anniversary of Kelly Gotlieb and Allan Borodin’s seminal book, “Social Issues in Computing,” in which Chapter 12 addressed the question of “Professionalization and Responsibility”, provided a definition of a profession, and analyzed the professionalism environment. In doing so, the authors became catalysts and pioneers for the continuing evolution and growth of Information and Communications Technology (ICT) professionalism.

Gotlieb helped found the Canadian Information Processing Society (CIPS), which continues to impact business, industry, governments, education, media, and society internationally. On its 50th anniversary, Canadian Prime Minister Stephen Harper stated “Since 1958, CIPS has represented its membership on important issues affecting the IT industry and profession. The association has promoted high ideals of competence and ethical practices through certification, accreditation programs, and professional development…Your efforts have made positive and lasting contributions to Canada’s economic growth and competitiveness.”

In this article, expanding upon their book, I start by providing my personal perspective on current ICT trends and the ecosystem for ICT professionalism and E-Skills.

ICT Trends

Below are some trends in ICT usage, extrapolated from the International Telecommunication Union (ITU) and major research groups. They emphasize how dependent we are on E-Skills and a professionalized workforce.

For 2013:

  • Over 2.4B Internet users, representing over 2.5 trillion USD in commerce
  • 1.7B mobile phones shipped; over 40% are smartphones
  • 6.5B mobile phone subscriptions
  • Countries such as China and India each have over 1B mobile subscribers
  • ICT accounts for over 20% of GDP growth in some countries
  • Every 10% increase in broadband penetration produces a 1.3% gain in economic growth
  • 90% of the world’s data was created in the last two years; there are now over 2 zettabytes of data created and replicated (1 zettabyte is one billion terabytes)

For 2016:

  • Over 4B Internet users anticipated, owing to the wide proliferation of internet-enabled mobile phones and smart devices: the expensive smartphones and tablets of 2013 will be the inexpensive commodity devices of the future
  • An estimated 2.5B mobile shipments, with over 60% being smartphones or tablet-inspired devices

ICT is integrated into all facets of business, industry, government, media, society and consumer life. This is demonstrated in the latest ICT trends and in the business focus of ICT skills, all of which drive the demand for ICT professionalism.

Kelly Gotlieb and Allan Borodin’s seminal book, “Social Issues in Computing,” laid the foundation for the continuing evolution of ICT professionalism, through key elements: accredited education, demonstrated professional development, adherence to a published code of ethics, alignment with best practices and an ICT Body of Knowledge (BOK), and recognized credentials.

In future articles, I will provide an updated definition of ICT Professionalism in 2013, as it has evolved since Social Issues in Computing, and I will outline key examples demonstrating the substantial progress made in defining E-Skills and ICT Professionalism in the four decades since the book was published.

Stephen Ibaraki is the founding chairman of the United-Nations-founded IFIP IP3 Global Industry Council, as well as of iGEN Knowledge Solutions and the Global Board of GITCA, and was first board chairman of The Vine Group. He serves as vice-chair of the World CIO Forum and founding board director of FEAPO, and is a past president of the Canadian Information Processing Society (CIPS), which elected him a Founding Fellow in 2005.  Ibaraki is chair of the ACM Practitioners Board Professionalism and Certification Committee and Professional Development Committee, and is the recipient of many ICT awards, including an IT Leadership Lifetime Achievement Award, an Advanced Technology Lifetime Achievement Award, Professionalism Career Achievement Awards, an IT Hero Award, and the Gary Hadford Award, among others.  Ibaraki has received a Microsoft Most Valuable Professional Award each year since 2006. He serves as an advisor on ICT matters for a variety of global organizations, companies, and governments.

Privacy: It’s Harder Than We Thought

John Leslie King, W.W. Bishop Professor of Information, University of Michigan

Chapter 5 of Gotlieb and Borodin’s (1973) Social Issues in Computing was titled “Information Systems and Privacy.”  This was a compelling issue at that time (Hoffman, 1969; Miller, 1971; Westin, 1970, 1971; Westin and Baker, 1972).  My early academic work was in this area (Mossman and King, 1975).  I thought privacy was the “issue of the future,” little imagining that it would still be “the issue of the future” four decades later.  In the early 1970s the focus was on “databanks,” large collections of personal data held mainly by government.  Today there is concern about personal data held by private companies.  The concerns might have shifted, but privacy remains salient (Rule, 1973, 2007).  Why, after so many years of serious discussion, is privacy still a top issue in computing and society?  Because dealing with privacy is harder than we thought.

The challenge of privacy in the computing era can be understood by comparing it to another hard and persistent problem in social computation: computer-assisted natural language processing, especially what was once called machine translation (MT).  Both MT and privacy appear relatively easy to solve, but looks are deceiving: both are hard.  Examining privacy through the mirror of MT allows us to better understand the challenge, and might help us calibrate our expectations for learning about the problems rather than expecting simple and permanent solutions to the problems.

MT reaches back to efforts by Voltaire and others to construct universal languages, but the goal of MT (Fully Automatic High Quality Translation or FAHQT) took on new importance following WW II.  There were many technical documents yet to be translated from German into English, and the Cold War would soon create the need to translate Russian into English.  Also, the power of digital computers was established during the war, especially in the “translation” work of code-breakers.

A 1949 essay by Warren Weaver titled, simply, “Translation,” triggered the hope that FAHQT could be achieved by MT (Weaver, 1949).  The U.S. Government began devoting substantial sums in pursuit of the dream.  Breakthroughs in linguistics (e.g., Chomsky’s 1957 Syntactic Structures) fueled the optimism.  Distinguished computer scientists and linguists predicted that the challenges of MT would be overcome within ten or fifteen years.  We now know that the problem was much harder than we thought at the time.

As early as 1960, MT optimism was being challenged from within (e.g., Bar-Hillel, 1960).  In 1966 a report by the National Academy of Sciences strongly criticized the MT field and its lack of progress (ALPAC, 1966).   Funding for MT was dramatically reduced, and major investments did not re-emerge until the late 1980s and the advent of statistical MT based on information theory (Macklovitch, 1996).  Hope for effective MT never abated because the potential benefits of FAHQT are too compelling to ignore.  Nearly 60 years after the initial burst of optimism we can see some progress, but progress has come slowly.

It is tempting to draw the usual lessons of Fate and Hubris from the story of MT.  To be fair, some proponents of FAHQT were over-zealous in their views of what could be done by when.  But the real lesson of MT was that human handling of natural language is far more complicated and sophisticated than anyone realized.  The proponents of MT fell prey to a natural mistake: that the commonplace must be easy to understand.  MT brought home the lesson that human capability in natural language is far more amazing than anyone knew.  The dream of FAHQT failed in its mission but succeeded in teaching us a lot about natural language processing in nature.

The mirror of MT says something about the persistence of the privacy problem.  Language evolves over time; before English spelling and grammar were standardized in the 19th century, it was customary for even talented people (e.g., William Shakespeare) to spell and use words however they thought best.  A founder of American literature, Mark Twain, declared that he had little respect for good spelling (Twain, 1925).  Yet a single, ideographic written language has long spanned much of Asia, in contrast to a wide variety of spoken languages that are not comprehensible to non-native speakers.

The relatively recent U.S. controversy over “ebonics” (Ramirez, et al., 2005) pitted those who favor teaching children in a recognizable (to them) vernacular against those who believe that all instruction should be in standardized American English.  Different languages evolve at different speeds, and one size does not fit all.  Scholars often seek a simplifying model to explain complicated things, but natural language is much more complicated than it first appears.  The simplifying assumptions behind MT did not work out.  Perhaps this was in part because MT researchers assumed the primary goal of natural language is to be understood.  Language is often used to obscure, misdirect or otherwise confound understanding.  And that leaves aside the whole issue of poetry.

Privacy also evolves.  Hundreds of millions of users give social networking sites a permanent, royalty-free license to use personal information for almost any purpose.  Privacy advocates complain about such license, and occasionally win concessions from social networking companies or changes in public policy.  Still, users continue to post information that would seldom have seen the light of day in an earlier era.  They mock Thomas Fuller’s 17th-century aphorism, “Fools’ names, like fools’ faces, are often seen in public places.”  While some social networkers have learned the hard way that prospective employers, future in-laws, and college admission officers visit such sites, sensitive personal information continues to be posted.  The number of people using such services continues to grow, and the amount of personal information “out there” grows accordingly.

Evolution in privacy is not uniform: some topics are treated differently than others.  Forty years ago, few people in the United States would have disclosed same-sex preferences for fear of being arrested and prosecuted; harsh laws against homosexuals were common.  Today many people disclose same-sex preferences openly on the Web.  Yet disclosure of personal information about health and income has not evolved in the same way.  Consider the common affliction of cancer.  It was almost never discussed publicly four decades ago; Betty Ford and Happy Rockefeller shocked the public when they openly announced that they had breast cancer in 1974.  By 2012 the breast-cancer charity Susan G. Komen for the Cure was front-page news for weeks.  Yet such health information (e.g., the admission that one has cancer) is not discussed routinely on social network sites.  Why not?

A possible culprit is confusion between privacy and reciprocity.  Health and life insurance in the United States have long discriminated against people with “prior conditions.”  Individuals with cancer hide that information from powerful insurance companies that seek to exclude cancer patients from coverage.   Although discussed as a privacy issue, this is fundamentally about power.  Similarly, people with high incomes are reluctant to disclose personal financial information on the Web for fear of disapprobation or worse (e.g., kidnapping for ransom, property theft, identity theft).  This is described as a privacy issue even when the real motivations are reciprocity and power.

Even more interesting is possible change in what constitutes “public” information.  Following the December 2012 elementary school shooting in Newtown, Connecticut that left 20 children and six adults dead, the Lower Hudson Journal News, a small New York newspaper, published an interactive map with the names and addresses of registered gun owners.  Such information is public by law in New York State, but many gun owners complained that the newspaper had violated their privacy and made them targets of thieves who might steal firearms.  Employees of the newspaper were subjected to threats, and the names of the schools their children attended were made public.  This escalation was less about privacy than about reciprocity.

The issue of computing and privacy is still with us four decades after Gotlieb and Borodin raised it because addressing it effectively is harder than we thought.  The mirror of MT is relevant here: both natural language and privacy are moving targets, and both draw much of their instantaneous meaning from context that is very difficult to capture in computers.  Difficulty in achieving the goal of FAHQT might have predicted difficulty in dealing with computing and privacy, but that link was not made.

As hard as MT proved to be, dealing effectively with privacy is harder.  Unlike natural language, privacy is often confused with other things.  The main confusion involves reciprocity, a related but distinct issue.  The word “privacy” is often an allusion to a complicated power relationship between an individual and others.  Changing technology also has different effects for MT and for privacy, enabling improvements in MT but complicating privacy.  New technology can make it appear that “everyone already knows,” and the growth of social networking might make disclosure of previously sensitive personal information the “new normal.”  New technology such as Google Maps can also cause information about gun registration that was formerly considered “public” to be declared “private” in the interests of preserving privacy.

Computing can reveal much about social issues by drawing attention to the issues and changing the way we attempt to deal with them.  Machine translation revealed how marvelous human natural language processing is.  The study of computing and privacy similarly shows how complicated privacy is.  “Social impacts” of computers are seldom linear and predictable, and by studying social issues in computing we often learn how little we know about social issues.


ALPAC (1966) Language and Machines: Computers in Translation and Linguistics.  A report of the Automatic Language Processing Advisory Committee, National Academy of Sciences.  Washington, DC: NAS Press.

Bar-Hillel, Y. (1960) ‘The present status of automatic translation of languages’, Advances in Computers 1 (1), pp. 91-163.

Chomsky, N. (1957) Syntactic Structures.  The Hague: Mouton.

Gotlieb, C.C. and Borodin, A. (1973) Social Issues in Computing.  New York: Academic Press.

Hoffman, L.J. (1969).  Computers and Privacy: A Survey.  ACM Computing Surveys, 1(2) pp. 85-103.

Macklovitch, E. (1996) “The Future of MT is now and Bar-Hillel was (almost entirely) right.”  In Koppel, M. and Shamir, E. (eds), Proceedings of the Fourth Bar-Ilan Symposium on Foundations of Artificial Intelligence, June 22-25, 1995.  Cambridge: MIT Press, pp 137-148.

Miller, A.R. (1971). The Assault on Privacy.  Ann Arbor: University of Michigan Press.

Mossman, F.I. and King, J.L. (1975) “Municipal Information Systems: Evaluation of Policy Related Research. Volume V. Disclosure, Privacy, and Information Policy, Final Report.”  Washington, DC: National Technical Information Service PB-245 691/1.  Reprinted in Kraemer, K.L. and King, J.L. (Eds.) (1977) Computers and Local Government, Volume 2: A Review of Research.  New York: Praeger.

Ramirez, J.D, Wiley, T.G., de Klerk, G., Lee, E. and Wright, W.E. (Eds.) (2005) Ebonics: the Urban Education Debate (2nd Edition).  Tonawanda, NY: Multilingual Matters, Ltd.

Rule, J.B. (1973) Private Lives and Public Surveillance: Social Control in the Computer Age.  New York: Schocken.

Rule, J.B. (2007) Privacy In Peril: How We Are Sacrificing a Fundamental Right in Exchange for Security and Convenience. New York: Oxford University Press.

Twain, M. (1925) The Writings of Mark Twain, compiled by C.D. Warner and A.B. Paine.  New York: G. Wells, p. 68.

Weaver, W. (1949) ‘Translation.’  A memorandum reproduced in Locke, W.N. and Booth, A.D. (Eds.) (1955) Machine Translation of Languages: Fourteen Essays.  Cambridge: MIT Press, pp. 15-23.

Westin, A.F.  (1970) Privacy and Freedom.  Oxford: Bodley Head.

Westin, A.F. (1971) Information Technology in a Democracy.  Cambridge: Harvard University Press.

Westin, A. and Baker, M.A. (1972)  Databanks in a Free Society: Report of the Project on Computer Databanks of the Computer Science and Engineering Board, National Academy of Sciences.  New York: Quadrangle.

 John Leslie King is W.W. Bishop Professor of Information and former Dean of the School of Information and former Vice Provost at the University of Michigan. He joined the faculty at Michigan in 2000 after twenty years on the faculties of computer science and management at the University of California at Irvine.  He has published more than 180 academic and professional books and research papers from his research on the relationship between changes in information technology and changes in organizations, institutions, and markets.  He has been Marvin Bower Fellow at the Harvard Business School, distinguished visiting professor at the National University of Singapore and at Nanyang Technological University in Singapore, and Fulbright Distinguished Chair in American Studies at the University of Frankfurt.  From 1992-1998 he was Editor-in-Chief of the INFORMS journal Information Systems Research, and has served as associate editor of many other journals.  He has been a member of the Board of the Computing Research Association (CRA) and has served on the Council of the Computing Community Consortium, run by the CRA for the National Science Foundation.  He has been a member of the Advisory Committees for the National Science Foundation’s Directorates for Computer and Information Science and Engineering (CISE) and Social, Behavioral and Economic Sciences (SBE), as well as the NSF Advisory Committee for Cyberinfrastructure (ACCI).  He holds a PhD in administration from the University of California, Irvine, and an honorary doctorate in economics from Copenhagen Business School.  He is a Fellow of the Association for Information Systems and a Fellow of the American Association for the Advancement of Science.


The Enduring Social Issues in Computing

William H. Dutton, Professor of Internet Studies, Oxford Internet Institute, University of Oxford

2013 marks 40 years since the publication of Social Issues in Computing (Academic Press, 1973) by Calvin ‘Kelly’ Gotlieb and Allan Borodin, and the social issues they identified and explicated have become increasingly central to the social and computer sciences as well as the humanities.

It was the year after its publication that I began research on the social implications of computing.  I was trained in political science and social research methods, with a focus on urban studies.  I joined the Public Policy Research Organization (PPRO) at the University of California, Irvine, in 1974 to work with Ken Kraemer, Rob Kling, Jim Danziger, Alex Mood and John King on the Evaluation of Urban Information Systems – the URBIS Project.  My initial role was focused on the survey components of the project, supporting the research underpinning our assessment of the impact of computing in American local governments.

Prior to URBIS, I had used computing as a tool for social research, but had not studied the impact of computing.  One of the best sources I found for guidance on how a social scientist might think about computers in organizations and society was Social Issues in Computing, even though it was written by two computer scientists.  I am amazed that forty years after its publication, despite many subsequent waves of technological change – the rise of microelectronics, personal computing and the Internet among other digital media – this wonderful book remains seminal, refreshingly multidisciplinary, foresighted and inspiring – actually career-changing for me.


The book was groundbreaking – seminal – even though framed as more of a textbook synthesis than a research report.  The early 1970s, in the waning years of the Vietnam War, were seething with debate over technology and society.  However, even though computers and electronic data processing – already termed Information Technology – were being increasingly employed by organizations, study of the social implications of computing was limited and not anchored in the social sciences.  With rare exceptions, such as Daniel Bell’s work on the information society and Alan Westin and Michael Baker’s work on privacy, social scientists viewed computers as calculators.  I was consistently questioned about why a political scientist would be interested in what was seen as a technical or administrative tool.

Some of the most insightful work was being done by a core group of computer scientists who thought about the future of computing and who were concerned, almost as an avocation, with the societal issues of computing.  Kelly Gotlieb and Allan Borodin were among the pioneers in this group.  Other influential works, such as Joseph Weizenbaum’s Computer Power and Human Reason (1976), played major roles in building this field, but came later and spanned less terrain than Social Issues in Computing, which helped scope and map the field.

Forty Years of Influence

The ultimate test of the social issues identified by Gotlieb and Borodin is that they remain so relevant today. Consider some of the key issues identified in 1973, and how they are reflected in contemporary debate (Table). Update the computing terminology of 1973 and you have most of the questions that still drive social research. The enduring nature of these issues, despite great technical change, is illustrated well by Social Issues in Computing.

In the early 1970s, the very idea of every household having a computer – much less multiple devices being carried by an individual – was considered fanciful ‘blue sky’ dreaming.  Gotlieb and Borodin discussed the idea of an ‘information utility’ and were well aware of J. C. R. Licklider’s call for a global network, but ARPANET was only at the demonstration stage at the time they wrote, and governments were the primary adopters of computing and electronic data processing systems.  Nevertheless, the issues they defined in 1973 remain remarkably central to debates over the Internet, big data, social media and the mobile Internet forty years hence.

Table. Social Issues Over the Decades.

Topic | Circa 1973 | Circa 2013
Users | Staff, computing centres, user departments in governments and organizations | Individuals and ‘things’ in households, on the move, and in organizations and governments
Diffusion of technologies | Issues over the pace of change, and disparities within and across nations in computing, storage, speed, …, such as developing countries; focus on IT | Social and global digital divides, but also the decline of North America and West Europe in the new Internet world in Asia and the global South; greater focus on ICTs, including communication technology
Privacy | Data banks, information gathering, linking records, government surveillance | Big data, data sharing, surveillance
Security | Security of organizational and government computing systems | Cyber-security from individual mobile phones to large-scale systems and infrastructures, such as cloud computing
Transportation, urban, and other planning systems | Systems, models, and simulations in planning and decision-making | Intelligent cities, smart transportation, digital companions for individuals
Capabilities and limitations | Artificial intelligence (AI): Can computers think? | AI, semantic web, singularity
Learning and education | Computer-assisted instruction | Online, blended and informal learning; global learning networks
Employment | Productivity, cost cutting, deskilling, information work, training and education | Reengineering work, collaborative network organizations, crowdsourcing, outsourcing, knowledge work, women in computing
Products and services | Anti-trust, software protection (copyright) | Intellectual property protection across all digital content versus open data, and innovation models
Power and politics | Power shifts within organizations, across levels of government, and nations; (de)centralization | (Dis)empowerment of individuals, nations and others in an expanding ecology of actors; (de)centralization; regime change
Attitudes and values | Priority given values tied to technology, privacy, freedom, efficiency, equality | Global values and attitudes toward the Internet and related technologies, rights and value conflicts, such as freedom of expression
Responsibilities | Professional and social responsibilities of computer scientists, programmers, users in organizations | Responsibilities, norms, rights across all users and providers, including parents and children in households, bloggers, …
Policy and governance | Anti-trust, telecommunication policy, standards, privacy, IP | Privacy (government and service provider policies), expression, content filtering, child protection, and Internet governance and standards


Social Implications in Context: Intended and Unanticipated

The book not only identified key social issues, but also set out many of the assumptions that still guide social research.  It alerted readers to the direct as well as the secondary, unanticipated and unplanned implications of technical change.  Gotlieb and Borodin were not technological determinists.  They insisted that context is critical to the study of key issues: we need to understand the social implications of computing in particular social and organizational contexts.  To this day, many discussions of big data or the Internet of Things are too often context-free.  It is when placed in particular contexts that key questions take on added meaning and the potential for empirical study.

Of course, Social Issues in Computing did not identify all of the issues that would emerge in the coming decades.  The authors did not anticipate such rising issues as the role of women in the computing professions or the shift of the centre of gravity of computer use away from North America and West Europe to the rapidly developing economies of Asia and the global South.  How could they have foreseen the current focus on child protection in Internet policy?  They were not oracles, but they provided a framework that could map nearly all the social issues, intended and unintended, and that has remained of value for decades.

The Case for Multi- and Inter-Disciplinary Research

As a political scientist moving into the study of computing in organizations, I found in Gotlieb and Borodin the case for embracing multi-disciplinary research. Their succinct, clear and well organized exposition of the technical, managerial, economic, political, legal, ethical, social and philosophical problem areas made the best case I had yet seen for focusing on computing from multiple disciplines. I quickly saw that my focus on political science was limited as a perspective on all the big issues, which required more interdisciplinary insights. At the same time, I also found their discussion of the power shifts tied to computing to provide an immediate focus for me as a political scientist, one that drove the major book emerging from our URBIS project, entitled Computers and Politics (Columbia University Press, 1982). However, it would have been impossible to write Computers and Politics had we not had a multi-disciplinary team of researchers collaborating on this issue.

In many ways, I have continued to pursue issues of power shifts from my early study of computers in government to my most recent work on the role of the Internet in empowering individuals across the world, creating what I have called a Fifth Estate role that is comparable to the Fourth Estate enabled by the press in liberal democratic societies. And throughout my career, I found the multidisciplinary study of the social issues of computing, information and communication technologies, and the Internet to be more compelling than any single disciplinary pursuit.

Inspiring Colleagues

I met Kelly Gotlieb a few years after the URBIS project had concluded.  I was able to tell him how influential his work was for me as a new entrant to this area of study.  Looking back over the last 40 years of my own work, I am even more struck by just how influential he and his book were, and I was simply one of many who read Social Issues in Computing.  There is no question in my mind why the former students and colleagues of Kelly Gotlieb and Allan Borodin want to acknowledge their book, and the seminal role it must have played in their intellectual and academic lives and in the broader study of the social issues in computing.

Bill Dutton, Oxford, 20 December 2012

William H. Dutton is Professor of Internet Studies at the Oxford Internet Institute, University of Oxford, and a Fellow of Balliol College.  He became a co-principal investigator on the URBIS project at the University of California in 1974, supported by an NSF grant led by Professor Kenneth L. Kraemer.  His most recent book is The Oxford Handbook of Internet Studies (Oxford University Press, 2013).



40th Anniversary Blog Introduction

John DiMarco, Department of Computer Science, University of Toronto

In 1973, Kelly Gotlieb and Allan Borodin’s seminal book, Social Issues in Computing, was published by Academic Press.  It tackled a wide array of topics: Information Systems and Privacy; Systems, Models and Simulations; Computers and Planning; Computer System Security; Computers and Employment; Power Shifts in Computing; Professionalization and Responsibility; Computers in Developing Countries; Computers in the Political Process; Antitrust Actions and Computers; and Values in Technology and Computing, to name a few.  The book was among the very first to deal with these topics in a coherent and consistent fashion, helping to form the then-nascent field of Computing and Society.  In the ensuing decades, as computers proliferated dramatically and their importance skyrocketed, the issues raised in the book have only become more important.  The year 2013, the 40th anniversary of the book, provides an opportunity to reflect on the many aspects of Computing and Society it touched on, as they have developed over the four decades since it was published.  After soliciting input from the book’s authors and from distinguished members of the Computers and Society intellectual community, we decided that this blog, with insightful articles from a variety of sources, was a fitting way to celebrate the 40th anniversary of the book.

John DiMarco has maintained an avid interest in Computing and Society while pursuing a technical career at the Department of Computer Science at the University of Toronto, where he presently serves as IT Director. He is a regular guest-lecturer for the department’s “Computers and Society” course, and is the editor of this blog.