Theopolis Monk, Part III: The Hypothesis is Probably Wrong

In Part I of this series, we reflected on a set of hopes for “benevolent” AI governance as seen in the science fiction TV series Buck Rogers in the 25th Century. Humanity, having brought itself to near ruin with wars and ecological disasters, decided to turn over the care of its society to a Computer Council, whose decisions saved humanity and the planet from “certain doom.”

In Part II, we looked “under the hood” at how the representations that AI systems employ in their decision-making can be very different from what humans find intuitive, and how the requirement that algorithmic decisions be “explainable” is manifesting in legislation such as the General Data Protection Regulation (GDPR) of the European Union.

Implicit in the hopes of Part I and the concerns of Part II is a suggestion that it is the machines themselves that will be responsible for making the decisions. Currently, we see this as essentially the case in some fields, as algorithms determine who will get healthcare [51] or bank loans [52], and even civil liberties in China, such as who is allowed to book airline flights [53].

This raises the question: are the machines truly the ones doing the deciding, or are they merely “advising” the humans who actually make the decisions? The answer is “yes” to both, as each arrangement is currently in place. Humans being advised by algorithms is the norm; in the financial sector, however, a large class of stock trades is entirely automated, with companies agreeing to be legally bound by the trading decisions of their algorithms. The speed at which trading algorithms operate is both their key strength for earning money—spawning the entire field of “High Frequency Trading” [54]—and their key weakness for human oversight, as in the “Flash Crash” of 2010, brought about by trading algorithms run amok [55]. Speed has been identified as a central concern for the oversight of a multitude of AI systems; in the words of the organizers of the Speed conference on AI Safety, “When an algorithm acts so much faster than any human can react, familiar forms of oversight become infeasible” [56].

In the coming technological future of self-driving cars, passengers will be subject to the decisions of the driving algorithms, yet being subject to those decisions is not the same as bearing legal accountability for them. The outcomes of automated decision-making remain the responsibility of humans, whether as individuals or corporations. It has recently been debated whether to recognize AIs as legal persons [57], and ethicists such as Joanna Bryson and others have spoken out strongly against doing so [58], arguing that responsibility for the actions of such systems should be retained by the corporations manufacturing them: “attributing responsibility to the actual responsible legal agents—the companies and individuals that build, own, and/or operate AI” [59], not merely the individual human owners of a product.

The responsibility of developers to steward their AI creations has been a concern since nearly the inception of AI. This is not responsibility in the sense of Frankenstein, whereby the creator is obliged toward some sentient creature of his own making [60]; there are interesting theological reflections on such a situation [61], but they lie well outside the scope of our current discussion. Indeed, with respect to conceptions of AI for the foreseeable future, Bryson has stated forcefully that, because AIs are not persons and should not be regarded as such, “We are therefore obliged not to build AI we are obliged to” [62]. Rather, the type of responsibility we speak of is the need for AI developers to be mindful of the intended and unintended uses of their creations, and to consider the impact of their work.


Norbert Wiener, creator of the field of cybernetics on which modern machine learning is based, also wrote extensively about ethical concerns; indeed, he is regarded as the founder of the field of Computer and Information Ethics [63]. His deep concerns about the ethical issues likely to arise from computer and information technology are developed in his 1950 book The Human Use of Human Beings [64], in which he foretells the coming of a second industrial revolution, an age of automation with “enormous potential for good and for evil.” Joseph Weizenbaum, creator of the famous ELIZA computer program [65], the first chatbot, was outspoken on the topic of social responsibility both in print [66] and in interviews; he noted that a turning point for him came when he reflected on the “behavior of German academics during the Hitler time” [67], academics who devoted their efforts to scientific work without sufficient regard for the ends to which their research was applied. Weizenbaum’s remarks were taken up by Kate Crawford in her recent “Just an Engineer: The Politics of AI” address for DeepMind’s “You and AI” lecture series at the Royal Society in London [68], in which she voiced concern over the “risk of being so seduced by the potential of AI that we would essentially forget or ignore its deep political ramifications.”

This need for responsible reflection and stewardship is particularly acute for AI systems which are intended to be used in social and political contexts. Noteworthy examples of this include police use of predictive algorithms [69] and facial recognition [70], immigration control [71], and the dystopian scope of China’s Social Credit System [53], as well as the scandal of election propaganda-tampering made possible by Facebook data employed by Cambridge Analytica [72].

It must be emphasized that most of these applications are seen by their creators as addressing a public need, and are thus being employed in the service of the public good. The catchphrase “AI for Good” is now ubiquitous, forming the titles of major United Nations Global Summits [73], foundations [74], and numerous internet articles and blogs, and trackable on Twitter via the “#AIForGood” hashtag. The phrase’s widespread use makes it difficult to interpret; most who use it would likely agree that autonomous weapons systems are not in the interest of the public good, whereas fostering sustainable environmental practices is. Yet one sees conflicting claims: assertions that AI systems could facilitate “unbiased” decision-making [75] alongside (more numerous) demonstrations of AIs essentially becoming platforms for promoting existing bias [76, 77]. One can also find many optimistic projections for the use of AI in helping the environment [78-80], including improving the efficiency of industrial processes to reduce consumption, providing better climate modeling, preventing pollution, improving agriculture, and streamlining food distribution.

These are worthy goals; however, many rest on the assumption that the societal problems we face with regard to the law, the environment, and other significant areas all result from a lack of intelligence and/or data, and perhaps also a lack of “morality.” The application of AI toward the solution of these problems amounts to a hypothesis that these problems admit a technical solution. This hypothesis is probably wrong, but to see why, we should first give some attention to why it seems so compelling. The increasing automation of the workplace (e.g., see the Weizenbaum interview for interesting insights on the development of automated bank tellers, ca. 1980 [67]) and the ever-growing list of announcements of human-level performance by AIs at a host of structured, well-defined tasks demonstrate that many challenges do admit such technical solutions. A large class of these announcements in recent years involves the playing of games, whether they be video games, board games, card games or more abstract conceptions from the field of Game Theory.

Game Theory has been used to model and inform both individual and collective decision-making and is important enough to merit political science courses dedicated to its application [81]. One famous example of individual decision-making is the Prisoner’s Dilemma, which astronomer Carl Sagan suggested could be extended into a foundation for morality [82]. In the case of collective action, the Nobel-prize-winning work of John Nash (popularized in the film A Beautiful Mind) provided a framework for defining fixed points, known as “Nash equilibria,” in competitive games. Nash proved that these equilibria exist in any finite game [83] (i.e., games involving a finite number of players, each with a finite number of choices), such that if the choices of all the other players are known, no rational player will benefit by changing his or her own choice. Beyond mere existence, there are algorithms that guarantee finding these equilibria [84], but the equilibria are not guaranteed to be unique, may not be optimal in the sense of being in the best interest of all players collectively, and are not necessarily attainable by players with limited resources [85]. The outcomes of such games can sometimes lead to paradoxical conclusions that policy-makers learn to take into account [86]; however, the particular outcomes depend strongly on the weighting of the relative rewards built into the game, and care must be taken before applying the results of one set of assumed weights to real-world situations [87]. Apart from the limited applicability of any one particular solution, other significant limitations exist, such as the fact that game-theoretic models are necessarily reductionistic and fail to capture complex interactions, and that human beings do not behave as entirely rational agents. Noted economist and game theorist Ariel Rubinstein cautions,

“For example, some contend that the Euro Bloc crisis is like the games called Prisoner’s Dilemma, Chicken or Diner’s Dilemma. The crisis indeed includes characteristics that are reminiscent of each of these situations. But such statements include nothing more profound than saying that the euro crisis is like a Greek tragedy. In my view, game theory is a collection of fables and proverbs. Implementing a model from game theory is just as likely as implementing a fable… I would not appoint a game theorist to be a strategic advisor” [88].

It is simply not evident that all societal interactions can be meaningfully reduced to games between a fixed number of rational players with unbounded resources, and thus the application of game-playing—whether by economists, mathematicians or AIs—while informative, does not provide a complete “technical solution.”
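To make concrete the earlier point that a Nash equilibrium is guaranteed to exist yet need not be collectively optimal, consider the classic Prisoner’s Dilemma mentioned above. The following minimal sketch (in Python, with illustrative payoff values assumed rather than drawn from any of the cited works) brute-forces the pure-strategy equilibria of the game and compares them with the outcome that is best for both players together:

```python
# A minimal sketch with assumed, illustrative payoffs: the two-player
# Prisoner's Dilemma, showing that its unique pure-strategy Nash equilibrium
# (mutual defection) is not the collectively best outcome (mutual cooperation).

from itertools import product

# payoffs[(row_choice, col_choice)] = (row player's payoff, column player's payoff)
# 'C' = cooperate (stay silent), 'D' = defect (betray); higher numbers are better.
payoffs = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}
choices = ['C', 'D']

def is_nash(row, col):
    """True if neither player can gain by unilaterally changing their own choice."""
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in choices)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in choices)
    return row_ok and col_ok

equilibria = [pair for pair in product(choices, choices) if is_nash(*pair)]
best_total = max(product(choices, choices), key=lambda pair: sum(payoffs[pair]))

print("Pure-strategy Nash equilibria:", equilibria)   # [('D', 'D')]
print("Collectively best outcome:   ", best_total)    # ('C', 'C')
```

The unique equilibrium, mutual defection, leaves both players worse off than mutual cooperation: a suggestive but highly reductionistic insight of exactly the sort such models supply, and a reminder of how much real-world nuance a single 2×2 payoff table leaves out.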

What of the earlier claim that AIs have (so far) only demonstrated success at “structured, well-defined tasks”? Could one not argue that the current AI explosion is precisely due to the ability of machine learning systems to solve difficult, even “intractable,” problems and to complete tasks which humans find hard to fully specify—tasks including image classification, artistic style transfer [89, 90], turning shoes into handbags [91], and advanced locomotion [92, 93], to name a few? Is it inconceivable that, given the power of advancing machine learning systems to form representations and make predictions using vast datasets, they could find “connections” and “solutions” which have eluded the grasp of human historians, political theorists, economists, and others?

This is why the word “probably” is included in the phrase “the hypothesis is probably wrong”: recent history has shown that negative pronouncements about the features and capabilities of AI have a tendency to be superseded by actual demonstrations of those features and capabilities. Such gaffes generally proceed as, “Well, an AI could never do X,” or “AIs don’t do Y,” only to be followed by someone developing an AI that does X, or pulling up a reference showing that AIs have been doing Y since last year. However, there is a difference between caution about making limiting predictions for the future, and the expression of a hope that someday, somehow, AI systems will solve the world’s problems.

Such a hope in the salvific power of a higher intelligence shares features with non-technical, non-scientific outlooks, notably religious outlooks such as the eschatological hopes of Christianity. With Christianity, however, there exists at least a set of historical events, rational philosophical arguments and personal experiences which, at least in the minds of believers, constitute sufficient evidence to warrant such hopes; and although the characteristics of the Savior are (almost by definition) not fully specified, they are enumerated through textual testimony, and they are the sort of characteristics that would warrant entrusting the care of one’s life and affairs to such a being. In contrast, the vagueness of the hope for future AI saviors has more in common with the “Three Point Plan to Fix Everything” expressed by the U.S. President in the movie Idiocracy:

“Number one, we got this guy, [named] Not Sure.

Number two, he’s got a higher I.Q. than any man alive.

And number three, he’s gonna fix everything” [94].

The arrival of intelligent machines that somehow resolve long-standing conundrums and conflicts amounts to a new twist on the notion of deus ex machina, a device which historically implies a lack of continuity or precedent and which rightly carries a pejorative connotation of a lack of warrant.

This lack of warrant in the belief in a technological solution has its seeds in the very assumption on which that belief rests: that the problems of society result from a lack of intelligence. With respect to environmental concerns, this assumption is contradicted by the observations and conclusions of Gus Speth, former dean of the Yale School of Forestry & Environmental Studies and former administrator of the United Nations Development Programme:

“I used to think that top environmental problems were biodiversity loss, ecosystem collapse and climate change. I thought that thirty years of good science could address these problems. I was wrong. The top environmental problems are selfishness, greed and apathy, and to deal with these we need a cultural and spiritual transformation. And we scientists don’t know how to do that” [95].

Erle Ellis, director of the Laboratory for Anthropogenic Landscape Ecology, expressed a similar doubt that a lack of intelligence and/or data is the fundamental cause of ecological challenges in his essay “Science Alone Won’t Save the Earth. People Have to Do That”: “But no amount of scientific evidence, enlightened rational thought or innovative technology can resolve entirely the social and environmental trade-offs necessary to meet the aspirations of a wonderfully diverse humanity—at least not without creating even greater problems in the future” [96]. Kate Crawford, in her aforementioned talk to the Royal Society, emphasized that even the details of developing applications of AI systems affecting the public involve implementation choices which “are ultimately political decisions” [68]. Thus we see the use of AI for a more just and harmonious society as requiring human oversight, not as obviating it. And rather than seeing AI resolve human disputes, data scientist Richard Sargeant predicts that “Because of the power of AI…there will be rows. Those rows will involve money, guns and lawyers” [97].

To sum up: despite the amazing success of algorithmic decision-making in a variety of simplified domains, well-informed AI ethicists maintain that the responsibility for those decisions must remain attached to humans. Having a machine learning system able to make sense of vast quantities of data does not seem to offer a way to circumvent the necessary “cultural and spiritual” and “political” involvement of humans in the exercise of government, because the assumption that the political, environmental and ethical challenges of our world result from a lack of intellect or data is incorrect, and the hypothesis that these problems admit a technical solution is self-contradictory (the technical solutions themselves require human political activity for design and oversight). The desire for relief from these human processes of communal conflict resolution amounts to a form of hope akin to religious eschatology, which may be warranted for adherents of a faith but is inconsistent with the trajectory of technical developments in machine learning applications. Thus we are left with AI as a tool for humans: we may make better decisions by means of it, but it is we who will be making them; abdicating to machines is essentially impossible.

All this is not to say that AI can’t be used by people for many powerful public goods—and evils! As Zeynep Tufekci famously remarked: “Let me say: too many worry about what AI—as if some independent entity—will do to us. Too few people worry what power will do with AI” [98].

In the next installment, we will highlight some of these uses of AI in service to secular society as well as to the church, a class of applications I will term “AI monks.”

Reality Changing Observations:

1. What do you think is the source of the desire to apply technical “solutions” to social “problems”?

2. If “AI for Good” can only be a good thing, why might we need greater discernment and oversight for such applications than for other intended uses of AI?

3. If the AI advisor you were using instructed you to do something uncomfortable for the sake of a greater good that you didn’t fully understand, would you defer to the AI’s judgement or follow your own inclinations?

Acknowledgement:

Sponsored by a grant given by Bridging the Two Cultures of Science and the Humanities II, a project run by Scholarship and Christianity in Oxford (SCIO), the UK subsidiary of the Council for Christian Colleges and Universities, with funding by Templeton Religion Trust and The Blankemeyer Foundation.

[51] C. Lecher, “A healthcare algorithm started cutting care, and no one knew why,” The Verge, Mar 21, 2018.

[52] T. Hills, “The Mental Life of a Bank Loan Algorithm: A True Story,” Psychology Today, Oct 2, 2018.

[53] M. Palin, “China’s ‘social credit’ system is a real-life ‘Black Mirror’ nightmare,” New York Post, Sep 19, 2018.

[54] I. Staff, “High-Frequency Trading – HFT,” Investopedia, Jul 23, 2009.

[55] M. Phillips, “Nasdaq: Here’s Our Timeline of the Flash Crash,” Wall Street Journal, May 11, 2010.

[56] “DLI | Speed Conference | Cornell Tech,” Digital Life Initiative | Cornell Tech, New York, 2018.

[57] J. Delcker, “Europe divided over robot ‘personhood,’” POLITICO, Apr 11, 2018.

[58] J. J. Bryson, “Robots Should Be Slaves,” Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, pp. 63–74, 2010.

[59] J.J. Bryson, “I’m Joanna Bryson, a Professor in Artificial (and Natural) Intelligence. I am being consulted by several governments on AI ethics, particularly on the obligations of AI developers towards AI and society. I’d love to talk – AMA!” r/science – Science AMA Series, Reddit, Jan 13, 2017.

[60] J. Johnston, “Traumatic Responsibility: Victor Frankenstein as Creator and Casualty,” in Frankenstein, M. W. Shelley, Ed. MIT Press, 2017, pp. 201–208.

[61] M. Burdett, “Danny Boyle’s Frankenstein: An Experiment in Self-Imaging,” Transpositions: Journal of Theology, Imagination and the Arts, Winter 2016.

[62] J. J. Bryson, “Patiency is not a virtue: the design of intelligent systems and systems of ethics,” Ethics Inf. Technol., vol. 20, no. 1, pp. 15–26, Mar. 2018.

[63] T. Bynum, “Computer and Information Ethics,” in The Stanford Encyclopedia of Philosophy, Summer 2018., E. N. Zalta, Ed. Metaphysics Research Lab, Stanford University, 2018.

[64] N. Wiener, The Human Use of Human Beings: Cybernetics and Society. New York, N.Y: Houghton Mifflin Harcourt, 1950.

[65] J. Weizenbaum, “ELIZA—a computer program for the study of natural language communication between man and machine,” Commun. ACM, vol. 9, no. 1, pp. 36–45, 1966.

[66] J. Weizenbaum, Computer power and human reason: from judgment to calculation. San Francisco: Freeman, 1976.

[67] D. ben-Aaron, “Weizenbaum examines computers and society,” The Tech, Apr 9, 1985.

[68] K. Crawford, “Just An Engineer: The Politics of AI,” The Royal Society, YouTube, Jul 23, 2018.

[69] S. Fussell, “The LAPD Uses Palantir Tech to Predict and Surveil ‘Probable Offenders,’” Gizmodo, May 8, 2018.

[70] R. Hill, “Rights group launches legal challenge over London cops’ use of facial recognition tech,” The Register, Jul 26, 2018.

[71] T. Galey and K. Van Cleave, “Feds use facial recognition to arrest man trying to enter U.S. illegally,” CBS News, Aug 23, 2018.

[72] S. Meredith, “Facebook-Cambridge Analytica: A timeline of the data hijacking scandal,” CNBC.com, Apr 10, 2018.

[73] “AI for Good Global Summit 2018” conference, United Nations International Telecommunications Union (ITU), May 15-17, 2018.

[74] “AI for Good Foundation” [Accessed: 07-Oct-2018].

[75] S. Captain, “This news site claims its AI writes ‘unbiased’ articles,” Fast Company, Apr 4, 2018.

[76] B. Dickson, “Why it’s so hard to create unbiased artificial intelligence,” TechCrunch, Nov 7, 2016.

[77] J. Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, Oct 10, 2018.

[78] C. Herweijer, “8 ways AI can help save the planet,” World Economic Forum, Jan 24, 2018.

[79] S. Muraleedharan, “Role of Artificial Intelligence in Environmental Sustainability,” EcoMENA, Mar 6, 2018.

[81] N. Griffith, “PSC 3610 Game Theory and Public Choice,” Undergraduate Catalog 2018-2019, Belmont University, Aug 2018.

[82] C. Sagan, “A New Way to Think About Rules to Live By,” Parade, Nov 28, 1993.

[83] J. F. Nash, “Equilibrium Points in n-Person Games,” Proceedings of the National Academy of Sciences of the United States of America, vol. 36, no. 1, pp. 48–49, 1950.

[84] R. Porter, E. Nudelman, and Y. Shoham, “Simple search methods for finding a Nash equilibrium,” Games and Economic Behavior, vol. 63, no. 2, pp. 642–662, Jul 2008.

[85] J. Y. Halpern, R. Pass, and D. Reichman, “On the Non-Existence of Nash Equilibrium in Games with Resource-Bounded Players,” arXiv:1507.01501 [cs], Jul 2015.

[86] “Braess’s paradox,” Wikipedia, Oct 8, 2018.

[87] W. Chen, “Bad Traffic? Blame Braess’ Paradox,” Forbes, Oct 20, 2016.

[88] A. Rubinstein, “Game theory: How game theory will solve the problems of the Euro Bloc and stop Iranian nukes,” FAZ.net, Mar 27, 2013.

[89] L. A. Gatys, A. S. Ecker, and M. Bethge, “A Neural Algorithm of Artistic Style,” arXiv:1508.06576 [cs, q-bio], Aug 2015.

[90] S. Desai, “Neural Artistic Style Transfer: A Comprehensive Look,” Medium, Sep 14, 2017.

[91] T. Kim, M. Cha, H. Kim, J. K. Lee, and J. Kim, “Learning to Discover Cross-Domain Relations with Generative Adversarial Networks,” arXiv:1703.05192 [cs], Mar 2017.

[92] N. Heess et al., “Emergence of Locomotion Behaviours in Rich Environments,” arXiv:1707.02286 [cs], Jul 2017.

[93] C. Chan, S. Ginosar, T. Zhou, and A. A. Efros, “Everybody Dance Now,” arXiv:1808.07371 [cs], Aug 22, 2018.

[94] M. Judge, Idiocracy. 20th Century Fox, 2006.

[95] G. Speth, “Gus Speth Calls for a ‘New’ Environmentalism,” Living on Earth podcast, Feb 13, 2015.

[96] E. C. Ellis, “Science Alone Won’t Save the Earth. People Have to Do That.,” The New York Times, Aug 11, 2018.

[97] R. Sargeant, “AI Ethics: send money, guns & lawyers,” Afterthought, Jun 20, 2018.

[98] Z. Tufekci, “Too many worry what AI…,” Twitter, Sep 4, 2017.
