Theopolis Monk, Part II: Their Thoughts Are Not Our Thoughts

The deployment of artificial intelligence (AI) systems in the public sector may be a tantalizing topic for science fiction, but current trends in machine learning (ML) and AI research show that we are a long way away from the Buck Rogers scenario described in Part I of this series. And even if it were achievable, it’s not clear that the AIs would “think” in a way comprehensible to humans.

The present rise of large-scale AI application deployment in society has more to do with statistical modeling applied to vast quantities of data than with emulation of human consciousness or thought processes. Notable pioneers of AI research such as Geoffrey Hinton and Judea Pearl have lamented that the success of some ML and neural network models as tools for tasks such as image recognition has had a disastrous [16] effect on the progress of AI research: that success has diverted effort away from developing artificial general intelligence (AGI) and toward mere “curve fitting” [17] for the purposes of processing data.
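To make the “curve fitting” characterization concrete, here is a minimal sketch in Python; the data and the choice of a cubic polynomial are invented for illustration. Much of modern ML is, mathematically, an elaborate version of this kind of fit, only with millions of learned parameters instead of four.

```python
import numpy as np

# Toy data: noisy samples of an unknown underlying process (invented for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

# "Learning" here amounts to choosing the cubic polynomial whose
# coefficients minimize the squared error on the observed data.
coeffs = np.polyfit(x, y, deg=3)
model = np.poly1d(coeffs)

# The fitted model can now "predict" values it has never seen,
# without any notion of *why* the data look the way they do.
print(model(0.25))
```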

In industry, science, and government, ML has been transforming practice: tracking and predicting user choices [18], processing imagery from telescopes [19] and medical devices [20], controlling experiments [21], detecting gravitational waves [22], fighting sex trafficking [23], and…honestly, this list could go on for pages. Nearly every aspect of society is becoming “AI-ified.” As AI expert Andrew Ng points out, “AI is the new electricity” [24], in that it is having a revolutionary impact on society comparable to the introduction of electricity.

Few would claim that these ML applications are “truly intelligent.” They are perhaps weakly intelligent, in that the systems involved can only “learn” [25] specific tasks. (The appropriateness of the “I” in “AI” has been debated in many ways since the 1950s; that debate is beyond the scope of this article, but see the excellent review by UC Berkeley’s Michael Jordan [26].) Nevertheless, these systems are capable of making powerful predictions and decisions in domains such as medical diagnosis [27] and video games [28], sometimes far exceeding the performance of the top humans and competing computer programs in the world [29].
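Footnote [25] defines “learning” as iteratively minimizing an error function (or maximizing a reward function). A minimal sketch of that loop, using a made-up one-parameter task, might look like the following; nothing here is taken from any particular system, it simply illustrates the mechanism.

```python
import numpy as np

# Invented one-parameter "task": learn the weight w that maps inputs to targets.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x          # the (hidden) true relationship is y = 2x

w = 0.0              # initial guess
lr = 0.01            # learning rate

for step in range(1000):
    error = w * x - y                 # prediction error on each example
    grad = 2.0 * np.mean(error * x)   # gradient of the mean squared error w.r.t. w
    w -= lr * grad                    # nudge w downhill on the error surface

print(w)  # converges toward 2.0 -- the task is "learned," and nothing more
```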

Even given their power, the basis upon which ML systems achieve their results — e.g. why a neural network made a particular decision — is often shrouded in the obscurity of million-dimensional parameter spaces and “inhumanly” large matrix calculations. This has prompted the European Union, in its recent passage of the General Data Protection Regulation (GDPR, the reason for all those “New Privacy Policy” emails that flooded your inbox in early summer 2018), to include provisions widely read as requiring that automated decisions about individuals be “explainable” [30].
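The phrase “million-dimensional” is not hyperbole. As a rough sketch, consider a small, hypothetical fully-connected network; the layer sizes below are invented for illustration, yet the parameter count already runs well into the hundreds of thousands.

```python
# Parameter count of a small, hypothetical fully-connected network:
# 784 inputs (e.g. a 28x28 image) -> 512 -> 512 -> 10 outputs.
layer_sizes = [784, 512, 512, 10]

total = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    total += n_in * n_out + n_out   # weights plus biases for one layer

print(total)  # 669,706 parameters -- and this is a tiny network by modern standards
```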

The question of how AI systems such as neural networks best represent the essence of the data they operate upon is the topic of one of the most prestigious machine learning conferences, the International Conference on Learning Representations (ICLR), which describes itself in the following terms:

“The rapidly developing field of deep learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data” [31].

In natural language processing (NLP), the representations of words — so-called “word embeddings” — may give rise to groupings of words according to their shared conceptual content [32], but other forms of data, such as audio [33], typically yield internal representations whose “bases” do not obviously correspond to any human-recognizable features. Even for image processing, where understanding of feature representation has taken significant strides in recent years [34], the topic still requires much more scholarly attention.
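As a toy illustration of what “shared conceptual content” means for word embeddings [32]: words are mapped to vectors, and words used in similar contexts end up with similar vectors, as measured by, e.g., cosine similarity. The three-dimensional vectors below are invented for illustration; real embeddings are learned from large corpora and typically have hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: values near 1.0 mean 'pointing the same way.'"""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Invented 3-d "embeddings" for illustration only; real word vectors
# (e.g. word2vec [32]) are learned, not hand-written.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related concepts
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated concepts
```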

Even systems which are designed to closely model (and exploit) human behavior, such as advertising systems [35] or the victorious poker-playing AI bot “Libratus” [36], rely on internal data representations which are not necessarily coincident with those of humans. (Aside: this has echoes of Alvin Plantinga’s evolutionary argument against naturalism, namely that selecting for advantageous behaviors does not select for true beliefs [37].)

A possible hope for human-like, explainable representations and decisions may lie in approaches to so-called AGI which rely on simulating human thought processes. Those trying to create “truly intelligent” AGI models, ones which emulate a greater range of human cognitive activity, see one key criterion as consciousness, which requires such things as awareness [38]. Other criteria include contextual adaptation [39], constructing explanatory models [40], goal-setting [41], and for some, even understanding morality and ethics [42]. It is an assumption among many metaphysical naturalists that the brain is “computable” [43] (though there is prominent dissent [44]), and thus, so the story goes, once humans’ capacity for simulating artificial life progresses beyond simulating nematode worms [45], it is only a matter of time before all human cognitive functions can be emulated. This view has prominent detractors, being at odds with many religious and secular scholars who hold a view of mind-body duality that is incompatible with metaphysical naturalism. At present, it is not obvious to this author whether the simulation of human thought processes is the same thing as (i.e. isomorphic to) the creation of humans “in silicon.”

It is worth noting that representations are memory-limited: a system with m bits of memory can distinguish at most 2^m internal states, so AIs with access to more memory can support richer representations than those with less. (Note: while it’s true that any Turing-complete [46] system can perform any computation, Turing-completeness assumes infinite memory, which real computing systems do not possess.) A system with more storage capacity than the human brain could therefore be making use of representations which are beyond the grasp of humans. We see this at the end of the movie Her, when the machine intelligence declines to try to explain to the human protagonist what interactions between AIs are like [47].

The implications of this (i.e. that representational power scales with available memory and could exceed that of humans) raise questions such as:

  • What would it mean to be governed (or care-taken) by AIs that can think “high above” our thoughts, by means of their heightened capacity for representation?
  • How could their decisions be “explainable”?
  • What if this situation nevertheless resulted in a compellingly powerful public good?
  • What sorts of unforeseen “failure modes” might exist?

Even without “general intelligence,” such questions are immediately relevant in the present. The emerging field of “SysML” (systems and machine learning) is dedicated to exploring the interactions, possibilities, and failures involved in the large-scale deployment of machine learning applications [48]. These issues are currently being investigated by many top researchers at institutes and companies around the world. Given that we haven’t yet managed even to produce self-driving cars capable of earning public trust, further discussion of AI governance may be premature and vulnerable to rampant speculation unhinged from any algorithmic basis. Yet the potential for great good or great harm merits careful exploration of these issues. One key to issues of explainability and trust is the current topic of “transparency” in the design of AI agents [49], a topic we will revisit in a later part of this series.

Before we do that, we’ll need to clear up some confusion about the idea of trying to use machines to absolve humans of our need (and/or responsibility) to work together to address problems in society and the environment. This discussion appears in Part III, “The Hypothesis is Probably Wrong.”

Reality Changing Observations:

1. What would it mean to be governed (or care-taken) by an entity that can think “high above” your thoughts, whose access to sensor data around the world is unlimited, and who works unceasingly?

2. How does the previous question apply to a) a totalitarian government, b) a multinational corporation, or c) God?

3. What would it take for you to trust such an entity?


Acknowledgement:

Sponsored by a grant given by Bridging the Two Cultures of Science and the Humanities II, a project run by Scholarship and Christianity in Oxford (SCIO), the UK subsidiary of the Council for Christian Colleges and Universities, with funding by Templeton Religion Trust and The Blankemeyer Foundation.

References (numbers continued from Part I):

[16] Mahmoud Tarrasse, “What Is Wrong with Convolutional Neural Networks?” Towards Data Science, January 17, 2018.

[17] Kevin Hartnett, “To Build Truly Intelligent Machines, Teach Them Cause and Effect,” Quanta Magazine, May 15, 2018.

[18] Bruce Schneier et al., “Why ‘Anonymous’ Data Sometimes Isn’t,” Wired, December 2007; See also Sam Meredith, “Facebook-Cambridge Analytica: A Timeline of the Data Hijacking Scandal,” CNBC, April 10, 2018.

[19] Sander Dieleman, Kyle W Willett, and Joni Dambre, “Rotation-Invariant Convolutional Neural Networks for Galaxy Morphology Prediction,” Mon. Not. R. Astron. Soc. 450, no. 2:1441-1459, June 2015. arXiv:1503.07077 [astro-ph].

[20] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Springer International Publishing, pp.234–241, 2015. arXiv:1505.04597 [cs]; See also “2018 Data Science Bowl – Find the Nuclei in Divergent Images to Advance Medical Discovery,” Kaggle.com, April 2018.

[21] P B Wigley et al., “Fast Machine-Learning Online Optimization of Ultra-Cold-Atom Experiments,” Sci. Rep. 6:25890, May 2016.

[22] Daniel George and E A Huerta, “Deep Learning for Real-Time Gravitational Wave Detection and Parameter Estimation: Results with Advanced LIGO Data,” Phys. Lett. B 778:64-70, March 2018.

[23] Jamie McGee, “How a Franklin Software Company Helped Rescue 6,000 Sex Trafficking Victims,” The Tennessean, July 6, 2017.

[24] Andrew Ng, “Why AI Is the New Electricity,” Stanford Graduate School of Business, March 11, 2017.

[25] ‘Learn’ here means iteratively minimizing an error function or maximizing a reward function.

[26] Michael Jordan, “Artificial Intelligence — The Revolution Hasn’t Happened Yet,” Medium, April 18, 2018.

[27] Pranav Rajpurkar et al., “CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning,” arXiv:1711.05225, November 2017.

[28] Aman Agarwal, “Explained Simply: How DeepMind Taught AI to Play Video Games,” freeCodeCamp, Aug 27, 2017.

[29] “AlphaGo,” DeepMind; See also James Vincent, “DeepMind’s AI Became a Superhuman Chess Player in a Few Hours, Just for Fun,” The Verge, December 6, 2017.

[30] Bryan Casey, Ashkon Farhangi, and Roland Vogl, “Rethinking Explainable Machines: The GDPR’s ‘Right to Explanation’ Debate and the Rise of Algorithmic Audits in Enterprise,” Berkeley Technology Law Journal. Forthcoming, submitted March 22, 2018. Available at SSRN.

[31] “ICLR 2018 Call for Papers.” (accessed May 26, 2018)

[32] Tomas Mikolov et al., “Efficient Estimation of Word Representations in Vector Space,” arXiv:1301.3781 [Cs], January 16, 2013.

[33] Paris Smaragdis, “NMF? Neural Nets? It’s All the Same…,” Speech and Audio in the Northeast (SANE) 2015, October 22, 2015. (at 32:12)

[34] Chris Olah, Alexander Mordvintsev, and Ludwig Schubert, “Feature Visualization,” Distill.pub 2, no. 11, November 7, 2017.

[35] C. Perlich et al., “Machine Learning for Targeted Display Advertising: Transfer Learning in Action,” Machine Learning 95, no. 1: 103–27, April 1, 2014.

[36] “Carnegie Mellon Reveals Inner Workings of Victorious Poker AI,” Carnegie Mellon School of Computer Science, accessed June 5, 2018. Also see the excellent review by Kamil Czarnogórski, “Counterfactual Regret Minimization – the core of Poker AI beating professional players,” int8.io, September 23, 2018.

[37] Alvin Plantinga, Warrant and Proper Function, Oxford University Press, New York, 1993.

[38] Roger Penrose, “Why Algorithmic Systems Possess No Understanding,” AI for Good Global Summit, May 15, 2018. Video of similar talk at Stanford, April 13, 2018.

[39] John Launchbury and DARPAtv, “A DARPA Perspective on Artificial Intelligence,” YouTube, February 15, 2017. (accessed June 4, 2018)

[40] ibid.

[41] Jesus Rodriguez, “The Missing Argument: Motivation and Artificial Intelligence,” Medium (blog), August 14, 2017.

[42] Dr Vyacheslav Polonski, “Can We Teach Morality to Machines? Three Perspectives on Ethics for Artificial Intelligence,” Medium (blog), December 19, 2017. See also Virginia Dignum, “Ethics in Artificial Intelligence: Introduction to the Special Issue,” Ethics and Information Technology 20, no.1:1-3, March 2018.

[43] Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed, Viking, New York, 2012.

[44] Antonio Regalado, “The Brain Is Not Computable,” MIT Technology Review, accessed June 5, 2018.

[45] Balázs Szigeti et al., “OpenWorm: An Open-Science Approach to Modeling Caenorhabditis Elegans,” Frontiers in Computational Neuroscience 8, November 3, 2014.

[46] “Turing Completeness,” Wikipedia, May 28, 2018.

[47] Spike Jonze, Her, Warner Bros. Entertainment, 2014.

[48] “SysML Conference,” accessed June 5, 2018; See also “Systems ML Workshop Panel: Inside 245-5D,” December 2017.

[49] Robert H. Wortham, Andreas Theodorou, and Joanna J. Bryson, “What Does the Robot Think? Transparency as a Fundamental Design Requirement for Intelligent Systems,” in Proceedings of the IJCAI Workshop on Ethics for Artificial Intelligence: International Joint Conference on Artificial Intelligence, 2016.
