Theopolis Monk, Part IV: Servant and Sword

(Image: Der Wächter des Paradieses, Franz Stuck, 1889, with a bit of photoshop by the author.)

In exploring the potential use of AI for public good, we have veered from the purely speculative narrative of an AI-governed utopia (in Part I), to concerns about how such systems might be making their decisions (in Part II), to a resignation that humans probably cannot be removed from the process of government and that AI is instead a powerful tool to be wielded by humans (in Part III). And even though we’ve already covered many possible uses of AI, and the daily news continually updates us with new ones, in this section we want to give an overview of various “public” applications of AI with perhaps a different structure than is often provided: the Good, the Bad, and the Holy.

A. What Isn’t AI?

Before we go into that, it is finally worth talking about what we mean by the term “artificial intelligence.” Why wait until the fourth installment to define terms? Because this particular term is so difficult to pin down that it’s often not worth trying. As I argue in a separate essay [1], trying to answer the question “What is AI?” leads one into multiple difficulties, which I will briefly summarize:

  1. Too Many Definitions. There are a variety of definitions which different people employ, from the minimal “doing the right thing at the right time,” to nothing short of artificial general intelligence (AGI) where all human cognitive tasks are emulated to arbitrary satisfaction. One particularly insightful definition is on the level of folklore: “AI is machines doing what we used to think only humans could do.”
  2. The New Normal. The collection of applications regarded as AI is ever-changing, making the term a moving target; trying to define it amounts to chasing after the wind. On the one hand, applications which used to be regarded as AI when they were new become regarded merely as automated tasks as they are “reified” into the background of “The New Normal” operations of our lives, and thus the list of AI applications shrinks over time. On the other hand, methods and techniques which have been around for centuries — such as curve-fitting — are now regarded as AI; as “AI hype” grows, it seems that “everything is AI” and the list of AI tasks and methods is thus increasing.
  3. Anthropomorphism. A final, insurmountable hurdle is the challenge of anthropomorphism, the unavoidable human tendency to ascribe human faculties and/or intentions to entities in the world (whether animals, machines, or forces of nature). This amounts to a cognitive bias leading one to overestimate AIs’ human-like capabilities, an error known as “overidentification” [2].

A host of the devices we use every day contain “artificial intelligence” endowed to them by human engineers to improve upon previous devices which required greater user setup, tuning and/or intervention. For example, computer peripherals and expansion cards used to require manual configuration such as the setting of jumpers or DIP switches on circuit boards, but this was obviated by the rise of “Plug and Play” standards for peripherals and buses [3] and network hardware [4], which automate the assignment of resources and protocols between devices. Another example: the cars we drive are largely drive-by-wire devices whose computer systems adaptively adjust the car’s performance, with expertise programmed in. Such programmed-in expertise “used to count” as AI in the minds of some, though this judgment tended to vary from application to application. The 2018 “AI in Education” conference in London saw posters and workshops showcasing computer systems that lacked evidence of learning or adaptivity and were merely tutor-style quiz programs [5], and yet these were regarded as “AI” in the eyes of the conference’s peer reviewers, presumably because the tasks the programs performed were similar to (some of) the work of human tutors.

The point of this discussion is that when we intend to speak of “uses of AI,” it is worthwhile to consider that we are already using many “AI” systems that we simply don’t regard as such, because they are “solved” and their deeds “reified” into what we consider to be “normal” for our current technological experience. Furthermore, if by “uses of AI” we simply mean regression or classification inferences based on curve-fitting to large datasets, we could just as easily (and with greater specificity) say “uses of statistics” instead. The intent here is not to limit the use of the term “AI” as only referring to fictitious sentient machines, but to be cognizant of the multifaceted, subjective and mercurial applicability that the term carries.
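
To make the “uses of statistics” point concrete, here is a minimal sketch in Python (the data, threshold, and task are invented purely for illustration) showing that a system which “predicts” and “classifies” can be nothing more than a least-squares curve fit plus a cutoff:

```python
import numpy as np

# Hypothetical observations: hours of daylight vs. measured temperature (C).
hours = np.array([8.0, 9.5, 11.0, 12.5, 14.0, 15.5])
temps = np.array([4.1, 7.8, 12.2, 16.9, 20.3, 23.5])

# "Training the model" is just ordinary least-squares line fitting.
slope, intercept = np.polyfit(hours, temps, deg=1)

def predict_temp(h):
    """Regression 'inference': evaluate the fitted line."""
    return slope * h + intercept

def is_anomalous(h, observed, tolerance=3.0):
    """Classification 'inference': flag readings far from the fitted curve."""
    return abs(observed - predict_temp(h)) > tolerance

print(predict_temp(13.0))        # a regression prediction
print(is_anomalous(13.0, 25.0))  # a binary classification
```

Whether one calls this “AI” or simply “statistics” is precisely the terminological question at issue.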

“What isn’t AI?” is not necessarily any clearer a question than “What is AI?” I used the phrase simply to note that in the current hour, with the bounds of “AI” extending outward via hype, and prior examples of AI fading into the background via reification, we do well to be aware of our terminological surroundings.

B. The Good

As noted earlier, the list of wonderful things AI systems are being used for in public service is growing so large and so quickly (almost as quickly as the number of societies, conferences, institutes and companies dedicated to “AI for Good”) that citing any examples seems pedantic on the one hand and myopic on the other. Nevertheless, here are just a few that may pique interest:

  1. Saving the Coral [6]. Dr. Emma Kennedy led a team conducting imaging surveys of Pacific reefs and used image classification (AI) models to “vastly improve the efficiency of” analyzing the images to discern which reefs were healthy and which were not. Data from this work will be used to target specific reef areas for protection and enhanced conservation efforts. The use of image classifiers to speed the analysis of scientific data is advancing many other fields as well, notably astronomy [7]. (A minimal sketch of this kind of batch image classification appears after this list.)
  2. Stopping Sex Traffickers [8]. Nashville machine learning (ML) powerhouse Digital Reasoning developed their Spotlight software in collaboration with the Thorn non-profit agency funded by actor Ashton Kutcher, to track and identify patterns consistent with human slavery so that law enforcement could intervene. According to Fast Company in March 2018, “The system has helped find a reported 6,000 trafficking victims, including 2,000 children, in a 12-month period, and will soon be available in Europe and Canada” [9].
  3. Medical Applications (?). In recent years, numerous claims have surfaced of AI systems outperforming doctors at tasks such as diagnosing skin cancer [10], pneumonia [11], and fungal infections [12], as well as predicting the risk of heart attacks [13] — sufficient to spawn an official “AI vs. Doctors” scoreboard at the IEEE Spectrum website [14]. But some of these claims have come into question. The pneumonia study that used the “CheXNet” software was trained with an inconsistent dataset and made claims exceeding what the results actually showed [15]. In another famous example, IBM’s Watson AI system was promoted by its creators as a way to deliver personalized cancer treatment protocols [16], but when it was revealed that the system performed much worse than advertised [17], IBM went quiet and its stock price began to sink. There are great opportunities for beneficial medical applications of AI; one can hope that these setbacks encourage responsible claims of what such systems can do. Meanwhile, some of the greatest inroads for successful medical AI applications involve not diagnosis or image analysis, but rather natural language processing (NLP): processing records, generating insurance codes, and scanning notes from doctors and nurses to look for red flags [18].
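
As a concrete illustration of the kind of batch image classification mentioned in item 1, here is a minimal sketch (not the coral team’s actual pipeline; the model file, folder name, and class labels are hypothetical) in which a previously fine-tuned classifier is run over a directory of survey photos:

```python
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

# Hypothetical fine-tuned model: a ResNet-18 with a 2-class output head,
# with weights assumed to have been saved earlier to "reef_health.pt".
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("reef_health.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

labels = ["healthy", "bleached"]  # hypothetical class names

# Label every survey photo in a (hypothetical) directory of images.
for path in sorted(Path("survey_images").glob("*.jpg")):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        pred = model(img).argmax(dim=1).item()
    print(path.name, labels[pred])
```

The point is not the particular architecture but the workflow: a human-trained model is applied mechanically to far more images than a person could practically inspect.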

C. The Bad

Hollywood has given us plenty of ‘evil’ AI characters to ponder — there are lists of them [19]. These are sentient artificial general intelligences (AGIs) which exist only in the realm of fiction. The problem is that plenty of other real and immediate threat vectors exist, and over-attention to AGI serves as a distraction from them. As Andrew Ng publicly complained, “AI+ethics is important, but has been partly hijacked by the AGI (artificial general intelligence) hype. Let’s cut out the AGI nonsense and spend more time on the urgent problems: Job loss/stagnant wages, undermining democracy, discrimination/bias, wealth inequality” [20]. This is echoed in the call by Zeynep Tufekci: “let’s have realistic nightmares” [21] about technological dangers. One such realistic nightmare is the use of AI by humans who may have selfish, nefarious or repressive goals — what may be regarded as weaponized AI. Here we should revisit the words of Tufekci that appeared in Part II:

“Let me say: too many worry about what AI—as if some independent entity—will do to us. Too few people worry what power will do with AI” [22].

Here are a few people who have worried about this:

  1. Classification as Power. At SXSW 2017, Kate Crawford gave an excellent speech on the history of the oppressive use of classification technology by governments [23], such as the Nazis’ use of Hollerith machines to label and track “undesirable” or “suspect” groups. In the past such programs were limited by their inaccuracy and inefficiency, but modern ML methods offer a vast performance “improvement” that could dramatically increase the power and pervasiveness of such applications. In the Royal Society address mentioned earlier [24], she quoted Jamaican-born British intellectual Stuart Hall as once saying “systems of classification are themselves objects of power” [25]. She then connected these earlier applications with current efforts in China to identify the “criminality” of people based on their photographs [26], a direct modern update of the (discredited) “sciences” of physiognomy and phrenology. She concluded that using AI in this way “seems like repeating the errors of history…and then putting those tools into the hands of the powerful. We have an ethical obligation to learn the lessons of the past” [27].
  2. Multiple Malicious Misuses. In February 2018, a group of 26 authors from 14 institutions led by Miles Brundage released a 100-page advisory entitled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” [28]. The report recommended practices for policymakers, researchers and engineers, including actively planning for misuse of AI applications, and structured these recommendations around three areas: digital security, physical security, and political security. The first two are frequent topics among IT professionals — albeit without the AI context — but the third is perhaps new to many readers. Brundage et al. describe political security threats as follows:

“The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.”

As we have already cited from various news outlets, such misuses are not mere potentialities.

  3. Slaughterbots. In 2017, the Future of Life Institute produced a video by Stuart Russell (of “Russell & Norvig,” the longtime-standard textbook for AI [29]) called “Slaughterbots” [30] to draw attention to the need to oppose the development of autonomous weapons systems, which they term “killer robots”: “weapons systems that, once activated, would select and fire on targets without meaningful human control” [31]. In this video, tiny quadcopter drones endowed with shaped explosive charges are able to target individuals for assassination using facial recognition. The use of AI allows the drones to act autonomously, with two main implications: a.) the weapons system can scale to arbitrarily large numbers of drones (the video shows thousands being released over a city), and b.) the lack of communication with a central control system provides a measure of anonymity to the party who released the drones.

D. The Holy

In addition to AI systems which might serve the public at large, one might consider applications that benefit the church. Here I am concerned with applications of ML systems, not AGIs. Questions regarding the personhood of AGIs and the roles and activities available to them — would they have souls, could they pray, could they be “saved,” could they be priests, could they be wiser than us, and so on — are beyond the scope of this article, but can be found in many other sources [32-34]. These answers would be determined by the ontology ascribed to such entities, a discussion which is still incomplete [35]. There are still other interesting topics regarding present-day ML systems worth investigating, which we describe briefly here.

  1. Dr. Theophilus, an AI “Monk.” For much of church history, the scholarly work of investigating and analyzing data of historical, demographic or theological significance was done by monks. In our time, one could imagine AI systems performing monk-like duties: investigating textual correlations in Scripture, predicting trends in missions or church demographics, aiding in statistical analysis of medical miracle reports, aiding in (or autonomously performing) translation of the Bible or other forms of Christian literature, or analyzing satellite images to make archaeological discoveries [36].
  2. Chatbots for the Broken. London-based evangelism organization CVGlobal.co uses ML for content recommendation (“if you liked this article, you might like…”) on their “Yes He Is” website [37], and has also developed a “Who is Jesus” chatbot to respond to common questions about the person of Christ, the message of the gospels, and some typical questions that arise in apologetics contexts. This is essentially the same kind of program as those used by major corporations such as banks [38] to answer common questions about their organizations. One can argue over whether this removes the ‘relational’ element of witnessing in a “profane” way; the structure of such a tool amounts to turning a “FAQ” page (e.g. “Got Questions about Jesus?” [39]) into an interactive conversational model (a minimal sketch of such a retrieval-style chatbot appears after this list). Relatedly, researchers at Vanderbilt University have gained attention for their use of ML to predict the risk of suicide [40], apps exist for tracking mental and spiritual health [41], and thus a call has gone out for investigating predictive models in mental and spiritual counseling [42].
  3. Foundations of AI Ethics. This is more of an opportunity for engagement than a use of AI. Discussions on topics affecting society, such as those described in this document, should not be limited to secular, non-theistic sources. There are significant points of commonality between Christian worldviews and others on topics involving affirming human dignity and agency, resisting the exploitation and oppression of other human beings, and showing concern for the poor and others affected economically by the automation afforded by AI [43,44]. The world at large is interested in having these discussions, and persons informed by wisdom and spiritual principles are integral members at the table for providing ethical input. We will revisit the topic of foundations for ethics in Part V.
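
To illustrate the “interactive FAQ” structure described in item 2, here is a minimal retrieval-style sketch (this is not CVGlobal.co’s actual system; the questions, answers, and matching method are placeholders) that simply returns the answer whose stored question best matches the user’s input:

```python
from difflib import SequenceMatcher

# A tiny placeholder FAQ; a real deployment would hold many entries and use a
# stronger text-similarity model than this simple string matcher.
faq = {
    "Who is Jesus?": "Christians believe Jesus is the Son of God...",
    "What is the gospel?": "The gospel is the 'good news' that...",
    "Why is there suffering?": "A common starting point for this question is...",
}

def answer(user_question: str) -> str:
    """Return the answer whose FAQ question is most similar to the input."""
    best_question = max(
        faq,
        key=lambda q: SequenceMatcher(None, user_question.lower(), q.lower()).ratio(),
    )
    return faq[best_question]

print(answer("who exactly is jesus"))  # matches the "Who is Jesus?" entry
```

More sophisticated chatbots add intent classification and dialogue state, but the underlying structure of mapping a free-form question onto a curated set of answers remains the same.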

E. A Tool, But Not “Just a Tool”

In casting AI as a tool to be used by humans “for good or evil,” we shouldn’t make the mistake of thinking all tools are “neutral,” i.e., that they do not have intentions implied by their very design. As an example of this, the Future of Humanity Institute’s information page on “AI Safety Myths” points out, “A heat-seeking missile has a goal” [45]. Referring to our earlier list of uses: while it is true that stopping sex trafficking is “good” and repressing political dissidents is “bad,” both are examples of surveillance technology, which by its nature imposes a sacrifice of personal privacy. (The tradeoff between security and privacy is an age-old discussion; for now we simply note that AI may favor applications on the security side.)

Sherry Turkle of MIT chronicled the introduction of computers into various fields in the early 1980s, and observed that those asserting that the computer was “just a tool” indicated a lack of reflection: “Calling the computer ‘just a tool,’ even as one asserted that tools shape thought, was a way of saying that a big deal was no big deal” [46]. Turkle cited the famous question of architect Louis Kahn, asking a brick what it wants — “‘What do you want, brick?’ And brick says to you, ‘I like an arch’” [47] — and she asked the new question, “What does a simulation want?” In the words of those she interviewed, simulations favor experimentation, their use can produce a disconnect from reality (“it can tempt its users into a lack of fealty to the real”), and as a consequence, users must cultivate a healthy doubt of their simulations.

Thus we do well to ask: What does an AI “want”? What forms of usage does it favor? What sorts of structures will it promote and/or rely on? (Keep in mind, we are referring here to modern ML algorithms, not fictional sentient AGIs.) We conclude this section by briefly answering each of these.

  1. Like any piece of software, AI wants to be used. This led to Facebook employing psychological engineering to generate “eyeball views” and addictive behavior [48], including experimenting on users without their consent and without ethical oversight [49]. The more use, the more data, which fits in with the next point:
  2. An AI Wants Data. Given their statistical nature, the rise of successful ML algorithms is closely linked with the rise in availability of large amounts of data (to train on) made possible by the internet [50], rather than with improvements in the underlying algorithms. This even motivates some ML experts to advocate improving a model’s performance by getting more data rather than by adjusting the algorithm [51]. It may be said that ML systems are data-hungry, and data-hungry algorithms make for data-hungry companies and governments. Thus we see the rise of tracking everything users do online for the purpose of mining it later, and of Google contracting with the UK’s National Health Service for access to patient data [52].
  3. An AI Wants “Compute.” A corollary of #2: in order to “burn through” gargantuan amounts of data, huge computational resources are required. This is the other reason for the rise of ML systems: significant advances in computing hardware, notably graphics processing units (GPUs). Thus, vast data centers and server farms have arisen, and the energy consumption of large-scale AI systems is an increasing environmental concern [53]. In response, Google has built dedicated processing units to reduce its energy footprint [54], but with the growth of AI compute usage significantly outpacing Moore’s Law [55], this energy concern isn’t going away. Some are proposing to distribute the computation to low-power onboard sensors [56], a shift which also seems likely to occur. Either way, “AI wants compute.”
  4. AI Tempts Toward “Magic Box” Usage. “Give the system a bunch of inputs, and a bunch of labeled outputs, and let the system figure out how to map one to the other.” So goes the hope of many a new ML application developer, and when this works, it can be fun and satisfying (e.g. [57]); a minimal sketch of this fit/predict workflow appears just after this list. This can be one of the strengths of ML systems, freeing the developer from having to understand and explicitly program how to map complicated inputs to outputs, and allowing the “programmer” to be creative, as with Rebecca Fiebrink’s excellent “Wekinator” tool for musicians [58]. But it can also encourage lazy usage, such as the “physiognomy” applications cited by Kate Crawford, and biased models which accidentally discriminate against certain groups (of which there are too many instances to cite). As with simulation, users should cultivate a healthy doubt of their correlations.
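
A minimal sketch of the “magic box” fit/predict workflow described in item 4, using scikit-learn with made-up toy data (the features and labels are purely illustrative):

```python
from sklearn.ensemble import RandomForestClassifier

# "A bunch of inputs" (toy feature vectors) and "a bunch of labeled outputs".
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = ["cat", "cat", "dog", "dog"]

# Let the system "figure out how to map one to the other"...
model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X, y)

# ...and then trust whatever correlations it found. The prediction is easy to
# obtain; *why* the model made it is opaque, which is where healthy doubt
# (and the biases discussed above) comes in.
print(model.predict([[0.85, 0.15]]))
```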

Finally, in terms of what other structures AI will promote and/or rely on, we should remember the general warnings on technological development by Christian philosopher Jacques Ellul. In The Technological Society, Ellul cautioned that “purposes drop out of sight and efficiency becomes the central concern” [59]. Furthermore, Ellul noted that successful technological development tends to become self-serving, as we have all inherited the nature of Cain, the first city-builder who was also the first murderer [60]. In the next and final part, we will relate some current conversations aimed at keeping AI development and government oriented toward serving people.

Reality Changing Observations:

1. One concern people have with the use of AI is the “encroachment” of automation: In your own work, where would you draw the line between a tool that would make your job easier, vs. a tool that would make you replaceable?

2. Do you think it is possible for an AI system to learn to exceed human performance at being “convincing” and manipulating human beings?

Acknowledgement

This work was sponsored by a grant given by Bridging the Two Cultures of Science and the Humanities II, a project run by Scholarship and Christianity in Oxford (SCIO), the UK subsidiary of the Council for Christian Colleges and Universities, with funding by Templeton Religion Trust and The Blankemeyer Foundation.

References

[1] Scott H. Hawley, “Challenges for an Ontology of Artificial Intelligence,” Accepted for Publication in Perspectives on Science and Christian Faith, Oct 13, 2018.

[2] Joanna J. Bryson and Philip P Kime, “Just an Artifact: Why Machines Are Perceived as Moral Agents,” vol. 22, 2011, 1641.

[3] Per Christensson, “Plug and Play Definition,” TechTerms, 2006.

[4] Katie Bird Head, “ISO/IEC Standard on UPnP Device Architecture Makes Networking Simple and Easy,” ISO, accessed Oct 11, 2018.

[5] “AIED2018 – International Conference on Artificial Intelligence in Education,” accessed Oct 12, 2018.

[6] Johnny Langenheim, “AI Identifies Heat-Resistant Coral Reefs in Indonesia,” The Guardian, Aug 13, 2018.

[7] Sander Dieleman, Kyle W Willett, and Joni Dambre, “Rotation-Invariant Convolutional Neural Networks for Galaxy Morphology Prediction,” Mon. Not. R. Astron. Soc. 450, no. 2 (June 2015): 1441–1459.

[8] Jamie McGee, “How a Franklin Software Company Helped Rescue 6,000 Sex Trafficking Victims,” The Tennessean, Jul 6, 2017.

[9] “Digital Reasoning: Most Innovative Company,” Fast Company, Mar 19, 2018.

[10] H. A. Haenssle et al., “Man against Machine: Diagnostic Performance of a Deep Learning Convolutional Neural Network for Dermoscopic Melanoma Recognition in Comparison to 58 Dermatologists,” Annals of Oncology 29, no. 8 (August 1, 2018): 1836–42.

[11] Pranav Rajpurkar et al., “CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning” arXiv: 1711.05225, Nov 14, 2017.

[12] Seung Seog Han et al., “Deep Neural Networks Show an Equivalent and Often Superior Performance to Dermatologists in Onychomycosis Diagnosis: Automatic Construction of Onychomycosis Datasets by Region-Based Convolutional Deep Neural Network,” ed. Manabu Sakakibara, PLOS ONE 13, no. 1 (January 19, 2018): e0191493.

[13] Stephen F. Weng et al., “Can Machine-Learning Improve Cardiovascular Risk Prediction Using Routine Clinical Data?,” ed. Bin Liu, PLOS ONE 12, no. 4 (April 4, 2017): e0174944.

[14] IEEE, “AI vs Doctors,” IEEE Spectrum: Technology, Engineering, and Science News, Sep 26, 2017.

[15] Luke Oakden-Rayner, “CheXNet: An in-Depth Review,” Luke Oakden-Rayner (PhD Candidate / Radiologist) Blog (blog), Jan 24, 2018.

[16] Felix Salmon, “IBM’s Watson Was Supposed to Change the Way We Treat Cancer. Here’s What Happened Instead,” Slate Magazine, Aug 18, 2018.

[17] Casey Ross, “IBM Pitched Watson as a Revolution in Cancer Care. It’s Nowhere Close,” STAT, Sep 5, 2017.

[18] Steve Griffiths, “Hype vs. Reality in Health Care AI: Real-World Approaches That Are Working Today,” MedCity News (blog), Sep 27, 2018.

[19] Michael Ahr, “The Most Evil Artificial Intelligences in Film,” Den of Geek, June 29, 2018.

[20] Andrew Ng, “AI+ethics Is Important, but Has Been Partly Hijacked by the AGI (Artificial General Intelligence) Hype…,” @andrewyng on Twitter (blog), June 11, 2018.

[21] Zeynep Tufekci, “My Current Lifegoal Is Spreading Realistic Nightmares…,” @zeynep on Twitter (blog), June 28, 2018.

[22] Zeynep Tufekci, “Let Me Say: Too Many Worry about What AI—as If Some Independent Entity—Will Do to Us…,” @zeynep on Twitter (blog), Sep 4, 2017.

[23] Kate Crawford, “Dark Days: AI and the Rise of Fascism,” SXSW 2017, YouTube, Jun 7, 2017.

[24] Kate Crawford, “Just An Engineer: The Politics of AI,” You and AI, The Royal Society, YouTube, Jul 25, 2018.

[25] Sut Jhally and Stuart Hall, Race: The Floating Signifier (Media Education Foundation, 1996).

[26] “Return of Physiognomy? Facial Recognition Study Says It Can Identify Criminals from Looks Alone,” RT International, accessed Oct 12, 2018.

[27] Crawford, You and AI – Just An Engineer.

[28] Miles Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” arXiv:1802.07228 [cs], February 20, 2018.

[29] Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Prentice Hall, 2010).

[30] Stuart Russell, Slaughterbots, Stop Autonomous Weapons (Future of Life Institute, 2017).

[31] “Frequently Asked Questions,” Ban Lethal Autonomous Weapons (blog), Nov 7, 2017.

[32] Jonathan Merritt, “Is AI a Threat to Christianity?,” The Atlantic, Feb 3, 2017.

[33] Paul Scherz, “Christianity Is Engaging Artificial Intelligence, but in the Right Way,” Crux (blog), Feb 28, 2017.

[34] “As Artificial Intelligence Advances, What Are Its Religious Implications?,” Religion & Politics (blog), Aug 29, 2017.

[35] Derek C Schuurman, “Artificial Intelligence: Discerning a Christian Response,” Perspect. Sci. Christ. Faith 70, no. 1 (2018): 72–73.

[36] Julian Smith, “How Artificial Intelligence Helped Find Lost Cities,” iQ by Intel, Mar 20, 2018.

[37] “YesHEis: Life on Mission,” accessed Oct 13, 2018.

[38] Robert Barba, “Bank of America Launches Erica Chatbot,” Bankrate, accessed Oct 13, 2018.

[39] “Questions about Jesus Christ,” GotQuestions.org, accessed Oct 13, 2018.

[40] Colin G. Walsh, Jessica D. Ribeiro, and Joseph C. Franklin, “Predicting Risk of Suicide Attempts Over Time Through Machine Learning,” Clinical Psychological Science 5, no. 3 (May 2017): 457–69.

[41] Casey Cep, “Big Data for the Spirit,” The New Yorker, Aug 5, 2014.

[42] J. Nathan Matias, “AI in Counseling & Spiritual Care,” AI and Christianity (blog), Nov 2, 2017.

[43] Andrew Spicer, “Universal Basic Income and the Biblical View of Work,” Institute For Faith, Work & Economics, Sep 20, 2016.

[44] J. Nathan Matias, “How Will AI Transform Work, Creativity, and Purpose?,” Medium (blog), Oct 27, 2017.

[45] “AI Safety Myths,” Future of Humanity Institute, accessed Oct 13, 2018.

[46] Sherry Turkle, ed., Simulation and Its Discontents, Simplicity (Cambridge, Mass: The MIT Press, 2009).

[47] Wendy Lesser, You Say to Brick: The Life of Louis Kahn, 2018.

[48] Hilary Andersson, Dave Lee, and Rory Cellan-Jones, “Social Media Is ‘Deliberately’ Addictive,” BBC News, July 4, 2018, sec. Technology.

[49] Katy Waldman, “Facebook’s Unethical Experiment,” Slate, June 28, 2014; Inder M. Verma, “Editorial Expression of Concern: Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks,” Proceedings of the National Academy of Sciences 111, no. 29 (July 22, 2014): 10779.

[50] Roger Parloff, “Why Deep Learning Is Suddenly Changing Your Life,” Fortune (blog), accessed Oct 14, 2018.

[51] Gordon Haff, “Data vs. Models at the Strata Conference,” CNET, Mar 2, 2012.

[52] Ben Quinn, “Google given Access to Healthcare Data of up to 1.6 Million Patients,” The Guardian, May 3, 2016.

[53] “‘Tsunami of Data’ Could Consume One Fifth of Global Electricity by 2025,” The Guardian, Dec 11, 2017, sec. Environment.

[54] Richard Evans and Jim Gao, “DeepMind AI Reduces Google Data Centre Cooling Bill by 40%,” DeepMind, Jul 20, 2016.

[55] OpenAI, “AI and Compute,” OpenAI Blog, May 16, 2018.

[56] Pete Warden, “Why the Future of Machine Learning Is Tiny,” Pete Warden’s Blog, Jun 11, 2018.

[57] Scott Hawley, “Learning Room Shapes,” May 4, 2017.

[58] Rebecca Fiebrink, Wekinator: Software for Real-Time, Interactive Machine Learning, 2009.

[59] Jacques Ellul, The Technological Society, trans. John Wilkinson, A Vintage Book (New York, NY: Alfred A. Knopf, Inc. and Random House, Inc., 1964).

[60] Jacques Ellul, The Meaning of the City, trans. Dennis Pardee, Jacques Ellul Legacy (Wipf & Stock Pub, 2011).
