Hinton & Me: Don’t Pause Giant AI Experiments, Ban Them.

Mr Nemo
May 15, 2023


By Robert Hanna

“The Leader of the Luddites” (1812) (Wikipedia, 2023b)

***

You can also download and read or share a .pdf of the complete text of this essay HERE.

***

Hinton & Me: Don’t Pause Giant AI Experiments, Ban Them

In an open letter published online in late March 2023, directed not only to the digital technology and AI community in particular but also to the world more generally, card-carrying members of what I call the military-industrial-digital complex[i] — including Elon Musk and Steve Wozniak — have urged “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” (Future of Life Institute, 2023). Here’s their letter in full, leaving out the Notes and References:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall. (Future of Life Institute, 2023 [boldfaced, unitalicized text represents italicized text in the original; all other boldfaced text is in the original])

I sharply disagree with what’s being asserted and argued in this letter. For people with a huge vested interest in designing, producing, and marketing digital technology and AI to call on AI labs and the fabulously wealthy global technocratic capitalist corporations that design, produce, and market digital technology (including the corporations they own or founded, for example, Musk/Tesla and Wozniak/Apple) to “pause” their research on Large Language Models (LLMs) and the chatbots that implement these models “beyond GPT-4” for “at least six months,”[ii] and then go forward again full-steam-ahead, having paid public lip-service to “digital ethics,” so that these corporations can mount an all-out existential attack on humankind, is just like it would have been for leading US generals during the latter phases of World War II to call on The Manhattan Project to “pause” its work on the atomic bomb for “at least six months,” and then go forward again full-steam-ahead, having paid public lip-service to “just war theory,” so that the USA could drop the A-bomb not just once but twice on hundreds of thousands of Japanese civilians, and thereby create a permanent existential threat to humankind.

On 1 May 2023 — significantly, May Day — Geoffrey Hinton, a groundbreaking researcher on neural networks and LLMs, quit his job at Google and then publicly made a very similar point, also explicitly using the analogy with The Manhattan Project:

As companies improve their A.I. systems, [Hinton] believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore. (New York Times, 2023)

What Hinton (apparently) and I (absolutely) strongly believe, then, is that we ought to ban all giant AI experiments and LLM/chatbot technology while they are still in their infancy, just as we ought to have banned all atomic bomb experiments and nuclear weapons technology while they were still in their infancy. My strong belief, in turn, is a direct expression and logical implication of the doctrine I call dignitarian neo-luddism with respect to digital technology, which I’ll now explain and justify.

For clarity’s sake, I’ll start with some definitions. According to The Oxford Encyclopedic English Dictionary, the term “Luddite” means this:

1. hist. a member of the bands of English craftsmen who, when their jobs were threatened by the progressive introduction of machinery into their trades in the early 19th c., attempted to reverse the trend towards mechanization by wrecking the offending machines…. 2. a person opposed to increased industrialization or new technology …. [perh. f. Ned Lud[d], an insane person said to have destroyed two stocking-frames c. 1799] (Hawkins and Allen, 1991: p. 856)

Generalizing from that, and also precisifying a little, I’ll say that classical or original Luddism says that

all mechanical technology is bad and wrong, because it harms and oppresses ordinary people (i.e., people other than technocrats), and therefore all mechanical technology should be rejected and destroyed.

By an important contrast, however, as I’m understanding it, modern neo-Luddism (see, for example, Glendinning, 1990) says that

not all mechanical technology is bad and wrong, but instead all and only the mechanical technology that harms and oppresses ordinary people (i.e., people other than technocrats) is bad and wrong, and therefore all and only this bad and wrong mechanical technology should be rejected but not — except in extreme cases of mechanical technology whose coercive use is actually violently harming and oppressing ordinary people, for example, weapons being used for mass destruction or mass murder — destroyed; rather, it should only be simply refused, non-violently dismantled, or radically transformed into its moral opposite.

Now, by digital technology I mean all mechanical technology that inherently involves computers, algorithms, digital data or information, artificial intelligence/AI, or robotics. Then, neo-Luddism with respect to digital technology says that

not all digital technology is bad and wrong, but instead all and only the digital technology that harms and oppresses ordinary people (i.e., people other than digital technocrats) is bad and wrong, and therefore all and only this bad and wrong digital technology should be rejected but not — except in extreme cases of digital technology whose coercive use is actually violently harming and oppressing ordinary people, for example, digitally-driven weapons or weapons-systems being used for mass destruction or mass murder — destroyed; rather, it should only be simply refused, non-violently dismantled, or radically transformed into its moral opposite.

Finally, dignitarian neo-Luddism with respect to digital technology says that

not all digital technology is bad and wrong,[iii] but instead all and only the digital technology that harms and oppresses ordinary people (i.e., people other than digital technocrats), by either failing to respect our human dignity sufficiently or outright violating our human dignity, is bad and wrong, and therefore all and only this bad and wrong digital technology should be rejected but not — except in extreme cases of digital technology whose coercive use is actually violently harming and oppressing ordinary people, for example, digitally-driven weapons or weapons-systems being used for mass destruction or mass murder — destroyed; rather, it should only be simply refused, non-violently dismantled, or radically transformed into its moral opposite.

So much for the definitions, and now for some moral imperatives. What I strongly believe is that we all ought to be dignitarian neo-Luddites with respect to digital technology. Why? To be sure, there are many ways in which digital technology can be bad and wrong in the dignitarian sense, including invasive digital surveillance, digitally-driven weapons and weapons-systems, algorithmic bias, and digital manipulation and nudging. And of course there are also ways in which digital technology can be bad and wrong in the utilitarian sense, for example, putting many people out of work. But the principal reason for being a dignitarian neo-Luddite with respect to digital technology is that our excessive use of and indeed addiction to digital technology is systematically undermining our innate capacities for thinking, caring, and acting for ourselves. This is preeminently true of the new chatbots — for example, ChatGPT and LaMDA — and “artificial intelligence,” aka AI, more generally (see, e.g., Hanna, 2023a, 2023b, 2023c), but it is also true, to an increasingly important degree, of our excessive use of and addiction to smart-phones, desktop and laptop computers, the internet, social media, and so on and so forth. When you combine our excessive use of and addiction to chatbots and AI with our excessive use of and addiction to smart-phones, desktop and laptop computers, the internet, social media, etc., the result is nothing less than an all-out existential attack on our rational human mindedness or intelligence.

By “our rational human mindedness or intelligence” I mean the essentially embodied, unified set of basic innate cognitive, affective, and practical capacities present in all and only those human animals possessing the essentially embodied neurobiological basis of those capacities, namely: (i) consciousness, i.e., subjective experience, (ii) self-consciousness, i.e., consciousness of one’s own consciousness, second-order consciousness, (iii) caring, i.e., desiring, emoting, or feeling, (iv) sensible cognition, i.e., sense-perception, memory, or imagination, (v) intellectual cognition, i.e., conceptualizing, believing, judging, or inferring, (vi) volition, i.e., deciding, choosing, or willing, and (vii) free agency, i.e., free will and practical agency. This unified set of capacities constitutes our human real personhood, which in turn is the metaphysical ground of our human dignity (Hanna, 2023d, 2023e). Therefore, this all-out existential attack on our rational human mindedness or intelligence is also an all-out existential attack on our human dignity.

The Cassandra-like prophecy and warning that I’m issuing, however, is not that chatbots or AI more generally could ever become rational, super-intelligent, and morally satanic, and then run amok. Interestingly, that seems to be one of Hinton’s worries. But in fact, it’s metaphysically impossible for computing machines ever to be rationally minded or intelligent in the sense that we’re rationally minded or intelligent, because (i) it’s metaphysically necessary that all creatures possessing the seven basic innate capacities I listed above are complex living organisms, i.e., animals (Hanna and Maiese, 2009), hence not machines, hence not computing machines, and (ii) it’s also metaphysically necessary that our rational mindedness or intelligence includes (iia) an innate non-basic (“non-basic,” in the sense that it essentially depends on the seven basic innate capacities listed in the just-previous paragraph) capacity for spontaneous creativity, and also (iib) an innate non-basic capacity for either conceptual or essentially non-conceptual a priori intuition of (iib1) innately-specified universal, unconditional, a priori or non-empirical moral principles such as everyone ought always to choose and act with sufficient respect for everyone’s dignity, including their own, (iib2) universal, unconditional, a priori or non-empirical logical principles such as the minimal principle of non-contradiction, namely, not every statement is both true and false, and (iib3) the universal a priori or non-empirical formal structures of the orientable, three-dimensional space and the forward-directed, processual, purposive, asymmetric organic time in which our minded animal bodies are ineluctably embedded (Hanna, 2006, 2015, 2018), none of which can ever exist in computing machinery. And, closely related to these metaphysically necessary modal facts, there are also some strictly logical and mathematical reasons why computing machinery can never be rationally minded or intelligent in the sense that we’re rationally minded or intelligent (see, e.g., Hanna, 2023f; Keller, 2023; Landgrebe and Smith, 2022).
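As an aside for formally-minded readers: the minimal principle of non-contradiction just mentioned, namely that not every statement is both true and false, can be written in standard logical notation roughly as follows (a sketch only; the propositional-quantifier rendering is my gloss, not Hanna’s own formalization):

\[
\neg \forall p\,(p \wedge \neg p) \quad \text{or, classically equivalently,} \quad \exists p\,\neg(p \wedge \neg p)
\]

The principle is “minimal” in that it denies only that every statement is contradictory, which is far weaker than the classical principle of non-contradiction, \(\forall p\,\neg(p \wedge \neg p)\), according to which no statement is both true and false.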

On the contrary, the Cassandra-like prophecy and warning that I’m issuing about this all-out existential attack on our rational human mindedness or intelligence is instead directed at the global technocratic capitalist corporations — especially those that supply weapons and surveillance systems for military and government use — millionaires, and billionaires who reap immense profits and wield immense political power by designing, producing, marketing, and above all controlling our use of and reliance on digital technology: namely, the members of the military-industrial-digital complex. Correspondingly, my Cassandra-like prophecy and warning is simply this:

the members of the military-industrial-digital complex are systematically harming and oppressing ordinary people like us by not only enabling but also effectively mandating our excessive use of and addiction to digital technology, which in turn systematically undermines our innate capacities for thinking, caring, and acting for ourselves, and therefore undermines our human real personhood, and thereby violates our human dignity — therefore, we ought to ban all giant AI experiments and LLM/chatbot technology while they are still in their infancy, just as we ought to have banned all atomic bomb experiments and nuclear weapons technology while they were still in their infancy.

Now, what is to be done? Perhaps dignitarian neo-Luddism with respect to digital technology will become a worldwide, world-changing social and political movement, comparable to the Ban-the-Bomb and anti-nuclear movement: I wholeheartedly hope so. If we ban all further giant AI experiments and LLM/chatbot technology now, when it’s already obvious what their existential threat to humankind is, then the world will be a substantially better place, just as the world would have been a substantially better place if we had banned the A-bomb and nuclear weapons technology after the initial tests, when it was already obvious what their existential threat to humankind was.

But, in the meantime, given the immense power of the military-industrial-digital complex, individual dignitarian digital neo-Luddites like you and me (assuming that you agree with what I’ve argued so far, that is) cannot do very much to change the world. Nevertheless, and now updating J.C. Scott’s “weapons of the weak” (Scott, 1985) for our 21st-century context, a world pervaded by digital technology and dominated by the military-industrial-digital complex, I do think that we individually can become what I’ll call daily dignitarian digital refusards, aka DDDRs.[iv] How? We can become DDDRs by resolving to cultivate our own innate capacities in a self-disciplined and autonomous way, for six waking hours every day, altogether independently of digital technology, to the extent that this is humanly possible.

To take just a few of many examples of daily dignitarian digital refusardism, you can log off all your digital devices for six waking hours a day, you can refuse to use ChatGPT or any other chatbot altogether, and you can instead do some or all of these things: read hard-copy books; write out your ideas and thoughts longhand; memorize and recite poetry; memorize things about subject areas that especially interest you; do mental calculations; do logic puzzles, acrostics, crossword puzzles, etc.; go for long contemplative walks; sit in a park for an hour or so; work in a garden; look carefully at the natural landscape around you; meditate for fifteen minutes or half an hour; cook, eat, and drink; doodle or draw longhand, or paint; play music on a real musical instrument, whistle, or sing; reminisce; get together with your loved ones or friends in person and talk about anything under the sun other than what’s being enabled or mandated by the members of the military-industrial-digital complex; and fall or re-fall in love with someone or something.

Even the military-industrial-digital complex, with all its immense power, can’t stop you from practicing daily dignitarian digital refusardism in its many modes, each of which involves the self-disciplined, autonomous cultivation of your innate capacities. And if the day does ever come when the military-industrial-digital complex can stop you from being a DDDR, whether by means of permanently-implanted brain-computer interfaces or some other malign instruments of digital-technological harm and oppression, then that will really-&-truly be the end of the road for rational humankind and human dignity.[v]

NOTES

[i] This riffs on a famous phrase in US President Dwight D. Eisenhower’s “Farewell Address” in 1961:

[The] conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence — economic, political, even spiritual — is felt in every city, every statehouse, every office of the federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society. In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military–industrial complex. The potential for the disastrous rise of misplaced power exists, and will persist. We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals so that security and liberty may prosper together. (See, e.g., Wikipedia, 2023a; boldfacing added)

[ii] The authors and signatories of the open letter say that “[i]f such a [six-months-minimum] pause cannot be enacted quickly, governments should step in and institute a moratorium” (Future of Life Institute, 2023). Now, “moratorium” means “temporary prohibition of an activity.” So, even supposing that a self-adopted pause or government moratorium were to occur, this still logically, or at least conversationally, implies that sooner or later giant AI experiments and LLM/chatbot technology will be resumed. On the contrary, I’m saying that they should be shut down permanently, just as A-bomb research and nuclear weapons technology should have been shut down permanently.

[iii] It needs to be emphasized and re-emphasized that dignitarian neo-Luddism with respect to digital technology is also committed to the positive dignitarian moral doctrine that some digital technology is good and right, and therefore ought to be used, precisely because it promotes the betterment of humankind and sufficiently respects human dignity. For example, in my opinion this is true of posting or self-publishing essays about dignitarian digital/AI ethics for universal free sharing on the internet. Why else would I be doing it? But in this context, I’m focusing on the negative dignitarian moral doctrine.

[iv] Pronounced “didd-ers.”

[v] For a similar line of thinking, see also (Corbyn, 2023; Farahany, 2023).

REFERENCES

(Corbyn, 2023). Corbyn, Z. “Prof Nita Farahany: ‘We Need a New Human Right to Cognitive Liberty’.” The Guardian. 4 March. Available online at URL = <https://www.theguardian.com/science/2023/mar/04/prof-nita-farahany-we-need-a-new-human-right-to-cognitive-liberty>.

(Farahany, 2023). Farahany, N. The Battle for Your Brain. New York: St Martin’s Press.

(Future of Life Institute, 2023). “Pause Giant AI Experiments: An Open Letter.” Future of Life Institute. 22 March. Available online at URL = <https://futureoflife.org/open-letter/pause-giant-ai-experiments/>.

(Glendinning, 1990). Glendinning, C. “Notes toward a Neo-Luddite Manifesto.” The Anarchist Library. Available online at URL = <https://theanarchistlibrary.org/library/chellis-glendinning-notes-toward-a-neo-luddite-manifesto>.

(Hanna, 2006). Hanna, R. Rationality and Logic. Cambridge MA: MIT Press. Also available online in preview at URL = <https://www.academia.edu/21202624/Rationality_and_Logic>.

(Hanna, 2015). Hanna, R. Cognition, Content, and the A Priori: A Study in the Philosophy of Mind and Knowledge. THE RATIONAL HUMAN CONDITION, Vol. 5. Oxford: Oxford Univ. Press. Also available online in preview HERE.

(Hanna, 2018). Hanna, R. Deep Freedom and Real Persons: A Study in Metaphysics. THE RATIONAL HUMAN CONDITION, Vol. 2. New York: Nova Science. Available online in preview HERE.

(Hanna, 2023a). Hanna, R. “How and Why ChatGPT Failed The Turing Test.” Unpublished MS. Available online at URL = <https://www.academia.edu/94870578/How_and_Why_ChatGPT_Failed_The_Turing_Test_January_2023_version_>.

(Hanna, 2023b). Hanna, R. “It’s All Done With Mirrors: A New Argument That Strong AI is Impossible.” Unpublished MS. Available online HERE.

(Hanna, 2023c). Hanna, R. “Are There Some Legible Texts That Even The World’s Most Sophisticated Robot Can’t Read?” Unpublished MS. Available online HERE.

(Hanna, 2023d). Hanna, R. “Dignity, Not Identity.” Unpublished MS. Available online at URL = <https://www.academia.edu/96684801/Dignity_Not_Identity_February_2023_version_>.

(Hanna, 2023e). Hanna, R. “Frederick Douglass, Kant, and Human Dignity.” Unpublished MS. Available online at URL = <https://www.academia.edu/97518662/Frederick_Douglass_Kant_and_Human_Dignity_February_2023_version_>.

(Hanna, 2023f). Hanna, R. “Babbage-In, Babbage-Out: On Babbage’s Principle.” Unpublished MS. Available online at URL = <https://www.academia.edu/101462742/Babbage_In_Babbage_Out_On_Babbages_Principle_May_2023_version_>.

(Hanna and Maiese, 2009). Hanna, R. and Maiese, M., Embodied Minds in Action. Oxford: Oxford Univ. Press. Available online in preview HERE.

(Hawkins and Allen, 1991). Hawkins, J.M. and Allen, R. (eds.), The Oxford Encyclopedic English Dictionary. Oxford: Clarendon/Oxford Univ. Press.

(Keller, 2023). Keller, A. “Artificial, But Not Intelligent: A Critical Analysis of AI and AGI.” Against Professional Philosophy. 5 March. Available online at URL = <https://againstprofphil.org/2023/03/05/artificial-but-not-intelligent-a-critical-analysis-of-ai-and-agi/>.

(Landgrebe and Smith, 2022). Landgrebe, J. and Smith, B. Why Machines Will Never Rule the World: Artificial Intelligence without Fear. London: Routledge.

(New York Times, 2023). Metz, C. “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead.” The New York Times. 1 May. Available online at URL = <https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html>.

(Scott, 1985). Scott, J.C. Weapons of the Weak: Everyday Forms of Peasant Resistance. New Haven CT: Yale Univ. Press.

(Wikipedia, 2023a). Wikipedia. “Military-Industrial Complex.” Available online at URL = <https://en.wikipedia.org/wiki/Military%E2%80%93industrial_complex>.

(Wikipedia, 2023b). Wikipedia. “Luddite.” Available online at URL = <https://en.wikipedia.org/wiki/Luddite>.

AGAINST PROFESSIONAL PHILOSOPHY REDUX 775

Mr Nemo, W, X, Y, & Z, Monday 15 May 2023

Against Professional Philosophy is a sub-project of the online mega-project Philosophy Without Borders, which is home-based on Patreon here.

Please consider becoming a patron!
