THE PHILOSOPHY OF THE FUTURE, #48–Digital Technology Only Within The Limits of Human Dignity.

Mr Nemo
Nov 21, 2022

By Robert Hanna

“FUTUREWORLD,” by A. Lee/Unsplash

***

This book, THE PHILOSOPHY OF THE FUTURE: Uniscience and the Modern World, by Robert Hanna, presents and defends a critical philosophy of science and digital technology, and a new and prescient philosophy of nature and human thinking.

It is being made available here in serial format, but you can also download and read or share a .pdf of the complete text — including the BIBLIOGRAPHY — of THE PHILOSOPHY OF THE FUTURE HERE.

This forty-eighth installment contains section 5.0.

***

We know the truth not only through our reason but also through our heart. It is through the latter that we know first principles, and reason, which has nothing to do with it, tries in vain to refute them. (Pascal, 1995: #110, p. 28)

If there is any science humankind really needs, it is the one I teach, of how to occupy properly that place in [the world] that is assigned to humankind, and how to learn from it what one must be in order to be human. (Rem 20: 45)

Natural science will one day incorporate the science of humankind, just as the science of humankind will incorporate natural science; there will be a single science. (Marx, 1964: p. 70, translation modified slightly)

***

TABLE OF CONTENTS

A NOTE ON REFERENCES TO KANT’S WORKS

PREFACE AND ACKNOWLEDGMENTS

0. Introduction: Science, The Four Horsemen of The New Apocalypse, and The Uniscience

0.0 How Uncritical and Unreformed Science Is Literally Killing The Modern World

0.1 My Aim In This Book

0.2 The Uniscience and Pascal’s Dictum

Chapter 1. Natural Piety: A Kantian Critique of Science

1.0 Kantian Heavy-Duty Enlightenment and The Uniscience

1.1 Kant’s Neo-Aristotelian Natural Power Grid

1.2 Kant, Natural Piety, and The Limits of Science

1.3 From Kant’s Anti-Mechanism to Kantian Anti-Mechanism

1.4 In Defense of Natural Piety

1.5 Scientific Pietism and Scientific Naturalism

1.6 How to Ground Natural Science on Sensibility

1.7 Sensible Science 1: Natural Science Without Natural Mechanism

1.8 Sensible Science 2: Natural Science Without Materialism/Physicalism

1.9 Sensible Science 3: Natural Science Without Scientism

1.10 Frankenscience, the Future of Humanity, and the Future of Science

Chapter 2. This is the Way the World Ends: A Philosophy of Civilization Since 1900, The Rise of Mechanism, and The Emergence of Neo-Organicism

2.0 Introduction

2.1 Wrestling with Modernity: 1900–1940

2.1.1 Six Sociocultural or Sociopolitical Developments

2.1.2 Two Philosophical Developments: Classical Analytic Philosophy and First Wave Organicism

2.1.3 Architectural and Artistic Trends

2.2 The Historical Black Hole, The Mechanistic Mindset, and The Mechanistic Worldview: 1940–1980

2.2.1 Formal and Natural Science After 1945, The Mechanistic Mindset, and The Rise of The Mechanistic Worldview

2.2.2 The Emergence of Post-Classical Analytic Philosophy

2.2.3 The Two Images Problem and its Consequences

2.2.4 Modernism and Countercurrents in the Arts and Design

2.3 The Philosophical Great Divide, Post-Modernist Cultural Nihilism, and Other Apocalyptic Developments: 1980–2022

2.3.1 The Rise of Po-Mo Philosophy

2.3.2 Po-Mo Architecture: Unconstrained Hybridity

2.3.3 Other Apocalyptic Developments: Crises in Physics and Big Science, and The One-Two Punch

2.4 From The Mechanistic Worldview to Neo-Organicism

2.4.0 Against The Mechanistic Worldview

2.4.1 Seven Arguments Against The Mechanistic Worldview

2.4.1.1 Logical and Mathematical Arguments

2.4.1.2 Physical and Metaphysical Arguments

2.4.1.3 Mentalistic and Agential Arguments

2.4.2 Beyond The Mechanistic Worldview: The Neo-Organicist Worldview

2.4.2.1 The Neo-Organicist Thesis 1: Solving The Mind-Body Problem

2.4.2.2 Dynamic Systems Theory and The Dynamic World Picture

2.4.2.3 The Neo-Organicist Thesis 2: Solving The Free Will Problem

2.4.2.4 Dynamic Emergence, Life, Consciousness, and Free Agency

2.4.2.5 How The Mechanical Comes To Be From The Organic

2.5 Neo-Organicism Unbound

2.6 Conclusion

Chapter 3. Thought-Shapers

3.0 Introduction

3.1 A Dual-Content Nonideal Cognitive Semantics for Thought-Shapers

3.2 The Cognitive Dynamics of Thought-Shapers

3.3 Constrictive Thought-Shapers vs. Generative Thought-Shapers

3.4 Some Paradigmatic Classical Examples of Philosophical and Moral or Sociopolitical Constrictive Thought-Shapers, With Accompanying Diagrams

3.5 Thought-Shapers, Mechanism, and Neo-Organicism

3.6 Adverse Cognitive Effects of Mechanical, Constrictive Thought-Shapers

3.7 How Can We Acknowledge Organic Systems and Organic, Generative Thought-Shapers?

3.8 We Must Cultivate Our Global Garden

Chapter 4. How To Complete Physics

4.0 Introduction

4.1 The Incompleteness of Logic, The Incompleteness of Physics, and The Primitive Sourcehood of Rational Human Animals

4.2 Frame-by-Frame: How Early 20th Century Physics Was Shaped by Brownie Cameras and Early Cinema

4.3 How to Complete Quantum Mechanics, Or, What It’s Like To Be A Naturally Creative Bohmian Beable

4.4 Can Physics Explain Physics? Anthropic Principles and Transcendental Idealism

4.5 The Incredible Shrinking Thinking Man, Or, Cosmic Dignitarianism

4.6 Conclusion

Chapter 5. Digital Technology Only Within The Limits of Human Dignity

5.0 Introduction

00. Conclusion: The Point Is To Shape The World

APPENDICES

Appendix 1. A Neo-Organicist Turn in Formal Science: The Case of Mathematical Logic

Appendix 2. A Neo-Organicist Note on The Löwenheim-Skolem Theorem and “Skolem’s Paradox”

Appendix 3. A Neo-Organicist Approach to The Nature of Motion

Appendix 4. Sensible Set Theory

Appendix 5. Complementarity, Entanglement, and Nonlocality Pervade Natural Reality at All Scales

Appendix 6. Neo-Organicism and The Rubber Sheet Cosmos

BIBLIOGRAPHY

***

Chapter 5. Digital Technology Only Within The Limits of Human Dignity

A collection of stills from Alphaville, directed by J.-L. Godard (1965)

These are some of the reasons why robots, which may finally be here, aren’t — and really shouldn’t be — a harbinger of something negative. In theory, more technology will mean more efficiency for America’s employers and, perhaps, in a post-pandemic world, a safer one. The early signals, however, are that we should not fear new robot overlords. Instead, the sharpest, most strategic companies are making a simultaneous investment in people who can power progress. (Horn and Jackson, 2021)

In the realm of ends everything has either a price or a dignity (Würde). What has a price can be replaced by something else as its equivalent; what on the other hand is raised above all price and therefore admits of no equivalent has dignity. What is related to general human inclinations and needs has a market price; that which, even without presupposing a need, conforms with a certain taste, that is, with a delight in the mere purposeless play of our mental powers, has an affective price (Affectionpreis); but that which constitutes the condition under which alone something can be an end in itself has not merely a relative worth, that is, a price, but an inner worth, that is, dignity. Now, morality is the condition under which alone a rational being can be an end in itself, since only through this is it possible to be a lawgiving member in the realm of ends. Hence morality, and humanity insofar as it is capable of morality, is that which alone has dignity. (GMM 4: 434–435)

5.0 Introduction

Is digital technology — including computers, algorithms, digital data or information, artificial intelligence/AI, and robotics — our tool or our master? I’ll call the first option cybernetic instrumentalism, and the second option cybernetic oppression, up to and including cybernetic totalitarianism — as, for example, brilliantly imagined in Jean-Luc Godard’s 1965 dystopian science fiction classic, Alphaville. It’s essential to understand from the outset, however, that the problem posed by Alphaville is not a fundamental problem in the metaphysics of mind: the mega-computer Alpha-60 is neither conscious, nor self-conscious, nor rational, nor a free agent, nor intelligent (see section 2.4.1 above). But the global power elite constituting the military-industrial-university-digital complex that controls and guides all nation-States and their governments, i.e., The Hyper-State, which thereby also controls the government of Alphaville (aka Paris circa 1965, in the wake of the bloody Algerian War for independence from France, from 1954 to 1962) and all its inhabitants, is terrifyingly authoritarian, coercive, and brutal. So, even though the problem about digital technology is not a fundamental problem in the metaphysics of mind, it is a fundamental moral, sociopolitical, and existential-spiritual problem, even despite the soothing, sophistical reassurances of contemporary consultant experts in digital ethics who work either directly for the global technocratic business corporations that reap immense profits from digital technology (see, e.g., Muldoon, 2021), or for university-based digital ethics “centers,” “institutes,” or “programs” that are heavily funded by those same corporations.

For example, here’s how M.B. Horn and C.J. Jackson purport to dispel what they call the “apparent paradox” about digital technology and the lives of workers:

Scientist and fiction writer Isaac Asimov once wrote, “You can’t differentiate between a robot and the very best of humans.” The fear of automation, if not robot overlords, has been a fixation of business analyses and science fiction for generations. So with U.S. employers embarking on a clearer path toward machines, it might be easy to read these developments as a dire sign for American workers. In particular, some victims of robots would seem to be investments in areas like upskilling and education. The presumption might be that building a better bot gives you more bang for your buck.

Except the opposite appears to be happening. Automation is here, but so too is a deeper appreciation for and investment in things like upskilling, learning and development, and education for all workers. (Horn and Jackson, 2021)

So, according to Horn and Jackson, any metaphysical, moral, sociopolitical, or existential worries we might have about digital technology are to be bought off by global technocratic business corporations investing big or not-so-big bucks in (re-)educating an elite sub-class of workers at those self-same global technocratic business corporations. Nice, very nice.

The core thesis of Dignitarian Digital Ethics, aka DDE, then, is that digital technology, no matter how powerful, sophisticated, or profitable, is nothing more and nothing less than a tool created by humanity for the sake of humanity, whose use therefore can and should be strictly constrained by general and specific moral principles flowing from the concept and fact of human dignity. Now all tools significantly shape the minds and lives of the users of those tools. Moreover, all high-powered tools can be used either well or wrongly — to nudge us, control us, torture us, or kill us; and they can also accidentally spin out of control, and cause serious damage, catastrophe, or even apocalypse. Therefore, DDE is a maximally morally robust version of cybernetic instrumentalism that’s explicitly and fully opposed to cybernetic oppression of any and all kinds, intentional or accidental, up to and including cybernetic totalitarianism. Or in other and fewer words: digital technology only within the limits of human dignity.

In order to clarify and explicate these claims, here are some definitions.

By consciousness, I mean the capacity for immanently reflexively aware, egocentrically-centered — aka subjective — essentially embodied experience that’s innately possessed by rational human animals and also by other non-rational animals, whether human or non-human.

Rational human animals and other conscious animals are all capable of intentionality — aka the “aboutness” or “directedness” of conscious mental acts, states, or processes — which is the conscious mental representation of any object in the world, or any other aspect of that world, or of themselves. Intentionality is a basic fact about us and other animals, that’s exemplified by any animals of any species that are capable of any or all of these: consciousness, cognition (i.e., essentially non-conceptual sense perception or non-empirical representation, conceptualization or thinking, imagination, belief or judgment, memory, or anticipation), emotion (i.e., desire, feeling, passion), logical reasoning, practical reasoning, choice, and/or intentional action and agency. In particular, if you’re actually reading these words now, then you’re capable of basic intentionality.

By saying that a fact or phenomenon is basic, I mean (i) that it’s an irreducible and essential feature of at least some, many, or even all entities that are manifestly real, (ii) that it’s fully capable of grounding other non-basic facts or phenomena, and (iii) that it’s neither unanalyzable, nor non-relational, nor inexplicable.

Correspondingly, by contrast with basic intentionality, some representations are not, as items in the manifestly real world, in-and-of-themselves, conscious mental representations with basic intentionality, although they’re actually used, or at least in principle can be used, by conscious animals as representations with non-basic intentionality.[i] Leading examples of representations with non-basic intentionality include human language, logic, mathematics, diagrams, pictures, and symbols more generally.

By data or information, I mean the communicative, semantic, logical, mathematical, or other kind of representational content of any kind of representation — i.e., what that representation denotes, connotes, expresses, indicates, means, presents, says, etc., via its basic or non-basic intentionality — as opposed to that representation’s encoding or presentational format alone.

And by digital data or information, I mean data or information that’s actually or potentially encoded or presentationally formatted as a binary sequence of 0s and 1s, and can be operated on by the primitive recursive functions of arithmetic.
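To make the last two definitions a bit more concrete, here’s a minimal sketch in Python (purely my own illustration; the function names and the 8-bits-per-character encoding are arbitrary choices, not anything canonical). It encodes a string as a binary sequence of 0s and 1s, and then defines addition from the successor function alone, in the style of the primitive recursive functions of arithmetic:

```python
# A minimal illustration of (1) encoding data as a binary sequence and
# (2) operating on it with functions built up by primitive recursion.

def to_bits(text: str) -> str:
    """Encode a string as a binary sequence, 8 bits per character."""
    return "".join(format(byte, "08b") for byte in text.encode("utf-8"))

def succ(n: int) -> int:
    """The successor function, a primitive recursive building block."""
    return n + 1

def add(m: int, n: int) -> int:
    """Addition defined by primitive recursion on n, using succ alone."""
    return m if n == 0 else succ(add(m, n - 1))

bits = to_bits("dignity")
print(bits[:16])                 # '0110010001101001', i.e., 'd' then 'i'
print(add(int(bits[:4], 2), 1))  # 7: arithmetic applied to the encoded data
```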

Then, by digital technology I mean the following:

Anything X counts as “digital technology” if and only if X is an abstract or real-world machine, or a sub-part of such a machine, that operates on and/or processes digital data or information according to the logico-mathematical principles of Turing-computation (Turing, 1936/1937; Boolos and Jeffrey, 1989), aimed at some end or purpose.
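To see in miniature what “operates on and/or processes digital data or information according to the logico-mathematical principles of Turing-computation” amounts to, here’s a toy sketch in Python (again, entirely my own construction, purely for illustration): a tiny Turing machine, given as a finite transition table, that flips every bit on its tape and then halts:

```python
# A toy Turing machine: a finite state-transition table that flips every
# 0 to 1 and every 1 to 0 on the tape, halting at the blank symbol '_'.

def run_turing_machine(tape):
    # (state, read symbol) -> (write symbol, head move, next state)
    table = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_", 0, "halt"),
    }
    state, head = "flip", 0
    while state != "halt":
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += move
    return tape

print(run_turing_machine(list("0110_")))  # ['1', '0', '0', '1', '_']
```

The point of the example is just that everything the machine “does” is exhausted by a finite table of rules fixed in advance by its human designer.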

In turn, by artificial intelligence, aka AI, I mean a proper subset of digital technology that’s designed specifically in order to mimic or model human consciousness, cognition, emotion, knowledge, logical reasoning, practical reasoning, choice, and/or intentional action and agency, up to the point at which it can also greatly exceed contextual or natural limitations on human performance, or even fool rational human animals into believing that they’re encountering another mind (Turing, 1950). In turn, AI includes robotics, since robots are real-world implementations of some or another version of AI — even when robots are so-called “autonomous,” in the attenuated sense that they’re more or less detached from central control systems and operating “in the wild.”

Now an algorithm is any well-defined, finite, data-or-information-processing routine or rule-governed sequence, according to the logico-mathematical principles of Turing-computation. Algorithms are implemented by digital technology that’s specifically in the form of real-world artificial automata or artificial machines, aka “computers,” and also by digital technology more generally, of many different kinds, built out of many different kinds of materials. The end or purpose of any algorithm — say, to solve a class of problems, or to perform a calculation, or to yield some other sort of result (aka “optimization”) — is pre-established by humankind, i.e., by us.
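As a concrete instance of this definition, consider Euclid’s greatest-common-divisor procedure (a standard textbook example, chosen by me purely for illustration): a well-defined, finite, rule-governed routine whose end, namely solving the class of problems “find the greatest common divisor of a and b,” is pre-established by us:

```python
# Euclid's algorithm: a finite, rule-governed routine whose purpose
# (computing greatest common divisors) is fixed in advance by humankind.

def gcd(a: int, b: int) -> int:
    """Return the greatest common divisor of two non-negative integers."""
    while b != 0:
        a, b = b, a % b  # each step strictly shrinks b, so the routine halts
    return a

print(gcd(48, 18))  # 6
```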

Therefore, all forms of digital technology, including computers, algorithms, digital data or information, AI, and robotics, are nothing more and nothing less than tools for data- or information-processing, or for real-world implementations of these, according to the logico-mathematical principles of Turing-computation, created by humankind for specifically human ends and purposes. Again, that’s the thesis of cybernetic instrumentalism.

By ethics, I mean the domain of humankind’s basic individual and social commitments, and its leading ideals and values.

And by morality, I mean humankind’s attempt to guide human choice and action by rationally formulating and following principles or rules that reflect humankind’s basic individual and social commitments and its leading ideals and values; and morality is the core of ethics.

Therefore, by digital ethics I mean humankind’s attempt to guide human choice and action in the design, creation, and use of digital technology, including computers, digital data or information, AI, and robotics, by rationally formulating and following moral principles or rules that reflect humankind’s basic individual and social commitments and its leading ideals and values.

Sadly, popular moral thinking and sociopolitical thinking are rife with fallacies, and contemporary digital ethics is no exception. Here’s the general form of a particularly pernicious one that I’ll call The Fallacy of Inevitability:

(i) X is a very large social institution (in terms of the number of people who are members of that institution), X is very profitable for an elite group of powerful people, X has been in existence for a very long time (let’s say, anywhere from 50 years to hundreds or even thousands of years), and X is very widespread (whether locally, regionally, nationally or even globally),

(ii) therefore X is inevitable, and

(iii) because X is inevitable, therefore X ought to be accepted by us even if X itself and its consequences are very often bad, false, and wrong, and furthermore,

(iv) therefore the most we can do in order to respond to the badness, falsity, and wrongness of X is to impose legal audits, regulations, or restrictions on X, while still allowing X to exist essentially unchanged in its current form.
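Before turning to examples: for readers who like their fallacies displayed formally, here’s one schematic way (purely illustrative, and assuming standard modal notation) to exhibit the two key invalid steps, where F(X) abbreviates the conjunction of the four facts listed in (i), the box marks inevitability or necessity, and O marks moral obligation:

```latex
% Two invalid inferences in The Fallacy of Inevitability, schematically.
\begin{align*}
  \text{(i)} \Rightarrow \text{(ii)}:&\quad F(X) \;\therefore\; \Box X
    && \text{invalid: contingent social facts never entail necessity}\\
  \text{(ii)} \Rightarrow \text{(iii)}:&\quad \Box X \;\therefore\; O(\text{accept } X)
    && \text{invalid: no ``is'' ever entails an ``ought''}
\end{align*}
```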

In turn, let’s consider some examples of real-world values of X:

X = chattel slavery

X = people’s owning, carrying, or using guns

X = the State’s owning, carrying, or using guns and/or weapons more generally

X = military and domestic intelligence, together with policing, together with coercive authoritarian government, aka the surveillance State, aka the security State

X = ecologically-destructive, technocratic global corporate capitalism

X = any digital technology whose design, creation, and/or use fails to treat people with sufficient respect for their human dignity (see also, e.g., McDonald et al., 2021), up to and including cybernetic totalitarianism.

Now there’s not merely one fallacy, but in fact three sub-fallacies, contained under The Fallacy of Inevitability, that I’ll call the three modes of The Fallacy.

First, the step from (i) to (ii) is clearly a non sequitur. A social institution’s being very large, very profitable for an elite group of powerful people, very long-standing, and very widespread does not make it inevitable, since no social institution is necessitated either by the laws of logic or by the laws of nature. More generally, because humanity freely creates and freely sustains all actual and possible social institutions, humanity can also freely abolish, refuse, or at the very least radically devolve-and-replace (hence, transform into its moral opposite) any such social institution. That’s mode 1 of The Fallacy.

Second, step (iii) clearly contains another non sequitur. Social institutions are contingent facts about humanity, and no contingent fact automatically entails a moral obligation; hence step (iii) is clearly an instance of the naturalistic fallacy, namely, arguing directly from the factual (the “is”) to the morally obligatory (the “ought”). Indeed, and diametrically on the contrary, if any social institution itself and its consequences are bad, false, and wrong, then humanity ought freely to abolish it, refuse it, or at the very least radically transform it into its moral opposite. That’s mode 2 of The Fallacy.

Third, step (iv) is yet another clear non sequitur. Even granting that some social institution itself and its consequences are bad, false, and wrong, it simply does not follow that the most we can do in order to respond to the badness, falsity, and wrongness of X is to impose legal audits, regulations, or restrictions on X, while still allowing X to exist essentially unchanged in its current form. Diametrically on the contrary, humanity ought freely to abolish it, refuse it, or at the very least radically transform it into its moral opposite. And that’s mode 3 of The Fallacy.

Consider, for example, chattel slavery, a social institution that’s been in existence since even before the emergence of the earliest States (Scott, 2017: p. 155), and during certain historical periods has been practiced virtually worldwide. Chattel slavery is self-evidently inherently bad, false, and wrong, and its consequences are bad, false, and wrong, precisely because it violates human dignity — and for the basics of a theory of human dignity, see sections 5.1 and 5.2 below.[ii] And this holds even if chattel slavery is a very large social institution (say, forcibly employing and/or non-forcibly servicing millions of people living in the USA), very profitable for an elite group of powerful people (say, tobacco plantation owners and cotton plantation owners, and their business affiliates, in the American South), has been in existence for a very long time (say, from 1776 to 1865, and also even earlier during the pre-Revolutionary period), and is very widespread (say, spread all across the American South during those periods). Chattel slavery wasn’t and isn’t inevitable, and it wasn’t and isn’t even morally permissible, much less morally obligatory, precisely because it violates human dignity, even when it was an actual social fact. Hence it would have been a moral absurdity and a moral scandal merely to impose legal audits, regulations, and restrictions on chattel slavery, since the absurdity and moral scandal of the very idea of “common sense slavery control” are self-evident. Diametrically on the contrary, and also self-evidently, it was, is, and forever will be humanity’s moral obligation to abolish chattel slavery, refuse it, or at the very least radically transform it into its moral opposite. And indeed, that’s what actually happened to chattel slavery in the USA by the end of The Civil War — although, catastrophically and tragically, as everyone knows, the USA has continued to suffer for 156 years from persistent and pervasive racist violations of human dignity and other malign consequences of structural or systematic racism, aka “white rage” (Anderson, 2017), from Reconstruction and the Jim Crow period, through the Civil Rights era and its troubled aftermath in the 1970s to the end of the 20th century, through the Black Lives Matter era, right up to 6 am this morning.

Finally, to come to the sticking point, here’s an excerpt from a recent article, “The Algorithmic Auditing Trap,” by another consultant expert in digital ethics, Mona Sloane:

We have a new A.I. race on our hands: the race to define and steer what it means to audit algorithms. Governing bodies know that they must come up with solutions to the disproportionate harm algorithms can inflict.

This technology has disproportionate impacts on racial minorities, the economically disadvantaged, womxn, and people with disabilities, with applications ranging from health care to welfare, hiring, and education. Here, algorithms often serve as statistical tools that analyze data about an individual to infer the likelihood of a future event — for example, the risk of becoming severely sick and needing medical care. This risk is quantified as a “risk score,” a method that can also be found in the lending and insurance industries and serves as a basis for making a decision in the present, such as how resources are distributed and to whom.

Now, a potentially impactful approach is materializing on the horizon: algorithmic auditing, a fast-developing field in both research and application, birthing a new crop of startups offering different forms of “algorithmic audits” that promise to check algorithmic models for bias or legal compliance….

Recently, the issue of algorithmic auditing has become particularly relevant in the context of A.I. used in hiring. New York City policymakers are debating Int. 1894–2020, a proposed bill that would regulate the sale of automated employment decision-making tools. This bill calls for regular “bias audits” of automated hiring and employment tools.

These tools — résumé parsers, tools that purport to predict personality based on social media profiles or text written by the candidate, or computer vision technologies that analyze a candidate’s “micro-expressions” — help companies maximize employee performance to gain a competitive advantage by helping them find the “right” candidate for the “right” job in a fast, cost-effective manner.

This is big business. The U.S. staffing and recruiting market, which includes firms that assist in recruiting new internal staff and those that directly provide temporary staff to fill specific functions (temporary or agency staffing), was worth $151.8 billion in 2019. In 2016, a company’s average cost per hire was $4,129, according to the Society for Human Resource Management.

Automated hiring and employment tools will play a fundamental role in rebuilding local economies after the Covid-19 pandemic. For example, since March 2020, New Yorkers were more likely than the national average to live in a household affected by loss of income. The economic impact of the pandemic also materializes along racial lines: In June 2020, only 13.9% of white New Yorkers were unemployed, compared to 23.7% of Black residents, 22.7% of Latinx residents, and 21.1% of Asian residents.

Automated hiring tools will reshape how these communities regain access to employment and how local economies are rebuilt. Against that backdrop, it is important and laudable that policymakers are working to mandate algorithmic auditing. (Sloane, 2021)

This argument is a perfect example of The Fallacy of Inevitability in all three of its modes. No form of digital technology is inevitable,[iii] even if it’s very large in the social-institutional sense, very profitable for an elite group of powerful people, has been in existence a very long time (i.e., at least 50 years), and is very widespread. Moreover, humanity is under absolutely no moral obligation whatsoever to use any form of digital technology that violates sufficient respect for human dignity, just because it’s an actual social fact. And above all, the assumption that the most we can do is to impose legal audits, regulations, and restrictions on digital technology is completely false. Although “common sense digital technology control” might not look morally scandalous and morally absurd at first glance, just as “common sense gun control” might not look morally scandalous and morally absurd at first glance, in fact they both are.[iv] Diametrically on the contrary, then, if any form of digital technology violates human dignity, then humanity should freely abolish, refuse, or at the very least radically transform it into its moral opposite. Period. — No matter how much this rejection disrupts the wonders of “innovation,” the self-interests of digital technology billionaires and their global corporations, and/or the putatively public interests of the governments of neoliberal nation-States.

To be fair, in “The Algorithmic Auditing Trap” Sloane does go on to say, immediately after the text I quoted earlier, that

we are facing an underappreciated concern: To date, there is no clear definition of “algorithmic audit.” Audits, which on their face sound rigorous, can end up as toothless reputation polishers, or even worse: They can legitimize technologies that shouldn’t even exist because they are based on dangerous pseudoscience. (Sloane, 2021, underlining added)

That underlined sentence, at least prima facie, looks like the dignitarian abolish-it-refuse-it-or-radically-transform-it-into-its-moral-opposite principle I’m defending. But unfortunately, by the end of the article, she’s fallen back into The Fallacy of Inevitability, by merely recommending “three steps”: (i) “transparency about where and how algorithms and automated decision-making tools are deployed,” (ii) “need[ing] to arrive at a clear definition of what ‘independent audit’ means in the context of automated decision-making systems, algorithms, and A.I.,” and (iii) “need[ing] to begin a conversation about how, realistically, algorithmic auditing can and must be operationalized in order to be effective.” Morally speaking, that’s pretty thin gruel.

So again: if digital technology is bad, false, and wrong because it violates sufficient respect for human dignity, then, just like chattel slavery, humanity not only can but should abolish it, refuse it, or at the very least radically transform it into its moral opposite. As I mentioned at the outset, the core thesis of DDE is that digital technology, no matter how powerful, sophisticated, or profitable, is nothing more and nothing less than a tool created by humanity for the sake of humanity, whose use therefore can and should be strictly constrained by general and specific moral principles flowing from the concept and fact of human dignity. Digital technology is our tool, not our master.

In turn, the core thesis of DDE is grounded on the neo-organicist worldview, which, as we’ve seen in earlier chapters, says (i) that natural or physical nature is essentially processual, purposive, and self-organizing, hence essentially non-mechanical, and (ii) that there’s a single, unbroken metaphysical continuity between The Big Bang Singularity, temporally asymmetric/unidirectional, non-equilibrium negentropic thermodynamic matter/energy flows, organismic life, minded animals generally, minded human animals specifically, their free agency, and their dignity. In diametric opposition to DDE’s core thesis and the neo-organicist worldview, any and all forms of cybernetic oppression, up to and including cybernetic totalitarianism, are grounded on the mechanistic worldview. In view of the pervasiveness and ubiquity of digital technology in our contemporary and foreseeably future individual, social, economic, and political lives, DDE’s core thesis is of global existential significance — in all senses of “existential” — for humankind.

In the six sections that follow in this final chapter, I’ll present and defend the metaphysical foundations, moral principles, and sociopolitical applications of DDE.

NOTES

[i] The distinction I’m making here between basic intentionality and non-basic intentionality is similar in some ways to John Searle’s distinction between intrinsic intentionality and derived intentionality. See, e.g., (Searle, 1980b, 1983, 1992: esp. 78–82). The crucial differences between my distinction and Searle’s are (i) that my distinction presupposes the essential embodiment theory of the mind-body relation — see, e.g., (Hanna and Maiese, 2009; and section 2.4.2.1 above) — whereas by sharp contrast Searle’s distinction presupposes a biologically-driven and causal version of non-reductive materialism/physicalism, and (ii) although my notion of intentionality’s basicness rules out any implication of its unanalyzability, non-relationality, and inexplicability from the get-go, Searle’s notion of intentionality’s intrinsicness doesn’t rule this out.

[ii] For a fully worked-out theory of human dignity, see (Hanna, 2021f).

[iii] The false thesis of the inevitability of digital technology is sometimes called “technological determinism.” But this term is also used in a baffling variety of other non-equivalent senses: see, e.g., (Dafoe, 2015). Hence it’s philosophically least confusing, and all-around better, simply to avoid using that term in this context altogether.

[iv] For an application of dignitarian moral reasoning to owning, carrying, or using guns, see (Hanna, 2022d).

AGAINST PROFESSIONAL PHILOSOPHY REDUX 742

Mr Nemo, W, X, Y, & Z, Monday 21 November 2022

Against Professional Philosophy is a sub-project of the online mega-project Philosophy Without Borders, which is home-based on Patreon here.

Please consider becoming a patron!
