Hodges' Model: Welcome to the QUAD: Book review #3: Handbook on the Ethics of AI

Hodges' model is a conceptual framework to support reflection and critical thinking. Situated, the model can help integrate all disciplines (academic and professional). Amid news items, are posts that illustrate the scope and application of the model. A bibliography and A4 template are provided in the sidebar. Welcome to the QUAD ...

Wednesday, February 12, 2025

Book review #3: Handbook on the Ethics of AI

I've started several 'gap' lists, including a draft post or two. Bridging the theory-practice gap is one of the original purposes of Hodges' model.

Friedman's chapter 5, 'Responding to the (Techno) Responsibility Gap(s)', reads almost as if the title is parameterised. We could insert the domains of Hodges' model (plus the spiritual - safeguarding!) in there. Then there is the magic of language: substitute accountability for responsibility and do a deep dive, applying Hodges' model in a specific domain. Friedman notes it is not entirely accurate to talk of a single "responsibility gap". Can this provide evidence for the situated potential of Hodges' model and its role in fostering situational awareness? Although there is an ongoing roll-back of CSR - corporate social responsibility - I scribbled in the margin:

                   INDIVIDUAL
                       |
    INTERPERSONAL      :      SCIENCES
        user           :        user
HUMANISTIC ----------------------------- MECHANISTIC
      SOCIOLOGY        :      POLITICAL
        user           :      corporate
                       |
                     GROUP

Of course, 'user' still applies in the political domain, as we are all citizens: users of the State (note also the socio-technical utility of keeping the 'user' in sight). There is also personal ethics, and the legal duty of care - as a nurse (in my case) and as a member of the public. Discussion of notions of responsibility in light of AI, especially self-driving vehicles and autonomous machines (weapons), is much needed. Regulatory gaps are recognised, amid current news of debate on the degree of regulation of national and international financial systems. Nyholm's four-fold formulation of responsibility is invaluable, as examples are provided. There is a reference to nine notions of responsibility, and the link to accountability and retribution. I like Friedman's discussion of responses to responsibility gaps (4.3): the optimists' technological, human-centred and hybrid approaches, plus the pessimists'. The latter includes debate and campaigns to 'stop the killer robots'. Did this chapter have a positive effect in extending thoughts about events in the Middle East? There are two points here I will return to in the final post.

For students, Schwarz's chapter 6 takes readers back to a literally pivotal development in ethical thought and debate: trolleyology, with the addition of algorithms and killer robots. What Searle's Chinese Room argument (noted on p.276 here) is to AI, trolley-based dilemmas are to ethics - a point made by Schwarz, referring to the development of subsequent variations. (Following on from #2, Brian Magee's interviews with John Searle on the philosophy of language and Wittgenstein are also well worth a listen.) Schwarz's chapter (and the whole book) could be read in preparation for an undergraduate course: computing, law, policy, philosophy, and ethics, for example. The challenge of determining responsibility and accountability - "the problem of many hands" (p.71) - is not limited to AI. This is a global issue in achieving justice for many previously healthy, competent, quite 'ordinary' people.

The question posed above applies "equally"(!) here too: mathematical morality for war. Ever since reading about argumentation through the Open University and other sources, I have wondered whether this is applicable to Hodges' model (with its 'own' algebra). I'm no mathematician, but the book provoked excitement rather than anxiety as I read of the 'Principle of Permissible Harm' (PPH) and probabilistic reasoning - with formulae (pp.88-89). There are some important (imho) 'take-away' messages here, but I don't want to spoil things. Nurses need mathematical skills, in drug calculations, and are assessed on the same. Statistics are used in research, of course, but maths is hardly the primary motivation for entry into the care professions:
'Designing technologies in a way that takes into account broader human-centric values is an important and laudable approach to responding to new and emerging technologies in any arena. It cannot, however, serve as a substitute for moral deliberation and the act of taking moral responsibility for harmful acts, especially when lives are at stake. Reducing this to a mathematical problem ignores everything that cannot be captured in discrete numerical terms or as data points. And in situations where people and their lives become nothing more than data points, non-computational aspects, such as relational or embodied dimensions of human life, are marginalized if not entirely obscured. The real world is not reducible to binary logic; it is rife with contested, contradictory, and clashing values that might all be equally relevant. It is complex in ways that cannot always be neatly captured or mathematically modeled. In other words, ethics cannot be "solved."' p.91.
More to follow ...

Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1

https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html


Related previous posts: 'general + AI'