
Monday, February 10, 2025

Book review: Handbook on the Ethics of AI #2

Handbook on the Ethics of AI

Well, I am sold. It is encouraging again to read 'reflect-ion' on the first page and throughout chapter 1:

'... why the very idea of AI gives rise to (or should give rise to) ethical reflection.' (p.21) - and, I am sure, the book as a whole.

Most readers may pass this by, but to me it matters. Ethics scholars, however, will see how important reflection is, and how conjoint it is with the deliberation and argumentation of ethics. Especially so given the five-part division of the text noted in post #1: Foundations and Context; Responsibilities; Rights; Politics and Power; and Thinking Otherwise.

Tables break up and inform the text. I like the way that for many tables the source reads: 'Author's own elaboration'.
Given the subject matter, yes, the authors have the literature, but they are also formulating their chosen themes against the present, near-future socio-political scenarios and much more. So reflection should be expected. With Hodges' model I'm often asked why particular concepts are placed in a particular domain. Perhaps that is a question for general artificial intelligence? Subjectivity is a key challenge, as is its effective communication.

If I said that chapter 1 is a great primer on What Is This Thing Called the Ethics of AI and What Calls for It?, I'd sell the book short: the whole text fulfils this purpose. The definitions of, and responses to, 'What is AI?' are very helpful, taking in ancient to more recent history. I have a book in a box by David Chalmers (sorry, Prof.) - one day - which would help here, discussing intelligence and consciousness.

I'm primed to pick up on 'gaps' (theory-practice), and in human experience they are inevitably legion. The responsibility gap (and meaningfulness!) c/o Sven Nyholm is a place to return to, to debate why AI raises such a broad range of ethical issues. Patiency is never far away. Here, the 'patient' is stark: moral agents and moral patients (pp.20 and 89). This blog is littered with posts on Drupal, which to date I have never mastered. I remember sitting in DrupalCon presentations, in the midst of very skilled professional coders. A member of the community stood at the podium recounting their lived experience of impostor syndrome. In November, with a presentation of my own, I had my own encounter. Well into reading this book, it really did help. The conclusion of chapter 1 contrasts the frequent need for a big-picture overview with reflection on more specific issues. That's quite fitting for me.

I am biased in the encouraging kernels I find: seek them I do. Yes, as chapter 2, AI Ethics before Frankenstein, begins: 'there remains work to be done in charting the long-range conceptual development of AI in the history of political thought.' (p.27, my emphasis). And much more, I hope. Chapter 2 combines literature and myth; I enjoyed the discussion of techne, Prometheus, Hobbes, the interdisciplinary bridge of techno-politics, and articulating the mechanistic in our lives. Having focussed recently on thresholds and threshold concepts while revising two papers, I noted Hunt's line: 'Frankenstein is a "threshold" text for "modern political science fiction"' (p.31).

This resonated - especially 'for its prediction that the nascent Enlightenment-era sciences of chemistry, anatomy, and electricity could be used to artificially make a human being' (p.31). The exploration of bad education and bad governance through Shelley and social history is other-worldly in itself. Hunt's conclusion, which includes 'Wollstonecraft was the pivotal figure in the process of refracting ideas of AI from Hobbes to Shelley, ...', left me wondering about the refactoring of code.

Chapter 3, Smith's Faith, Tech, and Ethics of AI, brings more reflect-ion, with the bonus for me of Descartes. On faith, the human situation is also repeatedly stressed. Delving into creation too, the chapter details forms of theology - Augustine and Bede, for example - and philosophy - Bacon, Hume, Kant - and their role in diminishing the influence of religious thought and epistemology (p.39). How long a push was that? Surely it is ongoing - in some quarters? Medieval thought is discussed, and I recall this broad period also being regarded as misunderstood [Medieval Philosophy - Bryan Magee & Anthony Kenny (1987)].

On a personal level, I've always placed belief in the intra-/interpersonal domain of Hodges' model. In psychosis, depression, anxiety, phobias - in short, mental health - beliefs can be disrupted, and influenced of course when all is well. Religion I have framed at both the individual and socio-political levels. Clinically, it is the individual patient's beliefs, if they follow a particular religion, that we as health personnel also need to be cognizant of. Machine or human; object or being, as Smith asks:

                SELF / INDIVIDUAL  -  OBJECT / THING
                              |
          INTERPERSONAL       :       SCIENCES
          the human -         :       the machine -
          - can become        :       - can become
            machine           :         human
HUMANISTIC ------------------------------------------ MECHANISTIC
          SOCIOLOGY           :       POLITICAL
                    socio-    :    -political
                              |
                            GROUP

There is depth of discussion too: on AI and power, and a Christian response to AI ethics. This handbook is not just for quick reference. There is indeed redemption (p.43) to be found:

'It is easier to cover a blemish than examine why it resulted in the first place: genetics, diet, stress, or lack of self-care.'

I argue for Hodges' model as a tool to identify, (re)present, and relate the determinants of health: all of them. The points and paragraphs on theology, mind, body and self are well worth revisiting.

Johnson's chapter 4, What are Responsible AI researchers really arguing about?, provides support for my ongoing belief in the value of sociotechnical theories and approaches - and adds another book, on enactivism, to my list. Given the ethics of the Global South (and technological/electronic colonialism), more could perhaps be made of low- and middle-income countries (LMICs), but the point is made in Table 4.1 (Examples of functionalist evaluations of AI models, p.52). Section 2.2, Constructivism, is a rich seam for me. Table 4.2 touches on medical diagnosis and the socio-technical. Johnson notes that in seeking answers to AI ethics problems, scientists see constructivist approaches and '... constructivism as a metaethic, holding a rich space for more research into pluralistic AI-Alignment' (p.53). And the potential for much more. This is suggested (for me) in the reference to the need for a holistic view of an AI model's genesis, and 'two sides of one coin' - dichotomy, polarity, oppositions, binary reductions ... ?

Johnson's conclusion is an indirect thumbs up (cue Arnie style - of course) to Hodges' model as a project. Conceptual frameworks can indeed:
'... offer unique perspectives on the methods commonly employed and to understand and mitigate the risks and dangers of AI.' (p.62).
If ethics gets 'technical', then the book's full title is well-earned. Two case studies demonstrate functionalist and constructivist debates in responsible AI. I don't remember Capt. Kirk et al. stating to a malign alien 'intelligence': "The trophy doesn't fit in the suitcase because it's too large (small)" (p.58). The writers put other solutions to the characters' lips. Spock, however, would find the discussion here on 'Artificial General Intelligence, 4E Cognition and Enactivism' - "Fascinating!"
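As an aside, that quoted line is a Winograd-style commonsense test: change one adjective and the referent of 'it' flips - trivial for us, a classic stress test for machine 'understanding'. A minimal sketch of the pair, in Python (my illustration, not the book's):

# Winograd-style minimal pair quoted on p.58: swapping one adjective
# flips which noun the pronoun 'it' refers to.
sentence = "The trophy doesn't fit in the suitcase because it's too {adj}."
referents = {
    "large": "the trophy",    # too large: the trophy won't fit
    "small": "the suitcase",  # too small: the suitcase can't hold it
}

for adj, referent in referents.items():
    print(sentence.format(adj=adj), "->", "'it' =", referent)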

Much more to follow ...

Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1
https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html


Related previous posts: 'general + AI'