Hodges' Model: Welcome to the QUAD: Book review: #4 - Handbook on the Ethics of AI


Sunday, February 16, 2025

Book review: #4 - Handbook on the Ethics of AI

Chapter 7 is no doubt one of the engine rooms of this book. In 'AI Ethics and Machine Ethics', John-Stewart Gordon follows preceding authors in citing Nyholm and others. This probably helps the overall coherence of the book, and underlines what is a dynamic and emerging field. Reading 'What is AI ethics?' (p.98), I made a note about humanity's cognitive housekeeping (and that of other life-forms?), carried out through sleep, the ability to unlearn, and to forget. Efforts to mitigate machine bias are listed, 1-5 (p.100). As observed previously, this author is also thinking on their feet (and very well too), with 'Source: Author's own' in a figure on the systemic place of machine ethics and in Table 7.1, 'Four approaches to machine ethics'.

Section 3.4 considers the challenges involved in developing ethical AI systems. As in healthcare, we see where the problem of 'many hands' emerges from:
'Developing ethical AI systems requires collaboration between computer scientists, ethicists, and other stakeholders, which can be challenging due to differences in expertise, terminology, and perspectives. Computer scientists bring technical expertise to the table, while ethicists provide insights on moral principles and values that should guide AI behavior. However, these experts often operate within their own domains and may lack a common language to effectively communicate and collaborate. 
Other stakeholders, such as policymakers, industry professionals, and end-users, also play vital roles in shaping the ethical development of AI systems. They bring diverse perspectives, needs and expectations that need to be considered and balanced during the design, implementation, and regulation of AI technologies. For instance, policymakers are responsible for creating guidelines and regulations that ensure the safe and responsible use of AI, while industry professionals must adhere to these standards and navigate the practical challenges of integrating ethical considerations into their AI-driven products and services.' p.104.
The solution includes interdisciplinary and ethnically diverse teams, cross-disciplinary training, and a more holistic and comprehensive approach to AI development. Unintended consequences (a news item today, 13th Feb 2025) are discussed, as is the dual-use nature of AI, with technology 'delivering' surveillance that can act for good and potentially for ill (China's Social Credit System).

AI ethics is also, apparently, itself a conceptual framework (pp.99, 107).

What is permitted by law in a State clearly varies. Chapter 8 deals with 'From Ethics to Law: Why, When, and How to Regulate AI'. Collingridge is another recurring citation, and presents a dilemma no less (Section 3, p.116); one to follow up (Coeckelbergh too, chapter 9). I've always felt that some knowledge of social and economic history is a good foundation. History is reflected here in the resort to 'masterly inactivity' (p.118) and in regulatory approaches, as debated this past week at the AI Action Summit in France.

I can still recall the sense of competition in having time in the sandpit at infants school.* Software developers will use sandboxes to develop and revise code in an isolated environment before it goes live, or wild. With AI, the book refers to mishaps: projects that have gone wrong. We need to take care regarding the nature of the 'sandboxes' that are used. In a way this speaks to the utility of Hodges' model as a resource for lifelong learning, a learner being fully socialised into a profession and its evolving disciplines (a small illustrative sketch of the sandbox idea follows the diagram below):

                     INDIVIDUAL
                         |
       INTERPERSONAL     :     SCIENCES
HUMANISTIC --------------------------------------- MECHANISTIC
         SOCIOLOGY       :     POLITICAL
                         |
                       GROUP

'sand-box' written across the care domains

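Purely as my own aside, and not from the book: a minimal sketch of the 'isolate first, observe, then release' idea behind a software sandbox, assuming Python and a hypothetical snippet as the code under test.

import subprocess
import sys
import tempfile

# Hypothetical code under test - in practice this would be the new or
# revised code being exercised before it 'goes live, or wild'.
snippet = "print('hello from the sandbox')"

with tempfile.TemporaryDirectory() as workdir:
    # The child process runs in its own throwaway working directory and is
    # killed if it overruns. Real sandboxes add far stronger isolation
    # (containers, restricted permissions, no network access).
    result = subprocess.run(
        [sys.executable, "-c", snippet],
        cwd=workdir,
        capture_output=True,
        text=True,
        timeout=5,
    )

print(result.stdout.strip())  # -> hello from the sandbox

The point of the toy sketch is simply that what a sandbox permits, and what it lets escape, is a design decision; hence the need to take care over the nature of the 'sandboxes' used with AI.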
I had to look up the CREEPER (Curbing Realistic Exploitative Electronic Pedophilic Robots) Act (p.121). We are accustomed to outsourcing, and its perverse effects facilitated through globalisation, with nations transferring their pollution: plastics and e-waste. There are efforts, as Chesterman writes, to limit the outsourcing of decisions to AI (pp.122-123). I enjoyed the nods to Isaac Asimov too.

In chapter 9, 'AI, Design, and More-than-Human Justice', Gellers provides another cylinder to drive the text forward, while also acknowledging the future for all: organic and inorganic. Gellers argues for attention to AI and justice, not merely the moral and legal status of AI. Reading of 'communities of justice' (p.128), I thought of the established notion of 'communities of practice', or the collective of users, socio-technically. Gellers highlights our being in the Anthropocene and how the environment and nature are figuring in law. So, not just legal questions for humans but for natural 'things' and entities also (posted previously: Person & Sense of Place: A River runs through it ...).


                     INDIVIDUAL
                         |
       INTERPERSONAL     :     SCIENCES
HUMANISTIC --------------------------------------- MECHANISTIC
         SOCIOLOGY       :     POLITICAL
                         |
                       GROUP

Mapped across the care domains: person-hood (p.129); 'Who' / [what >]; algorithmic collective; communities of practice; justice; communities of justice

I don't think we are there yet. Even accepting that built-in obsolescence is an established and legal thing, AI could turn materialism on its head. Looking at the materials, metals, rare earth elements ... that go in: how long do they last, and how are the materials recovered? AI and computing desperately need the doughnut economy; they must epitomise the doughnut. Looking for a place to live, you notice the way probably quite serviceable bathrooms and kitchens are ripped out, the suites on a skip. That is quite a scene on the rubbish tip in the film 'A.I. Artificial Intelligence'. The amount of energy in total production and materials is still not wholly valued, even before other values are 'added'. The focus on design here is integral to such questions. It has brought me to care design, and care architecture. Socio-technical aspects are highlighted again by Gellers (for me, as noted on W2tQ, that hyphen matters).

Unsurprisingly, Descartes also figures in this chapter, in the section 'Technology in the Anthropocene':

'First, for philosophers of technology, the Anthropocene obliterates the nature/culture divide intrinsic to Cartesian thinking (Conty, 2017) that altered the course of history by rebranding technical objects as artificial and thus divorced from nature (Hui, 2017).' p.131.
It is encouraging that there is now recognition that 'we are nature too'. While we can record thought, as brain activity, on an EEG, from the outset (school studies of biology, then human biology) I have always associated intelligence, mind, thought, emotion, hope, dreams and cognition ... with the intra- and interpersonal domain. If you adopt a socio-technical approach (an overview, a mindset) then invariably the technosphere will straddle the HUMANISTIC <-> MECHANISTIC axis of Hodges' model:
'So far, I hope to have established that technology plays a prominent, if complicated, role in the Anthropocene. But how does this relate to justice? Technology's relationship to ecological systems holds important implications for determining who belongs to communities of justice. To begin, the Earth system consists of interrelated biological, physical, and mental "worlds" (Kotzé, 2020, p. 80). The mental world, also referred to as the "technosphere," is an autonomous amalgamation of energy, communication, transportation, and financial systems, along with cities, governments, factories, farms, and other built systems and their constituent parts (Haff, 2014, P. 127). Technological artifacts such as AI are therefore part of the technosphere.' p.132.

Hodges' model: axes and care domains

We read of robots-in-ecology and robots-for-ecology. I was reminded of the SF film 'Silent Running' here. The real merit and encouragement for me in Gellers' contribution is section 4, which compares ways we can approach more-than-human justice. Total eclipses are supposed to be rare; irrespective of what they might portend, AI, the climate and life on Earth now keep eclipsing each other: thus far only partially. In 4.1, multispecies justice, Peter Singer really is ahead of his time (chapter 11, here!); following socio-ecological justice (4.2), we arrive at planetary justice. Note (in Hodges' model) the 'distance' between what is generally deemed social and where the ecological, in reality, physically resides. In the margin I'd scribbled 'Less a case of "take me to your leader" and more a case of "show me all your cards!"'. The synthesis and closing section on the design of robots specifically for more-than-human justice helps here by including relationality, intersectionality and contexts. 'Design justice' is something the book prompts me to carry forward (p.137).

One final post on HEoAI to follow ...

Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1
https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html

Related previous posts: 'general + AI'

*That early streak of competitiveness was clearly educated out of me.