Hodges' Model: Welcome to the QUAD

Hodges' model is a conceptual framework to support reflection and critical thinking. Situated, the model can help integrate all disciplines (academic and professional). Amid news items are posts that illustrate the scope and application of the model. A bibliography and A4 template are provided in the sidebar. Welcome to the QUAD ...

Thursday, February 20, 2025

Book review: #5 - Handbook on the Ethics of AI

Handbook on the Ethics of AI
If there is an overall theme to the book it is anthropomorphism. Some argue it is consequential in nature, given the risks we are running. Sandry's chapter 10, Anthropomorphism and its Discontents, begins by highlighting the dualities, dichotomies, and oppositions that instantly come into effect in this emerging theoretical, practical and policy field. If any term can be 'loaded', 'anthropomorphic' carries extra baggage: historical, religious, natural, aesthetic, philosophical, physical, existential. The dual issue of making a machine that looks human, versus machines that could deceive humans (used remotely today, and in situ in the future?), is considered. In our interactions with AI tech, I found intention (and attention) of specific interest. Sandry seeks definitions, starting with dictionaries; the discussion is helpful across the arts too. The reader is left well briefed, technically too: intrinsic and extrinsic forms, the role of the intentional stance, three factors. ...

Health is not a primary focus of the book. The index does not list health, medicine, or nursing, at least not where they might be expected. The index is comprehensive, but I wonder if it could be improved. Care is suggested through social robotics (p.147). Design figures again, anthropomorphically of course (p.147). Specific attention is given to ethics and Taking Care With Language (sections 6-7). I scribbled (again!) about care for tech - in the material, energy, and production 'costs' in an ecological sense. SUVs annoy me (sorry!): are they all necessary? On language, I thought back to McDermott, D. (1985). Artificial Intelligence Meets Natural Stupidity. In: Haugeland, J. (Ed.), Mind Design. MIT Press, London, pp.144-145:


Balance in subjectivity and objectivity of stances, and the resulting content and conclusions, can be difficult to achieve and represent. The latter section responds to frustration with the term anthropomorphism. I found myself in a couple's lounge, as a community mental health nurse, acutely aware of the role of proxies in dementia care, as Sandry described Paula Sweeney's 'fictional dualism' (p.150). Hodges' model fits well here too, as regards anxiety. To socialbots, I added carebots. There seems potential in sociomorphing.

As noted previously, reflection and relational points litter the text. In Jecker's chapter 11, A Relational Approach to Moral Standing for Robots and AI, this is more explicit. Jecker refers to care of others - animals too. In computer science, and seeking to retain a socio-technical perspective, I've seen potential in capability and maturity frameworks; section 2 provides some discussion of the former. One of the first words I looked up in the index was isomorphic. It wasn't listed, but I found reference to it on page 157: '... a community of robots psychologically isomorphic to human beings that share our psychology ...'. Maths is a focus here too; I must try to utilise AI to aid my learning and understanding. This is - must be - an outcome of reading HEoAI.

The section on (self-)consciousness is engaging and not limited (again) to machine intelligence. Subjectivity arises again. I wrote a note re. the precautionary principle, my 'prompt' being the ethical principle. A gift was discovered in 3. A CONCEPTUAL REFLECTION, and within 3.2 Relational Ethics, preceded by the potential of Kantian, utilitarian and virtue ethics. While not wishing to virtue signal, I've long speculated on how other cultures could inform nursing theory and models of care. Jecker incorporates the African philosophy of ubuntu in relational ethics. Students would enjoy this, especially as Jecker (Source: Author) provides tables laying out the ethical approaches and robot & AI capabilities. This can also encompass older adults and care contexts with social robots. Nussbaum's capabilities approach, applied to human development, is also adopted here - another useful reminder:

(And, once again, I recall Nussbaum's talk on Aristotle.) In post #4 I wrote of the rubbish scene in the film A.I. Artificial Intelligence, but it's here (p.166) that I wrote the note. There is so much I'm skimming over - believe it or not.

The conclusion, in mentioning a hybrid future consisting of both humanistic and mechanistic agents, appears to find an additional theoretical and practical ally in Hodges' model?


Chapter 12 by Navas, AI Ethics, Aesthetics, Art and Artistry, visits the history and philosophy of this subject too, esp. from the 1700s. With Žižek and Deleuze there is much preparatory reading for would-be undergrads - and general readers keen to have an awareness of contemporary issues. There is quite a triad here - disassembled - across several sections. I have quoted from the volume many times, but p.179 concerns empathy:
'Empathy challenges the ongoing optimization of technology, because it takes time to exercise it. A person needs time to think about whatever issue, situation, thing, or person they may empathize with. In other words, empathy is essential for humans to understand and figure their relation to others and their surroundings. Empathy, if practiced reflexively, can lead to critical thinking, which may not lead to clear results but the activity may and often does end in "wasted" time if framed under the drive for efficiency, which is clearly something AI is designed to achieve. And lastly, empathy, because it has been foundational to art, is also part of art's long-term resistance against capitalism's exponential dependence on speed of production. At the core of AI ethics, then, we find speed of production and consumption coming in conflict with human existence itself. Humans are proving to be inefficient actors in the very system they built for their own benefit, which obsessively demands faster cultural activity from people, which (to be blunt) translates to an unapologetic and incessant desire for profit.' pp.179-180.
Section 6, on creativity AND speed, is fascinating, especially as human-machine (brain) interfaces develop apace. Concerned as I am with what is a metacognitive tool, section 8, Metacreativity, forms the conclusion of chapter 12. The notes refer to generative coding, which has come up in various webinars.

Silent Running: Film
Briefly, chapter 13 covers AI and the Environment. It is amazing (or not) how in a few sentences your mind can be changed: from "Really!" upon reading 'species culling', to this being explained via the crown-of-thorns starfish (hence COTSbot) and its toll on coral in Queensland. There are robots-for-ecology. I smiled returning to Silent Running's Dewey, Huey, and Louie becoming reality. A fascinating point: what is collective must be recognised in terms of causation. Appropriately, PEAS is an acronym: probabilistic (weather) event attribution studies are a reality in conjunction with remote sensing and tracking.

Drones are also developing apace, and applied robotics - tree-climbing. Less reassuring (adding to nature's precarity?) are artificial insects to undertake pollination; more positively, robots reducing the impact of chemical and toxic spills. Simulation for training is well established in medicine and nursing; section 2.3 addresses this, where PEAS form super-ensembles of data (I like that). PEAS can also have a role in determining an evidence-base and demonstrating, it is hoped, provenance for that evidence in the movement of populations, for example, climate refugees. In conflicts, human rights and justice, forensic architecture is of course well established: Forensic Architecture (Care Forensics?)

The summative way chapter 13 closes is a helpful approach which I may try to duplicate.

Socio-technical approaches are found in chapter 14, Uses and Abuses of AI Ethics, by Frank & Klincewicz. Understandably, boundaries play a large role, as you would expect in deliberating value and values, moral patients and agents. The Collingridge dilemma is discussed: the need to put controls in place for a technology while it is still in development, otherwise control may be lost (p.213). Diversity - a story of the moment - closes out this chapter. There is a 'nice' continuity across chapters to
15 The (Un)bearable Whiteness of AI Ethics by Syed Mustafa Ali et al. (the first note highlights that the format is a dialogue). The (colonial) politics of North-South are duly noted, and Africa (section 4). Section 8 points to technological (and health) colonialism. I take as a positive that the 'hyphen' also has a place (S.10).

The Savage Mind

Again related, as per the book's structured parts, chapter 16, Ethics beyond Ethics: AI, Power, and Colonialism by Kim, prompts the reader to revisit the concepts of 'other' and alterity, even if 'understood'. I pencilled 'Savage Mind' here. Machines are posited as the colonial other. There is a repeated argument here, as with humankind's going back to the moon (to stay) and on to Mars: we should sort Earth out first. So too for inclusion and the machines. What about the people who are excluded and disenfranchised, increasingly so? I wondered (previously) if (even) more could be made of transparency. Noting the word subaltern, it appears this has suddenly been attributed to Ukraine? On disability and ableism, I was reminded also of radio history and 'Does He Take Sugar?'

The statement: 'Machines are perceived as distant - temporally, spatially, or socially - and different from human culture.' (p.234). So true.

INDIVIDUAL
|
      INTERPERSONAL    :     SCIENCES               
HUMANISTIC  --------------------------------------  MECHANISTIC      
 SOCIOLOGY  :    POLITICAL 
|
GROUP
cognitive - conceptual
'spaces'
distance:
time, space
human cultures
societies
difference


'Binary-opposition', and the need to think outside of this, is acknowledged. And as if (perhaps) to stress both distance and proximity, I wrote 'mobius' in the margin (p.238). Hutchings (2015) sounds a valuable reference: 'Ethical Encounters - Encountering Ethics'. The books I've read contribute evidence to revalidate my nurse registration. I realise I've sold myself short in listing the book's titles. Although not discussed here, the remaining chapters are excellent critical reading at a time when diversity, equality and inclusion policies are being rolled back and undone. I will highlight these:

  • 17 Disabling AI: Biases and Values Embedded in Artificial Intelligence (quoted in 'Do you fit the description?')
  • 18 The AI Imaginary: AI, Ethics and Communication
  • 19 Feminist Ethics and AI: A Subfield of Feminist Philosophy of Technology
  • 20 Buddhism and the Ethics of Artificial Intelligence
  • 21 Queering the Ethics of AI

In chapter 16, 'Ethics beyond Ethics...', just before the conclusion, there is a sentence I will carry forward -
'Forming an identity requires that "I identify something or someone beyond me" - with one or more categories persons, non-human others, acts, ideals, values, or social systems' (p.243).
- and the points following, and end there.

Many thanks to David J. Gunkel (Editor), the many contributors, and Edward Elgar Publishing Ltd for my copy. I have greatly enjoyed and learned much from reading this book.

Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1
https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html





Images:
Silent Running 
https://cdna.artstation.com/p/assets/images/images/016/839/904/large/david-eagan-screenshot001.jpg?1553673214
The Savage Mind
https://en.wikipedia.org/wiki/File:The_Savage_Mind_(first_edition).jpg

Related previous posts: 'general + AI'


Wednesday, February 19, 2025

Frayn's - shared 'Home Address'

Frequently I've made the point that 'person-centred' means placing the individual, patient, carer ... whatever the context, at the centre of Hodges' model. At the same time this is of course an idealisation:

individual
|
INTERPERSONAL : SCIENCES
humanistic ------------------------------- mechanistic
SOCIOLOGY : POLITICAL
|
group

What is your
'home' address?


'Science, it turns out, is in this respect simply an extension of all the other means we have found of representing the world. As with pictures and narratives, the world is unimaginable without the focus of a viewpoint. To understand the world in any way whatsoever - scientifically, historically, artistically, anecdotally, imaginatively - we find ourselves compelled to assume a potential point of convergence from which everything is viewed, measured, and recounted. About the entity that defines this point we know only that it is by definition invisible to us. And since the instruments of science, of logic and mathematics, and of art, are all products of this invisible entity, they will not serve to represent or explain it to us. Anything in the world, or out of it, can be perceived or thought about, or both, and represented in our various codes. The only thing that systematically eludes us, whichever way we turn, is the something upon which everything else depends. The conscious subject that gives meaning to the objective universe cannot give meaning to itself. Without it nothing can be understood; about it nothing can be said. 
'The point of convergence . . .' And already we've gone wrong. The self isn't a point, in any remotely geometrical sense, but a complex organisation; and the consciousness that it generates, and by which it is animated, is a complex phenomenon.' pp.401-402. (my emphasis)







Frayn, Michael (2006). The human touch: our part in the creation of a universe. 'Home Address', Chapter 5. London: Faber & Faber. 

A final post from my paperback copy, next - off to a good secondhand home. Previously: 'Frayn'.

Monday, February 17, 2025

Harnessing the Power of Artificial Intelligence to Improve Outcomes for Patients with Long-Term Health Conditions

 Dear Colleagues and Friends,

We are organising a Special Session: Harnessing the Power of Artificial Intelligence to Improve Outcomes for Patients with Long-Term Health Conditions

(https://aiih.cc/lthc/) in the International Conference on AI in Healthcare (AIiH), 8-10 September 2025, Jesus College, University of Cambridge.

We would like to accept both full length papers (12 pages plus references) and short abstracts (up to 5 pages including references) for special sessions. Submission guidelines can be found here, including paper templates in both Word and LaTeX: https://aiih.cc/paper-submission/

The accepted full papers and abstracts will be published in the Springer LNCS volumes.

Full Paper submission deadline:            Friday 11 April 2025

Abstract submission deadline:               Monday 30 June 2025

We are looking forward to meeting you.

Best wishes

Shang-Ming Zhou

Professor in e-Health | Faculty of Health | University of Plymouth | PL4 8AA | UK.

Email :  shangming.zhou AT plymouth.ac.uk; smzhou AT ieee.org

https://www.plymouth.ac.uk/staff/shang-ming-zhou

https://www.plymouth.ac.uk/research/centre-for-health-technology

Sunday, February 16, 2025

Book review: #4 - Handbook on the Ethics of AI

Chapter 7 is no doubt one of the engine rooms of this book. In AI Ethics and Machine Ethics, John-Stewart Gordon follows preceding authors in citing Nyholm and others. This probably helps the overall coherence of the book, and underlines what is a dynamic and emerging field. Reading 'What is AI ethics?' (p.98), I made a note about humanity's cognitive housekeeping (and that of other life-forms?) executed through sleep, the ability to unlearn, and to forget. Efforts to mitigate machine bias are provided in a list, 1-5 (p.100). As observed previously, this author is also thinking on their feet (and very well too), with 'Source: Author's own' in a figure for the systemic place of machine ethics, and another, table 7.1, 'Four approaches to machine ethics'.

3.4 considers the challenges involved in developing ethical AI systems. As in healthcare, we see where the (problem of) 'many hands' emerges from:
'Developing ethical AI systems requires collaboration between computer scientists, ethicists, and other stakeholders, which can be challenging due to differences in expertise, terminology, and perspectives. Computer scientists bring technical expertise to the table, while ethicists provide insights on moral principles and values that should guide AI behavior. However, these experts often operate within their own domains and may lack a common language to effectively communicate and collaborate. 
Other stakeholders, such as policymakers, industry professionals, and end-users, also play vital roles in shaping the ethical development of AI systems. They bring diverse perspectives, needs and expectations that need to be considered and balanced during the design, implementation, and regulation of AI technologies. For instance, policymakers are responsible for creating guidelines and regulations that ensure the safe and responsible use of AI, while industry professionals must adhere to these standards and navigate the practical challenges of integrating ethical considerations into their AI-driven products and services.' p.104.
The solution includes interdisciplinary and ethnically diverse teams, cross-disciplinary training, and a more holistic and comprehensive approach to AI development. Unintended consequences (a news item today, 13th Feb 2025) are discussed, as is the dual-use nature of AI and technological 'delivery': surveillance acting for good and potentially ill (China's Social Credit Point System).

AI ethics is also itself apparently a conceptual framework (p. 99,107).

What is permitted by law in a State clearly varies. Chapter 8 deals with From Ethics to Law: Why, When, and How to Regulate AI. Collingridge is another recurring citation, and presents a dilemma no less (Section 3, p.116); one to follow up (Coeckelbergh too, chap. 9). I've always felt that some knowledge of social and economic history is a good foundation. History is reflected here in resort to 'masterly inactivity' (p.118) and regulatory approaches - as debated this past week at the AI Action Summit in France.

I can still recall the sense of competition in having time in the sandpit at infants school.* Software developers use sandboxes to develop and revise code in an isolated environment, before it goes live, or wild. With AI, the book refers to mishaps, projects that have gone wrong. We need to take care regarding the nature of the 'sandboxes' that are used. In a way this speaks to the utility of Hodges' model as a resource for lifelong learning, a learner being fully socialised into a profession and its evolving disciplines:

INDIVIDUAL
|
      INTERPERSONAL    :     SCIENCES               
HUMANISTIC  --------------------------------------  MECHANISTIC      
 SOCIOLOGY  :    POLITICAL 
|
GROUP
box
sandbox
box
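The sandbox idea above can be sketched in a few lines of Python. This is purely a toy illustration of my own (nothing from the book, and a long way from a production sandbox): a snippet is run in a separate, isolated interpreter process with a timeout, so whatever it does stays apart from the caller.

```python
import os
import subprocess
import sys
import tempfile

def run_sandboxed(code: str, timeout: float = 2.0) -> str:
    """Run a Python snippet in a separate interpreter process.

    Isolation here is only by process, '-I' (isolated mode) and a
    timeout - real sandboxes add filesystem, network and memory
    limits on top of this.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # isolated mode: ignores env vars and user site
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
    finally:
        os.remove(path)  # tidy up the temporary script

print(run_sandboxed("print(2 + 2)"))
```

The point of the sketch is the separation itself: an error, an infinite loop (caught by the timeout), or a crash in the snippet cannot take the calling program down with it, which is the same instinct behind sandboxing AI systems before they go live.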

I had to look up the CREEPER (Curbing Realistic Exploitative Electronic Pedophilic Robots) Act (p.121). We are accustomed to outsourcing, and its perverse effects facilitated through globalisation, with nations transferring their pollution - plastics, and e-wastes. There are efforts, as Chesterman writes, to limit outsourcing of decisions to AI (pp.122-123). I enjoyed the nods to Isaac Asimov too.

Gellers, in chapter 9, AI, Design, and More-than-Human Justice, provides another cylinder to drive the text forward, and acknowledges the future for all: organic and inorganic. Gellers argues for attention to AI and justice, not merely the moral and legal status of AI. Reading of 'communities of justice' (p.128) recalled the established notion of 'communities of practice', or the collective of users - socio-technically. Gellers highlights our being in the Anthropocene, and how the environment and nature are figuring in law. So, not just legal questions for humans, but for natural 'things' and entities also (posted previously: Person & Sense of Place: A River runs through it ...).


INDIVIDUAL
|
      INTERPERSONAL    :     SCIENCES               
HUMANISTIC  --------------------------------------  MECHANISTIC      
 SOCIOLOGY  :    POLITICAL 
|
GROUP
person-
[what >]
'Who'
-hood (p.129)

algorithmic
collective

communities of practice
justice

communities of justice

I don't think we are there yet? Even accepting that built-in obsolescence is an established and legal thing. But AI could turn materialism on its head. Looking at the materials, metals, rare earth elements ... that go in: how long do they last, and how are the materials recovered? AI and computing desperately need the doughnut economy, must epitomize the doughnut. Looking for a place to live, you notice the way probably quite serviceable bathrooms and kitchens are ripped out. The suites on a skip. That is quite a scene on the rubbish tip in the film 'A.I. Artificial Intelligence'. The amount of energy in total production and materials is still not wholly valued, even before other values are 'added'? The focus on design here is integral to such questions. It has brought me to care design, and care architecture. Socio-technical aspects are highlighted again by Gellers (for me, as noted on W2tQ, that hyphen matters).

Unsurprisingly, Descartes also figures in this chapter, in the section Technology in the Anthropocene:

'First, for philosophers of technology, the Anthropocene obliterates the nature/culture divide intrinsic to Cartesian thinking (Conty, 2017) that altered the course of history by rebranding technical objects as artificial and thus divorced from nature (Hui, 2017).' p.131.
It is encouraging that there is now recognition that 'we are nature too'. While we can record thought - brain activity - on an EEG, from the outset (school studies of biology, then human biology) I've always associated intelligence, mind, thought, emotion, hope, dreams and cognition ... with the intra-/interpersonal domain. If you adopt a socio-technical approach (overview - mindset) then invariably the technosphere will straddle the HUMANISTIC <-> MECHANISTIC axis of Hodges' model:
'So far, I hope to have established that technology plays a prominent, if complicated, role in the Anthropocene. But how does this relate to justice? Technology's relationship to ecological systems holds important implications for determining who belongs to communities of justice. To begin, the Earth system consists of interrelated biological, physical, and mental "worlds" (Kotzé, 2020, p. 80). The mental world, also referred to as the "technosphere," is an autonomous amalgamation of energy, communication, transportation, and financial systems, along with cities, governments, factories, farms, and other built systems and their constituent parts (Haff, 2014, p. 127). Technological artifacts such as AI are therefore part of the technosphere.' p.132.

Hodges' model: axes and care domains

We read of robots-in-ecology and robots-for-ecology. I was reminded of the SF film 'Silent Running' here. The real merit and encouragement for me in Gellers's contribution is section 4, which compares how we can approach more-than-human justice. Total eclipses are supposed to be rare, irrespective of what they might portend; now AI, the climate and life on Earth keep eclipsing each other: thus far partially. 4.1 Multispecies justice - Peter Singer really is ahead of his time (chapter 11 - here!); following socio-ecological justice (4.2), we arrive at planetary justice. Note (in Hodges' model) the 'distance' between what is generally deemed social and where the ecological in reality - physically - resides. In the margin I'd scribbled 'Less a case of "take me to your leader" and more a case of "show me all your cards!"'. The synthesis and closing section on the design of robots specifically for more-than-human justice helps here by including relationality, intersectionality and contexts. 'Design justice' is something the book prompts me to carry forward (p.137).

One final post on HEoAI to follow ...

Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1
https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html

Related previous posts: 'general + AI'

*That early streak of competitiveness was clearly educated out of me.

Saturday, February 15, 2025

'Life's compass' ...

'... I have been thinking a lot about the responsibilities of leadership. How do people in positions of power reach decisions and do they fully consider the consequences of these decisions? In questioning how someone chooses to lead, I'm also asking how that individual navigates the possible courses of action, and what kind of compass is being used to guide them. It has made me think about how any of us choose to navigate our way through life and where we look for a sense of direction. I wonder what we might change about the choices we make if we were more reflective about what serves as a compass in our lives.'

'"The Invention of the Compass" is attributed to an anonymous painter of the late 16th century, although there exists a roughly contemporaneous plate of the same title, and of an almost identical image, engraved by the Dutch artist Jan Collaert the Elder. In the original painting an old man with a long white beard, and dressed in a voluminous red and brown cloak, sits behind a large table in the centre of a room. As well as a canopied bed and another desk spread with books and scientific instruments, there is a model, or vision perhaps, of a carrack sailing ship hanging from the ceiling. There is also a large round compass device that sits heavy on the floor on the left foreground of the canvas. ...'

'It is as if this painting (Caravaggio) intended to serve as a compass, helping to move our steps and actions in the direction of caring for others. This is a powerful idea, and it suggests that all images contain that potential to influence how we live. Where we choose to direct and keep our gaze to some extent can determine how we permit our lives to be guided, if not in a physical sense then toward a value system that can end up having an impact on the way we engage with the wider world.' (My emphasis)

Ack. Enuma Okoro, Life's compass, The art of life, Life&Arts, FTWeekend, 1-2 February 2025. p.2. (Still need to subscribe, marvellous reading!)


Caravaggio, The Seven Works of Mercy


The Invention of the Compass


INDIVIDUAL
|
      INTERPERSONAL    :     SCIENCES               
HUMANISTIC  --------------------------------------  MECHANISTIC      
 SOCIOLOGY  :    POLITICAL 
|
GROUP

moral compass
personal values
my choices

maps - compass
position - navigation
direction - distance
degrees of freedom

the public
the public good
group think


leadership
power
political freedom



Images: 

New Inventions of Modern Times [Nova Reperta], The Invention of the Compass, plate 2
https://collectionapi.metmuseum.org/api/collection/v1/iiif/659663/1403273/main-image

By Caravaggio - Unknown source, Public Domain, https://commons.wikimedia.org/w/index.php?curid=10286196

Previously:

'compass' :: 'map'

Friday, February 14, 2025

Curtains ...

'I'm not sure many rational designers would set out to emulate, or even to echo the interiors in David Lynch's films. These are rooms that haunt us: the chevron carpets, the red velvet curtains, the peeling wallpaper suggesting that our environment is somehow theatrical, temporary, a dream - an expression of our subconscious. ...

"I don't know where it came from, but I love curtains," Lynch - famously reluctant to explain his work - said in 2014. "There is something so incredibly cosmically magical about curtains opening and revealing a new world. It resonates on a deep level with people."'


INDIVIDUAL
|
      INTERPERSONAL    :     SCIENCES               
HUMANISTIC  --------------------------------------  MECHANISTIC      
 SOCIOLOGY  :    POLITICAL 
|
GROUP

IMAGINE a curtain:
What colour is it?
Imagine opening the curtain -

<<== CURTAIN
                      [a room: LxDxH]
- is the wallpaper peeling? ...









Edwin Heathcote, Head spaces, Interiors, House&Home, FTWeekend, 1-2 February 2025. p.4.

Previously:

Wednesday, February 12, 2025

Book review #3: Handbook on the Ethics of AI

I've started several 'gap' lists, including a draft post or two. Seeking to help bridge the theory-practice gap is an original purpose of Hodges' model.

Friedman's chapter 5 is Responding to the (Techno) Responsibility Gap(s). It is almost as if Friedman's title is parameterised: we could insert the domains of Hodges' model (plus, spiritual - safeguarding!) in there. Then there is the magic of languages: for responsibility substitute accountability and do a deep-dive applying Hodges' model in a specific domain. Friedman notes it is not entirely accurate to talk of a single "responsibility gap". Can this provide evidence for the situated potentiality of Hodges' model and its role in fostering situational awareness? Although there is an ongoing roll-back of CSR - corporate social responsibility - I scribbled in the margin:

INDIVIDUAL
|
      INTERPERSONAL    :     SCIENCES               
HUMANISTIC  --------------------------------------  MECHANISTIC      
 SOCIOLOGY  :    POLITICAL 
|
GROUP
user
user
user
corporate

Of course, 'user' still applies in the political domain, as we are all citizens; users of the State (note also the socio-technical utility of keeping the 'user' in sight). There is also personal ethics, and the legal duty of care - as a nurse (in my case) and as a member of the public. Discussion of notions of responsibility in light of AI, especially self-driving vehicles and autonomous machines (weapons), is much needed. Regulatory gaps are recognised, amid current news of debate on the degree of regulation of national and international financial systems. Nyholm's four-fold formulation on responsibility is invaluable, as examples are provided. There is a reference to nine notions of responsibility, and the link to accountability and retribution. I like Friedman's discussion of responses to responsibility gaps as: optimists - technological, human-centred and hybrid approaches; plus pessimists (4.3). The latter includes debate and campaigns to 'stop the killer robots'. Did this chapter have a positive effect in extending thoughts about events in the Middle East? There are two points here I will return to in the final post.

For students, Schwarz's chapter 6 takes readers back to a literally pivotal development in ethical thought and debate: trolleyology, with the addition of algorithms and killer robots. What Searle's Chinese Room argument (noted here, p.276) is to AI, trolley-based dilemmas are to ethics - a point made by Schwarz, referring to the development of subsequent variations. (Following on from #2, Brian Magee's interviews with John Searle on the philosophy of language and Wittgenstein are also well worth a listen.) Schwarz's chapter (and the whole book) could be read in preparation for an undergraduate course: computing, law, policy, philosophy, and ethics, for example. The challenge of determining responsibility and accountability, "the problem of many hands" (p.71), is not limited to AI. This is a global issue in achieving justice for many previously healthy, competent, quite 'ordinary' people.

The question posed above applies "equally"(!) here too: 3. mathematical morality for war. Ever since reading of argumentation through the Open University and other sources, I'm sure this is applicable to Hodges' model (with its 'own' algebra)? I'm no mathematician, but the book provoked excitement rather than anxiety as I read of the Principle of Permissible Harm (PPH) and probabilistic reasoning, with formulae (pp.88-89). There are some important (imho) 'take-away' messages here, but I don't want to spoil things. Nurses need mathematical skills for drug calculations and are assessed on the same. Statistics are used in research, of course, but maths is hardly the primary motivation for entry into the care professions:
'Designing technologies in a way that takes into account broader human-centric values is an important and laudable approach to responding to new and emerging technologies in any arena. It cannot, however, serve as a substitute for moral deliberation and the act of taking moral responsibility for harmful acts, especially when lives are at stake. Reducing this to a mathematical problem ignores everything that cannot be captured in discrete numerical terms or as data points. And in situations where people and their lives become nothing more than data points, non-computational aspects, such as relational or embodied dimensions of human life, are marginalized if not entirely obscured. The real world is not reducible to binary logic, it is rife with contested, contradictory, and clashing values that might all be equally relevant. It is complex in ways that cannot always be neatly captured or mathematically modeled. In other words, ethics cannot be "solved."' p.91.
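The drug-calculation arithmetic mentioned above is, by contrast, something that can be 'solved'. As a minimal sketch (my illustration, not from the book) of the standard dose-volume formula nurses are assessed on - volume to give = (dose required / stock strength) x stock volume:

```python
def dose_volume(required_mg: float, stock_mg: float, stock_ml: float) -> float:
    """Volume (ml) to administer:
    (dose required / stock strength) * stock volume."""
    return (required_mg / stock_mg) * stock_ml

# e.g. 250 mg prescribed, stock ampoule of 500 mg in 2 ml:
print(dose_volume(250, 500, 2))  # 1.0 ml
```

Simple, safety-critical, and checkable - unlike the ethical questions the quotation is concerned with.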
More to follow ...

Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1

https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html


Related previous posts: 'general + AI'

Tuesday, February 11, 2025

Project: Partners against Stigma and Mortality in Epilepsy (PASME) Cameroon

Boyo Association for Rural Development (BARUDEV - Cameroon) has launched a project on epilepsy called Partners against Stigma and Mortality in Epilepsy (PASME).

We are calling for partnerships and support for our project on epilepsy, which will focus on mental health and psychosocial support, treatment, knowledge and skills exchange, raising awareness, and rights protection amongst patients affected and/or bereaved by epilepsy, and health care providers. It will also aim to reduce seizures and their consequences, particularly in children and adolescents living with epilepsy and HIV/AIDS.
  • ADVOCACY AND AWARENESS. Promote awareness and support initiatives that prevent epilepsy. Prioritize epilepsy at national and international levels, while lobbying and advocating for partners and governments to include epilepsy in their health programs and policies.
  • TREATMENT AND SUPPORT. Provide access to treatment and support initiatives that improve access to effective treatment, including medications and their therapies.
  • PROVIDE BEST-PRACTICE MODELS. Develop and share models for the effective management of epilepsy in diverse settings, including conflicts and disasters where people are forcibly displaced.
  • BUILD CAPACITIES THROUGH TRAINING. Advance training and information sharing for healthcare givers, CHWs and family care givers involved in epilepsy care.
  • PROMOTE HUMAN RIGHTS AND DIGNITY. Advocate for the accessibility of services, protection of human rights and preservation of dignity for all with epilepsy, including children and adolescents living with epilepsy and HIV/AIDS.
  • EMPOWERMENT AND RIGHTS. Lobby and advocate for the rights of people with epilepsy through their associations, and enable them to participate in decision-making processes.
  • RESEARCH AND COLLABORATION. Encourage and support research on epilepsy, including its causes, prevention, treatment and social impact. Strengthen patient associations, developing and supporting them through mental health and psychosocial support projects, and ensuring and protecting the well-being of patients and their families.
BARUDEV is currently building a website (which I will link to when there is news - PJ).
Please contact me if you need further information.

HIFA (my source) profile: Chiabi Bernard Ful is Director of Boyo Association for Rural Development (BARUDEV - Cameroon). This is a local NGO based in the Boyo district of North Western Cameroon. Our activities are to empower women, protect the sexual and reproductive health of women and girls, and protect the rights of children. We have been training community health workers to follow up patients, pregnant women and sick children, and refer them to the hospital.
barudev AT yahoo.co.uk

Monday, February 10, 2025

Book review: Handbook on the Ethics of AI #2

Handbook on the Ethics of AI

Well, I am sold. So it is encouraging again to read 'reflect -ion' on the first page and throughout chapter 1:

'... why the very idea of AI gives rise to (or should give rise to) ethical reflection.' (p.21), and I'm sure the book as a whole.

Most readers may pass this by, but to me this matters. Ethics scholars, however, will see how important and conjoint reflection is to the deliberation and argumentation of ethics. Especially given, as noted in post #1, the five-part division of the text: Foundations and Context; Responsibilities; Rights; Politics and Power; and Thinking Otherwise.

Tables break up and inform the text. I like the way that for many tables the source is: Author's own elaboration.
Given the subject matter, yes, the authors have the literature, but they are also formulating their chosen themes against present and near-future socio-political scenarios and much more. So reflection should be expected. With Hodges' model I'm often asked why those concepts are placed in that particular domain. Perhaps that is a question for general artificial intelligence too? Subjectivity, and its effective communication, is a key challenge.

If I say chapter 1 is a great primer on What Is This Thing Called the Ethics of AI and What Calls for It? I'd sell the book short: the whole text fulfils this purpose. The definitions of, and responses to, What is A.I.? are very helpful, taking into account ancient to more recent history. I've a book in a box by David Chalmers (sorry, Prof.) - one day - which would help here in discussing intelligence and consciousness.

I'm primed to pick up on 'gaps' (Theory-Practice) and in human experience they are inevitably legion. The responsibility gap (and meaningfulness!) c/o Sven Nyholm is a place to return to, to debate why A.I. raises such a broad range of ethical issues. Patiency is never far away. Here, the patient is stark: in moral agents and moral patients (pp.20 and 89). This blog is littered with posts on Drupal, which to date I have never mastered. I remember sitting in DrupalCon presentations, in the midst of very skilled professional coders. A member of the community stood at the podium recounting their lived experience of impostor syndrome. In November, with a presentation of my own, I had my own encounter. Well into reading this book, it really did help. The conclusion of chapter 1 contrasts the frequent need for a big-picture overview with reflection on more specific issues. That's quite fitting for me.

I am biased in the encouraging kernels I find: as seek them I do. Yes, as chapter 2, AI Ethics before Frankenstein, begins: 'there remains work to be done in charting the long-range conceptual development of AI in the history of political thought.' (p.27, my emphasis). And much more, I hope. Chapter 2 combines literature and myth, and I enjoyed the discussion on techne, Prometheus, Hobbes, the interdisciplinary bridge of techno-politics, and articulating the mechanistic in our lives. Having focussed recently on thresholds and threshold concepts while revising two papers, I noted Hunt writes that 'Frankenstein is a "threshold" text for "modern political science fiction"' (p.31).

This resonated - especially 'for its prediction that the nascent Enlightenment-era sciences of chemistry, anatomy, and electricity could be used to artificially make a human being' (p.31). The exploration of bad education and bad governance through Shelley and social history is other-worldly in itself. Hunt's conclusion, which includes 'Wollstonecraft was the pivotal figure in the process of refracting ideas of AI from Hobbes to Shelley, ...', left me wondering about the refactoring of code.

Chapter 3, Smith's Faith, Tech, and Ethics of AI, brings more reflect -ion, with the bonus for me of Descartes. On faith, the human situation is also repeatedly stressed. Delving into creation too, there is detail on forms of theology - Augustine and Bede, for example - and on philosophy - Bacon, Hume, Kant - and their role in diminishing the influence of religious thought and epistemology (p.39). How long a push was that? Surely it is ongoing - in some quarters? Medieval thought is discussed, and I recall this broad period also being regarded as misunderstood [Medieval Philosophy - Bryan Magee & Anthony Kenny (1987)].

On a personal level, I've always placed belief in the intra- interpersonal domain of Hodges' model. In psychosis, depression, anxiety, phobias - in short, mental health - beliefs can be disrupted, and influenced of course when all is well. Religion I have framed at the individual and socio-political levels. Clinically, it is the individual patient's beliefs, if they follow a particular religion, that we as health personnel also need to be cognizant of. Machine, or human; object or being, as Smith asks:

           SELF / INDIVIDUAL  -  OBJECT / THING
                         |
       INTERPERSONAL     :     SCIENCES
     the human -         :     the machine -
     can become machine  :     can become human
HUMANISTIC  --------------------------------------  MECHANISTIC
        SOCIOLOGY        :     POLITICAL
                  socio-political
                         |
                       GROUP
There is depth of discussion too: in AI and Power, and a Christian response to AI ethics. This handbook is not just for quick reference. There is indeed redemption (p.43) to be found:

'It is easier to cover a blemish than examine why it resulted in the first place: genetics, diet, stress, or lack of self-care.'

I argue for Hodges' model as a tool to identify, (re)present, and relate the determinants of health: all of them. The points and paragraphs on theology, mind, body and self are well worth revisiting.
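To put that claim in (hypothetical) code terms, the model's four care domains can be treated as a simple structure for filing determinants of health. A minimal sketch, assuming nothing beyond the domain names; the example items and their placements are mine, purely illustrative:

```python
# Hodges' model's four care domains as a simple mapping.
DOMAINS = ("interpersonal", "sciences", "sociology", "political")

def new_quad() -> dict:
    """An empty quadrant: one list of concepts per care domain."""
    return {d: [] for d in DOMAINS}

care_plan = new_quad()
care_plan["interpersonal"] += ["beliefs", "anxiety"]
care_plan["sciences"] += ["diagnosis", "medication"]
care_plan["sociology"] += ["family support", "stigma"]
care_plan["political"] += ["access to services", "regulation"]

for domain, concepts in care_plan.items():
    print(f"{domain}: {', '.join(concepts)}")
```

Of course, as noted above, deciding *which* domain a concept belongs in is exactly where subjectivity and reflection come in; no data structure settles that.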

Johnson's chapter 4, What are Responsible AI researchers really arguing about?, provides support for my ongoing belief in the value of sociotechnical theories and approaches - and another book, on enactivism. Given the ethics of the global south (and technological/electronic colonialism), more could perhaps be made of LMICs, but the point is made in Table 4.1 (Examples of functionalist evaluations of AI models, p.52). Constructivism (2.2) is a rich seam for me. Table 4.2 touches on medical diagnosis and the socio-technical. Johnson notes that in seeking answers to AI ethics problems, scientists see constructivist approaches and '... constructivism as a metaethic, holding a rich space for more research into pluralistic AI-Alignment' (p.53). And the potential for much more. This is suggested (for me) in the reference to the need for a holistic view of an AI model's genesis, and 'two sides of one coin' - dichotomy, polarity, oppositions, binary reductions ...?

Johnson's conclusion is an indirect thumbs up (cue Arnie style - of course) to Hodges' model as a project. Conceptual frameworks can indeed:
'... offer unique perspectives on the methods commonly employed and to understand and mitigate the risks and dangers of AI.' (p.62).
If ethics gets 'technical' then the book's full title is well-earned. Two case studies demonstrate functionalist and constructivist debates in responsible AI. I don't remember Capt. Kirk et al. stating to a malign alien 'intelligence': "The trophy doesn't fit in the suitcase because it's too large (small)." (p.58). The writers put other solutions on the characters' lips. Spock, however, would find the discussion here on 'Artificial General Intelligence, 4E Cognition and Enactivism' - "Fascinating!"

Much more to follow ...

Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1
https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html


Related previous posts: 'general + AI'

Sunday, February 09, 2025

iConference 2025 & new Community Informatics Research Network list

Dear CI colleagues,

I am writing with two updates.

First, this listserv* has been moved to the iSchool at the University of Illinois Urbana-Champaign: https://lists.ischool.illinois.edu/lists/info/ciresearchers. To post a new message to this mailing list, please use the following email address: ciresearchers AT ischool.illinois.edu

I'd like to thank the Metropolitan New York Library Council for hosting this list during the past several years. We moved the list to the iSchool at Illinois where it will be supported in the years ahead.

Second, the new iSchools Community Informatics group will meet during the on-site portion of the 2025 iConference at Indiana University on Wednesday 19 March 2 PM-3:30 PM local time in Bloomington, Indiana. To learn more about the iConference, please visit the conference website: https://www.ischools.org/iconference



Dr. Khalid Hossein has generously agreed to help lead the iSchools Community Informatics group *virtual* session on Tuesday 11 March 9:30 PM-11:00 PM local time in Bloomington, Indiana. Khalid and I are hoping to provide opportunities for everyone to engage virtually in both sessions of the conference.

Please stay tuned for more information about the conference.

Thank you,
Colin

Colin Rhinesmith, Ph.D.
Pronouns: he/him
Visiting Associate Professor
Director, Digital Equity Action Research (DEAR) Lab
School of Information Sciences
University of Illinois Urbana-Champaign

 *My source, with my editing. PJ