If there is an overall theme to the book it is - rather inevitably - anthropomorphism. Some argue it is consequential in nature due to the risks we are running. Sandry's chapter 10, Anthropomorphism and its Discontents, begins by highlighting the dualities, dichotomies, and oppositions that instantly come into effect in this emerging theoretical, practical and policy field. If a term like 'anthropomorphic' can be 'loaded', this one carries extra baggage: historical, religious, natural, aesthetic, philosophical, physical, existential. The dual issue is considered of making a machine that looks human, versus machines that could deceive humans (used remotely today and in situ in the future?). In our interactions with AI tech, I found intention (and attention) of specific interest. Sandry seeks definitions, starting with dictionaries, and the discussion is helpful across the arts too. The reader is left well briefed, technically too: intrinsic and extrinsic forms, the role of the intentional stance, three factors. ...
Health is not a primary focus of the book. The index does not list health, medicine, or nursing, at least not where they might be expected. The index is comprehensive, but I wonder if it could be improved. Care is suggested through social robotics (p.147). Design figures again, anthropomorphically of course (p.147). Specific attention is given to ethics and Taking Care With Language (sections 6-7). I scribbled (again!) about care for tech - the material, energy, and production 'costs' in an ecological sense. SUVs annoy me (sorry!): are they all necessary? On language, I thought back to McDermott, D. (1985) 'Artificial Intelligence Meets Natural Stupidity', in Haugeland, J. (ed.) Mind Design, MIT Press, London, pp.144-145:
Balance in the subjectivity and objectivity of stances, and the resulting content and conclusions, can be difficult to achieve and represent. The latter sections respond to frustration with the term anthropomorphism. I found myself back in a couple's lounge, as a community mental health nurse, acutely aware of the role of proxies in dementia care, as Sandry described Paula Sweeney's 'fictional dualism' (8, 150). Hodges' model fits well here too, as regards anxiety. To socialbots, I added carebots. There seems potential in sociomorphing.
As noted previously, reflective and relational points litter the text. In Jecker's chapter 11,
A Relational Approach to Moral Standing for Robots and AI, this is more explicit. Jecker refers to care of others - animals too. Coming from computer science and seeking to retain a socio-technical perspective, I've seen potential in capability and maturity frameworks. Section 2 provides some discussion of the former. One of the first words I looked up in the index was
isomorphic. It wasn't listed, but I found reference to it on page 157: '... a community of robots psychologically isomorphic to human beings that share our psychology ...'. Maths is a focus here, even though I must try to utilise AI to aid my learning and understanding. This is - indeed must be - an outcome of reading HEoAI.
The section on (self-)consciousness is engaging and not limited (again) to machine intelligence.
Subjectivity arises again. I wrote a note re. the precautionary principle, my 'prompt' being the
ethical principle. A gift was discovered in 3. A CONCEPTUAL REFLECTION, and within 3.2 Relational Ethics, preceded by the potential of Kantian, utilitarian and virtue ethics. While not wishing to virtue signal, I've long speculated on how other cultures could inform nursing theory and models of care. Jecker incorporates the African philosophy of ubuntu in relational ethics. Students would enjoy this, especially as Jecker (Source: Author) provides tables laying out the ethical approaches and robot and AI capabilities. This can also encompass older adults and care contexts with social robots. Nussbaum's capabilities approach, as applied to human development, is also adopted here - another useful reminder:
(And, once again, I recall Nussbaum's talk on Aristotle.) In post #4 I wrote of the rubbish scene in the film A.I. Artificial Intelligence, but it's here (p.166) that I wrote the note. There is so much I'm skimming over - believe it or not.
The conclusion, in mentioning a hybrid future consisting of both humanistic and mechanistic agents, appears to find an additional theoretical and practical ally in Hodges' model.
Chapter 12 by Navas, AI Ethics, Aesthetics, Art and Artistry, visits the history and philosophy of this subject too, especially from the 1700s. With Žižek and Deleuze there is much preparatory reading for would-be undergrads - and general readers keen to have an awareness of contemporary issues. There is quite a triad here - disassembled - across several sections. I have quoted from the volume many times, but p.179 concerns empathy:
'Empathy challenges the ongoing optimization of technology, because it takes time to exercise it. A person needs time to think about whatever issue, situation, thing, or person they may empathize with. In other words, empathy is essential for humans to understand and figure their relation to others and their surroundings. Empathy, if practiced reflexively, can lead to critical thinking, which may not lead to clear results but the activity may and often does end in "wasted" time if framed under the drive for efficiency, which is clearly something AI is designed to achieve. And lastly, empathy, because it has been foundational to art, is also part of art's long-term resistance against capitalism's exponential dependence on speed of production. At the core of AI ethics, then, we find speed of production and consumption coming in conflict with human existence itself. Humans are proving to be inefficient actors in the very system they built for their own benefit, which obsessively demands faster cultural activity from people, which (to be blunt) translates to an unapologetic and incessant desire for profit.' (pp.179-180)
Section 6, on creativity AND speed, is fascinating, especially as human-machine (brain) interfaces develop apace. Concerned as I am with what is a metacognitive tool, section 8, Metacreativity, forms the conclusion of chapter 12. My notes refer to generative coding, which has come up in various webinars.
[Image: Silent Running (film)]
Briefly, chapter 13 covers
AI and the Environment. It is amazing (or not) how in a few sentences your mind can be changed: from "Really!" upon reading 'species culling', to this being explained via the crown-of-thorns starfish (hence COTSbot) and its toll on coral in Queensland. There are robots-for-ecology. I smiled returning to Silent Running's Dewey, Huey, and Louie becoming reality. A fascinating point: what is collective must be recognised in terms of causation. Appropriately, PEAS is an acronym:
probabilistic weather event attribution studies are a reality in conjunction with remote sensing and tracking.
Drones are also developing apace, as is applied robotics - tree-climbing, for instance. Less reassuring (adding to nature's precarity?) are artificial insects to undertake pollination; more positive is reducing the impact of chemical and toxic spills. Simulation for training is well established in medicine and nursing; section 2.3 addresses this, where PEAS form super-ensembles of data (I like that). PEAS can also have a role in determining an evidence base and demonstrating, it is hoped, provenance for that evidence in the movement of populations, for example, climate refugees. In conflicts, human rights and justice,
forensic architecture is of course well established:
Forensic Architecture (Care Forensics?)
The summative closing of chapter 13 is a helpful approach which I may well try to emulate.
Socio-technical approaches are found in chapter 14
Uses and Abuses of AI Ethics by Frank & Klincewicz. Understandably, boundaries play a large role, as you would expect in deliberating value and values, moral patients and agents. The Collingridge dilemma is discussed: the need to put controls in place for a technology while it is still in development, otherwise control may be lost (p.213). Diversity, a story of the moment, closes out this chapter. There is a 'nice' continuity across chapters to
15
The (Un)bearable Whiteness of AI Ethics by Syed Mustafa Ali et al. (the first note highlights that the format is a dialogue). The (colonial) politics of North-South are duly noted, as is Africa (section 4). Section 8 points to
technological (and health) colonialism. I take as a positive that the 'hyphen' also has a place (section 10).
[Image: The Savage Mind (first edition cover)]
Again related, as per the book's structured parts, chapter 16,
Ethics beyond Ethics: AI, Power, and Colonialism by Kim, prompts the reader to revisit the concepts of the 'other' and alterity, even if 'understood'. I pencilled 'Savage Mind' here. Machines are posited as the colonial other. A familiar argument will recur, as in response to humankind's going back to the Moon (to stay) and on to Mars: we should sort Earth out first. So too for inclusion and the machines. What about the people who are excluded and disenfranchised, increasingly so? I wondered (previously) if (even) more could be made of transparency. Noting the word subaltern, it appears this has suddenly been attributed to Ukraine. On disability and ableism, I was reminded also of radio history and
'Does He Take Sugar?'
The statement 'Machines are perceived as distant - temporally, spatially, or socially - and different from human culture' (p.234) is so true.
                       INDIVIDUAL
                           |
        INTERPERSONAL      |      SCIENCES
HUMANISTIC ----------------+---------------- MECHANISTIC
          SOCIOLOGY        |      POLITICAL
                           |
                         GROUP
cognitive - conceptual 'spaces' | distance: time, space | human cultures - societies
'Binary opposition' and the need to think outside of it is acknowledged. And, as if (perhaps) to stress both distance and proximity, I wrote 'Möbius' in the margin (p.238). Hutchings (2015) sounds a valuable reference: 'Ethical Encounters - Encountering Ethics'. The books I've read contribute evidence to revalidate my nurse registration. I realise I've sold myself short in merely listing the book's titles. Although not discussed here, the remaining chapters are excellent critical reading at a time when diversity, equality and inclusion policies are being rolled back and undone. I will highlight these:
- 17 Disabling AI: Biases and Values Embedded in Artificial Intelligence (quoted in 'Do you fit the description?')
- 18 The AI Imaginary: AI, Ethics and Communication
- 19 Feminist Ethics and AI: A Subfield of Feminist Philosophy of Technology
- 20 Buddhism and the Ethics of Artificial Intelligence
- 21 Queering the Ethics of AI
In chapter 16, 'Ethics beyond Ethics...', just before the conclusion, there is a sentence I will carry forward -
'Forming an identity requires that "I identify something or someone beyond me" - with one or more categories: persons, non-human others, acts, ideals, values, or social systems' (p.243).
- and the points following, and end there.
Many thanks to David J. Gunkel (Editor), the many contributors, and Edward Elgar Publishing Ltd for my copy. I have greatly enjoyed and learned much from reading this book.
Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1
https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html
Images:
Silent Running
https://cdna.artstation.com/p/assets/images/images/016/839/904/large/david-eagan-screenshot001.jpg?1553673214
The Savage Mind
https://en.wikipedia.org/wiki/File:The_Savage_Mind_(first_edition).jpg
*That early streak of competitiveness was clearly educated out of me.