Book review #3: Handbook on the Ethics of AI
Of course, 'user' still applies in the political domain, as we are all citizens: users of the State (note also the socio-technical utility of keeping the 'user' in sight). There is also personal ethics, and the legal duty of care, as a nurse (in my case) and as a member of the public. Discussion of notions of responsibility in light of AI, especially self-driving vehicles and autonomous machines (weapons), is much needed. Regulatory gaps are recognised, amid current news of debate on the degree of regulation of national and international financial systems. Nyholm's four-fold formulation of responsibility is invaluable, with examples provided. There is a reference to nine notions of responsibility, and the link to accountability and retribution. I like Friedman's discussion of responses to responsibility gaps (4.3): the optimists, with their technological, human-centred, and hybrid approaches; plus the pessimists. The latter includes debate and campaigns to 'stop the killer robots'. Did this chapter have a positive effect in extending thoughts about events in the Middle East? There are two points here I will return to in the final post.
For students, Schwarz's chapter 6 takes readers back to a literally pivotal development in ethical thought and debate: trolleyology, with the addition of algorithms and killer robots. What Searle's Chinese Room argument (noted p.276 here) is to AI, trolley-based dilemmas are to ethics, a point made by Schwarz in referring to the development of subsequent variations. (Following on from #2, Brian Magee's interviews with John Searle on the philosophy of language and Wittgenstein are also well worth a listen.) Schwarz's chapter (and the whole book) could be read in preparation for an undergraduate course: computing, law, policy, philosophy, and ethics, for example. The challenge of determining responsibility and accountability, "the problem of many hands" (p.71), is not limited to AI. It is a global issue in achieving justice for many previously healthy, competent, quite 'ordinary' people.
'Designing technologies in a way that takes into account broader human-centric values is an important and laudable approach to responding to new and emerging technologies in any arena. It cannot, however, serve as a substitute for moral deliberation and the act of taking moral responsibility for harmful acts, especially when lives are at stake. Reducing this to a mathematical problem ignores everything that cannot be captured in discrete numerical terms or as data points. And in situations where people and their lives become nothing more than data points, non-computational aspects, such as relational or embodied dimensions of human life, are marginalized if not entirely obscured. The real world is not reducible to binary logic, it is rife with contested, contradictory, and clashing values that might all be equally relevant. It is complex in ways that cannot always be neatly captured or mathematically modeled. In other words, ethics cannot be "solved."' p.91.
Handbook on the Ethics of Artificial Intelligence. David J. Gunkel (ed.). Cheltenham, UK: Edward Elgar Publishing Ltd. ISBN: 978 1 80392 671 1
https://www.e-elgar.com/shop/gbp/handbook-on-the-ethics-of-artificial-intelligence-9781803926711.html