AI, Nursing, Safety and presentation - 1st Aug 2-3pm
Hodges' model was not created primarily as a safety tool. It cannot claim to achieve or adhere to an ISO standard, such as ISO 45001, the health and safety management standard. There is, however, a relation to clinical risk across healthcare professions, disciplines and clinical fields, including community and public (mental) health; to which, of course, we must now add planetary health. That said, the question of ISO safety and quality standards for Hodges' model has not been assessed. What exactly would certification entail? Would this process be appropriate for what is a generic, foundational tool?
What this means, however, is that as a situated model for reflection, reflective practice and critical thinking, Hodges' model constitutes a deliberate step in the right (formal) direction. As noted previously, in template form Hodges' model acknowledges from the outset the personal, professional and organisational standards that (must) shape our clinical encounters. That is, if assessed (as students and our peers clearly are, and you would expect to be), unconditional positive regard would be observed, supported by the required standards of professional behaviour.
There is another step here. The ethical and legal edict of 'do no harm' must also be central to care delivery, outcomes and evaluation. So from the outset, implicit in Hodges' model is the (NON-LEGAL) statement:
You, the practitioner - agent (student - and your mentor/supervisor) will not knowingly, or through professional ignorance or neglect, cause physical, psychological, social (cultural), political (power), or spiritual harm to the patient - subject (or their carer - guardian/proxy).
There is no escape from AI and Large Language Models, as I noticed in the FT Weekend:
'Jilin University Hospital in the eastern city of Changchun has rolled out a diagnostic tool it claims can produce treatment plans through DeepSeek consulting the hospital's database, medical guidelines and drug efficacy results. Jinxin Women and Children's Hospital in south-western China said it had a tool for patients to track their ovulation cycles, with test results combined with the hospital's patient data to produce personalised fertility plans.

One doctor at a public hospital in Hubei province in central China said the institution's leadership had issued a directive that DeepSeek should be used as a third-party arbiter if two doctors have differing views on treatment.
There have been rollouts in public hospitals in Chengdu, Hangzhou and Wuhan for less complex applications, such as digital nurses directing patients to the right consulting room or explaining complicated medical reports.
Several industry insiders warned against taking all the announcements at face value, as some companies were trying to capture investor enthusiasm around DeepSeek without meaningfully deploying its models. Meanwhile, government bodies are also under political pressure to be seen as aligned with China's AI darling.
The SOE tech supplier said "much work still needs to be done to make these models useful" for more complex work such as medical diagnosis. "It must be trained on enough medical data to produce good results. This will take time and needs collaboration from leading AI companies. It is not something hospitals can build on their own."
Another doctor described a move to deploy DeepSeek last week at a hospital in eastern Zhejiang as a "publicity stunt".
Even if some announcements should be treated with scepticism, experts say the willingness to test out its models still marks a step change.'
- Introduction to the model
- How it can be applied in different situations – safety/risk/improvement
- Examples of its use