AI, Beliefs, Helplessness, Misinformation ...
Viewing both this debate and the SDGs as information spaces, alongside mental health/illness, can provide some insights.
'Learned helplessness' is an established phenomenon in psychological studies and history.
Adam Grant, writing in 'How to unlearn helplessness' (Life & Arts, FT Weekend, 29-30 March 2025, p.2):
'Many people believe that in a world of echo chambers and misinformation, it's not possible to talk anyone out of their convictions. They're wrong. Even a bot can do it.'

Also helpful, as discussed, is to consider AI as a new and emerging 'information space': a literal frontier, a new territory with unknowns that businesses wish to conquer and that the public must learn to trust, even as users. What knowledge and understanding is needed of what is happening in the 'box'?

Grant continues:

'In recent experiments psychologists recruited thousands of Americans who believed in unfounded conspiracy theories, ranging from the moon landing being faked to 9/11 being an inside job. After the participants described their views, the psychologists randomly assigned some of them to discuss these views with an AI chatbot that was prompted to refute them.

After less than 10 minutes of conversation with a ChatGPT-based LLM, more than a quarter of participants felt uncertain about their views - and those doubts persisted two months later. Short exchanges were even enough to move the opinions of strong believers.

Why? It's not just that ChatGPT has access to infinite knowledge. It turns out that AI chatbots are better listeners than the average human. The researchers found that chatbots were persuasive because they presented information that directly challenged the reasons behind people's beliefs.'
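As a toy illustration (not the study's actual materials), the shape of such a debunking dialogue can be sketched in code. The function name, system prompt and message structure below are all hypothetical assumptions; a real experiment would send these messages to an LLM such as ChatGPT:

```python
# Hypothetical sketch of the refutation-dialogue setup described above.
# The prompt wording is an assumption, not the researchers' actual text;
# the key idea is engaging with the participant's OWN stated reasons.

def build_refutation_dialogue(belief: str, stated_reasons: list[str]) -> list[dict]:
    """Assemble a chat transcript priming a model to refute a belief
    by addressing the participant's specific reasons for holding it."""
    system_prompt = (
        "You are a careful, respectful interlocutor. The participant holds "
        "the belief below. Address their specific reasons with evidence, "
        "rather than dismissing them."
    )
    user_turn = f"Belief: {belief}\nMy reasons: " + "; ".join(stated_reasons)
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_turn},
    ]

dialogue = build_refutation_dialogue(
    "The moon landing was faked",
    ["The flag appears to wave", "No stars in the photos"],
)
print(dialogue[1]["content"])
```

The design point mirrors the finding quoted above: persuasion came from challenging the reasons behind a belief, not merely asserting the opposite conclusion.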
Once again, within Hodges' model we can potentially draw in, and ask questions of, all the literacies:
3Rs, information, IT, media, culture, spiritual, financial, health, sciences, civic - national, international...
This also raises the question: to what extent are existing imbalances in parity of esteem - across the mind-body dichotomy - perpetuated, or even increased?
It is not surprising that there is an imbalance in the distribution of datasets and LLMs on Hugging Face:
https://huggingface.co/docs/hub/en/datasets
e.g. INTRA-/INTERPERSONAL [MIND]:
- BioBERT 1.1
- UCI ML Drug Review dataset (subset)

SCIENCES [BODY]:
- eScience kidney factomics (PubMed titles)
- BioBERT 1.1
- SAVSNET sample (VetCN) - important in terms of zoonotic diseases and planetary health*

There are image datasets too, and there is overlap between these categories.
Ack. *As mentioned in late 2024, these datasets were discussed at: https://www.bcs-sgai.org/health2024/
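The imbalance noted above can be made concrete with a toy tally. The domain assignments simply mirror the listing above; the counts are illustrative only, not a survey of the Hugging Face hub:

```python
# Toy illustration of the dataset imbalance across Hodges' model domains.
# The assignments follow the listing in the text; this is a sketch only.

datasets_by_domain = {
    "INTRA-/INTERPERSONAL [MIND]": [
        "BioBERT 1.1",
        "UCI ML Drug Review dataset (subset)",
    ],
    "SCIENCES [BODY]": [
        "eScience kidney factomics (PubMed titles)",
        "BioBERT 1.1",
        "SAVSNET sample (VetCN)",
    ],
}

for domain, names in datasets_by_domain.items():
    print(f"{domain}: {len(names)} dataset(s)")

# The overlap mentioned in the text: BioBERT 1.1 sits in both domains.
shared = set(datasets_by_domain["INTRA-/INTERPERSONAL [MIND]"]) & set(
    datasets_by_domain["SCIENCES [BODY]"]
)
print("Overlap:", sorted(shared))
```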

orcid.org/0000-0002-0192-8965
