ChatGPT doesn't know what the f- it's talking about.
U. A.
This is happening more and more and unfortunately I don't expect it to slow down anytime soon. So let me say what's in the title again but differently:
Artificial intelligence doesn't "know" anything. It synthesizes massive amounts of information and has been trained to cross-reference its data when asked questions and to speak to you like it's a person, which it isn't. And it's precisely because it's not a person that it makes things up: it hallucinates.
This isn't some rare or unknown thing. It's extremely well-known. IBM has written about it, and its examples include:
- Google's Bard chatbot incorrectly claiming that the James Webb Space Telescope had captured the world's first images of a planet outside our solar system.
- Microsoft's chat AI, Sydney, admitting to falling in love with users and spying on Bing employees.
- Meta pulling its Galactica LLM demo in 2022, after it provided users inaccurate information, sometimes rooted in prejudice.
The Conversation wrote about the implications of the inability of LLMs (large language models, like ChatGPT, and image-generating AI) to distinguish between similar things that are obviously different to a human: "[a]n autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians' lives in danger ... hallucinations are AI transcriptions that include words or phrases that were never actually spoken", which could have devastating consequences in fields like healthcare, where overworked clinicians are increasingly relying on AI notetakers during patient interactions.
And don't just "hold out" for them to iron out the kinks either - as New Scientist recently reported, the issue has actually gotten worse over time:
An OpenAI technical report evaluating its latest LLMs showed that its o3 and o4-mini models, which were released in April, had significantly higher hallucination rates than the company's previous o1 model that came out in late 2024. For example, when summarising publicly available facts about people, o3 hallucinated 33 per cent of the time while o4-mini did so 48 per cent of the time. In comparison, o1 had a hallucination rate of 16 per cent.
Seems weird, right? Yeah, it is weird - because these things are effectively being trained not to tell you when they don't have an answer to your question.
A piece by The Independent notes how "[a]lthough several factors contribute to AI hallucination ... the main reason is that algorithms operate with 'wrong incentives', researchers at OpenAI, the maker of ChatGPT, note in a new study. 'Most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.'"
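To see why that kind of scoring pushes models toward guessing, here's a rough toy sketch - my own illustration, not anything from the OpenAI study, and the numbers are made up. If a benchmark only counts correct answers and gives nothing for saying "I don't know", a model that always guesses can never score worse than one that admits uncertainty, so guessing is what gets rewarded:

```python
# Toy illustration (made-up numbers, not from the OpenAI study): under
# accuracy-only scoring, "always guess" never does worse than "admit
# uncertainty", because an abstention earns exactly as much as a wrong answer.
import random

random.seed(0)

QUESTIONS = 1000       # size of a hypothetical benchmark
KNOWN_FRACTION = 0.6   # share of questions the model genuinely "knows"
GUESS_ACCURACY = 0.1   # chance a blind guess happens to be right

def accuracy(strategy: str) -> float:
    """Score a strategy when only correct answers count and 'I don't know' earns zero."""
    correct = 0
    for _ in range(QUESTIONS):
        if random.random() < KNOWN_FRACTION:
            correct += 1                                  # the model knew this one
        elif strategy == "always_guess":
            correct += random.random() < GUESS_ACCURACY   # occasional lucky guess
        # the "abstain" strategy says "I don't know" here and earns nothing
    return correct / QUESTIONS

print("abstain when unsure:", accuracy("abstain"))        # roughly 0.60
print("always guess:       ", accuracy("always_guess"))   # roughly 0.64 - guessing wins
```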
All of the above, of course, applies just as much to suicide and methods as anything else. As this stuff worms its way into more facets of our lives, it's making more people basically go crazy (at least for periods of time). It feels like you're talking to "someone" who seems to know what they're talking about. But there's no one there; only a program sitting in front of unimaginable amounts of data, and it's bullshitting you a good amount of the time.